Patent application number | Description | Published |
20080278607 | SCENE-BASED NON-UNIFORMITY CORRECTION METHOD USING LOCAL CONSTANT STATISTICS - A scene-based non-uniformity correction method employing local constant statistics for eliminating fixed pattern noise in a video comprising a plurality of images is disclosed, comprising the steps of: providing an initial gain image, an initial offset image, a predetermined pyramid level, and a filter of a predetermined level; setting one of the plurality of input images to the current image; calculating a temporary true scene image for the current image based on the initial gain image and the initial offset image; accumulating a temporal mean image and a temporal standard deviation image based on the calculated temporary true scene image; setting another of the plurality of images to the current image and repeating the setting, calculating, and accumulating steps until substantially all of the images of the plurality of images have been processed; then calculating a Gaussian mean image based on the accumulated temporal mean image and calculating a Gaussian gain image based on the accumulated temporal standard deviation image; spectrum shaping the Gaussian mean image and the Gaussian gain image based on the predetermined pyramid level and the filter of the predetermined level; multiplying the spectrum-shaped Gaussian gain image by the initial gain image to obtain a final gain image; and multiplying the spectrum-shaped Gaussian mean image by the initial gain image and adding the initial offset image to obtain a final offset image. | 11-13-2008 |
20090088634 | TOOL TRACKING SYSTEMS AND METHODS FOR IMAGE GUIDED SURGERY - In one embodiment of the invention, a tool tracking system is disclosed including a computer usable medium having computer readable program code to receive images of video frames from at least one camera and to perform image matching of a robotic instrument to determine video pose information of the robotic instrument within the images. The tool tracking system further includes computer readable program code to provide a state-space model of a sequence of states of corrected kinematics information for accurate pose information of the robotic instrument. The state-space model receives raw kinematics information of mechanical pose information and adaptively fuses the mechanical pose information and the video pose information together to generate the sequence of states of the corrected kinematics information for the robotic instrument. Additionally disclosed are methods for image guided surgery. | 04-02-2009 |
20090088773 | METHODS OF LOCATING AND TRACKING ROBOTIC INSTRUMENTS IN ROBOTIC SURGICAL SYSTEMS - In one embodiment of the invention, a method is disclosed to locate a robotic instrument in the field of view of a camera. The method includes capturing sequential images in a field of view of a camera. The sequential images are correlated between successive views. The method further includes receiving a kinematic datum to provide an approximate location of the robotic instrument and then analyzing the sequential images in response to the approximate location of the robotic instrument. An additional method for robotic systems is disclosed. Further disclosed is a method for indicating tool entrance into the field of view of a camera. | 04-02-2009 |
20090088897 | METHODS AND SYSTEMS FOR ROBOTIC INSTRUMENT TOOL TRACKING - In one embodiment of the invention, a method for a robotic system is disclosed to track one or more robotic instruments. The method includes generating kinematics information for the robotic instrument within a field of view of a camera; capturing image information in the field of view of the camera; and adaptively fusing the kinematics information and the image information together to determine pose information of the robotic instrument. Additionally disclosed is a robotic medical system with a tool tracking sub-system. The tool tracking sub-system receives raw kinematics information and video image information of the robotic instrument to generate corrected kinematics information for the robotic instrument by adaptively fusing the raw kinematics information and the video image information together. | 04-02-2009 |
20090192524 | SYNTHETIC REPRESENTATION OF A SURGICAL ROBOT - A synthetic representation of a robot tool for display on a user interface of a robotic system. The synthetic representation may be used to show the position of a view volume of an image capture device with respect to the robot. The synthetic representation may also be used to find a tool that is outside of the field of view, to display range of motion limits for a tool, to remotely communicate information about the robot, and to detect collisions. | 07-30-2009 |
20090268010 | AUGMENTED STEREOSCOPIC VISUALIZATION FOR A SURGICAL ROBOT USING A CAPTURED FLUORESCENCE IMAGE AND CAPTURED STEREOSCOPIC VISIBLE IMAGES - An illumination channel, a stereoscopic optical channel and another optical channel are held and positioned by a robotic surgical system. A first capture unit captures a stereoscopic visible image from the first light from the stereoscopic optical channel while a second capture unit captures a fluorescence image from the second light from the other optical channel. An intelligent image processing system receives the captured stereoscopic visible image and the captured fluorescence image and generates a stereoscopic pair of fluorescence images. An augmented stereoscopic display system outputs a real-time stereoscopic image comprising a three-dimensional presentation of a blend of the stereoscopic visible image and the stereoscopic pair of fluorescence images. | 10-29-2009 |
20090268011 | AUGMENTED STEREOSCOPIC VISUALIZATION FOR A SURGICAL ROBOT USING A CAMERA UNIT WITH A MODIFIED PRISM - An endoscope with a stereoscopic optical channel is held and positioned by a robotic surgical system. A first capture unit captures: a visible first color component of a visible left image combined with a fluorescence left image from first light from one channel in the endoscope; a visible second color component of the visible left image from the first light; and a visible third color component of the visible left image from the first light. A second capture unit captures: a visible first color component of a visible right image combined with a fluorescence right image from second light from the other channel in the endoscope; a visible second color component of the visible right image from the second light; and a visible third color component of the visible right image from the second light. An augmented stereoscopic display system outputs a real-time stereoscopic image including a three-dimensional presentation including the visible left and right images and the fluorescence left and right images. | 10-29-2009 |
20090268012 | AUGMENTED STEREOSCOPIC VISUALIZATION FOR A SURGICAL ROBOT USING A CAPTURED VISIBLE IMAGE COMBINED WITH A FLUORESCENCE IMAGE AND A CAPTURED VISIBLE IMAGE - An endoscope with a stereoscopic optical channel is held and positioned by a robotic surgical system. A capture unit captures (1) a visible first image and (2) a visible second image combined with a fluorescence second image from the light. An intelligent image processing system receives (1) the visible first image and (2) the visible second image combined with the fluorescence second image and generates at least one fluorescence image of a stereoscopic pair of fluorescence images and a visible second image. An augmented stereoscopic display system outputs a real-time stereoscopic image including a three-dimensional presentation including in one eye, a blend of the at least one fluorescence image of a stereoscopic pair of fluorescence images and one of the visible first and second images; and in the other eye, the other of the visible first and second images. | 10-29-2009 |
20090268015 | AUGMENTED STEREOSCOPIC VISUALIZATION FOR A SURGICAL ROBOT - A robotic surgical system positions and holds an endoscope. A visible imaging system is coupled to the endoscope. The visible imaging system captures a visible image of tissue. An alternate imaging system is also coupled to the endoscope. The alternate imaging system captures a fluorescence image of at least a portion of the tissue. A stereoscopic video display system is coupled to the visible imaging system and to the alternate imaging system. The stereoscopic video display system outputs a real-time stereoscopic image comprising a three-dimensional presentation of a blend of a fluorescence image associated with the captured fluorescence image, and the visible image. | 10-29-2009 |
20090270678 | AUGMENTED STEREOSCOPIC VISUALIZATION FOR A SURGICAL ROBOT USING TIME DUPLEXING - An endoscope with a stereoscopic optical channel is again held and positioned by a robotic surgical system. A capture unit captures (1) at a first time, a first image from light from the channel; and (2) at a second time different from the first time, a second image from the light. Only one of the first image and the second image includes a combination of a fluorescence image and a visible image. The other of the first image and the second image is a visible image. An intelligent image processing system generates an artificial fluorescence image using the captured fluorescence image. An augmented stereoscopic display system outputs an augmented stereoscopic image of at least a portion of the tissue comprising the artificial fluorescence image. | 10-29-2009 |
20100164950 | EFFICIENT 3-D TELESTRATION FOR LOCAL ROBOTIC PROCTORING - An apparatus is configured to show telestration in 3-D to a surgeon in real time. A proctor is shown one side of a stereo image pair, such that the proctor can draw a telestration line on the one side with an input device. Points of interest are identified for matching to the other side of the stereo image pair. In response to the identified points of interest, regions and features are identified and used to match the points of interest to the other side. Regions can be used to match the points of interest. Features of the first image can be matched to the second image and used to match the points of interest to the second image, for example when the confidence scores for the regions are below a threshold value. Constraints can be used to evaluate the matched points of interest, for example by excluding bad points. | 07-01-2010 |
20100166323 | ROBUST SPARSE IMAGE MATCHING FOR ROBOTIC SURGERY - Systems, methods, and devices are used to match images. Points of interest from a first image are identified for matching to a second image. In response to the identified points of interest, regions and features can be identified and used to match the points of interest to a corresponding second image or second series of images. Regions can be used to match the points of interest when regions of the first image are matched to the second image with high confidence scores, for example above a threshold. Features of the first image can be matched to the second image, and these matched features may be used to match the points of interest to the second image, for example when the confidence scores for the regions are below the threshold value. Constraints can be used to evaluate the matched points of interest, for example by excluding bad points. | 07-01-2010 |
20100168562 | FIDUCIAL MARKER DESIGN AND DETECTION FOR LOCATING SURGICAL INSTRUMENT IN IMAGES - The present disclosure relates to systems, methods, and tools for tool tracking using image-derived data from one or more tool-located reference features. A method includes: capturing a first image of a tool that includes multiple features that define a first marker, where at least one of the features of the first marker includes an identification feature; determining a position for the first marker by processing the first image; determining an identification for the first marker by using the at least one identification feature by processing the first image; and determining a tool state for the tool by using the position and the identification of the first marker. | 07-01-2010 |
20100168918 | OBTAINING FORCE INFORMATION IN A MINIMALLY INVASIVE SURGICAL PROCEDURE - Methods of and a system for providing force information for a robotic surgical system. The method includes storing first kinematic position information and first actual position information for a first position of an end effector; moving the end effector via the robotic surgical system from the first position to a second position; storing second kinematic position information and second actual position information for the second position; and providing force information regarding force applied to the end effector at the second position utilizing the first actual position information, the second actual position information, the first kinematic position information, and the second kinematic position information. Visual force feedback is also provided via superimposing an estimated position of an end effector without force over an image of the actual position of the end effector. Similarly, tissue elasticity visual displays may be shown. | 07-01-2010 |
20100169815 | VISUAL FORCE FEEDBACK IN A MINIMALLY INVASIVE SURGICAL PROCEDURE - Methods of and a system for providing a visual representation of force information in a robotic surgical system. A real position of a surgical end effector is determined. A projected position of the surgical end effector if no force were applied against the end effector is also determined. Images representing the real and projected positions are output superimposed on a display. The offset between the two images provides a visual indication of a force applied to the end effector or to the kinematic chain that supports the end effector. In addition, tissue deformation information is determined and displayed. | 07-01-2010 |
20100317965 | VIRTUAL MEASUREMENT TOOL FOR MINIMALLY INVASIVE SURGERY - Robotic and/or measurement devices, systems, and methods for telesurgical and other applications employ input devices operatively coupled to tools so as to allow a system user to manipulate tissues and other structures being measured. The system may make use of three dimensional position information from stereoscopic images. Two or more discrete points can be designated in three dimensions so as to provide a cumulative length along a straight or curving structure, an area measurement, a volume measurement, or the like. The discrete points may be identified by a single surgical tool or by distances separating two or more surgical tools, with the user optionally measuring a structure longer than a field of view of the stereoscopic image capture device by walking a pair of tools “hand-over-hand” along the structure. By allowing the system user to interact with the tissues while designating the tissue locations, and by employing imaging data to determine the measurements, the measurement accuracy and ease of measurement may be enhanced. | 12-16-2010 |
20100318099 | VIRTUAL MEASUREMENT TOOL FOR MINIMALLY INVASIVE SURGERY - Robotic and/or measurement devices, systems, and methods for telesurgical and other applications employ input devices operatively coupled to tools so as to allow a system user to manipulate tissues and other structures being measured. The system may make use of three dimensional position information from stereoscopic images. Two or more discrete points can be designated in three dimensions so as to provide a cumulative length along a straight or curving structure, an area measurement, a volume measurement, or the like. The discrete points may be identified by a single surgical tool or by distances separating two or more surgical tools, with the user optionally measuring a structure longer than a field of view of the stereoscopic image capture device by walking a pair of tools “hand-over-hand” along the structure. By allowing the system user to interact with the tissues while designating the tissue locations, and by employing imaging data to determine the measurements, the measurement accuracy and ease of measurement may be enhanced. | 12-16-2010 |
20100331855 | Efficient Vision and Kinematic Data Fusion For Robotic Surgical Instruments and Other Applications - Robotic devices, systems, and methods for use in telesurgical therapies through minimally invasive apertures make use of joint-based data throughout much of the robotic kinematic chain, but selectively rely on information from an image capture device to determine location and orientation along the linkage adjacent a pivotal center at which a shaft of the robotic surgical tool enters the patient. A bias offset may be applied to a pose (including both an orientation and a location) at the pivotal center to enhance accuracy. The bias offset may be applied as a simple rigid transformation from the image-based pivotal center pose to a joint-based pivotal center pose. | 12-30-2010 |
20120020547 | Methods of Locating and Tracking Robotic Instruments in Robotic Surgical Systems - In one embodiment of the invention, a method is disclosed to locate a robotic instrument in the field of view of a camera. The method includes capturing sequential images in a field of view of a camera. The sequential images are correlated between successive views. The method further includes receiving a kinematic datum to provide an approximate location of the robotic instrument and then analyzing the sequential images in response to the approximate location of the robotic instrument. An additional method for robotic systems is disclosed. Further disclosed is a method for indicating tool entrance into the field of view of a camera. | 01-26-2012 |
20120120245 | SYSTEM AND METHOD FOR MULTI-RESOLUTION SHARPNESS TRANSPORT ACROSS COLOR CHANNELS - Provided are a system and method for image sharpening that involve capturing an image, and then decomposing the image into a plurality of image-representation components, such as RGB components for example. Each image-representation component is transformed to obtain an unsharpened multi-resolution representation for each image-representation component. A multi-resolution representation includes a plurality of transformation level representations. Sharpness information is transported from an unsharpened transformation level representation of a first one of the image-representation components to a transformation level representation of an unsharpened multi-resolution representation of a second one of the image-representation components to create a sharpened multi-resolution representation of the second one of the image-representation components. The sharpened multi-resolution representation of the second one of the image-representation components is then transformed to obtain a sharpened image. The improved and sharpened image may then be displayed. | 05-17-2012 |
20120206582 | METHODS AND APPARATUS FOR DEMOSAICING IMAGES WITH HIGHLY CORRELATED COLOR CHANNELS - In one embodiment of the invention, an apparatus is disclosed including an image sensor, a color filter array, and an image processor. The image sensor has an active area with a matrix of camera pixels. The color filter array is in optical alignment over the matrix of the camera pixels. The color filter array assigns alternating single colors to each camera pixel. The image processor receives the camera pixels and includes a correlation detector to detect spatial correlation of color information between pairs of colors in the pixel data captured by the camera pixels. The correlation detector further controls demosaicing of the camera pixels into full color pixels with improved resolution. The apparatus may further include demosaicing logic to demosaic the camera pixels into the full color pixels with improved resolution in response to the spatial correlation of the color information between pairs of colors. | 08-16-2012 |
20120209287 | METHOD AND STRUCTURE FOR IMAGE LOCAL CONTRAST ENHANCEMENT - A local contrast enhancement method transforms a first plurality of color components of a first visual color image into a modified brightness component by using a first transformation. The first plurality of color components are in a first color space. The modified brightness component is a brightness component of a second color space. The second color space also includes a plurality of chromatic components. The method transforms all the color components of the first color space into the chromatic components of the second color space. The method then transforms the modified brightness component and the chromatic components of the second color space into a plurality of new color components, in the first color space, of a second visual color image. The method transmits the plurality of new color components to a device such as a display device. The second visual color image has enhanced contrast in comparison to the first visual color image. | 08-16-2012 |
20120237095 | ROBUST SPARSE IMAGE MATCHING FOR ROBOTIC SURGERY - Systems, methods, and devices are used to match images. Points of interest from a first image are identified for matching to a second image. In response to the identified points of interest, regions and features can be identified and used to match the points of interest to a corresponding second image or second series of images. Regions can be used to match the points of interest when regions of the first image are matched to the second image with high confidence scores, for example above a threshold. Features of the first image can be matched to the second image, and these matched features may be used to match the points of interest to the second image, for example when the confidence scores for the regions are below the threshold value. Constraints can be used to evaluate the matched points of interest, for example by excluding bad points. | 09-20-2012 |
20130107207 | METHOD AND SYSTEM FOR STEREO GAZE TRACKING | 05-02-2013 |
20130166070 | OBTAINING FORCE INFORMATION IN A MINIMALLY INVASIVE SURGICAL PROCEDURE - Methods of and a system for providing force information for a robotic surgical system. The method includes storing first kinematic position information and first actual position information for a first position of an end effector; moving the end effector via the robotic surgical system from the first position to a second position; storing second kinematic position information and second actual position information for the second position; and providing force information regarding force applied to the end effector at the second position utilizing the first actual position information, the second actual position information, the first kinematic position information, and the second kinematic position information. Visual force feedback is also provided via superimposing an estimated position of an end effector without force over an image of the actual position of the end effector. Similarly, tissue elasticity visual displays may be shown. | 06-27-2013 |
20130300836 | SINGLE-CHIP SENSOR MULTI-FUNCTION IMAGING - Mixed mode imaging is implemented using a single-chip image capture sensor with a color filter array. The single-chip image capture sensor captures a frame including a first set of pixel data and a second set of pixel data. The first set of pixel data includes a first combined scene, and the second set of pixel data includes a second combined scene. The first combined scene is a first weighted combination of a fluorescence scene component and a visible scene component due to the leakage of a color filter array. The second combined scene includes a second weighted combination of the fluorescence scene component and the visible scene component. Two display scene components are extracted from the captured pixel data in the frame and presented on a display unit. | 11-14-2013 |
20130300837 | SINGLE-CHIP SENSOR MULTI-FUNCTION IMAGING - Mixed mode imaging is implemented using a single-chip image capture sensor with a color filter array. The single-chip image capture sensor captures a frame including a first set of pixel data and a second set of pixel data. The first set of pixel data includes a first combined scene, and the second set of pixel data includes a second combined scene. The first combined scene is a first weighted combination of a fluorescence scene component and a visible scene component due to the leakage of a color filter array. The second combined scene includes a second weighted combination of the fluorescence scene component and the visible scene component. Two display scene components are extracted from the captured pixel data in the frame and presented on a display unit. | 11-14-2013 |
20140031659 | EFFICIENT AND INTERACTIVE BLEEDING DETECTION IN A SURGICAL SYSTEM - A bleeding detection unit in a surgical system processes information in an acquired scene before that scene is presented on a display unit in the operating room. For example, the bleeding detection unit analyzes the pixel data in the acquired scene and determines whether there are one or more initial sites of blood in the scene. Upon detection of an initial site of blood, the region is identified by an initial site icon in the scene displayed on the display unit. In one aspect, the processing is done in real-time which means that there is no substantial delay in presenting the acquired scene to the surgeon. | 01-30-2014 |
20140055489 | RENDERING TOOL INFORMATION AS GRAPHIC OVERLAYS ON DISPLAYED IMAGES OF TOOLS - An operator telerobotically controls tools to perform a procedure on an object at a work site while viewing real-time images of the work site on a display. Tool information is provided in the operator's current gaze area on the display by rendering the tool information over the tool so as not to obscure objects being worked on at the time by the tool nor to require eyes of the user to refocus when looking at the tool information and the image of the tool on a stereo viewer. | 02-27-2014 |
20140058564 | VISUAL FORCE FEEDBACK IN A MINIMALLY INVASIVE SURGICAL PROCEDURE - Methods of and a system for providing a visual representation of force information in a robotic surgical system. A real position of a surgical end effector is determined. A projected position of the surgical end effector if no force were applied against the end effector is also determined. Images representing the real and projected positions are output superimposed on a display. The offset between the two images provides a visual indication of a force applied to the end effector or to the kinematic chain that supports the end effector. In addition, tissue deformation information is determined and displayed. | 02-27-2014 |
20140111623 | STEREO IMAGING SYSTEM WITH AUTOMATIC DISPARITY ADJUSTMENT FOR DISPLAYING CLOSE RANGE OBJECTS - A stereo imaging system comprises a stereoscopic camera having left and right image capturing elements for capturing stereo images; a stereo viewer; and a processor configured to modify the stereo images prior to being displayed on the stereo viewer so that a disparity between corresponding points of the stereo images is adjusted as a function of a depth value within a region of interest in the stereo images after the depth value reaches a target depth value. | 04-24-2014 |
20140176680 | Methods and Apparatus for Demosaicing Images with Highly Correlated Color Channels - In one embodiment of the invention, an apparatus is disclosed including an image sensor, a color filter array, and an image processor. The image sensor has an active area with a matrix of camera pixels. The color filter array is in optical alignment over the matrix of the camera pixels. The color filter array assigns alternating single colors to each camera pixel. The image processor receives the camera pixels and includes a correlation detector to detect spatial correlation of color information between pairs of colors in the pixel data captured by the camera pixels. The correlation detector further controls demosaicing of the camera pixels into full color pixels with improved resolution. The apparatus may further include demosaicing logic to demosaic the camera pixels into the full color pixels with improved resolution in response to the spatial correlation of the color information between pairs of colors. | 06-26-2014 |
20140232824 | PROVIDING INFORMATION OF TOOLS BY FILTERING IMAGE AREAS ADJACENT TO OR ON DISPLAYED IMAGES OF THE TOOLS - An operator telerobotically controls tools to perform a procedure on an object at a work site while viewing real-time images of the object, tools and work site on a display. Tool information is provided by filtering a part of the real-time images for enhancement or degradation to indicate a state of a tool and displaying the filtered images on the display. | 08-21-2014 |
20140267603 | DEPTH BASED MODIFICATION OF CAPTURED IMAGES - An imaging system processes images of a plurality of objects which have been captured by an image capture device for display. Normal processing of the images is modified as either a function of a depth corresponding to one or more of the plurality of objects appearing in the captured images relative to the image capture device or as a function of the depth and one or more image characteristics extracted from the captured images. A depth threshold may be used to avoid inadvertent modifications due to noise. | 09-18-2014 |
20140267626 | INTELLIGENT MANUAL ADJUSTMENT OF AN IMAGE CONTROL ELEMENT - An imaging system comprises an image capturing device, a viewer, a control element, and a processor. The control element controls or adjusts an image characteristic of one of the image capturing device and the viewer. The processor is programmed to determine a depth value relative to the image capturing device, determine a desirable adjustment to the control element by using the determined depth value, and control adjustment of the control element to assist manual adjustment of the control element to the desirable adjustment. The processor may also be programmed to determine whether the adjustment of the control element is to be automatically or manually adjusted and control adjustment of the control element automatically to the desirable adjustment if the control element is to be automatically adjusted. | 09-18-2014 |
20140277736 | GEOMETRICALLY APPROPRIATE TOOL SELECTION ASSISTANCE FOR DETERMINED WORK SITE DIMENSIONS - A robotic system includes a processor that is programmed to determine and cause work site measurements for user specified points in the work site to be graphically displayed in order to provide geometrically appropriate tool selection assistance to the user. The processor is also programmed to determine an optimal one of a plurality of tools of varying geometries for use at the work site and to cause graphical representations of at least the optimal tool to be displayed along with the work site measurements. | 09-18-2014 |
20140282196 | ROBOTIC SYSTEM PROVIDING USER SELECTABLE ACTIONS ASSOCIATED WITH GAZE TRACKING - A robotic system provides user selectable actions associated with gaze tracking according to user interface types. User initiated correction and/or recalibration of the gaze tracking may be performed during the processing of individual of the user selectable actions. | 09-18-2014 |
20150025392 | EFFICIENT 3-D TELESTRATION FOR LOCAL AND REMOTE ROBOTIC PROCTORING - An apparatus is configured to show telestration in 3-D to a surgeon in real time. A proctor is shown one side of a stereo image pair, such that the proctor can draw a telestration line on the one side with an input device. Points of interest are identified for matching to the other side of the stereo image pair. In response to the identified points of interest, regions and features are identified and used to match the points of interest to the other side. Regions can be used to match the points of interest. Features of the first image can be matched to the second image and used to match the points of interest to the second image, for example when the confidence scores for the regions are below a threshold value. Constraints can be used to evaluate the matched points of interest, for example by excluding bad points. | 01-22-2015 |
20150042775 | Efficient Image Demosaicing and Local Contrast Enhancement - An efficient demosaicing method includes reconstructing missing green pixels after estimating green-red color difference signals and green-blue color difference signals that are used in the reconstruction, and then constructing missing red and blue pixels using the color difference signals. This method creates a full resolution frame of red, green and blue pixels. The full resolution frame of pixels is sent to a display unit for display. In an efficient demosaicing process that includes local contrast enhancement, image contrast is enhanced to build and boost a brightness component from the green pixels, and to build chromatic components from all of the red, green, and blue pixels. Color difference signals are used in place of the red and blue pixels in building the chromatic components. | 02-12-2015 |
20150077519 | AUGMENTED STEREOSCOPIC VISUALIZATION FOR A SURGICAL ROBOT USING A CAPTURED VISIBLE IMAGE COMBINED WITH A FLUORESCENCE IMAGE AND A CAPTURED VISIBLE IMAGE - An endoscope with a stereoscopic optical channel is held and positioned by a robotic surgical system. A capture unit captures (1) a visible first image and (2) a visible second image combined with a fluorescence second image from the light. An intelligent image processing system receives (1) the visible first image and (2) the visible second image combined with the fluorescence second image and generates at least one fluorescence image of a stereoscopic pair of fluorescence images and a visible second image. An augmented stereoscopic display system outputs a real-time stereoscopic image including a three-dimensional presentation including in one eye, a blend of the at least one fluorescence image of a stereoscopic pair of fluorescence images and one of the visible first and second images; and in the other eye, the other of the visible first and second images. | 03-19-2015 |
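The constant-statistics idea behind application 20080278607 can be illustrated with a minimal sketch: if, over enough frames, every pixel observes the same scene statistics, then per-pixel differences in temporal mean and standard deviation are fixed-pattern noise and yield a gain/offset correction. This is a hedged, simplified reading of the abstract; the pyramid-level spectrum-shaping step is omitted, and all function and variable names here are hypothetical, not taken from the patent.

```python
import numpy as np

def constant_statistics_nuc(frames, eps=1e-6):
    """Estimate per-pixel gain and offset under the constant-statistics
    assumption: each pixel's temporal mean/std should match the global
    mean/std, so deviations are attributed to fixed-pattern noise."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    mean = stack.mean(axis=0)            # temporal mean image
    std = stack.std(axis=0)              # temporal standard deviation image
    gain = std.mean() / (std + eps)      # equalize per-pixel responsivity
    offset = mean.mean() - gain * mean   # equalize per-pixel dark level
    return gain, offset

def apply_correction(frame, gain, offset):
    """Correct one frame with the estimated gain/offset images."""
    return gain * frame + offset
```

After correction, each pixel's temporal mean collapses to the global mean, which is the sense in which the fixed pattern is removed.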
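Several of the tool-tracking entries (20090088634, 20090088897, 20100331855) describe adaptively fusing kinematic pose information with vision-derived pose information. A common way to realize such a fusion is a variance-weighted update, as in a Kalman measurement step; the scalar sketch below is an illustrative assumption, not the patents' state-space model, and all names are hypothetical.

```python
def fuse_pose(kin_pose, vis_pose, kin_var, vis_var):
    """Variance-weighted fusion of a kinematic pose estimate with a
    vision-based pose estimate. The lower a source's variance, the more
    the fused result trusts it ('adaptive' weighting)."""
    w = kin_var / (kin_var + vis_var)          # weight given to the vision measurement
    fused = kin_pose + w * (vis_pose - kin_pose)
    fused_var = (kin_var * vis_var) / (kin_var + vis_var)  # always <= both inputs
    return fused, fused_var
```

For example, with equally trusted sources the fused pose is the midpoint, while a near-perfect vision measurement (tiny variance) pulls the estimate almost entirely to the vision pose.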
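Application 20150042775 reconstructs missing red and blue pixels from interpolated green-red and green-blue color-difference signals, exploiting the fact that color differences vary more slowly than the raw channels in natural images. The one-row sketch below shows only that color-difference step, under simplifying assumptions (a single row, linear interpolation); it is not the patent's full demosaicing pipeline, and the names are invented for illustration.

```python
import numpy as np

def reconstruct_red_row(green, red_samples, red_idx):
    """Recover a full-resolution red row from sparse red samples:
    interpolate the slowly varying (green - red) difference signal,
    then take red = green - interpolated difference at every pixel."""
    diff = green[red_idx] - red_samples                      # G - R at red sample sites
    full_diff = np.interp(np.arange(len(green)), red_idx, diff)
    return green - full_diff                                 # R = G - (G - R)
```

When the green-red difference is genuinely smooth (here, constant), the reconstruction is exact even though red was sampled at only every other pixel.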