Entries |
Document | Title | Date |
20080239063 | STEREOSCOPIC OBSERVATION SYSTEM - A stereoscopic observation system includes a stereoscopic image pick up apparatus to pick up left and right images at an inward angle, a stereoscopic image display apparatus to transmit the left and right images picked up by the stereoscopic image pick up apparatus to an observer so that the images are stereoscopically observed at a convergence angle, a convergence angle change portion provided in the stereoscopic image display apparatus and to change the convergence angle, a recognition portion to recognize the stereoscopic image pick up apparatus, and a control portion to control the convergence angle change portion on the basis of the result of the recognition in the recognition portion so that the convergence angle is substantially equal to the inward angle. | 10-02-2008 |
20080246836 | SYSTEM AND METHOD FOR PROCESSING VIDEO IMAGES FOR CAMERA RECREATION - Embodiments use point clouds to recreate a camera. The point cloud of the object may be formed from analysis of two-dimensional images taken by the camera. Once the virtual camera has been formed, it may be used in the process of generating stereoscopic three-dimensional images of the scene within the | 10-09-2008 |
20080278570 | Single-lens, single-sensor 3-D imaging device with a central aperture for obtaining camera position - A device and method for three-dimensional (3-D) imaging using a defocusing technique is disclosed. The device comprises a lens, a central aperture located along an optical axis for projecting an entire image of a target object, at least one defocusing aperture located off of the optical axis, a sensor operable for capturing electromagnetic radiation transmitted from an object through the lens and the central aperture and the at least one defocusing aperture, and a processor communicatively connected with the sensor for processing the sensor information and producing a 3-D image of the object. Different optical filters can be used for the central aperture and the defocusing apertures respectively, whereby a background image produced by the central aperture can be easily distinguished from defocused images produced by the defocusing apertures. | 11-13-2008 |
20080284842 | STEREO VIDEO SHOOTING AND VIEWING DEVICE - A stereo video shooting and viewing device includes: a body, having two groups of eyepieces spaced apart from each other by a certain distance corresponding to a distance between two human eyes; two micro display screens, disposed on front ends of the eyepieces; two digital camera lenses, disposed on an outer side of the body, spaced apart from each other by a certain distance corresponding to the distance between two human eyes, and used for synchronously capturing images with a visual angle difference corresponding to that of the human eyes; and a main control unit (MCU), connected to the two micro display screens and the two digital camera lenses, and used for processing the images synchronously captured by the two digital camera lenses and image signals received from the exterior, and displaying the images on the two micro display screens separately. In this embodiment, the images captured by the camera lenses with the visual angle difference corresponding to that of the human eyes are separately displayed on the micro display screens, so as to form a stereo image when viewed by human eyes, thereby having the advantages of a simple structure and vivid stereoscopic effects. | 11-20-2008 |
20080309754 | Systems and Methods for Displaying Three-Dimensional Images - Systems and methods for displaying three-dimensional (3D) images are described. In particular, the systems can include a display block made from a transparent material with passive optical elements three-dimensionally disposed therein. Each optical element becomes luminous when illuminated by a light ray. The systems can also include a computing device configured to generate two-dimensional (2D) images formatted to create 3D images when projected on the display block by a video projector coupled to the computing device. The video projector is configured to project the 2D images on the block to create the 3D images by causing a set of the passive optical elements to become luminous. Various other systems and methods are described for displaying 3D images. | 12-18-2008 |
20080316299 | VIRTUAL STEREOSCOPIC CAMERA - The subject matter relates to a virtual stereoscopic camera for displaying 3D images. In one implementation, left and right perspectives of a source are captured by image capturing portions. The image capturing portions include an array of image capturing elements that are interspersed with an array of display elements in a display area. The image capturing elements are confined within limited portions of the display area and are separated by an offset distance. The captured left and right perspectives are synthesized so as to generate an image that is capable of being viewed in 3D. | 12-25-2008 |
20090002483 | Apparatus for and method of generating image, and computer program product - An apparatus includes a stereoscopic display region calculator calculating a stereoscopic display region to reproduce a three-dimensional positional relationship in image data displayed on a stereoscopic display device, based on two-dimensional or three-dimensional image data, a position of a target of regard of a virtual camera set in processing of rendering the image data, and orientations of the light beams output from the stereoscopic display device. The apparatus also includes an image processor performing image processing on image data outside the stereoscopic display region calculated by the stereoscopic display region calculator, the image processing being different from that performed on image data inside the stereoscopic display region. The apparatus also includes an image generator generating stereoscopic display image data from the two-dimensional or three-dimensional image data after being processed by the image processor. | 01-01-2009 |
20090009591 | Image synthesizing apparatus and image synthesizing method - A stereoscopic image supplier acquires stereoscopic image data in a side-by-side layout format. A visual information supplier acquires visual information to be added to a stereoscopic image. Based on a 3D display for displaying a stereoscopic image, a 3D display information supplier acquires the coordinates of portions that are not used for 3D display representation as the coordinates of the pixels with which visual information is combined. An image synthesizer combines visual information obtained at Step S | 01-08-2009 |
20090015663 | METHOD AND SYSTEM FOR CONFIGURING A MONITORING DEVICE FOR MONITORING A SPATIAL AREA - A monitoring device for monitoring a spatial area comprises at least one image recording unit. A three-dimensional image of the spatial area is recorded and displayed in order to configure the monitoring device. A configuration plane is defined using a plurality of spatial points which have been determined within the three-dimensional image. Subsequently, at least one variable geometry element is defined relative to the configuration plane. A data record which represents a transformation of the geometry element into the spatial area is generated and transferred to the monitoring device. | 01-15-2009 |
20090058992 | THREE DIMENSIONAL PHOTOGRAPHIC LENS SYSTEM - Provided is a three-dimensional image capturing lens system having a structure in which left and right image sensing lenses are provided, and light is synthesized to form an image on a single CCD (charge-coupled device) in order to prevent loss of light intensity. | 03-05-2009 |
20090086015 | SITUATIONAL AWARENESS OBSERVATION APPARATUS - A positionable sensor assembly for a real-time remote situation awareness apparatus includes a camera for capturing an image of a scene, a plurality of first acoustic transducers for capturing an audio input signal from an environment including the scene, at least one second acoustic transducer excitable to emit an audio output signal, a support structure for supporting the camera, the first acoustic transducers, and the at least one second acoustic transducer, the support structure connected to a base and moveable at least about an axis of rotation relative to the base by a remotely controllable support structure positioning actuator, and a transmission unit adapted to transfer in real time between the sensor assembly and a remote location a captured image of the scene, a captured audio input signal from the environment, an excitation signal to the second acoustic transducer, and a control signal to the support structure positioning actuator. | 04-02-2009 |
20090102914 | METHOD AND APPARATUS FOR GENERATING STEREOSCOPIC IMAGES FROM A DVD DISC - A system and method described herein provide stereoscopic video using standard DVD video data combined with enhancement data. In various embodiments, the enhancement data may be stored on the same DVD as the standard video, or provided via downloading and/or streaming to a stereoscopic DVD player. When stored on the DVD, the enhancement data is provided in various forms, including the MPEG-1 or MPEG-2 program stream level, or the MPEG elementary stream level. In one embodiment, the enhancement data consists of a difference signal between left- and right-eye images taken on a pixel-by-pixel basis. | 04-23-2009 |
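The difference-signal idea in the entry above (storing only the pixel-wise left/right difference as enhancement data) can be sketched as follows. This is a minimal illustration assuming 8-bit frames held in NumPy arrays; the function names and the signed-integer encoding are assumptions for clarity, not the patent's actual format.

```python
import numpy as np

def encode_difference(left, right):
    """Encode the right-eye frame as a signed pixel-wise difference
    from the left-eye (base) frame."""
    return right.astype(np.int16) - left.astype(np.int16)

def decode_right(left, diff):
    """Reconstruct the right-eye frame from the base frame plus the
    stored difference signal, clamped back to the 8-bit range."""
    return np.clip(left.astype(np.int16) + diff, 0, 255).astype(np.uint8)

# Tiny 2x2 example frames.
left = np.array([[100, 120], [130, 140]], dtype=np.uint8)
right = np.array([[98, 125], [130, 135]], dtype=np.uint8)

diff = encode_difference(left, right)
restored = decode_right(left, diff)
assert np.array_equal(restored, right)
```

Because the two eye views are highly correlated, the difference signal is typically sparse and compresses far better than a second full frame would.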
20090135247 | STEREOSCOPIC CAMERA FOR RECORDING THE SURROUNDINGS - A stereoscopic camera for recording the surroundings is provided with a right and a left image sensor, each having a lens to image the surroundings onto the image sensors, with the image sensors and the lenses being held by a carrier side-by-side and at a distance in reference to each other. The stereoscopic camera is additionally provided with a circuit board arranged on the carrier and comprising at least the signal and supply lines of both image sensors. The image sensors are each mounted on a carrier substrate, which, similar to the lenses, are arranged on the carrier, are distanced in reference to the circuit board, and have a flexible electric connection to the circuit board. | 05-28-2009 |
20090174765 | CAMERA DEVICE, LIQUID LENS, AND IMAGE PICKUP METHOD - A camera device is provided which achieves a compound-eye structure using only a liquid lens unit and a control unit, without requiring a plurality of lenses to be mounted in advance, and which is capable of taking a three-dimensional stereoscopic video image. In addition, a compact and lightweight three-dimensional stereoscopic camera is provided which can be switched between taking a two-dimensional planar image and taking a three-dimensional stereoscopic image solely by electronic control, with no need for a movable mechanism, and which can reduce power consumption and improve reliability. A camera device comprises a liquid lens ( | 07-09-2009 |
20090185029 | CAMERA, IMAGE DISPLAY DEVICE, AND IMAGE STORAGE DEVICE - Control information distinguishing whether an image is 2D or 3D is added to the image and then stored in a storage part. An output controlling part is provided which outputs the control information added to the image as a control signal when reading the image from the storage part and outputting it. As a result, when 2D images and 3D images which are stored in a mixed manner are replayed, it can be determined whether an output image is a 2D image or a 3D image according to the control signal, so that a suitable image can be displayed according to an output end device. | 07-23-2009 |
20090189974 | Systems Using Eye Mounted Displays - A display device is mounted on and/or inside the eye. The eye mounted display contains multiple sub-displays, each of which projects light to different retinal positions within a portion of the retina corresponding to the sub-display. The projected light propagates through the pupil but does not fill the entire pupil. In this way, multiple sub-displays can project their light onto the relevant portion of the retina. Moving from the pupil to the cornea, the projection of the pupil onto the cornea will be referred to as the corneal aperture. The projected light propagates through less than the full corneal aperture. The sub-displays use spatial multiplexing at the corneal surface. Various electronic devices interface to the eye mounted display. | 07-30-2009 |
20090207235 | Method for Determining Scattered Disparity Fields in Stereo Vision - In a system for stereo vision including two cameras shooting the same scene, a method is performed for determining scattered disparity fields when the epipolar geometry is known, which includes the steps of: capturing, through the two cameras, first and second images of the scene from two different positions; selecting at least one pixel in the first image, the pixel being associated with a point of the scene, the second image containing a point also associated with the above point of the scene; and computing the displacement from the pixel to the point in the second image by minimising a cost function, such cost function including a term which depends on the difference between the first and the second image and a term which depends on the distance of the above point in the second image from an epipolar straight line, followed by a check of whether it belongs to an allowability area around a subset of the epipolar straight line in which the presence of the point is allowed, in order to take into account errors or uncertainties in calibrating the cameras. | 08-20-2009 |
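The cost function described in the entry above (a photometric term plus a penalty on the candidate's distance from the epipolar line) can be sketched as below. All names and the weighting parameter `lam` are assumptions for illustration; the patent does not specify this exact form.

```python
import numpy as np

def point_line_distance(p, line):
    """Distance from point p = (x, y) to the line ax + by + c = 0."""
    a, b, c = line
    return abs(a * p[0] + b * p[1] + c) / np.hypot(a, b)

def matching_cost(patch1, patch2, p2, epiline, lam=0.5):
    """Photometric difference between candidate patches, plus a soft
    penalty on the candidate point's distance from the epipolar line.
    lam weights geometric error against photometric error (assumed)."""
    photometric = np.sum((patch1.astype(float) - patch2.astype(float)) ** 2)
    geometric = point_line_distance(p2, epiline)
    return photometric + lam * geometric

# With identical patches, only the geometric term differentiates candidates.
# Epipolar line y = x is written as x - y = 0, i.e. (a, b, c) = (1, -1, 0).
patch = np.full((3, 3), 100, dtype=np.uint8)
on_line = matching_cost(patch, patch, (5.0, 5.0), (1.0, -1.0, 0.0))
off_line = matching_cost(patch, patch, (5.0, 9.0), (1.0, -1.0, 0.0))
assert on_line < off_line
```

Treating the epipolar constraint as a soft penalty rather than a hard search line is exactly what lets the method tolerate calibration error, as the abstract notes.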
20090244261 | METHOD FOR THE THREE-DIMENSIONAL MEASUREMENT OF FAST-MOVING OBJECTS - Light sectioning or fringe projection methods are used to measure the surface of objects ( | 10-01-2009 |
20090244262 | IMAGE PROCESSING APPARATUS, IMAGE DISPLAY APPARATUS, IMAGING APPARATUS, AND IMAGE PROCESSING METHOD - An image processing apparatus inputs stereoscopic images, detects a depth of each of the inputted stereoscopic images, lays out the stereoscopic images at least in partial overlap in such a manner that the larger the depth is, the more forward the corresponding stereoscopic image is placed, and records the stereoscopic images having been laid out. | 10-01-2009 |
20090262181 | REAL-TIME VIDEO SIGNAL INTERWEAVING FOR AUTOSTEREOSCOPIC DISPLAY - A method, apparatus and system for simultaneously capturing a plurality of video signals that carry images of a changing real three-dimensional scene, taken from respective different directions, and for combining them, in real time, into an interwoven video signal, ready to be fed to an active-matrix-based autostereoscopic display device. | 10-22-2009 |
20090262182 | THREE-DIMENSIONAL IMAGING APPARATUS - A three-dimensional imaging apparatus for imaging a three-dimensional object may include a microlens array, a sensor device, and a telecentric relay system positioned between the microlens array and the sensor device. A telecentric relay system may include a field lens and a macro objective that may include a macro lens and an aperture stop. A method of imaging a three-dimensional object may include providing a three-dimensional imaging apparatus including a microlens array, a sensor device, and a telecentric relay system positioned between the microlens array and the sensor device; and generating a plurality of elemental images on the sensor device, wherein each of the plurality of elemental images has a different perspective of the three-dimensional object. | 10-22-2009 |
20090262183 | Three-dimensional image obtaining device and processing apparatus using the same - An optical flux is radiated, which is focused at a measurement point in a specimen space, and a transmitted light amount is measured. A minute light absorption amount is measured from a transmitted light signal and a reference signal. While three-dimensionally scanning, a three-dimensional map in which light absorption amounts are represented by voxels (volume cells) is obtained. On this three-dimensional map, deconvolution processing with a light intensity distribution image in the vicinity of the measurement point being a convolution kernel is performed, so as to obtain a three-dimensional image of a specimen that is almost transparent in a non-dyed state. | 10-22-2009 |
20090268013 | STEREO CAMERA UNIT - An adjuster plate is provided between a front rail and a camera unit body having cameras. Pre-dimensioned positioning pins protrude from upper and lower surfaces of the adjuster plate. The positioning pins protruding from the upper surface of the adjuster plate are positioned by being fitted in pin fitting holes provided in the front rail. The positioning pins protruding from the lower surface of the adjuster plate are positioned by being fitted in pin fitting holes provided in the camera unit body. Even when the positions of the pin fitting holes in the front rail are changed, it is possible to cope with the change by only changing the protruding positions of the positioning pins. | 10-29-2009 |
20090268014 | Method and apparatus for generating a stereoscopic image - A method of generating a stereoscopic image is disclosed. The method includes defining two, three, or more regions in a scene, representing a region of interest, a near region, and/or a far region. This is followed by forming an image pair for each region, each image pair containing the information relating to objects in or partially in its respective region. The perceived depth within the regions is altered to provide the ideal or best perceived depth within the region of interest and acceptable or more compressed perceived depths in the other regions. The image pairs are then mapped together to form a display image pair for viewing on a display device. | 10-29-2009 |
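The region-based depth alteration described in the entry above can be sketched as a piecewise remapping: depth inside the region of interest is preserved, while depth in the near and far regions is compressed. The function name, the linear form, and the compression factor are assumptions for illustration, not the patent's exact mapping.

```python
def remap_depth(z, roi_near, roi_far, compress=0.3):
    """Piecewise-linear remapping of scene depth z: full scale inside
    [roi_near, roi_far], compressed by `compress` outside it."""
    if z < roi_near:
        return roi_near - compress * (roi_near - z)  # squeeze the near region
    if z > roi_far:
        return roi_far + compress * (z - roi_far)    # squeeze the far region
    return z                                         # region of interest: unchanged

# Depth inside the region of interest [2, 8] is unchanged;
# depth outside it is pulled toward the region boundaries.
assert remap_depth(5.0, 2.0, 8.0) == 5.0
assert abs(remap_depth(0.0, 2.0, 8.0) - 1.4) < 1e-9   # near region compressed
assert abs(remap_depth(18.0, 2.0, 8.0) - 11.0) < 1e-9  # far region compressed
```

Compressing depth outside the region of interest keeps the total disparity range within the comfortable viewing budget of the display while spending most of that budget where the viewer is looking.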
20090295908 | Method and device for high-resolution three-dimensional imaging which obtains camera pose using defocusing - A method and device for high-resolution three-dimensional (3-D) imaging which obtains camera pose using defocusing is disclosed. The device comprises a lens obstructed by a mask having two sets of apertures. The first set of apertures produces a plurality of defocused images of the object, which are used to obtain camera pose. The second set of apertures produces a plurality of defocused images of a projected pattern of markers on the object. The images produced by the second set of apertures are differentiable from the images used to determine pose, and are used to construct a detailed 3-D image of the object. Using the known change in camera pose between captured images, the 3-D images produced can be overlaid to produce a high-resolution 3-D image of the object. | 12-03-2009 |
20090322859 | Method and System for 3D Imaging Using a Spacetime Coded Laser Projection System - A desktop three-dimensional imaging system and method projects a modulated plane of light that sweeps across a target object while a camera is set to collect an entire pass of the modulated plane of light over the object in one image to create a line stripe pattern. A spacetime coding scheme is applied to the modulation controller whereby a plurality of images of line stripe patterns can be analyzed and decoded to yield a three-dimensional image of the target object in a reduced scan time and with better accuracy than existing close range scanners. | 12-31-2009 |
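Spacetime stripe coding of the kind described in the entry above is commonly implemented with Gray codes, where each projected pattern contributes one bit to a per-pixel stripe index. The sketch below shows a minimal Gray-code decode under the assumption of clean binary on/off observations; it is a generic illustration, not the patent's specific coding scheme.

```python
def decode_stripe_index(observed_bits):
    """Decode one pixel's on/off observations across the projected
    pattern sequence (Gray code, MSB first) into a stripe index."""
    index = 0
    prev = 0
    for g in observed_bits:
        prev ^= g                  # Gray -> binary: running XOR of Gray bits
        index = (index << 1) | prev
    return index

# A pixel lit under patterns [1, 1, 0] observed Gray code 110,
# which decodes to binary 100, i.e. stripe index 4.
assert decode_stripe_index([1, 1, 0]) == 4
```

Once each pixel carries a stripe index, the corresponding projector plane and the camera ray intersect to give that pixel's 3-D position; Gray coding ensures adjacent stripes differ in only one bit, so a single mis-thresholded pattern shifts the index by at most one stripe.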
20090322860 | SYSTEM AND METHOD FOR MODEL FITTING AND REGISTRATION OF OBJECTS FOR 2D-TO-3D CONVERSION - A system and method is provided for model fitting and registration of objects for 2D-to-3D conversion of images to create stereoscopic images. The system and method of the present disclosure provides for acquiring at least one two-dimensional (2D) image, identifying at least one object of the at least one 2D image, selecting at least one 3D model from a plurality of predetermined 3D models, the selected 3D model relating to the identified at least one object, registering the selected 3D model to the identified at least one object, and creating a complementary image by projecting the selected 3D model onto an image plane different than the image plane of the at least one 2D image. The registering process can be implemented using geometric approaches or photometric approaches. | 12-31-2009 |
20100007718 | MONOCULAR THREE-DIMENSIONAL IMAGING - A three-dimensional imaging system uses a single primary optical lens along with various configurations of apertures, refocusing facilities, and the like to obtain three offset optical channels each of which can be separately captured with an optical sensor. | 01-14-2010 |
20100007719 | Method and apparatus for 3D digitization of an object - In a method for 3D digitization of an object ( | 01-14-2010 |
20100013907 | Separation Type Unit Pixel Of 3-Dimensional Image Sensor and Manufacturing Method Thereof - A separation type unit pixel of an image sensor, which can control light that is incident onto a photodiode at various angles and is suitable for a zoom function in a compact camera module by securing an incident angle margin, and a manufacturing method thereof are provided. The unit pixel of an image sensor includes: a first wafer including a photodiode containing impurities having an impurity type opposite to that of a semiconductor material and a pad for transmitting photoelectric charge of the photodiode to the outside; a second wafer including a pixel array region in which transistors except the photodiode are arranged regularly, a peripheral circuit region having an image sensor structure except the pixel array, and a pad for connecting pixels with one another; and a connecting means connecting the pad of the first wafer and the pad of the second wafer. Accordingly, manufacturing processes can be simplified by constructing the upper wafer using only a photodiode and the lower wafer using the pixel array region except the photodiode, and costs are reduced since transistors are not included in the upper wafer portion, which in turn cannot affect the interaction with light. | 01-21-2010 |
20100026784 | METHOD AND SYSTEM TO CONVERT 2D VIDEO INTO 3D VIDEO - 2D/3D video conversion using a method for providing an estimation of visual depth for a video sequence; the method comprises an audio scene classification ( | 02-04-2010 |
20100039500 | Self-Contained 3D Vision System Utilizing Stereo Camera and Patterned Illuminator - A self-contained hardware and software system that allows reliable stereo vision to be performed. The vision hardware for the system, which includes a stereo camera and at least one illumination source that projects a pattern into the camera's field of view, may be contained in a single box. This box may contain mechanisms that allow it to be securely held in place on a surface such as the top of a display. The vision hardware may contain a physical mechanism that allows the box, and thus the camera's field of view, to be tilted upward or downward in order to ensure that the camera can see what it needs to see. | 02-18-2010 |
20100039501 | IMAGE RECORDING DEVICE AND IMAGE RECORDING METHOD - According to the image recording device and image recording method of the present invention, images can be recorded in such a manner that even an image processing apparatus lacking a function for reading a plurality of image data from an extended image file storing the plurality of image data and reproducing or editing them can read representative image data in an extended image file. Furthermore, if a basic file has been deleted or altered, the basic file can be restored using the representative image data in the extended image file, so it is possible to provide another image processing apparatus with the pre-alteration representative image data at any time. | 02-18-2010 |
20100045779 | THREE-DIMENSIONAL VIDEO APPARATUS AND METHOD OF PROVIDING ON SCREEN DISPLAY APPLIED THERETO - A three-dimensional (3D) video apparatus and a method of providing an OSD object applied thereto are provided. The 3D video apparatus includes an on-screen display (OSD) generation unit which receives an OSD object and generates a reduced OSD object to be displayed on the 3D image on a screen, wherein the reduced OSD object is smaller than the received OSD object. An OSD insertion unit inserts the reduced OSD object into input 3D image data. | 02-25-2010 |
20100053307 | COMMUNICATION TERMINAL AND INFORMATION SYSTEM - A communication terminal includes: an image or video collecting and generating module, adapted to collect and generate 3D image or video data; a communication module, adapted to send or receive image or video and generated 3D image or video data; and an image or video display module, adapted to display the 3D image or video according to the collected or received and generated 3D image or video data. An information system includes a 3D image or video information center. The 3D image or video information center further includes: a 3D image or video information relay, adapted to interact with the communication terminal; and a 3D image or video information server, adapted to store 3D image or video information and convert between 3D image or video information and SMS. The communication terminal features simple structure and good performance. The information system supports communication through 3D image or video information. | 03-04-2010 |
20100066812 | IMAGE PICKUP APPARATUS AND IMAGE PICKUP METHOD - An image pickup apparatus having a simple configuration and being capable of performing switching between an image pickup mode based on a light field photography technique and a normal high-resolution image pickup mode is provided. The image pickup apparatus includes an image pickup lens | 03-18-2010 |
20100066813 | STEREO PROJECTION WITH INTERFERENCE FILTERS - The invention relates to a stereo projection system and a method for generating an optically perceptible three-dimensional pictorial reproduction. For each of the two perspective partial images (left or right) of the stereo image, regions of the visible spectrum, which are defined differently by colour filters, are masked in such a way that a plurality of only limited spectral intervals is transmitted in the region of the colour perception blue (B), green (G), and red (R). The position of the transmitted intervals is selected differently for the two perspective partial images. The number of transmitted intervals for the two perspective partial images is, according to the invention, selected as lower than 6 (b) either for the image generation or for the image detection by the stereo glasses, and equal to 6 (a). In the event of a reduced number (b), at least one transmitted interval for one of the perspective partial images is selected in transmission in the region of two colour perceptions blue (B), green (G) or red (R), and created by right-left permutation and subsequent combination with an adjacent interval. According to the permutated intervals, the associated image data is analogously permutated. In this way, the cost of the filters, especially interference filters for the stereo projection system or for the method for producing an optically perceptible, three-dimensional pictorial reproduction, is significantly reduced without considerably affecting the reproduction quality. The unpleasant flickering is also reduced. | 03-18-2010 |
20100073462 | Three dimensional image sensor - A three-dimensional (3D) image sensor includes a plurality of color pixels and a plurality of distance-measuring pixels. The plurality of color pixels and the plurality of distance-measuring pixels are arranged in an array, and a group of distance-measuring pixels, from among the plurality of distance-measuring pixels, is disposed so that a corner of each distance-measuring pixel in the group is adjacent to a corner of an adjacent distance-measuring pixel in the group. The group of distance-measuring pixels is capable of jointly outputting one distance measurement signal. | 03-25-2010 |
20100079581 | 3D CAMERA USING FLASH WITH STRUCTURED LIGHT - An imaging device capable of capturing depth information or surface profiles of objects is disclosed herein. The imaging device uses an enclosed flashing unit to project a sequence of structured light patterns onto an object and captures the light patterns reflected from the surfaces of the object by using an image sensor that is enclosed in the imaging device. The imaging device is capable of capturing an image of an object such that the captured image is comprised of one or more color components of a two-dimensional image of the object and a depth component that specifies the depth information of the object. | 04-01-2010 |
20100079582 | Method and System for Capturing and Using Automatic Focus Information - Methods and digital image capture devices are provided for capturing and using automatic focus information. Methods include building a three-dimensional (3D) focus map for a digital image on a digital image capture device, using the 3D focus map in processing the digital image, and storing the digital image. Digital image capture devices include a processor, a lens, a display operatively connected to the processor, means for automatic focus operatively connected to the processor and the lens, and a memory storing software instructions, wherein when executed by the processor, the software instructions cause the digital image capture device to initiate capture of a digital image, build a three-dimensional (3D) focus map for the digital image using the means for automatic focus, and complete capture of the digital image. | 04-01-2010 |
20100085423 | Stereoscopic imaging - The invention represents a new form of stereoscopically-rendered three-dimensional model and various methods for constructing, manipulating, and displaying these models. The model consists of one or more stereograms applied to a substrate, where the shape of the substrate has been derived from the imagery or from the object itself, and the stereograms are applied to the substrate in a specific way that eliminates parallax for some points and reduces it in others. The methods offered can be (conservatively) 400 times more efficient at representing complex surfaces than conventional modelling techniques, and also provide for independent control of micro and macro parallaxes in a stereoscopically-viewed scene, whether presented in a VR environment or in stereo film or television. | 04-08-2010 |
20100097444 | Camera System for Creating an Image From a Plurality of Images - Methods and apparatus to create and display stereoscopic and panoramic images are disclosed. Apparatus is provided to control the position of a lens in relation to a reference lens. Methods and apparatus are provided to generate multiple images that are combined into a stereoscopic or a panoramic image. An image may be a static image. It may also be a video image. A controller provides correct camera settings for different conditions. An image processor creates a stereoscopic or a panoramic image from the correct settings provided by the controller. A panoramic video wall system is also disclosed. | 04-22-2010 |
20100110164 | THREE-DIMENSIONAL IMAGE COMMUNICATION TERMINAL - A three-dimensional image communication terminal enables communication that conveys a sense of presence and reality by using a natural, highly robust three-dimensional image. The terminal includes a three-dimensional image input section, a transmitting section that transmits an input image to a communication partner after image processing, a three-dimensional image display section that displays on a monitor the human or object image that was shot, and a telephone calling section that receives three-dimensional image information from a partner and communicates with the other end by voice. The three-dimensional image display section includes an integral-photography-type horizontal/vertical parallax display device. | 05-06-2010 |
20100118121 | Device and Method for Visually Recording Two-Dimensional or Three-Dimensional Objects - A device for visually recording two-dimensional or three-dimensional objects, which comprises a camera for recording images of the two-dimensional or three-dimensional object and which is provided with, can be connected to or is connected to at least one evaluation unit for evaluating the recorded images. A single camera and at least one adjustable or pivotal mirror element are provided. According to the method for visually recording two-dimensional or three-dimensional objects while using a device of the aforementioned type, a camera and at least one adjustable mirror element are arranged relative to one another so that the objects to be recorded are situated in the coverage area of the at least one mirror element. The adjustable mirror element for recording the objects to be recorded is displaced or pivoted about one or two axes with an adjustable velocity. The camera records the objects projected in the at least one mirror element, and the recorded objects are routed from the camera to an evaluation unit for evaluation and are processed. | 05-13-2010 |
20100118122 | METHOD AND APPARATUS FOR COMBINING RANGE INFORMATION WITH AN OPTICAL IMAGE - A method for combining range information with an optical image is provided. The method includes capturing a first optical image of a scene with an optical camera, wherein the first optical image comprising a plurality of pixels. Additionally, range information of the scene is captured with a ranging device. Range values are then determined for at least a portion of the plurality of pixels of the first optical image based on the range information. The range values and the optical image are combined to produce a 3-dimensional (3D) point cloud. A second optical image of the scene from a different perspective than the first optical image is produced based on the 3D point cloud. | 05-13-2010 |
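The fusion step described in entry 20100118122 above (assigning range values to the pixels of an optical image and combining them into a 3D point cloud) can be sketched as a pinhole-camera back-projection. This is an illustrative sketch under assumed intrinsic parameters (`fx`, `fy`, `cx`, `cy`), not the method disclosed in the application:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel range (depth) map into a 3D point cloud
    using a simple pinhole-camera model.

    depth  : (H, W) array of range values (same units as output points)
    fx, fy : focal lengths in pixels; cx, cy : principal point.
    Returns an (H*W, 3) array of XYZ points, row-major pixel order.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # X grows toward the image right
    y = (v - cy) * z / fy   # Y grows downward in the image
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A flat wall at constant 2.0 range maps every pixel to the plane Z = 2.
cloud = depth_to_point_cloud(np.full((4, 4), 2.0), fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```

A second view of the scene, as in the abstract, could then be rendered by reprojecting this cloud through a camera placed at a different pose.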
20100118123 | DEPTH MAPPING USING PROJECTED PATTERNS - Apparatus for mapping an object includes an illumination assembly. | 05-13-2010 |
20100118124 | Method Of Forming Virtual Endoscope Image Of Uterus - A method of forming a virtual endoscope image of a uterus is disclosed. The virtual image showing an inner wall of a uterus is formed from a three-dimensional ultrasound uterus image obtained by hysterosalpingography with a solution. An inner wall of the uterus in the 3D virtual image is inspected and a virtual endoscope image of the uterus is formed by reflecting the inspection result on the 3D virtual uterus image. Also, the virtual endoscope images in every aspect are provided according to the positions of a view point or virtual light source. Thus, the inner wall of the uterus can be inspected more easily. | 05-13-2010 |
20100118125 | METHOD AND APPARATUS FOR GENERATING THREE-DIMENSIONAL (3D) IMAGE DATA - A method of generating three-dimensional (3D) image data from first and second image data obtained by photographing the same subject at different points of time, the method including generating third image data by adjusting locations of pixels in the second image data so that the second image data corresponds to the first image data, and generating the 3D image data based on a relationship between the third image data and the first image data. | 05-13-2010 |
20100123771 | Pixel circuit, photoelectric converter, and image sensing system including the pixel circuit and the photoelectric converter - Provided are a pixel circuit, a photoelectric converter, and an image sensing system thereof. The pixel circuit includes a photodiode and an output unit. The photodiode generates a first photo charge to detect the distance from an object and a second photo charge to detect the color of the object. The output unit generates at least one depth signal used to detect the distance based on the first photo charge generated by the photodiode, and a color signal used to detect the color of the object based on the second photo charge. | 05-20-2010 |
20100128108 | APPARATUS AND METHOD FOR ACQUIRING WIDE DYNAMIC RANGE IMAGE IN AN IMAGE PROCESSING APPARATUS - A method is provided for acquiring a Wide Dynamic Range (WDR) image in an image processing apparatus, in which images corresponding to each of consecutive frames are photographed with a stereo camera including at least two image pickup devices with different exposure times, a correlation between images photographed in each frame at different exposure times is checked, and the images photographed at the different exposure times are synthesized into one image based on the correlation. | 05-27-2010 |
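The synthesis step in entry 20100128108 above (combining frames photographed at different exposure times into one wide-dynamic-range image) can be illustrated with a simple well-exposedness-weighted blend. The Gaussian weighting around mid-gray is an assumption for illustration, not the correlation-based method claimed in the application:

```python
import numpy as np

def fuse_exposures(short_exp, long_exp):
    """Merge two differently exposed grayscale frames (values in [0, 1])
    into one wide-dynamic-range image. Each pixel is weighted by a
    'well-exposedness' score that peaks at mid-gray (0.5) and falls off
    toward the clipped extremes 0 and 1."""
    def weight(img):
        # Gaussian centered at mid-gray; epsilon avoids division by zero.
        return np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2)) + 1e-6

    w_s, w_l = weight(short_exp), weight(long_exp)
    return (w_s * short_exp + w_l * long_exp) / (w_s + w_l)
```

With a dark short-exposure frame and a well-exposed long-exposure frame, the blend leans toward the better-exposed pixel values.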
20100128109 | Systems And Methods Of High Resolution Three-Dimensional Imaging - Embodiments of the invention provide systems and methods for three-dimensional imaging with wide field of view and precision timing. In accordance with one aspect, a three-dimensional imaging system includes an illumination subsystem configured to emit a light pulse with a divergence sufficient to irradiate a scene having a wide field of view. A sensor subsystem is configured to receive over a wide field of view portions of the light pulse reflected or scattered by the scene and including: a modulator configured to modulate as a function of time an intensity of the received light pulse portion to form modulated received light pulse portions; and means for generating a first image corresponding to the received light pulse portions and a second image corresponding to the modulated received light pulse portions. A processor subsystem is configured to obtain a three-dimensional image based on the first and second images. | 05-27-2010 |
20100134594 | Displaying Objects with Certain Visual Effects - Embodiments of the invention provide methods, systems, and articles for displaying objects in images, videos, or a series of images with WYSIWYG (what you see is what you get) effects, for calibrating and storing dimensional information of the display elements in a display system, and for constructing 3-dimensional features and size measurement information using one camera. Displaying merchandise with WYSIWYG effects allows online retailers to post vivid pictures of their sales items on the Internet to attract online customers. The processes of calibrating a display system and the processes of constructing 3-dimensional features and size measurement information using one camera are applications of the invention designed to achieve desired WYSIWYG effects. | 06-03-2010 |
20100134595 | 3-D Optical Microscope - A 3-D optical microscope, a method of turning a conventional optical microscope into a 3-D optical microscope, and a method of creating a 3-D image on an optical microscope are described. The 3-D optical microscope includes a processor, at least one objective lens, an optical sensor capable of acquiring an image of a sample, a mechanism for adjusting focus position of the sample relative to the objective lens, and a mechanism for illuminating the sample and for projecting a pattern onto and removing the pattern from the focal plane of the objective lens. The 3-D image creation method includes taking two sets of images, one with and another without the presence of the projected pattern, and using a software algorithm to analyze the two image sets to generate a 3-D image of the sample. The 3-D image creation method enables reliable and accurate 3-D imaging on almost any sample regardless of its image contrast. | 06-03-2010 |
20100141739 | STEREO VIDEO MICROSCOPE SYSTEM - A stereo video microscope system. | 06-10-2010 |
20100149315 | Apparatus and method of optical imaging for medical diagnosis - Described herein is a novel 3-D optical imaging system based on active stereo vision and motion tracking for tracking the motion of the patient and for registering the time-sequenced images of suspicious lesions recorded during endoscopic or colposcopic examinations. The system quantifies the acetic acid induced optical signals associated with early cancer development. The system includes at least one illuminating light source for generating light illuminating a portion of an object, at least one structured light source for projecting a structured light pattern on the portion of the object, at least one camera for imaging the portion of the object and the structured light pattern, and means for generating a quantitative measurement of an acetic acid-induced change of the portion of the object. | 06-17-2010 |
20100149316 | System for accurately repositioning imaging devices - The invention teaches a method of automatically creating a 3D model of a scene of interest from an acquired image, and the use of such a 3D model for enabling a user to determine real-world distances from a displayed image of the scene of interest. | 06-17-2010 |
20100157019 | CAMERA FOR RECORDING SURFACE STRUCTURES, SUCH AS FOR DENTAL PURPOSES - A 3-D camera for obtaining an image of at least one surface of at least one object. The camera comprises a light source, arranged to illuminate the object, wherein a light beam emitted from the light source defines a projection optical path. The camera also includes at least one first aperture having a first predetermined size, interposed in the projection optical path such that the light beam passes through it. An image sensor receives light back-scattered by the object, the back-scattered light defining an observation optical path. At least one second aperture having a second predetermined size, is interposed in the observation optical path such that the back-scattered light passes through it. In one example embodiment of the invention, the first predetermined size is greater than the second predetermined size, and at least one optic is arranged in both the projection and observation optical paths. | 06-24-2010 |
20100165081 | IMAGE PROCESSING METHOD AND APPARATUS THEREFOR - An image processing method, including extracting shot information from metadata, wherein the shot information is used to classify a series of two-dimensional images into predetermined units; if it is determined, by using shot type information included in the shot information, that frames classified as a predetermined shot can be reproduced as a three-dimensional image, extracting background depth information from the metadata, wherein the background depth information is about a background of the frames classified as the predetermined shot; generating a depth map about the background of the frames by using the background depth information; if the frames classified as the predetermined shot comprise an object, extracting object depth information about the object from the metadata; and generating a depth map about the object by using the object depth information. | 07-01-2010 |
20100165082 | Detection apparatus - The invention relates to an apparatus for the spatially resolved detection of objects located in a monitored zone, having a transmission device for the transmission of electromagnetic radiation into a transmission region, having a reception device for the reception of radiation reflected from a reception region, wherein the transmission region and the reception region overlap or intersect in a detection region which is disposed within the monitored zone, which covers a detection angle and in which the transmitted radiation is reflected by objects, having an imaging arrangement which is disposed in the propagation path of the transmitted radiation and/or of the reflected radiation and which covers the total detection region at the transmission side and/or at the reception side at all times; and having spatial resolution means for the influencing of the propagation direction of the radiation and/or for the operation of the reception device, in particular in a time varying manner, such that the position, in particular the spacing, of the detection region relative to the imaging arrangement and/or the size of the detection region determined by the degree to which the transmission region and the reception region overlap or intersect can be changed. | 07-01-2010 |
20100171813 | GATED 3D CAMERA - A camera for determining distances to a scene, the camera comprising: a light source comprising a VCSEL controllable to illuminate the scene with a train of pulses of light having a characteristic spectrum; a photosurface; optics for imaging light reflected from the light pulses by the scene on the photosurface; and a shutter operable to gate the photosurface selectively on and off for light in the spectrum. | 07-08-2010 |
20100171814 | APPARATUS FOR PROCESSING A STEREOSCOPIC IMAGE STREAM - A system is provided for processing a compressed image stream of a stereoscopic image stream, the compressed image stream having a plurality of frames in a first format, each frame consisting of a merged image comprising pixels sampled from a left image and pixels sampled from a right image. A receiver receives the compressed image stream and a decompressing module in communication with the receiver decompresses the compressed image stream. The left and right images of the decompressed image stream are stored in a frame buffer. A serializing unit reads pixels of the frames stored in the frame buffer and outputs a pixel stream comprising pixels of a left frame and pixels of a right frame. A stereoscopic image processor receives the pixel stream, buffers the pixels, performs interpolation in order to reconstruct pixels of the left and right images and outputs a reconstructed left pixel stream and a reconstructed right pixel stream, the reconstructed streams having a format different from the first format. A display signal generator receives the stereoscopic pixel stream to provide an output display signal. | 07-08-2010 |
20100177164 | Method and System for Object Reconstruction - A system and method are presented for use in the object reconstruction. The system comprises an illuminating unit and an imaging unit. | 07-15-2010 |
20100177165 | METHOD OF CONDUCTING PRECONDITIONED RELIABILITY TEST OF SEMICONDUCTOR PACKAGE USING CONVECTION AND 3-D IMAGING - A precondition reliability test of a semiconductor package, to determine a propensity of the package to delaminate, includes a baking test of drying the package, a moisture soaking test of moisturizing the dried package, a reflow test of heat-treating the moisturized package using hot air convection, and a three-dimensional imaging of the package to acquire a 3-D image of a surface of the package. The three-dimensional imaging is preferably carried out using a Moire interferometry technique during the course of the reflow test. Therefore, the delamination of the package can be observed in real time so that data on the start and rapid development of the delamination can be produced. The method also allows data which can be ordered as a Weibull Plot to be produced, thereby enabling a quantitative analysis of the reliability test results. | 07-15-2010 |
20100177166 | Creating and Viewing Three Dimensional Virtual Slides - Systems and methods for creating and viewing three dimensional virtual slides are provided. One or more microscope slides are positioned in an image acquisition device that scans the specimens on the slides and makes two-dimensional images at a medium or high resolution. These two-dimensional images are provided to an image viewing workstation, where they are viewed by an operator who pans and zooms the two-dimensional image and selects an area of interest for scanning at multiple depth levels (Z-planes). The image acquisition device receives a set of parameters for the multiple depth level scan, including a location and a depth. The image acquisition device then scans the specimen at the location in a series of Z-plane images, where each Z-plane image corresponds to a depth level portion of the specimen within the depth parameter. | 07-15-2010 |
20100182406 | SYSTEM AND METHOD FOR THREE-DIMENSIONAL OBJECT RECONSTRUCTION FROM TWO-DIMENSIONAL IMAGES - A system and method for three-dimensional acquisition and modeling of a scene using two-dimensional images are provided. The present disclosure provides a system and method for selecting and combining the three-dimensional acquisition techniques that best fit the capture environment and conditions under consideration, and hence produce more accurate three-dimensional models. The system and method provide for acquiring at least two two-dimensional images of a scene, applying a first depth acquisition function to the at least two two-dimensional images, applying a second depth acquisition function to the at least two two-dimensional images, combining an output of the first depth acquisition function with an output of the second depth acquisition function, and generating a disparity or depth map from the combined output. The system and method also provide for reconstructing a three-dimensional model of the scene from the generated disparity or depth map. | 07-22-2010 |
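The combination step in entry 20100182406 above (merging the outputs of two depth acquisition functions into one disparity or depth map) can be sketched as a per-pixel confidence-weighted average. The confidence maps and the weighting rule are illustrative assumptions, not the algorithm disclosed in the application:

```python
import numpy as np

def combine_depth_maps(d1, c1, d2, c2):
    """Fuse two per-pixel depth estimates by confidence weighting.

    d1, d2 : depth maps from two acquisition functions
    c1, c2 : non-negative per-pixel confidence maps
    Pixels where both confidences are zero are set to 0.0.
    """
    total = c1 + c2
    return np.where(total > 0,
                    (c1 * d1 + c2 * d2) / np.maximum(total, 1e-9),
                    0.0)
```

Where one function is uninformative (confidence zero), the fused map simply falls back to the other function's estimate.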
20100188483 | Single camera device and method for 3D video imaging using a refracting lens - An example embodiment of the present invention may include an apparatus that captures 3D images having a lens barrel, including a lens disposed at a first end of the lens barrel, an image capture element at the second end of the lens barrel, and a refracting lens positioned along the optical axis of the lens barrel. The image capture device may have an adjustable active region, the adjustable active region being a region capable of capturing an image that is smaller than the total image capture area of the image capture element. The image capture element may capture images continuously at a predetermined frame rate. The image capture element may change the adjustable active region, and the set of positioning elements may be adapted to continuously change the position of the refracting lens among a series of predetermined positions at a rate corresponding to the predetermined frame rate. | 07-29-2010 |
20100188484 | IMAGE DATA OBTAINING METHOD AND APPARATUS THEREFOR - An image data obtaining method and apparatus therefor, where the image data obtaining method involves determining an image-capturing mode from among a first image-capturing mode for capturing an image of a target subject by using a filter having a first area for transmitting light and a second area for blocking light, and a second image-capturing mode for capturing the image of the target subject without using the filter; capturing the image of the target subject by selectively using the filter according to the determined image-capturing mode; and processing captured image data. | 07-29-2010 |
20100188485 | LIVE CONCERT/EVENT VIDEO SYSTEM AND METHOD - One aspect of the invention is a method of providing video to attendees of a live concert. Video of different views of the live concert is captured. A plurality of video streams are provided to attendees of the live concert while the live concert is occurring. The plurality of digital video streams enable an attendee of the live concert to select which of the plurality of digital video streams to view using a portable digital device associated with that attendee such that the attendee may choose from among the different views of the live concert. | 07-29-2010 |
20100194858 | Intermediate image generation apparatus and method - An intermediate image generation apparatus and method are described. An intermediate image may be generated from any one image of a left image and a right image, and the intermediate image may be interpolated by referring to the other image of the left image and the right image. | 08-05-2010 |
20100194859 | CONFIGURATION MODULE FOR A VIDEO SURVEILLANCE SYSTEM, SURVEILLANCE SYSTEM COMPRISING THE CONFIGURATION MODULE, METHOD FOR CONFIGURING A VIDEO SURVEILLANCE SYSTEM, AND COMPUTER PROGRAM - Video surveillance systems typically comprise a plurality of video cameras that are distributed in a surveillance region at different locations. The image data recorded by the surveillance cameras is collected in a surveillance center and evaluated automatically or by surveillance personnel. In automated surveillance it is known to select certain image regions of a surveillance camera and monitor them continuously by means of digital image processing. | 08-05-2010 |
20100201783 | Stereoscopic Image Generation Apparatus, Stereoscopic Image Generation Method, and Program - Influences of physiological stereoscopic elements are removed by image processing using projection transformation. | 08-12-2010 |
20100201784 | METHOD FOR THE MICROSCOPIC THREE-DIMENSIONAL REPRODUCTION OF A SAMPLE - A method for the three-dimensional imaging of a sample is provided in which image information from different depth planes of the sample is stored in a spatially resolved manner, and the three-dimensional image of the sample is subsequently reconstructed from this stored image information. A reference structure is applied to the illumination light, at least one fluorescing reference object is positioned next to or in the sample, and images of the reference structure of the illumination light and of the reference object are recorded from at least one detection direction and evaluated. The light sheet is brought into an optimal position based on the results, and image information of the reference object and of the sample from a plurality of detection directions is stored. Transformation operators are obtained on the basis of the stored image information, and the reconstruction of the three-dimensional image of the sample is based on these transformation operators. | 08-12-2010 |
20100201785 | Method and system for displaying stereoscopic detail-in-context presentations - A method for generating a stereoscopic presentation of a region-of-interest in a monoscopic information representation. The method includes the steps of: (a) selecting first and second viewpoints for the region-of-interest; (b) creating a lens surface having a predetermined lens surface shape for the region-of-interest, the lens surface having a plurality of polygonal surfaces constructed from a plurality of points sampled from the lens surface shape; (c) creating first and second transformed presentations by overlaying the representation on the lens surface and perspectively projecting the lens surface with the overlaid representation onto a plane spaced from the first and second viewpoints, respectively; and, (d) displaying the first and second transformed presentations on a display screen to generate the stereoscopic presentation. | 08-12-2010 |
20100208033 | Personal Media Landscapes in Mixed Reality - An exemplary method includes accessing geometrically located data that represent one or more virtual items with respect to a three-dimensional coordinate system; generating a three-dimensional map based at least in part on real image data of a three-dimensional space as acquired by a camera; rendering to a physical display a mixed reality scene that includes the one or more virtual items at respective three-dimensional positions in a real image of the three-dimensional space acquired by the camera; and re-rendering to the physical display the mixed reality scene upon a change in the field of view of the camera. Other methods, devices, systems, etc., are also disclosed. | 08-19-2010 |
20100208034 | METHOD AND SYSTEM FOR THE DYNAMIC CALIBRATION OF STEREOVISION CAMERAS - The present invention generally provides a method of performing dynamic calibration of a stereo vision system using a specific stereo disparity algorithm adapted to provide for the determination of disparity in two dimensions, X and Y. In one embodiment of the present invention, an X/Y disparity map may be calculated using this algorithm without having to perform pre-warping or first finding the epipolar directions. Thus information related to camera misalignment and/or distortion can be preserved in the resulting X/Y disparity map and later extracted. | 08-19-2010 |
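The two-dimensional disparity computation in entry 20100208034 above (measuring disparity in both X and Y so that camera misalignment and distortion remain visible in the map) can be illustrated with a brute-force block match over a square search window. Patch size, search radius, and the sum-of-absolute-differences score are illustrative choices, not the patented algorithm:

```python
import numpy as np

def disparity_xy(left, right, y, x, patch=3, search=5):
    """Find the (dy, dx) shift that best matches a patch of `left`
    centered at (y, x) inside `right`, searching +/- `search` pixels
    along BOTH axes (not just the horizontal epipolar direction).
    Uses a sum-of-absolute-differences (SAD) matching score."""
    r = patch // 2
    tmpl = left[y - r:y + r + 1, x - r:x + r + 1]
    best, best_off = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            # Skip candidate windows that would fall outside the image.
            if (yy - r < 0 or xx - r < 0 or
                    yy + r + 1 > right.shape[0] or xx + r + 1 > right.shape[1]):
                continue
            cand = right[yy - r:yy + r + 1, xx - r:xx + r + 1]
            score = np.abs(tmpl - cand).sum()
            if score < best:
                best, best_off = score, (dy, dx)
    return best_off
```

For a right image that is a pure translation of the left image, the recovered (dy, dx) equals that translation; a nonzero vertical component is exactly the misalignment signal the abstract says an X/Y map preserves.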
20100208035 | VOLUME RECOGNITION METHOD AND SYSTEM - The present invention relates to a volume recognition method comprising a series of steps. | 08-19-2010 |
20100208036 | SECURITY ELEMENT - The present invention relates to a security element for security papers, value documents and the like, having a microoptical moiré magnification arrangement. | 08-19-2010 |
20100208037 | IMAGE DISPLAYING SYSTEM AND IMAGE CAPTURING AND DISPLAYING SYSTEM - An image displaying system includes an image processor and a display unit. The image processor moves and/or deforms a first radiographic image and a second radiographic image that has been captured after the first radiographic image such that the first radiographic image and the second radiographic image are aligned. The display unit displays the first radiographic image and the second radiographic image that have been aligned, allows one of the first and second radiographic images to be viewed by the right eye, and allows the other of the first and second radiographic images to be viewed by the left eye. | 08-19-2010 |
20100208038 | METHOD AND SYSTEM FOR GESTURE RECOGNITION - A method of image acquisition and data pre-processing includes obtaining from a sensor an image of a subject making a movement. The sensor may be a depth camera. The method also includes selecting a plurality of features of interest from the image, sampling a plurality of depth values corresponding to the plurality of features of interest, projecting the plurality of features of interest onto a model utilizing the plurality of depth values, and constraining the projecting of the plurality of features of interest onto the model utilizing a constraint system. The constraint system may comprise an inverse kinematics solver. | 08-19-2010 |
20100225743 | Three-Dimensional (3D) Imaging Based on Motion Parallax - Techniques and technologies are described herein for motion parallax three-dimensional (3D) imaging. Such techniques and technologies do not require special glasses, virtual reality helmets, or other user-attachable devices. More particularly, some of the described motion parallax 3D imaging techniques and technologies generate sequential images, including motion parallax depictions of various scenes derived from clues in views obtained of or created for the displayed scene. | 09-09-2010 |
20100238271 | Sensing Apparatus and Method for Detecting a Three-Dimensional Physical Shape of a Body - A sensing device having a sensing arrangement with a sensing end. | 09-23-2010 |
20100245542 | DEVICE FOR COMPUTING THE EXCAVATED SOIL VOLUME USING STRUCTURED LIGHT VISION SYSTEM AND METHOD THEREOF - A device for computing an excavated soil volume using structured light is disclosed. A control sensor unit is provided at the hinge points of an excavator arm, and is configured to detect and output the location and the bent angle of the excavator arm. A microcontroller is configured to output a control signal so as to capture the images of a work area of a bucket, provided at one end of the excavator arm, using the output of the control sensor unit, convert the captured images into 3-Dimensional (3D) images, and compute an excavated soil volume. An illumination module is configured to include at least one light source that is controlled by the control signal and radiates light onto the work area. A structured light module is configured to capture the work area in response to the control signal. | 09-30-2010 |
20100245543 | MRI COMPATIBLE CAMERA THAT INCLUDES A LIGHT EMITTING DIODE FOR ILLUMINATING A SITE - Systems and methods of using MR-compatible cameras to view magnetic resonance imaging procedures. The MR-compatible camera systems may include a casing with at least two openings, including one oriented to permit a camera to view a site, and another opening oriented to permit a light source to illuminate a portion of the site. The camera systems may be used with either closed bore or open bore MRI systems. | 09-30-2010 |
20100245544 | IMAGING APPARATUS, IMAGING CONTROL METHOD, AND RECORDING MEDIUM - An imaging apparatus includes: a first imaging controller configured to control imaging by an imaging unit; a movement distance acquirer configured to acquire movement distance of the imaging unit required to generate a three-dimensional image of the imaged subject after imaging by the first imaging controller; a first determining unit configured to determine whether or not the imaging unit has moved the movement distance acquired by the movement distance acquirer; a second imaging controller configured to control imaging with respect to the imaging unit in the case where it is determined by the first determining unit that the imaging unit has moved the movement distance; and a three-dimensional image generator configured to generate a three-dimensional image from the image acquired by the first imaging controller and the image acquired by the second imaging controller. | 09-30-2010 |
20100259597 | FACE DETECTION APPARATUS AND DISTANCE MEASUREMENT METHOD USING THE SAME - Provided are a face detection apparatus and a distance measurement method using the same. The face detection apparatus detects a face using left and right images which are acquired from a stereo camera. The face detection apparatus measures distance from the stereo camera to the face using an image frame which is provided from the stereo camera without a stereo matching process. Accordingly, the face detection apparatus simultaneously performs face detection and distance measurement even in a low-performance system. | 10-14-2010 |
20100259598 | APPARATUS FOR DETECTING THREE-DIMENSIONAL DISTANCE - A three-dimensional distance detecting apparatus includes a stereo vision camera configured to detect a parallax of a selected point to be detected; and a pattern generating device configured to generate a pattern and project the pattern on the selected point when the selected point is a subject which does not induce a parallax. | 10-14-2010 |
20100259599 | Display system and camera system - A display apparatus and an imaging apparatus constructed such that a high-resolution clear three-dimensional video image can be viewed from any direction. The display apparatus projects frame images, projected from a projector such as an electronic projector, to a video image projection surface of a three-dimensional screen through a polygonal mirror provided around the three-dimensional screen, thereby providing a polyhedral video image such as a three-dimensional image to a person viewing from around the video image projection surface. The three-dimensional screen has a view field angle limiting filter and a directional reflection screen. The view field angle filter limits the angle of a view field in the left/right direction, the angle being the angle of the projection on the video image projection surface. | 10-14-2010 |
20100265316 | THREE-DIMENSIONAL MAPPING AND IMAGING - Imaging apparatus includes an illumination subassembly, which is configured to project onto an object a pattern of monochromatic optical radiation in a given wavelength band. An imaging subassembly includes an image sensor, which is configured both to capture a first, monochromatic image of the pattern on the object by receiving the monochromatic optical radiation reflected from the object and to capture a second, color image of the object by receiving polychromatic optical radiation, and to output first and second image signals responsively to the first and second images, respectively. A processor is configured to process the first and second signals so as to generate and output a depth map of the object in registration with the color image. | 10-21-2010 |
20100265317 | IMAGE PICKUP APPARATUS - An image pickup apparatus of the present invention includes: a photographing section that can photograph a subject from a plurality of viewpoints with parallax, and can photograph a 2D moving image of the subject obtained by photographing from at least one of the viewpoints and a 3D image of the subject obtained by photographing from the plurality of the viewpoints; a recording section that records the 2D moving image and the 3D image; a subject situation determination section that determines a timing suitable for photographing the 3D image while photographing the 2D moving image; and a photographing control section that controls the photographing section so as to photograph the 3D image when the subject situation determination section determines that the timing is suitable for photographing the 3D image. | 10-21-2010 |
20100265318 | 3D Biplane Microscopy - A microscopy system is configured for creating 3D images from individually localized probe molecules. The microscopy system includes a sample stage, an activation light source, a readout light source, a beam splitting device, at least one camera, and a controller. The activation light source activates probes of at least one probe subset of photo-sensitive luminescent probes, and the readout light source causes luminescence light from the activated probes. The beam splitting device splits the luminescence light into at least two paths to create at least two detection planes that correspond to the same or different number of object planes of the sample. The camera detects simultaneously the at least two detection planes, the number of object planes being represented in the camera by the same number of recorded regions of interest. The controller is programmable to combine a signal from the regions of interest into 3D data. | 10-21-2010 |
20100277569 | MOBILE INFORMATION KIOSK WITH A THREE-DIMENSIONAL IMAGING EFFECT - The present invention discloses a mobile information kiosk with a three-dimensional imaging effect, which is primarily applied to a hand-held mobile information kiosk. The kiosk includes a dual-lens photographing device with various light traveling angles, a displayer, a stereoscopic optical element which is provided on the displayer, and a data processing module for three-dimensional display. The displayer displays an interleaved grid-shape pattern which is processed by the data processing module. The grid-shape pattern is deflected leftward and rightward in a longitudinal series through the stereoscopic optical element and is projected respectively to both eyes of a user, such that the user can visually sense a three-dimensional image. | 11-04-2010 |
20100277570 | IMAGE PROCESSING SYSTEM - An image processing system includes a photographing apparatus and a processing apparatus. The photographing apparatus includes LEDs for emitting light with characteristics of spectroscopic distributions varied in a visible light area, an image pick-up device unit which picks-up a subject image that is illuminated by the LEDs and is formed by an image pick-up optical system and which outputs an image signal, and a control unit. The control unit sequentially lights-on the LEDs upon an instruction for photographing a subject spectroscopic image being input from an operating switch, picks-up the image, and thus controls the operation for capturing subject spectroscopic images. The processing apparatus includes a calculating device which performs a desired image calculation based on the image signal. | 11-04-2010 |
20100283832 | Miniaturized GPS/MEMS IMU integrated board - This invention documents the efforts on the research and development of a miniaturized GPS/MEMS IMU integrated navigation system. A miniaturized GPS/MEMS IMU integrated navigation system is presented, and a Laser Dynamic Range Imager (LDRI) based alignment algorithm for space applications is discussed. Two navigation cameras are also included to measure the range and range rate, which can be integrated into the GPS/MEMS IMU system to enhance the navigation solution. | 11-11-2010 |
20100283833 | DIGITAL IMAGE CAPTURING DEVICE WITH STEREO IMAGE DISPLAY AND TOUCH FUNCTIONS - A digital image capturing device with stereo image display and touch functions is provided. The digital image capturing device includes an image capturing module, a central processing unit (CPU), and a touch display module. The CPU transmits a stereo image to the touch display module. The touch display module converts the stereo image into a multi-image, and then the multi-image is synthesized into the stereo image after being perceived by eyes. When a touch body performs a touch operation on the touch display module, for example, a contact/non-contact touch, the touch display module produces a first or second motion track, and the stereo image on the touch display module changes in real time along with the first or second motion track, so as to achieve an interactive effect of a virtual stereo image during the touch operation. | 11-11-2010 |
20100283834 | DEVICE FOR 3D IMAGING - A 3 dimensional (3D) imaging device is described. The device emits a laser pulse towards a scene. Radiation reflected by the scene includes information relating to the range between objects in the scene. A detector detects the reflected radiation pulses and outputs signals characteristic of the scene to an imaging device or camera. Two image frames will be produced per radiation pulse, one frame being representative of the ‘close’ object and the second frame being representative of the ‘far’ object. The ratio of these frames may be processed by suitable means to produce a 3D image of the scene. | 11-11-2010 |
20100289877 | METHOD AND EQUIPMENT FOR PRODUCING AND DISPLAYING STEREOSCOPIC IMAGES WITH COLOURED FILTERS - A method for viewing a sequence of images producing a relief sensation is provided. | 11-18-2010 |
20100289878 | IMAGE PROCESSING APPARATUS, METHOD AND COMPUTER PROGRAM FOR GENERATING NORMAL INFORMATION, AND VIEWPOINT-CONVERTED IMAGE GENERATING APPARATUS - High-precision normal information on the surface of a subject is generated by capturing an image of the subject. A normal information generating device captures the image of the subject and thereby passively generates normal information on the surface of the subject. The normal information generating device includes: a stereo polarization image capturing section for receiving a plurality of polarized light beams of different polarization directions at different viewpoint positions and obtaining a plurality of polarization images of different viewpoint positions; and a normal information generating section for estimating a normal direction vector of the subject based on the plurality of polarization images of different viewpoint positions. | 11-18-2010 |
20100289879 | Remote Contactless Stereoscopic Mass Estimation System - A contactless system and method for estimating the mass or weight of a target object is provided. The target object is imaged and a spatial representation of the target object is derived from the images. A virtual spatial model is provided of a characteristic object of a class of objects to which the target object belongs. The virtual spatial model is reshaped to optimally fit the spatial representation of the individual object. Finally, the mass or weight of the target object is estimated as a function of shape variables characterizing the reshaped virtual object. | 11-18-2010 |
20100295924 | INFORMATION PROCESSING APPARATUS AND CALIBRATION PROCESSING METHOD - An information processing apparatus, which provides images for stereoscopic viewing by synthesizing images obtained by capturing an image of real space by a main image sensing device and sub image sensing device to a virtual image, measures the position and orientation of the main image sensing device, calculates the position and orientation of the sub image sensing device based on inter-image capturing device position and orientation held in a holding unit and the measured position and orientation of the main image sensing device. Then the information processing apparatus calculates an error using the measured position and orientation of the main image sensing device, the calculated position and orientation of the sub image sensing device, and held intrinsic parameters of the main image sensing device and sub image sensing device. The information processing apparatus calibrates the held information based on the calculated error. | 11-25-2010 |
20100309289 | 3D Positioning Apparatus and Method - A 3D positioning apparatus is used for an object that includes feature points and a reference point. The object undergoes movement from a first to a second position. The 3D positioning apparatus includes: an image sensor for capturing images of the object; and a processor for calculating, based on the captured images, initial coordinates of each feature point when the object is in the first position, initial coordinates of the reference point, final coordinates of the reference point when the object is in the second position, and final coordinates of each feature point. The processor calculates 3D translational information of the feature points using the initial and final coordinates of the reference point, and 3D rotational information of the feature points using the initial and final coordinates of each feature point. A 3D positioning method is also disclosed. | 12-09-2010 |
20100309290 | SYSTEM FOR CAPTURE AND DISPLAY OF STEREOSCOPIC CONTENT - A system for providing capture and display of stereoscopic content with interchangeable portable devices, the system including a portable transmitter/receiver device configured to communicate stereoscopic content between a first device where an image originates and a second device through which a user may view the stereoscopic content, wherein the portable transmitter/receiver device is configured to communicate through wired and/or wireless communication. | 12-09-2010 |
20100315488 | Conversion device and method converting a two dimensional image to a three dimensional image - Disclosed is an image conversion device and method converting a two-dimensional (2D) image into a three-dimensional (3D) image. The image conversion device may selectively adjust illumination within the 2D image, generate a disparity map for the illumination adjusted image, and selectively adjust a depth value of the disparity map based on edge discrimination. | 12-16-2010 |
20100315489 | TRANSPORT OF STEREOSCOPIC IMAGE DATA OVER A DISPLAY INTERFACE - A digital display interface. | 12-16-2010 |
20100328428 | Optimized stereoscopic visualization - The present invention discloses a method comprising: calculating an X separation distance between a left eye and a right eye, said X separation distance corresponding to an interpupillary distance in a horizontal direction; and transforming geometry and texture only once for said left eye and said right eye. | 12-30-2010 |
20100328429 | STEREOSCOPIC IMAGE INTENSITY BALANCING IN LIGHT PROJECTOR - In a light projection system, potentially hierarchical levels of light intensity control ensure proper laser-light output intensity, color channel intensity, white point, left/right image intensity balancing, or combinations thereof. The light projection system can include a light intensity sensor in an image path, in a light-source subsystem light-dump path, in a light-modulation subsystem light-dump path, in a position to measure light leaked from optical components, or combinations thereof. | 12-30-2010 |
20100328430 | LENS MODULE FOR FORMING STEREO IMAGE - The disclosure provides a lens module for forming a stereo image. The lens module includes a point light source, a two-dimensional scanning unit, a camera sensor unit, and a data processing unit. The two-dimensional scanning unit is configured for controlling the light from the point light source to project onto an object, forming image points that are reflected and arrayed in a matrix on the object, and for scanning the image points. The camera sensor unit is configured for receiving the light reflected by the object and capturing an image of the image points. The data processing unit is configured for receiving the image from the camera sensor unit and performing an analysis on the image to obtain depth information of the object. | 12-30-2010 |
20100328431 | Rendering method and apparatus using sensor in portable terminal - A method and an apparatus detect motion, rotation, and tilt for rendering using a sensor in a portable terminal. The rendering method using the sensor in the portable terminal includes pre-rendering a region of a size corresponding to a screen of the terminal and a surrounding region. A preset region of the pre-rendered regions is displayed. A motion of the terminal is detected using a sensor, and a region to display in the pre-rendered regions is changed according to the motion. | 12-30-2010 |
20100328432 | IMAGE REPRODUCING APPARATUS, IMAGE CAPTURING APPARATUS, AND CONTROL METHOD THEREFOR - An image reproducing apparatus for reproducing a stereoscopic image shot by a stereoscopic image capturing apparatus, the image reproducing apparatus comprises: an input unit which inputs image data of the stereoscopic image and additional data recorded in association with the image data; an acquisition unit which acquires depth information indicating a depth of a point of interest in the stereoscopic image set, during shooting, on the basis of the additional data; a generation unit which generates images to be superimposed on right and left images of the stereoscopic image, the images to be superimposed having parallax corresponding to the depth indicated by the depth information, on the basis of the depth information; and a display output unit which combines the right and left images of the stereoscopic image with the images to be superimposed, and outputs the combined right and left images of the stereoscopic image to a display apparatus. | 12-30-2010 |
20100328433 | Method and Devices for 3-D Display Based on Random Constructive Interference - The present invention relates to a method and an apparatus for 3-D display based on random constructive interference. It produces a number of discrete secondary light sources by using an amplitude-phase-modulator-array, which helps to create 3-D images by means of constructive interference. Next it employs a random-secondary-light-source-generator-array to shift the position of each secondary light source to a random place, eliminating multiple images due to high order diffraction. It could be constructed with low resolution liquid crystal screens to realize large size real-time color 3-D display, which could widely be applied to 3-D computer or TV screens, 3-D human-machine interaction, machine vision, and so on. | 12-30-2010 |
20100328434 | Microscope apparatus and cell culture apparatus - An imaging section of a microscope apparatus captures a plurality of microscope images, each at a different focal position in the same field, with a light flux having passed through a microscopic optical system. A region separating section separates a cellular region from a non-cellular region by using the plurality of the microscope images. A focusing position calculating section finds a focusing position for a target pixel included in the cellular region based on a brightness change at the same position across the plurality of the microscope images. A three dimensional information generating section generates three dimensional information of a cultured cell based on a position of the cellular region and the focusing position of the target pixel. | 12-30-2010 |
20110001793 | THREE-DIMENSIONAL SHAPE MEASURING APPARATUS, INTEGRATED CIRCUIT, AND THREE-DIMENSIONAL SHAPE MEASURING METHOD - It is possible to perform three-dimensional shape measurement with easy processing, regardless of whether an object is moving or not. | 01-06-2011 |
20110001794 | SYSTEM AND METHOD FOR SHAPE CAPTURING AS USED IN PROSTHETICS, ORTHOTICS AND PEDORTHICS - Systems and methods are provided to capture a digital 3D image of a portion of a subject's body. The systems and methods may be effective under static or dynamic conditions, either under the weight of a load or under non-weighted conditions. The system includes a grid of intersecting, flexible fibers arranged so as to achieve a variable surface contour. The surface contour of the grid conforms to and matches the surface contour of the subject when the grid covers a portion of the subject (i.e. residual limb, or deformity to correct). The coordinates of each point of intersection of two or more flexible fibers of the grid is recorded and produces a signal that generates a digital 3D image corresponding to the surface contour of the subject. The method includes covering a portion of the subject (i.e. residual limb or deformity to correct) with a grid of intersecting, flexible fibers and generating a digital three dimensional image. Optionally, the method may further include using the digital 3D image to create a prosthesis, orthosis, or foot orthosis having a surface contour matching the surface contour of the subject. | 01-06-2011 |
20110001795 | FIELD MONITORING SYSTEM USING A MOBIL TERMINAL - The invention relates to a field monitoring system using a mobile terminal, the system comprising: a mobile terminal, which transmits context information and receives 3D image information corresponding to said context information, the context information consisting of audio-video information generated by photographing images of the field and by recording sounds in the field, and location information obtained by applying sensed signals from an accelerometer and from a gyroscope sensor to a GPS signal including latitude, longitude and time; and a control server which receives the context information, matches location information of the context information with a pre-stored map or architectural drawing information to generate 3D image information for the current location of the mobile terminal, and transmits the generated information to the mobile terminal. Therefore, by using a wireless terminal that utilizes a commercial communication module, GPS and INS, the invention presents advantages in finding the location of each personnel member dispatched even to a wide-area disaster, and in photographing any unexpected accident, situation, or blind spot. | 01-06-2011 |
20110007135 | Image processing device, image processing method, and program - An image processing apparatus includes: an imaging unit configured to generate an imaged image by imaging a subject; a depth information generating unit configured to generate depth information relating to the imaged image; an image processing unit configured to extract, from the imaged image, an image of an object region including a particular subject out of subjects included in the imaged image and a surrounding region of the subject, based on the depth information, and generate a difference image to display a stereoscopic image in which the subjects included in the imaged image are viewed stereoscopically based on the extracted image; and a recording control unit configured to generate a data stream in which data corresponding to the imaged image and data corresponding to the difference image are correlated, and record the data stream as a moving image file. | 01-13-2011 |
20110007136 | Image signal processing apparatus and image display - An image signal processing apparatus and an image display which are allowed to achieve stereoscopic image display with a more natural sense of depth are provided. The image signal processing apparatus includes: a first motion vector detection section and an information obtaining section. The first motion vector detection section detects one or more two-dimensional motion vectors as motion vectors along an X-Y plane of an image, from an image-for-left-eye and an image-for-right-eye which have a parallax therebetween. The information obtaining section obtains, based on the detected two-dimensional motion vectors, information pertaining to a Z-axis direction. The Z-axis direction is a depth direction in a stereoscopic image formed with the image-for-left-eye and the image-for-right-eye. | 01-13-2011 |
20110018967 | RECORDING OF 3D IMAGES OF A SCENE - A method of recording 3D images of a scene based on the time-of-flight principle comprises illuminating a scene by emitting light carrying an intensity modulation, imaging the scene onto a pixel array using an optical system, detecting, in each pixel, intensity-modulated light reflected from the scene onto the pixel and determining, for each pixel, a distance value based on the phase of light detected in the pixel. The determination of the distance values comprises a phase-sensitive de-convolution of the scene imaged onto the pixel array such that phase errors induced by light spreading in the optical system are compensated for. | 01-27-2011 |
20110025823 | Three-dimensional measuring apparatus - A three-dimensional measuring apparatus includes a measurement stage on which an object is placed, a reference scale member having a plurality of reference points, an imaging unit, a driving mechanism, a high brightness detecting unit, and a three-dimensional measuring unit. The imaging unit captures an optical image of the object and the optical images of the plurality of reference points in the same field of view. The high brightness detecting unit detects the brightest portion of the object at each of N relative movement positions of the imaging unit and detects a reference point indicating the maximum brightness among the plurality of reference points, from a plurality of images that is continuously captured by the imaging unit. The three-dimensional measuring unit sets the height of the brightest portion at each of the relative movement positions to a height associated with the detected reference point. | 02-03-2011 |
20110025824 | MULTIPLE EYE PHOTOGRAPHY METHOD AND APPARATUS, AND PROGRAM - First macro photography is performed with each of the imaging systems focused on a main subject to obtain first images. Second photography is performed with one of the plurality of imaging systems focused on a position farther away than the main subject to obtain a second image. Processing is performed on each of the first images to transparentize the area other than the main subject, and each of the transparentized first images is combined with the area other than the main subject of the second image to generate a combined image corresponding to each of the imaging systems. | 02-03-2011 |
20110025825 | METHODS, SYSTEMS, AND COMPUTER-READABLE STORAGE MEDIA FOR CREATING THREE-DIMENSIONAL (3D) IMAGES OF A SCENE - Methods, systems, and computer program products for generating three-dimensional images of a scene are disclosed herein. According to one aspect, a method includes receiving a plurality of images of a scene. The method also includes determining attributes of the plurality of images. Further, the method includes determining, based on the attributes, a pair of images from among the plurality of images for use in creating a three-dimensional image. A user can, by use of the subject matter disclosed herein, use an image capture device for capturing a plurality of different images of the same scene and for converting the images into a three-dimensional, or stereoscopic, image of the scene. The subject matter disclosed herein includes a three-dimensional conversion process. The conversion process can include identification of suitable pairs of images, registration, rectification, color correction, transformation, depth adjustment, and motion detection and removal. | 02-03-2011 |
20110032334 | PREPARING VIDEO DATA IN ACCORDANCE WITH A WIRELESS DISPLAY PROTOCOL - In general, techniques are described for preparing video data in accordance with a wireless display protocol. For example, a portable device comprising a module to store video data, a wireless display host module and a wireless interface may implement the techniques of this disclosure. The wireless display host module determines one or more display parameters of a three-dimensional (3D) display device external from the portable device and prepares the video data to generate 3D video data based on the determined display parameters. The wireless interface then wirelessly transmits the 3D video data to the external 3D display device. In this way, a portable device implements the techniques to prepare video data in accordance with a wireless display protocol. | 02-10-2011 |
20110032335 | VIDEO STEREOMICROSCOPE - A video stereomicroscope includes a main objective. | 02-10-2011 |
20110032336 | BODY SCAN - A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan. | 02-10-2011 |
20110037831 | METHOD FOR DETERMINING A THREE-DIMENSIONAL REPRESENTATION OF AN OBJECT USING POINTS, AND CORRESPONDING COMPUTER PROGRAM AND IMAGING SYSTEM - The method of the invention includes: determining a set of points of a space and a value of each of these points at a given moment, the set of points including the points of the object in the position thereof at the given moment; selecting a three-dimensional representation function that can be parameterized with parameters and an operation that gives, using the three-dimensional representation function, a function for estimating the value of each point in the space; and determining parameters, such that, for each point in the set, the estimation of the value of the point substantially gives the value of the point. | 02-17-2011 |
20110037832 | Defocusing Feature Matching System to Measure Camera Pose with Interchangeable Lens Cameras - A lens and aperture device for determining 3D information. An SLR camera has a lens and aperture that allows the SLR camera to determine defocused information. | 02-17-2011 |
20110037833 | METHOD AND APPARATUS FOR PROCESSING SIGNAL FOR THREE-DIMENSIONAL REPRODUCTION OF ADDITIONAL DATA - A method and apparatus for processing a signal, including: extracting three-dimensional (3D) reproduction information for reproducing a subtitle, which is reproduced with a video image, in 3D, from additional data for generating the subtitle; and reproducing the subtitle in 3D by using the additional data and the 3D reproduction information. | 02-17-2011 |
20110043609 | APPARATUS AND METHOD FOR PROCESSING A 3D IMAGE - Provided are an apparatus and a method for processing a three dimensional image. The apparatus for processing the three dimensional image includes a lattice pattern projection unit configured to project a reference lattice pattern onto a photographic object; a photographing unit configured to generate image information by photographing the photographic object onto which the reference lattice pattern is projected; a depth information extraction unit configured to extract depth information of the photographic object based on a comparison between the reference lattice pattern projected onto the photographic object and a modified lattice pattern included in the image information; and a left and right eye information generation unit configured to generate left and right eye information in correspondence with the depth information, the left and right eye information containing three dimensional information of the photographic object. | 02-24-2011 |
20110043610 | THREE-DIMENSIONAL FACE CAPTURING APPARATUS AND METHOD AND COMPUTER-READABLE MEDIUM THEREOF - Disclosed is a 3D face capturing apparatus, method and computer-readable medium. As an example, the 3D face capturing method includes obtaining a face color image, obtaining a face depth image, aligning, by a computer, the face color image and the face depth image, obtaining, by the computer, a 3D face model by 2D modeling of the face color image and covering a modeled 2D face area on an image output by an image alignment module, removing, by the computer, depth noise of the 3D face model, and obtaining, by the computer, an accurate 3D face model by aligning the 3D face model and a 3D face template, and removing residual noise based on a registration between the 3D face model and the 3D face template. | 02-24-2011 |
20110043611 | DEPTH AND LATERAL SIZE CONTROL OF THREE-DIMENSIONAL IMAGES IN PROJECTION INTEGRAL IMAGING - A method disclosed herein relates to displaying three-dimensional images. The method comprises projecting integral images to a display device and displaying three-dimensional images with the display device. Further disclosed herein is an apparatus for displaying orthoscopic 3-D images. The apparatus comprises a projector for projecting integral images and a micro-convex-mirror array for displaying the projected images. | 02-24-2011 |
20110050854 | IMAGE PROCESSING DEVICE AND PSEUDO-3D IMAGE CREATION DEVICE - The present invention provides an apparatus that includes a color and polarization image capturing section. | 03-03-2011 |
20110058021 | RENDERING MULTIVIEW CONTENT IN A 3D VIDEO SYSTEM - Disclosed are methods and apparatuses for multi-view video encoding, decoding and display. A depth map is provided for each of the available views. The depth maps of the available views are used to synthesize a target view for rendering an image from the perspective of the target view based on images of the available views. | 03-10-2011 |
20110058022 | APPARATUS AND METHOD FOR PROCESSING IMAGE DATA IN PORTABLE TERMINAL - An apparatus and a method for processing image data in a portable terminal are provided. More specifically, an apparatus and a method are provided for reducing computation in similar block estimation of left and right image data when an image is compressed or a stereoscopic image is processed. The apparatus includes a camera for obtaining two or more image data; and a similar block detector for converting the obtained image data to binary data and estimating block similarity by comparing the converted binary data. | 03-10-2011 |
20110058023 | SYSTEM AND METHOD FOR STRUCTURED LIGHT ILLUMINATION WITH FRAME SUBWINDOWS - A structured light imaging (SLI) system includes a projection system, image sensor system and processing module. The projection system is operable to project a single SLI pattern into an imaging area. The image sensor system is operable to capture images of a 3D object moving through the imaging area. The image sensor system outputs a subset of camera pixels corresponding to a subwindow of a camera frame to generate subwindow images. The processing module is operable to create a 3D surface map of the 3D object based on the subwindow images. | 03-10-2011 |
20110063416 | 3D IMAGING DEVICE AND METHOD FOR MANUFACTURING SAME - The invention concerns a 3D imaging device comprising a photodetector. | 03-17-2011 |
20110069154 | HIGH SPEED, HIGH RESOLUTION, THREE DIMENSIONAL SOLAR CELL INSPECTION SYSTEM - An optical inspection system and method are provided. A workpiece transport moves a workpiece in a nonstop manner. An illuminator includes a light pipe and is configured to provide first and second strobed illumination field types. First and second arrays of cameras are arranged to provide stereoscopic imaging of the workpiece. The first array of cameras is configured to generate a first plurality of images of the workpiece with the first illumination field and a second plurality of images of the feature with the second illumination field. The second array of cameras is configured to generate a third plurality of images of the workpiece with the first illumination field and a fourth plurality of images of the feature with the second illumination field. A processing device stores at least some of the first, second, third, and fourth pluralities of images and provides the images to another device. | 03-24-2011 |
20110074925 | METHOD AND SYSTEM FOR UTILIZING PRE-EXISTING IMAGE LAYERS OF A TWO-DIMENSIONAL IMAGE TO CREATE A STEREOSCOPIC IMAGE - Implementations of the present invention involve methods and systems for converting a 2-D multimedia image to a 3-D multimedia image by utilizing a plurality of layers of the 2-D image. The layers may comprise one or more portions of the 2-D image and may be digitized and stored in a computer-readable database. The layers may be reproduced as a corresponding left eye and right eye version of the layer, including a pixel offset corresponding to a desired 3-D effect for each layer of the image. The combined left eye layers and right eye layers may form the composite right eye and composite left eye images for a single 3-D multimedia image. Further, this process may be applied to each frame of an animated feature film to convert the film from 2-D to 3-D. | 03-31-2011 |
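The layer-offset idea above can be sketched in one dimension: each layer is shifted by plus or minus its pixel offset to form the left-eye and right-eye views, then the layers are composited back-to-front. A minimal sketch, with assumed names and a single row of pixels standing in for a full layer:

```python
def shift(row, offset, fill=None):
    # Shift a 1-D row of pixels right (offset > 0) or left (offset < 0),
    # padding the vacated pixels with `fill` (None = transparent).
    if offset >= 0:
        return [fill] * offset + row[:len(row) - offset]
    return row[-offset:] + [fill] * (-offset)

def composite(layers):
    # Composite layers back-to-front; None pixels are transparent.
    out = [None] * len(layers[0])
    for layer in layers:
        out = [p if p is not None else q for p, q in zip(layer, out)]
    return out

def make_stereo(layers_with_offsets):
    # Mirrored shifts for the two eyes; a larger offset pops the layer out.
    left  = composite([shift(l, +d) for l, d in layers_with_offsets])
    right = composite([shift(l, -d) for l, d in layers_with_offsets])
    return left, right

background = [1] * 8                                     # offset 0, screen depth
sprite = [None, None, None, 9, 9, None, None, None]      # offset 2, pops out
left, right = make_stereo([(background, 0), (sprite, 2)])
print(left)   # [1, 1, 1, 1, 1, 9, 9, 1]
print(right)  # [1, 9, 9, 1, 1, 1, 1, 1]
```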
20110074926 | SYSTEM AND METHOD FOR CREATING 3D VIDEO - A system and method for generating 3D video from a plurality of 2D video streams is provided. A video capture device for capturing video to be transformed into 3D video includes a camera module for capturing a two-dimensional (2D) video stream, a location module for determining a location of the video capture device, an orientation module for determining an orientation of the video capture device, and a processing module for associating additional information with the 2D video stream captured by the camera module, the additional information including the orientation of the video capture device and the location of the video capture device. | 03-31-2011 |
20110074927 | METHOD FOR DETERMINING EGO-MOTION OF MOVING PLATFORM AND DETECTION SYSTEM - A method for determining ego-motion of a moving platform and a system thereof are provided. The method includes: using a first lens to capture a first and a second left image at a first and a second time, and using a second lens to capture a first and a second right image; segmenting the images into first left image areas, first right image areas, second left image areas, and second right image areas; comparing the first left image areas and the first right image areas, the second left image areas and the second right image areas, and the first right image areas and the second right image areas, so as to find plural common areas; selecting N feature points in the common areas to calculate depth information at the first and the second time, and determining the ego-motion of the moving platform between the first time and the second time. | 03-31-2011 |
20110080470 | VIDEO REPRODUCTION APPARATUS AND VIDEO REPRODUCTION METHOD - According to one embodiment, a video reproduction apparatus includes an image generator, a motion recognizer, a marker generator and an image synthesizer. The image generator is configured to generate a first pair of images with a difference in visual field for an operational object. The motion recognizer is configured to recognize three-dimensional gesture of a user. The marker generator is configured to identify three-dimensional designated coordinates based on the gesture recognized by the motion recognizer, and generate a second pair of images with a difference in visual field for a marker corresponding to the designated coordinates. The image synthesizer is configured to synthesize the first pair of images with the second pair of images to generate a third pair of images. | 04-07-2011 |
20110080471 | Hybrid method for 3D shape measurement - A method for three-dimensional shape measurement provides for generating sinusoidal fringe patterns by defocusing binary patterns. A method for three-dimensional shape measurement may include (a) projecting a plurality of binary patterns onto at least one object; (b) projecting three phase-shifted fringe patterns onto the at least one object; (c) capturing images of the at least one object with the binary patterns and the phase-shifted fringe patterns; (d) obtaining codewords from the binary patterns; (e) calculating a wrapped phase map from the phase-shifted fringe patterns; (f) applying the codewords to the wrapped phase map to produce an unwrapped phase map; and (g) computing coordinates using the unwrapped phase map for use in the three-dimensional shape measurement of the at least one object. A system for performing the method is also provided. The high-speed real-time 3D shape measurement may be used in numerous applications including medical science, biometrics, and entertainment. | 04-07-2011 |
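The wrapped-phase step (e) above is standard three-step phase shifting. A minimal sketch, assuming 120-degree phase shifts and a codeword k giving the fringe order (the specific intensity convention is an assumption, not taken from the patent):

```python
import math

def wrapped_phase(i1, i2, i3):
    # Three-step phase shifting with 120-degree shifts:
    #   I_n = A + B*cos(phi + (n - 2)*2*pi/3)
    # recovers the wrapped phase phi in (-pi, pi].
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

def unwrap(phi, k):
    # The codeword k (fringe order decoded from the binary patterns)
    # removes the 2*pi ambiguity of the wrapped phase.
    return phi + 2.0 * math.pi * k

# Synthesize intensities for a known phase and recover it.
phi_true, a, b = 1.0, 128.0, 100.0
i1 = a + b * math.cos(phi_true - 2.0 * math.pi / 3.0)
i2 = a + b * math.cos(phi_true)
i3 = a + b * math.cos(phi_true + 2.0 * math.pi / 3.0)
print(round(wrapped_phase(i1, i2, i3), 6))  # 1.0
```

The binary patterns supply k per pixel, so the unwrapped phase map in step (f) is simply `unwrap(phi, k)` applied pixelwise.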
20110090313 | MULTI-EYE CAMERA AND METHOD FOR DISTINGUISHING THREE-DIMENSIONAL OBJECT - A stereo camera captures a pair of R and L viewpoint images. Upon a half press of a shutter release button, a preliminary photographing procedure is carried out. A binary image generator applies binary processing to each image, and a shadow extracting section extracts a shadow of a main subject from each binary image. A size calculating section calculates a size of each shadow, and a difference calculating section calculates a difference in size of the shadow between the images. If an absolute value of the difference is a size difference threshold value or more, the main subject is distinguished as a three-dimensional object suited to a 3D picture mode. Otherwise, the main subject is distinguished as a printed sheet suited to a 2D picture mode. Upon a full press of the shutter release button, an actual photographing procedure is carried out in the established 3D or 2D picture mode. | 04-21-2011 |
20110090314 | METHOD AND APPARATUS FOR GENERATING STREAM AND METHOD AND APPARATUS FOR PROCESSING STREAM - Provided are a method and apparatus for generating a stream, and a method and apparatus for processing of the stream. The method of generating the stream includes: generating an elementary stream including three-dimensional (3D) image data providing a 3D image, and 3D detail information for reproducing the 3D image; generating a section including 3D summary information representing that a transport stream to be generated from the elementary stream provides the 3D image; and generating the transport stream with respect to the section and the elementary stream. | 04-21-2011 |
20110090315 | Capturing device, image processing method, and program - A capturing device includes a display section that changes between and displays a three-dimensional (3D) image and a two-dimensional (2D) image, and a controller that performs an image display control for the display section, wherein the controller changes a display mode of an image displayed on the display section, from a 3D image display to a 2D image display, in accordance with preset setting information, at the time of performing a focus control process. | 04-21-2011 |
20110090316 | METHOD AND APPARATUS FOR 3-D IMAGING OF INTERNAL LIGHT SOURCES - The present invention provides systems and methods for obtaining a three-dimensional (3D) representation of one or more light sources inside a sample, such as a mammal. Mammalian tissue is a turbid medium, meaning that photons are both absorbed and scattered as they propagate through tissue. In the case where scattering is large compared with absorption, such as red to near-infrared light passing through tissue, the transport of light within the sample is described by diffusion theory. Using imaging data and computer-implemented photon diffusion models, embodiments of the present invention produce a 3D representation of the light sources inside a sample, such as a 3D location, size, and brightness of such light sources. | 04-21-2011 |
20110096148 | IMAGING INSPECTION DEVICE - In a method for recording a thermographic inspection image ( | 04-28-2011 |
20110102545 | UNCERTAINTY ESTIMATION OF PLANAR FEATURES - In one embodiment, a method comprises generating three-dimensional (3D) imaging data for an environment using an imaging sensor, extracting an extracted plane from the 3D imaging data, and estimating an uncertainty of an attribute associated with the extracted plane. The method further comprises generating a navigation solution using the attribute associated with the extracted plane and the estimate of the uncertainty of the attribute associated with the extracted plane. | 05-05-2011 |
20110102546 | DISPERSED STORAGE CAMERA DEVICE AND METHOD OF OPERATION - A distributed storage network contains a user device that has a computing core, a DSN interface and either an integrated or an externally-connected camera or sensor. The camera or sensor collects data from its surrounding environment and processes the data at least partially through error coding dispersal storage functions that include slicing the data into a plurality of error-coded data slices. The error-coded data slices are output by the user device for one or more of storage within a dispersed storage network (DSN) memory, playing on a destination player, or broadcast consumption over a network. A storage integrity unit manages the storage capacity, use, and/or throughput of the system to ensure that data is processing in a real time or near real time so that data can be accurately processed and perceived by targeted end users. | 05-05-2011 |
20110102547 | Three-Dimensional Image Sensors and Methods of Manufacturing the Same - Image sensors include three-dimensional (3D) color image sensors having an array of sensor pixels therein. A 3-D color image sensor may include a 3-D image sensor pixel having a plurality of color sensors and a depth sensor therein. The plurality of color sensors may include red, green and blue sensors extending adjacent the depth sensor. A rejection filter is also provided. This rejection filter, which extends opposite a light receiving surface of the 3-D image sensor pixel, is configured to be selectively transparent to visible and near-infrared light relative to far-infrared light. The depth sensor may also include an infrared filter that is selectively transparent to near-infrared light having wavelengths greater than about 700 nm relative to visible light. | 05-05-2011 |
20110102548 | MOBILE TERMINAL AND METHOD FOR CONTROLLING OPERATION OF THE MOBILE TERMINAL - A mobile terminal and a method for controlling the operation of the same are provided. In the method, a screen including a preview image of a camera is displayed on a display module. Then, a preview window is set in a region of the screen and a predictive image of a three-dimensional (3D) stereoscopic image, which can be generated using images of a subject corresponding to the preview image, is displayed on the preview window. Thus, when images are captured to obtain a 3D stereoscopic image, the user can easily operate the camera using such a 3D stereoscopic image preview function. | 05-05-2011 |
20110102549 | THREE-DIMENSIONAL DIGITAL MAGNIFIER OPERATION SUPPORTING SYSTEM - A system is provided that simulates how changes in the state of a subject in real space, and in the state of the image-taking space, affect three-dimensional computer graphics that are composed with and fixed to the subject. A similar surface polygon model is selected, according to a shape pattern, from surface polygon models obtained by three-dimensionally measuring the subject image existing in the same space. A tracking process is performed on the computer graphics, following the relative position changes of the subject and the camera occurring in the real three-dimensional space, and the subject in the camera's field of view and the virtual three-dimensional computer graphics image are unified and displayed by rendering a computer graphics image having the same relative position change on the image. | 05-05-2011 |
20110102550 | 3D IMAGING SYSTEM - An apparatus and method for computing a three dimensional model of a surface of an object are disclosed. At least one directional energy source ( | 05-05-2011 |
20110102551 | IMAGE GENERATION DEVICE AND IMAGE GENERATION METHOD - Provided is an image generation device generating a high-quality image of an object under a pseudo light source at any desired position, based on geometric parameters generated from a low-quality image of the object. The image generation device includes: a geometric parameter calculation unit ( | 05-05-2011 |
20110109724 | BODY SCAN - A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan. | 05-12-2011 |
20110115883 | Method And System For Adaptive Viewport For A Mobile Device Based On Viewing Angle - A 2D and/or 3D video processing device comprising a camera and a display captures images of a viewer as the viewer observes displayed 2D and/or 3D video content in a viewport. Face and/or eye tracking of viewer images is utilized to generate a different viewport. Current and different viewports may comprise 2D and/or 3D video content from a single source or from different sources. The sources of 2D and/or 3D content may be scrolled, zoomed and/or navigated through for generating the different viewport. Content for the different viewport may be processed. Images of a viewer's positions, angles and/or movements of face, facial expression, eyes and/or physical gestures are captured by the camera and interpreted by face and/or eye tracking. The different viewport may be generated for navigating through 3D content and/or for rotating a 3D object. The 2D and/or 3D video processing device communicates via wire, wireless and/or optical interfaces. | 05-19-2011 |
20110115884 | INFORMATION PROCESSING APPARATUS, METHOD, PROGRAM AND RECORDING MEDIUM - A data structure, recording medium, playing device, playing method, program, and program storing medium, which enable provision of a video format suitable for 3D display of captions and menu buttons. Caption data used for 2D display of captions and menu data used for 2D display of menu buttons are recorded on a disc together with a database of offset information. The offset information comprises an offset direction, representing the direction in which the left-eye image and right-eye image used for 3D display are shifted relative to the 2D-display images of the caption data and menu data, and an offset value representing the amount of shifting, each correlated with the playing point-in-time of the caption data and menu data, respectively. | 05-19-2011 |
20110122228 | THREE-DIMENSIONAL VISUAL SENSOR - A perspective transformation is performed on a three-dimensional model and a model coordinate system indicating a reference attitude of the three-dimensional model to produce a projection image expressing the relationship between the model and the model coordinate system, and a work screen is started up. A coordinate of an origin in the projection image and rotation angles about an X-axis, a Y-axis, and a Z-axis are displayed in work areas on the screen to accept a manipulation to change the coordinate and the rotation angles. The display of the projection image is changed by the manipulation. When an OK button is pressed, the coordinate and rotation angles are fixed, and the model coordinate system is changed based on them. A coordinate of each constituent point of the three-dimensional model is transformed into a coordinate in the post-change model coordinate system. | 05-26-2011 |
20110128352 | FAST 3D-2D IMAGE REGISTRATION METHOD WITH APPLICATION TO CONTINUOUSLY GUIDED ENDOSCOPY - A novel framework for fast and continuous registration between two imaging modalities is disclosed. The approach makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame-rates in order to localize the cameras and register the two sources. A disclosed example includes computing or capturing a set of reference images within a known environment, complete with corresponding depth maps and image gradients. The collection of these images and depth maps constitutes the reference source. The second source is a real-time or near-real time source which may include a live video feed. Given one frame from this video feed, and starting from an initial guess of viewpoint, the real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image. The viewpoint is updated via a Gauss-Newton parameter update and certain of the steps are repeated for each frame until the viewpoint converges or the next video frame becomes available. The final viewpoint gives an estimate of the relative rotation and translation between the camera at that particular video frame and the reference source. The invention has far-reaching applications, particularly in the field of assisted endoscopy, including bronchoscopy and colonoscopy. Other applications include aerial and ground-based navigation. | 06-02-2011 |
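The Gauss-Newton viewpoint update in the entry above can be illustrated in one dimension: this sketch estimates a pure translation between two signals, a stand-in for the full rigid-transform case. The function names, the Gaussian test signals, and the reduction to one parameter are all illustrative assumptions, not the patent's method.

```python
import numpy as np

def gauss_newton_shift(ref, live, iters=50):
    # Estimate the translation p such that live(x + p) ~ ref(x),
    # i.e. warp the "live frame" toward the reference image.
    x = np.arange(len(ref), dtype=float)
    p = 0.0
    for _ in range(iters):
        warped = np.interp(x + p, x, live)   # warp live toward ref
        r = ref - warped                     # image difference (residual)
        j = np.gradient(warped)              # Jacobian of warped w.r.t. p
        p += (j @ r) / (j @ j)               # Gauss-Newton parameter update
    return p

x = np.arange(100, dtype=float)
ref = np.exp(-(x - 50.0) ** 2 / 50.0)
live = np.exp(-(x - 53.0) ** 2 / 50.0)       # ref shifted right by 3 pixels
print(gauss_newton_shift(ref, live))         # converges near 3.0
```

In the full system the scalar p becomes a six-parameter rigid transform and the warp uses the reference depth maps, but the residual/Jacobian/update loop has the same shape.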
20110134221 | OBJECT RECOGNITION SYSTEM USING LEFT AND RIGHT IMAGES AND METHOD - Disclosed herein are an object recognition system and method which extract left and right feature vectors from a stereo image of an object, find feature vectors which are present in both the extracted left and right feature vectors, compare information about the left and right feature vectors and the feature vectors present in both the extracted left and right feature vectors with information stored in a database, extract information of the object, and recognize the object. | 06-09-2011 |
20110134222 | Rolling Camera System - A 3D imaging system comprising: first and second rolling shutter photosurfaces having pixels; a first shutter operable to gate on and off the first photosurface; a light source controllable to transmit a train of light pulses to illuminate a scene; a controller that controls the first shutter to gate the first photosurface on and off responsive to transmission times of the light pulses and opens and closes bands of pixels in the photosurfaces to register light reflected from the light pulses by the scene that reach the 3D imaging system during periods when the first photosurface is gated on; and a processor that determines distances to the scene responsive to amounts of light registered by pixels in the photosurfaces. | 06-09-2011 |
20110134223 | METHOD AND A SYSTEM FOR REDUCING ARTIFACTS - A method for preparing an article of lenticular imaging. The method comprises receiving a plurality of source images, superimposing at least one deghosting element on the plurality of source images, the deghosting element being formed to reduce an estimated ghosting artifact from the article, interlacing the plurality of processed source images so as to form a spatially multiplexed image, and preparing the article by attaching an optical element to the spatially multiplexed image. | 06-09-2011 |
20110141237 | DEPTH MAP GENERATION FOR A VIDEO CONVERSION SYSTEM - In accordance with at least some embodiments of the present disclosure, a process for generating a depth map for converting a two-dimensional (2D) image to a three-dimensional (3D) image is described. The process may include generating a depth gradient map from the 2D image, wherein the depth gradient map is configured to associate one or more edge counts with one or more depth values, extracting an image component from the 2D image, wherein the image component is associated with a color component in a color space, determining a set of gains to adjust the depth gradient map based on the image component, and generating the depth map by performing depth fusion based on the depth gradient map and the set of gains. | 06-16-2011 |
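One way to read the depth-gradient step above is: accumulate per-row edge counts from the top of the frame down and map the accumulation to depth values, so that heavily textured lower regions read as nearer. This is an illustrative sketch of that association only; the normalization and the frame layout are assumptions.

```python
from itertools import accumulate

def depth_gradient_map(edge_map):
    # edge_map: rows of 0/1 edge flags. Accumulate edge counts from the
    # top of the frame down and normalize to [0, 1]; rows that sit below
    # more edges receive a larger (nearer) depth value.
    counts = [sum(row) for row in edge_map]
    acc = list(accumulate(counts))
    total = acc[-1] if acc[-1] else 1
    return [c / total for c in acc]

edges = [
    [0, 0, 0, 0],   # sky: no edges
    [0, 1, 1, 0],   # horizon
    [1, 1, 1, 1],   # textured foreground
]
print(depth_gradient_map(edges))  # [0.0, 0.333..., 1.0]
```

The per-color gains described in the entry would then scale these row depths before the final depth fusion.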
20110141238 | STEREO IMAGE DATA TRANSMITTING APPARATUS, STEREO IMAGE DATA TRANSMITTING METHOD, STEREO IMAGE DATA RECEIVING APPARATUS, AND STEREO IMAGE DATA RECEIVING METHOD - [Object] To maintain perspective consistency among individual objects in an image when superimposition information is displayed. | 06-16-2011 |
20110149040 | METHOD AND SYSTEM FOR INTERLACING 3D VIDEO - A video processing device may generate and/or capture a plurality of view sequences of video frames, decimate at least some of the plurality of view sequences, and may generate a three-dimensional (3D) video stream comprising the plurality of view sequences based on that decimation. The decimation may be achieved by converting one or more of the plurality of view sequences from progressive to interlaced video. The interlacing may be performed by removing top or bottom fields in each frame of those one or more view sequences during the conversion to interlaced video. The removed fields may be selected based on corresponding conversion to interlaced video of one or more corresponding view sequences. The video processing device may determine bandwidth limitations existing during direct and/or indirect transfer or communication of the generated 3D video stream. The decimation may be performed based on this determination of bandwidth limitations. | 06-23-2011 |
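The progressive-to-interlaced decimation described above amounts to keeping one field per frame, alternating the kept field frame-to-frame and choosing complementary fields in the other view. A toy sketch with assumed names, where a frame is just a list of rows:

```python
def keep_field(frame, top):
    # A frame is a list of rows; the top field is the even rows,
    # the bottom field the odd rows.
    return frame[0::2] if top else frame[1::2]

def decimate_view(frames, start_top=True):
    # Alternate the kept field frame-to-frame; the other view is
    # decimated with start_top flipped so its fields are complementary.
    return [keep_field(f, (i % 2 == 0) == start_top)
            for i, f in enumerate(frames)]

frame = ["row0", "row1", "row2", "row3"]
left = decimate_view([frame, frame], start_top=True)
right = decimate_view([frame, frame], start_top=False)
print(left)   # [['row0', 'row2'], ['row1', 'row3']]
print(right)  # [['row1', 'row3'], ['row0', 'row2']]
```

Each decimated view carries half the pixel data, which is the bandwidth saving the entry's decimation step is after.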
20110149041 | APPARATUS AND METHOD FOR CAMERA PARAMETER CALIBRATION - The present invention provides an apparatus and method for camera parameter calibration which can easily and simply set the physical and optical characteristic parameters of a camera in order to acquire highly accurate measurement information from an image provided through the camera. The apparatus and method increase the accuracy of measurement information obtained through an image using only an intuitive interface manipulation, without a time-consuming and error-prone manual measurement procedure: a 3D space model corresponding to the real space of the image is displayed on the image, viewpoint parameters are adjusted until the 3D space model matches the displayed image, and the resulting parameters of the space model are taken as the camera parameters of the image. | 06-23-2011 |
20110149042 | METHOD AND APPARATUS FOR GENERATING A STEREOSCOPIC IMAGE - Provided is a method for generating a stereoscopic image fed back by interaction with the real world. Whereas related-art methods either have the user interact with a virtual object controlled through a separate apparatus, or form the stereoscopic image without interaction with the objects in the user space, the present invention feeds back the interaction between the objects in the virtual space and all the objects in the user space, including the users themselves, to the video reproducing system, implementing a system for re-processing and reproducing the stereoscopic image and thereby making it possible to produce realistic stereoscopic images. | 06-23-2011 |
20110149043 | DEVICE AND METHOD FOR DISPLAYING THREE-DIMENSIONAL IMAGES USING HEAD TRACKING - Disclosed herein are a device and method for displaying 3D images. The device includes an image processing unit for calculating the location of a user relative to a reference point and outputting a 3D image which is obtained by performing image processing on 3D content sent by a server based on the calculated location of the user, the image processing corresponding to a viewpoint of the user, and a display unit for displaying the 3D image output by the image processing unit to the user. The method includes calculating the location of a user relative to a reference point, performing image processing on 3D content sent by a server from a viewpoint of the user based on the calculated location of the user, and outputting a 3D image which is obtained by the image processing, and displaying the 3D image output by the image processing unit to the user. | 06-23-2011 |
20110157311 | Method and System for Rendering Multi-View Image - A method and a system for rendering a multi-view image are provided. The method for rendering the multi-view image includes the following steps. An image capturing unit provides an original image and depth information thereof. Multiple threads of one processing unit perform a pixel rendering process and a hole filling process on at least one row of pixels of the original image according to the depth information by way of parallel processing to render at least one new-view image. View-angles of the at least one new-view image and the original image are different. Each of the threads performs a view interlacing process on at least one pixel of the original image and the at least one new-view image by way of parallel processing to render the multi-view image. | 06-30-2011 |
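The per-row pixel rendering and hole filling above can be sketched, serially for one row, as depth-image-based rendering: shift each source pixel by a depth-derived disparity, then fill disocclusion holes from the nearest already-rendered neighbor. The names, the linear disparity rule, and the lack of occlusion ordering are all simplifying assumptions.

```python
def render_row_new_view(row, depth, scale=1):
    # Pixel rendering: move each source pixel by a disparity
    # proportional to its depth value.
    out = [None] * len(row)
    for x, (pixel, d) in enumerate(zip(row, depth)):
        nx = x + int(d * scale)
        if 0 <= nx < len(out):
            out[nx] = pixel
    # Hole filling: propagate the last rendered pixel into holes.
    last = out[0] if out[0] is not None else 0
    for x in range(len(out)):
        if out[x] is None:
            out[x] = last
        else:
            last = out[x]
    return out

row = [10, 20, 30, 40, 50]
depth = [0, 0, 2, 2, 0]   # the middle pixels are nearer, so they shift
print(render_row_new_view(row, depth))  # [10, 20, 20, 20, 50]
```

Because rows are independent, this is exactly the work the entry assigns to parallel threads, one or more rows per thread; a production renderer would also resolve occlusions by writing nearer pixels last.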
20110157312 | IMAGE PROCESSING APPARATUS AND METHOD - An image processing apparatus includes: a creating means that creates identification information identifying a type of an image of image data among a 2D image for planar viewing, a 3D image for stereoscopic viewing including left and right images, and a duplex image which is capable of both stereoscopic viewing including the left and right images and planar viewing with disparity lower than that of the 3D image; and a transmitting means that transmits the identification information created by the creating means and the image data. | 06-30-2011 |
20110157313 | Stereo Image Server and Method of Transmitting Stereo Image - The present invention discloses a method of transmitting three dimensional information. The method includes providing a remote server having a three dimensional image database, and a local terminal device coupled to the remote server via a network, wherein the local terminal device includes a stereo image display. The local terminal device transmits a request command for three dimensional images to the remote server through the network; the remote server then sends the desired three dimensional images to the local terminal device through the network. | 06-30-2011 |
20110157314 | Image Processing Apparatus, Image Processing Method and Recording Medium - An image processing apparatus includes: a photographing unit that includes a fisheye lens and that captures an image through the fisheye lens; a memory unit that stores three-dimensional (3D) model information for defining a 3D space; a light source number calculation unit that calculates a number of light sources that irradiate light onto the image captured by the photographing unit and that calculates light source coordinate information that indicates positions of the light sources corresponding to the number of light sources in the image; a light source information calculation unit that calculates parameters regarding the light source in a real space as light source information about parameters in the 3D space, based on the light source coordinate information; a 3D image writing unit that writes a 3D image, based on the 3D model information and the light source information; and a display unit that displays the 3D image, thereby obtaining information about a light source that exists in a real space and displaying an image that reflects information about the light source. | 06-30-2011 |
20110157315 | INTERPOLATION OF THREE-DIMENSIONAL VIDEO CONTENT - Techniques are described herein for interpolating three-dimensional video content. Three-dimensional video content is video content that includes portions representing respective frame sequences that provide respective perspective views of a given subject matter over the same period of time. For example, the three-dimensional video content may be analyzed to identify one or more interpolation opportunities. If an interpolation opportunity is identified, frame data that is associated with the interpolation opportunity may be replaced with an interpolation marker. In another example, a frame that is not directly represented by data in the three-dimensional video content may be identified. For instance, the frame may be represented by an interpolation marker or corrupted data. The interpolation marker or corrupted data may be replaced with an interpolated representation of the frame. | 06-30-2011 |
20110157316 | IMAGE MANAGEMENT METHOD - A method for managing an image photographed by two or more image pickup devices corresponding to two or more viewpoints comprises: storing a 2D image photographed by the two or more image pickup devices with an identifier indicating that the image is two-dimensional; and storing a 3D image photographed by the two or more image pickup devices with an identifier indicating that the image is three-dimensional. Hence, a target 2D or 3D image can be quickly searched for and displayed by accessing the images on a per-folder basis. | 06-30-2011 |
20110164114 | THREE-DIMENSIONAL MEASUREMENT APPARATUS AND CONTROL METHOD THEREFOR - A three-dimensional measurement apparatus generates patterns to be projected onto the measurement object, images the measurement object using an imaging unit after projecting a plurality of types of generated patterns onto the measurement object using a projection unit, and computes the coordinate values of patterns on a captured image acquired by the imaging unit, based on the projected patterns, a geometric model of the measurement object, and information indicating the coarse position and orientation of the measurement object. Captured patterns on the captured image are corresponded with the patterns projected by the projection unit using the computed coordinate values, and the distances between the imaging unit and the patterns projected onto the measurement object are derived. The position and orientation of the measurement object are estimated using the derived distances and the geometric model of the measurement object, and the information on the coarse position and orientation is updated. | 07-07-2011 |
20110164115 | TRANSCODER SUPPORTING SELECTIVE DELIVERY OF 2D, STEREOSCOPIC 3D, AND MULTI-VIEW 3D CONTENT FROM SOURCE VIDEO - Transcoders are provided for transcoding three-dimensional content to two-dimensional content, and for transcoding three-dimensional content of a first type to three-dimensional content of another type. Transcoding of content may be performed due to user preference, display device capability, bandwidth constraints, user payment/subscription constraints, device loading, and/or for other reasons. Transcoders may be implemented in a content communication network in a media source, a display device, and/or in any device/node in between. | 07-07-2011 |
20110169915 | Structured light system - A structured light system based on a fast, linear array light modulator and an anamorphic optical system captures three-dimensional shape information at high rates and has strong resistance to interference from ambient light. A structured light system having a modulated light source offers improved signal to noise ratios. A wand permits single point detection of patterns in structured light systems. | 07-14-2011 |
20110169916 | METHOD OF DISPLAYING IMAGE AND DISPLAY APPARATUS FOR PERFORMING THE SAME - A method of displaying a three-dimensional or two-dimensional image, and a display apparatus for performing the same, are presented. First unit pixel data and second unit pixel data are generated from an input image. The first unit pixel data and the second unit pixel data are provided to first and second unit pixels to display first and second images, respectively. The first unit pixel has a wavelength range corresponding to a primary color that is different from the wavelength range of the second unit pixel corresponding to the same primary color. When a stereoscopic image is displayed, the first image for a left eye and the second image for a right eye may be selectively provided to the left eye and the right eye of an observer even though the first and second images are displayed at the same time. Thus, an afterimage may be prevented. | 07-14-2011 |
20110169917 | System And Process For Detecting, Tracking And Counting Human Objects of Interest - A system is disclosed that includes: at least one image capturing device at the entrance to obtain images; a reader device; and a processor for extracting objects of interest from the images and generating tracks for each object of interest, and for matching objects of interest with objects associated with RFID tags, and for counting the number of objects of interest associated with, and not associated with, particular RFID tags. | 07-14-2011 |
20110169918 | 3D IMAGE SENSOR AND STEREOSCOPIC CAMERA HAVING THE SAME - A three-dimensional (3D) image sensor and a stereoscopic camera having the same are provided. The 3D image sensor includes one or more image acquisition regions, each image acquisition region having a plurality of pixels; and an output signal generation controller configured to extract pixel signals from two regions of interest (ROIs) set within the image acquisition regions and output image signals based on the pixel signals, the ROIs being apart from each other. The output signal generation controller may minutely adjust the position of at least one of the ROIs in accordance with a convergence adjustment signal. The output signal generation controller may vertically or horizontally move each of the ROIs in accordance with a camera shake signal and may thus correct camera shake. The output signal generation controller may align the ROIs with left and right lenses in an optical system. The output signal generation controller may vertically or horizontally move each of the ROIs in accordance with an optical axis correction signal and may thus correct an optical axis error resulting from an optical axis misalignment. | 07-14-2011 |
20110169919 | FRAME FORMATTING SUPPORTING MIXED TWO AND THREE DIMENSIONAL VIDEO DATA COMMUNICATION - Systems and methods are provided that relate to frame formatting supporting mixed two and three dimensional video data communication. For example, frames in frame sequence(s) may be formatted to indicate that a first screen configuration is to be used for displaying first video content, that a second screen configuration is to be used for displaying second video content, and so on. The screen configurations may be different or the same. In another example, the frames in the frame sequence(s) may be formatted to indicate that the first video content is to be displayed at a first region of a screen, that the second video content is to be displayed at a second region of the screen, and so on. The regions of the screen may partially overlap, fully overlap, not overlap, be configured such that one or more regions are within one or more other regions, etc. | 07-14-2011 |
20110169920 | SMALL STEREOSCOPIC IMAGE PHOTOGRAPHING APPARATUS - Disclosed is a compact type 3D image photographing apparatus which adjusts a convergence angle with respect to a subject using two lenses in order to photograph a stereoscopic image. The compact type image photographing apparatus includes a housing; a first actuator having a first lens and disposed in the housing so as to be moved left and right; a second actuator having a second lens, disposed in the housing so as to be moved left and right, and disposed to be spaced apart from the first actuator; a left/right driving part which is disposed at each of the first and second actuators so as to move the first actuator or the second actuator left and right when power is applied; an image sensor which is disposed at each lower side of the first and second actuators so as to photograph a subject through the first and second lenses; and a control part which is disposed at a lower side of the image sensor so as to control power supplied to the left/right driving part, the first actuator and the second actuator. | 07-14-2011 |
20110169921 | METHOD FOR PERFORMING OUT-FOCUS USING DEPTH INFORMATION AND CAMERA USING THE SAME - A method for performing out-focus with a camera having a first lens and a second lens is provided, the method comprising: photographing a first image with the first lens and photographing a second image with the second lens; extracting depth information of the photographed first image and second image; and performing out-focus on the first image or the second image using the extracted depth information. | 07-14-2011 |
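The out-focus idea in the entry above — keep the subject sharp and blur regions whose extracted depth differs from the subject's depth — can be sketched in a few lines. This is a minimal grayscale illustration, not the patent's method; the 3x3 box blur and the `tolerance` cutoff are assumptions made for the sketch.

```python
def out_focus(image, depth, focus_depth, tolerance=1.0):
    """Return a copy of `image` (2-D list, grayscale) where pixels whose
    depth differs from `focus_depth` by more than `tolerance` are
    replaced by a 3x3 box blur; in-focus pixels are left untouched."""
    h, w = len(image), len(image[0])

    def blur_at(y, x):
        # Average the 3x3 neighbourhood, clipped at the image borders.
        vals = [image[j][i]
                for j in range(max(0, y - 1), min(h, y + 2))
                for i in range(max(0, x - 1), min(w, x + 2))]
        return sum(vals) / len(vals)

    return [[image[y][x] if abs(depth[y][x] - focus_depth) <= tolerance
             else blur_at(y, x)
             for x in range(w)] for y in range(h)]

# Tiny example: only the centre pixel lies outside the focal depth.
img = [[10, 10, 10], [10, 100, 10], [10, 10, 10]]
dep = [[1.0, 1.0, 1.0], [1.0, 5.0, 1.0], [1.0, 1.0, 1.0]]
out = out_focus(img, dep, focus_depth=1.0)
```

In the example, only the centre pixel (depth 5.0) is blurred; the surrounding in-focus pixels pass through unchanged.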
20110175981 | 3D COLOR IMAGE SENSOR - A 3D color image sensor and a 3D optical imaging system including the 3D color image sensor are provided. The 3D color image sensor includes a semiconductor substrate, having a plurality of first photodiodes and a plurality of second photodiodes, and a wiring layer formed under the first photodiodes and the second photodiodes. A light filter array layer is disposed on the first and the second photodiodes, having a plurality of color filter patterns and infrared (IR) light filter patterns, wherein each of the IR light filter patterns receives depth information of 3D color image of an object and corresponds to the first photodiode, and each of the color filter patterns receives color image information of 3D color image of the object and corresponds to the second photodiode. | 07-21-2011 |
20110175982 | METHOD OF FLUORESCENT NANOSCOPY - An analysis of an object dyed with fluorescent coloring agents is carried out with the aid of a fluorescent microscope which is modified for improved resolving power and called a nanoscope. The method is carried out with a microscope having an optical system for visualizing and projecting a sample image to a video camera which records and digitizes images of individual fluorescence molecules and nanoparticles at low noise, a computer for recording and processing images, a sample holder arranged in front of an object lens, a fluorescent radiation exciting source and a set of replaceable suppression filters for separating the sample fluorescent light. Separately fluorescing visible molecules and nanoparticles are periodically formed in different object parts, the laser produces the oscillation thereof which is sufficient for recording the non-overlapping images of the molecules and nanoparticles and for decoloring already recorded fluorescent molecules, wherein tens of thousands of pictures of recorded individual molecule and nanoparticle images, in the form of stains having a diameter on the order of a fluorescent light wavelength multiplied by a microscope amplification, are processed by a computer for searching the coordinates of the stain centers and building the object image according to millions of calculated stain center coordinates corresponding to the coordinates of the individual fluorescent molecules and nanoparticles. With this invention it is possible to obtain a two-dimensional and a three-dimensional image with a resolving power better than 20 nm and to record a color image by dyeing proteins, nucleic acids and lipids with different coloring agents. | 07-21-2011 |
20110175983 | APPARATUS AND METHOD FOR OBTAINING THREE-DIMENSIONAL (3D) IMAGE - An apparatus and method for obtaining a three-dimensional image. A first multi-view image may be generated using patterned light of infrared light, and a second multi-view image may be generated using non-patterned light of visible light. A first depth image may be obtained from the first multi-view image, and a second depth image may be obtained from the second multi-view image. Then, stereo matching may be performed on the first depth image and the second depth image to generate a final depth image. | 07-21-2011 |
20110175984 | METHOD AND SYSTEM OF EXTRACTING THE TARGET OBJECT DATA ON THE BASIS OF DATA CONCERNING THE COLOR AND DEPTH - Provided are a method and system for extracting a target object from a background image, the method including: generating a scalar image of differences between the object image and the background, using a lightness and a color difference between the background and current video frame; initializing a mask to have a value equal to a value for a corresponding pixel of a mask of a previous video frame, where a value of the scalar image of differences for the pixel is less than a threshold, and to have a predetermined value otherwise; clustering the scalar image of differences and the depth data; filling the mask for each pixel position in the current video frame, using a centroid of a cluster of the scalar image of differences and the depth data; and updating the background image on the basis of the filled mask and the scalar image of differences. | 07-21-2011 |
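The mask-initialization step described in the entry above is simple enough to sketch directly: reuse the previous frame's mask where the difference image is below threshold, and mark everything else with a predetermined value. This is only the initialization stage; the clustering and filling steps are omitted, and the value 255 for "undecided" pixels is an arbitrary choice for the sketch.

```python
def init_mask(diff, prev_mask, threshold, undecided=255):
    """Initialize the segmentation mask per the abstract: where the
    scalar image of differences is below `threshold`, copy the previous
    frame's mask value; otherwise set the predetermined `undecided`
    value. `diff` and `prev_mask` are 2-D lists of equal size."""
    return [[prev_mask[y][x] if diff[y][x] < threshold else undecided
             for x in range(len(diff[0]))]
            for y in range(len(diff))]

# A 1x4 strip: the first two pixels barely changed, the last two changed a lot.
mask = init_mask([[3, 4, 40, 50]], [[1, 0, 0, 1]], threshold=10)
```

Here the stable pixels keep their previous labels (1, 0) and the changed pixels are flagged for the later clustering/filling passes.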
20110181701 | Image Data Processing - A method for processing image data of a sample is disclosed. The method comprises registering first and second images of at least partially overlapping spatial regions of the sample and processing data from the registered images to obtain integrated image data comprising information about the sample, said information being additional to that available from said first and second images. | 07-28-2011 |
20110181702 | METHOD AND SYSTEM FOR GENERATING A REPRESENTATION OF AN OCT DATA SET - A method of generating a representation of an OCT data set includes obtaining the OCT data set, representing a plurality of tuples, each of which comprises values of three spatial coordinates and a value of a scattering intensity, obtaining a color image data set representing a plurality of tuples, each of which comprises values of two spatial coordinates and a color value, and generating an image data set, representing a plurality of tuples, each of which comprises values of three spatial coordinates and a color value. Generating the image data set is performed depending on an analysis of the OCT data set and an analysis of the color image data set. | 07-28-2011 |
20110181703 | INFORMATION STORAGE MEDIUM, GAME SYSTEM, AND DISPLAY IMAGE GENERATION METHOD - A game system acquires an input image from an input section that applies light to a body and receives reflected light from the body. The game system controls the size of an object in a virtual space based on a distance between the input section and the body, the distance being determined based on the input image. The game system generates a display image including the object. | 07-28-2011 |
20110187825 | SYSTEMS AND METHODS FOR PRESENTING THREE-DIMENSIONAL CONTENT USING APERTURES - Systems and methods are presented for presenting three-dimensional video content to one or more viewers. In an exemplary embodiment, a system comprises a display comprising a plurality of pixels, an opaque material interposed in a line-of-sight between the display and the viewer, and a processor coupled to the display. The opaque material comprises a plurality of apertures formed therein. The processor and the display are cooperatively configured to display right channel content on a first subset of the plurality of pixels that are viewable by a right eye of the viewer through the apertures and display left channel content on a second subset of the plurality of pixels that are viewable by a left eye of the viewer through the apertures. | 08-04-2011 |
20110187826 | FAST GATING PHOTOSURFACE - An embodiment of the invention provides a camera comprising a photosurface that is electronically turned on and off to respectively initiate and terminate an exposure period of the camera at a frequency sufficiently high so that the camera can be used to determine distances to a scene that it images without use of an external fast shutter. In an embodiment, the photosurface comprises pixels formed on a substrate and the photosurface is turned on and turned off by controlling voltage to the substrate. In an embodiment, the substrate pixels comprise light sensitive pixels, also referred to as “photopixels”, in which light incident on the photosurface generates photocharge, and storage pixels that are insensitive to light that generates photocharge in the photopixels. In an embodiment, the photosurface is controlled so that the storage pixels accumulate and store photocharge substantially upon its generation during an exposure period of the photosurface. | 08-04-2011 |
20110187827 | METHOD AND APPARATUS FOR CREATING A STEREOSCOPIC IMAGE - A method of creating a stereoscopic image for display comprising the steps of: receiving a first image and a second image of the same scene captured from the same location, the second image being displaced from the first image by an amount; and transforming the second image such that at least some of the second image is displaced from the first image by a further amount; and outputting the first image and the transformed second image for stereoscopic display is disclosed. A corresponding apparatus is also disclosed. | 08-04-2011 |
20110187828 | APPARATUS AND METHOD FOR OBTAINING 3D LOCATION INFORMATION - An apparatus to obtain 3D location information from an image using a single camera or sensor includes a first table, in which the numbers of pixels are recorded according to the distance of a reference object. Using the prepared first table and a determined focal distance, a second table is generated in which the number of pixels is recorded according to the distance of a target object. Distance information is then calculated according to the detected number of pixels with reference to the second table. A method for obtaining 3D location information includes detecting a number of pixels of a target object from a first image, generating tables including numbers of pixels according to distance, detecting a central pixel and a number of pixels of the target object from a second image, and estimating two-dimensional location information and one-dimensional distance of the target object from the tables and pixel information. | 08-04-2011 |
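The pixel-count-versus-distance tables in the entry above can be sketched with a simple pinhole assumption: apparent size falls off inversely with distance. The inverse-law model and the nearest-entry lookup are assumptions standing in for the patent's measured tables and its interpolation details.

```python
def build_table(ref_size_px, ref_distance, distances):
    """Record the expected pixel count at each distance.  Under a pinhole
    model (an assumption; the patent's tables are derived from a
    reference object and focal distance), size scales as 1/distance."""
    return {d: ref_size_px * ref_distance / d for d in distances}

def estimate_distance(table, observed_px):
    """Return the table distance whose recorded pixel count is closest
    to the observed count (nearest entry; interpolating between the two
    bracketing entries would refine the estimate)."""
    return min(table, key=lambda d: abs(table[d] - observed_px))

# A reference object spans 100 px at 1 m; build the table out to 4 m.
table = build_table(100, 1.0, [1.0, 2.0, 4.0])
dist = estimate_distance(table, 48)   # a target spanning 48 px
```

A target spanning 48 pixels is closest to the 50-pixel entry, so the sketch reports a distance of 2 m.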
20110187829 | IMAGE CAPTURE APPARATUS, IMAGE CAPTURE METHOD AND COMPUTER READABLE MEDIUM - There is provided an image capture apparatus. The apparatus includes: an image capture section configured to capture an image of a subject; a focal point distance detector configured to detect a focal point distance from a main point of the image capture section to a focal point of the image capture section on the subject; an image acquisition section configured to acquire first and second images of the subject; an image position detector configured to detect a first image position and a second image position, wherein the first image position represents a position of a certain point on the subject in the first image, and the second image position represents a position of the certain point on the subject in the second image; a 3D image generator configured to generate a 3D image of the subject based on a difference between the first image position and the second image position; a parallelism computation section configured to compute parallelism based on the first and second image positions and the focal point distance; and a display section configured to display the parallelism. | 08-04-2011 |
20110187830 | METHOD AND APPARATUS FOR 3-DIMENSIONAL IMAGE PROCESSING IN COMMUNICATION DEVICE - An apparatus and a method for 3-Dimensional (3D) image processing in a communication device are provided. The method includes obtaining an original image for providing a 3D image digital multimedia broadcasting service, setting the original image provided from the controller into a right-side image and generating a left-side image which differs from the right-side image, converting the left-side image and the right-side image into a side-by-side format image by combining the left-side image and the right-side image, dividing each of the combined two images into a plurality of blocks, determining a search region for each of the divided blocks within an image, and estimating a motion vector of each block based on the search region. | 08-04-2011 |
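The final steps of the entry above — dividing the side-by-side images into blocks, choosing a search region, and estimating a motion vector per block — amount to classic block matching. The sketch below is a textbook sum-of-absolute-differences search, not the patent's specific search-region selection; block size and window radius are illustrative defaults.

```python
def block_sad(cur, ref, by, bx, dy, dx, bs):
    """Sum of absolute differences between the bs-by-bs block of `cur`
    at (by, bx) and the block of `ref` displaced by (dy, dx)."""
    return sum(abs(cur[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
               for j in range(bs) for i in range(bs))

def estimate_motion(cur, ref, by, bx, bs=2, search=1):
    """Exhaustively test every displacement in a (2*search+1)^2 window
    and return the (dy, dx) with minimum SAD."""
    h, w = len(ref), len(ref[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Skip displacements that push the block outside the frame.
            if (0 <= by + dy and by + dy + bs <= h
                    and 0 <= bx + dx and bx + dx + bs <= w):
                cost = block_sad(cur, ref, by, bx, dy, dx, bs)
                if cost < best_cost:
                    best, best_cost = (dy, dx), cost
    return best

# A 2x2 bright patch moved one pixel to the right between frames:
ref = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
cur = [[0, 0, 0, 0], [0, 0, 9, 9], [0, 0, 9, 9], [0, 0, 0, 0]]
mv = estimate_motion(cur, ref, by=1, bx=2)
```

The block at (1, 2) in the current frame matches the reference one pixel to the left, so the estimated vector is (0, -1).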
20110187831 | APPARATUS AND METHOD FOR DISPLAYING THREE-DIMENSIONAL IMAGES - According to the present disclosure, there is disclosed a method and device for displaying a 3-dimensional image, which may provide an improved depth perception. The method according to present invention comprises: forming parallax images for left eye and right eye, each of the parallax images including a plurality of images corresponding to images at different depths for a same object; controlling a brightness of the images of each of the parallax images for the left eye and the right eye; and displaying the parallax images for the left eye and the right eye. | 08-04-2011 |
20110187832 | NAKED EYE THREE-DIMENSIONAL VIDEO IMAGE DISPLAY SYSTEM, NAKED EYE THREE-DIMENSIONAL VIDEO IMAGE DISPLAY DEVICE, AMUSEMENT GAME MACHINE AND PARALLAX BARRIER SHEET - The present invention realizes a naked eye three-dimensional video image display device that alleviates a jump point. In the naked eye three-dimensional video image display device of the invention, a slit of a parallax barrier is arranged in a zigzag or curved shape and the edge of the slit has a shape of an elliptic arc, so that a moderate view mix is generated to alleviate the jump point. Since a perforated parallax barrier is designed after an area to be viewed on a pixel arrangement surface is determined, the parallax barrier can be appropriately provided with an effect of the view mix. | 08-04-2011 |
20110187833 | 3-D Camera Rig With No-Loss Beamsplitter Alternative - An apparatus and method for stereoscopic photography in which a direct view camera with a direct view lens is mounted to a support to obtain a direct view camera shot while a reflected view camera with a reflected view lens is mounted to the support in a down-looking camera configuration to obtain a reflected view camera shot without use of a beamsplitter when an interaxial spacing between the direct view camera and the reflected view camera does not cause an overlap between a direct view active optical area of the beamsplitter that would be used by the direct view lens and a reflected view active optical area of the beamsplitter that would be used by the reflected view lens. A reflective planar mirror can be positioned to (substantially fully) reflect light from a surface of the reflective planar mirror to the reflected view lens while a transparent planar glass is positioned to allow light to pass substantially perpendicularly through the transparent planar glass to the direct view lens, both the transparent planar glass and reflective planar mirror having substantially parallel surfaces and being integral or not. Alternatively, the transparent planar glass can be eliminated and a spacer used to adjust the mounting of the direct view camera to the support so as to restore a sight line for the direct view camera. In still other alternatives, either one or both of the direct and reflected view lenses can be replaced with a pinhole lens. | 08-04-2011 |
20110193939 | PHYSICAL INTERACTION ZONE FOR GESTURE-BASED USER INTERFACES - In a motion capture system having a depth camera, a physical interaction zone of a user is defined based on a size of the user and other factors. The zone is a volume in which the user performs hand gestures to provide inputs to an application. The shape and location of the zone can be customized for the user. The zone is anchored to the user so that the gestures can be performed from any location in the field of view. Also, the zone is kept between the user and the depth camera even as the user rotates his or her body so that the user is not facing the camera. A display provides feedback based on a mapping from a coordinate system of the zone to a coordinate system of the display. The user can move a cursor on the display or control an avatar. | 08-11-2011 |
20110193940 | 3D CMOS Image Sensors, Sensor Systems Including the Same - A three-dimensional (3D) CMOS image sensor (CIS) that sufficiently absorbs incident infrared-rays (IRs) and includes an infrared-ray (IR) receiving unit formed in a thin epitaxial film, thereby being easily manufactured using a conventional CIS process, a sensor system including the 3D CIS, and a method of manufacturing the 3D CIS, the 3D CIS including an IR receiving part absorbing IRs incident thereto by repetitive reflection to produce electron-hole pairs (EHPs); and an electrode part formed on the IR receiving part and collecting electrons produced by applying a predetermined voltage thereto. | 08-11-2011 |
20110193941 | IMAGE PROCESSING APPARATUS, IMAGING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus includes an image evaluation unit evaluating properness of synthesized images as the 3-dimensional images. The image evaluation unit performs the process of evaluating the properness through analysis of a block correspondence difference vector calculated by subtracting a global motion vector indicating movement of an entire image from a block motion vector which is a motion vector of a block unit of the synthesized images, compares a predetermined threshold value to one of a block area of a block having the block correspondence difference vector and a movement amount addition value, and performs a process of determining that the synthesized images are not proper as the 3-dimensional images, when the block area is equal to or greater than a predetermined area threshold value or when the movement amount addition value is equal to or greater than a predetermined movement amount threshold value. | 08-11-2011 |
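The decision rule in the entry above is concrete: subtract the global motion vector from each block's motion vector, then flag the pair as improper if a deviating block is too large or the accumulated movement is too great. The sketch below mirrors those two threshold tests; the area-weighted accumulation and all names are illustrative, not the patent's exact formulation.

```python
def improper_for_3d(block_vectors, block_areas, global_vector,
                    area_thresh, movement_thresh):
    """Return True when the synthesized pair is judged improper for 3-D
    viewing.  Difference vector = block motion vector minus global
    motion vector; a large deviating block, or too much accumulated
    deviation, fails the pair (per the abstract's two threshold tests)."""
    movement_sum = 0.0
    for (vx, vy), area in zip(block_vectors, block_areas):
        dx, dy = vx - global_vector[0], vy - global_vector[1]
        magnitude = (dx * dx + dy * dy) ** 0.5
        if magnitude > 0:                      # a block moving against the pan
            if area >= area_thresh:
                return True                    # single large deviating block
            movement_sum += magnitude * area   # accumulate weighted deviation
    return movement_sum >= movement_thresh

# One large block deviates from the global pan of (1, 0): improper.
flagged = improper_for_3d([(5, 0), (1, 0)], [100, 10], (1, 0), 50, 1000)
# Every block follows the global motion exactly: proper.
ok = improper_for_3d([(1, 0)], [10], (1, 0), 50, 1000)
```
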
20110193942 | Single-Lens, Single-Aperture, Single-Sensor 3-D Imaging Device - A device and method for three-dimensional (3-D) imaging using a defocusing technique is disclosed. The device comprises a lens having a substantially oblong aperture, a sensor operable for capturing light transmitted from an object through the lens and the substantially oblong aperture, and a processor communicatively connected with the sensor for processing the sensor information and producing a 3-D image of the object. The aperture may have an asymmetrical shape for distinguishing objects in front of versus in back of the focal plane. The aperture may also be rotatable, where the orientation of the observed pattern relative to the oblong aperture is varied with time thereby removing the ambiguity generated by image overlap. The disclosed device further comprises a light projection system configured to project a predetermined pattern onto a surface of the desired object thereby allowing for mapping of unmarked surfaces in three dimensions. | 08-11-2011 |
20110193943 | Stabilized Stereographic Camera System - A stabilized stereographic camera system may include an upright support having a longitudinal axis. A camera support coupled to one end of the upright support may mount first and second cameras, the first camera movable with respect to the second cameras. A ballast may be coupled to another end of the upright support. A balance mechanism may move a movable portion of the stereographic camera system relative to the upright support in response to movement of the first camera with respect to the second camera to maintain a constant weight distribution of the stereographic camera system. | 08-11-2011 |
20110199460 | GLASSES FOR VIEWING STEREO IMAGES - Spectacles are disclosed for use in viewing images or videos, and include at least two lenses each having an adjustable optical property and an image capture device associated with the spectacles for receiving a digital image comprising a plurality of digital image channels each containing multiple pixels. The spectacles further include the image capture device for capturing a digital image and a processor for computing a feature vector from pixel values of the digital image channels wherein the feature vector includes information that would indicate whether the image is an anaglyph or a non-anaglyph image. | 08-18-2011 |
20110199461 | FLOW LINE PRODUCTION SYSTEM, FLOW LINE PRODUCTION DEVICE, AND THREE-DIMENSIONAL FLOW LINE DISPLAY DEVICE - A motion locus creation system which is capable of displaying the trajectory of movement of an object to be tracked in an understandable way even without using 3D model information. A camera unit forms a detection flag indicating whether or not the object to be tracked has been able to be detected from a captured image. A motion locus-type selection section determines the display type of a motion locus according to the detection flag. A motion locus creation section produces a motion locus according to coordinate data acquired by a tag reader section and a motion locus-type instruction signal selected by the motion locus-type selection section. | 08-18-2011 |
20110205337 | Motion Capture with Low Input Data Constraints - Systems, devices, methods, and arrangements are implemented in a variety of embodiments to facilitate motion capture of objects. Consistent with one such system, three-dimensional representations are determined for at least one object. Depth-based image data is used in the system, which includes a processing circuit configured and arranged to render a plurality of orientations for at least one object. Orientations from the plurality of orientations are assessed against the depth-based image data. An orientation is selected from the plurality of orientations as a function of the assessment of orientations from the plurality of orientations. | 08-25-2011 |
20110205338 | Apparatus for estimating position of mobile robot and method thereof - An apparatus and method for estimating the position of a mobile robot capable of reducing the time required to estimate the position is provided. The mobile robot position estimating apparatus includes a range data acquisition unit configured to acquire three-dimensional (3D) point cloud data, a storage unit configured to store a plurality of patches, each including points around a feature point which is extracted from previously acquired 3D point cloud data, and a position estimating unit configured to estimate the position of the mobile robot by tracking the plurality of patches from the acquired 3D point cloud data. | 08-25-2011 |
20110205339 | NONDIFFRACTING BEAM DETECTION DEVICES FOR THREE-DIMENSIONAL IMAGING - Embodiments of the present invention relate to a nondiffracting beam detection module for generating three-dimensional image data that has a surface layer having a first surface and a light transmissive region, a microaxicon, and a light detector. The microaxicon receives light through the light transmissive region from outside the first surface and generates one or more detection nondiffracting beams based on the received light. The light detector receives the nondiffracting beams and generates three-dimensional image data associated with an object located outside the first surface based on the one or more detection nondiffracting beams received. In some cases, the light detector can localize a three-dimensional position on the object associated with each detection nondiffracting beam received. In other cases, the light detector can determine perspective projections based on the detection nondiffracting beams received and generates the three-dimensional image data, using tomography, based on the determined perspective projections. | 08-25-2011 |
20110205340 | 3D TIME-OF-FLIGHT CAMERA SYSTEM AND POSITION/ORIENTATION CALIBRATION METHOD THEREFOR - A camera system comprises a 3D TOF camera for acquiring a camera-perspective range image of a scene and an image processor for processing the range image. The image processor contains a position and orientation calibration routine implemented therein in hardware and/or software, which position and orientation calibration routine, when executed by the image processor, detects one or more planes within a range image acquired by the 3D TOF camera, selects a reference plane among the at least one or more planes detected and computes position and orientation parameters of the 3D TOF camera with respect to the reference plane, such as, e.g., elevation above the reference plane and/or camera roll angle and/or camera pitch angle. | 08-25-2011 |
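Once the calibration routine in the entry above has selected a reference plane, the camera's elevation, pitch, and roll follow from the plane equation alone. The sketch below works through that geometry for a plane expressed in camera coordinates; the axis convention (x right, y down, z forward) and the angle signs are assumptions, not the patent's stated convention.

```python
import math

def pose_from_plane(a, b, c, d):
    """Position/orientation of a camera relative to a detected reference
    plane a*x + b*y + c*z + d = 0 in camera coordinates (x right,
    y down, z forward -- an assumed convention).  Returns
    (elevation, pitch, roll); pitch > 0 means the camera looks down."""
    n = math.sqrt(a * a + b * b + c * c)
    a, b, c, d = a / n, b / n, c / n, d / n     # normalize to a unit normal
    if d < 0:                                   # orient the normal toward the camera
        a, b, c, d = -a, -b, -c, -d
    elevation = d                               # distance from camera origin to plane
    pitch = math.atan2(-c, -b)                  # tilt of the normal about the x axis
    roll = math.atan2(-a, -b)                   # tilt of the normal about the z axis
    return elevation, pitch, roll

# Level camera 2 m above the ground plane y = 2 (y points down):
level = pose_from_plane(0.0, 1.0, 0.0, -2.0)

# Same height, camera pitched 30 degrees down: the ground normal seen in
# camera coordinates becomes (0, -cos(th), -sin(th)).
th = math.radians(30.0)
pitched = pose_from_plane(0.0, -math.cos(th), -math.sin(th), 2.0)
```

The level case recovers elevation 2 m with zero pitch and roll; the tilted case recovers the 30-degree pitch from the normal's forward component.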
20110211044 | Non-Uniform Spatial Resource Allocation for Depth Mapping - A method for depth mapping includes providing depth mapping resources including an illumination module, which is configured to project patterned optical radiation into a volume of interest containing the object, and an image capture module, which is configured to capture an image of the pattern reflected from the object. A depth map of the object is generated using the resources while applying at least one of the resources non-uniformly over the volume of interest. | 09-01-2011 |
20110211045 | METHOD AND SYSTEM FOR PRODUCING MULTI-VIEW 3D VISUAL CONTENTS - A method for producing 3D multi-view visual contents including capturing a visual scene from at least one first point of view for generating a first bidimensional image of the scene and a corresponding first depth map indicative of a distance of different parts of the scene from the first point of view. The method further includes capturing the visual scene from at least one second point of view for generating a second bidimensional image; processing the first bidimensional image to derive at least one predicted second bidimensional image predicting the visual scene captured from the at least one second point of view; deriving at least one predicted second depth map predictive of a distance of different parts of the scene from the at least one second point of view by processing the first depth map, the at least one predicted second bidimensional image and the second bidimensional image. | 09-01-2011 |
20110216165 | Electronic apparatus, image output method, and program therefor - Provided is an electronic apparatus including: a storage to store digital photograph images, shooting date and time information, and shooting location information; a current date and time obtaining unit to obtain a current date and time; a current location obtaining unit to obtain a current location; a controller to draw each of the digital photograph images at a drawing position based on the shooting date and time and the shooting location in a virtual three-dimensional space, and to image the virtual three-dimensional space, in which each of the digital photograph images is drawn; and an output unit to output the imaged virtual three-dimensional space. | 09-08-2011 |
20110216166 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM - An image processing device includes an operation reception portion which receives an instruction operation for displaying a desired image from a plane image or a stereoscopic image that is stored in a recording medium; an information output portion that is connected to a display device which displays the plane image or the stereoscopic image to output image information for displaying the image stored in the recording medium on the display device; and a control portion. | 09-08-2011 |
20110221866 | COMPUTER-READABLE STORAGE MEDIUM HAVING STORED THEREIN DISPLAY CONTROL PROGRAM, DISPLAY CONTROL APPARATUS, DISPLAY CONTROL SYSTEM, AND DISPLAY CONTROL METHOD - An image display apparatus includes a stereoscopic image display apparatus configured to display a stereoscopically visible image, and a planar image display apparatus configured to display a planar image. An adjustment section of the image display apparatus adjusts relative positions, relative sizes, and relative rotations of a left-eye image taken by a left-eye image imaging section and a right-eye image taken by a right-eye image imaging section. The adjusted left-eye image and the adjusted right-eye image are viewed by the left eye and the right eye of the user, respectively, thereby displaying the stereoscopic image on the stereoscopic image display apparatus. The adjusted left-eye image and the adjusted right-eye image are made semi-transparent and superimposed one on the other, and thus a resulting superimposed planar image is displayed on the planar image display apparatus. | 09-15-2011 |
20110221867 | method and device for optically aligning the axles of motor vehicles - A method and a device are provided for the optical axle alignment of wheels of a motor vehicle. At the wheels that are to be aligned, targets are mounted, having optically recordable marks, the targets being recordable by measuring units that have stereo camera devices. In a referencing process, using a referencing device that is integrated into the measuring units, a measuring position reference system is established for the measuring units. In a calibration process, in which a local 3D coordinate system is established using at least three marks of the target, the determination of a reference plane is carried out, using a significant mark of the target. Finally, using the reference plane, a vehicle longitudinal center plane is ascertained, while taking into account the measuring position reference system. In a subsequent measuring process, an image is recorded of at least three indeterminate marks taken during the calibration process, and their spatial position is ascertained in the local 3D coordinate system by the evaluation unit. | 09-15-2011 |
20110228050 | System for Positioning Micro Tool of Micro Machine and Method Thereof - A system for positioning a micro tool of a micro machine is provided, comprising a stereo-photographing device, an image analysis system, a PC-based controller, and a micro machine. The image analysis system analyzes the positions of the micro tool and the workpiece algorithmically, and the micro tool is then positioned to a pre-determined location. A method for positioning the micro tool of the micro machine is also provided. | 09-22-2011 |
20110228051 | Stereoscopic Viewing Comfort Through Gaze Estimation - A method of improving stereo video viewing comfort is provided that includes capturing a video sequence of eyes of an observer viewing a stereo video sequence on a stereoscopic display, estimating gaze direction of the eyes from the video sequence, and manipulating stereo images in the stereo video sequence based on the estimated gaze direction, whereby viewing comfort of the observer is improved. | 09-22-2011 |
20110234756 | DE-ALIASING DEPTH IMAGES - Techniques are provided for de-aliasing depth images. The depth image may have been generated based on phase differences between a transmitted and received modulated light beam. A method may include accessing a depth image that has a depth value for a plurality of locations in the depth image. Each location has one or more neighbor locations. Potential depth values are determined for each of the plurality of locations based on the depth value in the depth image for the location and potential aliasing in the depth image. A cost function is determined based on differences between the potential depth values of each location and its neighboring locations. Determining the cost function includes assigning a higher cost for greater differences in potential depth values between neighboring locations. The cost function is substantially minimized to select one of the potential depth values for each of the locations. | 09-29-2011 |
20110234757 | SUPER RESOLUTION OPTOFLUIDIC MICROSCOPES FOR 2D AND 3D IMAGING - A super resolution optofluidic microscope device comprises a body defining a fluid channel having a longitudinal axis and includes a surface layer proximal the fluid channel. The surface layer has a two-dimensional light detector array configured to receive light passing through the fluid channel and sample a sequence of subpixel shifted projection frames as an object moves through the fluid channel. The super resolution optofluidic microscope device further comprises a processor in electronic communication with the two-dimensional light detector array. The processor is configured to generate a high resolution image of the object using a super resolution algorithm, and based on the sequence of subpixel shifted projection frames and a motion vector of the object. | 09-29-2011 |
20110234758 | ROBOT DEVICE AND METHOD OF CONTROLLING ROBOT DEVICE - There is provided a robot device including an irradiation unit that irradiates pattern light to an external environment, an imaging unit that acquires an image by imaging the external environment, an external environment recognition unit that recognizes the external environment, an irradiation determining unit that controls the irradiation unit to be turned on when it is determined that irradiation of the pattern light is necessary based on an acquisition status of the image, and a light-off determining unit that controls the irradiation unit to be turned off when it is determined that irradiation of the pattern light is unnecessary or that irradiation of the pattern light is necessary to be forcibly stopped, based on the external environment. | 09-29-2011 |
20110234759 | 3D MODELING APPARATUS, 3D MODELING METHOD, AND COMPUTER READABLE MEDIUM - A 3D modeling apparatus includes: a generator configured to generate 3D models of a subject based on pairs of images; a selector configured to select a first 3D model and a second 3D model from the 3D models, wherein the second 3D model is to be superimposed on the first 3D model; an extracting unit configured to extract first feature points from the first 3D model and extract second feature points from the second 3D model; an acquiring unit configured to acquire coordinate transformation parameters based on the first and second feature points; a transformation unit configured to transform coordinates of the second 3D model into coordinates in a coordinate system of the first 3D model, based on the coordinate transformation parameters; and a combining unit configured to superimpose the second 3D model having the transformed coordinates on the first 3D model. | 09-29-2011 |
20110234760 | 3D IMAGE SIGNAL TRANSMISSION METHOD, 3D IMAGE DISPLAY APPARATUS AND SIGNAL PROCESSING METHOD THEREIN - A method for transmitting a 3D image signal, an image display device, and an image signal processing method of the device are provided in order to reduce a collision between depth cues, which may occur in the vicinity of the left and right corners when reproducing a 3D image. In the method for processing an image signal, first, an encoded video signal is obtained. Next, the encoded video signal is decoded to restore a plurality of image signals, and floating window information of each floating window is extracted from a picture header area of the encoded video signal. Then, an image at an inner area of the left or right corner is suppressed according to the floating window information with respect to each of the plurality of images corresponding to the plurality of image signals, and the locally suppressed images are displayed in a stereoscopic manner. | 09-29-2011 |
20110234761 | THREE-DIMENSIONAL OBJECT EMERGENCE DETECTION DEVICE - Provided is a three-dimensional object emergence detecting device capable of detecting the emergence of a three-dimensional object rapidly and correctly at low costs. | 09-29-2011 |
20110242280 | PARALLAX IMAGE GENERATING APPARATUS AND METHOD - According to one embodiment, a parallax image generating apparatus generates, using a first image, parallax images with a parallax therebetween. The apparatus includes the following units. The first estimation unit estimates distribution information items indicating distributions of first depths in the first image by using first methods. The distribution information items fall within a depth range to be reproduced. The first combination unit combines the distribution information items to generate first depth information. The second calculation unit calculates second depth information indicating relative unevenness of an object in the first image. The third combination unit combines the first depth information and the second depth information by using a method different from the first methods, to generate third depth information. The generation unit generates the parallax images based on the third depth information and the first image. | 10-06-2011 |
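The combination stages described in this abstract can be sketched for a single image row. Everything below (the weights, the additive gain, the shift rule) is an assumed illustration of the general scheme, not the apparatus's actual method:

```python
# Hypothetical sketch: several global depth estimates are combined by
# weighted averaging (the first combination), a relative-unevenness term
# is then added on top (a different combination method), and a parallax
# pair is produced by shifting pixels in proportion to the final depth.

def combine_global(estimates, weights):
    # first combination: weighted sum of per-pixel depth estimates
    return [sum(w * e[i] for e, w in zip(estimates, weights))
            for i in range(len(estimates[0]))]

def add_unevenness(global_depth, unevenness, gain=0.5):
    # third combination: additive, unlike the weighted averaging above
    return [g + gain * u for g, u in zip(global_depth, unevenness)]

def parallax_pair(row, depth, max_shift=2):
    # shift each pixel left/right by an amount derived from its depth;
    # occlusions and pixel collisions are ignored in this sketch
    left, right = [0] * len(row), [0] * len(row)
    for x, (v, d) in enumerate(zip(row, depth)):
        s = int(round(max_shift * d))
        if 0 <= x + s < len(row):
            left[x + s] = v
        if 0 <= x - s < len(row):
            right[x - s] = v
    return left, right
```

With zero depth everywhere the two output rows are identical (no parallax); increasing depth values shift a pixel apart in the two views.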
20110242281 | 3D DENTAL CAMERA FOR RECORDING SURFACE STRUCTURES OF AN OBJECT TO BE MEASURED BY MEANS OF TRIANGULATION - The invention relates to a 3D dental camera for recording surface structures of a measuring object ( | 10-06-2011 |
20110242282 | Signal processing device, signal processing method, display device, and program product - A signal processing device includes a synchronization separation unit that separates horizontal and vertical synchronization signals from image signals, a dot counter which counts the number of dots of the image signals, a line counter which counts the number of lines of the image signals, a determination unit which determines the number of pixels in an image display area based on the number of dots and the number of lines, a control unit which controls the timing for shifting and outputting either of the left or the right image signal so that a left or a right image is displayed side by side in a display area in a size where a user can recognize the left or the right image among display areas in a display unit, and a first image signal shift unit which outputs the left or the right image signal to the display unit. | 10-06-2011 |
20110242283 | Method and Apparatus for Generating Texture in a Three-Dimensional Scene - In one aspect of the teachings herein, a 3D imaging apparatus uses a “texture tool” or facilitates the use of such a tool by an operator, to add artificial texture to a scene being imaged, for 3D imaging of surfaces or background regions in the scene that otherwise lack sufficient texture for determining sufficiently dense 3D range data. In at least one embodiment, the 3D imaging apparatus provides for iterative or interactive imaging, such as by dynamically indicating that one or more regions of the scene require artificial texturing via the texture tool for accurate 3D imaging, and then indicating whether or not the addition of artificial texture cured the texture deficiency and/or whether any other regions within the image require additional texture. | 10-06-2011 |
20110242284 | VIDEO PLAYBACK DEVICE - An image reproducing apparatus capable of outputting a 3D image signal or a non-3D image signal which can display a stereoscopic or a non-stereoscopic image to an image display apparatus, including: an AV processing unit operable to input data of contents and generate the 3D or non-3D image signal from the contents data; an output unit operable to output the 3D or non-3D image signal generated by the AV processing unit to the display apparatus in accordance with a 3D image output format being a format for outputting an image signal for stereoscopic display; and a receiving unit operable to receive an instruction inputted by a user. In a case where the receiving unit receives the instruction to display the contents in non-3D images when the output unit outputs the 3D image signal in accordance with the 3D image output format, the AV processing unit generates the non-3D image signal from the contents data and the output unit outputs the non-3D image signal to the display apparatus in accordance with the 3D image output format. | 10-06-2011 |
20110249093 | THREE-DIMENSIONAL VIDEO IMAGING DEVICE - A three-dimensional (3D) video imaging device includes a liquid crystal layer, a color filter plate, a lens array, light shielding elements and an optical sheet, and the lens array includes lens elements installed onto a surface of the color filter plate, and the light shielding elements are installed onto a surface of the color filter plate or lens element or formed directly in the color filter plate, and the light shielding elements are arranged with an interval apart from each other and corresponding to the intervals among the lens elements, and a combination of the liquid crystal layer, color filter plate and optical sheet constitutes an LCD panel structure for installing the lens array and the light shielding elements into the LCD panel directly to reduce the thickness and simplify the manufacturing process, while preventing stray lights, improving the clarity of 3D images, and maintaining a high-brightness display effect. | 10-13-2011 |
20110249094 | Method and System for Providing Three Dimensional Stereo Image - The present invention provides a method for providing a 3D stereo image. The method comprises: accepting a request submitted from a client system by an intermediate server system; selecting an image server based on the request and responding to the client system from the image server through a processor in the intermediate server system; requesting at least one 3D stereo image by the client system from the image server according to the response; and providing the at least one 3D stereo image to the client system by the image server system. The present invention also provides a system for providing a 3D stereo image. | 10-13-2011 |
20110249095 | IMAGE COMPOSITION APPARATUS AND METHOD THEREOF - An image composition apparatus includes a synchronization unit for synchronizing a motion capture equipment and a camera; a three-dimensional (3D) restoration unit for restoring 3D motion capture data of markers attached for motion capture; a 2D detection unit for detecting 2D position data of the markers from a video image captured by the camera; and a tracking unit for tracking external and internal factors of the camera for all frames of the video image based on the restored 3D motion capture data and the detected 2D position data. Further, the image composition apparatus includes a calibration unit for calibrating the tracked external and internal factors upon completion of tracking in all the frames; and a combination unit for combining a preset computer-generated (CG) image with the video image by using the calibrated external and internal factors. | 10-13-2011 |
20110249096 | THREE-DIMENSIONAL MEASURING DEVICE AND BOARD INSPECTION DEVICE - A board inspection device includes an irradiation device for irradiating light on a printed circuit board, and a CCD camera for imaging the irradiated part of the circuit board. First image processing is performed for a first exposure time such that an inspection target region is free of brightness saturation, and second image processing is performed using a second exposure time corresponding to the insufficiency of the first exposure time relative to a certain exposure time appropriate for measurement of a measurement standard region. Thereafter, image data for three-dimensional measurement is prepared for the inspection target region using the value of image data obtained by the first image processing, and image data for three-dimensional measurement is prepared for the measurement standard region using a value obtained by summing the image data value acquired by the second image processing and the image data value acquired by the first image processing. | 10-13-2011 |
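The two-exposure summing rule described above can be sketched as follows; the exposure times, region labels, and the assumption of a linear sensor response are illustrative, not taken from the patent:

```python
# Hypothetical sketch: the inspection region uses the short-exposure data
# directly (no saturation), while the measurement-standard region sums the
# two exposures so the total integration time matches the exposure
# appropriate for that region.

T_STD = 30.0     # exposure appropriate for the measurement standard (assumed, ms)
T1 = 10.0        # first exposure, chosen so the inspection region does not saturate
T2 = T_STD - T1  # second exposure covers the shortfall of the first

def combined_value(v1, v2, region):
    # v1: pixel value from the first (short) exposure
    # v2: pixel value from the second exposure
    return v1 if region == "inspection" else v1 + v2
```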
20110249097 | DEVICE FOR RECORDING, REMOTELY TRANSMITTING AND REPRODUCING THREE-DIMENSIONAL IMAGES - The invention relates to an image recording device, an image reproduction device and a system with such a recording and reproduction device. The recording device comprises an optical axis ( | 10-13-2011 |
20110254922 | IMAGING SYSTEM USING MARKERS - A system for detecting a position of an object such as a surgical tool in an image guidance system includes a camera system with a detection array for detecting visible light and a processor arranged to analyze the output from the array. Each object to be detected carries a single marker with a pattern of contrasted areas of light and dark intersecting at a specific single feature point thereon. The pattern includes components arranged in an array around the specific location arranged such that the processor is able to detect an angle of rotation of the pattern around the location and which are different from other markers of the system such that the processor is able to distinguish each marker from the other markers. | 10-20-2011 |
20110254923 | Image processing apparatus, method and computer-readable medium - Provided is an image processing apparatus, method and computer-readable medium. The image processing apparatus may perform modeling of a function that enables correction of a systematic error of a depth camera, using a single depth camera and a single calibration reference image. Additionally, the image processing apparatus may calculate a depth error or a distance error of an input image, and may correct a measured depth of the input image using a modeled function. | 10-20-2011 |
20110254924 | OPTRONIC SYSTEM AND METHOD DEDICATED TO IDENTIFICATION FOR FORMULATING THREE-DIMENSIONAL IMAGES - The invention relates to an optronic system for identifying an object including a photosensitive sensor, communication means and a computerized processing means making it possible to reconstruct the object in three dimensions on the basis of the images captured by the sensor and to identify the object on the basis of the reconstruction. The photosensitive sensor is able to record images of the object representing the intensity levels of an electromagnetic radiation reflected by the surface of the object captured from several observation angles around the object and the communication means are able to transmit the said images to the computerized processing means to reconstruct the object in three dimensions by means of a tomography function configured to process the images of the object representing the intensity levels of an electromagnetic radiation reflected by the surface of the object. The invention also relates to a method of computerized processing for object identification by reconstruction of the object in three dimensions. The invention is applied to the field of target detection, to the medical field and also microelectronics, for example. | 10-20-2011 |
20110254925 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus includes a first acquiring unit, a second acquiring unit, and a correction processor. The first acquiring unit acquires an intended viewing environment parameter, which is a parameter of an intended viewing environment, together with image data of a three-dimensional picture. The second acquiring unit acquires an actual viewing environment parameter, which is a parameter of an actual viewing environment for a user viewing the three-dimensional picture. The correction processor corrects the three-dimensional picture in accordance with a difference between the acquired intended viewing environment parameter and the acquired actual viewing environment parameter. | 10-20-2011 |
20110254926 | Data Structure, Image Processing Apparatus, Image Processing Method, and Program - A reproduction apparatus reproduces image data of a left eye image and a right eye image of a 3D content recorded in a recording medium. The recording medium stores information about black border widths according to each parallax amount in periphery of right and left image frames between the left eye image and the right eye image. A post processing unit generates and outputs a border-attached left eye image and a border-attached right eye image by inserting an image having the obtained black border width according to the parallax amount in the periphery of the right image frame and an image having the obtained black border width according to the parallax amount in the periphery of the left image frame into the left eye image and the right eye image. The present invention can be applied to an image processing apparatus for processing image data of 3D images. | 10-20-2011 |
20110254927 | IMAGE PROCESSING APPARATUS AND METHOD - An image processing apparatus executes a distortion correction on coordinates of a target pixel in a virtual viewpoint image based on distortion characteristics of a virtual camera and calculates coordinates in the virtual viewpoint image after the distortion correction. The image processing apparatus calculates ideal coordinates in a captured image from the coordinates in the virtual viewpoint image after the distortion correction and calculates real coordinates in the captured image from the ideal coordinates in the captured image based on distortion characteristics of an imaging unit. The image processing apparatus calculates a pixel value corresponding to the real coordinates from image data of the virtual viewpoint image and corrects the pixel value corresponding to the real coordinates based on ambient light amount decrease characteristics of the imaging unit and ambient light amount decrease characteristics of the virtual camera. | 10-20-2011 |
20110254928 | Time of Flight Camera Unit and Optical Surveillance System - A time of flight, TOF, camera unit for an optical surveillance system and an optical surveillance system comprising such a TOF camera is disclosed. The TOF camera unit comprises a radiation emitting unit for illuminating a surveillance area defined by a first plane, a radiation detecting unit for receiving radiation reflected from said surveillance area and for generating a three-dimensional image from said detected radiation, and at least one mirror for at least partly deflecting said emitted radiation into at least one second plane extending across to said first plane and for deflecting the radiation reflected from said second plane to the radiation detecting unit. The TOF camera and the at least one mirror may be arranged on a common carrier element. | 10-20-2011 |
20110261160 | IMAGE INFORMATION PROCESSING APPARATUS, IMAGE CAPTURE APPARATUS, IMAGE INFORMATION PROCESSING METHOD, AND PROGRAM - The present invention relates to an image information processing apparatus, an image capture apparatus, an image information processing method, and a program that allow a depth value to smoothly transition in a scene change of stereoscopic content. | 10-27-2011 |
20110261161 | APPARATUS AND METHOD FOR PHOTOGRAPHING AND DISPLAYING THREE-DIMENSIONAL (3D) IMAGES - A method and apparatus for shooting and displaying a three-dimensional (3D) image are provided. Light intensities and light wavelengths of light entering a single point from various directions may be measured, and the process is repeated to photograph a 3D image. Light may be emitted from each point in corresponding directions, based on the light intensity and light wavelength measured for each direction, thereby displaying a natural 3D image. | 10-27-2011 |
20110261162 | Method for Automatically Generating a Three-Dimensional Reference Model as Terrain Information for an Imaging Device - A method for automatically generating a three-dimensional reference model as terrain information for a seeker head of an unmanned missile. A three-dimensional terrain model formed from model elements obtained with the aid of satellite and/or aerial reconnaissance is provided. Position data of the imaging device at least at one planned position and a direction vector from the planned position of the imaging device to a predetermined target point in the three-dimensional terrain model are provided. A three-dimensional reference model of the three dimensional terrain model is generated that incorporates only those model elements and sections of model elements from the terrain model, which in the viewing direction of the direction vector from the planned position of the imaging device, are not covered by other model elements and/or are not located outside the field of view of the imaging device. | 10-27-2011 |
20110261163 | Image Recognition - A subject ( | 10-27-2011 |
20110261164 | OPTICAL SECTIONING OF A SAMPLE AND DETECTION OF PARTICLES IN A SAMPLE - The invention relates to an apparatus, a method and a system for obtaining a plurality of images of a sample arranged in relation to a sample device. The apparatus comprises at least a first optical detection assembly having an optical axis and at least one translation unit arranged to move the sample device and the first optical detection assembly relative to each other. The movement of the sample device and the first optical detection assembly relative to each other is along a scanning path, which defines an angle theta relative to the optical axis, wherein theta is larger than zero. | 10-27-2011 |
20110261165 | MODEL FORMING APPARATUS, MODEL FORMING METHOD, PHOTOGRAPHING APPARATUS AND PHOTOGRAPHING METHOD - The present invention provides a model forming apparatus that can simply and efficiently form a three-dimensional model of an object using previously obtained three-dimensional model data of the object as a starting point. The apparatus comprises a photographing section | 10-27-2011 |
20110267428 | SYSTEM AND METHOD FOR MAPPING A TWO-DIMENSIONAL IMAGE ONTO A THREE-DIMENSIONAL MODEL - In one embodiment, a system includes a turbine comprising multiple components in fluid communication with a working fluid. The system also includes an imaging system in optical communication with at least one component. The imaging system is configured to receive a two-dimensional image of the at least one component during operation of the turbine, and to map the two-dimensional image onto a three-dimensional model of the at least one component to establish a composite model. | 11-03-2011 |
20110267429 | ELECTRONIC EQUIPMENT HAVING LASER COMPONENT AND CAPABILITY OF INSPECTING LEAK OF LASER AND INSPECTING METHOD FOR INSPECTING LEAK OF LASER THEREOF - The invention provides an electronic equipment having a laser component and capability of inspecting leak of laser and an inspecting method for inspecting leak of laser thereof. The electronic equipment according to the invention includes a three-dimensional image-capturing device. According to the invention, the three-dimensional image-capturing device is controlled to capture a two-dimensional image, and to measure an actual depth map. The captured two-dimensional image is processed to obtain an estimated depth map. The invention selectively determines that the laser component occurs leak of laser or malfunctions in accordance with the estimated depth map and the actual depth map. | 11-03-2011 |
20110267430 | Detection Device of Planar Area and Stereo Camera System - An object of the present invention is to provide a detection device of a planar area, which suppresses matching of luminance values in areas other than a planar area, and non-detection and erroneous detection of the planar area arising from an error in projection transform. | 11-03-2011 |
20110267431 | METHOD AND APPARATUS FOR DETERMINING THE 3D COORDINATES OF AN OBJECT - In a method of determining the 3D coordinates of the surface ( | 11-03-2011 |
20110279648 | SCANNED-BEAM DEPTH MAPPING TO 2D IMAGE - A method for constructing a 3D representation of a subject comprises capturing, with a camera, a 2D image of the subject. The method further comprises scanning a modulated illumination beam over the subject to illuminate, one at a time, a plurality of target regions of the subject, and measuring a modulation aspect of light from the illumination beam reflected from each of the target regions. A moving-mirror beam scanner is used to scan the illumination beam, and a photodetector is used to measure the modulation aspect. The method further comprises computing a depth aspect based on the modulation aspect measured for each of the target regions, and associating the depth aspect with a corresponding pixel of the 2D image. | 11-17-2011 |
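A common "modulation aspect" in such systems is the phase delay of the reflected beam; assuming that choice, the depth computation and the per-region pixel association can be sketched as below. The function names, the modulation frequency, and the pixel pairing are illustrative, not taken from the patent:

```python
import math

# Hedged sketch: for a phase-delay modulation aspect, depth follows from
# the round trip at the speed of light, d = c * phase / (4 * pi * f_mod).

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase_rad, f_mod_hz):
    return C * phase_rad / (4 * math.pi * f_mod_hz)

def depth_map(phases, f_mod_hz, pixels):
    # associate each scanned target region's depth with its 2D-image pixel
    return {px: depth_from_phase(ph, f_mod_hz)
            for px, ph in zip(pixels, phases)}

# A pi/2 phase shift at 10 MHz modulation corresponds to c/(8*f), about 3.75 m:
d = depth_from_phase(math.pi / 2, 10e6)
```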
20110279649 | DIGITAL PHOTOGRAPHING APPARATUS, METHOD OF CONTROLLING THE SAME, AND COMPUTER-READABLE STORAGE MEDIUM - An apparatus, computer readable medium, and a method of controlling a digital photographing apparatus comprising a plurality of optical systems, the method including deriving shake information from the plurality of optical systems; and determining a base optical system from among the plurality of optical systems according to the shake information. | 11-17-2011 |
20110279650 | ARRANGEMENT AND METHOD FOR DETERMINING A BODY CONDITION SCORE OF AN ANIMAL - An arrangement for determining a body condition score of an animal comprises a three-dimensional camera system directed towards the animal and provided for recording at least one three-dimensional image of the animal; and an image processing device connected to the three-dimensional camera system and provided for forming a three-dimensional surface representation of a portion of the animal from the three-dimensional image recorded by the three-dimensional camera system; for statistically analyzing the surface of the three-dimensional surface representation; and for determining the body condition score of the animal based on the statistically analyzed surface of the three-dimensional surface representation. | 11-17-2011 |
20110285820 | USING 3D TECHNOLOGY TO PROVIDE SECURE DATA VIEWING ON A DISPLAY - A method and system using 3D technology provides secure data viewing on a display. Secure data viewing is enabled by having an image source module provide images to a processor module. The processor module receives the images provided by the image source module to create multiple series of images that interfere with each other, resulting in an unreadable series of images displayed on a display module. An authorized viewer is able to view a readable series of images from the multiple interfering series of images displayed by the display module. | 11-24-2011 |
20110285821 | INFORMATION PROCESSING APPARATUS AND VIDEO CONTENT PLAYBACK METHOD - According to one embodiment, an information processing apparatus executes a browser and player software plugged in the browser. The player software is configured to play back video content received from a server. A capture module captures two-dimensional video data from the player software, the two-dimensional video data being obtained by playback of the video content. A converter converts the captured two-dimensional video data to three-dimensional video data, the three-dimensional video data including left-eye video data and right-eye video data. A three-dimensional display control module displays a three-dimensional video on a display based on the left-eye video data and right-eye video data. | 11-24-2011 |
20110285822 | METHOD AND SYSTEM FOR FREE-VIEW RELIGHTING OF DYNAMIC SCENE BASED ON PHOTOMETRIC STEREO - A method and a related system of free-view relighting for a dynamic scene based on photometric stereo, the method including the steps of: 1) performing multi-view dynamic videos of an object using a multi-view camera array under a predetermined controllable varying illumination; 2) obtaining a three-dimensional shape model and surface reflectance peculiarities of the object; 3) obtaining a static relighted three-dimensional model of the object and a three-dimensional trajectory of the object; 4) obtaining a dynamic relighted three-dimensional model; and 5) performing a free-view dependent rendering to the dynamic relighted three-dimensional model of the object. | 11-24-2011 |
20110285823 | Device and Method for the Three-Dimensional Optical Measurement of Strongly Reflective or Transparent Objects - The invention relates to a device for three-dimensionally measuring an object, comprising a first projection device having a first infrared light source for projecting a displaceable first pattern onto the object, and at least one image capturing device for capturing images of the object in an infrared spectral range. The invention further relates to a method for three-dimensionally measuring an object, comprising the steps of projecting a first infrared pattern onto the object using a first projection device having a first infrared light source; and capturing images of the object using at least one image capturing device sensitive to infrared radiation, wherein the pattern is shifted between image captures. | 11-24-2011 |
20110285824 | METHOD FOR RECONSTRUCTING A THREE-DIMENSIONAL SURFACE OF AN OBJECT - Method for determining a disparity value for each of a plurality of points on an object, the method including the procedures of detecting by a single image detector, a first image of the object through a first aperture, and a second image of the object through a second aperture, correcting the distortion of the first image, and the distortion of the second image, by applying an image distortion correction model to the first image and to the second image, respectively, thereby producing a first distortion-corrected image and a second distortion-corrected image, respectively, for each of a plurality of pixels in at least a portion of the first distortion-corrected image representing a selected one of the points, identifying a matching pixel in the second distortion-corrected image, and determining the disparity value according to the coordinates of each of the pixels and of the respective matching pixel. | 11-24-2011 |
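The matching-pixel step in such disparity methods is commonly a sum-of-absolute-differences search along a row; the sketch below illustrates that generic approach. The window size, search range, and toy data are assumptions, not the patented procedure:

```python
# Hypothetical sketch: for a pixel at column x in the first image row, find
# the disparity d whose window in the second image row (shifted left by d)
# minimizes the sum of absolute differences (SAD).

def best_disparity(left_row, right_row, x, win=1, max_d=4):
    def sad(d):
        return sum(abs(left_row[x + i] - right_row[x - d + i])
                   for i in range(-win, win + 1))
    candidates = [d for d in range(max_d + 1)
                  if 0 <= x - d - win and x + win < len(left_row)]
    return min(candidates, key=sad)

# A pattern shifted right by 2 pixels in the left view yields disparity 2:
left  = [0, 0, 9, 5, 9, 0, 0, 0]
right = [9, 5, 9, 0, 0, 0, 0, 0]
print(best_disparity(left, right, 3))  # -> 2
```

Depth then follows from the disparity via the imaging geometry (for a standard stereo pair, depth is proportional to focal length times baseline divided by disparity).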
20110292178 | THREE-DIMENSIONAL IMAGE PROCESSING - Systems and methods of 3D image processing are disclosed. In a particular embodiment, a three-dimensional (3D) media player is configured to receive input data including at least a first image corresponding to a scene and a second image corresponding to the scene and to provide output data to a 3D display device. The 3D media player is responsive to user input including at least one of a zoom command and a pan command. The 3D media player includes a convergence control module configured to determine a convergence point of a 3D rendering of the scene responsive to the user input. | 12-01-2011 |
20110292179 | IMAGING SYSTEM AND METHOD - According to one embodiment, an apparatus for determining the gradients of the surface normals of an object includes a receiving unit, establishing unit, determining unit, and selecting unit. The receiving unit is configured to receive data of three 2D images of the object, wherein each image is taken under illumination from a different direction. The establishing unit is configured to establish which pixels of the image are in shadow such that there is only data available from two images from these pixels. The determining unit is configured to determine a range of possible solutions for the gradient of the surface normal of a shadowed pixel using the data available for the two images. The selecting unit is configured to select a solution for the gradient using the integrability of the gradient field over an area of the object as a constraint and minimising a cost function. | 12-01-2011 |
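The entry above builds on classic three-light photometric stereo, where the albedo-scaled surface normal of a fully lit pixel follows from a 3x3 linear solve; a hedged sketch under that standard formulation (the names and light directions are illustrative, not taken from the patent):

```python
import numpy as np

def surface_normal(intensities, light_dirs):
    """Solve L @ g = I for g = albedo * normal, then normalise.
    Requires three lit observations; a shadowed pixel supplies only two
    equations, which is the underdetermined case the abstract resolves
    with an integrability constraint on the gradient field."""
    g = np.linalg.solve(np.asarray(light_dirs, dtype=float),
                        np.asarray(intensities, dtype=float))
    return g / np.linalg.norm(g)

# Three illustrative light directions and equal observed intensities:
L = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0],
     [-1.0, 0.0, 1.0]]
n = surface_normal([0.9, 0.9, 0.9], L)  # -> normal close to [0, 0, 1]
```

With one observation missing, the same system has a one-parameter family of solutions, hence the need for the selecting unit's cost-function minimisation.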
20110292180 | Microscope - A microscope having a night vision apparatus upon which beam paths proceeding from a specimen or object to be observed can impinge. | 12-01-2011 |
20110298892 | IMAGING SYSTEMS WITH INTEGRATED STEREO IMAGERS - An imaging system may include an integrated stereo imager that includes first and second imager arrays on a single integrated circuit. Image readout circuitry may be located between the first and second imager arrays and a horizontal electronic rolling shutter may be used to read image data out of the arrays. The layout of the arrays and image readout circuitry on the integrated circuit may help to reduce the size of the integrated circuit while maximizing the baseline separation between the arrays. Memory buffer circuitry may be used to convert image data from the arrays into raster-scan compliant image data. The raster-scan compliant image data may be provided to a host system. | 12-08-2011 |
20110298893 | APPARATUS FOR READING SPECTRAL INFORMATION - The apparatus for reading spectral information out of image patterns includes a solid-state image sensor for taking pictures of image patterns, a unit for forming one-dimensional images out of light reflected at the image patterns, a spectroscope introducing the one-dimensional images into the solid-state image sensor, a shutter unit located in front of the solid-state image sensor, and a synchronizer turning the shutter unit on or off in synchronization with movement of the image patterns. The spectroscope disperses incoming light into its constituent wavelengths and produces a three-dimensional image spectrum that is wavelength-dispersive for each pixel in association with each location of the image patterns. | 12-08-2011 |
20110298894 | STEREOSCOPIC FIELD SEQUENTIAL COLOUR DISPLAY CONTROL - The present invention relates to a method, an apparatus, and a computer program suitable for controlling a stereoscopic field sequential colour display to provide a first primary colour component image for a user's first eye and a second primary colour component image for the user's second eye, wherein the first and the second primary colour component images are both provided either at least partially overlapping in time or alternately in uninterrupted succession and wherein the primary colour components of the first and second primary colour component images are different from each other. | 12-08-2011 |
20110298895 | 3D VIDEO FORMATS - Several implementations relate to 3D video formats. One or more implementations provide adaptations to MVC and SVC to allow 3D video formats to be used. According to a general aspect, a set of images including video and depth is encoded. The set of images is related according to a particular 3D video format and is encoded in a manner that exploits redundancy between the images. The encoded images are arranged in a bitstream in a particular order, based on the particular 3D video format that relates the images. The particular order is indicated in the bitstream using signaling information. According to another general aspect, a bitstream is accessed that includes the encoded set of images. The signaling information is also accessed. The set of images is decoded using the signaling information. | 12-08-2011 |
20110298896 | SPECKLE NOISE REDUCTION FOR A COHERENT ILLUMINATION IMAGING SYSTEM - Described are methods and apparatus for reducing speckle noise in images, such as images of objects illuminated by coherent light sources and images of objects illuminated by interferometric fringe patterns. According to one method, an object is illuminated with a structured illumination pattern of coherent radiation projected along a projection axis. An angular orientation of the projection axis is modulated over an angular range during an image acquisition interval. Advantageously, shape features of the structured illumination pattern projected onto the surface of the object remain unchanged during image acquisition and the acquired images exhibit reduced speckle noise. The structured illumination pattern can be a fringe pattern such as an interferometric fringe pattern generated by a 3D metrology system used to determine surface information for the illuminated object. | 12-08-2011 |
20110304693 | FORMING VIDEO WITH PERCEIVED DEPTH - A method for providing a video with perceived depth comprising: capturing a sequence of video images of a scene with a single perspective image capture device; determining a relative position of the image capture device for each of the video images in the sequence of video images; selecting stereo pairs of video images responsive to the determined relative position of the image capture device; and forming a video with perceived depth based on the selected stereo pairs of video images. | 12-15-2011 |
20110304694 | SYSTEM AND METHOD FOR 3D VIDEO STABILIZATION BY FUSING ORIENTATION SENSOR READINGS AND IMAGE ALIGNMENT ESTIMATES - Methods and systems for generating high accuracy estimates of the 3D orientation of a camera within a global frame of reference. Orientation estimates may be produced from an image-based alignment method. Other orientation estimates may be taken from a camera-mounted orientation sensor. The alignment-derived estimates may be input to a high pass filter. The orientation estimates from the orientation sensor may be processed and input to a low pass filter. The outputs of the high pass and low pass filters are fused, producing a stabilized video sequence. | 12-15-2011 |
20110304695 | MOBILE TERMINAL AND CONTROLLING METHOD THEREOF - A mobile terminal and controlling method thereof are disclosed, by which a focal position of a 3D image is controlled in accordance with a viewer's position and by which guide information on the focal position is provided to the viewer. The present invention includes a display unit configured to display a 3D image, a sensing unit configured to detect position information of a viewer, the sensing unit comprising at least one selected from the group consisting of at least one proximity sensor, at least one distance sensor and at least one camera, and a controller receiving the position information of the viewer from the sensing unit, the controller controlling the mobile terminal to help the viewer find a focal position of the 3D image based on the position information, or controlling the mobile terminal to vary the focal position of the 3D image in accordance with the position of the viewer. | 12-15-2011 |
20110304696 | Time-of-flight imager - An improved solution for generating depth maps using time-of-flight measurements is described, more specifically a time-of-flight imager and a time-of-flight imaging method with an improved accuracy. A depth correction profile is applied to the measured depth maps, which takes into account propagation delays within an array of pixels of a sensor of the time-of-flight imager. | 12-15-2011 |
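Applying such a depth correction profile reduces, per pixel, to subtracting the offset induced by the propagation delay across the sensor's pixel array; a minimal sketch with made-up profile values (the array layout and units are assumptions, not the imager's actual calibration format):

```python
import numpy as np

def correct_depth(depth_map, correction_profile):
    """Subtract the per-pixel depth offset caused by propagation delays
    within the pixel array of the time-of-flight sensor."""
    return (np.asarray(depth_map, dtype=float)
            - np.asarray(correction_profile, dtype=float))

measured = [[2.10, 2.12],
            [2.11, 2.14]]   # measured depths (metres), illustrative
profile = [[0.10, 0.12],
           [0.11, 0.14]]    # hypothetical delay-induced offsets (metres)
corrected = correct_depth(measured, profile)  # every entry -> 2.0
```

In practice the profile would come from a calibration step, since the delay varies with a pixel's position relative to the readout electronics.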
20110310226 | USE OF WAVEFRONT CODING TO CREATE A DEPTH IMAGE - A 3-D depth camera system, such as in a motion capture system, tracks an object such as a human in a field of view using an illuminator, where the field of view is illuminated using multiple diffracted beams. An image sensing component obtains an image of the object using a phase mask according to a double-helix point spread function, and determines a depth of each portion of the image based on a relative rotation of dots of light of the double-helix point spread function. In another aspect, dual image sensors are used to obtain a reference image and a phase-encoded image. A relative rotation of features in the images can be correlated with a depth. Depth information can be obtained using an optical transfer function of a point spread function of the reference image. | 12-22-2011 |
20110310227 | MOBILE DEVICE BASED CONTENT MAPPING FOR AUGMENTED REALITY ENVIRONMENT - Methods, apparatuses, and systems are provided to facilitate the deployment of media content within an augmented reality environment. In at least one implementation, a method is provided that includes extracting a three-dimensional feature of a real-world object captured in a camera view of a mobile device, and attaching a presentation region for a media content item to at least a portion of the three-dimensional feature responsive to a user input received at the mobile device. | 12-22-2011 |
20110310228 | METHOD FOR ADJUSTING ROI AND 3D/4D IMAGING APPARATUS USING THE SAME - A three-dimensional/four-dimensional (3D/4D) imaging apparatus and a region of interest (ROI) adjustment method and device are provided. An ROI is adjusted through an E image in a 3D/4D imaging mode, in which the E image is refreshed in real time when the ROI is adjusted and has a scan line range larger than that of the ROI. | 12-22-2011 |
20110310229 | PROFILE MEASURING DEVICE, PROFILE MEASURING METHOD, AND METHOD OF MANUFACTURING SEMICONDUCTOR PACKAGE - There is provided a profile measuring device. The profile measuring device includes: a projector which projects a certain pattern on an object to be measured using incoherent light having a plurality of wavelength components; a first imaging device which captures a first image of the object on which the certain pattern is projected; a second imaging device which captures a second image of the object on which the certain pattern is projected; and a computing device which measures a profile of the object based on the first image and the second image. | 12-22-2011 |
20110316974 | METHOD AND SYSTEM FOR REDUCING GHOST IMAGES OF THREE-DIMENSIONAL IMAGES - A method and system for reducing ghost images in three-dimensional (3D) images are disclosed. The method comprises: calculating a brightness difference distribution between a left-eye image and a right-eye image; determining a space factor indicating a brightness change resulting from the brightness difference distribution on the left-eye image or the right-eye image, the space factor being determined according to two-dimensional (2D) | 12-29-2011 |
20110316975 | STEREO IMAGING APPARATUS AND METHOD - A stereo imaging apparatus comprises two groups of optical imaging lenses | 12-29-2011 |
20110316976 | IMAGING APPARATUS CAPABLE OF GENERATING THREE-DIMENSIONAL IMAGES, THREE-DIMENSIONAL IMAGE GENERATING METHOD, AND RECORDING MEDIUM - An imaging apparatus generates a 3D model using a photographed image of a subject and generates a 3D image based on the 3D model. When a point forming the 3D model has no corresponding point in a 3D model generated using an image photographed at a different photographing position, the imaging apparatus determines that the point is noise and removes it from the 3D model. The imaging apparatus then generates a 3D image based on the 3D model from which the noise points have been removed. | 12-29-2011 |
20110316977 | METHOD OF CNC PROFILE CUTTING PROGRAM MANIPULATION - A method of CNC profile cutting program manipulation. The CNC imaging system comprises a capture device and a bed located below the capture device, wherein the bed has at least two reference points affixed thereto. A cutting head is mounted above the bed, and a controller controls movement of the cutting head. An image processing device communicates with the capture device and with the controller. | 12-29-2011 |
20110316978 | INTENSITY AND COLOR DISPLAY FOR A THREE-DIMENSIONAL METROLOGY SYSTEM - Described are a method and apparatus for generating a display of a three-dimensional (“3D”) metrology surface. The method includes determining a 3D point cloud representation of a surface of an object in a point cloud coordinate space. An image of the object is acquired in a camera coordinate space and then transformed from the camera coordinate space to the point cloud coordinate space. The transformed image is mapped onto the 3D point cloud representation to generate a realistic display of the surface of the object. In one embodiment, a metrology camera used to acquire images for determination of the 3D point cloud is also used to acquire the image of the object so that the transformation between coordinate spaces is not performed. The display includes a grayscale or color shading for the pixels or surface elements in the representation. | 12-29-2011 |
20110316979 | Method and Apparatus For Vehicle Service System Optical Target Assembly - A machine vision vehicle wheel alignment system for acquiring measurements associated with a vehicle. The system includes at least one imaging sensor having a field of view and at least one optical target secured to a wheel assembly on a vehicle within the field of view of the imaging sensor. The optical target includes a plurality of visible target elements disposed on at least two surfaces in a determinable geometric and spatial configuration. A processing unit in the system is configured to receive at least two sets of image data from the imaging sensor, with each set of image data acquired at a different rotational position of the wheel assembly around an axis of rotation and representative of at least one visible target element on each of the two surfaces, from which the processing unit is configured to identify said axis of rotation of the wheel assembly. | 12-29-2011 |
20120002013 | VIDEO PROCESSING APPARATUS AND CONTROL METHOD THEREOF - There is provided a video processing apparatus comprising: a video data obtainment unit configured to obtain arbitrary viewpoint video data having an arbitrary viewpoint specified by a user, the arbitrary viewpoint video data being generated based on a plurality of video data having different viewpoints; an output unit configured to output the arbitrary viewpoint video data obtained by the video data obtainment unit to a display unit; and a notification unit configured to notify the user based on a degree of matching between first viewpoint information corresponding to the plurality of video data having different viewpoints and second viewpoint information corresponding to the arbitrary viewpoint video data. | 01-05-2012 |
20120007952 | RECORDING CONTROL APPARATUS, SEMICONDUCTOR RECORDING APPARATUS, RECORDING SYSTEM, AND NONVOLATILE STORAGE MEDIUM - A recording control apparatus includes an input unit operable to input plural kinds of image data composing a stereoscopic image or a high-definition image, and a recording controller operable to control recording of the plural kinds of image data input by the input unit, to a nonvolatile storage medium. The recording controller controls the recording so that the plural kinds of image data input by the input unit are recorded to different erase blocks of the nonvolatile storage medium in such a manner that different kinds of image data are not mixed in one erase block of the nonvolatile storage medium. | 01-12-2012 |
20120007953 | MOBILE TERMINAL AND 3D IMAGE CONTROLLING METHOD THEREIN - A mobile terminal and 3D image controlling method therein are provided, by which a user can be informed of a state of a 3D effect on one or more objects in a 3D image. The mobile terminal includes a first camera configured to capture a left-eye image for generating a 3D image, a second camera configured to capture a right-eye image for generating the 3D image, a display unit configured to display the 3D image generated based on the left-eye image and the right-eye image, and a controller configured to determine an extent of a 3D effect on at least one object included in the 3D image, and to control the display unit to display information indicating the determined extent of the 3D effect. | 01-12-2012 |
20120007954 | METHOD AND APPARATUS FOR A DISPARITY-BASED IMPROVEMENT OF STEREO CAMERA CALIBRATION - A method and apparatus for camera calibration. The method performs disparity estimation for the camera calibration and includes collecting statistical information from at least one disparity image, inferring sub-pixel misalignment between a left view and a right view of the camera, and utilizing the collected statistical information and the inferred sub-pixel misalignment for calibration refinement. | 01-12-2012 |
20120013710 | SYSTEM AND METHOD FOR GEOMETRIC MODELING USING MULTIPLE DATA ACQUISITION MEANS - A system and a method for modeling a predefined space including at least one three-dimensional physical surface, referred to hereinafter as a “measuring space”. The system and method use a scanning system that acquires three-dimensional (3D) data of the measuring space and at least one two-dimensional (2D) sensor that acquires 2D data of the measuring space. The system and method may generate combined compound reconstructed data (CRD), a 3D geometrical model of the measuring space, by combining the acquired 2D data with the acquired 3D data and reconstructing additional 3D points from the combined 3D and 2D data. The generated CRD model includes a point cloud with a substantially higher density of points than that of the corresponding acquired 3D data point cloud from which the CRD was generated. | 01-19-2012 |
20120013711 | METHOD AND SYSTEM FOR CREATING THREE-DIMENSIONAL VIEWABLE VIDEO FROM A SINGLE VIDEO STREAM - A method is provided for generating a 3D representation of a scene initially represented by a first video stream captured by a certain camera at a first set of viewing configurations. The method includes providing video streams compatible with capturing the scene by cameras, and generating an integrated video stream enabling three-dimensional display of the scene by integration of two video streams. The method includes calculating parameters characterizing a viewing configuration by analysis of elements having known geometrical parameters. The scene may be a sport scene with a playing field, a group of on-field objects and a group of background objects. The method includes segmenting a frame into those portions, separately associating each portion with a different viewing configuration, and merging them into a single frame. Also, the method may include calculating on-field footing locations of on-field objects, computing new locations in a new frame, and transforming the on-field objects to the respective frame as 2D objects. Furthermore, the method may include synthesizing on-field objects by segmenting portions of the object from respective frames of the first video stream, stitching the portions together and rendering the stitched object within a synthesized frame. | 01-19-2012 |
20120013712 | SYSTEM AND ASSOCIATED METHODS OF CALIBRATION AND USE FOR AN INTERACTIVE IMAGING ENVIRONMENT - In various embodiments, the present invention provides a system and associated methods of calibration and use for an interactive imaging environment based on the optimization of parameters used in various segmentation algorithm techniques. These methods address the challenge of automatically calibrating an interactive imaging system, so that it is capable of aligning human body motion, or the like, to a visual display. As such, the present invention provides a system and method of automatically and rapidly aligning the motion of an object to a visual display. | 01-19-2012 |
20120013713 | IMAGE INTEGRATION UNIT AND IMAGE INTEGRATION METHOD - An image integration unit includes: an imaging section which is installed in a moving body and which images a plurality of time-series images at different times; a three-dimensional image information calculating section which calculates three-dimensional image information in each of the time-series images based on the time-series images imaged by the imaging section; a stationary body area extracting section which extracts stationary body areas in each of the time-series images based on the three-dimensional image information; and an integrating section which calculates the corresponding stationary body areas between the time-series images from each of the stationary body areas extracted in each of the time-series images, and matches the corresponding stationary body areas to integrate the time-series images. | 01-19-2012 |
20120019620 | IMAGE CAPTURE DEVICE AND CONTROL METHOD - An image capture device and method creates a first matrix for an image. A pixel value of each point in the first matrix is compared with a pixel value of a corresponding point in a 3D figure template, to detect a three-dimensional (3D) area in the image. A lens of the image capture device is moved and the focus of the lens is adjusted to ensure that the device captures a clear 3D figure image. A second matrix for the clear 3D figure image is created, and a pixel value of each point in the second matrix is compared with a pixel value of a corresponding point in a 3D facial template, to detect a 3D facial area in the clear 3D figure image. The lens is moved and the focus of the lens is adjusted to ensure that the device captures a clear 3D facial image. | 01-26-2012 |
20120019621 | Transmission of 3D models - A method and an apparatus for transmitting a 3D model associated with stereoscopic content are described, and more specifically a method and an apparatus for the progressive transmission of 3D models. Also described are a method and an apparatus for preparing a 3D model associated with a 3D video frame for transmission, and a non-transient recording medium comprising such a prepared 3D model. The 3D model is split into one or more components. It is then determined whether a component of the one or more components is hidden by other 3D content. For transmission, those components of the 3D model that are not hidden by other 3D content are transmitted first. The remaining components of the 3D model are transmitted subsequently. | 01-26-2012 |
20120019622 | THERMAL POWERLINE RATING AND CLEARANCE ANALYSIS USING THERMAL IMAGING TECHNOLOGY - A method and apparatus are provided to acquire direct thermal measurements, for example, from a LiDAR collecting vehicle or air vessel, of an overhead electrical conductor substantially simultaneous with collection of 3-dimensional location data of the conductor, and utilize temperature information derived from the direct thermal measurements in line modeling, line rating, thermal line analysis, clearance analysis, and/or vegetation management. | 01-26-2012 |
20120026290 | MOBILE TERMINAL AND CONTROLLING METHOD THEREOF - A mobile terminal and controlling method thereof are disclosed, by which object information on an object within a 2-dimensional (hereinafter abbreviated 2D) preview image can be provided as object information of a 3-dimensional (hereinafter abbreviated 3D) type, or object information on an object within a 3D preview image can be provided as object information of a 3D type. The present invention includes displaying a preview image via at least one camera on a screen of a touchscreen, recognizing a current position of the mobile terminal, searching for object information on at least one object within the preview image based on the recognized current position, displaying the found object information within the preview image, and converting and displaying a touched specific point in a 3D shape if the specific point within the preview image is touched. Accordingly, the present invention converts a preview image for augmented reality to a 2D or 3D image and also converts information on an object within the preview image to a 2D or 3D image, thereby providing a user with various images in the augmented reality. | 02-02-2012 |
20120026291 | IMAGE PROCESSING APPARATUS AND METHOD - Provided is an image processing apparatus and method thereof. The image processing apparatus may extract a three-dimensional (3D) bidirectional flow by analyzing data of an input object. The image processing apparatus may calculate a 3D volumetric center density of the input object based on the 3D bidirectional flow. | 02-02-2012 |
20120026292 | MONITOR COMPUTER AND METHOD FOR MONITORING A SPECIFIED SCENE USING THE SAME - A method for monitoring a specified scene obtains a scene image of the specified scene captured by an image capturing device, determines a first sub-area of the scene image, detects a three dimensional (3D) | 02-02-2012 |
20120026293 | METHOD AND MEASURING ASSEMBLY FOR DETERMINING THE WHEEL OR AXLE GEOMETRY OF A VEHICLE - In a method for determining a wheel or axle geometry of a vehicle, the following steps are provided: illuminating a wheel region with structured and with unstructured light during a motion of at least one wheel and/or of the vehicle; acquiring multiple images of the wheel region during the illumination, in order to create a three-dimensional surface model having surface parameters, a texture model having texture parameters, and a motion model having motion parameters of the sensed wheel region; calculating values for the surface parameters, the texture parameters, and the motion parameters using a variation computation as a function of the acquired images, in order to minimize a deviation of the three-dimensional surface model, texture model, and motion model from image data of the acquired images; and determining a rotation axis and/or a rotation center of the wheel as a function of the calculated values of the motion parameters. | 02-02-2012 |
20120026294 | DISTANCE-MEASURING OPTOELECTRONIC SENSOR FOR MOUNTING AT A PASSAGE OPENING - A distance measuring optoelectronic sensor | 02-02-2012 |
20120026295 | STEREO IMAGE PROCESSOR AND STEREO IMAGE PROCESSING METHOD - A stereo image processor | 02-02-2012 |
20120033043 | METHOD AND APPARATUS FOR PROCESSING AN IMAGE - A method and apparatus for processing an image are provided. The method includes obtaining an image, generating 3-dimensional (3D) disparity information that represents a degree of stereoscopic effects of the image, and outputting the 3D disparity information. | 02-09-2012 |
20120033044 | VIDEO DISPLAY SYSTEM, DISPLAY DEVICE AND SOURCE DEVICE - A video display system includes a source device for reproducing and outputting contents and a display device for displaying the contents output from the source device. Upon receiving a message requesting display of a 3D video from the source device while not yet ready to display the 3D video, the display device transmits a message for stopping reproduction of 3D contents to the source device. Upon receiving the message for stopping reproduction of 3D contents, the source device stops reproduction of the 3D contents. Upon completing preparations for displaying the 3D video, the display device transmits a message for reproducing the 3D contents to the source device. Upon receiving the message for reproducing the 3D contents, the source device reproduces and outputs the 3D contents. | 02-09-2012 |
20120033045 | Multi-Path Compensation Using Multiple Modulation Frequencies in Time of Flight Sensor - A method to compensate for multi-path in time-of-flight (TOF) three dimensional (3D) cameras applies different modulation frequencies in order to calculate/estimate the error vector. Multi-path in 3D TOF cameras might be caused by one of the two following sources: stray light artifacts in the TOF camera systems and multiple reflections in the scene. The proposed method compensates for the errors caused by both sources by implementing multiple modulation frequencies. | 02-09-2012 |
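For context, the phase-to-depth relation behind the entry above: at modulation frequency f, a measured phase shift φ maps to depth d = cφ/(4πf), so without multi-path the depths computed at two modulation frequencies agree, and a mismatch carries the error-vector information such a method estimates. A sketch under that standard relation, with illustrative values:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_depth(phase_rad, mod_freq_hz):
    """Depth from the measured phase shift at one modulation frequency
    (valid within the unambiguous range c / (2 * f))."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

# Consistent depths at 20 MHz and 30 MHz -> no multi-path indicated:
d20 = phase_to_depth(math.pi / 2.0, 20e6)
d30 = phase_to_depth(3.0 * math.pi / 4.0, 30e6)
mismatch = abs(d20 - d30)  # ~0 m here; a large value would flag multi-path
```

Both phases here encode the same distance (about 1.87 m), so the mismatch is essentially zero; stray light or multiple scene reflections would bias the two measurements differently and make it nonzero.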
20120033046 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - Provided is an image processing apparatus including: an acquisition unit which acquires stereoscopic image information used for displaying a stereoscopic image on a display unit; and a controller which, in the case where the parallax direction of the stereoscopic image displayed on the display unit and the parallax direction of the display unit are not coincident based on the stereoscopic image information, performs either a first control of displaying the stereoscopic image as a planar image on the display unit, or a second control of processing the stereoscopic image so that its parallax direction and the parallax direction of the display unit are coincident with each other and displaying the processed stereoscopic image on the display unit. | 02-09-2012 |
20120033047 | IMAGING DEVICE - An imaging device is provided that comprises a movement detection component configured to detect movement of the imaging device based on a force imparted to the imaging device, an imaging component configured to produce image data by capturing a subject image, a movement vector detection component configured to detect a movement vector based on a plurality of sets of image data produced by the imaging component, and a three-dimensional NR component configured to reduce a noise included in first image data produced by the imaging component, based on second image data produced earlier than the first image data, wherein the three-dimensional NR component is configured to decide whether to correct the second image data in response to the detection result of the movement vector detection component, based on both the detection result of the movement detection component and the detection result of the movement vector detection component. | 02-09-2012 |
20120033048 | 3D IMAGE DISPLAY APPARATUS, 3D IMAGE PLAYBACK APPARATUS, AND 3D IMAGE VIEWING SYSTEM - A 3D image display apparatus comprises a transmission-reception device and a control signal output device. The transmission-reception device receives video data including a plurality of pieces of image information, which are the base data of 3D images, from a 3D image playback apparatus through a transmission cable and thereby generates an image signal. The control signal output device transmits a control signal for controlling light penetration states of penetration units for the right and left eyes to shutter glasses. The transmission-reception device receives the video data from the 3D image playback apparatus through the transmission cable and thereby generates the image signal and a synchronizing signal. The synchronizing signal indicates which of the pieces of image information is included in the image signal currently outputted. The control signal output device generates the control signal based on the synchronizing signal. | 02-09-2012 |
20120038745 | 2D to 3D User Interface Content Data Conversion - A method of two dimensional (2D) content data conversion to three dimensional (3D) content data in a 3D television involves receiving 3D video content and 2D user interface content data via a 2D to 3D content conversion module. A displacement represented by disparity data that defines a separation of left eye and right eye data for 3D rendering of the 2D user interface content data is determined. The 3D video content is displayed on a display of the 3D television. 3D user interface content data is generated at a 3D depth on the display based upon the received 2D user interface content data and the determined displacement. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract. | 02-16-2012 |
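The determined displacement in the entry above amounts to horizontally offsetting left- and right-eye copies of the flat UI content; a sketch assuming a symmetric split of the disparity between the two eyes (a convention chosen for illustration, not necessarily the patent's):

```python
def eye_positions(x, disparity):
    """Left- and right-eye x coordinates for 2D UI content rendered at
    a 3D depth encoded by `disparity` pixels of separation."""
    half = disparity / 2.0
    return x - half, x + half

# A UI element at x=100 pushed to a depth with 8 px of disparity:
left_x, right_x = eye_positions(100.0, 8.0)  # -> 96.0 and 104.0
```

The sign and magnitude of the disparity then control whether the UI content appears in front of or behind the screen plane relative to the 3D video.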
20120044326 | Laser Scanner Device and Method for Three-Dimensional Contactless Recording of the Surrounding Area by Means of a Laser Scanner Device - The invention relates to a laser scanner device ( | 02-23-2012 |
20120056987 | 3D CAMERA SYSTEM AND METHOD - A system and method for generating 3D images comprising a plurality of fully-adjustable optical elements arranged in pyramidal configurations on parallel planes such that the cameras have different convergence points and focal points. | 03-08-2012 |
20120056988 | 3-D CAMERA - A 3-D camera is disclosed. The 3-D camera includes an optical system, a front-end block, and a processor. The front-end block further includes a combined image sensor to generate an image, which includes color information and near infra-red information of a captured object and a near infra-red projector to generate one or more patterns. The processor is to generate a color image and a near infra-red image from the image and then generate a depth map using the near infra-red image and the one or more patterns from a near infra-red projector. The processor is to further generate a full three dimensional color model based on the color image and the depth map, which may be aligned with each other. | 03-08-2012 |
20120056989 | IMAGE RECOGNITION APPARATUS, OPERATION DETERMINING METHOD AND PROGRAM - An object is to enable an accurate determination of an operation. A virtual operation screen is determined from an image or a position of an operator photographed by a video camera. Using the relative relation between the operator and the virtual operation screen, it is determined that an operation starts when a part of the operator comes to the near side of the operation screen as viewed from the video camera, and the configuration or movement of each portion is matched against operations estimated in advance to determine which operation it corresponds to. | 03-08-2012 |
20120056990 | IMAGE REPRODUCTION APPARATUS AND CONTROL METHOD THEREFOR - An image reproduction apparatus includes: a reproduction unit that reproduces a stereo image; a mode setting unit that sets one mode from a plurality of modes which include a first mode and a second mode; an adjustment unit that adjusts a maximum value of a disparity between an image for left eye and an image for right eye of the stereo image according to the mode that has been set by the mode setting unit; and a generation unit that generates a stereo image from the image for left eye and the image for right eye for which the disparity has been adjusted by the adjustment unit and outputs the generated stereo image, wherein the adjustment unit makes a maximum value of the disparity in the second mode less than a maximum value of the disparity in the first mode. | 03-08-2012 |
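The per-mode disparity ceiling described in 20120056990 — making the maximum disparity in the second mode smaller than in the first — can be sketched as a simple clamp. The pixel limits and mode names below are hypothetical illustrations, not values from the patent:

```python
def adjust_disparity(disparity_px: float, mode: str) -> float:
    """Clamp a stereo disparity to a per-mode maximum magnitude.

    The second mode uses a smaller ceiling than the first, reducing the
    perceived depth range (limits in pixels are illustrative assumptions).
    """
    max_disparity = {"first": 40.0, "second": 20.0}  # hypothetical ceilings
    limit = max_disparity[mode]
    return max(-limit, min(disparity_px, limit))
```

Applying the clamp symmetrically keeps both crossed (negative) and uncrossed (positive) disparity within the comfort range for the selected mode.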
20120056991 | IMAGE SENSOR, VIDEO CAMERA, AND MICROSCOPE - An image sensor ( | 03-08-2012 |
20120056992 | IMAGE GENERATION SYSTEM, IMAGE GENERATION METHOD, AND INFORMATION STORAGE MEDIUM - An image generation system includes a captured image acquisition section that acquires a captured image captured by an imaging section, a depth information acquisition section that acquires depth information about a photographic object observed within the captured image, an object processing section that performs a process that determines a positional relationship between the photographic object and a virtual object in a depth direction based on the acquired depth information, and synthesizes the virtual object with the captured image, and an image generation section that generates an image in which the virtual object is synthesized with the captured image. | 03-08-2012 |
20120062702 | ONLINE REFERENCE GENERATION AND TRACKING FOR MULTI-USER AUGMENTED REALITY - A multi-user augmented reality (AR) system operates without a previously acquired common reference by generating a reference image on the fly. The reference image is produced by capturing at least two images of a planar object and using the images to determine a pose (position and orientation) of a first mobile platform with respect to the planar object. Based on the orientation of the mobile platform, an image of the planar object, which may be one of the initial images or a subsequently captured image, is warped to produce the reference image of a front view of the planar object. The reference image may be produced by the mobile platform or by, e.g., a server. Other mobile platforms may determine their pose with respect to the planar object using the reference image to perform a multi-user augmented reality application. | 03-15-2012 |
20120062703 | INFORMATION DISPLAY DEVICE, REPRODUCTION DEVICE, AND STEREOSCOPIC IMAGE DISPLAY DEVICE - To provide an information display device to be included in a device outputting stereoscopic images in which images for the right and left eyes are arranged alternately in chronological order. The information display device includes a display unit which has light emitting units and which displays information other than the stereoscopic images using the light emitting units each of which alternates between a light-up state and a light-out state, the light-up state lasting for a predetermined period from a start of a light emission of the light emitting unit to an end of the light emission. Each of the light emitting units emits light so that the light-up state is included at least once during a shutter open period from when a shutter of eyeglasses used for viewing the stereoscopic images is opened to when the shutter is closed, the shutter being for one of the right and left eyes. | 03-15-2012 |
20120062704 | 3-D IMAGE PICKUP APPARATUS - A 3-D image pickup apparatus includes a lens portion that includes a lens system; and an adjustment ring portion that includes plural coaxially rotatable rings. Each ring adjusts a respective one of plural optical parameters of the lens system. | 03-15-2012 |
20120062705 | OVERLAPPING CHARGE ACCUMULATION DEPTH SENSORS AND METHODS OF OPERATING THE SAME - One embodiment includes sequentially resetting rows and applying a gating signal to the rows sequentially in order in which the rows are reset; accumulating at each of the rows photocharge generated in response to an optical signal reflected from an object and the gating signal for an integration time; and reading a result of photocharge accumulation from each of the rows. A phase of the gating signal applied to a row with respect to which the reading has been completed, may be changed. A period of photocharge accumulation based on the gating signal having a changed phase in at least one row, which has been subjected to the reading and then reset, may overlap a period of photocharge accumulation in at least one row in which photocharge accumulation based on the gating signal having a phase before being changed is being carried out. | 03-15-2012 |
20120069148 | IMAGE PRODUCTION DEVICE, IMAGE PRODUCTION METHOD, PROGRAM, AND STORAGE MEDIUM STORING PROGRAM - The image production device includes a deviation detecting device and an information production section. The deviation detecting device is configured to calculate the amount of relative deviation between left-eye image data and right-eye image data included in input image data. The information production section is configured to produce evaluation information related to the suitability of three-dimensional imaging based on the relative deviation amount calculated by the deviation detecting device. | 03-22-2012 |
20120069149 | PHOTOGRAPHING DEVICE AND CONTROLLING METHOD THEREOF, AND THREE-DIMENSIONAL INFORMATION MEASURING DEVICE - To make it easy to recognize that pixel resolving power is changed during photography operation. | 03-22-2012 |
20120069150 | IMAGE PROJECTION KIT AND METHOD AND SYSTEM OF DISTRIBUTING IMAGE CONTENT FOR USE WITH THE SAME - An image projection kit and an imagery content distribution system and method. In one aspect, the invention is a method of distributing projection clip files and/or displaying imagery associated with projection clip files on an architecture comprising: a) storing a plurality of projection clip files on a server that is accessible via a wide area network; b) authenticating a user's identity prior to allowing downloading of projection clip files stored on the server; c) identifying the projection clip files downloaded by the authenticated user; and d) charging the user a fee for the projection clip files downloaded by the user. | 03-22-2012 |
20120075422 | 3D INFORMATION GENERATOR FOR USE IN INTERACTIVE INTERFACE AND METHOD FOR 3D INFORMATION GENERATION - The present invention discloses a 3D information generator for use in an interactive interface. The 3D information generator includes: a MEMS light beam generator having at least one light source for providing a dot light beam and a MEMS mirror for projecting a movable scanning light beam according to the dot light beam to an object; an image sensor for sensing an image of the object to generate 2D image information; and a processor for generating distance information by a triangulation method according to a reflection result of the scanning light beam scanning the object, wherein the distance information is combined with the 2D image information to generate 3D information. | 03-29-2012 |
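The triangulation step in 20120075422 — recovering distance from where the reflected scanning beam lands on the sensor — reduces, under a pinhole-camera assumption, to the classic relation z = f·b / offset. This is a sketch of that textbook geometry with hypothetical parameter values; the patent does not specify this exact model:

```python
def triangulate_distance(baseline_m: float, focal_px: float,
                         spot_offset_px: float) -> float:
    """Laser-triangulation range under a pinhole model.

    The reflected spot's lateral offset on the sensor is inversely
    proportional to object distance: z = f * b / offset.
    (baseline_m: source-to-sensor baseline in metres; focal_px: focal
    length in pixels; all values here are illustrative assumptions.)
    """
    if spot_offset_px <= 0:
        raise ValueError("spot must be laterally displaced on the sensor")
    return focal_px * baseline_m / spot_offset_px
```

For example, a 5 cm baseline, an 800 px focal length, and a 20 px spot offset give a 2 m range; as the offset shrinks, the computed distance grows.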
20120075423 | Methods and Apparatus for Transient Light Imaging - In illustrative implementations of this invention, multi-path analysis of transient illumination is used to reconstruct scene geometry, even of objects that are occluded from the camera. An ultrafast camera system is used. It comprises a photo-sensor (e.g., accurate in the picosecond range), a pulsed illumination source (e.g. a femtosecond laser) and a processor. The camera emits a very brief light pulse that strikes a surface and bounces. Depending on the path taken, part of the light may return to the camera after one, two, three or more bounces. The photo-sensor captures the returning light bounces in a three-dimensional time image I(x,y,t) for each pixel. The camera takes different angular samples from the same viewpoint, recording a five-dimensional STIR (Space Time Impulse Response). A processor analyzes onset information in the STIR to estimate pairwise distances between patches in the scene, and then employs isometric embedding to estimate patch coordinates. | 03-29-2012 |
20120075424 | COMPUTER-READABLE STORAGE MEDIUM HAVING IMAGE PROCESSING PROGRAM STORED THEREIN, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - When an image of a marker existing in a real space is taken by using an outer camera, an image of a plurality of virtual characters which is taken by a virtual camera is displayed on an upper LCD so as to be superimposed on a taken real image of the real space. The virtual characters are located in a marker coordinate system based on the marker, and when a button operation is performed by a user on a game apparatus, the position and the orientation of each virtual character are changed. Then, when a button operation indicating a photographing instruction is provided by the user, an image being displayed is stored in a storage means. | 03-29-2012 |
20120075425 | HANDHELD DENTAL CAMERA AND METHOD FOR CARRYING OUT OPTICAL 3D MEASUREMENT - A handheld dental camera performs three-dimensional, optical measurements. The camera includes a light source that emits an illuminating beam, a scanning unit, a color sensor, and a deflector. The scanning unit focuses the illuminating beam onto a surface of an object to be measured. The surface of the object reflects the illuminating beam and forms a monitoring beam, which is detected by the color sensor. Focal points of wavelengths of the illuminating beam form chromatic depth measurement ranges. The scanning unit stepwise displaces the chromatic depth measurement ranges by a step width smaller than or equal to a length of each chromatic depth measurement range, so that a first chromatic depth measurement range in a first end position of the scanning unit and a second chromatic depth measurement range in a second end position are precisely adjoined in a direction of a measurement depth, or are partially overlapped. | 03-29-2012 |
20120075426 | IMAGE PICKUP SYSTEM - An image pickup system capable of shortening time lag between reading of signals from an image sensor and displaying of the signals when a 3D image signal from a camera having the single image sensor is displayed in real time with a time-division system. A solid state image pickup device has pixels that are arranged in two dimensions and are divided into image pickup areas. A reading unit reads signals from the image pickup areas. A mode setting unit sets either of a first shooting mode and a second shooting mode. A control unit controls the reading unit to read signals from all the image pickup areas as a single frame when the mode setting unit sets the first shooting mode, and reads the signals from the image pickup areas as different frames, respectively, when the mode setting unit sets the second shooting mode. | 03-29-2012 |
20120081517 | Image Processing Apparatus and Image Processing Method - According to one embodiment, an image processing apparatus includes a limiter, an input module, and a converter. The limiter is configured to limit an input image format according to an instruction for a 3D conversion for converting an input 2D image into a 3D image. The input module is configured to input a 2D image corresponding to an input image format based on the limitation. The converter is configured to convert the input 2D image into a 3D image. | 04-05-2012 |
20120081518 | METHOD FOR 3-DIMENSIONAL MICROSCOPIC VISUALIZATION OF THICK BIOLOGICAL TISSUES - The present invention discloses a method of visualizing the 3-dimensional microstructure of a thick biological tissue. This method includes a process of immersing thick, opaque biological tissues in an optical-clearing solution, for example FocusClear (U.S. Pat. No. 6,472,216), and utilizing an optical scanning microscope and a cutter. In microscopy, the cutter removes a portion of the tissue after each round of optical scanning. Each round of optical scanning follows the principle that the depth of the removal plane is less than the depth of the boundary plane derived from the scanning. This method acquires an image stack to provide the information of the thick biological tissue's 3-dimensional microstructure with minimal interference from the tissue removal. | 04-05-2012 |
20120086775 | Method And Apparatus For Converting A Two-Dimensional Image Into A Three-Dimensional Stereoscopic Image - A method and apparatus for converting a two-dimensional image into a stereoscopic three-dimensional image. In one embodiment, a computer implemented method of converting a two-dimensional image into a stereoscopic three-dimensional image including for each pixel within a right eye image, identifying at least one corresponding pixel from a left eye image and determining a depth and an intensity value for the each pixel within the right eye image using the at least one corresponding pixel, wherein the depth value is stored in a right eye depth map and the intensity value is stored in the right eye image and inpainting at least one occluded region within the right eye image using the right eye depth map. | 04-12-2012 |
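A toy one-dimensional version of the shift-and-inpaint idea in 20120086775 — warping left-eye pixels by a depth-derived disparity to synthesize the right-eye view, then filling the occlusion holes — might look like the following. The depth-to-disparity scale factor and the nearest-neighbour hole fill are illustrative assumptions, not the patented inpainting method:

```python
def render_right_view(left_row, depth_row, depth_to_disparity=0.1):
    """Synthesize a right-eye pixel row from a left-eye row plus per-pixel
    depth. Each pixel shifts left by a depth-proportional disparity;
    positions no source pixel maps to (occlusions) are then filled by
    copying the nearest filled neighbour on the left (a crude stand-in
    for inpainting). 1-D sketch; the scale factor is hypothetical."""
    width = len(left_row)
    right_row = [None] * width
    for x in range(width):
        d = int(round(depth_row[x] * depth_to_disparity))
        tx = x - d  # nearer (larger-depth-value) pixels shift further
        if 0 <= tx < width:
            right_row[tx] = left_row[x]
    for x in range(width):  # trivial occlusion fill
        if right_row[x] is None and x > 0:
            right_row[x] = right_row[x - 1]
    return right_row
```

Note how the pixel with nonzero depth lands one position to the left and leaves a hole at its original location, which the fill pass then covers.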
20120086776 | 3D display system with active shutter plate - A 3D display system uses a lenticular screen or a parallax barrier, along with a shutter plate, as a light directing device to allow a viewer's right eye to see a right image and the left eye to see a left image on a display panel. The right and left images are alternately displayed. The shutter plate has a plurality of right shutter segments and a plurality of left shutter segments arranged in an interleaving manner. When the right image is displayed, the right shutter segments are open and the left shutter segments are closed. When the left image is displayed, the right shutter segments are closed and the left shutter segments are open. But when the 3D display panel is used as a 2D display panel, both the right and left shutter segments are all open so that both the viewer's eyes see the image simultaneously. | 04-12-2012 |
20120086777 | SYSTEMS AND METHODS FOR DETECTING AND DISPLAYING THREE-DIMENSIONAL VIDEOS - A video player system includes a three-dimensional field detector and controller module for detecting a three-dimensional field of a video data to generate at least one of a detection signal and a control signal based on the three-dimensional field detected. The video data includes data for at least one image and the three-dimensional field within the image. The system also includes a display recomposition module coupled with the three-dimensional field detector and controller module, and the display recomposition module generates a recomposed three-dimensional field within the at least one image based on the detection signal and at least one of a plurality of display parameters associated with a display panel. The display panel is coupled with the three-dimensional field detector and controller module and the display recomposition module and displays the at least one image with the recomposed three-dimensional field based on at least one of the control signal and the display parameters. | 04-12-2012 |
20120086778 | TIME OF FLIGHT CAMERA AND MOTION TRACKING METHOD - In a motion tracking method using a time of flight (TOF) camera that is installed on a track system, three-dimensional (3D) images of people are captured using the TOF camera, and stored in a storage system to create a 3D image database. Scene images of a monitored area are captured in real-time and analyzed to check for motion. A movement direction of the motion is determined once motion has been detected and the TOF camera is moved along the track system to track the motion using a driving device according to the movement direction. | 04-12-2012 |
20120086779 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus is provided with a parallax detector configured to detect parallax between a left-eye image and a right-eye image used to display a 3D image, a parallax range computing unit configured to compute a range of parallax between the left-eye image and the right-eye image, a determining unit configured to determine whether or not the sense of depth when viewing the 3D image exceeds a preset range that is comfortable for a viewer, on the basis of the computed range of parallax, and a code generator configured to generate a code corresponding to the determination result of the determining unit. | 04-12-2012 |
20120086780 | Utilizing Depth Information to Create 3D Tripwires in Video - A method of processing a digital video sequence is provided that includes detecting a foreground object in an image captured by a depth camera, determining three-dimensional (3D) coordinates of the foreground object, and comparing the 3D coordinates to a 3D video tripwire to determine if the foreground object has crossed the 3D video tripwire. A method of defining a 3D video tripwire is also provided. | 04-12-2012 |
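One plausible reading of the 3D-coordinate comparison in 20120086780 is a signed-distance test against a user-defined plane, flagging a crossing when the sign flips between consecutive frames. The planar tripwire and the frame-pair test are assumptions for illustration; the abstract does not commit to this representation:

```python
def signed_distance(p, plane_point, plane_normal):
    """Signed distance (scaled by the normal's length) of 3-D point p
    from the tripwire plane defined by a point and a normal."""
    return sum((pi - qi) * ni for pi, qi, ni in zip(p, plane_point, plane_normal))

def crossed_tripwire(prev_p, cur_p, plane_point, plane_normal):
    """True when a tracked foreground object changed sides of the
    tripwire plane between two frames (sign of signed distance flips)."""
    return (signed_distance(prev_p, plane_point, plane_normal)
            * signed_distance(cur_p, plane_point, plane_normal)) < 0
```

Because the test uses full 3-D coordinates from the depth camera, an object passing behind or in front of the wire's plane segment can be distinguished by an additional extent check, which is omitted here for brevity.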
20120086781 | 3D Vision On A Chip - A 3D camera for determining distances to regions in a scene wherein gating or modulating apparatus for the 3D camera is incorporated on a photosurface of the camera on which light detectors of the camera are also situated. Each pixel in the photosurface may include its own pixel circuit for gating the pixel on or off or for modulating the sensitivity of the pixel to incident light. The circuit may comprise at least one amplifier inside the pixel, at least one feedback capacitor separate from the light sensitive element and connected between the input and output of each of the at least one amplifier, and at least one controllable connection through which current flows from the light sensitive element into the input of the at least one amplifier. The 3D camera may further include a light source and a controller. | 04-12-2012 |
20120092456 | VIDEO SIGNAL PROCESSING DEVICE, VIDEO SIGNAL PROCESSING METHOD, AND COMPUTER PROGRAM - A video signal processing device which includes, a stereoscopic image input unit which alternately inputs a video frame for a left eye and a video frame for a right eye, in a time sharing manner; a plane memory which maintains graphic data which overlaps with the video frame; a read phase addition unit which gives a phase difference when reading the graphic data from the plane memory at the time of displaying the video frame for the left eye and the video frame for the right eye; and a video overlapping unit which overlaps each graphic data of which a read phase is provided with a difference, with each of the video frame for the left eye and the video frame for the right eye. | 04-19-2012 |
20120092457 | STEREOSCOPIC IMAGE DISPLAY APPARATUS - A display apparatus | 04-19-2012 |
20120092458 | Method and Apparatus for Depth-Fill Algorithm for Low-Complexity Stereo Vision - A method and apparatus for depth-fill algorithm for low-complexity stereo vision. The method includes utilizing right and left images of a stereo camera to estimate depth of the scene, wherein the estimated depth relates to each pixel of the image, and updating a depth model with the current depth utilizing the estimated depth of the scene. | 04-19-2012 |
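The depth-model update in 20120092458 could be as simple as an exponential running average per pixel; the abstract only says the model is "updated with the current depth", so the blending rule and the weight below are hypothetical:

```python
def update_depth_model(model_row, depth_row, alpha=0.2):
    """Blend the current per-pixel depth estimate into a running depth
    model: model <- (1 - alpha) * model + alpha * depth.
    (alpha is a hypothetical smoothing weight, not from the patent.)"""
    return [(1 - alpha) * m + alpha * d for m, d in zip(model_row, depth_row)]
```

A small alpha suppresses per-frame stereo noise at the cost of slower adaptation to genuine scene changes.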
20120092459 | METHOD AND DEVICE FOR RECONSTRUCTION OF A THREE-DIMENSIONAL IMAGE FROM TWO-DIMENSIONAL IMAGES - The disclosure relates to a method for reconstruction of a three-dimensional image of an object. A first image is acquired of the object lit by a luminous flux having, in a region including the object, a luminous intensity dependant on the distance, with a light source emitting the luminous flux. A second image is acquired of the object lit by a luminous flux having, in a region including the object, a constant luminous intensity. For each pixel of a three-dimensional image, a relative distance of a point of the object is determined as a function of the intensity of a pixel corresponding to the point of the object in each of the acquired images. | 04-19-2012 |
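If the distance-dependent flux in 20120092459 follows an inverse-square fall-off — an assumption, since the abstract only says the intensity depends on distance — then the per-pixel ratio of the two acquired images cancels the surface reflectance, leaving r ∝ 1/d² and hence d ∝ 1/√r:

```python
import math

def relative_distance(i_falloff: float, i_uniform: float) -> float:
    """Relative range of a point from two exposures of the same pixel:
    one under distance-dependent (assumed inverse-square) illumination,
    one under uniform illumination. The ratio cancels albedo, so
    d = 1 / sqrt(i_falloff / i_uniform) up to a global scale."""
    if i_falloff <= 0 or i_uniform <= 0:
        raise ValueError("pixel intensities must be positive")
    return 1.0 / math.sqrt(i_falloff / i_uniform)
```

A pixel equally bright in both images sits at the unit reference distance; one four times dimmer under the fall-off flux sits twice as far away.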
20120092460 | System And Method For Alerting Visually Impaired Users Of Nearby Objects - A system and method for assisting a visually impaired user including a time of flight camera, a processing unit for receiving images from the time of flight camera and converting the images into signals for use by one or more controllers, and one or more vibro-tactile devices, wherein the one or more controllers activates one or more of the vibro-tactile devices in response to the signals received from the processing unit. The system preferably includes a lanyard means on which the one or more vibro-tactile devices are mounted. The vibro-tactile devices are activated depending on a determined position in front of the user of an object and the distance from the user to the object. | 04-19-2012 |
20120092461 | FOCUS SCANNING APPARATUS - Disclosed is a handheld scanner for obtaining and/or measuring the 3D geometry of at least a part of the surface of an object using confocal pattern projection techniques. Specific embodiments are given for intraoral scanning and scanning of the interior part of a human ear. | 04-19-2012 |
20120098933 | CORRECTING FRAME-TO-FRAME IMAGE CHANGES DUE TO MOTION FOR THREE DIMENSIONAL (3-D) PERSISTENT OBSERVATIONS - An imaging platform minimizes inter-frame image changes when there is relative motion of the imaging platform with respect to the scene being imaged, where the imaging platform may be particularly susceptible to image change, especially when it is configured with a wide field of view or high angular rate of movement. In one embodiment, a system is configured to capture images and comprises: a movable imaging platform having a sensor that is configured to capture images of a scene, each image comprising a plurality of pixels; and an image processor configured to: digitally transform captured images with respect to a common field of view (FOV) such that the transformed images appear to be taken by a non-moving imaging platform, wherein the pixel size and orientation of pixels of each transformed image are the same. A method for measuring and displaying 3-D features is also described. | 04-26-2012 |
20120098934 | Methods and Systems for Presenting Adjunct Content During a Presentation of a Media Content Instance - An exemplary method includes an adjunct content presentation system including adjunct content within a first image of a media content instance by setting a pixel value of a first group of pixels included in the first image to be greater than a predetermined neutral pixel value, including the adjunct content within a second image of the media content instance by setting a pixel value of a second group of pixels included in the second image and corresponding to the first group of pixels to be less than the predetermined neutral pixel value, and presenting the first and second images. The respective pixel values are set to result in the adjunct content being perceptible to a first viewer viewing only one of the first and second images and substantially imperceptible to a second viewer viewing both the first and second images. Corresponding methods and systems are also disclosed. | 04-26-2012 |
20120098935 | 3D TIME-OF-FLIGHT CAMERA AND METHOD - The present invention relates to a 3D time-of-flight camera for acquiring information about a scene, in particular for acquiring depth images of a scene, information about phase shifts of a scene or environmental information about the scene. The proposed camera particularly compensates motion artifacts by real-time identification of affected pixels and, preferably, corrects its data before actually calculating the desired scene-related information values from the raw data values obtained from radiation reflected by the scene. | 04-26-2012 |
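For context on the raw data such a camera corrects, the textbook four-phase continuous-wave ToF depth computation looks like this. It is the standard formula, not the patent's motion-artifact correction scheme, and the sample names are hypothetical:

```python
import math

def tof_depth(a0, a1, a2, a3, mod_freq_hz):
    """Four-phase CW time-of-flight depth: a0..a3 are correlation samples
    taken 90 degrees apart. The reflected signal's phase shift is
    phi = atan2(a3 - a1, a0 - a2), and depth = c * phi / (4 * pi * f)."""
    c = 299_792_458.0  # speed of light, m/s
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return c * phase / (4 * math.pi * mod_freq_hz)
```

Motion artifacts arise when the scene moves between the four samples, so a0..a3 no longer describe the same surface point; the patent's real-time identification of affected pixels targets exactly that inconsistency.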
20120098936 | PHOTOGRAPHING EQUIPMENT - Photographing equipment includes an image pickup portion, a display portion which displays an image acquired by the image pickup portion, an object detecting portion which detects a reference object of a predetermined size or a larger size within an image pickup range of the image pickup portion among objects in the image acquired by the image pickup portion, and a display controlling portion which displays a representation recommending 3D photographing on the display portion if the object detecting portion detects the reference object. | 04-26-2012 |
20120098937 | Markerless Geometric Registration Of Multiple Projectors On Extruded Surfaces Using An Uncalibrated Camera - A method for registering multiple projectors on a vertically extruded three dimensional display surface with a known aspect ratio includes recovering both the camera parameters and the three dimensional shape of the surface from a single image of the display surface from an uncalibrated camera, capturing images from the projectors to relate the projector coordinates with the display surface points, and segmenting parts of the image for each projector to register the projectors to create a seamless wallpapered projection on the display surface, using a rational Bezier patch as the representation relating the projector coordinates to display surface points. A method for performing a deterministic geometric auto-calibration to find intrinsic and extrinsic parameters of each projector is included. | 04-26-2012 |
20120105584 | CAMERA WITH SENSORS HAVING DIFFERENT COLOR PATTERNS - An image capture device includes a lens arrangement having a first lens associated with a first digital image sensor and a second lens associated with a second digital image sensor; the first digital image sensor having photosites of a first predetermined color pattern for producing a first digital image; the second digital image sensor having photosites of a different second predetermined color pattern for producing a second digital image. The image capture device also includes a device for causing the lens arrangement to capture a first digital image from the first digital image sensor and a second digital image from the second digital image sensor at substantially the same time; and a processor aligning the first and second digital images; the processor, using values of the second image based on the alignment between the first and second images, operates on the first digital image to produce an enhanced digital image. | 05-03-2012 |
20120105585 | IN-HOME DEPTH CAMERA CALIBRATION - A system and method are disclosed for calibrating a depth camera in a natural user interface. The system in general obtains an objective measurement of true distance between a capture device and one or more objects in a scene. The system then compares the true depth measurement to the depth measurement provided by the depth camera at one or more points and determines an error function describing an error in the depth camera measurement. The depth camera may then be recalibrated to correct for the error. The objective measurement of distance to one or more objects in a scene may be accomplished by a variety of systems and methods. | 05-03-2012 |
20120105586 | LATENT FINGERPRINT DETECTORS AND FINGERPRINT SCANNERS THEREFROM - An automatic fingerprint system includes an optical sensor having a first light source that provides a collimated beam for interrogating a first sample surface, and a camera including a lens and a photodetector array having a camera field of view (FOV | 05-03-2012 |
20120105587 | METHOD AND APPARATUS OF MEASURING DEPTH INFORMATION FOR 3D CAMERA - Provided is a depth information measuring method and apparatus for a three-dimensional (3D) camera. The depth information measuring method and apparatus may output, to an object, an optical pulse of which an intensity is higher than an intensity of an ambient light. The depth information measuring method and apparatus may generate a voltage that is proportional to a log value of an intensity of a light reflected from the object. The depth information measuring method and apparatus may use discharging units, and the discharging units may respectively include dischargers having different discharging speeds or capacitors having different capacities. | 05-03-2012 |
20120105588 | IMAGE CAPTURE DEVICE - An image capture device, to/from which a piece of equipment used to shoot video or record audio is attachable and removable, includes: a communications section adapted to get, when such a piece of equipment is attached to the device, property information of the piece of equipment; a processor adapted to determine, by reference to the property information, whether a user interface to control the operation of the piece of equipment needs to be displayed or not; a display section adapted to display the user interface when instructed by the processor to do so; and a touchscreen panel adapted to allow the user to operate the user interface. | 05-03-2012 |
20120105589 | REAL TIME THREE-DIMENSIONAL MENU/ICON SHADING - An image display apparatus is disclosed, comprising: a two-dimensional display for displaying three-dimensional object images; an imaging unit for capturing an image of a user who is viewing the display screen; a processing unit for determining a face direction of the user from the captured image; and a tilt sensor for determining an angle of the image display apparatus; wherein the processing unit determines a virtual light direction by subtracting the angle of the image display apparatus from the face direction, and a projection image generator projects the three-dimensional objects onto the display, wherein lighting and shading are applied to the three-dimensional objects based on the virtual light direction. A method for displaying three-dimensional images and a computer program for implementing the method are also disclosed. | 05-03-2012 |
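The lighting rule in 20120105589 — a virtual light direction obtained by subtracting the device tilt from the face direction, then applied to shade the objects — can be sketched in 2-D with a clamped Lambert term. The reduction to a single in-plane angle and all numeric values are illustrative assumptions:

```python
import math

def shade(normal, face_angle_deg, device_tilt_deg):
    """Clamped Lambert shading term for a surface with unit normal
    `normal` (2-D vector), lit from a virtual direction equal to the
    user's face angle minus the device tilt, both in degrees.
    Returns a brightness in [0, 1]."""
    light_angle = math.radians(face_angle_deg - device_tilt_deg)
    light = (math.sin(light_angle), math.cos(light_angle))  # unit vector
    dot = normal[0] * light[0] + normal[1] * light[1]
    return max(0.0, dot)  # surfaces facing away receive no light
```

Tilting the device by the same angle as the face direction cancels out, so an upward-facing surface stays fully lit — the effect the subtraction in the abstract is designed to produce.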
20120105590 | ELECTRONIC EQUIPMENT - Electronic equipment includes a target output image generating portion that generates a target output image by changing a depth of field of a target input image by image processing, a monitor that displays on a display screen a distance histogram indicating a distribution of distance between an object at each position in the target input image and an apparatus that photographed the target input image, and displays on the display screen a selection index that is movable along a distance axis in the distance histogram, and a depth of field setting portion that sets a depth of field of the target output image based on a position of the selection index determined by an operation for moving the selection index along the distance axis. | 05-03-2012 |
20120105591 | METHOD AND APPARATUS FOR HIGH-SPEED CALIBRATION AND RECTIFICATION OF A STEREO CAMERA - A calibration and rectification method includes arranging a monitor vertically relative to the optical axis of the stereo camera; displaying 3D patterns, similar to patterns obtained by projecting pattern images of various postures produced by a panel virtually located in front of the stereo camera, onto the monitor through a 3D graphical technique; and the stereo camera acquiring the 3D patterns displayed on the monitor to perform calibration and rectification, thereby correcting images of the stereo camera. | 05-03-2012 |
20120113223 | User Interaction in Augmented Reality - Techniques for user-interaction in augmented reality are described. In one example, a direct user-interaction method comprises displaying a 3D augmented reality environment having a virtual object and a real first and second object controlled by a user, tracking the position of the objects in 3D using camera images, displaying the virtual object on the first object from the user's viewpoint, and enabling interaction between the second object and the virtual object when the first and second objects are touching. In another example, an augmented reality system comprises a display device that shows an augmented reality environment having a virtual object and a real user's hand, a depth camera that captures depth images of the hand, and a processor. The processor receives the images, tracks the hand pose in six degrees-of-freedom, and enables interaction between the hand and the virtual object. | 05-10-2012 |
20120113224 | Determining Loudspeaker Layout Using Visual Markers - A method consistent with certain implementations involves at a listening position, capturing a plurality of photographic images with a camera of a corresponding plurality of loudspeakers forming part of an audio system; determining from the plurality of captured images, a geometric configuration representing a positioning of the plurality of loudspeakers connected to the audio system; and outputting the geometric configuration of the plurality of loudspeakers to the audio system. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract. | 05-10-2012 |
20120113225 | BIOMETRIC MEASUREMENT SYSTEMS AND METHODS - In various embodiments, the present disclosure provides a method of generating crop biometric information in field conditions that includes scanning top surfaces of various plant crown structures of a plurality of plants in one or more rows of plants within a field to collect scan data of the crown structures. Additionally, the method includes converting the scan data into a high spatial resolution 3-dimensional field contour map that illustrates an aggregate 3-dimensional field contour of the scanned plants. The method further includes extracting, from the high spatial resolution 3-dimensional field contour map, biometric information relating to the plants in each of one or more selected rows of the scanned rows of plants. | 05-10-2012 |
20120113226 | 3D IMAGING DEVICE AND 3D REPRODUCTION DEVICE - A 3D imaging device is provided that includes an identification unit, a parallax information decision unit, a display position decision unit, and a 3D display control unit. The identification unit calculates parallax information with respect to an object image based on a first image signal and a second image signal. The identification unit also sets identification information for identifying an object image. The identification unit further outputs first parallax information and identification information. The parallax information decision unit decides on second parallax information based on first parallax information so that identification information is visually recognizable at a depth separate from the object image. The 3D display control unit is coupled to at least one of the identification unit, the parallax information decision unit, and the display position decision unit. The 3D display control unit displays identification information superimposed on the first and second image signals based on second parallax information. | 05-10-2012 |
20120113227 | APPARATUS AND METHOD FOR GENERATING A FULLY FOCUSED IMAGE BY USING A CAMERA EQUIPPED WITH A MULTI-COLOR FILTER APERTURE - Provided are an apparatus and method for generating a fully focused image. A depth map generation unit generates a depth map of an input image obtained by a multiple color filter aperture (MCA) camera. A channel shifting & alignment unit extracts subimages which include objects with the same focal distance based on the depth map, performs color channel alignment, and removes out-of-focus blurs for each subimage obtained from the depth map. An image fusing unit fuses the subimages to generate a fully focused image. | 05-10-2012 |
20120120195 | 3D CONTENT ADJUSTMENT SYSTEM - A 3D content adjustment system includes a processor. A camera is coupled to the processor. A non-transitory, computer-readable medium is coupled to the processor and the camera. The computer-readable medium includes a content adjustment engine including instructions that when executed by the processor receive viewer information from the camera, modify a plurality of original stereoscopic images using the viewer information to create a plurality of modified stereoscopic images, and output the plurality of modified stereoscopic images. | 05-17-2012 |
20120120196 | IMAGE COUNTING METHOD AND APPARATUS - The image counting method includes the steps of: acquiring 3D images from the region by a 3D camera, wherein the 3D images include a plurality of pixels, and the pixels have x, y and z coordinate values and pixel data; mapping the x, y and z coordinate values and the pixel data of the pixels into a plurality of correlative coordinate values of a spatial correlative coordinate represented as (x, z, t), wherein t is the number of pixels whose pixel data are lower than a threshold in y direction with the same x and z coordinate values; grouping the correlative coordinate values into a plurality of groups according to a correlation between each of the correlative coordinate values in x-z plane; and comparing the correlative coordinate values of each of groups with the correlative coordinate values of the 3D images of the specific objects to determine the number of specific objects in the region. | 05-17-2012 |
20120120197 | APPARATUS AND METHOD FOR SHARING HARDWARE BETWEEN GRAPHICS AND LENS DISTORTION OPERATION TO GENERATE PSEUDO 3D DISPLAY - A system, method, and computer program product for providing pseudo 3D user interface effects in a digital camera with existing lens distortion correction hardware. A distortion map normally used to correct captured images instead alters a displayed user interface object image to support production of a “pseudo 3D” version of the object image via production of at least one modified image. A blending map also selectively mixes the modified image with a second image to produce a distorted blended image. A set or series of such images may be produced automatically or at user direction to generate static or animated effects in-camera without a graphics accelerator, resulting in hardware cost savings and extended battery life. | 05-17-2012 |
20120120198 | THREE-DIMENSIONAL SIZE MEASURING SYSTEM AND THREE-DIMENSIONAL SIZE MEASURING METHOD - A system for measuring a three-dimensional (3D) size of an object in a space according to an indicating mark is provided, wherein the indicating mark is used to point to one of a plurality of measuring points on the object. The system includes an image capturing module, a spatial vector calculating module, and a measuring module. The image capturing module captures an image of the space. The spatial vector calculating module respectively calculates a spatial vector corresponding to the indicating mark when the indicating mark is used to point to each of the measuring points on the object in the image. The measuring module calculates spatial coordinates of the measuring points according to the spatial vectors so as to obtain the 3D size of the object. | 05-17-2012 |
20120120199 | METHOD FOR DETERMINING THE POSE OF A CAMERA WITH RESPECT TO AT LEAST ONE REAL OBJECT - A method for determining the pose of a camera with respect to at least one real object, the method comprises the following steps: operating the camera ( | 05-17-2012 |
20120120200 | COMBINING 3D VIDEO AND AUXILIARY DATA - A three dimensional [3D] video signal ( | 05-17-2012 |
20120127271 | STEREO VIDEO CAPTURE SYSTEM AND METHOD - A method is provided for a stereo video capture system. The stereo video system includes a stereo video monitor, a control platform, and a three-dimensional (3D) capture imaging device. The method includes capturing at least a first image and a second image, with a parallax between the first image and the second image based on a first parallax configuration. The method also includes receiving the first and second images; and calculating a value of at least one parallax setting parameter associated with the first and second images and corresponding to the first parallax configuration. Further, the method includes determining whether the value is within a pre-configured range. When the value is out of the pre-configured range, the method includes converting the first parallax configuration into a second parallax configuration. The method also includes sending the second parallax configuration to the 3D capture imaging device, and adopting the second parallax configuration in operation. | 05-24-2012 |
20120127272 | 3D IMAGE CAPTURING DEVICE AND METHOD - A 3D image capturing device includes a first light incident hole, a second light incident hole, a photosensitive element, and a processing unit. The photosensitive element is located at an intersection point between a first light path formed by the first light incident hole and a second light path formed by the second light incident hole. The processing unit includes a 3D image sensing module and a 3D image synthesizing module. The 3D image sensing module enables the photosensitive element to alternately sense at least one first image and at least one second image. The first image is sensed by the photosensitive element through the first light path and the second image is sensed by the photosensitive element through the second light path. The 3D image synthesizing module synthesizes the alternately sensed first and second images to generate at least one 3D image. | 05-24-2012 |
20120127273 | IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREOF - An image processing apparatus includes a depth map generator which generates a depth map of a predetermined image which includes at least one object; a disparity estimator which estimates a reference disparity of a left eye image and a right eye image at a predetermined distance from the object based on the generated depth map; a disparity calculator which calculates a changed disparity of the left eye image and the right eye image at a changed distance by using the estimated reference disparity if the predetermined distance is changed; and a three-dimensional (3D) image generator which generates a 3D image which moves horizontally from the left eye image and the right eye image corresponding to the changed disparity. | 05-24-2012 |
20120127274 | APPARATUS AND METHOD FOR PROVIDING IMAGES IN WIRELESS COMMUNICATION SYSTEM AND PORTABLE DISPLAY APPARATUS AND METHOD FOR DISPLAYING IMAGES - A mobile communication system for displaying a three-dimensional (3D) image is provided. The mobile communication system includes an image providing apparatus to generate a first two-dimensional (2D) image Transport Stream (TS), and a second 2D image TS by capturing the same target in different directions, a Multicast Broadcast Service (MBS) server to control at least two base stations included in an MBS area to individually transmit the first 2D image TS and the second 2D image TS, and a portable display apparatus to receive the first 2D image TS and the second 2D image TS, to divide the first 2D image TS and the second 2D image TS into first 2D image data and second 2D image data, respectively, and to display a 2D image or a 3D image based on an image quality of each of the first 2D image data and the second 2D image data. | 05-24-2012 |
20120127275 | IMAGE PROCESSING METHOD FOR DETERMINING DEPTH INFORMATION FROM AT LEAST TWO INPUT IMAGES RECORDED WITH THE AID OF A STEREO CAMERA SYSTEM - An image processing method is described for determining depth information from at least two input images recorded by a stereo camera system, the depth information being determined from a disparity map taking into account geometric properties of the stereo camera system, characterized by the following method steps for ascertaining the disparity map: transforming the input images into signature images with the aid of a predefined operator, calculating costs based on the signature images with the aid of a parameter-free statistical rank correlation measure for ascertaining a cost range for predefined disparity levels in relation to at least one of the at least two input images, performing a correspondence analysis for each point of the cost range for the predefined disparity levels, the disparity to be determined corresponding to the lowest costs, and ascertaining the disparity map from the previously determined disparities. | 05-24-2012 |
20120133737 | IMAGE SENSOR FOR SIMULTANEOUSLY OBTAINING COLOR IMAGE AND DEPTH IMAGE, METHOD OF OPERATING THE IMAGE SENSOR, AND IMAGE PROCESSING SYSTEM INCLUDING THE IMAGE SENSOR - An image sensor includes a light source that emits modulated light such as visible light, white light, or white light-emitting diode (LED) light to a target object, a plurality of pixels, and an image processing unit. The pixels include at least one pixel for outputting pixel signals according to light reflected by the target object. The image processing unit simultaneously generates a color image and a depth image from the pixel signals of the at least one pixel. | 05-31-2012 |
20120133738 | Data Processing System and Method for Providing at Least One Driver Assistance Function - The invention relates to a data processing system and a method for providing at least one driver assistance function. A stationary receiving unit ( | 05-31-2012 |
20120133739 | IMAGE PROCESSING APPARATUS - An image processing apparatus employs a basic configuration including an image processing controller, which processes images captured by a stereo camera, and a recognition processing controller, which recognizes an object based on information from the image processing controller. It includes a target object area specifying unit, a feature amount extracting unit, and a smoke determining unit as functions for recognizing a smoky object. The target object area specifying unit specifies the area of an object which is a detection target by canceling the influence of the background; the feature amount extracting unit extracts an image feature amount for recognizing a smoky object in the target object area; and the smoke determining unit determines whether the object in the target object area is a smoky object or an object other than a smoky object, based on the extracted image feature amount. | 05-31-2012 |
20120133740 | FLUORESCENT NANOSCOPY METHOD - A method for analysis of an object dyed with fluorescent coloring agents. Separately fluorescing visible molecules or nanoparticles are periodically formed in different object parts, the laser produces the oscillation thereof which is sufficient for recording the non-overlapping images of the molecules or nanoparticles and for decoloring already recorded fluorescent molecules, wherein tens of thousands of pictures of recorded individual molecule or nanoparticle images, in the form of stains having a diameter on the order of a fluorescent light wavelength multiplied by a microscope amplification, are processed by a computer for searching the coordinates of the stain centers and building the object image according to millions of calculated stain center co-ordinates corresponding to the co-ordinates of the individual fluorescent molecules or nanoparticles. Two-dimensional and three-dimensional images are provided for proteins, nucleic acids and lipids with different coloring agents. | 05-31-2012 |
20120133741 | CAMERA CHIP, CAMERA AND METHOD FOR IMAGE RECORDING - The invention relates to a camera chip (C) for image acquisition. It is characterized in that pixel groups (P | 05-31-2012 |
20120133742 | GENERATING A TOTAL DATA SET - The invention relates to generating a total data set of at least one segment of an object for determining at least one characteristic by merging individual data sets determined by means of an optical sensor moving relative to the object and of an image processor, wherein individual data sets of sequential images of the object contain redundant data that are matched for merging the individual data sets. In order that the data obtained by scanning the object are of sufficient quantity for performing an optimal analysis, but without being too great an amount of data for processing, the invention proposes that individual data sets determined per unit of time be varied as a function of the relative motion between the optical sensor and the object. | 05-31-2012 |
20120133743 | THREE-DIMENSIONAL IMAGE PICKUP DEVICE - The 3D image capture device of this invention includes: a light-transmitting section | 05-31-2012 |
20120140038 | ZERO DISPARITY PLANE FOR FEEDBACK-BASED THREE-DIMENSIONAL VIDEO - The techniques of this disclosure are directed to the feedback-based stereoscopic display of three-dimensional images, such as may be used for video telephony (VT) and human-machine interface (HMI) applications. According to one example, a region of interest (ROI) of stereoscopically captured images may be automatically determined based on disparity determined for at least one pixel of the captured images. According to another example, a zero disparity plane (ZDP) for the presentation of a 3D representation of stereoscopically captured images may be determined based on an identified ROI. According to this example, the ROI may be automatically identified, or identified based on receipt of user input identifying the ROI. | 06-07-2012 |
20120140039 | RUNNING-ENVIRONMENT RECOGNITION APPARATUS - A running-environment recognition apparatus includes an information recognition section mounted in a vehicle and configured to recognize an information of at least a frontward region of the vehicle relative to a traveling direction of the vehicle; and a road-surface calculating section configured to calculate a road surface of a traveling road and a portion lower than the road surface in the frontward region of the vehicle, from the information recognized by the information recognition section. | 06-07-2012 |
20120140040 | INFORMATION DISPLAY SYSTEM, INFORMATION DISPLAY APPARATUS, INFORMATION PROVISION APPARATUS AND NON-TRANSITORY STORAGE MEDIUM - The utility of a service using augmented reality (AR) is improved. An information display apparatus which performs information display according to AR acquires, from an information provision apparatus, a reference image for matching with a subject in a captured image and scale information which indicates a scale of the subject in the reference image. An inputter-outputter of the information display apparatus then displays the captured image together with a distribution of the reference images by scale of the subject. On this distribution display, a guide display according to the scale of the subject in the captured image is performed, and the guide display moves on the distribution display corresponding to a change of the angle of view. Accordingly, an angle of view at which matching using the acquired reference image can be performed appropriately is easily set, which improves convenience. | 06-07-2012 |
20120140041 | METHOD AND SYSTEM FOR THE REMOTE INSPECTION OF A STRUCTURE - The invention relates to a method for the remote inspection of a structure, comprising the following operations: | 06-07-2012 |
20120140042 | WARNING A USER ABOUT ADVERSE BEHAVIORS OF OTHERS WITHIN AN ENVIRONMENT BASED ON A 3D CAPTURED IMAGE STREAM - A computer-implemented method, system, and program include a behavior processing system for capturing a three-dimensional movement of a monitored user within a particular environment monitored by a supervising user, wherein the three-dimensional movement is determined by using at least one image capture device aimed at the monitored user. The behavior processing system identifies a three-dimensional object properties stream using the captured movement. The behavior processing system identifies a particular defined adverse behavior of the monitored user represented by the three-dimensional object properties stream by comparing the identified three-dimensional object properties stream with multiple adverse behavior definitions. In response to identifying the particular defined adverse behavior from among the multiple adverse behavior definitions, the behavior processing system triggers a warning system to notify the supervising user about the particular defined adverse behavior of the monitored user through an output interface only detectable by the supervising user. | 06-07-2012 |
20120147142 | IMAGE TRANSMITTING APPARATUS AND CONTROL METHOD THEREOF, AND IMAGE RECEIVING APPARATUS AND CONTROL METHOD THEREOF - Provided are an image transmitting apparatus and method, and an image receiving apparatus and method. The image transmitting apparatus includes: a video processor which converts a first video signal, including a left-eye image and a right-eye image corresponding to a frame of a three-dimensional (3D) image, into a second video signal by increasing a number of data bits per pixel of a first video signal and merging two pieces of pixel information respectively corresponding to the left-eye image and the right-eye image into one piece of pixel information; and a video output unit which transmits the second video signal. | 06-14-2012 |
20120147143 | OPTICAL SYSTEM HAVING INTEGRATED ILLUMINATION AND IMAGING OPTICAL SYSTEMS, AND 3D IMAGE ACQUISITION APPARATUS INCLUDING THE OPTICAL SYSTEM - An optical system including integrated illumination and imaging optical systems, and a 3-dimensional (3D) image acquisition apparatus including the optical system. In the optical system of the 3D image acquisition apparatus, the illumination optical system and the imaging optical system are integrated to have a coaxial optical path. Accordingly, there is no parallax between the illumination optical system and the imaging optical system, so that depth information about an object acquired using illumination light may reflect actual distances between the object and the 3D image acquisition apparatus. Consequently, the depth information about the object may be more precise. The zero parallax between the illumination optical system and the imaging optical system may improve utilization efficiency of the illumination light. As a result, a greater amount of light may be incident on the 3D image acquisition apparatus, which makes it possible to acquire even more precise depth information about the object. | 06-14-2012 |
20120147144 | TRAJECTORY PROCESSING APPARATUS AND METHOD - A trajectory processing apparatus comprises a trajectory database configured to store a position coordinate of a movable body detected from a camera image in association with data that specifies the camera image from which the movable body is detected, and a camera image database configured to store the camera image. A control section fetches the position coordinate of the movable body and the specifying data for the camera image from which the movable body is detected from the trajectory database. Further, the position coordinate of the movable body fetched from the trajectory database is displayed in a display section as a trajectory of the movable body. Furthermore, the control section acquires from the camera image database the camera image specified by the specifying data fetched from the trajectory database. Moreover, this camera image is displayed in the display section. | 06-14-2012 |
20120154535 | CAPTURING GATED AND UNGATED LIGHT IN THE SAME FRAME ON THE SAME PHOTOSURFACE - A photosensitive surface of an image sensor, hereafter a photosurface, of a gated | 06-21-2012 |
20120154536 | METHOD AND APPARATUS FOR AUTOMATICALLY ACQUIRING FACIAL, OCULAR, AND IRIS IMAGES FROM MOVING SUBJECTS AT LONG-RANGE - The present invention relates to a method and apparatus for long-range facial and ocular acquisition. One embodiment of a system for acquiring an image of a subject's facial feature(s) includes a steerable telescope configured to acquire the image of the facial feature(s), a first computational imaging element configured to minimize the effect of defocus in the image of the facial feature(s), and a second computational imaging element configured to minimize the effects of motion blur. In one embodiment, the detecting, the acquiring, the minimizing the effect of the motion, and the minimizing the effect of the defocus are performed automatically without a human input. | 06-21-2012 |
20120154537 | IMAGE SENSORS AND METHODS OF OPERATING THE SAME - According to example embodiments, a method of operating a three-dimensional image sensor comprises measuring a distance of an object from the three-dimensional image sensor using light emitted by a light source module, and adjusting an emission angle of the light emitted by the light source module based on the measured distance. The three-dimensional image sensor includes the light source module. | 06-21-2012 |
20120154538 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - According to one embodiment, an image processing apparatus includes a background image generator which generates a background image, a receiver which receives additional information, a depth calculator which determines a first depth based on the additional information and calculates a second depth based on the first depth, a first three-dimensional image generator which generates a first object image based on the additional information and generates a first three-dimensional image based on the first object image and the first depth, a second three-dimensional image generator which generates a second object image based on the additional information and generates a second three-dimensional image based on the second object image and the second depth, at least part of the second three-dimensional image being displayed in an area overlapping the first three-dimensional image, an image composite module which generates a video signal by displaying the background image, displaying the second three-dimensional image in front of the displayed background image, and displaying the first three-dimensional image in front of the displayed second three-dimensional image, and an output module which outputs the video signal. | 06-21-2012 |
20120154539 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - According to one embodiment, an image processing apparatus including a background image generator which generates a background image, a receiver which receives additional information, a depth memory which stores in advance a depth for each of types of the additional information, a depth decide module which determines a type of the additional information received by the receiver, and reads a depth which is associated with the determined type from the depth memory, a three-dimensional image generator which generates an object image based on the additional information, and generates a three-dimensional image based on the object image and the depth which is read by the depth decide module, an image composite module which generates a video signal by displaying the background image and displaying the three-dimensional image in front of the displayed background image, and an output module which outputs the video signal generated by the image composite module. | 06-21-2012 |
20120154540 | THREE DIMENSIONAL MEASUREMENT APPARATUS AND THREE DIMENSIONAL MEASUREMENT METHOD - A three dimensional measurement apparatus includes a projection unit that projects, to a measurement target object, a first pattern light including alternately arranged bright parts and dark parts and a second pattern light in which a phase of the first pattern light is shifted, and an imaging unit that images the measurement target object on which the first or second pattern light is projected. When a period of repetitions of the bright parts and the dark parts of the pattern light is one period, a range of imaging on the measurement target object by one pixel included in the imaging unit is an image distance, and the length of one period of the projected pattern light on the measurement target object surface is M times the image distance, the projection unit and the imaging unit are arranged to satisfy “2×N−0.2≦M≦2×N+0.2 (where N is not less than 2)”. | 06-21-2012 |
20120154541 | APPARATUS AND METHOD FOR PRODUCING 3D IMAGES - A camera module includes a single lens system, a sensor and an image enhancer. The image enhancer is operable to enhance a single image captured by the sensor via the single lens system. The image enhancer performs opto-algorithmic processing to extend the depth of field of the single lens system; mapping to derive a depth map from the captured single image; and image processing to calculate, from the depth map, the offsets required to produce a 3-dimensional image. The calculated offsets are applied to appropriate image channels so as to obtain the 3-dimensional image from the single image capture. | 06-21-2012 |
20120162370 | Apparatus and method for generating depth image - Provided is a depth image generating apparatus. The depth image generating apparatus may include a filtering unit, a modulation unit, and a sensing unit. The filtering unit may band pass filter an infrared light of a first wavelength band among infrared lights received from an object. The modulation unit may modulate the infrared light of the first wavelength band to an infrared light of a second wavelength band. The sensing unit may generate an electrical signal by sensing the modulated infrared light of the second wavelength band. | 06-28-2012 |
20120162371 | THREE-DIMENSIONAL MEASUREMENT APPARATUS, THREE-DIMENSIONAL MEASUREMENT METHOD AND STORAGE MEDIUM - A three-dimensional measurement apparatus comprises a light irradiation unit adapted to irradiate a measurement target with pattern light, an image capturing unit adapted to capture an image of the measurement target, and a measurement unit adapted to measure a three-dimensional shape of the measurement target from the captured image, the three-dimensional measurement apparatus further comprising: a change region extraction unit adapted to extract a change region where a change has occurred when comparing an image of the measurement target captured in advance with the captured image of the measurement target; and a light characteristic setting unit adapted to set characteristics of the pattern light from the change region, wherein the measurement unit measures the three-dimensional shape of the measurement target at the change region in a captured image after irradiation of the change region with the pattern light with the characteristics set by the light characteristic setting unit. | 06-28-2012 |
20120162372 | APPARATUS AND METHOD FOR CONVERGING REALITY AND VIRTUALITY IN A MOBILE ENVIRONMENT - Disclosed herein are an apparatus and a method for converging reality and virtuality in a mobile environment. The apparatus includes an image processing unit, a real environment virtualization unit, and a reality and virtuality convergence unit. The image processing unit corrects real environment image data captured by at least one camera included in a mobile terminal. The real environment virtualization unit generates real object virtualization data virtualized by analyzing each real object of the corrected real environment image data in a three-dimensional (3D) fashion. The reality and virtuality convergence unit generates a convergent image, in which the real object virtualization data and at least one virtual object of previously stored virtual environment data are converged by associating the real object virtualization data with the virtual environment data, with reference to location and direction data of the mobile terminal. | 06-28-2012 |
20120162373 | DYNAMIC RANGE THREE-DIMENSIONAL IMAGE SYSTEM - Disclosed is a dynamic range three-dimensional image system, including: an optical detector including a gain control terminal capable of controlling an optical amplification gain; a pixel detecting module for detecting a pixel signal for configuring an image by receiving an output of the optical detector; a high dynamic range (HDR) generating module for acquiring a dynamic range image by generating a signal indicating a saturation degree of the pixel signal and combining the pixel signal based on the pixel signal detected by the pixel detecting module; and a gain control signal generating module generating an output signal for supplying required voltage to the gain control terminal of the optical detector based on the magnitude of the signal indicating the saturation degree of the pixel signal. | 06-28-2012 |
20120162374 | METHODS, SYSTEMS, AND COMPUTER-READABLE STORAGE MEDIA FOR IDENTIFYING A ROUGH DEPTH MAP IN A SCENE AND FOR DETERMINING A STEREO-BASE DISTANCE FOR THREE-DIMENSIONAL (3D) CONTENT CREATION - Methods, systems, and computer program products for determining a depth map in a scene are disclosed herein. According to one aspect, a method includes collecting focus statistical information for a plurality of focus windows. The method may also include determining a focal distance for each window. Further, the method may include determining near, far, and target focus distances. The method may also include calculating a stereo-base and screen plane using the focus distances. | 06-28-2012 |
20120162375 | STEREOSCOPIC IMAGE CAPTURING METHOD, SYSTEM AND CAMERA - A camera and camera system are provided with an optical device ( | 06-28-2012 |
20120162376 | Three-Dimensional Data Preparing Method And Three-Dimensional Data Preparing Device - The invention provides a three-dimensional data preparing method, using image pickup units | 06-28-2012 |
20120162377 | ILLUMINATION/IMAGE-PICKUP SYSTEM FOR SURFACE INSPECTION AND DATA STRUCTURE - In order to commonly perform image processing at an image processing device, even if inspection contents or valid inspection regions are changed for every image or if an illumination/image-pickup system for surface inspection per se is changed, a data structure for image processing is constructed in a manner that valid image-pickup region data indicative of a valid image-pickup region that is valid for inspection in the captured image, and image processing specifying data for specifying contents of the image processing performed on the valid image-pickup region are associated with the captured image data indicative of the image captured by the image-pickup device. | 06-28-2012 |
20120162378 | METHOD AND SYSTEM FOR VISION-BASED INTERACTION IN A VIRTUAL ENVIRONMENT - Method, computer program and system for tracking movement of a subject. The method includes receiving data from a plurality of fixed position sensors comprising a distributed network of time of flight camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to features indicative of motion of the subject relative to the fixed position sensors and one or more other portions of the subject, and presenting one or more objects on one or more three dimensional display screens. The plurality of fixed position sensors are used to track motion of the features of the subject to manipulate the volumetric three-dimensional representation to determine interaction of one or more of the features of the subject and one or more of the one or more objects on one or more of the one or more three dimensional display screens. | 06-28-2012 |
20120169845 | METHOD AND APPARATUS FOR ADAPTIVE SAMPLING VIDEO CONTENT - In a method of encoding video, the video is analyzed to determine a sampling format for the video from a plurality of sampling formats. The video is sampled using the determined sampling format to produce a video portion having a subset of information of the video. The video portion is encoded to form an output bit stream. | 07-05-2012 |
20120169846 | METHOD FOR CAPTURING THREE DIMENSIONAL IMAGE - A method for capturing a three dimensional (3D) image by using a single-lens camera is provided. First, a first image is captured. According to a focus distance of the single-lens camera in capturing the first image and an average distance between two human eyes, an overlap width between the first image and a second image required for capturing the second image of the 3D image is calculated. Then, the first image and a real-time image captured by the single-lens camera are displayed, and an overlap area is marked on the first image according to the calculated overlap width. A horizontal shift of the single-lens camera is adjusted, to locate the real-time image in the overlap area. Finally, the real-time image is captured as the second image, and the first and second images are output as the 3D image. | 07-05-2012 |
20120169847 | ELECTRONIC DEVICE AND METHOD FOR PERFORMING SCENE DESIGN SIMULATION - A method performs scene design simulation using an electronic device. The method obtains a scene image of a specified scene, determines edge pixels of the scene image, fits the edge pixels to a plurality of feature lines, and determines a part of the feature lines to obtain an outline of the scene image. The method further determines a vanishing point and a plurality of sight lines of the specified scene to create a 3D model of the specified scene, and adjusts a display status of a received virtual 3D image in the 3D model of the specified scene according to the vanishing point, the sight lines, and an actual size of the specified scene. | 07-05-2012 |
20120169848 | Image Processing Systems - An image processing system includes a calculation unit, a reconstruction unit, a confidence map estimation unit and an up-sampling unit. The up-sampling unit is configured to perform a joint bilateral up-sampling on depth information of a first input image based on a confidence map of the first input image and a second input image with respect to an object and increase a first resolution of the first input image to a second resolution to provide an output image with the second resolution. | 07-05-2012 |
20120169849 | AUTOMATED EXTENDED DEPTH OF FIELD IMAGING APPARATUS AND METHOD - An imaging apparatus and method enable an automated extended depth of field capability that automates and simplifies the process of creating extended depth of field images. An embodiment automates the acquisition of an image “stack” or sequence and stores metadata at the time of image acquisition that facilitates production of a composite image having an extended depth of field from at least a portion of the images in the acquired sequence. An embodiment allows a user to specify, either at the time of image capture or at the time the composite image is created, a range of distances that the user wishes to have in focus within the composite image. An embodiment provides an on-board capability to produce a composite, extended depth of field image from the image stack. One embodiment allows the user to import the image stack into an image-processing software application that produces the composite image. | 07-05-2012 |
20120176473 | DYNAMIC ADJUSTMENT OF PREDETERMINED THREE-DIMENSIONAL VIDEO SETTINGS BASED ON SCENE CONTENT - Predetermined three-dimensional video parameter settings may be dynamically adjusted based on scene content. One or more three-dimensional characteristics associated with a given scene may be determined. One or more scale factors may be determined from the three-dimensional characteristics. The predetermined three-dimensional video parameter settings can be adjusted by applying the scale factors to the predetermined three-dimensional video parameter settings. The scene may be displayed on a three-dimensional display using the resulting adjusted set of predetermined three-dimensional video parameters. | 07-12-2012 |
20120176474 | ROTATIONAL ADJUSTMENT FOR STEREO VIEWING - An apparatus for viewing a three-dimensional image of a scene includes a monocular device worn by a viewer over one eye for displaying first two-dimensional images of the scene, and a mechanism for rotating the first display in response to perceived rotational misalignments with a second display that displays second two-dimensional images in stereo image pairs. The apparatus also includes the second display for displaying the second two-dimensional images of the scene, a means of determining lateral, longitudinal and rotational misalignments of the first images relative to the second images, and a controller for providing lateral and longitudinal shifts of the first images on the first display and rotational movements of the mechanism to align the first and second images so that the viewer perceives a three-dimensional image of the scene. | 07-12-2012 |
20120176475 | 3D Microscope Including Insertable Components To Provide Multiple Imaging And Measurement Capabilities - A three-dimensional (3D) microscope includes various insertable components that facilitate multiple imaging and measurement capabilities. These capabilities include Nomarski imaging, polarized light imaging, quantitative differential interference contrast (q-DIC) imaging, motorized polarized light imaging, phase-shifting interferometry (PSI), and vertical-scanning interferometry (VSI). | 07-12-2012 |
20120176476 | 3D TIME-OF-FLIGHT CAMERA AND METHOD - A 3D time-of-flight camera and a corresponding method for acquiring information about a scene are disclosed. To increase the frame rate, the proposed camera comprises a radiation source; a radiation detector comprising one or more pixels, wherein a pixel comprises two or more detection units each detecting samples of a sample set of two or more samples; and an evaluation unit that evaluates said sample sets of said two or more detection units and generates scene-related information from said sample sets. Said evaluation unit comprises a rectification unit that rectifies a subset of samples of said sample sets by use of a predetermined rectification operator defining a correlation between samples detected by two different detection units of a particular pixel, and an information value calculator that determines an information value of said scene-related information from said subset of rectified samples and the remaining samples of the sample sets. | 07-12-2012 |
20120176477 | Methods, Systems, Devices and Associated Processing Logic for Generating Stereoscopic Images and Video - The present invention includes methods, systems, devices and associated processing logic for generating stereoscopic 3-Dimensional images and/or video from 2-Dimensional images or video. There may be provided a stereoscopic 3D generating system to extrapolate and render 2D complementary images and or video from a first 2D image and/or video. The complementary images and/or video, when combined with the first image or video, or a second complementary image or video, form a stereoscopic image of the scene captured in the first image or video. The stereoscopic 3D generation system may generate a complementary image or images, such that when a viewer views the first image or a second complementary image (shifted in the other direction from the first complementary image) with one eye and the complementary image with the other eye, an illusion of depth in the image is created (e.g. a stereoscopic 3D image). | 07-12-2012 |
20120182390 | Counting system for vehicle riders - There is provided a system and method for counting riders arbitrarily positioned in a vehicle. There is provided a method comprising receiving, from at least one camera filtered to capture non-visible light, video data corresponding to the vehicle passing through a light source filtered for non-visible light, converting the video data into a 3D height map, and analyzing the 3D height map to determine a number of riders in the vehicle. The camera and light source may be mounted in a permanent position using a gantry or another suitable system where the vehicle travels across the camera and light system in a determined manner, for example through a vehicle track. Multiple cameras may be used to increase detection accuracy. To detect persons in the 3D height map, the analysis may search for height patterns indicating heads and shoulders of persons, compare against height map templates, or use machine-learning methods. | 07-19-2012 |
20120182391 | DETERMINING A STEREO IMAGE FROM VIDEO - A method of producing a stereo image from a digital video includes receiving a digital video including a plurality of digital images captured by an image capture device; and using a processor to produce stereo suitability scores for at least two digital images from the plurality of digital images. The method further includes selecting a stereo candidate image based on the stereo suitability scores; producing a stereo image from the selected stereo candidate image wherein the stereo image includes the stereo candidate image and an associated stereo companion image based on the plurality of digital images from the digital video; and storing the stereo image whereby the stereo image can be presented for viewing by a user. | 07-19-2012 |
20120182392 | Mobile Human Interface Robot - A method of object detection for a mobile robot includes emitting a speckle pattern of light onto a scene about the robot while maneuvering the robot across a work surface, receiving reflections of the emitted speckle pattern off surfaces of a target object in the scene, determining a distance of each reflecting surface of the target object, constructing a three-dimensional depth map of the target object, and classifying the target object. | 07-19-2012 |
20120182393 | PORTABLE APPARATUS AND MICROCOMPUTER - The data processing unit generates image data by causing the camera unit to acquire a plurality of captured data with the focal length changed in response to instructions for an imaging operation from an operation unit, and by generating three-dimensional display data from the plurality of captured data based on the correlation between the focused images, which differ according to the focal lengths of the acquired captured data, and those focal lengths. Since each of the plurality of data captured with a changed focal length differs in its focused image according to the focal length, processing the plurality of captured data based on this correlation between focused image and focal length allows the three-dimensional display data to be generated. | 07-19-2012 |
20120182394 | 3D IMAGE SIGNAL PROCESSING METHOD FOR REMOVING PIXEL NOISE FROM DEPTH INFORMATION AND 3D IMAGE SIGNAL PROCESSOR THEREFOR - A three-dimensional (3D) image signal processing method increases signal-to-noise ratio by performing pixel binning on depth information obtained by a 3D image sensor, without changing a filter array detecting the depth information. The processing method may be used in a 3D image signal processor, and a 3D image processing system including the 3D image signal processor. | 07-19-2012 |
20120182395 | THREE-DIMENSIONAL IMAGING DEVICE AND OPTICAL TRANSMISSION PLATE - A 3D image capture device includes: a light-transmitting section | 07-19-2012 |
20120188342 | USING OCCLUSIONS TO DETECT AND TRACK THREE-DIMENSIONAL OBJECTS - A mobile platform detects and tracks a three-dimensional (3D) object using occlusions of a two-dimensional (2D) surface. To detect and track the 3D object, an image of the 2D surface with the 3D object is captured and displayed and the 2D surface is detected and tracked. Occlusion of a region assigned as an area of interest on the 2D surface is detected. The shape of the 3D object is determined based on a predefined shape or by using the shape of the area of the 2D surface that is occluded along with the position of the camera with respect to the 2D surface to calculate the shape. Any desired action with respect to the position of the 3D object on the 2D surface may be performed, such as rendering and displaying a graphical object on or near the displayed 3D object. | 07-26-2012 |
20120188343 | IMAGING APPARATUS - An imaging apparatus includes an imaging unit configured to capture a subject to generate an image, a detector configured to detect tilt of the imaging apparatus, a storage unit configured to store the images generated by the imaging unit with the images being related to the detection results of the detector, and a controller configured to select at least two images as images for generating a three-dimensional image from the plurality of images stored in the storage unit based on the detection results related to the images. | 07-26-2012 |
20120194644 | Mobile Camera Localization Using Depth Maps - Mobile camera localization using depth maps is described for robotics, immersive gaming, augmented reality and other applications. In an embodiment a mobile depth camera is tracked in an environment at the same time as a 3D model of the environment is formed using the sensed depth data. In an embodiment, when camera tracking fails, this is detected and the camera is relocalized either by using previously gathered keyframes or in other ways. In an embodiment, loop closures are detected in which the mobile camera revisits a location, by comparing features of a current depth map with the 3D model in real time. In embodiments the detected loop closures are used to improve the consistency and accuracy of the 3D model of the environment. | 08-02-2012 |
20120194645 | LIVING ROOM MOVIE CREATION - A system and method are disclosed for living room movie creation. Movies can be directed, captured, and edited using a system that includes a depth camera. A virtual movie set can be created by using ordinary objects in the living room as virtual props. The system is able to capture motions of actors using the depth camera and to generate a movie based thereon. Therefore, there is no need for the actors to wear any special markers to detect their motion. A director may view scenes from the perspective of a “virtual camera” and record those scenes for later editing. | 08-02-2012 |
20120194646 | Method of Enhancing 3D Image Information Density - A method of enhancing 3D image information density, comprising providing a confocal fluorescent microscope and a rotational stage. 3D image samples at different angles are collected. A deconvolution process of the 3D image samples by a processing unit is performed. A registration process of the deconvoluted 3D image samples by the processing unit is performed. An interpolation process of the registered 3D image samples by the processing unit is performed to output a 3D image in high resolution. | 08-02-2012 |
20120194647 | THREE-DIMENSIONAL MEASURING APPARATUS - A three-dimensional measuring apparatus includes: a projecting unit that projects a stripe to a projectable region which is a peripheral region of an intersection point on a measurement object when a vertical line is drawn down toward the measurement object; an imaging unit that includes a plurality of imaging regions at which the measurement object to which the stripe is projected is imaged within the projectable region; and a control unit that performs a process of three-dimensionally measuring the measurement object based on an image imaged by the imaging unit. | 08-02-2012 |
20120194648 | VIDEO/AUDIO CONTROLLER - An apparatus for controlling video and/or audio material, the apparatus comprising: at least one sensor that generates a signal responsive to a physiological parameter in a user's body; a processor that receives the signal and determines an emotional state of the user responsive to the signal; and a controller that controls the V/A material in accordance with the emotional state of the user. | 08-02-2012 |
20120200670 | Method and apparatus for a disparity limit indicator - In accordance with an example embodiment of the present invention, an apparatus is disclosed. The apparatus includes a stereoscopic camera system, a user interface, and a disparity range system. The user interface includes a display screen. The user interface is configured to display on the display screen an image formed by the stereoscopic camera system. The image corresponds to a scene viewable by the stereoscopic camera system. The disparity range system is configured to detect a disparity for the scene. The disparity range system is configured to provide an indication on the display screen in response to the detected disparity. | 08-09-2012 |
20120200671 | Apparatus And Method For Three-Dimensional Image Capture With Extended Depth Of Field - An optical system for capturing three-dimensional images of a three-dimensional object is provided. The optical system includes a projector for structured illumination of the object. The projector includes a light source, a grid mask positioned between the light source and the object for structured illumination of the object, and a first Wavefront Coding (WFC) element having a phase modulating mask positioned between the grid mask and the object to receive patterned light from the light source through the grid mask. The first WFC element is constructed and arranged such that a point spread function of the projector is substantially invariant over a wider range of depth of field of the grid mask than a point spread function of the projector without the first WFC element. | 08-09-2012 |
20120200672 | 3D IMAGING DEVICE - A 3D imaging device includes: a first imaging unit having a first variable power lens and a first driving unit that drives the first variable power lens along an optical axis; a second imaging unit having a second variable power lens and a second driving unit that drives the second variable power lens along an optical axis; a storage unit that temporarily stores the first photographic image and the second photographic image; a parallax determining unit that determines a parallax in a horizontal direction between the first photographic image and the second photographic image; a parallax adjusting unit that generates a third photographic image excluding a first parallax adjusting image from the first photographic image and a fourth photographic image excluding a second parallax adjusting image from the second photographic image; and a photographic information recording unit that records information about a magnification of the first variable power lens. | 08-09-2012 |
20120200673 | IMAGING APPARATUS AND IMAGING METHOD - The present invention provides an imaging apparatus which generates, based on a captured image, a depth map of an object with a high degree of precision. | 08-09-2012 |
20120200674 | System and Method for Determining Whether to Operate a Robot in Conjunction with a Rotary Parlor - In certain embodiments, a system includes a three-dimensional camera and a processor communicatively coupled to the three-dimensional camera. The processor is operable to determine a first hind location of a first hind leg of a dairy livestock based at least in part on visual data captured by the three-dimensional camera and determine a second hind location of a second hind leg of the dairy livestock based at least in part on the visual data. The processor is further operable to determine a measurement, wherein the measurement is the distance between the first hind location and the second hind location. Additionally, the processor is operable to determine whether the measurement exceeds a minimum threshold. | 08-09-2012 |
20120206572 | METHOD OF CALCULATING 3D OBJECT DATA WITHIN CONTROLLABLE CONSTRAINTS FOR FAST SOFTWARE PROCESSING ON 32 BIT RISC CPUS - Systems and methods are described to allow arbitrary 3D data to be rendered to a 2D viewport on a device with limited processing capabilities. 3D vertex data is received comprising vertices and connections conforming to coordinate processing constraints. A position and orientation of a camera in world co-ordinates is received to render the 3D vertex data from. The processing zone of a plurality of processing zones in which the position of the camera lies is determined. The vertices of the 3D vertex data assigned to the determined processing zone are transformed based on the position and orientation of the camera for rendering to the viewport. | 08-16-2012 |
20120206573 | METHOD AND APPARATUS FOR DETERMINING DISPARITY OF TEXTURE - A method and system to determine the disparity associated with one or more textured regions of a plurality of images are presented. The method comprises the steps of breaking up the texture into its color primitives, further segmenting the textured object into any number of objects comprising such primitives, and then calculating a disparity of these objects. The textured objects emerge in the disparity domain, after having their disparity calculated. Accordingly, the method further comprises defining one or more textured regions in a first of a plurality of images, determining a corresponding one or more textured regions in a second of the plurality of images, segmenting the textured regions into their color primitives, and calculating a disparity between the first and second of the plurality of images in accordance with the segmented color primitives. | 08-16-2012 |
20120206574 | COMPUTER-READABLE STORAGE MEDIUM HAVING STORED THEREIN DISPLAY CONTROL PROGRAM, DISPLAY CONTROL APPARATUS, DISPLAY CONTROL SYSTEM, AND DISPLAY CONTROL METHOD - A virtual object placed in a three-dimensional virtual space and a user interface are stereoscopically displayed on an upper LCD of a game apparatus. An image used for adjusting a display position of the user interface in the depth direction is displayed on a lower LCD. A user adjusts a UI adjustment slider by using a touch pen to adjust the parallax of the user interface. This allows the adjustment of the parallax of the user interface separately from the parallax of the three-dimensional virtual space, and thereby the depth perception of the user interface can be adjusted. | 08-16-2012 |
20120206575 | Method and Device for Calibrating a 3D TOF Camera System - A method for calibrating a three dimensional time-of-flight camera system mounted on a device, includes determining at a first instant a direction vector relating to an object; determining an expected direction vector and expected angle for the object to be measured at a second instant with reference to the device's assumed trajectory and optical axis of the camera; determining a current direction vector and current angle at the second instant; determining an error represented by a difference between the current direction vector and the expected direction vector; and using the error to correct the assumed direction of the main optical axis of the camera system such that said error is substantially eliminated. | 08-16-2012 |
20120206576 | STEREOSCOPIC IMAGING METHOD AND SYSTEM THAT DIVIDES A PIXEL MATRIX INTO SUBGROUPS - A stereoscopic imaging method where a pixel matrix is divided into groups such that parallax information is received by one pixel group and original information is received by another pixel group. The parallax information may, specifically, be based on polarized information received by subgroups of the one pixel group, and by processing all of the information received, multiple images are rendered by the method. | 08-16-2012 |
20120212580 | COMPUTER-READABLE STORAGE MEDIUM HAVING DISPLAY CONTROL PROGRAM STORED THEREIN, DISPLAY CONTROL APPARATUS, DISPLAY CONTROL SYSTEM, AND DISPLAY CONTROL METHOD - A display control apparatus displays, by a virtual stereo camera taking an image of a virtual three-dimensional space in which a player object is positioned, a stereoscopically viewable image of the virtual three-dimensional space. At this time, when an object distance represents a distance from a point of view position of the virtual stereo camera to the player object, and a stereoscopic view reference distance represents a distance from the point of view position of the virtual stereo camera to a reference plane corresponding to a position at which a parallax is not generated when the image of the virtual three-dimensional space is taken by the virtual stereo camera, a camera parameter is set based on a stereoscopic view ratio which is a ratio of the stereoscopic view reference distance to the object distance. The stereoscopically viewable image is generated based on the camera parameter. | 08-23-2012 |
20120212581 | IMAGE CAPTURE APPARATUS AND IMAGE SIGNAL PROCESSING APPARATUS - An image capture apparatus includes an image capture unit that has a plurality of unit pixels each including a plurality of photo-electric conversion units per condenser unit, and a recording unit that records captured image signals, which are captured by the image capture unit and are respectively read out from the plurality of photo-electric conversion units, and the recording unit records, in association with each captured image signal, identification information which allows the photo-electric conversion unit used to obtain that captured image signal to be identified. | 08-23-2012 |
20120212582 | Systems and methods for monitoring caregiver and patient protocol compliance - A system and methods are provided for facilitating, monitoring and recording caregiver and patient compliance with established hospital hand hygiene protocols. The system comprises a 3-D imaging and monitoring assembly and an optional intelligent programmable monitor/sanitizer. Three dimensional imagery tracks a caregiver's movements and location while generating a representative image value. Information acquired by the imaging system determines the proximity of a caregiver to the patient and/or contamination source and determines if the sanitizers provided have been utilized and if so, at an appropriate time and distance from the patient per hospital protocol. While being monitored, a representative Avatar based on physical characteristics derived from three dimensional images of the caregiver and patient may be generated so as to maintain anonymity of both unless a violation of institutional protocol occurs which may be forensically recorded in real-time for analysis. | 08-23-2012 |
20120212583 | Charged Particle Radiation Apparatus, and Method for Displaying Three-Dimensional Information in Charged Particle Radiation Apparatus - Disclosed is a charged particle radiation apparatus capable of capturing a change in a sample due to gaseous atmosphere, light irradiation, heating or the like without exposing the sample to atmosphere. The present invention relates to a sample holder provided with a sample stage that is rotatable around a rotation axis perpendicular to an electron beam irradiation direction, the sample holder being capable of forming an airtight chamber around the sample stage. A sample is allowed to chemically react in any atmosphere, and three-dimensional analysis on the reaction is enabled. A sample liable to change in atmosphere can be three-dimensionally analyzed without exposing the sample to the atmosphere. | 08-23-2012 |
20120218386 | Systems and Methods for Comprehensive Focal Tomography - A method and system for forming a three-dimensional image of a three-dimensional scene using a two-dimensional image sensor are disclosed. Formation of a three-dimensional image is enabled by locating a coded aperture in an image field provided by a collector lens, wherein the coded aperture modulates the image field to form a modulated image at the image sensor. The three-dimensional image is reconstructed by deconvolving the modulation code from the image data, thereby enabling high-resolution images to be formed at a plurality of focal ranges. | 08-30-2012 |
20120224027 | STEREO IMAGE ENCODING METHOD, STEREO IMAGE ENCODING DEVICE, AND STEREO IMAGE ENCODING PROGRAM - The stereo image encoding device | 09-06-2012 |
20120224028 | METHOD OF FABRICATING MICROLENS, AND DEPTH SENSOR INCLUDING MICROLENS - A method of fabricating a microlens includes forming a layer of photoresist on a substrate, patterning the layer of photoresist, and then reflowing the photoresist pattern. The layer of photoresist is formed by coating the substrate with liquid photoresist whose viscosity is 150 to 250 cp. A depth sensor includes a substrate and photoelectric conversion elements at an upper portion of the substrate, a metal wiring section disposed on the substrate, and an array of microlenses for focusing incident light as beams onto the photoelectric conversion elements such that the beams avoid the wirings of the metal wiring section. The depth sensor also includes a layer presenting a flat upper surface on which the microlenses are formed. The layer may be a dedicated planarization layer or an IR filter, interposed between the microlenses and the metal wiring section. | 09-06-2012 |
20120229605 | OPTICAL OBSERVATION INSTRUMENT WITH AT LEAST TWO OPTICAL TRANSMISSION CHANNELS THAT RESPECTIVELY HAVE ONE PARTIAL RAY PATH - An optical observation instrument has two optical transmission channels for transmitting two partial ray bundles ( | 09-13-2012 |
20120229606 | DEVICE AND METHOD FOR OBTAINING THREE-DIMENSIONAL OBJECT SURFACE DATA - The concept includes projecting at the object surface, along a first optical axis, two or more two-dimensional (2D) images containing together one or more distinct wavelength bands. The wavelength bands vary in intensity along a first image axis, forming a pattern, within at least one of the projected images. Each projected image generates a reflected image along a second optical axis. The 3D surface data is obtained by comparing the object data with calibration data, which calibration data was obtained by projecting the same images at a calibration reference surface, for instance a planar surface, for a plurality of known positions along the z-axis. Provided that the z-axis is not orthogonal to the second optical axis, the z-axis coordinate at each location on the object surface can be found if the light intensity combinations of all predefined light intensity patterns are linearly independent along the corresponding z-axis. | 09-13-2012 |
20120229607 | SYSTEMS AND METHODS FOR PERSISTENT SURVEILLANCE AND LARGE VOLUME DATA STREAMING - In general, the present disclosure relates to persistent surveillance (PS), whether wide, medium or small in area, and large volume data streaming (LVSD), e.g., from orbiting aircraft or spacecraft, and to PS/LVSD-specific data compression. In certain aspects, the PS-specific data or LVSD compression and image alignment may utilize registration of images via accurate knowledge of camera position, pointing, calibration, and a 3D model of the surveillance area. Photogrammetric systems and methods to compress PS-specific data or LVSD while minimizing loss are provided. In certain embodiments, to achieve data compression while minimizing loss, a depth model is generated, and imagery is registered to the depth model. | 09-13-2012 |
20120229608 | IMAGE PROCESSING DEVICE, IMAGING DEVICE, AND IMAGE PROCESSING METHOD - Stereoscopic tracking during a zooming period is facilitated to alleviate eye fatigue. An image processing device includes an imaging unit | 09-13-2012 |
20120229609 | THREE-DIMENSIONAL VIDEO CREATING DEVICE AND THREE-DIMENSIONAL VIDEO CREATING METHOD - A three-dimensional video creating device ( | 09-13-2012 |
20120236118 | ELECTRONIC DEVICE AND METHOD FOR AUTOMATICALLY ADJUSTING VIEWING ANGLE OF 3D IMAGES - In a method for adjusting a viewing angle of 3D images using an electronic device, the electronic device includes a distance sensor, a camera lens and a 3D display screen. The distance sensor senses a distance between a viewer and the 3D display screen, and the camera lens captures a digital image of the viewer. The method calculates a viewing angle of the viewer according to the distance and a displacement between the viewer and the 3D display screen, and calculates an angle difference between the viewing angle of the viewer and a viewing angle range of the 3D display screen. The method further adjusts a viewing angle of a 3D image according to the angle difference, and displays the 3D image on the 3D display screen according to the viewing angle of the 3D image. | 09-20-2012 |
20120236119 | APPARATUS AND METHOD FOR ESTIMATING CAMERA MOTION USING DEPTH INFORMATION, AND AUGMENTED REALITY SYSTEM - Provided is a camera motion estimation method and apparatus that may estimate a motion of a depth camera in real time. The camera motion estimation apparatus may extract an intersection point from plane information of a depth image, calculate a feature point associated with each of planes included in the plane information using the extracted intersection point, and extract a motion of a depth camera providing the depth image using the feature point. Accordingly, in an environment where an illumination environment dynamically varies, or regardless of a texture state within a space, the camera motion estimation apparatus may estimate a camera motion. | 09-20-2012 |
20120236120 | AUTOMATIC STEREOLOGICAL ANALYSIS OF BIOLOGICAL TISSUE INCLUDING SECTION THICKNESS DETERMINATION - Systems and methods are provided for automatic determination of slice thickness of an image stack in a computerized stereology system, as well as automatic quantification of biological objects of interest within an identified slice of the image stack. Top and bottom boundaries of a slice can be identified by applying a thresholded focus function to determine just-out-of-focus focal planes. Objects within an identified slice can be quantified by performing a color processing segmentation followed by a gray-level processing segmentation. The two segmentation processes generate unique identifiers for features in an image that can then be used to produce a count of the features. | 09-20-2012 |
20120236121 | Methods of Operating a Three-Dimensional Image Sensor Including a Plurality of Depth Pixels - In a method of operating a three-dimensional image sensor according to example embodiments, modulated light is emitted to an object of interest, the modulated light that is reflected from the object of interest is detected using a plurality of depth pixels, and a plurality of pixel group outputs respectively corresponding to a plurality of pixel groups are generated based on the detected modulated light by grouping the plurality of depth pixels into the plurality of pixel groups including a first pixel group and a second pixel group that have different sizes from each other. | 09-20-2012 |
20120236122 | IMAGE PROCESSING DEVICE, METHOD THEREOF, AND MOVING BODY ANTI-COLLISION DEVICE - An image processing device is disclosed that is able to accurately recognize objects at a close distance. The image processing device includes a camera unit, and an image processing unit. The camera unit includes a lens, a focusing unit and an image pick-up unit. The focusing unit drives the lens to sequentially change the focusing distance of the camera unit to perform a focus-sweep operation, so that clear images of objects at different positions in an optical axis of the lens are sequentially formed on the image pick-up unit. The image processing unit receives the plurality of images obtained by the image pick-up unit in the focus-sweep operation, identifies objects with clear images formed in the plurality of images, and produces an object distribution view according to the focusing distances used when picking up the plurality of images to show a position distribution of the identified objects. | 09-20-2012 |
20120242793 | DISPLAY DEVICE AND METHOD OF CONTROLLING THE SAME - Disclosed are a display device and a method of controlling the same. The display device and the method of controlling the same include a camera capturing a gesture made by a user, a display displaying a stereoscopic image, and a controller controlling presentation of the stereoscopic image in response to a distance between the gesture and the stereoscopic image on a virtual space and an approach direction of the gesture with respect to the stereoscopic image. Accordingly, the presentation of the stereoscopic image can be controlled in response to a distance and an approach direction with respect to the stereoscopic image. | 09-27-2012 |
20120242794 | PRODUCING 3D IMAGES FROM CAPTURED 2D VIDEO - A method of producing a stereo image from a temporal sequence of digital images, comprising: receiving a temporal sequence of digital images; analyzing pairs of digital images to produce corresponding stereo suitability scores, wherein the stereo suitability score for a particular pair of images is determined responsive to the relative positions of corresponding features in the particular pair of digital images; selecting a pair of digital images including a first image and a second image based on the stereo suitability scores; using a processor to analyze the selected pair of digital images to produce a motion consistency map indicating regions of consistent motion, the motion consistency map having an array of pixels; producing a stereo image pair including a left view image and a right view image by combining the first image and the second image responsive to the motion consistency map; and storing the stereo image pair in a processor-accessible memory. | 09-27-2012 |
20120242795 | DIGITAL 3D CAMERA USING PERIODIC ILLUMINATION - A method of operating a digital camera, includes providing a digital camera, the digital camera including a capture lens, an image sensor, a projector and a processor; using the projector to illuminate one or more objects with a sequence of patterns; and capturing a first sequence of digital images of the illuminated objects including the reflected patterns that have depth information. The method further includes using the processor to analyze the first sequence of digital images including the depth information to construct a second, 3D digital image of the objects; capturing a second 2D digital image of the objects and the remainder of the scene without the reflected patterns, and using the processor to combine the 2D and 3D digital images to produce a modified digital image of the illuminated objects and the remainder of the scene. | 09-27-2012 |
20120242796 | AUTOMATIC SETTING OF ZOOM, APERTURE AND SHUTTER SPEED BASED ON SCENE DEPTH MAP - A Depth Map (DM) is able to be utilized for many parameter settings involving cameras, camcorders and other devices. Setting parameters on the imaging device includes zoom setting, aperture setting and shutter speed setting. | 09-27-2012 |
20120242797 | VIDEO DISPLAYING APPARATUS AND VIDEO DISPLAYING METHOD - According to one embodiment, a video displaying apparatus includes a separator, a generator and a controller. The separator is configured to separate a video signal for 3D video display into first and second video signals. The generator is configured to generate a first video frame in which a frame of the first video signal is displayed in a first area on a screen, to generate a second video frame in which a frame of the first or second video signal is displayed in a second area different from the first area, to generate a third video frame similar to the first video frame, and to generate a fourth video frame in which a frame of the second or first video signal is displayed in the second area. The controller is configured to sequentially display the first to fourth video frames in this order. | 09-27-2012 |
20120242798 | SYSTEM AND METHOD FOR SHARING VIRTUAL AND AUGMENTED REALITY SCENES BETWEEN USERS AND VIEWERS - A preferred method for sharing user-generated virtual and augmented reality scenes can include receiving at a server a virtual and/or augmented reality (VAR) scene generated by a user mobile device. Preferably, the VAR scene includes visual data and orientation data, which includes a real orientation of the user mobile device relative to a projection matrix. The preferred method can also include compositing the visual data and the orientation data into a viewable VAR scene; locally storing the viewable VAR scene at the server; and in response to a request received at the server, distributing the processed VAR scene to a viewer mobile device. | 09-27-2012 |
20120242799 | VEHICLE EXTERIOR MONITORING DEVICE AND VEHICLE EXTERIOR MONITORING METHOD - A vehicle exterior monitoring device obtains position information of a three-dimensional object present in a detected region, divides the detected region with respect to a horizontal direction into plural first divided regions, derives a first representative distance corresponding to a peak in distance distribution of each first divided region based on the position information, groups the first divided regions based on the first representative distance to generate one or more first divided region groups, divides the first divided region group with respect to a vertical direction into plural second divided regions, groups second divided regions having relative distances close to the first representative distance to generate a second divided region group, and limits a target range for which the first representative distance is derived within the first divided region group in which the second divided region group is generated to a vertical range corresponding to the second divided region group. | 09-27-2012 |
20120242800 | APPARATUS AND SYSTEM FOR INTERFACING WITH COMPUTERS AND OTHER ELECTRONIC DEVICES THROUGH GESTURES BY USING DEPTH SENSING AND METHODS OF USE - Disclosed herein are systems and methods for capturing, detecting and recognizing gestures and mapping them into commands that allow one or many users to interact with electronic games or any electronic device interface. Gesture recognition methods, apparatus and systems are disclosed from which application developers can incorporate gesture-to-character inputs into their gaming, learning or similar applications. Also disclosed herein are systems and methods for receiving 3D data reflecting hand, finger or other body part movements of a user, and determining from that data whether the user has performed gesture commands for controlling electronic devices or computer applications such as games. | 09-27-2012 |
20120242801 | Vision Enhancement for a Vision Impaired User - This invention concerns a vision enhancement apparatus that improves vision for a vision-impaired user of interface equipment. Interface equipment stimulates the user's cortex, directly or indirectly, to provide artificial vision. It may include a passive sensor to acquire real-time high resolution video data representing the vicinity of the user, and a sight processor to receive the acquired high resolution data and automatically: analyse the high resolution data to extract depth of field information concerning objects of interest; extract lower resolution data representing the vicinity of the user; and provide both the depth of field information concerning objects of interest and the lower resolution data representing the vicinity of the user to the interface equipment to stimulate artificial vision for the user. | 09-27-2012 |
20120242802 | STEREOSCOPIC IMAGE DATA TRANSMISSION DEVICE, STEREOSCOPIC IMAGE DATA TRANSMISSION METHOD, STEREOSCOPIC IMAGE DATA RECEPTION DEVICE, AND STEREOSCOPIC IMAGE DATA RECEPTION METHOD - [Object] To facilitate processing on the reception side. | 09-27-2012 |
20120242803 | STEREO IMAGE CAPTURING DEVICE, STEREO IMAGE CAPTURING METHOD, STEREO IMAGE DISPLAY DEVICE, AND PROGRAM - A conventional device adjusts the convergence angle of its imaging unit in a manner that the left-and-right face detection areas are at the same coordinates and thus forms a stereo image that will be placed on a display screen during display, but fails to place a subject at an intended stereoscopic position. A stereo image capturing device ( | 09-27-2012 |
20120249738 | LEARNING FROM HIGH QUALITY DEPTH MEASUREMENTS - A depth camera computing device is provided, including a depth camera and a data-holding subsystem holding instructions executable by a logic subsystem. The instructions are configured to receive a raw image from the depth camera, convert the raw image into a processed image according to a weighting function, and output the processed image. The weighting function is configured to vary test light intensity information generated by the depth camera from a native image collected by the depth camera from a calibration scene toward calibration light intensity information of a reference image collected by a high-precision test source from the calibration scene. | 10-04-2012 |
20120249739 | METHOD AND SYSTEM FOR STEREOSCOPIC SCANNING - Provided is a system and method for scanning a target area, including capturing images from onboard a platform for use in producing one or more stereoscopic views. A first set of at least two image sequences of at least two images each, covering the target area or a subsection thereof is captured. As the platform continues to move forward, at least one other set of images covering the same target area or subsection thereof is captured. At least one captured image from each of at least two of the sets may be used in producing a stereoscopic view. | 10-04-2012 |
20120249740 | THREE-DIMENSIONAL IMAGE SENSORS, CAMERAS, AND IMAGING SYSTEMS - A three-dimensional image sensor may include a light source module configured to emit at least one light to an object, a sensing circuit configured to polarize a received light that represents the at least one light reflected from the object and configured to convert the polarized light to electrical signals, and a control unit configured to control the light source module and sensing circuit. A camera may include a receiving lens; a sensor module configured to generate depth data, the depth data including depth information of objects based on a received light from the objects; an engine unit configured to generate a depth map of the objects based on the depth data, configured to segment the objects in the depth map, and configured to generate a control signal for controlling the receiving lens based on the segmented objects; and a motor unit configured to control focusing of the receiving lens. | 10-04-2012 |
20120249741 | ANCHORING VIRTUAL IMAGES TO REAL WORLD SURFACES IN AUGMENTED REALITY SYSTEMS - A head mounted device provides an immersive virtual or augmented reality experience for viewing data and enabling collaboration among multiple users. Rendering images in a virtual or augmented reality system may include capturing an image and spatial data with a body mounted camera and sensor array, receiving an input indicating a first anchor surface, calculating parameters with respect to the body mounted camera and displaying a virtual object such that the virtual object appears anchored to the selected first anchor surface. Further operations may include receiving a second input indicating a second anchor surface within the captured image that is different from the first anchor surface, calculating parameters with respect to the second anchor surface and displaying the virtual object such that the virtual object appears anchored to the selected second anchor surface and moved from the first anchor surface. | 10-04-2012 |
20120249742 | METHOD FOR VISUALIZING FREEFORM SURFACES BY MEANS OF RAY TRACING - The invention relates to visualizing freeform surfaces, like NURBS surfaces, from three-dimensional construction data by means of ray tracing. Virtual beams from a virtual camera are sent out of a virtual image plane into a scene having at least one object and at least one freeform surface. Lighting values are calculated for each point where a beam intersects the freeform surface. The lighting values are then attributed to the pixels associated with the different points of intersection. The freeform surface is defined by two parameters (u, v), and related equations define all points of the surface of the freeform surface. The subdivision of the freeform surface for determining the intersections with the beams based on the two parameters (u, v) is regular, so that the surface fragments form meshes of a two-dimensional grid of the freeform surface in the parameter space. | 10-04-2012 |
20120249743 | METHOD AND APPARATUS FOR GENERATING IMAGE WITH HIGHLIGHTED DEPTH-OF-FIELD - A method that highlights a depth-of-field (DOF) region of an image and performs additional image processing by using the DOF region. The method includes: obtaining a first pattern image and a second pattern image that are captured by emitting light according to different patterns from an illumination device; detecting a DOF region by using the first pattern image and the second pattern image; determining weights to highlight the DOF region; and generating the highlighted DOF image by applying the weights to a combined image of the first pattern image and the second pattern image. | 10-04-2012 |
20120249744 | Multi-Zone Imaging Sensor and Lens Array - An imaging module includes a matrix of detector elements formed on a single semiconductor substrate and configured to output electrical signals in response to optical radiation that is incident on the detector elements. A filter layer is disposed over the detector elements and includes multiple filter zones overlying different, respective, convex regions of the matrix and having different, respective passbands. | 10-04-2012 |
20120249745 | METHOD AND DEVICE FOR GENERATING A REPRESENTATION OF SURROUNDINGS - It is proposed that, on the assumption that the surrounding area forms a known topography, a representation is produced from a form of the topography, the camera position relative to the topography and the image in the form of a virtual representation of the view from an observation point which is at a distance from the camera position. This makes it possible to select an advantageous perspective of objects which are imaged in the image, thus making it possible for an operator to easily identify the position of the objects relative to the camera. | 10-04-2012 |
20120257016 | THREE-DIMENSIONAL MODELING APPARATUS, THREE-DIMENSIONAL MODELING METHOD AND COMPUTER-READABLE RECORDING MEDIUM STORING THREE-DIMENSIONAL MODELING PROGRAM - In a three-dimensional modeling apparatus, an image obtaining section obtains image sets picked up by a stereoscopic camera. A generating section generates three-dimensional models. A three-dimensional model selecting section selects, from among the generated three-dimensional models, a first three-dimensional model and a second three-dimensional model to be superimposed on the first three-dimensional model. An extracting section extracts first and second feature points from the selected first and second three-dimensional models. A feature-point selecting section selects, from the extracted first and second feature points, feature points having a closer distance to the stereoscopic camera. A parameter obtaining section obtains a transformation parameter for transforming a coordinate of the second three-dimensional model into a coordinate system of the first three-dimensional model. A transforming section transforms the coordinate of the second three-dimensional model into the coordinate system of the first three-dimensional model. A superimposing section then superimposes the second three-dimensional model on the first three-dimensional model. | 10-11-2012 |
20120257017 | METHOD AND SURVEYING SYSTEM FOR NONCONTACT COORDINATE MEASUREMENT ON AN OBJECT SURFACE - Noncontact coordinate measurement. With a 3D image recording unit, a first three-dimensional image of a first area section of the object surface is electronically recorded in a first position and first orientation, the first three-dimensional image being composed of a multiplicity of first pixels, with which in each case a piece of depth information is coordinated. First 3D image coordinates in an image coordinate system are coordinated with the first pixels. The first position and first orientation of the 3D image recording unit in the object coordinate system are determined by a measuring apparatus coupled to the object coordinate system by means of an optical reference stereocamera measuring system. First 3D object coordinates in the object coordinate system are coordinated with the first pixels from the knowledge of the first 3D image coordinates and of the first position and first orientation of the 3D image recording unit. | 10-11-2012 |
20120257018 | STEREOSCOPIC DISPLAY DEVICE, METHOD FOR GENERATING IMAGE DATA FOR STEREOSCOPIC DISPLAY, AND PROGRAM THEREFOR - Provided is a stereoscopic display device provided with a stereoscopic display panel and a display controller, the stereoscopic display panel including a lenticular lens, a color filter substrate, a TFT substrate, etc. Unit pixels arranged in a horizontal direction parallel to the direction in which both eyes of the viewer are arranged are alternately used as left-eye pixels and right-eye pixels. The display controller determines, according to temperature information from a temperature sensor, the contraction/expansion of the lens by a stereoscopic image generating module and generates 3D image data for driving the display panel in which the amount of disparity in a specific disparity direction is corrected on the basis of parameter information, defined by an effective linear expansion coefficient inherent in the stereoscopic display panel or the like, and the magnitude of the temperature, to thereby ensure a predetermined stereoscopic visual recognition range even when the lens is contracted/expanded. | 10-11-2012 |
20120257019 | STEREO IMAGE DATA TRANSMITTING APPARATUS, STEREO IMAGE DATA TRANSMITTING METHOD, STEREO IMAGE DATA RECEIVING APPARATUS, AND STEREO IMAGE DATA RECEIVING METHOD - [Object] To maintain perspective consistency with individual objects in an image when displaying captions (caption units) based on an ARIB method in a superimposed manner. | 10-11-2012 |
20120257020 | RASTER SCANNING FOR DEPTH DETECTION - Techniques are provided for determining distance to an object in a depth camera's field of view. The techniques may include raster scanning light over the object and detecting reflected light from the object. One or more distances to the object may be determined based on the reflected image. A 3D mapping of the object may be generated. The distance(s) to the object may be determined based on times-of-flight between transmitting the light from a light source in the camera to receiving the reflected image from the object. Raster scanning the light may include raster scanning a pattern into the field of view. Determining the distance(s) to the object may include determining spatial differences between a reflected image of the pattern that is received at the camera and a reference pattern. | 10-11-2012 |
20120262549 | Full Reference System For Predicting Subjective Quality Of Three-Dimensional Video - A method of generating a predictive picture quality rating is provided. In general, a disparity measurement is made of a three-dimensional image by comparing left and right sub-components of the three-dimensional image. Then the left and right sub-components of the three-dimensional image are combined (fused) into a two-dimensional image, using data from the disparity measurement for the combination. A predictive quality measurement is then generated based on the two-dimensional image, and further including quality information about the comparison of the original three-dimensional image. | 10-18-2012 |
20120262550 | SIX DEGREE-OF-FREEDOM LASER TRACKER THAT COOPERATES WITH A REMOTE STRUCTURED-LIGHT SCANNER - Measuring three surface sets on an object surface with a measurement device and scanner, each surface set being 3D coordinates of a point on the object surface. The method includes: the device sending a first light beam to the first retroreflector and receiving a second light beam from the first retroreflector, the second light beam being a portion of the first light beam, a scanner processor and a device processor jointly configured to determine the surface sets; selecting the source light pattern and projecting it onto the object to produce the object light pattern; imaging the object light pattern onto a photosensitive array to obtain the image light pattern; obtaining the pixel digital values for the image light pattern; measuring the translational and orientational sets with the device; determining the surface sets corresponding to three non-collinear pattern elements; and saving the surface sets. | 10-18-2012 |
20120262551 | THREE DIMENSIONAL IMAGING DEVICE AND IMAGE PROCESSING DEVICE - The 3D image capture device includes a light-transmitting section with n transmitting areas (where n is an integer and n≧2) that have different transmission wavelength ranges and each of which transmits a light ray falling within a first wavelength range, a solid-state image sensor that includes a photosensitive cell array having a number of unit blocks, and a signal processing section that processes the output signal of the image sensor. Each unit block includes n photosensitive cells including a first photosensitive cell that outputs a signal representing the quantity of the light ray falling within the first wavelength range. The signal processing section generates at least two image data with parallax by using a signal obtained by multiplying a signal supplied from the first photosensitive cell by a first coefficient, which is a real number that is equal to or greater than zero but less than one. | 10-18-2012 |
20120268563 | AUGMENTED AUDITORY PERCEPTION FOR THE VISUALLY IMPAIRED - A person is provided with the ability to auditorily determine the spatial geometry of his current physical environment. A spatial map of the current physical environment of the person is generated. The spatial map is then used to generate a spatialized audio representation of the environment. The spatialized audio representation is then output to a stereo listening device which is being worn by the person. | 10-25-2012 |
20120268564 | ELECTRONIC APPARATUS AND VIDEO DISPLAY METHOD - An electronic apparatus includes a camera, a tracking module, a three-dimensional video adjusting module, a display, and an output direction controller. The tracking module is configured to recognize a position of a user based on a video picked up by the camera. The three-dimensional video adjusting module is configured to output one of first video data and second video data by adjusting an input three-dimensional video signal, the first video data corresponding to a first stereoscopic video that appears stereoscopically when viewed from a predetermined position, the second video data corresponding to a second stereoscopic video that appears stereoscopically when viewed from the recognized position of the user. | 10-25-2012 |
20120268565 | Operating Assembly Including Observation Apparatus, The Use of Such an Operating Assembly, and an Operating Facility - An operating assembly, in particular for a medical operating field, comprises an operating lighting apparatus having a lighting support arm mounted on a support to be independently pivotally movable, and an observation apparatus comprising a camera support arm mounted on the support to be independently pivotally movable, a camera module with a camera support mounted to be pivotally movable relative to the camera support arm and drive means for driving the camera support, and control means for controlling the drive means. The camera module comprises two observation cameras, each of which has a vision axis and a vision direction, each end of the camera support carrying a camera so that the cameras are disposed on either side of an axis of symmetry passing through the middle of the camera support. | 10-25-2012 |
20120268566 | THREE-DIMENSIONAL COLOR IMAGE SENSORS HAVING SPACED-APART MULTI-PIXEL COLOR REGIONS THEREIN - A three-dimensional color image sensor includes color pixels and depth pixels therein. A semiconductor substrate is provided with a depth region therein, which extends adjacent a surface of the semiconductor substrate. A two-dimensional array of spaced-apart color regions are provided within the depth region. Each of the color regions includes a plurality of different color pixels therein (e.g., red, blue and green pixels) and each of the color pixels within each of the spaced-apart color regions are spaced-apart from all other color pixels within other color regions. | 10-25-2012 |
20120268567 | THREE-DIMENSIONAL MEASUREMENT APPARATUS, PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - A three-dimensional measurement apparatus selects points corresponding to geometric features of a three-dimensional shape model of a target object, projects a plurality of selected points corresponding to the geometric features onto a range image based on approximate values indicating the position and orientation of the target object and imaging parameters at the time of imaging of the range image, searches regions of predetermined ranges respectively from the plurality of projected points for geometric features on the range image which correspond to the geometric features of the three-dimensional shape model, and associates these geometric features with each other. The apparatus then calculates the position and orientation of the target object using differences of distances on a three-dimensional space between the geometric features of the three-dimensional shape model and those on the range image, which are associated with each other. | 10-25-2012 |
20120274744 | STRUCTURED LIGHT IMAGING SYSTEM - Structured light imaging method and systems are described. An imaging method generates a stream of light pulses, converts the stream after reflection by a scene to charge, stores charge converted during the light pulses to a first storage element, and stores charge converted between light pulses to a second storage element. A structured light image system includes an illumination source that generates a stream of light pulses and an image sensor. The image sensor includes a photodiode, first and second storage elements, first and second switches, and a controller that synchronizes the image sensor to the illumination source and actuates the first and second switches to couple the first storage element to the photodiode to store charge converted during the light pulses and to couple the second storage element to the photodiode to store charge converted between the light pulses. | 11-01-2012 |
20120274745 | THREE-DIMENSIONAL IMAGER AND PROJECTION DEVICE - The systems and methods described herein include a device that can scan the surrounding environment and construct a 3D image, map, or representation of the surrounding environment using, for example, invisible light projected into the environment. In some implementations, the device can also project into the surrounding environment one or more visible radiation patterns (e.g., a virtual object, text, graphics, images, symbols, color patterns, etc.) that are based at least in part on the 3D map of the surrounding environment. | 11-01-2012 |
20120274746 | METHOD FOR DETERMINING A SET OF OPTICAL IMAGING FUNCTIONS FOR THREE-DIMENSIONAL FLOW MEASUREMENT - The invention relates to a method for determining a set of optical imaging functions that describe the imaging of a measuring volume onto each of a plurality of detector surfaces on which the measuring volume can be imaged at in each case a different observation angle by means of detection optics. In addition to the assignment of in each case one image position (x, y) to each volume position (X, Y, Z), the method according to the invention envisages that the shape of the image of a punctiform particle in the measuring volume be described by shape parameter values (a, b, 100, I) and that the corresponding set of shape parameter values be assigned to each volume position (X, Y, Z) for each detector surface. | 11-01-2012 |
20120281071 | Optical Scanning Device - A device for scanning a body orifice or surface including a light source and a wide angle lens. The light from the light source is projected in a pattern distal or adjacent to the wide angle lens. Preferably, the pattern is within a focal surface of the wide angle lens. The pattern intersects a surface of the body orifice, such as an ear canal, and defines a partial lateral portion of the pattern extending along the surface. A processor is configured to receive an image of the lateral portion from the wide angle lens and determine a position of the lateral portion in a coordinate system using a known focal surface of the wide angle lens. Multiple lateral portions are reconstructed by the processor to build a three-dimensional shape. This three-dimensional shape may be used for purposes such as diagnostic, navigation, or custom-fitting of medical devices, such as hearing aids. | 11-08-2012 |
20120287239 | Robot Vision With Three Dimensional Thermal Imaging - A robot vision system provides images to a remote robot viewing station, using a single long wave infrared camera on-board the robot. An optical system, also on-board the robot, has mirrors that divide the camera's field of view so that the camera receives a stereoscopic image pair. An image processing unit at the viewing station receives image data from the camera and processes the image data to provide a stereoscopic image. | 11-15-2012 |
20120287240 | CAMERA CALIBRATION USING AN EASILY PRODUCED 3D CALIBRATION PATTERN - A system for computing one or more calibration parameters of a camera is disclosed. The system comprises a processor and a memory. The processor is configured to provide a first object either marked with or displaying three or more fiducial points. The fiducial points have known 3D positions in a first object reference frame. The processor is further configured to provide a second object either marked with or displaying three or more fiducial points. The fiducial points have known 3D positions in a second object reference frame. The processor is further configured to place the first object and the second object in a fixed position such that the fiducial point positions of the first and second objects are non-planar. The processor is further configured to compute one or more calibration parameters of the camera using computations based on images taken of the fiducials. | 11-15-2012 |
20120287241 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR AQUATIC DISPLAY - The present disclosure relates to a computer-implemented method for providing an aquatic display. The method may include capturing real-time live video of aquatic content. The method may further include providing the real-time live video of aquatic content to one or more display devices and displaying the real-time live video of aquatic content as a screensaver. Numerous other features are also within the scope of the present disclosure. | 11-15-2012 |
20120287242 | ADAPTIVE HIGH DYNAMIC RANGE CAMERA - An embodiment of the invention provides a time of flight 3D camera comprising a photosensor having a plurality of pixels that generate and accumulate photoelectrons responsive to incident light, which photosensor is tiled into a plurality of super pixels, each partitioned into a plurality of pixel groups and a controller that provides a measure of an amount of light incident on a super pixel responsive to quantities of photoelectrons from pixel groups in the super pixel that do not saturate a readout pixel comprised in the photosensor. | 11-15-2012 |
20120287243 | STEREOSCOPIC CAMERA USING ANAGLYPHIC DISPLAY DURING CAPTURE - A digital camera for capturing stereoscopic images, comprising: an image sensor; an optical system; a user interface; a color image display; a data processing system; a buffer memory; a storage memory; and a program memory storing instructions configured to implement a method for capturing stereoscopic images. The method includes: capturing a first digital image of a scene in response to user activation of a user interface element; storing the first digital image; displaying a stream of stereoscopic preview images on the color image display, wherein the stereoscopic preview images are anaglyph stereoscopic images formed by combining the stored first digital image with a stream of evaluation digital images of the scene captured using the image sensor; capturing a second digital image of the scene in response to user activation of a user interface element; and storing a stereoscopic image based on the first digital image and the second digital image. | 11-15-2012 |
20120287244 | NON-COHERENT LIGHT MICROSCOPY - An optical microscope ( | 11-15-2012 |
20120287245 | OCCUPANCY SENSOR AND ASSOCIATED METHODS - A device to detect occupancy of an environment includes a sensor to capture video frames from a location in the environment. The device may compare rules with data using a rules engine. The microcontroller may include a processor and memory to produce results indicative of a condition of the environment. The device may also include an interface through which the data is accessible. The device may generate results respective to the location in the environment. The microcontroller may be in communication with a network. The video frames may be concatenated to create an overview to display the video frames substantially seamlessly respective to the location in which the sensor is positioned. The overview may be viewable using the interface and the results of the analysis performed by the rules engine may be accessible using the interface. | 11-15-2012 |
20120287246 | IMAGE PROCESSING APPARATUS CAPABLE OF DISPLAYING IMAGE INDICATIVE OF FACE AREA, METHOD OF CONTROLLING THE IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM - An image processing apparatus capable of appropriately displaying a face frame in a manner superimposed on a three-dimensional video image. In a three-dimensional photography image pickup apparatus as the image processing apparatus, two video images are acquired by shooting an object, and a face area is detected in each of the two video images. The face area detected in one of the two video images and the face area detected in the other video image are associated with each other. The three-dimensional photography image pickup apparatus generates face area-related information including positions on a display panel where face area images are to be displayed. The face area images are generated according to the face area-related information. The two video images are combined with the respective face area images, and the combined video images are output to the display panel. | 11-15-2012 |
20120293623 | METHOD AND SYSTEM FOR INSPECTING SMALL MANUFACTURED OBJECTS AT A PLURALITY OF INSPECTION STATIONS AND SORTING THE INSPECTED OBJECTS - A method and system for inspecting small, manufactured objects at a plurality of inspection stations and sorting the inspected objects are provided. Coins, coin blanks, tablets or pills are fed from a centrifugal feeder and conveyed or transferred by a transfer subsystem. The objects are spaced at equal intervals during conveyance to provide a “metering effect” which allows the proper spacing between objects for inspection and rejection of defects. The inspection stations may include imaging assemblies in the form of conventional cameras and/or three-dimensional sensors such as triangulation or confocal sensors. The inspection stations may include a circumference vision station and/or an eddy current station. Circumferential defects (like in edge lettering) on coins or rim defects on pills can be detected at the circumference vision station by another imaging assembly. Metal chips, foreign metallic debris, etc. in or on the tablets/pills can be detected at the eddy current station. | 11-22-2012 |
20120293624 | SYSTEM AND METHOD OF REVISING DEPTH OF A 3D IMAGE PAIR - A method of revising depth of a three-dimensional (3D) image is disclosed. The method comprises the following steps: firstly, at least one initial depth map associated with one image of the 3D image pair based on a stereo matching technique is received, wherein the one image comprises a plurality of pixels, and the initial depth map carries an initial depth value of each pixel. Then, the inconsistency among the pixels of the one image of the 3D image pair is detected to estimate a reliable map. Finally, the initial depth value is interpolated according to the reliable map and the proximate pixels, so as to generate a revised depth map by revising the initial depth value. | 11-22-2012 |
20120293625 | 3D-CAMERA AND METHOD FOR THE THREE-DIMENSIONAL MONITORING OF A MONITORING AREA - A 3D-camera ( | 11-22-2012 |
20120293626 | THREE-DIMENSIONAL DISTANCE MEASUREMENT SYSTEM FOR RECONSTRUCTING THREE-DIMENSIONAL IMAGE USING CODE LINE - Disclosed herein is a 3D distance measurement system. The 3D distance measurement system includes an image projection device for projecting a pattern image including one or more patterns on a target object, and an image acquisition device for acquiring a projected pattern image, analyzing the projected pattern image using the patterns, and then reconstructing a 3D image. Each of the patterns includes one or more preset identification factors so that the patterns can be uniquely recognized, and each of the identification factors is one of a point, a line, and a surface, or a combination of two or more of a point, a line, and a surface. The 3D distance measurement system is advantageous in that it reconstructs a 3D image using a single pattern image, thus greatly improving processing speed and the utilization of a storage space and enabling a 3D image to be accurately reconstructed. | 11-22-2012 |
20120293627 | 3D IMAGE INTERPOLATION DEVICE, 3D IMAGING APPARATUS, AND 3D IMAGE INTERPOLATION METHOD - A 3D image interpolation device performs frame interpolation on 3D video. The 3D image interpolation device includes: a range image interpolation unit that generates at least one interpolation range image to be interpolated between a first range image indicating a depth of a first image included in the 3D video and a second range image indicating a depth of a second image included in the 3D video; an image interpolation unit that generates at least one interpolation image to be interpolated between the first image and the second image; and an interpolation parallax image generation unit that generates, based on the interpolation image, at least one pair of interpolation parallax images having parallax according to a depth indicated by the interpolation range image. | 11-22-2012 |
20120293628 | CAMERA INSTALLATION POSITION EVALUATING METHOD AND SYSTEM - A camera installation position evaluating system includes a processor, the processor executing a process including setting a virtual plane orthogonal to the optic axis of a camera mounted on a camera mounted object, generating virtually a camera image to be captured by the camera, with use of data about a three-dimensional model of the camera mounted object, data about the virtual plane set by the setting and parameters of the camera, and computing a boundary between an area of the three-dimensional model of the camera mounted object and an area of the virtual plane set by the setting, on the camera image generated by the generating. Accordingly, the camera installation position evaluating system is able to quantitatively obtain the camera's view range at the present camera installation position based on the computed boundary. | 11-22-2012 |
20120293629 | IRIS SCANNING APPARATUS EMPLOYING WIDE-ANGLE CAMERA, FOR IDENTIFYING SUBJECT, AND METHOD THEREOF - Embodiments provide an iris scanning apparatus for identifying a subject, employing a wide-angle image collector, and a method thereof. A wide angle camera is employed in the iris scanning apparatus to allow a user to easily locate a small eye region of a subject without having to check back and forth between an image display and the subject's face. The apparatus and method are also capable of measuring the distance to the subject's eye and displaying the distance information on the image display, and informing the user as to whether the eye of the subject is within operating range of the iris scanning apparatus. Also, iris scanning is automatically performed without the user's input when an eye is positioned within operating range, and is not performed if an image captured by the iris scanning apparatus does not contain an eye region, in order to prevent erroneous operation. | 11-22-2012 |
20120300034 | INTERACTIVE USER INTERFACE FOR STEREOSCOPIC EFFECT ADJUSTMENT - Present embodiments contemplate systems, apparatus, and methods to determine a user's preference for depicting a stereoscopic effect. Particularly, certain of the embodiments contemplate receiving user input while displaying a stereoscopic video sequence. The user's preferences may be determined based upon the input. These preferences may then be applied to future stereoscopic depictions. | 11-29-2012 |
20120300035 | ELECTRONIC CAMERA - An electronic camera includes an imager. The imager captures a scene through an optical system. A distance adjuster adjusts an object distance to a designated distance. A depth adjuster adjusts a depth of field to a predetermined depth, corresponding to completion of an adjustment of the distance adjuster. An acceptor accepts a changing operation for changing a length of the designated distance. A changer changes the depth of field to an enlarged depth greater than the predetermined depth, in response to the changing operation. | 11-29-2012 |
20120300036 | Optimizing Stereo Video Display - System and method for video processing. First video levels for pixels for a left image of a stereo image pair are received from a GPU. Gamma corrected video levels (g-levels) are generated via a gamma look-up table (LUT) based on the first video levels. Outputs of the gamma LUT are constrained by minimum and/or maximum values, thereby excluding values for which corresponding post-OD display luminance values differ from static display luminance values by more than a specified error. Overdriven video levels are generated via a left OD LUT based on the g-levels. The overdriven video levels correspond to display luminance values that differ from corresponding static display luminance values by less than the error threshold, and are provided to a display device for display of the left image. This process is repeated for second video levels for a right image of the stereo image pair, using a right OD LUT. | 11-29-2012 |
20120300037 | THREE-DIMENSIONAL IMAGING SYSTEM USING A SINGLE LENS SYSTEM - The passive imaging system of the present application includes first and second input polarizers on the light receiving side of a light receiving lens. A first half of the split polarizer performs vertical polarization of incoming light while the second half of the split polarizer performs horizontal polarization of the incoming light. The input polarizing structure provides parallax to accomplish 3D imaging. A third or interleaving polarizer is provided between the lens and an imaging device and is adjacent to and closely spaced from (<10 microns) the image plane of the device. The interleaving polarizer is sectional so that alternating sections, along the direction of parallax created by the input polarizer(s), pass vertically and horizontally polarized light. The resulting image frame formed at the image plane of the imager is similarly sectional so that sections of the image alternate between vertically polarized light and horizontally polarized light, e.g., the odd sections of the image are images of vertically polarized light (received from the left side) and the even sections of the image are images of horizontally polarized light (received from the right side). Once an image frame has been captured, it is divided into two parallactic image frames, one of vertically polarized light imaged from the left side and one of horizontally polarized light imaged from the right side. The two resulting frames are combined to form a 3D image. | 11-29-2012 |
20120307008 | PORTABLE ELECTRONIC DEVICE WITH RECORDING FUNCTION - A portable electronic device with recording function includes a main body, a hinge member, a covering body, a first image capturing module and a second image capturing module. One side of the hinge member is connected to the main body and another side is connected to the covering body so that the covering body has at least a rotary axial direction of movement. The first image capturing module is disposed on the main body and includes a zoom lens and a first image-sensing element. The second image capturing module is disposed on the covering body and includes a stereoscopic image recording unit and at least a second image sensing element. | 12-06-2012 |
20120307009 | METHOD AND APPARATUS FOR GENERATING IMAGE WITH SHALLOW DEPTH OF FIELD - A method and an apparatus for generating an image with shallow depth of field are provided. The present method includes following steps. A subject is photographed according to a first aperture value, so as to generate a first aperture value image. The subject is photographed according to a second aperture value, so as to generate a second aperture value image, wherein the second aperture value is greater than the first aperture value. The first aperture value image and the second aperture value image are analyzed to generate an image difference value. If the image difference value is greater than a threshold, an image processing is performed on the first aperture value image to obtain the image with shallow depth of field. | 12-06-2012 |
20120307010 | OBJECT DIGITIZATION - Digitizing objects in a picture is discussed herein. A user presents the object to a camera, which captures the image comprising color and depth data for the front and back of the object. For both front and back images, the closest point to the camera is determined by analyzing the depth data. From the closest points, edges of the object are found by noting large differences in depth data. The depth data is also used to construct point cloud constructions of the front and back of the object. Various techniques are applied to extrapolate edges, remove seams, extend color intelligently, filter noise, apply skeletal structure to the object, and optimize the digitization further. Eventually, a digital representation is presented to the user and potentially used in different applications (e.g., games, Web, etc.). | 12-06-2012 |
20120307011 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD FOR DISPLAYING VIDEO IMAGE CAPABLE OF ACHIEVING IMPROVED OPERABILITY AND REALISM, AND NON-TRANSITORY STORAGE MEDIUM ENCODED WITH COMPUTER READABLE PROGRAM FOR CONTROLLING IMAGE PROCESSING APPARATUS - An exemplary embodiment provides an image processing apparatus. The image processing apparatus includes a first object control unit for changing an orientation or a direction of movement of a first object in a three-dimensional space, a virtual camera control unit for determining a direction of shooting of the virtual camera in the three-dimensional space, and a display data generation unit for generating display data based on the determined direction of shooting of the virtual camera. The virtual camera control unit includes a detection unit for detecting whether the first object is hidden by another object when viewed from the virtual camera and a first following change unit for increasing a degree of causing the direction of shooting of the virtual camera to follow the orientation or the direction of movement of the first object based on a result of detection. | 12-06-2012 |
20120307012 | ELECTRONIC DEVICE MOTION DETECTION AND RELATED METHODS - An electronic device may include an optical source generating an optical output, a lens cooperating with the optical source and projecting a grid optical pattern from the optical output, and a video sensor detecting changes in the grid optical pattern caused by movement of an object. | 12-06-2012 |
20120307013 | FOOD PROCESSING APPARATUS FOR DETECTING AND CUTTING TOUGH TISSUES FROM FOOD ITEMS - A food processing apparatus for detecting and cutting tough tissues from food items such as fish, meat, or poultry. At least one x-ray machine associated with a first conveyor images incoming food items on the first conveyor based on a generated x-ray image indicating the location of the tough tissues in the food items. A vision system supplies second image data of the food items subsequent to the imaging by the x-ray machine. The second image data includes position related data indicating the position of the food items on the second conveyor prior to the cutting. A mapping mechanism determines an estimated coordinate position of the food items on the second conveyor by utilizing the x-ray image and tracking position data. The processor compares the estimated coordinate position of the food items to the actual position on the second conveyor based on the second image data. | 12-06-2012 |
20120307014 | METHOD AND APPARATUS FOR ULTRAHIGH SENSITIVE OPTICAL MICROANGIOGRAPHY - Embodiments herein provide an ultrahigh sensitive optical microangiography (OMAG) system that provides high sensitivity to slow flow information, such as that found in blood flow in capillaries, while also providing a relatively low data acquisition time. The system performs a plurality of fast scans (i.e., B-scans) on a fast scan axis, where each fast scan includes a plurality of A-scans. At the same time, the system performs a slow scan (i.e., C-scan), on a slow scan axis, where the slow scan includes the plurality of fast scans. A detector receives the spectral interference signal from the sample to produce a three dimensional (3D) data set. An imaging algorithm is then applied to the 3D data set in the slow scan axis to produce at least one image of the sample. In some embodiments, the imaging algorithm may separate flow information from structural information of the sample. | 12-06-2012 |
20120314031 | INVARIANT FEATURES FOR COMPUTER VISION - Technology is described for determining and using invariant features for computer vision. A local orientation may be determined for each depth pixel in a subset of the depth pixels in a depth map. The local orientation may be an in-plane orientation, an out-of-plane orientation, or both. A local coordinate system is determined for each of the depth pixels in the subset based on the local orientation of the corresponding depth pixel. A feature region is defined relative to the local coordinate system for each of the depth pixels in the subset. The feature region for each of the depth pixels in the subset is transformed from the local coordinate system to an image coordinate system of the depth map. The transformed feature regions are used to process the depth map. | 12-13-2012 |
20120314032 | METHOD FOR PILOT ASSISTANCE FOR THE LANDING OF AN AIRCRAFT IN RESTRICTED VISIBILITY - Method for pilot assistance in landing an aircraft with restricted visibility, in which the position of a landing point is defined by at least one of a motion compensated, aircraft based helmet sight system and a remotely controlled camera during a landing approach, and the landing point is displayed on a ground surface in the at least one of the helmet sight system and the remotely controlled camera by production of symbols that conform with the outside view. The method includes one of producing or calculating during an approach, a ground surface based on measurement data from an aircraft based 3D sensor, and providing both the 3D measurement data of the ground surface and a definition of the landing point with reference to a same aircraft fixed coordinate system. | 12-13-2012 |
20120314033 | APPARATUS AND METHOD FOR GENERATING 3D IMAGE DATA IN A PORTABLE TERMINAL - The present invention relates to an apparatus and a method for processing three-dimensional (3D) image data of a portable terminal, and particularly, to an apparatus and a method for enabling content sharing and reproduction (playback) between various 3D devices using a file structure for effectively storing a 3D image (for example, a stereo image) obtained using a plurality of cameras, and a stored 3D related parameter. | 12-13-2012 |
20120320156 | Image recording apparatus - An image recording apparatus includes an image-sensing unit, a video-processing unit and a power module. The image-sensing unit further includes a frame, a first connector, and a first image-sensing module for capturing an image. The video-processing unit further includes a body, a second connector, a video-processing circuit, a video-output module, and a memory module. Through an electric engagement of the first connector and the second connector, image data captured by the first image-sensing module can be transmitted to the video-processing unit for being processed by the video-processing circuit into an electronic image file readable and storable by an ordinary computer apparatus. The electronic image file can be stored in the memory module. The video-processing unit can also forward the electronic image file to an external video-displaying apparatus through the video-output module. The power module provides electricity to the image-sensing unit and the video-processing unit. | 12-20-2012 |
20120320157 | COMBINED LIGHTING, PROJECTION, AND IMAGE CAPTURE WITHOUT VIDEO FEEDBACK - A “Concurrent Projector-Camera” uses an image projection device in combination with one or more cameras to enable various techniques that provide visually flicker-free projection of images or video, while real-time image or video capture is occurring in that same space. The Concurrent Projector-Camera provides this projection in a manner that eliminates video feedback into the real-time image or video capture. More specifically, the Concurrent Projector-Camera dynamically synchronizes a combination of projector lighting (or light-control points) on-state temporal compression in combination with on-state temporal shifting during each image frame projection to open a “capture time slot” for image capture during which no image is being projected. This capture time slot represents a tradeoff between image capture time and decreased brightness of the projected image. Examples of image projection devices include LED-LCD based projection devices, DLP-based projection devices using LED or laser illumination in combination with micromirror arrays, etc. | 12-20-2012 |
20120320158 | INTERACTIVE AND SHARED SURFACES - The interactive and shared surface technique described herein employs hardware that can project on any surface, capture color video of that surface, and get depth information of and above the surface while preventing visual feedback (also known as video feedback, video echo, or visual echo). The technique provides N-way sharing of a surface using video compositing. It also provides for automatic calibration of hardware components, including calibration of any projector, RGB camera, depth camera and any microphones employed by the technique. The technique provides object manipulation with physical, visual, audio, and hover gestures and interaction between digital objects displayed on the surface and physical objects placed on or above the surface. It can capture and scan the surface in a manner that captures or scans exactly what the user sees, which includes both local and remote objects, drawings, annotations, hands, and so forth. | 12-20-2012 |
20120320159 | Apparatus And Method To Automatically Distinguish Between Contamination And Degradation Of An Article - An inspection apparatus includes an imaging unit producing image signals; a processing unit for receiving the image signal; the imaging unit producing a stack of images of an article at different focal lengths in response to the processing unit; the processing unit generating a depth map from the stack of images; the processing unit analyzing the depth map to derive a depth profile of an object of interest; the processing unit determining a surface mean for the article from the stack of images; and the processing unit characterizing the article as degraded or contaminated in response to the depth profile and the surface mean. | 12-20-2012 |
20120320160 | DEVICE FOR ESTIMATING THE DEPTH OF ELEMENTS OF A 3D SCENE - Device comprising: | 12-20-2012 |
20120320161 | DEPTH AND LATERAL SIZE CONTROL OF THREE-DIMENSIONAL IMAGES IN PROJECTION INTEGRAL IMAGING - A method disclosed herein relates to displaying three-dimensional images. The method comprises projecting integral images to a display device, and displaying three-dimensional images with the display device. Further disclosed herein is an apparatus for displaying orthoscopic 3-D images. The apparatus comprises a projector for projecting integral images, and a micro-convex-mirror array for displaying the projected images. | 12-20-2012 |
20120327187 | ADVANCED REMOTE NONDESTRUCTIVE INSPECTION SYSTEM AND PROCESS - A system for inspecting a test article incorporates a diagnostic imaging system. A command controller receives two dimensional (2D) images from the diagnostic imaging system. A three dimensional (3D) computer aided design (CAD) model visualization system and an alignment system for determining local 3D coordinates are connected to the command controller. Computer software modules incorporated in the command controller are employed in aligning the 2D images and 3D CAD model responsive to the local 3D coordinates. The 2D images and 3D CAD model are displayed with reciprocal registration. The alignment system is then directed to selected coordinates in the 2D images or 3D CAD model. | 12-27-2012 |
20120327188 | Vehicle-Mounted Environment Recognition Apparatus and Vehicle-Mounted Environment Recognition System - A vehicle-mounted environment recognition apparatus including a simple pattern matching unit which extracts an object candidate from an image acquired from a vehicle-mounted image capturing apparatus by using a pattern shape stored in advance and outputs a position of the object candidate, an area change amount prediction unit which calculates a change amount prediction of the extracted object candidate on the basis of an object change amount prediction calculation method set differently for each area of a plurality of areas obtained by dividing the acquired image, detected vehicle behavior information, and an inputted position of the object candidate, and outputs a predicted position of an object, and a tracking unit which tracks the object on the basis of an inputted predicted position of the object. | 12-27-2012 |
20120327189 | Stereo Camera Apparatus - A stereo camera apparatus which carries out distance measuring stably and with high accuracy by making measuring distance resolution variable according to a distance to an object is provided. A stereo camera apparatus | 12-27-2012 |
20120327190 | MONITORING SYSTEM - A monitoring system includes at least one three-dimensional (3D) time-of-flight (TOF) camera configured to monitor a safety-critical area. An evaluation unit is configured to activate a safety function upon an entrance of at least one of an object and a person into the monitored area and to suppress the activation of the safety function where at least one clearance element is recognized as being present on the at least one of the object and the person. | 12-27-2012 |
20120327191 | 3D IMAGING DEVICE AND 3D IMAGING METHOD - A 3D imaging device obtains a 3D image (3D video) that achieves an appropriate 3D effect and/or intended placement. The 3D imaging device ( | 12-27-2012 |
20120327192 | METHOD AND DEVICE FOR OPTICAL SCANNING OF THREE-DIMENSIONAL OBJECTS BY MEANS OF A DENTAL 3D CAMERA USING A TRIANGULATION METHOD - A dental 3D camera for optically scanning a three-dimensional object, and a method for operating a dental 3D camera. The camera operates in accordance with a triangulation procedure to acquire a plurality of images of the object. The method comprises forming at least one comparative signal based on at least two images of the object acquired by the camera while at least one pattern is projected on the object, and determining at least one camera shake index based on the at least one comparative signal. | 12-27-2012 |
20120327193 | VOICE-BODY IDENTITY CORRELATION - A system and method are disclosed for tracking image and audio data over time to automatically identify a person based on a correlation of their voice with their body in a multi-user game or multimedia setting. | 12-27-2012 |
20130002822 | PRODUCT ORDERING SYSTEM, PROGRAM AND METHOD - A product ordering system storing the information of 3D product models including data for forming each of the 3D product models and the scale of each of the 3D product models. A method for ordering a product using the system includes first capturing an image; then sensing the distance between the image capturing unit and the user; next, obtaining specific image data from the captured image according to one selected 3D product model; then converting life size data needed to form a 3D model of the user according to the focus of the image capturing unit and the sensed distance; after that, generating a 3D model of the user according to the scale of the 3D product model; and finally overlaying the selected 3D product model with the 3D model of the user and displaying the combination for viewing by the user. | 01-03-2013 |
20130002823 | IMAGE GENERATING APPARATUS AND METHOD - An image generating apparatus may include a first reflector and a second reflector. When a light emitter emits an infrared light, the infrared light may be reflected from a first reflector and be omni-directionally reflected. The reflected infrared light that is the infrared light reflected from the object may be reflected from a second reflector and be transferred to a sensor. The sensor may receive the reflected infrared light and generate a depth image of the object. | 01-03-2013 |
20130002824 | INTEGRATED OTOSCOPE AND THREE DIMENSIONAL SCANNING SYSTEM - A multi-purpose device that can be used, as, among other things, an otoscope and a three dimensional scanning system is disclosed. | 01-03-2013 |
20130002825 | IMAGING DEVICE - This 3D image capture device includes a light-transmitting section | 01-03-2013 |
20130002826 | CALIBRATION DATA SELECTION DEVICE, METHOD OF SELECTION, SELECTION PROGRAM, AND THREE DIMENSIONAL POSITION MEASURING APPARATUS - Appropriate selection of calibration data by shortening process time without wasteful processing is provided. Before measuring a three dimensional point of a target object from a stereo image, the calibration data according to an in-focus position of taking optical systems are applied to the stereo image. To select the calibration data, an object distance is acquired according to parallax obtained from the stereo image being reduced. The object distance is an estimated focusing distance corresponding to the in-focus position. One of the calibration data assigned with a set distance region in which the estimated focusing distance is included is selected. Respective view images are reduced in a range in which it is possible to detect any one of the set distance regions determined for a respective reference focusing distance corresponding to the calibration data. | 01-03-2013 |
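The selection logic in the abstract above can be caricatured as a lookup over distance regions: each calibration data set carries the set distance region it is assigned to, and the entry whose region contains the estimated focusing distance is chosen. This is a minimal illustrative sketch; the function name and the `(near, far, data)` region representation are assumptions, not the patent's actual interface.

```python
def select_calibration(estimated_distance, calibrations):
    """Pick the calibration data set whose assigned distance region
    contains the estimated focusing distance.

    calibrations: list of (near, far, data) tuples, where [near, far)
    is the set distance region assigned to that calibration data.
    Returns the matching data, or None if no region contains the
    estimate.
    """
    for near, far, data in calibrations:
        if near <= estimated_distance < far:
            return data
    return None
```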
20130010066 | NIGHT VISION - A robot is provided that includes a processor executing instructions that generate an image. The robot also includes a depth sensor that captures depth data about an environment of the robot. Additionally, the robot includes a software component executed by the processor configured to generate a depth map of the environment based on the depth data. The software component is also configured to generate the image based on the depth map and red-green-blue (RGB) data about the environment. | 01-10-2013 |
20130010067 | Camera and Method for Focus Based Depth Reconstruction of Dynamic Scenes - A dynamic scene is reconstructed as depths and an extended depth of field video by first acquiring, with a camera including a lens and sensor, a focal stack of the dynamic scene while changing a focal depth. An optical flow between the frames of the focal stack is determined, and the frames are warped according to the optical flow to align the frames and to generate a virtual static focal stack. Finally, a depth map and a texture map for each virtual static focal stack is generated using a depth from defocus, wherein the texture map corresponds to an EDOF image. | 01-10-2013 |
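The depth-recovery step above can be illustrated with the closely related depth-from-focus idea: per pixel, take the focal-stack slice with the strongest local contrast as the depth index. This is a toy sketch under assumed conventions (a grayscale stack, frame index standing in for focal depth), not the patent's depth-from-defocus method.

```python
import numpy as np

def depth_from_focus(stack):
    """Per pixel, pick the focal-stack slice with the highest local
    contrast (squared Laplacian response). Since each frame was
    focused at a different depth, the winning frame index serves as a
    coarse depth map.

    stack: (n_frames, H, W) grayscale focal stack.
    Returns an (H, W) integer map of frame indices.
    """
    responses = []
    for frame in stack:
        # 4-neighbor Laplacian via circular shifts (edges wrap).
        lap = (np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0) +
               np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1) -
               4.0 * frame)
        responses.append(lap ** 2)
    return np.argmax(np.stack(responses), axis=0)
```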
20130010068 | AUGMENTED REALITY SYSTEM - Methods and systems for providing an augmented reality system are disclosed. In one instance, an augmented reality system may: identify a feature within a three-dimensional environment; project information into the three-dimensional environment; collect an image of the three-dimensional environment and the projected information; determine at least one of distance and orientation of the feature from the projected information; identify an object within the three-dimensional environment; and perform markerless tracking of the object. | 01-10-2013 |
20130010069 | METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR WIRELESSLY CONNECTING A DEVICE TO A NETWORK - From a bit stream, at least the following are decoded: a stereoscopic image of first and second views; a maximum positive disparity between the first and second views; and a minimum negative disparity between the first and second views. In response to the maximum positive disparity violating a limit on positive disparity, a convergence plane of the stereoscopic image is adjusted to comply with the limit on positive disparity. In response to the minimum negative disparity violating a limit on negative disparity, the convergence plane is adjusted to comply with the limit on negative disparity. | 01-10-2013 |
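Shifting the convergence plane adds the same horizontal offset to every disparity, so the adjustment described can be modeled as choosing one constant shift that brings the disparity range within the limits. A hedged sketch (function name and the clamping policy, where the negative limit wins ties, are assumptions, not the decoder's specified behavior):

```python
def convergence_shift(min_disp, max_disp, neg_limit, pos_limit):
    """Return a constant disparity offset (in pixels) that brings the
    range [min_disp, max_disp] within [neg_limit, pos_limit].

    Because one offset moves both ends of the range together,
    satisfying one limit can violate the other; here the negative
    (in-front-of-screen) limit takes precedence.
    """
    shift = 0.0
    if max_disp > pos_limit:            # violates positive-disparity limit
        shift = pos_limit - max_disp    # pull the scene forward
    if min_disp + shift < neg_limit:    # violates negative-disparity limit
        shift = neg_limit - min_disp    # push the scene back instead
    return shift
```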
20130010070 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus configured to estimate a position and orientation of a measuring object using an imaging apparatus includes an approximate position and orientation input unit configured to input a relative approximate position and orientation between the imaging apparatus and the measuring object, a first position and orientation updating unit configured to update the approximate position and orientation by matching a three-dimensional shape model to a captured image, a position and orientation difference information input unit configured to calculate and acquire a position and orientation difference amount of the imaging apparatus relative to the measuring object having moved after the imaging apparatus has captured an image of the measuring object or after the last position and orientation difference information has been acquired, and a second position and orientation updating unit configured to update the approximate position and orientation based on the position and orientation difference amount. | 01-10-2013 |
20130010071 | METHODS AND SYSTEMS FOR MAPPING POINTING DEVICE ON DEPTH MAP - Disclosed are methods for determining and tracking a current location of a handheld pointing device, such as a remote control for an entertainment system, on a depth map generated by a gesture recognition control system. The methods disclosed herein enable identifying a user's hand gesture, and generating corresponding motion data. Further, the handheld pointing device may send motion, such as acceleration or velocity, and/or orientation data such as pitch, roll, and yaw angles. The motion data of user's hand gesture and motion data (orientation data) as received from the handheld pointing device are then compared, and if they correspond to each other, it is determined that the handheld pointing device is in active use by the user as it is held by a particular hand. Accordingly, a location of the handheld pointing device on the depth map can be determined. | 01-10-2013 |
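The comparison step described above can be illustrated as a normalized correlation between the hand's motion trace (from the depth map) and the trace reported by the device's own sensors; a high score suggests the device is held in that hand. This is purely illustrative; the names, the 1-D velocity traces, and the 0.9 threshold are assumptions.

```python
import numpy as np

def is_device_in_hand(hand_velocity, imu_velocity, threshold=0.9):
    """Decide whether the handheld device is in the tracked hand by
    correlating the two motion traces after z-score normalization.

    hand_velocity: 1-D velocity trace of the hand from gesture tracking.
    imu_velocity:  1-D velocity trace reported by the device's sensors.
    Returns True when the normalized correlation reaches the threshold.
    """
    a = np.asarray(hand_velocity, dtype=float)
    b = np.asarray(imu_velocity, dtype=float)
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    corr = float(np.mean(a * b))        # in [-1, 1]
    return corr >= threshold
```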
20130010072 | SENSOR, DATA PROCESSING SYSTEM, AND OPERATING METHOD - An image sensor includes a unit pixel including a plurality of color pixels with a depth pixel. A first signal line group of first signal lines is used to supply first control signals that control operation of the plurality of color pixels, and a separate second signal line group of second signal lines is used to supply second control signals that control operation of the depth pixel. | 01-10-2013 |
20130010073 | SYSTEM AND METHOD FOR GENERATING A DEPTH MAP AND FUSING IMAGES FROM A CAMERA ARRAY - A method, apparatus, system, and computer program product for digital imaging. Multiple cameras comprising lenses and digital image sensors are used to capture multiple images of the same subject, and process the multiple images using difference information (e.g., an image disparity map, an image depth map, etc.). The processing commences by receiving a plurality of image pixels from at least one first image sensor, wherein the first image sensor captures a first image of a first color, receives a stereo image of the first color, and also receives other images of other colors. Given the stereo imagery, a disparity map and an associated depth map are constructed by searching for pixel correspondences between the first image and the stereo image. Using the constructed disparity map, captured images are converted into converted images, which are then combined with the first image, resulting in a fused multi-channel color image. | 01-10-2013 |
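The correspondence search mentioned above can be sketched as brute-force block matching on grayscale views: for each left-image pixel, test horizontal shifts into the right image and keep the shift minimizing the sum of absolute differences over a small window. A toy illustration only, not the patent's multi-color pipeline; names and the matching convention (left x matches right x - d) are assumptions.

```python
import numpy as np

def disparity_map(left, right, max_disp=4, patch=1):
    """Brute-force block matching between two grayscale views.

    For each pixel in the left image, test horizontal shifts d into
    the right image (left x matches right x - d) and keep the shift
    minimizing the sum of absolute differences (SAD) over a
    (2*patch+1)^2 window. Returns an (H, W) integer disparity map.
    """
    H, W = left.shape
    L = np.pad(left, patch, mode="edge")
    R = np.pad(right, patch, mode="edge")
    disp = np.zeros((H, W), dtype=int)
    win = 2 * patch + 1
    for y in range(H):
        for x in range(W):
            ref = L[y:y + win, x:x + win]
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x) + 1):
                cand = R[y:y + win, x - d:x - d + win]
                cost = float(np.abs(ref - cand).sum())
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```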
20130010074 | MEASUREMENT APPARATUS, MEASUREMENT METHOD, AND FEATURE IDENTIFICATION APPARATUS - It is an object to measure a position of a feature around a road. An image memory unit stores images in which neighborhood of the road is captured. Further, a three-dimensional point cloud model memory unit | 01-10-2013 |
20130010075 | CAMERA WITH SENSORS HAVING DIFFERENT COLOR PATTERNS - An image capture device includes a lens arrangement having a first lens associated with a first digital image sensor and a second lens associated with a second digital image sensor; the first digital image sensor having photosites of a first predetermined color pattern for producing a first digital image; the second digital image sensor having photosites of a different second predetermined color pattern for producing a second digital image. The image capture device also includes a device for causing the lens arrangement to capture a first digital image from the first digital image sensor and a second digital image from the second digital image sensor at substantially the same time; a processor aligning the first and second digital images; and the processor, using values of the second image based on the alignment between the first and second images, operates on the first digital image to produce an enhanced digital image. | 01-10-2013 |
20130010076 | MULTI-CORE PROCESSOR FOR PORTABLE DEVICE WITH DUAL IMAGE SENSORS - A multi-core processor is used in a portable device that has first and second image sensors spaced from each other for capturing images of a scene from slightly different perspectives. The multi-core processor has a first image sensor interface for receiving data from the first image sensor, a second image sensor interface for receiving data from the second image sensor, and multiple processing units, the processing units and the first and second sensor interfaces being integrated onto a single chip. The processing units are configured to simultaneously process the data from the first and second image interfaces to generate stereoscopic image data. | 01-10-2013 |
20130010077 | THREE-DIMENSIONAL IMAGE CAPTURING APPARATUS AND THREE-DIMENSIONAL IMAGE CAPTURING METHOD - A three-dimensional image capturing apparatus generates depth information to be used for generating a three-dimensional image from an input image, and includes: a capturing unit obtaining the input image in capturing; an object designating unit designating an object in the input image; a resolution setting unit setting depth values, each representing a different depth position, so that in a direction parallel to a depth direction of the input image, depth resolution near the object is higher than depth resolution positioned apart from the object, the object being designated by the object designating unit; and a depth map generating unit generating two-dimensional depth information corresponding to the input image by determining, for each of regions in the input image, a depth value, from among the depth values set by the resolution setting unit, indicating a depth position corresponding to one of the regions. | 01-10-2013 |
20130010078 | STEREOSCOPIC IMAGE TAKING APPARATUS - A stereoscopic image taking apparatus ( | 01-10-2013 |
20130016182 | COMMUNICATING AND PROCESSING 3D VIDEO (Inventors: Robert C. Booth, Ivyland, PA, US; Dinkar N. Bhat, Princeton, NJ, US; Patrick J. Leary, Horsham, PA, US) - There is a communicating of 3D video having associated metadata. The communicating may involve devices and includes receiving a first video bitstream with the 3D video encoded in a first format, receiving the associated metadata. The communicating also includes forming a protocol message, utilizing a processor, including the associated metadata. The communicating also includes transmitting a second video bitstream with the 3D video encoded in a second format and transmitting the protocol message separate from the second video bitstream. The protocol message includes data fields holding processing data, based on the associated metadata, describing the rendering of the 3D video. The processing data is operable to be read and utilized to direct a client device to process the 3D video extracted from a video bitstream and present the 3D video on a display. Also, there is a processing of the 3D video. The processing utilizes the protocol message. | 01-17-2013 |
20130016183 | Dual Mode User Interface System and Method for 3D Video (Inventors: David B. Lazarus, Elkins Park, PA, US; Yaxi Zhang, Wayne, PA, US) - A system is provided for use with a video input signal and a video unit. The video input signal can be one of a two dimensional video signal and a three dimensional video signal. The video unit can display a three dimensional video and a two dimensional video. The system includes a receiver portion, a processing portion, a switching portion and an output portion. The receiver portion can receive the video input signal. The processing portion can output a first signal in a first mode of operation and can output a second signal in a second mode of operation, wherein the first signal is based on the video input signal and the second signal is based on the video input signal. The switching portion can switch the processing portion from the first mode of operation to the second mode of operation. The output portion can provide an output signal to the video unit, wherein the output signal is based on the first signal when the processing portion operates in the first mode of operation and wherein the output signal is based on the second signal when the processing portion operates in the second mode of operation. The first signal includes a two dimensional video signal, whereas the second signal includes a three dimensional video signal. | 01-17-2013 |
20130016184 | SYSTEM AND METHOD FOR LOCATING AND DISPLAYING AIRCRAFT INFORMATION - A system and method for locating and displaying aircraft information, such as three-dimensional models and various information about an aircraft component. The system may include a portable display, a remote processor, and one or more location and/or orientation-determining components. The models and other various information displayed on the portable display may correspond with a location and orientation of the portable display relative to the aircraft component. The location and/or orientation-determining components may include one or more infrared cameras for communicating with the remote processor and a plurality of infrared targets or infrared markers. The remote processor may be configured to filter the information provided to the portable display based on the portable display's location and orientation relative to the aircraft component, geographic location, user input, or other various parameters. | 01-17-2013 |
20130016185 | LOW-COST IMAGE-GUIDED NAVIGATION AND INTERVENTION SYSTEMS USING COOPERATIVE SETS OF LOCAL SENSORS - An augmentation device for an imaging system has a bracket structured to be attachable to an imaging component, and a projector attached to the bracket. The projector is arranged and configured to project an image onto a surface in conjunction with imaging by the imaging system. A system for image-guided surgery has an imaging system, and a projector configured to project an image or pattern onto a region of interest during imaging by the imaging system. A capsule imaging device has an imaging system, and a local sensor system. The local sensor system provides information to reconstruct positions of the capsule endoscope free from external monitoring equipment. | 01-17-2013 |
20130021441 | METHOD AND IMAGE SENSOR HAVING PIXEL STRUCTURE FOR CAPTURING DEPTH IMAGE AND COLOR IMAGE - An image sensor having a pixel structure for capturing a depth image and a color image. The image sensor has a pixel structure that shares a floating diffusion (FD) node and a readout node, and operates with different pixel structures, according to a depth mode and a color mode. | 01-24-2013 |
20130021442 | ELECTRONIC CAMERA - An electronic camera includes an imager. The imager repeatedly outputs an image indicating a space captured on an imaging surface. A displayer displays the image outputted from the imager. A superimposer superimposes an index indicating a position of at least a focal point onto the image displayed by the displayer. A position changer changes a position of the index superimposed by the superimposer according to a focus adjusting operation. A setting changer changes a focusing setting in association with the process of the position changer. | 01-24-2013 |
20130021443 | CAMERA SYSTEM WITH COLOR DISPLAY AND PROCESSOR FOR REED-SOLOMON DECODING - A camera system including: a substrate having a coding pattern printed thereon and | 01-24-2013 |
20130021444 | CAMERA SYSTEM WITH COLOR DISPLAY AND PROCESSOR FOR REED-SOLOMON DECODING - A handheld digital camera device including: a digital camera unit having a first image sensor for capturing images and a color display for displaying captured images to a user; an integral processor configured for: controlling operation of the first image sensor and color display; decoding an imaged coding pattern printed on a substrate, the printed coding pattern employing Reed-Solomon encoding; and performing an action in the handheld digital camera device based on the decoded coding pattern. The decoding includes the steps of: detecting target structures defining an extent of a data area; determining the data area using the detected target structures; and Reed-Solomon decoding the coding pattern contained in the determined data area. | 01-24-2013 |
20130021445 | Camera Projection Meshes - A 3D rendering method is proposed to increase the performance when projecting and compositing multiple images or video sequences from real-world cameras on top of a precise 3D model of the real world. Unlike previous methods that relied on shadow-mapping and that were limited in performance due to the need to re-render the complex scene multiple times per frame, the proposed method uses, for each camera, one Camera Projection Mesh (“CPM”) of fixed and limited complexity. The CPM that surrounds each camera is effectively molded over the surrounding 3D world surfaces or areas visible from the video camera. Rendering and compositing of the CPMs may be entirely performed on the Graphic Processing Unit (“GPU”) using custom shaders for optimal performance. The method also enables improved view-shed analysis and fast visualization of the coverage of multiple cameras. | 01-24-2013 |
20130027517 | METHOD AND APPARATUS FOR CONTROLLING AND PLAYING A 3D IMAGE - A 3D image playing apparatus is provided. The 3D image playing apparatus includes a plurality of speakers which output a plurality of test sounds, a receiver which receives the test sounds output from the plurality of speakers, a location detector which detects a location of the receiver by receiving feedback of the test sounds received at the receiver and analyzing the test sounds, a 3D processor which adjusts a 3D effect of a 3D image according to the location detected by the location detector, and a display unit which outputs the 3D image having the 3D effect adjusted by the 3D processor. | 01-31-2013 |
20130027518 | MICROSCOPE STABILITY USING A SINGLE OPTICAL PATH AND IMAGE DETECTOR - Stabilization, via active-feedback positional drift-correction, of an optical microscope imaging system in up to 3-dimensions is achieved using the optical measurement path of an image sensor. Nanometer-scale stability of the imaging system is accomplished by correcting for positional drift using fiduciary references sparsely distributed within or in proximity to the experimental sample. | 01-31-2013 |
20130027519 | AUTOMATED THREE DIMENSIONAL MAPPING METHOD - An automated three dimensional mapping method estimating three dimensional models taking advantage of a plurality of images. Positions and attitudes for at least one camera are recorded when images are taken. The at least one camera is geometrically calibrated to indicate the direction of each pixel of an image. A stereo disparity is calculated for a plurality of image pairs covering a same scene position setting a disparity and a certainty measure estimate for each stereo disparity. The different stereo disparity estimates are weighted together to form a 3D model. The stereo disparity estimates are reweighted automatically and adaptively based on the estimated 3D model. | 01-31-2013 |
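The weighting step described above can be sketched as a per-pixel certainty-weighted average of the disparity estimates from the different image pairs. A minimal sketch only: names and array shapes are assumptions, and the adaptive reweighting against the estimated 3D model is left out.

```python
import numpy as np

def fuse_disparities(disparities, certainties):
    """Fuse several per-pixel disparity estimates of the same scene
    position into one map, weighting each estimate by its certainty
    measure.

    disparities, certainties: arrays of shape (n_pairs, H, W).
    Returns the (H, W) certainty-weighted mean disparity.
    """
    d = np.asarray(disparities, dtype=float)
    w = np.asarray(certainties, dtype=float)
    # Guard against pixels where every certainty is zero.
    return (d * w).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-9)
```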
20130027520 | 3D IMAGE RECORDING DEVICE AND 3D IMAGE SIGNAL PROCESSING DEVICE - A 3D image signal processing device performs a signal processing on at least one image signal of a first viewpoint signal as an image signal generated at a first viewpoint and a second viewpoint signal as an image signal generated at a second viewpoint different from the first viewpoint. The device includes an image processor that executes a predetermined image processing on at least one image signal of the first viewpoint signal and the second viewpoint signal, and a controller that controls the image processor. The controller controls the image processor to perform a feathering process on at least one image signal of the first viewpoint signal and the second viewpoint signal, the feathering process being a process for smoothing pixel values of pixels positioned on a boundary between an object included in the image represented by the at least one image signal and an image adjacent to the object. | 01-31-2013 |
20130033571 | METHOD AND SYSTEM FOR CROPPING A 3-DIMENSIONAL MEDICAL DATASET - A method and gesture-based control system for manipulating a 3-dimensional medical dataset include translating a body part and detecting the translation of the body part with a camera system. The method and system include translating a crop plane in the 3-dimensional medical dataset based on the translation of the body part. The method and system include cropping the 3-dimensional medical dataset at the location of the crop plane after translating the crop plane and displaying the cropped 3-dimensional medical dataset using volume rendering. | 02-07-2013 |
20130033572 | OPTIMIZING USAGE OF IMAGE SENSORS IN A STEREOSCOPIC ENVIRONMENT - The invention is directed to systems, methods and computer program products for optimizing usage of image sensors in a stereoscopic environment. The method includes: (a) providing a first image sensor, where the first image sensor is associated with a first image sensor area and a first imaging area; (b) determining a distance from the camera to an object to be captured; and (c) shifting the first imaging area along a length of the first image sensor area, where the amount of the shifting is based at least partially on the distance from the camera to the object, and where the first imaging area can shift along an entire length of the first image sensor area. The invention optimizes usage of an image sensor by permitting an increase in disparity control. Additionally, the invention reduces the closest permissible distance of an object to be captured using a stereoscopic camera. | 02-07-2013 |
20130033573 | OPTICAL PHASE EXTRACTION SYSTEM HAVING PHASE COMPENSATION FUNCTION OF CLOSED LOOP TYPE AND THREE-DIMENSIONAL IMAGE EXTRACTION METHOD THEREOF - Provided is an image extraction method of an optical phase extraction system. The image extraction method may include checking whether a phase error due to an environmental disturbance of the optical fiber occurs by monitoring an output signal obtained by interfering reflection optical signals reflected through two paths. When a phase error occurs, the error is compensated using a phase compensation control method of closed loop type through one of the two paths, and an image is extracted by capturing an image of the object in a state in which the image of the object is shifted by the set phase value once the phase error is compensated. According to the inventive concept, a phase error occurring in an optical fiber type interferometer due to an environmental disturbance is minimized or compensated. Also, since an interference image accurately shifted by the phase value set among arbitrary various phase values is obtained through a camera, reliability of the three-dimensional phase information being extracted is guaranteed. | 02-07-2013 |
20130033574 | METHOD AND SYSTEM FOR UNVEILING HIDDEN DIELECTRIC OBJECT - The invention relates to the remote measurement of the dielectric permittivity of dielectrics. A 3D microwave image and a 3D optical range image of an interrogated scene are recorded at the same moment. The images are digitized and overlapped. A space between the microwave and optical images is measured, and the dielectric permittivity of the space between these images is determined. If the dielectric permittivity is about 3, then hidden explosive materials or components thereof are suspected. The invention makes it possible to remotely determine the dielectric permittivity of moving, irregularly-shaped dielectric objects. | 02-07-2013 |
20130033575 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an imaging element that photographs multiple viewing point images corresponding to images observed from different viewing points, and an image processing unit that separates an output signal of the imaging element, acquires the plurality of viewing point images corresponding to the images observed from the different viewing points, and generates a left eye image and a right eye image for three-dimensional image display on the basis of the plurality of acquired viewing point images. The image processing unit generates parallax information on the basis of the plurality of viewing point images obtained from the imaging element and generates a left eye image and a right eye image for three-dimensional image display by 2D3D conversion processing using the generated parallax information. By this configuration, a plurality of viewing point images are acquired on the basis of one photographed image and images for three-dimensional image display are generated. | 02-07-2013 |
20130033576 | IMAGE PROCESSING DEVICE AND METHOD, AND PROGRAM - There is provided an image processing device including capturing portions that respectively capture a first image and a second image that form an image for a right eye and an image for a left eye which can be stereoscopically viewed in three dimensions, a comparison portion that compares the first image and the second image captured by the capturing portions, a determination portion that determines, based on a comparison result of the comparison portion, which of the first image and the second image is the image for the right eye and which is the image for the left eye, and an output portion that outputs each of the first image and the second image, as the image for the right eye and the image for the left eye, based on a determination result of the determination portion. | 02-07-2013 |
20130033577 | MULTI-LENS CAMERA WITH A SINGLE IMAGE SENSOR - A multiple-lens camera has only one image sensor to capture a number of images at different viewing angles. By using a single image sensor, instead of a number of separate image sensors, to capture multiple images simultaneously, one can avoid the calibration process needed to ensure that color balance and gain are the same for all the image sensors used. The camera has an adjustment mechanism for adjusting the distance between the image lenses, and a processor to receive from the image sensor electronic signals indicative of image data of the captured images. The camera has a connector to transfer the processed image data to an external device or to an image display. The image display device is configured to display one of said plurality of images. | 02-07-2013 |
20130033578 | PROCESSING MULTI-APERTURE IMAGE DATA - A method and a system for processing multi-aperture image data are described, wherein the method comprises: capturing image data associated with one or more objects by simultaneously exposing an image sensor in an imaging system to spectral energy associated with at least a first part of the electromagnetic spectrum using at least a first aperture and to spectral energy associated with at least a second part of the electromagnetic spectrum using at least a second and third aperture; generating first image data associated with said first part of the electromagnetic spectrum and second image data associated with said second part of the electromagnetic spectrum; and generating depth information associated with said captured image on the basis of displacement information in said second image data, preferably on the basis of displacement information in an auto-correlation function of the high-frequency image data associated with said second image data. | 02-07-2013 |
20130033579 | PROCESSING MULTI-APERTURE IMAGE DATA - A method and a system for processing multi-aperture image data are described, wherein the method comprises: capturing image data associated with one or more objects by simultaneously exposing an image sensor in an imaging system to spectral energy associated with at least a first part of the electromagnetic spectrum using at least a first aperture and to spectral energy associated with at least a second part of the electromagnetic spectrum using at least a second aperture; generating first image data associated with said first part of the electromagnetic spectrum and second image data associated with said second part of the electromagnetic spectrum; and generating depth information associated with said captured image on the basis of first sharpness information in at least one area of said first image data and second sharpness information in at least one area of said second image data. | 02-07-2013 |
20130033580 | THREE-DIMENSIONAL VISION SENSOR - Height recognition processing is enabled by setting the height of an arbitrary plane to zero for convenience of the recognition processing. A parameter for three-dimensional measurement is calculated and registered through calibration and, thereafter, an image pickup with a stereo camera is performed on a plane desired to be recognized as having a height of zero in actual recognition processing. Three-dimensional measurement using the registered parameter is performed on characteristic patterns (marks m | 02-07-2013 |
20130038690 | METHOD AND APPARATUS FOR GENERATING THREE-DIMENSIONAL IMAGE INFORMATION - A method and apparatus for generating three-dimensional image information is disclosed. The method involves directing light captured within the field of view of the lens to an aperture plane of the lens, receiving the captured light at a spatial discriminator located proximate the aperture plane, the discriminator including a first portion disposed to transmit light having a first optical state through a first portion of the single imaging path and a second portion disposed to transmit light having a second optical state through a second portion of the single imaging path. The first and second portions of the single imaging path provide respective first and second perspective viewpoints within the field of view of the lens for forming respective first and second images at an image sensor disposed at an image plane of the lens. The first image represents objects within the field of view from the first perspective viewpoint and the second image represents the objects from the second perspective viewpoint, the first and second images together being operable to represent three dimensional spatial attributes of the objects. The method also involves receiving the first image at a first plurality of sensor elements on the image sensor, the first plurality of elements being responsive to light having the first optical state, and receiving the second image at a second plurality of sensor elements on the image sensor, the second plurality of elements being responsive to light having the second optical state. | 02-14-2013 |
20130038691 | ASYMMETRIC ANGULAR RESPONSE PIXELS FOR SINGLE SENSOR STEREO - Depth sensing imaging pixels include pairs of left and right pixels forming an asymmetrical angular response to incident light. A single microlens is positioned above each pair of left and right pixels. Each microlens spans across each of the pairs of pixels in a horizontal direction. Each microlens has a length that is substantially twice the length of either the left or right pixel in the horizontal direction; and each microlens has a width that is substantially the same as a width of either the left or right pixel in a vertical direction. The horizontal and vertical directions are horizontal and vertical directions of a planar image array. A light pipe in each pixel is used to improve light concentration and reduce cross talk. | 02-14-2013 |
20130038692 | Remote Control System - A remote control system comprises a mobile object, a remote controller for remotely controlling the mobile object, and a storage unit where background images to simulate a driving room or an operation room of the mobile object are stored. The mobile object has a stereo camera, a camera control unit for controlling image pickup direction of the stereo camera, and a first communication unit for communicating information including at least images photographed by the stereo camera. The remote controller has a second communication unit for communicating to and from the first communication unit, a control unit for controlling the mobile object, and a display unit for synthesizing at least a part of the images photographed by the stereo camera and the background images and for displaying the images so that a stereoscopic view can be displayed. | 02-14-2013 |
20130038693 | METHOD AND APPARATUS FOR REDUCING FRAME REPETITION IN STEREOSCOPIC 3D IMAGING - The present invention is directed towards enhancing the reproduction of three-dimensional dynamic scenes on digital light processing (DLP) and liquid crystal display (LCD) projectors and displays by adding an optimal amount of motion blur to stimulate the covered eye to continue perceiving scene picture changes. Too much blur would cause smearing, but a lack of blur induces motion breaking. | 02-14-2013 |
20130038694 | METHOD FOR MOVING OBJECT DETECTION USING AN IMAGE SENSOR AND STRUCTURED LIGHT - A method for detecting moving objects including people. Enhanced monitoring, safety, and security are provided through the use of a monocular camera and a structured light source, by trajectory computation, velocity computation, or counting of people and other objects passing through a laser plane arranged perpendicular to the ground, which can be set up anywhere near a portal, a hallway, or another open area. Enhanced security is provided for portals such as revolving doors, mantraps, swing doors, sliding doors, etc., using the monocular camera and structured light source to detect and, optionally, prevent access violations such as “piggybacking” and “tailgating”. | 02-14-2013 |
20130038695 | PLAYBACK APPARATUS, DISPLAY APPARATUS, RECORDING APPARATUS AND STORAGE MEDIUM - A terminal device ( | 02-14-2013 |
20130044186 | Plane-based Self-Calibration for Structure from Motion - Robust techniques for self-calibration of a moving camera observing a planar scene. Plane-based self-calibration techniques may take as input the homographies between images estimated from point correspondences and provide an estimate of the focal lengths of all the cameras. A plane-based self-calibration technique may be based on the enumeration of the inherently bounded space of the focal lengths. Each sample of the search space defines a plane in the 3D space and in turn produces a tentative Euclidean reconstruction of all the cameras that is then scored. The sample with the best score is chosen and the final focal lengths and camera motions are computed. Variations on this technique handle both constant focal length cases and varying focal length cases. | 02-21-2013 |
20130044187 | 3D CAMERA AND METHOD OF MONITORING A SPATIAL ZONE - A 3D camera ( | 02-21-2013 |
20130044188 | STEREOSCOPIC IMAGE REPRODUCTION DEVICE AND METHOD, STEREOSCOPIC IMAGE CAPTURING DEVICE, AND STEREOSCOPIC DISPLAY DEVICE - A stereoscopic image is displayed with an appropriate amount of parallax based on auxiliary information recorded in a three-dimensional-image file. The size of a display which performs 3D display is acquired (Step S | 02-21-2013 |
20130057650 | OPTICAL GAGE AND THREE-DIMENSIONAL SURFACE PROFILE MEASUREMENT METHOD - An optical gage ( | 03-07-2013 |
20130057651 | METHOD AND SYSTEM FOR POSITIONING OF AN ANTENNA, TELESCOPE, AIMING DEVICE OR SIMILAR MOUNTED ONTO A MOVABLE PLATFORM - Method and system for positioning an antenna ( | 03-07-2013 |
20130057652 | Handheld Scanning Device - A handheld, cordless scanning device for the three-dimensional image capture of patient anatomy without the use of potentially hazardous lasers, optical reference targets for frame alignment, magnetic reference receivers, or the requirement that the scanning device be plugged in while scanning. The device generally includes a housing having a front end and a rear end. The rear end includes a handle and trigger. The front end includes a pattern projector for projecting a unique pattern onto a target object and a camera for capturing live video of the projected pattern as it is deformed around the object. The front end of the housing also includes a pair of focus beam generators and an indexing beam generator. By utilizing data collected with the present invention, patient anatomy such as anatomical features and residual limbs may be digitized to create accurate three-dimensional representations which may be utilized in combination with computer-aided-drafting programs. | 03-07-2013 |
20130057653 | APPARATUS AND METHOD FOR RENDERING POINT CLOUD USING VOXEL GRID - A method for rendering a point cloud using a voxel grid includes generating a bounding box containing the entire point cloud and dividing the generated bounding box into voxels to make the voxel grid, and allocating at least one texture plane to each of the voxels of the voxel grid. Further, the method includes orthogonally projecting the points within each voxel onto the allocated texture planes to generate texture images, and rendering each voxel of the voxel grid by selecting one of the texture planes within the voxel using the central position of the voxel and the 3D camera position, and rendering using the texture images corresponding to the selected texture plane. | 03-07-2013 |
20130057654 | METHOD AND SYSTEM TO SEGMENT DEPTH IMAGES AND TO DETECT SHAPES IN THREE-DIMENSIONALLY ACQUIRED DATA - A method and system analyzes data acquired by image systems to more rapidly identify objects of interest in the data. In one embodiment, z-depth data are segmented such that neighboring image pixels having similar z-depths are given a common label. Blobs, or groups of pixels with a same label, may be defined to correspond to different objects. Blobs preferably are modeled as primitives to more rapidly identify objects in the acquired image. In some embodiments, a modified connected component analysis is carried out where image pixels are pre-grouped into regions of different depth values preferably using a depth value histogram. The histogram is divided into regions and image cluster centers are determined. A depth group value image containing blobs is obtained, with each pixel being assigned to one of the depth groups. | 03-07-2013 |
20130063560 | COMBINED STEREO CAMERA AND STEREO DISPLAY INTERACTION - One embodiment of the present invention provides a system that facilitates interaction between a stereo image-capturing device and a three-dimensional (3D) display. The system comprises a stereo image-capturing device, a plurality of trackers, an event generator, an event processor, and a 3D display. During operation, the stereo image-capturing device captures images of a user. The plurality of trackers track movements of the user based on the captured images. Next, the event generator generates an event stream associated with the user movements, before the event processor in a virtual-world client maps the event stream to state changes in the virtual world. The 3D display then displays an augmented reality with the virtual world. | 03-14-2013 |
20130063561 | VIRTUAL ADVERTISING PLATFORM - In embodiments, a virtual advertising platform may use a three-dimensional mapping algorithm to insert a virtual image within a digital video stream. The virtual advertising platform may apply a three-dimensional mapping algorithm to the virtual digital image, wherein the three-dimensional mapping algorithm causes the virtual digital image to be recomposited within a plurality of frames within a received two-dimensional digital data feed in place of a spatial region within the two-dimensional data feed. The mapping algorithm may enable application of analogous geometric changes to the virtual digital image that are present in the spatial region within the plurality of video frames within the two-dimensional digital video data feed, and may send the recomposited digital data feed for display to a user, wherein the recomposited digital data feed is a virtualized digital data feed that includes the virtual digital image in place of the spatial region. | 03-14-2013 |
20130063562 | METHOD AND APPARATUS FOR OBTAINING GEOMETRY INFORMATION, LIGHTING INFORMATION AND MATERIAL INFORMATION IN IMAGE MODELING SYSTEM - A method and apparatus for obtaining geometry information, material information, and lighting information in an image modeling system are provided. Geometry information, material information, and lighting information of an object may be extracted from a single-view image captured in a predetermined light condition, by applying pixel values defined by a geometry function, a material function, and a lighting function. | 03-14-2013 |
20130063563 | TRANSPROJECTION OF GEOMETRY DATA - Systems and methods for transprojection of geometry data acquired by a coordinate measuring machine (CMM). The CMM acquires geometry data corresponding to 3D coordinate measurements collected by a measuring probe that are transformed into scaled 2D data that is transprojected upon various digital object image views captured by a camera. The transprojection process can utilize stored image and coordinate information or perform live transprojection viewing capabilities in both still image and video modes. | 03-14-2013 |
20130063564 | IMAGE PROCESSOR, IMAGE PROCESSING METHOD AND PROGRAM - Disclosed herein is an image processor including: a first emission section for emitting light at a first wavelength to a subject; a second emission section for emitting light at a second wavelength longer than the first wavelength to the subject; an imaging section for capturing an image of the subject; a detection section for detecting a body region representing at least one of the skin and eyes of the subject based on a first captured image acquired by image capture at the time of emission of the light at the first wavelength and a second captured image acquired by image capture at the time of emission of the light at the second wavelength; a calculation section for calculating viewpoint information; and a display control section for controlling a display mechanism adapted to allow the subject to visually recognize an image as a stereoscopic image. | 03-14-2013 |
20130063565 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION PROCESSING SYSTEM - Provided is an information processing apparatus including a correction unit that corrects at least position coordinates used to specify a position of an enlarged-image group generated in accordance with a scanning method of a microscope that scans a sample in vertical, horizontal, and depth directions and generates an image data group corresponding to the enlarged-image group of the sample, and a stereoscopic image generation unit that generates a stereoscopic image of the enlarged-image group by giving parallax to the corrected image data group. | 03-14-2013 |
20130063566 | DETERMINING A DEPTH MAP FROM IMAGES OF A SCENE - A technique determines a depth measurement associated with a scene captured by an image capture device. The technique receives at least first and second images of the scene, in which the first image is captured using at least one different camera parameter than that of the second image. At least first and second image patches are selected from the first and second images, respectively, the selected patches corresponding to a common part of the scene. The selected image patches are used to determine which of the selected image patches provides a more focused representation of the common part. At least one value is calculated based on a combination of data in the first and second image patches, the combination being dependent on the more focused image patch. The depth measurement of the common part of the scene is determined from the at least one calculated value. | 03-14-2013 |
20130063567 | PORTAL WITH RFID TAG READER AND OBJECT RECOGNITION FUNCTIONALITY, AND METHOD OF UTILIZING SAME - An RFID/object recognition system monitors the passage of an object through a portal into a space. An RFID reader adjacent the portal communicates with an RFID tag within a preselected distance from the RFID reader. A data processor processes data from the RFID reader. A 3-dimensional scanner has an RGB camera and a depth sensor with an infrared laser projector and a monochrome CMOS sensor. An infrared laser controller is electronically coupled with the infrared laser projector, and a monochrome CMOS processor is electronically coupled with the monochrome CMOS sensor. The infrared laser controller, monochrome CMOS processor, and RGB camera are electronically coupled with a processor. The RFID reader receives data from an RFID tag when an RFID-tagged object passes within the preselected distance from the RFID reader through the portal. The 3-dimensional object recognition assembly identifies where the RFID-tagged object is located within the defined space. | 03-14-2013 |
20130063568 | CAMERA SYSTEM COMPRISING COLOR DISPLAY AND PROCESSOR FOR DECODING DATA BLOCKS IN PRINTED CODING PATTERN - A camera system including: a substrate having a coding pattern printed thereon and | 03-14-2013 |
20130063569 | IMAGE-CAPTURING APPARATUS AND IMAGE-CAPTURING METHOD - This invention relates to capturing an image of a subject as a three-dimensional image using a single image-capturing apparatus. The image-capturing apparatus includes a first polarization means, a lens system, and an image-capturing device array having a second polarization means. The first polarization means includes first and second regions arranged along a first direction, and the second polarization means includes multiple third and fourth regions arranged alternately along a second direction. First region transmission light having passed the first region passes the third region and reaches the image-capturing device, and second region transmission light having passed the second region passes the fourth region and reaches the image-capturing device. Thus, an image is captured to obtain a three-dimensional image in which a distance between a barycenter BC | 03-14-2013 |
20130070053 | IMAGE CAPTURING DEVICE AND IMAGE CAPTURING METHOD OF STEREO MOVING IMAGE, AND DISPLAY DEVICE, DISPLAY METHOD, AND PROGRAM OF STEREO MOVING IMAGE - If there is a discrepancy between the capturing system and the display system of a stereoscopic moving image, phenomena such as the contour of a motion area of an image appearing double may occur, degrading the quality of a reproduced image. An image capturing device capturing a stereoscopic moving image includes: an image capturing unit configured to capture a right-eye moving image and a left-eye moving image constituting the stereoscopic moving image, respectively; a unit configured to set an image capturing mode corresponding to the display system of a display device displaying the stereoscopic moving image; and a synchronous signal control unit configured to supply to the image capturing unit a synchronous signal used for capturing the right-eye moving image and the left-eye moving image, respectively, and to control a phase of the supplied synchronous signal in accordance with the set image capturing mode. | 03-21-2013 |
20130070054 | IMAGE PROCESSING APPARATUS, FLUORESCENCE MICROSCOPE APPARATUS, AND IMAGE PROCESSING PROGRAM - A three-dimensional image without luminance irregularity is generated while achieving good contrast. An image processing apparatus is provided, including an image combining portion that generates combined images by combining, for each depth position in a specimen, a plurality of fluorescence images captured with differing exposure levels at each of different depth positions of the specimen; a smoothed-luminance calculating portion that calculates a representative luminance from the individual combined images and that calculates a smoothed luminance for the individual combined images by smoothing the calculated representative luminance in the depth direction; a luminance correcting portion that generates corrected images by correcting the luminance of the individual combined images on the basis of differences between the smoothed luminance and the representative luminance calculated; and a three-dimensional image generating portion that generates a three-dimensional image of the specimen from the plurality of corrected images. | 03-21-2013 |
20130070055 | SYSTEM AND METHOD FOR IMPROVING METHODS OF MANUFACTURING STEREOSCOPIC IMAGE SENSORS - Described herein are methods, systems and apparatus to improve imaging sensor production yields. In one method, a stereoscopic image sensor pair is provided from a manufacturing line. One or more images of a correction pattern are captured by the image sensor pair. Correction angles of the sensor pair are determined based on the images of the correction pattern. The correction angles of the sensor pair are represented graphically in a three dimensional space. Analysis of the graphical representation of the correction angles through statistical processing results in a set of production correction parameters that may be input into a manufacturing line to improve sensor pair yields. | 03-21-2013 |
20130070056 | METHOD AND APPARATUS TO MONITOR AND CONTROL WORKFLOW - A method to monitor a vehicle inspection workflow, includes (i) identifying an inspector in a vehicle inspection area, (ii) determining an inspection procedure being performed by the inspector, the inspection procedure being at least a portion of a vehicle inspection, (iii) determining a workflow rule that corresponds to the inspection procedure, the workflow rule including at least one workflow limit, (iv) generating workflow data by monitoring the inspector within the inspection area, the workflow data being determined from recorded video data, (v) determining whether the inspector violated the workflow rule by comparing the workflow data to the at least one workflow limit of the workflow rule, and (vi) indicating that the inspector did not adequately perform the inspection procedure if it is determined that the inspector violated the workflow rule. | 03-21-2013 |
20130070057 | METHOD AND SYSTEM FOR GENERATING A HIGH RESOLUTION IMAGE - A method for generating an image is provided. The method includes estimating a high resolution image from a plurality of low resolution images and downsampling the estimated high resolution image to obtain estimates of a plurality of low resolution images. The method also includes generating a desired high resolution image based upon comparison of the downsampled low resolution images and the plurality of low resolution images. | 03-21-2013 |
20130070058 | SYSTEMS AND METHODS FOR TRACKING A MODEL - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose. | 03-21-2013 |
20130076860 | THREE-DIMENSIONAL RELATIONSHIP DETERMINATION - Example embodiments disclosed herein relate to determining relationships between locations based on beacon information. At least three sensors of a device can be used to determine locations of a beacon. The device can determine a three-dimensional relationship between the locations. | 03-28-2013 |
20130076861 | METHOD AND APPARATUS FOR PROBING AN OBJECT, MEDIUM OR OPTICAL PATH USING NOISY LIGHT - A method and apparatus for optically probing an object(s) and/or a medium and/or an optical path using noisy light. Applications disclosed include but are not limited to 3D digital camera, detecting material or mechanical properties of optical fiber(s), intrusion detection, and determining an impulse response. In some embodiments, an optical detector is illuminated by a superimposition of a combination of noisy light signals. Various signal processing techniques are also disclosed herein. | 03-28-2013 |
20130076862 | Image Acquiring Device And Image Acquiring System - An image acquiring device comprises a first camera 14 for acquiring video images, consisting of frame images continuous in time series, a second camera 15 being in a known relation with the first camera and used for acquiring two or more optical spectral images of an object to be measured, and an image pickup control device 21, and in the image acquiring device, the image pickup control device is configured to extract two or more feature points from one of the frame images, to sequentially specify the feature points in the frame images continuous in time series, to perform image matching between the frame images regarding the frame images corresponding to the two or more optical spectral images based on the feature points, and to synthesize the two or more optical spectral images according to the condition obtained by the image matching. | 03-28-2013 |
20130076863 | SURGICAL STEREO VISION SYSTEMS AND METHODS FOR MICROSURGERY - Surgical stereo vision systems and methods for microsurgery are described that enable hand-eye collocation, high resolution, and a large field of view. A digital stereo microscope apparatus, an operating system with a digital stereo microscope, and a method are described using a display unit located over an area of interest such that a human operator places hands, tools, or a combination thereof in the area of interest and views a magnified and augmented live stereo view of the area of interest with the eyes of the human operator substantially collocated with the hands of the human operator. | 03-28-2013 |
20130076864 | LIQUID CRYSTAL DISPLAY DEVICE - A liquid crystal display device for carrying out a | 03-28-2013 |
20130076865 | POSITION/ORIENTATION MEASUREMENT APPARATUS, PROCESSING METHOD THEREFOR, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - A position/orientation measurement apparatus holds a three-dimensional shape model of a object, acquires approximate value indicating a position and an orientation of the object, acquires a two-dimensional image of the object, projects a geometric feature of the three-dimensional shape model on the two-dimensional image based on the approximate value, calculates the direction of the geometric feature of the three-dimensional shape model projected on the two-dimensional image, detects an image feature based on the two-dimensional image, calculates the direction of the image feature, associates the image feature and the geometric feature by comparing the direction of the image feature calculated based on the two-dimensional image and the direction of the geometric feature calculated based on the three-dimensional shape model, and calculates the position and orientation of the object by correcting the approximate value based on the distance between the geometric feature and the image feature associated therewith. | 03-28-2013 |
20130083164 | ACTIVE 3D TO PASSIVE 3D CONVERSION - Various arrangements for using an active 3D signal to create passive 3D images are presented. An active 3D frame may be received. The active 3D frame may comprise a first perspective image and a second perspective image. The first perspective image may be representative of a different perspective than the second perspective image. The first perspective image may be tinted with a first color. The second perspective image may be tinted with a second color different from the first color. The first perspective image tinted with the first color may be displayed. The second perspective image tinted with the second color may be displayed. | 04-04-2013 |
20130083165 | APPARATUS AND METHOD FOR EXTRACTING TEXTURE IMAGE AND DEPTH IMAGE - Provided are an apparatus and method for extracting a texture image and a depth image. The apparatus of the present invention may project a pattern image onto a target object, may capture a scene image in which the pattern image is reflected from the target object, and may simultaneously extract the texture image and the depth image using the scene image. | 04-04-2013 |
20130083166 | IMAGING APPARATUS AND METHOD FOR CONTROLLING SAME - An imaging element includes a plurality of photoelectric conversion units that output an image signal for each pixel through a micro lens. An imaging signal processing circuit separates image signals output from the imaging element into a left-eye image signal and a right-eye image signal. An image combining circuit generates combined image data by performing arithmetic average processing for left-eye image data and right-eye image data. A recording medium control I/F unit controls to record left-eye image data and right-eye image data for use in 3D display and combined image data for use in 2D display in different regions in an image file. | 04-04-2013 |
20130083167 | PROJECTOR APPARATUS AND VIDEO DISPLAY METHOD - There is provided a projector apparatus including a terminal unit supplied with video data output by a source apparatus, a video projection processing unit that generates a projection video based on the video data and projects the generated projection video through a projection lens, a distance detection unit that detects a distance to a display surface on which the projection video projected through the projection lens is displayed, a projection angle detection unit that detects a projection angle of the projection video projected through the projection lens, and a control unit that calculates a display size of the projection video on the display surface based on the distance detected by the distance detection unit and the projection angle detected by the projection angle detection unit and transmits the calculated display size as data regarding the display capability of the projector apparatus from the terminal unit to the source apparatus. | 04-04-2013 |
20130088573 | METHODS FOR CONTROLLING SCENE, CAMERA AND VIEWING PARAMETERS FOR ALTERING PERCEPTION OF 3D IMAGERY - Mathematical relationships between the scene geometry, camera parameters, and viewing environment are used to control stereography to obtain various results influencing the viewer's perception of 3D imagery. The methods may include setting a horizontal shift, convergence distance, and camera interaxial parameter to achieve various effects. The methods may be implemented in a computer-implemented tool for interactively modifying scene parameters during a 2D-to-3D conversion process, which may then trigger the re-rendering of the 3D content on the fly. | 04-11-2013 |
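As a rough illustration of the kind of scene/camera relationship such a 2D-to-3D tool manipulates (not the patent's own equations — the parallel-rig model, units, and all names here are assumptions), the residual screen parallax of a point after converging the rig at a chosen distance can be sketched as:

```python
def screen_parallax(depth, interaxial, focal_len, conv_dist):
    """Residual sensor parallax for a parallel stereo rig after the
    horizontal image shift that converges at conv_dist (same length
    units throughout). Points at depth == conv_dist fall on the
    screen plane and get zero parallax."""
    return focal_len * interaxial * (1.0 / conv_dist - 1.0 / depth)
```

With a 65 mm interaxial, 35 mm focal length, and convergence at 2 m, a point at 4 m yields roughly 0.57 mm of positive (uncrossed) parallax on the sensor in this model.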
20130088574 | Detective Adjusting Apparatus for Stereoscopic Image and Related Method - A detective adjusting apparatus for a stereoscopic image and a related method are disclosed in the present invention. The apparatus includes an image capturing device, an image processing device, a stereoscopic display and an image analyzing/displaying device. The present invention utilizes the method to calculate an angle between the eyes' position and a central position according to the face position, to determine the stereoscopic image by analyzing parameters of the stereoscopic image and the stereoscopic display, and to display the continuous stereoscopic image having at least two viewpoints, so that a viewer can watch the stereoscopic image clearly. The present invention is applied to an optical grating or an auto-stereoscopic screen made of lenses. | 04-11-2013 |
20130088575 | METHOD AND APPARATUS FOR OBTAINING DEPTH INFORMATION USING OPTICAL PATTERN - Provided is an apparatus and method for obtaining depth information using an optical pattern. The apparatus for obtaining depth information using the optical pattern may include: a pattern projector to generate the optical pattern using a light source and an optical pattern projection element (OPPE), and to project the optical pattern towards an object area, the OPPE comprising a pattern that includes a plurality of pattern descriptors; an image obtaining unit to obtain an input image by photographing the object area; and a depth information obtaining unit to measure a change in a position of at least one of the plurality of pattern descriptors in the input image, and to obtain depth information of the input image based on the change in the position. | 04-11-2013 |
20130088576 | OPTICAL TOUCH SYSTEM - An image system comprises a light source, an image sensing device, and a computing apparatus. The light source is configured to illuminate an object comprising at least one portion. The image sensing device is configured to generate a picture comprising an image. The image is produced by the object and comprises at least one part corresponding to the at least one portion of the object. The computing apparatus is configured to determine an intensity value representing the at least one part and to determine at least one distance between the at least one portion and the image sensing device using the intensity value and a dimension of the at least one part of the image. | 04-11-2013 |
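One plausible ingredient of such an intensity-based range estimate is the inverse-square fall-off of reflected light. The sketch below is a generic reconstruction under that assumption (the calibration-reference approach and names are invented here; the abstract's method also uses the dimension of the image part):

```python
import math

def distance_from_intensity(intensity, ref_intensity, ref_distance):
    """Inverse-square-law range estimate: reflected intensity falls off
    as 1/d**2, so a patch returning a quarter of the reference intensity
    is taken to be twice the reference distance away."""
    return ref_distance * math.sqrt(ref_intensity / intensity)
```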
20130088577 | MOBILE DEVICE, SERVER ARRANGEMENT AND METHOD FOR AUGMENTED REALITY APPLICATIONS - A mobile device | 04-11-2013 |
20130093850 | IMAGE PROCESSING APPARATUS AND METHOD THEREOF - An image processing method is disclosed. A 2D image is virtually divided into a plurality of blocks. With respect to each block, an optimum contrast value and a corresponding focus step are obtained. An object distance for an image in each block is obtained according to the respective focus step of each block. A depth map is obtained from the object distances of the blocks. The 2D image is synthesized to form a 3D image according to the depth map. | 04-18-2013 |
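The block-wise depth recovery described above can be sketched with the thin-lens equation (a generic reconstruction, not the patent's exact method; the focal length and the conversion of a focus step to a lens-to-sensor distance are assumptions):

```python
def object_distance(focal_len, image_dist):
    """Thin-lens estimate of object distance u from 1/f = 1/u + 1/v."""
    return 1.0 / (1.0 / focal_len - 1.0 / image_dist)

def depth_map(block_image_dists, focal_len=50.0):
    """One object distance per block, from each block's best-focus step
    (here already converted to a lens-to-sensor distance v, in mm)."""
    return [[object_distance(focal_len, v) for v in row]
            for row in block_image_dists]
```

For a 50 mm lens, a block whose best contrast occurs at an image distance of 51 mm maps to an object roughly 2.55 m away.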
20130093851 | IMAGE GENERATOR - An image generator is provided which generates a monitor display image that facilitates easy recognition of a three-dimensional object in an overhead view image. An image generator includes: an overhead view image generation section for generating an overhead view image by performing a projective transformation, with a virtual viewpoint above a vehicle, of an image captured by an on-board camera for capturing an image of a surrounding region of the vehicle; a three-dimensional object detection section for recognizing a three-dimensional object present in the surrounding region and outputting three-dimensional object attribute information showing an attribute of the three-dimensional object; and an image composition section for generating a monitor display image for vehicle driving assistance by performing image composition of a grounding plane mark showing a grounding location of the three-dimensional object with a portion at the grounding location in the overhead view image, based on the three-dimensional object attribute information. | 04-18-2013 |
20130093852 | PORTABLE ROBOTIC DEVICE - A portable robotic device (PRD) as well as related devices and methods are described herein. The PRD includes a 3-D imaging sensor configured to acquire corresponding intensity data frames and range data frames of the environment. An image processing module is configured to identify a matched feature in the intensity data frames, obtain sets of 3-D coordinates representing the matched feature in the range data frames, determine a pose change of the PRD based on the 3-D coordinates, and perform 3-D data segmentation of the range data frames to extract planar surfaces. | 04-18-2013 |
20130093853 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - The present technology relates to an information processing apparatus and an information processing method that can reproduce high-quality stereoscopic images with low delay. | 04-18-2013 |
20130100249 | STEREO CAMERA DEVICE - Degradation of a three-dimensional measurement accuracy of a stereo camera device that takes an image of an object in a wide wavelength band is suppressed. In order to achieve the above object, a stereo camera device includes: a stereo image acquiring part for taking an image of light from the object to acquire a stereo image; a corresponding point searching part for performing a corresponding point search between images constituting the stereo image; a wavelength acquiring part for acquiring a representative wavelength of a wavelength component of the light; a parameter acquiring part for acquiring each parameter value corresponding to the representative wavelength with respect to at least one of camera parameters of the stereo image acquiring part in which the parameter value fluctuates according to the wavelength component of the light; and a three-dimensional information acquiring part for acquiring three-dimensional information on the object from a result of the corresponding point search using the each parameter value. | 04-25-2013 |
20130100250 | Methods and apparatus for imaging of occluded objects from scattered light - In exemplary implementations of this invention, a 3D range camera “looks around a corner” to image a hidden object, using light that has bounced (reflected) off of a diffuse reflector. The camera can recover the 3D structure of the hidden object. | 04-25-2013 |
20130100251 | IMAGE CAPTURING DEVICE AND IMAGE CAPTURING METHOD - The present invention provides an imaging element that includes a first pixel group and a second pixel group, a pickup execution control unit that performs pixel addition by exposing the first pixel group and the second pixel group of the imaging element during the same exposure time in the case of pickup in an SN mode and performs pixel addition by exposing the first pixel group and the second pixel group of the imaging element during different exposure times in the case of pickup in a DR mode, a diaphragm that is arranged in a light path through which the light fluxes which are incident to the imaging element pass, and a diaphragm control unit that, in the case of pickup in the DR mode, sets the diaphragm value of the diaphragm to a value which is greater than that of the case of pickup in the SN mode. | 04-25-2013 |
20130100252 | OBJECT REGION EXTRACTION SYSTEM, METHOD AND PROGRAM - Provided is an object region extraction system, an object region extraction method and an object region extraction program, which can improve non-extraction of an object region in three-dimensional space caused by extraction error of an object region from an image, and also reduce incorrect extraction of an object region in three-dimensional space. | 04-25-2013 |
20130107000 | Scanning Laser Time of Flight 3D Imaging | 05-02-2013 |
20130107001 | DISTANCE ADAPTIVE 3D CAMERA | 05-02-2013 |
20130107002 | IMAGING APPARATUS | 05-02-2013 |
20130107003 | APPARATUS AND METHOD FOR RECONSTRUCTING OUTWARD APPEARANCE OF DYNAMIC OBJECT AND AUTOMATICALLY SKINNING DYNAMIC OBJECT | 05-02-2013 |
20130107004 | STRAIN MEASUREMENT APPARATUS, LINEAR EXPANSION COEFFICIENT MEASUREMENT METHOD, AND CORRECTION COEFFICIENT MEASUREMENT METHOD FOR TEMPERATURE DISTRIBUTION DETECTOR | 05-02-2013 |
20130107005 | IMAGE PROCESSING APPARATUS AND METHOD | 05-02-2013 |
20130107006 | CONSTRUCTING A 3-DIMENSIONAL IMAGE FROM A 2-DIMENSIONAL IMAGE AND COMPRESSING A 3-DIMENSIONAL IMAGE TO A 2-DIMENSIONAL IMAGE | 05-02-2013 |
20130107007 | Constructing a 3-Dimensional Image from a 2-Dimensional Image and Compressing a 3-Dimensional Image to a 2-Dimensional Image | 05-02-2013 |
20130107008 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR CAPTURING IMAGES | 05-02-2013 |
20130107009 | THREE-DIMENSIONAL IMAGE PICKUP APPARATUS, LIGHT-TRANSPARENT UNIT, IMAGE PROCESSING APPARATUS, AND PROGRAM | 05-02-2013 |
20130113885 | THREE-DIMENSIONAL IMAGING USING A SINGLE CAMERA - The attenuation and other optical properties of a medium are exploited to measure a thickness of the medium between a sensor and a target surface. Disclosed herein are various mediums, arrangements of hardware, and processing techniques that can be used to capture these thickness measurements and obtain three-dimensional images of the target surface in a variety of imaging contexts. This includes general techniques for imaging interior/concave surfaces as well as exterior/convex surfaces, as well as specific adaptations of these techniques to imaging ear canals, human dentition, and so forth. | 05-09-2013 |
20130113886 | 3D Image Photographing Apparatus and Method - A three-dimensional (3D) image photographing apparatus including: an image sensor; a 3D shutter which is disposed on the light path of the image light and comprises a first opening and a second opening, and which sequentially executes a first operation of passing a first image light by opening only the first opening, a second operation of blocking both the first opening and the second opening, and a third operation of passing a second image light by opening only the second opening; and a control unit which is electrically connected to the image sensor and the 3D shutter and which synchronizes a vertical synchronization signal which is applied to the image sensor with a starting time of the second operation by controlling the image sensor and the 3D shutter. | 05-09-2013 |
20130113887 | APPARATUS AND METHOD FOR MEASURING 3-DIMENSIONAL INTEROCULAR CROSSTALK - An apparatus and method for measuring 3-dimensional (3D) interocular crosstalk is disclosed. A light sensor detects luminance of a stereoscopic image displayed in a display and outputs a luminance value indicating the detected luminance. A controller calculates 3D interocular crosstalk based on a gray difference and a residual luminance ratio. | 05-09-2013 |
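A common definition of interocular crosstalk consistent with the "residual luminance ratio" wording is the black-level-compensated leakage percentage; this is a hedged sketch of that standard metric, not necessarily the patent's exact formula:

```python
def interocular_crosstalk(l_unintended, l_intended, l_black):
    """Black-level-compensated residual luminance ratio, in percent:
    how much of the unintended eye's image leaks into the eye being
    measured, relative to the intended image's luminance."""
    return 100.0 * (l_unintended - l_black) / (l_intended - l_black)
```

For example, a leakage luminance of 7 cd/m² against an intended 102 cd/m² and a 2 cd/m² black level gives 5% crosstalk.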
20130113888 | DEVICE, METHOD AND PROGRAM FOR DETERMINING OBSTACLE WITHIN IMAGING RANGE DURING IMAGING FOR STEREOSCOPIC DISPLAY - An obstacle determining unit obtains predetermined index values for each of subranges of each imaging range of each imaging unit, compares the index values of the subranges at mutually corresponding positions in the imaging ranges of the different imaging units, and if a difference between the index values in the imaging ranges of the different imaging units is large enough to satisfy a predetermined criterion, determines that the imaging range of at least one of the imaging units contains an obstacle that is close to the imaging optical system of the at least one of the imaging units. | 05-09-2013 |
20130120534 | DISPLAY DEVICE, IMAGE PICKUP DEVICE, AND VIDEO DISPLAY SYSTEM - A display device comprising a storage unit that stores a video signal of a subject, image capturing position information, and shooting direction information, a display unit, a display posture information acquiring unit, a calculating unit that calculates a relative positional relation of a plurality of image capturing positions in a stereoscopic space based on the image capturing position information stored in the storage unit, and calculates a relative directional relation of a plurality of shooting directions in the display direction of the display unit based on the shooting direction information stored in the storage unit and the display direction information acquired by the display posture information acquiring unit, and a control unit that controls the display unit to display a rotated video, obtained by rotating the video of the subject corresponding to the video signal stored in the storage unit, according to the relative directional relation calculated by the calculating unit. | 05-16-2013 |
20130120535 | THREE-DIMENSIONAL IMAGE PROCESSING APPARATUS AND ELECTRIC POWER CONTROL METHOD OF THE SAME - A three-dimensional image processing apparatus and a method of controlling power of the same are provided. The three-dimensional image processing apparatus may include a display, a three-dimensional image filter disposed a prescribed distance from the display to adjust optical paths of the displayed view images, a camera configured to capture an image of a user, an ambient light sensor, and a controller configured to control the view images, the three-dimensional image filter, or the camera. The controller may determine a position of the user based on the captured image and adjust a perceived three-dimensional view of the view images based on the determined position of the user. Moreover, the controller may control an operational state of the camera and of at least one associated process based on the determined position of the user or a detected amount of ambient light. | 05-16-2013 |
20130120536 | Optical Self-Diagnosis of a Stereoscopic Camera System - The present invention relates to a method for the optical self-diagnosis of a camera system and to a camera system for carrying out the method. The method comprises recording stereo images, each obtained from at least two partial images | 05-16-2013 |
20130120537 | SINGLE-LENS 2D/3D DIGITAL CAMERA - A single-lens 2D/3D camera has a light valve placed in relationship to a lens module to control the light beam received by the lens module for forming an image on an image sensor. The light valve has a light valve area positioned in a path of the light beam. The light valve has two or more clearable sections such that only one section is made clear to allow part of the light beam to pass through. By separately making clear different sections on the light valve, a number of images as viewed through slightly different angles can be captured. The clearable sections include a right section and a left section so that the captured images can be used to produce 3D pictures or displays. The clearable sections also include a middle section so that the camera can be used as a 2D camera. | 05-16-2013 |
20130127993 | METHOD FOR STABILIZING A DIGITAL VIDEO - A method for stabilizing an input digital video. Input camera positions are determined for each of the input video frames, and an input camera path is determined representing input camera position as a function of time. A smoothing operation is applied to the input camera path to determine a smoothed camera path, and a corresponding sequence of smoothed camera positions. A stabilized video frame is determined corresponding to each of the smoothed camera positions by: selecting an input video frame having a camera position near to the smoothed camera position; warping the selected input video frame responsive to the input camera position; warping a set of complementary video frames captured from different camera positions than the selected input video frame; and combining the warped input video frame and the warped complementary video frames to form the stabilized video frame. | 05-23-2013 |
20130127994 | VIDEO COMPRESSION USING VIRTUAL SKELETON - Optical sensor information captured via one or more optical sensors imaging a scene that includes a human subject is received by a computing device. The optical sensor information is processed by the computing device to model the human subject with a virtual skeleton, and to obtain surface information representing the human subject. The virtual skeleton is transmitted by the computing device to a remote computing device at a higher frame rate than the surface information. Virtual skeleton frames are used by the remote computing device to estimate surface information for frames that have not been transmitted by the computing device. | 05-23-2013 |
20130127995 | PREPROCESSING APPARATUS IN STEREO MATCHING SYSTEM - A preprocessing apparatus in a stereo matching system is provided. In the preprocessing apparatus, coordinate information of a stereo camera is stored, a new address for each pixel is specified using the coordinate information, and left and right images received from the stereo camera are rectified using the new pixel addresses. | 05-23-2013 |
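Rectification from a precomputed per-pixel address map amounts to a remap operation; this minimal sketch assumes a particular map layout (a list of (source_row, source_col) tuples per output pixel), which is an invention for illustration rather than the patent's format:

```python
def rectify(image, address_map, fill=0):
    """Remap an image with a precomputed per-pixel address map:
    out[y][x] = image[sy][sx] where (sy, sx) = address_map[y][x].
    Out-of-bounds source addresses produce the fill value."""
    h, w = len(image), len(image[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = address_map[y][x]
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = image[sy][sx]
    return out
```

Precomputing the map once per calibration and replaying it per frame keeps the per-frame cost to a single gather pass, which is why such a stage suits hardware preprocessing.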
20130127996 | METHOD OF RECOGNIZING STAIRS IN THREE DIMENSIONAL DATA IMAGE - A method of recognizing stairs in a 3D data image includes an image acquirer that acquires a 3D data image of a space in which stairs are located. An image processor calculates a riser height between two consecutive treads of the stairs in the 3D data image, identifies points located between the two consecutive treads according to the calculated riser height, and detects a riser located between the two consecutive treads through the points located between the two consecutive treads. Then, the image processor calculates a tread depth between two consecutive risers of the stairs in the 3D data image, identifies points located between the two consecutive risers according to the calculated tread depth, and detects a tread located between the two consecutive risers through the points located between the two consecutive risers. | 05-23-2013 |
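The tread/riser extraction can be illustrated by grouping 3-D points into near-horizontal planes by height and differencing consecutive plane heights; this is a simplified sketch, not the patent's algorithm, and the tolerance value and point layout are assumptions:

```python
def tread_heights(points, tol=0.02):
    """Group 3-D points (x, y, z) into near-horizontal planes by z and
    return each plane's mean height, lowest tread first."""
    planes = []  # each entry: [sum_of_z, count]
    for _, _, z in sorted(points, key=lambda p: p[2]):
        if planes and z - planes[-1][0] / planes[-1][1] < tol:
            planes[-1][0] += z
            planes[-1][1] += 1
        else:
            planes.append([z, 1])
    return [s / n for s, n in planes]

def riser_heights(heights):
    """Height of the riser between each pair of consecutive treads."""
    return [b - a for a, b in zip(heights, heights[1:])]
```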
20130127997 | 3D IMAGE PICKUP OPTICAL APPARATUS AND 3D IMAGE PICKUP APPARATUS - An optical apparatus used for a 3D image pickup apparatus for taking two subject images having a disparity by using two lens apparatuses, each of which is directly connectable to an image pickup apparatus, and one image pickup apparatus, the optical apparatus including: a first attaching unit for detachably attaching a first lens apparatus; a second attaching unit for detachably attaching a second lens apparatus; a camera attaching unit for detachably attaching the image pickup apparatus, the image pickup apparatus including an image pickup portion; and a switch unit for alternately switching light rays from the first and second lens apparatuses to guide the light ray to the image pickup apparatus in a state that the first and second lens apparatuses and the image pickup apparatus are connected to the optical apparatus. Intermediate images are formed in the optical apparatus by the first and second lens apparatuses. | 05-23-2013 |
20130127998 | MEASUREMENT APPARATUS, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - A measurement apparatus includes a projection unit configured to project pattern light onto an object to be measured, an imaging unit configured to capture an image of the object to be measured on which the pattern light is projected to acquire a captured image of the object to be measured, a measurement unit configured to measure a position and/or orientation of the object to be measured on the basis of the captured image, a position and orientation of the projection unit, and a position and orientation of the imaging unit, a setting unit configured to set identification resolution of the pattern light using a range of variation in the position and/or orientation of the object to be measured; and a change unit configured to change a pattern shape of the pattern light in accordance with the identification resolution. | 05-23-2013 |
20130135438 | GATE CONTROL SYSTEM AND METHOD - A gate control system includes a system control unit, a gate device, and an image obtaining unit. The system control unit further includes a gate control unit and a determination unit. The image obtaining unit captures images of a scene in an operating area of the gate device and obtains distance information. The gate control unit generates three-dimensional data according to the captured images and the distance information, and the determination unit determines according to the three-dimensional data whether a person has passed through the gate device. The system control unit controls the gate device according to the determination of the determination unit. The disclosure further provides a gate control method. | 05-30-2013 |
20130135439 | STEREOSCOPIC IMAGE GENERATING DEVICE AND STEREOSCOPIC IMAGE GENERATING METHOD - A stereoscopic image generating device includes: a correction parameter calculating unit that calculates correction parameters based on a plurality of pairs of feature points corresponding to the same points on the object, from a first image and a second image of the object; a correction error calculating unit that, for each pair of feature points, corrects the position of the feature point on at least one image, using the correction parameters, and calculates the amount of correction error; a maldistribution degree calculating unit that finds the degree of maldistribution of feature points; a threshold value determining unit that determines a threshold value such that the threshold value is smaller when the degree of maldistribution increases; and a correction unit that, when the amount of correction error is equal to or lower than the threshold value, corrects the position of the object in the images using the correction parameters. | 05-30-2013 |
20130135440 | Aerial Photograph Image Pickup Method And Aerial Photograph Image Pickup Apparatus - The aerial photograph image pickup method comprises a first step of acquiring still images along an outward route and a return route respectively, a second step of preparing a stereo-image from three images adjacent to each other in the advancing direction and preparing another stereo-image by relative orientation of one more set of adjacent images, thereby preparing two sets of stereo-images, a third step of connecting the two sets of stereo-images by using feature points extracted from a portion of an image common to the two sets of stereo-images, a step of connecting all stereo-images in the outward route direction and in the return route direction according to images acquired in the first step by repeating the second and third steps, and a step of selecting common tie points from the images adjacent to each other in the adjacent course and connecting the adjacent stereo-images in the course. | 05-30-2013 |
20130141537 | Methodology For Performing Depth Estimation With Defocused Images Under Extreme Lighting Conditions - A methodology for performing a depth estimation procedure with defocused images under extreme lighting conditions includes a camera device with a sensor for capturing blur images of a photographic target under extreme lighting conditions. The extreme lighting conditions may include over-exposed conditions and/or under-exposed conditions. The camera device also includes a depth generator that performs the depth estimation procedure by utilizing the captured blur images. The depth estimation procedure includes a clipped-pixel substitution procedure to compensate for the extreme lighting conditions. | 06-06-2013 |
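One plausible reading of the clipped-pixel substitution step (an assumption — the abstract does not give the exact rule) is to replace saturated or crushed samples in one blur image with the corresponding samples from the other, so that clipping under extreme lighting does not bias the blur comparison:

```python
def substitute_clipped(pixels_a, pixels_b, lo=0, hi=255):
    """Wherever one blur image is saturated (>= hi) or crushed (<= lo),
    copy the other image's sample at the same position so the clipped
    values do not distort the subsequent blur matching."""
    out_a, out_b = list(pixels_a), list(pixels_b)
    for i, (a, b) in enumerate(zip(pixels_a, pixels_b)):
        if a <= lo or a >= hi:
            out_a[i] = b
        if b <= lo or b >= hi:
            out_b[i] = a
    return out_a, out_b
```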
20130141538 | SYSTEM AND METHOD FOR DEPTH FROM DEFOCUS IMAGING - An imaging system includes a positionable device configured to axially shift an image plane, wherein the image plane is generated from photons emanating from an object and passing through a lens, a detector plane positioned to receive the photons of the object that pass through the lens, and a computer programmed to characterize the lens as a mathematical function, acquire two or more elemental images of the object with the image plane of each elemental image at different axial positions with respect to the detector plane, determine a focused distance of the object from the lens, based on the characterization of the lens and based on the two or more elemental images acquired, and generate a depth map of the object based on the determined distance. | 06-06-2013 |
20130141539 | MONOCULAR STEREOSCOPIC IMAGING DEVICE - The monocular stereoscopic imaging device according to one aspect of the presently disclosed subject matter includes: an imaging optical system including a zoom lens and a diaphragm; a pupil dividing unit configured to divide a light flux having passed through the imaging optical system into multiple light fluxes; an imaging unit configured to receive the multiple light fluxes, so as to continuously acquire a left-eye image and a right-eye image; and a controlling unit configured to control a zoom lens driving unit to move the zoom lens in accordance with an instruction to change the focus distance, and configured to control the diaphragm driving unit to maintain at a substantially constant level the stereoscopic effect of the left-eye image and the right-eye image three-dimensionally displayed on a display unit before and after the zoom lens is moved. | 06-06-2013 |
20130141540 | TARGET LOCATING METHOD AND A TARGET LOCATING SYSTEM - A target locating method and a target locating system. Images of a target area are recorded utilizing recording devices carried by a vehicle. The recorded images of the target area are matched with a corresponding area of a three-dimensional map, which includes transferring a target indicator from the recorded images of the target area to the three-dimensional map of the corresponding target area. The coordinates of the target indicator position are read from the three-dimensional map. The read coordinates of the target indicator position are made available to position-requiring equipment. | 06-06-2013 |
20130141541 | COMPACT CAMERA ACTUATOR AND COMPACT STEREO-SCOPIC IMAGE PHOTOGRAPHING DEVICE - The purpose of the present invention is to provide a compact three-dimensional image photographing device capable of adjusting the angle of view of an object being picked up on the image sensor by moving the two lenses horizontally, left and right, to adjust the space between them. The compact three-dimensional image photographing device of the present invention comprises a case; a first holder and a second holder mounted spaced apart from each other on the left and right sides in the case so that the holders can move in the left and right directions, each of the holders having a compact camera actuator therein; a guide shaft, passing through the first and second holders and thus mounted on the case, for guiding the left and right movements of the first holder and the second holder; and left and right driving portions, mounted respectively on the first holder and the second holder, for moving the first holder and the second holder left and right. | 06-06-2013 |
20130141542 | THREE-DIMENSIONAL OBJECT DETECTION DEVICE AND THREE-DIMENSIONAL OBJECT DETECTION METHOD - A three-dimensional object detection device | 06-06-2013 |
20130147916 | MICROSCOPY SYSTEM AND METHOD FOR CREATING THREE DIMENSIONAL IMAGES USING PROBE MOLECULES | 06-13-2013 |
20130147917 | COMPUTING DEVICE AND HOUSEHOLD MONITORING METHOD USING THE COMPUTING DEVICE - In a household monitoring method using a computing device, the computing device is connected to one or more depth-sensing cameras and an alarm device. The computing device controls the depth-sensing cameras to capture real-time images of monitored areas in front of the depth-sensing cameras. A presence of a person is detected from the images. If the person is detected to be in exigency, the computing device notifies relevant personnel of the exigency. | 06-13-2013 |
20130147918 | STEREO IMAGE GENERATION APPARATUS AND METHOD - There is provided a stereo image generation apparatus which extracts a plurality of sets of feature points from each image of a subject formed in a left-half area and a right-half area captured using a stereo adapter, such that the feature points in each set correspond to the same one of points on the subject. The apparatus then calculates a set of correction parameters based on the sets of feature points and an evaluation value indicating how likely an extracted point is to be a correct feature point with respect to a corresponding feature point. For a given feature point of interest extracted from one of the left-half and right-half areas, the evaluation value is high in a possible shifting area within which image shifting may occur due to distortion caused by the structure of the stereo adapter and the mounting position error of the stereo adapter. | 06-13-2013 |
20130147919 | Multi-View Diffraction Grating Imaging With Two-Dimensional Displacement Measurement For Three-Dimensional Deformation Or Profile Output - Hardware and software methodology is described for three-dimensional imaging in which an optical transmission grating is used to achieve a plurality of views of an area or the entirety of an object imaged with a single digital camera for recording and subsequent processing. Such processing produces three-dimensional data (in terms of element displacement and/or object profile) using only a two-dimensional displacement measurement technique. | 06-13-2013 |
20130147920 | IMAGING DEVICE - The imaging device includes an optical system, an imaging unit, and a control unit. The optical system is configured to include a focus lens. The imaging unit is configured to capture a left-eye subject and a right-eye subject via the optical system. An image captured by the imaging unit includes a left-eye image for the left-eye subject and a right-eye image for the right-eye subject. The control unit is configured to generate a first AF evaluation value for the left-eye image and a second AF evaluation value for the right-eye image. The control unit generates a third AF evaluation value on the basis of the first AF evaluation value and the second AF evaluation value. The control unit controls the drive of the focus lens on the basis of the third AF evaluation value. | 06-13-2013 |
20130147921 | Generation of patterned radiation - Imaging apparatus includes an illumination assembly, including a plurality of radiation sources and projection optics, which are configured to project radiation from the radiation sources onto different, respective regions of a scene. An imaging assembly includes an image sensor and objective optics configured to form an optical image of the scene on the image sensor, which includes an array of sensor elements arranged in multiple groups, which are triggered by a rolling shutter to capture the radiation from the scene in successive, respective exposure periods from different, respective areas of the scene so as to form an electronic image of the scene. A controller is coupled to actuate the radiation sources sequentially in a pulsed mode so that the illumination assembly illuminates the different, respective areas of the scene in synchronization with the rolling shutter. | 06-13-2013 |
20130147922 | STEREO IMAGE PROCESSING APPARATUS AND STEREO IMAGE PROCESSING METHOD - Provided is a stereo image processing apparatus wherein parallax can be calculated with high precision using a window-function shifting unit. | 06-13-2013 |
20130155187 | MOBILE DEVICE CAPTURE AND DISPLAY OF MULTIPLE-ANGLE IMAGERY OF PHYSICAL OBJECTS - Methods and systems for capturing and displaying multiple-angle imagery of physical objects are presented. With respect to capturing, multiple images of an object are captured from varying angles in response to user input. The images are analyzed to determine whether at least one additional image is desirable to allow generation of a visual presentation of the object. The user is informed to initiate capturing of the at least one more image based on the analysis. The additional image is captured in response to second user input. The presentation is generated based on the multiple images and the additional image. For displaying, a visual presentation of an object is accessed, the presentation having multiple images of the object from varying angles. The presentation is presented to the user of a mobile device according to user movement of the device. The user input determines a presentation speed and order of the images. | 06-20-2013 |
20130155188 | THERMAL IMAGING CAMERA WITH COMPASS CALIBRATION - A thermal imaging camera may include an electronic compass that can be calibrated after assembly of the thermal imaging camera. The electronic compass may include a magnetic sensor configured to sense three orthogonal components of a magnetic field. In some examples, the camera includes a processor configured to receive a plurality of measurements from the magnetic sensor as a physical orientation of the magnetic sensor is changed in a three-dimensional space. The processor may generate a plurality of data points from the plurality of measurements and control a display so as to display a simulated three-dimensional plot of the data points. The processor may control the display so the display updates in substantially real-time as new data points are generated by changing the physical orientation of the magnetic sensor. | 06-20-2013 |
20130155189 | OBJECT MEASURING APPARATUS AND METHOD - An exemplary object measuring method includes changing a focal length of a zoom lens in response to a user operation and taking images. The method then displays the images or one of them, determines a selected area, and defines the selected area as representing an object in the image. The method further determines virtual X and Y coordinate differences between a center point of an image and the object in the image. Next, the method calculates the actual differences between the testing device and the object. The method then controls the driving unit to drive the testing device to move a determined distance in an X direction and a determined distance in a Y direction. | 06-20-2013 |
20130155190 | DRIVING ASSISTANCE DEVICE AND METHOD - An exemplary driving assistance method includes obtaining images of a surrounding environment of a vehicle captured by cameras mounted on the vehicle, each of the captured images comprising distance information indicating a distance between the corresponding camera and an object captured by the corresponding camera. Next, the method includes extracting the distance information from the obtained captured images. The method then creates 3D models based on the extracted distance information, coordinates of each pixel of the captured images, and a reference point determined according to the captured images. Further, the method includes controlling display devices to display the created 3D models. | 06-20-2013 |
20130155191 | DEVICE FOR MEASURING THREE DIMENSIONAL SHAPE - A device for measuring three dimensional shape includes a first irradiation unit, a first grating control unit, a second irradiation unit, a second grating control unit, an imaging unit, and an image processing unit. After performance of a first imaging operation, as a single operation among a multiplicity of imaging operations performed by irradiation of a first light pattern of multiply varied phases, a second imaging operation is performed as a single operation among a multiplicity of imaging operations performed by irradiation of a second light pattern of multiply varied phases. After completion of the first imaging operation and the second imaging operation, a shifting or switching operation of the first grating and the second grating is performed simultaneously. | 06-20-2013 |
20130155192 | STEREOSCOPIC IMAGE SHOOTING AND DISPLAY QUALITY EVALUATION SYSTEM AND METHOD APPLICABLE THERETO - A stereoscopic image shooting system including an image shooting module and a score evaluation module is provided. The image shooting module is used for shooting a plurality of multi-view images of an object. The score evaluation module analyzes a plurality of stereoscopic images formed from the multi-view images to calculate a stereoscopic quality score of the stereoscopic images. | 06-20-2013 |
20130155193 | IMAGE QUALITY EVALUATION APPARATUS AND METHOD OF CONTROLLING THE SAME - Autocorrelation coefficients for three dimensions defined by the horizontal direction, the vertical direction, and the time direction of evaluation target moving image data are acquired. A plurality of noise amounts are calculated by executing frequency analysis of the acquired autocorrelation coefficients for the three dimensions and multiplying each frequency analysis result by a visual response function representing the visual characteristic of a spatial frequency or a time frequency. The product of the plurality of calculated noise amounts is calculated as the moving image noise evaluation value of the evaluation target moving image data. | 06-20-2013 |
20130155194 | ANAGLYPHIC STEREOSCOPIC IMAGE CAPTURE DEVICE - The device comprises an aperture stop disc divided into a plurality of mutually exclusive filtering segments comprising a first, a second and a third filtering segment; the third filtering segment is adapted to pass a third portion of the spectrum which is included in the portion of the spectrum passing the first and second filtering segments. | 06-20-2013 |
20130155195 | Method and system for object reconstruction - A system and method are presented for use in object reconstruction. The system comprises an illuminating unit and an imaging unit. | 06-20-2013 |
20130155196 | METHOD AND APPARATUS FOR COMMUNICATING USING 3-DIMENSIONAL IMAGE DISPLAY - Provided is a communication method using a three-dimensional (3D) image display device. In the communication method, motion information is determined using a motion image obtained by photographing a user's motion indicating the user's request in relation to an opposite party, distance information indicating the distance between the moving user and the 3D image display device is determined, and then the user's request is determined based on the motion information and the distance information. | 06-20-2013 |
20130162777 | 3D CAMERA MODULE AND 3D IMAGING METHOD USING SAME - A 3D camera module includes first and second imaging units, a storage unit, a color separation unit, a main processor unit, an image processing unit, a driving unit, an image combining unit and two OIS units. The first and second imaging units capture images of an object or objects from different angles. The color separation unit separates the images into red, green and blue colors. The main processor unit calculates MTF values of the images and determines a shooting mode of the 3D camera module. The image processing unit processes the images to compensate for blurring caused by being out of focus. The driving unit drives the first and second imaging units to optimum focusing positions according to the MTF values. The image combining unit combines the images into a 3D image. The OIS units respectively detect and compensate for shaking of the first and second imaging units. | 06-27-2013 |
20130162778 | MOTION RECOGNITION DEVICE - A motion recognition device capable of recognizing the motion of an object without contact with the object is provided. Further, a motion recognition device that has a simple structure and can recognize the motion of an object regardless of the state of the object is provided. By using a 3D TOF range image sensor in the motion recognition device, information on changes in position and shape is detected easily. Further, information on changes in position and shape of a fast-moving object is detected easily. Motion recognition is performed on the basis of pattern matching. Imaging data used for pattern matching is acquired from a 3D range measuring sensor. Object data is selected from imaging data on an object that changes over time, and motion data is estimated from a time change in selected object data. The motion recognition device performs operation defined by output data generated from the motion data. | 06-27-2013 |
20130162779 | IMAGING DEVICE, IMAGE DISPLAY METHOD, AND STORAGE MEDIUM FOR DISPLAYING RECONSTRUCTION IMAGE - A sub-image extractor extracts a target sub-image from a light field image. A partial area definer defines a predetermined area in the target sub-image as a partial area. A pixel extractor extracts pixels from the partial area, the number of pixels corresponding to the correspondence areas of a generation image. A pixel arranger arranges the extracted pixels in the correspondence areas of the generation image in an arrangement according to the optical path of the optical system which photographs the light field image. Pixels are extracted for all sub-images in the light field image and arranged in the generation image to generate a reconstruction image. | 06-27-2013 |
20130162780 | STEREOSCOPIC IMAGING DEVICE AND SHADING CORRECTION METHOD - Provided is a technique for improving the quality of an image obtained by a pupil-division-type stereoscopic imaging device. First and second images obtained by the stereoscopic imaging device according to the present invention have the shading of an object in a pupil division direction. Therefore, when the first and second images are composed, reference data in which shading is cancelled is generated. The amount of shading correction for the first and second images is determined on the basis of the reference data and shading correction is performed on the first and second images on the basis of the determined amount of shading correction. | 06-27-2013 |
20130169754 | AUTOMATIC INTELLIGENT FOCUS CONTROL OF VIDEO - The invention is directed to systems, methods and computer program products for providing focus control for an image-capturing device. An exemplary method includes capturing an image frame using an image-capturing device and recording video tracking data and gaze tracking data for one or more image frames following the captured image frame and/or for one or more image frames prior to the captured image frame. The exemplary method additionally includes calculating a focus distance and a depth of field based at least partially on the recorded video tracking data and recorded gaze tracking data. The exemplary method additionally includes displaying the captured image frame based on the calculated focus distance and the calculated depth of field. | 07-04-2013 |
20130169755 | SIGNAL PROCESSING DEVICE FOR PROCESSING PLURALITY OF 3D CONTENT, DISPLAY DEVICE FOR DISPLAYING THE CONTENT, AND METHODS THEREOF - A display device includes a plurality of reception units receiving a plurality of content, a storage unit, a plurality of scaler units reducing data sizes of the plurality of content, storing the respective content with the reduced data sizes in the storage unit, and reading the respective content stored in the storage unit according to an output timing, a plurality of frame rate conversion units converting frame rates of the respective read content, and a video output unit combining and displaying the respective content output from the plurality of frame rate conversion units. Accordingly, the resources can be minimized. | 07-04-2013 |
20130169756 | DEPTH SENSOR, METHOD OF CALCULATING DEPTH IN THE SAME - A depth calculation method of a depth sensor includes outputting a modulated light to a target object, detecting four pixel signals from a depth pixel based on a reflected light reflected by the target object, determining whether each of the four pixel signals is saturated based on results of comparing a magnitude of each of the four pixel signals with a threshold value, and calculating depth to the target object based on the determination result. | 07-04-2013 |
20130169757 | IMAGE PICKUP APPARATUS THAT DETERMINES SHOOTING COMPOSITION, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM - An image pickup apparatus capable of generating image signals for viewing images shot in a composition (vertical or horizontal) intended by a photographer as a three-dimensional image. The apparatus has an image pickup device for converting an optical image to a picked-up image signal as an electric signal. The device includes a plurality of unit pixels, each of which has a plurality of photo diodes for converting the optical image to the picked-up image signal. When an image pickup operation is performed, a posture of the image pickup apparatus is determined, and the plurality of photo diodes in each unit pixel are grouped into a plurality of photo diode groups according to a result of the determination. A plurality of image signals are generated from picked-up image signals output from the photo diode groups, respectively. | 07-04-2013 |
20130176396 | BROADBAND IMAGER - A broadband imager, which is able to image both IR and visible light, is disclosed. In one embodiment, an IR sensitive region of an IR pixel underlies the R, G, B sensitive regions of R, G, and B visible pixels. Therefore, the IR pixel receives IR light through a same surface area of the photosensor through which the R, G, and B pixels receive visible light. However, the IR light generates electron-hole pairs deeper below the common surface area shared by the RGB and IR pixels, than visible light. The photosensor also has a charge accumulation region for accumulating charges generated in the IR sensitive region and an electrode above the charge accumulation region for providing a voltage to accumulate the charges generated in the IR pixel. | 07-11-2013 |
20130176397 | Optimized Stereoscopic Camera for Real-Time Applications - A method is provided for an optimized stereoscopic camera with low processing overhead, especially suitable for real-time applications. By constructing a viewer-centric and scene-centric model, the mapping of scene depth to perceived depth may be defined as an optimization problem, for which a solution is analytically derived based on constraints to stereoscopic camera parameters including interaxial separation and convergence distance. The camera parameters may thus be constrained prior to rendering to maintain a desired perceived depth volume around a stereoscopic display, for example to ensure user comfort or provide artistic effects. To compensate for sudden scene depth changes due to unpredictable camera or object movements, as may occur with real-time applications such as video games, the constraints may also be temporally interpolated to maintain a linearly corrected and approximately constant perceived depth range over time. | 07-11-2013 |
20130176398 | Display Shelf Modules With Projectors For Displaying Product Information and Modular Shelving Systems Comprising the Same - Modular shelving systems and display shelves for modular shelving systems are disclosed. In one embodiment, a modular shelving system includes a shelf support frame comprising a back plane portion and a base portion. At least one display shelf module is removably coupled to the back plane portion of the shelf support frame such that the display shelf module is vertically and horizontally positionable on the back plane portion of the shelf support frame. The display shelf module may include top and bottom panels, and side panels that define an interior volume. A display panel may be affixed to a front of the display shelf module. A projector may be disposed in the interior volume of the display shelf module. The projector projects an optical signal onto a rear surface of the display panel such that image data is visible on a front surface of the display panel. | 07-11-2013 |
20130176399 | SYSTEM AND METHOD FOR CREATING A THREE-DIMENSIONAL IMAGE FILE - A system includes a light source configured to illuminate a subject with a pattern, a first optical sensor configured to capture a first pattern image and a first object image of the subject, and a processing unit configured to determine elevation data of the subject based on the first pattern image and create a three-dimensional image file based on the first object image and the elevation data of the subject. | 07-11-2013 |
20130182075 | GEOSPATIAL AND IMAGE DATA COLLECTION SYSTEM INCLUDING IMAGE SENSOR FOR CAPTURING 3D GEOSPATIAL DATA AND 2D IMAGE DATA AND RELATED METHODS - A geospatial and image data collection system includes a laser source configured to direct laser radiation toward a geospatial area, and an image sensor. The image sensor is configured to be operable in a first sensing mode to sense reflected laser radiation from the geospatial area representative of three dimensional (3D) geospatial data, and a second sensing mode to sense ambient radiation from the geospatial area representative of two dimensional (2D) image data. In addition, a controller is configured to operate the image sensor in the first and second sensing modes to generate the 3D geospatial data and 2D image data registered therewith. | 07-18-2013 |
20130182076 | ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF - According to one embodiment, an apparatus includes an output module configured to output a video signal, a display configured to display a video based on the video signal on a screen, an image capture module configured to capture an image of an observer and to output image data, a recognition module configured to perform facial recognition or recognition of left-eye and right-eye regions from the image data, a presentation module configured to present a left-eye image displayed on the screen to a left eye and to present a right-eye image displayed on the screen to a right eye based on a recognition result of the recognition module, and a controller configured to inhibit the facial recognition by the recognition module when the recognition module fails in the facial recognition. | 07-18-2013 |
20130182077 | ENHANCED CONTRAST FOR OBJECT DETECTION AND CHARACTERIZATION BY OPTICAL IMAGING - Enhanced contrast between an object of interest and background surfaces visible in an image is provided using controlled lighting directed at the object. Exploiting the falloff of light intensity with distance, a light source (or multiple light sources), such as an infrared light source, can be positioned near one or more cameras to shine light onto the object while the camera(s) capture images. The captured images can be analyzed to distinguish object pixels from background pixels. | 07-18-2013 |
20130182078 | STEREOSCOPIC IMAGE DATA CREATING DEVICE, STEREOSCOPIC IMAGE DATA REPRODUCING DEVICE, AND FILE MANAGEMENT METHOD - Conventional methods have been unable to display stereoscopic images that are safe and have a high degree of freedom, because only one type of maximum parallax and one type of minimum parallax are transmitted with a 3-dimensional image. This stereoscopic image data creating device, stereoscopic image data reproducing device, and file management method are characterized by: multiplexing 3D information that includes a plurality of sets of image data corresponding to each of a plurality of points of view, a first maximum parallax as the maximum value of a parallax determined geometrically from a mechanism of the imaging portion, a first minimum parallax representing a parallax at the position of a subject at the closest distance from the imaging portion as the limit of the suitable parallax range of the mechanism of the imaging portion, a second maximum parallax as the maximum value of the parallax of the actually generated stereoscopic image, and a second minimum parallax as the minimum value of the parallax of the actually generated stereoscopic image; handling the data as a single set of stereoscopic image data; and determining, using the 3D information, whether the parallax can be adjusted, whether a stereoscopic image can be displayed, and the like, to allow a stereoscopic image to be displayed more safely and agreeably. | 07-18-2013 |
20130188017 | Instant Calibration of Multi-Sensor 3D Motion Capture System - A method for instantly determining the mutual geometric positions and orientations between a plurality of 3D motion capture sensors has three or more reference markers mounted fixedly relative to each other on substantially one single plane which are sensed by each sensor. Said method enables said sensors to cooperate as a larger sensing system for 3D motion capture applications without requiring said sensors to be mounted rigidly relative to each other. | 07-25-2013 |
20130188018 | SYSTEM & METHOD FOR PROCESSING STEREOSCOPIC VEHICLE INFORMATION - A stereoscopic measurement system determines relative location of a point on an object based on a stereo image pair of the object. The system comprises an image capture device for capturing a stereo image pair of the object, the image pair comprising a first image and a second image of the object. The system comprises a processing system configurable to designate a first point and a second point on the first image, designate the first point and the second point on the second image, define stereo points based on the designated points, and to calculate a distance between the stereo points. | 07-25-2013 |
20130188019 | System and Method for Three Dimensional Imaging - A method of operating a camera with a microfluidic lens to identify a depth of an object in image data generated by the camera has been developed. The camera generates an image with the object in focus, and a second image with the object out of focus. An image processor generates a plurality of blurred images from image data of the focused image, and identifies blur parameters that correspond to the object in the second image. The depth of the object from the camera is identified with reference to the blur parameters. | 07-25-2013 |
20130188020 | METHOD AND DEVICE FOR DETERMINING DISTANCES ON A VEHICLE - A method for determining distances for chassis measurement of a vehicle having a body and at least one wheel includes determining a center of rotation of a wheel of the vehicle by projecting a structured light pattern at least onto the wheel, recording a light pattern reflected by the wheel using a calibrated imaging sensor system, determining a 3D point cloud from the reflected light pattern, and determining the center of rotation of the wheel from the 3D point cloud. The method also includes determining a point on the body by evaluating the previously determined 3D point cloud or by evaluating a plurality of grey-scale images recorded under unstructured illumination. A height level is determined as a vertical distance between the center of rotation of the wheel and the point on the body. | 07-25-2013 |
20130188021 | MOBILE TERMINAL AND CONTROL METHOD THEREOF - A mobile terminal and a control method of the mobile terminal are disclosed. A mobile terminal according to the present invention comprises a memory storing captured stereoscopic images; a display displaying the stored stereoscopic images; and a controller obtaining at least one item of information about a user viewing the display, changing attributes of the stereoscopic images according to the obtained information, and displaying the attribute-changed images. By changing attributes of stereoscopic images according to obtained user information and displaying the attribute-changed images, the present invention can provide stereoscopic images optimized for a user watching a display apparatus. | 07-25-2013 |
20130194387 | IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS AND IMAGE-PICKUP APPARATUS - The image processing method includes acquiring parallax images produced by image capturing, performing position matching of the parallax images to calculate difference between the parallax images, and deciding, in the difference, an unnecessary image component different from an image component corresponding to the parallax. The method is capable of accurately deciding the unnecessary image component included in a captured image without requiring image capturing multiple times. | 08-01-2013 |
20130194388 | IMAGING UNIT OF A CAMERA FOR RECORDING THE SURROUNDINGS - An imaging unit of a camera for recording the surroundings has an image sensor with a lens for imaging the surroundings on the image sensor. The image sensor and the lens are held by a carrier. The camera additionally has a circuit board, and at least the signal and supply lines of the image sensor are arranged on the carrier. The image sensor is mounted on a carrier substrate which, similar to the lens, is arranged on the carrier at a distance from the circuit board, and has a flexible electrical connection to the circuit board. | 08-01-2013 |
20130201285 | 3-D GLASSES WITH ILLUMINATED LIGHT GUIDE - 3-D or other glasses with an illuminated light guide for tracking with a camera are presented. The light guide can be a light guide plate (LGP) or other transparent and/or translucent material that conveys light through its interior. LEDs embedded within the frames illuminate the light guide and optionally can be dimmed or brightened depending on ambient lighting. | 08-08-2013 |
20130201286 | CONFIGURABLE ACCESS CONTROL SENSING DEVICE - An access control device comprises at least one transit authorization request device, such as an ID sensor activated by an access card or badge, or a biometric sensor (fingerprint, retina), said transit authorization request device to be activated by a person requesting authorization to pass through a passageway or doorway, and a presence detection and tracking device for detecting the presence of a person in the vicinity of said passageway or doorway and for tracking the movement of a person within or through said passageway or doorway. According to the invention, the access control device further comprises a control unit configured for assigning a virtual transit ticket to a person after authorization to pass through said passageway or doorway has been granted to said person, said virtual transit ticket being representative of the transit privileges granted to said person, i.e. the privileges regarding the transit direction through said passageway or doorway, and for controlling said presence detection and tracking device to track the movement of the person with the virtual transit ticket with respect to the granted transit privileges. Said control unit comprises a processing module with a configurable decision table for generating an output control signal based on an output signal of said at least one transit authorization request device and an output signal of said presence detection and tracking device, said output control signal to be used for controlling the passage of persons through the passageway or doorway. | 08-08-2013 |
20130201287 | THREE-DIMENSIONAL MEASUREMENT SYSTEM AND METHOD - A three-dimensional measurement system includes a projector. | 08-08-2013 |
20130201288 | HIGH DYNAMIC RANGE & DEPTH OF FIELD DEPTH CAMERA - In order to maximize the dynamic range and depth of field for a depth camera used in a time of flight system, the light source is modulated at a plurality of different frequencies, a plurality of different peak optical powers, a plurality of integration subperiods, a plurality of lens foci, aperture and zoom settings during each camera frame time. The different sets of settings effectively create subrange volumes of interest within a larger aggregate volume of interest, each having their own frequency, peak optical power, lens aperture, lens zoom and lens focus products consistent with the distance, object reflectivity, object motion, field of view, etc. requirements of various ranging applications. | 08-08-2013 |
20130201289 | LOW PROFILE DEPTH CAMERA - One or more angled or curved, diverging light pipes or reflectors are placed in a light source's (e.g. a diode's) emission path at appropriate distances, angles and divergence, such that the diode's emission spot size is modified and/or redirected from the diode's natural emission path to alternative planes at an angle to that path, so that a safe emission spot size can be achieved on any plane at an angle to the original natural emission path at minimum distances from the diode's point of emission. | 08-08-2013 |
20130201290 | OCCUPANCY SENSOR AND ASSOCIATED METHODS - A device to detect occupancy of an environment includes a sensor to capture video frames from a location in the environment. The device may compare rules with data using a rules engine. A microcontroller may include a processor and memory to produce results indicative of a condition of the environment. The device may also include an interface through which the data is accessible. The device may generate results respective to the location in the environment. The microcontroller may be in communication with a network. The video frames may be concatenated to create an overview to display the video frames substantially seamlessly respective to the location in which the sensor is positioned. The overview may be viewable using the interface, and the results of the analysis performed by the rules engine may be accessible using the interface. | 08-08-2013 |
20130208091 | AMBIENT LIGHT ALERT FOR AN IMAGE SENSOR - An image camera component and its method of operation are disclosed. The image camera component detects when ambient light within a field of view of the camera component interferes with operation of the camera component to correctly identify distances to objects within the field of view of the camera component. Upon detecting a problematic ambient light source, the image camera component may cause an alert to be generated so that a user can ameliorate the problematic ambient light source. | 08-15-2013 |
20130208092 | SYSTEM FOR CREATING THREE-DIMENSIONAL REPRESENTATIONS FROM REAL MODELS HAVING SIMILAR AND PRE-DETERMINED CHARACTERISTICS - A particular subject of the invention is the creation of three-dimensional representations from real models having similar and predetermined characteristics, using a system comprising a support suitable for receiving such a real object, configured to present the object in a pose similar to that of its use; an image acquisition device configured to obtain at least two distinct images of the real object from at least two separate viewpoints; and a data processing device configured to receive these images, to clip a representation of the real object in each of these images in order to obtain at least two textures of the real object, to obtain a generic three-dimensional model of the real object, and to create a three-dimensional model of the real object from the textures and the generic three-dimensional model obtained. | 08-15-2013 |
20130208093 | SYSTEM FOR REDUCING DEPTH OF FIELD WITH DIGITAL IMAGE PROCESSING - An electronic device may have a camera module. The camera module may capture images having an initial depth of field. The electronic device may receive user input selecting a focal plane and an effective f-stop for use in producing a modified image with a reduced depth of field. The electronic device may include image processing circuitry that selectively blurs various regions of a captured image, with each region being blurred to an amount that varies with distance to the user selected focal plane and in response to the user selected effective f-stop (e.g., a user selected level of depth of field). | 08-15-2013 |
20130208094 | APPARATUS AND METHOD FOR PROVIDING THREE DIMENSIONAL MEDIA CONTENT - A system that incorporates teachings of the exemplary embodiments may include, for example, means for generating a disparity map based on a depth map, means for determining accuracy of pixels in the depth map where the determining means identifies the pixels as either accurate or inaccurate based on a confidence map and the disparity map, and means for providing an adjusted depth map where the providing means adjusts inaccurate pixels of the depth map using a cost function associated with the inaccurate pixels. Other embodiments are disclosed. | 08-15-2013 |
20130208095 | STEREO CAMERA WITH AUTOMATIC CONTROL OF INTEROCULAR DISTANCE BASED ON LENS SETTINGS - A stereographic camera system and method of operating a stereographic camera system are disclosed. The stereographic camera system may include a left camera and a right camera including respective lenses having a focal length and a focus distance, an interocular distance mechanism to set an interocular distance between the left and right cameras, and a controller. The controller may receive inputs indicating the focal length and the focus distance of the lenses. The controller may control the interocular distance mechanism, based on the focal length and focus distance of the lenses and one or both of a distance to a nearest foreground object and a distance to a furthest background object, to automatically set the interocular distance such that a maximum disparity of a stereographic image captured by the left and right cameras does not exceed a predetermined maximum disparity. | 08-15-2013 |
20130215228 | METHOD AND APPARATUS FOR ROBUSTLY COLLECTING FACIAL, OCULAR, AND IRIS IMAGES USING A SINGLE SENSOR - The present invention relates to a method and apparatus for standoff facial and ocular acquisition. Embodiments of the invention address the problems of atmospheric turbulence, defocus, and field of view in a way that minimizes the need for additional hardware. One embodiment of a system for acquiring an image of a facial feature of a subject includes a single wide field of view sensor configured to acquire a plurality of images over a large depth of field containing the subject and a post processor coupled to the single sensor and configured to synthesize the image of the facial feature from the plurality of images. | 08-22-2013 |
20130215229 | REAL-TIME COMPOSITING OF LIVE RECORDING-BASED AND COMPUTER GRAPHICS-BASED MEDIA STREAMS - The present disclosure relates to providing composite media streams based on a live recording of a real scene and a synchronous virtual scene. A method for providing composite media streams comprises recording a real scene using a real camera; creating a three-dimensional representation of the real scene based on the recording; synchronizing a virtual camera inside a virtual scene with the real camera; generating a representation of the virtual scene using the virtual camera; compositing at least the three-dimensional representation of the real scene with the representation of the virtual scene to generate a media stream; and providing the media stream at the real camera. A camera device, a processing pipeline for compositing media streams, and a system are defined. | 08-22-2013 |
20130215230 | Augmented Reality System Using a Portable Device - A system and a method are disclosed for capturing real world objects and reconstructing a three-dimensional representation of real world objects. The position of the viewing system relative to the three-dimensional representation is calculated using information from a camera and an inertial motion unit. The position of the viewing system and the three-dimensional representation allow the viewing system to move relative to the real objects and enables virtual content to be shown with collision and occlusion with real world objects. | 08-22-2013 |
20130215231 | LIGHT FIELD IMAGING DEVICE AND IMAGE PROCESSING DEVICE - This image capture device includes: an image sensor | 08-22-2013 |
20130215232 | STEREO IMAGE PROCESSING DEVICE AND STEREO IMAGE PROCESSING METHOD - An image segmenting unit ( | 08-22-2013 |
20130222543 | METHOD AND APPARATUS FOR GENERATING DEPTH INFORMATION FROM IMAGE - An apparatus for generating depth information includes a sensing unit and a final depth providing unit. The sensing unit is configured to sense light received from multiple subjects, and to provide initial depth data having distance information about the subjects and two-dimensional (2D) image data having 2D image information about an image obtained from the subjects. The final depth providing unit is configured to generate estimated depth data having estimated distance information about the subjects by transforming the 2D image data into three-dimensional (3D) data, and to provide final depth data based on the initial depth data and the estimated depth data. | 08-29-2013 |
20130222544 | Parallel Online-Offline Reconstruction for Three-Dimensional Space Measurement - A measurement apparatus for automatic three-dimensional measurement of space includes a camera sensor array that is configured to generate low-resolution video recordings. The camera sensor array is further configured to automatically generate high-resolution images at geometrically suitable positions in the space. Automatic recording of the high-resolution images is based on a three-dimensional real-time reconstruction of the video recordings. A measurement system includes the measurement apparatus and a corresponding method is implemented for the automatic three-dimensional measurement of the space. | 08-29-2013 |
20130222545 | METHOD FOR CONFIGURING A MONITORING SYSTEM AND CONFIGURABLE MONITORING SYSTEM - The invention relates to a method for configuring a monitoring system which is based on a recording of altitude maps, comprising the following steps: an altitude map of a monitored area is recorded with the monitoring system in a state in which no objects or persons to be detected are located in the monitored area; disruption points caused by obstacles are determined in the altitude map recorded in this way; a detection area is defined as a component area of the altitude map in such a way that all the disruption points, or at least some of said disruption points, lie outside the detection area; an evaluation unit of the monitoring system is set such that it only evaluates movements within the detection area. The invention also relates to a correspondingly configurable monitoring system. | 08-29-2013 |
20130222546 | SOLID-STATE IMAGE PICKUP ELEMENT AND IMAGE PICKUP APPARATUS - A plurality of pixels PD | 08-29-2013 |
20130222547 | ON-CHIP 4D LIGHTFIELD MICROSCOPE - The present invention extends on-chip lensless microscope systems ( | 08-29-2013 |
20130229490 | DIGITAL SIGNAGE SYSTEM AND METHOD FOR DISPLAYING CONTENT ON DIGITAL SIGNAGE - An exemplary digital signage method includes obtaining an image captured by a camera, the image comprising distance information indicating distances between the camera and objects shot by the camera. The method then creates a 3D scene model according to the captured image and the distances between the camera and objects captured by the camera. Next, the method determines whether one or more persons appear in the created 3D scene model. The method then determines a distance between the one or more persons and the digital signage when one or more persons appear in the created 3D scene model, determines the content according to a stored relationship and the determined distance, obtains the determined content, and further controls the at least one display to display the obtained content. | 09-05-2013 |
20130229491 | METHOD OF OPERATING A THREE-DIMENSIONAL IMAGE SENSOR - In a method of operating a three-dimensional image sensor including a light source module according to example embodiments, the three-dimensional image sensor detects a position change of an object by generating a two-dimensional image in a low power standby mode. The three-dimensional image sensor switches a mode from the low power standby mode to a three-dimensional operating mode when the position change of the object is detected in the two-dimensional image. The three-dimensional image sensor performs gesture recognition for the object by generating a three-dimensional image using the light source module in the three-dimensional operating mode. | 09-05-2013 |
20130229492 | METHOD AND APPARATUS FOR AN OPERATING UNIT FOR A HOME APPLIANCE - An offset operating unit for a home appliance includes a camera, an evaluation unit which evaluates picture data generated by the camera for recognition of an operating gesture from a predefined set of operating gestures, wherein the operating gestures serve for operator control of the home appliance, wherein the home appliance includes a control device for function control spaced apart from the operating unit, an interface for wireless data transmission between the operating unit and the control device, wherein information regarding the recognized operating gesture is transmitted to the control device via the interface in a wireless manner, and a housing in which the camera, the evaluation unit and the interface are integrated. | 09-05-2013 |
20130229493 | THREE-DIMENSIONAL CONFOCAL MICROSCOPY APPARATUS AND FOCAL PLANE SCANNING AND ABERRATION CORRECTION UNIT - Provided is a 3-dimensional confocal microscopy apparatus which is manufactured by combining a confocal microscope and an optical tweezers technique, wherein a pair of lenses for focal plane displacement where one lens is movable in the optical axis direction is arranged between a fixed objective lens and a fluorescent light imaging camera, and the 3-dimensional confocal microscopy apparatus also includes a means which corrects the aberration of a fluorescent confocal image obtained by the fluorescent imaging camera. Accordingly, it is possible to provide a 3-dimensional confocal microscopy apparatus which can acquire a 3-dimensional image of a specimen during a manipulation of the specimen using optical tweezers without affecting an optical trap. | 09-05-2013 |
20130235160 | OPTICAL PULSE SHAPING - An embodiment of the invention relates to providing a method of illuminating a scene imaged by a camera, which includes illuminating the scene with a train of light pulses and adjusting exposure times of the camera relative to transmission times of the light pulses so that the light pulses emulate a light pulse having a desired pulse shape. | 09-12-2013 |
20130235161 | System and method for pulsed illumination interferometry - A scanning interferometer for obtaining surface profile data for an object to be scanned in which a carriage-driven focal mechanism moves through a range of predetermined scan positions at which interference fringe images are to be captured while using a high resolution, linear position measurement device attached to the motor-driven carriage in order to identify its precise vertical scan position, and both light pulses are emitted and an image capture device is triggered into simultaneous operation only upon the position measurement device signaling that the focal mechanism has arrived at one of the predetermined scan positions. | 09-12-2013 |
20130242054 | GENERATING HI-RES DEWARPED BOOK IMAGES - Systems and methods for generating high resolution dewarped images for an image of a document captured by a 3D stereo digital camera system, or a mobile phone camera capturing a sequence of images, which may improve OCR performance. Example embodiments include a compact stereo camera with two sensors mounted at fixed locations, and a multi-resolution pipeline to process and to dewarp the images using a three dimensional surface model based on curve profiles of the computed depth map. Example embodiments also include a mobile phone including a camera which captures a sequence of images, and a processor which computes a disparity map using the captured sequence of image frames, computes a model of the at least one document page by generating a cylindrical three dimensional geometric surface using the computed disparity map, and renders a dewarped image from the computed model. | 09-19-2013 |
20130242055 | APPARATUS AND METHOD FOR EXTRACTING DEPTH IMAGE AND TEXTURE IMAGE - Disclosed are a method and an apparatus for acquiring a texture image and a depth image in a scheme for acquiring a depth image based on a pattern image. An apparatus for acquiring a texture image and a depth image may include a pattern image irradiating unit to irradiate, onto a target object, a first pattern image and a second pattern image having a color complementary to a color of the first pattern image, an image taking unit to take a first screen image and a second screen image formed by irradiating the first pattern image and the second pattern image onto the target object, respectively, and an image processing unit to simultaneously extract a texture image and a depth image of the target object in the taken first screen image and the taken second screen image. | 09-19-2013 |
20130250062 | STEREOSCOPIC IMAGE CAPTURE - Stereoscopic image capture is provided. A blur value expected for multiple pixels in left and right images is predicted. The blur value is predicted based on designated capture settings. A disparity value expected for multiple pixels in the left and right images is predicted. The disparity value is predicted based on the designated capture settings. Stressed pixels are identified by comparing the predicted disparity value to a lower bound of disparity value determined from the predicted blur value using a predetermined model. A pixel with predicted disparity value less than the lower bound is identified as a stressed pixel. The predicted disparity is adjusted by modifying the designated capture settings to reduce the number of stressed pixels, or an alert to the presence of stressed pixels is given to the user. | 09-26-2013 |
20130250063 | BABY MONITORING SYSTEM AND METHOD - An exemplary baby monitoring method includes obtaining an image captured by a camera. The image includes distance information indicating distances between the camera and objects captured by the camera. The method then creates a 3D scene model according to the captured image and the distances between the camera and objects captured by the camera. Next, the method determines whether any baby appears in the created 3D scene model, and further determines whether the distances between the babies and electronic appliances are less than a preset value according to the distances between the camera and the babies, the distances between the camera and the electronic appliances, and a stored horizontal field of view of the camera. The method then outputs a warning and cuts off power supply to the electronic appliances whose distance to the one or more babies is less than the preset value. | 09-26-2013 |
20130250064 | DISPLAY APPARATUS FOR CONTROLLING FIELD OF VIEW - A display apparatus for controlling a field of view is provided. When a mode is set to a photograph mode, the display apparatus may control the field of view when photographing, by sensing an input light emitted from an object and transmitted through a first panel on which a field of view control pattern is formed, and a second panel on which an imaging pattern is formed. | 09-26-2013 |
20130250065 | RANGE-FINDING SYSTEM AND VEHICLE MOUNTING THE RANGE-FINDING SYSTEM - A range-finding system includes two imaging devices to capture multiple images from two different viewpoints and a parallax calculator to calculate parallax based on the multiple images captured by the two imaging devices. The parallax calculator includes an estimation unit to estimate a correction value based on the amount of image deviation in a lateral direction corresponding to pixel position in the images captured by the two imaging devices and a correction unit to correct a pre-correction parallax or an image based on the correction value estimated by the estimation unit. | 09-26-2013 |
20130250066 | THREE DIMENSIONAL CAMERA AND PROJECTOR FOR SAME - A 3D imaging apparatus comprising a projector, comprising a laser array comprising a plurality of individual emitters, a mask for providing a structured light pattern, wherein a distance between the laser array and the mask is substantially minimized according to a non-uniformity profile of the plurality of individual emitters and according to a uniformity criterion related to the light intensity distribution across the mask plane, projection optics to image the structured light pattern onto an object, an imaging sensor adapted to capture an image of the object with the structured light pattern projected thereon and a processing unit adapted to process the image to determine range parameters. | 09-26-2013 |
20130258055 | METHOD AND DEVICE FOR GENERATING THREE-DIMENSIONAL IMAGE - A method and a device for generating a three-dimensional image are provided. In the method, in an N | 10-03-2013 |
20130258056 | OPTICAL PATH ADJUSTING DEVICE AND PHOTOGRAPHING APPARATUS INCLUDING THE SAME - An optical adjusting device includes: a rotating unit that includes and rotates about a through-hole through which light passes; at least one moving unit that is movably disposed relative to the rotating unit and is linearly movable between an advance position corresponding to the through-hole and a retreat position outside the through-hole; a transmitting unit that is disposed between the rotating unit and the moving unit and transmits a rotational force of the rotating unit to the moving unit; and an optical unit that is disposed on the moving unit and blocks at least part of light passing through the through-hole. | 10-03-2013 |
20130258057 | IMAGE PROCESSING DEVICE, AUTOSTEREOSCOPIC DISPLAY DEVICE, IMAGE PROCESSING METHOD AND COMPUTER PROGRAM PRODUCT - According to an embodiment, an image processing device includes an acquiring unit and a correcting unit. The acquiring unit is configured to acquire a stereoscopic image containing a plurality of parallax images each having a mutually different parallax. The correcting unit is configured to perform, for each pixel of the stereoscopic image, correction to set a parallax number of a parallax image viewed at a viewpoint position to a first parallax number of a parallax image to be viewed at the viewpoint position by using correction data representing a difference value between the first parallax number and a second parallax number of a parallax image that is actually viewed at the viewpoint position and on which distortion correction for correcting distortion of light beams has been performed. | 10-03-2013 |
20130258058 | IMAGE PROCESSING SYSTEM AND MICROSCOPE SYSTEM INCLUDING THE SAME - An image processing system includes an image acquisition unit, a candidate value estimation unit, a band characteristics evaluation unit, an effective frequency determination unit and a candidate value modification unit. The acquisition unit acquires images. The estimation unit estimates, for each pixel of the images, a candidate value of a 3D shape. The evaluation unit calculates, for each pixel, a band evaluation value of a band included in the images. The determination unit determines an effective frequency of the pixel based on statistical information of the band evaluation value. The modification unit performs data correction or data interpolation for the candidate value based on the effective frequency and calculates a modified candidate value representing the 3D shape. | 10-03-2013 |
20130258059 | THREE-DIMENSIONAL (3D) IMAGE PHOTOGRAPHING APPARATUS AND METHOD - A three-dimensional (3D) image photographing apparatus includes a photographing unit configured to photograph a first photo, and capture an image after the first photo is photographed; a feature extracting unit configured to extract feature points from the first photo and the image, and match the feature points extracted from the first photo to the feature points extracted from the image; a position and gesture estimating unit configured to determine a relationship between a position and a gesture of the 3D image photographing apparatus when the first photo is photographed, and a position and a gesture of the 3D image photographing apparatus when the image is captured, based on the matched feature points, the photographing unit configured to photograph the image as a second photo in response to the relationship satisfying a predetermined condition; and a synthesizing unit configured to synthesize the first and second photos to a 3D image. | 10-03-2013 |
20130258060 | INFORMATION PROCESSING APPARATUS THAT PERFORMS THREE-DIMENSIONAL SHAPE MEASUREMENT, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - Information processing apparatus that performs three-dimensional shape measurement with high accuracy at high speed while taking into account lens distortion of a projection device. An image input unit of an information processing apparatus inputs image data of a measurement object photographed by a camera in a state where a predetermined pattern light is projected by a projector. An association unit calculates associations between coordinates on the image data of the measurement object and coordinates on image data of the predetermined pattern light. A three-dimensional coordinate calculation unit calculates a viewing vector of the camera from which lens distortion thereof has been eliminated and a viewing vector of the projector to which lens distortion thereof has been added. The calculation unit calculates coordinates of a point, in a three-dimensional space, of intersection between the camera viewing vector and the projector viewing vector, for each association. | 10-03-2013 |
20130258061 | STEREOSCOPIC IMAGE INSPECTION DEVICE, STEREOSCOPIC IMAGE PROCESSING DEVICE, AND STEREOSCOPIC IMAGE INSPECTION METHOD - A stereoscopic image inspection device configured to determine that a stereoscopic image or part of the stereoscopic image is horizontally reversed is provided. A depth obtaining unit obtains depth information of the stereoscopic image. An occlusion detection unit detects an occlusion area of the stereoscopic image. A determination unit evaluates image continuity between the occlusion area and adjacent areas adjacent to the occlusion area, and identifies a first area to which the occlusion area belongs among the adjacent areas based on the evaluated image continuity, and determines whether or not the stereoscopic image has a depth contradiction based on a depth position of the first area included in the depth information. | 10-03-2013 |
20130265392 | PLANE-CHARACTERISTIC-BASED MARKERLESS AUGMENTED REALITY SYSTEM AND METHOD FOR OPERATING SAME - A marker-less augmented reality system that may extract a plurality of planes included in an image generated by a camera, based on three-dimensional (3D) information of the image, and may estimate a pose of the camera based on a correspondence among the plurality of planes extracted, and an operating method thereof. | 10-10-2013 |
20130265393 | IMAGE CAPTURE ENVIRONMENT CALIBRATION METHOD AND INFORMATION PROCESSING APPARATUS - In a method of calibrating an image capture environment based on a captured image obtained by capturing an image of a physical space by an image capturing unit that captures the image of the physical space, an image of the physical space is captured using the image capturing unit, an index serving as a reference for calibration is detected from the captured image of the physical space, the position and orientation of the image capturing unit are calculated from the detected index, and image capturing unit unique information, geometric information associated with the physical space, or the relationship between the image capturing unit and the physical space is calibrated using the obtained data. | 10-10-2013 |
20130265394 | 3D STEREOSCOPIC CAMERA MODULE - The present invention relates to a 3D stereoscopic camera module, the module including an upper case, and left and right cameras arranged on the upper case and spaced apart by a predetermined distance, wherein the left and right cameras include at least three or more coils arranged at an inner surface of the upper case, a housing arranged inside the upper case and mounted therein with a lens, at least three or more magnets mounted at the housing to face the coils, and an image sensor converting an optical signal of an image captured by the lens to an electrical signal, whereby a convergence angle can be easily controlled and an excellent short-distanced 3D stereoscopic shooting can be realized. | 10-10-2013 |
20130271572 | ELECTRONIC DEVICE AND METHOD FOR CREATING THREE-DIMENSIONAL IMAGE - An electronic device for creating a three-dimensional image includes a number of image capturing units, an outline detecting unit, a coordinate determining unit, and an image synthesizing unit. Each image capturing unit captures an object located in one corresponding direction of a three-dimensional scene with different focal length and then captures a number of images of the object in each direction. The outline detecting unit detects an outline of the object in each captured image. The coordinate determining unit determines three-dimensional coordinates of each point on the detected outline. The image synthesizing unit synthesizes the detected outlines of objects from the captured images captured in the same direction together according to the three-dimensional coordinates of the outlines, creates a three-dimensional image along each direction with the corresponding synthesized outlines, and stitches the three-dimensional images of different directions together to obtain a combined image. | 10-17-2013 |
20130271573 | METHOD AND APPARATUS FOR DETERMINING THE 3D COORDINATES OF AN OBJECT - In a method for determining the 3D coordinates of an object a pattern ( | 10-17-2013 |
20130271574 | Method And Apparatus For Contactless Data Acquisition In A Vehicle Service System - An improved vehicle service system having at least one pattern-projecting, machine-vision sensor for acquiring images of objects and surfaces during a vehicle service or inspection procedure, and which is configured to process acquired images to identify measurements and/or relative three-dimensional locations associated with the vehicle undergoing service or inspection, vehicle components, surfaces, or objects in the environment surrounding the vehicle. The improved vehicle service system is further configured to utilize the identified measurements and/or relative three-dimensional locations during a vehicle service or inspection procedure. | 10-17-2013 |
20130271575 | Dynamically Controlling an Imaging Microscopy System - System and method for controlling an imaging microscopy system (IMS). A control module may be coupled to an IMS configured to capture an image of a specified region of a specimen based on a specified perspective by controlling the specimen's position and/or orientation relative to an image capture subsystem of the IMS corresponding to the specified perspective. A | 10-17-2013 |
20130271576 | DEVICE CONTROL EMPLOYING THREE-DIMENSIONAL IMAGING - At least a portion of a wellsite is disposed in a control volume of three-dimensional space. At least one camera is configured to provide three-dimensional imaging of the control volume. At least one device is disposed in, or is expected to be moved into, the control volume so that the at least one device is included in the three-dimensional imaging when the at least one device is disposed in the control volume and the at least one camera provides the three-dimensional imaging. In an exemplary embodiment, the at least one camera is connected to a drilling rig. | 10-17-2013 |
20130271577 | INFORMATION PROCESSING APPARATUS AND METHOD - An information processing apparatus includes a model storing unit configured to store a three-dimensional form model for acquiring the position and posture of a measurement target object, an image acquiring unit configured to acquire an image of the measurement target object, a first position and posture acquiring unit configured to acquire a first position and posture of the three-dimensional form model in a first coordinate system on the basis of a first geometric feature of the three-dimensional form model and a first geometric feature within the image, and a second position and posture acquiring unit configured to acquire a second position and posture of the three-dimensional form model in a second coordinate system that is different from the first coordinate system on the basis of a second geometric feature of the three-dimensional form model and a second geometric feature within the image and the first position and posture. | 10-17-2013 |
20130278722 | LENS DOCKING STATION - A lens docking station used for detachably connecting to an electronic device is disclosed, wherein the electronic device includes a first lens module and a first transmission interface. The lens docking station includes a chute, a second lens module, and a second transmission interface. The chute is located next to the first lens module. The second lens module can slide along the chute to a position corresponding to the first lens module. The second transmission interface is used for communicating to the first transmission interface, and for electrically connecting to the second lens module, allowing the electronic device to control the second lens module to capture an image, and obtain the image; whereby the two images captured by the first lens module and the second lens module can be integrated to form a three-dimensional image. | 10-24-2013 |
20130278723 | THREE-DIMENSIONAL MEASUREMENT SYSTEM AND THREE-DIMENSIONAL MEASUREMENT METHOD - A three-dimensional measurement system includes a measurement carrier, first and second projection modules, an image-capturing module, and a control unit. The measurement carrier carries a test object on a measurement plane. The first projection module projects a first patterned structure light onto the test object along a first optical axis that forms a first incident angle relative to the measurement plane, and the second projection module projects a second patterned structure light onto the test object along a second optical axis that forms a second incident angle different from the first incident angle. The image-capturing module captures first and second patterned images formed after reflection of the first and second patterned structure lights from the test object. The control unit controls the first and second projection modules, and measures a three-dimensional shape of the test object according to the first and second patterned images. | 10-24-2013 |
20130278724 | METHOD AND APPARATUS FOR RECOGNIZING THREE-DIMENSIONAL OBJECT - A method and an apparatus for recognizing a three-dimensional object using a light source are provided. The method of recognizing a three-dimensional object of a terminal including a display unit for displaying an operation state of the terminal and a shoot unit for receiving an image includes receiving a first image by setting a first brightness as a brightness of the display unit, receiving a second image by setting a second brightness as the brightness of the display unit, and recognizing the three-dimensional object based on brightness change of a preset part by comparing the second image with the first image. The apparatus and the method for recognizing a three-dimensional object prevent a security function from being incapacitated using a two-dimensional photograph. | 10-24-2013 |
20130278725 | Integrated Structured Light 3D Scanner - A modular, flexible 3D scanner is provided which integrates motion control, data acquisition, data processing and report generation functions in a system having a single user interface for all functions. Control software includes an interface and components to assist a user in creating motion control scripts that are used to move a part through various positions at which images are captured. Analysis software is called from the control software to process data into an accurate 3D rendering of the part, which is compared to a virtual model of the part as designed. A report is generated showing where the measured dimensions of the part vary from the as designed dimensions of the part. The disclosed 3D scanner can be used in conjunction with a CNC machine to provide on-machine inspection to reduce rework, labor and scrap. | 10-24-2013 |
20130278726 | IMAGING SYSTEM USING A LENS UNIT WITH LONGITUDINAL CHROMATIC ABERRATIONS AND METHOD OF OPERATING - A lens unit of an imaging unit features longitudinal chromatic aberration. From an imaged scene, an imaging sensor unit captures non-color-corrected first images of different spectral content. An intensity processing unit computes broadband luminance sharpness information from the first images on the basis of information descriptive for imaging properties of the lens unit. A chrominance processing unit computes chrominance information on the basis of the first images. A synthesizing unit combines the computed chrominance and luminance information to provide an output image having, for example, extended depth-of-field. The imaging unit may be provided, for example, in a camera system or a digital microscope. | 10-24-2013 |
20130286161 | THREE-DIMENSIONAL FACE RECOGNITION FOR MOBILE DEVICES - A mobile device can generate a three-dimensional model of a person's face by capturing and processing a plurality of two-dimensional images. During operation, the mobile device uses an image-capture device to capture a set of images of the person from various orientations as the person or any other user sweeps the mobile device in front of the person's face from one side of his/her face to the opposing side. The device determines orientation information for the captured images, and detects a plurality of features of the person's face from the captured images. The device then generates a three-dimensional model of the person's face from the detected features and their orientation information. The three-dimensional model of the person's face facilitates identifying and/or authenticating the person's identity. | 10-31-2013 |
20130286162 | APPARATUS FOR EXTRACTING IMAGE OBJECT IN 3D IMAGE SYSTEM AND METHOD THEREOF - Disclosed are an apparatus for extracting an image object in a 3D image system and a method thereof. | 10-31-2013 |
20130293678 | VIRTUAL NAVIGATION SYSTEM FOR VIDEO - System and method for adjusting the parameters of an image capturing device, such as a camera, that captures a sequence of images, generates another image from the captured sequence of images using parameters that identify a virtual location, and overlays the generated image with a 3D generated image to which distortion compensation has been applied. | 11-07-2013 |
20130293679 | Upper-Body Skeleton Extraction from Depth Maps - A method for processing data includes receiving a depth map of a scene containing at least an upper body of a humanoid form. The depth map is processed so as to identify a head and at least one arm of the humanoid form in the depth map. Based on the identified head and at least one arm, and without reference to a lower body of the humanoid form, an upper-body pose, including at least three-dimensional (3D) coordinates of shoulder joints of the humanoid form, is extracted from the depth map. | 11-07-2013 |
20130293680 | Arrangement and Method for Timber Rotation - The present invention relates to an arrangement and a method for monitoring the progress of timber rotation before a sawing machine in a sawmill. This aim is reached by providing a separate 3D-rotation scanner which identifies the previously 3D-scanned and measured log and which measures the rotation progress in the log turner in real time. According to the method, the already scanned and measured timber (log) is scanned again with an additional 3D-rotation scanner and the newly scanned position of the timber is compared with the parameters of the previously scanned data, and accordingly the progress of the timber (log) rotation can be evaluated in real time. As a result, it is monitored whether the timber (log) is turned into an optimized position before entering the sawing machine, and if there is a need for correction of the timber's (log's) position, the scanner will instruct the log turner or the operator to do so. | 11-07-2013 |
20130293681 | 2D/3D REAL-TIME IMAGER AND CORRESPONDING IMAGING METHODS - The present invention relates generally to methods and devices for generating an electrical representation of at least one object in a scene in the real world. The detailed real-time imager for the representation of a scene of a real world comprises:—at least an illuminator ( | 11-07-2013 |
20130293682 | IMAGE CAPTURE DEVICE, IMAGE CAPTURE METHOD, AND PROGRAM - Disclosed is an image capture device in which, in an image capture possibility determination unit | 11-07-2013 |
20130300830 | Automatic Detection of Noteworthy Locations - By providing 3D representations of noteworthy locations for comparison with images, the 3D location of the imaging device, as well as the orientation of the device, may be determined. The 3D location and orientation of the imaging device then allow for enhanced navigation in a collection of images, as well as enhanced visualization and editing capabilities. The 3D representations of noteworthy locations may be provided in a database that may be stored locally or remotely with respect to the imaging device or a programmable device processing images obtained from the imaging device. | 11-14-2013 |
20130300831 | CAMERA SCENE FITTING OF REAL WORLD SCENES - A system fits a camera scene to real world scenes. The system receives from an image sensing device an image depicting a scene, a location of the image sensing device in real world coordinates, and locations of a plurality of points in the scene in the real world coordinates. Pixel locations of the plurality of points are determined and recorded. The center of the image is determined, and each pixel in the image is mapped to an angular offset from the center of the image. Vectors are generated. The vectors extend from the image sensing device to the locations of the plurality of points, and the vectors are used to determine a pose of the image sensing device. | 11-14-2013 |
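The mapping from each pixel to an angular offset from the image center, as described in entry 20130300831 above, follows from an ideal pinhole camera model. The sketch below is a generic illustration under that assumption; the focal length, principal point, and function names are hypothetical values chosen for the example, not taken from the publication.

```python
import math

def pixel_to_angular_offset(px, py, cx, cy, focal_px):
    """Map a pixel (px, py) to its angular offset in radians from the
    image center (cx, cy), assuming an ideal pinhole camera whose focal
    length `focal_px` is expressed in pixel units."""
    ax = math.atan2(px - cx, focal_px)  # horizontal angular offset
    ay = math.atan2(py - cy, focal_px)  # vertical angular offset
    return ax, ay

# A pixel at the image center has zero angular offset; a pixel offset
# horizontally by exactly the focal length subtends 45 degrees.
center = pixel_to_angular_offset(320, 240, 320, 240, 800.0)
edge = pixel_to_angular_offset(1120, 240, 320, 240, 800.0)
```

Each such angular offset, combined with the device's known real-world location, defines one of the vectors from the image sensing device toward a scene point that the abstract uses for pose estimation.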
20130300832 | SYSTEM AND METHOD FOR AUTOMATIC VIDEO FILMING AND BROADCASTING OF SPORTS EVENTS - A system for automatically video filming an ongoing sports activity within a field uses an imaging device to continuously capture the entire field and the activities of the players involved in the game, and generate video signals. A position measuring arrangement, including multiple transmitters coupled to the different players, regularly measures the spatial locations of the different players with time, as the game continues, and generates position signals that indicate these spatial positions as functions of time. A data processor is coupled to the imaging device and the position measuring arrangement. The data processor receives the position signals and the video signals, and analyzes the position signals to edit the video signals, and to generate an edited output video content which is delivered to the spectators of the sports activity. | 11-14-2013 |
20130300833 | 3D LOCALISATION MICROSCOPY AND 4D LOCALISATION MICROSCOPY AND TRACKING METHODS AND SYSTEMS - A 3D localisation microscopy system, 4D localisation microscopy system, or an emitter tracking system arranged to cause a phase difference between light passing to or from one part of the objective relative to light passing to or from another part of the objective, to produce a point emitter image which comprises two lobes, a separation between which is related to the position of the emitter relative to the objective of the imaging system, and in the 4D system a further property of which image or of said light to or from the objective is related to another location independent property of the emitter. | 11-14-2013 |
20130300834 | SINGLE-LENS 3-D IMAGING DEVICE USING POLARIZATION CODED APERTURE MASKS COMBINED WITH POLARIZATION SENSITIVE SENSOR - A device and method for three-dimensional (3-D) imaging using a defocusing technique is disclosed. The device comprises a lens, at least one polarization-coded aperture obstructing the lens, a polarization-sensitive sensor operable for capturing electromagnetic radiation transmitted from an object through the lens and the at least one polarization-coded aperture, and a processor communicatively connected with the sensor for processing the sensor information and producing a 3-D image of the object. | 11-14-2013 |
20130300835 | Method and Apparatus to Guarantee Minimum Contrast for Machine Vision System - In one aspect, this disclosure presents a method and apparatus for verifying that minimum object contrast requirements are met within a region representing a volume to be monitored by a machine vision system. In complementary fashion, the disclosure also presents a methodology for constraining the positions of the lighting sources to be used for illuminating the monitored volume at a minimum height above the floor, and for the use of a key light that provides asymmetrical lighting within the monitored volume relative to the camera(s) used for imaging the monitored volume. Correspondingly, the disclosure also presents a method and apparatus for monitoring for proper operation of the key light and responding to improper operation. The minimum contrast verification and key light monitoring operations can be implemented using standalone apparatuses, or can be incorporated into the machine vision system. | 11-14-2013 |
20130300836 | SINGLE-CHIP SENSOR MULTI-FUNCTION IMAGING - Mixed mode imaging is implemented using a single-chip image capture sensor with a color filter array. The single-chip image capture sensor captures a frame including a first set of pixel data and a second set of pixel data. The first set of pixel data includes a first combined scene, and the second set of pixel data includes a second combined scene. The first combined scene is a first weighted combination of a fluorescence scene component and a visible scene component due to the leakage of a color filter array. The second combined scene includes a second weighted combination of the fluorescence scene component and the visible scene component. Two display scene components are extracted from the captured pixel data in the frame and presented on a display unit. | 11-14-2013 |
20130300837 | SINGLE-CHIP SENSOR MULTI-FUNCTION IMAGING - Mixed mode imaging is implemented using a single-chip image capture sensor with a color filter array. The single-chip image capture sensor captures a frame including a first set of pixel data and a second set of pixel data. The first set of pixel data includes a first combined scene, and the second set of pixel data includes a second combined scene. The first combined scene is a first weighted combination of a fluorescence scene component and a visible scene component due to the leakage of a color filter array. The second combined scene includes a second weighted combination of the fluorescence scene component and the visible scene component. Two display scene components are extracted from the captured pixel data in the frame and presented on a display unit. | 11-14-2013 |
20130300838 | METHODS AND DEVICES FOR GENERATING A REPRESENTATION OF A 3D SCENE AT VERY HIGH SPEED - The present invention relates to a 3D landscape real-time imager. It also relates to methods for operating such an imager. Such an imager comprises: —at least one illuminating part which is designed to scan at least a portion of the landscape at a given range and having an ultra-short laser pulse source emitting at least one wavelength, and an optical rotating block, with a vertical axis of rotation, and controlled such that given packets of pulses are shaped in a pattern of rotating beams sent toward the said at least partial landscape; —at least one receiving part which comprises a set of SPAD detector arrays, each arranged along a vertical direction and rotating at a given speed in synchronism with the optical rotating block of the illuminating part, the detection data of the SPAD detector arrays being combined to acquire 3D imaging data of the said at least partial landscape in a central controller. | 11-14-2013 |
20130307932 | 3D IMAGING USING STRUCTURED LIGHT FOR ACCURATE VEHICLE OCCUPANCY DETECTION - What is disclosed is a method which combines structured illumination in the SWIR wavelength range with the detection capabilities of NIR to generate a 3D image of a scene for accurate vehicle occupancy determination. In one embodiment, structured light is projected through a customized optical element comprising a patterned grid. Wavelengths of the received structured pattern are shifted to a CCD detectable range. The shifted light comprises an image in a structured pattern. The wavelength-shifted light is detected using an infrared detector operating in the NIR. For each pixel in the detected patterned image, an amount of distortion caused by 3D surface variation at this pixel location is determined. The distortion is converted to a depth value. The process repeats for all pixels. A 3D image is constructed using each pixel's depth value. The number of occupants in the vehicle is determined from the constructed 3D image. | 11-21-2013 |
20130307933 | METHOD OF RECORDING AN IMAGE AND OBTAINING 3D INFORMATION FROM THE IMAGE, CAMERA SYSTEM - Two or more images are taken wherein during the image taking a focal sweep is performed. The exposure intensity is modulated during the focal sweep and done so differently for the images. This modulation provides for a watermarking of depth information in the images. The difference in exposure during the sweep watermarks the depth information differently in the images. By comparing the images a depth map for the images can be calculated. A camera system has a lens and a sensor and a means for performing a focal sweep and means for modulating the exposure intensity during the focal sweep. Modulating the exposure intensity can be done by modulating a light source or the focal sweep or modulating the transparency of a transparent medium in the light path. | 11-21-2013 |
20130307934 | System and Method for Providing 3D Sound - Systems and methods are provided for associating position information and sound. The method includes obtaining position information of an object at a given time; obtaining position information of a camera at the given time; determining a relative position of the object relative to the camera's position; and associating sound information with the relative position of the object. In another aspect, the position and orientation of a microphone are also tracked to calibrate the sound produced by an object or person, and the calibrated sound is associated with the relative position of the object, that is relative to the camera. | 11-21-2013 |
20130307935 | IMAGING SYSTEM AND METHOD - There is provided an imaging system comprising image capture apparatus and display apparatus. The image capture apparatus is for capturing an image of an operator work site, the image including depth information. The display apparatus is in communication with the image capture apparatus, and comprises at least one display screen. The display apparatus is arranged to receive the image captured by the image capture apparatus, including the depth information, and display to the operator the image captured by the image capture apparatus, including the depth information, on the display screen. The display screen is located between the operator's eyes and the position of the work site. There is also provided a method for capturing and displaying an image of an operator work site. | 11-21-2013 |
20130314500 | STEREOSCOPIC IMAGING APPARATUS - A stereoscopic imaging apparatus comprising a single photographic optical system; an image sensor, on which subject images that have passed through different first and second areas in a predetermined direction, respectively, are formed after being pupil-split, photoelectrically converting the subject images that have passed through the first and second areas, respectively, to output a first image and a second image; a diaphragm restricting a light flux incident on the image sensor; a subject information acquiring device acquiring distance information of a subject within a photographic angle of view or a device acquiring an amount of parallax of the subject; and a diaphragm control device controlling an F-number of the diaphragm so that a parallax between the first image and the second image is in a predetermined range based on the acquired distance information of the subject or the acquired amount of parallax of the subject. | 11-28-2013 |
20130314501 | SYSTEM AND METHOD FOR RENDERING AFFECTED PIXELS - Systems and methods are provided for minimizing re-projection artifacts. From a re-projection method, a distortion map determines areas of excessive stretching or compressing of pixels, which are then rendered. The re-projection and render are composited to create a new stereo image. Systems and methods are also provided for a “depth layout” process, whereby a new offset view of a 2D plate is created, allowing a stereo pair of images. Custom geometry that approximates the scene is created and 2D rotoscoping is employed to cut out shapes from the geometry. A depth map is created, from which an offset image may be re-projected, resulting in a stereo pair. Systems and methods are also provided for a fast re-projection technique, which creates a novel view from an existing image. The process generates a disparity map and then a distortion map, which is then used to create the new view. | 11-28-2013 |
20130314502 | PORTABLE MOBILE LIGHT STAGE - A subject is imaged using imaging equipment arranged on portable, wireless vehicles. The vehicles are positioned in a pattern in proximity to the subject and illuminate the subject in order to collect image data. The image data can be collected by cameras carried by the vehicles in addition to or instead of external high speed cameras. | 11-28-2013 |
20130314503 | VEHICLE VISION SYSTEM WITH FRONT AND REAR CAMERA INTEGRATION - A vehicular vision system includes a forward facing camera module having a forward facing camera with a forward field of view, and includes a rearward facing camera having a rearward field of view. The forward facing camera module includes an image processor, a decoder and an encoder. Image data captured by the rearward facing camera is fed to the decoder and an output of the decoder is fed to the image processor. The image processor is operable to process image data captured by the forward facing camera to at least detect objects in the forward field of view and is operable to process the decoder output to at least detect objects in the rearward field of view. An image processor output is fed to the encoder and an encoder output is fed to a display that is viewable by a driver of the vehicle during a reversing maneuver of the vehicle. | 11-28-2013 |
20130314504 | METHOD AND DEVICE FOR IMAGING AT LEAST ONE THREE-DIMENSIONAL COMPONENT - A method for imaging at least one three-dimensional component, which is produced by a generative manufacturing method, is disclosed. The method, in an embodiment, includes determining at least two layer images of the component during production thereof by a detection device, which is designed to detect with spatial resolution a measured quantity characterizing the energy input in the component. The method further includes generating a three-dimensional image of the component based on the determined layer images by a computing device and displaying the image by a display device. A device for carrying out the method is also disclosed. | 11-28-2013 |
20130314505 | System And Process For Detecting, Tracking And Counting Human Objects of Interest - A system is disclosed that includes: at least one image capturing device at the entrance to obtain images; a reader device; and a processor for extracting objects of interest from the images and generating tracks for each object of interest, and for matching objects of interest with objects associated with RFID tags, and for counting the number of objects of interest associated with, and not associated with, particular RFID tags. | 11-28-2013 |
20130314506 | Camera Device - An example device includes first and second cameras and a touchscreen user interface configured to concurrently display first and second buttons and a real time image captured by the selected one of the cameras. The first button is operable for selecting between the first and second cameras, and the second button is operable for taking a picture using the selected one of the cameras. | 11-28-2013 |
20130314507 | IMAGE CAPTURING DEVICE AND DATA PROCESSING METHOD - A mobile phone includes: a first image capturing element for generating left-eye image data; a second image capturing element for generating right-eye image data; a CPU; and a RAM. The CPU determines whether or not the left-eye image data includes an image of an obstacle, such as a finger, that blocks entrance of external light into the first image capturing element, and determines whether or not the right-eye image data includes an image of an obstacle that blocks entrance of external light into the second image capturing element. When one of the left-eye image data and the right-eye image data includes the image of the obstacle and the other of the left-eye image data and the right-eye image data does not include the image of the obstacle, the CPU deletes the one of the left-eye image data and the right-eye image data from the RAM. | 11-28-2013 |
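The keep/delete decision made by the CPU in entry 20130314507 above reduces to a simple rule over the two obstacle-detection results. This sketch assumes the obstacle detector itself is given as boolean inputs; the function name and return convention are illustrative, not the patent's implementation.

```python
def select_stereo_frames(left_blocked, right_blocked):
    """Decide which eye images to keep: when exactly one of the two
    images is blocked by an obstacle (e.g. a finger over one lens),
    discard the blocked one; otherwise keep both."""
    if left_blocked and not right_blocked:
        return ["right"]
    if right_blocked and not left_blocked:
        return ["left"]
    return ["left", "right"]

# Only the unblocked frame survives when one lens is covered.
kept = select_stereo_frames(True, False)
```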
20130321579 | System and Method for Scanning and Analyzing a Users Ergonomic Characteristics - A system including a | 12-05-2013 |
20130321580 | 3-DIMENSIONAL DEPTH IMAGE GENERATING SYSTEM AND METHOD THEREOF - A 3-dimensional depth image generating system and method thereof are provided. The 3-dimensional depth image generating system includes a first and a second camera device and an image processing device. The first and the second camera devices are separated by a predetermined distance, and respectively capture an object to obtain a first and a second image. The image processing device is connected with the first and the second camera devices and respectively obtains a first and a second partial image, wherein the first and the second partial images both include a first predetermined portion and a second predetermined portion of the object, and sizes of the first partial image and the second partial image are respectively smaller than those of the first image and the second image. The image processing device combines the first and the second partial images to generate a 3-dimensional depth image of the object. | 12-05-2013 |
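Depth recovery from a calibrated two-camera arrangement like the one in entry 20130321580 above conventionally uses the standard stereo triangulation relation Z = f·B/d. The sketch below illustrates that textbook relation only; the specific numbers and function name are assumptions for the example, not values from the publication.

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Standard stereo triangulation: depth Z = f * B / d, where f is
    the focal length in pixels, B the distance between the two cameras
    in metres, and d the disparity in pixels between matched points in
    the first and second images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative values: f = 700 px, baseline = 0.1 m, disparity = 35 px.
depth_m = disparity_to_depth(35.0, 700.0, 0.1)
```

Nearby objects produce large disparities and small depths; as disparity shrinks toward zero, the recovered depth grows without bound, which is why the predetermined camera separation bounds the useful depth range.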
20130321581 | Spatio-Temporal Light Field Cameras - Spatio-temporal light field cameras that can be used to capture the light field within its spatio-temporally extended angular extent. Such cameras can be used to record 3D images, 2D images that can be computationally focused, or wide angle panoramic 2D images with relatively high spatial and directional resolutions. The light field cameras can also be used as 2D/3D switchable cameras with extended angular extent. The spatio-temporal aspects of the novel light field cameras allow them to capture and digitally record the intensity and color from multiple directional views within a wide angle. The inherent volumetric compactness of the light field cameras makes it possible to embed them in small mobile devices to capture either 3D images or computationally focusable 2D images. The inherent versatility of these light field cameras makes them suitable for multiple perspective light field capture for 3D movies and video recording applications. | 12-05-2013 |
20130321582 | SYSTEM AND METHOD FOR MEASURING THREE-DIMENSIONAL SURFACE FEATURES - In some embodiments, a system for measuring surface features may include a pattern projector, at least one digital imaging device, and an image processing device. The pattern projector may project, during use, a pattern of light on a surface of an object. In some embodiments, the pattern projector moves, during use, the pattern of light along the surface of the object. In some embodiments, the pattern projector moves the pattern of light in response to electronic control signals. At least one of the digital imaging devices may record, during use, at least one image of the projected pattern of light. The image processing device, during use, converts projected patterns of light recorded on at least one of the images to three-dimensional data points representing a surface geometry of the object using relative positions and relative angles between the at least one imaging device and the pattern projector. | 12-05-2013 |
20130321583 | IMAGING SYSTEM AND METHOD FOR USE OF SAME TO DETERMINE METRIC SCALE OF IMAGED BODILY ANATOMY - Repositionable imaging system and method for creating a 3D image of a closed cavity of a patient's anatomy containing the imaging system, estimating characteristics of motion of the system within the cavity, and determining a metric dimension characterizing the cavity without the use of mechanical measurement. The system may include a tracking device located externally with respect to the cavity to determine a scale measurement of the system's motion and the structure of the cavity. The system's computer processor is configured to determine triangulated features of the cavity and recover motion scale data with the use of data received from the tracking device. A specific imaging system includes an endoscope (equipped with an electrical coil and complemented with an external electromagnetic tracker). | 12-05-2013 |
20130321584 | DEPTH IMAGE GENERATING METHOD AND APPARATUS AND DEPTH IMAGE PROCESSING METHOD AND APPARATUS - A depth image generation method is provided. The depth image generation method may include emitting light of different modulation frequencies to an object; detecting the light of the different modulation frequencies reflected from the object; and generating a depth image related to a distance to the object using the light of the different modulation frequencies. | 12-05-2013 |
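A continuous-wave time-of-flight camera of the kind described in entry 20130321584 above recovers distance from the phase delay of the modulated light; emitting multiple modulation frequencies extends the otherwise limited unambiguous range. The single-frequency relation is sketched below as a generic illustration (it omits the multi-frequency ambiguity resolution the abstract implies), with function names chosen for the example.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def phase_to_distance(phase_rad, mod_freq_hz):
    """Distance from the measured phase delay of amplitude-modulated
    light: d = c * phase / (4 * pi * f).  The result is unambiguous
    only up to c / (2 * f)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz):
    """Maximum distance measurable without phase wrap-around."""
    return C / (2.0 * mod_freq_hz)

# At a 20 MHz modulation frequency the unambiguous range is about 7.5 m;
# a second, lower frequency would extend it.
```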
20130321585 | System and Method for 3D Imaging using Structured Light Illumination - A biometrics system captures and processes a handprint image using a structured light illumination to create a 2D representation equivalent of a rolled inked handprint. A processing unit calculates 3D coordinates of the hand from the plurality of images and maps the 3D coordinates to a 2D flat surface to create a 2D representation equivalent of a rolled inked handprint. | 12-05-2013 |
20130329011 | Probabilistic And Constraint Based Articulated Model Fitting - A depth sensor obtains images of articulated portions of a user's body such as the hand. A predefined model of the articulated body portions is provided. Representative attract points of the model are matched to centroids of the depth sensor data, and a rigid transform of the model is performed, in an initial, relatively coarse matching process. This matching process is then refined in a non-rigid transform of the model, using attract point-to-centroid matching. In a further refinement, an iterative process rasterizes the model to provide depth pixels of the model, and compares the depth pixels of the model to the depth pixels of the depth sensor. The refinement is guided by whether the depth pixels of the model are overlapping or non-overlapping with the depth pixels of the depth sensor. Collision, distance and angle constraints are also imposed on the model. | 12-12-2013 |
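The rigid transform that aligns model attract points to observed centroids, as in the coarse matching step of entry 20130329011 above, is commonly computed with the Kabsch (orthogonal Procrustes) method. This NumPy sketch shows that generic technique, not the publication's actual implementation; the point set and function name are illustrative.

```python
import numpy as np

def rigid_fit(src, dst):
    """Kabsch algorithm: find rotation R and translation t minimizing
    the squared distance between R @ src[i] + t and dst[i] over
    corresponding 3D point sets src, dst of shape (N, 3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Aligning a point set with a translated copy of itself recovers an
# identity rotation and the translation.
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R, t = rigid_fit(pts, pts + np.array([2., -1., 3.]))
```

The abstract's subsequent non-rigid refinement and pixel-overlap comparison go beyond this single closed-form step.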
20130329012 | 3-D IMAGING AND PROCESSING SYSTEM INCLUDING AT LEAST ONE 3-D OR DEPTH SENSOR WHICH IS CONTINUALLY CALIBRATED DURING USE - 3D imaging and processing method and system including at least one 3D or depth sensor which is continuously calibrated during use are provided. In one embodiment, a calibration apparatus or object is continuously visible in the field of view of each 3D sensor. In another embodiment, such a calibration apparatus is not needed. Continuously calibrated 3D sensors improve the accuracy and reliability of depth measurements. The calibration system and method can be used to ensure the accuracy of measurements using any of a variety of 3D sensor technologies. To reduce the cost of implementation, the invention can be used with inexpensive, consumer-grade 3D sensors to correct measurement errors and other measurement deviations from the true location and orientation of an object in 3D space. | 12-12-2013 |
20130329013 | HAND HELD DIMENSION CAPTURE APPARATUS, SYSTEM AND METHOD - A method of determining dimension information indicative of the dimensions of an object is disclosed, the method including: receiving a depth image of the object; and processing the depth information. The processing may include: determining a region of interest (ROI) in the image corresponding to a corner of the object; generating local normal information indicative of local normals corresponding to points in the image; generating, based at least in part on the ROI and the local normal information, object face information indicative of an association of points in the image with sides of the object; and determining the dimension information based at least in part on the object face information. | 12-12-2013 |
20130329014 | ELECTRONIC DEVICE, IMAGE DISPLAY METHOD, AND IMAGE DISPLAY PROGRAM - According to an aspect, an electronic device includes: an imaging unit for capturing a subject; and a storage unit for storing therein an image captured by the imaging unit. The image stored in the storage unit is an image that forms a three-dimensional image when displayed in combination with other image of the same subject stored in the storage unit. | 12-12-2013 |
20130335528 | IMAGING DEVICE CAPABLE OF PRODUCING THREE DIMENSIONAL REPRESENTATIONS AND METHODS OF USE - Described herein is a system and method to create a 3D representation of an observed scene by combining multiple views from a moving image capture device. The output is a point cloud or a mesh model. Models can be captured at arbitrary scales varying from small objects to entire buildings. The visual fidelity of produced models is comparable to that of a photograph when rendered using conventional graphics rendering. Despite offering fine-scale accuracies, the mapping results are globally consistent, even at large scales. | 12-19-2013 |
20130335529 | CAMERA POSE ESTIMATION APPARATUS AND METHOD FOR AUGMENTED REALITY IMAGING - An apparatus for providing an estimate for a 3D camera pose relative to a scene from 2D image data of a 2D image frame provided by the camera is provided, the apparatus using four types of observations: (a) detected 2D-3D point correspondences; (b) tracked 2D-3D point correspondences; (c) motion model observations; and (d) edge observations. | 12-19-2013 |
20130335530 | SYSTEM AND METHOD FOR VISUAL INSPECTION AND 3D WHITE LIGHT SCANNING OF OFF-LINE INDUSTRIAL GAS TURBINES AND OTHER POWER GENERATION MACHINERY - Internal components of gas or steam turbines are inspected with a 3D scanning camera inspection system that is inserted and positioned within the turbine, for example through a gas turbine combustor nozzle port. Three dimensional internal component measurements are performed using projected light patterns generated by a stripe projector and a 3D white light matrix camera. Real time dimensional information is gathered without physical contact, which is helpful for extracting off-line engineering information about the scanned structures. Exemplary 3D scans, preferably with additional visual images, are performed of the gas path side of a gas turbine combustor support housing, combustor basket and transition with or without human intervention. | 12-19-2013 |
20130335531 | APPARATUS FOR PROJECTING GRID PATTERN - The present invention relates to an apparatus for projecting a grid pattern, and more particularly, to an apparatus for projecting a grid pattern that projects an image of a grid pattern onto a test object during a three-dimensional measurement. The apparatus for projecting a grid pattern comprises: a camera which takes, as an input, a grid pattern image using grid pattern projecting means including a grid pattern signal generating unit and a grid pattern emitting unit, wherein the grid pattern signal generating unit receives grid pattern information to emit light in the form of grid pattern onto the test object and generates a grid pattern signal, and controls the grid pattern signal, wherein the grid pattern emitting unit controls a micro-mirror for a light source and a laser scanner using the grid pattern signal to emit a grid pattern; information processing means for extracting a three-dimensional image; and output means. According to the present invention, the size of the apparatus for projecting a grid pattern may be reduced such that the apparatus may be internally or externally built into a mobile device or three-dimensional measurement device. The apparatus for projecting a grid pattern of the present invention may solve the focusing problems of conventional apparatuses for projecting a grid pattern, and may project a grid pattern image to a high-speed camera in real time to perform a three-dimensional measurement. | 12-19-2013 |
20130342650 | THREE DIMENSIONAL IMAGING DEVICE - An imaging device may include a housing and a pair of lenses mounted in the housing. A mechanism maintains the lenses in a horizontal orientation when the housing is rotated about an imaging axis. | 12-26-2013 |
20130342651 | ENCODING DATA IN DEPTH PATTERNS - A depth imaging system comprises a depth camera input to receive a depth map representing an observed scene imaged by a depth camera, the depth map including a plurality of pixels and a depth value for each of the plurality of pixels. The depth imaging system further comprises a tag identification module to identify a 3D tag imaged by the depth camera and represented in the depth map, the 3D tag comprising one or more depth features, each of the one or more depth features comprising one or more characteristics recognizable by the depth camera. The depth imaging system further comprises a tag decoding module to translate the one or more depth features into machine-readable data. | 12-26-2013 |
20130342652 | TRACKING AND FOLLOWING PEOPLE WITH A MOBILE ROBOTIC DEVICE - Tracking and following technique embodiments are presented that are generally employed to track and follow a person using a mobile robotic device having a color video camera and a depth video camera. A computer associated with the mobile robotic device is used to perform various actions. Namely, in a tracking mode, a face detection method and the output from the color video camera is used to detect potential persons in an environment. In addition, a motion detection method and the output from the depth video camera is also used to detect potential persons in the environment. Detection results obtained using the face and motion detection methods are then fused and used to determine the location of one or more persons in the environment. Then, in a following mode, a mobile robotic device following method is used to follow a person whose location was determined in the tracking mode. | 12-26-2013 |
20130342653 | CARGO SENSING - Cargo presence detection systems and methods are described herein. One cargo presence detection system includes one or more sensors positioned in an interior space of a container, and arranged to provide spatial data about at least a portion of the interior space of the container and a detection component that receives the spatial data from the one or more sensors and identifies if one or more cargo items are present in the interior space of the container based on analysis of the spatial data. | 12-26-2013 |
20130342654 | 3D VIDEO REPRODUCTION DEVICE, NON-TRANSITORY RECORDING MEDIUM, 3D DISPLAY DEVICE, 3D IMAGING DEVICE, AND 3D VIDEO REPRODUCTION METHOD - A stereoscopic video reproduction device comprising a first acquisition unit; a second acquisition unit; a decision unit; a parallax correction unit configured to correct a parallax when the binocular fusion is impossible, such that the binocular fusion becomes possible in the stereoscopic video in a predetermined interval in which the binocular fusion is decided to be impossible; and an output unit configured to: when outputting the acquired stereoscopic video to the stereoscopic display, output, to the stereoscopic display, the stereoscopic video in which the parallax is corrected in the stereoscopic video by the parallax correction unit in the predetermined interval in which the binocular fusion is decided to be impossible in a case where the decision unit decides that the binocular fusion is impossible; and output the acquired stereoscopic video as is to the stereoscopic display in a case where the decision unit decides that the binocular fusion is possible. | 12-26-2013 |
20130342655 | METHOD AND APPARATUS FOR MACRO PHOTOGRAPHIC STEREO IMAGING - A macro photographic apparatus for creating focus stacked images of a specimen may include a rigid longitudinal member having a longitudinal axis and including a camera mount thereon. A translation device may be fixed to the member for translating the specimen along the member toward and away from the camera mount. A rotation device may be mounted on the translation device. The rotation device may support the specimen and enable rotation of the specimen around first and second axes that are perpendicular to the longitudinal axis of the member. At any single position of the translation device along the longitudinal axis of the member, as the specimen is rotated around the first and second axes, the spatial location of the specimen remains substantially the same. | 12-26-2013 |
20130342656 | IMAGE PICK-UP APPARATUS HAVING A FUNCTION OF RECOGNIZING A FACE AND METHOD OF CONTROLLING THE APPARATUS - It is judged whether or not a human face detecting mode is set. When it is determined that the human face detecting mode is set, a two-dimensional face detecting process is performed to detect a human face. When it is determined that a human face has not been detected in the two-dimensional face detecting process, a three-dimensional face detecting process is performed to detect a human face. In addition, when an animal face detecting mode is set, a three-dimensional face detecting process is performed to detect a face of an animal corresponding to the set detecting mode. | 12-26-2013 |
20140002605 | IMAGING SYSTEM AND METHOD | 01-02-2014 |
20140002606 | ENHANCED IMAGE PROCESSING WITH LENS MOTION | 01-02-2014 |
20140002607 | INVARIANT FEATURES FOR COMPUTER VISION | 01-02-2014 |
20140002608 | LINE SCANNER USING A LOW COHERENCE LIGHT SOURCE | 01-02-2014 |
20140002609 | APPARATUS AND METHOD FOR GENERATING DEPTH IMAGE USING TRANSITION OF LIGHT SOURCE | 01-02-2014 |
20140002610 | REAL-TIME 3D SHAPE MEASUREMENT SYSTEM | 01-02-2014 |
20140002611 | TIME-OF-FLIGHT DEPTH IMAGING | 01-02-2014 |
20140002612 | STEREOSCOPIC SHOOTING DEVICE | 01-02-2014 |
20140002613 | THREE-DIMENSIONAL IMAGING USING A SINGLE CAMERA | 01-02-2014 |
20140015929 | THREE DIMENSIONAL SCANNING WITH PATTERNED COVERING - A method for obtaining 3D information of an object including the steps of retrieving two or more images of the object, the object being covered in an elastic covering comprising a uniform pattern, matching a plurality of corresponding points in the two or more images based on the pattern, and generating a point cloud representing the object based on the plurality of corresponding points. | 01-16-2014 |
20140015930 | ACTIVE PRESENCE DETECTION WITH DEPTH SENSING - In vision-based authentication platforms for secure resources such as computer systems, false positives and/or false negatives in the detection of walk-away events are reduced or eliminated by incorporating depth information into tracking authenticated system operators. | 01-16-2014 |
20140015931 | METHOD AND APPARATUS FOR PROCESSING VIRTUAL WORLD - A virtual world processing apparatus and method are provided. Sensed information related to an image taken in a real world is transmitted to a virtual world using image sensor capability information, which is information on a capability of an image sensor. | 01-16-2014 |
20140015932 | 3DIMENSION IMAGE SENSOR AND SYSTEM INCLUDING THE SAME - A 3D image sensor includes a first color filter configured to pass wavelengths of a first region of visible light and wavelengths of infrared light; a second color filter configured to pass wavelengths of a second region of visible light and the wavelengths of infrared light; and an infrared sensor configured to detect the wavelengths of infrared light passed through the first color filter. | 01-16-2014 |
20140015933 | IMAGE PROCESSING APPARATUS, IMAGING SYSTEM, AND IMAGE PROCESSING SYSTEM - An image processing apparatus includes: an image acquisition unit for acquiring original images obtained by imaging an object at different focal positions; an image generation unit for generating a plurality of observation images from the original images, the observation images being mutually different in at least either focal position or DOF; and an image displaying unit for displaying the observation images on a display device. The image generation unit generates the observation images by performing combine processing for selecting two or more original images from the original images and focus-stacking the selected original images to generate a single observation image, for plural times while differing a combination of the selected original images. The image displaying unit selects the observation images to be displayed, when the observation images displayed on the display device are switched, such that the focal position or the DOF changes sequentially. | 01-16-2014 |
20140015934 | COMPUTERIZED IMAGING OF SPORTING TROPHIES AND METHOD OF PROVIDING A REPLICA - Methods are disclosed for providing replicas of a sporting trophy and for scoring the sporting trophy. The first method includes providing a sporting trophy to be scanned; scanning the sporting trophy to provide three-dimensional image data of the sporting trophy; and providing the three-dimensional image data of the sporting trophy to a replica generating system to provide a replica of the sporting trophy. The second method includes providing three-dimensional digital data of a sporting trophy having a volume and a surface area; providing at least one sporting-relevant measurement based on the three-dimensional data of the sporting trophy; and providing a score of the sporting trophy based on the at least one sporting-relevant measurement. | 01-16-2014 |
20140015935 | METHODS AND SYSTEMS FOR THREE DIMENSIONAL OPTICAL IMAGING, SENSING, PARTICLE LOCALIZATION AND MANIPULATION - Embodiments include methods, systems, and/or devices that may be used to image, obtain three-dimensional information from a scene, and/or locate multiple small particles and/or objects in three dimensions. A point spread function (PSF) with a predefined three dimensional shape may be implemented to obtain high Fisher information in 3D. The PSF may be generated via a phase mask, an amplitude mask, a hologram, or a diffractive optical element. The small particles may be imaged using the 3D PSF. The images may be used to find the precise location of the object using an estimation algorithm such as maximum likelihood estimation (MLE), expectation maximization, or Bayesian methods, for example. Calibration measurements can be used to improve the theoretical model of the optical system. Fiduciary particles/targets can also be used to compensate for drift and other types of movement of the sample relative to the detector. | 01-16-2014 |
20140022346 | METHOD AND APPARATUS FOR IMPROVING DEPTH OF FIELD (DOF) IN MICROSCOPY - A method for improving depth of field (DOF) in microscopic imaging, the method comprising combining a sequence of images captured from different focal distances to form an all-focus image, comprising computing a focus measure at every pixel, finding the largest peaks at each position in the focus measure as multiple candidate values and blending the multiple candidate values according to the focus measure to determine the all-focus image. | 01-23-2014 |
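As an illustrative aside (not taken from the patent, which specifies no implementation), the focus-measure-and-blend approach described in the abstract above can be sketched as follows, assuming a grayscale NumPy focal stack and a modified-Laplacian sharpness cue:

```python
import numpy as np

def focus_measure(img):
    """Modified Laplacian: a common per-pixel sharpness measure.
    (Borders wrap via np.roll; acceptable for an illustration.)"""
    lap_x = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    lap_y = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
    return lap_x + lap_y

def all_focus(stack):
    """Blend a focal stack into an all-focus image, weighting each
    slice by its per-pixel focus measure (a soft variant of picking
    the single sharpest slice at every pixel)."""
    stack = np.asarray(stack, dtype=float)           # shape (n, H, W)
    weights = np.stack([focus_measure(s) for s in stack])
    weights += 1e-8                                  # avoid division by zero in flat regions
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)
```

In flat regions where no slice shows contrast, the epsilon makes the blend fall back to a plain average, which is a design choice of this sketch rather than anything the patent prescribes.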
20140022347 | METHOD AND APPARATUS FOR SIMULATING DEPTH OF FIELD (DOF) IN MICROSCOPY - A method and apparatus for simulating depth of field (DOF) in microscopic imaging, the method comprising computing a blur quantity for each pixel of an all-focus image, performing point spread function operations on one or more regions of the all-focus image, computing intermediate and normalized integral images on the regions and determining an output pixel for the each pixel based on the intermediate and normalized integral images. | 01-23-2014 |
20140022348 | DEPTH MAPPING USING TIME-CODED ILLUMINATION - A method for depth mapping includes illuminating an object with a time-coded pattern and capturing images of the time-coded pattern on the object using a matrix of detector elements. The time-coded pattern in the captured images is decoded using processing circuitry embedded in each of the detector elements so as to generate respective digital shift values, which are converted into depth coordinates. | 01-23-2014 |
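The per-pixel decoding of a temporal binary code into shift values, as outlined in the abstract above, can be sketched like this (an illustration only; the patent embeds the decoding in per-detector circuitry, and the Gray-code choice here is an assumption, since Gray codes are a common form of time-coded illumination):

```python
import numpy as np

def gray_to_binary(g):
    """Convert a Gray-coded integer to plain binary."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

def decode_timecode(bit_frames):
    """Per-pixel decode of a temporal binary code.

    bit_frames: list of 2-D 0/1 arrays, most significant bit first,
    one per projected pattern. Returns an integer shift value per
    pixel, which a calibrated system would then map to depth."""
    code = np.zeros_like(np.asarray(bit_frames[0]), dtype=int)
    for bits in bit_frames:
        code = (code << 1) | np.asarray(bits, dtype=int)
    # Assuming the projector used Gray code, convert each code word.
    return np.vectorize(gray_to_binary)(code)
```

For example, a pixel that observed the bit sequence 1, 1, 1 over three Gray-code patterns decodes to shift value 5 (binary 101).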
20140022349 | IMAGE PROCESSING APPARATUS AND METHOD - An image processing apparatus includes a light receiver to transduce a light reflected from an object into an electron corresponding to an intensity of the light, a measurer to measure quantities of charge on the electron with respect to at least two different divided time sections of an integration time section for acquiring a depth image, and an image generator to generate a depth image using at least one of the at least two measured quantities of charge on the electron. | 01-23-2014 |
20140022350 | METHOD AND DEVICE FOR HIGH-RESOLUTION IMAGING WHICH OBTAINS CAMERA POSE USING DEFOCUSING - A method and device for high-resolution three-dimensional (3-D) imaging which obtains camera pose using defocusing is disclosed. The device comprises a lens obstructed by a mask having two sets of apertures. The first set of apertures produces a plurality of defocused images of the object which are used to obtain camera pose. The second set of apertures produces a plurality of defocused images of a projected pattern of markers on the object. The images produced by the second set of apertures are differentiable from the images used to determine pose, and are used to construct a detailed 3-D image of the object. Using the known change in camera pose between captured images, the 3-D images produced can be overlaid to produce a high-resolution 3-D image of the object. | 01-23-2014 |
20140022351 | PHOTOGRAPHING APPARATUS, PHOTOGRAPHING CONTROL METHOD, AND EYEBALL RECOGNITION APPARATUS - A photographing control method is provided. The photographing control method includes capturing an image of an object, detecting a facial area from within the captured image of the object, adjusting a location of the photographing apparatus based on a location of the detected facial area, and adjusting a zooming state of the photographing apparatus so that a size of the detected facial area falls within a predetermined size range. | 01-23-2014 |
20140022352 | MOTION BLUR COMPENSATION - Disclosed is a method for compensating for motion blur when performing a 3D scanning of at least a part of an object by means of a 3D scanner, where the motion blur occurs because the scanner and the object are moved relative to each other while the scanning is performed, and where the motion blur compensation comprises: determining whether there is a relative motion between the scanner and the object during the acquisition of the sequence of focus plane images; if a relative motion is determined, performing a motion compensation based on the determined motion; and generating a 3D surface from the sequence of focus plane images. | 01-23-2014 |
20140022353 | SAFETY IN DYNAMIC 3D HEALTHCARE ENVIRONMENT - The present invention relates to safety in a dynamic 3D healthcare environment. The invention in particular relates to a medical safety-system for dynamic 3D healthcare environments, a medical examination system with motorized equipment, an image acquisition arrangement, and a method for providing safe movements in dynamic 3D healthcare environments. In order to provide improved safety in dynamic 3D healthcare environments with a facilitated adaptability, a medical safety-system ( | 01-23-2014 |
20140022354 | SOLID-STATE IMAGING ELEMENT, DRIVING METHOD THEREOF, AND IMAGING DEVICE - A pixel pair ( | 01-23-2014 |
20140028799 | Use of Color and Intensity Modulation of a Display for Three-Dimensional Object Information - Methods and systems for using a mobile device with a multi-element display, a camera, and a controller to determine a 3D model of a target object. The multi-element display is configured to generate a light field. At least a portion of the light field reflects from a target object. The camera is configured to capture a plurality of images based on the portion of the light field reflected from the target object. The controller is configured to determine a 3D model of the target object based on the images. The 3D model includes three-dimensional shape and color information about the target object. In some examples, the light field could include specific light patterns, spectral content, and other forms of modulated/structured light. | 01-30-2014 |
20140028800 | Multispectral Binary Coded Projection - Illumination of an object with spectral structured light, and spectral measurement of light reflected therefrom, for purposes which include derivation of a three-dimensional (3D) measurement of the object, such as depth and/or contours of the object, and/or for purposes which include measurement of a material property of the object, such as by differentiating between different types of materials. | 01-30-2014 |
20140028801 | Multispectral Binary Coded Projection - Illumination of an object with spectral structured light, and spectral measurement of light reflected therefrom, for purposes which include derivation of a three-dimensional (3D) measurement of the object, such as depth and/or contours of the object, and/or for purposes which include measurement of a material property of the object, such as by differentiating between different types of materials. | 01-30-2014 |
20140036034 | THREE-DIMENSIONAL PRINTER WITH LASER LINE SCANNER - A three-dimensional printer includes a laser line scanner and hardware to rotate the scanner relative to an object on a build platform. In this configuration, three-dimensional surface data can be obtained from the object, e.g., for use as an input to subsequent processing steps such as the generation of tool instructions to fabricate a three-dimensional copy of the object, or various surfaces thereof. | 02-06-2014 |
20140036035 | PRINTER WITH LASER SCANNER AND TOOL-MOUNTED CAMERA - A three-dimensional printer includes a laser line scanner and hardware to rotate the scanner relative to an object on a build platform. In this configuration, three-dimensional surface data can be obtained from the object, e.g., for use as an input to subsequent processing steps such as the generation of tool instructions to fabricate a three-dimensional copy of the object, or various surfaces thereof. | 02-06-2014 |
20140043435 | THREE-DIMENSIONAL IMAGING THROUGH MULTI-IMAGE PROCESSING - Embodiments of imaging devices of the present disclosure automatically utilize sequential image captures in an image processing pipeline. In one embodiment, control processing circuitry captures a plurality of sub-frames, each of the sub-frames comprising a representation of a scene captured at a different focus position of a lens. A depth map of the scene may be generated comprising a depth of the portion of the scene based at least upon the focus position at which the one of the sub-frames was captured. An output frame of the scene may also be generated by utilizing pixel values of the sub-frame which are determined to be at optimal focus positions. | 02-13-2014 |
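A minimal sketch of the focus-sweep idea in the abstract above, assuming a grayscale NumPy stack and squared gradient magnitude as the sharpness cue (both assumptions of this illustration, not details from the patent): each pixel takes the depth of the sub-frame in which it was sharpest, and the output frame takes that pixel's value from the same sub-frame.

```python
import numpy as np

def depth_from_focus(sub_frames, focus_positions):
    """Assign each pixel the focus position at which it was sharpest.

    sub_frames: stack of n frames, each (H, W), captured at n lens positions.
    focus_positions: length-n sequence of those lens positions (e.g. in mm).
    Returns (depth_map, all_in_focus_frame)."""
    stack = np.asarray(sub_frames, dtype=float)
    # Local contrast as the sharpness cue: squared gradient magnitude.
    gx = np.diff(stack, axis=2, prepend=stack[:, :, :1])
    gy = np.diff(stack, axis=1, prepend=stack[:, :1, :])
    sharpness = gx ** 2 + gy ** 2
    best = sharpness.argmax(axis=0)                  # (H, W) index of sharpest slice
    depth_map = np.asarray(focus_positions)[best]
    # All-in-focus frame: take each pixel from its sharpest slice.
    h, w = best.shape
    output = stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
    return depth_map, output
```

A real pipeline would smooth or window the sharpness cue before the argmax; the raw per-pixel version here is kept deliberately small.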
20140043436 | Capturing and Aligning Three-Dimensional Scenes - Systems and methods for building a three-dimensional composite scene are disclosed. Certain embodiments of the systems and methods may include the use of a three-dimensional capture device that captures a plurality of three-dimensional images of an environment. Some embodiments may further include elements concerning aligning and/or mapping the captured images. Various embodiments may further include elements concerning reconstructing the environment from which the images were captured. The methods disclosed herein may be performed by a program embodied on a non-transitory computer-readable storage medium when the program is executed by a processor. | 02-13-2014 |
20140043437 | IMAGE PICKUP APPARATUS, IMAGE PICKUP SYSTEM, METHOD OF CONTROLLING IMAGE PICKUP APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An image pickup apparatus ( | 02-13-2014 |
20140043438 | Systems and Methods for Detecting a Tilt Angle from a Depth Image - A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels. | 02-13-2014 |
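The geometry behind the tilt-angle computation described above reduces to comparing two 3-D centroids. The sketch below is illustrative only: the coordinate convention (camera space, y up) and the centroid inputs are assumptions, since the patent describes selecting pixel portions rather than a specific formula.

```python
import math

def midpoint(a, b):
    """Midpoint of two 3-D points, e.g. averaging hip and knee centroids."""
    return tuple((p + q) / 2 for p, q in zip(a, b))

def tilt_angle(upper_xyz, lower_xyz):
    """Tilt of the body axis from vertical, in degrees, given the 3-D
    centroid of an upper body part (e.g. the shoulders) and of a lower
    one (e.g. the hip/knee midpoint), with y pointing up."""
    dx = upper_xyz[0] - lower_xyz[0]
    dy = upper_xyz[1] - lower_xyz[1]
    dz = upper_xyz[2] - lower_xyz[2]
    lateral = math.hypot(dx, dz)          # horizontal offset between the parts
    return math.degrees(math.atan2(lateral, dy))
```

An upright target gives 0 degrees; a target leaning so the shoulders sit as far sideways as above the hips gives 45 degrees.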
20140043439 | METHOD FOR OPERATING A CAMERA AND A PROJECTOR IN A SYNCHRONIZED MANNER - A method for synchronizing a camera that has an image sensor with a projector that can generate a synchronization signal which corresponds to the projector frame rate. The projector is operated with a frame rate (F | 02-13-2014 |
20140049609 | WIDE ANGLE DEPTH DETECTION - Embodiments for a depth sensing camera with a wide field of view are disclosed. In one example, a depth sensing camera comprises an illumination light projection subsystem, an image detection subsystem configured to acquire image data having a wide angle field of view, a logic subsystem configured to execute instructions, and a data-holding subsystem comprising stored instructions executable by the logic subsystem to control projection of illumination light and to determine depth values from image data acquired via the image sensor. The image detection subsystem comprises an image sensor and one or more lenses. | 02-20-2014 |
20140049610 | ILLUMINATION LIGHT PROJECTION FOR A DEPTH CAMERA - Various embodiments of TOF depth cameras and methods for illuminating image environments with illumination light are provided herein. In one example, a TOF depth camera configured to collect image data from an image environment illuminated by illumination light includes a light source including a plurality of surface-emitting lasers configured to generate coherent light. The example TOF camera also includes an optical assembly configured to transmit light from the plurality of surface-emitting lasers to the image environment and an image sensor configured to detect at least a portion of return light reflected from the image environment. | 02-20-2014 |
20140049611 | MOBILE TERMINAL AND CONTROLLING METHOD THEREOF - A mobile terminal including a camera configured to photograph a plurality of images; a touchscreen configured to display one of the plurality of the photographed images or a first synthetic image created from synthesizing at least two of the plurality of the photographed images together as a representative image; and a controller configured to receive a first touch input applied while the representative image is displayed, extract an image having a best resolution for an object or region selected by the first touch input from the plurality of the photographed images, and display the extracted image via the touchscreen. | 02-20-2014 |
20140049612 | IMAGE PROCESSING DEVICE, IMAGING DEVICE, AND IMAGE PROCESSING METHOD - An image processing device which creates a depth map using a multi-view image includes: a detection unit which detects at least one feature in each of the images included in the multi-view image; a calculation unit which calculates reliability of correspondence between images included in the multi-view image in preparation for creating the depth map; and a creation unit which creates the depth map using the correspondence between the images included in the multi-view image when the calculated reliability is a first reliability, and creates the depth map without using the correspondence between the images included in the multi-view image when the calculated reliability is a second reliability which is lower than the first reliability. | 02-20-2014 |
20140049613 | FIGURE-GROUND ORGANIZATION OF 3-D SCENES - Systems and methods for processing a pair of 2-D images are described. In one example, a stereoscopic set of images is converted into a collection of regions that represent individual 3-D objects in the pair of images. In one embodiment, the system recovers the 3-D point P for each point p that appears in both images. It estimates the 3-D orientation of the floor plane, and the image capture planes and their height from the floor. The system then identifies the collection B of points P that do not represent points on the floor and generates a projection C of B onto a plane parallel to the floor. It blurs the projection C and identifies peaks in the blurred image, then fits symmetric figures to the points in C around the identified peaks. The system projects the 3-D figures associated with the symmetric figures back onto the 2-D images. | 02-20-2014 |
20140049614 | IMAGE PROCESSING APPARATUS, IMAGING APPARATUS, AND IMAGE PROCESSING METHOD - An image processing apparatus includes: an image obtainment unit that obtains an image; an information obtainment unit that obtains embedment information that is to be embedded into a region within the image obtained by the image obtainment unit; a depth information obtainment unit that obtains depth information indicating a depth value of each pixel in the image obtained by the image obtainment unit; and an embedment region determination unit that determines, using the depth information, an embedment region into which the embedment information is to be embedded. | 02-20-2014 |
20140055564 | APPARATUS AND METHOD FOR PROCESSING DIGITAL SIGNAL - Disclosed are an apparatus and method for processing a digital signal including a 3D image. The apparatus includes a receiver for receiving a first digital signal including a main image and a second digital signal including an additional image, the main image and the additional image being 3D images, and the additional image being an expanded image of the main image seamlessly connected to an edge region of the main image, an image processor for generating the main image and the additional image to output a 3D image by decoding the first and second digital signals, and a display unit for displaying the output 3D image. The display unit includes a first display unit for displaying the main image, and a transparent second display unit connected to an edge of the first display unit, the second display unit configured to display the additional image. | 02-27-2014 |
20140055565 | 3-DIMENSIONAL IMAGE ACQUISITION APPARATUS AND 3D IMAGE ACQUISITION METHOD FOR SIMULTANEOUSLY OBTAINING COLOR IMAGE AND DEPTH IMAGE - A 3-dimensional (3D) image acquisition apparatus capable of simultaneously obtaining a color image and a depth image in a single shooting operation is provided. The apparatus includes a light source for radiating illumination light having a predetermined wavelength onto an object; a lens unit having at least four object lenses; an image sensor including at least four sensing regions for individually receiving light focused by the object lenses and for generating images; and at least three optical shutters individually facing at least three of the at least four object lenses and for modulating incident light with predetermined gain waveforms. | 02-27-2014 |
20140055566 | GESTURE RECOGNITION SYSTEM AND METHOD - A gesture recognition system includes an EDOF lens, an image sensor, and a processing unit. The image sensor successively captures image frames through the EDOF lens. The processing unit is configured to perform gesture recognition according to at least one object image within a sharpness range in the image frames thereby eliminating the interference from background objects. | 02-27-2014 |
20140055567 | Video Infrared Retinal Image Scanner - A method of scanning a retinal image includes providing a light source, emitting radiation from the light source toward a beam splitter, focusing the radiation with a focusing lens on a retina, collecting radiation reflected by the retina with a camera, producing an image signal representative of a plurality of images of the retina based on the collected radiation, selecting one of the plurality of images of the retina for display from the image signal, displaying the selected image of the retina on a display, comparing the selected image of the retina to at least one of a plurality of images of retinas stored in the database that matches the selected image of the retina, and displaying the matching image of the retina on the display along with the selected image of the retina. | 02-27-2014 |
20140055568 | ANALYSIS APPARATUS FOR CONTACTLESS ANALYSIS OF THE SHAPE OF A TRANSPARENT BODY, AND METHOD FOR CARRYING OUT THE CONTACTLESS ANALYSIS - The invention relates to an analysis apparatus for the contactless analysis of the shape of a transparent body, in particular of a substantially spherical active substance bead, having at least one support for the body and at least one image recording apparatus, wherein the support has a test image, in particular a test grid, and at least one detection means is provided in order to detect, using the detection means, the three-dimensional shape and/or contour of the body and/or the test image which is modulated by the optical properties of the body, in particular the test grid. The invention also relates to a method for the contactless analysis of the shape of the transparent body. | 02-27-2014 |
20140063189 | System and Method for Refining Coordinate-Based Three-Dimensional Images Obtained from a Three-Dimensional Measurement System - A system uses range and Doppler velocity measurements from a lidar system and images from a video system to estimate a six degree-of-freedom trajectory of a target and generate a three-dimensional image of the target. The system may refine the three-dimensional image by reducing the stochastic components in the transformation parameters between video frame times. | 03-06-2014 |
20140063190 | CAMERA LENS - A camera lens is formed with a casing and a lens stack having at least one lens element, the lens stack arranged within the casing. A spring arrangement is pre-compressed to exert a clamping force between the casing and the lens stack basically in a direction of an optical axis of the at least one lens element. | 03-06-2014 |
20140063191 | VIRTUAL ACCESS CONTROL - Virtual access control may include detecting entry of a person into a virtual controlled zone, and counting and/or identifying people including the person entering into the virtual controlled zone. Virtual access control may further include determining an authorization of the person to continue through the virtual controlled zone based on a facial identification of the person, and alerting the person to stop, exit from, or continue through the virtual controlled zone based on the determined authorization. An alarm may be generated if the person violates directions provided by the alert. | 03-06-2014 |
20140063192 | THREE-DIMENSIONAL SHAPE MEASURING APPARATUS, THREE-DIMENSIONAL SHAPE MEASURING METHOD, PROGRAM, AND STORAGE MEDIUM - An information processing apparatus includes a projection unit configured to project a projection pattern onto an object, an imaging unit configured to capture an image of the object on which the projection pattern is projected, and a derivation unit configured to derive a three-dimensional shape of the object based on the image captured by the imaging unit. The projection pattern projected on the object by the projection unit includes a first pattern including a continuous luminance variation repetitively arranged at certain distances in a predetermined direction, and a second pattern having information for identifying the position of the measurement pattern in the captured image in an area between peaks in the measurement pattern. | 03-06-2014 |
20140063193 | Natural 3D Motion For Film And Video - “Natural 3D Motion” is created by the use of moving the camera in a lateral direction with foreground and background elements, at a consistent speed, throughout a scene, in such a way that it allows the viewer time to perceive three dimensional depth and experience immersion in the scene without the use of 3D glasses, 3D Projectors or 3D Video Displays. Its beneficial effects, depending upon the nature of the video/film/movie, include enhancing a sense of calm, especially when the scenes comprise peaceful images beyond the traditional film or movie experience. Additionally, the presentation of “Natural 3D Motion” film/movies can be viewed without the need for the viewer to sit through an entire film/movie as the experience of three dimensional depth can capture the viewer's imagination and provide a calming effect while viewing even a portion of a full “Natural 3D Motion” film or movie. | 03-06-2014 |
20140063194 | SUB-DIFFRACTION LIMIT IMAGE RESOLUTION IN THREE DIMENSIONS - The present invention generally relates to sub-diffraction limit image resolution and other imaging techniques, including imaging in three dimensions. In one aspect, the invention is directed to determining and/or imaging light from two or more entities separated by a distance less than the diffraction limit of the incident light. In some cases, the position of the entities can be determined in all three spatial dimensions (i.e., in the x, y, and z directions), and in certain cases, the positions in all three dimensions can be determined to an accuracy of less than about 1000 nm. In some cases, the z positions may be determined using one of a variety of techniques that uses intensity information or focal information (e.g., a lack of focus) to determine the z position. Non-limiting examples of such techniques include astigmatism imaging, off-focus imaging, or multi-focal-plane imaging. | 03-06-2014 |
20140063195 | STEREOSCOPIC MOVING PICTURE GENERATING APPARATUS AND STEREOSCOPIC MOVING PICTURE GENERATING METHOD - A stereoscopic picture generating apparatus comprising: a storage unit to store a first image containing partial images and a second image containing partial images corresponding respectively to the partial images contained in the first image; and an arithmetic unit to extract a first position defined as an existing position of a first partial image contained in the first image and a second position defined as an existing position of a second partial image contained in the first image, to calculate a first differential quantity defined as a difference between the first position and the second position, to calculate a third position defined as a new existing position of a third partial image contained in the second image that corresponds to the first partial image based on the first differential quantity, and to generate a third image based on the third position of the third partial image. | 03-06-2014 |
20140063196 | COMPREHENSIVE AND INTELLIGENT SYSTEM FOR MANAGING TRAFFIC AND EMERGENCY SERVICES - A comprehensive and intelligent system for managing traffic and emergency services, which includes a plurality of 3D cameras positioned throughout a city, specifically at traffic intersections, which are capable of determining traffic conditions throughout the city's roads and transmitting it to emergency service providers so that better emergency response routes may be planned, and live video from an emergency scene may be transmitted to the emergency service providers, a plurality of 3D cameras positioned on vehicles driving on the city's roads, which are operative to alert drivers to an imminent accident so that drivers may respond accordingly and avoid the accident, and a plurality of location determination means positioned on or near traffic signals and vehicles, which are used to determine the relative speed and position of vehicles from traffic signals, and inform drivers as to whether or not they should proceed through an intersection given the time until a traffic signal turns red and the position and speed of a vehicle. | 03-06-2014 |
20140063197 | IMAGE GENERATION DEVICE - An image generation device that enhances visual recognizability between a recognized 3D (three-dimensional) object and a substitute image synthesized in an area of a 3D object image which is a photographed image of the 3D object includes an image synthesis section that recognizes a 3D object present in the peripheral area of a vehicle and outputs 3D object attribute information indicative of attributes of this 3D object, determines a 3D object image area as an image area of the 3D object in the photographed image based on position information included in the 3D object attribute information, outputs at least one of a substitute image of the 3D object applied with a color based on the color information and a substitute image of the 3D object under a specified directional posture of the 3D object specified based on type information and directional posture information and generates a bird's-eye view image with a substitution image in which the substitution image outputted from a substitution image output section is synthesized at the position of the 3D object image area. | 03-06-2014 |
20140071240 | FREE SPACE DETECTION SYSTEM AND METHOD FOR A VEHICLE USING STEREO VISION - In free space detection system and method for a vehicle, left and right images captured from the vehicle environment in a direction of travel of the vehicle are transformed to obtain a depth image with disparity values. The depth image is transformed to obtain a road function and an occupancy grid map. A cost estimation value corresponding to each disparity value on the same image column in a detecting area of the occupancy grid map is estimated using a cost function and the road function such that initial boundary disparity values, each defined by the disparity value on the same image column for which the cost estimation value is maximum, are optimized to obtain optimized boundary disparity values by which a free space is determined. | 03-13-2014 |
20140071241 | Devices and Methods for Augmented Reality Applications - In a particular embodiment, a method includes evaluating, at a mobile device, a first area of pixels to generate a first result. The method further includes evaluating, at the mobile device, a second area of pixels to generate a second result. Based on comparing a threshold with a difference between the first result and the second result, a determination is made that the second area of pixels corresponds to a background portion of a scene or a foreground portion of the scene. | 03-13-2014 |
20140071242 | REAL-TIME PEOPLE COUNTING SYSTEM USING LAYER SCANNING METHOD - Disclosed herein is a method for counting the number of the targets using the layer scanning method. The steps of this method include constructing a background frame, filtering the noise of the foreground frame and classifying the targets, and screening the area of targets based on layer scanning to calculate the number of targets by determining the highest positions of the respective targets. In addition, the dynamic numbers of targets are calculated using an algorithm. Accordingly, the present invention is beneficial in automatically, effectively and precisely calculating the number of the targets entering or exiting a specific area, achieving the flow control for targets and reducing artificial error upon calculation. | 03-13-2014 |
20140071243 | Shape Measuring Device, Program Installed Into This Device, And Recording Medium Storing This Program - Provided is a shape measuring device capable of making a user feel that three-dimensional shape data is easily acquirable. Right and left light projecting sections are individually turned on to automatically adjust exposure time or brightness of illumination so that an image displayed in a display section has the optimum brightness. Further, scanning is performed with a plurality of striped patterns using the light projecting section, and in synchronization therewith, a plurality of striped images are acquired by a camera. Subsequently, a 2D texture image of an object is acquired by using ring illumination or all-white uniform illumination of the light projecting section. A PC performs image processing and an analysis on the acquired image data with a measurement algorithm, to generate stereoscopic shape data. Further, a 3D texture image generated by mapping the two-dimensional texture image onto the stereoscopic shape data is displayed in a display section (monitor). | 03-13-2014 |
20140071244 | SOLID-STATE IMAGE PICKUP DEVICE AND CAMERA SYSTEM - There are provided a solid-state image pickup device and a camera system that include no useless pixel arrangement and are capable of suppressing decrease in resolution caused by adopting stereo function. A pixel array section including a plurality of pixels arranged in an array is included. Each of the plurality of pixels has a photoelectric conversion function. Each of the plurality of pixels in the pixel array section includes a first pixel section and a second pixel section. The first pixel section includes at least a light receiving function. The second pixel section includes at least a function to detect electric charge that has been subjected to photoelectric conversion. The first and second pixel sections are formed in a laminated state. Further, the first pixel section is formed to have an arrangement in a state shifted in a direction different from first and second directions that are used as references. The second direction is orthogonal to the first direction. The second pixel section is formed in a square arrangement along the first direction and the second direction orthogonal to the first direction. | 03-13-2014 |
20140078257 | Method for visualization of three-dimensional objects on a person - A method is provided for visualization of three-dimensional objects on a person. Electro-magnetic radiation is emitted toward a person. Electro-magnetic radiation reflected by the person is received and processed to obtain a three-dimensional representation in a form of an elevation profile of the person. If the elevation profile has a detected surface with a reflection coefficient that is different from a reflection coefficient of at least one neighboring surface and the detected surface has a stepped transition toward at least one neighboring surface, then the object characterized by the detected surface is projected onto a photographic representation of the person in the place where the detected surface has been detected. | 03-20-2014 |
20140078258 | REAL-TIME MONOCULAR VISUAL ODOMETRY - Systems and methods are disclosed for multithreaded visual odometry using images acquired with a single camera on-board a vehicle; using 2D-3D correspondences for continuous pose estimation; and combining the pose estimation with 2D-2D epipolar search to replenish 3D points. | 03-20-2014 |
20140078259 | LIGHT FIELD IMAGE CAPTURE DEVICE AND IMAGE SENSOR - An image sensor | 03-20-2014 |
20140078260 | METHOD FOR GENERATING AN ARRAY OF 3-D POINTS - A method that estimates the coordinates of the calibration points in the projector image plane using local homographies obtained from the imaging camera. First, a dense set of correspondences between projector and camera pixels is found by projecting onto the calibration object an identical pattern sequence as the one later projected when scanning the target, reusing most of the software components written for the scanning application. Second, the set of correspondences is used to compute a group of local homographies that allow projection back of any of the points in the calibration object onto the projector image plane with sub-pixel precision. In the end, the data projector is then calibrated as a normal camera. | 03-20-2014 |
20140078261 | METHOD AND APPARATUS FOR QUANTITATIVE 3-D IMAGING - Described is a method and apparatus for obtaining additional information from an object and a method for surface imaging and three-dimensional imaging. Single-lens, single-aperture, single-sensor systems and stereo optic systems may be modified in order to successfully generate surface maps of objects or three-dimensional representations of target objects. A variety of aspects of the present invention provide examples of the use of an addressable pattern in order to overcome mismatching common to standard defocusing techniques. | 03-20-2014 |
20140085422 | IMAGE PROCESSING METHOD AND DEVICE - Methods and devices ( | 03-27-2014 |
20140085423 | STEREO CAMERA APPARATUS FOR A MOBILE DEVICE, AND IMAGING METHOD THEREFOR - A method of shooting an OSMU stereoscopic image using a mobile stereoscopic camera is provided. The method includes an operation of placing the mobile stereoscopic camera on a predetermined location of a mobile shooting place, an operation of determining a focusing distance in a wide-angle shooting condition when a far point of the mobile stereoscopic camera is infinity, an operation of calculating a distance between two cameras of the mobile stereoscopic camera by setting the far point of the mobile stereoscopic camera to be infinity at a location of the mobile stereoscopic camera, an operation of adjusting the distance between the cameras of the mobile stereoscopic camera to be the calculated distance between the cameras of the mobile stereoscopic camera, and an operation of shooting the OSMU stereoscopic image using the mobile stereoscopic camera of which the distance between the cameras is adjusted. | 03-27-2014 |
20140085424 | SCANNER - A method for 3D imaging using phase shifted structured light, in which projection means project multiple phase shifted mask images of a transmission mask on to an object scene, the mask images being captured by a camera to form a camera image, depth information being derived from the camera image by measurement of the camera image and computation from the measurement for each of the phase shifted fringe patterns and a 3D scanner carrying out the method. | 03-27-2014 |
20140085425 | IMAGE PROCESSOR, IMAGE CAPTURE DEVICE, IMAGE PROCESSING METHOD AND PROGRAM - This image processor | 03-27-2014 |
20140085426 | Structured light systems with static spatial light modulators - Structured light systems are based on temporally modulated light sources and static spatial light modulators. | 03-27-2014 |
20140085427 | Method and Device for 3-D Display Based on Random Constructive Interference - The present invention relates to a method and an apparatus for 3-D display based on random constructive interference. It produces a number of discrete secondary light sources by using an amplitude-phase-modulator-array, which helps to create 3-D images by means of constructive interference. Next it employs a random-secondary-light-source-generator-array to shift the position of each secondary light source to a random place, eliminating multiple images due to high order diffraction. It could be constructed with low resolution liquid crystal screens to realize large size real-time color 3-D display, which could widely be applied to 3-D computer or TV screens, 3-D human-machine interaction, machine vision, and so on. | 03-27-2014 |
20140085428 | Distance Measurement by Means of a Camera Sensor - The invention relates to a distance-determining method by means of a camera sensor, wherein a distance between the camera sensor and a target object is determined on the basis of camera information, which method is defined by the fact that the camera information comprises a spatial extent of the region covered by the target object on a light sensor in the camera sensor. | 03-27-2014 |
20140085429 | SENSOR POSITIONING FOR 3D SCANNING - A method for obtaining a refined pose for a 3D sensor for online 3D modeling of a surface geometry of an object, the pose encompassing six degrees of freedom (DOF) including three translation parameters and three orientation parameters, the method comprising: providing the 3D sensor, the 3D sensor being adapted to capture 3D point measurements of the surface of the object from a viewpoint; providing a geometry model of at least part of the surface; observing a portion of the surface of the object with the 3D sensor; measuring an initialization pose for the 3D sensor by at least one of positioning device pose measurement, predicted pose tracking and target observation; finding a best fit arrangement of the 3D point measurements in the geometry model using the initialization pose; generating the refined pose for the 3D sensor using the best fit arrangement. | 03-27-2014 |
20140092217 | SYSTEM FOR CORRECTING RPC CAMERA MODEL POINTING ERRORS USING 2 SETS OF STEREO IMAGE PAIRS AND PROBABILISTIC 3-DIMENSIONAL MODELS - A modeling engine (ME) for generating or “bootstrapping” a three dimensional edge model (3DEM) from two stereo pairs of images and correcting engine (CE) for correcting a camera model associated with an image are provided. The ME back projects edge detected images into 3DEMs using camera models associated with the stereo images, updates and stereo updates voxel probabilities in the 3DEMs with back projections of the edge detected images, and then merges the 3DEMs together to create a “sparse” 3DEM. The CE calculates a registration solution mapping an edge detected image of an image to an expected edge image. The expected edge image is a projection of a 3DEM using a camera model associated with the image. The CE corrects the camera model by applying the registration solution to the camera model based on determining whether the registration solution is a high confidence registration solution. | 04-03-2014 |
20140092218 | APPARATUS AND METHOD FOR STEREOSCOPIC VIDEO WITH MOTION SENSORS - An apparatus and method for a video capture device for recording 3-Dimensional (3D) stereoscopic video with motion sensors are provided. The apparatus includes a camera unit having one lens for capturing video, a video encoder/decoder for encoding the captured video, a motion sensor for capturing motion data of the video capture device corresponding to the captured video, and a controller for controlling the video encoder/decoder and motion sensor to encode the captured video with the captured motion data. | 04-03-2014 |
20140092219 | DEVICE FOR ACQUIRING STEREOSCOPIC IMAGES - The device is based on the Wheatstone principle. Mirrors are adjusted angularly so that the right and left stereoscopic images of a scene are formed on a sensor in such a manner as to leave free an area between these two images. A slot is formed between the internal mirrors to let pass, on the one hand, a structured or pulsed light and, on the other, this structured or pulsed light once it has been reflected off objects of the scene. The device further comprises an optic associated with the slot and the lens system to form an image on said area from said reflected structured or pulsed light. | 04-03-2014 |
20140098191 | ANNOTATION METHOD AND APPARATUS - The present invention relates to an annotating method comprising the steps of: | 04-10-2014 |
20140098192 | IMAGING OPTICAL SYSTEM AND 3D IMAGE ACQUISITION APPARATUS INCLUDING THE IMAGING OPTICAL SYSTEM - An imaging optical system includes an objective lens configured to focus light having a first wavelength band and light having a second wavelength band, an optical shutter module configured to reflect the light having the first wavelength band, which is focused by the objective lens, without modulating the light having the first wavelength band and to modulate the light having the second wavelength band, which is focused by the objective lens, and reflect the modulated light having the second wavelength band, and an image sensor configured to respectively sense the light having the first wavelength band and the modulated light having the second wavelength band, which are reflected by the optical shutter module, and to output a first image signal with respect to the light having the first wavelength band and a second image signal with respect to the modulated light having the second wavelength band. | 04-10-2014 |
20140104385 | METHOD AND APPARATUS FOR DETERMINING INFORMATION ASSOCIATED WITH A FOOD PRODUCT - Certain aspects of an apparatus and method for determining information associated with a food product may include a server that is communicably coupled to a computing device. The server may deconstruct a three dimensional (3-D) image of the food product to identify one or more ingredients in the food product. Further, the server may compare the deconstructed 3-D image with a database of pre-stored images of food products to determine a type of the one or more ingredients in the food product. The server may determine nutritional information associated with the food product based on the determined type of the one or more ingredients. | 04-17-2014 |
20140104386 | OBSERVATION SUPPORT DEVICE, OBSERVATION SUPPORT METHOD AND COMPUTER PROGRAM PRODUCT - According to an embodiment, an observation support device includes an acquiring unit, a determining unit, and a generating unit. The acquiring unit is configured to acquire observation information capable of identifying an observing direction vector indicating an observing direction of an observing unit that observes an object for which a three-dimensional model is to be generated. The determining unit is configured to determine whether or not observation of the object in a direction indicated by an observed direction vector in which at least part of the object can be observed is completed based on a degree of coincidence of the observing direction vector and the observed direction vector. The generating unit is configured to generate completion information indicating whether or not observation of the object in the direction indicated by the observed direction vector is completed. | 04-17-2014 |
20140104387 | HANDHELD PORTABLE OPTICAL SCANNER AND METHOD OF USING - A system and methods for real-time or near-real time processing and post-processing of RGB-D image data using a handheld portable device and using the results for a variety of applications. The disclosure is based on the combination of off-the-shelf equipment (e.g. an RGB-D camera and a smartphone/tablet computer) in a self-contained unit capable of performing complex spatial reasoning tasks using highly optimized computer vision algorithms. New applications are disclosed using the instantaneous results obtained and the wireless connectivity of the host device for remote collaboration. One method includes steps of projecting a dot pattern from a light source onto a plurality of points on a scene, measuring distances to the points, and digitally reconstructing an image or images of the scene, such as a 3D view of the scene. A plurality of images may also be stitched together to re-position an orientation of the view of the scene. | 04-17-2014 |
20140104388 | Optical Lens Module Assembly With Auto Focus and 3-D Imaging Function - An optical lens module assembly is disclosed which can perform “Auto Focus” and produce “3-Dimensional (3-D) images”, “2-Dimensional (2-D) video movies and still photographs” having all the objects in the area of its view (field of view) to be fully focused. Due to the fact that all the objects in the field of view including the background are fully focused with high image quality, these video movies and still photographs can easily be converted to high quality 3-Dimensional (3-D) video movies and 3-D still photographs. The conversion may be done by using software or hardware or a combination of both software and hardware. | 04-17-2014 |
20140104389 | Methods and Camera Systems for Recording and Creation of 3-Dimension (3-D) Capable Videos and 3-Dimension (3-D) Still Photos - A camera system is disclosed which can produce 2-Dimensional (2-D) video movies and still photographs having all the objects in the area of its view to be fully focused. Due to the fact that all the objects including the background are fully focused with high image quality, these video movies and still photographs can easily be converted to high quality 3-Dimensional (3-D) video movies and 3-D still photographs. The conversion may be done by using software or hardware or a combination of both software and hardware. | 04-17-2014 |
20140104390 | SINGLE-LENS, SINGLE-SENSOR 3-D IMAGING DEVICE WITH A CENTRAL APERTURE FOR OBTAINING CAMERA POSITION - A device and method for three-dimensional (3-D) imaging using a defocusing technique is disclosed. The device comprises a lens, a central aperture located along an optical axis for projecting an entire image of a target object, at least one defocusing aperture located off of the optical axis, a sensor operable for capturing electromagnetic radiation transmitted from an object through the lens and the central aperture and the at least one defocusing aperture, and a processor communicatively connected with the sensor for processing the sensor information and producing a 3-D image of the object. Different optical filters can be used for the central aperture and the defocusing apertures respectively, whereby a background image produced by the central aperture can be easily distinguished from defocused images produced by the defocusing apertures. | 04-17-2014 |
20140104391 | DEPTH SENSOR, IMAGE CAPTURE METHOD, AND IMAGE PROCESSING SYSTEM USING DEPTH SENSOR - An image capture method performed by a depth sensor includes: emitting a first source signal having a first amplitude towards a scene, and thereafter emitting a second source signal having a second amplitude different from the first amplitude towards the scene, capturing a first image in response to the first source signal and capturing a second image in response to the second source signal, and interpolating the first and second images to generate a final image. | 04-17-2014 |
20140104392 | GENERATING IMAGE INFORMATION - The present invention relates to a method for generating an image information. According to the method, a light field information of an environment ( | 04-17-2014 |
20140104393 | CALIBRATION DEVICE AND CALIBRATION METHOD - Provided is a calibration device that can calculate the mounting state of a stereo camera without the placing of a physical marker. In this device, an area setting processor ( | 04-17-2014 |
20140111615 | Automated Optical Dimensioning and Imaging - Disclosed are various embodiments for automatically generating media and/or data associated with an item. An item imaging apparatus may apply an imaging sequence based on an item being imaged to gather media and/or data associated with the item. The media and/or data associated with the item may be used in the generation of additional data associated with the item. The media and/or data may be in a profile of the item in an electronic marketplace. | 04-24-2014 |
20140111616 | Structured light 3D scanner with refractive non-absorbing pattern forming element - A structured light 3D scanner consisting of a pattern projector, a digital imaging camera, and a controlling and processing circuitry is disclosed. Several novel variants of the pattern projector are claimed. One embodiment comprises one or more transparent refractive pattern forming elements, and two or more independently switchable light sources. Another embodiment comprises an array of light emitting diodes (LEDs), grown on the same semiconductor substrate, and an optical lens projecting the image formed by the said diode array. | 04-24-2014 |
20140111617 | OPTICAL SOURCE DRIVER CIRCUIT FOR DEPTH IMAGER - A depth imager such as a time of flight camera comprises a driver circuit and an optical source. The driver circuit comprises a frequency control module and a controllable oscillator having a control input coupled to an output of the frequency control module. An output of the controllable oscillator is coupled to an input of the optical source, and a driver signal provided by the driver circuit to the optical source utilizing the controllable oscillator varies in frequency under control of the frequency control module in accordance with a designated type of frequency variation, such as a ramped or stepped variation. The driver circuit may additionally or alternatively comprise an amplitude control module, such that a driver signal provided to the optical source varies in amplitude under control of the amplitude control module in accordance with a designated type of amplitude variation. | 04-24-2014 |
20140111618 | Three-Dimensional Measuring Device and Three-Dimensional Measuring System - A three-dimensional measuring device includes a light source unit, a light projecting optical unit, a light receiving optical unit, a light receiving element, a scanning unit, an angle detector, an illumination light source unit, an image pickup unit and a control arithmetic unit. The control arithmetic unit comprises a distance data processing unit for controlling the scanning unit, for calculating a distance to the object to be measured based on a received light signal, and for calculating a three-dimensional data of the object based on a calculated distance and a detection signal from the angle detector, and an image data processing unit for acquiring an illuminated image and an unilluminated image, for acquiring a difference image based on both images, for detecting a retroreflective target based on the difference image and a detected intensity of a reflected light from the difference image, and for calculating a position of the target. | 04-24-2014 |
20140111619 | DEVICE AND METHOD FOR ACQUIRING IMAGE - An image acquisition device is provided. The image acquisition device includes: a pattern generator that generates a plurality of incident light patterns using a plurality of light sources and that projects the generated plurality of incident light patterns to a target object; a pattern acquisition unit that acquires a pattern image that is formed in the target object by the plurality of incident light patterns; and an operation unit that generates a three-dimensional image of the target object using the pattern image. | 04-24-2014 |
20140111620 | IMAGING OPTICAL SYSTEM FOR 3D IMAGE ACQUISITION APPARATUS, AND 3D IMAGE ACQUISITION APPARATUS INCLUDING THE IMAGING OPTICAL SYSTEM - An imaging optical system and a three-dimensional (3D) image acquisition apparatus which includes the imaging optical system are provided. The imaging optical system includes an object lens configured to transmit light; first and second image sensors having different sizes from each other; a beamsplitter on which the light transmitted by the object lens is incident, the beamsplitter being configured to split the light incident thereon into light of a first wavelength band and light of a second wavelength band, and to direct the light of the first wavelength band to the first image sensor and the light of the second wavelength band to the second image sensor; and at least one optical element, disposed between the beamsplitter and the second image sensor, configured to reduce an image that is incident on the second image sensor, the optical element including at least one of a Fresnel lens and a diffractive optical element. | 04-24-2014 |
20140111621 | STEREO-VISION SYSTEM - This invention concerns a measuring system for the calculation of dimensions, including area. | 04-24-2014 |
20140118495 | Terrain-Based Virtual Camera Tilting, And Applications Thereof - Embodiments alter the swoop trajectory depending on the terrain within the view of the virtual camera. To swoop into a target, a virtual camera may be positioned at an angle relative to the upward normal vector from the target. That angle may be referred to as a tilt angle. According to embodiments, the tilt angle may increase more quickly in areas of high terrain variance (e.g., mountains or cities with tall buildings) than in areas with less terrain variance (e.g., flat plains). To determine the level of terrain variance in an area, embodiments may weigh terrain data having higher detail more heavily than terrain data having less detail. | 05-01-2014 |
20140118496 | Pre-Calculation of Sine Waves for Pixel Values - A system and method for determining positions in three-dimensional space are described. The system includes a controller, a phase image module, a presentation module and a phase determination module. The controller receives projector geometry parameters. A phase image module determines a plurality of sinusoidal images where a constant phase represents a flat plane in a three-dimensional space based on the projector geometry parameters. A presentation module projects the plurality of sinusoidal images to be captured by a camera. The phase determination module determines a phase value at a camera pixel. The phase determination module determines an intersection between the flat plane of the phase value and the camera pixel to identify a ray-plane intersection. | 05-01-2014 |
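Entries such as 20140118496 reduce depth recovery to intersecting a per-pixel camera ray with the flat plane implied by a constant phase value. The following is a minimal illustrative sketch of that ray-plane intersection step; the function name and parameters are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the point where a camera ray meets a plane, or None if no hit."""
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # ray is parallel to the plane
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:
        return None  # plane lies behind the camera
    return origin + t * direction
```

For a camera at the origin, a ray through a pixel pointed along +z intersected with the plane z = 2 returns the 3D point on that plane.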
20140118497 | IMAGE SENSING APPARATUS FOR SENSING DEPTH IMAGE - Provided is an image sensing apparatus for obtaining a depth image. The image sensing apparatus may improve accuracy of the depth image by placing transmission gates and floating diffusion nodes of sub-pixels in a cross arrangement per pixel or row. The image sensing apparatus may represent the depth pixels of the depth image by controlling the binning of sensing signals output from the sub-pixels to place the depth pixels in a cross arrangement. | 05-01-2014 |
20140118498 | WEARABLE DEVICE, DANGER WARNING SYSTEM AND METHOD - A wearable device detects movement data of a user. A camera of the wearable device captures a three-dimensional (3D) image of an area surrounding the user. A determination is made as to whether a dangerous object appears in the area by analyzing the 3D image. If a dangerous object appears in the area, the wearable device determines a distance between the user and the dangerous object, and triggers an alarm device to send out an alarm, on condition that the distance between the user and the dangerous object falls within a preset alarm range and the movement direction of the user is toward the dangerous object. | 05-01-2014 |
20140118499 | MICROSCOPE SYSTEM - In order to properly perform smoothing depending on depth-of-field so as to provide a 3D image with improved display image quality, a microscope system comprises: a microscope device | 05-01-2014 |
20140118500 | SYSTEM AND METHOD FOR FINDING CORRESPONDENCE BETWEEN CAMERAS IN A THREE-DIMENSIONAL VISION SYSTEM - This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene involving microscopic and near microscopic objects under manufacture moved by a manipulator, so as to acquire contemporaneous images of a runtime object and determine the pose of the object for the purpose of guiding manipulator motion. At least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly can be combined with the searched 2D object features in images of other camera assemblies, based on their trained object features to generate a set of 3D features and thereby determine a 3D pose of the object. | 05-01-2014 |
20140118501 | CALIBRATION SYSTEM FOR STEREO CAMERA AND CALIBRATION APPARATUS FOR CALIBRATING STEREO IMAGE - Disclosed is a calibration system for stereo cameras. The calibration system includes a rig control module configured to, when a camera calibration parameter is input, control an auto rig according to the camera calibration parameter to perform physical calibration for a camera, a stereo image calibration apparatus configured to calibrate a stereo image to acquire the camera calibration parameter, and output the acquired camera calibration parameter, and a camera control module configured to output the camera calibration parameter, which is input from the stereo image calibration apparatus, to the rig control module, or screen-output the camera calibration parameter. Therefore, physical calibration and image processing calibration for a camera are performed in association with each other. | 05-01-2014 |
20140118502 | SYSTEM AND METHOD FOR EXTRACTING A 3D SHAPE OF A HOT METAL SURFACE - A system for extracting a 3D shape of a hot metal surface, includes: a light source unit emitting four types of light, i.e. two types of light each having different wave bandwidths that are different from a wave bandwidth emitted by a hot target object, and two other types of light having the same wave bandwidths as the above-described two types of light, respectively, so that the types of light having the same bandwidths are polarized in different directions so as not to interfere with one another; an image acquisition unit simultaneously emitting the four types of light, polarized from the light source, onto the target object so as to acquire 2D images of the target object; and an image processing unit using a photometric stereo technique so as to combine the 2D images acquired by the image acquisition unit and extract a 3D shape of the surface of the target object. | 05-01-2014 |
20140125766 | ACCUMULATING CHARGE FROM MULTIPLE IMAGING EXPOSURE PERIODS - Embodiments related to accumulating charge during multiple exposure periods in a time-of-flight depth camera are disclosed. For example, one embodiment provides a method including accumulating a first charge on a photodetector during a first exposure period for a first light pulse, transferring the first charge to a charge storage mechanism, accumulating a second charge during a second exposure period for the first light pulse, and transferring the second charge to the charge storage mechanism. The method further includes accumulating an additional first charge during a first exposure period for a second light pulse, adding the additional first charge to the first charge to form an updated first charge, accumulating an additional second charge on the photodetector for a second exposure period for the second light pulse, and adding the additional second charge to the second charge to form an updated second charge. | 05-08-2014 |
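The two-bin accumulation scheme in 20140125766 can be pictured with a toy model: charges from repeated light pulses are summed into two storage bins, and the bin ratio yields a range estimate via the textbook pulsed time-of-flight formula. Everything here (names, the range formula) is an illustrative assumption, not the patent's circuit:

```python
def accumulate_and_range(pulses, pulse_width_s, c=3.0e8):
    """Accumulate per-pulse charges into two storage bins, then estimate range.

    pulses: iterable of (q1, q2) hypothetical charges collected during the
    first and second exposure windows of each light pulse. The range formula
    is the generic two-window pulsed time-of-flight estimate.
    """
    bin1 = bin2 = 0.0
    for q1, q2 in pulses:
        bin1 += q1  # first exposure window adds to the first bin
        bin2 += q2  # second exposure window adds to the second bin
    total = bin1 + bin2
    if total == 0:
        return bin1, bin2, None  # no light collected, range undefined
    distance_m = 0.5 * c * pulse_width_s * (bin2 / total)
    return bin1, bin2, distance_m
```

Accumulating over many pulses is what lifts the signal above read noise; the ratio, not the absolute charge, carries the range.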
20140125767 | CAPTURING AND ALIGNING THREE-DIMENSIONAL SCENES - Systems and methods for building a three-dimensional composite scene are disclosed. Certain embodiments of the systems and methods may include the use of a three-dimensional capture device that captures a plurality of three-dimensional images of an environment. Some embodiments may further include elements concerning aligning and/or mapping the captured images. Various embodiments may further include elements concerning reconstructing the environment from which the images were captured. The methods disclosed herein may be performed by a program embodied on a non-transitory computer-readable storage medium when the program is executed by a processor. | 05-08-2014 |
20140125768 | CAPTURING AND ALIGNING THREE-DIMENSIONAL SCENES - Systems and methods for building a three-dimensional composite scene are disclosed. Certain embodiments of the systems and methods may include the use of a three-dimensional capture device that captures a plurality of three-dimensional images of an environment. Some embodiments may further include elements concerning aligning and/or mapping the captured images. Various embodiments may further include elements concerning reconstructing the environment from which the images were captured. The methods disclosed herein may be performed by a program embodied on a non-transitory computer-readable storage medium when the program is executed by a processor. | 05-08-2014 |
20140125769 | CAPTURING AND ALIGNING THREE-DIMENSIONAL SCENES - Systems and methods for building a three-dimensional composite scene are disclosed. Certain embodiments of the systems and methods may include the use of a three-dimensional capture device that captures a plurality of three-dimensional images of an environment. Some embodiments may further include elements concerning aligning and/or mapping the captured images. Various embodiments may further include elements concerning reconstructing the environment from which the images were captured. The methods disclosed herein may be performed by a program embodied on a non-transitory computer-readable storage medium when the program is executed by a processor. | 05-08-2014 |
20140125770 | CAPTURING AND ALIGNING THREE-DIMENSIONAL SCENES - Systems and methods for building a three-dimensional composite scene are disclosed. Certain embodiments of the systems and methods may include the use of a three-dimensional capture device that captures a plurality of three-dimensional images of an environment. Some embodiments may further include elements concerning aligning and/or mapping the captured images. Various embodiments may further include elements concerning reconstructing the environment from which the images were captured. The methods disclosed herein may be performed by a program embodied on a non-transitory computer-readable storage medium when the program is executed by a processor. | 05-08-2014 |
20140132720 | DISPLAY DEVICE FOR PUNCHING OR PRESSING MACHINES - A display device for punching and/or pressing machines. The display device can be constructed to be modular and of compact size so as to be positioned as desired on new and existing machine designs. The display device is formed with displays, which are configurable to provide a plurality of qualitative and quantitative information. The display device can be configured to function with consoles of punching and pressing machines so as to be one or more of a supplemental device, an interfacing device, and an interactive device. | 05-15-2014 |
20140132721 | Structured Light Active Depth Sensing Systems Combining Multiple Images to Compensate for Differences in Reflectivity and/or Absorption - A receiver sensor captures a plurality of images, at two or more (different) exposure times, of a scene onto which a code mask is projected. The two or more of the plurality of images are combined by extracting decodable portions of the code mask from each image to generate a combined image. Alternatively, two receiver sensors, each at a different exposure time, are used to capture a plurality of images. The first and second images are then combined by extracting decodable portions of the code mask from each image to generate a combined image. Depth information for the scene may then be ascertained based on the combined image and using the code mask. | 05-15-2014 |
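Entry 20140132721 combines captures taken at several exposure times by keeping, per pixel, whichever exposure is decodable. A hedged sketch of one plausible per-pixel merge, with illustrative threshold values that are not from the patent:

```python
import numpy as np

def combine_exposures(images, lo=20, hi=235):
    """Merge structured-light captures taken at different exposure times.

    For each pixel, keep the first image in the list whose value falls inside
    the decodable intensity range [lo, hi]; pixels that are never decodable
    retain their value from the first image.
    """
    combined = images[0].astype(np.float64).copy()
    decided = (combined >= lo) & (combined <= hi)
    for img in images[1:]:
        img = img.astype(np.float64)
        usable = ~decided & (img >= lo) & (img <= hi)
        combined[usable] = img[usable]
        decided |= usable
    return combined
```

A pixel saturated in the short exposure but well-exposed in the long one is filled from the long exposure, so the code mask stays decodable across highly reflective and absorptive surfaces alike.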
20140132722 | DYNAMIC ADJUSTMENT OF LIGHT SOURCE POWER IN STRUCTURED LIGHT ACTIVE DEPTH SENSING SYSTEMS - A method and device is provided that compensates for different reflectivity/absorption coefficients of objects in a scene/object when performing active depth sensing using structured light. A receiver sensor captures an image of a scene onto which a code mask is projected. One or more parameters are ascertained from the captured image. Then a light source power for a projecting light source is dynamically adjusted according to the one or more parameters to improve decoding of the code mask in a subsequently captured image. Depth information for the scene may then be ascertained based on the captured image based on the code mask. In one example, the light source power is fixed at a particular illumination while an exposure time for the receiver sensor is adjusted. In another example, an exposure time for the receiver sensor is maintained/kept at a fixed value while the light source power is adjusted. | 05-15-2014 |
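The power adaptation in 20140132722 amounts to a feedback loop: measure how much of the captured code mask is saturated or underexposed, then scale the projector power accordingly. A minimal sketch with illustrative constants (the thresholds and scale factors are assumptions, not the patent's parameters):

```python
def adjust_light_power(pixels, power, sat_level=250, dark_level=10,
                       max_sat_frac=0.01, max_dark_frac=0.05):
    """One step of a simple projector-power control loop.

    Lower the power when too many pixels saturate; raise it when too many
    are underexposed; otherwise leave it unchanged.
    """
    n = len(pixels)
    sat_frac = sum(1 for p in pixels if p >= sat_level) / n
    dark_frac = sum(1 for p in pixels if p <= dark_level) / n
    if sat_frac > max_sat_frac:
        return power * 0.8   # back off to recover saturated code-mask regions
    if dark_frac > max_dark_frac:
        return power * 1.25  # boost to lift underexposed regions
    return power
```

Run once per captured frame, this converges toward an illumination level at which the projected code remains decodable across the scene's reflectivity range.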
20140132723 | METHODS FOR CALIBRATING A DIGITAL PHOTOGRAPHIC IMAGE OF UTILITY STRUCTURES - The present invention relates to methods of performing precise measurements of utility poles, utility-pole attachments and connected spans for the purpose of load analysis, safety analysis, and related tasks using low density sparse LiDAR data to pre-compute the matrices required to perform precise photogrammetric analysis of utility poles, utility-pole attachments and connected spans as imaged by a camera with a known spatial geometry relative to a LiDAR sensor. | 05-15-2014 |
20140132724 | 3D IMAGE DISPLAY APPARATUS INCLUDING ELECTROWETTING LENS ARRAY AND 3D IMAGE PICKUP APPARATUS INCLUDING ELECTROWETTING LENS ARRAY - Provided are an integral imaging type 3-dimensional (3D) image display apparatus and a 3D image pickup apparatus for increasing a depth by using an electrowetting lens array. The 3D image display apparatus includes a display panel and an electrowetting lens array having an electrically adjustable variable focal distance. The 3D image display apparatus displays a plurality of images having different depths on different focal planes, and thus increases the depth of a 3D image, by using one display panel. | 05-15-2014 |
20140132725 | ELECTRONIC DEVICE AND METHOD FOR DETERMINING DEPTH OF 3D OBJECT IMAGE IN A 3D ENVIRONMENT IMAGE - An electronic device for determining a depth of a 3D object image in a 3D environment image is provided. The electronic device includes a sensor and a processor. The sensor obtains a sensor measuring value. The processor receives the sensor measuring value and obtains a 3D object image with depth information and a 3D environment image with depth information, wherein the 3D environment image is separated into a plurality of environment image groups according to the depth information of the 3D environment image and there is a sequence among the plurality of environment image groups, selects one of the environment image groups, and determines a corresponding depth of the selected environment image group as a depth of the 3D object image in the 3D environment image according to the sequence and the sensor measuring value, to integrate the 3D object image into the 3D environment image. | 05-15-2014 |
20140132726 | IMAGE DISPLAY APPARATUS AND METHOD FOR OPERATING THE SAME - An image display apparatus and a method for operating the same are disclosed. The image display apparatus includes a camera configured to capture images; a display configured to display a three-dimensional (3D) content screen; and a controller configured to change a depth of at least one of a predetermined object in the 3D content screen or an on-screen display (OSD), if the OSD is included in the 3D content screen, wherein the display displays a 3D content screen including the object or OSD having the changed depth. Accordingly, it is possible to increase user convenience. | 05-15-2014 |
20140132727 | Shape from Motion for Unknown, Arbitrary Lighting and Reflectance - Systems and methods are disclosed for determining three-dimensional (3D) shape by capturing with a camera a plurality of images of an object in differential motion; deriving a general relation that relates spatial and temporal image derivatives to BRDF derivatives; exploiting rank deficiency to eliminate BRDF terms and recover depth or normals for directional lighting; and using the depth-normal-BRDF relation to recover depth or normals under unknown, arbitrary lighting. | 05-15-2014 |
20140132728 | METHODS AND SYSTEMS FOR MEASURING HUMAN INTERACTION - A method and system for measuring and reacting to human interaction with elements in a space, such as public places (retail stores, showrooms, etc.), is disclosed which may determine information about an interaction of a three dimensional object of interest within a three dimensional zone of interest with a point cloud 3D scanner having an image frame generator generating a point cloud 3D scanner frame comprising an array of depth coordinates for respective two dimensional coordinates of at least part of a surface of the object of interest, within the three dimensional zone of interest, comprising a three dimensional coverage zone encompassing a three dimensional engagement zone, and a computing device comparing respective frames to determine the time and location of a collision between the object of interest and a surface of at least one of the three dimensional coverage zone or the three dimensional engagement zone encompassed by the three dimensional coverage zone. | 05-15-2014 |
20140132729 | METHOD AND APPARATUS FOR CAMERA-BASED 3D FLAW TRACKING SYSTEM - Systems and methods facilitate long-range, accurate fiducial tracking using a mixture of pan-tilt and camera devices, enabling generalized 3D tracking of fiducials with automatic mapping of flaw data to component models within standard CAD packages. The invention is suitable for many tracking applications, particularly large inspection sites such as aircraft surfaces, which require vast coverage with a medium degree of accuracy. A method of surface inspection comprises the steps of moving a fiducial target over a surface under inspection, and tracking the fiducial as it is moved by capturing and storing the coordinates of the fiducial in a database for subsequent retrieval. Machine vision is used to acquire surface inspection data associated with the coordinates of the fiducial as it is moved. The inspection data is integrated into a CAD model, enabling the use of finite element analysis (FEA) to determine or predict flaw and material behavior over time. | 05-15-2014 |
20140132730 | Compact 3D Scanner with Fixed Pattern Projector and Dual Band Image Sensor - A structured light 3D scanner consisting of a specially designed fixed pattern projector and a camera with a specially designed image sensor is disclosed. The fixed pattern projector has a single fixed pattern mask of sine-like modulated transparency and three infrared LEDs behind the pattern mask; switching between the LEDs shifts the projected patterns. The image sensor has pixels sensitive in the visual band, for acquisition of a conventional image, and pixels sensitive in the infrared band, for the depth acquisition. | 05-15-2014 |
20140132731 | Apparatus And Method For Three-Dimensional Image Capture With Extended Depth Of Field - An optical system for capturing three-dimensional images of a three-dimensional object is provided. The optical system includes a projector for structured illumination of the object. The projector includes a light source, a grid mask positioned between the light source and the object for structured illumination of the object, and a first Wavefront Coding (WFC) element having a phase modulating mask positioned between the grid mask and the object to receive patterned light from the light source through the grid mask. The first WFC element is constructed and arranged such that a point spread function of the projector is substantially invariant over a wider range of depth of field of the grid mask than a point spread function of the projector without the first WFC element. | 05-15-2014 |
20140132732 | Apparatus and Method for Determining Disparity of Tetured Regions - A method and system to determine the disparity associated with one or more textured regions of a plurality of images is presented. The method comprises the steps of breaking up the texture into its color primitives, further segmenting the textured object into any number of objects comprising such primitives, and then calculating a disparity of these objects. The textured objects emerge in the disparity domain, after having their disparity calculated. Accordingly, the method is further comprised of defining one or more textured regions in a first of a plurality of images, determining a corresponding one or more textured regions in a second of the plurality of images, segmenting the textured regions into their color primitives, and calculating a disparity between the first and second of the plurality of images in accordance with the segmented color primitives. | 05-15-2014 |
20140139629 | ASSOCIATING AN OBJECT WITH A SUBJECT - A method comprises receiving one or more depth images from a depth camera, the depth images indicating a depth of a surface imaged by each pixel of the depth images. The method may further comprise identifying a human subject imaged by the depth images and recognizing within one or more of the depth images a beacon emitted from a control device. A position of the control device may be assessed in three dimensions, and the control device may be associated with the human subject based on a proximity of the control device to the human subject or other parameter of the control device with relation to the human subject. | 05-22-2014 |
20140139630 | METHODS AND APPARATUS FOR IMAGING WITHOUT RETRO-REFLECTION - A non-retro-reflective imaging system and methods in which a relay optic is configured to segment a source image into a plurality of slices and reimage each of the slices individually onto a rotated image plane such that a substantially in-focus reconstruction of the entire image is obtained, while substantially eliminating retro-reflection from the system. According to one example a non-retro-reflective imaging system includes a segmented relay optic configured to reimage a source image onto an image plane tilted with respect to an optical axis of the system, and further configured to slice the image volume into a plurality of image slices and spatially position the plurality of image slices such that a depth of focus of each image slice overlaps the tilted image plane. The system further includes an image sensor co-aligned with the tilted image plane and configured to produce a reconstructed image from the plurality of image slices. | 05-22-2014 |
20140139631 | DYNAMIC CONSERVATION OF IMAGING POWER - Representative implementations of devices and techniques provide adaptable settings for imaging devices and systems. Operating modes may be defined based on whether movement is detected within a predetermined area. One or more parameters of illumination or modulation may be dynamically adjusted based on the present operating mode. | 05-22-2014 |
20140139632 | DEPTH IMAGING METHOD AND APPARATUS WITH ADAPTIVE ILLUMINATION OF AN OBJECT OF INTEREST - A depth imager such as a time of flight camera or a structured light camera is configured to capture a first frame of a scene using illumination of a first type, to define a first area associated with an object of interest in the first frame, to identify a second area to be adaptively illuminated based on expected movement of the object of interest, to capture a second frame of the scene with adaptive illumination of the second area using illumination of a second type different than the first type, possibly with variation in at least one of output light amplitude and frequency, and to attempt to detect the object of interest in the second frame. The illumination of the first type may comprise substantially uniform illumination over a designated field of view, and the illumination of the second type may comprise illumination of substantially only the second area. | 05-22-2014 |
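The adaptive illumination in 20140139632 needs a second area covering where the object of interest is expected to move between frames. One simple way to form such a region, shown purely as an assumption-laden sketch, is to sweep the current bounding box along a predicted velocity and pad it:

```python
def predict_illumination_region(bbox, velocity, dt, margin, frame_size):
    """Shift and pad an object's bounding box to cover its expected motion.

    bbox = (x0, y0, x1, y1) in pixels; velocity = (vx, vy) in pixels/s;
    margin is extra padding in pixels. All parameter names are illustrative.
    """
    x0, y0, x1, y1 = bbox
    vx, vy = velocity
    dx, dy = vx * dt, vy * dt
    w, h = frame_size
    # Union of the current box and the box displaced by the predicted motion,
    # padded by the margin and clamped to the frame.
    x0 = min(x0, x0 + dx) - margin
    y0 = min(y0, y0 + dy) - margin
    x1 = max(x1, x1 + dx) + margin
    y1 = max(y1, y1 + dy) + margin
    return (max(0, int(x0)), max(0, int(y0)), min(w, int(x1)), min(h, int(y1)))
```

Restricting the second frame's illumination to this region is what allows the imager to spend its light budget only where the object can plausibly be.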
20140139633 | Method and System for Counting People Using Depth Sensor - A sensor system according to an embodiment of the invention may process depth data and visible light data for a more accurate detection. Depth data assists where visible light images are susceptible to false positives. Visible light images (or video) may similarly enhance conclusions drawn from depth data alone. Detections may be object-based or defined with the context of a target object. Depending on the target object, the types of detections may vary to include motion and behavior. Applications of the described sensor system include motion guided interfaces where users may interact with one or more systems through gestures. The sensor system described may also be applied to counting systems, surveillance systems, polling systems, retail store analytics, or the like. | 05-22-2014 |
20140139634 | CONFOCAL IMAGING USING ASTIGMATISM - The present disclosure provides computing device implemented methods, apparatuses, and computing device readable media for confocal imaging using astigmatism. Confocal imaging can include receiving an image of a portion of an object captured by a confocal imaging device having a particular astigmatic character, determining an image pattern associated with the image, and determining a distance between a focus plane of the confocal imaging device and the portion of the object based, at least in part, on information regarding the image pattern. Confocal imaging can also include receiving data representing an image pattern associated with an image of an object captured by a confocal imaging device having a particular astigmatic character and having an image sensor with a plurality of pixels, and determining a positional relationship between the object and a focus plane of the confocal imaging device based on a distribution of the diffraction pattern over a portion of the plurality of pixels. | 05-22-2014 |
20140139635 | REAL-TIME MONOCULAR STRUCTURE FROM MOTION - Systems and methods are disclosed for multithreaded navigation assistance using images acquired with a single camera on-board a vehicle; using 2D-3D correspondences for continuous pose estimation; and combining the pose estimation with 2D-2D epipolar search to replenish 3D points. | 05-22-2014 |
20140139636 | IMAGE SENSING UNIT, 3D IMAGE PROCESING APPARATUS, 3D CAMERA SYSTEM, AND 3D IMAGE PROCESSING METHOD - An image sensing unit is disclosed. In one aspect, the sensing unit includes optical sensors for acquiring a two dimensional (2D) image from a subject and micro-structures for supporting the optical sensors and adjusting the heights of the optical sensors. | 05-22-2014 |
20140139637 | Wearable Electronic Device - In one embodiment, a device includes a device body that includes a touch-sensitive display and a processor. The device also includes a band coupled to the device body and an optical sensor in or on the band. The optical sensor faces outward from the band and captures images. The processor communicates with the optical sensor to process captured images. | 05-22-2014 |
20140139638 | ELECTRONIC DEVICE AND METHOD FOR CALIBRATING SPECTRAL CONFOCAL SENSORS - A measurement machine includes an optical lens and a spectral confocal sensor. An electronic device adjusts a zoom ratio of the lens to be a maximum ratio, and calculates X, Y, Z coordinate differences between the lens center and the sensor center. The electronic device calibrates the X, Y coordinate differences at least twice, to obtain calibrated X, Y coordinate differences. The X, Y coordinate differences are replaced by the calibrated X, Y coordinate differences when the calibrated X, Y coordinate differences satisfy first predetermined requirements. The electronic device further calibrates the Z coordinate difference at least twice to obtain a calibrated Z coordinate difference. The Z coordinate difference is replaced by the calibrated Z coordinate difference when the calibrated Z coordinate difference satisfies second predetermined requirements. | 05-22-2014 |
20140139639 | REAL-TIME 3D RECONSTRUCTION WITH POWER EFFICIENT DEPTH SENSOR USAGE - Embodiments disclosed facilitate resource utilization efficiencies in Mobile Stations (MS) during 3D reconstruction. In some embodiments, camera pose information for a first color image captured by a camera on an MS may be obtained and a determination may be made whether to extend or update a first 3-Dimensional (3D) model of an environment being modeled by the MS based, in part, on the first color image and associated camera pose information. The depth sensor, which provides depth information for images captured by the camera, may be disabled, when the first 3D model is not extended or updated. | 05-22-2014 |
20140139640 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - An image processing apparatus includes a coordinate conversion unit and a drawing unit. The coordinate conversion unit obtains a coordinate of an intersection P | 05-22-2014 |
20140146140 | IMAGE CAPTURING DEVICE, SEMICONDUCTOR INTEGRATED CIRCUIT, AND IMAGE CAPTURING METHOD - An image capturing device in one aspect of the present invention includes: an imaging device; a lens optical system including a focus lens; a driving section for driving one of the imaging device and the focus lens; a displacement control section for controlling displacement of the imaging device or the focus lens to be driven based on a predetermined displacement pattern; a diaphragm having an aperture having a size which can be changed and being provided in the lens optical system; an aperture control section for controlling the size of the aperture of the diaphragm; a synchronizing section for controlling the displacement control section and the aperture control section based on exposure timing; and an image capturing parameter determining section for determining the exposure time, the size of the diaphragm aperture, and the displacement pattern, wherein: the predetermined displacement pattern includes a first-type displacement pattern and a second-type displacement pattern by which the imaging device or the focus lens is displaced over different ranges between a first focus position and a second focus position in an image capturing scene; the first-type displacement pattern and the second-type displacement pattern are repeated alternately; and the aperture control section controls the diaphragm so as to have a first aperture and a second aperture having a different size from that of the first aperture in the first-type displacement pattern and the second-type displacement pattern based on timing from the synchronizing section. | 05-29-2014 |
20140146141 | THREE-DIMENSIONAL IMAGE PICKUP LENS SYSTEM AND IMAGE PICKUP SYSTEM INCLUDING THE SAME - A three-dimensional image pickup lens system, including two lens apparatuses, each of the two lens apparatuses including: an optical element movable in a direction containing a component perpendicular to an optical axis; a driving unit driving the optical element; a detector detecting vibration of a corresponding one of the lens apparatuses; a first generator generating a first signal driving the optical element to correct image blur due to the vibration; a second generator generating a second signal driving the optical element to a position corresponding to a set angle of convergence; a retainer retaining an effective maximum correction amount as a correctable maximum image stabilization amount that is determined based on the second signals generated in the lens apparatuses and is common to the lens apparatuses; and a controller driving the optical element by the driving unit based on the first signal, the second signal, and the effective maximum correction amount. | 05-29-2014 |
20140152769 | THREE-DIMENSIONAL SCANNER AND METHOD OF OPERATION - A three-dimensional scanner is provided. The scanner includes a projector that emits a light pattern onto a surface. The light pattern includes a first region having a pair of opposing saw-tooth shaped edges and a first phase. A second region is provided in the light pattern having a pair of opposing saw-tooth shaped edges and a second phase, the second region being offset from the first region by a first phase difference. A third region is provided in the light pattern having a third pair of opposing saw-tooth shaped edges and having a third phase, the third region being offset from the second region by a second phase difference. A camera is coupled to the projector and configured to receive the light pattern. A processor determines three-dimensional coordinates of at least one point on the surface from the reflected light of the first region, second region and third region. | 06-05-2014 |
20140152770 | System and Method for Wide Area Motion Imagery - A system for detecting moving objects within a predetermined geographical area is provided. The system is designed to convey object movement information from an airborne surveillance platform to a ground-based operator station with reduced data transmission. This is accomplished by computer processing image data on the surveillance platform prior to transmitting data to the ground station. First, the system constructs a 3D model of the area under surveillance, for example, by obtaining many different views of the area using an aircraft. One 3D model is maintained at the surveillance platform, and another is transmitted to the ground station. During a surveillance mission, a succession of relatively low-data 2D images is created and aligned with the surveillance platform's 3D model. The alignment reveals differences in the images (tracking data), which are then transmitted to the ground station for use with the ground station's 3D model to resolve object movement information. | 06-05-2014 |
20140152771 | METHOD AND APPARATUS OF PROFILE MEASUREMENT - A system and method for profile measurement based on triangulation involves arrangement of an image acquisition assembly relative to an illumination assembly such that an imaging plane is parallel to a light plane (measurement plane defined by where the light plane impinges on the object), which supports uniform pixel resolution in the imaging plane. The image acquisition assembly includes an imaging sensor having a sensor axis and a lens having a principal axis, wherein the principal axis of the lens is offset from the sensor axis. | 06-05-2014 |
20140152772 | METHODS TO COMBINE RADIATION-BASED TEMPERATURE SENSOR AND INERTIAL SENSOR AND/OR CAMERA OUTPUT IN A HANDHELD/MOBILE DEVICE - A device for generating thermal images includes a low resolution infrared (IR) sensor supported within a housing and having a field of view. The IR sensor is configured to generate thermal images of objects within the field of view having a first resolution. A spatial information sensor supported within the housing is configured to determine a position for each of the thermal images generated by the IR sensor. A processing unit supported within the housing is configured to receive the thermal images and to combine the thermal images based on the determined positions of the thermal images to produce a combined thermal image having a second resolution that is greater than the first resolution. | 06-05-2014 |
20140152773 | MOVING IMAGE CAPTURING DEVICE, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING DEVICE, AND IMAGE DATA PROCESSING METHOD - A capture device is equipped with a stereo camera, and generates a plurality of demosaiced images of different sizes in which the left and right frame images have been reduced in stepwise fashion. Further, by cycling through the pixel rows of the images according to a predetermined rule to produce a connected stream, a virtual composite image is generated that includes the plurality of demosaiced images, in which each row is a pixel row having undergone one round of connection. A host terminal sends to the capture device a data request signal designating a plurality of areas within the composite image, having a shared range in the longitudinal direction. The capture device clips out the designated areas, and sends to the host terminal a stream of a new composite image comprising only the clipped-out areas. The host terminal cuts this into separate images, which are expanded into consecutive addresses in a main memory. | 06-05-2014 |
20140152774 | VEHICLE PERIPHERY MONITORING DEVICE - A vehicle periphery monitoring device includes a first bird's-eye view image generation section generating a first bird's-eye view image through a two-dimensional plane projective transformation based on a photographed image acquired by an in-vehicle camera module, a second bird's-eye view image generation section generating a second bird's-eye view image through a three-dimensional plane projective transformation based on the photographed image, and a displaying image generation section generating a first displaying image for monitor displaying from the first bird's-eye view image and a second displaying image for monitor displaying having a higher displaying magnification than the first displaying image, from a predetermined area of the second bird's-eye view image corresponding to a predetermined area of the first bird's-eye view image. | 06-05-2014 |
20140152775 | METHOD FOR PROCESSING AN IMAGE AND ELECTRONIC DEVICE FOR SAME - In one embodiment of the present invention, a method for processing an image in an electronic device having a plurality of optical lenses is provided. The method for processing an image may comprise the steps of: obtaining a first image according to a first mode by means of a first optical lens; obtaining a second image simultaneously with or after the obtaining of the first image according to a second mode by means of a second optical lens; and processing and storing the first image obtained according to the first mode and the second image obtained according to the second mode. Here, the first and second modes are different from each other and may vary. | 06-05-2014 |
20140160243 | THREE-DIMENSIONAL SHAPE MEASURING APPARATUS AND CONTROL METHOD THEREOF - An exposure amount of one or more first patterns used to determine positions at the time of triangulation is set to be larger than that of other patterns, so as to reduce the influence of shot noise in the first patterns, to improve precision, and to reduce power consumption as a whole. To this end, a three-dimensional shape measuring apparatus, which measures a three-dimensional shape of an object to be measured by projecting pattern light of a plurality of types of patterns onto the object to be measured, and capturing images of the object to be measured, controls a projector unit and image capture unit to set an exposure amount of the first patterns to be larger than that of patterns other than the first patterns. | 06-12-2014 |
20140160244 | MONOCULAR CUED DETECTION OF THREE-DIMENSIONAL STRUCTURES FROM DEPTH IMAGES - Detection of three-dimensional obstacles using a system mountable in a host vehicle including a camera connectible to a processor. Multiple image frames are captured in the field of view of the camera. In the image frames, an imaged feature is detected of an object in the environment of the vehicle. The image frames are partitioned locally around the imaged feature to produce image portions of the image frames including the imaged feature. The image frames are processed to compute a depth map locally around the detected imaged feature in the image portions. Responsive to the depth map, it is determined whether the object is an obstacle to the motion of the vehicle. | 06-12-2014 |
20140168367 | CALIBRATING VISUAL SENSORS USING HOMOGRAPHY OPERATORS - A plurality of homography operators define respective mappings between pairs of coordinate spaces, wherein the coordinate spaces include a coordinate space of a first visual sensor, a virtual coordinate space, and a coordinate space of a second visual sensor. Calibration between the first and second visual sensors is provided using the plurality of homography operators. | 06-19-2014 |
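As a sketch of the idea behind this entry, the following Python composes two 3x3 homographies (one mapping the first sensor's coordinates into a shared virtual coordinate space, one mapping the virtual space into the second sensor's coordinates) to obtain a direct sensor-to-sensor mapping. This is an illustration only; the matrices and function names are invented, not taken from the patent:

```python
def mat_mul(a, b):
    """3x3 homography composition (a ∘ b): apply b first, then a."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply_h(h, pt):
    """Apply a 3x3 homography h to a 2D point, with perspective divide."""
    x, y = pt
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)
```

Chaining H(virtual → sensor2) with H(sensor1 → virtual) yields the sensor1 → sensor2 calibration without either sensor having to observe the other directly.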
20140168368 | METHOD AND APPARATUS FOR TRIANGULATION-BASED 3D OPTICAL PROFILOMETRY - A method for determining a centerline for a triangulation-based optical profilometry system, compensating for the spatial variations of the reflectance of an object's surface. The method comprises providing a luminous line on the object, the luminous line being a triangulation line superposed with a compensation line; capturing an image of the triangulation line and of the compensation line; for each position along the imaged triangulation line, determining a transverse triangulation profile from the imaged triangulation line and a transverse compensation profile from the imaged compensation line; determining a transverse correction profile given by the reciprocal of the transverse compensation profile; multiplying the transverse triangulation profile with the transverse correction profile to obtain a corrected transverse triangulation profile; computing a center of the corrected transverse triangulation profile. The centers determined at positions along the triangulation line form the centerline. Embodiments of a triangulation-based optical profilometry system integrating the method are disclosed. | 06-19-2014 |
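The per-position correction this entry describes (take the reciprocal of the transverse compensation profile, multiply it into the transverse triangulation profile, then compute the center of the corrected profile) can be sketched as follows. This is a minimal illustration; the function and parameter names are hypothetical, and the center here is a simple intensity-weighted centroid:

```python
def corrected_center(triangulation_profile, compensation_profile, eps=1e-9):
    """Correct a transverse triangulation profile for surface reflectance
    and return the intensity-weighted center of the corrected profile."""
    # Transverse correction profile: reciprocal of the compensation profile
    corrected = [t / max(c, eps)
                 for t, c in zip(triangulation_profile, compensation_profile)]
    total = sum(corrected)
    if total == 0:
        return None  # no signal on this transverse line
    # Centroid (center of mass) along the transverse axis
    return sum(i * v for i, v in enumerate(corrected)) / total
```

Repeating this at each position along the imaged triangulation line yields the centerline described in the abstract.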
20140168369 | SINGLE FREQUENCY TIME OF FLIGHT DE-ALIASING - A system and method are disclosed for determining a depth map using TOF with low power consumption. In order to disambiguate, or de-alias, the returned distance(s) for a given phase shift, the system may emit n different frequencies of light over n successive image frames. After n frames of data are collected, the distances may be correlated by a variety of methodologies to determine a single distance to the object as measured over n image frames. As one frequency may be emitted per image frame, the depth map may be developed while consuming low power. | 06-19-2014 |
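The de-aliasing step can be illustrated with a brute-force search: each frequency's wrapped (aliased) distance constrains the true distance modulo that frequency's unambiguous range, and the candidate consistent with all the measurements is selected. This is a sketch under assumed inputs, not the patent's actual correlation method, and all names are illustrative:

```python
def dealias(wrapped, ranges, d_max, tol=1e-3):
    """Given wrapped distances measured at several modulation frequencies
    (each with its own unambiguous range), return the single distance
    below d_max consistent with all measurements, or None."""
    best, best_err = None, float("inf")
    k = 0
    # Enumerate candidate true distances implied by the first frequency
    while wrapped[0] + k * ranges[0] <= d_max:
        cand = wrapped[0] + k * ranges[0]
        # Sum of wrap-around residuals against every other frequency
        err = sum(min((cand - w) % r, r - (cand - w) % r)
                  for w, r in zip(wrapped[1:], ranges[1:]))
        if err < best_err:
            best, best_err = cand, err
        k += 1
    return best if best_err <= tol * max(1, len(wrapped) - 1) else None
```

With unambiguous ranges of, say, 3 m and 4 m, a single frequency cannot distinguish 1.3 m from 7.3 m, but only 7.3 m satisfies both wrapped measurements simultaneously.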
20140168370 | DEVICE FOR OPTICALLY SCANNING AND MEASURING AN ENVIRONMENT - A method for optically scanning and measuring an environment by means of a hand-held scanner for producing 3D-scans is provided. The method includes providing a hand-held scanner having at least one projector and at least one camera. At least one pattern is projected onto an object in the environment with the at least one projector. Images of the object, which has the pattern projected thereon, are recorded with the at least one camera in a plurality of frames. Three-dimensional coordinates of points on the surface of the object are determined from each frame in the plurality of frames. A ring closure is determined in the plurality of frames. The determination comprises forming a frustum for each frame, comparing the last frustum of the last frame with a plurality of frusta to form an intersection, and selecting the frustum having the largest intersection. | 06-19-2014 |
20140168371 | IMAGE PROCESSING APPARATUS AND IMAGE REFOCUSING METHOD - An image refocusing method for use in an image processing apparatus is provided. The image processing apparatus has an image capturing unit and an image processing unit. The method includes the following steps: receiving light of a scene via the image capturing unit to output a raw image having information of different views; rearranging the raw image from the image sensor to obtain multiple different view sub-images; performing a refocusing process on at least one specific view sub-image of the different view sub-images corresponding to a specific view to generate multiple refocused view images, wherein a first focusing position of the refocused view images is different from a second focusing position of the at least one specific view sub-image; and outputting the refocused view images to a stereoscopic display device. | 06-19-2014 |
20140168372 | SENSING APPARATUS AND SENSING METHOD FOR GENERATING THREE-DIMENSIONAL IMAGE INFORMATION - A sensing apparatus includes an infrared light generating device, an image sensing unit, a processing circuit and a control circuit. The image sensing unit is arranged for detecting a first infrared light signal reflected from an object to generate a first sensing signal when the infrared light generating device is activated, and detecting a second infrared light signal reflected from the object to generate a second sensing signal when the infrared light generating device is deactivated. The processing circuit is coupled to the image sensing unit, and is arranged for generating three-dimensional image information of the object according to at least the first sensing signal and the second sensing signal, wherein the three-dimensional image information includes depth information. The control circuit is arranged to control activation and deactivation of the infrared light generating device, sensing operations of the image sensing unit, and signal processing operations of the processing circuit. | 06-19-2014 |
20140168373 | LIGHT PATH ADJUSTMENT APPARATUS AND PHOTOGRAPHING APPARATUS INCLUDING THE SAME - A light path adjustment apparatus includes: a support plate with a first through hole through which light passes; a plurality of optical units, including first and second optical units, that move between an open location where the first through hole is opened by the optical units moving toward an outside of the first through hole and a closing location where the first through hole is divided into a plurality of regions by the plurality of optical units moving toward the first through hole and blocking at least a part of the light; and a rotation plate that has a second through hole corresponding to the first through hole and is rotatably disposed with respect to the support plate. The first and second optical units correspond to first and second regions of the first through hole, respectively, and are coupled to the support plate and the rotation plate, respectively. | 06-19-2014 |
20140168374 | CONVEYING SYSTEM FOR PIECES OF LUGGAGE, CHECK-IN SYSTEM COMPRISING SUCH A CONVEYING SYSTEM AND METHOD FOR USING SUCH A CONVEYING SYSTEM - The invention provides a conveying system for pieces of luggage, which may or may not form part of a check-in system. The conveying system comprises a conveyor for conveying pieces of luggage, detection means provided with a camera for making an image of at least one object, normally a single piece of luggage, present on the conveyor at a checking location, and image processing means for the automated processing of images made by the camera. The camera is designed to make images of the infrared type, and the image processing means are designed to process such images. The image processing means have at their disposal information regarding infrared images of non-suspect pieces of luggage exhibiting at least one area with an elevated temperature. The image processing means are further designed to compare an infrared image made by the camera that exhibits at least one elevated-temperature area with said information, and on the basis of that comparison to deem or not deem the at least one object that is the subject of the infrared image to be “suspect”. The invention further provides a method for using such a conveying system. | 06-19-2014 |
20140168375 | IMAGE CONVERSION DEVICE, CAMERA, VIDEO SYSTEM, IMAGE CONVERSION METHOD AND RECORDING MEDIUM RECORDING A PROGRAM - Provided are an image conversion device, a camera, a video system, an image conversion method and a program which are capable of performing a desired image conversion even when the orientation of multiple regions within one image is to be changed in different directions. This system is provided with a region dividing unit. | 06-19-2014 |
20140168376 | IMAGE PROCESSING APPARATUS, PROJECTOR AND IMAGE PROCESSING METHOD - An image processing apparatus includes an imaging unit configured to image a region including a target on which an image is projected to acquire imaged data, a distance measuring unit configured to compute distance data associated with a distance between the target and the imaging unit based on the imaged data acquired from the imaging unit, a plane estimating unit configured to estimate a plane corresponding to the target based on the distance data, and a range specifying unit configured to generate image data associated with the region based on the imaged data and specify a projectable range to the target based on the image data and information associated with the plane. | 06-19-2014 |
20140176676 | IMAGE INTERACTION SYSTEM, METHOD FOR DETECTING FINGER POSITION, STEREO DISPLAY SYSTEM AND CONTROL METHOD OF STEREO DISPLAY - The disclosure provides a stereo display system including a stereo display, a depth detector, and a computing processor. The stereo display displays a left eye image and a right eye image, such that a left eye and a right eye of a viewer generate a parallax to view a stereo image. The depth detector captures a depth data of a three-dimensional space. The computing processor controls image display of the stereo display. The computing processor analyzes an eyes position of the viewer according to the depth data, and when the viewer moves horizontally, vertically, or obliquely in the three-dimensional space relative to the stereo display, the computing processor adjusts the left eye image and the right eye image based on variations of the eyes position. Furthermore, an image interaction system, a method for detecting finger position, and a control method of stereo display are also provided. | 06-26-2014 |
20140176677 | 3D Scene Scanner and Position and Orientation System - A hand-held mobile 3D scanner is provided. | 06-26-2014 |
20140176678 | Method for High-Resolution 3D-Localization Microscopy - A method for high-resolution 3D-localization microscopy of a sample having fluorescence emitters, in which the fluorescence emitters in the sample are excited to emit fluorescent radiation and the sample is displayed with spatial resolution in wide-field microscopy. Excitation is caused such that, in reference to the spatial resolution, at least some fluorescence emitters are isolated. A three-dimensional localization, which includes a z-coordinate in the depth direction of the display as well as an x-coordinate and a y-coordinate orthogonal thereto, is determined in a localization analysis for each isolated fluorescence emitter with a precision exceeding the spatial resolution. A table of localization imprecision is provided, which states the imprecision of the localization with regard to its z-coordinate as a function of the z-coordinate and the number of photons collected during imaging in the wide-field microscopy. The localization imprecision is determined for each localized fluorescence emitter by accessing the table of localization imprecision for the localization determined during the localization analysis. | 06-26-2014 |
20140176679 | Method for Automatically Classifying Moving Vehicles - The invention is directed to a method for classifying a moving vehicle. The object of the invention is to find a novel possibility for classifying vehicles moving in traffic which allows a reliable automatic classification based on two-dimensional image data. This object is met according to the invention in that an image of a vehicle is recorded by means of a camera and the position and perspective orientation of the vehicle are determined therefrom; rendered two-dimensional views are generated from three-dimensional vehicle models, which are stored in a database, at positions along an anticipated movement path of the vehicle and are compared with the recorded image of the vehicle; and the vehicle is classified from the two-dimensional view found to have the best match by assignment of the associated three-dimensional vehicle model. | 06-26-2014 |
20140184745 | Accurate 3D Finger Tracking with a Single Camera - An object tracking device includes a camera with a field of view oriented in a first direction and a mirror with a field of reflection oriented in a second direction. When an object is in a first region in the field of view of the electronic camera, the camera has a direct view of the object and a reflected view of the object from the mirror. A processor coupled with the camera is configured to receive a first image data set and a second image data set from the camera. The first image data set and the second image data set each include the direct view of the object and the reflected view of the object from the mirror. | 07-03-2014 |
20140184746 | IMAGE PROCESSING METHOD AND APPARATUS - An image processing apparatus is provided. The image processing apparatus determines whether a first charge quantity of charges stored in a first charge storage is greater than or equal to a predetermined saturation level, the first charge storage being among a plurality of charge storages configured to store charges generated by a sensor of a depth camera. According to the determination result, when the first charge quantity is greater than or equal to the saturation level, the image processing apparatus may calculate the first charge quantity from at least one second charge quantity of charges stored in at least one second charge storage which is different from the first charge storage among the plurality of charge storages. | 07-03-2014 |
20140184747 | Visually-Assisted Stereo Acquisition from a Single Camera - A method is disclosed of indicating a suitable pose for a camera for obtaining a stereoscopic image, with the camera comprising an imaging sensor. The method comprises obtaining and storing a first image of a scene using the imaging sensor when the camera is in a first pose; moving the camera to a second pose; and obtaining a second image of the scene when the camera is in the second pose. One or more disparity vectors are determined, each disparity vector being determined between a feature identified within the first image and a corresponding feature identified in the second image. On the basis of the one or more disparity vectors, a determination is made of whether the second image of the scene is suitable for use, together with the first image, as a stereoscopic image pair. | 07-03-2014 |
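The suitability decision described in this entry can be sketched from the disparity vectors alone: for a good stereoscopic pair, the vertical component of the disparities should be near zero and the horizontal component should be consistent and within a comfortable range. The thresholds and names below are invented for illustration and are not taken from the patent:

```python
def stereo_pair_suitable(matches, max_vertical=2.0,
                         min_horizontal=5.0, max_horizontal=50.0):
    """Decide whether two images form a usable stereoscopic pair.

    matches: list of ((x1, y1), (x2, y2)) feature correspondences
    between the first and second image, in pixel coordinates.
    """
    if not matches:
        return False
    dxs = [x2 - x1 for (x1, _), (x2, _) in matches]
    dys = [y2 - y1 for (_, y1), (_, y2) in matches]
    mean_dx = sum(dxs) / len(dxs)
    mean_dy = sum(dys) / len(dys)
    # Vertical disparity should be near zero; horizontal disparity
    # should fall in a comfortable range and agree in sign.
    return (abs(mean_dy) <= max_vertical
            and min_horizontal <= abs(mean_dx) <= max_horizontal
            and all(dx * mean_dx > 0 for dx in dxs))
```

A camera application could evaluate this predicate live as the user moves toward the second pose and indicate when a suitable pose has been reached.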
20140184748 | SINGLE-SENSOR SYSTEM FOR EXTRACTING DEPTH INFORMATION FROM IMAGE BLUR - Hardware and software methodology are described for three-dimensional imaging in connection with a single sensor. A plurality of images is captured at different degrees of focus without focus change of an objective lens between such images. Depth information is extracted by comparing image blur between the images captured on the single sensor. | 07-03-2014 |
20140192158 | Stereo Image Matching - The description relates to stereo image matching to determine depth of a scene as captured by images. More specifically, the described implementations can involve a two-stage approach where the first stage can compute depth at highly accurate but sparse feature locations. The second stage can compute a dense depth map using the first stage as initialization. This improves accuracy and robustness of the dense depth map. | 07-10-2014 |
20140192159 | CAMERA REGISTRATION AND VIDEO INTEGRATION IN 3D GEOMETRY MODEL - Apparatus, systems, and methods may operate to receive a real image or real images of a coverage area of a surveillance camera. Building Information Model (BIM) data associated with the coverage area may be received. A virtual image may be generated using the BIM data. The virtual image may include at least one three-dimensional (3-D) graphic that substantially corresponds to the real image. The virtual image may be mapped with the real image. Then, the surveillance camera may be registered in a BIM coordination system using an outcome of the mapping. | 07-10-2014 |
20140192160 | THREE-DIMENSIONAL IMAGE SENSING DEVICE AND METHOD OF SENSING THREE-DIMENSIONAL IMAGES - A three-dimensional image sensing device includes a light source, a sensing module, and a signal processing module. The sensing module includes a pixel array, a control unit, and a light source driver. The light source generates flashing light at a K multiple of a frequency of flicker noise or at a predetermined frequency. The pixel array samples the flashing light to generate a sampling result. The control unit executes image processing on the sampling result to generate a spectrum. The light source driver drives the light source according to the K multiple of the frequency or the predetermined frequency. The signal processing module generates the K multiple of the frequency according to the spectrum, or outputs the predetermined frequency to the light source driver, and generates depth information according to a plurality of first images captured while the light source is turned on and a plurality of second images captured while it is turned off, both included in the sampling result. | 07-10-2014 |
20140192161 | THREE-DIMENSIONAL RECONSTRUCTION OF A MILLIMETER-WAVE SCENE BY OPTICAL UP-CONVERSION AND CROSS-CORRELATION DETECTION - An apparatus and method may be used to create images, e.g., three-dimensional images, based on received radio-frequency (RF), e.g., millimeter-wave, signals carrying image data. The RF signals may be modulated onto optical carrier signals, and the resulting modulated optical signals may be cross-correlated. The resulting cross-correlations may be used to extract image data that may be used to generate three-dimensional images. | 07-10-2014 |
20140192162 | SINGLE-EYE STEREOSCOPIC IMAGING DEVICE, IMAGING METHOD AND RECORDING MEDIUM - After AE/AF/AWB operation, a subject distance is calculated for each pixel, and a histogram showing the distance distribution is created based thereon. The class with the highest frequency, which is the peak on the side nearer than the focus distance, is searched for based on the histogram, and a rectangular area Ln which includes pixels having a subject distance within the searched range is set. The average parallax amount Pn of the pixels included in the rectangular area Ln is calculated, and it is confirmed whether Pn is within the preset range between parallax amounts a and a−t1. In a case where Pn is not within this range, the aperture value is adjusted such that Pn falls within the range between the parallax amounts a and a−t1. | 07-10-2014 |
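The histogram search for the peak class on the near side of the focus distance can be sketched as below. The bin width and all names are illustrative assumptions, not values from the patent:

```python
def nearest_peak_class(distances, focus_distance, bin_width=0.5):
    """Histogram per-pixel subject distances and return the (lo, hi)
    range of the most frequent class nearer than the focus distance."""
    counts = {}
    for d in distances:
        if d < focus_distance:  # only the near side of the focus distance
            b = int(d // bin_width)
            counts[b] = counts.get(b, 0) + 1
    if not counts:
        return None  # nothing nearer than the focus distance
    peak = max(counts, key=counts.get)
    return (peak * bin_width, (peak + 1) * bin_width)
```

Pixels whose distance falls inside the returned range would then define the rectangular area over which the average parallax amount is evaluated.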
20140192163 | IMAGE PICKUP APPARATUS AND INTEGRATED CIRCUIT THEREFOR, IMAGE PICKUP METHOD, IMAGE PICKUP PROGRAM, AND IMAGE PICKUP SYSTEM - An imaging device generates distance information for each object in a plurality of images having the same viewpoint. During the generation, the imaging device detects distances from the viewpoint to some of the objects intermittently, and estimates the distances from the viewpoint to the other objects using the detected distances. The imaging device extracts object areas from the images, estimates the correspondence between the object areas of a target image targeted for distance estimation and the object areas of a reference image having been subjected to distance detection by a comparison therebetween, and allocates, for each of the object areas of the target image, the distance information of the corresponding object area of the reference image. | 07-10-2014 |
20140198183 | SENSING PIXEL AND IMAGE SENSOR INCLUDING SAME - A depth-sensing pixel included in a three-dimensional (3D) image sensor includes: a photoelectric conversion device configured to generate an electrical charge by converting modulated light reflected by a subject; a capture transistor, controlled by a capture signal applied to the gate thereof, the photoelectric conversion device being connected to the drain thereof; and a transfer transistor, controlled by a transfer signal applied to the gate thereof, the source of the capture transistor being connected to the drain thereof, and a floating diffusion region being connected to the source thereof. | 07-17-2014 |
20140204178 | IMAGING DEVICE AND SHADING CORRECTION METHOD - An image pickup apparatus includes: an imaging element that includes a first pixel group and a second pixel group that respectively photo-electrically convert luminous fluxes that pass through different areas of a single imaging optical system, generates a first image formed by an output value of the first pixel group and a second image formed by an output value of the second pixel group, and has built therein a pixel addition unit that adds a pixel value of the first pixel group and a pixel value of the second pixel group to generate a two-dimensional image; and an image processing section that matches a shading shape of the first image and a shading shape of the second image with a shading shape of the two-dimensional image by applying correction coefficients stored in a memory to the output values of the first pixel group and the second pixel group. | 07-24-2014 |
20140204179 | DEPTH-SENSING CAMERA SYSTEM - A depth-sensing camera system includes a common sensor configured to record color and infrared data from a target, an infrared illuminant that projects infrared light on the target, and a control logic that switches a mode of operation of the camera between a color mode and an infrared mode. When the camera system is in the infrared mode, the infrared illuminant operates to project the infrared light on the target and when in the color mode, the infrared illuminant is disabled. The camera system also includes a color buffer for storing color chroma and luma values and a depth buffer for storing infrared luma data. | 07-24-2014 |
20140204180 | Structured light system - A structured light system based on a fast, linear array light modulator and an anamorphic optical system captures three-dimensional shape information at high rates and has strong resistance to interference from ambient light. A structured light system having a modulated light source offers improved signal to noise ratios. A wand permits single point detection of patterns in structured light systems. | 07-24-2014 |
20140210946 | NON-DESTRUCTIVE INSPECTION APPARATUS AND METHOD FOR TOUGHENED COMPOSITE MATERIALS - A non-destructive composite material inspection apparatus and method thereof inspect the fiber direction and fracture toughness. The apparatus includes a light module and a stereoscopic microcamera module. The light module generates a polarized light that has a polarization orientation projecting to an inspection area on the surface layer of the composite material. The stereoscopic microcamera module captures the reflection light from the inspection area and outputs an image. When the polarization orientation and the fiber direction are parallel, the image is a bright field image. When the polarization orientation and the fiber direction are orthogonal, the image is a dark field image. The bright and dark field images show the fiber direction and toughened particle distribution, and the toughness of the composite material is then predicted. | 07-31-2014 |
20140210947 | Coordinate Geometry Augmented Reality Process - Embodiments of the invention include a method, a system, and a mobile device that incorporate augmented reality technology into land surveying, 3D laser scanning, and digital modeling processes. By incorporating the augmented reality technology, the mobile device can display an augmented reality image comprising a real view of a physical structure in the real environment and a 3D digital model of an unbuilt design element overlaid on top of the physical structure at its intended tie-in location. In an embodiment, a marker can be placed at a predetermined set of coordinates at or around the tie-in location, determined by surveying equipment, so that the 3D digital model of the unbuilt design element can be visualized in a geometrically correct orientation with respect to the physical structure. Embodiments of the present invention can also be applied to a scaled down 3D printed object representing the physical structure if visiting the project site is not possible. | 07-31-2014 |
20140210948 | Structured light system - A structured light system based on a fast, linear array light modulator and an anamorphic optical system captures three-dimensional shape information at high rates and has strong resistance to interference from ambient light. A structured light system having a modulated light source offers improved signal to noise ratios. A wand permits single point detection of patterns in structured light systems. | 07-31-2014 |
20140210949 | COMBINATION OF NARROW-AND WIDE-VIEW IMAGES - A dual field of view (FOV) image is generated as a combination of a small footprint image and large footprint image, where the large footprint image is generated based on rendering to the current viewpoint at least a first image captured relatively farther away from a target so as to appear as a continuation of a small footprint image which is relatively closer to the target. Preferably, both the first image and the small footprint image are captured with the same fixed narrow field of view (NFOV) imaging device. The system is able to operate in real time, using a variety of image capture devices, and provides the benefits of both NFOV and wide FOV (WFOV) without limitations of conventional techniques, including operation at a longer range from a target, with higher resolution, while innovative processing of the captured images provides orientation information via a dual FOV image. | 07-31-2014 |
20140218476 | Zebra Lights - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for rendering zebra stripes on a three dimensional (3D) object. In one aspect, a method includes rendering an image of an object from the perspective of a camera. For each pixel of a plurality of pixels of the image, a point on the surface of the object corresponding to the pixel is determined. An angle between a surface normal at the point and a line between the point and the light source is determined. A zebra light color for the pixel is determined using a stripe function and the angle, the stripe function specifying alternating high and low intensities for various angles. A blended pixel color for the pixel is determined by blending a material color for the point with the zebra light color. | 08-07-2014 |
20140218477 | METHOD AND SYSTEM FOR CREATING A THREE DIMENSIONAL REPRESENTATION OF AN OBJECT - A method and a system for creating three dimensional representation of an object are provided. The method includes projecting a pattern image comprising one or more pattern elements and capturing an image of the object with the pattern image projected onto the object. The method includes extracting intensity values corresponding to at least two color components of each pixel of the captured image. For each color component, a probability map is generated using the corresponding intensity values. The probability map for a color component represents a probability that a pixel of the captured image is representing a part of a pattern element. A joint probability distribution is determined by combining the probability maps. Using the joint probability distribution, specific pattern elements are located in the captured image. The method includes determining three dimensional coordinates representing points on surface of the object based on the location of the specific pattern elements. | 08-07-2014 |
20140218478 | METHOD AND APPARATUS FOR STEREOSCOPIC IMAGING - A device is disclosed that utilizes color or polarization to generate two separate images of the same object taken from the two perspectives that correspond to the left and right eyes of an observer. The two separate images are captured through a single camera objective (e.g., a single shutter camera), resulting in a single image with 3D information encoded in the color or polarization. Advantageously, the images are captured simultaneously, permitting obtaining stereoscopic images of both static and moving subjects, allowing 3D video capture. The device can be an attachment to a conventional, single shutter, image capturing device, such as a camera, a smart phone with a camera feature, computer with a camera feature, and so on. | 08-07-2014 |
20140218479 | 3D ENDOSCOPE DEVICE - A 3D endoscope device includes an endoscopic scope, an image processor, and an image display device. A line which connects the center of a first image and the center of a second image formed on the light receiving surface of the CMOS sensor is orthogonal to a parallax direction. On the light receiving surface of the CMOS sensor, a first region where the first image is formed is divided into a plurality of first divided regions, and a second region where the second image is formed is divided into a plurality of second divided regions. When reading data constituting video signals from the first region and the second region, the CMOS sensor reads data by alternately scanning the first divided region at a position corresponding to the image for left-eye and the second divided region at a position corresponding to the image for right-eye. | 08-07-2014 |
20140218480 | HAND HELD PORTABLE THREE DIMENSIONAL SCANNER - Embodiments of the invention may include a scanning device to scan three dimensional objects. The scanning device may generate a three dimensional model. The scanning device may also generate a texture map for the three dimensional model. Techniques utilized to generate the model or texture map may include tracking scanner position, generating depth maps of the object and generation composite image of the surface of the object. | 08-07-2014 |
20140218481 | Method for Determining Whether a Vehicle can Pass Through an Object by Means of a 3-D Camera - The invention relates to a method and to a device for determining whether a vehicle can pass through an object by means of a (spatially resolving) | 08-07-2014 |
20140232825 | Calibration of a 3D camera - A method for calibrating a 3D camera includes determining an actuation of an element by a person and calibrating the 3D camera based on the determining of the actuation. | 08-21-2014 |
20140232826 | METHOD FOR DETECTING OBJECTS IN A WAREHOUSE AND/OR FOR SPATIAL ORIENTATION IN A WAREHOUSE - A method for detecting objects in a warehouse and/or for spatial orientation in a warehouse includes: | 08-21-2014 |
20140232827 | TIME-TO-DIGITAL CONVERTER AND METHOD THEREFOR - Time-to-digital converter system including: an event detector configured for detecting an event and generating an event detection signal upon detection of the event; and a time-to-digital converter coupled or connectable to the event detector and including a fine resolution part configured for counting fine time intervals, organized such that the fine resolution part is activated in response to the event detection signal and deactivated in response to a reference clock. 3D imager including an array of pixels, with such a time-to-digital converter system in each pixel, and further including a reference clock generator. | 08-21-2014 |
20140240459 | LASER FRAME TRACER - A laser frame tracer ( | 08-28-2014 |
20140240460 | LASER FRAME TRACER - A laser frame tracer ( | 08-28-2014 |
20140240461 | 3D CAMERA USING FLASH WITH STRUCTURED LIGHT - An imaging device capable of capturing depth information or surface profiles of objects is disclosed herein. The imaging device uses an enclosed flashing unit to project a sequence of structured light patterns onto an object and captures the light patterns reflected from the surfaces of the object by using an image sensor that is enclosed in the imaging device. The imaging device is capable of capturing an image of an object such that the captured image is comprised of one or more color components of a two-dimensional image of the object and a depth component that specifies the depth information of the object. | 08-28-2014 |
20140240462 | FAST GATING PHOTOSURFACE - An embodiment of the invention provides a camera comprising a photosurface having a substrate comprising photopixels and associated storage pixels and a controller that controls the photosurface to image a scene by maintaining a bias between the photopixels and their respective storage pixels at all times during an exposure period of the photosurface so that photocharge, substantially upon its generation in a photopixel by light from the scene incident on the photopixel moves towards the photopixel's storage pixel. | 08-28-2014 |
20140240463 | Video Refocusing - A video refocusing system operates in connection with refocusable video data, information, images and/or frames, which may be light field video data, information, images and/or frames, that may be focused and/or refocused after acquisition or recording. A video acquisition device acquires first refocusable light field video data of a scene, stores first refocusable video data representative of the first refocusable light field video data, acquires second refocusable light field video data of the scene after acquiring the first refocusable light field video data, determines a first virtual focus parameter (such as a virtual focus depth) using the second refocusable light field video data, generates first video data using the stored first refocusable video data and the first virtual focus parameter, wherein the first video data includes a focus depth that is different from an optical focus depth of the first refocusable light field video data, and outputs the first video data. | 08-28-2014 |
20140247326 | METHOD AND SYSTEM FOR ALIGNMENT OF A PATTERN ON A SPATIAL CODED SLIDE IMAGE - A method for preparing a spatial coded slide image in which a pattern of the spatial coded slide image is aligned along epipolar lines at an output of a projector in a system for 3D measurement, comprising: obtaining distortion vectors for projector coordinates, each vector representing a distortion from predicted coordinates caused by the projector; retrieving an ideal pattern image which is an ideal image of the spatial coded pattern aligned on ideal epipolar lines; creating a real slide image by, for each real pixel coordinates of the real slide image, retrieving a current distortion vector; removing distortion from the real pixel coordinates using the current distortion vector to obtain ideal pixel coordinates in the ideal pattern image; extracting a pixel value at the ideal pixel coordinates in the ideal pattern image; copying the pixel value at the real pixel coordinates in the real slide image. | 09-04-2014 |
20140247327 | IMAGE PROCESSING DEVICE, METHOD, AND RECORDING MEDIUM THEREFOR - An image processing device comprising a representative parallax acquisition unit, a scene separation unit to separate the stereoscopic video into multiple scenes when a parallax width does not comply with an allowable parallax width, a parallax adjustment unit to decide whether a scene parallax width complies with the allowable parallax width, and uniformly adjust the representative parallaxes for the respective stereoscopic image frames constituting the scene such that the scene parallax width complies with the allowable parallax width, the scene parallax width being defined by a maximum value and a minimum value of the representative parallaxes for the respective stereoscopic image frames constituting the scene, and an output unit, wherein the representative parallaxes for the respective stereoscopic image frames include a statistical operation value to be calculated based on parallaxes that, of parallaxes for the stereoscopic image frames, meet a predetermined condition. | 09-04-2014 |
20140253686 | COLOR 3-D IMAGE CAPTURE WITH MONOCHROME IMAGE SENSOR - A method for forming a color surface contour image of one or more teeth projects each of a plurality of structured patterns onto the one or more teeth and records image data from the structured pattern onto a monochrome sensor array. Surface contour image data is generated according to the recorded image data from the structured pattern projection. Light of first, second, and third spectral bands is projected onto the one or more teeth and first, second, and third color component image data is recorded on the monochrome sensor array. The first, second, and third color component image data is combined with color calibration data to generate a set of color values for each image pixel. The generated set of color values is assigned to the corresponding pixel in the generated surface contour image data to generate the color surface contour image. The generated color surface contour image is displayed. | 09-11-2014 |
20140253687 | 3D TRANSLATOR DEVICE - Various embodiments include a three-dimensional (3D) translator device for translating two-dimensional (2D) visual imagery into 3D physical representations that users can feel with their fingers and thus interact with and experience physically. The 3D translator device may enable users to feel/interact with 2D images displayed on devices such as 2D touchscreen devices by translating the 2D images into a 3D touch surface coordinate data set. The 3D translator device may actuate based on the 3D touch surface coordinate data set so that the users can feel the 3D representation of the 2D images, and translating the users' touches on the 3D translator device's 3D touchpanel into touch inputs that can be processed by a 2D touchscreen device (i.e., “2D touch inputs”). | 09-11-2014 |
20140253688 | Time of Flight Sensor Binning - A time-of-flight sensor device generates and analyzes a high-resolution depth map frame from a high-resolution image to determine a mode of operation for the time-of-flight sensor and an illuminator and to control the time-of-flight sensor and illuminator according to the mode of operation. A binned depth map frame can be created from a binned image from the time-of-flight sensor and combined with the high-resolution depth map frame to create a compensated depth map frame. | 09-11-2014 |
20140253689 | Measuring Instrument - A measuring instrument comprises a spherical camera ( | 09-11-2014 |
20140253690 | METHOD FOR ADJUSTING ROI AND 3D/4D IMAGING APPARATUS USING THE SAME - A three-dimensional/four-dimensional (3D/4D) imaging apparatus and a region of interest (ROI) adjustment method and device are provided. An ROI is adjusted through an E image in a 3D/4D imaging mode, in which the E image is refreshed in real time when the ROI is adjusted and has a scan line range larger than that of the ROI. | 09-11-2014 |
20140267609 | SYSTEMS AND METHODS FOR ENHANCING DIMENSIONING, FOR EXAMPLE VOLUME DIMENSIONING - A dimensioning system can include stored data indicative of coordinate locations of each reference element in a reference image containing a pseudorandom pattern of elements. Data indicative of the coordinates of elements appearing in an acquired image of a three-dimensional space including an object can be compared to the stored data indicative of coordinate locations of each reference element. After the elements in the acquired image corresponding to the reference elements in the reference image are identified, a spatial correlation between the acquired image and the reference image can be determined. Such a numerical comparison of coordinate data reduces the computing resource requirements of graphical comparison technologies. | 09-18-2014 |
20140267610 | DEPTH IMAGE PROCESSING - Embodiments described herein can be used to detect holes in a subset of pixels of a depth image that has been specified as corresponding to a user, and to fill such detected holes. Additionally, embodiments described herein can be used to produce a low resolution version of a subset of pixels that has been specified as corresponding to a user, so that when an image including a representation of the user is displayed, the image respects the shape of the user, yet is not a mirror image of the user. Further, embodiments described herein can be used to identify pixels, of a subset of pixels specified as corresponding to the user, that likely correspond to a floor supporting the user. This enables the removal of the pixels, identified as likely corresponding to the floor, from the subset of pixels specified as corresponding to the user. | 09-18-2014 |
20140267611 | RUNTIME ENGINE FOR ANALYZING USER MOTION IN 3D IMAGES - Disclosed herein are systems and methods for a runtime engine for analyzing user motion in a 3D image. The runtime engine is able to use different techniques to analyze the user's motion, depending on what the motion is. The runtime engine might choose a technique that depends on skeletal tracking data and/or one that instead uses image segmentation data to determine whether the user is performing the correct motion. The runtime engine might determine how to perform positional analysis or time/motion analysis of the user's performance based on what motion is being performed. | 09-18-2014 |
20140267612 | Method and Apparatus for Adaptive Exposure Bracketing, Segmentation and Scene Organization - A method, system and computer program are provided that present a real-time approach to chromaticity maximization to be used in image segmentation. The ambient illuminant in a scene may first be approximated. The input image may then be preprocessed to remove the impact of the illuminant, and approximate an ambient white light source instead. The resultant image is then chroma-maximized. The result is an adaptive chromaticity maximization algorithm capable of adapting to a wide dynamic range of illuminations. A segmentation algorithm is put in place as well that takes advantage of such an approach. This approach also has applications in HDR photography and real-time HDR video. | 09-18-2014 |
20140267613 | PHOTOSENSOR HAVING ENHANCED SENSITIVITY - A method of controlling a photosensor having adjacent light sensitive pixels in which photocharge is generated in depletion zones of the pixels by light incident on the photosensor, comprising applying voltage to gate electrodes of the photopixels so that the depletion zone of one of the pixels extends into and lies under a portion of the depletion region of the other pixel. | 09-18-2014 |
20140267614 | 2D/3D Localization and Pose Estimation of Harness Cables Using A Configurable Structure Representation for Robot Operations - A robot is made to recognize and manipulate different types of cable harnesses in an assembly line. This is achieved by using a stereo camera system to define a 3D cloud of a given cable harness. Pose information of specific parts of the cable harness are determined from the 3D point cloud, and the cable harness is then re-presented as a collection of primitive geometric shapes of known dimensions, whose positions and orientations follow the spatial position of the represented cable harness. The robot can then manipulate the cable harness by using the simplified representation as a reference. | 09-18-2014 |
20140267615 | WEARABLE CAMERA - An image capture system includes a camera including a processor, an imager coupled to a lens mounted on a substantially curved camera body, said curved camera body anatomically shaped for mounting to a forehead region; and a headband to secure the camera to the forehead to capture a picture or video. | 09-18-2014 |
20140267616 | VARIABLE RESOLUTION DEPTH REPRESENTATION - An apparatus, image capture device, computing device, computer readable medium are described herein. The apparatus includes logic to determine a depth indicator. The apparatus also includes logic to vary a depth information of an image based on the depth indicator, and logic to generate the variable resolution depth representation. A depth indicator may be lighting, texture, edges, contours, colors, motion, time, or any combination thereof. | 09-18-2014 |
20140267617 | ADAPTIVE DEPTH SENSING - An apparatus, system, and a method are described herein. The apparatus includes one or more sensors, wherein the sensors are coupled by a baseline rail. The apparatus also includes a controller device that is to move the one or more sensors along the baseline rail such that the baseline rail is to adjust a baseline between each of the one or more sensors. | 09-18-2014 |
20140267618 | Capturing and Refocusing Imagery - Systems and methods for generating depth data from images captured by a camera-enabled mobile device are provided. The depth data can be used to refocus one or more portions of an image captured by the camera-enabled mobile device. A user can select different portions of the captured image to bring different portions of the image into focus and out of focus. Depth data for an image can be generated from a reference image and a sequence of images captured by the image capture device. The sequences of images can be acquired using a suitable camera motion. A refocused image can be generated with portions of the image out of focus relative to the reference image. | 09-18-2014 |
20140267619 | DIAGNOSING MULTIPATH INTERFERENCE AND ELIMINATING MULTIPATH INTERFERENCE IN 3D SCANNERS USING PROJECTION PATTERNS - A method for determining 3D coordinates of points on a surface of the object by providing a non-contact 3D measuring device having a projector and camera coupled to a processor, projecting a pattern onto the surface to determine a first set of 3D coordinates of points on the surface, determining susceptibility of the object to multipath interference by projecting and reflecting rays from the measured 3D coordinates of the points, selecting a pattern as a single line stripe or a single spot based on the susceptibility to multipath interference, and projecting the pattern onto the surface to determine a second set of 3D coordinates. | 09-18-2014 |
20140267620 | DIAGNOSING MULTIPATH INTERFERENCE AND ELIMINATING MULTIPATH INTERFERENCE IN 3D SCANNERS BY DIRECTED PROBING - A method for determining 3D coordinates of points on a surface of the object by providing a remote probe having a probe tip and a non-contact 3D measuring device having a projector and camera coupled to a processor, projecting a pattern onto the surface to determine a first set of 3D coordinates of points on the surface, determining susceptibility of the object to multipath interference by projecting and reflecting rays from the measured 3D coordinates of the points, projecting a first light to direct positioning of the remote probe by the user, the first light determined at least in part by the susceptibility to multipath interference, touching the probe tip to the surface at the indicated region, illuminating at least three spots of light on the remote probe, capturing an image of the at least three spots with the camera, and determining 3D coordinates of the probe tip. | 09-18-2014 |
20140267621 | STEREO CAMERA UNIT - A stereo camera unit includes a camera stay, a pair of lenses fixed to the camera stay, image-capturing devices for receiving light condensed by the lens, and a mount board mounting the image-capturing devices. The mount board is fixed to the camera stay via fastening members such as a screw, whereby the relative position of the lens with respect to the image-capturing device is determined. A metal core substrate having a same metal material as the camera stay in a core layer is employed for the mount board. | 09-18-2014 |
20140267622 | STEREO CAMERA - A stereo camera for measuring distance to an object using two images of the object having parallax includes an optical multiplexer to set a length of light path of each of the two images having different spectrum properties and parallax to the same length and to superimpose each of the light paths to one light path; an image capturing element to detect luminance of at least two images having different spectrum properties; an optical device to focus a superimposed image on the image capturing element; and a distance computing unit to compute distance to the object using parallax between the two images. | 09-18-2014 |
20140267623 | Three-Dimensional Scanner With External Tactical Probe and Illuminated Guidance - An assembly that includes a projector and camera is used with a processor to determine three-dimensional (3D) coordinates of an object surface. The processor fits collected 3D coordinates to a mathematical representation provided for a shape of a surface feature. The processor fits the measured 3D coordinates to the shape and, if the goodness of fit is not acceptable, selects and performs at least one of: changing a pose of the assembly, changing an illumination level of the light source, changing a pattern of the transmitted light. With the changes in place, another scan is made to obtain 3D coordinates. | 09-18-2014 |
20140267624 | CALIBRATION DEVICE, CALIBRATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM - A calibration device includes a part for controlling a projection device to project a pattern onto a three-dimensional object having characteristic points, a part for controlling an imaging device to acquire images wherein the three-dimensional object with the pattern projected thereon is imaged from imaging directions, a part for estimating a relative position and attitude of the imaging device with respect to the three-dimensional object based on positions of the characteristic points of the three-dimensional object on the images, a part for estimating reflection positions of projection light rays corresponding to characteristic points of the pattern on the three-dimensional object based on positions of the characteristic points of the pattern and the relative position and attitude of the imaging device, for each image, and a part for identifying positions and directions of the projection light rays corresponding to the characteristic points of the pattern based on the reflection positions. | 09-18-2014 |
20140267625 | SYSTEMS AND METHODS FOR DYNAMICALLY IDENTIFYING A PATIENT SUPPORT SURFACE AND PATIENT MONITORING - Various patient monitoring systems can include a sensor configured to collect three dimensional information. The systems can identify a location of a patient support surface based on the three dimensional information. The systems can set a two dimensional planar threshold based on the patient support surface. The systems can identify a patient location above the patient support surface based on the three dimensional information and compare the patient location to the two dimensional planar threshold. Exceeding the threshold can be indicative of a high risk of a patient fall. An alert can be generated based on the threshold being exceeded. The systems can repeat the identification of the patient support surface location and the setting of the threshold to account for changes in the patient area. | 09-18-2014 |
20140267626 | INTELLIGENT MANUAL ADJUSTMENT OF AN IMAGE CONTROL ELEMENT - An imaging system comprises an image capturing device, a viewer, a control element, and a processor. The control element controls or adjusts an image characteristic of one of the image capturing device and the viewer. The processor is programmed to determine a depth value relative to the image capturing device, determine a desirable adjustment to the control element by using the determined depth value, and control adjustment of the control element to assist manual adjustment of the control element to the desirable adjustment. The processor may also be programmed to determine whether the adjustment of the control element is to be automatically or manually adjusted and control adjustment of the control element automatically to the desirable adjustment if the control element is to be automatically adjusted. | 09-18-2014 |
20140285624 | 3D TRACKED POINT VISUALIZATION USING COLOR AND PERSPECTIVE SIZE - One exemplary embodiment involves receiving a plurality of three-dimensional (3D) track points for a plurality of frames of a video, wherein the 3D track points are extracted from a plurality of two-dimensional source points. The embodiment further involves rendering the 3D track points across a plurality of frames of the video on a two-dimensional (2D) display. Additionally, the embodiment involves coloring each of the 3D track points wherein the color of each 3D track point visually distinguishes the 3D track point from a plurality of surrounding 3D track points, and wherein the color of each 3D track point is consistent across the frames of the video. The embodiment also involves sizing each of the 3D track points based on a distance between a camera that captured the video and a location of the 2D source points referenced by the respective one of the 3D track points. | 09-25-2014 |
20140285625 | MACHINE VISION 3D LINE SCAN IMAGE ACQUISITION AND PROCESSING - A machine vision system may perform compressive sensing by aggregating signals from multiple pixels. The aggregation of signals may be based on a sampling function. The sampling function may be formed of a product of a random basis, which may be sparse, and a filtering function. | 09-25-2014 |
20140285626 | Representation and Compression of Depth Data - The techniques and arrangements described herein provide for layered compression of depth image data. In some examples, an encoder may partition depth image data into a most significant bit (MSB) layer and a least significant bit (LSB) layer. The encoder may quantize the MSB layer and generate quantization difference data based at least in part on the quantization of the MSB layer. The encoder may apply the quantization difference data to the LSB layer to generate an adjusted LSB layer. | 09-25-2014 |
20140285627 | SOLID-STATE IMAGE PICKUP DEVICE, METHOD OF DRIVING SOLID-STATE IMAGE PICKUP DEVICE, AND ELECTRONIC APPARATUS - Provided is a solid-state image pickup device that includes: a plurality of pixels each including a photoelectric conversion element; and a transmittance control element provided on a light incident side of the photoelectric conversion element of at least a part of the plurality of pixels, and configured to change a transmittance of incident light by an external input. | 09-25-2014 |
20140285628 | WIDEBAND AMBIENT LIGHT REJECTION - Optical apparatus includes an image sensor and objective optics, which are configured to collect and focus optical radiation over a range of wavelengths along a common optical axis toward a plane of the image sensor. A dispersive element is positioned to spread the optical radiation collected by the objective optics so that different wavelengths in the range are focused along different, respective optical axes toward the plane. | 09-25-2014 |
20140285629 | SOLID-STATE IMAGING DEVICE - A color filter array has G filters, R filters, and B filters. A pair of phase difference pixels adjoining in a horizontal direction is provided with one of the G, R, and B filters. In the color filter array, a fundamental array pattern, including the G, R, and B filters, is repeatedly disposed in horizontal and vertical directions. The G filters, which most greatly contributes to obtainment of luminance information, are disposed in every line extending in the horizontal direction, the vertical direction, and slanting directions. Both of the R filters and the B filters are disposed in every line extending in the slanting directions. The number of the G filters is larger than that of the R filters or the B filters. | 09-25-2014 |
20140293008 | SELF DISCOVERY OF AUTONOMOUS NUI DEVICES - A system and method providing a capture device autonomously determining its own operational window in the presence of other such devices. The capture device includes an imaging sensor having a field of view and an illumination source. A processor includes code instructing the processor to scan the field of view for another illumination source operating in a recurring window of time proximate to the capture device. If illumination occurs from another source within the recurring window, an operational window for the second illumination source is determined and a new operational window within the recurring window is established for the capture device. | 10-02-2014 |
20140293009 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An image representing reflected light of structured light is acquired, distance information is calculated, from the image, and a measurement line map storing the distance information is generated. The structured light forms a plurality of measurement lines at discrete positions, and the measurement line map is formed as a two-dimensional array in which one dimension includes elements in a number corresponding to a number of the measurement lines, and another dimension includes elements in a number corresponding to a number of pixels of the image in a lengthwise direction of the measurement lines. | 10-02-2014 |
20140300699 | METHOD AND SYSTEM OF DISCRETIZING THREE-DIMENSIONAL SPACE AND OBJECTS FOR TWO-DIMENSIONAL REPRESENTATION OF SPACE AND OBJECTS - In one exemplary embodiment, a method includes obtaining a digital image of an object. A coordinate-space position of a digital camera is defined in relation to the digital object for the digital image. A coordinate-space region around the coordinate-space position is defined. The coordinate-space region is associated with the digital image based on the coordinate-space position defined for the digital image. A digital image of a room is obtained. A three-dimensional representation of the digital image of the room is created according to the coordinate system based on the positional information of another digital camera that obtained the digital image of the room. An object proxy is located in the three-dimensional representation of the digital image. A coordinate-space region of the room of the object proxy is mapped with a substantially matching coordinate-space region of the object. The digital image of the object associated with substantially matching coordinate-space region is overlaid onto the digital image of the room. | 10-09-2014 |
20140300700 | BURST-MODE TIME-OF-FLIGHT IMAGING - An imager includes an emitter, an array of pixel elements, and driver logic. The emitter releases bursts of light pulses with pauses between bursts. Each element of the array has a finger gate biasable to attract charge to the surface, a reading node to collect the charge, and a transfer gate to admit such charge to the reading node and to deter such charge from being absorbed into the finger gate. The driver logic biases the finger gates in synchronism with the modulated light pulses such that the finger gates of adjacent first and second elements cycle with unequal phase into and out of a charge-attracting state. To reduce the effects of ambient light on the imager, the driver logic is configured to bias the transfer gates so that the charge is admitted to the reading node only during the bursts and is prevented from reaching the reading node during the pauses. | 10-09-2014 |
20140300701 | 3D IMAGE ACQUISITION APPARATUS AND METHOD OF GENERATING DEPTH IMAGE IN THE 3D IMAGE ACQUISITION APPARATUS - Provided are a three-dimensional (3D) image acquisition apparatus, and a method of generating a depth image in the 3D image acquisition apparatus. The method may include sequentially projecting a light transmission signal, which is generated from a light source, to a subject, modulating reflected light, which is reflected by the subject, using a light modulation signal, calculating a phase delay using a combination of a first plurality of images of two groups, from among a second plurality of images of all groups obtained by capturing the modulated reflected light, and generating a depth image based on the phase delay. | 10-09-2014 |
20140307052 | APPARATUSES AND METHODS FOR EXTRACTING DEFECT DEPTH INFORMATION AND METHODS OF IMPROVING SEMICONDUCTOR DEVICE MANUFACTURING PROCESSES USING DEFECT DEPTH INFORMATION - Apparatuses and methods for extracting defect depth information and methods of improving semiconductor device manufacturing processes using defect depth information are provided. The apparatuses may include an inspection assembly configured to obtain a plurality of optical images of a portion of an inspection object including a defect along a depth direction and a processor circuit configured to generate defect data using the plurality of optical images and provide defect depth information by comparing the defect data with comparison data in a library database. | 10-16-2014 |
20140307053 | METHOD OF PROMPTING PROPER ROTATION ANGLE FOR IMAGE DEPTH ESTABLISHING - A controlling method suitable for an electronic apparatus is disclosed herein. The electronic apparatus includes a motion sensor, an image capturing unit, a display unit and a processing unit. The controlling method includes following steps. An initial orientation of the electronic apparatus is obtained by the motion sensor when a first image is captured by the electronic apparatus. A predetermined rotation angle relative to the initial orientation is assigned. A rotation prompt indicating the predetermined rotation angle is displayed via the display unit. | 10-16-2014 |
20140313292 | DETERMINING DEPTH DATA FOR A CAPTURED IMAGE - A method, system, and one or more computer-readable storage media for depth acquisition from density modulated binary patterns are provided herein. The method includes capturing a number of images for a scene using an IR camera and a number of IR lasers including diffraction gratings. Each image includes a density modulated binary pattern carrying phase information. The method also includes performing pixel based phase matching for the images to determine depth data for the scene based on the phase information carried by the density modulated binary patterns. | 10-23-2014 |
20140313293 | DEPTH MEASURING SCHEMES - A system may include a first apparatus configured to: project a first particle pattern onto a first target region; and a second apparatus configured to: detect the first particle pattern from a second target region that is overlapped with at least a part of the first target region, project a second particle pattern that is different from the first particle pattern onto the second target region, capture an image of the second target region, and process the captured image to reconstruct a three-dimensional (3D) image of the second target region. | 10-23-2014 |
20140320601 | APPARATUS AND METHOD FOR AN INCLINED SINGLE PLANE IMAGING MICROSCOPE BOX (ISPIM BOX) - An apparatus for inclined single plane illumination microscopy of a sample includes a laser for launching excitation light beams at a plurality of wavelengths, a laser beam expander, an injection arm optically coupled to the laser beam expander, a conventional back-to-back microscope system, a universal dichroic mirror optically coupled to the injection arm to direct the excitation light beams into the conventional back-to-back microscope onto a sample plane in an imaging plane, and to receive fluorescence light from the sample, a universal optical adaptor optically coupled to the universal dichroic mirror, a re-imaging component optically coupled to the universal optical adaptor; and a camera output connector optically coupled to the re-imaging component, where the laser beam expander, injection arm, universal optical adapter, re-imaging component, and camera are combined in a modular unit which is arranged and configured to be coupled to the conventional back-to-back microscope. | 10-30-2014 |
20140320602 | Method, Apparatus and Computer Program Product for Capturing Images - In accordance with various example embodiments, methods, apparatuses, and computer program products are provided. A method comprises receiving a panchromatic image of a scene captured from a panchromatic image sensor, receiving a colour image of the scene captured from a colour image sensor, and generating a modified image of the scene based at least in part on processing the panchromatic image and the colour image. The apparatus comprises at least one processor and at least one memory, configured to, cause the apparatus to perform receiving a panchromatic image of a scene captured from a panchromatic image sensor, receiving a colour image of the scene captured from a colour image sensor, and generating a modified image of the scene based at least in part on processing the panchromatic image and the colour image. | 10-30-2014 |
20140320603 | METHOD AND DEVICE FOR DETERMINING 3D COORDINATES OF AN OBJECT - The invention relates to a method for determining 3D coordinates of an object. | 10-30-2014 |
20140327741 | 3D Camera And Method Of Image Processing 3D Images - A method comprises obtaining first information, the first information including depth information with a first range of unambiguity. A first image processing is performed on the first information to generate first modified information. Furthermore, second information is obtained, the second information including depth information with a second range of unambiguity. | 11-06-2014 |
20140327742 | STEREOSCOPIC MICROSCOPE - An electronic stereoscopic microscope for detecting and reproducing pairs of stereoscopic part images comprises a camera unit having at least one electronic image sensor, a dog-leg objective for generating an image of an object on the image sensor, wherein the objective comprises a first leg facing the object to be imaged, a second leg facing the image sensor and including an angle with the first leg and deflection means provided between the legs, wherein the first leg extends along an imaging axis and the second leg extends along a detection axis. The microscope furthermore comprises an electronic viewfinder for reproducing stereoscopic part images detected by means of the image sensor, with the electronic viewfinder being arranged in an observation position or being movable into an observation position which is provided at a rear side of the objective in an extension of the imaging axis. | 11-06-2014 |
20140333722 | APPARATUS AND METHOD OF PROCESSING DEPTH IMAGE USING RELATIVE ANGLE BETWEEN IMAGE SENSOR AND TARGET OBJECT - An apparatus for processing a depth image using a relative angle between an image sensor and a target object includes an object image extractor to extract an object image from the depth image, a relative angle calculator to calculate a relative angle between an image sensor used to photograph the depth image and a target object corresponding to the object image, and an object image rotator to rotate the object image based on the relative angle and a reference angle. | 11-13-2014 |
20140333723 | OBSERVATION SYSTEM, OBSERVATION PROGRAM, AND OBSERVATION METHOD - An observation system includes a microscope optical system, an image pickup unit, an image pickup control unit, a detection unit, and an observation control unit. The image pickup unit is configured to take an image of a field-of-view range of the microscope optical system. The image pickup control unit is configured to cause the image pickup unit to take images of an observation sample in the field-of-view range at a plurality of focal positions and generate detection images. The detection unit is configured to detect a three-dimensional position of an observation target object in the observation sample from the detection images. The observation control unit is configured to fit the field-of-view range of the microscope optical system to the three-dimensional position. | 11-13-2014 |
20140333724 | IMAGING DEVICE, IMAGING METHOD AND PROGRAM STORAGE MEDIUM - An imaging device includes: imaging units that capture a same object of imaging from different viewpoints; a detection unit that detects a subject from respective frame images; a range computing unit that, if plural subjects are detected, computes a range expressed by a difference between a maximum value and a minimum value among values relating to distances between the detected subjects and the corresponding imaging units; an adjusting unit that, if the difference between a range of a specific frame and a range of a frame immediately before or after the specific frame exceeds a threshold value, adjusts the range of the specific frame such that the difference is reduced; a parallax amount computing unit that computes a parallax amount corresponding to the adjusted range; and a stereoscopic image generation unit that generates a stereoscopic image from captured viewpoint images based on the computed parallax amount. | 11-13-2014 |
20140333725 | 3-DIMENSIONAL CAMERA MODULE AND METHOD FOR AUTO FOCUSING THE SAME - An exemplary embodiment of the present invention is such that an auto focus search section of a first actuator and an auto focus search section of a second actuator are different to thereby optimize an auto focus effect of the 3-D camera module. | 11-13-2014 |
20140333726 | IMAGE PROCESSING APPARATUS, IMAGE CAPTURING APPARATUS, AND DISPLAY APPARATUS - An image processing apparatus includes an image processing intensity determination unit that determines an intensity of image processing and an image processing unit that performs the image processing to image information in accordance with the intensity determined by the image processing intensity determination unit. The image processing intensity determination unit determines the intensity of the image processing at a target pixel included in the image information on the basis of a depth value indicating a depth corresponding to the target pixel and a vertical position of the target pixel in the image information. | 11-13-2014 |
20140333727 | THREE-DIMENSIONAL MEASURING DEVICE - A three-dimensional measuring device includes an extraction unit that extracts an image data set with a brightness value of each of pixels in image data within an effective range from among a plurality of image data sets at each of coordinate positions of an object to be measured, and a three-dimensional measurement unit that performs three-dimensional measurement relating to each of the coordinate positions of the object to be measured based on the extracted image data set. The extraction unit extracts the image data set imaged under a pattern light with the highest irradiation brightness among a plurality of types of pattern lights when there is a plurality of sets of the image data sets with the brightness value of each of the pixels in the image data within the effective range from among the plurality of the image data sets. | 11-13-2014 |
20140340481 | SYSTEMS AND METHODS FOR DETECTION OF CLEAR AIR TURBULENCE - Systems and methods for detection of clear air turbulence are provided. One system includes an image capture device suitable to capture one or more images of an optical phenomenon caused by non-horizontally oriented ice crystals. The system also includes a computer processor configured to receive the one or more images from the image capture device, analyze the one or more images by comparing one or more characteristics of the one or more images to one or more threshold values, and determine based on the comparing, an occurrence of clear air turbulence. | 11-20-2014 |
20140340482 | Three Dimensional Microscopy Imaging - A system and method for creating three dimensional images using probe molecules is disclosed and described. A sample is mounted on a stage. The sample has a plurality of probe molecules. The sample is illuminated with light, causing the probe molecules to luminesce. The probe luminescence can be split into at least four paths corresponding to at least four detection planes corresponding to object planes in the sample. The at least four detection planes are detected linearly via an sCMOS camera. Object planes in corresponding recorded regions of interest are recorded in the camera. A signal from the regions of interest is combined into a three dimensional image. | 11-20-2014 |
20140340483 | Method for Three-Dimensional High Resolution Localization Microscopy - A three-dimensional high-resolution localization microscopy method including illuminating a sample by excitation radiation to excite fluorescence markers in the sample to luminesce, and imaging the sample in an image frame via imaging optics along an imaging direction, wherein the image frame contains images of the luminescing fluorescence markers, and the imaging optics have a plane of focus and an optical resolution. The excitation step and imaging steps are repeated multiple times to generate a plurality of image frames, wherein the excitation steps are performed to isolate the images of the luminescing fluorescence markers in each image frame for at least some of the luminescing fluorescence markers. The location of the corresponding fluorescence marker is determined in each instance in the generated plurality of image frames from the isolated images of the luminescing fluorescence markers, and a highly resolved total image is generated from the locations determined in this way. | 11-20-2014 |
20140340484 | ILLUMINATION APPARATUS AND METHOD FOR THE GENERATION OF AN ILLUMINATED REGION FOR A 3D CAMERA - An illumination apparatus for the generation of an illuminated region for a 3D camera. | 11-20-2014 |
20140347442 | RGBZ PIXEL ARRAYS, IMAGING DEVICES, CONTROLLERS & METHODS - A pixel array includes color pixels that have a layout, and depth pixels having a layout that starts from the layout of the color pixels. Photodiodes of adjacent depth pixels can be joined to form larger depth pixels, while still efficiently exploiting the layout of the color pixels. Moreover, some embodiments are constructed so as to enable freeze-frame shutter operation of the pixel array. | 11-27-2014 |
20140347443 | INDIRECT REFLECTION SUPPRESSION IN DEPTH IMAGING - A depth-sensing method for a time-of-flight depth camera includes irradiating a subject with pulsed light of spatially alternating bright and dark features, and receiving the pulsed light reflected back from the subject onto an array of pixels. At each pixel of the array, a signal is presented that depends on distance from the depth camera to the subject locus imaged onto that pixel. In this method, the subject is mapped based on the signal from pixels that image subject loci directly irradiated by the bright features, while omitting or weighting negatively the signal from pixels that image subject loci under the dark features. | 11-27-2014 |
20140347444 | METHOD AND DEVICE FOR STEREO BASE EXTENSION OF STEREOSCOPIC IMAGES AND IMAGE SEQUENCES - The invention relates to a method and a device for improving the depth impression of stereoscopic images and image sequences. In autostereoscopic multi-viewer display devices, generally a plurality of intermediate perspectives are generated, which lead to a reduced stereo base upon perception by the viewers. The stereo base widening presented in this application leads to a significant improvement and thus to a more realistic depth impression. It can either be effected during recording in the camera or be integrated into a display device. The improvement in the depth impression is achieved by the generation of synthetic perspectives situated, in the viewing direction of the camera lenses, on the left and right of the extreme left and extreme right recorded camera perspective on the right and left lengthening of the connection line formed by the extreme left and extreme right camera perspectives. These synthetic perspectives are calculated only on the basis of a disparity map, which is supplied or which is calculated in a preprocessing step. In this case, the method presented solves the following problems: 1. Calculation of the new extension perspectives, 2. Correct repositioning of the camera perspectives supplied within the visual zones, 3. Definition of which disparities are intended to be continued in the case of collision in the extension and 4. Interpolation of the image regions which become visible as a result of the extension of the stereo base. In this case, instances of left and right masking are also identified and optically correctly maintained and supplemented in the extension perspectives. | 11-27-2014 |
20140347445 | 3D IMAGE ACQUISITION APPARATUS AND METHOD OF DRIVING THE SAME - Provided is a 3-dimensional (3D) image acquisition apparatus and a method of driving the same. The 3D image acquisition apparatus includes a light source, an optical shutter, an image sensor, an image signal processor, and a controller. The light source is configured to project illumination light on an object. The optical shutter is configured to modulate the illumination light reflected from the object with a predetermined gain waveform. The image sensor is configured to generate a depth image by detecting the illumination light modulated by the optical shutter. The image signal processor is configured to calculate a distance from the 3D image acquisition apparatus to the object using the depth image generated by the image sensor. The controller is configured to control an operation of the light source and an operation of the optical shutter. | 11-27-2014 |
20140347446 | METHOD AND APPARATUS FOR IC 3D LEAD INSPECTION HAVING COLOR SHADOWING - A system for three-dimensional inspection of leads mounted on an integrated circuit device includes an integrated circuit device, a first light source having a first color, a second light source having a second color different from the first color, a RGB color camera and a processor. The first light source is disposed at an acute angle to the integrated circuit device, and is configured to illuminate the leads such that lead shadows are created in a first color plane. The second light source is disposed in front of a surface of the integrated circuit device on which the leads are mounted, and is configured to illuminate the leads in a second color plane. The camera is configured to image the illuminated leads and lead shadows. The processor is configured to analyze the first and second color planes of a single image to detect three-dimensional bent leads. | 11-27-2014 |
20140347447 | OPTICAL SECTIONING OF A SAMPLE AND DETECTION OF PARTICLES IN A SAMPLE - An apparatus for obtaining a plurality of images of a sample includes a sample device suitable for holding a liquid sample; a first optical detection assembly including a first image acquisition device, the first optical detection assembly having an optical axis and an object plane, the object plane including an image acquisition area from which electromagnetic waves can be detected as an image by the first image acquisition device; one translation unit arranged to move the sample device and the first optical detection assembly relative to each other; and an image illumination device, wherein the apparatus is arranged to move the sample device and the first optical detection assembly relative to each other along a scanning path, which defines an angle theta relative to the optical axis, wherein theta is in the range of about 0.3 to about 89.7 degrees. | 11-27-2014 |
20140347448 | Determining the Characteristics of a Road Surface by Means of a 3D Camera - The invention relates to a method and a device for detecting the condition of a pavement surface by means of a 3D camera. | 11-27-2014 |
20140354775 | EDGE PRESERVING DEPTH FILTERING - A scene is illuminated with modulated illumination light that reflects from surfaces in the scene as modulated reflection light. Each of a plurality of pixels of a depth camera receive the modulated reflection light and observe a phase difference between the modulated illumination light and the modulated reflection light. For each of the plurality of pixels, an edginess of that pixel is recognized, and the phase difference of that pixel is smoothed as a function of the edginess of that pixel. | 12-04-2014 |
20140354776 | ULTRASONIC IMAGE PROCESSING APPARATUS AND METHOD - Disclosed herein is an ultrasonic image processing method and apparatus. The ultrasonic image processing method includes acquiring volume data by radiating ultrasonic waves to an area around the uterus, extracting at least one object candidate group based on the acquired volume data, and displaying the at least one extracted object candidate group on a screen. The ultrasonic image processing apparatus includes a data acquisition unit acquiring volume data of an area around the uterus using ultrasonic waves, a data processing unit extracting at least one object candidate group based on the acquired volume data, and a display unit displaying the at least one extracted object candidate group on a screen. | 12-04-2014 |
20140354777 | APPARATUS AND METHOD FOR OBTAINING SPATIAL INFORMATION USING ACTIVE ARRAY LENS - Disclosed herein are an apparatus and method for obtaining spatial information using an active array lens. In order to obtain spatial information in the apparatus for obtaining spatial information including the active microlens, at least one active pattern for varying a microlens' focus is determined by controlling voltage applied to a pattern of the active microlens, and at least one projection image captured by the at least one active pattern is obtained in a time-division unit. | 12-04-2014 |
20140354778 | INTEGRATED THREE-DIMENSIONAL VISION SENSOR - A three-dimensional scene sensor comprises: a deformable optical system modifying focal distance by control signal, optics imaging the scene by analog image sensor for depths corresponding to distances; the image sensor comprising a matrix of pixels grouped into sub-matrices of macro-pixels being a sub-assembly of pixels, each macro-pixel operating independently for acquisition and reading of data; a matrix of elementary processors, each macro-pixel directly connected to a dedicated processor wherein pixel data for the macro-pixel are transmitted and processed by the processor, each processor carries out, for each pixel, local processing operations calculating depth information for macro-pixel, the processors operating in parallel and independently such that the depth information is processed and calculated in parallel over all macro-pixels of the image sensor, the processors connected to at least one processing unit allowing calculations using high-level input data, calculated starting from the pixel data directly produced by the image sensor. | 12-04-2014 |
20140362184 | Method of Error Correction for 3D Imaging Device - A method is presented for correcting errors in a 3D scanner. Measurement errors in the 3D scanner are determined by scanning each of a plurality of calibration objects in each of a plurality of sectors in the 3D scanner's field of view. The calibration objects have a known height, a known width, and a known length. The measurements taken by the 3D scanner are compared to the known dimensions to derive a measurement error for each dimension in each sector. An estimated measurement error is calculated based on scans of each of the plurality of calibration objects. When scanning target objects in a given sector, the estimated measurement error for that sector is used to correct measurements obtained by the 3D scanner. | 12-11-2014 |
20140362185 | METHOD FOR CORRECTING THE ZOOM SETTING AND/OR THE VERTICAL OFFSET OF FRAMES OF A STEREO FILM AND CONTROL OR REGULATING SYSTEM OF A CAMERA RIG HAVING TWO CAMERAS - The invention relates to a method for correcting the zoom setting and/or the vertical offset in an image assembled from two sub-frames of a stereo film, wherein the one sub-frame is provided by a first camera of a camera rig and the second sub-frame is provided by a second camera of the camera rig, wherein a vertical offset is changed via a change in the pitch setting, wherein, during the operation when recording the stereo film, a difference between the present zoom results in the first sub-frame relative to the second sub-frame is measured and/or a vertical offset of the image points present in the first sub-frame in relation to those corresponding image points in the second sub-frame is measured and, on the basis of this information, correction values are calculated, with which the zoom difference and/or the vertical offset is reduced, given appropriate application to the zoom and/or pitch setting. The invention also relates to a controlling means or regulating means of a camera rig having two cameras, which is configured to carry out said method. | 12-11-2014 |
20140362186 | METHOD AND APPARATUS FOR CALIBRATION OF STEREO-OPTICAL THREE-DIMENSIONAL SURFACE-MAPPING SYSTEM - A system for, and method of, extracting a surface profile from a stereo pair of images obtained at an arbitrary setting S of an optical system, includes determining surface profile reconstruction parameters for images obtained with the optical system at a reference setting So of the optical system; determining warping parameters for a digital image processor for warping images obtained with the optical system at the arbitrary setting S into images corresponding to the reference setting So; obtaining the stereo pair of images from at least one camera of the optical system; warping the stereo pair of images into images corresponding to the reference setting So, and using the surface profile reconstruction parameters to determine the surface profile. In a particular embodiment, the surface profile is passed to a computer model of tissue deformation and used to determine an intra-surgery location of a tumor or other anatomic feature of tissue. | 12-11-2014 |
20140362187 | STEREOSCOPIC IMAGE DISPLAY CONTROL DEVICE, IMAGING APPARATUS INCLUDING THE SAME, AND STEREOSCOPIC IMAGE DISPLAY CONTROL METHOD - A system control unit | 12-11-2014 |
20140368613 | DEPTH MAP CORRECTION USING LOOKUP TABLES - Depth map correction using lookup tables is described. In an example depth maps may be generated that measure a depth to an object using differences in phase between light transmitted from a camera which illuminates the object and light received at the camera which has been reflected from the object. In various embodiments depth maps may be subject to errors caused by received light undergoing multiple reflections before being received by the camera. In an example a correction for an estimated depth of an object may be computed and stored in a lookup table which maps the amplitude and phase of the received light to a depth correction. In an example the amplitudes and phases at each modulation frequency may be used to access the lookup table which stores corrections for the depth of an object and which allows an accurate depth map to be obtained. | 12-18-2014 |
20140375769 | 3D WEARABLE GLOVE SCANNER - Disclosed is a 3D scanner in the form of a wearable glove that can be worn by a user to swiftly scan the objects that the user touches. Touching the edges or corners of the object is enough for the present invention to automatically generate the necessary 3D model of the object. The user can scan an object with holes regardless of the object's size. The wearable glove is thin and light, and can be folded and carried in the user's pocket ready for use at any time or place. | 12-25-2014 |
20140375770 | METHOD AND APPARATUS FOR DETECTION OF FOREIGN OBJECT DEBRIS - A method and a system for the detection of Foreign Object Debris (FOD) on a surface of a transport infrastructure are described. The method comprises receiving 3D profiles of the surface from at least one 3D laser sensor, the 3D laser sensor including a camera and a laser line projector, the 3D laser sensor being adapted to be displaced to scan the surface of the transport infrastructure and acquire 3D profiles of the surface; analyzing the 3D profiles using a parametric surface model to determine a surface model of the surface; identifying pixels of the 3D profiles located above the surface using the surface model; generating a set of potential FOD by applying a threshold on the pixels located above the surface model to identify a set of at least one protruding object; providing detection information about the potential FOD. | 12-25-2014 |
20150009290 | COMPACT LIGHT MODULE FOR STRUCTURED-LIGHT 3D SCANNING - A compact light module is disclosed. Multiples of such compact light modules may be used when implementing structured-light 3D scanning with a mobile computing device. In particular, a first compact light module may be adapted to diffuse light in a first pattern of parallel lines of light and a second compact light module may be adapted to diffuse light in a second pattern of parallel lines of light, the second pattern of parallel lines of light being generally perpendicular to the first pattern of parallel lines of light. A processor may control activation of the first compact light module, the second compact light module and a photography subsystem to obtain a plurality of images. The processor may then process the plurality of images to construct a three dimensional image of an object to be scanned. | 01-08-2015 |
20150009291 | ON-LINE STEREO CAMERA CALIBRATION DEVICE AND METHOD FOR GENERATING STEREO CAMERA PARAMETERS - An on-line stereo camera calibration method employed by an electronic device with a stereo camera device includes: retrieving a feature point set, and utilizing a stereo camera calibration circuit on the electronic device to calculate a stereo camera parameter set based on the retrieved feature point set. In addition, an on-line stereo camera calibration device on an electronic device with a stereo camera device includes a stereo camera calibration circuit. The stereo camera calibration circuit includes an input interface and a stereo camera calibration unit. The input interface is used to retrieve a feature point set. The stereo camera calibration unit is used to calculate a stereo camera parameter set based on at least the retrieved feature point set. | 01-08-2015 |
20150009292 | THREE-DIMENSIONAL OBJECT DETECTION DEVICE - A three-dimensional object detection device includes an image capturing unit, an image conversion unit, a three-dimensional object detection unit and a light source detection unit. The image conversion unit converts a viewpoint of the images obtained by the image capturing unit to create bird's-eye view images. The three-dimensional object detection unit detects a presence of a three-dimensional object within the adjacent lane. The three-dimensional object detection unit determines the presence of the three-dimensional object within the adjacent lane when the difference waveform information is at a threshold value or higher. The three-dimensional object detection unit sets the threshold value lower so that the three-dimensional object is more readily detected in a rearward area than in a forward area with respect to a line connecting the light source and the image capturing unit. | 01-08-2015 |
20150009293 | METHOD FOR CONTINUATION OF IMAGE CAPTURE FOR ACQUIRING THREE-DIMENSIONAL GEOMETRIES OF OBJECTS - A method for capturing at least one sub-region of a three-dimensional geometry of at least one object, for the purpose of updating an existing virtual three-dimensional geometry of the sub-region, optionally after elements of the object present in the sub-region have been modified, removed and/or added, wherein the method includes the following steps: a) providing the existing virtual three-dimensional geometry of the object, for example, from an earlier image capture, b) capturing of two-dimensional images from which spatial information of the three-dimensional geometry of the objects is obtained, c) automatic addition of spatial information obtained to existing spatial information, if applicable d) updating the existing virtual three-dimensional geometry of the sub-region of the object based on added information, e) optionally repeating the process from step b). | 01-08-2015 |
20150009294 | IMAGE PROCESSING DEVICE AND METHOD, AND IMAGING DEVICE - An image processing device comprising: an image acquisition device; a parallax information acquisition device; and a calculation device configured to calculate a first pixel and a second pixel for each pixel of the acquired image using a first digital filter and a second digital filter corresponding to the parallax information for each pixel of the acquired image, the first digital filter group and the second digital filter group being digital filter groups for giving a parallax to the acquired image and having left-right symmetry to each other, and each of the first digital filter group and the second digital filter group having filter sizes that are different depending on a magnitude of the parallax to be given, wherein the left-right symmetry of the first and second digital filter groups is different between a central part and an edge part of the image. | 01-08-2015 |
20150015671 | METHOD AND SYSTEM FOR ADAPTIVE VIEWPORT FOR A MOBILE DEVICE BASED ON VIEWING ANGLE - A 2D and/or 3D video processing device comprising a camera and a display captures images of a viewer as the viewer observes displayed 2D and/or 3D video content in a viewport. Face and/or eye tracking of viewer images is utilized to generate a different viewport. Current and different viewports may comprise 2D and/or 3D video content from a single source or from different sources. The sources of 2D and/or 3D content may be scrolled, zoomed and/or navigated through for generating the different viewport. Content for the different viewport may be processed. Images of a viewer's positions, angles and/or movements of face, facial expression, eyes and/or physical gestures are captured by the camera and interpreted by face and/or eye tracking. The different viewport may be generated for navigating through 3D content and/or for rotating a 3D object. The 2D and/or 3D video processing device communicates via wire, wireless and/or optical interfaces. | 01-15-2015 |
20150015672 | IMAGE PROCESSING DEVICE, IMAGING DEVICE, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM - An image processing method according to the present invention cuts out partial images corresponding to trimming regions, which are specified for multiple viewpoint images of a stereoscopic image obtained by pupil-division-scheme imaging, from the respective viewpoint images, generates a stereoscopic partial image including multiple partial images, generates parallax information that indicates the parallax between the partial images, adjusts the parallax between the partial images based on the parallax information, and then, for the partial images after the parallax adjustment, enhances the sharpness as the adjusted parallax amount decreases and reduces the sharpness as the adjusted parallax amount increases. | 01-15-2015 |
20150022634 | OBJECT INSPECTION SYSTEM - An inspection system is provided that includes at least one three-dimensional camera that is used to inspect an object to determine whether the object contains any defects. The defects that can be detected by the inspection system include holes, tears, improper thickness, and overlap. The inspection system is configured to alert a user in the event that the object contains a defect. | 01-22-2015 |
20150022635 | USING MULTIPLE FLASHES WHEN OBTAINING A BIOMETRIC IMAGE - A mobile communication device may have a photography subsystem, multiple light sources on a posterior side and an image signal processor (ISP). The ISP may control the photography subsystem and the timing of the flashing of the multiple light sources to obtain multiple images. From the multiple images, the ISP may construct a three-dimensional biometric. | 01-22-2015 |
20150022636 | METHOD AND SYSTEM FOR VOICE CAPTURE USING FACE DETECTION IN NOISY ENVIRONMENTS - Embodiments of the present invention are capable of determining a face direction associated with a detected subject (or multiple detected subjects) of interest within a 3D space using face detection procedures, while simultaneously avoiding the pickup of other environmental sounds. In addition, if more than one face is detected, embodiments of the present invention can automatically detect an active speaker based on the recognition of facial movements consistent with the performance of providing audio (e.g., tracking mouth movements) by those subjects whose faces were detected. Once determinations are made regarding the face direction of the detected subject, embodiments of the present invention may dynamically adjust the audio acquisition capabilities of the audio capture device (e.g., microphone devices) relative to the location of the detected subject, using beamforming techniques for instance. As such, embodiments of the present invention can detect the direction of the “talking object” and guide the audio subsystem to filter out any sound not coming from that direction. | 01-22-2015 |
20150022637 | Three-Dimensional Image Processing Apparatus, Three-Dimensional Image Processing Method, Three-Dimensional Image Processing Program, Computer-Readable Recording Medium, And Recording Device - A three-dimensional image processing apparatus includes: an image capturing part for acquiring reflected light to capture a plurality of pattern projected images; a distance image generating part capable of generating a distance image based on the plurality of pattern projected images; a tone conversion part for tone-converting the distance image generated in the distance image generating part to a low-tone distance image that has a lower number of tones than the number of tones of the distance image and is obtained by replacing height information in the distance image with a shade value of the image; and a tone conversion condition automatic setting part for automatically setting, based on the height information in the distance image, a tone conversion parameter for prescribing a tone conversion condition at the time of tone-converting the distance image to the low-tone distance image in the tone conversion part. | 01-22-2015 |
20150022638 | Three-Dimensional Image Processing Apparatus, Three-Dimensional Image Processing Method, Three-Dimensional Image Processing Program, Computer-Readable Recording Medium, And Recording Device - A head section includes: a light projecting part for projecting incident light as structured illumination of a predetermined projection pattern; the image capturing part for acquiring reflected light that is projected by the light projecting part and reflected on an inspection target, to capture a plurality of pattern projected images; a distance image generating part capable of generating a distance image based on the plurality of pattern projected images captured in the image capturing part; a head-side storage part for holding the distance image generated in the distance image generating part; and a head-side communication part for transmitting the distance image held in the storage part to the controller section. The controller section includes: a controller-side communication part for communicating with the head-side communication part; and an inspection executing part for executing predetermined inspection processing on the distance image received in the controller-side communication part. | 01-22-2015 |
20150022639 | METHOD OF CAPTURING THREE-DIMENSIONAL (3D) INFORMATION ON A STRUCTURE - A method of capturing three-dimensional (3D) information on a structure includes obtaining a first image set containing images of the structure with at least one physical camera located at a first position, the first image set comprising one or more images; reconstructing 3D information from the first image set; obtaining 3D information from a virtual camera out of an existing global model of the structure; determining correspondences between the obtained 3D information and the reconstructed 3D information; determining camera motion between the obtained 3D information and the reconstructed 3D information from the correspondences; and updating the existing global model by entering the obtained 3D information using the determined camera motion. | 01-22-2015 |
20150022640 | METHOD AND SYSTEM FOR VOLUME DETERMINATION USING A STRUCTURE FROM MOTION ALGORITHM - A volume determining method for an object on a construction site is disclosed. The method may include moving a mobile camera along a path around the object while orienting the camera repeatedly onto the object. The method may include capturing a series of images of the object from different points on the path and with different orientations with the camera, the series being represented by an image data set; performing a structure from motion evaluation with a defined algorithm using the series of images and generating a spatial representation; scaling the spatial representation with the help of given information about a known absolute reference regarding scale; defining a ground surface for the object and applying it onto the spatial representation; and calculating and outputting the absolute volume of the object based on the scaled spatial representation and the defined ground surface. | 01-22-2015 |
20150022641 | SYSTEM AND METHOD FOR ANALYZING DATA CAPTURED BY A THREE-DIMENSIONAL CAMERA - In an exemplary embodiment, a system includes a three-dimensional camera and a processor communicatively coupled to the three-dimensional camera. The processor is operable to determine a first edge of a dairy livestock, determine a second edge of the dairy livestock, determine a third edge of the dairy livestock, and determine a fourth edge of the dairy livestock. | 01-22-2015 |
20150029309 | METHOD, SYSTEM, APPARATUS, AND COMPUTER PROGRAM FOR 3D ACQUISITION AND CARIES DETECTION - A system and apparatus for obtaining images of an object, a method for operating an optical camera system to obtain images of the object, and a computer program that operates in accordance with the method. The system includes an optical system and at least one processing system. The optical system is arranged to capture at least one first image of the object while the optical system operates in an imaging mode, and is also arranged to capture at least one second image of the object while the optical system operates in a diagnostic mode. The at least one processing system is arranged to combine the first and second images. | 01-29-2015 |
20150029310 | OPTICAL SYSTEM AND METHOD FOR ACTIVE IMAGE ACQUISITION - An active image acquisition system and method are introduced. The active image acquisition system retrieves light information of an object for creating an object model. The active image acquisition system includes a processing unit, a light-emitting unit and a capturing unit. The processing unit generates a plurality of modulating signals and a plurality of synchronous signals seriatim. The light-emitting unit modulates a first light beam by the modulating signals, and the first light beam is emitted to the object. The object reflects a second light beam, carrying the light information, after the first light beam is incident on the object. The capturing unit generates an image by capturing the second light beam for each modulating signal modulating the first light beam. The processing unit performs a first algorithm on the plurality of images and the modulating signals to create the object model. | 01-29-2015 |
20150029311 | IMAGE PROCESSING METHOD AND IMAGE PROCESSING APPARATUS - An image processing method comprising: (a) receiving at least one input image; (b) acquiring a depth map from the at least one input image; and (c) performing a defocus operation according to the depth map upon one of the input images, to generate a processed image. | 01-29-2015 |
20150029312 | APPARATUS AND METHOD FOR DETECTING OBJECT AUTOMATICALLY AND ESTIMATING DEPTH INFORMATION OF IMAGE CAPTURED BY IMAGING DEVICE HAVING MULTIPLE COLOR-FILTER APERTURE - Disclosed are an apparatus and a method for detecting an object automatically and estimating depth information of an image captured by an imaging device having a multiple color-filter aperture. A background generation unit detects a movement from a current image frame among a plurality of continuous image frames captured by an MCA camera to generate a background image frame corresponding to the current image frame. An object detection unit detects an object region included in the current image frame based on differences between a plurality of color channels of the current image frame and a plurality of color channels of the background image frame. According to an embodiment of the present invention, it is possible to automatically detect an object by a repetitively updated background image frame and to accurately estimate object information by separately detecting an object for each color channel by considering a property of the MCA camera. | 01-29-2015 |
20150035943 | In-Ear Orthotic for Relieving Temporomandibular Joint-Related Symptoms - An in-ear orthotic with one or more features to help manage or reduce pain, discomfort, or other symptoms associated with temporomandibular joint disorder. Also disclosed are methods of using optical scanning to create a three dimensional replication of the ear canal that is used to design a customized in-ear orthotic to help manage one or more symptoms of temporomandibular joint disorder. | 02-05-2015 |
20150035944 | Method for Measuring Microphysical Characteristics of Natural Precipitation using Particle Image Velocimetry - A method and video sensor for measuring precipitation microphysical features based on particle image velocimetry. The CCD camera is placed facing towards the light source, which forms a three-dimensional sampling space. As the precipitation particles fall through the sampling space, double-exposure images of precipitation particles illuminated by the pulse light source are recorded by the CCD camera. Combined with the telecentric imaging system, the time between the two exposures is adaptive and can be adjusted according to the velocity of precipitation particles. The size and shape can be obtained from the images of particles; the fall velocity can be calculated from the particle displacement in the double-exposure image and the interval time; the drop size distribution and velocity distribution, precipitation intensity, and accumulated precipitation amount can be calculated by time integration. This invention provides a method for measuring the shape, size, velocity, and other microphysical characteristics of various precipitation particles. | 02-05-2015 |
20150035945 | METHOD AND SYSTEM FOR MANUFACTURING A WIG HEAD SHELL - An object head is prepared by placing a plurality of markers on the object head at a location where a wig is needed. A digital stereoscopic camera acquires a plurality of images of the object head at the location where the wig is needed from a plurality of different positions. Each of the images contains at least three of the markers on the object head. The digital image files are transferred from the camera to a digital processing device and the images are combined (stitched, sewn) in the digital processing device, where a three-dimensional digital model of the object head is generated. The wig head shell is manufactured from the three-dimensional model, for example by way of a 3D printer or by manual forming of a plaster model. | 02-05-2015 |
20150035946 | METHODS AND SYSTEMS FOR THREE DIMENSIONAL OPTICAL IMAGING, SENSING, PARTICLE LOCALIZATION AND MANIPULATION - Embodiments include methods, systems, and/or devices that may be used to image, obtain three-dimensional information from a scene, and/or locate multiple small particles and/or objects in three dimensions. A point spread function (PSF) with a predefined three dimensional shape may be implemented to obtain high Fisher information in 3D. The PSF may be generated via a phase mask, an amplitude mask, a hologram, or a diffractive optical element. The small particles may be imaged using the 3D PSF. The images may be used to find the precise location of the object using an estimation algorithm such as maximum likelihood estimation (MLE), expectation maximization, or Bayesian methods, for example. Calibration measurements can be used to improve the theoretical model of the optical system. Fiduciary particles/targets can also be used to compensate for drift and other types of movement of the sample relative to the detector. | 02-05-2015 |
20150035947 | MOBILE DEVICE CAPTURE AND DISPLAY OF MULTIPLE-ANGLE IMAGERY OF PHYSICAL OBJECTS - Methods and systems for displaying multiple-perspective imagery of physical objects are presented. In an example method, a visual presentation of an object is accessed at a mobile device, the presentation having multiple images of the object from varying perspectives relative to the object. Input is received from a user of the mobile device by way of detecting movement imparted on the mobile device by the user. The presentation is presented to the user of the mobile device according to the input. The input determines a presentation speed and order of the images. | 02-05-2015 |
20150042755 | METHOD FOR INSTRUCTING A 3D PRINTING SYSTEM COMPRISING A 3D PRINTER AND 3D PRINTING SYSTEM - A method for instructing a 3D printing system that includes a 3D printer provided with a printing coordinate system to print at least one first object onto an existing second object comprises providing or receiving at least one image representing at least a part of the existing second object, determining or receiving an alignment between at least part of the at least one first object and at least part of the existing second object, determining a pose of the existing second object relative to the printing coordinate system according to the at least one image, and providing the 3D printing system with the pose and the alignment for the 3D printer to print at least part of the at least one first object onto the existing second object according to the pose and the alignment. | 02-12-2015 |
20150042756 | IMAGE CAPTURING DEVICE, IMAGE DISPLAY METHOD, AND RECORDING MEDIUM - In the related art, it was difficult to compare the lengths of a plurality of objects present at different places. However, the lengths of photographed objects can easily be compared using an image capturing device which displays a length of an object calculated based on parallax information, by obtaining as inputs an image in which the object is photographed and the parallax information corresponding to the image. The device includes an object extraction unit which extracts an image of an object from the photographed image using the parallax information; a comparison data maintaining unit which maintains the image of the object and the length of the object; an object comparison unit which compares the length of the object extracted by the object extraction unit to a length of comparison data extracted from the comparison data maintaining unit; and an image composition unit which combines a comparison result with the photographed image and outputs the image. | 02-12-2015 |
20150042757 | LASER SCANNING SYSTEMS AND METHODS - A three-dimensional scanner uses a rotatable mounting structure to secure a laser line source in a manner that permits rotation of a projected laser line about an axis of the laser, along with movement of the laser through an arc in order to conveniently position and orient the resulting laser line. Where the laser scanner uses a turntable or the like, a progressive calibration scheme may be employed with a calibration fixture to calibrate a camera, a turntable, and a laser for coordinated use as a three-dimensional scanner. Finally, parameters for a scan may be automatically created to control, e.g., laser intensity and camera exposure based on characteristics of a scan subject such as surface characteristics or color gradient. | 02-12-2015 |
20150042758 | LASER SCANNING SYSTEMS AND METHODS - A three-dimensional scanner uses a rotatable mounting structure to secure a laser line source in a manner that permits rotation of a projected laser line about an axis of the laser, along with movement of the laser through an arc in order to conveniently position and orient the resulting laser line. Where the laser scanner uses a turntable or the like, a progressive calibration scheme may be employed with a calibration fixture to calibrate a camera, a turntable, and a laser for coordinated use as a three-dimensional scanner. Finally, parameters for a scan may be automatically created to control, e.g., laser intensity and camera exposure based on characteristics of a scan subject such as surface characteristics or color gradient. | 02-12-2015 |
20150042759 | DEVICE FOR OPTICALLY SCANNING AND MEASURING AN ENVIRONMENT - A device for optically scanning and measuring an environment is provided. The device includes a movable scanner having at least one first projector for producing at least one uncoded first pattern on an object in the environment. The scanner includes at least one camera for recording images of the object provided with the pattern and a controller coupled to the first projector and the camera. The device further includes at least one second projector which projects a stationary uncoded second pattern on the object while the scanner is moved. The controller has a processor configured to determine a set of three-dimensional coordinates of points on a surface of the object from a set of images acquired by the camera based at least in part on the first pattern. The controller is further configured to register the set of images based in part on the stationary second pattern. | 02-12-2015 |
20150049168 | DYNAMIC ADJUSTMENT OF IMAGING PARAMETERS - Representative implementations of devices and techniques provide adjustable parameters for imaging devices and systems. Dynamic adjustments to one or more parameters of an imaging component may be performed based on changes to the relative velocity of the imaging component or to the proximity of an object to the imaging component. | 02-19-2015 |
20150049169 | HYBRID DEPTH SENSING PIPELINE - An apparatus for a hybrid tracking and mapping is described herein. The apparatus includes logic to determine a plurality of depth sensing techniques. The apparatus also includes logic to vary the plurality of depth sensing techniques based on a camera configuration. Additionally, the apparatus includes logic to generate a hybrid tracking and mapping pipeline based on the depth sensing techniques and the camera configuration. | 02-19-2015 |
20150049170 | METHOD AND APPARATUS FOR VIRTUAL 3D MODEL GENERATION AND NAVIGATION USING OPPORTUNISTICALLY CAPTURED IMAGES - A method and device for opportunistically collecting images of a location via a mobile device, the device including algorithms that allow the selective winnowing of collected images to facilitate the creation of a 3D model from the images while minimizing the storage needed to hold the pictures. | 02-19-2015 |
20150049171 | PROCESSING APPARATUS - A processing apparatus including a chuck table for holding a workpiece, a processing unit for processing the workpiece held on the chuck table, and a feeding mechanism for relatively moving the chuck table and the processing unit in an X direction as a feeding direction. The processing apparatus further includes a three-dimensional imaging mechanism for imaging the workpiece held on the chuck table in three dimensions composed of the X direction, a Y direction perpendicular to the X direction, and a Z direction perpendicular to both the X direction and the Y direction and then outputting an image signal obtained above, a control unit for generating a three-dimensional image according to the image signal output from the three-dimensional imaging mechanism, and an output unit for outputting the three-dimensional image generated by the control unit. | 02-19-2015 |
20150054917 | SCALING A THREE DIMENSIONAL MODEL USING A REFLECTION OF A MOBILE DEVICE - A computer-implemented method for scaling a three dimensional model of an object is described. In one embodiment, first and second calibration images may be shown on a display of a mobile device. The display of the mobile device may be positioned relative to a mirrored surface. A reflection of an object positioned relative to the mobile device may be captured via a camera on the mobile device. A reflection of the first and second calibration images may be captured. The captured reflection of the object and the captured reflection of the first and second calibration images may be shown on the display. A reflection of the displayed captured reflection of the first and second calibration images may be captured. It may be detected when the captured reflection of the displayed captured reflection of the first calibration image is positioned relative to the captured reflection of the second calibration image. | 02-26-2015 |
20150054918 | THREE-DIMENSIONAL SCANNER - A three-dimensional (3D) scanner including a light-source module, a screen, a rotary platform, an image capturing unit and a process unit is provided. The light-source module is configured to emit a beam. The screen disposed on a transmission path of the beam has a projection surface facing the light-source module. The rotary platform carrying a 3D object is disposed between the light-source module and the screen. The 3D object is rotated to a plurality of object orientations about a rotating axis to form a plurality of object shadows on the projection surface. The image capturing unit is configured to capture the object shadows from the projection surface to obtain a plurality of object contour images. The process unit is configured to read and process the object contour images to build a digital 3D model related to the 3D object according to the object contour images. | 02-26-2015 |
20150054919 | THREE-DIMENSIONAL IMAGE SENSOR MODULE AND METHOD OF GENERATING THREE-DIMENSIONAL IMAGE USING THE SAME - A 3D image sensor module includes an image sensor including a plurality of color pixels and a plurality of infrared pixels, and a variable filter suitable for selectively filtering visible rays or infrared rays from light, which is incident on the image sensor, in a time-division manner. | 02-26-2015 |
20150054920 | THREE-DIMENSIONAL OBJECT DETECTION DEVICE - A three-dimensional object detection device includes an image capturing unit, an image conversion unit, a three-dimensional object detection unit, a three-dimensional object assessment unit and a control unit. The image conversion unit converts a viewpoint of the images obtained by the image capturing unit to create bird's-eye view images. The three-dimensional object detection unit detects a presence of a three-dimensional object within the predetermined detection area by vehicle width direction detection processing. The three-dimensional object assessment unit assesses whether the three-dimensional object detected is another vehicle that is present within the predetermined detection area. The control unit suppresses assessment by the three-dimensional object assessment unit that the three-dimensional object is a vehicle when a specified detection position has moved rearward within the detection area in a host vehicle progress direction and arrived at a predetermined position in the host vehicle progress direction within the detection area. | 02-26-2015 |
20150054921 | SLIDE SCANNER WITH DYNAMIC FOCUS AND SPECIMEN TILT AND METHOD OF OPERATION - An instrument and method for scanning large microscope specimens on a specimen holder has a scanning optical microscope that is configured to scan the specimen in one of brightfield and fluorescence. The specimen is dynamically tiltable about a scan direction during a scan to maintain focus along the length of each scan line as the scan proceeds. A three dimensional image of the specimen can be obtained wherein the specimen tilt and relative focus are maintained from a first image contour to a second image contour through a thickness of a specimen. | 02-26-2015 |
20150054922 | FOCUS SCANNING APPARATUS - A scanner includes a camera, a light source for generating a probe light incorporating a spatial pattern, an optical system for transmitting the probe light towards the object and for transmitting at least a part of the light returned from the object to the camera, a focus element within the optical system for varying a position of a focus plane of the spatial pattern on the object, a unit for obtaining at least one image from said array of sensor elements, a unit for evaluating a correlation measure at each focus plane position between at least one image pixel and a weight function, and a processor for determining the in-focus position(s) of each of a plurality of image pixels, or each of a plurality of groups of image pixels, for a range of focus plane positions, and transforming in-focus data into 3D real world coordinates. | 02-26-2015 |
20150054923 | DEPTH SENSING WITH DEPTH-ADAPTIVE ILLUMINATION - An adaptive depth sensing system (ADSS) illuminates a scene with a pattern that is constructed based on an analysis of at least one prior-generated depth map. In one implementation, the pattern is a composite pattern that includes two or more component patterns associated with different depth regions in the depth map. The composite pattern may also include different illumination intensities associated with the different depth regions. By using this composite pattern, the ADSS can illuminate different objects in a scene with different component patterns and different illumination intensities, where those objects are located at different depths in the scene. This process, in turn, can reduce the occurrence of defocus blur, underexposure, and overexposure in the image information. | 02-26-2015 |
20150062300 | Wormhole Structure Digital Characterization and Stimulation - The present disclosure relates to digitally characterizing and simulating wormhole structures in rock. One example method includes receiving internal imaging data of a core sample of a rock formation; generating, by one or more processors of a computing system, a digital core sample model of the structure of the core sample based on the internal imaging data; and analyzing, by the one or more processors, the core sample model to determine the porosity value of the core sample. | 03-05-2015 |
20150062301 | NON-CONTACT 3D HUMAN FEATURE DATA ACQUISITION SYSTEM AND METHOD - A non-contact 3D human data acquisition system and method includes a depth-sensing camera used to acquire the front and back depth image data of the static body of a test individual, and a human characteristic algorithmic processor electrically connected with the depth-sensing camera, so as to acquire the depth image data for subsequent processing. The human characteristic algorithmic processor includes a human depth data analysis module, a human size measurement module, and a 3D human feature data acquisition module. The depth-sensing camera can be used to capture depth images, and the human characteristic algorithmic processor can operate without contacting the human body or can be operated remotely. This allows one individual to rapidly and easily obtain important characteristic data of the human body, conduct 3D human body analysis, and collect important characteristic sizes, thus helping to set up statistical databases for further analysis, research, and other applications. | 03-05-2015 |
20150062302 | MEASUREMENT DEVICE, MEASUREMENT METHOD, AND COMPUTER PROGRAM PRODUCT - According to an embodiment, a measurement device includes a first calculator, a second calculator, and a determination unit. The first calculator is configured to calculate, by using images of an object from viewpoints, first confidence for each of a plurality of first three-dimensional points in three-dimensional space, the first confidence indicating likelihood that the first three-dimensional point is a point on the object. The second calculator is configured to calculate, by using distance information indicating a measurement result of a distance from a measurement position to a measured point on the object, second confidence for each of a plurality of second three-dimensional points in the three-dimensional space, the second confidence indicating likelihood that the second three-dimensional point is a point on the object. The determination unit is configured to determine a three-dimensional point on the object by using the first confidence and the second confidence. | 03-05-2015 |
20150070465 | INTERACTIVE TELEVISION - An interactive television includes a plurality of features not available in a single television device. The interactive television includes a touch screen and can include features such as wireless internet access, a three-dimensional camera, a built-in surge protector, a surge protected outlet in the television base, and the like. The interactive television can include software for displaying time, time zones, calendars and the like. The interactive television can include an alarm clock and can be used for text messaging. The interactive television can include several technological advances in a single device, providing the user with a complete, interactive television experience. | 03-12-2015 |
20150070466 | Camera Devices And Systems Based On A Single Image Sensor And Methods For Manufacturing The Same - A camera device includes a single imaging sensor, a plurality of imaging objectives associated with the single imaging sensor, and a plurality of dedicated image areas within the single imaging sensor, each of the plurality of dedicated image areas corresponding to a respective one of the plurality of imaging objectives, such that images formed by each of the plurality of imaging objectives may be recorded by the single imaging sensor. | 03-12-2015 |
20150070467 | DEPTH KEY COMPOSITING FOR VIDEO AND HOLOGRAPHIC PROJECTION - According to embodiments herein, depth key compositing is the process of detecting specific desired portions/objects of a digital image using mathematical functions based on depth, in order to separate those specific portions/objects for further processing. In one particular embodiment, a digital visual image is captured from a video capture device, and one or more objects are determined within the digital visual image that are within a particular depth range of the video capture device. From there, the one or more objects may be isolated from portions of the digital visual image not within the particular depth range, and the isolated objects are processed for visual display apart from the portions of the digital visual image not within the particular depth range. Also, in certain embodiments, the detected portion of the digital image (isolated objects) may be layered with another image, such as for film production, or used for holographic projection. | 03-12-2015 |
20150070468 | USE OF A THREE-DIMENSIONAL IMAGER'S POINT CLOUD DATA TO SET THE SCALE FOR PHOTOGRAMMETRY - A triangulation-type, three-dimensional imager device uses photogrammetry to provide alignment or registration of the multiple point clouds of an object generated by the imager. The imager does not need a calibrated artifact such as a scale bar in its use of the photogrammetry process but instead uses the point cloud data generated by the imager to set the scale required by and utilized in the photogrammetry process. | 03-12-2015 |
20150070469 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - A plurality of cross section images which can be included in the range of a specific error occurring when specifying a correspondence cross section in a three-dimensional medical image of an object corresponding to a cross section of interest of the object are acquired from the three-dimensional medical image. The acquired cross section images are displayed on a display screen. | 03-12-2015 |
20150070470 | Apparatus, System, and Method for Mobile, Low-Cost Headset for 3D Point of Gaze Estimation - An apparatus, system, and method for a mobile, low-cost headset for 3D point of gaze estimation. A point of gaze apparatus may include an eye tracking camera configured to track the movements of a user's eye and a scene camera configured to create a three-dimensional image and a two-dimensional image in the direction of the user's gaze. The point of gaze apparatus may include an image processing module configured to identify a point of gaze of the user and identify an object located at the user's point of gaze by using information from the eye tracking camera and the scene camera. | 03-12-2015 |
20150077517 | APPARATUS FOR REAL-TIME 3D CAPTURE - One variation of a real-time 3D capture system for a mobile electronic device having a camera includes an infrared projector that projects a pattern onto an imaging target; an infrared sensor that captures the pattern; a control module that controls the projector and sensor, takes data from the sensor, determines depth information from the data, and transmits the depth information to the mobile electronic device; a battery that provides power to the projector, sensor, and control module; a software module connected to the mobile electronic device that controls communication of data from the camera and depth information between the control module and the mobile electronic device; a mounting bracket that removably attaches the apparatus to the mobile electronic device such that the capture system when attached maintains alignment with the camera; and a chassis that holds the projector, sensor, control module, and battery, and attaches to the mounting bracket. | 03-19-2015 |
20150077518 | APPARATUS FOR MOBILE PATTERN PROJECTION - Integration of an apparatus for recording a three-dimensional image of a measurement object in mobile computer apparatuses provides additional functions. The integration can be very cost-effective and simple. Smartphones, tablet computers and personal digital assistants can be used for such image capture. | 03-19-2015 |
20150085075 | OPTICAL MODULES THAT REDUCE SPECKLE CONTRAST AND DIFFRACTION ARTIFACTS - An optical module, for use in a depth camera, includes a plurality of laser emitting elements, each of which emits a corresponding laser beam, and a micro-lens array (MLA) that includes a plurality of lenslets. Laser beams emitted by adjacent laser emitting elements at least partially overlap one another prior to being incident on the MLA. For each lenslet of at least a majority of the lenslets of the MLA, the lenslet is at least partially filled by light corresponding to laser beams emitted by at least two of the laser emitting elements. The inclusion of the plurality of laser emitting elements is used to reduce speckle contrast. The overlap of the laser beams, and the at least partial filling of the lenslets of the MLA with light corresponding to laser beams emitted by multiple laser emitting elements, is used to reduce diffraction artifacts. | 03-26-2015 |
20150085076 | APPROACHES FOR SIMULATING THREE-DIMENSIONAL VIEWS - Approaches enable image content (e.g., still or video content) to be displayed to provide a viewer with an appearance or view of the content that is based at least in part upon a current relative position and/or orientation of the viewer with respect to the device, as well as changes in that relative position and/or orientation. For example, positional data can be used to render image content from a perspective that is consistent with a viewing angle for the current relative position of the viewer. As that viewing angle changes, the content can be re-rendered or otherwise updated to display the image content from a perspective that reflects the change in viewing angle. The content can include various portions, and different adjustments can be applied to each portion based upon these and/or other such changes. These adjustments can include, for example, changes due to parallax or occlusion, which when added to the rendered content in response to relative movement between a viewer and a device can enhance the experience of the viewer and increase realism for content rendered on a two- or three-dimensional display screen. | 03-26-2015 |
20150085077 | READ-OUT MODE CHANGEABLE DIGITAL PHOTOGRAPHING APPARATUS AND METHOD OF CONTROLLING THE SAME - Provided are a digital photographing apparatus in which a read-out mode may be changed when capturing a moving image, and a method of controlling the digital photographing apparatus. The method includes capturing a moving image having a predetermined frame rate in a first read-out mode, estimating a brightness of surroundings and then determining a shutter speed of a frame to be currently captured based on the estimated brightness of the surroundings, determining whether or not an exposure time that is dependent on the shutter speed is longer than a predetermined time period, and changing the first read-out mode to a second read-out mode that has a shorter read-out time than the first read-out mode when the exposure time is longer than the predetermined time period. | 03-26-2015 |
20150085078 | Method and System for Use in Detecting Three-Dimensional Position Information of Input Device - An objective of the present invention is to provide a method and system of detecting three-dimensional position information of an input device. Herein, the input device comprises at least one light-emitting source; imaging information of the light-emitting source is captured by a camera; an input light spot of the light-emitting source is detected based on the imaging information; three-dimensional position information of the input device is obtained based on light spot attribute information of the input light spot by means of a predetermined mapping relationship. Compared with the prior art, the present invention can capture imaging information of a light-emitting source by only one camera to further obtain three-dimensional position information of an input device to which the light-emitting source belongs, thereby reducing hardware costs of the system as well as computational complexity. | 03-26-2015 |
20150085079 | DYNAMIC RANGE OF COLOR CAMERA IMAGES SUPERIMPOSED ON SCANNED THREE-DIMENSIONAL GRAY-SCALE IMAGES - A laser scanner scans an object by measuring first and second angles with angle measuring devices, sending light onto an object and capturing the reflected light to determine distances and gray-scale values to points on the object, capturing a sequence of color images with a color camera at different exposure times, determining 3D coordinates and gray-scale values to points on the object, determining from the sequence of color images an enhanced color image having a higher dynamic range than available from any single color image, and superimposing the enhanced color image on the 3D gray-scale image to obtain an enhanced 3D color image. | 03-26-2015 |
20150092015 | CAMERA BASED SAFETY MECHANISMS FOR USERS OF HEAD MOUNTED DISPLAYS - The disclosure provides methods and systems for warning a user of a head mounted display that the user is approaching an edge of the field of view of a camera or one or more tangible obstacles. The warning includes presenting audio and/or displayable messages to the user, or moving the display(s) of the head mounted display away from the user's eyes. The determination that the user approaches the edge of the scene or a tangible obstacle is made by dynamically tracking the motions of the user through analysis of images and/or depth data obtained from image sensor(s) and/or depth sensor(s) secured to either the head mounted display, arranged outside of the scene and not secured to the head mounted display, or a combination of both. | 04-02-2015 |
20150092016 | Method For Processing Data And Apparatus Thereof - A method for processing data and an apparatus thereof are provided. The method includes: projecting a first structure light and a second structure light onto a surface of a target object, wherein the first structure light is a stripe-structure light; capturing a first image comprising the target object; detecting first image information corresponding to the first structure light in the first image, wherein the first image information is stripe image information; detecting second image information corresponding to the second structure light in the first image; and obtaining a depth of the target object based on the first image information and the second image information. | 04-02-2015 |
20150092017 | METHOD OF DECREASING NOISE OF A DEPTH IMAGE, IMAGE PROCESSING APPARATUS AND IMAGE GENERATING APPARATUS USING THEREOF - Provided are a method of decreasing the noise of a depth image which predicts the noise for each pixel of the depth image using the difference in depth values of two adjacent pixels of the depth image and the reflectivity of each pixel of an intensity image, and an image processing apparatus and an image generating apparatus that use the method. | 04-02-2015 |
20150092018 | METHOD AND APPARATUS GENERATING COLOR AND DEPTH IMAGES - Provided are methods and apparatuses generating a color image and a depth image by using a first filter that transmits light in multiple wavelength bands and a second filter that transmits light in a particular wavelength band that is included in multiple wavelength bands. | 04-02-2015 |
20150092019 | IMAGE CAPTURE DEVICE - Read electrodes are provided to drain signal charge of pixels from photoelectric conversion units provided in the pixels separately to a vertical transfer unit. During a first exposure period during which an object is illuminated with infrared light, signal charge obtained from a first pixel, and signal charge obtained from a second pixel adjacent to the first pixel, are added together in the vertical transfer unit to produce first signal charge. During a second exposure period during which the object is not illuminated with infrared light, signal charge obtained from the first pixel, and signal charge obtained from the second pixel adjacent to the first pixel, are transferred without being added to the first signal charge in the vertical transfer unit, and are added together in another packet to produce second signal charge. | 04-02-2015 |
20150097928 | INTRA-FRAME CONTROL OF PROJECTOR ON-OFF STATES - A “Concurrent Projector-Camera” uses an image projection device in combination with one or more cameras to enable various techniques that provide visually flicker-free projection of images or video, while real-time image or video capture is occurring in that same space. The Concurrent Projector-Camera provides this projection in a manner that eliminates video feedback into the real-time image or video capture. More specifically, the Concurrent Projector-Camera dynamically synchronizes a combination of projector lighting (or light-control points) on-state temporal compression in combination with on-state temporal shifting during each image frame projection to open a “capture time slot” for image capture during which no image is being projected. This capture time slot represents a tradeoff between image capture time and decreased brightness of the projected image. Examples of image projection devices include LED-LCD based projection devices, DLP-based projection devices using LED or laser illumination in combination with micromirror arrays, etc. | 04-09-2015 |
20150103143 | CALIBRATION SYSTEM OF A STEREO CAMERA AND CALIBRATION METHOD OF A STEREO CAMERA - A calibration method of a stereo camera includes optionally setting a plurality of camera calibration parameters and a plurality of image rectification parameters of the stereo camera; executing an image capture step on at least one left eye pattern and at least one right eye pattern corresponding to each two-dimensional image of at least one two-dimensional image; generating a plurality of new camera calibration parameters according to a plurality of first images corresponding to all left eye patterns corresponding to the at least one two-dimensional image and a plurality of second images corresponding to all right eye patterns corresponding to the at least one two-dimensional image; and generating a plurality of new image rectification parameters of the stereo camera according to the plurality of new camera calibration parameters. | 04-16-2015 |
20150103144 | IMAGE PROCESSING DEVICE, IMAGING DEVICE, AND IMAGE PROCESSING PROGRAM - An image processing apparatus is provided, that is configured to: extract a first pixel value corresponding to a first viewpoint that is one of a plurality of viewpoints to capture a subject image, at a target pixel position from image data having the first pixel value; extract second and third luminance values corresponding to second and third viewpoints that are different from the first viewpoint, at the target pixel position from luminance image data having the second and third luminance values; and calculate at least any of second and third pixel values of the second and third viewpoints such that a relational expression between the second or third pixel value and the first pixel value extracted by the pixel value extracting unit remains correlated with a relational expression defined by the second and third luminance values. | 04-16-2015 |
20150103145 | IMAGE PROCESSING APPARATUS, IMAGING APPARATUS, AND IMAGE PROCESSING METHOD - An image processing apparatus includes an image acquisition unit that acquires a plurality of images, a corresponding point acquisition unit, a first fundamental matrix calculation unit, a depth calculation unit, a corresponding point extraction unit, and a fundamental matrix determination unit. The corresponding point acquisition unit acquires a plurality of first corresponding points of the images. The first fundamental matrix calculation unit calculates a first fundamental matrix based on the first corresponding points. The depth calculation unit calculates depths for the first corresponding points based on the first fundamental matrix. The corresponding point extraction unit extracts a plurality of second corresponding points from the first corresponding points based on the depths. The fundamental matrix determination unit calculates a second fundamental matrix based on the second corresponding points. | 04-16-2015 |
20150109414 | PROBABILISTIC TIME OF FLIGHT IMAGING - An embodiment of the invention provides a time of flight three-dimensional TOF-3D camera that determines distance to features in a scene responsive to amounts of light from the scene registered by pixels during different exposure periods and an experimentally determined probabilistic model of how much light the pixels are expected to register during each of the different exposure periods. | 04-23-2015 |
20150109415 | SYSTEM AND METHOD FOR RECONSTRUCTING 3D MODEL - A system and method for reconstructing a three-dimensional (3D) model are described. The 3D model reconstruction system includes: a low resolution reconstruction unit that converts a first depth map, acquired by scanning a scene with a depth camera, into a second depth map having a low resolution, processes the second depth map to extract pose change information about a pose change of the depth camera, and reconstructs a low resolution 3D model in real-time; and a high resolution reconstruction unit that processes the first depth map by using the pose change information of the depth camera that is extracted from the second depth map and reconstructs a high resolution 3D model. | 04-23-2015 |
20150109416 | DEPTH MAP GENERATION - Aspects of the disclosure relate generally to generating depth data from a video. As an example, one or more computing devices may receive an initialization request for a still image capture mode. After receiving the request to initialize the still image capture mode, the one or more computing devices may automatically begin to capture a video including a plurality of image frames. The one or more computing devices track features between a first image frame of the video and each of the other image frames of the video. Points corresponding to the tracked features may be generated by the one or more computing devices using a set of assumptions. The assumptions may include a first assumption that there is no rotation and a second assumption that there is no translation. The one or more computing devices then generate a depth map based at least in part on the points. | 04-23-2015 |
20150109417 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR MODIFYING ILLUMINATION IN AN IMAGE - In accordance with an example embodiment a method, apparatus and a computer program product are provided. The method comprises partitioning an image into a plurality of super pixel cell areas and determining surface orientations for the plurality of super pixel cell areas. A surface orientation is determined for a super pixel cell area based on depth information associated with the image. The method further comprises receiving at least one virtual light source indication for modifying an illumination associated with the image. The illumination is modified by modifying brightness associated with one or more super pixel cell areas from among the plurality of super pixel cell areas based on the at least one virtual light source indication and surface orientations corresponding to the one or more super pixel cell areas from among the determined surface orientations for the plurality of super pixel cell areas. | 04-23-2015 |
20150116459 | SENSING DEVICE AND SIGNAL PROCESSING METHOD THEREOF - A sensing device provided in the present invention includes a first sensor, a second sensor, a synchronizer, and a combiner. The first sensor has a first resolution and is configured to generate a first image data stream. The second sensor has a second resolution and is configured to generate a second image data stream. The second resolution is different from the first resolution. The synchronizer is electrically coupled to the first sensor and the second sensor and configured to control timing sequences of the first image data stream and the second image data stream such that the first image data stream and the second image data stream have synchronized vertical synchronization signals. The combiner is configured to combine the first image data stream and the second image data stream to form an output data stream. | 04-30-2015 |
20150116460 | METHOD AND APPARATUS FOR GENERATING DEPTH MAP OF A SCENE - A method and an apparatus for generating the depth map of a scene are described. The method comprises the steps of: projecting a structured light pattern with homogeneous density onto the scene to obtain a first depth map; segmenting the scene into at least one area based on the depth information in the first depth map; and projecting a structured light pattern with a heterogeneous density onto the scene by adapting the density of the light pattern to the at least one area of the scene to obtain a second depth map of the scene. | 04-30-2015 |
20150116461 | METHOD AND SCANNER FOR TOUCH FREE DETERMINATION OF A POSITION AND 3-DIMENSIONAL SHAPE OF PRODUCTS ON A RUNNING SURFACE - Line scanning of a radiated and defined strip pattern. | 04-30-2015 |
20150124052 | IMAGE PROCESSING APPARATUS, INFORMATION PROCESSING APPARATUS, AND IMAGE PROCESSING METHOD - An image processing apparatus comprising an interpolation processing unit configured to generate a plurality of interpolation images based on a plurality of viewpoint images. The image processing apparatus further comprises a development processing unit configured to develop a subject image based on a development parameter and a plurality of ray vectors associated with the plurality of interpolation images and the plurality of viewpoint images. | 05-07-2015 |
20150124053 | INFORMATION PROCESSOR - Disclosed herein is an information processor configured to register user face identification data, the information processor including: a captured image display section adapted to display part of a captured image on a display; a guidance display section adapted to display, on the display, guidance prompting a user to rotate his or her face relative to an imaging device; and a registration processing section adapted to register face identification data based on user's face image included in the captured image after or while the guidance is displayed. | 05-07-2015 |
20150124054 | YIELD MEASUREMENT AND BASE CUTTER HEIGHT CONTROL SYSTEMS FOR A HARVESTER - A system is provided that can include a 3D sensor. The 3D sensor can be configured to detect an area of an elevator on a harvester. The 3D sensor can further be configured to transmit a first signal associated with the area. The system can also include a processing device in communication with the 3D sensor. The system can further include a memory device in which instructions executable by the processing device are stored for causing the processing device to receive the first signal and determine a volume of a material on the elevator based on the first signal. | 05-07-2015 |
20150124055 | INFORMATION PROCESSING APPARATUS, METHOD, AND STORAGE MEDIUM - To perform a high-accuracy three-dimensional measurement by performing an appropriate calibration in accordance with various temperature changes, an information processing apparatus that decides a temperature-dependent parameter of a projection apparatus configured to project a pattern onto a measurement target object to perform a three-dimensional measurement includes a holding unit configured to hold a relationship in which the temperature-dependent parameter of the projection apparatus is set as a temperature function, a temperature input unit configured to input a temperature of the projection apparatus, and a temperature-dependent parameter decision unit configured to decide the temperature-dependent parameter of the projection apparatus based on the temperature of the projection apparatus which is input by the temperature input unit and the relationship. | 05-07-2015 |
20150124056 | APPARATUS AND METHOD FOR PICKING UP ARTICLE DISPOSED IN THREE-DIMENSIONAL SPACE USING ROBOT - An article pickup device configured so as to select a first and second three-dimensional points present in the vicinity of each other based on position information of the plurality of three-dimensional points acquired by a three-dimensional measurement instrument and image data acquired by a camera, acquire an image gradient information in a partial image region including points on an image corresponding to these three-dimensional points, judge whether the first and second three-dimensional points are present on the same article based on a position information of the three-dimensional points and the image gradient information, and add the first and second three-dimensional points to the same connected set when it is judged that the first and second three-dimensional points are present on the same article. | 05-07-2015 |
20150124057 | APPARATUS AND METHOD FOR PICKING UP ARTICLE RANDOMLY PILED USING ROBOT - An article pickup apparatus configured so as to measure surface positions of articles randomly piled on the three-dimensional space using a three-dimensional measurement instrument to acquire position information of three-dimensional points, determine a connected set made by connecting three-dimensional points present in the vicinity of each other among the three-dimensional points, and identify a position and posture of an article based on the position information of three-dimensional points belonging to the connected set. The posture of the article is identified by calculating a main component direction of the connected set by applying main component analysis to the three-dimensional points belonging to the connected set and identifying the posture of the article based on the main component direction. | 05-07-2015 |
20150124058 | CLOUD-INTEGRATED HEADPHONES WITH SMART MOBILE TELEPHONE BASE SYSTEM AND SURVEILLANCE CAMERA - The invention discloses technological improvements to acoustic devices, particularly the headphone, previously functioning as an audio device but now being configured as a smart, acoustic multimedia device comprising, but not restricted to, a headband and two earpieces, which may be worn on, over, around the head/ear, or handheld. Either of the earpieces or headband houses a smart mobile telephone base system (consisting of an internal circuit board fitted with Internal/flash/Micro-USB/SD card and/or cloud-integrated memory, a full duplex transceiver/transmitter-receiver unit which is functional at different frequencies and capable of modulation/demodulation) thus possessing the ability for use as a UE (user equipment/smart mobile device); a smart alert response/surveillance/security camera; and micro-speakers which provide loudspeaker functions when remotely connected to an external audio source. All features are operable through a control interface and/or a smart remote controller unit. | 05-07-2015 |
20150130901 | METHOD AND SYSTEM FOR ENHANCED STRUCTURAL VISUALIZATION BY TEMPORAL COMPOUNDING OF SPECKLE TRACKED 3D ULTRASOUND DATA - An ultrasound device acquires ultrasound image data corresponding to a plurality of volume frames of an object and applies morphing or motion compensation to the acquired ultrasound image data to track a particular region of the object. The ultrasound device compounds the motion compensated ultrasound image data that corresponds to the tracked particular region of the object within the plurality of volume frames of the object. The ultrasound device generates a stationary single three dimensional (3D) volume of the tracked particular region of the object. The ultrasound device acquires the ultrasound image data over at least a portion of a movement cycle. The ultrasound device generates motion tracking information and speckle tracking information from the acquired ultrasound image data for the tracked particular region of the object and groups pixels corresponding to the tracked particular region of the object based on the generated motion tracking and speckle tracking information. | 05-14-2015 |
20150130902 | DISTANCE SENSOR AND IMAGE PROCESSING SYSTEM INCLUDING THE SAME - A pixel of a distance sensor includes a photosensor that generates photocharges corresponding to light incident in a first direction. The photosensor includes a plurality of first layers having a cross-sectional area increasing along the first direction after a first depth and at least one transfer gate which receives a transfer control signal for transferring the photocharges to a floating diffusion node. A strong electric field is formed in the direction in which the photocharges move horizontally or vertically in the pixel, thereby accelerating the photocharges, allowing for increased sensitivity and demodulation contrast. | 05-14-2015 |
20150130903 | POWER EFFICIENT LASER DIODE DRIVER CIRCUIT AND METHOD - A voltage mode laser diode driver selectively turns on and off a laser diode. An output stage has an output node configured to be connected to one of the terminals of the laser diode. Depending upon implementation, an active swing controller drives the output stage in a manner that substantially prevents inductive kickback from causing the output node voltage to swing above the voltage level at the voltage output of the power supply, or to swing below ground. The output stage provides a discharge path around the laser diode to shunt current associated with the inductive kickback, and substantially eliminates ringing on the output node of the output stage while the laser diode is off. A power supply controller adjusts the voltage level of the voltage output of the power supply so that current through the laser diode when on and emitting light is substantially equal to a predetermined desired current. | 05-14-2015 |
20150130904 | Depth Sensor and Method of Operating the Same - A method of operating a depth sensor includes generating a first photo gate signal and second through fourth photo gate signals respectively having 90-, 180- and 270-degree phase differences from the first photo gate signal, applying the first photo gate signal and the third photo gate signal to a first row of a pixel array and the second photo gate signal and the fourth photo gate signal to a second row adjacent to the first row in a first frame using a first clock signal, and applying the first photo gate signal and the third photo gate signal to a first column of the pixel array and the second photo gate signal and the fourth photo gate signal to a second column adjacent to the first column in a second frame using a second clock signal. | 05-14-2015 |
20150130905 | 3D SHAPE MEASUREMENT APPARATUS - Provided is a 3D shape measurement apparatus that can obtain a phase delay distribution image of an object to be measured from a single image and has simple optics. The 3D shape measurement apparatus | 05-14-2015 |
20150130906 | ARTICULATED ARM COORDINATE MEASUREMENT MACHINE HAVING A 2D CAMERA AND METHOD OF OBTAINING 3D REPRESENTATIONS - A portable articulated arm coordinate measuring machine includes a noncontact 3D measuring device that has a projector configured to emit a first pattern of light onto an object, a scanner camera arranged to receive the first pattern of light reflected from the surface of the object, an edge-detecting camera arranged to receive light reflected from an edge feature of the object, and a processor configured to determine first 3D coordinates of an edge point of the edge feature based on electrical signals received from the scanner camera and the edge-detecting camera. | 05-14-2015 |
20150130907 | PLENOPTIC CAMERA DEVICE AND SHADING CORRECTION METHOD FOR THE CAMERA DEVICE - A plenoptic camera device and a shading correction method thereof are provided. The plenoptic camera device includes a processor including a shading correction block configured to determine a four-dimensional axis with respect to a plurality of pixels in a raw image, generate a four-dimensional profile by applying a polynomial fit to the plurality of pixels in the raw image based on the four-dimensional axis, and calculate a gain using the four-dimensional profile, and a non-volatile memory device configured to store the gain. Accordingly, the plenoptic camera device can remove a vignetting effect using the gain. | 05-14-2015 |
20150130908 | DIGITAL DEVICE AND METHOD FOR PROCESSING THREE DIMENSIONAL IMAGE THEREOF - The present invention relates to a digital device capable of obtaining both a color image and a depth image and a method of processing a three dimensional image using the same. The method can include the steps of switching a resolution of a light-receiving unit from a first resolution to a second resolution which is lower than the first resolution, sensing a visible light and an infrared light from a prescribed subject, extracting color image information from the visible light sensed by a first sensing unit of the light-receiving unit during a first time, extracting depth image information from the infrared light sensed by a second sensing unit of the light-receiving unit during a second time, determining whether extraction of both the color image information and the depth image information for the subject is completed and if the extraction of the color image information and the extraction of the depth image information for the subject are completed, implementing a 3D image of the subject based on the extracted color image information and the depth image information. | 05-14-2015 |
20150138319 | IMAGE PROCESSOR, 3D IMAGE CAPTURE DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM - An image processor | 05-21-2015 |
20150138320 | High Accuracy Automated 3D Scanner With Efficient Scanning Pattern - A high accuracy automated | 05-21-2015 |
20150138321 | IMAGING APPARATUS, IMAGING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - An imaging apparatus includes: a face detector that detects a face of a subject from images captured by an imaging unit; a display controller that causes a display unit to display a guide image for prompting the subject to change a direction of the face; an angle calculation unit that calculates an angle of turn of the face of the subject from a reference position of the direction of the face when the guide image is displayed, based on pieces of image data before and after a change in the direction of the face; a distance calculation unit that calculates a distance between the imaging apparatus and the face based on the calculated angle; and an image processing unit that performs image processing on at least one of the pieces of image data, according to the change in the direction of the face, based on the angle and the distance. | 05-21-2015 |
20150138322 | IMAGE PROCESSING DEVICE AND ITS CONTROL METHOD, IMAGING APPARATUS, AND STORAGE MEDIUM - Provided is an image processing device that includes an acquisition unit configured to acquire image data and depth data corresponding to the image data; a calculating unit configured to calculate a position and attitude change for each depth from the image data and the depth data; and a determining unit configured to determine a position and attitude change of the whole image from the position and attitude change data calculated by the calculating unit, based on a statistic of the position and attitude changes calculated by the calculating unit. | 05-21-2015 |
20150138323 | IMAGE PROCESSING DEVICE AND METHOD, AND PROGRAM - There is provided an image processing device including capturing portions that respectively capture a first image and a second image that form an image for a right eye and an image for a left eye which can be stereoscopically viewed in three dimensions, a comparison portion that compares the first image and the second image captured by the capturing portions, a determination portion that determines, based on a comparison result of the comparison portion, which of the first image and the second image is the image for the right eye and which is the image for the left eye, and an output portion that outputs each of the first image and the second image, as the image for the right eye and the image for the left eye, based on a determination result of the determination portion. | 05-21-2015 |
20150145954 | GENERATING A THREE-DIMENSIONAL MODEL OF AN INDUSTRIAL PLANT USING AN UNMANNED AERIAL VEHICLE - Generating a three-dimensional model of an industrial plant using an unmanned aerial vehicle is described herein. One method includes capturing, using an unmanned aerial vehicle, a number of visual images of an industrial plant, capturing, using the unmanned aerial vehicle, a number of infrared images of the industrial plant, and forming a three-dimensional model of the industrial plant by combining the number of visual images and the number of infrared images. | 05-28-2015 |
20150145955 | THREE-DIMENSIONAL IMAGER AND PROJECTION DEVICE - The systems and methods described herein include a device that can scan the surrounding environment and construct a 3D image, map, or representation of the surrounding environment using, for example, invisible light projected into the environment. In some implementations, the device can also project into the surrounding environment one or more visible radiation patterns (e.g., a virtual object, text, graphics, images, symbols, color patterns, etc.) that are based at least in part on the 3D map of the surrounding environment. | 05-28-2015 |
20150145956 | THREE-DIMENSIONAL OBJECT DETECTION DEVICE, AND THREE-DIMENSIONAL OBJECT DETECTION METHOD - A three-dimensional object detection device has a camera, a three-dimensional object detection unit, a lens cleaning device, a lens state assessment unit and a controller. The camera has a lens for forming an image of an area rearward of a vehicle. The three-dimensional object detection unit detects a three-dimensional object rearward of the vehicle based on the captured images. The lens cleaning device sprays cleaning fluid to clean the lens of the camera. The lens state assessment unit assesses whether the lens is in a predetermined state subject to control based on a timing at which cleaning fluid is sprayed on the lens. The controller suppresses detection of the three-dimensional object by retaining detection or assessment results for a predetermined length of time that were obtained immediately before the lens was assessed to be in the state subject to control, upon assessment that the lens state is subject to control. | 05-28-2015 |
20150145957 | THREE DIMENSIONAL SCANNER AND THREE DIMENSIONAL SCANNING METHOD THEREOF - A 3D scanning method is provided. The 3D scanning method includes generating first 3D scan data by performing a 3D scan job. When a predetermined user command is input, a mode of the 3D scan job is changed to a correction mode. When the 3D scan job is resumed while the correction mode is maintained, the first 3D scan data is corrected based on second 3D scan data generated by resuming the 3D scan job. | 05-28-2015 |
20150145958 | MULTI-LENS IMAGE CAPTURING APPARATUS - The multi-lens image capturing apparatus includes multiple imaging optical systems arranged such that their optical axes are separate from one another in a direction orthogonal to the optical axes, and an image capturing unit in which multiple image-capturing areas each performing image capturing of an object through a corresponding one of the imaging optical systems are provided. The multiple imaging optical systems include multiple first imaging optical systems each having a first field angle and at least one second imaging optical system having a second field angle wider than the first field angle. When viewed from a direction of the optical axes, the optical axis of the at least one second imaging optical system is located in a first area surrounded by lines connecting positions of the optical axes of the multiple first imaging optical systems. | 05-28-2015 |
20150145959 | Use of Spatially Structured Light for Dynamic Three Dimensional Reconstruction and Reality Augmentation - The disclosure relates to marker-less augmented reality methods which are operable in mobile applications and which involve an object-tracking method that uses projection and detection of a structured light pattern to track the location, orientation, and/or movement of an object in a scene. | 05-28-2015 |
20150145960 | IR SIGNAL CAPTURE FOR IMAGES - Technologies and implementations for capturing images from IR signals are generally disclosed. | 05-28-2015 |
20150145961 | Non-uniform spatial resource allocation for depth mapping - A method for depth mapping includes providing depth mapping resources, including a radiation source, which projects optical radiation into a volume of interest containing an object, and a sensor, which senses the optical radiation reflected from the object. The volume of interest has a depth that varies with angle relative to the radiation source and the sensor. A depth map of the object is generated using the resources while applying at least one of the resources non-uniformly over the volume of interest, responsively to the varying depth as a function of the angle. | 05-28-2015 |
20150294142 | APPARATUS AND A METHOD FOR DETECTING A MOTION OF AN OBJECT IN A TARGET SPACE - An apparatus for detecting a motion of an object in a target space, wherein the object is located at a distance from an image-capturing device which is configured to measure the distance and to provide a sensor signal indicative of the distance, the sensor signal being decomposable in a decomposition including odd harmonics if the object is at rest. The apparatus includes a determining module configured to receive the sensor signal and to generate at least one motion signal which depends on at least one even harmonic of the decomposition of the sensor signal; and a detection module configured to detect the motion of the object based on the at least one motion signal and to provide a detection signal indicating the motion of the object. | 10-15-2015 |
20150294499 | REAL-TIME 3D RECONSTRUCTION WITH POWER EFFICIENT DEPTH SENSOR USAGE - Embodiments disclosed facilitate resource utilization efficiencies in Mobile Stations (MS) during 3D reconstruction. In some embodiments, camera pose information for a first color image captured by a camera on an MS may be obtained and a determination may be made whether to extend or update a first 3-Dimensional (3D) model of an environment being modeled by the MS based, in part, on the first color image and associated camera pose information. The depth sensor, which provides depth information for images captured by the camera, may be disabled, when the first 3D model is not extended or updated. | 10-15-2015 |
20150301313 | STEREOSCOPIC LENS FOR DIGITAL CAMERAS - An apparatus is disclosed, the apparatus including a lens body configured to fit within a standard cinematic movie camera, the lens body including a plurality of optical elements including a plurality of lenses and a sensor. The plurality of optical elements is arranged to receive two channels of visual images and provide the two channels of images to the sensor. | 10-22-2015 |
20150302570 | DEPTH SENSOR CALIBRATION AND PER-PIXEL CORRECTION - Various technologies described herein pertain to correction of an input depth image captured by a depth sensor. The input depth image can include pixels, and the pixels can have respective depth values in the input depth image. Moreover, per-pixel correction values for the pixels can be determined utilizing depth calibration data for a non-linear error model calibrated for the depth sensor. The per-pixel correction values can be determined based on portions of the depth calibration data respectively corresponding to the pixels and the depth values. The per-pixel correction values can be applied to the depth values to generate a corrected depth image. Further, the corrected depth image can be output. | 10-22-2015 |
20150302590 | TRACKING SYSTEM AND TRACKING METHOD USING THE SAME - A tracking system and a tracking method using the same are disclosed which are capable of minimizing the restriction of surgical space by making the system lightweight and reducing its manufacturing cost through calculating the three-dimensional coordinates of each of the markers using one image forming unit. In the tracking system and method, lights emitted from the markers are transferred to one image forming unit through two optical paths, and an image sensor of the image forming unit forms two images (a direct image and a reflection image) of the markers from the two optical paths. Therefore, the system and method reduce the manufacturing cost of the tracking system, make it small and lightweight, and impose relatively little restriction on surgical space compared with conventional tracking systems, since it is possible to calculate a spatial position and direction of the markers attached to a target by using one image forming unit. | 10-22-2015 |
20150304631 | Apparatus for Generating Depth Image - An apparatus for generating a depth image is provided, the apparatus according to an exemplary embodiment of the present disclosure being configured to perform accurate stereo matching even at a low light level by obtaining RGB images and/or IR images, and using the obtained RGB images and/or IR images to extract a depth image. | 10-22-2015 |
20150304633 | IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, IMAGE PICKUP APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An image processing method includes the steps of determining a first unnecessary component included in each of a plurality of parallax images based on a plurality of pieces of relative difference information of the parallax images (S | 10-22-2015 |
20150304634 | MAPPING AND TRACKING SYSTEM - LK-SURF, Robust Kalman Filter, HAR-SLAM, and Landmark Promotion SLAM methods are disclosed. LK-SURF is an image processing technique that combines Lucas-Kanade feature tracking with Speeded-Up Robust Features to perform spatial and temporal tracking using stereo images to produce 3D features that can be tracked and identified. The Robust Kalman Filter is an extension of the Kalman Filter algorithm that improves the ability to remove erroneous observations using Principal Component Analysis and the X84 outlier rejection rule. Hierarchical Active Ripple SLAM is a new SLAM architecture that breaks the traditional state space of SLAM into a chain of smaller state spaces, allowing multiple tracked objects, multiple sensors, and multiple updates to occur in linear time with linear storage with respect to the number of tracked objects, landmarks, and estimated object locations. In Landmark Promotion SLAM, only reliable mapped landmarks are promoted through various layers of SLAM to generate larger maps. | 10-22-2015 |
20150304636 | METHODS FOR END-USER PARALLAX ADJUSTMENT - A thermal imaging system having visible light and infrared camera modules can perform various methods for reducing parallax errors between captured visible light and infrared images. The system can perform a first calibration method, which can be manual or automatic, and can receive subsequent parallax refinement adjustments via a user interface. The parallax refinement adjustments may be stored in memory for future use. Systems can include an add-on lens capable of interfacing with the infrared camera module for producing modified infrared images. The system can perform methods to reduce parallax between modified infrared and visible light images, and can receive subsequent parallax refinement adjustments to further reduce parallax between the modified infrared and visible light images. The add-on lens parallax refinement data can be stored in memory of the camera or memory of the lens for future use in parallax correction. | 10-22-2015 |
20150304637 | POLARIZATION INDEPENDENT OPTICAL SHUTTER USING CHOLESTERIC LIQUID CRYSTALS AND THREE-DIMENSIONAL IMAGE ACQUISITION APPARATUS EMPLOYING THE SAME - Example embodiments relate to an optical shutter including a first polarization filter having a nanopore-cholesteric liquid crystal layer, which includes a cholesteric liquid crystal matrix and a plurality of liquid crystal nanopores embedded in the cholesteric liquid crystal matrix, and having a reflective wavelength band that varies according to electrical control, and a second polarization filter that is parallel to the first polarization filter, includes a nanopore-cholesteric liquid crystal layer, which includes a cholesteric liquid crystal matrix and a plurality of liquid crystal nanopores embedded in the cholesteric liquid crystal matrix, and has a reflective wavelength band that varies according to electrical control. | 10-22-2015 |
20150304638 | METHOD AND APPARATUS FOR OBTAINING 3D IMAGE - The present invention provides an apparatus and a method for obtaining a 3D image. The apparatus for obtaining the 3D image, according to one embodiment of the present invention, comprises a light transmitting portion for emitting infrared ray (IR) structured light onto a recognized object; a light receiving portion comprising an RGB-IR sensor for receiving infrared rays and visible light reflected from the recognized object; a processor for obtaining 3D image information including depth information and a visible light image of the recognized object by using each of the infrared rays and the visible light, which are received by the light receiving portion; and a lighting portion for controlling a lighting cycle of the infrared ray (IR) structured light. Also, the present invention further comprises an image recovery portion for recovering a 3D image of the recognized object by using the 3D image information which is obtained by the processor, and a display portion for providing the recovered 3D image on a visual screen. By means of the method and the apparatus for obtaining the 3D image, the present invention can adaptively respond to the brightness of ambient light so as to eliminate interference at the RGB-IR sensor. As a result, more accurate 3D images can be obtained regardless of the time or place of image capturing, such as night, day, a dark space, or a bright space. | 10-22-2015 |
20150308816 | SYSTEMS AND METHODS FOR ENHANCING DIMENSIONING - A dimensioning system can include stored data indicative of coordinate locations of each reference element in a reference image containing a pseudorandom pattern of elements. Data indicative of the coordinates of elements appearing in an acquired image of a three-dimensional space including an object can be compared to the stored data indicative of coordinate locations of each reference element. After the elements in the acquired image corresponding to the reference elements in the reference image are identified, a spatial correlation between the acquired image and the reference image can be determined. Such a numerical comparison of coordinate data reduces the computing resource requirements of graphical comparison technologies. | 10-29-2015 |
20150309315 | USING FREEFORM OPTICS FOR AUGMENTED OR VIRTUAL REALITY - Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. The system may comprise an image-generating source to provide one or more frames of image data in a time-sequential manner, a light modulator configured to transmit light associated with the one or more frames of image data, a substrate to direct image information to a user's eye, wherein the substrate houses a plurality of reflectors, a first reflector of the plurality of reflectors to reflect transmitted light associated with a first frame of image data at a first angle to the user's eye, and a second reflector to reflect transmitted light associated with a second frame of the image data at a second angle to the user's eye. | 10-29-2015 |
20150312549 | GENERATION AND USE OF A 3D RADON IMAGE - Certain aspects relate to systems and techniques for efficiently recording captured plenoptic image data and for rendering images from the captured plenoptic data. The plenoptic image data can be captured by a plenoptic or other light field camera. In some implementations, four dimensional radiance data can be transformed into three dimensional data by performing a Radon transform to define the image by planes instead of rays. A resulting Radon image can represent the summed values of energy over each plane. The original three-dimensional luminous density of the scene can be recovered, for example, by performing an inverse Radon transform. Images from different views and/or having different focus can be rendered from the luminous density. | 10-29-2015 |
20150312550 | COMBINING TWO-DIMENSIONAL IMAGES WITH DEPTH DATA TO DETECT JUNCTIONS OR EDGES - A method, system, apparatus, article of manufacture, and computer program product provide the ability to detect junctions. 3D pixel image data is obtained/acquired based on 2D image data and depth data. Within a given window over the 3D pixel image data, for each of the pixels within the window, an equation for a plane passing through the pixel is determined/computed. For all of the determined planes within the given window, an intersection of all of the planes is computed. A spectrum of the intersection/matrix is analyzed. Based on the spectrum, a determination is made if the pixel at the intersection is of 3 or more surfaces, 2 surfaces, or is 1 surface. | 10-29-2015 |
20150312555 | SINGLE-LENS, SINGLE-SENSOR 3-D IMAGING DEVICE WITH A CENTRAL APERTURE FOR OBTAINING CAMERA POSITION - A device and method for three-dimensional (3-D) imaging using a defocusing technique is disclosed. The device comprises a lens, a central aperture located along an optical axis for projecting an entire image of a target object, at least one defocusing aperture located off of the optical axis, a sensor operable for capturing electromagnetic radiation transmitted from an object through the lens and the central aperture and the at least one defocusing aperture, and a processor communicatively connected with the sensor for processing the sensor information and producing a 3-D image of the object. Different optical filters can be used for the central aperture and the defocusing apertures respectively, whereby a background image produced by the central aperture can be easily distinguished from defocused images produced by the defocusing apertures. | 10-29-2015 |
20150312557 | IMAGE PROCESSING DEVICE AND MOBILE COMPUTING DEVICE HAVING THE SAME - In an example embodiment, an image processing device includes a pixel array including pixels two-dimensionally arranged and configured to capture an image, each of the pixels including a plurality of photoelectric conversion elements and an image data processing circuit configured to generate image data from pixel signals output from the pixels. The image processing device further includes a color data processing circuit configured to extract color data from the image data and output extracted color data. The image processing device further includes a depth data extraction circuit configured to extract depth data from the image data and output extracted depth data. The image processing device further includes an output control circuit configured to control the output of the color data and the depth data. | 10-29-2015 |
20150312561 | VIRTUAL 3D MONITOR - A right near-eye display displays a right-eye virtual object, and a left near-eye display displays a left-eye virtual object. A first texture derived from a first image of a scene as viewed from a first perspective is overlaid on the right-eye virtual object and a second texture derived from a second image of the scene as viewed from a second perspective is overlaid on the left-eye virtual object. The right-eye virtual object and the left-eye virtual object cooperatively create an appearance of a pseudo 3D video perceivable by a user viewing the right and left near-eye displays. | 10-29-2015 |
20150316368 | LASER DEVICE FOR PROJECTING A STRUCTURED LIGHT PATTERN ONTO A SCENE - The present invention relates to a laser device ( | 11-05-2015 |
20150317781 | EXTRINSIC CALIBRATION OF IMAGING SENSING DEVICES AND 2D LIDARS MOUNTED ON TRANSPORTABLE APPARATUS - A method and system for determining extrinsic calibration parameters for at least one pair of sensing devices mounted on transportable apparatus obtains ( | 11-05-2015 |
20150319417 | ELECTRONIC APPARATUS AND METHOD FOR TAKING A PHOTOGRAPH IN ELECTRONIC APPARATUS - Various exemplary embodiments related to an electronic apparatus and a method for taking a photograph in the electronic apparatus are disclosed, and according to an exemplary embodiment, the electronic apparatus may include a display that displays a screen; a depth sensor that outputs a first image signal and depth information; an image sensor that outputs a second image signal; and a control unit that controls to display a preview screen on the display using the first image signal, obtain both depth information of a photographing moment and an image of the photographing moment using the second image signal in response to a request of photographing, and store the image and the depth information. Also, other various exemplary embodiments may be possible. | 11-05-2015 |
20150319421 | METHOD AND APPARATUS FOR OPTIMIZING DEPTH INFORMATION - Method and apparatus for optimizing depth information are provided. One of a left image and a right image is divided into a plurality of segmentations for obtaining a plurality of segmentation maps. A necessary repair depth map is obtained, and the necessary repair depth map is partitioned into a plurality of depth planes according to a plurality of primary depth values and a camera parameter. The primary depth values are recorded in the necessary repair depth map having a plurality of holes. A plurality of optimized depth values are respectively generated for the holes in each of the depth planes by using the segmentation maps, and the optimized depth values are filled into the depth planes to obtain an optimized depth map. | 11-05-2015 |
20150319422 | METHOD FOR PRODUCING IMAGES WITH DEPTH INFORMATION AND IMAGE SENSOR - The invention relates to the production of images associating with each point of the image a depth, i.e. a distance between the observed point and the camera that produced the image. | 11-05-2015 |
20150319425 | IMAGE PROCESS APPARATUS - An image process apparatus includes an image capture device, a filter, a receiver, an input interface, a mixture unit, and an output interface. The image capture device captures an original image and generates a depth map corresponding to the original image, wherein the original image includes at least a first object within a first depth range of the depth map and the other objects not within the first depth range of the depth map. The receiver stores a value of the first depth range and the input interface receives an input image. The filter removes the other objects from the original image and generates a temporary image which includes the first object based on the value of the first depth range. The mixture unit combines the temporary image with the input image, and generates a blending image which is then outputted by the output interface to an external display. | 11-05-2015 |
20150332450 | Stereoscopic Image Capture with Performance Outcome Prediction in Sporting Environments - Methods and apparatus relating to predicting outcome in a sporting environment are described. The methods and apparatus are used to relate trajectory performance of an object to body motions and body orientation associated with generating the trajectory of the object. When equipment is utilized to generate the trajectory of an object, then the effects of equipment motions and equipment orientation can also be related to trajectory performance. The method and apparatus can be used to predict body motions and body orientations that increase the likelihood of achieving a desired outcome, including specifying optimum motions and orientations for a particular individual. The method and apparatus may be used in training, coaching and broadcasting environments. | 11-19-2015 |
20150334318 | A METHOD AND APPARATUS FOR DE-NOISING DATA FROM A DISTANCE SENSING CAMERA - It is inter alia disclosed to determine a phase difference between a light signal transmitted by a time of flight camera system and a reflected light signal received by at least one pixel sensor of an array of pixel sensors in an image sensor of the time of flight camera system, wherein the reflected light signal received by the at least one pixel sensor is reflected from an object illuminated by the transmitted light signal ( | 11-19-2015 |
20150334348 | PRIVACY CAMERA - A privacy camera, such as a light field camera that includes an array of cameras or an RGBZ camera(s), is used to capture images and display images according to a selected privacy mode. The privacy mode may include a blur background mode and a background replacement mode and can be automatically selected based on the meeting type, participants, location, and device type. A region of interest and/or an object(s) of interest (e.g. one or more persons in a foreground) is determined and the privacy camera is configured to clearly show the region/object of interest and obscure or replace the background according to the selected privacy mode. The displayed image includes the region/object(s) of interest clearly shown (e.g. in focus) and any objects in a background of the combined image shown having a limited depth of field (e.g. blurry/not in focus) and/or the background replaced with another image and/or fill. | 11-19-2015 |
20150334371 | OPTICAL SAFETY MONITORING WITH SELECTIVE PIXEL ARRAY ANALYSIS - An imaging sensor device includes pixel array processing functions that allow time-of-flight (TOF) analysis to be performed on selected portions of the pixel array, while two-dimensional imaging analysis is performed on the remaining portions of the array, reducing processing load and response time relative to performing TOF analysis for all pixels of the array. The portion of the pixel array designated for TOF analysis can be pre-defined through configuration of the imaging sensor device. Alternatively, the imaging sensor device can dynamically select the portions of the pixel array on which TOF analysis is to be performed based on object detection and classification by the two-dimensional imaging analysis. Embodiments of the imaging sensor device can also implement a number of safety and redundancy functions to achieve a high degree of safety integrity, making the sensor suitable for use in various types of safety monitoring applications. | 11-19-2015 |
20150334372 | METHOD AND APPARATUS FOR GENERATING DEPTH IMAGE - A method of generating a depth image includes irradiating an object with a light which is generated from a light source, acquiring a plurality of phase difference signals which have different phase differences from one another, by sensing a reflection light reflected from the object, generating a first depth image based on the plurality of phase difference signals, generating a second depth image based on phase difference signals in which a motion artifact has not occurred, among the plurality of phase difference signals, generating a third depth image by combining the first depth image and the second depth image. | 11-19-2015 |
20150334374 | 3D IMAGE CAPTURE APPARATUS WITH DEPTH OF FIELD EXTENSION - A 3D imaging apparatus with enhanced depth of field to obtain electronic images of an object for use in generating a 3D digital model of the object. The apparatus includes a housing having mirrors positioned to receive an image from an object external to the housing and provide the image to an image sensor. The optical path between the object and the image sensor includes an aperture element having apertures for providing the image along multiple optical channels with a lens positioned within each of the optical channels. The depth of field of the apparatus includes the housing, allowing placement of the housing directly on the object when obtaining images of it. | 11-19-2015 |
20150334375 | IMAGE PICKUP ELEMENT AND IMAGE PICKUP APPARATUS - An image pickup element includes: light-receiving elements, a pair thereof in a row direction outputting pixel signals forming a pair of captured images that have parallax of an object, a group of light-receiving elements being formed by a central light-receiving element being disposed between the light-receiving elements in the pair; a microlens that refracts light from the object and causes the light-receiving elements to receive the light; a color filter that transmits light in accordance with color and is one of R, G, and B for each pair of light-receiving elements, R/G or B/G alternating in the row direction, and a different color being disposed for the pair and the central light-receiving element; and wiring, between the light-receiving elements in the row direction, that transmits I/O signals of the light-receiving elements. The microlens is a cylindrical lens that extends in the column direction and covers the group of light-receiving elements. | 11-19-2015 |
20150334376 | METHOD OF ACQUIRING DEPTH IMAGE AND IMAGE ACQUIRING APPARATUS USING THEREOF - A method of acquiring a depth image and an image acquiring apparatus. The method of acquiring the depth image includes receiving a plurality of phase images with respect to a subject, the plurality of phase images having phases different from one another, eliminating noise from the plurality of phase images, and acquiring the depth image with respect to the subject utilizing the plurality of phase images from which noise has been eliminated. | 11-19-2015 |
20150339522 | METHOD OF AND ANIMAL TREATMENT SYSTEM FOR PERFORMING AN ANIMAL RELATED ACTION ON AN ANIMAL PART IN AN ANIMAL SPACE - A method of determining a position of an animal part in an animal space in at least one direction, including: obtaining a two-dimensional image containing depth information; preprocessing the image according to first and second preprocessing modes, to provide respective first and second preprocessed images; comparing the first preprocessed image and the second preprocessed image to obtain at least one image difference; if the at least one image difference is below or equal to a respective predetermined threshold, processing the first preprocessed image according to a first position determining mode, and if the at least one image difference is above the predetermined threshold, processing the second preprocessed image according to a second position determining mode, to provide the position of the animal part. If a fast but not very accurate processing mode and a slower but more accurate mode are available, the method can suitably select one of these modes. | 11-26-2015 |
20150339824 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR DEPTH ESTIMATION - In an example embodiment, a method, apparatus and computer program product are provided. The method includes computing a first cost volume for a light-field image. A first depth map comprising depth information of the plurality of sub-images of the light-field image is computed based on the first cost volume. A first view image comprising reconstruction information is reconstructed based on the depth information of the plurality of sub-images. A second cost volume corresponding to the first cost volume is computed based on the reconstruction information. The second cost volume is filtered based on the first view image to generate an aggregated cost volume. A second depth map is generated based on the aggregated cost volume. The second depth map facilitates generation of a second view image that is associated with a resolution higher than a resolution of the first view image. | 11-26-2015 |
20150339825 | Stereo Camera Apparatus - A stereo camera apparatus which carries out distance measuring stably and with high accuracy by making measuring distance resolution variable according to a distance to an object is provided. A stereo camera apparatus 1 takes in two images, changes resolution of a partial area of each image that is taken in, and calculates a distance from a vehicle to an object that is imaged in the partial area, based on disparity of the partial area of each image in which resolution is changed. Thus, even when the object exists at a long distance and is small in size, distance measuring processing can be carried out stably. | 11-26-2015 |
20150341612 | IMAGE GENERATION APPARATUS AND METHOD FOR CONTROLLING THE SAME - An image generation apparatus includes a holding unit configured to acquire and hold pixel values of pixel groups that serve as targets for comparison from the respective first and second images; a reading unit configured to sequentially read out, from the pixel values of the respective pixel groups that are held in the holding unit, pixel values in one direction for each area that is set depending on a pixel position of the range image; a calculation unit configured to calculate an evaluation value based on the pixel values read out by the reading unit; an estimation unit configured to estimate the subject distance based on the evaluation value; and a generation unit configured to generate a range image based on the subject distance estimated by the estimation unit. | 11-26-2015 |
20150347841 | SYSTEM AND METHOD FOR 3D IRIS RECOGNITION - Aspects of the disclosure provide an iris recognition system. The iris recognition system can include a three-dimensional (3D) sensor that is configured to capture a 3D image of an iris, an iris feature extractor that is configured to generate an iris template based on the 3D image of the iris, and a memory that is configured to store the iris template. | 12-03-2015 |
20150348272 | IMAGE PROCESSING APPARATUS AND METHOD FOR OBTAINING POSITION AND ORIENTATION OF IMAGING APPARATUS - An image processing apparatus obtains location information of each image feature in a captured image based on image coordinates of the image feature in the captured image. The image processing apparatus selects location information usable to calculate a position and an orientation of the imaging apparatus among the obtained location information. The image processing apparatus obtains the position and the orientation of the imaging apparatus based on the selected location information and an image feature corresponding to the selected location information among the image features included in the captured image. | 12-03-2015 |
20150350627 | METHOD AND APPARATUS FOR GENERATING DEPTH VALUE CORRESPONDING TO SUBJECT USING DEPTH CAMERA - A method and apparatus for generating a depth value corresponding to a subject by using a depth camera includes dividing a predetermined time section into n sub-time sections in order to measure the depth value, acquiring a voltage value corresponding to the amount of light reflected from the subject in each sub-time section, by using at least one photodiode included in the depth camera, quantizing the voltage value to any one level among predetermined levels, on the basis of the acquired n voltage values, and outputting the quantized value. | 12-03-2015 |
20150355101 | Image Inspection Apparatus, Image Inspection Method, Image Inspection Program, Computer-Readable Recording Medium And Recording Device - An image inspection apparatus includes: an imaging section for capturing an image of a workpiece from a certain direction; an illumination section for illuminating the workpiece from different directions at least three times; an illumination controlling section for sequentially turning on the illumination sections one by one; an imaging generating section for driving the imaging section to generate a plurality of images; a normal vector calculating section for calculating a normal vector with respect to the surface of the workpiece at each of pixels by use of a pixel value of each of pixels having a corresponding relation among the plurality of images; and a contour image generating section for performing differential processing in an X-direction and a Y-direction on the calculated normal vector at each of the pixels, to generate a contour image that shows a contour of inclination of the surface of the workpiece. | 12-10-2015 |
20150355102 | Image Inspection Apparatus, Image Inspection Method, Image Inspection Program, Computer-Readable Recording Medium And Recording Device - An inspection apparatus includes: an imaging section for capturing a first reference image by simultaneously turning on three or more illumination sections, and a second reference image at imaging timing temporally after the first reference image; a tracking target image designating section for designating a tracking target image in the first reference image that is used for tracking a position of the workpiece; a corresponding position estimating section for searching a position of the tracking target image designated by the tracking target image designating section from the second reference image, and specifying a position including the tracking target image in the second reference image, to estimate a corresponding relation of pixels that correspond among each of the partial illumination images; and an inspection image generating section for generating an inspection image for photometric stereo based on the corresponding relation of each of the pixels of the partial illumination images. | 12-10-2015 |
20150355103 | Inspection Apparatus, Inspection Method, And Program - To facilitate setting of a parameter at the time of generating an inspection image from an image acquired by using a photometric stereo principle. A photometric processing part generates an inspection image based on a plurality of luminance images acquired by a camera. A display control part and a display part switch and display the luminance image and the inspection image, or simultaneously display these images. An inspection tool setting part adjusts a control parameter of the camera and a control parameter of an illumination apparatus. Further, when the control parameter is adjusted, the display control part updates the image being displayed on the display part to an image where the control parameter after the change has been reflected. | 12-10-2015 |
20150358602 | Inspection Apparatus, Inspection Method, And Program - A photometric processing part calculates a normal vector of a surface of a workpiece from a plurality of luminance images acquired by a camera in accordance with the photometric stereo method, and performs synthesis processing of synthesizing at least two images out of an inclination image made up of pixel values based on the normal vector calculated from the plurality of luminance images and at least one reduced image of the inclination image, to generate an inspection image showing a surface shape of the inspection target. In particular, a characteristic size setting part sets a characteristic size which is a parameter for giving weight to a component of a reduced image at the time of performing the synthesis processing. The photometric processing part can generate a different inspection image in accordance with the set characteristic size. | 12-10-2015 |
20150358604 | DEVICE FOR THE ACQUISITION OF A STEREOSCOPY IMAGE PAIR - The invention is a stereophotogrammetry device intended to reduce the 3D surface reconstruction artefacts due to specular reflections when a unique camera body is used. Indeed, specular reflections of the camera flash on shiny objects create virtual objects in the scene, inducing spikes in the reconstruction. The device is constituted of a computing unit enabling 3D reconstruction, a unique camera body | 12-10-2015 |
20150362826 | CAMERA AND CAMERA ASSEMBLY - A main body of a camera accommodates a light receiving part. The main body has a flat bottom surface. A support is aligned with the main body in a left-right direction, and supports the main body in such a manner that an orientation of the light receiving part can be controlled in a vertical direction. A bottom surface of the support is located on a common plane on which a bottom surface of the main body is also located. The bottom surface of the support can be attached to and detached from a stand member which is mounted to a display device. This camera promises enhanced stability of mounting of the camera to an edge of a display device. | 12-17-2015 |
20150365653 | COORDINATE MEASURING DEVICE WITH A SIX DEGREE-OF-FREEDOM HANDHELD PROBE AND INTEGRATED CAMERA FOR AUGMENTED REALITY - A method of combining 2D images into a 3D image includes providing a coordinate measurement device and a six-DOF probe having an integral camera associated therewith, the six-DOF probe being separate from the coordinate measurement device. In a first instance, the coordinate measurement device determines the position and orientation of the six-DOF probe and the integral camera captures a first 2D image. In a second instance, the six-DOF probe is moved, the coordinate measurement device determines the position and orientation of the six-DOF probe, and the integral camera captures a second 2D image. A cardinal point common to the first and second image is found and is used, together with the first and second images and the positions and orientations of the six-DOF probe in the first and second instances, to create the 3D image. | 12-17-2015 |
20150373319 | SHAPE MEASUREMENT SYSTEM, IMAGE CAPTURE APPARATUS, AND SHAPE MEASUREMENT METHOD - A shape measurement system includes one or more lighting units located in a case that illuminate a target object located in the case, one or more image capture units located in the case that capture an image of the target object, a holding unit that holds the image capture units and the lighting units so as to form a polyhedron shape approximating a sphere, a selector that selects at least one of the image capture units and at least one of the lighting units to be operated, and a shape calculator that calculates a 3-D shape of the target object based on image data captured by the selected image capture unit under light emitted by the selected lighting unit. | 12-24-2015 |
20150373321 | SIX DEGREE-OF-FREEDOM TRIANGULATION SCANNER AND CAMERA FOR AUGMENTED REALITY - A 3D coordinate measuring system includes a six-DOF unit having a unit frame of reference and including a structure, a retroreflector, a triangulation scanner, and an augmented reality (AR) color camera. The retroreflector, scanner and AR camera are attached to the structure. The scanner includes a first camera configured to form a first image of the pattern of light projected onto the object by a projector. The first camera and projector configured to cooperate to determine first 3D coordinates of a point on the object in the unit frame of reference, the determination based at least in part on the projected pattern of light and the first image. The system also includes a coordinate measuring device having a device frame of reference and configured to measure a pose of the retroreflector in the device frame of reference, the measured pose including measurements of six degrees-of-freedom of the retroreflector. | 12-24-2015 |
20150377413 | METHOD AND SYSTEM FOR CONFIGURING A MONITORING DEVICE FOR MONITORING A SPATIAL AREA - A failsafe monitoring device for monitoring a spatial area comprises at least one image recording unit. A three-dimensional image of the spatial area is recorded and a representation of said three-dimensional image is displayed in order to configure the monitoring device. A configuration plane is defined using a plurality of spatial points which have been determined within the three-dimensional image. Subsequently, at least one variable geometry element is defined relative to the configuration plane. A data record which represents a transformation of the geometry element into the spatial area is generated and transferred to the monitoring device. | 12-31-2015 |
20150379333 | Three-Dimensional Motion Analysis System - A system for recording and displaying motion of a user includes a computer-based controller comprising a computer-readable memory and a three-dimensional motion detector that comprises a red-green-blue video camera, a depth sensor and a microphone. The system records user motion and renders the user as a three-dimensional frame comprising a plurality of joints. The joints are tracked throughout the motion and then curves representing movement of the joint(s) are displayed. | 12-31-2015 |
20150379705 | SYSTEM AND METHOD FOR FILTERING DATA CAPTURED BY A 3D CAMERA - A system for processing an image comprises a three-dimensional camera that captures an image of a dairy livestock and a processor communicatively coupled to the three-dimensional camera. The processor accesses a first pixel having a first depth location, a second pixel having a second depth location, and a third pixel having a third depth location. The processor determines that the second pixel is an outlier among the first pixel and the third pixel based upon the first depth location, the second depth location, and the third depth location, and discards the second pixel from the image based at least in part upon the determination. | 12-31-2015 |
20150381955 | IMAGE PROJECTION SYSTEM - An image projection system includes an image projection means to project an image on a projection surface to form a reference image thereon; a capturing means to image the reference image to obtain a capturing result; a projection conditions correction means to correct projection conditions of the image projection means on the basis of the capturing result; an intervening member detection means to detect an intervening member lying between the image projection means and the projection surface to obtain a detection result; and a capturing result correction means to correct the capturing result on the basis of the detection result. The projection conditions correction means corrects the projection conditions on the basis of the capturing result corrected by the capturing result correction means when the intervening member is detected by the intervening member detection means. | 12-31-2015 |
20150381963 | SYSTEMS AND METHODS FOR MULTI-CHANNEL IMAGING BASED ON MULTIPLE EXPOSURE SETTINGS - A multi-channel image capture system includes: a multi-channel image sensor including a plurality of first pixels configured to detect light in a first band and a plurality of second pixels configured to detect light in a second band different from the first band; an image signal processor coupled to the multi-channel image sensor, the image signal processor being configured to: store a first plurality of capture parameters and a second plurality of capture parameters; control the multi-channel image sensor to capture a first image frame according to the first plurality of capture parameters; control the multi-channel image sensor to capture a second image frame according to the second plurality of capture parameters; and transmit the first image frame and the second image frame to a host processor. | 12-31-2015 |
20150381967 | METHOD FOR PERFORMING OUT-FOCUS USING DEPTH INFORMATION AND CAMERA USING THE SAME - A camera and a method for extracting depth information by the camera having a first lens and a second lens are provided. The method includes photographing, by the first lens, a first image; photographing, by the second lens, a second image of a same scene; down-sampling the first image to a resolution of the second image if the first image is an image having a higher resolution than a resolution of the second image; correcting the down-sampled first image to match the down-sampled first image to the second image; and extracting the depth information from the corrected down-sampled first image and the second image. | 12-31-2015 |
20160004031 | IMAGE PICKUP INFORMATION OUTPUT APPARATUS AND LENS APPARATUS EQUIPPED WITH SAME - Image pickup information output apparatus which outputs information about image pickup condition derived from combination of positions/states of condition decision members serving as optical members that affect fulfillment of the condition, comprising: setting unit for setting a condition setting value as the condition to be fulfilled; controller for driving one of the condition decision members to control its position/state based on the condition setting value, condition calculator for calculating information about the condition as calculated condition based on the combination of positions/states of the condition decision members; determination unit for determining whether or not the condition setting value changed; decision unit for determining the information about condition to be output, based on the calculated condition and the determination made by the determination unit as to whether or not the condition setting value changed; and output unit for outputting information about the condition to be output determined by the decision unit. | 01-07-2016 |
20160004920 | SPACE-TIME MODULATED ACTIVE 3D IMAGER - Three-dimensional imagers have conventionally been constrained by size, weight, and power (“SWaP”) limitations, readout circuitry bottlenecks, and the inability to image in sunlit environments. As described herein, an imager can illuminate a scene with spatially and temporally modulated light comprising a viewpoint-invariant pattern. The imager can detect diffuse reflections from the illuminated scene to produce a first image. The first image is background-corrected and compared with respect to a known reference image to produce a depth image. The imager can perform in sunlit environments and has improved SWaP and signal-to-noise (SNR) characteristics over existing technologies. | 01-07-2016 |
20160004926 | SYSTEM FOR ACCURATE 3D MODELING OF GEMSTONES - A computerized system, kit and method for producing an accurate 3D-Model of a gemstone by obtaining an original 3D-model of an external surface of the gemstone; imaging at least one selected junction with only portions of its associated facets and edges disposed adjacent the junction, the location of the junction being determined based on information obtained at least partially by using the original 3D model; analyzing results of the imaging to obtain information regarding details of the gemstone at the junction; and using the information for producing an accurate 3D-model of said external surface of the gemstone, which is more accurate than the original 3-D model. | 01-07-2016 |
20160007009 | IMAGING DEVICE AND A METHOD FOR PRODUCING A THREE-DIMENSIONAL IMAGE OF AN OBJECT - An imaging device includes an image sensor circuit including a pixel element. The pixel element is configured to receive during a first receiving time interval electromagnetic waves having a first wavelength, and to receive during a subsequent second receiving time interval electromagnetic waves having a second wavelength. The imaging device includes an image processing circuit configured to produce a color image of the object based on a first pixel image data and a second pixel image data. The first pixel image data is based on the electromagnetic waves having the first wavelength received by the pixel element during the first receiving time interval. The second pixel image data is based on the electromagnetic waves having the second wavelength received by the pixel element during the second receiving time interval. | 01-07-2016 |
20160008111 | EXTRAORAL DENTAL SCANNER | 01-14-2016 |
20160010990 | Machine Vision System for Forming a Digital Representation of a Low Information Content Scene | 01-14-2016 |
20160012588 | Method for Calibrating Cameras with Non-Overlapping Views | 01-14-2016 |
20160014315 | APPARATUS AND METHOD FOR RECONSTRUCTING A THREE-DIMENSIONAL PROFILE OF A TARGET SURFACE | 01-14-2016 |
20160014390 | Electronic Devices With Connector Alignment Assistance | 01-14-2016 |
20160014391 | User Input Device Camera | 01-14-2016 |
20160017866 | WIND TOWER AND WIND FARM INSPECTIONS VIA UNMANNED AIRCRAFT SYSTEMS - An unmanned aircraft system (UAS) to inspect equipment and a method of inspecting equipment with the UAS are described. The UAS includes a scanner to obtain images of the equipment and a memory device to store information for the UAS. The UAS also includes a processor to determine a real-time flight path based on the images and the stored information, and a camera mounted on the UAS to obtain camera images of the equipment as the UAS traverses the real-time flight path. | 01-21-2016 |
20160019688 | METHOD AND SYSTEM OF ESTIMATING PRODUCE CHARACTERISTICS - Disclosed are various embodiments for a method, system, and apparatus for taking three-dimensional images of produce. The three-dimensional image may be used to estimate the volume and other dimensions of the imaged produce. | 01-21-2016 |
20160021356 | Method and a System of a Portable Device for Recording 3D Images for 3d Video and Playing 3D Games - The present invention utilises a portable device for recording 3D images of stationary or moving objects. The main object of the present invention is to record a 2D image by capturing a single image of the object, using the focusing system of the imaging device to aim a transmitted modulated emission or projected pattern, sensing the reflection from the object, measuring the angle between the transmission (or projection) and imaging, and using the information to create a 3D image. | 01-21-2016 |
20160021357 | DETERMINING THREE DIMENSIONAL INFORMATION USING A SINGLE CAMERA - Two dimensional images captured by a camera or other device may be used to generate three dimensional information for target objects included in the two dimensional images. Sensor information and other information associated with the device capturing the two dimensional images may be obtained and used to determine a displacement or movement of the camera during capture of the two dimensional images. The displacement or movement may be used to calculate a distance of the target object in the two dimensional images. The distance information may be used to generate virtual planes corresponding to the target objects. | 01-21-2016 |
20160021359 | IMAGING PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus includes: a subject detector that detects a predetermined subject in at least one of a first input image and a second input image between which parallax is present; and a combiner that combines a subject frame corresponding to the detected subject with each of the first and second input images stereoscopically displayed on a display section in such a way that the subject frames combined with the first and second input images also provide a depth sensation. | 01-21-2016 |
20160029005 | IMAGE-CAPTURING APPARATUS AND IMAGE-CAPTURING METHOD - This invention relates to capturing an image of a subject as a three-dimensional image using a single image-capturing apparatus. The image-capturing apparatus includes a first polarization means, a lens system, and an image-capturing device array having a second polarization means. The first polarization means includes first and second regions arranged along a first direction, and the second polarization means includes multiple third and fourth regions arranged alternately along a second direction. First region transmission light having passed the first region passes the third region and reaches the image-capturing device, and second region transmission light having passed the second region passes the fourth region and reaches the image-capturing device. Thus, an image is captured to obtain a three-dimensional image in which a distance between a barycenter BC | 01-28-2016 |
20160033263 | Measurement device for the three-dimensional optical measurement of objects with a topometric sensor and use of a multi-laser-chip device - A measurement device for the three-dimensional optical measurement of objects with a topometric sensor includes at least one projection unit for projecting a pattern onto an object and at least one image recording unit for recording the pattern that is scattered back from the object. The projection unit has a laser-light source and a pattern generator to which the laser light radiation from the laser-light source can be supplied. The laser-light source has at least one multi-laser-chip device having a plurality of laser diode chips in a common multi-laser-chip package, wherein the laser diode chips are attached to a mounting surface of the multi-laser-chip package and are in thermal communication with the multi-laser-chip package via the mounting surface. | 02-04-2016 |
20160034777 | SPHERICAL LIGHTING DEVICE WITH BACKLIGHTING CORONAL RING - A method for capturing three-dimensional photographic lighting of a spherical lighting device is described. Calculation of boundaries of the spherical lighting device based on lighting properties of at least one light source in a set location of the spherical lighting device is performed. A mapping of multitude points of the spherical lighting device to three-dimensional vectors of at least one camera device using a logical grid is performed. A measurement of brightness of the logical grid of the spherical lighting device is performed. The method further comprises determining the brightest grid point of the logical grid of the spherical lighting device, wherein the brightest grid point of the logical grid is measured within a region of brightness of the spherical lighting device. The method further comprises calculating the region of brightness of the spherical lighting device based on the determined brightest grid point of the logical grid. | 02-04-2016 |
20160037149 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus obtains shape data including data which indicates vertexes of each of a plurality of polygons representing a stereoscopic object and color data indicating a color of each polygon. The shape data and the color data are converted into shape data and color data in a data format including an area which stores the shape data and an unused area which does not store the shape data. Color data on one of the plurality of polygons is stored in the unused areas corresponding to a plurality of polygons. The converted shape data and color data are output. | 02-04-2016 |
20160037151 | HAND-HELD ELECTRONIC APPARATUS, IMAGE CAPTURING APPARATUS AND METHOD FOR OBTAINING DEPTH INFORMATION - A hand-held electronic apparatus, an image capturing apparatus and a method for obtaining depth information are provided. The image capturing apparatus includes a time of fly (TOF) image capturer, a TOF controller, main and sub image capturers, and a controller. The TOF image capturer calculates a TOF depth map according to a TOF image, defines an effective region and an un-effective region according to the TOF depth map, and obtains a first depth information set of the effective region. The main and sub image capturers capture a first image and a second image, respectively. The controller obtains a second depth information set of the un-effective region by comparing the first and second images, and generates an overall depth map by combining the first depth information set and the second depth information set. | 02-04-2016 |
20160044296 | PRECHARGED LATCHED PIXEL CELL FOR A TIME OF FLIGHT 3D IMAGE SENSOR - A pixel cell includes a latch having an input terminal and an output terminal. The latch is coupled to provide a latched output signal at the output terminal responsive to the input terminal. A first precharge circuit is coupled to precharge the input terminal of the latch to a first level during a reset of the pixel cell. A single photon avalanche photodiode (SPAD) is coupled to provide a SPAD signal to the input terminal of the latch in response to a detection of a photon incident on the SPAD. | 02-11-2016 |
20160044299 | 3D TRACKED POINT VISUALIZATION USING COLOR AND PERSPECTIVE SIZE - One exemplary embodiment involves receiving a plurality of three-dimensional (3D) track points for a plurality of frames of a video, wherein the 3D track points are extracted from a plurality of two-dimensional source points. The embodiment further involves rendering the 3D track points across a plurality of frames of the video on a two-dimensional (2D) display. Additionally, the embodiment involves coloring each of the 3D track points wherein the color of each 3D track point visually distinguishes the 3D track point from a plurality of surrounding 3D track points, and wherein the color of each 3D track point is consistent across the frames of the video. The embodiment also involves sizing each of the 3D track points based on a distance between a camera that captured the video and a location of the 2D source points referenced by the respective one of the 3D track points. | 02-11-2016 |
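The perspective-sizing rule in the entry above — track points farther from the camera render smaller — can be sketched as an inverse-distance radius. All names and constants here are hypothetical; the abstract does not specify the sizing function.

```python
def track_point_radius(distance, base_radius=8.0, reference_distance=1.0, min_radius=1.0):
    # A point at the reference distance is drawn with base_radius;
    # points farther from the camera shrink proportionally, with a
    # floor so distant points remain visible.
    radius = base_radius * reference_distance / max(distance, 1e-9)
    return max(radius, min_radius)

near = track_point_radius(2.0)    # nearer point -> larger dot
far = track_point_radius(100.0)   # distant point clamped to min_radius
```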
20160044300 | 3D SCANNER, 3D SCAN METHOD, COMPUTER PROGRAM, AND STORAGE MEDIUM - A 3D scanner includes a stage on which a target object, which is targeted for scan, is to be located, a location object that is to be placed on the stage during scan and on which the target object is to be located during scan, a display unit configured to display an image on the stage, an imaging unit configured to perform image capturing on the stage, and a control unit configured to generate 3D model data of the target object based on a video image of the target object captured by the imaging unit, and to cause the display unit to display, on the stage, an image indicating a direction in which to move the target object, based on video images of the target object and the location object captured by the imaging unit. | 02-11-2016 |
20160050346 | INTEGRATED DEPTH CAMERA - The depth camera includes a control module and a lighting module. The control module includes a control board, a control unit mounted on the control board, a seat mounted on the control board and over the control unit and a lens rooted in the seat. The lighting module is superposed on the control module and includes a lighting board with a through hole for receiving the lens, lighting units mounted on the lighting board and a reflector set composed of a base plate and reflectors formed thereon. Each reflector has an opening surrounding one of the lighting units. | 02-18-2016 |
20160050347 | DEPTH CAMERA - The depth camera includes a control module and a lighting module. The control module includes a control board, a control unit, a seat and a lens mounted on the seat. The lighting module is mechanically and electrically connected to the control module and includes a base board having a through hole for being passed by the lens and corresponding to the control board in position. A plurality of lighting units are symmetrically arranged on the base board and beside the through hole. A shade module is fastened on the base board and has a mount board and shades thereon. The bottom of each shade has an opening for receiving one of the lighting units. | 02-18-2016 |
20160050372 | SYSTEMS AND METHODS FOR DEPTH ENHANCED AND CONTENT AWARE VIDEO STABILIZATION - Systems and methods for depth enhanced and content aware video stabilization are disclosed. In one aspect, the method identifies keypoints in images, each keypoint corresponding to a feature. The method then estimates the depth of each keypoint, where depth is the distance from the feature to the camera. The method selects keypoints within a depth tolerance. The method determines camera positions based on the selected keypoints, each camera position representing the position of the camera when the camera captured one of the images. The method determines a first trajectory of camera positions based on the camera positions, and generates a second trajectory of camera positions based on the first trajectory and adjusted camera positions. The method generates adjusted images by adjusting the images based on the second trajectory of camera positions. | 02-18-2016 |
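Two steps of the entry above — selecting keypoints inside a depth tolerance band and smoothing the first camera trajectory into a second one — can be sketched as follows. The moving average is a stand-in assumption for whatever trajectory filter the method actually uses, and all names are illustrative.

```python
import numpy as np

def select_keypoints_by_depth(depths, target_depth, tolerance):
    # Keep only keypoints whose estimated depth lies inside the tolerance band.
    depths = np.asarray(depths, dtype=float)
    return np.nonzero(np.abs(depths - target_depth) <= tolerance)[0]

def smooth_camera_trajectory(positions, window=3):
    # Second trajectory = moving average of the first (jittery) trajectory.
    kernel = np.ones(window) / window
    return np.convolve(positions, kernel, mode="valid")

idx = select_keypoints_by_depth([1.0, 5.0, 5.2, 9.0], target_depth=5.0, tolerance=0.5)
smooth = smooth_camera_trajectory([0.0, 3.0, 0.0, 3.0, 0.0])
```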
20160057403 | Stereo Camera Device - The present invention prevents adverse effects on an external device due to radiation noise from a signal line. A stereo camera device is provided with: a case; a first image-capturing unit provided at one end in a longitudinal direction of the case; a second image-capturing unit provided at the other end; a circuit board provided inside the case, a processing circuit connected to each of the first image-capturing unit and the second image-capturing unit by a signal line being mounted on the circuit board, and a connector for outputting a signal processed by the processing circuit to an external apparatus being disposed on the circuit board; and a partition member for partitioning the inside of the case into a plurality of spaces along the longitudinal direction at a first interval that corresponds to a frequency bandwidth in which radiation noise from the signal line is suppressed. | 02-25-2016 |
20160061581 | SCALE ESTIMATING METHOD USING SMART DEVICE - A scale estimating method through metric reconstruction of objects using a smart device is disclosed, in which the smart device is equipped with a camera for image capture and an inertial measurement unit (IMU). The scale estimating method adopts a batch, vision-centric approach that uses only the IMU to estimate the metric scale of a scene reconstructed by an algorithm with Structure-from-Motion-like (SfM) output. Monocular vision and a noisy IMU can be integrated with the disclosed scale estimating method, in which a 3D structure of an object of interest, up to an ambiguity in scale and reference frame, can be resolved. Gravity data and a real-time heuristic algorithm for determining the sufficiency of video data collection are utilized to improve scale estimation accuracy so as to be independent of device and operating system. Applications of the scale estimation include determining pupil distance and 3D reconstruction using video images. | 03-03-2016 |
20160063865 | method and apparatus for guiding a vehicle in the surroundings of an object - A method for guiding a vehicle in the surrounding environment of an object. The method includes reading in a multiplicity of view ray endpoints in a three-dimensional image of the vehicle's surrounding environment containing the object, the image produced by a stereo camera of the vehicle, at least one of the view ray endpoints representing an outer surface of the object; connecting the multiplicity of view ray endpoints to form a polygon that represents a free surface to be traveled by the vehicle; and generating a driving corridor, provided for the vehicle, for driving around the object, based on the free surface. | 03-03-2016 |
20160065862 | Image Enhancement Based on Combining Images from a Single Camera - Provided are systems and methods for image enhancement based on combining multiple related images, such as images of the same object taken from different imaging angles. This approach allows simulating images captured from longer distances using telephoto lenses. Initial images may be captured using a simple camera equipped with shorter focal length lenses, typically used on camera phones, tablets, and laptops. The initial images may be taken using a single camera. An object or, more specifically, a center line of the object is identified in each image. The object is typically present in the foreground portion of the initial images. The initial images may be cross-faded along the object center line to yield a combined image. The foreground and background portions of each image may be separated and processed separately, such as blurring the background portion and sharpening the foreground portion. | 03-03-2016 |
20160065930 | TECHNOLOGIES FOR IMPROVING THE ACCURACY OF DEPTH CAMERAS - Technologies for improving the accuracy of depth camera images include a computing device to generate a foreground mask and a background mask for an image generated by a depth camera. The computing device identifies areas of a depth image of a depth channel of the generated image having unknown depth values as one of interior depth holes or exterior depth holes based on the foreground and background masks. The computing device fills at least a portion of the interior depth holes of the depth image based on depth values of areas of the depth image within a threshold distance of the corresponding portion of the interior depth holes. Similarly, the computing device fills at least a portion of the exterior depth holes of the depth image based on depth values of areas of the depth image within the threshold distance of the corresponding portion of the exterior depth holes. | 03-03-2016 |
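The hole-filling step in the entry above can be sketched as below. This simplified version fills every hole from valid depth values within a neighborhood radius; it deliberately omits the patent's distinction between interior and exterior holes (which uses foreground/background masks), and all names are assumptions.

```python
import numpy as np

def fill_depth_holes(depth, hole_value=0.0, radius=1):
    # Replace each hole pixel (unknown depth) with the mean of the valid
    # depth values inside a (2*radius+1)^2 neighbourhood around it.
    filled = depth.copy()
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            if depth[y, x] != hole_value:
                continue
            window = depth[max(0, y - radius):y + radius + 1,
                           max(0, x - radius):x + radius + 1]
            valid = window[window != hole_value]
            if valid.size:                 # leave the hole if no neighbour is valid
                filled[y, x] = valid.mean()
    return filled

depth = np.array([[1.0, 0.0], [1.0, 1.0]])   # one hole at (0, 1)
filled = fill_depth_holes(depth)
```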
20160065932 | DEVICE AND SYSTEM FOR THREE-DIMENSIONAL SCANNING, AND METHOD THEREOF - A device for three-dimensional scanning of an object includes a detector adapted to obtain orientation information of the device, optics adapted to obtain planar information about a position of the device in a first reference plane of the optics, and a processor adapted to acquire the orientation information and the planar information from the detector and from the optics, respectively, and to process such information in order to obtain an estimate of a position of the device on an axis substantially perpendicular to the first reference plane, for the purpose of obtaining a three-dimensional image of the object. | 03-03-2016 |
20160065937 | ASSIST APPARATUS FOR VISUALLY IMPAIRED PERSON AND METHOD FOR CONTROLLING THE SAME - An assist apparatus for a visually impaired person is provided. The assist apparatus includes an image acquisition unit including a sensor unit, a gravity sensor, at least one retractable and extendable protrusion, a motor drive, and a processor. The sensor unit acquires a depth image including a distance value for an obstacle in front of the person. The gravity sensor detects an inclined angle of the image acquisition unit in relation to a ground surface upon which the person is walking. The motor drive drives a motor to cause the at least one protrusion to extend or retract. The processor generates 3-Dimensional (3D) data using the detected inclined angle and the acquired depth image, converts the distance value into a height value, and controls the motor drive to cause the at least one protrusion to extend or retract to the height value. | 03-03-2016 |
20160065938 | IMAGING SYSTEM AND METHOD FOR CONCURRENT MULTIVIEW MULTISPECTRAL POLARIMETRIC LIGHT-FIELD HIGH DYNAMIC RANGE IMAGING - There is disclosed a novel system and method for multiview, multispectral, polarimetric, light-field, and high dynamic range imaging in a concurrent manner, specifically capturing information at different spectral bands and light polarizations simultaneously. | 03-03-2016 |
20160065942 | DEPTH IMAGE ACQUISITION APPARATUS AND METHOD OF ACQUIRING DEPTH INFORMATION - A depth image acquisition apparatus and a method of acquiring depth information are provided. The method of acquiring depth information includes: sequentially projecting, to a subject, N different beams of light emitted from a light source for a time period including an idle time for each of the N different beams of transmitted light, where N is a natural number that is equal to or greater than 3; modulating, using a light modulation signal, beams of reflected light that are obtained by reflection of the N different beams from the subject; obtaining N phase images corresponding to the N different beams of light by capturing, using a rolling shutter method, the modulated beams of reflected light; and obtaining depth information by using the obtained N phase images. | 03-03-2016 |
20160065943 | METHOD FOR DISPLAYING IMAGES AND ELECTRONIC DEVICE THEREOF - An apparatus and a method for displaying images in an electronic device are provided. The electronic device includes a processor that obtains an image and a depth map corresponding to the image, separates the image into one or more areas based on the depth map of the image, applies an effect, which is different from at least one of other areas, to at least one of the areas separated from the image, and connects the areas, to which the different effects have been applied, as a single image, and a display that displays the single image. | 03-03-2016 |
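The entry above separates an image into areas by depth, applies a different effect to each area, and reconnects them into a single image. A minimal sketch follows, with a brightness gain standing in for the unspecified "effect"; the split depth and gain values are assumptions.

```python
import numpy as np

def apply_depth_effects(image, depth_map, split_depth, fg_gain=1.5, bg_gain=0.5):
    # Separate the image into near (foreground) and far (background) areas
    # using the depth map, apply a different gain to each area, and
    # recombine them into one image.
    foreground = depth_map < split_depth
    out = image.astype(float)
    out[foreground] *= fg_gain
    out[~foreground] *= bg_gain
    return np.clip(out, 0, 255)

img = np.array([[100.0, 100.0], [200.0, 200.0]])
depth = np.array([[1.0, 9.0], [1.0, 9.0]])     # left column near, right far
result = apply_depth_effects(img, depth, split_depth=5.0)
```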
20160070094 | STEREOSCOPIC IMAGING OPTICAL SYSTEM ASSEMBLY, STEREOSCOPIC IMAGING APPARATUS, AND ENDOSCOPE - A stereoscopic imaging optical system assembly including a first optical system at least including, in order from an object side to an image plane side, a negative lens and an aperture, the first optical system being rotationally symmetric with respect to a first center axis, a second optical system that is rotationally symmetric with respect to a second center axis parallel with the first center axis, and that has the same construction as, and is located in parallel with, the first optical system, and a variable optical system located in such a way as to intersect the respective optical paths through the first optical system and the second optical system. The variable optical system effects a change of at least either one of focus and vergence. | 03-10-2016 |
20160072258 | High Resolution Structured Light Source - A structured light source comprising VCSEL arrays is configured in many different ways to project a structured illumination pattern into a region for 3-dimensional imaging and gesture recognition applications. One aspect of the invention describes methods to construct densely and ultra-densely packed VCSEL arrays to produce high-resolution structured illumination patterns. VCSEL arrays configured in many different regular and non-regular arrangements, together with techniques for producing an addressable structured light source, are well suited for generating structured illumination patterns in a programmed manner to combine steady-state and time-dependent detection and imaging for better accuracy. Structured illumination patterns can be generated in customized shapes by incorporating differently shaped current-confining apertures in VCSEL devices. The surface-mounting capability of densely and ultra-densely packed VCSEL arrays makes them suitable for constructing compact on-board 3-D imaging and gesture recognition systems. | 03-10-2016 |
20160073014 | GUIDED PHOTOGRAPHY AND VIDEO ON A MOBILE DEVICE - In an example embodiment, an item listing process is run in an item listing application. Upon reaching a specified point in the item listing process, a camera application on the user device is triggered (or the camera directly accessed by the item listing application) to enable a user to capture images using the camera, wherein the triggering includes providing a wireframe overlay informing the user as to an angle at which to capture images from the camera. | 03-10-2016 |
20160073087 | AUGMENTING A DIGITAL IMAGE WITH DISTANCE DATA DERIVED BASED ON ACOUSTIC RANGE INFORMATION - Methods, devices and program products are provided that capture image data at an image capture device for a scene, collect acoustic data indicative of a distance between the image capture device and an object in the scene, designate a range in connection with the object based on the acoustic data, and combine a portion of the image data related to the object with the range to form a 3D image data set. The device comprises a processor, a digital camera, a data collector, and a local storage medium storing program instructions accessible by the processor. The processor combines the image data related to the object with the range to form a 3D image data set. | 03-10-2016 |
20160073088 | VARIABLE RESOLUTION PIXEL - A photosensor having a plurality of light sensitive pixels each of which comprises a light sensitive region and a plurality of storage regions for accumulating photocharge generated in the light sensitive region, a transfer gate for each storage region that is selectively electrifiable to transfer photocharge from the light sensitive region to the storage region, and an array of microlenses that for each storage region directs a different portion of light incident on the pixel to a region of the light sensitive region closer to the storage region than to other storage regions. | 03-10-2016 |
20160073093 | IMAGING CIRCUITS AND A METHOD FOR OPERATING AN IMAGING CIRCUIT - An imaging circuit includes a first vertical trench gate and a neighboring second vertical trench gate. The imaging circuit includes a gate control circuit. The gate control circuit operates in a first operating mode to generate a first space charge region accelerating photogenerated charge carriers of a first charge-carrier type to a first collection contact, and in a second operating mode to generate a second space charge region accelerating photogenerated charge carriers of the first charge-carrier type to the first collection contact. The imaging circuit further includes an image processing circuit which determines distance information of an object based on photogenerated charge carriers of the first charge carrier type collected at the first collection contact in the first operating mode and color information of the object based on photogenerated charge carriers of the first charge carrier type collected at the first collection contact in the second operating mode. | 03-10-2016 |
20160080719 | MEDICAL-IMAGE PROCESSING APPARATUS - A medical-image processing apparatus according to an embodiment includes a reconstructing circuitry and a display control circuitry. The reconstructing circuitry performs a volume rendering operation on volume data while moving the viewpoint position by a predetermined parallactic angle, and generates a parallax image group that includes a plurality of parallax images with different viewpoint positions. The display control circuitry causes a stereoscopic display monitor to display the parallax image group as a stereoscopic image. With regard to the volume data that is acquired by each of the multiple types of medical-image diagnostic apparatus, the reconstructing circuitry adjusts each parallactic angle during generation of the parallax image group and, in accordance with each of the adjusted parallactic angles, generates each parallax image group on the basis of the volume data that is acquired by each of the multiple types of medical-image diagnostic apparatus. | 03-17-2016 |
20160080726 | ENHANCING IMAGING PERFORMANCE THROUGH THE USE OF ACTIVE ILLUMINATION - A system includes a camera and a projector for capturing spatial detail having resolution exceeding that afforded by the imaging optics, and recovering topographic information lost to the projective nature of imaging. The projector projects a spatial pattern onto the scene to be captured. The spatial patterns may include any pattern or combination of patterns that result in complex sinusoidal modulation. Spatial detail such as texture on the objects in the scene modulates the amplitude of the spatial pattern, and produces Moiré fringes that shift previously unresolved spatial frequencies into the camera's optical passband. The images may be demodulated, and the demodulated components may be combined with the un-modulated components. The resulting image has spatial detail previously inaccessible to the camera owing to the band-limited nature of the camera optics. A spatial pattern may also be projected and received by the camera to estimate topographic information about the scene. | 03-17-2016 |
20160082877 | VEHICLE HEADLIGHT - The invention relates to a vehicle headlight comprising a housing. | 03-24-2016 |
20160086028 | METHOD FOR CLASSIFYING A KNOWN OBJECT IN A FIELD OF VIEW OF A CAMERA - A method for classifying a known object in a field of view of a digital camera includes developing a plurality of classifier feature vectors, each classifier feature vector associated with one of a plurality of facet viewing angles of the known object. The digital camera captures an image in a field of view including the known object and an image feature vector is generated based upon said captured image. The image feature vector is compared with each of the plurality of classifier feature vectors and one of the plurality of classifier feature vectors that most closely corresponds to the image feature vector is selected. A pose of the known object relative to the digital camera is determined based upon the selected classifier feature vector. | 03-24-2016 |
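The selection step in the entry above — pick the classifier feature vector that most closely corresponds to the image feature vector, and read the pose off its facet viewing angle — can be sketched with a nearest-neighbour match. The distance metric, the function name, and the example facet angles are assumptions; the abstract does not specify how "most closely corresponds" is measured.

```python
def closest_classifier(image_vec, classifier_vecs):
    # Nearest-neighbour match: the classifier feature vector with the
    # smallest squared Euclidean distance to the image feature vector
    # identifies the facet viewing angle, and hence the object pose.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(classifier_vecs)), key=lambda i: dist2(image_vec, classifier_vecs[i]))

facet_angles = [0, 45, 90]                       # hypothetical facet viewing angles (degrees)
vecs = [(0.0, 0.0), (1.0, 0.1), (5.0, 5.0)]      # one classifier vector per facet angle
best = closest_classifier((1.0, 0.0), vecs)
pose = facet_angles[best]
```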
20160088286 | METHOD AND SYSTEM FOR AN AUTOMATIC SENSING, ANALYSIS, COMPOSITION AND DIRECTION OF A 3D SPACE, SCENE, OBJECT, AND EQUIPMENT - Method and system for automatic composition and orchestration of a 3D space or scene using networked devices and computer vision to bring ease of use and autonomy to a range of compositions. A scene, its objects, subjects and background are identified and classified, and relationships and behaviors are deduced through analysis. Compositional theories are applied, and context attributes (for example location, external data, camera metadata, and the relative positions of subjects and objects in the scene) are considered automatically to produce optimal composition and allow for direction of networked equipment and devices. Events inform the capture process, for example, a video recording initiated when a rock climber waves her hand, an autonomous camera automatically adjusting to keep her body in frame throughout the sequence of moves. Model analysis allows for direction, including audio tones to indicate proper form for the subject and instructions sent to equipment ensure optimal scene orchestration. | 03-24-2016 |
20160088288 | STEREO CAMERA AND AUTOMATIC RANGE FINDING METHOD FOR MEASURING A DISTANCE BETWEEN STEREO CAMERA AND REFERENCE PLANE - An automatic range finding method is applied to measure a distance between a stereo camera and a reference plane. The automatic range finding method includes acquiring a disparity-map video by the stereo camera facing the reference plane, analyzing the disparity-map video to generate a depth histogram, selecting a pixel group having an amount greater than a threshold from the depth histogram, calculating the distance between the stereo camera and the reference plane by weight transformation of the pixel group, and applying a coarse-to-fine computation for the disparity-map video. | 03-24-2016 |
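The core of the entry above — build a depth (disparity) histogram, select the pixel group whose count exceeds a threshold, and compute the distance by weight transformation — can be sketched as below. The weighted mean and the d = f·B / disparity conversion are illustrative assumptions, and the coarse-to-fine step is omitted.

```python
import numpy as np

def plane_distance_from_disparities(disparities, threshold, focal_times_baseline=1.0):
    # Histogram the disparity values, keep the bins whose pixel count
    # exceeds the threshold (the dominant plane), and convert the
    # weighted mean disparity to a distance via d = f*B / disparity.
    hist = np.bincount(np.asarray(disparities))
    bins = np.nonzero(hist > threshold)[0]
    bins = bins[bins > 0]                 # disparity 0 means "no measurement"
    mean_disparity = np.average(bins, weights=hist[bins])
    return focal_times_baseline / mean_disparity

# 10 pixels at disparity 4 dominate; small groups and invalid pixels are ignored.
disps = [4] * 10 + [2] * 2 + [0] * 5
dist = plane_distance_from_disparities(disps, threshold=3)
```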
20160091706 | SYNTHESIZING LIGHT FIELDS IN MICROSCOPY - A light field representation of a sample is synthesized or simulated based on bright field image data and phase image data acquired by a microscope such as a quantitative phase microscope. The light field representation may be utilized to render three-dimensional representations of the sample. | 03-31-2016 |
20160092078 | METHOD FOR SELECTING A RECORDING AREA AND SYSTEM FOR SELECTING A RECORDING AREA - In a 3D image of a patient supported on a patient table, the 3D image incorporates depth information about the outline of the patient. The 3D image is received and image information based on the 3D image is displayed on a screen, embedded into a graphical user interface. A first recording area is selectable particularly precisely by the selection being based on the inputting of a first start position and also a first end position in the displayed image information via the graphical user interface. The first recording area is displayed as a graphically highlighted first zone. Furthermore, a first position of the first recording area is determined relative to a recording unit based on the depth information and also based on the selection of the first recording area. This results in the first position being determined rapidly and particularly reliably, in particular in the vertical direction. | 03-31-2016 |
20160094806 | External Recognition Apparatus and Excavation Machine Using External Recognition Apparatus - An external recognition apparatus and an excavation machine using the external recognition apparatus, the external recognition apparatus including: a three-dimensional distance measurement device configured to acquire distance information in a three-dimensional space in a predetermined region which is under a hydraulic shovel and which includes a region to be excavated by the hydraulic shovel; a plane surface estimation unit configured to estimate a plane surface in the predetermined region based on the distance information; and an excavation object region recognition unit configured to recognize the region to be excavated in the predetermined region based on the plane surface and the distance information. | 03-31-2016 |
20160094827 | HYPERSPECTRAL IMAGING DEVICES USING HYBRID VECTOR AND TENSOR PROCESSING - Methods and systems obtain data representative of a scene across spectral bands using a compressive-sensing-based hyperspectral imaging system comprising optical elements. These methods and systems sample two modes of a three-dimensional tensor corresponding to a hyperspectral representation of the scene using sampling matrices, one for each of the two modes, to generate a modified three-dimensional tensor. After sampling the two modes, such methods and systems sample a third mode of the modified three-dimensional tensor using a third sampling matrix to generate a further modified three-dimensional tensor. Then, the methods and systems reconstruct hyperspectral data from the further modified three-dimensional tensor using the sampling matrices and the third sampling matrix. | 03-31-2016 |
20160094830 | System and Methods for Shape Measurement Using Dual Frequency Fringe Patterns - A method obtains the shape of a target by projecting and recording images of dual frequency fringe patterns. Locations in each projector image plane are encoded into the patterns and projected onto the target while images are recorded. The resulting images show the patterns superimposed onto the target. The images are decoded to recover relative phase values for the patterns' primary and dual frequencies. The relative phases are unwrapped into absolute phases and converted back to projector image plane locations. The relation between camera pixels and decoded projector locations is saved as a correspondence image representing the measured shape of the target. Correspondence images with a geometric triangulation method create a 3D model of the target. Dual frequency fringe patterns have a low frequency embedded into a high-frequency sinusoid; both frequencies are recovered in closed form by the decoding method, thus enabling direct phase unwrapping. | 03-31-2016 |
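The direct phase unwrapping enabled by the dual frequencies in the entry above can be sketched as follows: the low-frequency phase provides a coarse absolute estimate, and the integer fringe order is chosen so the wrapped high-frequency phase lands nearest it. The function name and example values are illustrative, not taken from the patent.

```python
import math

def unwrap_with_low_frequency(phase_high, phase_low, freq_ratio):
    # The low-frequency phase gives a coarse absolute estimate
    # (phase_low * freq_ratio); the integer fringe order k is chosen so
    # the unwrapped high-frequency phase is closest to that estimate.
    estimate = phase_low * freq_ratio
    k = round((estimate - phase_high) / (2 * math.pi))
    return phase_high + 2 * math.pi * k

true_phase = 7.0                               # radians, > 2*pi, so it wraps
ratio = 8                                      # high frequency / low frequency
wrapped = math.fmod(true_phase, 2 * math.pi)   # what decoding recovers (relative phase)
coarse = true_phase / ratio                    # low-frequency relative phase
recovered = unwrap_with_low_frequency(wrapped, coarse, ratio)
```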
20160094834 | IMAGING DEVICE WITH 4-LENS TIME-OF-FLIGHT PIXELS & INTERLEAVED READOUT THEREOF - Ranging devices, systems, and methods are provided. In embodiments, a device includes a casing with four openings and an array with depth pixels. The depth pixels are arranged in four quadrants, so that pixels in each of the quadrants receive light through one of the four openings. The depth pixels may generate samples in response to the received light. For a certain frame, a controller reads out samples from each of the quadrants before completing reading out the samples of any one of the quadrants. In some embodiments, reading out is performed by using interleaved rolling shutter for the rows. | 03-31-2016 |
20160100153 | 3D IMAGE SENSOR MODULE AND ELECTRONIC APPARATUS INCLUDING THE SAME - A three-dimensional (3D) image sensor device and an electronic apparatus including the 3D image sensor device are provided. The 3D image sensor device includes: a shutter driver that generates a driving voltage of a sine wave biased with a first bias voltage, from a loss-compensated recycling energy; an optical shutter that varies transmittance of reflective light reflected from a subject, according to the driving voltage, and modulates the reflective light to generate at least two optical modulation signals having different phases; and an image generator that generates 3D image data for the subject which includes depth information calculated based on a phase difference between the at least two optical modulation signals. | 04-07-2016 |
20160104274 | IMAGE-STITCHING FOR DIMENSIONING - Dimensioning systems may automate or assist with determining the physical dimensions of an object without the need for a manual measurement. A dimensioning system may project a light pattern onto the object, capture an image of the reflected pattern, and observe changes in the imaged pattern to obtain a range image, which contains 3D information corresponding to the object. Then, using the range image, the dimensioning system may calculate the dimensions of the object. In some cases, a single range image does not contain 3D data sufficient for dimensioning the object. To mitigate or solve this problem, the present invention embraces capturing a plurality of range images from different perspectives, and then combining the range images (e.g., using image-stitching) to form a composite range-image, which can be used to determine the object's dimensions. | 04-14-2016 |
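Once the range images in the entry above are stitched into a composite point cloud, the object's dimensions can be read off a bounding box. A minimal sketch, assuming an axis-aligned box on an already-merged cloud (the stitching itself is not shown):

```python
def bounding_box_dimensions(points):
    # After stitching range images into one 3D point cloud, the object's
    # length, width, and height follow from an axis-aligned bounding box.
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

cloud = [(0.0, 0.0, 0.0), (2.0, 1.0, 3.0), (1.0, 4.0, 1.0)]
dims = bounding_box_dimensions(cloud)
```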
20160104286 | Plane-Based Self-Calibration for Structure from Motion - Robust techniques for self-calibration of a moving camera observing a planar scene. Plane-based self-calibration techniques may take as input the homographies between images estimated from point correspondences and provide an estimate of the focal lengths of all the cameras. A plane-based self-calibration technique may be based on the enumeration of the inherently bounded space of the focal lengths. Each sample of the search space defines a plane in the 3D space and in turn produces a tentative Euclidean reconstruction of all the cameras that is then scored. The sample with the best score is chosen and the final focal lengths and camera motions are computed. Variations on this technique handle both constant focal length cases and varying focal length cases. | 04-14-2016 |
20160105660 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - The present disclosure relates to an image processing apparatus and an image processing method that enable high-precision generation of parallax images of viewpoints with a small amount of calculation. | 04-14-2016 |
20160109220 | HANDHELD DIMENSIONING SYSTEM WITH FEEDBACK - A handheld dimensioning system that analyzes a depth map for null-data pixels to provide feedback is disclosed. Null-data pixels correspond to missing range data and having too many in a depth map may lead to dimensioning errors. Providing feedback based on the number of null-data pixels helps a user understand and adapt to different dimensioning conditions, promotes accuracy, and facilitates handheld applications. | 04-21-2016 |
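The feedback rule in the entry above — too many null-data pixels in the depth map may lead to dimensioning errors — can be sketched as a simple fraction check. The null value and the acceptance threshold are assumptions for illustration.

```python
def depth_quality_feedback(depth_map, null_value=0, max_null_fraction=0.1):
    # The fraction of null-data (missing range) pixels decides whether the
    # current depth map is reliable enough to dimension from.
    values = [v for row in depth_map for v in row]
    null_fraction = sum(1 for v in values if v == null_value) / len(values)
    return null_fraction, null_fraction <= max_null_fraction

frac, ok = depth_quality_feedback([[1, 0], [1, 1]])   # one null pixel out of four
```

A dimensioner could surface `ok` to the user as a prompt to reposition the device before measuring.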
20160112643 | HANDHELD DIMENSIONER WITH DATA-QUALITY INDICATION - A handheld dimensioner with a user interface configured to present a quality indicator is disclosed. The handheld dimensioner is configured to capture three-dimensional (3D) data and assess the three-dimensional-data's quality. Based on this quality, a quality indicator may be generated and presented to a user via the user interface. This process may be repeated while the user repositions the handheld dimensioner. In this way, the user may use the quality indicators generated at different positions to find an optimal position for a particular dimension measurement. | 04-21-2016 |
20160112666 | OPTICAL MODULE AND METHOD - An optical module for use in a device includes an array of pixels configured to capture image data and a memory. The memory is configured to store identification information associated with said optical module. The identification information enables retrieval of information for controlling said optical module from a source outside said device. | 04-21-2016 |
20160112696 | IMAGING APPARATUSES AND A TIME OF FLIGHT IMAGING METHOD - The imaging apparatus includes an image sensor circuit comprising a time of flight sensor pixel. The imaging apparatus further includes a first light emitter having a first spatial offset relative to the time of flight sensor pixel. The imaging apparatus further includes a second light emitter having a second spatial offset relative to the time of flight sensor pixel. The imaging apparatus further includes an image processing circuit configured to produce an image of a region of an object based on first sensor pixel image data and second sensor pixel image data generated by the time of flight sensor pixel. The first sensor pixel image data is based on received light emitted by the first light emitter and reflected at the object's region, and the second sensor pixel image data is based on received light emitted by the second light emitter and reflected at the object's region. | 04-21-2016 |
20160119606 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - Provided is an image processing apparatus, including: a data acquisition unit configured to acquire light field data containing a plurality of parallax images; a setting unit configured to set a virtual focal plane of the light field data acquired by the data acquisition unit; a reconstruction unit configured to generate, based on the light field data acquired by the data acquisition unit, light field data in which a focal plane is moved to the virtual focal plane; a determination unit configured to determine a data reduction amount of the light field data generated by the reconstruction unit in accordance with a refocusable range of the light field data generated by the reconstruction unit; and a reduction unit configured to reduce an amount of data of the light field data generated by the reconstruction unit in accordance with the data reduction amount. | 04-28-2016 |
20160119611 | TIME OF FLIGHT DEPTH CAMERA - A method for operating a time of flight (TOF) depth camera is provided. The method includes, using an image processing module, interpolating an updated timing delay calibration for each of a plurality of pixel sensors based at least on an updated set of modulation frequency and duty cycle calibration combinations received by the image processing module, the plurality of pixel sensors coupled to a timing clock, and receiving light generated by a light source and reflected in a 3-dimensional environment, the updated set of modulation frequency and duty cycle calibration combinations replacing the corresponding factory-preloaded timing delay calibrations. The method further includes applying the updated timing delay calibrations to pixel data corresponding to each of the plurality of the pixel sensors to generate a depth map of the 3-dimensional environment. | 04-28-2016 |
20160125610 | 3D IMAGING, RANGING, AND/OR TRACKING USING ACTIVE ILLUMINATION AND POINT SPREAD FUNCTION ENGINEERING - Imaging systems and imaging methods are disclosed to estimate a three-dimensional position of an object at a scene and/or generate a three-dimensional image of the scene. The imaging system may include, for example, one or more light sources; an optical system configured to direct light from the one or more light sources into a pattern onto the scene; a mask; a detector array disposed to receive light from the scene through the mask; and at least one processor communicatively coupled with the detector array and configured to estimate a depth of a particle within the scene. | 05-05-2016 |
20160127714 | SYSTEMS AND METHODS FOR REDUCING Z-THICKNESS AND ZERO-ORDER EFFECTS IN DEPTH CAMERAS - A projection system configured to emit patterned light along a projection optical axis includes: a diffractive optical element configured to perform a collimation function on the light emitted by the light emitter and to perform a pattern generation function to replicate the collimated light in a pattern, the pattern having substantially no collimated zero-order; and a light emitter configured to emit light toward the diffractive optical element, wherein the collimation function is configured to collimate the light emitted from the light emitter, and wherein the pattern generation function is configured to replicate the collimated light to produce the patterned light. | 05-05-2016 |
20160127715 | MODEL FITTING FROM RAW TIME-OF-FLIGHT IMAGES - Model fitting from raw time of flight image data is described, for example, to track position and orientation of a human hand or other entity. In various examples, raw image data depicting the entity is received from a time of flight camera. A 3D model of the entity is accessed and used to render, from the 3D model, simulations of raw time of flight image data depicting the entity in a specified pose/shape. The simulated raw image data and at least part of the received raw image data are compared and on the basis of the comparison, parameters of the entity are computed. | 05-05-2016 |
20160129283 | METHOD OF CALIBRATION OF A STEREOSCOPIC CAMERA SYSTEM FOR USE WITH A RADIO THERAPY TREATMENT APPARATUS - The disclosed calibration method includes a calibration phantom positioned on an adjustable table on the surface of a mechanical couch, with the phantom's centre at an estimated location for the iso-centre of a radio therapy treatment apparatus. The calibration phantom is then irradiated using the apparatus, and the relative location of the centre of the calibration phantom and the iso-centre of the apparatus is determined by analyzing images of the irradiation of the calibration phantom. The calibration phantom is then repositioned by the mechanical couch applying an offset corresponding to the determined relative location of the centre of the calibration phantom and the iso-centre of the apparatus to the calibration phantom. Images of the relocated calibration phantom are obtained, to which the offset has been applied, and the obtained images are processed to set the co-ordinate system of a stereoscopic camera system relative to the iso-centre of the apparatus. | 05-12-2016 |
20160134858 | RGB-D IMAGING SYSTEM AND METHOD USING ULTRASONIC DEPTH SENSING - An RGB-D imaging system having an ultrasonic array for generating images that include depth data, and methods for manufacturing and using same. The RGB-D imaging system includes an ultrasonic sensor array positioned on a housing that includes an ultrasonic emitter and a plurality of ultrasonic sensors. The RGB-D imaging system also includes an RGB camera assembly positioned on the housing in a parallel plane with, and operably connected to, the ultrasonic sensor array. The RGB-D imaging system thereby provides improved imaging in a wide variety of lighting conditions compared to conventional systems. | 05-12-2016 |
20160134859 | 3D Photo Creation System and Method - The present application is directed to a 3D photo creation system and method, wherein the 3D photo creation system includes: a stereo image input module configured to input a stereo image, wherein the stereo image comprises a left eye image and a right eye image; a depth estimation module configured to estimate depth information of the stereo image and create a depth map; a multi-view angle image reconstructing module configured to create a multi-view angle image according to the depth map and the stereo image; and an image spaced scanning module configured to adjust the multi-view angle image and form a mixed image. The system and method greatly simplify the process of 3D photo creation and enhance the quality of the 3D photos. The system and method can be widely used in theme parks, tourist attractions and photo galleries, bringing pleasure to more consumers with 3D photos. | 05-12-2016 |
20160138910 | CAMERA FOR MEASURING DEPTH IMAGE AND METHOD OF MEASURING DEPTH IMAGE - A depth image measuring camera includes an illumination device configured to irradiate an object with light, and a light-modulating optical system configured to receive the light reflected from the object. The depth image measuring camera includes an image sensor configured to generate an image of the object by receiving light incident on the image sensor that passes through the light-modulating optical system. The light-modulating optical system includes a plurality of lenses having a same optical axis, and an optical modulator configured to operate in two modes for measuring a depth of the object. | 05-19-2016 |
20160139039 | IMAGING SYSTEM AND IMAGING METHOD - An imaging system includes an infrared camera | 05-19-2016 |
20160142701 | DEPTH SENSING METHOD, 3D IMAGE GENERATION METHOD, 3D IMAGE SENSOR, AND APPARATUS INCLUDING THE SAME - A three-dimensional (3D) image sensor module including: an oscillator configured to output a distortion-compensated oscillation frequency as a driving voltage of a sine wave biased with a bias voltage; an optical shutter configured to vary transmittance of reflective light reflected from a subject, according to the driving voltage, and to modulate the reflective light into at least two optical modulation signals having different phases; and an image generator configured to generate image data about the subject, the image data including depth information that is calculated based on a difference between the phases of the at least two optical modulation signals. | 05-19-2016 |
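The phase-difference depth calculation that abstracts like this one rely on is the standard continuous-wave time-of-flight relation d = c·Δφ / (4π·f_mod). A minimal sketch; the modulation frequency below is an arbitrary example, not a value from the patent:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_rad, f_mod_hz):
    """Distance implied by a measured phase shift at modulation frequency f_mod."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

# A pi/2 phase shift at 20 MHz modulation corresponds to roughly 1.87 m:
d = tof_depth(math.pi / 2, 20e6)
```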
20160148069 | METHOD AND DEVICE FOR DETECTING AN OBJECT - Provided is a method for detecting an object in a left view image and a right view image, comprising the steps of: receiving the left view image and the right view image; detecting a coarse region containing the object in one image of the left view image and the right view image; detecting the object within the detected coarse region in the one image; determining a coarse region in the other image of the left view image and the right view image based on the detected coarse region in the one image and an offset relationship indicating the positional relationship of the object between a past left view image and a past right view image; and detecting the object within the determined coarse region in the other image. | 05-26-2016 |
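The cross-view transfer step described above amounts to shifting the coarse region found in one view by the previously observed left/right offset. A toy sketch; the (x, y, w, h) box format and the offset value are assumptions for illustration:

```python
def transfer_region(box, offset_xy):
    """Shift a coarse region (x, y, w, h) into the other view by a past offset."""
    x, y, w, h = box
    dx, dy = offset_xy
    return (x + dx, y + dy, w, h)

left_box = (120, 80, 40, 60)                      # coarse region in the left view
right_box = transfer_region(left_box, (-15, 0))   # shifted by the past disparity
```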
20160150219 | Methods Circuits Devices Assemblies Systems and Functionally Associated Computer Executable Code for Image Acquisition With Depth Estimation - Disclosed are methods, circuits, devices, systems and functionally associated computer executable code for image acquisition with depth estimation. According to some embodiments, there may be provided an imaging device including: (a) one or more imaging assemblies with at least one image sensor; (b) at least one structured light projector adapted to project onto a scene a multiresolution structured light pattern, which pattern includes multiresolution symbols or codes; and (c) image processing circuitry, dedicated or programmed onto a processor, adapted to identify multiresolution structured light symbols/codes within an acquired image of the scene. | 05-26-2016 |
20160161246 | TEAR LINE 3D MEASUREMENT APPARATUS AND METHOD - A non-destructive method of measuring tear lines formed in a surface of a resilient automotive trim panel configured for overlaying an inflatable safety device includes the steps of periodically selecting a trim panel for testing from a flow of in-process trim panels, mounting the selected trim panel to a mounting jig configured to support a region of the selected trim panel adjacent a tear line and to temporarily fold said selected trim panel to expose opposed edges forming at least a portion of the tear line, scanning a 3D image of the opposed edges, storing said 3D image as data in an associated processor, and removing the selected trim panel from the mounting jig. The mounting jig includes a base forming upwardly facing longitudinally elongated converging guide surfaces intersecting at a common apex, and a cover member forming downwardly facing longitudinally elongated converging support surfaces intersecting at a common apex. | 06-09-2016 |
20160163031 | MARKERS IN 3D DATA CAPTURE - A structured light projector includes an optical mask and a light emitter. The light emitter can be capable of illuminating the optical mask, and the optical mask can be capable of transforming the light from the emitter so as to provide a structured light bi-dimensional coded light pattern that includes a plurality of feature types formed by a unique combination of feature elements. The projected light pattern includes one or more markers which include pairs of feature elements between which the epipolar distances are modified relative to distances between respective feature elements in non-marker areas of the pattern, and an appearance of feature elements within the marker and across the marker's edges is continuous. | 06-09-2016 |
20160163057 | Three-Dimensional Shape Capture Using Non-Collinear Display Illumination - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for three-dimensional shape capture. In one aspect, a method includes displaying a first, a second and a third illumination patterns on a display screen, and capturing a first, a second and a third image of an object while the first, the second and the third illumination patterns are respectively displayed. The method further includes determining the three-dimensional shape of the object based on the captured images. | 06-09-2016 |
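One classical way three distinct illumination patterns can determine shape is photometric-stereo style: with three non-collinear light directions, the per-pixel surface normal solves the 3x3 linear system L·n = i for the three measured intensities. This is a hedged illustration of that principle, not necessarily the patent's method; the light directions, Lambertian model, and albedo handling are assumptions:

```python
def det3(M):
    # Determinant of a 3x3 matrix
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def solve3(L, b):
    """Cramer's rule for the 3x3 system L x = b."""
    d = det3(L)
    return [det3([[b[i] if k == j else L[i][k] for k in range(3)]
                  for i in range(3)]) / d
            for j in range(3)]

# Three light directions (rows) and one pixel's intensity under each:
L = [[0, 0, 1], [1, 0, 1], [0, 1, 1]]
n_true = (0.0, 0.6, 0.8)
i = [sum(L[r][k] * n_true[k] for k in range(3)) for r in range(3)]
n = solve3(L, i)   # recovers the (unnormalized) surface normal
```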
20160165105 | Hyperchromatic Lens For Recording Time-Resolved Phenomena - A method and apparatus for the capture of a high number of quasi-continuous effective frames of 2-D data from an event at very short time scales (from less than 10 | 06-09-2016 |
20160165213 | LAYERED TYPE COLOR-DEPTH SENSOR AND THREE-DIMENSIONAL IMAGE ACQUISITION APPARATUS EMPLOYING THE SAME - Provided are a color-depth sensor and a three-dimensional image acquisition apparatus including the same. The color-depth sensor includes a color sensor that senses visible light and an infrared sensor that is stacked on the color sensor and senses infrared light. The 3D image acquisition apparatus includes: an imaging lens unit; a color-depth sensor that simultaneously senses color image information and depth image information about an object from light reflected by the object and transmitted through the imaging lens unit; and a 3D image processor that generates 3D image information by using the color image information and the depth image information sensed by the color-depth sensor. | 06-09-2016 |
20160171679 | THREE-DIMENSIONAL POSITION INFORMATION ACQUIRING METHOD AND THREE-DIMENSIONAL POSITION INFORMATION ACQUIRING APPARATUS | 06-16-2016 |
20160171746 | METHOD AND SYSTEM FOR THREE DIMENSIONAL IMAGING AND ANALYSIS | 06-16-2016 |
20160173855 | Structured-Light Projector And Three-Dimensional Scanner Comprising Such A Projector | 06-16-2016 |
20160173856 | IMAGE CAPTURE APPARATUS AND CONTROL METHOD FOR THE SAME | 06-16-2016 |
20160173858 | DRIVER ASSISTANCE FOR A VEHICLE | 06-16-2016 |
20160173883 | MULTI-FOCUS IMAGE DATA COMPRESSION | 06-16-2016 |
20160178532 | SYSTEM AND METHOD FOR ENGINE INSPECTION | 06-23-2016 |
20160180181 | OBSTACLE DETECTING APPARATUS AND OBSTACLE DETECTING METHOD | 06-23-2016 |
20160182886 | TIME-OF-FLIGHT IMAGE SENSOR AND LIGHT SOURCE DRIVER HAVING SIMULATED DISTANCE CAPABILITY | 06-23-2016 |
20160182887 | THREE DIMENSIONAL IMAGING WITH A SINGLE CAMERA | 06-23-2016 |
20160182892 | Illuminator For Camera System Having Three Dimensional Time-Of-Flight Capture With Movable Mirror Element | 06-23-2016 |
20160182895 | Integrated Camera System Having Two Dimensional Image Capture and Three Dimensional Time-of-Flight Capture With Movable Illuminated Region of Interest | 06-23-2016 |
20160182896 | Image Pick-Up Apparatus, Portable Terminal Having the Same and Image Pick-Up Method Using Apparatus | 06-23-2016 |
20160189362 | 3D RECORDING DEVICE, METHOD FOR PRODUCING A 3D IMAGE, AND METHOD FOR SETTING UP A 3D RECORDING DEVICE - A 3D recording device ( | 06-30-2016 |
20160189367 | AUGMENTED THREE DIMENSIONAL POINT COLLECTION OF VERTICAL STRUCTURES - Automated methods and systems of creating three dimensional LIDAR data are disclosed, including a method comprising capturing images of a geographic area with one or more image capturing devices as well as location and orientation data for each of the images corresponding to the location and orientation of the one or more image capturing devices capturing the images, the images depicting an object of interest; capturing three-dimensional LIDAR data of the geographic area with one or more LIDAR systems such that the three-dimensional LIDAR data includes the object of interest; storing the three-dimensional LIDAR data on a non-transitory computer readable medium; analyzing the images with a computer system to determine three dimensional locations of points on the object of interest; and updating the three-dimensional LIDAR data with the three dimensional locations of points on the object of interest determined by analyzing the images. | 06-30-2016 |
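One common way image analysis yields the 3D point locations mentioned above is to triangulate each feature as the midpoint of closest approach between the back-projected rays from two camera positions. A hedged sketch of that step only, with a toy ray setup; it is not the patent's full pipeline:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate_midpoint(p1, d1, p2, d2):
    """Midpoint of the closest approach between rays p1 + t1*d1 and p2 + t2*d2."""
    w = [p1[k] - p2[k] for k in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    denom = a * c - b * b                  # zero only for parallel rays
    t1 = (b * dot(w, d2) - c * dot(w, d1)) / denom
    t2 = (a * dot(w, d2) - b * dot(w, d1)) / denom
    q1 = [p1[k] + t1 * d1[k] for k in range(3)]
    q2 = [p2[k] + t2 * d2[k] for k in range(3)]
    return [(q1[k] + q2[k]) / 2 for k in range(3)]

# Two rays that intersect at (0, 0, 1):
pt = triangulate_midpoint((0, 0, 0), (0, 0, 1), (1, 0, 0), (-1, 0, 1))
```

The resulting points would then be appended to the stored LIDAR cloud; with noisy rays the midpoint is the least-squares compromise between the two back-projections.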
20160189420 | IMAGE GENERATION DEVICE AND OPERATION SUPPORT SYSTEM - An image generation device generates an output image based on an input image taken by an image-taking part mounted to a body to be operated, which body is capable of performing a turning operation. A coordinates correspondence part causes coordinates on a columnar space model, which is arranged to surround the body to be operated and has a center axis, to correspond to coordinates on an image plane on which the input image is positioned. An output image generation part generates the output image by causing values of the coordinates on the input image plane to correspond to values of the coordinates on an output image plane on which the output image is positioned through coordinates on the columnar space model. The columnar space model is arranged so that an optical axis of the image-taking part intersects with the center axis of said columnar space model. | 06-30-2016 |
20160191768 | LIGHT FIELD IMAGE CAPTURING APPARATUS INCLUDING SHIFTED MICROLENS ARRAY - A light field image capturing apparatus including a shifted microlens array, the light field image capturing apparatus including: an objective lens focusing light incident from an external object; an image sensor including a plurality of pixels, the image sensor outputting an image signal by detecting incident light; and a microlens array disposed between the objective lens and the image sensor and including a plurality of microlenses arranged in a two-dimensional manner, the plurality of microlenses corresponding to the plurality of pixels, wherein at least a part of the plurality of microlenses is shifted in a direction with respect to the pixels corresponding to the at least a part of the plurality of microlenses. | 06-30-2016 |
20160191867 | STRUCTURED LIGHT PROJECTOR - There is provided according to aspects of the present disclosure a structured light projector, a method of projecting a structured light pattern, and a depth sensing device. The projector includes an emitters array and a mask. The emitters array includes a plurality of individual light emitters. The light from each of the individual emitters diverges. The emitters array has a spatial intensity profile that is associated with the light divergence output of the array's individual emitters. The mask is designed to provide a structured light pattern when illuminated, and is positioned at a distance relative to the emitters array where rays from adjacent emitters overlap, and where such overlaps provide uniform light intensity distribution across the mask plane. | 06-30-2016 |
20160191896 | EXPOSURE COMPUTATION VIA DEPTH-BASED COMPUTATIONAL PHOTOGRAPHY - A method and electronic information handling system provide recording a first image of a scene at a first exposure level using a three-dimensional (3D) camera, correlating distances from the 3D camera and exposure levels over a plurality of image elements of the first image, selecting an exposure parameter value for at least one of the plurality of image elements having a z-distance value falling within a range of z-distance values, recording a second image of the scene according to the exposure parameter value, and constructing a composite image based on at least a portion of the second image for the at least one of the plurality of image elements. | 06-30-2016 |
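The depth-bucketed exposure selection described above can be sketched as a lookup from z-distance ranges to exposure values, after which each image element is composited from the frame shot at its bucket's exposure. The range boundaries and exposure table below are made-up examples, not values from the patent:

```python
def pick_exposure(z, table):
    """table: list of ((z_min, z_max), exposure_seconds); first matching range wins."""
    for (z_min, z_max), exposure in table:
        if z_min <= z < z_max:
            return exposure
    return table[-1][1]  # fall back to the last entry

table = [((0.0, 1.0), 1 / 250),
         ((1.0, 3.0), 1 / 60),
         ((3.0, float("inf")), 1 / 15)]

e = pick_exposure(2.2, table)   # mid-range depth -> 1/60 s
```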
20160191899 | Imaging Module, Stereo Camera for Vehicle, and Light Shielding Member for Imaging Module - Provided is a highly reliable imaging module that can easily carry out optical axis adjustments and focus adjustments while assuring required light shielding performance, and in which malfunctions do not easily occur in solder junctions and the like for imaging elements even under severe temperature environments. An imaging module has, in addition to a lens holding member ( | 06-30-2016 |
20160191902 | CAMERA TRACKER TARGET USER INTERFACE FOR PLANE DETECTION AND OBJECT CREATION - One exemplary embodiment involves identifying a plane defined by a plurality of three-dimensional (3D) track points rendered on a two-dimensional (2D) display, wherein the 3D track points are rendered at a plurality of corresponding locations of a video frame. The embodiment also involves displaying a target marker at the plane defined by the 3D track points to allow for visualization of the plane, wherein the target marker is displayed at an angle that corresponds with an angle of the plane. Additionally, the embodiment involves inserting a 3D object at a location in the plane defined by the 3D track points to be embedded into the video frame. The location of the 3D object is based at least in part on the target marker. | 06-30-2016 |
20160191903 | THREE-DIMENSIONAL IMAGE GENERATION METHOD AND APPARATUS - A method and apparatus of generating a three-dimensional (3D) image are provided. The method of generating a 3D image involves acquiring a plurality of images of a 3D object with a camera, calculating pose information of the plurality of images based on pose data for each of the plurality of images measured by an inertial measurement unit, and generating a 3D image corresponding to the 3D object based on the pose information. | 06-30-2016 |
20160191905 | Adaptive Image Acquisition and Display Using Multi-focal Display - Multiframe reconstruction combines a set of acquired images into a reconstructed image. Here, the images to acquire are selected based at least in part on the content of previously acquired images. In one approach, a set of at least three images of an object are acquired at different acquisition settings. For at least one of the images in the set, the acquisition setting for the image is determined based at least in part on the content of previously acquired images. Multiframe image reconstruction, preferably via a multi-focal display, is applied to the set of acquired images to synthesize a reconstructed image of the object. | 06-30-2016 |
20160195610 | ILLUMINATION LIGHT PROJECTION FOR A DEPTH CAMERA | 07-07-2016 |
20160195716 | IMAGING OPTICAL SYSTEM, CAMERA APPARATUS AND STEREO CAMERA APPARATUS | 07-07-2016 |
20160198141 | IMAGING SYSTEMS WITH PHASE DETECTION PIXELS | 07-07-2016 |
20160203612 | METHOD AND APPARATUS FOR GENERATING SUPERPIXELS FOR MULTI-VIEW IMAGES | 07-14-2016 |
20160205331 | METHOD AND SYSTEM FOR IMPROVING DETAIL INFORMATION IN DIGITAL IMAGES | 07-14-2016 |
20160205376 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD FOR THE SAME AND STORAGE MEDIUM | 07-14-2016 |
20160205380 | IMAGE PROCESSING APPARATUS, IMAGE PICKUP APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR SYNTHESIZING IMAGES | 07-14-2016 |
20160253824 | XSLIT CAMERA | 09-01-2016 |
20160254027 | THREE-DIMENSIONAL IMAGE PROCESSING SYSTEM, THREE-DIMENSIONAL IMAGE PROCESSING APPARATUS, AND THREE-DIMENSIONAL IMAGE PROCESSING METHOD | 09-01-2016 |
20160255332 | SYSTEMS AND METHODS FOR ERROR CORRECTION IN STRUCTURED LIGHT | 09-01-2016 |
20160379351 | USING 3D VISION FOR AUTOMATED INDUSTRIAL INSPECTION - A system and method for three dimensional (3D) vision inspection using a 3D vision system. The system and method comprising: acquiring at least one 3D image of a 3D object using the 3D vision system; using the 3D vision system, extracting a 3D runtime visible mask of the 3D image; using the 3D vision system, comparing the 3D runtime visible mask to a 3D reference visible mask; and, using the 3D vision system, determining if a difference of pixels exists between the 3D runtime visible mask and the 3D reference visible mask. | 12-29-2016 |
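The pass/fail comparison above reduces to counting pixel differences between the runtime visible mask and the reference visible mask. A minimal sketch, assuming binary masks and a tolerance threshold that is our own illustrative choice:

```python
def inspect(runtime_mask, reference_mask, max_diff_pixels=2):
    """Count differing pixels between binary masks; flag the part if over tolerance."""
    diff = sum(1 for r_row, ref_row in zip(runtime_mask, reference_mask)
                 for r, ref in zip(r_row, ref_row) if r != ref)
    return ("pass" if diff <= max_diff_pixels else "fail", diff)

ref = [[1, 1, 0], [1, 0, 0]]
run = [[1, 1, 0], [1, 1, 0]]   # one pixel differs from the reference
result = inspect(run, ref)
```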
20160381256 | Dynamic Minimally Invasive Surgical-Aware Assistant - A computer system that provides situational awareness and feedback during a surgical procedure is described. During operation, the computer system generates stereoscopic images at a first 3-dimensional (3D) location in an individual, where the stereoscopic images include image parallax and are based on data having a discrete spatial resolution and a predefined surgical plan for the individual. Then, the computer system provides the stereoscopic images to a display. When a surgical tool is advanced, the location information of the surgical tool is updated in the displayed stereoscopic images. Alternatively or additionally, when location information indicates a deviation from the predefined surgical plan during the surgical procedure, the computer system generates revised stereoscopic images that indicate: that the deviation has occurred, an update to the predefined surgical plan, and/or how to return to the predefined surgical plan. | 12-29-2016 |
20170237918 | LIGHT FIELD IMAGING WITH TRANSPARENT PHOTODETECTORS | 08-17-2017 |
20170237960 | IMAGING SYSTEM INCLUDING LENS WITH LONGITUDINAL CHROMATIC ABERRATION, ENDOSCOPE AND IMAGING METHOD | 08-17-2017 |
20180023947 | SYSTEMS AND METHODS FOR 3D SURFACE MEASUREMENTS | 01-25-2018 |
20190147303 | Automatic Detection of Noteworthy Locations | 05-16-2019 |
20190149796 | DEPTH-SENSING DEVICE AND DEPTH-SENSING METHOD | 05-16-2019 |
20190149804 | ENHANCED THREE DIMENSIONAL IMAGING BY FOCUS CONTROLLED ILLUMINATION | 05-16-2019 |
20190149805 | Scanning projectors and image capture modules for 3D mapping | 05-16-2019 |
20220137226 | HYBRID SENSOR SYSTEM AND METHOD FOR PROVIDING 3D IMAGING - Provided is a 3D depth sensing system and method of providing an image based on a hybrid sensing array. The 3D sensing system includes a light source configured to emit light, a hybrid sensing array comprising a 2D sensing region configured to detect ambient light reflected from an object and a 3D depth sensing region configured to detect the light emitted by the light source and reflected from the object, a metalens on the hybrid sensing array, the metalens being configured to direct the ambient light reflected from the object towards the 2D sensing region, and to direct the light emitted by the light source and reflected from the object towards the 3D depth sensing region, and a processing circuit configured to combine 2D image information provided by the 2D sensing region and 3D information provided by the 3D depth sensing region to generate a combined 3D image. | 05-05-2022 |