Entries |
Document | Title | Date |
20080199069 | Stereo Camera for a Motor Vehicle - A device is described for a motor vehicle, having at least one first camera and at least one second camera, the first camera and the second camera acting as a stereo camera, the first camera and the second camera being different with respect to at least one camera property, in particular the light sensitivity of the first camera and the light sensitivity of the second camera being different. Furthermore, the device is configured in such a way that the driver assistance functions of night vision support and/or traffic sign recognition and/or object recognition and/or road boundary recognition and/or lane recognition and/or other functions are ensured. | 08-21-2008 |
20080199070 | THREE-DIMENSIONAL IMAGE DISPLAY APPARATUS AND METHOD FOR ENHANCING STEREOSCOPIC EFFECT OF IMAGE - A three-dimensional (3D) image display apparatus for enhancing a stereoscopic effect of an image is provided. The 3D image display apparatus includes a disparity estimator which estimates the disparity between a first image and a second image which are obtained by photographing the same object from different angles; a computing unit which computes the adjustment disparity between the first image and the second image using a histogram obtained by computing the frequency of the estimated disparity; and an output unit which applies the computed adjustment disparity to the first image and the second image and outputs the first image and the second image in which the disparity is adjusted. Therefore, the input disparity between the first image and the second image is adjusted, and an image with an enhanced stereoscopic effect may be provided to a user. | 08-21-2008 |
20080199071 | CREATING 3D IMAGES OF OBJECTS BY ILLUMINATING WITH INFRARED PATTERNS - According to a general aspect, processing images includes projecting an infra-red pattern onto a three-dimensional object and producing a first image, a second image, and a third image of the three-dimensional object while the pattern is projected on the three-dimensional object. The first image and the second image include the three-dimensional object and the pattern. The first image and the second image are produced by capturing at a first camera and a second camera, respectively, light filtered through an infra-red filter. The third image includes the three-dimensional object but not the pattern. Processing the images also includes establishing a first-pair correspondence between a portion of pixels in the first image and a portion of pixels in the second image. Processing the images further includes constructing, based on the first-pair correspondence and the third image, a two-dimensional image that depicts a three-dimensional construction of the three-dimensional object. | 08-21-2008 |
20080205748 | Structural light based depth imaging method and system using signal separation coding, and error correction thereof - Provided is a structural light based three-dimensional depth imaging method and system using signal separation coding and error correction thereof capable of detecting, removing and correcting corresponding errors between a projection apparatus and an image photographing apparatus caused by phenomena such as reflection on an object surface, blurring by a focus, and so on, using geometrical constraints between the projection apparatus and the image photographing apparatus. Here, the projection apparatus projects light, and the image photographing apparatus obtains the light. The depth imaging method includes projecting light from a projection apparatus, obtaining the light using an image photographing apparatus, and measuring a distance or a three-dimensional depth image. Therefore, it is possible to provide a structural light based three-dimensional depth imaging method and system using geometrical conditions capable of precisely obtaining three-dimensional depth information of a target environment. | 08-28-2008 |
20080205749 | Polyp detection using smoothed shape operators - Improved surface feature recognition in CT images is provided by extracting a triangulated mesh representation of the surface of interest. Shape operators are computed at each vertex of the mesh from finite differences of vertex normals. The shape operators at each vertex are smoothed according to an iterative weighted averaging procedure. Principal curvatures at each vertex are computed from the smoothed shape operators. Vertices are marked as maxima and/or minima according to the signs of the principal curvatures. Vertices marked as having the same feature type are clustered together by adjacency on the mesh to provide candidate patches. Feature scores are computed for each candidate patch and the scores are provided as output to a user or for further processing. | 08-28-2008 |
20080212870 | COMBINED BEACON AND SCENE NAVIGATION SYSTEM - A controller and navigation system to implement beacon-based navigation and scene-based navigation is described. The navigation system may generate position data for the controller to compensate for a misalignment of the controller relative to the coordinate system of the navigation system. The navigation system may also distinguish between a beacon light source and a non-beacon light source. | 09-04-2008 |
20080212871 | DETERMINING A THREE-DIMENSIONAL MODEL OF A RIM OF AN ANATOMICAL STRUCTURE - Determining a three-dimensional model of a rim of an anatomical structure using two-dimensional images of the rim. The images are taken from different directions and each image can provide a different two-dimensional contour of the rim. Corresponding pairs of points are identified in the images and are used with a transformation matrix to calculate the three-dimensional model. The model may then be used to assist physicians in implantation procedures. | 09-04-2008 |
20080212872 | METHOD OF SETTING UP MULTI-DIMENSIONAL DDA VARIABLES - An apparatus and a computer program product render a multi-dimensional digital image using raytracing in a multi-dimensional space. A multi-dimensional digital differential analyzer (DDA) is included. Variables of said multi-dimensional digital differential analyzer (DDA) are set up using multiplications only. The digital image is rendered based upon the variables of the multi-dimensional digital differential analyzer (DDA). Each axis of the multi-dimensional space includes a numerator which holds the progress within a cell along that axis and a denominator which describes a size condition causing said DDA to step to a next cell. | 09-04-2008 |
20080226159 | Method and System For Calculating Depth Information of Object in Image - A method and a system for calculating depth information of objects in an image are disclosed. In accordance with the method and the system, an area occupied by two or more objects in the image is classified into an object area and an occlusion area using outline information to obtain accurate depth information of each of the objects. | 09-18-2008 |
20080226160 | SYSTEMS AND METHODS FOR FILLING LIGHT IN FRAMES DURING 2-D TO 3-D IMAGE CONVERSION - The present invention is directed to systems and methods for processing 2-D to 3-D image conversion. The systems and methods fill in light among image frames when objects have been removed or otherwise changed. In one embodiment, light is treated as an object and can be removed during image processing. The light is added back during the rendering process using the created light object. In other embodiments, light from other frames is filled in using weighted averaging of the light depending upon temporal distance from a particular frame and a base frame. | 09-18-2008 |
20080232679 | Apparatus and Method for 3-Dimensional Scanning of an Object - A 3-dimensional scanner capable of acquiring the shape, color, and reflectance of an object as a complete 3-dimensional object. The scanner utilizes a fixed camera, telecentric lens, and a light source rotatable around an object to acquire images of the object under varying controlled illumination conditions. Image data are processed using photometric stereo and structured light analysis methods to determine the object shape and the data combined using a minimization algorithm. Scans of adjacent object sides are registered together to construct a 3-dimensional surface model. | 09-25-2008 |
20080232680 | Two dimensional/three dimensional digital information acquisition and display device - A two dimensional/three dimensional (2D/3D) digital acquisition and display device for enabling users to capture 3D information using a single device. In an embodiment, the device has a single movable lens with a sensor. In another embodiment, the device has a single lens with a beam splitter and multiple sensors. In another embodiment, the device has multiple lenses and multiple sensors. In yet another embodiment, the device is a standard digital camera with additional 3D software. In some embodiments, 3D information is generated from 2D information using a depth map generated from the 2D information. In some embodiments, 3D information is acquired directly using the hardware configuration of the camera. The 3D information is then able to be displayed on the device, sent to another device to be displayed or printed. | 09-25-2008 |
20080240548 | Isosurfacial three-dimensional imaging system and method - An isosurfacial three-dimensional imaging system and method uses scanning electron microscopy for surface imaging of an assumed opaque object providing a series of tilt images for generating a sinogram of the object and a voxel data set for generating a three-dimensional image of the object having exterior surfaces some of which may be obscured so as to provide exterior three-dimensional surface imaging of objects including hidden surfaces normally obscured from stereographic view. | 10-02-2008 |
20080240549 | METHOD AND APPARATUS FOR CONTROLLING DYNAMIC DEPTH OF STEREO-VIEW OR MULTI-VIEW SEQUENCE IMAGES - A method and an apparatus for controlling a dynamic depth of stereo-view or multi-view images. The method includes receiving stereo-view or multi-view images, generating a disparity histogram by estimating the disparity of two images corresponding to the received images and measuring the frequency of the estimated disparity, determining the disparity control amount of the stereo-view or multi-view images by convoluting the generated disparity histogram and a characteristic function, and rearranging stereo-view or multi-view input images by controlling parallax according to the control amount of disparity. | 10-02-2008 |
20080240550 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - By applying identification processing to each index included in a captured image, a set of an identifier, image coordinates, and an image number is acquired for each index, and the acquired set is registered in a data saving unit. The data saving unit manages the numbers of times of previous identification for respective identifiers. A display unit displays the number of times managed in association with an identifier in a set to be registered every time the set is registered. An index position and orientation calculation unit calculates the positions and orientations of indices corresponding to a set group using the set group registered in a memory. | 10-02-2008 |
20080247638 | Three-Dimensional Object Imaging Device - A three-dimensional object imaging device comprises a compound-eye imaging unit and an image reconstructing unit for reconstructing an image of a three-dimensional object based on multiple unit images captured by the imaging unit. Based on the unit images obtained by the imaging unit, the image reconstructing unit calculates a distance (hereafter “pixel distance”) between the object and the imaging unit for each pixel forming the unit images, and rearranges the unit images pixel-by-pixel on a plane at the pixel distance to create a reconstructed image. Preferably, the image reconstructing unit sums a high-frequency component reconstructed image created from the multiple unit images with a lower noise low-frequency component unit image selected from low-frequency component unit images created from the multiple unit images so as to form a reconstructed image of the three-dimensional object. This makes it possible to obtain a reconstructed image with high definition easily by a simple process. | 10-09-2008 |
20080260237 | Method for Determination of Stand Attributes and a Computer Program for Performing the Method - The invention is concerned with a method for forest inventory and for determination of stand attributes. Stand information of trees, sample plots, stands and larger forest areas can be determined by measuring or deriving the most important attributes for individual trees. The method uses a laser scanner and overlapping images. A densification of the laser point clouds is performed and the achieved denser point clouds are used to identify individual trees and groups of trees. The invention is also concerned with a computer program for performing the method. | 10-23-2008 |
20080260238 | Method and System for Determining Objects Poses from Range Images - A method and system determines a pose of an object by comparing an input range image acquired of a scene including the input object to each of a set of reference range images of a reference object, such that each reference range image has an associated different pose, and the reference object is similar to the input object. Then, the pose associated with the reference range image which best matches the input range image is selected as the pose of the input object. | 10-23-2008 |
20080267490 | SYSTEM AND METHOD TO IMPROVE VISIBILITY OF AN OBJECT IN AN IMAGED SUBJECT - A system to track movement of an object travelling through an imaged subject is provided. The system includes an imaging system to acquire a fluoroscopic image and operable to create a three-dimensional model of a region of interest of the imaged subject. A controller includes computer-readable program instructions representative of the steps of calculating a probability that acquired image data is of the object on a per pixel basis in the fluoroscopic image, calculating a value of a blending coefficient per pixel of the fluoroscopic image dependent on the probability, and adjusting the fluoroscopic image including multiplying the value of the blending coefficient with one of a greyscale value, a contrast value, and an intensity value for each pixel of the fluoroscopic image. The adjusted fluoroscopic image is combined with the three-dimensional model to create an output image illustrative of the object in spatial relation to the three-dimensional model. | 10-30-2008 |
20080279446 | SYSTEM AND TECHNIQUE FOR RETRIEVING DEPTH INFORMATION ABOUT A SURFACE BY PROJECTING A COMPOSITE IMAGE OF MODULATED LIGHT PATTERNS - A technique, associated system and program code, for retrieving depth information about at least one surface of an object. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns, at the object; capturing an image reflected from the surface; and recovering pattern information from the reflected image, for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite, by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern, is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping to be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interface with a computerized device; face—or other animal feature or inanimate object—recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration. | 11-13-2008 |
20080279447 | Computational Solution Of A Building Of Three Dimensional Virtual Models From Aerial Photographs - A method for processing in a computer a plurality of digital images stored in the computer. The digital images are from respective photographs ( | 11-13-2008 |
20080279448 | Device and Method for Automatically Determining the Individual Three-Dimensional Shape of Particles - A method for automated determination of an individual three-dimensional shape of particles includes: a) dosing, alignment, and automated delivery of the particles; b) observation of the aligned particles and image acquisition, and c) evaluation of the images. A device for automated determination of the individual three-dimensional shape of particles includes: a) a mechanism for dosing, alignment, and automated delivery of the particles; b) at least two cameras for observation of the aligned particles, and c) a mechanism for evaluation of the images. The device can be used for automated determination of the individual three-dimensional shape of particles. | 11-13-2008 |
20080279449 | Universal stereoscopic file format - Stereoscopic images may be represented in four coordinates where a first image is represented in three coordinates and a second image is represented in one coordinate. Brightness contrast is the property largely used in stereoscopic perception. The brightness and color of the first image are represented in three coordinates while the brightness of the second image is represented in the one coordinate. Color perception is dominated by the first image. A universal file format with four channels allows the stereoscopic images to be displayed as anaglyphs or as two full color images or as non-stereoscopic images. The anaglyphs may be rendered in three primary colors or four primary colors, providing wide compatibility with traditional and specialized display apparatus. The universal file format facilitates methods to capture, display, convert, and communicate stereoscopic images. | 11-13-2008 |
20080285842 | Optoelectronic multiplane sensor and method for monitoring objects - An optoelectronic sensor and method for detecting an object in a three-dimensional monitored region uses a plurality of video sensors. Each sensor has a multiplicity of light-receiving elements that are configured to take a pixel picture of the monitored space, and a control unit identifies an object in the monitored space from video data of the pixel picture. Each video sensor has at least one pixel line that is formed by light-receiving elements. The video sensors are spaced from each other so that each sensor monitors an associated plane of the monitored space. | 11-20-2008 |
20080285843 | Camera-Projector Duality: Multi-Projector 3D Reconstruction - A system and method are disclosed for calibrating a plurality of projectors for three-dimensional scene reconstruction. The system includes a plurality of projectors and at least one camera, a camera-projector calibration module and a projector-projector calibration module. The camera-projector calibration module is configured to calibrate a first projector with the camera and generate a first camera-projector calibration data using camera-projector duality. The camera-projector calibration module is also configured to calibrate a second projector with the camera and generate a second camera-projector calibration data. The projector-projector calibration module is configured to calibrate the first and the second projector using the first and the second camera-projector calibration data. | 11-20-2008 |
20080292179 | SYSTEM AND METHOD FOR EVALUATING THE NEEDS OF A PERSON AND MANUFACTURING A CUSTOM ORTHOTIC DEVICE - A system for providing a custom orthotic can include a scanner, an imager for providing a digital three-dimensional image based on the scan, a gait and pressure measuring device, and a data inputting system for inputting information regarding the customer. An analysis device can be provided to make modifications to the three-dimensional image based on the customer information, and the modified three-dimensional image can be forwarded electronically to a manufacturer for production. | 11-27-2008 |
20080292180 | POSITION AND ORIENTATION MEASUREMENT APPARATUS AND CONTROL METHOD THEREOF - A position and orientation measurement apparatus for measuring the position and orientation of an image capturing apparatus, which captures an image of a measurement object, relative to the measurement object, extracts configuration planes of the measurement object based on three-dimensional model data of the measurement object, and extracts measurement line segments to be used in detection of edges of a captured image from line segments which form the configuration planes. The position and orientation measurement apparatus projects the extracted measurement line segments onto the captured image based on an estimated position and orientation of the image capturing apparatus, selects visible measurement line segments which are not hidden by the extracted configuration planes, and calculates the position and orientation of the image capturing apparatus relative to the measurement object based on the visible measurement line segments and corresponding edges of the captured image. | 11-27-2008 |
20080298672 | SYSTEM AND METHOD FOR LOCATING A THREE-DIMENSIONAL OBJECT USING MACHINE VISION - This invention provides a system and method for determining position of a viewed object in three dimensions by employing 2D machine vision processes on each of a plurality of planar faces of the object, and thereby refining the location of the object. First a rough pose estimate of the object is derived. This rough pose estimate can be based upon predetermined pose data, or can be derived by acquiring a plurality of planar face poses of the object (using, for example, multiple cameras) and correlating the corners of the trained image pattern, which have known coordinates relative to the origin, to the acquired patterns. Once the rough pose is achieved, this is refined by defining the pose as a quaternion (a, b, c and d) for rotation and three variables (x, y, z) for translation and employing an iterative weighted, least squares error calculation to minimize the error between the edgelets of the trained model image and the acquired runtime edgelets. The overall, refined/optimized pose estimate incorporates data from each of the cameras' acquired images. Thereby, the estimate minimizes the total error between the edgelets of each camera's/view's trained model image and the associated camera's/view's acquired runtime edgelets. A final transformation of trained features relative to the runtime features is derived from the iterative error computation. | 12-04-2008 |
20080298673 | THREE-DIMENSIONAL DATA REGISTRATION METHOD FOR VISION MEASUREMENT IN FLOW STYLE BASED ON DOUBLE-SIDED TARGET - The present disclosure is directed to a three-dimensional data registration method for vision measurement in flow style based on a double-sided target. An embodiment of the disclosed method comprises A. Setting up two digital cameras which can observe the entire measured object; B. Calibrating intrinsic parameters and a transformation between the two digital camera coordinate frames; C. Placing a double-sided target near the measured area of the measured object, the two digital cameras and a vision sensor taking images of at least three non-collinear feature points of the double-sided target; D. Removing the target and measuring the measured area by using the vision sensor; E. Respectively computing the three dimensional coordinates of the feature points in the global coordinate frame and in the vision sensor coordinate frame; F. Estimating the transformation from the vision sensor coordinate frame to the global coordinate frame through the three dimensional coordinates of the three or more non-collinear feature points obtained at step E, then transforming the three dimensional data of the measured area to the global coordinate frame; and G. Repeating steps C, D, E, and F, then completing three dimensional data registration for all measured areas. The present disclosure improves three dimensional data registration precision and efficiency. | 12-04-2008 |
20080298674 | Stereoscopic Panoramic imaging system - An imaging system for producing stereoscopic panoramic images using multiple coplanar pairs of image capture devices with overlapping fields of view held in a rigid structural frame for long term calibration maintenance. Pixels are dynamically adjusted within the imaging system for position, color, brightness, aspect ratio, lens imperfections, imaging chip variations and any other imaging system shortcomings that are identified during calibration processes. Correction of pixel information is implemented in various combinations of hardware and software. Corrected image data is then available for storage or display or for separate data processing actions such as object distance or volume calculations. | 12-04-2008 |
20080310707 | VIRTUAL REALITY ENHANCEMENT USING REAL WORLD DATA - Techniques for enhancing virtual reality using transformed real world data are disclosed. In some aspects, a composite reality engine receives a transmission of the real world data that is captured by embedded sensors situated in the real world. The real world data is transformed and integrated with virtual reality data to create a composite reality environment generated by a composite reality engine. In other aspects, the composite reality environment enables activation of embedded actuators to modify the real world from the virtual reality environment. In still further aspects, techniques for sharing sensors and actuators in the real world are disclosed. | 12-18-2008 |
20080310708 | Method for Improving Image Viewing Properties of an Image - An image processing method for improving image viewing properties of a digital image is disclosed. The method comprises converting a value of a property of at least one pixel of the image into a display value of the at least one pixel of the image by means of a parameterized function, wherein the parameterized function is location dependent with reference to the location of said at least one pixel of the image, thus creating locally optimized image viewing properties of the image. | 12-18-2008 |
20080317331 | Recognizing Hand Poses and/or Object Classes - There is a need to provide simple, accurate, fast and computationally inexpensive methods of object and hand pose recognition for many applications. For example, to enable a user to make use of his or her hands to drive an application either displayed on a tablet screen or projected onto a table top. There is also a need to be able to discriminate accurately between events when a user's hand or digit touches such a display from events when a user's hand or digit hovers just above that display. A random decision forest is trained to enable recognition of hand poses and objects and optionally also whether those hand poses are touching or not touching a display surface. The random decision forest uses image features such as appearance, shape and optionally stereo image features. In some cases, the training process is cost aware. The resulting recognition system is operable in real-time. | 12-25-2008 |
20080317332 | System and Method for Determining Geometries of Scenes - A method and an apparatus determines a geometry of a scene by projecting one or more output images into the scene, in which a time to project the output image is t | 12-25-2008 |
20080317333 | METHOD AND SYSTEM FOR CORRECTION OF FLUOROSCOPE IMAGE DISTORTION - Certain embodiments of the present invention provide for a system and method for modeling S-distortion in an image intensifier. In an embodiment, the method may include identifying a reference coordinate on an input screen of the image intensifier. The method also includes computing a set of charged particle velocity vectors. The method also includes computing a set of magnetic field vectors. The method also includes computing the force exerted on the charged particle in an image intensifier. Certain embodiments of the present invention include an iterative method for calibrating an image acquisition system with an analytic S-distortion model. In an embodiment, the method may include comparing the difference between the measured fiducial shadow positions and the model fiducial positions with a threshold value. If the difference is less than the threshold value, the optical distortion parameters are used for linearizing the set of acquired images. | 12-25-2008 |
20080317334 | Method and Microscopy Device for the Deflectometric Detection of Local Gradients and the Three-Dimensional Shape of an Object - The invention relates to a method and an apparatus for high-resolution deflectometric determination of the local slope and of the three-dimensional shape of an object ( | 12-25-2008 |
20090003686 | ENHANCED OBJECT RECONSTRUCTION - Processing images includes projecting an infra-red pattern onto a three-dimensional object and producing a first image, a second image, and a third image of the three-dimensional object while the pattern is projected on the three-dimensional object. The first image and the second image include the three-dimensional object and the pattern. The first image and the second image are produced by capturing at a first camera and a second camera, respectively, light filtered through an infra-red filter. The third image includes the three-dimensional object but not the pattern. Processing the images also includes establishing a first-pair correspondence between a portion of pixels in the first image and a portion of pixels in the second image. Processing the images further includes constructing, based on the first-pair correspondence and the third image, a two-dimensional image that depicts a three-dimensional construction of the three-dimensional object. | 01-01-2009 |
20090003687 | Segmenting Image Elements - A method of segmenting image elements into a foreground and background is described, such that only the foreground elements are part of a volume of interest for stereo matching. This reduces computational burden as compared with computing stereo matching over the whole image. An energy function is defined using a probabilistic framework and that energy function approximated to require computation only over foreground disparities. An optimization algorithm is used on the energy function to perform the segmentation. | 01-01-2009 |
20090003688 | System and method for creating images - The invention provides a method of replicating the primary human field of view in an image. The method comprises receiving at least three digital images of a scene, the digital images comprising a centre image facing a scene directly, a centre left image obtained by rotating an image capture device a predefined angle to the left of centre and a centre right image obtained by rotating the image capture device a predefined angle to the right of centre; manipulating the centre image, the centre left image, and the centre right image on a data processing device; obtaining a composite image from the manipulated centre image, centre left image and centre right image conformed to the first virtual model; manipulating the composite image on the data processing device; obtaining a distortion adjusted image from the composite image conformed to the second virtual model; creating a physical image of the distortion adjusted image; and physically manipulating the physical image to form a physical image having a planar centre portion and curved left and right portions extending toward a viewpoint. | 01-01-2009 |
20090010530 | INFORMATION PROCESSING SYSTEM - An information processing system for performing processes on first image and second image captured from different viewpoints, comprising: a first specifying part for specifying a first corresponding point on the second image, corresponding to a designation point designated on the first image, by searching on a line along a first basis direction corresponding to a predetermined direction and passing through a position corresponding to the designation point in the second image; a second specifying part for specifying a second corresponding point on the second image, corresponding to the designation point, by searching on a line passing through the first corresponding point in the second image and along a second basis direction almost perpendicular to the first basis direction; and a third specifying part for specifying a third corresponding point on the second image, corresponding to the designation point, by searching on a line passing through the second corresponding point in the second image and along the first basis direction. | 01-08-2009 |
20090016598 | METHOD FOR COMPUTER-AIDED IDENTIFICATION OF THE CHILD OCTANTS OF A PARENT OCTANT, WHICH ARE INTERSECTED BY A BEAM, IN AN OCTREE DATA STRUCTURE BY MEANS OF LOOK-UP TABLES - The present invention relates to a method for computer-aided identification of the child octants of a parent octant, which are intersected by a beam, in an octree data structure. The method firstly determines the number of the child octants of the parent octant which are intersected by the beam and, on the basis thereof, the child octants of the parent octant which are intersected by the beam. It is characterised in that, for determination of intermediate octants which do not correspond to the entry and the exit octant and nevertheless are intersected by the beam, look-up tables are used for identification. | 01-15-2009 |
20090022393 | Method for reconstructing a three-dimensional surface of an object - Method for determining a disparity value of a disparity of each of a plurality of points on an object, the method including the procedures of detecting by a single image detector, a first image of the object through a first aperture, and a second image of the object through a second aperture, correcting the distortion of the first image, and the distortion of the second image, by applying an image distortion correction model to the first image and to the second image, respectively, thereby producing a first distortion-corrected image and a second distortion-corrected image, respectively, for each of a plurality of pixels in at least a portion of the first distortion-corrected image representing a selected one of the points, identifying a matching pixel in the second distortion-corrected image, and determining the disparity value according to the coordinates of each of the pixels and of the respective matching pixel. | 01-22-2009 |
20090041336 | STEREO MATCHING SYSTEM AND STEREO MATCHING METHOD USING THE SAME - A stereo matching system and a stereo matching method using the same. Here, a Sum of Edge Differences (SED) method as a disparity estimation method utilizing edge information is added to a disparity estimation method utilizing a local method to perform stereo matching. As such, it is possible to correct false matching in a non-texture region generated when stereo matching is performed using only a local method, thereby enabling good stereo matching. | 02-12-2009 |
20090041337 | IMAGE PROCESSING APPARATUS AND METHOD - Three-dimensional position information of each of feature points in a left and a right image is calculated based on a disparity between the left and right images; a lane marker existing on a road surface is detected from each of the left and right images; based on three-dimensional position information of a lane marker in a neighboring road surface area, by extending the lane marker to a distant area, a lateral direction position, and a depth direction position, of the extended lane marker in the distant area are estimated; an edge segment of a certain length or more is detected from feature points in the distant area in each of a plurality of images; three-dimensional position information of the edge segment is calculated; and, based on the three-dimensional position information of the edge segment, and on the extended lane marker information, a road incline in the distant area is estimated. | 02-12-2009 |
20090041338 | PHOTOGRAPHING FIELD ANGLE CALCULATION APPARATUS - A memory of a photographing field angle calculation apparatus has stored therein the position of each point captured by an imaging system as three coordinate values in horizontal, vertical, and depth directions when a photography space is photographed by the imaging system. A usage pixel value extraction means selects a plurality of points located in end portions in the horizontal direction of a range captured by the imaging system from those stored in the memory based on the vertical coordinate value, and extracts the horizontal and depth coordinate values of each of the selected points. A field angle calculation means calculates a horizontal photographing field angle when the photography space is photographed using the extracted horizontal and depth coordinate values. | 02-12-2009 |
20090041339 | Pseudo 3D image generation device, image encoding device, image encoding method, image transmission method, image decoding device, and image decoding method - A pseudo 3D image generation device includes frame memories that store a plurality of basic depth models used for estimating depth data based on a non-3D image signal and generating a pseudo 3D image signal; a depth model combination unit that combines the plurality of basic depth models for generating a composite depth model based on a control signal indicating composite percentages for combining the plurality of basic depth models; an addition unit that generates depth estimation data from the non-3D image signal and the composite depth models; and a texture shift unit that shifts the texture of the non-3D image for generating the pseudo 3D image signal. | 02-12-2009 |
20090046924 | Stereo-image processing apparatus - A stereo-image processing apparatus includes a stereo-image taking means configured to take a plurality of images from different viewpoints, a parallax detecting means configured to detect a parallax of a subject on the basis of the images taken by the stereo-image taking means, an object detecting means configured to detect objects on the basis of the parallax detected by the parallax detecting means and a parallax offset value, and a parallax-offset-value correcting means configured to correct the parallax offset value on the basis of a change in a parallax corresponding to an object whose size in real space does not change with time, of the objects detected by the object detecting means, and a change in an apparent size of the object. | 02-19-2009 |
20090052767 | Modelling - A method of modelling an object ( | 02-26-2009 |
20090060319 | METHOD, A SYSTEM AND A COMPUTER PROGRAM FOR SEGMENTING A SURFACE IN A MULTI-DIMENSIONAL DATASET - The method according to the invention is arranged to segment a surface in a multi-dimensional dataset comprising a plurality of images, which may be acquired using a suitable data-acquisition unit at a preparatory step | 03-05-2009 |
20090060320 | INFORMATION PRESENTATION SYSTEM, INFORMATION PRESENTATION APPARATUS, INFORMATION PRESENTATION METHOD, PROGRAM, AND RECORDING MEDIUM ON WHICH SUCH PROGRAM IS RECORDED - Disclosed is an information presentation system that includes a plurality of information presentation apparatuses movable and displaying images of a plurality of objects, and a control apparatus outputting control signals for controlling the information presentation apparatuses. Each information presentation apparatus includes a display unit, a moving unit, a driving unit, a position sensor, a first communication unit, and a control unit. The control apparatus includes an object position information obtaining unit, a second communication unit, and a control unit. The control unit of the information presentation apparatus controls the display unit to display an image of the object, for which position information has been obtained by the object position information obtaining unit of the control apparatus, and controls the driving unit based on the control signal received by the first communication unit. | 03-05-2009 |
20090060321 | SYSTEM FOR COMMUNICATING AND METHOD - A system communicates a representation of a scene, which includes a plurality of objects disposed on a plane, to one or more client devices. The representation is generated from one or more video images of the scene captured by a video camera. The system comprises an image processing apparatus operable to receive the video images of the scene which includes a view of the objects on the plane, to process the captured video images so as to extract one or more image features from each object, to compare the one or more image features with sample image features from a predetermined set of possible example objects which the video images may contain, to identify the objects from the comparison of the image features with the predetermined image features of the possible example objects, and to generate object path data for each object which identifies the respective object and provides a position of the identified object on a three dimensional model of the plane in the video images with respect to time. The image processing apparatus is further operable to calculate a projection matrix for projecting the position of each of the objects according to the object path data from the plane in the video image into the three dimensional model of the plane. A distribution server is operable to receive the object path data and the projection matrix generated by the image processing apparatus for distribution of the object path data and the projection matrix to one or more client devices. The system is arranged to generate a representation of an event, such as a sporting event, which provides a substantial reduction in the amount of information which must be communicated to represent the event. As such, the system can be used to communicate the representation of the event, via a bandwidth limited communications network, such as the internet, from the server to one or more client devices in real time. Furthermore, the system can be used to view one or more of the objects within the video images by extracting the objects from the video images. | 03-05-2009 |
20090067705 | Method and Apparatus to Facilitate Processing a Stereoscopic Image Using First and Second Images to Facilitate Computing a Depth/Disparity Image - The processing of a stereoscopic image using first and second images to facilitate computing a corresponding depth/disparity image can be facilitated by providing ( | 03-12-2009 |
20090067706 | System and Method for Multiframe Surface Measurement of the Shape of Objects - A system and method are provided for the multiframe surface measurement of the shape of material objects. The system and method include capturing a plurality of images of portions of the surface of the object being measured and merging the captured images together in a common reference system. The shape and/or texture of a complex-shaped object can be measured using a 3D scanner by capturing multiple images from different perspectives and subsequently merging the images in a common coordinate system to align the merged images together. Alignment is achieved by capturing images of both a portion of the surface of the object and also of a reference object having known characteristics (e.g., shape and/or texture). This allows the position and orientation of the object scanner to be determined in the coordinate system of the reference object. | 03-12-2009 |
20090067707 | Apparatus and method for matching 2D color image and depth image - Provided are an apparatus and method for matching a 2D color image and a depth image to obtain 3D information. The method includes matching the resolution of the 2D color image and the resolution of a light intensity image, wherein the 2D color image and the light intensity image are separately obtained, detecting at least one edge from the matched 2D color image and the matched light intensity image, and matching overlapping pixels of the matched 2D color image and a depth image, which corresponds to the matched light intensity image, with each other when the matched 2D color image and the depth image are overlapped as much as the matched 2D color image and the matched light intensity image are overlapped so that the detected edges of the matched 2D color image and the detected edges of the matched light intensity image are maximally overlapped with each other. Accordingly, the 2D color image and the depth image can be accurately matched so that reliable 3D image information can be quickly obtained. | 03-12-2009 |
20090080765 | SYSTEM AND METHOD TO GENERATE A SELECTED VISUALIZATION OF A RADIOLOGICAL IMAGE OF AN IMAGED SUBJECT - A system to illustrate image data of an imaged subject is provided. The system comprises an imaging system, an input device, an output device, and a controller in communication with the imaging system, the input device, and the output device. The controller includes a processor to perform program instructions representative of the steps of generating a three-dimensional reconstructed volume from the plurality of two-dimensional, radiography images, navigating through the three-dimensional reconstructed volume, the navigating step including receiving an instruction from an input device that identifies a location of a portion of the three-dimensional reconstructed volume, calculating and generating a two-dimensional display of the portion of the three-dimensional reconstructed volume identified in the navigation step, and reporting the additional view or at least one parameter to calculate and generate the additional view. | 03-26-2009 |
20090080766 | Method and apparatus for the Three-Dimensional Digitization of objects - This invention relates to a method and an apparatus for the three-dimensional digitization of objects with a 3D sensor, which comprises a projector and one or more cameras, in which a pattern is projected onto the object by means of the projector, and the pattern is detected with the one or more cameras. In accordance with the invention, the method and the apparatus are characterized in that at least three reference marks and/or a reference raster are projected onto the object with the 3D sensor and are detected with two or more external, calibrated digital cameras. | 03-26-2009 |
20090080767 | METHOD FOR DETERMINING A DEPTH MAP FROM IMAGES, DEVICE FOR DETERMINING A DEPTH MAP - Window based matching is used for determining a depth map from images obtained from different orientations. A set of fixed matching windows is used for points of the image for which the depth is to be determined. The set of matching windows covers a footprint of pixels around the point of the image, and the average number of matching windows that a pixel of the footprint (FP) belongs to is less than one plus the number of pixels in the footprint divided by 15. | 03-26-2009 |
20090092311 | METHOD AND APPARATUS FOR RECEIVING MULTIVIEW CAMERA PARAMETERS FOR STEREOSCOPIC IMAGE, AND METHOD AND APPARATUS FOR TRANSMITTING MULTIVIEW CAMERA PARAMETERS FOR STEREOSCOPIC IMAGE - Provided is a method of receiving multiview camera parameters for a stereoscopic image. The method includes: extracting multiview camera parameter information for a predetermined data section from a received stereoscopic image data stream; extracting matrix information including at least one of translation matrix information and rotation matrix information for the predetermined data section from the multiview camera parameter information; and restoring coordinate systems of multiview cameras by using the extracted matrix information. | 04-09-2009 |
20090110266 | STEREOSCOPIC IMAGE PROCESSING DEVICE AND METHOD, STEREOSCOPIC IMAGE PROCESSING PROGRAM, AND RECORDING MEDIUM HAVING THE PROGRAM RECORDED THEREIN - The present invention is directed to a stereo image processing apparatus adapted for generating stereo images which permit, at a glance, discrimination of a suitable observation method. This stereo image processing apparatus includes an image input unit ( | 04-30-2009 |
20090110267 | Automated texture mapping system for 3D models - A camera pose may be determined automatically and is used to map texture onto a 3D model based on an aerial image. In one embodiment, an aerial image of an area is first determined. A 3D model of the area is also determined, but does not have texture mapped on it. To map texture from the aerial image onto the 3D model, a camera pose is determined automatically. Features of the aerial image and 3D model may be analyzed to find corresponding features in the aerial image and the 3D model. In one example, a coarse camera pose estimation is determined that is then refined into a fine camera pose estimation. The fine camera pose estimation may be determined based on the analysis of the features. When the fine camera pose is determined, it is used to map texture onto the 3D model based on the aerial image. | 04-30-2009 |
20090116728 | Method and System for Locating and Picking Objects Using Active Illumination - A method and system determines a 3D pose of an object in a scene. Depth edges are determined from a set of images acquired of a scene including multiple objects while varying illumination in the scene. The depth edges are linked to form contours. The images are segmented into regions according to the contours. An occlusion graph is constructed using the regions. The occlusion graph includes a source node representing an unoccluded region of an unoccluded object in scene. The contour associated with the unoccluded region is compared with a set of silhouettes of the objects, in which each silhouette has a known pose. The known pose of a best matching silhouette is selected as the pose of the unoccluded object. | 05-07-2009 |
20090116729 | THREE-DIMENSIONAL POSITION DETECTING DEVICE AND METHOD FOR USING THE SAME - A three-dimensional position detecting device includes an electromagnetic radiation source, a first sensing module having first sensing elements, and a second sensing module having second sensing elements. The first and the second sensing elements receive different radiation energies from different spatial direction angles generated by the electromagnetic radiation source relative to the first and the second sensing elements, so values of two spatial direction angles of the electromagnetic radiation source relative to the first and the second sensing modules are obtained according to magnitude relationship of the radiation energies received by the first and the second sensing modules. According to matrix operation of two spatial distances from the electromagnetic radiation source to the first and the second sensing modules and the two spatial direction angles, a spatial coordinate position of the electromagnetic radiation source relative to the first and the second sensing modules is obtained. | 05-07-2009 |
20090116730 | THREE-DIMENSIONAL DIRECTION DETECTING DEVICE AND METHOD FOR USING THE SAME - A three-dimensional direction detecting device, including: an electromagnetic radiation source and a sensing module. The electromagnetic radiation source is used to generate electromagnetic radiations. The sensing module has a plurality of sensing elements for receiving different radiation energies generated by the electromagnetic radiations from different spatial angles. Therefore, the sensing elements respectively receive the different radiation energies from different spatial direction angles generated by the electromagnetic radiation source relative to the sensing elements, so that the value of a spatial direction angle of the electromagnetic radiation source relative to the sensing module is obtained according to the magnitude relationship of the radiation energies received by the sensing module. | 05-07-2009 |
20090116731 | Method and system for detection of concha and intertragal notch point in 3D undetailed ear impressions - A method and system for detecting the concha and intertragal notch in an undetailed 3D ear impression is disclosed. The concha is detected by searching vertical scan lines in a region surrounding the aperture using a two-pass method. The intertragal notch is detected based on a bottom contour of the 3D undetailed ear impression and a local coordinate system defined for the 3D undetailed ear impression. | 05-07-2009 |
20090116732 | METHODS AND SYSTEMS FOR CONVERTING 2D MOTION PICTURES FOR STEREOSCOPIC 3D EXHIBITION - The present invention discloses methods of digitally converting 2D motion pictures or any other 2D image sequences to stereoscopic 3D image data for 3D exhibition. In one embodiment, various types of image data cues can be collected from 2D source images by various methods and then used for producing two distinct stereoscopic 3D views. Embodiments of the disclosed methods can be implemented within a highly efficient system comprising both software and computing hardware. The architectural model of some embodiments of the system is equally applicable to a wide range of conversion, re-mastering and visual enhancement applications for motion pictures and other image sequences, including converting a 2D motion picture or a 2D image sequence to 3D, re-mastering a motion picture or a video sequence to a different frame rate, enhancing the quality of a motion picture or other image sequences, or other conversions that facilitate further improvement in visual image quality within a projector to produce the enhanced images. | 05-07-2009 |
20090116733 | Systems and Methods for Creating and Viewing Three Dimensional Virtual Slides - Systems and methods for creating and viewing three dimensional virtual slides are provided. One or more microscope slides are positioned in an image acquisition device that scans the specimens on the slides and makes two dimensional images at a medium or high resolution. These two dimensional images are provided to an image viewing workstation where they are viewed by an operator who pans and zooms the two dimensional image and selects an area of interest for scanning at multiple depth levels (Z-planes). The image acquisition device receives a set of parameters for the multiple depth level scan, including a location and a depth. The image acquisition device then scans the specimen at the location in a series of Z-plane images, where each Z-plane image corresponds to a depth level portion of the specimen within the depth parameter. | 05-07-2009 |
20090123061 | Depth image generating method and apparatus - A method of and apparatus for generating a depth image are provided. The method of generating a depth image includes: emitting light to an object for a first predetermined time period; detecting first light information of the object for the first predetermined time period from the time when the light is emitted; detecting second light information of the object for the first predetermined time period a second predetermined time period after the time when the light is emitted; and by using the detected first and second light information, generating a depth image of the object. In this way, the method can generate a depth image more quickly. | 05-14-2009 |
20090129665 | Image processing system, 3-dimensional shape estimation system, object position/posture estimation system and image generation system - An object of the present invention is to process an image without a need to previously find out the initial value of a parameter representing an illumination condition and without a need for a user to manually input the illumination parameter. An image processing system includes a generalized illumination basis model generation means | 05-21-2009 |
20090129666 | METHOD AND DEVICE FOR THREE-DIMENSIONAL RECONSTRUCTION OF A SCENE - Passive methods for three-dimensional reconstruction of a scene by means of image data are generally based on the determination of spatial correspondences between a number of images of the scene recorded from various directions and distances. A method and a device are disclosed which provide a high reliability in the solution of the correspondence problem in conjunction with a low computational outlay. Image areas for determining the correspondences are determined within a plurality of images forming at least two image sequences. In preferred embodiments, a parameterized function h(u,v,t) is matched to each of the image areas in a space R(uvgt) defined by pixel position (u, v), image value g and time t. The parameters of the parameterized functions are used to form a similarity measure between the image areas. | 05-21-2009 |
20090129667 | DEVICE AND METHOD FOR ESTIMATING DEPTH MAP, AND METHOD FOR GENERATING INTERMEDIATE IMAGE AND METHOD FOR ENCODING MULTI-VIEW VIDEO USING THE SAME - The present invention relates to a device and a method for estimating a depth map, and a method for making an intermediate image and a method for encoding multi-view video using the same. More particularly, the present invention relates to a device and a method for estimating a depth map that are capable of acquiring a depth map that reduces errors and complexity, and is resistant to external influence by dividing an area into segments on the basis of similarity, acquiring a segment-unit initial depth map by using a three-dimensional warping method and a self adaptation function to which an extended gradient map is reflected, and refining the initial depth map by performing a belief propagation method by the segment unit, and achieving smoother view conversion and improved encoding efficiency by generating an intermediate image with the depth map and utilizing the intermediate image for encoding a multi-view video, and a method for generating the intermediate image and a method for encoding the multi-view video using the same. | 05-21-2009 |
20090141966 | INTERACTIVE GEO-POSITIONING OF IMAGERY - An interactive user-friendly incremental calibration technique that provides immediate feedback to the user when aligning a point on a 3D model to a point on a 2D image. A user can drag and drop points on a 3D model onto points on a 2D image. As the user drags the correspondences, the application updates current estimates of where the camera would need to be to match the correspondences. The 2D and 3D images can be overlaid on each other and are sufficiently transparent for visual alignment. The user can fade between the 2D/3D views providing immediate feedback as to the improvements in alignment. The user can begin with a rough estimate of camera orientation and then progress to more granular parameters such as estimates for focal length, etc., to arrive at the desired alignment. While one parameter is adjustable, other parameters are fixed allowing for user adjustment of one parameter at a time. | 06-04-2009 |
20090141967 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - A disparity function setting unit configured to set a plurality of disparity relationships expressing disparities as functions of an image position; a data term calculating unit configured to calculate the similarity of corresponding areas between images specified by the preset disparity functions; a smoothing term calculating unit configured to calculate the consistency between the disparity functions and the pixels located in the vicinity; and a disparity function selecting unit configured to select the disparity function for each point of the image from the plurality of preset disparity functions are provided. | 06-04-2009 |
20090141968 | CORONARY RECONSTRUCTION FROM ROTATIONAL X-RAY PROJECTION SEQUENCE - A method for three-dimensional reconstruction of a branched object from a rotational sequence of images of the branched object includes segmenting the branched object from each image of the sequence, extracting centerlines of the branched object, performing symbolic reconstruction via a stereo correspondence matching between the centerlines from different views of the sequence of images using a graph cut-based optimization, and creating a three-dimensional tomographic reconstruction of the branched object compensated for motion of the branched object between the images of the sequence. | 06-04-2009 |
20090148036 | Image processing apparatus, image processing method, image processing program and position detecting apparatus as well as mobile object having the same - There is provided an image processing apparatus capable of reducing a memory amount to be used and a processing time in processing images captured stereoscopically in wide-angle. In order to find pixel positions of an object as information for use in detecting position of the object from images captured by two cameras that are capable of imaging the object in wide-angle and are disposed on a straight line, the image processing apparatus includes an image input means for inputting the images captured by the two cameras, an image projecting means for projecting the images inputted from the respective cameras on a cylindrical plane having an axial line disposed in parallel with the straight line on which the respective cameras are disposed while correcting distortions and a pixel position detecting means for detecting the pixel positions corresponding to the object in the image projected on the cylindrical plane. | 06-11-2009 |
20090148037 | Color-coded target, color code extracting device, and three-dimensional measuring system - To provide a color-coded target having a color code of colors chosen not to cause code reading errors and a technique for automatically detecting and processing the targets. The color-coded target CT | 06-11-2009 |
20090148038 | DISTANCE IMAGE PROCESSING APPARATUS AND METHOD - A distance image processing apparatus including a distance image obtaining unit for obtaining distance values that include depth information and position information, and represent a three-dimensional shape of a subject obtained by photographing the subject, a conversion unit for converting the depth information with a quantization number such that the smaller the depth information the larger the quantization number, and an image file generation unit for generating an image file of a distance image with distance values that include the converted depth information as the pixel value of each pixel, the image file including information related to the conversion attached thereto. | 06-11-2009 |
20090154792 | Linear Feature Detection Method and Apparatus - A method of extracting linear features from an image, the method including the steps of: (a) applying a non-maximum suppression filter to the image for different angles of response to produce a series of filtered image responses; (b) combining the filtered image responses into a combined image having extracted linear features. | 06-18-2009 |
20090154793 | DIGITAL PHOTOGRAMMETRIC METHOD AND APPARATUS USING INTEGRATED MODELING OF DIFFERENT TYPES OF SENSORS - Disclosed is a digital photogrammetric method and apparatus using the integrated modeling of different types of sensors. A unified triangulation method is provided for an overlapping area between an aerial image and a satellite image that are captured by a frame camera and a line camera equipped with different types of sensors. Ground control lines or ground control surfaces are used as ground control features used for the triangulation. A few ground control points may be used together with the ground control surface in order to further improve the three-dimensional position. The ground control line and the ground control surface may be extracted from LiDAR data. In addition, triangulation may be performed by bundle adjustment in the units of blocks each having several aerial images and satellite images. When an orthophoto is needed, it is possible to generate the orthophoto by appropriately using elevation models with various accuracies that are created by a LiDAR system, according to desired accuracy. | 06-18-2009 |
20090154794 | Method and apparatus for reconstructing 3D shape model of object by using multi-view image information - A method for reconstructing a 3D shape model of an object by using multi-view image information, includes: inputting multi-view images obtained by photographing the object from multiple viewpoints in a voxel space, and extracting silhouette information and color information of the multi-view images; reconstructing visual hulls by silhouette intersection using the silhouette information; and approximating polygons of cross-sections of the visual hulls to a natural geometric shape of the object by using the color information. Further, the method includes expressing a 3D geometric shape of the object by connecting the approximated polygons to create a mesh structure; extracting color textures of a surface of the object by projecting meshes of the mesh structure to the multi-view image; and creating a 3D shape model by modeling natural shape information and surface color information of the object. | 06-18-2009 |
20090161944 | TARGET DETECTING, EDITING AND REBUILDING METHOD AND SYSTEM BY 3D IMAGE - A method and system for target detecting, editing and rebuilding by 3D image is provided, which comprises an inputting and picking unit, a training and detecting unit, a displaying and editing unit and a rebuilding unit. The inputting and picking unit receives a digital image and a LiDAR data and picks up a first parameter to form a 3D image. The training and detecting unit selects a target, picks up a second parameter therefrom, calculates the second parameter to generate a threshold and detects the target areas in the 3D image according to the threshold. The displaying and editing unit sets a quick selecting tool according to the threshold and edits the detecting result. The rebuilding unit sets a buffer area surrounding the target, picks up a third parameter therefrom and calculates the original shape of the target by the Surface Fitting method according to the third parameter. | 06-25-2009 |
20090161945 | Geometric parameter measurement of an imaging device - Disclosed is a method of determining at least one three-dimensional (3D) geometric parameter of an imaging device. A two-dimensional (2D) target image is provided having a plurality of alignment patterns. The target image is imaged with an imaging device to form a captured image. At least one pattern of the captured image is compared with a corresponding pattern of the target image. From the comparison, the geometric parameter of the imaging device is then determined. The alignment patterns include at least one of (i) one or more patterns comprising a 2D scale and rotation invariant basis function, (ii) one or more patterns comprising a 1D scale invariant basis function, and (iii) one or more patterns having a plurality of grey levels and comprising a plurality of superimposed sinusoidal patterns, the plurality of sinusoidal patterns having a plurality of predetermined discrete orientations. Also disclosed is a two-dimensional test chart for use in testing an imaging device, the test chart comprising a plurality of alignment patterns, at least one of said alignment patterns including one of those patterns mentioned above. | 06-25-2009 |
20090161946 | IMAGE PROCESSING APPARATUS - An image processing apparatus comprises an inputting section for inputting a plurality of continuous images which were photographed by a photographing section progressively moving relative to a photographed object; an extracting section for extracting characteristic points from images input by the inputting section; a tracking section for tracking the points corresponding to the characteristic points in the plurality of continuous images; an embedding section for embedding tracking data, which includes data of extracted and tracked points by the extracting section and the tracking section, into each image; and an outputting section for outputting the plurality of continuous images sequentially in which the tracking data was embedded by the embedding section. | 06-25-2009 |
20090169095 | SYSTEM AND METHOD FOR GENERATING STRUCTURED LIGHT FOR 3-DIMENSIONAL IMAGE RENDERING - A system and method for illuminating an object in preparation for three-dimensional rendering includes a projection device configured to project at least three two-dimensional structured light patterns onto a 3-dimensional object. At least two cameras detect light reflected by the object in response to the at least three structured light patterns. Each structured light pattern varies in intensity in a first dimension and is constant in a second dimension. A single line along the first dimension of a given structured light pattern is created from a superposition of three or more component triangular waveforms. Each component triangular waveform has an amplitude, a periodicity (frequency), and a phase shift which is implemented as a pixel shift. Each component triangular waveform may be subject to one or more waveshaping operations prior to being summed with the remaining component triangular waveforms. The summed waveform itself may also be subject to waveshaping operations. | 07-02-2009 |
20090169096 | IMAGE PROCESSING METHODS AND APPARATUS - We describe methods of characterising a set of images to determine their respective illumination, for example for recovering the 3D shape of an illuminated object. The method comprises: inputting a first set of images of the object captured from different positions; determining frontier point data from the images, this defining a plurality of frontier points on the object and for each said frontier point a direction of a normal to the surface of the object at the frontier point, and determining data defining the image capture positions; inputting a second set of images of said object, having substantially the same viewpoint and different illumination conditions; and characterising the second set of images using said frontier point data to determine data comprising object reflectance parameter data (β) and, for each image of said second set, illumination data (L) comprising data defining an illumination direction and illumination intensity for the image. | 07-02-2009 |
20090180682 | SYSTEM AND METHOD FOR MEASURING IMAGE QUALITY - The present invention provides an improved system and method for measuring quality of both single and stereo video images. The embodiments of the present invention include frequency content measure for a single image or region-of-interest thereof and disparity measure for stereo images or region-of-interest thereof. | 07-16-2009 |
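As a rough illustration of a frequency-content measure of the kind mentioned in the entry above (an assumption-level sketch, not the patented metric), one can score a grayscale region of interest by the share of its spectral energy outside a low-frequency disc; the name high_frequency_ratio and the cutoff_fraction parameter are illustrative.

```python
import numpy as np

def high_frequency_ratio(roi, cutoff_fraction=0.25):
    """Fraction of FFT energy outside a low-frequency disc of the ROI spectrum."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(roi.astype(float)))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = cutoff_fraction * min(h, w) / 2.0
    low_mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    total = spectrum.sum()
    return float(spectrum[~low_mask].sum() / total) if total > 0 else 0.0

# A sharp image typically scores higher than a blurred copy of the same scene.
```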
20090185741 | Apparatus and method for automatic airborne LiDAR data processing and mapping using data obtained thereby - Apparatus for processing of a LiDAR point cloud of a ground scan, comprises: a point cloud input for receiving said LiDAR point cloud, a ground filter for filtering out points that belong to the ground from said point cloud, thereby to generate an elevation map showing features extending from the ground, an automatic feature search and recognition unit associated with said three dimensional graphical engine for searching said elevation map of said three-dimensional model to identify features therein and to replace points associated with said feature with a virtual object representing said feature, thereby to provide objects within said data; and a three-dimensional graphical renderer supporting three-dimensional graphics, to generate a three-dimensional rendering of said ground scan. | 07-23-2009 |
20090190827 | Environment recognition system - An environment recognition system includes image taking means for taking a pair of images of an object in a surrounding environment with a pair of cameras and outputting the pair of images, stereo matching means for conducting stereo matching on a plurality of pairs of images that are taken by different image taking methods or that are formed by subjecting the pair of taken images to different image processing methods and forming distance images respectively for the pairs of images, selection means for dividing the distance images into a plurality of sections, calculating representative parallaxes respectively for the sections, and selecting any of the representative parallaxes of the corresponding section as a representative parallax of the section, and detection means for detecting the object in the image on the basis of the representative parallaxes of the sections. | 07-30-2009 |
20090196491 | Method for automated 3d imaging - A method for automated construction of 3D images is disclosed, in which a range measurement device is used to initiate and control the processing of 2D images in order to produce a 3D image. The range measurement device may be integrated with an image sensor, for example the range sensor from a digital camera, or may be a separate device. Data indicating the distance to a specific feature obtained from the range sensor may be used to control and automate the construction of the 3D image. | 08-06-2009 |
20090196492 | Method, medium, and system generating depth map of video image - A method, medium, and system generating a depth map of a video image are provided. The depth map generating method extracts the ground of a video image other than an object from the video image, classifies the video image as a long shot image or a non-long shot image based on a distribution value of the extracted ground, calculates a depth value gradually varied along a predetermined direction of the extracted ground when the video image corresponds to the long shot image and calculates a depth value based on the object when the video image corresponds to the non-long shot image. Accordingly, a sense of space and perspective can be effectively given to even a long shot image in which the ground occupies a large part of the image and a stereoscopic image recognizable by a viewer can be generated even if rapid object change is made between scenes in a video image. | 08-06-2009 |
20090208095 | SITE MODELING USING IMAGE DATA FUSION - Site modeling using image data fusion. Geometric shapes are generated to represent portions of one or more structures based on digital height data and a two-dimensional segmentation of portions of the one or more structures is generated based on three-dimensional line segments and digital height data. A labeled segmentation of the one or more structures is generated based on the geometric shapes and the two-dimensional segmentation. A three-dimensional model of the one or more structures is generated based on the labeled segmentation. | 08-20-2009 |
20090214105 | Identity Document and Method for the Manufacture Thereof - Identity document comprising a data medium with data. These data comprise an image of a face. This image consists of two component images that are observed at different angles. By simultaneously viewing the two images, the person studying the identity document can obtain further information about the face. This is possible because the two images are applied at a relatively small angle of 5° to 20°. | 08-27-2009 |
20090214106 | PHOTOGRAMMETRIC TARGET AND RELATED METHOD - A multi-target photogrammetric target assembly and related method of evaluating curvilinear surface character. The target assembly includes a first photogrammetric target disposed at a first support and a second photogrammetric target disposed at a second support. The first support and the second support are operatively connected such that the first target is in predefined lateral spaced relation to the second target. The method includes providing a structure having a curvilinear surface and affixing one or more multi-target photogrammetric target assemblies to the curvilinear surface. The position of the targets is measured by one or more imaging devices to define surface contour characteristics. | 08-27-2009 |
20090214107 | IMAGE PROCESSING APPARATUS, METHOD, AND PROGRAM - Corresponding points corresponding to each other between each of a plurality of images photographed from different positions are searched for. When a plurality of corresponding points is searched out in a second image of the plurality of images for one target pixel in a first image of the plurality of images, at least partial subject shape around the target pixel is calculated based on distance values of a plurality of pixels around the target pixel, then a target distance value, which is a distance value of the target pixel, is calculated with respect to each of the plurality of corresponding points based on the target pixel and each of the plurality of corresponding points, and a valid corresponding point is determined as the one of the plurality of corresponding points having the smallest difference from the subject shape. | 08-27-2009 |
20090220143 | Method for measuring a shape anomaly on an aircraft structural panel and system therefor - The disclosed embodiments concern a method for measuring a shape anomaly on an aircraft structural panel, including the following operations: projecting a target pattern at the site of the anomaly on the panel; producing at least two images of the projected pattern; processing the two images by stereocorrelation to obtain measurements of the anomaly. The disclosed embodiments also concern a system for implementing the method, including: a projection device for projecting a target pattern at the site of the anomaly on the panel; at least two imaging devices, each for producing an image of the target pattern; and means for processing the target pattern images. | 09-03-2009 |
20090220144 | STEREO PHOTOGRAMMETRY FROM A SINGLE STATION USING A SURVEYING INSTRUMENT WITH AN ECCENTRIC CAMERA - A method for determining, in relation to a surveying instrument, target coordinates of a point of interest, or target, identified in two images captured by a camera in the surveying instrument. The method comprises determining coordinates of the surveying instrument, capturing a first image using the camera in the first camera position; identifying, in the first image, an object point associated with the target; measuring first image coordinates of the object point in the first image; rotating the surveying instrument around the horizontal axis and the vertical axis in order to position the camera in a second camera position; capturing a second image using the camera in the second camera position; identifying, in the second image, the object point identified in the first image; measuring second image coordinates of the object point in the second image; and determining the coordinates of the target in relation to the surveying instrument. | 09-03-2009 |
20090220145 | Target and three-dimensional-shape measurement device using the same - A target set in a to-be-measured object and used for acquiring a reference value of point-cloud data, the target includes a small circle surrounded by a frame and having the center of the target, a large circle surrounded by the frame and disposed concentrically with the small circle so as to surround the small circle, a low-luminance reflective region located between the frame and the large circle and having the lowest reflectivity, a high-luminance reflective region located between the large circle and the small circle and having the highest reflectivity, and an intermediate-luminance reflective region located inside the small circle and having an intermediate reflectivity which is higher than the reflectivity of the low-luminance reflective region and which is lower than the reflectivity of the high-luminance reflective region. | 09-03-2009 |
20090226079 | IDENTIFICATION OF OBJECTS IN A 3D VIDEO USING NON/OVER REFLECTIVE CLOTHING - A method includes generating a depth map from at least one image, detecting objects in the depth map, and identifying anomalies in the objects from the depth map. Another method includes identifying at least one anomaly in an object in a depth map, and using the anomaly to identify future occurrences of the object. A system includes a three dimensional (3D) imaging system to generate a depth map from at least one image, an object detector to detect objects within the depth map, and an anomaly detector to detect anomalies in the detected objects, wherein the anomalies are logical gaps and/or logical protrusions in the depth map. | 09-10-2009 |
20090232387 | MULTI PARALLAX EXPLOITATION FOR OMNI-DIRECTIONAL IMAGING ELECTRONIC EYE - Techniques and systems are disclosed for electronic target recognition. In particular, techniques and systems are disclosed for performing electronic surveillance and target recognition using a multiple parallax exploitation (MPEX) electronic eye platform. Among other things, a MPEX system can include an imaging unit that includes multiple image capture devices spaced from one another to form an array to provide overlapping fields-of-view and to capture multiple overlapping stereo images of a scene. The MPEX system can also include a processing unit connected to the imaging unit to receive and process data representing the captured multiple overlapping stereo images from the imaging unit to characterize one or more objects of interest in the scene. | 09-17-2009 |
20090232388 | REGISTRATION OF 3D POINT CLOUD DATA BY CREATION OF FILTERED DENSITY IMAGES | 09-17-2009 |
20090232389 | IMAGE PROCESSING METHOD AND APPARATUS, IMAGE REPRODUCING METHOD AND APPARATUS, AND RECORDING MEDIUM - Provided are an image processing method and apparatus, and an image reproducing method and apparatus. The image processing method includes receiving three-dimensional (3D) image data; generating additional information about the 3D image data; and inserting the additional information in a blanking interval of the 3D image data. | 09-17-2009 |
20090245623 | Systems and Methods for Gemstone Identification and Analysis - Items of jewelry having gemstones embedded therein are imaged and analyzed to determine the weights associated with the gemstones and, separately, the precious metal in which the gemstones are encased, without having to remove the gemstones from the jewelry. | 10-01-2009 |
20090245624 | IMAGE MATCHING SYSTEM USING THREE-DIMENSIONAL OBJECT MODEL, IMAGE MATCHING METHOD, AND IMAGE MATCHING PROGRAM - Even when only a small number of reference images are available for each object, it is possible to search at high speed a reference image stored in a database from an input image of an object imaged with a different pose and a different illumination condition. A reference image matching result storage section ( | 10-01-2009 |
20090252404 | Model uncertainty visualization for active learning - An active learning system and method are disclosed for generating a visual representation of a set of unlabeled elements to be labeled according to class. The representation shows the unlabeled elements as data points in a space and each class as a class point in the space. The position of each of the data points in the space reflects the uncertainty of a model regarding the classification of the respective element. The color of each data point also reflects the uncertainty of the model regarding the classification of the element and may be a mixture of the colors used for the class points. | 10-08-2009 |
20090263007 | Stereoscopic image recording device and program - If horizontal viewpoint quantity information Nx and vertical viewpoint quantity information Ny are predetermined quantities, a value of the aspect ratio of the output image data to be finally output satisfies a predetermined condition, and a 3D identification mark is contained in the output image data, then an image recording device adds a first extension as general-purpose 3D image data which can also be used in a conventional device and records it. Accordingly, when 3D image data is output (displayed and printed) in a conventional device, image data which can be used as 3D image data (which can be viewed as a stereoscopic image) can be output as general-purpose image data while image data which cannot be used as 3D image data in the conventional device is not output as image data. This prevents confusion among general users. | 10-22-2009 |
20090263008 | Method For Recognizing Dice Dots - A method for recognizing dice dots comprises the steps of: projecting at least one dice with a plurality of different angle light sources; capturing a plurality of images of the dice according to the projecting times of the light sources on the dice; and recognizing dice dots based on the images through calculation methods. When the recognized results obtained through the calculation methods are judged to be the same by the recognizing module, the dice dots are confirmed and accepted. If the recognized results obtained through the calculation methods are different, the dice is rolled anew. | 10-22-2009 |
20090263009 | METHOD AND SYSTEM FOR REAL-TIME VISUAL ODOMETRY - A method for real-time visual odometry comprises capturing a first three-dimensional image of a location at a first time, capturing a second three-dimensional image of the location at a second time that is later than the first time, and extracting one or more features and their descriptors from each of the first and second three-dimensional images. One or more features from the first three-dimensional image are then matched with one or more features from the second three-dimensional image. The method further comprises determining changes in rotation and translation between the first and second three-dimensional images from the first time to the second time using a random sample consensus (RANSAC) process and a unique iterative refinement technique. | 10-22-2009 |
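For intuition, the rotation/translation step described in the entry above is commonly solved by fitting a rigid transform to matched 3D feature points inside a RANSAC loop; the sketch below uses the standard SVD-based (Kabsch) fit and does not reproduce the patent's specific iterative refinement. The names fit_rigid and ransac_rigid and the tol/iters parameters are illustrative.

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_rigid(P, Q, iters=200, tol=0.05, rng=np.random.default_rng(0)):
    """Robustly estimate (R, t) between matched 3D points P (time 1) and Q (time 2)."""
    best_inliers, best_mask = 0, None
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)   # minimal sample
        R, t = fit_rigid(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)   # reprojection error in 3D
        mask = err < tol
        if mask.sum() > best_inliers:
            best_inliers, best_mask = mask.sum(), mask
    return fit_rigid(P[best_mask], Q[best_mask])          # refit on all inliers
```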
20090274362 | Road Image Analyzing Apparatus and Road Image Analyzing Method - In a road image analyzing apparatus capable of obviously and rapidly distinguishing a road marking from a guardrail and capable of obtaining precise position information, a pre-processing unit defines sub-areas to main image data obtained by an image pickup unit, and an edge extracting unit extracts an edge component in each of the sub-areas. A linear line extracting unit analyzes the extracted edge component to extract a linear component, and a linear component analyzing unit extracts a continuous component from the linear component by using the linear component. A matching process unit performs a matching process between a vertex of the continuous component and auxiliary image data to obtain three-dimensional position information of each continuous component. An identifying unit identifies whether the continuous component is a road marking or a guardrail on the basis of height information of each continuous component included in the three-dimensional position information. | 11-05-2009 |
20090290786 | STEREOSCOPIC MEASUREMENT SYSTEM AND METHOD - A stereoscopic measurement system captures stereo images and determines measurement information for user-designated points within stereo images. The system comprises an image capture device for capturing stereo images of an object. A processing system communicates with the capture device to receive stereo images. The processing system displays the stereo images and allows a user to select one or more points within the stereo image. The processing system processes the designated points within the stereo images to determine measurement information for the designated points. | 11-26-2009 |
20090290787 | STEREOSCOPIC MEASUREMENT SYSTEM AND METHOD - A stereoscopic measurement system captures stereo images and determines measurement information for user-designated points within stereo images. The system comprises an image capture device for capturing stereo images of an object. A processing system communicates with the capture device to receive stereo images. The processing system displays the stereo images and allows a user to select one or more points within the stereo image. The processing system processes the designated points within the stereo images to determine measurement information for the designated points. | 11-26-2009 |
20090297020 | Method and system for determining poses of semi-specular objects - A camera acquires a set of coded images and a set of flash images of a semi-specular object. The coded images are acquired while scanning the object with a laser beam pattern, and the flash images are acquired while illuminating the object with a set of light sources at different locations near the camera, there being one flash image for each light source. 3D coordinates of points on the surface of the object are determined from the set of coded images, and 2D silhouettes of the object are determined from shadows cast in the set of flash images. Surface normals are obtained for the 3D points from photometric stereo on the set of flash images. The 3D coordinates, 2D silhouettes and surface normals are compared with a known 3D model of the object to determine the pose of the object. | 12-03-2009 |
20090304263 | Method for classifying an object using a stereo camera - A method is provided for classifying an object using a stereo camera, the stereo camera generating a first and a second image using a first and a second video sensor respectively. In order to classify the object, the first and the second image are compared with one another in predefined areas surrounding corresponding pixel coordinates, the pixel coordinates for at least one model, at least one position and at least one distance from the stereo camera being made available. | 12-10-2009 |
20090304264 | FREE VIEW GENERATION IN RAY-SPACE - The claimed subject matter relates to an architecture that can facilitate more efficient free view generation in Ray-Space by way of a Radon transform. The architecture can render virtual views based upon original image data by employing Ray-Space interpolation techniques. In particular, the architecture can apply the Radon transform to a feature epipolar plane image (FEPI) to extract more suitable slope or direction candidates. In addition, the architecture can facilitate improved block-based matching techniques in order to determine an optimal linear interpolation direction. | 12-10-2009 |
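The Radon-transform idea in the entry above amounts to finding the orientation along which lines in an epipolar-plane image (EPI) project most sharply. A hedged sketch follows, using rotation-and-sum as a discrete stand-in for the Radon transform and assuming SciPy is available; dominant_epi_angle and its angle range are illustrative choices.

```python
import numpy as np
from scipy.ndimage import rotate

def dominant_epi_angle(epi, angles=np.arange(-80, 81, 1.0)):
    """Return the angle (degrees) whose projection of the EPI is most peaked.

    Lines in an EPI become sharp peaks in the projection taken along their
    direction, so the angle maximizing projection variance approximates the
    dominant line slope (and hence a direction/disparity candidate).
    """
    epi = epi.astype(float)
    best_angle, best_score = 0.0, -np.inf
    for a in angles:
        proj = rotate(epi, a, reshape=False, order=1).sum(axis=0)
        score = proj.var()
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle
```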
20090304265 | SYSTEMS AND METHODS FOR MODELING THREE-DIMENSIONAL OBJECTS FROM TWO-DIMENSIONAL IMAGES - In one embodiment, a system and method for modeling a three-dimensional object includes capturing two-dimensional images of the object from multiple different viewpoints to obtain multiple views of the object, estimating slices of the object that lie in parallel planes that cut through the object, and computing a surface of the object from the estimated slices. | 12-10-2009 |
20090304266 | CORRESPONDING POINT SEARCHING METHOD AND THREE-DIMENSIONAL POSITION MEASURING METHOD - A plurality of images (I, J) of an object (M) when viewed from different viewpoints are taken in. One of the images is set as a standard image (I), and the other image is set as a reference image (J). One-dimensional pixel data strings with a predetermined width (W) are cut out from the standard image (I) and the reference image (J) along epipolar lines (EP | 12-10-2009 |
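As a generic illustration of matching one-dimensional pixel strings cut out along epipolar lines (a sketch of plain sum-of-absolute-differences matching, not the patented procedure), the following assumes rectified images so that epipolar lines coincide with image rows; match_along_epipolar, half_win and max_disp are illustrative names.

```python
import numpy as np

def match_along_epipolar(std_line, ref_line, x, half_win=5, max_disp=64):
    """Find the disparity whose reference window best matches the standard window.

    std_line: one row of the standard image, ref_line: the corresponding row
    of the reference image, x: column of the target pixel in the standard image.
    Compares 1D windows using the sum of absolute differences (SAD).
    """
    template = std_line[x - half_win: x + half_win + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        xr = x - d
        if xr - half_win < 0:
            break
        candidate = ref_line[xr - half_win: xr + half_win + 1].astype(float)
        cost = np.abs(template - candidate).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```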
20090310851 | 3D CONTENT AGGREGATION BUILT INTO DEVICES - The claimed subject matter provides a system and/or a method that facilitates capturing a portion 2-dimensional (2D) data for implementation within a 3-dimensional (3D) virtual environment. A device that can capture one or more 2D images, wherein the 2D image is representative of a corporeal object from a perspective dictated by an orientation of the device. The device can comprise a content aggregator that can construct a 3D image from two or more 2D images collected by the device, in which the construction is based at least in part upon aligning each corresponding perspective associated with each 2D image. | 12-17-2009 |
20090310852 | Method for Constructing Three-Dimensional Model and Apparatus Thereof - Disclosed are a method and an apparatus for constructing an accurate three-dimensional model. The apparatus includes a plurality of light sources, an image-capturing element and an image-processing unit. The present invention is used to integrate the two-dimensional images from different views of an object into a high accurate three-dimensional model. Compared with conventional apparatuses, the apparatus of the present invention is useful without safety problems, relatively easily manipulated, and capable of quick image reconstruction. | 12-17-2009 |
20090310853 | MEASUREMENTS USING A SINGLE IMAGE - A method used in broadcasts of events is disclosed for identifying the coordinates of an object in world space from a video frame, where the object is not on the geometric model of the environment. Once the world coordinates of the object are identified, a graphic may be added to a video replay showing the object. The method may also be expanded in a further embodiment to identify a trajectory of an object over time moving through world space from video images of the start and end of the trajectory, where the object is not on the geometric model of the environment. Once the trajectory of the object in world space is identified, a graphic may be added to a video replay showing the trajectory. | 12-17-2009 |
20090324058 | Use of geographic coordinates to identify objects in images - A method and device are disclosed. In one embodiment the method includes determining the location of a camera when the camera captures an image. The method continues by determining the viewable subject area of the image. Additionally, the method determines the location of one or more objects at the time the image is taken. Finally, upon making these determinations, the method concludes by identifying each of the one or more objects as being in the image when the location of each of the one or more objects is calculated to have been within the viewable subject area of the image at the time the image was taken. | 12-31-2009 |
20090324059 | METHOD FOR DETERMINING A DEPTH MAP FROM IMAGES, DEVICE FOR DETERMINING A DEPTH MAP - Window based matching is used to determine a depth map from images obtained from different orientations. A set of matching windows is used for points of the image for which the depth is to be determined. A provisional depth map is generated wherein to each point more than one candidate disparity value is attributed. The provisional depth map is filtered by a surface filtering wherein at least the z-component of a norm of a sum of unit vectors pointing from the candidate disparity values for neighboring points to a point of interest is used as a filter criterion. | 12-31-2009 |
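For orientation, keeping several disparity candidates per pixel before a later filtering stage can look like the sketch below; it is a plain block-matching illustration under assumed rectified images, and the surface-filtering step of the entry is not reproduced. disparity_candidates and its parameters are illustrative.

```python
import numpy as np

def disparity_candidates(left, right, y, x, half_win=4, max_disp=48, keep=3):
    """Return the `keep` lowest-cost disparity candidates for pixel (y, x)."""
    t = left[y - half_win: y + half_win + 1,
             x - half_win: x + half_win + 1].astype(float)
    costs = []
    for d in range(max_disp + 1):
        if x - d - half_win < 0:
            break
        c = right[y - half_win: y + half_win + 1,
                  x - d - half_win: x - d + half_win + 1].astype(float)
        costs.append((np.abs(t - c).sum(), d))   # SAD cost for this disparity
    costs.sort()
    return [d for _, d in costs[:keep]]
```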
20100002934 | Three-Dimensional Motion Capture - In one general aspect, a method is described. The method includes generating a positional relationship between one or more support structures having at least one motion capture mark and at least one virtual structure corresponding to geometry of an object to be tracked and positioning the support structures on the object to be tracked. The support structures have sufficient rigidity that, if there are multiple marks, the marks on each support structure maintain substantially fixed distances from each other in response to movement by the object. The method also includes determining an effective quantity of ray traces between one or more camera views and one or more marks on the support structures, and estimating an orientation of the virtual structure by aligning the determined effective quantity of ray traces with a known configuration of marks on the support structures. | 01-07-2010 |
20100008565 | Method of object location in airborne imagery using recursive quad space image processing - A method and computer workstation are disclosed which determine the location in the ground space of selected point in a digital image of the earth obtained by an airborne camera. The image is rectangular and has four corners and corresponds to an image space. The image is associated with data indicating the geo-location coordinates for the points in the ground space corresponding to the four corners of the image, e.g., an image formatted in accordance with the NITF standard. The method includes the steps of: (a) performing independently and in parallel a recursive partitioning of the image space and the ground space into successively smaller quadrants until a pixel coordinate in the image assigned to the selected point is within a predetermined limit (Δ) of the center of a final recursively partitioned quadrant in the image space. The method further includes a step of (b) calculating a geo-location of the point in the ground space corresponding to the selected point in the image space from the final recursively partitioned quadrant in the ground space corresponding to the final recursively partitioned quadrant in the image space. The methods are particularly useful for geo-location from oblique reconnaissance imagery. | 01-14-2010 |
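The recursive quad-space idea in the entry above can be pictured as subdividing the image rectangle and the ground quadrilateral in lockstep until the selected pixel is pinned down to within a small limit; the sketch below assumes simple midpoint (bilinear) subdivision of the ground quadrilateral and uses illustrative names (quad_locate, subquads, tol).

```python
def midpoint(a, b):
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def center(quad):
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    return ((x0 + x1 + x2 + x3) / 4.0, (y0 + y1 + y2 + y3) / 4.0)

def subquads(quad):
    """Split a quad given as (UL, UR, LR, LL) into four sub-quads: UL, UR, LR, LL."""
    ul, ur, lr, ll = quad
    t, r = midpoint(ul, ur), midpoint(ur, lr)
    b, l = midpoint(lr, ll), midpoint(ll, ul)
    c = center(quad)
    return [(ul, t, c, l), (t, ur, r, c), (c, r, lr, b), (l, c, b, ll)]

def quad_locate(img_quad, ground_quad, px, tol=0.5):
    """Map pixel px=(col, row) to ground coordinates by lock-step quadrant recursion.

    img_quad holds the image corners in pixels, ground_quad the matching
    geo-coordinates, both ordered UL, UR, LR, LL. Recursion stops when the
    pixel lies within tol of the current image quadrant's center.
    """
    cx, cy = center(img_quad)
    if abs(px[0] - cx) <= tol and abs(px[1] - cy) <= tol:
        return center(ground_quad)
    q = (0 if px[1] <= cy else 3) if px[0] <= cx else (1 if px[1] <= cy else 2)
    return quad_locate(subquads(img_quad)[q], subquads(ground_quad)[q], px, tol)
```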
20100008566 | 3D model reconstruction acquisition based on images of incremental or decremental liquid level - A 3D model reconstruction acquisition method includes the steps of preparing a transparent container and at least one image capture device, wherein an object is placed in the transparent container and a liquid is received in the transparent container; keeping the liquid level rising or lowering to allow the liquid level to pass across a surface of the object while continuing to capture a series of images; computing a liquid-level equation for each of the images by using curves of the images between the object and the incremental or decremental liquid level confined by the transparent container; computing 3D coordinates of the curves in accordance with the liquid-level equation of each image; and collecting 3D coordinates of all of the curves to create a 3D model of the object. In addition, the acquisition can be performed in an environment containing water and can thus be applied to various environments. | 01-14-2010 |
20100014750 | POSITION MEASURING SYSTEM, POSITION MEASURING METHOD AND COMPUTER READABLE MEDIUM - A position measuring system includes: an image capturing unit that captures reference points provided on an object, the reference points composed of at least four first reference points provided respectively at vertices of a polygon or at vertices and a barycenter of a polygon and at least one second reference point provided so as to have a specific positional relationship with respect to the first reference points; an identification unit that identifies images of the first reference points and the second reference point captured by the image capturing unit, on the basis of positional relationships between the images of the first reference points and the second reference point; and a calculation unit that calculates a three-dimensional position and three-axial angles of the object on the basis of positional relationships of the images of the first reference points identified by the identification unit. | 01-21-2010 |
20100021052 | System and method for generating a terrain model for autonomous navigation in vegetation - The disclosed terrain model is a generative, probabilistic approach to modeling terrain that exploits the 3D spatial structure inherent in outdoor domains and an array of noisy but abundant sensor data to simultaneously estimate ground height, vegetation height and classify obstacles and other areas of interest, even in dense non-penetrable vegetation. Joint inference of ground height, class height and class identity over the whole model results in more accurate estimation of each quantity. Vertical spatial constraints are imposed on voxels within a column via a hidden semi-Markov model. Horizontal spatial constraints are enforced on neighboring columns of voxels via two interacting Markov random fields and a latent variable. Because of the rules governing abstracts, this abstract should not be used to construe the claims. | 01-28-2010 |
20100027874 | Stereo image matching method and system using image multiple lines - Disclosed is a stereo image matching method for re-creating 3-dimensional spatial information from a pair of 2-dimensional images. The conventional stereo image matching method generates much noise from a disparity value in the vertical direction, but the present invention uses disparity information of adjacent image lines as a constraint condition to eliminate the noise in the vertical direction, and compresses the disparity by using a differential coding method to thereby increase a compression rate. | 02-04-2010 |
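Differential coding of a disparity line, as mentioned in the entry above, can be as simple as storing the first value plus successive differences, which stay small when neighbouring disparities agree; delta_encode and delta_decode below are an illustrative sketch rather than the patented coder.

```python
import numpy as np

def delta_encode(disparity_line):
    """Return (first value, successive differences) for one line of disparities."""
    d = np.asarray(disparity_line, dtype=int)
    return int(d[0]), np.diff(d)

def delta_decode(first, diffs):
    """Rebuild the disparity line from the first value and the differences."""
    return np.concatenate(([first], first + np.cumsum(diffs)))

line = [12, 12, 13, 13, 13, 14, 14]
first, diffs = delta_encode(line)        # diffs are mostly zeros -> compressible
assert list(delta_decode(first, diffs)) == line
```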
20100034457 | MODELING OF HUMANOID FORMS FROM DEPTH MAPS - A computer-implemented method includes receiving a depth map ( | 02-11-2010 |
20100040280 | Enhanced ghost compensation for stereoscopic imagery - A method and apparatus for reduction of ghost images in stereoscopic images. This disclosure provides a ghost compensation apparatus and methods that detect affected regions where ghosting may occur in a stereoscopic image, yet where conventional ghost compensation techniques are ineffective because there is insufficient luminance overhead to conduct a conventional ghost compensation process. Luminance values are modified in such regions prior to applying a ghost compensation process. | 02-18-2010 |
20100054578 | Method and apparatus for interactive visualization and distribution of very large image data sets - The present invention discloses a system for real-time visualization and distribution of very large image data sets using on-demand loading and dynamic view prediction. A robust image representation scheme is used for efficient adaptive rendering and a perspective view generation module is used to extend the applicability of the system to panoramic images. The effectiveness of the system is demonstrated by applying it both to imagery that does not require perspective correction and to very large panoramic data sets requiring perspective view generation. The system permits smooth, real-time interactive navigation of very large panoramic and non-panoramic image data sets on average personal computers without the use of specialized hardware. | 03-04-2010 |
20100054579 | THREE-DIMENSIONAL SURFACE GENERATION METHOD - The present invention provides a three-dimensional surface generation method that directly and efficiently generates a three-dimensional surface of the object surface from multiple images capturing a target object. | 03-04-2010 |
20100054580 | IMAGE GENERATION DEVICE, IMAGE GENERATION METHOD, AND IMAGE GENERATION PROGRAM - The image generation device includes distance calculation means for calculating a distance between a space model and an imaging device arrangement object model which is a model such as a vehicle having a camera mounted thereon, according to viewpoint conversion image data generated by viewpoint conversion means, captured image data representing captured image, a space model, or mapped space data. When displaying an image viewed from an arbitrary virtual viewpoint in the 3D space, the image display format is changed according to the distance calculated by the distance calculation means. When displaying a monitoring object such as a vicinity of a vehicle, a shop, a house or a city as an image viewed from an arbitrary virtual viewpoint in the 3D space, it is possible to display the monitoring object in such a manner that the relationship between the vehicle and the image of the monitoring object can be understood intuitively. | 03-04-2010 |
20100061622 | METHOD FOR ALIGNING OBJECTS - A computer-implemented method for aligning objects receives a reference object and a to-be-moved object and determines feature elements of the reference object. A first coordinate system is constructed according to a plurality of feature elements of the reference object. A second coordinate system is constructed according to a plurality of feature elements of the to-be-moved object. A third coordinate system is constructed according to the first coordinate system and the second coordinate system. An operation matrix is computed according to the three coordinate systems. The two objects are aligned using the operation matrix. | 03-11-2010 |
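Conceptually, the operation matrix can be a rigid transform taking the to-be-moved object's coordinate system onto the reference one; the sketch below builds each coordinate system from three non-collinear feature points and composes the transform. This is an illustrative construction (frame_from_points, alignment_matrix), not necessarily the patented computation.

```python
import numpy as np

def frame_from_points(p0, p1, p2):
    """Orthonormal frame (origin p0, axes derived from p1 and p2) as a 4x4 matrix."""
    x = p1 - p0
    x = x / np.linalg.norm(x)
    z = np.cross(x, p2 - p0)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p0
    return T

def alignment_matrix(ref_pts, mov_pts):
    """4x4 operation matrix mapping the to-be-moved frame onto the reference frame."""
    T_ref = frame_from_points(*[np.asarray(p, float) for p in ref_pts])
    T_mov = frame_from_points(*[np.asarray(p, float) for p in mov_pts])
    return T_ref @ np.linalg.inv(T_mov)   # apply to homogeneous points of the moved object
```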
20100061623 | Position measuring apparatus - A position measuring apparatus including a first irradiating part that irradiates a first beam to an object, a second irradiating part that irradiates a second beam to the object, a capturing part that captures images of the object, a processing part that generates a first difference image and a second difference image by processing the images captured by the capturing part, an extracting part that extracts a contour and a feature point of the object from the first difference image, a calculating part that calculates three-dimensional coordinates of a reflection point located on the object based on the second difference image, and a determining part that determines a position of the object by matching the contour, the feature point, and the three-dimensional coordinates with respect to predetermined modeled data of the object. | 03-11-2010 |
20100080447 | Methods and Apparatus for Dot Marker Matching - A method for a computer system includes receiving a first camera image of a 3D object having sensor markers, captured from a first location, at a first instance, receiving a second camera image of the 3D object from a second location, at a different instance, determining points from the first camera image representing sensor markers of the 3D object, determining points from the second camera image representing sensor markers of the 3D object, determining approximate correspondence between points from the first camera image and points from the second camera image, determining approximate 3D locations of some sensor markers of the 3D object, and rendering an image including the 3D object in response to the approximate 3D locations. | 04-01-2010 |
20100080448 | METHOD AND GRAPHICAL USER INTERFACE FOR MODIFYING DEPTH MAPS - The invention relates to a method and a graphical user interface for modifying a depth map for a digital monoscopic color image. The method includes interactively selecting a region of the depth map based on color of a target region in the color image, and modifying depth values in the thereby selected region of the depth map using a depth modification rule. The color-based pixel selection rules for the depth map and the depth modification rule selected based on one color image from a video sequence may be saved and applied to automatically modify depths maps of other color images from the same sequence. | 04-01-2010 |
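A color-keyed selection followed by a depth modification rule, as described in the entry above, can be pictured as below; this is a hedged NumPy sketch with assumed RGB/depth array shapes and a simple additive rule (modify_depth_by_color, tol, delta are illustrative), and it does not model the interactive GUI workflow of the entry.

```python
import numpy as np

def modify_depth_by_color(rgb, depth, target_rgb, tol=20, delta=0.1):
    """Shift depth values wherever the image color is close to target_rgb.

    rgb: HxWx3 uint8 color image, depth: HxW float depth map (same frame).
    tol: per-channel color tolerance, delta: amount added to selected depths.
    Returns the modified depth map and the selection mask, so the same mask
    (or rule) could be reused on other frames of a sequence.
    """
    target = np.asarray(target_rgb, dtype=int)
    mask = np.all(np.abs(rgb.astype(int) - target) <= tol, axis=-1)
    out = depth.copy()
    out[mask] += delta
    return out, mask
```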
20100086199 | METHOD AND APPARATUS FOR GENERATING STEREOSCOPIC IMAGE FROM TWO-DIMENSIONAL IMAGE BY USING MESH MAP - Provided are a method and apparatus for generating a stereoscopic image from a two-dimensional (2D) image by using a mesh map and a computer readable recording medium having recorded thereon a computer program for executing the method. Also provided are a method and apparatus for generating a stereoscopic image by reading a 2D image, displaying the 2D image and a mesh map by overlapping the 2D image and the mesh map, and editing mesh shapes and depth information (depth values) of the mesh map by a user, and a computer readable recording medium having recorded thereon a computer program for executing the method. The method of generating a stereoscopic image includes receiving a 2D image; displaying the 2D image and a mesh map by overlapping the 2D image and the mesh map; editing mesh shapes and depth information (depth values) of the mesh map by a user in accordance with shapes of a displayed image; calculating relative depth information of pixels included in the 2D image in accordance with the mesh shapes and the depth information of the edited mesh map; and generating a stereoscopic image file by using the calculated relative depth information of the 2D image. The present invention may be used in a system for generating a stereoscopic image from a 2D image including a general still image or moving picture. | 04-08-2010 |
20100086200 | SYSTEMS AND METHODS FOR MULTI-PERSPECTIVE SCENE ANALYSIS - Systems and methods for using visual attention modeling techniques to evaluate a scene from multiple perspectives. | 04-08-2010 |
20100092071 | SYSTEM AND METHODS FOR NAVIGATION USING CORRESPONDING LINE FEATURES - A method for navigating identifies line features in a first three-dimensional (3-D) image and a second 3-D image as a navigation platform traverses an area and compares the line features in the first 3-D image that correspond to the line features in the second 3-D image. When the line features compared in the first and the second 3-D images are within a prescribed tolerance threshold, the method uses a conditional set of geometrical criteria to determine whether the line features in the first 3-D image match the corresponding line features in the second 3-D image. | 04-15-2010 |
20100092072 | Automated generation of 3D models from 2D computer-aided design (CAD) drawings - The process and method for generating a 3D model from a set of 2D drawings is described herein. Traditionally, many structural components (objects) are communicated through a series of 2D drawings, wherein each drawing describes the components that are visible in a user-selected view direction. No machine-readable information in the drawings define a relationship between the drawings developed from various view directions or the objects' locations in 3D space. Considerable human effort and intervention is required to place objects defined in the 2D drawings into 3D space. With the ability to provide information in each drawing defining a relationship with the other drawings as well as its place in 3D space, the objects defined in 2D drawings can self-assemble in 3D space, thereby reducing a substantial amount of required human effort. | 04-15-2010 |
20100098323 | Method and Apparatus for Determining 3D Shapes of Objects - An apparatus and method determine a 3D shape of an object in a scene. The object is illuminated to cast multiple silhouettes on a diffusing screen coplanar and in close proximity to a mask. A single image acquired of the diffusing screen is partitioned into subviews according to the silhouettes. A visual hull of the object is then constructed according to isosurfaces of the binary images to approximate the 3D shape of the object. | 04-22-2010 |
20100098324 | RECOGNITION PROCESSING METHOD AND IMAGE PROCESSING DEVICE USING THE SAME - A recognition processing method and an image processing device end recognition of an object within a predetermined time while maintaining the recognition accuracy. The device extracts combinations of three points defining a triangle whose side lengths satisfy predetermined criterion values from feature points of the model of a recognition object, registers the extracted combinations as model triangles, and similarly extracts combinations of three points defining a triangle whose side lengths satisfy predetermined criterion values from feature points of the recognition object. The combinations are used as comparison object triangles and associated with the respective model triangles. The device calculates a transformation parameter representing the correspondence relation between each comparison object triangle and the corresponding model triangle using the coordinates of the corresponding points (A and A′, B and B′, and C and C′), and determines the goodness of fit of the transformation parameters on the relation between the feature points of the model and those of the recognition object. The object is recognized by specifying the transformation parameters representing the correspondence relation between the feature points of the model and those of the recognition object according to the goodness of fit determined for each association. | 04-22-2010 |
20100098325 | System for optically detecting position and/or orientation of objects comprising at least two coplanar sensors - The electro-optical system for determining position and orientation of a mobile part comprises a fixed projector having a centre of projection (O) and a mobile part. The projector is rigidly linked with a virtual image plane, and the mobile part is rigidly linked with two linear sensors defining a first and a second direction vector. The fixed part projects onto the image plane and onto the sensors patterns (not shown) that form at least two secant networks, each of at least three parallel segments. | 04-22-2010 |
20100098326 | EMBEDDING AND DECODING THREE-DIMENSIONAL WATERMARKS INTO STEREOSCOPIC IMAGES - The disclosed inventions relate to methods and systems for encoding at least one watermark into a stereoscopic conjugate pair of images. An example method comprises the step of encoding the at least one watermark by shifting selected pixels of said pair of images in one or more directions. The one or more directions include a horizontal direction. In the disclosed embodiments, ancillary information is not required to support decoding of encoded watermarks in addition to the transmitted left and right images. | 04-22-2010 |
20100098327 | 3D Imaging system - The present invention provides a system (method and apparatus) for creating photorealistic 3D models of environments and/or objects from a plurality of stereo images obtained from a mobile stereo camera and optional monocular cameras. The cameras may be handheld, mounted on a mobile platform, manipulator or a positioning device. The system automatically detects and tracks features in image sequences and self-references the stereo camera in 6 degrees of freedom by matching the features to a database to track the camera motion, while building the database simultaneously. A motion estimate may be also provided from external sensors and fused with the motion computed from the images. Individual stereo pairs are processed to compute dense 3D data representing the scene and are transformed, using the estimated camera motion, into a common reference and fused together. The resulting 3D data is represented as point clouds, surfaces, or volumes. The present invention also provides a system (method and apparatus) for enhancing 3D models of environments or objects by registering information from additional sensors to improve model fidelity or to augment it with supplementary information by using a light pattern projector. The present invention also provides a system (method and apparatus) for generating photo-realistic 3D models of underground environments such as tunnels, mines, voids and caves, including automatic registration of the 3D models with pre-existing underground maps. | 04-22-2010 |
20100098328 | 3D imaging system - The present invention provides a system (method and apparatus) for creating photorealistic 3D models of environments and/or objects from a plurality of stereo images obtained from a mobile stereo camera and optional monocular cameras. The cameras may be handheld, mounted on a mobile platform, manipulator or a positioning device. The system automatically detects and tracks features in image sequences and self-references the stereo camera in 6 degrees of freedom by matching the features to a database to track the camera motion, while building the database simultaneously. A motion estimate may be also provided from external sensors and fused with the motion computed from the images. Individual stereo pairs are processed to compute dense 3D data representing the scene and are transformed, using the estimated camera motion, into a common reference and fused together. The resulting 3D data is represented as point clouds, surfaces, or volumes. The present invention also provides a system (method and apparatus) for enhancing 3D models of environments or objects by registering information from additional sensors to improve model fidelity or to augment it with supplementary information by using a light pattern projector. The present invention also provides a system (method and apparatus) for generating photo-realistic 3D models of underground environments such as tunnels, mines, voids and caves, including automatic registration of the 3D models with pre-existing underground maps. | 04-22-2010 |
20100104174 | Markup Language for Interactive Geographic Information System - Data-driven guarded evaluation of conditional-data associated with data objects is used to control activation and processing of the data objects in an interactive geographic information system. Methods of evaluating conditional-data to control activation of the data objects are disclosed herein. Data structures to specify conditional data are also disclosed herein. | 04-29-2010 |
20100104175 | Integrated image processor - A system is disclosed. An input interface is configured to receive pixel data from two or more images. A pixel handling processor disposed on the substrate is configured to convert the pixel data into depth and intensity pixel data. In some embodiments, a foreground detector processor disposed on the substrate is configured to classify pixels as background or not background. In some embodiments, a projection generator disposed on the substrate is configured to generate a projection in space of the depth and intensity pixel data. | 04-29-2010 |
20100128971 | Image processing apparatus, image processing method and computer-readable recording medium - A pair of images subjected to image processing is divided. Next, based on mutually-corresponding divided images, mutually-corresponding matching images are respectively set. When a corresponding point of a characteristic point in one matching image is not extracted from the other matching image, adjoining divided images are joined together, and based on the joined divided image, a new matching image is set. | 05-27-2010 |
20100128972 | Stereo matching processing system, stereo matching processing method and recording medium - To correctly associate coinciding positions between a plurality of images. | 05-27-2010 |
20100128973 | Stereo image processing apparatus, stereo image processing method and computer-readable recording medium - A stereo image processing apparatus | 05-27-2010 |
20100128974 | Stereo matching processing apparatus, stereo matching processing method and computer-readable recording medium - To improve stereo matching speed and accuracy. An image data input unit | 05-27-2010 |
20100135573 | 3-D Optical Microscope - A 3-D optical microscope, a method of turning a conventional optical microscope into a 3-D optical microscope, and a method of creating a 3-D image on an optical microscope are described. The 3-D optical microscope includes a processor, at least one objective lens, an optical sensor capable of acquiring an image of a sample, a mechanism for adjusting focus position of the sample relative to the objective lens, and a mechanism for illuminating the sample and for projecting a pattern onto and removing the pattern from the focal plane of the objective lens. The 3-D image creation method includes taking two sets of images, one with and another without the presence of the projected pattern, and using a software algorithm to analyze the two image sets to generate a 3-D image of the sample. The 3-D image creation method enables reliable and accurate 3-D imaging on almost any sample regardless of its image contrast. | 06-03-2010 |
20100142801 | Stereo Movie Editing - The stereo movie editing technique described herein combines knowledge of both multi-view stereo algorithms and human depth perception. The technique creates a digital editor, specifically for stereographic cinema. The technique employs an interface that allows intuitive manipulation of the different parameters in a stereo movie setup, such as camera locations and screen position. Using the technique it is possible to reduce or enhance well-known stereo movie effects such as cardboarding and miniaturization. The technique also provides new editing techniques such as directing the user's attention and easier transitions between scenes. | 06-10-2010 |
20100142802 | APPARATUS FOR CALCULATING 3D SPATIAL COORDINATES OF DIGITAL IMAGES AND METHOD THEREOF - Provided is a digital photographing apparatus including: an image acquiring unit that acquires images by photographing a subject; a sensor information acquiring unit that acquires positional information, directional information, and posture information of the digital photographing apparatus at the time of photographing a subject; a device information acquiring unit that acquires device information of the digital photographing apparatus at the time of photographing a subject; and a spatial coordinates calculator that calculates 3D spatial coordinates for photographed images using the acquired positional information, directional information, posture information, and device information. | 06-10-2010 |
20100150431 | Method of Change Detection for Building Models - Lidar point clouds and multi-spectral aerial images are integrated for change detection of building models. This reduces errors caused by ground areas and vegetation areas. Multiple change types are detected with low cost, high accuracy and high efficiency. | 06-17-2010 |
20100158351 | COMBINED EXCHANGE OF IMAGE AND RELATED DATA - A method of combined exchange of image data and further data being related to the image data, the image data being represented by a first two-dimensional matrix of image data elements and the further data being represented by a second two-dimensional matrix of further data elements is disclosed. The method comprises combining the first two-dimensional matrix and the second two-dimensional matrix into a combined two-dimensional matrix of data elements. | 06-24-2010 |
20100158352 | APPARATUS AND METHOD FOR REAL-TIME CAMERA TRACKING - A camera tracking apparatus for calculating in real time feature information and camera motion information based on an input image includes a global camera tracking unit for computing a global feature map having feature information on entire feature points; a local camera tracking unit for computing in real time a local feature map having feature information on a part of the entire feature points; a global feature map update unit for receiving the computed feature information from the global and local camera tracking units to update the global feature map; and a local feature selection unit for receiving the updated feature information from the global feature map update unit to select in real time the feature points contained in the local feature map. The local camera tracking unit computes the local feature map for each frame, while the global camera tracking unit computes the global feature map over frames. | 06-24-2010 |
20100158353 | METHOD FOR RESTORATION OF BUILDING STRUCTURE USING INFINITY HOMOGRAPHIES CALCULATED BASED ON PARALLELOGRAMS - A method for restoration of building structure using infinity homographies calculated based on parallelograms includes: calculating, using two or more parallelograms, an infinity homography between those cameras which refer to an arbitrary camera; restoring cameras and the building structure on an affine space using the computed infinity homography and homologous points between images; and transforming the restored result onto the metric space using constraints on orthogonality of vectors joining the restored three-dimensional points, the ratio of lengths of the vectors and intrinsic camera parameters. As a result, intrinsic camera parameters, camera positions on the metric space and the structure of the building are restored. All the restoration is possible even when intrinsic camera parameters corresponding to all the images are not constant. | 06-24-2010 |
20100158354 | METHOD OF CREATING ANIMATABLE DIGITAL CLONE FROM MULTI-VIEW IMAGES - The present invention relates to a method of creating an animatable digital clone that includes receiving input multi-view images of an actor captured by at least two cameras and reconstructing a three-dimensional appearance therefrom, accepting shape information selectively based on a probability of photo-consistency in the input multi-view images obtained from the reconstruction and transferring a mesh topology of a reference human body model onto a shape of the actor obtained from the reconstruction. The method further includes generating an initial human body model of the actor via transfer of the mesh topology utilizing sectional shape information of the actor's joints, and generating a genuine human body model of the actor from learning genuine behavioral characteristics of the actor by applying the initial human body model to multi-view posture learning images where performance of a predefined motion by the actor is recorded. | 06-24-2010 |
20100158355 | Fast Object Detection For Augmented Reality Systems - A detection method is based on a statistical analysis of the appearance of model patches from all possible viewpoints in the scene, and incorporates 3D geometry during both matching and pose estimation processes. By analyzing the computed probability distribution of the visibility of each patch from different viewpoints, a reliability measure for each patch is estimated. That reliability measure is useful for developing industrial augmented reality applications. Using the method, the pose of complex objects can be estimated efficiently given a single test image. | 06-24-2010 |
20100166293 | IMAGE FORMING METHOD AND OPTICAL COHERENCE TOMOGRAPH APPARATUS USING OPTICAL COHERENCE TOMOGRAPHY - An image forming method uses optical coherence tomography to relate plural pieces of image information of an object along an optical axis direction. First image information of the object is obtained at a first focus with respect to the optical axis direction to the object. A focusing position is changed by dynamic focusing from the first focus to a second focus along the optical axis. Second image information of the object is obtained at the second focus. Third image information, which is tomography image information of the object including a tomography image at the first focus or the second focus, is obtained by Fourier domain optical coherence tomography. A tomography image or a three-dimensional image of the object is formed in positional relation, in the optical axis direction, between the first image information and the second image information using the third image information. | 07-01-2010 |
20100166294 | SYSTEM AND METHOD FOR THREE-DIMENSIONAL ALIGNMENT OF OBJECTS USING MACHINE VISION - This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. After calibration, a 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. A stereo matching process is then performed on at least two (a pair) of the rectified preprocessed images at a time by locating a predetermined feature on a first image and then locating the same feature in the other image. 3D points are computed for each pair of cameras to derive a 3D point cloud. The 3D point cloud is generated by transforming the 3D points of each camera pair into the world 3D space from the world calibration. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found HLGS from runtime are corresponded to HLGS on the model to produce candidate 3D poses. A coarse scoring process prunes the number of poses. The remaining candidate poses are then subjected to a further more-refined scoring process. These surviving candidate poses are then verified by, for example, fitting found 3D or 2D points of the candidate poses to a larger set of corresponding three-dimensional or two-dimensional model points, whereby the closest match is the best refined three-dimensional pose. | 07-01-2010 |
20100166295 | METHOD AND SYSTEM FOR SEARCHING FOR GLOBAL MINIMUM - A method and a system for searching for a global minimum are provided. First, a subclass of a plurality of space points in a multidimensional space is clustered into a plurality of clusters through a clustering algorithm, wherein each of the space points corresponds to an error value in an evaluation function. Then, ellipsoids for enclosing the clusters in the multidimensional space are respectively calculated. Next, a designated space corresponding to each of the ellipsoids is respectively inputted into a recursive search algorithm to search for a local minimum among the error values corresponding to the space points within each designated space. Finally, the local minimums of all the clusters are compared to obtain the space point corresponding to the smallest local minimum. | 07-01-2010 |
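A rough sketch of the cluster-then-local-search idea follows, assuming k-means clustering and a Nelder-Mead local search started from each cluster centroid; the patent's ellipsoid bounding of each cluster's designated space is omitted here, so this is only an illustration of the overall flow.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.optimize import minimize

def clustered_global_search(f, samples, n_clusters=4):
    """Approximate a global minimum of f by clustering candidate points,
    running a local minimization from each cluster centroid, and keeping the best.

    f: callable taking a 1-D point; samples: (N, D) array of candidate points.
    """
    centroids, _labels = kmeans2(samples.astype(float), n_clusters, minit='++')
    best = None
    for c in centroids:
        res = minimize(f, c, method='Nelder-Mead')  # local search started in the cluster
        if best is None or res.fun < best.fun:
            best = res
    return best.x, best.fun

# Example: a simple multi-modal evaluation function over 2-D space points
rng = np.random.default_rng(0)
f = lambda p: np.sin(3 * p[0]) + (p[0] - 1.5) ** 2 + np.cos(2 * p[1]) + 0.5 * p[1] ** 2
points = rng.uniform(-3, 3, size=(200, 2))
x, val = clustered_global_search(f, points)
print(np.round(x, 3), round(val, 3))
```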
20100166296 | METHOD AND PROGRAM FOR EXTRACTING SILHOUETTE IMAGE AND METHOD AND PROGRAM FOR CONSTRUCTING THREE DIMENSIONAL MODEL - The present invention provides a method and a program for extracting a high accuracy silhouette by a relatively simple process that does not rely on manual labor or a special photography environment. The method for extracting the high accuracy silhouette comprises: extracting a number of first silhouettes from a number of object images and a number of background images by background subtraction; constructing a first visual hull from the first silhouettes by a shape from silhouette method; constructing a second visual hull by a process that repairs missing parts and/or removes unwanted regions in the first visual hull; and extracting a number of second silhouettes from the second visual hull. | 07-01-2010 |
20100172572 | Focus-Based Edge Detection - A model generator computes a first image perimeter color difference value for each of a plurality of first pixels included in a first image that is captured using a first focal length, and selects one of the first image perimeter color difference values that exceeds a perimeter color difference threshold. Next, the model generator computes a second image perimeter color difference value for each of a plurality of second pixels included in a second image that is captured using a second focal length, and selects one of the second image perimeter color difference values that exceeds the perimeter color difference threshold. The model generator then determines that an edge is located at the first focal length by detecting that the selected first image perimeter color difference value is greater than the selected second image perimeter color difference value, and generates an image accordingly. | 07-08-2010 |
20100177955 | BIDIRECTIONAL SIMILARITY OF SIGNALS - A method for measuring bi-directional similarity between a first signal of a first size and a second signal of a second size includes matching at least some patches of the first signal with patches of the second signal for data completeness, matching at least some patches of the second signal with patches of the first signal for data coherence, calculating the bi-directional similarity measure as a function of the matched patches for coherence and the matched patches for completeness and indicating the similarity between the first signal and the second signal. Another method generates a second signal from a first signal where the second signal is different than the first signal by at least one parameter. The method includes attempting to maximize a bi-directional similarity measure between the second signal and the first signal. | 07-15-2010 |
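The bidirectional similarity measure in the preceding entry can be pictured as the sum of a completeness term (every patch of the first signal has a close patch in the second) and a coherence term (every patch of the second has a close patch in the first). The brute-force version below is illustrative only, quadratic in the number of patches, and uses arbitrary patch size and distance choices.

```python
import numpy as np

def patches(img, size):
    """All overlapping square patches of a grayscale image, flattened to rows."""
    H, W = img.shape[:2]
    ps = [img[y:y + size, x:x + size].ravel()
          for y in range(H - size + 1) for x in range(W - size + 1)]
    return np.array(ps, dtype=float)

def bidirectional_similarity(src, dst, size=3):
    """Completeness (src -> dst) plus coherence (dst -> src) over nearest patches."""
    P, Q = patches(src, size), patches(dst, size)
    d = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    completeness = d.min(axis=1).mean()                  # each src patch to its nearest dst patch
    coherence = d.min(axis=0).mean()                     # each dst patch to its nearest src patch
    return completeness + coherence

# Example: a signal compared with a cropped version of itself
rng = np.random.default_rng(1)
a = rng.random((12, 12))
print(round(bidirectional_similarity(a, a[2:10, 2:10]), 4))
```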
20100189341 | INTRA-ORAL MEASUREMENT DEVICE AND INTRA-ORAL MEASUREMENT SYSTEM - The present invention aims to provide an intra-oral measurement device and an intra-oral measurement system capable of measuring the inside of an oral cavity with high accuracy without increasing the size of the device, and includes a light projecting unit for irradiating a measuring object including at least a tooth within an oral cavity with light, a lens system unit for collecting light reflected by the measuring object, a focal position varying mechanism for changing a focal position of the light collected by the lens system unit, and an imaging unit for imaging light passed through the lens system unit. | 07-29-2010 |
20100189342 | SYSTEM, METHOD, AND APPARATUS FOR GENERATING A THREE-DIMENSIONAL REPRESENTATION FROM ONE OR MORE TWO-DIMENSIONAL IMAGES - In a system and method for generating a 3-dimensional representation of a portion of an organism, training data is collected, wherein the training data includes a first set of training data and a second set of training data. At least one statistical model having a set of parameters is built using the training data. The at least one statistical model is compared to a 2-dimensional image of the portion of the organism. At least one parameter of the set of parameters of the statistical model is modified based on the comparison of the at least one statistical model to the 2-dimensional image of the portion of the organism. The modified set of parameters representing the portion of the organism is passed through the statistical model. | 07-29-2010 |
20100189343 | METHOD AND APPARATUS FOR STORING 3D INFORMATION WITH RASTER IMAGERY - The present invention meets the above-stated needs by providing a method and apparatus that allows X parallax information to be stored within the image's pixel information. Consequently, only one image need be stored, whether it is a mosaic of a number of images, a single image, or a partial image, for proper reconstruction. To accomplish this, the present invention stores an X parallax value between the stereoscopic images with the typical pixel information by, e.g., increasing the pixel depth. | 07-29-2010 |
20100195898 | METHOD AND APPARATUS FOR IMPROVING QUALITY OF DEPTH IMAGE - A method and apparatus for enhancing quality of a depth image are provided. A method for enhancing quality of a depth image includes: receiving a multi-view image including a left image, a right image, and a center image; receiving a current depth image frame and a previous depth image frame of the current depth image frame; setting an intensity difference value corresponding to a specific disparity value of the current depth image frame by using the current depth image frame and the previous depth image frame; setting a disparity value range including the specific disparity value; and setting an intensity difference value corresponding to the disparity value range of the current depth image frame by using the multi-viewpoint image. | 08-05-2010 |
20100195899 | DETECTION OF PEOPLE IN REAL WORLD VIDEOS AND IMAGES - Systems and methods for detecting people in video data streams or image data are provided. The method includes using a plurality of training images for learning spatial distributions associated with a plurality of body parts, detecting a plurality of detections of body parts in an input image, clustering the detections of body parts located within a predetermined distance from one another to create one effective detection for each cluster of detections, and determining a position of each person associated with each effective detection. The detections of body parts can be associated with respective previously learned spatial distributions. | 08-05-2010 |
20100195900 | APPARATUS AND METHOD FOR ENCODING AND DECODING MULTI-VIEW IMAGE - An apparatus and method for encoding and decoding a multi-view image including a stereoscopic image are provided. The apparatus for encoding a multi-view image includes a base layer encoding unit that encodes a base layer image to generate a base layer bit stream, a view-based conversion unit that performs view-based conversion of the base layer image to generate a view-converted base layer image, a subtractor obtaining a residual between an enhancement layer image and the view-converted base layer image, and an enhancement layer encoding unit that encodes the obtained residual to generate an enhancement layer bit stream. | 08-05-2010 |
20100208981 | Method for visualization of point cloud data based on scene content - Systems and methods for associating color with spatial data are provided. In the system and method, a scene tag is selected for a portion | 08-19-2010 |
20100208982 | HOUSE CHANGE JUDGMENT METHOD AND HOUSE CHANGE JUDGMENT PROGRAM - It is an object to improve the accuracy of a house change judgment based on images and the like acquired by an airplane. A terrain altitude is subtracted from an altitude value of a digital surface model (DSM) acquired from an airplane or the like to generate a normalized DSM (NDSM). A judgment target region is segmented into a plurality of regions of elevated part for each elevated part with a size corresponding to a house appearing on the NDSM. An outline of the house is extracted from each region of elevated part and a house object containing three-dimensional information on the house is defined by the outline and NDSM data within the outline. The house objects acquired at two different time points, respectively, are compared to detect a variation between the two different time points, and a judgment as to a house change is made based on the variation. | 08-19-2010 |
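A small sketch of the NDSM and elevated-part segmentation step described above, assuming gridded DSM and terrain rasters and illustrative thresholds for house height and footprint size; the outline extraction and two-epoch comparison are not shown.

```python
import numpy as np
from scipy import ndimage

def elevated_regions(dsm, terrain, min_height=2.5, min_pixels=30):
    """Normalized DSM and labelling of elevated parts that could correspond to houses.

    dsm, terrain: (H, W) altitude rasters in the same units (e.g. metres).
    Returns the NDSM and an integer label image of candidate elevated regions.
    """
    ndsm = dsm - terrain                       # height above ground
    mask = ndsm > min_height                   # keep only sufficiently elevated pixels
    labels, n = ndimage.label(mask)            # connected elevated parts
    # drop regions smaller than a plausible house footprint
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    small = np.isin(labels, np.nonzero(sizes < min_pixels)[0] + 1)
    labels[small] = 0
    return ndsm, labels
```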
20100215248 | Method for Determining Dense Disparity Fields in Stereo Vision - In a stereo vision system comprising two cameras shooting the same scene from different positions, a method is performed for determining dense disparity fields between digital images shot by the two cameras, including the steps of capturing a first and a second image of the scene, and determining, for each pixel of the second image, the displacement from a point in the first image to such pixel of the second image minimising an optical flow objective function, wherein the optical flow objective function includes, for each pixel of the second image, a term depending in a monotonously increasing way on the distance between the epipolar line associated with such pixel and the above point in the first image, such term depending on calibration parameters of the two cameras and being weighed depending on the uncertainty of the calibration data. | 08-26-2010 |
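The epipolar term in the preceding entry depends monotonically on the distance between a pixel and the epipolar line induced by the calibration. A minimal sketch of that geometric ingredient, assuming a known fundamental matrix F and a quadratic penalty as the monotone function weighted by calibration confidence:

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    """Distance from p2 (image 2) to the epipolar line F @ p1 of p1 (image 1).

    F: 3x3 fundamental matrix; p1, p2: (x, y) pixel coordinates.
    """
    x1 = np.array([p1[0], p1[1], 1.0])
    x2 = np.array([p2[0], p2[1], 1.0])
    line = F @ x1                                   # a*x + b*y + c = 0 in image 2
    return abs(line @ x2) / np.hypot(line[0], line[1])

def epipolar_penalty(F, p1, p2, weight=1.0):
    """Monotonically increasing penalty on the epipolar distance; `weight`
    stands in for the confidence in the calibration data (larger = more trust)."""
    return weight * epipolar_distance(F, p1, p2) ** 2
```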
20100215249 | AUTOMATED IMAGE SEPARATION METHOD - A method of decomposing a set of scans of different views of overlapping objects into constituent objects is presented. The method involves an initialization process whereby keypoints in two views are determined and matched, disparities between keypoint pairs are computed, and the keypoints are grouped into clusters based on their disparities. Following the initialization process is an iterative optimization process whereby a cost function is calculated and minimized assuming a fixed composition matrix and re-solved assuming a fixed attenuation coefficient. Then, the composition matrix and the attenuation coefficient are updated simultaneously, and the solving, the re-solving, and the updating steps are repeated until there is no significant improvement in the result. | 08-26-2010 |
20100215250 | SYSTEM AND METHOD OF INDICATING TRANSITION BETWEEN STREET LEVEL IMAGES - A system and method of displaying transitions between street level images is provided. In one aspect, the system and method creates a plurality of polygons that are both textured with images from a 2D street level image and associated with 3D positions, where the 3D positions correspond with the 3D positions of the objects contained in the image. These polygons, in turn, are rendered from different perspectives to convey the appearance of moving among the objects contained in the original image. | 08-26-2010 |
20100215251 | METHOD AND DEVICE FOR PROCESSING A DEPTH-MAP - The present invention relates to a device and apparatus for processing a depth-map | 08-26-2010 |
20100215252 | HAND HELD PORTABLE THREE DIMENSIONAL SCANNER - Embodiments of the invention may include a scanning device to scan three dimensional objects. The scanning device may generate a three dimensional model. The scanning device may also generate a texture map for the three dimensional model. Techniques utilized to generate the model or texture map may include tracking scanner position, generating depth maps of the object, and generating a composite image of the surface of the object. | 08-26-2010 |
20100220920 | METHOD, APPARATUS AND SYSTEM FOR PROCESSING DEPTH-RELATED INFORMATION - The invention relates to a method, apparatus and system for processing first depth-related information associated with an image sequence. The method of processing comprises mapping first depth-related information of respective images of a shot of the image sequence on corresponding second depth-related information using a first estimate of a characteristic of the distribution of first depth-related information associated with at least one image from the shot, the mapping adapting the first depth-related information by enhancing the dynamic range of a range of interest of first depth-related information defined at least in part by the first estimate, and the amount of variation in the mapping for respective images in temporal proximity in the shot being limited. | 09-02-2010 |
20100220921 | STEREO IMAGE SEGMENTATION - Real-time segmentation of foreground from background layers in binocular video sequences may be provided by a segmentation process which may be based on one or more factors including likelihoods for stereo-matching, color, and optionally contrast, which may be fused to infer foreground and/or background layers accurately and efficiently. In one example, the stereo image may be segmented into foreground, background, and/or occluded regions using stereo disparities. The stereo-match likelihood may be fused with a contrast sensitive color model that is initialized or learned from training data. Segmentation may then be solved by an optimization algorithm such as dynamic programming or graph cut. In a second example, the stereo-match likelihood may be marginalized over foreground and background hypotheses, and fused with a contrast-sensitive color model that is initialized or learned from training data. Segmentation may then be solved by an optimization algorithm such as a binary graph cut. | 09-02-2010 |
20100226563 | MODEL IMAGE ACQUISITION SUPPORT APPARATUS, MODEL IMAGE ACQUISITION SUPPORT METHOD, AND MODEL IMAGE ACQUISITION SUPPORT PROGRAM - The present invention provides a model image acquisition support apparatus, a model image acquisition support method, and a model image acquisition support program that can easily and swiftly obtain an optimum model image for an image processing apparatus that performs matching processing based on a model image set in advance with respect to a measurement image that is obtained by imaging an object. A plurality of model image candidates, serving as candidates for the model image, are extracted from a reference image obtained by imaging an object which can be a model. Matching processing with the plurality of extracted model image candidates is executed on measurement images actually obtained by a visual sensor, so that trial results are obtained. An evaluation result is generated by evaluating each of the trial results of the matching processing with the model image candidates. An optimum model image is determined based on the evaluation result. | 09-09-2010 |
20100232681 | THREE-DIMENSIONAL VISION SENSOR - An object of the present invention is to enable performing height recognition processing by setting a height of an arbitrary plane to zero for convenience of the recognition processing. A parameter for three-dimensional measurement is calculated and registered through calibration and, thereafter, an image pickup with a stereo camera is performed on a plane desired to be recognized as having a height of zero in actual recognition processing. Further, three-dimensional measurement using the registered parameter is performed on characteristic patterns (marks m | 09-16-2010 |
20100232682 | METHOD FOR DERIVING PARAMETER FOR THREE-DIMENSIONAL MEASUREMENT PROCESSING AND THREE-DIMENSIONAL VISUAL SENSOR - In the present invention, processing for setting a parameter expressing a measurement condition of three-dimensional measurement to a value necessary to output a proper recognition result is easily performed. The three-dimensional measurement is performed to stereo images of real models WM | 09-16-2010 |
20100232683 | Method For Displaying Recognition Result Obtained By Three-Dimensional Visual Sensor And Three-Dimensional Visual Sensor - Display suitable to an actual three-dimensional model or a recognition-target object is performed when stereoscopic display of a three-dimensional model is performed while correlated to an image used in three-dimensional recognition processing. After a position and a rotation angle of a workpiece are recognized through recognition processing using the three-dimensional model, coordinate transformation of the three-dimensional model is performed based on the recognition result, and a post-coordinate-transformation Z-coordinate is corrected according to an angle (elevation angle f) formed between a direction of a line of sight and an imaging surface. Then perspective transformation of the post-correction three-dimensional model into a coordinate system of a camera of a processing object is performed, and a height according to a pre-correction Z-coordinate at a corresponding point of the pre-coordinate-transformation three-dimensional model is set to each point of a produced projection image. Projection processing is performed from a specified direction of a line of sight to a point group that is three-dimensionally distributed by the processing, thereby producing a stereoscopic image of the three-dimensional model. | 09-16-2010 |
20100232684 | CALIBRATION APPARATUS AND METHOD FOR ASSISTING ACCURACY CONFIRMATION OF PARAMETER FOR THREE-DIMENSIONAL MEASUREMENT - When computation of a three-dimensional measurement processing parameter is completed, accuracy of a computed parameter can easily be confirmed. After a parameter for three-dimensional measurement is computed through calibration processing using a calibration workpiece in which plural feature points whose positional relationship is well known can be extracted from an image produced by imaging, three-dimensional coordinate computing processing is performed using the computed parameter for the plural feature points included in the stereo image used to compute the parameter. Perspective transformation of each computed three-dimensional coordinate is performed to produce a projection image in which each post-perspective-transformation three-dimensional coordinate is expressed by a predetermined pattern, and the projection image is displayed on a monitor device. | 09-16-2010 |
20100239158 | FINE STEREOSCOPIC IMAGE MATCHING AND DEDICATED INSTRUMENT HAVING A LOW STEREOSCOPIC COEFFICIENT - The invention relates to a method and system for the acquisition and correlation matching of points belonging to a stereoscopic pair of images, whereby the pair is formed by a first image and a second image representing a scene. According to the invention, the two images of the pair are acquired with a single acquisition instrument ( | 09-23-2010 |
20100246937 | METHOD AND SYSTEM FOR INSPECTION OF CONTAINERS - A method and system for producing images of at least one object of interest in a container. The method includes receiving three-dimensional volumetric scan data from a scan of the container, reconstructing a three-dimensional representation of the container from the three-dimensional volumetric scan data, and inspecting the three-dimensional representation to detect the at least one object of interest within the container. The method also includes re-projecting a two-dimensional image from one of the three-dimensional volumetric scan data and the three-dimensional representation, and identifying a first plurality of image elements in the two-dimensional image corresponding to a location of the at least one object of interest. The method further includes outputting the two-dimensional image with the first plurality of image elements highlighted. | 09-30-2010 |
20100246938 | Image Processing Method for Providing Depth Information and Image Processing System Using the Same - An image processing method for providing corresponding depth information according to an input image is provided. This method includes the following steps. First, a reference image is generated according to the input image. Next, the input image and the reference image are divided into a number of input image blocks and a number of reference image blocks, respectively. Then, according to a number of input pixel data of each input image block and a number of reference pixel data of each reference image block, respective variance magnitudes of the input image blocks are obtained. Next, the input image is divided into a number of segmentation regions. Then, the depth information is generated according to the corresponding variance magnitudes of the input image blocks which each segmentation region covers substantially. | 09-30-2010 |
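A bare-bones sketch of the per-block variance step from the entry above, assuming a grayscale input and that higher local variance is mapped to larger depth values; the segmentation-region averaging described in the abstract is left out, so this is only the intermediate cue.

```python
import numpy as np

def block_variance_depth(gray, block=8):
    """Per-block variance of a grayscale image, rescaled to [0, 1] as a crude depth cue."""
    H, W = gray.shape
    H2, W2 = H - H % block, W - W % block                 # crop to a multiple of the block size
    blocks = gray[:H2, :W2].reshape(H2 // block, block, W2 // block, block)
    var = blocks.var(axis=(1, 3))                         # one variance per block
    depth = (var - var.min()) / (np.ptp(var) + 1e-12)     # normalize to [0, 1]
    return np.kron(depth, np.ones((block, block)))        # expand back to pixel resolution

# Example
rng = np.random.default_rng(0)
print(block_variance_depth(rng.random((32, 48))).shape)   # (32, 48)
```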
20100254592 | CALCULATING Z-DEPTHS AND EXTRACTING OBJECTS IN IMAGES - The dual cameras produce two simultaneous images IM | 10-07-2010 |
20100254593 | System for Draping Meteorological Data on a Three Dimensional Terrain Image - A system for draping meteorological data on a three dimensional terrain image has been developed. The system includes a central processing server that receives meteorological data in real time and drapes the meteorological data over a three dimensional terrain image. The image is then transmitted to a display computer for use by an end user. | 10-07-2010 |
20100266198 | Apparatus, method, and medium of converting 2D image to 3D image based on visual attention - A method, apparatus, and medium of converting a two-dimensional (2D) image to a three-dimensional (3D) image based on visual attention are provided. A visual attention map including visual attention information, which is information about a significance of an object in a 2D image, may be generated. Parallax information including information about a left eye image and a right eye image of the 2D image may be generated based on the visual attention map. A 3D image may be generated using the parallax information. | 10-21-2010 |
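One way to picture the attention-to-parallax step: shift each pixel horizontally by an amount proportional to its saliency to synthesize a left/right pair. The sketch below is a naive backward warp with illustrative parameters, not the patent's actual conversion pipeline.

```python
import numpy as np

def stereo_from_attention(image, attention, max_shift=8):
    """Synthesize a left/right pair by shifting pixels horizontally in proportion
    to a visual-attention (saliency) map: salient pixels get larger parallax.

    image: (H, W) or (H, W, C); attention: (H, W) with values in [0, 1].
    """
    H, W = attention.shape
    shift = np.rint(attention * max_shift).astype(int)    # per-pixel parallax in pixels
    cols = np.arange(W)
    left = np.empty_like(image)
    right = np.empty_like(image)
    for y in range(H):
        left[y] = image[y, np.clip(cols + shift[y], 0, W - 1)]
        right[y] = image[y, np.clip(cols - shift[y], 0, W - 1)]
    return left, right

# Example with a random image and a centered blob of attention
rng = np.random.default_rng(0)
img = rng.random((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
att = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200.0)
L, R = stereo_from_attention(img, att)
```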
20100272348 | TRANSPROJECTION OF GEOMETRY DATA - Systems and methods for transprojection of geometry data acquired by a coordinate measuring machine (CMM). The CMM acquires geometry data corresponding to 3D coordinate measurements collected by a measuring probe that are transformed into scaled 2D data that is transprojected upon various digital object image views captured by a camera. The transprojection process can utilize stored image and coordinate information or perform live transprojection viewing capabilities in both still image and video modes. | 10-28-2010 |
20100284605 | Methodology to Optimize and Provide Streaming Object Rotation Using Composite Images - Optimizing and presenting various sequences of images and/or photographs for viewing with a Web browser is accomplished without the necessity of loading the entire image set, for example in connection with the 3D display of a product of interest. To represent an object that is rotating, a set of images must be taken. These images are taken at various angles, typically using either a fixed camera or a turntable. The illusion of an object being rotated is created when the captured image corresponding to the angle being viewed is displayed. To ensure a seamless rotation of an object, a technique is taught that concentrates on reducing the loading time of the captured images by prioritizing which images should be transferred first according to their size and their number of object views or view angles. A seamless rotation is thus achieved while less than the total number of images is loaded. In fact, an embodiment of the invention teaches that, by selectively loading certain images with specific angular values, it is possible to achieve an object rotation, i.e., using horizontally and vertically adjacent image positioning. | 11-11-2010 |
20100284606 | IMAGE PROCESSING DEVICE AND METHOD THEREOF - An image processing device and a method thereof are provided. In the method, an original image and a corresponding depth image are received, wherein the depth image includes a plurality of depth values, and the depth values indicate depth of field of a plurality of blocks in the original image respectively. Further, each of the blocks is processed to obtain a corresponding smoothness and/or sharpness effect according to each of the depth values. Thereby, a stereoscopic sensation of the original image can be enhanced. | 11-11-2010 |
20100284607 | METHOD AND SYSTEM FOR GENERATING A 3D MODEL FROM IMAGES - A method for generating a three dimensional (3D) model of an object from a series of two dimensional (2D) images is described. The series of 2D images depict varying views of the object and have associated camera parameter information. The method includes the steps of tracing the object in a first 2D image selected from the series of 2D images to provide a first set of tracing information, then tracing the object in a second 2D image selected from the series of 2D images to provide a second set of tracing information. The 3D model of the object is then generated based on the camera parameter information and the first and second sets of tracing information. | 11-11-2010 |
20100284608 | FEATURE-BASED SEGMENTATION METHOD, FOR SEGMENTING A PLURALITY OF LOOSELY-ARRANGED DUPLICATE ARTICLES AND A GROUP FOR ACTUATING THE METHOD FOR SUPPLYING A PACKAGING MACHINE - The invention relates to a segmentation method based on the characteristics for segmenting a plurality of duplicate articles ( | 11-11-2010 |
20100284609 | Apparatus and method for measuring size distribution of granular matter - A method and apparatus for measuring the size distribution of bulk matter consisting of randomly oriented granules, such as wood chips, make use of scanning the exposed surface of the granular matter to generate three-dimensional profile image data defined with respect to a three-coordinate reference system. The image data is segmented to reveal regions associated with distinct granules, and values of the size-related parameter for the revealed regions are estimated. Then, a geometric correction is applied to each of the estimated size-related parameter values to compensate for the random orientation of the corresponding distinct granules. Finally, the size distribution of the bulk matter is statistically estimated from the corrected size-related parameter values. | 11-11-2010 |
20100290697 | METHODS AND SYSTEMS FOR COLOR CORRECTION OF 3D IMAGES - A system and method for color correction of 3D images including at least two separate image streams captured for a same scene include determining three-dimensional properties of at least a portion of a selected image stream, the three-dimensional properties including lighting, surface color, reflectance properties, scene geometry and the like. A look of the portion of the selected image stream is then modified by altering the value of at least one of the determined three-dimensional properties and, in one embodiment, applying image formation theory. The modifications are then rendered in an output 3D picture either automatically and/or according to user inputs. In various embodiments, corrections made to the selected one of the at least two image streams can be automatically applied to the other of the image streams. | 11-18-2010 |
20100290698 | Distance-Varying Illumination and Imaging Techniques for Depth Mapping - A method for mapping includes projecting a pattern onto an object ( | 11-18-2010 |
20100296724 | Method and System for Estimating 3D Pose of Specular Objects - A method estimates a 3D pose of a 3D specular object in an environment. In a preprocessing step, a set of pairs of 2D reference images are generated using a 3D model of the object, and a set of poses of the object, wherein each pair of reference images is associated with one of the poses. Then, a pair of 2D input images are acquired of the object. A rough 3D pose of the object is estimated by comparing features in the pair of 2D input images and the features in each pair of 2D reference images using a rough cost function. The rough estimate is optionally refined using a fine cost function. | 11-25-2010 |
20100296725 | DEVICE AND METHOD FOR OBTAINING A THREE-DIMENSIONAL TOPOGRAPHY - In a device for obtaining a three-dimensional topography of a measured object, a center axis of an illumination system is situated at an angle with respect to a recording direction of a 2D camera, and the illumination system generates a focal plane on a predetermined area of the measured object, the predetermined area being smaller than a recording area of the 2D camera. The measured object is movable relative to the 2D camera and relative to the illumination system with the aid of the movement device. The 2D camera records multiple images of the measured object from various positions which are occupied due to the movement of the movement device. | 11-25-2010 |
20100296726 | HIGH-RESOLUTION OPTICAL DETECTION OF THE THREE-DIMENSIONAL SHAPE OF BODIES - In a cost-efficient method and arrangement for 3D digitization of bodies and body parts, which produces dense and exact spatial coordinates despite imprecise optics and mechanics, the body to be digitized is placed on a photogrammetrically marked surface, a photogrammetrically marked band is fitted to the body or body part to be digitized, and a triangulation arrangement comprised of a camera and a light pattern projector is moved on a path around the body. By a photogrammetric evaluation of the photogrammetric marks of the surface and the band situated in the image field of the camera, and of the light traces of the light projector on the marked surface and the marked band, all unknown internal and external parameters of the triangulation arrangement are determined, and the absolute spatial coordinates of the body or body part are established from the light traces on the non-marked body with high point density and high precision without any separate calibration methods. | 11-25-2010 |
20100296727 | METHODS AND DEVICES FOR READING MICROARRAYS - In one embodiment of the invention, a method to image a probe array is described that includes focusing on a plurality of fiducials on a surface of an array. The method involves obtaining the best z position of the fiducials and using a surface fitting algorithm to produce a surface fit profile. One or more surface non-flatness parameters can be adjusted to improve the image of the array surface to be imaged. | 11-25-2010 |
20100303336 | Method for ascertaining the axis of rotation of a vehicle wheel - A method for ascertaining the axis of rotation of a vehicle wheel in which a light pattern is projected at least onto the wheel during the rotation of the wheel and the light pattern reflected from the wheel is detected by a calibrated imaging sensor system and analyzed in an analyzer device. Accurate and robust measurement of the axis of rotation and, optionally, of the axis and wheel geometry, in particular when the vehicle is passing by, is achieved in that a 3D point cloud with respect to the wheel is determined in the analysis and a parametric surface model of the wheel is adapted thereto; normal vectors of the wheel are calculated for different rotational positions of the wheel for obtaining the axes of rotation; and the axis of rotation vector is calculated as the axis of rotation from the spatial movement of the normal vector of the wheel. | 12-02-2010 |
20100303337 | Methods and Apparatus for Practical 3D Vision System - A method and system for specifying an area of interest in a 3D imaging system including a plurality of cameras that include at least first and second cameras wherein each camera has a field of view arranged along a camera distinct trajectory, the method comprising the steps of presenting a part at a location within the fields of view of the plurality of cameras, indicating on the part an area of interest that is within the field of view of each of the plurality of cameras, for each of the plurality of cameras: (i) acquiring at least one image of the part including the area of interest, (ii) identifying a camera specific field of interest within the field of view of the camera associated with the area of interest in the at least one image and (iii) storing the field of interest for subsequent use. | 12-02-2010 |
20100303338 | Digital Video Content Fingerprinting Based on Scale Invariant Interest Region Detection with an Array of Anisotropic Filters - Video sequence processing is described with various filtering rules applied to extract dominant features for content based video sequence identification. Active regions are determined in video frames of a video sequence. Video frames are selected in response to temporal statistical characteristics of the determined active regions. A two pass analysis is used to detect a set of initial interest points and interest regions in the selected video frames to reduce the effective area of images that are refined by complex filters that provide accurate region characterizations resistant to image distortion for identification of the video frames in the video sequence. Extracted features and descriptors are robust with respect to image scaling, aspect ratio change, rotation, camera viewpoint change, illumination and contrast change, video compression/decompression artifacts and noise. Compact, representative signatures are generated for video sequences to provide effective query video matching and retrieval in a large video database. | 12-02-2010 |
20100303339 | System and Method for Initiating Actions and Providing Feedback by Pointing at Object of Interest - A system and method are described for compiling feedback in command statements that relate to applications or services associated with spatial objects or features, pointing at such a spatial object or feature in order to identify the object of interest, and executing the command statements on a system server and attaching feedback information to the representation of this object or feature in a database of the system server. | 12-02-2010 |
20100303340 | STEREO-IMAGE REGISTRATION AND CHANGE DETECTION SYSTEM AND METHOD - A system and method for registering stereoscopic images comprising: obtaining at least two sets of stereoscopic images, each one of the at least two sets including at least two images that are taken from different angles, determining at least two groups of images, each one of the groups including at least two images that are respective images of at least two of the sets or are derived therefrom. For each one of the groups, calculating a respective optimal entities list and stereo-matching at least two images, each one being, or being derived from, a different one of the at least two groups and the same or different sets, using at least four optimal entities from each one of the optimal entities lists, thereby giving rise to at least one pair of registered stereoscopic images. | 12-02-2010 |
20100303341 | METHOD AND DEVICE FOR THREE-DIMENSIONAL SURFACE DETECTION WITH A DYNAMIC REFERENCE FRAME - The surface shape of a three-dimensional object is acquired with an optical sensor. The sensor, which has a projection device and a camera, is configured to generate three-dimensional data from a single exposure, and the sensor is moved relative to the three-dimensional object, or vice versa. A pattern is projected onto the three-dimensional object and a sequence of overlapping images of the projected pattern is recorded with the camera. A sequence of 3D data sets is determined from the recorded images and a registration is effected between subsequently obtained 3D data sets. This enables the sensor to be moved freely about the object, or vice versa, without tracking their relative position, and to determine a surface shape of the three-dimensional object on the fly. | 12-02-2010 |
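Registration between subsequently obtained 3D data sets, as in the entry above, is commonly solved inside ICP-style loops by a closed-form rigid alignment of corresponding points. A standard Kabsch/Procrustes sketch is shown below as general background, assuming the correspondences are already known; it is not the patent's specific registration procedure.

```python
import numpy as np

def rigid_registration(P, Q):
    """Least-squares rigid transform (R, t) aligning point set P onto Q,
    assuming row-wise correspondence (Kabsch / Procrustes).

    P, Q: (N, 3) arrays. Returns R (3x3) and t (3,) with Q ~ P @ R.T + t.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Example: recover a known rotation and translation
rng = np.random.default_rng(0)
P = rng.random((50, 3))
angle = np.deg2rad(20)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_registration(P, Q)
print(np.allclose(P @ R.T + t, Q))
```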
20100310153 | ENHANCED IMAGE IDENTIFICATION - A method for deriving a representation of an image is described. The method involves processing signals corresponding to the image. A three dimensional representation of the image is derived. The three dimensional representation of the image is used to derive the representation of the image. In one embodiment, each line of the image is defined by a first parameter (d) and a second parameter (θ), and a position on each line is defined by a third parameter (t), and the three dimensional representation is parameterised by the first, second and third parameters. A set of values is extracted from the three dimensional representation at a value of the first parameter, and a functional is applied along lines, or parts of lines, of the extracted set of values, the lines extending along values of the second or third parameter. | 12-09-2010 |
20100310154 | METHOD FOR MATCHING AN OBJECT MODEL TO A THREE-DIMENSIONAL POINT CLOUD - The invention relates to a method for matching an object model to a three-dimensional point cloud, wherein the point cloud is generated from two images by means of a stereo method and a clustering method is applied to the point cloud in order to identify points belonging to respectively one cluster, wherein model matching is subsequently carried out, with at least one object model being superposed on at least one cluster and an optimum position of the object model with respect to the cluster being determined, and wherein a correction of false assignments of points is carried out by means of the matched object model. A classifier, trained by means of at least one exemplary object, is used to generate an attention map from at least one of the images. A number and/or a location probability of at least one object, which is similar to the exemplary object, is determined in the image using the attention map, and the attention map is taken into account in the clustering method and/or in the model matching. | 12-09-2010 |
20100310155 | IMAGE ENCODING METHOD FOR STEREOSCOPIC RENDERING - An image encoding method that allows stereoscopic rendering basically comprises the following steps. In a primary encoding step (VDE | 12-09-2010 |
20100316280 | DESIGN DRIVEN SCANNING ALIGNMENT FOR COMPLEX SHAPES - Methods and systems for accurately determining dimensional accuracy of a complex three dimensional shape are disclosed. The invention in one respect includes determining at least a non-critical feature and at least a critical feature of the 3-D component, determining a first datum using at least the non-critical feature, aligning the first datum to at least a portion of a reference shape, determining a second datum corresponding to the critical feature subsequent to the aligning, and determining the dimensional accuracy of the 3-D component by comparing the second datum to another portion of the reference shape. | 12-16-2010 |
20100316281 | Method and device for determining the pose of a three-dimensional object in an image and method and device for creating at least one key image for object tracking - The invention relates to a method and a device for determining the pose of a three-dimensional object in an image, characterised in that it comprises the following steps: acquiring a three-dimensional generic model of the object, projecting the three-dimensional generic model according to at least one two-dimensional representation and associating with each two-dimensional representation pose information of the three-dimensional object, selecting and positioning a two-dimensional representation onto the object in said image, and determining the three-dimensional pose of the object in the image from at least the pose information associated with the selected two-dimensional representation. | 12-16-2010 |
20100316282 | Derivation of 3D information from single camera and movement sensors - In various embodiments, a camera takes pictures of at least one object from two different camera locations. Measurement devices coupled to the camera measure the change in location and the change in direction of the camera from one location to the other, and derive 3-dimensional information on the object from that information and, in some embodiments, from the images in the pictures. | 12-16-2010 |
20100322507 | SYSTEM AND METHOD FOR DETECTING DROWSY FACIAL EXPRESSIONS OF VEHICLE DRIVERS UNDER CHANGING ILLUMINATION CONDITIONS - The present invention includes a method of detecting drowsy facial expressions of vehicle drivers under changing illumination conditions. The method includes capturing an image of a person's face using an image sensor, detecting a face region of the image using a pattern classification algorithm, and performing, using an active appearance model algorithm, local pattern matching to identify a plurality of landmark points on the face region of the image. The method also includes generating a 3D face model with facial muscles of the face region, determining photometric flows from the 3D face model using an extract photometric flow module, determining geometric flows from the 3D face model using a compute geometric flow module, determining a noise component generated by varying illuminations by comparing the geometric flows to the photometric flows, and removing the noise component by subtracting two photometric flows. | 12-23-2010 |
20100329542 | Method for Determining a Location From Images Acquired of an Environment with an Omni-Directional Camera - A location and orientation in an environment is determined by first acquiring a real omni-directional image of an unknown skyline in the environment. A set of virtual omni-directional images of known skylines are synthesized from a 3D model of the environment, wherein each virtual omni-directional image is associated with a known location and orientation. The real omni-directional image is compared with each virtual omni-directional image to determine the best matching virtual omni-directional image and its associated known location and orientation. | 12-30-2010 |
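A minimal sketch of matching a real skyline against synthesized candidates, assuming each skyline is reduced to an elevation profile sampled over azimuth bins; cyclically shifting a candidate profile tests different headings, so the best (candidate, shift) pair stands in for the recovered location and orientation. This is only one plausible way to realize the comparison, not the patent's method.

```python
import numpy as np

def best_skyline_match(real, virtual_set):
    """Match a real omni-directional skyline against synthesized candidates.

    real: (N,) skyline elevation sampled at N azimuth bins.
    virtual_set: list of (skyline, pose) pairs, each skyline also of length N.
    Returns the index of the best candidate and the best azimuth shift.
    """
    best = (np.inf, None, None)
    for i, (cand, _pose) in enumerate(virtual_set):
        for shift in range(len(real)):                     # try every orientation offset
            err = np.sum((real - np.roll(cand, shift)) ** 2)
            if err < best[0]:
                best = (err, i, shift)
    _, idx, shift = best
    return idx, shift

# Example: the first candidate is the real skyline rotated by 40 bins
rng = np.random.default_rng(0)
sky = rng.random(360)
candidates = [(np.roll(sky, 40), "pose_A"), (rng.random(360), "pose_B")]
print(best_skyline_match(sky, candidates))   # -> (0, 320)
```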
20100329543 | METHOD AND SYSTEM FOR RECTIFYING IMAGES - The present invention relates to a method and a system for rectifying images. An original stereo image pair is obtained, and the epipolar lines corresponding to the original stereo image pair are parallelized to obtain a first transformed stereo image pair. Epipolar lines corresponding to the first transformed stereo image pair are collinearized to obtain a second transformed stereo image pair. The present invention parallelizes and collinearizes the epipolar lines corresponding to the stereo image pair after the images are rectified. | 12-30-2010 |
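For comparison with the rectification described above, OpenCV's uncalibrated (Hartley-style) routine computes two homographies that make corresponding epipolar lines horizontal and aligned in one step; the sketch below uses that standard routine rather than the abstract's explicit parallelize-then-collinearize procedure, and assumes matched point correspondences are available.

```python
import cv2
import numpy as np

def rectify_uncalibrated(img1, img2, pts1, pts2):
    """Warp a stereo pair so corresponding epipolar lines become horizontal and aligned.

    pts1, pts2: (N, 2) arrays of matched pixel coordinates (N >= 8 recommended).
    """
    pts1 = np.asarray(pts1, dtype=np.float32)
    pts2 = np.asarray(pts2, dtype=np.float32)
    F, _mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    h, w = img1.shape[:2]
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
    if not ok:
        raise ValueError("rectification homographies could not be estimated")
    rect1 = cv2.warpPerspective(img1, H1, (w, h))
    rect2 = cv2.warpPerspective(img2, H2, (w, h))
    return rect1, rect2
```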
20110002530 | SUB-DIFFRACTION LIMIT IMAGE RESOLUTION IN THREE DIMENSIONS - The present invention generally relates to sub-diffraction limit image resolution and other imaging techniques, including imaging in three dimensions. In one aspect, the invention is directed to determining and/or imaging light from two or more entities separated by a distance less than the diffraction limit of the incident light. For example, the entities may be separated by a distance of less than about 1000 nm, or less than about 300 nm for visible light. In some cases, the position of the entities can be determined in all three spatial dimensions (i.e., in the x, y, and z directions), and in certain cases, the positions in all three dimensions can be determined to an accuracy of less than about 1000 nm. In one set of embodiments, the entities may be selectively activatable, i.e., one entity can be activated to produce light, without activating other entities. A first entity may be activated and determined (e.g., by determining light emitted by the entity), then a second entity may be activated and determined. The emitted light may be used to determine the x and y positions of the first and second entities, for example, by determining the positions of the images of these entities, and in some cases, with sub-diffraction limit resolution. In some cases, the z positions may be determined using one of a variety of techniques that uses intensity information or focal information (e.g., a lack of focus) to determine the z position. Non-limiting examples of such techniques include astigmatism imaging, off-focus imaging, or multi-focal-plane imaging. | 01-06-2011 |
20110002531 | Object Recognition with 3D Models - An “active learning” method trains a compact classifier for view-based object recognition. The method actively generates its own training data. Specifically, the generation of synthetic training images is controlled within an iterative training process. Valuable and/or informative object views are found in a low-dimensional rendering space and then added iteratively to the training set. In each iteration, new views are generated. A sparse training set is iteratively generated by searching for local minima of a classifier's output in a low-dimensional space of rendering parameters. An initial training set is generated. The classifier is trained using the training set. Local minima of the classifier's output are found in the low-dimensional rendering space. Images are rendered at the local minima. The newly-rendered images are added to the training set. The procedure is repeated so that the classifier is retrained using the modified training set. | 01-06-2011 |
20110002532 | Data Reconstruction Using Directional Interpolation Techniques - Approaches to three-dimensional (3D) data reconstruction are presented. The 3D data comprises 2D images. In some embodiments, the 2D images are directionally interpolated to generate directionally-interpolated 3D data. The directionally-interpolated 3D data are then segmented to generate segmented directionally-interpolated 3D data. The segmented directionally-interpolated 3D data is then meshed. In other embodiments, a 3D data set, which includes 2D flow images, is accessed. The accessed 2D flow images are then directionally interpolated to generate 2D intermediate flow images. | 01-06-2011 |
20110002533 | IMAGE PROCESSING METHOD, IMAGE PROCESSING DEVICE AND RECORDING MEDIUM - An image processing method and an image processing device which can improve sharpness by producing a binocular rivalry intentionally are provided. An image processing device | 01-06-2011 |
20110007962 | Overlay Information Over Video - In accordance with a particular embodiment of the invention, a method for geotagging an image includes receiving an image of a real-world scene. Location information may be received corresponding to the image. The location information may identify the location of the real-world scene. The image may be synchronized with the location information corresponding to the image such that a two-dimensional point on the image corresponds to a three-dimensional location in the real world at the real-world scene. A geotag may be received. The geotag may tag the image at the two-dimensional point and provide additional information concerning the real-world scene. The geotag and the three-dimensional location in the real world at the real-world scene may be stored in a geotag database. | 01-13-2011 |
20110013827 | METHOD FOR OBTAINING A POSITION MATCH OF 3D DATA SETS IN A DENTAL CAD/CAM SYSTEM - Disclosed is a method for designing tooth surfaces of a digital dental prosthetic item existing as a 3D data set using a first 3D model of a preparation site and/or of a dental prosthetic item and a second 3D model, which second model comprises regions which match some regions on the first 3D model and regions which differ from other regions of the first 3D model, the non-matching regions containing some of the surface information required for the dental prosthetic item, wherein at least three pairs (P | 01-20-2011 |
20110013828 | Stereoscopic format converter - A device and method for converting one stereoscopic format into another. A software-enabled matrix is used to set forth predefined relationships between one type of format as an input image and another type of format as an output image. The matrix can then be used as a look-up table that defines a correspondence between input pixels and output pixels for the desired format conversion. | 01-20-2011 |
20110019904 | METHOD FOR DISPLAYING A VIRTUAL IMAGE - A method for displaying a virtual image of three dimensional objects in an area using stereo recordings of the area for storing a pixel and a height for each point of the area. A method is obtained of enabling displaying of vertical surfaces or even slightly downwards and inwards inclined surfaces. Stereo recordings from at least three different stereo recordings of different solid angles are used. For each different solid angle at least one data base including data about texture and height pixel point wise is established. Data for displaying the virtual image are combined from the different data bases in dependence of the direction in which the virtual image is to be displayed. | 01-27-2011 |
20110019905 | THREE-DIMENSIONAL AUTHENTICATION OF MICROPARTICLE MARK - A system, method, and apparatus for authenticating microparticle marks or marks including other three-dimensional objects. The authentication utilizes two or more sets of information captured or acquired for the mark in response to illumination of the mark by electromagnetic energy such as in the visible frequency range. These sets of information are then used to verify that the mark includes three-dimensional objects such as microparticles. The two or more sets of information about the mark preferably vary from each other in time, space/directionality, color, frequency or any combinations thereof, and can be captured or acquired as part of one, two, or more images of the microparticle mark. | 01-27-2011 |
20110019906 | METHOD FOR THE THREE-DIMENSIONAL SYNTHETIC RECONSTRUCTION OF OBJECTS EXPOSED TO AN ELECTROMAGNETIC AND/OR ELASTIC WAVE - A method for synthetic reconstruction of objects includes: extracting criteria from a knowledge base; extracting, from sensed signals filtered by the criteria, weak signals; extracting, from the weak signals, weak signals of interest; removing noise from and amplifying the weak signals of interest and obtaining useful weak signals; identifying useful direct information, from useful weak signals filtered by the criteria and supplying optimum criteria; reconstructing, using the useful direct information, information of interest; reconstructing, using the information of interest, useful information and supplying optimum criteria; reconstructing, based on the useful information, three-dimensional information, supplying a recognition state file and supplying the optimum criteria; and updating the criteria with the optimum criteria. | 01-27-2011 |
20110026807 | ADJUSTING PERSPECTIVE AND DISPARITY IN STEREOSCOPIC IMAGE PAIRS - A system and method for adjusting perspective and disparity in a stereoscopic image pair using range information includes receiving the stereoscopic image pair representing a scene; identifying range information associated with the stereoscopic image pair and including distances of pixels in the scene from a reference location; generating a cluster map based at least upon an analysis of the range information and the stereoscopic images, the cluster map grouping pixels of the stereoscopic images by their distances from a viewpoint; identifying objects and background in the stereoscopic images based at least upon an analysis of the cluster map and the stereoscopic images; generating a new stereoscopic image pair at least by adjusting perspective and disparity of the object and the background in the stereoscopic image pair, the adjusting occurring based at least upon an analysis of the range information; and storing the new generated stereoscopic image pair in a processor-accessible memory system. | 02-03-2011 |
20110026808 | APPARATUS, METHOD AND COMPUTER-READABLE MEDIUM GENERATING DEPTH MAP - Disclosed are an apparatus, a method and a computer-readable medium automatically generating a depth map corresponding to each two-dimensional (2D) image in a video. The apparatus includes an image acquiring unit to acquire a plurality of 2D images that are temporally consecutive in an input video, a saliency map generator to generate at least one saliency map corresponding to a current 2D image among the plurality of 2D images based on a Human Visual Perception (HVP) model, a saliency-based depth map generator, a three-dimensional (3D) structure matching unit to calculate matching scores between the current 2D image and a plurality of 3D typical structures that are stored in advance, and to determine a 3D typical structure having a highest matching score among the plurality of 3D typical structures to be a 3D structure of the current 2D image, a matching-based depth map generator; a combined depth map generator to combine the saliency-based depth map and the matching-based depth map and to generate a combined depth map, and a spatial and temporal smoothing unit to spatially and temporally smooth the combined depth map. | 02-03-2011 |
20110026809 | Fast multi-view three-dimensional image synthesis apparatus and method - A fast multi-view three-dimensional image synthesis apparatus includes: a disparity map generation module for generating a left image disparity map by using left and right image pixel data; intermediate-view generation modules for generating intermediate-view pixel data from different view points by using the left and right image pixel data and the left image disparity map; and a multi-view three-dimensional image generation module for generating multi-view three-dimensional image pixel data by using the left image pixel data, the right image pixel data and intermediate-view pixel data. Each of the intermediate-view generation modules includes: a right image disparity map generation unit for generating a rough right image disparity map; an occluded region compensation unit for generating a right image disparity map by removing occluded regions from the rough right image disparity map; and an intermediate-view generation unit for generating the intermediate-view pixel data from the different view points. | 02-03-2011 |
20110033104 | MESH COLLISION AVOIDANCE - The invention relates to a system ( | 02-10-2011 |
20110038529 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - Image retargeting is appropriately performed on stereo pair images composed of at least two images such as in three-dimensional displays. A path of connected pixels in first image data is calculated based on pixel gradient energy. Each pixel in second image data corresponding to each pixel in connected pixels in the first image data is calculated as an initial search point, based on the stereo correspondence relationship between the first image data and the second image data. Pixels that minimize energy between pixels of the first image data and pixels of the second image data in the proximity of the initial search point are calculated as a path of connected pixels in the second image data. A path of optimal connected pixels in the first image data is calculated using the energy. | 02-17-2011 |
20110038530 | ENHANCED OBJECT RECONSTRUCTION - Processing images includes projecting an infra-red pattern onto a three-dimensional object and producing a first image, a second image, and a third image of the three-dimensional object while the pattern is projected on the three-dimensional object. The first image and the second image include the three-dimensional object and the pattern. The first image and the second image are produced by capturing at a first camera and a second camera, respectively, light filtered through an infra-red filter. The third image includes the three-dimensional object but not the pattern. Processing the images also includes establishing a first-pair correspondence between a portion of pixels in the first image and a portion of pixels in the second image. Processing the images further includes constructing, based on the first-pair correspondence and the third image, a two-dimensional image that depicts a three-dimensional construction of the three-dimensional object. | 02-17-2011 |
20110044530 | IMAGE CLASSIFICATION USING RANGE INFORMATION - A method of identifying an image classification for an input digital image comprising receiving an input digital image for a captured scene; receiving a range map which represents range information associated with the input digital image, wherein the range information represents distances between the captured scene and a known reference location; identifying the image classification using both the range map and the input digital image; and storing the image classification in association with the input digital image in a processor-accessible memory system. | 02-24-2011 |
20110044531 | SYSTEM AND METHOD FOR DEPTH MAP EXTRACTION USING REGION-BASED FILTERING - A system and method for extracting depth information from at least two images employing region-based filtering for reducing artifacts are provided. The present disclosure provides a post-processing algorithm or function for reducing the artifacts generated by scanline Dynamic Programming (DP) or other similar methods. The system and method provide for acquiring a first image and a second image from a scene, estimating the disparity of at least one point in the first image with at least one corresponding point in the second image to generate a disparity map, segmenting at least one of the first or second images into at least one region, and filtering the disparity map based on the segmented regions. Furthermore, anisotropic filters are employed, which have a greater smoothing effect along the vertical direction than along the horizontal direction, and therefore reduce stripe artifacts without significantly blurring the depth boundaries. | 02-24-2011 |
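The anisotropic filtering mentioned above smooths a disparity map more strongly along columns than along rows, which damps the horizontal stripe artifacts typical of scanline methods. A minimal sketch of that idea, assuming a plain tall-and-narrow box filter rather than whatever filter the application actually claims:

```python
import numpy as np

def anisotropic_box_filter(disparity, v_radius=4, h_radius=1):
    """Average over a (2*v_radius+1) x (2*h_radius+1) window: stronger smoothing
    vertically than horizontally, so scanline stripes are damped while vertical
    depth boundaries are largely preserved."""
    d = np.pad(disparity.astype(np.float64),
               ((v_radius, v_radius), (h_radius, h_radius)), mode='edge')
    h, w = disparity.shape
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(-v_radius, v_radius + 1):
        for dx in range(-h_radius, h_radius + 1):
            out += d[v_radius + dy: v_radius + dy + h,
                     h_radius + dx: h_radius + dx + w]
    return out / ((2 * v_radius + 1) * (2 * h_radius + 1))

# A disparity ramp with scanline (stripe) noise is smoothed mainly along columns
disp = np.tile(np.linspace(0, 32, 64), (64, 1))
disp[::2, :] += 2.0
print(anisotropic_box_filter(disp).std(axis=0).max() < disp.std(axis=0).max())   # True
```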
20110044532 | Functional-Based Knowledge Analysis In A 2D and 3D Visual Environment - A method of creating a visual display based on a plurality of data sources is provided. An exemplary embodiment of the method comprises extracting a set of extracted data from the plurality of data sources and processing at least a portion of the extracted data with a set of knowledge agents according to specific criteria to create at least one data assemblage. The exemplary method also comprises providing an integrated two-dimensional/three-dimensional (2D/3D) visual display in which at least one 2D element of the at least one data assemblage is integrated into a 3D visual representation using a mapping identifier and a criteria identifier. | 02-24-2011 |
20110052041 | METHOD FOR DETERMINING THE ROTATIONAL AXIS AND THE CENTER OF ROTATION OF A VEHICLE WHEEL - The invention relates to a method for determining the rotational axis and the rotating center of a vehicle wheel by means of at least two image capture units assigned to each other in position and situation during the journey of the vehicle, and by means of an analysis unit arranged downstream of said units, processing the recorded image information, taking into account multiple wheel features ( | 03-03-2011 |
20110052042 | PROJECTING LOCATION BASED ELEMENTS OVER A HEADS UP DISPLAY - Methods and systems for projecting location-based elements over a heads-up display. One method includes: generating a three dimensional (3D) model of a scene, based on a source of digital mapping of the scene; associating a position of at least one selected LAE contained within the scene with a respective position in the 3D model; superimposing, by projection onto a specified position on a transparent screen facing a viewer and associated with the vehicle, at least one graphic indicator associated with the at least one LAE, wherein the specified position is calculated based on: the respective position of the LAE in the 3D model, the screen's geometrical and optical properties, the viewer's viewing angle, the viewer's distance from the screen, the vehicle's position and angle within the scene, such that the viewer, the graphic indicator, and the LAE are substantially on a common line. | 03-03-2011 |
20110052043 | METHOD OF MOBILE PLATFORM DETECTING AND TRACKING DYNAMIC OBJECTS AND COMPUTER-READABLE MEDIUM THEREOF - Disclosed herein is a computer-readable medium and method of a mobile platform detecting and tracking dynamic objects in an environment having the dynamic objects. The mobile platform acquires a three-dimensional (3D) image using a time-of-flight (TOF) sensor, removes a floor plane from the acquired 3D image using a random sample consensus (RANSAC) algorithm, and individually separates objects from the 3D image. Movement of the respective separated objects is estimated using a joint probability data association filter (JPDAF). | 03-03-2011 |
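Removing a floor plane with RANSAC, as described in the entry above, can be illustrated with a generic plane-fitting consensus loop over an N x 3 point cloud. This is a textbook RANSAC sketch, not the platform's actual implementation, and the synthetic scene is made up for the example.

```python
import numpy as np

def ransac_plane(points, iters=200, threshold=0.02, rng=None):
    """Fit a plane n.p + d = 0 to the largest consensus set and return its inlier mask."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = -n.dot(sample[0])
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic scene: a flat floor at z = 0 plus a box-shaped object above it
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(-2, 2, 2000), rng.uniform(-2, 2, 2000),
                         rng.normal(0, 0.005, 2000)])
box = rng.uniform([0.2, 0.2, 0.3], [0.6, 0.6, 0.7], (300, 3))
cloud = np.vstack([floor, box])
floor_mask = ransac_plane(cloud)
objects = cloud[~floor_mask]              # points remaining after floor removal
print(len(objects))                       # close to 300
```

The remaining points can then be clustered into individual objects before per-object tracking.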
20110052044 | METHOD AND APPARATUS FOR CROSS-SECTION PROCESSING AND OBSERVATION - A cross-section processing and observation method includes: forming a cross section in a sample by a focused ion beam through etching processing; obtaining a cross-section observation image through cross-section observation by the focused ion beam; and forming a new cross section by performing etching processing in a region including the cross section and obtaining a cross-section observation image of the new cross section. A surface observation image of a region including a mark on the sample and the cross section is obtained. A position of the mark is recognized in the surface observation image and etching processing is performed on the cross section by setting, in reference to the position of the mark, a focused ion beam irradiation region in which to form the new cross section. Cross-section processing and observation is thus enabled continuously and efficiently using a focused ion beam apparatus having no SEM apparatus. | 03-03-2011 |
20110052045 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER READABLE MEDIUM - A system is provided to compress an image of a subject captured in a plurality of directions, at high compression, and the image processing apparatus includes: a model storage section that stores a reference model that is a three-dimensional model representing an object; a model generating section that generates, based on a plurality of captured images of an object, an object model that is a three-dimensional model that matches the object captured in the plurality of captured images; and an output section that outputs a position and a direction of the object captured in each of the plurality of captured images, in association with difference information between the reference model and the object model. | 03-03-2011 |
20110058732 | METHOD AND APPARATUS FOR STORING 3D INFORMATION WITH RASTER IMAGERY - The present invention meets the above-stated needs by providing a method and apparatus that allows X parallax information to be stored within an image's pixel information. Consequently, only one image need be stored, whether it's a mosaic of a number of images, a single image or a partial image, for proper reconstruction. To accomplish this, the present invention stores an X parallax value between the stereoscopic images with the typical pixel information by, e.g., increasing the pixel depth. | 03-10-2011 |
20110058733 | METHOD OF COMPILING THREE-DIMENSIONAL OBJECT IDENTIFYING IMAGE DATABASE, PROCESSING APPARATUS AND PROCESSING PROGRAM - Provided are a method of generating a low-capacity model capable of identifying an object with high accuracy, and creating an image database using the model, a processing program for executing the method, and a processing apparatus that executes the process. The method for compiling an image database that is used for three-dimensional object recognition includes a step of extracting vectors as local descriptors from a plurality of images, each image showing a three-dimensional object as seen from different viewpoints, a model creating step of evaluating the degree of contribution of each local descriptor to identification of the three-dimensional object, and creating a three-dimensional object model systematized to ensure approximate nearest neighbor search using the individual vectors which satisfy criteria, and a registration step of adding an object identifier to the created object model and registering the object model into an image database. In the model creating step, the local descriptor to be used in the model is selected based on the contributions of the individual vectors, which are evaluated in such a way that when a vector extracted from one image of one three-dimensional object is an approximate nearest neighbor to another vector relating to an image of the three-dimensional object seen from a different viewpoint, the vector has a positive contribution, whereas when the vector is an approximate nearest neighbor to another vector relating to a different three-dimensional object, the vector has a negative contribution. The processing program is designed to execute the method, and the processing apparatus executes the process. | 03-10-2011 |
20110064298 | APPARATUS FOR EVALUATING IMAGES FROM A MULTI CAMERA SYSTEM, MULTI CAMERA SYSTEM AND PROCESS FOR EVALUATING - An apparatus for evaluating images from a multi camera system is proposed, the multi camera system comprising a main camera for generating a main image and at least two satellite cameras for generating at least a first and a second satellite image. The cameras can be orientated to a common observation area. The apparatus is operable to estimate a combined positional data of a point in the 3D-space of the observation area corresponding to a pixel or group of pixels of interest of the main image. The apparatus comprises first disparity means for estimating at least a first disparity data concerning the pixel or group of pixels of interest derived from the main image and the first satellite image, second disparity means for estimating at least a second disparity data concerning the pixel or group of pixels of interest derived from the main image and the second satellite image, and positional data means for estimating the combined positional data of the point in the 3D-space of the observation area corresponding to the pixel or group of pixels of interest. The positional data means is operable to estimate first positional data on basis of the first disparity data and second positional data on basis of the second disparity data, and to combine the first positional data and the second positional data to the combined positional data. | 03-17-2011 |
20110064299 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing method includes: detecting a correspondence of each pixel between images acquired by imaging a subject from a plurality of viewpoints; calculating depth information of a non-occlusion pixel and creating a depth map including the depth information; regarding a region consisting of occlusion pixels as an occlusion region and determining an image reference region including the occlusion region and a peripheral region; dividing the image reference region into clusters on the basis of an amount of feature in the image reference region; calculating the depth information of the occlusion pixel in each cluster on the basis of the depth information in at least one cluster from the focused cluster, and clusters selected on the basis of the amount of feature of the focused cluster in the depth map; and adding the depth information of the occlusion pixel to the depth map. | 03-17-2011 |
20110064300 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - The present invention relates to an information processing device, an information processing method, and a program, capable of drawing, in 3D image display, an image for the left eye and an image for the right eye of graphics, in a matched state. | 03-17-2011 |
20110069879 | Apparatus and method to extract three-dimensional (3D) facial expression - Provided are a method and an apparatus for extracting a 3D facial expression of a user. When a facial image of the user is received, the 3D facial expression extracting method and apparatus may generate 3D expression information by tracking an expression of the user from the facial image using at least one of shape-based tracking and texture-based tracking, may generate a 3D expression model based on the 3D expression information, and reconstruct the 3D expression model to have a natural facial expression by adding muscle control points to the 3D expression model. | 03-24-2011 |
20110069880 | THREE-DIMENSIONAL PHOTOGRAPHIC SYSTEM AND A METHOD FOR CREATING AND PUBLISHING 3D DIGITAL IMAGES OF AN OBJECT - The present invention provides a three-dimensional photographic system, which is applicable to take pictures of an object from a great variety of angles, particularly for taking series of pictures, which are later combined together forming a three-dimensional digital image of the object ( | 03-24-2011 |
20110075916 | MODELING METHODS AND SYSTEMS - Methods and/or systems for modeling 3-dimensional objects (for example, human faces). In certain example embodiments, methods and/or systems usable for computer animation or static manipulation or modification of modeled images (e.g., faces), image processing, or for facial (or other object) recognition methods and/or systems. | 03-31-2011 |
20110081071 | METHOD AND APPARATUS FOR REDUCTION OF METAL ARTIFACTS IN CT IMAGES - A method and apparatus include acquisition of a view dataset based on x-rays received by a detector corresponding to an energy level, reconstruction of an initial image using the view dataset, the initial image comprising a plurality of metal voxels at respective metal voxel locations, and generation of a metal mask corresponding to the plurality of metal voxels within the initial image. The method and apparatus also include forward projection of the metal mask onto the view dataset to identify metal dexels in the view dataset, performance of a weighted interpolation based on the identified metal dexels to generate a completed view dataset, reconstruction of a final image using the completed view dataset, the final image comprising a plurality of image voxels corresponding to the metal voxel locations, and replacement of a portion of the plurality of image voxels corresponding to the metal voxel locations with smoothed metal values. | 04-07-2011 |
20110081072 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - [PROBLEM] Provided are an image processing device, an image processing method, and a program which are capable of high-density restoration and which are also robust in image processing. | 04-07-2011 |
20110085727 | SYSTEM AND METHOD FOR MARKING A STEREOSCOPIC FILM - A system and method for marking a stereoscopic film with colors are provided. The system and method provide for marking a left image with a mark and a right image with a mark having complementary colors, wherein upon viewing, the marks are not visible under certain conditions. The system and method provide for acquiring a stereoscopic image, the stereoscopic image including a first image and a second image, applying a first mark to the first image in a predetermined location, the first mark having a first color, and applying a second mark to the second image in substantially the same predetermined location as in the first image, the second mark having a second color that is different than the first color of the first mark, wherein when viewed in three-dimensional mode, the first mark and the second mark combine into a single mark of one color. | 04-14-2011 |
20110091095 | IMAGE RECONSTRUCTION METHOD - An image reconstruction method includes: fetching at least two images; calculating a relative displacement between adjacent images by utilizing a phase correlation algorithm; calculating an absolute displacement between any one of those images and the first image of those images; computing a common area of those images by utilizing the relative displacement and the absolute displacement, then deleting the remainder portions of the images excluding the common area; determining the rotation centers of those images; and reconstructing three-dimensional data of those images. In the present invention, the phase correlation algorithm can be utilized to process numerous noise signals so as to achieve higher precision in the image reconstruction. | 04-21-2011 |
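Phase correlation, used above to find the relative displacement between adjacent images, can be sketched in a few lines with FFTs. The version below assumes integer-pixel, purely translational (circular) shifts; sub-pixel refinement and the rotation handling of the entry are not shown.

```python
import numpy as np

def phase_correlation_shift(shifted, reference):
    """Estimate the integer (dy, dx) such that shifted == np.roll(reference, (dy, dx), axis=(0, 1))."""
    fs = np.fft.fft2(shifted)
    fr = np.fft.fft2(reference)
    cross_power = fs * np.conj(fr)
    cross_power /= np.abs(cross_power) + 1e-12        # keep only the phase
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks past the midpoint correspond to negative shifts (circular wrap-around)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# The second image is the first one translated by (5, -7) pixels
rng = np.random.default_rng(2)
img_a = rng.random((128, 128))
img_b = np.roll(img_a, shift=(5, -7), axis=(0, 1))
print(phase_correlation_shift(img_b, img_a))          # (5, -7)
```

Because only the phase of the cross-power spectrum is kept, the correlation peak stays sharp even when the images contain substantial noise, which is why the method is attractive for noisy reconstruction pipelines.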
20110091096 | Real-Time Stereo Image Matching System - A real-time stereo image matching system for stereo image matching of a pair of images captured by a pair of cameras ( | 04-21-2011 |
20110096982 | METHOD AND APPARATUS FOR GENERATING PROJECTING PATTERN - A pattern generating apparatus includes a sequence generating unit and an image data generating unit. The sequence generating unit generates a sequence formed by terms having M-value numeric values. The image data generating unit generates the image data by converting each numeric value of the sequence into a gray-level value according to each numeric value, and the sequence is generated by the sequence generating unit. The sequence generating unit generates the sequence such that vectors expressed by sub-sequences have different directions for the sub-sequence constituting the generated sequence. | 04-28-2011 |
20110103680 | METHOD AND APPARATUS FOR PROCESSING THREE-DIMENSIONAL IMAGES - A three-dimensional sense adjusting unit displays three-dimensional images to a user. If a displayed image reaches a limit of parallax, the user responds to the three-dimensional sense adjusting unit. According to acquired appropriate parallax information, a parallax control unit generates parallax images to realize the appropriate parallax in the subsequent stereo display. The control of parallaxes is realized by optimally setting camera parameters by going back to three-dimensional data. Functions to realize the appropriate parallax are made into a library and presented. | 05-05-2011 |
20110103681 | 3D atomic scale imaging methods - The present invention is directed generally toward atom probe and TEM data and associated systems and methods. Other aspects of the invention are directed toward combining APT data and TEM data into a unified data set. Other aspects of the invention are directed toward using the data from one instrument to improve the quality of data obtained from another instrument. | 05-05-2011 |
20110110579 | SYSTEMS AND METHODS FOR PHOTOGRAMMETRICALLY FORMING A 3-D RECREATION OF A SURFACE OF A MOVING OBJECT USING PHOTOGRAPHS CAPTURED OVER A PERIOD OF TIME - A method for creating a 3-D data set of a surface of a moving object includes rigidly coupling a reference frame with targets to the object such that a change in position or orientation of the object causes a corresponding change in the reference frame. A first photograph is captured of at least a portion of the object and at least some of the plurality of targets at a first camera location. A second photograph is captured of at least a portion of the object and at least some of the plurality of targets at a second camera position. The object moves between the capturing of the first photograph and the capturing of the second photograph. The captured photographs are input to a computing device that is configured and arranged to determine 3-D data points corresponding to the surface of the object captured in the photographs. | 05-12-2011 |
20110110580 | GEOSPATIAL MODELING SYSTEM FOR CLASSIFYING BUILDING AND VEGETATION IN A DSM AND RELATED METHODS - A geospatial modeling system may include a geospatial model database configured to store a digital surface model (DSM) of a geographical area, and to store image data of the geographical area. The image data may have a spectral range indicative of a difference between buildings and vegetation. The geospatial modeling system may also include a processor cooperating with the geospatial model database to separate bare earth data from remaining building and vegetation data in the DSM to define a building and vegetation DSM. The processor may also register the image data with the building and vegetation DSM, and classify each point of the building and vegetation DSM as either building or vegetation based upon the spectral range of the image data. | 05-12-2011 |
20110110581 | 3D OBJECT RECOGNITION SYSTEM AND METHOD - Disclosed herein is a three-dimensional (3D) object recognition system and method. The 3D object recognition system includes a storage unit for storing an extended randomized forest in which a plurality of randomized trees is included and each of the randomized trees includes a plurality of leaf nodes, training means for extracting a plurality of keypoints from a training target object image, and calculating and storing an object recognition posterior probability distribution and training target object-based keypoint matching posterior probability distributions, and matching means for extracting a plurality of keypoints from a matching target object image, matching the extracted keypoints to a plurality of leaf nodes, recognizing an object using the object recognition posterior probability distributions, and matching the keypoints to keypoints of the recognized object using training target object-based keypoint matching posterior probability distributions stored at the matched leaf nodes. | 05-12-2011 |
20110110582 | METHOD AND SYSTEM FOR DETERMINING THE POSITION OF A FLUID DISCHARGE IN AN UNDERWATER ENVIRONMENT - The present invention relates to a method for determining the position of a fluid discharge in an underwater environment comprising the phases which consist in collecting ( | 05-12-2011 |
20110110583 | SYSTEM AND METHOD FOR DEPTH EXTRACTION OF IMAGES WITH MOTION COMPENSATION - A system and method for spatiotemporal depth extraction of images are provided. The system and method provide for acquiring a sequence of images from a scene, the sequence including a plurality of successive frames of images, estimating the disparity of at least one point in a first image with at least one corresponding point in a second image for at least one frame, estimating motion of the at least one point in the first image, estimating the disparity of the at least one next successive frame based on the estimated disparity of at least one previous frame in a forward direction of the sequence, wherein the estimated disparity is compensated with the estimated motion, and minimizing the estimated disparity of each of the plurality of successive frames based on the estimated disparity of at least one previous frame in a backward direction of the sequence. | 05-12-2011 |
20110116706 | METHOD, COMPUTER-READABLE MEDIUM AND APPARATUS ESTIMATING DISPARITY OF THREE VIEW IMAGES - Provided are a method, a computer-readable medium, and an apparatus that may estimate a disparity of three view images. A global matching may be performed to calculate a global path by performing a dynamic programming on the three view images, and a local matching for supplementing an occlusion region of the calculated global path may be performed, and thereby a disparity estimation of the three view images may be performed. | 05-19-2011 |
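The dynamic-programming global matching mentioned above can be illustrated, for the simpler two-view case, by a single-scanline DP with an absolute-difference data term and a linear smoothness penalty. This generic sketch is not the three-view method of the application; all names and parameters are illustrative.

```python
import numpy as np

def scanline_dp_disparity(left_row, right_row, max_disp=16, smooth=0.1):
    """Disparity along one scanline by dynamic programming: an absolute-difference
    data term plus a linear smoothness penalty between neighbouring pixels."""
    left_row = np.asarray(left_row, dtype=np.float64)
    right_row = np.asarray(right_row, dtype=np.float64)
    w = len(left_row)
    d_range = np.arange(max_disp + 1)
    cost = np.full((w, max_disp + 1), np.inf)
    for d in d_range:
        cost[d:, d] = np.abs(left_row[d:] - right_row[:w - d])
    # forward pass: accumulate the data cost plus the cheapest transition
    acc = cost.copy()
    back = np.zeros((w, max_disp + 1), dtype=int)
    penalty = smooth * np.abs(d_range[:, None] - d_range[None, :])
    for x in range(1, w):
        trans = acc[x - 1][None, :] + penalty
        back[x] = np.argmin(trans, axis=1)
        acc[x] += trans[d_range, back[x]]
    # backward pass: trace the minimum-cost path
    disp = np.zeros(w, dtype=int)
    disp[-1] = int(np.argmin(acc[-1]))
    for x in range(w - 2, -1, -1):
        disp[x] = back[x + 1][disp[x + 1]]
    return disp

# The right row views the same scene shifted by a constant disparity of 5
rng = np.random.default_rng(3)
left = rng.random(200)
right = np.empty_like(left)
right[:-5] = left[5:]
right[-5:] = left[-5:]
print(int(np.median(scanline_dp_disparity(left, right))))   # 5
```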
20110116707 | METHOD FOR GROUPING 3D MODELS TO CLASSIFY CONSTITUTION - Provided is a three-dimensional model classification method of classifying constitutions. The method includes correcting color values of a frontal image and one or more profile images to allow a color value of a reference color table in the images to equal a predetermined reference color value, through obtaining the frontal image and one or more profile images of a subject including the reference color table by a camera, the reference color table including one or more sub color regions, generating a three-dimensional geometric model of the subject by extracting feature point information from the frontal image and the profile image, matching the corresponding feature point information to extract spatial depth information, after removing the reference color table region from the frontal image and the profile image, and classifying a group of the three-dimensional geometric model of the subject by selecting a reference three-dimensional geometric model having a smallest sum of spatial displacements from the three-dimensional geometric model of the subject from a plurality of reference three-dimensional geometric models stored in the database and setting the group which the selected reference three-dimensional geometric model represents as the group where the three-dimensional geometric model of the subject belongs. | 05-19-2011 |
20110123095 | Sparse Volume Segmentation for 3D Scans - A computer readable medium is provided embodying instructions executable by a processor to perform a method for sparse volume segmentation for a 3D scan of a target. The method includes learning prior knowledge, providing volume data comprising the target, selecting a plurality of key contours of the image of the target, building a 3D sparse model of the image of the target given the plurality of key contours, segmenting the image of the target given the 3D sparse model, and outputting a segmentation of the image of the target. | 05-26-2011 |
20110123096 | THREE-DIMENSIONAL IMAGE ANALYSIS SYSTEM, PROCESS DEVICE, AND METHOD THEREOF - A three-dimensional image analysis system, a process device for use in the three-dimensional image analysis system, and a method thereof are provided. The three-dimensional image analysis system is configured to generate a plurality of three-dimensional data of a three-dimensional image. The process device defines a plurality of horizontal scan lines and a plurality of vertical scan lines according to the three dimensional data, determines a preliminary edge information of the three-dimensional image according to the horizontal scan lines and the vertical scan lines, divides the three dimensional data into a plurality of groups, compares the groups to determine a plane information of the three-dimensional image, and determines an edge information of the three-dimensional image according to the preliminary edge information and the plane information. The method is adapted for the process device. | 05-26-2011 |
20110123097 | Method and computer program for improving the dimensional acquisition of an object - The present invention relates to a method for improving the efficiency of dimensional acquisition of an object by a dimensional measurement device directed over the object, comprising the steps: a) directing the measurement device over the object to acquire its dimensions, b) providing an indication of the resolution of the acquired regions, c) re-directing the measurement device over at least part of the acquired regions indicating insufficient resolution according to predetermined criteria, d) updating the indication of the resolution of the acquired regions, and e) repeating steps c) and d) until sufficient resolution is indicated according to the predetermined criteria, thereby efficiently acquiring the dimensions of the object at sufficient resolution. It also relates to a computer program therefor. | 05-26-2011 |
20110123098 | System and a Method for Three-dimensional Modeling of a Three-dimensional Scene Features with a Cooling System - A method and a system for three-dimensional modeling of three-dimensional scene features are described. | 05-26-2011 |
20110123099 | SENSING DEVICE AND METHOD OF DETECTING A THREE-DIMENSIONAL SPATIAL SHAPE OF A BODY - A method for identifying a best fitting shoe includes the steps of scanning a foot using a photogrammetric 3D foot scanner for obtaining a digital 3D model of the foot, and providing a database in which 3D models of shapes of the interiors of available shoes are stored. The 3D model of the digitized foot of the customer is compared with the 3D models of available shoes stored in the database and a shoe of which the 3D model of internal shape is the most similar to the 3D model of the customer foot is selected. The steps of comparing and selecting are performed using a computing unit. A sensing device for detecting a three-dimensional spatial shape of a body includes a sensing end and a camera. A method of detecting a three-dimensional interior spatial shape includes providing the sensing device and scanning the spatial shape. | 05-26-2011 |
20110129143 | METHOD AND APPARATUS AND COMPUTER PROGRAM FOR GENERATING A 3 DIMENSIONAL IMAGE FROM A 2 DIMENSIONAL IMAGE - A method of generating a three dimensional image from a two dimensional image is described. In the method, the two dimensional image has a background and a first foreground object and a second foreground object located thereon, the method comprising the steps of: applying a transformation to a copy of the background, generating stereoscopically for display the background and the transformed background, generating stereoscopically for display the first and second foreground objects located on the stereoscopically displayable background and the transformed background and determining whether the first and second foreground objects occlude with one another, wherein in the event of occlusion, the occluded combination of the first and second objects forms a third foreground object, and the method further comprises the step of: applying a transformation to the third foreground object, wherein the transformation applied to the third foreground object is less than or equal to the transformation applied to the background; generating a copy of the third foreground object with the transformation applied thereto and generating stereoscopically for display the third foreground object with the transform applied thereto and the copy of the third foreground object displaced relative to one another by an amount determined in accordance with the position of one of the first or second foreground objects in the image. | 06-02-2011 |
20110129144 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND PROGRAM - An image processing apparatus includes: a depth information extraction means for extracting depth information from an input 3D image; a luminance extraction means for extracting luminance components of the 3D image; a contrast extraction means for extracting contrast components of the 3D image based on the luminance components; a storage means for storing a performance function indicating relation between the contrast components and depth amounts subjectively perceived, which is determined based on visual sense characteristics of human beings; and a contrast adjustment means for calculating present depth amounts of the inputted 3D image from the contrast components based on the performance function with respect to at least one of a near side region and a deep side region of the inputted 3D image which are determined from the depth information and adjusting contrast components of the inputted 3D image based on the calculated present depth amounts and a set depth adjustment amount. | 06-02-2011 |
20110135190 | OBJECT POSITIONING WITH VISUAL FEEDBACK - A positioning system comprises a pattern projector ( | 06-09-2011 |
20110142328 | Method for using image depth information - In a first exemplary embodiment of the present invention, an automated, computerized method is provided for determining illumination information in an image. According to a feature of the present invention, the method comprises the steps of identifying depth information in the image, identifying spatio-spectral information for the image, as a function of the depth information and utilizing the spatio-spectral information to identify illumination flux in the image. | 06-16-2011 |
20110142329 | METHOD AND DEVICE FOR CONVERTING IMAGE - A method and a device for converting an image are disclosed. According to an embodiment of the present invention, the method for converting a two-dimensional image to a three-dimensional image by an image conversion device can include: receiving and setting overall depth information for an original image; classifying the original image into partial objects and setting three-dimensional information for each of the partial objects; generating a first image by moving the original image by use of the three-dimensional information; receiving and setting a zero point for the original image; generating a second image by moving the original image by use of the zero point; and generating a three-dimensional image by combining the first image and the second image. Accordingly, a still image can be converted to a three-dimensional image. | 06-16-2011 |
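Generating a shifted view from a single 2-D image and per-pixel depth, as in the conversion entry above, is often illustrated with a simple depth-proportional horizontal warp. The sketch below shows only that idea, not the claimed two-image combination with a zero point; the helper name is hypothetical and depth is assumed to be normalized to [0, 1] with 0 meaning near.

```python
import numpy as np

def shift_by_depth(image, depth, max_shift=8):
    """Synthesize one view by shifting each pixel horizontally in proportion to
    its nearness (forward warp), with crude hole filling from the left neighbour."""
    h, w = image.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    shift = np.round(max_shift * (1.0 - depth)).astype(int)   # near pixels shift most
    for y in range(h):
        for x in range(w):
            nx = x + shift[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                filled[y, nx] = True
        for x in range(1, w):                 # fill disocclusion holes
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out

# A left/right pair synthesized from a single frame and its depth map
rng = np.random.default_rng(4)
frame = rng.random((48, 64))
depth = np.tile(np.linspace(0.0, 1.0, 64), (48, 1))   # near on the left, far on the right
left_view = shift_by_depth(frame, depth, max_shift=4)
right_view = shift_by_depth(frame, depth, max_shift=-4)
print(left_view.shape, right_view.shape)
```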
20110150320 | Method and System for Localizing in Urban Environments From Omni-Direction Skyline Images - A location and orientation in an environment is determined by acquiring a set of one or more real omni-directional images of an unknown skyline in the environment from an unknown location and an unknown orientation in the environment by an omni-directional camera. A set of virtual omni-directional images is synthesized from a 3D model of the environment, wherein each virtual omni-directional image is associated with a known skyline, a known location and a known orientation. Each real omni-directional image is compared with the set of virtual omni-directional images to determine a best matching virtual omni-directional image with the associated known location and known orientation that correspond to the unknown location and orientation. | 06-23-2011 |
20110150321 | METHOD AND APPARATUS FOR EDITING DEPTH IMAGE - Provided is a method of editing a depth image, comprising: receiving a selection on a depth image frame to be edited and a color image corresponding to the depth image frame; receiving a selection on an interest object in the color image; extracting boundary information of the interest object; and correcting a depth value of the depth image frame using the boundary information of the interest object. | 06-23-2011 |
20110150322 | THREE-DIMENSIONAL MULTILAYER SKIN TEXTURE RECOGNITION SYSTEM AND METHOD - A three-dimensional multilayer skin texture recognition system and method based on hyperspectral imaging. A three-dimensional facial model associated with an object may be acquired from a three-dimensional image capturing device. A face reconstruction approach may be implemented to reconstruct and rewarp the three-dimensional facial model to a frontal face image. A hyperspectral imager may be employed to extract a micro structure skin signature associated with the skin surface. The micro structure skin signature may be characterized utilizing a weighted subtraction of reflectance at different wavelengths that captures different layers under the skin surface via a multilayer skin texture recognition module. The volumetric skin data associated with the face skin can be classified via a volumetric pattern. | 06-23-2011 |
20110158503 | Reversible Three-Dimensional Image Segmentation - Aspects of the subject matter described herein relate to reversible image segmentation. In aspects, candidate pairs for merging three dimensional objects are determined. The cost of merging candidate pairs is computed using a cost function. A candidate pair that has the minimum cost is selected for merging. This may be repeated until all objects have been merged, until a selected number of merging has occurred, or until some other criterion is met. In conjunction with merging objects, data is maintained that allows the merging to be reversed. | 06-30-2011 |
20110158504 | APPARATUS AND METHOD FOR INDICATING DEPTH OF ONE OR MORE PIXELS OF A STEREOSCOPIC 3-D IMAGE COMPRISED FROM A PLURALITY OF 2-D LAYERS - Implementations of the present invention involve methods and systems for converting a 2-D image to a stereoscopic 3-D image and displaying the depth of one or more pixels of the 3-D image through an output image of a user interface. The pixels of the output image display the perceived depth of the corresponding 3-D image such that the user may determine the relative depth of the pixels of the image. In addition, one or more x-offset values or z-axis positions may be individually selected such that any pixel of the output image that corresponds to the selected values is indicated in the output image. By providing the user with a visualization tool to quickly determine the perceived position of any pixel of a stereoscopic image, the user may confirm the proper alignment of the objects of the image in relation to the image as a whole. | 06-30-2011 |
20110158505 | STEREO PRESENTATION METHOD OF DISPLAYING IMAGES AND SPATIAL STRUCTURE - An image stereo presentation method, comprising steps of: establishing at least one three-dimensional (3D) model corresponding to a physical stereo object; projecting a planar image onto the 3D model at a specific angle, wherein the 3D model has an interface adjoining at least two surfaces in a range of projecting the planar image, and the said two surfaces are not on the same plane; extracting at least two sub-images from the said two surfaces onto which the planar image is projected; and forming the said two sub-images on two physical surfaces of the physical stereo object corresponding to the said two surfaces. The image stereo presentation method is capable of presenting a special visual effect. | 06-30-2011 |
20110158506 | METHOD AND APPARATUS FOR GENERATING 3D IMAGE DATA - A method and apparatus for generating three-dimensional (3D) image data by using 2D image data including a dummy component and an image component relating to an input image, wherein the dummy component is used to adjust a resolution of the input image, are provided. The method includes: generating a depth map that corresponds to the 2D image data; detecting a dummy area including the dummy component from the 2D image data; and correcting depth values of pixels that correspond to the dummy area in the depth map. | 06-30-2011 |
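Correcting depth values in a dummy area, for example the black letterbox bars added to fit a target resolution, can be approximated by detecting fully black rows or columns and forcing their depth to the background. The sketch below is a minimal illustration under that assumption and is not the claimed detection method.

```python
import numpy as np

def correct_dummy_depth(image, depth_map, background_depth=255.0):
    """Detect letterbox-style dummy rows/columns (uniform black padding) and push
    their depth to the background so the padding is not rendered as a floating object."""
    dummy_rows = np.all(image < 1e-6, axis=1)     # completely black rows
    dummy_cols = np.all(image < 1e-6, axis=0)     # completely black columns
    corrected = depth_map.copy()
    corrected[dummy_rows, :] = background_depth
    corrected[:, dummy_cols] = background_depth
    return corrected

# A 4:3 frame padded to 16:9 with black bars at the top and bottom
frame = np.zeros((90, 160))
frame[11:79, :] = 0.5                             # image content
depth = np.full((90, 160), 100.0)
fixed = correct_dummy_depth(frame, depth)
print(fixed[0, 0], fixed[45, 80])                 # 255.0 (dummy bar) vs 100.0 (content)
```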
20110158507 | METHOD FOR VISION FIELD COMPUTING - A method for vision field computing may comprise the following steps of: forming a sampling system for a multi-view dynamic scene; controlling cameras in the sampling system for the multi-view dynamic scene to perform spatial interleaved sampling, temporal interleaved exposure sampling and exposure-variant sampling; performing spatial intersection to the sampling information in the view subspace of the dynamic scene and temporal intersection to the sampling information in the time subspace of the dynamic scene to reconstruct a dynamic scene geometry model; performing silhouette back projection based on the dynamic scene geometry model to obtain silhouette motion constraints for the view angles of the cameras; performing temporal decoupling for motion de-blurring with the silhouette motion constraints; and reconstructing a dynamic scene 3D model with a resolution larger than nominal resolution of each camera by a 3D reconstructing algorithm. | 06-30-2011 |
20110158508 | DEPTH-VARYING LIGHT FIELDS FOR THREE DIMENSIONAL SENSING - A method for mapping includes projecting onto an object a pattern of multiple spots having respective positions and shapes, such that the positions of the spots in the pattern are uncorrelated, while the shapes share a common characteristic. An image of the spots on the object is captured and processed so as to derive a three-dimensional (3D) map of the object. | 06-30-2011 |
20110158509 | IMAGE STITCHING METHOD AND APPARATUS - The present invention relates to an image processing technology, and discloses an image stitching method and apparatus to solve the problem of severe ghosting of an image stitched in the prior art. In the embodiments of the present invention, the overlap region of two images is found, a depth image of the overlap region is obtained, and the two images are stitched together according to the depth image. In the stitching process, the 3-dimensional information of the images is obtained by using the depth image to deghost the image. The method and apparatus under the present invention are applicable to multi-scene videoconferences and the occasions of making wide-view images or videos. | 06-30-2011 |
20110164810 | IMAGE SIGNATURES FOR USE IN MOTION-BASED THREE-DIMENSIONAL RECONSTRUCTION - A family of one-dimensional image signatures is obtained to represent each one of a sequence of images in a number of translational and rotational orientations. By calculating these image signatures as images are captured, a new current view can be quickly compared to historical views in a manner that is less dependent on the relative orientation of a target and search image. These and other techniques may be employed in a three-dimensional reconstruction process to generate a list of candidate images from among which full three-dimensional registration may be performed to test for an adequate three-dimensional match. In another aspect this approach may be supplemented with a Fourier-based approach that is selectively applied to a subset of the historical images. By alternating between spatial signatures for one set of historical views and spatial frequency signatures for another set of historical views, a pattern matching system may be implemented that more rapidly reattaches to a three-dimensional model in a variety of practical applications. | 07-07-2011 |
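One-dimensional image signatures of the kind described above can be as simple as mean row and column projections compared by normalized correlation. The sketch below uses that simplification, ignoring the translational and rotational variants of the entry, to shortlist candidate historical views before any full 3D registration; all names and data are illustrative.

```python
import numpy as np

def signature_1d(image):
    """Row and column intensity projections, mean-centred and normalized."""
    sig = np.concatenate([image.mean(axis=0), image.mean(axis=1)])
    sig -= sig.mean()
    return sig / (np.linalg.norm(sig) + 1e-12)

def best_candidates(query, library, top_k=3):
    """Rank stored views by signature similarity before attempting full registration."""
    q = signature_1d(query)
    scores = [float(q @ signature_1d(img)) for img in library]
    order = np.argsort(scores)[::-1][:top_k]
    return order, [scores[i] for i in order]

rng = np.random.default_rng(5)
library = [rng.random((64, 64)) for _ in range(10)]
query = library[7] + rng.normal(0, 0.05, (64, 64))   # noisy copy of view 7
idx, scores = best_candidates(query, library)
print(idx[0])                                        # 7
```

Only the shortlisted views would then go through the expensive three-dimensional matching step.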
20110164811 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND IMAGE PROCESSING PROGRAM - An image processing device calculates, from a registration image representing a photographed object and three-dimensional shape data in which respective points of a three-dimensional shape of the object are correlated with pixels of the registration image, by assuming uniform albedo, a shadow base vector group having components from which an image under an arbitrary illumination condition can be generated through linear combination. A shadow in the registration image is estimated using the vector group. A perfect diffuse component image including the shadow is generated, and based on the image a highlight removal image is generated in which a specular reflection component is removed from the registration image. Thus, an image recognition system generates illumination base vectors from the highlight removal image and thereby can obtain the illumination base vectors based on which an accurate image recognition process can be carried out without influence of a specular reflection. | 07-07-2011 |
20110170767 | THREE-DIMENSIONAL (3D) IMAGING METHOD - A method for constructing a digital image ( | 07-14-2011 |
20110176720 | Digital Image Transitions - Among other things, methods, systems and computer program products are disclosed for displaying a sequence of multiple images to provide an appearance of a three-dimensional (3D) effect. A data processing device or system can identify multiple images to be displayed. The data processing device or system can divide a two-dimensional (2D) display area into multiple display portions. The data processing device or system can display a sequence of the identified images on the display portions so as to provide an appearance of a three-dimensional (3D) effect. | 07-21-2011 |
20110176721 | Method and apparatus for composition coating for enhancing white light scanning of an object - The invention is directed to a method and apparatus for pretreating an object to be white light scanned, to enable accurate and consistent scanning. In those instances where the object part has a reflective or refractive surface or is made from a material having translucent or transparent properties, the object must be pretreated to ensure accurate data collection during the scanning process. The object is coated with a composition forming a thin and uniform film of non-destructive material coating to enhance the surface contrast characteristics for the mono-chromatic fringe pattern employed in the white light scanning process. | 07-21-2011 |
20110176722 | SYSTEM AND METHOD OF PROCESSING STEREO IMAGES - The present invention is a system and a method for processing stereo images utilizing a real-time, robust, and accurate stereo matching approach based on a coarse-to-fine architecture. At each image pyramid level, non-centered windows for matching and adaptive upsampling of coarse-level disparities are performed to generate estimated disparity maps using the ACTF approach. In order to minimize propagation of disparity errors from coarser to finer levels, the present invention performs an iterative optimization, at each level, that minimizes a cost function to generate smooth disparity maps with crisp occlusion boundaries. | 07-21-2011 |
20110176723 | Motion Correction in Cone-Beam CT by Tracking Internal and External Markers Using Cone-Beam Projection From a kV On-Board Imager: Four-Dimensional Cone-Beam CT and Tumor Tracking Implications - An apparatus comprising a processor configured to receive a sequence of Cone-Beam Computed Tomography (CBCT) projections of a three dimensional (3D) object over a scanning period, wherein the 3D object is displaced during the scanning period, and wherein each of the CBCT projections is associated with a discrete point during the scanning period, locate a marker position in a plurality of the CBCT projections, wherein each marker position corresponds to the location of an internal marker at the corresponding discrete point during the scanning period, extract a 3D motion trajectory based on the plurality of marker positions and a plurality of time-tagged angular views, and correct the CBCT projections based on the 3D motion trajectory. | 07-21-2011 |
20110182497 | CASCADE STRUCTURE FOR CLASSIFYING OBJECTS IN AN IMAGE - A cascade object classification structure for classifying one or more objects in an image is provided. The cascade object classification structure includes a plurality of nodes arranged in one or more layers. Each layer includes at least one parent node and each subsequent layer includes at least two child nodes. A parent node in a layer is operatively linked to two child nodes in a subsequent layer. Further, at least one child node in one of the subsequent layers is operatively linked to two or more parent nodes in a preceding layer. Each node includes classifiers for classifying the objects as a positive object and a negative object. The positive object and the negative object classified by the parent node in each layer are further classified by one or more operatively linked child nodes in the subsequent layer. | 07-28-2011 |
20110182498 | Image Processing Apparatus, Image Processing Method, and Program - An image processing apparatus includes a viewing situation analyzing unit configured to obtain information representing a user's viewing situation of 3D content stored in a certain storage unit, and, based on a preset saving reference in accordance with a viewing situation of 3D content, determine a data reduction level of content data of the 3D content stored in the storage unit; and a data conversion unit configured to perform data compression of the content data of the 3D content stored in the storage unit in accordance with the determined data reduction level. | 07-28-2011 |
20110182499 | METHOD FOR DETERMINING THE SURFACE COVERAGE OBTAINED BY SHOT PEENING - In a method for determining the surface coverage obtained by shot peening to ensure uniform and complete strengthening of the surface of components, in particular blisk blades, a shot-peened surface topography is digitalized by an optical digital recording unit. A three-dimensional height profile is then prepared by measuring and evaluation software which includes both indentations and excrescences due to shot peening and also roughnesses due to manufacturing, which are smaller than the excrescences and indentations. The roughnesses are subsequently filtered out from the height image by a software filter using mathematical methods. A height diagram with the indentations situated below a zero line is established, with the size of these indentations being calculated in relation to the total area in the height diagram and the extent of coverage of the entire shot-peened surface being determined therefrom. | 07-28-2011 |
20110188736 | Reduced-Complexity Disparity MAP Estimation - Image processing herein reduces the computational complexity required to estimate a disparity map of a scene from a plurality of monoscopic images. Image processing includes calculating a disparity and associated matching cost for at least one pixel block in a reference image, and then predicting, based on this disparity and associated matching cost, a disparity and associated matching cost for a pixel block that neighbors the at least one pixel block. Image processing continues with calculating a tentative disparity and associated matching cost for the neighboring pixel block, by searching for a corresponding pixel block in a different monoscopic image over a reduced range of candidate pixel blocks focused around the disparity predicted. Searching over a reduced range avoids significant computational complexity. Image processing concludes with determining the disparity for the neighboring pixel block based on comparing the matching costs associated with the tentative disparity and the disparity predicted. | 08-04-2011 |
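As a rough illustration of the neighbour-predicted, reduced-range search described in the entry above, the following Python sketch performs block-based SAD matching; the block size, search radius, and the rule for keeping the predicted disparity over the tentative one are illustrative assumptions, not the patented procedure.

```python
import numpy as np

def block_cost(left, right, y, x, d, b):
    """Sum of absolute differences between the left block at (y, x) and the right block shifted by d."""
    pl = left[y:y + b, x:x + b].astype(int)
    pr = right[y:y + b, x - d:x - d + b].astype(int)
    return int(np.abs(pl - pr).sum())

def full_search(left, right, y, x, b, d_max):
    """Exhaustive search used for the reference block; returns (disparity, matching cost)."""
    cost, d = min((block_cost(left, right, y, x, d, b), d) for d in range(0, min(d_max, x) + 1))
    return d, cost

def reduced_search(left, right, y, x, b, d_pred, cost_pred, radius=2):
    """Search only a small range around the disparity predicted from a neighbouring block."""
    lo, hi = max(0, d_pred - radius), min(x, d_pred + radius)
    cost_t, d_t = min((block_cost(left, right, y, x, d, b), d) for d in range(lo, hi + 1))
    # Keep the tentative disparity only if its cost beats the cost of the predicted disparity.
    return (d_t, cost_t) if cost_t <= cost_pred else (d_pred, cost_pred)
```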
20110188737 | SYSTEM AND METHOD FOR OBJECT RECOGNITION BASED ON THREE-DIMENSIONAL ADAPTIVE FEATURE DETECTORS - Method and system for imaging an object in three-dimensions, binning data of the imaged object into three dimensional bins, determining a density value p of the data in each bin, and creating receptive fields of three dimensional feature maps, including processing elements O, each processing element O of a same feature map having a same adjustable parameter, weight Wc | 08-04-2011 |
20110188738 | FACE EXPRESSIONS IDENTIFICATION - In the last few years, face expression measurement has been receiving significant attention mainly due to advancements in areas such as face detection, face tracking and face recognition. For face recognition systems, detecting the locations in two-dimensional (2D) images where faces are present is a first step to be performed before face expressions can be measured. However, face detection from a 2D image is a challenging task because of variability in imaging conditions, image orientation, pose, presence/absence of facial artefacts, facial expression and occlusion. Existing efforts to address the shortcomings of existing face recognition systems deal with technologies for creation of three-dimensional (3D) models of a human subject's face based on a digital photograph of the human subject. However, such technologies are computationally intensive in nature and susceptible to errors, and hence might not be suitable for deployment. An embodiment of the invention describes a method for identifying face expressions of image objects. | 08-04-2011 |
20110188739 | Image processing apparatus and method - An image processing apparatus that configures a single frame by determining a central image of a certain viewpoint at an original resolution, and configures another single frame by combining a left image of a left viewpoint and a right image of a right viewpoint. The image processing apparatus may generate three-dimensional (3D) image data configured using the frames, and may encode, decode, and render an image based on the 3D image data. | 08-04-2011 |
20110188740 | DEVICE FOR IMPROVING STEREO MATCHING RESULTS, METHOD OF IMPROVING STEREO MATCHING RESULTS USING THE DEVICE, AND SYSTEM FOR RECEIVING STEREO MATCHING RESULTS - Provided is a device for improving stereo matching results. The device for improving stereo matching results includes: a stereo camera unit outputting binocular disparity images by using binocular disparity between two images preprocessed according to a plurality of preprocessing conditions; a discrete cosine transform (DCT) unit generating DCT coefficients by performing DCT on the binocular disparity images; a streak estimation unit receiving the DCT coefficients and estimating amounts of streaks distributed on a screen by using AC coefficients, including streak patterns, of the DCT coefficients; a condition estimation unit estimating a preprocessing condition, corresponding to the smallest amount of streaks of the estimated amounts of streaks, of the plurality of preprocessing conditions, as an optimal condition, and a streak removal unit generating binocular disparity images without the streaks by changing predetermined AC coefficients of the DCT coefficients and performing inverse DCT on the changed DCT coefficients. | 08-04-2011 |
20110188741 | SYSTEM AND METHOD FOR DIMENSIONING OBJECTS USING STEREOSCOPIC IMAGING - A method and configuration to estimate the dimensions of a cuboid. The configuration includes two image acquisition units offset from each other with at least one of the units positioned at a defined acquisition height above a background surface. Image processing techniques are used to extract a perimeter of a top surface of the cuboid, placed on the background surface, from pairs of acquired images. A height estimation technique, which corrects for spatial drift of the configuration, is used to calculate an absolute height of the cuboid. The absolute height of the cuboid is used, along with the extracted perimeter of the top surface of the cuboid, to calculate an absolute length and an absolute width of the cuboid. The height, length, and width may be used to calculate an estimated volume of the cuboid. | 08-04-2011 |
20110194756 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processor includes a main image statistical information generator that detects a parallax of each predetermined unit of a 3D main image from main image data and generates parallax statistical information, a sub-image statistical information generator unit that detects a parallax of each predetermined unit of a 3D sub-image from sub-image data and generates parallax statistical information, a parallax controller that computes, using the statistical information, a correction amount used for correcting at least one of the main image and sub-image parallaxes so that a positional distance between the main image and the sub-image in a depth direction is within a predetermined range, a converter that converts at least one of the main image data and sub-image data so that at least one of the parallaxes of the images is corrected by the correction amount, and a superimposing unit that superimposes the sub-image data on the main image data. | 08-11-2011 |
20110200248 | Method and system for aligning three-dimensional surfaces - A method for associating a three-dimensional surface representing a real object and a three-dimensional reference surface, said reference surface being represented by a set of reference points, the method comprising: obtaining a set of real points representing the real surface, determining the normal vector of each point of said obtained set of real points, selecting, among the set of real points, control points according to the determined normal vector by converting the set of real points to a bi-dimensional space of normal vectors, generating sets of points having similar normal vector among the points of the set of real points and selecting, for each set of points with similar normal vector, one point that is a control point of the real surface, determining correspondence points close to the set of reference points that are determined to correspond to the control points of the real surface, and determining the motion that minimizes the distances between the control points of the real surface and the correspondence points. | 08-18-2011 |
20110200249 | SURFACE DETECTION IN IMAGES BASED ON SPATIAL DATA - A system and method are provided for detecting surfaces in image data based on spatial data. The method includes obtaining an empirical probability density function (PDF) for the spatial data, where the spatial data includes a plurality of three-dimensional (3D) points. | 08-18-2011 |
20110206273 | Intelligent Part Identification for Use with Scene Characterization or Motion Capture - A variety of methods, systems, devices and arrangements are implemented for use with motion capture. One such method is implemented for identifying salient points from three-dimensional image data. The method involves the execution of instructions on a computer system to generate a three-dimensional surface mesh from the three-dimensional image data. Lengths of possible paths from a plurality of points on the three-dimensional surface mesh to a common reference point are categorized. The categorized lengths of possible paths are used to identify a subset of the plurality of points as salient points. | 08-25-2011 |
20110206274 | POSITION AND ORIENTATION ESTIMATION APPARATUS AND POSITION AND ORIENTATION ESTIMATION METHOD - A position and orientation estimation apparatus inputs an image capturing an object, inputs a distance image including three-dimensional coordinate data representing the object, extracts an image feature from the captured image, determines whether the image feature represents a shape of the object based on three-dimensional coordinate data at a position on the distance image corresponding to the image feature, correlates the image feature representing the shape of the object with a part of a three-dimensional model representing the shape of the object, and estimates the position and orientation of the object based on a correlation result. | 08-25-2011 |
20110211749 | System And Method For Processing Video Using Depth Sensor Information - A method for processing video using depth sensor information, comprising the steps of: dividing the image area into a number of bins roughly equal to the depth sensor resolution, with each bin corresponding to a number of adjacent image pixels; adding each depth measurement to the bin representing the portion of the image area to which the depth measurement corresponds; averaging the value of the depth measurement for each bin to determine a single average value for each bin; and applying a threshold to each bin of the registered depth map to produce a threshold image. | 09-01-2011 |
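The binning, averaging, and thresholding steps described above can be illustrated with a short Python sketch; the bin grid dimensions and threshold value below are arbitrary example choices, not parameters from the patent.

```python
import numpy as np

def bin_average_threshold(depth, bins_y, bins_x, threshold):
    """Average dense depth measurements into a coarse bin grid, then threshold each bin."""
    h, w = depth.shape
    by, bx = h // bins_y, w // bins_x          # image pixels contributing to each bin
    binned = (depth[:bins_y * by, :bins_x * bx]
              .reshape(bins_y, by, bins_x, bx)
              .mean(axis=(1, 3)))              # single average value per bin
    return (binned < threshold).astype(np.uint8)   # thresholded bin image

depth = np.random.uniform(0.5, 5.0, (480, 640))    # synthetic depth measurements in metres
mask = bin_average_threshold(depth, bins_y=60, bins_x=80, threshold=2.0)
```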
20110211750 | METHOD AND APPARATUS FOR DETERMINING MISALIGNMENT - An apparatus for determining misalignment between a first image and a second image, the first and second images being viewable stereoscopically, the apparatus comprising: | 09-01-2011 |
20110211751 | METHOD AND APPARATUS FOR DETERMINING MISALIGNMENT - A method of determining misalignment between a first image and a second image, the first and second images being viewable stereoscopically, the method comprising: determining a feature position within the first image and a corresponding feature position within the second image; defining, within the first image and the second image, the optical axis of the cameras capturing said respective images; and calculating the misalignment between at least one of scale, roll or vertical translation of the feature position within the first image and the corresponding feature position within the second image, the misalignment being determined in dependence upon the location of the feature position of the first image and the corresponding feature position of the second image relative to the defined optical axis of the respective images is described. A corresponding apparatus is also described. | 09-01-2011 |
20110216961 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing device that includes a virtual space recognition unit for analyzing 3D space structure of a real space to recognize a virtual space, a storage unit for storing an object to be arranged in the virtual space, a display unit for displaying the object arranged in the virtual space, on a display device, a detection unit for detecting device information of the display device, and an execution unit for executing predetermined processing toward the object based on the device information. | 09-08-2011 |
20110216962 | METHOD OF EXTRACTING THREE-DIMENSIONAL OBJECTS INFORMATION FROM A SINGLE IMAGE WITHOUT META INFORMATION - Disclosed herein is a method of extracting three-dimensional object information by shadow analysis from a single image without meta information; the technical problem to be solved is to extract three-dimensional information of an object, such as the height of the object and the footprint surface position of the object, from a single image without meta information. | 09-08-2011 |
20110216963 | ROTATE AND SLANT PROJECTOR FOR FAST FULLY-3D ITERATIVE TOMOGRAPHIC RECONSTRUCTION - Disclosed herein are embodiments of a rotate-and-slant projector that takes advantage of symmetries in the geometry to compute truly volumetric projections to multiple oblique sinograms in a computationally efficient manner. It is based upon the 2D rotation-based projector using the fast three-pass method of shears, and it conserves the 2D rotator computations for multiple projections to each oblique sinogram set. The projector is equally applicable to both conventional evenly-spaced projections and unevenly-spaced line-of-response (LOR) data (where the arc correction is modeled within the projector). The LOR-based version models the exact location of the direct and oblique LORs, and provides an ordinary Poisson reconstruction framework. Speed optimizations of various embodiments of the projector include advantageously utilizing data symmetries such as the vertical symmetry of the oblique projection process, a coarse-depth compression, and array indexing schemes which maximize serial memory access. | 09-08-2011 |
20110222756 | Method for Handling Pixel Occlusions in Stereo Images Using Iterative Support and Decision Processes - In stereo images that include occluded pixels and visible pixels, occlusions are handled by first determining, for the occluded pixels, initial disparity values and support for the initial disparity values using an initial support function, an occlusion map and disparities of the visible pixels neighboring the occluded pixels in the stereo images. Then, for the occluded pixels, final disparity values and support for the final disparity values are determined using the initial disparity values, a final support function and a normalization function in an iterative support-and-decision process. | 09-15-2011 |
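The following Python sketch illustrates, in heavily simplified form, the idea of filling occluded disparities from neighbouring visible pixels; it substitutes a plain minimum-of-visible-neighbours rule for the support and decision functions described above, so it is an assumption-laden approximation rather than the method itself.

```python
import numpy as np

def fill_occlusions(disparity, occluded, iterations=10):
    """Iteratively assign each occluded pixel the minimum disparity of its visible 4-neighbours."""
    disp = disparity.astype(float)
    occ = occluded.copy()
    h, w = disp.shape
    for _ in range(iterations):
        for y in range(h):
            for x in range(w):
                if not occ[y, x]:
                    continue
                neigh = [disp[ny, nx]
                         for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if 0 <= ny < h and 0 <= nx < w and not occ[ny, nx]]
                if neigh:
                    disp[y, x] = min(neigh)    # favour the background (smaller) disparity
                    occ[y, x] = False          # pixel now has a decided value
    return disp
```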
20110222757 | Systems and methods for 2D image and spatial data capture for 3D stereo imaging - Systems and methods for 2D image and spatial data capture for 3D stereo imaging are disclosed. The system utilizes a cinematography camera and at least one reference or “witness” camera spaced apart from the cinematography camera at a distance much greater than the interocular separation to capture 2D images over an overlapping volume associated with a scene having one or more objects. The captured image data is post-processed to create a depth map, and a point cloud is created from the depth map. The robustness of the depth map and the point cloud allows for dual virtual cameras to be placed substantially arbitrarily in the resulting virtual 3D space, which greatly simplifies the addition of computer-generated graphics, animation and other special effects in cinematographic post-processing. | 09-15-2011 |
20110222758 | RADIOGRAPHIC IMAGE CAPTURING SYSTEM AND METHOD OF DISPLAYING RADIOGRAPHIC IMAGES - A radiographic image capturing system includes an image reconstructor for processing a plurality of radiographic images of a subject in order to reconstruct a radiographic tomographic image of the subject, and a monitor for displaying at least the radiographic tomographic image. The radiographic image capturing system also includes a region-of-interest setter for setting a region of interest of the subject on the radiographic images or the radiographic tomographic image, a radiographic image extractor for extracting, from among the radiographic images, two radiographic images for viewing the region of interest by way of stereographic vision, and a first stereographic vision display controller or a second stereographic vision display controller for controlling the monitor to display the extracted two radiographic images for stereographic vision. | 09-15-2011 |
20110229012 | ADJUSTING PERSPECTIVE FOR OBJECTS IN STEREOSCOPIC IMAGES - A method for manipulating a stereoscopic image, comprising receiving an original stereoscopic image including a left image and a right image; identifying one or more objects; determining actual object sizes and actual object locations in both the left and right images; determining original perceived three-dimensional object locations and new perceived three-dimensional object locations for the identified one or more objects; determining size magnification factors and location displacement values for each of the one or more objects; generating a new stereoscopic image by changing the actual object sizes and the actual object locations responsive to the corresponding size magnification factors and location displacement values; and storing the new stereoscopic image in a processor-accessible memory system. | 09-22-2011 |
20110229013 | METHOD AND SYSTEM FOR MEASURING OBJECT - A method and system for measuring three-dimensional coordinates of an object are provided. The method includes: capturing images from a calibration point of known three-dimensional coordinates by two image-capturing devices disposed in a non-parallel manner, so as for a processing module connected to the image-capturing devices to calculate a beam confluence collinear function of the image-capturing devices; calibrating the image-capturing devices to calculate intrinsic parameters and extrinsic parameters of the image-capturing devices and calculate the beam confluence collinear function corresponding to the image-capturing devices; and capturing images from a target object by the image-capturing devices so as for the processing module to calculate three-dimensional coordinates of the object according to the beam confluence collinear function. In so doing, the method and system enable the three-dimensional coordinates and bearings of a target object to be calculated quickly, precisely, and conveniently. Hence, the method and system are applicable to various operating environments. | 09-22-2011 |
20110229014 | ANALYSIS OF STEREOSCOPIC IMAGES - A method of identifying the left-eye and the right-eye images of a stereoscopic pair, comprising the steps of comparing the images to locate an occluded region visible in only one of the images; detecting image edges; and identifying a right-eye image where more image edges are aligned with the left-hand edge of an occluded region and identifying a left-eye image where more image edges are aligned with the right-hand edge of an occluded region. | 09-22-2011 |
20110229015 | METHOD AND APPARATUS FOR DETERMINING THE SURFACE PROFILE OF AN OBJECT - The present invention relates to a method, apparatus, computer code and algorithm for determining the surface profile of an object. The invention involves capturing three or four images of the object at different planes of which some of the images can be taken outside the depth of field of the optical system and some inside the depth of the field of the optical system. The invention may have particular application in instances of surface analysis and security applications under ambient lighting conditions. | 09-22-2011 |
20110235897 | DEVICE AND PROCESS FOR THREE-DIMENSIONAL LOCALIZATION AND POSE ESTIMATION USING STEREO IMAGE, AND COMPUTER-READABLE STORAGE MEDIUM STORING THE PROGRAM THEREOF - The device includes: | 09-29-2011 |
20110235898 | MATCHING PROCESS IN THREE-DIMENSIONAL REGISTRATION AND COMPUTER-READABLE STORAGE MEDIUM STORING A PROGRAM THEREOF - The matching process includes: finding first and second three-dimensional reconstruction point sets that contain three-dimensional position coordinates of segments, and first and second feature sets that contain three-dimensional information regarding vertices of the segments, from image data of an object. | 09-29-2011 |
20110235899 | STEREOSCOPIC IMAGE PROCESSING DEVICE, METHOD, RECORDING MEDIUM AND STEREOSCOPIC IMAGING APPARATUS - An apparatus | 09-29-2011 |
20110243425 | Methods For Analyzing Absorbent Articles - A method for analyzing an absorbent article may include providing a three-dimensional computed tomography data set comprising a mannequin image and an article image. The article image may be constructed from projections collected while the absorbent article is fitted to a mannequin. An outer surface of the mannequin image may be identified. A desired distance may be provided. A volumetric demarcation may be spaced the desired distance away from the outer surface of the mannequin image. An image volume may be disposed between the outer surface of the mannequin image and the volumetric demarcation. A relevant portion of the article image may be enhanced using a processor. The relevant portion of the article image may be coincident with the image volume. | 10-06-2011 |
20110249886 | IMAGE CONVERTING DEVICE AND THREE-DIMENSIONAL IMAGE DISPLAY DEVICE INCLUDING THE SAME - An image converting device includes: a downscaling unit which downscales a two-dimensional image to generate at least one downscaling image, a feature map generating unit which extracts feature information from the downscaling image to generate a feature map, wherein the feature map includes a plurality of objects, an object segmentation unit which divides the plurality of objects, an object order determining unit which determines a depth order of the plurality of objects, and adds a first weight value to an object having the shallowest depth among the plurality of objects, and a visual attention calculating unit which generates a low-level attention map based on visual attention of the feature map. | 10-13-2011 |
20110249887 | IMAGE SYNTHESIS APPARATUS, IMAGE SYNTHESIS METHOD AND PROGRAM - An image synthesis apparatus includes: an image selection section adapted to select two or more three-dimensional images to be synthesized from among a plurality of three-dimensional images; an order determination section adapted to determine, based on parallax amounts of the selected three-dimensional images, a synthesis order representative of an order in which the selected three-dimensional images are to be synthesized; an image synthesis section adapted to synthesize the selected three-dimensional images in accordance with the synthesis order; and a control section adapted to control the image selection section, the order determination section and the image synthesis section in response to an operation of a user. | 10-13-2011 |
20110249888 | Method and Apparatus for Measuring an Audiovisual Parameter - There is provided a method of measuring 3D depth of a stereoscopic image, comprising providing left and right eye input images, applying an edge extraction filter to each of the left and right eye input images, and determining 3D depth of the stereoscopic image using the edge extracted left and right eye images. There is also provided an apparatus for carrying out the method of measuring 3D depth of a stereoscopic image. | 10-13-2011 |
20110249889 | STEREOSCOPIC IMAGE PAIR ALIGNMENT APPARATUS, SYSTEMS AND METHODS - Apparatus, systems, and methods disclosed herein operate to produce an image alignment shift vector used to shift left and right image portions of a stereoscopic image with respect to each other in order to reduce or eliminate undesirable horizontal and vertical disparity components. Vertical and horizontal projections of luminance value aggregations from selected left and right image pixel blocks are correlated to derive vertical and horizontal components of a disparity vector corresponding to each left/right pixel block pair. Disparity vectors corresponding to multiple image blocks are algebraically combined to yield the image alignment shift vector. The left and/or right images are then shifted in proportion to the magnitude of the image alignment shift vector at an angle corresponding to that of the image alignment shift vector. | 10-13-2011 |
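A compact Python sketch of the projection-correlation idea described above is given below; the use of numpy's cross-correlation on row/column luminance sums and the simple averaging of block disparity vectors into one alignment shift are illustrative simplifications.

```python
import numpy as np

def projection_shift(a, b):
    """Estimate the offset between two 1-D luminance projections via cross-correlation."""
    corr = np.correlate(a - a.mean(), b - b.mean(), mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

def block_disparity(left_blk, right_blk):
    """Disparity vector (horizontal, vertical) for one left/right pixel-block pair."""
    dx = projection_shift(left_blk.sum(axis=0), right_blk.sum(axis=0))   # column sums -> horizontal
    dy = projection_shift(left_blk.sum(axis=1), right_blk.sum(axis=1))   # row sums -> vertical
    return np.array([dx, dy], dtype=float)

def alignment_shift(left, right, block=64):
    """Combine the disparity vectors of all blocks into one image alignment shift vector."""
    h, w = left.shape
    vecs = [block_disparity(left[y:y + block, x:x + block], right[y:y + block, x:x + block])
            for y in range(0, h - block + 1, block)
            for x in range(0, w - block + 1, block)]
    return np.mean(vecs, axis=0)
```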
20110255775 | METHODS, SYSTEMS, AND COMPUTER-READABLE STORAGE MEDIA FOR GENERATING THREE-DIMENSIONAL (3D) IMAGES OF A SCENE - Disclosed herein are methods, systems, and computer-readable storage media for generating three-dimensional (3D) images of a scene. According to an aspect, a method includes capturing a real-time image and a first still image of a scene. Further, the method includes displaying the real-time image of the scene on a display. The method also includes determining one or more properties of the captured images. The method also includes calculating an offset in a real-time display of the scene to indicate a target camera positional offset with respect to the first still image. Further, the method includes determining that a capture device is in a position of the target camera positional offset. The method also includes capturing a second still image. Further, the method includes correcting the captured first and second still images. The method also includes generating the three-dimensional image based on the corrected first and second still images. | 10-20-2011 |
20110255776 | METHODS AND SYSTEMS FOR ENABLING DEPTH AND DIRECTION DETECTION WHEN INTERFACING WITH A COMPUTER PROGRAM - One or more images can be captured with a depth camera having a capture location in a coordinate space. First and second objects in the one or more images can be identified and assigned corresponding first and second object locations in the coordinate space. A relative position can be identified in the coordinate space between the first object location and the second object location when viewed from the capture location by computing an azimuth angle and an altitude angle between the first object location and the second object location in relation to the capture location. The relative position includes a dimension of depth with respect to the coordinate space. The dimension of depth is determined from analysis of the one or more images. A state of a computer program is changed based on the relative position. | 10-20-2011 |
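The azimuth/altitude computation mentioned above can be sketched as follows; the coordinate convention (y up, z forward) and the use of relative angle differences are assumptions made only for illustration.

```python
import numpy as np

def azimuth_altitude(capture, point):
    """Azimuth and altitude of a point as seen from the capture location (y up, z forward)."""
    v = np.asarray(point, float) - np.asarray(capture, float)
    azimuth = np.arctan2(v[0], v[2])                    # angle within the horizontal x-z plane
    altitude = np.arctan2(v[1], np.hypot(v[0], v[2]))   # elevation above that plane
    return azimuth, altitude

def relative_angles(capture, first_object, second_object):
    """Angular offsets between two object locations, viewed from the capture location."""
    az1, alt1 = azimuth_altitude(capture, first_object)
    az2, alt2 = azimuth_altitude(capture, second_object)
    return az2 - az1, alt2 - alt1
```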
20110262030 | Recovering 3D Structure Using Blur and Parallax - A system and method for generating a focused image of an object is provided. The method comprises obtaining a plurality of images of an object, estimating an initial depth profile of the object, estimating a parallax parameter and a blur parameter for each pixel in the plurality of images and generating a focused image and a corrected depth profile of the object using a posterior energy function. The posterior energy function is based on the estimated parallax parameter and the blur parameter of each pixel in the plurality of images. | 10-27-2011 |
20110262031 | CONCAVE SURFACE MODELING IN IMAGE-BASED VISUAL HULL - Apparatus and methods disclosed herein provide for a set of reference images obtained from a camera and a reference image obtained from a viewpoint to capture an entire concave region of an object; a silhouette processing module for obtaining a silhouette image of the concave region of the object; and a virtual-image synthesis module connected to the silhouette processing module for synthesizing a virtual inside-out image of the concave region from the computed silhouette images and for generating a visual hull of the object having the concave region. | 10-27-2011 |
20110262032 | ENHANCED OBJECT RECONSTRUCTION - Processing images includes projecting an infra-red pattern onto a three-dimensional object and producing a first image, a second image, and a third image of the three-dimensional object while the pattern is projected on the three-dimensional object. The first image and the second image include the three-dimensional object and the pattern. The first image and the second image are produced by capturing at a first camera and a second camera, respectively, light filtered through an infra-red filter. The third image includes the three-dimensional object but not the pattern. Processing the images also includes establishing a first-pair correspondence between a portion of pixels in the first image and a portion of pixels in the second image. Processing the images further includes constructing, based on the first-pair correspondence and the third image, a two-dimensional image that depicts a three-dimensional construction of the three-dimensional object. | 10-27-2011 |
20110268350 | COLOR IMAGE PROCESSING METHOD, COLOR IMAGE PROCESSING DEVICE, AND RECORDING MEDIUM - To provide a color image processing method and device capable of improving the texture of a specific object in a color image taken by a color imaging device by controlling the quantity of a specular component in the specific object. A color image processing device | 11-03-2011 |
20110274343 | SYSTEM AND METHOD FOR EXTRACTION OF FEATURES FROM A 3-D POINT CLOUD - A method of extracting a feature from a point cloud comprises receiving a three-dimensional (3-D) point cloud representing objects in a scene, the 3-D point cloud containing a plurality of data points; generating a plurality of hypothetical features based on data points in the 3-D point cloud, wherein the data points corresponding to each hypothetical feature are inlier data points for the respective hypothetical feature; and selecting the hypothetical feature having the most inlier data points as representative of an object in the scene. | 11-10-2011 |
20110280473 | ROTATION ESTIMATION DEVICE, ROTATION ESTIMATION METHOD, AND RECORD MEDIUM - A rotation estimation device includes an attitude determination section that accepts a plurality of three-dimensional images captured by an image capturing device at a plurality of timings, detects a plane region that is present in common with the plurality of images, and obtains a relative attitude of the image capturing device to the plane region in the image based on the image for each of the plurality of images; and a rotation state estimation section that obtains a rotational state of the image capturing device based on the relative attitude of the image capturing device, the relative attitude being obtained for each of the images. | 11-17-2011 |
20110286660 | Spatially Registering User Photographs - Photographs of an object may be oriented with respect to both the geographic location and orientation of the object by registering a 3D model derived from a plurality of photographs of the objects with a 2D image of the object having a known location and orientation. For example, a 3D point cloud of an object created from photographs of the object using a Photosynth™ tool may be aligned with a satellite photograph of the object, where the satellite photograph has location and orientation information. A tool providing scaling and rotation of the 3D model with respect to the 2D image may be used or an automatic alignment may be performed using a function based on object edges filtered at particular angles. Once aligned, data may be recorded that registers camera locations for the plurality of photographs with geographic coordinates of the object, either absolute latitude/longitude or relative to the object. | 11-24-2011 |
20110286661 | METHOD AND APPARATUS FOR TEMPORALLY INTERPOLATING THREE-DIMENSIONAL DEPTH IMAGE - A method and apparatus for temporally interpolating a three-dimensional (3D) depth image are provided to generate an intermediate depth image in a desired time. The apparatus may interpolate depth images generated by a depth camera, using a temporal interpolation procedure, may generate an intermediate depth image in a new time using the interpolated depth images, and may combine the generated intermediate depth image with color images, to generate a high-precision 3D image. | 11-24-2011 |
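As a minimal illustration of generating an intermediate depth frame at a new time, the sketch below linearly blends two captured depth frames; a real system of the kind described would presumably motion-compensate first, which this simplification omits.

```python
import numpy as np

def interpolate_depth(depth_t0, depth_t1, t0, t1, t):
    """Blend two depth frames linearly to obtain an intermediate frame at time t (t0 <= t <= t1)."""
    alpha = (t - t0) / float(t1 - t0)
    return (1.0 - alpha) * depth_t0 + alpha * depth_t1

d0 = np.random.uniform(0.5, 5.0, (240, 320))   # depth frame captured at t = 0.0 s
d1 = np.random.uniform(0.5, 5.0, (240, 320))   # depth frame captured at t = 0.1 s
d_mid = interpolate_depth(d0, d1, 0.0, 0.1, 0.04)
```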
20110293170 | IMAGE PROCESSING APPARATUS AND METHOD - The format of an input image is determined appropriately, and an appropriate output image adapted to a format that can be displayed on a display section is displayed. | 12-01-2011 |
20110293171 | METHODS AND APPARATUS FOR HIGH-RESOLUTION CONTINUOUS SCAN IMAGING - A continuous scanning method employs one or more moveable sensors and one or more reference sensors deployed in the environment around a test subject. Each sensor is configured to sense an attribute of the test subject (e.g., sound energy, infrared energy, etc.) while continuously moving along a path and recording the sensed attribute, the position, and the orientation of each of the moveable sensors and each of the reference sensors. The system then constructs a set of transfer functions corresponding to points in space between the moveable sensors, wherein each of the transfer functions relates the test data of the moveable sensors to the test data of the reference sensors. In this way, a graphical representation of the attribute in the vicinity of the test subject can be produced. | 12-01-2011 |
20110293172 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE DISPLAY APPARATUS - An image processing apparatus | 12-01-2011 |
20110299761 | Image Processing Apparatus, Image Processing Method, and Program - An image processing apparatus includes a projective transformation unit that performs projective transformation on left and right images captured from different points of view, a projective transformation parameter generating unit that generates a projective transformation parameter used by the projective transformation unit by receiving feature point information regarding the left and right images, a stereo matching unit that performs stereo matching using left and right projective transformation images subjected to projective transformation, and a matching error minimization control unit that computes image rotation angle information regarding the left and right projective transformation images and correspondence information of an error evaluation value of the stereo matching. The matching error minimization control unit computes the image rotation angle at which the error evaluation value is minimized, and the projective transformation parameter generating unit computes the projective transformation parameter that reflects the image rotation angle at which the error evaluation value is minimized. | 12-08-2011 |
20110299762 | Process Of Correcting An Image Provided On A Support Which Is Subsequently Submitted To A Deformation Process - The invention relates to a method for adapting a visual representation which subsequently is subjected to a deformation, like in packaging. To be able to take into account the deformations on the visual representation, the method comprises the steps of: providing a pattern on a support, wherein the pattern comprises a distribution of codes, which are arranged such that each code is unique, deforming the support with the pattern, taking at least two images of the deformed support under different points of view, and determining a 3D surface model based on the matching of at least one code of the pattern in the at least two images. | 12-08-2011 |
20110299763 | APPARATUS AND METHOD OF INFORMATION EXTRACTION FROM ELECTROMAGNETIC ENERGY BASED UPON MULTI-CHARACTERISTIC SPATIAL GEOMETRY PROCESSING - An apparatus for information extraction from electromagnetic energy via multi-characteristic spatial geometry processing to determine three-dimensional aspects. Structure receives the electromagnetic energy, which has a plurality of spatial phase characteristics. Structure separates the plurality of spatial phase characteristics of the received electromagnetic energy. Structure identifies spatially segregated portions of each of the plurality of spatial phase characteristics, with each spatially segregated portion corresponding in a point to point relationship to a spatially segregated portion for each of the other of the plurality of spatial phase characteristics in a group. Structure quantifies each segregated portion to provide a spatial phase metric of each segregated portion for providing a data map of the spatial phase metric of each separated spatial phase characteristic of the plurality of spatial phase characteristics. Structure processes the spatial phase metrics to determine surface contour information for each segregated portion of the data map. | 12-08-2011 |
20110305383 | APPARATUS AND METHOD PROCESSING THREE-DIMENSIONAL IMAGES - Provided is a 3D image processing apparatus and method. The 3D image processing apparatus may determine, with a small amount of calculation, a quantization parameter to be used for compressing a depth image, based on a quantization parameter used for compressing a color image and characteristics of the color image and the depth image. | 12-15-2011 |
20110311128 | DIGITAL WATERMARK DETECTION IN 2D-3D CONTENT CONVERSION - A system and method are provided for analyzing 3D digital content to determine whether a watermark is detectable. The watermark may exist in 2D content that is converted to 3D, and in such cases, the survivability of the watermark to the conversion process is evaluated. An anticipated location of the watermark in left and right 3D images may be determined, and the detectability based upon the anticipated location. A report may indicate whether the watermark survived the conversion in one or both images, or neither. The process may be performed for single frames, sequences of single frames, or entire files containing many image frames. Watermark placement may also be proposed for locations in 2D content, 3D content, or both. Watermarks may similarly be placed in the content. | 12-22-2011 |
20110311129 | TRAINING-FREE GENERIC OBJECT DETECTION IN 2-D AND 3-D USING LOCALLY ADAPTIVE REGRESSION KERNELS - The present invention provides a method of learning-free detection and localization of actions that includes providing a query video action of interest and providing a target video, obtaining at least one query space-time localized steering kernel (3-D LSK) from the query video action of interest and obtaining at least one target 3-D LSK from the target video, determining at least one query feature from the query 3-D LSK and determining at least one target patch feature from the target 3-D LSK, and outputting a resemblance map, where the resemblance map provides a likelihood of a similarity between each query feature and each target patch feature to output learning-free detection and localization of actions, where the steps of the method are performed by using an appropriately programmed computer. | 12-22-2011 |
20110311130 | IMAGE PROCESSING APPARATUS, METHOD, PROGRAM, AND RECORDING MEDIUM - Extracting information corresponding to a three-dimensional object from an image captured by plural imaging apparatuses is implemented with a simple configuration and simple processing. | 12-22-2011 |
20110311131 | DATA RESTORATION METHOD AND APPARATUS, AND PROGRAM THEREFOR - Three-dimensional data is compressed at a high compression ratio without deteriorating resolution and accuracy, by computing a coupling coefficient from input three-dimensional data and a three-dimensional base data group obtained from a plurality of objects and outputting the coupling coefficient as compressed data. Specifically, the three-dimensional data is input to corresponding point determination means. The corresponding point determination means generates three-dimensional data to be synthesized in which vertexes of the three-dimensional data are made to correspond to vertexes of three-dimensional reference data serving as a reference to determine association relationship between vertexes. Coefficient computation means computes a coupling coefficient for coupling a three-dimensional base data group used for synthesis of three-dimensional data to synthesize three-dimensional data to be synthesized, and outputs the computed coupling coefficient as the compressed data of the three-dimensional data. | 12-22-2011 |
20110317910 | IMAGE ANALYSIS METHOD AND IMAGE ANALYSIS APPARATUS - An image analysis method includes acquiring images of spatially different analysis regions. Each of the images of the analysis regions is constituted by pixels including a plurality of data acquired simultaneously or time-serially. The method further includes obtaining a cross-correlation between two analysis regions by using data of pixels of images of the analysis regions. | 12-29-2011 |
20120002862 | APPARATUS AND METHOD FOR GENERATING DEPTH SIGNAL - According to one embodiment, a depth signal generating apparatus includes following units. The calculating unit is configured to calculate a statistic value for pixel values for each of predefined areas in the first image, and calculate, for each of predetermined base depth models, a first evaluation value based on the calculated statistic value. The correcting unit is configured to correct, based on a second evaluation value previously derived for the second image and a first degree of similarity indicating a similarity between the predetermined base depth models, the first evaluation value to derive second evaluation values for the predetermined base depth models. The selecting unit is configured to select a base depth model having the highest second evaluation value from the predetermined base depth models. The generating unit is configured to generate a depth signal based on the selected base depth model. | 01-05-2012 |
20120002863 | Depth image encoding apparatus and depth image decoding apparatus using loop-filter, method and medium - A depth image encoding apparatus and a depth image decoding apparatus are provided. The depth image encoding apparatus may compute coefficients used to restore an edge region and a smooth region of a depth image, and may restore the depth image using the depth image and a color image. | 01-05-2012 |
20120002864 | IMAGE PROCESSING UNIT, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM - An image processing unit includes a statistical information calculating section which calculates statistical information in macroblock units with regard to image data with a plurality of fields, a region determination section which executes region determination with regard to the image data with the level of recognition of three-dimensional images as a determination standard using the statistical information calculated by the statistical information calculating section, and an encoding processing section which encodes the image data of each field and generates an encoded stream while changing the content of the encoding process for each of the macroblocks according to the result of the region determination executed by the region determination section. | 01-05-2012 |
20120002865 | METHOD FOR PERFORMING AUTOMATIC CLASSIFICATION OF IMAGE INFORMATION - The method is characterised in that the method comprises the steps that a computer or several interconnected computers are caused to a) store, in the form of a pixel set in which set each pixel is associated with image information in at least one channel for light intensity, a first image to be classified onto a digital storage medium; b) carry out a first classification of the image, which classification is caused to be based upon the image information of each respective pixel and which classification is caused to associate each pixel with a certain class in a first set of classes, and to store these associations in a first database; c) calculate, for each pixel and for several classes in the first set of classes, the smallest distance in the image between the pixel in question and the closest pixel which is associated with the class in question in the database, and to store an association between each pixel and the calculated smallest distance for the pixel in a second database for each class for which a distance has been calculated; d) carry out a second classification of the data in the second database, which classification is caused to be based upon the smallest distance for each pixel to each respective class, and to associate each pixel to a certain class in a second set of classes; and e) store the classified image in the form of a set of pixels onto a digital storage medium, where each pixel comprises data regarding the association of the pixel to the certain class in the second set of classes, and where the classified image has the same dimensions as the first image. | 01-05-2012 |
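Steps (b) through (d) above can be sketched compactly in Python; the intensity-threshold first classification, the per-class Euclidean distance maps, and the nearest-class second classification are illustrative stand-ins for whatever classifiers the method actually uses.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def first_classification(image, thresholds=(85, 170)):
    """Crude intensity-based labelling of each pixel into one of three classes."""
    return np.digitize(image, thresholds)

def distance_maps(labels, classes):
    """For every class, the distance from each pixel to the nearest pixel of that class."""
    return np.stack([distance_transform_edt(labels != c) for c in classes], axis=-1)

def second_classification(dists):
    """Re-classify each pixel according to which class lies closest."""
    return np.argmin(dists, axis=-1)

image = (np.random.rand(64, 64) * 255).astype(np.uint8)
labels_pass1 = first_classification(image)
labels_pass2 = second_classification(distance_maps(labels_pass1, classes=[0, 1, 2]))
```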
20120002866 | METHOD AND APPARATUS FOR REDUCING THE MEMORY REQUIREMENT FOR DETERMINING DISPARITY VALUES FOR AT LEAST TWO STEREOSCOPICALLY RECORDED IMAGES - A method and an apparatus reduce the temporary random access memory required when determining disparity values for at least two stereoscopically recorded images with known epipolar geometry, in which a disparity is determined for each pixel of an image. Path-dependent dissimilarity costs are calculated on the basis of a disparity-dependent cost function, and compared, in two runs for a number of paths which open in the pixel. The disparity-dependent cost function evaluates a pixel-based dissimilarity measure between the pixel and the corresponding pixel, according to the respective disparity, in a second image. The path-dependent dissimilarity costs for a first predetermined set of disparities are calculated in a first run for a number of first paths and in a second run for a number of remaining paths, and the corresponding path-dependent dissimilarity costs of the first paths and of the remaining paths are accumulated for a second predetermined set of disparities. | 01-05-2012 |
20120002867 | FEATURE POINT GENERATION SYSTEM, FEATURE POINT GENERATION METHOD, AND FEATURE POINT GENERATION PROGRAM - A feature point generation system capable of generating a feature point that satisfies a preferred condition from a three-dimensional shape model is provided. Image group generation means 31 generates a plurality of images obtained by varying conditions with respect to the three-dimensional shape model. Evaluation means 33 calculates a first evaluation value that decreases steadily as a feature point group is distributed more uniformly on the three-dimensional shape model and a second evaluation value that decreases steadily as extraction of a feature point in an image corresponding to a feature point on the three-dimensional shape model becomes easier, and calculates an evaluation value relating to a designated feature point group as a weighted sum of the respective evaluation values. Feature point arrangement means 32 arranges the feature point group on the three-dimensional shape model so that the evaluation value calculated by the evaluation means 33 is minimized. | 01-05-2012 |
20120008852 | SYSTEM AND METHOD OF ENHANCING DEPTH OF A 3D IMAGE - A system and method of enhancing depth of a three-dimensional (3D) image are disclosed. A depth generator generates at least one depth map associated with an image. A depth enhancer enhances the depth map by stretching a depth histogram associated with the depth map, wherein the depth histogram is a distribution of depth levels of pixels of the image. | 01-12-2012 |
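A minimal sketch of stretching a depth histogram to the full 8-bit range is shown below; the percentile-based stretch limits are an assumed choice rather than the enhancer's actual rule.

```python
import numpy as np

def stretch_depth(depth, lo_pct=1, hi_pct=99):
    """Stretch the depth histogram so the chosen percentiles span the full 8-bit range."""
    lo, hi = np.percentile(depth, [lo_pct, hi_pct])
    stretched = (depth.astype(float) - lo) / max(hi - lo, 1e-6)
    return np.clip(stretched * 255.0, 0, 255).astype(np.uint8)
```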
20120008853 | THREE-DIMENSIONAL (3D) IMAGE PROCESSING METHOD AND SYSTEM - A three-dimensional (3D) image processing method is provided. The method includes receiving from an image source a 3D image containing compressed first image pixel data and compressed second image pixel data, and storing the received compressed first image pixel data and compressed second image pixel data in a line register group. The method also includes determining a relationship between lines of the compressed first image pixel data and compressed second image pixel data, and using reading and writing operations on the line register group based on the relationship and a predetermined timing sequence to decompress the compressed first image pixel data and compressed second image pixel data. | 01-12-2012 |
20120008854 | Method and apparatus for rendering three-dimensional (3D) object - Provided is a method and apparatus that may generate a three-dimensional (3D) object from a two-dimensional (2D) image, and render the generated 3D object. | 01-12-2012 |
20120008855 | STEREOSCOPIC IMAGE GENERATION APPARATUS AND METHOD - According to embodiments, a stereoscopic image generation apparatus for generating a disparity image based on at least one image and depth information corresponding to the at least one image is provided. The apparatus includes a calculator, selector and generator. The calculator calculates, based on the depth information, evaluation values that assume larger values with increasing hidden surface regions generated upon generation of disparity images for respective viewpoint sets each including two or more viewpoints. The selector selects one of the viewpoint sets based on the evaluation values calculated for the viewpoint sets. The generator generates, from the at least one image and the depth information, the disparity image at a viewpoint corresponding to the one of the viewpoint sets selected by the selector. | 01-12-2012 |
20120008856 | Automatic Convergence Based on Face Detection for Stereoscopic Imaging - A method for automatic convergence of stereoscopic images is provided that includes receiving a stereoscopic image, selecting a face detected in the stereoscopic image, and shifting at least one of a left image in the stereoscopic image and a right image in the stereoscopic image horizontally, wherein horizontal disparity between the selected face in the left image and the selected face in the right image before the shifting is reduced. In some embodiments, the horizontal disparity is reduced to zero. | 01-12-2012 |
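The convergence adjustment described above can be sketched as follows, assuming face detection has already produced the horizontal face positions in the left and right images; the wrap-and-blank shifting is a simplification for illustration.

```python
import numpy as np

def converge_on_face(left, right, face_left_x, face_right_x):
    """Shift the right image so the selected face has (near) zero horizontal disparity."""
    disparity = int(face_left_x - face_right_x)       # horizontal offset of the face
    shifted = np.roll(right, disparity, axis=1)       # wrap-around shift, for illustration only
    if disparity > 0:
        shifted[:, :disparity] = 0                    # blank the columns that wrapped around
    elif disparity < 0:
        shifted[:, disparity:] = 0
    return left, shifted
```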
20120008857 | METHOD OF TIME-EFFICIENT STEREO MATCHING - Unlike previous works that emphasize hardware-level optimization to reduce processing time in stereo matching, the present invention provides a time-efficient stereo matching method that is applicable at the algorithm level and is therefore compatible with, and can be employed in, any type of stereo matching implementation. | 01-12-2012 |
20120014590 | MULTI-RESOLUTION, MULTI-WINDOW DISPARITY ESTIMATION IN 3D VIDEO PROCESSING - A disparity value between corresponding pixels in a stereo pair of images, where the stereo pair of images includes a first view and a second view of a common scene, can be determined based on identifying a lowest aggregated matching cost for a plurality of support regions surrounding the pixel under evaluation. In response to the number of support regions having a same disparity value being greater than a threshold number, a disparity value indicator for the pixel under evaluation can be set to the same disparity value. | 01-19-2012 |
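A short Python sketch of the multi-window voting idea follows; the window offsets, the vote threshold, and the handling of undecided pixels are assumptions, and the cost volume is presumed to have been computed elsewhere.

```python
import numpy as np

def window_disparity(cost_volume, y, x, window):
    """Disparity with the lowest aggregated cost over one support window around (y, x)."""
    dy0, dy1, dx0, dx1 = window                      # offsets; (y, x) assumed away from borders
    agg = cost_volume[:, y + dy0:y + dy1, x + dx0:x + dx1].sum(axis=(1, 2))
    return int(np.argmin(agg))

def multi_window_disparity(cost_volume, y, x, windows, min_votes):
    """Accept a disparity for the pixel only if enough support windows agree on it."""
    votes = [window_disparity(cost_volume, y, x, w) for w in windows]
    best, count = max(((d, votes.count(d)) for d in set(votes)), key=lambda t: t[1])
    return best if count >= min_votes else None      # None marks an undecided pixel
```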
20120020548 | Method for Generating Images of Multi-Views - The present invention provides a method for generating images of multi-views. The method includes obtaining a 2D original image of an article and background figures of multi-views; calculating the background image range and the main body image range of the 2D original image of the article; cutting the main body image out; generating a depth model according to an equation; cutting the depth model according to the main body image range of the cut 2D image of the article; shifting every pixel in the main body image of the 2D original image of the article according to the cut depth model to obtain shifted main body images of multi-views; and synthesizing the shifted main body images of multi-views and the background figures of multi-views to obtain the final images of multi-views for 3D image reconstruction. | 01-26-2012 |
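The per-pixel shifting step for one synthesized view can be sketched as below; the proportionality of shift to depth, the scale factor, and the zero-filled holes are illustrative simplifications of the described method.

```python
import numpy as np

def shift_main_body(image, depth, view_offset, scale=0.05):
    """Forward-map each pixel horizontally by an amount proportional to its depth model value."""
    h, w = depth.shape
    out = np.zeros_like(image)                                   # holes remain zero-filled
    shifts = np.round(view_offset * scale * depth).astype(int)   # per-pixel horizontal shift
    for y in range(h):
        xs = np.clip(np.arange(w) + shifts[y], 0, w - 1)
        out[y, xs] = image[y]
    return out
```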
20120020549 | Apparatus and method for depth-image encoding with rate-distortion optimization - Provided is a rate-distortion optimizing apparatus and method for encoding a depth image. The rate-distortion optimizing apparatus may reduce a resolution in an area that does not include an edge that significantly affects image synthesis, and may use a high quantization parameter and thus, may provide a high compression performance. | 01-26-2012 |
20120027290 | OBJECT RECOGNITION USING INCREMENTAL FEATURE EXTRACTION - In one example, an apparatus includes a processor configured to extract a first set of one or more keypoints from a first set of blurred images of a first octave of a received image, calculate a first set of one or more descriptors for the first set of keypoints, receive a confidence value for a result produced by querying a feature descriptor database with the first set of descriptors, wherein the result comprises information describing an identity of an object in the received image, and extract a second set of one or more keypoints from a second set of blurred images of a second octave of the received image when the confidence value does not exceed a confidence threshold. In this manner, the processor may perform incremental feature descriptor extraction, which may improve computational efficiency of object recognition in digital images. | 02-02-2012 |
20120027291 | MULTI-VIEW IMAGE CODING METHOD, MULTI-VIEW IMAGE DECODING METHOD, MULTI-VIEW IMAGE CODING DEVICE, MULTI-VIEW IMAGE DECODING DEVICE, MULTI-VIEW IMAGE CODING PROGRAM, AND MULTI-VIEW IMAGE DECODING PROGRAM - The disclosed multi-view image coding/decoding device first obtains depth information for an object photographed in an area subject to processing. Next, a group of pixels in an already-coded (decoded) area which is adjacent to the area subject to processing and in which the same object as in the area subject to processing has been photographed is determined using the depth information and set as a sample pixel group. Then, a view synthesis image is generated for the pixels included in the sample pixel group and the area subject to processing. Next, correction parameters to correct illumination and color mismatches in the sample pixel group are estimated from the view synthesis image and the decoded image. A predicted image is then generated by correcting the view synthesis image relative to the area subject to processing using the estimated correction parameters. | 02-02-2012 |
20120027292 | THREE-DIMENSIONAL OBJECT DETERMINING APPARATUS, METHOD, AND COMPUTER PROGRAM PRODUCT - According to one embodiment, a three-dimensional object determining apparatus includes: a detecting unit configured to detect a plurality of feature points of an object included in an image data that is acquired; a pattern normalizing unit configured to generate a normalized pattern that is normalized by a three-dimensional model from the image data using the plurality of feature points; an estimating unit configured to estimate an illumination direction in which light is emitted to the object in the image data from the three-dimensional model and the normalized pattern; and a determining unit configured to determine whether or not the object in the image data is a three-dimensional object on the basis of the illumination direction. | 02-02-2012 |
20120033872 | APPARATUS AND METHOD FOR GENERATING EXTRAPOLATED VIEW BASED ON IMAGE RESIZING - A view extrapolation apparatus and a view extrapolation method to generate images at a plurality of virtual points using a relatively small number of input images are disclosed. The view extrapolation apparatus and the view extrapolation method output a view at a reference point, the view at the reference point being formed of frames according to time, resize the frames of the view at the reference point to generate a resized frame, and generate an extrapolated view at a virtual point using the resized frame. | 02-09-2012 |
20120033873 | METHOD AND DEVICE FOR DETERMINING A SHAPE MATCH IN THREE DIMENSIONS - Provided are a method and a device for determining a shape match in three dimensions, which can utilize information relating to three-dimensional shapes effectively. Camera control means ( | 02-09-2012 |
20120039525 | APPARATUS AND METHOD FOR PROVIDING THREE DIMENSIONAL MEDIA CONTENT - A system that incorporates teachings of the exemplary embodiments may include, for example, means for generating a disparity map based on a depth map, means for determining accuracy of pixels in the depth map where the determining means identifies the pixels as either accurate or inaccurate based on a confidence map and the disparity map, and means for providing an adjusted depth map where the providing means adjusts inaccurate pixels of the depth map using a cost function associated with the inaccurate pixels. Other embodiments are disclosed. | 02-16-2012 |
20120039526 | Volume-Based Coverage Analysis for Sensor Placement in 3D Environments - Coverage of sensors in a CCTV system in a three-dimensional environment is analyzed by partitioning a 3D model of the environment into a set of voxels. A ray is cast from each pixel in each sensor through the 3D model to determine coverage data for each voxel. The coverage data are analyzed to determine a result indicative of an effective arrangement of the set of sensors. | 02-16-2012 |
20120045116 | METHOD FOR 3D DIGITALIZATION OF AN OBJECT WITH VARIABLE SURFACE - In a method for the 3D digitalization of an object with variable surface a plurality of camera pictures of partial surfaces of the object ( | 02-23-2012 |
20120045117 | METHOD AND DEVICE FOR TRAINING, METHOD AND DEVICE FOR ESTIMATING POSTURE VISUAL ANGLE OF OBJECT IN IMAGE - Method and device for estimating the posture orientation of the object in image are described. An image feature of the image is obtained. For each orientation class, 3-D object posture information corresponding to the image feature is obtained based on a mapping model corresponding to the orientation class, for mapping the image feature to the 3-D object posture information. A joint probability of a joint feature including the image feature and the corresponding 3-D object posture information for each orientation class is calculated according to a joint probability distribution model based on single probability distribution models for the orientation classes. A conditional probability of the image feature in condition of the corresponding 3-D object posture information is calculated based on the joint probability for each orientation class. The orientation class corresponding to the maximum of the conditional probabilities is estimated as the posture orientation of the object in the image. | 02-23-2012 |
20120057775 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing device includes a feature amount extracting unit configured to extract the feature amount of each frame of an image of a content for detector learning of interest that is a content to be used for learning of a highlight detector which is a model for detecting a scene in which the user is interested as a highlight scene; a clustering unit configured to use cluster information that is the information of the cluster obtained by performing cluster learning; a highlight label generating unit configured to generate a highlight label sequence; and a highlight detector learning unit configured to perform learning of the highlight detector. | 03-08-2012 |
20120057776 | THREE-DIMENSIONAL DISPLAY SYSTEM WITH DEPTH MAP MECHANISM AND METHOD OF OPERATION THEREOF - A method of operation of a three-dimensional display system includes: calculating an edge pixel image from a source image; generating a line histogram from the edge pixel image by applying a transform; calculating a candidate line from the line histogram meeting or exceeding a line category threshold for a horizontal line category, a vertical line category, a diagonal line category, or a combination thereof; calculating a vanishing point on the candidate line; and generating a depth map for the vanishing point for displaying the source image on a first device. | 03-08-2012 |
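The last step described in 20120057776, turning an estimated vanishing point into a dense depth map, can be illustrated with a minimal sketch. The linear depth fall-off with distance from the vanishing point and the function and parameter names are assumptions chosen for brevity, not the patented procedure.

```python
import numpy as np

def depth_from_vanishing_point(height, width, vp_xy, max_depth=255.0):
    """Toy depth map: pixels closer to the vanishing point are treated as
    farther away; the linear fall-off is an illustrative assumption."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - vp_xy[0], ys - vp_xy[1])
    return max_depth * (1.0 - dist / dist.max())   # largest depth value at the VP

# Example: a 480x640 frame with a vanishing point slightly above centre.
dmap = depth_from_vanishing_point(480, 640, vp_xy=(320, 200))
```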
20120057777 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes a two-dimensional orthogonal transform unit configured to perform two-dimensional orthogonal transform on a plurality of images, an one-dimensional orthogonal transform unit configured to perform one-dimensional orthogonal transform in a direction in which the images are arranged on two-dimensional orthogonal transform coefficient data obtained by performing the two-dimensional orthogonal transform on the images using the two-dimensional orthogonal transform unit, and a three-dimensional orthogonal transform coefficient data encoder configured to encode three-dimensional orthogonal transform coefficient data obtained by performing the one-dimensional orthogonal transform on the two-dimensional orthogonal transform coefficient data using the one-dimensional orthogonal transform unit. | 03-08-2012 |
20120057778 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM - There is provided an image processing apparatus including an operation recognition unit for recognizing an operation signal for identifying a focused image among images displayed on a screen of an image display unit and an image drawing unit for drawing an image on the screen so as to display the image as a stereoscopic image or a planar image on the screen, on the basis of a recognition result provided by the operation recognition unit. | 03-08-2012 |
20120057779 | Method and Apparatus for Confusion Learning - A method and apparatus for processing image data is provided. The method includes the steps of employing a main processing network for classifying one or more features of the image data, employing a monitor processing network for determining one or more confusing classifications of the image data, and spawning a specialist processing network to process image data associated with the one or more confusing classifications. | 03-08-2012 |
20120057780 | IMAGE SIGNAL PROCESSING DEVICE AND IMAGE SIGNAL PROCESSING METHOD - When crosstalk is cancelled without considering the contents of an image signal, the effect of the crosstalk cancellation is sometimes obtained effectively, and sometimes not. In order to solve this problem, an image signal processing unit which cancels crosstalk in a three-dimensional image signal includes image adaptation control units ( | 03-08-2012 |
20120063668 | Spatial accuracy assessment of digital mapping imagery - The present invention defines a quantitative measure for expressing the spatial (geometric) accuracy of a single optical geo-referenced image. Further, a quality control (QC) method for assessing that measure is developed. The assessment is done on individual images (not stereo models), namely, an image of interest is compared with an automatically selected image from a geo-referenced image database of known spatial accuracy. The selection is based on the developed selection criterion entitled “generalized proximity criterion” (GPC). The assessment is done by computation of spatial dissimilarity between N pairs of line-of-sight rays emanating from conjugate pixels on the two images. This innovation is sought to be employed in any optical system (stills, video, push-broom, etc.), but its primary application is aimed at validating photogrammetric triangulation blocks that are based on small (<10 MPixels) and medium (<50 MPixels) collection systems of narrow and dynamic field of view, together with certifying the respective collection systems. | 03-15-2012 |
20120063669 | Automatic Convergence of Stereoscopic Images Based on Disparity Maps - A method for automatic convergence of stereoscopic images is provided that includes receiving a stereoscopic image, generating a disparity map comprising a plurality of blocks for the stereoscopic image, clustering the plurality of blocks into a plurality of clusters based on disparities of the blocks, selecting a cluster of the plurality of clusters with a smallest disparity as a foreground cluster, determining a first shift amount and a first shift direction and a second shift amount and a second shift direction based on the smallest disparity, and shifting a left image in the stereoscopic image in the first shift direction by the first shift amount and a right image in the stereoscopic image in the second shift direction by the second shift amount, wherein the smallest disparity is reduced. | 03-15-2012 |
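A minimal sketch of the convergence adjustment described in 20120063669, assuming the per-block disparities have already been estimated elsewhere; the equal-width disparity bins used for clustering and the shift directions are illustrative assumptions.

```python
import numpy as np

def converge_stereo_pair(left, right, block_disparities, n_clusters=3):
    """Cluster block disparities, treat the smallest-disparity cluster as the
    foreground, and shift the two views toward each other by half of that
    disparity each (names and thresholds are assumed)."""
    # Crude 1-D clustering: split the disparity range into equal bins.
    edges = np.linspace(block_disparities.min(), block_disparities.max() + 1e-6,
                        n_clusters + 1)
    labels = np.digitize(block_disparities, edges[1:-1])
    cluster_means = [block_disparities[labels == k].mean()
                     for k in range(n_clusters) if np.any(labels == k)]
    foreground_disp = min(cluster_means)          # smallest-disparity cluster

    half = int(round(foreground_disp / 2.0))
    left_shifted = np.roll(left, -half, axis=1)   # shift directions are assumed
    right_shifted = np.roll(right, half, axis=1)
    return left_shifted, right_shifted
```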
20120063670 | MOBILE TERMINAL AND 3D IMAGE COMPOSING METHOD THEREOF - A mobile terminal and a method for composing 3D images thereof are disclosed. The method for composing 3D images of a mobile terminal includes: selecting a background image as a reference from an image buffer; adjusting a convergence point of the selected background image; extracting an object image to be composed to the background image; displaying guidance information indicating a position at which the object image can be composed to the background image; and composing the object image to the background image according to the guidance information. Thus, when 3D images, each having a different convergence, are composed, the convergence point of a background image is adjusted and guidance information indicating a position at which an object image is to be composed is provided, thereby conveniently and accurately composing the 3D images. | 03-15-2012 |
20120063671 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND COMPUTER PROGRAM PRODUCT - According to one embodiment, an image processing device includes an obtaining unit configured to obtain a plurality of images captured in time series; a first calculating unit configured to calculate a first change vector indicating a change between the images in an angle representing a posture of a subject included in each of the images; a second calculating unit configured to calculate a second change vector indicating a change in coordinates of a feature point of the subject; a third calculating unit configured to calculate an intervector angle between the first change vector and the second change vector; and a determining unit configured to determine that the subject is three-dimensional when the intervector angle is smaller than a predetermined first threshold. | 03-15-2012 |
20120063672 | 3D GEOMETRIC MODELING AND MOTION CAPTURE USING BOTH SINGLE AND DUAL IMAGING - A method and apparatus for obtaining an image to determine a three dimensional shape of a stationary or moving object using a bi dimensional coded light pattern having a plurality of distinct identifiable feature types. The coded light pattern is projected on the object such that each of the identifiable feature types appears at most once on predefined sections of distinguishable epipolar lines. An image of the object is captured and the reflected feature types are extracted along with their location on known epipolar lines in the captured image. Displacements of the reflected feature types along their epipolar lines from reference coordinates thereupon determine corresponding three dimensional coordinates in space and thus a 3D mapping or model of the shape of the object at any point in time. | 03-15-2012 |
20120070068 | FOUR DIMENSIONAL RECONSTRUCTION AND CHARACTERIZATION SYSTEM - A method and apparatus for performing a four-dimensional image reconstruction. The apparatus can be configured to receive a first input to slice a stack of two-dimensional images that depicts an object of interest into one or more planes to form one or more virtual images and receive a second input to segment the virtual images. One or more seed points can be generated on the virtual images based on the second input and to automatically order the seed points using a magnetic linking method. Contours corresponding to the object of interest can be generated using a live-wire algorithm and perform a first three-dimensional construction of the object of interest based on the contours. The contours can be converted into seed points for a subsequent set of images and perform a second three-dimensional construction of the object of interest corresponding to the subsequent set of images. | 03-22-2012 |
20120070069 | IMAGE PROCESSING APPARATUS - According to one embodiment, an image processing apparatus includes a difference calculation unit, an intensity calculation unit, and an enhancing unit. The difference calculation unit calculates, for each partial area of an input image, a difference between a depth value of a subject and a reference value representing a depth as a reference. The intensity calculation unit calculates for each partial area an intensity, which has a local maximum value when the difference is 0 and has a greater value as the absolute value of the difference is smaller. The enhancing unit enhances each partial area according to the intensity to generate an output image. | 03-22-2012 |
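The intensity weighting described in 20120070069 (maximal when the depth difference is 0, decreasing as its absolute value grows) can be sketched, for example, with a Gaussian weight driving an unsharp-mask style enhancement; both choices are assumptions made only to keep the example concrete.

```python
import numpy as np

def depth_adaptive_enhance(image, depth, ref_depth, sigma=8.0, gain=1.5):
    """Sketch of depth-selective sharpening.  The weight peaks (value 1) where
    depth equals the reference value and decays as |depth - ref_depth| grows;
    the Gaussian form and the unsharp-mask enhancement are assumptions."""
    weight = np.exp(-((depth - ref_depth) ** 2) / (2.0 * sigma ** 2))

    # Simple high-pass detail layer: image minus a 3x3 box blur.
    padded = np.pad(image.astype(float), 1, mode='edge')
    blurred = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))[1:-1, 1:-1] / 9.0
    detail = image - blurred
    return image + gain * weight * detail
```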
20120070070 | LEARNING-BASED POSE ESTIMATION FROM DEPTH MAPS - A method for processing data includes receiving a depth map of a scene containing a humanoid form. Respective descriptors are extracted from the depth map based on the depth values in a plurality of patches distributed in respective positions over the humanoid form. The extracted descriptors are matched to previously-stored descriptors in a database. A pose of the humanoid form is estimated based on stored information associated with the matched descriptors. | 03-22-2012 |
20120070071 | SYSTEMS AND METHODS FOR AUTOMATED WATER DETECTION USING VISIBLE SENSORS - Systems and methods are disclosed that include automated machine vision that can utilize images of scenes captured by a 3D imaging system configured to image light within the visible light spectrum to detect water. One embodiment includes autonomously detecting water bodies within a scene including capturing at least one 3D image of a scene using a sensor system configured to detect visible light and to measure distance from points within the scene to the sensor system, and detecting water within the scene using a processor configured to detect regions within each of the at least one 3D images that possess at least one characteristic indicative of the presence of water. | 03-22-2012 |
20120070072 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND COMPUTER READABLE PRODUCT - According to one embodiment, an image processing device includes a readiness determining unit configured to determine whether or not a state of a face image included in an image at one time out of images obtained at a plurality of different times is a ready state that satisfies a condition for performing three-dimensionality determination, three-dimensionality determination is determining whether the object is three-dimensional or not; an initiation determining unit configured to determine whether or not a state of a face image included in an image at different time from the image at the one time is an initiation state changed from the ready state; and a first three-dimensionality determining unit configured to perform the three-dimensionality determination on the face images included in the images when it is determined that the state is the initiation state. | 03-22-2012 |
20120076398 | STEREOSCOPIC IMAGE PASTING SYSTEM, AND METHOD AND PROGRAM FOR CONTROLLING OPERATION OF SAME - It is arranged so that stereoscopic images will not overlap one another. A stereoscopic image to be pasted in a free-layout electronic album is selected. The amount of parallax of the stereoscopic image to be pasted in this electronic album is set. When this is done, the selected stereoscopic image is enlarged or reduced in size so as to take on the set amount of parallax. Automatic layout for pasting enlarged or reduced stereoscopic images on each page of the electronic album in such a manner that these stereoscopic images will not overlap one another is carried out. The result of the layout is displayed. | 03-29-2012 |
20120076399 | THREE-DIMENSIONAL IMAGE EDITING DEVICE AND THREE-DIMENSIONAL IMAGE EDITING METHOD - Even when the size of a three-dimensional image is changed, the pop-out amount is automatically adjusted to one intended by the user. The pop-out amount is adjusted based on a conversion characteristic defining a relationship between the size and the pop-out amount of a three-dimensional image as the size of the three-dimensional image is changed, and therefore the pop-out amount of the three-dimensional image can be automatically adjusted to a given pop-out amount preferred by the user or intended by the user. | 03-29-2012 |
20120076400 | METHOD AND SYSTEM FOR FAST THREE-DIMENSIONAL IMAGING USING DEFOCUSING AND FEATURE RECOGNITION - Described is a method and system for fast three-dimensional imaging using defocusing and feature recognition. The method comprises acts of capturing a plurality of defocused images of an object on a sensor, identifying segments of interest in each of the plurality of images using a feature recognition algorithm, and matching the segments with three-dimensional coordinates according to the positions of the images of the segments on the sensor to produce a three-dimensional position of each segment of interest. The disclosed imaging method is “aware” in that it uses a priori knowledge of a small number of object features to reduce computation time as compared with “dumb” methods known in the art which exhaustively calculate positions of a large number of marker points. | 03-29-2012 |
20120082368 | DEPTH CORRECTION APPARATUS AND METHOD - According to one embodiment, a depth correction apparatus includes a clusterer, a calculator and a corrector. The clusterer is configured to apply clustering to at least one of pixel values and depth values of a plurality of pixels in a calculation range corresponding to a correction target pixel, and to classify the plurality of pixels in the calculation range into a plurality of classes. The calculator is configured to calculate pixel value statistics of the respective classes using pixel values of pixels in the respective classes. The corrector is configured to determine a corresponding class of the correction target pixel based on a pixel value of the correction target pixel and the pixel value statistics of the respective classes, and to apply correction which replaces a depth value of the correction target pixel by a representative depth value of the corresponding class. | 04-05-2012 |
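A small sketch of the per-pixel correction in 20120082368, for a single correction target pixel; the two-class mean split and the median as the representative depth value are simplifying assumptions.

```python
import numpy as np

def correct_depth_pixel(pixels, depths, target_value):
    """Replace the target pixel's depth by the representative depth of the
    class its pixel value belongs to (two-class mean split; median used as the
    representative value -- both simplifying assumptions)."""
    pixels = np.asarray(pixels, dtype=float)
    depths = np.asarray(depths, dtype=float)
    classes = (pixels >= pixels.mean()).astype(int)      # class 0 / class 1
    if not np.any(classes == 0) or not np.any(classes == 1):
        return float(np.median(depths))                  # degenerate window
    stats = np.array([pixels[classes == k].mean() for k in (0, 1)])
    k = int(np.argmin(np.abs(stats - target_value)))     # nearest class statistic
    return float(np.median(depths[classes == k]))

# Example: a 5x5 calculation range flattened into 1-D arrays.
rng = np.random.default_rng(0)
win_pixels = rng.integers(0, 256, 25)
win_depths = rng.integers(0, 128, 25)
print(correct_depth_pixel(win_pixels, win_depths, target_value=200))
```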
20120082369 | IMAGE COMPOSITION APPARATUS, IMAGE RETRIEVAL METHOD, AND STORAGE MEDIUM STORING PROGRAM - There is provided an image composition apparatus including a parallax deriving unit configured to derive a parallax of one area in a background image, the one area corresponding to one object in the background image, an image selection unit configured to select an image which has a parallax different from the parallax of the one area in the background image, as a material image, from a plurality of three-dimensional images, each of which is viewed as a specific object in a three-dimensional manner, and an image composition unit configured to superpose the material image selected by the image selection unit on the background image. | 04-05-2012 |
20120082370 | MATCHING DEVICE, MATCHING METHOD AND MATCHING PROGRAM - Provided is a matching device capable of improving the accuracy of the degree of similarity in the calculation of the degree of similarity between data sets. Element selection means | 04-05-2012 |
20120087570 | Method and apparatus for converting 2D image into 3D image - A method and an apparatus for converting a 2D image into a 3D image are disclosed. The method includes converting an input image having pixel values into a brightness image having brightness values, generating a depth map having depth information from the brightness image, and generating at least one of a left eye image, a right eye image and a reproduction image by first parallax-processing the input image using the generated depth map. Here, a pixel value of a delay pixel is substituted for a pixel value of a pixel to be processed at present by considering depth information of N (N is an integer of 2 or more) pixels including the pixel to be processed at present in the parallax-processing. In addition, the delay pixel is determined in accordance with the arrangement of the depth information of the N pixels, and the delay pixel means a pixel located before the pixel to be processed at present by M (M is an integer of 0 or more) pixels. | 04-12-2012 |
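The overall brightness-to-depth-to-parallax pipeline of 20120087570 can be sketched as follows; using brightness directly as the depth proxy, the fixed parallax scale, and clamping in place of the described delay-pixel substitution are all assumptions made for brevity.

```python
import numpy as np

def simple_2d_to_3d(gray):
    """Very small sketch of the brightness -> depth -> parallax pipeline.
    The depth cue, parallax scale, and clamped sampling are assumptions."""
    depth = gray.astype(float) / 255.0           # brightness used as depth proxy
    max_shift = 8                                # assumed maximum parallax in pixels
    h, w = gray.shape
    xs = np.arange(w)

    left = np.empty_like(gray)
    right = np.empty_like(gray)
    for y in range(h):
        shift = (depth[y] * max_shift).astype(int)
        left[y] = gray[y, np.clip(xs + shift, 0, w - 1)]
        right[y] = gray[y, np.clip(xs - shift, 0, w - 1)]
    return left, right
```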
20120087571 | METHOD AND APPARATUS FOR SYNCHRONIZING 3-DIMENSIONAL IMAGE - There are provided a 3-D image synchronization method and apparatus. The method comprises determining a reference region for each of the frames of a first image and determining a counter region for each of the frames of a second image, corresponding to the reference region, for the first image and the second image forming a 3-D image; calculating the feature values of the reference region and the counter region; extracting a frame difference between the first image and the second image based on the feature values; and moving any one of the first image and the second image in the time domain based on the extracted frame difference. | 04-12-2012 |
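The frame-offset estimation in 20120087571 amounts to finding the temporal shift that best aligns two per-frame feature sequences; the sketch below uses an exhaustive search over a small offset range and a mean-squared-error score, both illustrative assumptions.

```python
import numpy as np

def estimate_frame_offset(features_a, features_b, max_offset=15):
    """Find the frame shift that makes two per-frame feature sequences agree
    best (e.g. mean intensity of the reference/counter regions per frame)."""
    best_offset, best_err = 0, np.inf
    for off in range(-max_offset, max_offset + 1):
        if off >= 0:
            a, b = features_a[off:], features_b[:len(features_b) - off]
        else:
            a, b = features_a[:len(features_a) + off], features_b[-off:]
        n = min(len(a), len(b))
        if n == 0:
            continue
        err = np.mean((np.asarray(a[:n], float) - np.asarray(b[:n], float)) ** 2)
        if err < best_err:
            best_offset, best_err = off, err
    return best_offset
```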
20120087572 | Use of Three-Dimensional Top-Down Views for Business Analytics - A method of analyzing a depth image in a digital system is provided that includes detecting a foreground object in a depth image, wherein the depth image is a top-down perspective of a scene, and performing data extraction and classification on the foreground object using depth information in the depth image. | 04-12-2012 |
20120087573 | Eliminating Clutter in Video Using Depth Information - A method of clutter elimination in digital images is provided that includes identifying a foreground blob in an image, determining a depth of the foreground blob, and indicating that the foreground blob is clutter when the depth indicates that the foreground blob is too close to be an object of interest. Methods for obstruction detection in depth images such as those captured by stereoscopic cameras and structured light cameras are also provided. | 04-12-2012 |
20120093393 | CAMERA TRANSLATION USING ROTATION FROM DEVICE - A method, apparatus, system, article of manufacture, and computer readable storage medium provides the ability to determine two or more camera viewpoint optical centers. A first image and a second image captured by camera devices (and the rotations for the camera devices) are obtained. For each pair of matched points between the first image and the second image, a linear equation is defined that utilizes the rotations, pixel coordinates of the matched points and optical centers. A matrix A | 04-19-2012 |
20120093394 | METHOD FOR COMBINING DUAL-LENS IMAGES INTO MONO-LENS IMAGE - A method for combining dual-lens images into a mono-lens image, suitable for a three-dimensional camera having a left lens and a right lens is provided. First, the left lens and the right lens are used to capture a left-eye image and a right-eye image. Next, a disparity between each of a plurality of corresponding pixels in the left-eye image and the right-eye image is calculated. Then, an overlap area of the left-eye image and the right-eye image is determined according to the calculated disparities of pixels. Finally, the images within the overlap area of the left-eye image and the right-eye image are combined into the mono-lens image. | 04-19-2012 |
20120093395 | METHOD AND SYSTEM FOR HIERARCHICALLY MATCHING IMAGES OF BUILDINGS, AND COMPUTER-READABLE RECORDING MEDIUM - The present invention relates to a method for hierarchically matching a building image. The method includes the steps of: matching a wall of a specific building in the building image inputted as a query with a wall(s) of a building(s) in at least one panoramic image by using a technology of matching a building's shape or repeated pattern; selecting a candidate panoramic image(s) which includes a building(s) recognized to have the same or similar wall to the specific building in the panoramic image(s) as a result of matching its wall with others; matching at least one local region, if containing a recognizable string or figure, in the specific building with local region(s) in the building(s) of the candidate panoramic image(s) by using a technology of recognizing a string or a figure; and determining top n panoramic image(s) as the result of matching the local region. | 04-19-2012 |
20120099782 | IMAGE PROCESSING APPARATUS AND METHOD - Provided is an image processing apparatus for extracting a three-dimensional (3D) feature point from a depth image. An input processing unit may receive a depth image and may receive, via a user interface, selection information of at least one region that is selected as a target region in the depth image. A geometry information analyzer of the image processing apparatus may analyze geometry information of the target region within the input depth image, and a feature point extractor may extract at least one feature point from the target region based on the geometry information of the target region. | 04-26-2012 |
20120106830 | Texture Identification - Technologies are generally described for determining a texture of an object. In some examples, a method for determining a texture of an object includes receiving a two-dimensional image representative of a surface of the object, estimating a three-dimensional (3D) projection of the image, transforming the 3D projection into a frequency domain, projecting the 3D projection in the frequency domain onto a spherical co-ordinate system, and determining the texture of the surface by analyzing spectral signatures extracted from the 3D projection on the spherical co-ordinate system. | 05-03-2012 |
20120106831 | STEREO VISION BASED DICE RECOGNITION SYSTEM AND STEREO VISION BASED DICE RECOGNITION METHOD FOR UNCONTROLLED ENVIRONMENTS - A dice recognition system and a dice recognition method for uncontrolled environments are provided. In the present invention, the number of dots on a dice is automatically recognized in an uncontrolled environment by using multiple cameras. The present dice recognition system differs in at least two aspects from any existing automatic dice recognition system which uses a single camera for recognizing dice in an enclosed environment. Firstly, an existing automatic dice recognition system uses a single camera to obtain planar images for dice recognition, while the present dice recognition system uses multiple cameras to obtain images from different viewpoints for dice recognition. Secondly, the present dice recognition system is designed for uncontrolled environments, and can be applied to an open-table game in a general gambling place for dice recognition without changing the original dice, dice cup, and other related objects. | 05-03-2012 |
20120106832 | METHOD AND APPARATUS FOR CT IMAGE RECONSTRUCTION - A method and apparatus for CT image reconstruction may include selecting, by a unit, projection data of the same height on a curve having a curvature approximate to that of the scanning circular orbit, implementing, by a unit, a weighting processing on the selected projection data, filtering, by a unit, the weighting processed projection data along a horizontal direction, implementing, by a unit, three-dimensional back projection on the filtered projection data along the direction of ray. The method and apparatus can effectively eliminate cone beam artifact under a large cone angle. | 05-03-2012 |
20120106833 | METHOD FOR OBTAINING A POSITION MATCH OF 3D DATA SETS IN A DENTAL CAD/CAM SYSTEM - A method for designing tooth surfaces of a digital dental prosthetic item using a first 3D model of a preparation site and/or of a dental prosthetic item and a second 3D model. The second model comprises regions matching some regions on the first 3D model and regions differing from other regions of the first 3D model. The non-matching regions contain surface information. At least three pairs of corresponding points are selected on the matching region on the first 3D model and second 3D model. The positional correlation of the second 3D model with reference to the first 3D model is determined based on the at least three pairs, and portions of the non-matching regions of the first and second 3D models are implemented for designing the tooth surface of the prosthetic item, taking into consideration the positional correlation of these models relative to each other. | 05-03-2012 |
20120114223 | Method and apparatus for orienting image representative data - A method for processing a three-dimensional image file captured directly from a live subject, the file including the cranium of the subject, comprises: providing a vertex point cloud for the three-dimensional image file; determining a median point for the vertex point cloud; determining a point on the cranium; and utilizing the median point and the cranium point to define a z-axis for the three-dimensional image file. | 05-10-2012 |
20120114224 | SYSTEM AND METHOD OF IMAGE PROCESSING - A method of image processing comprising receiving a plurality of interpolated images, interpolated from two adjacent camera positions having different image planes, and applying a transformation to each interpolated image to a respective one of a plurality of intermediate image planes, wherein each intermediate image plane is oriented intermediate to the image planes of the two adjacent camera positions depending on a viewing angle of that interpolated image relative to the adjacent camera positions. Also disclosed are an integrated circuit or processor, an apparatus for capturing images, and an apparatus for displaying images. | 05-10-2012 |
20120114225 | IMAGE PROCESSING APPARATUS AND METHOD OF GENERATING A MULTI-VIEW IMAGE - An image processing apparatus may detect an occlusion boundary between objects within an input depth image by applying an edge detection algorithm to the input depth image. The image processing apparatus may classify the occlusion boundary into a foreground region boundary and a background region boundary using a depth gradient vector direction of the occlusion boundary, and may extract an occlusion region of the input depth image using the foreground region boundary. | 05-10-2012 |
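A rough sketch of the boundary classification step in 20120114225: boundary pixels detected on the depth image are split into a foreground-side and a background-side set by looking at the depth gradient across the edge. The smaller-depth-means-closer convention and the one-pixel sampling step are assumptions.

```python
import numpy as np

def classify_occlusion_boundary(depth, edge_mask):
    """Split boundary pixels into foreground-side and background-side using
    the depth gradient direction (sign convention assumed: smaller = closer)."""
    gy, gx = np.gradient(depth.astype(float))
    ys, xs = np.nonzero(edge_mask)
    norm = np.hypot(gx[ys, xs], gy[ys, xs]) + 1e-9
    # Sample the depth one pixel along the gradient direction.
    step_y = np.clip(ys + np.round(gy[ys, xs] / norm).astype(int), 0, depth.shape[0] - 1)
    step_x = np.clip(xs + np.round(gx[ys, xs] / norm).astype(int), 0, depth.shape[1] - 1)
    foreground = depth[ys, xs] < depth[step_y, step_x]   # closer than across the edge
    fg_boundary = np.zeros_like(edge_mask)
    bg_boundary = np.zeros_like(edge_mask)
    fg_boundary[ys[foreground], xs[foreground]] = True
    bg_boundary[ys[~foreground], xs[~foreground]] = True
    return fg_boundary, bg_boundary
```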
20120121162 | Filtering apparatus and method for high precision restoration of depth image - A high speed filtering apparatus and a method for high precision restoration of a depth image are provided. The high speed filtering apparatus for high precision restoration of the depth image may include a block setting unit to set a first block including a target pixel, and to set a second block with respect to a central pixel distributed around the target pixel based on a size of the first block, a weight determining unit to determine a pixel weight with respect to each pixel in the second block, and to determine a block weight with respect to the second block by applying the pixel weight, and a processor to filter the target pixel based on the block weight, thereby accurately filtering the target pixel. | 05-17-2012 |
20120121163 | 3D DISPLAY APPARATUS AND METHOD FOR EXTRACTING DEPTH OF 3D IMAGE THEREOF - A three-dimensional (3D) display apparatus and a method for extracting a depth of a 3D image of the 3D display apparatus are provided. The 3D display apparatus includes: an image input unit which receives an image; a 3D image generator which generates a 3D image of which a depth is adjusted according to a relative motion between global and local motions of the image; and an image output unit which outputs the 3D image. | 05-17-2012 |
20120121164 | IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREFOR - An image processing apparatus according to the present invention comprises a calculation unit that calculates a sharpness of a 2D image for each region thereof; an image processing unit that performs image processing, in a region with a sharpness calculated by the calculation unit being higher than a first predetermined value, to increase that sharpness, and performs image processing, in a region with a sharpness calculated by the calculation unit being lower than a second predetermined value which is equal to or lower than the first predetermined value, to reduce that sharpness; and a generation unit that generates, from the 2D image processed by the image processing unit, an image for a left eye and an image for a right eye by shifting the 2D image in a horizontal direction. | 05-17-2012 |
20120121165 | Method and apparatus for time of flight sensor 2-dimensional and 3-dimensional map generation - A method and apparatus for Time Of Flight sensor 2-dimensional and 3-dimensional map generation. The method includes retrieving Time Of Flight sensor fixed point data to obtain four phases of Time Of Flight fixed point raw data, computing Gray scale image array and phase differential signal arrays utilizing four phases of TOF fixed point raw data, computing Gray image array and Amplitude image array for fixed point, converting the phase differential signal array from fixed point to floating point, performing the floating point division for computing Arctan, TOF depthmap, and 3-dimensional point cloud map for Q format fixed point, and generating depthmap, 3-dimensional cloud coefficients and 3-dimensional point cloud for Q format fixed point. | 05-17-2012 |
20120121166 | METHOD AND APPARATUS FOR THREE DIMENSIONAL PARALLEL OBJECT SEGMENTATION - A method and apparatus for parallel object segmentation. The method includes retrieving at least a portion of a 3-dimensional point cloud data x, y, z of a frame, dividing the frame into sub-image frames if the sub-frame based object segmentation is enabled, | 05-17-2012 |
20120121167 | FINITE DATASET INTERPOLATION METHOD - The invention provides a fast method for a high-quality interpolation of a finite multidimensional dataset. It has particular application in digital image processing, including, but not limited to, processing of both still images and real-time image/data processing. The method uses discrete cosine and sine transforms of appropriate types to convert, in blocks of desired size, the initial dataset to the frequency domain. Proposed interpolators calculate a chain of inverse transforms of non-square sizes that perform the interpolation. The larger transform is broken into smaller transforms of non-square size using a recursive size reduction process of FFT-type, and the smaller transforms are calculated directly exploiting the symmetry properties of smaller interpolator functions involved. An output dataset is then assembled using the calculated transforms. The method avoids the computationally costly process of inflating the coefficient space by padding zeros exploited for DCT-based interpolations previously. | 05-17-2012 |
20120121168 | COMPOUND OBJECT SEPARATION - Representations of an object in an image generated by an imaging apparatus can comprise two or more separate sub-objects, producing a compound object. Compound objects can negatively affect the quality of object visualization and threat identification performance. As provided herein, a compound object can be separated into sub-objects. Topology score map data, representing topological differences in the potential compound object, may be computed and used in a statistical distribution to identify modes that may be indicative of the sub-objects. The identified modes may be assigned a label and a voxel of the image data indicative of the potential compound object may be relabeled based on the label assigned to a mode that represents data corresponding to properties of a portion of the object that the voxel represents to create image data indicative of one or more sub-objects. | 05-17-2012 |
20120128234 | System for Generating Images of Multi-Views - The present invention provides a system for generating images of multi-views. The system includes a processing unit; an image range calculating module coupled to the processing unit to calculate the ranges of a background image and a main body image of a 2D original image of an article; a depth model generating module coupled to the processing unit to generate a depth model according to an equation; an image cutting module coupled to the processing unit to cut the 2D original image of the article or the depth model to generate a cut 2D image of the article or a depth model with a main body image outline; a pixel shifting module coupled to the processing unit to shift every pixel in the main body image of the 2D original image of the article according to the depth model with the main body image outline to obtain shifted main body images of multi-views; and an image synthesizing module coupled to the processing unit to synthesize the shifted main body images of multi-views and background figures of multi-views to obtain final images of multi-views for 3D image reconstruction. | 05-24-2012 |
20120128235 | APPARATUS AND METHOD FOR RECONSTRUCTING COMPUTED TOMOGRAPHY IMAGE USING COLOR CHANNEL OF GRAPHIC PROCESSING UNIT - Provided is an apparatus and method for reconstructing a computed tomography (CT) image using a color channel of a graphic processing unit (GPU) that reconstructs a three-dimensional (3D) image using a projection image obtained from a CT device. According to an embodiment of the present invention, an apparatus for reconstructing a CT image may include a tomography unit to acquire a plurality of projection images, a filter application unit to load the plurality of projection images on a texture memory having a color channel, and filter the plurality of projection images, and a back-projection application unit to apply a back-projection scheme to the plurality of projection images loaded on the texture memory having a color channel. | 05-24-2012 |
20120128236 | METHOD AND APPARATUS FOR STEREO MISALIGNMENT ESTIMATION USING MODIFIED AFFINE OR PERSPECTIVE MODEL - A method and apparatus for estimating stereo misalignment using a modified affine or perspective model. The method includes dividing a left frame and a right frame into blocks, comparing horizontal and vertical boundary signals in the left frame and the right frame, estimating the horizontal and the vertical motion vector for each block in a reference frame, selecting reliable motion vectors from a set of motion vectors, dividing the selected block into smaller features, feeding the data to an affine or a perspective transformation model to solve for the model parameters, running the model parameters through a temporal filter, partitioning the estimated misalignment parameters between the left frame and the right frame, and modifying the left frame and the right frame to save some boundary space. | 05-24-2012 |
20120134574 | IMAGE PROCESSING APPARATUS, DISPLAY APPARATUS, IMAGE PROCESSING METHOD AND IMAGE PROCESSING PROGRAM - Disclosed herein is an image processing apparatus including: a depth-information extraction section; a luminance extraction section; a contrast extraction section; a gain generation section; and a correlation estimation section. | 05-31-2012 |
20120134575 | Systems and Methods for Tracking a Model - An image such as a depth image of a scene may be received, observed, or captured by a device and a model of a user in the depth image may be generated. The background of a received depth image may be removed to isolate a human target in the received depth image. A model may then be adjusted to fit within the isolated human target in the received depth image. To adjust the model, a joint or a bone may be magnetized to the closest pixel of the isolated human target. The joint or the bone may then be refined such that the joint or the bone may be further adjusted to a pixel equidistant between two edges of the body part of the isolated human target where the joint or bone may have been magnetized. | 05-31-2012 |
20120141014 | COLOR BALANCING FOR PARTIALLY OVERLAPPING IMAGES - When photographs are to be combined into a single image, haze correction and/or color balancing may be performed. The photographs may be analyzed and left-clipped in order to darken the photographs and to increase the density of pixels in the low-luminosity region, thereby decreasing the perception of haze. When the photographs are combined into one continuous image, tie points are selected that lie in regions where the photographs overlap. The tie points may be selected based on visual similarity of the photographs in the region around the tie point, using a variety of algorithms. Functions are then chosen to generate saturation and luminosity values that minimize, at the tie points, the cost of using the generated values as opposed to the actual saturation and luminosity values. These functions are then used to generate saturation and luminosity values for the full image. | 06-07-2012 |
20120141015 | VANISHING POINT ESTIMATION SYSTEM AND METHODS - System and methods for estimating a vanishing point within an image, comprising: programming executable on a processor for computing line segment estimation of one or more lines in said image, wherein one or more of the lines comprise multiple line segments fitted as a single least-mean-square-error (LMSE) line. Additionally, the one or more lines having multiple line segments are represented as a single least-mean-square-error (LMSE) fitted line, and the one or more lines are intersected to locate a vanishing point in a density space. | 06-07-2012 |
20120141016 | VIRTUAL VIEWPOINT IMAGE SYNTHESIZING METHOD AND VIRTUAL VIEWPOINT IMAGE SYNTHESIZING SYSTEM - Provided is a virtual viewpoint image synthesizing method in which a virtual viewpoint image viewed from a virtual viewpoint is synthesized based on image information obtained from a plurality of viewpoints. The virtual viewpoint image is synthesized through a reference images obtaining step, a depth maps generating step, an up-sampling step, a virtual viewpoint information obtaining step, and a virtual viewpoint image synthesizing step. | 06-07-2012 |
20120148145 | SYSTEM AND METHOD FOR FINDING CORRESPONDENCE BETWEEN CAMERAS IN A THREE-DIMENSIONAL VISION SYSTEM - This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of a runtime object and determine the pose of the object, and in which at least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies (perspective or non-perspective), based on their trained object features to generate a set of 3D image features and thereby determine a 3D pose of the object. In this manner the speed and accuracy of the overall pose determination process is improved. The non-perspective lens can be a telecentric lens. | 06-14-2012 |
20120148146 | SYSTEM FOR MAKING 3D CONTENTS PROVIDED WITH VISUAL FATIGUE MINIMIZATION AND METHOD OF THE SAME - Disclosed are a system for making 3D contents provided with visual fatigue minimization and a method of the same. More particularly, an exemplary embodiment of the present invention provides a system for making 3D contents including: a human factor information unit generating guide information for making 3D contents by considering factors causing visual fatigue of the 3D contents; and a 3D contents making unit applying guide information generated by the human factor information unit to 3D contents data inputted for making the 3D contents to make the 3D contents, and a method of making 3D contents. | 06-14-2012 |
20120148147 | STEREOSCOPIC IMAGE DISPLAY SYSTEM, DISPARITY CONVERSION DEVICE, DISPARITY CONVERSION METHOD AND PROGRAM - Disparity in a stereoscopic image is converted, according to features of a configuration element of an image that influences depth perception of a stereoscopic image. A disparity detecting unit | 06-14-2012 |
20120155742 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - According to one embodiment, an image processing device includes a plurality of parallax image generators. Each of the parallax image generators is configured to generate a first image and a second image based on an input image and a parameter for setting a distance between viewpoints. There is a first parallax between the first image and the second image. The first parallax depends on the parameter for setting the distance between viewpoints. The input image is inputted to the parallax image generators in common. A plurality of parameters for setting the distance between viewpoints different from each other are inputted to the parallax image generators, respectively. | 06-21-2012 |
20120155743 | APPARATUS AND METHOD FOR CORRECTING DISPARITY MAP - Disclosed herein are an apparatus and method for correcting a disparity map. The apparatus includes a disparity map area setting unit, a pose estimation unit, and a disparity map correction unit. The apparatus removes the noise of the disparity map attributable to stereo matching and also fills in holes attributable to occlusion using information about the depth of a 3-dimensional (3D) model produced in a preceding frame of a current frame, thereby improving a disparity map and depth performance and providing high-accuracy depth information to an application to be used. | 06-21-2012 |
20120155744 | IMAGE GENERATION METHOD - A method of generating output image data representing a view from a specified spatial position in a real physical environment. The method comprises receiving data identifying the spatial position in the physical environment, receiving image data, the image data having been acquired using a first sensing modality and receiving positional data indicating positions of a plurality of objects in the real physical environment, the positional data having been acquired using a second sensing modality. At least part of the received image data is processed based upon the positional data and the data representing the specified spatial position to generate the output image data. | 06-21-2012 |
20120155745 | APPARATUS AND METHOD FOR EXTRACTING CORRESPONDENCES BETWEEN AERIAL IMAGES - Disclosed herein is an apparatus and method for extracting correspondences between aerial images. The apparatus includes a line extraction unit, a line direction determination unit, a building top area extraction unit, and a correspondence extraction unit. The line extraction unit extracts lines corresponding to buildings from aerial images. The line direction determination unit defines the directions of the lines as x, y and z axis directions based on a two-dimensional (2D) coordinate system. The building top area extraction unit rotates lines in the x and y axis directions so that the lines are arranged in parallel with the horizontal and vertical directions of the 2D image, and then extracts building top areas from rectangles. The correspondence extraction unit extracts correspondences between the aerial images by comparing the locations of the building top areas extracted from the aerial images. | 06-21-2012 |
20120155746 | ADAPTIVE HIGH SPEED/HIGH RESOLUTION 3D IMAGE RECONSTRUCTION METHOD FOR ANY MEASUREMENT DISTANCE - Disclosed is a method for performing a 3D image reconstruction at a high speed and high resolution, regardless of a measurement distance. Specifically, a weight for image reconstruction is previously set, and a 3D image reconstruction algorithm is performed at a high speed, without reducing a resolution, by a parallel processing for image reconstruction, a computation of a partial region using a database based on a measurement result, and a generation of a variable pulse waveform. | 06-21-2012 |
20120155747 | STEREO IMAGE MATCHING APPARATUS AND METHOD - The present invention relates to a stereo image matching apparatus and method. The stereo matching apparatus includes a window image extraction unit for extracting window images, each having a predetermined size around a selected pixel, for individual pixels of images that constitute stereo images. A local support-area determination unit extracts a similarity mask having similarities equal to or greater than a threshold and a local support-area mask having neighbor connections to a center pixel of the similarity mask, from each of similarity images generated depending on differences in similarity between pixels of the window images. A similarity extraction unit calculates a local support-area similarity from a sum of similarities of a local support-area. A disparity selection unit selects a pair of window images for which the local support-area similarity is maximized, from among the window images, and then determines a disparity for the stereo images. | 06-21-2012 |
20120155748 | APPARATUS AND METHOD FOR PROCESSING STEREO IMAGE - Proposed are an apparatus and a method for processing a stereo image using infrared images, which may generate a correction pattern for stereo matching by analyzing in real time at least one of the stability of a stereo image, the number of feature points included in a camera image, and an illumination condition, and emit the correction pattern toward a subject as a feedback value. According to exemplary embodiments of the present invention, it is possible to improve the stability and accuracy of the stereo image. | 06-21-2012 |
20120155749 | METHOD AND DEVICE FOR CODING A MULTIDIMENSIONAL DIGITAL SIGNAL - The present invention relates to a method and a device for coding a multidimensional signal (LL | 06-21-2012 |
20120155750 | METHOD AND APPARATUS FOR RECEIVING MULTIVIEW CAMERA PARAMETERS FOR STEREOSCOPIC IMAGE, AND METHOD AND APPARATUS FOR TRANSMITTING MULTIVIEW CAMERA PARAMETERS FOR STEREOSCOPIC IMAGE - Provided is a method of receiving multiview camera parameters for a stereoscopic image. The method includes: extracting multiview camera parameter information for a predetermined data section from a received stereoscopic image data stream; extracting matrix information including at least one of translation matrix information and rotation matrix information for the predetermined data section from the multiview camera parameter information; and restoring coordinate systems of multiview cameras by using the extracted matrix information. | 06-21-2012 |
20120163700 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - An image processing device which includes: a sorting circuit that inputs a stereoscopic image signal formed of a left eye image and a right eye image and outputs the left and right eye images at the same timing line by line; a parallax generation circuit generating respective parallax images from the left eye image and the right eye image which are output from the sorting circuit; a delay circuit for each of the left and right eye images, which delays and outputs the left and right eye images which are output from the sorting circuit by the processing time of the parallax generation circuit, respectively; and an image combining circuit which synthesizes the images which are output from the delay circuit and the parallax generation circuit, respectively, and obtains the multi-viewpoint images. | 06-28-2012 |
20120163701 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing device including an image input unit that inputs a two-dimensional image signal; a depth information output unit that inputs or generates depth information; a depth information reliability output unit that inputs or generates the reliability of depth information that the depth information output unit outputs; an image conversion unit that inputs an image signal, depth information, and depth information reliability, and generates and outputs a left eye image and a right eye image for realizing binocular stereoscopic vision; and an image output unit that outputs a left eye image and a right eye image, wherein the image conversion unit has a configuration of performing image generation of at least one of a left eye image and a right eye image and executes a conversion process in which the depth information and the depth information reliability are applied as conversion control data during the image conversion. | 06-28-2012 |
20120163702 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus generating a multi-viewpoint image includes a parallax detection unit that receives only one of a plurality of actually-taken images including a left-eye image and a right-eye image and detects parallax of the received image so as to generate a parallax map, a first pseudo three-dimensional image generation unit that receives the left-eye image and generates one or more externally-provided or internally-provided images, based on the parallax map generated by the parallax detection unit, a first delay unit that receives the left-eye image and outputs the left-eye image with elapse of delay time, a second pseudo three-dimensional image generation unit that receives the right-eye image and generates one or more externally-provided or internally-provided images, based on the parallax map, and a second delay unit that receives the right-eye image and outputs the right-eye image with elapse of delay time. | 06-28-2012 |
20120163703 | STEREO MATCHING SYSTEM USING DYNAMIC PROGRAMMING AND METHOD THEREOF - Disclosed is a stereo matching system and method using a dynamic programming scheme. The stereo matching system and method using a dynamic programming scheme according to the present invention may perform Viterbi-type stereo matching using at least two different penalty of disparity discontinuity (PD) values and synthesize the performed stereo matching results, thereby outputting a disparity map. | 06-28-2012 |
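A single-scanline, single-penalty sketch of the Viterbi-style matching referenced in 20120163703 (the use of two different PD values and the synthesis of multiple results are omitted); all names and cost choices are illustrative assumptions.

```python
import numpy as np

def scanline_viterbi_disparity(left_row, right_row, max_disp=16, penalty=4.0):
    """Minimal scanline matcher: unary cost |left[x] - right[x-d]| plus a
    smoothness penalty proportional to the disparity jump between columns."""
    w = len(left_row)
    disps = np.arange(max_disp + 1)
    unary = np.full((w, max_disp + 1), np.inf)
    for d in disps:
        if d >= w:
            break
        unary[d:, d] = np.abs(left_row[d:].astype(float) - right_row[:w - d])
    unary[np.isinf(unary)] = 255.0                 # out-of-range: flat high cost

    cost = unary.copy()
    back = np.zeros((w, max_disp + 1), dtype=int)
    jump = penalty * np.abs(disps[:, None] - disps[None, :])   # (d_prev, d)
    for x in range(1, w):
        total = cost[x - 1][:, None] + jump        # candidate transition costs
        back[x] = np.argmin(total, axis=0)
        cost[x] += np.min(total, axis=0)

    # Backtrack the cheapest disparity path.
    out = np.zeros(w, dtype=int)
    out[-1] = int(np.argmin(cost[-1]))
    for x in range(w - 2, -1, -1):
        out[x] = back[x + 1][out[x + 1]]
    return out
```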
20120163704 | APPARATUS AND METHOD FOR STEREO MATCHING - An image matching apparatus includes a bilateral filter that filters a left image and a right image to output a second left image and a second right image; a census cost calculation unit performing census transform on a window based on a first pixel of the second left image and a window based on a second pixel of the second right image to calculate a census cost corresponding to a pair of pixels of the first and second pixels; a support weight calculation unit obtaining support weights of the left and right images or the second left and second right images; a cost aggregation unit obtaining energy values of nodes corresponding to the pair of pixels of the first and second pixels using the census cost and the support weights; and a dynamic programming unit performing image matching using dynamic programming by the energy values of each node obtained. | 06-28-2012 |
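The census cost mentioned in 20120163704 can be sketched independently of the rest of the pipeline: each pixel is encoded by comparing it against its window neighbours, and the cost for a pair of pixels is the Hamming distance of their codes. The window size and bit ordering here are arbitrary assumptions.

```python
import numpy as np

def census_transform(img, win=3):
    """Census code per pixel: one bit per neighbour in a win x win window,
    set when the neighbour is darker than the centre."""
    r = win // 2
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    padded = np.pad(img, r, mode='edge')
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            out = (out << np.uint64(1)) | (neighbour < img).astype(np.uint64)
    return out

def census_cost(code_a, code_b):
    """Hamming distance between two census codes (the per-pixel-pair cost)."""
    return bin(int(np.bitwise_xor(code_a, code_b))).count("1")
```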
20120163705 | IMAGE FILE PROCESSING APPARATUS WHICH GENERATES AN IMAGE FILE TO INCLUDE STEREO IMAGE DATA AND COLLATERAL DATA RELATED TO THE STEREO IMAGE DATA, AND INFORMATION RELATED TO AN IMAGE SIZE OF THE STEREO IMAGE DATA, AND CORRESPONDING IMAGE FILE PROCESSING METHOD - Stereo image data is generated based on a plurality of monocular images of a same subject with a predetermined parallax, a collateral data generating section generates collateral data related to the stereo image data, and a stereo image size information generating unit generates information related to an image size of the stereo image data. An image file generating unit generates an image file in conversion to a predetermined file format upon synthesizing the stereo image data and the collateral data, and further adds the information related to the image size to the collateral data at inner and outer areas thereof. | 06-28-2012 |
20120170831 | PRODUCING STEREOSCOPIC IMAGE - A method of producing a digital stereoscopic image using a processor is disclosed. The method includes providing a plurality of digital image files which include digital images and the time of capture of each image and using time of capture to identify candidate pairs of images. The method further includes using the processor to analyze the image content of the candidate pairs of images to identify at least one image pair that can be used to produce a stereoscopic image; and using an identified image pair to produce the digital stereoscopic image. | 07-05-2012 |
20120170832 | DEPTH MAP GENERATION MODULE FOR FOREGROUND OBJECT AND METHOD THEREOF - The present invention discloses a depth map generation module for a foreground object and the method thereof. The depth map generation method for a foreground object comprises the following steps: receiving image sequence data, wherein the image sequence data includes a plurality of image frames; selecting at least one key image frame from the image sequence data; providing at least one piece of depth-indicative information and a contour of a first segment in the at least one key image frame; and performing signal processing steps by a microprocessor. | 07-05-2012 |
20120170833 | MULTI-VIEW IMAGE GENERATING METHOD AND APPARATUS - According to an embodiment, a multi-view image generating method includes synthesizing images having a same depth value into a single image from among a plurality of images, based on depth values each being associated with one of the plurality of images and indicating image position in the depth direction of the image; shifting, with respect to each of a plurality of viewpoints each giving a different disparity, a synthesized image obtained at the synthesizing, according to a shift vector corresponding to the viewpoint and the depth value of the synthesized image in a direction and with an amount indicated in the shift vector, so as to generate an image having disparity given thereto; and generating a multi-view image in which the images that are shifted and that are given disparity at the shifting are arranged in a predetermined format. | 07-05-2012 |
20120177283 | FORMING 3D MODELS USING TWO IMAGES - A method for determining a three-dimensional model from two images comprising: receiving first and second images captured from first and second viewpoints, respectively, each image including a two-dimensional image together with a corresponding range map; identifying a set of corresponding features in the first and second two-dimensional images; removing any extraneous corresponding features in the set of corresponding features responsive to the first and second range maps to produce a refined set of corresponding features; determining a geometrical transform for transforming three-dimensional coordinates for the first image to be consistent with three-dimensional coordinates for the second image responsive to three-dimensional coordinates for the refined set of corresponding features, the three-dimensional coordinates comprising two-dimensional pixel coordinates from the corresponding two-dimensional image together with a range coordinate from the corresponding range map; and determining a three-dimensional model responsive to the first image, the second image and the geometrical transform. | 07-12-2012 |
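The geometrical transform in 20120177283 maps 3D coordinates of one view onto the other from a refined set of correspondences. A standard way to compute such a rigid transform from matched 3D points is the SVD-based (Kabsch) fit sketched below; the simple residual-based outlier rejection stands in for the range-map consistency check and is an assumption, not the patented procedure.

```python
import numpy as np

def rigid_transform_3d(pts_a, pts_b):
    """Least-squares rotation R and translation t with R @ a + t ~ b,
    for corresponding Nx3 point sets, via SVD (Kabsch algorithm)."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_a - ca).T @ (pts_b - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def refine_and_fit(pts_a, pts_b, inlier_thresh=0.05):
    """Fit once, drop correspondences with large residuals (a stand-in for
    the range-map based pruning), then refit on the survivors."""
    R, t = rigid_transform_3d(pts_a, pts_b)
    residuals = np.linalg.norm((pts_a @ R.T + t) - pts_b, axis=1)
    keep = residuals < inlier_thresh
    return rigid_transform_3d(pts_a[keep], pts_b[keep])
```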
20120177284 | FORMING 3D MODELS USING MULTIPLE IMAGES - A method for determining a three-dimensional model from three or more images comprising: receiving three or more images, each image being captured from a different viewpoint and including a two-dimensional image together with a corresponding range map; designating a plurality of pairs of received images, each pair including a first image and a second image. For each of the designated pairs a geometric transform is determined by identifying a set of corresponding features in the two-dimensional images; removing any extraneous corresponding features to produce a refined set of corresponding features; and determining the geometrical transformation for transforming three-dimensional coordinates for the first image to three-dimensional coordinates for the second image responsive to three-dimensional coordinates for the refined set of corresponding features. A three-dimensional model is determined responsive to the three or more received images and the geometrical transformations for the designated pairs of received images. | 07-12-2012 |
20120177285 | STEREO IMAGE PROCESSING APPARATUS, STEREO IMAGE PROCESSING METHOD AND PROGRAM - An imaging device (100) includes: an imaging element (103) obtained by repeatedly arranging a pixel W for entire wavelength band, a W-R pixel for R, a W-G pixel for G, and a W-B pixel for B; a filter (102) configured such that a portion corresponding to the pixel W allows the entire wavelength band of a wavelength band within a certain range to pass and portions corresponding to the W-R pixel, the W-G pixel, and the W-B pixel reflect wavelength bands of corresponding colors, respectively; a reflection amount calculating unit (113) for calculating signal values of R, G, and B by subtracting a value of an image reading signal of each of the W-R pixel, the W-G pixel, and the W-B pixel from a value of an image reading signal of the pixel W. | 07-12-2012 |
20120183201 | METHOD AND SYSTEM FOR RECONSTRUCTING A STEREOSCOPIC IMAGE STREAM FROM QUINCUNX SAMPLED FRAMES - A method for reconstructing a stereoscopic image stream from a plurality of compressed frames is provided. Each compressed frame consists of a merged image formed by juxtaposing a sampled image frame of a left image and a sampled image frame of a right image. Each sampled image frame has half the number of original pixels disposed at intersections of a plurality of horizontal lines and a plurality of vertical lines in a staggered quincunx pattern in which original pixels surround missing pixels. Each missing pixel is reconstructed according to at least 5 horizontal pixel pairs and 3 vertical pixel pairs in a compressed frame. | 07-19-2012 |
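The abstract of 20120183201 specifies reconstruction from at least 5 horizontal and 3 vertical pixel pairs, which is not detailed enough to reproduce here; the sketch below fills quincunx-pattern holes with a simple average of the existing 4-neighbors, purely as an illustration of the interpolation step and not as the patented filter.

```python
import numpy as np

def fill_quincunx(image, present_mask):
    """image: 2-D array in which only quincunx-sampled pixels are valid.
    present_mask: boolean array, True where an original pixel exists.
    Missing pixels are filled with the mean of their valid 4-neighbors
    (a simplification of the patented 5/3 pixel-pair filter)."""
    h, w = image.shape
    out = image.astype(np.float64).copy()
    for y in range(h):
        for x in range(w):
            if present_mask[y, x]:
                continue
            neigh = []
            for dy, dx in ((0, -1), (0, 1), (-1, 0), (1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and present_mask[ny, nx]:
                    neigh.append(image[ny, nx])
            if neigh:
                out[y, x] = np.mean(neigh)
    return out
```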
20120183202 | Methods and Systems for 2D to 3D Conversion from a Portrait Image - A method for converting a 2D image into a 3D image includes receiving the 2D image; determining whether the received 2D image is a portrait, wherein the portrait can be a face portrait or a non-face portrait; if the received 2D image is determined to be a portrait, creating a disparity between a left eye image and a right eye image based on a local gradient and a spatial location; generating the 3D image based on the created disparity; and outputting the generated 3D image. | 07-19-2012 |
20120183203 | APPARATUS AND METHOD FOR EXTRACTING FEATURE OF DEPTH IMAGE - Provided is a feature extraction method and apparatus to extract a feature of a three-dimensional (3D) depth image. The feature extraction apparatus may generate a plurality of level sets using a depth image, and may extract a feature for each level depth image. | 07-19-2012 |
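One plausible reading of the "plurality of level sets" in 20120183203 is a stack of binary masks obtained by thresholding the depth image at increasing depths, with a feature extracted per level. The sketch below follows that reading; the evenly spaced thresholds and the trivial per-level feature (foreground ratio) are assumptions.

```python
import numpy as np

def depth_level_sets(depth, num_levels=8):
    """Split a depth image into binary level sets at evenly spaced depth
    thresholds and return a simple per-level feature (foreground ratio)."""
    d_min, d_max = float(depth.min()), float(depth.max())
    # num_levels thresholds strictly between the minimum and maximum depth.
    thresholds = np.linspace(d_min, d_max, num_levels + 2)[1:-1]
    levels, features = [], []
    for t in thresholds:
        mask = depth <= t              # everything closer than the threshold
        levels.append(mask)
        features.append(mask.mean())   # fraction of pixels in this level set
    return levels, np.array(features)
```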
20120183204 | 3D MODELING AND RENDERING FROM 2D IMAGES - A method of converting an image from one form to another form by a conversion apparatus having a memory and a processor, the method including the steps of receiving a captured image, extracting at least one image dimension attribute from the image, calculating at least one dimension attribute of the image based on the image dimension attribute, modifying the image based on the calculated dimension attribute and the extracted dimension attribute, and displaying the modified image on a display unit. | 07-19-2012 |
20120183205 | METHOD FOR DISPLACEMENT MEASUREMENT, DEVICE FOR DISPLACEMENT MEASUREMENT, AND PROGRAM FOR DISPLACEMENT MEASUREMENT - Measurement of 3D displacement based on successively captured images of an object becomes difficult to perform because of the load imposed on an operator as the number of target portions defined on the object and the number of time steps for displacement measurement increase. A device for displacement measurement executes stereo measurement relative to a stereo image to generate 3D shape information and an orthographically projected image of an object for each time, and tracks the 2D image of the target portion through pattern matching between orthographically projected images at successive times to obtain a 2D displacement vector. The device for displacement measurement converts the start point and the end point of the 2D displacement vector into 3D coordinates, using the 3D shape information, to obtain a 3D displacement vector. | 07-19-2012 |
20120189190 | AUTOMATIC DETECTION AND GROUPING OF STRAIGHT LINES IN IMAGES FOR PERSONALIZATION - As set forth herein, a computer-implemented method is employed to place personalized text into an image. A location within the image is selected where the text is to be placed, and a region is grown around the selected location. The 3D geometry of the surface is estimated proximate to the location and sets of parallel straight lines in the image are identified and selected to define a bounding polygon into which text may be inserted. Optionally, a user is permitted to adjust the bounding polygon once it has been automatically generated. | 07-26-2012 |
20120189191 | METHODS FOR MATCHING GAIN AND COLOR FOR STEREOSCOPIC IMAGING SYSTEMS - Stereoscopic imaging devices may include stereoscopic imagers, stereoscopic displays, and processing circuitry. The processing circuitry may be used to collect auto white balance (AWB) statistics for each image captured by the stereoscopic imager. A stereoscopic imager may include two image modules that may be color calibrated relative to each other or relative to a standard calibrator. AWB statistics may be used by the processing circuitry to determine global, local and spatial offset gain adjustments to provide intensity matched stereoscopic images for display. AWB statistics may be combined by the processing circuitry with color correction offsets determined during color calibration to determine color-transformation matrices for displaying color matched stereoscopic images using the stereoscopic display. Gain and color-transformation corrections may be continuously applied during operation of a stereoscopic imaging device to provide intensity-matched, color-matched stereoscopic images in any lighting condition. | 07-26-2012 |
20120195492 | Method and apparatus for generating a dense depth map using an adaptive joint bilateral filter - A method and apparatus for generating a dense depth map. In one embodiment, the method includes applying a joint bilateral filter to a first depth map to generate a second depth map, where at least one filter weight of the joint bilateral filter is adapted based upon content of an image represented by the first depth map, and the second depth map has a higher resolution than the first depth map. | 08-02-2012 |
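A minimal joint (cross) bilateral upsampling sketch in the spirit of 20120195492: the low-resolution depth map is upsampled by weighting neighbors with a spatial Gaussian and a range Gaussian computed on a high-resolution guidance image. The window radius, the sigmas, and the nearest-neighbor pre-upsampling are illustrative assumptions; the patent's content-adaptive weight is not reproduced.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide, sigma_s=3.0, sigma_r=12.0, radius=4):
    """depth_lr: low-resolution depth map.
    guide: high-resolution grayscale guidance image (float), shape H x W.
    Returns an H x W depth map."""
    H, W = guide.shape
    scale_y = H / depth_lr.shape[0]
    scale_x = W / depth_lr.shape[1]
    # Nearest-neighbor upsample of the depth as a starting point.
    ys = np.minimum((np.arange(H) / scale_y).astype(int), depth_lr.shape[0] - 1)
    xs = np.minimum((np.arange(W) / scale_x).astype(int), depth_lr.shape[1] - 1)
    depth_up = depth_lr[np.ix_(ys, xs)].astype(np.float64)

    out = np.zeros_like(depth_up)
    offsets = range(-radius, radius + 1)
    for y in range(H):
        for x in range(W):
            wsum, dsum = 0.0, 0.0
            for dy in offsets:
                for dx in offsets:
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < H and 0 <= nx < W):
                        continue
                    w_s = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                    diff = guide[y, x] - guide[ny, nx]
                    w_r = np.exp(-(diff * diff) / (2 * sigma_r ** 2))
                    wsum += w_s * w_r
                    dsum += w_s * w_r * depth_up[ny, nx]
            out[y, x] = dsum / wsum
    return out
```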
20120195493 | STEREO MATCHING METHOD BASED ON IMAGE INTENSITY QUANTIZATION - A stereo matching method based on image intensity quantization is disclosed. The method includes several steps. First, a computer is provided with an image pair of an object, and image intensity quantization is performed on the image pair to obtain a quantization result. Then, according to the quantization result, a first extracted image pair is generated and used to obtain a first disparity map. A second extracted image pair is generated similarly to obtain a second disparity map. Next, the two disparity maps are compared with each other to obtain image error data. When an error contained in the image error data is smaller than or equal to an error threshold value, the computer outputs the second disparity map. Moreover, the accuracy of the disparity maps is improved by iterative processing. Therefore, the amount of information to be processed is minimized and the efficiency of data access/transmission is improved. | 08-02-2012 |
20120195494 | PSEUDO 3D IMAGE GENERATION DEVICE, IMAGE ENCODING DEVICE, IMAGE ENCODING METHOD, IMAGE TRANSMISSION METHOD, IMAGE DECODING DEVICE, AND IMAGE DECODING METHOD - A pseudo 3D image generation device includes frame memories that store a plurality of basic depth models used for estimating depth data based on a non-3D image signal and generating a pseudo 3D image signal; a depth model combination unit that combines the plurality of basic depth models for generating a composite depth model based on a control signal indicating composite percentages for combining the plurality of basic depth models; an addition unit that generates depth estimation data from the non-3D image signal and the composite depth models; and a texture shift unit that shifts the texture of the non-3D image for generating the pseudo 3D image signal. | 08-02-2012 |
20120201449 | IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREOF - An image processing apparatus and a control method thereof are provided. The image processing apparatus includes: a depth map estimating unit which estimates a depth map of a stereoscopic image; a region setup unit which sets up a region in the stereoscopic image; and a 3D effect adjusting unit which determines a difference in a depth level between the setup region and a surrounding region other than the setup region based on the estimated depth map, and adjusts a 3D effect of the stereoscopic image based on the determined difference in the depth level. | 08-09-2012 |
20120207383 | METHOD AND APPARATUS FOR PERFORMING SEGMENTATION OF AN IMAGE - A method and system for segmenting a plurality of images. The method comprises the steps of segmenting the image through a novel clustering technique, that is, generating a composite depth map including temporally stable segments of the image as well as segments in subsequent images that have changed. These changes may be determined by determining one or more differences between the temporally stable depth map and segments included in one or more subsequent frames. Thereafter, the portions of the one or more subsequent frames that include segments that have changed from their corresponding segments in the temporally stable depth map are processed and are combined with the segments from the temporally stable depth map to compute their associated disparities in one or more subsequent frames. The images may include a pair of stereo images acquired through a stereo camera system at a substantially similar time. | 08-16-2012 |
20120207384 | Representing Object Shapes Using Radial Basis Function Support Vector Machine Classification - A shape of an object is represented by a set of points inside and outside the shape. A decision function is learned from the set of points of the object. Feature points in the set of points are selected using the decision function, or a gradient of the decision function, and then a local descriptor is determined for each feature point. | 08-16-2012 |
20120219208 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus that includes an image input unit; a parallax acquisition unit configured to acquire a per-pixel or per-region parallax between two-viewpoint images; a main subject detection unit configured to detect a main subject on the two-viewpoint images; a parallax acquisition unit configured to acquire a parallax of the main subject; a setting unit configured to set a conversion factor of the parallax; a correction unit configured to correct the conversion factor of the parallax per pixel, per region, or per image; a multi-viewpoint image generation unit configured to convert at least one image of the two-viewpoint images in accordance with the corrected conversion factor of the parallax; an image adjustment unit configured to shift the two-viewpoint images or multi-viewpoint images to obtain a parallax appropriate for stereoscopic view; and a stereoscopically-displayed image generation unit configured to generate a stereoscopically-displayed image. | 08-30-2012 |
20120230580 | ANALYSIS OF STEREOSCOPIC IMAGES - A method of processing in an image processor a pair of images intended for stereoscopic presentation to identify left-eye and right-eye images of the pair. The method includes dividing both images of the pair into a plurality of like image regions, determining for each region a disparity value between the images of the pair to produce a set of disparity values, deriving for each region a confidence factor for the disparity value, determining a correlation parameter between the set of disparity values and a corresponding set of disparity values from a disparity model, in which the contribution of the disparity value for a region to the said correlation parameter is weighted in dependence on the confidence factor for that region, and identifying from said correlation parameter the left-eye and right-eye images of the pair, wherein the left eye and right images form a stereoscopic pair. | 09-13-2012 |
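The weighted correlation in 20120230580 can be sketched as follows: per-region disparities are compared against a disparity model, with each region's contribution weighted by its confidence factor, and a negative correlation suggests the images are swapped. The Pearson-style weighting and the decision rule below are illustrative assumptions, not the patented formulation.

```python
import numpy as np

def weighted_correlation(disp, model, conf):
    """Confidence-weighted correlation of per-region disparities against a
    disparity model (all arrays are flat, one value per region)."""
    w = conf / conf.sum()
    mx = np.sum(w * disp)
    my = np.sum(w * model)
    cov = np.sum(w * (disp - mx) * (model - my))
    sx = np.sqrt(np.sum(w * (disp - mx) ** 2))
    sy = np.sqrt(np.sum(w * (model - my) ** 2))
    return cov / (sx * sy + 1e-12)

def identify_left_right(disp, model, conf):
    """Positive correlation: assumed ordering is correct (first image is the
    left-eye view); negative: the pair is swapped.  Purely illustrative."""
    return 'left-right' if weighted_correlation(disp, model, conf) >= 0 else 'right-left'
```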
20120230581 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus includes an image acquisition unit for acquiring a real-world image, a space analysis unit for analyzing a three-dimensional space structure of the real-world image, a scale reference detection unit for detecting a length, in a three-dimensional space, of an object to be a scale reference that is included in the real-world image, and a scale determination unit for determining, from the length of the object detected by the scale reference detection unit, a scale of the three-dimensional space. | 09-13-2012 |
20120237111 | Performing Structure From Motion For Unordered Images Of A Scene With Multiple Object Instances - A technology is described for performing structure from motion for unordered images of a scene with multiple object instances. An example method can include obtaining a pairwise match graph using interest point detection for obtaining interest points in images of the scene to identify pairwise image matches using the interest points. Multiple metric two-view and three-view partial reconstructions can be estimated by performing independent structure from motion computation on a plurality of match-pairs and match-triplets selected from the pairwise match graph. Pairwise image matches can be classified into correct matches and erroneous matches using expectation maximization to generate geometrically consistent match labeling hypotheses and a scoring function to evaluate the match labeling hypotheses. A structure from motion computation can then be performed on the subset of match pairs which have been inferred as correct. | 09-20-2012 |
20120237112 | Structured Light for 3D Shape Reconstruction Subject to Global Illumination - Depth values in a scene are measured by projecting sets of patterns on the scene, wherein each set of patterns is structured with different spatial frequency using different encoding functions. Sets of images of the scene are acquired, wherein there is one image for each pattern in each set. Depth values are determined for each pixel at corresponding locations in the sets of images. The depth values of each pixel are analyzed, and the depth value is returned if the depth values at the corresponding locations are similar. Otherwise, the depth value is marked as having an error. | 09-20-2012 |
20120237113 | ELECTRONIC DEVICE AND METHOD FOR OUTPUTTING MEASUREMENT DATA - A method outputs measurement data automatically using an electronic device. The method obtains measurement data of feature elements from a two dimensional (2D) image of a measured object, determines a type of measurement applied to each feature element, obtains feature elements from planes of a three dimensional (3D) image of the measured object, and maps each of the obtained feature elements in the 3D image to the 2D image. The method further obtains sequential marked numbers from the 2D image, determines a feature element which is nearest to any marked number from the mapped feature elements, determines an output axis for each of the determined feature elements, and outputs measured results and measurement codes of the determined feature elements by reference to the measurement data, the type of measurement and the output axis of each determined feature element. | 09-20-2012 |
20120237114 | METHOD AND APPARATUS FOR FEATURE-BASED STEREO MATCHING - Disclosed are a method and apparatus for feature-based stereo matching. A method for stereo matching of a reference image and at least one comparative image captured by at least two cameras from different points of view using a computer device includes collecting the reference image and the at least one comparative image, extracting feature points from the reference image, tracking points corresponding to the feature points in the at least one comparative image using an optical flow technique, and generating a depth map according to correspondence-point tracking results. | 09-20-2012 |
20120237115 | Method for acquiring a 3D image dataset freed of traces of a metal object - An interpolation of data values is performed during the acquisition of a 3D image dataset which is free of traces of a metal object imaged in the underlying 2D image datasets. A target function is defined into which data values of the 3D image dataset that are dependent on said substitute data values are incorporated following preprocessing. The substitute data values are then varied iteratively until the value of the target function satisfies a predetermined criterion. Residual artifacts that still persist following the interpolation can thus be effectively reduced. | 09-20-2012 |
20120243774 | METHOD FOR RECONSTRUCTION OF URBAN SCENES - An urban scenes reconstruction method includes: acquiring digital data of a three-dimensional subject, the digital data comprising a 2D photograph and a 3D scan; fusing the 3D scan and the 2D photograph to create a depth-augmented photograph; decomposing the depth-augmented photograph into a plurality of constant-depth layers; detecting repetition patterns of each constant-depth layer; and using the repetitions to enhance the 3D scan to generate a polygon-level 3D reconstruction. | 09-27-2012 |
20120243775 | WIDE BASELINE FEATURE MATCHING USING COLLABORATIVE NAVIGATION AND DIGITAL TERRAIN ELEVATION DATA CONSTRAINTS - A method for wide baseline feature matching comprises capturing one or more images from an image sensor on each of two or more platforms when the image sensors have overlapping fields of view, performing a 2-D feature extraction on each of the captured images in each platform using local 2-D image feature descriptors, and calculating 3-D feature locations on the ellipsoid of the Earth surface from the extracted features using a position and attitude of the platform and a model of the image sensor. The 3-D feature locations are updated using digital terrain elevation data (DTED) as a constraint, and the extracted features are matched using the updated 3-D feature locations to create a common feature zone. A subset of features from the common feature zone is selected, and the subset of features is inputted into a collaborative filter in each platform. A convergence test is then performed on other subsets in the common feature zone, and falsely matched features are pruned from the common feature zone. | 09-27-2012 |
20120243776 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus includes a noise removal unit that corrects a geometric mismatch of optical noise of a left eye image and a right eye image by performing a noise removal process for removing the separately generated optical noise on the left eye image and the right eye image which are captured and obtained by a two-lens type stereoscopic image capturing camera. | 09-27-2012 |
20120243777 | SYSTEM AND METHOD FOR SEGMENTATION OF THREE-DIMENSIONAL IMAGE DATA - In one embodiment, a system for computing class identifiers for three-dimensional pixel data has been developed. The system comprises a plurality of class identifying processors, and a data grouper operatively connected to a first memory. Each class identifying processor has a plurality of inputs for at least one pixel value and a plurality of class identifiers for pixel values neighboring the at least one pixel value and each class identifying processor is configured to generate a class identifier for the at least one pixel value input with reference to the class identifiers for the neighboring pixel values. The data grouper is configured to retrieve a plurality of pixel values from the first memory and a plurality of class identifiers for pixel values neighboring the retrieved pixel values. | 09-27-2012 |
20120250976 | Wavelet transform on incomplete image data and its applications in image processing - A system and method for effectively performing wavelet transforms on incomplete image data includes an image processor that performs a green-pixel transformation procedure on incomplete color pixel matrices. The image processor then rearranges red, blue and transformed green pixels into four quadrants of contiguous pixels and applies two-dimensional (2D) wavelet thresholding schemes to each quadrant. After thresholding, an inverse procedure is applied to reconstruct the pixel values on the incomplete color pixel matrices. For further de-correlation of image data, the image processor may stack similar image patches in a three dimensional (3D) array and apply incomplete-data wavelet thresholding on the 3D array. The incomplete-data wavelet thresholding procedure may be put in an improved local similarity measurement framework to achieve better performance of image processing tasks. A CPU device typically controls the image processor to effectively perform the image processing procedure. | 10-04-2012 |
20120250977 | Method and System for Determining Projections in Non-Central Catadioptric Optical Systems - A three-dimensional (3D) location of a reflection point of a ray between a point in a scene (PS) and a center of projection (COP) of a camera of a catadioptric system is determined. The catadioptric system is non-central and includes the camera and a reflector, wherein a surface of the reflector is a quadric surface rotationally symmetric around an axis of symmetry. The 3D location of the reflection point is determined based on a law of reflection, an equation of the reflector, and an equation describing a reflection plane defined by the COP, the PS, and a point of intersection of a normal to the reflector at the reflection point with the axis of symmetry. | 10-04-2012 |
20120250978 | SCENE ANALYSIS USING IMAGE AND RANGE DATA - Image and range data associated with an image can be processed to estimate planes within the 3D environment in the image. By utilizing image segmentation techniques, image data can identify regions of visible pixels having common features. These regions can be used as candidate regions for fitting planes to the range data based on a RANSAC technique. | 10-04-2012 |
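A minimal RANSAC plane fit to the range points of a candidate region, in the spirit of 20120250978; the iteration count and inlier threshold are arbitrary illustrative values, and the segmentation step that produces the candidate region is assumed to have run already.

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.02, rng=None):
    """points: Nx3 array of 3-D points from one candidate region.
    Returns (normal, d) of the best plane n.x + d = 0 and the inlier mask."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                # degenerate (collinear) sample
            continue
        n = n / norm
        d = -np.dot(n, sample[0])
        dist = np.abs(points @ n + d)  # point-to-plane distances
        inliers = dist < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```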
20120250979 | IMAGE PROCESSING APPARATUS, METHOD, AND PROGRAM - An image processing apparatus includes a depth control signal generation unit generating a depth control signal controlling emphasis of the feel of each region of an input image based on the depth position of a subject in each region of the input image; a face skin region control signal generation unit generating a face skin region control signal controlling emphasis of the feel of each region in the input image based on the human face skin region in the input image; a person region control signal generation unit generating a person region control signal controlling emphasis of the feel of each region in the input image based on the region of the person in the input image; and a control signal synthesis unit synthesizing the depth control signal, the face skin region control signal, and the person region control signal to generate a control signal. | 10-04-2012 |
20120250980 | METHOD, APPARATUS AND SYSTEM - A method of providing, over a network, an image for recreation in a device, the image containing a background and a foreground object and the method comprising: detecting the position of the foreground object in the image and generating position information in dependence thereon; removing the foreground object from the image; and transferring to the device i) the image with the foreground object removed, ii) the removed foreground object and iii) the position information. | 10-04-2012 |
20120257814 | IMAGE COMPLETION USING SCENE GEOMETRY - Image completion using scene geometry is described, for example, to remove marks from digital photographs or complete regions which are blank due to editing. In an embodiment an image depicting, from a viewpoint, a scene of textured objects has regions to be completed. In an example, geometry of the scene is estimated from a depth map and the geometry used to warp the image so that at least some surfaces depicted in the image are fronto-parallel to the viewpoint. An image completion process is guided using distortion applied during the warping. For example, patches used to fill the regions are selected on the basis of distortion introduced by the warping. In examples where the scene comprises regions having only planar surfaces the warping process comprises rotating the image. Where the scene comprises non-planar surfaces, geodesic distances between image elements may be scaled to flatten the non-planar surfaces. | 10-11-2012 |
20120257815 | METHOD AND APPARATUS FOR ANALYZING STEREOSCOPIC OR MULTI-VIEW IMAGES - A method for analyzing the colors of stereoscopic or multi-view images is described. The method comprises the steps of retrieving one or more disparity maps for the stereoscopic or multi-view images, aligning one or more of the images to a reference image by warping the one or more images according to the retrieved disparity maps, and performing an analysis of discrepancies on one or more of the aligned images. | 10-11-2012 |
20120257816 | ANALYSIS OF 3D VIDEO - An image analysis apparatus for processing a 3D pair of images representing respective left eye and right eye views of a scene comprises an image crop detector configured to detect the presence of an image crop at a lateral edge of one of the images; and a frame violation detector configured to detect, within areas of the images excluding any detected image crops, an image feature within a threshold distance of the left edge of the left image which is not found in the right image, or an image feature within a threshold distance of the right edge of the right image which is not found in the left image. | 10-11-2012 |
20120257817 | IMAGE OUTPUT APPARATUS - An image output apparatus ( | 10-11-2012 |
20120263371 | Method of image fusion - A method of fusing images includes the steps of providing at least two images of the same object, each image being a digital image or being transformed in a digital image formed by an array of pixels or voxels, and of combining together the pixels or voxels of the at least two images to obtain a new image formed by the combined pixels or voxels. | 10-18-2012 |
20120263372 | Method And Apparatus For Processing 3D Image - A first image and a second image make a stereo pair. A parallax between each subject image in the first image and a corresponding subject image in the second image is calculated. A 3D image formed by the first image and the second image is divided into a plurality of areas. Detection is made as to which of the areas each parallax calculated by the parallax calculator is present in. A desired parallax is determined on the basis of the calculated parallax or parallaxes present in one of the areas. An object image is superimposed on the first image and the second image in said one of the areas in a manner such that a parallax between the object image superimposed on the first image and the object image superimposed on the second image will be equal to the desired parallax. | 10-18-2012 |
20120263373 | INVERSE STEREO IMAGE MATCHING FOR CHANGE DETECTION - A system and method for finding real terrain matches in a stereo image pair is presented. A method for finding differences of underlying terrain between a first stereo image and a second stereo image includes performing epipolar rectification on a stereo image pair to produce rectified image data. The method performs a hybrid stereo image matching on the rectified image data to produce image matching data. A digital surface model (DSM) is generated based on the image matching data. Next, the method identifies areas in the DSM where the stereo image matching should fail based on the image matching data and the DSM to generate predicted failures. The method can then determine real terrain changes based on the predicted failures and the image matching data. | 10-18-2012 |
20120263374 | DEVICE AND METHOD FOR TRANSFORMING 2D IMAGES INTO 3D IMAGES - A device for transforming 2D images into 3D images includes a position calculation unit and an image processing block. The position calculation unit generates multiple start points corresponding to multiple pixel lines of a panel according to a display type of the panel. The image processing block reshapes multiple input enable signals into multiple output enable signals according to the start points. The pixel lines of the panel display the output data signal as multiple image signals respectively according to the output enable signals. The image signals include multiple left-eye image signals and multiple right-eye image signals. | 10-18-2012 |
20120269423 | Analytical Multi-View Rasterization - Multi-view rasterization may be performed by calculating visibility over a camera line. Edge equations may be evaluated iteratively along a scanline. The edge equations may be evaluated using single instruction multiple data instruction sets. | 10-25-2012 |
20120269424 | STEREOSCOPIC IMAGE GENERATION METHOD AND STEREOSCOPIC IMAGE GENERATION SYSTEM - A stereoscopic image generation method and a stereoscopic image generation system that can generate, from an original image, a stereoscopic image that allows the viewer to perceive a natural stereoscopic effect are provided. The method includes a characteristic information acquisition step of acquiring characteristic information for each of pixels, a depth information generation step of generating depth information for each of the pixels on the basis of the characteristic information, and a stereoscopic image generation step of generating a stereoscopic image on the basis of the pieces of depth information. | 10-25-2012 |
20120275686 | INFERRING SPATIAL OBJECT DESCRIPTIONS FROM SPATIAL GESTURES - Three-dimensional (3-D) spatial image data may be received that is associated with at least one arm motion of an actor based on free-form movements of at least one hand of the actor, based on natural gesture motions of the at least one hand. A plurality of sequential 3-D spatial representations that each include 3-D spatial map data corresponding to a 3-D posture and position of the hand at sequential instances of time during the free-form movements may be determined, based on the received 3-D spatial image data. An integrated 3-D model may be generated, via a spatial object processor, based on incrementally integrating the 3-D spatial map data included in the determined sequential 3-D spatial representations and comparing a threshold time value with model time values indicating numbers of instances of time spent by the hand occupying a plurality of 3-D spatial regions during the free-form movements. | 11-01-2012 |
20120275687 | System and Method for Processing Video Images - Embodiments use point clouds to form a three dimensional image of an object. The point cloud of the object may be formed from analysis of two dimensional images of the object. Various techniques may be used on the point cloud to form a three dimensional model of the object which is then used to create a stereoscopic representation of the object. | 11-01-2012 |
20120275688 | METHOD FOR AUTOMATED 3D IMAGING - A method for automated construction of 3D images is disclosed, in which a range measurement device is used to initiate and control the processing of 2D images in order to produce a 3D image. The range measurement device may be integrated with an image sensor, for example the range sensor from a digital camera, or may be a separate device. Data indicating the distance to a specific feature obtained from the range sensor may be used to control and automate the construction of the 3D image. | 11-01-2012 |
20120275689 | SYSTEMS AND METHODS 2-D TO 3-D CONVERSION USING DEPTH ACCESS SEGMENTS TO DEFINE AN OBJECT - The present invention is directed to systems and methods for controlling | 11-01-2012 |
20120281905 | METHOD OF IMAGE PROCESSING AND ASSOCIATED APPARATUS - A method of image processing is provided for separating an image object from a captured or provided image according to a three-dimensional (3D) depth and generating a synthesized image from the image portions identified and selectively modified in the process. The method retrieves or determines a corresponding three-dimensional (3D) depth for each portion of an image, and enables capturing a selective portion of the image as an image object according to the 3D depth of each portion of the image, so as to synthesize the image object with other image objects by selective processing and superimposing of the image objects to provide synthesized imagery. | 11-08-2012 |
20120281906 | Method, System and Computer Program Product for Converting a 2D Image Into a 3D Image - For converting a two-dimensional visual image into a three-dimensional visual image, the two-dimensional visual image is segmented into regions, including a first region having a first depth and a second region having a second depth. The first and second regions are separated by at least one boundary. A depth map is generated that assigns variable depths to pixels of the second region in response to respective distances of the pixels from the boundary, so that the variable depths approach the first depth as the respective distances decrease, and so that the variable depths approach the second depth as the respective distances increase. In response to the depth map, left and right views of the three-dimensional visual image are synthesized. | 11-08-2012 |
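A sketch of the boundary-distance depth assignment described in 20120281906: pixels of the second region receive a depth that blends from the first region's depth at the boundary toward the second region's depth as the distance to the boundary grows. The exponential blend and its scale are assumptions; scipy's Euclidean distance transform supplies the per-pixel boundary distances.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def variable_depth_map(region2_mask, depth1, depth2, falloff=20.0):
    """region2_mask: boolean image, True inside the second region.
    depth1, depth2: scalar depths of the first and second regions.
    Inside region 2, depth approaches depth1 near the boundary and
    depth2 far from it; elsewhere the map is simply depth1."""
    # Distance of each region-2 pixel to the nearest non-region-2 pixel,
    # i.e. to the boundary shared with the first region.
    dist = distance_transform_edt(region2_mask)
    blend = 1.0 - np.exp(-dist / falloff)     # 0 at the boundary -> 1 far away
    depth = np.full(region2_mask.shape, float(depth1))
    depth[region2_mask] = depth1 + (depth2 - depth1) * blend[region2_mask]
    return depth
```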
20120288184 | METHOD AND SYSTEM FOR ADJUSTING DEPTH VALUES OF OBJECTS IN A THREE DIMENSIONAL (3D) DISPLAY - A method of setting a plurality of depth values of a plurality of objects in a scene. The method comprises providing an image dataset depicting a scene comprising a plurality of objects having a plurality of depth values with a plurality of depth differences thereamong, selecting a depth range, simultaneously adjusting the plurality of depth values while maintaining the plurality of depth differences, the adjusting being limited by the depth range, and instructing the generation of an output image depicting the scene so that the plurality of objects have the plurality of adjusted depth values. | 11-15-2012 |
20120288185 | IMAGE CONVERSION APPARATUS AND IMAGE CONVERSION METHOD - According to one embodiment, an image conversion apparatus includes a 3D conversion instruction module, a determination module, and a converter. The 3D conversion instruction module is configured to instruct execution of a 3D conversion required to convert an input image into a 3D image. The determination module is configured to determine validity or invalidity of the 3D conversion instruction based on a type of the input image. The converter is configured to convert, based on validity determination of the 3D conversion instruction, the input image into the 3D image in response to the 3D conversion instruction. | 11-15-2012 |
20120294510 | DEPTH RECONSTRUCTION USING PLURAL DEPTH CAPTURE UNITS - A depth construction module is described that receives depth images provided by two or more depth capture units. Each depth capture unit generates its depth image using a structured light technique, that is, by projecting a pattern onto an object and receiving a captured image in response thereto. The depth construction module then identifies at least one deficient portion in at least one depth image that has been received, which may be attributed to overlapping projected patterns that impinge the object. The depth construction module then uses a multi-view reconstruction technique, such as a plane sweeping technique, to supply depth information for the deficient portion. In another mode, a multi-view reconstruction technique can be used to produce an entire depth scene based on captured images received from the depth capture units, that is, without first identifying deficient portions in the depth images. | 11-22-2012 |
20120301011 | DEVICES, METHODS, AND APPARATUSES FOR HOMOGRAPHY EVALUATION INVOLVING A MOBILE DEVICE - Components, methods, and apparatuses are provided that may be used to access information pertaining to a two-dimensional image of a three-dimensional object, to detect homography between said image of said three-dimensional object captured in said two-dimensional image indicative of said three-dimensional object and a reference object image and to determine whether said homography indicates pose suitable for image augmentation based, at least in part, on characteristics of an elliptically-shaped area that encompasses at least some of a plurality of inliers distributed in said two-dimensional image. | 11-29-2012 |
20120301012 | IMAGE SIGNAL PROCESSING DEVICE AND IMAGE SIGNAL PROCESSING METHOD - When super-resolution processing is applied to an entire screen image at the same intensity, a blur contained in an input image is uniformly reduced over the entire screen image. Therefore, the screen image may be seen differently from when it is naturally seen. As one of the methods for addressing the problem, there is such a method that: when a first image for a left eye and a second image for a right eye are inputted, each of the parameters concerning image-quality correction is determined based on a magnitude of a positional deviation between associated pixels in the first image and the second image, respectively; and the parameters are used to perform image-quality correction processing for adjusting a sense of depth of an image. | 11-29-2012 |
20120301013 | ENHANCED OBJECT RECONSTRUCTION - Processing images includes projecting an infra-red pattern onto a three-dimensional object and producing a first image, a second image, and a third image of the three-dimensional object while the pattern is projected on the three-dimensional object. The first image and the second image include the three-dimensional object and the pattern. The first image and the second image are produced by capturing at a first camera and a second camera, respectively, light filtered through an infra-red filter. The third image includes the three-dimensional object but not the pattern. Processing the images also includes establishing a first-pair correspondence between a portion of pixels in the first image and a portion of pixels in the second image. Processing the images further includes constructing, based on the first-pair correspondence and the third image, a two-dimensional image that depicts a three-dimensional construction of the three-dimensional object. | 11-29-2012 |
20120308114 | VOTING STRATEGY FOR VISUAL EGO-MOTION FROM STEREO - Methods and systems for egomotion estimation (e.g. of a vehicle) from visual inputs of a stereo pair of video cameras are described. 3D egomotion estimation is a six degrees of freedom problem in general. In embodiments of the present invention, this is simplified to four dimensions and further decomposed to two two-dimensional sub-solutions. The decomposition allows use of a voting strategy that identifies the most probable solution. An input is a set of image correspondences between two temporally consecutive stereo pairs, i.e. feature points do not need to be tracked over time. The experiments show that even if a trajectory is put together as a simple concatenation of frame-to-frame increments, the results are reliable and precise. | 12-06-2012 |
20120308115 | Method for Adjusting 3-D Images by Using Human Visual Model - The present disclosure provides a method for adjusting 3-D images converted from 2-D images by using a human visual model. Steps of the method include inputting a 2-D image, dividing the 2-D image into a plurality of blocks, forming a matrix of blocks, obtaining a depth value of each of the plurality of blocks, adjusting the depth value of each of the plurality of blocks according to a position of each of the plurality of blocks, obtaining adjusted depth information of the 2-D image, wherein the adjusted depth information comprises an adjusted depth value of each of the plurality of blocks of the 2-D image, and using depth image based rendering (DIBR) to generate a set of 3-D images according to the adjusted depth information and the 2-D image. | 12-06-2012 |
20120308116 | HEAD ROTATION TRACKING FROM DEPTH-BASED CENTER OF MASS - The rotation of a user's head may be determined as a function of depth values from a depth image. In accordance with some embodiments, an area of pixels from a depth image containing a user's head is identified as a head region. The depth values for pixels in the head region are used to calculate a center of depth-mass for the user's head. The rotation of the user's head may be determined based on the center of depth-mass for the user's head. | 12-06-2012 |
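The "center of depth-mass" in 20120308116 can be read as a depth-weighted centroid of the head-region pixels; rotation might then be inferred from how that centroid shifts horizontally relative to the geometric center of the region. The nearer-pixels-weigh-more weighting and the shift-to-angle mapping below are illustrative assumptions, not the patented computation.

```python
import numpy as np

def center_of_depth_mass(depth, head_mask):
    """Depth-weighted centroid (x, y) of the pixels inside the head region.
    Closer pixels (smaller depth) are given larger weight (assumption)."""
    ys, xs = np.nonzero(head_mask)
    d = depth[ys, xs].astype(np.float64)
    weights = d.max() - d + 1e-6
    cx = np.sum(weights * xs) / weights.sum()
    cy = np.sum(weights * ys) / weights.sum()
    return cx, cy

def estimate_yaw(depth, head_mask, degrees_per_pixel=1.5):
    """Map the horizontal offset between the depth-mass center and the
    geometric center of the head region to a yaw angle (illustrative)."""
    ys, xs = np.nonzero(head_mask)
    geometric_cx = xs.mean()
    cx, _ = center_of_depth_mass(depth, head_mask)
    return (cx - geometric_cx) * degrees_per_pixel
```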
20120308117 | STORAGE MEDIUM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING SYSTEM - An example of a game apparatus as an image processing apparatus includes a CPU, and the CPU controls a movement, etc. of a player object according to an instruction from a player. In a case that a predetermined condition is satisfied, a two-dimensional surface is displayed within a virtual three-dimensional space. When the player moves a first controller, a two-dimensional image is depicted on the two-dimensional surface in response thereto. Then, it is determined whether or not the depicted two-dimensional image is a predetermined image. If it is determined that the two-dimensional image is the predetermined image, a three-dimensional object corresponding to the predetermined image appears, and the two-dimensional surface and the two-dimensional image depicted thereon are erased. | 12-06-2012 |
20120308118 | APPARATUS AND METHOD FOR 3D IMAGE CONVERSION AND A STORAGE MEDIUM THEREOF - An apparatus and method for converting a two-dimensional (2D) input image into a three-dimensional (3D) image, and a storage medium thereof are provided, the method being implemented by the 3D-image conversion apparatus including receiving an input image including a plurality of frames; selecting a first frame corresponding to a preset condition among the plurality of frames; extracting a first object from the selected first frame; inputting selection for one depth information setting mode among a plurality of depth information setting modes with regard to the first object; generating first depth information corresponding to the selected setting mode with regard to the first object; and rendering the input image based on the generated first depth information. | 12-06-2012 |
20120308119 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - A parallax detection unit generates a parallax map indicating a parallax of each pixel of an image formed by right and left images and generates a reliability map indicating reliability of the parallax. A depth information estimation unit generates a depth information map indicating the depth of a subject on the image based on the right and left images. A depth parallax conversion unit converts the depth information map into a pseudo-parallax map using a conversion equation used to convert depth information to parallax information. A parallax synthesis unit synthesizes the parallax map and the pseudo-parallax map to generate a corrected parallax map based on the reliability map. The present technology is applicable to an image processing apparatus. | 12-06-2012 |
20120308120 | METHOD FOR ESTIMATING DEFECTS IN AN OBJECT AND DEVICE FOR IMPLEMENTING SAME - The invention relates to a device and method for estimating defects potentially present in an object comprising an outer surface, wherein the method comprises the steps of: a) illuminating the outer surface of the object with an inductive wave field at a predetermined frequency; b) measuring an induced wave field ({right arrow over (H)}) at the outer surface of the object; c) developing from the properties of the object's material a coupling matrix T associated with a depth Z of the object from the outer surface; d) solving the matrix system | 12-06-2012 |
20120314932 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT FOR IMAGE PROCESSING - According to one embodiment, an image processing apparatus includes a first setting unit, a second setting unit, and a specifying unit. The first setting unit detects a position of at least a part of an object in an image so as to obtain, for one pixel or each of a plurality of pixels in the image, a first likelihood that indicates whether the corresponding pixel is included in a region where the object is present. The second setting unit obtains, for one pixel or each of a plurality of pixels in the image, a second likelihood indicating whether the pixel is a pixel corresponding to a 3D body by using a feature amount of the pixel. The specifying unit specifies a region, in the image, where the object is present by using the first likelihood and the second likelihood. | 12-13-2012 |
20120314933 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus includes an attention region estimation unit that estimates an attention region which is estimated as a user paying attention thereto on a stereoscopic image, a parallax detection unit that detects a parallax of the stereoscopic image and generates a parallax map indicating a parallax of each region of the stereoscopic image, a setting unit that sets conversion characteristics for correcting a parallax of the stereoscopic image based on the attention region and the parallax map, and a parallax conversion unit that corrects the parallax map based on the conversion characteristics. | 12-13-2012 |
20120314934 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND PROGRAM - An information processing device is provided that includes: an image generation portion that generates a composite image by synthesizing a stereoscopic image with a two-dimensional image that is associated with the stereoscopic image, the stereoscopic image being generated from a right eye image and a left eye image which have parallax therebetween and on which perspective correction is performed; and an identification portion that, when a user operation on the two-dimensional image is detected, identifies the stereoscopic image with which the two-dimensional image is associated, as a selected target. | 12-13-2012 |
20120314935 | METHOD AND APPARATUS FOR INFERRING THE GEOGRAPHIC LOCATION OF CAPTURED SCENE DEPICTIONS - A method and apparatus for determining a geographic location of a scene in a captured depiction comprising extracting a first set of features from the captured depiction by algorithmically analyzing the captured depiction, matching the extracted features of the captured depiction against a second set of extracted features associated with reference depictions with known geographic locations and when the matching is successful, identifying the geographic location of the scene in the captured depiction based on a known geographic location of a matching reference depiction from the reference depictions. | 12-13-2012 |
20120314936 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus that acquires first posture information corresponding to the information processing apparatus and a first distance coordinate corresponding to the information processing apparatus, and second posture information corresponding to another information processing apparatus and a second distance coordinate corresponding to the another information processing apparatus. The information processing apparatus then calculates an object's position in a virtual space based on the first and second posture information and the first and second distance coordinates. | 12-13-2012 |
20120314937 | METHOD AND APPARATUS FOR PROVIDING A MULTI-VIEW STILL IMAGE SERVICE, AND METHOD AND APPARATUS FOR RECEIVING A MULTI-VIEW STILL IMAGE SERVICE - Provided are an apparatus and a method of providing a multiview still image service. The method includes: configuring a multiview still image file format including a plurality of image areas into which a plurality of pieces of image information forming a multiview still image are inserted; inserting the plurality of pieces of image information into the plurality of image areas, respectively; inserting three-dimensional (3D) basic attribute information to three-dimensionally reproduce the multiview still image into a first image area of the plurality of image areas into which main-view image information from among the plurality of pieces of image information is inserted; and outputting multiview still image data comprising the plurality of pieces of image information based on the multiview still image file format. | 12-13-2012 |
20120321171 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - Provided is an image processing apparatus including an image input unit configured to receive at least one of a first left eye image and a first right eye image photographed from different viewpoints and applicable to stereoscopic vision, and a stereoscopic image generation processing unit configured to receive one of the first left eye image and the first right eye image and generate a second left eye image and a second right eye image applicable to the stereoscopic vision through an image conversion process. Among the first left eye image and the first right eye image input to the image input unit and the second left eye image and the second right eye image generated by the stereoscopic image generation processing unit, two images are output as images to be applied to the stereoscopic vision. | 12-20-2012 |
20120321172 | CONFIDENCE MAP, METHOD FOR GENERATING THE SAME AND METHOD FOR REFINING A DISPARITY MAP - A method for generating a confidence map comprising a plurality of confidence values, each being assigned to a respective disparity value in a disparity map assigned to at least two stereo images each having a plurality of pixels, wherein a single confidence value is determined for each disparity value, and wherein for determination of the confidence value at least a first confidence value based on a match quality between a pixel or a group of pixels in the first stereo image and a corresponding pixel or a corresponding group of pixels in the second stereo image and a second confidence value based on a consistency of the corresponding disparity estimates is taken into account. | 12-20-2012 |
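A minimal per-pixel confidence built from the two cues named in 20120321172, a matching-cost-based term and a left-right consistency term, combined multiplicatively. The particular cost normalization, the tolerance, and the combination rule are assumptions made for illustration.

```python
import numpy as np

def confidence_map(disp_left, disp_right, cost_left, lr_tolerance=1.0):
    """disp_left / disp_right: disparity maps of the left and right images.
    cost_left: per-pixel matching cost of the chosen left disparity.
    Returns a confidence value in [0, 1] per left-image pixel."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    ys = np.arange(h)[:, None].repeat(w, axis=1)

    # Cue 1: lower matching cost -> higher confidence (simple normalization).
    c_match = 1.0 / (1.0 + cost_left)

    # Cue 2: left-right consistency of the two disparity estimates.
    x_right = np.clip(xs - np.round(disp_left).astype(int), 0, w - 1)
    lr_diff = np.abs(disp_left - disp_right[ys, x_right])
    c_consistency = (lr_diff <= lr_tolerance).astype(np.float64)

    return c_match * c_consistency
```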
20120321173 | INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING APPARATUS - An information processing method is provided for generating a multi view-point image composed of a great number of images according to the shape of an object, for generating a three-dimensional model, or for performing image processing such as arbitrary view-point object recognition. Based on a plurality of captured images obtained by imaging the object from a plurality of view points with an imaging means, a relative position and orientation of the imaging means with respect to the object is calculated for each of the plurality of view points; based on the calculated plurality of relative positions and orientations, a missing position and orientation of the imaging means, in a direction in which imaging by the imaging means is missing, is calculated; and an image for displaying the calculated missing position and orientation on a display means is generated. | 12-20-2012 |
20120328182 | IMAGE FORMAT DISCRIMINATION DEVICE, METHOD OF DISCRIMINATING IMAGE FORMAT, IMAGE REPRODUCING DEVICE AND ELECTRONIC APPARATUS - An image format discrimination device includes a correlation candidate extraction unit that obtains a gradient amount of each pixel position, based on pixel data of a horizontal line of input image data and extracts as a correlation candidate, a pixel of a position where a sign of the gradient amount is changed; a correlation inspection unit that inspects whether or not a first correlation candidate range and a second correlation candidate range having correlation to each other in the horizontal line are present, based on the correlation candidate that is extracted by the correlation candidate extraction unit; and a discriminating image format unit that discriminates whether or not the input image data are three-dimensional image data of a side-by-side type, based on the inspection result of the correlation inspection unit. | 12-27-2012 |
20130004058 | MOBILE THREE DIMENSIONAL IMAGING SYSTEM - A mobile device including an imaging device with a display and capable of obtaining a pair of images of a scene having a disparity between the pair of images. The imaging device estimating the distance between the imaging device and a point in the scene indicated by a user on the display. The imaging device displaying the scene on the display together with an indication of a geometric measure. | 01-03-2013 |
20130004059 | ALIGNING STEREOSCOPIC IMAGES - Systems, methods, and computer-readable and executable instructions are provided for aligning stereoscopic images. Aligning stereoscopic images can include applying, by a computer, a feature detection technique to a pair of stereoscopic images to detect a number of features in each stereoscopic image. Aligning stereoscopic images can also include creating, by the computer, a feature coordinate list for each stereoscopic image based on the feature detection and comparing, by the computer, the feature coordinate lists. Furthermore, aligning stereoscopic images can include aligning the stereoscopic images, by the computer, based on the comparison. | 01-03-2013 |
20130004060 | CAPTURING AND ALIGNING MULTIPLE 3-DIMENSIONAL SCENES - The capture and alignment of multiple 3D scenes is disclosed. Three dimensional capture device data from different locations is received thereby allowing for different perspectives of 3D scenes. An algorithm uses the data to determine potential alignments between different 3D scenes via coordinate transformations. Potential alignments are evaluated for quality and subsequently aligned subject to the existence of sufficiently high relative or absolute quality. A global alignment of all or most of the input 3D scenes into a single coordinate frame may be achieved. The presentation of areas around a particular hole or holes takes place thereby allowing the user to capture the requisite 3D scene containing areas within the hole or holes as well as part of the surrounding area using, for example, the 3D capture device. The new 3D captured scene is aligned with existing 3D scenes and/or 3D composite scenes. | 01-03-2013 |
20130011044 | OBJECT CONTOUR DETECTION DEVICE AND METHOD - An object contour detection method includes: allowing an image sensor to sense a plurality of images of an object by repeatedly moving a lens with a shallow depth of field to a plurality of positions, while recording the plurality of positions of the lens and the plurality of images corresponding one-to-one to those positions; removing unclear areas in the plurality of images to obtain a plurality of clear images, and obtaining a plurality of displacement quantities of depth of field from the displacement quantity between each two adjacent positions; and extending the depth of the front image by the corresponding displacement quantity of depth of field and then combining the front image with the rear image in sequence, allowing the plurality of clear images to be combined into a stereoscopic image corresponding to the object contour. | 01-10-2013 |
20130011045 | APPARATUS AND METHOD FOR GENERATING THREE-DIMENSIONAL (3D) ZOOM IMAGE OF STEREO CAMERA - An apparatus and method for generating a three-dimensional (3D) zoom image of a stereo camera are provided that may compute a baseline variation or a convergence angle that is associated with a magnification of a zoom image acquired from the stereo camera, may warp the zoom image using the computed baseline variation or the computed convergence angle, and may perform inpainting on the warped image to prevent a distortion of 3D information, so that a 3D zoom image may be generated without a distortion of 3D information using a zoom lens. | 01-10-2013 |
20130011046 | DEPTH IMAGE CONVERSION APPARATUS AND METHOD - Provided are an apparatus and method for converting a low-resolution depth image to a depth image having a resolution identical to a resolution of a high-resolution color image. The depth image conversion apparatus may generate a discrete depth image by quantizing a depth value of an up-sampled depth image, estimate a high-resolution discrete depth image by optimizing an objective function of the discrete depth image based on the high-resolution color image and an up-sampled depth border, and convert the up-sampled depth image to a high-resolution depth image by filtering the up-sampled depth image when a difference between discrete depth values of neighboring pixels in the high-resolution discrete depth image is less than a predetermined threshold value. | 01-10-2013 |
20130011047 | Method, System and Computer Program Product for Switching Between 2D and 3D Coding of a Video Sequence of Images - A video sequence of images includes at least first and second images. In response to at least first and second conditions being satisfied, an encoding mode is switched between two-dimensional video coding and three-dimensional video coding. The first condition is that the second image represents a scene change in comparison to the first image. The second image is encoded according to the switched encoding mode. | 01-10-2013 |
20130011048 | THREE-DIMENSIONAL IMAGE PROCESSING DEVICE, AND THREE-DIMENSIONAL IMAGE PROCESSING METHOD - In the three-dimensional imaging device (three-dimensional image processing device), the depth acquisition unit acquires L depth information and R depth information from a three-dimensional image. The image correction unit adjusts disparities of edge portion areas of a subject based on the L depth information and the R depth information such that the normal positions of the edge portion areas of the subject are farther away. Accordingly, when a three-dimensional image acquired by the three-dimensional imaging device is three-dimensionally displayed, the edge areas of the subject are displayed having a sense of roundness. As a result, a three-dimensional image that has been subjected to processing by this three-dimensional image processing device is a high-quality three-dimensional image that can appropriately reproduce the three-dimensional appearance and sense of thickness of the subject and has little of the cardboard effect. | 01-10-2013 |
20130016896 | 3D Visualization of Light Detection and Ranging Data - In accordance with particular embodiments, a method includes receiving LIDAR data associated with a geographic area and generating a three-dimensional image of the geographic area based on the LIDAR data. The method further includes presenting at least a first portion of the three-dimensional image to a user based on a camera at a first location. The first portion of the three-dimensional image is presented from a walking perspective. The method also includes navigating the three-dimensional image based on a first input received from the user. The first input is used to direct the camera to move along a path in the walking perspective based on the first input and the three-dimensional image. The method further includes presenting at least a second portion of the three-dimensional image to the user based on navigating the camera to a second location. The second portion of the three-dimensional image is presented from the walking perspective. | 01-17-2013 |
20130016897 | METHOD AND APPARATUS FOR PROCESSING MULTI-VIEW IMAGE USING HOLE RENDERING - A method and apparatus for processing a multi-view image are provided. A priority may be assigned to each hole pixel in a hole region generated when an output view is generated. The priority of each hole pixel may be generated by combining a structure priority, a confidence priority, and a disparity priority. Hole rendering may be applied to a target patch including a hole pixel having a highest priority. The hole pixel may be restored by searching for a source patch most similar to a background of the target patch, and copying a pixel in the found source patch into a hole pixel of the target patch. | 01-17-2013 |
20130016898 | METHOD AND APPARATUS FOR LOW-BANDWIDTH CONTENT-PRESERVING ENCODING OF STEREOSCOPIC 3D IMAGES - A method and apparatus are described including accepting a first and a second stereoscopic eye frame line image, determining a coarse image shift between the first stereoscopic eye frame line image and the second stereoscopic eye frame line image, determining a fine image shift responsive to the coarse image shift, forwarding one of the first stereoscopic eye frame line image and the second stereoscopic eye frame line image and forwarding data corresponding to the fine image shift and metadata for further processing. Also described are a method and apparatus including receiving a transmitted first full stereoscopic eye frame line image, extracting a difference between a first stereoscopic eye frame line image and a second stereoscopic image, subtracting the extracted difference from the first stereoscopic eye frame line image, storing the second stereoscopic eye frame line image, extracting a shift line value from metadata included in the first full stereoscopic eye frame line image and shifting the second stereoscopic eye frame line image to its original position responsive to the shift value. | 01-17-2013 |
20130022261 | SYSTEMS AND METHODS FOR EVALUATING IMAGES - Systems and methods for evaluating images segment a computational image into sub-images based on spectral information in the computational image, generate respective morphological signatures for the sub-images, generate respective spectral signatures for the sub-images, and generate a resulting image signature based on the morphological signatures and the spectral signatures. | 01-24-2013 |
20130022262 | HEAD RECOGNITION METHOD - Described herein is a method for recognising a human head in a source image. The method comprises detecting a contour of at least part of a human body in the source image, calculating a depth of the human body in the source image. From the source image, a major radius size and a minor radius size of an ellipse corresponding to a human head at the depth is calculated, and, for at least several of a set of pixels of the detected contour, generating in an accumulator array at least one segment of an ellipse centred on the position of the contour pixel and having the major and minor radius sizes. Positions of local intensity maxima in the accumulator array are selected as corresponding to positions of the human head candidates in the source image. | 01-24-2013 |
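The accumulator step described above reads like a Hough-style vote: each contour pixel casts an ellipse of the expected head size (derived from the depth) into an accumulator, and intensity maxima mark head candidates. The sketch below, which votes full ellipses of fixed radii rather than ellipse segments, is only an illustration of that voting scheme, not the patented method.

```python
import numpy as np

def head_candidates(contour_pixels, shape, a, b, n_angles=64, top_k=3):
    """Vote ellipses of major/minor radii (a, b) centered on each contour
    pixel into an accumulator; local maxima are head-position candidates."""
    acc = np.zeros(shape, dtype=np.float32)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    dy = (a * np.sin(angles)).round().astype(int)
    dx = (b * np.cos(angles)).round().astype(int)
    for (y, x) in contour_pixels:
        ys = np.clip(y + dy, 0, shape[0] - 1)
        xs = np.clip(x + dx, 0, shape[1] - 1)
        acc[ys, xs] += 1.0
    # Return the top_k strongest accumulator cells as candidate centers.
    idx = np.argsort(acc.ravel())[::-1][:top_k]
    return np.column_stack(np.unravel_index(idx, shape)), acc
```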
20130028507 | 2D to 3D IMAGE CONVERSION APPARATUS AND METHOD THEREOF - A 2D to 3D image conversion apparatus includes a data queue, a conversion unit and an offset calculation unit. The data queue receives and temporarily stores an input data value corresponding to a current pixel. The conversion unit outputs a current offset table corresponding to a current depth parameter of the current pixel. The current offset table includes (m+1) reference offsets corresponding to the current pixel and neighboring m pixels. The offset calculation unit selects one of the reference offsets corresponding to the current pixel in the current offset table and multiple previous offset tables as a data offset corresponding to the current pixel. The data queue selects and outputs an output data value corresponding to the current pixel according to an integer part of the data offset and the input data value. | 01-31-2013 |
20130034296 | PATTERN DISCRIMINATING APPARATUS - A pattern discriminating apparatus includes a setting unit configured to set at least one area in a three-dimensional space in a three-dimensional image data, a feature value calculating unit configured to calculate a pixel feature value from one pixel to another of the three-dimensional image data, a matrix calculating unit configured to (1) obtain at least one point on a three-dimensional coordinate in the area which is displaced in position from a focused point on the three-dimensional coordinate in the area by a specific mapping, and (2) calculate a co-occurrence matrix which expresses the frequency of occurrence of a combination of the pixel feature value of the focused point in the area and the pixel feature values of the mapped respective points, and a discriminating unit configured to discriminate whether or not an object to be detected is imaged in the area on the basis of the combination of the specific mapping and the co-occurrence matrix and a learning sample of the object to be detected which is learned in advance. | 02-07-2013 |
20130034297 | METHOD AND DEVICE FOR CALCULATING A DEPTH MAP FROM A SINGLE IMAGE - A method for calculating a depth map from an original matrix image, comprising the steps of: | 02-07-2013 |
20130039566 | CODING OF FEATURE LOCATION INFORMATION - Methods and devices for coding of feature locations are disclosed. In one embodiment, a method of coding feature location information of an image includes generating a hexagonal grid, where the hexagonal grid includes a plurality of hexagonal cells, quantizing feature locations of an image using the hexagonal grid, generating a histogram to record occurrences of feature locations in each hexagonal cell, and encoding the histogram in accordance with the occurrences of feature locations in each hexagonal cell. The method of encoding the histogram includes applying context information of neighboring hexagonal cells to encode information of a subsequent hexagonal cell to be encoded in the histogram, where the context information includes context information from first order neighbors and context information from second order neighbors of the subsequent hexagonal cell to be encoded. | 02-14-2013 |
20130039567 | METHOD AND APPARATUS TO GENERATE A VOLUME-PANORAMA IMAGE - A method and apparatus to generate a volume-panorama image are provided. A method of generating a volume-panorama image includes receiving conversion relationships between volume images, one of the received conversion relationships being between a first volume image of the volume images and a second volume image of the volume images, the second volume image including an area that is common to an area of the first volume image, generating an optimized conversion relationship from the one of the received conversion relationships based on the received conversion relationships, and generating the volume-panorama image based on the generated optimized conversion relationship. | 02-14-2013 |
20130039568 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM - An image processing apparatus includes: a separation unit that separates each of the left image and the right image into a background area and a foreground area, regarding first three-dimensional image data acquired by an acquisition unit; a background image generation unit that generates data of a background image by executing image processing on at least one of the background area of the left image and the background area of the right image separated by the separation unit; and a three-dimensional image data generation unit that generates second three-dimensional image data composed of two images having a parallax between a left image and a right image, by combining data of the background image generated by the background image generation unit and data of foreground images regarding the foreground area separated from each of the left image and the right image by the separation unit. | 02-14-2013 |
20130039569 | METHOD AND APPARATUS OF COMPILING IMAGE DATABASE FOR THREE-DIMENSIONAL OBJECT RECOGNITION - A method of compiling an image database for three-dimensional object recognition including the steps of: when a plurality of images, each showing an object from a different viewpoint, are inputted, extracting local features from each of the images, and expressing the local features using feature vectors; forming sets of the feature vectors, each set representing a same part of the object from a series of the viewpoints, and generating subspaces, each subspace representing a characteristic of each set; and storing each subspace to the image database with an identifier of the object to perform a recognition process that is realized by the steps of: when at least one image of an object is given as a query, extracting query feature vectors; determining the subspace most similar to each query feature vector; and executing a counting process on the identifiers to retrieve the object most similar to the query. | 02-14-2013 |
20130044939 | Method and system for modifying binocular images - The present invention relates to a method for modifying binocular images, for example, to manipulate the attention of viewers. The binocular images may be for 2D or 3D scenes. The method modifies a left image destined for a left eye and a right image destined for a right eye, by modifying a portion of the left image by adjusting a visual characteristic of the portion in a first direction by a first defined value and modifying a corresponding portion of the right image by adjusting the visual characteristic of the corresponding portion in the opposite of the first direction by a second defined value. A system with an image modification means for modifying binocular images, an apparatus for displaying modified binocular images, a signal and medium for carrying/storing modified binocular images, and a computer program for modifying binocular images are also disclosed. | 02-21-2013 |
20130044940 | SYSTEM AND METHOD FOR SECTIONING A MICROSCOPY IMAGE FOR PARALLEL PROCESSING - A computer-implemented system and method of processing a microscopy image are provided. A microscopy image is received, and a configuration for an image section that includes a portion of the microscopy image is determined. Multiple image sections are respectively assigned to multiple processing units, and the processing units respectively process the image sections in parallel. One or more objects are determined to be respectively present in the image sections, and the objects present in the image sections are measured to obtain object data associated with the objects. | 02-21-2013 |
20130044941 | METHOD FOR LOCATING ARTEFACTS IN A MATERIAL - A method for locating artefacts, such as particles or voids, in a material includes the steps of defining a path through a volume of the material, sensing the presence and type of any artefacts along the path and determining for each sensed artefact, the respective distance along the path. Analysis of the quantity of sensed artefacts and their respective position along the path enables the determination of measures for the artefact density, artefact size and artefact distribution in the material. | 02-21-2013 |
20130051657 | METHOD AND APPARATUS FOR DETERMINING A SIMILARITY OR DISSIMILARITY MEASURE - A solution for determining a similarity or dissimilarity measure for a selected pixel of a first image relative to another selected pixel in a second image is described. The first image and the second image form a stereoscopic image pair or part of a multi-view image group. In a first step a first support window containing the selected pixel in the first image is determined. Then a second support window containing the selected pixel in the second image is determined. Subsequently one or more statistical properties of the selected pixel in the first image are calculated to define a probability distribution for the selected pixel in the first image. Finally, pixel similarity or dissimilarity between the first support window and the second support window is aggregated using only those pixels belonging to the probability distribution for the selected pixel in the first image with a probability above a defined minimum. | 02-28-2013 |
20130051658 | METHOD OF SEPARATING OBJECT IN THREE DIMENSION POINT CLOUD - A method of separating an object in a three dimension point cloud including acquiring a three dimension point cloud image on an object using an image acquirer, eliminating an outlier from the three dimension point cloud image using a controller, eliminating a plane surface area from the three dimension point cloud image, of which the outlier has been eliminated using the controller, and clustering points of an individual object from the three dimension point cloud image, of which the plane surface area has been eliminated using the controller. | 02-28-2013 |
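A rough pipeline in the spirit of the entry above might look like the following; the statistical outlier filter, the RANSAC plane removal, and all thresholds are assumptions made for the sketch rather than the patented procedure, and the final per-object clustering step is left to the caller.

```python
import numpy as np

def separate_objects(points, k_sigma=2.0, plane_thresh=0.02, iters=200, seed=0):
    """Illustrative pipeline: drop statistical outliers, remove the dominant
    plane found by a simple RANSAC, and return the remaining points for
    per-object clustering. points is an (N, 3) array."""
    rng = np.random.default_rng(seed)

    # 1. Outlier removal: keep points close to the centroid (a crude proxy
    #    for a statistical outlier filter).
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    points = points[d < d.mean() + k_sigma * d.std()]

    # 2. RANSAC plane removal: find the plane supported by most points.
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue
        n = n / np.linalg.norm(n)
        dist = np.abs((points - p0) @ n)
        mask = dist < plane_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return points[~best_mask]   # points left over for clustering
```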
20130051659 | STEREOSCOPIC IMAGE PROCESSING DEVICE AND STEREOSCOPIC IMAGE PROCESSING METHOD - A stereoscopic image processing device that converts a two-dimensional (2D) image into a three-dimensional (3D) image includes: a detector detecting a value indicating a variation degree of an image feature quantity within a current frame to be processed of the 2D image; a normalizer (a) normalizing the image feature quantity to approximate the value detected by the detector to a threshold of the variation degree and outputting the normalized image feature quantity when the value is smaller than the threshold of the variation degree; and (b) not normalizing the image feature quantity and outputting the image feature quantity when the value is larger than or equal to the threshold of the variation degree; and a depth information generator generating depth information for converting the 2D image into the 3D image, based on the image feature quantity output by the normalizer. | 02-28-2013 |
20130051660 | IMAGE PROCESSOR, IMAGE DISPLAY APPARATUS, AND IMAGE TAKING APPARATUS - Disclosed is an image processor generating a three-dimensional image easily three-dimensionally viewed by, and hardly causing fatigue of, an observer, and easily adjusting a three-dimensional effect of an arbitrary portion in the three-dimensional image. The disparity correction portion | 02-28-2013 |
20130058561 | PHOTOGRAPHIC SYSTEM - A photographic system for generating photos is provided. The photographic system comprises a photo composition unit, and a photo synthesizer. The photo composition unit is capable of determining an extracted view from a three dimensional (3D) scene. The photo synthesizer, coupled to the photo composition unit, is capable of synthesizing an output photo according to the extracted view. | 03-07-2013 |
20130058562 | SYSTEM AND METHOD OF CORRECTING A DEPTH MAP FOR 3D IMAGE - A system and method of correcting a depth map for a 3D image are disclosed. A spatial spectral transform unit extracts pixels of object boundaries according to an input image, wherein the spatial spectral transform unit adopts the Hilbert-Huang transform (HHT). A correction unit corrects an input depth map corresponding to the input image according to the pixels of object boundaries, thereby resulting in an output depth map. | 03-07-2013 |
20130058563 | INTERMEDIATE IMAGE GENERATION METHOD, INTERMEDIATE IMAGE FILE, INTERMEDIATE IMAGE GENERATION DEVICE, STEREOSCOPIC IMAGE GENERATION METHOD, STEREOSCOPIC IMAGE GENERATION DEVICE, AUTOSTEREOSCOPIC IMAGE DISPLAY DEVICE, AND STEREOSCOPIC IMAGE GENERATION SYSTEM - By generating in advance intermediate images which have the same resolution as a stereoscopic image that is the final output image, and which integrate pixels for respective viewpoints, generation of a stereoscopic image is possible only by converting the pixel arrangement without using a high-speed and specialised computer or the like. Furthermore, using intermediate images in which images for respective viewpoints are arranged in a shape of tiles, a completely new stereoscopic image generation system can be realised in which a simple and low-cost stereoscopic image generation device generates stereoscopic images from intermediate images output or transmitted in a more standard format by a standard image output device such as a Blu-ray player, an STB, or an image distribution server. | 03-07-2013 |
20130058564 | METHOD AND APPARATUS FOR RECOVERING A COMPONENT OF A DISTORTION FIELD AND FOR DETERMINING A DISPARITY FIELD - A method and an apparatus for recovering a component of a distortion field of an image of a set of multi-view images are described. Also described are a method and an apparatus for determining a disparity field of an image of a set of multi-view images, which makes use of such method. | 03-07-2013 |
20130058565 | GESTURE RECOGNITION SYSTEM USING DEPTH PERCEPTIVE SENSORS - Acquired three-dimensional positional information is used to identify user created gesture(s), which gesture(s) are classified to determine appropriate input(s) to an associated electronic device or devices. Preferably, at least at one instance of a time interval, the posture of a portion of a user is recognized based on at least one factor such as shape, position, orientation, or velocity. Posture over each of the instance(s) is recognized as a combined gesture. Because acquired information is three-dimensional, two gestures may occur simultaneously. | 03-07-2013 |
20130064443 | APPARATUS AND METHOD FOR DETERMINING A CONFIDENCE VALUE OF A DISPARITY ESTIMATE - A method and an apparatus for determining a confidence value of a disparity estimate for a pixel or a group of pixels of a selected image of at least two stereo images are described, the confidence value being a measure for an improved reliability value of the disparity estimate for the pixel or the group of pixels. First an initial reliability value of the disparity estimate for the pixel or the group of pixels is determined, wherein the reliability is one of at least reliable and unreliable. Then a distance of the pixel or the group of pixels to a nearest pixel or group of pixels with an unreliable disparity estimate is determined. Finally, the confidence value of the disparity estimate for the pixel or the group of pixels is obtained from the determined distance. | 03-14-2013 |
20130071008 | IMAGE CONVERSION SYSTEM USING EDGE INFORMATION - In accordance with at least some embodiments of the present disclosure, a process for converting a two-dimensional (2D) image based on edge information is described. The process may include partitioning the 2D image to generate a plurality of blocks, segmenting the plurality of blocks into a group of regions based on edges determined in the plurality of blocks, assigning depth values to the plurality of blocks based on a depth gradient hypothesis associated with the group of regions, wherein pixels in each of the plurality of blocks are associated with a same depth value, and generating the depth map based on the depth values of the plurality of blocks. | 03-21-2013 |
20130071009 | DEPTH RANGE ADJUSTMENT FOR THREE-DIMENSIONAL IMAGES - A system is provided for generating a three dimensional image. The system may include a processor configured to generate a disparity map from a stereo image, adjust the disparity map to compress or expand a number of depth levels within the disparity map to generate an adjusted disparity map, and render a stereo view of the image based on the adjusted disparity map. | 03-21-2013 |
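Compressing or expanding the number of depth levels can be illustrated as a simple requantization of the disparity map; the sketch below assumes a linear mapping into a chosen output range, which is one of several reasonable choices.

```python
import numpy as np

def adjust_depth_range(disparity, n_levels, d_min=None, d_max=None):
    """Requantize a disparity map to n_levels depth levels and map it into
    an optional new [d_min, d_max] range. Illustrative only."""
    d_min = disparity.min() if d_min is None else d_min
    d_max = disparity.max() if d_max is None else d_max
    span = max(float(disparity.max() - disparity.min()), 1e-9)
    t = (disparity - disparity.min()) / span            # normalize to [0, 1]
    levels = np.round(t * (n_levels - 1)) / (n_levels - 1)
    return d_min + levels * (d_max - d_min)
```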
20130071010 | METHOD AND SYSTEM FOR FAST THREE-DIMENSIONAL IMAGING USING DEFOCUSING AND FEATURE RECOGNITION - Described is a method and system for fast three-dimensional imaging using defocusing and feature recognition. The method comprises acts of capturing a plurality of defocused images of an object on a sensor, identifying segments of interest in each of the plurality of images using a feature recognition algorithm, and matching the segments with three-dimensional coordinates according to the positions of the images of the segments on the sensor to produce a three-dimensional position of each segment of interest. The disclosed imaging method is “aware” in that it uses a priori knowledge of a small number of object features to reduce computation time as compared with “dumb” methods known in the art which exhaustively calculate positions of a large number of marker points. | 03-21-2013 |
20130071011 | RECONSTRUCTION OF SHAPES OF OBJECTS FROM IMAGES - The present disclosure describes a system and method for transforming a two-dimensional image of an object into a three-dimensional representation, or model, that recreates the three-dimensional contour of the object. In one example, three pairs of symmetric points establish an initial relationship between the original image and a virtual image, then additional pairs of symmetric points in the original image are reconstructed. In each pair, a visible point and an occluded point are mapped into 3-space with a single free variable characterizing the mapping for all pairs. A value for the free variable is then selected to maximize compactness of the model, where compactness is defined as a function of the model's volume and its surface area. “Noise” correction derives from enforcing symmetry and selecting best-fitting polyhedra for the model. Alternative embodiments extend this to additional polyhedra, add image segmentation, use perspective, and generalize to asymmetric polyhedra and non-polyhedral objects. | 03-21-2013 |
20130071012 | IMAGE PROVIDING DEVICE, IMAGE PROVIDING METHOD, AND IMAGE PROVIDING PROGRAM FOR PROVIDING PAST-EXPERIENCE IMAGES - A image providing device provides a user with realistic and natural past-experience simulation through stereoscopic photographs. Specifically, feature-point extractors | 03-21-2013 |
20130071013 | VIDEO PROCESSING DEVICE, VIDEO PROCESSING METHOD, PROGRAM - A feature point extraction unit | 03-21-2013 |
20130077852 | METHOD AND APPARATUS FOR GENERATING FINAL DEPTH INFORMATION RELATED MAP THAT IS RECONSTRUCTED FROM COARSE DEPTH INFORMATION RELATED MAP THROUGH GUIDED INTERPOLATION - A method for generating a final depth information related map includes the following steps: receiving a coarse depth information related map, wherein a resolution of the coarse depth information related map is smaller than a resolution of the final depth information related map; and outputting the final depth information related map reconstructed from the coarse depth information related map by receiving an input data and performing a guided interpolation operation upon the coarse depth information related map according to the input data. | 03-28-2013 |
20130077853 | Image Scaling - The present invention relates to an apparatus and method for adjusting depth characteristics of a three-dimensional image to correct errors in perceived depth when scaling the three-dimensional image, the method comprising: receiving three-dimensional image information comprising a stereoscopic image including a first image and a second image, the stereoscopic image having depth characteristics associated with an offset of the first and second images; determining a scaling factor indicative of a scaling for converting the stereoscopic image from an original target size to a new size; determining at least one shifting factor for varying the depth characteristics, the at least one shifting factor indicative of a relative shift to be applied between the first and the second images, wherein the at least one shifting factor is determined in accordance with the scaling factor and at least one depth parameter derived from the depth characteristics; and performing the relative shift between the first and second images in accordance with the shifting factor for adjusting the offset of the first and second images. | 03-28-2013 |
20130077854 | MEASUREMENT APPARATUS AND CONTROL METHOD - A measurement apparatus which measures the relative position and orientation of an image-capturing apparatus capturing images of one or more measurement objects with respect to the measurement object, acquires a captured image using the image-capturing apparatus. The respective geometric features present in a 3D model of the measurement object are projected onto the captured image based on the position and orientation of the image-capturing apparatus, thereby obtaining projection geometric features. Projection geometric features are selected from the resultant projection geometric features based on distances between the projection geometric features in the captured image. The relative position and orientation of the image-capturing apparatus with respect to the measurement object is then calculated using the selected projection geometric features and image geometric features corresponding thereto detected in the captured image. | 03-28-2013 |
20130083992 | METHOD AND SYSTEM OF TWO-DIMENSIONAL TO STEREOSCOPIC CONVERSION - In one embodiment, a method of two-dimensional to stereoscopic image conversion, the method comprising detecting a face in a two-dimensional image; determining a body region based on the detected face; providing a color model from a portion of the determined body region, a portion of the detected face, or a combination of both portions; calculating a similarity value of at least one image pixel of the two-dimensional image based on the provided color model; and assigning a depth value of the image pixel based on the calculated similarity value to generate a stereoscopic image. | 04-04-2013 |
20130083993 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing device includes: an image acquisition section acquiring base and reference images in which a same object is drawn at horizontal positions different from each other; and a disparity detection section detecting a candidate pixel as a candidate of a pixel corresponding to a base pixel constituting the base image, from a reference pixel group including a first reference pixel constituting the reference image, and a second reference pixel, whose vertical position is different from that of the first reference pixel, based on the base pixel and the reference pixel group, associating a horizontal disparity candidate indicating a distance from a horizontal position of the base pixel to a horizontal position of the candidate pixel, with a vertical disparity candidate indicating a distance from a vertical position of the base pixel to a vertical position of the candidate pixel, and storing the associated candidates in a storage section. | 04-04-2013 |
20130083994 | Semi-Global Stereo Correspondence Processing With Lossless Image Decomposition - A method for disparity cost computation for a stereoscopic image is provided that includes computing path matching costs for external paths of at least some boundary pixels of a tile of a base image of the stereoscopic image, wherein a boundary pixel is a pixel at a boundary between the tile and a neighboring tile in the base image, storing the path matching costs for the external paths, computing path matching costs for pixels in the tile, wherein the stored path matching costs for the external paths of the boundary pixels are used in computing some of the path matching costs of some of the pixels in the tile, and computing aggregated disparity costs for the pixels in the tile, wherein the path matching costs computed for each pixel are used to compute the aggregated disparity costs for the pixel. | 04-04-2013 |
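The path matching costs above follow the usual semi-global matching pattern of aggregating costs along scanline paths. A sketch of one left-to-right aggregation path is shown below; the penalty values P1 and P2 are assumed example parameters, and the tile-boundary handling described in the entry is omitted.

```python
import numpy as np

def aggregate_left_to_right(cost, P1=10.0, P2=120.0):
    """One SGM-style path: accumulate matching costs along each row from
    left to right with small (P1) and large (P2) smoothness penalties.
    cost has shape (H, W, D) for D candidate disparities."""
    cost = cost.astype(np.float64)
    H, W, D = cost.shape
    agg = np.empty_like(cost)
    agg[:, 0] = cost[:, 0]
    for x in range(1, W):
        prev = agg[:, x - 1]                          # (H, D)
        prev_min = prev.min(axis=1, keepdims=True)    # (H, 1)
        same = prev
        plus = np.pad(prev, ((0, 0), (1, 0)), constant_values=np.inf)[:, :-1] + P1
        minus = np.pad(prev, ((0, 0), (0, 1)), constant_values=np.inf)[:, 1:] + P1
        best = np.minimum(np.minimum(same, plus), np.minimum(minus, prev_min + P2))
        agg[:, x] = cost[:, x] + best - prev_min      # subtract to keep values bounded
    return agg
```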
20130083995 | STEREOSCOPIC MEASUREMENT SYSTEM AND METHOD - A stereoscopic measurement system captures stereo images and determines measurement information for user-designated points within stereo images. The system comprises an image capture device for capturing stereo images of an object. A processing system communicates with the capture device to receive stereo images. The processing system displays the stereo images and allows a user to select one or more points within the stereo image. The processing system processes the designated points within the stereo images to determine measurement information for the designated points. | 04-04-2013 |
20130089254 | APPARATUS AND METHOD FOR CORRECTING STEREOSCOPIC IMAGE USING MATCHING INFORMATION - An apparatus for correcting a stereoscopic image using matching information includes: a matching information visualizer receiving input of original stereoscopic images and intuitive matching information and visualizing a pair of stereoscopic images based on the intuitive matching information; a correction information processor obtaining a statistical camera parameter based on the intuitive matching information and correcting the received stereoscopic image using the statistical camera parameter; and an error allowable controller providing allowable error information to the correction information processor in consideration of an error allowable degree according to a selected time from the received intuitive matching information and preset human factor guide information. A correlation between the stereoscopic images is extracted using the stereoscopic image and the provided information, thereby helping an erroneously photographed image to be photographed correctly, or correcting the image so that it is interpreted correctly, which minimizes visual fatigue. | 04-11-2013 |
20130094753 | FILTERING IMAGE DATA - Systems, methods, and machine-readable and executable instructions are provided for filtering image data. Filtering image data can include determining a desired depth of field of an image, determining a distance between a pixel of the image and the desired depth of field. Filtering image data can also include adjusting a contrast of the pixel in proportion to a magnitude of a weight of the pixel, wherein the weight is based on the distance. | 04-18-2013 |
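The contrast adjustment above can be illustrated directly: weight each pixel by its distance from the desired depth of field and pull far pixels toward the image mean. The gain factor and the assumption that image and depth are same-shape grayscale arrays are choices made for this sketch.

```python
import numpy as np

def depth_weighted_contrast(image, depth, focus_depth, strength=0.8):
    """Reduce contrast in proportion to each pixel's distance from the desired
    depth of field. image and depth are assumed to be same-shape 2-D arrays;
    strength is an assumed gain, not a value from the filing."""
    dist = np.abs(depth.astype(np.float64) - focus_depth)
    weight = dist / max(float(dist.max()), 1e-9)      # 0 in focus, 1 far away
    mean = image.mean()
    # Pull far-away pixels toward the mean (less contrast), keep in-focus ones.
    return mean + (image - mean) * (1.0 - strength * weight)
```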
20130094754 | IMAGE OUTPUT APPARATUS AND METHOD FOR OUTPUTTING IMAGE THEREOF - An image output apparatus and a method for outputting an image thereof are provided. The method of the image output apparatus determines whether a difference in grayscale values between a current image frame and a previous image frame is greater than or equal to a pre-set value, if the difference in the grayscale values is greater than or equal to the pre-set value, at least one of a maximum grayscale value and a minimum grayscale value of the current image frame is adjusted, a grayscale of the current image frame according to an input and output grayscale function having the adjusted maximum grayscale value and minimum grayscale value is adjusted and an image is output. Accordingly, a crosstalk phenomenon of a | 04-18-2013 |
20130094755 | METHOD FOR THE MICROSCOPIC THREE-DIMENSIONAL REPRODUCTION OF A SAMPLE - A method for the three-dimensional imaging of a sample in which image information from different depth planes of the sample is stored in a spatially resolved manner, and the three-dimensional image of the sample is subsequently reconstructed from this stored image information, is provided. A reference structure is applied to the illumination light, at least one fluorescing reference object is positioned next to or in the sample, and images of the reference structure of the illumination light and of the reference object are recorded from at least one detection direction and evaluated. The light sheet is brought into an optimal position based on the results, and image information of the reference object and of the sample from a plurality of detection directions is stored. Transformation operators are obtained on the basis of the stored image information, and the reconstruction of the three-dimensional image of the sample is based on these transformation operators. | 04-18-2013 |
20130101206 | Method, System and Computer Program Product for Segmenting an Image - A first depth map is generated in response to a first stereoscopic image from a camera. The first depth map includes first pixels having valid depths and second pixels having invalid depths. A second depth map is generated in response to a second stereoscopic image from the camera. The second depth map includes third pixels having valid depths and fourth pixels having invalid depths. A first segmentation mask is generated in response to the first pixels and the third pixels. A second segmentation mask is generated in response to the second pixels and the fourth pixels. In response to the first and second segmentation masks, a determination is made of whether the second stereoscopic image includes a change in comparison to the first stereoscopic image. | 04-25-2013 |
20130101207 | Systems and Methods for Detecting a Tilt Angle from a Depth Image - A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels. | 04-25-2013 |
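One way to picture the tilt computation above: average the upper-body and lower-body pixel groups, take the vector between the two centroids, and measure its angle against an assumed vertical axis. The sketch below is illustrative only; the coordinate convention and 'up' axis are assumptions.

```python
import numpy as np

def tilt_angle(upper_pixels, lower_pixels):
    """Estimate a body tilt angle from two pixel groups of a depth image,
    e.g. shoulder pixels and hip/knee-midpoint pixels, each given as (x, y, z)
    camera-space coordinates."""
    upper = np.asarray(upper_pixels, dtype=float).mean(axis=0)
    lower = np.asarray(lower_pixels, dtype=float).mean(axis=0)
    v = upper - lower                      # vector running up the torso
    vertical = np.array([0.0, 1.0, 0.0])   # assumed 'up' axis
    cos_a = np.dot(v, vertical) / (np.linalg.norm(v) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```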
20130108148 | AUTOMATED BUILDING DETECTING | 05-02-2013 |
20130108149 | Processing Method for a Pair of Stereo Images | 05-02-2013 |
20130108150 | STEREOSCOPIC MEASUREMENT SYSTEM AND METHOD | 05-02-2013 |
20130108151 | RECOVERING 3D STRUCTURE USING BLUR AND PARALLAX | 05-02-2013 |
20130114883 | APPARATUS FOR EVALUATING VOLUME AND METHOD THEREOF - An apparatus for evaluating a volume of an object and a method thereof are provided. The provided apparatus and the method can precisely evaluate the volume of the object with a single camera, and the required evaluation time is short. Accordingly, shipping companies can utilize the most appropriate container or cargo space for each object to deliver, thereby reducing operation costs and optimizing the transportation fleet. | 05-09-2013 |
20130114884 | THREE-DIMENSION IMAGE PROCESSING METHOD AND A THREE-DIMENSION IMAGE DISPLAY APPARATUS APPLYING THE SAME - A three-dimension (3D) image processing method is disclosed. A plurality of asymmetric filtering is performed on an input depth map to obtain a plurality of asymmetric filtering results. One among the asymmetric filtering results is selected as an output depth map. A two-dimension (2D) image is converted into a 3D image according to the output depth map. | 05-09-2013 |
20130114885 | METHOD AND APPARATUS FOR CREATING STEREO IMAGE ACCORDING TO FREQUENCY CHARACTERISTICS OF INPUT IMAGE AND METHOD AND APPARATUS FOR REPRODUCING THE CREATED STEREO IMAGE - A method and an apparatus for creating a stereo image adaptively according to the characteristic of an input image and a method and an apparatus for reproducing the created stereo image are provided. The method for creating a stereo image includes selecting one of a left view image and a right view image that constitute the stereo image and measuring the directivity of high frequency components of the selected image, and synthesizing the left view image and the right view image into a stereo image in a format depending on the measured directivity. | 05-09-2013 |
20130114886 | POSITION AND ORIENTATION MEASUREMENT APPARATUS, POSITION AND ORIENTATION MEASUREMENT METHOD, AND STORAGE MEDIUM - A position and orientation measurement apparatus for measuring a position and orientation of a target object, comprising: storage means for storing a three-dimensional model representing three-dimensional shape information of the target object; obtaining means for obtaining a plurality of measurement data about the target object sensed by image sensing means; reliability calculation means for calculating reliability for each of the pieces of measurement data; selection means for selecting the measurement data by a predetermined number from the plurality of measurement data based on the reliability; association means for associating planes forming the three-dimensional model with each of the measurement data selected by the selection means; and decision means for deciding the position and orientation of the target object based on the result associated by the association means. | 05-09-2013 |
20130114887 | STEREO DISTANCE MEASUREMENT APPARATUS AND STEREO DISTANCE MEASUREMENT METHOD - Provided is a stereo distance measurement apparatus wherein a camera image itself is adjusted to correct the blur, thereby preventing the distance measurement time from being long, while improving the precision of disparity detection. In the apparatus ( | 05-09-2013 |
20130121558 | Point Selection in Bundle Adjustment - In an embodiment, a method comprises receiving a set of three dimensional ( | 05-16-2013 |
20130121559 | MOBILE DEVICE WITH THREE DIMENSIONAL AUGMENTED REALITY - A method for determining an augmented reality scene by a mobile device includes estimating 3D geometry and lighting conditions of the sensed scene based on stereoscopic images captured by a pair of imaging devices. The device accesses intrinsic calibration parameters of a pair of imaging devices of the device independent of a sensed scene of the augmented reality scene. The device determines two dimensional disparity information of a pair of images from the device independent of a sensed scene of the augmented reality scene. The device estimates extrinsic parameters of a sensed scene by the pair of imaging devices, including at least one of rotation and translation. The device calculates a three dimensional image based upon a depth of different parts of the sensed scene based upon a stereo matching technique. The device incorporates a three dimensional virtual object in the three dimensional image to determine the augmented reality scene. | 05-16-2013 |
20130121560 | IMAGE PROCESSING DEVICE, METHOD OF PROCESSING IMAGE, AND IMAGE DISPLAY APPARATUS - According to an embodiment, an image processing device includes: a first acquiring unit, a second acquiring unit, a first setting unit, a second setting unit, a first calculating unit, and a second calculating unit. The first acquiring unit acquires a plurality of captured images by imaging a target object from a plurality of positions. The second acquiring unit acquires a provisional three-dimensional position and a provisional size. The first setting unit sets at least one search candidate point near the provisional three-dimensional position. The second setting unit sets a search window for each projection position where the search candidate point is projected, the search window having a size. The first calculating unit calculates an evaluation value that represents whether or not the target object is included inside the search window. The second calculating unit calculates a three-dimensional position of the target object based on the evaluation value. | 05-16-2013 |
20130121561 | Method, System and Computer Program Product for Detecting an Object in Response to Depth Information - First information is about respective depths of pixel coordinates within an image. Second information is about respective depths of the pixel coordinates within a ground plane. In response to comparing the first information against the second information, respective markings are generated to identify whether any one or more of the pixel coordinates within the image has significant protrusion from the ground plane. In response to a particular depth of a representative pixel coordinate within the image, a window of pixel coordinates is identified that is formed by different pixel coordinates and the representative pixel coordinate. In response to the respective markings, respective probabilities are computed for the pixel coordinates, so that the respective probability for the representative pixel coordinate is computed in response to the respective markings of all pixel coordinates within the window. In response to the respective probabilities, at least one object is detected within the image. | 05-16-2013 |
20130121562 | Method, System and Computer Program Product for Identifying Locations of Detected Objects - First and second objects are detected within an image. The first object includes first pixel columns, and the second object includes second pixel columns. A rightmost one of the first pixel columns is adjacent to a leftmost one of the second pixel columns. A first equation is fitted to respective depths of the first pixel columns, and a first depth is computed of the rightmost one of the first pixel columns in response to the first equation. A second equation is fitted to respective depths of the second pixel columns, and a second depth is computed of the leftmost one of the second pixel columns in response to the second equation. The first and second objects are merged in response to the first and second depths being sufficiently similar to one another, and in response to the first and second equations being sufficiently similar to one another. | 05-16-2013 |
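A hypothetical version of the merge test described above: fit a line to each object's per-column depths, then compare the fitted depth at the adjoining columns and the fitted slopes against tolerances. The tolerance values and the choice of a linear fit are assumptions for illustration.

```python
import numpy as np

def should_merge(depths_a, depths_b, depth_tol=0.3, slope_tol=0.05):
    """Sketch of the merge test: fit a line to each object's per-column depths,
    compare the depth of object A's rightmost column with object B's leftmost
    column, and compare the fitted slopes."""
    xa = np.arange(len(depths_a))
    xb = np.arange(len(depths_b))
    slope_a, intercept_a = np.polyfit(xa, depths_a, 1)
    slope_b, intercept_b = np.polyfit(xb, depths_b, 1)
    depth_a_right = slope_a * xa[-1] + intercept_a   # fitted depth at A's last column
    depth_b_left = intercept_b                       # fitted depth at B's first column
    similar_depth = abs(depth_a_right - depth_b_left) < depth_tol
    similar_slope = abs(slope_a - slope_b) < slope_tol
    return similar_depth and similar_slope
```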
20130121563 | PRIORITIZED COMPRESSION FOR VIDEO - In one embodiment, a method of prioritized compression for 3D video wireless display, the method comprising: inputting video data; abstracting scene depth of the video data; estimating foreground and background for each image of the video data; performing different kinds of compressions to the foreground and background in each image; and outputting the processed video data. Thus, the image quality is not affected by the data loss during the wireless transmission. | 05-16-2013 |
20130121564 | POINT CLOUD DATA PROCESSING DEVICE, POINT CLOUD DATA PROCESSING SYSTEM, POINT CLOUD DATA PROCESSING METHOD, AND POINT CLOUD DATA PROCESSING PROGRAM - A point cloud data processing device is equipped with a non-plane area removing unit 101, a plane labeling unit 102, a contour calculating unit 103, and a point cloud data remeasurement request processing unit 106. The non-plane area removing unit 101 removes point cloud data relating to non-plane areas from point cloud data in which a two-dimensional image of an object is linked with data of three-dimensional coordinates of plural points that form the two-dimensional image. The plane labeling unit 102 adds labels for identifying planes with respect to the point cloud data in which the data of the non-plane areas are removed. The contour calculating unit 103 calculates a contour of the object by using local flat planes based on a local area that is connected with the labeled plane. The point cloud data remeasurement request processing unit 106 requests remeasurement of the point cloud data. | 05-16-2013 |
20130129190 | Model-Based Stereo Matching - Model-based stereo matching from a stereo pair of images of a given object, such as a human face, may result in a high quality depth map. Integrated modeling may combine coarse stereo matching of an object with details from a known 3D model of a different object to create a smooth, high quality depth map that captures the characteristics of the object. A semi-automated process may align the features of the object and the 3D model. A fusion technique may employ a stereo matching confidence measure to assist in combining the stereo results and the roughly aligned 3D model. A normal map and a light direction may be computed. In one embodiment, the normal values and light direction may be used to iteratively perform the fusion technique. A shape-from-shading technique may be employed to refine the normals implied by the fusion output depth map and to bring out fine details. The normals may be used to re-light the object from different light positions. | 05-23-2013 |
20130129191 | Methods and Apparatus for Image Rectification for Stereo Display - A set of features in a pair of images is associated to selected cells within a set of cells using a base mesh. Each image of the pair of images is divided using the base mesh to generate the set of cells. The set of features is defined in terms of the selected cells. A stereo image pair is generated by transforming the set of cells with a mesh-based transformation function. A transformation of the set of cells is computed by applying an energy minimization function to the set of cells. A selected transformed mesh and another transformed mesh are generated by applying the transformation of the set of cells to the base mesh. The mesh-based transformation function preserves selected properties of the set of features in the pair of images. | 05-23-2013 |
20130129192 | RANGE MAP DETERMINATION FOR A VIDEO FRAME - A method for determining a range map for a particular video frame from a digital video comprising: determining a set of extrinsic parameters and one or more intrinsic parameters for each video frame. A set of candidate video frames are defined and an image similarity score for each candidate video frame providing an indication of the visual similarity. The image similarity scores are compared to a predefined threshold to determine a subset of the candidate video frames. A position difference score is determined for each video frame in the determined subset responsive to the extrinsic parameters, and the video frame having the largest position difference score is selected. The range map is determined responsive to disparity values representing a displacement between corresponding image pixels in the particular video frame and the selected video frame. | 05-23-2013 |
20130129193 | FORMING A STEREOSCOPIC IMAGE USING RANGE MAP - A method for forming a stereoscopic image from a main image of a scene captured from a main image viewpoint including one or more foreground objects, together with a main image range map and a background image. A first-eye image is determined corresponding to a first-eye viewpoint and a second-eye image is determined corresponding to a second-eye viewpoint. At least one of the first-eye image and the second-eye image is determined by warping the main image to the associated viewpoint, wherein the warped main image includes one or more holes corresponding to scene content that was occluded in the main image; warping the background image to the associated viewpoint; and determining pixel values to fill the one or more holes in the warped main image using pixel values at corresponding pixel locations in the warped background image; and forming a stereoscopic image including the first-eye image and the second-eye image. | 05-23-2013 |
20130129194 | METHODS AND SYSTEMS OF MERGING DEPTH DATA FROM A PLURALITY OF DISPARITY MAPS - A method of merging a plurality of disparity maps. The method comprises calculating a plurality of disparity maps each from images captured by another of a plurality of pairs of image sensors having stereoscopic fields of view (SFOVs) with at least one overlapping portion, the SFOVs covering a scene with a plurality of objects, identifying at least one of the plurality of objects in the at least one overlapping portion, the at least one object being mapped in each the disparity map, calculating accuracy of disparity values depicting the object in each the disparity map, merging depth data from the plurality of disparity maps according to the accuracy so as to provide a combined depth map wherein disparity values of the object are calculated according to one of the plurality of disparity maps, and outputting the depth data. | 05-23-2013 |
20130129195 | IMAGE PROCESSING METHOD AND APPARATUS USING THE SAME - An image processing method for obtaining a saliency map of an input image includes the steps of: determining a depth map and an initial saliency map; selecting a (j,i)th depth on the depth map as a target depth, wherein i and j are natural numbers respectively smaller than or equal to integers m and n; selecting 2R+1 selected depths with a one-dimensional window centered on the target depth, wherein R is a natural number greater than 1; for each of the 2R+1 selected depths, determining whether it is greater than the target depth; if so, adjusting a corresponding (j,i)th saliency value with a difference; and adjusting parameters i and j to have each and every saliency value of the initial saliency map adjusted and accordingly obtain the saliency map. | 05-23-2013 |
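A guess at how the per-pixel pass described above might look, assuming the correction accumulates the positive depth differences inside the 1-D window and that a fixed gain scales the adjustment; neither detail is stated in the abstract.

```python
import numpy as np

def adjust_saliency(depth, saliency, R=3, gain=0.1):
    """For each pixel, look at the 2R+1 depths in a 1-D horizontal window
    centered on it; every window depth greater than the target depth
    contributes a difference-based correction to that pixel's saliency."""
    n, m = depth.shape
    out = saliency.astype(np.float64).copy()
    for j in range(n):
        for i in range(m):
            lo, hi = max(0, i - R), min(m, i + R + 1)
            diff = depth[j, lo:hi] - depth[j, i]
            out[j, i] += gain * diff[diff > 0].sum()
    return out
```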
20130136336 | IMAGE PROCESSING APPARATUS AND CONTROLLING METHOD FOR IMAGE PROCESSING APPARATUS - According to one embodiment, an image processing apparatus includes a composition estimation module configured to estimate a composition from a two-dimensional image, an inmost color determination module configured to determine an inmost color based on the estimated composition and the two-dimensional image, a first depth generator configured to generate a first depth for each of multiple regions in the two-dimensional image based on the inmost color, and an image processor configured to convert the two-dimensional image into a three-dimensional image using the first depth. | 05-30-2013 |
20130136337 | Methods and Apparatus for Coherent Manipulation and Stylization of Stereoscopic Images - Methods and apparatus for coherent manipulation and stylization of stereoscopic images. A stereo image manipulation method may use the disparity map for a stereo image pair to divide the left and right images into a set of slices, each of which is the portion of the images that correspond to a certain, small depth range. The method may merge the left and right slices for a depth into a single image. The method may then apply a stylization technique to each slice. The method may then extract the left and right portions of each stylized slice, and stack them together to create a coherent stylized stereo image. As an alternative to first extracting slices from a merged image and then applying a stylization technique to the slices, the method may first apply the stylization technique to the merged image and then extract slices from the stylized merged image. | 05-30-2013 |
20130136338 | Methods and Apparatus for Correcting Disparity Maps using Statistical Analysis on Local Neighborhoods - Methods and apparatus for disparity map correction through statistical analysis on local neighborhoods. A disparity map correction technique may be used to correct mistakes in a disparity or depth map. The disparity map correction technique may detect and mark invalid pixel pairs in a disparity map, segment the image, and perform a statistical analysis of the disparities in each segment to identify outliers. The invalid and outlier pixels may then be corrected using other disparity values in the local neighborhood. Multiple iterations of the disparity map correction technique may be performed to further improve the output disparity map. | 05-30-2013 |
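As a hedged illustration of neighbourhood-statistics disparity correction, the sketch below replaces the patent's segmentation step with a fixed square window: a pixel is treated as an outlier when it deviates from the local median by more than a multiple of the local spread, and invalid or outlier pixels are overwritten with that median over a few iterations. All names and thresholds are assumptions.

```python
import numpy as np

def correct_disparity_outliers(disparity, valid_mask, win=5, k=2.5, iterations=2):
    """Detect and replace outlier/invalid disparities using statistics of a
    local square neighbourhood: values far from the local median are treated
    as outliers and replaced by that median.

    disparity  : HxW float disparity map
    valid_mask : HxW bool array, False for pixels already known to be invalid
    win        : odd window size of the local neighbourhood
    k          : outlier threshold in units of the local standard deviation
    """
    d = disparity.astype(float).copy()
    r = win // 2
    H, W = d.shape
    for _ in range(iterations):
        corrected = d.copy()
        for y in range(H):
            for x in range(W):
                y0, y1 = max(0, y - r), min(H, y + r + 1)
                x0, x1 = max(0, x - r), min(W, x + r + 1)
                patch = d[y0:y1, x0:x1]
                m = np.median(patch)
                s = patch.std() + 1e-6
                if not valid_mask[y, x] or abs(d[y, x] - m) > k * s:
                    corrected[y, x] = m
        d = corrected
    return d

noisy = np.random.rand(16, 16) * 64
noisy[5, 5] = 1000.0                      # an obvious outlier
valid = np.ones_like(noisy, dtype=bool)
cleaned = correct_disparity_outliers(noisy, valid)
```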
20130136339 | SYSTEM FOR REAL-TIME STEREO MATCHING - A system for real-time stereo matching is provided, which provides improved stereo matching speed and rate by gradually optimizing a disparity range used in the stereo matching based on the stereo matching result of the previous frame image and thus reducing unnecessary matching computations. | 05-30-2013 |
20130136340 | ARITHMETIC PROCESSING DEVICE - An image processor sets a first predetermined number of first blocks at first intervals in a second image, calculates a first evaluated value, selects one of the first blocks, and calculates a first parallax between the selected first block and the matching target block. An image processor sets a second predetermined number of second blocks at second intervals in a second image, calculates a second evaluated value, selects one of the second blocks, and calculates a second parallax between the selected second block and the matching target block. A controller determines, based on the first evaluated value and the second evaluated value and based on the first parallax and the second parallax, whether or not to employ one of the first parallax and the second parallax. | 05-30-2013 |
20130136341 | ELECTRONIC APPARATUS AND THREE-DIMENSIONAL MODEL GENERATION SUPPORT METHOD - According to one embodiment, an electronic apparatus includes a 3D model generator, a capture position estimation module and a notification controller. The 3D model generator generates 3D model data of a 3D model by using images in which a target object of the 3D model is captured. The capture position estimation module estimates a capture position of a last captured image of the images. The notification controller notifies a user of a position at which the object is to be next captured, based on the generated 3D model data and the estimated capture position. The 3D model generator updates the 3D model data by further using a newly captured image of the object. | 05-30-2013 |
20130136342 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - There are provided a data input unit configured to receive an input image, depth data, and a shooting parameter, a parameter input unit that receives a transformation parameter as a parameter on projective transformation of a three-dimensional model, a transformed image generating unit that generates a transformed image by performing projective transformation based on the transformation parameter in the three-dimensional model obtained from the input image, the depth data, and the shooting parameter, a blank area detecting unit that detects a blank area in the transformed image, the blank area being a group of blank pixels having no corresponding pixel in the input image, and an output unit configured to output the transformed image in the case where a blank value indicating the size of the blank area is smaller than or equal to a threshold value. | 05-30-2013 |
20130142415 | System And Method For Generating Robust Depth Maps Utilizing A Multi-Resolution Procedure - A system and method for generating robust depth maps includes a depth estimator that creates a depth map pyramid structure that includes a plurality of depth map levels that each have different resolution characteristics. In one embodiment, the depth map levels include a fine-scale depth map, a medium-scale depth map, and a coarse scale depth map. The depth estimator evaluates depth values from the fine-scale depth map by utilizing fine-scale confidence features, and evaluates depth values from the medium-scale depth map and the coarse-scale depth map by utilizing coarse-scale confidence features. The depth estimator then fuses optimal depth values from the different depth map levels into an optimal depth map. | 06-06-2013 |
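The fusion idea in the preceding entry, keeping, per pixel, the depth value whose confidence is highest across the pyramid levels, can be sketched as follows, assuming all levels have already been computed and resampled to a common resolution. The confidence features themselves are not modelled here; `fuse_depth_pyramid` is a hypothetical name.

```python
import numpy as np

def fuse_depth_pyramid(depths, confidences):
    """Fuse several depth maps of the same size (e.g. fine/medium/coarse
    estimates upsampled to a common resolution) by picking, per pixel, the
    depth whose confidence is highest.

    depths      : list of HxW float depth maps
    confidences : list of HxW float confidence maps, one per depth map
    """
    depth_stack = np.stack(depths)        # L x H x W
    conf_stack = np.stack(confidences)    # L x H x W
    best_level = np.argmax(conf_stack, axis=0)   # H x W level indices
    rows, cols = np.indices(best_level.shape)
    return depth_stack[best_level, rows, cols]

fine = np.random.rand(8, 8)
coarse = np.random.rand(8, 8)
fused = fuse_depth_pyramid([fine, coarse],
                           [np.random.rand(8, 8), np.random.rand(8, 8)])
```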
20130142416 | DETECTION DEVICE AND DETECTION METHOD - A detection device capable of reliably detecting an object to be detected. An intersection region pattern setting unit ( | 06-06-2013 |
20130156294 | DEPTH MAP GENERATION BASED ON SOFT CLASSIFICATION - A method for generating a depth map for a 2D image and video includes receiving the 2D image and video; defining a plurality of object classes; analyzing content of the received 2D image and video; calculating probabilities that the received 2D image belongs to the object classes; and determining a final depth map based on a result of the analyzed content and the calculated probabilities for the object classes. | 06-20-2013 |
20130156295 | METHOD OF FILTERING A DISPARITY MESH OBTAINED FROM PIXEL IMAGES - A method of filtering a disparity mesh from pixel images according to the invention, where the disparity mesh comprises a plurality of points, where each point is associated with values of two planar coordinates (X, Y) and a disparity value (D) and where the values are quantization pitches, comprises the step: filtering planes by filtering 2D-lines in 2D-spaces (X-D, Y-D) of the planar coordinates (X,Y) and the disparity (D). | 06-20-2013 |
20130156296 | Three Dimensional Gesture Recognition in Vehicles - A method and system for performing gesture recognition of a vehicle occupant employing a time of flight (TOF) sensor and a computing system in a vehicle. An embodiment of the method of the invention includes the steps of receiving one or more raw frames from the TOF sensor, performing clustering to locate one or more body part clusters of the vehicle occupant, calculating the location of the tip of the hand of the vehicle occupant, determining whether the hand has performed a dynamic or a static gesture, retrieving a command corresponding to one of the determined static or dynamic gestures, and executing the command. | 06-20-2013 |
20130163854 | IMAGE PROCESSING METHOD AND ASSOCIATED APPARATUS - An image processing method includes: receiving a plurality of images, the images being captured under different view points; and performing image alignment for the plurality of images by warping the plurality of images, where the plurality of images are warped according to a set of parameters, and the set of parameters are obtained by finding a solution constrained to predetermined ranges of physical camera parameters. In particular, the step of performing the image alignment further includes: automatically performing the image alignment to reproduce a three-dimensional (3D) visual effect, where the plurality of images is captured by utilizing a camera module, and the camera module is not calibrated with regard to the view points. For example, the 3D visual effect can be a multi-angle view (MAV) visual effect. In another example, the 3D visual effect can be a 3D panorama visual effect. An associated apparatus is also provided. | 06-27-2013 |
20130163855 | AUTOMATED DETECTION AND CORRECTION OF STEREOSCOPIC EDGE VIOLATIONS - Pixel-based and region-based methods, computer program products, and systems for detecting, flagging, highlighting on a display, and automatically fixing edge violations in stereoscopic images and video. The highlighting and display methods involve signed, clamped subtraction of one image of a stereo image pair from the other image, with the subtraction preferably isolated to a region of interest near the lateral edges. Various embodiments include limiting the detection, flagging, and highlighting of edge violations to objects causing a degree of perceptual discomfort greater than a user-set or preset threshold, or to objects having a certain size and/or proximity and/or degree of cut-off by a lateral edge of the left or right eye images of a stereo image pair. Methods of removing violations include automatic or semi-automatic cropping of the offending object, and depth shifting of the offending object onto the screen plane. | 06-27-2013 |
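A minimal sketch of the signed, clamped subtraction used above for highlighting edge violations is given below, restricted to vertical strips near the lateral frame edges as the abstract suggests. Which eye is subtracted from which, the strip width, and the function name are all assumptions for illustration only.

```python
import numpy as np

def edge_violation_map(left, right, border_frac=0.1):
    """Highlight potential stereo window (edge) violations by a signed,
    clamped subtraction of one eye's image from the other, restricted to
    vertical strips near the lateral edges of the frame.

    left, right : HxW float grayscale images in [0, 1]
    border_frac : fraction of the image width treated as the region of interest
    """
    H, W = left.shape
    b = max(1, int(W * border_frac))
    diff = np.clip(left - right, 0.0, None)   # signed subtraction, clamped at zero
    mask = np.zeros((H, W), dtype=bool)
    mask[:, :b] = True                        # left border strip
    mask[:, W - b:] = True                    # right border strip
    return np.where(mask, diff, 0.0)

left_img = np.random.rand(6, 20)
right_img = np.random.rand(6, 20)
violations = edge_violation_map(left_img, right_img)
```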
20130163856 | APPARATUS AND METHOD FOR ENHANCING STEREOSCOPIC IMAGE, RECORDED MEDIUM THEREOF - An apparatus for enhancing a stereoscopic image may include: a color relationship extraction unit which extracts color relationships between a plurality of first coordinates in a 3-dimensional color space for a first image and second coordinates in a 3-dimensional color space for a second image corresponding to the plurality of first coordinates; a color relationship correction unit, which corrects a color relationship for any one first coordinate from among the plurality of first coordinates based on a color relationship of at least one first coordinate existing within a particular distance from the any one first coordinate; and a color value transformation unit, which transforms a color value of the first image by using the corrected color relationship of the any one first coordinate. The invention provides the advantage of accurately correcting color imbalance between the left image and right image forming a stereoscopic image. | 06-27-2013 |
20130163857 | MULTIPLE CENTROID CONDENSATION OF PROBABILITY DISTRIBUTION CLOUDS - Systems and methods are disclosed for identifying objects captured by a depth camera by condensing classified image data into centroids of probability that captured objects are correctly identified entities. Output exemplars are processed to detect spatially localized clusters of non-zero probability pixels. For each cluster, a centroid is generated, generally resulting in multiple centroids for each differentiated object. Each centroid may be assigned a confidence value, indicating the likelihood that it corresponds to a true object, based on the size and shape of the cluster, as well as the probabilities of its constituent pixels. | 06-27-2013 |
20130170736 | DISPARITY ESTIMATION DEPTH GENERATION METHOD - A disparity estimation depth generation method, wherein, after inputting an original left map and an original right map of a stereo color image, the depth of said original left and right maps is computed, comprising the following steps: perform filtering of said original left and right maps, to generate a left map and a right map; perform edge detection of an object in said left and right maps, to determine the size of at least a matching block in said left and said right maps, based on information of two edges detected in an edge-adaptive approach; perform computation of matching cost, to generate respectively a preliminary depth map, and perform cross-check to find out at least an unreliable depth region from said preliminary depth map to perform refinement; and refine errors in said unreliable depth region, to obtain correct depth of said left and said right maps. | 07-04-2013 |
20130170737 | STEREOSCOPIC IMAGE CONVERTING APPARATUS AND STEREOSCOPIC IMAGE DISPLAYING APPARATUS - A stereoscopic image converting apparatus is capable of displaying a stereoscopic image. The apparatus comprises a photographing condition extracting portion for extracting convergent angle conversion information when right/left images are captured; and an image converting portion for changing the convergence angle at the time when the right/left images are captured. The image converting portion comprises a convergent angle correction value calculating portion which calculates the maximum disparity value of the right/left images on the basis of the convergent angle conversion information and display size information and calculates a convergent angle correction value at which the calculated maximum disparity value is equal to or lower than a previously designated maximum disparity value; and a convergent angle conversion processing portion which generates an image in which the convergent angle when the right/left images are captured is changed on the basis of the calculated convergent angle correction value. | 07-04-2013 |
20130177234 | SYSTEM AND METHOD FOR IDENTIFYING AN APERTURE IN A REPRESENTATION OF AN OBJECT - An iterative process for determining an aperture in a representation of an object is disclosed. The object is received and a bounding box corresponding thereto is determined. The bounding box includes a plurality of initial voxels and the object is embedded therein. An intersecting set of initial voxels is determined, as well as an internal set and an external set of initial voxels. The resolution of the voxels is iteratively decreased until the ratio of internal voxels to external voxels exceeds a predetermined threshold. The voxels corresponding to the final iteration are the final voxels. An internal set of final voxels is determined. A union set of initial voxels is determined indicating an intersection between the external set of initial voxels and the internal set of final voxels. From the union set of initial voxels and the external set of initial voxels, a location of an aperture is determined. | 07-11-2013 |
20130177235 | Evaluation of Three-Dimensional Scenes Using Two-Dimensional Representations - A system adapted to implement a learning rule in a three-dimensional (3D) environment is described. The system includes: a renderer adapted to generate a two-dimensional (2D) image based at least partly on a 3D scene; a computational element adapted to generate a set of appearance features based at least partly on the 2D image; and an attribute classifier adapted to generate at least one set of learned features based at least partly on the set of appearance features and to generate a set of estimated scene features based at least partly on the set of learned features. A method labels each image from among the set of 2D images with scene information regarding the 3D scene; selects a set of learning modifiers based at least partly on the labeling of at least two images; and updates a set of weights based at least partly on the set of learning modifiers. | 07-11-2013 |
20130177236 | METHOD AND APPARATUS FOR PROCESSING DEPTH IMAGE - An apparatus and method for processing a depth image. A depth image may be generated with reduced noise and motion blur, using depth images generated during different integration times that are generated based on the noise and motion blur of the depth image. | 07-11-2013 |
20130177237 | STEREO-VISION OBJECT DETECTION SYSTEM AND METHOD - An object in a visual scene is detected responsive to one or more void regions in an associated range-map image generated from associated stereo image components. In one aspect, each element of a valid-count vector contains a count of a total number of valid range values at a corresponding column position from a plurality of rows of the range-map image. The valid-count vector, or a folded version thereof, is filtered, and an integer approximation thereof is differentiated so as to provide for identifying one or more associated void regions along the plurality of rows of the range-map image. For each void region, the image pixels of an associated prospective near-range object are identified as corresponding to one or more modes of a histogram providing a count of image pixels with respect to image pixel intensity, for image pixels from one of the stereo image components within the void region. | 07-11-2013 |
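The valid-count-vector idea in the entry above can be sketched as follows: count valid range values per column, smooth the vector, threshold an integer approximation of it, and read off runs of empty columns as candidate void regions. The specific filter, threshold, and names below are assumptions, not the patent's definitions.

```python
import numpy as np

def find_void_regions(valid_value_mask, min_gap=3, smooth=5):
    """Locate candidate void regions (runs of columns with no valid range
    values) using a per-column valid-count vector.

    valid_value_mask : HxW bool array, True where the range value is valid
    min_gap          : minimum run length of empty columns to report
    smooth           : box-filter width applied to the valid-count vector
    """
    valid_count = valid_value_mask.sum(axis=0).astype(float)   # one count per column
    kernel = np.ones(smooth) / smooth
    filtered = np.convolve(valid_count, kernel, mode="same")
    empty = (np.rint(filtered) == 0).astype(int)               # integer approximation
    edges = np.diff(np.concatenate(([0], empty, [0])))         # +1 at starts, -1 at ends
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_gap]

mask = np.ones((10, 40), dtype=bool)
mask[:, 15:25] = False                 # synthetic void: no valid range values there
regions = find_void_regions(mask)      # reports a run of empty columns inside the void
```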
20130182943 | SYSTEMS AND METHODS FOR DEPTH MAP GENERATION - Various embodiments are disclosed for generating depth maps. One embodiment is a method implemented in an image processing device. The method comprises retrieving, by the image processing device, a 2D image; and determining, by the image processing device, at least one region within the 2D image having a high gradient characteristic relative to other regions within the 2D image. The method further comprises identifying, by the image processing device, an out-of-focus region based on the at least one region having a high gradient characteristic; and deriving, by the image processing device, a color model according to the out-of-focus region. Based on the color model, the image processing device provides a depth map for 2D-to-stereoscopic conversion. | 07-18-2013 |
20130182944 | 2D TO 3D IMAGE CONVERSION - A method (and system) of processing image data in which a depth map is processed to derive a modified depth map by analysing luminance and/or chrominance information in respect of the set of pixels of the image data. The depth map is modified using a function which correlates depth with pixel height in the pixellated image and which has a different correlation between depth and pixel height for different luminance and/or chrominance values. | 07-18-2013 |
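One plausible, and entirely assumed, form of a height-based depth function whose slope varies with luminance is sketched below; the patent does not specify the correlation function, so the linear mapping, the near/far convention, and the gain parameters here are illustrative only.

```python
import numpy as np

def height_based_depth(luma, base_gain=1.0, luma_gain=0.5):
    """Build a simple depth map in which depth increases with pixel height
    (rows nearer the top of the frame are treated as farther away), with the
    strength of that correlation modulated per pixel by luminance.

    luma : HxW float luminance image in [0, 1]
    Returns an HxW depth map in [0, 1] (0 = near, 1 = far).
    """
    H, W = luma.shape
    # Normalised pixel height: 1.0 at the top row, 0.0 at the bottom row.
    height = np.linspace(1.0, 0.0, H)[:, None] * np.ones((1, W))
    # Per-pixel slope of the height-to-depth mapping, varying with luminance.
    gain = base_gain + luma_gain * (luma - 0.5)
    return np.clip(gain * height, 0.0, 1.0)

image_luma = np.random.rand(120, 160)
depth_map = height_based_depth(image_luma)
```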
20130182945 | IMAGE PROCESSING METHOD AND APPARATUS FOR GENERATING DISPARITY VALUE - A method and apparatus for processing an image is provided. The image processing apparatus may adjust or generate a disparity of a pixel, by assigning similar disparities to two pixels that are adjacent to each other and have similar pixel values. The image processing apparatus may generate a final disparity map that may minimize energy, based on an image and an initial disparity map, under a predetermined constraint. A soft constraint or a hard constraint may be used as the constraint. | 07-18-2013 |
20130188860 | MEASUREMENT DEVICE, MEASUREMENT METHOD, AND COMPUTER PROGRAM PRODUCT - According to an embodiment, a second calculator calculates a three-dimensional position of a measurement position and error in the three-dimensional position using a first image, the measurement position, a second image, and a correspondence position. A selection unit determines whether there is an image pair, in which error in the three-dimensional position becomes smaller than the error calculated by the second calculator, from among image pairs of the plurality of images, when there is an image pair, selects the image pair, and when there is no image pair, decides on the three-dimensional position. Each time an image pair is selected, the second calculator calculates a new three-dimensional position of the measurement position and error using new first and second images each included in the image pair, and first and second projection positions where the three-dimensional positions are projected onto the new first and second images, respectively. | 07-25-2013 |
20130188861 | APPARATUS AND METHOD FOR PLANE DETECTION - A plane detection apparatus for detecting at least one plane model from an input depth image. The plane detection apparatus may include an image divider to divide the input depth image into a plurality of patches, a plane model estimator to calculate one or more plane models with respect to the plurality of patches including a first patch and a second patch, and a patch merger to iteratively merge patches having a plane model similarity greater than or equal to a first threshold by comparing plane models of the plurality of patches. When a patch having the plane model similarity greater than or equal to the first threshold is absent, the plane detection apparatus may determine at least one final plane model with respect to the input depth image using previously merged patches. | 07-25-2013 |
20130188862 | METHOD AND ARRANGEMENT FOR CENSORING CONTENT IN IMAGES - A method for censoring content on a three-dimensional image comprises a step of identifying in said three-dimensional image a three-dimensional object to be censored, and to replace said three-dimensional object to be censored by three-dimensional replacing contents in said three dimensional image. | 07-25-2013 |
20130195347 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes a modifying unit configured to modify depth information representing depths in individual pixels of an image in accordance with content included in the image, thereby generating modified depth information, and an enhancing unit configured to perform a stereoscopic effect enhancement process of enhancing a stereoscopic effect of the image by using the modified depth information generated by the modifying unit. | 08-01-2013 |
20130195348 | Image processing apparatus and method - An image processing apparatus is provided. The image processing apparatus may include a determining unit configured to determine at least one pixel having a pixel value difference between a first image and a second image lower than a critical value, among a plurality of input frame images to compute a hologram pattern, and a computing unit configured to compute a hologram pattern of the first image and to compute a hologram pattern of the second image using a computation result for the at least one pixel of the first image. | 08-01-2013 |
20130195349 | THREE-DIMENSIONAL IMAGE PROCESSING APPARATUS, THREE-DIMENSIONAL IMAGE-PICKUP APPARATUS, THREE-DIMENSIONAL IMAGE-PICKUP METHOD, AND PROGRAM - A sense of three-dimensionality and thickness is restored to a subject and a high-quality three-dimensional image with a low sense of a cardboard cutout effect is obtained, regardless of the cause of the cardboard cutout effect. In a three-dimensional image capturing apparatus (three-dimensional image processing apparatus) ( | 08-01-2013 |
20130195350 | IMAGE ENCODING DEVICE, IMAGE ENCODING METHOD, IMAGE DECODING DEVICE, IMAGE DECODING METHOD, AND COMPUTER PROGRAM PRODUCT - According to an embodiment, an image encoding device according to an embodiment includes an image generating unit, a first filtering unit, a prediction image generating unit, and an encoding unit. The image generating unit is configured to generate a first parallax image corresponding to a first viewpoint of an image to be encoded, with the use of at least one of depth information and parallax information of a second parallax image corresponding to a second viewpoint being different than the first viewpoint. The first filtering unit is configured to perform filtering on the first parallax image based on first filter information. The prediction image generating unit is configured to generate a prediction image with a reference image, the reference image being the first parallax image on which the filtering has been performed. The encoding unit is configured to generate encoded data from the image and the prediction image. | 08-01-2013 |
20130202190 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes an image detector and a controller. The image detector is utilized for receiving a surrounding image, and analyzing the surrounding image to determine a user's position. The controller is coupled to the image detector, and is utilized for receiving a stereo image, and modifying the stereo image to generate a modified stereo image by at least rotating the stereo image according to the user's position. | 08-08-2013 |
20130202191 | MULTI-VIEW IMAGE GENERATING METHOD AND APPARATUS USING THE SAME - A multi-view image generating method adapted to a 2D-to-3D conversion apparatus is provided. The multi-view image generating method includes the following steps. A pair of images is received. The pair of images is captured from different angles by a single image capturing apparatus rotating a rotation angle. A disparity map is generated based on one of the pair of images. A remapped disparity map is generated based on the disparity map by using a non-constant function. A depth map is generated based on the remapped disparity map. Multi-view images are generated based on the one of the pair of images and the depth map. Furthermore, a multi-view image generating apparatus adapted to the 2D-to-3D conversion apparatus is also provided. | 08-08-2013 |
20130202192 | APPARATUS AND METHOD FOR OPTICALLY MEASURING CREEP - A method of measuring creep strain in a gas turbine engine component, where at least a portion of the component has a material disposed thereon, and where the material has a plurality of markings providing a visually distinct pattern. The method may include capturing an image of at least a portion of the markings after an operational period of the gas turbine engine, and determining creep strain information of the component. The creep strain information may be determined by correlating the image captured after the operational period to an image captured before the operational period. | 08-08-2013 |
20130202193 | FRACTAL METHOD FOR DETECTING AND FILLING DATA GAPS WITHIN LIDAR DATA - Method for improving the quality of a set of a three dimensional (3D) point cloud data representing a physical surface by detecting and filling null spaces ( | 08-08-2013 |
20130202194 | Method for generating high resolution depth images from low resolution depth images using edge information - A method interpolates and filters a depth image with reduced resolution to recover a high resolution depth image using edge information, wherein each depth image includes an array of pixels at locations and wherein each pixel has a depth. The reduced depth image is first up-sampled, interpolating the missing positions by repeating the nearest-neighboring depth value. Next, a moving window is applied to the pixels in the up-sampled depth image. The window covers a set of pixels centred at each pixel. The pixels covered by the window are selected according to their position relative to the edge, and only pixels that are on the same side of the edge as the centre pixel are used for the filtering procedure. A single representative depth from the set of selected pixels in the window is assigned to the pixel to produce a processed depth image. | 08-08-2013 |
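A compact sketch of the procedure just described, nearest-neighbour upsampling followed by an edge-constrained window filter, is shown below. It assumes the edge information is available as a high-resolution label map (pixels with equal labels lie on the same side of an edge) and uses the median of the selected pixels as the single representative depth; both choices are assumptions.

```python
import numpy as np

def upsample_depth_edge_aware(low_depth, scale, edge_map, win=3):
    """Upsample a low-resolution depth map by nearest-neighbour repetition,
    then filter each pixel with a window restricted to pixels lying on the
    same side of an edge as the window centre, assigning the median of the
    selected pixels as the representative depth.

    low_depth : hxw float depth map
    scale     : integer upsampling factor
    edge_map  : (h*scale)x(w*scale) int array of region labels; equal labels
                mean "same side of the edge"
    win       : odd window size
    """
    up = np.kron(low_depth, np.ones((scale, scale)))   # nearest-neighbour upsample
    H, W = up.shape
    r = win // 2
    out = up.copy()
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - r), min(H, y + r + 1)
            x0, x1 = max(0, x - r), min(W, x + r + 1)
            patch = up[y0:y1, x0:x1]
            labels = edge_map[y0:y1, x0:x1]
            same_side = patch[labels == edge_map[y, x]]
            out[y, x] = np.median(same_side)
    return out

low = np.array([[1.0, 4.0], [1.0, 4.0]])
labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1                       # a vertical "edge" splitting the image
high = upsample_depth_edge_aware(low, 2, labels)
```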
20130202195 | DEVICE AND METHOD FOR ACQUISITION AND RECONSTRUCTION OF OBJECTS - The purpose of this invention is a device and a method which will permit the acquisition and subsequent reconstruction of objects with volume throughout the total external surface. This invention is characterised in that it has a particular mode of acquisition on the object in free fall, in such a way that there is no support surface which prevents acquisition of the surface which would be hidden by said support. The invention is also characterised by special modes of distribution of the cameras which optimise image capturing and provide useful information in the subsequent reconstruction of the volume through computer means. | 08-08-2013 |
20130202196 | METHOD AND APPARATUS FOR REMOTE SENSING OF OBJECTS UTILIZING RADIATION SPECKLE - Disclosed are systems and methods to extract information about the size and shape of an object by observing variations of the radiation pattern caused by illuminating the object with coherent radiation sources and changing the wavelengths of the source. Sensing and image-reconstruction systems and methods are described for recovering the image of an object utilizing projected and transparent reference points and radiation sources. Sensing and image-reconstruction systems and methods are also described for rapid sensing of such radiation patterns. A computational system and method is also described for sensing and reconstructing the image from its autocorrelation. This computational approach uses the fact that the autocorrelation is the weighted sum of shifted copies of an image, where the shifts are obtained by sequentially placing each individual scattering cell of the object at the origin of the autocorrelation space. | 08-08-2013 |
20130202197 | System and Method for Manipulating Data Having Spatial Co-ordinates - Systems and methods are provided for extracting various features from data having spatial coordinates. The systems and methods may identify and extract data points from a point cloud, where the data points are considered to be part of the ground surface, a building, or a wire (e.g. power lines). Systems and methods are also provided for enhancing a point cloud using external data (e.g. images and other point clouds), and for tracking a moving object by comparing images with a point cloud. An objects database is also provided which can be used to scale point clouds to be of similar size. The objects database can also be used to search for certain objects in a point cloud, as well as recognize unidentified objects in a point cloud. | 08-08-2013 |
20130208975 | Stereo Matching Device and Method for Determining Concave Block and Convex Block - A stereo matching device used in a stereoscopic display system for determining a concave block and a convex block is provided. The stereo matching device comprises a receiving module for receiving a first and a second view-angle frames, a computation module, a feature extraction module and an estimation module. The computation module generates a disparity map having disparity entries respectively corresponding to blocks of the first view-angle frame. The feature extraction module generates feature maps each having feature entries respectively corresponding to the blocks. The estimation module comprises a reliability computation unit for computing a feature reliability of each of the blocks based on the feature maps and a comparator unit for filtering out unqualified blocks according to at least one reliability threshold to generate a plurality of candidate blocks and further determining the concave block and the convex block. | 08-15-2013 |
20130208976 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR CALCULATING ADJUSTMENTS FOR IMAGES - A system, method, and computer program product are provided for calculating adjustments for images. In use, a plurality of images is identified. Additionally, one or more discrepancies are determined between the plurality of images. Further, one or more adjustments are calculated for one or more of the plurality of images, utilizing the determined one or more discrepancies. | 08-15-2013 |
20130216123 | Design and Optimization of Plenoptic Imaging Systems - The spatial resolution of captured plenoptic images is enhanced. In one aspect, the plenoptic imaging process is modeled by a pupil image function (PIF), and a PIF inversion process is applied to the captured plenoptic image to produce a better resolution estimate of the object. | 08-22-2013 |
20130216124 | Spatial Reconstruction of Plenoptic Images - The spatial resolution of captured plenoptic images is enhanced. In one aspect, the plenoptic imaging process is modeled by a pupil image function (PIF), and a PIF inversion process is applied to the captured plenoptic image to produce a better resolution estimate of the object. | 08-22-2013 |
20130216125 | Resolution-Enhanced Plenoptic Imaging System - The spatial resolution of captured plenoptic images is enhanced. In one aspect, the plenoptic imaging process is modeled by a pupil image function (PIF), and a PIF inversion process is applied to the captured plenoptic image to produce a better resolution estimate of the object. | 08-22-2013 |
20130223725 | APPARATUS AND METHOD FOR ESTIMATING DISPARITY USING VISIBILITY ENERGY MODEL - An apparatus and method for estimating disparity based on a visibility energy model includes an energy calculator to calculate energy related to stereo matching of each of a left image and a right image which constitute a stereo image, a map generator to generate a visibility map for determining an error in disparity between the left image and the right image using the energy, an energy recalculator to recalculate energy with respect to a region including a visibility error generated due to the error in disparity in the visibility map, and a disparity determiner to determine disparity from the stereo image using recalculated energy of each of the left image and the right image. | 08-29-2013 |
20130230232 | Three-Dimensional Image Processing Device, And Three-Dimensional Image Processing Method - A three-dimensional image processing device for visually applying a special effect by subjecting raw data that has been acquired by imaging to image processing, and creating image data capable of being viewed stereoscopically. The device comprises a tone conversion section for tone converting the raw data in accordance with the special effect, a three-dimensional image data processing section for carrying out at least one of clipping image data that has been tone converted by the tone conversion section as image data to be viewed stereoscopically, or carrying out geometric processing, to create three-dimensional image data, and a special effect image processing section for subjecting the three-dimensional image data to special image processing to apply a special effect that is analogous to an image that has been formed optically or formed by photographic film or by development and printing processing, and creating a three-dimensional special-effect image. | 09-05-2013 |
20130230233 | Method for Reconstruction of Multi-Parameter Images of Oscillatory Processes in Mechanical Systems - A method for investigating vibration processes in elastic mechanical systems. The technical result of the proposed invention is the creation of a spectral set of multidimensional images, mapping time-related three-dimensional vector parameters of metrological, and/or design-analytical, and/or design vibration parameters of mechanical systems. Reconstructed images with various dimensionality that are integrated in various combinations depending on the target function can be used as a homeostatic portrait or a cybernetic image of vibration processes in mechanical systems for objective evaluation of current operating conditions in real time. The invention can be widely used for improving the effectiveness of monitoring and investigating vibration processes in mechanical systems (objects) in the fields of mechanical engineering, construction, acoustics, etc. | 09-05-2013 |
20130230234 | ANALYSIS OF THREE-DIMENSIONAL SCENES WITH A SURFACE MODEL - A method for processing data includes receiving a depth map of a scene containing a humanoid form. The depth map is processed so as to identify three-dimensional (3D) connected components in the scene, each connected component including a set of the pixels that are mutually adjacent and have mutually-adjacent depth values. Separate, first and second connected components are identified as both belonging to the humanoid form, and a representation of the humanoid form is generated including both of the first and second connected components. | 09-05-2013 |
20130230235 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus according to the present invention includes a three-dimensional model storage unit configured to store data of a three-dimensional model that describes a geometric feature of an object, a two-dimensional image input unit configured to input a two-dimensional image in which the object is imaged, a range image input unit configured to input a range image in which the object is imaged, an image feature detection unit configured to detect an image feature from the two-dimensional image input from the two-dimensional image input unit, an image feature three-dimensional information calculation unit configured to calculate three-dimensional coordinates corresponding to the image feature from the range image input from the range image input unit, and a model fitting unit configured to fit the three-dimensional model into the three-dimensional coordinates of the image feature. | 09-05-2013 |
20130236089 | LEARNING-BASED ESTIMATION OF HAND AND FINGER POSE - A method for processing data includes receiving a depth map of a scene containing a human hand, the depth map consisting of a matrix of pixels having respective pixel depth values. The method continues by extracting from the depth map respective descriptors based on the depth values in a plurality of patches distributed in respective positions over the human hand, and matching the extracted descriptors to previously-stored descriptors in a database. A pose of the human hand is estimated based on stored information associated with the matched descriptors. | 09-12-2013 |
20130243305 | IMAGE PROCESSING METHOD FOR STEREOSCOPIC IMAGES - An image processing method for stereoscopic images includes providing image data of a first visual angle image and image data of a second visual angle image; performing first image processing to the image data of the second visual angle image according to performance of a first display parameter of the image data of the first visual angle image on a display panel, for adjusting performance of the first display parameter of the image data of the second visual angle image on the display panel to correspond to the performance of the first display parameter of the image data of the first visual angle image on the display panel; and displaying the first visual angle image and the second visual angle image after the first image processing. | 09-19-2013 |
20130243306 | Methods and Apparatus for 3D Camera Positioning Using a 2D Vanishing Point Grid - Methods and apparatus for three-dimensional (3D) camera positioning using a two-dimensional (2D) vanishing point grid. A vanishing point grid in a scene and initial camera parameters may be obtained. A new 3D camera may be calculated according to the vanishing point grid that places the grid as a ground plane in a scene. A 3D object may then be placed on the ground plane in the scene as defined by the 3D camera. The 3D object may be placed at the center of the vanishing point grid. Once placed, the 3D object can be moved to other locations on the ground plane or otherwise manipulated. The 3D object may be added as a layer in the image. | 09-19-2013 |
20130243307 | OBJECT IDENTIFICATION IN IMAGES OR IMAGE SEQUENCES - A solution for identifying an object in an image or a sequence of images is described. A segmenter separates a first image into superpixels. A set of grouped superpixels is determined from these superpixels by an analyzer or by a user input via a user interface. The set of grouped superpixels is sent to a search engine, which returns the results of a search performed by the search engine on the set of grouped superpixels. | 09-19-2013 |
20130251240 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes a determination unit, a search unit, a weight assignment unit and a filling unit. The determination unit determines whether a hole is surrounded by the foreground in a disparity map or a depth map. The search unit searches for multiple relative backgrounds along multiple directions when the hole is surrounded by the foreground. The weight assignment unit respectively assigns weights to the relative backgrounds. The filling unit selects an extremum from the weights, and fills the hole according to the relative background corresponding to the extremum. | 09-26-2013 |
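The directional background search in the entry above can be sketched roughly as follows: for each hole pixel, scan along several directions for the nearest non-hole pixel whose disparity is low enough to count as relative background, weight candidates by inverse distance, and copy the candidate with the largest weight. The weighting scheme, the foreground threshold, and the four-direction set are assumptions for illustration.

```python
import numpy as np

# Search directions: left, right, up, down (a subset of the "multiple directions").
DIRECTIONS = [(0, -1), (0, 1), (-1, 0), (1, 0)]

def fill_hole_from_directional_background(disparity, hole_mask, foreground_thresh):
    """Fill hole pixels in a disparity map by scanning along several directions
    for the nearest relative-background pixel, weighting candidates by the
    inverse of their distance, and copying the candidate with the largest weight.

    disparity         : HxW float disparity map (larger = nearer / foreground)
    hole_mask         : HxW bool array, True at hole pixels
    foreground_thresh : disparities above this value are considered foreground
    """
    H, W = disparity.shape
    out = disparity.copy()
    for y, x in zip(*np.nonzero(hole_mask)):
        best_weight, best_value = -1.0, None
        for dy, dx in DIRECTIONS:
            ny, nx, dist = y + dy, x + dx, 1
            while 0 <= ny < H and 0 <= nx < W:
                if not hole_mask[ny, nx] and disparity[ny, nx] <= foreground_thresh:
                    # First relative-background pixel found along this direction.
                    weight = 1.0 / dist
                    if weight > best_weight:
                        best_weight, best_value = weight, disparity[ny, nx]
                    break
                ny, nx, dist = ny + dy, nx + dx, dist + 1
        if best_value is not None:
            out[y, x] = best_value
    return out

disp = np.full((9, 9), 2.0)
disp[3:6, 3:6] = 20.0              # a foreground block
holes = np.zeros((9, 9), dtype=bool)
holes[4, 4] = True                 # a hole surrounded by foreground
filled = fill_hole_from_directional_background(disp, holes, foreground_thresh=10.0)
```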
20130251241 | Applying Perceptually Correct 3D Film Noise - Perceptually correct noises simulating a variety of noise patterns or textures may be applied to stereo image pairs each of which comprises a left eye (LE) image and a right eye (RE) image that represent a 3D image. LE and RE images may or may not be noise removed. Depth information of pixels in the LE and RE images may be computed from, or received with, the LE and RE images. Desired noise patterns are modulated onto the 3D image or scene so that the desired noise patterns are perceived to be part of 3D objects or image details, taking into account where the 3D objects or image details are on a z-axis perpendicular to an image rendering screen on which the LE and RE images are rendered. | 09-26-2013 |
20130251242 | 3D DATA ANALYSIS APPARATUS AND 3D DATA ANALYSIS METHOD - The present invention provides a 3D data analysis apparatus and a 3D data analysis method. | 09-26-2013 |
20130251243 | IMAGE PROCESSOR, LIGHTING PROCESSOR AND METHOD THEREFOR - An image processor, a lighting processor, and a method therefor are provided. According to one aspect of the invention, the lighting processor can extract information related to diffuse lighting applied to a real object using a colored image and a depth image of the real object. The lighting processor can recover the diffuse image for the real object using the extracted information related to diffuse lighting, and generate either a specular image or a shadow image using the recovered diffuse image and the colored image. | 09-26-2013 |
20130259360 | METHOD AND SYSTEM FOR STEREO CORRESPONDENCE - A method and system for stereo correspondence. The method for stereo correspondence includes a matching cost computation step, a cost aggregation step, a disparity computation step, and a disparity optimization step. The matching cost computation step acquires a left disparity space image and a right disparity space image by using horizontal gradients and vertical gradients of intensities of all component channels of every pixel in a left image and a right image. Utilizing the invention, accurate disparity maps may be acquired quickly. | 10-03-2013 |
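A sketch of a gradient-based matching cost in the spirit of the abstract above follows: it sums absolute differences of horizontal and vertical gradients over all colour channels for each candidate disparity, producing a left-referenced disparity space image. The aggregation and optimization steps of the patent are omitted; function names are hypothetical.

```python
import numpy as np

def gradient_matching_cost(left, right, max_disparity):
    """Build a left-referenced disparity space image (DSI) whose matching cost
    is the sum of absolute differences between the horizontal and vertical
    intensity gradients of all colour channels, for each candidate disparity.

    left, right   : HxWx3 float images
    max_disparity : largest candidate disparity (in pixels)
    Returns a DSI of shape (max_disparity + 1, H, W).
    """
    def gradients(img):
        gy, gx = np.gradient(img, axis=(0, 1))
        return gx, gy

    lx, ly = gradients(left)
    rx, ry = gradients(right)
    H, W, _ = left.shape
    dsi = np.full((max_disparity + 1, H, W), np.inf)
    for d in range(max_disparity + 1):
        # Compare left pixel (y, x) with right pixel (y, x - d).
        cost = (np.abs(lx[:, d:] - rx[:, :W - d]) +
                np.abs(ly[:, d:] - ry[:, :W - d])).sum(axis=2)
        dsi[d, :, d:] = cost
    return dsi

left_img = np.random.rand(32, 48, 3)
right_img = np.random.rand(32, 48, 3)
dsi = gradient_matching_cost(left_img, right_img, max_disparity=16)
winner_take_all_disparity = np.argmin(dsi, axis=0)
```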
20130259361 | CONTENT-BASED MATCHING OF VIDEOS USING LOCAL SPATIO-TEMPORAL FINGERPRINTS - A computer implemented method for matching video data to a database containing a plurality of video fingerprints of the type described above, comprising the steps of calculating at least one fingerprint representing at least one query frame from the video data; indexing into the database using the at least one calculated fingerprint to find a set of candidate fingerprints; applying a score to each of the candidate fingerprints; selecting a subset of candidate fingerprints as proposed frames by rank ordering the candidate fingerprints; and attempting to match at least one fingerprint of at least one proposed frame. | 10-03-2013 |
20130266206 | Training A User On An Accessibility Device - A user of an accessibility device is taught to properly use the device with a test image such that the accessibility device captures the entirety of, or a large portion of, the test image. In training the user, the device processes the test image's information located within the device's field of view. Based on this processed information, the device indicates to the user whether the device should be repositioned such that a larger portion of the test image comes within the device's field of view. | 10-10-2013 |
20130266207 | METHOD FOR IDENTIFYING VIEW ORDER OF IMAGE FRAMES OF STEREO IMAGE PAIR ACCORDING TO IMAGE CHARACTERISTICS AND RELATED MACHINE READABLE MEDIUM THEREOF - A method for identifying an actual view order of image frames of a stereo image pair includes at least the following steps: receiving the image frames; obtaining image characteristics by analyzing the image frames according to an assumed view order; and identifying the actual view order by checking the image characteristics. In addition, a machine readable medium storing a program code is provided. The program causes a processor to perform at least the following steps for identifying an actual view order of image frames of a stereo image pair when executed by the processor: receiving the image frames; obtaining image characteristics by analyzing the image frames according to an assumed view order; and identifying the actual view order by checking the image characteristics. | 10-10-2013 |
20130266208 | IMAGE PROCESSING APPARATUS AND METHOD - Provided is an image processing apparatus. A boundary detector of the image processing apparatus may detect a boundary of an occlusion region of a color image warped in correspondence to a first view. A boundary labeling unit of the image processing apparatus may label the detected boundary with one of a foreground region boundary and a background region boundary. | 10-10-2013 |
20130266209 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - To provide a solution by which adjustment of the depth display of the image can be easily carried out by the user at will in a technique for forming a 3-D image from plural images, from the plural feed images, one feed image is extracted as the reference feed image, with an object recognition process being carried out to extract the object region having the prescribed characteristic features. The reference feed image IL is displayed on the display unit | 10-10-2013 |
20130266210 | DETERMINING A DEPTH MAP FROM IMAGES OF A SCENE - Methods for determining a depth measurement of a scene which involve capturing at least two images of the scene with different camera parameters, and selecting corresponding image patches in each scene. A first approach calculates a plurality of complex responses for each image patch using a plurality of different quadrature filters, each complex response having a magnitude and a phase, assigns, for each quadrature filter, a weighting to the complex responses in the corresponding image patches, the weighting being determined by a relationship of the phases of the complex responses, and determines the depth measurement of the scene from a combination of the weighted complex responses. | 10-10-2013 |
20130266211 | STEREO VISION APPARATUS AND METHOD - A method for stereo vision may include filtering a row or column in a stereo image to obtain intensity profiles, identifying peaks in the intensity profiles, pairing peaks within a maximum disparity distance, determining a shape interval for peak pairs, selecting a peak pair with a maximum shape interval, determining a disparity offset for the peak pairs, extending shape intervals to include all pixels in the intensity profiles, computing depths or distances from disparity offsets, and smoothing the stereo image disparity map along a perpendicular dimension. Another method for stereo vision includes filtering stereo images to intensity profiles, identifying peaks in the intensity profiles, pairing peaks within a maximum disparity distance, determining shape intervals for peak pairs, and selecting peak pairs with the maximum shape interval. Apparatus corresponding to the above methods are also disclosed herein. | 10-10-2013 |
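The peak-pairing core of the approach above can be sketched for a single scanline as below, assuming standard rectified stereo (right-image features shifted left). Shape intervals and the perpendicular smoothing step are omitted; the peak detector, the nearest-candidate pairing rule, and all names are illustrative assumptions.

```python
import numpy as np

def profile_peaks(row, smooth=5):
    """Return indices of local maxima of a smoothed 1-D intensity profile."""
    kernel = np.ones(smooth) / smooth
    p = np.convolve(row, kernel, mode="same")
    interior = (p[1:-1] > p[:-2]) & (p[1:-1] >= p[2:])
    return np.nonzero(interior)[0] + 1

def pair_peaks(left_row, right_row, max_disparity=20):
    """Pair intensity-profile peaks of corresponding left/right scanlines when
    the right peak lies within max_disparity pixels to the left of the left
    peak, and return the resulting (column, disparity) pairs.
    """
    lp = profile_peaks(left_row)
    rp = profile_peaks(right_row)
    pairs = []
    for x in lp:
        candidates = rp[(rp <= x) & (x - rp <= max_disparity)]
        if candidates.size:
            best = candidates[np.argmin(x - candidates)]   # nearest candidate
            pairs.append((x, x - best))                     # (left column, disparity)
    return pairs

left_scanline = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.05 * np.random.rand(200)
right_scanline = np.roll(left_scanline, -4)                # a uniform 4-pixel shift
matches = pair_peaks(left_scanline, right_scanline)
```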
20130266212 | METHOD AND APPARATUS FOR CONVERTING 2D VIDEO IMAGE INTO 3D VIDEO IMAGE - A method of converting a two-dimensional video to a three-dimensional video, the method comprising: comparing an image of an n | 10-10-2013 |
20130266213 | THREE-DIMENSIONAL IMAGE PROCESSING APPARATUS AND THREE-DIMENSIONAL IMAGE PROCESSING METHOD - A three-dimensional image processing apparatus includes an obtainer that obtains three-dimensional image information including information of a first image and a second image, a shade information obtainer that obtains shade information from the information of the first image and/or the second image, and a disparity adjuster that adjusts a disparity of a subject contained in the first and the second images based on the shade information. | 10-10-2013 |
20130272600 | RANGE IMAGE PIXEL MATCHING METHOD - A method for matching the pixels ( | 10-17-2013 |
20130272601 | Image Processing Apparatus, Image Processing Method, and Program - An image processing apparatus includes a viewing situation analyzing unit configured to obtain information representing a user's viewing situation of 3D content stored in a certain storage unit, and, based on a preset saving reference in accordance with a viewing situation of 3D content, determine a data reduction level of content data of the 3D content stored in the storage unit; and a data conversion unit configured to perform data compression of the content data of the 3D content stored in the storage unit in accordance with the determined data reduction level. | 10-17-2013 |
20130279797 | Image Scaling - The present invention relates to an apparatus and method for adjusting depth characteristics of a three-dimensional image, for correcting errors in perceived depth when scaling the three-dimensional image, the method comprising: receiving three-dimensional image information comprising a stereoscopic image including a first image and a second image, the stereoscopic image having depth characteristics associated with an offset of the first and second images; determining a scaling factor indicative of a scaling for converting the stereoscopic image from an original target size to a new size; determining at least one shifting factor for varying the depth characteristics, the at least one shifting factor indicative of a relative shift to be applied between the first and the second images, wherein the at least one shifting factor is determined in accordance with the scaling factor and at least one depth parameter derived from the depth characteristics; and performing the relative shift between the first and second images in accordance with the shifting factor for adjusting the offset of the first and second images. | 10-24-2013 |
20130279798 | VOLUMETRIC IMAGE DATA PROCESSING - A method, an apparatus, and a computer readable medium storing computer readable instructions are disclosed for processing volumetric image data. According to the method, 3-dimensional data points are collected. A plurality of 2-dimensional image maps is obtained from the 3-dimensional data points. At least one of the plurality of 2D image maps is extracted to form at least one image frame. A frame gallery is created from the at least one image frame. | 10-24-2013 |
20130279799 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM - An image processing device includes a map filtering processing unit which applies filtering to a parallax map based on a parallax value with respect to each pixel of an image; a blurred image generation unit which generates a blurred image of the image from the image; and an image composition unit which generates a composite image which is obtained by compositing the image and the blurred image based on the parallax map after the filtering by the map filtering processing unit. | 10-24-2013 |
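A minimal sketch of the entry above, compositing a sharp image with its blurred copy under control of a filtered parallax map, is given below, assuming SciPy's Gaussian filter for both the map filtering and the blurred-image generation. The weighting function and parameter names are assumptions, not the patent's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_of_field_composite(image, parallax, focus_parallax,
                             blur_sigma=3.0, map_sigma=2.0):
    """Composite a sharp image with its blurred copy using a (smoothed)
    per-pixel parallax map: pixels whose parallax is far from the in-focus
    parallax receive more of the blurred image.

    image          : HxWx3 float image
    parallax       : HxW float parallax map
    focus_parallax : parallax value that should stay sharp
    """
    smoothed_parallax = gaussian_filter(parallax, map_sigma)   # map filtering step
    blurred = np.stack([gaussian_filter(image[..., c], blur_sigma)
                        for c in range(3)], axis=-1)
    distance = np.abs(smoothed_parallax - focus_parallax)
    weight = np.clip(distance / (distance.max() + 1e-6), 0.0, 1.0)
    return (1.0 - weight[..., None]) * image + weight[..., None] * blurred

img = np.random.rand(64, 64, 3)
par = np.tile(np.linspace(0, 10, 64), (64, 1))
shallow_dof = depth_of_field_composite(img, par, focus_parallax=5.0)
```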
20130287288 | METHOD AND DEVICE FOR DETERMINING THE OFFSET DISTANCE BETWEEN TWO SURFACES - A method and device for determining the offset distance between a first surface and a second surface is disclosed. The method and device determine a first reference surface, which is based on the three-dimensional coordinates of the first surface, and a second reference surface, which is based on the three-dimensional coordinates of the second surface. The offset distance is determined as the distance between a first point on the first reference surface and a second point on the second reference surface. | 10-31-2013 |
20130287289 | Synthetic Reference Picture Generation - A synthetic image block in a synthetic picture is generated for a viewpoint based on a texture image and a depth image. A subset of samples from the texture image are warped to the synthetic image block. Disoccluded samples are marked, and the disoccluded samples in the synthetic image block are filled based on samples in a constrained area. The method and system enables both picture level and block level processing for synthetic reference picture generation. The method can be used for power limited devices, and can also refine the synthetic reference picture quality at a block level to achieve coding gains. | 10-31-2013 |
20130287290 | IMAGE REGISTRATION OF MULTIMODAL DATA USING 3D GEOARCS - An accurate, flexible and scalable technique for multi-modal image registration is described, a technique that does not need to rely on direct feature matching and does not need to rely on precise geometric models. The methods and/or systems described in this disclosure enable the registration (fusion) of multi-modal images of a scene with a three dimensional (3D) representation of the same scene using, among other information, viewpoint data from a sensor that generated a target image, as well as 3D-GeoArcs. The registration techniques of the present disclosure may be comprised of three main steps, as shown in FIG. | 10-31-2013 |
20130287291 | METHOD OF PROCESSING DISPARITY SPACE IMAGE - The present invention relates to a processing method that emphasizes neighboring information around a disparity surface included in a source disparity space image by means of processing that emphasizes similarity at true matching points using inherent geometric information, that is, coherence and symmetry. The method of processing the disparity space image includes capturing stereo images satisfying epipolar geometry constraints using at least two cameras having parallax, generating pixels of a 3D disparity space image based on the captured images, reducing dispersion of luminance distribution of the disparity space image while keeping information included in the disparity space image, generating a symmetry-enhanced disparity space image by performing processing for emphasizing similarities of pixels arranged at reflective symmetric locations along a disparity-changing direction in the disparity space image, and extracting a disparity surface by connecting at least three matching points in the symmetry-enhanced disparity space image. | 10-31-2013 |
20130287292 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND COMPUTER PROGRAM PRODUCT - According to an embodiment, an image processing apparatus includes an obtaining unit and an image processing unit. The obtaining unit is configured to obtain depth information for each position in an image. The image processing unit is configured to switch between a first sharpening process and a second sharpening process in accordance with whether the image contains a predetermined area. The first sharpening process performs non-uniform sharpening on the image on the basis of the depth information; and the second sharpening process performs uniform sharpening on the image. | 10-31-2013 |
20130287293 | Active Lighting For Stereo Reconstruction Of Edges - Active lighting is provided for stereo reconstruction ( | 10-31-2013 |
20130287294 | Methods for Generating Personalized 3D Models Using 2D Images and Generic 3D Models, and Related Personalized 3D Model Generating System - A method for generating a personalized 3D model using a plurality of 2D images and a generic 3D model is provided. The method includes the following steps: extracting a plurality of feature points from the plurality of 2D images; extracting a plurality of landmark points from the generic 3D model; mapping the plurality of features extracted from the plurality of 2D images to the plurality of landmark points extracted from the generic 3D model so as to generate relationship parameters for a mapping algorithm; morphing the generic 3D model into a personalized 3D model with the plurality of landmark points, the relationship parameters and the mapping algorithm; iteratively refining the personalized 3D model with the plurality of feature points extracted from the plurality of 2D images; and when a convergent condition is met, the step of iteratively refining the personalized 3D model is complete and the personalized 3D model is saved to the 3D model database. | 10-31-2013 |
20130294681 | PARALLAX CALCULATING APPARATUS AND PARALLAX CALCULATING METHOD - A disparity calculating apparatus which calculates a disparity value from a stereo image including first and second images includes: an image dividing unit which divides the first image into segments; a reference point determining unit which sets reference points to the first image; a corresponding point detecting unit which (i) calculates, for each reference point, a corresponding point which is included in the second image and corresponds to the reference point, based on phase information of images obtained by performing Fourier transform on the first image and the second image, (ii) calculates a disparity value using the reference point and each corresponding point, and (iii) calculates a reliability of the disparity value; and a disparity assigning unit which assigns each of the segments which includes a corresponding one of the reference points the disparity value and the reliability calculated by the corresponding point detecting unit. | 11-07-2013 |
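The phase-information core of the disparity calculation above can be sketched with a 1-D phase-only correlation between corresponding signals, returning both a shift estimate and the correlation peak height as a crude reliability. Segmentation and per-segment disparity assignment are omitted; names and the reliability definition are assumptions.

```python
import numpy as np

def phase_correlation_shift(patch_a, patch_b):
    """Estimate the relative shift between two 1-D signals from the phase of
    their Fourier transforms (phase-only correlation), and return the shift
    together with the correlation peak height as a crude reliability score.
    """
    A = np.fft.fft(patch_a)
    B = np.fft.fft(patch_b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    poc = np.real(np.fft.ifft(cross))
    peak = int(np.argmax(poc))
    n = len(patch_a)
    shift = peak if peak <= n // 2 else peak - n   # wrap negative shifts
    reliability = float(poc[peak])                 # close to 1.0 for a clean match
    return shift, reliability

n = 64
signal = np.cos(np.linspace(0, 8 * np.pi, n))
shifted = np.roll(signal, 5)
disparity, score = phase_correlation_shift(shifted, signal)   # disparity == 5
```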
20130294682 | THREE-DIMENSIONAL IMAGE PROCESSING APPARATUS AND THREE-DIMENSIONAL IMAGE PROCESSING METHOD - In a three-dimensional image capturing apparatus (three-dimensional image processing apparatus), a depth obtainment unit obtains L depth information and R depth information from a three-dimensional image, and an image correction unit executes a smoothing process on an end part region of the subject based on the L depth information and the R depth information. As a result, the three-dimensional image processed by the three-dimensional image processing apparatus is a high-quality three-dimensional image that correctly expresses a sense of three-dimensionality and thickness in the subject and that has a low sense of a cardboard cutout effect. | 11-07-2013 |
20130294683 | THREE-DIMENSIONAL IMAGE PROCESSING APPARATUS, THREE-DIMENSIONAL IMAGE PROCESSING METHOD, AND PROGRAM - A three-dimensional image processing device for performing image correction processing on a three-dimensionally-viewed image includes a conversion unit and a composition unit. The conversion unit generates a converted image by performing tone conversion on a pixel value in an object included in the three-dimensionally-viewed image based on a relationship between a pixel value of a pixel of interest that is a processing target and a pixel value of a peripheral pixel of the pixel of interest. The composition unit synthesizes an n-th pixel in a first viewpoint image included in the three-dimensionally-viewed image and a corresponding pixel in a second viewpoint image included in the three-dimensionally-viewed image with a distribution ratio that is based on a subject distance of the n-th pixel. The corresponding pixel is located in the same spatial location as the n-th pixel. | 11-07-2013 |
20130294684 | Stereoscopic image format with depth information - A multi-view distribution format is described for stereoscopic and autostereoscopic 3D displays. In general, it comprises multi-tile image compression with the inclusion of one or more depth maps. More specifically, it provides embodiments that utilize stereoscopic images with one or more depth maps typically in the form of a compressed grey scale image and, in some instances, incorporates depth information from a second view encoded differentially. | 11-07-2013 |
20130301905 | SYSTEM AND METHOD FOR DETERMINING AN ORIENTATION AND POSITION OF AN OBJECT - A system includes a computation device having an input module adapted to receive data defining a single two dimensional image, an image analyzing module configured to receive the data and analyze the single two dimensional image to determine a two dimensional orientation representative of a three dimensional orientation and position, a position calculating module configured to receive the two dimensional orientation from the image analyzing module and determine the three dimensional orientation and position of the object, and an output module adapted to send information relating to the three dimensional orientation and position of the object. | 11-14-2013 |
20130301906 | APPARATUS AND METHOD FOR RECONSTRUCTING THREE DIMENSIONAL FACES BASED ON MULTIPLE CAMERAS - Disclosed herein are an apparatus and method for reconstructing a three-dimensional (3D) face based on multiple cameras. The apparatus includes a multi-image analysis unit, a texture image separation unit, a reconstruction image automatic synchronization unit, a 3D appearance reconstruction unit, and a texture processing unit. The multi-image analysis unit determines the resolution information of images received from a plurality of cameras, and determines whether the images have been synchronized with each other. The texture image separation unit separates a texture processing image by comparing the resolutions of the received images. The reconstruction image automatic synchronization unit synchronizes images that are determined to be asynchronous images by the multi-image analysis unit. The 3D appearance reconstruction unit computes the 3D coordinate values of the synchronized images, and reconstructs a 3D appearance image. The texture processing unit reconstructs a 3D image by mapping the texture processing image to the 3D appearance image. | 11-14-2013 |
20130301907 | APPARATUS AND METHOD FOR PROCESSING 3D INFORMATION - An apparatus and method for processing three-dimensional (3D) information is described. The 3D information processing apparatus may measure first depth information of an object using a sensor apparatus such as a depth camera, may estimate a foreground depth of the object, a background depth of a background, and a degree of transparency of the object, may estimate second depth information of the object based on the estimated foreground depth, background depth, and degree of transparency, and may determine the foreground depth, the background depth, and the degree of transparency through comparison between the measured first depth information and the estimated second depth information. | 11-14-2013 |
20130301908 | METHOD AND APPARATUS FOR ACQUIRING GEOMETRY OF SPECULAR OBJECT BASED ON DEPTH SENSOR - A method of acquiring geometry of a specular object is provided. Based on a single-view depth image, the method may include receiving an input of a depth image, estimating a missing depth value based on connectivity with a neighboring value in a local area of the depth image, and correcting the missing depth value. Based on a composite image, the method may include receiving an input of a composite image, calibrating the composite image, detecting an error area in the calibrated composite image, and correcting a missing depth value of the error area. | 11-14-2013 |
20130301909 | Three-Dimensional Shape Measurement Method and Three-Dimensional Shape Measurement Device - This three-dimensional shape measurement method comprises: a projection step for projecting an interference fringe pattern (F) having a single spatial frequency (fi) onto an object surface; a recording step for recording the pattern (F) as a digital hologram; and a measurement step for generating a plurality of reconstructed images having different focal distances from the hologram, and deriving the distance to each point on the object surface by applying a focusing method to the pattern (F) on each of the reconstructed images. The measurement step extracts the component of the single spatial frequency (fi) corresponding to the pattern (F) from each of the reconstructed images by spatial frequency filtering, upon applying the focusing method, and makes it possible to achieve a highly accurate measurement in which the adverse effect of speckles is reduced and the advantage of a free-focus image reconstruction with holography is used effectively. | 11-14-2013 |
20130315469 | METHOD FOR 3D INSPECTION OF AN OBJECT USING X-RAYS - An x-ray system for inspection of objects, utilizing two or more views, is presented. The system allows the 3D spatial coordinates of certain features or object points within the object to be computed. The method consists of first identifying feature points on the images, back-tracing the ray paths from the images to the sources of radiation used, computing the point of intersection of the rays associated with each feature point, and then assigning the coordinates of the intersection to the object points. | 11-28-2013 |
20130315470 | BODY MEASUREMENT - A method of generating three dimensional body data of a subject is described. The method includes capturing one or more images of the subject using a digital imaging device and generating three dimensional body data of the subject based on the one or more images. | 11-28-2013 |
20130315471 | CONCAVE SURFACE MODELING IN IMAGE-BASED VISUAL HULL - Apparatus and methods disclosed herein provide for a set of reference images obtained from a camera and a reference image obtained from a viewpoint to capture an entire concave region of an object; a silhouette processing module for obtaining a silhouette image of the concave region of the object; and a virtual-image synthesis module connected to the silhouette processing module for synthesizing a virtual inside-out image of the concave region from the computed silhouette images and for generating a visual hull of the object having the concave region. | 11-28-2013 |
20130315472 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - The present technique relates to an image processing device and an image processing method for realizing high-precision image generation of predetermined viewpoints by using depth images on the receiving end when the depth images with reduced resolutions are transmitted. | 11-28-2013 |
20130315473 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - The present technique relates to an image processing device and an image processing method that enable generation of high-quality color images and depth images of the viewpoints other than the reference point on the receiving end even if the precision of the reference-point depth image is low when the occlusion regions of color images and depth images of the viewpoints other than the reference point are transmitted. A warping unit performs a foreground-prioritized warping operation toward the left viewpoint on the reference-point depth image. Using the reference-point depth image of the left viewpoint obtained as a result of the warping operation, an occlusion determining unit detects a left-viewpoint occlusion region that appears when a viewpoint is converted from the reference point to the left viewpoint. The present technique can be applied to 3D image processing devices, for example. | 11-28-2013 |
20130315474 | METHOD FOR GENERATING, TRANSMITTING AND RECEIVING STEREOSCOPIC IMAGES, AND RELATED DEVICES - A method is provided for generating a composite image of a stereoscopic video stream including a pair of a right image and a left image of a scene, the right image and the left image being such that, when viewed by a spectator's right eye and left eye, respectively, they cause the spectator to perceive the scene as being three-dimensional. The method includes the steps of: generating a composite image including all the pixels of the pair of right and left images; defining a grid of macroblocks of the composite image, each macroblock of the grid including a plurality of adjacent pixels; decomposing one image of the pair of right and left images into a plurality of component regions including a plurality of contiguous pixels; processing the component regions in a manner such as to generate corresponding derived regions, the derived regions including at least all the pixels of a corresponding component region and being such that they can be decomposed into an integer number of macroblocks; and arranging the non-decomposed image of the pair and the plurality of derived regions in the composite image in a manner such that all the edges of the non-decomposed image and of the derived regions coincide with edges of macroblocks of the grid. | 11-28-2013 |
20130315475 | BODY SHAPE ANALYSIS METHOD AND SYSTEM - A method for categorizing body shape is provided comprising the steps of providing a data set of body shape-defining measurements of a portion of the body of interest from a plurality of subjects' bodies, wherein the measurements define a silhouette and profile (front and side) perspectives of the portion of the body of interest; conducting a principal component (PC) analysis of the data set of measurements to calculate and generate PC scores; conducting cluster analysis using the PC scores as independent variables to produce cluster analysis results; and establishing one or more body shape categories from the cluster analysis results, thereby categorizing body shapes of the plurality of subjects. A shape prototyping system is also provided for designing a custom fit garment for an individual subject, the system being based on the method for categorizing body shape. | 11-28-2013 |
20130322738 | IMAGE PROCESSING APPARATUS AND METHOD FOR THREE-DIMENSIONAL (3D) IMAGE - An image processing apparatus and method for a three-dimensional (3D) image is provided. The image processing apparatus may include a parameter setting unit to set a first parameter related to a color image, and a parameter determining unit to determine an optimal second parameter related to a depth image, using the first parameter. | 12-05-2013 |
20130329985 | GENERATING A THREE-DIMENSIONAL IMAGE - Methods and systems for generating a three-dimensional image are provided. The method includes capturing an image and a depth map of a scene using an imaging device. The image includes a midpoint between a right side view and a left side view of the scene, and the depth map includes distances between the imaging device and objects within the scene. The method includes generating a right side image using the image and the depth map by calculating an appropriate location of each pixel within the image as viewed from the right side, and generating a left side image using the image and the depth map by calculating an appropriate location of each pixel within the image as viewed from the left side. The method also includes combining the right side image and left side image to generate a three-dimensional image of the scene and correcting the three-dimensional image. | 12-12-2013 |
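The per-pixel shifting idea behind rendering right-side and left-side views from one image plus a depth map can be sketched as below. This is a toy forward-warping illustration under a pinhole model with an assumed focal length and baseline (not parameters from the patent); occlusion handling and hole filling, which a real implementation needs, are omitted.

```python
import numpy as np

def synthesize_view(image, depth, focal_px, baseline_m, sign):
    """Shift each pixel horizontally by disparity = f * (baseline/2) / depth.
    sign = +1 approximates a right-eye view, sign = -1 a left-eye view."""
    h, w = depth.shape
    out = np.zeros_like(image)
    disparity = (focal_px * (baseline_m / 2.0) / np.maximum(depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            xs = x + sign * disparity[y, x]
            if 0 <= xs < w:
                out[y, xs] = image[y, x]     # holes are simply left black
    return out

# toy usage with a synthetic image and a flat 5 m depth map
img = np.random.rand(8, 16, 3)
z = np.full((8, 16), 5.0)
right = synthesize_view(img, z, focal_px=500.0, baseline_m=0.06, sign=+1)
left = synthesize_view(img, z, focal_px=500.0, baseline_m=0.06, sign=-1)
```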
20130336577 | Two-Dimensional to Stereoscopic Conversion Systems and Methods - In one embodiment, a two-dimensional to stereoscopic conversion method, comprising: estimating a local motion region in a first image relative to one or more second images, the first and the one or more second images comprising two-dimensional images; generating a color model based on the local motion region; calculating a similarity value for each of at least one image pixel selected from the first image based on the color model; and assigning a depth value for each of the at least one image pixel selected from the first image based on the calculated similarity value to generate a stereoscopic image, the method performed by one or more processors. | 12-19-2013 |
20130336578 | MICROSTRUCTURE ANALYSIS METHOD, PROGRAM THEREOF, AND MICROSTRUCTURE ANALYSIS DEVICE - Porous body data | 12-19-2013 |
20130343634 | CONTEMPORANEOUSLY RECONSTRUCTING IMAGES CAPTURED OF A SCENE ILLUMINATED WITH UNSTRUCTURED AND STRUCTURED ILLUMINATION SOURCES - What is disclosed is a system and method for contemporaneously reconstructing images of a scene illuminated with unstructured and structured illumination sources. In one embodiment, the method comprises capturing a first 2D image containing energy reflected from a scene being illuminated by a structured illumination source and a second 2D image containing energy reflected from the scene being illuminated by an unstructured illumination source. A controller effectuates a manipulation of the structured and unstructured illumination sources during capture of the video. A processor is configured to execute machine-readable program instructions enabling the controller to manipulate the illumination sources and effectuating the contemporaneous reconstruction of a 2D intensity map of the scene using the second 2D image and of a 3D surface map of the scene using the first 2D image. The reconstruction is effectuated by manipulating the illumination sources. | 12-26-2013 |
20130343635 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing apparatus including an image moving unit that moves viewpoint images according to an instruction from outside and changes a parallax amount among a plurality of viewpoint images, an image reading unit that reads the plurality of viewpoint images in units of lines or in units of pixels in a direction perpendicular to a line direction, an image selecting unit that sequentially selects and outputs the plurality of viewpoint images read in units of lines or in units of pixels by the image reading unit, and a control unit that adaptively switches reading of the plurality of viewpoint images by the image reading unit in the units of lines or in the units of pixels according to a scanning direction of a display unit and a base line length direction in the plurality of viewpoint images. | 12-26-2013 |
20130343636 | IMAGE PROCESSING APPARATUS, CONTROL METHOD OF THE SAME AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An image processing apparatus for generating an image of an arbitrary viewpoint using a plurality of input images obtains image information of a projection position at which a point along a first straight line passing through the arbitrary viewpoint and a pixel position on the image of the arbitrary viewpoint is projected onto each of the plurality of input images, defines parallax information of the plurality of input images at the pixel position, using the image information obtained for the point along the first straight line, and generates the image of the arbitrary viewpoint, by defining the image information of the pixel position from the plurality of input images using the parallax information for the pixel position. | 12-26-2013 |
20130343637 | Method for extracting information of interest from multi-dimensional, multi-parametric and/or multi-temporal datasets - A method of extracting information of interest from multi-dimensional, multi-parametric and/or multi-temporal datasets related to the same object under observation through data fusion, in which a plurality of different datasets are provided concerning a single object, with the data related to various parameters and/or different acquisition time instants of said parameters. The datasets are subjected to a first processing step by principal component analysis, generating an identical number of datasets with transformed data; each of the datasets is then combined non-linearly with the corresponding transformed dataset to obtain a certain predetermined number of combinations of parameters by weighting, using parameters determined empirically from training datasets which determine the values of the non-linear weighting parameters that maximize the value of the new features associated with the data of interest, as compared to those of other data. | 12-26-2013 |
20130343638 | PULLING KEYS FROM COLOR SEGMENTED IMAGES - Described are computer-based methods and apparatuses, including computer program products, for pulling keys from color segmented images. Data indicative of a two dimensional image is stored in a data storage device, the two dimensional image comprising a plurality of pixels. A plurality of color segmented frames are generated based on the two dimensional image, wherein each color segmented frame comprises one or more objects. For each of the color segmented frames, a key is generated based on the one or more objects. A depth map is calculated for the two dimensional image based on the keys, wherein the depth map comprises data indicative of three dimensional information for each pixel of the two dimensional image. | 12-26-2013 |
20140003704 | IMAGING SYSTEM AND METHOD | 01-02-2014 |
20140003705 | Method for Registering Points and Planes of 3D Data in Multiple Coordinate Systems | 01-02-2014 |
20140003706 | METHOD AND SYSTEM FOR ENSURING STEREO ALIGNMENT DURING PIPELINE PROCESSING | 01-02-2014 |
20140003707 | System and Process for Roof Measurement Using Aerial Imagery | 01-02-2014 |
20140010438 | THREE DIMENSIONAL SHAPE MEASUREMENT APPARATUS AND METHOD - A three dimensional shape measurement apparatus includes m projecting sections, each of which includes a light source and a grating element and, while moving the grating element n times, projects a grating pattern light onto a measurement target for each movement, wherein the ‘n’ and the ‘m’ are natural numbers greater than or equal to 2; an imaging section photographing a grating pattern image reflected by the measurement target; and a control section which controls the apparatus such that, while the grating pattern image is photographed using one of the m projecting sections, a grating element of at least one other projecting section is moved. Thus, measurement time may be reduced. | 01-09-2014 |
20140016857 | POINT CLOUD CONSTRUCTION WITH UNPOSED CAMERA - A method for processing stereo rectified images, each stereo rectified image being associated with a camera position, the method including selecting a first pair of stereo rectified images; determining a first point cloud of features from the pair of stereo rectified images; determining the locations of the features of the first point cloud with respect to a reference feature in the first point cloud; selecting a second pair of stereo rectified images so that one stereo rectified image of the second pair is common to the first pair, and scaling a second point cloud of features associated with the second pair of stereo rectified images to the first point cloud of features. | 01-16-2014 |
20140029836 | STEREOSCOPIC DEPTH RECONSTRUCTION WITH PROBABILISTIC PIXEL CORRESPONDENCE SEARCH - Generally, this disclosure provides devices, systems and methods for stereoscopic depth reconstruction, for 3-D imaging, with improved probabilistic pixel correspondence searching. The method may include obtaining a first image and a second image; down-sampling the first image; down-sampling the second image; generating a reduced resolution disparity matrix for the first down-sampled image including estimated correspondence pixels from the second down-sampled image; generating a reduced resolution quality matrix including quality metric values associated with pixels in the reduced resolution disparity matrix; up-sampling the reduced resolution disparity matrix to a first full resolution disparity matrix; up-sampling the reduced resolution quality matrix to a full resolution quality matrix; and generating a second full resolution disparity matrix for the first image including estimated correspondence pixels from the second image, the estimated correspondence pixels selected from a search range in the second image. | 01-30-2014 |
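The down-sample/estimate/up-sample idea in the preceding entry can be sketched as follows. This is a rough illustration only: it uses a plain SAD block search rather than the probabilistic correspondence search of the abstract, omits the quality matrices, and all function and parameter names are invented for the example.

```python
import numpy as np

def block_disparity(left, right, max_d, block=4):
    """Brute-force SAD disparity search for a small grayscale image pair."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(0, h - block):
        for x in range(max_d, w - block):
            patch = left[y:y + block, x:x + block]
            costs = [np.abs(patch - right[y:y + block, x - d:x - d + block]).sum()
                     for d in range(max_d + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def coarse_to_fine_disparity(left, right, max_d=32):
    """Estimate disparity on 2x-downsampled images, then up-sample the result;
    a full-resolution pass would only search a narrow range around this guess."""
    coarse = block_disparity(left[::2, ::2], right[::2, ::2], max_d // 2)
    guess = np.kron(coarse * 2, np.ones((2, 2), dtype=np.int32))   # up-sample + rescale
    return guess[:left.shape[0], :left.shape[1]]
```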
20140029837 | INERTIAL SENSOR AIDED INSTANT AUTOFOCUS - The disclosure is directed to creating an inertial sensor aided depth map of a scene. An embodiment of the disclosure captures at least a first image and a second image during movement of a device caused by a user while framing or recording the scene, compensates for rotation between the first image and the second image, calculates an amount of translation of the device between the first image and the second image, calculates a pixel shift of a plurality of key points of the first image and the second image, and estimates a depth to one or more of the plurality of key points of the first image and the second image. | 01-30-2014 |
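Once rotation has been compensated, the depth estimate in the preceding entry reduces to the usual triangulation relation between pixel shift, device translation, and focal length. A minimal sketch under that assumption; the names and numbers are illustrative, not taken from the disclosure.

```python
def depth_from_motion(pixel_shift_px, translation_m, focal_px):
    """Triangulate depth from the parallax caused by a small sideways translation:
    Z ~= f * T / d, assuming rotation has already been compensated."""
    if abs(pixel_shift_px) < 1e-6:
        return float("inf")      # no parallax: the key point is effectively at infinity
    return focal_px * translation_m / pixel_shift_px

# a key point that shifts 12 px while the device moves 3 cm sideways, seen with a
# 1000 px focal length, comes out at roughly 2.5 m
print(depth_from_motion(12.0, 0.03, 1000.0))   # -> 2.5
```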
20140029838 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM - To generate an image that enables effective suppression of the occurrence of binocular rivalry and that facilitates stereoscopic vision. A left-eye image feature point extraction unit … | 01-30-2014 |
20140037189 | Fast 3-D point cloud generation on mobile devices - A system, apparatus and method for determining a 3-D point cloud are presented. First, a processor detects feature points in the first 2-D image, in the second 2-D image, and so on. This set of feature points is first matched across images using an efficient transitive matching scheme. These matches are pruned to remove outliers by a first pass using projection models, such as a planar homography model computed on a grid placed on the images, and a second pass using an epipolar line constraint, resulting in a set of matches across the images. This set of matches can be used to triangulate and form a 3-D point cloud of the 3-D object. The processor may recreate the 3-D object as a 3-D model from the 3-D point cloud. | 02-06-2014 |
20140037190 | GAMUT CONTROL METHOD FOR IMPROVING IMAGE PERFORMANCE OF PARALLAX BARRIER S3D DISPLAY - A method for enhancing a three-dimensional (3D) image comprising at least two depth layers, wherein each depth layer comprises image objects. The method comprises the steps of determining a near field and a far field each comprising at least one depth layer, identifying the image objects in the near field and the far field respectively, applying a first correction curve to the image objects identified in the near field, and applying a second correction curve to the image objects identified in the far field. | 02-06-2014 |
20140037191 | LEARNING-BASED POSE ESTIMATION FROM DEPTH MAPS - A method for processing data includes receiving a depth map of a scene containing a humanoid form. Respective descriptors are extracted from the depth map based on the depth values in a plurality of patches distributed in respective positions over the humanoid form. The extracted descriptors are matched to previously-stored descriptors in a database. A pose of the humanoid form is estimated based on stored information associated with the matched descriptors. | 02-06-2014 |
20140037192 | Apparatus and Method for Disparity Map Generation - A method and system for generating a disparity map. The method comprises the steps of generating a first disparity map based upon a first image and a second image acquired at a first time, acquiring at least a third image and a fourth image at a second time, and determining one or more portions comprising a difference between one of the first and second images and a corresponding one of the third and fourth images. A disparity map update is generated for the one or more determined portions, and a disparity map is generated based upon the third image and the fourth image by combining the disparity map update and the first disparity map. | 02-06-2014 |
20140037193 | Apparatus and Method for Performing Segment-Based Disparity Decomposition - A method and system for generating a disparity map. The method comprises the steps of generating a first disparity map based upon a first image and a second image acquired at a first time, acquiring at least a third image and a fourth image at a second time, and determining one or more portions comprising a difference between one of the first and second images and a corresponding one of the third and fourth images. A disparity map update is generated for the one or more determined portions, and a disparity map is generated based upon the third image and the fourth image by combining the disparity map update and the first disparity map. | 02-06-2014 |
20140037194 | THREE-DIMENSIONAL POINT CLOUD POSITION DATA PROCESSING DEVICE, THREE-DIMENSIONAL POINT CLOUD POSITION DATA PROCESSING SYSTEM, AND THREE-DIMENSIONAL POINT CLOUD POSITION DATA PROCESSING METHOD AND PROGRAM - A technique is provided for efficiently processing three-dimensional point cloud position data obtained at different viewpoints. A projecting plane is set in a measurement space as a parameter for characterizing a target plane contained in plural planes that form an object. The target plane and other planes are projected on the projecting plane. Then, a distance between each plane and the projecting plane is calculated at each grid point on the projecting plane, and the calculated matrix data is used as a range image that characterizes the target plane. The range image is also formed with respect to the other planes and with respect to planes that are viewed from another viewpoint. The range images of the two viewpoints are compared, and a pair of the planes having the smallest difference between the range images thereof is identified as matching planes between the two viewpoints. | 02-06-2014 |
20140044340 | 3D Radiometry - Methods and a computer program product for deriving temperature information with respect to surfaces within a scene that is imaged radiometrically. A time sequence of radiometric data is acquired in frames viewed from distinct angles. A three-dimensional structure of the scene is derived, allowing viewing angles and distances to the imaged surfaces to be inferred. Normalized surface areas of the imaged surfaces are calculated based on the inferred viewing angles and emissivities of the imaged surfaces are corrected accordingly. Corrections also account for background radiation impinging on the imaged surfaces. The radiometric data are converted to a perceptible temperature map of the imaged surfaces. | 02-13-2014 |
20140044341 | USING GRAVITY MEASUREMENTS WITHIN A PHOTOGRAMMETRIC ADJUSTMENT - A method for determining a location of a target includes, at a first location, determining first location coordinates of a measuring device using one or more GNSS signals, determining a first gravitational direction, and capturing a first image using the camera. The method also includes, at a second location, determining second location coordinates of the measuring device, and capturing a second image. The method further includes determining a plurality of correspondence points between the first and second images, determining a first plurality of image coordinates for the plurality of correspondence points in the first image, determining a second plurality of image coordinates for the plurality of correspondence points in the second image, and determining the location of the target using at least the first plurality of image coordinates, the second plurality of image coordinates, and the first gravitational direction. | 02-13-2014 |
20140044342 | METHOD FOR GENERATING 3D COORDINATES AND MOBILE TERMINAL FOR GENERATING 3D COORDINATES - A method of generating 3D coordinates includes: acquiring a target image including a finger region with a camera of a terminal; detecting the finger region in the target image using an image processing technique; detecting a fingertip region in the finger region; and calculating 3D coordinate values using the fingertip region. Also provided is a terminal suitable for performing such a method. | 02-13-2014 |
20140044343 | IDENTIFYING AND FILLING HOLES ACROSS MULTIPLE ALIGNED THREE-DIMENSIONAL SCENES - The capture and alignment of multiple 3D scenes is disclosed. Three dimensional capture device data from different locations is received thereby allowing for different perspectives of 3D scenes. An algorithm uses the data to determine potential alignments between different 3D scenes via coordinate transformations. Potential alignments are evaluated for quality and subsequently aligned subject to the existence of sufficiently high relative or absolute quality. A global alignment of all or most of the input 3D scenes into a single coordinate frame may be achieved. The presentation of areas around a particular hole or holes takes place thereby allowing the user to capture the requisite 3D scene containing areas within the hole or holes as well as part of the surrounding area using, for example, the 3D capture device. The new 3D captured scene is aligned with existing 3D scenes and/or 3D composite scenes. | 02-13-2014 |
20140044344 | BUILDING A THREE-DIMENSIONAL COMPOSITE SCENE - The capture and alignment of multiple 3D scenes is disclosed. Three dimensional capture device data from different locations is received thereby allowing for different perspectives of 3D scenes. An algorithm uses the data to determine potential alignments between different 3D scenes via coordinate transformations. Potential alignments are evaluated for quality and subsequently aligned subject to the existence of sufficiently high relative or absolute quality. A global alignment of all or most of the input 3D scenes into a single coordinate frame may be achieved. The presentation of areas around a particular hole or holes takes place thereby allowing the user to capture the requisite 3D scene containing areas within the hole or holes as well as part of the surrounding area using, for example, the 3D capture device. The new 3D captured scene is aligned with existing 3D scenes and/or 3D composite scenes. | 02-13-2014 |
20140044345 | METHOD OF ANALYZING AND/OR PROCESSING AN IMAGE - A method of processing a starting image, to obtain a final image of better quality, the method comprising the following steps: a) establishing a predefined quality level and/or a predefined processing time for the final image; b) computing information relating to said starting image; c) analyzing said starting image by means of said computed information; d) determining whether said information is sufficient to obtain said predefined quality level for said final image; e) if the step d) determines that the information is sufficient and/or if processing time is exhausted, reducing the noise of said starting image to obtain said final image; and f) if the step d) determines that the information is insufficient and/or processing time is not exhausted, refining the computation in the step b). | 02-13-2014 |
20140044346 | Creating and Viewing Three Dimensional Virtual Slides - Systems and methods for creating and viewing three dimensional digital slides are provided. One or more microscope slides are positioned in an image acquisition device that scans the specimens on the slides and makes two dimensional images at a medium or high resolution. These two dimensional digital slide images are provided to an image viewing workstation where they are viewed by an operator who pans and zooms the two dimensional image and selects an area of interest for scanning at multiple depth levels (Z-planes). The image acquisition device receives a set of parameters for the multiple depth level scan, including a location and a depth. The image acquisition device then scans the specimen at the location in a series of Z-plane images, where each Z-plane image corresponds to a depth level portion of the specimen within the depth parameter. | 02-13-2014 |
20140044347 | IMAGE CODING APPARATUS, IMAGE CODING METHOD, IMAGE CODING PROGRAM, IMAGE DECODING APPARATUS, IMAGE DECODING METHOD, AND IMAGE DECODING PROGRAM - In an image coding apparatus that codes, on a block-by-block basis, a distance image including depth values each representing a pixel-by-pixel distance from a viewpoint to a subject, a segmentation unit divides a block of a texture image including luminance values of individual pixels of the subject into segments including the pixels on the basis of the luminance values, and an intra-plane prediction unit sets a depth value of each of the divided segments included in one block of the distance image on the basis of depth values of pixels included in an already-coded block adjacent to the block, and generates, on a block-by-block basis, a predicted image including the set depth values of the individual segments. | 02-13-2014 |
20140050390 | RECONSTRUCTION OF DEFORMING SURFACES BY CANCELING AMBIENT OCCLUSION AND REFINING 3-D SHAPE - Methods for improved reconstruction of a deforming surface may comprise canceling ambient occlusion of the deforming surface from an input image, computing an optical flow of the image, and refining a 3-D shape of the surface. Canceling ambient occlusion of the deforming surface may comprise computing the ambient occlusion of the surface, projecting the ambient occlusion onto each image plane, and then removing the ambient occlusion from the corresponding input image. | 02-20-2014 |
20140056508 | APPARATUS AND METHOD FOR IMAGE MATCHING BETWEEN MULTIVIEW CAMERAS - An apparatus for image matching between multiview cameras includes a pattern model storing unit to store a pattern model, a matching processing unit to match the stored pattern model with a point cloud obtained by at least one depth camera, and a parameter obtaining unit to obtain a parameter for each of the at least one depth camera, based on a result of the matching. | 02-27-2014 |
20140056509 | SIGNAL PROCESSING METHOD, SIGNAL PROCESSING APPARATUS, AND STORAGE MEDIUM - There is provided with a signal processing method. A filtering result is generated by performing spatial filtering on multi-dimensional data. Encoding result data is output by encoding the filtering result using a value at a pixel of interest of the filtering result and a value at a reference pixel located at a relative position with respect to the pixel of interest. The relative position of the reference pixel is decided in advance according to a characteristic of a spatial filter used in the spatial filtering step. | 02-27-2014 |
20140056510 | FACE LOCATION DETECTION - The location of a face is detected from data about a scene. A 3D surface model is obtained from measurements of the scene. A 2D angle data image is generated from the 3D surface model. The angle data image is generated for a virtual lighting direction, the image representing angles between ray directions from a virtual light source direction and the normal to the 3D surface. A 2D face location algorithm is applied to each of the respective 2D images. In an embodiment, respective 2D angle data images for a plurality of virtual lighting directions are generated, and face locations detected from the respective 2D images are fused. | 02-27-2014 |
20140064602 | METHOD AND APPARATUS FOR OBJECT POSITIONING BY USING DEPTH IMAGES - According to an exemplary embodiment, a method for object positioning by using depth images is executed by a hardware processor as follows: converting depth information of each of a plurality of pixels in each of one or more depth images into a real world coordinate; based on the real world coordinate, computing a distance of each pixel to an edge in each of a plurality of directions; assigning a weight to the distance of each pixel to each edge; and based on the weight of the distance of each pixel to each edge and a weight limit, selecting one or more extremity positions of an object. | 03-06-2014 |
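The first step of the preceding abstract, converting each depth pixel into a real-world coordinate, is commonly done with a pinhole back-projection. A small sketch assuming metric depth values and made-up camera intrinsics; the patent does not specify a camera model, so treat this only as one plausible reading of that step.

```python
import numpy as np

def depth_to_world(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) to per-pixel XYZ coordinates:
    X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack([x, y, depth])          # h x w x 3 point map

# usage with a flat synthetic depth image and assumed intrinsics
points = depth_to_world(np.full((480, 640), 1.5), fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points.shape)                          # (480, 640, 3)
```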
20140064603 | 3D SHAPE MEASUREMENT USING DITHERING - A method for three-dimensional (3D) shape measurement includes generating fringe patterns using a dithering technique, projecting the fringe patterns onto an object using a projector, capturing the fringe patterns distorted by surface geometry of the object using an imaging device, and performing a fringe analysis to reconstruct a 3D shape of the object using the fringe patterns and the fringe patterns distorted by the surface geometry of the object. The step of generating the fringe patterns using the dithering technique may include binarizing sinusoidal fringe patterns with the dithering technique. The step of generating the fringe patterns using the dithering technique may include applying an optimization algorithm. | 03-06-2014 |
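One standard way to binarize sinusoidal fringes, as the dithering step in the preceding entry suggests, is ordered (Bayer-matrix) dithering; the abstract also mentions an optimization-based variant, which is not shown here. A hedged sketch with invented projector dimensions and fringe period:

```python
import numpy as np

BAYER_4 = np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) / 16.0

def dithered_fringe(width, height, period_px, phase=0.0):
    """Binarize a sinusoidal fringe pattern with ordered dithering so that a
    binary projector, slightly defocused, approximates a smooth sinusoid."""
    x = np.arange(width)
    sinusoid = 0.5 + 0.5 * np.cos(2 * np.pi * x / period_px + phase)   # values in [0, 1]
    pattern = np.tile(sinusoid, (height, 1))
    threshold = np.tile(BAYER_4, (height // 4 + 1, width // 4 + 1))[:height, :width]
    return (pattern > threshold).astype(np.uint8)

# three phase-shifted binary patterns, as used in a standard 3-step fringe analysis
patterns = [dithered_fringe(1024, 768, period_px=36, phase=k * 2 * np.pi / 3)
            for k in range(3)]
```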
20140064604 | Method for objectively evaluating quality of stereo image - A method for objectively evaluating quality of a stereo image is provided. The method obtains a cyclopean image of a stereo image as formed in the human visual system by simulating the process by which the human visual system deals with the stereo image. The cyclopean image includes three areas: an occlusion area, a binocular fusion area and a binocular suppression area. Representing image characteristics by the singular values of the image offers strong stability. According to the characteristics of the different areas of the human visual system when dealing with the cyclopean image, the distortion degree of the cyclopean image corresponding to the testing stereo image is represented by the singular value distance between the cyclopean images respectively corresponding to the testing stereo image and the reference stereo image, in such a manner that the overall visual quality of the testing stereo image is finally evaluated. | 03-06-2014 |
20140064605 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing apparatus which sets divided regions in each of a left-eye image and a right-eye image in one way of setting divided regions, the left-eye image and the right-eye image being components of a stereoscopic image, and corrects, on a set divided region basis, color discrepancy between one of the left-eye image and the right-eye image and the other of the left-eye image and the right-eye image, the one image being not a color correction target, the other image being the color correction target, and which serially sets divided regions in the left-eye image and the right-eye image, in one or more other ways of setting divided regions different from the one way of setting divided regions, and serially corrects the color discrepancy between the one image and the other image on the set divided region basis. | 03-06-2014 |
20140064606 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - There is provided an image processing apparatus including a difference value calculation section which associates, with each other on a tone basis, histograms indicating a number of pixels per tone in a left-eye image and a right-eye image, respectively, the left-eye image and the right-eye image being components of a stereoscopic image, which calculates a difference value between the left-eye image and the right-eye image on the associated tone basis, and which smooths the calculated difference value among tones, and a correction section which corrects the left-eye image or the right-eye image based on the smoothed tone-basis difference value. | 03-06-2014 |
20140064607 | SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR LOW-LATENCY WARPING OF A DEPTH MAP - Methods, systems, and computer program products to warp a depth map into alignment with an image, where the image sensor (e.g., camera) responsible for the image and depth sensor responsible for an original depth map are separated in space. In an embodiment, the warping of the depth map may be started before the original depth map has been completely read. Moreover, data from the warped depth map may be made available to an application before the entire warped depth map has been completely generated. Such a method and system may improve the speed of the overall process and/or reduce memory requirements. | 03-06-2014 |
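The geometric core of warping a depth map into a separate colour camera's frame is back-projection, a rigid transform, and re-projection, processed row by row so that work can begin before the whole depth frame has arrived. The sketch below assumes known intrinsics and extrinsics and ignores z-buffering of colliding pixels; it illustrates the general technique, not the patented low-latency pipeline.

```python
import numpy as np

def warp_depth_to_color(depth, K_d, K_c, R, t):
    """Warp a depth map from the depth sensor's frame into the colour camera's
    image plane: back-project, apply the rigid transform [R | t], re-project.
    Processing one row at a time mirrors the 'start before the map is complete' idea."""
    h, w = depth.shape
    warped = np.zeros((h, w), dtype=depth.dtype)
    fx, fy, cx, cy = K_d[0, 0], K_d[1, 1], K_d[0, 2], K_d[1, 2]
    for v in range(h):                       # rows can be handled as they arrive
        u = np.arange(w)
        z = depth[v]
        pts = np.stack([(u - cx) * z / fx, (np.full(w, v) - cy) * z / fy, z])
        pc = R @ pts + t.reshape(3, 1)       # points in the colour camera frame
        valid = pc[2] > 1e-6
        uc = (K_c[0, 0] * pc[0, valid] / pc[2, valid] + K_c[0, 2]).astype(int)
        vc = (K_c[1, 1] * pc[1, valid] / pc[2, valid] + K_c[1, 2]).astype(int)
        ok = (uc >= 0) & (uc < w) & (vc >= 0) & (vc < h)
        warped[vc[ok], uc[ok]] = pc[2, valid][ok]
    return warped
```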
20140064608 | METHOD OF TRANSFORMING STEREOSCOPIC IMAGE AND RECORDING MEDIUM STORING THE SAME - Disclosed is a method of transforming a stereoscopic image, including: extracting a depth map from a left-eye image and a right-eye image of the stereoscopic image as the left-eye image and the right-eye image are input; obtaining transformation information from the depth map; and transforming red, green, and blue (RGB) values of the stereoscopic image based on the transformation information. It is possible to provide a stereoscopic image having an improved three-dimensional effect, compared to an existing stereoscopic image. | 03-06-2014 |
20140072205 | IMAGE PROCESSING DEVICE, IMAGING DEVICE, AND IMAGE PROCESSING METHOD - An image processing device for generating depth data, utilizing a first image and a second image which are captured from different viewpoints, the image processing device including: a disparity value calculation unit which calculates, for each of plural representative pixels included in pixels in the first image, a disparity value of the representative pixel, based on a positional relationship between the representative pixel and a pixel corresponding to the representative pixel, in the second image; a segmentation unit which partitions the first image into plural segments, based on a similarity between pixel values; and a depth data generation unit which determines, for each segment, a disparity value of the segment, based on the disparity value of the representative pixel included in the segment to generate depth data indicative of depths corresponding to the plural segments. | 03-13-2014 |
20140079313 | METHOD AND APPARATUS FOR ADJUSTING IMAGE DEPTH - A method for adjusting image depth includes receiving a three-dimensional (3D) image including a first image and a second image. The method includes measuring the 3D image to generate a first parallax gradient value, calculating a second parallax gradient value according to the first parallax gradient value and a user setting value, calculating a parallax modification value, and moving the first image according to the corresponding parallax modification value so as to generate an adjusted first image for replacing the first image. | 03-20-2014 |
20140086476 | SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR HIGH DEPTH OF FIELD IMAGING - Methods, systems, and computer program products allow for the capturing of a high depth of field (DOF) image. A comprehensive depth map of the scene may be automatically determined. The scene may then be segmented, where each segment of the scene corresponds to a respective depth of the depth map. A sequence of images may then be recorded, where each image in the sequence is focused at a respective depth of the depth map. The images of this sequence may then be interleaved to create a single composite image that includes the respective in-focus segments from these images. | 03-27-2014 |
20140086477 | METHOD AND DEVICE FOR DETECTING DRIVABLE REGION OF ROAD - A method and a device are disclosed for detecting a drivable region of a road, the method comprising the steps of: deriving a disparity map from a gray-scale map including the road and detecting the road from the disparity map; removing a part with a height above the road greater than a predetermined height threshold from the disparity map so as to generate a sub-disparity map; converting the sub-disparity map into a U-disparity map; detecting the drivable region from the U-disparity map; and converting the drivable region detected from the U-disparity map into the drivable region within the gray-scale map. | 03-27-2014 |
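The U-disparity map used in the preceding entry is essentially a per-column histogram of disparity values: obstacles stack up into strong horizontal responses while the road surface stays diffuse. A minimal sketch; the array names and the obstacle-threshold idea mentioned in the comment are invented for illustration.

```python
import numpy as np

def u_disparity(disparity, max_d):
    """Build a U-disparity map: for each image column, a histogram of the
    disparity values occurring in that column. Flat road pixels spread thinly
    across bins, while obstacles pile up into bright horizontal segments."""
    h, w = disparity.shape
    u_disp = np.zeros((max_d + 1, w), dtype=np.int32)
    for col in range(w):
        d = disparity[:, col]
        d = d[(d >= 0) & (d <= max_d)].astype(int)
        u_disp[:, col] = np.bincount(d, minlength=max_d + 1)
    return u_disp

# a candidate drivable region is then the set of columns whose U-disparity bins
# all stay below some obstacle threshold
```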
20140086478 | 3D VISION PROCESSING - Methods and apparatuses are described for processing 3D vision algorithms. A 3D vision processor device comprises one or more 3D vision processing cores. Each 3D vision processing core includes one or more memory blocks for storing location values associated with 3D point cloud images and an arithmetic logic unit coupled to the one or more memory modules. The arithmetic logic unit includes a plurality of memory registers for temporarily storing location values associated with a point in a 3D point cloud image and a processing unit coupled to the plurality of memory registers for performing arithmetic operations on the location values stored in the memory registers, the arithmetic operations used for 3D vision processing algorithms. The 3D vision processing core also includes a communication link for transferring data between the arithmetic logic unit and the memory modules. | 03-27-2014 |
20140093158 | APPARATUS AND METHOD FOR GENERATING A MULTI-VIEWPOINT IMAGE - According to one embodiment, in an apparatus for generating a multi-viewpoint image, a separation unit separates a target image into a first diffuse reflection image and a first non-diffuse reflection image based on a pixel value of each pixel of the target image. The first non-diffuse reflection image contains the components other than the first diffuse reflection image. A first estimation unit estimates a change amount of each pixel among a plurality of first non-diffuse reflection images corresponding to different viewpoints. A first generation unit generates a second non-diffuse reflection image by changing at least one of a shape and a luminance of each pixel of the first non-diffuse reflection image, based on the change amount of each pixel. A synthesis unit generates the multi-viewpoint image by synthesizing the first diffuse reflection image with the second non-diffuse reflection image. Each viewpoint image corresponds to each of the viewpoints. | 04-03-2014 |
20140093159 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus is provided which includes: a feature calculation unit which calculates feature quantities of respective pixels included in an input image; a reliability level obtaining unit which obtains reliability level information indicating reliability levels of respective depth values indicating depths of the respective pixels; and a depth correction unit which corrects the depth values included in input depth information, using the reliability levels and the feature quantities, to generate output depth information. | 04-03-2014 |
20140099017 | METHOD AND APPARATUS FOR RECONSTRUCTING THREE DIMENSIONAL MODEL - A method and an apparatus for reconstructing a three dimensional model of an object are provided. The method includes the following steps. A plurality of first depth images of an object are obtained. According to a linking information of the object, the first depth images are divided into a plurality of depth image groups. The linking information records location information corresponding to a plurality of substructures of the object. Each depth image group includes a plurality of second depth images, and the substructures correspond to the second depth images. According to the second depth image and the location information corresponding to each substructure, a local model of each substructure is built. According to the linking information, the local models corresponding to the substructures are merged, and the three-dimensional model of the object is built. | 04-10-2014 |
20140099018 | METHOD, SYSTEM, AND DEVICE FOR COMPRESSING, ENCODING, INDEXING, AND DECODING IMAGES - A method for encoding image data includes creating a plurality of textors from the image data; clustering the plurality of textors into a plurality of textor primitives; retrieving a learned image based on the plurality of textor primitives; and determining an error space based on a difference between the learned image and the plurality of textor primitives. A method for compressing, indexing, and decoding image data based on textors is also provided. | 04-10-2014 |
20140099019 | Gesture Recognition in Vehicles - A method and system for performing gesture recognition of a vehicle occupant employing a time of flight (TOF) sensor and a computing system in a vehicle. An embodiment of the method of the invention includes the steps of receiving one or more raw frames from the TOF sensor, performing clustering to locate one or more body part clusters of the vehicle occupant, calculating the location of the tip of the hand of the vehicle occupant, determining whether the hand has performed a dynamic or a static gesture, retrieving a command corresponding to one of the determined static or dynamic gestures, and executing the command. | 04-10-2014 |
20140105483 | SYSTEM AND METHOD FOR REDUCING ARTIFACTS CAUSED BY VIEW-DEPENDENT LIGHTING COMPONENTS - Systems and methods are provided for resolving artifacts in view-dependent components. The system and method apply a different stereo camera pair to view-dependent versus non-view-dependent or independent rays. In this way the lighting components may be independently dialed to art direct specular components versus diffuse components. | 04-17-2014 |
20140105484 | APPARATUS AND METHOD FOR RECONSTRUCTING SUPER-RESOLUTION THREE-DIMENSIONAL IMAGE FROM DEPTH IMAGE - An apparatus and method for reconstructing a super-resolution three-dimensional (3D) image from a depth image. The apparatus may include an error point relocation processing unit to relocate an error point in a depth image, and a super-resolution processing unit to reconstruct a 3D image by performing a super-resolution with respect to the depth image in which the error point is relocated. | 04-17-2014 |
20140105485 | BASIS VECTOR SPECTRAL IMAGE COMPRESSION - Computer implemented methods for compressing 3D hyperspectral image data having a plurality of spatial pixels associated with a hyperspectral image, and a number of spectral dimensions associated with each spatial pixel, include receiving, using a processor, the 3D hyperspectral image data, a set of basis vectors associated therewith, and either a maximum error amount or a maximum data size. The methods also include partitioning the 3D hyperspectral image data into a plurality of 2D images, each associated with one of the number of spectral dimensions, and an associated one of the set of basis vectors. The methods additionally include ranking the set of basis vectors if not already ranked. The methods may further include iteratively applying lossy compression to the 2D images, in an order determined by the ranking. Other embodiments and features are also disclosed. | 04-17-2014 |
20140105486 | METHOD FOR LOCATING A CAMERA AND FOR 3D RECONSTRUCTION IN A PARTIALLY KNOWN ENVIRONMENT - A method for locating a camera and 3D reconstruction of its static environment, comprising an object of interest, the 3D model of which is known, includes: calculating an initial pose of the camera in the environment and an initial reconstruction; calculating the pose of the camera for each new image by pairing 3D primitives of the environment with 2D primitives of said image and reconstructing 3D primitives of the environment by triangulation; and simultaneously optimizing the poses of the camera and 3D primitives by minimizing a reprojection error over a plurality of images. The 3D model is a geometric description of the object of interest, the reprojection error has only two types of terms, a first type associated with primitives constrained by the 3D model and a second type associated with primitives of the environment other than the object, the optimization associating the primitives with the environment or 3D model. | 04-17-2014 |
20140112572 | FAST CORRELATION SEARCH FOR STEREO ALGORITHM - Techniques are disclosed for carrying out correlation search in contexts such as stereo algorithms of graphics systems. In accordance with an embodiment, the techniques employ a locality-sensitive hashing (LSH) function to reduce the number of bits to be processed during the correlation process, and to identify a sub-set of available image points that are likely to be the best match to a given target image point. Once such a sub-set of likely image points is identified, a more comprehensive correlation algorithm can be run, if so desired, to further ensure the quality of the match. | 04-24-2014 |
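The bit-reduction idea in the preceding entry, hashing patches so that only a short list of likely candidates reaches the expensive correlation step, can be sketched with random-hyperplane LSH. This is a generic illustration, not the specific hash family or search strategy of the disclosure; the descriptor size and Hamming threshold are made up.

```python
import numpy as np

def lsh_signatures(descriptors, n_bits=16, seed=0):
    """Random-hyperplane LSH: each descriptor becomes an n_bits binary code,
    so Hamming distance cheaply approximates cosine similarity."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_bits, descriptors.shape[1]))
    return (descriptors @ planes.T > 0).astype(np.uint8)

def candidate_matches(target_sig, all_sigs, max_hamming=3):
    """Return indices of points whose signature is within max_hamming bits of
    the target; only these survive to the full correlation step."""
    dist = np.count_nonzero(all_sigs != target_sig, axis=1)
    return np.nonzero(dist <= max_hamming)[0]

# usage: prune 10,000 candidate patches down to a short list before running an
# expensive correlation (e.g. normalized cross-correlation) on the survivors
descs = np.random.rand(10000, 64)
sigs = lsh_signatures(descs)
short_list = candidate_matches(sigs[0], sigs)
```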
20140112573 | Systems and Methods for Marking Images for Three-Dimensional Image Generation - Methods and systems for image marking and generation of a three-dimensional (3D) image of an object are described. In an example, a computing device may be configured to receive a first set of images of an object that capture details of the object. The computing device may also be configured to receive a second set of images that include markings projected on the object and that are indexed to correspond to images of the first set of images. The computing device may be configured to spatially align images of the second set of images based on the markings projected on the object and determine respective images of the first set of images corresponding to spatially aligned images of the second set of images. The computing device may then generate a 3D image of the object from the respective images of the first set of images. | 04-24-2014 |
20140112574 | APPARATUS AND METHOD FOR CALIBRATING DEPTH IMAGE BASED ON RELATIONSHIP BETWEEN DEPTH SENSOR AND COLOR CAMERA - Disclosed is a method of calibrating a depth image based on a relationship between a depth sensor and a color camera, and an apparatus for calibrating a depth image may include a three-dimensional (3D) point determiner to determine a 3D point of a camera image and a 3D point of a depth image simultaneously captured with the camera image, a calibration information determiner to determine calibration information for calibrating an error of a depth image captured by the depth sensor and a geometric information between the depth sensor and a color camera, using the 3D point of the camera image and the 3D point of the depth image, and a depth image calibrator to calibrate the depth image based on the calibration information and the 3D point of the depth image. | 04-24-2014 |
20140112575 | CONVEX MINIMIZATION AND DATA RECOVERY WITH LINEAR CONVERGENCE - A convex minimization is formulated to robustly recover a subspace from a contaminated data set, partially sampled around it, and a fast iterative algorithm is proposed to achieve the corresponding minimum. This disclosure establishes exact recovery by this minimizer, quantifies the effect of noise and regularization, and explains how to take advantage of a known intrinsic dimension and establish linear convergence of the iterative algorithm. The minimizer is an M-estimator. The disclosure demonstrates its significance by adapting it to formulate a convex minimization equivalent to the non-convex total least squares (which is solved by PCA). The technique is compared with many other algorithms for robust PCA on synthetic and real data sets, and state-of-the-art speed and accuracy are demonstrated. | 04-24-2014 |
20140119639 | WATER-BODY CLASSIFICATION - Among other things, one or more techniques and/or systems are provided for classifying a water-body. For example, initial water-body segmentation may be used to segment imagery into water-body features or non-water-body features to create an initial water-body map. The initial water-body map may be refined based upon confidence scores assigned to pixels within the imagery. In one example, a confidence score may correspond to a confidence that a stereo matching technique produced a correct elevation for a pixel. A relatively low confidence score may indicate that the pixel corresponds to water (e.g., due to a lack of features/texture on water), while a relatively high confidence score may indicate that the pixel does not correspond to water (e.g., due to presence of features/texture, such as roads, building corners, etc.). In this way, confidence scores may, for example, be used to refine the initial water-body map to create a final water-body map. | 05-01-2014 |
20140126806 | IMAGE PROCESSING METHOD, PROGRAM, IMAGE PROCESSING APPARATUS, IMAGE-PICKUP OPTICAL APPARATUS, AND NETWORK DEVICE - An image processing method is configured to denoise three-dimensional image data. The image processing method includes an image transform step of performing a frequency transform for the three-dimensional image data in the optical axis direction and of calculating three-dimensional transformed image data, an image modulation step of reducing an absolute value of the three-dimensional transformed image data in a specific frequency region and of calculating three-dimensional modulated image data, and an inverse image transform step of performing an inverse frequency transform corresponding to the frequency transform for the three-dimensional modulated image data in the optical axis direction and of calculating three-dimensional inversely transformed image data. The specific frequency region is a part of a predetermined. | 05-08-2014 |
20140126807 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND PROGRAM - An image processing apparatus includes: a luminance extraction means to extract luminance components of an input image; a contrast extraction means to extract contrast components of the input image based on the luminance components of the input image extracted by the luminance extraction means; a storage means to store a performance function indicating the relation between the contrast components of the input image and depth amounts subjectively perceived, which is determined based at least in part on visual sense characteristics of human beings; and a contrast adjustment means to calculate present depth amounts of the input image from the contrast components of the input image extracted by the contrast extraction means, based at least in part on the performance function with respect to regions of the input image which are determined from depth information of the input image, and to adjust contrast components of the input image. | 05-08-2014 |
20140133739 | METHOD AND APPARATUS FOR CORRECTING ERRORS IN MULTIPLE STREAM-BASED 3D IMAGES - The present invention relates to correcting errors of multiple stream-based 3D images. The present invention comprises a 3D image synchronizing unit which synchronizes a first image and a second image constituting the 3D image; a 3D image correcting unit which detects an error block in the first image, searches for a corresponding block in the second image, and corrects the error block on the basis of the block information of said corresponding block; and a compositing unit which composites the corrected first image and the second image to generate a 3D stereoscopic image. According to the present invention, bit errors occurring when transmitting multiple streams of a 3D image can be corrected, providing better quality 3D images. Also, error correction can be easily performed by using the method of the present invention without complicated calculations. | 05-15-2014 |
20140133740 | INTELLIGENT PART IDENTIFICATION FOR USE WITH SCENE CHARACTERIZATION OR MOTION CAPTURE - A variety of methods, systems, devices and arrangements are implemented for use with motion capture. One such method is implemented for identifying salient points from three-dimensional image data. The method involves the execution of instructions on a computer system to generate a three-dimensional surface mesh from the three-dimensional image data. Lengths of possible paths from a plurality of points on the three-dimensional surface mesh to a common reference point are categorized. The categorized lengths of possible paths are used to identify a subset of the plurality of points as salient points. | 05-15-2014 |
20140133741 | DEVICE FOR GENERATING THREE DIMENSIONAL FEATURE DATA, METHOD FOR GENERATING THREE-DIMENSIONAL FEATURE DATA, AND RECORDING MEDIUM ON WHICH PROGRAM FOR GENERATING THREE-DIMENSIONAL FEATURE DATA IS RECORDED - A stereo disparity calculating unit calculates the predicted value of the stereo disparity. A line extracting unit performs line extraction in an image. A line classification unit classifies the extracted lines into different line types. A meaningless line eliminating unit removes lines that do not exist in the real world from the subsequent processing. A stereo disparity correcting unit corrects the predicted value of the disparity based on the line pairs determined by the line pair determining unit. A line pair clustering unit clusters all the line pairs belonging to the same feature as one cluster. A plane combining unit determines the location relationship in three-dimensional space among all the planes of each feature extracted by a plane extracting unit, and generates a three-dimensional model describing the overall structure for each feature. | 05-15-2014 |
20140140608 | IMAGE PROCESSING APPARATUS AND METHOD FOR COLOR-DEPTH DEMOSAICING - An image processing apparatus and method, which add depth information to a color pixel and add color information to a depth pixel in an image including the color pixel and the depth pixel, include a depth information determination unit to determine depth information of a current color pixel using peripheral color pixels of the current color pixel and peripheral depth pixels of the current color pixel. | 05-22-2014 |
20140147031 | Disparity Estimation for Misaligned Stereo Image Pairs - A disparity vector for a pixel in a right image corresponding to a pixel in a left image in a pair of stereo images is determined. The disparity vector is based on a horizontal disparity and a vertical disparity and the pair of stereo images is unrectified. First, a set of candidate horizontal disparities is determined. For each candidate horizontal disparity, a cost associated with a particular horizontal disparity and corresponding vertical disparities is determined. The vertical disparity associated with a first optimal cost is assigned to each candidate horizontal disparity, so that the candidate horizontal disparity and the vertical disparity yield a candidate disparity vector. Lastly, the candidate disparity vector with a second optimal cost is selected as the disparity vector of the pixel in the right image. | 05-29-2014 |
20140147032 | Method and System for Recovery of 3D Scene Structure and Camera Motion From a Video Sequence - An improved method and a system are disclosed for recovering a three-dimensional (3D) scene structure from a plurality of two-dimensional (2D) image frames obtained from imaging means. Sets of 2D features are extracted from the image frames, and sets corresponding to successive image frames are matched, such that at least one pair of matched 2D features refers to a same 3D point in a 3D scene captured in 2D in the image frames. A 3D ray is generated by back-projection from each 2D feature, and the generated 3D rays are subjected to an anchor-based minimization process, for determining camera motion parameters and 3D scene points coordinates, thereby recovering a structure of the 3D scene. | 05-29-2014 |
20140147033 | CONVERSION OF MONOSCOPIC VISUAL CONTENT USING IMAGE-DEPTH DATABASE - An image converter compiles three-dimensional content into a data store, identifies a number of stereo image pairs from the three-dimensional content, computes a depth map for each of the stereo image pairs from the three-dimensional content, and partitions the stereo image pairs in the data store into multiple categories. The image converter determines a depth cue for each of the categories based on the depth map for each of the stereo image pairs in each category. The image converter computes a depth map for a category associated with a two-dimensional input image based on the determined depth cue and renders a three-dimensional output image from the two-dimensional input image using the depth map for the category. | 05-29-2014 |
20140153816 | Depth Map Stereo Correspondence Techniques - Depth map stereo correspondence techniques are described. In one or more implementations, a depth map generated through use of a depth sensor is leveraged as part of processing of stereo images to assist in identifying which parts of stereo images correspond to each other. For example, the depth map may be utilized to describe depth of an image scene which may be used as part of a stereo correspondence calculation. The depth map may also be utilized as part of a determination of a search range to be employed as part of the stereo correspondence calculation. | 06-05-2014 |
20140153817 | Patch Size Adaptation for Image Enhancement - Systems and methods are provided for providing patch size adaptation for patch-based image enhancement operations. In one embodiment, an image manipulation application receives an input image. The image manipulation application compares a value for an attribute of at least one input patch of the input image to a threshold value. Based on comparing the value for the attribute to the threshold value, the image manipulation application adjusts a first patch size of the input patch to a second patch size that improves performance of a patch-based image enhancement operation as compared to the first patch size. The image manipulation application performs the patch-based image enhancement operation based on one or more input patches of the input image having the second patch size. | 06-05-2014 |
20140153818 | METHOD FOR PRODUCING AN IRIDESCENT IMAGE, IMAGE OBTAINED AND DEVICE INCLUDING SAME, ASSOCIATED PROGRAM - The invention relates to a method for producing a series of modified images intended for forming an iridescent image, using at least one reference image, characterised by including steps of creating a colour palette (P); creating a series of at least two modified reference images (IRM0, IRM1, IRM2, IRMn, etc.), in which the colours of said at least one reference image (IR) are replaced with the colours of the palette by applying, before or between each new modified reference image (IRM), a circular shift to the colours of the palette. The invention also relates to the interlaced, iridescent, 3D images obtained using such a method, as well as to a device including such an interlaced image and an associated program. | 06-05-2014 |
20140161346 | FEATURE POINT SELECTING SYSTEM, FEATURE POINT SELECTING METHOD AND FEATURE POINT SELECTING PROGRAM - A recognition task executing means | 06-12-2014 |
20140161347 | METHOD AND APPARATUS FOR COLOR TRANSFER BETWEEN IMAGES - A method and an arrangement for color transfer between images are recommended for compensating color differences between at least two images, a first and a second image, represented by pixel data. For corresponding feature points of the images, a color map and a geometric map are calculated. The first image is compensated by applying said geometric map and said color map to the first image, resulting in a compensated first image. Regions where the compensation fails are detected by comparing the compensated first image with the second image, and the color transfer is then performed excluding image regions where the compensation failed. The method can be performed on the fly and is applicable for equalizing color differences between images that differ in geometry and color. | 06-12-2014 |
20140169658 | METHODS AND APPARATUS FOR IMAGE FUSION - Systems and methods configured to implement sliced source imaging to produce a plurality of overlapping in-focus images on the same location of a single imaging detector without using beamsplitters. | 06-19-2014 |
20140169659 | SYSTEM AND METHOD FOR CUSTOMIZING FIGURINES WITH A SUBJECT'S FACE - A system and method of making an at least partially customized figure emulating a subject is disclosed which involves obtaining at least two two-dimensional images of the face of the subject from different perspectives; and processing the images of the face with a computer processor to create a three dimensional model of the subject's face; scaling the three dimensional model and applying the three dimensional model to a predetermined template adapted to interfit with the head of a figure preform that comprises at least a head. The template is printed and installed on the head portion of the figure preform. | 06-19-2014 |
20140169660 | Stereo Correspondence Smoothness Tool - Stereo correspondence smoothness tool techniques are described. In one or more implementations, an indication is received of a user-defined region in at least one of a plurality of stereoscopic images of an image scene. Stereo correspondence is calculated of image data of the plurality of stereoscopic images of the image scene, the calculation performed based at least in part on the user-defined region as indicating a smoothness in disparities to be calculated for pixels in the user-defined region. | 06-19-2014 |
20140169661 | Method of color correction of pair of colorful stereo microscope images - A method for color correction of a pair of colorful stereo microscope images is provided, which transmits the color information of the foreground areas and the background area of the reference image to the aberrated image separately, to avoid transmission errors in the color information of the areas that vary between the pair of images. This substantially improves the accuracy of the color correction, reduces the difference between the color of the reference image and the color of the aberrated image, and prepares the image pair for stereo matching as well as for three-dimensional reconstruction and three-dimensional measurement. In addition, the correction procedure runs automatically without manual work. | 06-19-2014 |
20140177942 | THREE DIMENSIONAL SENSING METHOD AND THREE DIMENSIONAL SENSING APPARATUS - A three dimensional (3D) sensing method and an apparatus thereof are provided. The 3D sensing method includes the following steps. A resolution scaling process is performed on a first pending image and a second pending image so as to produce a first scaled image and a second scaled image. A full-scene 3D measurement is performed on the first and second scaled images so as to obtain a full-scene depth image. The full-scene depth image is analyzed to set a first region of interest (ROI) and a second ROI. A first ROI image and a second ROI image are obtained according to the first and second ROIs. Then, a partial-scene 3D measurement is performed on the first and second ROI images accordingly, such that a partial-scene depth image is produced. | 06-26-2014 |
20140177943 | METHOD AND APPARATUS FOR RENDERING HYBRID MULTI-VIEW - A method and apparatus for generating a multi-view image, the method including determining an input image for generating multi-view images, and selecting either all of the stereo images or only one of the stereo images to be the input image based on a presence of a distortion between the stereo images. | 06-26-2014 |
20140177944 | Method and System for Modeling Subjects from a Depth Map - A method for modeling and tracking a subject using image depth data includes locating the subject's trunk in the image depth data and creating a three-dimensional (3D) model of the subject's trunk. Further, the method includes locating the subject's head in the image depth data and creating a 3D model of the subject's head. The 3D models of the subject's head and trunk can be exploited by removing pixels from the image depth data corresponding to the trunk and the head of the subject, and the remaining image depth data can then be used to locate and track an extremity of the subject. | 06-26-2014 |
20140177945 | AERIAL ROOF ESTIMATION SYSTEMS AND METHODS - Methods and systems for roof estimation are described. Example embodiments include a roof estimation system, which generates and provides roof estimate reports annotated with indications of the size, geometry, pitch and/or orientation of the roof sections of a building. Generating a roof estimate report may be based on one or more aerial images of a building. In some embodiments, generating a roof estimate report of a specified building roof may include generating a three-dimensional model of the roof, and generating a report that includes one or more views of the three-dimensional model, the views annotated with indications of the dimensions, area, and/or slope of sections of the roof. This abstract is provided to comply with rules requiring an abstract, and it is submitted with the intention that it will not be used to interpret or limit the scope or meaning of the claims. | 06-26-2014 |
20140185920 | IMAGE SELECTION AND MASKING USING IMPORTED DEPTH INFORMATION - A method, process, and associated systems for automatically selecting and masking areas of a still image or video clip using imported depth information. An image-editing or video-editing application receives a set of depth values that are each associated with an area of a still image or video frame. Each depth value identifies the distance from the camera position of an object depicted by the area associated with the depth value. When a user directs the application to automatically select or mask a region of the image or frame, the application uses the depth values to automatically choose which pixels to include in the selection or mask such that the selection or mask best approximates an area of the image or frame that represents a three-dimensional object. | 07-03-2014 |
20140185921 | APPARATUS AND METHOD FOR PROCESSING DEPTH IMAGE - A method and apparatus for processing a depth image that removes noise from a depth image may include a noise estimating unit to estimate noise of the depth image using an amplitude image, a super-pixel generating unit to generate a planar super-pixel based on depth information of the depth image and the estimated noise, and a noise removing unit to remove noise of the depth image using depth information of the depth image and depth information of the super-pixel. | 07-03-2014 |
20140185922 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - According to one embodiment, an image processing apparatus comprises an image processing unit and a control unit. The image processing unit is configured to perform image processing enhancing a depth of an imaged image. The control unit is configured to acquire an imaging condition of the imaged image and perform control such that an enhancement amount of the depth of the imaged image increases when the acquired imaging condition satisfies a predetermined depth enhancement condition. | 07-03-2014 |
20140185923 | METHODS FOR EXTRACTING SHAPE FEATURE, INSPECTION METHODS AND APPARATUSES - Methods for extracting a shape feature of an object and security inspection methods and apparatuses. Use is made of CT's capability of obtaining a 3D structure. The shape of an object in inspected luggage is used as a feature of a suspicious object in combination with a material property of the object. For example, a false alarm rate in detection of suspicious explosives may be reduced. | 07-03-2014 |
20140198976 | METHOD AND SYSTEM FOR FAST DENSE STEREOSCOPIC RANGING - A stochastic method and system for fast stereoscopic ranging includes selecting a pair of images for stereo processing, in which the pair of images is a frame pair and one of the images is a reference frame, seeding estimated values for a range metric at each pixel of the reference frame, initializing one or more search stage constraints, stochastically computing local influence for each valid pixel in the reference frame, aggregating local influences for each valid pixel in the reference frame, refining the estimated values for the range metric at each valid pixel in the reference frame based on the aggregated local influence, and post-processing the range metric data. A valid pixel is a pixel in the reference frame that has a corresponding pixel in the other frame of the frame pair. The method repeats n iterations of the steps from stochastically computing local influence through the post-processing. | 07-17-2014 |
20140198977 | Enhancement of Stereo Depth Maps - A method for computation of a depth map for corresponding left and right two dimensional (2D) images of a stereo image is provided that includes determining a disparity range based on a disparity of at least one object in a scene of the left and right 2D images, performing color matching of the left and right 2D images, performing contrast and brightness matching of the left and right 2D images, and computing a disparity image for the left and right 2D images after the color matching and the contrast and brightness matching are performed, wherein the disparity range is used for correspondence matching of the left and right 2D images. | 07-17-2014 |
20140198978 | METHOD FOR SEARCHING A ROOF FACET AND CONSTRUCTING A BUILDING ROOF STRUCTURE LINE - A method for searching a building roof facet and reconstructing a roof structure line, in which the searching is performed automatically and without limitation on the slope of the roof facet, and the building structure line is constructed through aerial imagery. First, lidar point clouds on the roof are extracted to compose a roof facet by using coplanarity analysis, and the roof is differentiated into a possible flat roof and a pitched roof. An optimal roof facet is obtained by analyzing lidar point clouds to overcome the low pitched facet issue. The relationship of a roof facet in a 2-dimensional space is analyzed to ascertain the area of a roof structure line. An initial boundary is generated. Line detection is performed on the images and a roof structure line segment is composed. All the structure line segments are used to reconstruct a 3-dimensional building pattern in object space. | 07-17-2014 |
20140198979 | METHODS AND SYSTEMS FOR INTERACTIVE 3D IMAGE SEGMENTATION - Methods and systems for interactively segmenting 3D image data are provided. An initial segmentation of the 3D image data is obtained, and for each of a plurality of image regions, a segmentation uncertainty indicator for the initial image segmentation is associated with the image region and a strength is assigned to the segmentation uncertainty indicator. A low-confidence region in the 3D image data is identified based at least in part on proximity of the low-confidence region to the image regions and strengths of the corresponding segmentation uncertainty indicators. An optimization routine may be applied to an objective function whose value depends at least in part on proximity of the candidate region to the image regions and the strengths of the corresponding uncertainty indicators to identify the low-confidence region from among a plurality of candidate regions. | 07-17-2014 |
20140205181 | SYSTEMS AND METHODS FOR ROW CAUSAL SCAN-ORDER OPTIMIZATION STEREO MATCHING - Systems and methods to determine a disparity map using row causal scanline optimization stereo matching are presented. A method includes, for each corresponding pixel P between a pair of input stereo images, and for each considered disparity, determining a basic match cost and a match cost for each of a set of given orientations including an east orientation and one or more other orientations, determining an overall match cost for each pixel at each considered disparity based on a sum of the determined match costs for all considered orientations for each pixel and disparity pair, and determining a resulting disparity for each pixel based on a minimum of the determined overall match costs, where a subset of the determined resulting disparities becomes available prior to completion of the input images being read in, and where the resulting disparities for all pixels are determined in a single pass through the input images. | 07-24-2014 |
20140205182 | STEREOSCOPIC IMAGE PROCESSING APPARATUS, STEREOSCOPIC IMAGE PROCESSING METHOD, AND STEREOSCOPIC IMAGE PROCESSING PROGRAM - An apparatus including: an acquiring unit operable to acquire a stereoscopic view image which includes a left eye image and a right eye image, the stereoscopic view image representing a stereoscopic image by means of parallax between the left eye image and the right eye image; an extracting unit operable to extract a subject region from the acquired stereoscopic view image, the subject region giving a stereoscopic image of a specific subject; a calculating unit operable to calculate a length of the extracted subject region in a parallax direction; and an image correcting unit operable to adjust parallax at an edge of the subject region based on the calculated length of the subject region in the parallax direction. | 07-24-2014 |
20140205183 | Method and Apparatus for Enhancing Stereo Vision Through Image Segmentation - A method and apparatus for segmenting an image are provided. The method may include the steps of clustering pixels from one of a plurality of images into one or more segments, determining one or more unstable segments changing by more than a predetermined threshold from a prior of the plurality of images, determining one or more segments transitioning from an unstable to a stable segment, determining depth for one or more of the one or more segments that have changed by more than the predetermined threshold, determining depth for one or more of the one or more transitioning segments, and combining the determined depth for the one or more unstable segments and the one or more transitioning segments with a predetermined depth of all segments changing less than the predetermined threshold from the prior of the plurality of images. | 07-24-2014 |
20140205184 | METHOD FOR REPRESENTING SURROUNDINGS - A method for environmental representation, in which two images of an environment (U) are taken respectively and a disparity image is determined by means of stereo image processing. An unobstructed free space (F) is identified in the disparity image, in that each pixel of the disparity image is allocated either to the unobstructed ground surface (B) or to one of several segments (S | 07-24-2014 |
20140205185 | IMAGE PROCESSING DEVICE, IMAGE PICKUP DEVICE, AND IMAGE DISPLAY DEVICE - With an image processing device of the present invention, the image processing device, an image pickup device, and an image display device are provided which can correct a spatial distortion generated in taking and displaying a stereoscopic image, and which can present a high-quality image with a stereoscopic feel. The image processing device comprises an information acquisition unit ( | 07-24-2014 |
20140212026 | STATISTICAL POINT PATTERN MATCHING TECHNIQUE - A statistical point pattern matching technique is used to match corresponding points selected from two or more views of a roof of a building. The technique entails statistically selecting points from each of orthogonal and oblique aerial views of a roof, generating radial point patterns for each aerial view, calculating the origin of each point pattern, representing the shape of the point pattern as a radial function, and Fourier-transforming the radial function to produce a feature space plot. A feature profile correlation function can then be computed to relate the point match sets. From the correlation results, a vote occupancy table can be generated to help evaluate the variance of the point match sets, indicating, with high probability, which sets of points are most likely to match one another. | 07-31-2014 |
20140212027 | SINGLE IMAGE POSE ESTIMATION OF IMAGE CAPTURE DEVICES - Methods for image based localization using an electronic computing device are presented, the methods including: capturing a local image with an image capture device; associating metadata with the local image; causing the electronic computing device to receive the local image; causing the electronic computing device to match the local image with a database image, where the database image is three-dimensional (3D); and calculating a pose of the image capture device based on a pose of the database image and metadata associated with the local image. In some embodiments, the metadata includes at least pitch and roll data corresponding with the image capture device at a time of image capture. | 07-31-2014 |
20140212028 | STATISTICAL POINT PATTERN MATCHING TECHNIQUE - A statistical point pattern matching technique is used to match corresponding points selected from two or more views of a roof of a building. The technique entails statistically selecting points from each of orthogonal and oblique aerial views of a roof, generating radial point patterns for each aerial view, calculating the origin of each point pattern, representing the shape of the point pattern as a radial function, and Fourier-transforming the radial function to produce a feature space plot. A feature profile correlation function can then be computed to relate the point match sets. From the correlation results, a vote occupancy table can be generated to help evaluate the variance of the point match sets, indicating, with high probability, which sets of points are most likely to match one another. | 07-31-2014 |
20140212029 | Markup Language for Interactive Geographic Information System - Data-driven guarded evaluation of conditional-data associated with data objects is used to control activation and processing of the data objects in an interactive geographic information system. Methods of evaluating conditional-data to control activation of the data objects are disclosed herein. Data structures to specify conditional data are also disclosed herein. | 07-31-2014 |
20140212030 | METHOD AND ARRANGEMENT FOR IMAGE MODEL CONSTRUCTION - A method for constructing an image model (M | 07-31-2014 |
20140212031 | METHOD AND ARRANGEMENT FOR 3-DIMENSIONAL IMAGE MODEL ADAPTATION - Method for adapting a 3D model (m) of an object, said method comprising the steps of | 07-31-2014 |
20140219547 | Method for Increasing Resolutions of Depth Images - A resolution of a low resolution depth image is increased by applying joint geodesic upsampling to a high resolution image to obtain a geodesic distance map. Depths in the low resolution depth image are interpolated using the geodesic distance map to obtain a high resolution depth image. The high resolution image can be a gray scale or color image, or a binary boundary map. The low resolution depth image can be acquired by any type of depth sensor. | 08-07-2014 |
20140219548 | Method and System for On-Site Learning of Landmark Detection Models for End User-Specific Diagnostic Medical Image Reading - A method and system for on-line learning of landmark detection models for end-user specific diagnostic image reading is disclosed. A selection of a landmark to be detected in a 3D medical image is received. A current landmark detection result for the selected landmark in the 3D medical image is determined by automatically detecting the selected landmark in the 3D medical image using a stored landmark detection model corresponding to the selected landmark or by receiving a manual annotation of the selected landmark in the 3D medical image. The stored landmark detection model corresponding to the selected landmark is then updated based on the current landmark detection result for the selected landmark in the 3D medical image. The landmark selected in the 3D medical image can be a set of landmarks defining a custom view of the 3D medical image. | 08-07-2014 |
20140219549 | METHOD AND APPARATUS FOR ACTIVE STEREO MATCHING - An active stereo matching method includes extracting a pattern from a stereo image, generating a depth map through a stereo matching using the extracted pattern, calculating an aggregated cost for a corresponding disparity using a window kernel generated using the extracted pattern and a cost volume generated for the stereo image, and generating a disparity map using the depth map and the aggregated cost. | 08-07-2014 |
20140219550 | Silhouette-based pose estimation - Estimating a pose of an articulated | 08-07-2014 |
20140219551 | 3D Photo Creation System and Method - The present application is directed to a 3D photo creation system and method, wherein the 3D photo creation system includes: a stereo image input module configured to input a stereo image, wherein the stereo image comprises a left eye image and a right eye image; a depth estimation module configured to estimate depth information of the stereo image and create a depthmap; a multi-view angle image reconstructing module configured to create a multi-view angle image according to the depthmap and the stereo image; and an image spaced scanning module configured to adjust the multi-view angle image and form a mixed image. The system and method greatly simplify the process of 3D photo creation and enhance the quality of the 3D photo. The system and method can be widely used in various theme parks, tourist attractions and photo galleries, and bring pleasure to more consumers with the 3D photos. | 08-07-2014 |
20140233845 | AUTOMATIC IMAGE RECTIFICATION FOR VISUAL SEARCH - Disclosed is a computing device that can perform automatic image rectification for a visual search. A method implemented at a computing device includes receiving one or more images from an image capture device, storing the one or more images with the computing device, building a three dimensional (3D) geometric model for one or more potential objects of interest within an environment based on at least one image of the one or more images, and automatically creating at least one rectified image having at least one potential object of interest for a visual search. | 08-21-2014 |
20140233846 | METHOD FOR AUTO-DEPICTING TRENDS IN OBJECT CONTOURS - Disclosed herein is a method for auto-depicting trends in object contours (referred to as ADTOC). At the heart of ADTOC is a sifting process to determine a significant angular value by evaluating a plurality of angular values in a predefined range. ADTOC is characterized in that a probe-ahead concept is applied to obtain a reference angular value along the current route, and the probed angular value is then used to modify the significant angular value in order to correct the subsequent trace direction in a timely manner, thus achieving a more accurate trace result. Contours with discontinuous segments caused by noise, obstacles, illumination, shading variations, etc. can also be auto-depicted without requiring a predefined auxiliary route. | 08-21-2014 |
20140233847 | NETWORKED CAPTURE AND 3D DISPLAY OF LOCALIZED, SEGMENTED IMAGES - Systems, devices and methods are described including receiving a source image having a foreground portion and a background portion, where the background portion includes image content of a three-dimensional (3D) environment. A camera pose of the source image may be determined by comparing features of the source image to image features of target images of the 3D environment and using the camera pose to segment the foreground portion from the background portion may generate a segmented source image. The resulting segmented source image and the associated camera pose may be stored in a networked database. The camera pose and segmented source image may be used to provide a simulation of the foreground portion in a virtual 3D environment. | 08-21-2014 |
20140233848 | APPARATUS AND METHOD FOR RECOGNIZING OBJECT USING DEPTH IMAGE - An apparatus recognizes an object using a hole in a depth image. An apparatus may include a foreground extractor to extract a foreground from the depth image, a hole determiner to determine whether a hole is present in the depth image, based on the foreground and a color image, a feature vector generator to generate a feature vector, by generating a plurality of features corresponding to the object based on the foreground and the hole, and an object recognizer to recognize the object, based on the generated feature vector and at least one reference feature vector. | 08-21-2014 |
20140233849 | METHOD FOR SINGLE-VIEW HAIR MODELING AND PORTRAIT EDITING - The invention discloses a method for single-view hair modeling and portrait editing. The method is capable of reconstructing the 3D structure of an individual's hairstyle in an input image, and it requires only a small amount of user input to bring about a variety of portrait editing functions; after steps of image preprocessing, 3D head model reconstruction, 2D strand extraction and 3D hairstyle reconstruction, the method finally achieves portrait editing functions such as portrait pop-ups, hairstyle replacements, hairstyle editing, etc.; the invention discloses a method for creating a 3D hair model from a single portrait view for the first time, thereby bringing about a series of practical portrait hairstyle editing functions whose effect is superior to prior art methods, with features such as simple interactions and highly efficient calculations. | 08-21-2014 |
20140241612 | REAL TIME STEREO MATCHING - Real-time stereo matching is described, for example, to find depths of objects in an environment from an image capture device capturing a stream of stereo images of the objects. For example, the depths may be used to control augmented reality, robotics, natural user interface technology, gaming and other applications. Streams of stereo images, or single stereo images, obtained with or without patterns of illumination projected onto the environment are processed using a parallel-processing unit to obtain depth maps. In various embodiments a parallel-processing unit propagates values related to depth in rows or columns of a disparity map in parallel. In examples, the values may be propagated according to a measure of similarity between two images of a stereo pair; propagation may be temporal between disparity maps of frames of a stream of stereo images and may be spatial within a left or right disparity map. | 08-28-2014 |
20140241613 | COORDINATED STEREO IMAGE ACQUISITION AND VIEWING SYSTEM - An image processing apparatus is provided, which includes a first calculation unit to calculate a first position of at least one first point sampled from an actual 3-dimensional (3D) object to be acquired as stereo 3D images, a second calculation unit to calculate a second position of at least one second point of a receiving end corresponding to the first point, using at least one second parameter related to the receiving end provided with the stereo 3D images, and a determination unit to determine at least one first parameter related to a transmission end to acquire and provide the stereo 3D images to the receiving end so that a difference between the first position and the second position is minimized. | 08-28-2014 |
20140241614 | System for 2D/3D Spatial Feature Processing - An electronic device ( | 08-28-2014 |
20140241615 | Design and Optimization of Plenoptic Imaging Systems - The spatial resolution of captured plenoptic images is enhanced. In one aspect, the plenoptic imaging process is modeled by a pupil image function (PIF), and a PIF inversion process is applied to the captured plenoptic image to produce a better resolution estimate of the object. | 08-28-2014 |
20140247976 | Image Analysis Method and Image Analysis Apparatus - An image analysis method includes acquiring an image of at least one frame that comprises pixels, setting at least one analytic region for the image of at least one frame, extracting data on the pixel corresponding to each analytic region, setting time intervals for data pairs for use in correlation calculations, performing a correlation calculation for each of the time intervals by use of the extracted data, and performing a fitting for each of the correlation calculation results. | 09-04-2014 |
20140254917 | AUTO-CONVERGENCE SYSTEM WITH ACTIVE LEARNING AND RELATED METHOD AND MACHINE-READABLE MEDIUM THEREOF - An auto-convergence system includes a disparity unit, a convergence unit and an active learning unit. The disparity unit performs a disparity analysis upon an input stereo image pair, and accordingly obtains a disparity distribution of the input stereo image pair. The convergence unit adjusts the input stereo image pair adaptively according to the disparity distribution and a learned convergence range, and accordingly generates an output stereo image pair for playback. The active learning unit actively learns a convergence range during playback of stereo image pairs, and accordingly determines the learned convergence range. | 09-11-2014 |
20140254918 | DISPARITY ESTIMATION METHOD OF STEREOSCOPIC IMAGE - A disparity estimation method for a stereoscopic image is provided. A matching cost computation is executed for a first image and a second image, and one extreme value is selected from the cost values corresponding to the estimated disparities for each pixel to obtain a matching point corresponding to each pixel. Then, a matching disparity corresponding to each matching point is adjusted based on edge detection according to a scan order. | 09-11-2014 |
20140254919 | DEVICE AND METHOD FOR IMAGE PROCESSING - A device and a method for image processing include an image processing device that may extract a foreground moving object from a depth map of a three-dimensional (3D) image that may include an image depth map acquirer to obtain the depth map of a successive 3D image over a period of time, a moving object segmenter to segment a moving object from the obtained depth map, and a moving object tracker to identify and track the segmented moving object. | 09-11-2014 |
20140254920 | METHOD AND APPARATUS FOR ENHANCING QUALITY OF 3D IMAGE - A method of enhancing a quality of a 3 dimensional (3D) image includes classifying an input 3D image into a plurality of sub-areas based on noise characteristics of the plurality of sub-areas of the input 3D image, denoising each of the plurality of sub-areas of the input 3D image by using different denoising methods according to noise characteristics of each of the classified plurality of sub-areas and obtaining a second 3D image after the denoising, and enhancing a contrast ratio of the second 3D image after the denoising. | 09-11-2014 |
20140254921 | PROCEDURAL AUTHORING - The claimed subject matter provides a system and/or a method that facilitates generating a model from a 3-dimensional (3D) object assembled from 2-dimensional (2D) content. A content aggregator can construct a 3D object from a collection of two or more 2D images each depicting a real entity in a physical real world, wherein the 3D object is constructed by combining the two or more 2D images based upon a respective image perspective. A 3D virtual environment can allow exploration of the 3D object. A model component can extrapolate a true 3D geometric model from the 3D object, wherein the true 3D geometric model is generated to include scaling in proportion to a size within the physical real world. | 09-11-2014 |
20140270476 | METHOD FOR 3D OBJECT IDENTIFICATION AND POSE DETECTION USING PHASE CONGRUENCY AND FRACTAL ANALYSIS - Method for identifying objects within a three-dimensional point cloud data set. The method includes a fractal analysis ( | 09-18-2014 |
20140270477 | SYSTEMS AND METHODS FOR DISPLAYING A THREE-DIMENSIONAL MODEL FROM A PHOTOGRAMMETRIC SCAN - A computer-implemented method for displaying a three-dimensional (3D) model from a photogrammetric scan. An image of an object and a scan marker may be obtained at a first location. A relationship between the image of the object and the image of the scan marker at the first location may be determined. A geometric property of the object may be determined based on the relationship between the image of the object and the image of the scan marker. A 3D model of the object may be generated based on the determined geometric property of the object. The 3D model of the object may be displayed to scale in an augmented reality environment at a second location based on a scan marker at the second location. | 09-18-2014 |
20140270478 | IMAGE MOSAICKING USING A VIRTUAL GRID - Systems, methods, and other embodiments associated with generating a mosaic image using a virtual grid are described. In one embodiment, a method includes analyzing, by a processor of an apparatus, a boundary of a requested image to determine source images that collectively form an area that includes the requested image. The method also includes generating, by the processor, a virtual grid from coordinates of the source images by identifying edges of the source images from the coordinates to define rows and columns of the virtual grid within the boundary. The rows and columns of the virtual grid define virtual tiles in the virtual grid. | 09-18-2014 |
20140270479 | SYSTEMS AND METHODS FOR PARAMETER ESTIMATION OF IMAGES - Systems and methods are disclosed for identifying the vanishing point, vanishing direction and road width of an image using scene identification algorithms and a new edge-scoring technique. | 09-18-2014 |
20140270480 | DETERMINING OBJECT VOLUME FROM MOBILE DEVICE IMAGES - Techniques are described for analyzing images acquired via mobile devices in various ways, including to estimate measurements for one or more attributes of one or more objects in the images. For example, the described techniques may be used to measure the volume of a stockpile of material or other large object, based on images acquired via a mobile device that is carried by a human user as he or she passes around some or all of the object. During the acquisition of a series of digital images of an object of interest, various types of user feedback may be provided to a human user operator of the mobile device, and particular images may be selected for further analysis in various manners. Furthermore, the calculation of object volume and/or other determined object information may include generating and manipulating a computer model or other representation of the object from selected images. | 09-18-2014 |
20140270481 | SYSTEM FOR DETERMINING ALIGNMENT OF A USER-MARKED DOCUMENT AND METHOD THEREOF - A system for evaluating a user-marked document including a user-marked response sheet having a response area and at least one image marker, a means for obtaining a digital image of the user-marked response sheet, a computer having programming to perform steps which include identifying three-dimensional position information of the at least one image marker in an obtained digital image, calculating position information of the response area in an obtained digital image using the three-dimensional position information of the at least one image marker, identifying position information of a user created mark within the response area using the calculated position information of the response area, and evaluating whether the position information of the user created mark corresponds with position information of a first predefined mark or a second predefined mark. | 09-18-2014 |
20140270482 | Recognizing Entity Interactions in Visual Media - An entity interaction recognition system algorithmically recognizes a variety of different types of entity interactions that may be captured in two-dimensional images. In some embodiments, the system estimates the three-dimensional spatial configuration or arrangement of entities depicted in the image. In some embodiments, the system applies a proxemics-based analysis to determine an interaction type. In some embodiments, the system infers, from a characteristic of an entity detected in an image, an area or entity of interest in the image. | 09-18-2014 |
20140270483 | METHODS AND SYSTEMS FOR MEASURING GROUP BEHAVIOR - Methods and systems for measuring group behavior are provided. Group behavior of different groups may be measured objectively and automatically in different environments including a dark environment. A uniform visible signal comprising images of members of a group may be obtained. Facial motion and body motions of each member may be detected and analyzed from the signal. Group behavior may be measured by aggregating facial motions and body motions of all members of the group. A facial motion such as a smile may be detected by using the Fourier Lucas-Kanade (FLK) algorithm to register and track faces of each member of a group. A flow-profile for each member of the group is generated. Group behavior may be further analyzed to determine a correlation of the group behavior and the content of the stimulus. A prediction of the general public's response to the stimulus based on the analysis of the group behavior is also provided. | 09-18-2014 |
20140270484 | Moving Object Localization in 3D Using a Single Camera - Systems and methods are disclosed for autonomous driving with only a single camera by localizing moving objects in 3D with a real-time framework that harnesses object detection and monocular structure from motion (SFM) through ground plane estimation; tracking feature points on moving cars in a real-time framework and using the feature points for 3D orientation estimation; and correcting scale drift with ground plane estimation that combines cues from sparse features and dense stereo visual data. | 09-18-2014 |
20140270485 | SPATIO-TEMPORAL DISPARITY-MAP SMOOTHING BY JOINT MULTILATERAL FILTERING - A filter structure for filtering a disparity map includes a first filter, a second filter, and a filter selector. The first filter is for filtering a contemplated section of the disparity map according to a first measure of central tendency. The second filter is for filtering the contemplated section of the disparity maps according to a second measure of central tendency. The filter selector is provided for selecting the first filter or the second filter for filtering the contemplated section of the disparity map, the selection being based on at least one local property of the contemplated section. A corresponding method for filtering a disparity map includes determining a local property of the contemplated section and selecting a filter. The contemplated section is then filtered using the first filter or the second filter depending on a result of the selection. | 09-18-2014 |
20140270486 | HYBRID RECURSIVE ANALYSIS OF SPATIO-TEMPORAL OBJECTS - A method for generating 3D-information from multiple images showing a 3D scene from multiple perspectives has: providing at least two hypotheses for the 3D-information; performing a multi-hypotheses test by matching the at least two hypotheses to the multiple images and determining a test-result hypothesis that fulfills a particular matching criterion; updating the test-result hypothesis by varying a parameter set of the test-result hypothesis to further improve the matching criterion or another criterion; and determining the 3D-information on the basis of the parameter set of a resulting hypothesis provided by the action of updating the test-result hypothesis. A corresponding computer readable digital storage medium and a 3D-information generator are also described. Further embodiments perform a correspondence analysis between projections of spatio-temporal objects (STO) in multiple images to select a particular spatio-temporal object on the basis of said correspondence analysis. | 09-18-2014 |
20140286566 | COOPERATIVE PHOTOGRAPHY - Imagery from two or more users' different smartphones is streamed to a cloud processor, enabling creation of 3D model information about a scene being imaged. From this model, arbitrary views and streams can be synthesized. In one arrangement, a user of such a system is at a sports arena, and her view of the sporting event is blocked when another spectator rises to his feet in front of her. Nonetheless, the imagery presented on her headworn display continues uninterrupted—the blocked imagery from that viewpoint being seamlessly re-created based on imagery contributed by other system users in the arena. A great variety of other features and arrangements are also detailed. | 09-25-2014 |
20140286567 | IMAGE PROCESSING METHOD AND ASSOCIATED APPARATUS - An image processing method includes: receiving a plurality of images, the images being captured under different view points; and performing image alignment for the plurality of images by warping the plurality of images, where the plurality of images are warped according to a set of parameters, and the set of parameters are obtained by finding a solution constrained to predetermined ranges of physical camera parameters. In particular, the step of performing the image alignment further includes: automatically performing the image alignment to reproduce a three-dimensional (3D) visual effect, where the plurality of images is captured by utilizing a camera module, and the camera module is not calibrated with regard to the view points. For example, the 3D visual effect can be a multi-angle view (MAV) visual effect. In another example, the 3D visual effect can be a 3D panorama visual effect. An associated apparatus is also provided. | 09-25-2014 |
20140294287 | LOW-COMPLEXITY METHOD OF CONVERTING IMAGE/VIDEO INTO 3D FROM 2D - A low-complexity method of converting 2D images/videos into 3D ones includes the steps of identifying whether each pixel in one of the frames is an edge feature point; locating at least two vanishing lines in the frame according to the edge feature points; categorizing the frame as having a close-up photographic feature, a landscape feature, or a vanishing-area feature; if the frame is identified as having the vanishing-area feature or the landscape feature, generating a GDM and applying a modificatory procedure to the GDM to generate a final depth information map; if the frame is identified as having the close-up photographic feature, distinguishing between a foreground object and background information in the frame and defining the depth of field to generate the final depth information map. | 10-02-2014 |
20140294288 | SYSTEM FOR BACKGROUND SUBTRACTION WITH 3D CAMERA - A system for background image subtraction includes a computing device coupled with a 3D video camera, a processor of the device programmed to receive a video feed from the camera containing images of one or more subjects that include depth information. The processor, for an image: segments pixels and corresponding depth information into three different regions including foreground (FG), background (BG), and unclear (UC); categorizes UC pixels as FG or BG using a function that considers the color and background history (BGH) information associated with the UC pixels and the color and BGH information associated with pixels near the UC pixels; examines the pixels marked as FG and applies temporal and spatial filters to smooth boundaries of the FG regions; constructs a new image by overlaying the FG regions on top of a new background; displays a video feed of the new image in a display device; and continually maintains the BGH. | 10-02-2014 |
20140294289 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - There is provided an image processing apparatus including a stereo matching unit configured to obtain right and left disparity images by using stereo matching, based on a pair of images captured by right and left cameras, respectively, a filter processing unit configured to perform filter processing on the disparity images, and a first merging unit configured to make a comparison, in the disparity images that have undergone the filter processing, between disparity values at mutually corresponding positions in the right and left disparity images and to merge the disparity values of the right and left disparity images based on a comparison result. | 10-02-2014 |
20140294290 | Projector-Camera Misalignment Correction for Structured Light Systems - A method of misalignment correction in a structured light device is provided that includes extracting features from a first captured image of a scene, wherein the first captured image is captured by an imaging sensor component of the structured light device, and wherein the first captured image includes a pattern projected into the scene by a projector component of the structured light device, matching the features of the first captured image to predetermined features of a pattern image corresponding to the projected pattern to generate a dataset of matching features, determining values of alignment correction parameters of an image alignment transformation model using the dataset of matching features, and applying the image alignment transformation model to a second captured image using the determined alignment correction parameter values. | 10-02-2014 |
20140301633 | System and Method for Floorplan Reconstruction and Three-Dimensional Modeling - Systems and methods for reconstructing a floorplan of a building for generating a three-dimensional model are provided. One aspect of the present disclosure is directed to a computer-implemented method for generating a three-dimensional model of a building. The method includes estimating a floor height and a ceiling height of the building. The method also includes identifying a core region of a two-dimensional graph, the core region corresponding to an interior of the building. The method includes determining a solution path that circumnavigates the core region and minimizes a cost formula, the cost formula providing an edge cost for each of a plurality of edges. The method further includes generating a three-dimensional model of the interior of the building based on the floor height, the ceiling height, and the solution path. | 10-09-2014 |
20140307950 | IMAGE DEBLURRING - Image deblurring is described, for example, to remove blur from digital photographs captured at a handheld camera phone and which are blurred due to camera shake. In various embodiments an estimate of blur in an image is available from a blur estimator and a trained machine learning system is available to compute parameter values of a blur function from the blurred image. In various examples the blur function is obtained from a probability distribution relating a sharp image, a blurred image and a fixed blur estimate. For example, the machine learning system is a regression tree field trained using pairs of empirical sharp images and blurred images calculated from the empirical images using artificially generated blur kernels. | 10-16-2014 |
20140307951 | Light Estimation From Video - Embodiments of the invention include a method and/or system for determining the relative location of light sources in a video clip. The relative location of a light source can be determined from information found within the video clip. This can be determined from specular reflection on a planar surface within the video clip. An ellipse can be fit to glare that is indicative of specular reflection on the planar surface. An outgoing light angle can be determined from the ellipse. From this outgoing light angle, the incident light angle can be determined for the frame. From this incident light angle, the location of the light source can be determined. | 10-16-2014 |
20140307952 | MIXING INFRARED AND COLOR COMPONENT DATA POINT CLOUDS - The subject disclosure is directed towards mixing RGB data with infrared data so as to provide depth-related data in regions where infrared data are sparse. Infrared data, such as corresponding to point cloud data, are processed to determine sparse regions therein. For any such sparse regions, RGB data corresponding to a counterpart region in the RGB data are added to a data structure. The data structure, which may include or be concatenated to the IR data, may be used for depth-related data, e.g., with a point cloud. | 10-16-2014 |
20140307953 | ACTIVE STEREO WITH SATELLITE DEVICE OR DEVICES - The subject disclosure is directed towards communicating image-related data between a base station and/or one or more satellite computing devices, e.g., tablet computers and/or smartphones. A satellite device captures image data and communicates image-related data (such as the images or depth data processed therefrom) to another device, such as a base station. The receiving device uses the image-related data to enhance depth data (e.g., a depth map) based upon the image data captured from the satellite device, which may be physically closer to something in the scene than the base station, for example. To more accurately capture depth data in various conditions, an active illumination pattern may be projected from the base station or another external projector, whereby satellite units may use the other source's active illumination and thereby need not consume internal power to benefit from active illumination. | 10-16-2014 |
20140307954 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, PROGRAM, AND ELECTRONIC APPLIANCE - There is provided an image processing apparatus including an image processing unit configured to carry out adjustment that makes disparity larger than disparity corresponding to processed images, which are moving images to be processed, based on an amount of change over time in a magnitude of the disparity corresponding to the processed images. | 10-16-2014 |
20140307955 | APPARATUS AND METHOD FOR DETECTING BODY PARTS FROM USER IMAGE - An apparatus for detecting a body part from a user image may include an image acquirer to acquire a depth image, an extractor to extract the user image from a foreground of the acquired depth image, and a body part detector to detect the body part from the user image, using a classifier trained based on at least one of a single-user image sample and a multi-user image sample. The single-user image may be an image representing non-overlapping users, and the multi-user image may be an image representing overlapping users. | 10-16-2014 |
20140314307 | METHOD AND SYSTEM FOR ANALYZING IMAGES FROM SATELLITES - Various embodiments provide a method for analyzing images generated from at least one imaging system on at least one satellite. The method comprises providing at least three images of an area of interest from the at least one imaging system, the provided images being provided from at least three different angles, establishing point correspondence between the provided images, generating at least two sets of three-dimensional information based on the provided images, wherein the at least two sets of three-dimensional information are generated based on at least two different combinations of at least two of the at least three provided images of the area of interest, and comparing the at least two sets of three-dimensional information so as to determine discrepancies and providing information related to the imaging system and/or errors in the images based on the determined discrepancies. Associated systems, computer programs, and computer program products are also provided. | 10-23-2014 |
20140314308 | THREE-DIMENSIONAL POINT CLOUD POSITION DATA PROCESSING DEVICE, THREE-DIMENSIONAL POINT CLOUD POSITION DATA PROCESSING SYSTEM, AND THREE-DIMENSIONAL POINT CLOUD POSITION DATA PROCESSING METHOD AND PROGRAM - A technique is provided for efficiently processing three-dimensional point cloud position data that are obtained at different viewpoints. A projecting plane is set in a measurement space as a parameter for characterizing a target plane contained in plural planes that form an object. The target plane and other planes are projected on the projecting plane. Then, a distance between each plane and the projecting plane is calculated at each grid point on the projecting plane, and the calculated matrix data is used as a range image that characterizes the target plane. The range image is also formed with respect to the other planes and with respect to planes that are viewed from another viewpoint. The range images of the two viewpoints are compared, and a pair of the planes having the smallest difference between the range images thereof is identified as matching planes between the two viewpoints. | 10-23-2014 |
20140314309 | PREPROCESSING APPARATUS IN STEREO MATCHING SYSTEM - A preprocessing apparatus in a stereo matching system is provided. In the preprocessing apparatus, coordinate information of a stereo camera is received, a new address of a pixel of an image is specified in real time, and left and right images are interpolated, using the new address of the pixel and weight information of the stereo camera. | 10-23-2014 |
20140321732 | AUTOMATIC DETECTION OF STEREOSCOPIC CONTENT IN VIDEO/IMAGE DATA - A method includes calculating, through a processor of a computing device communicatively coupled to a memory, correlation between two portions of an image and/or a video frame on either side of a reference portion thereof. The method also includes determining, through the processor, whether content of the image and/or the video frame is stereoscopic or non-stereoscopic based on the determined correlation. | 10-30-2014 |
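One simple way to realize the correlation test in the entry above is to compare the two halves of a frame around the vertical centerline; a sketch assuming grayscale side-by-side packing, with an illustrative threshold value:

import numpy as np

def looks_side_by_side(frame, threshold=0.8):
    """Guess whether a frame is side-by-side stereo content.

    The frame (HxW grayscale array) is split at the vertical centerline and
    the normalized correlation of the two halves is compared to `threshold`
    (an illustrative value); near-duplicate halves suggest stereo packing.
    """
    h, w = frame.shape
    left, right = frame[:, :w // 2], frame[:, w // 2:(w // 2) * 2]
    l = left.astype(float).ravel() - left.mean()
    r = right.astype(float).ravel() - right.mean()
    denom = np.linalg.norm(l) * np.linalg.norm(r)
    corr = float(l @ r / denom) if denom > 0 else 0.0
    return corr >= threshold, corr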
20140321733 | METHODS, APPARATUSES, AND COMPUTER-READABLE MEDIA FOR PROJECTIONAL MORPHOLOGICAL ANALYSIS OF N-DIMENSIONAL SIGNALS - Embodiments discussed herein in the form of methods, systems, and computer-readable media deal with the application of advanced “projectional” morphological algorithms for solving a broad range of problems. In a method of performing projectional morphological analysis, an N-dimensional input signal is supplied. At least one N-dimensional form indicative of at least one feature in the N-dimensional input signal is identified. The N-dimensional input signal is filtered relative to the at least one N-dimensional form and an N-dimensional output signal is generated indicating results of the filtering at least as differences in the N-dimensional input signal relative to the at least one N-dimensional form. | 10-30-2014 |
20140321734 | METHOD AND APPARATUS FOR REMOTE SENSING OF OBJECTS UTILIZING RADIATION SPECKLE - Disclosed are systems and methods to extract information about the size and shape of an object by observing variations of the radiation pattern caused by illuminating the object with coherent radiation sources and changing the wavelengths of the source. Sensing and image-reconstruction systems and methods are described for recovering the image of an object utilizing projected and transparent reference points and radiation sources such as tunable lasers. Sensing and image-reconstruction systems and methods are also described for rapid sensing of such radiation patterns. A computational system and method is also described for sensing and reconstructing the image from its autocorrelation. This computational approach uses the fact that the autocorrelation is the weighted sum of shifted copies of an image, where the shifts are obtained by sequentially placing each individual scattering cell of the object at the origin of the autocorrelation space. | 10-30-2014 |
20140321735 | Method and computer program product of the simultaneous pose and points-correspondences determination from a planar model - A method and software for the simultaneous pose and points-correspondences determination from a planar model are disclosed. The method includes using a coarse pose estimation algorithm to obtain two possible coarse poses, and using each one of the two coarse poses as the initialization of the extended TsPose algorithm to obtain two candidate estimated poses; selecting one from the two candidate estimated poses based on the cost function value. Thus, the method solves the problem of pose redundancy in the simultaneous pose and points-correspondences determination from a planar model, i.e., the problem that the number of estimated poses increases exponentially as the iterations proceed. The disclosed embodiment is based on the coplanar points, and does not place restrictions on the shape of a planar model. It performs well in a cluttered and occluded environment, and is noise-resilient in the presence of different levels of noise. | 10-30-2014 |
20140321736 | MOVING-IMAGE PROCESSING DEVICE, MOVING-IMAGE PROCESSING METHOD, AND INFORMATION RECORDING MEDIUM - Provided is a moving-image processing device for determining interference between objects. A moving-image processing device ( | 10-30-2014 |
20140328535 | SPARSE LIGHT FIELD REPRESENTATION - The disclosure provides an approach for generating a sparse representation of a light field. In one configuration, a sparse representation application receives a light field constructed from multiple images, and samples and stores a set of line segments originating at various locations in epipolar-plane images (EPI), until the EPIs are entirely represented and redundancy is eliminated to the extent possible. In addition, the sparse representation application determines and stores difference EPIs that account for variations in the light field. Taken together, the line segments and the difference EPIs compactly store all relevant information that is necessary to reconstruct the full 3D light field and extract an arbitrary input image with a corresponding depth map, or a full 3D point cloud, among other things. This concept also generalizes to higher dimensions. In a 4D light field, for example, the principles of eliminating redundancy and storing a difference volume remain valid. | 11-06-2014 |
20140334715 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - There is provided an image processing apparatus that includes a coefficient setting section, and a processing section. The coefficient setting section is configured to set a filter coefficient based on a correlation value in a color image. The processing section is configured to perform filter processing on a disparity image of the color image for correction of the disparity image, the filter processing being performed using the filter coefficient set by the coefficient setting section. | 11-13-2014 |
20140334716 | APPARATUS AND METHOD FOR HIERARCHICAL STEREO MATCHING - An apparatus and a method for hierarchical stereo matching are provided. In the method, a reduced image is formed by reducing left and right images, and a first Trellis is performed on the reduced image. Then, a magnified image is generated by magnifying the size and the brightness of the reduced image, and a second Trellis is performed on the magnified image. | 11-13-2014 |
20140334717 | METHOD AND APPARATUS FOR COMPRESSING TEXTURE INFORMATION OF THREE-DIMENSIONAL (3D) MODELS - A 3D model can be modeled using a "pattern-instance" representation. To describe the vertices and triangles, properties of the instance, for example, texture, color, and normal, are adjusted to correspond to the order in the pattern. The texture of an instance is encoded depending on its similarity with the texture of a corresponding pattern. When the instance texture is identical or almost identical to the pattern texture, the instance texture is not encoded and the pattern texture will be used to reconstruct the instance texture. When the instance texture is similar to the pattern texture, the instance texture is predictively encoded from the pattern texture, that is, the difference between the instance texture and the pattern texture is encoded, and the instance texture is determined as a combination of the pattern texture and the difference. | 11-13-2014 |
20140341463 | Method for Reconstructing 3D Lines from 2D Lines in an Image - A method for reconstructing three-dimensional (3D) lines in a 3D world coordinate system from two-dimensional (2D) lines in a single image of a scene detects and clusters the 2D lines using vanishing points. A constraint graph of vertices and edges is generated, wherein the vertices represent the 2D lines and the edges represent constraints on the 2D lines; the 3D lines that satisfy the constraints are then identified and reconstructed using the identified constraints. | 11-20-2014 |
20140341464 | SHADOW DETECTION METHOD AND DEVICE - Disclosed are a shadow detection method and device. The method includes a step of obtaining a depth/disparity map and color/grayscale image from a two-lens camera or stereo camera; a step of detecting and acquiring plural foreground points; a step of projecting the acquired plural foreground points into a 3-dimensional coordinate system; a step of carrying out, in the 3-dimensional coordinate system, a clustering process with respect to the projected plural foreground points so as to divide the projected plural foreground points into one or more point clouds; a step of calculating density distribution of each of the one or more point clouds by adopting a principal component analysis algorithm so as to obtain one or more principal component values of the corresponding point cloud; and a step of determining, based on the one or more principal component values, whether the corresponding point cloud is a shadow. | 11-20-2014 |
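The principal-component test in the entry above can be sketched with plain PCA on a clustered point cloud: shadows cast on a surface tend to be nearly flat, so the smallest principal variance is close to zero. The flatness_ratio threshold below is an assumed, illustrative value:

import numpy as np

def principal_components(points):
    """Return the PCA variances (descending) of an Nx3 point cluster."""
    points = np.asarray(points, float)
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / max(len(points) - 1, 1)
    eigvals = np.linalg.eigvalsh(cov)          # ascending order
    return eigvals[::-1]

def is_shadow_like(points, flatness_ratio=0.01):
    """Flag a cluster as shadow-like if it is nearly flat.

    `flatness_ratio` is an illustrative threshold: the smallest principal
    variance must be tiny relative to the largest one.
    """
    v = principal_components(points)
    return v[0] > 0 and (v[2] / v[0]) < flatness_ratio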
20140348416 | STEREO IMAGE RECTIFICATION APPARATUS AND METHOD - A stereo image rectification apparatus includes: an input unit, for receiving a first image from a first camera and a second image from a second camera, wherein the first and the second cameras are affine cameras; a feature point determination unit, for determining at least one first feature point on the first image and at least one second feature point on the second image, wherein both the first feature point on the first image and the second feature point on the second image correspond to the same imaginary object; and a warping matrix establishing unit, for establishing a warping matrix for mapping the first image to the second image by: calculating the elements of the warping matrix in regard to the mapping between the x- and y-coordinates of the first feature points and the y-coordinates of the second feature points. | 11-27-2014 |
20140348417 | SYSTEM AND METHOD TO CAPTURE AND PROCESS BODY MEASUREMENTS - The present invention is directed to a system and method to capture and process body measurement data, including one or more body scanners, each body scanner configured to capture a plurality of depth images of a user. A backend system, coupled to each body scanner, is configured to receive the plurality of depth images, generate a three dimensional avatar of the user based on the plurality of depth images, and generate a set of body measurements of the user based on the avatar. A repository is in communication with the backend system and configured to associate the body scanner data with the user and store the body scanner data. | 11-27-2014 |
20140348418 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - There is provided an image processing apparatus, including a virtual viewpoint interval determination section which determines virtual viewpoint intervals corresponding to an image to be processed based on a parallax corresponding to the image to be processed and an image feature amount for each pixel in the image to be processed. | 11-27-2014 |
20140355868 | TECHNIQUES FOR STEREO THREE DIMENSIONAL IMAGE MAPPING - An apparatus may include a processor to retrieve a stereo three dimensional (S3D) frame of an S3D game, the frame comprising a red-green-blue (RGB) frame and a depth frame; and an interest aware disparity mapping component to: generate a depth edge frame from the depth frame; and to generate a depth distribution diagram for the depth frame based on the depth edge frame, the depth distribution diagram defining a multiplicity of camera regions for generating a mapped S3D frame for a target device based upon viewing parameters of the target device. | 12-04-2014 |
20140355869 | SYSTEM AND METHOD FOR PREVENTING AIRCRAFTS FROM COLLIDING WITH OBJECTS ON THE GROUND - A safety system for preventing aircraft collisions with objects on the ground is provided herein. The safety system may include gated imaging sensors attached to the aircraft that capture overlapping gated images which are images that allow estimating the range of the imaged objects. The overlap zones are utilized to generate a three dimensional model of the aircraft surroundings. Additionally, aircraft contour data and aircraft kinematic data are used to construct an expected swept volume of the aircraft which is then projected onto the three dimensional model of the aircraft surroundings to derive an estimation of likelihood of collision of the aircraft with objects in its surroundings and corresponding warnings. | 12-04-2014 |
20140355870 | Systems and Methods for Generating Depth Maps Using A Set of Images Containing A Baseline Image - Systems and methods for implementing array cameras configured to perform super-resolution processing to generate higher resolution super-resolved images using a plurality of captured images and lens stack arrays that can be utilized in array cameras are disclosed. An imaging device in accordance with one embodiment of the invention includes at least one imager array, and each imager in the array comprises a plurality of light sensing elements and a lens stack including at least one lens surface, where the lens stack is configured to form an image on the light sensing elements, control circuitry configured to capture images formed on the light sensing elements of each of the imagers, and a super-resolution processing module configured to generate at least one higher resolution super-resolved image using a plurality of the captured images. | 12-04-2014 |
20140363073 | HIGH-PERFORMANCE PLANE DETECTION WITH DEPTH CAMERA DATA - The subject disclosure is directed towards detecting planes in a scene using depth data of a scene image, based upon a relationship between pixel depths, row height and two constants. Samples of a depth image are processed to fit values for the constants to a plane formulation to determine which samples indicate a plane. A reference plane may be determined from those samples that indicate a plane, with pixels in the depth image processed to determine each pixel's relationship to the plane based on the pixel's depth, location and associated fitted values, e.g., below the plane, on the plane or above the plane. | 12-11-2014 |
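A plausible reading of the two-constant plane formulation in the entry above is a linear relation between inverse depth and image row, which holds for a ground-like plane under a pinhole camera; the interpretation and the least-squares fit below are assumptions for illustration:

import numpy as np

def fit_row_depth_plane(rows, depths):
    """Fit the two constants (a, b) of 1/depth = a*row + b by least squares.

    rows, depths: 1-D arrays of sampled pixel rows and their metric depths.
    Returns (a, b) and the RMS residual, which indicates how plane-like the
    samples are under this ground-plane-style model.
    """
    rows = np.asarray(rows, float)
    inv_z = 1.0 / np.asarray(depths, float)
    A = np.column_stack([rows, np.ones_like(rows)])
    (a, b), *_ = np.linalg.lstsq(A, inv_z, rcond=None)
    residual = np.sqrt(np.mean((A @ np.array([a, b]) - inv_z) ** 2))
    return a, b, residual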
20140369594 | METHOD AND APPARATUS FOR IDENTIFYING LOCAL FEATURES - A method of local feature identification, comprising the steps of: | 12-18-2014 |
20140369595 | COMPUTER VISION DATABASE PLATFORM FOR A THREE-DIMENSIONAL MAPPING SYSTEM - A system is provided including a database that ingests data from disparate image sources, with a variety of image metadata types and qualities, and manages images geospatially through the creation and continued refinement of camera solutions for each data object included. These camera solutions are calculated and refined by the database as additional data enters the system that could affect the solutions, through a combination of the application of image metadata towards image processing methods and the use of optical-only computer vision techniques. The database continually generates data quality metrics and relevant imagery and geometry analytics, which drive future collection tasking, system analytics, and human quality control requirements. | 12-18-2014 |
20140376803 | METHOD OF STORING A CONTENT OF A THREE-DIMENSIONAL IMAGE - A method of storing a content of a three-dimensional image includes a processor initializing a register and a maximum number; the processor utilizing a stereo comparison algorithm to generate a depth information map corresponding to each frame of a three-dimensional image signal; the processor obtaining a depth value corresponding to each pixel of each pixel row of the depth information map from the depth information map; the processor generating at least one depth vector corresponding to the pixel row according to a depth value corresponding to each pixel of the pixel row; a counter counting a number of the at least one depth vector; the processor storing the number of the at least one depth vector in the register; the processor comparing the number of the at least one depth vector with the maximum number; and the processor executing a corresponding operation on the register according to a comparison result. | 12-25-2014 |
20150010230 | Image matching method and stereo matching system - An image matching method is utilized for performing a stereo matching from a first image block to a second image block in a stereo matching system. The image matching method includes performing a matching computation from the first image block to the second image block according to a first matching algorithm to generate a first matching result; performing the matching computation between the first image block and the second image block according to a second matching algorithm to generate a second matching result and a third matching result; obtaining a matching error and a matching similarity of the first image block according to the second matching result and the third matching result; and determining a stereo matching result of the first image block according to the matching error and the matching similarity. | 01-08-2015 |
20150016712 | METHODS FOR OBJECT RECOGNITION AND RELATED ARRANGEMENTS - Methods and arrangements involving portable user devices such as smartphones and wearable electronic devices are disclosed, as well as other devices and sensors distributed within an ambient environment. Some arrangements enable a user to perform an object recognition process in a computationally- and time-efficient manner. Other arrangements enable users and other entities to, either individually or cooperatively, register or enroll physical objects into one or more object registries on which an object recognition process can be performed. Still other arrangements enable users and other entities to, either individually or cooperatively, associate registered or enrolled objects with one or more items of metadata. A great variety of other features and arrangements are also detailed. | 01-15-2015 |
20150016713 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - There is provided an image processing device including a depth generation unit configured to generate, based on an image of a current frame and an image of a preceding frame of the current frame, a depth image indicating a position of a subject in a depth direction in the image of the preceding frame as a depth image of the current frame. | 01-15-2015 |
20150016714 | TAGGING VIRTUALIZED CONTENT - Techniques for tagging virtualized content are disclosed. In some embodiments, a modeled three-dimensional scene of objects representing abstracted source content is generated and analyzed to determine a contextual characteristic of the scene that is based on a plurality of objects comprising the scene. The modeled scene is tagged with a tag specifying the determined contextual characteristic. | 01-15-2015 |
20150023585 | Matching search method and system - A matching search method is utilized for performing a matching search from a first image block to a second image block in a stereo matching system. The matching search method includes obtaining a global offset of the first image block corresponding to the second image block according to an offset relationship between a first image data of the first image block and a second image data of the second image block; and performing the matching search from the first image block to the second image block according to the global offset and a searching range. | 01-22-2015 |
20150023586 | DEPTH MAP GENERATION METHOD, RELATED SYSTEM AND COMPUTER PROGRAM PRODUCT - A depth map is generated from at least a first and a second image. A plurality of reference pixels in the first image are selected and associated with respective pixels in the second image. A disparity between each reference pixel and the respective pixel in said second image is determined, and a depth value is determined as a function of the respective disparity. The plurality of reference pixels is selected based on detected contours in the first image. | 01-22-2015 |
20150023587 | METHOD FOR GENERATING A DEPTH MAP, RELATED SYSTEM AND COMPUTER PROGRAM PRODUCT - A depth map is generated from at least a first and a second image. Generally, a plurality of reference pixels are selected in the first image and associated with respective pixels in the second image. Next, the disparity between each reference pixel and the respective pixel in said second image is determined, and for each reference pixel a depth value is determined as a function of the respective disparity. In particular, each reference pixel is associated with a respective pixel in the second image via a matching and a filtering operation. The matching operation selects for each reference pixel a plurality of candidate pixels in the second image and associates with each candidate pixel a respective cost function value and a respective disparity value. | 01-22-2015 |
20150023588 | DEPTH MAP GENERATION METHOD, RELATED SYSTEM AND COMPUTER PROGRAM PRODUCT - A depth map is generated from at least a first and a second image. A plurality of reference pixels are selected in the first image. A cost function is used to associate each reference pixel with a respective pixel in the second image. A masking operation is used to identify a subset of pixels in a block of pixels surrounding a reference pixel and the cost function is based on the identified subset of pixels. A disparity between each reference pixel and the respective pixel in said second image is determined, and a depth value is determined for each reference pixel as a function of the respective disparity. A depth map is generated based on the determined depth values. | 01-22-2015 |
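The depth-map-generation entries above all turn a per-pixel disparity into a depth value; for a rectified stereo pair the usual function is Z = f·B/d, sketched here with illustrative calibration values:

import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map to metric depth: Z = f * B / d.

    focal_px and baseline_m are illustrative calibration values; pixels with
    zero (or negative) disparity are returned as 0, i.e. "unknown depth".
    """
    disparity = np.asarray(disparity, float)
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# example: f = 700 px, B = 0.1 m, a 4-pixel disparity maps to 17.5 m
print(disparity_to_depth([[4.0, 0.0]], 700.0, 0.1))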
20150023589 | IMAGE RECORDING DEVICE, THREE-DIMENSIONAL IMAGE REPRODUCING DEVICE, IMAGE RECORDING METHOD, AND THREE-DIMENSIONAL IMAGE REPRODUCING METHOD - The image recording device includes: an image information obtaining unit obtaining a first image of the object viewed from a first viewpoint, a second image of the object viewed from a second viewpoint, and viewpoint positions each for one of the viewpoints; a depth information generating unit generating depth information items each indicating a depth of the object included in the first and second images; an image generating unit generating a third image and a viewpoint position of the third image, using the depth information items, the first image, and the second image, the third image being of the object viewed from a third viewpoint different from the first and second viewpoints; and a recording unit recording on the image file the first, second and third images in association with the viewpoint positions each for one of the first, second, and third images. | 01-22-2015 |
20150030231 | Method for Data Segmentation using Laplacian Graphs - A method segments n-dimensional data by first determining prior information from the data. A fidelity term is determined from the prior information, and the data are represented as a graph. A graph Laplacian is determined from the graph, and a Laplacian spectrum constraint is determined from the graph Laplacian. Then, an objective function is minimized according to the fidelity term and the Laplacian spectrum constraint to identify a segment of target points in the data. | 01-29-2015 |
20150030232 | IMAGE PROCESSOR CONFIGURED FOR EFFICIENT ESTIMATION AND ELIMINATION OF BACKGROUND INFORMATION IN IMAGES - An image processing system comprises an image processor implemented using at least one processing device and adapted for coupling to an image source, such as a depth imager. The image processor is configured to compute a convergence matrix and a noise threshold matrix, to estimate background information of an image utilizing the convergence matrix, and to eliminate at least a portion of the background information from the image utilizing the noise threshold matrix. The background estimation and elimination may involve the generation of static and dynamic background masks that include elements indicating which pixels of the image are part of respective static and dynamic background information. The computing, estimating and eliminating operations may be performed over a sequence of depth images, such as frames of a 3D video signal, with the convergence and noise threshold matrices being recomputed for each of at least a subset of the depth images. | 01-29-2015 |
20150030233 | System and Method for Determining a Depth Map Sequence for a Two-Dimensional Video Sequence - A system and method of determining a depth map sequence for a subject two-dimensional video sequence by: determining a plurality of monocular depth cues for each frame of the subject two-dimensional video sequence; and determining a depth map for each frame of the subject two-dimensional video sequence based on the application of the plurality of monocular depth cues determined for the frame to a depth map model. The depth map model is determined by: determining a plurality of monocular depth cues for one or more training two-dimensional video sequences; and determining a depth map model based on the plurality of monocular depth cues of the one or more training two-dimensional video sequences and corresponding known depth maps for each of the one or more training two-dimensional video sequences. | 01-29-2015 |
20150030234 | ADAPTIVE MULTI-DIMENSIONAL DATA DECOMPOSITION - A method of decomposing an image or video into a plurality of components. The method comprises: obtaining ( | 01-29-2015 |
20150030235 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM - Provided is an image processing device including a disparity detector configured to receive a plurality of 3D images and detect disparity of each of the 3D images, a disparity analyzer configured to generate statistical information about disparity of each 3D image using the disparity of each 3D image detected by the disparity detector, and a disparity controller configured to convert the disparity using the statistical information about disparity of each 3D image generated by the disparity analyzer in such a manner that the 3D images are not overlapped so that a range of the disparity is within a predetermined range. | 01-29-2015 |
20150030236 | INFERRING SPATIAL OBJECT DESCRIPTIONS FROM SPATIAL GESTURES - Three-dimensional (3-D) spatial image data may be received that is associated with at least one arm motion of an actor based on free-form movements of at least one hand of the actor, based on natural gesture motions of the at least one hand. A plurality of sequential 3-D spatial representations that each include 3-D spatial map data corresponding to a 3-D posture and position of the hand at sequential instances of time during the free-form movements may be determined, based on the received 3-D spatial image data. An integrated 3-D model may be generated, via a spatial object processor, based on incrementally integrating the 3-D spatial map data included in the determined sequential 3-D spatial representations and comparing a threshold time value with model time values indicating numbers of instances of time spent by the hand occupying a plurality of 3-D spatial regions during the free-form movements. | 01-29-2015 |
20150036916 | STEREO-MOTION METHOD OF THREE-DIMENSIONAL (3-D) STRUCTURE INFORMATION EXTRACTION FROM A VIDEO FOR FUSION WITH 3-D POINT CLOUD DATA - According to an embodiment, a method for generating a 3-D stereo structure comprises registering and rectifying a first image frame and a second image frame by local correction matching, extracting a first scan line from the first image frame, extracting a second scan line from the second image frame corresponding to the first scan line, calculating a pixel distance between the first scan line and the second scan line for each pixel for a plurality of pixel shifts, calculating a smoothed pixel distance for each pixel for the pixel shifts by filtering the pixel distance for each pixel over the pixel shifts, and determining a scaled height for each pixel of the first scan line, the scaled height comprising a pixel shift from among the pixel shifts corresponding to a minimal distance of the smoothed pixel distance for the pixel. | 02-05-2015 |
20150036917 | STEREO IMAGE PROCESSING DEVICE AND STEREO IMAGE PROCESSING METHOD - Provided is a stereo image processing device, with which it is possible to compute disparity with high precision even for an object of a small image region size in a baseline length direction. With this device, an image matching unit ( | 02-05-2015 |
20150036918 | IMAGE PROCESSING METHOD AND SYSTEM - A method of comparing two object poses, wherein each object pose is expressed in terms of position, orientation and scale with respect to a common coordinate system, the method comprising: | 02-05-2015 |
20150043806 | AUTOMATIC GEOMETRY AND LIGHTING INFERENCE FOR REALISTIC IMAGE EDITING - Image editing techniques are disclosed that support a number of physically-based image editing tasks, including object insertion and relighting. The techniques can be implemented, for example in an image editing application that is executable on a computing system. In one such embodiment, the editing application is configured to compute a scene from a single image, by automatically estimating dense depth and diffuse reflectance, which respectively form the geometry and surface materials of the scene. Sources of illumination are then inferred, conditioned on the estimated scene geometry and surface materials and without any user input, to form a complete 3D physical scene model corresponding to the image. The scene model may include estimates of the geometry, illumination, and material properties represented in the scene, and various camera parameters. Using this scene model, objects can be readily inserted and composited into the input image with realistic lighting, shadowing, and perspective. | 02-12-2015 |
20150043807 | DEPTH IMAGE COMPRESSION AND DECOMPRESSION UTILIZING DEPTH AND AMPLITUDE DATA - In one embodiment, an image processing system comprises an image processor configured to obtain depth and amplitude data associated with a depth image, to identify a region of interest based on the depth and amplitude data, to separately compress the depth and amplitude data based on the identified region of interest to form respective compressed depth and amplitude portions, and to combine the separately compressed portions to provide a compressed depth image. The image processor may additionally or alternatively be configured to obtain a compressed depth image, to divide the compressed depth image into compressed depth and amplitude portions, and to separately decompress the compressed depth and amplitude portions to provide respective depth and amplitude data associated with a depth image. Other embodiments of the invention can be adapted for compressing or decompressing only depth data associated with a given depth image or sequence of depth images. | 02-12-2015 |
20150043808 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGING APPARATUS - An image processing apparatus comprising: an image acquisition unit configured to acquire an image; a depth map acquisition unit configured to acquire a depth map corresponding to the image; a refinement unit configured to detect a saliency region from the image and to refine the depth map on the basis of the saliency region, the saliency region being a region on which a person tends to focus; and an image processing unit configured to apply image processing to the image using the depth map refined by the refinement unit. | 02-12-2015 |
20150049937 | METHOD AND APPARATUS FOR PROCESSING IMAGES - Provided is an image processing method including: receiving a depth map and a color image with respect to a predetermined scene; projecting a plurality of points included in the depth map, onto the color image; segmenting the color image, onto which the depth map is projected, by using at least two image segmentation methods to generate respective segmentation results, and classifying the plurality of points included in the color image into at least one set according to the respective segmentation results; assigning a depth value to each of the plurality of points of the color image included in the set; and outputting the color image, to which the depth values are assigned. | 02-19-2015 |
20150055853 | METHOD AND SYSTEM FOR PROVIDING THREE-DIMENSIONAL AND RANGE INTER-PLANAR ESTIMATION - A system, apparatus and method of performing 3-D object profile inter-planar estimation and/or range inter-planar estimation of objects within a scene, including: providing a predefined finite set of distinct types of features, resulting in feature types, each feature type being distinguishable according to a unique bi-dimensional formation; providing a coded light pattern having multiple appearances of the feature types; projecting the coded light pattern, having axially varying intensity, on objects within a scene, the scene having at least two planes, resulting in a first plane and a second plane; capturing a 2-D image of the objects having the projected coded light pattern projected thereupon, resulting in a captured 2-D image, the captured 2-D image including reflected feature types; determining intensity values of the 2-D captured image; and performing 3-D object profile inter-planar estimation and/or range inter-planar estimation of objects within the scene based on determined intensity values. | 02-26-2015 |
20150063678 | SYSTEMS AND METHODS FOR GENERATING A 3-D MODEL OF A USER USING A REAR-FACING CAMERA - A computer-implemented method for generating a three-dimensional (3-D) model of a user. A plurality of images of a user are captured via a rear-facing camera on a mobile device. A 3-D model of the user is generated using the plurality of images of the user captured using the rear-facing camera and 3-D data derived from processing a previous plurality of images of the user prior to capturing the plurality of images. A point of interest in at least one of the plurality of images captured via the rear-facing camera may be identified and the identified point of interest in at least one of the plurality of images may be correlated with one or more vertices from the 3-D data. | 03-05-2015 |
20150063679 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR A STEREOSCOPIC IMAGE LASSO - A system, method, and computer program product for providing a lasso selection tool for a stereoscopic image is disclosed. The method includes the steps of obtaining a lasso region of a stereoscopic image pair based on a path defined by a user using a lasso selection tool. An object in a first image of the stereoscopic image pair is identified, where the object is at least partially included within the lasso region and the object is identified in a second image of the stereoscopic image pair. | 03-05-2015 |
20150063680 | Disparity Calculating Method and Stereo Matching Device thereof - A disparity calculating method includes generating an energy matrix according to a first image-block and a second image-block, wherein the energy matrix includes a plurality of energies of a plurality of pixels corresponding to a plurality of disparity candidates; setting the energy corresponding to a starting pixel of the plurality of pixels and a specified disparity candidate of the plurality of disparity candidates as a first predetermined value and setting the energies corresponding to the starting pixel and other disparity candidates of the plurality of disparity candidates as a second predetermined value, wherein the second predetermined value is greater than the first predetermined value; generating a path matrix according to the energy matrix; and determining a plurality of disparities of the plurality of pixels sequentially from an ending pixel of the plurality of pixels, wherein the disparity of the ending pixel is set as a third predetermined value. | 03-05-2015 |
20150063681 | ESTIMATING DEPTH FROM A SINGLE IMAGE - During a training phase, a machine accesses reference images with corresponding depth information. The machine calculates visual descriptors and corresponding depth descriptors from this information. The machine then generates a mapping that correlates these visual descriptors with their corresponding depth descriptors. After the training phase, the machine may perform depth estimation based on a single query image devoid of depth information. The machine may calculate one or more visual descriptors from the single query image and obtain a corresponding depth descriptor for each visual descriptor from the generated mapping. Based on obtained depth descriptors, the machine creates depth information that corresponds to the submitted single query image. | 03-05-2015 |
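A nearest-neighbour stand-in for the visual-descriptor-to-depth-descriptor mapping described in the entry above; the class name and the Euclidean matching rule are assumptions for illustration, not the patent's learned mapping:

import numpy as np

class DescriptorDepthMapping:
    """Toy visual-descriptor -> depth-descriptor lookup built at training time.

    Training stores pairs of visual and depth descriptors; at query time the
    nearest stored visual descriptor (Euclidean distance) supplies the depth
    descriptor.
    """

    def __init__(self):
        self.visual = []   # list of 1-D feature vectors
        self.depth = []    # matching depth descriptors

    def add(self, visual_desc, depth_desc):
        self.visual.append(np.asarray(visual_desc, float))
        self.depth.append(np.asarray(depth_desc, float))

    def query(self, visual_desc):
        v = np.asarray(visual_desc, float)
        dists = [np.linalg.norm(v - u) for u in self.visual]
        return self.depth[int(np.argmin(dists))]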
20150063682 | VIDEO DISPARITY ESTIMATE SPACE-TIME REFINEMENT METHOD AND CODEC - A method for disparity estimation of stereo video data receives a sequence of frames of stereo video data. Image-based disparity estimation is initially conducted on a frame-by-frame basis to produce initial disparity estimates. A plurality of initial disparity estimates is grouped into a space-time volume. Disparity error is reduced in the space-time volume to refine the initial disparity estimates. | 03-05-2015 |
20150063683 | BUILDING DATUM EXTRACTION FROM LASER SCANNING DATA - A method, apparatus, system, and computer program product provide the ability to extract level information and reference grid information from point cloud data. Point cloud data is obtained and organized into a three-dimensional structure of voxels. Potential boundary points are filtered from the boundary cells. Level information is extracted from a Z-axis histogram of the voxels positioned along the Z-axis of the three-dimensional voxel structure and further refined. Reference grid information is extracted from an X-axis histogram of the voxels positioned along the X-axis of the three-dimensional voxel structure and a Y-axis histogram of the voxels positioned along the Y-axis of the three-dimensional voxel structure and further refined. | 03-05-2015 |
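The Z-axis histogram step in the entry above can be sketched directly: horizontal slabs such as floors and ceilings pile many points into a narrow band of Z values. The bin size and the min_fraction vote threshold below are illustrative assumptions:

import numpy as np

def extract_levels(points_z, bin_size=0.1, min_fraction=0.05):
    """Find candidate floor/ceiling levels from the Z coordinates of a scan.

    Points are histogrammed along Z in bins of roughly `bin_size` metres; bins
    holding at least `min_fraction` of all points are reported as level
    heights, since horizontal slabs concentrate many points at one Z.
    """
    z = np.asarray(points_z, float)
    nbins = max(int(np.ceil((z.max() - z.min()) / bin_size)), 1)
    counts, edges = np.histogram(z, bins=nbins)
    levels = []
    for count, lo, hi in zip(counts, edges[:-1], edges[1:]):
        if count >= min_fraction * len(z):
            levels.append(0.5 * (lo + hi))   # bin centre as the level height
    return levels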
20150063684 | METHOD, SYSTEM AND APPARATUS FOR DETERMINING A PROPERTY OF AN IMAGE - A method of determining a property of an image ( | 03-05-2015 |
20150071524 | 3D Feature Descriptors with Camera Pose Information - A method includes determining a first two-dimensional (2D) feature descriptor from a first image captured by an imaging camera in a first pose at a time of capture of the first image, the first pose including a first observation direction of the imaging camera. The method further includes storing, at an electronic device, a 3D feature descriptor including the first 2D feature descriptor and a representation of the first pose of the imaging camera. The method additionally includes determining a second 2D feature descriptor from a second image captured by the imaging camera in a second pose at a time of capture of the second image, the second pose including a second observation direction of the imaging camera. The method also includes storing the 3D feature descriptor with the second 2D feature descriptor and a representation of the second pose of the imaging camera. | 03-12-2015 |
20150071525 | PROCESSING 3D IMAGE SEQUENCES - Various implementations provide techniques to prevent excessive parallax, depth, or disparity from being passed through to a viewer. In one particular implementation, it is determined that a depth indicator for an object in a stereoscopic image pair of a video sequence is outside of a target range. One or more images of the stereoscopic image pair is modified so that the depth indicator for the object is within the target range. In other implementations, a depth transition between the object and another portion of the video sequence is smoothed. In further implementations, the stereoscopic image pair is replaced with a 2D image pair that includes the object. In yet further implementations, a resulting video sequence includes (i) one or more stereoscopic image pairs having non-zero disparity and for which the depth indicator is within the target range for the entire image pair, and (ii) one or more 2D image pairs. | 03-12-2015 |
20150071526 | SAMPLING-BASED MULTI-LATERAL FILTER METHOD FOR DEPTH MAP ENHANCEMENT AND CODEC - A preferred method receives a color image and a corresponding raw depth map from a sensor or system. Unreliable regions are determined in the raw depth map by calculating pixel reliabilities for pixels throughout the depth map. Information is collected from the color image, for corresponding pixels in the unreliable regions of the raw depth map, from neighboring pixels outside the unreliable regions. The depth of pixels in the unreliable regions is updated with information collected from the reliable regions. Robust multi-later filtering is conducted on the adjusted depth map to produce an enhanced depth map. | 03-12-2015 |
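A simplified stand-in for the multi-lateral filtering described in the entry above: a joint bilateral filter whose weights come from a guidance intensity image, with hole pixels (depth 0) skipped as sources; the radius and sigma parameters are illustrative assumptions:

import numpy as np

def joint_bilateral_depth_filter(depth, gray, radius=2,
                                 sigma_space=2.0, sigma_color=10.0):
    """Smooth a depth map with weights taken from a guidance intensity image.

    Each output depth is a weighted mean of its neighbours, with weights
    combining spatial distance and guidance-image similarity; pixels with
    depth 0 are treated as holes and ignored as sources.
    """
    h, w = depth.shape
    out = depth.astype(float).copy()
    for y in range(h):
        for x in range(w):
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w and depth[yy, xx] > 0:
                        ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_space ** 2))
                        dc = float(gray[yy, xx]) - float(gray[y, x])
                        wc = np.exp(-(dc * dc) / (2 * sigma_color ** 2))
                        acc += ws * wc * depth[yy, xx]
                        wsum += ws * wc
            if wsum > 0:
                out[y, x] = acc / wsum
    return out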
20150071527 | PATIENT MONITOR AND METHOD - A patient monitoring system can include stereoscopic cameras connected to a computer which includes a 3D position determination module operable to process stereoscopic images of a patient to identify 3D positions of a plurality of points on the surface of an imaged patient. A target model store can store a target model including data identifying 3D positions of a set of vertices of a triangulated 3D wire mesh model and connectivity indicative of connections between vertices. A matching module can identify the triangles in a target model surface stored in the target model store closest to points identified by the 3D position determination module and calculate a rigid transformation which minimizes point to plane distances between the identified points and the planes containing the triangles of the target model surface identified as being closest to those points. | 03-12-2015 |
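The rigid transformation minimizing point-to-plane distances, as used in the entry above, is commonly obtained from a small-angle linearization; a sketch assuming matched point pairs and unit normals at the target points (one iteration of the standard linearized solve, not the patent's full pipeline):

import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane alignment step (small-angle assumption).

    src, dst: Nx3 matched points; normals: Nx3 unit normals at dst.
    Solves for (rx, ry, rz, tx, ty, tz) minimizing the sum of squared
    point-to-plane distances and returns the resulting 4x4 rigid transform.
    """
    src, dst, normals = (np.asarray(a, float) for a in (src, dst, normals))
    A = np.hstack([np.cross(src, normals), normals])       # N x 6 design matrix
    b = np.einsum('ij,ij->i', dst - src, normals)           # target residuals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    rx, ry, rz, tx, ty, tz = x
    R = np.array([[1, -rz,  ry],
                  [rz,  1, -rx],
                  [-ry, rx,  1]], float)                     # linearized rotation
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, (tx, ty, tz)
    return T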
20150078651 | DEVICE AND METHOD FOR MEASURING SURFACES - A method for detecting and measuring local deviations from the form in planar, curved, or arched surfaces of a test object is provided, whereby three-dimensional measurements (D) of the surfaces are processed with an evaluation unit, embodied and further developed with regard to a non-destructive examination of test objects with objective and easily interpreted assessment results, such that the evaluation device uses at least one virtual filter element as a concave filter for detecting concave partial areas in planar or convex surfaces and/or as a convex filter for detecting convex partial areas in planar or concave surfaces, that the filter element determines values of the deviations from the form, and that they are displayed via a display device in the form of measurements. | 03-19-2015 |
20150078652 | METHOD AND SYSTEM FOR DETERMINING A RELATION BETWEEN A FIRST SCENE AND A SECOND SCENE - The present invention relates to a system ( | 03-19-2015 |
20150078653 | METHOD AND APPARATUS FOR PERFORMING A FRAGMENTATION ASSESSMENT OF A MATERIAL - A method and apparatus for performing a fragmentation assessment of a material including fragmented material portions is disclosed. The method involves receiving two-dimensional image data representing a region of interest of the material, and processing the 2D image data to identify features of the fragmented material portions. The method also involves receiving a plurality of three dimensional point locations on surfaces of the fragmented material portions within the region of interest, identifying 3D point locations within the plurality of three dimensional point locations that correspond to identified features in the 2D image, and using the identified corresponding 3D point locations to determine dimensional attributes of the fragmented material portions. | 03-19-2015 |
20150086106 | IMAGE-DATA PROCESSING DEVICE AND IMAGE-DATA PROCESSING METHOD - The application discloses an image data processing device for generating output image data which represents an output image including a first region image to be displayed in a first region and a second region image to be displayed in a second region adjacent to the first region. The image data processing device includes an extractor configured to extract a part of first image data representing a first image as first extraction data representing the first region image and a part of second image data representing a second image to be viewed and compared simultaneously with the first image as second extraction data representing the second region image. The extractor processes the first extraction data and the second extraction data to generate the output image data. | 03-26-2015 |
20150086107 | USE OF THREE-DIMENSIONAL TOP-DOWN VIEWS FOR BUSINESS ANALYTICS - A method of analyzing a depth image in a digital system is provided that includes detecting a foreground object in a depth image, wherein the depth image is a top-down perspective of a scene, and performing data extraction and classification on the foreground object using depth information in the depth image. | 03-26-2015 |
20150086108 | IDENTIFICATION USING DEPTH-BASED HEAD-DETECTION DATA - A candidate human head is found in depth video using a head detector. A head region of light intensity video is spatially resolved with a three-dimensional location of the candidate human head in the depth video. Facial recognition is performed on the head region of the light intensity video using a face recognizer. | 03-26-2015 |
20150093015 | Visual-Experience-Optimized Super-Resolution Frame Generator - An image processor generates a Super-Resolution (SR) frame by upscaling. A Human Visual Preference Model (HVPM) helps detect random texture regions, where visual artifacts and errors are tolerated to allow for more image details, and immaculate regions having flat areas, corners, or regular structures, where details may be sacrificed to prevent annoying visual artifacts that seem to stand out more. A regularity or isotropic measurement is generated for each input pixel. More regular and less anisotropic regions are mapped as immaculate regions. Higher weights for blurring, smoothing, or blending from a single frame source are assigned for immaculate regions to reduce the likelihood of generated artifacts. In the random texture regions, multiple frames are used as sources for blending, and sharpening is increased to enhance details, but more artifacts are likely. These artifacts are more easily tolerated by humans in the random texture regions than in the regular-structure immaculate regions. | 04-02-2015 |
20150093016 | Digital watermarking based method for objectively evaluating quality of stereo image - A digital watermarking based method for objectively evaluating quality of stereo image includes: at a transmitting terminal, extracting characteristics of the left-view image and the right-view image of an undistorted stereo image in the DCT domain and embedding digital watermarking obtained by processing quantization coding on the characteristics into the DCT domain; at a receiving terminal, detecting the digital watermarking embedded in the distorted stereo image and processing decoding and inverse quantization to extract the embedded characteristics of the left-view image and the right-view image of the stereo image, obtaining a stereo perception value and a view quality value of the distorted stereo image according to the embedded characteristics, and finally obtaining an objective quality score of the distorted stereo image utilizing a support vector regression model. | 04-02-2015 |
20150093017 | METHOD AND SYSTEM FOR CREATING DEPTH SIGNATURES - A method and system for creating a depth signature from plural images for providing watermark information related to the images. The method comprises analysing a pair of images, each image containing a plurality of elements, identifying a first element in one of the pair of images, identifying plural elements in the other of the pair of images. The method further comprises measuring a disparity parameter between the first element and a set of the plural elements, matching the first element to a second element from the set of plural elements, the matched second element having the smallest measured disparity parameter, and computing a signature based at least in part on the measured disparity between the first and second elements. | 04-02-2015 |
20150093018 | SYSTEMS AND METHODS FOR THREE DIMENSIONAL GEOMETRIC RECONSTRUCTION OF CAPTURED IMAGE DATA - In various embodiments, methods, systems, and computer program products for processing digital images captured by a mobile device are disclosed. Myriad features enable and/or facilitate processing of such digital images using a mobile device that would otherwise be technically impossible or impractical, and furthermore address unique challenges presented by images captured using a camera rather than a traditional flat-bed scanner, paper-feed scanner, or multifunction peripheral. Notably, the presently disclosed systems and techniques enable three-dimensional reconstruction of objects depicted in image captured using a camera of a mobile device. The reconstruction corrects or compensates for perspective distortion caused by camera-based capture. | 04-02-2015 |
20150093019 | METHOD AND APPARATUS FOR GENERATING TEMPORALLY CONSISTENT DEPTH MAPS - A method and an apparatus for generating temporally consistent depth maps for an image are described. A first retrieving unit retrieves a representation of an image from the sequence of images by segmentation regions resulting from a temporally consistent over-segmentation. In addition, a second retrieving unit retrieves a depth map associated to the image. A projecting unit then projects the segmentation regions of the image into the depth map associated to the image. Finally, a modifying unit modifies depth values of the depth map within one or more of the projected segmentation regions. | 04-02-2015 |
20150093020 | METHOD, DEVICE AND SYSTEM FOR RESTORING RESIZED DEPTH FRAME INTO ORIGINAL DEPTH FRAME - A method, a device and a system for restoring a resized depth frame into an original depth frame are disclosed. The method for restoring a resized depth frame into an original depth frame includes the steps of: obtaining a first sub-pixel value from one pixel of the resized depth frame; storing the first sub-pixel value in all sub-pixels of a first pixel of the original depth frame; obtaining a second sub-pixel value of the pixel of the resized depth frame; and storing the second sub-pixel value to all sub-pixels of a second pixel of the original depth frame. | 04-02-2015 |
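A sketch of the restoration described in the entry above, assuming a horizontal packing in which each of the three sub-pixels (channels) of a resized-frame pixel carries the depth of one original pixel; the channel count and layout are assumptions for illustration:

import numpy as np

def unpack_depth_frame(packed):
    """Expand a packed depth frame back to the original resolution.

    packed: HxWx3 array whose three sub-pixels (channels) each hold the depth
    of one original pixel; every channel value is written to all sub-pixels of
    its own restored pixel, giving an H x 3W x 3 frame.
    """
    h, w, c = packed.shape
    restored = np.repeat(packed.reshape(h, w * c, 1), 3, axis=2)
    return restored

# toy usage: one packed pixel carrying three depth samples
p = np.array([[[10, 20, 30]]], dtype=np.uint8)
print(unpack_depth_frame(p).shape)   # (1, 3, 3)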
20150098644 | METHODS AND APPARATUS FOR DETERMINING FIELD OF VIEW DEPENDENT DEPTH MAP CORRECTION VALUES - In a method for determining field of view dependent depth map correction values for correction of a depth map of an image taken with a lens having a field of view, the following is performed: | 04-09-2015 |
20150098645 | Method, apparatus and system for selecting a frame - A method of selecting a frame from a plurality of video frames captured by a camera ( | 04-09-2015 |
20150104096 | 3D OBJECT TRACKING - Embodiments relate to tracking a pose of a 3D object. In embodiments, a 3D computer model, consisting of geometry and joints, matching the 3D real-world object may be used for the tracking process. Processing the 3D model may be done using collision constraints generated from interpenetrating geometry detected in the 3D model, and by angular motion constraints generated by the joints describing the connections between pieces/segments/bones of the model. The depth data in its 3D (point cloud) form, supplied by a depth camera, may be used to create additional constraints on the surface of the 3D model thus limiting its motion. Combined together, all the constraints, using linear equation processing, may be satisfied to determine a plausible pose of the 3D model that matches the real-world pose of the object in front of the 3D camera. | 04-16-2015 |
20150104097 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes an image acquisition unit acquiring a plurality of images, a corresponding point acquisition unit, a first fundamental matrix estimation unit, an epipole coordinate deriving unit, an epipole coordinate determination unit, and a fundamental matrix determination unit. The corresponding point acquisition unit acquires first corresponding points. The first fundamental matrix estimation unit calculates first fundamental matrices based on the first corresponding points. The epipole coordinate deriving unit calculates first epipole coordinates that correspond to the first fundamental matrices. The epipole coordinate determination unit determines one of the first epipole coordinates as a second epipole coordinate. The fundamental matrix determination unit determines the first fundamental matrix corresponding to the second epipole coordinate as a second fundamental matrix. | 04-16-2015 |
20150110385 | PHOTOGRAPH LOCALIZATION IN A THREE-DIMENSIONAL MODEL - A photo localization application is configured to determine the location that an image depicts relative to a 3D representation of a structure. The 3D representation may be a 3D model, color range scan, or gray scale range scan of the structure. The image depicts a particular section of the structure. The photo localization application extracts and stores features from the 3D representation in a database. The photo localization application then extracts features from the image and compares those features against the database to identify matching features. The matching features form a location fingerprint, from which the photo localization application determines the location that the image depicts, relative to the 3D representation. The location allows the user to better understand and communicate information captured by the image. | 04-23-2015 |
20150117756 | Processing of Light Fields by Transforming to Scale and Depth Space - Light field images of a three-dimensional scene are transformed from an (image,view) domain to an (image,scale,depth) domain. Processing then occurs in the (image,scale,depth) domain. | 04-30-2015 |
20150117757 | METHOD FOR PROCESSING AT LEAST ONE DISPARITY MAP, CORRESPONDING ELECTRONIC DEVICE AND COMPUTER PROGRAM PRODUCT - In one embodiment, a method is proposed for processing at least one disparity map associated with at least one left view and one right view of stereovision images. Such method is remarkable in that it comprises determining at least one modified disparity map that comprises, for a given pixel or group of pixels, a modified disparity value determined as a function of disparity values of the at least one disparity map associated with pixels that belong to a neighborhood of the given pixel or group of pixels, said disparity values being weighted as a function of a value obtained from the at least one disparity map and at least one other disparity map. | 04-30-2015 |
20150117758 | SHAPE FROM CAMERA MOTION FOR UNKNOWN MATERIAL REFLECTANCE - A computer vision method that includes deriving a relationship of spatial and temporal image derivatives of an object to bidirectional reflectance distribution function (BRDF) derivatives under camera motion, and deriving with a processor a quasilinear partial differential equation for solving surface depth for orthographic projections using the relationship of spatial and temporal image derivatives without requiring knowledge of the BRDF. The method may further recover surface depth for an object with unknown BRDF under perspective projection. | 04-30-2015 |
20150125070 | METHOD AND OPTICAL SYSTEM FOR DETERMINING A DEPTH MAP OF AN IMAGE - A method and optical system for determining a depth map of an image, the method including: determining a first focus measure of a first color in at least one region of the image; determining a second focus measure of a second color in the at least one region of the image; determining a ratio of the first and the second focus measure; and determining the depth map based on a ratio of the first and second focus measure. | 05-07-2015 |
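A compact sketch of the focus-ratio idea in 20150125070: compute a sharpness (focus) measure per colour channel in a region and use their ratio as the depth cue. The Laplacian-energy focus measure, the choice of red/blue channels, and the absence of a calibration curve are all assumptions.

```python
# Hedged sketch: focus ratio between two colour channels of an image region.
import numpy as np
from scipy.ndimage import laplace

def focus_measure(channel):
    # Energy of the Laplacian as a conventional sharpness/focus measure.
    return float(np.mean(laplace(channel.astype(np.float64)) ** 2)) + 1e-12

def focus_ratio(image, region):
    y0, y1, x0, x1 = region
    patch = image[y0:y1, x0:x1]
    # Ratio of the focus measures of two colours; a calibration curve would
    # map this ratio to a depth value for the depth map.
    return focus_measure(patch[:, :, 0]) / focus_measure(patch[:, :, 2])

img = np.random.rand(64, 64, 3)
print(focus_ratio(img, (0, 32, 0, 32)))
```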
20150125071 | PRE-SEGMENT POINT CLOUD DATA TO RUN REAL-TIME SHAPE EXTRACTION FASTER - A method, apparatus, system, and computer readable storage medium provide the ability to pre-segment point cloud data. Point cloud data is obtained and segmented. The segment information is stored. An indexing structure is created and instantiated with the point cloud data and the segment information. Based on the segment information, a determination is made regarding points needed for shape extraction. Needed points are fetched from the indexing structure and used to extract shapes. The extracted shapes are used to cull points from the point cloud data. | 05-07-2015 |
20150131897 | Method and Apparatus for Building Surface Representations of 3D Objects from Stereo Images - A method and apparatus are disclosed for extracting surface representations from images and video data, for segmenting the image plane according to surface connectivity, and for identifying areas of images taken by a moving camera according to the object surfaces from which those areas are taken. The invention discloses a method and apparatus comprising a plurality of processing modules for extracting, from images in a video sequence, the occluding contours that delineate images into regions in accordance with the spatial connectivity of the corresponding visible surfaces, and diffeomorphism relations between areas of images taken from different perspective centers for identifying image areas of different frames as belonging to the surface of the same object, and for specifying the fold contours of the surfaces that own the contours, thus producing surface representations from video images of persistent objects taken by a moving camera. | 05-14-2015 |
20150139532 | CAMERA TRACKING APPARATUS AND METHOD USING RECONSTRUCTION SEGMENTS AND VOLUMETRIC SURFACE - Provided are an apparatus and method for tracking a camera that reconstructs a real environment in three dimensions by using reconstruction segments and a volumetric surface. The camera tracking apparatus using reconstruction segments and a volumetric surface includes a reconstruction segment division unit configured to divide three-dimensional space reconstruction segments extracted from an image acquired by a camera, a transformation matrix generation unit configured to generate a transformation matrix for at least one reconstruction segment among the reconstruction segments obtained by the reconstruction segment division unit, and a reconstruction segment connection unit configured to rotate or move the at least one reconstruction segment according to the transformation matrix generated by the transformation matrix generation unit and connect the rotated and moved reconstruction segment with another reconstruction segment. | 05-21-2015 |
20150139533 | METHOD, ELECTRONIC DEVICE AND MEDIUM FOR ADJUSTING DEPTH VALUES - A depth processing method, an electronic device and a medium are provided. The depth processing method includes: obtaining a color image and a depth map corresponding to the color image; extracting a plurality of regions from at least one of the depth map and the color image; obtaining region information of the regions, and classifying the regions into at least one of a region-of-interest and a non-region-of-interest according to the region information, wherein the region information comprises area information and edge information; and adjusting a plurality of depth values in the depth map according to the region information. | 05-21-2015 |
20150139534 | IMAGE PROCESSING APPARATUS, IMAGING APPARATUS AND DISTANCE CORRECTION METHOD - An image processing apparatus for correcting, on the basis of an image and a depth map corresponding to the image, the depth map, includes: a detection unit that detects an object included in the image; a determination unit that determines whether a size of the object detected by the detection unit is a threshold or less; and a correction unit that corrects a distance in a target area which corresponds to an area of the object in the depth map, when the size of the detected object is the threshold or less. | 05-21-2015 |
20150139535 | SILHOUETTE-BASED OBJECT AND TEXTURE ALIGNMENT, SYSTEMS AND METHODS - An object-image alignment data generating method for use in an object recognition system is presented. The method obtains a 3D model and a set of 2D images of the object. Each 2D image from the set is captured based on a particular camera point of view. The method then uses the 3D model of the object to generate multiple silhouettes of the object according to different camera point of views. Each silhouette is then matched and aligned with a 2D image based on the corresponding camera point of view. The method also derives at least one descriptor from the 2D images and compiles feature points that correspond to the descriptors. Each feature point includes a 2D location and a 3D location. The method then generates an object-image alignment packet by packaging the 2D images, the descriptors, and the feature points. | 05-21-2015 |
20150146970 | SPHERICAL LIGHTING DEVICE WITH BACKLIGHTING CORONAL RING - A method for capturing three-dimensional photographic lighting of a spherical lighting device is described. Calculation of boundaries of the spherical lighting device based on lighting properties of at least one light source in a set location of the spherical lighting device is performed. A mapping of a multitude of points of the spherical lighting device to three-dimensional vectors of at least one camera device using a logical grid is performed. A measurement of brightness of the logical grid of the spherical lighting device is performed. The method further comprises determining the brightest grid point of the logical grid of the spherical lighting device, wherein the brightest grid point of the logical grid is measured within a region of brightness of the spherical lighting device. The method further comprises calculating the region of brightness of the spherical lighting device based on the determined brightest grid point of the logical grid. | 05-28-2015 |
20150146971 | MESH RECONSTRUCTION FROM HETEROGENEOUS SOURCES OF DATA - A system, apparatus, method, computer program product, and computer readable storage medium provide the ability to reconstruct a surface mesh. Photo image data is obtained from a set of overlapping photographic images. Scan data is obtained from a scanner. A point cloud is generated from a combination of the photo image data and the scan data. An initial rough mesh is estimated from the point cloud data. The initial rough mesh is iteratively refined into a refined mesh. | 05-28-2015 |
20150294470 | METHOD AND SYSTEM FOR DISPARITY VISUALIZATION - A method and system to generate and visualize the distribution of disparities in a stereo sequence and the way they change through time. The data representing the disparities are generated using the disparity and confidence maps of the stereo sequence. For each frame, a histogram of disparity-confidence pairs is generated. These data are later visualized on the screen, presenting the disparity for the full sequence in one graph. | 10-15-2015 |
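A small sketch of the per-frame histogram of disparity-confidence pairs described in 20150294470; the bin counts, value ranges, and the stacking of per-frame histograms into one volume for display are assumptions.

```python
# Hedged sketch: one 2D (disparity, confidence) histogram per frame, stacked
# over time so the whole sequence can be rendered in a single graph.
import numpy as np

def disparity_confidence_histogram(disparity, confidence, d_bins=64, c_bins=8):
    hist, _, _ = np.histogram2d(
        disparity.ravel(), confidence.ravel(),
        bins=[d_bins, c_bins], range=[[0, 255], [0.0, 1.0]])
    return hist  # shape (d_bins, c_bins)

def sequence_histograms(disparity_maps, confidence_maps):
    # (frames, d_bins, c_bins): time vs disparity vs confidence in one array.
    return np.stack([disparity_confidence_histogram(d, c)
                     for d, c in zip(disparity_maps, confidence_maps)])

d_seq = [np.random.randint(0, 256, (120, 160)) for _ in range(5)]
c_seq = [np.random.rand(120, 160) for _ in range(5)]
print(sequence_histograms(d_seq, c_seq).shape)  # (5, 64, 8)
```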
20150294471 | SYSTEM FOR RECOGNIZING VEHICLE IDENTIFICATION NUMBER - A system for recognizing a vehicle identification number includes a three dimensional scanner configured to scan, in a direction in which the vehicle identification number is engraved, a vehicle identification number engraved in a vehicle body to obtain an image. The system further includes an image processor configured to convert the image obtained by the three dimensional scanner into a gray image, divide the gray image according to a gray scale to extract a symbol in the image corresponding to a symbol engraved in the vehicle body, and compare the symbol with standard symbols to determine the vehicle identification number. | 10-15-2015 |
20150294472 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR DISPARITY ESTIMATION OF PLENOPTIC IMAGES - In an example embodiment, a method, apparatus and computer program product are provided. The method includes facilitating receipt of a plenoptic image associated with a scene, the plenoptic image including plenoptic micro-images and being captured by a focused plenoptic camera. The method includes generating plenoptic vectors for the plenoptic micro-images of the plenoptic image, where an individual plenoptic vector is generated for an individual plenoptic micro-image. The method includes assigning disparities for the plenoptic micro-images of the plenoptic image. A disparity for a plenoptic micro-image is assigned by accessing a plurality of subspaces associated with a set of pre-determined disparities, projecting a plenoptic vector for the plenoptic micro-image in the plurality of subspaces, calculating a plurality of residual errors based on projections of the plenoptic vector in the plurality of subspaces, and determining the disparity for the plenoptic micro-image based on a comparison of the plurality of residual errors. | 10-15-2015 |
20150294473 | Processing of Depth Images - A method, an electronic device, a computer program and a computer program product relate to 3D image reconstruction. A depth image part ( | 10-15-2015 |
20150294475 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND PROGRAM - According to some aspects, an image processing apparatus is provided, comprising circuitry configured to receive an input image, the input image being supplied as a stereoscopic image including a left image for a left eye and a right image for a right eye, calculate depth information for each of a plurality of sub-regions of the input image based at least in part on the right image and the left image, and determine, for each of the plurality of sub-regions of the input image, at least one luminance component based at least in part on the depth information and a function indicating a relationship between depth information and luminance value. | 10-15-2015 |
20150296197 | ENCODING/DECODING METHOD AND ENCODING/DECODING DEVICE USING DEPTH INFORMATION - A method for inducing depth information, according to one embodiment of the present invention, comprises the steps of: receiving image data that includes depth information; generating depth configuration information from a part of the image data; obtaining movement information connected to a spatial location or a temporal location from the image data by using the depth configuration information; and decoding an image included in the image data on the basis of the obtained movement information. | 10-15-2015 |
20150302239 | INFORMATION PROCESSOR AND INFORMATION PROCESSING METHOD - An image storage section | 10-22-2015 |
20150302596 | IMAGE PROCESSING METHOD AND AN IMAGE PROCESSING APPARATUS - A sub-pixel disparity cost volume which contains initial cost values of dissimilarity calculated between the pixel values on a standard image of a plurality of parallax images and the interpolated sub-pixel values on the counterpart image or images other than said standard image is prepared for a plurality of parallax images of objects in a three-dimensional structure composed of horizontal, vertical and disparity axes. Noise signals on the calculated cost values in the sub-pixel disparity cost volume are eliminated by using an edge-preserving filter which allocates bigger weights between two cost values whose pixel coordinates have similar pixel values on the standard image, for preserving edges or boundaries of the objects. A sub-pixel disparity is selected, which gives the minimum cost value in the specific disparity range around a previously-given initial pixel-wise or sub-pixel disparity at each pixel coordinate on the standard image. Thus the distance to the objects is estimated from the computed disparity. Precise disparity can be computed by preparing a sub-pixel disparity cost volume and then computing the sub-pixel resolution disparity. Further, the necessary processing time can be reduced through parallel computation. | 10-22-2015 |
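To make the cost-volume vocabulary of 20150302596 concrete, here is a stripped-down sketch: a sub-pixel disparity cost volume built by linearly interpolating the counterpart image at fractional disparities, followed by winner-take-all selection. The absolute-difference cost, the 0.5-pixel step, and the omission of the edge-preserving filtering stage are assumptions.

```python
# Hedged sketch: sub-pixel cost volume and winner-take-all disparity selection.
import numpy as np

def subpixel_cost_volume(ref, match, max_disp=16, step=0.5):
    h, w = ref.shape
    disps = np.arange(0.0, max_disp + step, step)
    volume = np.full((len(disps), h, w), np.inf)
    xs = np.arange(w)
    for i, d in enumerate(disps):
        src = xs - d
        valid = src >= 0
        lo = np.floor(src).astype(int).clip(0, w - 1)
        hi = np.ceil(src).astype(int).clip(0, w - 1)
        frac = src - lo
        # Linear interpolation gives the sub-pixel sample of the match image.
        shifted = (1 - frac) * match[:, lo] + frac * match[:, hi]
        cost = np.abs(ref - shifted)          # absolute-difference matching cost
        cost[:, ~valid] = np.inf
        volume[i] = cost
    return disps, volume

def winner_take_all(disps, volume):
    return disps[np.argmin(volume, axis=0)]   # minimum-cost disparity per pixel

ref = np.random.rand(32, 48)
match = np.roll(ref, 3, axis=1)
disps, vol = subpixel_cost_volume(ref, match)
print(winner_take_all(disps, vol).mean())
```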
20150310256 | DEPTH IMAGE PROCESSING - Embodiments described herein can be used to detect holes in a subset of pixels of a depth image that has been specified as corresponding to a user, and to fill such detected holes. Additionally, embodiments described herein can be used to produce a low resolution version of a subset of pixels that has been specified as corresponding to a user, so that when an image including a representation of the user is displayed, the image respects the shape of the user, yet is not a mirror image of the user. Further, embodiments described herein can be used to identify pixels, of a subset of pixels specified as corresponding to the user, that likely correspond to a floor supporting the user. This enables the removal of the pixels, identified as likely corresponding to the floor, from the subset of pixels specified as corresponding to the user. | 10-29-2015 |
20150310620 | STRUCTURED STEREO - An apparatus, system, and method are described herein. The apparatus includes an emitter and a plurality of sensors. The sensors are placed asymmetrically in the system with respect to the emitter. Data from the emitter and sensors is used to generate a high accuracy depth map and a dense depth map. A high resolution and dense depth map is calculated using the high accuracy depth map and the dense depth map. | 10-29-2015 |
20150310622 | Depth Image Generation Utilizing Pseudoframes Each Comprising Multiple Phase Images - In one embodiment, an image processor is configured to obtain phase images, and to group the phase images into pseudoframes with each of at least a subset of the pseudoframes comprising multiple ones of the phase images and having as a first phase image thereof one of the phase images that is not a first phase image of an associated depth frame. A velocity field is estimated by comparing corresponding phase images in respective ones of the pseudoframes. Phase images of one or more pseudoframes are modified based at least in part on the estimated velocity field, and one or more depth images are generated based at least in part on the modified phase images. By way of example, different groupings of the phase images into pseudoframes may be used for each obtained phase image, allowing depth images to be generated at much higher rates than would otherwise be possible. | 10-29-2015 |
20150323672 | POINT-CLOUD FUSION - A method and a system for creating a collective point-cloud data from a plurality of local point-cloud data including the steps of: providing scanner data containing a plurality of local point-cloud data and relatively low-precision positioning data of the scanner providing the local point-cloud data, creating a relatively medium-precision collective point-cloud data from the plurality of local point-cloud data using external (medium-precision) positioning data associated with each of said plurality of local point-cloud data, and then creating a relatively high-precision collective point-cloud data from said medium-precision collective point-cloud data. | 11-12-2015 |
20150325046 | Evaluation of Three-Dimensional Scenes Using Two-Dimensional Representations - A system adapted to implement a learning rule in a three-dimensional (3D) environment is described. The system includes: a renderer adapted to generate a two-dimensional (2D) image based at least partly on a 3D scene; a computational element adapted to generate a set of appearance features based at least partly on the 2D image; and an attribute classifier adapted to generate at least one set of learned features based at least partly on the set of appearance features and to generate a set of estimated scene features based at least partly on the set of learned features. A method labels each image from among the set of 2D images with scene information regarding the 3D scene; selects a set of learning modifiers based at least partly on the labeling of at least two images; and updates a set of weights based at least partly on the set of learning modifiers. | 11-12-2015 |
20150326845 | DEPTH VALUE RESTORATION METHOD AND SYSTEM - Disclosed is a method comprising steps of conducting image preprocessing so as to obtain a candidate object region including a foreground image in each of a depth map and a color image; determining whether it is necessary to conduct a region growing process with respect to the candidate object region of the depth map; if so, conducting the region growing process with respect to the candidate object region of the depth map; and conducting, after the region growing process is conducted with respect to the candidate object region of the depth map, a depth value restoration process with respect to a candidate region. | 11-12-2015 |
20150332447 | METHOD AND APPARATUS FOR GENERATING SPANNING TREE, METHOD AND APPARATUS FOR STEREO MATCHING, METHOD AND APPARATUS FOR UP-SAMPLING, AND METHOD AND APPARATUS FOR GENERATING REFERENCE PIXEL - A method and apparatus for generating a spanning tree, a method and apparatus for stereo matching, a method and apparatus for up-sampling, and a method and apparatus for generating a reference pixel are disclosed, in which a spanning tree may be generated by reference pixels, stereo matching or up-sampling may be performed based on the generated spanning tree, and a reference pixel may be generated based on a stereo video. | 11-19-2015 |
20150332460 | INTERACTIVE GEO-POSITIONING OF IMAGERY - An interactive user-friendly incremental calibration technique that provides immediate feedback to the user when aligning a point on a 3D model to a point on a 2D image. A user can drag-and-drop points on a 3D model to points on a 2D image. As the user drags the correspondences, the application updates current estimates of where the camera would need to be to match the correspondences. The 2D and 3D images can be overlaid on each other and are sufficiently transparent for visual alignment. The user can fade between the 2D/3D views providing immediate feedback as to the improvements in alignment. The user can begin with a rough estimate of camera orientation and then progress to more granular parameters such as estimates for focal length, etc., to arrive at the desired alignment. While one parameter is adjustable, other parameters are fixed allowing for user adjustment of one parameter at a time. | 11-19-2015 |
20150332468 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, IMAGE PROCESSING PROGRAM, AND IMAGING DEVICE - An image processing device includes a phase difference detection portion configured to detect phase difference between parallax images by performing correlation value calculation with a plurality of parallax images, and generate phase difference distribution in an image. The phase difference detection portion performs the phase difference detection individually along two or more directions different from each other, and generates the phase difference distribution by utilizing results of the phase difference detection regarding the two or more directions. | 11-19-2015 |
20150332474 | Orthogonal and Collaborative Disparity Decomposition - A novel disparity computation technique is presented which comprises multiple orthogonal disparity maps, generated from approximately orthogonal decomposition feature spaces, collaboratively generating a composite disparity map. Using an approximately orthogonal feature set extracted from such feature spaces produces an approximately orthogonal set of disparity maps that can be composited together to produce a final disparity map. Various methods for dimensioning scenes are presented. One approach extracts the top and bottom vertices of a cuboid, along with the set of lines, whose intersections define such points. It then defines a unique box from these two intersections as well as the associated lines. Orthographic projection is then attempted, to recenter the box perspective. This is followed by the extraction of the three-dimensional information that is associated with the box, and finally, the dimensions of the box are computed. The same concepts can apply to hallways, rooms, and any other object. | 11-19-2015 |
20150339541 | POINT CLOUD MATCHING METHOD - A method comprising: providing a first three-dimensional (3D) point cloud obtained according to a first sensing technique, a second 3D point cloud obtained according to a second sensing technique, a first radius of a sphere covering a real object underlying the first 3D point cloud, and a second radius of a sphere covering a real object underlying the second 3D point cloud; defining scales of the first 3D point cloud and the second 3D point cloud based on said first radius and second radius; searching statistically substantially similar candidate regions between the first 3D point cloud and the second 3D point cloud using an ensemble shape function (ESF); aligning the statistically substantially similar candidate regions between the first 3D point cloud and the second 3D point cloud at least in the vertical direction; and ranking the aligned candidate regions between the 3D point clouds based on their structural similarity. | 11-26-2015 |
20150356716 | SYSTEM FOR BACKGROUND SUBTRACTION WITH 3D CAMERA - A system for background image subtraction includes a computing device coupled with a 3D video camera, a processor of the device programmed to receive a video feed from the camera containing images of one or more subjects that include depth information. The processor, for an image: segments pixels and corresponding depth information into three different regions including foreground (FG), background (BG), and unclear (UC); categorizes UC pixels as FG or BG using a function that considers the color and background history (BGH) information associated with the UC pixels and the color and BGH information associated with pixels near the UC pixels; examines the pixels marked as FG and applies temporal and spatial filters to smooth boundaries of the FG regions; constructs a new image by overlaying the FG regions on top of a new background; displays a video feed of the new image in a display device; and continually maintains the BGH. | 12-10-2015 |
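An editorial sketch of the three-way labelling in 20150356716: depth splits pixels into FG/BG/UC, unclear pixels are resolved against a colour background history, and the foreground is composited over a new background. The thresholds, the colour-distance rule, and the omission of the temporal/spatial smoothing filters are assumptions.

```python
# Hedged sketch: depth-based FG/BG/UC segmentation with colour-history resolution.
import numpy as np

FG, BG, UC = 2, 0, 1

def segment(depth, near=1.0, far=2.5):
    labels = np.full(depth.shape, UC, dtype=np.uint8)
    labels[depth < near] = FG
    labels[depth > far] = BG
    return labels

def resolve_unclear(labels, frame, bg_history, color_thresh=30.0):
    # Unclear pixels far (in colour) from the background history become FG.
    dist = np.linalg.norm(frame.astype(float) - bg_history, axis=2)
    uc = labels == UC
    labels[uc & (dist > color_thresh)] = FG
    labels[uc & (dist <= color_thresh)] = BG
    return labels

def composite(labels, frame, new_background):
    out = new_background.copy()
    out[labels == FG] = frame[labels == FG]   # overlay FG on the new background
    return out

depth = np.random.uniform(0.5, 4.0, (120, 160))
frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
bg_hist = np.full((120, 160, 3), 128.0)
labels = resolve_unclear(segment(depth), frame, bg_hist)
print(composite(labels, frame, np.zeros_like(frame)).shape)
```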
20150356742 | COMPUTER IMPLEMENTED METHODS FOR IDENTIFYING CHANNELS IN A 3D VOLUME AND COMPUTER PROGRAM PRODUCT IMPLEMENTING THE METHODS - The methods comprise: | 12-10-2015 |
20150356748 | METHOD AND APPARATUS FOR SENSING MOVING BALL - Disclosed are an apparatus for sensing a moving golf ball and a method for sensing the moving golf ball using the apparatus. The apparatus includes an image acquisition unit for acquiring consecutive images of a ball, an image processing unit for extracting a feature portion of the ball from the acquired images, and a spin calculation unit for performing any one of forward operation analysis and inverse operation analysis to calculate spin of the ball. | 12-10-2015 |
20150370348 | ACTIVE TRIANGULATION CALIBRATION - According to examples of the presently disclosed subject matter, an active triangulation system includes an active triangulation setup and a calibration module. The active triangulation setup includes a projector and a sensor. The projector is configured to project a structured light pattern that includes a repeating structure of a plurality of unique feature types and a plurality of markers distributed in the projected structured light pattern, where an epipolar distance between any two epipolar lines which are associated with an appearance in the image of any two respective markers is greater than a distance between any two distinguishable epipolar lines. The sensor is configured to capture an image of a reflected portion of the projected structured light. The calibration module is configured to determine an epipolar field for the active triangulation setup according to locations of the markers in the image, and to calibrate the active triangulation setup. | 12-24-2015 |
20150371393 | STRUCTURED LIGHT THREE-DIMENSIONAL (3D) DEPTH MAP BASED ON CONTENT FILTERING - A structured light three-dimensional (3D) depth map based on content filtering is disclosed. In a particular embodiment, a method includes receiving, at a receiver device, image data that corresponds to a structured light image. The method further includes processing the image data to decode depth information based on a pattern of projected coded light. The depth information corresponds to a depth map. The method also includes performing one or more filtering operations on the image data. An output of the one or more filtering operations includes filtered image data. The method further includes performing a comparison of the depth information to the filtered image data and modifying the depth information based on the comparison to generate a modified depth map. | 12-24-2015 |
20150371394 | METHOD FOR GENERATING A DEPTH MAP, RELATED SYSTEM AND COMPUTER PROGRAM PRODUCT - A pattern of symbols is generated and sent to a projector, wherein the pattern includes an array of symbols having a given number of symbol columns and symbol rows, and an image is obtained from a camera. Next the image is decoded in order to generate a decoded pattern of symbols and the depth map is generated as a function of the pattern and the decoded pattern. The image is decoded by placing an array of classification windows on the image and determining the displacement of each classification window in order to optimize a given cost function. Finally, the decoded pattern is generated by determining a respective symbol for each classification window. | 12-24-2015 |
20150371396 | CONSTRUCTING A 3D STRUCTURE - Disclosed is a method and system for constructing a 3D structure. The system of the present disclosure comprises an image capturing unit for capturing images of an object. The system comprises a gyroscope, a magnetometer, and an accelerometer for determining extrinsic camera parameters, wherein the extrinsic camera parameters comprise a rotation and a translation of the images. Further, the system determines an internal calibration matrix once. The system uses the extrinsic camera parameters and the internal calibration matrix for determining a fundamental matrix. The system extracts features of the images for establishing point correspondences between the images. Further, the point correspondences are filtered using the fundamental matrix for generating filtered point correspondences. The filtered point correspondences are triangulated for determining 3D points representing the 3D structure. Further, the 3D structure may be optimized for eliminating reprojection errors associated with the 3D structure. | 12-24-2015 |
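The extrinsics-plus-calibration route to a fundamental matrix mentioned in 20150371396 follows the standard relation F = K2^-T [t]_x R K1^-1. A short sketch, where the camera matrix and motion below are placeholders:

```python
# Hedged sketch: fundamental matrix from known rotation R, translation t and
# internal calibration matrices K1, K2 (here K1 = K2 = K).
import numpy as np

def skew(t):
    # Cross-product matrix [t]_x so that skew(t) @ v == np.cross(t, v).
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]], dtype=float)

def fundamental_from_extrinsics(K1, K2, R, t):
    E = skew(t) @ R                                   # essential matrix
    F = np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)   # F = K2^-T E K1^-1
    return F / np.linalg.norm(F)                      # scale-normalised

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
R = np.eye(3)
t = np.array([0.1, 0.02, 0.01])
print(np.round(fundamental_from_extrinsics(K, K, R, t), 6))
```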
20150371398 | METHOD AND SYSTEM FOR UPDATING BACKGROUND MODEL BASED ON DEPTH - A method and a system for updating a background model based on depth are disclosed. The method includes receiving, in response to the occurrence of a predetermined background updating condition, one or more depth images captured after a time when the predetermined background updating condition occurs; obtaining, based on an original background model, foreground images in the one or more captured depth images, which are newly added compared with a depth image at the time when the predetermined background updating condition occurs; for each of foreground pixels in each of the newly added foreground images, comparing a current depth value with a previous depth value before the time when the predetermined background updating condition occurs; and updating, when the current depth value is greater than the previous depth value, the original background model as the updated background model by using the foreground pixel in the newly added foreground image. | 12-24-2015 |
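A minimal sketch of the update rule in 20150371398: foreground pixels whose current depth is greater (farther) than the stored background depth are folded back into the background model, since the object that occluded them has presumably moved away. Array names and the boolean-mask formulation are assumptions.

```python
# Hedged sketch: depth-based background model update.
import numpy as np

def update_background(bg_depth, current_depth, foreground_mask):
    # Pixels that were foreground but are now farther than the stored
    # background depth are treated as newly revealed background.
    revealed = foreground_mask & (current_depth > bg_depth)
    updated = bg_depth.copy()
    updated[revealed] = current_depth[revealed]
    return updated

bg = np.full((4, 4), 2.0)
cur = np.full((4, 4), 3.0)
fg = np.zeros((4, 4), dtype=bool)
fg[1:3, 1:3] = True
print(update_background(bg, cur, fg))
```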
20150379767 | IMAGE PROCESSING APPARATUS AND METHOD FOR IMAGE PROCESSING - An image processing apparatus comprises an acquiring section that acquires DEM data indicating digital elevation of each lattice-shaped area in a predetermined map region, a selecting section that selects at least one of a plurality of line filters that filters data which is continuous in one direction in the map region and a plurality of matrix filters that filters data constituting a two-dimensional region in the map region, a filter processing section that conducts filter processing of the DEM data acquired by the acquiring section by a filter selected by the selecting section, and an outputting section that outputs the DEM data after being filtered in the filter processing. | 12-31-2015 |
20150381959 | IMAGE PROCESSING DEVICE AND METHOD THEREFOR - The present specification relates to an image processing device capable of compensating for a 3D image distorted by a screen curvature of a 3D-curved display, and a method therefor. The image processing method according to one embodiment of the present specification includes: receiving a 3D image signal; changing a depth value of a left-eye image and a right-eye image included in the received 3D image signal, according to a screen curvature of an image display device; and displaying the left-eye image and the right-eye image updated based on the changed depth value, such that the 3D image signal is output after being compensated. | 12-31-2015 |
20160005145 | Aligning Ground Based Images and Aerial Imagery - Systems and methods for aligning ground based images of a geographic area taken from a perspective at or near ground level and a set of aerial images taken from, for instance, an oblique perspective, are provided. More specifically, candidate aerial imagery can be identified for alignment with the ground based image. Geometric data associated with the ground based image can be obtained and used to warp the ground based image to a perspective associated with the candidate aerial imagery. One or more feature matches between the warped image and the candidate aerial imagery can then be identified using a feature matching technique. The matched features can be used to align the ground based image with the candidate aerial imagery. | 01-07-2016 |
20160005149 | AUTOMATED SEAMLINE CONSTRUCTION FOR HIGH-QUALITY HIGH-RESOLUTION ORTHOMOSAICS - A system for semi-automated feature extraction comprising an image analysis server that receives and initializes a plurality of raster images, a feature extraction server that identifies and extracts image features, a mosaic server that assembles mosaics from multiple images, and a rendering engine that provides visual representations of images for review by a human user, and a method for generating a cost raster utilizing the system of the invention. | 01-07-2016 |
20160005158 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - An image processing device includes a setting unit that sets a search area according to a reference image based on an environmental condition, for each input image other than the reference image, using an image based on one input image, of a group of multi-viewpoint input images including a common sub-region, as the reference image, a positional deviation estimation unit that estimates positional deviation of each input image other than the reference image with respect to the reference image, by performing template matching processing in the search area, using the reference image, and a super-resolution processing unit that executes super-resolution processing, using the estimated positional deviation as a parameter. | 01-07-2016 |
20160005179 | METHODS AND APPARATUS FOR MERGING DEPTH IMAGES GENERATED USING DISTINCT DEPTH IMAGING TECHNIQUES - A depth imager is configured to generate a first depth image using a first depth imaging technique, and to generate a second depth image using a second depth imaging technique different than the first depth imaging technique. At least portions of the first and second depth images are merged to form a third depth image. The depth imager comprises at least one sensor including a single common sensor at least partially shared by the first and second depth imaging techniques, such that the first and second depth images are both generated at least in part using data acquired from the single common sensor. By way of example, the first depth image may comprise a structured light (SL) depth map generated using an SL depth imaging technique, and the second depth image may comprise a time of flight (ToF) depth map generated using a ToF depth imaging technique. | 01-07-2016 |
20160005181 | METHOD AND APPARATUS FOR SEGMENTATION OF 3D IMAGE DATA - The present invention provides a method and an apparatus for real time object segmentation of 3D image data based on local feature correspondences between a plurality of views. In order to reduce the computational effort of object segmentation of 3D image data, the segmentation process is performed based on correspondences relating to local features of the image data and a depth map. In this way, computational effort can be significantly reduced and the image segmentation can be carried out very fast. | 01-07-2016 |
20160005225 | AUTOMATED CONVERSION OF TWO-DIMENSIONAL HYDROLOGY VECTOR MODELS INTO VALID THREE-DIMENSIONAL HYDROLOGY VECTOR MODELS - A system for automated conversion of two-dimensional hydrology vector models into valid three-dimensional hydrology vector models, comprising a vector extraction engine that retrieves vectors from, and sends vectors to, a vector storage, a DSM server that retrieves a DSM from a DSM storage and computes a DSM from stereo disparity measurements of a stereo pair retrieved from a raster storage, and a rendering engine that provides visual representations of images for review by a human user, and a method for automated hydrology vector model development utilizing the system of the invention. | 01-07-2016 |
20160012567 | SYSTEMS AND METHODS FOR STEREO DEPTH ESTIMATION USING GLOBAL MINIMIZATION AND DEPTH INTERPOLATION | 01-14-2016 |
20160012602 | CONTEMPORANEOUSLY RECONSTRUCTING IMAGES CAPTURED OF A SCENE ILLUMINATED WITH UNSTRUCTURED AND STRUCTURED ILLUMINATION SOURCES | 01-14-2016 |
20160012603 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD AND IMAGE PROCESSING PROGRAM | 01-14-2016 |
20160026885 | SYSTEM FOR DETERMINING ALIGNMENT OF A USER-MARKED DOCUMENT AND METHOD THEREOF - A system for evaluating a user-marked document including a user-marked response sheet having a response area and at least one image marker, a means for obtaining a digital image of the user-marked response sheet, a computer having programming to perform steps which include identifying three-dimensional position information of the at least one image marker in an obtained digital image, calculating position information of the response area in an obtained digital image using the three-dimensional position information of the at least one image marker, identifying position information of a user created mark within the response area using the calculated position information of the response area, and evaluating whether the position information of the user created mark corresponds with position information of a first predefined mark or a second predefined mark. | 01-28-2016 |
20160034746 | CONTROL SYSTEM, ROBOT SYSTEM, AND CONTROL METHOD - A control system includes a projection section that projects predetermined patterned light on a target object, a first imaging section that captures an image of the target object on which the predetermined patterned light is projected by the projection section, a second imaging section that is disposed in a position different from a position where the first imaging section is disposed and captures an image of the target object on which the predetermined patterned light is projected by the projection section, and a calculation section that calculates a three-dimensional shape of the target object based on a first point in a first captured image captured by the first imaging section and a second point in a second captured image captured by the second imaging section. | 02-04-2016 |
20160035095 | Systems And Methods For Detecting A Tilt Angle From A Depth Image - A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels. | 02-04-2016 |
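A small sketch of the tilt computation in 20160035095: take the centroids of the upper-body and lower-body pixel sets (already lifted to camera-space coordinates) and measure the angle of the connecting vector against the vertical axis. The up-axis convention and the use of plain centroids are assumptions.

```python
# Hedged sketch: tilt angle from two body-part point sets in camera space.
import numpy as np

def tilt_angle(upper_points, lower_points):
    """upper_points/lower_points: (N, 3) camera-space coordinates in metres."""
    upper = np.mean(upper_points, axis=0)   # e.g. shoulder pixels
    lower = np.mean(lower_points, axis=0)   # e.g. midpoint of hips and knees
    torso = upper - lower
    vertical = np.array([0.0, 1.0, 0.0])    # assumed up-axis
    cos_a = np.dot(torso, vertical) / np.linalg.norm(torso)
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

shoulders = np.array([[0.1, 1.5, 2.0], [-0.1, 1.5, 2.0]])
hips_knees = np.array([[0.05, 0.9, 1.9], [-0.05, 0.9, 1.9]])
print(tilt_angle(shoulders, hips_knees))  # about 9.5 degrees of forward lean
```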
20160042515 | METHOD AND DEVICE FOR CAMERA CALIBRATION - A method and an apparatus for camera calibration are described. The method and the apparatus use an image dataset in which a calibration object is captured by a camera. 2D and 3D correspondences are acquired from the image dataset, as well as reprojection errors of the 2D and 3D correspondences. A reliability map of a retinal plane of the camera is generated using the acquired reprojection errors, which indicates a reliability measure of the geometrical information carried by each pixel of the retinal plane of the calibrated camera. | 02-11-2016 |
20160042520 | METHOD AND APPARATUS FOR ENVIRONMENTAL PROFILE GENERATION - A method for generating an environmental profile is provided. The method for generating environmental profile includes generating an image of an environment by capturing the environment with at least one recording device, detecting a change of an object in the environment based on the image, and generating an environmental profile based on the change of the object. | 02-11-2016 |
20160048963 | 3-D Localization And Imaging of Dense Arrays of Particles - Systems, methods, and computer program products are disclosed to localize and/or image a dense array of particles. In some embodiments, a plurality of particles may be imaged using an imaging device. A plurality of point spread function dictionary coefficients of the image may be estimated using a point spread function dictionary; where the point spread function dictionary can include a plurality of spread function responses corresponding to different particle positions. From the point spread function dictionary coefficients the number of particles in the image can be determined. Moreover, the location of each particle in the image can be determined from the point spread function dictionary coefficients. | 02-18-2016 |
20160048970 | MULTI-RESOLUTION DEPTH ESTIMATION USING MODIFIED CENSUS TRANSFORM FOR ADVANCED DRIVER ASSISTANCE SYSTEMS - A computer-implemented depth estimation method based on non-parametric Census transform with adaptive window patterns and semi-global optimization. A modified cross-based cost aggregation technique adaptively creates the shape of the cross for each pixel distinctly. In addition, a depth refinement algorithm fills holes within the estimated depth map using the surrounding background depth pixels and sharpens the object boundaries by applying a trilateral filter to the generated depth map. The trilateral filter uses the curvature of pixels as well as texture and depth information to sharpen the edges. | 02-18-2016 |
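For readers unfamiliar with the Census transform underlying 20160048970, the sketch below uses a fixed 5x5 window (the application's adaptive window patterns, cross-based aggregation and semi-global optimization are omitted): each pixel is encoded as a bit string of neighbour comparisons, and the matching cost is the Hamming distance between codes.

```python
# Hedged sketch: plain Census transform and Hamming matching cost.
import numpy as np

def census_transform(img, radius=2):
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    code = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = pad[radius + dy:radius + dy + h,
                            radius + dx:radius + dx + w]
            # Shift in one bit per neighbour comparison (24 bits for 5x5).
            code = (code << np.uint64(1)) | (neighbour < img).astype(np.uint64)
    return code

def hamming_cost(code_a, code_b):
    x = np.bitwise_xor(code_a, code_b)
    return np.array([bin(v).count('1') for v in x.ravel()]).reshape(x.shape)

left = np.random.randint(0, 256, (16, 16)).astype(np.uint8)
right = np.roll(left, 2, axis=1)
print(hamming_cost(census_transform(left), census_transform(right)).mean())
```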
20160055638 | SYSTEMS AND METHODS FOR DETECTING MISALIGNMENT BETWEEN A HELIPAD AND AN ASSOCIATED STRUCTURE - A system is provided for detecting misalignment between a helipad and a structure associated with the helipad. The system comprises a first database that includes first structure data, which data can comprise a location of the first structure. The system can further comprise a second database that can include second structure data, where the second structure data can comprise a location of the second structure. The second structure can comprise a helipad situated atop the first structure. The system can further comprise a processor coupled to receive the first structure data from the first database and the second structure data from the second database and can be configured, upon receipt of the first structure data and the second structure data, to: determine a correlation coefficient based upon a degree of overlap of a first volumetric model and a second volumetric model, and selectively generate an alert based upon the correlation coefficient. | 02-25-2016 |
20160061591 | Stationary Dimensioning Apparatus - A stationary dimensioning apparatus dimensions a load on a movable conveyance by detecting a barcode fiducial that is situated on the conveyance and by detecting a large number of points in space that represent points on the surface of the load. The location of the barcode fiducial on the conveyance is compared with a reference location of a reference barcode fiducial, and a translation vector and a rotation vector are calculated to characterize the difference in translation and rotation between the reference barcode fiducial and the barcode fiducial that was detected on the conveyance. The translation and rotation vectors are then employed in a transformation matrix that is used to transform each of the detected points in space into transformed points in space that correspond with a reference coordinate system, such as might be defined in terms of horizontal and vertical directions. Those transformed points in space that are determined to be points on the surface of the conveyance itself can be ignored, and the dimensions of the load can then be calculated from the remaining transformed points in space. | 03-03-2016 |
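A compact sketch of the dimensioning pipeline in 20160061591: rotate and translate the detected points into the reference coordinate system, discard points attributable to the conveyance surface, and take the bounding extents of what remains as the load dimensions. The Rodrigues rotation, the height threshold, and the axis-aligned bounding box are assumptions.

```python
# Hedged sketch: transform measured points and bound the remaining load points.
import numpy as np

def rotation_matrix_from_vector(rvec):
    # Rodrigues formula: rotation matrix from an axis-angle rotation vector.
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def dimension_load(points, rvec, tvec, conveyance_height=0.15):
    R = rotation_matrix_from_vector(rvec)
    aligned = points @ R.T + tvec                       # into reference frame
    load = aligned[aligned[:, 2] > conveyance_height]   # drop conveyance surface
    return load.max(axis=0) - load.min(axis=0)          # (length, width, height)

pts = np.random.uniform([0, 0, 0], [1.2, 0.8, 1.0], size=(5000, 3))
print(dimension_load(pts, np.array([0.0, 0.0, 0.05]), np.zeros(3)))
```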
20160063302 | METHOD FOR INSERTING FEATURES INTO A THREE-DIMENSIONAL OBJECT AND METHOD FOR OBTAINING FEATURES FROM A THREE DIMENSIONAL OBJECT - A method for inserting features into a three-dimensional object is disclosed, wherein features correspond to original features of the three-dimensional object. The inserting method comprises determining a reference shape using the original features; determining a set of vertices on the three-dimensional original object at the intersection between the reference shape and the three-dimensional object; modifying the local neighborhood of the set of vertices so that their local neighborhood is close to a set of target shapes. A method for obtaining features from a three-dimensional object is further disclosed. The method comprises obtaining anchor vertices whose local neighborhood is close to a set of target shapes; fitting a reference shape onto the anchor vertices; and obtaining the features using the fitted reference shape. A 3D object carrying anchor vertices and devices for implementing the disclosed methods are further disclosed. | 03-03-2016 |
20160063704 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM THEREFOR - The position and the attitude of a device or a member are determined. A camera | 03-03-2016 |
20160063716 | LINE PARAMETRIC OBJECT ESTIMATION - A method may include projecting, onto a first projection plane of a first projection volume, first points from a point cloud of a setting that are within the first projection volume. Further, the method may include matching a plurality of the projected first points with a cross-section template that corresponds to a line parametric object (LPO) of the setting to determine a plurality of first element points of a first primary projected element. Additionally, the method may include projecting, onto a second projection plane of a second projection volume, second points from the point cloud that are within the second projection volume and matching a plurality of the projected second points with the cross-section template to determine a plurality of second element points of a second primary projected element. Moreover, the method may include generating a parameter function based on the first element points and the second element points. | 03-03-2016 |
20160063759 | APPARATUS, METHOD, AND COMPUTER READABLE MEDIUM FOR CORRECTING AN INTERPOLATION COEFFICIENT FOR STEREO MATCHING - An apparatus for correcting an interpolation coefficient for stereo matching includes: an interpolation coefficient generator configured to generate an interpolation coefficient λ; a correction value calculator configured to calculate a parameter and a weight value based on a position of an object in an image; and an interpolation coefficient corrector configured to correct the generated interpolation coefficient by multiplying the calculated parameter and the calculated weight value by the generated interpolation coefficient. | 03-03-2016 |
20160065939 | APPARATUS, METHOD, AND MEDIUM OF CONVERTING 2D IMAGE TO 3D IMAGE BASED ON VISUAL ATTENTION - A method and apparatus of converting a two-dimensional (2D) image to a three-dimensional (3D) image based on visual attention are provided. A visual attention map including visual attention information, which is information about a significance of an object in a 2D image, may be generated. Parallax information including information about a left eye image and a right eye image of the 2D image may be generated based on the visual attention map. A 3D image may be generated using the parallax information. | 03-03-2016 |
20160071274 | SELECTIVE 3D REGISTRATION - A sampling and weighting technique is presented. Given a 3D model that is composed of n separate entities, a set of parameters is obtained for each entity. A weight is calculated for each entity, giving higher weight to entities corresponding to rarer parameters. Entities are assigned to components based on their corresponding parameters. Entities are sampled based on the weights or based on the components. A new 3D model is constructed from the sampled entities. | 03-10-2016 |
20160071281 | METHOD AND APPARATUS FOR SEGMENTATION OF 3D IMAGE DATA - The present invention provides a method for segmentation of 3D image data of a 3D image, the method comprising: determining ( | 03-10-2016 |
20160071327 | SYSTEM AND METHOD FOR SIMPLIFYING A MESH POINT CLOUD - A method for simplifying a mesh point cloud includes the following steps: obtaining a point cloud and meshing the point cloud so that the point cloud is formed with a plurality of triangular grids; calculating a distance between a vertex of the triangular grid and its corresponding normal plane to determine an influence of the vertex on a geometric characteristic of the mesh point cloud; deleting the vertexes and the grids in connection therewith in accordance with a predetermined degree of simplification, wherein the deleted vertexes are those which have less influence on the geometric characteristic of the mesh point cloud; creating triangular grids to fill the void part in accordance with Delaunay triangulation; and smoothing the created grids. | 03-10-2016 |
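One plausible reading of the influence test in 20160071327, sketched below: the distance from a vertex to a local plane estimated from its neighbours is used as a flatness score, and the flattest vertices become removal candidates. The neighbour-averaged plane and the fixed simplification ratio are assumptions, not taken from the application.

```python
# Hedged sketch: vertex-to-plane distance as a simplification score.
import numpy as np

def vertex_plane_distance(vertex, neighbour_vertices, neighbour_normals):
    # Local plane approximated by the neighbour centroid and averaged normal.
    centroid = neighbour_vertices.mean(axis=0)
    normal = neighbour_normals.mean(axis=0)
    normal /= np.linalg.norm(normal)
    return abs(np.dot(vertex - centroid, normal))

def select_vertices_to_remove(distances, simplification_ratio=0.3):
    # Remove the flattest fraction of vertices (smallest plane distances).
    k = int(len(distances) * simplification_ratio)
    return np.argsort(distances)[:k]

v = np.array([0.0, 0.0, 0.01])
nbrs = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]], dtype=float)
nrms = np.tile([0.0, 0.0, 1.0], (4, 1))
print(vertex_plane_distance(v, nbrs, nrms))  # 0.01 -> nearly flat vertex
```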
20160081435 | FOOTWEAR RECOMMENDATIONS FROM FOOT SCAN DATA - Techniques for recommending footwear based on scan data describing feet of a user and a variety of other information are described. In some instances, a footwear service may obtain scan data describing feet of a user. The footwear service may process the scan data to generate a 3D representation of the user's feet, such as a 3D model or other representation. The footwear service may also obtain other information about the user, such as user preferences, orthotics data, information identifying an activity that the user participates in and so on. The footwear service may determine a footwear recommendation for the user based on the 3D representation of the user's feet, footwear data describing a footwear item, the user preferences and/or other information. | 03-24-2016 |
20160086322 | IMAGE MEASUREMENT DEVICE - An image measurement device acquires a first measurement region and a second measurement region on an image, acquires distance information corresponding to the first measurement region and distance information corresponding to the second measurement region, and computes a distance between the two regions. In a case where the first measurement region is displaced on the image, the second measurement region is displaced on a contour or a plane on the image, and a distance between the first measurement region after the displacement and the second measurement region after the displacement is computed. Thereby, distance measurement is able to be performed over a wide range without a complicated operation in the image measurement device for measuring a distance between two regions by using an image. | 03-24-2016 |
20160086341 | SYSTEM AND METHOD FOR ADAPTIVE DEPTH MAP RECONSTRUCTION - What is disclosed is a system and method for adaptively reconstructing a depth map of a scene. In one embodiment, upon receiving a mask identifying a region of interest (ROI), a processor changes either a spatial attribute of a pattern of source light projected on the scene by a light modulator which projects an undistorted pattern of light with known spatio-temporal attributes on the scene, or changes an operative resolution of a depth map reconstruction module. A sensing device detects the reflected pattern of light. A depth map of the scene is generated by the depth map reconstruction module by establishing correspondences between spatial attributes in the detected pattern and spatial attributes of the projected undistorted pattern and triangulating the correspondences to characterize differences therebetween. The depth map is such that a spatial resolution in the ROI is higher relative to a spatial resolution of locations not within the ROI. | 03-24-2016 |
20160086385 | Using Free-Form Deformations In Surface Reconstruction - Volumes of a 3D physical space are used in a surface reconstruction process, where adjacent volumes share vertices so that no gaps or overlaps between the volumes exist. As a result, a continuous surface is obtained in the surface reconstruction process. The vertices are anchored to nodes in a pose graph, such that locations of the vertices are adjusted as the pose graph is updated. As a result, a deformation of the volumes is permitted. Based on the deformation of a volume, a region of a depth map of the physical space is deformed correspondingly. Each vertex can be anchored to a closest node of the pose graph, or to a point which is based on a combination of nodes. In one approach, the point is defined based on the closest node and other nodes within a defined radius of the closest node. | 03-24-2016 |
20160093054 | SYSTEM AND METHOD FOR COMPONENT DETECTION - A method and system include an imaging device configured to capture image data of a vehicle. The vehicle includes one or more components of interest. The method and system include a memory device configured to store an image detection algorithm based on one or more image templates corresponding to the one or more components of interest. The method and system also include an image processing unit operably coupled to the imaging device and the memory device. The image processing unit is configured to determine one or more shapes of interest of the image data using the image detection algorithm that correspond to the one or more components of interest, and determine one or more locations of the one or more shapes of interest relative to the vehicle. | 03-31-2016 |
20160093057 | DETECTING DEVICE, DETECTING METHOD, AND PROGRAM - To detect a distant object that may become an obstacle to a traveling destination of a moving vehicle or the like more accurately than conventional approaches, there is provided a detecting device, a program used in the detecting device, and a detecting method using the detecting device, where the detecting device includes: an acquisition section for acquiring two or more images captured by two or more imaging devices provided at different heights; and a detection section for detecting a rising portion of an identical object toward the imaging devices based on a difference between the lengths of the identical object in the height direction in the two or more images. | 03-31-2016 |
20160093058 | THREE-DIMENSIONAL COORDINATE COMPUTING APPARATUS, THREE-DIMENSIONAL COORDINATE COMPUTING METHOD, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM HAVING THEREIN PROGRAM FOR THREE-DIMENSIONAL COORDINATE COMPUTING - A three-dimensional coordinate computing apparatus includes an image selecting unit and a coordinate computing unit. The image selecting unit selects a first selected image from multiple captured images, and selects a second selected image from multiple subsequent images captured by the camera after the first selected image has been captured. The second selected image is selected based on a distance between the position of capture of the first selected image and the position of capture of each of the multiple subsequent images, and on the number of corresponding feature points, each of which corresponds to one of the feature points extracted from the first selected image and one of the feature points extracted from each of the multiple subsequent images. The coordinate computing unit computes three-dimensional coordinates of the multiple corresponding feature points based on two-dimensional coordinates of each corresponding feature point in the first and second selected images. | 03-31-2016 |
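The coordinate-computing step summarized in 20160093058 ultimately amounts to triangulating matched feature points seen in the two selected images. Below is a minimal sketch of that final step using standard linear (DLT) triangulation; the 3x4 projection matrices, the matched 2D coordinates, and the example camera setup are illustrative assumptions, and the abstract's image-selection logic is not reproduced.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices of the two selected images.
    x1, x2 : matching 2D feature coordinates (x, y) in each image.
    Returns the 3D point in the world frame.
    """
    # Each view contributes two linear constraints on the homogeneous 3D point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

if __name__ == "__main__":
    # Hypothetical example: two cameras with identity intrinsics, 1 m apart along x.
    K = np.eye(3)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.3, -0.2, 4.0])
    x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
    x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
    print(triangulate_point(P1, P2, x1, x2))   # ~ [0.3, -0.2, 4.0]
```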
20160100152 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, IMAGE PROCESSING COMPUTER PROGRAM, AND INFORMATION RECORDING MEDIUM WHEREUPON IMAGE PROCESSING COMPUTER PROGRAM IS STORED - An image processing apparatus includes an image-frame reading section that reads one or more image frames from a moving image, a region-boundary-line-information receiving section that receives information concerning a region boundary line in the read image frames, a region dividing section that expands a division region starting from a point on the region boundary line and divides the inside and outside of the region boundary line with division lines, which connect points of brightness, an opening processing section that leaves a first division line between a pair of the region boundary lines and opens a second division line, a separating section that separates regions in the image frames in units of a region surrounded by the first division line, and a first depth-value giving section that gives, to the region surrounded by the first division line, a depth value representing a distance degree of the region. | 04-07-2016 |
20160104031 | DEPTH FROM TIME OF FLIGHT CAMERA - Region of interest detection in raw time of flight images is described. For example, a computing device receives at least one raw image captured for a single frame by a time of flight camera. The raw image depicts one or more objects in an environment of the time of flight camera (such as human hands, bodies or any other objects). The raw image is input to a trained region detector and in response one or more regions of interest in the raw image are received. A received region of interest comprises image elements of the raw image which are predicted to depict at least part of one of the objects. A depth computation logic computes depth from the one or more regions of interest of the raw image. | 04-14-2016 |
20160104289 | REAL-TIME RANGE MAP GENERATION - A system, method, and non-transitory computer-readable storage medium for range map generation is disclosed. The method may include receiving an image from a camera and receiving a 3D point cloud from a range detection unit. The method may further include transforming the 3D point cloud from range detection unit coordinates to camera coordinates. The method may further include projecting the transformed 3D point cloud into a 2D camera image space corresponding to the camera resolution to yield projected 2D points. The method may further include filtering the projected 2D points based on a range threshold. The method may further include generating a range map based on the filtered 2D points and the image. | 04-14-2016 |
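The pipeline outlined in 20160104289 (transform the point cloud into camera coordinates, project into the 2D image space at the camera resolution, filter by a range threshold, and generate a range map) can be illustrated with a short numpy sketch. The extrinsic transform, intrinsics, image size, and range threshold below are placeholder assumptions, not values from the publication.

```python
import numpy as np

def build_range_map(points_lidar, T_cam_from_lidar, K, image_size, max_range=50.0):
    """Project a 3D point cloud into the camera image and rasterize ranges.

    points_lidar     : (N, 3) points in range-detection-unit coordinates.
    T_cam_from_lidar : (4, 4) rigid transform into camera coordinates.
    K                : (3, 3) camera intrinsic matrix.
    image_size       : (height, width) of the camera image.
    """
    h, w = image_size
    # Transform points into camera coordinates (homogeneous multiply).
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]

    # Keep points in front of the camera and within the range threshold.
    ranges = np.linalg.norm(pts_cam, axis=1)
    keep = (pts_cam[:, 2] > 0.1) & (ranges < max_range)
    pts_cam, ranges = pts_cam[keep], ranges[keep]

    # Pinhole projection into pixel coordinates.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = np.round(uv[:, 0]).astype(int), np.round(uv[:, 1]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Rasterize: keep the nearest range when several points fall on one pixel.
    range_map = np.full((h, w), np.inf)
    for ui, vi, ri in zip(u[inside], v[inside], ranges[inside]):
        range_map[vi, ui] = min(range_map[vi, ui], ri)
    return range_map
```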
20160110872 | METHOD AND IMAGE PROCESSING APPARATUS FOR GENERATING A DEPTH MAP - In a method for generating a depth map for an image, an image processing apparatus is configured to: determine a depth level for each of at least two objects in the image according to an angle of incidence at which light incident upon the object is projected onto an image sensor of a light-field camera; calculate a depth value of the depth level associated with one of the objects; estimate a depth value for the depth level associated with another one of the objects; and generate a depth map according to the depth values. The depth value is estimated based on a distance between first and second locations on the image sensor onto which light incident upon the reference and relative objects is projected. | 04-21-2016 |
20160110913 | 3D REGISTRATION OF A PLURALITY OF 3D MODELS - A technique for 3D registration of three or more 3D models using parallel computing. The technique treats the pairwise 3D registration problem as an atomic sub-problem, and solves a plurality of pairwise 3D registrations in parallel. The initial guess for each pairwise 3D registration is calculated based on the, possibly incomplete, information available at the moment the calculation is made. At each point the available pairwise transformations are examined based on currently available information. Transformations that are identified as outliers or as inaccurate are marked for repeated pairwise 3D registration when additional information relevant to the calculation of the initial guess becomes available. | 04-21-2016 |
20160117830 | OBJECT DETECTION AND TRACKING USING DEPTH DATA - Methods and systems for detecting and/or tracking one or more objects utilize depth data. An example method of detecting one or more objects in image data includes receiving depth image data corresponding to a depth image view point relative to the one or more objects. A series of binary threshold depth images are formed from the depth image data. Each of the binary threshold depth images is based on a respective depth. One or more depth extremal regions in which image pixels have the same value are identified for each of the binary depth threshold images. One or more depth maximally stable extremal regions are selected from the identified depth extremal regions based on change in area of the one or more respective depth extremal regions for different depths. | 04-28-2016 |
20160117831 | IMAGE PROCESSING APPARATUS - An image processing apparatus includes an image processing section that periodically performs image processing on a periodically captured image; a diagnosing section that compares an image processing result obtained from diagnostic image data with expected value data indicating a reference for a normal processing result of image processing of the diagnostic image data, and determines whether the image processing result obtained from the diagnostic image data is normal, by making the image processing section perform image processing on the diagnostic image data, which is directly accessible by the image processing section, in parallel with the image processing periodically performed by the image processing section on the captured image; and an output controlling section that outputs the processing result of image processing for the captured image as valid to a control section on condition that the image processing result obtained from the diagnostic image data is determined as normal by the diagnosing section. | 04-28-2016 |
20160124995 | ESTIMATING DEPTH FROM A SINGLE IMAGE - During a training phase, a machine accesses reference images with corresponding depth information. The machine calculates visual descriptors and corresponding depth descriptors from this information. The machine then generates a mapping that correlates these visual descriptors with their corresponding depth descriptors. After the training phase, the machine may perform depth estimation based on a single query image devoid of depth information. The machine may calculate one or more visual descriptors from the single query image and obtain a corresponding depth descriptor for each visual descriptor from the generated mapping. Based on obtained depth descriptors, the machine creates depth information that corresponds to the submitted single query image. | 05-05-2016 |
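The query-time behaviour described in 20160124995 (obtain a depth descriptor for each visual descriptor of the query image from a mapping learned over reference images) can be approximated with a nearest-neighbour table over the training pairs. The sketch below uses that simple stand-in for the learned mapping; descriptor extraction itself is left abstract, and all names and data are illustrative assumptions.

```python
import numpy as np

class DescriptorDepthMapping:
    """Toy stand-in for the visual-to-depth descriptor mapping.

    Training pairs (visual descriptor, depth descriptor) are stored, and a
    query visual descriptor is answered with the depth descriptor of its
    nearest stored neighbour.
    """

    def __init__(self):
        self.visual = None   # (N, Dv) visual descriptors from reference images
        self.depth = None    # (N, Dd) corresponding depth descriptors

    def fit(self, visual_descriptors, depth_descriptors):
        self.visual = np.asarray(visual_descriptors, dtype=float)
        self.depth = np.asarray(depth_descriptors, dtype=float)

    def query(self, visual_descriptor):
        # Nearest-neighbour lookup in visual-descriptor space.
        d = np.linalg.norm(self.visual - np.asarray(visual_descriptor, float), axis=1)
        return self.depth[np.argmin(d)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical training data: 100 visual descriptors with matching depth descriptors.
    vis = rng.normal(size=(100, 16))
    dep = rng.uniform(0.5, 10.0, size=(100, 4))
    mapping = DescriptorDepthMapping()
    mapping.fit(vis, dep)
    print(mapping.query(vis[7]))  # returns the depth descriptor stored for sample 7
```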
20160125226 | METHOD AND SYSTEM FOR AUTOMATICALLY OPTIMIZING QUALITY OF POINT CLOUD DATA - Disclosed is a method for automatically optimizing point cloud data quality, including the following steps of: acquiring initial point cloud data for a target to be reconstructed, to obtain an initial discrete point cloud; performing preliminary data cleaning on the obtained initial discrete point cloud to obtain a Locally Optimal Projection operator (LOP) sampling model; obtaining a Poisson reconstruction point cloud model by using a Poisson surface reconstruction method on the obtained initial discrete point cloud; performing iterative closest point algorithm registration on the obtained Poisson reconstruction point cloud model and the obtained initial discrete point cloud; and, for each point on the currently registered model, calculating a weight from the surrounding points within a certain radius of the position corresponding to that point on the obtained LOP sampling model, and comparing the weight with a threshold, to determine whether the region where the point is located requires repeated scanning. Further disclosed is a system for automatically optimizing point cloud data quality. | 05-05-2016 |
20160125258 | STEREO IMAGE PROCESSING USING CONTOURS - A computer-implemented stereo image processing method which uses contours is described. In an embodiment, contours are extracted from two silhouette images captured at substantially the same time by a stereo camera of at least part of an object in a scene. Stereo correspondences between contour points on corresponding scanlines in the two contour images (one corresponding to each silhouette image in the stereo pair) are calculated on the basis of contour point comparison metrics, such as the compatibility of the normal of the contours and/or a distance along the scanline between the point and a centroid of the contour. A corresponding system is also described. | 05-05-2016 |
20160125617 | ESTIMATING DEVICE AND ESTIMATION METHOD - An estimation method executed by a computer includes: extracting, from an image, a plurality of characteristic points satisfying a certain requirement regarding changes in gray levels between the plurality of characteristic points and surrounding points; identifying, as map points from the plurality of characteristic points, characteristic points existing on planes, by excluding corners of an object depicted in the image; extracting, from another image, another plurality of characteristic points satisfying the certain requirement; executing matching of the other plurality of characteristic points with the map points based on a region including the map points; and estimating, based on results of the matching, a position and an orientation of an imaging device at the time the other image is captured. | 05-05-2016 |
20160125637 | METHOD AND APPARATUS FOR REMOVING OUTLIERS FROM A MAIN VIEW OF A SCENE DURING 3D SCENE RECONSTRUCTION - A method and an apparatus for removing outliers from a main view of a scene during 3D reconstruction of a scene from multiple views of the scene. A 3D projection unit projects a 3D point of a pixel of the main view into neighboring views. A comparator then compares the distance of each of the projected 3D points in the neighboring views to the 3D point of the main view with a defined distance threshold. Based on the comparison a flagging unit assigns flags to the pixel in the main view. Finally, depending on values of the flags a rejecting unit rejects the pixel in the main view as an outlier. | 05-05-2016 |
20160125651 | Method and Apparatus for Generation of 3D Models with Applications in Dental Restoration Design - Methods and apparatus are provided for generating computer 3D models of an object, by registering two or more scans of physical models of an object. The scans may be 3D scans registered by a curve-based registration process. A method is provided for generating a 3D model of a portion of a patient's oral anatomy for use in dental restoration design. Also provided are scanning workflows for scanning physical models of an object to obtain a 3D model. | 05-05-2016 |
20160133006 | VIDEO PROCESSING METHOD AND APPARATUS - The present disclosure discloses a video processing method and apparatus, which belong to the field of data processing technologies. The method includes: acquiring at least one three-dimensional image, and obtaining a to-be-processed video; parsing the to-be-processed video, to obtain at least two video images; fusing each three-dimensional image with each video image separately, to obtain fused video images; and synthesizing the fused video images into a video, to obtain a processed video. The present disclosure separately fuses each acquired three-dimensional image with each video image obtained by parsing an acquired to-be-processed video, and synthesizes fused video images into a video, to obtain a processed video, which implements adding a three-dimensional image to a video, and enables a processed video to display a three-dimensional image, thereby expanding an application range of video processing, and enriching display effects of the processed video. | 05-12-2016 |
20160133026 | NON-PARAMETRIC METHOD OF AND SYSTEM FOR ESTIMATING DIMENSIONS OF OBJECTS OF ARBITRARY SHAPE - A non-parametric method of, and system for, dimensioning an object of arbitrary shape, captures a three-dimensional (3D) point cloud of data points over a field of view containing the object and a base surface on which the object is positioned, detects a base plane indicative of the base surface from the point cloud, extracts the data points of the object from the point cloud, processes the extracted data points of the object to obtain a convex hull, and fits a bounding box of minimum volume to enclose the convex hull. The bounding box has a pair of mutually orthogonal planar faces, and the fitting is performed by orienting one of the faces to be generally perpendicular to the base plane, and by simultaneously orienting the other of the faces to be generally parallel to the base plane. | 05-12-2016 |
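The box-fitting constraint in 20160133026 (one face of the minimum-volume bounding box perpendicular to the detected base plane, the other parallel to it) can be illustrated by rotating the extracted object points so the base-plane normal becomes the z-axis, then searching in-plane orientations for the smallest footprint. This is a brute-force sketch under that assumption; base-plane detection and convex-hull extraction are taken as already done, and all example values are illustrative.

```python
import numpy as np

def rotation_aligning_normal_to_z(n):
    """Return a rotation matrix that maps unit vector n onto the z-axis."""
    n = n / np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z); c = float(np.dot(n, z)); s = np.linalg.norm(v)
    if s < 1e-12:
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)

def fit_box_on_plane(object_points, plane_normal, angle_steps=180):
    """Fit a box whose bottom face lies parallel to the base plane.

    Searches in-plane orientations for the minimum footprint area; the height
    is the extent along the plane normal. Returns (length, width, height).
    """
    R = rotation_aligning_normal_to_z(plane_normal)
    pts = object_points @ R.T           # base plane is now z = const
    height = pts[:, 2].max() - pts[:, 2].min()
    xy = pts[:, :2]
    best = None
    for a in np.linspace(0.0, np.pi / 2.0, angle_steps, endpoint=False):
        c, s = np.cos(a), np.sin(a)
        rot = xy @ np.array([[c, -s], [s, c]])
        ext = rot.max(axis=0) - rot.min(axis=0)
        area = ext[0] * ext[1]
        if best is None or area < best[0]:
            best = (area, ext)
    length, width = sorted(best[1], reverse=True)
    return length, width, height

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical object: an axis-aligned box of points resting on the plane z = 0.
    pts = rng.uniform([0, 0, 0], [0.4, 0.3, 0.2], size=(2000, 3))
    print(fit_box_on_plane(pts, plane_normal=np.array([0.0, 0.0, 1.0])))
```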
20160140713 | SYSTEM AND METHOD FOR IMAGING DEVICE MODELLING AND CALIBRATION - The invention relates to a camera modeling and calibration system and method using a new set of variables to compensate for imperfections in the squareness of the line-of-sight axis with the camera plane, which increases accuracy in measuring the distortion introduced by image curvature caused by geometric and chromatic lens distortion, and wherein the camera image plane is represented from a full 3D perspective projection. | 05-19-2016 |
20160140719 | SYSTEM AND METHOD OF ESTIMATING 3D FACIAL GEOMETRY - The present invention relates to image analysis. In particular, but not limited to, the invention relates to estimating 3D facial geometry. First, images are acquired | 05-19-2016 |
20160140736 | VIEWPOINT POSITION CALCULATION DEVICE, IMAGE GENERATION DEVICE, AND VIEWPOINT POSITION CALCULATION METHOD - According to one embodiment, a viewpoint position calculation device includes a shape acquisitor, a measurement information acquisitor, and a viewpoint position calculator. The shape acquisitor is configured to acquire a shape data representing a three-dimensional shape of an object including first and second positions. The measurement information acquisitor is configured to acquire a first measurement information data including line segment data related to a line segment connecting the first position with the second position. The viewpoint position calculator is configured to calculate a viewpoint based on the shape data and the first measurement information data. A first image of the object as viewed from the viewpoint is generated based on the shape data and the line segment data. The first image includes an image of a first region of the object and an image of the line segment. The first region includes the first position and the second position. | 05-19-2016 |
20160150182 | METHOD AND APPARATUS FOR PROVIDING EYE-CONTACT FUNCTION TO MULTIPLE POINTS OF ATTENDANCE USING STEREO IMAGE IN VIDEO CONFERENCE SYSTEM - The present invention relates to a method, and an apparatus therefor, that provide a natural eye-contact function to attendees by using a stereo image and a depth image to estimate a precise depth value of the occlusion region and to improve the quality of a composite eye-contact image when there are two or more remote attendees at one site during a video conference using a video conference system. | 05-26-2016 |
20160150208 | VIRTUAL VIEWPOINT SYNTHESIS METHOD AND SYSTEM - A virtual viewpoint synthesis method and system, including: establishing a left viewpoint virtual view and a right viewpoint virtual view; searching for a candidate pixel in a reference view, and marking a pixel block in which the candidate pixel is not found as a hole point; ranking the found candidate pixels according to depth, and successively calculating a foreground coefficient and a background coefficient for performing weighted summation; enlarging the hole-point regions of the left viewpoint virtual view and/or the right viewpoint virtual view in the direction of the background to remove a ghost pixel; performing viewpoint synthesis on the left viewpoint virtual view and the right viewpoint virtual view; and filling the hole-points of a composite image. | 05-26-2016 |
20160150210 | METHOD AND APPARATUS FOR MATCHING STEREO IMAGES - A method for matching stereo images may calculate a data cost value for each of a plurality of images, calculate a smoothness cost value for each of the plurality of images, and match pixels among the plurality of images based on the data cost value and the smoothness cost value. | 05-26-2016 |
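The two cost terms named in 20160150210 (a per-pixel data cost and a smoothness cost linking neighbouring pixels) are the ingredients of classical energy-based stereo. A compact illustration is a single-scanline dynamic program with absolute-difference data costs and a constant penalty whenever the disparity of adjacent pixels changes; this is a generic sketch of that cost structure, not the publication's specific matching method, and all values are illustrative.

```python
import numpy as np

def scanline_stereo(left_row, right_row, max_disp=16, smooth_penalty=4.0):
    """Match one image scanline using a data cost plus a smoothness cost.

    left_row, right_row : 1D arrays of pixel intensities on the same scanline.
    Returns an integer disparity per pixel of the left scanline.
    """
    w = len(left_row)
    disps = np.arange(max_disp)

    # Data cost: absolute intensity difference for each (pixel, disparity).
    data = np.full((w, max_disp), 255.0)
    for d in disps:
        data[d:, d] = np.abs(left_row[d:] - right_row[: w - d])

    # Forward pass of a 1D dynamic program: accumulated cost with a constant
    # smoothness penalty whenever adjacent pixels take different disparities.
    acc = np.zeros_like(data)
    back = np.zeros((w, max_disp), dtype=int)
    acc[0] = data[0]
    for x in range(1, w):
        # trans[dp, d]: cost of arriving at disparity d from previous disparity dp.
        trans = acc[x - 1][:, None] + smooth_penalty * (disps[:, None] != disps[None, :])
        back[x] = np.argmin(trans, axis=0)
        acc[x] = data[x] + trans[back[x], disps]

    # Backtrack the minimum-cost disparity path.
    out = np.zeros(w, dtype=int)
    out[-1] = int(np.argmin(acc[-1]))
    for x in range(w - 1, 0, -1):
        out[x - 1] = back[x, out[x]]
    return out

if __name__ == "__main__":
    # Hypothetical scanline: the right row is the left row shifted by 3 pixels.
    left = np.array([10, 10, 10, 50, 80, 80, 50, 10, 10, 10, 10, 10], dtype=float)
    right = np.roll(left, -3)
    print(scanline_stereo(left, right, max_disp=6))
```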
20160154255 | A METHOD FOR DETERMINING A VISUAL EFFECT OF AN OPHTHALMIC LENS | 06-02-2016 |
20160155019 | APPARATUS AND METHOD FOR GENERATING MULTI-VIEWPOINT IMAGE | 06-02-2016 |
20160163067 | APPARATUS FOR AND METHOD OF ESTIMATING DIMENSIONS OF AN OBJECT ASSOCIATED WITH A CODE IN AUTOMATIC RESPONSE TO READING THE CODE - Dimensions of an object associated with an electro-optically readable code are estimated by aiming a handheld device at a scene containing the object supported on a base surface. A scanner on the device scans the scene over a field of view to obtain a position of a reference point of the code associated with the object, and reads the code. A dimensioning sensor on the device captures a three-dimensional (3D) point cloud of data points of the scene in automatic response to the reading of the code. A controller clusters the point cloud into data clusters, locates the reference point of the code in one of the data clusters, extracts from the point cloud the data points of the one data cluster belonging to the object, and processes the extracted data points belonging to the object to estimate the dimensions of the object. | 06-09-2016 |
20160164258 | LASER DEVICE WITH ADJUSTABLE POLARIZATION - The invention describes a laser device ( | 06-09-2016 |
20160165206 | DIGITAL REFOCUSING METHOD - A digital refocusing method includes: a plurality of images corresponding to multiple views of a scene are obtained, the images including a central view image and at least one non-central view image; a pixel shift or a pixel index shift is applied to the non-central view image; a line scan along a pre-determined linear path is performed on the central view image and the non-central view images to obtain corresponding pixels of the central view image and corresponding pixels of the non-central view images; view interpolation is performed based on the disparities defined in a disparity map, and target pixels corresponding to a novel view image are obtained from the corresponding pixels of the central view image and the corresponding pixels of the non-central view images according to a target disparity; and a refocused novel view image is obtained by averaging and compositing the target pixels of the novel views. | 06-09-2016 |
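The final averaging-and-compositing step in 20160165206 is reminiscent of classic shift-and-add light-field refocusing: each view is shifted in proportion to its grid offset from the central view and a target disparity, then all shifted views are averaged. The sketch below shows only that generic operation; the line-scan and view-interpolation stages of the abstract are not reproduced, and the view grid and disparity values are assumptions.

```python
import numpy as np

def refocus_shift_and_add(views, offsets, target_disparity):
    """Refocus a light field by shifting each view and averaging.

    views            : list of (H, W) grayscale images from a camera grid.
    offsets          : list of (du, dv) grid offsets of each view from the
                       central view (the central view has offset (0, 0)).
    target_disparity : disparity (in pixels per unit grid offset) at which
                       the refocused image should be sharp.
    """
    acc = np.zeros_like(views[0], dtype=float)
    for img, (du, dv) in zip(views, offsets):
        # Integer pixel shift proportional to the view's offset from the centre.
        sx = int(round(du * target_disparity))
        sy = int(round(dv * target_disparity))
        acc += np.roll(np.roll(img.astype(float), sy, axis=0), sx, axis=1)
    return acc / len(views)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    base = rng.uniform(0, 255, size=(64, 64))
    # Hypothetical 3-view horizontal light field with 2 px disparity per grid step.
    offsets = [(-1, 0), (0, 0), (1, 0)]
    views = [np.roll(base, -du * 2, axis=1) for du, _ in offsets]
    refocused = refocus_shift_and_add(views, offsets, target_disparity=2)
    print(np.allclose(refocused, base))  # True: the views realign at this disparity
```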
20160171315 | DRIVER ASSISTANCE FOR A VEHICLE | 06-16-2016 |
20160171325 | SYSTEM FOR DETERMINING ALIGNMENT OF A USER-MARKED DOCUMENT AND METHOD THEREOF | 06-16-2016 |
20160171677 | DISPARITY CORRECTING DEVICE IN STEREO VISION AND METHOD THEREOF | 06-16-2016 |
20160171699 | AUTOMATED REGISTRATION OF THREE-DIMENSIONAL VECTORS TO THREE-DIMENSIONAL LINEAR FEATURES IN REMOTELY-SENSED DATA | 06-16-2016 |
20160171703 | CAMERA POSE ESTIMATION APPARATUS AND METHOD | 06-16-2016 |
20160171706 | IMAGE SEGMENTATION USING COLOR & DEPTH INFORMATION | 06-16-2016 |
20160171755 | AUTOMATIC GEOMETRY AND LIGHTING INFERENCE FOR REALISTIC IMAGE EDITING | 06-16-2016 |
20160173849 | Processing of Light Fields by Transforming to Scale and Depth Space | 06-16-2016 |
20160173851 | APPARATUS AND METHOD FOR ADJUSTING DEPTH | 06-16-2016 |
20160180188 | Method for detecting salient region of stereoscopic image | 06-23-2016 |
20160180485 | APPARATUS AND METHOD FOR GENERATING A FINGERPRINT AND IDENTIFYING A THREE-DIMENSIONAL MODEL | 06-23-2016 |
20160180540 | 3D MODEL UPDATES USING CROWDSOURCED VIDEO | 06-23-2016 |
20160182880 | SETTINGS OF A DIGITAL CAMERA FOR DEPTH MAP REFINEMENT | 06-23-2016 |
20160187144 | Selecting Feature Geometries for Localization of a Device - Systems, apparatuses, and methods are provided for developing a fingerprint database and selecting feature geometries for determining the geographic location of a device. A device collects a depth map at a location in a path network. Two-dimensional feature geometries from the depth map are extracted using a processor of the device. The extracted feature geometries are ranked to provide ranking values for the extracted feature geometries. A portion of the extracted feature geometries are selected based upon the ranking values and a geographic distribution of the extracted feature geometries. | 06-30-2016 |
20160188955 | SYSTEM AND METHOD FOR DETERMINING DIMENSIONS OF AN OBJECT IN AN IMAGE - An information handling system includes a three dimensional camera and a processor. The three dimensional camera is configured to capture a three dimensional image. The processor is configured to communicate with the three dimensional camera. The processor to provide the three dimensional image to be displayed on a display screen of the information handling system, to determine three dimensional coordinates for an object within the three dimensional image, and to calculate a dimension of the object based on the three dimensional coordinates. | 06-30-2016 |
20160189348 | SYSTEMS AND METHODS FOR PROCESSING MAPPING AND MODELING DATA - A method for post-processing georeferenced mapping data includes providing positioning data indicating a position of a data acquisition system in a defined space at specific moments in time, providing ranging data indicating relative position of objects in the defined space with respect to the data acquisition system at the specific moments in time, performing a smoothing process on the positioning data to determine smoothed best estimate of trajectory (SBET) data for trajectory of the data acquisition system, performing a scan matching process on the SBET data and the ranging data to identify objects and/or object features in the defined space, performing a process to revise the SBET data so that the SBET data aligns with the identified objects and/or object features and storing the revised SBET data with the range data. | 06-30-2016 |
20160189386 | SYSTEM AND METHOD FOR REDEFINING DEPTH-BASED EDGE SNAPPING FOR THREE-DIMENSIONAL POINT SELECTION - An information handling system includes a three dimensional camera and a processor. The three dimensional camera is configured to capture a three dimensional image. The processor is configured to provide the three dimensional image to be displayed on a display screen of the information handling system, to detect a selection of a pixel within the three dimensional image, and to redefine the selected pixel to be a second pixel. The second pixel has a large disparity within the three dimensional image or its two dimensional counterpart. | 06-30-2016 |
20160191889 | STEREO VISION SOC AND PROCESSING METHOD THEREOF - A stereo vision SoC and a processing method thereof are provided. The stereo vision SoC extracts first support points from an image, adds second support points, performs triangulation based on the first support points and the second support points, and extracts disparity using a result of the triangulation. Accordingly, depth image quality is improved and the hardware is easily implemented in the stereo vision SoC. | 06-30-2016 |
20160191890 | IMAGE PROCESSING APPARATUS - In an image processing apparatus, an image acquiring unit acquires a first image and a second image that form stereoscopic images. A first sub-image extracting unit extracts first sub-images from the first image. A second sub-image extracting unit extracts second sub-images from the second image. A matching unit matches each pair of the first and second sub-images to determine a degree of similarity therebetween. A similar sub-image setting unit sets the second sub-image having a highest degree of similarity to the first sub-image to be a similar sub-image to the first sub-image. A brightness comparing unit compares in brightness each pair of the first and second sub-images. The matching unit is configured to, if a result of comparison made by the brightness comparing unit between a pair of the first and second sub-images is out of a predetermined brightness range, exclude such a pair of the first and second sub-images. | 06-30-2016 |
20160196655 | PHOTOGRAPH LOCALIZATION IN A THREE-DIMENSIONAL MODEL | 07-07-2016 |
20160196657 | METHOD AND SYSTEM FOR PROVIDING DEPTH MAPPING USING PATTERNED LIGHT | 07-07-2016 |
20160196659 | 3D OBJECT SEGMENTATION | 07-07-2016 |
20160253780 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM | 09-01-2016 |
20160253807 | Method and System for Determining 3D Object Poses and Landmark Points using Surface Patches | 09-01-2016 |
20160253814 | PHOTOGRAMMETRIC METHODS AND DEVICES RELATED THERETO | 09-01-2016 |
20160255329 | IMAGE PROCESSING METHOD AND ELECTRONIC DEVICE SUPPORTING THE SAME | 09-01-2016 |
20160255334 | GENERATING AN IMPROVED DEPTH MAP USING A MULTI-APERTURE IMAGING SYSTEM | 09-01-2016 |
20160378061 | METHOD AND DEVICE FOR VERIFYING DIFFRACTIVE ELEMENTS - The invention relates to a method for authenticating a diffractive element, e.g., a hologram, on an object or document ( | 12-29-2016 |
20160379341 | SYSTEM AND A METHOD FOR DEPTH-IMAGE-BASED RENDERING - A method for depth-image-based rendering, the method comprising the steps of: obtaining a first reference view; obtaining a depth map for the first reference view; obtaining a second reference view; obtaining a depth map for the second reference view; the method further comprising the steps of extracting noise present in the first and the second reference views; denoising the first and the second reference views and, based on the denoised first and second reference views, rendering an output view using depth-image-based rendering; adding the extracted noise to the output view. | 12-29-2016 |
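The noise-handling idea in 20160379341 (extract noise from the reference views, render from the denoised views, then add the extracted noise back to the output) can be sketched independently of any particular renderer. Below, denoising is a plain Gaussian blur and the rendering step is a placeholder blend of the two denoised references; both are stand-ins, not the method of the publication.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise(view, sigma=1.5):
    """Placeholder denoiser: Gaussian low-pass filter."""
    return gaussian_filter(view, sigma=sigma)

def render_output_view(denoised_left, denoised_right):
    """Placeholder for depth-image-based rendering: a plain 50/50 blend.

    A real DIBR step would warp both references into the target viewpoint
    using their depth maps before blending.
    """
    return 0.5 * denoised_left + 0.5 * denoised_right

def dibr_with_noise_reinjection(left, right):
    # 1. Extract the noise component of each reference view.
    left_dn, right_dn = denoise(left), denoise(right)
    noise_left, noise_right = left - left_dn, right - right_dn

    # 2. Render the output view from the denoised references.
    output = render_output_view(left_dn, right_dn)

    # 3. Re-inject the extracted noise so the output keeps the references' texture.
    return output + 0.5 * (noise_left + noise_right)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    clean = np.tile(np.linspace(0, 1, 128), (96, 1))
    left = clean + rng.normal(0, 0.02, clean.shape)
    right = clean + rng.normal(0, 0.02, clean.shape)
    print(dibr_with_noise_reinjection(left, right).shape)  # (96, 128)
```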
20170236286 | Determining Depth from Structured Light Using Trained Classifiers | 08-17-2017 |
20170236296 | PROVIDING VOLUME INDICATORS BASED ON RECEIVED IMAGES OF CONTAINERS | 08-17-2017 |
20170237969 | METHOD FOR PREDICTING STEREOSCOPIC DEPTH AND APPARATUS THEREOF | 08-17-2017 |
20180025236 | DETECTING DEVICE, DETECTING METHOD, AND PROGRAM | 01-25-2018 |
20180025496 | SYSTEMS AND METHODS FOR IMPROVED SURFACE NORMAL ESTIMATION | 01-25-2018 |
20180025501 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND, NON-TRANSITORY COMPUTER READABLE MEDIUM | 01-25-2018 |
20180025505 | Image Processing Device, and related Depth Estimation System and Depth Estimation Method | 01-25-2018 |
20180025508 | APPARATUS AND METHOD FOR GENERATING AROUND VIEW | 01-25-2018 |
20180027224 | Systems and Methods for Estimating and Refining Depth Maps | 01-25-2018 |
20190147247 | Systems and Methods for Rapidly Developing Annotated Computer Models of Structures | 05-16-2019 |
20190147609 | System and Method to acquire the three-dimensional shape of an object using a moving patterned substrate | 05-16-2019 |
20190147614 | CLASSIFICATION AND IDENTIFICATION SYSTEMS AND METHODS | 05-16-2019 |
20190147619 | METHOD AND SYSTEM FOR IMAGE GEOREGISTRATION | 05-16-2019 |
20190147622 | UNMANNED AERIAL VEHICLE CALIBRATION METHOD AND SYSTEM BASED ON COLOUR 3D CALIBRATION OBJECT | 05-16-2019 |
20220138977 | TWO-STAGE DEPTH ESTIMATION MACHINE LEARNING ALGORITHM AND SPHERICAL WARPING LAYER FOR EQUI-RECTANGULAR PROJECTION STEREO MATCHING - A system and method is disclosed having an end-to-end two-stage depth estimation deep learning framework that takes one spherical color image and estimates dense spherical depth maps. The contemplated framework may include a view synthesis stage (stage 1) and a multi-view stereo matching stage (stage 2). The combination of the two-stage process may provide the advantage of the geometric constraints from stereo matching to improve depth map quality, without the need for additional input data. It is also contemplated that a spherical warping layer may be used to integrate multiple spherical feature volumes into one cost volume with uniformly sampled inverse depth for the multi-view spherical stereo matching stage. The two-stage spherical depth estimation system and method may be used in various applications including virtual reality, autonomous driving and robotics. | 05-05-2022 |
20220138978 | TWO-STAGE DEPTH ESTIMATION MACHINE LEARNING ALGORITHM AND SPHERICAL WARPING LAYER FOR EQUI-RECTANGULAR PROJECTION STEREO MATCHING - A system and method is disclosed having an end-to-end two-stage depth estimation deep learning framework that takes one spherical color image and estimates dense spherical depth maps. The contemplated framework may include a view synthesis stage (stage 1) and a multi-view stereo matching stage (stage 2). The combination of the two-stage process may provide the advantage of the geometric constraints from stereo matching to improve depth map quality, without the need for additional input data. It is also contemplated that a spherical warping layer may be used to integrate multiple spherical feature volumes into one cost volume with uniformly sampled inverse depth for the multi-view spherical stereo matching stage. The two-stage spherical depth estimation system and method may be used in various applications including virtual reality, autonomous driving and robotics. | 05-05-2022 |
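A spherical warping layer of the kind referenced in 20220138977 and 20220138978 can be pictured as follows, assuming an equirectangular parameterization: for each uniformly sampled inverse depth, back-project every pixel of the reference panorama along its ray, move the resulting 3D point into the source camera's frame, and sample the source features at the corresponding spherical coordinates, yielding one slice of a feature/cost volume. The sketch below is a minimal numpy version of that idea with nearest-neighbour sampling; the coordinate conventions, pose parameterization, and feature shapes are assumptions, not the publications' implementation.

```python
import numpy as np

def equirect_rays(height, width):
    """Unit ray directions for every pixel of an equirectangular image."""
    v, u = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi          # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi          # latitude in (-pi/2, pi/2)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)                     # (H, W, 3)

def spherical_warp(src_feat, R, t, inv_depths):
    """Warp a source spherical feature map to the reference view for each
    uniformly sampled inverse depth, producing a (D, C, H, W) feature volume.

    src_feat   : (C, H, W) features of the source spherical image.
    R, t       : pose mapping reference-frame points into the source frame
                 (a point P_ref maps to R @ P_ref + t in source coordinates).
    inv_depths : (D,) uniformly sampled inverse depths.
    """
    C, H, W = src_feat.shape
    rays = equirect_rays(H, W)                               # reference-view rays
    volume = np.zeros((len(inv_depths), C, H, W), dtype=src_feat.dtype)
    for i, rho in enumerate(inv_depths):
        # Back-project reference pixels at depth 1/rho and move into the source frame.
        P_ref = rays / rho
        P_src = P_ref @ R.T + t
        r = np.linalg.norm(P_src, axis=-1)
        lon = np.arctan2(P_src[..., 0], P_src[..., 2])
        lat = np.arcsin(np.clip(P_src[..., 1] / np.maximum(r, 1e-9), -1.0, 1.0))
        # Map the angles back to source pixel coordinates (nearest-neighbour lookup).
        us = np.clip(np.round((lon + np.pi) / (2 * np.pi) * W - 0.5).astype(int), 0, W - 1)
        vs = np.clip(np.round((np.pi / 2 - lat) / np.pi * H - 0.5).astype(int), 0, H - 1)
        volume[i] = src_feat[:, vs, us]
    return volume

if __name__ == "__main__":
    # Hypothetical setup: 8-channel features, 32x64 panorama, 16 inverse depths.
    rng = np.random.default_rng(4)
    feat = rng.normal(size=(8, 32, 64)).astype(np.float32)
    R, t = np.eye(3), np.array([0.2, 0.0, 0.0])
    vol = spherical_warp(feat, R, t, inv_depths=np.linspace(0.1, 1.0, 16))
    print(vol.shape)  # (16, 8, 32, 64)
```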