Class / Patent application number | Description | Number of patent applications / Date published |
382285000 | Mapping 2-D image onto a 3-D surface | 46 |
20080205791 | METHODS AND SYSTEMS FOR USE IN 3D VIDEO GENERATION, STORAGE AND COMPRESSION - A memory storage device readable by machine is presented, the device tangibly embodying a sequence of depth maps associated with a continuous scene sequence of digital 2D images of a predetermined resolution, the sequence of depth maps including at least one restricted redundancy depth map of a resolution lower than the predetermined resolution of the 2D images. The depth maps may be used for 3D (i.e. stereo) visualization. | 08-28-2008 |
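The restricted-redundancy idea in the abstract above, storing per-frame depth at a lower resolution than the 2D images, can be sketched as a toy illustration. This is not the filing's method; the downsampling factor and the nearest-neighbour upsampling used at display time are assumptions.

```python
# Hypothetical sketch: keep depth at reduced resolution, restore it
# (nearest neighbour) to the 2D image resolution for 3D visualization.

def downsample_depth(depth, factor):
    """Keep every `factor`-th sample in each dimension (restricted redundancy)."""
    return [row[::factor] for row in depth[::factor]]

def upsample_depth(depth, factor, width, height):
    """Nearest-neighbour upsampling back to the image resolution."""
    return [[depth[min(y // factor, len(depth) - 1)][min(x // factor, len(depth[0]) - 1)]
             for x in range(width)] for y in range(height)]

if __name__ == "__main__":
    full = [[x + y for x in range(8)] for y in range(8)]   # toy 8x8 depth map
    small = downsample_depth(full, 4)                       # 2x2: 1/16 the storage
    restored = upsample_depth(small, 4, 8, 8)
    print(len(small), len(small[0]), restored[0][0], restored[7][7])  # 2 2 0 8
```

The restored map is blocky, which is the accepted trade-off: stereo visualization tolerates coarse depth far better than it tolerates coarse color.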
20080219592 | Self-similar ordered microstructural arrays of amphiphilic molecules - The invention pertains, at least in part, to a method for determining the structure of an amphiphilic molecule using Self-Similar Microstructure Arrays. | 09-11-2008 |
20080226194 | SYSTEMS AND METHODS FOR TREATING OCCLUSIONS IN 2-D TO 3-D IMAGE CONVERSION - The present invention is directed to systems and methods for processing 2-D to 3-D images. The system and method include a procedure for optimizing occlusion and/or texturing by creating tolerances in which such texturing need not occur. | 09-18-2008 |
20080232716 | Video Processing Method and Device for Depth Extraction - The present invention provides an improved method and device for generating a depth map (…) | 09-25-2008 |
20080247668 | METHOD FOR RECONSTRUCTING THREE-DIMENSIONAL IMAGES FROM TWO-DIMENSIONAL IMAGE DATA - A method for reconstructing three-dimensional, plural views of images from two dimensional image data. The method includes: obtaining two-dimensional, stereo digital data from images of an object; processing the digital data to generate an initial three-dimensional candidate of the object, such process using projective geometric constraints imposed on edge points of the object; refining the initial candidate comprising examining spatial coherency of neighboring edge points along a surface of the candidate. | 10-09-2008 |
20080260288 | Creating a Depth Map - A method of generating a depth map (…) | 10-23-2008 |
20080310756 | SYSTEM, COMPUTER PROGRAM AND METHOD FOR 3D OBJECT MEASUREMENT, MODELING AND MAPPING FROM SINGLE IMAGERY - A method for deriving three-dimensional measurement information and/or creating three-dimensional models and maps, from single images of at least one three-dimensional object is provided. The method includes the steps of: (a) obtaining at least one two-dimensional single image of the object, the image consisting of image data and being associated with an image geometry model (IGM); (b) deriving three-dimensional coordinate information associated with the image, based on the IGM, and associating the three-dimensional coordinate information with the image data; (c) analyzing the image data so as to: (i) measure the projection of the object using the IGM to derive measurement data including the height and/or point-to-point distances pertaining to the object; and/or (ii) measure the shadow of the object to derive measurement data including the height and/or point-to-point distance pertaining to the object; and (d) obtaining three-dimensional measurements based on the projection and/or shadow measurements of the object. In another aspect, the method includes the further step of creating three-dimensional models or maps based on the projection and/or shadow measurements. A series of algorithms for carrying out the method are also provided, as are a computer system and a related computer program, based on the disclosed method, for deriving three-dimensional measurement information and/or creating three-dimensional models and maps from single images of at least one three-dimensional object. | 12-18-2008 |
20080310757 | System and related methods for automatically aligning 2D images of a scene to a 3D model of the scene - A system and related method for automatically aligning a plurality of 2D images of a scene to a first 3D model of the scene. The method includes providing a plurality of 2D images of the scene, generating a second 3D model of the scene based on the plurality of 2D images, generating a transformation between the second 3D model and the first 3D model based on a comparison of at least one of the plurality of 2D images to the first 3D model, and using the transformation to automatically align the plurality of 2D images to the first 3D model. | 12-18-2008 |
20090003728 | Depth Perception - A method of rendering a multi-view image comprising a first output image and a second output image on the basis of an input image (…) | 01-01-2009 |
20090080803 | IMAGE PROCESSING PROGRAM, COMPUTER-READABLE RECORDING MEDIUM RECORDING THE PROGRAM, IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - Provided is a program that is executed by an image processing apparatus including a memory and a processor, and which generates two-dimensional image data obtained by performing perspective projection to a virtual three-dimensional space on a prescribed perspective projection plane. The program causes the processor to perform processes of (a) arranging a viewpoint in the virtual three-dimensional space, generating basic image data by performing perspective projection on the perspective projection plane set in correspondence to the viewpoint, and storing the basic image data in the memory; (b) setting a concentration map showing a concentration value associated with a partial region of the basic image data, and storing the concentration map in the memory; (c) reading texture data from the memory; and (d) generating the two-dimensional image data by synthesizing the texture data with the basic image data at a ratio according to the concentration value set with the concentration map. | 03-26-2009 |
20090092335 | METHOD AND APPARATUS FOR RECEIVING AND GENERATING IMAGE DATA STREAM INCLUDING PARAMETERS FOR DISPLAYING LOCAL THREE DIMENSIONAL IMAGE - Provided are a method and apparatus for receiving and generating an image data stream including a three dimensional (3D) image. The method of receiving an image data stream includes receiving an image data stream including at least one of two dimensional (2D) and 3D image data periods; extracting local 3D image parameters, which are parameters of each image data period, from the image data stream; and restoring at least one of 2D and 3D images by using the local 3D image parameters. In the method, each 3D image is composed of at least one of a base image and an additional image, and the local 3D image parameters include stereoscopic arrangement order information representing an arrangement order of the base image and additional image of the 3D image. | 04-09-2009 |
20090103833 | Image Playback Device - Conventionally, there has been a problem that a viewer becomes tired when playing back images that mix 2D images and 3D images, because the images change frequently. The device includes a storage part that stores 2D images and 3D images, an image conversion part that converts a 2D image stored in the storage part into a new 3D image, and an image output part that outputs the 3D image stored in the storage part and the new 3D image converted by the image conversion part. Consequently, it is possible to prevent a viewer from tiring when playing back images that mix 2D images and 3D images. | 04-23-2009 |
20090110327 | Semi-automatic plane extrusion for 3D modeling - In accordance with one or more aspects, a plane in a 3D coordinate system in which a 3D model is to be generated based on one or more 2D images is identified. A direction of extrusion for the plane is also identified. Additionally, a user identification of a region of interest on a 2D image is received and projected onto the plane. A location in the 3D model of the region of interest is then automatically identified by extruding the plane along the direction of extrusion until the region of interest in the plane matches a corresponding region of at least one of the one or more 2D images. | 04-30-2009 |
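The extrusion search in the abstract above can be sketched under strong simplifying assumptions: a fronto-parallel plane, a pinhole camera with known focal length, and "matching" reduced to the projected size of the region equalling its observed size in pixels. None of these specifics come from the filing.

```python
# Hedged sketch: sweep the plane away from the camera until the region's
# projected size matches its observed size in the 2D image.

def projected_size(world_size, depth, f=800.0):
    """Pinhole model: image size in pixels of an object of width world_size at depth."""
    return f * world_size / depth

def extrude_until_match(world_size, observed_px, f=800.0, step=0.01):
    """Increase the extrusion depth until the projection no longer exceeds the observation."""
    depth = step
    while projected_size(world_size, depth, f) > observed_px:
        depth += step
    return depth

if __name__ == "__main__":
    # A 2 m wide region observed as 400 px wide with f = 800 px sits near 4 m away.
    print(round(extrude_until_match(2.0, 400.0), 2))
```

A real system would compare image content (e.g. reprojection error over the region), not just size, but the sweep-and-test structure is the same.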
20090161989 | Method, medium, and apparatus representing adaptive information of 3D depth image - A method, medium, and apparatus for processing depth information of a depth image. The apparatus for adaptively presenting depth information includes a section determination unit to determine which one of plural sections respective depth values for pixels of the 3D image fall within, with the plural sections being defined by a measured limit distance for the 3D image being parsed into the plural sections based on distance-based depth resolution information, and an adaptive quantization unit to selectively quantize and represent each depth value based on a respective predefined quantization setting of the one section. | 06-25-2009 |
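The section-wise quantization described above can be illustrated with a small sketch: near sections get fine quantization steps, far sections coarse ones, reflecting how depth resolution falls with distance. The section boundaries and step sizes below are illustrative assumptions, not values from the filing.

```python
# Hedged sketch: quantize a depth value with the step assigned to its section.

def quantize_depth(d, sections):
    """sections: list of (upper_bound, step); snap d to its section's step."""
    for upper, step in sections:
        if d <= upper:
            return round(d / step) * step
    upper, step = sections[-1]
    return round(min(d, upper) / step) * step  # clamp to the measured limit distance

SECTIONS = [(1.0, 0.01), (3.0, 0.05), (10.0, 0.5)]  # metres: (bound, step)

if __name__ == "__main__":
    print(quantize_depth(0.123, SECTIONS))  # fine step near the camera
    print(quantize_depth(2.47, SECTIONS))
    print(quantize_depth(7.8, SECTIONS))    # coarse step far away
```

The payoff is the same as in the abstract: fewer distinct codes are spent where the sensor cannot resolve depth anyway.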
20090180712 | REPRODUCIBLE THREE DIMENSIONAL VACUUM FORMING TECHNIQUE - A method of reproducing a three dimensional (3D) image by counter-distorting a two dimensional (2D) image prior to vacuum forming. A captured or obtained image of a subject is digitalized into 3D and 2D formats and used to create a 3D surface using a CNC machine. A standardized grid pattern with numerous reference points is printed on a vacuum formable material and vacuum formed on the 3D surface representing a subject. The reference points on the grid are displaced during the vacuum forming process due to the 3D nature of the surface. If the image of the subject were printed on the vacuum formable material, it would appear distorted. The displaced reference points are observed and the data is entered into the inventive software which generates a new image with compensated morphological changes. When the new image is vacuum formed on vacuum formable material under the same conditions, the new image would not appear distorted and would accurately depict the subject in 3D. | 07-16-2009 |
20090245690 | Auto-calibration method for a projector-camera system - A method for self-recalibration of a structured light vision system including a camera and a projector. A camera plane and a projector plane are defined, a Homography matrix between the camera plane and the projector plane is computed, and a translation vector and a rotation matrix are determined from Homography-based constraints. A computer vision system implementing the method is also described. | 10-01-2009 |
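The Homography matrix mentioned above maps points between the camera plane and the projector plane. As a minimal illustration (not the filing's recalibration procedure), applying a 3x3 homography in homogeneous coordinates looks like this; the example matrix is a toy assumption.

```python
# Illustrative sketch: map a camera-plane point to the projector plane
# through a 3x3 homography H, then dehomogenize.

def apply_homography(H, x, y):
    """Return the dehomogenized image of (x, y) under H."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# Toy scale-plus-translation homography (last row [0, 0, 1] keeps it affine).
H = [[2.0, 0.0, 10.0],
     [0.0, 2.0, -5.0],
     [0.0, 0.0, 1.0]]

if __name__ == "__main__":
    print(apply_homography(H, 3.0, 4.0))  # -> (16.0, 3.0)
```

In the actual method, H would be estimated from projected-pattern correspondences, and the rotation and translation are then factored out of constraints on H.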
20090245691 | ESTIMATING POSE OF PHOTOGRAPHIC IMAGES IN 3D EARTH MODEL USING HUMAN ASSISTANCE - The pose of a photographic image of a portion of Earth may be estimated using human assistance. A 3D graphics engine may render a virtual image of Earth from a controllable viewpoint based on 3D data that is representative of a 3D model of at least a portion of Earth. A user may locate and display a corresponding virtual image of Earth at a viewpoint that approximately corresponds to the pose of the photographic image by manipulating user controls. The photographic image and the corresponding virtual image may be overlaid on one another so that both images can be seen at the same time. The user may adjust the pose of one of the images while overlaid on the other image by manipulating user controls so that both images appear to substantially align with one another. The settings of the user controls may be converted to pose data that is representative of the pose of the photographic image within the 3D model. | 10-01-2009 |
20090252436 | Method for visualization of multidimensional data - The method provides a visualization technique for rendering multidimensional data points as 2D curves on a 3D plot with the third dimension representing their order in the multidimensional data set. The technique uses colour palettes to render individual data curves, which enables visual analysis of the entire dataset based on the colour characteristics of the resulting image. The method also suggests a technique for: a) visualizing a distance between multidimensional data points; b) showing a linear segment between two multidimensional data points; c) displaying a colour map of an individual multidimensional point or data set; d) displaying a multidimensional data interval. | 10-08-2009 |
20090274391 | INTERMEDIATE POINT BETWEEN IMAGES TO INSERT/OVERLAY ADS - The claimed subject matter provides a system and/or a method that facilitates simulating a portion of 2-dimensional (2D) data for implementation within a 3-dimensional (3D) virtual environment. A 3D virtual environment can enable a 3D exploration of a 3D image constructed from a collection of two or more 2D images, where the 3D image is constructed by combining the two or more 2D images based upon a respective image perspective. An analyzer can evaluate the collection of two or more 2D images to identify a portion of the 3D image that is unrepresented by the combined two or more 2D images. A synthetic view generator can create a simulated synthetic view for the portion of the 3D image that is unrepresented; the simulated synthetic view replicates a 2D image with a respective image perspective for the unrepresented portion of the 3D image. | 11-05-2009 |
20090290811 | SYSTEM AND METHOD FOR GENERATING A MULTI-DIMENSIONAL IMAGE - A system and method for generating a multi-dimensional image of an object in a scene is disclosed. One inventive aspect includes a spectral estimation module configured to convert a two-dimensional (2D) high-resolution light intensity image of the scene to a spectral-augmented image of a selected channel. The system further includes a high-resolution depth image generation module configured to generate a high-resolution depth image of the object based on a three-dimensional (3D) low-resolution depth image of the scene and the spectral-augmented image. | 11-26-2009 |
20090297061 | REPLACING IMAGE INFORMATION IN A CAPTURED IMAGE - Described herein are systems and methods for expanding upon the single-distance-based background denotation to seamlessly replace unwanted image information in a captured image derived from an imaging application so as to account for a selected object's spatial orientation to maintain an image of the selected object in the captured image. | 12-03-2009 |
20100014781 | Example-Based Two-Dimensional to Three-Dimensional Image Conversion Method, Computer Readable Medium Therefor, and System - An example-based 2D to 3D image conversion method, a computer readable medium therefor, and a system are provided. The embodiments are based on an image database with depth information or with which depth information can be generated. With respect to a 2D image to be converted into 3D content, a matched background image is found from the database. In addition, graph-based segmentation and comparison techniques are employed to detect the foreground of the 2D image so that the relative depth map can be generated from the foreground and background information. Therefore, the 3D content can be provided with the 2D image plus the depth information. Thus, users can rapidly obtain the 3D content from the 2D image automatically and the rendering of the 3D content can be achieved. | 01-21-2010 |
20100080489 | Hybrid Interface for Interactively Registering Images to Digital Models - The first image may be displayed adjacent to the second image, where the second image is a three-dimensional image. An element may be selected in the first image and a matching element may be selected in the second image. A selection may be permitted to view a merged view, where the merged view is the first image displayed over the second image by varying the opaqueness of the images. If the merged view is not acceptable, the method may repeat; if the merged view is acceptable, the first view may be registered onto the second view and the merged view may be stored as a merged image. | 04-01-2010 |
20100092105 | Information processing apparatus, information processing method, and program - An information processing apparatus comprises a data conversion unit for converting second 3D image information, to which first image information can be pasted, into 3D photo frame data including three-dimensional object information representing a three-dimensional shape of an object included in the second 3D image information and parameter information including a pasting position of the first image information, a parse calculation unit for calculating an image of the 3D photo frame data projected onto a display screen, an image pasting unit for pasting the first image information to the 3D photo frame data, and a display control unit for outputting to the display screen the 3D photo frame data or the 3D photo frame data pasted with the first image information. | 04-15-2010 |
20100104219 | IMAGE PROCESSING METHOD AND APPARATUS - An image processing method including: obtaining points on left-eye and right-eye images to be generated from a two-dimensional (2D) image, to which a predetermined pixel of the 2D image is to be mapped, by using the sizes of holes to be generated in the left-eye and right-eye images; and generating the left-eye and right-eye images respectively having the obtained points to which the predetermined pixel of the 2D image is mapped. | 04-29-2010 |
20100111444 | METHOD AND SYSTEM FOR FAST DENSE STEREOSCOPIC RANGING - A stochastic method and system for fast stereoscopic ranging includes selecting a pair of images for stereo processing, in which the pair of images are a frame pair and one of the images is a reference frame, seeding estimated values for a range metric at each pixel of the reference frame, initializing one or more search stage constraints, stochastically computing local influence for each valid pixel in the reference frame, aggregating local influences for each valid pixel in the reference frame, refining the estimated values for the range metric at each valid pixel in the reference frame based on the aggregated local influence, and post-processing range metric data. A valid pixel is a pixel in the reference frame that has a corresponding pixel in the other frame of the frame pair. The method repeats the steps from the stochastic computation through the post-processing for n iterations. | 05-06-2010 |
20100135596 | DIFFUSION-BASED INTERACTIVE EXTRUSION OF TWO-DIMENSIONAL IMAGES INTO THREE-DIMENSIONAL MODELS - Methods and systems for creating three-dimensional models from two-dimensional images are provided. According to one embodiment, a method of creating an inflatable icon involves a vectorizing module polygonizing an input image to produce an inflatable image by representing a set of pixels making up the input image as polygons. The inflatable image is then extruded by an extrusion module by generating appropriate z-coordinate values for a reference point associated with each polygon of the inflatable image based upon a biased diffusion process. End-user controlled pressure modulation is supported by an interface module by (i) adjusting one or more modulation functions employed by the biased diffusion process based upon end-user input regarding relative modulation bias for a selected set of one or more pixels associated with the inflatable image or (ii) applying the biased diffusion process to only the selected set of one or more pixels. | 06-03-2010 |
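The biased-diffusion extrusion described above can be sketched in a much-simplified form: interior z-values relax toward the average of their neighbours plus a constant "pressure" term, which inflates the masked region like a balloon. The constant-pressure Jacobi relaxation below is an assumed stand-in for the filing's modulated diffusion, and the mask is a toy example.

```python
# Hedged sketch: inflate a 2D mask into z-values by pressure-biased diffusion.

def inflate(mask, iterations=200, pressure=1.0):
    """Relax z = avg(4-neighbours) + pressure inside the mask; boundary stays 0."""
    h, w = len(mask), len(mask[0])
    z = [[0.0] * w for _ in range(h)]
    for _ in range(iterations):
        nz = [row[:] for row in z]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if mask[y][x]:
                    nz[y][x] = (z[y - 1][x] + z[y + 1][x]
                                + z[y][x - 1] + z[y][x + 1]) / 4 + pressure
        z = nz
    return z

if __name__ == "__main__":
    mask = [[0] * 5] + [[0, 1, 1, 1, 0] for _ in range(3)] + [[0] * 5]
    z = inflate(mask)
    print(z[2][2], z[1][1])  # centre inflates highest; edges stay lower
```

End-user pressure modulation, as in the abstract, would amount to making `pressure` a per-pixel function instead of a constant.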
20100142852 | Image Analysis System and Image Analysis Program - There is provided an image analysis system which captures image data of an arbitrary pair of a first image RI and a second image LI among images obtained by color-photographing a single object from different positions into an analysis computer, wherein the computer includes corresponding point extraction means for assigning a weighting factor to a pixel information value based on the contrast size of the pixel information value in each of a first local area ROI (…) | 06-10-2010 |
20100166338 | IMAGE PROCESSING METHOD AND APPARATUS THEREFOR - An image processing method, including extracting compensation information comprising one from among a depth compensation value and a depth value compensated for by using the depth compensation value; when the compensation information comprises the depth compensation value, compensating for a depth value to be applied to a pixel of a two-dimensional (2D) image by using the depth compensation value, and generating a depth map about the 2D image by using the compensated depth value, and when the compensation information comprises the compensated depth value, generating the depth map about the 2D image by using the compensated depth value; obtaining positions in a left-eye image and a right-eye image by using the depth map, wherein the pixel of the 2D image is mapped to the positions; and generating the left-eye image and the right-eye image comprising the positions to which the pixel is mapped. | 07-01-2010 |
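The last step of the abstract above, mapping each 2D pixel to positions in the left-eye and right-eye images using its (compensated) depth, is the core of depth-image-based rendering. A minimal sketch follows; the inverse-depth disparity model and the scale constant are assumptions, not the filing's formula.

```python
# Hedged sketch: a pixel at column x with depth d shifts by +/- disparity/2
# in the left and right eye images; nearer pixels shift more.

def eye_positions(x, depth, scale=32.0):
    """Return (x_left, x_right) for a pixel, from its compensated depth value."""
    disparity = scale / max(depth, 1e-6)
    return x + disparity / 2.0, x - disparity / 2.0

if __name__ == "__main__":
    print(eye_positions(100, 4.0))   # near pixel: (104.0, 96.0)
    print(eye_positions(100, 32.0))  # far pixel:  (100.5, 99.5)
```

The depth-compensation step in the abstract would adjust `depth` before this mapping, so that the same pixel lands at corrected positions in both views.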
20100232727 | CAMERA POSE ESTIMATION APPARATUS AND METHOD FOR AUGMENTED REALITY IMAGING - An apparatus for providing an estimate for a 3D camera pose relative to a scene from 2D image data of 2D image frame provided by said camera. A candidate 2D key points detector determines candidate 2D key points from the 2D image frame. A detected 3D observations detector determines detected 3D observations from pre-recorded scene data and the candidate 2D key points. A detected 3D camera pose estimator determines a detected 3D camera pose estimate from the camera data, the detected 3D observations and the candidate 2D key points. A first storage stores the detected 2D candidate key points and the 2D image data, and outputs in response to a 3D camera pose estimate output previous 2D image data and candidate 2D key points related to a previous 3D camera pose estimate output. A second storage stores and outputs a previous 3D camera pose estimate. A tracked 3D observations detector determines tracked 3D observations from the 2D image data, the candidate 2D key points, the camera data, the previous 2D image data and candidate 2D key points, the previous 3D camera pose estimate and 3D scene model data. A pose estimate selector outputs a selected one of the detected camera pose estimate and the previous 3D camera pose estimate. A 3D camera pose estimator computes and outputs the 3D camera pose estimate from the camera data, the detected 3D observations, the tracked 3D observations and the selected 3D camera pose estimate. | 09-16-2010 |
20100254627 | IMAGE PROCESSING APPARATUS AND COMPUTER PROGRAM - The present invention relates to an image processing apparatus for compressing image data used in an image generating apparatus for generating a free-viewpoint image. According to the invention, the apparatus has a selecting unit that selects one image as a first image, and defines other images as second images, a projective transformation unit that generates a projected depth map of a second image from a depth map of the first image, a subtracting unit that creates a difference map of the second image, and a storage unit that stores the depth map of the first image and the difference map of the second image. Here, the difference map is a difference between a depth map of the second image and the projected depth map of the second image, and the depth map indicates a depth value of each pixel of a corresponding image. | 10-07-2010 |
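The storage scheme above, keeping the first view's depth map plus only a per-pixel difference for each second view, can be sketched as follows. The "projected" depth map is shown here as given data; in the real apparatus it would come from reprojecting the first view's depth through the camera geometry.

```python
# Hedged sketch: difference-map compression of a second view's depth map.

def difference_map(depth2, projected2):
    """Per-pixel difference between the true and the projected depth map."""
    return [[a - b for a, b in zip(r2, rp)] for r2, rp in zip(depth2, projected2)]

def reconstruct(projected2, diff):
    """Recover the second view's depth map from the projection plus the difference."""
    return [[b + d for b, d in zip(rp, rd)] for rp, rd in zip(projected2, diff)]

if __name__ == "__main__":
    projected = [[10, 10], [12, 12]]          # depth of view 2 predicted from view 1
    actual = [[10, 11], [12, 14]]             # true depth of view 2
    diff = difference_map(actual, projected)  # mostly zeros, so it compresses well
    print(diff, reconstruct(projected, diff) == actual)
```

The compression win comes from the difference map being near zero wherever the projection predicts the second view's depth accurately.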
20100266220 | FEATURES-BASED 2D-3D IMAGE REGISTRATION - An image registration apparatus comprises: a features detector (…) | 10-21-2010 |
20100322535 | IMAGE TRANSFORMATION METHOD ADAPTED TO COMPUTER PROGRAM PRODUCT AND IMAGE DISPLAY DEVICE - An image transformation method for use in a computer program product and an image display device is provided. In the image transformation method, a two dimensional image and a corresponding depth image are acquired first. A motion process is performed on the two dimensional image to obtain a plurality of motion images according to the depth image and a plurality of gain values. Then, a plurality of view images are provided and an interpolation process is performed on each motion image to obtain the corresponding view image. Finally, a synthesis process is performed on the view images to obtain a three dimensional image. | 12-23-2010 |
20110002557 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - Disclosed are an image processing apparatus and an image processing method for producing a three-dimensional image frame. The image processing method includes: applying an offset value to an object abstracted from a two-dimensional image frame; obtaining, from another image frame, image information corresponding to the area distorted in the two-dimensional image frame by the offset value; and compensating the distorted area with the obtained image information. | 01-06-2011 |
20110123135 | METHOD AND DEVICE OF MAPPING AND LOCALIZATION METHOD USING THE SAME - A mapping method is provided. The environment is scanned to obtain depth information of environmental obstacles. The image of the environment is captured to generate an image plane. The depth information of environmental obstacles is projected onto the image plane, so as to obtain projection positions. At least one feature vector is calculated from a predetermined range around each projection position. The environmental obstacle depth information and the environmental feature vector are merged to generate a sub-map at a certain time point. Sub-maps at all time points are combined to generate a map. In addition, a localization method using the map is also provided. | 05-26-2011 |
20110170799 | TECHNIQUES FOR DENSITY MAPPING - Techniques in a data processor for drawing a density surface on a map in a manner that more accurately accounts for projection distortion in the map. According to one embodiment, data is maintained that represents a geotagged event. A map plane is divided into a plurality of cells and an origin cell corresponding to the geotagged event is identified. Density values are allocated to cells surrounding the origin cell based on geodetic distances between geographic coordinates corresponding to surrounding cells and the geographic coordinate of the geotagged event. A density surface based on the cell allocations is then displayed on a map. By allocating density values to cells based on geodetic distances, the resulting density surface displayed on the map more accurately accounts for projection distortions in the area of the map on which density surface is displayed. | 07-14-2011 |
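The key idea above is allocating density by geodetic (great-circle) distance rather than on-screen distance, so the kernel stays distance-true where the map projection distorts. A sketch follows; the haversine distance is standard, while the linear falloff kernel and its radius are illustrative assumptions.

```python
# Hedged sketch: density contribution of a geotagged event to a map cell,
# computed from great-circle distance on a spherical Earth (R = 6371 km).
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def cell_density(event, cell_center, radius_km=100.0):
    """Linear falloff: 1 at the event, 0 at radius_km and beyond."""
    d = haversine_km(*event, *cell_center)
    return max(0.0, 1.0 - d / radius_km)

if __name__ == "__main__":
    event = (48.8566, 2.3522)                            # Paris
    print(round(cell_density(event, event), 3))          # 1.0 at the origin cell
    print(round(cell_density(event, (48.8566, 3.0)), 3)) # decays with distance
```

Two cells the same number of pixels away from the event can receive different density values, which is exactly the projection-distortion correction the abstract describes.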
20120099804 | Generating Three-Dimensional Virtual Tours From Two-Dimensional Images - Interactive three-dimensional (3D) virtual tours are generated from ordinary two-dimensional (2D) still images such as photographs. Two or more 2D images are combined to form a 3D scene, which defines a relationship among the 2D images in 3D space. 3D pipes connect the 2D images with one another according to defined spatial relationships and for guiding virtual camera movement from one image to the next. A user can then take a 3D virtual tour by traversing images within the 3D scene, for example by moving from one image to another, either in response to user input or automatically. In various embodiments, some or all of the 2D images can be selectively distorted to enhance the 3D effect, and thereby reinforce the impression that the user is moving within a 3D space. Transitions from one image to the next can take place automatically without requiring explicit user interaction. | 04-26-2012 |
20120183238 | Rapid 3D Face Reconstruction From a 2D Image and Methods Using Such Rapid 3D Face Reconstruction - Creating a 3D face reconstruction model using a single 2D image and a generic facial depth map that provides depth information. In one example, the generic facial depth map is selected based on gender and ethnicity/race. In one embodiment, a set of facial features of the 2D image is mapped to create a facial-feature map, and a 2D mesh is created using the map. The same set of facial features is also mapped onto a generic facial depth map, and a 3D mesh is created therefrom. The 2D image is then warped by transposing depth information from the 3D mesh of the generic facial depth map onto the 2D mesh of the 2D image so as to create a reconstructed 3D model of the face. The reconstructed 3D model can be used, for example, to create one or more synthetic off-angle-pose images of the subject of the original 2D image. | 07-19-2012 |
20140169699 | PANORAMIC IMAGE VIEWER - A viewer relying on a conformal projection process to preserve local shapes is provided, employing a rotated cylindrical mapping. In the image generation process, the source panoramic image, which can be elliptical, is placed on a sphere according to the angular location of pixels in the panomorph. The sphere is rotated around its center to a desired orientation before being projected to a cylinder, also centered at the sphere's center, with its longitudinal axis along the sphere's z-axis. The projected image on the cylinder is unwrapped and displayed by the viewer. | 06-19-2014 |
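One building block of such a viewer is a conformal sphere-to-cylinder mapping; the classic example is the Mercator projection, which preserves local shapes. The sketch below shows only that mapping; the sphere rotation step from the abstract is omitted, and using Mercator specifically is an assumption about the conformal projection involved.

```python
# Hedged sketch: conformal cylindrical (Mercator-style) projection of a
# point on the sphere, given as longitude/latitude in radians.
import math

def sphere_to_cylinder(lon_rad, lat_rad):
    """Map sphere coordinates to the unwrapped cylinder; preserves local shapes."""
    x = lon_rad                                        # cylinder angle
    y = math.log(math.tan(math.pi / 4 + lat_rad / 2))  # conformal stretch with latitude
    return x, y

if __name__ == "__main__":
    print(sphere_to_cylinder(0.0, 0.0))  # equator maps to y close to 0
    x, y = sphere_to_cylinder(math.pi / 2, math.pi / 4)
    print(round(x, 4), round(y, 4))
```

Rotating the sphere before applying this mapping, as the abstract describes, re-centres the view without changing the conformal property.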
20140363100 | METHOD AND APPARATUS FOR REAL-TIME CONVERSION OF 2-DIMENSIONAL CONTENT TO 3-DIMENSIONAL CONTENT - Various aspects of a method and apparatus for video processing may include a computing device communicably coupled to an external device. The computing device may be operable to determine an average vertical velocity and an average horizontal velocity of a subset of pixels in an image frame and determine a depth value for each pixel of the subset of pixels based on calculated motion vectors of the pixel of the subset of pixels, the average vertical velocity of the subset of pixels and the average horizontal velocity of the subset of pixels. | 12-11-2014 |
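The velocity-averaging step above can be illustrated with a small sketch. The depth rule used here, treating pixels whose motion deviates most from the frame's average velocity as nearer, is an assumed motion-parallax model, not the filing's exact computation.

```python
# Hedged sketch: average motion of a pixel subset, and a toy depth cue
# from each pixel's deviation from that average.

def average_velocity(vectors):
    """Mean (horizontal, vertical) velocity over a subset of pixels."""
    n = len(vectors)
    return (sum(v[0] for v in vectors) / n, sum(v[1] for v in vectors) / n)

def depth_values(vectors):
    """Larger deviation from the average motion -> treated as nearer (larger value)."""
    ax, ay = average_velocity(vectors)
    return [abs(vx - ax) + abs(vy - ay) for vx, vy in vectors]

if __name__ == "__main__":
    motion = [(1.0, 0.0), (1.0, 0.0), (5.0, 0.0)]  # one fast-moving pixel
    print(average_velocity(motion), depth_values(motion))
```

The averaging acts as a cheap stand-in for camera motion, so per-pixel deviation approximates object motion relative to the scene, which is what makes real-time operation plausible.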
20150063721 | EFFICIENT VISUAL SURFACE FINDING - A structure for determining a plane in a depth image includes dividing a portion of a depth image into a plurality of areas, fitting a two-dimensional line to depth points in each of the plurality of areas, and combining two or more of the plurality of two-dimensional lines to form a three-dimensional plane estimate. | 03-05-2015 |
20150063722 | EFFICIENT VISUAL SURFACE FINDING - A method and non-transitory program for determining a plane in a depth image includes dividing a portion of a depth image into a plurality of areas, fitting a two-dimensional line to depth points in each of the plurality of areas, and combining two or more of the plurality of two-dimensional lines to form a three-dimensional plane estimate. | 03-05-2015 |
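The two entries above share one pipeline: fit a 2D line to the depth points of each area, then combine the lines into a plane estimate. The sketch below uses one line per image row and then fits the row intercepts against the row index; that specific combination rule is a simplifying assumption, not the filings' procedure.

```python
# Hedged sketch: per-row least-squares lines combined into a plane
# z = a*x + b*y + c over a depth image depth[y][x].

def fit_line(xs, zs):
    """Least-squares fit z = m*x + k; returns (m, k)."""
    n = len(xs)
    mx, mz = sum(xs) / n, sum(zs) / n
    m = sum((x - mx) * (z - mz) for x, z in zip(xs, zs)) / sum((x - mx) ** 2 for x in xs)
    return m, mz - m * mx

def plane_from_strips(depth):
    """Combine per-row line fits into (a, b, c) with z = a*x + b*y + c."""
    lines = [fit_line(range(len(row)), row) for row in depth]
    a = sum(m for m, _ in lines) / len(lines)              # shared x-slope
    b, c = fit_line(range(len(lines)), [k for _, k in lines])  # intercepts linear in y
    return a, b, c

if __name__ == "__main__":
    plane = [[2 * x + 3 * y + 1 for x in range(5)] for y in range(4)]
    print(plane_from_strips(plane))  # close to (2.0, 3.0, 1.0)
```

The efficiency claim in the titles comes from this decomposition: many cheap 1D fits replace one full 3D plane fit over all depth points.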
20150110420 | IMAGE PROCESSING METHOD AND SYSTEM USING THE SAME - An image processing method and system using the same, wherein the image processing method includes capturing a plurality of images corresponding to the surroundings of an object using a plurality of image capturing modules to generate a two-dimension planar image; providing a three-dimension projected curved surface; defining a plurality of first grids on the three-dimension projected curved surface and a plurality of second grids on the two-dimension planar image, wherein each of the first grids corresponds to each of the second grids; transforming the first grids on the three-dimension projected curved surface and the second grids on the two-dimension planar image into a plurality of first redrawn grids and second redrawn grids respectively based on the angles formed between the normal vector of the two-dimension planar image and the normal vector of each first grid, wherein each first redrawn grid corresponds to each second redrawn grid. | 04-23-2015 |
20160070969 | METHOD AND SYSTEM FOR ALIGNING AND CLASSIFYING IMAGES - In one embodiment, L dimensional images are trained, mapped, and aligned to an M dimensional topology to obtain azimuthal angles. The aligned L dimensional images are then trained and mapped to an N dimensional topology to obtain 2… | 03-10-2016 |
20160104314 | INFORMATION PROCESSING APPARATUS AND METHOD THEREOF - A two-dimensional image obtained by capturing a scene including an object is obtained. Parameters indicating capturing position and capturing orientation of the two-dimensional image are obtained. A three-dimensional shape model representing a three-dimensional shape of the object is obtained. Two-dimensional geometrical features of the object are extracted from the two-dimensional image. Three-dimensional information with respect to a surface of the object close to each of the two-dimensional geometrical features is calculated from the three-dimensional shape model. Three-dimensional geometrical features in the three-dimensional shape model, corresponding to the two-dimensional geometrical features, are calculated based on the two-dimensional geometrical features, the parameters, and the calculated three-dimensional information. | 04-14-2016 |
20160171335 | 3D ROTATIONAL PRESENTATION GENERATED FROM 2D STATIC IMAGES | 06-16-2016 |