Patent application number | Description | Published |
20110060556 | Method for Registering 3D Points with 3D Planes - Three-dimensional points acquired of an object in a sensor coordinate system are registered with planes modeling the object in a world coordinate system by determining correspondences between the points and the planes. Points are transformed to an intermediate coordinate system using the correspondences and transformation parameters. The planes are transformed to an intermediate world coordinate system using world rotation and translation parameters. Intermediate rotation and translation parameters are determined by applying coplanarity constraints and orthogonality constraints to the points in the intermediate sensor coordinate system and the planes in the intermediate world coordinate system. Then, rotation and translation parameters between the sensor and world coordinate systems are determined to register the points with the planes. | 03-10-2011 |
20110141251 | Method and System for Segmenting Moving Objects from Images Using Foreground Extraction - A set of images is acquired of a scene by a camera. The scene includes a moving object, and a relative difference of a motion of the camera and a motion of the object is substantially zero. Statistical properties of pixels in the images are determined, and a statistical method is applied to the statistical properties to identify pixels corresponding to the object. | 06-16-2011 |
20110211729 | Method for Generating Visual Hulls for 3D Objects as Sets of Convex Polyhedra from Polygonal Silhouettes - A visual hull for a 3D object is generated by using a set of silhouettes extracted from a set of images. First, a set of convex polyhedra is generated as a coarse 3D model of the object. Then for each image, the convex polyhedra are refined by projecting them to the image and determining the intersections with the silhouette in the image. The visual hull of the object is represented as the union of the convex polyhedra. | 09-01-2011 |
20110246130 | Localization in Industrial Robotics Using Rao-Blackwellized Particle Filtering - Embodiments of the invention disclose a system and a method for determining a pose of a probe relative to an object by probing the object with the probe, comprising steps of: determining a probability of the pose using Rao-Blackwellized particle filtering, wherein a probability of a location of the pose is represented by a location of each particle, and a probability of an orientation of the pose is represented by a Gaussian distribution over the orientation of each particle, conditioned on the location of that particle, wherein the determining is performed for each subsequent probing until the probability of the pose concentrates around a particular pose; and estimating the pose of the probe relative to the object based on the particular pose. | 10-06-2011 |
20110276307 | Method and System for Registering an Object with a Probe Using Entropy-Based Motion Selection and Rao-Blackwellized Particle Filtering - A probe is registered with an object by probing the object with the probe at multiple poses, wherein each pose of the probe includes a location and an orientation. A probability distribution of a current location of the probe is represented by a set of particles, and a probability distribution of a current orientation of the probe is represented by a Gaussian distribution for each particle conditioned on the current location. A set of candidate motions is chosen, and for each candidate motion, an expected uncertainty based on the set of particles is determined. The candidate motion with a least expected uncertainty is selected as a next motion of the probe, the probe is moved according to the next motion, and the set of particles is updated using the next pose of the probe. | 11-10-2011 |
20110316968 | Digital Refocusing for Wide-Angle Images Using Axial-Cone Cameras - A single camera acquires an input image of a scene as observed in an array of spheres, wherein pixels in the input image corresponding to each sphere form a sphere image. A set of virtual cameras are defined for each sphere on a line joining a center of the sphere and a center of projection of the camera, wherein each virtual camera has a different virtual viewpoint and an associated cone of rays, appearing as a circle of pixels on its virtual image plane. A projective texture mapping of each sphere image is applied to all of the virtual cameras on the virtual image plane to produce a virtual camera image comprising a circle of pixels. Each virtual camera image for each sphere is then projected to a refocusing geometry using a refocus viewpoint to produce wide-angle lightfield views, which are averaged to produce a refocused wide-angle image. | 12-29-2011 |
20120002304 | Method and System for Determining Projections in Non-Central Catadioptric Optical Systems - Embodiments of the invention disclose a system and a method for determining a three-dimensional (3D) location of a folding point of a ray between a point in a scene (PS) and a center of projection (COP) of a camera of a catadioptric system. One embodiment maps the catadioptric system, including 3D locations of the PS and the COP, onto a two-dimensional (2D) plane defined by an axis of symmetry of a folding optical element and the PS to produce a conic and 2D locations of the PS and COP on the 2D plane, and determines a 2D location of the folding point on the 2D plane based on the conic and the 2D locations of the PS and the COP. Next, the embodiment determines the 3D location of the folding point from the 2D location of the folding point on the 2D plane. | 01-05-2012 |
20120250977 | Method and System for Determining Projections in Non-Central Catadioptric Optical Systems - A three-dimensional (3D) location of a reflection point of a ray between a point in a scene (PS) and a center of projection (COP) of a camera of a catadioptric system is determined. The catadioptric system is non-central and includes the camera and a reflector, wherein a surface of the reflector is a quadric surface rotationally symmetric around an axis of symmetry. The 3D location of the reflection point is determined based on a law of reflection, an equation of the reflector, and an equation describing a reflection plane defined by the COP, the PS, and a point of intersection of a normal to the reflector at the reflection point with the axis of symmetry. | 10-04-2012 |
20130010067 | Camera and Method for Focus Based Depth Reconstruction of Dynamic Scenes - A dynamic scene is reconstructed as depths and an extended depth of field (EDOF) video by first acquiring, with a camera including a lens and sensor, a focal stack of the dynamic scene while changing a focal depth. An optical flow between the frames of the focal stack is determined, and the frames are warped according to the optical flow to align the frames and to generate a virtual static focal stack. Finally, a depth map and a texture map for each virtual static focal stack are generated using depth from defocus, wherein the texture map corresponds to an EDOF image. | 01-10-2013 |
20130156262 | Voting-Based Pose Estimation for 3D Sensors - A pose of an object is estimated by first defining a set of pair features as pairs of geometric primitives, wherein the geometric primitives include oriented surface points, oriented boundary points, and boundary line segments. Model pair features are determined based on the set of pair features for a model of the object. Scene pair features are determined based on the set of pair features from data acquired by a 3D sensor, and then the model pair features are matched with the scene pair features to estimate the pose of the object. | 06-20-2013 |
20150193910 | Method for Increasing Resolutions of Depth Images - A resolution of a low resolution depth image is increased by applying joint geodesic upsampling to a high resolution image to obtain a geodesic distance map. Depths in the low resolution depth image are interpolated using the geodesic distance map to obtain a high resolution depth image. The high resolution image can be a gray scale or color image, or a binary boundary map. The low resolution depth image can be acquired by any type of depth sensor. | 07-09-2015 |
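The joint geodesic upsampling in application 20150193910 can be illustrated with a minimal sketch, not the patented method itself: each low-resolution depth sample is treated as a seed in the high-resolution grid, a multi-source Dijkstra computes geodesic distances whose edge costs mix spatial distance and guide-image intensity difference, and each pixel simply inherits the depth of its geodesically nearest seed. The function name, the `lam` weight, and the nearest-seed assignment (rather than distance-weighted interpolation of several seeds) are illustrative assumptions.

```python
import heapq
import numpy as np

def geodesic_depth_upsample(intensity, seeds, lam=10.0):
    """Assign each pixel the depth of its geodesically nearest seed.

    intensity: (H, W) float array, the high-resolution guide image.
    seeds: dict mapping (row, col) -> depth, the sparse low-res samples.
    lam: assumed weight of the intensity term in the edge cost.
    """
    H, W = intensity.shape
    dist = np.full((H, W), np.inf)   # geodesic distance to nearest seed
    depth = np.zeros((H, W))         # depth inherited from that seed
    heap = []
    for (r, c), d in seeds.items():
        dist[r, c] = 0.0
        depth[r, c] = d
        heapq.heappush(heap, (0.0, r, c))
    # Multi-source Dijkstra over the 4-connected pixel grid.
    while heap:
        cost, r, c = heapq.heappop(heap)
        if cost > dist[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                # Edge cost: unit spatial step plus weighted intensity jump,
                # so geodesics prefer to stay within homogeneous regions.
                step = 1.0 + lam * abs(intensity[nr, nc] - intensity[r, c])
                if cost + step < dist[nr, nc]:
                    dist[nr, nc] = cost + step
                    depth[nr, nc] = depth[r, c]  # inherit the seed's depth
                    heapq.heappush(heap, (cost + step, nr, nc))
    return depth
```

Because intensity edges inflate the geodesic cost, interpolated depths do not bleed across object boundaries in the guide image, which is the property the method exploits.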
Patent application number | Description | Published |
20120131289 | MULTIPATH SWITCHING OVER MULTIPLE STORAGE SYSTEMS - A system comprises a first storage system, a second storage system, a plurality of switches, and a server connected with the first storage system via a first group of switches and connected with the second storage system via a second group of switches. The first group and the second group have at least one switch which is not included in both the first and second groups. The first storage system receives I/O commands targeted to first logical units from the server via the first group of switches. The first storage system maintains first information regarding the ports of both the first and second storage systems. The first information is used to generate multipath communication between the server and the first storage system, including at least one path which passes through the second storage system and at least one other path which does not pass through the second storage system. | 05-24-2012 |
20120246205 | EFFICIENT DATA STORAGE METHOD FOR MULTIPLE FILE CONTENTS - Embodiments of the invention provide efficient data storage for multiple file contents. In specific embodiments, a content management computer is coupled via a network to a storage system, and comprises a processor, a memory, and a content compose/decompose module. The content compose/decompose module is configured to: decompose a file into multiple parts of data and store the multiple parts into adaptive logical storage partitions; and in response to a read request for the file, re-compose the multiple parts into an original file and send the original file. The file is decomposed into the multiple parts based on both structure and characteristics of the data in the file. The multiple parts are stored into different media provided by the adaptive logical storage partitions according to the structure and characteristics of the data in the multiple parts. | 09-27-2012 |
20130262811 | METHOD AND APPARATUS OF MEMORY MANAGEMENT BY STORAGE SYSTEM - Exemplary embodiments provide high-speed memory devices such as high-speed DRAM resources in a storage system for external computers. In accordance with an aspect of the invention, a computer system comprises: a computer which includes an internal memory and an external memory, the external memory being provided by a storage system coupled to the computer; and a controller operable to manage a virtual memory space provided by the internal memory and the external memory. The controller is operable to add a logical unit provided by the storage system, to the external memory included in the virtual memory space, based on a usage level of the virtual memory space. The controller is operable to release a logical unit provided by the storage system, from the external memory included in the virtual memory space, based on the usage level of the virtual memory space. | 10-03-2013 |
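The decompose/re-compose idea in application 20120246205 can be sketched as a toy in-memory store: a file is split into parts by structure and each part is routed to a different partition, and a read stitches the parts back into the original bytes. The class name, the two partition names, and the fixed `header_len` split rule are illustrative assumptions, not the patented decomposition logic.

```python
class ContentStore:
    """Toy decompose/re-compose sketch: the first header_len bytes of a
    file (its 'structural' part) go to a fast partition and the rest to
    a capacity partition; a read re-composes the original file."""

    def __init__(self, header_len=16):
        self.header_len = header_len
        # Two adaptive logical storage partitions, modeled as dicts.
        self.partitions = {"fast": {}, "capacity": {}}

    def write(self, name, data):
        # Decompose the file into multiple parts and store each part
        # in the partition suited to its characteristics.
        self.partitions["fast"][name] = data[:self.header_len]
        self.partitions["capacity"][name] = data[self.header_len:]

    def read(self, name):
        # Re-compose the multiple parts into the original file.
        return self.partitions["fast"][name] + self.partitions["capacity"][name]
```

A real implementation would pick the split points from the file's actual structure (e.g., metadata vs. payload) rather than a fixed byte offset.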
Patent application number | Description | Published |
20140002597 | Tracking Poses of 3D Camera Using Points and Planes | 01-02-2014 |
20140003705 | Method for Registering Points and Planes of 3D Data in Multiple Coordinate Systems | 01-02-2014 |
20140015992 | Specular Edge Extraction Using Multi-Flash Imaging - A method and system extract features from an image of an object with a specular surface. The image is acquired while illuminating the object with a hue circle generated by a set of lights flashed simultaneously. The lights have different colors and are arranged circularly around a lens of a camera. The features then correspond to locations of pixels whose neighborhood includes a subset of the colors of the lights. | 01-16-2014 |
20140016862 | Method and Apparatus for Extracting Depth Edges from Images Acquired of Scenes by Cameras with Ring Flashes Forming Hue Circles - A set of images is acquired of a scene while illuminating the scene with a set of colors with different hues. The set of colors is generated by a set of light sources arranged in a substantial circular manner around a lens of a camera to form a hue circle, wherein each light source emits a different color. A shadow confidence map is generated from the set of images by using hues and saturations of pixels in the set of images. Then, depth edges are extracted from the shadow confidence map. | 01-16-2014 |
20140037136 | Method and System for Determining Poses of Vehicle-Mounted Cameras for In-Road Obstacle Detection - Poses of a movable camera relative to an environment are obtained by determining point correspondences from a set of initial images and then applying 2-point motion estimation to the point correspondences to determine a set of initial poses of the camera. A point cloud is generated from the set of initial poses and the point correspondences. Then, for each next image, the point correspondences and corresponding poses are determined, while updating the point cloud. | 02-06-2014 |
20140037146 | Method and System for Generating Structured Light with Spatio-Temporal Patterns for 3D Scene Reconstruction - A structured light pattern including a set of patterns in a sequence is generated by initializing a base pattern. The base pattern includes a sequence of colored stripes such that each subsequence of the colored stripes is unique for a particular size of the subsequence. The base pattern is shifted hierarchically, spatially and temporally a predetermined number of times to generate the set of patterns, wherein each pattern is different spatially and temporally. A unique location of each pixel in a set of images acquired of a scene is determined, while projecting the set of patterns onto the scene, wherein there is one image for each pattern. | 02-06-2014 |
20140219547 | Method for Increasing Resolutions of Depth Images - A resolution of a low resolution depth image is increased by applying joint geodesic upsampling to a high resolution image to obtain a geodesic distance map. Depths in the low resolution depth image are interpolated using the geodesic distance map to obtain a high resolution depth image. The high resolution image can be a gray scale or color image, or a binary boundary map. The low resolution depth image can be acquired by any type of depth sensor. | 08-07-2014 |
20140333615 | Method For Reconstructing 3D Scenes From 2D Images - A method reconstructs a three-dimensional (3D) real-world scene from a single two-dimensional (2D) image by identifying junctions satisfying geometric constraints of the scene, based on intersecting lines, vanishing points, and vanishing lines that are orthogonal to each other. Possible layouts of the scene are generated by sampling the 2D image according to the junctions. Then, an energy function is maximized to select an optimal layout from the possible layouts. The energy function uses a conditional random field (CRF) model to evaluate the possible layouts. | 11-13-2014 |
20150154467 | Method for Extracting Planes from 3D Point Cloud Sensor Data - A method extracts planes from three-dimensional (3D) points by first partitioning the 3D points into disjoint regions. A graph of nodes and edges is then constructed, wherein the nodes represent the regions and the edges represent neighborhood relationships of the regions. Finally, agglomerative hierarchical clustering is applied to the graph to merge regions belonging to the same plane. | 06-04-2015 |
20150206015 | Method for Estimating Free Space using a Camera System - A method estimates free space near a moving object from a sequence of images in a video acquired of a scene by a camera system arranged on the moving object by first constructing a one-dimensional graph, wherein each node corresponds to a column of pixels in the image. Features are determined in the image, and an energy function is constructed on the graph based on the features. Using dynamic programming, the energy function is maximized to obtain the free space. | 07-23-2015 |
20150363938 | Method for Stereo Visual Odometry using Points, Lines and Planes - A method determines a motion between a first and a second coordinate system by first extracting a first set of primitives from a 3D image acquired in the first coordinate system from an environment, and extracting a second set of primitives from a 3D image acquired in the second coordinate system from the environment. Motion hypotheses are generated for different combinations of the first and second sets of primitives using a RANdom SAmple Consensus (RANSAC) procedure. Each motion hypothesis is scored using a scoring function learned with parameter learning techniques. Then, a best motion hypothesis is selected as the motion between the first and second coordinate systems. | 12-17-2015 |
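The hypothesize-and-score loop behind application 20150363938 can be illustrated with a deliberately small sketch: each hypothesis is the 2D translation implied by a single sampled correspondence, and the score is the number of inliers it explains. This is an assumed toy setup, not the patented method, which generates full rigid motions from combinations of points, lines, and planes and scores them with a learned function; here the scoring function is a plain inlier count and the correspondences are given up front.

```python
import random
import numpy as np

def ransac_translation(src, dst, trials=100, tol=0.1, seed=0):
    """Minimal RANSAC sketch: hypothesize a 2D translation from one
    sampled correspondence, score it by inlier count, keep the best.

    src, dst: (N, 2) arrays of corresponding points, possibly with
    outliers; src[i] is assumed to correspond to dst[i].
    """
    rng = random.Random(seed)
    best_t, best_score = None, -1
    for _ in range(trials):
        i = rng.randrange(len(src))
        t = dst[i] - src[i]                       # motion hypothesis
        residuals = np.linalg.norm(src + t - dst, axis=1)
        score = int((residuals < tol).sum())      # inlier-count score
        if score > best_score:
            best_t, best_score = t, score
    return best_t, best_score
```

Hypotheses generated from outlier correspondences explain only themselves, so a hypothesis drawn from any inlier dominates the score and wins.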