Patent application number | Description | Published |
20140108466 | DETECTION OF PLANAR TARGETS UNDER STEEP ANGLES - Systems, apparatus and methods to create a database by a device (such as a server) and to use the database by a mobile device for detecting a planar target are presented. The database allows recognition of a planar target by a mobile device from steeper angles with minimum impact on runtime. The database is created from at least one warped view of the planar target. For example, a database may contain keypoints and descriptors from a non-warped view and also from one or more warped views. The database may be pruned by removing keypoints and corresponding descriptors of one image (e.g., a warped image) overlapping with similar or identical keypoints and descriptors of another image (e.g., a non-warped image). | 04-17-2014 |
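The pruning step described in this abstract (removing keypoints of a warped image that overlap with similar keypoints of a non-warped image) can be sketched as follows. This is a minimal illustration, assuming features are plain `(x, y, descriptor)` tuples and using Hamming distance on byte descriptors; the function names and thresholds are invented here, not taken from the application.

```python
# Sketch: prune redundant warped-view features from a planar-target database.
# Feature format and thresholds are illustrative assumptions.

def hamming(d1, d2):
    """Bitwise Hamming distance between two equal-length byte descriptors."""
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))

def prune_database(base_feats, warped_feats, max_dist=2.0, max_desc_dist=8):
    """Keep only warped-view features that are not redundant with the
    non-warped view: a warped feature is dropped when some base feature
    lies within `max_dist` pixels AND its descriptor is within
    `max_desc_dist` bits in Hamming distance."""
    kept = []
    for (wx, wy, wdesc) in warped_feats:
        redundant = any(
            (wx - bx) ** 2 + (wy - by) ** 2 <= max_dist ** 2
            and hamming(wdesc, bdesc) <= max_desc_dist
            for (bx, by, bdesc) in base_feats
        )
        if not redundant:
            kept.append((wx, wy, wdesc))
    return base_feats + kept
```

A near-duplicate warped feature (close in both position and descriptor) is pruned, while a warped feature seen only from the steep view survives into the database.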
20140233845 | AUTOMATIC IMAGE RECTIFICATION FOR VISUAL SEARCH - Disclosed is a computing device that can perform automatic image rectification for a visual search. A method implemented at a computing device includes receiving one or more images from an image capture device, storing the one or more images with the computing device, building a three dimensional (3D) geometric model for one or more potential objects of interest within an environment based on at least one image of the one or more images, and automatically creating at least one rectified image having at least one potential object of interest for a visual search. | 08-21-2014 |
20140267397 | IN SITU CREATION OF PLANAR NATURAL FEATURE TARGETS - Disclosed are a system, apparatus, and method for in-situ creation of planar natural feature targets. In one embodiment, a planar target is initialized from a single first reference image, and one or more subsequent images are processed. In one embodiment, the planar target is tracked in six degrees of freedom upon the processing of the one or more subsequent images, and a second reference image is selected from the processed one or more subsequent images. In one embodiment, upon selecting the second reference image, the planar target is refined to a more accurate planar target. | 09-18-2014 |
20140270348 | MOTION BLUR AWARE VISUAL POSE TRACKING - Various methods, apparatuses and/or articles of manufacture are provided which may be implemented for use by an electronic device to track objects across two or more digital images. For example, an electronic device may generate a plurality of warped patches corresponding to a reference patch of a reference image, and combine two or more warped patches to form a blurred warped patch corresponding to the reference patch with a motion blur effect applied to a digital representation corresponding to a keypoint of an object to be tracked. | 09-18-2014 |
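The combination of warped patches into a blurred warped patch, as described above, can be approximated by averaging patches sampled along the estimated motion. The plain mean below is one simple blending choice for illustration, not necessarily the weighting used in the application.

```python
# Sketch: form a motion-blurred reference patch by averaging warped patches.
# Assumes patches are equal-size NumPy arrays; the mean is an illustrative
# blending choice.
import numpy as np

def blurred_warped_patch(warped_patches):
    """Combine warped patches (sampled along the estimated camera motion)
    into a single patch approximating motion blur, via a per-pixel mean."""
    stack = np.stack([p.astype(np.float64) for p in warped_patches])
    return stack.mean(axis=0)
```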
20140279860 | Client-Server Based Dynamic Search - Method, mobile device, computer program product and apparatus for performing a search are disclosed. The method of performing a search comprises receiving one or more images of an environment in view of a mobile device, generating a simultaneous localization and mapping of the environment using the one or more images, wherein the simultaneous localization and mapping of the environment comprises a plurality of map points representing a plurality of surfaces in a three dimensional coordinate system of the environment, sending a set of the plurality of map points as a search query to a server, receiving a query response from the server, and identifying an object in the environment based at least in part on the query response. | 09-18-2014 |
20150062120 | METHOD AND APPARATUS FOR REPRESENTING A PHYSICAL SCENE - Systems, methods, and devices are described for constructing a digital representation of a physical scene by obtaining information about the physical scene. Based on the information, an initial portion of a planar surface within the physical scene may be identified. In one aspect of the disclosure, constructing a digital representation of a physical scene may include obtaining information about the physical scene, identifying a planar surface within the physical scene, selecting a physical object within the physical scene that is placed above the planar surface, detecting properties associated with the physical object, generating a three-dimensional (3D) reconstructed object using the properties associated with the physical object, and representing the planar surface as an augmented reality (AR) plane in an augmented reality environment, wherein the AR plane in the AR environment is capable of supporting 3D reconstructed objects on top of it. | 03-05-2015 |
20150062166 | EXPANDING A DIGITAL REPRESENTATION OF A PHYSICAL PLANE - Techniques are presented for expanding a digital representation of a physical plane from a physical scene. In some aspects, a method may include determining an orientation and an initial portion of a physical plane in the scene, and subdividing a rectified image of the scene into a plurality of grid cells. An image signature may be generated for each grid cell. A grid cell contiguous to the obtained initial portion of the plane is determined to include part of the plane. An iterative process may then be performed: for each grid cell neighboring a cell already determined to be part of the plane, the neighboring grid cell is included as part of the plane if its image signature is similar to the image signature of a grid cell already included as part of the plane. | 03-05-2015 |
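The iterative grid-cell expansion described in this abstract resembles a flood fill over grid-cell signatures. In the sketch below, signatures are modeled as plain numeric vectors compared with squared distance; the function name, neighborhood choice, and threshold are illustrative assumptions.

```python
# Sketch: grow a plane region over a grid of image-signature cells,
# starting from the initial plane portion. Signature comparison and
# 4-neighborhood are illustrative choices.

def expand_plane(signatures, seed_cells, threshold):
    """signatures: dict mapping (row, col) -> signature vector.
    Starting from seed_cells (the initial plane portion), repeatedly add
    any 4-neighbor whose signature is within `threshold` (squared
    distance) of a cell already accepted as part of the plane."""
    def close(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) <= threshold
    plane = set(seed_cells)
    frontier = list(seed_cells)
    while frontier:
        r, c = frontier.pop()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (nr, nc) in signatures and (nr, nc) not in plane:
                if close(signatures[(nr, nc)], signatures[(r, c)]):
                    plane.add((nr, nc))
                    frontier.append((nr, nc))
    return plane
```

Cells with a signature similar to an already-included neighbor join the plane; dissimilar cells (e.g., an object sitting on the plane) are excluded.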
20150095360 | MULTIVIEW PRUNING OF FEATURE DATABASE FOR OBJECT RECOGNITION SYSTEM - A method of building a database for an object recognition system includes acquiring several multi-view images of a target object and then extracting a first set of features from the images. One of these extracted features is then selected, and a second set of features is determined based on which of the first set of features include both descriptors that match and keypoint locations that are proximate to the selected feature. If a repeatability of the selected feature is greater than a repeatability threshold and if a discriminability is greater than a discriminability threshold, then at least one derived feature is stored to the database, where the derived feature is representative of the second set of features. | 04-02-2015 |
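The repeatability test and the derivation of a single representative feature from a matched set can be sketched as follows. Feature layout, the averaging scheme, and the threshold name are illustrative assumptions; the discriminability test mentioned in the abstract is a separate check and is omitted here.

```python
# Sketch: derive one representative feature from features of the same
# physical point matched across multiple views. Repeatability is modeled
# as the fraction of views in which the point was observed.

def derive_feature(matched_feats, num_views, min_repeat=0.5):
    """matched_feats: (x, y, descriptor-as-float-list) tuples for one
    physical point across views. Returns a single averaged feature if the
    point was observed in enough views (repeatability test), else None."""
    if len(matched_feats) / num_views < min_repeat:
        return None
    n = len(matched_feats)
    x = sum(f[0] for f in matched_feats) / n
    y = sum(f[1] for f in matched_feats) / n
    dim = len(matched_feats[0][2])
    desc = [sum(f[2][i] for f in matched_feats) / n for i in range(dim)]
    return (x, y, desc)
```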
20150098614 | OBJECT TRACKING BASED ON DYNAMICALLY BUILT ENVIRONMENT MAP DATA - A computer-implemented method of tracking a target object in an object recognition system includes acquiring a plurality of images with a camera. The method further includes simultaneously tracking the target object and dynamically building environment map data from the plurality of images. The tracking of the target object includes attempting to estimate a target pose of the target object with respect to the camera based on at least one of the plurality of images and based on target map data. Next, the method determines whether the tracking of the target object with respect to the camera is successful. If not, then the method includes inferring the target pose with respect to the camera based on the dynamically built environment map data. In one aspect the method includes fusing the inferred target pose with the actual target pose even if tracking is successful to improve robustness. | 04-09-2015 |
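The fallback described above (inferring the target pose through the environment map when direct tracking fails) amounts to composing rigid transforms. The sketch below uses 4x4 homogeneous matrices and invented parameter names; it shows the chaining only, not the fusion step.

```python
# Sketch: infer the target-from-camera pose via the environment map when
# direct tracking fails. Poses are 4x4 homogeneous transforms; naming
# convention: `env_from_cam` maps camera coordinates into environment
# coordinates.
import numpy as np

def pose_of_target(direct_pose, env_from_cam, env_from_target):
    """Return the direct target pose if tracking succeeded (not None);
    otherwise infer it through the environment map as
    inv(env_from_cam) @ env_from_target."""
    if direct_pose is not None:
        return direct_pose
    return np.linalg.inv(env_from_cam) @ env_from_target
```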
20150098615 | DYNAMIC EXTENSION OF MAP DATA FOR OBJECT DETECTION AND TRACKING - A computer-implemented method of tracking a target object in an object recognition system includes acquiring a plurality of images with a camera and simultaneously tracking the target object and dynamically building online map data from the plurality of images. Tracking of the target object is based on the online map data and the offline map data. In one aspect, tracking the target object includes enabling only one of the online map data and offline map data for tracking based on whether tracking is successful. In another aspect, tracking the target object includes fusing the online map data with the offline map data to generate a fused online model. | 04-09-2015 |
20150098616 | OBJECT RECOGNITION AND MAP GENERATION WITH ENVIRONMENT REFERENCES - Exemplary methods, apparatuses, and systems for performing object detection on a mobile device are disclosed. A reference dataset comprising a set of reference keyframes for an object captured in a plurality of different lighting environments is obtained. An image of the object in a current lighting environment is captured. Reference keyframes are grouped into respective subsets according to one or more of: a reference keyframe camera position and orientation (pose), a reference keyframe lighting environment, or a combination thereof. Feature points of the image are compared with feature points of the reference keyframes in each of the respective subsets. A candidate subset of reference keyframes from the respective subsets is selected in response to the comparing feature points. A reference keyframe from the candidate subset of reference keyframes is selected for triangulation with the image of the object. | 04-09-2015 |
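The two-stage selection described above (pick a candidate subset of reference keyframes by comparing feature points, then pick one keyframe from that subset for triangulation) can be sketched generically. The grouping keys, the scoring callback, and all names here are illustrative, not from the application.

```python
# Sketch: select a reference keyframe via grouped subsets. `score` is a
# caller-supplied function rating how well a keyframe's feature points
# match the live image; groups model lighting/pose subsets.

def best_keyframe(image_desc, grouped_keyframes, score):
    """grouped_keyframes: dict group_id -> non-empty list of keyframes.
    First select the candidate group whose best keyframe scores highest
    against the image, then return that group's best keyframe."""
    def group_best(kfs):
        return max(kfs, key=lambda kf: score(image_desc, kf))
    candidate = max(grouped_keyframes.values(),
                    key=lambda kfs: score(image_desc, group_best(kfs)))
    return group_best(candidate)
```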
20150254284 | PERFORMING A VISUAL SEARCH USING A RECTIFIED IMAGE - Disclosed is a server that can perform a visual search using at least one rectified image. A method implemented at a server includes storing a plurality of images with the server, receiving at least one rectified image having at least one potential object of interest from a computing device for a visual search, and extracting descriptors representing features of the at least one rectified image. The extracted descriptors of the at least one rectified image are designed to be invariant to rotation, scale, and lighting without needing to be invariant to perspective or affine distortion. | 09-10-2015 |