Patent application number | Description | Published |
20090285544 | Video Processing - A method and apparatus for processing video are disclosed. In an embodiment, image features of an object within a frame of video footage are identified and the movement of each of these features is tracked throughout the video footage to determine its trajectory (track). The tracks are analyzed; their maximum separation is determined and used to compute a texture map, which is in turn interpolated to provide an unwrap mosaic for the object. The process may be iterated to provide an improved mosaic. Effects or artwork can be overlaid on this mosaic, and the edited mosaic can be warped via the mapping and combined with layers of the original footage. The effect or artwork may move with the object's surface. | 11-19-2009 |
20100322525 | Image Labeling Using Multi-Scale Processing - Multi-scale processing may be used to reduce the memory and computational requirements of optimization algorithms for image labeling, for example, for object segmentation, 3D reconstruction, stereo correspondence, optical flow and other applications. For example, in order to label a large image (or 3D volume), a multi-scale process first solves the problem at a low resolution, obtaining a coarse labeling of the original high-resolution problem. This labeling is then refined by solving another optimization on a subset of the image elements. In examples, an energy function for a coarse-level version of an input image is formed directly from the energy function of the input image. In examples, the subset of image elements may be selected using a measure of confidence in the labeling. | 12-23-2010 |
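A minimal sketch of the coarse-to-fine idea above, under toy assumptions: a binary labeling driven by a hand-written per-pixel cost, block-averaging standing in for the derived coarse energy, and the refined subset chosen near label boundaries rather than by the confidence measure mentioned. All names (`coarse_to_fine_label`, `unary_cost`) are illustrative, not from the application.

```python
import numpy as np

def unary_cost(image, label):
    # Toy per-pixel cost: squared distance from a per-label intensity prototype.
    prototype = 0.2 if label == 0 else 0.8
    return (image - prototype) ** 2

def coarse_to_fine_label(image, factor=4):
    """Solve a labeling at low resolution, then refine a subset of pixels."""
    h, w = image.shape
    # Coarse problem: block-average the image (a crude stand-in for deriving
    # the coarse energy directly from the fine energy).
    coarse = image[:h - h % factor, :w - w % factor]
    ch, cw = coarse.shape[0] // factor, coarse.shape[1] // factor
    coarse = coarse.reshape(ch, factor, cw, factor).mean(axis=(1, 3))
    coarse_labels = (unary_cost(coarse, 1) < unary_cost(coarse, 0)).astype(int)

    # Upsample the coarse labeling to full resolution.
    labels = np.kron(coarse_labels, np.ones((factor, factor), dtype=int))
    labels = np.pad(labels, ((0, h - labels.shape[0]), (0, w - labels.shape[1])),
                    mode="edge")

    # Refinement subset: pixels adjacent to a coarse label boundary.
    boundary = np.zeros((h, w), dtype=bool)
    boundary[:, 1:] |= labels[:, 1:] != labels[:, :-1]
    boundary[1:, :] |= labels[1:, :] != labels[:-1, :]
    refine = boundary.copy()
    refine[:-1, :] |= boundary[1:, :]
    refine[:, :-1] |= boundary[:, 1:]

    # Re-solve only on the subset, here with the fine-scale unary cost.
    fine = (unary_cost(image, 1) < unary_cost(image, 0)).astype(int)
    labels[refine] = fine[refine]
    return labels

# Example: a bright 32x32 block on a dark background.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64))
img[16:48, 16:48] += 0.6
print(coarse_to_fine_label(img).sum())  # ~1024: the foreground block
```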
20110210915 | Human Body Pose Estimation - Techniques for human body pose estimation are disclosed herein. Images such as depth images, silhouette images, or volumetric images may be generated and pixels or voxels of the images may be identified. The techniques may process the pixels or voxels to determine a probability that each pixel or voxel is associated with a segment of a body captured in the image or to determine a three-dimensional representation for each pixel or voxel that is associated with a location on a canonical body. These probabilities or three-dimensional representations may then be utilized along with the images to construct a posed model of the body captured in the image. | 09-01-2011 |
20120194516 | Three-Dimensional Environment Reconstruction - Three-dimensional environment reconstruction is described. In an example, a 3D model of a real-world environment is generated in a 3D volume made up of voxels stored on a memory device. The model is built from data describing a camera location and orientation, and a depth image with pixels indicating a distance from the camera to a point in the environment. A separate execution thread is assigned to each voxel in a plane of the volume. Each thread uses the camera location and orientation to determine a corresponding depth image location for its associated voxel, determines a factor relating to the distance between the associated voxel and the point in the environment at the corresponding location, and updates a stored value at the associated voxel using the factor. Each thread iterates through an equivalent voxel in the remaining planes of the volume, repeating the process to update the stored value. | 08-02-2012 |
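The per-voxel update described above is, in essence, the standard truncated signed distance function (TSDF) fusion step. Below is a minimal numpy sketch under assumed conventions (a 4x4 world-to-camera pose, pinhole intrinsics `K`; `update_tsdf_plane` and all parameter names are illustrative, not taken from the application), with each array element standing in for one execution thread and the caller iterating `z_index` over planes.

```python
import numpy as np

def update_tsdf_plane(tsdf, weights, z_index, depth, K, world_to_cam,
                      voxel_size, trunc=0.1):
    """Fuse one depth image into one z-plane of a TSDF volume."""
    nx, ny, _ = tsdf.shape
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    pts = np.stack([xs, ys, np.full_like(xs, z_index)], axis=-1) * voxel_size
    # Transform voxel centres into the camera frame (4x4 pose matrix).
    pts_h = np.concatenate([pts, np.ones(pts.shape[:2] + (1,))], axis=-1)
    cam = pts_h @ world_to_cam.T
    z = cam[..., 2]
    z_safe = np.where(z > 1e-6, z, 1.0)
    # Project into the depth image with intrinsics K.
    u = np.round(K[0, 0] * cam[..., 0] / z_safe + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * cam[..., 1] / z_safe + K[1, 2]).astype(int)
    h, w = depth.shape
    valid = (z > 1e-6) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Truncated signed distance: measured surface depth minus voxel depth.
    sdf = depth[v.clip(0, h - 1), u.clip(0, w - 1)] - z
    valid &= sdf > -trunc
    sdf = np.clip(sdf, -trunc, trunc) / trunc
    # Running weighted average: the usual TSDF fusion update.
    w_old = weights[:, :, z_index]
    t_old = tsdf[:, :, z_index]
    tsdf[:, :, z_index] = np.where(valid, (t_old * w_old + sdf) / (w_old + 1), t_old)
    weights[:, :, z_index] = np.where(valid, w_old + 1, w_old)
```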
20120194517 | Using a Three-Dimensional Environment Model in Gameplay - Use of a 3D environment model in gameplay is described. In an embodiment, a mobile depth camera is used to capture a series of depth images as it is moved around and a dense 3D model of the environment is generated from this series of depth images. This dense 3D model is incorporated within an interactive application, such as a game. The mobile depth camera is then placed in a static position for an interactive phase, which in some examples is gameplay, and the system detects motion of a user within a part of the environment from a second series of depth images captured by the camera. This motion provides a user input to the interactive application, such as a game. In further embodiments, automatic recognition and identification of objects within the 3D model may be performed and these identified objects then change the way that the interactive application operates. | 08-02-2012 |
20120194644 | Mobile Camera Localization Using Depth Maps - Mobile camera localization using depth maps is described for robotics, immersive gaming, augmented reality and other applications. In an embodiment, a mobile depth camera is tracked in an environment while a 3D model of the environment is formed using the sensed depth data. In an embodiment, when camera tracking fails, this is detected and the camera is relocalized, either by using previously gathered keyframes or in other ways. In an embodiment, loop closures, in which the mobile camera revisits a location, are detected by comparing features of a current depth map with the 3D model in real time. In embodiments, the detected loop closures are used to improve the consistency and accuracy of the 3D model of the environment. | 08-02-2012 |
20120194650 | Reducing Interference Between Multiple Infra-Red Depth Cameras - Systems and methods for reducing interference between multiple infra-red depth cameras are described. In an embodiment, the system comprises multiple infra-red sources, each of which projects a structured light pattern into the environment. A controller is used to control the sources in order to reduce the interference caused by overlapping light patterns. Various methods are described including: cycling between the different sources, where the cycle used may be fixed or may change dynamically based on the scene detected using the cameras; setting the wavelength of each source so that overlapping patterns are at different wavelengths; moving source-camera pairs in independent motion patterns; and adjusting the shape of the projected light patterns to minimize overlap. These methods may also be combined in any way. In another embodiment, the system comprises a single source and a mirror system is used to cast the projected structured light pattern around the environment. | 08-02-2012 |
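The first of the listed methods (cycling between the sources) reduces to a scheduler; a toy sketch follows, with all names illustrative. A dynamic controller, as described, could reorder or reweight the cycle based on the observed scene rather than using a fixed round robin.

```python
import itertools

def cycle_sources(source_ids, frames_per_slot=1):
    """Round-robin time multiplexing: enable one structured-light source per
    time slot so projected patterns never overlap."""
    for active in itertools.cycle(source_ids):
        for _ in range(frames_per_slot):
            yield {s: s == active for s in source_ids}  # enable flag per source

# Example: three depth cameras sharing an environment.
schedule = cycle_sources(["cam_a", "cam_b", "cam_c"])
print(next(schedule))  # {'cam_a': True, 'cam_b': False, 'cam_c': False}
```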
20120195471 | Moving Object Segmentation Using Depth Images - Moving object segmentation using depth images is described. In an example, a moving object is segmented from the background of a depth image of a scene received from a mobile depth camera. A previous depth image of the scene is retrieved, and compared to the current depth image using an iterative closest point algorithm. The iterative closest point algorithm includes a determination of a set of points that correspond between the current depth image and the previous depth image. During the determination of the set of points, one or more outlying points are detected that do not correspond between the two depth images, and the image elements at these outlying points are labeled as belonging to the moving object. In examples, the iterative closest point algorithm is executed as part of an algorithm for tracking the mobile depth camera, and hence the segmentation does not add substantial additional computational complexity. | 08-02-2012 |
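A sketch of the outlier-labeling idea, assuming correspondences have already been found (by projective data association, say); the distance and normal-agreement thresholds are typical ICP compatibility tests, not values taken from the application.

```python
import numpy as np

def icp_outlier_mask(pts_cur, pts_prev, nrm_cur, nrm_prev,
                     dist_thresh=0.05, dot_thresh=0.8):
    """Label correspondences that fail the ICP compatibility tests.

    pts_* are (N, 3) corresponding 3D points; nrm_* are (N, 3) unit normals.
    Points that are too far apart or whose normals disagree are flagged as
    outliers, i.e. as belonging to the moving object rather than the
    static background.
    """
    too_far = np.linalg.norm(pts_cur - pts_prev, axis=1) > dist_thresh
    misaligned = np.einsum("ij,ij->i", nrm_cur, nrm_prev) < dot_thresh
    return too_far | misaligned
```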
20120196679 | Real-Time Camera Tracking Using Depth Maps - Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used. | 08-02-2012 |
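The projective-data-association / point-to-plane combination is the classic depth-camera ICP recipe. Here is a sketch of one linearized solver step under the usual small-angle approximation; nothing in it is claimed to match the patented GPU implementation, and the function name is illustrative.

```python
import numpy as np

def point_to_plane_step(src, dst, dst_normals):
    """One linearized point-to-plane ICP update.

    Minimizes sum_i (((I + [w]x) src_i + t - dst_i) . n_i)^2 over the
    6-vector (w, t) by solving a linear least-squares system, where [w]x
    is the skew-symmetric matrix of the small rotation w.
    """
    A = np.concatenate([np.cross(src, dst_normals), dst_normals], axis=1)  # (N, 6)
    b = np.einsum("ij,ij->i", dst_normals, dst - src)                      # (N,)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    wx, wy, wz, tx, ty, tz = x
    # Assemble the incremental rigid transform from the small-angle solution.
    R = np.array([[1.0, -wz, wy],
                  [wz, 1.0, -wx],
                  [-wy, wx, 1.0]])
    return R, np.array([tx, ty, tz])
```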
20120207346 | Detecting and Localizing Multiple Objects in Images Using Probabilistic Inference - An object detection system is disclosed herein. The object detection system allows detection of one or more objects of interest using a probabilistic model. The probabilistic model may include voting elements usable to determine which hypotheses for locations of objects are probabilistically valid. The object detection system may apply an optimization algorithm, such as a simple greedy algorithm, to find hypotheses that maximize the posterior probability (or log-posterior) of the probabilistic model, or that receive the maximal probabilistic vote from the voting elements in a respective iteration of the algorithm. Locations of detected objects may then be ascertained based on the found hypotheses. | 08-16-2012 |
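A sketch of the greedy selection loop, assuming the votes have already been cast; the dict-of-weighted-supporters structure is purely an illustration, not the patented representation.

```python
def greedy_object_detection(votes, max_objects=10, min_score=1.0):
    """Greedily maximize the probabilistic vote: repeatedly accept the
    hypothesis with the largest remaining support, then retire the voting
    elements it explains so they cannot support further hypotheses.

    votes: dict mapping hypothesis -> iterable of (element, weight) pairs.
    Returns a list of (hypothesis, score) detections.
    """
    detections = []
    used = set()
    for _ in range(max_objects):
        best, best_score = None, min_score
        for hyp, support in votes.items():
            score = sum(w for e, w in support if e not in used)
            if score > best_score:
                best, best_score = hyp, score
        if best is None:
            break  # no hypothesis clears the minimum score
        detections.append((best, best_score))
        used |= {e for e, w in votes[best]}
    return detections
```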
20120219209 | Image Labeling with Global Parameters - Image labeling with global parameters is described. In an embodiment, a pose estimation system executes automatic body part labeling. For example, the system may compute joint recognition or body part segmentation for a gaming application. In another example, the system may compute organ labels for a medical imaging application. In an example, at least one global parameter, for example body height, is computed for each of the images to be labeled. In an example, the global parameter is used to modify an image labeling process. For example, the global parameter may be used to rescale the input image to a canonical scale. In another example, the global parameter may be used to adaptively modify previously stored parameters of the image labeling process. In an example, the previously stored parameters may be computed from a reduced set of training data. | 08-30-2012 |
20120225719 | Gesture Detection and Recognition - A gesture detection and recognition technique is described. In one example, a sequence of data items relating to the motion of a gesturing user is received. A selected set of data items from the sequence is tested against pre-learned threshold values to determine the probability of the sequence representing a certain gesture. If the probability is greater than a predetermined value, the gesture is detected and an action is taken. In examples, the tests are performed by a trained decision tree classifier. In another example, the sequence of data items can be compared to pre-learned templates and the similarity between them determined. If the similarity for a template exceeds a threshold, a likelihood value associated with a future time is updated for the gesture associated with that template. Then, when the future time is reached, the gesture is detected if the likelihood value is greater than a predefined value. | 09-06-2012 |
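A toy sketch of the second (template) path, under heavy assumptions: normalized correlation stands in for the similarity measure, and a plain dict maps future frame indices to accumulated likelihoods. All names and thresholds are illustrative.

```python
import numpy as np

def step_gesture_detector(recent, template, likelihood, t_now,
                          lookahead=10, sim_thresh=0.9, detect_thresh=2.5):
    """One per-frame update of a template-based gesture detector.

    recent: array of the latest motion data, same shape as template.
    likelihood: dict mapping a future frame index to an accumulated score.
    If the recent data matches the template, a score booked at a future
    time is increased; the gesture fires when that time arrives with
    enough accumulated evidence.
    """
    sim = np.corrcoef(recent.ravel(), template.ravel())[0, 1]
    if sim > sim_thresh:
        t_future = t_now + lookahead
        likelihood[t_future] = likelihood.get(t_future, 0.0) + sim
    # Consume (and test) the likelihood booked for the current frame.
    return likelihood.pop(t_now, 0.0) > detect_thresh
```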
20120237127 | Grouping Variables for Fast Image Labeling - This application describes grouping variables together to minimize the cost or time of performing computer vision analysis on images. In one instance, the pixels of an image are represented by a lattice structure of nodes that are connected to each other by edges. The nodes are grouped or merged based in part on the energy function associated with each edge that connects them. The energy function of an edge is based in part on the energy functions associated with each of its nodes, which are in turn based on the possible states in which the node may exist. The states of a node represent an object, image, or any other feature or classification that may be associated with the pixels in the image. | 09-20-2012 |
20120239174 | Predicting Joint Positions - Predicting joint positions is described, for example, to find joint positions of humans or animals (or parts thereof) in an image to control a computer game or for other applications. In an embodiment image elements of a depth image make joint position votes so that for example, an image element depicting part of a torso may vote for a position of a neck joint, a left knee joint and a right knee joint. A random decision forest may be trained to enable image elements to vote for the positions of one or more joints and the training process may use training images of bodies with specified joint positions. In an example a joint position vote is expressed as a vector representing a distance and a direction of a joint position from an image element making the vote. The random decision forest may be trained using a mixture of objectives. | 09-20-2012 |
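A sketch of the vote-aggregation stage, assuming the per-pixel offset votes have already been produced (in the application, by the trained random decision forest); a few mean-shift iterations find the dominant mode of the votes. Names and the Gaussian kernel bandwidth are illustrative.

```python
import numpy as np

def aggregate_joint_votes(pixel_xyz, offsets, weights, bandwidth=0.1, iters=20):
    """Aggregate per-pixel joint votes into one joint position estimate.

    pixel_xyz: (N, 3) 3D positions of the voting image elements.
    offsets:   (N, 3) voted distance-and-direction vectors to the joint.
    weights:   (N,) vote confidences.
    """
    votes = pixel_xyz + offsets  # each element votes at an absolute position
    mode = np.average(votes, axis=0, weights=weights)
    for _ in range(iters):
        # Mean shift with a Gaussian kernel toward the densest vote cluster.
        d2 = np.sum((votes - mode) ** 2, axis=1)
        k = weights * np.exp(-d2 / (2 * bandwidth ** 2))
        mode = (k[:, None] * votes).sum(axis=0) / k.sum()
    return mode
```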
20120251008 | Classification Algorithm Optimization - Classification algorithm optimization is described. In an example, a classification algorithm is optimized by calculating an evaluation sequence for a set of weighted feature functions that orders the feature functions in accordance with a measure of influence on the classification algorithm. Classification thresholds are determined for each step of the evaluation sequence, which indicate whether a classification decision can be made early and the classification algorithm terminated without evaluating further feature functions. In another example, a classifier applies the weighted feature functions to previously unseen data in the order of the evaluation sequence and determines a cumulative value at each step. The cumulative value is compared to the classification thresholds at each step to determine whether a classification decision can be made early without evaluating further feature functions. | 10-04-2012 |
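The early-exit scheme lends itself to a direct sketch. The per-step thresholds `lo_thresh`/`hi_thresh` are assumed to have been precomputed as described, and the feature functions pre-sorted by influence; the names are illustrative.

```python
def classify_early_exit(x, feature_fns, weights, lo_thresh, hi_thresh):
    """Evaluate weighted feature functions in influence order, stopping as
    soon as the running score crosses a per-step decision threshold.

    lo_thresh[i] / hi_thresh[i] bound the cumulative score after step i
    below/above which the final decision is already certain.
    """
    score = 0.0
    for i, (f, w) in enumerate(zip(feature_fns, weights)):
        score += w * f(x)
        if score >= hi_thresh[i]:
            return True   # positive decision made early
        if score <= lo_thresh[i]:
            return False  # negative decision made early
    return score >= 0.0   # fallback: full evaluation completed
```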
20120257814 | IMAGE COMPLETION USING SCENE GEOMETRY - Image completion using scene geometry is described, for example, to remove marks from digital photographs or complete regions which are blank due to editing. In an embodiment an image depicting, from a viewpoint, a scene of textured objects has regions to be completed. In an example, geometry of the scene is estimated from a depth map and the geometry used to warp the image so that at least some surfaces depicted in the image are fronto-parallel to the viewpoint. An image completion process is guided using distortion applied during the warping. For example, patches used to fill the regions are selected on the basis of distortion introduced by the warping. In examples where the scene comprises regions having only planar surfaces the warping process comprises rotating the image. Where the scene comprises non-planar surfaces, geodesic distances between image elements may be scaled to flatten the non-planar surfaces. | 10-11-2012 |
20120288186 | SYNTHESIZING TRAINING SAMPLES FOR OBJECT RECOGNITION - An enhanced training sample set containing new synthesized training images that are artificially generated from an original training sample set is provided to satisfactorily increase the accuracy of an object recognition system. The original sample set is artificially augmented by introducing one or more variations to the original images with little to no human input. There are a large number of possible variations that can be introduced to the original images, such as varying an image's position, orientation, and/or appearance and varying an object's context, scale, and/or rotation. Because there are computational constraints on the number of training samples that can be processed by object recognition systems, one or more variations that will lead to a satisfactory increase in the accuracy of the object recognition performance are identified and introduced to the original images. | 11-15-2012 |
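A toy sketch of the augmentation step: flips, crops and brightness changes stand in for the position/orientation/appearance and context/scale/rotation variations described (the selection of which variations actually improve accuracy is not modeled here; all names are illustrative).

```python
import numpy as np

def synthesize_variants(image, rng, n=4):
    """Generate n synthetic variants of one training image in [0, 1]."""
    h, w = image.shape[:2]
    variants = []
    for _ in range(n):
        img = image.copy()
        if rng.random() < 0.5:
            img = np.fliplr(img)                          # vary orientation
        dy = int(rng.integers(0, h // 8 + 1))             # vary position/scale
        dx = int(rng.integers(0, w // 8 + 1))
        img = img[dy:h - dy, dx:w - dx]
        img = np.clip(img * rng.uniform(0.7, 1.3), 0, 1)  # vary appearance
        variants.append(img)
    return variants

# Example: synthesize_variants(img, np.random.default_rng(0))
```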
20120306734 | Gesture Recognition Techniques - In one or more implementations, a static geometry model is generated from one or more images of a physical environment captured using a camera, using one or more static objects to model corresponding objects in the physical environment. Interaction of a dynamic object with at least one of the static objects is identified by analyzing at least one image, and a gesture is recognized from the identified interaction of the dynamic object with the at least one static object to initiate an operation of a computing device. | 12-06-2012 |
20120306876 | GENERATING COMPUTER MODELS OF 3D OBJECTS - Generating computer models of 3D objects is described. In one example, depth images of an object captured by a substantially static depth camera are used to generate the model, which is stored in a memory device in a three-dimensional volume. Portions of the depth image determined to relate to the background are removed to leave a foreground depth image. The position and orientation of the object in the foreground depth image are tracked by comparison to a preceding depth image, and the foreground depth image is integrated into the volume by using the position and orientation to determine where to add data derived from the foreground depth image into the volume. In examples, the object is hand-rotated by a user in front of the depth camera. Hands that occlude the object are integrated out of the model, as they do not move in sync with the object due to re-gripping. | 12-06-2012 |
20130107010 | SURFACE SEGMENTATION FROM RGB AND DEPTH IMAGES | 05-02-2013 |
20130156297 | Learning Image Processing Tasks from Scene Reconstructions - Learning image processing tasks from scene reconstructions is described, where the tasks may include but are not limited to: image de-noising, image in-painting, optical flow detection, and interest point detection. In various embodiments, training data is generated from a 2D (or higher-dimensional) reconstruction of a scene and from empirical images of the same scene. In an example, a machine learning system learns at least one parameter of a function for performing the image processing task by using the training data. In an example, the machine learning system comprises a random decision forest. In an example, the scene reconstruction is obtained by moving an image capture apparatus in an environment, where the image capture apparatus has an associated dense reconstruction and camera tracking system. | 06-20-2013 |
20130156298 | Using High-Level Attributes to Guide Image Processing - Using high-level attributes to guide image processing is described. In an embodiment, high-level attributes of images of people, such as height, torso orientation, body shape, and gender, are used to guide processing of the images for various tasks, including but not limited to joint position detection, body part classification, and medical image analysis. In various embodiments, one or more random decision forests are trained using images where global variable values such as player height are known, in addition to ground-truth data appropriate for the image processing task concerned. In some examples, sequences of images are used where global variables are static or vary smoothly over the sequence. In some examples, one or more trained random decision forests are used to find global variable values as well as output values for the task concerned, such as joint positions or body part classes. | 06-20-2013 |
20130166481 | DISCRIMINATIVE DECISION TREE FIELDS - A tractable model solves certain labeling problems by providing potential functions having arbitrary dependencies upon an observed dataset (e.g., image data). The model uses decision trees corresponding to various factors to map dataset content to a set of parameters used to define the potential functions in the model. Some factors define relationships among multiple variable nodes. When making label predictions on a new dataset, the leaf nodes of the decision trees determine the effective weightings for such potential functions. In this manner, decision trees define non-parametric dependencies and can represent rich, arbitrary functional relationships if sufficient training data is available. Decision tree training is scalable, both in training set size and through parallelization. Maximum pseudolikelihood learning can provide for joint training of aspects of the model, including feature test selection and ordering, factor weights, and the scope of the interacting variable nodes used in the graph. | 06-27-2013 |
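The tree-to-parameters mapping can be pictured with a hand-written stand-in for one learned tree: feature tests on the observed data route to a leaf, and the leaf stores the parameter vector of a potential function. In the model, the trees, tests and leaf parameters would all be learned; everything below is illustrative.

```python
import numpy as np

def unary_potential_params(patch):
    """Stand-in for one decision tree of a unary factor: route an observed
    image patch to a leaf and return that leaf's per-label energies."""
    if patch.mean() > 0.5:               # feature test at the root
        if patch.std() > 0.1:            # feature test at an inner node
            return np.array([0.2, 1.5])  # leaf: energies for labels 0 and 1
        return np.array([0.4, 0.9])
    if patch.std() > 0.1:
        return np.array([1.1, 0.5])
    return np.array([0.8, 0.6])
```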
20130244782 | REAL-TIME CAMERA TRACKING USING DEPTH MAPS - Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used. | 09-19-2013 |
20130346844 | CHECKING AND/OR COMPLETION FOR DATA GRIDS - Checking and/or completion for data grids is described, for example for grids having rows and columns of cells, at least some of which contain data values such as numbers or categories. In various embodiments, predictive probability distributions are obtained from an inference engine for one or more of the cells, and the predictive probability distributions are used for various tasks, such as suggesting values to complete blank cells, highlighting cells having outlying values, identifying potential errors, suggesting corrections to potential errors, identifying similarities and differences between cells, clustering rows of the data grid, and other tasks. In various embodiments, a graphical user interface displays a data grid and provides facilities for completing, error checking/correcting, and analyzing data in the data grid. | 12-26-2013 |
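A deliberately tiny stand-in for the inference engine, assuming a single numeric column modeled as a Gaussian (the described engine would be far richer): blanks get the predictive mean as a suggested completion, and improbable cells are flagged as potential errors.

```python
import numpy as np

def check_and_complete(column, z_outlier=3.0):
    """Toy per-column predictive model for a data grid column.

    column: 1D float array with np.nan marking blank cells.
    Returns (suggestions, outliers): a dict mapping blank-cell indices to
    suggested values, and the indices of cells with outlying values.
    """
    observed = column[~np.isnan(column)]
    mu, sigma = observed.mean(), observed.std() + 1e-9
    # Suggest the predictive mean for each blank cell.
    suggestions = {int(i): mu for i in np.flatnonzero(np.isnan(column))}
    # Flag observed cells far out in the predictive distribution's tails.
    deviations = np.abs(np.nan_to_num(column, nan=mu) - mu)
    outliers = np.flatnonzero(deviations > z_outlier * sigma)
    return suggestions, outliers
```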
20140169444 | IMAGE SEQUENCE ENCODING/DECODING USING MOTION FIELDS - Compressing motion fields is described. In one example, video compression may comprise computing a motion field representing the difference between a first image and a second image, the motion field being used to make a prediction of the second image. In various examples of encoding a sequence of video data, the first image, the motion field and a residual representing the error in the prediction may be encoded rather than the full image sequence. In various examples, the motion field may be represented by its coefficients in a linear basis, for example a wavelet basis, and an optimization may be carried out to minimize the cost of encoding the motion field and maximize the quality of the reconstructed image while also minimizing the residual error. In various examples, the optimized motion field may be quantized to enable encoding. | 06-19-2014 |
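A sketch of the wavelet-basis representation, using a single-level 2D Haar transform and a keep-the-largest-coefficients rule as a crude stand-in for the rate/quality optimization described (even image dimensions assumed; all names illustrative).

```python
import numpy as np

def haar2(a):
    """Single-level 2D Haar transform (rows, then columns); even sizes only."""
    lo = (a[0::2, :] + a[1::2, :]) / np.sqrt(2)
    hi = (a[0::2, :] - a[1::2, :]) / np.sqrt(2)
    a = np.concatenate([lo, hi], axis=0)
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    return np.concatenate([lo, hi], axis=1)

def ihaar2(c):
    """Inverse of haar2 (undo columns, then rows)."""
    n = c.shape[1] // 2
    lo, hi = c[:, :n], c[:, n:]
    a = np.empty_like(c)
    a[:, 0::2] = (lo + hi) / np.sqrt(2)
    a[:, 1::2] = (lo - hi) / np.sqrt(2)
    m = a.shape[0] // 2
    lo, hi = a[:m, :], a[m:, :]
    out = np.empty_like(a)
    out[0::2, :] = (lo + hi) / np.sqrt(2)
    out[1::2, :] = (lo - hi) / np.sqrt(2)
    return out

def encode_motion_field(flow, keep=0.1):
    """Zero all but the largest-magnitude coefficients of each component:
    a crude proxy for minimizing coding cost at acceptable quality."""
    coded = []
    for comp in (flow[..., 0], flow[..., 1]):
        c = haar2(comp)
        cutoff = np.quantile(np.abs(c), 1.0 - keep)
        coded.append(np.where(np.abs(c) >= cutoff, c, 0.0))
    return coded

def decode_motion_field(coded):
    return np.stack([ihaar2(c) for c in coded], axis=-1)
```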
20140247212 | Gesture Recognition Techniques - In one or more implementations, a static geometry model is generated from one or more images of a physical environment captured using a camera, using one or more static objects to model corresponding objects in the physical environment. Interaction of a dynamic object with at least one of the static objects is identified by analyzing at least one image, and a gesture is recognized from the identified interaction of the dynamic object with the at least one static object to initiate an operation of a computing device. | 09-04-2014 |
20140307956 | IMAGE LABELING USING GEODESIC FEATURES - Image labeling is described, for example, to recognize body organs in a medical image, to label body parts in a depth image of a game player, or to label objects in a video of a scene. In various embodiments, an automated classifier uses geodesic features of an image, and optionally other types of features, to semantically segment an image. For example, the geodesic features relate to a distance between image elements, the distance taking into account information about image content between the image elements. In some examples, the automated classifier is an entangled random decision forest, in which data accumulated at earlier tree levels is used to make decisions at later tree levels. In some examples, the automated classifier has auto-context by comprising two or more random decision forests. In various examples, parallel processing and look-up procedures are used. | 10-16-2014 |
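Geodesic distances of this kind can be computed with Dijkstra's algorithm on the pixel grid, with the step cost penalizing intensity change so that distances respect object boundaries. A small sketch (4-connectivity; the `gamma` weighting and all names are assumptions for illustration, not taken from the application):

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, gamma=10.0):
    """Geodesic distance transform from a set of seed pixels.

    image: 2D float array; seeds: list of (row, col) seed pixels.
    The cost of stepping between neighbouring pixels combines spatial
    distance with the intensity difference, weighted by gamma.
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for (y, x) in seeds:
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                step = 1.0 + gamma * abs(image[ny, nx] - image[y, x])
                if d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    heapq.heappush(heap, (d + step, ny, nx))
    return dist
```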