Patent application number | Description | Published |
20120207359 | Image Registration - Image registration is described. In an embodiment an image registration system executes automatic registration of images, for example medical images. In an example, semantic information is computed for each of the images to be registered comprising information about the types of objects in the images and the certainty of that information. In an example a mapping is found to register the images which takes into account the intensities of the image elements as well as the semantic information in a manner which is weighted by the certainty of that semantic information. For example, the semantic information is computed by estimating posterior distributions for the locations of anatomical structures by using a regression forest and transforming the posterior distributions into a probability map. In an example the mapping is found as a global point of inflection of an energy function, the energy function having a term related to the semantic information. | 08-16-2012 |
20120269407 | AUTOMATIC ORGAN LOCALIZATION - Automatic organ localization is described. In an example, an organ in a medical image is localized using one or more trained regression trees. Each image element of the medical image is applied to the trained regression trees to compute probability distributions that relate to a distance from each image element to the organ. At least a subset of the probability distributions are selected and aggregated to calculate a localization estimate for the organ. In another example, the regression trees are trained using training images having a predefined organ location. At each node of the tree, test parameters are generated that determine which subsequent node each training image element is passed to. This is repeated until each image element reaches a leaf node of the tree. A probability distribution is generated and stored at each leaf node, based on the distance from the leaf node's image elements to the organ. | 10-25-2012 |
20150248764 | DEPTH SENSING USING AN INFRARED CAMERA - A method of sensing depth using an infrared camera. In an example method, an infrared image of a scene is received from an infrared camera. The infrared image is applied to a trained machine learning component which uses the intensity of image elements to assign all or some of the image elements a depth value which represents the distance between the surface depicted by the image element and the infrared camera. In various examples, the machine learning component comprises one or more random decision forests. | 09-03-2015 |
20150248765 | DEPTH SENSING USING AN RGB CAMERA - A method of sensing depth using an RGB camera. In an example method, a color image of a scene is received from an RGB camera. The color image is applied to a trained machine learning component which uses features of the image elements to assign all or some of the image elements a depth value which represents the distance between the surface depicted by the image element and the RGB camera. In various examples, the machine learning component comprises one or more entangled geodesic random decision forests. | 09-03-2015 |
20160085310 | TRACKING HAND/BODY POSE - Tracking hand or body pose from image data is described, for example, to control a game system, natural user interface or for augmented reality. In various examples a prediction engine takes a single frame of image data and predicts a distribution over a pose of a hand or body depicted in the image data. In examples, a stochastic optimizer has a pool of candidate poses of the hand or body which it iteratively refines, and samples from the predicted distribution are used to replace some candidate poses in the pool. In some examples a best candidate pose from the pool is selected as the current tracked pose and the selection process uses a 3D model of the hand or body. | 03-24-2016 |
20160086025 | POSE TRACKER WITH MULTI THREADED ARCHITECTURE - Tracking pose of an articulated entity from image data is described, for example, to control a game system, natural user interface or for augmented reality. In various examples a plurality of threads execute on a parallel computing unit, each thread processing data from an individual frame of a plurality of frames of image data captured by an image capture device. In examples, each thread is computing an iterative optimization process whereby a pool of partially optimized candidate poses is being updated. In examples, one or more candidate poses from an individual thread are sent to one or more of the other threads and used to replace or add to candidate poses at the receiving thread(s). | 03-24-2016 |
20160086349 | TRACKING HAND POSE USING FOREARM-HAND MODEL - Tracking hand pose from image data is described, for example, to control a natural user interface or for augmented reality. In various examples an image is received from a capture device, the image depicting at least one hand in an environment. For example, a hand tracker accesses a 3D model of a hand and forearm and computes pose of the hand depicted in the image by comparing the 3D model with the received image. | 03-24-2016 |
20160104031 | DEPTH FROM TIME OF FLIGHT CAMERA - Region of interest detection in raw time of flight images is described. For example, a computing device receives at least one raw image captured for a single frame by a time of flight camera. The raw image depicts one or more objects in an environment of the time of flight camera (such as human hands, bodies or any other objects). The raw image is input to a trained region detector and in response one or more regions of interest in the raw image are received. A received region of interest comprises image elements of the raw image which are predicted to depict at least part of one of the objects. A depth computation logic computes depth from the one or more regions of interest of the raw image. | 04-14-2016 |
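The two depth-sensing entries above (20150248764 and 20150248765) share one core idea: a trained regression forest maps per-pixel image features to a depth value. A minimal sketch of that idea in Python, using scikit-learn's `RandomForestRegressor` as a stand-in for the patented forests; the five-value "patch" features and the synthetic depth function below are illustrative assumptions, not the claimed method:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy data: each "pixel" is described by its own intensity and the
# intensities at four neighbouring offsets (a crude patch feature).
n_pixels = 2000
features = rng.uniform(0.0, 1.0, size=(n_pixels, 5))

# Assume (for this sketch only) that the true depth is a smooth
# function of the centre intensity plus neighbour contrast.
depth = 2.0 * features[:, 0] + 0.5 * (features[:, 1] - features[:, 2])

# Train on 1500 pixels, predict depth for the remaining 500.
forest = RandomForestRegressor(n_estimators=20, random_state=0)
forest.fit(features[:1500], depth[:1500])

predicted = forest.predict(features[1500:])
error = np.abs(predicted - depth[1500:]).mean()
```

The forest assigns each held-out pixel a depth estimate independently, mirroring the abstracts' per-image-element prediction; a real system would use richer features (e.g. intensity differences at learned offsets) rather than raw values.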
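The pose-tracking entries (20160085310 and 20160086025) describe a stochastic optimizer that keeps a pool of candidate poses, iteratively refines them, and re-seeds part of the pool from a predicted distribution. A toy sketch of that loop, with a one-dimensional "pose" and a quadratic score standing in for the 3D-model comparison (both are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D "pose": a single value the optimizer must recover.
true_pose = 0.7

def score(pose):
    # Stand-in for comparing a rendered 3D model against the image:
    # higher is better, peaked at the true pose.
    return -((pose - true_pose) ** 2)

# Pool of 16 candidate poses, iteratively refined.
pool = rng.uniform(-1.0, 1.0, size=16)
for _ in range(30):
    order = np.argsort([score(p) for p in pool])[::-1]
    survivors = pool[order[:8]]          # keep the better half
    pool = np.concatenate([
        survivors[:4],                            # elites kept unchanged
        survivors + rng.normal(0.0, 0.05, 8),     # local refinement
        rng.uniform(-1.0, 1.0, size=4),           # re-seeded candidates
    ])

best = pool[np.argmax([score(p) for p in pool])]
```

In the abstracts, the re-seeded candidates come from a per-frame predicted pose distribution (and, in the multi-threaded variant, from other threads' pools) rather than the uniform draw used here.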