Patent application number | Description | Published |
20080212836 | Visual Tracking Using Depth Data - Real-time visual tracking using depth sensing camera technology results in illumination-invariant tracking performance. Depth sensing (time-of-flight) cameras provide real-time depth and color images of the same scene. Depth windows regulate the tracked area by controlling shutter speed. A potential field is derived from the depth image data to provide edge information of the tracked target. A mathematically representable contour can model the tracked target. Based on the depth data, determining a best fit between the contour and the edge of the tracked target provides position information for tracking. Applications using depth sensor based visual tracking include head tracking, hand tracking, body-pose estimation, robotic command determination, and other human-computer interaction systems. | 09-04-2008 |
20080304710 | METHOD AND APPARATUS FOR PROCESSING IMAGE OF AT LEAST ONE SEEDLING - In one embodiment, a method of processing a source image of at least one seedling may include: a) segmenting the source image into at least a foreground portion and a background portion to form a segmented image, b) skeletonizing the segmented image to form a skeletonized image, the skeletonized image including a skeleton, c) dividing the skeleton into a plurality of segments, d) identifying alternate separations of the skeleton, each alternate separation including at least two groups, each group including at least one segment and potentially relating to an individual seedling, and e) evaluating a plurality of the alternate separations as a function of at least one of: 1) individual angles defined by connecting segments of corresponding groups, 2) combined angles defined by connecting segments of corresponding groups, 3) length defined by connecting segments of corresponding groups, and 4) unused segments. | 12-11-2008 |
20090074252 | REAL-TIME SELF COLLISION AND OBSTACLE AVOIDANCE - A system, method, and computer program product for avoiding collision of a body segment with unconnected structures in an articulated system are described. A virtual surface is constructed surrounding an actual surface of the body segment. Distances between the body segment and unconnected structures are monitored. In response to an unconnected structure penetrating the virtual surface, a redirected joint motion that prevents the unconnected structure from penetrating deeper into the virtual surface is determined. The body segment is redirected based on the redirected joint motion to avoid colliding with the unconnected structure. | 03-19-2009 |
20090110292 | Hand Sign Recognition Using Label Assignment - A method and system for recognizing hand signs that include overlapping or adjoining hands from a depth image. A linked structure comprising multiple segments is generated from the depth image including overlapping or adjoining hands. The hand pose of the overlapping or adjoining hands is determined using either (i) a constrained optimization process in which a cost function and constraint conditions are used to classify segments of the linked structure into the two hands or (ii) a tree search process in which a tree structure including a plurality of nodes is used to obtain the most-likely hand pose represented by the depth image. After determining the hand pose, the segments of the linked structure are matched with stored shapes to determine the sign represented by the depth image. | 04-30-2009 |
20090175540 | Controlled human pose estimation from depth image streams - A system, method, and computer program product for estimating upper body human pose are described. According to one aspect, a plurality of anatomical features are detected in a depth image of the human actor. The method detects a head, neck, and torso (H-N-T) template in the depth image, and detects the features in the depth image based on the H-N-T template. A pose of a human model is estimated based on the detected features and kinematic constraints of the human model. | 07-09-2009 |
20090252423 | Controlled human pose estimation from depth image streams - A system, method, and computer program product for estimating human body pose are described. According to one aspect, anatomical features are detected in a depth image of a human actor. The method detects a head, neck, and trunk (H-N-T) template in the depth image, and detects limbs in the depth image based on the H-N-T template. The anatomical features are detected based on the H-N-T template and the limbs. A pose of a human model is estimated based on the detected features and kinematic constraints of the human model. | 10-08-2009 |
20100034427 | TARGET ORIENTATION ESTIMATION USING DEPTH SENSING - A system for estimating orientation of a target based on real-time video data uses depth data included in the video to determine the estimated orientation. The system includes a time-of-flight camera capable of depth sensing within a depth window. The camera outputs hybrid image data (color and depth). Segmentation is performed to determine the location of the target within the image. Tracking is used to follow the target location from frame to frame. During a training mode, a target-specific training image set is collected with a corresponding orientation associated with each frame. During an estimation mode, a classifier compares new images with the stored training set to determine an estimated orientation. A motion estimation approach uses an accumulated rotation/translation parameter calculation based on optical flow and depth constraints. The parameters are reset to a reference value each time the image corresponds to a dominant orientation. | 02-11-2010 |
20110054870 | Vision Based Human Activity Recognition and Monitoring System for Guided Virtual Rehabilitation - A system, method, and computer program product for providing a user with a virtual environment in which the user can perform guided activities and receive feedback are described. The user is provided with guidance to perform certain movements. The user's movements are captured in an image stream. The image stream is analyzed to estimate the user's movements, which are tracked by a user-specific human model. Biomechanical quantities such as center of pressure and muscle forces are calculated based on the tracked movements. Feedback such as the biomechanical quantities and differences between the guided movements and the captured actual movements are provided to the user. | 03-03-2011 |
20140268353 | 3-DIMENSIONAL (3-D) NAVIGATION - One or more embodiments of techniques or systems for 3-dimensional (3-D) navigation are provided herein. A heads-up display (HUD) component can project graphic elements on focal planes around an environment surrounding a vehicle. The HUD component can cause these graphic elements to appear volumetric or 3-D by moving or adjusting a distance between a focal plane and the vehicle. Additionally, a target position for graphic elements can be adjusted. This enables the HUD component to project graphic elements as moving avatars. In other words, adjusting the focal plane distance and the target position enables graphic elements to be projected in three dimensions along an x, y, and z axis. Further, a moving avatar can be ‘animated’ by sequentially projecting the avatar on different focal planes, thereby providing an occupant with the perception that the avatar is moving towards or away from the vehicle. | 09-18-2014 |
20140270352 | THREE DIMENSIONAL FINGERTIP TRACKING - Systems and methods for detecting and tracking the presence, location, orientation, and/or motion of a hand or hand segments visible to an input source are disclosed herein. Hand, hand segment, and fingertip location and tracking can be performed using ball fit methods. Analysis of hand, hand segment, and fingertip location and tracking data can be used as input for a variety of systems and devices. | 09-18-2014 |
20140282259 | INFORMATION QUERY BY POINTING - Navigating through objects or items on a display device using a first input device to detect pointing of a user's finger to an object or item and using a second input device to receive the user's indication of the selection of the object or the item. An image of the hand is captured by the first input device and is processed to determine a location on the display device corresponding to the location of the fingertip of the pointing finger. The object or the item corresponding to the location of the fingertip is selected after the second input device receives a predetermined user input. | 09-18-2014 |
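The contour-fitting idea in the first entry (20080212836) can be sketched in a few lines of NumPy. This is a toy illustration, not the patented method: the circular head model, the gradient-magnitude edge map standing in for the potential field, the grid search over centers, and the synthetic depth frame are all assumptions made for the sketch.

```python
import numpy as np

def edge_potential(depth):
    """Gradient magnitude of the depth image, used here as a stand-in
    for the potential field that encodes the target's edges."""
    gy, gx = np.gradient(depth.astype(float))
    return np.hypot(gx, gy)

def contour_points(cx, cy, r, n=64):
    """Sample n points on a circular contour (a simple mathematically
    representable contour) centered at (cx, cy)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return cx + r * np.cos(t), cy + r * np.sin(t)

def best_fit_center(potential, r, step=2):
    """Grid-search the contour center that maximizes the summed edge
    response along the contour -- a crude stand-in for the patent's
    best-fit optimization between contour and target edge."""
    h, w = potential.shape
    best_score, best_c = -1.0, (0, 0)
    for cy in range(r, h - r, step):
        for cx in range(r, w - r, step):
            xs, ys = contour_points(cx, cy, r)
            score = potential[ys.astype(int), xs.astype(int)].sum()
            if score > best_score:
                best_score, best_c = score, (cx, cy)
    return best_c

# Synthetic depth frame: a round "head" 2 m away against a 3 m background.
depth = np.full((80, 80), 3.0)
yy, xx = np.mgrid[0:80, 0:80]
depth[(xx - 40) ** 2 + (yy - 40) ** 2 < 15 ** 2] = 2.0

cx, cy = best_fit_center(edge_potential(depth), r=15)
print(cx, cy)  # should land near the true center (40, 40)
```

Because the score depends only on depth discontinuities, the fit is insensitive to illumination, which is the tracking property the abstract highlights; a real tracker would replace the grid search with a local optimization seeded from the previous frame's position.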