Patent application number | Description | Published |
20090237401 | MULTI-STAGE TESSELLATION FOR GRAPHICS RENDERING - This disclosure describes a multi-stage tessellation technique for tessellating a curve during graphics rendering. In particular, a first tessellation stage tessellates the curve into a first set of line segments that each represents a portion of the curve. A second tessellation stage further tessellates the portion of the curve represented by each of the line segments of the first set into additional line segments that more finely represent the shape of the curve. In this manner, each portion of the curve that was represented by only one line segment after the first tessellation stage is represented by more than one line segment after the second tessellation stage. In some instances, more than two tessellation stages may be performed to tessellate the curve. | 09-24-2009 |
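The two-stage scheme described in this abstract can be illustrated with a short Python sketch. The function names and the choice of a quadratic Bézier as the curve are illustrative assumptions, not details from the application:

```python
def quad_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    x = u * u * p0[0] + 2 * u * t * p1[0] + t * t * p2[0]
    y = u * u * p0[1] + 2 * u * t * p1[1] + t * t * p2[1]
    return (x, y)

def tessellate_two_stage(p0, p1, p2, n1, n2):
    """Stage one splits the parameter range into n1 coarse intervals;
    stage two splits each coarse interval into n2 finer segments, so a
    portion covered by one segment after stage one is covered by n2
    segments after stage two."""
    coarse = [i / n1 for i in range(n1 + 1)]
    points = []
    for a, b in zip(coarse, coarse[1:]):
        for j in range(n2):
            points.append(quad_bezier(p0, p1, p2, a + (b - a) * j / n2))
    points.append(quad_bezier(p0, p1, p2, 1.0))
    return points

points = tessellate_two_stage((0, 0), (1, 2), (2, 0), n1=4, n2=3)
```

Further stages would simply repeat the inner subdivision on each stage-two segment.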
20110107216 | GESTURE-BASED USER INTERFACE - A gesture-based user interface system that includes a media-capturing device, a processor, and a display device. The media-capturing device captures media associated with a user and the user's surrounding environment. Using the captured media, the processor recognizes gestures the user makes to interact with virtual objects shown on the display device, without the user touching the display. A mirror image of the user and the surrounding environment is displayed in 3D on the display device, together with the virtual objects, in a virtual environment. The interaction between the image of the user and the virtual objects is also displayed, along with an indication of the interaction such as visual and/or audio feedback. | 05-05-2011 |
20110262001 | VIEWPOINT DETECTOR BASED ON SKIN COLOR AREA AND FACE AREA - In a particular illustrative embodiment, a method of determining a viewpoint of a person based on skin color area and face area is disclosed. The method includes receiving image data corresponding to an image captured by a camera, the image including at least one object to be displayed at a device coupled to the camera. The method further includes determining a viewpoint of the person relative to a display of the device coupled to the camera. The viewpoint of the person may be determined by determining a face area of the person based on a determined skin color area of the person and tracking a face location of the person based on the face area. One or more objects displayed at the display may be moved in response to the determined viewpoint of the person. | 10-27-2011 |
20120113241 | FINGERTIP TRACKING FOR TOUCHLESS USER INTERFACE - In general, this disclosure describes techniques for providing a gesture-based user interface. For example, according to some aspects of the disclosure, a user interface generally includes a camera and a computing device that identifies and tracks the motion of one or more fingertips of a user. In some examples, the user interface is configured to identify predefined gestures (e.g., patterns of motion) associated with certain motions of the user's fingertips. In other examples, the user interface is configured to identify hand postures (e.g., patterns of which fingertips are visible). Accordingly, the user can interact with the computing device by performing the gestures. | 05-10-2012 |
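One common way to recognize predefined gestures from a tracked fingertip, sketched here as an assumption rather than the application's actual method, is to quantize the motion trajectory into direction symbols and match the symbol sequence against gesture templates:

```python
def direction(p, q):
    """Quantize the motion from point p to point q into one of four symbols."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    if abs(dx) >= abs(dy):
        return "right" if dx >= 0 else "left"
    return "down" if dy >= 0 else "up"

def to_symbols(trajectory):
    """Turn a fingertip trajectory into a sequence of direction symbols."""
    return [direction(p, q) for p, q in zip(trajectory, trajectory[1:])]

# Hypothetical gesture templates keyed by direction sequence.
GESTURES = {
    ("right", "right"): "swipe_right",
    ("down", "down"): "swipe_down",
}

def recognize(trajectory):
    return GESTURES.get(tuple(to_symbols(trajectory)), "unknown")

gesture = recognize([(0, 0), (5, 1), (11, 0)])
```

A real system would add smoothing and tolerance for noisy trajectories; this shows only the pattern-matching core.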
20120139906 | HYBRID REALITY FOR 3D HUMAN-MACHINE INTERFACE - A three dimensional (3D) mixed reality system combines a real 3D image or video, captured by a 3D camera for example, with a virtual 3D image rendered by a computer or other machine to render a 3D mixed-reality image or video. A 3D camera can acquire two separate images (a left and a right) of a common scene, and superimpose the two separate images to create a real image with a 3D depth effect. The 3D mixed-reality system can determine a distance to a zero disparity plane for the real 3D image, determine one or more parameters for a projection matrix based on the distance to the zero disparity plane, render a virtual 3D object based on the projection matrix, and combine the real image and the virtual 3D object to generate a mixed-reality 3D image. | 06-07-2012 |
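The relationship between disparity, the zero disparity plane, and a projection parameter can be sketched as follows. The formulas (the standard stereo depth relation Z = f·b/d and an off-axis frustum skew) are assumed background, not taken from the application, and all names are illustrative:

```python
def depth_from_disparity(f, b, d):
    """Standard stereo relation: depth Z = focal_length * baseline / disparity.
    f is in pixels, b in meters, d in pixels."""
    return f * b / d

def zdp_distance(f, b, shift):
    """Shifting each view horizontally by `shift` pixels gives zero
    disparity to pixels that originally had disparity == shift, so the
    zero disparity plane sits at their depth."""
    return depth_from_disparity(f, b, shift)

def frustum_skew(eye_separation, near, zdp):
    """Assumed off-axis projection parameter: how far to skew each eye's
    near plane so points on the ZDP project to the same screen position
    for both eyes."""
    return eye_separation * near / (2.0 * zdp)

z = zdp_distance(f=1000.0, b=0.06, shift=20.0)
skew = frustum_skew(eye_separation=0.06, near=0.1, zdp=z)
```

With these example numbers the ZDP lands 3 meters from the camera, and the skew would feed into the left/right projection matrices used to render the virtual object at a depth consistent with the real scene.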
20120140038 | ZERO DISPARITY PLANE FOR FEEDBACK-BASED THREE-DIMENSIONAL VIDEO - The techniques of this disclosure are directed to the feedback-based stereoscopic display of three-dimensional images, such as may be used for video telephony (VT) and human-machine interface (HMI) applications. According to one example, a region of interest (ROI) of stereoscopically captured images may be automatically determined based on disparity determined for at least one pixel of the captured images. According to another example, a zero disparity plane (ZDP) for the presentation of a 3D representation of stereoscopically captured images may be determined based on an identified ROI. According to this example, the ROI may be automatically identified, or identified based on receipt of user input identifying the ROI. | 06-07-2012 |
20120223884 | SYSTEM AND METHOD TO DISPLAY CONTENT - An apparatus and method for displaying content is disclosed. A particular method includes determining a viewing orientation of a user relative to a display and providing a portion of content to the display based on the viewing orientation. The portion includes at least a first viewable element of the content and does not include at least one second viewable element of the content. The method also includes determining an updated viewing orientation of the user and updating the portion of the content based on the updated viewing orientation. The updated portion includes at least the second viewable element. A display difference between the portion and the updated portion is non-linearly related to an orientation difference between the viewing orientation and the updated viewing orientation. | 09-06-2012 |
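The key property in this abstract is that the display change is non-linearly related to the orientation change. A minimal sketch of one such mapping, assuming a cubic response curve (the curve shape and parameter names are illustrative, not from the application):

```python
def display_offset(angle_delta_deg, max_angle=45.0, max_offset_px=800):
    """Map a change in viewing orientation to a display offset using a
    cubic response: small head movements barely move the content, while
    larger movements move it disproportionately more."""
    n = max(-1.0, min(1.0, angle_delta_deg / max_angle))  # normalize to [-1, 1]
    return max_offset_px * n ** 3

small = display_offset(5.0)    # slight turn: well under 2 px
large = display_offset(30.0)   # larger turn: a few hundred px
```

Any monotone non-linear curve (quadratic, exponential, etc.) would satisfy the same property; the cubic here is just one concrete choice.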
20120235999 | STEREOSCOPIC CONVERSION FOR SHADER BASED GRAPHICS CONTENT - The example techniques of this disclosure are directed to generating a stereoscopic view from an application designed to generate a mono view. For example, the techniques may modify source code of a vertex shader to cause the modified vertex shader, when executed, to generate graphics content for the images of the stereoscopic view. As another example, the techniques may modify a command that defines a viewport for the mono view to commands that define the viewports for the images of the stereoscopic view. | 09-20-2012 |
20120236002 | 3D TO STEREOSCOPIC 3D CONVERSION - This disclosure describes techniques for modifying application program interface (API) calls in a manner that can cause a device to render native three dimensional (3D) graphics content in stereoscopic 3D. The techniques of this disclosure can be implemented in a manner where API calls themselves are modified, but the API itself and the GPU hardware are not modified. The techniques of the present disclosure include using the same viewing frustum defined by the original content to generate a left-eye image and a right-eye image and shifting the viewport offset of the left-eye image and the right-eye image. | 09-20-2012 |
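The viewport-shifting idea in this abstract can be sketched directly: the scene is rendered twice with the unmodified viewing frustum, and only the viewport rectangle each eye's image is drawn into is offset horizontally. The function name and shift value are illustrative:

```python
def eye_viewports(x, y, width, height, shift_px):
    """Return (left_eye, right_eye) viewport rectangles as (x, y, w, h),
    shifted in opposite horizontal directions from the mono viewport.
    The viewing frustum itself is left unchanged."""
    left = (x - shift_px, y, width, height)
    right = (x + shift_px, y, width, height)
    return left, right

left_vp, right_vp = eye_viewports(0, 0, 1920, 1080, shift_px=16)
```

In a GL-style pipeline, each of these tuples would be passed to the viewport call before rendering the corresponding eye's image, which is what lets the technique work without modifying the API or GPU hardware.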
20120268376 | VIRTUAL KEYBOARDS AND METHODS OF PROVIDING THE SAME - The present disclosure provides systems, methods and apparatus, including computer programs encoded on computer storage media, for providing virtual keyboards. In one aspect, a system includes a camera, a display, a video feature extraction module and a gesture pattern matching module. The camera captures a sequence of images containing a finger of a user, and the display displays each image combined with a virtual keyboard having a plurality of virtual keys. The video feature extraction module detects motion of the finger in the sequence of images relative to virtual sensors of the virtual keys, and determines sensor actuation data based on the detected motion relative to the virtual sensors. The gesture pattern matching module uses the sensor actuation data to recognize a gesture. | 10-25-2012 |
20120268484 | METHOD AND DEVICE FOR PERFORMING USER-DEFINED CLIPPING IN OBJECT SPACE - A method and device for performing and processing user-defined clipping in object space to reduce the number of computations needed for the clipping operation. The method and device also combine the modelview transformation of the vertex coordinates with the projection transform. Performing user-defined clipping in object space provides higher performance and lower power consumption by avoiding generation of eye coordinates when there is no lighting. The device includes a driver for user-defined clipping in object space that performs dual-mode user-defined clipping: in object space when the lighting function is disabled, and in eye space when the lighting function is enabled. | 10-25-2012 |
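The reason eye coordinates can be skipped is that a clip plane can be transformed into object space once, instead of transforming every vertex into eye space. A sketch of that identity, assuming planes as [a, b, c, d] row vectors and the column-vector convention v_eye = M · v_obj (so p_eye · (M·v) = (p_eyeᵀM) · v):

```python
def transform_plane_to_object(plane_eye, modelview):
    """Return the object-space plane p_obj = p_eye^T * M, computed once
    per plane instead of once per vertex."""
    return [sum(plane_eye[r] * modelview[r][c] for r in range(4))
            for c in range(4)]

def keep_vertex(plane, v):
    """A vertex is kept when its signed distance to the plane is >= 0."""
    return sum(p * x for p, x in zip(plane, v)) >= 0

# Example modelview: translate the scene by -5 along z.
modelview = [[1, 0, 0, 0],
             [0, 1, 0, 0],
             [0, 0, 1, -5],
             [0, 0, 0, 1]]
plane_eye = [0, 0, 1, 5]          # eye-space half-space z >= -5
plane_obj = transform_plane_to_object(plane_eye, modelview)
```

Clipping each vertex against `plane_obj` in object space then gives the same accept/reject decisions as transforming the vertex to eye space first, which is where the savings come from when lighting (which needs eye coordinates anyway) is off.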
20120321139 | CONTENT-ADAPTIVE SYSTEMS, METHODS AND APPARATUS FOR DETERMINING OPTICAL FLOW - Embodiments include methods and systems which determine pixel displacement between frames based on a respective weighting-value for each pixel or a group of pixels. The weighting-values provide an indication as to which pixels are more pertinent to optical flow computations. Computational resources and effort can be focused on pixels with higher weights, which are generally more pertinent to optical flow determinations. | 12-20-2012 |
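One plausible weighting scheme matching this description, offered here as an assumption rather than the application's actual method, is to weight pixels by local gradient magnitude and spend optical-flow computation only on pixels whose weight clears a threshold:

```python
def gradient_weight(frame, x, y):
    """Approximate gradient magnitude as a per-pixel pertinence weight."""
    gx = frame[y][x + 1] - frame[y][x - 1]
    gy = frame[y + 1][x] - frame[y - 1][x]
    return abs(gx) + abs(gy)

def select_flow_pixels(frame, threshold):
    """Return coordinates of interior pixels weighted highly enough to
    be worth including in the optical flow computation."""
    h, w = len(frame), len(frame[0])
    return [(x, y)
            for y in range(1, h - 1)
            for x in range(1, w - 1)
            if gradient_weight(frame, x, y) >= threshold]

frame = [[0, 0, 0, 0],
         [0, 9, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
pertinent = select_flow_pixels(frame, threshold=9)
```

Flat regions (where flow is ambiguous anyway) get low weight and are skipped, which is how the computational effort gets concentrated on the pertinent pixels.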
20140029807 | CONTENT-ADAPTIVE PIXEL PROCESSING SYSTEMS, METHODS AND APPARATUS - Embodiments include methods and systems for context-adaptive pixel processing based, in part, on a respective weighting-value for each pixel or a group of pixels. The weighting-values provide an indication as to which pixels are more pertinent to pixel processing computations. Computational resources and effort can be focused on pixels with higher weights, which are generally more pertinent for certain pixel processing determinations. | 01-30-2014 |
20140050372 | METHOD AND APPARATUS FOR FACIAL RECOGNITION - Apparatus and methods for facial recognition are disclosed. A plurality of images of an observed face is received for identification. Based at least on two or more selected images of the plurality of images, a template of the observed face is generated. In some embodiments, the template is a subspace generated based on feature vectors of the plurality of received images. A database of identities and corresponding facial data of known persons is searched based at least on the template of the observed face and the facial data of the known persons. One or more identities of the known persons are selected based at least on the search. | 02-20-2014 |
20140071241 | Devices and Methods for Augmented Reality Applications - In a particular embodiment, a method includes evaluating, at a mobile device, a first area of pixels to generate a first result. The method further includes evaluating, at the mobile device, a second area of pixels to generate a second result. Based on comparing a threshold with a difference between the first result and the second result, a determination is made whether the second area of pixels corresponds to a background portion of a scene or a foreground portion of the scene. | 03-13-2014 |
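A minimal sketch of this comparison, assuming the per-area "result" is a mean intensity (the statistic, names, and threshold are illustrative assumptions):

```python
def evaluate_area(pixels):
    """Reduce an area of grayscale pixel values to a single result."""
    return sum(pixels) / len(pixels)

def classify_second_area(first_area, second_area, threshold):
    """If the two areas' results differ by more than the threshold, treat
    the second area as foreground relative to the first; otherwise as
    part of the same background."""
    first = evaluate_area(first_area)
    second = evaluate_area(second_area)
    return "foreground" if abs(first - second) > threshold else "background"

label = classify_second_area([10, 12, 11, 9], [200, 210, 190, 205], threshold=50)
```

On a mobile device this kind of cheap area-level test can gate more expensive per-pixel segmentation.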
20140169667 | REMOVING AN OBJECT FROM AN IMAGE - A method for removing an object from an image is described. The image is separated into a source region and a target region. The target region includes the object to be removed. A contour of the target region may be extracted. One or more filling candidate pixels are obtained. Multiple filling patches are obtained. Each filling patch is centered at a filling candidate pixel. A filling patch may be selected for replacement. | 06-19-2014 |
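The patch-selection step can be sketched with simplified one-dimensional patches. The cost function (sum of squared differences over known pixels) is a common choice in exemplar-based filling and is an assumption here, as are all names:

```python
def patch_cost(source_patch, target_patch, known_mask):
    """Sum of squared differences over pixels already known in the
    target patch; unknown pixels are excluded from the comparison."""
    return sum((s - t) ** 2
               for s, t, k in zip(source_patch, target_patch, known_mask) if k)

def select_filling_patch(candidates, target_patch, known_mask):
    """Pick the source-region patch whose known pixels best match the
    pixels around the filling candidate."""
    return min(candidates, key=lambda c: patch_cost(c, target_patch, known_mask))

# Target patch centered at a filling candidate pixel; zeros are unknown.
target = [50, 52, 0, 0, 0]
mask = [1, 1, 0, 0, 0]
candidates = [[50, 51, 49, 50, 52], [90, 91, 92, 90, 89]]
best = select_filling_patch(candidates, target, mask)
```

The selected patch's pixels then replace the unknown pixels, the contour is updated, and the process repeats until the target region is filled.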
20140192053 | STEREOSCOPIC CONVERSION WITH VIEWING ORIENTATION FOR SHADER BASED GRAPHICS CONTENT - The example techniques of this disclosure are directed to generating a stereoscopic view from an application designed to generate a mono view. For example, the techniques may modify instructions for a vertex shader based on a viewing angle. When the modified vertex shader is executed, the modified vertex shader may generate coordinates for vertices for a stereoscopic view based on the viewing angle. | 07-10-2014 |
20140205141 | SYSTEMS AND METHODS FOR TRACKING AND DETECTING A TARGET OBJECT - A method for detecting and tracking a target object is described. The method includes performing motion-based tracking for a current video frame by comparing a previous video frame and the current video frame. The method also includes selectively performing object detection in the current video frame based on a tracked parameter. | 07-24-2014 |
20140212050 | SYSTEMS AND METHODS FOR PROCESSING AN IMAGE - A method for processing an image is described. Mask bits are determined for a current pixel. The mask bits indicate intensity comparisons between the current pixel and multiple neighboring pixels. The mask bits also indicate whether each of the current pixel's neighboring pixels have been processed. A next pixel is selected for processing based on the mask bits. | 07-31-2014 |
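The mask bits described in this abstract can be sketched as a single integer per pixel. The bit layout below (lower 8 bits for intensity comparisons, upper 8 bits for processed flags) is an assumed encoding for illustration:

```python
# 8-connected neighbor offsets, one bit position per neighbor.
NEIGHBORS = [(-1, -1), (0, -1), (1, -1), (-1, 0),
             (1, 0), (-1, 1), (0, 1), (1, 1)]

def mask_bits(image, processed, x, y):
    """Pack, for the pixel at (x, y), one bit per neighbor for 'neighbor
    is brighter' (bits 0-7) and one per neighbor for 'neighbor already
    processed' (bits 8-15)."""
    bits = 0
    for i, (dx, dy) in enumerate(NEIGHBORS):
        nx, ny = x + dx, y + dy
        if image[ny][nx] > image[y][x]:
            bits |= 1 << i
        if processed[ny][nx]:
            bits |= 1 << (8 + i)
    return bits

def next_unprocessed_neighbor(bits, x, y):
    """Select the first brighter, not-yet-processed neighbor, if any."""
    for i, (dx, dy) in enumerate(NEIGHBORS):
        if bits & (1 << i) and not bits & (1 << (8 + i)):
            return (x + dx, y + dy)
    return None

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
processed = [[False] * 3 for _ in range(3)]
bits = mask_bits(image, processed, 1, 1)
```

Packing both comparisons and processed flags into one word lets the next-pixel selection run on bit tests alone, without touching the image again.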
20140321698 | METHOD FOR IMAGE-BASED STATUS DETERMINATION - Methods, systems, computer-readable media, and apparatuses for image-based status determination are presented. In some embodiments, a method includes capturing at least one image of a moving path. At least one feature within the at least one image is analyzed and based on the analysis of the at least one feature, a direction of movement of the moving path is determined. In some embodiments, a method includes capturing an image of an inclined path. At least one feature within the image is analyzed and based on analysis of the at least one feature, a determination is made whether the image was captured from a top position relative to the inclined path or a bottom position relative to the inclined path. | 10-30-2014 |
20140334740 | CONTENT-ADAPTIVE PIXEL PROCESSING SYSTEMS, METHODS AND APPARATUS - Embodiments include methods and systems for context-adaptive pixel processing based, in part, on a respective weighting-value for each pixel or a group of pixels. The weighting-values provide an indication as to which pixels are more pertinent to pixel processing computations. Computational resources and effort can be focused on pixels with higher weights, which are generally more pertinent for certain pixel processing determinations. | 11-13-2014 |
20140359563 | EFFICIENT EXECUTION OF GRAPH-BASED PROGRAMS - A method includes accessing, at a computing device, data descriptive of a graph representing a program. The graph includes multiple nodes representing execution steps of the program and includes multiple edges representing data transfer steps. The method also includes determining at least two heterogeneous hardware resources of the computing device that are available to execute code represented by one or more of the nodes, and determining one or more paths from a source node to a sink node based on a topology of the graph. The method further includes scheduling execution of code at the at least two heterogeneous hardware resources. The code is represented by at least one of the multiple nodes, and the execution of the code is scheduled based on the one or more paths. | 12-04-2014 |
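The steps in this abstract (walk the graph topology, then schedule node code onto heterogeneous resources) can be sketched with a topological sort and a greedy least-loaded assignment. The greedy policy and resource names are illustrative assumptions, not the application's scheduler:

```python
from collections import deque

def topological_order(nodes, edges):
    """Kahn's algorithm: order nodes so every edge runs earlier-to-later."""
    indegree = {n: 0 for n in nodes}
    adjacency = {n: [] for n in nodes}
    for src, dst in edges:
        adjacency[src].append(dst)
        indegree[dst] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in adjacency[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return order

def schedule(nodes, edges, costs, resources):
    """Assign each node's code to whichever heterogeneous resource is
    currently least loaded, respecting dependency order."""
    load = {r: 0 for r in resources}
    assignment = {}
    for node in topological_order(nodes, edges):
        res = min(resources, key=lambda r: load[r])
        assignment[node] = res
        load[res] += costs[node]
    return assignment

nodes = ["source", "filter", "transform", "sink"]
edges = [("source", "filter"), ("source", "transform"),
         ("filter", "sink"), ("transform", "sink")]
assignment = schedule(nodes, edges,
                      costs={"source": 1, "filter": 3, "transform": 2, "sink": 1},
                      resources=["cpu", "gpu"])
```

A real scheduler would also account for data-transfer costs along the edges and for per-resource execution speeds; this sketch shows only the path-driven ordering and assignment skeleton.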
20140369555 | TRACKER ASSISTED IMAGE CAPTURE - A method for picture processing is described. A first tracking area is obtained. A second tracking area is also obtained. The method includes beginning to track the first tracking area and the second tracking area. Picture processing is performed once a portion of the first tracking area overlapping the second tracking area passes a threshold. | 12-18-2014 |
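The overlap-threshold trigger in this abstract can be sketched with axis-aligned rectangles. The (x, y, w, h) representation and the choice to measure overlap as a fraction of the first area are illustrative assumptions:

```python
def overlap_fraction(a, b):
    """Fraction of rectangle a = (x, y, w, h) covered by rectangle b."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # intersection width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # intersection height
    return (ix * iy) / (aw * ah)

def should_process(first_area, second_area, threshold=0.5):
    """Trigger picture processing once the tracked areas overlap enough."""
    return overlap_fraction(first_area, second_area) >= threshold

trigger = should_process((0, 0, 10, 10), (5, 0, 10, 10))  # exactly half overlap
```

In use, both areas would be updated every frame by the tracker, with `should_process` polled each frame to decide when to capture or process the picture.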