Patent application number | Description | Published |
--- | --- | --- |
20090125820 | COMPACT, PORTABLE, AND EFFICIENT REPRESENTATION OF A USER INTERFACE CONTROL TREE - A non-tree representation of a UI control tree is provided by a compact UI binary file that is generated by encoding a UI definition markup file expressing UI controls and behavior in human-readable form. The UI binary file is utilized in a runtime environment on a computing device as a source of a binary instruction stream. The stream can be efficiently processed by an interpreter on the device without needing validation between loading and rendering the UI. The encoding places much of the representation into an object and script section of the UI binary file. The interpreter runs this section without it being entirely resident in the device's memory to minimize the memory footprint. At runtime, operation code (“op-code”) streams contained in this section are used to build UI objects, and implement scriptable behavior for manipulating the UI objects to render the UI on the device with the designed behavior. | 05-14-2009 |
20090327238 | EXTENSIBLE BINDING OF DATA WITHIN GRAPHICAL RICH APPLICATIONS - An arrangement is provided for retrieving and updating data within an application, such as a media player application and its metadata. Information is gathered from multiple remote sources. Each remote source is queried for information, and information is received from it. The received information is compared to a number of stored data storage conventions. The particular data storage convention employed is determined, and using the determined data storage convention, data is bound to a number of fields in the application. | 12-31-2009 |
20100081507 | Adaptation for Alternate Gaming Input Devices - Mechanisms for adjusting signals between gaming controllers and gaming consoles are disclosed. In an embodiment, the output signals of a mouse control a gaming console which is normally controlled by an analog thumbstick. The output signals of the mouse are adjusted to compensate for the analog thumbstick controller assist techniques employed by the gaming console. The adjusted signals are sent to the gaming console. The result is that the user is able to control the game using the mouse and have the same feel as if the user were using the analog thumbstick controller. | 04-01-2010 |
20100194741 | DEPTH MAP MOVEMENT TRACKING VIA OPTICAL FLOW AND VELOCITY PREDICTION - Techniques for efficiently tracking points on a depth map using an optical flow are disclosed. In order to optimize the use of optical flow, isolated regions of the depth map may be tracked. Each sampling region may comprise a 3-dimensional box (width, height, and depth). Each region may be “colored” as a function of depth information to generate a “zebra” pattern as a function of depth data for each sample. The disclosed techniques may provide for handling optical flow tracking when occlusion occurs by utilizing a weighting process for application of optical flow vs. velocity prediction to stabilize tracking. | 08-05-2010 |
20100281437 | MANAGING VIRTUAL PORTS - Techniques for managing virtual ports are disclosed herein. Each such virtual port may have different associated features such as, for example, privileges, rights or options. When one or more users are in a capture scene of a gesture based system, the system may associate virtual ports with the users and maintain the virtual ports. Also provided are techniques for disassociating virtual ports with users or swapping virtual ports between two or more users. | 11-04-2010 |
20100302365 | Depth Image Noise Reduction - A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes noise. For example, the depth image may include one or more holes having one or more empty pixels or pixels without a depth value. Depth values for the one or more empty pixels may be estimated and a depth image that includes the estimated depth values for the one or more empty pixels may be rendered. | 12-02-2010 |
20100303289 | DEVICE FOR IDENTIFYING AND TRACKING MULTIPLE HUMANS OVER TIME - A system recognizes human beings in their natural environment, without special sensing devices attached to the subjects, uniquely identifies them and tracks them in three dimensional space. The resulting representation is presented directly to applications as a multi-point skeletal model delivered in real-time. The device efficiently tracks humans and their natural movements by understanding the natural mechanics and capabilities of the human muscular-skeletal system. The device also uniquely recognizes individuals in order to allow multiple people to interact with the system via natural movements of their limbs and body as well as voice commands/responses. | 12-02-2010 |
20100303302 | Systems And Methods For Estimating An Occluded Body Part - A depth image of a scene may be received, observed, or captured by a device. The depth image may include a human target that may have, for example, a portion thereof non-visible or occluded. For example, a user may be turned such that a body part may not be visible to the device, may have one or more body parts partially outside a field of view of the device, may have a body part or a portion of a body part behind another body part or object, or the like, such that the human target associated with the user may also have a body part, or a portion of a body part, non-visible or occluded in the depth image. A position or location of the non-visible or occluded portion or body part of the human target associated with the user may then be estimated. | 12-02-2010 |
20100304813 | Protocol And Format For Communicating An Image From A Camera To A Computing Environment - A media feed interface may be provided that may be used to extract a media frame from a media feed. The media feed interface may access a capture device, a file, and/or a network resource. Upon accessing the capture device, file, and/or network resource, the media feed interface may populate buffers with data and then may create a media feed from the buffers. Upon request, the media feed interface may isolate a media frame within the media feed. For example, the media feed interface may analyze media frames in the media feed to determine whether a media frame includes information associated with, for example, the request. If the media frame includes the requested information, the media feed interface may isolate the media frame associated with the information and may provide access to the isolated media frame. | 12-02-2010 |
20120144348 | Managing Virtual Ports - Techniques for managing virtual ports are disclosed herein. Each such virtual port may have different associated features such as, for example, privileges, rights or options. When one or more users are in a capture scene of a gesture based system, the system may associate virtual ports with the users and maintain the virtual ports. Also provided are techniques for disassociating virtual ports with users or swapping virtual ports between two or more users. | 06-07-2012 |
20120206452 | REALISTIC OCCLUSION FOR A HEAD MOUNTED AUGMENTED REALITY DISPLAY - Technology is described for providing realistic occlusion between a virtual object displayed by a head mounted, augmented reality display system and a real object visible to the user's eyes through the display. A spatial occlusion in a user field of view of the display is typically a three dimensional occlusion determined based on a three dimensional space mapping of real and virtual objects. An occlusion interface between a real object and a virtual object can be modeled at a level of detail determined based on criteria such as distance within the field of view, display size or position with respect to a point of gaze. Technology is also described for providing three dimensional audio occlusion based on an occlusion between a real object and a virtual object in the user environment. | 08-16-2012 |
20120280897 | Attribute State Classification - Attribute state classification techniques are described. In one or more implementations, one or more pixels of an image are classified by a computing device as having one or several states for one or more attributes that do not identify corresponding body parts of a user. A gesture is recognized by the computing device that is operable to initiate one or more operations of the computing device based at least in part on the state classifications of the one or more pixels for the one or more attributes. | 11-08-2012 |
20120314031 | INVARIANT FEATURES FOR COMPUTER VISION - Technology is described for determining and using invariant features for computer vision. A local orientation may be determined for each depth pixel in a subset of the depth pixels in a depth map. The local orientation may be an in-plane orientation, an out-of-plane orientation, or both. A local coordinate system is determined for each of the depth pixels in the subset based on the local orientation of the corresponding depth pixel. A feature region is defined relative to the local coordinate system for each of the depth pixels in the subset. The feature region for each of the depth pixels in the subset is transformed from the local coordinate system to an image coordinate system of the depth map. The transformed feature regions are used to process the depth map. | 12-13-2012 |
20130286004 | DISPLAYING A COLLISION BETWEEN REAL AND VIRTUAL OBJECTS - Technology is described for displaying a collision between objects by an augmented reality display device system. A collision between a real object and a virtual object is identified based on three dimensional space position data of the objects. At least one effect on at least one physical property of the real object is determined based on physical properties of the real object, like a change in surface shape, and physical interaction characteristics of the collision. Simulation image data is generated and displayed simulating the effect on the real object by the augmented reality display. Virtual objects under control of different executing applications can also interact with one another in collisions. | 10-31-2013 |
20140002607 | INVARIANT FEATURES FOR COMPUTER VISION | 01-02-2014 |
20140085193 | PROTOCOL AND FORMAT FOR COMMUNICATING AN IMAGE FROM A CAMERA TO A COMPUTING ENVIRONMENT - A media feed interface may be provided that may be used to extract a media frame from a media feed. The media feed interface may access a capture device, a file, and/or a network resource. Upon accessing the capture device, file, and/or network resource, the media feed interface may populate buffers with data and then may create a media feed from the buffers. Upon request, the media feed interface may isolate a media frame within the media feed. For example, the media feed interface may analyze media frames in the media feed to determine whether a media frame includes information associated with, for example, the request. If the media frame includes the requested information, the media feed interface may isolate the media frame associated with the information and may provide access to the isolated media frame. | 03-27-2014 |
20140313225 | AUGMENTED REALITY AUCTION PLATFORM - An augmented reality submission includes a hologram to virtually augment a world space object and a compensation offer for presenting the hologram to a viewer of the world space object. The augmented reality submission is selected as a winning submission if the submission satisfies a selection criterion. | 10-23-2014 |
20140375545 | ADAPTIVE EVENT RECOGNITION - A system and related methods for adaptive event recognition are provided. In one example, a selected sensor of a head-mounted display device is operated at a first polling rate corresponding to a higher potential latency. Initial user-related information is received. Where the initial user-related information matches a pre-event, the selected sensor is operated at a second polling rate faster than the first polling rate and corresponding to a lower potential latency. Subsequent user-related information is received. Where the subsequent user-related information matches a selected target event, feedback associated with the selected target event is provided to the user via the head-mounted display device. | 12-25-2014 |
20150040040 | TWO-HAND INTERACTION WITH NATURAL USER INTERFACE - Two-handed interactions with a natural user interface are disclosed. For example, one embodiment provides a method comprising detecting via image data received by the computing device a context-setting input performed by a first hand of a user, and sending to a display a user interface positioned based on a virtual interaction coordinate system, the virtual coordinate system being positioned based upon a position of the first hand of the user. The method further includes detecting via image data received by the computing device an action input performed by a second hand of the user, the action input performed while the first hand of the user is performing the context-setting input, and sending to the display a response based on the context-setting input and an interaction between the action input and the virtual interaction coordinate system. | 02-05-2015 |
20150084967 | DEPTH MAP MOVEMENT TRACKING VIA OPTICAL FLOW AND VELOCITY PREDICTION - Techniques for efficiently tracking points on a depth map using an optical flow are disclosed. In order to optimize the use of optical flow, isolated regions of the depth map may be tracked. Each sampling region may comprise a 3-dimensional box (width, height, and depth). Each region may be “colored” as a function of depth information to generate a “zebra” pattern as a function of depth data for each sample. The disclosed techniques may provide for handling optical flow tracking when occlusion occurs by utilizing a weighting process for application of optical flow vs. velocity prediction to stabilize tracking. | 03-26-2015 |
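Several of the depth-image entries above (e.g. 20100302365 and 20100303302) describe estimating depth values for "holes", pixels in a depth image that lack a depth value. The publications do not specify an estimation method, so the following is only an illustrative sketch assuming a simple neighbor-averaging strategy; `fill_depth_holes` is a hypothetical name, not from any of the patents.

```python
import numpy as np

def fill_depth_holes(depth):
    """Estimate depth for empty (zero) pixels from the mean of valid
    neighbors in a 3x3 window; pixels with no valid neighbors stay 0.
    This is one plausible hole-filling heuristic, not the patented method."""
    filled = depth.astype(float).copy()
    for y, x in np.argwhere(depth == 0):
        # collect valid (non-zero) neighbors around the empty pixel
        window = depth[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        valid = window[window > 0]
        if valid.size:
            filled[y, x] = valid.mean()
    return filled

# a 3x3 depth patch with one empty pixel (0 marks "no depth value")
patch = np.array([[2.0, 2.0, 2.0],
                  [2.0, 0.0, 4.0],
                  [4.0, 4.0, 4.0]])
print(fill_depth_holes(patch)[1, 1])  # mean of the 8 valid neighbors: 3.0
```

Averaging over a local window keeps the fill consistent with the surrounding surface; real systems typically add smarter edge-aware filters so a hole straddling a depth discontinuity is not smeared across both surfaces.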