Patent application number | Description | Published |
20120299827 | MULTI-PLATFORM MOTION-BASED COMPUTER INTERACTIONS - Systems and methods for multi-platform motion interactivity are provided. The system includes a motion-sensing subsystem, a display subsystem including a display, a logic subsystem, and a data-holding subsystem containing instructions executable by the logic subsystem. The system is configured to display a displayed scene on the display; receive a dynamically-changing motion input from the motion-sensing subsystem that is generated in response to movement of a tracked object; generate, in real time, a dynamically-changing 3D spatial model of the tracked object based on the motion input; and control, based on the movement of the tracked object and using the 3D spatial model, motion within the displayed scene. The system is further configured to receive, from a secondary computing system, a secondary input; and control the displayed scene in response to the secondary input to visually represent interaction between the motion input and the secondary input. | 11-29-2012 |
20120308140 | SYSTEM FOR RECOGNIZING AN OPEN OR CLOSED HAND - A system and method are disclosed relating to a pipeline for generating a computer model of a target user, including a hand model of the user's hands, captured by an image sensor in a NUI system. The computer model represents a best estimate of the position of a user's hand or hands and whether the hand or hands are in an open or closed state. The generated hand model may be used by a gaming or other application to determine such things as user gestures and control actions. | 12-06-2012 |
20120309532 | SYSTEM FOR FINGER RECOGNITION AND TRACKING - A system and method are disclosed relating to a pipeline for generating a computer model of a target user, including a hand model of the user's hands and fingers, captured by an image sensor in a NUI system. The computer model represents a best estimate of the position and orientation of a user's hand or hands. The generated hand model may be used by a gaming or other application to determine such things as user gestures and control actions. | 12-06-2012 |
20130127994 | VIDEO COMPRESSION USING VIRTUAL SKELETON - Optical sensor information captured via one or more optical sensors imaging a scene that includes a human subject is received by a computing device. The optical sensor information is processed by the computing device to model the human subject with a virtual skeleton, and to obtain surface information representing the human subject. The virtual skeleton is transmitted by the computing device to a remote computing device at a higher frame rate than the surface information. Virtual skeleton frames are used by the remote computing device to estimate surface information for frames that have not been transmitted by the computing device. | 05-23-2013 |
20130141419 | AUGMENTED REALITY WITH REALISTIC OCCLUSION - A head-mounted display device is configured to visually augment an observed physical space to a user. The head-mounted display device includes a see-through display and is configured to receive augmented display information, such as a virtual object with occlusion relative to a real world object from a perspective of the see-through display. | 06-06-2013 |
20130141421 | AUGMENTED REALITY VIRTUAL MONITOR - A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display. | 06-06-2013 |
20130141434 | VIRTUAL LIGHT IN AUGMENTED REALITY - A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment. | 06-06-2013 |
20130194259 | VIRTUAL ENVIRONMENT GENERATING SYSTEM - A system and related methods for visually augmenting an appearance of a physical environment as seen by a user through a head-mounted display device are provided. In one embodiment, a virtual environment generating program receives eye-tracking information, lighting information, and depth information from the head-mounted display. The program generates a virtual environment that models the physical environment and is based on the lighting information and the distance of a real-world object from the head-mounted display. The program visually augments a virtual object representation in the virtual environment based on the eye-tracking information, and renders the virtual object representation on a transparent display of the head-mounted display device. | 08-01-2013 |
20130194304 | COORDINATE-SYSTEM SHARING FOR AUGMENTED REALITY - A method for presenting real and virtual images correctly positioned with respect to each other. The method includes, in a first field of view, receiving a first real image of an object and displaying a first virtual image. The method also includes, in a second field of view oriented independently relative to the first field of view, receiving a second real image of the object and displaying a second virtual image, the first and second virtual images positioned coincidently within a coordinate system. | 08-01-2013 |
20130196757 | MULTIPLAYER GAMING WITH HEAD-MOUNTED DISPLAY - A system and related methods for inviting a potential player to participate in a multiplayer game via a user head-mounted display device are provided. In one example, a potential player invitation program receives user voice data and determines that the user voice data is an invitation to participate in a multiplayer game. The program receives eye-tracking information, depth information, facial recognition information, potential player head-mounted display device information, and/or potential player voice data. The program associates the invitation with the potential player using the eye-tracking information, the depth information, the facial recognition information, the potential player head-mounted display device information, and/or the potential player voice data. The program matches a potential player account with the potential player. The program receives an acceptance response from the potential player, and joins the potential player account with a user account in participating in the multiplayer game. | 08-01-2013 |
20130196772 | MATCHING PHYSICAL LOCATIONS FOR SHARED VIRTUAL EXPERIENCE - Embodiments for matching participants in a virtual multiplayer entertainment experience are provided. For example, one embodiment provides a method including receiving from each user of a plurality of users a request to join the virtual multiplayer entertainment experience, receiving from each user of the plurality of users information regarding characteristics of a physical space in which each user is located, and matching two or more users of the plurality of users for participation in the virtual multiplayer entertainment experience based on the characteristics of the physical space of each of the two or more users. | 08-01-2013 |
20130335435 | COLOR VISION DEFICIT CORRECTION - Embodiments related to improving a color-resolving ability of a user of a see-thru display device are disclosed. For example, one disclosed embodiment includes, on a see-thru display device, constructing and displaying virtual imagery to superpose onto real imagery sighted by the user through the see-thru display device. The virtual imagery is configured to accentuate a locus of the real imagery of a color poorly distinguishable by the user. Such virtual imagery is then displayed by superposing it onto the real imagery, in registry with the real imagery, in a field of view of the user. | 12-19-2013 |
20130342568 | LOW LIGHT SCENE AUGMENTATION - Embodiments related to providing low light scene augmentation are disclosed. One embodiment provides, on a computing device comprising a see-through display device, a method including recognizing, from image data received from an image sensor, a background scene of an environment viewable through the see-through display device, the environment comprising a physical object. The method further includes identifying one or more geometrical features of the physical object and displaying, on the see-through display device, an image augmenting the one or more geometrical features. | 12-26-2013 |
20140049558 | AUGMENTED REALITY OVERLAY FOR CONTROL DEVICES - Embodiments for providing instructional information for control devices are disclosed. In one example, a method on a see-through display device comprising a see-through display and an outward-facing image sensor includes acquiring an image of a scene viewable through the see-through display and detecting a control device in the scene. The method also includes retrieving information pertaining to a function of an interactive element of the control device and displaying an image on the see-through display augmenting an appearance of the interactive element of the control device with image data related to the function of the interactive element. | 02-20-2014 |
20140125574 | USER AUTHENTICATION ON DISPLAY DEVICE - Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated. | 05-08-2014 |
20140125668 | CONSTRUCTING AUGMENTED REALITY ENVIRONMENT WITH PRE-COMPUTED LIGHTING - Embodiments related to efficiently constructing an augmented reality environment with global illumination effects are disclosed. For example, one disclosed embodiment provides a method of displaying an augmented reality image via a display device. The method includes receiving image data, the image data capturing an image of a local environment of the display device, and identifying a physical feature of the local environment via the image data. The method further includes constructing an augmented reality image of a virtual structure for display over the physical feature in spatial registration with the physical feature from a viewpoint of a user, the augmented reality image comprising a plurality of modular virtual structure segments arranged in adjacent locations to form the virtual structure, each modular virtual structure segment comprising a pre-computed global illumination effect, and outputting the augmented reality image to the display device. | 05-08-2014 |
20140145914 | HEAD-MOUNTED DISPLAY RESOURCE MANAGEMENT - A system and related methods for a resource management in a head-mounted display device are provided. In one example, the head-mounted display device includes a plurality of sensors and a display system for presenting holographic objects. A resource management program is configured to operate a selected sensor in a default power mode to achieve a selected fidelity. The program receives user-related information from one or more of the sensors, and determines whether target information is detected. Where target information is detected, the program adjusts the selected sensor to operate in a reduced power mode that uses less power than the default power mode. | 05-29-2014 |
20140240351 | MIXED REALITY AUGMENTATION - Embodiments that relate to providing motion amplification to a virtual environment are disclosed. For example, in one disclosed embodiment a mixed reality augmentation program receives from a head-mounted display device motion data that corresponds to motion of a user in a physical environment. The program presents via the display device the virtual environment in motion in a principal direction, with the principal direction motion being amplified by a first multiplier as compared to the motion of the user in a corresponding principal direction. The program also presents the virtual environment in motion in a secondary direction, where the secondary direction motion is amplified by a second multiplier as compared to the motion of the user in a corresponding secondary direction, and the second multiplier is less than the first multiplier. | 08-28-2014 |
20140320389 | MIXED REALITY INTERACTIONS - Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. A selected interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display. | 10-30-2014 |
20140333665 | CALIBRATION OF EYE LOCATION - Embodiments are disclosed that relate to calibrating a predetermined eye location in a head-mounted display. For example, in one disclosed embodiment a method includes displaying a virtual marker visually alignable with a real world target at an alignment condition. At the alignment condition, image data is acquired to determine a location of the real world target. From the image data, an estimated eye location relative to a location of the head-mounted display is determined. Based upon the estimated eye location, the predetermined eye location is then calibrated. | 11-13-2014 |
20140333666 | INTERACTIONS OF VIRTUAL OBJECTS WITH SURFACES - Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a display system. For example, one disclosed embodiment includes displaying a virtual object via the display system as free-floating, detecting a trigger to display the object as attached to a surface, and, in response to the trigger, displaying the virtual object as attached to the surface via the display system. The method may further include detecting a trigger to detach the virtual object from the surface and, in response to the trigger to detach the virtual object from the surface, detaching the virtual object from the surface and displaying the virtual object as free-floating. | 11-13-2014 |
20150035832 | VIRTUAL LIGHT IN AUGMENTED REALITY - A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment. | 02-05-2015 |