Patent application number | Description | Published |
20100285885 | MASSIVELY MULTIPLAYER GAME MESSAGE SCHEDULING - A massively multiplayer game management service includes a scheduling module that establishes a message receiving period and a game data aggregation period. The massively multiplayer game management service further includes a message receiving module that, during the message receiving period that overlaps at least part of the game data aggregation period, receives a message from a player client. The message may include an identifier and an execution time that follows the game data aggregation period. The massively multiplayer game management service further includes a message sending module that sends game data, aggregated in a game space location corresponding to the identifier, to the player clients upon occurrence of the execution time. | 11-11-2010 |
20130147686 | Connecting Head Mounted Displays To External Displays And Other Communication Networks - An audio and/or visual experience of a see-through head-mounted display (HMD) device, e.g., in the form of glasses, can be moved to a target computing device, such as a television, cell phone, or computer monitor, to allow the user to seamlessly transition the content to the target computing device. For example, when the user enters a room in the home with a television, a movie which is playing on the HMD device can be transferred to the television and begin playing there without substantially interrupting the flow of the movie. The HMD device can inform the television of a network address for accessing the movie, for instance, and provide a current status in the form of a time stamp or packet identifier. Content can also be transferred in the reverse direction, to the HMD device. A transfer can occur based on location, preconfigured settings and user commands. | 06-13-2013 |
20140240351 | MIXED REALITY AUGMENTATION - Embodiments that relate to providing motion amplification to a virtual environment are disclosed. For example, in one disclosed embodiment a mixed reality augmentation program receives from a head-mounted display device motion data that corresponds to motion of a user in a physical environment. The program presents via the display device the virtual environment in motion in a principal direction, with the principal direction motion being amplified by a first multiplier as compared to the motion of the user in a corresponding principal direction. The program also presents the virtual environment in motion in a secondary direction, where the secondary direction motion is amplified by a second multiplier as compared to the motion of the user in a corresponding secondary direction, and the second multiplier is less than the first multiplier. | 08-28-2014 |
20140320389 | MIXED REALITY INTERACTIONS - Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display. | 10-30-2014 |
20140333665 | CALIBRATION OF EYE LOCATION - Embodiments are disclosed that relate to calibrating a predetermined eye location in a head-mounted display. For example, in one disclosed embodiment a method includes displaying a virtual marker visually alignable with a real world target at an alignment condition. At the alignment condition, image data is acquired to determine a location of the real world target. From the image data, an estimated eye location relative to a location of the head-mounted display is determined. Based upon the estimated eye location, the predetermined eye location is then calibrated. | 11-13-2014 |
20140375683 | INDICATING OUT-OF-VIEW AUGMENTED REALITY IMAGES - Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object. | 12-25-2014 |
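The out-of-view indication described in application 20140375683 (identifying objects outside a user's field of view and indicating their position) can be sketched as a simple bearing test. This is a minimal illustration only; the function and parameter names are invented here, not taken from the application:

```python
def indicator_direction(camera_yaw_deg, fov_deg, object_bearing_deg):
    """Return None if the object lies inside the horizontal field of view,
    otherwise 'left' or 'right' to say which screen edge should show an
    out-of-view indicator. Angles are in degrees; bearings are absolute
    headings, so the comparison must wrap around 360 degrees."""
    # Signed angular offset of the object from the view centre, wrapped to (-180, 180].
    offset = (object_bearing_deg - camera_yaw_deg + 180.0) % 360.0 - 180.0
    half_fov = fov_deg / 2.0
    if abs(offset) <= half_fov:
        return None  # object is visible; no indicator needed
    return 'right' if offset > 0 else 'left'
```

The wrap-around handling matters: an object at bearing 20 degrees is well within a 90-degree field of view centred on a yaw of 350 degrees, even though the raw difference is 330.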
Patent application number | Description | Published |
20130002813 | VIEWING WINDOWS FOR VIDEO STREAMS - Techniques are provided for viewing windows for video streams. A video stream from a video capture device is accessed. Data that describes movement or position of a person is accessed. A viewing window is placed in the video stream based on the data that describes movement or position of the person. The viewing window is provided to a display device in accordance with the placement of the viewing window in the video stream. Motion sensors can detect motion of the person carrying the video capture device in order to dampen the motion such that the video on the remote display does not suffer from motion artifacts. Sensors can also track the eye gaze of either the person carrying the mobile video capture device or the remote display device to enable control of the spatial region of the video stream shown at the display device. | 01-03-2013 |
20130083173 | VIRTUAL SPECTATOR EXPERIENCE WITH A PERSONAL AUDIO/VISUAL APPARATUS - Technology is described for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display. A position volume of an event object participating in an event in a first 3D coordinate system for a first location is received and mapped to a second position volume in a second 3D coordinate system at a second location remote from where the event is occurring. A display field of view of the near-eye AR display at the second location is determined, and real-time 3D virtual data representing the one or more event objects which are positioned within the display field of view are displayed in the near-eye AR display. A user may select a viewing position from which to view the event. Additionally, virtual data of a second user may be displayed at a position relative to a first user. | 04-04-2013 |
20130114043 | SEE-THROUGH DISPLAY BRIGHTNESS CONTROL - The technology provides various embodiments for controlling brightness of a see-through, near-eye, mixed reality display device based on light intensity of what the user is gazing at. The opacity of the display can be altered, such that external light is reduced if the wearer is looking at a bright object. The wearer's pupil size may be determined and used to adjust the brightness used to display images, as well as the opacity of the display. A suitable balance between opacity and brightness used to display images may be determined that allows real and virtual objects to be seen clearly, while not causing damage or discomfort to the wearer's eyes. | 05-09-2013 |
20130328762 | CONTROLLING A VIRTUAL OBJECT WITH A REAL CONTROLLER DEVICE - Technology is described for controlling a virtual object displayed by a near-eye, augmented reality display with a real controller device. User input data is received from a real controller device requesting an action to be performed by the virtual object. A user perspective of the virtual object being displayed by the near-eye, augmented reality display is determined. The user input data requesting the action to be performed by the virtual object is applied based on the user perspective, and the action is displayed from the user perspective. The virtual object to be controlled by the real controller device may be identified based on user input data which may be from a natural user interface (NUI). A user selected force feedback object may also be identified, and the identification may also be based on NUI input data. | 12-12-2013 |
20130328927 | AUGMENTED REALITY PLAYSPACES WITH ADAPTIVE GAME RULES - A system for generating a virtual gaming environment based on features identified within a real-world environment, and adapting the virtual gaming environment over time as the features identified within the real-world environment change is described. Utilizing the technology described, a person wearing a head-mounted display device (HMD) may walk around a real-world environment and play a virtual game that is adapted to that real-world environment. For example, the HMD may identify environmental features within a real-world environment such as five grassy areas and two cars, and then spawn virtual monsters based on the location and type of the environmental features identified. The location and type of the environmental features identified may vary depending on the particular real-world environment in which the HMD exists and therefore each virtual game may look different depending on the particular real-world environment. | 12-12-2013 |
20130335405 | VIRTUAL OBJECT GENERATION WITHIN A VIRTUAL ENVIRONMENT - A system and method are disclosed for building and experiencing three-dimensional virtual objects from within a virtual environment in which they will be viewed upon completion. A virtual object may be created, edited and animated using a natural user interface while the object is displayed to the user in a three-dimensional virtual environment. | 12-19-2013 |
20140002444 | CONFIGURING AN INTERACTION ZONE WITHIN AN AUGMENTED REALITY ENVIRONMENT | 01-02-2014 |
20140306994 | PERSONAL HOLOGRAPHIC BILLBOARD - Methods for generating and displaying personalized virtual billboards within an augmented reality environment are described. The personalized virtual billboards may facilitate the sharing of personalized information between persons within an environment who have varying degrees of acquaintance (e.g., ranging from close familial relationships to strangers). In some embodiments, a head-mounted display device (HMD) may detect a mobile device associated with a particular person within an environment, acquire a personalized information set corresponding with the particular person, generate a virtual billboard based on the personalized information set, and display the virtual billboard on the HMD. The personalized information set may include information associated with the particular person such as shopping lists and classified advertisements. The HMD may share personalized information associated with an end user of the HMD with the mobile device based on whether the particular person is a friend or unknown to the end user. | 10-16-2014 |
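The opacity-versus-brightness balancing in application 20130114043 can be illustrated with a toy mapping. This is an assumption-laden sketch: the luminance scale, the constants, and the function name are invented for illustration, not drawn from the application. The idea it shows is that brighter gazed-at content raises both the display opacity (to cut external light) and the rendering brightness (to keep virtual imagery legible):

```python
def balance_opacity_brightness(scene_luminance, max_luminance=1000.0):
    """Given the luminance of what the wearer is gazing at (hypothetical
    scale, clamped at max_luminance), return (opacity, display_brightness)
    as values in [0, 1]. Both rise linearly with the measured luminance."""
    # Normalise and clamp the measured luminance to [0, 1].
    level = max(0.0, min(1.0, scene_luminance / max_luminance))
    opacity = 0.2 + 0.6 * level     # never fully opaque nor fully clear
    brightness = 0.3 + 0.7 * level  # keep virtual content visible
    return opacity, brightness
```

A real implementation would also factor in the wearer's pupil size, as the abstract notes, and would smooth changes over time to avoid visible flicker.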
Patent application number | Description | Published |
20110275431 | ROLE ASSIGNMENT IN MULTIPLAYER GAMES - Dynamic role selection of players for different roles in multiplayer gaming sessions is provided. Users seeking to participate in different roles in the game may request participation in the role. Selection of players for roles is made dynamically by varying a selection component for different sessions of the game. The selection component may be a user's game score over different time periods, and can be rotated for different sessions of the game, so that various levels of players have an opportunity to fill game roles. | 11-10-2011 |
20120157200 | INTELLIGENT GAMEPLAY PHOTO CAPTURE - Implementations for identifying, capturing, and presenting high-quality photo-representations of acts occurring during play of a game that employs motion tracking input technology are disclosed. As one example, a method is disclosed that includes capturing, via an optical interface, a plurality of photographs of a player in a capture volume during play of the electronic game. The method further includes, for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter to an event depicted by or corresponding to the captured photograph. The method further includes assigning respective scores to the plurality of captured photographs based, at least in part, on the comparison to the event-based scoring parameter. The method further includes associating the captured photographs at an electronic storage media with the respective scores assigned to the captured photographs. | 06-21-2012 |
20120306853 | ADDING ATTRIBUTES TO VIRTUAL REPRESENTATIONS OF REAL-WORLD OBJECTS - A method, medium, and virtual object for providing a virtual representation with an attribute are described. The virtual representation is generated based on a digitization of a real-world object. Properties of the virtual representation, such as colors, shape similarities, volume, surface area, and the like are identified and an amount or degree of exhibition of those properties by the virtual representation is determined. The properties are employed to identify attributes associated with the virtual representation, such as temperature, weight, or sharpness of an edge, among other attributes of the virtual object. A degree of exhibition of the attributes is also determined based on the properties and their degrees of exhibition. Thereby, the virtual representation is provided with one or more attributes that instruct presentation and interactions of the virtual representation in a virtual world. | 12-06-2012 |
20120307010 | OBJECT DIGITIZATION - Digitizing objects in a picture is discussed herein. A user presents the object to a camera, which captures the image comprising color and depth data for the front and back of the object. For both front and back images, the closest point to the camera is determined by analyzing the depth data. From the closest points, edges of the object are found by noting large differences in depth data. The depth data is also used to construct point cloud constructions of the front and back of the object. Various techniques are applied to extrapolate edges, remove seams, extend color intelligently, filter noise, apply skeletal structure to the object, and optimize the digitization further. Eventually, a digital representation is presented to the user and potentially used in different applications (e.g., games, Web, etc.). | 12-06-2012 |
20130127994 | VIDEO COMPRESSION USING VIRTUAL SKELETON - Optical sensor information captured via one or more optical sensors imaging a scene that includes a human subject is received by a computing device. The optical sensor information is processed by the computing device to model the human subject with a virtual skeleton, and to obtain surface information representing the human subject. The virtual skeleton is transmitted by the computing device to a remote computing device at a higher frame rate than the surface information. Virtual skeleton frames are used by the remote computing device to estimate surface information for frames that have not been transmitted by the computing device. | 05-23-2013 |
20130194304 | COORDINATE-SYSTEM SHARING FOR AUGMENTED REALITY - A method for presenting real and virtual images correctly positioned with respect to each other. The method includes, in a first field of view, receiving a first real image of an object and displaying a first virtual image. The method also includes, in a second field of view oriented independently relative to the first field of view, receiving a second real image of the object and displaying a second virtual image, the first and second virtual images positioned coincidently within a coordinate system. | 08-01-2013 |
20130342568 | LOW LIGHT SCENE AUGMENTATION - Embodiments related to providing low light scene augmentation are disclosed. One embodiment provides, on a computing device comprising a see-through display device, a method including recognizing, from image data received from an image sensor, a background scene of an environment viewable through the see-through display device, the environment comprising a physical object. The method further includes identifying one or more geometrical features of the physical object and displaying, on the see-through display device, an image augmenting the one or more geometrical features. | 12-26-2013 |
20140043433 | AUGMENTED REALITY DISPLAY OF SCENE BEHIND SURFACE - Embodiments are disclosed that relate to augmenting an appearance of a surface via a see-through display device. For example, one disclosed embodiment provides, on a computing device comprising a see-through display device, a method of augmenting an appearance of a surface. The method includes acquiring, via an outward-facing image sensor, image data of a first scene viewable through the display. The method further includes recognizing a surface viewable through the display based on the image data and, in response to recognizing the surface, acquiring a representation of a second scene comprising one or more of a scene located physically behind the surface viewable through the display and a scene located behind a surface contextually related to the surface viewable through the display. The method further includes displaying the representation via the see-through display. | 02-13-2014 |
20140044305 | OBJECT TRACKING - Embodiments are disclosed herein that relate to the automatic tracking of objects. For example, one disclosed embodiment provides a method of operating a mobile computing device having an image sensor. The method includes acquiring image data, identifying an inanimate moveable object in the image data, determining whether the inanimate moveable object is a tracked object, and, if the inanimate moveable object is a tracked object, then storing information regarding a state of the inanimate moveable object, detecting a trigger to provide a notification of the state of the inanimate moveable object, and providing an output of the notification of the state of the inanimate moveable object. | 02-13-2014 |
20140049558 | AUGMENTED REALITY OVERLAY FOR CONTROL DEVICES - Embodiments for providing instructional information for control devices are disclosed. In one example, a method on a see-through display device comprising a see-through display and an outward-facing image sensor includes acquiring an image of a scene viewable through the see-through display and detecting a control device in the scene. The method also includes retrieving information pertaining to a function of an interactive element of the control device and displaying an image on the see-through display augmenting an appearance of the interactive element of the control device with image data related to the function of the interactive element. | 02-20-2014 |
20140125574 | USER AUTHENTICATION ON DISPLAY DEVICE - Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated. | 05-08-2014 |
20140125668 | CONSTRUCTING AUGMENTED REALITY ENVIRONMENT WITH PRE-COMPUTED LIGHTING - Embodiments related to efficiently constructing an augmented reality environment with global illumination effects are disclosed. For example, one disclosed embodiment provides a method of displaying an augmented reality image via a display device. The method includes receiving image data, the image data capturing an image of a local environment of the display device, and identifying a physical feature of the local environment via the image data. The method further includes constructing an augmented reality image of a virtual structure for display over the physical feature in spatial registration with the physical feature from a viewpoint of a user, the augmented reality image comprising a plurality of modular virtual structure segments arranged in adjacent locations to form the virtual structure, each modular virtual structure segment comprising a pre-computed global illumination effect, and outputting the augmented reality image to the display device. | 05-08-2014 |
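The edge-finding step described in application 20120307010 (object edges are found by noting large differences in depth data, starting from the point closest to the camera) can be sketched on a single scan line of depth samples. This is a minimal illustration under stated assumptions; the function names and the threshold value are hypothetical, not from the application:

```python
def closest_point(depth_row):
    """Index of the sample nearest the camera (minimum depth value)."""
    return min(range(len(depth_row)), key=lambda i: depth_row[i])

def find_depth_edges(depth_row, threshold):
    """Given one scan line of depth samples (smaller = closer), return the
    indices where the jump between neighbouring samples exceeds the
    threshold -- a crude depth-discontinuity edge detector."""
    edges = []
    for i in range(1, len(depth_row)):
        if abs(depth_row[i] - depth_row[i - 1]) > threshold:
            edges.append(i)
    return edges
```

On a row such as `[3.0, 3.0, 1.2, 1.1, 1.2, 3.0]`, the near object occupies the middle samples, and the two large depth jumps on either side of it mark its silhouette edges; a full digitization pipeline would run this over every row and column of the front and back depth images before building the point clouds.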