Patent application number | Description | Published |
--- | --- | --- |
20130083008 | ENRICHED EXPERIENCE USING PERSONAL A/V SYSTEM - A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. | 04-04-2013 |
20130147686 | Connecting Head Mounted Displays To External Displays And Other Communication Networks - An audio and/or visual experience of a see-through head-mounted display (HMD) device, e.g., in the form of glasses, can be moved to a target computing device, such as a television, cell phone, or computer monitor, to allow the user to seamlessly transition the content to the target computing device. For example, when the user enters a room in the home with a television, a movie which is playing on the HMD device can be transferred to the television and begin playing there without substantially interrupting the flow of the movie. The HMD device can inform the television of a network address for accessing the movie, for instance, and provide a current status in the form of a time stamp or packet identifier. Content can also be transferred in the reverse direction, to the HMD device. A transfer can occur based on location, preconfigured settings, and user commands. | 06-13-2013 |
20130293468 | COLLABORATION ENVIRONMENT USING SEE THROUGH DISPLAYS - A see-through, near-eye, mixed reality display device and system for collaboration among users of such devices and of personal audio/visual devices with more limited capabilities. One or more wearers of a see-through head-mounted display apparatus define a collaboration environment. For the collaboration environment, a selection of collaboration data and the scope of the environment are determined. Virtual representations of the collaboration data are rendered in the field of view of the wearer and of other device users. The wearer defines which persons in the wearer's field of view are included in the collaboration environment and entitled to share information within it. If allowed, input on a virtual object from other users in the collaboration environment may be received and used to change the virtual object. | 11-07-2013 |
20130293530 | PRODUCT AUGMENTATION AND ADVERTISING IN SEE THROUGH DISPLAYS - An augmented reality system that provides augmented product and environment information to a wearer of a see-through head-mounted display. The augmentation information may include advertising, inventory, pricing, and other information about products a wearer may be interested in. Interest is determined from the wearer's actions and profile. The information may be used to incentivize purchases of real-world products by a wearer, or to allow the wearer to make better purchasing decisions. The augmentation information may enhance a wearer's shopping experience by giving the wearer easy access to important product information while shopping in a retail establishment. Through virtual rendering, a wearer may be shown how an item would appear in a wearer environment, such as the wearer's home. | 11-07-2013 |
20130293577 | INTELLIGENT TRANSLATIONS IN PERSONAL SEE THROUGH DISPLAY - A see-through, near-eye, mixed reality display apparatus for providing translations of real-world data to a user. The wearer's location and orientation with the apparatus are determined, and input data for translation is selected using the sensors of the apparatus. Input data can be audio or visual in nature and may be selected by reference to the wearer's gaze. The input data is translated for the user relative to user profile information bearing on the accuracy of a translation, and the apparatus determines from the input data whether a linguistic translation, a knowledge-addition translation, or a context translation is useful. | 11-07-2013 |
20130326364 | POSITION RELATIVE HOLOGRAM INTERACTIONS - A system and method are disclosed for positioning and sizing virtual objects in a mixed reality environment in a way that is optimal and most comfortable for a user to interact with the virtual objects. | 12-05-2013 |
20130328763 | MULTIPLE SENSOR GESTURE RECOGNITION - Methods for recognizing gestures using adaptive multi-sensor gesture recognition are described. In some embodiments, a gesture recognition system receives a plurality of sensor inputs from a plurality of sensor devices and a plurality of confidence thresholds associated with the plurality of sensor inputs. A confidence threshold specifies a minimum confidence value for which it is deemed that a particular gesture has occurred. Upon detection of a compensating event, such as excessive motion involving one of the plurality of sensor devices, the gesture recognition system may modify the plurality of confidence thresholds based on the compensating event. Subsequently, the gesture recognition system generates a multi-sensor confidence value based on whether at least a subset of the plurality of confidence thresholds has been satisfied. The gesture recognition system may also modify the plurality of confidence thresholds based on the plugging and unplugging of sensor inputs from the gesture recognition system. | 12-12-2013 |
20130328925 | OBJECT FOCUS IN A MIXED REALITY ENVIRONMENT - A system and method are disclosed for interpreting user focus on virtual objects in a mixed reality environment. Using inference, express gestures and heuristic rules, the present system determines which of the virtual objects the user is likely focused on and interacting with. At that point, the present system may emphasize the selected virtual object over other virtual objects, and interact with the selected virtual object in a variety of ways. | 12-12-2013 |
20130342572 | CONTROL OF DISPLAYED CONTENT IN VIRTUAL ENVIRONMENTS - A system and method are disclosed for controlling content displayed to a user in a virtual environment. The virtual environment may include virtual controls with which a user may interact using predefined gestures. Interacting with a virtual control may adjust an aspect of the displayed content, including, for example, one or more of fast-forwarding, rewinding, pausing, stopping, or recording the content; changing its volume, brightness, or contrast; and changing the content from a first still image to a second still image. | 12-26-2013 |
20140160157 | PEOPLE-TRIGGERED HOLOGRAPHIC REMINDERS - Methods for generating and displaying people-triggered holographic reminders are described. In some embodiments, a head-mounted display device (HMD) generates and displays an augmented reality environment to an end user of the HMD in which reminders associated with a particular person may be displayed if the particular person is within a field of view of the HMD or if the particular person is within a particular distance of the HMD. The particular person may be identified individually or identified as belonging to a particular group (e.g., a member of a group with a particular job title such as programmer or administrator). In some cases, a completion of a reminder may be automatically detected by applying speech recognition techniques (e.g., to identify key words, phrases, or names) to captured audio of a conversation occurring between the end user and the particular person. | 06-12-2014 |
20140306891 | HOLOGRAPHIC OBJECT FEEDBACK - Methods for providing real-time feedback to an end user of a mobile device as they are interacting with or manipulating one or more virtual objects within an augmented reality environment are described. The real-time feedback may comprise visual feedback, audio feedback, and/or haptic feedback. In some embodiments, a mobile device, such as a head-mounted display device (HMD), may determine an object classification associated with a virtual object within an augmented reality environment, detect an object manipulation gesture performed by an end user of the mobile device, detect an interaction with the virtual object based on the object manipulation gesture, determine a magnitude of a virtual force associated with the interaction, and provide real-time feedback to the end user of the mobile device based on the interaction, the magnitude of the virtual force applied to the virtual object, and the object classification associated with the virtual object. | 10-16-2014 |
20140306993 | HOLOGRAPHIC SNAP GRID - Methods for positioning virtual objects within an augmented reality environment using snap grid spaces associated with real-world environments, real-world objects, and/or virtual objects within the augmented reality environment are described. A snap grid space may comprise a two-dimensional or three-dimensional virtual space within an augmented reality environment in which one or more virtual objects may be positioned. In some embodiments, a head-mounted display device (HMD) may identify one or more grid spaces within an augmented reality environment, detect a positioning of a virtual object within the augmented reality environment, determine a target grid space of the one or more grid spaces in which to position the virtual object, determine a position of the virtual object within the target grid space, and display the virtual object within the augmented reality environment based on the position of the virtual object within the target grid space. | 10-16-2014 |
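The adaptive multi-sensor gesture recognition described in application 20130328763 (per-sensor confidence thresholds, thresholds raised on a compensating event, and a fused multi-sensor confidence value) can be sketched roughly as follows. This is a minimal illustration, not the patented method: the class name, the additive threshold penalty, the "at least N thresholds satisfied" rule, and the mean-of-passing-readings fusion are all assumptions.

```python
from dataclasses import dataclass


@dataclass
class GestureFusion:
    """Sketch of adaptive multi-sensor gesture fusion (cf. 20130328763).

    Each sensor reports a confidence in [0, 1] that a gesture occurred;
    a per-sensor threshold is the minimum confidence at which that
    reading counts toward the fused result.
    """
    thresholds: dict  # sensor name -> minimum confidence (assumed shape)

    def compensate(self, sensor, penalty=0.2):
        # A compensating event (e.g. excessive motion of one sensor)
        # raises that sensor's threshold so its readings are discounted.
        self.thresholds[sensor] = min(1.0, self.thresholds[sensor] + penalty)

    def unplug(self, sensor):
        # Unplugging a sensor removes its threshold from consideration.
        self.thresholds.pop(sensor, None)

    def fused_confidence(self, readings, min_satisfied=2):
        # Fuse only readings that meet their thresholds; report 0.0 if
        # fewer than min_satisfied thresholds were satisfied.
        passing = [v for s, v in readings.items()
                   if s in self.thresholds and v >= self.thresholds[s]]
        if len(passing) < min_satisfied:
            return 0.0
        return sum(passing) / len(passing)
```

For example, with thresholds `{"depth": 0.6, "imu": 0.5, "rgb": 0.7}` and readings `{"depth": 0.8, "imu": 0.55, "rgb": 0.4}`, the depth and IMU readings pass and are averaged; after `compensate("imu")` raises the IMU threshold to 0.7, only one reading passes and the fused confidence drops to 0.0.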
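The core snapping step of the "holographic snap grid" in application 20140306993 (determining a virtual object's position within a target grid space) can be sketched as rounding to the nearest cell of a uniform grid. This models a grid space as just an origin and a cell spacing, which is an illustrative simplification; the patent also covers identifying grid spaces and selecting the target space, which is omitted here.

```python
def snap_to_grid(position, origin, spacing):
    """Snap a 3-D point to the nearest cell position of a snap grid space.

    Illustrative sketch only (cf. 20140306993): the grid space is assumed
    to be axis-aligned with a uniform cell spacing.
    """
    return tuple(
        o + round((p - o) / spacing) * spacing
        for p, o in zip(position, origin)
    )
```

For instance, `snap_to_grid((1.3, 0.26, -0.9), (0.0, 0.0, 0.0), 0.5)` returns `(1.5, 0.5, -1.0)`, placing the virtual object on the nearest 0.5 m grid position.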