Patent application number | Description | Published |
20100277411 | USER TRACKING FEEDBACK - Technology is presented for providing feedback to a user on an ability of an executing application to track user action for control of the executing application on a computer system. A capture system detects a user in a capture area. Factors in the capture area and the user's actions can adversely affect the ability of the application to determine if a user movement is a gesture which is a control or instruction to the application. One example of such factors is a user being out of the field of view of the capture system. Some other factor examples include lighting conditions and obstructions in the capture area. Responsive to a user tracking criteria not being satisfied, feedback is output to the user. In some embodiments, the feedback is provided within the context of an executing application. | 11-04-2010 |
20110296505 | CLOUD-BASED PERSONAL TRAIT PROFILE DATA - A system and method is disclosed for sensing, storing and using personal trait profile data. Once sensed and stored, this personal trait profile data may be used for a variety of purposes. In one example, a user's personal trait profile data may be accessed and downloaded to different computing systems with which a user may interact so that the different systems may be instantly tuned to the user's personal traits and manner of interaction. In a further example, a user's personal trait profile data may also be used for authentication purposes. | 12-01-2011 |
20110298827 | LIMITING AVATAR GESTURE DISPLAY - Technology determines whether a gesture of an avatar depicts one of a set of prohibited gestures. An example of a prohibited gesture is a lewd gesture. If the gesture is determined to be a prohibited gesture, the image data for display of the gesture is altered. Some examples of alteration are substitution of image data for the prohibited gesture, or performing a filtering technique to the image data depicting the gesture to visually obscure the prohibited gesture. | 12-08-2011 |
20110300929 | SYNTHESIS OF INFORMATION FROM MULTIPLE AUDIOVISUAL SOURCES - A system and method are disclosed for synthesizing information received from multiple audio and visual sources focused on a single scene. The system may determine the positions of capture devices based on a common set of cues identified in the image data of the capture devices. As a scene may often have users and objects moving into and out of the scene, data from the multiple capture devices may be time synchronized to ensure that data from the audio and visual sources are providing data of the same scene at the same time. Audio and/or visual data from the multiple sources may be reconciled and assimilated together to improve an ability of the system to interpret audio and/or visual aspects from the scene. | 12-08-2011 |
20110314381 | NATURAL USER INPUT FOR DRIVING INTERACTIVE STORIES - A system and method are disclosed for combining interactive gaming aspects into a linear story. A user may interact with the linear story via a NUI system to alter the story and the images that are presented to the user. In an example, a user may alter the story by performing a predefined exploration gesture. This gesture brings the user into the 3-D world of the displayed image. In particular, the image displayed on the screen changes to create the impression that a user is stepping into the 3-D virtual world to allow a user to examine virtual objects from different perspectives or to peer around virtual objects. | 12-22-2011 |
20110317871 | SKELETAL JOINT RECOGNITION AND TRACKING SYSTEM - A system and method are disclosed for recognizing and tracking a user's skeletal joints with a NUI system and further, for recognizing and tracking only some skeletal joints, such as for example a user's upper body. The system may include a limb identification engine which may use various methods to evaluate, identify and track positions of body parts of one or more users in a scene. In examples, further processing efficiency may be achieved by segmenting the field of view into smaller zones, and focusing on one zone at a time. Moreover, each zone may have its own set of predefined gestures which are recognized. | 12-29-2011 |
20120072936 | Automatic Customized Advertisement Generation System - A system for generating a customized advertisement for a user is provided. Multimedia content associated with a current broadcast is received and displayed. The multimedia content may include recorded video content, video-on-demand content, television content, television programs, advertisements, commercials, music, movies, video clips, and other on-demand media content. One or more users are identified in a field of view of a capture device connected to a computing device. User-specific information related to a user is tracked. An emotional response of a user to the multimedia content viewed by the user is tracked. A targeted advertisement is provided to a user based on the multimedia content viewed by the user, the user's identification information and the user's emotional response. The targeted advertisement is automatically customized based on the user-specific information related to the user to generate a customized advertisement for the user. The targeted and customized advertisement is displayed to the user during a pre-programmed time interval, via an audiovisual device connected to the computing device. | 03-22-2012 |
20120124456 | AUDIENCE-BASED PRESENTATION AND CUSTOMIZATION OF CONTENT - A system and method are disclosed for delivering content customized to the specific user or users interacting with the system. The system includes one or more modules for recognizing an identity of a user. These modules may include for example a gesture recognition engine, a facial recognition engine, a body language recognition engine and a voice recognition engine. The user may also be carrying a mobile device such as a smart phone which identifies the user. One or more of these modules may cooperate to identify a user, and then customize the user's content based on the user's identity. In particular, the system receives user preferences indicating the content a user wishes to receive and the conditions under which it is to be received. Based on the user preferences and recognition of a user identity and/or other traits, the system presents content customized for a particular user. | 05-17-2012 |
20120124604 | AUTOMATIC PASSIVE AND ANONYMOUS FEEDBACK SYSTEM - A system for generating passive and anonymous feedback of multimedia content viewed by users is disclosed. The multimedia content may include recorded video content, video-on-demand content, television content, television programs, advertisements, commercials, music, movies, video clips, and other on-demand media content. One or more of the users in a field of view of a capture device connected to the computing device are identified. An engagement level of the users to multimedia content being viewed by the users is determined by tracking movements, gestures, postures and facial expressions performed by the users. A report of the response to viewed multimedia content is generated based on the movements, gestures, postures and facial expressions performed by the users. The report is provided to rating agencies, content providers and advertisers. In one embodiment, preview content and personalized content related to the viewed multimedia content is received from the content providers and advertisers based on the report. The preview content and personalized content are displayed to the users. | 05-17-2012 |
20120147038 | SYMPATHETIC OPTIC ADAPTATION FOR SEE-THROUGH DISPLAY - A method for overlaying first and second images in a common focal plane of a viewer comprises forming the first image and guiding the first and second images along an axis to a pupil of the viewer. The method further comprises adjustably diverging the first and second images at an adaptive diverging optic to bring the first image into focus at the common focal plane, and adjustably converging the second image at an adaptive converging optic to bring the second image into focus at the common focal plane. | 06-14-2012 |
20120158755 | GRANULAR METADATA FOR DIGITAL CONTENT - Accordingly, the present discussion is directed, in one respect, to a method of associating metadata with a digital content item and then controlling the presentation of that item in accordance with metadata-associated controls/constraints. The method may include determining a time-specific portion of a digital content item with which a user desires to associate a metadata item, and associating the metadata item with the time-specific portion of the digital content item. During subsequent consumption of the digital content item by the user or another, the metadata item is presented synchronously in connection with the time-specific portion of the digital content item, where such presentation is constrained or controlled in response to user-controlled filters implemented through a social network. | 06-21-2012 |
20120159527 | SIMULATED GROUP INTERACTION WITH MULTIMEDIA CONTENT - A method and system for generating time synchronized data streams based on a viewer's interaction with a multimedia content stream is provided. A viewer's interactions with a multimedia content stream being viewed by the viewer are recorded. The viewer's interactions include comments provided by the viewer, while viewing the multimedia content stream. Comments include text messages, audio messages, video feeds, gestures or facial expressions provided by the viewer. A time synchronized commented data stream is generated based on the viewer's interactions. The time synchronized commented data stream includes the viewer's interactions time stamped relative to a virtual start time at which the multimedia content stream is rendered to the viewer. One or more time synchronized data streams are rendered to the viewer, via an audiovisual device, while the viewer views a multimedia content stream. | 06-21-2012 |
20120162065 | SKELETAL JOINT RECOGNITION AND TRACKING SYSTEM - A system and method are disclosed for recognizing and tracking a user's skeletal joints with a NUI system and further, for recognizing and tracking only some skeletal joints, such as for example a user's upper body. The system may include a limb identification engine which may use various methods to evaluate, identify and track positions of body parts of one or more users in a scene. In examples, further processing efficiency may be achieved by segmenting the field of view into smaller zones, and focusing on one zone at a time. Moreover, each zone may have its own set of predefined gestures which are recognized. | 06-28-2012 |
20120180107 | GROUP-ASSOCIATED CONTENT RECOMMENDATION - A method of generating content recommendations to groups of users is provided. The method includes establishing a group, determining group-associated characteristics, where such characteristics include preferences independent of any merging, intersection or other combination of individual preferences of the group members, and providing content recommendations to the group based on the group-associated characteristics. | 07-12-2012 |
20120206452 | REALISTIC OCCLUSION FOR A HEAD MOUNTED AUGMENTED REALITY DISPLAY - Technology is described for providing realistic occlusion between a virtual object displayed by a head mounted, augmented reality display system and a real object visible to the user's eyes through the display. A spatial occlusion in a user field of view of the display is typically a three dimensional occlusion determined based on a three dimensional space mapping of real and virtual objects. An occlusion interface between a real object and a virtual object can be modeled at a level of detail determined based on criteria such as distance within the field of view, display size or position with respect to a point of gaze. Technology is also described for providing three dimensional audio occlusion based on an occlusion between a real object and a virtual object in the user environment. | 08-16-2012 |
20120213212 | LIFE STREAMING - A system and method for analyzing, summarizing, and transmitting life experiences captured using a life recorder is described. A life recorder is a recording device that continuously captures life experiences, including unanticipated life experiences, in video and/or audio recordings. In some embodiments, the video and/or audio recordings generated by a life recorder are automatically summarized, indexed, and stored for future use. By indexing and storing life recordings, a life recorder may search for and acquire life recordings generated by itself or another life recorder, thereby allowing life experiences to be shared minutes or even years later. In some embodiments, recordings generated by a life recorder may be analyzed in real-time and automatically pushed to one or more target devices. The ability to automatically and instantaneously push life recordings as live feeds to one or more target devices allows friends and family to experience one's life experience in real-time. | 08-23-2012 |
20120278904 | CONTENT DISTRIBUTION REGULATION BY VIEWING USER - A content presentation system and method allowing content providers to regulate the presentation of content on a per-user-view basis. Content is distributed with an associated license option on the number of individual consumers or viewers allowed to consume the content. Consumers are presented with a content selection and a choice of licenses allowing consumption of the content. The users consuming the content on a display device are monitored so that if the number of user-views licensed is exceeded, remedial action may be taken. | 11-01-2012 |
20120293548 | EVENT AUGMENTATION WITH REAL-TIME INFORMATION - A system and method to present a user wearing a head mounted display with supplemental information when viewing a live event. A user wearing an at least partially see-through, head mounted display views the live event while simultaneously receiving information on objects, including people, within the user's field of view, while wearing the head mounted display. The information is presented in a position in the head mounted display which does not interfere with the user's enjoyment of the live event. | 11-22-2012 |
20120320013 | SHARING OF EVENT MEDIA STREAMS - Embodiments are disclosed that relate to sharing media streams capturing different perspectives of an event. For example, one embodiment provides, on a computing device, a method including storing an event definition for an event, receiving from each capture device of a plurality of capture devices a request to share a media stream provided by the capture device, receiving a media stream from each capture device of the plurality of capture devices, and associating a subset of media streams from the plurality of capture devices with the event based upon the event definition. The method further includes receiving a request for transmission of a selected media stream associated with the event, and sending the selected media stream associated with the event to the requesting capture device. | 12-20-2012 |
20130044130 | PROVIDING CONTEXTUAL PERSONAL INFORMATION BY A MIXED REALITY DEVICE - The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. If not, an identifier and a position indicator of the person in the location are output. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location. | 02-21-2013 |
20130050642 | ALIGNING INTER-PUPILLARY DISTANCE IN A NEAR-EYE DISPLAY SYSTEM - The technology provides for automatic alignment of a see-through near-eye, mixed reality device with an inter-pupillary distance (IPD). A determination is made as to whether a see-through, near-eye, mixed reality display device is aligned with an IPD of a user. If the display device is not aligned with the IPD, the display device is automatically adjusted. In some examples, the alignment determination is based on determinations of whether an optical axis of each display optical system positioned to be seen through by a respective eye is aligned with a pupil of the respective eye in accordance with an alignment criteria. The pupil alignment may be determined based on an arrangement of gaze detection elements for each display optical system including at least one sensor for capturing data of the respective eye and the captured data. The captured data may be image data, image and glint data, and glint data only. | 02-28-2013 |
20130050833 | ADJUSTMENT OF A MIXED REALITY DISPLAY FOR INTER-PUPILLARY DISTANCE ALIGNMENT - The technology provides for adjusting a see-through, near-eye, mixed reality display device for alignment with an inter-pupillary distance (IPD) of a user by different examples of display adjustment mechanisms. The see-through, near-eye, mixed reality system includes for each eye a display optical system having an optical axis. Each display optical system is positioned to be seen through by a respective eye, and is supported on a respective movable support structure. A display adjustment mechanism attached to the display device also connects with each movable support structure for moving the structure. A determination is automatically made as to whether the display device is aligned with an IPD of a user. If not aligned, one or more adjustment values for a position of at least one of the display optical systems is automatically determined. The display adjustment mechanism moves the at least one display optical system in accordance with the adjustment values. | 02-28-2013 |
20130083003 | PERSONAL AUDIO/VISUAL SYSTEM - The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The system can be used in various entertainment, sports, shopping and theme-park situations to provide a mixed reality experience. | 04-04-2013 |
20130083007 | CHANGING EXPERIENCE USING PERSONAL A/V SYSTEM - A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. | 04-04-2013 |
20130083008 | ENRICHED EXPERIENCE USING PERSONAL A/V SYSTEM - A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. | 04-04-2013 |
20130083009 | EXERCISING APPLICATIONS FOR PERSONAL AUDIO/VISUAL SYSTEM - The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The personal A/V apparatus serves as an exercise program that is always with the user, provides motivation for the user, visually tells the user how to exercise, and lets the user exercise with other people who are not present. | 04-04-2013 |
20130083011 | REPRESENTING A LOCATION AT A PREVIOUS TIME PERIOD USING AN AUGMENTED REALITY DISPLAY - Technology is described for representing a physical location at a previous time period with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. The personal A/V apparatus is identified as being within the physical location, and one or more objects in a display field of view of the near-eye, augmented reality display are automatically identified based on a three dimensional mapping of objects in the physical location. User input, which may be natural user interface (NUI) input, indicates a previous time period, and one or more 3D virtual objects associated with the previous time period are displayed from a user perspective associated with the display field of view. An object may be erased from the display field of view, and a camera effect may be applied when changing between display fields of view. | 04-04-2013 |
20130083018 | PERSONAL AUDIO/VISUAL SYSTEM WITH HOLOGRAPHIC OBJECTS - A system for generating an augmented reality environment using state-based virtual objects is described. A state-based virtual object may be associated with a plurality of different states. Each state of the plurality of different states may correspond with a unique set of triggering events different from those of any other state. The set of triggering events associated with a particular state may be used to determine when a state change from the particular state is required. In some cases, each state of the plurality of different states may be associated with a different 3-D model or shape. The plurality of different states may be defined using a predetermined and standardized file format that supports state-based virtual objects. In some embodiments, one or more potential state changes from a particular state may be predicted based on one or more triggering probabilities associated with the set of triggering events. | 04-04-2013 |
20130083062 | PERSONAL A/V SYSTEM WITH CONTEXT RELEVANT INFORMATION - A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. | 04-04-2013 |
20130083063 | Service Provision Using Personal Audio/Visual System - A collaborative on-demand system allows a user of a head-mounted display device (HMDD) to obtain assistance with an activity from a qualified service provider. In a session, the user and service provider exchange camera-captured images and augmented reality images. A gaze-detection capability of the HMDD allows the user to mark areas of interest in a scene. The service provider can similarly mark areas of the scene, as well as provide camera-captured images of the service provider's hand or arm pointing to or touching an object of the scene. The service provider can also select an animation or text to be displayed on the HMDD. A server can match user requests with qualified service providers which meet parameters regarding fee, location, rating and other preferences. Or, service providers can review open requests and self-select appropriate requests, initiating contact with a user. | 04-04-2013 |
20130083064 | PERSONAL AUDIO/VISUAL APPARATUS PROVIDING RESOURCE MANAGEMENT - Technology is described for resource management based on data including image data of a resource captured by at least one capture device of at least one personal audiovisual (A/V) apparatus including a near-eye, augmented reality (AR) display. A resource is automatically identified from image data captured by at least one capture device of at least one personal A/V apparatus and object reference data. A location in which the resource is situated and a 3D space position or volume of the resource in the location are tracked. A property of the resource is also determined from the image data and tracked. A function of a resource may also be stored for determining whether the resource is usable for a task. Responsive to notification criteria for the resource being satisfied, image data related to the resource is displayed on the near-eye AR display. | 04-04-2013 |
20130083173 | VIRTUAL SPECTATOR EXPERIENCE WITH A PERSONAL AUDIO/VISUAL APPARATUS - Technology is described for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display. A position volume of an event object participating in an event in a first 3D coordinate system for a first location is received and mapped to a second position volume in a second 3D coordinate system at a second location remote from where the event is occurring. A display field of view of the near-eye AR display at the second location is determined, and real-time 3D virtual data representing the one or more event objects which are positioned within the display field of view is displayed in the near-eye AR display. A user may select a viewing position from which to view the event. Additionally, virtual data of a second user may be displayed at a position relative to a first user. | 04-04-2013 |
20130084970 | Sharing Games Using Personal Audio/Visual Apparatus - A game can be created, shared and played using a personal audio/visual apparatus such as a head-mounted display device (HMDD). Rules of the game, and a configuration of the game space, can be standard or custom. Boundary points of the game can be defined by a gaze direction of the HMDD, by the user's location, by a model of a physical game space such as an instrumented court or by a template. Players can be identified and notified of the availability of a game using a server push technology. For example, a user in a particular location may be notified of the availability of a game at that location. A server manages the game, including storing the rules, boundaries and a game state. The game state can identify players and their scores. Real world objects can be imaged and provided as virtual objects in the game space. | 04-04-2013 |
20130085345 | Personal Audio/Visual System Providing Allergy Awareness - A system provides a recommendation of food items to a user based on nutritional preferences of the user, using a head-mounted display device (HMDD) worn by the user. In a store, a forward-facing camera of the HMDD captures an image of a food item. The food item can be identified by the image, such as based on packaging of the food item. Nutritional parameters of the food item are compared to nutritional preferences of the user to determine whether the food item is recommended. The HMDD displays an augmented reality image to the user indicating whether the food item is recommended. If the food item is not recommended, a substitute food item can be identified. The nutritional preferences can indicate food allergies, preferences for low calorie foods and so forth. In a restaurant, the HMDD can recommend menu selections for a user. | 04-04-2013 |
20130095924 | ENHANCING A SPORT USING AN AUGMENTED REALITY DISPLAY - Technology is described for providing a personalized sport performance experience with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. A physical movement recommendation is determined for the user performing a sport based on skills data for the user for the sport, physical characteristics of the user, and 3D space positions for at least one or more sport objects. 3D virtual data depicting one or more visual guides for assisting the user in performing the physical movement recommendation may be displayed from a user perspective associated with a display field of view of the near-eye AR display. An avatar may also be displayed by the near-eye AR display performing a sport. The avatar may perform the sport interactively with the user or be displayed performing a prior performance of an individual represented by the avatar. | 04-18-2013 |
20130137076 | HEAD-MOUNTED DISPLAY BASED EDUCATION AND INSTRUCTION - Technology disclosed herein provides for use of HMDs in a classroom setting. Technology disclosed herein provides for HMD use for holographic instruction. In one embodiment, the HMD is used for social coaching. User profile information may be used to tailor instruction to a specific user based on known skills, learning styles, and/or characteristics. One or more individuals may be monitored based on sensor data. The sensor data may come from an HMD. The monitoring may be analyzed to determine how to enhance an experience. The experience may be enhanced by presenting an image in at least one head mounted display worn by the one or more individuals. | 05-30-2013 |
20130147687 | DISPLAYING VIRTUAL DATA AS PRINTED CONTENT - The technology provides embodiments for displaying virtual data as printed content by a see-through, near-eye, mixed reality display device system. One or more literary content items registered to a reading object in a field of view of the display device system are displayed with print layout characteristics. Print layout characteristics from a publisher of each literary content item are selected if available. The reading object has a type like a magazine, book, journal or newspaper and may be a real object or a virtual object displayed by the display device system. The reading object type of the virtual object is based on a reading object type associated with a literary content item to be displayed. Virtual augmentation data registered to a literary content item is displayed responsive to detecting user physical action in image data. An example of a physical action is a page flipping gesture. | 06-13-2013 |
20130147836 | MAKING STATIC PRINTED CONTENT DYNAMIC WITH VIRTUAL DATA - The technology provides embodiments for making static printed content being viewed through a see-through, mixed reality display device system more dynamic with display of virtual data. A printed content item, for example a book or magazine, is identified from image data captured by cameras on the display device, and user selection of a printed content selection within the printed content item is identified based on physical action user input, for example eye gaze or a gesture. A task in relation to the printed content selection can also be determined based on physical action user input. Virtual data for the printed content selection is displayed in accordance with the task. Additionally, virtual data can be linked to a work embodied in a printed content item. Furthermore, a virtual version of the printed material may be displayed at a more comfortable reading position and with improved visibility of the content. | 06-13-2013 |
20130147838 | UPDATING PRINTED CONTENT WITH PERSONALIZED VIRTUAL DATA - The technology provides for updating printed content with personalized virtual data using a see-through, near-eye, mixed reality display device system. A printed content item, for example a book or magazine, is identified from image data captured by cameras on the display device, and user selection of a printed content selection within the printed content item is identified based on physical action user input, for example eye gaze or a gesture. Virtual data is selected from available virtual data for the printed content selection based on user profile data, and the display device system displays the selected virtual data in a position registered to the position of the printed content selection. In some examples, a task related to the printed content item is determined based on physical action user input, and personalized virtual data is displayed registered to the printed content item in accordance with the task. | 06-13-2013 |
20130162505 | ENVIRONMENTAL-LIGHT FILTER FOR SEE-THROUGH HEAD-MOUNTED DISPLAY DEVICE - An environmental-light filter removably coupled to an optical see-through head-mounted display (HMD) device is disclosed. The environmental-light filter couples to the HMD device between a display component and a real-world scene. Coupling features are provided to allow the filter to be easily and removably attached to the HMD device when desired by a user. The filter increases the primacy of a provided augmented-reality image with respect to a real-world scene and reduces brightness and power consumption requirements for presenting the augmented-reality image. A plurality of filters of varied light transmissivity may be provided from which to select a desired filter based on environmental lighting conditions and user preference. The light transmissivity of the filter may range from about 70% transmissive to substantially or completely opaque. | 06-27-2013 |
20130165225 | NATURAL USER INPUT FOR DRIVING INTERACTIVE STORIES - A system and method are disclosed for combining interactive gaming aspects into a linear story. A user may interact with the linear story via a NUI system to alter the story and the images that are presented to the user. In an example, a user may alter the story by performing a predefined exploration gesture. This gesture brings the user into the | 06-27-2013 |
20130169683 | HEAD MOUNTED DISPLAY WITH IRIS SCAN PROFILING - A see-through head-mounted display and method for operating the display to optimize its performance by automatically referencing a user profile. The identity of the user is determined by performing an iris scan and recognition, enabling user profile information to be retrieved and used to enhance the user's experience with the see-through head-mounted display. The user profile may contain user preferences regarding services providing augmented reality images to the see-through head-mounted display, as well as display adjustment information optimizing the position of display elements in the see-through head-mounted display. | 07-04-2013 |
20130194164 | EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS - Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, interacting with the executable virtual object. | 08-01-2013 |
20130286178 | GAZE DETECTION IN A SEE-THROUGH, NEAR-EYE, MIXED REALITY DISPLAY - The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed. | 10-31-2013 |
20130293468 | COLLABORATION ENVIRONMENT USING SEE THROUGH DISPLAYS - A see-through, near-eye, mixed reality display device and system for collaboration amongst various users of other such devices and of personal audio/visual devices with more limited capabilities. One or more wearers of a see-through head-mounted display apparatus define a collaboration environment. For the collaboration environment, a selection of collaboration data and the scope of the environment are determined. Virtual representations of the collaboration data are rendered in the field of view of the wearer and of other device users. The wearer defines which persons in the wearer's field of view are included in the collaboration environment and are entitled to share information within it. If allowed, input on a virtual object from other users in the collaboration environment may be received and used to manipulate a change in the virtual object. | 11-07-2013 |
20130293530 | PRODUCT AUGMENTATION AND ADVERTISING IN SEE THROUGH DISPLAYS - An augmented reality system that provides augmented product and environment information to a wearer of a see-through head-mounted display. The augmentation information may include advertising, inventory, pricing, and other information about products a wearer may be interested in. Interest is determined from wearer actions and a wearer profile. The information may be used to incentivize purchases of real-world products by a wearer, or allow the wearer to make better purchasing decisions. The augmentation information may enhance a wearer's shopping experience by allowing the wearer easy access to important product information while the wearer is shopping in a retail establishment. Through virtual rendering, a wearer may be provided with feedback on how an item would appear in a wearer environment, such as the wearer's home. | 11-07-2013 |
20130293577 | INTELLIGENT TRANSLATIONS IN PERSONAL SEE THROUGH DISPLAY - A see-through, near-eye, mixed reality display apparatus for providing translations of real-world data for a user. A wearer's location and orientation with the apparatus are determined, and input data for translation is selected using sensors of the apparatus. Input data can be audio or visual in nature and is selected by reference to the gaze of the wearer. The input data is translated for the user relative to user profile information bearing on the accuracy of a translation, and a determination is made from the input data whether a linguistic translation, knowledge-addition translation, or context translation is useful. | 11-07-2013 |
20130300653 | GAZE DETECTION IN A SEE-THROUGH, NEAR-EYE, MIXED REALITY DISPLAY - The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed. | 11-14-2013 |
20130307855 | HOLOGRAPHIC STORY TELLING - A system for generating and displaying holographic visual aids associated with a story to an end user of a head-mounted display device while the end user is reading the story or perceiving the story being read aloud is described. The story may be embodied within a reading object (e.g., a book) in which words of the story may be displayed to the end user. The holographic visual aids may include a predefined character animation that is synchronized to a portion of the story corresponding with the character being animated. A reading pace of a portion of the story may be used to control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud. In some cases, an existing book without predetermined AR tags may be augmented with holographic visual aids. | 11-21-2013 |
20130307856 | SYNCHRONIZING VIRTUAL ACTOR'S PERFORMANCES TO A SPEAKER'S VOICE - A system for generating and displaying holographic visual aids associated with a story to an end user of a head-mounted display device while the end user is reading the story or perceiving the story being read aloud is described. The story may be embodied within a reading object (e.g., a book) in which words of the story may be displayed to the end user. The holographic visual aids may include a predefined character animation that is synchronized to a portion of the story corresponding with the character being animated. A reading pace of a portion of the story may be used to control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud. In some cases, an existing book without predetermined AR tags may be augmented with holographic visual aids. | 11-21-2013 |
20130321255 | NAVIGATING CONTENT IN AN HMD USING A PHYSICAL OBJECT - Technology is disclosed herein to help a user navigate through large amounts of content while wearing a see-through, near-eye, mixed reality display device such as a head-mounted display (HMD). The user can use a physical object such as a book to navigate through content being presented in the HMD. In one embodiment, a book has markers on the pages that allow the system to organize the content. The book could have real content, or it could be blank except for the markers. As the user flips through the book, the system recognizes the markers and presents content associated with the respective marker in the HMD. | 12-05-2013 |
20130321462 | GESTURE BASED REGION IDENTIFICATION FOR HOLOGRAMS - Techniques are provided for allowing a user to select a region within virtual imagery, such as a hologram, being presented in an HMD. The user could select the region by using their hands to form a closed loop such that, from the perspective of the user, the closed loop corresponds to the region the user wishes to select. Alternatively, the user could select the region by using a prop, such as a picture frame. In response to the selection, the selected region could be presented using a different rendering technique than other regions of the virtual imagery. Various rendering techniques, such as zooming, filtering, etc., could be applied to the selected region. The identification of the region by the user could also serve as a selection of an element in that portion of the virtual image. | 12-05-2013 |
20130328783 | TRANSMISSION OF INFORMATION TO SMART FABRIC OUTPUT DEVICE - A system and method are provided for a user to communicate uniquely human and personal information to one or more other users via a smart textile input/output device. The information may be displayed on the device associated with the user, on one or more other articles of clothing associated with one or more other users, or on one or more external devices proximate to and associated with a target user. The information may result from a direct input of display information from a source user to a target user, or from a third party directed to one or more target users. | 12-12-2013 |
20140002442 | MECHANISM TO GIVE HOLOGRAPHIC OBJECTS SALIENCY IN MULTIPLE SPACES | 01-02-2014 |
20140002491 | DEEP AUGMENTED REALITY TAGS FOR HEAD MOUNTED DISPLAYS | 01-02-2014 |
20140002492 | PROPAGATION OF REAL WORLD PROPERTIES INTO AUGMENTED REALITY IMAGES | 01-02-2014 |
20140002495 | MULTI-NODE POSTER LOCATION | 01-02-2014 |
20140002496 | CONSTRAINT BASED INFORMATION INFERENCE | 01-02-2014 |
20140006026 | CONTEXTUAL AUDIO DUCKING WITH SITUATION AWARE DEVICES | 01-02-2014 |
20150049114 | EXERCISING APPLICATIONS FOR PERSONAL AUDIO/VISUAL SYSTEM - The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The personal A/V apparatus serves as an exercise program that is always with the user, provides motivation for the user, visually tells the user how to exercise, and lets the user exercise with other people who are not present. | 02-19-2015 |