Patent application number | Description | Published |
20110093820 | GESTURE PERSONALIZATION AND PROFILE ROAMING - A gesture-based system may have default or pre-packaged gesture information, where a gesture is derived from a user's position or motion in a physical space. In other words, no controllers or devices are necessary. Depending on how a user uses his or her gesture to accomplish the task, the system may refine the properties and the gesture may become personalized. The personalized gesture information may be stored in a gesture profile and can be further updated with the latest data. The gesture-based system may use the gesture profile information for gesture recognition techniques. Further, the gesture profile may be roaming such that the gesture profile is available in a second location without requiring the system to relearn gestures that have already been personalized on behalf of the user. | 04-21-2011 |
20110296505 | CLOUD-BASED PERSONAL TRAIT PROFILE DATA - A system and method is disclosed for sensing, storing and using personal trait profile data. Once sensed and stored, this personal trait profile data may be used for a variety of purposes. In one example, a user's personal trait profile data may be accessed and downloaded to different computing systems with which a user may interact so that the different systems may be instantly tuned to the user's personal traits and manner of interaction. In a further example, a user's personal trait profile data may also be used for authentication purposes. | 12-01-2011 |
20110300929 | SYNTHESIS OF INFORMATION FROM MULTIPLE AUDIOVISUAL SOURCES - A system and method are disclosed for synthesizing information received from multiple audio and visual sources focused on a single scene. The system may determine the positions of capture devices based on a common set of cues identified in the image data of the capture devices. As a scene may often have users and objects moving into and out of the scene, data from the multiple capture devices may be time synchronized to ensure that data from the audio and visual sources are providing data of the same scene at the same time. Audio and/or visual data from the multiple sources may be reconciled and assimilated together to improve an ability of the system to interpret audio and/or visual aspects from the scene. | 12-08-2011 |
20110307260 | MULTI-MODAL GENDER RECOGNITION - Gender recognition is performed using two or more modalities. For example, depth image data and one or more types of data other than depth image data is received. The data pertains to a person. The different types of data are fused together to automatically determine gender of the person. A computing system can subsequently interact with the person based on the determination of gender. | 12-15-2011 |
20110310007 | ITEM NAVIGATION USING MOTION-CAPTURE DATA - A system and method is provided for using motion-capture data to control navigating of a cursor in a user interface of a computing system. Movement of a user's hand or other object in a three-dimensional capture space is tracked and represented in the computing system as motion-capture model data. The method includes obtaining a plurality of positions for the object from the motion-capture model data. The method determines a curved-gesture center point based on at least some of the plurality of positions for the object. Using the curved-gesture center point as an origin, an angular property is determined for one of the plurality of positions for the object. The method further includes navigating the cursor in a sequential arrangement of selectable items based on the angular property. | 12-22-2011 |
20110314381 | NATURAL USER INPUT FOR DRIVING INTERACTIVE STORIES - A system and method are disclosed for combining interactive gaming aspects into a linear story. A user may interact with the linear story via a NUI system to alter the story and the images that are presented to the user. In an example, a user may alter the story by performing a predefined exploration gesture. This gesture brings the user into the 3-D world of the displayed image. In particular, the image displayed on the screen changes to create the impression that a user is stepping into the 3-D virtual world to allow a user to examine virtual objects from different perspectives or to peer around virtual objects. | 12-22-2011 |
20110316853 | TELEPRESENCE SYSTEMS WITH VIEWER PERSPECTIVE ADJUSTMENT - Described herein is a telepresence system where a real-time virtual hologram of a user is displayed at a remote display screen and is rendered from a vantage point that is different than the vantage point from which images of the user are captured via a video camera. The virtual hologram is based at least in part upon data acquired from a sensor unit at the location of the user. The sensor unit includes a color video camera that captures 2-D images of the user including surface features of the user. The sensor unit also includes a depth sensor that captures 3-D geometry data indicative of the relative position of surfaces on the user in 3-D space. The virtual hologram is rendered to orientate the gaze of the eyes of the virtual hologram towards the eyes of a second user viewing the remote display screen. | 12-29-2011 |
20110317871 | SKELETAL JOINT RECOGNITION AND TRACKING SYSTEM - A system and method are disclosed for recognizing and tracking a user's skeletal joints with a NUI system and further, for recognizing and tracking only some skeletal joints, such as for example a user's upper body. The system may include a limb identification engine which may use various methods to evaluate, identify and track positions of body parts of one or more users in a scene. In examples, further processing efficiency may be achieved by segmenting the field of view in smaller zones, and focusing on one zone at a time. Moreover, each zone may have its own set of predefined gestures which are recognized. | 12-29-2011 |
20120068913 | OPACITY FILTER FOR SEE-THROUGH HEAD MOUNTED DISPLAY - An optical see-through head-mounted display device includes a see-through lens which combines an augmented reality image with light from a real-world scene, while an opacity filter is used to selectively block portions of the real-world scene so that the augmented reality image appears more distinctly. The opacity filter can be a see-through LCD panel, for instance, where each pixel of the LCD panel can be selectively controlled to be transmissive or opaque, based on a size, shape and position of the augmented reality image. Eye tracking can be used to adjust the position of the augmented reality image and the opaque pixels. Peripheral regions of the opacity filter, which are not behind the augmented reality image, can be activated to provide a peripheral cue or a representation of the augmented reality image. In another aspect, opaque pixels are provided at a time when an augmented reality image is not present. | 03-22-2012 |
20120092328 | FUSING VIRTUAL CONTENT INTO REAL CONTENT - A system that includes a head mounted display device and a processing unit connected to the head mounted display device is used to fuse virtual content into real content. In one embodiment, the processing unit is in communication with a hub computing device. The system creates a volumetric model of a space, segments the model into objects, identifies one or more of the objects including a first object, and displays a virtual image over the first object on a display (of the head mounted display) that allows actual direct viewing of at least a portion of the space through the display. | 04-19-2012 |
20120127284 | HEAD-MOUNTED DISPLAY DEVICE WHICH PROVIDES SURROUND VIDEO - A see-through head-mounted display (HMD) device, e.g., in the form of augmented reality glasses, allows a user to view a video display device and an associated augmented reality image. In one approach, the augmented reality image is aligned with edges of the video display device to provide a larger, augmented viewing region. The HMD can include a camera which identifies the edges. The augmented reality image can be synchronized in time with content of the video display device. In another approach, the augmented reality image video provides a virtual audience which accompanies a user in watching the video display device. In another approach, the augmented reality image includes a 3-D object which appears to emerge from the video display device, and which is rendered from a perspective of a user's location. In another approach, the augmented reality image can be rendered on a vertical or horizontal surface in a static location. | 05-24-2012 |
20120147038 | SYMPATHETIC OPTIC ADAPTATION FOR SEE-THROUGH DISPLAY - A method for overlaying first and second images in a common focal plane of a viewer comprises forming the first image and guiding the first and second images along an axis to a pupil of the viewer. The method further comprises adjustably diverging the first and second images at an adaptive diverging optic to bring the first image into focus at the common focal plane, and, adjustably converging the second image at an adaptive converging optic to bring the second image into focus at the common focal plane. | 06-14-2012 |
20120158755 | GRANULAR METADATA FOR DIGITAL CONTENT - Accordingly, the present discussion is directed, in one respect, to a method of associating metadata with a digital content item and then controlling the presentation of that item in accordance with metadata-associated controls/constraints. The method may include determining a time-specific portion of a digital content item with which a user desires to associate a metadata item; associating the metadata item with the time-specific portion of the digital content item. During subsequent consumption of the digital content item by the user or another, the metadata item is presented synchronously in connection with the time-specific portion of the digital content item, where such presentation is constrained or controlled in response to user-controlled filters implemented through a social network. | 06-21-2012 |
20120159001 | DISTRIBUTED ROBUST CLOCK SYNCHRONIZATION - Technology is provided for synchronization of clock information between networked devices. One or more of the devices may include one or more applications needing access to data and a common time reference between devices. In one embodiment, the devices have applications utilizing data shared in a network environment with other devices, as well as having a reference to a local clock signal on each device. A device may have a layer of code between the operating system and software applications that processes the data and maintains a remote clock reference for one or more of the other devices on the network. | 06-21-2012 |
20120162065 | SKELETAL JOINT RECOGNITION AND TRACKING SYSTEM - A system and method are disclosed for recognizing and tracking a user's skeletal joints with a NUI system and further, for recognizing and tracking only some skeletal joints, such as for example a user's upper body. The system may include a limb identification engine which may use various methods to evaluate, identify and track positions of body parts of one or more users in a scene. In examples, further processing efficiency may be achieved by segmenting the field of view in smaller zones, and focusing on one zone at a time. Moreover, each zone may have its own set of predefined gestures which are recognized. | 06-28-2012 |
20120163520 | SYNCHRONIZING SENSOR DATA ACROSS DEVICES - Techniques are provided for synchronization of sensor signals between devices. One or more of the devices may collect sensor data. The device may create a sensor signal from the sensor data, which it may make available to other devices using a publisher/subscriber model. The other devices may subscribe to sensor signals they choose. A device could be a provider or a consumer of the sensor signals. A device may have a layer of code between an operating system and software applications that processes the data for the applications. The processing may include such actions as synchronizing the data in a sensor signal to a local time clock, predicting future values for data in a sensor signal, and providing data samples for a sensor signal at a frequency that an application requests, among other actions. | 06-28-2012 |
20120163723 | CLASSIFICATION OF POSTURE STATES - Systems and methods for estimating a posture of a body part of a user are disclosed. In one disclosed embodiment, an image is received from a sensor, where the image includes at least a portion of an image of the user including the body part. The skeleton information of the user is estimated from the image, a region of the image corresponding to the body part is identified at least partially based on the skeleton information, and a shape descriptor is extracted for the region and the shape descriptor is classified based on training data to estimate the posture of the body part. | 06-28-2012 |
20120165964 | INTERACTIVE CONTENT CREATION - An audio/visual system (e.g., such as an entertainment console or other computing device) plays a base audio track, such as a portion of a pre-recorded song or notes from one or more instruments. Using a depth camera or other sensor, the system automatically detects that a user (or a portion of the user) enters a first collision volume of a plurality of collision volumes. Each collision volume of the plurality of collision volumes is associated with a different audio stem. In one example, an audio stem is a sound from a subset of instruments playing a song, a portion of a vocal track for a song, or notes from one or more instruments. In response to automatically detecting that the user (or a portion of the user) entered the first collision volume, the appropriate audio stem associated with the first collision volume is added to the base audio track or removed from the base audio track. | 06-28-2012 |
20120194645 | LIVING ROOM MOVIE CREATION - A system and method are disclosed for living room movie creation. Movies can be directed, captured, and edited using a system that includes a depth camera. A virtual movie set can be created by using ordinary objects in the living room as virtual props. The system is able to capture motions of actors using the depth camera and to generate a movie based thereon. Therefore, there is no need for the actors to wear any special markers to detect their motion. A director may view scenes from the perspective of a “virtual camera” and record those scenes for later editing. | 08-02-2012 |
20120213212 | LIFE STREAMING - A system and method for analyzing, summarizing, and transmitting life experiences captured using a life recorder is described. A life recorder is a recording device that continuously captures life experiences, including unanticipated life experiences, in video and/or audio recordings. In some embodiments, the video and/or audio recordings generated by a life recorder are automatically summarized, indexed, and stored for future use. By indexing and storing life recordings, a life recorder may search for and acquire life recordings generated by itself or another life recorder, thereby allowing life experiences to be shared minutes or even years later. In some embodiments, recordings generated by a life recorder may be analyzed in real-time and automatically pushed to one or more target devices. The ability to automatically and instantaneously push life recordings as live feeds to one or more target devices allows friends and family to experience one's life experience in real-time. | 08-23-2012 |
20120278904 | CONTENT DISTRIBUTION REGULATION BY VIEWING USER - A content presentation system and method allowing content providers to regulate the presentation of content on a per-user-view basis. Content is distributed with an associated license option on the number of individual consumers or viewers allowed to consume the content. Consumers are presented with a content selection and a choice of licenses allowing consumption of the content. The users consuming the content on a display device are monitored so that if the number of user-views licensed is exceeded, remedial action may be taken. | 11-01-2012 |
20130002813 | VIEWING WINDOWS FOR VIDEO STREAMS - Techniques are provided for viewing windows for video streams. A video stream from a video capture device is accessed. Data that describes movement or position of a person is accessed. A viewing window is placed in the video stream based on the data that describes movement or position of the person. The viewing window is provided to a display device in accordance with the placement of the viewing window in the video stream. Motion sensors can detect motion of the person carrying the video capture device in order to dampen the motion such that the video on the remote display does not suffer from motion artifacts. Sensors can also track the eye gaze of either the person carrying the mobile video capture device or the remote display device to enable control of the spatial region of the video stream shown at the display device. | 01-03-2013 |
20130013811 | DISTRIBUTED ROBUST CLOCK SYNCHRONIZATION - Technology is provided for synchronization of clock information between networked devices. One or more of the devices may include one or more applications needing access to data and a common time reference between devices. In one embodiment, the devices have applications utilizing data shared in a network environment with other devices, as well as having a reference to a local clock signal on each device. A device may have a layer of code between the operating system and software applications that processes the data and maintains a remote clock reference for one or more of the other devices on the network. | 01-10-2013 |
20130021373 | Automatic Text Scrolling On A Head-Mounted Display - A see-through head-mounted display (HMD) device, e.g., in the form of glasses, provides a view of an augmented reality image including text, such as in an electronic book or magazine, word processing document, email, karaoke, teleprompter or other public speaking assistance application. The presentation of text and/or graphics can be adjusted based on sensor inputs indicating a gaze direction, focal distance and/or biological metric of the user. A current state of the text can be bookmarked when the user looks away from the image and subsequently resumed from the bookmarked state. A forward facing camera can adjust the text if a real world object passes in front of it, or adjust the appearance of the text based on a color or pattern of a real world background object. In a public speaking or karaoke application, information can be displayed regarding a level of interest of the audience and names of audience members. | 01-24-2013 |
20130044130 | PROVIDING CONTEXTUAL PERSONAL INFORMATION BY A MIXED REALITY DEVICE - The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. An identifier and a position indicator of the person in the location are output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud-based executing software can identify and track the positions of people based on image and non-image data from display devices in the location. | 02-21-2013 |
20130050642 | ALIGNING INTER-PUPILLARY DISTANCE IN A NEAR-EYE DISPLAY SYSTEM - The technology provides for automatic alignment of a see-through near-eye, mixed reality device with an inter-pupillary distance (IPD). A determination is made as to whether a see-through, near-eye, mixed reality display device is aligned with an IPD of a user. If the display device is not aligned with the IPD, the display device is automatically adjusted. In some examples, the alignment determination is based on determinations of whether an optical axis of each display optical system positioned to be seen through by a respective eye is aligned with a pupil of the respective eye in accordance with an alignment criteria. The pupil alignment may be determined based on an arrangement of gaze detection elements for each display optical system including at least one sensor for capturing data of the respective eye and the captured data. The captured data may be image data, image and glint data, and glint data only. | 02-28-2013 |
20130050833 | ADJUSTMENT OF A MIXED REALITY DISPLAY FOR INTER-PUPILLARY DISTANCE ALIGNMENT - The technology provides for adjusting a see-through, near-eye, mixed reality display device for alignment with an inter-pupillary distance (IPD) of a user by different examples of display adjustment mechanisms. The see-through, near-eye, mixed reality system includes for each eye a display optical system having an optical axis. Each display optical system is positioned to be seen through by a respective eye, and is supported on a respective movable support structure. A display adjustment mechanism attached to the display device also connects with each movable support structure for moving the structure. A determination is automatically made as to whether the display device is aligned with an IPD of a user. If not aligned, one or more adjustment values for a position of at least one of the display optical systems is automatically determined. The display adjustment mechanism moves the at least one display optical system in accordance with the adjustment values. | 02-28-2013 |
20130083003 | PERSONAL AUDIO/VISUAL SYSTEM - The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The system can be used in various entertainment, sports, shopping and theme-park situations to provide a mixed reality experience. | 04-04-2013 |
20130083007 | CHANGING EXPERIENCE USING PERSONAL A/V SYSTEM - A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. | 04-04-2013 |
20130083008 | ENRICHED EXPERIENCE USING PERSONAL A/V SYSTEM - A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. | 04-04-2013 |
20130083009 | EXERCISING APPLICATIONS FOR PERSONAL AUDIO/VISUAL SYSTEM - The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The personal A/V apparatus serves as an exercise program that is always with the user, provides motivation for the user, visually tells the user how to exercise, and lets the user exercise with other people who are not present. | 04-04-2013 |
20130083011 | REPRESENTING A LOCATION AT A PREVIOUS TIME PERIOD USING AN AUGMENTED REALITY DISPLAY - Technology is described for representing a physical location at a previous time period with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. The personal A/V apparatus is identified as being within the physical location, and one or more objects in a display field of view of the near-eye, augmented reality display are automatically identified based on a three dimensional mapping of objects in the physical location. User input, which may be natural user interface (NUI) input, indicates a previous time period, and one or more 3D virtual objects associated with the previous time period are displayed from a user perspective associated with the display field of view. An object may be erased from the display field of view, and a camera effect may be applied when changing between display fields of view. | 04-04-2013 |
20130083018 | PERSONAL AUDIO/VISUAL SYSTEM WITH HOLOGRAPHIC OBJECTS - A system for generating an augmented reality environment using state-based virtual objects is described. A state-based virtual object may be associated with a plurality of different states. Each state of the plurality of different states may correspond with a unique set of triggering events different from those of any other state. The set of triggering events associated with a particular state may be used to determine when a state change from the particular state is required. In some cases, each state of the plurality of different states may be associated with a different 3-D model or shape. The plurality of different states may be defined using a predetermined and standardized file format that supports state-based virtual objects. In some embodiments, one or more potential state changes from a particular state may be predicted based on one or more triggering probabilities associated with the set of triggering events. | 04-04-2013 |
20130083062 | PERSONAL A/V SYSTEM WITH CONTEXT RELEVANT INFORMATION - A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. | 04-04-2013 |
20130083063 | Service Provision Using Personal Audio/Visual System - A collaborative on-demand system allows a user of a head-mounted display device (HMDD) to obtain assistance with an activity from a qualified service provider. In a session, the user and service provider exchange camera-captured images and augmented reality images. A gaze-detection capability of the HMDD allows the user to mark areas of interest in a scene. The service provider can similarly mark areas of the scene, as well as provide camera-captured images of the service provider's hand or arm pointing to or touching an object of the scene. The service provider can also select an animation or text to be displayed on the HMDD. A server can match user requests with qualified service providers which meet parameters regarding fee, location, rating and other preferences. Or, service providers can review open requests and self-select appropriate requests, initiating contact with a user. | 04-04-2013 |
20130083064 | PERSONAL AUDIO/VISUAL APPARATUS PROVIDING RESOURCE MANAGEMENT - Technology is described for resource management based on data including image data of a resource captured by at least one capture device of at least one personal audiovisual (A/V) apparatus including a near-eye, augmented reality (AR) display. A resource is automatically identified from image data captured by at least one capture device of at least one personal A/V apparatus and object reference data. A location in which the resource is situated and a 3D space position or volume of the resource in the location is tracked. A property of the resource is also determined from the image data and tracked. A function of a resource may also be stored for determining whether the resource is usable for a task. Responsive to notification criteria for the resource being satisfied, image data related to the resource is displayed on the near-eye AR display. | 04-04-2013 |
20130083173 | VIRTUAL SPECTATOR EXPERIENCE WITH A PERSONAL AUDIO/VISUAL APPARATUS - Technology is described for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display. A position volume of an event object participating in an event in a first 3D coordinate system for a first location is received and mapped to a second position volume in a second 3D coordinate system at a second location remote from where the event is occurring. A display field of view of the near-eye AR display at the second location is determined, and real-time 3D virtual data representing the one or more event objects which are positioned within the display field of view is displayed in the near-eye AR display. A user may select a viewing position from which to view the event. Additionally, virtual data of a second user may be displayed at a position relative to a first user. | 04-04-2013 |
20130084970 | Sharing Games Using Personal Audio/Visual Apparatus - A game can be created, shared and played using a personal audio/visual apparatus such as a head-mounted display device (HMDD). Rules of the game, and a configuration of the game space, can be standard or custom. Boundary points of the game can be defined by a gaze direction of the HMDD, by the user's location, by a model of a physical game space such as an instrumented court or by a template. Players can be identified and notified of the availability of a game using a server push technology. For example, a user in a particular location may be notified of the availability of a game at that location. A server manages the game, including storing the rules, boundaries and a game state. The game state can identify players and their scores. Real world objects can be imaged and provided as virtual objects in the game space. | 04-04-2013 |
20130085345 | Personal Audio/Visual System Providing Allergy Awareness - A system provides a recommendation of food items to a user based on nutritional preferences of the user, using a head-mounted display device (HMDD) worn by the user. In a store, a forward-facing camera of the HMDD captures an image of a food item. The food item can be identified by the image, such as based on packaging of the food item. Nutritional parameters of the food item are compared to nutritional preferences of the user to determine whether the food item is recommended. The HMDD displays an augmented reality image to the user indicating whether the food item is recommended. If the food item is not recommended, a substitute food item can be identified. The nutritional preferences can indicate food allergies, preferences for low calorie foods and so forth. In a restaurant, the HMDD can recommend menu selections for a user. | 04-04-2013 |
20130095924 | ENHANCING A SPORT USING AN AUGMENTED REALITY DISPLAY - Technology is described for providing a personalized sport performance experience with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. A physical movement recommendation is determined for the user performing a sport based on skills data for the user for the sport, physical characteristics of the user, and 3D space positions for at least one or more sport objects. 3D virtual data depicting one or more visual guides for assisting the user in performing the physical movement recommendation may be displayed from a user perspective associated with a display field of view of the near-eye AR display. An avatar may also be displayed by the near-eye AR display performing a sport. The avatar may perform the sport interactively with the user or be displayed performing a prior performance of an individual represented by the avatar. | 04-18-2013 |
20130147687 | DISPLAYING VIRTUAL DATA AS PRINTED CONTENT - The technology provides embodiments for displaying virtual data as printed content by a see-through, near-eye, mixed reality display device system. One or more literary content items registered to a reading object in a field of view of the display device system are displayed with print layout characteristics. Print layout characteristics from a publisher of each literary content item are selected if available. The reading object has a type like a magazine, book, journal or newspaper and may be a real object or a virtual object displayed by the display device system. The reading object type of the virtual object is based on a reading object type associated with a literary content item to be displayed. Virtual augmentation data registered to a literary content item is displayed responsive to detecting user physical action in image data. An example of a physical action is a page flipping gesture. | 06-13-2013 |
20130147836 | MAKING STATIC PRINTED CONTENT DYNAMIC WITH VIRTUAL DATA - The technology provides embodiments for making static printed content being viewed through a see-through, mixed reality display device system more dynamic with display of virtual data. A printed content item, for example a book or magazine, is identified from image data captured by cameras on the display device, and user selection of a printed content selection within the printed content item is identified based on physical action user input, for example eye gaze or a gesture. A task in relation to the printed content selection can also be determined based on physical action user input. Virtual data for the printed content selection is displayed in accordance with the task. Additionally, virtual data can be linked to a work embodied in a printed content item. Furthermore, a virtual version of the printed material may be displayed at a more comfortable reading position and with improved visibility of the content. | 06-13-2013 |
20130147838 | UPDATING PRINTED CONTENT WITH PERSONALIZED VIRTUAL DATA - The technology provides for updating printed content with personalized virtual data using a see-through, near-eye, mixed reality display device system. A printed content item, for example a book or magazine, is identified from image data captured by cameras on the display device, and user selection of a printed content selection within the printed content item is identified based on physical action user input, for example eye gaze or a gesture. Virtual data is selected from available virtual data for the printed content selection based on user profile data, and the display device system displays the selected virtual data in a position registered to the position of the printed content selection. In some examples, a task related to the printed content item is determined based on physical action user input, and personalized virtual data is displayed registered to the printed content item in accordance with the task. | 06-13-2013 |
20130165225 | NATURAL USER INPUT FOR DRIVING INTERACTIVE STORIES - A system and method are disclosed for combining interactive gaming aspects into a linear story. A user may interact with the linear story via a NUI system to alter the story and the images that are presented to the user. In an example, a user may alter the story by performing a predefined exploration gesture. This gesture brings the user into the | 06-27-2013 |
20130169683 | HEAD MOUNTED DISPLAY WITH IRIS SCAN PROFILING - A see-through head-mounted display and method for operating the display to optimize its performance by automatically referencing a user profile. The identity of the user is determined by performing an iris scan and recognition, enabling user profile information to be retrieved and used to enhance the user's experience with the see-through head-mounted display. The user profile may contain user preferences regarding services providing augmented reality images to the see-through head-mounted display, as well as display adjustment information optimizing the position of display elements in the see-through head-mounted display. | 07-04-2013 |
20130286004 | DISPLAYING A COLLISION BETWEEN REAL AND VIRTUAL OBJECTS - Technology is described for displaying a collision between objects by an augmented reality display device system. A collision between a real object and a virtual object is identified based on three dimensional space position data of the objects. At least one effect on at least one physical property of the real object is determined based on physical properties of the real object, like a change in surface shape, and physical interaction characteristics of the collision. Simulation image data is generated and displayed simulating the effect on the real object by the augmented reality display. Virtual objects under control of different executing applications can also interact with one another in collisions. | 10-31-2013 |
20130286178 | GAZE DETECTION IN A SEE-THROUGH, NEAR-EYE, MIXED REALITY DISPLAY - The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed. | 10-31-2013 |
20130300653 | GAZE DETECTION IN A SEE-THROUGH, NEAR-EYE, MIXED REALITY DISPLAY - The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed. | 11-14-2013 |
20130307855 | HOLOGRAPHIC STORY TELLING - A system for generating and displaying holographic visual aids associated with a story to an end user of a head-mounted display device while the end user is reading the story or perceiving the story being read aloud is described. The story may be embodied within a reading object (e.g., a book) in which words of the story may be displayed to the end user. The holographic visual aids may include a predefined character animation that is synchronized to a portion of the story corresponding with the character being animated. A reading pace of a portion of the story may be used to control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud. In some cases, an existing book without predetermined AR tags may be augmented with holographic visual aids. | 11-21-2013 |
20130307856 | SYNCHRONIZING VIRTUAL ACTOR'S PERFORMANCES TO A SPEAKER'S VOICE - A system for generating and displaying holographic visual aids associated with a story to an end user of a head-mounted display device while the end user is reading the story or perceiving the story being read aloud is described. The story may be embodied within a reading object (e.g., a book) in which words of the story may be displayed to the end user. The holographic visual aids may include a predefined character animation that is synchronized to a portion of the story corresponding with the character being animated. A reading pace of a portion of the story may be used to control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud. In some cases, an existing book without predetermined AR tags may be augmented with holographic visual aids. | 11-21-2013 |
20130321255 | NAVIGATING CONTENT IN AN HMD USING A PHYSICAL OBJECT - Technology is disclosed herein to help a user navigate through large amounts of content while wearing a see-through, near-eye, mixed reality display device such as a head mounted display (HMD). The user can use a physical object such as a book to navigate through content being presented in the HMD. In one embodiment, a book has markers on the pages that allow the system to organize the content. The book could have real content, but it could be blank other than the markers. As the user flips through the book, the system recognizes the markers and presents content associated with the respective marker in the HMD. | 12-05-2013 |
20130321462 | GESTURE BASED REGION IDENTIFICATION FOR HOLOGRAMS - Techniques are provided for allowing a user to select a region within virtual imagery, such as a hologram, being presented in an HMD. The user could select the region by using their hands to form a closed loop such that from the perspective of the user, the closed loop corresponds to the region the user wishes to select. The user could select the region by using a prop, such as a picture frame. In response to the selection, the selected region could be presented using a different rendering technique than other regions of the virtual imagery. Various rendering techniques such as zooming, filtering, etc. could be applied to the selected region. The identification of the region by the user could also serve as a selection of an element in that portion of the virtual image. | 12-05-2013 |
20130328783 | TRANSMISSION OF INFORMATION TO SMART FABRIC OUTPUT DEVICE - A system and method are provided for a user to communicate uniquely human and personal information to one or more other users, via a smart textile input/output device. The information may be displayed on the device associated with the user, on one or more other articles of clothing associated with one or more other users, or on one or more external devices proximate to and associated with a target user. The information may result from a direct input of display information from a source user to a target user or from a third party directed to one or more target users. | 12-12-2013 |
20140002442 | MECHANISM TO GIVE HOLOGRAPHIC OBJECTS SALIENCY IN MULTIPLE SPACES | 01-02-2014 |
20140002491 | DEEP AUGMENTED REALITY TAGS FOR HEAD MOUNTED DISPLAYS | 01-02-2014 |
20140002492 | PROPAGATION OF REAL WORLD PROPERTIES INTO AUGMENTED REALITY IMAGES | 01-02-2014 |
20140002495 | MULTI-NODE POSTER LOCATION | 01-02-2014 |
20140002496 | CONSTRAINT BASED INFORMATION INFERENCE | 01-02-2014 |
20140006026 | CONTEXTUAL AUDIO DUCKING WITH SITUATION AWARE DEVICES | 01-02-2014 |
20140024457 | GAME BROWSING - Embodiments of the present invention allow players to instantly access and begin playing games through an online service. To make the games instantly available, an online service keeps instances of games running in active memory waiting for a player to be added. The game instances running in active memory are not attached to a player profile or an I/O channel from a game client. Once the player requests a game, the player's player profile is loaded into the running game instance and an I/O channel is mapped from the game client to the game instance. From the player's perspective, the preloaded game instances allow the player to browse directly from game to game with very little delay. To optimize the usage of server-side resources, historical usage data may be analyzed to anticipate demand for different games. | 01-23-2014 |
20140152558 | DIRECT HOLOGRAM MANIPULATION USING IMU - Methods for controlling an augmented reality environment associated with a head-mounted display device (HMD) are described. In some embodiments, a virtual pointer may be displayed to an end user of the HMD and controlled by the end user using motion and/or orientation information associated with a secondary device (e.g., a mobile phone). Using the virtual pointer, the end user may select and manipulate virtual objects within the augmented reality environment, select real-world objects within the augmented reality environment, and/or control a graphical user interface of the HMD. In some cases, the initial position of the virtual pointer within the augmented reality environment may be determined based on a particular direction in which the end user is gazing and/or a particular object at which the end user is currently focusing on or has recently focused on. | 06-05-2014 |
20140168261 | DIRECT INTERACTION SYSTEM MIXED REALITY ENVIRONMENTS - A system and method are disclosed for interacting with virtual objects in a virtual environment using an accessory such as a hand held object. The virtual object may be viewed using a display device. The display device and hand held object may cooperate to determine a scene map of the virtual environment, the display device and hand held object being registered in the scene map. | 06-19-2014 |
20140198017 | Wearable Behavior-Based Vision System - A see-through display apparatus includes a see-through, head mounted display and sensors on the display which detect audible and visual data in a field of view of the apparatus. A processor cooperates with the display to provide information to a wearer of the device using a behavior-based real object mapping system. At least a global zone and an egocentric behavioral zone relative to the apparatus are established, and real objects are assigned behaviors that are mapped to the respective zones occupied by the objects. The behaviors assigned to the objects can be used by applications that provide services to the wearer, using the behaviors as the foundation for evaluating the type of feedback to provide in the apparatus. | 07-17-2014 |
20140253437 | Automatic Text Scrolling On A Display Device - A see-through head-mounted display (HMD) device, e.g., in the form of glasses, provides a view of an augmented reality image including text, such as in an electronic book or magazine, word processing document, email, karaoke, teleprompter or other public speaking assistance application. The presentation of text and/or graphics can be adjusted based on sensor inputs indicating a gaze direction, focal distance and/or biological metric of the user. A current state of the text can be bookmarked when the user looks away from the image and subsequently resumed from the bookmarked state. A forward-facing camera can adjust the text if a real world object passes in front of it, or adjust the appearance of the text based on a color or pattern of a real world background object. In a public speaking or karaoke application, information can be displayed regarding a level of interest of the audience and names of audience members. | 09-11-2014 |
20140347391 | HOLOGRAM ANCHORING AND DYNAMIC POSITIONING - A system and method are disclosed for displaying virtual objects in a mixed reality environment in a way that is optimal and most comfortable for a user to interact with the virtual objects. When a user is moving through the mixed reality environment, the virtual objects may remain world-locked, so that the user can move around and explore the virtual objects from different perspectives. When the user is motionless in the mixed reality environment, the virtual objects may rotate to face the user so that the user can easily view and interact with the virtual objects. | 11-27-2014 |
20140368532 | VIRTUAL OBJECT ORIENTATION AND VISUALIZATION - A method and apparatus for the creation of a perspective-locked virtual object in world space. The virtual object may be consumed by another user with a consumption device at a location, position, and orientation which is the same as, or proximate to, the location, position, and orientation where the virtual object was created. Objects may have one, few or many allowable consumption locations, positions, and orientations defined by their creator. | 12-18-2014 |
20140368533 | MULTI-SPACE CONNECTED VIRTUAL DATA OBJECTS - A see-through head mounted display apparatus includes a display and a processor. The processor determines geo-located positions of points of interest within a field of view and generates markers indicating that information regarding an associated real world object is available to the user. Markers are rendered in the display relative to the geo-located position and the field of view of the user. When a user selects a marker through a user gesture, the device displays a near-field virtual object having a visual tether to the marker simultaneously with the marker. The user may interact with the marker to view, add or delete information associated with the point of interest. | 12-18-2014 |
20140368534 | CONCURRENT OPTIMAL VIEWING OF VIRTUAL OBJECTS - A see-through head mounted display apparatus includes code performing a method of choosing an optimal viewing location and perspective for shared-view virtual objects rendered for multiple users in a common environment. Multiple objects and multiple users are taken into account in determining the optimal, common viewing location. The technology allows each user to have a common view of the relative position of the object in the environment. | 12-18-2014 |
20140368535 | HYBRID WORLD/BODY LOCKED HUD ON AN HMD - A system and method are disclosed for displaying virtual objects in a mixed reality environment in a way that is optimal and most comfortable for a user to interact with the virtual objects. When a user is not focused on the virtual object, which may be a heads-up display, or HUD, the HUD may remain body locked to the user. As such, the user may explore and interact with a mixed reality environment presented by the head mounted display device without interference from the HUD. When a user wishes to view and/or interact with the HUD, the user may look at the HUD. At this point, the HUD may change from a body locked virtual object to a world locked virtual object. The user is then able to view and interact with the HUD from different positions and perspectives of the HUD. | 12-18-2014 |
20140368537 | SHARED AND PRIVATE HOLOGRAPHIC OBJECTS - A system and method are disclosed for displaying virtual objects in a mixed reality environment including shared virtual objects and private virtual objects. Multiple users can collaborate together in interacting with the shared virtual objects. A private virtual object may be visible to a single user. In examples, private virtual objects of respective users may facilitate the users' collaborative interaction with one or more shared virtual objects. | 12-18-2014 |
20140372957 | MULTI-STEP VIRTUAL OBJECT SELECTION - A head mounted display allows user selection of a virtual object through multi-step focusing by the user. Focus on the selectable object is determined and then a validation object is displayed. When user focus moves to the validation object, a timeout determines that a selection of the validation object, and thus the selectable object, has occurred. The technology can be used in see-through head mounted displays to allow a user to effectively navigate an environment with a multitude of virtual objects without unintended selections. | 12-18-2014 |
20140375680 | TRACKING HEAD MOVEMENT WHEN WEARING MOBILE DEVICE - Methods for tracking the head position of an end user of a head-mounted display device (HMD) relative to the HMD are described. In some embodiments, the HMD may determine an initial head tracking vector associated with an initial head position of the end user relative to the HMD, determine one or more head tracking vectors corresponding with one or more subsequent head positions of the end user relative to the HMD, track head movements of the end user over time based on the initial head tracking vector and the one or more head tracking vectors, and adjust positions of virtual objects displayed to the end user based on the head movements. In some embodiments, the resolution and/or number of virtual objects generated and displayed to the end user may be modified based on a degree of head movement of the end user relative to the HMD. | 12-25-2014 |
20150049114 | EXERCISING APPLICATIONS FOR PERSONAL AUDIO/VISUAL SYSTEM - The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The personal A/V apparatus serves as an exercise program that is always with the user, provides motivation for the user, visually tells the user how to exercise, and lets the user exercise with other people who are not present. | 02-19-2015 |
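Several of the head-mounted-display filings above (e.g., 20140372957) describe selecting a virtual object through multi-step focusing: focusing on a selectable object reveals a validation object, and holding focus on that validation object through a dwell timeout confirms the selection. As a rough illustration of that pattern only — not code from any of the patents, with all class and method names hypothetical — a minimal Python sketch might look like:

```python
import time

# Hypothetical sketch of multi-step focus selection (cf. 20140372957):
# focusing on a selectable object shows a validation target; dwelling on
# the validation target for a timeout confirms selection of the object.
class FocusSelector:
    def __init__(self, dwell_seconds=1.0, clock=time.monotonic):
        self.dwell_seconds = dwell_seconds
        self.clock = clock                # injectable for testing
        self.validation_shown_for = None  # object whose validation target is visible
        self.dwell_started = None         # when focus landed on the validation target

    def update(self, focused):
        """Feed the currently focused item each frame.

        Returns the selected object once the dwell completes, else None."""
        if self.validation_shown_for is None:
            if focused is not None:
                # Step 1: focus on a selectable object reveals its validation target.
                self.validation_shown_for = focused
            return None
        validation_target = ("validate", self.validation_shown_for)
        if focused == validation_target:
            # Step 2: hold focus on the validation target until the timeout elapses.
            if self.dwell_started is None:
                self.dwell_started = self.clock()
            elif self.clock() - self.dwell_started >= self.dwell_seconds:
                selected = self.validation_shown_for
                self.validation_shown_for = None
                self.dwell_started = None
                return selected
            return None
        # Focus moved elsewhere: reset, which is what prevents unintended selections.
        self.dwell_started = None
        self.validation_shown_for = focused
        return None
```

The two-step confirmation is the point of the design: a glance across a cluttered scene only reveals validation targets, and only a deliberate, sustained dwell on one of them commits a selection.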