Patent application number | Description | Published |
20100194762 | Standard Gestures - Systems, methods and computer readable media are disclosed for grouping complementary sets of standard gestures into gesture libraries. The gestures may be complementary in that they are frequently used together in a context or in that their parameters are interrelated. Where a parameter of a gesture is set with a first value, all other parameters of the gesture, and of other gestures in the gesture library, that depend on the first value may be set with their own values, each determined using the first value. | 08-05-2010 |
20100199228 | Gesture Keyboarding - Systems, methods and computer readable media are disclosed for gesture keyboarding. A user makes a gesture by either making a pose or moving in a pre-defined way that is captured by a depth camera. The depth information provided by the depth camera is parsed to determine at least that part of the user that is making the gesture. Once the data is parsed, the character or action signified by the gesture is identified. | 08-05-2010 |
20100199229 | MAPPING A NATURAL INPUT DEVICE TO A LEGACY SYSTEM - Systems and methods for mapping natural input devices to legacy system inputs are disclosed. One example system may include a computing device having an algorithmic preprocessing module configured to receive input data containing a natural user input and to identify the natural user input in the input data. The computing device may further include a gesture module coupled to the algorithmic preprocessing module, the gesture module being configured to associate the natural user input to a gesture in a gesture library. The computing device may also include a mapping module to map the gesture to a legacy controller input, and to send the legacy controller input to a legacy system in response to the natural user input. (A minimal mapping sketch appears after this table.) | 08-05-2010 |
20100199230 | GESTURE RECOGNIZER SYSTEM ARCHITECTURE - Systems, methods and computer readable media are disclosed for a gesture recognizer system architecture. A recognizer engine is provided, which receives user motion data and provides that data to a plurality of filters. A filter corresponds to a gesture, which may then be tuned by an application receiving information from the gesture recognizer so that the specific parameters of the gesture—such as an arm acceleration for a throwing gesture—may be set on a per-application level, or multiple times within a single application. Each filter may output to an application using it a confidence level that the corresponding gesture occurred, as well as further details about the user motion data. (A recognizer-and-filters sketch appears after this table.) | 08-05-2010 |
20100199231 | PREDICTIVE DETERMINATION - Systems, methods and computer readable media are disclosed for a gesture recognizer system architecture. A recognizer engine is provided, which receives user motion data and provides that data to a plurality of filters. A filter corresponds to a gesture, which may then be tuned by an application receiving information from the gesture recognizer so that the specific parameters of the gesture—such as an arm acceleration for a throwing gesture—may be set on a per-application level, or multiple times within a single application. Each filter may output to an application using it a confidence level that the corresponding gesture occurred, as well as further details about the user motion data. | 08-05-2010 |
20100238182 | CHAINING ANIMATIONS - In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. Animations of a user's gestures can be chained together with pre-canned animations into sequences, where animation blending techniques can provide for a smoother transition between the animation types. Techniques for blending animations, which may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate. | 09-23-2010 |
20100241998 | VIRTUAL OBJECT MANIPULATION - Systems, methods and computer readable media are disclosed for manipulating virtual objects. A user may utilize a controller, such as his hand, in physical space to associate with a cursor in a virtual environment. As the user manipulates the controller in physical space, the motion is captured by a depth camera. The image data from the depth camera is parsed to determine how the controller is manipulated, and a corresponding manipulation of the cursor is performed in virtual space. Where the cursor interacts with a virtual object in the virtual space, that virtual object is manipulated by the cursor. | 09-23-2010 |
20100266210 | Predictive Determination - Systems, methods and computer readable media are disclosed for a gesture recognizer system architecture. A recognizer engine is provided, which receives user motion data and provides that data to a plurality of filters. A filter corresponds to a gesture, which may then be tuned by an application receiving information from the gesture recognizer so that the specific parameters of the gesture—such as an arm acceleration for a throwing gesture—may be set on a per-application level, or multiple times within a single application. Each filter may output to an application using it a confidence level that the corresponding gesture occurred, as well as further details about the user motion data. | 10-21-2010 |
20100277489 | DETERMINE INTENDED MOTIONS - It may be desirable to apply corrective data to aspects of a captured image or a user-performed gesture for display of a visual representation that corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by the capture device, such as a camera. Aspects of a skeletal or mesh model of a person, which is generated based on the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confining virtual movement to defined spaces, or the like. | 11-04-2010 |
20100278393 | ISOLATE EXTRANEOUS MOTIONS - A system may receive image data and capture motion with respect to a target in a physical space and recognize a gesture from the captured motion. It may be desirable to isolate aspects of captured motion to differentiate random and extraneous motions. For example, a gesture may comprise motion of a user's right arm, and it may be desirable to isolate the motion of the user's right arm and exclude an interpretation of any other motion. Thus, the isolated aspect may be the focus of the received data for gesture recognition. Alternatively, the isolated aspect may be an aspect of the captured motion that is removed from consideration when identifying a gesture from the captured motion. For example, gesture filters may be modified to correspond to the user's natural lean to eliminate the effect the lean has on the registry of a motion with a gesture filter. | 11-04-2010 |
20100281432 | SHOW BODY POSITION - A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions, may not know what gestures are applicable for an executing application, or may not understand or know how to perform those gestures. Providing visual feedback representing instructional gesture data to the user can teach the user how to gesture properly. The visual feedback may be provided in any number of suitable ways. For example, visual feedback may be provided via ghosted images, player avatars, or skeletal representations. The system can process prerecorded or live content for displaying visual feedback representing instructional gesture data. The feedback can portray the deltas between the user's actual position and the ideal gesture position. | 11-04-2010 |
20100281438 | ALTERING A VIEW PERSPECTIVE WITHIN A DISPLAY ENVIRONMENT - Disclosed herein are systems and methods for altering a view perspective within a display environment. For example, gesture data corresponding to a plurality of inputs may be stored. The inputs may be provided to a game or application implemented by a computing device. Images of a user of the game or application may be captured. For example, a suitable capture device may capture several images of the user over a period of time. The images may be analyzed and processed for detecting a user's gesture. Aspects of the user's gesture may be compared to the stored gesture data for determining an intended gesture input for the user. The comparison may be part of an analysis for determining inputs corresponding to the gesture data, where one or more of the inputs are input into the game or application and cause a view perspective within the display environment to be altered. | 11-04-2010 |
20100281439 | Method to Control Perspective for a Camera-Controlled Computer - Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control. | 11-04-2010 |
20100306261 | Localized Gesture Aggregation - Systems, methods and computer readable media are disclosed for localized gesture aggregation. In a system where user movement is captured by a capture device to provide gesture input to the system, demographic information regarding users as well as data corresponding to how those users respectively make various gestures is gathered. When a new user begins to use the system, his demographic information is analyzed to determine the way he is most likely to attempt a given gesture, or to find easiest to make. That most likely way is then used to process the new user's gesture input. | 12-02-2010 |
20100306713 | Gesture Tool - Systems, methods and computer readable media are disclosed for a gesture tool. A capture device captures user movement and provides corresponding data to a gesture recognizer engine and an application. The data is then parsed to determine whether it satisfies one or more gesture filters, each filter corresponding to a user-performed gesture. The data and the information about the filters are also sent to a gesture tool, which displays aspects of the data and filters. In response to user input corresponding to a change in a filter, the gesture tool sends an indication of such to the gesture recognizer engine and application, where that change occurs. | 12-02-2010 |
20100306714 | Gesture Shortcuts - Systems, methods and computer readable media are disclosed for gesture shortcuts. A user's movement or body position is captured by a capture device of a system, and is used as input to control the system. For a system-recognized gesture, there may be a full version of the gesture and a shortcut of the gesture. Where the system recognizes that either the full version of the gesture or the shortcut of the gesture has been performed, it sends to a corresponding application an indication that the system-recognized gesture was observed. Where the shortcut comprises a subset of the full version of the gesture, and both the shortcut and the full version of the gesture are recognized as the user performs the full version of the gesture, the system recognizes that only a single performance of the gesture has occurred, and indicates this to the application. | 12-02-2010 |
20100306715 | Gestures Beyond Skeletal - Systems, methods and computer readable media are disclosed for gesture input beyond skeletal data. A user's movement or body position is captured by a capture device of a system. Further, non-user-position data is received by the system, such as controller input by the user, an item that the user is wearing, a prop under the control of the user, or a second user's movement or body position. The system incorporates both the user-position data and the non-user-position data to determine one or more inputs the user made to the system. | 12-02-2010 |
20110035666 | SHOW BODY POSITION - A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions, may not know what gestures are applicable for an executing application, or may not understand or know how to perform those gestures. Providing visual feedback representing instructional gesture data to the user can teach the user how to gesture properly. The visual feedback may be provided in any number of suitable ways. For example, visual feedback may be provided via ghosted images, player avatars, or skeletal representations. The system can process prerecorded or live content for displaying visual feedback representing instructional gesture data. The feedback can portray the deltas between the user's actual position and the ideal gesture position. | 02-10-2011 |
20110099476 | DECORATING A DISPLAY ENVIRONMENT - Disclosed herein are systems and methods for decorating a display environment. In one embodiment, a user may decorate a display environment by making one or more gestures, using voice commands, using a suitable interface device, and/or combinations thereof. A voice command can be detected for user selection of an artistic feature, such as, for example, a color, a texture, an object, or a visual effect for decorating a display environment. The user can also gesture for selecting a portion of the display environment for decoration. Next, the selected portion of the display environment can be altered based on the selected artistic feature. The user's motions can be reflected in the display environment by an avatar. In addition, a virtual canvas or three-dimensional object can be displayed in the display environment for decoration by the user. | 04-28-2011 |
20110109617 | Visualizing Depth - An image such as a depth image of a scene may be received, observed, or captured by a device. The image may then be analyzed to identify one or more targets within the scene. When a target is identified, vertices may be generated. A mesh model may then be created by drawing lines that connect the vertices. Additionally, a depth value may be calculated for each vertex. The depth values of the vertices may then be used to extrude the mesh model such that the mesh model may represent the target in a three-dimensional virtual world. A colorization scheme, a texture, lighting effects, or the like, may also be applied to the mesh model to convey the depth the virtual object may have in the virtual world. (A vertex-and-extrusion sketch appears after this table.) | 05-12-2011 |
20110175801 | Directed Performance In Motion Capture System - Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or by visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person. | 07-21-2011 |
20110175809 | Tracking Groups Of Users In Motion Capture System - In a motion capture system, a unitary input is provided to an application based on detected movement and/or location of a group of people. Audio information from the group can also be used as an input. The application can provide real-time feedback to the person or group via a display and audio output. The group can control the movement of an avatar in a virtual space based on the movement of each person in the group, such as in a steering or balancing game. To avoid a discontinuous or confusing output by the application, missing data can be generated for a person who is occluded or partially out of the field of view. A wait time can be set for activating a new person and deactivating a currently-active person. The wait time can be adaptive based on a first detected position or a last detected position of the person. | 07-21-2011 |
20110175810 | Recognizing User Intent In Motion Capture System - Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and evaluates the person's intent to engage with the application. Factors such as location, stance, movement and voice data can be evaluated. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another. Voice data can include volume as well as words which are detected by speech recognition. | 07-21-2011 |
20110221755 | BIONIC MOTION - A camera that can sense motion of a user is connected to a computing system (e.g., video game apparatus or other type of computer). The computing system determines an action corresponding to the sensed motion of the user and determines a magnitude of the sensed motion of the user. The computing system creates and displays an animation of an object (e.g., an avatar in a video game) performing the action in a manner that is amplified in comparison to the sensed motion by a factor that is proportional to the determined magnitude. The computing system also creates and outputs audio/visual feedback in proportion to a magnitude of the sensed motion of the user. | 09-15-2011 |
20110223995 | INTERACTING WITH A COMPUTER BASED APPLICATION - A computing system runs an application (e.g., video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system will determine that the group (or the one or more entities not actively engaged with the application) has performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer based application) has performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score, or changing an environmental condition of a video game. | 09-15-2011 |
20110234490 | Predictive Determination - Systems, methods and computer readable media are disclosed for a gesture recognizer system architecture. A recognizer engine is provided, which receives user motion data and provides that data to a plurality of filters. A filter corresponds to a gesture, which may then be tuned by an application receiving information from the gesture recognizer so that the specific parameters of the gesture—such as an arm acceleration for a throwing gesture—may be set on a per-application level, or multiple times within a single application. Each filter may output to an application using it a confidence level that the corresponding gesture occurred, as well as further details about the user motion data. | 09-29-2011 |
20110246329 | MOTION-BASED INTERACTIVE SHOPPING ENVIRONMENT - An on-screen shopping application which reacts to a human target user's motions to provide a shopping experience to the user is provided. A tracking system captures user motions and executes a shopping application allowing a user to manipulate an on-screen representation of the user. The on-screen representation has a likeness of the user or another individual, and movements of the user in the on-screen interface allow the user to interact with virtual articles that represent real-world articles. User movements which are recognized as article manipulation or transaction control gestures are translated into commands for the shopping application. | 10-06-2011 |
20110285620 | GESTURE RECOGNIZER SYSTEM ARCHITECTURE - Systems, methods and computer readable media are disclosed for a gesture recognizer system architecture. A recognizer engine is provided, which receives user motion data and provides that data to a plurality of filters. A filter corresponds to a gesture, which may then be tuned by an application receiving information from the gesture recognizer so that the specific parameters of the gesture—such as an arm acceleration for a throwing gesture—may be set on a per-application level, or multiple times within a single application. Each filter may output to an application using it a confidence level that the corresponding gesture occurred, as well as further details about the user motion data. | 11-24-2011 |
20110285626 | GESTURE RECOGNIZER SYSTEM ARCHITECTURE - Systems, methods and computer readable media are disclosed for a gesture recognizer system architecture. A recognizer engine is provided, which receives user motion data and provides that data to a plurality of filters. A filter corresponds to a gesture, which may then be tuned by an application receiving information from the gesture recognizer so that the specific parameters of the gesture—such as an arm acceleration for a throwing gesture—may be set on a per-application level, or multiple times within a single application. Each filter may output to an application using it a confidence level that the corresponding gesture occurred, as well as further details about the user motion data. | 11-24-2011 |
20110299728 | AUTOMATIC DEPTH CAMERA AIMING - Automatic depth camera aiming is provided by a method which includes receiving from the depth camera one or more observed depth images of a scene. The method further includes, if a point of interest of a target is found within the scene, determining if the point of interest is within a far range relative to the depth camera. The method further includes, if the point of interest of the target is within the far range, operating the depth camera with a far logic, or if the point of interest of the target is not within the far range, operating the depth camera with a near logic. | 12-08-2011 |
20110304632 | INTERACTING WITH USER INTERFACE VIA AVATAR - Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to the display device an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control. | 12-15-2011 |
20110304774 | CONTEXTUAL TAGGING OF RECORDED DATA - Embodiments are disclosed that relate to the automatic tagging of recorded content. For example, one disclosed embodiment provides a computing device comprising a processor and memory having instructions executable by the processor to receive input data comprising one or more of depth data, video data, and directional audio data, identify a content-based input signal in the input data, and apply one or more filters to the input signal to determine whether the input signal comprises a recognized input. Further, if the input signal comprises a recognized input, then the instructions are executable to tag the input data with a contextual tag associated with the recognized input and record the contextual tag with the input data. | 12-15-2011 |
20110316871 | FAST RECONFIGURATION OF GRAPHICS PIPELINE STATE - Techniques and technologies are provided for binding resources to particular slots associated with shaders in a graphics pipeline. Resource dependencies between resources being utilized by respective shaders can be determined, and, based on these resource dependencies, common resource/slot associations can be computed. Respective common resource/slot associations identify a particular one of the resources to be associated with a particular one of the slots. | 12-29-2011 |
20120154618 | MODELING AN OBJECT FROM IMAGE DATA - A method for modeling an object from image data comprises identifying, in an image from a video, a set of reference points on the object, and, for each reference point identified, observing a displacement of that reference point in response to a motion of the object. The method further comprises grouping together those reference points for which a common translational or rotational motion of the object results in the observed displacement, and fitting the grouped-together reference points to a shape. | 06-21-2012 |
20120155705 | FIRST PERSON SHOOTER CONTROL WITH VIRTUAL SKELETON - A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured aiming vector control, and a virtual weapon is aimed in proportion to the gestured aiming vector control. (An aiming-vector sketch appears after this table.) | 06-21-2012 |
20120157198 | DRIVING SIMULATOR CONTROL WITH VIRTUAL SKELETON - Depth-image analysis is performed with a device that analyzes a human target within an observed scene by capturing depth-images that include depth information from the observed scene. The human target is modeled with a virtual skeleton including a plurality of joints. The virtual skeleton is used as an input for controlling a driving simulation. | 06-21-2012 |
20120157200 | INTELLIGENT GAMEPLAY PHOTO CAPTURE - Implementations for identifying, capturing, and presenting high-quality photo-representations of acts occurring during play of a game that employs motion tracking input technology are disclosed. As one example, a method is disclosed that includes capturing, via an optical interface, a plurality of photographs of a player in a capture volume during play of the electronic game. The method further includes, for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter to an event depicted by or corresponding to the captured photograph. The method further includes assigning respective scores to the plurality of captured photographs based, at least in part, on the comparison to the event-based scoring parameter. The method further includes associating the captured photographs on an electronic storage medium with the respective scores assigned to the captured photographs. | 06-21-2012 |
20120157203 | SKELETAL CONTROL OF THREE-DIMENSIONAL VIRTUAL WORLD - A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control. | 06-21-2012 |
20120165096 | INTERACTING WITH A COMPUTER BASED APPLICATION - A computing system runs an application (e.g., video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system will determine that the group (or the one or more entities not actively engaged with the application) has performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer based application) has performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score, or changing an environmental condition of a video game. | 06-28-2012 |
20120293518 | DETERMINE INTENDED MOTIONS - It may be desirable to apply corrective data to aspects of a captured image or a user-performed gesture for display of a visual representation that corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by the capture device, such as a camera. Aspects of a skeletal or mesh model of a person, which is generated based on the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confining virtual movement to defined spaces, or the like. | 11-22-2012 |
20120302350 | COMMUNICATION BETWEEN AVATARS IN DIFFERENT GAMES - Synchronous and asynchronous communications between avatars are allowed. For synchronous communications, when multiple users are playing different games of the same game title and the avatars of the multiple users are at the same location in their respective games, they can communicate with one another, thus allowing the users of those avatars to communicate with one another. For asynchronous communications, an avatar of a particular user is left behind at a particular location in a game along with a recorded communication. When other users of other games are at that particular location, the avatar of that particular user is displayed and the recorded communication is presented to the other users. | 11-29-2012 |
20120302351 | AVATARS OF FRIENDS AS NON-PLAYER-CHARACTERS - In accordance with one or more aspects, for a particular user, one or more other users associated with that particular user are identified based on a social graph of that particular user. An avatar of at least one of the other users is obtained and included as a non-player-character in a game being played by that particular user. The particular user can provide requests to interact with the avatar of the second user (e.g., calling out the name of the second user, tapping the avatar of the second user on the shoulder, etc.), these requests being invitations for the second user to join in a game with the first user. An indication of such an invitation is presented to the second user, who can, for example, accept the invitation to join in a game with the first user. | 11-29-2012 |
20120307010 | OBJECT DIGITIZATION - Digitizing objects in a picture is discussed herein. A user presents an object to a camera, which captures images comprising color and depth data for the front and back of the object. For both front and back images, the closest point to the camera is determined by analyzing the depth data. From the closest points, edges of the object are found by noting large differences in depth data. The depth data is also used to construct point clouds of the front and back of the object. Various techniques are applied to extrapolate edges, remove seams, extend color intelligently, filter noise, apply skeletal structure to the object, and optimize the digitization further. Eventually, a digital representation is presented to the user and potentially used in different applications (e.g., games, Web, etc.). | 12-06-2012 |
20120309534 | AUTOMATED SENSOR DRIVEN MATCH-MAKING - A method of matching a player of a multi-player game with a remote participant includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, using an identity of the observer to find one or more candidates to play as the remote participant of the multi-player game, and when selecting the remote participant, choosing a candidate from the one or more candidates above a non-candidate if the candidate satisfies a matching criterion. | 12-06-2012 |
20120309538 | PHYSICAL CHARACTERISTICS BASED USER IDENTIFICATION FOR MATCHMAKING - One or more physical characteristics of each of multiple users are detected. These physical characteristics of a user can include physical attributes of the user (e.g., the user's height, length of the user's legs) and/or physical skills of the user (e.g., how high the user can jump). Based on these detected physical characteristics, two or more of the multiple users are identified to share an online experience (e.g., play a multi-player game). | 12-06-2012 |
20120311031 | AUTOMATED SENSOR DRIVEN FRIENDING - A method of finding a new social network service friend for a player belonging to a social network service and having a friend group including one or more player-accepted friends includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, and adding the observer to the friend group of the player in the social network service if the observer satisfies a friending criterion of the player. | 12-06-2012 |
20120311032 | EMOTION-BASED USER IDENTIFICATION FOR ONLINE EXPERIENCES - Emotional response data of a particular user, when the particular user is interacting with each of multiple other users, is collected. Using the emotional response data, an emotion of the particular user when interacting with each of multiple other users is determined. Based on the determined emotions, one or more of the multiple other users are identified to share an online experience with the particular user. | 12-06-2012 |
20120326976 | Directed Performance In Motion Capture System - Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or by visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person. | 12-27-2012 |
20130002813 | VIEWING WINDOWS FOR VIDEO STREAMS - Techniques are provided for viewing windows for video streams. A video stream from a video capture device is accessed. Data that describes movement or position of a person is accessed. A viewing window is placed in the video stream based on the data that describes movement or position of the person. The viewing window is provided to a display device in accordance with the placement of the viewing window in the video stream. Motion sensors can detect motion of the person carrying the video capture device in order to dampen the motion such that the video on the remote display does not suffer from motion artifacts. Sensors can also track the eye gaze of either the person carrying the mobile video capture device or the person at the remote display device, to enable control of the spatial region of the video stream shown at the display device. | 01-03-2013 |
20130007013 | MATCHING USERS OVER A NETWORK - Various embodiments are disclosed that relate to negatively matching users over a network. For example, one disclosed embodiment provides a method including storing a plurality of user profiles corresponding to a plurality of users, each user profile in the plurality of user profiles including one or more user attributes, and receiving a request from a user for a list of one or more suggested negatively matched other users. In response to the request, the method further includes ranking each of a plurality of other users based on a magnitude of a difference between one or more user attributes of the user and corresponding one or more user attributes of the other user, and sending a list of one or more negatively matched users to the exclusion of more positively matched users based on the ranking. (A negative-matching ranking sketch appears after this table.) | 01-03-2013 |
20130013093 | PHYSICAL CHARACTERISTICS BASED USER IDENTIFICATION FOR MATCHMAKING - One or more physical characteristics of each of multiple users are detected. These physical characteristics of a user can include physical attributes of the user (e.g., the user's height, length of the user's legs) and/or physical skills of the user (e.g., how high the user can jump). Based on these detected physical characteristics, two or more of the multiple users are identified to share an online experience (e.g., play a multi-player game). | 01-10-2013 |
20130022235 | INTERACTIVE SECRET SHARING - Interactive secret sharing includes receiving video data from a source and interpreting the video data to track an observed path of a device. In addition, position information is received from the device, and the position information is interpreted to track a self-reported path of the device. If the observed path is within a threshold tolerance of the self-reported path, access is provided to a restricted resource. | 01-24-2013 |
20130044130 | PROVIDING CONTEXTUAL PERSONAL INFORMATION BY A MIXED REALITY DEVICE - The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. An identifier and a position indicator of the person in the location are output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location. | 02-21-2013 |
20130074002 | Recognizing User Intent In Motion Capture System - Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and determines a probabilistic measure of the person's intent to engage or disengage with the application based on location, stance and movement. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another. | 03-21-2013 |
20130177296 | GENERATING METADATA FOR USER EXPERIENCES - A system and method for efficiently managing life experiences captured by one or more sensors (e.g., video or still camera, image sensors including RGB sensors and depth sensors). A “life recorder” is a recording device that continuously captures life experiences, including unanticipated life experiences, in image, video, and/or audio recordings. In some embodiments, video and/or audio recordings captured by a life recorder are automatically analyzed, tagged with a set of one or more metadata, indexed, and stored for future use. By tagging and indexing life recordings, a life recorder may search for and acquire life recordings generated by itself or another life recorder, thereby allowing life experiences to be shared minutes or even years later. | 07-11-2013 |
20140168075 | Method to Control Perspective for a Camera-Controlled Computer - Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control. | 06-19-2014 |
20140267311 | INTERACTING WITH USER INTERFACE VIA AVATAR - Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to the display device an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control. | 09-18-2014 |
20140375683 | INDICATING OUT-OF-VIEW AUGMENTED REALITY IMAGES - Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object. | 12-25-2014 |
20140380254 | GESTURE TOOL - Systems, methods and computer readable media are disclosed for a gesture tool. A capture device captures user movement and provides corresponding data to a gesture recognizer engine and an application. The data is then parsed to determine whether it satisfies one or more gesture filters, each filter corresponding to a user-performed gesture. The data and the information about the filters are also sent to a gesture tool, which displays aspects of the data and filters. In response to user input corresponding to a change in a filter, the gesture tool sends an indication of such to the gesture recognizer engine and application, where that change occurs. | 12-25-2014 |
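
Several of the abstracts above describe pipelines concretely enough to sketch in code. The first sketch follows 20100199229 (MAPPING A NATURAL INPUT DEVICE TO A LEGACY SYSTEM): an algorithmic preprocessing step identifies the natural user input, a gesture module associates it with a gesture in a gesture library, and a mapping module translates that gesture into a legacy controller input. This is a minimal illustration under assumed names; the dictionaries, functions, gesture labels, and button labels are all hypothetical, as the application publishes no API.

```python
# A minimal sketch, not the patented implementation: natural input -> gesture
# -> legacy controller input, per the abstract of application 20100199229.
GESTURE_LIBRARY = {"arm_raise": "jump_gesture",       # natural input -> gesture
                   "lean_left": "steer_left_gesture"}
LEGACY_MAPPING = {"jump_gesture": "BUTTON_A",         # gesture -> legacy input
                  "steer_left_gesture": "DPAD_LEFT"}


def preprocess(input_data: dict) -> str | None:
    """Algorithmic preprocessing module: identify the natural user input."""
    return input_data.get("natural_input")


def to_legacy_input(input_data: dict) -> str | None:
    """Gesture module plus mapping module: produce a legacy controller input."""
    natural_input = preprocess(input_data)
    gesture = GESTURE_LIBRARY.get(natural_input)
    return LEGACY_MAPPING.get(gesture)


# The legacy system receives a familiar controller event for a bodily gesture.
print(to_legacy_input({"natural_input": "arm_raise"}))  # BUTTON_A
```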
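
The recognizer-engine architecture shared by 20100199230, 20100199231, 20100266210, 20110234490, 20110285620, and 20110285626 (an engine that fans user motion data out to per-gesture filters, each returning a confidence level and tunable by the application) might be organized roughly as follows. The class names, the frame dictionary, and the acceleration threshold are illustrative assumptions, not details from the filings.

```python
# A minimal sketch, assuming a dict-per-frame motion format: a recognizer
# engine provides motion data to a plurality of per-gesture filters.
from dataclasses import dataclass, field


@dataclass
class FilterOutput:
    gesture: str
    confidence: float    # likelihood that the gesture occurred, 0..1
    details: dict = field(default_factory=dict)  # e.g. measured arm acceleration


class GestureFilter:
    """One filter per gesture; parameters are tunable per application."""

    def __init__(self, gesture: str, **params):
        self.gesture = gesture
        self.params = params  # e.g. {"min_arm_acceleration": 3.0}

    def evaluate(self, frame: dict) -> FilterOutput:
        raise NotImplementedError


class ThrowFilter(GestureFilter):
    """Scores a throwing gesture by arm acceleration against a tuned threshold."""

    def evaluate(self, frame: dict) -> FilterOutput:
        accel = frame.get("arm_acceleration", 0.0)
        threshold = self.params.get("min_arm_acceleration", 3.0)
        confidence = min(1.0, accel / threshold)
        return FilterOutput(self.gesture, confidence, {"arm_acceleration": accel})


class RecognizerEngine:
    """Receives user motion data and provides it to a plurality of filters."""

    def __init__(self, filters: list):
        self.filters = filters

    def process(self, frame: dict) -> list:
        return [f.evaluate(frame) for f in self.filters]


# An application tunes the filter for its own needs and polls for confident hits.
engine = RecognizerEngine([ThrowFilter("throw", min_arm_acceleration=3.0)])
for out in engine.process({"arm_acceleration": 4.2}):
    if out.confidence > 0.8:
        print(f"{out.gesture}: confidence {out.confidence:.2f}, {out.details}")
```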
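
For 20110109617 (Visualizing Depth), one plausible reading of the vertex generation, line drawing, and depth-based extrusion steps is sketched below. The grid sampling interval and the (x, y, z) vertex encoding are assumptions; the abstract does not specify data layouts.

```python
# A minimal sketch under assumed layouts: sample vertices over a depth image,
# connect neighbours with lines to form a mesh, extrude each vertex by depth.
def extruded_mesh(depth_image: list, step: int = 4):
    """Return (vertices, edges); each vertex is (x, y, z) with z from depth."""
    rows, cols = len(depth_image), len(depth_image[0])
    grid = {}                       # (row, col) -> index into vertices
    vertices = []
    for r in range(0, rows, step):
        for c in range(0, cols, step):
            grid[(r, c)] = len(vertices)
            vertices.append((c, r, depth_image[r][c]))   # extrude by depth value
    edges = []
    for (r, c), i in grid.items():
        for nr, nc in ((r + step, c), (r, c + step)):    # right/down neighbours
            if (nr, nc) in grid:
                edges.append((i, grid[(nr, nc)]))        # line between vertices
    return vertices, edges


depth = [[1.0 + 0.1 * (r + c) for c in range(8)] for r in range(8)]
vertices, edges = extruded_mesh(depth, step=4)
print(len(vertices), "vertices,", len(edges), "edges")   # 4 vertices, 4 edges
```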
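
For 20120155705 and 20120157203, translating the relative position of a hand joint into a gestured aiming (or control) vector can be illustrated as a normalized offset from a reference joint. The joint names and the shoulder-as-origin convention are assumptions made for this sketch.

```python
# A minimal sketch: unit aiming vector from a reference joint to the hand
# joint of a virtual skeleton, per the aiming-vector abstracts above.
import math


def aiming_vector(joints: dict, hand: str = "right_hand",
                  origin: str = "right_shoulder") -> tuple:
    """Return the unit vector pointing from the origin joint to the hand joint."""
    hx, hy, hz = joints[hand]
    ox, oy, oz = joints[origin]
    dx, dy, dz = hx - ox, hy - oy, hz - oz
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    if norm == 0.0:
        return (0.0, 0.0, 1.0)      # degenerate pose: aim straight ahead
    return (dx / norm, dy / norm, dz / norm)


# Example: hand half a metre in front of and slightly above the shoulder.
skeleton = {"right_shoulder": (0.0, 1.4, 0.0), "right_hand": (0.1, 1.5, 0.5)}
print(aiming_vector(skeleton))      # direction the virtual weapon is aimed
```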
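
Finally, the negative matching of 20130007013 ranks other users by the magnitude of the difference between corresponding user attributes and suggests the most dissimilar ones. The numeric attribute encoding and Euclidean distance below are assumptions for illustration; the filing does not specify a metric.

```python
# A minimal sketch: rank candidate users by attribute dissimilarity and return
# the most negatively matched, per the abstract of application 20130007013.
import math


def difference(a: dict, b: dict) -> float:
    """Euclidean distance over the attributes the two profiles share."""
    shared = a.keys() & b.keys()
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in shared))


def negative_matches(user: dict, others: dict, count: int = 3) -> list:
    """Most dissimilar users first, to the exclusion of positive matches."""
    ranked = sorted(others, key=lambda name: difference(user, others[name]),
                    reverse=True)
    return ranked[:count]


profiles = {
    "alice": {"competitiveness": 0.9, "play_hours": 20.0},
    "bob":   {"competitiveness": 0.2, "play_hours": 2.0},
    "carol": {"competitiveness": 0.8, "play_hours": 18.0},
}
me = {"competitiveness": 0.85, "play_hours": 19.0}
print(negative_matches(me, profiles, count=2))   # most dissimilar first
```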