Patent application number | Description | Published |
20110083108 | PROVIDING USER INTERFACE FEEDBACK REGARDING CURSOR POSITION ON A DISPLAY SCREEN - Disclosed herein are systems and methods for providing user interface feedback regarding a cursor position on a display screen. A user may use a suitable input device for controlling a cursor in a computing environment. The displayed objects may provide feedback regarding the cursor's position. Particularly, a position of the cursor may be compared to an object's position for determining whether the cursor is positioned on the display screen at the same position as a portion of the object or within a predetermined distance of the portion of the object. In response to determining the cursor is positioned on the display screen at the same position as the portion of the object or within the predetermined distance of the portion of the object, an appearance of the portion of the object may be altered, such as, for example, brightness or color of the object portion. | 04-07-2011 |
20110197161 | HANDLES INTERACTIONS FOR HUMAN-COMPUTER INTERFACE - A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as, for example, scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle. | 08-11-2011 |
20110279368 | INFERRING USER INTENT TO ENGAGE A MOTION CAPTURE SYSTEM - Techniques are provided for inferring a user's intent to interact with an application run by a motion capture system. Deliberate user gestures to interact with the motion capture system are disambiguated from unrelated user motions within the system's field of view. An algorithm may be used to determine the user's aggregated level of intent to engage the system. Parameters in the algorithm may include posture and motion of the user's body, as well as the state of the system. The system may develop a skeletal model to determine the various parameters. If the system determines that the parameters strongly indicate an intent to engage the system, then the system may react quickly. However, if the parameters only weakly indicate an intent to engage the system, it may take longer for the user to engage the system. | 11-17-2011 |
20110289455 | Gestures And Gesture Recognition For Manipulating A User-Interface - Symbolic gestures and associated recognition technology are provided for controlling a system user-interface, such as that provided by the operating system of a general computing system or multimedia console. The symbolic gesture movements in mid-air are performed by a user with or without the aid of an input device. A capture device is provided to generate depth images for three-dimensional representation of a capture area including a human target. The human target is tracked using skeletal mapping to capture the mid-air motion of the user. The skeletal mapping data is used to identify movements corresponding to pre-defined gestures using gesture filters that set forth parameters for determining when a target's movement indicates a viable gesture. When a gesture is detected, one or more pre-defined user-interface control actions are performed. | 11-24-2011 |
20110289456 | Gestures And Gesture Modifiers For Manipulating A User-Interface - Gesture modifiers are provided for modifying and enhancing the control of a user-interface such as that provided by an operating system or application of a general computing system or multimedia console. Symbolic gesture movements are performed by a user in mid-air. A capture device generates depth images and a three-dimensional representation of a capture area including a human target. The human target is tracked using skeletal mapping to capture the mid-air motion of the user. Skeletal mapping data is used to identify movements corresponding to pre-defined gestures using gesture filters. Detection of a viable gesture can trigger one or more user-interface actions or controls. Gesture modifiers are provided to modify the user-interface action triggered by detection of a gesture and/or to aid in the identification of gestures. | 11-24-2011 |
20130311944 | HANDLES INTERACTIONS FOR HUMAN-COMPUTER INTERFACE - A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as, for example, scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle. | 11-21-2013 |
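The proximity test described in application 20110083108 above, where comparing the cursor position against an object portion drives a change in that portion's appearance, can be sketched roughly as follows. This is a minimal illustration only; the class name, field names, and the pixel threshold are invented for the sketch and do not come from the filing.

```python
import math

HIGHLIGHT_DISTANCE = 20.0  # hypothetical "predetermined distance" in pixels


class ObjectPortion:
    """A portion of a displayed object whose appearance can be altered."""

    def __init__(self, x, y):
        self.x, self.y = x, y
        self.highlighted = False  # stands in for a brightness/color change


def update_feedback(cursor_x, cursor_y, portion):
    """Alter the portion's appearance when the cursor sits on it or
    within the predetermined distance of it."""
    distance = math.hypot(cursor_x - portion.x, cursor_y - portion.y)
    portion.highlighted = distance <= HIGHLIGHT_DISTANCE
    return portion.highlighted
```

Calling `update_feedback` on every cursor-move event would give the user continuous visual feedback about which object portion the cursor is near.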
Patent application number | Description | Published |
20100107219 | AUTHENTICATION - CIRCLES OF TRUST - Within a surface computing environment, users are provided a seamless and intuitive manner of modifying security levels associated with information. If a modification is to be made, the user can perceive the modifications and the result of such modifications, such as on a display. When information is rendered within the surface computing environment and a condition changes, the user can quickly have that information concealed in order to mitigate unauthorized access to the information. | 04-29-2010 |
20110184735 | SPEECH RECOGNITION ANALYSIS VIA IDENTIFICATION INFORMATION - Embodiments are disclosed that relate to the use of identity information to help avoid the occurrence of false positive speech recognition events in a speech recognition system. One embodiment provides a method comprising receiving speech recognition data comprising a recognized speech segment, acoustic locational data related to a location of origin of the recognized speech segment as determined via signals from the microphone array, and confidence data comprising a recognition confidence value, and also receiving image data comprising visual locational information related to a location of each person in an image. The acoustic locational data is compared to the visual locational data to determine whether the recognized speech segment originated from a person in the field of view of the image sensor, and the confidence data is adjusted depending on this determination. | 07-28-2011 |
20110193939 | PHYSICAL INTERACTION ZONE FOR GESTURE-BASED USER INTERFACES - In a motion capture system having a depth camera, a physical interaction zone of a user is defined based on a size of the user and other factors. The zone is a volume in which the user performs hand gestures to provide inputs to an application. The shape and location of the zone can be customized for the user. The zone is anchored to the user so that the gestures can be performed from any location in the field of view. Also, the zone is kept between the user and the depth camera even as the user rotates his or her body so that the user is not facing the camera. A display provides feedback based on a mapping from a coordinate system of the zone to a coordinate system of the display. The user can move a cursor on the display or control an avatar. | 08-11-2011 |
20110313768 | COMPOUND GESTURE-SPEECH COMMANDS - A multimedia entertainment system combines both gestures and voice commands to provide an enhanced control scheme. A user's body position or motion may be recognized as a gesture, and may be used to provide context to recognize user generated sounds, such as speech input. Likewise, speech input may be recognized as a voice command, and may be used to provide context to recognize a body position or motion as a gesture. Weights may be assigned to the inputs to facilitate processing. When a gesture is recognized, a limited set of voice commands associated with the recognized gesture are loaded for use. Further, additional sets of voice commands may be structured in a hierarchical manner such that speaking a voice command from one set of voice commands leads to the system loading a next set of voice commands. | 12-22-2011 |
20120089392 | SPEECH RECOGNITION USER INTERFACE - Speech recognition techniques are disclosed herein. In one embodiment, a novice mode is available such that when a user is unfamiliar with the speech recognition system, a voice user interface (VUI) may be provided to guide that user. The VUI may display one or more speech commands that are presently available. The VUI may also provide feedback to train the user. After the user becomes more familiar with speech recognition, the user may enter speech commands without the aid of the novice mode. In this “experienced mode,” the VUI need not be displayed. Therefore, the user interface is not cluttered. | 04-12-2012 |
20130027296 | COMPOUND GESTURE-SPEECH COMMANDS - A multimedia entertainment system combines both gestures and voice commands to provide an enhanced control scheme. A user's body position or motion may be recognized as a gesture, and may be used to provide context to recognize user generated sounds, such as speech input. Likewise, speech input may be recognized as a voice command, and may be used to provide context to recognize a body position or motion as a gesture. Weights may be assigned to the inputs to facilitate processing. When a gesture is recognized, a limited set of voice commands associated with the recognized gesture are loaded for use. Further, additional sets of voice commands may be structured in a hierarchical manner such that speaking a voice command from one set of voice commands leads to the system loading a next set of voice commands. | 01-31-2013 |
20130138424 | Context-Aware Interaction System Using a Semantic Model - The subject disclosure is directed towards detecting symbolic activity within a given environment using a context-dependent grammar. In response to receiving sets of input data corresponding to one or more input modalities, a context-aware interactive system processes a model associated with interpreting the symbolic activity using context data for the given environment. Based on the model, related sets of input data are determined. The context-aware interactive system uses the input data to interpret user intent with respect to the input and thereby, identify one or more commands for a target output mechanism. | 05-30-2013 |
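The hierarchical command loading described in applications 20110313768 and 20130027296 above, in which a recognized gesture loads a limited set of voice commands and speaking one of those commands can load a next set, might be sketched as below. All gesture names, command phrases, and class names here are invented for illustration and are not taken from the filings.

```python
# Hypothetical mapping from a recognized gesture to the limited set of
# voice commands made available by that gesture.
GESTURE_COMMANDS = {
    "point_at_screen": ["open", "zoom"],
}

# Hypothetical hierarchy: speaking some commands loads a deeper set.
COMMAND_SUBSETS = {
    "zoom": ["zoom in", "zoom out", "reset"],
}


class CommandContext:
    """Tracks which voice commands are currently loaded for recognition."""

    def __init__(self):
        self.active = []

    def on_gesture(self, gesture):
        # A recognized gesture loads only its limited command set.
        self.active = GESTURE_COMMANDS.get(gesture, [])

    def on_speech(self, phrase):
        # Accept the phrase only if the current context allows it,
        # then descend to the next command set if one is defined.
        if phrase not in self.active:
            return False
        self.active = COMMAND_SUBSETS.get(phrase, self.active)
        return True
```

Restricting the recognizer to the small `active` set at each step is one way such a system could cut down on false positive speech recognition events, since only context-relevant phrases are ever candidates.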
Patent application number | Description | Published |
20080320410 | VIRTUAL KEYBOARD TEXT REPLICATION - The present invention extends to methods, systems, and computer program products for replicating text at a virtual keyboard. Characters submitted to, displayed at, or accumulated for submission to an application data field are echoed at a keyboard data field that is in relatively close proximity to virtual keys used to enter the characters. Thus, a user does not have to alter their field of view to the application data field to determine what was submitted to the application data field, what was entered at the application data field, or what is to be submitted to the application data field. Accordingly, embodiments of the invention permit a user to much more easily see what they typed using a virtual keyboard. The need to alter a visual field of focus between an application data field and a virtual keyboard is significantly reduced, if not eliminated. | 12-25-2008 |
20120030609 | VIRTUAL KEYBOARD TEXT REPLICATION - Text that is selected at a virtual keyboard is submitted to and displayed at an application data field and is echoed at a keyboard data field that is in relatively close proximity to virtual keys used to select the text. Thus, a user does not have to alter their field of view to the application data field to determine what was submitted to the application data field. | 02-02-2012 |
20130080965 | VIRTUAL KEYBOARD TEXT REPLICATION - Text that is selected at a virtual keyboard is submitted to and displayed at an application data field and is echoed at another data field, such as a keyboard data field, that can be in closer proximity to the virtual keys used to select the text. Thus, a user does not have to alter their field of view to the application data field to determine what was submitted to the application data field. | 03-28-2013 |
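The echoing behavior in the three text-replication filings above, where each character submitted to the application data field is replicated in a field adjacent to the virtual keys, reduces to something like the following sketch. The class and field names are illustrative only.

```python
class VirtualKeyboard:
    """Echoes each accepted character both to the application data field
    and to a keyboard data field near the virtual keys, so the user's
    gaze can stay on the keyboard while typing."""

    def __init__(self):
        self.application_field = ""  # where the text is actually submitted
        self.keyboard_field = ""     # replica displayed next to the keys

    def press(self, char):
        self.application_field += char
        # Replicate the accumulated text at the keyboard-adjacent field.
        self.keyboard_field = self.application_field


kb = VirtualKeyboard()
for c in "hello":
    kb.press(c)
# both fields now hold the same text
```

Because both fields always hold identical text, the user can verify what was entered without shifting focus away from the keys, which is the stated aim of the filings.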