Patent application number | Description | Published |
20100177049 | VISUAL RESPONSE TO TOUCH INPUTS - The provision of visual responses to touch inputs is disclosed. For example, one disclosed embodiment provides a computing device comprising a touch-sensitive display, a processor in operative communication with the touch-sensitive display, and memory comprising instructions stored thereon that are executable by the processor to detect a touch input made via the touch-sensitive display, display on the touch-sensitive display a first visual response to the touch input indicating that the touch input was detected by the computing device, and if the touch input is made in a touch-interactive area on the touch-sensitive display, then to display a second visual response to the touch input indicating that the touch was made in the touch-interactive area of the display. | 07-15-2010 |
20140098067 | ALWAYS-AVAILABLE INPUT THROUGH FINGER INSTRUMENTATION - A finger device initiates actions on a computer system when placed in contact with a surface. The finger device includes instrumentation that captures images and gestures. When in contact with a surface, the finger device captures images of the surface and gestures made on the surface. The finger device also transmits the images and gesture data to the computer system. An application on the computer system matches the images received from the finger device to a representation of the surface, identifies an action associated with the surface representation and gesture, and executes the action. Instrumenting the finger instead of the surface allows a user to configure virtually any surface to accept touch input. | 04-10-2014 |
20150077355 | REDUCING CONTROL RESPONSE LATENCY WITH DEFINED CROSS-CONTROL BEHAVIOR - A system for processing user input with reduced control response latency includes an input device, an input processing unit, a high-latency subsystem, a low-latency subsystem, input processing unit software for generating signals, and an output device. The low-latency subsystem receives the signals and generates low-latency output and the high-latency subsystem processes the signals and generates high-latency output. In an embodiment, the signals comprise an identification of a defined cross-control behavior. | 03-19-2015 |
20110115745 | INTERACTIVE DISPLAY SYSTEM WITH CONTACT GEOMETRY INTERFACE - An interactive display system with a contact geometry interface is disclosed. The interactive display system may include a multi-touch display, a touch detection system configured to detect a touch input on the multi-touch display and to generate contact geometry for a contact region of the touch input, and an application programming interface executed on a processor of the interactive display system. The application programming interface may be configured to receive the contact geometry and to send the contact geometry to a requesting application program for application-level processing. Further, the application programming interface may be configured to receive from the application program a display command based on the application-level processing. The application programming interface may be configured to send the display command to the multi-touch display to adjust a display of a graphical element on the multi-touch display. | 05-19-2011 |
20110260962 | INTERFACING WITH A COMPUTING APPLICATION USING A MULTI-DIGIT SENSOR - A technology is described for interfacing with a computing application using a multi-digit sensor. A method may include obtaining an initial stroke using a single digit of a user on the multi-digit sensor. A direction change point for the initial stroke can be identified. At the direction change point for the initial stroke, a number of additional digits can be presented by the user to the multi-digit sensor. Then a completion stroke can be identified as being made with the number of additional digits. A user interface signal can be sent to the computing application based on the number of additional digits used in the completion touch stroke. In another configuration of the technology, the touch stroke or gesture may include a single stroke where user interface items can be selected when additional digits are presented at the end of a gesture. | 10-27-2011 |
20110300516 | Tactile Tile Vocalization - Braille symbols are automatically read aloud, to aid in learning or using Braille. More generally, a tile which bears a tactile symbol and a corresponding visual symbol is placed in a sensing area, automatically distinguished from other tiles, and vocalized. The tile is sensed and distinguished from other tiles based on various signal mechanisms, or by computer vision analysis of the tile's visual symbol. Metadata is associated with the tile. Additional placed tiles are similarly sensed, identified, and vocalized. When multiple tiles are placed in the sensing area, they are vocalized individually, and an audible phrase spelled by their arrangement of tactile symbols is also produced. A lattice is provided with locations for receiving tiles. Metadata are associated with lattice locations. Tile placement is used to control an application program which responds to tile identifications. | 12-08-2011 |
20140337806 | INTERFACING WITH A COMPUTING APPLICATION USING A MULTI-DIGIT SENSOR - A technology is described for interfacing with a computing application using a multi-digit sensor. A method may include obtaining an initial stroke using a single digit of a user on the multi-digit sensor. A direction change point for the initial stroke can be identified. At the direction change point for the initial stroke, a number of additional digits can be presented by the user to the multi-digit sensor. Then a completion stroke can be identified as being made with the number of additional digits. A user interface signal can be sent to the computing application based on the number of additional digits used in the completion touch stroke. In another configuration of the technology, the touch stroke or gesture may include a single stroke where user interface items can be selected when additional digits are presented at the end of a gesture. | 11-13-2014 |
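The multi-digit stroke abstracts above describe a recognizable sequence: a single-digit initial stroke, a direction change point, and a completion stroke whose digit count selects the resulting interface signal. The following is a minimal sketch of that idea only; the function names, the x-axis reversal heuristic, and the signal format are illustrative assumptions, not the patents' actual method.

```python
# Sketch: single-digit initial stroke, a direction change point, then a
# completion stroke made with extra digits whose count selects the signal.

def direction_change_index(points):
    """Return the index where the stroke reverses direction on the x-axis,
    or None if it never does. `points` is a list of (x, y) samples."""
    for i in range(1, len(points) - 1):
        before = points[i][0] - points[i - 1][0]
        after = points[i + 1][0] - points[i][0]
        if before * after < 0:  # sign flip = direction reversal
            return i
    return None

def interface_signal(points, digit_counts):
    """Map a stroke plus the per-sample digit count to a user interface
    signal chosen by how many digits finish the stroke."""
    i = direction_change_index(points)
    if i is None:
        return None  # no direction change: not this gesture
    completion_digits = max(digit_counts[i:])
    return f"select-item-{completion_digits}"

stroke = [(0, 0), (1, 0), (2, 0), (1, 1), (0, 2)]   # right, then back to the left
digits = [1, 1, 1, 3, 3]                            # two digits added at the turn
print(interface_signal(stroke, digits))             # prints select-item-3
```

A production recognizer would of course use angle thresholds and smoothing rather than a raw sign flip, but the structure (find the turn, count digits after it, emit a signal) follows the abstracts directly.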
20130093708 | PROXIMITY-AWARE MULTI-TOUCH TABLETOP - A proximity-aware multi-touch tabletop is disclosed that includes both a touch screen display and proximity sensors. The proximity sensors are disposed in one or more annular groups around the touch screen display and are positioned in upward- and outward-facing directions. The proximity sensors allow the multi-touch tabletop to sense the distance of a body, arm, hand, or fingers of a user from the multi-touch tabletop. Thus, hand, arm, and finger positions of a user can be determined relative to the body position of the user, which enables the multi-touch tabletop to differentiate between left hand/arm gestures and right hand/arm gestures. Further, because the multi-touch tabletop can correlate left arm and right arm movements to a user body, the multi-touch tabletop can differentiate gestures originating from different users. The ability of the multi-touch tabletop to distinguish between users greatly enhances user experiences, particularly in a multi-user environment. | 04-18-2013 |
20130100057 | PROXIMITY-AWARE MULTI-TOUCH TABLETOP - A proximity-aware multi-touch tabletop is disclosed that includes both a touch screen display and proximity sensors. The proximity sensors are disposed in one or more annular groups around the touch screen display and are positioned in upward- and outward-facing directions. The proximity sensors allow the multi-touch tabletop to sense the distance of a body, arm, hand, or fingers of a user from the multi-touch tabletop. Thus, hand, arm, and finger positions of a user can be determined relative to the body position of the user, which enables the multi-touch tabletop to differentiate between left hand/arm gestures and right hand/arm gestures. Further, because the multi-touch tabletop can correlate left arm and right arm movements to a user body, the multi-touch tabletop can differentiate gestures originating from different users. The ability of the multi-touch tabletop to distinguish between users greatly enhances user experiences, particularly in a multi-user environment. | 04-25-2013 |
20140139456 | HYBRID SYSTEMS AND METHODS FOR LOW-LATENCY USER INPUT PROCESSING AND FEEDBACK - A system for processing user input includes an input device, an input processing unit, a high-latency subsystem, a low-latency subsystem, input processing unit software for generating signals in response to user inputs, and an output device. The low-latency subsystem receives the signals and generates low-latency output and the high-latency subsystem processes the signals and generates high-latency output. | 05-22-2014 |
20140143692 | HYBRID SYSTEMS AND METHODS FOR LOW-LATENCY USER INPUT PROCESSING AND FEEDBACK - A system for processing user input includes an input device, an input processing unit, a high-latency subsystem, a low-latency subsystem, input processing unit software for generating signals in response to user inputs, and an output device. The low-latency subsystem receives the signals and generates low-latency output and the high-latency subsystem processes the signals and generates high-latency output. | 05-22-2014 |
20140267140 | LOW-LATENCY TOUCH SENSITIVE DEVICE - Disclosed are a sensor and method that provide detection of touch events from human fingers on a two-dimensional manifold with the capability for multiple simultaneous touch events to be detected and distinguished from each other. In accordance with an embodiment, the touch events are detected, processed and supplied to downstream computational processes with very low latency, i.e. on the order of one millisecond or less. Disclosed is a projected capacitive method that has been enhanced for high update rate and low latency measurements of touch events. The technique can use parallel hardware and higher frequency waveforms to gain the above advantages. Also disclosed are methods to make the measurements sensitive and robust, allow the technique to be used on transparent display surfaces and permit economical manufacturing of products which employ the technique. | 09-18-2014 |
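Several abstracts in this group (20150077355, 20140139456, 20140143692) describe the same architecture: input signals fan out to a low-latency subsystem that produces immediate output and a high-latency subsystem that performs fuller processing. Below is a minimal sketch of that fan-out, assuming a synchronous toy model; the class and method names, and the "raw ink"/"smoothed ink" outputs, are illustrative assumptions rather than the claimed implementation.

```python
# Sketch: one input event feeds both a fast, minimal-processing path and a
# queued, slower path, as in the hybrid low/high-latency abstracts above.
import queue

class HybridInputPipeline:
    def __init__(self):
        self.high_latency_queue = queue.Queue()
        self.low_latency_output = []   # e.g. immediate ink under the finger
        self.high_latency_output = []  # e.g. smoothed, application-level result

    def on_input(self, event):
        # Low-latency subsystem: respond immediately with minimal processing.
        self.low_latency_output.append(("raw-ink", event))
        # High-latency subsystem: hand the same signal off for full processing.
        self.high_latency_queue.put(event)

    def drain_high_latency(self):
        # Stand-in for the slower subsystem's processing loop, which in a real
        # system would run asynchronously and lag behind the low-latency path.
        while not self.high_latency_queue.empty():
            event = self.high_latency_queue.get()
            self.high_latency_output.append(("smoothed-ink", event))

pipe = HybridInputPipeline()
for ev in [(10, 10), (11, 12), (13, 15)]:
    pipe.on_input(ev)
pipe.drain_high_latency()
print(len(pipe.low_latency_output), len(pipe.high_latency_output))  # prints 3 3
```

The point of the split is that the user sees the low-latency output (on the order of a millisecond, per 20140267140) while the high-latency output catches up and replaces it.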
20110041096 | MANIPULATION OF GRAPHICAL ELEMENTS VIA GESTURES - A method of operating a graphical user interface of a computing device is disclosed. The method comprises displaying a graphical user interface (GUI) element on the touch-sensitive display screen. The method further comprises, in response to receiving touch input data indicative of a one-touch gesture, mapping the one-touch gesture to a corresponding GUI element function. The method further comprises, in response to receiving touch input data indicative of a multi-touch gesture, mapping the multi-touch gesture to the corresponding GUI element function. The method further comprises transforming display of the GUI element on the touch-sensitive display screen based on the corresponding GUI element function. | 02-17-2011 |
20110093821 | DISPLAYING GUI ELEMENTS ON NATURAL USER INTERFACES - A computing system for displaying a GUI element on a natural user interface is described herein. The computing system includes a display configured to display a natural user interface of a program executed on the computing system, and a gesture sensor configured to detect a gesture input directed at the natural user interface by a user. The computing system also includes a processor configured to execute a gesture-recognizing module for recognizing a registration phase, an operation phase, and a termination phase of the gesture input, and a gesture assist module configured to first display a GUI element overlaid upon the natural user interface in response to recognition of the registration phase. The GUI element includes a visual or audio operation cue to prompt the user to carry out the operation phase of the gesture input, and a selector manipulatable by the user via the operation phase of the gesture. | 04-21-2011 |
20110117526 | TEACHING GESTURE INITIATION WITH REGISTRATION POSTURE GUIDES - A method for providing multi-touch input initiation training on a display surface is disclosed. A set of one or more registration hand postures is determined, where each registration hand posture corresponds to one or more gestures executable from that registration hand posture. A registration posture guide is displayed on the display surface. The registration posture guide includes a catalogue for each registration hand posture, where the catalogue includes a contact silhouette showing a model touch-contact interface between the display surface and that registration hand posture. | 05-19-2011 |
20110117535 | TEACHING GESTURES WITH OFFSET CONTACT SILHOUETTES - A method for providing multi-touch input training on a display surface is disclosed. A touch input is detected at one or more regions of the display surface. A visualization of the touch input is displayed at a location of the display surface offset from the touch input. One or more annotations are displayed at a location of the display surface offset from the touch input and proximate to the visualization, where each annotation shows a different legal continuation of the touch input. | 05-19-2011 |
20110119216 | NATURAL INPUT TRAINER FOR GESTURAL INSTRUCTION - A computing device that detects precursory user-input preactions executed in an instructive region and user-input action gestures executed in a functionally-active region is provided. The computing device includes a natural input trainer to present a predictive input cue on a display in response to detecting a precursory user-input preaction performed in the instructive region. The computing device also includes an interface engine to execute a computing function in response to detecting a successive user-input action gesture performed in the functionally-active region subsequent to detection of the precursory user-input preaction. | 05-19-2011 |
20110134047 | MULTI-MODAL INTERACTION ON MULTI-TOUCH DISPLAY - Embodiments are disclosed herein that relate to multi-modal interaction on a computing device comprising a multi-touch display. One disclosed embodiment comprises a method of multi-modal interaction including recognizing a hand posture of a user's first hand directed at the display and displaying a modal region based on the hand posture, wherein the modal region defines an area on the display. The method further includes receiving an input selecting a mode to be applied to the modal region, wherein the mode indicates functionalities to be associated with the modal region and defines a mapping of touch gestures to actions associated with the mode. The method further includes, while the modal region remains displayed, recognizing a touch gesture from a user's second hand directed at the display within the modal region and performing an action on the display based upon a mapping of the touch gesture. | 06-09-2011 |
20110157025 | HAND POSTURE MODE CONSTRAINTS ON TOUCH INPUT - A method of controlling a virtual object within a virtual workspace includes recognizing a hand posture of an initial touch gesture directed to a touch-input receptor, and a mode constraint is set based on the hand posture. The mode constraint specifies a constrained parameter of a virtual object that is to be maintained responsive to a subsequent touch gesture. The method further includes recognizing a subsequent touch gesture directed to the touch-input receptor. An unconstrained parameter of the virtual object is modulated responsive to the subsequent touch gesture while the constrained parameter of the virtual object is maintained in accordance with the mode constraint. | 06-30-2011 |
20110270824 | COLLABORATIVE SEARCH AND SHARE - Collaborative search and share is provided by a method of facilitating collaborative content-finding, which includes displaying a toolbar user interface object for each user that not only allows each user to perform content-finding but also increases awareness of each user to the activities of other users. The method further includes displaying content results as various disparate image clips that can easily be shared, moved, etc. amongst users. | 11-03-2011 |
20140120518 | TEACHING GESTURES WITH OFFSET CONTACT SILHOUETTES - A method for providing multi-touch input training on a display surface is disclosed. A touch/hover input is detected at one or more regions of the display surface. A visualization of the touch/hover input is displayed at a location of the display surface offset from the touch/hover input. One or more annotations are displayed at a location of the display surface offset from the touch/hover input and proximate to the visualization, where each annotation shows a different legal continuation of the touch/hover input. | 05-01-2014 |
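Among the gesture-interaction abstracts above, 20110157025 describes a concrete mechanism: the hand posture of an initial touch sets a mode constraint that pins one parameter of a virtual object, and subsequent gestures modulate only the unconstrained parameters. Here is a hypothetical sketch of that behavior; the posture names, the parameter set, and the mapping table are invented for illustration and do not come from the patent.

```python
# Sketch: an initial hand posture constrains one parameter of a virtual
# object; later gestures change every parameter except the constrained one.

# Illustrative posture-to-constraint mapping (assumed, not from the patent).
POSTURE_CONSTRAINTS = {
    "flat-hand": "rotation",   # flat hand: object may move/scale but not rotate
    "edge-hand": "scale",      # hand on edge: object may move/rotate but not scale
}

class VirtualObject:
    def __init__(self):
        self.params = {"position": (0, 0), "rotation": 0.0, "scale": 1.0}
        self.constrained = None

    def begin_gesture(self, posture):
        # Initial touch gesture: the recognized hand posture sets the mode constraint.
        self.constrained = POSTURE_CONSTRAINTS.get(posture)

    def apply_gesture(self, updates):
        # Subsequent touch gesture: modulate only the unconstrained parameters;
        # the constrained parameter is maintained per the mode constraint.
        for name, value in updates.items():
            if name != self.constrained:
                self.params[name] = value

obj = VirtualObject()
obj.begin_gesture("flat-hand")
obj.apply_gesture({"position": (5, 3), "rotation": 45.0, "scale": 2.0})
# rotation stays 0.0; position and scale are updated
```

The same shape would fit the modal-region idea in 20110134047, where the selected mode (rather than a posture) defines which touch gestures map to which actions.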
20120229508 | THEME-BASED AUGMENTATION OF PHOTOREPRESENTATIVE VIEW - On a display configured to provide a photorepresentative view from a user's vantage point of a physical environment in which the user is located, a method is provided comprising receiving, from the user, an input selecting a theme for use in augmenting the photorepresentative view. The method further includes obtaining, optically and in real time, environment information of the physical environment and generating a spatial model of the physical environment based on the environment information. The method further includes identifying, via analysis of the spatial model, one or more features within the spatial model that each corresponds to one or more physical features in the physical environment. The method further includes based on such analysis, displaying, on the display, an augmentation of an identified feature, the augmentation being associated with the theme. | 09-13-2012 |
20120264510 | INTEGRATED VIRTUAL ENVIRONMENT - An integrated virtual environment is provided by obtaining a 3D spatial model of a physical environment in which a user is located, and identifying, via analysis of the 3D spatial model, a physical object in the physical environment. The method further comprises generating a virtualized representation of the physical object, and incorporating the virtualized representation of the physical object into an existing virtual environment, thereby yielding the integrated virtual environment. The method further comprises displaying, on a display device and from a vantage point of the user, a view of the integrated virtual environment, said view being changeable in response to the user moving and/or interacting within the physical environment. | 10-18-2012 |