Patent application number | Description | Published |
--- | --- | --- |
20080284729 | Three dimensional volumetric display input and output configurations - The present invention is a system that allows a number of 3D volumetric display or output configurations, such as dome, cubical and cylindrical volumetric displays, to interact with a number of different input configurations, such as a three-dimensional position sensing system having a volume sensing field, a planar position sensing system having a digitizing tablet, and a non-planar position sensing system having a sensing grid formed on a dome. The user interacts via the input configurations, such as by moving a digitizing stylus on the sensing grid formed on the dome enclosure surface. This interaction affects the content of the volumetric display by mapping positions and corresponding vectors of the stylus to a moving cursor within the 3D display space of the volumetric display that is offset from a tip of the stylus along the vector. | 11-20-2008 |
20090189857 | TOUCH SENSING FOR CURVED DISPLAYS - Described herein is an apparatus that includes a curved display surface that has an interior and an exterior. The curved display surface is configured to display images thereon. The apparatus also includes an emitter that emits light through the interior of the curved display surface. A detector component analyzes light reflected from the curved display surface to detect a position on the curved display surface where a first member is in physical contact with the exterior of the curved display surface. | 07-30-2009 |
20090307623 | SYSTEM FOR ORGANIZING AND VISUALIZING DISPLAY OBJECTS - A method, system and computer program for organizing and visualizing display objects within a virtual environment is provided. In one aspect, attributes of display objects define the interaction between display objects according to pre-determined rules, including rules simulating real world mechanics, thereby enabling enriched user interaction. The present invention further provides for the use of piles as an organizational entity for desktop objects. The present invention further provides for fluid interaction techniques for committing actions on display objects in a virtual interface. A number of other interaction and visualization techniques are disclosed. | 12-10-2009 |
20090327171 | RECOGNIZING GESTURES FROM FOREARM EMG SIGNALS - A machine learning model is trained by instructing a user to perform prescribed gestures, sampling signals from EMG sensors arranged arbitrarily on the user's forearm with respect to locations of muscles in the forearm, extracting feature samples from the sampled signals, labeling the feature samples according to the corresponding gestures instructed to be performed, and training the machine learning model with the labeled feature samples. Subsequently, gestures may be recognized using the trained machine learning model by sampling signals from the EMG sensors, extracting from the signals unlabeled feature samples of a same type as those extracted during the training, passing the unlabeled feature samples to the machine learning model, and outputting from the machine learning model indicia of a gesture classified by the machine learning model. | 12-31-2009 |
20100020026 | Touch Interaction with a Curved Display - Touch interaction with a curved display (e.g., a sphere, a hemisphere, a cylinder, etc.) is enabled through various user interface (UI) features. In an example embodiment, a curved display is monitored to detect a touch input. If a touch input is detected based on the monitoring, then one or more locations of the touch input are determined. Responsive to the determined one or more locations of the touch input, at least one UI feature is implemented. Example UI features include an orb-like invocation gesture feature, a rotation-based dragging feature, a send-to-dark-side interaction feature, and an object representation and manipulation by proxy representation feature. | 01-28-2010 |
20100023895 | Touch Interaction with a Curved Display - Touch interaction with a curved display (e.g., a sphere, a hemisphere, a cylinder, etc.) is facilitated by preserving a predetermined orientation for objects. In an example embodiment, a curved display is monitored to detect a touch input on an object. If a touch input on an object is detected based on the monitoring, then one or more locations of the touch input are determined. The object may be manipulated responsive to the determined one or more locations of the touch input. While manipulation of the object is permitted, a predetermined orientation is preserved. | 01-28-2010 |
20100088582 | TALKING PAPER AUTHORING TOOLS - A range of unified software authoring tools for creating a talking paper application for integration in an end user platform are described herein. The authoring tools are easy to use and are interoperable to provide an easy and cost-effective method of creating a talking paper application. The authoring tools provide a framework for creating audio content and image content and interactively linking the audio content and the image content. The authoring tools also provide for verifying the interactively linked audio and image content, and reviewing the audio content, the image content and the interactive linking on a display device. Finally, the authoring tools provide for saving the audio content, the image content and the interactive linking for publication to a manufacturer for integration in an end user platform or talking paper platform. | 04-08-2010 |
20100325572 | MULTIPLE MOUSE CHARACTER ENTRY - This document relates to multiple mouse character entry. More particularly, the document relates to multiple mouse character entry tools for use on a common or shared graphical user interface (GUI). In some implementations, the multiple mouse character entry tools (MMCE tools) can generate a GUI that includes multiple distinctively identified cursors. Individual cursors can be controlled by individual users via a corresponding mouse. The MMCE tools can associate a set of characters with an individual cursor so that an individual user can use the mouse's scroll wheel to scroll to specific characters of the set. The user can select an individual character by clicking a button of the mouse. | 12-23-2010 |
20110214053 | Assisting Input From a Keyboard - Assisting input from a keyboard is described. In an embodiment, a processor receives a plurality of key-presses from the keyboard comprising alphanumeric data for input to application software executed at the processor. The processor analyzes the plurality of key-presses to detect at least one predefined typing pattern, and, in response, controls a display device to display a representation of at least a portion of the keyboard in association with a user interface of the application software. In another embodiment, a computer device has a keyboard and at least one sensor arranged to monitor at least a subset of keys on the keyboard, and detect an object within a predefined distance of a selected key prior to activation of the selected key. The processor then controls the display device to display a representation of a portion of the keyboard comprising the selected key. | 09-01-2011 |
20120253815 | TALKING PAPER AUTHORING TOOLS - A range of unified software authoring tools for creating a talking paper application for integration in an end user platform are described herein. The authoring tools are easy to use and are interoperable to provide an easy and cost-effective method of creating a talking paper application. The authoring tools provide a framework for creating audio content and image content and interactively linking the audio content and the image content. The authoring tools also provide for verifying the interactively linked audio and image content, and reviewing the audio content, the image content and the interactive linking on a display device. Finally, the authoring tools provide for saving the audio content, the image content and the interactive linking for publication to a manufacturer for integration in an end user platform or talking paper platform. | 10-04-2012 |
20130232095 | RECOGNIZING FINGER GESTURES FROM FOREARM EMG SIGNALS - A machine learning model is trained by instructing a user to perform various predefined gestures, sampling signals from EMG sensors arranged arbitrarily on the user's forearm with respect to locations of muscles in the forearm, extracting feature samples from the sampled signals, labeling the feature samples according to the corresponding gestures instructed to be performed, and training the machine learning model with the labeled feature samples. Subsequently, gestures may be recognized using the trained machine learning model by sampling signals from the EMG sensors, extracting from the signals unlabeled feature samples of a same type as those extracted during the training, passing the unlabeled feature samples to the machine learning model, and outputting from the machine learning model indicia of a gesture classified by the machine learning model. | 09-05-2013 |
20140340388 | SYSTEM, METHOD AND COMPUTER PROGRAM FOR USING A SUGGESTIVE MODELING INTERFACE - A method, computer system and computer program are provided for using a suggestive modeling interface. The method is a computer-implemented method of rendering sketches, comprising the steps of: (1) a user activating a sketching application; (2) in response, the sketching application displaying on a screen a suggestive modeling interface; (3) the sketching application importing a sketch to the suggestive modeling interface; and (4) the sketching application retrieving from a database one or more suggestions based on the sketch. The method allows a user to interactively create a drawing guided by the imported sketch, by selectively using one or more image-guided drawing tools provided by the sketching application. The present invention is well-suited for three-dimensional modeling applications. | 11-20-2014 |
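The EMG pipeline described in applications 20090327171 and 20130232095 — sample multi-channel forearm signals, extract feature samples, label them by the gesture being performed, train a model, then classify unlabeled feature samples of the same type — can be sketched in a few lines. This is an illustrative sketch only, not the patents' implementation: the synthetic signal generator, the per-channel RMS features, and the nearest-centroid classifier are all assumptions standing in for real EMG sensors and whatever machine learning model the patents actually use.

```python
import math
import random

N_CHANNELS = 4  # hypothetical number of EMG sensors on the forearm


def rms_features(window):
    """One root-mean-square amplitude per channel (a common EMG feature)."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]


def synthetic_window(gesture, n_samples=64, rng=random):
    """Fake a window of EMG samples: each gesture (0..N_CHANNELS-1)
    drives one channel with higher variance than the others."""
    return [
        [rng.gauss(0.0, 3.0 if ch == gesture else 1.0) for _ in range(n_samples)]
        for ch in range(N_CHANNELS)
    ]


def train(gestures, windows_per_gesture=20, rng=None):
    """Label feature samples by the instructed gesture and 'train' a
    nearest-centroid model: the mean feature vector per gesture."""
    rng = rng or random.Random(0)
    centroids = {}
    for g in gestures:
        feats = [rms_features(synthetic_window(g, rng=rng))
                 for _ in range(windows_per_gesture)]
        centroids[g] = [sum(col) / len(col) for col in zip(*feats)]
    return centroids


def classify(centroids, window):
    """Extract the same feature type as in training, return the nearest gesture."""
    f = rms_features(window)
    return min(centroids,
               key=lambda g: sum((a - b) ** 2 for a, b in zip(f, centroids[g])))
```

Under these assumptions, training on gestures `0..3` and classifying a fresh synthetic window recovers the gesture that generated it, since the driven channel's RMS clearly dominates; a real system would replace the synthetic windows with sensor data and the centroid model with a trained classifier.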