Patent application number | Description | Published |
20080204410 | RECOGNIZING A MOTION OF A POINTING DEVICE - The present invention is directed toward a system and process that controls a group of networked electronic components using a multimodal integration scheme in which inputs from a speech recognition subsystem, gesture recognition subsystem employing a wireless pointing device and pointing analysis subsystem also employing the pointing device, are combined to determine what component a user wants to control and what control action is desired. In this multimodal integration scheme, the desired action concerning an electronic component is decomposed into a command and a referent pair. The referent can be identified using the pointing device to identify the component by pointing at the component or an object associated with it, by using speech recognition, or both. The command may be specified by pressing a button on the pointing device, by a gesture performed with the pointing device, by a speech recognition event, or by any combination of these inputs. | 08-28-2008 |
20080204411 | RECOGNIZING A MOVEMENT OF A POINTING DEVICE - The present invention is directed toward a system and process that controls a group of networked electronic components using a multimodal integration scheme in which inputs from a speech recognition subsystem, gesture recognition subsystem employing a wireless pointing device and pointing analysis subsystem also employing the pointing device, are combined to determine what component a user wants to control and what control action is desired. In this multimodal integration scheme, the desired action concerning an electronic component is decomposed into a command and a referent pair. The referent can be identified using the pointing device to identify the component by pointing at the component or an object associated with it, by using speech recognition, or both. The command may be specified by pressing a button on the pointing device, by a gesture performed with the pointing device, by a speech recognition event, or by any combination of these inputs. | 08-28-2008 |
20080259055 | Manipulating An Object Utilizing A Pointing Device - The present invention is directed toward a system and process that controls a group of networked electronic components using a multimodal integration scheme in which inputs from a speech recognition subsystem, gesture recognition subsystem employing a wireless pointing device and pointing analysis subsystem also employing the pointing device, are combined to determine what component a user wants to control and what control action is desired. In this multimodal integration scheme, the desired action concerning an electronic component is decomposed into a command and a referent pair. The referent can be identified using the pointing device to identify the component by pointing at the component or an object associated with it, by using speech recognition, or both. The command may be specified by pressing a button on the pointing device, by a gesture performed with the pointing device, by a speech recognition event, or by any combination of these inputs. | 10-23-2008 |
20080313575 | SYSTEM AND PROCESS FOR CONTROLLING ELECTRONIC COMPONENTS IN A UBIQUITOUS COMPUTING ENVIRONMENT USING MULTIMODAL INTEGRATION - The present invention is directed toward a system and process that controls a group of networked electronic components using a multimodal integration scheme in which inputs from a speech recognition subsystem, gesture recognition subsystem employing a wireless pointing device and pointing analysis subsystem also employing the pointing device, are combined to determine what component a user wants to control and what control action is desired. In this multimodal integration scheme, the desired action concerning an electronic component is decomposed into a command and a referent pair. The referent can be identified using the pointing device to identify the component by pointing at the component or an object associated with it, by using speech recognition, or both. The command may be specified by pressing a button on the pointing device, by a gesture performed with the pointing device, by a speech recognition event, or by any combination of these inputs. | 12-18-2008 |
20090164952 | CONTROLLING AN OBJECT WITHIN AN ENVIRONMENT USING A POINTING DEVICE - The present invention is directed toward a system and process that controls a group of networked electronic components using a multimodal integration scheme in which inputs from a speech recognition subsystem, gesture recognition subsystem employing a wireless pointing device and pointing analysis subsystem also employing the pointing device, are combined to determine what component a user wants to control and what control action is desired. In this multimodal integration scheme, the desired action concerning an electronic component is decomposed into a command and a referent pair. The referent can be identified using the pointing device to identify the component by pointing at the component or an object associated with it, by using speech recognition, or both. The command may be specified by pressing a button on the pointing device, by a gesture performed with the pointing device, by a speech recognition event, or by any combination of these inputs. | 06-25-2009 |
20090189857 | TOUCH SENSING FOR CURVED DISPLAYS - Described herein is an apparatus that includes a curved display surface that has an interior and an exterior. The curved display surface is configured to display images thereon. The apparatus also includes an emitter that emits light through the interior of the curved display surface. A detector component analyzes light reflected from the curved display surface to detect a position on the curved display surface where a first member is in physical contact with the exterior of the curved display surface. | 07-30-2009 |
20090189917 | PROJECTION OF GRAPHICAL OBJECTS ON INTERACTIVE IRREGULAR DISPLAYS - A method for displaying images on a curved display surface is described herein. The method includes receiving a graphical object and distorting the graphical object at run-time such that an appearance of the graphical object on the curved display surface will be substantially similar regardless of a position of the graphical object on the curved display surface when viewed at a viewing axis that is approximately orthogonal to a plane that is tangential to the curved display surface at a center of the graphical object. The method may further include displaying the graphical object on the curved display surface. | 07-30-2009 |
20090198354 | CONTROLLING OBJECTS VIA GESTURING - The present invention is directed toward a system and process that controls a group of networked electronic components using a multimodal integration scheme in which inputs from a speech recognition subsystem, gesture recognition subsystem employing a wireless pointing device and pointing analysis subsystem also employing the pointing device, are combined to determine what component a user wants to control and what control action is desired. In this multimodal integration scheme, the desired action concerning an electronic component is decomposed into a command and a referent pair. The referent can be identified using the pointing device to identify the component by pointing at the component or an object associated with it, by using speech recognition, or both. The command may be specified by pressing a button on the pointing device, by a gesture performed with the pointing device, by a speech recognition event, or by any combination of these inputs. | 08-06-2009 |
20090207135 | SYSTEM AND METHOD FOR DETERMINING INPUT FROM SPATIAL POSITION OF AN OBJECT - A system and method for determining an input is provided. The system includes an object position determination device and an input determination device. The object position determination device is configured to determine a first position of an object at a first time and a second position of the object at a second time. The object position determination device includes a camera configured to detect light traveling from the object to the camera. The input determination device is configured to determine an input based at least partly upon the first position and the second position. The object position determination device can include a second camera. The object can include a radio frequency emitter. The object can include an infrared emitter. The object can be an electronic device. | 08-20-2009 |
20090271691 | LINKING DIGITAL AND PAPER DOCUMENTS - Various embodiments facilitate linking physical documents to digital documents. Links link physical documents to digital documents. Using a sensor, the physical documents are automatically detected and identified on a digital workspace. A computer is capable of displaying graphics, and user interaction with displayed graphics can be detected. The digital workspace displays a GUI component having one or more controls, and the GUI component is displayed at a location relative to a physical document on the digital workspace. User interaction with the control is detected and either a link between the physical document and one of the digital documents is edited, or an existing link between the physical document and a digital document is used to perform an operation on the digital document. Alternatively or additionally, links may be automatically generated to digital documents determined to be implicitly related to the physical document. | 10-29-2009 |
20100078303 | MECHANICAL ARCHITECTURE FOR DISPLAY KEYBOARD KEYS - Mechanical architecture for providing maximum viewing area on key button tops of keys for a user input device. The viewing area is for the display of information on the key buttons, and also includes tactile feedback similar to standard laptop keyboards, all using low cost manufacturing methods such as injection molding. The architecture optimizes an aperture through the core of the key switch assembly in order to project an image through the aperture and onto the display area of the key button. The architecture relocates in at least one embodiment the tactile feedback mechanism (e.g., dome assembly) out from underneath the key button to the perimeter or side of the key switch assembly. The architecture finds particular application to input devices such as keyboards, game pods, data entry device, etc., that operate in combination with an optical surface (e.g., wedge lens). | 04-01-2010 |
20100103269 | DETERMINING ORIENTATION IN AN EXTERNAL REFERENCE FRAME - Orientation in an external reference frame is determined. An external-frame acceleration for a device is determined, the external-frame acceleration being in an external reference frame relative to the device. An internal-frame acceleration for the device is determined, the internal-frame acceleration being in an internal reference frame relative to the device. An orientation of the device is determined based on a comparison between a direction of the external-frame acceleration and a direction of the internal-frame acceleration. | 04-29-2010 |
20100105479 | DETERMINING ORIENTATION IN AN EXTERNAL REFERENCE FRAME - Orientation in an external reference frame is determined. An external-frame acceleration for a device is determined, the external-frame acceleration being in an external reference frame relative to the device. An internal-frame acceleration for the device is determined, the internal-frame acceleration being in an internal reference frame relative to the device. An orientation of the device is determined based on a comparison between a direction of the external-frame acceleration and a direction of the internal-frame acceleration. | 04-29-2010 |
20100123605 | System and method for determining 3D orientation of a pointing device - The present invention is directed toward a system and process that controls a group of networked electronic components using a multimodal integration scheme in which inputs from a speech recognition subsystem, gesture recognition subsystem employing a wireless pointing device and pointing analysis subsystem also employing the pointing device, are combined to determine what component a user wants to control and what control action is desired. In this multimodal integration scheme, the desired action concerning an electronic component is decomposed into a command and a referent pair. The referent can be identified using the pointing device to identify the component by pointing at the component or an object associated with it, by using speech recognition, or both. The command may be specified by pressing a button on the pointing device, by a gesture performed with the pointing device, by a speech recognition event, or by any combination of these inputs. | 05-20-2010 |
20100253624 | SYSTEM FOR DISPLAYING AND CONTROLLING ELECTRONIC OBJECTS - The present invention is directed toward a system and process that controls a group of networked electronic components using a multimodal integration scheme in which inputs from a speech recognition subsystem, gesture recognition subsystem employing a wireless pointing device and pointing analysis subsystem also employing the pointing device, are combined to determine what component a user wants to control and what control action is desired. In this multimodal integration scheme, the desired action concerning an electronic component is decomposed into a command and a referent pair. The referent can be identified using the pointing device to identify the component by pointing at the component or an object associated with it, by using speech recognition, or both. The command may be specified by pressing a button on the pointing device, by a gesture performed with the pointing device, by a speech recognition event, or by any combination of these inputs. | 10-07-2010 |
20110001696 | MANIPULATING OBJECTS DISPLAYED ON A DISPLAY SCREEN - Embodiments of the present invention are directed toward determining a location where a pointing device is directed. In one embodiment, the method includes receiving a message at a computing device from the pointing device. Sensor data is extracted from the message, the sensor data comprising accelerometer data, gyroscope data, or a combination thereof. A position of the pointing device in three-dimensional space is identified. An orientation of the pointing device in three-dimensional space is identified using the sensor data. A location to which the pointing device is directed is determined by utilizing the identified position of the pointing device and the identified orientation of the pointing device, and an object on a display screen at the location where the pointing device is directed is altered. | 01-06-2011 |
20110004329 | CONTROLLING ELECTRONIC COMPONENTS IN A COMPUTING ENVIRONMENT - The present invention is directed toward a system and process that controls a group of networked electronic components using a multimodal integration scheme in which inputs from a speech recognition subsystem, gesture recognition subsystem employing a wireless pointing device and pointing analysis subsystem also employing the pointing device, are combined to determine what component a user wants to control and what control action is desired. In this multimodal integration scheme, the desired action concerning an electronic component is decomposed into a command and a referent pair. The referent can be identified using the pointing device to identify the component by pointing at the component or an object associated with it, by using speech recognition, or both. The command may be specified by pressing a button on the pointing device, by a gesture performed with the pointing device, by a speech recognition event, or by any combination of these inputs. | 01-06-2011 |
20110041098 | MANIPULATION OF 3-DIMENSIONAL GRAPHICAL OBJECTS OR VIEW IN A MULTI-TOUCH DISPLAY - A system described herein provides six degrees of freedom with respect to a three-dimensional object rendered on a multi-touch display through utilization of three touch points. Multiple axes of rotation are established based at least in part upon location of a first touch point and a second touch point on a multi-touch display. Movement of a third touch point controls appearance of rotation of the three-dimensional object about two axes, and rotational movement of the first touch point relative to the second touch point controls appearance of rotation of the three-dimensional object about a third axis. | 02-17-2011 |
20120113017 | RESOLVING MERGED TOUCH CONTACTS - A method for resolving merged contacts detected by a multi-touch sensor includes resolving a first touch contact to a first centroid(N) for a frame(N) and resolving a second touch contact, distinct from the first touch contact, to a second centroid(N) for the frame(N). Responsive to the first touch contact and the second touch contact merging into a merged touch contact in a frame(N+1), the merged touch contact is resolved to a first centroid(N+1) and a second centroid(N+1). | 05-10-2012 |
20120264510 | INTEGRATED VIRTUAL ENVIRONMENT - An integrated virtual environment is provided by obtaining a 3D spatial model of a physical environment in which a user is located, and identifying, via analysis of the 3D spatial model, a physical object in the physical environment. The method further comprises generating a virtualized representation of the physical object, and incorporating the virtualized representation of the physical object into an existing virtual environment, thereby yielding the integrated virtual environment. The method further comprises displaying, on a display device and from a vantage point of the user, a view of the integrated virtual environment, said view being changeable in response to the user moving and/or interacting within the physical environment. | 10-18-2012 |
20130190089 | System and method for executing a game process - A 3-D imaging system for recognition and interpretation of gestures to control a computer. The system includes a 3-D imaging system that performs gesture recognition and interpretation based on a previous mapping of a plurality of hand poses and orientations to user commands for a given user. When the user is identified to the system, the imaging system images gestures presented by the user, performs a lookup for the user command associated with the captured image(s), and executes the user command(s) to effect control of the computer, programs, and connected devices. | 07-25-2013 |
20130294016 | WIRELESS CONTROLLER - A wireless controller includes a handle portion to be held in one or both hands. The wireless controller also includes a gyroscope to output rotation information indicative of rotation of the handle about a steering axis, an accelerometer to output acceleration information, and a magnetometer to output magnetic bearing information. The wireless controller also includes a communication subsystem to wirelessly transmit sensor data to a computing device. The sensor data represents one or more of the rotation information, the acceleration information, and the magnetic bearing information such that the acceleration information is useable to attenuate gyroscopic drift when the handle has a first orientation and the magnetic bearing information is useable to attenuate gyroscopic drift when the handle has a second orientation. | 11-07-2013 |
20130295539 | PROJECTED VISUAL CUES FOR GUIDING PHYSICAL MOVEMENT - Physical movement of a human subject may be guided by a visual cue. A physical environment may be observed to identify a current position of a body portion of the human subject. A model path of travel may be obtained for the body portion of the human subject. The visual cue may be projected onto the human subject and/or into a field of view of the human subject. The visual cue may indicate the model path of travel for the body portion of the human subject. | 11-07-2013 |
20130297246 | WIRELESS CONTROLLER - A computing device receives acceleration information from an accelerometer mechanically coupled to a wireless controller, magnetic bearing information from a magnetometer mechanically coupled to the wireless controller, and rotation information from a gyroscope mechanically coupled to the wireless controller. When the wireless controller is primarily vertical, the computing device determines a rotation angle of the wireless controller by filtering the rotation information using the acceleration information. When the wireless controller is primarily horizontal, the computing device determines the rotation angle of the wireless controller by filtering the rotation information using the magnetic bearing information. | 11-07-2013 |
20130324248 | SYSTEM AND METHOD FOR EXECUTING A GAME PROCESS - Apparatus and process for controlling a computer process with gestures and a handheld pointing device. The computer system employs the pointing device to determine what component a user wants to control and what control action is desired. | 12-05-2013 |
20140049609 | WIDE ANGLE DEPTH DETECTION - Embodiments for a depth sensing camera with a wide field of view are disclosed. In one example, a depth sensing camera comprises an illumination light projection subsystem, an image detection subsystem configured to acquire image data having a wide angle field of view, a logic subsystem configured to execute instructions, and a data-holding subsystem comprising stored instructions executable by the logic subsystem to control projection of illumination light and to determine depth values from image data acquired via the image sensor. The image detection subsystem comprises an image sensor and one or more lenses. | 02-20-2014 |
20140051510 | IMMERSIVE DISPLAY WITH PERIPHERAL ILLUSIONS - A primary display displays a primary image. A peripheral illusion is displayed around the primary display by an environmental display so that the peripheral illusion appears as an extension of the primary image. | 02-20-2014 |
20140128994 | LOGICAL SENSOR SERVER FOR LOGICAL SENSOR PLATFORMS - A “Logical Sensor Server” or “LSS” acts as a smart hub between related or unrelated sensors, devices, or other systems by translating, morphing, or forwarding signals or events published by various input sources into signals or higher-order events that can be consumed or used by other subscribing sensors, devices, or systems. More specifically, the LSS acts alone or in combination with a Logical Sensor Platform (LSP) to enable various techniques that allow messages received from different input sources to be authored, transformed and made available to one or more subscribers in a manner that allows intelligent event-driven behavior to emerge from a collection of relatively simple input sources. Any combination of automatic configuration or user input is used to define the format of transformed inputs to be received by particular subscribers relative to one or more publications. Subscribers receiving transformed events control their own actions based on those events. | 05-08-2014 |
20140129162 | BATTERY WITH COMPUTING, SENSING AND COMMUNICATION CAPABILITIES - Electrical battery apparatus embodiments are presented that generally involve incorporating sensing, computing, and communication capabilities into the one common component that a vast number of electronic devices employ—namely batteries. By integrating these capabilities into disposable and/or rechargeable batteries, new functionality and intelligence can be provided to otherwise stand-alone devices. | 05-08-2014 |
20140129866 | AGGREGATION FRAMEWORK USING LOW-POWER ALERT SENSOR - An aggregation framework system and method that automatically configures, aggregates, disaggregates, manages, and optimizes components of a consolidated system of devices, modules, and sensors. Embodiments of the system and method include a low-power alert sensor, a data aggregator module, and an interpreter module. The low-power alert sensor is a sensor that is continuously on and continuously monitoring its environment. The low-power alert sensor acts as a watchdog and triggers other sensors to awaken them from a power-conservation state when there is a change or event that occurs in an environment. The data aggregator module manages the set of sensors within the system and aggregates sensor data obtained from the sensors. The interpreter module then translates the physical data collected by sensors into logical information. Together the data aggregator module and the interpreter module present a unified logical view of the capabilities of the sensors under their control. | 05-08-2014 |
20140140590 | TRENDS AND RULES COMPLIANCE WITH DEPTH VIDEO - An instruction-storage machine holds instructions that, when executed by a logic machine, cause the logic machine to find a human subject in depth data acquired with one or more depth cameras and to compute an aspect of the human subject from the depth data. The instructions further cause the logic machine to determine, based on the computed aspect, whether the human subject is complying with or deviating from a predefined rule, and to issue notification if the human subject is deviating from the rule. In another example, the instructions cause the logic machine to identify a trend based on the computed aspect and to report the identified trend. | 05-22-2014 |
20140142729 | CONTROLLING HARDWARE IN AN ENVIRONMENT - An instruction-storage machine holds instructions that, when executed by a logic machine, cause the logic machine to find a human subject in depth data acquired with one or more depth cameras arranged to image an environment, and to compute an aspect of the human subject from the depth data. Based on the computed aspect, the logic machine determines a change to be made in the environment and actuates appropriate hardware in the environment to make the change. | 05-22-2014 |
20140247263 | STEERABLE DISPLAY SYSTEM - A steerable display system includes a projector and a projector steering mechanism that selectively changes a projection direction of the projector. An aiming controller causes the projector steering mechanism to aim the projector at a target location of a physical environment. An image controller supplies the aimed projector with information for projecting an image that is geometrically corrected for the target location. | 09-04-2014 |
20140292654 | SYSTEM AND METHOD FOR DETERMINING 3D ORIENTATION OF A POINTING DEVICE - The present invention is directed toward a system and process that controls a group of networked electronic components using a multimodal integration scheme in which inputs from a speech recognition subsystem, gesture recognition subsystem employing a wireless pointing device and pointing analysis subsystem also employing the pointing device, are combined to determine what component a user wants to control and what control action is desired. In this multimodal integration scheme, the desired action concerning an electronic component is decomposed into a command and a referent pair. The referent can be identified using the pointing device to identify the component by pointing at the component or an object associated with it, by using speech recognition, or both. The command may be specified by pressing a button on the pointing device, by a gesture performed with the pointing device, by a speech recognition event, or by any combination of these inputs. | 10-02-2014 |
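The largest family in the table above (20080204410 through 20140292654) repeatedly describes decomposing a desired action into a command and a referent pair, where the referent may come from the pointing device or from speech recognition, and the command from a button press, a gesture, or a speech event, in any combination. A minimal sketch of that fusion step; the function name, parameter names, and priority ordering are illustrative assumptions, not taken from the applications:

```python
def resolve_action(pointing_target=None, speech_referent=None,
                   button_command=None, gesture_command=None,
                   speech_command=None):
    """Combine multimodal inputs into a (command, referent) pair.

    The referent identifies which networked component to control; the
    command identifies what control action is desired. Either half may
    be supplied by more than one modality.
    """
    # Referent: pointing at the component (or an associated object),
    # or naming it via speech recognition, or both.
    referent = pointing_target or speech_referent
    # Command: button press, gesture, or speech event, in any combination.
    command = button_command or gesture_command or speech_command
    if referent is None or command is None:
        return None  # Not enough modal input yet to act on.
    return command, referent
```

For example, pointing at a lamp while saying "turn on" would resolve to `("turn on", "lamp")`, while pointing alone yields nothing actionable until a command arrives.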
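Applications 20100103269 and 20100105479 determine device orientation by comparing the direction of an external-frame acceleration with the direction of an internal-frame acceleration. One way to sketch that comparison is the angle between the two unit direction vectors; the function names and the angle-based formulation are assumptions for illustration:

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v)

def frame_alignment_angle(external_accel, internal_accel):
    """Angle in radians between the external-frame and internal-frame
    acceleration directions; orientation can be inferred from how the
    two directions differ (zero means the frames are aligned)."""
    a = normalize(external_accel)
    b = normalize(internal_accel)
    # Clamp the dot product to guard acos against rounding error.
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.acos(dot)
```

A full implementation would recover a rotation (e.g. axis-angle or quaternion) between the frames rather than a single angle, but the comparison of directions is the core of the claimed method.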
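Application 20120113017 resolves two previously distinct touch contacts that merge into one blob in a later frame back into two centroids. A plausible sketch is a single k-means-style step: assign each pixel of the merged blob to the nearer of the previous frame's centroids, then average each group. The assignment strategy and all names here are assumptions, not the application's actual algorithm:

```python
def resolve_merged_contact(pixels, prev_c1, prev_c2):
    """Split a merged touch blob into two centroids using the previous
    frame's centroids as seeds."""
    def dist2(p, c):
        return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2

    # Assign each pixel to whichever previous centroid is closer.
    g1 = [p for p in pixels if dist2(p, prev_c1) <= dist2(p, prev_c2)]
    g2 = [p for p in pixels if dist2(p, prev_c1) > dist2(p, prev_c2)]

    def centroid(group, fallback):
        if not group:
            return fallback  # Keep the old centroid if no pixels landed here.
        return (sum(p[0] for p in group) / len(group),
                sum(p[1] for p in group) / len(group))

    return centroid(g1, prev_c1), centroid(g2, prev_c2)
```

Seeding from the prior frame preserves contact identity across the merge, which is the point of tracking two distinct centroids through frame(N+1).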
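The wireless-controller entries (20130294016 and 20130297246) describe attenuating gyroscopic drift by filtering gyroscope rotation with accelerometer information when the controller is primarily vertical and with magnetometer information when it is primarily horizontal. A minimal complementary-filter sketch of that idea; the function names, angle inputs, and the blend factor `alpha` are illustrative assumptions:

```python
def complementary_filter(angle_gyro, angle_ref, alpha=0.98):
    """Blend a drifting gyro-integrated angle with a drift-free
    reference angle derived from another sensor."""
    return alpha * angle_gyro + (1.0 - alpha) * angle_ref

def update_rotation(prev_angle, gyro_rate, dt, accel_angle, mag_angle,
                    is_vertical):
    """One filter step for the controller's rotation angle."""
    # Integrate the gyro rate to propagate the angle estimate (drifts).
    angle_gyro = prev_angle + gyro_rate * dt
    # Per the abstracts: the accelerometer reference is usable when the
    # handle is primarily vertical, the magnetometer reference otherwise.
    reference = accel_angle if is_vertical else mag_angle
    return complementary_filter(angle_gyro, reference)
```

The switch between references matters because a gravity-based angle is unobservable about the vertical axis, while a magnetic bearing degrades as the sensor tilts; each covers the orientation where the other fails.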