Patent application number | Description | Published |
20110161890 | Using multi-modal input to control multiple objects on a display - Embodiments of the invention are generally directed to systems, methods, and machine-readable mediums for using multi-modal input to control multiple objects on a display. In one embodiment, a system may include several modal input devices. Each modal input device is capable of retrieving a stream of modal input data from a user. The system also includes modal interpretation logic that can interpret each of the retrieved modal input data streams into a corresponding set of actions. The system additionally includes modal pairing logic to assign each corresponding set of actions to control one of the displayed objects. Furthermore, the system has modal control logic which causes each displayed object to be controlled by its assigned set of actions. | 06-30-2011 |
20110205148 | Facial Tracking Electronic Reader - Facial actuations, such as eye actuations, may be used to detect user inputs to control the display of text. For example, in connection with an electronic book reader, facial actuations and, particularly, eye actuations, can be interpreted to indicate when to turn a page, when to provide a pronunciation of a word, when to provide a definition of a word, and when to mark a spot in the text, as examples. | 08-25-2011 |
20110207504 | Interactive Projected Displays - A projection device may project an image on a display surface. Another projection device may then project an image that appears to interact with the image projected by the first device. A camera in one of the devices may record the interaction. The interaction may be analyzed to implement game play or user selections in general. Communications between the two devices may be established by a network communication protocol. | 08-25-2011 |
20120002952 | CONTENT SYNCHRONIZATION TECHNIQUES - Techniques are disclosed that involve copying recorded content from a host device (e.g., a PVR) to a portable device. A storage medium within the portable device may include a portion that is assigned to store desired content received from the host device. Allocation of this portion may be based on a user selection. In embodiments, content may be automatically copied (“synchronized”) from the host device to the portable device. This copying may be based on various factors, such as previous synchronization and/or outputting activities. | 01-05-2012 |
20120162254 | OBJECT MAPPING TECHNIQUES FOR MOBILE AUGMENTED REALITY APPLICATIONS - Techniques are disclosed that involve mobile augmented reality (MAR) applications in which users (e.g., players) may experience augmented reality (e.g., altered video or audio based on a real environment). Such augmented reality may include various alterations. For example, particular objects may be altered to appear differently. Such alterations may be based on stored profiles and/or user selections. Further features may also be employed. For example, in embodiments, characters and/or other objects may be sent (or caused to appear) to other users in other locations. Also, a user may leave a character at another location and receive an alert when another user/player encounters this character. Also, characteristics of output audio may be affected based on events of the MAR application. | 06-28-2012 |
20120162255 | TECHNIQUES FOR MOBILE AUGMENTED REALITY APPLICATIONS - Techniques are disclosed that involve mobile augmented reality (MAR) applications in which users (e.g., players) may experience augmented reality. Further, the actual geographical position of MAR application objects (e.g., players, characters, and other objects) may be tracked, represented, and manipulated. Accordingly, MAR objects may be tracked across multiple locations (e.g., multiple geographies and player environments). Moreover, MAR content may be manipulated and provided to the user based on a current context of the user. | 06-28-2012 |
20120166993 | PROJECTION INTERFACE TECHNIQUES - Techniques are disclosed that involve projection interfaces, such as multitouch projected displays (MTPDs). For example, a user may activate a projection interface without having to interact with a non-projected interface (e.g., a keyboard or keypad). Also, a user may select or adjust various device settings. Moreover, various user applications may be allocated among a projected interface and another display (e.g., an integrated display device). Such techniques may be employed in various environments, such as ones in which display and input devices exist in addition to a projection interface. Through such techniques, ease of use for projection interfaces may be advantageously achieved. | 06-28-2012 |
20120249429 | CONTINUED VIRTUAL LINKS BETWEEN GESTURES AND USER INTERFACE ELEMENTS - A device includes a processor to receive input data from an image detector, where the input data includes data obtained from tracking air movements of a user's body part interacting with a virtual object on an electronic display, the processor to map the input data to a control input to move the virtual object beyond the display. The device could, for example, include a mobile device such as a smartphone or a laptop. The virtual object could, for example, move to another display or to a bezel of the device. A touch screen sensor may allow the virtual object to be pinched from the display, before being lifted beyond the display. The processor may map the input data to control input to create a virtual binding of the virtual object in order to create a visual rendering of a connection between the virtual object and the user's body part. | 10-04-2012 |
20120249587 | KEYBOARD AVATAR FOR HEADS UP DISPLAY (HUD) - In some embodiments, the invention involves using a heads up display (HUD) or head mounted display (HMD) to view a representation of a user's fingers with an input device communicatively connected to a computing device. The keyboard/finger representation is displayed along with the application display received from a computing device. In an embodiment, the input device has an accelerometer to detect tilting movement in the input device, and send this information to the computing device. An embodiment provides visual feedback of key or control actuation in the HUD/HMD display. Other embodiments are described and claimed. | 10-04-2012 |
20140071069 | TECHNIQUES FOR TOUCH AND NON-TOUCH USER INTERACTION INPUT - Various embodiments are generally directed to a method and apparatus having a touch screen module to receive first input data from a touch screen sensor based on one or more detected touch inputs at a first location of a virtual object displayed on a display. In addition, an ultrasonic module may receive second input data from an ultrasonic sensor based on detected non-touch motion associated with the virtual object. The detected non-touch motion may be tracked from the first location to a second location in a direction away from the first location based on the second input data and used to determine the second location for the virtual object based on the tracking. | 03-13-2014 |
20140089399 | DETERMINING AND COMMUNICATING USER'S EMOTIONAL STATE - According to various aspects of the present disclosure, a system and associated method and functions to determine an emotional state of a user are disclosed. In some embodiments, the disclosed system includes a data acquisition unit, an emotion determination unit, and an emotion reporting unit. The data acquisition unit is configured to detect user information including physiological and non-physiological data associated with the user. The emotion determination unit is operatively connected to the data acquisition unit, and is configured to process the user information to determine an emotional state of the user. The emotion reporting unit is configured to communicate the emotional state based on a predetermined reporting preference to an application of a communication device, e.g., a social-networking application to share the emotional state of the user such that other members of the social network associated with the user are notified of the user's emotional state. | 03-27-2014 |
20140092130 | SELECTIVELY AUGMENTING COMMUNICATIONS TRANSMITTED BY A COMMUNICATION DEVICE - Technologies for selectively augmenting communications transmitted by a communication device include a communication device configured to acquire new user environment information relating to the environment of the user if such new user environment information becomes available. The communication device is further configured to create one or more user environment indicators based on the new user environment information, to display the one or more created user environment indicators via a display of the communication device and include the created user environment indicator in a communication to be transmitted by the communication device if the created user environment indicator is selected for inclusion in the communication. | 04-03-2014 |
20140095420 | PERSONAL ADVOCATE - According to various aspects of the present disclosure, a system and associated method and functions to anticipate a need of a user are disclosed. In some embodiments, the disclosed system includes a data acquisition unit, a prediction unit, and an operation unit. The data acquisition unit is configured to detect user information, the user information including physiological and non-physiological data associated with the user. The prediction unit is operatively connected to the data acquisition unit to receive the user information, and is configured to anticipate a user need (e.g., need for medical assistance, need for language translation support, etc.) based on pre-defined user preferences, as well as on the physiological data or the non-physiological data or both. And, the operation unit is configured to automatically perform an operation, without user input, to address the user need (e.g., contact a medical facility, provide a language translation application to the user, etc.). | 04-03-2014 |
20140218187 | ASSESSMENT AND MANAGEMENT OF EMOTIONAL STATE OF A VEHICLE OPERATOR - Devices, systems, and techniques are provided for assessment and management of an emotional state of a vehicle operator. Assessment of the emotional state of the vehicle operator can include accessing operational information indicative of performance of a vehicle, behavioral information indicative of behavior of an operator of the vehicle, and/or wellness information indicative of a physical condition of the operator of the vehicle. In one aspect, these three types of information can be combined to generate a rich set of data, metadata, and/or signaling that can be utilized or otherwise leveraged to generate a condition metric representative of the emotional state of the vehicle operator. Management of the emotional state can be customized to the specific context of the vehicle and/or the emotional state, and can be implemented proactively or reactively. | 08-07-2014 |
20140280137 | SENSOR ASSOCIATED DATA OF MULTIPLE DEVICES BASED COMPUTING - Computer-readable storage media, apparatus and method associated with storing a copy of local data in a historical data store, among other embodiments, are disclosed herein. In embodiments, one or more computer-readable storage media may contain instructions which when executed by a computing device may provide access of local data to one or more applications on the computing device for contemporaneous processing by the one or more applications. The local data may be associated, at least in part, with one or more sensors of the computing device. In some embodiments, a copy of the local data may be transmitted to a remote historical data store where it may be categorized and correlated with data from computing devices associated with one or more other users for further processing. | 09-18-2014 |
20140281956 | MENU SYSTEM AND INTERACTIONS WITH AN ELECTRONIC DEVICE - An electronic device includes a controller that is configured to execute an iconic menu system. A display is coupled to the controller and configured to display icons generated by the controller. A plurality of sensors are coupled to the controller and configured to detect a movement of the electronic device. Memory is also coupled to the controller. The controller is further configured to execute a selected one of a plurality of functions in response to the movement, the function being associated with a selected icon of the iconic menu system. | 09-18-2014 |
20140281975 | SYSTEM FOR ADAPTIVE SELECTION AND PRESENTATION OF CONTEXT-BASED MEDIA IN COMMUNICATIONS - A system and method for adaptive selection of context-based media for use in communication includes a user communication device configured to receive and process data captured by one or more sensors and determine contextual characteristics of a user environment based on the captured data. The user communication device is configured to identify media associated with the contextual characteristics of the user environment. In particular, the identified media may correspond to a contextual characteristic specifically assigned to the media and may also include content related to the contextual characteristics of the user environment. The user communication device is further configured to display the identified media via a display of the user communication device and include the identified media in a communication to be transmitted by the user communication device if the identified media is selected for inclusion in the communication. | 09-18-2014 |
20140282273 | SYSTEM AND METHOD FOR ASSIGNING VOICE AND GESTURE COMMAND AREAS - A system and method for assigning user input command areas for receiving user voice and air-gesture commands and allowing user interaction and control of multiple applications of a computing device. The system includes a voice and air-gesture capturing system configured to allow a user to assign three-dimensional user input command areas within the computing environment for each of the multiple applications. The voice and air-gesture capturing system is configured to receive data captured by one or more sensors in the computing environment and identify user input based on the data, including user speech and/or air-gesture commands within one or more user input command areas. The voice and air-gesture capturing system is further configured to identify an application corresponding to the user input based on the identified user input command area and allow user interaction with the identified application based on the user input. | 09-18-2014 |
20140282278 | DEPTH-BASED USER INTERFACE GESTURE CONTROL - Technologies for depth-based gesture control include a computing device having a display and a depth sensor. The computing device is configured to recognize an input gesture performed by a user, determine a depth relative to the display of the input gesture based on data from the depth sensor, assign a depth plane to the input gesture as a function of the depth, and execute a user interface command based on the input gesture and the assigned depth plane. The user interface command may control a virtual object selected by depth plane, including a player character in a game. The computing device may recognize primary and secondary virtual touch planes and execute a secondary user interface command for input gestures on the secondary virtual touch plane, such as magnifying or selecting user interface elements or enabling additional functionality based on the input gesture. Other embodiments are described and claimed. | 09-18-2014 |
20140292807 | ENVIRONMENT ACTUATION BY ONE OR MORE AUGMENTED REALITY ELEMENTS - Apparatuses, systems, media and methods may provide for environment actuation by one or more augmented reality elements. A location module may determine a location of one or more networked devices in a real space and/or establish a location of the one or more augmented reality elements in a virtual space, which may be mapped to the real space. A coordinator module may coordinate a virtual action in the virtual space of the one or more augmented reality elements with an actuation event by the one or more networked devices in the real space. The actuation event may correspond to the virtual action in the virtual space and be discernible in the real space. | 10-02-2014 |
20140300565 | VIRTUAL LINKS BETWEEN DIFFERENT DISPLAYS TO PRESENT A SINGLE VIRTUAL OBJECT - Virtual links are used between two different displays to represent a single virtual object. In one example, the invention includes generating a three-dimensional space having a first display of a real first device and a second display of a real second device and a space between the first display and the second display, receiving a launch command as a gesture with respect to the first display, the launch command indicating that a virtual object is to be launched through the space toward the second display, determining a trajectory through the space toward the second display based on the received launch command, and presenting a portion of the trajectory on the second display. | 10-09-2014 |
20140355476 | SYSTEMS AND METHODS FOR MESH NETWORK DEPLOYMENT - Systems and methods to deploy a mesh network comprising one or more access point (AP) modules and at least one deployment server to a deployment region. The AP module may be configured to determine location and/or status information based on global navigation satellite signals and/or sensor signals and communicate the location and/or status information to the deployment servers. The deployment servers may be configured to initiate deployment of additional AP modules to the deployment region based at least in part on the received location and/or status information. | 12-04-2014 |
20150070382 | SYSTEM TO ACCOUNT FOR IRREGULAR DISPLAY SURFACE PHYSICS - This disclosure is directed to a system to account for irregular display surface physics. In one embodiment, an example device may comprise a display including at least one curved surface on which content may be presented. The content may be presented based at least on simulated physical behavior associated with the curved surface. For example, the device may determine the display surface configuration, determine the simulated physical behavior in the content and present the content based at least on these determinations. The content may then appear to behave in accordance with the physics of the curved surface. The device may also comprise sensors to determine at least one device or environmental condition such as, for example, gravitational force direction, device motion, etc. The device may then take into account the physical behavior associated with the curved surface in view of the sensed device or environmental condition when presenting the content. | 03-12-2015 |
20150074545 | CONTENT RECONFIGURATION BASED ON CHARACTERISTIC ANALYSIS - This disclosure is directed to content reconfiguration based on characteristic analysis. A device may comprise a display and a user interface module to cause content to be presented on the display. When content is to be presented, content and display characteristics may first be determined. The content may then be altered based on the characteristics of the display. For example, the content may be divided into portions and at least one portion of the content may be determined for presentation based on subject matter in the content selected in view of at least one of user preferences, contextual information determined by at least one sensor in the device, etc. The user interface module may then alter the at least one portion of content for presentation, if necessary, and may further cause the at least one portion of content to be presented on the display. | 03-12-2015 |
20150078628 | PROCESSING OF IMAGES OF A SUBJECT INDIVIDUAL - Embodiments of techniques, apparatuses, systems and computer-readable media for processing images of a subject individual are disclosed. In some embodiments, a computing system may receive a plurality of candidate images of the subject individual, and may generate pose data for each of the candidate images. The pose data of a candidate image may be representative of a pose of the subject individual in that image. The computing system may compare the pose data to target pose data to generate a similarity value for each of the candidate images for selecting an image from the candidate images. Other embodiments may be disclosed and/or claimed. | 03-19-2015 |
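As an illustration of the modal-pairing idea summarized in application 20110161890 above, here is a minimal sketch, not the patented implementation: it assumes two hypothetical interpreters (`interpret_gesture`, `interpret_voice`), one per modal input device, and pairs each resulting action set with a single displayed object that it controls.

```python
# Minimal sketch of modal interpretation, pairing, and control (assumed names/values).
from dataclasses import dataclass

@dataclass
class DisplayedObject:
    name: str
    x: float = 0.0
    y: float = 0.0

def interpret_gesture(sample):
    # Interpret one raw gesture sample into a (dx, dy) action.
    return {"dx": sample.get("hand_dx", 0.0), "dy": sample.get("hand_dy", 0.0)}

def interpret_voice(sample):
    # Interpret a spoken command into a coarse movement action.
    commands = {"left": (-1.0, 0.0), "right": (1.0, 0.0), "up": (0.0, -1.0), "down": (0.0, 1.0)}
    dx, dy = commands.get(sample.get("word", ""), (0.0, 0.0))
    return {"dx": dx, "dy": dy}

# Modal pairing: each modal input stream's interpreter controls exactly one displayed object.
pairings = {
    "gesture": (interpret_gesture, DisplayedObject("cursor")),
    "voice": (interpret_voice, DisplayedObject("avatar")),
}

def on_modal_sample(modality, sample):
    # Modal control: apply the interpreted action set to the paired object only.
    interpreter, obj = pairings[modality]
    action = interpreter(sample)
    obj.x += action["dx"]
    obj.y += action["dy"]
    return obj

on_modal_sample("gesture", {"hand_dx": 2.0, "hand_dy": -1.0})  # moves the "cursor" object
on_modal_sample("voice", {"word": "left"})                     # moves the "avatar" object
```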
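Similarly, the depth-plane assignment described in application 20140282278 can be pictured with a short sketch. The depth thresholds, plane names, and command handlers below are assumptions for illustration only; the application itself does not specify these values.

```python
# Minimal sketch: assign a depth plane to a recognized gesture and pick a command.
PRIMARY_PLANE_MAX_M = 0.15    # assumed: gestures closer than this act on the primary plane
SECONDARY_PLANE_MAX_M = 0.45  # assumed: farther gestures fall on the secondary plane

def assign_depth_plane(depth_m: float) -> str:
    # Map the measured gesture depth (from a depth sensor) to a named plane.
    if depth_m <= PRIMARY_PLANE_MAX_M:
        return "primary"
    if depth_m <= SECONDARY_PLANE_MAX_M:
        return "secondary"
    return "ignored"

def execute_command(gesture: str, depth_m: float) -> str:
    # Execute a user interface command based on the gesture and its assigned plane.
    plane = assign_depth_plane(depth_m)
    if plane == "primary":
        return f"apply '{gesture}' to the selected object"   # e.g., move a player character
    if plane == "secondary":
        return f"apply secondary '{gesture}' command"         # e.g., magnify or select UI elements
    return "no-op"

print(execute_command("swipe_left", 0.10))  # primary plane
print(execute_command("pinch", 0.30))       # secondary plane
```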
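Finally, the cross-display trajectory in application 20140300565 can be sketched as follows. The geometry (display gap and width), the ballistic physics, and the function names are illustrative assumptions, not details taken from the application: a launch gesture on the first display yields a velocity, the virtual object follows a simple path through the modeled space between the devices, and only the points that reach the second display's region are presented there.

```python
# Minimal sketch of launching a virtual object from one display toward another.
GAP_M = 0.5          # assumed distance between the two displays in the modeled 3-D space
DISPLAY2_WIDTH_M = 0.3  # assumed width of the second display's region
GRAVITY = 9.81       # m/s^2, pulls the object "down" in the shared space
TIME_STEP = 0.02     # simulation step in seconds

def trajectory(vx: float, vy: float):
    """Yield (x, y) points from the launch gesture's velocity until the object
    passes the far edge of the second display's region."""
    x = y = t = 0.0
    while x < GAP_M + DISPLAY2_WIDTH_M:
        yield x, y
        t += TIME_STEP
        x = vx * t
        y = vy * t - 0.5 * GRAVITY * t * t

def points_on_second_display(vx: float, vy: float):
    # Present only the portion of the trajectory that crosses onto the second display.
    return [(x, y) for x, y in trajectory(vx, vy) if x >= GAP_M]

print(points_on_second_display(vx=1.5, vy=2.0)[:3])
```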