Patent application number | Description | Published |
20100159909 | Personalized Cloud of Mobile Tasks - A dynamically created and automatically updated personalized cloud of mobile tasks may be displayed on an interactive visual display via a personalized cloud generator application. The personalized cloud generator application may receive and/or capture information representing a mobile task performed by a mobile computing device user. The personalized cloud generator application may then store the information and determine the relevance of a given performed mobile task. If the relevance of the performed mobile task meets a prescribed threshold, the personalized cloud generator application may display a selectable visual representation (e.g., a selectable icon) of the performed mobile task. As the user's activity changes, the visual representation may be automatically updated (displayed, removed, moved, resized, etc.) based on the information received and/or captured. Subsequent selection of the displayed visual representation allows quick and easy access to, or performance of, the associated mobile task. | 06-24-2010 |
20100318366 | Touch Anywhere to Speak - The present invention provides a user interface for press-to-talk interaction via a touch-anywhere-to-speak module on a mobile computing device. Upon receiving an indication of a touch anywhere on the screen of a touch screen interface, the touch-anywhere-to-speak module activates the listening mechanism of a speech recognition module to accept audible user input and displays dynamic visual feedback of the measured sound level of the received audible input. The touch-anywhere-to-speak module may also provide the user with a more convenient and accurate speech recognition experience by applying data about the context of the touch (e.g., its relative location on the visual interface) in correlation with the spoken audible input. | 12-16-2010 |
20120253788 | Augmented Conversational Understanding Agent - An augmented conversational understanding agent may be provided. Upon receiving, by an agent, at least one natural language phrase from a user, a context associated with the at least one natural language phrase may be identified. The natural language phrase may be associated, for example, with a conversation between the user and a second user. An agent action associated with the identified context may be performed according to the at least one natural language phrase and a result associated with performing the action may be displayed. | 10-04-2012 |
20120253789 | Conversational Dialog Learning and Correction - Conversational dialog learning and correction may be provided. Upon receiving a natural language phrase from a first user, at least one second user associated with the natural language phrase may be identified. A context state may be created according to the first user and the at least one second user. The natural language phrase may then be translated into an agent action according to the context state. | 10-04-2012 |
20120253790 | Personalization of Queries, Conversations, and Searches - Personalization of user interactions may be provided. Upon receiving a phrase from a user, a plurality of semantic concepts associated with the user may be loaded. If the phrase is determined to comprise at least one of the plurality of semantic concepts associated with the user, a first action may be performed according to the phrase. If the phrase is determined not to comprise at least one of the plurality of semantic concepts associated with the user, a second action may be performed according to the phrase. | 10-04-2012 |
20120253791 | Task Driven User Intents - Identification of user intents may be provided. A plurality of network applications may be identified, and an ontology associated with each of the plurality of applications may be defined. If a phrase received from a user is associated with at least one of the defined ontologies, an action associated with the network application may be executed. | 10-04-2012 |
20120253802 | Location-Based Conversational Understanding - Location-based conversational understanding may be provided. Upon receiving a query from a user, an environmental context associated with the query may be generated. The query may be interpreted according to the environmental context. The interpreted query may be executed and at least one result associated with the query may be provided to the user. | 10-04-2012 |
20120254227 | Augmented Conversational Understanding Architecture - An augmented conversational understanding architecture may be provided. Upon receiving a natural language phrase from a user, the phrase may be translated into a search phrase and a search action may be performed on the search phrase. | 10-04-2012 |
20120254810 | Combined Activation for Natural User Interface Systems - A user interaction activation may be provided. A plurality of signals received from a user may be evaluated to determine whether the signals are associated with a visual display. If so, the signals may be translated into an agent action and a context associated with the visual display may be retrieved. The agent action may be performed according to the retrieved context and a result associated with the performed agent action may be displayed to the user. | 10-04-2012 |
20120259633 | AUDIO-INTERACTIVE MESSAGE EXCHANGE - A completely hands-free exchange of messages, especially on portable devices, is provided through a combination of speech recognition, text-to-speech (TTS), and detection algorithms. An incoming message may be read aloud to a user and, upon determining that the audio interaction mode is appropriate, the user may be enabled to respond to the sender with a reply message through audio input. Users may also be provided with options for responding in a different communication mode (e.g., a call) or performing other actions. Users may further be enabled to initiate a message exchange using natural language. | 10-11-2012 |
20130207898 | Equal Access to Speech and Touch Input - Input access may be provided. A user interface may be displayed on a user device. Upon receiving a selection of at least one element of the user interface, a plurality of input receiving modes of the user device may be activated. | 08-15-2013 |
20130218836 | Deep Linking From Task List Based on Intent - Task list linking may be provided. Upon receiving an input from a user, the input may be translated into at least one actionable item. The at least one actionable item may be linked to a data source and displayed to the user. | 08-22-2013 |
20140194107 | Personalized Cloud of Mobile Tasks - A dynamically created and automatically updated personalized cloud of mobile tasks may be displayed on an interactive visual display via a personalized cloud generator application. The personalized cloud generator application may receive and/or capture information representing a mobile task performed by a mobile computing device user. The personalized cloud generator application may then store the information and determine the relevance of a given performed mobile task. If the relevance of the performed mobile task meets a prescribed threshold, the personalized cloud generator application may display a selectable visual representation (e.g., a selectable icon) of the performed mobile task. As the user's activity changes, the visual representation may be automatically updated (displayed, removed, moved, resized, etc.) based on the information received and/or captured. Subsequent selection of the displayed visual representation allows quick and easy access to, or performance of, the associated mobile task. (A minimal sketch of this threshold-based update appears after the table.) | 07-10-2014 |
20140250378 | USING HUMAN WIZARDS IN A CONVERSATIONAL UNDERSTANDING SYSTEM - A wizard control panel may be used by a human wizard to adjust the operation of a Natural Language (NL) conversational system during a real-time dialog flow. Input to the wizard control panel is detected and used to interrupt or change an automatic operation of one or more of the NL conversational system components used during the flow. For example, the wizard control panel may be used to adjust results determined by an Automated Speech Recognition (ASR) component, a Natural Language Understanding (NLU) component, a Dialog Manager (DM) component, or a Natural Language Generation (NLG) component before the results are used to perform an automatic operation within the flow. A timeout may also be set such that when the timeout expires, the conversational system performs an automated operation using the results shown in the wizard control panel, whether or not they have been edited. (A minimal sketch of this timeout behavior appears after the table.) | 09-04-2014 |
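The "Personalized Cloud of Mobile Tasks" applications above describe capturing a user's performed tasks, scoring each task's relevance, and surfacing a selectable visual representation only when that relevance meets a prescribed threshold. The following minimal Python sketch illustrates that flow under assumptions of this write-up: the `MobileTask` and `TaskCloud` names, the frequency-and-recency scoring, and the `RELEVANCE_THRESHOLD` value are illustrative and are not taken from the applications.

```python
# Hypothetical sketch of the relevance-threshold update described in the
# "Personalized Cloud of Mobile Tasks" abstracts; all names and the scoring
# function are assumptions for illustration only.
from dataclasses import dataclass, field
import time

RELEVANCE_THRESHOLD = 0.5  # assumed "prescribed threshold"

@dataclass
class MobileTask:
    name: str               # e.g. "call_mom", "open_maps"
    use_count: int = 0      # how often the user performed the task
    last_used: float = 0.0  # timestamp of the most recent use

    def relevance(self, now: float) -> float:
        # Assumed scoring: frequency of use, damped by how long ago it occurred.
        recency = 1.0 / (1.0 + (now - self.last_used) / 3600.0)  # hours since use
        return min(1.0, 0.1 * self.use_count) * recency

@dataclass
class TaskCloud:
    tasks: dict = field(default_factory=dict)

    def record(self, name: str) -> None:
        # Capture and store information about a performed mobile task.
        task = self.tasks.setdefault(name, MobileTask(name))
        task.use_count += 1
        task.last_used = time.time()

    def visible_tasks(self) -> list:
        # Only tasks whose relevance meets the threshold keep a selectable
        # visual representation; the rest are removed from the display.
        now = time.time()
        return sorted(
            (t for t in self.tasks.values() if t.relevance(now) >= RELEVANCE_THRESHOLD),
            key=lambda t: t.relevance(now),
            reverse=True,
        )

cloud = TaskCloud()
for _ in range(8):
    cloud.record("call_mom")
cloud.record("open_maps")
print([t.name for t in cloud.visible_tasks()])  # frequently used task shown, rarely used one hidden
```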
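The "Using Human Wizards in a Conversational Understanding System" application describes a human wizard who may edit intermediate results (ASR, NLU, DM, NLG) during a live dialog turn, with a timeout after which the system proceeds using whatever is shown in the control panel, edited or not. Here is a minimal sketch of that timeout behavior, assuming a simple queue as the control panel's edit channel; the function and variable names are illustrative, not from the application.

```python
# Hypothetical sketch: a wizard may correct an intermediate result (here, an
# ASR hypothesis) while a dialog turn is in flight; when the timeout expires,
# the flow continues with whatever the control panel currently shows.
import queue
import threading

WIZARD_TIMEOUT_S = 3.0  # assumed timeout before the automated flow continues


def run_turn(asr_hypothesis: str, wizard_edits: queue.Queue) -> str:
    """Return the result the dialog flow should use for this turn."""
    try:
        # Wait up to the timeout for a wizard correction from the control panel.
        corrected = wizard_edits.get(timeout=WIZARD_TIMEOUT_S)
        return corrected            # wizard intervened: use the edited result
    except queue.Empty:
        return asr_hypothesis       # timeout expired: use the automatic result


if __name__ == "__main__":
    edits = queue.Queue()

    # Simulate a wizard correcting the ASR output one second into the turn.
    threading.Timer(1.0, edits.put, args=["book a table for two"]).start()

    result = run_turn("book a cable for two", edits)
    print(result)  # edited text if the wizard acted in time, else the ASR hypothesis
```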