Patent application number | Description | Published |
20100159909 | Personalized Cloud of Mobile Tasks - A dynamically created and automatically updated personalized cloud of mobile tasks may be displayed on an interactive visual display via a personalized cloud generator application. The personalized cloud generator application may receive and/or capture information representing a mobile task performed by a mobile computing device user. The personalized cloud generator application may then store the information and determine a relevance of a given performed mobile task. If the relevance of the performed mobile task meets a prescribed threshold, the personalized cloud generator application may display a selectable visual representation (e.g., a selectable icon) of the performed mobile task. Given a user's activity, the visual representation may be automatically updated (displayed, removed, moved, resized, etc.) based on the information received and/or captured. Subsequent selection of the displayed visual representation allows quick and easy access to, or performance of, the associated mobile task. | 06-24-2010 |
20100318366 | Touch Anywhere to Speak - The present invention provides a user interface for providing press-to-talk interaction via utilization of a touch-anywhere-to-speak module on a mobile computing device. Upon receiving an indication of a touch anywhere on the screen of a touch screen interface, the touch-anywhere-to-speak module activates the listening mechanism of a speech recognition module to accept audible user input and displays dynamic visual feedback of a measured sound level of the received audible input. The touch-anywhere-to-speak module may also provide the user with a convenient and more accurate speech recognition experience by utilizing and applying data relative to the context of the touch (e.g., its relative location on the visual interface) in correlation with the spoken audible input. | 12-16-2010 |
20120253788 | Augmented Conversational Understanding Agent - An augmented conversational understanding agent may be provided. Upon receiving, by an agent, at least one natural language phrase from a user, a context associated with the at least one natural language phrase may be identified. The natural language phrase may be associated, for example, with a conversation between the user and a second user. An agent action associated with the identified context may be performed according to the at least one natural language phrase and a result associated with performing the action may be displayed. | 10-04-2012 |
20120253789 | Conversational Dialog Learning and Correction - Conversational dialog learning and correction may be provided. Upon receiving a natural language phrase from a first user, at least one second user associated with the natural language phrase may be identified. A context state may be created according to the first user and the at least one second user. The natural language phrase may then be translated into an agent action according to the context state. | 10-04-2012 |
20120253790 | Personalization of Queries, Conversations, and Searches - Personalization of user interactions may be provided. Upon receiving a phrase from a user, a plurality of semantic concepts associated with the user may be loaded. If the phrase is determined to comprise at least one of the plurality of semantic concepts associated with the user, a first action may be performed according to the phrase. If the phrase is determined not to comprise at least one of the plurality of semantic concepts associated with the user, a second action may be performed according to the phrase. | 10-04-2012 |
20120253791 | Task Driven User Intents - Identification of user intents may be provided. A plurality of network applications may be identified, and an ontology associated with each of the plurality of applications may be defined. If a phrase received from a user is associated with at least one of the defined ontologies, an action associated with the network application may be executed. | 10-04-2012 |
20120253802 | Location-Based Conversational Understanding - Location-based conversational understanding may be provided. Upon receiving a query from a user, an environmental context associated with the query may be generated. The query may be interpreted according to the environmental context. The interpreted query may be executed and at least one result associated with the query may be provided to the user. | 10-04-2012 |
20120254227 | Augmented Conversational Understanding Architecture - An augmented conversational understanding architecture may be provided. Upon receiving a natural language phrase from a user, the phrase may be translated into a search phrase and a search action may be performed on the search phrase. | 10-04-2012 |
20120254810 | Combined Activation for Natural User Interface Systems - A user interaction activation may be provided. A plurality of signals received from a user may be evaluated to determine whether the plurality of signals are associated with a visual display. If so, the plurality of signals may be translated into an agent action and a context associated with the visual display may be retrieved. The agent action may be performed according to the retrieved context and a result associated with the performed agent action may be displayed to the user. | 10-04-2012 |
20120259633 | AUDIO-INTERACTIVE MESSAGE EXCHANGE - A completely hands-free exchange of messages, especially on portable devices, is provided through a combination of speech recognition, text-to-speech (TTS), and detection algorithms. An incoming message may be read aloud to a user, and the user may be enabled to respond to the sender with a reply message through audio input upon a determination that the audio interaction mode is appropriate. Users may also be provided with options for responding in a different communication mode (e.g., a call) or performing other actions. Users may further be enabled to initiate a message exchange using natural language. | 10-11-2012 |
20130207898 | Equal Access to Speech and Touch Input - Input access may be provided. A user interface may be displayed on a user device. Upon receiving a selection of at least one element of the user interface, a plurality of input receiving modes of the user device may be activated. | 08-15-2013 |
20130218836 | Deep Linking From Task List Based on Intent - Task list linking may be provided. Upon receiving an input from a user, the input may be translated into at least one actionable item. The at least one actionable item may be linked to a data source and displayed to the user. | 08-22-2013 |
20140194107 | Personalized Cloud of Mobile Tasks - A dynamically created and automatically updated personalized cloud of mobile tasks may be displayed on an interactive visual display via a personalized cloud generator application. The personalized cloud generator application may receive and/or capture information representing a mobile task performed by a mobile computing device user. The personalized cloud generator application may then store the information and determine a relevance of a given performed mobile task. If the relevance of the performed mobile task meets a prescribed threshold, the personalized cloud generator application may display a selectable visual representation (e.g., a selectable icon) of the performed mobile task. Given a user's activity, the visual representation may be automatically updated (displayed, removed, moved, resized, etc.) based on the information received and/or captured. Subsequent selection of the displayed visual representation allows quick and easy access to, or performance of, the associated mobile task. | 07-10-2014 |
20140250378 | USING HUMAN WIZARDS IN A CONVERSATIONAL UNDERSTANDING SYSTEM - A wizard control panel may be used by a human wizard to adjust the operation of a Natural Language (NL) conversational system during a real-time dialog flow. Input to the wizard control panel is detected and used to interrupt/change an automatic operation of one or more of the NL conversational system components used during the flow. For example, the wizard control panel may be used to adjust results determined by an Automated Speech Recognition (ASR) component, a Natural Language Understanding (NLU) component, a Dialog Manager (DM) component, and a Natural Language Generation (NLG) component before the results are used to perform an automatic operation within the flow. A timeout may also be set such that when the timeout expires, the conversational system performs an automated operation by using the results shown in the wizard control panel (edited/not edited). | 09-04-2014 |
20150199017 | COORDINATED SPEECH AND GESTURE INPUT - A method to be enacted in a computer system operatively coupled to a vision system and to a listening system. The method applies natural user input to control the computer system. It includes the acts of detecting verbal and non-verbal touchless input from a user of the computer system, selecting one of a plurality of user-interface objects based on coordinates derived from the non-verbal, touchless input, decoding the verbal input to identify a selected action from among a plurality of actions supported by the selected object, and executing the selected action on the selected object. | 07-16-2015 |
20150365448 | FACILITATING CONVERSATIONS WITH AUTOMATED LOCATION MAPPING - Individuals may utilize devices to engage in conversations about topics respectively associated with a location (e.g., restaurants where the individuals may meet for dinner). Often, the individual momentarily withdraws from the conversation in order to issue commands to the device to retrieve and present such information, and may miss parts of the conversation while interacting with the device. Additionally, the individual often explores such topics individually on a device and conveys such information to the other individuals through messages, which is inefficient and error-prone. Presented herein are techniques enabling devices to facilitate conversations by monitoring the conversation for references, by one individual to another (rather than as a command to the device), to a topic associated with a location. In the absence of a command from an individual, the device may automatically present a map alongside a conversation interface showing the location(s) of the topic(s) referenced in the conversation. | 12-17-2015 |
20160034249 | SPEECHLESS INTERACTION WITH A SPEECH RECOGNITION DEVICE - Embodiments for interacting with speech input systems are provided. One example provides an electronic device including an earpiece, a speech input system, and a speechless input system. The electronic device further includes instructions executable to present requests to a user via audio outputs, and receive user inputs in response to the requests via a first input mode in which user inputs are made via the speech input system, and also receive user inputs in response to the requests via a second input mode in which responses to the requests are made via the speechless input system. | 02-04-2016 |
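Several abstracts in the table above (20100159909 and 20140194107, "Personalized Cloud of Mobile Tasks") describe showing a task's icon only when the task's relevance meets a prescribed threshold. The filings do not specify a scoring formula, so the sketch below assumes a simple frequency-based relevance score; the names `visible_tasks` and `RELEVANCE_THRESHOLD` are illustrative, not from the patents:

```python
# Hypothetical sketch of the relevance-threshold display logic described in
# the "Personalized Cloud of Mobile Tasks" abstracts. The scoring formula
# (share of recent activity) is an assumption for illustration only.
from collections import Counter

RELEVANCE_THRESHOLD = 0.2  # assumed "prescribed threshold"

def visible_tasks(task_log):
    """Return the tasks whose share of logged activity meets the threshold."""
    counts = Counter(task_log)
    total = sum(counts.values())
    return {task for task, n in counts.items() if n / total >= RELEVANCE_THRESHOLD}

# Ten captured mobile tasks: "email" (6) and "maps" (3) pass the 20% bar,
# "camera" (1) does not, so its icon would be removed from the cloud.
log = ["email", "email", "maps", "email", "camera",
       "maps", "email", "email", "email", "maps"]
print(sorted(visible_tasks(log)))  # ['email', 'maps']
```

Re-running `visible_tasks` as new activity is captured mirrors the abstracts' automatic updating: icons appear, disappear, or could be resized as each task's score crosses the threshold.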
Patent application number | Description | Published |
20090113296 | DISPLAYING A MAP AND ASSOCIATED SYMBOLIC CONTEXT INFORMATION - A map of a destination and its immediate surroundings is displayed at a relatively low level. Symbolic context information is displayed simultaneously, in order to provide a higher-level context for the location. The symbolic context information can include such things as nearby highways, exits, bridges, sports venues, or other landmarks and points of interest. The symbolic context information can be displayed, for example, on the perimeter of the map, or on the map in a distinct visual style such as a different color, font, or fish-eye view. As the map is updated (for example, if the user zooms in or out), the context information is updated as well. The context information can be interactive and can display the relationship between the context item and the location being mapped. | 04-30-2009 |
20090125499 | MACHINE-MODERATED MOBILE SOCIAL NETWORKING FOR MANAGING QUERIES - Systems and methods of machine-moderated mobile social networking for managing queries are disclosed herein. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, of receiving queries from a mobile device and intelligently distributing the queries among users that are deemed suitable to provide useful insight into the queries. The queries are typically questions asked by potential patrons regarding specific venues, or by patrons looking for specific businesses and/or events that fit their specific criteria (by way of example but not limitation: geography, locale, type of cuisine, ambience, music, etc.). In most instances, a consumer can send the query from a portable device (e.g., cell phone, BlackBerry, telephone, iPhone, Treo, etc.) in various formats (e.g., SMS text, voice call, USSD message, IM, and/or email) to a predetermined phone number and/or other types of address identifiers. | 05-14-2009 |
20100027776 | METHOD AND APPARATUS FOR PROVIDING RINGBACK TONES - A method of providing a ringback tone to a calling party. The method includes receiving a call directed to a subscriber from the calling party. At least one of an adaptive ringback tone and an actionable ringback tone is provided to the calling party. The adaptive ringback tone is based on state data. | 02-04-2010 |
20120076290 | METHOD AND APPARATUS FOR PROVIDING RINGBACK TONES - A method of providing a ringback tone to a calling party. The method includes receiving a call directed to a subscriber from the calling party. At least one of an adaptive ringback tone and an actionable ringback tone is provided to the calling party. The adaptive ringback tone is based on state data. | 03-29-2012 |
20120207288 | PROVIDING MISSED CALL AND MESSAGE INFORMATION - Information associated with messages and/or missed calls is provided to a subscriber. Calls received but not answered by the subscriber may be monitored. Each monitored call is classified as one of a missed call and a message. The monitored calls may be summarized based on a customizable rule set to create a summary. The summary is provided to the subscriber via, for example, a voice notification. | 08-16-2012 |
20130158980 | SUGGESTING INTENT FRAME(S) FOR USER REQUEST(S) - Techniques are described herein that are capable of suggesting intent frame(s) for user request(s). For instance, the intent frame(s) may be suggested to elicit a request from a user. An intent frame is a natural language phrase (e.g., a sentence) that includes at least one carrier phrase and at least one slot. A slot in an intent frame is a placeholder that is identified as being replaceable by one or more words that identify an entity and/or an action to indicate an intent of the user. A carrier phrase in an intent frame includes one or more words that suggest a type of entity and/or action that is to be identified by the one or more words that may replace the corresponding slot. In accordance with these techniques, the intent frame(s) are suggested in response to determining that natural language functionality of a processing system is activated. | 06-20-2013 |
20130159001 | SATISFYING SPECIFIED INTENT(S) BASED ON MULTIMODAL REQUEST(S) - Techniques are described herein that are capable of satisfying specified intent(s) based on multimodal request(s). A multimodal request is a request that includes at least one request of a first type and at least one request of a second type that is different from the first type. Example types of request include but are not limited to a speech request, a text command, a tactile command, and a visual command. A determination is made that one or more entities in visual content are selected in accordance with an explicit scoping command from a user. In response, speech understanding functionality is automatically activated, and audio signals are automatically monitored for speech requests from the user to be processed using the speech understanding functionality. | 06-20-2013 |
20130322609 | PROVIDING MISSED CALL AND MESSAGE INFORMATION - Information associated with messages and/or missed calls is provided to a subscriber. Calls received but not answered by the subscriber may be monitored. Each monitored call is classified as one of a missed call and a message. The monitored calls may be summarized based on a customizable rule set to create a summary. The summary is provided to the subscriber via, for example, a voice notification. | 12-05-2013 |
20140330570 | SATISFYING SPECIFIED INTENT(S) BASED ON MULTIMODAL REQUEST(S) - Techniques are described herein that are capable of satisfying specified intent(s) based on multimodal request(s). A multimodal request is a request that includes at least one request of a first type and at least one request of a second type that is different from the first type. Example types of request include but are not limited to a speech request, a text command, a tactile command, and a visual command. A determination is made that one or more entities in visual content are selected in accordance with an explicit scoping command from a user. In response, speech understanding functionality is automatically activated, and audio signals are automatically monitored for speech requests from the user to be processed using the speech understanding functionality. | 11-06-2014 |
20160078868 | SUGGESTING INTENT FRAME(S) FOR USER REQUEST(S) - Techniques are described herein that are capable of suggesting intent frame(s) for user request(s). For instance, the intent frame(s) may be suggested to elicit a request from a user. An intent frame is a natural language phrase (e.g., a sentence) that includes at least one carrier phrase and at least one slot. A slot in an intent frame is a placeholder that is identified as being replaceable by one or more words that identify an entity and/or an action to indicate an intent of the user. A carrier phrase in an intent frame includes one or more words that suggest a type of entity and/or action that is to be identified by the one or more words that may replace the corresponding slot. In accordance with these techniques, the intent frame(s) are suggested in response to determining that natural language functionality of a processing system is activated. | 03-17-2016 |
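The "Suggesting Intent Frame(s)" abstracts above (20130158980 and 20160078868) define an intent frame as a carrier phrase plus a slot placeholder that the user fills to state an intent. The frame strings, intent names, and regex-based matching below are assumptions made for illustration; the patents do not prescribe an implementation:

```python
# Illustrative sketch of intent frames as described in the "Suggesting
# Intent Frame(s)" abstracts: each frame pairs a carrier phrase ("call ...",
# "play ...") with a named slot the user's words replace. All frame content
# here is hypothetical, not taken from the filings.
import re

INTENT_FRAMES = [
    ("call_contact", r"call (?P<contact>\w+)"),  # carrier phrase "call", slot "contact"
    ("play_media",   r"play (?P<title>.+)"),     # carrier phrase "play", slot "title"
]

def match_intent(utterance):
    """Return (intent name, slot values) for the first frame the utterance fills."""
    for intent, pattern in INTENT_FRAMES:
        m = re.fullmatch(pattern, utterance.strip().lower())
        if m:
            return intent, m.groupdict()
    return None, {}

print(match_intent("Call Alice"))  # ('call_contact', {'contact': 'alice'})
```

Per the abstracts, a system would suggest such frames (e.g., display "call ___") once natural language functionality is active, eliciting a request whose slot words then identify the entity and action to perform.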