Patent application number | Description | Published |
20110295602 | APPARATUS AND METHOD FOR MODEL ADAPTATION FOR SPOKEN LANGUAGE UNDERSTANDING - An apparatus and a method are provided for building a spoken language understanding model. Labeled data may be obtained for a target application. A new classification model may be formed for use with the target application by using the labeled data for adaptation of an existing classification model. In some implementations, the existing classification model may be used to determine the most informative examples to label. | 12-01-2011 |
20120232898 | SYSTEM AND METHOD OF PROVIDING AN AUTOMATED DATA-COLLECTION IN SPOKEN DIALOG SYSTEMS - The invention relates to a system and method for gathering data for use in a spoken dialog system. An aspect of the invention is generally referred to as an automated hidden human that performs data collection automatically at the beginning of a conversation with a user in a spoken dialog system. The method comprises presenting an initial prompt to a user, recognizing a received user utterance using an automatic speech recognition engine and classifying the recognized user utterance using a spoken language understanding module. If the recognized user utterance is not understood or classifiable to a predetermined acceptance threshold, then the method re-prompts the user. If the recognized user utterance is not classifiable to a predetermined rejection threshold, then the method transfers the user to a human as this may imply a task-specific utterance. The received and classified user utterance is then used for training the spoken dialog system. | 09-13-2012 |
20130085756 | System and Method of Semi-Supervised Learning for Spoken Language Understanding Using Semantic Role Labeling - A system and method are disclosed for providing semi-supervised learning for a spoken language understanding module using semantic role labeling. The method embodiment relates to a method of generating a spoken language understanding module. Steps in the method comprise selecting at least one predicate/argument pair as an intent from a set of the most frequent predicate/argument pairs for a domain, labeling training data using mapping rules associated with the selected at least one predicate/argument pair, training a call-type classification model using the labeled training data, re-labeling the training data using the call-type classification model, and iteratively repeating several of the above steps until the training set labels converge. | 04-04-2013 |
20130177893 | Method and Apparatus for Responding to an Inquiry - Disclosed is a method and apparatus for responding to an inquiry from a client via a network. The method and apparatus receive the inquiry from a client via a network. Based on the inquiry, question-answer pairs retrieved from the network are analyzed to determine a response to the inquiry. The QA pairs are not predefined. As a result, the QA pairs have to be analyzed in order to determine whether they are responsive to a particular inquiry. Questions of the QA pairs may be repetitive and, without more, will not be useful in determining whether their corresponding answer responds to an inquiry. | 07-11-2013 |
20130289984 | Preserving Privacy in Natural Language Databases - An apparatus and a method for preserving privacy in natural language databases are provided. Natural language input may be received. At least one of sanitizing or anonymizing the natural language input may be performed to form a clean output. The clean output may be stored. | 10-31-2013 |
20130325443 | Library of Existing Spoken Dialog Data for Use in Generating New Natural Language Spoken Dialog Systems - A machine-readable medium may include a group of reusable components for building a spoken dialog system. The reusable components may include a group of previously collected audible utterances. A machine-implemented method to build a library of reusable components for use in building a natural language spoken dialog system may include storing a dataset in a database. The dataset may include a group of reusable components for building a spoken dialog system. The reusable components may further include a group of previously collected audible utterances. A second method may include storing at least one set of data. Each one of the at least one set of data may include ones of the reusable components associated with audible data collected during a different collection phase. | 12-05-2013 |
20140205985 | Method and Apparatus for Responding to an Inquiry - Disclosed is a method and apparatus for responding to an inquiry from a client via a network. The method and apparatus receive the inquiry from a client via a network. Based on the inquiry, question-answer pairs retrieved from the network are analyzed to determine a response to the inquiry. The QA pairs are not predefined. As a result, the QA pairs have to be analyzed in order to determine whether they are responsive to a particular inquiry. Questions of the QA pairs may be repetitive and, without more, will not be useful in determining whether their corresponding answer responds to an inquiry. | 07-24-2014 |
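The data-collection filings above (20120232898 and its continuation) describe routing each recognized utterance by two classifier-confidence thresholds: accepted utterances are kept for training, utterances below a rejection threshold go to a human, and those in between trigger a re-prompt. A minimal sketch of that control flow follows; the threshold values, the three-way ordering, and the label strings are all illustrative assumptions, not taken from the filings.

```python
# Illustrative thresholds; the filings do not specify actual values.
ACCEPT_THRESHOLD = 0.8
REJECT_THRESHOLD = 0.3

def route_utterance(confidence: float) -> str:
    """Route a classified utterance by its classifier confidence.

    - At or above the acceptance threshold: keep it and log it for training.
    - Below the rejection threshold: likely task-specific, transfer to a human.
    - In between: re-prompt the user.
    """
    if confidence >= ACCEPT_THRESHOLD:
        return "accept"
    if confidence < REJECT_THRESHOLD:
        return "transfer"
    return "reprompt"
```

The three-way split is one plausible composition of the acceptance and rejection tests described in the abstracts; the abstracts themselves do not fix how the two tests are ordered.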
Patent application number | Description | Published |
20120166365 | METHOD AND APPARATUS FOR TAILORING THE OUTPUT OF AN INTELLIGENT AUTOMATED ASSISTANT TO A USER - The present invention relates to a method and apparatus for tailoring the output of an intelligent automated assistant. One embodiment of a method for conducting an interaction with a human user includes collecting data about the user using a multimodal set of sensors positioned in a vicinity of the user, making a set of inferences about the user in accordance with the data, and tailoring an output to be delivered to the user in accordance with the set of inferences. | 06-28-2012 |
20120173464 | METHOD AND APPARATUS FOR EXPLOITING HUMAN FEEDBACK IN AN INTELLIGENT AUTOMATED ASSISTANT - The present invention relates to a method and apparatus for exploiting human feedback in an intelligent automated assistant. One embodiment of a method for conducting an interaction with a human user includes inferring an intent from data entered by the human user, formulating a response in accordance with the intent, receiving feedback from a human advisor in response to at least one of the inferring and the formulating, wherein the human advisor is a person other than the human user, and adapting at least one model used in at least one of the inferring and the formulating, wherein the adapting is based on the feedback. | 07-05-2012 |
20120290290 | Sentence Simplification for Spoken Language Understanding - Sentence simplification may be provided. A spoken phrase may be received and converted to a text phrase. An intent associated with the text phrase may be identified. The text phrase may then be reformatted according to the identified intent and a task may be performed according to the reformatted text phrase. | 11-15-2012 |
20120290293 | Exploiting Query Click Logs for Domain Detection in Spoken Language Understanding - Domain detection training in a spoken language understanding system may be provided. Log data associated with a search engine, each entry associated with a search query, may be received. A domain label for each search query may be identified, and the domain label and link data may be provided to a training set for a spoken language understanding model. | 11-15-2012 |
20120290509 | Training Statistical Dialog Managers in Spoken Dialog Systems With Web Data - Training for a statistical dialog manager may be provided. A plurality of log data associated with an intent may be received, and at least one step associated with completing the intent according to the plurality of log data may be identified. An understanding model associated with the intent may be created, including a plurality of queries mapped to the intent. In response to receiving a natural language query from a user that is associated with the intent a response to the user may be provided according to the understanding model. | 11-15-2012 |
20130218836 | Deep Linking From Task List Based on Intent - Task list linking may be provided. Upon receiving an input from a user, the input may be translated into at least one actionable item. The at least one actionable item may be linked to a data source and displayed to the user. | 08-22-2013 |
20150302003 | GENERIC VIRTUAL PERSONAL ASSISTANT PLATFORM - A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in the computer processing system. A set of run-time specifications is provided to the generic language understanding module and the generic task reasoning module, comprising one or more models specific to a domain. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks, in accordance with the intention of the user. | 10-22-2015 |
20160086090 | METHOD AND APPARATUS FOR TAILORING THE OUTPUT OF AN INTELLIGENT AUTOMATED ASSISTANT TO A USER - The present invention relates to a method and apparatus for tailoring the output of an intelligent automated assistant. One embodiment of a method for conducting an interaction with a human user includes collecting data about the user using a multimodal set of sensors positioned in a vicinity of the user, making a set of inferences about the user in accordance with the data, and tailoring an output to be delivered to the user in accordance with the set of inferences. | 03-24-2016 |
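One way to picture the click-log mining described in 20120290293 above: the site a user clicked for a query is mapped to a domain label, yielding (query, domain) pairs for the training set. The sketch below is a toy version of that idea; the queries, URLs, and the site-to-domain table are all invented for illustration and do not come from the filing.

```python
# Hypothetical click-log entries: (query, clicked_url) pairs.
CLICK_LOG = [
    ("cheap flights to boston", "expedia.com/flights"),
    ("movie showtimes tonight", "fandango.com/showtimes"),
    ("weekend weather", "weather.gov/forecast"),  # no mapping below: dropped
]

# Assumed mapping from clicked site to SLU domain label (illustrative only).
URL_DOMAINS = {"expedia.com": "travel", "fandango.com": "movies"}

def label_queries(log):
    """Turn click-log entries into (query, domain) training pairs."""
    training_set = []
    for query, url in log:
        site = url.split("/")[0]          # crude host extraction for the toy data
        domain = URL_DOMAINS.get(site)
        if domain is not None:            # keep only queries whose click maps to a domain
            training_set.append((query, domain))
    return training_set
```

Queries whose clicked site has no domain mapping are simply discarded here; a real pipeline would need a far richer URL-to-domain resource than this two-entry table.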
Patent application number | Description | Published |
20130304451 | BUILDING MULTI-LANGUAGE PROCESSES FROM EXISTING SINGLE-LANGUAGE PROCESSES - Processes capable of accepting linguistic input in one or more languages are generated by re-using existing linguistic components associated with a different anchor language, together with machine translation components that translate between the anchor language and the one or more languages. Linguistic input is directed to machine translation components that translate such input from its language into the anchor language. Those existing linguistic components are then utilized to initiate responsive processing and generate output. Optionally, the output is directed through the machine translation components. A language identifier can initially receive linguistic input and identify the language within which such linguistic input is provided to select an appropriate machine translation component. A hybrid process, comprising machine translation components and linguistic components associated with the anchor language, can also serve as an initiating construct from which a single language process is created over time. | 11-14-2013 |
20130346066 | Joint Decoding of Words and Tags for Conversational Understanding - Joint decoding of words and tags may be provided. Upon receiving an input from a user comprising a plurality of elements, the input may be decoded into a word lattice comprising a plurality of words. A tag may be assigned to each of the plurality of words and a most-likely sequence of word-tag pairs may be identified. The most-likely sequence of word-tag pairs may be evaluated to identify an action request from the user. | 12-26-2013 |
20140059030 | Translating Natural Language Utterances to Keyword Search Queries - Natural language query translation may be provided. A statistical model may be trained to detect domains according to a plurality of query click log data. Upon receiving a natural language query, the statistical model may be used to translate the natural language query into an action. The action may then be performed and at least one result associated with performing the action may be provided. | 02-27-2014 |
20140172899 | PROBABILITY-BASED STATE MODIFICATION FOR QUERY DIALOGUES - A device may facilitate a query dialog involving queries that successively modify a query state. However, fulfilling such queries in the context of possible query domains, query intents, and contextual meanings of query terms may be difficult. Presented herein are techniques for modifying a query state in view of a query by utilizing a set of query state modifications, each representing a modification of the query state possibly intended by the user while formulating the query (e.g., adding, substituting, or removing query terms; changing the query domain or query intent; and navigating within a hierarchy of saved query states). Upon receiving a query, an embodiment may calculate the probability of the query connoting each query state modification (e.g., using a Bayesian classifier), and parse the query according to a query state modification having a high probability (e.g., mapping respective query terms to query slots within the current query intent). | 06-19-2014 |
20140180676 | NAMED ENTITY VARIATIONS FOR MULTIMODAL UNDERSTANDING SYSTEMS - Click logs are automatically mined to assist in discovering candidate variations for named entities. The named entities may be obtained from one or more sources and include an initial list of named entities. A search may be performed within one or more search engines to determine common phrases that are used to identify the named entity in addition to the named entity initially included in the named entity list. Click logs associated with results of past searches are automatically mined to discover what phrases determined from the searches are candidate variations for the named entity. The candidate variations are scored to assist in determining the variations to include within an understanding model. The variations may also be used when delivering responses and displayed output in the spoken language understanding (SLU) system. For example, instead of using the listed named entity, a popular and/or shortened name may be used by the system. | 06-26-2014 |
20140222426 | System and Method of Providing an Automated Data-Collection in Spoken Dialog Systems - The invention relates to a system and method for gathering data for use in a spoken dialog system. An aspect of the invention is generally referred to as an automated hidden human that performs data collection automatically at the beginning of a conversation with a user in a spoken dialog system. The method comprises presenting an initial prompt to a user, recognizing a received user utterance using an automatic speech recognition engine and classifying the recognized user utterance using a spoken language understanding module. If the recognized user utterance is not understood or classifiable to a predetermined acceptance threshold, then the method re-prompts the user. If the recognized user utterance is not classifiable to a predetermined rejection threshold, then the method transfers the user to a human as this may imply a task-specific utterance. The received and classified user utterance is then used for training the spoken dialog system. | 08-07-2014 |
20140236570 | EXPLOITING THE SEMANTIC WEB FOR UNSUPERVISED SPOKEN LANGUAGE UNDERSTANDING - An unsupervised training approach for Spoken Language Understanding (SLU) systems uses the structure of content sources (e.g. semantic knowledge graphs, relational databases, etc.) to automatically specify a semantic representation for SLU. The semantic representation is used when creating entity-relation patterns that are used to mine natural language (NL) examples (e.g. NL surface forms from the web and search query click logs). The structure of the content source (e.g. semantic graph) is enriched with the mined NL examples. The NL examples and patterns may be used to automatically train SLU systems in an unsupervised manner that covers the knowledge represented in the structured content. | 08-21-2014 |
20140236575 | EXPLOITING THE SEMANTIC WEB FOR UNSUPERVISED NATURAL LANGUAGE SEMANTIC PARSING - Structured web pages are accessed and parsed to obtain implicit annotation for natural language understanding tasks. Search queries that hit these structured web pages are automatically mined for information that is used to semantically annotate the queries. The automatically annotated queries may be used for automatically building statistical unsupervised slot filling models without using a semantic annotation guideline. For example, tags that are located on a structured web page that are associated with the search query may be used to annotate the query. The mined search queries may be filtered to create a set of queries that is in a form of a natural language query and/or remove queries that are difficult to parse. A natural language model may be trained using the resulting mined queries. Some queries may be set aside for testing and the model may be adapted using in-domain sentences that are not annotated. The models may be tested using these implicitly annotated natural-language-like queries in an unsupervised fashion. | 08-21-2014 |
20140278409 | PRESERVING PRIVACY IN NATURAL LANGUAGE DATABASES - An apparatus and a method for preserving privacy in natural language databases are provided. Natural language input may be received. At least one of sanitizing or anonymizing the natural language input may be performed to form a clean output. The clean output may be stored. | 09-18-2014 |
20140278424 | KERNEL DEEP CONVEX NETWORKS AND END-TO-END LEARNING - Data associated with spoken language may be obtained. An analysis of the obtained data may be initiated for understanding of the spoken language using a deep convex network that is integrated with a kernel trick. The resulting kernel deep convex network may also be constructed by stacking one shallow kernel network over another with concatenation of the output vector of the lower network with the input data vector. A probability associated with a slot that is associated with slot-filling may be determined, based on local, discriminative features that are extracted using the kernel deep convex network. | 09-18-2014 |
20140330565 | Apparatus and Method for Model Adaptation for Spoken Language Understanding - An apparatus and a method are provided for building a spoken language understanding model. Labeled data may be obtained for a target application. A new classification model may be formed for use with the target application by using the labeled data for adaptation of an existing classification model. In some implementations, the existing classification model may be used to determine the most informative examples to label. | 11-06-2014 |
20140343942 | Multitask Learning for Spoken Language Understanding - Systems are disclosed for improving or generating a spoken language understanding system using a multitask learning method for intent or call-type classification. The multitask learning method aims at training tasks in parallel while using a shared representation. A computing device automatically re-uses the existing labeled data from various applications, which are similar but may have different call-types, intents or intent distributions, to improve the performance. An automated intent mapping algorithm operates across applications. In one aspect, active learning is employed to selectively sample the data to be re-used. | 11-20-2014 |
20140350931 | LANGUAGE MODEL TRAINED USING PREDICTED QUERIES FROM STATISTICAL MACHINE TRANSLATION - A Statistical Machine Translation (SMT) model is trained using pairs of sentences that include content obtained from one or more content sources (e.g. feed(s)) with corresponding queries that have been used to access the content. A query click graph may be used to assist in determining candidate pairs for the SMT training data. All or a portion of the candidate pairs may be used to train the SMT model. After training the SMT model using the SMT training data, the SMT model is applied to content to determine predicted queries that may be used to search for the content. The predicted queries are used to train a language model, such as a query language model. The query language model may be interpolated with other language models, such as a background language model, as well as a feed language model trained using the content used in determining the predicted queries. | 11-27-2014 |
20150046159 | UNSUPERVISED AND ACTIVE LEARNING IN AUTOMATIC SPEECH RECOGNITION FOR CALL CLASSIFICATION - Utterance data that includes at least a small amount of manually transcribed data is provided. Automatic speech recognition is performed on ones of the utterance data not having a corresponding manual transcription to produce automatically transcribed utterances. A model is trained using all of the manually transcribed data and the automatically transcribed utterances. A predetermined number of utterances not having a corresponding manual transcription are intelligently selected and manually transcribed. Ones of the automatically transcribed data as well as ones having a corresponding manual transcription are labeled. In another aspect of the invention, audio data is mined from at least one source, and a language model is trained for call classification from the mined audio data to produce a language model. | 02-12-2015 |
20150052113 | Answer Determination for Natural Language Questioning - Open-domain question answering is the task of finding a concise answer to a natural language question using a large domain, such as the Internet. The use of a semantic role labeling approach to the extraction of the answers to an open-domain factoid (Who/When/What/Where) natural language question that contains a predicate is described. Semantic role labeling identifies predicates and semantic argument phrases in the natural language question and the candidate sentences. When searching for an answer to a natural language question, the missing argument in the question is matched using semantic parses of the candidate answers. Such a technique may improve the accuracy of a question answering system and may decrease the length of answers for enabling a voice interface to a question answering system. | 02-19-2015 |
20150161107 | Discriminating Between Natural Language and Keyword Language Items - This disclosure pertains to a classification model, and to functionality for producing and applying the classification model. The classification model is configured to discriminate whether an input linguistic item (such as a query) corresponds to either a natural language (NL) linguistic item or a keyword language (KL) linguistic item. An NL linguistic item expresses an intent using a natural language, while a KL linguistic item expresses the intent using one or more keywords. In a training phase, the functionality produces the classification model based on query click log data or the like. In an application phase, the functionality may, among other uses, use the classification model to filter a subset of NL linguistic items from a larger set of items, and then use the subset of NL linguistic items to train a natural language interpretation model, such as a spoken language understanding model. | 06-11-2015 |
20150178273 | Unsupervised Relation Detection Model Training - A relation detection model training solution. The relation detection model training solution mines freely available resources from the World Wide Web to train a relationship detection model for use during linguistic processing. The relation detection model training system searches the web for pairs of entities extracted from a knowledge graph that are connected by a specific relation. Performance is enhanced by clipping search snippets to extract patterns that connect the two entities in a dependency tree and refining the annotations of the relations according to other related entities in the knowledge graph. The relation detection model training solution scales to other domains and languages, pushing the burden from natural language semantic parsing to knowledge base population. The relation detection model training solution exhibits performance comparable to supervised solutions, which require design, collection, and manual labeling of natural language data. | 06-25-2015 |
20150179168 | Multi-user, Multi-domain Dialog System - A dialog system for use in a multi-user, multi-domain environment. The dialog system understands user requests when multiple users are interacting with each other as well as the dialog system. The dialog system uses multi-human conversational context to improve domain detection. Using interactions between multiple users allows the dialog system to better interpret machine directed conversational inputs in multi-user conversational systems. The dialog system employs topic segmentation to chunk conversations for determining context boundaries. Using general topic segmentation methods, as well as the specific domain detector trained with conversational inputs collected by a single user system, allows the dialog system to better determine the relevant context. The use of conversational context helps reduce the domain detection error rate, especially in certain domains, and allows for better interactions with users when the machine addressed turns are not recognized or are ambiguous. | 06-25-2015 |
20150227845 | Techniques for Inferring the Unknown Intents of Linguistic Items - Functionality is described herein for determining the intents of linguistic items (such as queries), to produce intent output information. For some linguistic items, the functionality deterministically assigns intents to the linguistic items based on known intent labels, which, in turn, may be obtained or derived from a knowledge graph or other type of knowledge resource. For other linguistic items, the functionality infers the intents of the linguistic items based on selection log data (such as click log data provided by a search system). In some instances, the intent output information may reveal new intents that are not represented by the known intent labels. In one implementation, the functionality can use the intent output information to train a language understanding model. | 08-13-2015 |
20150310862 | DEEP LEARNING FOR SEMANTIC PARSING INCLUDING SEMANTIC UTTERANCE CLASSIFICATION - One or more aspects of the subject disclosure are directed towards performing a semantic parsing task, such as classifying text corresponding to a spoken utterance into a class. Feature data representative of input data is provided to a semantic parsing mechanism that uses a deep model trained at least in part via unsupervised learning using unlabeled data. For example, if used in a classification task, a classifier may use an associated deep neural network that is trained to have an embeddings layer corresponding to at least one of words, phrases, or sentences. The layers are learned from unlabeled data, such as query click log data. | 10-29-2015 |
20150317302 | TRANSFERRING INFORMATION ACROSS LANGUAGE UNDERSTANDING MODEL DOMAINS - Aspects of the present invention provide a technique to validate the transfer of intents or entities between existing natural language model domains (hereafter “domain” or “NLU”) using click logs, a knowledge graph, or both. At least two different types of transfers are possible. Intents from a first domain may be transferred to a second domain. Alternatively or additionally, entities from the second domain may be transferred to an existing intent in the first domain. Either way, additional intent/entity pairs can be generated and validated. Before the new intent/entity pair is added to a domain, aspects of the present invention validate that the intent or entity is transferable between domains. Validation techniques that are consistent with aspects of the invention can use a knowledge graph, search query click logs, or both to validate a transfer of intents or entities from one domain to another. | 11-05-2015 |
20150332670 | Language Modeling For Conversational Understanding Domains Using Semantic Web Resources - Systems and methods are provided for training language models using in-domain-like data collected automatically from one or more data sources. The data sources (such as text data or user-interactional data) are mined for specific types of data, including data related to style, content, and probability of relevance, which are then used for language model training. In one embodiment, a language model is trained from features extracted from a knowledge graph modified into a probabilistic graph, where entity popularities are represented and the popularity information is obtained from data sources related to the knowledge. Embodiments of language models trained from this data are particularly suitable for domain-specific conversational understanding tasks where natural language is used, such as user interaction with a game console or a personal assistant application on personal device. | 11-19-2015 |
20150332672 | Knowledge Source Personalization To Improve Language Models - Systems and methods are provided for improving language models for speech recognition by personalizing knowledge sources utilized by the language models to specific users or user-population characteristics. A knowledge source, such as a knowledge graph, is personalized for a particular user by mapping entities or user actions from usage history for the user, such as query logs, to the knowledge source. The personalized knowledge source may be used to build a personal language model by training a language model with queries corresponding to entities or entity pairs that appear in usage history. In some embodiments, a personalized knowledge source for a specific user can be extended based on personalized knowledge sources of similar users. | 11-19-2015 |
20150370787 | Session Context Modeling For Conversational Understanding Systems - Systems and methods are provided for improving language models for speech recognition by adapting knowledge sources utilized by the language models to session contexts. A knowledge source, such as a knowledge graph, is used to capture and model dynamic session context based on user interaction information from usage history, such as session logs, that is mapped to the knowledge source. From sequences of user interactions, higher level intent sequences may be determined and used to form models that anticipate similar intents but with different arguments including arguments that do not necessarily appear in the usage history. In this way, the session context models may be used to determine likely next interactions or “turns” from a user, given a previous turn or turns. Language models corresponding to the likely next turns are then interpolated and provided to improve recognition accuracy of the next turn received from the user. | 12-24-2015 |
20160004707 | TRANSLATING NATURAL LANGUAGE UTTERANCES TO KEYWORD SEARCH QUERIES - Natural language query translation may be provided. A statistical model may be trained to detect domains according to a plurality of query click log data. Upon receiving a natural language query, the statistical model may be used to translate the natural language query into an action. The action may then be performed and at least one result associated with performing the action may be provided. | 01-07-2016 |
20160027434 | UNSUPERVISED AND ACTIVE LEARNING IN AUTOMATIC SPEECH RECOGNITION FOR CALL CLASSIFICATION - Utterance data that includes at least a small amount of manually transcribed data is provided. Automatic speech recognition is performed on ones of the utterance data not having a corresponding manual transcription to produce automatically transcribed utterances. A model is trained using all of the manually transcribed data and the automatically transcribed utterances. A predetermined number of utterances not having a corresponding manual transcription are intelligently selected and manually transcribed. Ones of the automatically transcribed data as well as ones having a corresponding manual transcription are labeled. In another aspect of the invention, audio data is mined from at least one source, and a language model is trained for call classification from the mined audio data to produce a language model. | 01-28-2016 |
20160062959 | Method and Apparatus for Responding to an Inquiry - Disclosed is a method and apparatus for responding to an inquiry from a client via a network. The method and apparatus receive the inquiry from a client via a network. Based on the inquiry, question-answer pairs retrieved from the network are analyzed to determine a response to the inquiry. The QA pairs are not predefined. As a result, the QA pairs have to be analyzed in order to determine whether they are responsive to a particular inquiry. Questions of the QA pairs may be repetitive and, without more, will not be useful in determining whether their corresponding answer responds to an inquiry. | 03-03-2016 |
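The iterative labeling loop in 20130085756 (seed labels from mapping rules, train a call-type classifier, re-label, repeat until the labels converge) can be sketched with a toy unigram scorer standing in for the call-type classification model. Every mapping rule, utterance, and call type below is invented for illustration; the filing does not prescribe a particular classifier.

```python
from collections import Counter, defaultdict

# Hypothetical mapping rules: keyword -> call type (illustrative only).
RULES = {"bill": "Billing", "refund": "Billing",
         "internet": "TechSupport", "router": "TechSupport"}

UTTERANCES = [
    "my bill is too high",
    "i want a refund for my bill",
    "my internet is down",
    "the router keeps restarting",
    "the internet bill looks wrong",
]

def rule_label(utt):
    """Seed labeling: first keyword hit wins; None if no rule applies."""
    for word in utt.split():
        if word in RULES:
            return RULES[word]
    return None

def train(labeled):
    """Per-class unigram counts stand in for a call-type classification model."""
    model = defaultdict(Counter)
    for utt, label in labeled:
        model[label].update(utt.split())
    return model

def classify(model, utt):
    """Score each call type by normalized overlap with the utterance words."""
    def score(label):
        counts = model[label]
        total = sum(counts.values()) or 1
        return sum(counts[w] / total for w in utt.split())
    return max(model, key=score)

# Seed labels from rules, then re-label with the trained model until convergence.
labels = [rule_label(u) for u in UTTERANCES]
for _ in range(10):
    labeled = [(u, l) for u, l in zip(UTTERANCES, labels) if l is not None]
    model = train(labeled)
    new_labels = [classify(model, u) for u in UTTERANCES]
    if new_labels == labels:
        break
    labels = new_labels
```

On this toy data the loop converges after one re-labeling pass; the point is only the fixed-point structure of the method, not the scorer, which a real system would replace with a trained statistical classifier.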