Patent application number | Description | Published |
20140172899 | PROBABILITY-BASED STATE MODIFICATION FOR QUERY DIALOGUES - A device may facilitate a query dialog involving queries that successively modify a query state. However, fulfilling such queries in the context of possible query domains, query intents, and contextual meanings of query terms may be difficult. Presented herein are techniques for modifying a query state in view of a query by utilizing a set of query state modifications, each representing a modification of the query state possibly intended by the user while formulating the query (e.g., adding, substituting, or removing query terms; changing the query domain or query intent; and navigating within a hierarchy of saved query states). Upon receiving a query, an embodiment may calculate the probability of the query connoting each query state modification (e.g., using a Bayesian classifier), and parse the query according to a query state modification having a high probability (e.g., mapping respective query terms to query slots within the current query intent). | 06-19-2014 |
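The scoring step above lends itself to a compact illustration. Below is a minimal, hypothetical sketch of ranking query state modifications with a naive Bayes classifier; the modification types, priors, and per-term likelihoods are invented toy values, not figures from the application:

```python
# Hypothetical sketch: ranking query state modifications with a naive Bayes
# classifier. Modification types, priors, and likelihoods are toy values.
import math

MODIFICATIONS = ["add_terms", "substitute_terms", "remove_terms", "change_domain"]

PRIOR = {"add_terms": 0.4, "substitute_terms": 0.3,
         "remove_terms": 0.1, "change_domain": 0.2}

# P(term | modification); in practice these would be estimated from dialog logs.
LIKELIHOOD = {
    "add_terms": {"also": 0.3, "with": 0.2},
    "substitute_terms": {"instead": 0.4, "not": 0.2},
    "remove_terms": {"without": 0.5},
    "change_domain": {"movies": 0.3, "restaurants": 0.3},
}

def score_modifications(query_terms, smoothing=1e-3):
    """Return log P(modification | query) up to a shared constant."""
    scores = {}
    for mod in MODIFICATIONS:
        logp = math.log(PRIOR[mod])
        for term in query_terms:
            logp += math.log(LIKELIHOOD[mod].get(term, smoothing))
        scores[mod] = logp
    return scores

scores = score_modifications("show me ones without violence".split())
print(max(scores, key=scores.get))  # highest-probability modification: remove_terms
```

The query would then be parsed under the winning modification, e.g., by mapping its terms to slots of the current query intent.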
20140180676 | NAMED ENTITY VARIATIONS FOR MULTIMODAL UNDERSTANDING SYSTEMS - Click logs are automatically mined to assist in discovering candidate variations for named entities. The named entities may be obtained from one or more sources and include an initial list of named entities. A search may be performed within one or more search engines to determine common phrases used to identify a named entity beyond the form initially included in the named entity list. Click logs associated with results of past searches are automatically mined to discover which phrases from those searches are candidate variations for the named entity. The candidate variations are scored to assist in determining the variations to include within an understanding model. The variations may also be used when delivering responses and displaying output in the spoken language understanding (SLU) system. For example, instead of using the listed named entity, a popular and/or shortened name may be used by the system. | 06-26-2014 |
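As a rough illustration of the mining and scoring steps, here is a hypothetical sketch in which query phrases that clicked through to URLs already linked to an entity are counted and scored by click share; the log, entity-to-URL mapping, and threshold are all invented:

```python
# Hypothetical sketch: scoring candidate name variations from a click log.
from collections import Counter

# Toy click log of (query phrase, clicked URL); a real log holds millions of rows.
click_log = [
    ("the dark knight rises", "imdb.com/title/tt1345836"),
    ("dark knight rises", "imdb.com/title/tt1345836"),
    ("batman 3", "imdb.com/title/tt1345836"),
    ("dark knight rises", "imdb.com/title/tt1345836"),
]

# URLs already linked to the canonical entity from the initial named-entity list.
entity_urls = {"The Dark Knight Rises": {"imdb.com/title/tt1345836"}}

def candidate_variations(entity, min_clicks=1):
    """Score each query phrase that clicked through to the entity by click share."""
    counts = Counter(q for q, url in click_log if url in entity_urls[entity])
    total = sum(counts.values())
    return {q: c / total for q, c in counts.items() if c >= min_clicks}

print(candidate_variations("The Dark Knight Rises"))
# {'the dark knight rises': 0.25, 'dark knight rises': 0.5, 'batman 3': 0.25}
```

A popular, high-scoring variation such as "dark knight rises" might then be preferred over the listed form when the system renders responses.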
20140214421 | PROSODIC AND LEXICAL ADDRESSEE DETECTION - Prosodic features are used for discriminating computer-directed speech from human-directed speech. Statistics and models describing energy/intensity patterns over time, speech/pause distributions, pitch patterns, vocal effort features, and speech segment duration patterns may be used for prosodic modeling. The prosodic features for at least a portion of an utterance are monitored over a period of time to determine a shape associated with the utterance. A score may be determined to assist in classifying the current utterance as human-directed or computer-directed without relying on knowledge of preceding utterances or utterances following the current utterance. Outside data may be used for training lexical addressee detection systems for the H-H-C (human-human-computer) scenario. H-C (human-computer) training data can be obtained from a single-user H-C collection, and H-H (human-human) speech can be modeled using general conversational speech. H-C and H-H language models may also be adapted using interpolation with small amounts of matched H-H-C data. | 07-31-2014 |
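A hypothetical sketch of the prosodic scoring idea: frame-level energy, pitch, and pause measurements for a single utterance are summarized into shape-style statistics and passed through a logistic scorer. The features and weights are illustrative inventions, not the application's model:

```python
# Hypothetical sketch: classifying one utterance as computer- or human-directed
# from simple prosodic statistics, with made-up logistic-regression weights.
import math

def prosodic_features(energy, pitch, pause_flags):
    """Summarize frame-level prosody: mean energy, energy slope, pitch range, pause ratio."""
    n = len(energy)
    mean_e = sum(energy) / n
    slope_e = (energy[-1] - energy[0]) / n          # crude "shape" of the energy contour
    voiced = [p for p in pitch if p > 0]
    pitch_range = (max(voiced) - min(voiced)) if voiced else 0.0
    pause_ratio = sum(pause_flags) / n
    return [mean_e, slope_e, pitch_range, pause_ratio]

WEIGHTS, BIAS = [0.8, -1.5, 0.02, -2.0], 0.1        # toy parameters, not trained values

def computer_directed_score(energy, pitch, pause_flags):
    """Logistic score in (0, 1); higher means more likely computer-directed."""
    feats = prosodic_features(energy, pitch, pause_flags)
    z = BIAS + sum(w * f for w, f in zip(WEIGHTS, feats))
    return 1.0 / (1.0 + math.exp(-z))

# Frame-level measurements for one utterance (toy values); note that no
# preceding or following utterances are consulted.
print(computer_directed_score(
    energy=[0.5, 0.6, 0.7, 0.6], pitch=[120, 130, 0, 125], pause_flags=[0, 0, 1, 0]))
```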
20140236570 | EXPLOITING THE SEMANTIC WEB FOR UNSUPERVISED SPOKEN LANGUAGE UNDERSTANDING - An unsupervised training approach for Spoken Language Understanding (SLU) systems uses the structure of content sources (e.g., semantic knowledge graphs or relational databases) to automatically specify a semantic representation for SLU. The semantic representation is used when creating entity-relation patterns that are used to mine natural language (NL) examples (e.g., NL surface forms from the web and search query click logs). The structure of the content source (e.g., a semantic graph) is enriched with the mined NL examples. The NL examples and patterns may be used to automatically train SLU systems in an unsupervised manner that covers the knowledge represented in the structured content. | 08-21-2014 |
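The pattern-mining step can be illustrated with a toy sketch: knowledge-graph triples supply entity pairs, sentences mentioning both endpoints are collected, and the entities are abstracted into a reusable pattern. The triples, corpus, and placeholder names below are invented:

```python
# Hypothetical sketch: turning knowledge-graph triples into entity-relation
# patterns and mining NL surface forms from a toy corpus.

# Toy slice of a semantic graph: (subject, relation, object).
triples = [("Avatar", "directed_by", "James Cameron"),
           ("Titanic", "directed_by", "James Cameron")]

corpus = [
    "who directed avatar",
    "avatar was directed by james cameron",
    "james cameron movies",
]

def mine_examples(triples, corpus):
    """Collect sentences mentioning both endpoints of a relation, keyed by relation."""
    mined = {}
    for subj, rel, obj in triples:
        for sent in corpus:
            if subj.lower() in sent and obj.lower() in sent:
                # The sentence is an NL surface form realizing this relation;
                # abstracting the entities yields a reusable pattern.
                pattern = sent.replace(subj.lower(), "<subject>").replace(obj.lower(), "<object>")
                mined.setdefault(rel, []).append(pattern)
    return mined

print(mine_examples(triples, corpus))
# {'directed_by': ['<subject> was directed by <object>']}
```

The graph could then be enriched with these surface forms, giving unsupervised training material that covers the knowledge in the structured source.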
20140236575 | EXPLOITING THE SEMANTIC WEB FOR UNSUPERVISED NATURAL LANGUAGE SEMANTIC PARSING - Structured web pages are accessed and parsed to obtain implicit annotation for natural language understanding tasks. Search queries that hit these structured web pages are automatically mined for information that is used to semantically annotate the queries. The automatically annotated queries may be used for automatically building statistical unsupervised slot filling models without using a semantic annotation guideline. For example, tags located on a structured web page associated with the search query may be used to annotate the query. The mined search queries may be filtered to create a set of queries that are in the form of natural language queries and/or to remove queries that are difficult to parse. A natural language model may be trained using the resulting mined queries. Some queries may be set aside for testing, and the model may be adapted using in-domain sentences that are not annotated. The models may be tested using these implicitly annotated natural-language-like queries in an unsupervised fashion. | 08-21-2014 |
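A minimal sketch of the implicit-annotation step, assuming a hypothetical `page_fields` dictionary holding the structured fields of a clicked page; matching field values are projected onto the query as slot labels (the IOB labeling scheme here is an assumption, not specified in the abstract):

```python
# Hypothetical sketch: annotating a search query with slots implied by the
# structured fields of the web page it clicked through to.

# Structured fields scraped from one clicked page (toy values).
page_fields = {"title": "inception", "director": "christopher nolan", "year": "2010"}

def implicit_annotation(query):
    """Tag query substrings matching a structured field; the field name becomes the slot."""
    tokens = query.split()
    labels = ["O"] * len(tokens)
    for slot, value in page_fields.items():
        v_toks = value.split()
        for i in range(len(tokens) - len(v_toks) + 1):
            if tokens[i:i + len(v_toks)] == v_toks:
                labels[i] = f"B-{slot}"
                for j in range(i + 1, i + len(v_toks)):
                    labels[j] = f"I-{slot}"
    return list(zip(tokens, labels))

print(implicit_annotation("inception christopher nolan movie"))
# [('inception', 'B-title'), ('christopher', 'B-director'),
#  ('nolan', 'I-director'), ('movie', 'O')]
```

Queries annotated this way could train a slot filling model with no hand-written annotation guideline.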
20140250378 | USING HUMAN WIZARDS IN A CONVERSATIONAL UNDERSTANDING SYSTEM - A wizard control panel may be used by a human wizard to adjust the operation of a Natural Language (NL) conversational system during a real-time dialog flow. Input to the wizard control panel is detected and used to interrupt or change an automatic operation of one or more of the NL conversational system components used during the flow. For example, the wizard control panel may be used to adjust results determined by an Automated Speech Recognition (ASR) component, a Natural Language Understanding (NLU) component, a Dialog Manager (DM) component, and a Natural Language Generation (NLG) component before the results are used to perform an automatic operation within the flow. A timeout may also be set such that when the timeout expires, the conversational system performs an automated operation by using the results shown in the wizard control panel (whether or not they have been edited). | 09-04-2014 |
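The timeout behavior can be sketched as follows; this is a hypothetical stand-in that reads the wizard's edit from stdin on a background thread and falls back to the automatic result when the timeout expires. The function names and the stdin-based "panel" are inventions for illustration:

```python
# Hypothetical sketch: letting a human wizard edit a component's result within
# a timeout; when the timeout expires the flow proceeds with the shown result.
import queue
import threading

def wizard_review(component_name, automatic_result, timeout_s=3.0):
    """Show the automatic result to the wizard; use their edit if one arrives in time."""
    edits = queue.Queue()

    def panel():
        # Stand-in for the wizard control panel UI: reads one edited line from stdin.
        try:
            edited = input(f"[{component_name}] {automatic_result!r} -- edit or Enter to accept: ")
            edits.put(edited or automatic_result)
        except EOFError:
            pass  # no interactive wizard attached
    threading.Thread(target=panel, daemon=True).start()

    try:
        return edits.get(timeout=timeout_s)   # wizard intervened in time
    except queue.Empty:
        return automatic_result               # timeout expired: proceed automatically

# Each stage of the flow (ASR, NLU, DM, NLG) could be gated the same way.
nlu_result = wizard_review("NLU", {"intent": "play_music", "artist": "adele"})
```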
20140278424 | KERNEL DEEP CONVEX NETWORKS AND END-TO-END LEARNING - Data associated with spoken language may be obtained. An analysis of the obtained data may be initiated for understanding of the spoken language using a deep convex network integrated with the kernel trick. The resulting kernel deep convex network may also be constructed by stacking one shallow kernel network over another, concatenating the output vector of the lower network with the input data vector. A probability for a slot used in slot-filling may be determined, based on local, discriminative features extracted using the kernel deep convex network. | 09-18-2014 |
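As a loose approximation of the stacking scheme (not the application's exact formulation), the sketch below stacks RBF kernel ridge regressors, feeding each layer the raw input concatenated with the output of the layer below; the kernel choice, regularization, and toy targets are all assumptions:

```python
# Hypothetical sketch: a kernel deep convex network approximated as stacked RBF
# kernel ridge "shallow" layers, each fed the raw input concatenated with the
# layer below's output.
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelLayer:
    """One shallow kernel network: kernel ridge regression onto the targets."""
    def fit(self, X, Y, alpha=1e-2):
        self.X = X
        K = rbf_kernel(X, X)
        self.coef = np.linalg.solve(K + alpha * np.eye(len(X)), Y)
        return self
    def predict(self, X):
        return rbf_kernel(X, self.X) @ self.coef

def fit_kdcn(X, Y, depth=3):
    layers, Z = [], X
    for _ in range(depth):
        layer = KernelLayer().fit(Z, Y)
        # Concatenate the lower layer's output with the raw input for the next layer.
        Z = np.hstack([X, layer.predict(Z)])
        layers.append(layer)
    return layers

def predict_kdcn(layers, X):
    Z = X
    for layer in layers:
        out = layer.predict(Z)
        Z = np.hstack([X, out])
    return out

np.random.seed(0)
X = np.random.randn(50, 4)
Y = X[:, :1] ** 2 + X[:, 1:2]        # toy regression targets standing in for slot scores
layers = fit_kdcn(X, Y)
print(predict_kdcn(layers, X[:3]))
```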
20140350931 | LANGUAGE MODEL TRAINED USING PREDICTED QUERIES FROM STATISTICAL MACHINE TRANSLATION - A Statistical Machine Translation (SMT) model is trained using pairs of sentences that include content obtained from one or more content sources (e.g., feeds) with corresponding queries that have been used to access the content. A query click graph may be used to assist in determining candidate pairs for the SMT training data. All or a portion of the candidate pairs may be used to train the SMT model. After training the SMT model using the SMT training data, the SMT model is applied to content to determine predicted queries that may be used to search for the content. The predicted queries are used to train a language model, such as a query language model. The query language model may be interpolated with other language models, such as a background language model and a feed language model trained using the content used to determine the predicted queries. | 11-27-2014 |
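The final interpolation step admits a simple unigram illustration; the corpora and weights below are toy assumptions, and a real system would tune the weights on held-out data and use n-gram models:

```python
# Hypothetical sketch: linearly interpolating a query language model with
# background and feed language models (unigram case, toy weights).
from collections import Counter

def unigram_lm(corpus):
    counts = Counter(w for sent in corpus for w in sent.split())
    total = sum(counts.values())
    return lambda w: counts[w] / total if total else 0.0

# Toy corpora: predicted queries from the SMT model, general text, and feed content.
query_lm = unigram_lm(["weather seattle today", "seattle weather forecast"])
background_lm = unigram_lm(["the quick brown fox", "weather is nice"])
feed_lm = unigram_lm(["seattle forecast sunny with highs near seventy"])

WEIGHTS = (0.6, 0.2, 0.2)   # interpolation weights; tuned on held-out data in practice

def interpolated_prob(word):
    """P(w) = l1*P_query(w) + l2*P_background(w) + l3*P_feed(w)."""
    l1, l2, l3 = WEIGHTS
    return l1 * query_lm(word) + l2 * background_lm(word) + l3 * feed_lm(word)

print(interpolated_prob("seattle"))
```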