Entries
Document | Title | Date |
20080201145 | Unsupervised labeling of sentence level accent - Methods are disclosed for automatic accent labeling without manually labeled data. The methods are designed to exploit accent distribution between function and content words. | 08-21-2008 |
20080208579 | Session recording and playback with selective information masking - A computer-implemented method for session processing includes identifying a type of data item that is presented to a user by a computerized system. A session in which the user interacts with the computerized system is recorded. A data item of the identified type is automatically detected in the recorded session. The recorded session is replayed, while refraining from presenting the detected data item in the replayed session. | 08-28-2008 |
20080243503 | MINIMUM DIVERGENCE BASED DISCRIMINATIVE TRAINING FOR PATTERN RECOGNITION - A method of providing discriminative training of a speech recognition unit is discussed. The method includes receiving an acoustic indication of an utterance having a hypothesis space and comparing the hypothesis space against a reference. The method measures the Kullback-Leibler Divergence (KLD) between the reference and the hypothesis space to adjust the reference and stores the adjusted reference on a tangible storage medium. | 10-02-2008 |
20080288252 | SPEECH RECOGNITION OF SPEECH RECORDED BY A MOBILE COMMUNICATION FACILITY - In embodiments of the present invention improved capabilities are described for a mobile environment speech processing facility. The present invention may provide for the entering of text into a software application resident on a mobile communication facility, where recorded speech may be presented by the user using the mobile communications facility's resident capture facility. Transmission of the recording may be provided through a wireless communication facility to a speech recognition facility, and may be accompanied by information related to the software application. Results may be generated utilizing the speech recognition facility that may be independent of structured grammar, and may be based at least in part on the information relating to the software application and the recording. The results may then be transmitted to the mobile communications facility, where they may be loaded into the software application. | 11-20-2008 |
20090006092 | Speech Recognition Language Model Making System, Method, and Program, and Speech Recognition System - A speech recognition language model making system is provided for making a language model that can recognize the meaningful speech required by a speech recognition application, such as conversational speech at a call center. | 01-01-2009 |
20090055177 | APPARATUS AND METHOD FOR GENERATING NOISE ADAPTIVE ACOUSTIC MODEL FOR ENVIRONMENT MIGRATION INCLUDING NOISE ADAPTIVE DISCRIMINATIVE ADAPTATION METHOD - Provided are an apparatus and method for generating a noise adaptive acoustic model including a noise adaptive discriminative adaptation method. The method includes: generating a baseline model parameter from large-capacity speech training data including various noise environments; and receiving the generated baseline model parameter and applying a discriminative adaptation method to generate a migrated acoustic model parameter suitable for the actual application environment. | 02-26-2009 |
20090063145 | Combining active and semi-supervised learning for spoken language understanding - Active and semi-supervised learning are combined to reduce the amount of manual labeling needed when training a spoken language understanding model classifier. The classifier may be trained with human-labeled utterance data. Some of a group of unselected utterance data may be selected for manual labeling via active learning. The classifier may be changed, via semi-supervised learning, based on the selected utterance data. | 03-05-2009 |
20090112587 | SYSTEM AND METHOD FOR GENERATING A PHRASE PRONUNCIATION - A system and method for a speech recognition technology that allows language models to be customized through the addition of special pronunciations for components of phrases, which are added to the factory language models during customization. It allows components of a phrase to have different pronunciations inside customer-added phrases than are specified for those isolated components in the factory language models. | 04-30-2009 |
20090119104 | Switching Functionality To Control Real-Time Switching Of Modules Of A Dialog System - Systems and methods are described that automatically control modules of dialog systems. The systems and methods include a dialog module that receives and processes utterances from a speaker and outputs data used to generate synthetic speech outputs as responses to the utterances. A controller is coupled to the dialog module, and the controller detects an abnormal output of the dialog module when the dialog module is processing in an automatic mode. The controller comprises a mode control for an agent to control the dialog module by correcting the abnormal output and transferring a corrected output to a downstream dialog module that follows, in a processing path, the dialog module. The corrected output is used in further processing the utterances. | 05-07-2009 |
20090119105 | Acoustic Model Adaptation Methods Based on Pronunciation Variability Analysis for Enhancing the Recognition of Voice of Non-Native Speaker and Apparatus Thereof - The example embodiment of the present invention provides an acoustic model adaptation method for enhancing recognition performance for a non-native speaker's speech. In order to adapt acoustic models, first, pronunciation variations are examined by analyzing a non-native speaker's speech. Thereafter, based on the pronunciation variations in a non-native speaker's speech, acoustic models are adapted in a state-tying step during a training process of acoustic models. When the present invention for adapting acoustic models and a conventional acoustic model adaptation scheme are combined, more-enhanced recognition performance can be obtained. The example embodiment of the present invention enhances recognition performance for a non-native speaker's speech while reducing the degradation of recognition performance for a native speaker's speech. | 05-07-2009 |
20090198494 | RESOURCE CONSERVATIVE TRANSFORMATION BASED UNSUPERVISED SPEAKER ADAPTATION - The present invention discloses a solution for conserving computing resources when implementing transformation based adaptation techniques. The disclosed solution limits the amount of speech data used by real-time adaptation algorithms to compute a transformation, which results in substantial computational savings. Appreciably, application of a transform is a relatively low memory and computationally cheap process compared to memory and resource requirements for computing the transform to be applied. | 08-06-2009 |
20090313017 | Language model update device, language model update method, and language model update program - A framework is included in which a numerical value representing the statistical appearance tendency of each word in a language model is set not only as a constant but also as an update function that changes in time. The numerical value representing the set statistical appearance tendency of a word is automatically updated in accordance with the passage of time. | 12-17-2009 |
20100004931 | Apparatus and method for speech utterance verification - An apparatus is provided for speech utterance verification. The apparatus is configured to compare a first prosody component from a recorded speech with a second prosody component for a reference speech. The apparatus determines a prosodic verification evaluation for the recorded speech utterance in dependence of the comparison. | 01-07-2010 |
20100023329 | EXTENDED RECOGNITION DICTIONARY LEARNING DEVICE AND SPEECH RECOGNITION SYSTEM - Speech recognition suited to a given speaker of a speech recognition system is enabled by using an extended recognition dictionary adapted to that speaker, without requiring any prior learning from an utterance label corresponding to the speaker's speech. An extended recognition dictionary learning device includes an utterance variation data calculating section for comparing an acoustic model sequence output from a speech recognition result and an input correct acoustic model sequence to calculate a correspondence between the models as utterance variation data; an utterance variation data classifying section for classifying the calculated utterance variation data into widely appearing utterance variations and unevenly appearing utterance variations; and a recognition dictionary extending section for defining a plurality of utterance variation sets by combining the classified utterance variations and thereby extending the recognition dictionary for each utterance variation set according to the utterance variations included in each utterance variation set. A speech recognition device uses the extended recognition dictionary for each utterance variation set to output a speech recognition result. | 01-28-2010 |
20100094629 | WEIGHT COEFFICIENT LEARNING SYSTEM AND AUDIO RECOGNITION SYSTEM - A weighting factor learning system includes an audio recognition section that recognizes learning audio data and outputs the recognition result; a weighting factor updating section that updates a weighting factor applied to a score obtained from an acoustic model and a language model so that the difference between a correct-answer score calculated with the use of a correct-answer text of the learning audio data and a score of the recognition result becomes large; a convergence determination section that determines, with the use of the score after updating, whether to return to the weighting factor updating section to update the weighting factor again; and a weighting factor convergence determination section that determines, with the use of the score after updating, whether to return to the audio recognition section to perform the process again and update the weighting factor using the weighting factor updating section. | 04-15-2010 |
20100100380 | Multitask Learning for Spoken Language Understanding - A system, method and computer-readable medium provide a multitask learning method for intent or call-type classification in a spoken language understanding system. Multitask learning aims at training tasks in parallel while using a shared representation. A computing device automatically re-uses the existing labeled data from various applications, which are similar but may have different call-types, intents or intent distributions to improve the performance. An automated intent mapping algorithm operates across applications. In one aspect, active learning is employed to selectively sample the data to be re-used. | 04-22-2010 |
20100161331 | Method for Preparing Information for a Speech Dialogue System - In many application environments, it is desirable to provide voice access to tables on Internet pages, where the user asks a subject-related question in a natural language and receives an adequate answer from the table read out to him in a natural language. A method is disclosed for preparing information presented in a tabular form for a speech dialogue system so that the information of the table can be consulted in a user dialogue in a targeted manner. | 06-24-2010 |
20100161332 | TRAINING WIDEBAND ACOUSTIC MODELS IN THE CEPSTRAL DOMAIN USING MIXED-BANDWIDTH TRAINING DATA FOR SPEECH RECOGNITION - A method and apparatus are provided that use narrowband data and wideband data to train a wideband acoustic model. | 06-24-2010 |
20100169094 | SPEAKER ADAPTATION APPARATUS AND PROGRAM THEREOF - A speaker adaptation apparatus includes an acquiring unit configured to acquire an acoustic model including HMMs and decision trees for estimating which phoneme or word is contained in a feature value used for speech recognition, the HMMs having a plurality of states on a phoneme-to-phoneme basis or a word-to-word basis, and the decision trees being configured to reply to questions relating to the feature value and output likelihoods in the respective states of the HMMs, and a speaker adaptation unit configured to adapt the decision trees to a speaker, the decision trees being adapted using speaker adaptation data vocalized by the speaker of an input speech. | 07-01-2010 |
20100179812 | SIGNAL PROCESSING APPARATUS AND METHOD OF RECOGNIZING A VOICE COMMAND THEREOF - Provided are an apparatus and method for recognizing voice commands, the apparatus including: a voice command recognition unit which recognizes an input voice command; a voice command recognition learning unit which learns a recognition-targeted voice command; and a controller which controls the voice command recognition unit to recognize the recognition-targeted voice command from an input voice command, controls the voice command recognition learning unit to learn the input voice command if the voice command recognition is unsuccessful, and performs a particular operation corresponding to the recognized voice command if the voice command recognition is successful. | 07-15-2010 |
20100191530 | SPEECH UNDERSTANDING APPARATUS - A speech understanding apparatus includes a speech recognition unit which performs speech recognition of an utterance using multiple language models, and outputs multiple speech recognition results obtained by the speech recognition, a language understanding unit which uses multiple language understanding models to perform language understanding for each of the multiple speech recognition results output from the speech recognition unit, and outputs multiple speech understanding results obtained from the language understanding, and an integrating unit which calculates, based on values representing features of the speech understanding results, utterance batch confidences that numerically express accuracy of the speech understanding results for each of the multiple speech understanding results output from the language understanding unit, and selects one of the speech understanding results with a highest utterance batch confidence among the calculated utterance batch confidences. | 07-29-2010 |
20100312556 | SYSTEM AND METHOD FOR SPEECH PERSONALIZATION BY NEED - Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for speaker recognition personalization. The method recognizes speech received from a speaker interacting with a speech interface using a set of allocated resources, the set of allocated resources including bandwidth, processor time, memory, and storage. The method records metrics associated with the recognized speech, and after recording the metrics, modifies at least one of the allocated resources in the set of allocated resources commensurate with the recorded metrics. The method recognizes additional speech from the speaker using the modified set of allocated resources. Metrics can include a speech recognition confidence score, processing speed, dialog behavior, requests for repeats, negative responses to confirmations, and task completions. The method can further store a speaker personalization profile having information for the modified set of allocated resources and recognize speech associated with the speaker based on the speaker personalization profile. | 12-09-2010 |
20100318355 | MODEL TRAINING FOR AUTOMATIC SPEECH RECOGNITION FROM IMPERFECT TRANSCRIPTION DATA - Techniques and systems for training an acoustic model are described. In an embodiment, a technique for training an acoustic model includes dividing a corpus of training data that includes transcription errors into N parts, and on each part, decoding an utterance with an incremental acoustic model and an incremental language model to produce a decoded transcription. The technique may further include inserting silence between a pair of words into the decoded transcription and aligning an original transcription corresponding to the utterance with the decoded transcription according to time for each part. The technique may further include selecting a segment from the utterance having at least Q contiguous matching aligned words, and training the incremental acoustic model with the selected segment. The trained incremental acoustic model may then be used on a subsequent part of the training data. Other embodiments are described and claimed. | 12-16-2010 |
20110077942 | SYSTEM AND METHOD FOR HANDLING REPEAT QUERIES DUE TO WRONG ASR OUTPUT - Disclosed herein are systems, computer-implemented methods, and computer-readable storage media for handling expected repeat speech queries or other inputs. The method causes a computing device to detect a misrecognized speech query from a user, determine a tendency of the user to repeat speech queries based on previous user interactions, and adapt a speech recognition model based on the determined tendency before an expected repeat speech query. The method can further include recognizing the expected repeat speech query from the user based on the adapted speech recognition model. Adapting the speech recognition model can include modifying an acoustic model, a language model, and/or a semantic model. Adapting the speech recognition model can also include preparing a personalized search speech recognition model for the expected repeat query based on usage history and entries in a recognition lattice. The method can include retaining unmodified speech recognition models with adapted speech recognition models. | 03-31-2011 |
20110082697 | METHOD FOR THE CORRECTION OF MEASURED VALUES OF VOWEL NASALANCE - A method is described for correcting and improving the functioning of certain devices for the diagnosis and treatment of speech that dynamically measure the functioning of the velum in the control of nasality during speech. The correction method uses an estimate of the vowel frequency spectrum to greatly reduce the variation of nasalance with the vowel being spoken, so as to result in a corrected value of nasalance that reflects with greater accuracy the degree of velar opening. Correction is also described for reducing the effect on nasalance values of energy from the oral and nasal channels crossing over into the other channel because of imperfect acoustic separation. | 04-07-2011 |
20110119059 | SYSTEM AND METHOD FOR STANDARDIZED SPEECH RECOGNITION INFRASTRUCTURE - Disclosed herein are systems, methods, and computer-readable storage media for selecting a speech recognition model in a standardized speech recognition infrastructure. The system receives speech from a user, and if a user-specific supervised speech model associated with the user is available, retrieves the supervised speech model. If the user-specific supervised speech model is unavailable and if an unsupervised speech model is available, the system retrieves the unsupervised speech model. If the user-specific supervised speech model and the unsupervised speech model are unavailable, the system retrieves a generic speech model associated with the user. Next the system recognizes the received speech from the user with the retrieved model. In one embodiment, the system trains a speech recognition model in a standardized speech recognition infrastructure. In another embodiment, the system handshakes with a remote application in a standardized speech recognition infrastructure. | 05-19-2011 |
20110137652 | ENHANCED ACCURACY FOR SPEECH RECOGNITION GRAMMARS - Disclosed herein are methods and systems for recognizing speech. A method embodiment comprises comparing received speech with a precompiled grammar based on a database and if the received speech matches data in the precompiled grammar then returning a result based on the matched data. If the received speech does not match data in the precompiled grammar, then dynamically compiling a new grammar based only on new data added to the database after the compiling of the precompiled grammar. The database may comprise a directory of names. | 06-09-2011 |
20110196676 | ADAPTIVE VOICE PRINT FOR CONVERSATIONAL BIOMETRIC ENGINE - A computer-implemented method, system and/or program product update voice prints over time. A receiving computer receives an initial voice print. A determining period of time is calculated for that initial voice print. This determining period of time is a length of time during which an expected degree of change in subsequent voice prints, in comparison to the initial voice print, is predicted to occur. A new voice print is received after the determining period of time has passed, and the new voice print is compared with the initial voice print. In response to a change to the new voice print falling within the expected degree of change in comparison to the initial voice print, a voice print store is updated with the new voice print. | 08-11-2011 |
20110224985 | MODEL ADAPTATION DEVICE, METHOD THEREOF, AND PROGRAM THEREOF - A model adaptation device includes a text database that stores a plurality of sentences containing predetermined phonemes; a sentence list that includes a plurality of sentences that describe the contents of the input voice; an input unit to which the input voice is input; a model adaptation unit that performs the model adaptation using the input voice and the sentence list and outputs adapting characteristic information, which is for making the model approximate to the input voice; a statistic database that stores the adapting characteristic information; a distance calculation unit that outputs a value of an acoustic distance between the adapting characteristic information and the model for each phoneme; a phoneme detection unit that outputs a distance value, among the distance values, which is greater than a threshold value as a detection result; and a label generation unit that extracts from the text database a sentence containing a phoneme associated with the detection result and outputs the sentence. | 09-15-2011 |
20120022869 | ACOUSTIC MODEL ADAPTATION USING GEOGRAPHIC INFORMATION - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for enhancing speech recognition accuracy. In one aspect, a method includes receiving an audio signal that corresponds to an utterance recorded by a mobile device, determining a geographic location associated with the mobile device, adapting one or more acoustic models for the geographic location, and performing speech recognition on the audio signal using the one or more acoustic models that are adapted for the geographic location. | 01-26-2012 |
20120035928 | SPEAKER ADAPTATION OF VOCABULARY FOR SPEECH RECOGNITION - A phonetic vocabulary for a speech recognition system is adapted to a particular speaker's pronunciation. A speaker can be attributed specific pronunciation styles, which can be identified from specific pronunciation examples. Consequently, a phonetic vocabulary can be reduced in size, which can improve recognition accuracy and recognition speed. | 02-09-2012 |
20120173237 | INTERACTIVE SPEECH RECOGNITION MODEL - A method and apparatus for updating a speech model on a multi-user speech recognition system with a personal speech model for a single user. A speech recognition system, for instance in a car, can include a generic speech model for comparison with the user speech input. A way of identifying a personal speech model, for instance in a mobile phone, is connected to the system. A mechanism is included for receiving personal speech model components, for instance a BLUETOOTH connection. The generic speech model is updated using the received personal speech model components. Speech recognition can then be performed on user speech using the updated generic speech model. | 07-05-2012 |
20120245940 | Guest Speaker Robust Adapted Speech Recognition - A method for speech recognition is implemented in the specific form of computer processes that function in a computer processor. That is, one or more computer processes: process a speech input to produce a sequence of representative speech vectors and perform multiple recognition passes to determine a recognition output corresponding to the speech input. At least one generic recognition pass is based on a generic speech recognition arrangement using generic modeling of a broad general class of input speech. And at least one adapted recognition pass is based on a speech adapted arrangement using pre-adapted modeling of a specific sub-class of the general class of input speech. | 09-27-2012 |
20120284025 | ENHANCED ACCURACY FOR SPEECH RECOGNITION GRAMMARS - Disclosed herein are methods and systems for recognizing speech. A method embodiment comprises comparing received speech with a precompiled grammar based on a database and if the received speech matches data in the precompiled grammar then returning a result based on the matched data. If the received speech does not match data in the precompiled grammar, then dynamically compiling a new grammar based only on new data added to the database after the compiling of the precompiled grammar. The database may comprise a directory of names. | 11-08-2012 |
20130006632 | SYSTEM AND METHOD FOR APPLYING DYNAMIC CONTEXTUAL GRAMMARS AND LANGUAGE MODELS TO IMPROVE AUTOMATIC SPEECH RECOGNITION ACCURACY - The invention involves the loading and unloading of dynamic section grammars and language models in a speech recognition system. The values of the sections of the structured document are either determined in advance from a collection of documents of the same domain, document type, and speaker; or collected incrementally from documents of the same domain, document type, and speaker; or added incrementally to an already existing set of values. Speech recognition in the context of the given field is constrained to the contents of these dynamic values. If speech recognition fails or produces a poor match within this grammar or section language model, speech recognition against a larger, more general vocabulary that is not constrained to the given section is performed. | 01-03-2013 |
20130073286 | Consolidating Speech Recognition Results - Candidate interpretations resulting from application of speech recognition algorithms to spoken input are presented in a consolidated manner that reduces redundancy. A list of candidate interpretations is generated, and each candidate interpretation is subdivided into time-based portions, forming a grid. Those time-based portions that duplicate portions from other candidate interpretations are removed from the grid. A user interface is provided that presents the user with an opportunity to select among the candidate interpretations; the user interface is configured to present these alternatives without duplicate elements. | 03-21-2013 |
20130117023 | System and Method for Mobile Automatic Speech Recognition - A system and method of updating automatic speech recognition parameters on a mobile device are disclosed. The method comprises storing user account-specific adaptation data associated with ASR on a computing device associated with a wireless network, generating new ASR adaptation parameters based on transmitted information from the mobile device when a communication channel between the computing device and the mobile device becomes available and transmitting the new ASR adaptation data to the mobile device when a communication channel between the computing device and the mobile device becomes available. The new ASR adaptation data on the mobile device more accurately recognizes user utterances. | 05-09-2013 |
20130132084 | SYSTEM AND METHOD FOR PERFORMING DUAL MODE SPEECH RECOGNITION - A system and method for performing dual mode speech recognition, employing a local recognition module on a mobile device and a remote recognition engine on a server device. The system accepts a spoken query from a user, and both the local recognition module and the remote recognition engine perform speech recognition operations on the query, returning a transcription and confidence score, subject to a latency cutoff time. If both sources successfully transcribe the query, then the system accepts the result having the higher confidence score. If only one source succeeds, then that result is accepted. In either case, if the remote recognition engine does succeed in transcribing the query, then a client vocabulary is updated if the remote system result includes information not present in the client vocabulary. | 05-23-2013 |
20130185071 | VERIFYING A USER - Verifying a user, such as, but not limited to, a user who answered questions for an unproctored test for employment. A representation of a transition pattern is stored. | 07-18-2013 |
20130226580 | SPOKEN CONTROL FOR USER CONSTRUCTION OF COMPLEX BEHAVIORS - A device interface system is presented. Contemplated device interfaces allow for construction of complex device behaviors by aggregating device functions. The behaviors are triggered based on conditions derived from environmental data about the device. | 08-29-2013 |
20130238333 | System and Method for Automatically Generating a Dialog Manager - Disclosed herein are systems, methods, and computer-readable storage media for automatically generating a dialog manager for use in a spoken dialog system. A system practicing the method receives a set of user interactions having features, identifies an initial policy, evaluates all of the features in a linear evaluation step of the algorithm to identify a set of most important features, performs a cubic policy improvement step on the identified set of most important features, repeats the previous two steps one or more times, and generates a dialog manager for use in a spoken dialog system based on the resulting policy and/or set of most important features. Evaluating all of the features can include estimating a weight for each feature which indicates how much each feature contributes to at least one of the identified policies. The system can ignore features not in the set of most important features. | 09-12-2013 |
20130238334 | DEVICE AND METHOD FOR PASS-PHRASE MODELING FOR SPEAKER VERIFICATION, AND VERIFICATION SYSTEM - A device and method for pass-phrase modeling for speaker verification and a speaker verification system are provided. The device comprises a front end which receives enrollment speech from a target speaker, and a template generation unit which generates a pass-phrase template with a general speaker model based on the enrollment speech. With the device, method and system of the present disclosure, by taking the rich variations contained in a general speaker model into account, robust pass-phrase modeling is ensured even when the enrollment data is insufficient, even when just one pass-phrase is available from a target speaker. | 09-12-2013 |
20130246064 | SYSTEM AND METHOD FOR REAL-TIME SPEAKER SEGMENTATION OF AUDIO INTERACTIONS - A system and method for real-time processing of a signal of a voice interaction. In an embodiment, a digital representation of a portion of an interaction may be analyzed in real-time and a segment may be selected. The segment may be associated with a source based on a model of the source. The model may be updated based on the segment. The updated model is used to associate subsequent segments with the source. Other embodiments are described and claimed. | 09-19-2013 |
20130246065 | Automatic Language Model Update - A method for generating a speech recognition model includes accessing a baseline speech recognition model, obtaining information related to recent language usage from search queries, and modifying the speech recognition model to revise probabilities of a portion of a sound occurrence based on the information. The portion of a sound may include a word. Also, a method for generating a speech recognition model, includes receiving at a search engine from a remote device an audio recording and a transcript that substantially represents at least a portion of the audio recording, synchronizing the transcript with the audio recording, extracting one or more letters from the transcript and extracting the associated pronunciation of the one or more letters from the audio recording, and generating a dictionary entry in a pronunciation dictionary. | 09-19-2013 |
20130317822 | MODEL ADAPTATION DEVICE, MODEL ADAPTATION METHOD, AND PROGRAM FOR MODEL ADAPTATION - A model adaptation device includes a recognition unit which creates a recognition result of recognizing data that complies with a target domain which is an assumed condition of recognition target data, based on at least two models and a candidate of a weighting factor indicating a weight of each model on a recognition process. A weighting factor determination unit determines the weighting factor so as to assign a smaller weight to a model having higher reliability. A model update unit updates at least one model out of the models, using the recognition result as the truth label. | 11-28-2013 |
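The abstract above determines a weighting factor "so as to assign a smaller weight to a model having higher reliability." One simple way to realize that property is to make each weight inversely proportional to its model's reliability and normalize the weights to sum to one. This is an illustrative Python sketch; the patent does not disclose the actual formula, and the function name and data are hypothetical.

```python
def weighting_factors(reliabilities):
    """Assign a smaller combination weight to a model with higher
    reliability (inverse-reliability scheme), normalized to sum to 1.
    An illustrative assumption; the patent leaves the formula open."""
    inverse = [1.0 / r for r in reliabilities]
    total = sum(inverse)
    return [w / total for w in inverse]

# Two models: the second is twice as reliable, so it gets half the weight.
w = weighting_factors([1.0, 2.0])
print(w)  # weights sum to 1; the less reliable model gets the larger weight
```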
20130325471 | METHODS AND APPARATUS FOR PERFORMING TRANSFORMATION TECHNIQUES FOR DATA CLUSTERING AND/OR CLASSIFICATION - Some aspects include transforming data, at least a portion of which has been processed to determine at least one representative vector associated with each of a plurality of classifications associated with the data to obtain a plurality of representative vectors. Techniques comprise determining a first transformation based, at least in part, on the plurality of representative vectors, applying at least the first transformation to the data to obtain transformed data, and fitting a plurality of clusters to the transformed data to obtain a plurality of established clusters. Some aspects include classifying input data by transforming the input data using at least the first transformation and comparing the transformed input data to the established clusters. | 12-05-2013 |
20140006024 | SYSTEM AND METHOD FOR STANDARDIZED SPEECH RECOGNITION INFRASTRUCTURE | 01-02-2014 |
20140032216 | Pronunciation Discovery for Spoken Words - A method for a portable device includes receiving a spoken utterance of a word or phrase, generating a plurality of alternative pronunciations of the spoken utterance, scoring one or more pronunciations of the plurality of alternative pronunciations using the spoken utterance, and updating a lexicon with at least one scored pronunciation. | 01-30-2014 |
20140046663 | System and Method for Improving Speech Recognition Accuracy Using Textual Context - Disclosed herein are systems, methods, and computer-readable storage media for improving speech recognition accuracy using textual context. The method includes retrieving a recorded utterance, capturing text from a device display associated with the spoken dialog and viewed by one party to the recorded utterance, and identifying words in the captured text that are relevant to the recorded utterance. The method further includes adding the identified words to a dynamic language model, and recognizing the recorded utterance using the dynamic language model. The recorded utterance can be a spoken dialog. A time stamp can be assigned to each identified word. The method can include adding identified words to and/or removing identified words from the dynamic language model based on their respective time stamps. A screen scraper can capture text from the device display associated with the recorded utterance. The device display can contain customer service data. | 02-13-2014 |
20140067394 | SYSTEM AND METHOD FOR DECODING SPEECH - The system and method for speech decoding in speech recognition systems provides decoding for speech variants common to such languages. These variants include within-word and cross-word variants. For decoding of within-word variants, a data-driven approach is used, in which phonetic variants are identified, and a pronunciation dictionary and language model of a dynamic programming speech recognition system are updated based upon these identifications. Cross-word variants are handled with a knowledge-based approach, applying phonological rules, part-of-speech tagging or tagging of small words to a speech transcription corpus and updating the pronunciation dictionary and language model of the dynamic programming speech recognition system based upon identified cross-word variants. | 03-06-2014 |
20140074470 | PHONETIC PRONUNCIATION - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for improved pronunciation. One of the methods includes receiving data that represents an audible pronunciation of the name of an individual from a user device. The method includes identifying one or more other users that are members of a social circle that the individual is a member. The method includes identifying one or more devices associated with the other users. The method also includes providing information that identifies the individual and the data representing the audible pronunciation to the one or more identified devices. | 03-13-2014 |
20140114661 | METHODS AND SYSTEMS FOR SPEECH RECOGNITION PROCESSING USING SEARCH QUERY INFORMATION - Methods and systems for speech recognition processing are described. In an example, a computing device may be configured to receive information indicative of a frequency of submission of a search query to a search engine for a search query composed of a sequence of words. Based on the frequency of submission of the search query exceeding a threshold, the computing device may be configured to determine groupings of one or more words of the search query based on an order in which the one or more words occur in the sequence of words of the search query. Further, the computing device may be configured to provide information indicating the groupings to a speech recognition system. | 04-24-2014 |
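The grouping step this abstract describes (keep only queries whose submission frequency exceeds a threshold, then form order-preserving word groupings) can be sketched in a few lines. Bigrams are used here as one simple choice of grouping; the names and data are hypothetical, not taken from the patent.

```python
from collections import Counter

def query_groupings(query_counts, threshold):
    """For each query submitted more often than `threshold`, emit the
    contiguous word groupings (bigrams, preserving word order) that
    would be provided to the speech recognition system."""
    groupings = {}
    for query, count in query_counts.items():
        if count <= threshold:
            continue
        words = query.split()
        groupings[query] = [tuple(words[i:i + 2])
                            for i in range(len(words) - 1)]
    return groupings

counts = Counter({"weather in boston": 120, "rare query": 3})
print(query_groupings(counts, threshold=10))
# -> {'weather in boston': [('weather', 'in'), ('in', 'boston')]}
```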
20140136200 | ADAPTATION METHODS AND SYSTEMS FOR SPEECH SYSTEMS - Methods and systems are provided for adapting a speech system. In one example a method includes: processing a spoken command with one or more models of one or more model types to achieve model results; evaluating a frequency of the model results; and selectively updating the one or more models of the one or more model types based on the evaluating. | 05-15-2014 |
20140136201 | ADAPTATION METHODS AND SYSTEMS FOR SPEECH SYSTEMS - Methods and systems are provided for adapting a speech system. In one example a method includes: logging speech data from the speech system; detecting a user characteristic from the speech data; and selectively updating a language model based on the user characteristic. | 05-15-2014 |
20140136202 | ADAPTATION METHODS AND SYSTEMS FOR SPEECH SYSTEMS - Methods and systems are provided for adapting a speech system. In one example a method includes: logging speech data from the speech system; processing the speech data for a pattern of a user competence associated with at least one of task requests and interaction behavior; and selectively updating at least one of a system prompt and an interaction sequence based on the user competence. | 05-15-2014 |
20140156274 | METHODS AND SYSTEMS TO TRAVERSE GRAPH-BASED NETWORKS - Methods and systems to translate input labels of arcs of a network, corresponding to a sequence of states of the network, to a list of output grammar elements of the arcs, corresponding to a sequence of grammar elements. The network may include a plurality of speech recognition models combined with a weighted finite state machine transducer (WFST). Traversal may include active arc traversal, and may include active arc propagation. Arcs may be processed in parallel, including arcs originating from multiple source states and directed to a common destination state. Self-loops associated with states may be modeled within outgoing arcs of the states, which may reduce synchronization operations. Tasks may be ordered with respect to cache-data locality to associate tasks with processing threads based at least in part on whether another task associated with a corresponding data object was previously assigned to the thread. | 06-05-2014 |
20140156275 | Method of Active Learning for Automatic Speech Recognition - State-of-the-art speech recognition systems are trained using transcribed utterances, preparation of which is labor-intensive and time-consuming. The present invention is an iterative method for reducing the transcription effort for training in automatic speech recognition (ASR). Active learning aims at reducing the number of training examples to be labeled by automatically processing the unlabeled examples and then selecting the most informative ones with respect to a given cost function for a human to label. The method comprises automatically estimating a confidence score for each word of the utterance and exploiting the lattice output of a speech recognizer, which was trained on a small set of transcribed data. An utterance confidence score is computed based on these word confidence scores; then the utterances are selectively sampled to be transcribed using the utterance confidence scores. | 06-05-2014 |
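The selective-sampling loop in the abstract above, combining per-word confidence scores into an utterance score and transcribing the least confident utterances first, can be sketched as follows. The mean is used as the combination function for brevity; the patent does not fix a particular formula, and the identifiers are hypothetical.

```python
def utterance_confidence(word_confidences):
    # Combine per-word confidence scores into one utterance-level score;
    # the arithmetic mean is one simple choice.
    return sum(word_confidences) / len(word_confidences)

def select_for_transcription(utterances, k):
    """Selectively sample the k least-confident utterances for a human
    to transcribe. `utterances` maps an utterance id to the confidence
    scores of its recognized words."""
    scored = sorted(utterances.items(),
                    key=lambda item: utterance_confidence(item[1]))
    return [uid for uid, _ in scored[:k]]

batch = {"u1": [0.9, 0.95], "u2": [0.4, 0.5, 0.6], "u3": [0.7, 0.8]}
print(select_for_transcription(batch, 2))  # -> ['u2', 'u3']
```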
20140207459 | System and Method for Mobile Automatic Speech Recognition - A system and method of updating automatic speech recognition parameters on a mobile device are disclosed. The method comprises storing user account-specific adaptation data associated with ASR on a computing device associated with a wireless network, generating new ASR adaptation parameters based on transmitted information from the mobile device when a communication channel between the computing device and the mobile device becomes available and transmitting the new ASR adaptation data to the mobile device when a communication channel between the computing device and the mobile device becomes available. The new ASR adaptation data on the mobile device more accurately recognizes user utterances. | 07-24-2014 |
20140244255 | SPEECH RECOGNITION DEVICE AND METHOD, AND SEMICONDUCTOR INTEGRATED CIRCUIT DEVICE - A semiconductor integrated circuit device for speech recognition includes a conversion candidate setting unit that receives text data indicating words or sentences together with a command and sets the text data in a conversion list in accordance with the command; a standard pattern extracting unit that extracts, from a speech recognition database, a standard pattern corresponding to at least a part of the words or sentences indicated by the text data that is set in the conversion list; a signal processing unit that extracts frequency components of an input speech signal and generates a feature pattern indicating distribution of the frequency components; and a match detecting unit that detects a match between the feature pattern generated from at least a part of the speech signal and the standard pattern and outputs a speech recognition result. | 08-28-2014 |
20140244256 | Enhanced Accuracy for Speech Recognition Grammars - Disclosed herein are methods and systems for recognizing speech. A method embodiment comprises comparing received speech with a precompiled grammar based on a database and if the received speech matches data in the precompiled grammar then returning a result based on the matched data. If the received speech does not match data in the precompiled grammar, then dynamically compiling a new grammar based only on new data added to the database after the compiling of the precompiled grammar. The database may comprise a directory of names. | 08-28-2014 |
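The fallback strategy in the abstract above (try the precompiled grammar first; on a miss, compile a grammar from only the database entries added after the original compilation) can be sketched as below. `heard` stands in for already-decoded speech; real matching would be acoustic, and all names here are illustrative.

```python
def recognize(heard, precompiled, database, compiled_at):
    """Return a match from the precompiled grammar if possible;
    otherwise dynamically compile a grammar from only the entries
    added to the database after `compiled_at` and match against that.
    `database` maps each entry to the time it was added."""
    if heard in precompiled:
        return heard
    new_entries = {name for name, added in database.items()
                   if added > compiled_at}
    return heard if heard in new_entries else None

directory = {"alice": 1, "bob": 1, "carol": 5}  # name -> time added
precompiled = {"alice", "bob"}                  # grammar built at time 2
print(recognize("carol", precompiled, directory, compiled_at=2))  # -> carol
print(recognize("dave", precompiled, directory, compiled_at=2))   # -> None
```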
20140249818 | Document Transcription System Training - A system is provided for training an acoustic model for use in speech recognition. In particular, such a system may be used to perform training based on a spoken audio stream and a non-literal transcript of the spoken audio stream. Such a system may identify text in the non-literal transcript which represents concepts having multiple spoken forms. The system may attempt to identify the actual spoken form in the audio stream which produced the corresponding text in the non-literal transcript, and thereby produce a revised transcript which more accurately represents the spoken audio stream. The revised, and more accurate, transcript may be used to train the acoustic model, thereby producing a better acoustic model than that which would be produced using conventional techniques, which perform training based directly on the original non-literal transcript. | 09-04-2014 |
20140257811 | METHOD FOR REFINING A SEARCH - A method for refining a search is provided. Embodiments may include receiving a first speech signal corresponding to a first utterance and receiving a second speech signal corresponding to a second utterance, wherein the second utterance is a refinement to the first utterance. Embodiments may also include identifying information associated with the first speech signal as first speech signal information and identifying information associated with the second speech signal as second speech signal information. Embodiments may also include determining a first quantity of search results based upon the first speech signal information and determining a second quantity of search results based upon the second speech signal information. Embodiments may also include comparing at least one of the first quantity of search results and the second quantity of search results with a quantity of search results from a combination of information of the first and second signals and determining an information gain from the comparison. | 09-11-2014 |
20140278414 | UPDATING A VOICE TEMPLATE - Updating a voice template for recognizing a speaker on the basis of a voice uttered by the speaker is disclosed. Stored voice templates indicate distinctive characteristics of utterances from speakers. Distinctive characteristics are extracted for a specific speaker based on a voice message utterance received from that speaker. The distinctive characteristics are compared to the characteristics indicated by the stored voice templates to select a template that matches within a predetermined threshold. The selected template is updated on the basis of the extracted characteristics. | 09-18-2014 |
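The match-then-update flow in the abstract above can be sketched with feature vectors: find the stored template closest to the extracted characteristics, accept it only within a threshold, and nudge it toward the new utterance. The Euclidean distance and the update rate are illustrative assumptions; the patent does not specify either.

```python
def match_and_update(stored, extracted, threshold, rate=0.1):
    """Select the stored template closest to the extracted
    characteristics; if it matches within `threshold`, update it
    toward the new utterance by a small step and return its key.
    Distance metric and update rate are illustrative, not from
    the patent."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = min(stored, key=lambda name: dist(stored[name], extracted))
    if dist(stored[best], extracted) > threshold:
        return None  # no template matches closely enough
    stored[best] = [t + rate * (e - t)
                    for t, e in zip(stored[best], extracted)]
    return best

templates = {"alice": [1.0, 0.0], "bob": [0.0, 1.0]}
print(match_and_update(templates, [0.9, 0.1], threshold=0.5))  # -> alice
```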
20140316782 | METHODS AND SYSTEMS FOR MANAGING DIALOG OF SPEECH SYSTEMS - Methods and systems are provided for managing speech dialog of a speech system. In one embodiment, a method includes: receiving a first utterance from a user of the speech system; determining a first list of possible results from the first utterance, wherein the first list includes at least two elements that each represent a possible result; analyzing the at least two elements of the first list to determine an ambiguity of the elements; and generating a speech prompt to the user based on partial orthography and the ambiguity. | 10-23-2014 |
20140316783 | VOCAL KEYWORD TRAINING FROM TEXT - Systems and methods for vocal keyword training from text are provided. In one example method, text is received via keyboard or a touch screen. The text can include one or more words of language known to a user. The received text can be compiled to generate a signature. The signature can embody a spoken keyword and include a sequence of phonemes or a triphone. The signature can be provided as an input to automatic speech recognition (ASR) software for subsequent comparison to an audible input. In various embodiments, a mobile device receives the audible input and the text, and at least one of the compiling and ASR functionality is distributed to a cloud-based system. | 10-23-2014 |
20140324428 | SYSTEM AND METHOD OF IMPROVING SPEECH RECOGNITION USING CONTEXT - A system and method are provided for improving speech recognition accuracy. Contextual information about user speech may be received, and then speech recognition analysis can be performed on the user speech using the contextual information. This allows the system and method to improve accuracy when performing tasks like searching and navigating using speech recognition. | 10-30-2014 |
20140324429 | COMPUTER-IMPLEMENTED METHOD FOR AUTOMATIC TRAINING OF A DIALOGUE SYSTEM, AND DIALOGUE SYSTEM FOR GENERATING SEMANTIC ANNOTATIONS - An adaptive dialogue system and a computer-implemented method for semantic training of a dialogue system are disclosed. Semantic annotations are generated automatically on the basis of received speech inputs, the annotations being intended for controlling instruments or for communicating with a user. For this purpose, at least one speech input is received in the course of an interaction with a user. The sense content of the speech input is registered and appraised by classifying the speech input on the basis of a trainable semantic model, in order to make a semantic annotation available for the speech input. Further user information connected with the speech input is taken into account if the registered sense content is appraised as erroneous, incomplete and/or untrustworthy. The sense content of the speech input is learned automatically on the basis of the additional user information. | 10-30-2014 |
20140324430 | System and Method for Standardized Speech Recognition Infrastructure - Disclosed herein are systems, methods, and computer-readable storage media for selecting a speech recognition model in a standardized speech recognition infrastructure. The system receives speech from a user, and if a user-specific supervised speech model associated with the user is available, retrieves the supervised speech model. If the user-specific supervised speech model is unavailable and if an unsupervised speech model is available, the system retrieves the unsupervised speech model. If the user-specific supervised speech model and the unsupervised speech model are unavailable, the system retrieves a generic speech model associated with the user. Next the system recognizes the received speech from the user with the retrieved model. In one embodiment, the system trains a speech recognition model in a standardized speech recognition infrastructure. In another embodiment, the system handshakes with a remote application in a standardized speech recognition infrastructure. | 10-30-2014 |
20140343942 | Multitask Learning for Spoken Language Understanding - Systems for improving or generating a spoken language understanding system using a multitask learning method for intent or call-type classification. The multitask learning method aims at training tasks in parallel while using a shared representation. A computing device automatically re-uses the existing labeled data from various applications, which are similar but may have different call-types, intents or intent distributions to improve the performance. An automated intent mapping algorithm operates across applications. In one aspect, active learning is employed to selectively sample the data to be re-used. | 11-20-2014 |
20140358540 | System and Method for Adapting Automatic Speech Recognition Pronunciation by Acoustic Model Restructuring - Disclosed herein are systems, computer-implemented methods, and computer-readable storage media for recognizing speech by adapting automatic speech recognition pronunciation by acoustic model restructuring. The method identifies an acoustic model and a matching pronouncing dictionary trained on typical native speech in a target dialect. The method collects speech from a new speaker resulting in collected speech and transcribes the collected speech to generate a lattice of plausible phonemes. Then the method creates a custom speech model for representing each phoneme used in the pronouncing dictionary by a weighted sum of acoustic models for all the plausible phonemes, wherein the pronouncing dictionary does not change, but the model of the acoustic space for each phoneme in the dictionary becomes a weighted sum of the acoustic models of phonemes of the typical native speech. Finally the method includes recognizing via a processor additional speech from the target speaker using the custom speech model. | 12-04-2014 |
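The restructuring in the abstract above, representing each dictionary phoneme by a weighted sum of the acoustic models of all plausible phonemes, reduces at scoring time to a weighted sum of per-phoneme acoustic scores. A minimal Python sketch, with hypothetical weights and scores (the confusion of /t/ with /d/ is an invented example):

```python
def restructured_score(weights, model_scores):
    """Score for one dictionary phoneme as a weighted sum of the
    acoustic scores of all plausible phonemes, mirroring the weighted
    sum of acoustic models described in the abstract. `weights` would
    come from the lattice of plausible phonemes for the new speaker."""
    return sum(w * model_scores[p] for p, w in weights.items())

# Hypothetical example: for a new speaker, the dictionary phoneme /t/
# is modeled as mostly /t/ with some /d/ (a plausible confusion).
scores = {"t": 0.8, "d": 0.6}
print(restructured_score({"t": 0.7, "d": 0.3}, scores))  # approximately 0.74
```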
20140365218 | LANGUAGE MODEL ADAPTATION USING RESULT SELECTION - A received utterance is recognized using different language models. For example, recognition of the utterance is independently performed using a baseline language model (BLM) and using an adapted language model (ALM). A determination is made as to which results from the different language models are more likely to be accurate. Different features (e.g. language model scores, recognition confidences, acoustic model scores, quality measurements, . . . ) may be used to assist in making the determination. A classifier may be trained and then used in determining whether to select the results using the BLM or to select the results using the ALM. A language model may be automatically trained or re-trained that adjusts a weight of the training data used in training the model in response to differences between the two results obtained from applying the different language models. | 12-11-2014 |
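The selection step in the abstract above can be sketched as a linear score over the features it mentions (language model score, recognition confidence), choosing whichever result scores higher. The linear combination stands in for the trained classifier, and the weights and data are placeholders, not taken from the patent.

```python
def select_result(blm, alm, weights=(1.0, 1.0)):
    """Choose between the baseline-LM (BLM) and adapted-LM (ALM)
    recognition results using a linear score over two features; a
    stand-in for the trained classifier the abstract describes."""
    def score(r):
        return weights[0] * r["lm_score"] + weights[1] * r["confidence"]
    return blm if score(blm) >= score(alm) else alm

blm = {"text": "call mom", "lm_score": 0.4, "confidence": 0.9}
alm = {"text": "call tom", "lm_score": 0.6, "confidence": 0.8}
print(select_result(blm, alm)["text"])                      # -> call tom
# With different classifier weights the other result can win:
print(select_result(blm, alm, weights=(0.0, 1.0))["text"])  # -> call mom
```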
20150019219 | SYSTEMS AND METHODS FOR SPOKEN DIALOG SERVICE ARBITRATION - Systems and methods for arbitrating spoken dialog services include determining a capability catalog associated with a plurality of devices accessible within an environment. The capability catalog includes a list of the plurality of devices mapped to a list of spoken dialog services provided by each of the plurality of devices. The system arbitrates between the plurality of devices and the spoken dialog services in the capability catalog to determine a selected device and a selected dialog service. | 01-15-2015 |
20150019220 | VOICE AUTHENTICATION AND SPEECH RECOGNITION SYSTEM AND METHOD - A method for configuring a speech recognition system comprises obtaining a speech sample utilised by a voice authentication system in a voice authentication process. The speech sample is processed to generate acoustic models for units of speech associated with the speech sample. The acoustic models are stored for subsequent use by the speech recognition system as part of a speech recognition process. | 01-15-2015 |
20150025886 | SYSTEM AND METHOD OF EXTRACTING CLAUSES FOR SPOKEN LANGUAGE UNDERSTANDING - A clausifier for extracting clauses for spoken language understanding is disclosed. The method relates to generating a set of clauses from speech utterance text and comprises inserting at least one boundary tag in speech utterance text related to sentence boundaries, inserting at least one edit tag indicating a portion of the speech utterance text to remove, and inserting at least one conjunction tag within the speech utterance text. The result is a set of clauses that may be identified within the speech utterance text according to the inserted at least one boundary tag, at least one edit tag and at least one conjunction tag. The disclosed clausifier comprises a sentence boundary classifier, an edit detector classifier, and a conjunction detector classifier. The clausifier may comprise a single classifier or a plurality of classifiers to perform the steps of identifying sentence boundaries, editing text, and identifying conjunctions within the text. | 01-22-2015 |
20150032451 | Method and Device for Voice Recognition Training - A method on a mobile device for voice recognition training is described. A voice training mode is entered. A voice training sample for a user of the mobile device is recorded. The voice training mode is interrupted to enter a noise indicator mode based on a sample background noise level for the voice training sample and a sample background noise type for the voice training sample. The voice training mode is returned to from the noise indicator mode when the user provides a continuation input that indicates a current background noise level meets an indicator threshold value. | 01-29-2015 |
20150039310 | Method and Apparatus for Mitigating False Accepts of Trigger Phrases - An electronic device includes a microphone that receives an audio signal, and a processor that is electrically coupled to the microphone. The processor detects a trigger phrase in the received audio signal and measure characteristics of the detected trigger phrase. Based on the measured characteristics of the detected trigger phrase, the processor determines whether the detected trigger phrase is valid. | 02-05-2015 |
20150039311 | Method and Apparatus for Evaluating Trigger Phrase Enrollment - An electronic device includes a microphone that receives an audio signal that includes a spoken trigger phrase, and a processor that is electrically coupled to the microphone. The processor measures characteristics of the audio signal, and determines, based on the measured characteristics, whether the spoken trigger phrase is acceptable for trigger phrase model training. If the spoken trigger phrase is determined not to be acceptable for trigger phrase model training, the processor rejects the trigger phrase for trigger phrase model training. | 02-05-2015 |
20150073796 | APPARATUS AND METHOD OF GENERATING LANGUAGE MODEL FOR SPEECH RECOGNITION - Disclosed herein are an apparatus and a method of generating a language model for speech recognition. The present invention provides an apparatus for generating a language model capable of improving speech recognition performance by predicting positions at which breaks are present and reflecting the predicted break information. | 03-12-2015 |
20150073797 | System and Method for Increasing Recognition Rates of In-Vocabulary Words By Improving Pronunciation Modeling - The present disclosure relates to systems, methods, and computer-readable media for generating a lexicon for use with speech recognition. The method includes overgenerating potential pronunciations based on symbolic input, identifying potential pronunciations in a speech recognition context, and storing the identified potential pronunciations in a lexicon. Overgenerating potential pronunciations can include establishing a set of conversion rules for short sequences of letters, converting portions of the symbolic input into a number of possible lexical pronunciation variants based on the set of conversion rules, modeling the possible lexical pronunciation variants in one of a weighted network and a list of phoneme lists, and iteratively retraining the set of conversion rules based on improved pronunciations. Symbolic input can include multiple examples of a same spoken word. Speech data can be labeled explicitly or implicitly and can include words as text and recorded audio. | 03-12-2015 |
20150088511 | NAMED-ENTITY BASED SPEECH RECOGNITION - In embodiments, apparatuses, methods and storage media are described that are associated with recognition of speech based on sequences of named entities. Language models may be trained as being associated with sequences of named entities. A language model may be selected for speech recognition after identification of one or more sequences of named entities by an initial language model. After identification of the one or more sequences of named entities, weights may be assigned to the one or more sequences of named entities. These weights may be utilized to select a language module and/or update the initial language model to one that is associated with the identified one or more sequences of named entities. In various embodiments, the language model may be repeatedly updated until the recognized speech converges sufficiently to satisfy a predetermined threshold. Other embodiments may be described and claimed. | 03-26-2015 |
20150106096 | Configuring Dynamic Custom Vocabulary for Personalized Speech Recognition - The disclosure includes a system and method for configuring custom vocabularies for personalized speech recognition. The system includes a processor and a memory storing instructions that when executed cause the system to: detect a provisioning trigger event; determine a state of a journey associated with a user based on the provisioning trigger event; determine one or more interest places based on the state of the journey; populate a place vocabulary associated with the user using the one or more interest places; filter the place vocabulary based on one or more place filtering parameters; and register the filtered place vocabulary for the user. | 04-16-2015 |
20150112680 | Method for Updating Voiceprint Feature Model and Terminal - A method for updating a voiceprint feature model and a terminal are provided that are applicable to the field of voice recognition technologies. The method includes: obtaining an original audio stream including at least one speaker; obtaining a respective audio stream of each speaker of the at least one speaker in the original audio stream according to a preset speaker segmentation and clustering algorithm; separately matching the respective audio stream of each speaker of the at least one speaker with an original voiceprint feature model, to obtain a successfully matched audio stream; and using the successfully matched audio stream as an additional audio stream training sample for generating the original voiceprint feature model, and updating the original voiceprint feature model. | 04-23-2015 |
20150120298 | VOICE CONTROL SYSTEM FOR AN IMPLANT - A system for the control of an implant ( | 04-30-2015 |
20150127343 | MATCHING AND LEAD PREQUALIFICATION BASED ON VOICE ANALYSIS - A computing device may perform a feature identification of a received voice segment to recognize physical characteristics of the voice segment. The device may also determine paralinguistic voice characteristics of the voice segment according to the physical characteristics of the voice segment. The device may also indicate a match status of the voice segment according to a comparison of the physical characteristics and the paralinguistic voice characteristics of the voice segment to desired characteristics of matching voice segments. | 05-07-2015 |
20150142436 | SPEECH RECOGNITION IN AUTOMATED INFORMATION SERVICES SYSTEMS - The present invention allows feedback from operator workstations to be used to update databases used for providing automated information services. When an automated process fails, recorded speech of the caller is passed on to the operator for decision making. Based on the selections made by the operator in light of the speech or other interactions with the caller, a comparison is made between the speech and the selections made by the operator to arrive at information to update the databases in the information services automation system. Thus, when the operator inputs the words corresponding to the speech provided at the information services automation system, the speech may be associated with those words. The association between the speech and the words may be used to update different databases in the information services automation system. | 05-21-2015 |
20150332671 | DEVICE AND METHOD FOR SPEECH RECOGNITION - A device and a method for speech recognition, usable in a vehicle, with a processing unit for processing audio signals of a user on the basis of a user speech profile assigned to this user. The device is configured to store the user speech profile in an external memory assigned only to this user and located outside the processing unit, and to automatically retrieve the user speech profile stored in the external memory at each start-up of the device. The automatically retrieved user speech profile is communicated to the processing unit for use in the processing of future audio signals of the user. | 11-19-2015 |
20150379983 | UTTERANCE SELECTION FOR AUTOMATED SPEECH RECOGNIZER TRAINING - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a set of training utterances. The methods, systems, and apparatus include actions of obtaining a target multi-dimensional distribution of characteristics in an initial set of candidate utterances and selecting a subset of the initial set of candidate utterances based on speech recognition confidence scores associated with the candidate utterances. Additional actions include selecting a particular candidate utterance from the subset of the initial set of utterances and determining that adding the particular candidate utterance to a set of training utterances reduces a divergence of a multi-dimensional distribution of the characteristics in the set of training utterances from the target multi-dimensional distribution. Further actions include adding the particular candidate utterance to the set of training utterances. | 12-31-2015 |
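The test in the abstract above, whether adding a candidate utterance reduces the divergence of the training set's distribution from a target distribution, can be sketched with a KL divergence over a single categorical characteristic. The patent works with a multi-dimensional distribution of characteristics; one bin per utterance and the "short"/"long" labels below are simplifying assumptions.

```python
import math
from collections import Counter

def kl_divergence(p, q, eps=1e-9):
    # D(p || q) over the union of supports, with smoothing for empty bins.
    keys = set(p) | set(q)
    return sum(p.get(k, eps) * math.log(p.get(k, eps) / q.get(k, eps))
               for k in keys)

def normalize(counts):
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def reduces_divergence(training, candidate, target):
    """True if adding `candidate`'s characteristic bin to the training
    set moves its empirical distribution closer to `target`."""
    before = kl_divergence(target, normalize(training))
    after_counts = Counter(training)
    after_counts[candidate] += 1
    after = kl_divergence(target, normalize(after_counts))
    return after < before

target = {"short": 0.5, "long": 0.5}
training = Counter({"short": 8, "long": 2})  # over-represents "short"
print(reduces_divergence(training, "long", target))   # -> True
print(reduces_divergence(training, "short", target))  # -> False
```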
20160005395 | GENERATING COMPUTER RESPONSES TO SOCIAL CONVERSATIONAL INPUTS - Conversational interactions between humans and computer systems can be provided by a computer system that classifies an input by conversation type, and provides human authored responses for conversation types. The input classification can be performed using trained binary classifiers. Training can be performed by labeling inputs as either positive or negative examples of a conversation type. Conversational responses can be authored by the same individuals that label the inputs used in training the classifiers. In some cases, the process of training classifiers can result in a suggestion of a new conversation type, for which human authors can label inputs for a new classifier and write content for responses for that new conversation type. | 01-07-2016 |
20160012817 | LOCAL AND REMOTE AGGREGATION OF FEEDBACK DATA FOR SPEECH RECOGNITION | 01-14-2016 |
20160019883 | DATASET SHIFT COMPENSATION IN MACHINE LEARNING - A method for inter-dataset variability compensation, the method comprising using at least one hardware processor for: receiving a heterogeneous development dataset comprising multiple samples and metadata associated with at least some of the multiple samples; dividing the multiple samples into multiple homogenous subsets, based on the metadata; averaging high-level features of each of the multiple homogenous subsets, to produce multiple central high-level features for the multiple homogenous subsets, respectively; computing an inter-dataset variability subspace spanned by the multiple central high-level features; removing the inter-dataset variability subspace from the high-level features of the multiple homogenous subsets, to produce denoised samples; and training a machine learning system using the denoised samples. | 01-21-2016 |
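The pipeline in this abstract (subset means, a subspace spanned by those means, projection removal) can be sketched with plain vector arithmetic. This is a minimal illustration, not the patented method: the feature vectors, metadata keys, and Gram–Schmidt orthonormalization are assumptions for the example.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def scale(a, s):
    return [x * s for x in a]

def mean(vectors):
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def orthonormal_basis(vectors, tol=1e-10):
    """Gram-Schmidt basis for the span of the central high-level features."""
    basis = []
    for v in vectors:
        w = v[:]
        for b in basis:
            w = sub(w, scale(b, dot(w, b)))
        norm = dot(w, w) ** 0.5
        if norm > tol:
            basis.append(scale(w, 1 / norm))
    return basis

def remove_interdataset_variability(subsets):
    """subsets: dict mapping a metadata key to its list of feature vectors.
    Returns the samples with the inter-dataset subspace projected out."""
    # Step 1: one central high-level feature per homogenous subset.
    centers = [mean(samples) for samples in subsets.values()]
    # Step 2: the inter-dataset variability subspace they span.
    basis = orthonormal_basis(centers)
    # Step 3: remove that subspace from every sample.
    denoised = {}
    for key, samples in subsets.items():
        cleaned = []
        for v in samples:
            w = v[:]
            for b in basis:
                w = sub(w, scale(b, dot(w, b)))
            cleaned.append(w)
        denoised[key] = cleaned
    return denoised
```

After removal, any component of a sample lying in the span of the subset means is zeroed out; only within-subset variation survives for training.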
20160027435 | METHOD FOR TRAINING AN AUTOMATIC SPEECH RECOGNITION SYSTEM - A system and method for speech recognition is provided. Embodiments may include receiving, at a first computing device, a far-talk signal from a far-talk computing device, the far-talk signal transmitted using a first channel and corresponding to an audible sound. Embodiments may further include receiving, at the first computing device, a near-talk signal from a near-talk computing device, the near-talk signal transmitted using a second channel and corresponding to the audible sound, wherein the far-talk signal and the near-talk signal are received during an enrollment phase of a far-talk speech recognition system. Embodiments may also include updating, at the first computing device, one or more models associated with a far-talk speech recognition system based upon, at least in part, one or more characteristics of the far-talk signal and one or more characteristics of the near-talk signal. | 01-28-2016 |
20160027440 | SELECTIVE SPEECH RECOGNITION FOR CHAT AND DIGITAL PERSONAL ASSISTANT SYSTEMS - Disclosed are computer-implemented methods and systems for dynamic selection of speech recognition systems for the use in Chat Information Systems (CIS) based on multiple criteria and context of human-machine interaction. Specifically, once a first user audio input is received, it is analyzed so as to locate specific triggers, determine the context of the interaction or predict the subsequent user audio inputs. Based on at least one of these criteria, one of a free-dictation recognizer, pattern-based recognizer, address book based recognizer or dynamically created recognizer is selected for recognizing the subsequent user audio input. The methods described herein increase the accuracy of automatic recognition of user voice commands, thereby enhancing overall user experience of using CIS, chat agents and similar digital personal assistant systems. | 01-28-2016 |
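The trigger-and-context dispatch this abstract describes can be sketched as a small rule table. The trigger words, the `expected_input` hint, and the recognizer names are hypothetical examples; the abstract does not enumerate the actual criteria, and the dynamically created recognizer is omitted here.

```python
def pick_recognizer(transcript_so_far, expected_input=None):
    """Select a recognizer for the NEXT audio input based on triggers in
    the first input and the predicted context (hypothetical rules)."""
    words = set(transcript_so_far.lower().split())
    # Context hint or contact-related triggers -> address book recognizer.
    if expected_input == "contact" or words & {"call", "text", "dial"}:
        return "address_book"
    # Short confirmation-style inputs -> pattern-based recognizer.
    if words & {"yes", "no", "cancel"}:
        return "pattern"
    # Default: unconstrained dictation.
    return "free_dictation"
```

A dispatcher like this would then route the next utterance's audio to the chosen recognizer, e.g. `pick_recognizer("Call John")` selects the address book based recognizer.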
20160071513 | CLOUD BASED ADAPTIVE LEARNING FOR DISTRIBUTED SENSORS - A low power sound recognition sensor is configured to receive an analog signal that may contain a signature sound. Sound parameter information is extracted from the analog signal and compared to a sound parameter reference stored locally with the sound recognition sensor to detect when the signature sound is received in the analog signal. A trigger signal is generated when a signature sound is detected. A portion of the extracted sound parameter information is sent to a remote training location for adaptive training when a signature sound detection error occurs. An updated sound parameter reference from the remote training location is received in response to the adaptive training. | 03-10-2016 |
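The local detect / remote retrain loop in this abstract can be sketched as follows. This is a rough illustration under stated assumptions: the cosine-similarity comparison, the threshold, and the `remote_train` callback standing in for the remote training location are all inventions for the example.

```python
import math

def cosine(a, b):
    """Similarity between extracted sound parameters and a reference."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

class SoundSensor:
    """Minimal sketch of the low-power sensor's local loop."""

    def __init__(self, reference, threshold=0.95):
        self.reference = reference      # locally stored sound parameter reference
        self.threshold = threshold

    def process(self, frame_params):
        """Return True (the trigger signal) when the extracted parameters
        match the locally stored signature-sound reference."""
        return cosine(frame_params, self.reference) >= self.threshold

    def report_error(self, frame_params, remote_train):
        # On a detection error, send the extracted parameters upstream for
        # adaptive training and install the updated reference that comes back.
        self.reference = remote_train(frame_params, self.reference)
```

The design point is that only the compact extracted parameters, never raw audio, cross to the remote trainer, keeping the sensor low power.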
20160078860 | METHOD AND APPARATUS FOR DISCOVERING TRENDING TERMS IN SPEECH REQUESTS - Systems and processes are disclosed for discovering trending terms in automatic speech recognition. Candidate terms (e.g., words, phrases, etc.) not yet found in a speech recognizer vocabulary or having low language model probability can be identified based on trending usage in a variety of electronic data sources (e.g., social network feeds, news sources, search queries, etc.). When candidate terms are identified, archives of live or recent speech traffic can be searched to determine whether users are uttering the candidate terms in dictation or speech requests. Such searching can be done using open vocabulary spoken term detection to find phonetic matches in the audio archives. As the candidate terms are found in the speech traffic, notifications can be generated that identify the candidate terms, provide relevant usage statistics, identify the context in which the terms are used, and the like. | 03-17-2016 |
20160104478 | VOICE RECOGNITION METHOD USING MACHINE LEARNING - Provided is a speech recognition method using machine learning, including: receiving a speech signal as an input and performing speech recognition to generate speech recognition result information including multiple candidate sentences and ranks of the respective candidate sentences; processing the multiple candidate sentences included in the speech recognition result information according to a machine learning model which is learned in advance and changing the ranks of the multiple candidate sentences to re-rank the multiple candidate sentences; and selecting the highest-rank candidate sentence among the re-ranked multiple candidate sentences as a speech recognition result. Particularly, the machine learning model is generated by: receiving the speech signal and a correct answer sentence as inputs; generating the speech recognition result information and a correct answer set; generating learning data by using the correct answer set; and performing the machine learning of changing the ranks of the candidate sentences. | 04-14-2016 |
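The train-then-re-rank flow this abstract describes can be sketched with a structured-perceptron style learner. The abstract does not specify the machine-learning model; the linear scorer, the feature names, and the perceptron update below are stand-ins for the example.

```python
def score(feats, weights):
    return sum(weights.get(k, 0.0) * v for k, v in feats.items())

def rerank(candidates, weights):
    """Re-order candidate sentences by model score, best first.
    candidates: list of (sentence, feature_dict) in original ASR rank order."""
    return sorted(candidates, key=lambda c: score(c[1], weights), reverse=True)

def train_reranker(learning_data, epochs=10, lr=0.1):
    """Generate the model from (candidates, correct_answer_sentence) pairs:
    whenever the current top-ranked sentence is not the correct answer,
    shift weight toward the correct sentence's features."""
    weights = {}
    for _ in range(epochs):
        for candidates, gold in learning_data:
            best_sentence, best_feats = rerank(candidates, weights)[0]
            if best_sentence != gold:
                gold_feats = next(f for s, f in candidates if s == gold)
                for k, v in gold_feats.items():
                    weights[k] = weights.get(k, 0.0) + lr * v
                for k, v in best_feats.items():
                    weights[k] = weights.get(k, 0.0) - lr * v
    return weights
```

At decode time, the highest-ranked sentence after `rerank` is selected as the recognition result, mirroring the final selection step in the abstract.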
20160133249 | SPEECH SIGNAL PROCESSING METHOD AND SPEECH SIGNAL PROCESSING APPARATUS - A speech signal processing method of a user terminal includes: receiving a speech signal, detecting a personalized information section including personal information in the speech signal, performing data processing on the personalized information section of the speech signal by using a personalized model generated based on the personal information, and receiving, from a server, a result of the data processing performed by the server on a general information section of the speech signal that is different than the personalized information section of the speech signal. | 05-12-2016 |
20160140957 | Speech Recognition Semantic Classification Training - An automated method is described for developing an automated speech input semantic classification system such as a call routing system. A set of semantic classifications is defined for classification of input speech utterances, where each semantic classification represents a specific semantic classification of the speech input. The semantic classification system is trained from training data substantially without manually transcribed in-domain training data, and then operated to assign input speech utterances to the defined semantic classifications. Adaptation training data based on input speech utterances is collected with manually assigned semantic labels from at least one source of already collected language data. When the adaptation training data satisfies a pre-determined adaptation criterion, the semantic classification system is automatically retrained based on the adaptation training data. | 05-19-2016 |
20160140961 | PROVIDING PRE-COMPUTED HOTWORD MODELS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for obtaining, for each of multiple words or sub-words, audio data corresponding to multiple users speaking the word or sub-word; training, for each of the multiple words or sub-words, a pre-computed hotword model for the word or sub-word based on the audio data for the word or sub-word; receiving a candidate hotword from a computing device; identifying one or more pre-computed hotword models that correspond to the candidate hotword; and providing the identified, pre-computed hotword models to the computing device. | 05-19-2016 |
20160155438 | METHOD FOR IMPROVING ACOUSTIC MODEL, COMPUTER FOR IMPROVING ACOUSTIC MODEL AND COMPUTER PROGRAM THEREOF | 06-02-2016 |
20160171973 | OUT OF VOCABULARY PATTERN LEARNING | 06-16-2016 |
20160180836 | METHOD FOR IMPROVING ACOUSTIC MODEL, COMPUTER FOR IMPROVING ACOUSTIC MODEL AND COMPUTER PROGRAM THEREOF | 06-23-2016 |
20160196820 | GENERATION OF LANGUAGE UNDERSTANDING SYSTEMS AND METHODS | 07-07-2016 |
20160196821 | Document Transcription System Training | 07-07-2016 |
20160196822 | SYSTEM AND METHOD FOR MOBILE AUTOMATIC SPEECH RECOGNITION | 07-07-2016 |
20170236513 | METHOD AND ELECTRONIC DEVICE FOR PERFORMING VOICE BASED ACTIONS | 08-17-2017 |
20190147885 | METHOD AND APPARATUS FOR OUTPUTTING INFORMATION | 05-16-2019 |