Class / Patent application number | Description | Number of patent applications / Date published |
704240000 | Probability | 80 |
20080243502 | PARTIALLY FILLING MIXED-INITIATIVE FORMS FROM UTTERANCES HAVING SUB-THRESHOLD CONFIDENCE SCORES BASED UPON WORD-LEVEL CONFIDENCE DATA - The invention discloses prompting for a spoken response that provides input for multiple elements. A single spoken utterance including content for multiple elements can be received, where each element is mapped to a data field. The spoken utterance can be speech-to-text converted to derive values for each of the multiple elements. An utterance level confidence score can be determined, which can fall below an associated certainty threshold. Element-level confidence scores for each of the derived elements can then be ascertained. A first set of the multiple elements can have element-level confidence scores above an associated certainty threshold and a second set can have scores below. Values can be stored in data fields mapped to the first set. A prompt for input for the second set can be played. Accordingly, data fields are partially filled in based upon the original speech utterance, where a second prompt for unfilled fields is played. | 10-02-2008 |
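The element-level logic described in the abstract above can be sketched in a few lines of Python; the field names, confidence values, 0.7 threshold, and `partially_fill` helper are all invented for illustration, not part of the patent:

```python
def partially_fill(elements, threshold=0.7):
    """elements: dict mapping field name -> (value, element_confidence).
    Store values whose element-level confidence clears the threshold;
    collect the rest for a follow-up prompt."""
    filled = {f: v for f, (v, c) in elements.items() if c >= threshold}
    unfilled = [f for f, (v, c) in elements.items() if c < threshold]
    return filled, unfilled

filled, unfilled = partially_fill(
    {"city": ("Boston", 0.91), "date": ("May 3", 0.42)}
)
# "city" is stored immediately; a second prompt would ask only for "date".
```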
20080312921 | SPEECH RECOGNITION UTILIZING MULTITUDE OF SPEECH FEATURES - In a speech recognition system, the combination of a log-linear model with a multitude of speech features is provided to recognize unknown speech utterances. The speech recognition system models the posterior probability of linguistic units relevant to speech recognition using a log-linear model. The posterior model captures the probability of the linguistic unit given the observed speech features and the parameters of the posterior model. The posterior model may be determined using the probability of the word sequence hypotheses given a multitude of speech features. Log-linear models are used with features derived from sparse or incomplete data. The speech features that are utilized may include asynchronous, overlapping, and statistically non-independent speech features. Not all features used in training need to appear in testing/recognition. | 12-18-2008 |
20090030686 | Method and system for computing or determining confidence scores for parse trees at all levels - In a confidence computing method and system, a processor may interpret speech signals as a text string or directly receive a text string as input, generate a syntactical parse tree representing the interpreted string and including a plurality of sub-trees which each represents a corresponding section of the interpreted text string, determine for each sub-tree whether the sub-tree is accurate, obtain replacement speech signals for each sub-tree determined to be inaccurate, and provide output based on corresponding text string sections of at least one sub-tree determined to be accurate. | 01-29-2009 |
20090055176 | Method and System of Optimal Selection Strategy for Statistical Classifications - An optimal selection or decision strategy is described through an example that includes use in dialog systems. The selection strategy or method includes receiving multiple predictions and multiple probabilities. The received predictions predict the content of a received input and each of the probabilities corresponds to one of the predictions. In an example dialog system, the received input includes an utterance. The selection method includes dynamically selecting a set of predictions from the received predictions by generating ranked predictions. The ranked predictions are generated by ordering the plurality of predictions according to descending probability. | 02-26-2009 |
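A minimal sketch of the ranking step described above, assuming predictions and their probabilities are already available; the cumulative-mass stopping rule used to pick the set is a hypothetical selection criterion, not the patent's:

```python
def select_predictions(predictions, probabilities, mass=0.8):
    """Rank predictions by descending probability, then keep the
    smallest prefix whose cumulative probability reaches `mass`."""
    ranked = sorted(zip(predictions, probabilities), key=lambda p: -p[1])
    chosen, total = [], 0.0
    for pred, prob in ranked:
        chosen.append(pred)
        total += prob
        if total >= mass:
            break
    return chosen

# "pay" (0.7) alone misses the 0.8 mass target, so "play" is kept as well.
picked = select_predictions(["play", "pay", "pray"], [0.2, 0.7, 0.1])
```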
20090076817 | METHOD AND APPARATUS FOR RECOGNIZING SPEECH - Provided are an apparatus and method for recognizing speech, in which reliability with respect to phoneme-recognized phoneme sequences is calculated and performance of speech recognition is enhanced using the calculated results. The method of recognizing speech includes the steps of: determining a boundary between phonemes included in character sequences that are phonetically input to detect each phoneme interval; calculating reliability according to a probability that a phoneme indicated by the detected phoneme interval corresponds to a phoneme included in a predefined phoneme model; calculating a phoneme alignment cost with respect to the character sequences based on the calculated reliability and a pre-trained and stored phoneme recognition probability distribution; and performing phoneme alignment based on the calculated phoneme alignment cost to perform speech recognition on the input character sequences. As a result, reliability with respect to the phoneme-recognized phoneme sequences can be calculated, and the performance of speech recognition can be enhanced using the calculated results. | 03-19-2009 |
20090119102 | SYSTEM AND METHOD OF EXPLOITING PROSODIC FEATURES FOR DIALOG ACT TAGGING IN A DISCRIMINATIVE MODELING FRAMEWORK - Disclosed are a system and method for exploiting information in an utterance for dialog act tagging. An exemplary method includes receiving a user utterance, computing at periodic intervals at least one parameter in the user utterance, quantizing the at least one parameter at each periodic interval, approximating conditional probabilities using an n-gram over a sliding window over the periodic intervals and tagging the utterance as a dialog act based on the approximated conditional probabilities. | 05-07-2009 |
20090259466 | Adaptive Confidence Thresholds for Speech Recognition - Adjusting confidence score thresholds is described for a speech recognition engine. The speech recognition engine is implemented in multiple computer processes functioning in a computer processor, and is characterized by an associated receiver operating characteristic (ROC) curve. A results confirmation process interprets user confirmation of speech recognition results within a given confidence score threshold to create a confirmed portion of the ROC curve for the speech recognition engine. A curve extension process extends the confirmed portion of the ROC curve by extrapolation of unconfirmed speech recognition results beyond the confidence score threshold to generate an extended ROC curve. A threshold adjustment process adjusts the confidence score threshold based on the extended ROC curve to meet target operating constraints for operating the speech recognition engine to perform automatic speech recognition of user speech inputs. | 10-15-2009 |
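The extrapolation idea above can be illustrated with a deliberately simple linear extension of the confirmed ROC points; a real system's curve fitting and choice of operating constraint would be more involved, and the numbers here are invented:

```python
def adjust_threshold(confirmed, target_fa):
    """confirmed: list of (threshold, false_accept_rate) pairs on the
    confirmed portion of the ROC curve. Extrapolate linearly past the
    last confirmed point and return the threshold whose predicted
    false-accept rate meets target_fa."""
    (t0, f0), (t1, f1) = confirmed[-2], confirmed[-1]
    slope = (f1 - f0) / (t1 - t0)
    # extended curve: fa(t) ~ f1 + slope * (t - t1); solve fa(t) = target_fa
    return t1 + (target_fa - f1) / slope

# Confirmed points say false accepts fall as the threshold rises;
# extend that trend to find the threshold hitting a 10% target.
new_t = adjust_threshold([(0.5, 0.20), (0.6, 0.15)], target_fa=0.10)
```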
20090306982 | APPARATUS, METHOD AND PROGRAM FOR TEXT MINING - Disclosed is an apparatus that includes a text input device that inputs text data provided with confidence measures as the subject for mining, a language processing unit that performs language analysis of the input text data provided with the confidence measures, a confidence-measure-exploiting characteristic word count unit that counts the characteristic words in the input text to provide a count result and that exploits the statistical information and the confidence measures provided in the input text to correct the count result obtained, a characteristic measure calculation unit that calculates the characteristic measure of each characteristic word from the corrected count result, a mining result output device that outputs the characteristic measure of each characteristic word obtained, a user operation input device for a user to input settings for language processing of the input text and for the technique used to calculate the characteristic measure, a mining process management unit that transmits a user's command delivered from the user operation input device to the respective components, and a statistical information database that records and holds the statistical information representing the properties of the input text that may be presupposed. | 12-10-2009 |
20100004930 | Speech Recognition with Parallel Recognition Tasks - The subject matter of this specification can be embodied in, among other things, a method that includes receiving an audio signal and initiating speech recognition tasks by a plurality of speech recognition systems (SRS's). Each SRS is configured to generate a recognition result specifying possible speech included in the audio signal and a confidence value indicating a confidence in a correctness of the speech result. The method also includes completing a portion of the speech recognition tasks including generating one or more recognition results and one or more confidence values for the one or more recognition results, determining whether the one or more confidence values meets a confidence threshold, aborting a remaining portion of the speech recognition tasks for SRS's that have not completed generating a recognition result, and outputting a final recognition result based on at least one of the generated one or more speech results. | 01-07-2010 |
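The early-abort behavior described above reduces, in essence, to stopping as soon as any recognizer's confidence clears the threshold. A schematic version, with the stream of per-SRS results stubbed out as an iterable of invented (text, confidence) pairs:

```python
def recognize_parallel(results_stream, threshold=0.8):
    """results_stream yields (text, confidence) as each SRS finishes.
    Stop consuming (i.e. 'abort' the remaining recognizers) once a
    result meets the confidence threshold; return the best so far."""
    best = None
    for text, conf in results_stream:
        if best is None or conf > best[1]:
            best = (text, conf)
        if conf >= threshold:
            break  # abort the remaining recognition tasks
    return best

best = recognize_parallel([("hallo", 0.5), ("hello", 0.9), ("hullo", 0.7)])
# The third recognizer is never consulted: 0.9 already met the threshold.
```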
20100030558 | Method for Determining the Presence of a Wanted Signal Component - This invention provides a method for determining, in a speech dialogue system issuing speech prompts, a score value as an indicator for the presence of a wanted signal component in an input signal stemming from a microphone, comprising the steps of: using a first likelihood function to determine a first likelihood value for the presence of the wanted signal component in the input signal, using a second likelihood function to determine a second likelihood value for the presence of a noise signal component in the input signal, and determining a score value based on the first and the second likelihood values, wherein the first likelihood function is based on a predetermined reference wanted signal, and the second likelihood function is based on a predetermined reference noise signal. | 02-04-2010 |
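With one-dimensional Gaussians standing in for the reference wanted-signal and reference noise models (the means and variances below are invented), the two-likelihood score described above is simply a log-likelihood ratio:

```python
import math

def gauss_ll(x, mean, var):
    """Log-likelihood of x under a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def wanted_score(frame_energy):
    """Score = log-likelihood ratio between a hypothetical wanted-signal
    model and a hypothetical noise model; positive values favor the
    presence of the wanted signal component."""
    ll_wanted = gauss_ll(frame_energy, mean=0.8, var=0.1)
    ll_noise = gauss_ll(frame_energy, mean=0.1, var=0.05)
    return ll_wanted - ll_noise
```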
20100036663 | Speech Detection Using Order Statistics - The method and system disclosed herein reduce the total bandwidth requirement for communication in a voice over Internet protocol application. | 02-11-2010 |
20100121640 | METHOD AND SYSTEM FOR MODELING A COMMON-LANGUAGE SPEECH RECOGNITION, BY A COMPUTER, UNDER THE INFLUENCE OF A PLURALITY OF DIALECTS - The present invention relates to a method for modeling common-language speech recognition, by a computer, under the influence of multiple dialects, and concerns the technical field of speech recognition by a computer. In this method, a triphone standard common-language model is first generated based on training data of the standard common language, and first and second monophone dialectal-accented common-language models are generated based on development data of dialectal-accented common languages of the first kind and second kind, respectively. Then a temporary merged model is obtained by merging the first dialectal-accented common-language model into the standard common-language model according to a first confusion matrix obtained by recognizing the development data of the first dialectal-accented common language using the standard common-language model. Finally, a recognition model is obtained by merging the second dialectal-accented common-language model into the temporary merged model according to a second confusion matrix generated by recognizing the development data of the second dialectal-accented common language with the temporary merged model. This method effectively enhances operating efficiency and raises the recognition rate for the dialectal-accented common languages; the recognition rate for the standard common language is also raised. | 05-13-2010 |
20100153107 | TREND EVALUATION DEVICE, ITS METHOD, AND PROGRAM - A trend evaluation device includes trend evaluation means having at least one of relative cooccurrence probability calculation means for calculating a change of cooccurrence probability of a keyword and an associated word and relative associated word similarity calculation means for calculating a change degree of a conversation topic concerning the keyword, so as to calculate a trend score by considering one or more combinations of the relative cooccurrence probability and the relative associated word similarity obtained by these means. | 06-17-2010 |
20100198598 | Speaker Recognition in a Speech Recognition System - A method for recognizing the speaker of an utterance in a speech recognition system is disclosed. A likelihood score is determined for each of a plurality of speaker models for different speakers; the likelihood score indicates how well the speaker model corresponds to the utterance. For each of the plurality of speaker models, a probability that the utterance originates from that speaker is determined. The probability is determined based on the likelihood score for the speaker model and requires estimating the distribution of likelihood scores expected, based at least in part on the training state of the speaker model. | 08-05-2010 |
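Setting aside the score-distribution modeling, the likelihood-to-probability step above can be sketched with Bayes' rule over the speaker models; the uniform priors and the speaker names are assumptions for illustration:

```python
import math

def speaker_posteriors(log_likelihoods, priors=None):
    """Convert per-speaker log-likelihood scores into posterior
    probabilities via Bayes' rule, with uniform priors by default."""
    names = list(log_likelihoods)
    if priors is None:
        priors = {n: 1.0 / len(names) for n in names}
    joint = {n: math.exp(log_likelihoods[n]) * priors[n] for n in names}
    z = sum(joint.values())  # evidence: total probability of the utterance
    return {n: j / z for n, j in joint.items()}

p = speaker_posteriors({"alice": -1.0, "bob": -3.0})
```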
20100211390 | Speech Recognition of a List Entry - The present invention relates to a method of generating a candidate list from a list of entries in accordance with a string of subword units corresponding to a speech input in a speech recognition system, the list of entries including plural list entries each comprising at least one fragment having one or more subword units. For each list entry, the fragments of the list entry are compared with the string of subword units. A matching score for each of the compared fragments based on the comparison is determined. The matching score for a fragment is further based on a comparison of at least one other fragment of the same list entry with the string of subword units. A total score for each list entry is determined based on the matching scores for the compared fragments of the respective list entry. A candidate list with the best matching entries from the list of entries based on the total scores of the list entries is generated. | 08-19-2010 |
20110040561 | Intersession variability compensation for automatic extraction of information from voice - A method for compensating inter-session variability for automatic extraction of information from an input voice signal representing an utterance of a speaker, includes: processing the input voice signal to provide feature vectors each formed by acoustic features extracted from the input voice signal at a time frame; computing an intersession variability compensation feature vector; and computing compensated feature vectors based on the extracted feature vectors and the intersession variability compensation feature vector. | 02-17-2011 |
20110087492 | SPEECH RECOGNITION SYSTEM, METHOD FOR RECOGNIZING SPEECH AND ELECTRONIC APPARATUS - A speech characteristic-amount calculation circuit | 04-14-2011 |
20110099012 | SYSTEM AND METHOD FOR ESTIMATING THE RELIABILITY OF ALTERNATE SPEECH RECOGNITION HYPOTHESES IN REAL TIME - Disclosed herein are systems, methods, and computer-readable storage media for estimating reliability of alternate speech recognition hypotheses. A system configured to practice the method receives an N-best list of speech recognition hypotheses and features describing the N-best list, determines a first probability of correctness for each hypothesis in the N-best list based on the received features, determines a second probability that the N-best list does not contain a correct hypothesis, and uses the first probability and the second probability in a spoken dialog. The features can describe properties of at least one of a lattice, a word confusion network, and a garbage model. In one aspect, the N-best lists are not reordered according to reranking scores. The determination of the first probability of correctness can include a first stage of training a probabilistic model and a second stage of distributing mass over items in a tail of the N-best list. | 04-28-2011 |
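The two probabilities described above can be illustrated by normalizing hypothesis scores together with an explicit "no correct hypothesis" outcome; the raw scores and the fixed weight for that outcome are hypothetical simplifications of the trained models in the abstract:

```python
def nbest_with_none(scores, none_weight=1.0):
    """Distribute probability over the N-best hypotheses plus an
    explicit none-of-the-above event. scores: non-negative values,
    one per hypothesis, in N-best order."""
    z = sum(scores) + none_weight
    p_items = [s / z for s in scores]
    p_none = none_weight / z
    return p_items, p_none

# Two hypotheses with scores 3 and 1, plus the no-correct-answer event.
p_items, p_none = nbest_with_none([3.0, 1.0])
```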
20110131042 | DIALOGUE SPEECH RECOGNITION SYSTEM, DIALOGUE SPEECH RECOGNITION METHOD, AND RECORDING MEDIUM FOR STORING DIALOGUE SPEECH RECOGNITION PROGRAM - Disclosed is a dialogue speech recognition system that can expand the scope of applications by employing a universal dialogue structure as the condition for speech recognition of dialogue speech between persons. An acoustic likelihood computation means | 06-02-2011 |
20110137651 | System and Method for Processing Speech Recognition - An automatic speech recognition (ASR) system and method is provided for controlling the recognition of speech utterances generated by an end user operating a communications device. The ASR system and method can be used with a mobile device in a communications network. The ASR system can be used to perform ASR of speech utterances input into a mobile device, to apply compensating techniques using at least one characteristic, and to update an ASR speech recognizer associated with the ASR system by determining and using a background noise value and a distortion value based on the features of the mobile device. The ASR system can be used to augment a limited data input capability of a mobile device, for example, one caused by the limited input devices physically located on the mobile device. | 06-09-2011 |
20110153326 | SYSTEM AND METHOD FOR COMPUTING AND TRANSMITTING PARAMETERS IN A DISTRIBUTED VOICE RECOGNITION SYSTEM - A system and method for extracting acoustic features and speech activity on a device and transmitting them in a distributed voice recognition system. The distributed voice recognition system includes a local VR engine in a subscriber unit and a server VR engine on a server. The local VR engine comprises a feature extraction (FE) module that extracts features from a speech signal, and a voice activity detection module (VAD) that detects voice activity within a speech signal. The system includes filters, framing and windowing modules, power spectrum analyzers, a neural network, a nonlinear element, and other components to selectively provide an advanced front end vector including predetermined portions of the voice activity detection indication and extracted features from the subscriber unit to the server. The system also includes a module to generate additional feature vectors on the server from the received features using a feed-forward multilayer perceptron (MLP) and providing the same to the speech server. | 06-23-2011 |
20110173000 | WORD CATEGORY ESTIMATION APPARATUS, WORD CATEGORY ESTIMATION METHOD, SPEECH RECOGNITION APPARATUS, SPEECH RECOGNITION METHOD, PROGRAM, AND RECORDING MEDIUM - A word category estimation apparatus | 07-14-2011 |
20110184735 | SPEECH RECOGNITION ANALYSIS VIA IDENTIFICATION INFORMATION - Embodiments are disclosed that relate to the use of identity information to help avoid the occurrence of false positive speech recognition events in a speech recognition system. One embodiment provides a method comprising receiving speech recognition data comprising a recognized speech segment, acoustic locational data related to a location of origin of the recognized speech segment as determined via signals from the microphone array, and confidence data comprising a recognition confidence value, and also receiving image data comprising visual locational information related to a location of each person in an image. The acoustic locational data is compared to the visual locational data to determine whether the recognized speech segment originated from a person in the field of view of the image sensor, and the confidence data is adjusted depending on this determination. | 07-28-2011 |
20110218803 | METHOD AND SYSTEM FOR ASSESSING INTELLIGIBILITY OF SPEECH REPRESENTED BY A SPEECH SIGNAL - A method for assessing intelligibility of speech represented by a speech signal includes providing a speech signal and performing a feature extraction on at least one frame of the speech signal so as to obtain a feature vector for each of the at least one frame of the speech signal. The feature vector is input to a statistical machine learning model so as to obtain an estimated posterior probability of phonemes in the at least one frame as an output including a vector of phoneme posterior probabilities of different phonemes for each of the at least one frame of the speech signal. An entropy estimation is performed on the vector of phoneme posterior probabilities of the at least one frame of the speech signal so as to evaluate intelligibility of the at least one frame of the speech signal. An intelligibility measure is output for the at least one frame of the speech signal. | 09-08-2011 |
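The entropy step above is standard Shannon entropy over the per-frame phoneme posterior vector: a peaked (confident) posterior gives low entropy, suggesting intelligible speech, while a flat posterior gives high entropy. A minimal sketch:

```python
import math

def posterior_entropy(posteriors):
    """Shannon entropy (in bits) of one frame's phoneme posterior
    vector. Zero-probability entries contribute nothing."""
    return -sum(p * math.log2(p) for p in posteriors if p > 0.0)

# A certain frame (all mass on one phoneme) vs. a maximally uncertain one.
confident = posterior_entropy([1.0, 0.0, 0.0, 0.0])
uncertain = posterior_entropy([0.25, 0.25, 0.25, 0.25])
```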
20110224983 | N-Gram Model Smoothing with Independently Controllable Parameters - Described is a technology by which a probability is estimated for a token in a sequence of tokens based upon a number of zero or more times (actual counts) that the sequence was observed in training data. The token may be a word in a word sequence, and the estimated probability may be used in a statistical language model. A discount parameter is set independently of interpolation parameters. If the sequence was observed at least once in the training data, a discount probability and an interpolation probability are computed and summed to provide the estimated probability. If the sequence was not observed, the probability is estimated by computing a backoff probability. Also described are various ways to obtain the discount parameter and interpolation parameters. | 09-15-2011 |
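The estimate above, with the discount set independently of the interpolation weight, can be sketched as follows; how the weights and the lower-order (backoff) probability are obtained is exactly what the application elaborates, so the parameter values here are placeholders:

```python
def smoothed_prob(count, context_count, discount, interp_weight, backoff_prob):
    """Estimated probability of a token given its context. If the
    n-gram was observed, sum a discounted ML term and an interpolated
    lower-order term; otherwise fall back to the backoff term alone."""
    if count > 0:
        discounted = max(count - discount, 0.0) / context_count
        return discounted + interp_weight * backoff_prob
    return interp_weight * backoff_prob

seen = smoothed_prob(5, 10, discount=0.75, interp_weight=0.1, backoff_prob=0.2)
unseen = smoothed_prob(0, 10, discount=0.75, interp_weight=0.1, backoff_prob=0.2)
```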
20110288865 | Single-Sided Speech Quality Measurement - A non-intrusive speech quality estimation technique is based on statistical or probability models such as Gaussian Mixture Models (“GMMs”). Perceptual features are extracted from the received speech signal and assessed by an artificial reference model formed using statistical models. The models characterize the statistical behavior of speech features. Consistency measures between the input speech features and the models are calculated to form indicators of speech quality. The consistency values are mapped to a speech quality score using a mapping optimized using machine learning algorithms, such as Multivariate Adaptive Regression Splines (“MARS”). The technique provides competitive or better quality estimates relative to known techniques while having lower computational complexity. | 11-24-2011 |
20120004912 | METHOD AND SYSTEM FOR USING INPUT SIGNAL QUALITY IN SPEECH RECOGNITION - A method and system for using input signal quality in an automatic speech recognition system. The method includes measuring the quality of an input signal into a speech recognition system and varying a rejection threshold of the speech recognition system at runtime in dependence on the measurement of the input signal quality. If the measurement of the input signal quality is low, the rejection threshold is reduced and, if the measurement of the input signal quality is high, the rejection threshold is increased. The measurement of the input signal quality may be based on one or more of the measurements of signal-to-noise ratio, loudness, including clipping, and speech signal duration. | 01-05-2012 |
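Using SNR as the single quality measurement, the runtime threshold variation above can be sketched as a clamped linear map; the anchor SNRs and threshold bounds are invented values:

```python
def rejection_threshold(snr_db, low_snr=10.0, high_snr=30.0,
                        min_t=0.3, max_t=0.9):
    """Lower the rejection threshold for noisy input, raise it for
    clean input, interpolating linearly in between."""
    if snr_db <= low_snr:
        return min_t
    if snr_db >= high_snr:
        return max_t
    frac = (snr_db - low_snr) / (high_snr - low_snr)
    return min_t + frac * (max_t - min_t)
```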
20120010884 | Systems And Methods for Manipulating Electronic Content Based On Speech Recognition - Systems and methods are disclosed for displaying electronic multimedia content to a user. One computer-implemented method for manipulating electronic multimedia content includes generating, using a processor, a speech model and at least one speaker model of an individual speaker. The method further includes receiving electronic media content over a network; extracting an audio track from the electronic media content; and detecting speech segments within the electronic media content based on the speech model. The method further includes detecting a speaker segment within the electronic media content and calculating a probability of the detected speaker segment involving the individual speaker based on the at least one speaker model. | 01-12-2012 |
20120072215 | FULL-SEQUENCE TRAINING OF DEEP STRUCTURES FOR SPEECH RECOGNITION - A method is disclosed herein that includes an act of causing a processor to access a deep-structured model retained in a computer-readable medium, wherein the deep-structured model comprises a plurality of layers with weights assigned thereto, transition probabilities between states, and language model scores. The method can further include the act of jointly and substantially optimizing the weights, the transition probabilities, and the language model scores of the deep-structured model using an optimization criterion based on a sequence rather than a set of unrelated frames. | 03-22-2012 |
20120072216 | AGE DETERMINATION USING SPEECH - A method and device are configured to receive voice data from a user and perform speech recognition on the received voice data. A confidence score is calculated that represents the likelihood that received voice data has been accurately recognized. A likely age range is determined associated with the user based on the confidence score. | 03-22-2012 |
20120101820 | MULTI-STATE BARGE-IN MODELS FOR SPOKEN DIALOG SYSTEMS - A method is disclosed for applying a multi-state barge-in acoustic model in a spoken dialogue system. The method includes receiving an audio speech input from the user during the presentation of a prompt, accumulating the audio speech input from the user, applying a non-speech component having at least two one-state Hidden Markov Models (HMMs) to the audio speech input from the user, applying a speech component having at least five three-state HMMs to the audio speech input from the user, in which each of the five three-state HMMs represents a different phonetic category, determining whether the audio speech input is a barge-in-speech input from the user, and if the audio speech input is determined to be the barge-in-speech input from the user, terminating the presentation of the prompt. | 04-26-2012 |
20120109651 | DATA RETRIEVAL AND INDEXING METHOD AND APPARATUS - A method of searching a plurality of data files, wherein each data file includes a plurality of features. The method: determines a plurality of feature groups, wherein each feature group includes n features and n is an integer of 2 or more; expresses each data file as a file vector, wherein each component of the vector indicates the frequency of a feature group within the data file, wherein the n features which constitute a feature group do not have to be located adjacent to one another; expresses a search query using the feature groups as a vector; and searches the plurality of data files by comparing the search query expressed as a vector with the file vectors. | 05-03-2012 |
20120166195 | STATE DETECTION DEVICE AND STATE DETECTING METHOD - A state detection device includes: a first model generation unit to generate a first specific speaker model obtained by modeling the speech features of a specific speaker in an undepressed state; a second model generation unit to generate a second specific speaker model obtained by modeling the speech features of the specific speaker in a depressed state; a likelihood calculation unit to calculate a first likelihood as the likelihood of the first specific speaker model with respect to input voice, and a second likelihood as the likelihood of the second specific speaker model with respect to the input voice; and a state determination unit to determine the state of the speaker of the input voice using the first likelihood and the second likelihood. | 06-28-2012 |
20120232901 | AUTOMATIC SPOKEN LANGUAGE IDENTIFICATION BASED ON PHONEME SEQUENCE PATTERNS - A language identification system that includes a universal phoneme decoder (UPD) is described. The UPD contains a universal phoneme set that 1) represents all phonemes occurring in the set of two or more spoken languages, and 2) captures phoneme correspondences across languages, such that a set of unique phoneme patterns and probabilities is calculated in order to identify the most likely phoneme occurring at each time in the audio files for the set of two or more potential languages on which the UPD was trained. Each statistical language model (SLM) uses the set of unique phoneme patterns created for each language in the set to distinguish between the spoken human languages in the set. The run-time language identifier module identifies the particular human language being spoken by utilizing the linguistic probabilities supplied by the SLMs, which are based on the set of unique phoneme patterns created for each language. | 09-13-2012 |
20120253807 | SPEAKER STATE DETECTING APPARATUS AND SPEAKER STATE DETECTING METHOD - A speaker state detecting apparatus comprises: an audio input unit for acquiring, at least, a first voice emanated by a first speaker and a second voice emanated by a second speaker; a speech interval detecting unit for detecting an overlap period between a first speech period of the first speaker included in the first voice and a second speech period of the second speaker included in the second voice, which starts before the first speech period, or an interval between the first speech period and the second speech period; a state information extracting unit for extracting state information representing a state of the first speaker from the first speech period; and a state detecting unit for detecting the state of the first speaker in the first speech period based on the overlap period or the interval and the state information. | 10-04-2012 |
20120278076 | DISAMBIGUATION OF CONTACT INFORMATION USING HISTORICAL DATA - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for disambiguating contact information are described. A method includes determining, for each of multiple communications that were initiated by a user of a mobile device, a time when the communication was initiated or received; determining, for each of multiple contacts associated with the user, a probability associated with the contact based at least on the times when the communications were initiated or received; weighting a contact disambiguation grammar according to the probabilities; and processing audio data using the contact disambiguation grammar to select a particular contact. | 11-01-2012 |
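The time-based weighting above can be illustrated with a crude frequency estimate over each contact's historical communication hours; the contact names, the one-hour window, and the add-one smoothing are all invented for the sketch, and a real grammar would fold these weights into the recognizer:

```python
def weight_contacts(call_hours_by_contact, query_hour):
    """Weight each contact by how often the user has contacted them
    within one hour of the current time, with add-one smoothing so
    no contact is ruled out entirely."""
    weights = {}
    for contact, hours in call_hours_by_contact.items():
        near = sum(1 for h in hours if abs(h - query_hour) <= 1)
        weights[contact] = (near + 1) / (len(hours) + 2)
    return weights

# At 9 a.m., "mom" (usually called mornings) outweighs "boss" (afternoons).
w = weight_contacts({"mom": [9, 9, 10], "boss": [14, 15]}, query_hour=9)
```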
20130006631 | Turbo Processing of Speech Recognition - Environmental recognition systems may improve recognition accuracy by leveraging local and nonlocal features in a recognition target. A local decoder may be used to analyze local features, and a nonlocal decoder may be used to analyze nonlocal features. Local and nonlocal estimates may then be exchanged to improve the accuracy of the local and nonlocal decoders. Additional iterations of analysis and exchange may be performed until a predetermined threshold is reached. In some embodiments, the system may comprise extrinsic information extractors to prevent positive feedback loops from causing the system to adhere to erroneous previous decisions. | 01-03-2013 |
20130103402 | SYSTEM AND METHOD FOR COMBINING FRAME AND SEGMENT LEVEL PROCESSING, VIA TEMPORAL POOLING, FOR PHONETIC CLASSIFICATION - Disclosed herein are systems, methods, and non-transitory computer-readable storage media for combining frame and segment level processing, via temporal pooling, for phonetic classification. A frame processor unit receives an input and extracts the time-dependent features from the input. A plurality of pooling interface units generates a plurality of feature vectors based on pooling the time-dependent features and selecting a plurality of time-dependent features according to a plurality of selection strategies. Next, a plurality of segmental classification units generates scores for the feature vectors. Each segmental classification unit (SCU) can be dedicated to a specific pooling interface unit (PIU) to form a PIU-SCU combination. Multiple PIU-SCU combinations can be further combined to form an ensemble of combinations, and the ensemble can be diversified by varying the pooling operations used by the PIU-SCU combinations. Based on the scores, the plurality of segmental classification units selects a class label and returns a result. | 04-25-2013 |
20130132082 | Systems and Methods for Concurrent Signal Recognition - Methods and systems for recognition of concurrent, superimposed, or otherwise overlapping signals are described. A Markov Selection Model is introduced that, together with probabilistic decomposition methods, enable recognition of simultaneously emitted signals from various sources. For example, a signal mixture may include overlapping speech from different persons. In some instances, recognition may be performed without the need to separate signals or sources. As such, some of the techniques described herein may be useful in automatic transcription, noise reduction, teaching, electronic games, audio search and retrieval, medical and scientific applications, etc. | 05-23-2013 |
20130138439 | Interface for Setting Confidence Thresholds for Automatic Speech Recognition and Call Steering Applications - An interactive user interface is described for setting confidence score thresholds in a language processing system. There is a display of a first system confidence score curve characterizing system recognition performance associated with a high confidence threshold, a first user control for adjusting the high confidence threshold and an associated visual display highlighting a point on the first system confidence score curve representing the selected high confidence threshold, a display of a second system confidence score curve characterizing system recognition performance associated with a low confidence threshold, and a second user control for adjusting the low confidence threshold and an associated visual display highlighting a point on the second system confidence score curve representing the selected low confidence threshold. The operation of the second user control is constrained to require that the low confidence threshold must be less than or equal to the high confidence threshold. | 05-30-2013 |
20130138440 | SPEECH RECOGNITION WITH PARALLEL RECOGNITION TASKS - The subject matter of this specification can be embodied in, among other things, a method that includes receiving an audio signal and initiating speech recognition tasks by a plurality of speech recognition systems (SRS's). Each SRS is configured to generate a recognition result specifying possible speech included in the audio signal and a confidence value indicating a confidence in a correctness of the speech result. The method also includes completing a portion of the speech recognition tasks including generating one or more recognition results and one or more confidence values for the one or more recognition results, determining whether the one or more confidence values meet a confidence threshold, aborting a remaining portion of the speech recognition tasks for SRS's that have not generated a recognition result, and outputting a final recognition result based on at least one of the generated one or more speech results. | 05-30-2013 |
20130158996 | Acoustic Processing Unit - Embodiments of the present invention include an apparatus, method, and system for acoustic modeling. The apparatus can include a senone scoring unit (SSU) control module, a distance calculator, and an addition module. The SSU control module can be configured to receive a feature vector. The distance calculator can be configured to receive a plurality of Gaussian probability distributions via a data bus having a width of at least one Gaussian probability distribution and the feature vector from the SSU control module. The distance calculator can include a plurality of arithmetic logic units to calculate a plurality of dimension distance scores and an accumulator to sum the dimension distance scores to generate a Gaussian distance score. Further, the addition module is configured to sum a plurality of Gaussian distance scores to generate a senone score. | 06-20-2013 |
20130158997 | Acoustic Processing Unit Interface - Embodiments of the present invention include an apparatus, method, and system for acoustic modeling. In an embodiment, a speech recognition system is provided. The system includes a processing unit configured to divide a received audio signal into consecutive frames having respective frame vectors, an acoustic processing unit (APU), a data bus that couples the processing unit and the APU. The APU includes a local, non-volatile memory that stores a plurality of senones, a memory buffer coupled to the memory, the acoustic processing unit being configured to load at least one Gaussian probability distribution vector stored in the memory into the memory buffer, and a scoring unit configured to simultaneously compare a plurality of dimensions of a Gaussian probability distribution vector loaded into the memory buffer with respective dimensions of a frame vector received from the processing unit and to output a corresponding score to the processing unit. | 06-20-2013 |
20130173267 | SPEECH RECOGNITION APPARATUS, SPEECH RECOGNITION METHOD, AND SPEECH RECOGNITION PROGRAM - An apparatus includes: a storage unit to store a model representing a relationship between a relative time and an occurrence probability; a first detection unit to detect a first speech period of a first speaker; a second period detection unit to detect a second speech period of a second speaker; a unit to calculate a feature value of the first speech period; a detection unit to detect a word using the calculated feature value; an adjustment unit to make an adjustment such that in detecting a word for a reply by the detection unit, the adjustment unit retrieves an occurrence probability corresponding to a relative position of the reply in the second speech period, and adjusts a word score or a detection threshold value for the reply; and a second detection unit to re-detect, using the adjusted word score or the adjusted detection threshold value, the word detected by the detection unit. | 07-04-2013 |
20130238332 | AUTOMATIC INPUT SIGNAL RECOGNITION USING LOCATION BASED LANGUAGE MODELING - Input signal recognition, such as speech recognition, can be improved by incorporating location-based information. Such information can be incorporated by creating one or more language models that each include data specific to a pre-defined geographic location, such as local street names, business names, landmarks, etc. Using the location associated with the input signal, one or more local language models can be selected. Each of the local language models can be assigned a weight representative of the location's proximity to a pre-defined centroid associated with the local language model. The one or more local language models can then be merged with a global language model to generate a hybrid language model for use in the recognition process. | 09-12-2013 |
20130268271 | SPEECH RECOGNITION SYSTEM, SPEECH RECOGNITION METHOD, AND SPEECH RECOGNITION PROGRAM - A speech recognition system has: hypothesis search means which searches for an optimal solution of inputted speech data by generating a hypothesis which is a bundle of words which are searched for as recognition result candidates; self-repair decision means which calculates a self-repair likelihood of a word or a word sequence included in the hypothesis which is being searched for by the hypothesis search means, and decides whether or not self-repair of the word or the word sequence is performed; and transparent word hypothesis generation means which, when it is decided that the self-repair is performed, generates a transparent word hypothesis which is a hypothesis which regards as a transparent word a word or a word sequence included in a disfluency interval or a repair interval of a self-repair interval including the word or the word sequence. | 10-10-2013 |
20130311182 | APPARATUS FOR CORRECTING ERROR IN SPEECH RECOGNITION - An apparatus for correcting errors in speech recognition is provided. The apparatus includes a feature vector extracting unit extracting feature vectors from a received speech. A speech recognizing unit recognizes the received speech as a word sequence on the basis of the extracted feature vectors. A phoneme weighted finite state transducer (WFST)-based converting unit converts the recognized word sequence recognized by the speech recognizing unit into a phoneme WFST. A speech recognition error correcting unit corrects errors in the converted phoneme WFST. The speech recognition error correcting unit includes a WFST synthesizing unit modeling a phoneme WFST transferred from the phoneme WFST-based converting unit as pronunciation variation on the basis of a Kullback-Leibler (KL) distance matrix. | 11-21-2013 |
20140100848 | PHRASE SPOTTING SYSTEMS AND METHODS - Methods and systems for identifying specified phrases within audio streams are provided. More particularly, a phrase is specified. An audio stream is then monitored for the phrase. In response to determining that the audio stream contains the phrase, verification from a user that the phrase was in fact included in the audio stream is requested. If such verification is received, the portion of the audio stream including the phrase is recorded. The recorded phrase can then be applied to identify future instances of the phrase in monitored audio streams. | 04-10-2014 |
20140114660 | Method and Device for Speaker Recognition - A method and device for speaker recognition are provided. In the present invention, identifiability re-estimation is performed on a first vector (namely, a weight vector) in a score function by adopting a support vector machine (SVM), so that a recognition result of a characteristic parameter of a test voice is more accurate, thereby improving identifiability of speaker recognition. | 04-24-2014 |
20140129221 | SOUND RECOGNITION DEVICE, NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM STORED THEREOF SOUND RECOGNITION PROGRAM, AND SOUND RECOGNITION METHOD - A sound recognition device includes a storage for storing a comment that is input while the user is listening to sounds emitted as multimedia data being played. The sound recognition device further includes an extractor for extracting a word that appears in a set of sentences that contains the stored comment, and candidate words that contain co-occurrences of the word in the set of sentences. Furthermore, the sound recognition device includes a sound recognizer for recognizing sounds emitted as the multimedia data being played, based on the extracted candidate words. | 05-08-2014 |
20140149116 | SPEECH SYNTHESIS DEVICE, SPEECH SYNTHESIS METHOD, AND SPEECH SYNTHESIS PROGRAM - There are provided a speech synthesis device, a speech synthesis method and a speech synthesis program which can represent a phoneme with a duration shorter than the duration obtained upon modeling according to a statistical method. A speech synthesis device 80 according to the present invention includes a phoneme boundary updating means 81 which, by using a voiced utterance likelihood index which is an index indicating a degree of voiced utterance likelihood of each state which represents a phoneme modeled by a statistical method, updates a phoneme boundary position which is a boundary with other phonemes neighboring the phoneme. | 05-29-2014 |
20140214419 | METHOD AND SYSTEM FOR AUTOMATIC SPEECH RECOGNITION - An automatic speech recognition method includes at a computer having one or more processors and memory for storing one or more programs to be executed by the processors, obtaining a plurality of speech corpus categories through classifying and calculating raw speech corpus; obtaining a plurality of classified language models that respectively correspond to the plurality of speech corpus categories through a language model training applied on each speech corpus category; obtaining an interpolation language model through implementing a weighted interpolation on each classified language model and merging the interpolated plurality of classified language models; constructing a decoding resource in accordance with an acoustic model and the interpolation language model; and decoding input speech using the decoding resource, and outputting a character string with a highest probability as a recognition result of the input speech. | 07-31-2014 |
20140278411 | SPEECH RECOGNITION VOCABULARY INTEGRATION - A method for vocabulary integration of speech recognition comprises converting multiple speech signals into multiple words using a processor, applying confidence scores to the multiple words, classifying the multiple words into a plurality of classifications based on classification criteria and the confidence score for each word, determining if one or more of the multiple words are unrecognized based on the plurality of classifications, classifying each unrecognized word and detecting a match for the unrecognized word based on additional classification criteria, and upon detecting a match for an unrecognized word, converting at least a portion of the multiple speech signals corresponding to the unrecognized word into words. | 09-18-2014 |
20140278412 | METHOD AND APPARATUS FOR AUDIO CHARACTERIZATION - Characterizing an acoustic signal includes extracting a vector from the acoustic signal, where the vector contains information about the nuisance characteristics present in the acoustic signal, and computing a set of likelihoods of the vector for a plurality of classes that model a plurality of nuisance characteristics. Training a system to characterize an acoustic signal includes obtaining training data, the training data comprising a plurality of acoustic signals, where each of the plurality of acoustic signals is associated with one of a plurality of classes that indicates a presence of a specific type of nuisance characteristic, transforming each of the plurality of acoustic signals into a vector that summarizes information about the acoustic characteristics of the signal, to produce a plurality of vectors, and labeling each of the plurality of vectors with one of the plurality of classes. | 09-18-2014 |
20150066507 | SOUND RECOGNITION APPARATUS, SOUND RECOGNITION METHOD, AND SOUND RECOGNITION PROGRAM - A sound recognition apparatus can include a sound feature value calculating unit configured to calculate a sound feature value based on a sound signal, and a label converting unit configured to convert the sound feature value into a corresponding label with reference to label data in which sound feature values and labels indicating sound units are correlated. A sound identifying unit is configured to calculate a probability of each sound unit group sequence that a label sequence is segmented for each sound unit group with reference to segmentation data. The segmentation data indicates a probability that a sound unit sequence will be segmented into at least one sound unit group. The sound identifying unit can also identify a sound event corresponding to the sound unit group sequence selected based on the calculated probability. | 03-05-2015 |
20150073792 | METHOD AND SYSTEM FOR AUTOMATICALLY DETECTING MORPHEMES IN A TASK CLASSIFICATION SYSTEM USING LATTICES - The invention concerns a method and corresponding system for building a phonotactic model for domain independent speech recognition. The method may include recognizing phones from a user's input communication using a current phonotactic model, detecting morphemes (acoustic and/or non-acoustic) from the recognized phones, and outputting the detected morphemes for processing. The method also updates the phonotactic model with the detected morphemes and stores the new model in a database for use by the system during the next user interaction. The method may also include making task-type classification decisions based on the detected morphemes from the user's input communication. | 03-12-2015 |
20150073793 | SYSTEM AND METHOD FOR COMBINING GEOGRAPHIC METADATA IN AUTOMATIC SPEECH RECOGNITION LANGUAGE AND ACOUSTIC MODELS - Disclosed herein are systems, methods, and computer-readable storage media for a speech recognition application for directory assistance that is based on a user's spoken search query. The spoken search query is received by a portable device, and the portable device then determines its present location. Upon determining the location of the portable device, that information is incorporated into a local language model that is used to process the search query. Finally, the portable device outputs the results of the search query based on the local language model. | 03-12-2015 |
20150100316 | SYSTEM AND METHOD FOR ADVANCED TURN-TAKING FOR INTERACTIVE SPOKEN DIALOG SYSTEMS - Disclosed herein are systems, methods, and non-transitory computer-readable storage media for advanced turn-taking in an interactive spoken dialog system. A system configured according to this disclosure can incrementally process speech prior to completion of the speech utterance, and can communicate partial speech recognition results upon finding particular conditions. A first condition which, if found, allows the system to communicate partial speech recognition results, is that the most recent word found in the partial results is statistically likely to be the termination of the utterance, also known as a terminal node. A second condition is the determination that all search paths within a speech lattice converge to a common node, also known as a pinch node, before branching out again. Upon finding either condition, the system can communicate the partial speech recognition results. Stability and correctness probabilities can also determine which partial results are communicated. | 04-09-2015 |
20150310857 | APPARATUS AND METHOD FOR PROVIDING AN INFORMED MULTICHANNEL SPEECH PRESENCE PROBABILITY ESTIMATION - An apparatus for providing a speech probability estimation is provided. The apparatus includes a first speech probability estimator for estimating speech probability information indicating a first probability on whether a sound field of a scene includes speech or on whether the sound field of the scene does not include speech. Moreover, the apparatus includes an output interface for outputting the speech probability estimation depending on the speech probability information. The first speech probability estimator is configured to estimate the first speech probability information based on at least spatial information about the sound field or spatial information on the scene. | 10-29-2015 |
20150325236 | CONTEXT SPECIFIC LANGUAGE MODEL SCALE FACTORS - The customization of recognition of speech utilizing context-specific language model scale factors is provided. Training audio may be received from a source in a training phase. The received training audio may be recognized utilizing acoustic and language models being combined utilizing static scale factors. A comparison may then be made of the recognition results to a transcription of the training audio. The recognition results may include one or more hypotheses for recognizing speech. Context specific scale factors may then be generated based on the comparison. The context specific scale factors may then be applied for use in the speech recognition of audio signals in an application phase. | 11-12-2015 |
20150371633 | SPEECH RECOGNITION USING NON-PARAMETRIC MODELS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for using non-parametric models in speech recognition. In some implementations, speech data is accessed. The speech data represents utterances of a particular phonetic unit occurring in a particular phonetic context, and the speech data includes values for multiple dimensions. Boundaries are determined for a set of quantiles for each of the multiple dimensions. Models for the distribution of values within the quantiles are generated. A multidimensional probability function is generated. Data indicating the boundaries of the quantiles, the models for the distribution of values in the quantiles, and the multidimensional probability function are stored. | 12-24-2015 |
20150379984 | SYSTEM AND METHOD FOR DIALOG MODELING - Disclosed herein are systems, computer-implemented methods, and computer-readable media for dialog modeling. The method includes receiving spoken dialogs annotated to indicate dialog acts and task/subtask information, parsing the spoken dialogs with a hierarchical, parse-based dialog model which operates incrementally from left to right and which only analyzes a preceding dialog context to generate parsed spoken dialogs, and constructing a functional task structure of the parsed spoken dialogs. The method can further either interpret user utterances with the functional task structure of the parsed spoken dialogs or plan system responses to user utterances with the functional task structure of the parsed spoken dialogs. The parse-based dialog model can be a shift-reduce model, a start-complete model, or a connection path model. | 12-31-2015 |
20160005399 | NOISE SPEED-UPS IN HIDDEN MARKOV MODELS WITH APPLICATIONS TO SPEECH RECOGNITION - A learning computer system may estimate unknown parameters and states of a stochastic or uncertain system having a probability structure. The system may include a data processing system that may include a hardware processor that has a configuration that: receives data; generates random, chaotic, fuzzy, or other numerical perturbations of the data, one or more of the states, or the probability structure; estimates observed and hidden states of the stochastic or uncertain system using the data, the generated perturbations, previous states of the stochastic or uncertain system, or estimated states of the stochastic or uncertain system; and causes perturbations or independent noise to be injected into the data, the states, or the stochastic or uncertain system so as to speed up training or learning of the probability structure and of the system parameters or the states. | 01-07-2016 |
20160027452 | EMOTIONAL SPEECH PROCESSING - A method for emotion or speaking style recognition and/or clustering comprises receiving one or more speech samples, generating a set of training data by extracting one or more acoustic features from every frame of the one or more speech samples, and generating a model from the set of training data, wherein the model identifies emotion or speaking style dependent information in the set of training data. The method may further comprise receiving one or more test speech samples, generating a set of test data by extracting one or more acoustic features from every frame of the one or more test speech samples, transforming the set of test data using the model to better represent emotion/speaking style dependent information, and using the transformed data for clustering and/or classification to discover speech with similar emotion or speaking style. | 01-28-2016 |
20160049164 | METHODS AND APPARATUS FOR INTERPRETING RECEIVED SPEECH DATA USING SPEECH RECOGNITION - A method for processing a received set of speech data, wherein the received set of speech data comprises an utterance, is provided. The method executes a process to generate a plurality of confidence scores, wherein each of the plurality of confidence scores is associated with one of a plurality of candidate utterances; determines a plurality of difference values, each of the plurality of difference values comprising a difference between two of the plurality of confidence scores; and compares the plurality of difference values to determine at least one disparity. | 02-18-2016 |
20160055844 | SYSTEMS AND METHODS FOR DETECTION OF TARGET AND NON-TARGET USERS USING MULTI-SESSION INFORMATION - Systems and methods for maintaining speaker recognition performance are provided. A method for maintaining speaker recognition performance, comprises training a plurality of models respectively corresponding to speaker recognition scores from a plurality of speakers over a plurality of sessions, and using the plurality of models to conclude whether a speaker seeking access to an environment is a non-ideal target speaker or a non-ideal non-target speaker. Using the plurality of models to conclude comprises calculating a first probability that the speaker seeking access is the non-ideal target speaker, calculating a second probability that the speaker seeking access is the non-ideal non-target speaker, and determining whether the first probability, the second probability or a sum of the first probability and the second probability is above a probability threshold. | 02-25-2016 |
20160063990 | METHODS AND APPARATUS FOR INTERPRETING CLIPPED SPEECH USING SPEECH RECOGNITION - A method for receiving and analyzing data compatible with voice recognition technology is provided. The method receives speech data comprising at least a subset of an articulated statement; executes a plurality of processes to generate a plurality of probabilities, based on the received speech data, each of the plurality of processes being associated with a respective candidate articulated statement, and each of the generated plurality of probabilities comprising a likelihood that an associated candidate articulated statement comprises the articulated statement; and analyzes the generated plurality of probabilities to determine a recognition result, wherein the recognition result comprises the articulated statement. | 03-03-2016 |
20160093292 | OPTIMIZATIONS TO DECODING OF WFST MODELS FOR AUTOMATIC SPEECH RECOGNITION - A method in a computing device for decoding a weighted finite state transducer (WFST) for automatic speech recognition is described. The method includes sorting a set of one or more WFST arcs based on their arc weight in ascending order. The method further includes iterating through each arc in the sorted set of arcs according to the ascending order until the score of the generated token corresponding to an arc exceeds a score threshold. The method further includes discarding any remaining arcs in the set of arcs that have yet to be considered. | 03-31-2016 |
20160093295 | STATISTICAL UNIT SELECTION LANGUAGE MODELS BASED ON ACOUSTIC FINGERPRINTING - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for providing statistical unit selection language modeling based on acoustic fingerprinting. The methods, systems and apparatus include the actions of obtaining a unit database of acoustic units and, for each acoustic unit, linguistic data corresponding to the acoustic unit; obtaining stored data associating each acoustic unit with (i) a corresponding acoustic fingerprint and (ii) a probability of the linguistic data corresponding to the acoustic unit occurring in a text corpus; determining that the unit database of acoustic units has been updated to include one or more new acoustic units; for each new acoustic unit in the updated unit database: generating an acoustic fingerprint for the new acoustic unit; identifying an acoustic unit that (i) has an acoustic fingerprint that is indicated as similar to the fingerprint of the new acoustic unit, and (ii) has a stored associated probability. | 03-31-2016 |
20160111086 | SPEECH RECOGNITION SYSTEM AND A METHOD OF USING DYNAMIC BAYESIAN NETWORK MODELS - A computer-implemented method for speech recognition, comprising the steps of: registering ( | 04-21-2016 |
20160140956 | PREDICTION-BASED SEQUENCE RECOGNITION - A sequence recognition system comprises a prediction component configured to receive a set of observed features from a signal to be recognized and to output a prediction output indicative of a predicted recognition based on the set of observed features. The sequence recognition system also comprises a classification component configured to receive the prediction output and to output a label indicative of recognition of the signal based on the prediction output. | 05-19-2016 |
20160148610 | SYSTEM AND METHOD OF PROVIDING INTENT PREDICTIONS FOR AN UTTERANCE PRIOR TO A SYSTEM DETECTION OF AN END OF THE UTTERANCE - In certain implementations, intent prediction is provided for a natural language utterance based on a portion of the natural language utterance prior to a system detection of an end of the natural language utterance. In some implementations, a first portion of a natural language utterance of a user may be received. Speech recognition may be performed on the first portion of the natural language utterance to recognize one or more words of the first portion of the natural language utterance. Context information for the natural language utterance may be obtained. Prior to a detection of an end of the natural language utterance, a first intent may be predicted based on the one or more words of the first portion and the context information. One or more user requests may be determined based on the first predicted intent. | 05-26-2016 |
20160148611 | VOICE RECOGNITION APPARATUS AND METHOD OF CONTROLLING THE SAME - A voice recognition apparatus includes a voice recognizer configured to recognize user utterance, a storage unit configured to store a plurality of tokens, a token network generator configured to generate a plurality of recognition tokens from the recognized user utterance, search for a similar token similar to each of the recognition tokens and a peripheral token having a history used with the recognition token among the plurality of tokens stored in the storage unit, and generate a token network using the recognition token, the similar token, and the peripheral token, and a processor configured to control the token network generator to generate the token network in response to the user utterance being recognized through the voice recognizer, calculate a transition probability between the tokens constituting the token network, and generate text data for corrected user utterance using the calculated transition probability. | 05-26-2016 |
20160180839 | VOICE RETRIEVAL APPARATUS, VOICE RETRIEVAL METHOD, AND NON-TRANSITORY RECORDING MEDIUM | 06-23-2016 |
20160180842 | CONCISE DYNAMIC GRAMMARS USING N-BEST SELECTION | 06-23-2016 |
20160182957 | SYSTEMS AND METHODS FOR MANIPULATING ELECTRONIC CONTENT BASED ON SPEECH RECOGNITION | 06-23-2016 |
20160253992 | OCR THROUGH VOICE RECOGNITION | 09-01-2016 |
20170236511 | AUTOMATIC SPEECH RECOGNITION FOR DISFLUENT SPEECH | 08-17-2017 |
20170236518 | System and Method for Multi-User GPU-Accelerated Speech Recognition Engine for Client-Server Architectures | 08-17-2017 |
20180025720 | OPTIMIZATIONS TO DECODING OF WFST MODELS FOR AUTOMATIC SPEECH RECOGNITION | 01-25-2018 |
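A pattern that recurs throughout this class (e.g., 20130138439's two-threshold call-steering interface, and the sub-threshold form filling of 20080243502) is routing a recognition hypothesis according to whether its confidence score clears a low and a high threshold. As an illustration only — the function name and threshold values below are hypothetical and are not drawn from any listed application — the two-threshold routing logic can be sketched as:

```python
def route_hypothesis(confidence, low_threshold=0.3, high_threshold=0.7):
    """Route a recognition hypothesis by its confidence score.

    Scores at or above the high threshold are accepted automatically;
    scores between the two thresholds trigger a confirmation prompt;
    scores below the low threshold are rejected, prompting re-utterance.
    Mirrors the constraint in 20130138439 that the low threshold must
    not exceed the high threshold.
    """
    if low_threshold > high_threshold:
        raise ValueError("low threshold must not exceed high threshold")
    if confidence >= high_threshold:
        return "accept"
    if confidence >= low_threshold:
        return "confirm"
    return "reject"
```

Tuning the two thresholds trades automation rate against error rate: raising the high threshold sends more traffic to confirmation, while raising the low threshold rejects more marginal hypotheses outright.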