Entries |
Document | Title | Date |
20080228482 | SPEECH RECOGNITION SYSTEM AND METHOD FOR SPEECH RECOGNITION - A recognition result extraction unit and an agreement determination unit are provided. The recognition result extraction unit extracts, from a recognition result storage unit, N best solutions A and N best solutions B, the latter obtained from an utterance B. The utterance B follows an utterance A, which corresponds to the N best solutions A, and is made by a speaker b who is different from the speaker of the utterance A. In a case where a repeat utterance determination unit determines that the N best solutions B are N best solutions obtained from a repeat utterance B of the utterance A corresponding to the N best solutions A, and the best solutions A and B are different from each other, the agreement determination unit determines that some or all of the N best solutions A can be replaced with some or all of the N best solutions B. | 09-18-2008 |
20080312926 | Automatic Text-Independent, Language-Independent Speaker Voice-Print Creation and Speaker Recognition - An automatic dual-step, text-independent, language-independent speaker voice-print creation and speaker recognition method, wherein a neural network-based technique is used in a first step and a Markov model-based technique is used in a second step. In particular, the first step uses the neural network-based technique to decode the content of what is uttered by the speaker in terms of language-independent acoustic-phonetic classes, and the second step uses the sequence of language-independent acoustic-phonetic classes from the first step and employs the Markov model-based technique for creating the speaker voice-print and for recognizing the speaker. The combination of the two steps improves the accuracy and efficiency of speaker voice-print creation and speaker recognition, without setting any constraints on the lexical content of the speaker utterance or on the language thereof. | 12-18-2008 |
20090048836 | DATA-DRIVEN GLOBAL BOUNDARY OPTIMIZATION - Portions from segment boundary regions of a plurality of speech segments are extracted. Each segment boundary region is based on a corresponding initial unit boundary. Feature vectors that represent the portions in a vector space are created. For each of a plurality of potential unit boundaries within each segment boundary region, an average discontinuity based on distances between the feature vectors is determined. For each segment, the potential unit boundary associated with a minimum average discontinuity is selected as a new unit boundary. | 02-19-2009 |
20090271197 | IDENTIFYING FEATURES IN A PORTION OF A SIGNAL REPRESENTING SPEECH - Methods, systems, and machine-readable media are disclosed for processing a signal representing speech. According to one embodiment, processing a signal representing speech can comprise receiving a region of the signal representing speech. The region can comprise a portion of a frame of the signal representing speech classified as a voiced frame. The region can be marked based on one or more pitch estimates for the region. A cord can be identified within the region based on occurrence of one or more events within the region of the signal. For example, the one or more events can comprise one or more glottal pulses. In such cases, the cord can begin with the onset of a first glottal pulse and extend to a point prior to the onset of a second glottal pulse. The cord may exclude a portion of the region of the signal prior to the onset of the second glottal pulse. | 10-29-2009 |
20090271198 | PRODUCING PHONITOS BASED ON FEATURE VECTORS - Methods, systems, and machine-readable media are disclosed for processing a signal representing speech. According to one embodiment, processing a signal representing speech can comprise receiving a first frame of the signal, the first frame comprising a voiced frame. One or more cords can be extracted from the voiced frame based on occurrence of one or more events within the frame. For example, the one or more events can comprise one or more glottal pulses. The one or more cords can collectively comprise less than all of the frame. For example, each of the cords can begin with the onset of a glottal pulse and extend to a point prior to the onset of a neighboring glottal pulse but may exclude a portion of the frame prior to the onset of the neighboring glottal pulse. A phoneme for the voiced frame can be determined based on at least one of the extracted cords. | 10-29-2009 |
20100010813 | VOICE RECOGNITION APPARATUS, VOICE RECOGNITION METHOD AND RECORDING MEDIUM - A voice recognition apparatus includes an extraction unit extracting a feature amount from a voice signal; a word dictionary storing a plurality of recognition words; a reject word generation unit storing reject words in the word dictionary in association with the recognition words; and a collation unit calculating a degree of similarity between the voice signal and each of the recognition words and reject words stored in the word dictionary by using the feature amount extracted by the extraction unit, determining whether or not a word having a high calculated degree of similarity corresponds to a reject word, and, when the word is determined to be a reject word, excluding the recognition word stored in the word dictionary in association with the reject word from the result of recognition and outputting a recognition word having a high calculated degree of similarity as the result of recognition. | 01-14-2010 |
20100211391 | AUTOMATIC COMPUTATION STREAMING PARTITION FOR VOICE RECOGNITION ON MULTIPLE PROCESSORS WITH LIMITED MEMORY - Speech processing is disclosed for an apparatus having a main processing unit, a memory unit, and one or more co-processors. Memory maintenance and voice recognition result retrievals upon execution are performed with a first main processor thread. Voice detection and initial feature extraction on the raw data are performed with a first co-processor thread. A second co-processor thread receives feature data derived for one or more features extracted by the first co-processor thread and information for locating probability density functions needed for probability computation by a speech recognition model, and computes a probability that the one or more features correspond to a known sub-unit of speech using the probability density functions and the feature data. At least a portion of a path probability that a sequence of sub-units of speech corresponds to a known speech unit is computed with a third co-processor thread. | 08-19-2010 |
20110093268 | APPARATUS AND METHOD FOR ANALYSIS OF LANGUAGE MODEL CHANGES - An apparatus, a method, and a machine-readable medium are provided for characterizing differences between two language models. A group of utterances from each of a group of time domains are examined. One of a significant word change or a significant word class change within the plurality of utterances is determined. A first cluster of utterances including a word or a word class corresponding to the one of the significant word change or the significant word class change is generated from the utterances. A second cluster of utterances not including the word or the word class corresponding to the one of the significant word change or the significant word class change is generated from the utterances. | 04-21-2011 |
20110112838 | SYSTEM AND METHOD FOR LOW OVERHEAD VOICE AUTHENTICATION - A system and method are provided to authenticate a voice in a frequency domain. A voice in the time domain is transformed to a signal in the frequency domain. The first harmonic is set to a predetermined frequency and the other harmonic components are equalized. Similarly, the amplitude of the first harmonic is set to a predetermined amplitude, and the harmonic components are also equalized. The voice signal is then filtered. The amplitudes of each of the harmonic components are then digitized into bits to form at least part of a voice ID. In another system and method, a voice is authenticated in a time domain. The initial rise time, initial fall time, second rise time, second fall time and final oscillation time are digitized into bits to form at least part of a voice ID. The voice IDs are used to authenticate a user's voice. | 05-12-2011 |
20110112839 | COMMAND RECOGNITION DEVICE, COMMAND RECOGNITION METHOD, AND COMMAND RECOGNITION ROBOT - A command recognition device includes: an utterance understanding unit that determines or selects word sequence information from speech information; a speech confidence degree calculating unit that calculates a degree of speech confidence based on the speech information and the word sequence information; a phrase confidence degree calculating unit that calculates a degree of phrase confidence based on image information and phrase information included in the word sequence information; and a motion control instructing unit that determines whether a command of the word sequence information should be executed based on the degree of speech confidence and the degree of phrase confidence. | 05-12-2011 |
20110131045 | SYSTEMS AND METHODS FOR RESPONDING TO NATURAL LANGUAGE SPEECH UTTERANCE - Systems and methods are provided for receiving speech and non-speech communications of natural language questions and/or commands, transcribing the speech and non-speech communications to textual messages, and executing the questions and/or commands. The invention applies context, prior information, domain knowledge, and user-specific profile data to achieve a natural environment for one or more users presenting questions or commands across multiple domains. The systems and methods create, store, and use extensive personal profile information for each user, thereby improving the reliability of determining the context of the speech and non-speech communications and presenting the expected results for a particular question or command. | 06-02-2011 |
20110184736 | AUTOMATED METHOD OF RECOGNIZING INPUTTED INFORMATION ITEMS AND SELECTING INFORMATION ITEMS - Automated methods are provided for recognizing inputted information items and selecting information items. The recognition and selection processes are performed by selecting category designations that the information items belong to. The category designations improve the accuracy and speed of the inputting and selection processes. | 07-28-2011 |
20110208525 | VOICE RECOGNIZING APPARATUS - A voice recognizing apparatus includes a voice start instructing section | 08-25-2011 |
20110295604 | SYSTEM AND METHOD FOR AUTOMATIC VERIFICATION OF THE UNDERSTANDABILITY OF SPEECH - Disclosed herein are systems, methods, and computer-readable storage media for processing a message received from a user to determine whether an estimate of intelligibility is below an intelligibility threshold. The method includes recognizing a portion of a user's message that contains the one or more expected utterances from a critical information list, calculating an estimate of intelligibility for the recognized portion of the user's message that contains the one or more expected utterances, and prompting the user to repeat at least the recognized portion of the user's message if the calculated estimate of intelligibility for the recognized portion of the user's message is below an intelligibility threshold. In one aspect, the method further includes prompting the user to repeat at least a portion of the message if any of a measured speech level and a measured signal-to-noise ratio of the user's message are determined to be below their respective thresholds. | 12-01-2011 |
20120041762 | Dialogue Detector and Correction - An apparatus and method for tracking dialogue and other sound signals in film, television or other systems with multiple channel sound is described. One or more audio channels that are expected to carry the speech of persons appearing in the program, or other particular types of sounds, are inspected to determine whether each channel's audio includes particular sounds such as MUEVs, including phonemes corresponding to human speech patterns. If an improper number of particular sounds such as phonemes is found in the channel(s), an action such as a report, an alarm, or a correction is taken. The inspection of the audio channel(s) may be made in conjunction with the appearance of corresponding images associated with the sound, such as visemes in the video signal, to improve the determination of types of sounds such as phonemes. | 02-16-2012 |
20120089396 | APPARATUS AND METHOD FOR SPEECH ANALYSIS - A system that incorporates teachings of the present disclosure may include, for example, an interface for receiving an utterance of speech and converting the utterance into a speech signal, such as a digital representation including a waveform and/or spectrum; and a processor for dividing the speech signal into segments and detecting emotional information from speech. Emotional information is detected by comparing the speech segments to a baseline to identify the emotion or emotions from the suprasegmental (i.e., paralinguistic) information in speech, wherein the baseline is determined from acoustic characteristics of a plurality of emotion categories. Other embodiments are disclosed. | 04-12-2012 |
20120150541 | MALE ACOUSTIC MODEL ADAPTATION BASED ON LANGUAGE-INDEPENDENT FEMALE SPEECH DATA - A method of generating proxy acoustic models for use in automatic speech recognition includes training acoustic models from speech received via microphone from male speakers of a first language, and adapting the acoustic models in response to language-independent speech data from female speakers of a second language, to generate proxy acoustic models for use during runtime of speech recognition of an utterance from a female speaker of the first language. | 06-14-2012 |
20120179467 | USER INTENTION BASED ON N-BEST LIST OF RECOGNITION HYPOTHESES FOR UTTERANCES IN A DIALOG - Disclosed herein are systems, computer-implemented methods, and tangible computer-readable media for using alternate recognition hypotheses to improve whole-dialog understanding accuracy. The method includes receiving an utterance as part of a user dialog, generating an N-best list of recognition hypotheses for the user dialog turn, selecting an underlying user intention based on a belief distribution across the generated N-best list and at least one contextually similar N-best list, and responding to the user based on the selected underlying user intention. Selecting an intention can further be based on confidence scores associated with recognition hypotheses in the generated N-best lists, and also on the probability of a user's action given their underlying intention. A belief or cumulative confidence score can be assigned to each inferred user intention. | 07-12-2012 |
20120197643 | MAPPING OBSTRUENT SPEECH ENERGY TO LOWER FREQUENCIES - A speech signal processing system and method which use the following steps: (a) receiving an utterance from a user via a microphone that converts the utterance into a speech signal; and (b) pre-processing the speech signal using a processor. The pre-processing step includes extracting acoustic data from the received speech signal, determining from the acoustic data whether the utterance includes one or more obstruents, estimating speech energy from higher frequencies associated with the identified obstruents, and mapping the estimated speech energy to lower frequencies. | 08-02-2012 |
20120209609 | USER-SPECIFIC CONFIDENCE THRESHOLDS FOR SPEECH RECOGNITION - A method of automatic speech recognition includes receiving an utterance from a user via a microphone that converts the utterance into a speech signal, pre-processing the speech signal using a processor to extract acoustic data from the received speech signal, and identifying at least one user-specific characteristic in response to the extracted acoustic data. The method also includes determining a user-specific confidence threshold responsive to the at least one user-specific characteristic, and using the user-specific confidence threshold to recognize the utterance received from the user and/or to assess confusability of the utterance with stored vocabulary. | 08-16-2012 |
20120215537 | Sound Recognition Operation Apparatus and Sound Recognition Operation Method - According to one embodiment, a sound recognition operation apparatus includes a sound detection module, a keyword detection module, an audio mute module, and a transmission module. The sound detection module is configured to detect sound. The keyword detection module is configured to detect a particular keyword using voice recognition when the sound detection module detects sound. The audio mute module is configured to transmit an operation signal for muting audio sound when the keyword detection module detects the keyword. The transmission module is configured to recognize a voice command after the keyword is detected by the keyword detection module, and to transmit an operation signal corresponding to the voice command. | 08-23-2012 |
20120239400 | SPEECH DATA ANALYSIS DEVICE, SPEECH DATA ANALYSIS METHOD AND SPEECH DATA ANALYSIS PROGRAM - A speaker or a set of speakers can be recognized with high accuracy even when multiple speakers and the relationships between speakers change over time. A device comprises a speaker model derivation means for deriving a speaker model, which defines a voice property per speaker, from speech data made of multiple utterances to which speaker labels (information for identifying a speaker) are given; a speaker co-occurrence model derivation means for deriving, by use of the speaker model derived by the speaker model derivation means, a speaker co-occurrence model indicating the strength of a co-occurrence relationship between the speakers from session data, which is speech data divided into units of a series of conversation; and a model structure update means for detecting predefined events with reference to a session of newly-added speech data and, when a predefined event is detected, updating the structure of at least one of the speaker model and the speaker co-occurrence model. | 09-20-2012 |
20120253811 | SPEECH PROCESSING SYSTEM AND METHOD - A method for identifying a plurality of speakers in audio data and for decoding the speech spoken by said speakers. | 10-04-2012 |
20120296651 | USER AUTHENTICATION BY COMBINING SPEAKER VERIFICATION AND REVERSE TURING TEST - Methods and system for authenticating a user are disclosed. The present invention includes accessing a collection of personal information related to the user. The present invention also includes performing an authentication operation that is based on the collection of personal information. The authentication operation incorporates at least one dynamic component and prompts the user to give an audible utterance. The audible utterance is compared to a stored voiceprint. | 11-22-2012 |
20120303370 | SYSTEMS AND METHODS FOR EXTRACTING MEANING FROM MULTIMODAL INPUTS USING FINITE-STATE DEVICES - Multimodal utterances contain a number of different modes. These modes can include speech, gestures, and pen, haptic, and gaze inputs, and the like. This invention uses recognition results from one or more of these modes to provide compensation to the recognition process of one or more other ones of these modes. In various exemplary embodiments, a multimodal recognition system inputs one or more recognition lattices from one or more of these modes, and generates one or more models to be used by one or more mode recognizers to recognize the one or more other modes. In one exemplary embodiment, a gesture recognizer inputs a gesture input and outputs a gesture recognition lattice to a multimodal parser. The multimodal parser generates a language model and outputs it to an automatic speech recognition system, which uses the received language model to recognize the speech input that corresponds to the recognized gesture input. | 11-29-2012 |
20130018657 | Method and System for Bio-Metric Voice Print Authentication | 01-17-2013 |
20130030809 | SPEAKER VERIFICATION METHODS AND APPARATUS - One aspect includes determining validity of an identity asserted by a speaker using a voice print associated with a user whose identity the speaker is asserting, the voice print obtained from characteristic features of at least one first voice signal obtained from the user uttering at least one enrollment utterance including at least one enrollment word by obtaining a second voice signal of the speaker uttering at least one challenge utterance that includes at least one word not in the at least one enrollment utterance, obtaining at least one characteristic feature from the second voice signal, comparing the at least one characteristic feature with at least a portion of the voice print to determine a similarity between the at least one characteristic feature and the at least a portion of the voice print, and determining whether the speaker is the user based, at least in part, on the similarity. | 01-31-2013 |
20130080169 | AUDIO ANALYSIS SYSTEM, AUDIO ANALYSIS APPARATUS, AUDIO ANALYSIS TERMINAL - An audio analysis system includes a terminal apparatus and a host system. The terminal apparatus acquires an audio signal of a sound containing utterances of a user and another person, discriminates between portions of the audio signal corresponding to the utterances of the user and the other person, detects an utterance feature based on the portion corresponding to the utterance of the user or the other person, and transmits utterance information including the discrimination and detection results to the host system. The host system detects a part corresponding to a conversation from the received utterance information, detects portions of the part of the utterance information corresponding to the user and the other person, compares a combination of plural utterance features corresponding to the portions of the part of the utterance information of the user and the other person with relation information to estimate an emotion, and outputs estimation information. | 03-28-2013 |
20130080170 | AUDIO ANALYSIS APPARATUS AND AUDIO ANALYSIS SYSTEM - An audio analysis apparatus includes the following components. A main body includes a discrimination unit and a transmission unit. A strap is used for hanging the main body from a user's neck. A first audio acquisition device is provided to the strap or the main body. A second audio acquisition device is provided to the strap at a position where the distance between the second audio acquisition device and the user's mouth is smaller than the distance between the first audio acquisition device and the user's mouth in a state where the strap is worn around the user's neck. The discrimination unit discriminates whether an acquired sound is an uttered voice of the user or of another person by comparing audio signals of the sound acquired by the first and second audio acquisition devices. The transmission unit transmits information including the discrimination result to an external apparatus. | 03-28-2013 |
20130090927 | PHONOLOGICALLY-BASED BIOMARKERS FOR MAJOR DEPRESSIVE DISORDER - A system and a method for assessing a condition in a subject. Phones from speech of the subject are recognized, one or more prosodic or speech-excitation-source features of the phones are extracted, and an assessment of a condition of the subject is generated based on a correlation between the features of the phones and the condition. | 04-11-2013 |
20130090928 | SYSTEM AND METHOD FOR PROCESSING SPEECH RECOGNITION - An automatic speech recognition (ASR) system and method is provided for controlling the recognition of speech utterances generated by an end user operating a communications device. The ASR system and method can be used with a mobile device that is used in a communications network. The ASR system can be used for ASR of speech utterances input into a mobile device, to perform compensating techniques using at least one characteristic, and to update an ASR speech recognizer associated with the ASR system by determining and using a background noise value and a distortion value that is based on the features of the mobile device. The ASR system can be used to augment a limited data input capability of a mobile device, for example, caused by limited input devices physically located on the mobile device. | 04-11-2013 |
20130144623 | VISUAL PRESENTATION OF SPEAKER-RELATED INFORMATION - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to determine and present speaker-related information based on speaker utterances. In one embodiment, the AEFS receives data that represents an utterance of a speaker received by a hearing device of the user, such as a hearing aid, smart phone, media player/device, or the like. The AEFS identifies the speaker based on the received data, such as by performing speaker recognition. The AEFS determines speaker-related information associated with the identified speaker, such as by determining an identifier (e.g., name or title) of the speaker, by locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS then informs the user of the speaker-related information, such as by presenting the speaker-related information on a display of the hearing device or some other device accessible to the user. | 06-06-2013 |
20130151253 | System and Method for Targeted Tuning of a Speech Recognition System - A system and method of targeted tuning of a speech recognition system are disclosed. A particular method includes detecting that a frequency of occurrence of a particular type of utterance satisfies a threshold. The method further includes tuning a speech recognition system with respect to the particular type of utterance. | 06-13-2013 |
20130158998 | Systems and Methods for Extracting Meaning from Multimodal Inputs Using Finite-State Devices - Multimodal utterances contain a number of different modes. These modes can include speech, gestures, and pen, haptic, and gaze inputs, and the like. This invention uses recognition results from one or more of these modes to provide compensation to the recognition process of one or more other ones of these modes. In various exemplary embodiments, a multimodal recognition system inputs one or more recognition lattices from one or more of these modes, and generates one or more models to be used by one or more mode recognizers to recognize the one or more other modes. In one exemplary embodiment, a gesture recognizer inputs a gesture input and outputs a gesture recognition lattice to a multimodal parser. The multimodal parser generates a language model and outputs it to an automatic speech recognition system, which uses the received language model to recognize the speech input that corresponds to the recognized gesture input. | 06-20-2013 |
20130173268 | SPEAKER VERIFICATION IN A HEALTH MONITORING SYSTEM - A method for verifying that a person is registered to use a telemedical device includes identifying an unprompted trigger phrase in words spoken by a person and received by the telemedical device. The telemedical device prompts the person to state a name of a registered user and optionally prompts the person to state health tips for the person. The telemedical device verifies that the person is the registered user using utterance data generated from the unprompted trigger phrase, name of the registered user, and health tips. | 07-04-2013 |
20130268273 | METHOD OF RECOGNIZING GENDER OR AGE OF A SPEAKER ACCORDING TO SPEECH EMOTION OR AROUSAL - A method of recognizing the gender or age of a speaker according to speech emotion or arousal includes the following steps of A) segmentalizing speech signals into a plurality of speech segments; B) fetching the first speech segment from the plural speech segments to acquire at least one of the emotional features or the arousal degree in the speech segment; C) determining whether at least one of the emotional feature and the arousal degree conforms to a given condition; if yes, proceeding to step D); if no, returning to step B) and fetching the next speech segment; D) fetching the feature indicative of gender or age from the speech segment to acquire at least one feature parameter; and E) recognizing the at least one feature parameter to determine the gender or age of the speaker at the currently-processed speech segment. | 10-10-2013 |
20130289992 | VOICE RECOGNITION METHOD AND VOICE RECOGNITION APPARATUS - A voice recognition method includes: detecting a vocal section including a vocal sound in a voice, based on a feature value of an audio signal representing the voice; identifying a word expressed by the vocal sound in the vocal section, by matching the feature value of the audio signal of the vocal section and an acoustic model of each of a plurality of words; and selecting, with a processor, the word expressed by the vocal sound in a word section based on a comparison result between a signal characteristic of the word section and a signal characteristic of the vocal section. | 10-31-2013 |
20130325473 | METHOD AND SYSTEM FOR DUAL SCORING FOR TEXT-DEPENDENT SPEAKER VERIFICATION - Embodiments of systems and methods for speaker verification are provided. In various embodiments, a method includes receiving an utterance from a speaker and determining a text-independent speaker verification score and a text-dependent speaker verification score in response to the utterance. Various embodiments include a system for speaker verification, the system comprising an audio receiving device for receiving an utterance from a speaker and converting the utterance to an utterance signal, and a processor coupled to the audio receiving device for determining speaker verification in response to the utterance signal, wherein the processor determines speaker verification in response to a UBM-independent speaker-normalized score. | 12-05-2013 |
20140012576 | SIGNAL PROCESSING METHOD - A signal processing method includes separating a mixed sound signal in which a plurality of excitations are mixed into the respective excitations, and performing speech detection on the plurality of separated excitation signals, judging whether or not the plurality of excitation signals are speech and generating speech section information indicating speech/non-speech information for each excitation signal. The signal processing method also includes at least one of calculating and analyzing an utterance overlap duration using the speech section information for combinations of the plurality of excitation signals and of calculating and analyzing a silence duration. The signal processing method further includes calculating a degree of establishment of a conversation based on the extracted utterance overlap duration or the silence duration. | 01-09-2014 |
20140012577 | SYSTEM AND METHOD FOR DELIVERING TARGETED ADVERTISEMENTS AND TRACKING ADVERTISEMENT INTERACTIONS IN VOICE RECOGNITION CONTEXTS - The system and method described herein may use various natural language models to deliver targeted advertisements and track advertisement interactions in voice recognition contexts. In particular, in response to an input device receiving an utterance, a conversational language processor may select and deliver one or more advertisements targeted to a user that spoke the utterance based on cognitive models associated with the user, various users having similar characteristics to the user, an environment in which the user spoke the utterance, or other criteria. Further, subsequent interaction with the targeted advertisements may be tracked to build and refine the cognitive models and thereby enhance the information used to deliver targeted advertisements in response to subsequent utterances. | 01-09-2014 |
20140025377 | SYSTEM, METHOD AND PROGRAM PRODUCT FOR PROVIDING AUTOMATIC SPEECH RECOGNITION (ASR) IN A SHARED RESOURCE ENVIRONMENT - A speech recognition system, method of recognizing speech and a computer program product therefor. A client device identified with a context for an associated user selectively streams audio to a provider computer, e.g., a cloud computer. Speech recognition receives streaming audio, maps utterances to specific textual candidates and determines a likelihood of a correct match for each mapped textual candidate. A context model selectively winnows candidates to resolve recognition ambiguity according to context whenever multiple textual candidates are recognized as potential matches for the same mapped utterance. Matches are used to update the context model, which may be used for multiple users in the same context. | 01-23-2014 |
20140025378 | Multi-Stage Speaker Adaptation - A first gender-specific speaker adaptation technique may be selected based on characteristics of a first set of feature vectors that correspond to a first unit of input speech. The first set of feature vectors may be configured for use in automatic speech recognition (ASR) of the first unit of input speech. A second set of feature vectors, which correspond to a second unit of input speech, may be modified based on the first gender-specific speaker adaptation technique. The modified second set of feature vectors may be configured for use in ASR of the second unit of input speech. A first speaker-dependent speaker adaptation technique may be selected based on characteristics of the second set of feature vectors. A third set of feature vectors, which correspond to a third unit of speech, may be modified based on the first speaker-dependent speaker adaptation technique. | 01-23-2014 |
20140039893 | Personalized Voice-Driven User Interfaces for Remote Multi-User Services - Disclosed embodiments provide for personalizing a voice user interface of a remote multi-user service. A voice user interface for the remote multi-user service can be provided and voice information from an identified user can be received at the multi-user service through the voice user interface. A language model specific to the identified user can be retrieved that models one or more language elements. The retrieved language model can be applied to interpret the received voice information and a response can be generated by the multi-user service in response to the interpreted voice information. | 02-06-2014 |
20140046666 | INFORMATION PROCESSING APPARATUS, COMPUTER PROGRAM PRODUCT, AND INFORMATION PROCESSING METHOD - According to an embodiment, an information processing apparatus includes a dividing unit, an assigning unit, and a generating unit. The dividing unit is configured to divide speech data into pieces of utterance data. The assigning unit is configured to assign speaker identification information to each piece of utterance data based on an acoustic feature of each piece of utterance data. The generating unit is configured to generate a candidate list that indicates candidate speaker names so as to enable a user to determine a speaker name to be given to the piece of utterance data identified by instruction information, based on operation history information in which at least pieces of utterance identification information, pieces of the speaker identification information, and speaker names given by the user to the respective pieces of utterance data are associated with one another. | 02-13-2014 |
20140081638 | CUT AND PASTE SPOOFING DETECTION USING DYNAMIC TIME WARPING - The invention refers to a method for comparing voice utterances, the method comprising the steps: extracting a plurality of features ( | 03-20-2014 |
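The abstract above is truncated, but the dynamic time warping comparison named in the title can be sketched generically; the feature sequences, distance measure, and decision rule below are assumptions for illustration, not the patent's actual parameters.

```python
# Minimal dynamic time warping (DTW) sketch over 1-D feature sequences.

def dtw_distance(seq_a, seq_b):
    """Cumulative alignment cost between two feature sequences under DTW."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A "cut and paste" replay of a previously captured utterance aligns
# near-perfectly with the stored one, so an implausibly low DTW cost
# between a new utterance and an earlier recording can flag spoofing.
stored = [0.1, 0.9, 0.4, 0.7]
replayed = [0.1, 0.9, 0.4, 0.7]
print(dtw_distance(stored, replayed))  # 0.0
```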
20140081639 | COMMUNICATION SUPPORT DEVICE AND COMMUNICATION SUPPORT METHOD - The communication support device includes: a storing unit configured to store an utterance of a first speaker transmitted from a first terminal as utterance information; an analyzing unit configured to obtain a holding notice which sets communications with the first terminal to a holding state, the communications being transmitted from a second terminal used by a second speaker who communicates with the first speaker, and to analyze features of the utterance information that correspond to the time of the holding state; and an instructing unit configured to output to the second terminal determination information on the first speaker based on the features of the utterance information of the first speaker. | 03-20-2014 |
20140081640 | SPEAKER VERIFICATION METHODS AND APPARATUS - One aspect includes determining validity of an identity asserted by a speaker using a voice print associated with a user whose identity the speaker is asserting, the voice print obtained from characteristic features of at least one first voice signal obtained from the user uttering at least one enrollment utterance including at least one enrollment word by obtaining a second voice signal of the speaker uttering at least one challenge utterance that includes at least one word not in the at least one enrollment utterance, obtaining at least one characteristic feature from the second voice signal, comparing the at least one characteristic feature with at least a portion of the voice print to determine a similarity between the at least one characteristic feature and the at least a portion of the voice print, and determining whether the speaker is the user based, at least in part, on the similarity. | 03-20-2014 |
20140095162 | HIERARCHICAL METHODS AND APPARATUS FOR EXTRACTING USER INTENT FROM SPOKEN UTTERANCES - Improved techniques are disclosed for permitting a user to employ more human-based grammar (i.e., free form or conversational input) while addressing a target system via a voice system. For example, a technique for determining intent associated with a spoken utterance of a user comprises the following steps/operations. Decoded speech uttered by the user is obtained. An intent is then extracted from the decoded speech uttered by the user. The intent is extracted in an iterative manner such that a first class is determined after a first iteration and a sub-class of the first class is determined after a second iteration. The first class and the sub-class of the first class are hierarchically indicative of the intent of the user, e.g., a target and data that may be associated with the target. The multi-stage intent extraction approach may have more than two iterations. By way of example only, the user intent extracting step may further determine a sub-class of the sub-class of the first class after a third iteration, such that the first class, the sub-class of the first class, and the sub-class of the sub-class of the first class are hierarchically indicative of the intent of the user. | 04-03-2014 |
20140122077 | VOICE AGENT DEVICE AND METHOD FOR CONTROLLING THE SAME - A voice agent device includes: a position detection unit which detects a position of a person in a conversation space to which the voice agent device is capable of providing information; a voice volume detection unit which detects a voice volume of the person from a sound signal in the conversation space obtained by a sound acquisition unit; a conversation area determination unit which determines a conversation area as a first area including the position when the voice volume has a first voice volume value and determines the conversation area as a second area including the position and being smaller than the first area when the voice volume has a second voice volume value smaller than the first voice volume value, the conversation area being a spatial range where an utterance of the person can be heard; and an information provision unit which provides provision information to the conversation area. | 05-01-2014 |
20140129224 | METHOD AND APPARATUS FOR UTTERANCE VERIFICATION - A method and apparatus for utterance verification are provided for verifying a recognized vocabulary output from speech recognition. The apparatus for utterance verification includes a reference score accumulator, a verification score generator and a decision device. A log-likelihood score obtained from speech recognition is processed by taking a logarithm of the value of the probability of one of feature vectors of an input speech conditioned on one of states of each model vocabulary. A verification score is generated based on the processed result. The verification score is compared with a predetermined threshold value so as to reject or accept the recognized vocabulary. | 05-08-2014 |
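The verification-score decision described in the abstract above can be sketched as follows; the averaging normalization and the threshold value are assumptions for illustration, since the abstract does not specify them.

```python
import math

def verification_score(frame_probs):
    """Average log-likelihood of the input feature vectors conditioned on
    the model-vocabulary states (probabilities assumed precomputed)."""
    return sum(math.log(p) for p in frame_probs) / len(frame_probs)

def accept(frame_probs, threshold=-1.0):
    """Accept the recognized vocabulary when the score clears the threshold."""
    return verification_score(frame_probs) >= threshold

print(accept([0.8, 0.7, 0.9]))   # True: high per-frame likelihoods
print(accept([0.1, 0.05, 0.2]))  # False: recognized word is rejected
```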
20140136204 | METHODS AND SYSTEMS FOR SPEECH SYSTEMS - Methods and systems are provided for a speech system of a vehicle. In one embodiment, the method includes: generating an utterance signature from a speech utterance received from a user of the speech system without a specific need for a user identification interaction; developing a user signature for a user based on the utterance signature; and managing a dialog with the user based on the user signature. | 05-15-2014 |
20140136205 | DISPLAY APPARATUS, VOICE ACQUIRING APPARATUS AND VOICE RECOGNITION METHOD THEREOF - Disclosed are a display apparatus, a voice acquiring apparatus and a voice recognition method thereof, the display apparatus including: a display unit which displays an image; a communication unit which communicates with a plurality of external apparatuses; and a controller which includes a voice recognition engine to recognize a user's voice, receives a voice signal from a voice acquiring unit, and controls the communication unit to receive candidate instruction words from at least one of the plurality of external apparatuses to recognize the received voice signal. | 05-15-2014 |
20140136206 | MASH-UP SERVICE GENERATION APPARATUS AND METHOD BASED ON VOICE COMMAND - Provided are a mash-up service generation apparatus and method based on a voice command. The mash-up service generation apparatus includes a voice recognizer configured to convert a voice command into a character string, a mash-up natural language processor configured to extract a word corresponding to a mash-up module based on the character string, and convert the word into at least one of metadata of the mash-up module and metadata of a mash-up sequence in which a plurality of mash-up modules are combined, and a mash-up sequence processor configured to search for and select a target mash-up sequence corresponding to the metadata of the mash-up sequence, and newly generate the target mash-up sequence. Accordingly, a customized mash-up service can be provided to a user. | 05-15-2014 |
20140142943 | SIGNAL PROCESSING DEVICE, METHOD FOR PROCESSING SIGNAL - A signal processing device includes a processor; and a memory which stores a plurality of instructions, which when executed by the processor, cause the processor to execute: receiving speech of a speaker as a first signal; detecting an expiration period included in the first signal; extracting a number of phonemes included in the expiration period; and controlling a second signal, which is output to the speaker, on the basis of the number of phonemes and a length of the expiration period. | 05-22-2014 |
20140195236 | SPEAKER VERIFICATION AND IDENTIFICATION USING ARTIFICIAL NEURAL NETWORK-BASED SUB-PHONETIC UNIT DISCRIMINATION - In one embodiment, a computer system stores speech data for a plurality of speakers, where the speech data includes a plurality of feature vectors and, for each feature vector, an associated sub-phonetic class. The computer system then builds, based on the speech data, an artificial neural network (ANN) for modeling speech of a target speaker in the plurality of speakers, where the ANN is configured to discriminate between instances of sub-phonetic classes uttered by the target speaker and instances of sub-phonetic classes uttered by other speakers in the plurality of speakers. | 07-10-2014 |
20140195237 | FAST, LANGUAGE-INDEPENDENT METHOD FOR USER AUTHENTICATION BY VOICE - A method and system for training a user authentication by voice signal are described. In one embodiment, a set of feature vectors are decomposed into speaker-specific recognition units. The speaker-specific recognition units are used to compute distribution values to train the voice signal. In addition, spectral feature vectors are decomposed into speaker-specific characteristic units which are compared to the speaker-specific distribution values. If the speaker-specific characteristic units are within a threshold limit of the speaker-specific distribution values, the speech signal is authenticated. | 07-10-2014 |
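The threshold test described in the abstract above can be sketched as follows; the per-dimension distance check, the enrolled values, and the limit are illustrative assumptions, not the patent's actual parameters.

```python
def within_threshold(characteristic, distribution_mean, limit):
    """Accept when every speaker-specific characteristic unit lies within
    `limit` of the corresponding speaker-specific distribution value."""
    return all(abs(c - m) <= limit
               for c, m in zip(characteristic, distribution_mean))

enrolled_means = [0.42, 1.10, -0.35]   # trained distribution values (assumed)
claimed = [0.40, 1.15, -0.30]          # characteristic units from new speech
impostor = [0.90, 0.20, 0.50]
print(within_threshold(claimed, enrolled_means, 0.1))   # True: authenticated
print(within_threshold(impostor, enrolled_means, 0.1))  # False: rejected
```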
20140214425 | VOICE RECOGNITION APPARATUS AND METHOD FOR PROVIDING RESPONSE INFORMATION - A voice recognition apparatus and a method for providing response information are provided. The voice recognition apparatus according to the present disclosure includes an extractor configured to extract a first utterance element representing a user action and a second utterance element representing an object from a user's utterance voice signal; a domain determiner configured to detect an expansion domain related to the extracted first and second utterance elements based on a hierarchical domain model, and determine at least one candidate domain related to the detected expansion domain as a final domain; a communicator which performs communication with an external apparatus; and a controller configured to control the communicator to transmit information regarding the first and second utterance elements and information regarding the determined final domain. | 07-31-2014 |
20140236598 | Methods and Systems for Sharing of Adapted Voice Profiles - Methods and systems for sharing of adapted voice profiles are provided. The method may comprise receiving, at a computing system, one or more speech samples, and the one or more speech samples may include a plurality of spoken utterances. The method may further comprise determining, at the computing system, a voice profile associated with a speaker of the plurality of spoken utterances, and including an adapted voice of the speaker. Still further, the method may comprise receiving, at the computing system, an authorization profile associated with the determined voice profile, and the authorization profile may include one or more user identifiers associated with one or more respective users. Yet still further, the method may comprise the computing system providing the voice profile to at least one computing device associated with the one or more respective users, based at least in part on the authorization profile. | 08-21-2014 |
20140236599 | SYSTEM AND METHOD FOR TARGETED TUNING OF A SPEECH RECOGNITION SYSTEM - A system and method of targeted tuning of a speech recognition system are disclosed. A particular method includes detecting that a frequency of occurrence of a particular type of utterance satisfies a threshold. The method further includes tuning a speech recognition system with respect to the particular type of utterance. | 08-21-2014 |
20140244258 | SPEECH RECOGNITION METHOD OF SENTENCE HAVING MULTIPLE INSTRUCTIONS - A voice recognition method for a single sentence including a multi-instruction in an interactive voice user interface. The method includes the steps of detecting a connection ending by analyzing the morphemes of a single sentence on which voice recognition has been performed, separating the single sentence into a plurality of passages based on the connection ending, detecting a multi-connection ending by analyzing the connection ending, extracting instructions by specifically analyzing passages including the multi-connection ending, and outputting a multi-instruction included in the single sentence by combining the extracted instructions. In accordance with the present invention, consumer usability can be significantly increased because a multi-operation intention can be checked in one sentence. | 08-28-2014 |
20140278419 | VOICE COMMAND DEFINITIONS USED IN LAUNCHING APPLICATION WITH A COMMAND - A voice command definition file (VCDF) declaratively defines voice commands for an application. For example, the VCDF may include definitions for: voice commands; one or more phrases/utterances that may be said to execute each of the commands; a navigation location to navigate to within the application (e.g. a page); phrase lists containing items that may be used as a parameter in a voice command; examples; feedback; and the like. A user may say a single utterance to launch the application, navigate to the associated location of the command and execute the command. The VCDF may define multiple ways to listen for a particular command. The VCDF may be edited/defined by a user and may include a user friendly name for an application. A speech engine loads the VCDF for use such that it may recognize the commands associated with an application. The definitions may be updated during runtime. | 09-18-2014 |
20140278420 | Method and Apparatus for Training a Voice Recognition Model Database - An electronic device digitally combines a single voice input with each of a series of noise samples. Each noise sample is taken from a different audio environment (e.g., street noise, babble, interior car noise). The voice input/noise sample combinations are used to train a voice recognition model database without the user having to repeat the voice input in each of the different environments. In one variation, the electronic device transmits the user's voice input to a server that maintains and trains the voice recognition model database. | 09-18-2014 |
20140288932 | Automated Speech Recognition Proxy System for Natural Language Understanding - An interactive response system mixes HSR subsystems with ASR subsystems to facilitate overall capability of voice user interfaces. The system permits imperfect ASR subsystems to nonetheless relieve burden on HSR subsystems. An ASR proxy is used to implement an IVR system, and the proxy dynamically determines how many ASR and HSR subsystems are to perform recognition for any particular utterance, based on factors such as confidence thresholds of the ASRs and availability of human resources for HSRs. | 09-25-2014 |
20140324432 | Method of Accessing a Dial-Up Service - A method of accessing a dial-up service is disclosed. An example method of providing access to a service includes receiving a first speech signal from a user to form a first utterance; recognizing the first utterance using speaker-independent speech recognition; requesting the user to enter a personal identification number; and when the personal identification number is valid, receiving a second speech signal to form a second utterance and providing access to the service. | 10-30-2014 |
20140350933 | VOICE RECOGNITION APPARATUS AND CONTROL METHOD THEREOF - A voice recognition apparatus includes: an extractor configured to extract utterance elements from a user's uttered voice; an LSP converter configured to convert the extracted utterance elements into LSP formats; and a controller configured to determine whether an utterance element related to an OOV exists among the utterance elements converted into the LSP formats with reference to vocabulary list information including pre-registered vocabularies, and to determine an OOD area in which it is impossible to provide response information in response to the uttered voice, in response to determining that the utterance element related to the OOV exists. Accordingly, the voice recognition apparatus provides appropriate response information according to a user's intent by considering a variety of utterances and possibilities regarding a user's uttered voice. | 11-27-2014 |
20140350934 | Systems and Methods for Voice Identification - Systems and methods are provided for voice identification. For example, audio characteristics are extracted from acquired voice signals; a syllable confusion network is identified based on at least information associated with the audio characteristics; a word lattice is generated based on at least information associated with the syllable confusion network and a predetermined phonetic dictionary; and an optimal character sequence is calculated in the word lattice as an identification result. | 11-27-2014 |
20140365219 | Speaker Verification in a Health Monitoring System - A method for verifying that a person is registered to use a telemedical device includes identifying an unprompted trigger phrase in words spoken by a person and received by the telemedical device. The telemedical device prompts the person to state a name of a registered user and optionally prompts the person to state health tips for the person. The telemedical device verifies that the person is the registered user using utterance data generated from the unprompted trigger phrase, name of the registered user, and health tips. | 12-11-2014 |
20150058017 | COLLABORATIVE AUDIO CONVERSATION ATTESTATION - Disclosed in some examples are systems, methods, devices, and machine readable mediums which may produce an audio recording with included verification from the individuals in the recording that the recording is accurate. In some examples, the system may also provide rights management control to those individuals. This may ensure that individuals participating in audio events that are to be recorded are assured that their words are not changed, taken out of context, or otherwise altered and that they retain control over the use of their words even after the physical file has left their control. | 02-26-2015 |
20150081301 | BIOMETRIC PASSWORD SECURITY - A system includes a user speech profile stored on a computer readable storage device, the speech profile containing a plurality of phonemes with user identifying characteristics for the phonemes, and a speech processor coupled to access the speech profile to generate a phrase containing user distinguishing phonemes based on a difference between the user identifying characteristics for such phonemes and average user identifying characteristics, such that the phrase has discriminability from other users. The speech processor may also or alternatively select the phrase as a function of ambient noise. | 03-19-2015 |
20150081302 | SYSTEM AND METHOD FOR DYNAMIC FACIAL FEATURES FOR SPEAKER RECOGNITION - Disclosed herein are systems, methods, and non-transitory computer-readable storage media for performing speaker verification. A system configured to practice the method receives a request to verify a speaker, generates a text challenge that is unique to the request, and, in response to the request, prompts the speaker to utter the text challenge. Then the system records a dynamic image feature of the speaker as the speaker utters the text challenge, and performs speaker verification based on the dynamic image feature and the text challenge. Recording the dynamic image feature of the speaker can include recording video of the speaker while speaking the text challenge. The dynamic feature can include a movement pattern of head, lips, mouth, eyes, and/or eyebrows of the speaker. The dynamic image feature can relate to phonetic content of the speaker speaking the challenge, speech prosody, and the speaker's facial expression responding to content of the challenge. | 03-19-2015 |
20150088514 | In-Call Virtual Assistants - Techniques for providing virtual assistants to assist users during a voice communication between the users. For instance, a first user operating a device may establish a voice communication with respective devices of one or more additional users, such as with a device of a second user. For instance, the first user may utilize her device to place a telephone call to the device of the second user. A virtual assistant may also join the call and, upon invocation by a user on the call, may identify voice commands from the call and may perform corresponding tasks for the users in response. | 03-26-2015 |
20150095029 | Computer-Implemented System And Method For Quantitatively Assessing Vocal Behavioral Risk - Engaging persona candidates are provided with a skills assessment that includes vocal behavior. Each candidate provides both scripted and spontaneous answers to questions in a situational setting that closely matches the daily demands of the customer support industry. Samples of the candidate's speech are evaluated to identify distinct voice cues that qualitatively describe speech characteristics, which are scored based on the candidate's spoken performance. One or more of the voice cues are mapped to phonetic analytics that quantitatively describe vocal behavior. Each voice cue also has an assigned weight. The voice cue scores for each phonetic analytic are multiplied by their assigned weights and added together to form a weighted phonetic analytic, which is then used to form a part of the vocal behavior risk assessments. | 04-02-2015 |
20150112681 | VOICE RETRIEVAL DEVICE AND VOICE RETRIEVAL METHOD - A voice retrieval device includes a processor; and a memory which stores a plurality of instructions, which when executed by the processor, cause the processor to execute: setting detection criteria for a retrieval word, based on a characteristic of the retrieval word, such that the higher the detection accuracy of the retrieval word or the lower the pronunciation difficulty of the retrieval word or the lower the appearance probability of the retrieval word, the stricter the detection criteria; performing first voice retrieval processing on voice data according to the detection criteria and detecting a section that possibly includes the retrieval word as a candidate section from the voice data; and performing second voice retrieval processing different from the first voice retrieval processing on each candidate section and determining whether or not the retrieval word is included in each candidate section. | 04-23-2015 |
20150112682 | METHOD FOR VERIFYING THE IDENTITY OF A SPEAKER AND RELATED COMPUTER READABLE MEDIUM AND COMPUTER - The present invention refers to a method for verifying the identity of a speaker based on the speaker's voice comprising the steps of: a) receiving a voice utterance; b) using biometric voice data to verify that the speaker's voice corresponds to the speaker the identity of which is to be verified based on the received voice utterance; and c) verifying that the received voice utterance is not falsified, preferably after having verified the speaker's voice; d) accepting the speaker's identity to be verified in case that both verification steps give a positive result and not accepting the speaker's identity to be verified if any of the verification steps give a negative result. The invention further refers to a corresponding computer readable medium and a computer. | 04-23-2015 |
20150142440 | Automatic Speech Recognition (ASR) Feedback for Head Mounted Displays (HMD) - Feedback mechanisms to the user of a Head Mounted Display (HMD) are provided. It is important to provide feedback to the user when speech is recognized as soon as possible after the user utters a voice command. The HMD displays and/or audibly renders an ASR acknowledgment in a manner that ensures the user that the HMD has received/understood his voiced command. | 05-21-2015 |
20150142441 | DISPLAY APPARATUS AND CONTROL METHOD THEREOF - A display apparatus is provided. The display apparatus includes a communicator configured to communicate with a voice recognition apparatus that recognizes an uttered voice of a user, an input unit configured to receive the uttered voice of the user, a display unit configured to receive voice recognition result information about the uttered voice of the user from the voice recognition apparatus and display the voice recognition result information, and a processor configured to, when the display apparatus is turned on, perform an access to the voice recognition apparatus by transmitting access request information to the voice recognition apparatus, and when the uttered voice is inputted through the input unit, transmit voice information on the uttered voice to the voice recognition apparatus through the communicator. | 05-21-2015 |
20150310855 | VOICE RECOGNITION DEVICE AND METHOD OF CONTROLLING SAME - A voice recognition device includes an extractor configured to extract at least one of a first utterance element indicating an execution command and a second utterance element indicating a subject, from a user's voice utterance, a domain determiner configured to determine a current domain to provide response information regarding the voice utterance based on the first and the second utterance elements, and a controller configured to determine a candidate conversation frame to provide the response information regarding the voice utterance on at least one of the current domain and a previous domain based on a conversation state of the current domain, wherein the previous domain is determined from a previous voice utterance of the user. | 10-29-2015 |
20150332667 | ANALYZING AUDIO INPUT FOR EFFICIENT SPEECH AND MUSIC RECOGNITION - Systems and processes for analyzing audio input for efficient speech and music recognition are provided. In one example process, an audio input can be received. A determination can be made as to whether the audio input includes music. In addition, a determination can be made as to whether the audio input includes speech. In response to determining that the audio input includes music, an acoustic fingerprint representing a portion of the audio input that includes music is generated. In response to determining that the audio input includes speech rather than music, an end-point of a speech utterance of the audio input is identified. | 11-19-2015 |
20150340029 | OPERATION ASSISTING METHOD AND OPERATION ASSISTING DEVICE - An operation assisting method comprising: comparing an input spoken voice with a preliminarily stored keyword associated with an operation target to obtain an evaluation value of likelihood that the keyword is included in the spoken voice; determining whether or not the keyword was spoken, based on the evaluation value of likelihood, wherein it is determined that the keyword was spoken if the evaluation value exceeds a threshold value; and detecting whether or not the eyes of a user are directed at the operation target. The threshold value is decreased in cases where the eyes of the user are directed at the operation target and where the evaluation value of the spoken voice falls within a predetermined range. | 11-26-2015 |
20150340031 | TERMINAL AND CONTROL METHOD THEREFOR - A method for controlling the operation of a terminal according to an embodiment of the present invention includes the steps of: operating the terminal in a voice-recognition mode by receiving a voice-recognition command from the user; analyzing a voice received from the user so as to determine the user's intention; outputting the primary response in a voice according to the user's intention; analyzing the user's reaction to the primary response; and controlling the operation of the terminal according to the result of analyzing the user's reaction. | 11-26-2015 |
20150371630 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus and a method for performing speech training in speech rehabilitation are disclosed. A report about content to be uttered in an utterance training is made to a trainee of the utterance training and the volume of a voice uttered by the trainee in response to the report is obtained. Then, the result of comparison between the obtained volume and a volume predetermined as a target volume is reported. | 12-24-2015 |
20160005396 | EVALUATION INFORMATION POSTING DEVICE AND EVALUATION INFORMATION POSTING METHOD - An evaluation information posting device determines a rest state of a vehicle on the basis of rest information, determines a facility at which the vehicle has stopped off by using position information showing a rest position of the vehicle, map information including facility information about facilities located in an area surrounding the position shown by this position information, and a keyword about a facility at the rest position of the vehicle, and, by using both stop-off facility information about the facility which is a result of the determination, and a keyword about an evaluation which is provided for this facility, generates evaluation information about the stop-off facility and posts this evaluation information to an evaluation information managing server. | 01-07-2016 |
20160011854 | VOICE RECOGNITION DEVICE AND DISPLAY METHOD | 01-14-2016 |
20160019896 | SPEAKER VERIFICATION USING CO-LOCATION INFORMATION - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for identifying a user in a multi-user environment. One of the methods includes receiving, by a first user device, an audio signal encoding an utterance, obtaining, by the first user device, a first speaker model for a first user of the first user device, obtaining, by the first user device for a second user of a second user device that is co-located with the first user device, a second speaker model for the second user or a second score that indicates a respective likelihood that the utterance was spoken by the second user, and determining, by the first user device, that the utterance was spoken by the first user using (i) the first speaker model and the second speaker model or (ii) the first speaker model and the second score. | 01-21-2016 |
20160063991 | SYSTEM AND METHOD FOR COMBINING FRAME AND SEGMENT LEVEL PROCESSING, VIA TEMPORAL POOLING, FOR PHONETIC CLASSIFICATION - Disclosed herein are systems, methods, and non-transitory computer-readable storage media for combining frame and segment level processing, via temporal pooling, for phonetic classification. A frame processor unit receives an input and extracts the time-dependent features from the input. A plurality of pooling interface units generates a plurality of feature vectors based on pooling the time-dependent features and selecting a plurality of time-dependent features according to a plurality of selection strategies. Next, a plurality of segmental classification units generates scores for the feature vectors. Each segmental classification unit (SCU) can be dedicated to a specific pooling interface unit (PIU) to form a PIU-SCU combination. Multiple PIU-SCU combinations can be further combined to form an ensemble of combinations, and the ensemble can be diversified by varying the pooling operations used by the PIU-SCU combinations. Based on the scores, the plurality of segmental classification units selects a class label and returns a result. | 03-03-2016 |
20160093293 | METHOD AND DEVICE FOR PREPROCESSING SPEECH SIGNAL - A method and a device that preprocess a speech signal are disclosed, which include extracting at least one frame corresponding to a speech recognition range from frames included in a speech signal, generating a supplementary frame to supplement speech recognition with respect to the speech recognition range based on the at least one extracted frame, and outputting a preprocessed speech signal including the supplementary frame along with the frames of the speech signal. | 03-31-2016 |
20160093305 | BIO-PHONETIC MULTI-PHRASE SPEAKER IDENTITY VERIFICATION - Systems and methods for bio-phonetic multi-phrase speaker identity verification are disclosed. Generally, a speaker identity verification engine generates a dynamic phrase including at least one dynamically-generated word. The speaker identity verification engine prompts a user to speak the dynamic phrase and receives a dynamic phrase utterance. The speaker identity verification engine extracts at least one voice characteristic from the dynamic phrase utterance and compares the at least one voice characteristic with a voice profile to generate a score. The speaker identity verification engine then determines whether to accept a speaker identity claim based on the score. | 03-31-2016 |
20160118049 | METHOD AND APPARATUS FOR SPEAKER-CALIBRATED SPEAKER DETECTION - The present invention relates to a method and apparatus for speaker-calibrated speaker detection. One embodiment of a method for generating a speaker model for use in detecting a speaker of interest includes identifying one or more speech features that best distinguish the speaker of interest from a plurality of impostor speakers and then incorporating the speech features in the speaker model. | 04-28-2016 |
20160155445 | SYSTEM AND METHOD FOR LOCALIZED ERROR DETECTION OF RECOGNITION RESULTS | 06-02-2016 |
20160253993 | METHOD, APPARATUS, AND COMPUTER-READABLE RECORDING MEDIUM FOR IMPROVING AT LEAST ONE SEMANTIC UNIT SET BY USING PHONETIC SOUND | 09-01-2016 |
20160253999 | Systems and Methods for Automated Evaluation of Human Speech | 09-01-2016 |
20180025734 | SEGMENT-BASED SPEAKER VERIFICATION USING DYNAMICALLY GENERATED PHRASES | 01-25-2018 |
20190146753 | Automatic Speech Recognition (ASR) Feedback For Head Mounted Displays (HMD) | 05-16-2019 |