Patent application number | Description | Published |
--- | --- | --- |
20090299746 | METHOD AND SYSTEM FOR SPEECH SYNTHESIS - A method for performing speech synthesis on textual content at a client. The method includes the steps of: performing speech synthesis on the textual content based on a current acoustical unit set S | 12-03-2009 |
20110054901 | METHOD AND APPARATUS FOR ALIGNING TEXTS - A method and apparatus for aligning texts. The method includes acquiring a target text and a reference text and aligning the target text and the reference text at word level based on phoneme similarity. The method can be applied to automatic archiving of a multimedia resource and to automatic searching of a multimedia resource. | 03-03-2011 |
20110270605 | ASSESSING SPEECH PROSODY - A method, system and computer readable storage medium for assessing speech prosody. The method includes the steps of: receiving input speech data; acquiring a prosody constraint; assessing prosody of the input speech data according to the prosody constraint; and providing an assessment result, where at least one of the steps is carried out using a computer device. | 11-03-2011 |
20120308038 | Sound Source Localization Apparatus and Method - Sound source localization apparatuses and methods are described. A frame amplitude difference vector is calculated based on short time frame data acquired through an array of microphones. The frame amplitude difference vector reflects differences between amplitudes captured by microphones of the array during recording of the short time frame data. Similarity between the frame amplitude difference vector and each of a plurality of reference frame amplitude difference vectors is evaluated. Each of the plurality of reference frame amplitude difference vectors reflects differences between amplitudes captured by microphones of the array during recording of sound from one of a plurality of candidate locations. A desired location of the sound source is estimated based at least on the candidate locations and the associated similarity. The sound source localization can thus be performed based at least on amplitude difference. | 12-06-2012 |
20130054244 | METHOD AND SYSTEM FOR ACHIEVING EMOTIONAL TEXT TO SPEECH - A method and system for achieving emotional text to speech. The method includes: receiving text data; generating an emotion tag for the text data by rhythm piece; and achieving TTS for the text data corresponding to the emotion tag, where the emotion tag is expressed as a set of emotion vectors and each emotion vector includes a plurality of emotion scores given based on a plurality of emotion categories. A system for the same includes: a text data receiving module; an emotion tag generating module; and a TTS module for achieving TTS, wherein the emotion tag is expressed as a set of emotion vectors and wherein each emotion vector includes a plurality of emotion scores given based on a plurality of emotion categories. | 02-28-2013 |
20140129220 | SPEAKER AND CALL CHARACTERISTIC SENSITIVE OPEN VOICE SEARCH - Techniques disclosed herein include systems and methods for open-domain voice-enabled searching that is speaker sensitive. Techniques include using speech information, speaker information, and information associated with a spoken query to enhance open voice search results. This includes integrating a textual index with a voice index to support the entire search cycle. Given a voice query, the system can execute two matching processes simultaneously. This can include a text matching process based on the output of speech recognition, as well as a voice matching process based on characteristics of a caller or user voicing a query. Characteristics of the caller can include output of voice feature extraction and metadata about the call. The system clusters callers according to these characteristics. The system can use specific voice and text clusters to modify speech recognition results, as well as modifying search results. | 05-08-2014 |
20150032446 | METHOD AND SYSTEM FOR SIGNAL TRANSMISSION CONTROL - An audio signal with a temporal sequence of blocks or frames is received or accessed. Features are determined as characterizing aggregately the sequential audio blocks/frames that have been processed recently, relative to current time. The feature determination exceeds a specificity criterion and is delayed, relative to the recently processed audio blocks/frames. Voice activity indication is detected in the audio signal. VAD is based on a decision that exceeds a preset sensitivity threshold, is computed over a brief time period relative to the block/frame duration, and relates to current block/frame features. The VAD and the recent feature determination are combined with state related information, which is based on a history of previous feature determinations that are compiled from multiple features, determined over a time prior to the recent feature determination time period. Decisions to commence or terminate the audio signal, or related gains, are outputted based on the combination. | 01-29-2015 |
20150071446 | Audio Processing Method and Audio Processing Apparatus - An audio processing method and an audio processing apparatus are described. A mono-channel audio signal is transformed into a plurality of first subband signals. Proportions of a desired component and a noise component are estimated in each of the subband signals. Second subband signals corresponding respectively to a plurality of channels are generated from each of the first subband signals. Each of the second subband signals comprises a first component and a second component obtained by assigning a spatial hearing property and a perceptual hearing property different from the spatial hearing property to the desired component and the noise component in the corresponding first subband signal respectively, based on a multi-dimensional auditory presentation method. The second subband signals are transformed into signals for rendering with the multi-dimensional auditory presentation method. By assigning different hearing properties to desired sound and noise, the intelligibility of the audio signal can be improved. | 03-12-2015 |
20150081283 | HARMONICITY ESTIMATION, AUDIO CLASSIFICATION, PITCH DETERMINATION AND NOISE ESTIMATION - Embodiments are described for harmonicity estimation, audio classification, pitch determination and noise estimation. Measuring harmonicity of an audio signal includes calculating a log amplitude spectrum of the audio signal. A first spectrum is derived by calculating each component of the first spectrum as a sum of components of the log amplitude spectrum at frequencies that, in linear frequency scale, are odd multiples of that component's frequency. A second spectrum is derived by calculating each component of the second spectrum as a sum of components of the log amplitude spectrum at frequencies that, in linear frequency scale, are even multiples of that component's frequency. A difference spectrum is derived by subtracting the first spectrum from the second spectrum. A measure of harmonicity is generated as a monotonically increasing function of the maximum component of the difference spectrum within a predetermined frequency range. | 03-19-2015 |
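The amplitude-difference localization scheme summarized in 20120308038 can be sketched in a few lines. Everything concrete below is an illustrative assumption rather than the patented method: the abstract does not specify the amplitude measure (RMS is used here), the similarity measure (cosine is used here), or how reference vectors are obtained (a toy 1/distance decay model is used to simulate them).

```python
import numpy as np

def amp_diff_vector(frame):
    # frame: (n_mics, n_samples); per-mic RMS amplitude (an assumption).
    amps = np.sqrt(np.mean(frame ** 2, axis=1))
    n = len(amps)
    # Pairwise amplitude differences, one entry per microphone pair (i < j).
    return np.array([amps[i] - amps[j] for i in range(n) for j in range(i + 1, n)])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def localize(frame, references, locations):
    # Candidate whose reference vector is most similar wins.
    v = amp_diff_vector(frame)
    return locations[int(np.argmax([cosine(v, r) for r in references]))]

# Toy setup: three microphones, amplitude decaying as 1/distance
# (a modeling assumption used only to build the reference vectors).
mics = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3]])
candidates = [(2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
tone = np.sin(2 * np.pi * 440 * np.arange(1024) / 16000.0)

def simulate(loc):
    dists = np.linalg.norm(mics - np.asarray(loc), axis=1)
    return tone[None, :] / dists[:, None]

references = [amp_diff_vector(simulate(c)) for c in candidates]
```

In this noiseless toy, a frame simulated from a candidate location matches its own reference vector exactly, so `localize` recovers the true candidate.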
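The odd/even harmonic-sum construction described in 20150081283 is concrete enough to sketch directly from the abstract. The FFT size, window, candidate bin range, and the choice of the identity as the "monotonically increasing function" are all assumptions for illustration, not values from the patent.

```python
import numpy as np

def harmonicity(frame, fmin_bin=13, fmax_bin=128):
    # Log amplitude spectrum of one (windowed) analysis frame.
    log_spec = np.log(np.abs(np.fft.rfft(frame)) + 1e-12)
    n_bins = len(log_spec)
    diff = np.zeros(n_bins)
    for k in range(fmin_bin, fmax_bin):
        odd = np.arange(k, n_bins, 2 * k)       # k, 3k, 5k, ...  ("first spectrum")
        even = np.arange(2 * k, n_bins, 2 * k)  # 2k, 4k, 6k, ... ("second spectrum")
        # Difference spectrum: second minus first, per the abstract.
        diff[k] = log_spec[even].sum() - log_spec[odd].sum()
    # Any monotonically increasing function would do; identity is used here.
    return float(diff[fmin_bin:fmax_bin].max())

# A strongly harmonic frame vs. a noise frame of the same length.
sr, n = 16000, 2048
t = np.arange(n) / sr
win = np.hanning(n)
voiced = win * sum(np.sin(2 * np.pi * 250 * h * t) for h in range(1, 21))
noise = win * np.random.default_rng(0).standard_normal(n)
```

For a harmonic frame, candidate bins at subharmonics of the fundamental sum high-energy harmonic bins in the even set and low-energy between-harmonic bins in the odd set, so the difference spectrum peaks sharply; for noise the two sums largely cancel, which is why the maximum works as a harmonicity score.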