Class / Patent application number | Description | Number of patent applications / Date published |
704202000 | Neural network | 11 |
20120290294 | NEURAL TRANSLATOR - A method and apparatus are provided for processing a set of communicated signals associated with a set of muscles, such as the muscles near the larynx of the person, or any other muscles the person uses to achieve a desired response. The method includes the steps of attaching a single integrated sensor, for example near the throat of the person proximate to the larynx, and detecting an electrical signal through the sensor. The method further includes the steps of extracting features from the detected electrical signal and continuously transforming them into speech sounds without the need for further modulation. The method also includes comparing the extracted features to a set of prototype features and selecting the prototype feature of the set that provides the smallest relative difference. | 11-15-2012 |
20130262096 | METHODS FOR ALIGNING EXPRESSIVE SPEECH UTTERANCES WITH TEXT AND SYSTEMS THEREFOR - A system-effected method for synthesizing speech, or recognizing speech, including a sequence of expressive speech utterances. The method can be computer-implemented and can include system-generating a speech signal embodying the sequence of expressive speech utterances. Other possible steps include: system-marking the speech signal with a pitch marker indicating a pitch change at or near a first zero amplitude crossing point of the speech signal following a glottal closure point, at a minimum, at a maximum, or at another location; system-marking the speech signal with at least one further pitch marker; system-aligning a sequence of prosodically marked text with the pitch-marked speech signal according to the pitch markers; and system-outputting the aligned text or the aligned speech signal, respectively. Computerized systems and stored programs for implementing method embodiments of the invention are also disclosed. | 10-03-2013 |
20140142929 | DEEP NEURAL NETWORKS TRAINING FOR SPEECH AND PATTERN RECOGNITION - The use of a pipelined algorithm that performs parallelized computations to train deep neural networks (DNNs) for performing data analysis may reduce training time. The DNNs may be one of context-independent DNNs or context-dependent DNNs. The training may include partitioning training data into sample batches of a specific batch size. The partitioning may be performed based on rates of data transfers between processors that execute the pipelined algorithm, considerations of accuracy and convergence, and the execution speed of each processor. Other techniques for training may include grouping layers of the DNNs for processing on a single processor, distributing a layer of the DNNs to multiple processors for processing, or modifying an execution order of steps in the pipelined algorithm. | 05-22-2014 |
20140278379 | INTEGRATION OF SEMANTIC CONTEXT INFORMATION - In one implementation, a computer-implemented method includes receiving, at a computer system, a request to predict a next word in a dialog being uttered by a speaker; accessing, by the computer system, a neural network comprising i) an input layer, ii) one or more hidden layers, and iii) an output layer; identifying the local context for the dialog of the speaker; selecting, by the computer system and using a semantic model, at least one vector that represents the semantic context for the dialog; applying input to the input layer of the neural network, the input comprising i) the local context of the dialog and ii) the values for the at least one vector; generating probability values for at least a portion of the candidate words; and providing, by the computer system and based on the probability values, information that identifies one or more of the candidate words. | 09-18-2014 |
20140358526 | METHODS AND APPARATUS FOR SIGNAL QUALITY ANALYSIS - A non-intrusive objective speech quality assessment is performed on a degraded speech signal. The methods are well suited for systems where random and bursty packet losses may occur and/or packet stream regeneration may occur prior to speech signal quality assessment. In one embodiment, received packetized speech is analyzed to determine an overall final signal quality score. A limited set of trained neural networks, e.g., 5, corresponding to different signal features, each determine a signal feature quality score. A trained joint quality score determination module determines a joint quality score based on the signal feature quality scores. Packet loss is estimated based on received packet header information and/or detected gap durations. The determined joint quality score is adjusted, based on estimated packet loss information obtained from examining the speech signal, network level statistics and/or codec parameters, to generate the final quality score. | 12-04-2014 |
20150039299 | CONTEXT-BASED SPEECH RECOGNITION - A processing system receives an audio signal encoding a portion of an utterance. The processing system receives context information associated with the utterance, wherein the context information is not derived from the audio signal or any other audio signal. The processing system provides, as input to a neural network, data corresponding to the audio signal and the context information, and generates a transcription for the utterance based on at least an output of the neural network. | 02-05-2015 |
20150127327 | CONTEXT-DEPENDENT STATE TYING USING A NEURAL NETWORK - The technology described herein can be embodied in a method that includes receiving an audio signal encoding a portion of an utterance, and providing, to a first neural network, data corresponding to the audio signal. The method also includes generating, by a processor, data representing a transcription for the utterance based on an output of the first neural network. The first neural network is trained using features of multiple context-dependent states, the context-dependent states being derived from a plurality of context-independent states provided by a second neural network. | 05-07-2015 |
20160055845 | GENERATING TRAINING DATA FOR DISAMBIGUATION - A method for generating training data for disambiguation of an entity comprising a word or word string related to a topic to be analyzed includes acquiring messages sent by users, each message including at least one entity in a set of entities; organizing the messages and acquiring sets, each containing the messages sent by one user; identifying a set of messages including a number of different entities greater than or equal to a first threshold value, and identifying the user corresponding to the identified set as a hot user; receiving an instruction indicating an object entity to be disambiguated; determining a likelihood of co-occurrence of each keyword and the object entity in the sets of messages sent by hot users; and determining training data for the object entity on the basis of the likelihood of co-occurrence of each keyword and the object entity in the sets of messages sent by the hot users. | 02-25-2016 |
20160078880 | Systems and Methods for Restoration of Speech Components - A method for restoring speech components of an audio signal distorted by noise reduction or noise cancellation includes determining distorted frequency regions and undistorted frequency regions in the audio signal. The distorted frequency regions include regions of the audio signal in which a speech distortion is present. Iterations are performed using a model to refine predictions of the audio signal at the distorted frequency regions. The model is configured to modify the audio signal and may include a deep neural network trained using spectral envelopes of clean or undamaged audio signals. Before each iteration, the audio signal at the undistorted frequency regions is restored to the values of the audio signal prior to the first iteration, while the audio signal at the distorted frequency regions is refined, starting from zero at the first iteration. Iterations end when discrepancies of the audio signal at the undistorted frequency regions meet pre-defined criteria. | 03-17-2016 |
20160086600 | Frame Skipping With Extrapolation and Outputs On Demand Neural Network For Automatic Speech Recognition - Techniques related to implementing neural networks for speech recognition systems are discussed. Such techniques may include implementing frame skipping with approximated skip frames and/or outputs on demand, such that only those outputs needed by a speech decoder are provided via the neural network or approximation techniques. | 03-24-2016 |
20160111108 | Method for Enhancing Audio Signal using Phase Information - A method transforms a noisy audio signal to an enhanced audio signal, by first acquiring the noisy audio signal from an environment. The noisy audio signal is processed by an enhancement network having network parameters to jointly produce a magnitude mask and a phase estimate. Then, the magnitude mask and the phase estimate are used to obtain the enhanced audio signal. | 04-21-2016 |
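None of the abstracts above include implementation detail, but the final entry (20160111108) describes a concrete reconstruction step: an enhancement network jointly produces a magnitude mask and a phase estimate, and the enhanced signal is obtained by combining the two. A minimal sketch of that combination step, assuming a spectrum represented as a list of complex frequency bins (the function name and list-based representation are illustrative, not from the patent):

```python
import cmath

def apply_mask_and_phase(noisy_spectrum, magnitude_mask, phase_estimate):
    """Rebuild each complex frequency bin from a masked magnitude and an
    estimated phase, as in mask-plus-phase enhancement schemes."""
    return [abs(x) * m * cmath.exp(1j * p)
            for x, m, p in zip(noisy_spectrum, magnitude_mask, phase_estimate)]
```

In practice the mask and phase would come from the trained enhancement network, and the resulting spectrum would be inverted back to a time-domain waveform; this sketch covers only the deterministic combination of the two network outputs.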