Patent application number | Description | Published |
---|---|---|
20130121495 | Sound Mixture Recognition - A sound mixture may be received that includes a plurality of sources. A model may be received that includes a dictionary of spectral basis vectors for the plurality of sources. A weight may be estimated for each of the plurality of sources in the sound mixture based on the model. In some examples, such weight estimation may be performed using a source separation technique without actually separating the sources. | 05-16-2013 |
20130121506 | Online Source Separation - Online source separation may include receiving a sound mixture that includes first audio data from a first source and second audio data from a second source. Online source separation may further include receiving pre-computed reference data corresponding to the first source. Online source separation may also include performing online separation of the second audio data from the first audio data based on the pre-computed reference data. | 05-16-2013 |
20130124200 | Noise-Robust Template Matching - Noise robust template matching may be performed. First features of a first signal may be computed. Based at least on a portion of the first features, second features of a second signal may be computed. A new signal may be generated based on at least another portion of the first features and on at least a portion of the second features. | 05-16-2013 |
20130124462 | Clustering and Synchronizing Content - Clustering and synchronizing content may include extracting audio features for each of a plurality of files that include audio content. The plurality of files may be clustered into one or more clusters. Clustering may include clustering based on a histogram that may be generated for each file pair of the plurality of files. Within each of the clusters, the files of the cluster may be time aligned. | 05-16-2013 |
20130132077 | Semi-Supervised Source Separation Using Non-Negative Techniques - Systems and methods for semi-supervised source separation using non-negative techniques are described. In some embodiments, various techniques disclosed herein may enable the separation of signals present within a mixture, where one or more of the signals may be emitted by one or more different sources. In audio-related applications, for instance, a signal mixture may include speech (e.g., from a human speaker) and noise (e.g., background noise). In some cases, speech may be separated from noise using a speech model developed from training data. A noise model may be created, for example, during the separation process (e.g., “on-the-fly”) and in the absence of corresponding training data. | 05-23-2013 |
20130132082 | Systems and Methods for Concurrent Signal Recognition - Methods and systems for recognition of concurrent, superimposed, or otherwise overlapping signals are described. A Markov Selection Model is introduced that, together with probabilistic decomposition methods, enables recognition of simultaneously emitted signals from various sources. For example, a signal mixture may include overlapping speech from different persons. In some instances, recognition may be performed without the need to separate signals or sources. As such, some of the techniques described herein may be useful in automatic transcription, noise reduction, teaching, electronic games, audio search and retrieval, medical and scientific applications, etc. | 05-23-2013 |
20130132085 | Systems and Methods for Non-Negative Hidden Markov Modeling of Signals - Methods and systems for non-negative hidden Markov modeling of signals are described. For example, techniques disclosed herein may be applied to signals emitted by one or more sources. In some embodiments, methods and systems may enable the separation of a signal's various components. As such, the systems and methods disclosed herein may find a wide variety of applications. In audio-related fields, for example, these techniques may be useful in music recording and processing, source extraction, noise reduction, teaching, automatic transcription, electronic games, audio search and retrieval, and many other applications. | 05-23-2013 |
20130226558 | Language Informed Source Separation - Methods and systems for non-negative hidden Markov modeling of signals are described. For example, techniques disclosed herein may be applied to signals emitted by one or more sources. The modeling may be constrained according to high-level information. In some embodiments, methods and systems may enable the separation of a signal's various components. As such, the systems and methods disclosed herein may find a wide variety of applications. In audio-related fields, for example, these techniques may be useful in music recording and processing, source separation/extraction, noise reduction, teaching, automatic transcription, electronic games, audio search and retrieval, and many other applications. | 08-29-2013 |
20130226858 | Feature Estimation in Sound Sources - A sound mixture may be received that includes a plurality of sources. A model may be received for one of the sources that includes a dictionary of spectral basis vectors corresponding to that source. At least one feature of that source in the sound mixture may be estimated based on the model. In some examples, the estimation may be constrained according to temporal data. | 08-29-2013 |
20140133675 | Time Interval Sound Alignment - Time interval sound alignment techniques are described. In one or more implementations, one or more inputs are received via interaction with a user interface that indicate that a first time interval in a first representation of sound data generated from a first sound signal corresponds to a second time interval in a second representation of sound data generated from a second sound signal. A stretch value is calculated based on an amount of time represented in the first and second time intervals, respectively. Aligned sound data is generated from the sound data for the first and second time intervals based on the calculated stretch value. | 05-15-2014 |
20140135962 | Sound Alignment using Timing Information - Sound alignment techniques that employ timing information are described. In one or more implementations, features and timing information of sound data generated from a first sound signal are identified and used to identify features of sound data generated from a second sound signal. The identified features may then be utilized to align portions of the sound data from the first and second sound signals to each other. | 05-15-2014 |
20140136976 | Sound Alignment User Interface - Sound alignment user interface techniques are described. In one or more implementations, a user interface is output having a first representation of sound data generated from a first sound signal and a second representation of sound data generated from a second sound signal. One or more inputs are received, via interaction with the user interface, that indicate that a first point in time in the first representation corresponds to a second point in time in the second representation. Aligned sound data is generated from the sound data from the first and second sound signals based at least in part on correspondence of the first point in time in the sound data generated from the first sound signal to the second point in time in the sound data generated from the second sound signal. | 05-15-2014 |
20140140517 | Sound Data Identification - Sound data identification techniques are described. In one or more implementations, common sound data and uncommon sound data are identified from a plurality of sound data from a plurality of recordings of an audio source using a collaborative technique. The identification may include recognition of spectral and temporal aspects of the plurality of the sound data from the plurality of the recordings and sharing of the recognized spectral and temporal aspects to identify the common sound data as common to the plurality of recordings and the uncommon sound data as not common to the plurality of recordings. | 05-22-2014 |
20140142947 | Sound Rate Modification - Sound rate modification techniques are described. In one or more implementations, an indication is received of an amount that a rate of output of sound data is to be modified. One or more sound rate rules are applied to the sound data that, along with the received indication, are usable to calculate different rates at which different portions of the sound data are to be modified, respectively. The sound data is then output such that the calculated rates are applied. | 05-22-2014 |
20140148933 | Sound Feature Priority Alignment - Sound feature priority alignment techniques are described. In one or more implementations, features of sound data are identified from a plurality of recordings. Values are calculated for frames of the sound data from the plurality of recordings. The values are based on similarity of the frames of the sound data from the plurality of recordings to each other, the similarity based on the identified features and a priority that is assigned based on the identified features of respective frames. The sound data from the plurality of recordings is then aligned based at least in part on the calculated values. | 05-29-2014 |
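Several of the abstracts above (sound mixture recognition, semi-supervised source separation, weight estimation) revolve around factoring a magnitude spectrogram against a dictionary of spectral basis vectors. The sketch below illustrates that family of techniques with standard KL-divergence non-negative matrix factorization: a speech dictionary is trained on clean data, then held fixed while a noise dictionary is learned "on the fly" from the mixture, and per-source weights are read off the models without rendering separated signals. This is a minimal illustration, not the patented methods themselves; the synthetic arrays standing in for STFT magnitudes, the dictionary sizes, iteration counts, and the `nmf` helper are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
EPS = 1e-9

def nmf(V, k=8, n_iter=150):
    """Learn a dictionary W of spectral basis vectors and activations H
    for a non-negative matrix V, via KL-divergence multiplicative updates."""
    F, T = V.shape
    W = rng.random((F, k)) + EPS
    H = rng.random((k, T)) + EPS
    for _ in range(n_iter):
        R = W @ H + EPS
        H *= (W.T @ (V / R)) / (W.T @ np.ones_like(V) + EPS)
        R = W @ H + EPS
        W *= ((V / R) @ H.T) / (np.ones_like(V) @ H.T + EPS)
        W /= W.sum(axis=0, keepdims=True)  # unit-norm basis vectors
    return W, H

# Synthetic magnitude "spectrograms" standing in for real STFT data.
F, T, k = 64, 100, 8
speech_train = np.abs(rng.normal(size=(F, T)))   # clean speech training data
mixture = np.abs(rng.normal(size=(F, T)))        # speech-plus-noise mixture

# 1) Supervised step: learn a speech dictionary from training data.
W_speech, _ = nmf(speech_train, k=k)

# 2) Semi-supervised step: keep W_speech fixed; learn a noise dictionary
#    "on the fly" from the mixture itself, with no noise training data.
W = np.hstack([W_speech, rng.random((F, k)) + EPS])
H = rng.random((2 * k, T)) + EPS
for _ in range(150):
    R = W @ H + EPS
    H *= (W.T @ (mixture / R)) / (W.T @ np.ones_like(mixture) + EPS)
    R = W @ H + EPS
    upd = W * (((mixture / R) @ H.T) / (np.ones_like(mixture) @ H.T + EPS))
    W[:, k:] = upd[:, k:]                        # update only noise columns
    W[:, k:] /= W[:, k:].sum(axis=0, keepdims=True)

# 3) Wiener-style mask separates a speech estimate from the mixture.
speech_model = W[:, :k] @ H[:k]
noise_model = W[:, k:] @ H[k:]
mask = speech_model / (speech_model + noise_model + EPS)
speech_est = mask * mixture

# 4) Per-source weights: the share of mixture energy each source's model
#    explains, available without actually rendering separated signals.
total = speech_model.sum() + noise_model.sum()
weights = np.array([speech_model.sum(), noise_model.sum()]) / total
```

In this formulation the "weight estimation without separation" of the first abstract falls out for free: once the activations `H` are fitted, the relative energy of each source's reconstruction is known before any masking or resynthesis happens.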
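The alignment-oriented abstracts (time interval sound alignment, sound alignment using timing information, feature priority alignment) all reduce to matching frames of one feature sequence against another. A classical way to do that, shown below, is dynamic time warping: compute a monotonic minimum-cost path between two sequences, then derive a stretch value from the ratio of the aligned interval lengths. This is a generic DTW sketch under simple assumptions (1-D features, squared-difference cost), not the specific priority- or timing-weighted methods the patents describe.

```python
import numpy as np

def dtw_path(a, b):
    """Return the minimum-cost monotonic alignment path between two
    1-D feature sequences, using squared difference as the frame cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end of both sequences to the start.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Two "recordings" of the same ramp, one played at half speed.
fast = np.linspace(0.0, 1.0, 10)
slow = np.linspace(0.0, 1.0, 20)
path = dtw_path(fast, slow)

# A stretch value in the sense of the time-interval abstract: the ratio of
# the two interval lengths says how much one signal must be time-stretched
# to line up with the other.
stretch = len(slow) / len(fast)
```

Real systems would use spectrogram-frame features rather than scalars, and the priority-alignment abstract suggests weighting the frame cost by feature priority; that would slot into the `cost` term above.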