Patent application number | Description | Published |
--- | --- | --- |
20080229911 | WAVEFORM FETCH UNIT FOR PROCESSING AUDIO FILES - This disclosure describes techniques that make use of a waveform fetch unit that operates to retrieve waveform samples on behalf of each of a plurality of hardware processing elements that operate simultaneously to service various audio synthesis parameters generated from one or more audio files, such as musical instrument digital interface (MIDI) files. In one example, a method comprises receiving a request for a waveform sample from an audio processing element, and servicing the request by calculating a waveform sample number for the requested waveform sample based on a phase increment contained in the request and an audio synthesis parameter control word associated with the requested waveform sample, retrieving the waveform sample from a local cache using the waveform sample number, and sending the retrieved waveform sample to the requesting audio processing element. | 09-25-2008 |
20080229918 | PIPELINE TECHNIQUES FOR PROCESSING MUSICAL INSTRUMENT DIGITAL INTERFACE (MIDI) FILES - This disclosure describes techniques for processing audio files that comply with the musical instrument digital interface (MIDI) format. In particular, various tasks associated with MIDI file processing are delegated between software operating on a general purpose processor, firmware associated with a digital signal processor (DSP), and dedicated hardware that is specifically designed for MIDI file processing. Alternatively, a multi-threaded DSP may be used instead of a general purpose processor and the DSP. In one aspect, this disclosure provides a method comprising parsing MIDI files and scheduling MIDI events associated with the MIDI files using a first process, processing the MIDI events using a second process to generate MIDI synthesis parameters, and generating audio samples using a hardware unit based on the synthesis parameters. | 09-25-2008 |
20080229919 | AUDIO PROCESSING HARDWARE ELEMENTS - This disclosure describes techniques that make use of a plurality of hardware elements that operate simultaneously to service synthesis parameters generated from one or more audio files, such as musical instrument digital interface (MIDI) files. In one example, a method comprises storing audio synthesis parameters generated for one or more audio files of an audio frame, processing a first audio synthesis parameter using a first audio processing element of a hardware unit to generate first audio information, processing a second audio synthesis parameter using a second audio processing element of the hardware unit to generate second audio information, and generating audio samples for the audio frame based at least in part on a combination of the first and second audio information. | 09-25-2008 |
20080269926 | AUTOMATIC VOLUME AND DYNAMIC RANGE ADJUSTMENT FOR MOBILE AUDIO DEVICES - A mobile audio device (for example, a cellular telephone, personal digital audio player, or MP3 player) performs Audio Dynamic Range Control (ADRC) and Automatic Volume Control (AVC) to increase the volume of sound emitted from a speaker of the mobile audio device so that faint passages of the audio will be more audible. This amplification of faint passages occurs without overly amplifying other louder passages, and without substantial distortion due to clipping. Multi-Microphone Active Noise Cancellation (MMANC) functionality is, for example, used to remove background noise from audio information picked up on microphones of the mobile audio device. The noise-canceled audio may then be communicated from the device. The MMANC functionality generates a noise reference signal as an intermediate signal. The intermediate signal is conditioned and then used as a reference by the AVC process. The gain applied during the AVC process is a function of the noise reference signal. | 10-30-2008 |
20090024397 | UNIFIED FILTER BANK FOR PERFORMING SIGNAL CONVERSIONS - A unified filter bank for performing signal conversions may include an interface that receives signal conversion commands in relation to multiple types of compressed audio bitstreams. The unified filter bank may also include a reconfigurable transform component that performs a transform as part of signal conversion for the multiple types of compressed audio bitstreams. The unified filter bank may also include complementary modules that perform complementary processing as part of the signal conversion for the multiple types of compressed audio bitstreams. The unified filter bank may also include an interface command controller that controls the configuration of the reconfigurable transform component and the complementary modules. | 01-22-2009 |
20090089053 | MULTIPLE MICROPHONE VOICE ACTIVITY DETECTOR - Voice activity detection using multiple microphones can be based on a relationship between an energy at each of a speech reference microphone and a noise reference microphone. The energy output from each of the speech reference microphone and the noise reference microphone can be determined. A speech to noise energy ratio can be determined and compared to a predetermined voice activity threshold. In another embodiment, the absolute values of the autocorrelations of the speech and noise reference signals are determined, and a ratio based on the autocorrelation values is computed. Ratios that exceed the predetermined threshold can indicate the presence of a voice signal. The speech and noise energies or autocorrelations can be determined using a weighted average or over a discrete frame size. | 04-02-2009 |
20090089054 | APPARATUS AND METHOD OF NOISE AND ECHO REDUCTION IN MULTIPLE MICROPHONE AUDIO SYSTEMS - Multiple microphone noise suppression apparatus and methods are described herein. The apparatus and methods implement a variety of noise suppression techniques and apparatus that can be selectively applied to signals received using multiple microphones. The microphone signals received at each of the multiple microphones can be independently processed to cancel echo signal components that can be generated from a local audio source. The echo cancelled signals may be processed by some or all modules within a signal separator that operates to separate or otherwise isolate a speech signal from noise signals. The signal separator can include a pre-processing de-correlator followed by a blind source separator. The output of the blind source separator can be post filtered to provide post separation de-correlation. The separated speech and noise signals can be non-linearly processed for further noise reduction, and additional post processing can be implemented following the non-linear processing. | 04-02-2009 |
20090135976 | RESOLVING BUFFER UNDERFLOW/OVERFLOW IN A DIGITAL SYSTEM - In a digital system with more than one clock source, lack of synchronization between the clock sources may cause overflow or underflow in sample buffers, also called sample slipping. Sample slipping may lead to undesirable artifacts in the processed signal due to discontinuities introduced by the addition or removal of extra samples. To smooth out discontinuities caused by sample slipping, samples are filtered when a buffer overflow condition occurs, and the samples are interpolated to produce additional samples when a buffer underflow condition occurs. The interpolated samples may also be filtered. The filtering and interpolation operations can be readily implemented without adding significant burden to the computational complexity of a real-time digital system. | 05-28-2009 |
20090136044 | METHODS AND APPARATUS FOR PROVIDING A DISTINCT PERCEPTUAL LOCATION FOR AN AUDIO SOURCE WITHIN AN AUDIO MIXTURE - In accordance with a method for providing a distinct perceptual location for an audio source within an audio mixture, a foreground signal may be processed to provide a foreground perceptual angle for the foreground signal. The foreground signal may also be processed to provide a desired attenuation level for the foreground signal. A background signal may be processed to provide a background perceptual angle for the background signal. The background signal may also be processed to provide a desired attenuation level for the background signal. The foreground signal and the background signal may be combined into an output audio source. | 05-28-2009 |
20090136063 | METHODS AND APPARATUS FOR PROVIDING AN INTERFACE TO A PROCESSING ENGINE THAT UTILIZES INTELLIGENT AUDIO MIXING TECHNIQUES - A method for providing an interface to a processing engine that utilizes intelligent audio mixing techniques may include receiving a request to change a perceptual location of an audio source within an audio mixture from a current perceptual location relative to a listener to a new perceptual location relative to the listener. The audio mixture may include at least two audio sources. The method may also include generating one or more control signals that are configured to cause the processing engine to change the perceptual location of the audio source from the current perceptual location to the new perceptual location via separate foreground processing and background processing. The method may also include providing the one or more control signals to the processing engine. | 05-28-2009 |
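The automatic volume and dynamic range adjustment described in application 20080269926 can be sketched as a per-frame gain rule: boost quiet frames toward a target level, never attenuate ordinary loud frames below unity, and cap the gain so the frame peak cannot clip. The gain function and the `target_rms`, `max_gain`, and `peak_limit` parameters below are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def adrc_gain(frame, target_rms=0.25, max_gain=8.0, peak_limit=0.99):
    """Per-frame gain: raise faint frames toward target_rms, capped at
    max_gain, and limited so the amplified peak never exceeds peak_limit."""
    rms = np.sqrt(np.mean(np.square(frame))) + 1e-12
    # Boost quiet material; never drop a normal frame below unity gain.
    gain = max(1.0, min(target_rms / rms, max_gain))
    # Anti-clipping cap: the scaled peak must stay inside [-peak_limit, peak_limit].
    peak = np.max(np.abs(frame)) + 1e-12
    return min(gain, peak_limit / peak)
```

A quiet frame (RMS 0.01) receives the full `max_gain` boost, while a frame already near full scale passes through at unity gain, matching the abstract's goal of amplifying faint passages without overly amplifying louder ones or clipping.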
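The energy-ratio form of voice activity detection in application 20090089053 reduces to a compact test: compare the frame energy at the speech reference microphone to the frame energy at the noise reference microphone against a threshold. This is a minimal sketch of that ratio test; the threshold value and the use of plain mean-square energy (rather than a weighted average or the autocorrelation variant the abstract also mentions) are assumptions.

```python
import numpy as np

def frame_energy(frame):
    """Mean squared amplitude over one frame."""
    return float(np.mean(np.square(frame, dtype=np.float64)))

def is_voice_active(speech_frame, noise_frame, threshold=2.0, eps=1e-12):
    """True when the speech-to-noise energy ratio exceeds the threshold."""
    ratio = frame_energy(speech_frame) / (frame_energy(noise_frame) + eps)
    return ratio > threshold
```

With a strong signal on the speech reference microphone the ratio is large and voice is declared; when the speech reference carries only residual noise the ratio stays below the threshold.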
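The sample-slip handling in application 20090135976 interpolates to produce an extra sample on buffer underflow and removes a sample on overflow while smoothing the resulting discontinuity. A minimal sketch using linear interpolation for both directions follows; the choice of `np.interp` (rather than a specific filter design) is an assumption, not the patented method.

```python
import numpy as np

def stretch_one(samples):
    """Underflow: resample N samples onto N+1 points by linear
    interpolation, producing one extra sample without a discontinuity."""
    n = len(samples)
    new_x = np.linspace(0, n - 1, n + 1)
    return np.interp(new_x, np.arange(n), samples)

def shrink_one(samples):
    """Overflow: resample N samples onto N-1 points, which removes one
    sample while smoothing the splice instead of dropping a raw sample."""
    n = len(samples)
    new_x = np.linspace(0, n - 1, n - 1)
    return np.interp(new_x, np.arange(n), samples)
```

Both operations preserve the frame endpoints, so consecutive frames still join continuously, and each costs only a linear pass over the frame, consistent with the abstract's claim of low computational burden.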
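The perceptual-location mixing in applications 20090136044 and 20090136063 assigns each source a perceptual angle and an attenuation level before summing. One conventional way to realize an angle is constant-power stereo panning; the sketch below uses that technique with an angle range of -45° (full left) to +45° (full right), which is an illustrative assumption rather than the disclosed processing engine.

```python
import numpy as np

def pan(signal, angle_deg, attenuation_db=0.0):
    """Place a mono signal at a perceptual angle using constant-power
    panning, with an overall attenuation in dB. Returns a (2, N) array."""
    # Map [-45, +45] degrees onto [0, pi/2] for the cos/sin pan law.
    t = (angle_deg + 45.0) / 90.0 * (np.pi / 2.0)
    gain = 10.0 ** (-attenuation_db / 20.0)
    left = np.cos(t) * gain * signal
    right = np.sin(t) * gain * signal
    return np.stack([left, right])

def mix(fg, fg_angle, fg_att, bg, bg_angle, bg_att):
    """Combine a foreground and a background source, each at its own
    perceptual angle and attenuation, into one stereo mixture."""
    return pan(fg, fg_angle, fg_att) + pan(bg, bg_angle, bg_att)
```

Because the pan law is constant-power (cos² + sin² = 1), moving a source between angles changes its apparent location without changing its loudness, and separate foreground/background attenuation gives the depth cue the abstracts describe.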