Patent application number | Description | Published |
--- | --- | --- |
20120101826 | DECOMPOSITION OF MUSIC SIGNALS USING BASIS FUNCTIONS WITH TIME-EVOLUTION INFORMATION - Decomposition of a multi-source signal using a basis function inventory and a sparse recovery technique is disclosed. | 04-26-2012 |
20120128160 | THREE-DIMENSIONAL SOUND CAPTURING AND REPRODUCING WITH MULTI-MICROPHONES - Systems, methods, apparatus, and machine-readable media for three-dimensional sound recording and reproduction using a multi-microphone setup are described. | 05-24-2012 |
20120128165 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR DECOMPOSITION OF A MULTICHANNEL MUSIC SIGNAL - Decomposition of a multichannel signal using direction-of-arrival estimation, a basis function inventory, and a sparse recovery technique is disclosed. | 05-24-2012 |
20120128166 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR HEAD TRACKING BASED ON RECORDED SOUND SIGNALS - Systems, methods, apparatus, and machine-readable media for detecting head movement based on recorded sound signals are described. | 05-24-2012 |
20120128175 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR ORIENTATION-SENSITIVE RECORDING CONTROL - Systems, methods, apparatus, and machine-readable media for orientation-sensitive selection and/or preservation of a recording direction using a multi-microphone setup are described. | 05-24-2012 |
20120294446 | BLIND SOURCE SEPARATION BASED SPATIAL FILTERING - A method for blind source separation based spatial filtering on an electronic device includes obtaining a first source audio signal and a second source audio signal. The method also includes applying a blind source separation filter set to the first source audio signal and to the second source audio signal to produce a spatially filtered first audio signal and a spatially filtered second audio signal. The method further includes playing the spatially filtered first audio signal over a first speaker to produce an acoustic spatially filtered first audio signal and playing the spatially filtered second audio signal over a second speaker to produce an acoustic spatially filtered second audio signal. The acoustic spatially filtered first audio signal and the acoustic spatially filtered second audio signal produce an isolated acoustic first source audio signal at a first position and an isolated acoustic second source audio signal at a second position. | 11-22-2012 |
20130156198 | AUTOMATED USER/SENSOR LOCATION RECOGNITION TO CUSTOMIZE AUDIO PERFORMANCE IN A DISTRIBUTED MULTI-SENSOR ENVIRONMENT - A wireless device is provided that makes use of other nearby audio transducer devices to generate a surround sound effect for a targeted user. To do this, the wireless device first ascertains whether there are any nearby external microphones and/or loudspeaker devices. An internal microphone for the wireless device and any other nearby external microphones may be used to ascertain a location of the desired/targeted user as well as the nearby loudspeaker devices. This information is then used to generate a surround sound effect for the desired/targeted user by having the wireless device steer audio signals to its internal loudspeakers and/or the nearby external loudspeaker devices. | 06-20-2013 |
20130259254 | SYSTEMS, METHODS, AND APPARATUS FOR PRODUCING A DIRECTIONAL SOUND FIELD - A system may be used to drive an array of loudspeakers to produce a sound field that includes a source component, whose energy is concentrated along a first direction relative to the array, and a masking component that is based on an estimated intensity of the source component in a second direction that is different from the first direction. | 10-03-2013 |
20130272097 | SYSTEMS, METHODS, AND APPARATUS FOR ESTIMATING DIRECTION OF ARRIVAL - Systems, methods, and apparatus for matching pair-wise differences (e.g., phase delay measurements) to an inventory of source direction candidates, and application of pair-wise source direction estimates, are described. | 10-17-2013 |
20130272538 | SYSTEMS, METHODS, AND APPARATUS FOR INDICATING DIRECTION OF ARRIVAL - Systems, methods, and apparatus for projecting an estimated direction of arrival of sound onto a plane that does not include the estimated direction are described. | 10-17-2013 |
20130272539 | SYSTEMS, METHODS, AND APPARATUS FOR SPATIALLY DIRECTIVE FILTERING - Systems, methods, and apparatus are described for applying, based on angles of arrival of source components relative to the axes of different microphone pairs, a spatially directive filter to a multichannel audio signal to produce an output signal. | 10-17-2013 |
20130272548 | OBJECT RECOGNITION USING MULTI-MODAL MATCHING SCHEME - Methods, systems, and articles of manufacture for recognizing and locating one or more objects in a scene are disclosed. An image and/or video of the scene are captured. Using audio recorded at the scene, an object search of the captured scene is narrowed down. For example, the direction of arrival (DOA) of a sound can be determined and used to limit the search area in a captured image/video. In another example, keypoint signatures may be selected based on types of sounds identified in the recorded audio. A keypoint signature corresponds to a particular object that the system is configured to recognize. Objects in the scene may then be recognized using a scale-invariant feature transform (SIFT) analysis comparing keypoints identified in the captured scene to the selected keypoint signatures. | 10-17-2013 |
20130275077 | SYSTEMS AND METHODS FOR MAPPING A SOURCE LOCATION - A method for mapping a source location by an electronic device is described. The method includes obtaining sensor data. The method also includes mapping a source location to electronic device coordinates based on the sensor data. The method further includes mapping the source location from electronic device coordinates to physical coordinates. The method additionally includes performing an operation based on a mapping. | 10-17-2013 |
20130275872 | SYSTEMS AND METHODS FOR DISPLAYING A USER INTERFACE - A method for displaying a user interface on an electronic device is described. The method includes presenting a user interface. The user interface includes a coordinate system. The coordinate system corresponds to physical coordinates based on sensor data. The method also includes providing a sector selection feature that allows selection of at least one sector of the coordinate system. The method further includes providing a sector editing feature that allows editing the at least one sector. | 10-17-2013 |
20130275873 | SYSTEMS AND METHODS FOR DISPLAYING A USER INTERFACE - A method for displaying a user interface on an electronic device is described. The method includes presenting a user interface. The user interface includes a coordinate system. The coordinate system corresponds to physical coordinates based on sensor data. The method also includes displaying at least a target audio signal and an interfering audio signal on the user interface. | 10-17-2013 |
20130282369 | SYSTEMS AND METHODS FOR AUDIO SIGNAL PROCESSING - A method for signal level matching by an electronic device is described. The method includes capturing a plurality of audio signals from a plurality of microphones. The method also includes determining a difference signal based on an inter-microphone subtraction. The difference signal includes multiple harmonics. The method also includes determining whether a harmonicity of the difference signal exceeds a harmonicity threshold. The method also includes preserving the harmonics to determine an envelope. The method further includes applying the envelope to a noise-suppressed signal. | 10-24-2013 |
20130282372 | SYSTEMS AND METHODS FOR AUDIO SIGNAL PROCESSING - A method for detecting voice activity by an electronic device is described. The method includes detecting near end speech based on a near end voiced speech detector and at least one single channel voice activity detector. The near end voiced speech detector is associated with a harmonic statistic based on a speech pitch histogram. | 10-24-2013 |
20130282373 | SYSTEMS AND METHODS FOR AUDIO SIGNAL PROCESSING - A method for restoring a processed speech signal by an electronic device is described. The method includes obtaining at least one audio signal. The method also includes performing bin-wise voice activity detection based on the at least one audio signal. The method further includes restoring the processed speech signal based on the bin-wise voice activity detection. | 10-24-2013 |
20130301837 | Audio User Interaction Recognition and Context Refinement - A system which tracks a social interaction between a plurality of participants, includes a fixed beamformer that is adapted to output a first spatially filtered output and configured to receive a plurality of second spatially filtered outputs from a plurality of steerable beamformers. Each steerable beamformer outputs a respective one of the second spatially filtered outputs associated with a different one of the participants. The system also includes a processor capable of determining a similarity between the first spatially filtered output and each of the second spatially filtered outputs. The processor determines the social interaction between the participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs. | 11-14-2013 |
20130304476 | Audio User Interaction Recognition and Context Refinement - A system which performs social interaction analysis for a plurality of participants includes a processor. The processor is configured to determine a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs. The processor is configured to determine the social interaction between the participants based on the similarities between the first spatially filtered output and each of the second spatially filtered outputs and display an output that is representative of the social interaction between the participants. The first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant. | 11-14-2013 |
20130315402 | THREE-DIMENSIONAL SOUND COMPRESSION AND OVER-THE-AIR TRANSMISSION DURING A CALL - A method for encoding multiple directional audio signals using an integrated codec by a wireless communication device is disclosed. The wireless communication device records a plurality of directional audio signals. The wireless communication device also generates a plurality of audio signal packets based on the plurality of directional audio signals. At least one of the audio signal packets includes an averaged signal. The wireless communication device further transmits the plurality of audio signal packets. | 11-28-2013 |
20130317830 | THREE-DIMENSIONAL SOUND COMPRESSION AND OVER-THE-AIR TRANSMISSION DURING A CALL - A method for encoding three-dimensional audio by a wireless communication device is disclosed. The wireless communication device detects an indication of a plurality of localizable audio sources. The wireless communication device also records a plurality of audio signals associated with the plurality of localizable audio sources. The wireless communication device also encodes the plurality of audio signals. | 11-28-2013 |
20130339011 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR PITCH TRAJECTORY ANALYSIS - Systems, methods, and apparatus for pitch trajectory analysis are described. Such techniques may be used to remove vocals and/or vibrato from an audio mixture signal. For example, such a technique may be used to pre-process the signal before an operation to decompose the mixture signal into individual instrument components. | 12-19-2013 |
20140003611 | SYSTEMS AND METHODS FOR SURROUND SOUND ECHO REDUCTION | 01-02-2014 |
20140003635 | AUDIO SIGNAL PROCESSING DEVICE CALIBRATION | 01-02-2014 |
20140146970 | COLLABORATIVE SOUND SYSTEM - In general, techniques are described for forming a collaborative sound system. A headend device comprising one or more processors may perform the techniques. The processors may be configured to identify mobile devices that each includes a speaker and that are available to participate in a collaborative surround sound system. The processors may configure the collaborative surround sound system to utilize the speaker of each of the mobile devices as one or more virtual speakers of this system and then render audio signals from an audio source such that when the audio signals are played by the speakers of the mobile devices the audio playback of the audio signals appears to originate from the one or more virtual speakers of the collaborative surround sound system. The processors may then transmit the rendered audio signals to the mobile devices participating in the collaborative surround sound system. | 05-29-2014 |
20140146983 | IMAGE GENERATION FOR COLLABORATIVE SOUND SYSTEMS - In general, techniques are described for image generation for a collaborative sound system. A headend device comprising a processor may perform these techniques. The processor may be configured to determine a location of a mobile device participating in a collaborative surround sound system as a speaker of a plurality of speakers of the collaborative surround sound system. The processor may further be configured to generate an image that depicts the location of the mobile device that is participating in the collaborative surround sound system relative to the plurality of other speakers of the collaborative surround sound system. | 05-29-2014 |
20140146984 | CONSTRAINED DYNAMIC AMPLITUDE PANNING IN COLLABORATIVE SOUND SYSTEMS - In general, techniques are described for performing constrained dynamic amplitude panning in collaborative sound systems. A headend device comprising one or more processors may perform the techniques. The processors may be configured to identify, for a mobile device participating in a collaborative surround sound system, a specified location of a virtual speaker of the collaborative surround sound system and determine a constraint that impacts playback of audio signals rendered from an audio source by the mobile device. The processors may be further configured to perform dynamic spatial rendering of the audio source with the determined constraint to render audio signals that reduce the impact of the determined constraint during playback of the audio signals by the mobile device. | 05-29-2014 |
20140233725 | PERSONALIZED BANDWIDTH EXTENSION - A personalized (i.e., speaker-derivable) bandwidth extension is provided in which the model used for bandwidth extension is personalized (e.g., tailored) to each specific user. A training phase is performed to generate a bandwidth extension model that is personalized to a user. The model may be subsequently used in a bandwidth extension phase during a phone call involving the user. The bandwidth extension phase, using the personalized bandwidth extension model, will be activated when a higher band (e.g., wideband) is not available and the call is taking place on a lower band (e.g., narrowband). | 08-21-2014 |
20140254816 | CONTENT BASED NOISE SUPPRESSION - Apparatus and methods for audio noise attenuation are disclosed. An audio signal analyzer can determine whether an input audio signal received from a microphone device includes a noise signal having identifiable content. If there is a noise signal having identifiable content, a content source is accessed to obtain a copy of the noise signal. An audio canceller can generate a processed audio signal, having an attenuated noise signal, based on comparing the copy of the noise signal to the input audio signal. Additionally or alternatively, data may be communicated on a communication channel to a separate media device to receive at least a portion of the copy of the noise signal from the separate media device, or to receive content-identification data corresponding to the content source. | 09-11-2014 |
20140328490 | MULTI-CHANNEL ECHO CANCELLATION AND NOISE SUPPRESSION - A method for multi-channel echo cancellation and noise suppression is described. One of multiple echo estimates is selected for non-linear echo cancellation. Echo notch masking is performed on a noise-suppressed signal based on an echo direction of arrival (DOA) to produce an echo-suppressed signal. Non-linear echo cancellation is performed on the echo-suppressed signal based, at least in part, on the selected echo estimate. | 11-06-2014 |
20140337021 | SYSTEMS AND METHODS FOR NOISE CHARACTERISTIC DEPENDENT SPEECH ENHANCEMENT - A method for noise characteristic dependent speech enhancement by an electronic device is described. The method includes determining a noise characteristic of input audio. Determining a noise characteristic of input audio includes determining whether noise is stationary noise and determining whether the noise is music noise. The method also includes determining a noise reference based on the noise characteristic. Determining the noise reference includes excluding a spatial noise reference from the noise reference when the noise is stationary noise and including the spatial noise reference in the noise reference when the noise is not music noise and is not stationary noise. The method further includes performing noise suppression based on the noise characteristic. | 11-13-2014 |
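Several of the applications above (e.g., 20130272097 and 20130272539) rest on estimating a direction of arrival from inter-microphone delays. As a rough illustration of the underlying geometry only, and not of any claimed method, here is a far-field, free-field, two-microphone sketch in Python; the function names and the brute-force correlation search are assumptions for illustration:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 C

def tdoa_by_cross_correlation(x, y, fs):
    """Time difference of arrival (seconds) between two microphone
    signals: the lag (in samples) that maximizes their cross-correlation,
    divided by the sample rate.  Positive means y is delayed relative
    to x.  Brute force for clarity, not efficiency."""
    n = len(x)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        acc = sum(y[i] * x[i - lag] for i in range(n) if 0 <= i - lag < n)
        if acc > best_val:
            best_val, best_lag = acc, lag
    return best_lag / fs

def doa_from_tdoa(tdoa_s, mic_spacing_m):
    """Map a TDOA to an angle off broadside (degrees) for one microphone
    pair under the far-field assumption: sin(theta) = c * tdoa / d."""
    s = SPEED_OF_SOUND * tdoa_s / mic_spacing_m
    s = max(-1.0, min(1.0, s))  # clamp floating-point overshoot
    return math.degrees(math.asin(s))
```

With impulse-like signals offset by three samples at 48 kHz and a 10 cm spacing, this gives a 62.5 µs TDOA and an angle of roughly 12 degrees. Practical systems, as the abstracts describe, work per frequency bin and match or fuse estimates across multiple microphone pairs.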
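Application 20140146984 concerns constrained dynamic amplitude panning across collaborative speakers. The textbook building block behind amplitude panning is the constant-power pan law, sketched below; this is a generic two-channel illustration with names of my choosing, not the constrained multi-device rendering the application claims:

```python
import math

def constant_power_pan(sample, pan):
    """Constant-power stereo panning.  pan is in [-1, 1]: -1 is full
    left, +1 is full right.  Left/right gains sweep cos/sin over a
    quarter circle, so summed power (l^2 + r^2) stays constant at
    every pan position, avoiding the center dip of linear panning."""
    theta = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] to [0, pi/2]
    return sample * math.cos(theta), sample * math.sin(theta)
```

At center pan both gains are about 0.707, and at the extremes one channel carries the full signal; vector-base amplitude panning generalizes the same idea to arbitrary speaker layouts.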
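Applications 20130282369 and 20130282372 both hinge on a harmonicity measure crossing a threshold to separate voiced speech from noise. One common way to build such a statistic, offered here purely as a generic sketch and not as the patented detector, is the peak of the normalized autocorrelation over a pitch-lag range:

```python
def harmonicity(frame, min_lag, max_lag):
    """Peak normalized autocorrelation over a pitch-lag search range.
    Approaches 1.0 for strongly periodic (voiced/harmonic) frames and
    stays near 0 for noise-like frames.  Normalizing by the full-frame
    energy makes this a biased but simple estimate."""
    energy = sum(s * s for s in frame)
    if energy == 0.0:
        return 0.0
    n = len(frame)
    best = 0.0
    for lag in range(min_lag, max_lag + 1):
        r = sum(frame[i] * frame[i - lag] for i in range(lag, n))
        best = max(best, r / energy)
    return best
```

A detector would compare this value against a tuned threshold; a pure sine of period 20 samples scores close to 0.9 on a 200-sample frame, while broadband noise scores far lower.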