Patent application number | Description | Published |
--- | --- | --- |
20130251177 | Techniques for Localized Perceptual Audio - Audio perception in local proximity to visual cues is provided. A device includes a video display, first row of audio transducers, and second row of audio transducers. The first and second rows can be vertically disposed above and below the video display. An audio transducer of the first row and an audio transducer of the second row form a column to produce, in concert, an audible signal. The perceived emanation of the audible signal is from a plane of the video display (e.g., a location of a visual cue) by weighing outputs of the audio transducers of the column. In certain embodiments, the audio transducers are spaced farther apart at a periphery for increased fidelity in a center portion of the plane and less fidelity at the periphery. | 09-26-2013 |
20140037117 | METHOD AND SYSTEM FOR UPMIXING AUDIO TO GENERATE 3D AUDIO - In some embodiments, a method for upmixing input audio comprising N full range channels to generate | 02-06-2014 |
20140133682 | UPMIXING OBJECT BASED AUDIO - In some embodiments, a method for rendering an object-based audio program indicative of a trajectory of an audio source, including generating speaker feeds for driving loudspeakers to emit sound intended to be perceived as emanating from the source, but with the source having a different trajectory than that indicated by the program. In other embodiments, a method for modifying (upmixing) an object-based audio program indicative of a trajectory of an audio object within a subspace of a full volume, to determine a modified program indicative of a modified trajectory of the object such that at least a portion of the modified trajectory is outside the subspace. Other aspects include a system configured to perform, and a computer readable medium which stores code for implementing, any embodiment of the inventive method. | 05-15-2014 |
20140133683 | System and Method for Adaptive Audio Signal Generation, Coding and Rendering - Embodiments are described for an adaptive audio system that processes audio data comprising a number of independent monophonic audio streams. One or more of the streams has associated with it metadata that specifies whether the stream is a channel-based or object-based stream. Channel-based streams have rendering information encoded by means of channel name, while object-based streams have location information encoded through location expressions in the associated metadata. A codec packages the independent audio streams into a single serial bitstream that contains all of the audio data. This configuration allows for the sound to be rendered according to an allocentric frame of reference, in which the rendering location of a sound is based on the characteristics of the playback environment (e.g., room size, shape, etc.) to correspond to the mixer's intent. The object position metadata contains the appropriate allocentric frame of reference information required to play the sound correctly using the available speaker positions in a room that is set up to play the adaptive audio content. | 05-15-2014 |
20140240610 | TECHNIQUES FOR LOCALIZED PERCEPTUAL AUDIO - Audio perception in local proximity to visual cues is provided. A device includes a video display, first row of audio transducers, and second row of audio transducers. The first and second rows can be vertically disposed above and below the video display. An audio transducer of the first row and an audio transducer of the second row form a column to produce, in concert, an audible signal. The perceived emanation of the audible signal is from a plane of the video display (e.g., a location of a visual cue) by weighing outputs of the audio transducers of the column. In certain embodiments, the audio transducers are spaced farther apart at a periphery for increased fidelity in a center portion of the plane and less fidelity at the periphery. | 08-28-2014 |
20150146873 | Rendering and Playback of Spatial Audio Using Channel-Based Audio Systems - Embodiments are described for a method and system of rendering and playing back spatial audio content using a channel-based format. Spatial audio content that is played back through legacy channel-based equipment is transformed into the appropriate channel-based format, resulting in the loss of certain positional information within the audio objects and positional metadata comprising the spatial audio content. To retain this information for use in spatial audio equipment even after the audio content is rendered as channel-based audio, certain metadata generated by the spatial audio processor is incorporated into the channel-based data. The channel-based audio can then be sent to a channel-based audio decoder or a spatial audio decoder. The spatial audio decoder processes the metadata to recover at least some positional information that was lost during the down-mix operation by upmixing the channel-based audio content back to the spatial audio content for optimal playback in a spatial audio environment. | 05-28-2015 |
20150223002 | System for Rendering and Playback of Object Based Audio in Various Listening Environments - Embodiments are described for a system of rendering object-based audio content through a system that includes individually addressable drivers, including at least one driver that is configured to project sound waves toward one or more surfaces within a listening environment for reflection to a listening area within the listening environment; a renderer configured to receive and process audio streams and one or more metadata sets associated with each of the audio streams and specifying a playback location of a respective audio stream; and a playback system coupled to the renderer and configured to render the audio streams to a plurality of audio feeds corresponding to the array of audio drivers in accordance with the one or more metadata sets. | 08-06-2015 |
20150304791 | VIRTUAL HEIGHT FILTER FOR REFLECTED SOUND RENDERING USING UPWARD FIRING DRIVERS - Embodiments are directed to speakers and circuits that reflect sound off a ceiling to a listening location at a distance from a speaker. The reflected sound provides height cues to reproduce audio objects that have overhead audio components. The speaker comprises upward firing drivers to reflect sound off the upper surface and represents a virtual height speaker. A virtual height filter based on a directional hearing model is applied to the upward-firing driver signal to improve the perception of height for audio signals transmitted by the virtual height speaker to provide optimum reproduction of the overhead reflected sound. The virtual height filter may be incorporated as part of a crossover circuit that separates the full band and sends high-frequency sound to the upward-firing driver. | 10-22-2015 |
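The column-weighting technique in 20130251177 and 20140240610 can be sketched as a vertical amplitude pan between the two transducer rows. A constant-power pan law is assumed here purely for illustration; the abstracts do not specify the weighting curve, and the function and parameter names are hypothetical:

```python
import math

def column_gains(y, display_height):
    """Output weights (g_top, g_bottom) for one transducer column.

    y is the desired perceived source height on the display plane
    (0 = bottom edge, display_height = top edge). Constant-power
    panning is an assumed illustration, not taken from the patents.
    """
    p = max(0.0, min(1.0, y / display_height))  # clamp to display plane
    theta = p * math.pi / 2.0
    return math.sin(theta), math.cos(theta)     # g_top, g_bottom

# A visual cue at mid-height weights both transducers equally.
g_top, g_bottom = column_gains(0.5, 1.0)
```

Driving a column's upper and lower transducers with these gains keeps total power constant while moving the perceived emanation point up or down the display plane, which matches the abstracts' idea of localizing sound to a visual cue by weighing the column's outputs.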
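The crossover mentioned in 20150304791 separates the full band so that only high-frequency sound reaches the upward-firing driver. A minimal complementary one-pole band split is sketched below; the patent's virtual height filter (the directional-hearing model itself) is not reproduced here, and `alpha` is an assumed smoothing coefficient chosen only for demonstration:

```python
def split_bands(samples, alpha=0.1):
    """Complementary one-pole low/high band split.

    Returns (low, high) such that low[i] + high[i] == samples[i];
    the high band would feed the upward-firing driver. alpha sets
    the crossover point and is an illustrative choice.
    """
    low, high = [], []
    y = 0.0
    for x in samples:
        y += alpha * (x - y)      # one-pole low-pass filter state
        low.append(y)
        high.append(x - y)        # complementary high-pass residue
    return low, high
```

Because the two bands sum exactly back to the input, this kind of split lets a single full-band signal be divided between the forward-firing and upward-firing drivers without altering the overall content.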