Patent application number | Description | Published |
20090315905 | LAYERED TEXTURE COMPRESSION ARCHITECTURE - Various technologies for a layered texture compression architecture are described. In one implementation, the layered texture compression architecture may include a texture consumption pipeline. The texture consumption pipeline may include a processor, memory devices, and textures compressed at varying ratios of compression. The textures within the pipeline may be compressed at ratios in accordance with characteristics of the devices in the pipeline that contain and process the textures. | 12-24-2009 |
20110178798 | ADAPTIVE AMBIENT SOUND SUPPRESSION AND SPEECH TRACKING - A device for suppressing ambient sounds from speech received by a microphone array is provided. One embodiment of the device comprises a microphone array, a processor, an analog-to-digital converter, and memory comprising instructions stored therein that are executable by the processor. The instructions stored in the memory are configured to receive a plurality of digital sound signals, each digital sound signal based on an analog sound signal originating at the microphone array, receive a multi-channel speaker signal, generate a monophonic approximation signal of the multi-channel speaker signal, apply a linear acoustic echo canceller to suppress a first ambient sound portion of each digital sound signal, generate a combined directionally-adaptive sound signal from a combination of each digital sound signal by a combination of time-invariant and adaptive beamforming techniques, and apply one or more nonlinear noise suppression techniques to suppress a second ambient sound portion of the combined directionally-adaptive sound signal. | 07-21-2011 |
20110234756 | DE-ALIASING DEPTH IMAGES - Techniques are provided for de-aliasing depth images. The depth image may have been generated based on phase differences between a transmitted and received modulated light beam. A method may include accessing a depth image that has a depth value for a plurality of locations in the depth image. Each location has one or more neighbor locations. Potential depth values are determined for each of the plurality of locations based on the depth value in the depth image for the location and potential aliasing in the depth image. A cost function is determined based on differences between the potential depth values of each location and its neighboring locations. Determining the cost function includes assigning a higher cost for greater differences in potential depth values between neighboring locations. The cost function is substantially minimized to select one of the potential depth values for each of the locations. | 09-29-2011 |
20110267269 | HETEROGENEOUS IMAGE SENSOR SYNCHRONIZATION - A computer implemented method for synchronizing information from a scene using two heterogeneous sensing devices. Scene capture information is provided by a first sensor and a second sensor. The information comprises video streams including successive frames provided at different frequencies. Each frame is separated by a vertical blanking interval. A video output comprising a stream of successive frames each separated by a vertical blanking interval is rendered based on information in the scene. The method determines whether an adjustment of the first and second video stream relative to the video output stream is required by reference to the video output stream. A correction is then generated to at least one of said vertical blanking intervals. | 11-03-2011 |
20110274366 | DEPTH MAP CONFIDENCE FILTERING - An apparatus and method for filtering depth information received from a capture device. Depth information is filtered using confidence information provided with the depth information, based on an adaptively created optimal spatial filter on a per-pixel basis. Input data including depth information for a scene is received. The depth information comprises a plurality of pixels, each pixel including a depth value and a confidence value. A confidence weight normalized filter is generated for each pixel in the depth information. The weight normalized filter is combined with the input data to provide filtered data to an application. | 11-10-2011 |
20110298967 | Controlling Power Levels Of Electronic Devices Through User Interaction - A processor-implemented method, system and computer readable medium for intelligently controlling the power level of an electronic device in a multimedia system based on user intent is provided. The method includes receiving data relating to a first user interaction with a device in a multimedia system. The method includes determining if the first user interaction corresponds to a user's intent to interact with the device. The method then includes setting a power level for the device based on the first user interaction. The method further includes receiving data relating to a second user interaction with the device. The method then includes altering the power level of the device based on the second user interaction to activate the device for the user. | 12-08-2011 |
20110301934 | MACHINE BASED SIGN LANGUAGE INTERPRETER - A computer implemented method for performing sign language translation based on movements of a user is provided. A capture device detects motions defining gestures and detected gestures are matched to signs. Successive signs are detected and compared to a grammar library to determine whether the signs assigned to gestures make sense relative to each other and to a grammar context. Each sign may be compared to previous and successive signs to determine whether the signs make sense relative to each other. The signs may further be compared to user demographic information and a contextual database to verify the accuracy of the translation. An output of the match between the movements and the sign is provided. | 12-08-2011 |
20110304713 | INDEPENDENTLY PROCESSING PLANES OF DISPLAY DATA - Independently processing planes of display data is provided by a method of outputting a video stream. The method includes retrieving from memory a first plane of display data having a first set of display parameters and post-processing the first plane of display data to adjust the first set of display parameters. The method further includes retrieving from memory a second plane of display data having a second set of display parameters and post-processing the second plane of display data independently of the first plane of display data. The method further includes blending the first plane of display data with the second plane of display data to form blended display data and outputting the blended display data. | 12-15-2011 |
20120093320 | SYSTEM AND METHOD FOR HIGH-PRECISION 3-DIMENSIONAL AUDIO FOR AUGMENTED REALITY - Techniques are provided for providing 3D audio, which may be used in augmented reality. A 3D audio signal may be generated based on sensor data collected from the actual room in which the listener is located and the actual position of the listener in the room. The 3D audio signal may include a number of components that are determined based on the collected sensor data and the listener's location. For example, a number of (virtual) sound paths between a virtual sound source and the listener may be determined. The sensor data may be used to estimate materials in the room, such that the effect that those materials would have on sound as it travels along the paths can be determined. In some embodiments, sensor data may be used to collect physical characteristics of the listener such that a suitable HRTF may be determined from a library of HRTFs. | 04-19-2012 |
20120105473 | LOW-LATENCY FUSING OF VIRTUAL AND REAL CONTENT - A system that includes a head mounted display device and a processing unit connected to the head mounted display device is used to fuse virtual content into real content. In one embodiment, the processing unit is in communication with a hub computing device. The processing unit and hub may collaboratively determine a map of the mixed reality environment. Further, state data may be extrapolated to predict a field of view for a user in the future at a time when the mixed reality is to be displayed to the user. This extrapolation can remove latency from the system. | 05-03-2012 |
20120147038 | SYMPATHETIC OPTIC ADAPTATION FOR SEE-THROUGH DISPLAY - A method for overlaying first and second images in a common focal plane of a viewer comprises forming the first image and guiding the first and second images along an axis to a pupil of the viewer. The method further comprises adjustably diverging the first and second images at an adaptive diverging optic to bring the first image into focus at the common focal plane, and, adjustably converging the second image at an adaptive converging optic to bring the second image into focus at the common focal plane. | 06-14-2012 |
20120154542 | PLURAL DETECTOR TIME-OF-FLIGHT DEPTH MAPPING - A depth-mapping method comprises exposing first and second detectors oriented along different optical axes to light dispersed from a scene, and furnishing an output responsive to a depth coordinate of a locus of the scene. The output increases with an increasing first amount of light received by the first detector during a first period, and decreases with an increasing second amount of light received by the second detector during a second period different than the first. | 06-21-2012 |
20120159090 | SCALABLE MULTIMEDIA COMPUTER SYSTEM ARCHITECTURE WITH QOS GUARANTEES - Versions of a multimedia computer system architecture are described which satisfy quality of service (QoS) guarantees for multimedia applications such as game applications while allowing platform resources, hardware resources in particular, to scale up or down over time. Computing resources of the computer system are partitioned into a platform partition and an application partition, each including its own central processing unit (CPU) and, optionally, graphics processing unit (GPU). To enhance scalability of resources up or down, the platform partition includes one or more hardware resources which are only accessible by the multimedia application via a software interface. Additionally, outside the partitions may be other resources shared by the partitions or which provide general purpose computing resources. | 06-21-2012 |
20120245933 | ADAPTIVE AMBIENT SOUND SUPPRESSION AND SPEECH TRACKING - A device for suppressing ambient sounds from speech received by a microphone array is provided. One embodiment of the device comprises a microphone array, a processor, an analog-to-digital converter, and memory comprising instructions stored therein that are executable by the processor. The instructions stored in the memory are configured to receive a plurality of digital sound signals, each digital sound signal based on an analog sound signal originating at the microphone array, receive a multi-channel speaker signal, generate a monophonic approximation signal of the multi-channel speaker signal, apply a linear acoustic echo canceller to suppress a first ambient sound portion of each digital sound signal, generate a combined directionally-adaptive sound signal from a combination of each digital sound signal by a combination of time-invariant and adaptive beamforming techniques, and apply one or more nonlinear noise suppression techniques to suppress a second ambient sound portion of the combined directionally-adaptive sound signal. | 09-27-2012 |
20130044222 | IMAGE EXPOSURE USING EXCLUSION REGIONS - Calculating a gain setting for a primary image sensor includes receiving a test matrix of pixels from a test image sensor, and receiving a first-frame matrix of pixels from a primary image sensor. A gain setting is calculated for the primary image sensor using the first-frame matrix of pixels except those pixels imaging one or more exclusion regions identified from the test matrix of pixels. | 02-21-2013 |
20130208897 | SKELETAL MODELING FOR WORLD SPACE OBJECT SOUNDS - A method for providing three-dimensional audio includes determining a world space object position and a world space ear position of a human subject based on a modeled virtual skeleton. The method further includes providing three-dimensional audio output to the human subject via an acoustic transducer array including one or more acoustic transducers. The three-dimensional audio output is configured such that sounds appear to originate from the object. | 08-15-2013 |
20130208898 | THREE-DIMENSIONAL AUDIO SWEET SPOT FEEDBACK - A method for providing three-dimensional audio is provided. The method includes receiving a depth map imaging a scene from a depth camera and recognizing a human subject present in the scene. The human subject is modeled with a virtual skeleton comprising a plurality of joints defined with a three-dimensional position. A world space ear position of the human subject is determined based on the virtual skeleton. Furthermore, a target world space ear position of the human subject is determined. The target world space ear position is the world space position where a desired audio effect can be produced via an acoustic transducer array. The method further includes outputting a notification representing a spatial relationship between the world space ear position and the target world space ear position. | 08-15-2013 |
20130208899 | SKELETAL MODELING FOR POSITIONING VIRTUAL OBJECT SOUNDS - Providing three-dimensional audio includes determining a world space ear position of a human subject based on a modeled virtual skeleton. A world space sound source position is determined such that a spatial relationship between the world space sound source position and the world space ear position models a spatial relationship between a virtual space sound source position of a virtual space sound source and a virtual space listening position. Three-dimensional audio is output to the human subject via an acoustic transducer array including one or more acoustic transducers. The three-dimensional audio output is configured such that at the world space ear position a sound provided by a particular virtual space sound source appears to originate from a corresponding world space sound source position. | 08-15-2013 |
20130208900 | DEPTH CAMERA WITH INTEGRATED THREE-DIMENSIONAL AUDIO - A three-dimensional audio system includes a depth camera and one or more acoustic transducers in the same housing. Further, the same housing also houses logic for determining a world space ear position of a human subject observed by the depth camera. The logic also determines one or more audio-output transformations based on the world space ear position. The one or more audio-output transformations are configured to produce a three-dimensional audio output configured to provide a desired audio effect at the world space ear position. | 08-15-2013 |
20130208926 | SURROUND SOUND SIMULATION WITH VIRTUAL SKELETON MODELING - A method for providing three-dimensional audio includes determining a world space ear position of a human subject based on a modeled virtual skeleton. The method further includes providing three-dimensional audio output to the human subject via an acoustic transducer array including one or more acoustic transducers. The three-dimensional audio output is configured such that channel-specific sounds appear to originate from corresponding simulated world speaker positions. | 08-15-2013 |
20140316763 | MACHINE BASED SIGN LANGUAGE INTERPRETER - A computer implemented method for performing sign language translation based on movements of a user is provided. A capture device detects motions defining gestures and detected gestures are matched to signs. Successive signs are detected and compared to a grammar library to determine whether the signs assigned to gestures make sense relative to each other and to a grammar context. Each sign may be compared to previous and successive signs to determine whether the signs make sense relative to each other. The signs may further be compared to user demographic information and a contextual database to verify the accuracy of the translation. An output of the match between the movements and the sign is provided. | 10-23-2014 |
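The confidence-weighted depth filtering described in application 20110274366 above can be illustrated with a minimal sketch: each output pixel is a neighborhood average of depth values normalized by the per-pixel confidence weights, so low-confidence samples contribute little. This is an illustrative reading, not the patented implementation; the function name, the fixed box window, and the pass-through fallback for zero total confidence are assumptions.

```python
import numpy as np

def confidence_weighted_filter(depth, confidence, radius=1):
    """Smooth a depth map with a confidence-weight-normalized box filter.

    depth, confidence: 2-D float arrays of the same shape, one confidence
    value per depth pixel (higher means more trusted).
    """
    h, w = depth.shape
    out = np.empty_like(depth, dtype=float)
    for y in range(h):
        for x in range(w):
            # Clip the (2*radius+1)-wide window at the image borders.
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            weights = confidence[y0:y1, x0:x1]
            total = weights.sum()
            if total > 0:
                # Confidence-weighted mean of the neighborhood depths.
                out[y, x] = (depth[y0:y1, x0:x1] * weights).sum() / total
            else:
                # No confident samples nearby: pass the raw depth through.
                out[y, x] = depth[y, x]
    return out
```

A pixel with zero confidence is effectively excluded: an unreliable depth spike surrounded by confident, consistent neighbors is replaced by their weighted mean.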