Patent application number | Description | Published |
20130095920 | GENERATING FREE VIEWPOINT VIDEO USING STEREO IMAGING - Methods and systems for generating free viewpoint video using an active infrared (IR) stereo module are provided. The method includes computing a depth map for a scene using an active IR stereo module. The depth map may be computed by projecting an IR dot pattern onto the scene, capturing stereo images from each of two or more synchronized IR cameras, detecting dots within the stereo images, computing feature descriptors corresponding to the dots in the stereo images, computing a disparity map between the stereo images, and generating the depth map using the disparity map. The method also includes generating a point cloud for the scene using the depth map, generating a mesh of the point cloud, and generating a projective texture map for the scene from the mesh of the point cloud. The method further includes generating the video for the scene using the projective texture map. | 04-18-2013 |
20130100256 | GENERATING A DEPTH MAP - Methods and systems for generating a depth map are provided. The method includes projecting an infrared (IR) dot pattern onto a scene. The method also includes capturing stereo images from each of two or more synchronized IR cameras, detecting a number of dots within the stereo images, computing a number of feature descriptors for the dots in the stereo images, and computing a disparity map between the stereo images. The method further includes generating a depth map for the scene using the disparity map. | 04-25-2013 |
20130321589 | AUTOMATED CAMERA ARRAY CALIBRATION - The automated camera array calibration technique described herein automates the calibration of a camera array. The technique can leverage corresponding depth and single- or multi-spectral intensity data (e.g., RGB (Red Green Blue) data) captured by hybrid capture devices to automatically determine camera geometry. In one embodiment it does this by finding common features in the depth maps of two hybrid capture devices and deriving a rough extrinsic calibration from the shared depth map features. It then uses features of the intensity (e.g., RGB) data corresponding to the depth maps to refine the rough extrinsic calibration. | 12-05-2013
20130321590 | GLANCING ANGLE EXCLUSION - The glancing angle exclusion technique described herein selectively limits projective texturing near depth map discontinuities. A depth discontinuity is defined by a jump between a near-depth surface and a far-depth surface. The technique can limit projective texturing on the near and far surfaces to different degrees: for example, it can limit far-depth projective texturing within a certain distance of a depth discontinuity while leaving near-depth projective texturing unrestricted. | 12-05-2013
20130321593 | VIEW FRUSTUM CULLING FOR FREE VIEWPOINT VIDEO (FVV) - The view frustum culling technique described herein allows Free Viewpoint Video (FVV) or other 3D spatial video rendering at a client by sending only the 3D geometry and texture (e.g., RGB) data necessary for a specific viewpoint or view frustum from a server to the rendering client. The synthetic viewpoint is then rendered by the client by using the received geometry and texture data for the specific viewpoint or view frustum. In some embodiments of the view frustum culling technique, the client has both some texture data and 3D geometric data stored locally if there is sufficient local processing power. Additionally, in some embodiments, additional spatial and temporal data can be sent to the client to support changes in the view frustum by providing additional geometry and texture data that will likely be immediately used if the viewpoint is changed either spatially or temporally. | 12-05-2013 |
20140307047 | ACTIVE STEREO WITH ADAPTIVE SUPPORT WEIGHTS FROM A SEPARATE IMAGE - The subject disclosure is directed towards stereo matching based upon active illumination, including using a patch in a non-actively illuminated image to obtain weights that are used in patch similarity determinations in actively illuminated stereo images. To correlate pixels in actively illuminated stereo images, adaptive support weights computations may be used to determine the similarity of patches corresponding to the pixels. To make those adaptive support weights meaningful, the weights are obtained by processing a non-actively illuminated (“clean”) image. | 10-16-2014
20140307056 | Multimodal Foreground Background Segmentation - The subject disclosure is directed towards a framework that is configured to allow different background-foreground segmentation modalities to contribute towards segmentation. In one aspect, pixels are processed based upon RGB background separation, chroma keying, IR background separation, current depth versus background depth and current depth versus threshold background depth modalities. Each modality may contribute as a factor that the framework combines to determine a probability as to whether a pixel is foreground or background. The probabilities are fed into a global segmentation framework to obtain a segmented image. | 10-16-2014 |
20140307058 | ROBUST STEREO DEPTH SYSTEM - The subject disclosure is directed towards a high resolution, high frame rate, robust stereo depth system. The system provides depth data in varying conditions based upon stereo matching of images, including actively illuminated IR images in some implementations. A clean IR or RGB image may be captured and used with any other captured images in some implementations. Clean IR images may be obtained by using a notch filter to filter out the active illumination pattern. IR stereo cameras, a projector, broad spectrum IR LEDs and one or more other cameras may be incorporated into a single device, which may also include image processing components to internally compute depth data in the device for subsequent output. | 10-16-2014 |
20140307098 | EXTRACTING TRUE COLOR FROM A COLOR AND INFRARED SENSOR - The subject disclosure is directed towards color correcting for infrared (IR) components that are detected in the R, G, B parts of a sensor photosite. A calibration process determines true R, G, B based upon obtaining or estimating the IR component in each photosite, such as by filtering techniques and/or using different IR lighting conditions. A set of tables or curves obtained via offline calibration models the correction data needed for online correction of an image. | 10-16-2014
20140307953 | ACTIVE STEREO WITH SATELLITE DEVICE OR DEVICES - The subject disclosure is directed towards communicating image-related data between a base station and/or one or more satellite computing devices, e.g., tablet computers and/or smartphones. A satellite device captures image data and communicates image-related data (such as the images or depth data processed therefrom) to another device, such as a base station. The receiving device uses the image-related data to enhance depth data (e.g., a depth map) based upon the image data captured from the satellite device, which may be physically closer to something in the scene than the base station, for example. To more accurately capture depth data in various conditions, an active illumination pattern may be projected from the base station or another external projector, whereby satellite units may use the other source's active illumination and thereby need not consume internal power to benefit from active illumination. | 10-16-2014 |
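The point-cloud-generation step of 20130095920 (turning the computed depth map into 3D geometry) is conventionally a pinhole back-projection. The abstract does not give the formula, so this is an illustrative sketch; the function name and the intrinsics `fx, fy, cx, cy` are assumptions, not terms from the application:

```python
def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (rows of metric depths) into camera-space 3D
    points using a pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip holes where stereo matching produced no depth
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```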
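The final step of 20130100256, generating the depth map from the disparity map, is normally the stereo triangulation identity Z = f · B / d. The abstract does not state the exact relationship, so this sketch assumes it; focal length is in pixels and baseline in meters:

```python
def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map to a depth map via Z = f * B / d.
    Pixels with zero disparity (no stereo match) map to depth 0.0."""
    return [[focal_px * baseline_m / d if d > 0 else 0.0 for d in row]
            for row in disparity]
```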
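For 20130321589, deriving a rough extrinsic calibration from shared depth-map features amounts to rigidly aligning matched point sets. A minimal 2-D Procrustes sketch follows; the application works with full 3-D extrinsics and an intensity-based refinement, so this toy version is illustrative only:

```python
import math

def estimate_rigid_2d(src, dst):
    """Closed-form 2-D rotation + translation aligning matched feature
    points src -> dst (a toy stand-in for the rough extrinsic derived
    from shared depth-map features between two capture devices)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Accumulate dot/cross terms of the centered correspondences.
    s_cos = s_sin = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty
```

A refinement stage, per the abstract, would then minimize intensity-feature reprojection error starting from this rough estimate.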
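The asymmetric near/far treatment in 20130321590 can be illustrated on a single row of depths. The jump threshold and pixel margins below are invented parameters for the sketch, not values from the application:

```python
def texturing_mask(depth_row, jump=0.5, far_margin=2, near_margin=0):
    """Toy 1-D glancing-angle exclusion: pixels on the far side of a depth
    discontinuity are excluded from projective texturing within `far_margin`
    pixels of the jump, while the near side uses `near_margin` (0 = never
    excluded), mirroring the asymmetric limiting described in the abstract."""
    n = len(depth_row)
    keep = [True] * n
    for i in range(n - 1):
        if abs(depth_row[i] - depth_row[i + 1]) > jump:  # discontinuity between i and i+1
            far_side = i if depth_row[i] > depth_row[i + 1] else i + 1
            step = -1 if far_side == i else 1            # walk away from the jump
            for k in range(far_margin):
                j = far_side + step * k
                if 0 <= j < n:
                    keep[j] = False
            near_side = i + 1 if far_side == i else i
            for k in range(near_margin):
                j = near_side - step * k
                if 0 <= j < n:
                    keep[j] = False
    return keep
```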
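The server-side test implied by 20130321593, keeping only the geometry the client's current frustum can see, might look like the following for camera-space points. A symmetric square frustum is assumed for brevity; the names and default parameters are illustrative:

```python
import math

def in_frustum(point, fov_deg=60.0, near=0.1, far=100.0):
    """Return True when a camera-space point (x, y, z; +z forward) lies
    inside a symmetric square view frustum."""
    x, y, z = point
    if not (near <= z <= far):
        return False
    half = z * math.tan(math.radians(fov_deg) / 2.0)  # frustum half-width at depth z
    return abs(x) <= half and abs(y) <= half

def cull(points, **kw):
    """Keep only the points visible from the client's viewpoint, so only
    their geometry and texture data need be sent."""
    return [p for p in points if in_frustum(p, **kw)]
```

Sending a slightly enlarged frustum, as the abstract suggests, would pre-fetch geometry likely to be needed after small viewpoint changes.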
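The weighting in 20140307047 can be sketched as a weighted sum of absolute differences over a patch, with weights taken from the clean image. The exponential falloff and `gamma` parameter are common choices in the adaptive-support-weights literature, not stated in the abstract:

```python
import math

def asw_cost(clean_patch, ir_left, ir_right, gamma=10.0):
    """Adaptive-support-weight matching cost for one patch position: weights
    come from intensity similarity to the centre pixel of the *clean*
    (non-actively-illuminated) patch, and those weights gate a sum of
    absolute differences between the two actively illuminated IR patches."""
    h, w = len(clean_patch), len(clean_patch[0])
    centre = clean_patch[h // 2][w // 2]
    num = den = 0.0
    for r in range(h):
        for c in range(w):
            weight = math.exp(-abs(clean_patch[r][c] - centre) / gamma)
            num += weight * abs(ir_left[r][c] - ir_right[r][c])
            den += weight
    return num / den
```

A stereo matcher would evaluate this cost across candidate disparities and keep the minimum; the clean-image weights keep the projected dot pattern from corrupting the support weights themselves.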
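One plausible reading of the per-pixel combination in 20140307056 is a naive product rule over the modality probabilities. The abstract does not specify the actual fusion or the global segmentation step, so this is a stand-in sketch:

```python
def foreground_probability(modality_probs):
    """Fuse per-modality foreground probabilities (e.g. RGB background
    separation, chroma key, IR separation, depth-vs-background) with a naive
    product rule: P(fg) is proportional to the product of the p_i, P(bg) to
    the product of (1 - p_i). Illustrative only; not the framework's rule."""
    p_fg = p_bg = 1.0
    for p in modality_probs:
        p_fg *= p
        p_bg *= (1.0 - p)
    return p_fg / (p_fg + p_bg) if (p_fg + p_bg) > 0 else 0.5

def segment(prob_map, threshold=0.5):
    """Label a pixel foreground (True) when its fused probability exceeds
    the threshold; a real system would feed the probabilities into a global
    segmentation (e.g. graph-cut) instead of thresholding per pixel."""
    return [[foreground_probability(p) > threshold for p in row] for row in prob_map]
```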
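At its simplest, the correction in 20140307098 reduces to subtracting a calibrated fraction of the estimated IR component from each colour channel. The per-channel coefficients below are made up for illustration; the application models them with calibration tables or curves:

```python
def correct_rgb(r, g, b, ir, k=(0.30, 0.25, 0.40)):
    """Remove the infrared component leaking into each colour channel:
    true_c = measured_c - k_c * IR. The coefficients k are placeholders for
    values an offline calibration would supply. Results are clamped to the
    valid 8-bit range."""
    def clamp(v):
        return max(0, min(255, round(v)))
    kr, kg, kb = k
    return clamp(r - kr * ir), clamp(g - kg * ir), clamp(b - kb * ir)
```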