Patent application number | Description | Published |
20090033750 | METHOD AND APPARATUS FOR MOTION STABILIZATION - A method and apparatus for digital image stabilization. The method comprises segmenting an exposure time to obtain multiple partial-exposure images of a scene and manipulating the partially exposed images to produce a stable image. | 02-05-2009 |
20100066842 | METHOD AND APPARATUS FOR PRODUCING SHARP FRAMES WITH LESS BLUR - A method and apparatus for motion-triggered image stabilization. The method includes computing a projection vector for at least a portion of a frame of an image using horizontal and vertical sums, performing motion estimation using the projection vector and its shift from a previous frame, applying a temporal IIR filter to the motion vector, calculating the maximum horizontal and vertical motion vectors, obtaining an exposure time based on the horizontal and vertical motion vectors and the gain, returning the exposure time and the gain to the auto-exposure algorithm, utilizing the returned exposure time and gain, and producing a frame with less motion blur. | 03-18-2010 |
20100079606 | Motion Stabilization - Stabilization for devices such as hand-held camcorders segments a low-resolution frame into a region of reliable estimation, finds a global motion vector for the region at high resolution, and uses the global motion vector to compensate for jitter. | 04-01-2010 |
20100194918 | Methods and Systems for Automatic White Balance - A method for calibrating automatic white balance (AWB) is provided that includes using a plurality of references for AWB that include color temperature references, wherein an AWB failure is detected in an image, generating a scene prototype reference based on the image, and adding the generated scene prototype reference to the references for AWB. A method for calibrating AWB is provided that includes selecting an image as a scene prototype, wherein the selected image was captured by a second imaging sensor different from a first imaging sensor, transforming the selected image into a scene prototype image for the first imaging sensor, generating a scene prototype reference using the scene prototype image for the first imaging sensor, and adding the generated scene prototype reference to references for AWB that include color temperature references. | 08-05-2010 |
20110176014 | Video Stabilization and Reduction of Rolling Shutter Distortion - A method of processing a digital video sequence is provided that includes estimating compensated motion parameters and compensated distortion parameters (compensated M/D parameters) of a compensated motion/distortion (M/D) affine transformation for a block of pixels in the digital video sequence, and applying the compensated M/D affine transformation to the block of pixels using the estimated compensated M/D parameters to generate an output block of pixels, wherein translational and rotational jitter in the block of pixels is stabilized in the output block of pixels and distortion due to skew, horizontal scaling, vertical scaling, and wobble in the block of pixels is reduced in the output block of pixels. | 07-21-2011 |
20110228051 | Stereoscopic Viewing Comfort Through Gaze Estimation - A method of improving stereo video viewing comfort is provided that includes capturing a video sequence of eyes of an observer viewing a stereo video sequence on a stereoscopic display, estimating gaze direction of the eyes from the video sequence, and manipulating stereo images in the stereo video sequence based on the estimated gaze direction, whereby viewing comfort of the observer is improved. | 09-22-2011 |
20110229019 | Scene Adaptive Brightness/Contrast Enhancement - A method for brightness and contrast enhancement includes computing a luminance histogram of a digital image, computing first distances from the luminance histogram to a plurality of predetermined luminance histograms, estimating first control point values for a global tone mapping curve from predetermined control point values corresponding to a subset of the predetermined luminance histograms selected based on the computed first distances, and interpolating the estimated control point values to determine the global tone mapping curve. The method may also include dividing the digital image into a plurality of image blocks, and enhancing each pixel in the digital image by computing second distances from a pixel in an image block to the centers of neighboring image blocks, and computing an enhanced pixel value based on the computed second distances, predetermined control point values corresponding to the neighboring image blocks, and the global tone mapping curve. | 09-22-2011 |
20120063669 | Automatic Convergence of Stereoscopic Images Based on Disparity Maps - A method for automatic convergence of stereoscopic images is provided that includes receiving a stereoscopic image, generating a disparity map comprising a plurality of blocks for the stereoscopic image, clustering the plurality of blocks into a plurality of clusters based on disparities of the blocks, selecting a cluster of the plurality of clusters with a smallest disparity as a foreground cluster, determining a first shift amount and a first shift direction and a second shift amount and a second shift direction based on the smallest disparity, and shifting a left image in the stereoscopic image in the first shift direction by the first shift amount and a right image in the stereoscopic image in the second shift direction by the second shift amount, wherein the smallest disparity is reduced. | 03-15-2012 |
20120093433 | Dynamic Adjustment of Noise Filter Strengths for use with Dynamic Range Enhancement of Images - Dynamic adjustment of noise filter strengths for use with dynamic range enhancement of images is performed to produce better quality images by adapting dynamically to the image noise profile. Global and local brightness and contrast enhancement (GLBCE) is performed on a digital image to form an enhanced image. The GLBCE applies local gain values to the digital image based on local intensity values. A GLBCE gain versus intensity curve is determined for the enhanced image. A set of noise filter thresholds is adjusted in response to the GLBCE gain versus intensity curve to form a set of dynamically adjusted noise filter thresholds. The enhanced image is noise filtered using the set of dynamically adjusted noise filter thresholds to form a noise filtered enhanced image. | 04-19-2012 |
20120128236 | METHOD AND APPARATUS FOR STEREO MISALIGNMENT ESTIMATION USING MODIFIED AFFINE OR PERSPECTIVE MODEL - A method and apparatus for estimating stereo misalignment using a modified affine or perspective model. The method includes dividing a left frame and a right frame into blocks, comparing horizontal and vertical boundary signals in the left frame and the right frame, estimating the horizontal and vertical motion vectors for each block in a reference frame, selecting reliable motion vectors from a set of motion vectors, dividing the selected blocks into smaller features, feeding the data to an affine or a perspective transformation model to solve for the model parameters, running the model parameters through a temporal filter, apportioning the estimated misalignment parameters between the left frame and the right frame, and modifying the left frame and the right frame to save some boundary space. | 05-24-2012 |
20130016187 | METHOD AND APPARATUS FOR AUTO-CONVERGENCE FOR STEREOSCOPIC IMAGES AND VIDEOS - A method and apparatus for reducing convergence accommodation conflict. The method includes estimating disparities between images from different lenses, analyzing the estimated disparities, selecting a point of convergence, determining the amount of shift corresponding to the selected convergence point, and performing adjustment of the disparity to maintain a disparity value below a threshold. | 01-17-2013 |
20130028582 | Stereoscopic Auto-Focus Based on Coordinated Lens Positions - Methods for automatic focus in a stereographic imaging device that includes two imaging sensor systems are provided. Focus searches are executed on both imaging sensor systems to determine optimal focused lens positions for each. The focus searches may be executed concurrently or sequentially, and may be at differing lens position granularities. Focal scores and spatial locations, i.e., the locations of focus regions, may be shared between the imaging sensor systems to coordinate the focus searches. Touchscreen coordinates may also be used to coordinate the focus searches. | 01-31-2013 |
20130083202 | Method, System and Computer Program Product for Reducing a Delay From Panning a Camera System - For reducing a delay from panning a camera system, an estimate is received of a physical movement of the camera system. In response to the estimate, a determination is made of whether the camera system is being panned. In response to determining that the camera system is not being panned, most effects of the physical movement are counteracted in a video sequence from the camera system. In response to determining that the camera system is being panned, most effects of the panning are preserved in the video sequence, while concurrently the video sequence is shifted toward a position that balances flexibility in counteracting effects of a subsequent physical movement of the camera system. | 04-04-2013 |
20130100125 | Method, System and Computer Program Product for Enhancing a Depth Map - A first depth map is generated in response to a stereoscopic image from a camera. The first depth map includes first pixels having valid depths and second pixels having invalid depths. In response to the first depth map, a second depth map is generated for replacing at least some of the second pixels with respective third pixels having valid depths. For generating the second depth map, a particular one of the third pixels is generated for replacing a particular one of the second pixels. For generating the particular third pixel, respective weight(s) is/are assigned to a selected one or more of the first pixels in response to value similarity and spatial proximity between the selected first pixel(s) and the particular second pixel. The particular third pixel is computed in response to the selected first pixel(s) and the weight(s). | 04-25-2013 |
20130177253 | Multi-Pass Video Noise Filtering - A method of noise filtering of a digital video sequence is provided that includes computing a motion image for a frame, wherein the motion image includes a motion value for each pixel in the frame, and wherein the motion values are computed as differences between pixel values in a luminance component of the frame and corresponding pixel values in a luminance component of a reference frame, applying a first spatial noise filter to the motion image to obtain a final motion image, computing a blending factor image for the frame, wherein the blending factor image includes a blending factor for each pixel in the frame, and wherein the blending factors are computed based on corresponding motion values in the final motion image, generating a filtered frame, wherein the blending factors are applied to corresponding pixel values in the reference frame and the frame, and outputting the filtered frame. | 07-11-2013 |
20140152686 | Local Tone Mapping for High Dynamic Range Images - A method of local tone mapping of a high dynamic range (HDR) image is provided that includes dividing a luminance image of the HDR image into overlapping blocks and computing a local tone curve for each block, computing a tone mapped value for each pixel of the luminance image as a weighted sum of values computed by applying local tone curves of neighboring blocks to the pixel value, computing a gain for each pixel as a ratio of the tone mapped value to the value of the pixel, and applying the gains to corresponding pixels in the HDR image. A weight for each value is computed based on distance from the pixel to the center point of the block having the local tone curve applied to compute the value and the intensity difference between the value of the pixel and the block mean pixel value. | 06-05-2014 |
20140152694 | Merging Multiple Exposures to Generate a High Dynamic Range Image - A method of generating a high dynamic range (HDR) image is provided that includes capturing a long exposure image and a short exposure image of a scene, computing a merging weight for each pixel location of the long exposure image based on a pixel value of the pixel location and a saturation threshold, and computing a pixel value for each pixel location of the HDR image as a weighted sum of corresponding pixel values in the long exposure image and the short exposure image, wherein a weight applied to a pixel value of the pixel location of the short exposure image and a weight applied to a pixel value of the pixel location in the long exposure image are determined based on the merging weight computed for the pixel location and responsive to motion in a scene of the long exposure image and the short exposure image. | 06-05-2014 |
20140241620 | Illumination Estimation Using Natural Scene Statistics - A method for estimating illumination of an image captured by a digital system is provided that includes computing a feature vector for the image, identifying at least one best reference illumination class for the image from a plurality of predetermined reference illumination classes using the feature vector, an illumination classifier, and predetermined classification parameters corresponding to each reference illumination class, and computing information for further processing of the image based on the at least one best reference illumination class, wherein the information is at least one selected from a group consisting of color temperature and white balance gains. | 08-28-2014 |
20150022693 | Wide Dynamic Range Depth Imaging - Wide dynamic range depth imaging in a structured light device is provided that improves depth maps for scenes with a wide range of albedo values under varying light conditions. A structured light pattern, e.g., a time-multiplexed structured light pattern, is projected into a scene at various projection times and a camera captures images of the scene for at least the same exposure times as the projection times. A depth image is computed for each of the projection/exposure times and the resulting depth images are combined to generate a composite depth image. | 01-22-2015 |
20150022697 | Projector Auto-Focus Correction with the Aid of a Camera - A method of automatically focusing a projector in a projection system is provided that includes projecting, by the projector, a binary pattern on a projection surface, capturing an image of the projected binary pattern by a camera synchronized with the projector, computing a depth map from the captured image, and adjusting focus of the projector based on the computed depth map. | 01-22-2015 |
20150023594 | Transforming Wide Dynamic Range Images to Reduced Dynamic Range Images - A method of transforming an N-bit raw wide dynamic range (WDR) Bayer image to a K-bit raw red-green-blue (RGB) image wherein N>K is provided that includes converting the N-bit raw WDR Bayer image to an N-bit raw RGB image, computing a luminance image from the N-bit raw RGB image, computing a pixel gain value for each luminance pixel in the luminance image to generate a gain map, applying a hierarchical noise filter to the gain map to generate a filtered gain map, applying the filtered gain map to the N-bit raw RGB image to generate a gain-mapped N-bit RGB image, and downshifting the gain-mapped N-bit RGB image by (N−K) to generate the K-bit RGB image. | 01-22-2015 |
20150036999 | Viewer Attention Controlled Video Playback - A method of viewer attention controlled video playback on a video display device is provided that includes displaying a video on a display included in the video display device, determining whether or not attention of a viewer watching the video is focused on the display, and halting the displaying of the video when the attention of the viewer is not focused on the display. | 02-05-2015 |
20150085151 | METHOD AND APPARATUS FOR FUSING IMAGES FROM AN ARRAY OF CAMERAS - An image fusing method, apparatus, and system for fusing images from an array of cameras. The method includes selecting a camera from the array of cameras as a reference camera, estimating misalignment between the input images retrieved from the reference camera and the input images retrieved from the other cameras, estimating misalignment parameters between the reference camera and the other cameras, estimating local disparity between the reference camera image data and the other cameras based on the estimated misalignment parameters, mapping the image data into a reference camera grid using the estimated misalignment parameters and the estimated disparity values, fusing the retrieved input image data from the other cameras into the reference camera grid utilizing fractional offsets from integer coordinates, and producing an output image grid on the reference camera grid, with output pixels interpolated using the processed data to produce a high-resolution image. | 03-26-2015 |
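Several of the abstracts above describe concrete per-pixel operations. As an illustration of the motion-adaptive temporal blending outlined in application 20130177253 (Multi-Pass Video Noise Filtering), here is a minimal NumPy sketch on luminance frames. The 3x3 box blur standing in for the first spatial filter, the exponential blending curve, and the `noise_sigma` parameter are illustrative assumptions, not the patented design.

```python
import numpy as np

def temporal_blend_filter(frame, ref, noise_sigma=4.0):
    """Blend a frame with a reference frame based on per-pixel motion.

    Pixels with little motion lean toward the reference frame (strong
    temporal filtering); pixels with large motion keep the current frame.
    """
    # Motion image: absolute luminance difference against the reference frame.
    motion = np.abs(frame.astype(np.float64) - ref.astype(np.float64))
    # A simple 3x3 box blur stands in for the first spatial noise filter.
    padded = np.pad(motion, 1, mode="edge")
    smoothed = sum(
        padded[i:i + motion.shape[0], j:j + motion.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    # Blending factor: near 1 where motion is low, near 0 where it is high.
    alpha = np.exp(-smoothed / noise_sigma)
    # Apply the blending factors to reference and current frame pixel values.
    return alpha * ref + (1.0 - alpha) * frame
```

On a static region the output collapses to the reference frame (maximum noise reduction); around a moving object the blending factor drops and the current frame dominates, avoiding ghosting.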
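The saturation-weighted exposure merging in application 20140152694 can likewise be sketched in a few lines (ignoring the motion-responsive term). The ramp width of the weight function, the `sat_threshold` default, and the exposure-ratio scaling are assumptions chosen for illustration.

```python
import numpy as np

def merge_exposures(long_exp, short_exp, exposure_ratio, sat_threshold=230.0):
    """Merge long/short exposures into an HDR image.

    A per-pixel merging weight ramps from 1 (use the long exposure) down
    to 0 (use the short exposure) as the long-exposure pixel approaches
    the saturation threshold.
    """
    long_f = long_exp.astype(np.float64)
    # Bring the short exposure to the long exposure's radiometric scale.
    short_f = short_exp.astype(np.float64) * exposure_ratio
    # Merging weight from the long-exposure pixel value and the threshold.
    w = np.clip((sat_threshold - long_f) / (sat_threshold * 0.2), 0.0, 1.0)
    # Weighted sum of corresponding pixel values from both exposures.
    return w * long_f + (1.0 - w) * short_f
```

Well-exposed pixels come from the cleaner long exposure, while clipped highlights are reconstructed from the scaled short exposure, which is the essence of the merge.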
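Finally, the foreground-driven shifting in application 20120063669 (Automatic Convergence of Stereoscopic Images) reduces to splitting the smallest block disparity between the two views. This sketch assumes disparity is measured as left-column minus right-column and uses `np.roll` in place of the shift-and-crop a real pipeline would perform.

```python
import numpy as np

def auto_converge(left, right, block_disparities):
    """Shift a stereo pair so the foreground disparity is driven to zero.

    block_disparities: per-block horizontal disparities (left minus right).
    The smallest disparity is taken as the foreground cluster; half of it
    is applied to each image, in opposite directions.
    """
    d = min(block_disparities)        # foreground (smallest) disparity
    shift_left = d // 2               # first shift amount (left image)
    shift_right = d - shift_left      # second shift amount (right image)
    # Shift the images toward each other; borders would be cropped in practice.
    new_left = np.roll(left, -shift_left, axis=1)
    new_right = np.roll(right, shift_right, axis=1)
    return new_left, new_right
```

After the shift, corresponding foreground features land on the same column in both views, placing the foreground at the screen plane.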