Patent application number | Description | Published |
20100110107 | METHOD AND APPARATUS FOR DETERMINING IMAGE ORIENTATION - A method and apparatus for automatically deciding on the orientation of an image. The method includes computing the average and standard deviation of luminance for the top, bottom, left, and right regions of a portion of the image, computing the average luminance of the image, computing, in a digital signal processor, a consolidated luminance difference and uniformity for the top and bottom regions and for the left and right regions, utilizing the computed average and standard deviation of at least one of the bottom region, the left region or the right region, and utilizing portrait orientation if difLR − difTB > t | 05-06-2010 |
20100110226 | METHOD AND APPARATUS FOR DETECTING TYPE OF BACK LIGHT OF AN IMAGE - A method and apparatus for determining the type of back light of an image. The method includes computing an average luminance of the center region of at least a portion of the image, wherein the average luminance is aveCenter, computing the average luminance of the image, wherein the average luminance is aveImage, computing the ratio of the average image luminance to the average center luminance, wherein the ratio = aveImage/aveCenter, determining that the scene related to the image is a backlit scene if the ratio is greater than a predetermined value, and performing exposure compensation for the backlit scene by utilizing the equation Exp2=Exp1 * lum_ref/(t | 05-06-2010 |
20110242129 | SYSTEM, METHOD AND APPARATUS FOR AN EDGE-PRESERVING SMOOTH FILTER FOR LOW POWER ARCHITECTURE - Embodiments of an apparatus, system and method are described for an edge-preserving smooth filter for low power architecture. A weighted pixel sum may be determined based on a weight of a central pixel and a weight of one or more neighboring pixels. The weight sum for the central pixel may be set to a power of two. An output of the central pixel may be displayed based on the weight sum and the weighted pixel sum. Other embodiments are described and claimed. | 10-06-2011 |
20110243440 | ADAPTIVE LIGHT METHOD AND SYSTEM FOR LOW POWER ARCHITECTURES - Embodiments of an apparatus, system and method are described for an adaptive light method for low power architecture. A histogram with a plurality of bins may be determined based on luminance for an image. A tone differential may be decreased based on a sum of pixel counts from adjacent bins. The image may be displayed based at least in part on a tone differential of a bin. Other embodiments are described and claimed. | 10-06-2011 |
20110316971 | SINGLE PIPELINE STEREO IMAGE CAPTURE - In some embodiments, an electronic device comprises a first camera module and a second camera module, a first receiver to receive a first set of lines of raw image data from the first camera module, a second receiver to receive a second set of lines of raw image data from the second camera module, and logic to combine the first set of lines and the second set of lines to generate combined lines of raw image data, process the combined lines of raw image data, and generate a combined image frame from the combined lines of raw image data. Other embodiments may be described. | 12-29-2011 |
20110317034 | IMAGE SIGNAL PROCESSOR MULTIPLEXING - In some embodiments, an electronic device comprises a first camera and a second camera, a first buffer to receive a first set of input frames from the first camera and a second buffer to receive a second set of input frames from the second camera, a single image signal processor coupled to the first buffer and the second buffer to process the first set of input frames from the first frame buffer using one or more processing parameters stored in a first memory to generate a first video stream and to process the second set of input frames from the second frame buffer using one or more processing parameters stored in a second memory register to generate a second video stream, and a memory module to store the first video stream and the second video stream. | 12-29-2011 |
20120076403 | SYSTEM AND METHOD FOR ALL-IN-FOCUS IMAGING FROM MULTIPLE IMAGES ACQUIRED WITH HAND-HELD CAMERA - Methods and systems to create an image in which objects at different focal depths all appear to be in focus. In an embodiment, all objects in the scene may appear in focus. Non-stationary cameras may be accommodated, so that variations in the scene resulting from camera jitter or other camera motion may be tolerated. An image alignment process may be used, and the aligned images may be blended using a process that may be implemented using logic that has relatively limited performance capability. The blending process may take a set of aligned input images and convert each image into a simplified Laplacian pyramid (LP). The LP is a data structure that includes several processed versions of the image, each version being of a different size. The set of aligned images is therefore converted into a corresponding set of LPs. The LPs may be combined into a composite LP, which may then undergo Laplacian pyramid reconstruction (LPR). The output of the LPR process is the final blended image. | 03-29-2012 |
20120076407 | BRIGHTNESS ENHANCEMENT METHOD, SYSTEM AND APPARATUS FOR LOW POWER ARCHITECTURES - Embodiments are described for a brightness enhancement method, system and apparatus for low power architecture. A histogram of a contrast image may be generated. One or more parameters may be set based on the histogram. A tone mapping for the contrast image may be determined based on the one or more parameters. An output pixel may be determined based on the tone mapping. | 03-29-2012 |
20120082387 | OPTIMIZED FAST HESSIAN MATRIX COMPUTATION ARCHITECTURE - Methods and systems of recognizing images may include an apparatus having a hardware module with logic to, for a plurality of vectors in an image, determine a first intermediate computation based on even pixels of an image vector, and determine a second intermediate computation based on odd pixels of an image vector. The logic can also combine the first and second intermediate computations into a Hessian matrix computation. | 04-05-2012 |
20120293608 | Positional Sensor-Assisted Perspective Correction for Panoramic Photography - This disclosure pertains to devices, methods, and computer readable media for performing positional sensor-assisted panoramic photography techniques in handheld personal electronic devices. Generalized steps that may be used to carry out the panoramic photography techniques described herein include, but are not necessarily limited to: 1.) acquiring image data from the electronic device's image sensor; 2.) performing “motion filtering” on the acquired image data, e.g., using information returned from positional sensors of the electronic device to inform the processing of the image data; 3.) performing image registration between adjacent captured images; 4.) performing geometric corrections on captured image data, e.g., due to perspective changes and/or camera rotation about a non-center of perspective (COP) camera point; and 5.) “stitching” the captured images together to create the panoramic scene, e.g., blending the image data in the overlap area between adjacent captured images. The resultant stitched panoramic image may be cropped before final storage. | 11-22-2012 |
20120293609 | Positional Sensor-Assisted Motion Filtering for Panoramic Photography - This disclosure pertains to devices, methods, and computer readable media for performing positional sensor-assisted panoramic photography techniques in handheld personal electronic devices. Generalized steps that may be used to carry out the panoramic photography techniques described herein include, but are not necessarily limited to: 1.) acquiring image data from the electronic device's image sensor; 2.) performing “motion filtering” on the acquired image data, e.g., using information returned from positional sensors of the electronic device to inform the processing of the image data; 3.) performing image registration between adjacent captured images; 4.) performing geometric corrections on captured image data, e.g., due to perspective changes and/or camera rotation about a non-center of perspective (COP) camera point; and 5.) “stitching” the captured images together to create the panoramic scene, e.g., blending the image data in the overlap area between adjacent captured images. The resultant stitched panoramic image may be cropped before final storage. | 11-22-2012 |
20120306999 | Motion-Based Image Stitching - Systems, methods, and computer readable media for stitching or aligning multiple images (or portions of images) to generate a panoramic image are described. In general, techniques are disclosed for using motion data (captured at substantially the same time as image data) to align images rather than performing image analysis and/or registration operations. More particularly, motion data may be used to identify the rotational change between successive images. The identified rotational change, in turn, may be used to generate a transform that, when applied to an image allows it to be aligned with a previously captured image. In this way, images may be aligned in real-time using only motion data. | 12-06-2012 |
20130044228 | Motion-Based Video Stabilization - Systems, methods, and computer readable media for stabilizing video frames based on information from a motion sensor are described. In general, digital video stabilization techniques are disclosed for generating and applying image-specific transformations to individual frames (images) in a video sequence after, rather than before, the image has been captured. The transformations may be used to counter-balance or compensate for unwanted jitter occurring during video capture due to, for example, a person's hand shaking. | 02-21-2013 |
20130044230 | ROLLING SHUTTER REDUCTION BASED ON MOTION SENSORS - This disclosure pertains to devices, methods, and computer readable media for reducing rolling shutter distortion effects in captured video frames based on timestamped positional information obtained from positional sensors in communication with an image capture device. In general, rolling shutter reduction techniques are described for generating and applying image segment-specific perspective transforms to already-captured segments of a single image or images in a video sequence, to compensate for unwanted distortions that occurred during the read out of the image sensor. Such distortions may be due to, for example, the use of CMOS sensors combined with the movement of the image capture device. In contrast to the prior art, rolling shutter reduction techniques described herein may be applied to captured images or videos in real-time or near real-time using positional sensor information and without intensive image processing that would require an analysis of the content of the underlying image data. | 02-21-2013 |
20130044241 | Rolling Shutter Reduction Based on Motion Sensors - This disclosure pertains to devices, methods, and computer readable media for reducing rolling shutter distortion effects in captured video frames based on timestamped positional information obtained from positional sensors in communication with an image capture device. In general, rolling shutter reduction techniques are described for generating and applying image segment-specific perspective transforms to already-captured segments (i.e., portions) of images in a video sequence, so as to counter or compensate for unwanted distortions that occurred during the read out of the image sensor. Such distortions may be due to, for example, the use of CMOS sensors combined with the rapid movement of the image capture device. In contrast to the prior art, rolling shutter reduction techniques described herein may be applied to captured images in real-time or near real-time using positional sensor information and without intensive image processing that would require an analysis of the content of the underlying image data. | 02-21-2013 |
20130155264 | MOTION SENSOR BASED VIRTUAL TRIPOD METHOD FOR VIDEO STABILIZATION - An apparatus, method, and computer-readable medium for motion sensor-based video stabilization. A motion sensor may capture motion data of a video sequence. A controller may compute average motion data of the camera used to capture the video sequence based on motion data from the motion sensor. The controller may then determine the difference between the actual camera motion and the average camera motion to set a video stabilization strength parameter for the frames in the video sequence. A video stabilization unit may utilize the strength parameter to stabilize the frames in the video sequence. | 06-20-2013 |
20130155266 | FOCUS POSITION ESTIMATION - A method, apparatus, and computer-readable storage medium for lens position estimation. A drive current value may be received from a lens driver. An orientation of an electronic device may be detected using a motion sensor. A gravity vector may be determined by a processor based upon the orientation. A drive current offset may be determined based upon the gravity vector. The drive current value may be combined with the calculated drive current offset to create a normalized drive current. A lens position value associated with a camera lens of the electronic device may be computed based upon the normalized drive current. | 06-20-2013 |
20130235221 | CHOOSING OPTIMAL CORRECTION IN VIDEO STABILIZATION - To correct for the motion, a perspective transform of a pair of matrices can be applied to an input frame to provide an output frame. The first matrix can represent a transform to be applied to the input frame and the second matrix can represent an identity matrix. Each matrix can contribute to the output frame according to a respective weighting factor. The weighting factors for the two matrices can be determined based on an estimate of the overscan. | 09-12-2013 |
20130329062 | STATIONARY CAMERA DETECTION AND VIRTUAL TRIPOD TRANSITION FOR VIDEO STABILIZATION - An apparatus, method, and computer-readable medium for motion sensor-based video stabilization. A motion sensor may capture motion data of a video sequence. A controller may compute instantaneous motion of the camera for a current frame of the video sequence and accumulated motion of the camera corresponding to motion of a plurality of frames of the video sequence. The controller may compare the instantaneous motion to a first threshold value, compare the accumulated motion to a second threshold value, and set a video stabilization strength parameter for the current frame based on the results of the comparison. A video stabilization unit may perform video stabilization on the current frame according to the frame's strength parameter. | 12-12-2013 |
20130329063 | NOISE REDUCTION BASED ON MOTION SENSORS - A method for reducing noise in a sequence of frames may include generating a transformed frame from an input frame according to a perspective transform of a transform matrix, wherein the transform matrix corrects for motion associated with the input frame. A determination may be made to identify pixels in the transformed frame that have a difference with corresponding pixels in a neighboring frame below a threshold. An output frame may be generated by adjusting pixels in the transformed frame that are identified to have the difference with the corresponding pixels in the neighboring frame below the threshold. | 12-12-2013 |
20130329066 | HARDWARE-CONSTRAINED TRANSFORMS FOR VIDEO STABILIZATION PROCESSES - The video stabilization method can generate output data for an output frame from input data of an input frame according to a perspective transform of a transform matrix. The input data used for the perspective transform can be obtained from a buffer of a predetermined depth. The transform matrix can be altered when the input data required for the transform exceeds the depth of the buffer. | 12-12-2013 |
20130329070 | Projection-Based Image Registration - Systems, methods, and computer readable media to register images in real-time that are capable of producing reliable registrations even when the number of high frequency image features is small. The disclosed techniques may also provide a quantitative measure of a registration's quality. The latter may be used to inform the user and/or to automatically determine when visual registration techniques may be less accurate than motion sensor-based approaches. When such a case is detected, an image capture device may be automatically switched from visual-based to sensor-based registration. Disclosed techniques quickly determine indicators of an image's overall composition (row and column projections) which may be used to determine the translation of a first image relative to a second image. The translation so determined may be used to align/register the two images. | 12-12-2013 |
20130329072 | Motion-Based Image Stitching - Systems, methods, and computer readable media for stitching or aligning multiple images (or portions of images) to generate a panoramic image are described. In general, techniques are disclosed for using motion data (captured at substantially the same time as image data) to align images rather than performing image analysis and/or registration operations. More particularly, motion data may be used to identify the rotational change between successive images. The identified rotational change, in turn, may be used to calculate a motion vector that describes the change in position between a point in a first image and a corresponding point in a subsequent image. The motion vector may be utilized to align successive images in an image sequence based on the motion data associated with the images. | 12-12-2013 |
20130329087 | High Dynamic Range Image Registration Using Motion Sensor Data - Motion sensor data may be used to register a sequence of standard dynamic range images for producing a high dynamic range (HDR) image, reducing use of computational resources over software visual feature mapping techniques. A rotational motion sensor may produce information about orientation changes in the imaging device between images in the sequence of images sufficient to allow registration of the images, instead of using registration based on analysis of visual features of the images. If the imaging device has been moved laterally, then the motion sensor data may not be useful and visual feature mapping techniques may be employed to produce the HDR image. | 12-12-2013 |
20130342714 | AUTOMATED TRIPOD DETECTION AND HANDLING IN VIDEO STABILIZATION - An apparatus, method, and computer-readable medium for motion sensor-based video stabilization. A motion sensor may capture motion data of a video sequence. A controller may compute instantaneous motion of the camera for a current frame of the video sequence. The controller may compare the instantaneous motion to a threshold value representing a still condition and reduce a video stabilization strength parameter for the current frame if the instantaneous motion is less than the threshold value. A video stabilization unit may perform video stabilization on the current frame according to the frame's strength parameter. | 12-26-2013 |
20140314323 | OPTIMIZED FAST HESSIAN MATRIX COMPUTATION ARCHITECTURE - Methods and systems of recognizing images may include an apparatus having a hardware module with logic to, for a plurality of vectors in an image, determine a first intermediate computation based on even pixels of an image vector, and determine a second intermediate computation based on odd pixels of an image vector. The logic can also combine the first and second intermediate computations into a Hessian matrix computation. | 10-23-2014 |
20140320731 | FOCUS POSITION ESTIMATION - A method for lens position estimation can include receiving from a lens driver a drive current value representing a current to be provided to a motor to position a camera lens of an electronic device, detecting an orientation of the electronic device using a motion sensor, determining a gravity vector based upon the orientation, and computing an estimated value of a lens position of the camera lens of the electronic device based upon the drive current value and gravity vector. | 10-30-2014 |
20140362256 | Reference Frame Selection for Still Image Stabilization - Systems, methods, and computer readable media to improve image stabilization operations are described. A novel combination of image quality and commonality metrics is used to identify a reference frame from a set of commonly captured images which, when the set's other images are combined with it, results in a quality stabilized image. The disclosed image quality and commonality metrics may also be used to optimize the use of a limited amount of image buffer memory during image capture sequences that return more images than the memory can accommodate at one time. Image quality and commonality metrics may also be used to effect the combination of multiple relatively long-exposure images which, when combined with one or more final (relatively) short-exposure images, yields images exhibiting motion-induced blurring in interesting and visually pleasing ways. | 12-11-2014 |
20140363044 | Efficient Machine-Readable Object Detection and Tracking - A method to improve the efficiency of the detection and tracking of machine-readable objects is disclosed. The properties of image frames may be pre-evaluated to determine whether a machine-readable object, even if present in the image frames, would be likely to be detected. After it is determined that one or more image frames have properties that may enable the detection of a machine-readable object, image data may be evaluated to detect the machine-readable object. When a machine-readable object is detected, the location of the machine-readable object in a subsequent frame may be determined based on a translation metric between the image frame in which the object was identified and the subsequent frame rather than a detection of the object in the subsequent frame. The translation metric may be identified based on an evaluation of image data and/or motion sensor data associated with the image frames. | 12-11-2014 |
20140363087 | Methods of Image Fusion for Image Stabilization - Systems, methods, and computer readable media to improve image stabilization operations are described. Novel approaches for fusing non-reference images with a pre-selected reference frame in a set of commonly captured images are disclosed. The fusing approach may use a soft transition by using a weighted average for ghost/non-ghost pixels to avoid sudden transitions between neighboring, nearly similar pixels. Additionally, the ghost/non-ghost decision can be made based on a set of neighboring pixels rather than independently for each pixel. An alternative approach may involve performing a multi-resolution decomposition of all the captured images, using temporal fusion, spatio-temporal fusion, or combinations thereof, at each level and combining the different levels to generate an output image. | 12-11-2014 |
20140363096 | Image Registration Methods for Still Image Stabilization - Systems, methods, and computer readable media to improve image stabilization operations are described. A novel approach to pixel-based registration of non-reference images to a reference frame in a set of commonly captured images is disclosed which makes use of pyramid decomposition to more efficiently detect corners. The disclosed pixel-based registration operation may also be combined with motion sensor data-based registration approaches to register non-reference images with respect to the reference frame. When the registered non-reference images are combined with the pre-selected reference image, the resulting image is a quality stabilized image. | 12-11-2014 |
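A few of the abstracts in the table above describe their core tests concretely enough to sketch. The Python fragment below is only a minimal illustration of the luminance-statistics ideas in applications 20100110107 (orientation from border-band statistics) and 20100110226 (backlight detection from a center-versus-whole luminance ratio); it is not the claimed implementation. The band and center-region sizes, the thresholds, and the way the per-region statistics are consolidated are assumptions, and the exposure-compensation equation, which is truncated in the listing, is not reproduced.

```python
import numpy as np


def is_backlit(luma, center_frac=0.5, ratio_threshold=1.2):
    """Ratio test sketched from the abstract of 20100110226.

    luma is a 2-D array of luminance values. center_frac and
    ratio_threshold are illustrative placeholders, not values taken
    from the application.
    """
    h, w = luma.shape
    ch, cw = int(h * center_frac), int(w * center_frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    ave_center = luma[top:top + ch, left:left + cw].mean()
    ave_image = luma.mean()
    return (ave_image / ave_center) > ratio_threshold


def looks_portrait(luma, band_frac=0.2, t=0.0):
    """Loose sketch of the border-statistics idea in 20100110107.

    The "consolidated" difference here is simply the absolute difference
    of mean luminance between opposing border bands; the real
    consolidation also uses the standard deviation (uniformity), which
    the abstract does not spell out, so this part is an assumption.
    """
    h, w = luma.shape
    bh, bw = max(1, int(h * band_frac)), max(1, int(w * band_frac))
    dif_tb = abs(luma[:bh].mean() - luma[-bh:].mean())
    dif_lr = abs(luma[:, :bw].mean() - luma[:, -bw:].mean())
    # Choose portrait when the left/right contrast dominates, per the abstract.
    return dif_lr - dif_tb > t
```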
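The low-power edge-preserving smooth filter entry (20110242129) similarly hinges on one concrete detail: forcing the weight sum to a power of two so that normalization becomes a bit shift rather than a division. The sketch below shows only that normalization trick; how the neighbor weights are derived (the edge-preserving part) and the window size are assumptions, with the neighbor weights simply passed in by the caller.

```python
def edge_preserving_smooth(center, neighbors, neighbor_weights, shift=4):
    """Power-of-two weight-sum normalization sketched from 20110242129.

    In a real edge-preserving filter the neighbor weights would shrink
    for neighbors that differ strongly from the center pixel; here they
    are supplied by the caller. The center weight absorbs whatever is
    left so the total weight equals 2**shift, letting a bit shift
    replace the normalizing division.
    """
    total = 1 << shift  # weight sum forced to a power of two (e.g. 16)
    center_weight = total - sum(neighbor_weights)
    if center_weight <= 0:
        raise ValueError("neighbor weights must sum to less than 2**shift")
    weighted_sum = center * center_weight + sum(
        p * w for p, w in zip(neighbors, neighbor_weights)
    )
    return weighted_sum >> shift  # normalize by the power-of-two weight sum
```

For example, `edge_preserving_smooth(120, [118, 122, 90, 125], [3, 3, 1, 3])` returns 119: the dissimilar neighbor (90) carries a small weight, so the smoothed value stays close to the center pixel.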