21st week of 2013 patent application highlights part 28 |
Patent application number | Title | Published |
20130127984 | System and Method for Fast Tracking and Visualisation of Video and Augmenting Content for Mobile Devices - The present invention relates to a method and an apparatus for panoramic visualization on hand held and similar devices. The hand held devices include at least a camera sensor, a CPU and a display module, and often one or more additional sensors such as an accelerometer, magnetometer or gyroscope. The main features of the system are a sub-system for displaying on-line augmented reality content while building a panoramic view at the same time, a sub-system for off-line display of panoramic augmented reality content, and a user trigger to switch between the two modes. By alternating the on-line and off-line phases, the system provides ease of use as well as reduced energy consumption on the device. The system does not attempt to build a mosaic panoramic image; it temporarily stores the acquired video frames together with their relative positions with respect to each other. | 2013-05-23 |
20130127985 | System And Method For Capturing Adjacent Images By Utilizing A Panorama Mode - A system and method for capturing adjacent images includes an imaging device with a panorama manager that performs various procedures to manipulate one or more image parameters that correspond to adjacent frames of captured image data. An image-stitching software program may then produce a cohesive combined panorama image from the adjacent frames of image data by utilizing the manipulated image parameters. | 2013-05-23 |
20130127986 | COMMON HOLOGRAPHIC IMAGING PLATFORM - An imaging system and method are provided with a transition in depth of field, the system comprising: an array of selectably activated sensors comprising first sensors receiving light of a first wavelength and second sensors receiving light of a second wavelength, the first wavelength corresponding to a short effective focal length and the second wavelength corresponding to a long effective focal length; an aperture common to both the first and second sensors configured with chromatic distortion such that a focal plane of light of the first wavelength is different from that of light of the second wavelength; and a controller that digitally transitions between outputs of the first and second sensors based on the focal length required. | 2013-05-23 |
20130127987 | SIGNALING DEPTH RANGES FOR THREE-DIMENSIONAL VIDEO CODING - In one example, a video coder, such as a video encoder or a video decoder, is configured to code a first set of one or more depth range values for a first set of video data, wherein the first set of one or more depth range values have respective first precisions, code a second set of one or more depth range values for a second set of video data, wherein the second set of one or more depth range values have respective second precisions different than the respective first precisions, and code at least a portion of the second set of video data using the second set of one or more depth range values. In this manner, the video coder may update precisions (e.g., numbers of bits) used to represent depth range values for coding multiview plus depth video data. | 2013-05-23 |
20130127988 | MODIFYING THE VIEWPOINT OF A DIGITAL IMAGE - A method for modifying the viewpoint of a main image of a scene captured from a first viewpoint. The method uses one or more complementary images of the scene captured from viewpoints that are different from the first viewpoint. A warped main image is determined corresponding to a target viewpoint by warping the main image responsive to a corresponding range map, wherein the warped main image includes one or more holes corresponding to scene content that was occluded. Warped complementary images are similarly determined by warping the complementary images to the target viewpoint responsive to corresponding range maps. Pixel values to fill the one or more holes in the warped main image are determined using pixel values at corresponding pixel locations in the warped complementary images. | 2013-05-23 |
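The hole-filling step described in 20130127988 can be illustrated with a minimal sketch: pixels occluded in the warped main image are taken from co-located pixels of the warped complementary images. The 1-D "images" and the first-match fill rule below are simplifying assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch: fill occlusion holes (None) in a warped main
# image using co-located pixels from warped complementary images.

def fill_holes(warped_main, warped_complements):
    """Replace None entries using the first complementary image that
    has a value at the same pixel location."""
    filled = list(warped_main)
    for i, value in enumerate(filled):
        if value is None:
            for comp in warped_complements:
                if comp[i] is not None:
                    filled[i] = comp[i]
                    break
    return filled

# Toy warped main image with two occlusion holes, plus two warped
# complementary images that cover different parts of the scene.
main = [10, None, 30, None, 50]
comps = [[None, 21, None, None, None],
         [11, 22, 33, 44, 55]]
result = fill_holes(main, comps)
```

In a real pipeline the "pixel values at corresponding locations" would come from range-map-driven warps of full 2-D frames; the sketch only shows the fallback order across complementary views.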
20130127989 | Conversion of 2-Dimensional Image Data into 3-Dimensional Image Data - Two dimensional data is converted into three dimensional picture data in a method that can provide a real time high quality display during conversion. Pixels of a frame of picture data are segmented to create pixel segments by applying a k-means algorithm. The k-means algorithm groups pixels based on closeness of a combined value that includes luma, chroma, and motion information. By balancing this information the algorithm collects pixels into groups that are assigned relative depths to turn the two-dimensional information into three-dimensional information for display. Another method includes determining a depth map for the different pixel segments by determining an amount of motion of one of the pixel segments between two frames of a video and scaling the three-dimensional depth of one of the pixel segments based on the amount of motion between the two frames. | 2013-05-23 |
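The grouping idea in 20130127989 — clustering pixels by a combined value of luma, chroma, and motion — can be sketched with plain k-means over per-pixel feature vectors. The feature layout, seed, and iteration count below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: k-means over (luma, chroma, motion) feature
# vectors, grouping pixels that are close in the combined space.

import random

def kmeans(features, k, iters=20, seed=0):
    """Cluster feature vectors into k groups; returns a label per vector."""
    rng = random.Random(seed)
    centers = rng.sample(features, k)
    labels = [0] * len(features)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        for i, f in enumerate(features):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centers[c])),
            )
        # Update step: recompute each center as the mean of its members.
        for c in range(k):
            members = [features[i] for i in range(len(features)) if labels[i] == c]
            if members:
                centers[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return labels

# Two clearly separable pixel groups: dark/static vs bright/moving.
pixels = [(0.1, 0.2, 0.0), (0.15, 0.25, 0.05),
          (0.9, 0.8, 1.0), (0.95, 0.85, 0.9)]
labels = kmeans(pixels, k=2)
```

In the patented method each resulting segment would then be assigned a relative depth (e.g. scaled by its inter-frame motion) to produce the 3-D version of the 2-D frame.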
20130127990 | VIDEO PROCESSING APPARATUS FOR GENERATING VIDEO OUTPUT SATISFYING DISPLAY CAPABILITY OF DISPLAY DEVICE ACCORDING TO VIDEO INPUT AND RELATED METHOD THEREOF - An exemplary video processing apparatus includes a first detection unit, a second detection unit, a format conversion control unit, and a format conversion processing unit. The first detection unit detects a video format of a video input. The second detection unit detects a display capability of a display device. The format conversion control unit determines whether the video input has a three-dimensional (3D) video format or a two-dimensional (2D) video format by referring to the detected video format, determines whether the display device supports a 3D video format or a 2D video format by referring to the detected display capability, and accordingly generates a control signal. The format conversion processing unit is controlled by the control signal to generate a video output satisfying the detected display capability according to the video input when the video input does not satisfy the detected display capability. | 2013-05-23 |
20130127991 | SUPPLEMENTARY INFORMATION CODING APPARATUS AND METHOD FOR 3D VIDEO - An encoding apparatus and method for encoding supplementary information of a three-dimensional (3D) video may determine that an updated parameter among parameters included in camera information and parameters included in depth range information is a parameter to be encoded. The encoding apparatus may generate update information including information about the updated parameter and information about a parameter not updated, perform floating-point conversion of the updated parameter, and encode the update information and the floating-point converted parameter. A decoding apparatus and method for decoding supplementary information of a 3D video may receive and decode encoded supplementary information by determining whether the encoded supplementary information includes update information. When update information is included, the decoding apparatus may classify the decoded supplementary information, perform floating-point inverse conversion of the updated parameter, and store latest supplementary information in a storage. | 2013-05-23 |
20130127992 | METHOD OF CONVERTING VIDEO IMAGES TO THREE-DIMENSIONAL VIDEO STREAM - A method is provided which comprises a first processing step that demultiplexes each of a left-eye video image file and a right-eye video image file which were recorded with time synchronization and in accordance with MPEG | 2013-05-23 |
20130127993 | METHOD FOR STABILIZING A DIGITAL VIDEO - A method for stabilizing an input digital video. Input camera positions are determined for each of the input video frames, and an input camera path is determined representing input camera position as a function of time. A smoothing operation is applied to the input camera path to determine a smoothed camera path, and a corresponding sequence of smoothed camera positions. A stabilized video frame is determined corresponding to each of the smoothed camera positions by: selecting an input video frame having a camera position near to the smoothed camera position; warping the selected input video frame responsive to the input camera position; warping a set of complementary video frames captured from different camera positions than the selected input video frame; and combining the warped input video frame and the warped complementary video frames to form the stabilized video frame. | 2013-05-23 |
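The core of the stabilization pipeline in 20130127993 — estimate a camera path, smooth it, and warp each frame onto the smoothed path — can be sketched in one dimension. The box-filter smoothing and the 1-D path are simplifying assumptions; the patent operates on full camera positions and warps whole frames.

```python
# Hypothetical sketch: smooth a 1-D per-frame camera path and derive
# the per-frame shift that moves each input frame onto the smooth path.

def smooth_path(path, radius=2):
    """Moving-average smoothing of a 1-D camera path (one position per frame)."""
    smoothed = []
    for i in range(len(path)):
        lo = max(0, i - radius)
        hi = min(len(path), i + radius + 1)
        window = path[lo:hi]
        smoothed.append(sum(window) / len(window))
    return smoothed

def stabilizing_shifts(path, radius=2):
    """Per-frame shift that warps each input frame onto the smoothed path."""
    return [s - p for p, s in zip(path, smooth_path(path, radius))]

# A jittery pan: steady motion of +1 per frame plus alternating shake.
jittery = [i + (0.5 if i % 2 else -0.5) for i in range(10)]
shifts = stabilizing_shifts(jittery)
stabilized = [p + d for p, d in zip(jittery, shifts)]
```

The patent's additional step — blending warped complementary frames captured from other positions — would fill the border regions that a single-frame warp leaves empty.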
20130127994 | VIDEO COMPRESSION USING VIRTUAL SKELETON - Optical sensor information captured via one or more optical sensors imaging a scene that includes a human subject is received by a computing device. The optical sensor information is processed by the computing device to model the human subject with a virtual skeleton, and to obtain surface information representing the human subject. The virtual skeleton is transmitted by the computing device to a remote computing device at a higher frame rate than the surface information. Virtual skeleton frames are used by the remote computing device to estimate surface information for frames that have not been transmitted by the computing device. | 2013-05-23 |
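The bandwidth scheme in 20130127994 — send the compact virtual skeleton every frame, the heavy surface data only every Nth frame, and let the receiver estimate the missing surface frames from skeleton motion — can be sketched as follows. The 1-D skeleton, the stride, and the linear motion estimate are illustrative assumptions.

```python
# Hypothetical sketch: skeleton frames sent every tick, surface frames
# only on stride boundaries; the receiver shifts the last surface frame
# by the skeleton's motion to estimate the untransmitted frames.

def transmit(skeletons, surfaces, surface_stride):
    """Sender: pair every skeleton frame with a surface frame only on
    stride boundaries; otherwise send the skeleton alone."""
    stream = []
    for t, skel in enumerate(skeletons):
        surf = surfaces[t] if t % surface_stride == 0 else None
        stream.append((skel, surf))
    return stream

def reconstruct(stream):
    """Receiver: reuse the last surface frame, translated by the
    skeleton offset since that frame, when no surface was transmitted."""
    frames, last_surf, last_skel = [], None, None
    for skel, surf in stream:
        if surf is not None:
            last_surf, last_skel = surf, skel
        offset = skel - last_skel  # skeleton motion since last surface frame
        frames.append([p + offset for p in last_surf])
    return frames

skeletons = [0, 1, 2, 3, 4]                      # 1-D root-joint position per tick
surfaces = [[10, 20], [11, 21], [12, 22], [13, 23], [14, 24]]
stream = transmit(skeletons, surfaces, surface_stride=2)
received = reconstruct(stream)
```

With purely linear motion the estimate is exact; real surface deformation would make the in-between frames approximations, which is the trade the patent accepts for the lower frame rate.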
20130127995 | PREPROCESSING APPARATUS IN STEREO MATCHING SYSTEM - A preprocessing apparatus in a stereo matching system is provided. In the preprocessing apparatus, coordinate information of a stereo camera is stored, a new address of each pixel is specified using the coordinate information, and left and right images received from the stereo camera are rectified using the new pixel addresses. | 2013-05-23 |
20130127996 | METHOD OF RECOGNIZING STAIRS IN THREE DIMENSIONAL DATA IMAGE - A method of recognizing stairs in a 3D data image includes an image acquirer that acquires a 3D data image of a space in which stairs are located. An image processor calculates a riser height between two consecutive treads of the stairs in the 3D data image, identifies points located between the two consecutive treads according to the calculated riser height, and detects a riser located between the two consecutive treads through the points located between the two consecutive treads. Then, the image processor calculates a tread depth between two consecutive risers of the stairs in the 3D data image, identifies points located between the two consecutive risers according to the calculated tread depth, and detects a tread located between the two consecutive risers through the points located between the two consecutive risers. | 2013-05-23 |
20130127997 | 3D IMAGE PICKUP OPTICAL APPARATUS AND 3D IMAGE PICKUP APPARATUS - An optical apparatus used for a 3D image pickup apparatus for taking two subject images having a disparity by using two lens apparatuses, each of which is directly connectable to an image pickup apparatus, and one image pickup apparatus, the optical apparatus including: a first attaching unit for detachably attaching a first lens apparatus; a second attaching unit for detachably attaching a second lens apparatus; a camera attaching unit for detachably attaching the image pickup apparatus, the image pickup apparatus including an image pickup portion; and a switch unit for alternately switching light rays from the first and second lens apparatuses to guide the light ray to the image pickup apparatus in a state that the first and second lens apparatuses and the image pickup apparatus are connected to the optical apparatus. Intermediate images are formed in the optical apparatus by the first and second lens apparatuses. | 2013-05-23 |
20130127998 | MEASUREMENT APPARATUS, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - A measurement apparatus includes a projection unit configured to project pattern light onto an object to be measured, an imaging unit configured to capture an image of the object to be measured on which the pattern light is projected to acquire a captured image of the object to be measured, a measurement unit configured to measure a position and/or orientation of the object to be measured on the basis of the captured image, a position and orientation of the projection unit, and a position and orientation of the imaging unit, a setting unit configured to set identification resolution of the pattern light using a range of variation in the position and/or orientation of the object to be measured; and a change unit configured to change a pattern shape of the pattern light in accordance with the identification resolution. | 2013-05-23 |
20130127999 | CALIBRATION APPARATUS FOR VEHICLE MOUNTED CAMERA - According to one embodiment, an apparatus for calibrating a camera while a vehicle is moving includes a rearward monitoring camera mounted in a vehicle to acquire image information, a memory that stores a camera calibration program for computationally determining and calibrating mounting parameters of the camera while the vehicle is moving, using the image information acquired by the camera, a gear position detector to detect gear positions of the vehicle and generate position signals according to the gear positions, and a control unit that receives the image information and, when the gear position detector detects a position signal for other than a reverse gear position, reads out and executes the camera calibration program stored in the memory. | 2013-05-23 |
20130128000 | MOBILE TERMINAL AND CONTROL METHOD THEREOF - A mobile terminal may include a camera module and a controller. The camera module may include a first capture unit to capture a subject to acquire a first image, a second capture unit to capture the subject to acquire a second image, an actuator disposed at least one of the first capture unit and the second capture unit, and a drive unit configured to drive the actuator. The controller may analyze a perceived 3-dimensional (3D) image formed by using the first image and the second image, and may produce a relative position compensation value corresponding to a relative displacement between the first capture unit and the second capture unit according to a result of the analysis. The drive unit may drive the actuator to control a relative position between the first capture unit and the second capture unit based on the relative position compensation value. | 2013-05-23 |
20130128001 | METHOD AND SYSTEM FOR DETECTING OBJECT ON A ROAD - Disclosed are a method and a system for detecting an object on a road. The method comprises a step of simultaneously capturing two depth maps, and then calculating a disparity map; a step of obtaining, based on the disparity map, a V-disparity image by adopting a V-disparity algorithm; a step of detecting an oblique line in the V-disparity image, and then removing points in the disparity map, corresponding to the oblique line so as to acquire a sub-disparity map excluding the road; a step of detecting plural vertical lines in the V-disparity image, and then extracting, for each of the plural vertical lines, points corresponding to this vertical line from the sub-disparity map as an object sub-disparity map corresponding to this vertical line; and a step of merging any two rectangular areas of the object sub-disparity maps approaching each other, into a rectangular object area. | 2013-05-23 |
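The V-disparity step at the heart of 20130128001 can be sketched compactly: for every image row, histogram the disparity values in that row. A flat road then appears as an oblique line in the (row, disparity) image, while an upright obstacle appears as a vertical line. The tiny synthetic disparity map below is an illustrative assumption, not data from the patent.

```python
# Hypothetical sketch: build a V-disparity image (row-wise disparity
# histogram) from a dense disparity map.

def v_disparity(disp_map, max_disp):
    """result[v][d] counts pixels in row v having disparity d."""
    hist = [[0] * (max_disp + 1) for _ in disp_map]
    for v, row in enumerate(disp_map):
        for d in row:
            if 0 <= d <= max_disp:
                hist[v][d] += 1
    return hist

# Synthetic scene: road disparity grows with row index (nearer rows
# have larger disparity), plus an obstacle in columns 2-3 of rows 0-3
# at constant disparity 3.
road = [[v if not (2 <= u <= 3 and v <= 3) else 3 for u in range(6)]
        for v in range(5)]
vd = v_disparity(road, max_disp=4)
```

The patent's subsequent steps — detecting the oblique line, removing its points to strip the road, then extracting vertical lines as object candidates — would typically be done with a Hough transform or line fit over this histogram image.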
20130128002 | STEREOGRAPHY DEVICE AND STEREOGRAPHY METHOD - In a conventional stereography device, stereography is unavailable if the specifications of the plurality of image pickup units differ from each other. Therefore, a stereography device includes: a first photographing unit; a second photographing unit whose optical specification is different from that of the first photographing unit; and an operational processing unit. The first photographing unit and the second photographing unit output a first photographed image and a second photographed image, respectively, of a target to be photographed including an identical object; the first photographed image has a first image property determined by an optical specification of the first photographing unit; and the second photographed image has a second image property determined by an optical specification of the second photographing unit. | 2013-05-23 |
20130128003 | STEREOSCOPIC IMAGE CAPTURING DEVICE, AND STEREOSCOPIC IMAGE CAPTURING METHOD - A stereoscopic image capturing apparatus which is capable of capturing a stereoscopic image including a left-eye image and a right-eye image. The stereoscopic image capturing apparatus includes: a stereoscopic image imaging unit including a first image capturing unit which is operable to capture the left-eye image and a second image capturing unit which is operable to capture the right-eye image; a display unit which is operable to display information; and a controller which is operable to control the stereoscopic image imaging unit and the display unit, in which: the controller derives a subject distance that satisfies a prescribed condition for parallax of a subject in the stereoscopic image based on information on a horizontal angle of view and a convergence angle of the stereoscopic image imaging unit; and the display unit displays information on the subject distance derived by the controller. | 2013-05-23 |
20130128004 | Modular Night-Vision System with Fused Optical Sensors - A modular night visualization system with fused optical sensors comprises a light-intensifying base module and an image-sensing auxiliary module. The connections according to the system provide a modular system in which the auxiliary module is easily interchangeable with a different auxiliary module. Thus, a compact and modular visualization system that is operationally easier to use is provided. The system may be a night-vision field-glass for a foot soldier enabling fusion of sensors. Any other application for a vision field-glass with fused sensors is possible. | 2013-05-23 |
20130128005 | TWO-PARALLEL-CHANNEL REFLECTOR WITH FOCAL LENGTH AND DISPARITY CONTROL - A two-parallel-channel reflector (TPCR) with focal length and disparity control is used after being combined with an imaging device. A left parallel channel and a right parallel channel are formed in the TPCR, so that the imaging device can synchronously perform an imaging operation on a left side view and a right side view of a scene, so as to obtain a stereoscopic image. Each parallel channel is bounded by two curved reflecting mirrors, so that captured light rays may be reflected in parallel in the channel, and an operator may adjust a convergence angle and an interocular distance between the left side view and the right side view, so as to control the focal length and disparity during imaging as required. | 2013-05-23 |
20130128006 | IMAGE CAPTURING APPARATUS, PLAYBACK APPARATUS, CONTROL METHOD, IMAGE CAPTURING SYSTEM AND RECORDING MEDIUM - A first image nonalignment amount, between a first type image and a second type image captured when the focusing lens is located at an in-focus position, of any subject, that is positioned a predetermined distance from the image capturing apparatus, is obtained. A second image nonalignment amount, between the first type image and the second type image obtained when the focusing lens is located at a predetermined reference position, of the subject is obtained. Then an image shift amount, for at least one of the first type image and the second type image captured when the focusing lens is located at the in-focus position, is determined, so that the difference between the first image nonalignment amount and the second image nonalignment amount falls within a predetermined range. | 2013-05-23 |
20130128007 | THREE-DIMENSIONAL IMAGE PICKUP SYSTEM - Provided is a three-dimensional image pickup system including: a pair of lens apparatus; a camera apparatus for picking up subject images formed by the pair of lens apparatus; a convergence angle changing unit for changing a convergence angle of the pair of lens apparatus; a controller for controlling the convergence angle of the pair of lens apparatus in association with an operation of focus lens units of the pair of lens apparatus; and an interlock switching unit for switching between an interlocked state in which the convergence angle is interlocked with the operation of the focus lens units and a non-interlocked state in which the convergence angle is not interlocked with the operation of the focus lens units. | 2013-05-23 |
20130128008 | SMART PSEUDOSCOPIC-TO-ORTHOSCOPIC CONVERSION (SPOC) PROTOCOL FOR THREE-DIMENSIONAL (3D) DISPLAY - A smart pseudoscopic to orthoscopic conversion (SPOC) protocol, and method for using the SPOC protocol for three-dimensional imaging, are disclosed. The method allows full control over the optical display parameters in Integral Imaging (InI) monitors. From a given collection of elemental images, a new set of synthetic elemental images (SEIs) ready to be displayed in an InI monitor can be calculated, in which the pitch, microlenses focal length, number of pixels per elemental cell, depth position of the reference plane, and grid geometry of the microlens array (MLA) can be selected to fit the conditions of the display architecture. | 2013-05-23 |
20130128009 | IMAGING APPARATUS, IMAGE CORRECTION METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - An imaging apparatus includes: an imaging unit for acquiring a plurality of view point images imaged from a plurality of viewpoints by a plurality of imaging optical systems each including a zoom lens; a storage unit for storing an error of the imaging optical system; an optical zoom magnification specifying unit for receiving specification instruction of an optical zoom magnification; a zoom lens driving unit for moving the zoom lens to a position corresponding to the instruction of the optical zoom magnification; and a correction unit for setting an electronic zoom magnification corresponding to the position of the zoom lens, magnifying a viewpoint image to be corrected from within the plurality of viewpoint images based on the electronic zoom magnification, and extracting a part of the magnified viewpoint image to eliminate an object point shift amount corresponding to the error from the magnified viewpoint image. | 2013-05-23 |
20130128010 | Monoscopic 3D Image Photographing Device and 3D Camera - A 3D camera for photographing a 3D image which can be viewed in a monoscopic manner without three-dimensional glasses by using an existing camera. The present invention provides: a device which rotates a monoscopic camera so that an image is photographed at various angles on various frames within one second by rotating the existing camera by 360 degrees toward a front subject in order to photograph the subject; and a 3D camera having an embedded device in which a lens of the camera is rotated by 360 degrees toward the front subject. Further, the present invention provides a 3D image photographing device and a 3D camera which enable free adjustment of a rotational width of the monoscopic lens and thus free adjustment of the depth of horizontal and vertical images when photographing the images, thereby solving eyestrain and dizziness which may occur when the 3D image is viewed. | 2013-05-23 |
20130128011 | HEAD TRACKING EYEWEAR SYSTEM - In some embodiments, a system for tracking with reference to a three-dimensional display system may include a display device, an image processor, a surface including at least three emitters, at least two sensors, and a processor. The display device may image, during use, a first stereo three-dimensional image. The surface may be positionable, during use, with reference to the display device. At least two of the sensors may detect, during use, light received from at least three of the emitters as light blobs. The processor may correlate, during use, the assessed referenced position of the detected light blobs such that a first position/orientation of the surface is assessed. The image processor may generate, during use, the first stereo three-dimensional image using the assessed first position/orientation of the surface with reference to the display. The image processor may generate, during use, a second stereo three-dimensional image using an assessed second position/orientation of the surface with reference to the display. | 2013-05-23 |
20130128012 | SIMULATED HEAD MOUNTED DISPLAY SYSTEM AND METHOD - To overcome problems with vergence, a binocular head mounted display (HMD) is used in a simulator in which an out-the-window scene is displayed in real time on a screen arrangement. Imagery for the left and right eyes of the HMD is derived by generating a starting HMD image for a Cyclops viewpoint between the user's eyes, and then rendering respective views for each eye from the position of the eye in a virtual 3D model of the screen arrangement, wherein the starting HMD image is frustum projected against the screen arrangement of the 3D model. | 2013-05-23 |
20130128013 | 3D TV Display System - A wall fixture supports at least one display screen and a display shelf. The display shelf is located adjacent the display screen and includes a holder for housing a pair of three-dimensional (3D) glasses that are tethered to the holder. The display screen displays two-dimensional (2D) media content. In response to receiving an indication from a presence sensor that the 3D glasses located in a glasses holder of the display shelf have been removed from the glasses holder, 3D media content is displayed on the display monitor. | 2013-05-23 |
20130128014 | System for stereoscopically viewing motion pictures - Moving pictures, as may be exemplified by television programming, are viewed stereoscopically. The system preferably comprises a visual display screen upon which may be displayed a left-to-right reversed visual display. A reflecting surface is positioned opposite the visual display screen for reflecting imagery from the visual display screen toward the viewer. The reflected imagery provides a reflected left-to-right correct visual display. The visual display screen is spaced from the reflecting surface such that the viewer's perception of the visual display screen causes the viewer to focus on a point behind the reflecting surface thereby requiring the viewer to perceive laterally offset reflections of the visual display screen at the reflecting surface. In one embodiment, a cabinet assembly enables the viewer or user to selectively position the visual display screen relative to the primary reflecting surface for enhancing the perception of depth in imagery effected by the perceived laterally offset reflections. | 2013-05-23 |
20130128015 | DISPLAY DEVICE AND DRIVING METHOD THEREOF - A display device configured to display a selected image type, the image type including a first image and a second image, according to an image source signal, the display device including: a display unit including a first group pixel and a second group pixel; and an image processor that arranges image signals for each frame of the display device according to a display sequence of the first image and the second image in the first group pixel and the second group pixel, and changes the image type displayed during a remaining period for each frame unit of the image source signal. | 2013-05-23 |
20130128016 | Viewing of Different Full-Screen Television Content by Different Viewers At the Same Time Using Configured Glasses and A Related Display - Different full screen content is displayed on the same television at the same time from the perspective of the viewer by displaying the content as two full-screen sequential frames. The different full screen content may be provided as a single combined frame signal such as a side-by-side, top-bottom, or checkerboard signal which is then displayed as two sequential full screen frames. Configured glasses, such as polarized or shutter glasses, are used to view the different content as full screen content, where one pair of configured glasses views an initial one of the sequential frames but blocks the subsequent one and another pair of configured glasses blocks the initial one of the sequential frames and views the subsequent one. Shutter glasses have both lenses open during the initial frame and both closed during the subsequent one. For polarized glasses, the initial frame has a polarization matching both lenses of one pair of glasses while the subsequent frame has a polarization that differs from both lenses. | 2013-05-23 |
20130128017 | GLASSES AND METHOD FOR PAIRING THEREOF - Glasses and a method for pairing thereof are provided. The method includes: if a preset event occurs, determining whether a transmission packet for driving the glasses has been received from a display apparatus; and if the transmission packet is determined not to have been received, automatically performing a pairing of the glasses with the display apparatus. | 2013-05-23 |
20130128018 | METHOD AND APPARATUS FOR PRESENTING MEDIA CONTENT - A system that incorporates teachings of the present disclosure may include, for example, presenting a plurality of unassociated media programs from a single presentation device having overlapping presentation periods, receiving information from a viewing apparatus to adjust an intensity of emitted light associated with one of the unassociated media programs, and adjusting the intensity of the emitted light in a manner that is detectable by the viewing apparatus supplying the information. Other embodiments are disclosed and contemplated. | 2013-05-23 |
20130128019 | DISPLAY DEVICE, DISPLAY CONTROL METHOD, DISPLAY CONTROL PROGRAM, AND COMPUTER READABLE RECORDING MEDIUM - A display device ( | 2013-05-23 |
20130128020 | IMAGE PICKUP APPARATUS, ENDOSCOPE AND IMAGE PICKUP APPARATUS MANUFACTURING METHOD - An image pickup apparatus includes: a cover glass portion having a function of a right angle prism; an image pickup device substrate portion including an image pickup device on a first principal surface and a back-face electrode on a second principal surface, the back-face electrode being connected to the image pickup device via a through-wiring; and a bonding layer that bonds the cover glass portion and the image pickup device substrate portion that have a same outer dimension. | 2013-05-23 |
20130128021 | ROTATABLE PRESENTATION BILLBOARD - A billboard includes an image capturing device, a processor, and a driving unit. The image capturing device uses a camera to capture consecutive scene images of a scene area. The processor uses a geometric figure to mark a figure in each captured image. If the center of the geometric figure is not the center of the captured image, the processor generates a control signal to direct the driving unit to rotate the billboard until the center of the geometric figure is the center of the captured image. | 2013-05-23 |
20130128022 | INTELLIGENT MOTION CAPTURE ELEMENT - Intelligent motion capture element that includes sensor personalities that optimize the sensor for specific movements and/or pieces of equipment and/or clothing and may be retrofitted onto existing equipment or interchanged therebetween and automatically detected for example to switch personalities. May be used for low power applications and accurate data capture for use in healthcare compliance, sporting, gaming, military, virtual reality, industrial, retail loss tracking, security, baby and elderly monitoring and other applications for example obtained from a motion capture element and relayed to a database via a mobile phone. System obtains data from motion capture elements, analyzes data and stores data in database for use in these applications and/or data mining. Enables unique displays associated with the user, such as 3D overlays onto images of the user to visually depict the captured motion data. Enables performance related equipment fitting and purchase. Includes active and passive identifier capabilities. | 2013-05-23 |
20130128023 | SYSTEM FOR GENERATING VIRTUAL CLOTHING EXPERIENCES - A system for generating a virtual clothing experience has a display for mounting on a wall, one or more digital cameras for capturing first images of a person standing in front of the display, and an image processing module for synthesizing the first images and generating a display image that substantially appears, to the person, like a reflection of the person in a mirror positioned at the display. The cameras capture second images of a garment with the person; the module synthesizes the second images with the first images to generate an image in which, to the person, the reflection appears to wear the garment. A home version of the system may be formed with a home computer and a database storing garment images in cooperation with a manufacturer. | 2013-05-23 |
20130128024 | IMAGE OBTAINING APPARATUS, IMAGE OBTAINING METHOD, AND IMAGE OBTAINING PROGRAM - An image obtaining apparatus includes: a light source configured to irradiate a biological sample having a fluorescent label with an excitation light, the excitation light exciting the fluorescent label; a focus moving unit configured to move a focus position of an optical system in the thickness direction of the biological sample; and a data processing unit configured to expose an image sensor to light while moving the focus position of the optical system in each of a plurality of preset scanning ranges to thereby obtain fluorescent images of the biological sample, each of the plurality of scanning ranges having a predetermined scanning length, the predetermined scanning length being smaller than the length of a bright-point detection range of the biological sample in the thickness direction, the center positions of the scanning ranges being different from each other in the thickness direction. | 2013-05-23 |
20130128025 | Device and Method for Acquiring a Microscopic Image of a Sample Structure - A device and a method for acquiring a microscopic image of a sample structure are described. An optic for imaging the sample structure and a reference structure is provided, as well as a drift sensing unit for sensing a drift of the sample structure relative to the optic on the basis of the imaged reference structure. The optic comprises a first sharpness plane for imaging the sample structure and at the same time a second sharpness plane, modifiable in location relative to the first sharpness plane, for imaging the reference structure. | 2013-05-23 |
20130128026 | IMAGE PROCESSING DEVICE FOR DEFECT INSPECTION AND IMAGE PROCESSING METHOD FOR DEFECT INSPECTION - An image processing device for defect inspection that processes image data taken continually in time from a moving molded sheet with an area camera, having a data extraction unit extracting line data at an identical position from each different image data, for a plurality of different positions on the image data; a data storage unit arranging the plurality of line data in time series for each of the positions on the image data to generate a plurality of line-composited image data; a change amount calculation unit performing a differential operator operation on the plurality of line-composited image data to generate a plurality of emphasized image data; an identical position judgment/extraction unit extracting data indicating an identical position of the molded sheet from the plurality of emphasized image data; and an integration unit accumulating, at respective pixels, brightness values of the extracted emphasized image data to generate defect inspection image data. | 2013-05-23 |
20130128027 | Image Processing Apparatus And Image Processing Method - Provided is an image processing apparatus which can prevent deterioration in defect detection accuracy. The image processing apparatus includes an imaging unit and an image processing unit. The imaging unit converts a color image imaged by the imaging element to an HDR image based on a conversion characteristic for expanding a dynamic range, to output the HDR image. The image processing unit inversely converts color component values with respect to each pixel of the HDR image outputted from the imaging unit, based on the conversion characteristic, calculates brightness and a ratio of the color component values with respect to each pixel from the inversely converted color component values, and converts the calculated brightness based on the conversion characteristic, to generate a post-correction HDR image based on a ratio of the converted brightness and the calculated color component values. | 2013-05-23 |
20130128028 | Image Processing Apparatus - Provided is an image processing apparatus which can easily perform focus adjustment simply by switching a program in accordance with the kind of inspection object. The image processing apparatus according to the invention includes: an imaging unit for imaging a region including an inspection object; a focus adjustment mechanism; and a control unit for controlling an operation of the focus adjustment mechanism. A plurality of pieces of inspection condition data are set, the data being made up of a plurality of setting items including focus position data. When a switching instruction from one inspection condition data to another is accepted, the operation of the focus adjustment mechanism is controlled based on the focus position data included in the inspection condition data after switching. | 2013-05-23 |
20130128029 | Holding Device for Visually Inspecting a Tire - A holding device for inspecting the internal surface located between a first and a second bead of a tire (P), comprising means for acquiring an image of the internal surface. | 2013-05-23 |
20130128030 | Thin Plenoptic Cameras Using Solid Immersion Lenses - Methods and apparatus for capturing and rendering high-quality photographs using relatively small, thin plenoptic cameras. Plenoptic camera technology, in particular focused plenoptic camera technology including but not limited to super-resolution techniques, and other technologies such as solid immersion lens (SIL) technology may be leveraged to provide thin form factor, megapixel resolution cameras suitable for use in mobile devices and other applications. In addition, at least some embodiments of these cameras may also capture radiance, allowing the imaging capabilities provided by plenoptic camera technology to be realized through appropriate rendering techniques. Hemispherical SIL technology, along with multiple main lenses and a mask on the photosensor, may be employed in some thin plenoptic cameras. Other thin cameras may include a layer between hemispherical SILs and the photosensor that effectively implements superhemispherical SIL technology in the camera. | 2013-05-23 |
20130128031 | APPARATUS AND METHOD FOR MEASURING MOISTURE CONTENT IN STEAM FLOW - An apparatus and method for estimating moisture content in a steam flow through a steam turbine is disclosed. At least a portion of a steam flow path through a turbine is illuminated using at least one laser assembly, and a plurality of digital images of the illuminated portion of the steam flow are obtained. The digital images are analyzed to measure an amount of light scattered in each digital image, and the analyses of the digital images are compared to estimate the moisture content of the steam flow. | 2013-05-23 |
20130128032 | SYSTEM AND METHOD TO VERIFY COMPLETE CONNECTION OF TWO CONNECTORS - An inspection system which verifies the complete connection of two connectors at an inspection station. An indicator is placed on one of the first and second connectors which becomes hidden from view only when the first and second connectors are in a fully connected position. A camera is positioned at the inspection station which generates an output signal representative of the field of vision of the camera. That camera output signal is coupled to an optical recognition circuit which generates an alarm signal if the indicator is present in the camera image. | 2013-05-23 |
20130128033 | Method for Smear Measurement of Display Device and Device for Smear Measurement of Display Device - The present invention discloses a method and a device for smear measurement of a display device. The method comprises the following steps: a flash animation with a moving pattern and a stationary scale is played on the display device; the smear extent of the moving pattern is judged according to the number of scale marks occupied by the smear. Because the smear is quantified in scale marks, with different mark counts corresponding to different smear extents, the smear extent can be judged accurately by observing the number of marks occupied by the smear, thereby effectively monitoring product quality. | 2013-05-23 |
20130128034 | Real-Time Player Detection From A Single Calibrated Camera - A method for detecting the location of objects from a calibrated camera involves receiving an image capturing an object on a surface from a first vantage point; generating an occupancy map corresponding to the surface; filtering the occupancy map using a spatially varying kernel specific to the object shape and the first vantage point, resulting in a filtered occupancy map; and estimating the ground location of the object based on the filtered occupancy map. | 2013-05-23 |
20130128035 | ROBOTIC ARM - An analytical laboratory system and method for processing samples is disclosed. A sample container is transported from an input area to a distribution area by a gripper comprising a means for inspecting a tube. An image is captured of the sample container. The image is analyzed to determine a sample container identification. A liquid level of the sample in the sample container is determined. A scheduling system determines a priority for processing the sample container based on the sample container identification. The sample container is transported from the distribution area to a subsequent processing module by the gripper. | 2013-05-23 |
20130128036 | Lateral Flow and Flow-through Bioassay Devices Based on Patterned Porous Media, Methods of Making Same, and Methods of Using Same - Embodiments of the invention provide lateral flow and flow-through bioassay devices based on patterned porous media, methods of making same, and methods of using same. Under one aspect, an assay device includes a porous, hydrophilic medium; a fluid impervious barrier comprising polymerized photoresist, the barrier substantially permeating the thickness of the porous, hydrophilic medium and defining a boundary of an assay region within the porous, hydrophilic medium; and an assay reagent in the assay region. | 2013-05-23 |
20130128037 | PHOTOGRAMMETRIC NETWORKS FOR POSITIONAL ACCURACY - The present invention involves a surveying system and method which determines the position of an object point using two images. First, at least two reference points appearing on the two images are correlated. Then the position of the object point is determined based on the two images and the two reference points. | 2013-05-23 |
20130128038 | METHOD FOR MAKING EVENT-RELATED MEDIA COLLECTION - A method for making a media collection associated with an event having an event location includes using a processor to receive one or more media elements from each of a plurality of media-capture devices, each media element having a capture location; defining the event in response to receiving one or more media-capture-device signals having the event location; and associating media elements having the event location received at the same time or after the event definition with a stored media-event collection corresponding to the event for subsequent use. | 2013-05-23 |
20130128039 | POSITION DEPENDENT REAR FACING CAMERA FOR PICKUP TRUCK LIFT GATES - A rear camera system for a vehicle with a rear-lift door including a camera unit mounted on the rear-lift door, the rear-lift door having open and closed positions, the camera unit having a first field of view when the rear-lift door is in the open position and a second field of view when the rear-lift door is in the closed position, the first and second fields of view overlapping in a shared field of view; a sensor configured to indicate when the rear-lift door is in the open or closed position; a controller configured to receive image data from the camera unit, determine whether the rear-lift door is in the open or closed position based on a signal received from the sensor, and adjust the image data to primarily include the shared field of view based on whether the rear-lift door is in the open position or the closed position. | 2013-05-23 |
20130128040 | SYSTEM AND METHOD FOR ALIGNING CAMERAS - A method is provided for determining a position where a reference point should be located on a display. | 2013-05-23 |
20130128041 | MOBILE AND ONE-TOUCH TASKING AND VISUALIZATION OF SENSOR DATA - The technology described herein includes a system and/or a method for data tasking and visualization of data. The method includes receiving, by a computing device, a screening policy selection from a user associated with the computing device; transmitting, by the computing device, the screening policy selection to one or more sensor platform devices; receiving, by the computing device, one or more data sets from the one or more sensor platform devices in response to the transmission of the screening policy selection; and displaying, by the computing device, the one or more data sets to the user. | 2013-05-23 |
20130128042 | HIGH-SPEED EVENT DETECTION USING A COMPRESSIVE-SENSING HYPERSPECTRAL-IMAGING ARCHITECTURE - A compressive imaging system and method for quickly detecting spectrally and spatially localized events (such as explosions or gun discharges) occurring within the field of view. An incident light stream is modulated with a temporal sequence of spatial patterns. The wavelength components in the modulated light stream are spatially separated, e.g., using a diffractive element. An array of photodetectors is used to convert subsets of the wavelength components into respective signals. An image representing the field of view may be reconstructed based on samples from some or all the signals. A selected subset of the signals are monitored to detect event occurrences, e.g., by detecting sudden changes in intensity. When the event is detected, sample data from the selected subset of signals may be analyzed to determine the event location within the field of view. The event location may be highlighted in an image being generated by the imaging system. | 2013-05-23 |
20130128043 | Lawn Mower - A vegetation cutting apparatus including a movable carriage having a conveyance system for facilitating conveyance of the carriage over ground. The conveyance system can have conveyance members for facilitating such conveyance. A cutting system can be mounted to the carriage for cutting vegetation. The cutting system can include at least one cutting member which is positionable laterally around the periphery of at least one conveyance member for enabling cutting of vegetation laterally around the at least one conveyance member. | 2013-05-23 |
20130128044 | VISION-BASED SCENE DETECTION - A method of distinguishing between daytime lighting conditions and nighttime lighting conditions based on an image captured by a vision-based imaging device along a path of travel. An image is captured by a vision-based imaging device. A region of interest is selected in the captured image. A light intensity value is determined for each pixel within the region of interest. A cumulative histogram is generated based on the light intensity values within the region of interest. The cumulative histogram includes a plurality of category bins representing the light intensity values, and each category bin identifies an aggregate value of the light intensity values assigned to it. The aggregate value within a predetermined category bin of the histogram is compared to a first predetermined threshold. A determination is made whether the image was captured during daytime lighting conditions as a function of the aggregate value within the predetermined category bin. | 2013-05-23 |
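The histogram test described in this abstract can be sketched roughly as follows; the bin count, the bin that is checked, and the threshold are illustrative assumptions, not values taken from the application.

```python
# Hypothetical sketch of the day/night classification above: build a
# cumulative histogram of pixel intensities in a region of interest and
# compare one bin's aggregate count to a threshold.

def is_daytime(roi_pixels, num_bins=16, check_bin=11, threshold=0.6):
    """roi_pixels: iterable of intensity values in [0, 255]."""
    pixels = list(roi_pixels)
    counts = [0] * num_bins
    for p in pixels:
        b = min(p * num_bins // 256, num_bins - 1)
        counts[b] += 1
    # Cumulative histogram: each bin aggregates all counts up to it.
    cumulative = []
    total = 0
    for c in counts:
        total += c
        cumulative.append(total)
    # Fraction of pixels at or below the checked bin's intensity range.
    frac_dark = cumulative[check_bin] / len(pixels)
    # Bright (daytime) scenes leave a smaller fraction in the low bins.
    return frac_dark < threshold
```

A predominantly bright region of interest concentrates its mass in the upper bins, so the low-bin aggregate stays under the threshold and the scene is classified as daytime.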
20130128045 | DYNAMIC LINE-DETECTION SYSTEM FOR PROCESSORS HAVING LIMITED INTERNAL MEMORY - A line-detection system computes, using a local memory, a result of a partial conversion of image-space pixel data from image space to Hough space. The result is analyzed for edges corresponding to a line present in the partial conversion. The line is compared against other lines detected in previously computed partial results to identify a longest line in the image. | 2013-05-23 |
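A minimal Hough-space vote, the core of the image-space-to-Hough-space conversion this abstract describes, might look like the sketch below; the angle resolution and rho bound are assumptions, and the application's strip-by-strip partial-conversion bookkeeping for limited memory is not reproduced here.

```python
import math

# Illustrative sketch: accumulate (rho, theta) votes for edge pixels and
# pick the strongest line. A limited-memory system would run this over
# partial regions and compare the winners, as the abstract describes.

def hough_votes(edge_points, num_angles=180, max_rho=200):
    """Accumulate votes keyed by (rho, angle index) for (x, y) pixels."""
    acc = {}
    for x, y in edge_points:
        for a in range(num_angles):
            theta = math.pi * a / num_angles
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            if -max_rho <= rho <= max_rho:
                acc[(rho, a)] = acc.get((rho, a), 0) + 1
    return acc

def best_line(edge_points):
    """Return (rho, angle index, votes) for the most-voted line."""
    acc = hough_votes(edge_points)
    (rho, a), votes = max(acc.items(), key=lambda kv: kv[1])
    return rho, a, votes
```

For a horizontal edge y = 5, the accumulator cell at rho = 5 and a 90-degree angle collects one vote per edge pixel, which is the peak the detector looks for.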
20130128046 | CAMERA EQUIPPED HELMET - A method and an article of manufacture are disclosed configured to allow viewing of scenes not directly in the field of view of the viewer. In various embodiments, a helmet is equipped with a video camera, facing in a direction other than the direction of view of the user of the helmet, and a display visible to the user to display the images captured by the video camera. The helmet may be used while riding a bicycle, a motorcycle, a horse, while walking, and the like. In some embodiments, the video camera transmits data wirelessly and the direction of its view is adjustable. In some embodiments, a storage device is integrated in the helmet to allow recording of the images and sounds captured by the video camera for future download to another recording medium or a computing device. | 2013-05-23 |
20130128047 | TOUCH TYPE DISPLAY MIRROR - A touch type display mirror apparatus may include a touch panel through which a user inputs a first signal, a photosensitive panel arranged on a rear surface side of the touch panel, and a display panel arranged on a rear surface side of the photosensitive panel and including a signal processing unit processing the first signal that may be input from the touch panel, wherein the signal processing unit may be connected to an angle-adjustable rear view camera and adjusts a shooting angle of the angle-adjustable rear view camera according to a second signal that may be input from the touch panel. | 2013-05-23 |
20130128048 | IMAGING SYSTEM AND IMAGING DEVICE - Disclosed herein is an imaging system including: an imaging section attached to a bottom of a vehicle to capture an omnidirectional image; and a display section adapted to display the omnidirectional image captured by the imaging section. | 2013-05-23 |
20130128049 | DRIVER ASSISTANCE SYSTEM FOR A VEHICLE - A driver assistance system for a vehicle includes a first imager disposed at a left side of the vehicle, a second imager disposed at a rear of the vehicle and a third imager disposed at a right side of the vehicle. The first, second and third imagers have respective fields of view and the first imager is spatially separated from the second imager and the third imager is spatially separated from the second imager. A display system has a single display screen that is viewable by a driver of the vehicle. The display screen is operable to display an image derived from image data captured by the first, second and third imagers. The display screen displays an image synthesized from image data captured by each of the first, second and third imagers. The displayed image approximates a view from a single virtual camera location. | 2013-05-23 |
20130128050 | GEOGRAPHIC MAP BASED CONTROL - Disclosed are methods, systems, computer readable media and other implementations, including a method that includes determining, from image data captured by a plurality of cameras, motion data for multiple moving objects, and presenting, on a global image representative of areas monitored by the plurality of cameras, graphical indications of the determined motion data for the multiple objects at positions on the global image corresponding to geographic locations of the multiple moving objects. The method further includes presenting captured image data from one of the plurality of cameras in response to selection, based on the graphical indications presented on the global image, of an area of the global image presenting at least one of the graphical indications for at least one of the multiple moving objects captured by the one of the plurality of cameras. | 2013-05-23 |
20130128051 | AUTOMATIC DETECTION BY A WEARABLE CAMERA - There is set forth herein a system including a camera device. In one embodiment the system is operative to perform image processing for detection of an event involving a human subject. There is set forth herein in one embodiment, a camera equipped system employed for fall detection. | 2013-05-23 |
20130128052 | Synchronization of Cameras for Multi-View Session Capturing - The invention provides a solution for synchronizing cameras connected via a telecommunication network for capturing a multi-view session controlled by a synchronization module. The synchronization module provides a multi-view capturing parameter determined from capturing parameters received from the at least one camera. Said multi-view capturing parameter is provided to the cameras participating in the session capturing in order to synchronize the capturing of the multi-view session. Further, a camera device is proposed which informs the synchronization module about its availability and about the capturing parameters of the camera. Said camera receives a multi-view capturing parameter which is used for capturing the session. | 2013-05-23 |
20130128053 | Method and Apparatus for Yoga Class Imaging and Streaming - The ability to view and participate in various types of instructional classes, including Yoga, remotely and on-demand has become increasingly popular and accessible. However, participating in instructional classes off-site does not replicate the same experience as participating in an instructional class on-site, live with an instructor. The claimed system and method allow the viewer participant to view and take part in an instructional class from any location and at any time without compromising the viewer's ability to experience a participatory class experience. The system and method place the instructor at the head of the classroom with live-participants arranged between the instructor and the camera with a direct line of sight between the camera and the instructor allowing for the viewer participant to have unobstructed views while simultaneously allowing for the viewer participant to have live participants in the periphery, as if the viewer was attending a live class. | 2013-05-23 |
20130128054 | System and Method for Controlling Fixtures Based on Tracking Data - Systems and methods are provided for using tracking data to control the functions of an automated fixture. Examples of automated fixtures include light fixtures and camera fixtures. A method includes obtaining a first position of a tracking unit. The tracking unit includes an inertial measurement unit and a visual indicator configured to be tracked by a camera. A first distance is computed between the automated fixture and the first position and it is used to set a function of the automated fixture to a first setting. A second position of the tracking unit is obtained. A second distance between the automated fixture and the second position is computed, and the second distance is used to set the function of the automated fixture to a second setting. | 2013-05-23 |
20130128055 | MODELING HUMAN PERCEPTION OF MEDIA CONTENT - A system includes a content evaluation device configured to receive reproduced media content and compare the reproduced media content to reference media content to determine a quality of the reproduced media content relative to the reference media content. The content evaluation device is configured to apply an entropy factor to the determined quality to model human perception of the reproduced media content. A method of determining the entropy factor includes converting the reference media content or the reproduced media content to a grayscale image, counting the number of unique luminance values in the grayscale image, determining the total number of possible luminance values, and defining the entropy factor to be the total number of pixels compared relative to the maximum number of pixels represented by any single luminance value multiplied by the number of possible luminance values. | 2013-05-23 |
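One possible reading of the entropy-factor definition in this abstract, sketched for an 8-bit grayscale image supplied as a flat list of luminance values; the grouping of the formula (total pixels divided by the product of the peak per-luminance pixel count and the number of possible luminance values) is an assumption about the abstract's wording.

```python
from collections import Counter

# Assumed reading of the entropy factor: a flat (low-information) image,
# where one luminance value dominates, drives the factor toward
# 1/num_possible; an image spreading pixels evenly over many luminance
# values drives it toward 1.

def entropy_factor(gray_pixels, num_possible=256):
    """gray_pixels: flat list of luminance values in [0, num_possible)."""
    counts = Counter(gray_pixels)
    max_per_value = max(counts.values())  # most pixels at any one luminance
    total = len(gray_pixels)
    return total / (max_per_value * num_possible)
```

Under this reading, a uniform gray image yields 1/256 and an image touching every luminance value exactly once yields 1.0, which matches the intuition that flatter images carry less perceptual information.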
20130128056 | ESTIMATING SENSOR SENSITIVITY - A method, system, and computer-readable storage medium for determining an estimate of sensor sensitivity associated with an image. A noise level of an image is determined, then an estimate of sensor sensitivity associated with the image is automatically determined, e.g., by a trained classifier based on the determined noise level. Additionally, the sensor sensitivity estimate can be used to determine scene brightness. | 2013-05-23 |
20130128057 | GEOMETRIC CORRECTION APPARATUS AND METHOD BASED ON RECURSIVE BEZIER PATCH SUB-DIVISION - Provided is a geometric correction apparatus and method based on recursive Bezier patch sub-division. A geometric correction method may include: receiving, from a camera, a first image that is obtained by photographing a black screen that is projected by a projector onto a projection surface; receiving, from the camera, a second image that is obtained by photographing a predetermined pattern that is projected by the projector onto the projection surface; generating a third image by subtracting the first image from the second image; and performing geometric correction with respect to the predetermined pattern to correct a distortion between the predetermined pattern and the third image. | 2013-05-23 |
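The black-screen subtraction step in this abstract (third image = second image minus first image, isolating the projected pattern from ambient light) can be sketched per pixel; representing images as flat grayscale lists and clamping to [0, 255] are illustrative simplifications.

```python
# Minimal sketch of the subtraction step: the photographed pattern minus
# the photographed black screen removes ambient-light contributions
# common to both captures, leaving the projected pattern.

def subtract_images(second, first):
    """third = second - first, per pixel, clamped to [0, 255]."""
    return [max(0, min(255, s - f)) for s, f in zip(second, first)]
```

Pixels where the black-screen capture is at least as bright as the pattern capture clamp to zero, so only regions genuinely lit by the projected pattern survive into the third image.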
20130128058 | VIDEO RESPONSES TO MESSAGES - A device may receive a message from another device. The message may include a request to capture a reaction of a user when the message is displayed. The device may further cause the received message to be displayed, and cause, based on the request, at least one of a video, a picture, or an audio of the user to be captured while the received message is being displayed. | 2013-05-23 |
20130128059 | METHOD FOR SUPPORTING A USER TAKING A PHOTO WITH A MOBILE DEVICE - A method for supporting a user taking a photo with a mobile device and a system comprising a mobile device and a server are described. | 2013-05-23 |
20130128060 | INTUITIVE COMPUTING METHODS AND SYSTEMS - A smart phone senses audio, imagery, and/or other stimulus from a user's environment, and acts autonomously to fulfill inferred or anticipated user desires. In one aspect, the detailed technology concerns phone-based cognition of a scene viewed by the phone's camera. The image processing tasks applied to the scene can be selected from among various alternatives by reference to resource costs, resource constraints, other stimulus information (e.g., audio), task substitutability, etc. The phone can apply more or less resources to an image processing task depending on how successfully the task is proceeding, or based on the user's apparent interest in the task. In some arrangements, data may be referred to the cloud for analysis, or for gleaning. Cognition, and identification of appropriate device response(s), can be aided by collateral information, such as context. A great number of other features and arrangements are also detailed. | 2013-05-23 |
20130128061 | IMAGE PROCESSING APPARATUS AND PROCESSING METHOD THEREOF - An image processing apparatus and a processing method thereof are provided. The image processing apparatus includes an image capturing module, an image separation module, an image stabilization module, a temporal noise reduction module, and a spatial noise reduction module. The image capturing module captures a plurality of Bayer pattern images. The image separation module decreases the Bayer pattern images in size and transforms them into a plurality of YCbCr format images. The image stabilization module receives Y channel images of the YCbCr format images and the Bayer pattern images to perform motion estimation, to produce a plurality of global motion vectors (GMVs). The temporal noise reduction module performs temporal blending process on the Bayer pattern images according to the GMVs, to produce first noise reduction images. The spatial noise reduction module performs 2-dimensional spatial noise reduction on the first noise reduction images to produce second noise reduction images. | 2013-05-23 |
20130128062 | Methods and Apparatus for Robust Video Stabilization - Methods and apparatus for robust video stabilization. A video stabilization technique applies a feature tracking technique to an input video sequence to generate feature trajectories. The technique applies a video partitioning technique to segment the input video sequence into factorization windows and transition windows. The technique smoothes the trajectories in each of the windows, in sequence. For factorization windows, a subspace-based optimization technique may be used. For transition windows, a direct track optimization technique that uses a similarity motion model may be used. The technique then determines and applies warping models to the frames in the video sequence. In at least some embodiments, the warping models may include a content-preserving warping model, a homography model, a similarity transform model, and a whole-frame translation model. The warped frames may then be cropped according to a cropping technique. | 2013-05-23 |
20130128063 | Methods and Apparatus for Robust Video Stabilization - Methods and apparatus for robust video stabilization. A video stabilization technique applies a feature tracking technique to an input video sequence to generate feature trajectories. The technique applies a video partitioning technique to segment the input video sequence into factorization windows and transition windows. The technique smoothes the trajectories in each of the windows, in sequence. For factorization windows, a subspace-based optimization technique may be used. For transition windows, a direct track optimization technique that uses a similarity motion model may be used. The technique then determines and applies warping models to the frames in the video sequence. In at least some embodiments, the warping models may include a content-preserving warping model, a homography model, a similarity transform model, and a whole-frame translation model. The warped frames may then be cropped according to a cropping technique. | 2013-05-23 |
20130128064 | Methods and Apparatus for Robust Video Stabilization - Methods and apparatus for robust video stabilization. A video stabilization technique applies a feature tracking technique to an input video sequence to generate feature trajectories. The technique applies a video partitioning technique to segment the input video sequence into factorization windows and transition windows. The technique smoothes the trajectories in each of the windows, in sequence. For factorization windows, a subspace-based optimization technique may be used. For transition windows, a direct track optimization technique that uses a similarity motion model may be used. The technique then determines and applies warping models to the frames in the video sequence. In at least some embodiments, the warping models may include a content-preserving warping model, a homography model, a similarity transform model, and a whole-frame translation model. The warped frames may then be cropped according to a cropping technique. | 2013-05-23 |
20130128065 | Methods and Apparatus for Robust Video Stabilization - Methods and apparatus for robust video stabilization. A video stabilization technique applies a feature tracking technique to an input video sequence to generate feature trajectories. The technique applies a video partitioning technique to segment the input video sequence into factorization windows and transition windows. The technique smoothes the trajectories in each of the windows, in sequence. For factorization windows, a subspace-based optimization technique may be used. For transition windows, a direct track optimization technique that uses a similarity motion model may be used. The technique then determines and applies warping models to the frames in the video sequence. In at least some embodiments, the warping models may include a content-preserving warping model, a homography model, a similarity transform model, and a whole-frame translation model. The warped frames may then be cropped according to a cropping technique. | 2013-05-23 |
20130128066 | Methods and Apparatus for Robust Video Stabilization - Methods and apparatus for robust video stabilization. A video stabilization technique applies a feature tracking technique to an input video sequence to generate feature trajectories. The technique applies a video partitioning technique to segment the input video sequence into factorization windows and transition windows. The technique smoothes the trajectories in each of the windows, in sequence. For factorization windows, a subspace-based optimization technique may be used. For transition windows, a direct track optimization technique that uses a similarity motion model may be used. The technique then determines and applies warping models to the frames in the video sequence. In at least some embodiments, the warping models may include a content-preserving warping model, a homography model, a similarity transform model, and a whole-frame translation model. The warped frames may then be cropped according to a cropping technique. | 2013-05-23 |
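The track-smooth-warp pipeline described in the three applications above is proprietary, but two of its steps can be illustrated in a minimal sketch: smoothing feature trajectories over time, and fitting a similarity motion model between original and smoothed tracks. The moving-average smoother and the closed-form least-squares similarity fit below are illustrative assumptions, not the patented subspace factorization or content-preserving warps:

```python
import numpy as np

def smooth_trajectories(tracks, window=5):
    """Moving-average smoothing of feature trajectories.

    tracks: array of shape (n_frames, n_features, 2); window should be odd.
    """
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(tracks, ((pad, pad), (0, 0), (0, 0)), mode="edge")
    out = np.empty(tracks.shape, dtype=float)
    for f in range(tracks.shape[1]):
        for d in range(2):
            out[:, f, d] = np.convolve(padded[:, f, d], kernel, mode="valid")
    return out

def similarity_transform(src, dst):
    """Least-squares 2D similarity (scale+rotation R, translation t) with dst ~ R @ src + t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    denom = (src_c ** 2).sum()
    a = (src_c * dst_c).sum() / denom                                   # scale * cos(theta)
    b = (src_c[:, 0] * dst_c[:, 1] - src_c[:, 1] * dst_c[:, 0]).sum() / denom  # scale * sin(theta)
    R = np.array([[a, -b], [b, a]])
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In a stabilizer of this general shape, one such transform per frame (mapping the jittery tracks onto their smoothed versions) would drive the frame warping; the patents instead use richer per-window models.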
20130128067 | WIRELESS HANDSET INTERFACE FOR VIDEO RECORDING CAMERA CONTROL - Video recording where an input image signal is received with one or more optical sensors disposed in a hands-free video recorder. The input image signal is processed into an encoded video data stream with one or more processors disposed in the hands-free video recorder. First frames of the video data stream are relayed over a wireless communication link to a cellular-enabled wireless telephony handset and presented on a display screen of the handset along with a graphical user video control interface. Video control commands are wirelessly sent to the recorder in response to receiving a first input through the video control interface. Second frames of the video data stream are directed by the video control commands to at least one of a plurality of destinations. | 2013-05-23 |
20130128068 | Methods and Apparatus for Rendering Focused Plenoptic Camera Data using Super-Resolved Demosaicing - A super-resolved demosaicing technique for rendering focused plenoptic camera data performs simultaneous super-resolution and demosaicing. The technique renders a high-resolution output image from a plurality of separate microimages in an input image at a specified depth of focus. For each point on an image plane of the output image, the technique determines a line of projection through the microimages in optical phase space according to the current point and angle of projection determined from the depth of focus. For each microimage, the technique applies a kernel centered at a position on the current microimage intersected by the line of projection to accumulate, from pixels at each microimage covered by the kernel at the respective position, values for each color channel weighted according to the kernel. A value for a pixel at the current point in the output image is computed from the accumulated values for the color channels. | 2013-05-23 |
20130128069 | Methods and Apparatus for Rendering Output Images with Simulated Artistic Effects from Focused Plenoptic Camera Data - Methods, apparatus, and computer-readable storage media for simulating artistic effects in images rendered from plenoptic data. An impressionistic-style artistic effect may be generated in output images of a rendering process by an “impressionist” 4D filter applied to the microimages in a flat captured with focused plenoptic camera technology. Individual pixels are randomly selected from blocks of pixels in the microimages, and only the randomly selected pixels are used to render an output image. The randomly selected pixels are rendered to generate the artistic effect, such as an “impressionistic” effect, in the output image. A rendering technique is applied that samples pixel values from microimages using a thin sampling kernel, for example a thin Gaussian kernel, so that pixel values are sampled only from one or a few of the microimages. | 2013-05-23 |
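The random per-block pixel selection behind the "impressionistic" effect can be sketched roughly as follows on an ordinary image; the block size, RNG, and tile-fill behavior are assumptions for illustration, standing in for the patent's 4D filtering of plenoptic microimages:

```python
import numpy as np

def impressionist_filter(img, block=4, rng=None):
    """Pick one random pixel from each block x block tile and fill the tile
    with it, giving a coarse, dappled 'impressionistic' look (sketch only)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]
    out = img.copy()
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            dy, dx = rng.integers(0, block, size=2)  # random pixel within the tile
            out[y:y + block, x:x + block] = img[y + dy, x + dx]
    return out
```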
20130128070 | INFORMATION PROCESSING APPARATUS, IMAGING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus includes a control unit configured to perform control for selecting settings for an imaging operation of an imaging unit based on surrounding sound obtained by a sound obtaining unit and an image generated by the imaging unit. | 2013-05-23 |
20130128071 | DETECTING OBJECTS IN AN IMAGE BEING ACQUIRED BY A DIGITAL CAMERA OR OTHER ELECTRONIC IMAGE ACQUISITION DEVICE - The likelihood of a particular type of object, such as a human face, being present within a digital image, and its location in that image, are determined by comparing the image data within defined windows across the image in sequence with two or more sets of data representing features of the particular type of object. The evaluation of each set of features after the first is preferably performed only on data of those windows that pass the evaluation with respect to the first set of features, thereby quickly narrowing potential target windows that contain at least some portion of the object. Correlation scores are preferably calculated by the use of non-linear interpolation techniques in order to obtain a more refined score. Evaluation of the individual windows also preferably includes rotating the feature set data with respect to the image data for the individual windows about another axis. | 2013-05-23 |
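The staged, narrowing evaluation of candidate windows reads like a classic detection cascade; a generic sketch follows, in which the function names and the score/threshold representation are assumptions, and the patent's non-linear score interpolation and feature-rotation steps are omitted:

```python
def cascade_detect(windows, stages):
    """Evaluate candidate windows against an ordered list of feature stages.

    windows: list of per-window data accepted by the stage scorers.
    stages:  list of (score_fn, threshold) pairs; each stage after the first
             only sees windows that passed every earlier stage.
    """
    survivors = list(windows)
    for score, threshold in stages:
        survivors = [w for w in survivors if score(w) >= threshold]
        if not survivors:  # nothing left to test; stop early
            break
    return survivors
```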
20130128072 | PHOTOGRAPHING DEVICE AND PHOTOGRAPHING METHOD - In a photographing device including a plurality of photographing means, a failure is liable to arise in photographing an object; therefore, a photographing device according to an exemplary aspect of the invention includes a plurality of photographing means; a photographing control means to which the plurality of photographing means are connected; and an output control means connected to the photographing control means; wherein the photographing control means includes an image information extraction means obtaining photographed images from the photographing means and extracting image information from each of the photographed images, an image information processing means detecting a difference between multiple pieces of the image information, and a comparison means making a comparison in size between the difference and a predetermined threshold value, and the output control means outputs a warning signal if the comparison means determines that the size of the difference exceeds the size of the predetermined threshold value. | 2013-05-23 |
20130128073 | APPARATUS AND METHOD FOR ADJUSTING WHITE BALANCE - An apparatus and method for adjusting a white balance, and more particularly, an apparatus and method for adjusting a white balance of an image captured with a complex light source are provided. The apparatus includes a camera unit, and a controller for controlling recognition of a facial image from an image captured by the camera unit and, if it is determined that the image has been captured with a complex light source based on a comparison between a white balance gain calculated based on a facial skin color of the recognized facial image and a white balance gain calculated based on a white balance scheme, for adjusting a white balance of the captured image based on a final white balance gain extracted by interpolating the two white balance gains. | 2013-05-23 |
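A final gain "extracted by interpolating the two white balance gains" suggests a simple per-channel blend of the face-skin-based gain and the conventional AWB gain. A hypothetical sketch, in which the fixed blending weight is an assumption (the abstract does not say how the interpolation factor is chosen):

```python
def blend_wb_gains(skin_gain, awb_gain, weight=0.5):
    """Per-channel linear interpolation between the gain derived from the
    detected face's skin color and the gain from the regular AWB scheme."""
    return tuple(weight * s + (1.0 - weight) * a
                 for s, a in zip(skin_gain, awb_gain))
```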
20130128074 | DISPLAY UNIT, IMAGE SENSING UNIT AND DISPLAY SYSTEM - A host unit | 2013-05-23 |
20130128075 | METHOD OF ADJUSTING DIGITAL CAMERA IMAGE PROCESSING PARAMETERS - A method for adjusting predetermined ISO-dependent image processing parameters for images captured by a digital camera includes measuring the exposure deviation in exposure units from an optimal exposure as determined by the camera during an image capture process, deriving an estimated camera sensitivity from the exposure deviation, and adjusting the ISO-dependent image processing parameters for images captured by the camera as a function of the derived estimated camera sensitivity. | 2013-05-23 |
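Deriving an estimated sensitivity from an exposure deviation follows naturally if each exposure unit (EV) of deviation corresponds to a doubling of effective ISO; a one-line sketch under that assumption:

```python
def estimated_iso(base_iso, exposure_deviation_ev):
    """Estimated effective sensitivity: each EV of deviation from the optimal
    exposure behaves like a doubling (or halving) of the base ISO."""
    return base_iso * (2.0 ** exposure_deviation_ev)
```

ISO-dependent parameters such as noise-reduction strength or sharpening could then be looked up against this estimate rather than the nominal ISO setting.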
20130128076 | METHOD, DEVICE, AND MACHINE READABLE MEDIUM FOR IMAGE CAPTURE AND SELECTION - The invention is related to a method, a device, and a machine readable medium for image capture and selection. One of the disclosed embodiments of the invention is specifically related to a method performed by an image capturing device. The method includes capturing a sequence of images; storing a plurality of the captured images in a buffer, wherein each of the buffered images has an interested region supposed to encompass an interested target; detecting intactness information describing intactness of the interested target as encompassed in the interested regions of a plurality of the buffered images; and selecting at least one of the buffered images based on the detected intactness information. | 2013-05-23 |
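Selecting buffered images by "intactness information" could be as simple as ranking frames by an intactness score computed for the region of interest; a hypothetical sketch (the score itself and the top-k ranking rule are assumptions):

```python
def select_best_frames(buffered_frames, intactness_scores, k=1):
    """Keep the k buffered frames whose region of interest shows the target
    most intact, according to precomputed per-frame scores."""
    order = sorted(range(len(buffered_frames)),
                   key=lambda i: intactness_scores[i], reverse=True)
    return [buffered_frames[i] for i in order[:k]]
```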
20130128077 | Thin Plenoptic Cameras Using Microspheres - Methods and apparatus for capturing and rendering high-quality photographs using relatively small, thin plenoptic cameras. Plenoptic camera technology, in particular focused plenoptic camera technology including but not limited to super-resolution techniques, and other technologies such as microsphere technology may be leveraged to provide thin form factor, megapixel resolution cameras suitable for use in mobile devices and other applications. In addition, at least some embodiments of these cameras may also capture radiance, allowing the imaging capabilities provided by plenoptic camera technology to be realized through appropriate rendering techniques. | 2013-05-23 |
20130128078 | DIGITAL PHOTOGRAPHING APPARATUS AND METHOD OF CONTROLLING THE SAME - A digital photographing apparatus includes: an imaging device that generates an image signal by capturing image light; a storage unit that stores a template including a background area and a composite area that indicates at least a part of an image according to the image signal; an image changing unit that changes orientations of the template and the image; an image composing unit that composes the image and the template of which orientations are changed; and a display unit that displays the composed image, wherein the image changing unit determines orientations to be changed of the template and the image according to a rotation amount of the imaging device with respect to an optical axis of the image light and an orientation in which an imaging surface of the imaging device faces. Accordingly, a user may naturally perform a self-photography function using a template. | 2013-05-23 |
20130128079 | IMAGE PROCESSING APPARATUS, METHOD FOR CONTROLLING THE SAME, AND STORAGE MEDIUM AND PROGRAM USED THEREWITH - An image processing apparatus is intended to display an image arbitrarily selected between an original image and an edited image in accordance with a user's preference after image editing. The image processing apparatus is designed so that an image displayed after the image editing can be selected depending on a user's intention. The image processing apparatus specifies and displays an unedited or edited image on the basis of a user operation, whereby the user can easily view a desired image. | 2013-05-23 |
20130128080 | IMAGE ANALYSIS DEVICE AND METHOD THEREOF - A method for analyzing images includes: obtaining at least one brightness value of at least one area of an image of a scene; obtaining a brightness value of the same at least one area of each of a predetermined number of previous images of the scene and calculating an average brightness of the same at least one area of the predetermined number of previous images to obtain at least one average brightness value; comparing the at least one brightness value of the image with that of the predetermined number of previous images to obtain at least one brightness difference value; comparing the at least one brightness difference value with a first value and a second value; and adjusting a reference background model according to a first adjustment mode or a second adjustment mode according to the comparison result. | 2013-05-23 |
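The two-threshold comparison that chooses a background-model adjustment mode can be sketched as below. The threshold semantics and the third "no update" branch are assumptions for illustration, since the abstract only names a first and a second value and two modes:

```python
def pick_adjustment_mode(current_brightness, previous_brightnesses,
                         first_value, second_value):
    """Compare the area's brightness with the average over previous frames and
    pick a background-update mode by thresholding the difference."""
    avg = sum(previous_brightnesses) / len(previous_brightnesses)
    diff = abs(current_brightness - avg)
    if diff <= first_value:
        return 1   # small fluctuation: first (gradual) adjustment mode
    elif diff <= second_value:
        return 2   # larger change, e.g. lights switched: second (faster) mode
    return 0       # beyond both thresholds: assumed foreground, no update
```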
20130128081 | Methods and Apparatus for Reducing Plenoptic Camera Artifacts - Methods and apparatus for reducing plenoptic camera artifacts. A first method is based on careful design of the optical system of the focused plenoptic camera to reduce artifacts that result in differences in depth in the microimages. A second method is computational; a focused plenoptic camera rendering algorithm is provided that corrects for artifacts resulting from differences in depth in the microimages. While both the artifact-reducing focused plenoptic camera design and the artifact-reducing rendering algorithm work by themselves to reduce artifacts, the two approaches may be combined. | 2013-05-23 |
20130128082 | IMAGE-PICKUP APPARATUS AND METHOD OF DETECTING DEFECTIVE PIXEL THEREOF - An image pickup apparatus which can detect, when pixels have a structure in which part of electrical construction is shared therebetween, a defective pixel by taking into account a high possibility of the other pixels sharing the part of electrical construction becoming defective pixels, thereby making it possible to obtain an excellent image. A ROM stores in advance position information on each defective pixel. A defective pixel-detecting section detects a new defective pixel on which position information is not stored in the storage unit, from the pixels forming each pixel group, by performing one of different types of defective pixel detection processing. A system controller causes the defective pixel-detecting section to execute one of the different types of detection processing, according to the number of defective pixels which are included in each pixel group and on which the position information is stored in the storage unit. | 2013-05-23 |
20130128083 | HIGH DYNAMIC RANGE IMAGE SENSING DEVICE AND IMAGE SENSING METHOD AND MANUFACTURING METHOD THEREOF - A high dynamic range (HDR) image sensing method is provided. The image sensor includes steps of: sensing an image with a long integration time by a first long integration time sensor; and sensing the image with a short integration time by a first short integration time sensor. | 2013-05-23 |