Entries |
Document | Title | Date |
20080199043 | Image Enhancement in Sports Recordings - A video signal representing rapid ball movement is produced from a series of source images. An initial image position for the moving ball is identified by, for each image, producing a difference image between sequential images. In the difference image, image elements representing a contents alteration below a threshold are allocated a first value, and those representing a contents alteration above or equal to the threshold are allocated a second value. A set of candidates is then identified, where each candidate is represented by a group of neighboring image elements that all contain the second value. The group must fulfill a ball size criterion. A ball selection algorithm selects an initial image position from the set of ball candidates. The ball is tracked, and a composite image sequence is generated wherein a synthetic trace representing the path of the moving ball is shown as successively added image data. | 08-21-2008 |
20080199044 | Image Processing Apparatus, Image Processing Method, and Program - Disclosed herein is an image processing apparatus for recognizing, from a taken image, an object corresponding to a registered image registered in advance, including, an image taker configured to take an image of a subject to obtain the taken image of the subject, a recognizer configured to recognize, from the taken image, an object corresponding to the registered image, a first specified area tracker configured to execute first specified area tracking processing for tracking, in the taken image, a first tracking area specified on the basis of a result of recognition by the recognizer, and a second specified area tracker configured to execute second specified area tracking processing for tracking a second specified area specified on the basis of a result of the first specified area tracking processing. | 08-21-2008 |
20080205700 | Apparatus and Method for Assisted Target Designation - A method for assisting a user to designate a target as viewed on a video image displayed on a video display by use of a user operated pointing device. The method includes the steps of evaluating, prior to target designation, one or more tracking functions indicative of a result which would be generated by designating a target at a current pointing direction of the pointing device, and providing to the user, prior to target designation, an indication indicative of the result. | 08-28-2008 |
20080205701 | ENHANCED INPUT USING FLASHING ELECTROMAGNETIC RADIATION - Enhanced input using flashing electromagnetic radiation, in which first and second images, captured on a first side of a screen, of an object and an ambient electromagnetic radiation emitter disposed on a second side of the screen, are accessed. The first image being captured while the object is illuminated with projected electromagnetic radiation, and the second image being captured while the projected electromagnetic radiation is extinguished. A position of the object relative to the screen based on comparing the first and second images is determined. An application is controlled based on the determined position. | 08-28-2008 |
20080205702 | BACKGROUND IMAGE GENERATION APPARATUS - The information of a detection area is obtained by using radar, the information is sent to a mobile body detection unit, the position of a mobile body existing within the detection area is detected, a zone excluding a predetermined range surrounding the mobile body is identified by using a nonexistence zone identification unit, the information of the detection area at the time is obtained, a zone which does not include a mobile body is accurately generated by a background image generation unit, then the information of the detection area is obtained by a camera, and the difference between the generated background image and the aforementioned information is detected by a difference process unit; thereby an accurate position of the mobile body is detected. | 08-28-2008 |
20080205703 | Methods and Apparatus for Automatically Tracking Moving Entities Entering and Exiting a Specified Region - Techniques for tracking entities using a single overhead camera are provided. A foreground region is detected in a video frame of the single overhead camera corresponding to one or more entities. It is determined if the foreground region is associated with an existing tracker. It is determined whether the detected foreground region is the result of at least one of a merger of two or more smaller foreground regions having corresponding existing trackers and a split of a larger foreground region having a corresponding existing tracker when the detected foreground region is not associated with an existing tracker. The detected foreground region is tracked via at least one existing tracker when the foreground region is associated with an existing tracker or the foreground region is the result of at least one of a merger and a split. | 08-28-2008 |
20080212830 | Efficient Calculation of Ensquared Energy in an Imaging System - Systems and methods are provided for determining an ensquared energy associated with an imaging system. In one embodiment of the invention, a focal plane array captures an image of a target comprising a plurality of point sources, each point source being associated with a pixel within the focal plane array. An image analysis component estimates an ensquared energy value for the imaging system from respective intensity values of the associated pixels and known relative positions of the plurality of point sources. | 09-04-2008 |
20080212831 | REMOTE CONTROL OF AN IMAGE CAPTURING UNIT IN A PORTABLE ELECTRONIC DEVICE - A method and computer program product are described herein for remotely controlling a first image capturing unit in a portable electronic device, as well as such a portable electronic device. The portable electronic device may include a first and a second image capturing unit. The device detects and tracks an object via the second capturing unit and detects changes in an area of the object. These changes are then used for controlling the first image capturing unit remotely. When the control involves capturing of images, an improved image quality can be obtained. Also, the time it takes to capture an image is reduced. | 09-04-2008 |
20080212832 | DISCRIMINATOR GENERATING APPARATUS AND OBJECT DETECTION APPARATUS - A discriminator generating apparatus includes a learning unit ( | 09-04-2008 |
20080212833 | ENHANCEMENT OF AIMPOINT IN SIMULATED TRAINING SYSTEMS - Embodiments of the invention, therefore, provide improved systems and methods for tracking targets in a simulation environment. Merely by way of example, an exemplary embodiment provides a reflected laser target tracking system that tracks a target with a video camera and associated computational logic. In certain embodiments, a closed loop algorithm may be used to predict future positions of targets based on formulas derived from prior tracking points. Hence, the target's next position may be predicted. In some cases, targets may be filtered and/or sorted based on predicted positions. In certain embodiments, equations (including without limitation, first order equations and second order equations) may be derived from one or more video frames. Such equations may also be applied to one or more successive frames of video received and/or produced by the system. In certain embodiments, these formulas also may be used to compute predicted positions for targets; this prediction may, in some cases, compensate for inherent delays in the processing pipeline. | 09-04-2008 |
20080212834 | User interface using camera and method thereof - A user interface using a camera and a method thereof, wherein two or more images that were shot in time sequence are preprocessed to form N×M matrices, and then each element of the matrices is compared. The comparison is thus made (N+1)(M+1) times to select a result of the highest similarity and produce a motion vector. The interface and method help to produce more accurate motion vectors and to obviate inaccuracy that is yielded throughout low-pass filtering. | 09-04-2008 |
20080212835 | Object Tracking by 3-Dimensional Modeling - Disclosed is a method for tracking 3-dimensional objects, or some of these objects' features, using range imaging for depth-mapping merely a few points on the surface area of each object, mapping them onto a geometrical 3-dimensional model, finding the object's pose, and deducing the spatial positions of the object's features, including those not captured by the range imaging. | 09-04-2008 |
20080212836 | Visual Tracking Using Depth Data - Real-time visual tracking using depth sensing camera technology results in illumination-invariant tracking performance. Depth sensing (time-of-flight) cameras provide real-time depth and color images of the same scene. Depth windows regulate the tracked area by controlling shutter speed. A potential field is derived from the depth image data to provide edge information of the tracked target. A mathematically representable contour can model the tracked target. Based on the depth data, determining a best fit between the contour and the edge of the tracked target provides position information for tracking. Applications using depth sensor based visual tracking include head tracking, hand tracking, body-pose estimation, robotic command determination, and other human-computer interaction systems. | 09-04-2008 |
20080219501 | Motion Measuring Device, Motion Measuring System, In-Vehicle Device, Motion Measuring Method, Motion Measurement Program, and Computer-Readable Storage - An embodiment of the present invention includes: a tracking object image extracting section that extracts a tracking object image, which represents a tracking object, from an image captured by a monocular camera; a two-dimensional displacement calculating section that calculates, as actual movement amounts, amounts of inter-frame movement of the tracking object image; a two-dimensional plane projecting section that generates on a two-dimensional plane a projected image of a three-dimensional model, which represents in three dimensions a capturing object captured by the monocular camera; a small motion generating section that calculates, as estimated movement amounts, amounts of inter-frame movement of the projected image; and a three-dimensional displacement estimating section that estimates amounts of three-dimensional motion of the tracking object on the basis of the actual movement amounts and the estimated movement amounts. | 09-11-2008 |
20080219502 | TRACKING BIMANUAL MOVEMENTS - Hands may be tracked before, during, and after occlusion, and a gesture may be recognized. Movement of two occluded hands may be tracked as a unit during an occlusion period. A type of synchronization characterizing the two occluded hands during the occlusion period may be determined based on the tracked movement of the occluded hands. Based on the determined type of synchronization, it may be determined whether directions of travel for each of the two occluded hands change during the occlusion period. Implementations may determine that a first hand and a second hand are occluded during an occlusion period, the first hand having come from a first direction and the second hand having come from a second direction. The first hand may be distinguished from the second hand after the occlusion period based on a determined type of synchronization characterizing the two hands, and a behavior of the two hands. | 09-11-2008 |
20080219503 | MEANS FOR USING MICROSTRUCTURE OF MATERIALS SURFACE AS A UNIQUE IDENTIFIER - A method and apparatus for the visual identification of materials for tracking an object comprises parameter setting, acquisition and identification phases. The parameter setting phase comprises the steps of defining acquisition parameters for the objects. The acquisition phase comprises the steps of digitally acquiring a two-dimensional template image of an object, applying a flattening function and generating a downsampled template version of the flattened template and storing it in a reference database with the flattened template. The identification phase comprises the steps of digitally acquiring a snapshot image, applying the flattening function and generating one downsampled version, cross-correlating the downsampled version of the flattened snapshot with the corresponding downsampled templates of the reference database, and selecting templates according to the value of the signal to noise ratio; for the selected templates, cross-correlating the flattened snapshot image with the reference flattened template, and identifying the object by finding the best corresponding template. | 09-11-2008 |
20080219504 | AUTOMATIC MEASUREMENT OF ADVERTISING EFFECTIVENESS - An automated system for measuring information about a target image in a video is described. One embodiment includes receiving a set of one or more video images for the video, automatically finding the target image in at least a subset of the video images, determining one or more statistics regarding the target image being in the video, and reporting the one or more statistics. | 09-11-2008 |
20080219505 | Object Detection System - An object detection system is provided with a plurality of image capture units for capturing images of surroundings of the system, a distance information calculation unit for dividing a captured image which constitutes a reference of captured images captured by the plurality of image capture units into a plurality of pixel blocks, individually retrieving corresponding pixel positions within the other captured image for the pixel blocks, and individually calculating distance information, and a histogram generation module for dividing a range image representing the individual distance information of the pixel blocks calculated by the distance information calculation unit into a plurality of segments having predetermined sizes, providing histograms relating to the distance information for the respective divided segments, and casting the distance information of the pixel blocks to the histograms of the respective segments. | 09-11-2008 |
20080219506 | Method and apparatus for automatic object identification - A method and system for processing image data to identify objects in an image. The method and system operate using various resolutions of the image to identify the objects. Information obtained while processing the image at one resolution is employed when processing the image at another resolution. | 09-11-2008 |
20080219507 | Passive Touch System And Method Of Detecting User Input - A method of tracking an object of interest preferably includes (i) acquiring a first image and a second image representing different viewpoints of the object of interest; (ii) processing the first image into a first image data set and the second image into a second image data set; (iii) processing the first image data set and the second image data set to generate a background data set associated with a background; (iv) generating a first difference map by determining differences between the first image data set and the background data set and a second difference map by determining differences between the second image data set and the background data set; (v) detecting a first relative position of the object of interest in the first difference map and a second relative position of the object of interest in the second difference map; and (vi) producing an absolute position of the object of interest from the first and second relative positions of the object of interest. | 09-11-2008 |
20080226126 | Object-Tracking Apparatus, Microscope System, and Object-Tracking Program - An object-tracking apparatus ( | 09-18-2008 |
20080226127 | LINKING TRACKED OBJECTS THAT UNDERGO TEMPORARY OCCLUSION - A method and system is configured to characterize regions of an environment by the likelihoods of transition of a target from each region to another. The likelihoods of transition between regions are preferably used in combination with conventional object-tracking algorithms to determine the likelihood that a newly-appearing object in a scene corresponds to a recently-disappeared target. The likelihoods of transition may be predefined based on the particular environment, or may be determined based on prior appearances and disappearances in the environment, or a combination of both. The likelihoods of transition may also vary as a function of the time of day, day of the week, and other factors that may affect the likelihoods of transitions between regions in the particular surveillance environment. | 09-18-2008 |
20080226128 | SYSTEM AND METHOD FOR USING FEATURE TRACKING TECHNIQUES FOR THE GENERATION OF MASKS IN THE CONVERSION OF TWO-DIMENSIONAL IMAGES TO THREE-DIMENSIONAL IMAGES - The present invention is directed to systems and methods for controlling 2-D to 3-D image conversion and/or generation. The methods and systems use auto-fitting techniques to create a mask based upon tracking features from frame to frame. When features are determined to be missing they are added prior to auto-fitting the mask. | 09-18-2008 |
20080226129 | Cart Inspection for Suspicious Items - Methods and apparatus provide for a Cart Inspector to create a suspicion level for a transaction when a video image of the transaction portrays an item(s) left in a shopping cart. Specifically, the Cart Inspector obtains video data associated with a time(s) of interest. The video data originates from a video camera that monitors a transaction area. The Cart Inspector analyzes the video data with respect to target image(s) associated with a transaction in the transaction area during the time(s) of interest. The Cart Inspector creates an indication of a suspicion level for the transaction based on analysis of the target image(s). Creation of a high suspicion level for the transaction indicates that the transaction's corresponding video images most likely portray occurrences where the purchase price of an item transported through the transaction area was not included in the total amount paid by the customer. | 09-18-2008 |
20080232641 | SYSTEM AND METHOD FOR THE MEASUREMENT OF RETAIL DISPLAY EFFECTIVENESS - The present invention relates to the measurement of human activities through video, particularly in retail environments. A method for measuring retail display effectiveness in accordance with an embodiment of the present invention includes: detecting a moving object in a field of view of an imaging device, the imaging device obtaining image data of a product display; tracking the object in the field of view of the imaging device to obtain a track; and obtaining statistics for the track with regard to the product display. | 09-25-2008 |
20080232642 | System and method for 3-D recursive search motion estimation - A method for 3-D recursive search motion estimation is provided to estimate a motion vector for a current block in a current frame. The method includes the following steps. First, provide a spatial prediction by selecting at least one motion vector for at least one neighboring block in the current frame. Then, provide a temporal prediction. After that, estimate the motion vector for the current block based on the spatial prediction and the temporal prediction. The temporal prediction is obtained by selecting at least one most frequent motion vector from a plurality of motion vectors for a plurality of blocks in a corresponding region of a previous frame, wherein the corresponding region encloses a previous block whose location corresponds to that of the current block in the current frame. | 09-25-2008 |
20080232643 | Bitmap tracker for visual tracking under very general conditions - System and method for visually tracking a target object silhouette in a plurality of video frames under very general conditions. The tracker does not make any assumption about the object or the scene. The tracker works by approximating, in each frame, a PDF (probability distribution function) of the target's bitmap and then estimating the maximum a posteriori bitmap. The PDF is marginalized over all possible motions per pixel, thus avoiding the stage in which optical flow is determined. This is an advantage over other general-context trackers that do not use the motion cue at all or rely on the error-prone calculation of optical flow. Using a Gibbs distribution with a first order neighborhood system yields a bitmap PDF whose maximization may be transformed into that of a quadratic pseudo-Boolean function, the maximum of which is approximated via a reduction to a maximum-flow problem. | 09-25-2008 |
20080232644 | Storage medium having information processing program stored thereon and information processing apparatus - A motion information obtaining step successively obtains motion information from a motion sensor. An imaging information obtaining step successively obtains imaging information from an imaging means. An invalid information determination step determines whether the imaging information is valid information or invalid information for predetermined processing. A motion value calculation step calculates a motion value representing a magnitude of a motion of the operation apparatus in accordance with the motion information. A processing step executes, when the imaging information is determined as the invalid information in the invalid information determination step and when the motion value calculated in the motion value calculation step is within a predetermined value range, predetermined processing in accordance with the most recent valid imaging information among valid imaging information previously obtained. | 09-25-2008 |
20080232645 | TRACKING A SURFACE IN A 3-DIMENSIONAL SCENE USING NATURAL VISUAL FEATURES OF THE SURFACE - A facility for determining the 3-dimensional location and orientation of a subject surface in a distinguished perspective image of the subject surface is described. The subject surface has innate visual features, a subset of which are selected. The facility uses the location of the selected visual features in a perspective image of the subject surface that precedes the distinguished perspective image in time to identify search zones in the distinguished perspective image. The facility searches the identified search zones for the selected visual features to determine the 2-dimensional locations at which the selected visual features occur. Based on the determined 2-dimensional locations, the facility determines the 3-dimensional location and orientation of the subject surface in the distinguished perspective image. | 09-25-2008 |
20080240496 | APPROACH FOR RESOLVING OCCLUSIONS, SPLITS AND MERGES IN VIDEO IMAGES - Aspects of the present invention provide a solution for resolving an occlusion in a video image. Specifically, an embodiment of the present invention provides an environment in which portions of a video image in which occlusions have occurred may be determined and analyzed to determine the type of occlusion. Furthermore, regions of the video image may be analyzed to determine which object in the occlusion the region belongs to. The determinations and analysis may use such factors as pre-determined attributes of an object, such as color or texture of the object and/or a temporal association of the object, among others. | 10-02-2008 |
20080240497 | Method for tracking objects in videos using forward and backward tracking - A method tracks an object in a sequence of frames of a video. The method is provided with a set of tracking modules. Frames of a video are buffered in a memory buffer. First, an object is tracked in the buffered frames forward in time using a selected one of the plurality of tracking modules. Second, the object is tracked in the buffered frames backward in time using the selected tracking module. Then, a tracking error is determined from the first tracking and the second tracking. If the tracking error is less than a predetermined threshold, then additional frames are buffered in the memory buffer and the first tracking, the second tracking and the determining steps are repeated. Otherwise, if the error is greater than the predetermined threshold, then a different tracking module is selected and the first tracking, the second tracking and the determining steps are repeated. | 10-02-2008 |
20080240498 | RUNWAY SEGMENTATION USING VERTICES DETECTION - Methods and apparatus are provided for locating a runway by detecting an object (or blob) within data representing a region of interest provided by a vision sensor. The vertices of the object are determined by finding points on the contour of the object nearest to the four corners of the region of interest. The runway can then be identified to the pilot of the aircraft by extending lines between the vertices to identify the location of the runway. | 10-02-2008 |
20080240499 | Jointly Registering Images While Tracking Moving Objects with Moving Cameras - A method tracks a moving object by registering a current image in a sequence of images with a previous image. The sequence of images is acquired of a scene by a moving camera. The registering produces a registration result. The moving object is tracked in the registered image to produce a tracking result. The registered current image is registered with the previous image using the tracking result for all the images in the sequence. | 10-02-2008 |
20080240500 | IMAGE PROCESSING METHODS - A method of image processing, the method comprising receiving an image frame including a plurality of pixels, each of the plurality of pixels including image information, conducting a first extraction based on the image information to identify foreground pixels related to a foreground object in the image frame and background pixels related to a background of the image frame, scanning the image frame in regions, identifying whether each of the regions includes a sufficient number of foreground pixels, identifying whether each of the regions including a sufficient number of foreground pixels includes a foreground object, clustering regions including a foreground object into at least one group, each of the at least one group corresponding to a different foreground object in the image frame, and conducting a second extraction for each of the at least one group to identify whether a foreground pixel in each of the at least one group is to be converted to a background pixel. | 10-02-2008 |
20080240501 | Measurement system, lithographic apparatus and method for measuring a position dependent signal of a movable object - An encoder-type measurement system is configured to measure a position dependent signal of a movable object, the measurement system including at least one sensor mountable on the movable object, a sensor target object mountable on a substantially stationary frame, and a mounting device configured to mount the sensor target object on the substantially stationary frame. The measurement system further includes a compensation device configured to compensate movements and/or deformations of the sensor target object with respect to the substantially stationary frame. The compensation device may include a passive or an active damping device and/or a feedback position control system. In an alternative embodiment, the compensation device includes a gripping device which fixes the position of the sensor target object during a high accuracy movement of the movable object. | 10-02-2008 |
20080240502 | Depth mapping using projected patterns - Apparatus for mapping an object includes an illumination assembly, which includes a single transparency containing a fixed pattern of spots. A light source transilluminates the single transparency with optical radiation so as to project the pattern onto the object. An image capture assembly captures an image of the pattern that is projected onto the object using the single transparency. A processor processes the image captured by the image capture assembly so as to reconstruct a three-dimensional (3D) map of the object. | 10-02-2008 |
20080240503 | Image Processing Apparatus And Image Pickup Apparatus Mounting The Same, And Image Processing Method - A coding unit codes a moving image. An object detector detects an object from within a picture contained in the moving image, and generates, for each picture, object detection information containing at least the number of objects detected within an identical picture. When a codestream is generated from coded data generated by the coding unit, a stream generator describes the object detection information in a prescribed region of the codestream. | 10-02-2008 |
20080240504 | Integrating Object Detectors - An N-object detector comprises an N-object decision structure incorporating decision sub-structures of N object detectors. Some decision sub-structures have multiple different versions composed of the same classifiers with the classifiers rearranged. Said multiple versions associated with an object detector are arranged in the N-object decision structure so that the order in which the classifiers are evaluated is dependent upon the results of the evaluation of a classifier of another object detector. Each version of the same decision sub-structure produces the same logical behaviour as the other versions. Such an N-object decision structure is generated by generating multiple candidate N-object decision structures and analysing the expected computational cost of these candidates to select one of them. | 10-02-2008 |
20080240505 | Feature information collecting apparatuses, methods, and programs - Apparatuses, methods, and programs acquire vehicle position information that represents a current position of a vehicle, acquire image information of a vicinity of the vehicle, and carry out image recognition processing of a target feature that is included in the image information to determine a position of the target feature. The apparatuses, methods, and programs store recognition position information that is based on the acquired vehicle position information and that represents the determined recognition position of the target feature. The apparatuses, methods, and programs determine an estimated position of the target feature based on a set of a plurality of stored recognition position information for the target feature, the plurality of stored recognition position information for the target feature being stored due to the target feature being subject to image recognition processing a plurality of times. | 10-02-2008 |
20080247599 | Method for Detecting Objects Left-Behind in a Scene - A method detects an object left-behind in a scene by updating a set of background models using a sequence of images acquired of the scene by a camera. Each background model is updated at a different temporal scale, ranging from short term to long term. A foreground mask is determined from each background model after the updating for a particular image of the sequence. A motion image is updated from the set of foreground masks. In the motion image, each pixel has an associated evidence value. The evidence values are compared with an evidence threshold to detect and signal an object left behind in the scene. | 10-09-2008 |
20080247600 | IMAGE RECORDING DEVICE, PLAYER DEVICE, IMAGING DEVICE, PLAYER SYSTEM, METHOD OF RECORDING IMAGE, AND COMPUTER PROGRAM - An imaging device detects a face of a subject from an image in response to inputting of the image containing the subject, and generates face data related to the face. The imaging device generates face data management information managing the face data and controls recording of the input image, the generated face data and the face data management information on a recording unit with the input image mapped to the face data and the face data management information. The face data contains a plurality of information components recorded in a predetermined recording order. The face data management information, in a data structure responsive to the recording order of the information components of the face data, contains a train of consecutively assigned bits. The information components are assigned predetermined flags in the recording order. Each flag represents the presence or absence of the information component corresponding to the flag in the face data. | 10-09-2008 |
20080247601 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus such as a monitor system for executing image processing to present a suspicious object effectively. An object detecting unit detects an object contained in an image, an associating unit associates a plurality of objects detected with the object detecting unit, with each other, and an evaluating unit evaluates (e.g., evaluation as being suspicious) an object detected by the object detecting unit, and an association evaluating unit evaluates another object associated by the associating unit with the object evaluated by the evaluating unit, in accordance with the evaluation made by the evaluating unit. | 10-09-2008 |
20080253609 | TRACKING WORKFLOW IN MANIPULATING MEDIA ITEMS - A computer-implemented method is described including receiving input specifying an image frame from among a series of image frames, and automatically detecting one or more points in the specified image frame that would be suitable for tracking a point in the series of image frames. In addition, a computer-implemented method is described including choosing a first position of a point on a first image frame of a plurality of image frames, and displaying in a bounded region on the first image frame content relating to a second image frame of the plurality of image frames, wherein the content displayed in the bounded region includes a second position of the point at a different time than the first position of the point. | 10-16-2008 |
20080253610 | Three dimensional shape reconstitution device and estimation device - A face model providing portion provides a stored average face model to an estimation portion estimating an affine parameter for obtaining a head pose. An individual face model learning portion obtains a result of tracking feature points by the estimation portion and learns an individual face model. The individual face model learning portion terminates the learning when a free energy of the individual face model is over a free energy of the average face model, and switches a face model provided to the estimation portion from the average face model to the individual face model. While learning the individual face model, an observation matrix is factorized using a reliability matrix showing reliability of each observation value forming the observation matrix, with emphasis on the feature point having higher reliability. | 10-16-2008 |
20080253611 | Analyst cueing in guided data extraction - The Analyst Cueing method addresses the issues of locating desired targets of interest from among very large datasets in a timely and efficient manner. The combination of computer aided methods for classifying targets and cueing a prioritized list for an analyst produces a robust system for generalized human-guided data mining. Incorporating analyst feedback adaptively trains the computerized portion of the system in the identification and labeling of targets and regions of interest. This system dramatically improves analyst efficiency and effectiveness in processing data captured from a wide range of deployed sensor types. | 10-16-2008 |
20080253612 | Method and an Arrangement for Locating and Picking Up Objects From a Carrier - The invention relates to a method for locating and picking up objects that are placed on a carrier. A scanning operation is performed over the carrier. The scanning is performed by a line laser scanner whose results are used to generate a virtual surface that represents the area that has been scanned. The virtual surface is compared to a pre-defined virtual object corresponding to an object to be picked from the carrier, whereby a part of the virtual surface that matches the pre-defined virtual object is identified. A robot arm is then caused to move to a location corresponding to the identified part of the virtual surface and pick up an object from the carrier at this location. | 10-16-2008 |
20080253613 | System and Method for Cooperative Remote Vehicle Behavior - A method for facilitating cooperation between humans and remote vehicles comprises creating image data, detecting humans within the image data, extracting gesture information from the image data, mapping the gesture information to a remote vehicle behavior, and activating the remote vehicle behavior. Alternatively, voice commands can be used to activate the remote vehicle behavior. | 10-16-2008 |
20080253614 | METHOD AND APPARATUS FOR DISTRIBUTED ANALYSIS OF IMAGES - A method and apparatus for intelligent distributed analyses of images including capturing the images and analyzing the captured images, where feature information is extracted from the captured images. The extracted feature information is used in determining whether a predefined condition is met, and the extracted feature information is transmitted for further analysis when the predefined condition is met. The extracted feature information is stored and is used to generate statistical information related to the extracted feature information. Further, additional feature information is provided from other databases to implement further analysis including an event detection or recognition. Accordingly, distributed intelligent analyses of images are provided for analyzing captured images to efficiently and effectively implement event detection or recognition. | 10-16-2008 |
20080260205 | Image Processing Device and Method - The present invention relates to an image processing device and a corresponding image processing method for processing medical image data showing at least two image objects, including a segmentation unit for detection and/or segmentation of image objects in said image data. To allow a more accurate and better segmentation of target objects which are hard to localize and detect, it is proposed that the segmentation unit comprises: a selection unit ( | 10-23-2008 |
20080260206 | IMAGE PROCESSING APPARATUS AND COMPUTER PROGRAM PRODUCT - An image processing apparatus includes a feature-quantity calculating unit that calculates feature quantities of target regions each indicating a tracking object in respective target images, the target images being obtained by capturing the tracking object at a plurality of time points; a provisional-tracking processing unit that performs provisional tracking of the target region by associating the target regions of the target images with each other using the calculated feature quantities; and a final-tracking processing unit that acquires a final tracking result of the target region based on a result of the provisional tracking. | 10-23-2008 |
20080260207 | Vehicle environment monitoring apparatus - A vehicle environment monitoring apparatus capable of extracting an image of a monitored object in an environment around a vehicle by separating the same from the background image with a simple configuration having a single camera mounted on the vehicle is provided. The apparatus includes a first image portion extracting processing unit to extract first image portions (A | 10-23-2008 |
20080267449 | 3-D MODELING - A system comprising an imaging device adapted to capture images of a target object at multiple angles. The system also comprises storage coupled to the imaging device and adapted to store a generic model of the target object. The system further comprises processing logic coupled to the imaging device and adapted to perform an iterative process by which the generic model is modified in accordance with the target object. During each iteration of the iterative process, the processing logic obtains structural and textural information associated with at least one of the captured images and modifies the generic model with the structural and textural information. The processing logic displays the generic model. | 10-30-2008 |
20080267450 | Position Tracking Device, Position Tracking Method, Position Tracking Program and Mixed Reality Providing System - The present invention has a simpler structure than before and is designed to precisely detect the position of a real environment's target object on a screen. The present invention generates a special marker image MKZ including a plurality of areas whose brightness levels gradually change in X and Y directions, displays the special marker image MKZ on the screen of a liquid crystal display | 10-30-2008 |
20080267451 | System and Method for Tracking Moving Objects - A method for tracking an object that is embedded within images of a scene, including: in a sensor unit that includes a movable sensor, generating, storing and transmitting over a communication link a succession of images of a scene. In a remote control unit, receiving the succession of images. Receiving a user command for selecting an object of interest in a given image of the received succession of images and determining object data associated with the object and transmitting through the link to the sensor unit the object data. In the sensor unit, identifying the given image of the stored succession of images and the object of interest using the object data, and tracking the object in another image of the stored succession of images, the other image being later than the given image. In the case that the object cannot be located in the latest image of the stored succession of images, using information from images in which the object was located to predict an estimated real-time location of the object and generating a direction command to the movable sensor for generating a real-time image of the scene and locking on the object. | 10-30-2008 |
20080267452 | APPARATUS AND METHOD OF DETERMINING SIMILAR IMAGE - An apparatus of determining a similar image contains a subject-region-detecting unit that detects a subject region from a received image, a pixel-value-distribution-generating unit that generates pixel value distribution of pixels included in the subject region detected by the subject-region-detecting unit, and a determination unit that determines whether or not an image relative to the subject region is similar to a previously registered subject image based on the pixel value distribution generated by the pixel-value-distribution-generating unit and a registered pixel value distribution of the previously registered subject image. | 10-30-2008 |
20080267453 | METHOD FOR ESTIMATING THE POSE OF A PTZ CAMERA - Provided is an iterative method of estimating the pose of a moving PTZ camera. The first step is to use an image registration method on a reference image and a current image to calculate a matrix that estimates the motion of sets of points corresponding to the same object in both images. Information about the absolute camera pose, embedded in the matrix obtained in the first step, is used to simultaneously recalculate both the starting positions in the reference image and the motion estimate. The recalculated starting positions and motion estimate are used to determine the pose of the camera in the current image. The current image is taken as a new reference image, a new current image is selected and the process is repeated in order to determine the pose of the camera in the new current image. The entire process is repeated until the camera stops moving. | 10-30-2008 |
20080273750 | Apparatus and Method For Automatically Detecting Objects - A device automatically detects boundary lines on the road from an image captured by a camera mounted on the vehicle. The device includes a controller that performs image processing on the image to compute the velocity information for each pixel in the image, and, on the basis of the computed velocity information for each pixel in the image, extracts the pixels that contain velocity information, detects the oblique lines formed by the extracted pixels, and detects the boundary lines on the road on the basis of the detected oblique lines. | 11-06-2008 |
20080273751 | Detection and Tracking of Moving Objects from a Moving Platform in Presence of Strong Parallax - Among other things, methods, systems and computer program products are described for detecting and tracking a moving object in a scene. One or more residual pixels are identified from video data. At least two geometric constraints are applied to the identified one or more residual pixels. A disparity of the one or more residual pixels to the applied at least two geometric constraints is calculated. Based on the detected disparity, the one or more residual pixels are classified as belonging to parallax or independent motion, and the parallax-classified residual pixels are filtered. Further, a moving object is tracked in the video data. Tracking the object includes representing the detected disparity in probabilistic likelihood models. Tracking the object also includes accumulating the probabilistic likelihood models within a number of frames during the parallax filtering. Further, tracking the object includes, based on the accumulated probabilistic likelihood models, extracting an optimal path of the moving object. | 11-06-2008 |
20080273752 | SYSTEM AND METHOD FOR VEHICLE DETECTION AND TRACKING - A method for vehicle detection and tracking includes acquiring video data including a plurality of frames, comparing a first frame of the acquired video data against a set of one or more vehicle detectors to form vehicle hypotheses, pruning and verifying the vehicle hypotheses using a set of coarse-to-fine constraints to detect a vehicle, and tracking the detected vehicle within one or more subsequent frames of the acquired video data by fusing shape template matching with one or more vehicle detectors. | 11-06-2008 |
20080273753 | System for Detecting Image Abnormalities - An image capture system for capturing images of an object, the image capture system comprising a moving platform such as an airplane, one or more image capture devices mounted to the moving platform, and a detection computer. The image capture device has a sensor for capturing an image. The detection computer executes an abnormality detection algorithm for detecting an abnormality in an image immediately after the image is captured and then automatically and immediately causing a re-shoot of the image. Alternatively, the detection computer sends a signal to the flight management software executed on a computer system to automatically schedule a re-shoot of the image. When the moving platform is an airplane, the detection computer schedules a re-shoot of the image such that the image is retaken before landing the airplane. | 11-06-2008 |
20080273754 | APPARATUS AND METHOD FOR DEFINING AN AREA OF INTEREST FOR IMAGE SENSING - A method for defining an area of interest or a trip line using a camera by tracking the movement of a person within a field of view of the camera. The area of interest is defined by a path or boundary indicated by the person's movement. Alternatively, a trip line comprising a path between a starting point and a stopping point may be defined by tracking the movement of the person within the camera's field of view. An occupancy sensor may be structured to sense the movement of an occupant within an area, and to adjust the lighting in the area accordingly if the occupant enters the area of interest or crosses the trip line. The occupancy sensor includes an image sensor coupled to a processor, an input facility such as a pushbutton to receive input, and an output facility such as an electronic beeper to provide feedback to the person defining the area of interest or the trip line. | 11-06-2008 |
20080273755 | CAMERA-BASED USER INPUT FOR COMPACT DEVICES - A camera is used to detect a position and/or orientation of an object such as a user's finger as an approach for providing user input, for example to scroll through data, control a cursor position, and provide input to control a video game based on a position of a user's finger. Input may be provided to a handheld device, including, for example, cell phones, video games systems, portable music (MP3) players, portable video players, personal data assistants (PDAs), audio/video equipment remote controls, and consumer digital cameras, or other types of devices. | 11-06-2008 |
20080273756 | POINTING DEVICE AND MOTION VALUE CALCULATING METHOD THEREOF - A pointing device is provided. A sensor generates a motion detection signal by sensing motion. A calculator receives the motion detection signal, calculates a motion value based on the motion detection signal, calculates a conversion motion value based on an angle of the motion value, and outputs the conversion motion value. An interface outputs the conversion motion value inputted from the calculator. By limiting a motion angle, the pointing device can provide a positioning operation suitable for a motion intended by a user. The user can optionally use a motion control method in all directions according to need. | 11-06-2008 |
20080279420 | VIDEO AND AUDIO MONITORING FOR SYNDROMIC SURVEILLANCE FOR INFECTIOUS DISEASES - We present, in exemplary embodiments of the present invention, novel systems and methods for syndromic surveillance that can automatically monitor symptoms that may be associated with the early presentation of a syndrome (e.g., fever, coughing, sneezing, runny nose, sniffling, rashes). Although not so limited, the novel surveillance systems described herein can be placed in common areas occupied by a crowd of people, in accordance with local and national laws applicable to such surveillance. Common areas may include public areas (e.g., an airport, train station, sports arena) and private areas (e.g., a doctor's waiting room). The monitored symptoms may be transmitted to a responder (e.g., a person, an information system) outside of the surveillance system, such that the responder can take appropriate action to identify, treat and quarantine potentially infected individuals, as necessary. | 11-13-2008 |
20080279421 | OBJECT DETECTION USING COOPERATIVE SENSORS AND VIDEO TRIANGULATION - Methods and apparatus are provided for detecting and tracking a target. Images are captured from a field of view by at least two cameras mounted on one or more platforms. These images are analyzed to identify landmarks within the images which can be used to track the target's position from frame to frame. The images are fused (merged) with information about the target or platform position from at least one sensor to detect and track the target. The target's position with respect to the position of the platform is displayed, or the position of the platform relative to the target is displayed. | 11-13-2008 |
20080285797 | METHOD AND SYSTEM FOR BACKGROUND ESTIMATION IN LOCALIZATION AND TRACKING OF OBJECTS IN A SMART VIDEO CAMERA - Aspects of a method and system for change detection in localization and tracking of objects in a smart video camera are provided. A programmable surveillance video camera comprises processors for detecting objects in a video signal based on an object mask. The processors may generate a textual representation of the video signal by utilizing a description language to indicate characteristics of the detected objects, such as shape, texture, color, and/or motion, for example. The object mask may be based on a detection field value generated for each pixel in the video signal by comparing a first observation field and a second observation field associated with each of the pixels. The first observation field may be based on a difference between an input video signal value and an estimated background value while the second observation field may be based on a temporal difference between first observation fields. | 11-20-2008 |
20080285798 | Obstacle detection apparatus and a method therefor - An apparatus of detecting an object on a road surface includes a stereo set of video cameras mounted on a vehicle to produce right and left images, a storage to store the right and left images, a parameter computation unit to compute a parameter representing road planarity constraint based on the images of the storage, a corresponding point computation unit to compute correspondence between a first point on one of the right and left images and a second point on the other, which corresponds to the first point, based on the parameter, an image transformation unit to produce a transformed image from the one image using the correspondence, and a detector to detect an object having a dimension larger than a given value in a vertical direction with respect to the road surface, using the correspondence and the transformed image. | 11-20-2008 |
20080285799 | APPARATUS AND METHOD FOR DETECTING OBSTACLE THROUGH STEREOVISION - According to an apparatus and method for detecting an obstacle through stereovision, an image capturing module comprises a plurality of cameras and is used for capturing a plurality of images; an image processing module edge-detects the images to generate a plurality of edge objects and object information corresponding to each edge object; an object detection module matches a focus and a horizontal spacing interval of the camera according to the object information to generate a relative object distance corresponding to each edge object; a group module compares the relative object distance with a threshold distance and groups the edge objects with the relative object distance smaller than the threshold distance into an obstacle and obtains a relative obstacle distance corresponding to the obstacle. | 11-20-2008 |
20080285800 | INFORMATION PROCESSING APPARATUS AND METHOD, AND PROGRAM - An information processing apparatus includes an obtaining unit configured to obtain feature quantities of an image; and a detector configured to detect a gazing point at which a user gazes within the image, wherein, among the feature quantities obtained by the obtaining unit, those at the gazing point detected by the detector, or the feature quantities extracted from the image in a predetermined range containing the gazing point, are stored. | 11-20-2008 |
20080285801 | Visual Tracking Eye Glasses In Visual Head And Eye Tracking Systems - The invention relates to the application area of camera-based head and eye tracking systems. The performance of such systems typically suffers when eye glasses are worn, as the frames of the glasses interfere with the tracking of the facial features utilized by the system. This invention describes how the appearance of the glasses can be utilized by such a tracking system, not only eliminating the interference of the glasses with the tracking but also aiding the tracking of the facial features. The invention utilizes a shape model of the glasses which can be tracked by a specialized tracker to derive 3D pose information. | 11-20-2008 |
20080285802 | TAILGATING AND REVERSE ENTRY DETECTION, ALARM, RECORDING AND PREVENTION USING MACHINE VISION - Unauthorized entry into controlled access areas using tailgating or reverse entry methods is detected using machine vision methods. Camera images of the controlled area are processed to identify and track objects in the controlled area. In a preferred embodiment, this processing includes 3D surface analysis to distinguish and classify objects in the field of view. Feature extraction, color analysis, and pattern recognition may also be used for identification and tracking of objects. Integration with security monitoring and control systems provides notification when a tailgating or reverse entry event has occurred. More reliable operation in practical circumstances is thus obtained, such as when multiple people are using an entrance or exit under variable light and shadow conditions. Electronic access control systems may further be combined with the machine vision methods of the invention to more effectively prevent tailgating or reverse entry. | 11-20-2008 |
20080292140 | Tracking people and objects using multiple live and recorded surveillance camera video feeds - Tracking a target across a region is disclosed. A graphical user interface is provided that displays, in a first region, video from a field of view of a main video device, and, in a plurality of second regions, video from a field of view of each of a plurality of perimeter video devices (PVDs). The field of view of each PVD is proximate to the main video device's field of view. A selection of one of the plurality of PVDs is received. In response, video from a field of view of the selected PVD is displayed in the first region, and a plurality of candidate PVDs is identified. Each candidate PVD has a field of view proximate to the field of view of the selected PVD. The plurality of second regions is then repopulated with video from a field of view of each of the plurality of identified candidate PVDs. | 11-27-2008 |
20080298636 | METHOD FOR DETECTING WATER REGIONS IN VIDEO - A computer-based method for automatic detection of water regions in a video includes the steps of estimating a water map of the video and outputting the water map to an output medium, such as a video analysis system. The method may further include the steps of training a water model from the water map; re-classifying the water map using the water model by detecting water pixels in the video; and refining the water map. | 12-04-2008 |
20080298637 | Head Pose Assessment Methods and Systems - Improvements are provided to effectively assess a user's face and head pose such that a computer or like device can track the user's attention towards a display device(s). Then the region of the display or graphical user interface that the user is turned towards can be automatically selected without requiring the user to provide further inputs. A frontal face detector is applied to detect the user's frontal face, and then key facial points such as left/right eye center, left/right mouth corner, nose tip, etc., are detected by component detectors. The system then tracks the user's head by an image tracker and determines yaw, tilt and roll angle and other pose information of the user's head through a coarse-to-fine process according to key facial points and/or confidence outputs by a pose estimator. | 12-04-2008 |
20080304705 | SYSTEM AND METHOD FOR SIDE VISION DETECTION OF OBSTACLES FOR VEHICLES - This invention provides a system and method for object detection and collision avoidance for objects and vehicles located behind the cab or front section of an elongated, and possibly tandem, vehicle. Through the use of narrow-baseline stereo vision that can be vertically oriented relative to the ground/road surface, the system and method can employ relatively inexpensive cameras, in a stereo relationship, on a low-profile mounting, to perform reliable detection with good range discrimination. The field of detection is sufficiently behind and aside the rear area to assure an adequate safety zone in most instances. Moreover, this system and method allows all equipment to be maintained on the cab of a tandem vehicle, rather than the interchangeable, and more-prone-to-damage cargo section and/or trailer. One or more cameras can be mounted on, or within, the mirror on each side, on aerodynamic fairings or other exposed locations of the vehicle. Image signals received from each camera can be conditioned before they are matched and compared for disparities viewed above the ground surface, and according to predetermined disparity criteria. | 12-11-2008 |
20080304706 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - There is provided an information processing apparatus, comprising: an obtaining unit which obtains video data captured by an image capturing apparatus disposed in a monitored space, location information regarding a location of a moving object in the monitored space, and existence information regarding a capturing period of the moving object in the video data; and a display processing unit which processes a display of a trajectory of the moving object in the monitored space based on the location information, the display processing unit processing a display of the trajectory so that the portion of the trajectory that corresponds to the capturing period is distinguishable from the other portions of the trajectory, based on the existence information. | 12-11-2008 |
20080304707 | Information Processing Apparatus, Information Processing Method, and Computer Program - An information processing apparatus that executes processing for creating an environmental map includes a camera that photographs an image, a self-position detecting unit that detects a position and a posture of the camera on the basis of the image, an image-recognition processing unit that detects an object from the image, a data constructing unit that is inputted with information concerning the position and the posture of the camera and information concerning the object and executes processing for creating or updating the environmental map, and a dictionary-data storing unit having stored therein dictionary data in which object information is registered. The image-recognition processing unit executes processing for detecting an object from the image acquired by the camera with reference to the dictionary data. The data constructing unit applies the three-dimensional shape data registered in the dictionary data to the environmental map and executes object arrangement on the environmental map. | 12-11-2008 |
20080310676 | Method and System for Optoelectronic Detection and Location of Objects - Disclosed are methods and systems for optoelectronic detection and location of moving objects. The disclosed methods and systems capture one-dimensional images of a field of view through which objects may be moving, make measurements in those images, select from among those measurements those that are likely to correspond to objects in the field of view, make decisions responsive to various characteristics of the objects, and produce signals that indicate those decisions. The disclosed methods and systems provide excellent object discrimination, electronic setting of a reference point, no latency, high repeatability, and other advantages that will be apparent to one of ordinary skill in the art. | 12-18-2008 |
20080310677 | OBJECT DETECTION SYSTEM AND METHOD INCORPORATING BACKGROUND CLUTTER REMOVAL - A method and system for optically detecting an object within a field of view where detection is difficult because of background clutter within the field of view that obscures the object. A camera is panned with movement of the object to motion stabilize the object against the background clutter while taking a plurality of image frames of the object. A frame-by-frame analysis is performed to determine variances in the intensity of each pixel, over time, from the collected frames. From this analysis a variance image is constructed that includes an intensity variance value for each pixel. Pixels representing background clutter will typically vary considerably in intensity from frame to frame, while pixels making up the object will vary little or not at all. A binary threshold test is then applied to each variance value and the results are used to construct a final image. The final image may be a black and white image that clearly shows the object as a silhouette. | 12-18-2008 |
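The variance-image construction in 20080310677 is concrete enough to sketch: after the camera pan has motion-stabilized the object, per-pixel intensity variance over the frame stack is thresholded so that low-variance pixels (the object) form a silhouette. The threshold value and the synthetic frames below are illustrative assumptions.

```python
import numpy as np

def object_silhouette(frames, variance_threshold):
    """frames: iterable of equally sized grayscale images (H, W)."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames], axis=0)
    variance = stack.var(axis=0)              # per-pixel intensity variance over time
    # Object pixels vary little frame to frame; background clutter varies a lot.
    silhouette = (variance < variance_threshold).astype(np.uint8) * 255
    return variance, silhouette

rng = np.random.default_rng(0)
clutter = rng.integers(0, 255, size=(10, 120, 160)).astype(np.float64)
clutter[:, 40:80, 60:100] = 128.0             # a stationary (stabilized) object patch
_, mask = object_silhouette(clutter, variance_threshold=100.0)
print(mask[60, 80], mask[5, 5])               # 255 inside the object, 0 in the clutter
```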
20080310678 | Pedestrian Detecting Apparatus - A first pedestrian judging unit judges, on the basis of the size and motion state of a target three-dimensional object, whether the object is a pedestrian. A second pedestrian judging unit judges, on the basis of shape data on the object, whether the object is a pedestrian. A pedestrian judging unit finally determines that the object is a pedestrian when both the first and second pedestrian judging units judge the object as a pedestrian, when the second pedestrian judging unit judges the object as a pedestrian, when the first pedestrian judging unit judges the object as a pedestrian and a result of this judgment is held for a preset period, or when the first pedestrian judging unit judges the object as a pedestrian in a current judgment operation and the second pedestrian judging unit judged the object as a pedestrian in the previous judging operation. | 12-18-2008 |
20080317281 | MEDICAL MARKER TRACKING WITH MARKER PROPERTY DETERMINATION - A method for tracking at least one medical marker is provided, wherein actual properties of the at least one marker are compared with nominal properties of the at least one marker. A basis for subsequent use of information obtained from the at least one marker is formed based on the comparison. | 12-25-2008 |
20080317282 | Vehicle-Use Image Processing System, Vehicle-Use Image Processing Method, Vehicle-Use Image Processing Program, Vehicle, and Method of Formulating Vehicle-Use Image Processing System - A system or the like capable of detecting lane marks more accurately by preventing false lane marks from being erroneously detected as true lane marks. A vehicle-use image processing system ( | 12-25-2008 |
20080317283 | SIGNAL PROCESSING METHOD AND DEVICE FOR MULTI APERTURE SUN SENSOR - The disclosure relates to a signal processing method for a multi-aperture sun sensor comprising the following steps: reading the information of sunspots in a row from a centroid coordinate memory, judging the absence of sunspots in that row, identifying the row and column index of the sunspots in the complete row, selecting the corresponding calibration parameter based on the row and column index, calculating the attitude with the attitude calculation module corresponding to the identified sunspots, averaging the accumulated attitude of all sunspots, and outputting the final attitude. At the same time, a signal processing device for a multi-aperture sun sensor is also presented. It comprises a sunspot absence judgment and identification module and an attitude calculation module. The disclosure implements the integration of sun sensors without an additional image processor or attitude processor, reduces field programmable gate array resources and improves the reliability of sun sensors. | 12-25-2008 |
20080317284 | Face tracking device - A face tracking device for tracking an orientation of a person's face using a cylindrical head model, the face tracking device comprises: an image means for continuously shooting the person's face and for obtaining first image data based on a shot of the person's face; an extraction means for extracting second image data from the first image data, the second image data corresponding to a facial area of the person's face; a determination means for determining whether the second image data is usable as an initial value required for the cylindrical head model; and a face orientation detection means for detecting the orientation of the person's face using the cylindrical head model and the initial value determined to be usable by the determination means. | 12-25-2008 |
20080317285 | IMAGING DEVICE, IMAGING METHOD AND COMPUTER PROGRAM - With a digital still camera, a user freely detects a smiling face on a touchpanel displaying a through image and selects a subject having that smiling face. The digital still camera displays the smiling face as a smiling face detection target and a non-target detected face on the through image in a distinctly different manner to discriminate the smiling face detection target from the non-target detected face. For example, when persons in an event such as a party are photographed in a relatively large viewing angle, an auto photographing operation may be performed in response to smiling face detections on condition that at least two members in the party are smiling. | 12-25-2008 |
20080317286 | SECURITY DEVICE AND SYSTEM - A security device and system is disclosed. This security device is particularly useful in a security system where there are many security cameras to be monitored. This device automatically highlights to a user a camera feed in which an incident is occurring. This assists the user in identifying incidents and in making an appropriate decision regarding whether or not to intervene. This highlighting is performed by a trigger signal generated in accordance with a comparison between a sequence of representations of sensory data and other corresponding sequences of representations of sensory data. | 12-25-2008 |
20080317287 | Image processing apparatus for reducing effects of fog on images obtained by vehicle-mounted camera and driver support apparatus which utilizes resultant processed images - Kalman filter processing is applied to each of successive images of a scene obscured by fog, captured by an onboard camera of a vehicle. The measurement matrix for the Kalman filter is established based on currently estimated characteristics of the fog, and intrinsic luminance values of a scene portrayed by a current image constitute the state vector for the Kalman filter. Adaptive filtering for removing the effects of fog from the images is thereby achieved, with the filtering being optimized in accordance with the degree of image deterioration caused by the fog. | 12-25-2008 |
20090003651 | OBJECT SEGMENTATION RECOGNITION - A system for segmenting radiographic images of a cargo container can include an object segmentation recognition module adapted to perform a series of functions. The functions can include receiving a plurality of radiographic images of a cargo container, each image generated using a different energy level and segmenting each of the radiographic images using one or more segmentation modules to generate segmentation data representing one or more image segments. The functions can also include identifying image layers within the radiographic images using a plurality of layer analysis modules by providing the plurality of radiographic images and the segmentation data as input to the layer analysis modules, and determining adjusted atomic number values for an atomic number image based on the image layers. The functions can include adjusting the atomic number image based on the adjusted atomic number values for the regions of interest to generate an adjusted atomic number image and identifying regions of interest within the adjusted atomic number image based on an image characteristic. The functions can also include providing coordinates of each region of interest and the adjusted atomic number image as output. | 01-01-2009 |
20090003652 | REAL-TIME FACE TRACKING WITH REFERENCE IMAGES - A method of tracking a face in a reference image stream using a digital image acquisition device includes acquiring a full resolution main image and an image stream of relatively low resolution reference images each including one or more face regions. One or more face regions are identified within two or more of the reference images. A relative movement is determined between the two or more reference images. A size and location are determined of the one or more face regions within each of the two or more reference images. Concentrated face detection is applied to at least a portion of the full resolution main image in a predicted location for candidate face regions having a predicted size as a function of the determined relative movement and the size and location of the one or more face regions within the reference images, to provide a set of candidate face regions for the main image. | 01-01-2009 |
20090003653 | Trajectory processing apparatus and method - A trajectory processing apparatus comprises a trajectory database configured to store a position coordinate of a movable body detected from a camera image in association with data that specifies the camera image from which the movable body is detected, and a camera image database configured to store the camera image. A control section fetches the position coordinate of the movable body and the specifying data for the camera image from which the movable body is detected from the trajectory database. Further, the position coordinate of the movable body fetched from the trajectory database is displayed in a display section as a trajectory of the movable body. Furthermore, the control section acquires from the camera image database the camera image specified by the specifying data fetched from the trajectory database. Moreover, this camera image is displayed in the display section. | 01-01-2009 |
20090010490 | SYSTEM AND PROCESS FOR DETECTING, TRACKING AND COUNTING HUMAN OBJECTS OF INTEREST - A method of identifying, tracking, and counting human objects of interest based upon at least one pair of stereo image frames taken by at least one image capturing device, comprising the steps of: obtaining said stereo image frames and converting each said stereo image frame to a rectified image frame using calibration data obtained for said at least one image capturing device; generating a disparity map based upon a pair of said rectified image frames; generating a depth map based upon said disparity map and said calibration data; identifying the presence or absence of said objects of interest from said depth map and comparing each of said objects of interest to existing tracks comprising previously identified objects of interest; for each said presence of an object of interest, adding said object of interest to one of said existing tracks if said object of interest matches said one existing track, or creating a new track comprising said object of interest if said object of interest does not match any of said existing tracks; updating each said existing track; and maintaining a count of said objects of interest in a given time period based upon said existing tracks created or modified during said given time period. | 01-08-2009 |
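The track-association and counting step in 20090010490 lends itself to a small sketch: each detected object of interest is matched to an existing track by proximity, otherwise a new track is opened, and the count for the period is the number of tracks. The gating distance, data layout, and function name are assumptions for illustration; the disparity and depth-map stages are omitted.

```python
import math

def update_tracks(tracks, detections, gate=50.0):
    """tracks: dict track_id -> (x, y); detections: list of (x, y) positions."""
    next_id = max(tracks, default=-1) + 1
    for det in detections:
        # Find the nearest existing track within the gating distance.
        best_id, best_d = None, gate
        for tid, pos in tracks.items():
            d = math.dist(det, pos)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is not None:
            tracks[best_id] = det          # update the matched existing track
        else:
            tracks[next_id] = det          # open a new track for an unmatched object
            next_id += 1
    return tracks

tracks = {}
for frame_dets in [[(10, 10)], [(14, 12), (200, 40)], [(18, 15), (204, 42)]]:
    tracks = update_tracks(tracks, frame_dets)
print("objects counted in period:", len(tracks))   # 2
```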
20090010491 | METHOD AND APPARATUS FOR PROVIDING PICTURE FILE - A method and an apparatus for providing a picture file are provided. The picture file providing apparatus includes a controller which searches for one or more picture files based on a location of a subject, and a screen display unit which forms a display screen to display the one or more picture files that were found, in order to provide a user with the direction information included in each picture file. Each picture file includes picture data, information on a location in which the picture data was created, and information on a direction of a captured image of a subject included in the picture data. | 01-08-2009 |
20090010492 | IMAGE RECOGNITION DEVICE, FOCUS ADJUSTMENT DEVICE, IMAGING APPARATUS, IMAGE RECOGNITION METHOD AND FOCUS ADJUSTMENT METHOD - An image recognition device includes a detection unit which is configured to detect a first difference between partial information of at least a part of the first image information and the reference information and to detect a second difference between partial information of at least a part of the second image information and the reference information. A recognition unit is configured to recognize a first area corresponding to the reference image in the first image information. A calculation unit is configured to calculate a determination value based on a reference area in the second image information corresponding to the first area by weighting the second difference. The recognition unit is configured to recognize a second area corresponding to the reference image in the second image information based on at least one of the second difference and the determination value. | 01-08-2009 |
20090010493 | Motion-Validating Remote Monitoring System - A method of autonomously monitoring a remote site, including the steps of locating a primary detector at a site to be monitored; creating one or more geospatial maps of the site using an overhead image of the site; calibrating the primary detector to the geospatial map using a detector-specific model; detecting an object in motion at the site; tracking the moving object on the geospatial map; and alerting a user to the presence of motion at the site. In addition, thermal image data from infrared cameras, rather than optical/visual image data, is used to create detector-specific models and geospatial maps in substantially the same way that optical cameras and optical image data would be used. | 01-08-2009 |
20090016570 | METHOD AND APPARATUS FOR CALIBRATING SAMPLING OPERATIONS FOR AN OBJECT DETECTION PROCESS - One embodiment of the present invention provides a system that detects an object in an image. During operation, the system determines a relationship between sampling parameters and a detection rate for an object detection process. The system also determines a relationship between the sampling parameters and a detection speed for the object detection process. The system uses the determined relationships to generate specific sampling parameters. Next, the system performs the object detection process, wherein the object detection process uses the sampling parameters to sample locations in the image. This sampling process is used to refine the search for the object by identifying locations that respond to an object detector and are hence likely to be proximate to an instance of the object. | 01-15-2009 |
20090022364 | MULTI-POSE FACE TRACKING USING MULTIPLE APPEARANCE MODELS - A system and method are provided for tracking a face moving through multiple frames of a video sequence. A predicted position of a face in a video frame is obtained. Similarity matching for both a color model and an edge model is performed to derive correlation values for each about the predicted position. The correlation values are then combined to determine a best position and scale match to track a face in the video. | 01-22-2009 |
20090022365 | METHOD AND APPARATUS FOR MEASURING POSITION AND ORIENTATION OF AN OBJECT - An information processing method includes acquiring an image of an object captured by an imaging apparatus, acquiring an angle of inclination measured by an inclination sensor mounted on the object or the imaging apparatus, detecting a straight line from the captured image, and calculating a position and orientation of the object or the imaging apparatus, on which the inclination sensor is mounted, based on the angle of inclination, an equation of the detected straight line on the captured image, and an equation of a straight line in a virtual three-dimensional space that corresponds to the detected straight line. | 01-22-2009 |
20090022366 | SYSTEM AND METHOD FOR ANALYZING VIDEO FROM NON-STATIC CAMERA - A novel system and method of treating the output of moving cameras, in particular ones that enable the application of conventional “static camera” algorithms, e.g., to enable the continuous vigilance of computer surveillance technology to be applied to moving cameras that cover a wide area. According to the invention, a single camera is deployed to cover an area that might require many static cameras and a corresponding number of processing units. A novel system for processing the main video sufficiently enables long-term change detection, particularly the observation that a static object has been moved or has appeared, for instance detecting the parking and departure of vehicles in a parking lot, the arrival of trains in stations, delivery of goods, arrival and dispersal of people, or any other application. | 01-22-2009 |
20090022367 | THREE-DIMENSIONAL SHAPE DETECTING DEVICE AND THREE-DIMENSIONAL SHAPE DETECTING METHOD - A three-dimensional shape detection device is disclosed which can detect a three-dimensional shape of an object to be picked up even in the case that an image pick-up part with a narrow dynamic range is used. An image of the object to be picked up is picked up under a plurality of different exposure conditions in a state in which each of a plurality of kinds of patterned lights alternately disposing bright and dark portions is time-sequentially projected onto the object to be picked up, and a plurality of brightness images are generated for the respective exposure conditions. Further, based on such a plurality of the brightness images, a coded image is formed for each exposure condition and a code edge position for a space code is obtained for every exposure condition. Based on the plurality of code edge positions for every exposure condition obtained in this manner, one code edge position for calculating a three-dimensional shape of the object to be picked up is determined such that the three-dimensional shape of the object to be picked up is calculated. | 01-22-2009 |
20090022368 | MONITORING DEVICE, MONITORING METHOD, CONTROL DEVICE, CONTROL METHOD, AND PROGRAM - The present invention relates to a monitoring device, monitoring method, control device, control method, and program that use information on a face direction or gaze direction of a person to cause a device to perform processing in accordance with a movement or status of the person. A target detector | 01-22-2009 |
20090028384 | Three-dimensional road map estimation from video sequences by tracking pedestrians - Estimation of a 3D layout of roads and paths traveled by pedestrians is achieved by observing the pedestrians and estimating road parameters from the pedestrian's size and position in a sequence of video frames. The system includes a foreground object detection unit to analyze video frames of a 3D scene and detect objects and object positions in video frames, an object scale prediction unit to estimate 3D transformation parameters for the objects and to predict heights of the objects based at least in part on the parameters, and a road map detection unit to estimate road boundaries of the 3D scene using the object positions to generate the road map. | 01-29-2009 |
20090028385 | DETECTING AN OBJECT IN AN IMAGE USING EDGE DETECTION AND MORPHOLOGICAL PROCESSING - A representation of an object in a live event is detected in an image of the event. A location of the object in the live event is translated to an estimated location in the image based on camera sensor and/or registration data. A search area is determined around the estimated location in the image. A direction of motion of the object in the image is also determined. A representation of the object is identified in the search area by detecting edges of the object, e.g., perpendicular to the direction of motion and parallel to the direction of motion, performing morphological processing, and matching against a model or other template of the object. Based on the position of the representation of the object, the camera sensor and/or registration data can be updated, and a graphic can be located in the image substantially in real time. | 01-29-2009 |
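A sketch of the search-area pipeline in 20090028385, assuming OpenCV and grayscale uint8 inputs: edges are detected inside a window around the estimated image position, cleaned up with morphological closing, and correlated against an edge template of the object. The window size, Canny thresholds, kernel size, and function name are illustrative assumptions, not the patent's parameters.

```python
import cv2
import numpy as np

def locate_object(image, template, est_xy, half_win=60):
    """image, template: grayscale uint8 arrays; est_xy: estimated (x, y) position."""
    x, y = est_xy
    h, w = image.shape[:2]
    # Clip the search area around the estimated position to the image bounds.
    x0, x1 = max(0, x - half_win), min(w, x + half_win)
    y0, y1 = max(0, y - half_win), min(h, y + half_win)
    roi = image[y0:y1, x0:x1]
    # Edge detection followed by morphological closing to join broken contours.
    edges = cv2.Canny(roi, 50, 150)
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
    tmpl_edges = cv2.Canny(template, 50, 150)
    # Match the edge template over the cleaned-up edge window.
    scores = cv2.matchTemplate(edges, tmpl_edges, cv2.TM_CCORR_NORMED)
    _, score, _, (tx, ty) = cv2.minMaxLoc(scores)
    return (x0 + tx, y0 + ty), score
```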
20090028386 | AUTOMATIC TRACKING APPARATUS AND AUTOMATIC TRACKING METHOD - An automatic tracking apparatus is provided which is capable of resolving a failure that occurs in an automatic tracking operation in connection with a zooming operation, and capable of tracking an object in a stable manner while a zooming-up operation or a zooming-down operation is carried out at a high speed. | 01-29-2009 |
20090028387 | Apparatus and method for recognizing position of mobile robot - Provided is an apparatus for recognizing the position of a mobile robot. The apparatus includes an image capturing unit which is loaded into a mobile robot and captures an image; an illuminance determining unit which determines illuminance at a position where an image is to be captured; a light-emitting unit which emits light toward the position; a light-emitting control unit which controls the light-emitting unit according to the determined illuminance; a driving control unit which controls the speed of the mobile robot according to the determined illuminance; and a position recognizing unit which recognizes the position of the mobile robot by comparing a pre-stored image to the captured image. | 01-29-2009 |
20090034789 | MOVING THING RECOGNITION SYSTEM - A moving thing recognition system using a camera on the path along which the moving thing (such as a train, vehicle or ship) is proceeding, in which a check aligning device is used to align each picture file of the moving thing with virtual checks so as to determine the body type and speed of the moving thing. The virtual checks of the moving thing are established by taking the length of a fixed marking article or some other fixed article on the path of the moving thing as a reference. Thereby, as long as the path of the moving thing is unchanged, accurate recognition can be obtained without any emitted signal. | 02-05-2009 |
20090034790 | Method for customs inspection of baggage and cargo - A method and system of inspecting baggage to be transported from a location of origin to a destination is provided that includes generating scan data representative of a piece of baggage while the baggage is at the location of origin, and storing the scan data in a database. Rendered views representative of the content of the baggage are provided, where the rendered views are based on the scan data retrieved from the database over a network. The rendered views are presented at a destination different from the origin. | 02-05-2009 |
20090034791 | Image processing for person and object Re-identification - A device and method for processing an image to create appearance and shape labeled images of a person or object captured within the image. The appearance and shape labeled images are unique properties of the person or object and can be used to re-identify the person or object in subsequent images. The appearance labeled image is an aggregate of pre-stored appearance labels that are assigned to image segments of the image based on calculated appearance attributes of each image segment. The shape labeled image is an aggregate of pre-stored shape labels that are assigned to image segments of the image based on calculated shape attributes of each image segment. An identifying descriptor of the person or object can be computed based on both the appearance labeled image and the shape labeled image. The descriptor can be compared with other descriptors of later captured images to re-identify a person or object. | 02-05-2009 |
20090034792 | REDUCING LATENCY IN A DETECTION SYSTEM - A first multi-dimensional digital image of a scan region is generated. The scan region is included in a materials-detection apparatus and is configured to receive and move containers through the materials-detection apparatus. A pre-defined background range of values is accessed, the background range of values representing a range of values associated with non-target materials and the background range of values being distinct from values associated with the target materials. A value of a voxel included in the multi-dimensional digital image is compared to the background range of values to determine whether the value of the voxel is within the background range of values. If the value of the voxel is within the background range of values, the voxel is identified as a voxel representing a low-density material. A second multi-dimensional digital image that disregards the identified voxel is generated to compress the first multi-dimensional digital image. | 02-05-2009 |
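The background-suppression step in 20090034792 can be illustrated directly: voxels whose values fall inside a pre-defined background (non-target) range are treated as low-density material and disregarded, which lets the volume compress well. The range, data, and function name below are illustrative assumptions.

```python
import numpy as np

def suppress_background(volume, bg_low, bg_high):
    """volume: 3-D array of scan values; returns a copy with background voxels zeroed."""
    background = (volume >= bg_low) & (volume <= bg_high)
    compressed = volume.copy()
    compressed[background] = 0     # disregard voxels identified as low-density material
    return compressed

vol = np.array([[[5, 120], [130, 900]],
                [[8, 115], [880, 2]]], dtype=np.int32)
print(suppress_background(vol, bg_low=100, bg_high=200))
```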
20090034793 | Fast Crowd Segmentation Using Shape Indexing - A method for performing crowd segmentation includes receiving video image data (S | 02-05-2009 |
20090034794 | Conduct inference apparatus - In a conduct inference process, feature points are extracted from a capture image. The extracted feature points are collated with conduct inference models to select conduct inference models in each of which an accordance ratio between a target vector and a movement vector is within a tolerance. Among the selected conduct inference models, one conduct inference model in which a distance from a relative feature point to a return point is shortest is selected. Then, a specific conduct designated in the selected conduct inference model is tentatively determined as a specific conduct the driver intends to perform. Furthermore, based on the tentatively determined specific conduct, it is determined whether the specific conduct is probable. When it is determined that the specific conduct is probable, an alarm process is executed to output an alarm to the driver. | 02-05-2009 |
20090034795 | METHOD FOR GEOLOCALIZATION OF ONE OR MORE TARGETS - The subject of the invention is a method for geolocalization of one or more stationary targets from an aircraft by means of a passive optronic sensor. The sensor acquires at least one image I | 02-05-2009 |
20090034796 | INCAPACITY MONITOR - A method of monitoring incapacity of a subject which includes the steps of continuously monitoring eye and eyelid movement of at least one eye of the subject; analyzing eye and eyelid movements to obtain measures of ocular quiescence and the duration of an interval of no eye or eyelid movement; and, if the duration of ocular quiescence exceeds a predetermined value, providing a potential incapacity warning and requesting a response within a predetermined period, and applying an emergency procedure if no response is made within a predetermined interval. | 02-05-2009 |
20090041297 | Human detection and tracking for security applications - A computer-based system for performing scene content analysis for human detection and tracking may include a video input to receive a video signal; a content analysis module, coupled to the video input, to receive the video signal from the video input, and analyze scene content from the video signal and determine an event from one or more objects visible in the video signal; a data storage module to store the video signal, data related to the event, or data related to configuration and operation of the system; and a user interface module, coupled to the content analysis module, to allow a user to configure the content analysis module to provide an alert for the event, wherein, upon recognition of the event, the content analysis module produces the alert. | 02-12-2009 |
20090041298 | IMAGE CAPTURE SYSTEM AND METHOD - Video capture systems, methods and computer program products can be provided and configured to capture video sequences of one or more participants during an activity. The video capture system can be configured to include one or more video capture devices positioned at predetermined locations in an activity area; a tracking device configured to track a location of the participant during the activity; a content storage device communicatively coupled to the video capture devices and configured to store video content received from the video capture devices; and a content assembly device communicatively coupled to the content storage device and to the tracking device, and configured to use tracking information from the tracking device to retrieve video sequences of the participant from the content storage device and to assemble the retrieved video sequences into a composite participant video. | 02-12-2009 |
20090041299 | Method and Apparatus for Recognition of an Object by a Machine - Disclosed is a method and apparatus for recognition of an object by a machine including isolating and processing an image to help facilitate recognition of the object by the machine. | 02-12-2009 |
20090041300 | HEADLIGHT SYSTEM FOR VEHICLES, PREFERABLY FOR MOTOR VEHICLES - 1. Headlight system for vehicles, preferably for motor vehicles | 02-12-2009 |
20090041301 | FRAME OF REFERENCE REGISTRATION SYSTEM AND METHOD - A system for assisting in work carried out on a workpiece and having a frame of reference. The system includes a referencing arrangement to register the position of a first location in the frame of reference of the system; a tool holder for holding a tool to assist with the work; a data interface to receive image data relating to the workpiece; and a processing arrangement to register the image data within the frame of reference of the system. The position of the tool holder is known within the frame of reference of the system. The image data represents an image which is indexed by position relative to the first location. The processing arrangement utilizes the relative position of the image represented by the image data with respect to the first location and the position of the first location in the frame of reference of the system. | 02-12-2009 |
20090041302 | Object type determination apparatus, vehicle, object type determination method, and program for determining object type - An object type determination apparatus, an object type determination method, a vehicle, and a program for determining an object type, capable of accurately determining the type of the object by appropriately determining periodicity in movement of the object from images, are provided. The object type determination apparatus includes an object area extracting means ( | 02-12-2009 |
20090046893 | SYSTEM AND METHOD FOR TRACKING AND ASSESSING MOVEMENT SKILLS IN MULTIDIMENSIONAL SPACE - Accurate simulation of sport to quantify and train performance constructs by employing sensing electronics for determining, in essentially real time, the player's three dimensional positional changes in three or more degrees of freedom (three dimensions); and computer controlled sport specific cuing that evokes or prompts sport specific responses from the player that are measured to provide meaningful indicia of performance. The sport specific cuing is characterized as a virtual opponent that is responsive to, and interactive with, the player in real time. The virtual opponent continually delivers and/or responds to stimuli to create realistic movement challenges for the player. | 02-19-2009 |
20090052737 | Method and Apparatus for Detecting a Target in a Scene - A method of detecting a target in a scene is described that comprises the step of taking one or more data sets, each data set comprising a plurality of normalised data elements, each normalised data element corresponding to the return from a part of the scene normalised to a reference return for the same part of the scene. The method then involves thresholding ( | 02-26-2009 |
20090052738 | SYSTEM AND METHOD FOR COUNTING FOLLICULAR UNITS - A system and method for counting follicular units using an automated system comprises acquiring an image of a body surface having skin and follicular units, filtering the image to remove skin components in the image, processing the resulting image to segment it, and filtering noise to eliminate all elements other than hair follicles of interest so that hair follicles in an area of interest can be counted. The system may comprise an image acquisition device and an image processor for performing the method. In another aspect, the system and method also classify the follicular units based on the number of hairs in the follicular unit. | 02-26-2009 |
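A rough sketch of the counting pipeline in 20090052738, assuming OpenCV: skin is suppressed by keeping only dark (hair) pixels, the mask is cleaned by a morphological opening to remove noise, and the surviving connected components are counted (their areas could also drive the hair-count classification). The Otsu thresholding, kernel, and minimum area are assumptions, not the patent's actual filters.

```python
import cv2
import numpy as np

def count_follicular_units(bgr_image, min_area=15):
    """bgr_image: uint8 color image of the body surface; returns the unit count."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Hairs are darker than the surrounding skin: inverse Otsu threshold keeps dark pixels.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Morphological opening removes isolated noise pixels.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Component 0 is the background; drop components smaller than min_area as noise.
    units = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return len(units)
```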
20090052739 | HUMAN PURSUIT SYSTEM, HUMAN PURSUIT APPARATUS AND HUMAN PURSUIT PROGRAM - In a human pursuit system, a plurality of cameras, the shooting directions of which are directed toward a floor, are installed on a ceiling; a parallax of an object reflected in an overlapping image domain is calculated on the basis of at least a portion of the overlapping image domain where images shot by the plurality of cameras overlap; an object whose parallax is equal to or greater than a predetermined threshold value is detected as a human; a pattern image including the detected human object is extracted; and pattern matching is applied to the extracted pattern image and the image shot by the camera to thereby pursue a human movement trajectory. | 02-26-2009 |
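The parallax-threshold detection in 20090052739 reduces to a simple masking step once a disparity map over the overlapping image domain is available: pixels whose parallax meets the threshold are taken as a person (a tall object near the ceiling cameras), and the enclosing patch is cut out as the pattern image for later matching. The disparity source, threshold, and data below are illustrative assumptions.

```python
import numpy as np

def detect_person_patch(disparity, image, threshold):
    """disparity, image: 2-D arrays over the overlapping image domain."""
    mask = disparity >= threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    return image[y0:y1, x0:x1]       # pattern image containing the detected person

disp = np.zeros((100, 100))
disp[30:60, 40:70] = 8.0             # a head/shoulder region with large parallax
img = np.random.default_rng(1).integers(0, 255, (100, 100))
patch = detect_person_patch(disp, img, threshold=5.0)
print(patch.shape)                   # (30, 30)
```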
20090052740 | MOVING OBJECT DETECTING DEVICE AND MOBILE ROBOT - A moving object detecting device measures a congestion degree of a space and utilizes the congestion degree for tracking. In performing the tracking, a direction measured by a laser range sensor is heavily weighted when the congestion degree is low. When the congestion degree is high, a sensor fusion is performed by heavily weighting a direction measured by image processing on a captured image to obtain a moving object estimating direction, and a distance is obtained by the laser range sensor in the moving object estimating direction. | 02-26-2009 |
20090052741 | Subject tracking method, subject tracking device, and computer program product - A subject tracking method includes: calculating a similarity factor indicating a level of similarity between an image contained in a search frame at each search frame position and a template image by shifting the search frame within a search target area set in each of individual frames of input images input in time sequence; determining the position of the search frame for which the highest similarity factor value has been calculated within each input image to be a position (subject position) at which a subject is present; tracking the subject position thus determined through the individual frames of input images; calculating a difference between the highest similarity factor value and the second highest similarity factor value; and setting the search target area for a next frame based upon the calculated difference. | 02-26-2009 |
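A minimal sketch of the similarity-margin rule in 20090052741: the search-frame position with the best template similarity gives the subject position, and the gap between the best and second-best scores determines how wide the next frame's search area is. The scoring function (negated mean squared error), the margin threshold, and the widening rule are illustrative assumptions.

```python
import numpy as np

def track_step(frame, template, search_area):
    """frame, template: 2-D grayscale arrays; search_area: (y0, y1, x0, x1)."""
    sy0, sy1, sx0, sx1 = search_area
    th, tw = template.shape
    best, second, best_pos = -np.inf, -np.inf, None
    for y in range(sy0, sy1 - th + 1):
        for x in range(sx0, sx1 - tw + 1):
            patch = frame[y:y + th, x:x + tw].astype(np.float64)
            score = -np.mean((patch - template) ** 2)   # higher = more similar
            if score > best:
                second, best, best_pos = best, score, (y, x)
            elif score > second:
                second = score
    # A small margin means an ambiguous match, so search more widely in the next frame.
    margin = best - second
    grow = 4 if margin < 1.0 else 1
    y, x = best_pos
    next_area = (max(0, y - grow * th), y + (grow + 1) * th,
                 max(0, x - grow * tw), x + (grow + 1) * tw)
    return best_pos, next_area
```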
20090060270 | Image Detection Method - An image detection method is performed by a computer to determine whether or not an image in a region shot by a camera changes. According to the method, consecutive images shot by the camera are captured, and at least one anchored frame for the consecutive images is set. Whether or not the images in the anchored frame change is determined, and a signal is transmitted to indicate whether or not the detected region is normal. Then, a notification signal is transmitted automatically to remind supervisors to closely observe the detected region. | 03-05-2009 |
20090060271 | METHOD AND APPARATUS FOR MANAGING VIDEO DATA - A method for managing video data including selecting a target object from a monitored area monitored by at least one image capturing device, extracting feature data of the selected target object, detecting motion of an object occurring in video data corresponding to the monitored area, comparing feature data of the object causing the detected motion with the extracted feature data of the target object, and outputting information related to the object causing the motion when the comparing step determines the object causing the motion is the target object. | 03-05-2009 |
20090060272 | SYSTEM AND METHOD FOR OVERLAYING COMPUTER GENERATED HIGHLIGHTS IN A DISPLAY OF MILLIMETER WAVE IMAGERY - A system and method for overlaying computer-generated highlights in a display of millimeter wave imagery is disclosed. In a particular embodiment, visible spectrum and algorithmically created images are displayed adjacent to corresponding millimeter wave imagery on a graphical user interface (GUI). The millimeter wave imagery is used to detect a threat such as a concealed object. A computer generated highlight coinciding with a location of the detected concealed object is used to automatically overlay at least one of the visible spectrum images, algorithmically created images, and millimeter wave imagery. The computer generated highlight is encoded with information valuable for aiding the user when viewing and assessing the image data. | 03-05-2009 |
20090060273 | SYSTEM FOR EVALUATING AN IMAGE - In a system for evaluating an image, a processing device includes an input for receiving image data representing the image and another input for receiving distance information on a distance of an object relative to an image plane of the image. The distance information may be determined based on a three-dimensional image including depth information captured utilizing a | 03-05-2009 |
20090060274 | IMAGE PICK-UP APPARATUS HAVING A FUNCTION OF RECOGNIZING A FACE AND METHOD OF CONTROLLING THE APPARATUS - It is judged whether or not a human face detecting mode is set (S | 03-05-2009 |
20090060275 | MOVING BODY IMAGE EXTRACTION APPARATUS AND COMPUTER READABLE STORAGE MEDIUM STORING PROGRAM - A moving body image extraction apparatus calculates difference intensity relating to a background portion with respect to a plurality of frames of a continuous shoot, calculates a value by dividing the difference intensity of an arbitrary frame of the plurality of frames by the summed difference intensity for the plurality of frames, and outputs an extracted image of a moving body in the arbitrary frame based on the calculated value. | 03-05-2009 |
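The ratio described in 20090060275 is easy to make concrete: per-pixel difference intensity against the background is computed for every frame of the burst, and each frame's difference is divided by the sum over all frames, giving a per-pixel weight that is close to one where the moving body appears only in that frame. Using the per-pixel median of the burst as the background estimate is an assumption for illustration.

```python
import numpy as np

def moving_body_weight(frames, index, eps=1e-6):
    """frames: burst of grayscale images; index: frame whose moving body is extracted."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    background = np.median(stack, axis=0)             # assumed background portion
    diff = np.abs(stack - background)                 # difference intensity per frame
    return diff[index] / (diff.sum(axis=0) + eps)     # this frame's share of the total difference

frames = [np.zeros((50, 50)) for _ in range(5)]
for i, f in enumerate(frames):
    f[20:30, 5 + 8 * i: 15 + 8 * i] = 200.0           # body moves left to right
w = moving_body_weight(frames, index=2)
print(round(float(w[25, 25]), 2))                     # ~1.0 where only frame 2's body is
```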
20090060276 | METHOD FOR DETECTING AND/OR TRACKING OBJECTS IN MOTION IN A SCENE UNDER SURVEILLANCE THAT HAS INTERFERING FACTORS; APPARATUS; AND COMPUTER PROGRAM - A method for detection and/or tracking of objects in motion | 03-05-2009 |
20090060277 | BACKGROUND MODELING WITH FEATURE BLOCKS - Video content analysis of a video may include: modeling a background of the video; detecting at least one target in a foreground of the video based on the feature blocks of the video; and tracking each target of the video. Modeling a background of the video may include: dividing each frame of the video into image blocks; determining features for each image block of each frame to obtain feature blocks for each frame; determining a feature block map for each frame based on the feature blocks of each frame; and determining a background feature block map to model the background of the video based on at least one of the feature block maps. | 03-05-2009 |
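A compact sketch of the feature-block idea in 20090060277: each frame is divided into blocks, one feature is computed per block, the background feature-block map is aggregated over several frames, and blocks whose current feature deviates strongly become foreground candidates. The block size, the feature choice (mean intensity), the median aggregation, and the deviation test are illustrative assumptions.

```python
import numpy as np

def feature_block_map(frame, block=16):
    """frame: 2-D grayscale array; returns one feature value per block."""
    h, w = frame.shape
    hb, wb = h // block, w // block
    blocks = frame[:hb * block, :wb * block].reshape(hb, block, wb, block)
    return blocks.mean(axis=(1, 3))

def background_feature_map(frames, block=16):
    # Per-block median over the frames serves as the background feature block map.
    return np.median(np.stack([feature_block_map(f, block) for f in frames]), axis=0)

def foreground_blocks(frame, bg_map, block=16, tol=20.0):
    # Blocks deviating from the background model by more than tol are foreground candidates.
    return np.abs(feature_block_map(frame, block) - bg_map) > tol
```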
20090060278 | STATIONARY TARGET DETECTION BY EXPLOITING CHANGES IN BACKGROUND MODEL - A sequence of video frames of an area of interest is obtained. A first background model of the area of interest is constructed based on a first parameter. A second background model of the area of interest is constructed based on a second parameter, the second parameter being different from the first parameter. A difference between the first and second background models is determined. A stationary target is determined based on the determined difference. An alert concerning the stationary target is generated. | 03-05-2009 |
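The two-model idea in 20090060278 can be sketched with running-average backgrounds: a fast-adapting and a slow-adapting model are maintained, and a region absorbed by the fast model but not yet by the slow one corresponds to something that has stopped moving, i.e. a stationary target. The learning rates and threshold below are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

class StationaryTargetDetector:
    def __init__(self, first_frame, fast=0.10, slow=0.01, tol=25.0):
        self.bg_fast = first_frame.astype(np.float64)
        self.bg_slow = first_frame.astype(np.float64)
        self.fast, self.slow, self.tol = fast, slow, tol

    def update(self, frame):
        f = frame.astype(np.float64)
        # Two running-average background models with different adaptation rates.
        self.bg_fast += self.fast * (f - self.bg_fast)
        self.bg_slow += self.slow * (f - self.bg_slow)
        # Where the two models disagree, a newly stationary object is present.
        return np.abs(self.bg_fast - self.bg_slow) > self.tol
```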
20090067673 | METHOD AND APPARATUS FOR DETERMINING THE POSITION OF A VEHICLE, COMPUTER PROGRAM AND COMPUTER PROGRAM PRODUCT - The present invention relates to an apparatus and a method for determining the position of a vehicle moved along a path, markers, particularly code carriers or barcodes, being located along the path. The method is characterized in that the markers are detected with a digital camera placed on the vehicle and in that, by means of image processing, from a position of at least one marker image in the detection or coverage range of the digital camera, a position of the vehicle relative to the given marker or the given markers is determined in the main vehicle movement direction along the path and in at least one direction at right angles to the main movement direction. The invention also relates to a computer program and a computer program product. | 03-12-2009 |
20090067674 | Monitoring device - The invention concerns a monitoring device with a multi-camera device and an object tracking device for the high-resolution observation of moving objects. It is provided that the object tracking device comprises an image integration device for the generation of a total image from the individual images of the multi-camera device, and a cut-out definition device for the definition, independent of the borders of the individual images, of the cut-out to be observed. | 03-12-2009 |
20090074244 | Wide luminance range colorimetrically accurate profile generation method - Generating a color profile for a digital input device. Color values for at least one color target positioned within a first scene are measured, the color target having multiple color patches. An image of the first scene is generated using the digital input device, the first scene including the color target(s). Color values from a portion of the image corresponding to the color target are extracted and a color profile is generated, based on the measured color values and the extracted color values. The generated color profile is used to transform the color values of an image of a second scene captured under the same lighting conditions as the first scene. Using this generated color profile to transform images is likely to result in more colorimetrically accurate transformations of images created under real-world lighting conditions. | 03-19-2009 |
20090074245 | Miniature autonomous agents for scene interpretation - A miniature autonomous apparatus for performing scene interpretation, comprising: image acquisition means, image processing means, memory means and communication means, the processing means comprising means for determining an initial parametric representation of the scene; means for updating the parametric representation according to predefined criteria; means for analyzing the image, comprising means for determining, for each pixel of the image, whether it is a hot pixel, according to predefined criteria; means for defining at least one target from the hot pixels; means for measuring predefined parameters for at least one target; and means for determining, for at least one target whether said target is of interest, according to application-specific criteria, and wherein said communication means are adapted to output the results of said analysis. | 03-19-2009 |
20090074246 | METHOD AND SYSTEM FOR THE AUTOMATIC DETECTION OF EVENTS IN SPORT FIELDS - The present invention addresses the problem of the automatic detection of events in sport fields, in particular Goal/NoGoal events, by signalling them to the match management, which can autonomously take the final decision upon the event. The system is not invasive for the field structures, nor does it require interrupting the game or modifying its rules; it only aims at objectively detecting the occurrence of the event and at providing support for the referees' decisions by means of specific signalling of the detected events. | 03-19-2009 |
20090074247 | Obstacle detection method - A method is provided for the detection of an obstacle in a road, in particular of a pedestrian, in the surroundings in the range of view of an optical sensor attached to a movable carrier such as in particular a vehicle, wherein a first image is taken by means of the optical sensor at a first time and a second image is taken at a later second time, a first transformed image is produced by a transformation of the first taken image from the image plane of the optical sensor into the road plane, a further transformed image is produced from the first transformed image while taking account of the carrier movement in the time period between the first time and the second time, the further transformed image is transformed back from the road plane into the image plane and an image stabilization is carried out based on the image transformed back into the image plane and on the second taken image. | 03-19-2009 |
20090074248 | GESTURE-CONTROLLED INTERFACES FOR SELF-SERVICE MACHINES AND OTHER APPLICATIONS - A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measure is used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines. | 03-19-2009 |
20090080695 | Electro-optical Foveated Imaging and Tracking System - Conventional electro-optical imaging systems cannot achieve wide field of view (FOV) and high spatial resolution imaging simultaneously due to format size limitations of image sensor arrays. To implement wide field-of-regard imaging with high resolution, mechanical scanning mechanisms are typically used. Still, sensor data processing and communication speed is constrained by the large amount of data if large-format image sensor arrays are used. This invention describes an electro-optical imaging system that achieves wide-FOV global imaging for suspect object detection and local high resolution for object recognition and tracking. It mimics the foveated imaging property of human eyes. There is no mechanical scanning for changing the region of interest (ROI). Two relatively small-format image sensor arrays are used to respectively acquire a global low-resolution image and a local high-resolution image. The ROI is detected and located by analysis of the global image. A lens array, along with an electronically addressed switch array and a magnification lens, is used to pick out and magnify the local image. The global image and local image are processed by the processor and can be fused for display. Three embodiments of the invention are described. | 03-26-2009 |
20090080696 | Automated person identification and location for search applications - A “be on the look out” or BOLO device is an unsupervised device that can be deployed at a particular location to watch for a specific target or person. A camera produces scene images that the BOLO device analyzes to determine if they contain a pattern matching a target descriptor. If a matching pattern is found, then the BOLO device emits an alarm signal. The alarm signal can contain the BOLO device's location or identification. A location database can produce the device's location when given the device's identification. A target transmitter can supply new target descriptors to deployed BOLO devices. | 03-26-2009 |
20090080697 | Imaging position analyzing method - The imaging position of each of the frames in image data of a plurality of frames captured while a vehicle is traveling is accurately determined. | 03-26-2009 |
20090080698 | Image display apparatus and computer program product - A comprehensive degree of relevance of other moving picture contents with respect to a moving-picture content to be processed is calculated by using any one of or all of content information, frame information, and image characteristics, to display a virtual space in which a visualized content corresponding to a moving picture content to be displayed, which is selected based on the degree of relevance, is located at a position away from a layout position of the visualized content corresponding to the moving picture content to be processed, according to the degree of relevance. | 03-26-2009 |
20090080699 | 3D Beverage Container Localizer - Objects placed on a flat surface are identified and localized by using a single view image. The single view image in the perspective projection is transformed to a normalized image in a pseudo plan view to enhance detection of the bottom or top shapes of the objects. One or more geometric features are detected from the normalized image by processing the normalized image. The detected geometric features are analyzed to determine the identity and the location of the objects on the flat surface. | 03-26-2009 |
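The perspective-to-plan-view normalization in 20090080699 is essentially a homography warp, sketched here assuming OpenCV: four reference points on the flat surface are mapped to a metric grid so that bottom or top shapes of containers become near-circular and easier to detect. The point correspondences and output scale below are illustrative assumptions.

```python
import cv2
import numpy as np

# Image corners of a 1 m x 1 m patch of the flat surface as seen in perspective
# (assumed, measured once during setup).
src = np.float32([[220, 480], [430, 470], [390, 300], [250, 305]])
# Where those corners should land in the 500 x 500 pixel normalized plan view.
dst = np.float32([[0, 500], [500, 500], [500, 0], [0, 0]])

H = cv2.getPerspectiveTransform(src, dst)

def normalize_view(image):
    # Warp the perspective image into the pseudo plan view before shape detection.
    return cv2.warpPerspective(image, H, (500, 500))
```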
20090080700 | PROJECTILE TRACKING SYSTEM - A system and method for determining the track of a projectile use a thermal signature of the projectile. Sequential infrared image frames are acquired from a sensor at a given position. A set of frames containing spots with characteristics consistent with a projectile in flight are identified. A possible projectile track solution for said spots is identified. A thermal signature value for each pixel of each spot of the possible solution is determined. The determined thermal signature is then compared to an actual thermal signature for a substantially similar projectile track to ascertain whether the determined thermal signature substantially matches the actual thermal signature, which indicates that the possible projectile track solution is the correct solution. | 03-26-2009 |
20090080701 | Method for object tracking - The present invention relates to a method for the recognition and tracking of a moving object, in particular of a pedestrian, from a motor vehicle, at which a camera device is arranged. An image of the environment including picture elements is taken in the range of view of the camera device ( | 03-26-2009 |
20090080702 | Method for the recognition of obstacles - A method is provided for the recognition of an obstacle, in particular a pedestrian, located in the travel path of a movable carrier such as in particular a motor vehicle, in the environment in the range of view of an optical sensor attached to the movable carrier, wherein a first image is taken by means of the optical sensor at a first time and a second image is taken at a later second time; wherein a first transformed lower part image is generated by a projection of an image section of the first taken image lying below the horizon from the image plane of the optical sensor into the ground plane; wherein a first transformed upper part image is generated by a projection of an image section of the first taken image lying above the horizon from the image plane of the optical sensor into a virtual plane parallel to the ground plane; wherein a second transformed lower part image is generated by a projection of an image section of the second taken image lying below the horizon from the image plane of the optical sensor into the ground plane; wherein a second transformed upper part image is generated by a projection of an image section of the second taken image lying above the horizon from the image plane of the optical sensor into a virtual plane parallel to the ground plane; wherein a lower difference part image is determined from the first and second transformed lower part images, an upper difference part image is determined from the first and second upper part images, and it is determined by evaluation of the lower difference part image and of the upper difference part image whether an obstacle is located in the travel path of the movable carrier. | 03-26-2009 |
20090087023 | Method and System for Detecting and Tracking Objects in Images - The invention describes a method and system for detecting and tracking an object in a sequence of images. For each image the invention determines an object descriptor from a tracking region in a current image in a sequence of images, in which the tracking region corresponds to a location of an object in a previous image. A regression function is applied to the descriptor to determine a motion of the object from the previous image to the current image, in which the motion has a matrix Lie group structure. The location of the tracking region is updated using the motion of the object. | 04-02-2009 |
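As a loose illustration of the "matrix Lie group" motion in entry 20090087023 (the regression function itself is the patent's contribution and is replaced here by a stub), the sketch below represents the tracked region by a 3x3 affine matrix and composes it with an incremental motion obtained from a Lie-algebra vector via the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def lie_increment_to_affine(xi):
    """Map a 6-vector in the affine Lie algebra to a 3x3 affine matrix."""
    A = np.array([[xi[0], xi[1], xi[2]],
                  [xi[3], xi[4], xi[5]],
                  [0.0,   0.0,   0.0 ]])
    return expm(A)

def update_region(region_affine, xi):
    """Compose the current region transform with an incremental motion."""
    return region_affine @ lie_increment_to_affine(xi)

# Example: a small rotation-plus-translation increment, e.g. as a regression
# function might output (the regressor itself is not reproduced here).
T = np.eye(3)
xi = np.array([0.0, -0.01, 2.0, 0.01, 0.0, 1.0])
T = update_region(T, xi)
```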
20090087024 | CONTEXT PROCESSOR FOR VIDEO ANALYSIS SYSTEM - Embodiments of the present invention provide a method and a system for mapping a scene depicted in an acquired stream of video frames that may be used by a machine-learning behavior-recognition system. A background image of the scene is segmented into a plurality of regions representing various objects of the background image. Statistically similar regions may be merged and associated. The regions are analyzed to determine their z-depth order in relation to a video capturing device providing the stream of the video frames and other regions, using occlusions between the regions and data about foreground objects in the scene. An annotated map describing the identified regions and their properties is created and updated. | 04-02-2009 |
20090087025 | Shadow and highlight detection system and method of the same in surveillance camera and recording medium thereof - A method and system for detecting a shadow region and a highlight region from a foreground region in a surveillance system, and a recording medium thereof, are provided. The system includes an image capturing unit to capture a new image, a background model unit to receive the new image and update a stored background model with the new image, a difference image obtaining unit to compare the new image with the background model and to obtain a difference image between the new image and the background model, a penumbra region extraction unit to extract a partial shadow region or a partial highlight region by measuring a sharpness of an edge of the difference image and expanding a background region, and an umbra region extraction unit to extract a complete shadow region or a complete highlight region based on the result of the extraction by the penumbra region extraction unit. | 04-02-2009 |
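For context on entry 20090087025, a generic background-subtraction pass with a crude shadow/highlight split is sketched below. It only mirrors the broad pipeline (background model, difference image, shadow and highlight classification); the patent's edge-sharpness penumbra analysis is not reproduced, and the darkness ratios used here are arbitrary assumptions.

```python
import cv2
import numpy as np

class BackgroundShadowSplitter:
    def __init__(self, alpha=0.02, diff_thresh=25, shadow_lo=0.5, shadow_hi=0.9):
        self.alpha = alpha              # background learning rate
        self.diff_thresh = diff_thresh  # foreground difference threshold
        self.shadow_lo = shadow_lo      # assumed darkness ratios for shadows
        self.shadow_hi = shadow_hi
        self.background = None

    def process(self, gray):
        gray = gray.astype(np.float32)
        if self.background is None:
            self.background = gray.copy()
        # Running-average background model.
        cv2.accumulateWeighted(gray, self.background, self.alpha)
        diff = cv2.absdiff(gray, self.background)
        foreground = diff > self.diff_thresh
        ratio = (gray + 1.0) / (self.background + 1.0)
        # Crude heuristic: foreground pixels darker than the background by a
        # bounded factor are flagged as shadow, brighter ones as highlight.
        shadow = foreground & (ratio > self.shadow_lo) & (ratio < self.shadow_hi)
        highlight = foreground & (ratio > 1.0 / self.shadow_hi)
        return foreground, shadow, highlight
```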
20090087026 | METHOD AND SYSTEM OF MATERIAL IDENTIFICATION USING BINOCULAR STEREOSCOPIC AND MULTI-ENERGY TRANSMISSION IMAGES - The present invention provides a method and system of material identification using binocular stereoscopic and multi-energy transmission images. With the method, any obstacle that dominates the ray absorption can be peeled off from the objects that overlap in the direction of a ray beam. The object that is unobvious due to a relatively small amount of ray absorption will thus stand out, and the material property of the object, such as organic, mixture, metal and the like, can be identified. This method lays a foundation for automatic identification of harmful objects, such as explosives, drugs, etc., concealed in a freight container. | 04-02-2009 |
20090087027 | ESTIMATOR IDENTIFIER COMPONENT FOR BEHAVIORAL RECOGNITION SYSTEM - An estimator/identifier component for a computer vision engine of a machine-learning based behavior-recognition system is disclosed. The estimator/identifier component may be configured to classify an object being one of two or more classification types, e.g., as being a vehicle or a person. Once classified, the estimator/identifier may evaluate the object to determine a set of kinematic data, static data, and a current pose of the object. The output of the estimator/identifier component may include the classifications assigned to a tracked object, as well as the derived information and object attributes. | 04-02-2009 |
20090087028 | Hand Washing Monitoring System - A hand washing monitoring system ( | 04-02-2009 |
20090087029 | 4D GIS based virtual reality for moving target prediction - The technology of the 4D-GIS system deploys a GIS-based algorithm used to determine the location of a moving target through registering the terrain image obtained from a Moving Target Indication (MTI) sensor or small Unmanned Aerial Vehicle (UAV) camera with the digital map from GIS. For motion prediction the target state is estimated using an Extended Kalman Filter (EKF). In order to enhance the prediction of the moving target's trajectory, a fuzzy logic reasoning algorithm is used to estimate the destination of a moving target through synthesizing data from GIS, target statistics, tactics and other past experience derived information, such as the likely moving direction of targets in correlation with the nature of the terrain and surmised mission. | 04-02-2009 |
20090087030 | Digital Image Processing Using Face Detection Information - A method of processing a digital image using face detection within the image achieves one or more desired image processing parameters. A group of pixels is identified that correspond to an image of a face within the digital image. Default values are determined of one or more parameters of at least some portion of the digital image. Values are adjusted of the one or more parameters within the digitally-detected image based upon an analysis of the digital image including the image of the face and the default values. | 04-02-2009 |
20090092282 | System and Method for Tracking Objects with a Synthetic Aperture - A computer implemented method tracks 3D positions of an object moving in a scene. A sequence of images is acquired of the scene with a set of cameras such that at each time instant a set of images is acquired of the scene, in which each image includes pixels. Each set of images is aggregated into a synthetic aperture image including the pixels, and the pixels in each set of images are matched corresponding to multiple locations and multiple depths of a target window with an appearance model to determine scores for the multiple locations and multiple depths. A particular location and a particular depth having a maximal score are selected as the 3D position of the moving object. | 04-09-2009 |
20090092283 | SURVEILLANCE AND MONITORING SYSTEM - A system having one or more devices for detection, surveillance and monitoring. Video images of scenes with persons from the devices may be processed and provided to a biometrics component for standoff biometric acquisition and matching. Various remote and internal databases may be resorted to for biometric matching. Matching results may go to the history component and the strategy and association component. The output of the latter component may be subject to behavior inference and analysis. The system may be interconnected with outside entities such as an access control system. | 04-09-2009 |
20090092284 | Light Modulation Techniques for Imaging Objects in or around a Vehicle - Method and system for obtaining information about an object in a compartment in a vehicle includes directing illumination into the compartment, spatial or temporally modulating the illumination, receiving light reflected from an object in the compartment, and analyzing the reflected light to obtain information about the object. The compartment may be a passenger compartment of an automobile, the trunk of an automobile or the interior of a trailer of a truck. The illumination may be directed from a light source and the reflected light received at a receiver spaced apart from the light source. Analysis of the reflected light may therefore entail applying a triangulation calculation to enable a determination of a distance between the light source and illuminated point on the object. The same method and system can be adapted for monitoring the environment around the vehicle. | 04-09-2009 |
20090092285 | METHOD OF LOCAL TRACING OF CONNECTIVITY AND SCHEMATIC REPRESENTATIONS PRODUCED THEREFROM - A schematic diagram detailing a circuit that was reverse engineered from a plurality of images taken of the circuit is provided. The schematic diagram includes at least one circuit element that was represented as an object in at least one of the plurality of images, such that signal continuity information was determined through local tracing of connectivity between a first image and a second image of the plurality of images. A method of tracing the connectivity within the plurality of images to produce the schematic diagram is also disclosed. | 04-09-2009 |
20090092286 | IMAGE GENERATING APPARATUS, IMAGE GENERATING PROGRAM, IMAGE GENERATING PROGRAM RECORDING MEDIUM AND IMAGE GENERATING METHOD - When an obstacle does not exist in a horizontal direction in a direction of a virtual camera, a PC coordinate is set as a point of gaze. When the player character comes close to a high wall while the procedure of S | 04-09-2009 |
20090092287 | Mixed Media Reality Recognition With Image Tracking - An MMR system integrating image tracking and recognition comprises a plurality of mobile devices, a pre-processing server or MMR gateway, and an MMR matching unit, and may include an MMR publisher. The MMR matching unit receives an image query from the pre-processing server or MMR gateway and sends it to one or more of the recognition units to identify a result including a document, the page, and the location on the page. Image tracking information also is provided for determining relative locations of images within a document page. The mobile device includes an image tracker for providing at least a portion of the image tracking information. The present invention also includes methods for image tracking-assisted recognition, recognition of multiple images using a single image query, and improved image tracking using MMR recognition. | 04-09-2009 |
20090097704 | ON-CHIP CAMERA SYSTEM FOR MULTIPLE OBJECT TRACKING AND IDENTIFICATION - Apparatus and methods provide multiple object identification and tracking using an object recognition system, such as a camera system. One method of tracking multiple objects includes constructing a first set of objects in real time as a camera scans an image of a first frame row by row. A second set of objects is constructed concurrently in real time as the camera scans an image of a second frame row by row. The first and second sets of objects are stored separately in memory and the sets of objects are compared. Based on the comparison between the first frame (previous frame) and the second frame (current frame), a unique ID is assigned to an object in the second frame (current frame). | 04-16-2009 |
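A conventional frame-buffer analogue of the two-frame comparison in entry 20090097704 is sketched below (the patent builds its object sets on the fly during the row-by-row scan, which this simplification does not attempt): connected components are labeled in each binary frame and IDs are carried over by nearest-centroid matching.

```python
import cv2
import numpy as np

def label_objects(binary):
    """Return a list of (centroid, area) for connected components of an
    8-bit binary image. Component 0 is the background and is skipped."""
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    return [(centroids[i], stats[i, cv2.CC_STAT_AREA]) for i in range(1, n)]

def assign_ids(prev_objects, curr_objects, next_id, max_dist=20.0):
    """Carry IDs from the previous frame (dict id -> (centroid, area)) to the
    current frame's object list by nearest centroid; unmatched objects get
    fresh IDs."""
    assigned = {}
    for centroid, area in curr_objects:
        best_id, best_d = None, max_dist
        for obj_id, (p_centroid, _) in prev_objects.items():
            d = np.linalg.norm(np.asarray(centroid) - np.asarray(p_centroid))
            if d < best_d:
                best_id, best_d = obj_id, d
        if best_id is None:          # no close match: new object, new ID
            best_id, next_id = next_id, next_id + 1
        assigned[best_id] = (centroid, area)
    return assigned, next_id
```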
20090097705 | OBTAINING INFORMATION BY TRACKING A USER - A device may obtain tracking information of a face or a head of a user, determine a position and orientation of the user, and determine a direction of focus of the user based on the tracking information, the position, and the orientation. In addition, the device may retrieve information associated with a location at which the user focused. | 04-16-2009 |
20090097706 | SYSTEMS AND METHODS FOR DETERMINING IF OBJECTS ARE IN A QUEUE - Systems and methods that determine a position value of a first object and a position value of a second object, and compare the position value of the first object with the position value of the second object to determine if the second object is in a queue with the first object are provided. | 04-16-2009 |
20090097707 | Method of controlling digital image processing apparatus for face detection, and digital image processing apparatus employing the method - Provided is a method of controlling a digital image processing apparatus for detecting a face from continuously input images, the method comprising operations (a) to (c). In (a), if a face is detected, image information of a body area is stored. In (b), if the face is not detected, a body having the image information stored in (a) is detected. In (c), if a current body is detected after a previous body was detected in (b), an image characteristic of the previously detected body is compared to an image characteristic of the currently detected body, and a movement state of the face is determined according to the comparison result. | 04-16-2009 |
20090097708 | Image-Processing System and Image-Processing Method - A vehicle-periphery-image-providing system may include an image-capturing unit, a viewpoint-change unit, an image-composition unit, an object-detection unit, a line-width-setting unit, and a line-selection unit. The image-capturing units, such as cameras, capture images outside a vehicle periphery and generate image-data items. The viewpoint-change unit generates a bird's-eye-view image for each image-data item based on the image-data item so that end portions of the real spaces corresponding to two adjacent bird's-eye-view images overlap each other. The image-composition unit generates a bird's-eye-view-composite image by combining the bird's-eye-view images according to a predetermined layout. The object-detection unit detects an object existing in the real space corresponding to a portion where the bird's-eye-view images of the bird's-eye-view-composite image are joined to each other. The line-width-setting unit sets the width of the line image corresponding to the joining portion. The line-selection unit adds a line image having the set width to an overlap portion of one of the bird's-eye-view images. | 04-16-2009 |
20090097709 | SIGNAL PROCESSING APPARATUS - A signal processing apparatus for displaying an input image in the state in which a part of the image is enlarged displays an enlarged image obtained by enlarging a part of a designated object in the input image so that the enlarged image is superimposed at a position in accordance with the position of the designated object. | 04-16-2009 |
20090097710 | METHODS AND SYSTEM FOR COMMUNICATION AND DISPLAYING POINTS-OF-INTEREST - A method for displaying point-of-interest coordinate locations in perspective images and for coordinate-based information transfer between perspective images on different platforms includes providing a shared reference image of a region overlapping the field of view of the perspective view. The perspective view is then correlated with the shared reference image so as to generate a mapping between the two views. This mapping is then used to derive a location of a given coordinate from the shared reference image within the perspective view and the location is indicated in the context of the perspective view on a display. | 04-16-2009 |
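The coordinate transfer between a shared reference image and a perspective view in entry 20090097710 is, in its simplest planar form, a homography mapping. The sketch below assumes matched feature points between the two views are already available (how they are obtained is outside this illustration) and maps a point-of-interest coordinate from the reference image into the perspective view.

```python
import cv2
import numpy as np

def map_point_of_interest(ref_points, persp_points, poi_xy):
    """Estimate a reference->perspective homography from matched points
    (at least four correspondences required) and map a single
    point-of-interest coordinate into the perspective view."""
    ref = np.asarray(ref_points, dtype=np.float32).reshape(-1, 1, 2)
    persp = np.asarray(persp_points, dtype=np.float32).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(ref, persp, cv2.RANSAC, 3.0)
    poi = np.asarray(poi_xy, dtype=np.float32).reshape(-1, 1, 2)
    mapped = cv2.perspectiveTransform(poi, H)
    return tuple(mapped[0, 0])
```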
20090097711 | Detecting apparatus of human component and method thereof - Disclosed are an apparatus and a method of detecting a human component from an input image. The apparatus includes a training database (DB) to store positive and negative samples of a human component, an image processor to calculate a difference image for the input image, a sub-window processor to extract a feature population from a difference image that is calculated by the image processor for the positive and negative samples of a predetermined human component stored in the training DB, and a human classifier to detect a human component corresponding to a human component model using the human component model that is learned from the feature population. | 04-16-2009 |
20090103775 | Multi-Tracking of Video Objects - An inventive method for video object tracking includes the steps of selecting an object; choosing an object type for the object, and enabling one of multiple object tracking processes responsive to the object type chosen. In a preferred embodiment selecting the object includes one of segmenting the object by using a region, selecting points on the boundary of an object, aggregating regions or combining a selected region and selected points on a boundary of an object. The object tracking processes can be expanded to include tracking processes adapted to newly created object types. | 04-23-2009 |
20090103776 | Method of Non-Uniformity Compensation (NUC) of an Imager - The present invention provides for simple and streamlined boresight correlation of FLIR-to-missile video. Boresight correlation is performed with un-NUCed missile video, which allows boresight correlation and NUC to be performed simultaneously thereby reducing the time required to acquire a target and fire the missile. The current approach uses the motion of the missile seeker for NUCing to produce spatial gradient filtering in the missile image by differencing images as the seeker moves. This compensates DC non-uniformities in the image. A FLIR image is processed with a matching displace and subtract spatial filter constructed based on the tracked scene motion. The FLIR image is resampled to match the missile image resolution, and the two images are preprocessed and correlated using conventional methods. Improved NUC is provided by cross-referencing multiple measurements of each area of the scene as viewed by different pixels in the imager. This approach is based on the simple yet novel premise that every pixel in the array that looks at the same thing should see the same thing. As a result, the NUC terms adapt to non-uniformities in the imager and not the scene. | 04-23-2009 |
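The "displace and subtract" idea referenced in entry 20090103776 can be pictured with a toy filter: differencing two un-NUCed frames taken as the seeker moves cancels the fixed per-pixel offsets (identical in both frames), and a matching displace-and-subtract filter built from the tracked motion can be applied to a clean reference image for comparison. The integer-pixel shift and the wrap-around of `np.roll` are simplifying assumptions; this is not the patent's correlation or NUC procedure.

```python
import numpy as np

def difference_missile_frames(frame_t, frame_prev):
    """Differencing two un-NUCed frames taken as the seeker moves cancels the
    fixed per-pixel offsets and leaves a spatial-gradient-like scene image."""
    return frame_t.astype(np.float32) - frame_prev.astype(np.float32)

def displace_and_subtract(reference, motion_xy):
    """Apply a matching displace-and-subtract filter to a reference image,
    using the tracked scene motion (integer pixels assumed; np.roll wraps at
    the borders, which a real implementation would handle explicitly)."""
    dx, dy = motion_xy
    shifted = np.roll(np.roll(reference, dy, axis=0), dx, axis=1)
    return reference.astype(np.float32) - shifted.astype(np.float32)
```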
20090103777 | Lock and hold structured light illumination - A method, system, and associated program code, for 3-dimensional image acquisition, using structured light illumination, of a surface-of-interest under observation by at least one camera. One aspect includes: illuminating the surface-of-interest, while static/at rest, with structured light to obtain initial depth map data therefor; while projecting a hold pattern comprised of a plurality of snake-stripes at the static surface-of-interest, assigning an identity to and an initial lock position of each of the snake-stripes of the hold pattern; and while projecting the hold pattern, tracking, from frame-to-frame, each of the snake-stripes. Another aspect includes: projecting a hold pattern comprised of a plurality of snake-stripes; as the surface-of-interest moves into a region under observation by at least one camera that also comprises the projected hold pattern, assigning an identity to and an initial lock position of each snake-stripe as it sequentially illuminates the surface-of-interest; and while projecting the hold pattern, tracking, from frame-to-frame, each snake-stripe while it passes through the region. Yet another aspect includes: projecting, in sequence at the surface-of-interest positioned within a region under observation by at least one camera, a plurality of snake-stripes of a hold pattern by opening/moving a shutter cover; as each of the snake-stripes sequentially illuminates the surface-of-interest, assigning an identity to and an initial lock position of that snake-stripe; and while projecting the hold pattern, tracking, from frame-to-frame, each of the snake-stripes once it has illuminated the surface-of-interest and entered the region. | 04-23-2009 |
20090103778 | Composition determining apparatus, composition determining method, and program - A composition determining apparatus includes a subject detecting unit configured to detect one or more specific subjects in an image based on image data; a subject orientation detecting unit configured to detect subject orientation information indicating an orientation in the image of the subject detected by the subject detecting unit, the detection of the subject orientation information being performed for each of the detected subjects; and a composition determining unit configured to determine a composition based on the subject orientation information. When a plurality of subjects are detected by the subject detecting unit, the composition determining unit determines a composition based on a relationship among a plurality of pieces of the subject orientation information corresponding to the plurality of subjects. | 04-23-2009 |
20090103779 | MULTI-SENSORIAL HYPOTHESIS BASED OBJECT DETECTOR AND OBJECT PURSUER - The invention relates to a method for multi-sensorial object detection, wherein sensor information is evaluated together from several different sensor signal flows having different sensor signal properties. For said evaluation, the at least two sensor signal flows are not adapted to each other and/or projected onto each other, but object hypotheses are generated in each of the at least two sensor signal flows and characteristics for at least one classifier are generated based on said object hypotheses. Said object hypotheses are subsequently evaluated by means of a classifier and are associated with one or more categories. At least two categories are identified and the object is associated with one of the two categories. | 04-23-2009 |
20090103780 | Hand-Gesture Recognition Method - One embodiment of the invention includes a method of providing device inputs. The method includes illuminating hand gestures performed via a bare hand of a user in a foreground of a background surface with at least one infrared (IR) light source. The method also includes generating a first plurality of silhouette images associated with the bare hand based on an IR light contrast between the bare hand and the background surface and generating a second plurality of silhouette images associated with the bare hand based on an IR light contrast between the bare hand and the background surface. The method also includes determining a plurality of three-dimensional features of the bare hand relative to the background surface based on a parallax separation of the bare hand in the first plurality of silhouette images relative to the second plurality of silhouette images. The method also includes determining a provided input gesture based on the plurality of three-dimensional features of the bare hand and comparing the provided input gesture with a plurality of predefined gesture inputs in a gesture library. The method further includes providing at least one device input corresponding to interaction with displayed visual content based on the provided input gesture corresponding to one of the plurality of predefined gesture inputs. | 04-23-2009 |
20090110235 | SYSTEM AND METHOD FOR SELECTION OF AN OBJECT OF INTEREST DURING PHYSICAL BROWSING BY FINGER FRAMING - A system and method selecting an object from a plurality of objects in a physical environment is disclosed. The method may include framing an object located in a physical environment by positioning an aperture at a selected distance from a user's eye, the position of the aperture being selected such that the aperture substantially encompasses the object as viewed from the user's perspective, detecting the aperture by analyzing image data including the aperture and the physical environment, and selecting the object substantially encompassed by the detected aperture. The method may further include identifying the selected object based on its geolocation, collecting and merging data about the identified object from a plurality of data sources, and displaying the collected and merged data. | 04-30-2009 |
20090110236 | Method And System For Object Detection And Tracking - Disclosed is a method and system for object detection and tracking. Spatio-temporal information for a foreground/background appearance module is updated, based on a new input image and the accumulated previous appearance information and foreground/background information module labeling information over time. Object detection is performed according to the new input image and the updated spatio-temporal information and transmitted previous information over time, based on the labeling result generated by the object detection. The information for the foreground/background appearance module is repeatedly updated until a convergent condition is reached. The produced labeling result from object detection is considered as a new tracking measurement for further updating on a tracking prediction module. A final tracking result may be obtained through the updated tracking prediction module, which is determined by the current tracking measurement and the previously observed tracking results. The tracking object location at the next time is predicted. The returned predicted appearance information for the foreground/background object is used as the input for updating the foreground and background appearance module. The returned labeling information is used as the information over time for the object detection. | 04-30-2009 |
20090110237 | METHOD FOR POSITIONING A NON-STRUCTURAL OBJECT IN A SERIES OF CONTINUING IMAGES - A method for positioning a non-structural object in a series of continuing images is disclosed, which comprises the steps of: establishing a pattern representing a target object while analyzing the pattern for obtaining positions relative to a representative feature of the pattern; picking up a series of continuing images including the target object for utilizing the brightness variations at the boundary defining the representative feature which are detected in the series of continuing images to calculate and thus obtain a predictive candidate position of the representative feature in an image picked up next to the series of continuing images; calculating the differences between the boundaries defining the representative feature at the predictive candidate position in the series of continuing images and also calculating the similarities between the pattern and those boundaries; and using the differences and the similarities to calculate and thus obtain the position of the representative feature in the image picked up next to the series of continuing images. | 04-30-2009 |
20090110238 | Automatic correlation modeling of an internal target - A method and apparatus to automatically control the timing of an image acquisition by an imaging system in developing a correlation model of movement of a target within a patient. | 04-30-2009 |
20090110239 | System and method for revealing occluded objects in an image dataset - Disclosed are a system and method for identifying objects in an image dataset that occlude other objects and for transforming the image dataset to reveal the occluded objects. In some cases, occluding objects are identified by processing the image dataset to determine the relative positions of visual objects. Occluded objects are then revealed by removing the occluding objects from the image dataset or by otherwise de-emphasizing the occluding objects so that the occluded objects are seen behind them. A visual object may be removed simply because it occludes another object, because of privacy concerns, or because it is transient. When an object is removed or de-emphasized, the objects that were behind it may need to be “cleaned up” so that they show up well. To do this, information from multiple images can be processed using interpolation techniques. The image dataset can be further transformed by adding objects to the images. | 04-30-2009 |
20090110240 | METHOD FOR DETECTING A MOVING OBJECT IN AN IMAGE STREAM - The invention relates to a method for detecting a moving object in a stream of images taken at successive instants, of the type comprising, for each zone of a predefined set of zones of at least one pixel of the image constituting a current image, a step ( | 04-30-2009 |
20090110241 | IMAGE PROCESSING APPARATUS AND METHOD FOR OBTAINING POSITION AND ORIENTATION OF IMAGING APPARATUS - An image processing apparatus obtains location information of each image feature in a captured image based on image coordinates of the image feature in the captured image. The image processing apparatus selects location information usable to calculate a position and an orientation of the imaging apparatus among the obtained location information. The image processing apparatus obtains the position and the orientation of the imaging apparatus based on the selected location information and an image feature corresponding to the selected location information among the image features included in the captured image. | 04-30-2009 |
20090116691 | METHOD FOR LOCATING AN OBJECT ASSOCIATED WITH A DEVICE TO BE CONTROLLED AND A METHOD FOR CONTROLLING THE DEVICE - The invention describes a method for locating an object (B | 05-07-2009 |
20090116692 | REALTIME OBJECT TRACKING SYSTEM - A real-time computer vision system tracks one or more objects moving in a scene using a target location technique which does not involve searching. The imaging hardware includes a color camera, frame grabber and processor. The software consists of the low-level image grabbing software and a tracking algorithm. The system tracks objects based on the color, motion and/or shape of the object in the image. A color matching function is used to compute three measures of the target's probable location based on the target color, shape and motion. The method then computes the most probable location of the target using a weighting technique. Once the system is running, a graphical user interface displays the live image from the color camera on the computer screen. The operator can then use the mouse to select a target for tracking. The system will then keep track of the moving target in the scene in real-time. | 05-07-2009 |
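Entry 20090116692 combines separate color, shape and motion measures into one most-probable target location through weighting. A bare-bones version of that idea is sketched below with hypothetical weights and deliberately simple score maps (hue back-projection for color, frame differencing for motion); it should not be read as the patented matching function.

```python
import cv2
import numpy as np

def combined_score_map(hsv_frame, prev_gray, curr_gray, target_hist,
                       w_color=0.6, w_motion=0.4):
    """Blend a color back-projection map and a motion map and return the
    pixel with the highest combined score as the probable target location.
    target_hist: hue histogram of the target, normalized to [0, 255]
    (e.g. with cv2.normalize)."""
    color = cv2.calcBackProject([hsv_frame], [0], target_hist, [0, 180], 1)
    color = color.astype(np.float32) / 255.0
    motion = cv2.absdiff(curr_gray, prev_gray).astype(np.float32)
    motion /= (motion.max() + 1e-6)
    score = w_color * color + w_motion * motion
    y, x = np.unravel_index(np.argmax(score), score.shape)
    return (x, y), score
```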
20090116693 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing method is provided for an image processing apparatus which executes processing by allocating a plurality of weak discriminators to form a tree structure having branches corresponding to types of objects so as to detect objects included in image data. Each weak discriminator calculates a feature amount to be used in a calculation of an evaluation value of the image data, and discriminates whether or not the object is included in the image data by using the evaluation value. The weak discriminator allocated to a branch point in the tree structure further selects a branch destination using at least some of the feature amounts calculated by weak discriminators included in each branch destination. | 05-07-2009 |
20090123028 | Target Position Setting Device And Parking Assist Device With The Same - A target position setting device includes a distance meter, an imager, first and second calculating portions, a determination portion, and a setting portion. The distance meter measures a distance to an object around a vehicle. The imager takes an image of an environment around the vehicle. The first calculating portion calculates a first candidate of a target position of the vehicle according to a measuring result of the distance meter. The second calculating portion calculates a second candidate of the target position of the vehicle according to an imaging result of the imager. The determination portion determines whether a relationship between the first candidate and the second candidate meets a given condition. The setting portion sets the target position according to the second candidate of the target position when the determination portion determines that the relationship between the first candidate and the second candidate meets the given condition. | 05-14-2009 |
20090123029 | Display-and-image-pickup apparatus, object detection program and method of detecting an object - A display-and-image-pickup apparatus includes: a display-and-image-pickup panel having an image display function and an image pickup function; an image producing means for producing a predetermined processed image on the basis of a picked-up image of a proximity object obtained through the use of the display-and-image-pickup panel; an image processing means for obtaining information about the proximity object through selectively using one of two obtaining modes on the basis of at least one of the picked-up image and the processed image; and a switching means for switching processes so that, in the case where the parameter is increasing, one of the two obtaining modes is switched to the other obtaining mode when the parameter reaches a threshold value, and in the case where the parameter is decreasing, the other obtaining mode is switched to the one obtaining mode when the parameter reaches a smaller threshold value. | 05-14-2009 |
20090123030 | Method For The Autostereoscopic Presentation Of Image Information With Adaptation To Suit Changes In The Head Position Of The Observer - For continuous tracking without noticeable skips during physical changes in head position, the intensities of all subpixels of the matrix screen are reduced in order to form intensity focuses for subpixel groups behind barrier elements, which comprise a number n of subpixels, including a subpixel reserve, in the image lines. In the case of parallel alterations, these intensity focuses are then displaced by a constant absolute value continuously through directly adjacent subpixels and also through subpixel group boundaries with different stereo image views. Distance changes involve the intensity focuses being increasingly widened or compressed relative to the screen edges. The intensities of the individual subpixels can be altered by means of simple multiplication by standardized constant or variable intensity factors which can be ascertained as a function of motion. | 05-14-2009 |
20090129628 | METHOD FOR DETERMINING THE POSITION OF AN OBJECT FROM A DIGITAL IMAGE - Method for determining the position of an object point in a scene from a digital image thereof acquired through an optical system is presented. The image comprises a set of image points corresponding to object points and the position of the object points are determined by means of predetermined vectors associated with the image points. The predetermined vector represents the inverted direction of a light ray in the object space that will produce this image point through the optical system comprising all distortion effects of the optical system. | 05-21-2009 |
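Entry 20090129628 associates each image point with a predetermined vector giving the inverted ray direction in object space. Ignoring lens distortion (which the patent explicitly folds into its vectors), the pinhole version of that association and a ground-plane intersection are sketched below as a rough illustration; the intrinsic matrix and camera height are made-up values.

```python
import numpy as np

# Hypothetical intrinsic matrix; a real system would also model distortion.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def pixel_ray(u, v):
    """Unit direction, in camera coordinates, of the ray through pixel (u, v)."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / np.linalg.norm(d)

def intersect_ground(ray, camera_height=1.2):
    """Intersect a camera ray with a ground plane a fixed height below the
    camera (z forward, y pointing down is assumed here)."""
    if ray[1] <= 1e-9:
        return None                      # ray does not hit the ground plane
    t = camera_height / ray[1]
    return t * ray                       # 3D point in camera coordinates
```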
20090129629 | Method And Apparatus For Adaptive Object Detection - Disclosed is a method and apparatus for adaptive object detection, which may be applied in detecting an object having an ellipse feature. The method for adaptive object detection comprises performing an object shape detection based on the foreground extracted from the object; determining whether the object is occluded according to the detected feature statistic information of the object; if the object is not occluded, determining whether to switch object shape detection to ellipse detection; if the object is occluded or it is necessary to switch to ellipse detection, performing ellipse detection on the foreground; when the foreground is detected to have ellipse features, continuing to track the object; and when the current detection is ellipse detection, determining whether the ellipse detection is able to switch back to object shape detection. | 05-21-2009 |
20090129630 | 3D TEXTURED OBJECTS FOR VIRTUAL VIEWPOINT ANIMATIONS - 3d textured objects are provided for virtual viewpoint animations. In one aspect, an image of an event is obtained from a camera and an object in the image is automatically detected. For example, the event may be a sports event and the object may be a stationary object which is detected based on a known location, color and shape. A 3d model of the object is combined with a textured 3d model of the event to depict a virtual viewpoint which differs from a viewpoint of the camera. The textured 3d model of the event has texture applied from an image of the event, while the 3d model of the object does not have such texture applied, in one approach. In another aspect, an object in the image such as a participant in a sporting event is represented in the virtual viewpoint by a textured 3d kinematics model. | 05-21-2009 |
20090129631 | Method of Tracking the Position of the Head in Real Time in a Video Image Stream - The invention relates to a method of tracking the position of the bust of a user on the basis of a video image stream, said bust comprising the user's torso and head, the method comprising the determination of the position of the torso on a first image, in which method a virtual reference frame is associated with the torso on said first image, and in which method, for a second image, a new position of the virtual reference frame is determined on said second image, and a relative position of the head with respect to said new position of the virtual reference frame is measured by comparison with the position of the virtual reference frame on said first image, so as to determine independently the movements of the head and the torso. | 05-21-2009 |
20090129632 | Method of object detection - A method is set forth for the detection of an object, in particular in a road, in particular of a pedestrian, in the surroundings in the range of view of an optical sensor attached to a carrier such as in particular a vehicle, wherein, from the range of view of the optical sensor, a relevant spatial region disposed below the horizon is determined, a gray scale image is produced by means of the optical sensor which includes a relevant image region corresponding to the relevant spatial region, and a search for a possible object is only made in this relevant image region corresponding to the relevant spatial region disposed below the horizon for the detection of the object. | 05-21-2009 |
20090136089 | 3D inspection of an object using x-rays - A method is presented for a 3D inspection of an object or bag in order to check for explosives or contraband. The method is applicable to Computed Tomography, Laminography or any other method that can be used to produce images of slices through the object. According to this method, it is not necessary to reconstruct the slice image with a high resolution as is required for visual display, but it is sufficient to reconstruct the image at only a sample or a set of points or pixels that are sparsely distributed within the reconstructed slice. The properties of the object are then analyzed only at these sparsely distributed pixels within the slice to make a determination for the presence or absence of explosives or contraband. This process of image reconstruction and analysis is repeated over several slices spaced through the volume of the object. In another embodiment of this invention, the set of points or pixels at which the image is reconstructed is offset spatially with respect to the set of pixels in the adjacent or neighboring slice. This invention greatly reduces the computational burden, hence simplifies the hardware and software design, speeds up the scanning process and allows for a more complete and uniform inspection of the entire volume of the object. | 05-28-2009 |
20090136090 | House Displacement Judging Method, House Displacement Judging Device - To attain a house change judging method and device which can judge a change with high precision and is capable of fully automating the judgment, the present invention provides a house change judging method for judging a change of a house ( | 05-28-2009 |
20090141935 | MOTION COMPENSATED CT RECONSTRUCTION OF HIGH CONTRAST OBJECTS - Cardiac CT imaging using gated reconstruction is currently limited in its temporal and spatial resolution. According to an exemplary embodiment of the present invention, an examination apparatus is provided in which an identification of a high contrast object is performed. This high contrast object is then followed through the phases, resulting in a motion vector field of the high contrast object, on the basis of which a motion compensated reconstruction is then performed. | 06-04-2009 |
20090141936 | Object-Tracking Computer Program Product, Object-Tracking Device, and Camera - A computer performs the following steps according to a program for tracking an object. Template matching of each frame of an input image to a plurality of template images is performed, a template image having a highest similarity with an image within a predetermined region of the input image is selected as a selected template among the plurality of template images, and the predetermined region of the input image is extracted as a matched region. With reference to an image within the matched region thus extracted, by tracking motion between frames, motion of an object is tracked between the images of the plurality of frames. It is determined whether or not a result of template matching satisfies an update condition for updating the plurality of template images. In a case that the update condition is determined to be satisfied, at least one of the plurality of template images is updated. | 06-04-2009 |
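A conventional OpenCV rendition of the multi-template matching loop described in entry 20090141936 is sketched below. The update rule (refresh the selected template when the best score clears a threshold) is an assumption added for illustration; the patent only states that an update condition is checked.

```python
import cv2
import numpy as np

def match_templates(frame_gray, templates, update_thresh=0.9):
    """Pick the template with the highest normalized correlation, return the
    matched region, and optionally refresh the template set."""
    best = (-1.0, None, None)  # (score, top-left corner, template index)
    for i, tpl in enumerate(templates):
        res = cv2.matchTemplate(frame_gray, tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > best[0]:
            best = (score, loc, i)
    score, (x, y), idx = best
    h, w = templates[idx].shape[:2]
    matched = frame_gray[y:y + h, x:x + w]
    if score >= update_thresh:
        # Illustrative update condition: replace the selected template
        # with the freshly matched region.
        templates[idx] = matched.copy()
    return (x, y, w, h), score
```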
20090141937 | Subject Extracting Method, Subject Tracking Method, Image Synthesizing Method, Computer Program for Extracting Subject, Computer Program for Tracking Subject, Computer Program for Synthesizing Images, Subject Extracting Device, Subject Tracking Device, and Image Synthesizing Device - A binary mask image for extracting a subject is generated by binarizing an image after image-processing (processed image) with a predefined threshold value. Based on an image before image-processing (pre-processing image) and the binary mask image for extracting the subject, a subject image in which only a subject included in the pre-processing image is extracted is generated by eliminating a background region from the pre-processing image. | 06-04-2009 |
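In its simplest form, the mask-based extraction in entry 20090141937 reduces to thresholding the processed image and applying the result as a mask to the unprocessed one. A minimal sketch follows; the threshold value and the particular image processing that precedes it are unspecified assumptions.

```python
import cv2

def extract_subject(pre_image, processed_gray, thresh=128):
    """Binarize the processed image and keep only the corresponding pixels
    of the pre-processing image, removing the background region."""
    _, mask = cv2.threshold(processed_gray, thresh, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(pre_image, pre_image, mask=mask)
```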
20090141938 | ROBOT VISION SYSTEM AND DETECTION METHOD - A robot vision system for outputting a disparity map includes a stereo camera for receiving left and right images and outputting a disparity map between the two images; an encoder for encoding either the left image or the right image into a motion compensation-based video bit-stream; and a decoder for extracting an encoding type of an image block, a motion vector, and a DCT coefficient from the video bit-stream. Further, the system includes a person detector for detecting and labeling person blocks in the image using the disparity map between the left image and the right image, the block encoding type, and the motion vector, and detecting a distance from the labeled person to the camera; and an obstacle detector for detecting a closer obstacle than the person using the block encoding type, the motion vector, and the DCT coefficient extracted from the video bit-stream, and the disparity map. | 06-04-2009 |
20090141939 | Systems and Methods for Analysis of Video Content, Event Notification, and Video Content Provision - A method for remote event notification over a data network is disclosed. The method includes receiving video data from any source, analyzing the video data with reference to a profile to select a segment of interest associated with an event of significance, encoding the segment of interest, and sending to a user a representation of the segment of interest for display at a user display device. A further method for sharing video data based on content according to a user-defined profile over a data network is disclosed. The method includes receiving the video data, analyzing the video data for relevant content according to the profile, consulting a profile to determine a treatment of the relevant content, and sending data representative of the relevant content according to the treatment. | 06-04-2009 |
20090141940 | Integrated Systems and Methods For Video-Based Object Modeling, Recognition, and Tracking - The present disclosure relates to systems and methods for modeling, recognizing, and tracking object images in video files. In one embodiment, a video file, which includes a plurality of frames, is received. An image of an object is extracted from a particular frame in the video file, and a subsequent image is also extracted from a subsequent frame. A similarity value is then calculated between the extracted images from the particular frame and subsequent frame. If the calculated similarity value exceeds a predetermined similarity threshold, the extracted object images are assigned to an object group. The object group is used to generate an object model associated with images in the group, wherein the model is comprised of image features extracted from optimal object images in the object group. Optimal images from the group are also used for comparison to other object models for purposes of identifying images. | 06-04-2009 |
20090141941 | IMAGE PROCESSING APPARATUS AND METHOD FOR ESTIMATING ORIENTATION - A method of estimating an orientation of one or more of a plurality of objects disposed on a plane, from one or more video images of a scene, which includes the objects on the plane produced from a view of the scene by a video camera. The method comprises receiving for each of the one or more objects, object tracking data, which provides a position of the object on the plane in the video images with respect to time, determining from the object tracking data a plurality of basis vectors associated with at least one of the objects, each basis vector corresponding to a factor, which can influence the orientation of the object and each basis vector being related to the movement or location of the one or more objects, and combining the basis vectors in accordance with a blending function to calculate an estimate of the orientation of the object on the plane, the blending function including blending coefficients which determine a relative magnitude of each basis vector used in the blending function. | 06-04-2009 |
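Entry 20090141941 estimates orientation by blending basis vectors with coefficients. A toy version is given below with two hypothetical basis vectors (the object's velocity direction and the direction toward a reference point on the plane) and fixed blending coefficients; the actual factors and blending function are defined by the patent, not here.

```python
import numpy as np

def estimate_orientation(velocity, to_reference, w_velocity=0.7, w_reference=0.3):
    """Blend two 2D basis vectors into a single orientation estimate (radians)."""
    def unit(v):
        v = np.asarray(v, dtype=float)
        n = np.linalg.norm(v)
        return v / n if n > 1e-9 else np.zeros(2)

    blended = w_velocity * unit(velocity) + w_reference * unit(to_reference)
    return float(np.arctan2(blended[1], blended[0]))
```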
20090147991 | METHOD, SYSTEM, AND COMPUTER PROGRAM FOR DETECTING AND CHARACTERIZING MOTION - A method for motion detection/characterization is provided including the steps of (a) capturing a series of time lapsed images of the target, wherein the target moves between at least two of such images; (b) generating a motion distribution in relation to the target across the series of images; and (c) identifying motion of the target based on analysis of the motion distribution. In a further aspect of motion detection/characterization in accordance with the invention, motion is detected/characterized based on calculation of a color distribution for a series of images. A system and computer program for presenting an augmented environment based on the motion detection/characterization is also provided. An interface means based on the motion detection/characterization is also provided. | 06-11-2009 |
20090147992 | THREE-LEVEL SCHEME FOR EFFICIENT BALL TRACKING - A three-level ball detection and tracking method is disclosed. The ball detection and tracking method employs three levels to generate multiple ball candidates rather than a single one. The ball detection and tracking method constructs multiple trajectories using candidate linking, then uses optimization criteria to determine the best ball trajectory. | 06-11-2009 |
20090147993 | HEAD-TRACKING SYSTEM - A head-tracking system and a method for operating a head-tracking system in which a stationary reference point is detected are provided. A detector for detecting the position of a head is calibrated based on the detected stationary reference point. In one example implementation, the detection of the stationary reference point is used to determine the position of the head. | 06-11-2009 |
20090147994 | TORO: TRACKING AND OBSERVING ROBOT - The present invention provides a method for tracking entities, such as people, in an environment over long time periods. A region-based model is generated to model beliefs about entity locations. Each region corresponds to a discrete area representing a location where an entity is likely to be found. Each region includes one or more positions which more precisely specify the location of an entity within the region so that the region defines a probability distribution of the entity residing at different positions within the region. A region-based particle filtering method is applied to entities within the regions so that the probability distribution of each region is updated to indicate the likelihood of the entity residing in a particular region as the entity moves. | 06-11-2009 |
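The region-based belief in entry 20090147994 can be pictured as a discrete probability distribution over regions that is propagated and re-weighted over time. The small histogram-filter sketch below is only a stand-in for the patent's region-based particle filter, with an invented transition matrix and observation likelihood, to show the predict/update cycle.

```python
import numpy as np

def predict(belief, transition):
    """Propagate the belief over regions with a region-to-region transition
    matrix, where transition[i, j] = P(next region j | current region i)."""
    return transition.T @ belief

def update(belief, likelihood):
    """Re-weight the belief with the observation likelihood of each region."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Three regions (e.g. office, corridor, kitchen); the entity tends to stay put.
belief = np.array([0.6, 0.3, 0.1])
transition = np.array([[0.80, 0.15, 0.05],
                       [0.10, 0.80, 0.10],
                       [0.05, 0.15, 0.80]])
likelihood = np.array([0.2, 0.7, 0.1])   # sensor evidence favours the corridor

belief = update(predict(belief, transition), likelihood)
```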
20090147995 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM - An information processing apparatus includes information input units which inputs observation information in a real space; an event detection unit which generates event information including estimated position and identification information on users existing in the actual space through analysis of the input information; and an information integration processing unit which sets hypothesis probability distribution data regarding user position and user identification information and generates analysis information including the user position information through hypothesis update and sorting out based on the event information, in which the event detection unit detects a face area from an image frame input from an image information input unit, extracts face attribute information from the face area, and calculates and outputs a face attribute score corresponding to the extracted face attribute information to the information integration processing unit, and the information integration processing unit applies the face attribute score to calculate target face attribute expectation values. | 06-11-2009 |
20090154768 | METHOD OF MOTION DETECTION AND AUTONOMOUS MOTION TRACKING USING DYNAMIC SENSITIVITY MASKS IN A PAN-TILT CAMERA - A method of identifying motion within a field of view includes capturing at least two sequential images within the field of view. Each of the images includes a respective array of pixel values. An array of difference values between corresponding ones of the pixel values in the sequential images is calculated. A sensitivity region map corresponding to the field of view is provided. The sensitivity region map includes a plurality of regions having different threshold values. A presence of motion is determined by comparing the difference values to corresponding ones of the threshold values. | 06-18-2009 |
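The per-region sensitivity idea in entry 20090154768 amounts to comparing a pixel-wise difference image against a spatially varying threshold. A short NumPy version is below; how the threshold regions are authored (the patent's dynamic masks for a pan-tilt camera) is not addressed, and the example map is invented.

```python
import numpy as np

def motion_mask(frame_prev, frame_curr, threshold_map):
    """Flag motion wherever the inter-frame difference exceeds the local
    threshold defined by a per-region sensitivity map of the same size."""
    diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    return diff > threshold_map

# Example: a 480x640 map that is less sensitive (higher threshold) in the
# top half of the field of view, e.g. to ignore foliage or sky.
threshold_map = np.full((480, 640), 20, dtype=np.int16)
threshold_map[:240, :] = 60
```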
20090154769 | Moving robot and moving object detecting method and medium thereof - A moving robot and moving object detecting method and medium thereof is disclosed. The moving object detecting method includes transforming an omni-directional image captured in the moving robot to a panoramic image, comparing the panoramic image with a previous panoramic image and estimating a movement region of the moving object based on the comparison, and recognizing that a movement of the moving object exists in the estimated movement region when the area of the estimated movement region exceeds a reference area. | 06-18-2009 |
20090154770 | Moving Amount Calculation System and Obstacle Detection System - An arithmetic device ( | 06-18-2009 |
20090161911 | Moving Object Detection Apparatus And Method - Disclosed is a moving object detection apparatus and method. The apparatus comprises an image capture module, an image alignment module, a temporal differencing module, a distance transform module, and a background subtraction module. The image capture module derives a plurality of images in a time series. The image alignment module aligns the images if the image capture module is situated on a movable platform. The temporal differencing module performs temporal differencing on the captured images or the aligned images, and generates a difference image. The distance transform module transforms the difference image into a distance map. The background subtraction module applies the distance map to background subtraction technology and compares the results with the current captured image, so as to obtain the information for moving objects. | 06-25-2009 |
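Entry 20090161911 chains temporal differencing, a distance transform and background subtraction. The fragment below strings together the corresponding OpenCV calls in that order as a rough approximation; how the patent actually combines the distance map with the background-subtraction result is only guessed at here (a simple AND of the two masks).

```python
import cv2
import numpy as np

back_sub = cv2.createBackgroundSubtractorMOG2()

def moving_object_mask(frame_prev, frame_curr, diff_thresh=25, dist_thresh=3.0):
    """Temporal differencing -> distance map -> combination with a
    background-subtraction mask (the combination rule is an assumption)."""
    gray_prev = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(frame_curr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_curr, gray_prev)
    _, diff_mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Distance of every changed pixel to the nearest unchanged pixel.
    dist = cv2.distanceTransform(diff_mask, cv2.DIST_L2, 5)
    fg_mask = back_sub.apply(frame_curr)
    return ((dist > dist_thresh) & (fg_mask > 0)).astype(np.uint8) * 255
```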
20090161912 | METHOD FOR OBJECT DETECTION - In one aspect, the present invention is directed to a method for object detection, the method comprising the steps of: dividing a digital image into a plurality of sub-windows of substantially the same dimensions; processing the image of each of the sub-windows by a cascade of homogeneous classifiers (each of the homogeneous classifiers produces a CRV, which is a value relative to the likelihood of a sub-window to comprise an image of the object of interest, and wherein each of the classifiers has an increasing accuracy in identifying features associated with the object of interest); and upon classifying by all of the classifiers of the cascade a sub-window as comprising an image of the object of interest, applying a post-classifier on the cascade CRVs, for evaluating the likelihood of the sub-window to comprise an image of the object of interest, wherein the post-classifier differs from the homogeneous classifiers. | 06-25-2009 |
20090169052 | Object Detector - An object position area ( | 07-02-2009 |
20090169053 | COLLABORATIVE TRACKING - Disclosed is a system ( | 07-02-2009 |
20090169054 | METHOD OF ADJUSTING SELECTED WINDOW SIZE OF IMAGE OBJECT - A method of adjusting selected window size of an image object is applicable for tracking a target object in a video. The video includes a plurality of frames, and the target object has a display range changing with the playback of each frame. According to a variation trend of the display range of the target object, whether the number of variations corresponding to the variation trend reaches a threshold value is recorded, and then the selected window size is reset, such that the target object is enclosed with a selected window having a size closer to the target object. | 07-02-2009 |
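Entry 20090169054 resizes the selection window only after a variation trend has persisted for a number of frames. A small state machine capturing that idea is sketched below; the growth factor and the required count are made-up parameters.

```python
class SelectionWindow:
    """Resize the selection window only after the object's display range has
    grown (or shrunk) for `persistence` consecutive frames."""

    def __init__(self, size, persistence=5, step=1.1):
        self.size = float(size)
        self.persistence = persistence
        self.step = step
        self.trend = 0          # +1 growing, -1 shrinking, 0 no trend
        self.count = 0

    def observe(self, object_extent):
        trend = 1 if object_extent > self.size else -1 if object_extent < self.size else 0
        if trend != 0 and trend == self.trend:
            self.count += 1
        else:
            self.trend, self.count = trend, 1 if trend != 0 else 0
        if self.count >= self.persistence:
            # Trend has persisted long enough: reset the window size.
            self.size *= self.step if self.trend > 0 else 1.0 / self.step
            self.count = 0
        return self.size
```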
20090175496 | Image processing device and method, recording medium, and program - The present invention relates to image processing apparatus and method, a recording medium, and a program for providing reliable tracking of a tracking point. When a right eye | 07-09-2009 |
20090175497 | LOCATION MEASURING DEVICE AND METHOD - With apparatus and method for measuring in three dimensions by applying an estimating process to points corresponding to feature points in a plurality of motion image frames, high speed and high accuracy are realized. The apparatus comprises: a first track determining section ( | 07-09-2009 |
20090175498 | LOCATION MEASURING DEVICE AND METHOD - To realize high speed and high precision with device and method of three-dimensional measurement by applying estimating process to points corresponding to feature points in a plurality of motion frame images. With the device and method of calculating location information through processes of choosing a stereo pair, relative orientation, and bundle adjustment and using corresponding points of feature points extracted from respective motion frame images, each process is made up of two stages. To the first process section (stages: | 07-09-2009 |
20090175499 | Systems and methods for identifying objects and providing information related to identified objects - Systems and methods for identifying an object and presenting additional information about the identified object are provided. The techniques of the present invention can allow the user to specify modes to help with identifying objects. Furthermore, the additional information can be provided with different levels of detail depending on user selection. Apparatus for presenting a user with a log of the identified objects is also provided. The user can customize the log by, for example, creating a multi-media album. | 07-09-2009 |
20090175500 | Object tracking apparatus - An object tracking apparatus tracks an object on image data captured continuously. The object tracking apparatus includes an object color adjusting unit and a particle filter processing unit. The object color adjusting unit calculates tendency of color change in regions on image data and adjusts a color of the object set as an object color based on the tendency of color change to obtain a reference color. The particle filter processing unit estimates a region corresponding to the object on image data based on likelihood of each particle calculated by comparing a color around each particle with the reference color, using particles which move on image data according to a predefined rule. | 07-09-2009 |
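The color-likelihood particle filter of entry 20090175500 follows the standard predict/weight/resample cycle. The sketch below uses a Gaussian random walk for the motion model and a Gaussian penalty on the distance to a reference color; both are generic choices, not the patent's object-color adjustment.

```python
import numpy as np

def particle_filter_step(particles, frame, ref_color, motion_std=5.0, color_std=30.0):
    """One predict/weight/resample step of a color-based particle filter.
    `particles` is an (N, 2) array of (x, y) positions; `frame` is HxWx3."""
    n = len(particles)
    # Predict: random-walk motion model, clipped to the image bounds.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    h, w = frame.shape[:2]
    particles[:, 0] = np.clip(particles[:, 0], 0, w - 1)
    particles[:, 1] = np.clip(particles[:, 1], 0, h - 1)
    # Weight: likelihood from the color distance at each particle.
    xs = particles[:, 0].astype(int)
    ys = particles[:, 1].astype(int)
    colors = frame[ys, xs].astype(np.float32)
    dist2 = np.sum((colors - np.asarray(ref_color, dtype=np.float32)) ** 2, axis=1)
    weights = np.exp(-dist2 / (2.0 * color_std ** 2)) + 1e-12
    weights /= weights.sum()
    # Estimate and resample in proportion to the weights.
    estimate = np.average(particles, axis=0, weights=weights)
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx], estimate
```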
20090175501 | Imaging control apparatus and imaging control method - An imaging control apparatus includes preset information management means for holding and managing unit preset information including positional information indicative of the position of an imaging field changing mechanism that changes the imaging field of view of an imaging unit, and reference image data, the preset information management means, as a registration process in response to a registration instruction, producing and holding unit preset information including positional information indicative of the position of the imaging field changing mechanism when the registration instruction is issued and reference image data related to the positional information and produced based on an image signal obtained through imaging performed by the imaging unit when the registration instruction is issued; operation screen display control means for controlling display of an operation image used to select among preset items that correspond to respective sets of unit preset information held in the preset information management means, the operation screen display control means displaying and presenting, for each of the preset items, the reference image data contained in the corresponding unit preset information on the operation screen; and drive control means for carrying out drive control for changing the position of the imaging field changing mechanism, the drive control means carrying out the drive control in such a way that when a preset item is selected and entered on the operation screen, the imaging field changing mechanism is positioned as indicated by the positional information in the unit preset information that corresponds to the selected and entered preset item. | 07-09-2009 |
20090175502 | Methods for discriminating moving objects in motion image sequences - In an exemplary embodiment of the present invention, an automated, computerized method is provided for classifying pixel values in a motion sequence of images. According to a feature of the present invention, the method comprises the steps of determining spectral information relevant to the sequence of images, and utilizing the spectral information to classify a pixel as one of background, shadow and object. | 07-09-2009 |
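A minimal sketch of labeling pixels as background, shadow, or object, using a common chromaticity-plus-brightness heuristic rather than the spectral information described in this entry; `classify_pixels`, the shadow attenuation range, and the chromaticity tolerance are illustrative assumptions.

```python
import numpy as np

def classify_pixels(frame, background, shadow_dim=(0.4, 0.9), chroma_tol=0.05):
    """Label each pixel as background (0), shadow (1) or object (2).

    A pixel is called shadow when its chromaticity matches the background while its
    brightness is attenuated; a large chromaticity difference marks it as object."""
    eps = 1e-6
    f = frame.astype(float)
    b = background.astype(float)
    f_sum = f.sum(axis=2) + eps
    b_sum = b.sum(axis=2) + eps
    chroma_diff = np.abs(f / f_sum[..., None] - b / b_sum[..., None]).max(axis=2)
    brightness_ratio = f_sum / b_sum

    labels = np.full(frame.shape[:2], 2, dtype=np.uint8)           # default: object
    same_chroma = chroma_diff < chroma_tol
    labels[same_chroma & (brightness_ratio > shadow_dim[1])] = 0   # background
    labels[same_chroma & (brightness_ratio >= shadow_dim[0])
                       & (brightness_ratio <= shadow_dim[1])] = 1  # shadow
    return labels
```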
20090185715 | SYSTEM AND METHOD FOR DEFORMABLE OBJECT RECOGNITION - The present invention provides a system and method for detecting deformable objects in images even in the presence of partial occlusion, clutter and nonlinear illumination changes. A holistic approach for deformable object detection is disclosed that combines the advantages of a match metric that is based on the normalized gradient direction of the model points, the decomposition of the model into parts and a search method that takes all search results for all parts at the same time into account. Despite the fact that the model is decomposed into sub-parts, the relevant size of the model that is used for the search at the highest pyramid level is not reduced. Hence, the present invention does not suffer the speed limitations of a reduced number of pyramid levels that prior art methods have. | 07-23-2009 |
20090185716 | DUST DETECTION SYSTEM AND DIGITAL CAMERA - A dust detection system, comprising a receiver, a dust extraction block, a memory and an image correction block, is provided. The receiver receives an image signal. The dust extraction block generates a dust image signal on the basis of the image signal. The memory stores an intrinsic-flaw image signal corresponding to an intrinsic-flaw image including sub-images of dust that the dust extraction block extracts in initializing. The image correction block generates a corrected dust-image signal on the basis of the intrinsic-flaw image signal and a normal dust-image signal. The normal dust-image signal corresponds to a normal dust image including sub-images of dust that the dust extraction block extracts after initializing. The corrected dust image is the normal dust image from which the sub-images of dust in the intrinsic-flaw image have been deleted. | 07-23-2009 |
20090185717 | OBJECT DETECTION SYSTEM WITH IMPROVED OBJECT DETECTION ACCURACY - In a system for detecting a target object, a similarity determining unit sets a block in a picked-up image, and compares the part of the picked-up image contained in the block with pattern image data while changing the location of the block in the picked-up image, to determine a similarity of each part of the picked-up image contained in a corresponding one of the differently located blocks with respect to the pattern image data. A specifying unit extracts, from all of the differently located blocks, those blocks for which the determined similarity of the contained part of the picked-up image is equal to or greater than a predetermined threshold similarity. The specifying unit specifies, in the picked-up image, a target area based on a frequency distribution of the extracted blocks therein. | 07-23-2009 |
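One way to picture the sliding-block similarity search and the frequency-based target area in this entry is the sketch below, assuming normalized cross-correlation as the similarity measure and a simple mean of high-scoring block positions in place of a full frequency-distribution analysis; `detect_target_area`, `step`, and `thresh` are made-up names.

```python
import numpy as np

def detect_target_area(image, pattern, step=4, thresh=0.8):
    """Slide a block over a grayscale image, keep locations whose similarity to the
    pattern exceeds the threshold, and return the mean of those locations as a
    simple stand-in for a frequency-distribution based target area."""
    ph, pw = pattern.shape
    p = (pattern - pattern.mean()) / (pattern.std() + 1e-6)
    hits = []
    for r in range(0, image.shape[0] - ph + 1, step):
        for c in range(0, image.shape[1] - pw + 1, step):
            block = image[r:r + ph, c:c + pw]
            b = (block - block.mean()) / (block.std() + 1e-6)
            score = (p * b).mean()              # normalized cross-correlation
            if score >= thresh:
                hits.append((r, c, score))
    if not hits:
        return None
    rows, cols, _ = np.array(hits).T
    return int(rows.mean()), int(cols.mean())   # center of the high-similarity blocks
```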
20090190797 | RECOGNIZING IMAGE ENVIRONMENT FROM IMAGE AND POSITION - A method of recognizing the environment of an image from an image and position information associated with the image includes acquiring the image and its associated position information; using the position information to acquire an aerial image correlated to the position information; identifying the environment of the image from the acquired aerial image; and storing the environment of the image in association with the image for subsequent use. | 07-30-2009 |
20090190798 | SYSTEM AND METHOD FOR REAL-TIME OBJECT RECOGNITION AND POSE ESTIMATION USING IN-SITU MONITORING - Provided are a system and method for real-time object recognition and pose estimation using in-situ monitoring. The method includes the steps of: a) receiving 2D and 3D image information, extracting evidences from the received 2D and 3D image information, recognizing an object by comparing the evidences with a model, and expressing locations and poses by probabilistic particles; b) probabilistically fusing various locations and poses and finally determining a location and a pose by filtering inaccurate information; c) generating an ROI by receiving 2D and 3D image information and the location and pose from the step b) and collecting and calculating environmental information; d) selecting an evidence or a set of evidences probabilistically by receiving the information from the step c) and proposing a cognitive action of a robot for collecting additional evidence; and e) repeating the steps a) and b) and the steps c) and d) in parallel until a result of object recognition and pose estimation is probabilistically satisfied. | 07-30-2009 |
20090190799 | METHOD FOR CHARACTERIZING THE EXHAUST GAS BURN-OFF QUALITY IN COMBUSTION SYSTEMS - A method for characterizing a flue gas burnout quality of a combustion process in a combustion system having a gas burnout zone includes optically detecting in a visible wavelength range, in a flow cross section of the gas burnout zone, low-soot combustion regions, regions without combustion, and sooting regions, so as to provide a plurality of successive individual images, the regions without combustion and the sooting regions having different dynamics. The plurality of successive individual images are analyzed so as to distinguish regions of transition, to the low-soot combustion regions, of the regions without combustion and the sooting regions. | 07-30-2009 |
20090196459 | Image manipulation and processing techniques for remote inspection device - A remote inspection apparatus has an imager disposed in an imager head and capturing image data. An active display unit receives the image data in digital form and graphically renders the image data on an active display. Movement tracking sensors track movement of the imager head and/or image display unit. In some aspects, a computer processor located in the active display unit employs information from movement tracking sensors tracking movement of the imager head to generate and display a marker indicating a position of the imager head. In additional aspects, the computer processor employs information from movement tracking sensors tracking movement of the active display unit to control movement of the imager head. In other aspects, the computer processor employs information from movement tracking sensors tracking movement of the active display unit to modify the image data rendered on the active display. | 08-06-2009 |
20090196460 | EYE TRACKING SYSTEM AND METHOD - An eye tracking system and method is provided giving persons with severe disabilities the ability to access a computer through eye movement. A system comprising a head tracking system, an eye tracking system, a display device, and a processor which calculates the gaze point of the user is provided. The eye tracking method comprises determining the location and orientation of the head, determining the location and orientation of the eye, calculating the location of the center of rotation of the eye, and calculating the gaze point of the eye. A method for inputting to an electronic device a character selected by a user through alternate means is provided, the method comprising placing a cursor near the character to be selected by said user, shifting the characters on a set of keys which are closest to the cursor, tracking the movement of the character to be selected with the cursor, and identifying the character to be selected by comparing the direction of movement of the cursor with the direction of movement of the characters of the set of keys which are closest to the cursor. | 08-06-2009 |
20090196461 | IMAGE CAPTURE DEVICE AND PROGRAM STORAGE MEDIUM - An image capture device includes a capture unit configured to capture an image of an object, an object detection unit configured to detect the object in the image captured by the capture unit, an angle detection unit configured to detect an angle of the object detected by the object detection unit, and a control unit configured to perform a predetermined control operation for the image capture device based on the angle of the object detected by the angle detection unit. | 08-06-2009 |
20090196462 | VIDEO AND AUDIO CONTENT ANALYSIS SYSTEM - The present invention is directed to various methods and systems for analysis and processing of video and audio signals from a plurality of sources in real-time or off-line. According to some embodiments of the present invention, analysis and processing applications are dynamically installed in the processing units. | 08-06-2009 |
20090202107 | Object detection and recognition system - An object recognition system is provided including at least one image capturing device configured to capture at least one image, wherein the image includes a plurality of pixels and is represented in an image data set, an object detection device configured to identify a plurality of pixels corresponding to objects from the at least one image, wherein an object includes a plurality of pixels and is represented in an object data set, wherein the object data set includes a set of features corresponding to each pixel in the object, and an image recognition device configured to recognize objects of interest present in an object by image correlation against a set of template images to recognize an object as one of the templates. | 08-13-2009 |
20090202108 | ASSAYING AND IMAGING SYSTEM IDENTIFYING TRAITS OF BIOLOGICAL SPECIMENS - A method and system are provided for assaying specimens. In connection with such system or method, plural multi-pixel target images of a field of view are obtained at different corresponding points in time over a given sample period. A background image is obtained using a plural set of the plural target images. For a range of points in time, the background image is removed from the target images to produce corresponding background-removed target images. Analysis is performed using at least a portion of the corresponding background-removed target images to identify visible features of the specimens. A holding structure is provided to hold a set of discrete specimen containers. A positioning mechanism is provided to position a plural subset of the containers to place the moving specimens within the plural subset of the containers within a field of view of the camera. | 08-13-2009 |
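A minimal sketch of building a background image from a plural set of target images and removing it, assuming a per-pixel median over grayscale frames; the median choice and the clipping rule are assumptions, not the patent's construction.

```python
import numpy as np

def background_removed_frames(frames):
    """Build a background image as the per-pixel median over a set of grayscale
    frames, then subtract it from each frame so only moving specimens remain."""
    stack = np.stack(frames).astype(float)        # (T, H, W) stack of frames
    background = np.median(stack, axis=0)         # static content survives the median
    removed = np.clip(stack - background, 0, None)
    return background, removed
```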
20090208052 | INTERACTIVE DEVICE AND METHOD FOR TRANSMITTING COMMANDS FROM A USER - According to the present invention, an interactive device is provided comprising a display, a camera, and an image analysing means, said interactive device comprising means to acquire an image with the camera, the analysing means detecting at least a human face on the acquired image and displaying on the display at least a pattern where the human face was detected, wherein the interactive device further comprises means to determine a halo region extending at least around the pattern, means to add into the halo region at least one interactive zone related to a command, means to detect movement onto the interactive zone, and means to execute the command by said device. | 08-20-2009 |
20090208053 | AUTOMATIC IDENTIFICATION AND REMOVAL OF OBJECTS IN AN IMAGE, SUCH AS WIRES IN A FRAME OF VIDEO - A wire tracking system is described that provides a method and system for automatically locating wires in a digital image and tracking the located wires through a sequence of digital images. The wire tracking system is particularly good at removing wires from complex shots where background replacement is difficult. The wire tracking system performs complex signal processing to automatically remove the wire from the original image while preserving grain and background detail. Thus, the wire tracking system provides a reliable method of automatically identifying wires and replacing the wires with a reconstructed background image, and frees artists to make other enhancements to the scene. | 08-20-2009 |
20090208054 | MEASURING A COHORT'S VELOCITY, ACCELERATION AND DIRECTION USING DIGITAL VIDEO - A computer implemented method, apparatus, and computer program product for identifying positional data for an object moving in an area of interest. Positional data for each camera in a set of cameras associated with the object is retrieved. The positional data identifies a location of each camera in the set of cameras within the area of interest. The object is within an image capture range of each camera in the set of cameras. Metadata describing video data captured by the set of cameras is analyzed using triangulation analytics and the positional data for the set of cameras to identify a location of the object. The metadata is generated in real time as the video data is captured by the set of cameras. The positional data for the object is identified based on locations of the object over a given time interval. The positional data describes motion of the object. | 08-20-2009 |
20090208055 | Efficient detection of broken line segments in a scanned image - Systems and methods are presented for detecting and repairing broken lines within an image from a plurality of edge segments comprising a plurality of pixels and having associated first and second endpoints. A characteristic angle is determined for each edge segment. A normal distance is determined for each edge segment according to the distance of closest approach to a reference point of the line defined by the first and second endpoints of that edge segment. At least one line within the scanned image is located according to the determined characteristic angles and the determined normal distances for the plurality of edge segments. | 08-20-2009 |
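The characteristic angle and normal distance described here can be sketched for a single edge segment as below, assuming the angle is the segment's direction folded into [0, π) and the normal distance is measured from a caller-supplied reference point; `segment_line_parameters` is a hypothetical helper.

```python
import numpy as np

def segment_line_parameters(p1, p2, reference=(0.0, 0.0)):
    """Return (characteristic angle, normal distance) for the line through p1 and p2.

    The normal distance is the distance of closest approach from the reference
    point to the infinite line defined by the segment's endpoints."""
    p1, p2, ref = (np.asarray(v, dtype=float) for v in (p1, p2, reference))
    d = p2 - p1
    angle = np.arctan2(d[1], d[0]) % np.pi                # direction folded into [0, pi)
    normal = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-12)
    dist = abs(np.dot(ref - p1, normal))                  # perpendicular distance to the line
    return angle, dist

# Edge segments whose (angle, distance) pairs agree within a tolerance can be
# grouped as fragments of the same broken line.
```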
20090208056 | REAL-TIME FACE TRACKING IN A DIGITAL IMAGE ACQUISITION DEVICE - An image processing apparatus for tracking faces in an image stream iteratively receives a new acquired image from the image stream, the image potentially including one or more face regions. The acquired image is sub-sampled ( | 08-20-2009 |
20090208057 | VIRTUAL CONTROLLER FOR VISUAL DISPLAYS - Virtual controllers for visual displays are described. In one implementation, a camera captures an image of hands against a background. The image is segmented into hand areas and background areas. Various hand and finger gestures isolate parts of the background into independent areas, which are then assigned control parameters for manipulating the visual display. Multiple control parameters can be associated with attributes of multiple independent areas formed by two hands, for advanced control including simultaneous functions of clicking, selecting, executing, horizontal movement, vertical movement, scrolling, dragging, rotational movement, zooming, maximizing, minimizing, executing file functions, and executing menu choices. | 08-20-2009 |
20090208058 | IMAGING SYSTEM FOR VEHICLE - An imaging system for a vehicle includes an imaging device having a field of view exteriorly and forward of the vehicle in its direction of travel, and an image processor operable to process the captured images in accordance with an algorithm. The algorithm comprises a sign recognition routine and a character recognition routine. The image processor processes the image data captured by the imaging device to detect signs in the field of view of the imaging device and applies the sign recognition routine to determine a sign type of the detected sign. The image processor is operable to apply the character recognition routine to the image data to determine information on the detected sign. The image processor applies the character recognition routine to the captured images in response to an output of the sign recognition routine being indicative of the detected sign being a sign type of interest. | 08-20-2009 |
20090214077 | Method For Determining The Self-Motion Of A Vehicle - A method and a device for determining the self-motion of a vehicle in an environment are provided, in which at least part of the environment is recorded via snapshots by an imaging device mounted on the vehicle. At least two snapshots are analyzed for determining the optical flows of image points, reference points that seem to be stationary from the point of view of the imaging device being ascertained from the optical flows. The reference points are collected in an observed set, new reference points being dynamically added to the observed set with the aid of a first algorithm, and existing reference points being dynamically removed from the observed set with the aid of a second algorithm. | 08-27-2009 |
20090214078 | Method for Handling Static Text and Logos in Stabilized Images - To handle static text and logos in stabilized images without destabilizing the static text and logos, a method of handling overlay subpictures in stabilized images includes detecting an overlay subpicture in an input image, separating the overlay subpicture from the input image, stabilizing the input image to form a stabilized image, and merging the overlay subpicture with the stabilized image to obtain an output image. | 08-27-2009 |
20090214079 | SYSTEMS AND METHODS FOR RECOGNIZING A TARGET FROM A MOVING PLATFORM - Systems and methods for recognizing a location of a target are provided. One system includes a camera configured to generate first data representing an object resembling the target, a memory storing second data representing a template of the target, and a processor. The processor is configured to receive the first data and the second data, and determine that the object is the target if the object matches the template within a predetermined percentage error. A method includes receiving first data representing an object resembling the target, receiving second data representing a template of the target, and determining that the object is the target if the object matches the template within a predetermined percentage error. Also provided are computer-readable mediums including processor instructions for executing the above method. | 08-27-2009 |
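A rough sketch of the "matches the template within a predetermined percentage error" test, assuming the error is a mean absolute pixel difference normalized by the template's dynamic range; the metric and `max_error_pct` are illustrative choices, since the entry does not define the error measure.

```python
import numpy as np

def is_target(candidate, template, max_error_pct=10.0):
    """Decide the candidate is the target when its mean absolute difference from
    the template, expressed as a percentage of the template's dynamic range,
    is within the allowed percentage error."""
    c = candidate.astype(float)
    t = template.astype(float)
    if c.shape != t.shape:
        raise ValueError("candidate and template must have the same shape")
    rng = t.max() - t.min() + 1e-6
    error_pct = 100.0 * np.abs(c - t).mean() / rng
    return error_pct <= max_error_pct
```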
20090214080 | METHODS AND APPARATUS FOR RUNWAY SEGMENTATION USING SENSOR ANALYSIS - Systems and methods for determining whether a region of interest (ROI) includes a runway are provided. One system includes a camera for capturing an image of the ROI, an analysis module for generating a binary large object (BLOB) of at least a portion of the ROI, and a synthetic vision system including a template of the runway. The system further includes a segmentation module for determining if the ROI includes the runway based on a comparison of the template and the BLOB. One method includes the steps of identifying a position for each corner on the BLOB and forming a polygon on the BLOB based on the position of each corner. The method further includes the step of determining that the BLOB represents the runway based on a comparison of the polygon and a template of the runway. Also provided are computer-readable mediums storing instructions for performing the above method. | 08-27-2009 |
20090214081 | APPARATUS AND METHOD FOR DETECTING OBJECT - A disparity profile indicating a relation between a perpendicular position on time series images and a disparity on a target monitoring area based on an arrangement of a camera is calculated. Processing areas are set by setting a height of each of the processing areas using a length at the bottom of the image obtained by converting a reference value of a height of an object according to the profile, while setting a position of the bottom of each processing area on the image. An object having a height higher than a certain height with respect to the monitoring area is detected, an object detection result in each processing area is unified according to the disparity of the object, and the object is detected over the whole monitoring area from the individual processing areas. Position and speed are estimated for the object detected by the primary object detection unit. | 08-27-2009 |
20090220122 | TRACKING SYSTEM FOR ORTHOGNATHIC SURGERY - Systems and methods are provided for measuring relative movement between two portions of the facial skeleton. A target ( | 09-03-2009 |
20090220123 | APPARATUS AND METHOD FOR COUNTING NUMBER OF OBJECTS - An image processing apparatus includes a first detecting unit configured to detect an object based on an upper body of a person and a second detecting unit configured to detect an object based on a face of a person. The image processing apparatus determines a level of congestion of objects contained in an input image, selects the first detecting unit when the level of congestion is low, and selects the second detecting unit when the level of congestion is high. The image processing apparatus counts the number of objects detected by the selected first or second detecting unit from the image. Thus, the image processing apparatus can detect an object and count the number of objects with high precision even when the level of congestion is high and the objects tend to overlap one another. | 09-03-2009 |
20090220124 | AUTOMATED SCORING SYSTEM FOR ATHLETICS - Disclosed are methods and systems for utilizing motion capture techniques, for example, video based motion capture techniques, for capturing and modeling the captured 3D movement of an athlete through a defined space. The model is then compared with an intended motion pattern in order to identify deviations and/or form breaks that, in turn, may be used in combination with a scoring algorithm to quantify the athlete's execution of the intended motion pattern to produce an objective score. It is anticipated that these methods and systems will be particularly useful for training and judging in those sports that have struggled with the vagaries introduced by the subjective nature of human scoring. | 09-03-2009 |
20090220125 | IMAGE RECONSTRUCTION BY POSITION AND MOTION TRACKING - A system, method, and apparatus provide the ability to reconstruct an image from an object. A hand-held image acquisition device is configured to acquire local image information from a physical object. A tracking system obtains displacement information for the hand-held acquisition device while the device is acquiring the local image information. An image reconstruction system computes the inverse of the displacement information and combines the inverse with the local image information to transform the local image information into a reconstructed local image information. A display device displays the reconstructed local image information. | 09-03-2009 |
20090232353 | METHOD AND SYSTEM FOR MARKERLESS MOTION CAPTURE USING MULTIPLE CAMERAS - A completely automated end-to-end method and system for markerless motion capture performs segmentation of articulating objects in Laplacian Eigenspace and is applicable to handling poses of some complexity. 3D voxel representations of acquired images are mapped to a higher dimensional space ( | 09-17-2009 |
20090232354 | ADVERTISEMENT INSERTION SYSTEMS AND METHODS FOR DIGITAL CAMERAS BASED ON OBJECT RECOGNITION - Digital cameras include an image capture system, an object recognition system and an advertisement insertion system. The image capture system captures a visible image as a digital image. The object recognition system recognizes visible objects in the digital image. The advertisement insertion system inserts an advertising-related image into the digital image in response to a visible object in the digital image that was recognized. The user of the digital camera may be compensated for exposure to the advertising-related image. | 09-17-2009 |
20090232355 | REGISTRATION OF 3D POINT CLOUD DATA USING EIGENANALYSIS | 09-17-2009 |
20090232356 | Tracking System and Method for Tracking Objects - Disclosed are a tracking system and a method for locating a plurality of objects. The tracking system includes an identification module, a receiver, a processing module, and a transmitter. The identification module is configured to obtain unit identification information associated with the one or more traceable units. The receiver is configured to receive information of a spatial location and unit identification information of the one or more traceable units. The processing module is electronically coupled to the identification module and the receiver and is configured to identify the one or more traceable units based on the obtained unit identification information and the received unit identification information. The processing module is further configured to determine locations of the one or more traceable units based on the information of the spatial location of the one or more identified traceable units. The transmitter is electronically coupled to the processing module. | 09-17-2009 |
20090232357 | DETECTING BEHAVIORAL DEVIATIONS BY MEASURING EYE MOVEMENTS - According to one embodiment of the present invention, a computer implemented method, apparatus, and computer usable program product is provided for detecting behavioral deviations in members of a cohort group. A member of a cohort group is identified. Each member of the cohort group shares a common characteristic. Ocular metadata associated with the member of the cohort group is generated in real-time. The ocular metadata describes movements of an eye of the member of the cohort group. The ocular metadata is analyzed to identify patterns of ocular movements. In response to the patterns of ocular movements indicating behavioral deviations in the member of the cohort group, the member of the cohort group is identified as a person of interest. A person of interest may be subjected to an increased level of monitoring and/or other security measures. | 09-17-2009 |
20090232358 | Method and apparatus for processing an image - There is provided an efficient, fast image processing apparatus with low error probability for rapidly scrutinizing a digitized video image frame and processing said image frame to detect and characterize features of interest while ignoring other features of said image frame. There is further provided an efficient, fast image processing method with low error probability for rapidly scrutinizing a digitized video image frame and processing said image frame to detect and characterize features of interest while ignoring other features of said image frame. In a first embodiment of the invention, an image processing apparatus comprises an imaging device coupled to a digital electronic image processor. Video data from the imaging device is linked to a location data source. Objects of interest in a scene are identified by comparing computed Maximally Stable Extremal Regions (MSERs) of captured images with MSERs of images of objects contained in an object template database. | 09-17-2009 |
20090238404 | METHODS FOR USING DEFORMABLE MODELS FOR TRACKING STRUCTURES IN VOLUMETRIC DATA - A computerized method for tracking of a 3D structure in a 3D image including a plurality of sequential image frames, one of which is a current image frame, includes representing the 3D structure being tracked with a parametric model with parameters for local shape deformations. A predicted state vector is created for the parametric model using a kinematic model. The parametric model is deformed using the predicted state vector, and a plurality of actual points for the 3D structure is determined using a current frame of the 3D image, and displacement values and measurement vectors are determined using differences between the plurality of actual points and the plurality of predicted points. The displacement values and the measurement vectors are filtered to generate an updated state vector and an updated covariance matrix, and an updated parametric model is generated for the current image frame using the updated state vector. | 09-24-2009 |
20090238405 | METHOD AND SYSTEM FOR ENABLING A USER TO PLAY A LARGE SCREEN GAME BY MEANS OF A MOBILE DEVICE - The present invention relates to a system and method for determining and tracking one or more objects, or one or more image sections within each image of a video stream to be displayed on user's mobile device, comprising: (a) one or more video streams to be run on a streaming server; (b) an image capture software component for capturing images of said one or more video streams, according to a first group of one or more sets of rules; (c) a receiver for receiving one or more commands generated by a user and transferring said commands to an extra-layer software component; (d) an extra-layer software component for: (d.1.) determining one or more objects or image sections within the captured images; (d.2.) tracking said objects or image sections within said captured images; and (d.3.) processing said captured images, to generate corresponding images to be displayed on a mobile device screen, according to a second group of one or more sets of rules and according to user's commands received by means of said receiver; (e) a compression software component for compressing the images, processed by means of said extra-layer software component, according to a third group of one or more sets of rules; (f) a data software component for providing groups of one or more sets of rules to said image capture software component, said extra-layer software component and said compression software component; and (g) a transmitter for transmitting the compressed images to a mobile device. The system and method further comprises a relayout software component for: (a) determining one or more objects or image sections within each image of the one or more video streams; (b) tracking said objects or image sections within said each image of said one or more video streams; and (c) processing said each image, to generate corresponding images to be displayed on a mobile device screen, according to a first group of one or more sets of rules and according to user's commands received by means of the receiver. | 09-24-2009 |
20090238406 | Dynamic state estimation - According to an implementation, a set of particles is provided for use in estimating a location of a state of a dynamic system. A local-mode seeking mechanism is applied to move one or more particles in the set of particles, and the number of particles in the set of particles is modified. The location of the state of the dynamic system is estimated using particles in the set of particles. Another implementation provides dynamic state estimation using a particle filter for which the particle locations are modified using a local-mode seeking algorithm based on a mean-shift analysis and for which the number of particles is adjusted using a Kullback-Leibler-distance sampling process. The mean-shift analysis may reduce degeneracy in the particles, and the sampling process may reduce the computational complexity of the particle filter. The implementation may be useful with non-linear and non-Gaussian systems. | 09-24-2009 |
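A loose sketch of the two ideas named in this entry: particles moved by a mean-shift style step, and a particle count that adapts to uncertainty. The kernel bandwidth, the spread-based count rule, and the helper names are assumptions; in particular, the count rule is only a crude stand-in for Kullback-Leibler-distance sampling, not a reproduction of it.

```python
import numpy as np

def mean_shift_particles(particles, weight_fn, bandwidth=10.0, iters=3):
    """Nudge each particle toward a nearby mode of the weighted particle density
    using a kernel-weighted mean (a mean-shift style move).

    weight_fn(pts) is assumed to return one importance weight per particle,
    e.g. an image likelihood evaluated at each particle position."""
    pts = particles.astype(float).copy()
    for _ in range(iters):
        w = weight_fn(pts)
        new_pts = np.empty_like(pts)
        for i, p in enumerate(pts):
            d2 = np.sum((pts - p) ** 2, axis=1)
            k = np.exp(-d2 / (2 * bandwidth ** 2)) * w
            new_pts[i] = (k[:, None] * pts).sum(axis=0) / (k.sum() + 1e-12)
        pts = new_pts
    return pts

def adapt_particle_count(particles, min_n=50, max_n=500, particles_per_unit_spread=10.0):
    """Crude stand-in for KLD-style adaptation: a widely spread (uncertain)
    particle set is resampled to more particles, a tight set to fewer."""
    spread = particles.std(axis=0).sum()
    n = int(np.clip(spread * particles_per_unit_spread, min_n, max_n))
    idx = np.random.choice(len(particles), size=n, replace=True)
    return particles[idx]
```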
20090238407 | Object detecting apparatus and method for detecting an object - An apparatus for detecting an object, includes: a candidate point detection unit detecting a candidate point between the ground and an object from an image; a tracking unit calculating positions of the candidate point at a first time and a second time; a difference calculation unit calculating a difference between an estimated position at the second time and the candidate point position at the second time; and a state determination unit determining a new state of the candidate point at the second time based on the difference, and changing the search threshold value or a state. | 09-24-2009 |
20090238408 | IMAGE-SIGNAL PROCESSOR, IMAGE-SIGNAL PROCESSING METHOD, AND PROGRAM - An image-signal processing apparatus configured to track an object moving in an image includes a setting unit configured to set an eliminating area in an image constituting a moving image; a motion-vector detecting unit configured to detect an object in the image constituting a moving image and detect a motion vector corresponding to the object using an area excluding the eliminating area in the image; and an estimating unit configured to estimate a position to which the object moves on the basis of the detected motion vector. | 09-24-2009 |
20090238409 | Method for testing a motion vector - A method for testing a motion vector is described, which includes: providing at least one item of motion information assigned to the image sequence; storing a first image section of the first image in a first buffer memory and storing a second image section of the second image in a second buffer memory, whereby the position of the first image section in the first image and the position of the second image section in the second image have a reciprocal offset, which is dependent on the at least one item of motion information; determining a first image block in the first image section and a second image block in the second image section using the motion vector; and comparing the contents of the first and of the second image block. | 09-24-2009 |
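A minimal sketch of the block-comparison step at the end of this entry, assuming the test passes when the mean absolute difference between the block and the motion-compensated block is below a threshold; `test_motion_vector` and `max_sad` are illustrative names.

```python
import numpy as np

def test_motion_vector(img1, img2, block_pos, block_size, mv, max_sad=10.0):
    """Compare the block at block_pos in img1 with the block displaced by the
    motion vector mv in img2; the vector passes the test when the mean absolute
    difference of the two blocks is small enough."""
    r, c = block_pos
    h, w = block_size
    dr, dc = int(mv[0]), int(mv[1])
    if r + dr < 0 or c + dc < 0:
        return False                              # displaced block leaves the image
    b1 = img1[r:r + h, c:c + w].astype(float)
    b2 = img2[r + dr:r + dr + h, c + dc:c + dc + w].astype(float)
    if b1.shape != b2.shape:
        return False                              # displaced block leaves the image
    return np.abs(b1 - b2).mean() <= max_sad
```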
20090238410 | FACE RECOGNITION WITH COMBINED PCA-BASED DATASETS - A face recognition method for working with two or more collections of facial images is provided. A representation framework is determined for a first collection of facial images including at least principal component analysis (PCA) features. A representation of said first collection is stored using the representation framework. A modified representation framework is determined based on statistical properties of original facial image samples of a second collection of facial images and the stored representation of the first collection. The first and second collections are combined without using original facial image samples. A representation of the combined image collection (super-collection) is stored using the modified representation framework. A representation of a current facial image, determined in terms of the modified representation framework, is compared with one or more representations of facial images of the combined collection. Based on the comparing, it is determined which, if any, of the facial images within the combined collection matches the current facial image. | 09-24-2009 |
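For orientation, a minimal PCA (eigenface) feature extraction and nearest-neighbour match is sketched below; it covers only the basic PCA representation the entry builds on, not the combination of collections without original samples that the entry claims. All function names are hypothetical.

```python
import numpy as np

def fit_pca(face_vectors, n_components=20):
    """Compute a PCA basis (eigenfaces) from row-vectorized face images."""
    X = np.asarray(face_vectors, dtype=float)
    mean = X.mean(axis=0)
    # Economy-size SVD of the centred data gives the principal directions.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def pca_features(face_vector, mean, components):
    """Project a face onto the PCA basis to obtain its feature vector."""
    return components @ (np.asarray(face_vector, dtype=float) - mean)

def match(face_vector, gallery_features, mean, components):
    """Return the index and distance of the gallery face with the closest PCA features."""
    f = pca_features(face_vector, mean, components)
    dists = np.linalg.norm(gallery_features - f, axis=1)
    return int(np.argmin(dists)), float(dists.min())
```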
20090245570 | METHOD AND SYSTEM FOR OBJECT DETECTION IN IMAGES UTILIZING ADAPTIVE SCANNING - An object detection method and system for detecting an object in an image utilizing an adaptive image scanning strategy is disclosed herein. An initial rough shift can be determined based on the size of a scanning window and the image can be scanned continuously for several detections of similar sizes using the rough shift. The scanning window can be classified with respect to a cascade of homogenous classification functions covering one or more features of the object. The size and scanning direction of the scanning window can be adaptively changed depending on the probability of the object occurrence in accordance with scan acceleration. The object can be detected by an object detector and can be localized with higher precision and accuracy. | 10-01-2009 |
20090245571 | Digital video target moving object segmentation method and system - A digital video target moving object segmentation method and system are designed for processing a digital video stream for segmentation of every target moving object that appears in the video content. The proposed method and system are characterized by the operations of a multiple background imagery extraction process and a background imagery updating process for extracting characteristic background imagery whose content includes the motional background objects in addition to the static background scenes; the multiple background imagery extraction process is based on a background difference threshold comparison method, while the background imagery updating process is based on a background-matching and weight-counting method. This feature allows an object mask to be defined based on the characteristic background imagery, which can mask both the motional background objects and the static background scenes. | 10-01-2009 |
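A rough sketch of a background-difference-threshold segmentation with a background-matching, weight-counting update over multiple background images; the per-pixel threshold, match fraction, and replacement rule are assumptions, not the proposed method's actual rules.

```python
import numpy as np

def segment_and_update(frame, backgrounds, weights, diff_thresh=25, match_frac=0.95):
    """Foreground mask from multiple background images, plus a weight-counting update.

    frame       : HxW grayscale frame
    backgrounds : list of HxW background images (modified in place)
    weights     : list of integer weight counters, one per background image
    """
    f = frame.astype(float)
    diffs = [np.abs(f - b.astype(float)) for b in backgrounds]

    # A pixel is foreground only if it differs from every stored background image.
    fg = np.all([d > diff_thresh for d in diffs], axis=0)

    # Background matching: credit the background image the frame agrees with most.
    match_scores = [np.mean(d <= diff_thresh) for d in diffs]
    best = int(np.argmax(match_scores))
    if match_scores[best] >= match_frac:
        weights[best] += 1
    else:
        # No background matches well: replace the least-used background image.
        worst = int(np.argmin(weights))
        backgrounds[worst] = frame.copy()
        weights[worst] = 1
    return fg
```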
20090245572 | Control apparatus and method - The invention discloses a control apparatus for a user to control an electronic apparatus. The control apparatus of the invention includes a monitoring module, a sensing module, a first processing module, and a first transmitting module. The monitoring module is used to monitor the user's eyeball(s), and generates related eyeball-movement information. The sensing module is used to monitor a body portion of the user, and generates related body portion-movement information. The first processing module is connected to the monitoring module and the sensing module respectively, for calculating the control information in accordance with the eyeball-movement information and the body portion-movement information. Additionally, the first transmitting module is connected to the first processing module, for transmitting the control information to the electronic device, which can act according to the control information. | 10-01-2009 |
20090245573 | OBJECT MATCHING FOR TRACKING, INDEXING, AND SEARCH - A camera system comprises an image capturing device, object detection module, object tracking module, and match classifier. The object detection module receives image data and detects objects appearing in one or more of the images. The object tracking module temporally associates instances of a first object detected in a first group of the images. The first object has a first signature representing features of the first object. The match classifier matches object instances by analyzing data derived from the first signature of the first object and a second signature of a second object detected in a second image. The second signature represents features of the second object derived from the second image. The match classifier determines whether the second signature matches the first signature. A training process automatically configures the match classifier using a set of possible object features. | 10-01-2009 |
20090245574 | OPTICAL POINTING DEVICE AND METHOD OF DETECTING CLICK EVENT IN OPTICAL POINTING DEVICE - A method of detecting a click event for sensing a motion of a finger corresponding to a click on a sensing area of an optical pointing device is provided, the method including: obtaining an image of the finger from the sensing area; sensing a change in the image of the finger; analyzing a horizontal movement of the finger based on the change in the image of the finger; and generating a click signal when the horizontal movement of the finger is within a predetermined range. | 10-01-2009 |
20090245575 | METHOD, APPARATUS, AND PROGRAM STORAGE MEDIUM FOR DETECTING OBJECT - In an object detecting method according to an aspect of the invention, a specific kind of object such as a human head can be detected with high accuracy even if the detecting target object appears in various shapes. The object detecting method includes a primary evaluated value computing step of applying plural filters to an image of an object detecting target to compute plural feature quantities and of obtaining a primary evaluated value corresponding to each feature quantity; a secondary evaluated value computing step of obtaining a secondary evaluated value by integrating the plural primary evaluated values obtained in the primary evaluated value computing step; and a region extracting step of comparing the secondary evaluated value obtained in the secondary evaluated value computing step and a threshold to extract a region where an existing probability of the specific kind of object is higher than the threshold. | 10-01-2009 |
20090245576 | METHOD, APPARATUS, AND PROGRAM STORAGE MEDIUM FOR DETECTING OBJECT - The invention relates to an object detecting method for detecting a specific kind of object such as a human head and a human face from an image expressed by two-dimensionally arrayed pixels, the object detecting method including an image group producing step of producing an image group including an original image of the object detecting target and at least one thinned-out image by thinning out pixels constituting the original image at a predetermined rate or by thinning out the pixels at the predetermined rate in a stepwise manner; and a stepwise detection step of detecting the specific kind of object from the original image by sequentially repeating plural extraction processes from an extraction process of applying a filter acting on a relatively small region to a relatively small image toward an extraction process of applying a filter acting on a relatively wide region to a relatively large image. | 10-01-2009 |
20090245577 | Tracking Processing Apparatus, Tracking Processing Method, and Computer Program - A tracking processing apparatus includes: first state-variable-sample-candidate generating means for generating state variable sample candidates at first present time; plural detecting means each for performing detection concerning a predetermined detection target related to a tracking target; sub-information generating means for generating sub-state variable probability distribution information at present time; second state-variable-sample-candidate generating means for generating state variable sample candidates at second present time; a state-variable-sample acquiring means for selecting state variable samples out of the state variable sample candidates at the first present time and the state variable sample candidates at the second present time at random according to a predetermined selection ratio set in advance; and estimation-result generating means for generating main state variable probability distribution information at the present time as an estimation result. | 10-01-2009 |
20090245578 | METHOD OF DETECTING PREDETERMINED OBJECT FROM IMAGE AND APPARATUS THEREFOR - In an object detecting method, an imaging condition of an image pickup unit is determined, a detecting method is selected based on the determined imaging condition, and at least one predetermined object is detected from an image picked up through the image pickup unit according to the selected detecting method. | 10-01-2009 |
20090245579 | PROBABILITY DISTRIBUTION CONSTRUCTING METHOD, PROBABILITY DISTRIBUTION CONSTRUCTING APPARATUS, STORAGE MEDIUM OF PROBABILITY DISTRIBUTION CONSTRUCTING PROGRAM, SUBJECT DETECTING METHOD, SUBJECT DETECTING APPARATUS, AND STORAGE MEDIUM OF SUBJECT DETECTING PROGRAM - A probability distribution constructing method extracts a subject shape similar to a subject of a specific type repeatedly appearing in various sizes in plural images obtained by repeatedly photographing a field using a fixedly disposed camera, in accordance with a size of the similar subject shape and positional information of the camera on a view angle. Subsequently, the probability distribution constructing method determines the similar subject shape, calculates an appearance probability distribution of the size of the subject, and detects the subject using the appearance probability distribution. | 10-01-2009 |
20090245580 | MODIFYING PARAMETERS OF AN OBJECT DETECTOR BASED ON DETECTION INFORMATION - Embodiments of an object detection unit configured to modify parameters for one or more object detectors based on detection information are provided. | 10-01-2009 |
20090252373 | Method and System for detecting polygon Boundaries of structures in images as particle tracks through fields of corners and pixel gradients - A stochastic method and system for detecting polygon structures in images, by detecting a set of best matching corners of predetermined acuteness α of a polygon model from a set of similarity scores based on GDM features of corners, and tracking polygon boundaries as particle tracks using a sequential Monte Carlo approach. The tracking involves initializing polygon boundary tracking by selecting pairs of corners from the set of best matching corners to define a first side of a corresponding polygon boundary; tracking all intermediate sides of the polygon boundaries using a particle filter, and terminating polygon boundary tracking by determining the last side of the tracked polygon boundaries to close the polygon boundaries. The particle tracks are then blended to determine polygon matches, which may be made available, such as to a user, for ranking and inspection. | 10-08-2009 |
20090252374 | IMAGE SIGNAL PROCESSING APPARATUS, IMAGE SIGNAL PROCESSING METHOD, AND PROGRAM - An image signal processing apparatus includes a detecting unit configured to detect a motion vector of a tracking point provided in an object in a moving image, a computing unit configured to compute a reliability parameter representing the reliability of the detected motion vector, a determining unit configured to determine whether the detected motion vector is adopted by comparing the computed reliability parameter with a boundary, an accumulating unit configured to accumulate the reliability parameter, and a changing unit configured to change the boundary on the basis of the accumulated reliability parameters. | 10-08-2009 |
20090252375 | Position Detection System, Position Detection Method, Program, Object Determination System and Object Determination Method - There is provided a position detection system including an imaging unit to capture an image of a projection plane of an electromagnetic wave, an electromagnetic wave emission unit to emit the electromagnetic wave to the projection plane, a control unit to control emission of the electromagnetic wave by the electromagnetic wave emission unit, and a position detection unit including a projected image detection section to detect a projected image of an object existing between the electromagnetic wave emission unit and the projection plane based on a difference between an image of the projection plane captured during emission of the electromagnetic wave by the electromagnetic wave emission unit and an image of the projection plane captured during no emission of the electromagnetic wave, and a position detection section to detect a position of the object based on a position of the projected image of the object. | 10-08-2009 |
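The projected-image detection in this entry reduces, in the simplest reading, to differencing the frame captured during emission against the frame captured without emission; a sketch follows, with the threshold and the centroid-as-position choice as assumptions.

```python
import numpy as np

def object_position(image_lit, image_unlit, thresh=30):
    """Detect the projected image of the object as the pixels that change between
    the frame captured with the electromagnetic wave emitted and the frame captured
    without it, and return the centroid of that region as the object position."""
    diff = np.abs(image_lit.astype(float) - image_unlit.astype(float))
    mask = diff > thresh
    if not mask.any():
        return None                         # no object between emitter and screen
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())
```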
20090257621 | Method and System for Dynamic Feature Detection - Disclosed are methods and systems for dynamic feature detection of physical features of objects in the field of view of a sensor. Dynamic feature detection substantially reduces the effects of accidental alignment of physical features with the pixel grid of a digital image by using the relative motion of objects or material in and/or through the field of view to capture and process a plurality of images that correspond to a plurality of alignments. Estimates of the position, weight, and other attributes of a feature are based on an analysis of the appearance of the feature as it moves in the field of view and appears at a plurality of pixel grid alignments. The resulting reliability and accuracy is superior to prior art static feature detection systems and methods. | 10-15-2009 |
20090257622 | METHOD FOR REMOTE SPECTRAL ANALYSIS OF GAS PLUMES - A method for reducing the effects of background radiation introduced into gaseous plume spectral data obtained by an aerial imaging sensor includes capturing spectral data of a gaseous plume with its obscured background along a first line of observation and capturing a second image of the previously obscured background along a different line of observation. The parallax shift of the plume enables the visual access needed to capture the radiometric data emanating exclusively from the background. The images are then put into correspondence on a pixel-by-pixel basis to produce a mapping. An image-processing algorithm is applied to the mapped images to reduce the effects of background radiation and derive information about the content of the plume. | 10-15-2009 |
20090262976 | POSITION-DETERMINING SYSTEM AND METHOD - A position-determining system for determining position and orientation of an object on a work surface parallel to an X-Y plane of a Cartesian coordinate system includes an image-capturing device, a processor and a recognition assistant. The image-capturing device is directed towards the work surface for capturing images of the object and sending the images to the processor. The processor processes the images captured by the image-capturing device. The recognition assistant is attached on the object. The recognition assistant includes a first recognition assistant part and a second recognition assistant part configured to be readily recognizable in an image examined by the processor. Then the processor determines position and orientation of the object via a template matching algorithm. | 10-22-2009 |
20090262977 | VISUAL TRACKING SYSTEM AND METHOD THEREOF - The present invention provides a visual tracking system and its method comprising: a sensor unit, for capturing monitored scenes continuously; an image processor unit, for detecting when a target enters into a monitored scene, and extracting its characteristics to establish at least one model, and calculating the matching scores of the models; a hybrid tracking algorithm unit, for combining the matching scores to produce optimal matching results; a visual probability data association filter, for receiving the optimal matching results to eliminate the interference and output a tracking signal; an active moving platform, for driving the platform according to the tracking signal to situate the target at the center of the image. Therefore, the visual tracking system of the present invention can help a security camera system to record the target in details and maximize the visual information of the intruding target. | 10-22-2009 |
20090262978 | Automatic Detection Of Fires On Earth's Surface And Of Atmospheric Phenomena Such As Clouds, Veils, Fog Or The Like, Using A Satellite System - A method for automatically detecting fires on Earth's surface using a satellite system is provided. The method includes acquiring multi-spectral images of the Earth at different times, using a multi-spectral satellite sensor, each multi-spectral image being a collection of single-spectral images each associated with a respective wavelength (λ), and each single-spectral image being made up of pixels each indicative of a spectral radiance (R | 10-22-2009 |
20090262979 | Determining a Material Flow Characteristic in a Structure - A volume of a patient can be mapped with a system operable to identify a plurality of locations and save a plurality of locations of a mapping instrument. The mapping instrument can include one or more electrodes that can sense a voltage that can be correlated to a three dimensional location of the electrode at the time of the sensing or measurement. Therefore, a map of a volume can be determined based upon the sensing of the plurality of points without the use of other imaging devices. An implantable medical device can then be navigated relative to the mapping data. | 10-22-2009 |
20090262980 | Method and Apparatus for Determining Tracking a Virtual Point Defined Relative to a Tracked Member - A volume of a patient can be mapped with a system operable to identify a plurality of locations and save a plurality of locations of a mapping instrument. The mapping instrument can include one or more electrodes that can sense a voltage that can be correlated to a three dimensional location of the electrode at the time of the sensing or measurement. Therefore, a map of a volume can be determined based upon the sensing of the plurality of points without the use of other imaging devices. An implantable medical device can then be navigated relative to the mapping data. | 10-22-2009 |
20090262981 | IMAGE PROCESSING APPARATUS AND METHOD THEREOF - An image processing apparatus estimates an estimated object region including an object on an input image on the basis of a stored object data, obtains a similarity distribution of the estimated object region and peripheral regions thereof by at least one classifier, and obtains an object region coordinate and a template image on the basis of the similarity distribution. | 10-22-2009 |
20090262982 | Determining a Location of a Member - A volume of a patient can be mapped with a system operable to identify a plurality of locations and save a plurality of locations of a mapping instrument. The mapping instrument can include one or more electrodes that can sense a voltage that can be correlated to a three dimensional location of the electrode at the time of the sensing or measurement. Therefore, a map of a volume can be determined based upon the sensing of the plurality of points without the use of other imaging devices. An implantable medical device can then be navigated relative to the mapping data. | 10-22-2009 |
20090262983 | Image processing based on object information - A CPU divides an image into plural regions and, for each of the regions, generates a histogram and calculates an average brightness Y_ave. The CPU determines a focus location on the image by using focus location information, sets the region at the determined location as an emphasis region, and sets the average brightness Y_ave of the emphasis region as a brightness criterion Y_std. The CPU uses the brightness criterion Y_std to determine non-usable regions. By using the regions not excluded as non-usable regions, the CPU calculates an image quality adjustment average brightness Y′_ave, i.e. the average brightness of the entire image, with a weighting W in accordance with the locations of the regions reflected thereto, and executes a brightness value correction by using the calculated image quality adjustment average brightness Y′_ave. | 10-22-2009 |
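A loose sketch of the region-wise flow described in this entry, under assumed rules: a fixed grid, exclusion of regions whose mean brightness differs from the criterion by more than a tolerance, a simple center-based weighting W, and a multiplicative brightness correction toward a target level. All parameter names and values are illustrative.

```python
import numpy as np

def adjust_brightness(image, focus_region, grid=(4, 4), tol=60, target=110.0):
    """Region-wise brightness analysis followed by a global brightness correction.

    image        : HxW grayscale image
    focus_region : flat index of the grid region containing the focus location;
                   this region supplies the brightness criterion Y_std."""
    h, w = image.shape
    gh, gw = h // grid[0], w // grid[1]
    means, centers = [], []
    for i in range(grid[0]):
        for j in range(grid[1]):
            means.append(image[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw].mean())
            centers.append((i, j))
    means = np.array(means)

    y_std = means[focus_region]                    # brightness criterion from the emphasis region
    usable = np.abs(means - y_std) <= tol          # drop regions too far from the criterion

    # Center-weighted average over the usable regions (a simple stand-in for W).
    ci, cj = (grid[0] - 1) / 2.0, (grid[1] - 1) / 2.0
    wts = np.array([1.0 / (1.0 + abs(i - ci) + abs(j - cj)) for i, j in centers])
    y_ave = np.average(means[usable], weights=wts[usable])

    # Multiplicative correction of the whole image toward the target brightness.
    return np.clip(image.astype(float) * (target / (y_ave + 1e-6)), 0, 255)
```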
20090262984 | Multiple Camera Control System - A multiple camera tracking system for interfacing with an application program running on a computer is provided. The tracking system includes two or more video cameras arranged to provide different viewpoints of a region of interest, and are operable to produce a series of video images. A processor is operable to receive the series of video images and detect objects appearing in the region of interest. The processor executes a process to generate a background data set from the video images, generate an image data set for each received video image, compare each image data set to the background data set to produce a difference map for each image data set, detect a relative position of an object of interest within each difference map, and produce an absolute position of the object of interest from the relative positions of the object of interest and map the absolute position to a position indicator associated with the application program. | 10-22-2009 |
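A minimal sketch of the per-camera difference map and the combination of relative positions into an absolute position, assuming grayscale background images and two parallel cameras with known focal length (in pixels), baseline, and principal-point column `cx`; the entry itself leaves the camera geometry and mapping generic.

```python
import numpy as np

def relative_position(frame, background, thresh=25):
    """Difference map against the background data set; the object's relative
    position is the centroid of the changed pixels (None if nothing changed)."""
    mask = np.abs(frame.astype(float) - background.astype(float)) > thresh
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

def absolute_position(rel_left, rel_right, focal_px, baseline_m, cx):
    """Combine the relative positions from two parallel cameras into an absolute
    position (lateral offset and depth) by simple stereo triangulation."""
    disparity = (rel_left[1] - cx) - (rel_right[1] - cx)
    if abs(disparity) < 1e-6:
        return None                                  # object too far or not matched
    z = focal_px * baseline_m / disparity            # depth
    x = (rel_left[1] - cx) * z / focal_px            # lateral offset
    return x, z
```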
20090268941 | VIDEO MONITOR FOR SHOPPING CART CHECKOUT - A system ensures payment for the purchase of merchandise carried through a checkout aisle on the lower tray of a shopping cart. For that purpose, the system includes a controller with an embedded program for identifying a virtual structure substantially equivalent to the physical structure of the tray. Further, the system includes a sensor that determines when a cart is positioned at the checkout aisle. The system also includes a camera for creating an image of the physical structure of the tray and transmitting the image to the controller. The controller includes a means for activating the embedded program to compare the image with the virtual structure. As a result of the comparison, the controller determines whether merchandise is on the physical structure of the tray. During the comparison, the controller removes the virtual structure from the image. | 10-29-2009 |
20090268942 | Methods and apparatus for detection of motion picture piracy for piracy prevention - A copier's camera or camcorder in a motion-picture audience region is detected by illuminating the audience region with invisible infrared light, and locating any copier's camera or camcorder within the audience region by imaging the audience region with one or more infrared-light-sensitive cameras. The image captured by the infrared-sensitive camera(s) during a performance may be correlated with information about the audience region, such as row and seat numbers. Copiers may be identified by their presence at seats where copying activity is detected, and the infrared images may be preserved as evidence of the piracy. | 10-29-2009 |
20090268943 | COMPOSITION DETERMINATION DEVICE, COMPOSITION DETERMINATION METHOD, AND PROGRAM - A composition determination device includes: a subject detection unit configured to detect a subject in an image based on acquired image data; an actual subject size detection unit configured to detect the actual size which can be viewed as being equivalent to actual measurements, for each subject detected by the subject detection unit; a subject distinguishing unit configured to distinguish relevant subjects from subjects detected by the subject detection unit, based on determination regarding whether or not the actual size detected by the actual subject size detection unit is an appropriate value corresponding to a relevant subject; and a composition determination unit configured to determine a composition with only relevant subjects, distinguished by the subject distinguishing unit, as objects. | 10-29-2009 |
20090268944 | LINE OF SIGHT DETECTING DEVICE AND METHOD - A line of sight detecting method includes estimating a face direction of an object person based on a shot face image of the object person, detecting a part of an eye outline in the face image of the object person, detecting a pupil in the face image of the object person, and estimating the direction of a line of sight of the object person based on the correlation of the pupil position in the eye outline and the face direction with respect to the direction of the line of sight, and the pupil position and the face direction of the object person. | 10-29-2009 |
20090268945 | ARCHITECTURE FOR CONTROLLING A COMPUTER USING HAND GESTURES - Architecture for implementing a perceptual user interface. The architecture comprises alternative modalities for controlling computer application programs and manipulating on-screen objects through hand gestures or a combination of hand gestures and verbal commands. The perceptual user interface system includes a tracking component that detects object characteristics of at least one of a plurality of objects within a scene, and tracks the respective object. Detection of object characteristics is based at least in part upon image comparison of a plurality of images relative to a coarse mapping of the images. A seeding component iteratively seeds the tracking component with object hypotheses based upon the presence of the object characteristics and the image comparison. A filtering component selectively removes the tracked object from the object hypotheses and/or at least one object hypothesis from the set of object hypotheses based upon predetermined removal criteria. | 10-29-2009 |
20090274339 | Behavior recognition system - A system for recognizing various human and creature motion gaits and behaviors is presented. These behaviors are defined as combinations of “gestures” identified on various parts of a body in motion. For example, the leg gestures generated when a person runs are different than when a person walks. The system described here can identify such differences and categorize these behaviors. Gestures, as previously defined, are motions generated by humans, animals, or machines. Multiple gestures on a body (or bodies) are recognized simultaneously and used in determining behaviors. If multiple bodies are tracked by the system, then overall formations and behaviors (such as military goals) can be determined. | 11-05-2009 |
20090279736 | MAGNETIC RESONANCE EYE TRACKING SYSTEMS AND METHODS - Embodiments of magnetic resonance eye tracking systems and methods are disclosed. One embodiment, among others, comprises a method that receives magnetic resonance based data and determines direction of a subject's gaze based on the data. | 11-12-2009 |
20090279737 | PROCESSING METHOD FOR CODED APERTURE SENSOR - A method of processing for a coded aperture imaging apparatus which is useful for target identification and tracking. The method uses a statistical scene model and, preferably using several frames of data, determines a likelihood of the position and/or velocity of one or more targets assumed to be in the scene. The method preferably applies a recursive Bayesian filter or Bayesian batch filter to determine a probability distribution of likely state parameters. The method acts upon the acquired data directly without requiring any processing to form an image. | 11-12-2009 |
20090279738 | Apparatus for image recognition - An image recognition apparatus includes an image recognition unit, an evaluation value calculation unit, and a motion extraction unit. The image recognition unit uses motion vectors that are generated in the course of coding image data into MPEG format data or in the course of decoding the MPEG coded data by the evaluation value calculation unit and the motion extraction unit, as well as two-dimensional DCT coefficients and encoding information such as picture types and block types, for generating the evaluation values that represent features of the image. The apparatus further includes an update unit for recognizing the object in the image based on the determination rules for a unit of macro block. The apparatus can thus accurately detect the motion of the object based on the evaluation values derived from DCT coefficients even when generation of the motion vectors is difficult. | 11-12-2009 |
20090285449 | SYSTEM FOR OPTICAL RECOGNITION OF THE POSITION AND MOVEMENT OF AN OBJECT ON A POSITIONING DEVICE - The optical recognition system determines the position and/or movement of an object ( | 11-19-2009 |
20090285450 | IMAGE-BASED SYSTEM AND METHODS FOR VEHICLE GUIDANCE AND NAVIGATION - A method of estimating position and orientation of a vehicle using image data is provided. The method includes capturing an image of a region external to the vehicle using a camera mounted to the vehicle, and identifying in the image a set of feature points of the region. The method further includes subsequently capturing another image of the region from a different orientation of the camera, and identifying in the image the same set of feature points. A pose estimation of the vehicle is generated based upon the identified set of feature points and corresponding to the region. Each of the steps is repeated with respect to a different region at least once so as to generate at least one succeeding pose estimation of the vehicle. The pose estimations are then propagated over a time interval by chaining the pose estimation and each succeeding pose estimation one with another according to a sequence in which each was generated. | 11-19-2009 |
20090290755 | System Having a Layered Architecture For Constructing a Dynamic Social Network From Image Data - A system having a layered architecture for constructing a dynamic social network from image data of actors and events. It may have a low layer for capturing raw data and identifying actors and events. The system may have a middle layer that receives actor and event information from the low layer and puts it into a two-dimensional matrix. A high layer of the system may add weighted relationship information to the matrix to form the basis for constructing a social network. The system may have a sliding window, thus making the social network dynamic. | 11-26-2009 |
20090290756 | METHODS AND APPARATUS FOR DETECTING A COMPOSITION OF AN AUDIENCE OF AN INFORMATION PRESENTING DEVICE - Methods and apparatus for detecting a composition of an audience of an information presenting device are disclosed. A disclosed example method includes: capturing at least one image of the audience; determining a number of people within the at least one image; prompting the audience to identify its members if a change in the number of people is detected based on the number of people determined to be within the at least one image; and if a number of members identified by the audience is different from the determined number of people after a predetermined number of prompts of the audience, adjusting a value to avoid excessive prompting of the audience. | 11-26-2009 |
20090296984 | System and Method for Three-Dimensional Object Reconstruction from Two-Dimensional Images - A system and method for three-dimensional (3D) acquisition and modeling of a scene using two-dimensional (2D) images are provided. The system and method provides for acquiring first and second images of a scene, applying a smoothing function to the first image to make feature points of objects, e.g., corners and edges of the objects, in the scene more visible, applying at least two feature detection functions to the first image to detect feature points of objects in the first image, combining outputs of the at least two feature detection functions to select object feature points to be tracked, applying a smoothing function to the second image, applying a tracking function on the second image to track the selected object feature points, and reconstructing a three-dimensional model of the scene from an output of the tracking function. | 12-03-2009 |
20090296985 | Efficient Multi-Hypothesis Multi-Human 3D Tracking in Crowded Scenes - System and methods are disclosed to perform multi-human 3D tracking with a plurality of cameras. At each view, a module receives each camera output and provides 2D human detection candidates. A plurality of 2D tracking modules are connected to the CNNs, each 2D tracking module managing 2D tracking independently. A 3D tracking module is connected to the 2D tracking modules to receive promising 2D tracking hypotheses. The 3D tracking module selects trajectories from the 2D tracking modules to generate 3D tracking hypotheses. | 12-03-2009 |
20090296986 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD AND PROGRAM - An image processing device includes: a tracking unit to track a predetermined point on an image as a tracking point, to correspond with an operation of a user; a display control unit to display, on the image, tracking point candidates which are greater in number than the objects moving on the image and fewer than the number of pixels of the image; and a setting unit to set the tracking point candidates as the tracking points on the next frame of the tracking unit, corresponding to an operation by a user. | 12-03-2009 |
20090296987 | ROAD LANE BOUNDARY DETECTION SYSTEM AND ROAD LANE BOUNDARY DETECTING METHOD - A road lane boundary detection system includes a detection region setting unit that sets a certain region in a road image, as a target detection region to be searched for detection of a road lane boundary, and a detecting unit that processes image data in the target detection region set by the detection region setting unit, so as to detect the road lane boundary. The detection region setting unit sets a first detection region as the target detection region if no road lane boundary is detected, and sets a second detection region as the target detection region if the road lane boundary is detected, such that the first and second detection regions are different in size from each other. | 12-03-2009 |
20090296988 | CHARACTER INPUT APPARATUS AND CHARACTER INPUT METHOD - A character input apparatus includes a liquid crystal monitor | 12-03-2009 |
20090296989 | Method for Automatic Detection and Tracking of Multiple Objects - A method for automatically detecting and tracking objects in a scene. The method acquires video frames from a video camera; extracts discriminative features from the video frames; detects changes in the extracted features using background subtraction to produce a change map; uses the change map with a hypothesis to estimate an approximate number of people, along with uncertainty, at user-specified locations; and, using the estimate, tracks people and updates the hypotheses to refine the estimate of people count and location. | 12-03-2009 |
20090304229 | OBJECT TRACKING USING COLOR HISTOGRAM AND OBJECT SIZE - A solution for monitoring an area uses color histograms and size information (e.g., heights and widths) for blob(s) identified in an image of the area and model(s) for existing object track(s) for the area. Correspondence(s) between the blob(s) and the object track(s) are determined using the color histograms and size information. Information on an object track is updated based on the type of correspondence(s). The solution can process merges, splits and occlusions of foreground objects as well as temporal and spatial fragmentations. | 12-10-2009 |
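The histogram-plus-size correspondence described above can be illustrated with a short sketch. It is a minimal example under assumed specifics that do not come from the abstract: a joint 8-bin RGB histogram, histogram intersection as the colour score, and a hand-picked weighting between colour and size.

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Joint RGB histogram of an image patch, normalized to sum to 1."""
    hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins,) * 3, range=[(0, 256)] * 3)
    return hist / max(hist.sum(), 1)

def similarity(blob, track, size_weight=0.3):
    """Blend histogram intersection with a width/height agreement term."""
    color = np.minimum(blob["hist"], track["hist"]).sum()
    size = 1.0 - min(1.0, abs(blob["w"] - track["w"]) / max(track["w"], 1)
                          + abs(blob["h"] - track["h"]) / max(track["h"], 1))
    return (1 - size_weight) * color + size_weight * size

rng = np.random.default_rng(1)
patch = rng.integers(0, 256, (40, 20, 3), dtype=np.uint8)
blob = {"hist": color_histogram(patch), "w": 20, "h": 40}
tracks = [
    {"hist": color_histogram(rng.integers(0, 256, (42, 22, 3), dtype=np.uint8)), "w": 22, "h": 42},
    {"hist": color_histogram(patch[::-1]), "w": 21, "h": 39},   # same colours, similar size
]
best = max(range(len(tracks)), key=lambda i: similarity(blob, tracks[i]))
print("blob assigned to track", best)
```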
20090304230 | Detecting and tracking targets in images based on estimated target geometry - A system for detecting and tracking targets captured in images, such as people and object targets that are captured in video images from a surveillance network. Targets can be detected by an efficient, geometry-driven approach that determines likely target configuration of the foreground imagery based on estimated geometric information of possible targets. The detected targets can be tracked using a centralized tracking system. | 12-10-2009 |
20090304231 | Method of automatically detecting and tracking successive frames in a region of interest by an electronic imaging device - A method of automatically detecting and tracking successive frames in a region of interest by an electronic imaging device includes: decomposing a frame into intensity, color and direction features according to human perceptions; filtering an input image by a Gaussian pyramid to obtain levels of pyramid representations by down sampling; calculating the features of pyramid representations; using a linear center-surround operator similar to a biological perception to expedite the calculation of a mean value of the peripheral region; using the difference of each feature between a small central region and the peripheral region as a measured value; overlaying the pyramid feature maps to obtain a conspicuity map and unify the conspicuity maps of the three features; obtaining a saliency map of the frames by linear combination; and using the saliency map for a segmentation to mark a region of interest of a frame in the large region of the conspicuity maps. | 12-10-2009 |
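The center-surround saliency computation summarized above resembles a classic conspicuity-map pipeline, so a compressed sketch is possible. The version below is an assumption-laden toy: it uses scipy's uniform_filter instead of a Gaussian pyramid, covers only intensity and two crude colour-opponency features (the direction feature is omitted), and normalizes each conspicuity map to [0, 1] before a plain linear combination.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def center_surround(feature, center=3, surround=15):
    """Difference between a small 'center' mean and a large 'surround' mean."""
    return np.abs(uniform_filter(feature, center) - uniform_filter(feature, surround))

def saliency_map(rgb):
    """Combine intensity and two crude colour-opponency features into one map."""
    r, g, b = rgb[..., 0].astype(float), rgb[..., 1].astype(float), rgb[..., 2].astype(float)
    intensity = (r + g + b) / 3.0
    rg, by = r - g, b - (r + g) / 2.0
    maps = [center_surround(f) for f in (intensity, rg, by)]
    maps = [(m - m.min()) / (m.max() - m.min() + 1e-9) for m in maps]  # normalize each conspicuity map
    return sum(maps) / len(maps)                                       # linear combination

rng = np.random.default_rng(7)
frame = rng.integers(0, 30, (120, 160, 3)).astype(np.uint8)
frame[50:70, 90:110] = [220, 30, 30]          # a red patch pops out of a dark background
sal = saliency_map(frame)
ys, xs = np.nonzero(sal > 0.6 * sal.max())    # crude segmentation of the salient region
print("salient region around:", int(ys.mean()), int(xs.mean()))
```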
20090304232 | VISUAL AXIS DIRECTION DETECTION DEVICE AND VISUAL LINE DIRECTION DETECTION METHOD - Provided is a visual axis direction detection device capable of obtaining a highly accurate visual axis direction detection result without performing a particular calibration for each of examinees. The device ( | 12-10-2009 |
20090304233 | RECOGNITION APPARATUS AND RECOGNITION METHOD - A barcode recognition apparatus includes an image interface, an image analysis unit, an image conversion unit, and a bar recognition unit. The image interface acquires an image including a barcode captured by a camera. The image analysis unit analyzes a characteristic of an input image acquired from the camera, and decides an image conversion method for the conversion from the input image into an image for recognition processing on the basis of the analysis result. The image conversion unit converts the input image into an image for recognition processing by the image conversion method decided by the image analysis unit. The bar recognition unit performs barcode recognition processing for the image for recognition processing obtained by the image conversion unit. | 12-10-2009 |
20090304234 | TRACKING POINT DETECTING DEVICE AND METHOD, PROGRAM, AND RECORDING MEDIUM - A tracking point detecting device includes: a frame decimation unit for decimating the frame interval of a moving image configured of multiple frame images continuing temporally; a first detecting unit for detecting, of two consecutive frames of the decimated moving image, a temporally-subsequent frame pixel corresponding to a predetermined pixel of a temporally-previous frame; a forward-direction detecting unit for detecting the pixel corresponding to a predetermined pixel of a temporally-previous frame of the decimated moving image, at each of the decimated frames in the same direction as time; an opposite-direction detecting unit for detecting the pixel corresponding to the detected pixel of a temporally-subsequent frame of the decimated moving image, at each of the decimated frames in the opposite direction of time; and a second detecting unit for detecting a predetermined pixel of each of the decimated frames by employing the pixel positions detected in the forward and opposite directions. | 12-10-2009 |
20090310820 | IMPROVEMENTS RELATING TO TARGET TRACKING - A method and system are disclosed for tracking a target imaged in video footage. The target may, for example, be a person moving through a crowd. The method comprises the steps of: identifying a target in a first frame; generating a population of sub-templates by sampling from a template area defined around the target position; and searching for instances of the sub-templates in a second frame so as to locate the target in the second frame. Sub-templates whose instances are not consistent with the new target position are removed from the population and replaced by newly sampled sub-templates. The method can then be repeated so as to find the target in further frames. It can be implemented in a system comprising video imaging means, such as a CCTV camera, and processing means operable to carry out the method. | 12-17-2009 |
20090310821 | DETECTION OF AN OBJECT IN AN IMAGE - The invention provides a method, system, and program product for detecting an object in a digital image. In one embodiment, the invention includes: deriving an initial object indication mask based on pixel-wise differences between a first digital image and a second digital image, at least one of which includes the object; performing an edge finding operation on both the first and second digital images, wherein the edge finding operation includes marking added edges; generating a plurality of straight linear runs of pixels across an image containing the object, wherein each of the plurality of straight linear runs starts and ends on an added edge and is contained within the initial object indication mask; and forming a final object indication mask by retaining only pixels that are part of at least one of the plurality of straight linear runs. | 12-17-2009 |
20090310822 | Feedback object detection method and system - A feedback object detection method and system. The system includes an object segmentation element, an object tracking element and an object prediction element. The object segmentation element extracts the object from an image according to prediction information of the object provided by the object prediction element. Then, the object tracking element tracks the extracted object to generate motion information of the object like moving speed and moving direction. The object prediction element generates the prediction information such as predicted position and predicted size of the object according to the motion information. The feedback of the prediction information to the object segmentation element facilitates accurately extracting foreground pixels from the image. | 12-17-2009 |
20090310823 | Object tracking method using spatial-color statistical model - An object tracking method utilizing spatial-color statistical models is used for tracking an object in different frames. A first object is extracted from a first frame and a second object is extracted from a second frame. The first object is divided into several first blocks and the second object is divided into several second blocks according to pixel parameters of each pixel within the first object and the second object. The comparison between the first blocks and the second blocks is made to find the corresponding relation therebetween. The second object is identified as the first object according to the corresponding relation. | 12-17-2009 |
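One way to read the block-division-and-comparison step above is sketched below. The 4x4 grid, the mean-RGB block descriptor, and the weighted position-plus-colour distance are all assumptions for the illustration; the abstract does not specify how blocks are formed or compared.

```python
import numpy as np

def block_descriptors(obj, grid=(4, 4)):
    """Split an object patch into a grid of blocks and describe each block
    by its (row, col) position in the grid and its mean RGB colour."""
    descs = []
    for r, row in enumerate(np.array_split(obj, grid[0], axis=0)):
        for c, block in enumerate(np.array_split(row, grid[1], axis=1)):
            descs.append(np.concatenate(([r, c], block.reshape(-1, 3).mean(axis=0))))
    return np.stack(descs)

def match_score(desc_a, desc_b, color_weight=1.0, pos_weight=0.5):
    """Average distance between corresponding blocks of two objects (lower = more alike)."""
    pos = np.linalg.norm(desc_a[:, :2] - desc_b[:, :2], axis=1)
    col = np.linalg.norm(desc_a[:, 2:] - desc_b[:, 2:], axis=1)
    return float((pos_weight * pos + color_weight * col).mean())

rng = np.random.default_rng(2)
first = rng.integers(0, 256, (64, 32, 3)).astype(np.float64)
second = np.clip(first + rng.normal(0, 5, first.shape), 0, 255)   # same object, slight change
other = rng.integers(0, 256, (64, 32, 3)).astype(np.float64)      # a different object
print(match_score(block_descriptors(first), block_descriptors(second)))   # small
print(match_score(block_descriptors(first), block_descriptors(other)))    # large
```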
20090316951 | MOBILE IMAGING DEVICE AS NAVIGATOR - Embodiments of the invention are directed to obtaining information based on directional orientation of a mobile imaging device, such as a camera phone. Visual information is gathered by the camera and used to determine a directional orientation of the camera, to search for content based on the direction, to manipulate 3D virtual images of a surrounding area, and to otherwise use the directional information. Direction and motion can be determined by analyzing a sequence of images. Distance from a current location, inputted search parameters, and other criteria can be used to expand or filter content that is tagged with such criteria. Search results with distance indicators can be overlaid on a map or a camera feed. Various content can be displayed for a current direction, or desired content, such as a business location, can be displayed only when the camera is oriented toward the desired content. | 12-24-2009 |
20090316952 | GESTURE RECOGNITION INTERFACE SYSTEM WITH A LIGHT-DIFFUSIVE SCREEN - One embodiment of the invention includes a gesture recognition interface system. The interface system may comprise at least one light source positioned to illuminate a first side of a light-diffusive screen. The interface system may also comprise at least one camera positioned on a second side of the light-diffusive screen, the second side being opposite the first side, and configured to receive a plurality of images based on a brightness contrast difference between the light-diffusive screen and an input object. The interface system may further comprise a controller configured to determine a given input gesture based on changes in relative locations of the input object in the plurality of images. The controller may further be configured to initiate a device input associated with the given input gesture. | 12-24-2009 |
20090316953 | Adaptive match metric selection for automatic target recognition - An automatic target recognition system with adaptive metric selection. The novel system includes an adaptive metric selector for selecting a match metric based on the presence or absence of a particular feature in an image and a matcher for identifying a target in the image using the selected match metric. In an illustrative embodiment, the adaptive metric selector is designed to detect a shadow in the image and select a first metric if a shadow is detected and not cut off, and select a second metric otherwise. The system may also include an automatic target cuer for detecting targets in a full-scene image and outputting one or more target chips, each chip containing one target. The adaptive metric selector adaptively selects the match metric for each chip separately, and may also adaptively select an appropriate chip size such that a shadow in the chip is not unnecessarily cut off. | 12-24-2009 |
20090316954 | INPUT APPARATUS AND IMAGE FORMING APPARATUS - An input apparatus for enabling a user to enter an instruction into a main apparatus has high durability and offers superior operability. The input apparatus includes a table device having a table with a variable size. An image of plural virtual keys that is adapted to the size of the table is projected by a projector unit onto the table. Position information about a finger of the user that is placed on the table is detected by a position detecting device contactlessly. One of the plural virtual keys that corresponds to the position of the finger of the user detected by the position detecting device is detected by a key detecting device based on information about the image of the plural virtual keys and a result of the detection made by the position detecting device. | 12-24-2009 |
20090316955 | IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM - An image processing system includes: an object detecting unit that detects a moving body object from image data of an image of a predetermined area; an object-occurrence-position detecting unit that detects an occurrence position of the object detected by the object detecting unit; and a valid-object determining unit that determines that the object detected by the object detecting unit is a valid object when the object is present in a mask area set as a non-detection target in the image of the predetermined area and the occurrence position of the object in the mask area detected by the object-occurrence-position detecting unit is outside the mask area. | 12-24-2009 |
20090316956 | Image Processing Apparatus - An image processing accuracy estimation unit estimates an image processing accuracy by calculating a size of an object by which the accuracy of measurement of the distance of the object photographed by an on-vehicle camera becomes a permissible value or less. An image post-processing area determination unit determines, in accordance with the estimated image processing accuracy, a partial area inside a detection area of the object as an image post-processing area for which an image post-processing is carried out, and divides the determined image post-processing area into a lattice of cells. An image processing unit processes the image photographed by the on-vehicle camera to detect a candidate for the object and calculates a three-dimensional position of the detected object candidate. An image post-processing unit calculates, in each individual cell inside the determined area, the probability that the detected object is present and determines the presence/absence of the object. | 12-24-2009 |
20090324008 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR PROVIDING GESTURE ANALYSIS - A method for providing gesture analysis may include analyzing image data using a skin detection model generated with respect to detecting skin of a specific user, tracking a portion of the image data correlating to a skin region, and performing a gesture recognition for the tracked portion of the image based on comparing features recognized in the skin region to stored features corresponding to a predefined gesture. An apparatus and computer program product corresponding to the method are also provided. | 12-31-2009 |
20090324009 | Method and system for the determination of object positions in a volume - A method or a system embodiment determines positional information about a moveable object to which is affixed a pattern of stripes having reference lines. A method determines image lines of stripe images of each stripe within at least two video frames, uses the image lines to prescribe planes having lines of intersection, and determines a transformation mapping reference lines to lines of intersection. Position information about the object may be derived from the transformation. A system embodiment comprises a pattern of stripes in a known fixed relationship to an object, reference lines characterizing the stripes, two or more cameras at known locations, a digital computer adapted to receive video frames from the pixel arrays of the cameras, and a program stored in the computer's memory. The program performs some or all of the method. When there are two or more moveable objects, an embodiment may further determine the position information about a first object to be transformed to a local coordinate system fixed with respect to a second object. | 12-31-2009 |
20090324010 | Neural network-controlled automatic tracking and recognizing system and method - A neural network-controlled automatic tracking and recognizing system includes a fixed field of view collection module, a full functions variable field of view collection module, a video image recognition algorithm module, a neural network control module, a suspect object track-tracking module, a database comparison and alarm judgment module, a monitored characteristic recording and rule setting module, a light monitoring and control module, a backlight module, an alarm output/display/storage module, and security monitoring sensors. The invention relates also to the operation method of the system. | 12-31-2009 |
20090324011 | METHOD OF DETECTING MOVING OBJECT - Proposed is a method of detecting a moving object, including: providing an image-set at least including a first image and a second image correlated in a time series, the first image preceding the second image; defining a detecting region and a detecting direction so as to construct a virtual gate in the first image; estimating the motion vector in a time series; comparing, by the virtual gate, the second image with the first image so as to determine a difference therebetween in terms of an object's position and motion vector; and retrieving the object to be an effective moving object upon determination of the object as lying within the detecting region defined in the virtual gate and moving in a direction substantially the same as the detecting direction. This invention presents a moving object detection method without the need to construct a background model a priori. | 12-31-2009 |
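The virtual-gate test described above can be reduced to a toy example: find where the object sits inside the gate in each of the two images, take the centroid shift as its motion, and accept it only if that motion roughly follows the gate's detecting direction. The rectangular gate, the brightness-based object mask, and the cosine test are assumptions made for the sketch.

```python
import numpy as np

def gate_motion(first, second, gate, threshold=128):
    """Centroid shift of object pixels inside a rectangular gate (y0, y1, x0, x1).
    Bright pixels stand in for the detected object in this toy example."""
    y0, y1, x0, x1 = gate
    obj_a = first[y0:y1, x0:x1] > threshold
    obj_b = second[y0:y1, x0:x1] > threshold
    if not obj_a.any() or not obj_b.any():
        return None
    ca = np.array(np.nonzero(obj_a), dtype=float).mean(axis=1)
    cb = np.array(np.nonzero(obj_b), dtype=float).mean(axis=1)
    return cb - ca                      # (dy, dx) motion observed inside the gate

def passes_gate(motion, detecting_direction, min_cosine=0.7):
    """Accept the object only if its motion roughly follows the gate's direction."""
    if motion is None or np.linalg.norm(motion) == 0:
        return False
    cos = float(np.dot(motion, detecting_direction)) / (
        np.linalg.norm(motion) * np.linalg.norm(detecting_direction))
    return cos >= min_cosine

first = np.zeros((100, 100)); first[50:60, 20:30] = 255
second = np.zeros((100, 100)); second[50:60, 28:38] = 255     # object moved +8 px in x
motion = gate_motion(first, second, gate=(40, 70, 10, 60))
print(motion, passes_gate(motion, detecting_direction=np.array([0.0, 1.0])))
```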
20090324012 | SYSTEM AND METHOD FOR CONTOUR TRACKING IN CARDIAC PHASE CONTRAST FLOW MR IMAGES - A method for tracking a contour in cardiac phase contrast flow magnetic resonance (MR) images includes estimating a global translation of a contour in a reference image in a time sequence of cardiac phase contrast flow MR images to a contour in a current image in the time sequence of images by finding a 2-dimensional translation vector that maximizes a similarity function of the contour in the reference image and the current image calculated over a bounding rectangle containing the contour in the reference image, estimating an affine transformation of the contour in the reference image to the contour in the current image, and performing a constrained local deformation of the contour in the current image. | 12-31-2009 |
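The global-translation step above (a 2D translation maximizing a similarity function over a bounding rectangle) can be sketched with an exhaustive integer search. Normalized cross-correlation as the similarity function and the small search window are assumptions; the patent does not fix either.

```python
import numpy as np

def estimate_translation(reference, current, box, search=8):
    """Find the integer (dy, dx) that maximizes normalized cross-correlation
    between the reference patch in `box` (y0, y1, x0, x1) and the current image."""
    y0, y1, x0, x1 = box
    ref = reference[y0:y1, x0:x1].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = current[y0 + dy:y1 + dy, x0 + dx:x1 + dx].astype(float)
            if cand.shape != ref.shape:
                continue                      # candidate window falls off the image
            cand = (cand - cand.mean()) / (cand.std() + 1e-9)
            score = (ref * cand).mean()       # normalized cross-correlation
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

rng = np.random.default_rng(3)
reference = rng.normal(size=(120, 120))
current = np.roll(reference, shift=(3, -5), axis=(0, 1))   # the contour region shifted
print(estimate_translation(reference, current, box=(40, 80, 40, 80)))  # -> (3, -5)
```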
20090324013 | Image processing apparatus and image processing method - An image processing apparatus, a feature point tracking method and a feature point tracking program, which enable efficient feature point tracking by taking the easiness of convergence of a displacement amount according to the image pattern into account in a hierarchical gradient method, are provided. A displacement calculating unit reads a hierarchical tier image with the smallest image size from each of a reference pyramid py | 12-31-2009 |
20090324014 | RETRIEVING SCENES FROM MOVING IMAGE DATA - A computer system, method and computer program that retrieves, from at least one piece of moving image data, at least one scene that includes moving image content to be retrieved. The computer system includes a storage unit that stores a locus of a model of the moving image to be retrieved and velocity variation of the model; a first calculation unit that calculates a first vector including the locus and the velocity variation of the model; a second calculation unit that calculates a second vector regarding the moving image content to be retrieved included in the at least one piece of moving image data; a third calculation unit that calculates a degree of similarity between the first and second vectors; and a selection unit that selects, at least one scene which includes the moving image content to be retrieved, on the basis of the degree of similarity. | 12-31-2009 |
20090324015 | EMITTER TRACKING SYSTEM - An improved emitter tracking system. In aspects of the present teachings, the presence of a desired emitter may be established by a relatively low-power emitter detection module, before images of the emitter and/or its surroundings are captured with a relatively high-power imaging module. Capturing images of the emitter may be synchronized with flashes of the emitter, to increase the signal-to-noise ratio of the captured images. | 12-31-2009 |
20090324016 | MOVING TARGET DETECTING APPARATUS, MOVING TARGET DETECTING METHOD, AND COMPUTER READABLE STORAGE MEDIUM HAVING STORED THEREIN A PROGRAM CAUSING A COMPUTER TO FUNCTION AS THE MOVING TARGET DETECTING APPARATUS - To extract a target pixel that shows a moving target in an image containing a complicated background. An image storing section | 12-31-2009 |
20090324017 | CAPTURING AND PROCESSING FACIAL MOTION DATA - Capturing and processing facial motion data includes: coupling a plurality of sensors to target points on a facial surface of an actor; capturing frame by frame images of the plurality of sensors disposed on the facial surface of the actor using at least one motion capture camera disposed on a head-mounted system; performing, in the head-mounted system, a tracking function on the frame by frame images of the plurality of sensors to accurately map the plurality of sensors for each frame; and generating, in the head-mounted system, a modeled surface representing the facial surface of the actor. | 12-31-2009 |
20090324018 | Efficient And Accurate 3D Object Tracking - A method of tracking an object in an input image stream, the method comprising iteratively applying the steps of: (a) rendering a three-dimensional object model according to a previously predicted state vector from a previous tracking loop or the state vector from an initialisation step; (b) extracting a series of point features from the rendered object; (c) localising corresponding point features in the input image stream; (d) deriving a new state vector from the point feature locations in the input image stream. | 12-31-2009 |
20100002908 | Pedestrian Tracking Method and Pedestrian Tracking Device - A pedestrian tracking method and a pedestrian tracking device with a simple structure can estimate the motion of a pedestrian in images without using color information, making it possible to achieve a robust pedestrian tracking. The pedestrian tracking device ( | 01-07-2010 |
20100002909 | Method and device for detecting in real time interactions between a user and an augmented reality scene - The invention consists in a system for detection in real time of interactions between a user and an augmented reality scene, the interactions resulting from the modification of the appearance of an object present in the image. After having created ( | 01-07-2010 |
20100002910 | Method and Apparatus for Developing Synthetic Three-Dimensional Models from Imagery - A method and apparatus for modeling an object in software are disclosed. The method includes generating a three-dimensional geometry of the object from a plurality of points obtained from a plurality of images of the object, the images having been acquired from a plurality of perspectives; and generating a three-dimensional model from the three-dimensional geometry for integration into an object recognition system. The apparatus may be a program storage medium encoded with instructions that, when executed by a computer, perform such a method or a computer programmed to perform such a method. | 01-07-2010 |
20100008539 | SYSTEMS AND METHODS FOR IMPROVED TARGET TRACKING FOR TACTICAL IMAGING - Certain embodiments provide systems and methods for target image acquisition using sensor data. The system includes at least one sensor adapted to detect an event and generate a signal based at least in part on the event. The system also includes an imager obtaining an image of a target and target area based on a target tracking and recognition algorithm. The imager is configured to trigger image acquisition based at least in part on the signal from the sensor. The imager adjusts the target tracking and recognition algorithm based at least in part on sensor data in the signal. In certain embodiments, the imager may also adjust an image acquisition threshold for obtaining an image based on the sensor data. | 01-14-2010 |
20100008540 | Method for Object Detection - A method for object detection from a visual image of a scene. The method includes: using a first order predicate logic formalism to specify a set of logical rules to encode contextual knowledge regarding the object to be detected; inserting the specified logical rules into a knowledge base; obtaining the visual image of the scene; applying specific object feature detectors to some or all pixels in the visual image of the scene to obtain responses at those locations; using the obtained responses to generate logical facts indicative of whether specific features or parts of the object are present or absent at that location in the visual image; inserting the generated logical facts into the knowledge base; and combining the logical facts with the set of logical rules to determine whether the object is present or absent at a particular location in the scene. | 01-14-2010 |
20100008541 | Method for Presenting Images to Identify Target Objects - A method presents a set of images to a viewer. The images include objects, which can be either distractor objects or target objects. A prevalence of the target objects is substantially lower than the distractor objects. Each image is segmented into portions so that each portion includes one object. The portions are then combined into a combined image. The combined image is presented to a viewer so that the target objects can be accurately and rapidly identified. The combining of the portions can be random or ordered in either the spatial or temporal domain. | 01-14-2010 |
20100008542 | Object detection method and apparatus - An object detection method and apparatus is provided. When an object pixel having a target pixel value is found while an image including an object is scanned at intervals of a preset number of pixels, whether or not each pixel around the object pixel has the target pixel value is sequentially determined, while spreading to pixels around the object pixel, to find the entire pixel region constituting the object, and position values of the found pixels are stored. This ensures that the entire pixel region of the object is simply, easily, quickly, and correctly found. | 01-14-2010 |
20100014707 | Vehicle and road sign recognition device - A vehicle and road sign recognition device each includes: image capturing means ( | 01-21-2010 |
20100014708 | TARGET RANGE-FINDING METHOD AND DEVICE - The present invention provides a target range-finding method and device. The device includes a marking portion on the target, which is set with an area or size and defined by a first and second measurement edge. An image acquisition device includes a lens and operating screen. The operating screen displays the target image captured by the image acquisition device. A measuring mark selection unit selects the position of the first and second measurement edges of the target image from the operating screen of the image acquisition device. A processing unit calculates the range of the target. The target range-finding device presents better range-finding accuracy, ease-of-operation and higher efficiency as well as improved applicability. | 01-21-2010 |
20100014709 | Super-resolving moving vehicles in an unregistered set of video frames - A method is provided for accurately determining the registration for a moving vehicle over a number of frames so that the vehicle can be super-resolved. Instead of causing artifacts in a super-resolved image, the moving vehicle can be specifically registered and super-resolved individually. This method is very accurate, as it uses a mathematical model that captures motion with a minimal number of parameters and uses all available image information to solve for those parameters. Methods are provided that implement the vehicle registration algorithm and super-resolve moving vehicles using the resulting vehicle registration. One advantage of this system is that better images of moving vehicles can be created without requiring costly new aerial surveillance equipment. | 01-21-2010 |
20100014710 | METHOD AND SYSTEM FOR TRACKING POSITIONS OF HUMAN EXTREMITIES - A method for tracking positions of human extremities is disclosed. A left image of a first extremity portion is retrieved using a first picturing device and an outline candidate position of the first extremity portion is obtained according to feature information of the left image. A right image of the first extremity portion is retrieved using a second picturing device and a depth candidate position of the first extremity portion is obtained according to depth information of the right image. Geometry relations between the outline candidate position and the depth candidate position and a second extremity portion of a second extremity position are calculated to determine whether a current extremity position of the first extremity portion is required to be updated. | 01-21-2010 |
20100021005 | Time Managing Device of a Computer System and Related Method - A time managing device of a computer system including a graphic user interface capable of displaying application windows is disclosed. The time managing device includes an image capturing device, a sight-line detecting unit and a reminding unit. The image capturing device is used for capturing a user image corresponding to a user. The sight-line detecting unit is coupled to the image capturing device and used for analyzing a user sight-line state according to the user image to generate a sight-line detection result. The reminding unit is coupled to the sight-line detecting unit and the graphic user interface, and used for performing a reminder to a predetermined application window displayed on the graphic user interface according to a predetermined time and the sight-line detection result. | 01-28-2010 |
20100021006 | OBJECT TRACKING METHOD AND SYSTEM - An object tracking method uses a system having an object identifying device and at least one video tracking device, wherein the object identifying device monitors an area to identify an object entering the area and the video tracking device wired/wirelessly connected to the object identifying device monitors the area monitored by the object identifying device. The method includes: extracting, at the object identifying device, object identification information of the object; providing, at the object identifying device, the object identification information to the video tracking device; tracking, at the video tracking device, the object to extract physical information of the object; mapping, at the video tracking device, the physical information to the object identification information to generate object information of the object; and storing, at the video tracking device, the object information in a memory of the video tracking device. | 01-28-2010 |
20100021007 | RECONSTRUCTION OF DATA PAGE FROM IMAGED DATA - The present invention relates to an electronic device ( | 01-28-2010 |
20100021008 | System and Method for Face Tracking - Improved face tracking is provided during determination of an image by an imaging device using a low power face tracking unit. In one embodiment, image data associated with a frame and one or more face detection windows from a face detection unit may be received by the face tracking unit. The face detection windows are associated with the image data of the frame. A face list may be determined based on the face detection windows and one or more faces may be selected from the face list to generate an output face list. The output face list may then be provided to a processor of an imaging device for the detection of an image based on at least one of coordinate and scale values of the one or more faces on the output face list. | 01-28-2010 |
20100021009 | METHOD FOR MOVING TARGETS TRACKING AND NUMBER COUNTING - The invention discloses a method for moving targets tracking and number counting, comprising the steps of: a). acquiring continuously the video images comprising moving targets; b). acquiring the video image of a current frame, and pre-processing the video image of the current frame; c). segmenting the target region of the processed image, and extracting the target region; d). matching the target region of the current frame obtained in step c) with that of the previous frame based on an online feature selection to establish a match tracking link; and e). determining the number of the targets corresponding to each match tracking link based on the target region tracks recorded by the match tracking link. Under normal application conditions, the invention addresses the low precision of counting results caused by adverse environments, for example when the spatial distribution of illumination is highly non-uniform, illumination changes over time are complicated, or people's postures change markedly as they pass by. | 01-28-2010 |
20100027839 | SYSTEM AND METHOD FOR TRACKING MOVEMENT OF JOINTS - A first image is obtained. At least one moving object indicated by the at least one image is selected. At least one joint that is associated with the at least one moving object is identified. At least one second image including the at least one moving object with the at least one joint is obtained and the movement of the at least one joint is tracked in a three-dimensional space. | 02-04-2010 |
20100027840 | System and method for bullet tracking and shooter localization - A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes image processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares solution for all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios. | 02-04-2010 |
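The Kalman-filtering step named in the entry above lends itself to a compact illustration. Below is a textbook constant-velocity Kalman filter over 2D streak positions; the state layout, noise covariances, and measurement model are assumptions for the sketch rather than parameters from the patent. Matching detections to tracks and the combinatorial least-squares source localization are separate steps not shown here.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],     # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],      # we only measure the streak position (x, y)
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01             # process noise
R = np.eye(2) * 1.0              # measurement noise

x = np.zeros(4)                  # initial state estimate
P = np.eye(4) * 100.0            # initial uncertainty

rng = np.random.default_rng(4)
true_velocity = np.array([2.0, -1.0])
for k in range(1, 20):
    z = k * true_velocity + rng.normal(0, 1.0, 2)   # noisy streak detection
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P

print("estimated velocity:", x[2:])    # close to (2, -1)
```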
20100027841 | METHOD AND SYSTEM FOR DETECTING A SIGNAL STRUCTURE FROM A MOVING VIDEO PLATFORM - The present invention aims at providing a method for detecting a signal structure from a moving vehicle. The method for detecting signal structure includes capturing an image from a camera mounted on the moving vehicle. The method further includes restricting a search space by predefining candidate regions in the image, extracting a set of features of the image within each candidate region and detecting the signal structure accordingly. | 02-04-2010 |
20100027842 | OBJECT DETECTION METHOD AND APPARATUS THEREOF - An object detection method and an apparatus thereof are provided. In the object detection method, a plurality of images in an image sequence is sequentially received. When a current image is received, a latest background image is established by referring to the current image and the M images previous to the current image, so as to update one of N background images, wherein M and N are positive integers. Next, color models of the current image and the background images are analyzed to determine whether a pixel in the current image belongs to a foreground object. Accordingly, the accuracy in object detection is increased by instantly updating the background images. | 02-04-2010 |
20100027843 | SURFACE UI FOR GESTURE-BASED INTERACTION - Disclosed is a unique system and method that facilitates gesture-based interaction with a user interface. The system involves an object sensing configuration that includes a sensing plane vertically or horizontally located between at least two imaging components on one side and a user on the other. The imaging components can acquire input images taken of a view of and through the sensing plane. The images can include objects which are on the sensing plane and/or in the background scene as well as the user as he interacts with the sensing plane. By processing the input images, one output image can be returned which shows the user objects that are in contact with the plane. Thus, objects located at a particular depth can be readily determined. Any other objects located beyond it can be “removed” and not seen in the output image. | 02-04-2010 |
20100027844 | MOVING OBJECT RECOGNIZING APPARATUS - Provided is a moving object recognizing apparatus capable of effectively showing reliability of result of image processing involved in moving object recognition and issuing alarms in an appropriate manner when needed. The moving object recognizing apparatus includes a data acquisition unit ( | 02-04-2010 |
20100034422 | OBJECT TRACKING USING LINEAR FEATURES - A method of tracking objects within an environment comprises acquiring sensor data related to the environment, identifying linear features within the sensor data, and determining a set of tracked linear features using the linear features identified within the sensor data and a previous set of tracked linear features, the set of tracked linear features being used to track objects within the environment. | 02-11-2010 |
20100034423 | SYSTEM AND METHOD FOR DETECTING AND TRACKING AN OBJECT OF INTEREST IN SPATIO-TEMPORAL SPACE - The present invention provides a system and method for detecting and tracking a moving object. First, robust change detection is applied to find initial candidate regions in consecutive frames. These initial detections in consecutive frames are stacked to produce space-time bands, which are extracted by a Hough transform and an entropy-minimization-based band detection algorithm. | 02-11-2010 |
20100034424 | POINTING SYSTEM FOR LASER DESIGNATOR - A system for illuminating an object of interest includes a platform and a gimbaled sensor associated with an illuminator. The gimbaled sensor provides sensor data corresponding to a sensed condition associated with an area. The gimbaled sensor is configured to be articulated with respect to the platform. A first transceiver transceives communications to and from a ground control system. The ground system includes an operator control unit allowing a user to select and transmit to the first transceiver at least one image feature corresponding to the object of interest. An optical transmitter is configured to emit a signal operable to illuminate a portion of the sensed area proximal to the object of interest. A correction subsystem is configured to determine an illuminated-portion-to-object-of-interest error and, in response to the error determination, cause the signal to illuminate the object of interest. | 02-11-2010 |
20100034425 | METHOD, APPARATUS AND SYSTEM FOR GENERATING REGIONS OF INTEREST IN VIDEO CONTENT - A method, apparatus and system for generating regions of interest in a video content include identifying the program content of received video content, categorizing the scene content of the identified program content and defining at least one region of interest in at least one of the categorized scenes by identifying at least one of a location and an object of interest in the scenes. In one embodiment of the invention, a region of interest is defined using user preference information for the identified program content and the categorized scene content. | 02-11-2010 |
20100046796 | METHOD OF RECOGNIZING A MOTION PATTERN OF AN OBJECT - A method and a motion recognition system is disclosed for recognizing a motion pattern of at least one object by means of determining relative motion blur variations around the at least one object in an image or a sequence of images. Motion blur parameters are extracted from the motion blur in the images, and based thereon the motion blur variations are determined by means of determining variations between the motion blur parameters. | 02-25-2010 |
20100046797 | METHODS AND SYSTEMS FOR AUDIENCE MONITORING - Systems and methods for audience monitoring are provided that include receiving an input including a recording or live feed of an audience composed of several persons, detecting foreground of the input, performing blob segmentation of the input, and analyzing human presence on each segmented blob by identifying at least one person, identifying a spatial distribution of at least one identified person, determining a dwell time of at least one identified person, determining a temporal distribution of at least one identified person, and determining a gaze direction of at least one identified person. Such detecting provides the ability to track individual persons present in the audience, and how long they remain in the audience. The method also provides the ability to determine gaze direction of persons in the audience, and how long one or more persons are gazing in a particular direction. | 02-25-2010 |
20100046798 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - In an image processing apparatus that performs tracking processing based on a correlation between frame images, when an object that is a tracking target is missed and a frame indicating the tracking target is set to a uniform background during tracking processing, a display of the frame may blur. An image processing apparatus is provided which detects a tracking target candidate region which has a highest correlation with a set tracking target region, calculates a difference between an evaluation value acquired in the tracking target candidate region and an evaluation value acquired in a peripheral region of the tracking target candidate region, and stops tracking if the difference is less than a threshold value. | 02-25-2010 |
20100046799 | METHODS AND SYSTEMS FOR DETECTING OBJECTS OF INTEREST IN SPATIO-TEMPORAL SIGNALS - Methods and systems detect objects of interest in a spatio-temporal signal. According to one embodiment, a system processes a digital spatio-temporal input signal containing zero or more foreground objects of interest superimposed on a background. The system comprises a foreground/background separation module, a foreground object grouping module, an object classification module, and a feedback connection. The foreground/background separation module receives the spatio-temporal input signal and, according to one or more adaptable parameters, produces foreground/background labels designating elements of the spatio-temporal input signal as either foreground or background. The foreground object grouping module is connected to the foreground/background separation module and identifies groups of selected foreground-labeled elements as foreground objects. The object classification module is connected to the foreground object grouping module and generates object-level information related to the foreground object. The object-level information adapts the one or more adaptable parameters of the foreground/background separation module, via the feedback connection. | 02-25-2010 |
20100054533 | Digital Image Processing Using Face Detection Information - A method of processing a digital image using face detection within the image achieves one or more desired image processing parameters. A group of pixels is identified that correspond to an image of a face within the digital image. Default values are determined of one or more parameters of at least some portion of the digital image. Values are adjusted of the one or more parameters within the digitally-detected image based upon an analysis of the digital image including the image of the face and the default values. | 03-04-2010 |
20100054534 | SYSTEM AND METHOD FOR INTERACTING WITH A MEDIA DEVICE USING FACES AND PALMS OF VIDEO DISPLAY VIEWERS - Systems and method which allow for user interaction with and control of televisions and other media device are disclosed. A television set is provided with a face and/or palm detection device configured to identify faces and/or palms and map them into coordinates. The mapped coordinates may be translated into data inputs which may be used to interact with applications related to the television. In some embodiments, multiple faces and/or palms may be detected and inputs may be received from each of them. The inputs received by mapping the coordinates may include inputs for interactive television programs in which viewers are asked to vote or rank some aspect of the program. | 03-04-2010 |
20100054535 | Video Object Classification - Techniques for classifying one or more objects in at least one video, wherein the at least one video comprises a plurality of frames are provided. One or more objects in the plurality of frames are tracked. A level of deformation is computed for each of the one or more tracked objects in accordance with at least one change in a plurality of histograms of oriented gradients for a corresponding tracked object. Each of the one or more tracked objects is classified in accordance with the computed level of deformation. | 03-04-2010 |
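The deformation measure described above (change in histograms of oriented gradients over a track) can be approximated in a few lines. The sketch uses a single 9-bin unsigned orientation histogram per patch and an L1 change measure, both of which are assumptions; a full HOG descriptor with cells and block normalization would be more faithful.

```python
import numpy as np

def orientation_histogram(patch, bins=9):
    """Histogram of gradient orientations (unsigned, 0-180 deg), weighted by magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(orientation, bins=bins, range=(0, 180), weights=magnitude)
    return hist / max(hist.sum(), 1e-9)

def deformation_level(patches):
    """Mean L1 change between orientation histograms of consecutive patches."""
    hists = [orientation_histogram(p) for p in patches]
    return float(np.mean([np.abs(hists[i + 1] - hists[i]).sum() for i in range(len(hists) - 1)]))

# A rigid object keeps its edge orientations; a deforming one does not.
rigid = [np.tile(np.arange(32), (32, 1)) for _ in range(5)]
deforming = [np.tile(np.arange(32), (32, 1)).T if i % 2 else np.tile(np.arange(32), (32, 1))
             for i in range(5)]   # dominant edge orientation flips every frame
print("rigid object:    ", deformation_level(rigid))
print("deforming object:", deformation_level(deforming))
```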
20100054536 | ESTIMATING A LOCATION OF AN OBJECT IN AN IMAGE - An implementation provides a method including forming a metric surface in a particle-based framework for tracking an object, the metric surface relating to a particular image in a sequence of digital images. Multiple hypotheses are formed of a location of the object in the particular image, based on the metric surface. The location of the object is estimated based on probabilities of the multiple hypotheses. | 03-04-2010 |
20100054537 | VIDEO FINGERPRINTING - A method for fingerprinting video comprising identifying motion in a video as a function of time; using the identified motion to create a motion fingerprint; identifying peaks and/or troughs in the motion fingerprint, and using these to create a reduced size points of interest motion fingerprint. Reduced size fingerprints for a plurality of known videos can be prepared and stored for later comparison with reduced size fingerprints for unknown videos, thereby providing a mechanism for identifying the unknown videos. | 03-04-2010 |
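The motion-fingerprint idea above reduces naturally to a short sketch: a per-frame motion measure, then its local peaks and troughs as the reduced "points of interest" fingerprint. Mean absolute frame difference as the motion measure and simple three-point extremum detection are assumptions for the example.

```python
import numpy as np

def motion_fingerprint(frames):
    """Motion as a function of time: mean absolute difference between consecutive frames."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

def points_of_interest(signal):
    """Indices of local peaks and troughs of the motion fingerprint."""
    left, mid, right = signal[:-2], signal[1:-1], signal[2:]
    peaks = np.where((mid > left) & (mid > right))[0] + 1
    troughs = np.where((mid < left) & (mid < right))[0] + 1
    return np.sort(np.concatenate([peaks, troughs]))

# A toy clip: mostly static frames with a burst of motion in the middle.
frames = [np.zeros((60, 80)) for _ in range(30)]
for t in range(12, 18):
    frames[t] = np.zeros((60, 80))
    frames[t][:, 4 * t:4 * t + 10] = 255      # a bright bar sweeps across the frame

fp = motion_fingerprint(frames)
poi = points_of_interest(fp)
print("reduced fingerprint:", [(int(i), round(float(fp[i]), 2)) for i in poi])
```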
20100061591 | OBJECT RECOGNITION DEVICE - An object recognition device detects a position of a vehicle based on a running path obtained by GPS, vehicle speed, steering angle, etc., and also detects the position of the vehicle based on a result of recognition of an object obtained using a captured image of a camera. The device computes a positioning accuracy in detecting the vehicle position, which accuracy generally deteriorates as the movement distance of the vehicle increases. | 03-11-2010 |
20100061592 | SYSTEM AND METHOD FOR ANALYZING THE MOVEMENT AND STRUCTURE OF AN OBJECT - A system and method for analyzing the movement and structure of an object ( | 03-11-2010 |
20100061593 | Extrapolation system for solar access determination - An extrapolation system includes acquiring a first orientation-referenced image at a first position, acquiring a second orientation-referenced image at a second position having a vertical offset from the first position, and processing the first orientation-referenced image and the second orientation-referenced image to provide an output parameter extrapolated to a third position that has an offset from the first position and the second position. | 03-11-2010 |
20100061594 | DETECTION OF MOTOR VEHICLE LIGHTS WITH A CAMERA - A method for detecting front headlights and tail lights of a motor vehicle with a colour camera sensor is presented. The colour camera sensor comprises a plurality of red pixels, i.e. image points which are only sensitive in the red spectral range, and a plurality of pixels of other colours. In a first evaluation stage, only the intensity of the red pixels in the image is analysed in order to select relevant points of light in the image. | 03-11-2010 |
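The first evaluation stage described in the entry above can be approximated as follows; this is a hedged sketch assuming OpenCV and NumPy, with an RGB image's red channel standing in for the sensor's raw red pixels, and the threshold and blob-size limits chosen purely for illustration.

```python
# Hedged sketch: select bright red points of light as headlight/tail-light candidates.
import cv2
import numpy as np

def red_light_candidates(bgr, intensity_thresh=200, min_area=4, max_area=400):
    red = bgr[:, :, 2]                                  # red channel (BGR order)
    _, mask = cv2.threshold(red, intensity_thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    candidates = []
    for i in range(1, n):                               # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:
            candidates.append(tuple(centroids[i]))      # (x, y) of a relevant point of light
    return candidates
```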
20100061595 | INVENTORY MANAGEMENT SYSTEM - The location of objects in a building is recorded in the inventory management system. The objects are moved through the building with a vehicle. The vehicle transmits wireless messages indicating actions of the vehicle, such as loading or unloading of objects. A camera captures images of an area in which the vehicle moves. Positions of the vehicle are automatically detected from the captured images. The information about locations of objects is updated using the detected positions at time points indicated by the messages. In an embodiment the actions of the vehicle are signalled with light signals and picked up via the camera. | 03-11-2010 |
20100067738 | IMAGE ANALYSIS USING A PRE-CALIBRATED PATTERN OF RADIATION - A system and method of image content analysis using a pattern generator that emits a regular and pre-calibrated pattern of non-visible electromagnetic radiation from a surface in range of a camera adapted to perceive the pattern. The camera captures images of the perceived pattern and other objects within the camera's range, and outputs image data. The image data is analyzed to determine attributes of the objects and area within the camera's range. The pattern provides a known background, which enables an improved and simplified image analysis. | 03-18-2010 |
20100067739 | Sequential Stereo Imaging for Estimating Trajectory and Monitoring Target Position - A method for determining a position of a target includes obtaining a first image of the target, obtaining a second image of the target, wherein the first and the second images have different image planes and are generated at different times, processing the first and second images to determine whether the target in the first image corresponds spatially with the target in the second image, and determining the position of the target based on a result of the act of processing. Systems and computer products for performing the method are also described. | 03-18-2010 |
20100067740 | Pedestrian Detection Device and Pedestrian Detection Method - A near-infrared night vision device to which a pedestrian detection device is applied includes a near-infrared projector, a near-infrared camera, a display and an ECU. By executing programs, the ECU constitutes a pedestrian candidate extraction portion and a determination portion. The pedestrian candidate extraction portion extracts pedestrian candidate regions from near-infrared images. The determination portion normalizes the sizes and the brightnesses of the pedestrian candidates extracted by the pedestrian candidate extraction portion, and then computes the degrees of similarity between the normalized pedestrian candidates. The determination portion determines that a pedestrian candidate having two or more other pedestrian candidates whose degree of similarity with the pedestrian candidate is greater than or equal to a predetermined value is not a pedestrian. | 03-18-2010 |
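The rejection rule in the entry above (a candidate with two or more look-alikes, such as a row of identical roadside posts, is unlikely to be a pedestrian) could be sketched as below. This assumes OpenCV and NumPy; the candidate extraction step, the similarity measure, and all thresholds are illustrative stand-ins, not the patented processing.

```python
# Hedged sketch: normalize candidate size/brightness, count similar candidates, reject repeats.
import cv2
import numpy as np

def normalize(patch, size=(32, 64)):
    p = cv2.resize(patch, size).astype(np.float32)
    return (p - p.mean()) / (p.std() + 1e-6)            # size and brightness normalization

def reject_non_pedestrians(candidate_patches, sim_thresh=0.8):
    norm = [normalize(p) for p in candidate_patches]
    keep = []
    for i, a in enumerate(norm):
        similar = 0
        for j, b in enumerate(norm):
            if i == j:
                continue
            corr = float((a * b).mean())                # correlation of normalized patches
            if corr >= sim_thresh:
                similar += 1
        if similar < 2:                                 # fewer than two look-alikes: keep
            keep.append(candidate_patches[i])
    return keep
```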
20100067741 | Real-time tracking of non-rigid objects in image sequences for which the background may be changing - A method and apparatus is disclosed for tracking an arbitrarily moving object in a sequence of images where the background may be changing. The tracking is based on visual features, such as color or texture, where regions of images (such as those which represent the object being tracked or the background) can be characterized by statistical distributions of feature values. The method improves on the prior art by incorporating a means whereby characterizations of the background can be rapidly re-learned for each successive image frame. This makes the method robust against the scene changes that occur when the image capturing device moves. It also provides robustness in difficult tracking situations, such as when the tracked object passes in front of backgrounds with which it shares similar colors or other features. Furthermore, a method is disclosed for automatically detecting and correcting certain kinds of errors which may occur when employing this or other tracking methods. | 03-18-2010 |
20100067742 | OBJECT DETECTING DEVICE, IMAGING APPARATUS, OBJECT DETECTING METHOD, AND PROGRAM - An object detecting device includes a calculating unit configured to calculate gradient intensity and gradient orientation of luminance for a plurality of regions in an image and calculate a frequency distribution of the luminance gradient intensity as to the calculated luminance gradient orientation for each of the regions, and a determining unit configured to determine whether or not an identified object is included in the image by comparing a plurality of frequency distributions calculated for each of the regions. | 03-18-2010 |
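For the calculating unit described in the entry above, a magnitude-weighted orientation histogram per region and a simple distance between two regions' distributions might be sketched as follows (assuming OpenCV and NumPy; the determining unit that decides whether the identified object is present is not reproduced here).

```python
# Hedged sketch: per-region frequency distribution of gradient intensity over orientation.
import cv2
import numpy as np

def orientation_histogram(gray_region, bins=9):
    gx = cv2.Sobel(gray_region, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray_region, cv2.CV_32F, 0, 1)
    mag = np.sqrt(gx * gx + gy * gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)             # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-6)

def histogram_distance(h1, h2):
    return float(np.abs(h1 - h2).sum())                 # L1 distance between distributions
```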
20100067743 | SYSTEM AND METHOD FOR TRACKING AN ELECTRONIC DEVICE - A system for tracking a spatially manipulated user controlling object using a camera associated with a processor. While the user spatially manipulates the controlling object, an image of the controlling object is picked up via a video camera, and the camera image is analyzed to isolate the part of the image pertaining to the controlling object for mapping the position and orientation of the device in a two-dimensional space. Robust data processing systems and a computerized method employ calibration and tracking algorithms such that minimal user intervention is required for achieving and maintaining successful tracking of the controlling object in changing backgrounds and lighting conditions. | 03-18-2010 |
20100067744 | Method and Single Laser Device for Detecting Magnifying Optical Systems - The invention comprises illuminating a scene where said magnifying optical system (OP) may occur with at least one pulse generated by first laser transmitter (E). The laser transmitter (E) and a first detector of the scene thus illuminated (D | 03-18-2010 |
20100074469 | Vehicle and road sign recognition device - The present invention includes: image capturing means ( | 03-25-2010 |
20100074470 | Combination detector and object detection method using the same - Provided are a detector and a method of detecting an object using the detector. The method includes combining a first detector and a second detector in a combination scheme to form a multi-layer combination detector, the second detector being of a type different from that of the first detector, processing a binary classification detection with respect to an inputted sample starting from an uppermost layer detector, allowing a sample of an object detected from a current layer to approach a lower layer, while rejecting a sample of a non-object detected from the current layer whereby the rejected non-object may not approach the lower layer, and outputting a sample passing through all layers as a detected object. | 03-25-2010 |
20100074471 | Gesture Processing with Low Resolution Images with High Resolution Processing for Optical Character Recognition for a Reading Machine - A portable reading machine that operates in several modes and performs image preprocessing prior to optical character recognition. The portable reading machine receives a low resolution image and a high resolution image of a scene and processes the low resolution image to recognize a user-initiated gesture using a gesturing item that indicates a command from the user to the reading machine, and the high resolution image to recognize text in the image of the scene, according to the command from the user to the machine. | 03-25-2010 |
20100074472 | SYSTEM FOR AUTOMATED SCREENING OF SECURITY CAMERAS - The present invention involves a system for automatically screening closed circuit television (CCTV) cameras for large and small scale security systems, as used for example in parking garages. The system includes six primary software elements, each of which performs a unique function within the operation of the security system to provide intelligent camera selection for operators, resulting in a marked decrease of operator fatigue in a CCTV system. Real-time image analysis of video data is performed wherein a single pass of a video frame produces a terrain map which contains parameters indicating the content of the video. Based on the parameters of the terrain map, the system is able to make decisions about which camera an operator should view based on the presence and activity of vehicles and pedestrians, furthermore, discriminating vehicle traffic from pedestrian traffic. The system is compatible with existing CCTV (closed circuit television) systems and is comprised of modular elements to facilitate integration and upgrades. | 03-25-2010 |
20100080415 | OBJECT-TRACKING SYSTEMS AND METHODS - A system and method for tracking, identifying, and labeling objects or features of interest is provided. In some embodiments, tracking is accomplished using a unique signature of the feature of interest and image stabilization techniques. According to some aspects a frame of reference using predetermined markers is defined and updated based on a change in location of the markers and/or specific signature information. Individual objects or features within the frame may also be tracked and identified. Objects may be tracked by comparing two still images, determining a change in position of an object between the still images, calculating a movement vector of the object, and using the movement vector to update the location of an image device. | 04-01-2010 |
20100080416 | EYE DETECTION SYSTEM USING A SINGLE CAMERA - A system and a method for detecting the eyes of a driver of a vehicle using a single camera. The method includes determining a set of positional parameters corresponding to a driving seat of the vehicle. The camera is positioned at a pre-determined location inside the vehicle, and a set of parameters corresponding to the camera is determined. The location of the driver's eyes is detected using the set of positional parameters, an image of the driver's face and the set of parameters corresponding to the camera. | 04-01-2010 |
20100080417 | Object-Tracking Systems and Methods - A system and method for tracking, identifying, and labeling objects or features of interest is provided. In some embodiments, tracking is accomplished using a unique signature of the feature of interest and image stabilization techniques. According to some aspects a frame of reference using predetermined markers is defined and updated based on a change in location of the markers and/or specific signature information. Individual objects or features within the frame may also be tracked and identified. Objects may be tracked by comparing two still images, determining a change in position of an object between the still images, calculating a movement vector of the object, and using the movement vector to update the location of an image device. | 04-01-2010 |
20100080418 | PORTABLE SUSPICIOUS INDIVIDUAL DETECTION APPARATUS, SUSPICIOUS INDIVIDUAL DETECTION METHOD, AND COMPUTER-READABLE MEDIUM - Cameras provided to glasses successively take subject images around a wearer of the glasses. The subject images are searched to detect human face regions, and if human face regions are detected, feature quantities of each face are calculated to detect the face direction and the eye direction, and an eye-gaze direction is detected based on them. Whether or not each person with the detected human face region is looking at the cameras is determined from the eye-gaze direction, and if there is a human face looking at the cameras for a given period of time or more, a person with the human face is determined as being a suspicious individual, and a warning message indicating the detection of the suspicious individual is output to the wearer. Furthermore, the detection information and images can be provided to a device in a remote location. | 04-01-2010 |
20100086174 | METHOD OF AND APPARATUS FOR PRODUCING ROAD INFORMATION - An embodiment of the present invention discloses a method of producing road information for use in a map database including: acquiring a source image from an image sequence obtained by means of a terrestrial based camera mounted on a moving vehicle; determining a road color sample from pixels associated with a predefined area in the source image representative of the road surface in front of or behind the moving vehicle; generating a road surface image from the source image in dependence of the road color sample; and, producing road information in dependence of the road surface image and position and orientation data associated with the source image. | 04-08-2010 |
20100086175 | Image Processing Apparatus, Image Processing Method, Program, and Recording Medium - An image processing apparatus includes a detector, a setting unit, and an image generator. The detector detects a target object image region from a first image. When one or more predetermined parameters are applicable to a target object within the region detected by the detector, the setting unit sets the relevant target object image region as a first region. The image generator then generates a second image by applying predetermined processing to either the image portion within the first region, or to the image portions in a second region containing image portions within the first image that are not contained in the first region. | 04-08-2010 |
20100086176 | Learning Apparatus and Method, Recognition Apparatus and Method, Program, and Recording Medium - A learning apparatus includes an image generator, a feature point extractor, a feature value calculator, and a classifier generator. The image generator generates, from an input image, images having differing scale coefficients. The feature point extractor extracts feature points from each image generated by the image generator. The feature value calculator calculates feature values for the feature points by filtering the feature points using a predetermined filter. The classifier generator generates one or more classifiers for detecting a predetermined target object from an image by means of statistical learning using the feature values. | 04-08-2010 |
20100086177 | IMAGE PROCESSING APPARATUS AND METHOD - An image processing apparatus capable of suppressing an increase in the circuit size of buffers between data-processing circuits, thereby enabling an associated component thereof to be implemented in hardware. A position control unit sequentially shifts a position of a sub window image by a predetermined skip amount in a predetermined scanning direction for scanning, and further repeats the scanning for skipped sub window images after shifting a start position of the scanning, thereby determining the positions of all sub window images, each as an area from which a face image is to be detected. | 04-08-2010 |
20100092030 | SYSTEM AND METHOD FOR COUNTING PEOPLE NEAR EXTERNAL WINDOWED DOORS - A system for counting objects, such as people, is provided having a camera ( | 04-15-2010 |
20100092031 | SELECTIVE AND ADAPTIVE ILLUMINATION OF A TARGET - There are provided a method and a system for illuminating one or more targets in a scene. An image of the scene is acquired using a sensing device that may use an infrared sensor, for example. From the image, an illumination controller determines an illumination figure, such that the illumination figure adaptively matches at least a position of the target in the image. The target is then selectively illuminated using an illumination device, according to the illumination figure. | 04-15-2010 |
20100092032 | METHODS AND APPARATUS TO FACILITATE OPERATIONS IN IMAGE BASED SYSTEMS - Vision based systems may select actions based on analysis of images to redistribute objects. Actions may include action type, action axis and/or action direction. Analysis may determine whether an object is accessible by a robot, whether an upper surface of a collection of objects meets a defined criterion and/or whether clusters of objects preclude access. | 04-15-2010 |
20100092033 | METHOD FOR TARGET GEO-REFERENCING USING VIDEO ANALYTICS - A method to geo-reference a target between subsystems of a targeting system is provided. The method includes receiving a target image formed at a sender subsystem location, generating target descriptors for a first selected portion of the target image, sending target location information and the target descriptors from a sender subsystem of the targeting system to a receiver subsystem of the targeting system, pointing an optical axis of a camera of the receiver subsystem at the target based on the target location information received from the sending subsystem, forming a target image at a receiver subsystem location when the optical axis is pointed at the target, and identifying a second selected portion of the target image formed at the receiver subsystem location that is correlated to the first selected portion of the target image formed at the sender subsystem location. | 04-15-2010 |
20100092034 | METHOD AND SYSTEM FOR POSITION DETERMINATION USING IMAGE DEFORMATION - A method and system of position determination using image deformation is provided. One implementation involves receiving an image of a visual tag, the image captured by an image capturing device, wherein the visual tag has a predefined position associated therewith; based on the image determining a distance of the image capturing device from the visual tag, and determining an angular position of the image capturing device relative to the visual tag; and determining position of the image capturing device based on said distance and said angular position. | 04-15-2010 |
20100092035 | AUTOMATIC RECOGNITION APPARATUS - The invention concerns an apparatus for automatic recognition of objects, which includes a device for capturing images of one object, or of a plurality of objects, which are to be recognized. The objects to be evaluated are manually introduced into the field of view of the camera. The apparatus possesses an image recognition device whereby, from an image of an object within the field of view of the camera, an identification signal representing the object is generated. The data acquired therefrom can serve, for example, a weighing scale equipped with the automatic recognition apparatus. | 04-15-2010 |
20100092036 | METHOD AND APPARATUS FOR DETECTING TARGETS THROUGH TEMPORAL SCENE CHANGES - A system and method for detecting a target in imagery is disclosed. At least one image region exhibiting changes in at least intensity is detected from among at least a pair of aligned images. A distribution of changes in at least intensity inside the at least one image region is determined using an unsupervised learning method. The distribution of changes in at least intensity is used to identify pixels experiencing changes of interest. At least one target from the identified pixels is identified using a supervised learning method. The distribution of changes in at least intensity is a joint hue and intensity histogram when the pair of images pertain to color imagery. The distribution of changes in at least intensity is an intensity histogram when the pair of images pertain to grey-level imagery. | 04-15-2010 |
20100092037 | METHOD AND SYSTEM FOR VIDEO INDEXING AND VIDEO SYNOPSIS - In a system and method for generating a synopsis video from a source video, at least three different source objects are selected according to one or more defined constraints, each source object being a connected subset of image points from at least three different frames of the source video. One or more synopsis objects are sampled from each selected source object by temporal sampling using image points derived from specified time periods. For each synopsis object a respective time for starting its display in the synopsis video is determined, and for each synopsis object and each frame a respective color transformation for displaying the synopsis object may be determined. The synopsis video is displayed by displaying selected synopsis objects at their respective time and color transformation, such that in the synopsis video at least three points that each derive from different respective times in the source video are displayed simultaneously. | 04-15-2010 |
20100092038 | SYSTEM AND METHOD OF DETECTING OBJECTS - The present invention is a system and a method of segmenting and detecting objects which can be approximated by planar or nearly planar surfaces, in order to detect one or more objects that pose threats or potential threats. The method includes capturing imagery of the scene proximate a platform, producing a depth map from the imagery and tessellating the depth map into a number of patches. The method also includes classifying the plurality of patches as threat patches and projecting the threat patches into a pre-generated vertical support histogram to facilitate selection of the projected threat patches having a score value within a sufficiency criterion. The method further includes grouping the selected patches having the score value using a plane fit to obtain a region of interest and processing the region of interest to detect said object. | 04-15-2010 |
20100092039 | Digital Image Processing Using Face Detection Information - A method of processing a digital image using face detection within the image achieves one or more desired image processing parameters. A group of pixels is identified that correspond to an image of a face within the digital image. Default values are determined of one or more parameters of at least some portion of the digital image. Values are adjusted of the one or more parameters within the digitally-detected image based upon an analysis of the digital image including the image of the face and the default values. | 04-15-2010 |
20100098292 | Image Detecting Method and System Thereof - An image detecting method and a system thereof are provided. The image detecting method includes the following steps. An original image is captured. A moving-object image of the original image is created. An edge-straight-line image of the original image is created, wherein the edge-straight-line image comprises a plurality of edge-straight-lines. Whether the original image has a mechanical moving-object image is detected according to the length, the parallelism and the gap of the part of the edge-straight-lines corresponding to the moving-object image. | 04-22-2010 |
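One way to approximate the line-based cue in the entry above is sketched below, assuming OpenCV and NumPy: straight edge lines are extracted inside the moving-object mask and checked for long, near-parallel pairs. The gap criterion mentioned in the abstract is omitted, and the thresholds are illustrative, not the patented values.

```python
# Hedged sketch: long, near-parallel edge lines inside a moving-object mask as a
# cue that the moving object is mechanical (rigid, man-made).
import cv2
import numpy as np

def mechanical_cue(gray, moving_mask, min_len=60, angle_tol_deg=5.0):
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.bitwise_and(edges, edges, mask=moving_mask)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=min_len, maxLineGap=10)
    if lines is None:
        return False
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0
              for x1, y1, x2, y2 in lines[:, 0]]
    # any two long lines with nearly identical orientation count as a parallel pair
    for i in range(len(angles)):
        for j in range(i + 1, len(angles)):
            if abs(angles[i] - angles[j]) < angle_tol_deg:
                return True
    return False
```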
20100098293 | Structure and Motion with Stereo Using Lines - A system and method are disclosed for estimating camera motion and structure reconstruction of a scene using lines. The system includes a line detection module, a line correspondence module, a temporal line tracking module and structure and motion module. The line detection module is configured to detect lines in visual input data comprising a plurality of image frames. The line correspondence module is configured to find line correspondence between detected lines in the visual input data. The temporal line tracking module is configured to track the detected lines temporally across the plurality of the image frames. The structure and motion module is configured to estimate the camera motion using the detected lines in the visual input data and to reconstruct three-dimensional lines from the estimated camera motion. | 04-22-2010 |
20100098294 | METHOD AND APPARATUS FOR DETECTING LANE - A method and an apparatus for detecting a lane are disclosed. The lane detecting apparatus includes: a region of ID setup setting a region of ID including a road region of a current lane in an acquired image; a road sign verifier verifying existence of a road sign within the set region of ID; an ROI setup calculating a difference value between a lane prediction result and previous lane information when there exists a road sign and setting an ROI based on the calculated difference value; and a lane detector detecting a lane by extracting lane markings based on the set ROI. Accordingly, a lane can be more accurately detected even in a road environment including a road sign by removing the road sign to extract only necessary lane markings. | 04-22-2010 |
20100098295 | CLEAR PATH DETECTION THROUGH ROAD MODELING - A method for detecting a clear path of travel for a vehicle including fusion of clear path detection by image analysis and road geometry data describing road geometry includes monitoring an image from a camera device on the vehicle, analyzing the image through clear path detection analysis to determine a clear path of travel within the image, monitoring the road geometry data, analyzing the road geometry data to determine an impact of the data to the clear path, modifying the clear path based upon the analysis of the road geometry data, and utilizing the clear path in navigation of the vehicle. | 04-22-2010 |
20100104134 | Interaction Using Touch and Non-Touch Gestures - A computer interface may use touch- and non-touch-based gesture detection systems to detect touch and non-touch gestures on a computing device. The systems may each capture an image, and interpret the image as corresponding to a predetermined gesture. The systems may also generate similarity values to indicate the strength of a match between a captured image and corresponding gesture, and the system may combine gesture identifications from both touch- and non-touch-based gesture identification systems to ultimately determine the gesture. A threshold comparison algorithm may be used to apply different thresholds for different gesture detection systems and gesture types. | 04-29-2010 |
20100104135 | MARKER GENERATING AND MARKER DETECTING SYSTEM, METHOD AND PROGRAM - A marker generating system is characterized in having a special feature extracting element that extracts a portion, as a special feature, including a distinctive pattern in a video image not including a marker; a unique special feature selecting element that, based on the extracted special feature, selects a special feature of an image, as a unique special feature, that does not appear on the video image; and a marker generating element that generates a marker based on the unique special feature. | 04-29-2010 |
20100104136 | METHOD AND APPARATUS FOR DETECTING THE PLACEMENT OF A GOLF BALL FOR A LAUNCH MONITOR - A novel method and apparatus for detecting the placement of a golf ball for a launch monitor is disclosed. The method comprises capturing an image of a scan zone that is adjacent to the launch monitor and in the field of view of the launch monitor's image sensor, analyzing the scan zone image for the placement of an object, and determining if the object is likely the golf ball. An apparatus is also disclosed that implements the golf ball detection method. | 04-29-2010 |
20100119109 | MULTI-CORE MULTI-THREAD BASED KANADE-LUCAS-TOMASI FEATURE TRACKING METHOD AND APPARATUS - A multi-core multi-thread based Kanade-Lucas-Tomasi (KLT) feature tracking method includes subdividing an input image into regions and allocating a core to each region; extracting KLT features for each region in parallel and in real time; and tracking the extracted features in the input image. Said extracting the features is carried out based on single-region/multi-thread/single-core architecture, while said tracking the features is carried out based on multi-feature/multi-thread/single-core architecture. | 05-13-2010 |
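The region-parallel extraction step in the entry above could be sketched as below, assuming OpenCV and NumPy; Python threads stand in for the per-region cores of the patent, and the feature counts and quality parameters are illustrative.

```python
# Hedged sketch: split the frame into regions, extract good features per region in
# parallel, then track all of them with pyramidal Lucas-Kanade (KLT-style tracking).
import cv2
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def features_in_region(gray, x0, y0, w, h, max_corners=100):
    pts = cv2.goodFeaturesToTrack(gray[y0:y0+h, x0:x0+w],
                                  maxCorners=max_corners, qualityLevel=0.01, minDistance=5)
    if pts is None:
        return np.empty((0, 1, 2), np.float32)
    pts[:, 0, 0] += x0                                   # back to full-image coordinates
    pts[:, 0, 1] += y0
    return pts

def extract_parallel(gray, rows=2, cols=2):
    h, w = gray.shape
    rh, rw = h // rows, w // cols
    jobs = [(gray, c * rw, r * rh, rw, rh) for r in range(rows) for c in range(cols)]
    with ThreadPoolExecutor(max_workers=rows * cols) as ex:
        parts = list(ex.map(lambda a: features_in_region(*a), jobs))
    return np.concatenate(parts, axis=0)

def track(prev_gray, next_gray, pts):
    if len(pts) == 0:
        return pts
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    return nxt[status.ravel() == 1]
```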
20100119110 | IMAGE DISPLAY DEVICE, COMPUTER READABLE STORAGE MEDIUM STORING IMAGE PROCESSING PROGRAM, AND IMAGE PROCESSING METHOD - An image processing apparatus includes an area dividing unit that divides an image obtained by capturing inside of a body lumen into one or more areas by using a value of a specific wavelength component that is specified in accordance with a degree of absorption or scattering in vivo from a plurality of wavelength components included in the image or wavelength components obtained by conversion of the plurality of wavelength components; and a target-of-interest site specifying unit that specifies a target-of-interest site in the area by using a discriminant criterion in accordance with an area obtained by the division. | 05-13-2010 |
20100119111 | TIME EXPANSION FOR DISPLAYING PATH INFORMATION - Embodiments of the present invention provide systems and methods for displaying sequential information representing a path. The sequential information can include a number of tokens representing a path. A representation of the tokens and path of the sequential information can be displayed. An instruction to adjust the representation of the path of the sequential information can be received. For example, the instruction can comprise a user instruction, including but not limited to a user manipulation of a slider control of a user interface through which the representation of the sequence is displayed. The displayed representation of the path of the sequential information can be updated based on and corresponding to the instruction. For example, the user can click and drag or otherwise manipulate the slider control, and the displayed representation of the path can be expanded and/or contracted based on the user's movement of the slider control. | 05-13-2010 |
20100119112 | GRAPHICAL REPRESENTATIONS FOR AGGREGATED PATHS - Techniques for displaying path-related information. Techniques are provided for generating and displaying graphical representations for a path. For example, radial histograms, radial vector plots, and other graphical representations may be rendered for multiple paths aggregated together. | 05-13-2010 |
20100119113 | METHOD AND APPARATUS FOR DETECTING OBJECTS - A method for detecting an object on an image representable by picture elements includes: “determining first and second adaptive thresholds for picture elements of the image, depending on an average intensity in a region around the respective picture element”, “determining partial objects of picture elements of a first type that are obtained based on a comparison with the first adaptive threshold”, “determining picture elements of a second type that are obtained based on a comparison with the second adaptive threshold” and “combining a first and a second one of the partial objects to an extended partial object by picture elements of the second type, when a minimum distance exists between the first and the second of the partial objects, wherein the object to be detected can be described by a sum of the partial objects of picture elements of the first type and/or the obtained extended partial objects”. | 05-13-2010 |
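The two adaptive thresholds in the entry above can be illustrated with the short sketch below, assuming OpenCV and NumPy: each threshold is derived from the average intensity around the pixel, yielding "first-type" (strong) and "second-type" (weak) picture elements. The step that bridges nearby partial objects with second-type pixels is not shown, and the window size and factors are illustrative.

```python
# Hedged sketch: two adaptive thresholds from the local mean intensity.
import cv2
import numpy as np

def two_level_adaptive(gray, win=31, k_strong=1.3, k_weak=1.1):
    """Return masks of first-type (strong) and second-type (weak) picture elements."""
    g = gray.astype(np.float32)
    local_mean = cv2.boxFilter(g, -1, (win, win))       # average intensity around each pixel
    strong = g > k_strong * local_mean                  # comparison with first adaptive threshold
    weak = g > k_weak * local_mean                      # comparison with second adaptive threshold
    return strong, weak
```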
20100124356 | DETECTING OBJECTS CROSSING A VIRTUAL BOUNDARY LINE - An approach that detects objects crossing a virtual boundary line is provided. Specifically, an object detection tool provides this capability. The object detection tool comprises a boundary component configured to define a virtual boundary line in a video region of interest, and establish a set of ground patch regions surrounding the virtual boundary line. The object detection tool further comprises an extraction component configured to extract a set of attributes from each of the set of ground patch regions, and update a ground patch history model with the set of attributes from each of the set of ground patch regions. An analysis component is configured to analyze the ground patch history model to detect whether an object captured in at least one of the set of ground patch regions is crossing the virtual boundary line in the video region of interest. | 05-20-2010 |
20100124357 | SYSTEM AND METHOD FOR MODEL BASED PEOPLE COUNTING - An approach that allows for model based people counting is provided. In one embodiment, there is a generating tool configured to generate a set of person-shape models based on results of a cumulative training process; a detecting tool configured to detect persons in a camera field-of-view by using the set of person-shape models, and a counting tool configured to track detected persons upon crossing by the detected persons of a previously established virtual boundary. | 05-20-2010 |
20100124358 | METHOD FOR TRACKING MOVING OBJECT - A method for tracking a moving object is provided. The method detects the moving object in a plurality of continuous images so as to obtain space information of the moving object in each of the images. In addition, appearance features of the moving object in each of the images are captured to build an appearance model. Finally, the space information and the appearance model are combined to track a moving path of the moving object in the images. Accordingly, the present invention is able to keep tracking the moving object even if the moving object leaves the monitoring frame and returns again, so as to assist the supervisor in finding abnormal acts and making follow-up responses. | 05-20-2010 |
20100124359 | METHOD AND SYSTEM FOR AUTOMATIC DETECTION OF A CLASS OF OBJECTS - An apparatus and method for providing automatic threat detection using passive millimeter wave detection and image processing analysis. | 05-20-2010 |
20100124360 | METHOD AND APPARATUS FOR RECORDING EVENTS IN VIRTUAL WORLDS - A method and an apparatus for recording an event in a virtual world. The method includes acquiring camera view regions of avatars joining the event; identifying one or more key avatars and/or key objects based on information about the targets in the camera view regions of the avatars; setting one or more recorders for the identified one or more key avatars and/or key objects for recording the event such that the one or more key avatars and/or key objects are located in the camera view regions of the one or more recorders. The apparatus includes devices configured to perform the steps of the method. | 05-20-2010 |
20100128926 | ITERATIVE MOTION SEGMENTATION - An image processing device which simultaneously secures and extracts a background image, at least two object images, a shape of each object image and motion of each object image, from among plural images, the image processing device including an image input unit ( | 05-27-2010 |
20100128927 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - By a method such as foreground extraction or facial extraction, the area of a target object is detected from an input image, and a feature amount such as the center of gravity, size, and inclination is acquired. Using the value of a temporarily-set internal parameter, edge image generation, particle generation, and transition are carried out, and a contour is estimated by obtaining the probability density distribution by observing the likelihood. A feature amount obtained from the estimated contour is compared with a feature amount of the area of the target object; when the degree of matching between the two is smaller than a reference value, the value for the temporary setting is determined not to be appropriate and the temporary setting is reset. When the degree of matching is larger than the reference value, the value of the parameter is determined to be the final value. | 05-27-2010 |
20100128928 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing apparatus including a dynamic body detecting unit for detecting a dynamic body contained in a moving image, a dynamic body region setting unit for, during a predetermined time from the time point at which the dynamic body is detected by the dynamic body detecting unit, setting a region containing the dynamic body at the detection time point as a dynamic body region, and a fluctuation removal processing unit for performing a fluctuation removal process on a region other than the dynamic body region set by the dynamic body region setting unit. | 05-27-2010 |
20100128929 | IMAGE PROCESSING APPARATUS AND METHOD FOR TRACKING A LOCATION OF A TARGET SUBJECT - A digital image processing apparatus has a tracking function for tracking a location variation of a set tracking area on a plurality of frame images. The digital image processing apparatus includes a similarity calculation unit that calculates a similarity by varying a location of a template on one frame image. The similarity calculation unit calculates a second direction similarity by fixing a first direction location of the template in a first direction on the one frame image and by varying a second direction location of the template in a second direction which is perpendicular to the first direction, and then calculates a first direction similarity by fixing the second direction location of the template at a location where the second direction similarity is the highest and by varying the first direction location of the template in the first direction on the one frame image. | 05-27-2010 |
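The axis-separated search in the entry above is easy to illustrate; the sketch below (NumPy only) fixes the template's first-direction location while sliding it in the second direction, then slides it in the first direction at the best second-direction location. SAD is used as a stand-in for whatever similarity the apparatus actually computes, and the search range is illustrative.

```python
# Hedged sketch: two-pass (per-axis) template search for tracking a target location.
import numpy as np

def sad(frame, tmpl, x, y):
    h, w = tmpl.shape
    return float(np.abs(frame[y:y+h, x:x+w].astype(np.float32) - tmpl).sum())

def track_two_pass(frame, tmpl, x0, y0, search=16):
    h, w = tmpl.shape
    ys = [y for y in range(y0 - search, y0 + search + 1) if 0 <= y <= frame.shape[0] - h]
    best_y = min(ys, key=lambda y: sad(frame, tmpl, x0, y))       # vary second direction, x fixed
    xs = [x for x in range(x0 - search, x0 + search + 1) if 0 <= x <= frame.shape[1] - w]
    best_x = min(xs, key=lambda x: sad(frame, tmpl, x, best_y))   # vary first direction at best y
    return best_x, best_y
```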
20100128930 | DETECTION OF ABANDONED AND VANISHED OBJECTS - Disclosed herein are a method and system for classifying a detected region of change of a video frame as one of an abandoned object event and an object removal event, wherein a plurality of boundary blocks define a boundary of said region of change. For each one of a set of said boundary blocks ( | 05-27-2010 |
20100135527 | Image recognition algorithm, method of identifying a target image using same, and method of selecting data for transmission to a portable electronic device - An image recognition algorithm includes a keypoints-based comparison and a region-based color comparison. A method of identifying a target image using the algorithm includes: receiving an input at a processing device, the input including data related to the target image; performing a retrieving step including retrieving an image from an image database, and, until the image is either accepted or rejected, designating the image as a candidate image; performing an image recognition step including using the processing device to perform an image recognition algorithm on the target and candidate images in order to obtain an image recognition algorithm output; and performing a comparison step including: if the image recognition algorithm output is within a pre-selected range, accepting the candidate image as the target image; and if the image recognition algorithm output is not within the pre-selected range, rejecting the candidate image and repeating the retrieving, image recognition, and comparison steps. | 06-03-2010 |
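The two comparisons named in the entry above can be combined as in the sketch below, assuming OpenCV and NumPy: ORB keypoint matching stands in for the keypoints-based comparison and a color-histogram correlation for the region-based color comparison, merged into a single acceptance score. The weights, thresholds, and the simple weighted sum are illustrative simplifications, not the patented algorithm.

```python
# Hedged sketch: keypoint matching plus region color comparison for candidate acceptance.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def keypoint_score(img_a, img_b):
    _, da = orb.detectAndCompute(img_a, None)
    _, db = orb.detectAndCompute(img_b, None)
    if da is None or db is None:
        return 0.0
    good = [m for m in bf.match(da, db) if m.distance < 40]
    return len(good) / max(len(da), 1)

def color_score(img_a, img_b):
    ha = cv2.calcHist([img_a], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    hb = cv2.calcHist([img_b], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    cv2.normalize(ha, ha)
    cv2.normalize(hb, hb)
    return float(cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL))

def accept_candidate(target_bgr, candidate_bgr, kp_w=0.6, col_w=0.4, thresh=0.5):
    score = kp_w * keypoint_score(target_bgr, candidate_bgr) + col_w * color_score(target_bgr, candidate_bgr)
    return score >= thresh          # within the pre-selected range: accept as the target image
```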
20100135528 | ANALYZING REPETITIVE SEQUENTIAL EVENTS - Techniques for analyzing one or more sequential events performed by a human actor to evaluate efficiency of the human actor are provided. The techniques include identifying one or more segments in a video sequence as one or more components of one or more sequential events performed by a human actor, integrating the one or more components into one or more sequential events by incorporating a spatiotemporal model and one or more event detectors, and analyzing the one or more sequential events to analyze behavior of the human actor. | 06-03-2010 |
20100135529 | Systems and methods for tracking images - Image tracking as described herein can include: segmenting a first image into regions; determining an overlap of intensity distributions in the regions of the first image, and segmenting a second image into regions such that an overlap of intensity distributions in the regions of the second image is substantially similar to the overlap of intensity distributions in the regions of the first image. In certain embodiments, images can depict a heart at different points in time and the tracked regions can be the left ventricle cavity and the myocardium. In such embodiments, segmenting the second image can include generating first and second curves that track the endocardium and epicardium boundaries, and the curves can be generated by minimizing functions containing a coefficient based on the determined overlap of intensity distributions in the regions of the first image. | 06-03-2010 |
20100135530 | METHODS AND SYSTEMS FOR CREATING A HIERARCHICAL APPEARANCE MODEL - A method for creating an appearance model of an object includes receiving an image of the object and creating a hierarchical appearance model of the object from the image of the object. The hierarchical appearance model has a plurality of layers, each layer including one or more nodes. Nodes in each layer contain information of the object with a corresponding level of detail. Nodes in different layers of the hierarchical appearance model correspond to different levels of detail. | 06-03-2010 |
20100135531 | Position Alignment Method, Position Alignment Device, and Program - A position alignment method, a position alignment device, and a program in which processing load can be reduced are proposed. A group of some points in a first set of points extracted from an object appearing in one image and a group of some points in a second set of points extracted from an object appearing in another image are used as a reference, and the second set of points is aligned with respect to the first set of points. Thereafter, all the points in the first set of points and all the points in the aligned second set of points are used as a reference, and the second set of points is aligned with respect to the first set of points. | 06-03-2010 |
20100135532 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM FOR STORING PROGRAM - An image processing apparatus comprises an image capture unit configured to capture an image, a characteristic part detector configured to detect a characteristic part of a face from the image captured by the image capture unit, an outline generator configured to generate a pseudo outline of the face based on positions of the characteristic part detected by the characteristic part detector and a correction unit configured to correct the image based on the pseudo outline generated by the outline generator. | 06-03-2010 |
20100142758 | Method for Providing Photographed Image-Related Information to User, and Mobile System Therefor - A system for providing a mobile user with object-related information about an object visible to the user, the system including a camera directable toward the object, a local interest points and semi global geometry (LIPSGG) extraction processor, and a remote LIPSGG identifier, the camera acquiring an image of at least a portion of the object, the LIPSGG extraction processor being coupled with the camera, the LIPSGG extraction processor extracting an LIPSGG model of the object from the image, the remote LIPSGG identifier being coupled with the LIPSGG extraction processor via a network, the remote LIPSGG identifier receiving the LIPSGG model from the LIPSGG extraction processor via the network, the remote LIPSGG identifier identifying the object according to the LIPSGG model, the remote LIPSGG identifier retrieving the object related information, and the remote LIPSGG identifier providing the object related information to the mobile user operating the camera. | 06-10-2010 |
20100150399 | APPARATUS AND METHOD FOR OPTICAL GESTURE RECOGNITION - An optical gesture recognition system is shown having a first light source and a first optical receiver configured to receive reflected light from an object when the first light source is activated and output a first measured reflectance value corresponding to an amplitude of the reflected light. A processor is configured to receive the first measured reflectance value and to compare the first measured reflectance value at first and second points in time to track motion of the object and identify a gesture of the object corresponding to the tracked motion of the object. | 06-17-2010 |
20100150400 | INFORMATION PROCESSOR, INFORMATION PROCESSING METHOD, AND COMPUTER READABLE MEDIUM - A first movement control section sequentially moves a first image to multiple first positions. A first comparison section compares the moved first image with a second image. A target first position selection section selects a target first position based on the result of said comparison. After the target first position is selected, the second movement control section sequentially moves the first image to multiple second positions located in the periphery of the target first position. The second comparison section compares the moved first image with the second image. A target second position selection section selects a target second position based on the result of said comparison. A second position alignment execution section performs geometric transformation based on the difference between the position of the first image and the target second position and aligns the positions of the first and second images. | 06-17-2010 |
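The coarse-then-fine alignment in the entry above maps naturally to a two-stage grid search; the sketch below (NumPy only) handles pure translation, evaluates a coarse grid of candidate offsets, then refines on a dense grid in the periphery of the winning position. The mismatch measure and grid spacings are illustrative stand-ins for the comparison sections described in the abstract.

```python
# Hedged sketch: coarse grid search for the target first position, then a fine search
# around it to find the target second position (translation-only alignment).
import numpy as np

def mismatch(a, b, dx, dy):
    h, w = a.shape
    x0, x1 = max(0, dx), min(w, w + dx)
    y0, y1 = max(0, dy), min(h, h + dy)
    pa = a[y0:y1, x0:x1].astype(np.float32)
    pb = b[y0 - dy:y1 - dy, x0 - dx:x1 - dx].astype(np.float32)
    return float(np.abs(pa - pb).mean())

def align(first, second, coarse=8, span=32):
    coarse_grid = [(dx, dy) for dx in range(-span, span + 1, coarse)
                            for dy in range(-span, span + 1, coarse)]
    cx, cy = min(coarse_grid, key=lambda d: mismatch(first, second, *d))
    fine_grid = [(dx, dy) for dx in range(cx - coarse, cx + coarse + 1)
                          for dy in range(cy - coarse, cy + coarse + 1)]
    return min(fine_grid, key=lambda d: mismatch(first, second, *d))
```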
20100150401 | Target tracker - For tracking of a target object in a time series of frames of image data, a tracking object designation acceptor accepts a designation of a tracking object, a target color setter sets a color of the designated tracking object as a target color, and a particle filter processor employs particles for measurements that determine color likelihoods by comparing the target color with colors in the vicinities of the particles. When the color likelihoods meet a criterion, the processor estimates a region of the tracking object in a frame of image data in accordance with the results of the measurements; when the color likelihoods fail to meet the criterion, it uses particles for measurements that determine luminance likelihoods based on luminance differences between frames in the time series, and estimates the region of the tracking object from those results. The target color is then updated with a color from the estimated region. | 06-17-2010 |
20100158312 | Method for tracking and processing image - The invention relates to a method for image processing which can be used to calibrate the background quickly. When the external environment changes due to switching of the lighting, the background color is quickly recalibrated and the background is updated accordingly. The method is used not only to update the background, but can also be used to eliminate the need to re-converge the background. | 06-24-2010 |
20100158313 | COUPLING ALIGNMENT APPARATUS AND METHOD - An apparatus for axially aligning a first coupling member and a second coupling member that can be connected so as to form a rotating assembly. The apparatus includes a measurement arrangement configured to be mounted onto the first coupling member and to be rotated therewith. The measurement arrangement includes an emitter arrangement configured to emit first and second signals in the direction of the second coupling member so as to cause at least a portion of said first and second signals to be reflected by the second coupling member. The measurement apparatus further has a capture arrangement configured to capture at least a portion of the first and second reflected signals. The apparatus includes a control arrangement configured to determine an offset in axial alignment between the first and second coupling member based on at least the first and second reflected signals. | 06-24-2010 |
20100158314 | METHOD AND APPARATUS FOR MONITORING TREE GROWTH - A system for identifying forest stands within an area of interest that are exhibiting abnormal growth determines a relationship between vegetation index (VI) values determined from a first and a second image of the area of interest. From the relationship, an expected or predicted VI value for each forest stand is determined and compared with the actual VI value computed for the forest stand from the first image. Those forest stands with a difference between the actual and predicted VI values that exceed a threshold are identified as exhibiting abnormal growth. | 06-24-2010 |
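The prediction-and-threshold step in the entry above can be sketched with a few lines of NumPy: a linear relationship is fitted between the per-stand vegetation index (VI) values of the two images, the expected value for each stand is predicted from that relationship, and stands whose actual first-image value departs from the prediction by more than a threshold are flagged. The NDVI helper, the linear fit, and the threshold are illustrative assumptions.

```python
# Hedged sketch: flag forest stands whose actual VI departs from the VI predicted
# from the relationship between the two image dates.
import numpy as np

def ndvi(nir, red):
    """One common vegetation index; band arrays are assumed to be co-registered."""
    return (nir - red) / (nir + red + 1e-6)

def abnormal_stands(vi_first, vi_second, threshold=0.1):
    """vi_first, vi_second: 1-D arrays of per-stand mean VI values for the two images."""
    slope, intercept = np.polyfit(vi_second, vi_first, 1)   # relationship between the dates
    predicted = slope * vi_second + intercept
    return np.where(np.abs(vi_first - predicted) > threshold)[0]
```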
20100158315 | SPORTING EVENT IMAGE CAPTURE, PROCESSING AND PUBLICATION - Systems, methods and software are disclosed for capturing and/or importing and processing media items such as digital images or video ( | 06-24-2010 |
20100158316 | Action estimating apparatus, method for updating estimation model, and program - A storage unit stores a model defining a position or a locus of a feature point of an occupant in each specific action. An action estimation unit compares the feature point with each of the models to detect an estimated action. A detecting unit detects that a specific action is being performed as a definite action. A first generating unit generates a new definite model corresponding to the definite action by modifying a position or a locus of the feature point according to an in-action feature point when the definite action is being performed. A second generating unit generates a new non-definite model using the in-action feature point according to a correspondence between the feature point in the definite action and the feature point of a non-definite model other than the definite model. An update unit updates the definite action model and the non-definite action model. | 06-24-2010 |
20100166256 | Method and apparatus for identification and position determination of planar objects in images - A method of identifying a planar object in source images is disclosed. In at least one embodiment, the method includes: retrieving a first source image obtained by a first terrestrial based camera; retrieving a second source image obtained by a second terrestrial based camera; retrieving position data associated with the first and second source image; retrieving orientation data associated with the first and second source image; performing a looking axis rotation transformation on the first and second source image by use of the associated position data and orientation data to obtain first and second intermediate images, wherein the first and second intermediate images have an identical looking axis; performing a radial logarithmic space transformation on the first and second intermediate images to obtain first and second radial logarithmic data images; detecting an area in the first image potentially being a planar object; comparing it with a potential planar object having similar dimensions and similar RGB characteristics in the second radial logarithmic data image; and finally, identifying the area as a planar object and determining its position. At least one embodiment of the method enables planar, perpendicular objects to be detected very efficiently in subsequent images. | 07-01-2010 |
20100166257 | METHOD AND APPARATUS FOR DETECTING SEMI-TRANSPARENCIES IN VIDEO - A method and apparatus for detecting semi-transparencies in video is disclosed. | 07-01-2010 |
20100166258 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR PROVIDING HAND SEGMENTATION FOR GESTURE ANALYSIS - A method for providing hand segmentation for gesture analysis may include determining a target region based at least in part on depth range data corresponding to an intensity image. The intensity image may include data descriptive of a hand. The method may further include determining a point of interest of a hand portion of the target region, determining a shape corresponding to a palm region of the hand, and removing a selected portion of the target region to identify a portion of the target region corresponding to the hand. An apparatus and computer program product corresponding to the method are also provided. | 07-01-2010 |
20100166259 | OBJECT ENUMERATING APPARATUS AND OBJECT ENUMERATING METHOD - An object enumerating apparatus comprises means for generating and binarizing inter-frame differential data from moving image data representative of a photographed object under detection, means for extracting feature data from a plurality of the inter-frame binary differential data directly adjacent to each other on a pixel-by-pixel basis through cubic higher-order local auto-correlation, means for calculating a coefficient of each factor vector from the feature data and a factor matrix comprised of a plurality of factor vectors previously generated through learning using a factor analysis and arranged for one object under detection, and means for adding a plurality of the coefficients for one object under detection and rounding off the sum to the closest integer, representative of a quantity. Owing to small fluctuations in the sum of coefficients and accurate matching with the quantity of objects intended for recognition, recognition can be accomplished with robustness to differences in scale and speed of objects and to dynamic changes thereof. | 07-01-2010 |
20100166260 | METHOD FOR AUTOMATIC DETECTION AND TRACKING OF MULTIPLE TARGETS WITH MULTIPLE CAMERAS AND SYSTEM THEREFOR - A method for automatically detecting and tracking multiple targets in a multi-camera surveillance zone and a system thereof. In each camera view of the system only a simple object detection algorithm is needed. The detection results from multiple cameras are fused into a posterior distribution, named TDP, based on the Bayesian rule. This TDP distribution represents a likelihood of presence of some moving targets on the ground plane. To properly handle the tracking of multiple moving targets with time, a sample-based framework which combines Markov Chain Monte Carlo (MCMC), Sequential Monte Carlo (SMC), and Mean-Shift Clustering is provided. The detection and tracking accuracy is evaluated using both synthesized videos and real videos. The experimental results show that this method and system can accurately track a varying number of targets. | 07-01-2010 |
20100166261 | SUBJECT TRACKING APPARATUS AND CONTROL METHOD THEREFOR, IMAGE CAPTURING APPARATUS, AND DISPLAY APPARATUS - A subject tracking apparatus extracts a subject region which is similar to a reference image on the basis of a degree of correlation with the reference image for tracking a predetermined subject from images supplied in a time series manner. Further, the subject tracking apparatus detects the position of the predetermined subject in the subject region on the basis of the distribution of characteristic pixels representing the predetermined subject contained in the subject region, and corrects the subject region so as to reduce a shift in position of the predetermined subject in the subject region. Moreover, the corrected subject region is taken as the result of tracking the predetermined subject, and the reference image is updated with the corrected subject region as the reference image to be used for the next supplied image. | 07-01-2010 |
20100166262 | MULTI-MODAL OBJECT SIGNATURE - Disclosed herein are a method and system for appearance-invariant tracking of an object in an image sequence. A track is associated with the image sequence, wherein the track has an associated track signature comprising at least one mode. The method detects the object in a frame of the image sequence ( | 07-01-2010 |
20100172541 | TARGETING METHOD, TARGETING DEVICE, COMPUTER READABLE MEDIUM AND PROGRAM ELEMENT - According to an exemplary embodiment a targeting method for targeting a first object from an entry point to a target point in an object ( | 07-08-2010 |
20100172542 | BUNDLING OF DRIVER ASSISTANCE SYSTEMS - A traffic sign recognition system including a detection mechanism adapted for detecting a candidate traffic sign and a recognition mechanism adapted for recognizing the candidate traffic sign as being an electronic traffic sign. A partitioning mechanism may be adapted for partitioning the image frames into a first partition and a second partition. The detection mechanism may use the first portion of the image frames and the recognition mechanism may use the second portion of the image frames. When the candidate traffic sign is detected as an electronic traffic sign, the recognition mechanism may use both the first partition of the image frames and the second portion of the image frames. | 07-08-2010 |
20100177929 | ENHANCED SAFETY DURING LASER PROJECTION - The present invention is directed to systems and methods that provide enhanced eye safety for image projection systems. In particular, the instant invention provides enhanced eye safety for long throw laser projection systems. | 07-15-2010 |
20100177930 | METHODS FOR DETERMINING A WAVEFRONT POSITION - The present disclosure relates to methods for determining a wavefront position of a liquid on a surface of an assay test strip, comprising: placing a liquid on the surface of the test strip; acquiring one or more signals from the surface of the test strip at one or more times; and comparing the one or more acquired signals to a threshold, wherein the wavefront position is a position on the surface of the test strip where a signal is greater than or less than a threshold (e.g., a fixed or dynamic threshold). Such methods may be used to determine the wavefront velocity of a liquid on a surface of an assay test strip and the transit time of a liquid sample to traverse one or more positions on the surface of the assay test strip. | 07-15-2010 |
20100177931 | VIRTUAL OBJECT ADJUSTMENT VIA PHYSICAL OBJECT DETECTION - Various embodiments related to the location and adjustment of a virtual object on a display in response to a detected physical object are disclosed. One disclosed embodiment provides a computing device comprising a multi-touch display, a processor and memory comprising instructions executable by the processor to display on the display a virtual object, to detect a change in relative location between the virtual object and a physical object that constrains a viewable area of the display, and to adjust a location of the virtual object on the display in response to detecting the change in relative location between the virtual object and the physical object. | 07-15-2010 |
20100177932 | OBJECT DETECTION APPARATUS AND OBJECT DETECTION METHOD - An object detection apparatus includes an image acquisition unit that acquires image data, a reading unit that reads the acquired image data in a predetermined image area at predetermined resolution, an object area detection unit that detects an object area from first image data read by the reading unit, an object discrimination unit that discriminates a predetermined object from the object area detected by the object area detection unit, and a determination unit that determines an image area and resolution used to read second image data which is captured later than the first image data from the object area detected by the object area detection unit, wherein the reading unit reads the second image data from the image area at the resolution determined by the determination unit. | 07-15-2010 |
20100183192 | SYSTEM AND METHOD FOR OBJECT MOTION DETECTION BASED ON MULTIPLE 3D WARPING AND VEHICLE EQUIPPED WITH SUCH SYSTEM - The present invention relates to a technique for detecting dynamic (i.e., moving) objects using sensor signals with 3D information and can be deployed e.g. in driver assistance systems. | 07-22-2010 |
20100183193 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND INTEGRATED CIRCUIT FOR PROCESSING IMAGES - This image processing apparatus, for photographed images taken at a predetermined time interval and input sequentially, specifies an image area as the target of predetermined processing. The apparatus (i) has processing capability to generate, in accordance with a particular input photographed image, reduced images at K (K≧1) ratios within the predetermined time interval, (ii) selects, for each photographed image that is input, M (M≦K) or fewer ratios from among L (L>K) different ratios in accordance with ratios indicated for a photographed image input prior to the photographed image, (iii) compares each of the reduced images generated at the selected M or fewer ratios with template images, and (iv) in accordance with the comparison results, specifies the image area. | 07-22-2010 |
20100183194 | THREE-DIMENSIONAL MEASURING DEVICE - A three-dimensional measuring device includes an irradiation device configured to irradiate and switch among a multiplicity of light patterns having different periods and having a striped light intensity distribution on at least a measurement object, a camera having an imaging element capable of imaging reflected light from the measurement object irradiated by the light pattern, a rack configured to cause relative change in positional relationship between the imaging element and the measurement object, and a control device configured to perform three-dimensional measurements based on image data imaged by the camera. The control device performs the three-dimensional measurements by performing a phase shift method calculation of height data as a first height data for each pixel unit of image data based on a multiply phase-shifted image data obtained by irradiating on a first position a multiply phase-shifted first light pattern having a first period. | 07-22-2010 |
20100183195 | Method and Apparatus for Object Detection in an Image - A method and apparatus for detecting at least one of a location and a scale of an object in an image. The method comprises distinguishing the trailing and leading edges of a moving object in at least one portion of the image, applying a symmetry detection filter to at least a portion of the image to produce symmetry scores relating to the at least one portion of the image, identifying at least one location corresponding to locally maximal symmetry scores among the symmetry scores relating to the at least one portion of the image, and utilizing the at least one location of the locally maximal symmetry scores to detect at least one of a location and a scale of the object in the image, wherein the scale relates to the size of the symmetry detection filter. | 07-22-2010 |
20100183196 | DYNAMIC TRACKING OF SOFT TISSUE TARGETS WITH ULTRASOUND IMAGES, WITHOUT USING FIDUCIAL MARKERS - An apparatus and method of dynamically tracking a soft tissue target with ultrasound images, without the use of fiducial markers. In one embodiment, the apparatus includes an ultrasound imager to generate a reference ultrasound and a first ultrasound image having a soft tissue target, and a processing device coupled to the ultrasound imager to receive the reference ultrasound image and the first ultrasound image, to register the first ultrasound image with the reference ultrasound image, and to determine a displacement of the soft tissue target based on registration of the first ultrasound image with the reference ultrasound image. | 07-22-2010 |
20100195867 | VISUAL TARGET TRACKING USING MODEL FITTING AND EXEMPLAR - A method of tracking a target includes receiving an observed depth image of the target from a source and analyzing the observed depth image with a prior-trained collection of known poses to find an exemplar pose that represents an observed pose of the target. The method further includes rasterizing a model of the target into a synthesized depth image having a rasterized pose and adjusting the rasterized pose of the model into a model-fitting pose based, at least in part, on differences between the observed depth image and the synthesized depth image. Either the exemplar pose or the model-fitting pose is then selected to represent the target. | 08-05-2010 |
20100195868 | TARGET-LOCKING ACQUISITION WITH REAL-TIME CONFOCAL (TARC) MICROSCOPY - Presented herein is a real-time target-locking confocal microscope that follows an object moving along an arbitrary path, even as it simultaneously changes its shape, size and orientation. This Target-locking Acquisition with Realtime Confocal (TARC) microscopy system integrates fast image processing and rapid image acquisition using, for example, a Nipkow spinning-disk confocal microscope. The system acquires a 3D stack of images, performs a full structural analysis to locate a feature of interest, moves the sample in response, and then collects the next 3D image stack. In this way, data collection is dynamically adjusted to keep a moving object centered in the field of view. The system's capabilities are demonstrated by target-locking freely-diffusing clusters of attractive colloidal particles, and actively-transported quantum dots (QDs) endocytosed into live cells free to move in three dimensions for several hours. During this time, both the colloidal clusters and live cells move distances several times the length of the imaging volume. Embodiments may be applied to other applications, such as manufacturing, open water observation of marine life, aerial observation of flying animals, or medical devices, such as tumor removal. | 08-05-2010 |
20100195869 | VISUAL TARGET TRACKING - A visual target tracking method includes representing a human target with a machine-readable model configured for adjustment into a plurality of different poses and receiving an observed depth image of the human target from a source. The observed depth image is compared to the model. A refine-z force vector is then applied to one or more force-receiving locations of the model to move a portion of the model towards a corresponding portion of the observed depth image if that portion of the model is Z-shifted from that corresponding portion of the observed depth image. | 08-05-2010 |
20100195870 | TRACKING METHOD AND DEVICE ADOPTING A SERIES OF OBSERVATION MODELS WITH DIFFERENT LIFE SPANS - The present invention relates to a tracking method and a tracking device adopting multiple observation models with different life spans. The tracking method is suitable for tracking an object in a low frame rate video or with abrupt motion, and uses three observation models with different life spans to track and detect a specific subject in frame images of a video sequence. An observation model I performs online learning with one frame image prior to the current image, an observation model II performs online learning with five frames prior to the current image, and an observation model III is offline trained. The three observation models are combined by a cascade particle filter so that the specific subject in the low frame rate video or the object with abrupt motion can be tracked quickly and accurately. | 08-05-2010 |
20100202656 | Ultrasonic Doppler System and Method for Gesture Recognition - A method and system recognizes an unknown gesture by directing an ultrasonic signal at an object making the unknown gesture. A set of Doppler signals is acquired from the ultrasonic signal after reflection by the object. Doppler features are extracted from the reflected Doppler signals, and the Doppler features are classified using a set of Doppler models storing the Doppler features and identities of known gestures to recognize and identify the unknown gesture, wherein there is one Doppler model for each known gesture. | 08-12-2010 |
20100202657 | SYSTEM AND METHOD FOR OBJECT DETECTION FROM A MOVING PLATFORM - The present invention relates to a system and method for detecting one or more targets belonging to a first class (e.g., moving and/or stationary people), from a moving platform in a 3D-rich environment. The framework described here is implemented using a number of monocular or stereo cameras distributed around the vehicle to provide 360 degrees coverage. Furthermore, the framework described here utilizes numerous filters to reduce the number of false positive identifications of the targets. | 08-12-2010 |
20100202658 | Drowsiness detector - A drowsiness detector detects drowsiness by measuring the distance of an eyebrow from a reference line, defined by an inner eye corner and an outer eye corner, at three points. The three distances of the eyebrow from the reference line are respectively standardized by an inter-eye distance between the inner eye corners of the left and right eyes, and are respectively compared with thresholds for determining the rise of the eyebrow. The rise of the eyebrow is then interpreted as the start of drowsiness, and is associated with an operation such as a doze prevention operation or the like. | 08-12-2010 |
20100202659 | IMAGE SAMPLING IN STOCHASTIC MODEL-BASED COMPUTER VISION - A method for tracking a target in computer vision is disclosed. The method generates an integral image ( | 08-12-2010 |
20100202660 | OBJECT TRACKING SYSTEMS AND METHODS - An object tracking method may include: receiving frames of data containing image information of an object; performing an object segmentation to obtain an object motion result; and using the object motion result to conduct an object tracking. In particular, the object segmentation may include: extracting motion vectors from the frames of data; estimating a global motion using the motion vectors; and subtracting the global motion from the motion vectors to generate an object motion result. | 08-12-2010 |
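The segmentation step in the entry above amounts to estimating the camera-induced global motion from the extracted motion vectors and removing it. A minimal sketch follows, under the assumption that the global motion can be taken as the median motion vector (the abstract does not specify the estimator).

```python
import numpy as np

def object_motion(motion_vectors):
    """
    motion_vectors: (N, 2) array of per-block motion vectors (dx, dy) extracted
    from a frame. The global motion is estimated as the median vector (an
    assumed choice) and subtracted to leave object-relative motion.
    """
    global_motion = np.median(motion_vectors, axis=0)
    return motion_vectors - global_motion, global_motion
```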
20100202661 | MOVING OBJECT DETECTION APPARATUS AND COMPUTER READABLE STORAGE MEDIUM STORING MOVING OBJECT DETECTION PROGRAM - The approaching object detection unit in a moving object detection apparatus for a moving picture calculates, on the basis of an image frame obtained at time point t and an image frame obtained at time point t−1, a moving distance of each characteristic point in the image frame obtained at time point t−1, and calculates, on the basis of the image frame obtained at time point t−1 and an image frame obtained at time point t+m, a moving distance of each characteristic point in the image frame obtained at time point t−1 whose moving distance is less than a prescribed value. | 08-12-2010 |
20100208939 | STATISTICAL OBJECT TRACKING IN COMPUTER VISION - A method and system for object tracking in computer vision. The tracked object is recognized from an image that has been acquired with the camera of the computer vision system. The image is processed by randomly generating samples in the search space and then computing fitness functions. Regions of high fitness attract more samples. Computations may be stored into a tree structure. The method provides efficient means for sampling from a very peaked probability density function that can be expressed as a product of factor functions. | 08-19-2010 |
20100208940 | PRE TENSION MONITORING SOLUTION - The present invention relates to a tension monitoring system comprising: at least one camera for acquiring at least one image of at least one pattern located on an object of interest, wherein the pattern comprises a plurality of points and each point is arranged on the object in such a way as to follow the movement of the object; and a computational device; wherein the computational device is arranged to analyze the acquired image for detecting the position of each pattern point using an image analysis algorithm arranged to determine the geometrical centre of a point using a contrast detection method, determining the distance between at least two pattern portions, and calculating the tension induced in the object using a reference value of the distance between the two pattern portions when the object is mechanically relaxed. | 08-19-2010 |
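As a rough illustration of the image-analysis side of the entry above, the sketch below locates the geometric centre of each pattern point by contrast thresholding and contour moments, and computes the elongation of the pattern relative to the relaxed reference distance. The threshold, the OpenCV-based realisation, and leaving the elongation-to-tension conversion open are all assumptions of the sketch, not the patented algorithm.

```python
import cv2

def pattern_point_centres(gray, thresh=128):
    """Locate the geometric centre of each pattern point by contrast
    thresholding followed by contour moments (one plausible realisation)."""
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centres = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centres.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centres

def relative_elongation(distance_now, distance_relaxed):
    """Elongation of the pattern relative to the relaxed reference distance;
    converting this to tension needs the object's stiffness, which the
    abstract leaves open."""
    return (distance_now - distance_relaxed) / distance_relaxed
```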
20100208941 | ACTIVE COORDINATED TRACKING FOR MULTI-CAMERA SYSTEMS - A method and system for coordinated tracking of objects is disclosed. A plurality of images is received from a plurality of nodes, each node comprising at least one image capturing device. At least one target in the plurality of images is identified to produce at least one local track corresponding to each of the plurality of nodes having the at least one target in its field of view. The at least one local track corresponding to each of the plurality of nodes is fused according to a multi-hypothesis tracking method to produce at least one fused track corresponding to the at least one target. At least one of the plurality of nodes is assigned to track the at least one target based on minimizing at least one cost function comprising a cost matrix using the k-best algorithm for tracking at least one target for each of the plurality of nodes. The at least one fused track is sent to the at least one of the plurality of nodes assigned to track the at least one target based on the at least one fused track. | 08-19-2010 |
20100215213 | TARGETING METHOD, TARGETING DEVICE, COMPUTER READABLE MEDIUM AND PROGRAM ELEMENT - This invention introduces a fast and effective target approach planning method, preferably for needle-guided percutaneous interventions using a rotational X-ray device. According to an exemplary embodiment, a targeting method for targeting a first object in an object under examination is provided, wherein the method comprises selecting a first two-dimensional image of a three-dimensional data volume representing the object under examination, determining a target point in the first two-dimensional image, and displaying an image of the three-dimensional data volume with the selected target point. Furthermore, the method comprises positioning the said image of the three-dimensional data volume by scrolling and/or rotating such that a suitable path of approach crossing the target point has a first direction parallel to an actual viewing direction of the said image of the three-dimensional data volume, and generating a second two-dimensional image out of the three-dimensional data volume, wherein a normal of the plane of the second two-dimensional image is oriented parallel to the first direction and crosses the target point. | 08-26-2010 |
20100215214 | IMAGE PROCESSING METHOD - A method and apparatus for localizing an area in relative movement and for determining the speed and direction thereof in real time is disclosed. Each pixel of an image is smoothed using its own time constant. A binary value corresponding to the existence of a significant variation in the amplitude of the smoothed pixel from the prior frame, and the amplitude of the variation, are determined, and the time constant for the pixel is updated. For each particular pixel, two matrices are formed that include a subset of the pixels spatially related to the particular pixel. The first matrix contains the binary values of the subset of pixels. The second matrix contains the amplitude of the variation of the subset of pixels. In the first matrix, it is determined whether the pixels along an oriented direction relative to the particular pixel have binary values representative of significant variation, and, for such pixels, it is determined in the second matrix whether the amplitude of these pixels varies in a known manner indicating movement in the oriented direction. In each of several domains, a histogram of the values in the first and second matrices falling in that domain is formed. Using the histograms, it is determined whether there is an area having the characteristics of the particular domain. The domains include luminance, hue, saturation, speed (V), oriented direction (D | 08-26-2010 |
20100215215 | OBJECT DETECTING APPARATUS, INTERACTIVE SYSTEM, OBJECT DETECTING METHOD, INTERACTIVE SYSTEM REALIZING METHOD, AND RECORDING MEDIUM - The apparatus is provided with a plurality of retroreflective sheets, each of which is attached to a screen and retroreflectively reflects received light, an imaging unit which photographs the retroreflective sheets, and an MCU which analyzes a differential picture obtained by the photographing. The MCU detects, from the differential picture, a shade area corresponding to a part of the retroreflective sheet which is covered by a foot of a player. The detection of the shade area corresponds to the detection of the foot of the player, because, in the case where the foot is placed on the retroreflective sheet, the part corresponding thereto is not captured in the differential picture and is present as a shade area. It is possible to detect a foot without attaching and fixing a reflecting sheet to the foot. | 08-26-2010 |
20100215216 | Localization system and method - Disclosed herein is a localization system and method to recognize the location of an autonomous mobile platform. In order to recognize the location of the autonomous mobile platform, a beacon (three-dimensional structure) having a recognizable image pattern is disposed at a location desired by a user, the mobile platform which knows image pattern information of the beacon photographs the image of the beacon and finds and analyzes a pattern to be recognized from the photographed image. A relative distance and a relative angle of the mobile platform are computed using the analysis of the pattern such that the location of the mobile platform is accurately recognized. | 08-26-2010 |
20100215217 | Method and System of Tracking and Stabilizing an Image Transmitted Using Video Telephony - Herein described is a system and method that tracks the face of a person engaged in a videophone conversation. In addition to performing facial tracking, the invention provides stabilization of facial images that are transmitted during the videophone conversation. The face is tracked by employing one or more algorithms that correlate videophone captured facial images against a stored facial image. The face may be better identified by way of employing one or more voice recognition algorithms. The one or more voice recognition algorithms may correlate utterances of the person engaged in a conversation to one or more stored utterances. The identified utterances are subsequently mapped to a stored facial image. In a representative embodiment, the system used for performing facial tracking and image stabilization comprises an image sensor, a lens, an actuator, and a controller/processor. | 08-26-2010 |
20100220891 | AUGMENTED REALITY METHOD AND DEVICES USING A REAL TIME AUTOMATIC TRACKING OF MARKER-FREE TEXTURED PLANAR GEOMETRICAL OBJECTS IN A VIDEO STREAM - The invention relates to a method and to devices for the real-time tracking of one or more substantially planar geometrical objects of a real scene in at least two images of a video stream for an augmented-reality application. After receiving a first image of the video stream ( | 09-02-2010 |
20100220892 | DRIVER IMAGING APPARATUS AND DRIVER IMAGING METHOD - An imaging mechanism captures an image of a face of a driver of a vehicle. A first image processor performs image processing on a wide portion of the face of the driver in a first image using a first image captured by the imaging mechanism. A second image processor performs image processing on a part of the face of the driver in a second image captured by the imaging mechanism at a higher exposure than the exposure of the first image, using the second image. | 09-02-2010 |
20100226531 | MAKEUP SIMULATION SYSTEM, MAKEUP SIMULATOR, MAKEUP SIMULATION METHOD, AND MAKEUP SIMULATION PROGRAM - According to the present invention, a makeup simulation system applying makeup to a video having an image of the face of a user captured thereon is characterized by image capturing means for capturing the image of the face of the user and outputting the video, control means for receiving the video output from the image capturing means, performing image processing on the video, and outputting the video; and display means for displaying the video output from the control means, wherein the control means includes face recognition means for recognizing the face of the user from the video based on predetermined tracking points; and makeup processing means for applying a predetermined makeup on the face of the user included in the video based on the tracking points and outputting the video to the display means. | 09-09-2010 |
20100226532 | Object Detection Apparatus, Method and Program - An object detection apparatus for detecting an object from an image obtained by taking a front view picture of a road in a traveling direction of a vehicle includes a camera unit for taking the front view picture of the road and inputting the image; a dictionary modeling the object; a search unit for searching the image with a search window; a histogram production unit for producing a histogram by comparing the image in the search window with the dictionary and counting a detection frequency in a direction parallel to a road plane; and a detection unit for detecting the detection object by detecting a unimodal distribution from the histogram. | 09-09-2010 |
20100226533 | METHOD OF IMAGE PROCESSING - The present invention relates to a method of identifying a target object in an image using image processing. It further relates to apparatus and computer software implementing the method. The method includes storing template data representing a template orientation field indicative of an orientation of each of a plurality of features of a template object; receiving image data representing the image; processing the image data to generate an image orientation field indicating an orientation corresponding to the plurality of image features; processing the image orientation field using the template orientation field to generate a match metric indicative of an extent of matching between at least part of the template orientation field and at least part of the image orientation field; and using the match metric to determine whether or not the target object has been identified in the image. Image and/or template confidence data is used to generate the match metric. | 09-09-2010 |
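For illustration only, one simple way to realise an orientation field and a confidence-weighted match metric of the kind described in the entry above is sketched below. Sobel gradients, gradient magnitude as confidence, and the cos(2Δθ) agreement measure are assumptions of the sketch, not the claimed method.

```python
import cv2
import numpy as np

def orientation_field(gray):
    """Per-pixel gradient orientation and magnitude; the magnitude serves here
    as a simple confidence map (an assumption of this sketch)."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return np.arctan2(gy, gx), np.hypot(gx, gy)

def match_metric(template_theta, image_theta, template_conf, image_conf):
    """Confidence-weighted agreement of orientations; cos(2 * dtheta) treats
    opposite gradient directions as the same edge orientation."""
    agreement = np.cos(2.0 * (template_theta - image_theta))
    weight = template_conf * image_conf
    return float(np.sum(agreement * weight) / (np.sum(weight) + 1e-9))
```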
20100226534 | FUSION FOR AUTOMATED TARGET RECOGNITION - A method of predicting a target type in a set of target types from at least one image is provided. At least one image is obtained. First and second sets of confidence values and associated azimuth angles are determined for each target type in the set of target types from the at least one image. The first and second sets of confidence values are fused for each of the azimuth angles to produce a fused curve for each target type in the set of target types. When multiple images are obtained, first and second sets of possible detections are compiled corresponding to regions of interest in the multiple images. The possible detections are associated by regions of interest. The fused curves are produced for every region of interest. In the embodiments, the target type is predicted from the set of target types based on criteria concerning the fused curve. | 09-09-2010 |
20100226535 | AUGMENTING A FIELD OF VIEW IN CONNECTION WITH VISION-TRACKING - The claimed subject matter relates to an architecture that can employ vision-monitoring techniques to enhance an experience associated with elements of a local environment. In particular, the architecture can establish gaze- or eye-tracking attributes in connection with a user. In addition, a location and a head or face-based perspective of the user can also be obtained. By aggregating this information, the architecture can identify a current field of view of the user, and then map that field of view to a modeled view in connection with a geospatial model of the environment. In addition, the architecture can select additional content that relates to an entity in the view or a modeled entity in the modeled view, and further present the additional content to the user. | 09-09-2010 |
20100226536 | VIDEO SIGNAL DISPLAY DEVICE, VIDEO SIGNAL DISPLAY METHOD, STORAGE MEDIUM, AND INTEGRATED CIRCUIT - A technical problem is to inhibit variation in the correction between frames of a moving image while maintaining a correction amount of the overall image. The video signal display device has an attraction point determination portion ( | 09-09-2010 |
20100226537 | DETECTION AND TRACKING OF INTERVENTIONAL TOOLS - The present invention relates to minimally invasive X-ray guided interventions, in particular to an image processing and rendering system and a method for improving visibility and supporting automatic detection and tracking of interventional tools that are used in electrophysiological procedures. According to the invention, this is accomplished by calculating differences between 2D projected image data of a preoperatively acquired 3D voxel volume showing a specific anatomical region of interest or a pathological abnormality (e.g. an intracranial arterial stenosis, an aneurysm of a cerebral, pulmonary or coronary artery branch, a gastric carcinoma or sarcoma, etc.) in a tissue of a patient's body and intraoperatively recorded 2D fluoroscopic images showing the aforementioned objects in the interior of said patient's body, wherein said 3D voxel volume has been generated in the scope of a computed tomography, magnetic resonance imaging or 3D rotational angiography based image acquisition procedure and said 2D fluoroscopic images have been co-registered with the 2D projected image data. After registration of the projected 3D data with each of said X-ray images, comparison of the 2D projected image data with the 2D fluoroscopic images, based on the resulting difference images, allows removing common patterns and thus enhancing the visibility of interventional instruments which are inserted into a pathological tissue region, a blood vessel segment or any other region of interest in the interior of the patient's body. Automatic image processing methods to detect and track those instruments are also made easier and more robust by this invention. Once the 2D-3D registration is completed for a given view, all the changes in the system geometry of an X-ray system used for generating said fluoroscopic images can be applied to a registration matrix. Hence, use of said method as claimed is not limited to the same X-ray view during the whole procedure. | 09-09-2010 |
20100226538 | OBJECT DETECTION APPARATUS AND METHOD THEREFOR - An image processing apparatus includes a moving image input unit configured to input a moving image, an object likelihood information storage unit configured to store object likelihood information in association with a corresponding position in an image for each object size in each frame included in the moving image, a determination unit configured to determine a pattern clipping position where a pattern is clipped out based on the object likelihood information stored in the object likelihood information storage unit, and an object detection unit configured to detect an object in an image based on the object likelihood information of the pattern clipped out at the pattern clipping position determined by the determination unit. | 09-09-2010 |
20100232643 | Method, Apparatus, and Computer Program Product For Object Tracking - A method for object tracking is provided. The method may include identifying a first interest point, receiving a video frame, and detecting, via a processor, a second interest point in the video frame using a scale space image pyramid. The method may further include matching the second interest point with the first interest point, and determining a motion estimation based on the matched interest points. Similar apparatuses and computer program products are also provided. | 09-16-2010 |
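As a loose illustration of the detect-match-estimate flow in the entry above, the sketch below detects interest points over a scale-space image pyramid with OpenCV's ORB (a stand-in detector, not necessarily the one claimed), matches them between a previous and a current frame, and derives a crude motion estimate from the matched pairs.

```python
import cv2
import numpy as np

def match_interest_points(prev_frame, curr_frame, n_features=500):
    """Detect interest points over a scale-space image pyramid (ORB here) and
    match them between two grayscale frames."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(prev_frame, None)
    kp2, des2 = orb.detectAndCompute(curr_frame, None)
    if des1 is None or des2 is None:
        return [], None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]
    if not pairs:
        return [], None
    # Crude motion estimate: the median displacement over all matched pairs.
    displacements = np.array([(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in pairs])
    return pairs, np.median(displacements, axis=0)
```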
20100232644 | SYSTEM AND METHOD FOR COUNTING THE NUMBER OF PEOPLE - This invention discloses a method and system for counting the number of people. First, first face information is stored in a memory. Then, an image region is determined to be a complexion region or not, and the complexion region is determined to be a real face or not, yielding potential face information. Next, one-to-one similarity matching is performed between the potential face information and the first face information: when the similarity matching achieves a predetermined condition, the potential face information is used to update the first face information; when the similarity matching does not achieve the predetermined condition and the potential face is a real face, the potential face is treated as second face information and added to the memory, and the first face information is set as occluded. Finally, the number of people in front of the camera is counted according to the faces stored in the memory. | 09-16-2010 |
20100232645 | MODEL-BASED SPECT HEART ORIENTATION ESTIMATION - When estimating a position or orientation of a patient's heart, a mesh model of a nominal heart is overlaid on a SPECT or PET image of the patient's heart and manipulated to conform to the image of the patient's heart. A mesh adaptation protocol applies opposing forces to the mesh model to constrain the mesh model from changing shape and to pull the mesh model to the shape of the patient's heart. A heart orientation estimator ( | 09-16-2010 |
20100232646 | SUBJECT TRACKING APPARATUS, IMAGING APPARATUS AND SUBJECT TRACKING METHOD - A subject tracking apparatus includes a region extraction section extracting a region similar to a reference image in a first image based on respective feature amounts of the first image being picked up and the reference image being set, a motion vector calculating section calculating a motion vector in each of a plurality of regions in the first image using a second image and the first image, the second image being picked up at a different time from that of the first image, and a control section determining an object region of subject tracking in the first image based on an extraction result in the region extraction section and a calculation result in the motion vector calculating section. | 09-16-2010 |
20100232647 | THREE-DIMENSIONAL RECOGNITION RESULT DISPLAYING METHOD AND THREE-DIMENSIONAL VISUAL SENSOR - In the present invention, whether three-dimensional measurement and checking processing against a model are properly performed can easily be confirmed from the setting information and the recognition processing result. After setting processing is performed on a three-dimensional visual sensor including a stereo camera, a real workpiece is imaged, three-dimensional measurement is performed on an edge included in the produced stereo image, and the restored three-dimensional information is checked against a three-dimensional model to compute the position of the workpiece and a rotation angle relative to the attitude indicated by the three-dimensional model. Thereafter, perspective transformation into the coordinate system of the camera that performs the imaging is applied both to the three-dimensional edge information obtained through the measurement processing and to the three-dimensional model to which coordinate transformation has already been applied based on the recognition result, and the resulting projection images are displayed so that they can be checked against each other. | 09-16-2010 |
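The display step in the entry above is an ordinary perspective transformation of 3D points into the camera's image plane. A minimal sketch follows, assuming a pinhole model with intrinsics K and pose R, t and ignoring lens distortion; projecting both the measured edge points and the pose-transformed model this way lets the two overlays be compared.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """
    Perspective-transform 3D points (N, 3) into pixel coordinates with a camera
    intrinsic matrix K (3x3) and pose R (3x3), t (3,).
    """
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera coordinates
    uv = K @ cam                              # camera -> homogeneous image coords
    return (uv[:2] / uv[2]).T                 # divide by depth to get pixels
```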
20100232648 | IMAGING APPARATUS, MOBILE BODY DETECTING METHOD, MOBILE BODY DETECTING CIRCUIT AND PROGRAM - An imaging apparatus includes: a moving body detecting section that detects if an object in an image is a moving body which makes a motion between frames; and an attribute determining section that determines a similarity indicating whether or not the object detected as the moving body is similar among a plurality of frames, and a change in luminance of the object based on a texture and luminance of the object, and, when determining that the object is a light/shadow-originated change in luminance, adds attribute information indicating the light/shadow-originated change in luminance to the object detected as the moving body. | 09-16-2010 |
20100232649 | Locating Device for a Magnetic Resonance System - The present utility model provides a locating device for a magnetic resonance system comprising an image sensor, an image display for displaying images acquired by the abovementioned image sensor, and a locator, which locator has at least one locating mark. There is no need for the abovementioned locating device for a magnetic resonance system to use a laser for locating and, therefore, the case where an operator is hurt by the laser will not occur. On the other hand, due to the use of the image sensor and the image display, the remote control of adjustment conditions can be accomplished, therefore there is no need to repeatedly enter into a magnetic resonance examination room to carry out operations during the adjustment process, which saves time and costs for the adjustments. | 09-16-2010 |
20100239119 | SYSTEM FOR IRIS DETECTION TRACKING AND RECOGNITION AT A DISTANCE - A stand-off range or at-a-distance iris detection and tracking for iris recognition having a head/face/eye locator, a zoom-in iris capture mechanism and an iris recognition module. The system may obtain iris information of a subject with or without his or her knowledge or cooperation. This information may be sufficient for identification of the subject, verification of identity and/or storage in a database. | 09-23-2010 |
20100239120 | IMAGE OBJECT-LOCATION DETECTION METHOD - An image object-location detection method includes dividing a target image into a plurality of image blocks, calculating a plurality of sharpness values respectively corresponding to the plurality of image blocks, and analyzing the plurality of sharpness values to accordingly select image blocks corresponding to object-locations in the target image from the plurality of image blocks. | 09-23-2010 |
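As an illustration of the block-wise sharpness analysis described in the entry above, the sketch below scores each block by the variance of its Laplacian, one common sharpness measure; the block size and the choice of measure are assumptions of the sketch.

```python
import cv2

def block_sharpness_map(gray, block=64):
    """Divide a grayscale image into fixed-size blocks and score each block by
    the variance of its Laplacian, a common sharpness proxy."""
    h, w = gray.shape
    scores = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = gray[y:y + block, x:x + block]
            scores[(x, y)] = cv2.Laplacian(patch, cv2.CV_64F).var()
    return scores  # blocks with the highest scores suggest likely object locations
```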
20100239121 | METHOD AND SYSTEM FOR ASCERTAINING THE POSITION AND ORIENTATION OF A CAMERA RELATIVE TO A REAL OBJECT - The invention relates to a method for ascertaining the position and orientation of a camera ( | 09-23-2010 |
20100239122 | METHOD FOR CREATING AND/OR UPDATING TEXTURES OF BACKGROUND OBJECT MODELS, VIDEO MONITORING SYSTEM FOR CARRYING OUT THE METHOD, AND COMPUTER PROGRAM - Video monitoring systems are used for camera-supported monitoring of relevant areas, and usually comprise a plurality of monitoring cameras placed in the relevant areas for recording monitoring scenes. The monitoring scenes may be, for example, parking lots, intersections, streets, plazas, but also regions within buildings, plants, hospitals, or the like. In order to simplify the analysis of the monitoring scenes by monitoring personnel, the invention proposes displaying at least the background of the monitoring scene on a monitor as a virtual reality in the form of a three-dimensional scene model using background object models. The invention proposes a method for creating and/or updating textures of background object models in the three-dimensional scene model, wherein a background image of the monitoring scene is formed from one or more camera images | 09-23-2010 |
20100239123 | METHODS AND SYSTEMS FOR PROCESSING OF VIDEO DATA | 09-23-2010 |
20100239124 | IMAGE PROCESSING APPARATUS AND METHOD - It is an object to accurately detect an image of an object from an image created by photographing. A computer | 09-23-2010 |
20100239125 | DIGITAL IMAGE PROCESSING APPARATUS, TRACKING METHOD, RECORDING MEDIUM FOR STORING COMPUTER PROGRAM FOR EXECUTING THE TRACKING METHOD, AND DIGITAL IMAGE PROCESSING APPARATUS ADOPTING THE TRACKING METHOD - A digital image processing apparatus and tracking method are provided to rapidly and accurately track a subject location in video images. The apparatus searches for a target image that is most similar to a reference image in a current frame image in which each pixel has luminance data and other data, the reference image being smaller than the current frame image, and includes a similarity calculator for calculating a degree of similarity between the reference image and each of a plurality of matching images that have the same size as the reference image and are portions of the current frame image; and a target image determination unit for determining one of the plurality of matching images as the target image using the degree of similarity obtained by the similarity calculator. The similarity calculator calculates the degree of similarity by applying greater weight to the other data than to the luminance data. | 09-23-2010 |
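A minimal sketch of such a weighted similarity is shown below, assuming the "other data" are the Cr/Cb chroma channels of a YCrCb image and using illustrative weights; the abstract does not specify the actual weighting scheme. Sliding this score over the current frame and keeping the minimum-score window would play the role of the target image determination unit.

```python
import numpy as np

def weighted_similarity(reference_ycrcb, candidate_ycrcb,
                        luma_weight=0.3, chroma_weight=1.0):
    """Sum-of-absolute-differences between a reference image and a same-size
    candidate window in YCrCb, weighting the chroma channels more heavily than
    luminance. Lower values mean more similar."""
    diff = np.abs(reference_ycrcb.astype(np.float32) -
                  candidate_ycrcb.astype(np.float32))
    weights = np.array([luma_weight, chroma_weight, chroma_weight],
                       dtype=np.float32)
    return float(np.sum(diff * weights))
```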
20100246884 | METHOD AND SYSTEM FOR DIAGNOSTICS SUPPORT - A method for displaying a diagnostic image acquires the diagnostic digital image and applies one or more pattern recognition algorithms to the acquired diagnostic digital image, detecting at least one feature within the acquired diagnostic digital image. At least a portion of the acquired diagnostic digital image is displayed with a marking at the location of the at least one detected feature. The at least one detected feature is displayed under a first set of image display settings for a first interval, then under at least a second set of image display settings for a second interval. | 09-30-2010 |
20100246885 | SYSTEM AND METHOD FOR MONITORING MOTION OBJECT - A motion object monitoring system captures images of monitored objects in a monitored area, and assigns numbers to the monitored objects according to specific features of the monitored objects. The specific features of the monitored objects are obtained by analyzing the captured images. Only one number is stored for each of the monitored objects, instead of repeatedly storing numbers for the same motion objects. The motion object monitoring system analyzes the stored numbers, and displays an analysis result. The motion object monitoring system also determines a movement of each of the motion objects according to the corresponding numbers of the motion objects. | 09-30-2010 |
20100246886 | MOVING OBJECT IMAGE TRACKING APPARATUS AND METHOD - An apparatus includes a first computation unit computing first angular-velocity instruction values for driving first and second rotation units to track a moving object, using a detected tracking error and detected angles, when the moving object exists in a first range separated from the zenith by at least a preset distance, a second computation unit computing second angular-velocity instruction values for driving the first and second rotation units to track the moving object and avoid the zenith singular point, using the detected angles, the detected tracking error and an estimated traveling direction, and a control unit controlling the first and second rotation units to eliminate differences between the first angular-velocity instruction values and the angular velocities when the moving object exists in the first range, and controlling the first and second rotation units to eliminate differences between the second angular-velocity instruction values and the angular velocities when the moving object exists in a second range within the preset distance from the zenith. | 09-30-2010 |
20100246887 | METHOD AND APPARATUS FOR OBJECT TRACKING - There is described an apparatus and method for tracking objects in video. In particular, there is described a method and apparatus that improves the realism of the object in the captured scene. This improvement is effected by identifying a first and last frame in a video and subjecting the detected path of the object to a correcting function which improves the output positional data. | 09-30-2010 |
20100246888 | IMAGING APPARATUS, IMAGING METHOD AND COMPUTER PROGRAM FOR DETERMINING AN IMAGE OF A REGION OF INTEREST - The present invention relates to an imaging apparatus for determining an image of a region of interest, wherein a motion generation unit ( | 09-30-2010 |
20100254571 | FACE IMAGE PICKUP DEVICE AND METHOD - There are provided a face image pickup device and a face image pickup method which can stably acquire a face image by appropriate illumination, and a program thereof. The face image pickup device comprises a camera which picks up an image of a face of a target person, an illumination light source which illuminates the face of the target person with near-infrared light having an arbitrary light amount, and a computer. The computer detects an area including an eye from the face image of the target person picked up by the camera. The computer measures a brightness distribution in the detected area. Thereafter, the computer controls the illumination light source so as to change the amount of near-infrared light based on the measured brightness distribution. | 10-07-2010 |
20100254572 | CONTINUOUS EXTENDED RANGE IMAGE PROCESSING - Methods and systems for image processing are provided. A method for processing images of a scene includes receiving image data of a reference and a current frame; generating N motion vectors that describe motion of the image data within the scene by computing a correlation function on the reference and current frames at each of N registration points; registering the current frame based on the N motion vectors to produce a registered current frame; and updating the image data of the scene based on the registered current frame. Optionally, registered frames may be oversampled. Techniques for generating the N motion vectors according to roll, zoom, shift and optical flow calculations, updating image data of the scene according to switched and intermediate integration approaches, re-introducing smoothed motion into image data of the scene, re-initializing the process, and processing images of a scene and moving target within the scene are provided. | 10-07-2010 |
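As a partial illustration of the registration step in the entry above, the sketch below computes a motion vector at each registration point by phase correlation on a local patch and undoes the median shift. It covers only the translational part of the roll/zoom/shift/optical-flow model, and the patch size, the median combination, the sign convention, and the OpenCV-based realisation are assumptions.

```python
import cv2
import numpy as np

def register_by_shift(reference, current, points, patch=64):
    """Estimate one motion vector per registration point by phase correlation
    on local patches, then warp the current frame by the median shift.
    'points' are assumed to be (x, y) patch centres well inside the image."""
    shifts = []
    half = patch // 2
    for (x, y) in points:
        r = np.float32(reference[y - half:y + half, x - half:x + half])
        c = np.float32(current[y - half:y + half, x - half:x + half])
        (dx, dy), _ = cv2.phaseCorrelate(r, c)
        shifts.append((dx, dy))
    dx, dy = np.median(np.array(shifts), axis=0)
    warp = np.float32([[1, 0, -dx], [0, 1, -dy]])  # undo the estimated shift
    registered = cv2.warpAffine(current, warp, (current.shape[1], current.shape[0]))
    return registered, shifts
```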
20100260376 | MAPPER COMPONENT FOR MULTIPLE ART NETWORKS IN A VIDEO ANALYSIS SYSTEM - Techniques are disclosed for detecting the occurrence of unusual events in a sequence of video frames. Importantly, what is determined as unusual need not be defined in advance, but can be determined over time by observing a stream of primitive events and a stream of context events. A mapper component may be configured to parse the event streams and supply input data sets to multiple adaptive resonance theory (ART) networks. Each individual ART network may generate clusters from the set of input data supplied to that ART network. Each cluster represents an observed statistical distribution of a particular thing or event being observed by that ART network. | 10-14-2010 |
20100260377 | MOBILE DETECTOR, MOBILE DETECTING PROGRAM, AND MOBILE DETECTING METHOD - When a mobile is detected using an imaging device installed in the mobile, the image of a partial area is enlarged or reduced depending on variation in the distance to the detection object mobile and is then compared under a fixed scale, thus causing an increase in computation cost. In order to eliminate the need for an enlargement/reduction processing or a deformation correction processing every time collation is performed, an input image is converted into a virtual plane image in which the size and shape of the detection object mobile do not vary with the distance between the mobiles. Using a pair of virtual plane images obtained at two different times, corresponding points are established and the mobile is detected based on the displacement of the corresponding points. | 10-14-2010 |
20100260378 | SYSTEM AND METHOD FOR DETECTING THE CONTOUR OF AN OBJECT ON A MOVING CONVEYOR BELT - A system for detecting the contour of an object situated on a surface includes an image acquisition assembly, wherein there is relative motion between the image acquisition assembly and the object. The image acquisition assembly includes a line detector, operable for scanning the surface line by line by virtue of the relative motion. Each line is scanned during a scan cycle, the line being transverse to the direction of the relative motion. A light source is operable for emitting light toward the line detector during active periods between idle periods, such that during each of the active periods the light is emitted for at least one cycle synchronized with the scan cycle, allowing the line detector to acquire a first group of at least one lit scan line. During each of the idle periods, lasting for at least another cycle synchronized with the scan cycle, no light is emitted, allowing the line detector to acquire a second group of at least one unlit scan line. The object passes between the line detector and the light source by virtue of the relative motion. A processor is coupled with the image acquisition assembly and receives and analyzes scan lines acquired by the line detector. For each of the first group of at least one lit scan line and a successive one of the second group of at least one unlit scan line, the processor identifies a token pattern consisting of a lit segment of the first group adjoining an unlit segment of the second group. The processor searches along the first group and the successive second group for locations where the token pattern ends or reappears, thereby defining edges of the object, and combines the collection of the defined edges to produce a contour of the object. | 10-14-2010 |
20100260379 | Image Processing Apparatus And Image Sensing Apparatus - A tracking process portion includes a search area setting portion for setting a search area in the input image, an image analysis portion for analyzing an image in the search area, an auxiliary track value setting portion for setting an auxiliary track value based on a result of the analysis, a track value setting portion for setting a track value based on a result of the analysis and deciding whether the set track value is correct or not, and a track target detection portion for detecting a track object from the image in the search area based on the track value. If the set track value is incorrect, the track value setting portion performs a switching operation for setting the auxiliary track value as the track value. | 10-14-2010 |
20100260380 | DEVICE FOR OPTICALLY MEASURING AND/OR TESTING OBLONG PRODUCTS - A device for optically measuring and/or testing oblong products moving in a longitudinal direction. The device includes a plurality of cameras arranged in a plane perpendicular to the longitudinal direction, and distributed around the longitudinal direction. Each of the cameras has a fixed focus. The device further includes a displacing device adapted to displace each of the cameras simultaneously and jointly over the same distance toward the surface of the oblong product to focus on the oblong product, wherein the device defines a center that is located in the plane. | 10-14-2010 |
20100260381 | SUBJECT TRACKING DEVICE AND CAMERA - A subject tracking device includes: a first similarity factor calculation unit that compares an input image assuming characteristics quantities corresponding to a plurality of characteristics components, with a template image assuming characteristics quantities corresponding to the plurality of characteristics components, and calculates a similarity factor indicating a level of similarity between the input image and the template image in correspondence to each of the plurality of characteristics components; a normalization unit that normalizes similarity factors corresponding to the plurality of characteristics components having been calculated by the first similarity factor calculation unit; and a second similarity factor calculation unit that calculates a similarity factor indicating a level of similarity between the input image and the template image based upon results of normalization achieved via the normalization unit. | 10-14-2010 |
20100266158 | SYSTEM AND METHOD FOR OPTICALLY TRACKING A MOBILE DEVICE - A system and method for optically tracking a mobile device uses a first displacement value along a first direction and a second displacement value along a second direction, which are produced using frames of image data of a navigation surface, to compute first and second tracking values that indicate the current position of the mobile device. The first tracking value is computed using the second displacement value and the sine of a tracking angle value, while the second tracking value is computed using the second displacement value and the cosine of the tracking angle value. The tracking angle value is an angle value derived using at least one previous second displacement value. | 10-21-2010 |
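The position update described in the entry above is a simple trigonometric step; a minimal sketch follows. How the two tracking values accumulate into coordinates, and the variable names, are assumptions; only the sine/cosine use of the second displacement comes from the abstract.

```python
import math

def update_tracking(x, y, d2, tracking_angle_rad):
    """One update step: the first tracking value uses the second displacement
    and the sine of the tracking angle, the second uses its cosine."""
    x += d2 * math.sin(tracking_angle_rad)
    y += d2 * math.cos(tracking_angle_rad)
    return x, y
```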
20100266159 | HUMAN TRACKING APPARATUS, HUMAN TRACKING METHOD, AND HUMAN TRACKING PROCESSING PROGRAM - A human tracking apparatus and method capable of highly accurately tracking the movement of persons photographed in moving images includes: an image memory | 10-21-2010 |
20100266160 | Image Sensing Apparatus And Data Structure Of Image File - An image sensing apparatus includes an image sensing portion which generates image data of an image by image sensing, and a record control portion which records image data of a main image generated by the image sensing portion together with main additional information obtained from the main image in a recording medium, in which the record control portion records sub additional information obtained from a sub image taken at a timing different from that of the main image in the recording medium in association with the image data of the main image and the main additional information. | 10-21-2010 |
20100266161 | METHOD AND APPARATUS FOR PRODUCING LANE INFORMATION - A method of producing lane information for use in a map database is disclosed. In at least one embodiment, the method includes acquiring one or more source images of a road surface and associated position and orientation data, the road having a direction and lane markings parallel to the direction of the road; acquiring road information representative of the direction of said road; transforming the one or more source images to obtain a transformed image in dependence of the road information, wherein each column of pixels of the transformed image corresponds to a surface parallel to the direction of said road; applying a filter with asymmetrical mask on the transformed image to obtain a filtered image; and producing lane information from the filtered image in dependence of the position and orientation data associated with the one or more source images. | 10-21-2010 |
20100266162 | Methods, Systems, And Computer Program Products For Protecting Information On A User Interface Based On A Viewability Of The Information - Methods, systems, and computer program products for protecting information on a user interface based on a viewability of the information are disclosed. According to one method, a viewing position of a person other than a user with respect to information on a user interface is identified. An information viewability threshold is determined based on the information on the user interface. Further, an action associated with the user interface is performed based on the identified viewing position and the determined information viewability threshold. | 10-21-2010 |
20100272314 | OBSTRUCTION DETECTOR - An optical reader of a form is discussed where the form has a stored known boundary or boundaries. When the boundaries in a captured image do not match those of the stored known boundaries, it may be determined that an obstruction exists that will interfere with a correct reading of the form. The boundary may be printed, blank, and may include quiet areas, or combinations thereof in stored known patterns. A captured image of the form is compared to retrieved, stored boundary information and differences are noted. The differences may be thresholded to determine if an obstruction exists. If an obstruction is detected, the operator may be signaled, and the location may be displayed or highlighted. The form may be discarded or obstruction may be cleared and the form may be re-processed. | 10-28-2010 |
20100272315 | Automatic Measurement of Morphometric and Motion Parameters of the Coronary Tree From A Rotational X-Ray Sequence - Automatic measurement of morphometric and motion parameters of a coronary target includes extracting reference frames from input data of a coronary target at different phases of a cardiac cycle, extracting a three-dimensional centerline model for each phase of the cardiac cycle based on the reference frames and projection matrices of the coronary target, tracking a motion of the coronary target through the phases based on the three-dimensional centerline models, and determining a measurement of morphologic and motion parameters of the coronary target based on the motion. | 10-28-2010 |
20100272316 | Controlling An Associated Device - In an illustrative embodiment a computer-implemented process for controlling an associated device utilizing an automated location tracking and control system to produce an action associates a target with a blind node having a wireless transmitter, wherein the target moves within a predetermined area among a set of reference nodes. The computer-implemented process performs a continuous data acquisition based on a target movement data, wherein the continuous data acquisition is repeated within a predetermined interval, performs a continuous calculation of a target location using the target movement to form target location vectors, wherein the continuous calculation is repeated within the predetermined interval, performs a transmission of current coordinate information using the target location vectors, and transforms received current coordinate information into a device control code, wherein the device control code is a set of voltages. The computer-implemented process transmits the device control code to an associated device, and responsive to the device control code, controls an action on the associated device in real time, wherein the action is directed to the tracked object. | 10-28-2010 |
20100278383 | SYSTEM AND METHOD FOR RECOGNITION OF A THREE-DIMENSIONAL TARGET - A system for recognition of a target three-dimensional object is disclosed. The system may include a photon-counting detector and a three-dimensional integral imaging system. The three-dimensional integral imaging system may be positioned between the photon-counting detector and the target three-dimensional object. | 11-04-2010 |
20100278384 | Human body pose estimation - Techniques for human body pose estimation are disclosed herein. Depth map images from a depth camera may be processed to calculate a probability that each pixel of the depth map is associated with one or more segments or body parts of a body. Body parts may then be constructed of the pixels and processed to define joints or nodes of those body parts. The nodes or joints may be provided to a system which may construct a model of the body from the various nodes or joints. | 11-04-2010 |
20100278385 | FACIAL EXPRESSION RECOGNITION APPARATUS AND FACIAL EXPRESSION RECOGNITION METHOD THEREOF - A facial expression recognition apparatus and a facial expression recognition method thereof are provided. The facial expression recognition apparatus comprises a gray image generating unit, a face edge detection unit, a motion skin extraction unit, a face contour generating unit and a facial expression recognition unit. The gray image generating unit generates a gray image according to an original image. The face edge detection unit outputs a face edge detection result according to the gray image. The motion skin extraction unit generates a motion skin extraction result according to the original image, and generates a face and background division result according to the motion skin extraction result. The face contour generating unit outputs a face contour according to the gray image, the face edge detection result and the face and background division result. The facial expression recognition unit outputs a facial expression recognition result according to the face contour. | 11-04-2010 |
20100278386 | VIDEOTRACKING - A method for tracking an object in a sequence of video frames includes the following steps: creating a model with characteristic features for the object to be tracked; and performing a template matching algorithm in individual frames on the basis of the created model for determining a position of the object in the respective frame. An apparatus arrangement for performing the method includes at least one video camera. | 11-04-2010 |
20100278387 | Passive Electro-Optical Tracker - A passive electro-optical tracker uses a two-band IR intensity ratio to discriminate high-speed projectiles and obtain a speed estimate from their temperature, as well as determining the trajectory back to the source of fire. In an omnidirectional system a hemispheric imager with an MWIR spectrum splitter forms two CCD images of the environment. Three methods are given to determine the azimuth and range of a projectile, one for clear atmospheric conditions and two for nonhomogeneous atmospheric conditions. The first approach uses the relative intensity of the image of the projectile on the pixels of a CCD camera to determine the azimuthal angle of trajectory with respect to the ground, and its range. The second calculates this angle using a different algorithm. The third uses a least squares optimization over multiple frames based on a triangle representation of the smeared image to yield a real-time trajectory estimate. | 11-04-2010 |
20100278388 | SYSTEM AND METHOD FOR GENERATING A DYNAMIC BACKGROUND - A system and methodology that counts a number of moving objects, including pedestrians, within predetermined areas. According to certain embodiments, a system comprises an image sensing device and a data processing device. The image sensing device is situated at a predetermined area. The image sensing device retrieves a series of images of the moving objects within the predetermined area. The data processing device is coupled to the image sensing device. The data processing device processes the retrieved images to generate a dynamic background of the predetermined area and determine a flow of the moving objects thereon. | 11-04-2010 |
20100284565 | Method and apparatus for fingerprint motion tracking using an in-line array - A fingerprint motion tracking method and system is provided for sensing features of a fingerprint along an axis of finger motion, where a linear sensor array has a plurality of substantially contiguous sensing elements configured to capture substantially contiguous overlapping segments of image data. A processing element is configured to receive segments of image data captured by the linear sensor array and to generate fingerprint motion data. Multiple sensor arrays may be included for generating directional data. The motion tracking data may be used in conjunction with a fingerprint image sensor to reconstruct a fingerprint image using the motion data either alone or together with the directional data. | 11-11-2010 |
20100284566 | PICTURE DATA MANAGEMENT APPARATUS AND PICTURE DATA MANAGEMENT METHOD - A landmark used as a key for organizing images captured by, e.g., a digital camera is adequately selected. An association degree adding section … | 11-11-2010 |
20100284567 | SYSTEM AND PRACTICE FOR SURVEILLANCE PRIVACY-PROTECTION CERTIFICATION AND REGISTRATION - There is provided an apparatus for the certification of privacy compliance. The apparatus includes a registry of at least one of enrolled video surveillance operators, approved surveillance hardware devices, approved surveillance software programs, approved surveillance system installers, and approved entities that manage surveillance systems. The apparatus further includes a registry searcher, in signal communication with the registry, for receiving queries to the registry, and for determining whether at least one of a particular surveillance operator, a particular surveillance hardware device, a particular surveillance software program, a particular surveillance system installer, and a particular entity that manages a particular surveillance system is on the registry based on a given query. | 11-11-2010 |
20100284568 | OBJECT RECOGNITION APPARATUS AND OBJECT RECOGNITION METHOD - An object recognition apparatus recognizes an object from video data generated by a camera over a predetermined time period, analyzes the recognition result, and determines a minimum size and moving speed of the faces recognized in the received frame images. Then, the object recognition apparatus determines a lower limit value of the frame rate and resolution from the determined minimum size and moving speed of the faces. | 11-11-2010 |
20100284569 | LANE RECOGNITION SYSTEM, LANE RECOGNITION METHOD, AND LANE RECOGNITION PROGRAM - To provide a lane recognition system which can improve the lane recognition accuracy by suppressing noises that are likely to be generated respectively in an original image and a bird's-eye image. The lane recognition system recognizes a lane based on an image. The system includes: a synthesized bird's-eye image creation module which creates a synthesized bird's-eye image by connecting a plurality of bird's-eye images that are obtained by transforming respective partial regions of original images picked up at a plurality of different times into bird's-eye images; a lane line candidate extraction module which detects a lane line candidate by using information of the original images or the bird's-eye images created from the original images, and the synthesized bird's-eye image; and a lane line position estimation module which estimates a lane line position based on information of the lane line candidate. | 11-11-2010 |
20100284570 | SYSTEM AND METHOD FOR GAS LEAKAGE DETECTION - Imaging system and method for detecting the presence of a substance that has a detectable signature in a known spectral band. The system comprises a thermal imaging sensor and optics, and two interchangeable band-pass uncooled filters located between the optics and the detector. A first filter transmits electromagnetic radiation in a first spectral band that includes the known spectral band and blocks electromagnetic radiation for other spectral bands. A second filter transmits only electromagnetic radiation in a second spectral band in which the substance has no detectable signature. The system also includes a processor for processing the images to obtain a reconstructed fused image involving using one or more transforms aimed at obtaining similarity between one or more images acquired with the first filter and one or more images acquired with the second filter before reconstructing the fused image. | 11-11-2010 |
20100290668 | LONG DISTANCE MULTIMODAL BIOMETRIC SYSTEM AND METHOD - A system for multimodal biometric identification has a first imaging system that detects one or more subjects in a first field of view, including a targeted subject having a first biometric characteristic and a second biometric characteristic; a second imaging system that captures a first image of the first biometric characteristic according to first photons, where the first biometric characteristic is positioned in a second field of view smaller than the first field of view, and the first image includes first data for biometric identification; a third imaging system that captures a second image of the second biometric characteristic according to second photons, where the second biometric characteristic is positioned in a third field of view which is smaller than the first and second fields of view, and the second image includes second data for biometric identification. At least one active illumination source emits the second photons. | 11-18-2010 |
20100290669 | IMAGE JUDGMENT DEVICE - The present invention provides an image judgment device that can prevent increase in a storage capacity to store element characteristic information. The image judgment device stores the element characteristic information for each element that a characteristic part of a sample object has and first and second positional information defining a position of each element, selects either the first or the second positional information, acquires image characteristic information for a partial image that is in an image frame and considered as an element specified by the first positional information in a characteristic extraction method based on a first axis when the first positional information is selected, extracts image characteristic information for a partial image that is in an image frame and considered as an element specified by the second positional information in a characteristic extraction method based on a second axis, which is acquired by rotating the first axis, when the second positional information is selected, specifies element characteristic information for an element corresponding to a position of the partial image, and judges whether or not the characteristic part appears in the image frame with use of the specified element characteristic information and the extracted image characteristic information. | 11-18-2010 |
20100290670 | IMAGE PROCESSING APPARATUS, DISPLAY DEVICE, AND IMAGE PROCESSING METHOD - According to one embodiment, an image processing apparatus includes an extracted coordinates setting module, an image generator, and an output module. The extracted coordinates setting module sets extracted coordinates in a captured image along a direction in which a viewpoint moves with respect to an object in the captured image. The image generator sequentially extracts partial areas from the captured image in which perspective deformation of the object has been corrected based on the extracted coordinates, and generates a plurality of partial area images from the partial areas. The partial areas are in a size corresponding to the viewing angle of the human eye calculated according to an angle of view of the captured image. The output module outputs a moving image including the partial area images as frames. | 11-18-2010 |
20100290671 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An association degree evaluation unit acquires pieces of position information of an image sensing apparatus at respective times within an adjacent time range to an imaging time of a designated image of those sensed by the image sensing apparatus. Furthermore, the association degree evaluation unit acquires pieces of position information of a moving object at the respective times within the adjacent time range. Then, the association degree evaluation unit calculates a similarity between routes of the image sensing apparatus and moving object based on the acquired position information group, and decides a degree of association between the designated image and moving object based on the calculated similarity. An associating unit registers information indicating the degree of association in association with the designated image. | 11-18-2010 |
20100290672 | MOVING OBJECT DETECTING DEVICE, MOVING OBJECT DETECTING METHOD, AND COMPUTER PROGRAM - An apparatus for detecting movement of an object captured by an imaging device, the apparatus includes a moving object detection unit, that is (1) operable to detect movement of an object based on a first moving object detecting process, and (2) operable to detect movement of the object based on a second moving object detecting process. The apparatus also includes an output unit operable to generate an output based on the detection by the moving object detection unit based on at least one of the first and second moving object detecting processes. | 11-18-2010 |
20100290673 | IMAGE PROCESSING DEVICE, ELECTRONIC INSTRUMENT, AND INFORMATION STORAGE MEDIUM - An image processing device includes a weighted image generation section that generates a weighted image in which at least one of an object-of-interest area of an input image and an edge of a background area other than the object-of-interest area is weighted, a composition grid generation section that generates a composition grid that includes grid lines that are weighted, and a composition evaluation section that performs composition evaluation calculations on the input image based on the weighted image and the composition grid. | 11-18-2010 |
20100296697 | OBJECT TRACKER AND OBJECT TRACKING METHOD - Referring to FIG. | 11-25-2010 |
20100296698 | MOTION OBJECT DETECTION METHOD USING ADAPTIVE BACKGROUND MODEL AND COMPUTER-READABLE STORAGE MEDIUM - A motion object detection method using an adaptive background model and a computer-readable storage medium are provided. In the motion object detection method, a background model establishing step is firstly performed to establish a background model that provides a plurality of background brightness reference values. Then, a foreground object detecting step is performed to use the background model to detect foreground objects. In the background model establishing step, a plurality of brightness weight values are firstly provided in accordance with the brightness of the background pixels, wherein each of the brightness weight values is determined in accordance with the corresponding background pixel. Thereafter, the background brightness reference values are calculated based on the brightness of the background pixels and the brightness weight values. In addition, a computer can perform the motion object detection method after reading the computer-readable storage medium. | 11-25-2010 |
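One plausible reading of the brightness-weighted background model described above is a per-pixel weighted average of background samples, with each sample's weight derived from its own brightness. The sketch below follows that reading; the specific weighting function, the foreground threshold, and the names brightness_weights and background_reference are illustrative assumptions.

```python
import numpy as np

def brightness_weights(frames: np.ndarray) -> np.ndarray:
    """Assign a weight to every background pixel sample based on its
    brightness (brighter samples weighted more, purely as an
    illustrative choice)."""
    return 0.5 + 0.5 * (frames.astype(np.float64) / 255.0)

def background_reference(frames: np.ndarray) -> np.ndarray:
    """Weighted per-pixel brightness reference from a stack of
    background frames of shape (n_frames, H, W)."""
    w = brightness_weights(frames)
    return (w * frames).sum(axis=0) / w.sum(axis=0)

def foreground_mask(frame: np.ndarray, reference: np.ndarray,
                    threshold: float = 25.0) -> np.ndarray:
    """Mark pixels whose brightness departs from the reference."""
    return np.abs(frame.astype(np.float64) - reference) > threshold

# Example with synthetic data.
rng = np.random.default_rng(0)
bg_stack = rng.normal(120, 5, size=(30, 48, 64)).clip(0, 255)
ref = background_reference(bg_stack)
test = bg_stack[0].copy()
test[10:20, 10:20] += 80                    # a bright moving object
print(foreground_mask(test, ref).sum())     # non-zero: object detected
```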
20100296699 | APPARATUS AND METHOD OF IMAGE ANALYSIS - A method of analysing a captured image comprising an instance of a target object comprises the steps of: for each of a plurality of different brightness threshold levels, generating contours from the captured digital image that indicate where in the captured digital image the pixel values of the captured digital image cross the respective brightness threshold level; identifying instances of a contour corresponding to a characteristic feature of said target object, the instances being detected at substantially similar image positions in the contours derived using at least two of the respective brightness threshold levels; and estimating a homography which maps the characteristic feature of the target object to its representation in the captured image, based upon the two or more instances of that target object's corresponding contour. | 11-25-2010 |
20100296700 | METHOD AND DEVICE FOR DETECTING THE COURSE OF A TRAFFIC LANE - A method for detecting the course of a traffic lane, including the following steps: | 11-25-2010 |
20100296701 | PERSON TRACKING METHOD, PERSON TRACKING APPARATUS, AND PERSON TRACKING PROGRAM STORAGE MEDIUM - A person tracking method capable of tracking movements of a person captured by a camera through lighter processing, in comparison with tracking processing that employs a Kalman filter or the like, is provided. The method includes: detecting a head on each frame image; calculating a feature quantity that characterizes a person whose head is detected on the frame images; calculating a relevance ratio that represents a degree of agreement between a feature quantity on a past frame image and a feature quantity on the current frame image, for each person whose head is detected on the current frame image; and determining that a head whose relevance ratio represents a degree of agreement that is at least a first threshold and is the maximum degree of agreement belongs to the same person as the head on the past frame image. | 11-25-2010 |
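The association step in this abstract (match a detected head to a past person when the relevance ratio is at least a first threshold and is the maximum) can be sketched as follows, with cosine similarity standing in for the unspecified feature comparison; the threshold value and the function names are assumptions.

```python
import numpy as np

def relevance_ratio(f_past: np.ndarray, f_now: np.ndarray) -> float:
    """Degree of agreement between two feature vectors; cosine
    similarity stands in for whatever feature comparison is used."""
    return float(np.dot(f_past, f_now) /
                 (np.linalg.norm(f_past) * np.linalg.norm(f_now) + 1e-9))

def associate_heads(past_features: dict, current_features: list,
                    threshold: float = 0.8) -> dict:
    """Map each current head index to a past person id when the best
    relevance ratio meets the threshold; otherwise start a new id."""
    assignments, next_id = {}, max(past_features, default=-1) + 1
    for i, f_now in enumerate(current_features):
        scored = [(relevance_ratio(f_past, f_now), pid)
                  for pid, f_past in past_features.items()]
        best = max(scored, default=(0.0, None))
        if best[0] >= threshold:
            assignments[i] = best[1]          # same person as before
        else:
            assignments[i] = next_id          # newly appeared person
            next_id += 1
    return assignments

past = {0: np.array([0.9, 0.1, 0.2]), 1: np.array([0.1, 0.8, 0.3])}
current = [np.array([0.88, 0.12, 0.21]), np.array([0.2, 0.2, 0.9])]
print(associate_heads(past, current))   # {0: 0, 1: 2}
```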
20100296702 | PERSON TRACKING METHOD, PERSON TRACKING APPARATUS, AND PERSON TRACKING PROGRAM STORAGE MEDIUM - A person tracking method capable of obtaining information representing a correspondence between a shot image and a three-dimensional real space, without actual measurement, thereby enabling lighter processing, is provided. The method includes: calculating a statistically average correspondence between the size of a person's head and a position representing the height of the head on the shot image, the camera looking down on and imaging the measured space; detecting a position and a size of a head on each of the measured frame images; calculating, based on positions and sizes of heads on plural past measured frame images and the correspondence, a movement feature quantity representing a possibility that a head on the current measured frame image belongs to the same person as on the past measured frame images; and determining that the head on the current measured frame image is of the same person as on the past measured frame images. | 11-25-2010 |
20100296703 | METHOD AND DEVICE FOR DETECTING AND CLASSIFYING MOVING TARGETS - Horizontal velocity profile sensing techniques, methods and systems may be used to detect and classify moving targets, including but not limited to a person, an animal, or a vehicle, or any other object that lends itself to characterization. Such techniques, methods and systems may be implemented with an autonomous stand-alone device, for example, as an unattended ground sensor, or it may constitute part of a sensor system. An exemplary illustrative non-limiting implementation allows the device to be fixed to a location, while detecting and classifying moving targets. In another exemplary illustrative non-limiting implementation, the device may be placed on a moving or rotating platform and used to detect stationary objects. | 11-25-2010 |
20100296704 | SYSTEM AND METHOD FOR ANALYZING VIDEO FROM NON-STATIC CAMERA - A novel system and method for treating the output of moving cameras, in particular one that enables the application of conventional “static camera” algorithms, e.g., so that the continuous vigilance of computer surveillance technology can be applied to moving cameras that cover a wide area. According to the invention, a single camera is deployed to cover an area that might otherwise require many static cameras and a corresponding number of processing units. A novel system for processing the video enables long-term change detection, particularly the observation that a static object has been moved or has appeared, for instance detecting the parking and departure of vehicles in a parking lot, the arrival of trains in stations, delivery of goods, arrival and dispersal of people, or any other application. | 11-25-2010 |
20100303289 | DEVICE FOR IDENTIFYING AND TRACKING MULTIPLE HUMANS OVER TIME - A system recognizes human beings in their natural environment, without special sensing devices attached to the subjects, uniquely identifies them and tracks them in three dimensional space. The resulting representation is presented directly to applications as a multi-point skeletal model delivered in real-time. The device efficiently tracks humans and their natural movements by understanding the natural mechanics and capabilities of the human muscular-skeletal system. The device also uniquely recognizes individuals in order to allow multiple people to interact with the system via natural movements of their limbs and body as well as voice commands/responses. | 12-02-2010 |
20100303290 | Systems And Methods For Tracking A Model - An image such as a depth image of a scene may be received, observed, or captured by a device and a model of a user in the depth image may be generated. The background of a received depth image may be removed to isolate a human target in the received depth image. A model may then be adjusted to fit within the isolated human target in the received depth image. To adjust the model, a joint or a bone may be magnetized to the closest pixel of the isolated human target. The joint or the bone may then be refined such that it is further adjusted to a pixel equidistant between two edges of the body part of the isolated human target to which the joint or bone was magnetized. | 12-02-2010 |
20100303291 | Virtual Object - An image of a scene may be observed, received, or captured. The image may then be scanned to determine one or more signals emitted or reflected by an indicator that belongs to an input object. Upon determining the one or more signals, the signals may be grouped together into a cluster that may be used to generate a first vector that may indicate the orientation of the input object in the captured scene. The first vector may then be tracked, a virtual object and/or an avatar associated with the first vector may be rendered, and/or controls to perform in an application executing on the computer environment may be determined based on the first vector. | 12-02-2010 |
20100303292 | APPARATUS AND METHOD FOR DETECTING MOVEMENT DIRECTION OF OBJECT - An apparatus for detecting the movement direction of an object includes a converging lens, an image sensor and an image processor. The converging lens has an axial chromatic aberration between a first and a second ray of different wavelengths. The image sensor receives and converts the first and second rays into first and second electronic image signals associated with the object. The image processor is configured to analyze whether the object is closer to an object plane associated with the first ray or to an object plane associated with the second ray when the object moves to different positions, and to determine the movement direction of the object based on the analyzed positions of the object relative to the object plane associated with the first ray and the object plane associated with the second ray. | 12-02-2010 |
20100303293 | System and Method for Linking Real-World Objects and Object Representations by Pointing - A system and method are described for selecting and identifying a unique object or feature in the system user's three-dimensional (“3-D”) environment in a two-dimensional (“2-D”) virtual representation of the same object or feature in a virtual environment. The system and method may be incorporated in a mobile device that includes position and orientation sensors to determine the pointing device's position and pointing direction. The mobile device incorporating the present invention may be adapted for wireless communication with a computer-based system that represents static and dynamic objects and features that exist or are present in the system user's 3-D environment. The mobile device incorporating the present invention will also have the capability to process information regarding a system user's environment and to calculate specific measures for pointing accuracy and reliability. | 12-02-2010 |
20100303294 | Method and Device for Finding and Tracking Pairs of Eyes - A method for finding and subsequently tracking the 3-D coordinates of a pair of eyes in at least one face, including receiving image data, which contains a sequence of at least one digital video signal of at least one image sensor, finding eyes or tracking previously found eyes in the image data, ascertaining the 3-D coordinates of the found or tracked eyes, associating the found or tracked eyes with a pair of eyes and providing the 3-D coordinates of the pair of eyes. | 12-02-2010 |
20100303295 | X-Ray Monitoring - Apparatus for monitoring in real time the movement of a plurality of substances in a mixture, such as oil, water and air flowing through a pipe, comprises an X-ray scanner arranged to make a plurality of scans of the mixture over a monitoring period to produce a plurality of scan data sets, and control means arranged to analyze the data sets to identify volumes of each of the substances and to measure their movement. By identifying volumes of each of the substances in each of a number of layers and for each of a number of scans, real-time analysis and imaging of the substance can be achieved. | 12-02-2010 |
20100303296 | MONITORING CAMERA SYSTEM, MONITORING CAMERA, AND MONITORING CAMERA CONTROL APPARATUS - A system includes a plurality of image capturing units configured to capture an object image to generate video data, a video coding unit configured to code each of the generated video data, a measurement unit configured to measure a recognition degree representing a feature of the object from each of the generated video data, and a control unit configured to control the video coding unit to code each of the video data based on the measured recognition degree. | 12-02-2010 |
20100303297 | COLOR CALIBRATION FOR OBJECT TRACKING - To calibrate a tracking system a computing device locates an object in one or more images taken by an optical sensor. The computing device determines environment colors included in the image, the environment colors being colors in the one or more images that are not emitted by the object. The computing device determines one or more trackable colors that, if assumed by the object, will enable the computing device to track the object. | 12-02-2010 |
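A minimal sketch of the calibration idea, collecting the colors present in the environment and keeping only candidate colors far from all of them, might look like the following; the OpenCV-style hue range (0 to 179), the separation margin, and the function names are assumptions rather than the patent's procedure.

```python
import numpy as np

def environment_hues(image_hsv: np.ndarray, object_mask: np.ndarray) -> np.ndarray:
    """Hues observed in the scene outside the tracked object."""
    return np.unique(image_hsv[..., 0][~object_mask])

def trackable_colors(candidate_hues, env_hues, min_separation: int = 20):
    """Keep candidate hues that stay far (on the circular hue axis,
    0-179 as in OpenCV's HSV convention) from every environment hue."""
    keep = []
    for h in candidate_hues:
        d = np.abs(env_hues.astype(int) - int(h))
        d = np.minimum(d, 180 - d)              # circular hue distance
        if d.min() >= min_separation:
            keep.append(h)
    return keep

# Example: a mostly reddish room; green and cyan remain trackable.
rng = np.random.default_rng(1)
hsv = np.zeros((10, 10, 3), dtype=np.uint8)
hsv[..., 0] = rng.integers(0, 15, size=(10, 10))     # reddish environment
mask = np.zeros((10, 10), dtype=bool)                # no object pixels yet
print(trackable_colors([0, 60, 90, 170], environment_hues(hsv, mask)))  # [60, 90]
```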
20100303298 | SELECTIVE SOUND SOURCE LISTENING IN CONJUNCTION WITH COMPUTER INTERACTIVE PROCESSING - A method and apparatus for capturing image and sound during interactivity with a computer program is provided. The apparatus includes an image capture unit that is configured to capture one or more image frames. Also provided is a sound capture unit. The sound capture unit is configured to identify one or more sound sources. The sound capture unit generates data capable of being analyzed to determine a zone of focus at which to process sound to the substantial exclusion of sounds outside of the zone of focus. In this manner, sound that is captured and processed for the zone of focus is used for interactivity with the computer program. | 12-02-2010 |
20100310120 | METHOD AND SYSTEM FOR TRACKING MOVING OBJECTS IN A SCENE - A method and system for tracking moving objects in a scene is described. One embodiment acquires a digital video signal corresponding to the scene; identifies in the digital video signal one or more candidate moving objects; locates at least one candidate moving object in the digital video signal subsequent to identification of the at least one candidate moving object; tracks candidate moving objects that, for at least a predetermined period after they have been identified, continue to be located in the digital video signal; assigns a score to each tracked candidate moving object in accordance with how long after passage of the predetermined period the tracked candidate moving object has continued to be located in the digital video signal; combines the respective scores of the tracked candidate moving objects to obtain an overall score for the scene; and indicates to a user whether the overall score satisfies a predetermined criterion. | 12-09-2010 |
20100310121 | System and method for passive automatic target recognition (ATR) - A passive automatic target recognition (ATR) system includes a range map processor configured to generate range-to-pixel map data based on digital elevation map data and parameters of a passive image sensor. The passive image sensor is configured to passively acquire image data. The passive ATR system also includes a detection processor configured to identify a region of interest (ROI) in the passively acquired sensor image data based on the range-to-pixel map data, and an ATR processor configured to generate an ATR decision for the ROI. | 12-09-2010 |
20100310122 | Method and Device for Detecting Stationary Targets - Techniques for detecting stationary targets in videos or frame images are described. According to one aspect of the present invention, a sequence of frame images is received from a video system. Each of the frame images is divided into a plurality of image blocks, and a background image is divided into a plurality of corresponding background image blocks. Characteristic values of the image blocks in each of the frame images are calculated. A plurality of characteristic value sequences is then formed, each of which comprises a predefined number of characteristic values for each of the image blocks in the frame images. A histogram of each of the characteristic value sequences is computed to determine whether one of the image blocks in one of the frame images contains a stationary target. | 12-09-2010 |
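The block-wise histogram test described above could be sketched roughly as follows; using the mean absolute difference from the background block as the characteristic value and a 90% persistence criterion are illustrative choices, not details from the patent.

```python
import numpy as np

def block_characteristic(frame_block: np.ndarray, bg_block: np.ndarray) -> float:
    """One characteristic value per block: mean absolute deviation
    from the corresponding background block (an illustrative choice)."""
    return float(np.abs(frame_block.astype(float) - bg_block.astype(float)).mean())

def is_stationary_target(char_sequence, diff_threshold=15.0, persistence=0.9):
    """Histogram test over a sequence of characteristic values: the
    block is flagged when most samples fall in the 'different from
    background' bin, i.e. the difference is large and steady."""
    values = np.asarray(char_sequence)
    hist, _ = np.histogram(values, bins=[0, diff_threshold, np.inf])
    return hist[1] / values.size >= persistence

# A block that differs from the background in 19 of 20 frames.
seq = [3.0] + [40.0] * 19
print(is_stationary_target(seq))    # True
```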
20100310123 | METHOD AND SYSTEM FOR ACTIVELY DETECTING AND RECOGNIZING PLACARDS - A method and a system for actively detecting and recognizing a placard are provided. In the present method, an image capturing device is moved according to a maneuver rule, wherein the image capturing device captures an image continuously during the movement. Then whether a placard exists in the image or not is determined. If a placard exists in the image, a content of the placard is identified and a corresponding action is executed. The method repeatedly processes the foregoing steps to further continuously move the image capturing device and determine whether the placard exists in a newly captured image so as to achieve a purpose of detecting and recognizing placards actively. | 12-09-2010 |
20100310124 | METHOD OF AND DEVICE FOR DETERMINING THE DISTANCE BETWEEN AN INTEGRATED CIRCUIT AND A SUBSTRATE - In a method of determining the distance (d) between an integrated circuit and a substrate … | 12-09-2010 |
20100310125 | Method and Device for Detecting Distance, Identifying Positions of Targets, and Identifying Current Position in Smart Portable Device - A method for detecting distance in a smart portable device includes acquiring an image of a target object, calculating a length of a side of the target object in the image, acquiring a predicted length of the side of the target object, and determining a distance between the smart portable device and the target object according to the length of the side of the target object in the image and the predicted length. | 12-09-2010 |
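Under a pinhole-camera assumption, the distance computation in this abstract reduces to a single proportion between the known physical size, the measured pixel size, and the focal length in pixels. A minimal sketch, with an assumed focal-length parameter and example numbers:

```python
def distance_from_size(pixel_length: float,
                       real_length_m: float,
                       focal_length_px: float) -> float:
    """Pinhole-camera estimate: an object of known physical size that
    spans `pixel_length` pixels lies at roughly
    f[px] * size[m] / size[px] metres from the camera."""
    return focal_length_px * real_length_m / pixel_length

# A 0.3 m wide sign spanning 120 px, with an 800 px focal length:
print(distance_from_size(120, 0.3, 800))   # 2.0 metres
```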
20100310126 | OPTICAL TRIANGULATION - The present invention relates to a method for determining the extension of a trajectory in a space-time volume of measure images. The space-time volume of measure images is generated by a measuring method utilizing a measuring system comprising a first light source and a sensor. The measuring method comprises a step of, in a predetermined operating condition of the measuring system, moving a measure object along a first direction of movement in relation to the measuring system while the first light source illuminates the measure object whereby the sensor generates a measure image of the measure object at each time instant in a set of at least two subsequent time instants, thus generating said space-time volume of measure images wherein a feature point of the measure object maps to a trajectory in the space-time volume. | 12-09-2010 |
20100310127 | SUBJECT TRACKING DEVICE AND CAMERA - A subject tracking device includes: an input unit that sequentially inputs input images; an arithmetic operation unit that calculates a first similarity level between an initial template image and a target image and a second similarity level between an update template image and the target image; a position determining unit that determines a subject position based upon at least one of the first and the second similarity level; a decision-making unit that decides whether or not to update the update template image based upon the first and the second similarity level; and an update unit that generates a new update template image based upon the initial template image multiplied by a first weighting coefficient and the target image multiplied by a second weighting coefficient, and updates the update template image with the newly generated update template image, if the update template image is decided to be updated. | 12-09-2010 |
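The weighted template update in this abstract can be sketched as a blend of the initial template and the current target patch, gated by the two similarity levels; normalized cross-correlation, the 0.3/0.7 weights, and the update threshold below are assumptions for illustration.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation as a stand-in similarity measure."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def maybe_update_template(initial_t, update_t, target,
                          w_init=0.3, w_target=0.7, update_threshold=0.6):
    """Blend the initial template with the current target patch when
    both similarity levels suggest the match is trustworthy."""
    s_init = similarity(initial_t, target)
    s_update = similarity(update_t, target)
    if min(s_init, s_update) >= update_threshold:
        return w_init * initial_t.astype(float) + w_target * target.astype(float)
    return update_t      # otherwise keep the old update template

rng = np.random.default_rng(2)
tpl = rng.random((16, 16))
target = tpl + rng.normal(0, 0.05, (16, 16))    # slightly changed subject
updated = maybe_update_template(tpl, tpl.copy(), target)
print(updated.shape, round(similarity(tpl, target), 2))
```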
20100310128 | System and Method for Remote Measurement of Displacement and Strain Fields - A computer-implemented method for measuring full field deformation characteristics of a deformable body. The method includes determining optical setup design parameters for measuring displacement and strain fields, and generating and applying a dot pattern on a planar side of a deformable body. A sequence of images of the dot pattern is acquired before and after deformation of the body. Irregular objects are eliminated from the images based on dot light intensity threshold and the object area or another geometrical cutoff criterion. The characteristic points of the dots are determined, and the characteristic points are matched between two or more of the sequential images. The displacement vector of the characteristic points is found, and mesh free or other techniques are used to estimate the full field displacement based on the displacement vector of the characteristic points. Strain tensor or other displacement-derived quantities can also be estimated using mesh-free or other analysis techniques. | 12-09-2010 |
20100316253 | PERVASIVE SENSING - A method of electronically monitoring a subject, for example in a home care environment, to determine the presence of the subject in zones of the environment as a function of time includes fusing data from image and wearable sensors. A grid display for displaying the presence in the zones is also provided. | 12-16-2010 |
20100316254 | USE OF Z-ORDER DATA IN AN IMAGE SENSOR - Systems and methods are provided for detecting objects of an object class, such as faces, in an image sensor. In some embodiments, the image sensor can include a detector with an image buffer. The image buffer can store image data in raster order. The detector can read the data out in Z order to perform object detection. The detector can then compute feature responses using the Z-ordered image data and determine whether any objects of the object class are present based on the feature responses. In some embodiments, the detector can downscale the image data while the object detection is performed and use the downscaled image data to continue the detection process. In some embodiments, the detector can perform detection even if the image is rotated. | 12-16-2010 |
20100316255 | DRIVER ASSISTANCE SYSTEM FOR MONITORING DRIVING SAFETY AND CORRESPONDING METHOD FOR DETECTING AND EVALUATING A VEHICLE MOVEMENT - A driver assistance system for monitoring driving safety has a mobile electronic unit including a video sensor, a computer unit for image data processing, and an acoustic output unit, which detects the immediate surroundings of the vehicle from the data of the video sensor and outputs a warning or information via an output unit when the computer unit detects a dangerous situation. The mobile electronic unit detects noises within the vehicle or from the outside via an acoustic input unit, and incorporates the information in the assessment of driving safety. | 12-16-2010 |
20100316256 | OBJECT DETECTION APPARATUS AND METHOD THEREOF - An image processing apparatus includes a discrimination unit configured to sequentially perform discrimination of whether each of a plurality of image data includes a predetermined object using a parameter stored in a storage unit, an update unit configured to update the parameter stored in the storage unit, and a control unit configured to, when the discrimination unit discriminates that the predetermined object is included, control the update unit to update the parameter and the discrimination unit to perform the discrimination on current image data using the updated parameter, and when the discrimination unit discriminates that the predetermined object is not included, control the update unit to maintain the parameter stored in the storage unit and the discrimination unit to perform the discrimination on next image data using the maintained parameter. By using this image processing apparatus, the processing can be speeded up without increasing a size of a circuit. | 12-16-2010 |
20100316257 | MOVABLE OBJECT STATUS DETERMINATION - Embodiments of the present invention relate to automated methods and systems for determining a degree of presence of a movable object in a physical space. Video images are used to define a region of interest … | 12-16-2010 |
20100322471 | Motion invariant generalized hyperspectral targeting and identification methodology and apparatus therefor - The present disclosure relates to a method and system for enhancing the ability of nuclear, chemical, and biological (“NBC”) sensors, specifically mobile sensors, to detect, analyze, and identify NBC agents on a surface, in an aerosol, in a vapor cloud, or other similar environment. Embodiments include the use of a two-stage approach including targeting and identification of a contaminant. Spectral imaging sensors may be used for both wide-field detection (e.g., for scene classification) and narrow-field identification. | 12-23-2010 |
20100322472 | OBJECT TRACKING IN COMPUTER VISION - A method and system for object tracking in computer vision. The tracked object is recognized from an image that has been acquired with the camera of the computer vision system. The image is processed by randomly generating samples in the search space and then computing fitness functions. Regions of high fitness attract more samples. The random selection may be based on standard deviation or other weights. Computations are stored into a tree structure. The tree structure can be used as prior information for next image. | 12-23-2010 |
20100322473 | DECENTRALIZED TRACKING OF PACKAGES ON A CONVEYOR - A decentralized tracking system is discussed herein. The decentralized tracking system can be comprised of two or more tracking elements and be used to track packages moving on a conveyor system. Each tracking element can operate independently, despite being highly sophisticated and dynamically coordinated with one or more other tracking elements. The conveyor system can be a modular and/or accumulation conveyor system that has sorting functionality. The decentralized tracking system can be used to divert packages for sortation by, for example, embedding a destination zone into the package's tracking data and/or preprogramming conveyor zones to sort specific packages based on a package identifier. | 12-23-2010 |
20100322474 | DETECTING MULTIPLE MOVING OBJECTS IN CROWDED ENVIRONMENTS WITH COHERENT MOTION REGIONS - Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The algorithm enforces the constraint that selected coherent motion regions contain disjoint sets of tracks defined in a three-dimensional space that includes a time dimension. It operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, each of which is a measure of the maximum distance between a pair of feature point tracks. | 12-23-2010 |
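The trajectory similarity factor described here, a maximum distance between a pair of feature point tracks, and a simple grouping built on it might be sketched as follows; the greedy grouping strategy and the 10-pixel threshold are illustrative assumptions rather than the patented algorithm.

```python
import numpy as np

def max_track_distance(track_a: np.ndarray, track_b: np.ndarray) -> float:
    """Maximum Euclidean distance between two feature point tracks
    sampled at the same frames; each track has shape (n_frames, 2)."""
    return float(np.linalg.norm(track_a - track_b, axis=1).max())

def group_coherent_tracks(tracks, distance_threshold=10.0):
    """Greedy grouping: tracks whose pairwise maximum distance stays
    below the threshold are placed in the same (disjoint) group."""
    groups = []
    for t in tracks:
        for g in groups:
            if all(max_track_distance(t, other) < distance_threshold for other in g):
                g.append(t)
                break
        else:
            groups.append([t])
    return groups

t0 = np.cumsum(np.ones((20, 2)), axis=0)           # moving diagonally
t1 = t0 + 2.0                                      # nearby point, same motion
t2 = np.zeros((20, 2))                             # stationary, far away
print(len(group_coherent_tracks([t0, t1, t2])))    # 2 coherent groups
```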
20100322475 | OBJECT AREA DETECTING DEVICE, OBJECT AREA DETECTING SYSTEM, OBJECT AREA DETECTING METHOD AND PROGRAM - To enable detection of an overlying object distinctively even if a stationary object is overlaid with another stationary object or a moving object. A data processing device includes a first unit which detects an object area in a plurality of time-series continuous input images, a second unit which detects a stationary area in the object area from the plurality of continuous input images, a third unit which stores information of the stationary area as time-series background information, and a fourth unit which compares the time-series background information with the object area to thereby detect each object included in the object area. | 12-23-2010 |
20100322476 | VISION BASED REAL TIME TRAFFIC MONITORING - A system and method for detecting and tracking one or more vehicles using a system for obtaining two-dimensional visual data depicting traffic flow on a road is disclosed. In one exemplary embodiment, the system and method identifies groups of features for determining traffic data. The features are classified as stable features or unstable features based on whether each feature is on the frontal face of a vehicle close to the road plane. In another exemplary embodiment, the system and method identifies vehicle base fronts as a basis for determining traffic data. In yet another exemplary embodiment, the system and method includes an automatic calibration procedure based on identifying two vanishing points. | 12-23-2010 |
20100322477 | DEVICE AND METHOD FOR DETECTING A PLANT - A device for detecting a plant includes a two-dimensional camera for detecting a two-dimensional image of a plant leaf having a high two-dimensional resolution, and a three-dimensional camera for detecting a three-dimensional image of the plant leaf having a high three-dimensional resolution. The two-dimensional camera is a conventional high-resolution color camera, for example, and the three-dimensional camera is a TOF camera, for example. A processor for merging the two-dimensional image and the three-dimensional image creates a three-dimensional result representation having a higher resolution than the three-dimensional image of the 3D camera, which may include, among other things, the border of a leaf. The three-dimensional result representation serves to characterize a plant leaf, such as to calculate the surface area of the leaf, the alignment of the leaf, or serves to identify the leaf. | 12-23-2010 |
20100322478 | Restoration apparatus for weather-degraded image and driver assistance system - In a restoration apparatus, an estimating unit divides a captured original image into a plurality of local pixel blocks, and estimates a luminance level of airlight in each of the plurality of local pixel blocks. A calculating unit directly calculates, from a particle-affected luminance model, a luminance level of each pixel of each of the plurality of local pixel blocks in the original image to thereby generate, based on the luminance level of each pixel of each of the plurality of local pixel blocks, a restored image of the original image. The particle-affected luminance model expresses an intrinsic luminance of a target observed by the image pickup device as a function of the luminance level of airlight and an extinction coefficient. The extinction coefficient represents the concentration of particles in the atmosphere. | 12-23-2010 |
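The particle-affected luminance model mentioned above is closely related to the standard atmospheric scattering model I = J*t + A*(1 - t) with transmission t = exp(-beta*d). The sketch below inverts that standard model, which is not necessarily the patent's exact formulation; the clipping of t and the example numbers are assumptions.

```python
import numpy as np

def restore_luminance(observed: np.ndarray, airlight: float,
                      extinction_coeff: float, distance: np.ndarray) -> np.ndarray:
    """Invert the standard atmospheric scattering model
        I = J * t + A * (1 - t),  t = exp(-beta * d)
    to recover the intrinsic luminance J from the observed luminance I,
    the airlight level A and the transmission t."""
    t = np.exp(-extinction_coeff * distance)
    t = np.clip(t, 0.05, 1.0)                 # avoid amplifying noise
    return (observed - airlight * (1.0 - t)) / t

# A uniformly grey target (J = 100) seen through fog at 80 m:
d = np.full((4, 4), 80.0)
foggy = 100 * np.exp(-0.02 * d) + 200 * (1 - np.exp(-0.02 * d))
print(np.round(restore_luminance(foggy, airlight=200.0,
                                 extinction_coeff=0.02, distance=d)[0, 0]))  # 100.0
```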
20100322479 | SYSTEMS AND METHODS FOR 3-D TARGET LOCATION - A target is imaged in a three-dimensional real space using two or more video cameras. A three-dimensional image space combined from two video cameras of the two or more video cameras is displayed to a user using a stereoscopic display. A right eye and a left eye of the user are imaged as the user is observing the target in the stereoscopic video display, a right gaze line of the right eye and a left gaze line of the left eye are calculated in the three-dimensional image space, and a gazepoint in the three-dimensional image space is calculated as the intersection of the right gaze line and the left gaze line using a binocular eyetracker. A real target location is determined by translating the gazepoint in the three-dimensional image space to the real target location in the three-dimensional real space from the locations and the positions of the two video cameras using a processor. | 12-23-2010 |
20100322480 | Systems and Methods for Remote Tagging and Tracking of Objects Using Hyperspectral Video Sensors - Detection and tracking of an object by exploiting its unique reflectance signature. This is done by examining every image pixel and computing how closely that pixel's spectrum matches a known object spectral signature. The measured radiance spectra of the object can be used to estimate its intrinsic reflectance properties that are invariant to a wide range of illumination effects. This is achieved by incorporating radiative transfer theory to compute the mapping between the observed radiance spectra to the object's reflectance spectra. The consistency of the reflectance spectra allows for object tracking through spatial and temporal gaps in coverage. Tracking an object then uses a prediction process followed by a correction process. | 12-23-2010 |
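Computing how closely a pixel's spectrum matches a known signature is commonly done with the spectral angle; the sketch below uses that measure as a stand-in for whatever matching score the patent relies on, and the angle threshold and array shapes are assumptions.

```python
import numpy as np

def spectral_angle(pixel_spectrum: np.ndarray, signature: np.ndarray) -> float:
    """Spectral angle (radians) between a pixel spectrum and a
    reference signature; smaller means a closer match."""
    cos = np.dot(pixel_spectrum, signature) / (
        np.linalg.norm(pixel_spectrum) * np.linalg.norm(signature) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def match_map(cube: np.ndarray, signature: np.ndarray,
              angle_threshold: float = 0.1) -> np.ndarray:
    """Per-pixel detection mask for a hyperspectral cube of shape
    (H, W, bands)."""
    flat = cube.reshape(-1, cube.shape[-1])
    angles = np.array([spectral_angle(p, signature) for p in flat])
    return (angles < angle_threshold).reshape(cube.shape[:2])

rng = np.random.default_rng(3)
sig = np.linspace(0.2, 0.8, 50)                     # reference reflectance
cube = rng.random((8, 8, 50))
cube[2, 3] = sig * 1.3                              # scaled copy of the target
mask = match_map(cube, sig)
print(mask[2, 3], int(mask.sum()))                  # the implanted pixel is flagged
```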
20100329508 | Detecting Ground Geographic Features in Images Based on Invariant Components - Systems, devices, features, and methods for detecting geographic features in images, such as, for example, to develop a navigation database are disclosed. For example, a method of detecting a path marking from collected images includes collecting a plurality of images of geographic areas along a path. An image of the plurality of images is selected. Components that represent an object on the path in the selected image are determined. In one embodiment, the determined components are independent or invariant to scale of the object. The determined components are compared to reference components in a data library. If the determined components substantially meet a matching threshold with the reference components, the object in the selected image is identified to be a path marking corresponding to the reference components in the data library. | 12-30-2010 |
20100329509 | METHOD AND SYSTEM FOR GESTURE RECOGNITION - A method and a system for gesture recognition are provided for recognizing a gesture performed by a user in front of an electronic product having a video camera. In the present method, an image containing the upper body of the user is captured and a hand area in the image is obtained. The hand area is fully scanned by a first couple of concentric circles. During the scanning, a proportion of a number of skin color pixels on an inner circumference of the first couple of concentric circles and a proportion of a number of skin color pixels on an outer circumference of the first couple of concentric circles are used to determine a number of fingertips in the hand area. The gesture is recognized by the number of fingertips and an operation function of the electronic product is executed according to an operating instruction corresponding to the recognized gesture. | 12-30-2010 |
20100329510 | METHOD AND DEVICE FOR DISPLAYING THE SURROUNDINGS OF A VEHICLE - In a method for displaying on a display device the surroundings of a vehicle, the surroundings are detected by at least one detection sensor as an image of the surroundings while the vehicle is traveling or at a standstill. A surroundings image from a given surrounding area is ascertained by the detection sensor in different vehicle positions, and/or at least one surroundings image from the given surrounding area is ascertained by each of at least two detection sensors situated at a distance from one another, and in each case a composite surroundings image is obtained from the surroundings images and displayed by the display device. | 12-30-2010 |
20100329511 | Apparatus and method for detecting hands of subject in real time - An apparatus and method can effectively detect both hands and the hand shape of a user from images input through cameras. A skin image detecting skin regions from one of the input images and a stereoscopic distance image are used. For hand detection, background and noise are eliminated from a combined image of the skin image and the distance image, and regions corresponding to both actual hands are detected from effective images having a high probability of containing hands. For hand shape detection, a non-skin region is eliminated from the skin image based on the stereoscopic distance information, hand shape candidate regions are detected from the remaining region after elimination, and finally a hand shape is determined. | 12-30-2010 |
20100329512 | METHOD FOR REALTIME TARGET DETECTION BASED ON REDUCED COMPLEXITY HYPERSPECTRAL PROCESSING - There is provided a method for real-time target detection comprising detecting a preprocessed pixel as a target and/or a background, based on a library, and refining the library by extracting a sample from the target or the background. | 12-30-2010 |
20110002505 | System and Method For Analysis of Image Data - A method and apparatus for optical damage assessment using an existing imaging focal plane array and a fixed or moving set of optics and filters. Advantages include cost reductions and improved reliability due to fewer components and therefore fewer points of failure. | 01-06-2011 |
20110002506 | Eye Beautification - Sub-regions within a face image are identified to be enhanced by applying a localized smoothing kernel to luminance data corresponding to the sub-regions of the face image. An enhanced face image is generated including an enhanced version of the face that includes certain original pixels in combination with pixels corresponding to the one or more enhanced sub-regions of the face. | 01-06-2011 |
20110002507 | Obstacle detection procedure for motor vehicle - The present invention concerns an obstacle detection procedure within the area surrounding a motor vehicle. | 01-06-2011 |
20110002508 | DIGITALLY-GENERATED LIGHTING FOR VIDEO CONFERENCING APPLICATIONS - A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation, lambertian and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model is generated which is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model. | 01-06-2011 |
20110002509 | MOVING OBJECT DETECTION METHOD AND MOVING OBJECT DETECTION APPARATUS - A moving object detection method with which a region of a moving object is accurately extracted without being affected by a change in shape or size or occlusion of the moving object, and in which a distance indicating a similarity between trajectories of an image in each of the blocks included in video is calculated … | 01-06-2011 |
20110007938 | Thermal and short wavelength infrared identification systems - A method and apparatus for preventing fratricide including an emitter that emits a signaling code at a wavelength, the signaling code representing a coded message; a receiver that captures an image of a field of view including the emitter and generates image information corresponding to the captured image; a translation system that receives the image information and decodes the coded message from the image information; and an output device that outputs the decoded message. | 01-13-2011 |
20110007939 | Image-based tracking - A method of image-tracking by using an image capturing device. The method comprises: performing an image-capture of a scene by using an image capturing device; and tracking movement of the image capturing device by analyzing a set of images by using an image processing algorithm. | 01-13-2011 |
20110007940 | AUTOMATED TARGET DETECTION AND RECOGNITION SYSTEM AND METHOD - Methods and apparatus are provided for recognizing particular objects of interest in a captured image. One or more salient features that are correlative to an object of interest are detected within a captured image. The captured image is segmented into one or more regions of interest that include a detected salient feature. A covariance appearance model is generated for each of the one or more regions of interest, and first and second comparisons are conducted. The first comparisons comprise comparing each of the generated covariance appearance models to a plurality of stored covariance appearance models, and the second comparisons comprise comparing each of the generated covariance appearance models to each of the other generated covariance appearance models. Based on the first and second comparisons, a determination is made as to whether each of the one or more detected salient features is a particular object of interest. | 01-13-2011 |
20110007941 | PRECISELY LOCATING FEATURES ON GEOSPATIAL IMAGERY - Methods for locating a feature on geospatial imagery and systems for performing those methods are disclosed. An accuracy level of each of a plurality of geospatial vector datasets available in a database can be determined. Each of the plurality of geospatial vector datasets corresponds to the same spatial region as the geospatial imagery. The geospatial vector dataset having the highest accuracy level may be selected. When the selected geospatial vector dataset and the geospatial imagery are misaligned, the selected geospatial vector dataset is aligned to the geospatial imagery. The location of the feature on the geospatial imagery is then determined based on the selected geospatial vector dataset and outputted via a display device. | 01-13-2011 |
20110007942 | Real-Time Tracking System - There is provided a real-time tracking system and a method associated therewith for identifying and tracking objects moving in a physical region, typically for producing a physical effect, in real-time, in response to the movement of each object. The system scans a plane, which intersects a physical space, in order to collect reflection-distance data as a function of position along the plane. The reflection-distance data is then processed by a shape-analysis subsystem in order to locate among the reflection-distance data, a plurality of discontinuities, which are in turn associated to one or more detected objects. Each detected object is identified and stored in an identified-object structure. The scanning and processing is repeated for a number of iterations, wherein each detected object is identified with respect to the previously scanned objects, through matching with the identified-object structures, in order to follow the course of each particular object. | 01-13-2011 |
20110007943 | Registration Apparatus, Checking Apparatus, Data Structure, and Storage Medium (amended) - A registration apparatus, a checking apparatus, a data structure, and a storage medium that are capable of achieving an improved authentication accuracy are provided. The registration apparatus includes an image acquisition unit configured to acquire a venous image for a vein of a living body, an extraction unit configured to extract a parameter resistant to affine transformation from part of the venous image, and a registration unit configured to register the parameter extracted by the extraction unit in storage means. The part of the venous image is set as a target for extracting the parameter resistant to affine transformation. | 01-13-2011 |
20110007944 | SYSTEM AND METHOD FOR OCCUPANCY ESTIMATION - A system generates occupancy estimates based on a Kinetic-Motion (KM)-based model that predicts the movements of occupants through a region divided into a plurality of segments. The system includes a controller for executing an algorithm representing the KM-based model. The KM-based model includes state equations that define each of the plurality of segments as containing congested portions and uncongested portions. The state equations define the movement of occupants based, in part, on the distinctions made between congested and uncongested portions of each segment. | 01-13-2011 |
20110007945 | FAST ALGORITHM FOR STREAMING WAVEFRONT - The invention is generally directed to the field of image processing, and more particularly to a method and an apparatus for determining a wavefront of an object, in particular a human eye. The invention discloses a method and an apparatus for real-time wavefront sensing of an optical system utilizing two different algorithms for detecting centroids of a centroid image as provided by a Hartmann-Shack wavefront sensor. A first algorithm detects an initial position of all centroids and a second algorithm detects incremental changes of all centroids detected by said first algorithm. | 01-13-2011 |
20110007946 | UNIFIED SYSTEM AND METHOD FOR ANIMAL BEHAVIOR CHARACTERIZATION WITH TRAINING CAPABILITIES - In general, the present invention is directed to systems and methods for finding the position and shape of an object using video. The invention includes a system with a video camera coupled to a computer in which the computer is configured to automatically provide object segmentation and identification, object motion tracking (for moving objects), object position classification, and behavior identification. In a preferred embodiment, the present invention may use background subtraction for object identification and tracking, probabilistic approach with expectation-maximization for tracking the motion detection and object classification, and decision tree classification for behavior identification. Thus, the present invention is capable of automatically monitoring a video image to identify, track and classify the actions of various objects and the object's movements within the image. The image may be provided in real time or from storage. The invention is particularly useful for monitoring and classifying animal behavior for testing drugs and genetic mutations, but may be used in any of a number of other surveillance applications. | 01-13-2011 |
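The background-subtraction step mentioned above is commonly realized with a running-average background model; the following minimal Python sketch (not the patent's algorithm; the learning rate and threshold are invented values) shows the idea:

```python
# Sketch: running-average background model with a per-pixel foreground threshold.
import numpy as np

class BackgroundSubtractor:
    def __init__(self, alpha=0.02, threshold=25.0):
        self.alpha = alpha          # learning rate of the background model
        self.threshold = threshold  # per-pixel foreground threshold
        self.background = None

    def apply(self, frame):
        frame = frame.astype(np.float32)
        if self.background is None:
            self.background = frame.copy()
        # Foreground mask: pixels that differ strongly from the background.
        mask = np.abs(frame - self.background) > self.threshold
        # Slowly adapt the background toward the current frame.
        self.background = (1 - self.alpha) * self.background + self.alpha * frame
        return mask

# Usage: feed grayscale frames in order; the returned boolean mask marks
# candidate object pixels for downstream tracking and classification stages.
subtractor = BackgroundSubtractor()
for frame in (np.random.randint(0, 255, (120, 160)) for _ in range(5)):
    foreground = subtractor.apply(frame)
```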
20110013804 | Method for Normalizing Displaceable Features of Objects in Images - A method normalizes a feature of an object in an image. The feature of the object is extracted from a 2D or 3D image. The feature is displaceable within a displacement zone in the object and has a location within the displacement zone. An associated description of the feature is determined. Then, the feature is displaced to a best location in the displacement zone to produce a normalized feature. | 01-20-2011 |
20110013805 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND INTERFACE APPARATUS - In order to detect a specific detection object from an input image, a color serving as a reference is calculated in a reference image region. The difference for each color component between each pixel in the detection window and the reference color is calculated. Whether or not the detection object is included in the detection window is discriminated by a feature vector indicating how the difference is distributed in the detection window. | 01-20-2011 |
20110013806 | Methods of object search and recognition - Embodiments of the invention disclose techniques for processing of machine-readable forms of unfixed or flexible format. An auxiliary brief description may be optionally specified to determine the spatial orientation of the image. A method of searching for elements of a document comprises the following main operations in addition to the operations of preliminary image processing: selecting the varieties of structural description from several available variants, determining the orientation of the image, selecting the text objects, where the text must be recognized, and determining the minimal required volume of recognition, recognizing the text objects, searching for elements of the form. Searching for elements of the form comprises the following actions: selecting a searched element in the structural description, gaining the algorithm of search constraints from the structural description, searching for the element, testing the obtained variants. | 01-20-2011 |
20110019873 | PERIPHERY MONITORING DEVICE AND PERIPHERY MONITORING METHOD - A flow calculating section | 01-27-2011 |
20110019874 | DEVICE AND METHOD FOR DETERMINING GAZE DIRECTION - An eye tracker device | 01-27-2011 |
20110019875 | IMAGE DISPLAY DEVICE - On a table type image display device A, a display | 01-27-2011 |
20110026764 | DETECTION OF OBJECTS USING RANGE INFORMATION - A system and method for detecting objects and background in digital images using range information includes receiving the digital image representing a scene; identifying range information associated with the digital image and including distances of pixels in the scene from a known reference location; generating a cluster map based at least upon an analysis of the range information and the digital image, the cluster map grouping pixels of the digital image by their distances from a viewpoint; identifying objects in the digital image based at least upon an analysis of the cluster map and the digital image; and storing an indication of the identified objects in a processor-accessible memory system. | 02-03-2011 |
20110026765 | SYSTEMS AND METHODS FOR HAND GESTURE CONTROL OF AN ELECTRONIC DEVICE - Systems and methods of generating device commands based upon hand gesture commands are disclosed. An exemplary embodiment generates image information from a series of captured images, generates commands based upon hand gestures made by a user that emulate device commands generated by a remote control device, identifies a hand gesture made by the user from the received image information, determines a hand gesture command based upon the identified hand gesture, compares the determined hand gesture command with the plurality of predefined hand gesture commands to identify a corresponding matching hand gesture command from the plurality of predefined hand gesture commands, generates an emulated remote control device command based upon the identified matching hand gesture command, and controls the media device based upon the generated emulated remote control device command. | 02-03-2011 |
20110026766 | MOVING IMAGE EXTRACTING APPARATUS, PROGRAM AND MOVING IMAGE EXTRACTING METHOD - There is provided a moving image extracting apparatus including a movement detecting unit which detects movement of an imaging apparatus at the time when imaging a moving image based on the moving image imaged by the imaging apparatus, an object detecting unit which detects an object from the moving image, a salient object selecting unit which selects an object detected by the object detecting unit over a period of predetermined length or longer as a salient object within a segment in which movement of the imaging apparatus is detected by the movement detecting unit, and an extracting unit which extracts a segment including the salient object selected by the salient object selecting unit from the moving image. | 02-03-2011 |
20110026767 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An apparatus stores a luminance signal and a color signal extracted from a tracking area in image data and determines a correlation with the stored luminance signal, thereby extracting an area where a specified object exists in another image data to update the tracking area using the position information of the extracted area. If a sufficient correlation cannot be obtained from the luminance signal, the apparatus makes a comparison with the stored color signal to determine whether the specified object is lost. The apparatus updates the luminance signal every time the tracking area is updated, but does not update the color signal even if the tracking area is updated or updates the color signal at a period longer than a period at which the luminance signal is updated. | 02-03-2011 |
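One plausible way to realize the luminance-correlation tracking with a color fallback described above, sketched with standard OpenCV calls; the thresholds and the choice to refresh only the luminance template each step are assumptions of this sketch, not the claimed method:

```python
# Sketch: luminance template matching with a color-histogram fallback.
import numpy as np
import cv2

def track_step(frame_gray, frame_bgr, luma_template, color_hist,
               corr_thresh=0.6, hist_thresh=0.5):
    res = cv2.matchTemplate(frame_gray, luma_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    h, w = luma_template.shape
    x, y = max_loc
    roi = frame_bgr[y:y + h, x:x + w]
    if max_val < corr_thresh:
        # Luminance match is weak: check the (rarely updated) color model.
        roi_hist = cv2.calcHist([roi], [0, 1, 2], None, [8, 8, 8],
                                [0, 256, 0, 256, 0, 256])
        cv2.normalize(roi_hist, roi_hist)
        similarity = cv2.compareHist(color_hist, roi_hist, cv2.HISTCMP_CORREL)
        if similarity < hist_thresh:
            return None, luma_template          # object considered lost
    # Update the luminance template every step; keep the color model as-is.
    new_template = frame_gray[y:y + h, x:x + w].copy()
    return (x, y, w, h), new_template
```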
20110026768 | Tracking a Spatial Target - Apparatuses and methods for tracking a dermatological feature are disclosed. One method includes establishing an imaging reference proximate to an identified dermatological feature, wherein the imaging reference has a known color spectrum and known physical dimensions. A digital image sequence is obtained containing one or more images of the identified dermatological feature and the imaging reference. At least one trait of the identified dermatological feature is estimated using the imaging reference and at least one image of the digital image sequence. | 02-03-2011 |
20110026769 | PRESENTATION DEVICE - A presentation device comprises an image capture portion for capturing an image of a subject and generating a raw image thereof; a detection portion adapted to analyze whether a first marker is present in the raw image, and if the first marker is present in the raw image, to detect an existing position of the first marker within the raw image; a storage portion for storing a positional relationship of a synthesis position at which a mask image for masking at least a portion of the raw image is synthesized with the raw image relative to the existing position of the first marker; a synthesized image generation portion adapted to determine the synthesis position according to the positional relationship with the detected existing position, and to synthesize the mask image at the determined synthesis position within the raw image to generate a synthesized image; and an output portion for outputting the synthesized image. | 02-03-2011 |
20110026770 | Person Following Using Histograms of Oriented Gradients - A method for using a remote vehicle having a stereo vision camera to detect, track, and follow a person, the method comprising: detecting a person using a video stream from the stereo vision camera and histogram of oriented gradient descriptors; estimating a distance from the remote vehicle to the person using depth data from the stereo vision camera; tracking a path of the person and estimating a heading of the person; and navigating the remote vehicle to an appropriate location relative to the person. | 02-03-2011 |
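OpenCV ships a stock HOG pedestrian detector that can illustrate the detect-and-range steps named above; the depth-map interface and the median-depth range estimate below are assumptions of this sketch, not necessarily what the patent uses:

```python
# Sketch: HOG-based person detection plus a depth lookup for range estimation.
import numpy as np
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_person_with_range(image_bgr, depth_map):
    rects, weights = hog.detectMultiScale(image_bgr, winStride=(8, 8))
    detections = []
    for (x, y, w, h) in rects:
        # Median depth inside the detection box is a robust range estimate.
        patch = depth_map[y:y + h, x:x + w]
        distance = float(np.median(patch[patch > 0])) if np.any(patch > 0) else None
        detections.append(((x, y, w, h), distance))
    return detections
```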
20110033084 | IMAGE CLASSIFICATION SYSTEM AND METHOD THEREOF - An image classification system configured to classify a target and method thereof is provided, wherein the system includes at least one light source configured to emit light with at least one line pattern towards the target, wherein at least a portion of the emitted light and line pattern is reflected by the target. The system further includes an imager configured to receive at least a portion of the reflected light and line pattern, such that an obtained 2-D line pattern is produced that is representative of at least a portion of the emitted light and line pattern reflected by the target, and a controller configured to compare the 2-D line pattern to at least one previously obtained 2-D line pattern stored in a database, such that the controller classifies the 2-D line pattern as a function of the comparison. | 02-10-2011 |
20110033085 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An apparatus includes a storage unit configured to store an attribute of each pixel existing inside a tracking target area set on an image and an attribute of a pixel existing adjacent to the pixel, an allocation unit configured to allocate an evaluation value to a pixel to be evaluated according to a result of comparison between an attribute of the pixel to be evaluated and an attribute of a pixel existing inside the tracking target area and a result of comparison between an attribute of a pixel existing adjacent to the pixel to be evaluated and an attribute of a pixel existing adjacent to the pixel existing inside the tracking target area, and a changing unit configured to change the tracking target area based on the allocated evaluation value. | 02-10-2011 |
20110033086 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An apparatus includes a storage unit configured to classify pixels existing inside a tracking target area set on an image and pixels existing outside the tracking target area according to an attribute and to store a result of classification of the pixels on a storage medium, a first derivation unit configured to derive a first ratio of the pixels existing inside the tracking target area and having the attribute to the pixels existing outside the tracking target area and having the attribute, a second derivation unit configured to derive a second ratio of pixels, whose first ratio is higher than a first predetermined value, to all pixels existing inside the tracking target area, and a determination unit configured, if the second ratio is higher than a second predetermined value, to determine that the tracking target area can be tracked. | 02-10-2011 |
20110033087 | VIDEO CONTENT ANALYSIS - A video content analysis (VCA) system generates an output regarding a detected condition that provides an indication of a confidence level regarding the detected condition. One example VCA system determines whether a first characteristic of a detected object in a field of vision of the video content analysis system satisfies a first criterion. If so, a first signal is generated under selected conditions. The VCA system also determines whether a second characteristic of the detected object satisfies a corresponding second criterion. If so, a second, different signal is generated if the first and second criteria are satisfied. The first and second signals indicate respective, different confidence levels that an event has occurred. A disclosed example includes a VCA as part of a security system. | 02-10-2011 |
20110038508 | SYSTEM AND METHOD FOR PERFORMING OPTICAL NAVIGATION USING PORTIONS OF CAPTURED FRAMES OF IMAGE DATA - A system and method for performing optical navigation selectively uses portions of captured frames of image data for cross-correlation for displacement estimation, which can reduce the power consumption and/or increase the tracking performance at higher speeds. | 02-17-2011 |
20110044497 | SYSTEM, METHOD AND PROGRAM PRODUCT FOR CAMERA-BASED OBJECT ANALYSIS - A system, method and program product for camera-based object analyses including object recognition, object detection, and/or object categorization. An exemplary embodiment of the computerized method for analyzing objects in images obtained from a camera system includes receiving image(s) having pixels from the camera system; calculating a pool of features for each pixel; then deriving either a pool of radial moments of features from the pool of features and a geometric center of the image(s) or a pool of central moments of features from the pool of features; then calculating a normalized descriptor, based on an area of the image(s) and either of the derived pools of moments of features; and then, based on the normalized descriptor, a computer either recognizes, detects, and/or categorizes an object(s) in the image(s). | 02-24-2011 |
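A minimal sketch of the central-moment pooling and area normalization described above, assuming a single per-pixel feature map and second-order moments (the moment orders and the normalization are illustrative choices, not taken from the patent):

```python
# Sketch: central moments of a per-pixel feature map, normalized by image area.
import numpy as np

def central_moments(feature_map, max_order=2):
    """feature_map: H x W array of one feature (e.g., gradient magnitude)."""
    h, w = feature_map.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    total = feature_map.sum()
    if total == 0:
        return np.zeros((max_order + 1) ** 2)
    cx = (xs * feature_map).sum() / total   # feature-weighted centroid
    cy = (ys * feature_map).sum() / total
    moments = []
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            moments.append((((xs - cx) ** p) * ((ys - cy) ** q) * feature_map).sum())
    area = float(h * w)
    return np.asarray(moments) / area        # normalized descriptor

descriptor = central_moments(np.random.rand(48, 48))
```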
20110044498 | VISUALIZING AND UPDATING LEARNED TRAJECTORIES IN VIDEO SURVEILLANCE SYSTEMS - Techniques are disclosed for visually conveying a trajectory map. The trajectory map provides users with a visualization of data observed by a machine-learning engine of a behavior recognition system. Further, the visualization may provide an interface used to guide system behavior. For example, the interface may be used to specify that the behavior recognition system should alert (or not alert) when a particular trajectory is observed to occur. | 02-24-2011 |
20110044499 | INTER-TRAJECTORY ANOMALY DETECTION USING ADAPTIVE VOTING EXPERTS IN A VIDEO SURVEILLANCE SYSTEM - A sequence layer in a machine-learning engine configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses the voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an ngram trie for those label sequences. The sequence layer computes the entropies for the nodes in the ngram trie and determines a sliding window length and vote count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene as well as issue alerts for inter-sequence and intra-sequence anomalies. | 02-24-2011 |
20110044500 | Light Information Receiving Method, Unit and Method for Recognition of Light-Emitting Objects - A light information receiving method, a method and a unit for the recognition of light-emitting objects are provided. The light information receiving method includes the following steps. A light-emitting object array is captured to obtain a plurality of images, wherein the light-emitting object array includes at least one light-emitting object. A temporal filtering process is performed to the images to recognize a light-emitting object. A light-emitting status of the light-emitting object array is recognized according to the light-emitting object location. A decoding process is performed according to the light-emitting status to output an item of information. | 02-24-2011 |
20110044501 | Systems and methods for personalized motion control - End users, unskilled in the art, generate motion recognizers from example motions, without substantial programming, without limitation to any fixed set of well-known gestures, and without limitation to motions that occur substantially in a plane or are substantially predefined in scope. From example motions for each class of motion to be recognized, a system automatically generates motion recognizers using machine learning techniques. Those motion recognizers can be incorporated into an end-user application, with the effect that when a user of the application supplies a motion, those motion recognizers will recognize the motion as an example of one of the known classes of motion. Motion recognizers can be incorporated into an end-user application and tuned to improve recognition rates for subsequent motions, allowing end-users to add new example motions. | 02-24-2011 |
20110044502 | MOTION DETECTION METHOD, APPARATUS AND SYSTEM - A motion detection method, apparatus and system are disclosed in the present invention, which relates to the video image processing field. The present invention can effectively overcome the influence of the background on motion detection and the problem of object “conglutination” to avoid false detection, thereby accomplishing object detection in complex scenes with a high precision. The motion detection method disclosed in embodiments of the present invention comprises: acquiring detection information of the background scene and detection information of the current scene, wherein the current scene is a scene comprising an object(s) to be detected and the same background scene; and calculating the object(s) to be detected according to the detection information of the background scene and the detection information of the current scene. The present invention is applicable to any scenes where moving objects need to be detected, e.g., automatic passenger flow statistical systems in railway, metro and bus sectors, and is particularly applicable to detection and calibration of objects in places where brightness varies greatly. | 02-24-2011 |
20110044503 | VEHICLE TRAVEL SUPPORT DEVICE, VEHICLE, VEHICLE TRAVEL SUPPORT PROGRAM - A vehicle travel support device determines presence of a recognition inhibiting factor of a lane mark on a road on which a vehicle is traveling with high accuracy irrespective of an imaging history by a vehicular camera from the same position. The vehicle travel support system generates an edge image by extracting an edge or actualizing an edge in an image obtained through the vehicular camera. When Hough transform of the edge image is performed, votes for a specified vote value of a linear component are evaluated in a ρ-θ space (Hough space). Presence of a recognition inhibiting factor of a lane mark on a road is determined by determining whether or not the votes of a specified vote value in a specified region denoting a standard travel lane of the vehicle in the real space are ≧ a threshold in the ρ-θ space. | 02-24-2011 |
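The vote-threshold test in ρ-θ space described above can be illustrated with OpenCV's Hough line transform, whose threshold argument is exactly a minimum accumulator vote count; the region of interest and threshold values below are placeholders:

```python
# Sketch: edge extraction plus a Hough vote-count test inside a lane-mark ROI.
import numpy as np
import cv2

def lane_mark_visible(gray, roi, vote_threshold=80):
    x, y, w, h = roi                       # region expected to contain the lane mark
    edges = cv2.Canny(gray[y:y + h, x:x + w], 50, 150)
    # HoughLines returns only lines whose accumulator count reached the given
    # threshold, so a non-empty result means the vote test passed.
    lines = cv2.HoughLines(edges, 1, np.pi / 180, vote_threshold)
    return lines is not None and len(lines) > 0

# If this returns False repeatedly, a recognition-inhibiting factor
# (snow, glare, worn paint, ...) may be present on the lane mark.
```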
20110044504 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND PROGRAM - An information processing device, including: a three-dimensional information generating section for obtaining position and attitude of a moving camera or three-dimensional positions of feature points by successively receiving captured images from different viewpoints, and updating status data using observation information which includes tracking information of the feature points, the status data including three-dimensional positions of the feature points within the images and position and attitude information of the camera; and a submap generating section for generating submaps by dividing an area for which the three-dimensional position is to be calculated. The three-dimensional information generating section obtains position and attitude of the camera or three-dimensional positions of the feature points by generating status data corresponding to the submaps not including information about feature points outside of a submap area for each of the generated submaps and updating the generated status data corresponding to the submaps. | 02-24-2011 |
20110044505 | EQUIPMENT OPERATION SAFETY MONITORING SYSTEM AND METHOD AND COMPUTER-READABLE MEDIUM RECORDING PROGRAM FOR EXECUTING THE SAME - Provided are equipment operation safety monitoring system and method and computer-readable medium having a program recorded thereon, the program allowing a computer to execute the method. The equipment operation safety monitoring system includes an image input unit, an integrated image generation unit, a guideline generation unit, and an image output unit. The image input unit is mounted on heavy equipment and inputs a plurality of images acquired by photographing partitioned areas in all the directions around the heavy equipment. The integrated image generation unit generates an integrated image including the areas in all the directions around the heavy equipment by using the plurality of the images. The guideline generation unit generates a guideline indicating a position separated by a predetermined distance from the heavy equipment. The image output unit illustrates the guideline on the integrated image and outputs the integrated image. | 02-24-2011 |
20110044506 | TARGET ANALYSIS APPARATUS, METHOD AND COMPUTER-READABLE MEDIUM - Provided is a target analysis apparatus, method and computer-readable medium based on a depth image and an intensity image of a target. The target analysis apparatus may include a body detection unit to detect a body of the target from the intensity image of the target, a foreground segmentation unit to calculate an intensity threshold value in accordance with intensity values from the detected body, to transform the intensity image into a binary image using the intensity threshold value, and to mask the depth image of the target using the binary image as a mask to thereby obtain a masked depth image, and an active portion detection unit to detect an active portion of the body of the target from the masked depth image. | 02-24-2011 |
20110044507 | METHOD AND ASSISTANCE SYSTEM FOR DETECTING OBJECTS IN THE SURROUNDING AREA OF A VEHICLE - A method for determining relevant objects in a vehicle moving on a roadway. An assistance function is executed in relation to a position of a relevant object, and the relevant objects are determined on the basis of an image evaluation of images of a surrounding area of the vehicle. The images are detected by way of camera sensors. By way of a radar sensor, positions of stationary objects in the surrounding area of the vehicle are determined. A profile of a roadway edge is determined using the positions of the stationary objects, and the image evaluation is carried out in relation to the determined roadway edge profile. A driver assistance system suitable for carrying out the method is also described. | 02-24-2011 |
20110044508 | APPARATUS AND METHOD FOR RAY TRACING USING PATH PREPROCESS - Disclosed is an apparatus and method for ray-tracing using a path preprocess. The method for ray-tracing includes launching a ray from a transmitting point at angles with regular intervals, setting a first side of an object where the launched ray is projected as a reference patch, and searching predetermined preprocessed path data for a counterpart patch corresponding to a second side of another object, the second side being exposed to the projected ray reflected or diffracted from the set reference patch, and tracing a transmission path of the reflected or diffracted ray. | 02-24-2011 |
20110051999 | Device and method for detecting targets in images based on user-defined classifiers - A device and method for detecting targets of interest in an image, such as people or objects of a certain type. Targets are detected based on an optimized strong classifier descriptor that can be based on a combination of weak classifier descriptors. The weak classifier descriptors can include a user-defined weak classifier descriptor that is defined by a user to represent a shape or appearance attribute that is characteristic of parts of the target of interest. The strong classifier descriptor can be optimized by selecting a subset of weak classifier descriptors that exhibit improved performance in detecting targets in training images. | 03-03-2011 |
20110052000 | DETECTING ANOMALOUS TRAJECTORIES IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for determining anomalous trajectories of objects tracked over a sequence of video frames. In one embodiment, a symbol trajectory may be derived from observing an object moving through a scene. The symbol trajectory represents semantic concepts extracted from the trajectory of the object. Whether the symbol trajectory is anomalous may be determined, based on previously observed symbol trajectories. A user may be alerted upon determining that the symbol trajectory is anomalous. | 03-03-2011 |
20110052001 | AUTOMATIC ERROR DETECTION FOR INVENTORY TRACKING AND MANAGEMENT SYSTEMS USED AT A SHIPPING CONTAINER YARD - A method automatically detects errors in a container inventory database associated with a container inventory tracking system of a container storage facility. A processor in the inventory tracking system performs a method that: obtains a first data record, identifies an event (e.g., pickup, drop-off, or movement) associated with the first record, provides a list of error types based on the identified event, and determines whether a data error has occurred through a checking process. In each of the checking steps, the processor selects an error type from the list of error types, determines a search criterion based on the selected error type and the first data record, queries the database using the search criterion, compares query results with the first data record to detect data conflicts between them, and upon the detection of the data conflicts, reports that a data error of the selected error type has been detected. | 03-03-2011 |
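A toy version of the per-record checking loop described above, written against an in-memory list of dicts; the event names, error types, and query predicates are invented for illustration and are not the patent's actual schema:

```python
# Sketch: select error types by event, build a search criterion per error type,
# query the "database", and report an error when the results conflict.
from typing import Callable, Dict, List

ERROR_TYPES_BY_EVENT = {
    "pickup":   ["container_not_in_yard", "duplicate_pickup"],
    "drop_off": ["slot_already_occupied"],
    "movement": ["source_slot_empty"],
}

def check_record(record: Dict, database: List[Dict],
                 queries: Dict[str, Callable[[Dict, Dict], bool]]) -> List[str]:
    """Return the list of error types detected for one inventory record."""
    detected = []
    for error_type in ERROR_TYPES_BY_EVENT.get(record["event"], []):
        predicate = queries[error_type]          # search criterion for this error
        conflicts = [row for row in database if predicate(record, row)]
        if conflicts:                            # query result conflicts with record
            detected.append(error_type)
    return detected

# Example predicates (placeholders).
queries = {
    "container_not_in_yard": lambda rec, row: (
        row["container_id"] == rec["container_id"] and row["status"] == "departed"),
    "duplicate_pickup": lambda rec, row: (
        row["container_id"] == rec["container_id"] and row["event"] == "pickup"),
    "slot_already_occupied": lambda rec, row: (
        row.get("slot") == rec.get("slot") and row["status"] == "stored"),
    "source_slot_empty": lambda rec, row: False,
}

errors = check_record(
    {"event": "pickup", "container_id": "C123"},
    [{"container_id": "C123", "status": "departed", "event": "drop_off"}],
    queries)
```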
20110052002 | FOREGROUND OBJECT TRACKING - Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered before the foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real time surveillance applications. | 03-03-2011 |
20110052003 | FOREGROUND OBJECT DETECTION IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered before the detected foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real time surveillance applications. | 03-03-2011 |
20110052004 | CAMERA DEVICE AND IDENTITY RECOGNITION METHOD UTILIZING THE SAME - A camera device includes an image capturing module, a face detection module, a light detection and ranging (LIDAR) system, a storage module, and a microprocessor. The image capturing module continuously captures images of a determined field. The face detection module detects the images to obtain a face to be tested, and records coordinates of the face in the image. The LIDAR system scans the face to be tested in the determined field according to the coordinates to thereby obtain three-dimensional information of the face to be tested. The storage module stores three-dimensional information of a determined face. The microprocessor compares the three-dimensional information of the face to be tested with the three-dimensional information of the determined face, and then outputs a recognition signal. | 03-03-2011 |
20110052005 | Designation of a Characteristic of a Physical Capability by Motion Analysis, Systems and Methods - Motion Analysis is used to classify or rate human capability in a physical domain via a minimized movement and data collection protocol producing a discrete, overall figure of merit of the selected physical capability. The minimal protocol is determined by data mining of a more extensive movement and data collection. Protocols are relevant in medical, sports and occupational applications. Kinematic, kinetic, body type, Electromyography (EMG), Ground Reactive Force (GRF), demographic, and psychological data are encompassed. Resulting protocols are capable of transforming raw data representing specific human motions into an objective rating of a skill or capability related to those motions. | 03-03-2011 |
20110052006 | EXTRACTION OF SKELETONS FROM 3D MAPS - A method for processing data includes receiving a temporal sequence of depth maps of a scene containing a humanoid form having a head. The depth maps include a matrix of pixels having respective pixel depth values. A digital processor processes at least one of the depth maps so as to find a location of the head and estimates dimensions of the humanoid form based on the location. The processor tracks movements of the humanoid form over the sequence using the estimated dimensions. | 03-03-2011 |
20110052007 | GESTURE RECOGNITION METHOD AND INTERACTIVE SYSTEM USING THE SAME - A gesture recognition method for an interactive system includes the steps of: capturing image windows with an image sensor; obtaining information of object images associated with at least one pointer in the image windows; calculating a position coordinate of the pointer relative to the interactive system according to the position of the object images in the image windows when a single pointer is identified according to the information of object images; and performing gesture recognition according to a relation between the object images in the image window when a plurality of pointers are identified according to the information of object images. The present invention further provides an interactive system. | 03-03-2011 |
20110052008 | System and Method for Image Based Sensor Calibration - Apparatus and methods are disclosed for the calibration of a tracked imaging probe for use in image-guided surgical systems. The invention uses actual image data collected from an easily constructed calibration jig to provide data for the calibration algorithm. The calibration algorithm analytically develops a geometric relationship between the probe and the image so objects appearing in the collected image can be accurately described with reference to the probe. The invention can be used with either two or three dimensional image data-sets. The invention also has the ability to automatically determine the image scale factor when two dimensional data-sets are used. | 03-03-2011 |
20110058708 | OBJECT TRACKING APPARATUS AND OBJECT TRACKING METHOD - Candidate contour curves for a tracking object in the current frame are determined using a particle filter, based on the existence probability distribution of the tracking object in a frame which is one frame previous to the current frame. To match a candidate curve against a contour image of the current frame, the processing of searching for the closest contour to the candidate curves is divided for each knot constituting the candidate contour curve and is executed in parallel by a plurality of processors. Image data for the search region of each knot to be processed is copied from a contour image stored in an image storage to the respective local memories. | 03-10-2011 |
20110058709 | VISUAL TARGET TRACKING USING MODEL FITTING AND EXEMPLAR - A method of tracking a target includes receiving an observed depth image of the target from a source and analyzing the observed depth image with a prior-trained collection of known poses to find an exemplar pose that represents an observed pose of the target. The method further includes rasterizing a model of the target into a synthesized depth image having a rasterized pose and adjusting the rasterized pose of the model into a model-fitting pose based, at least in part, on differences between the observed depth image and the synthesized depth image. Either the exemplar pose or the model-fitting pose is then selected to represent the target. | 03-10-2011 |
20110064267 | CLASSIFIER ANOMALIES FOR OBSERVED BEHAVIORS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A combination of a self organizing map (SOM) and an adaptive resonance theory (ART) network may be used to identify a variety of different anomalous inputs at each cluster layer. As progressively higher layers of the cortex model component represent progressively higher levels of abstraction, anomalies occurring in the higher levels of the cortex model represent observations of behavioral anomalies corresponding to progressively complex patterns of behavior. | 03-17-2011 |
20110064268 | VIDEO SURVEILLANCE SYSTEM CONFIGURED TO ANALYZE COMPLEX BEHAVIORS USING ALTERNATING LAYERS OF CLUSTERING AND SEQUENCING - Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A video surveillance system may be configured to observe a scene (as depicted in a sequence of video frames) and, over time, develop hierarchies of concepts including classes of objects, actions and behaviors. That is, the video surveillance system may develop models at progressively more complex levels of abstraction used to identify what events and behaviors are common and which are unusual. When the models have matured, the video surveillance system issues alerts on unusual events. | 03-17-2011 |
20110064269 | OBJECT POSITION TRACKING SYSTEM AND METHOD - A method of tracking an object is provided. The method includes obtaining sensed positions of the object at a plurality of time instants and predicting a future position of the object by applying fuzzy predictive rules to the sensed positions of the object obtained from at least two previous time instants. | 03-17-2011 |
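A toy illustration of prediction by fuzzy rules applied to the two most recent sensed positions; the membership functions and rule gains below are invented for illustration and are not the patented rule set:

```python
# Sketch: fuzzy-weighted extrapolation of the next object position.
import numpy as np

def membership_slow(speed, full=1.0, zero=5.0):
    return float(np.clip((zero - speed) / (zero - full), 0.0, 1.0))

def membership_fast(speed, zero=1.0, full=5.0):
    return float(np.clip((speed - zero) / (full - zero), 0.0, 1.0))

def fuzzy_predict(p_prev, p_curr):
    """p_prev, p_curr: positions sensed at the two most recent time instants."""
    p_prev, p_curr = np.asarray(p_prev, float), np.asarray(p_curr, float)
    velocity = p_curr - p_prev
    speed = float(np.linalg.norm(velocity))
    w_slow, w_fast = membership_slow(speed), membership_fast(speed)
    # Rule 1: if moving slowly, assume the object mostly stays put (gain 0.3).
    # Rule 2: if moving fast, assume it keeps its velocity (gain 1.0).
    gain = (0.3 * w_slow + 1.0 * w_fast) / max(w_slow + w_fast, 1e-9)
    return p_curr + gain * velocity

print(fuzzy_predict((0.0, 0.0), (2.0, 1.0)))
```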
20110064270 | OPTICAL TRACKING DEVICE AND POSITIONING METHOD THEREOF - The present invention discloses an optical tracking device and a positioning method thereof. The optical tracking device comprises several light-emitting units, several image tracking units, an image processing unit, an analysis unit, and a calculation unit. First, the light-emitting units are correspondingly disposed on a carrier in geometric distribution and provide light sources. Secondly, the image tracking units track the plurality of light sources and capture images. The images are subjected to image processing by the image processing unit to obtain light source images corresponding to the light sources from each image. Then the analysis unit analyzes the light source images to obtain positions and colors corresponding to the light-emitting units. Lastly, the calculation unit establishes three-dimensional coordinates corresponding to the light-emitting units based on the positions and colors and calculates the position of the carrier based on the three-dimensional coordinates. | 03-17-2011 |
20110064271 | METHOD FOR DETERMINING A THREE-DIMENSIONAL REPRESENTATION OF AN OBJECT USING A SEQUENCE OF CROSS-SECTION IMAGES, COMPUTER PROGRAM PRODUCT, AND CORRESPONDING METHOD FOR ANALYZING AN OBJECT AND IMAGING SYSTEM - The method comprises, for each cross-section image, determining the position of the object (O) in relation to the cross-section plane at the moment the cross-section image is captured, and determining a three-dimensional representation (V) of the object (O) using cross-section images | 03-17-2011 |
20110064272 | Method and apparatus for three-dimensional tracking of infra-red beacons - A method for processing data includes identifying a time signature of an infra-red (IR) beacon. Image data associated with the IR beacon is identified using the time signature. | 03-17-2011 |
20110069865 | METHOD AND APPARATUS FOR DETECTING OBJECT USING PERSPECTIVE PLANE - A method and apparatus for detecting an object using a perspective plane are disclosed. The method includes determining a perspective plane for a background scene, and determining a moving object within the background scene based upon the determined perspective plane. By using a visual surveillance device and an apparatus for detecting objects, the method and apparatus for detecting an object using a perspective plane is capable of efficiently detecting objects and tracking the movements of the corresponding objects. | 03-24-2011 |
20110069866 | Image processing apparatus and method - Provided is an image processing apparatus. The image processing apparatus may extract a three-dimensional (3D) silhouette image in an input color image and/or an input depth image. Motion capturing may be performed using the 3D silhouette image and 3D body modeling may be performed. | 03-24-2011 |
20110069867 | TECHNIQUE FOR REGISTERING IMAGE DATA OF AN OBJECT - A technique of registering image data of an object | 03-24-2011 |
20110069868 | SIGNAL PROCESSING SYSTEM AND SIGNAL PROCESSING PROGRAM - A dedicated base vector based on a known spectral characteristic of a subject as an identification target having the known spectral characteristic and a spectral characteristic of an imaging system, which includes a spectral characteristic concerning a color imaging system used for image acquisition of subjects including the subject as the identification target and a spectral characteristic concerning illumination light used when images of the subjects are acquired by the color imaging system, are acquired. A weighting factor concerning the dedicated base vector is calculated based on an image signal obtained by image acquisition of the subject by the color imaging system, the dedicated base vector, and the spectral characteristic of the imaging system. An identification result of the subject which is the identification target having the known spectral characteristic is calculated based on the weighting factor concerning the dedicated base vector and is output as an output signal. | 03-24-2011 |
20110069869 | SYSTEM AND METHOD FOR DEFINING AN ACTIVATION AREA WITHIN A REPRESENTATION SCENERY OF A VIEWER INTERFACE - The invention describes a system | 03-24-2011 |
20110075884 | Automatic Retrieval of Object Interaction Relationships - A method for automatically retrieving interaction information between objects, including: with a server, transforming a first image and a second image submitted to said server from a source into first and second sets of parameters, respectively; searching a database for an interaction relationship between the first and second images using the first and second sets of parameters; and returning a representation of the interaction relationship to the source. | 03-31-2011 |
20110081043 | USING VIDEO-BASED IMAGERY FOR AUTOMATED DETECTION, TRACKING, AND COUNTING OF MOVING OBJECTS, IN PARTICULAR THOSE OBJECTS HAVING IMAGE CHARACTERISTICS SIMILAR TO BACKGROUND - A system and method to automatically detect, track and count individual moving objects in a high density group without regard to background content, embodiments performing better than a trained human observer. Select embodiments employ thermal videography to detect and track even those moving objects having thermal signatures that are similar to a complex stationary background pattern. The method allows tracking an object that need not be identified in every frame of the video and that may change polarity in the imagery with respect to background, e.g., switching from relatively light to dark or relatively hot to cold and vice versa, or both. The methodology further provides a permanent record of an “episode” of objects in motion, permitting reprocessing with different parameters any number of times. Post-processing of the recorded tracks allows easy enumeration of the number of objects tracked within the FOV of the imager. | 04-07-2011 |
20110081044 | Systems And Methods For Removing A Background Of An Image - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may then be discarded to isolate one or more voxels associated with a foreground object such as a human target and the isolated voxels associated with the foreground object may be processed. | 04-07-2011 |
20110081045 | Systems And Methods For Tracking A Model - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose. | 04-07-2011 |
20110081046 | METHOD OF IMPROVING THE RESOLUTION OF A MOVING OBJECT IN A DIGITAL IMAGE SEQUENCE - A method of improving the resolution of a small moving object in a digital image sequence comprises the steps of: | 04-07-2011 |
20110081047 | ELECTRONIC APPARATUS AND IMAGE DISPLAY METHOD - According to one embodiment, an electronic apparatus detects face images in a still image. The apparatus sets positions and sizes of display ranges on the still image such that the display ranges include the face images respectively, the display ranges being associated with display areas obtained by dividing a display screen. The apparatus displays partial images included in the display ranges on the display areas in order to display the face images on the display areas respectively, and changes the position and size of each of the display ranges such that a display mode of the display screen is caused to transit from a first display mode in which the face images are displayed on the display areas respectively to a second display mode in which an entire image of the still image is displayed on the display screen. | 04-07-2011 |
20110081048 | METHOD AND APPARATUS FOR TRACKING MULTIPLE OBJECTS AND STORAGE MEDIUM - The present invention relates to a method and an apparatus for tracking multiple objects and a storage medium. More particularly, the present invention relates to a method and an apparatus for tracking multiple objects that performs object detection of only one subset per camera image, regardless of the number N of objects to be tracked, and tracks all objects among images while the objects are detected, to track multiple objects in real time, and a storage medium. The method for tracking multiple objects according to the exemplary embodiment of the present invention includes: (a) performing object detection with respect to only objects of one subset among multiple objects with respect to an input image at a predetermined time; and (b) tracking all objects among images from an image of a time prior to the predetermined time with respect to all objects in the input image while step (a) is performed. | 04-07-2011 |
20110085698 | Measuring Turbulence and Winds Aloft using Solar and Lunar Observable Features - Presented is a system and method for detecting turbulence in the atmosphere comprising an image capturing device for capturing a plurality of images of a visual feature of a celestial object such as the sun, combined with a lens having a focal length adapted to focus an image onto the image capturing device such that the combination of the lens and the image capturing device are adapted to resolve a distortion caused by a turbule of turbulent air, and an image processor adapted to compare said plurality of images of said visual feature to detect the transit of a turbule of turbulent air in between said image capturing device and said celestial object, and compute a measurement of the angular velocity of the turbule. A second plurality of images is used to triangulate the distance to the turbule and the velocity of the turbule. | 04-14-2011 |
20110085699 | Method and apparatus for tracking image patch considering scale - A method and apparatus for tracking an image considering scale are provided. A registered image patch may be divided into a scale-invariant image patch and a scale-variant image patch according to a predetermined scale invariance index (SII). If a registered image patch within an image is a scale-invariant image patch, the scale-invariant image patch is tracked by adjusting its position, while if the registered image patch is a scale-variant image patch, the scale-variant image patch is tracked by adjusting its position and scale. | 04-14-2011 |
20110085700 | Systems and Methods for Generating Bio-Sensory Metrics - Neuromarketing processing systems and methods are described that provide marketers with a window into the mind of the consumer with a scientifically validated, quantitatively-based means of bio-sensory measurement. The neuromarketing processing system generates, from bio-sensory inputs, quantitative models of consumers' responses to information in the consumer environment, under an embodiment. The quantitative models provide information including consumers' emotion, engagement, cognition, and feelings. The information in the consumer environment includes advertising, packaging, in-store marketing, and online marketing. | 04-14-2011 |
20110085701 | STRUCTURE DETECTION APPARATUS AND METHOD, AND COMPUTER-READABLE MEDIUM STORING PROGRAM THEREOF - A plurality of candidate points are extracted from image data. The plurality of candidate points are normalized, and a set of representative points composing a form model that is most similar to a set form is selected from the plurality of candidate points. Further, the candidate points and the form model are compared with each other, and correction is performed by adding a region forming structure or by deleting a region, or the like. Accordingly, the structure is detected in image data. | 04-14-2011 |
20110085702 | OBJECT TRACKING BY HIERARCHICAL ASSOCIATION OF DETECTION RESPONSES - Systems, methods, and computer readable storage media are described that can provide a multi-level hierarchical framework to progressively associate detection responses, in which different methods and models are adopted to improve tracking robustness. A modified transition matrix for the Hungarian algorithm can be used to solve the association problem that considers not only initialization, termination and transition of tracklets but also false alarm hypotheses. A Bayesian inference approach can be used to automatically estimate a scene structure model as the high-level knowledge for the long-range trajectory association. | 04-14-2011 |
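The tracklet-association step can be illustrated with SciPy's Hungarian solver and an augmented cost matrix that also admits tracklet termination and detection initialization, in the spirit of the modified transition matrix mentioned above; the cost values are placeholders:

```python
# Sketch: Hungarian assignment over an augmented cost matrix so that every
# tracklet may terminate and every detection may start a new tracklet.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(cost, miss_cost=10.0):
    """cost: T x D matrix of tracklet-to-detection linking costs."""
    T, D = cost.shape
    big = 1e6
    # Block matrix: real links, tracklet-termination, detection-initialization.
    augmented = np.full((T + D, D + T), big)
    augmented[:T, :D] = cost
    augmented[:T, D:] = np.where(np.eye(T, dtype=bool), miss_cost, big)   # terminate
    augmented[T:, :D] = np.where(np.eye(D, dtype=bool), miss_cost, big)   # initialize
    augmented[T:, D:] = 0.0                                               # dummy-dummy
    rows, cols = linear_sum_assignment(augmented)
    # Keep only real tracklet-to-detection matches.
    return [(r, c) for r, c in zip(rows, cols) if r < T and c < D]

links = associate(np.random.rand(3, 4))
```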
20110085703 | Method and apparatus for automatic object identification - A method and system for processing image data to identify objects in an image. Terrain types are identified in the image. A second image is generated identifying areas of the image which border regions of different intensities by identifying a gradient magnitude value for each pixel of the image. A filtered image is generated from the second image, the filtered image identifying potential objects which have a smaller radius than the size of a filter and a different brightness than background pixels surrounding the potential objects. The second image and the filtered image are compared to identify potential objects as an object. A potential object is identified as an object if the potential object has a gradient magnitude greater than a threshold gradient magnitude, and the threshold gradient magnitude is based on the terrain type identified in the portion of the image where the potential object is located. | 04-14-2011 |
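A rough sketch of the gradient-magnitude image, small-radius brightness filter, and terrain-dependent threshold test described above, using common OpenCV operations; the terrain classes, thresholds, and filter radius are placeholder assumptions:

```python
# Sketch: gradient magnitude + top-hat filtering + terrain-dependent threshold.
import numpy as np
import cv2

TERRAIN_THRESHOLDS = {"water": 20.0, "field": 40.0, "forest": 60.0}

def detect_objects(gray, terrain_map, radius=5):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad_mag = cv2.magnitude(gx, gy)                      # "second image"
    # Top-hat filter: keeps blobs smaller than the structuring element that are
    # brighter than the surrounding background pixels.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * radius + 1,) * 2)
    filtered = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)   # "filtered image"
    candidates = filtered > 0
    # Per-pixel threshold chosen by the terrain type at that location.
    thresholds = np.vectorize(TERRAIN_THRESHOLDS.get)(terrain_map).astype(np.float32)
    return candidates & (grad_mag > thresholds)           # confirmed objects
```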
20110085704 | Markerless motion capturing apparatus and method - A markerless motion capturing apparatus and method is provided. The markerless motion capturing apparatus may track a pose and a motion of a performer from an image, inputted from a camera, without using a marker or a sensor, and thereby may extend an application of the markerless motion capturing apparatus and selection of a location. | 04-14-2011 |
20110085705 | DETECTION OF BODY AND PROPS - A system and method for detecting and tracking targets including body parts and props is described. In one aspect, the disclosed technology acquires one or more depth images, generates one or more classification maps associated with one or more body parts and one or more props, tracks the one or more body parts using a skeletal tracking system, tracks the one or more props using a prop tracking system, and reports metrics regarding the one or more body parts and the one or more props. In some embodiments, feedback may occur between the skeletal tracking system and the prop tracking system. | 04-14-2011 |
20110085706 | DEVICE AND METHOD FOR LOCALIZING AN OBJECT OF INTEREST IN A SUBJECT - The present invention relates to a device, a method and a computer program which allow for the localization of an object of interest in a subject. The device includes a registration unit | 04-14-2011 |
20110091068 | Secure Tracking Of Tablets - A method of tracking and tracing tablets, in particular pharmaceutical tablets, includes reading, i.e. detecting, code structure from the tablet, reading additional information from the package on an information sheet, and then comparing the readings to verify authenticity. The code structure may be two-dimensional or three-dimensional. The detected code may further be compared with information stored in a database. | 04-21-2011 |
20110091069 | INFORMATION PROCESSING APPARATUS AND METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - An information processing apparatus comprises: an extraction unit configured to extract a person from a video obtained by capturing a real space; a holding unit configured to hold a movement estimation rule corresponding to a partial region specified in the video; a determination unit configured to determine whether a region where the person has disappeared from the video or appeared in the video corresponds to the partial region; and an estimation unit configured to estimate, based on the movement estimation rule corresponding to the partial region determined to correspond, a movement of the person after the person has disappeared from the video or before the person has appeared in the video. | 04-21-2011 |
20110091070 | COMBINING MULTI-SENSORY INPUTS FOR DIGITAL ANIMATION - Animating digital characters based on motion captured performances, including: receiving sensory data collected using a variety of collection techniques including optical video, electro-oculography, and at least one of optical, infrared, and inertial motion capture; and managing and combining the collected sensory data to aid cleaning, tracking, labeling, and re-targeting processes. Keywords include Optical Video Data and Inertial Motion Capture. | 04-21-2011 |
20110091071 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus including an image acquisition unit that acquires a target image; a face part extraction unit that extracts a face region including a face part from the target image; an identification unit that identifies a model face part by comparing the face part to a plurality of model face parts stored in a storage unit; and an illustration image determination unit that determines an illustration image corresponding to the identified model face part. | 04-21-2011 |
20110091072 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND CONTROL METHOD FOR IMAGE PROCESSING APPARATUS - An image processing apparatus capable of communicating with a plurality of servers stores image data including an object of recognition, and a plurality of recognition dictionaries. The image processing apparatus establishes communication with one of the servers to receive, from the server with which the communication has been established, designation information designating a recognition dictionary for recognizing the object of recognition included in the image data. The image processing apparatus identifies the recognition dictionary designated in the received designation information from among the stored recognition dictionaries and uses the identified recognition dictionary to recognize the object of recognition included in the image data. | 04-21-2011 |
20110091073 | MOVING OBJECT DETECTION APPARATUS AND MOVING OBJECT DETECTION METHOD - To provide a moving object detection apparatus which accurately performs region extraction, regardless of the pose or size of a moving object. The moving object detection apparatus includes: an image receiving unit receiving the video sequence; a motion analysis unit calculating movement trajectories based on motions of the image; a segmentation unit performing segmentation so as to divide the movement trajectories into subsets, and setting a part of the movement trajectories as common points shared by the subsets; a distance calculation unit calculating a distance representing a similarity between a pair of movement trajectories, for each of the subsets; a geodesic distance calculation unit transforming the calculated distance into a geodesic distance; an approximate geodesic distance calculation unit calculating an approximate geodesic distance bridging over the subsets, by integrating geodesic distances including the common points; and a region extraction unit performing clustering on the calculated approximate geodesic distance. | 04-21-2011 |
20110091074 | MOVING OBJECT DETECTION METHOD AND MOVING OBJECT DETECTION APPARATUS - A moving object detection method includes: extracting N_L long-term trajectories (N_L ≧ 2) over T_L pictures (T_L ≧ 3) and N_S short-term trajectories (N_S > N_L) over T_S pictures (T_L > T_S ≧ 2), using movement trajectories; calculating a geodetic distance between the N_L long-term trajectories and a geodetic distance between the N_S short-term trajectories | 04-21-2011 |
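The geodesic-distance step shared by the two abstracts above is commonly computed by restricting a Euclidean distance matrix to nearest-neighbour edges and running shortest paths over the resulting graph; the sketch below assumes flattened 2-D trajectories and an invented neighbourhood size:

```python
# Sketch: transform Euclidean trajectory distances into geodesic distances
# via a k-nearest-neighbour graph and shortest paths.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

def geodesic_distances(trajectories, k=5):
    """trajectories: N x (T*2) array, each row a flattened (x, y) trajectory."""
    X = np.asarray(trajectories, dtype=float)
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))             # Euclidean distance matrix
    # Keep each trajectory's k nearest neighbours as graph edges; drop the rest.
    graph = np.zeros_like(d)
    nearest = np.argsort(d, axis=1)[:, :k + 1]        # +1: each row's nearest is itself
    rows = np.repeat(np.arange(len(X)), k + 1)
    graph[rows, nearest.ravel()] = d[rows, nearest.ravel()]
    # Geodesic distance = shortest path length through the neighbourhood graph.
    return shortest_path(csr_matrix(graph), directed=False)

geo = geodesic_distances(np.random.rand(20, 30))
```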
20110096954 | OBJECT AND MOVEMENT DETECTION - Motions, positions or configurations of, for example, a human hand can be recognised by transmitting a plurality of transmit signals in respective time frames; receiving a plurality of receive signals; determining a plurality of channel impulse responses using the transmit and receive signals; defining a matrix of impulse responses, with impulse responses for adjacent time frames adjacent each other; and analysing the matrix for patterns. | 04-28-2011 |
20110096955 | SECURE ITEM IDENTIFICATION AND AUTHENTICATION SYSTEM AND METHOD BASED ON UNCLONABLE FEATURES - The present invention is a method and apparatus for protection of various items against counterfeiting using physical unclonable features of item microstructure images. The protection is based on the proposed identification and authentication protocols coupled with portable devices. In both cases a special transform is applied to data that provides a unique representation in the secure key-dependent domain of reduced dimensionality that also simultaneously resolves performance-security-complexity and memory storage requirement trade-offs. The enrolled database needed for the identification can be stored in the public domain without any risk to be used by the counterfeiters. Additionally, it can be easily transportable to various portable devices due to its small size. Notably, the proposed transformations are chosen in such a way to guarantee the best possible performance in terms of identification accuracy with respect to the identification in the raw data domain. The authentication protocol is based on the proposed transform jointly with the distributed source coding. Finally, the extensions of the described techniques to the protection of artworks and secure key exchange and extraction are disclosed in the invention. | 04-28-2011 |
20110096956 | VEHICLE PERIPHERY MONITORING DEVICE - A vehicle periphery monitoring device is operable to report a high contact possibility between a vehicle and an object at an appropriate time or frequency according to the type of the object. When the object is determined to be a human being and the position of the object in real space is contained in a first contact determination area, a high contact possibility between the vehicle and the object is reported. On the other hand, when the object is determined to be a quadruped animal and the real spatial position of the object is contained in a second contact determination area, the corresponding report is made. The second contact determination area has an overlapped area that overlaps with the first contact determination area, and an overflowed area that has at least a part thereof overflowing from the first contact determination area. | 04-28-2011 |
20110103642 | Multipass Data Integration For Automatic Detection And Classification Of Objects - Classification of a potential target is accomplished by receiving image information, detecting a potential target within the image information and determining a plurality of features forming a feature set associated with the potential target. The location of the potential target is compared with a detection database to determine if it is close to an element in the detection database. If not, a single-pass classifier receives a potential target's feature set, classifies the potential target, and transmits the location, feature set and classification to the detection database. If it is close, a fused multi-pass feature determiner determines fused multi-pass features of the potential target and a multi-pass classifier receives the potential target's feature set and fused multi-pass features, classifies the potential target, and transmits its location, feature set, fused multi-pass features and classification to the detection database. | 05-05-2011 |
20110103643 | IMAGING SYSTEM WITH INTEGRATED IMAGE PREPROCESSING CAPABILITIES - An electronic device may have a camera module. The camera module may include a camera sensor and associated image preprocessing circuitry. The image preprocessing circuitry may analyze images from the camera module to perform motion detection, facial recognition, and other operations. The image preprocessing circuitry may generate signals that indicate the presence of a user and that indicate the identity of the user. The electronic device may receive the signals from the camera module and may use the signals in implementing power saving functions. The electronic device may enter a power conserving mode when the signals do not indicate the presence of a user, but may keep the camera module powered in the power conserving mode. When the camera module detects that a user is present, the signals from the camera module may activate the electronic device and direct the electronic device to enter an active operating mode. | 05-05-2011 |
20110103644 | METHOD AND APPARATUS FOR IMAGE DETECTION WITH UNDESIRED OBJECT REMOVAL - A method and image detection device are provided for removal of undesired objects from image data. In one embodiment, a method includes detecting image data for a first frame, detecting image data for a second frame, and detecting motion of an undesired object based, at least in part, on image data for the first and second frames. Image data of the first frame may be replaced with image data of the second frame to generate corrected image data, wherein the undesired object is removed from the corrected image data. The corrected image data may be stored. | 05-05-2011 |
20110103645 | Motion Detecting Apparatus - A motion detecting apparatus includes a fetcher which repeatedly fetches an object scene image having a designated resolution. An assigner assigns a plurality of areas, each of which has a representative point, to the object scene image in a manner such that the overlapping amount differs depending on the size of the designated resolution. A divider divides each of a plurality of images respectively corresponding to the plurality of areas assigned by the assigner, into a plurality of partial images, by using the representative points as a base point. A detector detects a difference in brightness between a pixel corresponding to the representative point and surrounding pixels, from each of the plurality of partial images divided by the divider. A creator creates motion information indicating a motion of the object scene image fetched by the fetcher, based on a detection result of the detector. | 05-05-2011 |
20110103646 | METHOD FOR GENERATING A DENSITY IMAGE OF AN OBSERVATION ZONE - A method for generating a density image of an observation zone over a given time interval, in which method a plurality of images of the observation zone is acquired, and for each acquired image the following steps are carried out: a) detection of zones of pixels standing out from the fixed background of the image, b) detection of individuals, c) for each individual detected, determination of the elementary surface areas occupied by this individual, and d) incrementation of a level of intensity of the elementary surface areas thus determined in the density image. | 05-05-2011 |
20110103647 | Device and Method for Classifying Vehicles - A device for classifying objects, in particular vehicles, on a roadway includes a sensor, which operates according to the light-section procedure and is directed onto the roadway to detect the surface contour of an object, and an evaluation unit connected to the sensor that classifies the object on the basis of the detected surface contour. | 05-05-2011 |
20110103648 | Method and apparatus for automatic object identification - A method and system for processing image data to identify objects in an image. A gradient vector image is generated from the image, the gradient vector image identifying a gradient magnitude value and a gradient direction for each pixel of the image. Lines are identified in the gradient vector image. It is determined whether the identified lines are perpendicular, whether more than a predetermined number of pixels on each of the lines identified as perpendicular have a gradient magnitude greater than a predetermined threshold, and whether the individual lines which are identified as perpendicular are within a predetermined distance of each other. A portion of the image is identified as an object if the identified lines are perpendicular, more than the predetermined number of pixels on each of the lines have a gradient magnitude greater than the predetermined threshold, and the lines are within a predetermined distance of each other. | 05-05-2011 |
20110103649 | Complex Wavelet Tracker - The present invention relates to a video tracker which allows automatic tracking of a selected area over video frames. Motion of the selected area is defined by a parametric motion model. In addition to simple displacement of the area it can also detect motions such as rotation, scaling and shear depending on the motion model. The invention realizes the tracking of the selected area by estimating the parameters of this motion model in the complex discrete wavelet domain. The invention can achieve the result in a non-iterative direct way. Estimation carried out in the complex discrete wavelet domain provides a robust tracking opportunity without being affected by noise and illumination changes in the video, as opposed to intensity-based methods. The invention can easily be adapted to many fields in addition to video tracking. | 05-05-2011 |
20110110557 | Geo-locating an Object from Images or Videos - The present invention discloses a novel method, computer program product, and system for determining a spatial location of a target object from the selection of points in multiple images that correspond to the object location within the images. In one aspect, the method includes collecting location and orientation information of one or more image sensors producing the images; the collected location and orientation information is then used to determine the spatial location of the target object. | 05-12-2011 |
20110110558 | Apparatus, System, and Method for Automatic Airborne Contaminant Analysis - An apparatus, system, and method are disclosed for locating, classifying, and quantifying airborne contaminants. In one embodiment, the apparatus contains an air sampler, an imaging device, a processing module, and a user interface. The air sampler may contain at least one opening into which ambient air is flowable. The imaging device may produce images of the ambient air within an interior volume of the air sampler. The processing module may receive the images produced by the imaging device and may locate, classify, and quantify specific airborne contaminants, such as mold and pollen spores. Data concerning the airborne contaminants can be output to a user at a user interface. | 05-12-2011 |
20110110559 | Optical Positioning Apparatus And Positioning Method Thereof - An optical positioning apparatus and method are adapted for determining a position of an object in a three-dimensional coordinate system which has a first axis, a second axis and a third axis perpendicular to one another. The optical positioning apparatus includes a host device which has a first optical sensor and a second optical sensor located along the first axis with a first distance therebetween, and a processor connected with the optical sensors, and a calibrating device placed in the sensitivity range of the optical sensors with a second distance between an origin of the second axis and a coordinate of the calibrating device projected in the second axis. The optical sensors sense the calibrating device to make the processor execute a calibrating procedure, and then sense the object to make the processor execute a positioning procedure for determining the position of the object in the three-dimensional coordinate system. | 05-12-2011 |
20110110560 | Real Time Hand Tracking, Pose Classification and Interface Control - A hand gesture from a camera input is detected using an image processing module of a consumer electronics device. The detected hand gesture is identified from a vocabulary of hand gestures. The electronics device is controlled in response to the identified hand gesture. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract. | 05-12-2011 |
20110110561 | FACIAL MOTION CAPTURE USING MARKER PATTERNS THAT ACCOMMODATE FACIAL SURFACE - Capturing facial surface using marker patterns laid out on the facial surface by adapting the marker patterns to contours of the facial surface and motion range of a head including: generating a facial action coding system (FACS) matrix by capturing FACS poses; generating a pattern to wrap over the facial surface using the FACS poses as a guide; capturing and tracking marker motions of the pattern; stabilizing the marker motions of the pattern using a head stabilization transform to remove head motions from the marker motions; and generating and applying a plurality of FACS matrix weights to the stabilized marker motions. | 05-12-2011 |
20110116682 | OBJECT DETECTION METHOD AND SYSTEM - An object detection method and an object detection system, suitable for detecting moving object information of a video stream having a plurality of images, are provided. The method performs a moving object foreground detection on each of the images, so as to obtain a first foreground detection image comprising a plurality of moving objects. The method also performs a texture object foreground detection on each of the images, so as to obtain a second foreground detection image comprising a plurality of texture objects. The moving objects in the first foreground detection image and the texture objects in the second foreground detection image are selected and filtered, and then the remaining moving objects or texture objects after the filtering are output as real moving object information. | 05-19-2011 |
20110116683 | REDUCING MOTION ARTEFACTS IN MRI - The invention relates to motion correction in magnetic resonance imaging (MRI), implemented as an MRI apparatus or system, computer programs for such, and a method. A motion pattern of a region of interest (ROI) is estimated by: selecting a fixed point at an anatomical position that is pre-determined to be little or not affected by motion, and rotating a point in the ROI that is affected by motion on the basis of motion detected by a navigator or other methods. From the estimated motion pattern of the ROI, the field of view (FOV) may be adapted by adjusting the gradients and the bandwidth of the RF pulses of the MR system in the acquisition sequence to avoid or reduce motion artefacts. Alternatively, motion correction is carried out on the reconstructed images. | 05-19-2011 |
20110116684 | SYSTEM AND METHOD FOR VISUALLY TRACKING WITH OCCLUSIONS - Described herein are tracking algorithm modifications to handle occlusions when processing a video stream including multiple image frames. Specifically, system and methods for handling both partial and full occlusions while tracking moving and non-moving targets are described. The occlusion handling embodiments described herein may be appropriate for a visual tracking system with supplementary range information. | 05-19-2011 |
20110116685 | INFORMATION PROCESSING APPARATUS, SETTING CHANGING METHOD, AND SETTING CHANGING PROGRAM - Disclosed herein is an information processing apparatus including: a detection block configured to detect persons from an image; and a setting changing block configured such that if one of the persons detected by the detection block from the image is designated, then the setting changing block identifies a plurality of attributes of the designated person based on the image of the person, before changing user interface settings using attribute-specific setting information associated with a combination of the identified multiple attributes. | 05-19-2011 |
20110123066 | PRECISELY LOCATING FEATURES ON GEOSPATIAL IMAGERY - Methods for locating a feature on geospatial imagery and systems for performing those methods are disclosed. An accuracy level of each of a plurality of geospatial vector datasets available in a database can be determined. Each of the plurality of geospatial vector datasets corresponds to the same spatial region as the geospatial imagery. The geospatial vector dataset having the highest accuracy level may be selected. When the selected geospatial vector dataset and the geospatial imagery are misaligned, the selected geospatial vector dataset is aligned to the geospatial imagery. The location of the feature on the geospatial imagery is then determined based on the selected geospatial vector dataset and outputted via a display device. | 05-26-2011 |
20110123067 | Method And System for Tracking a Target - A method and system for tracking one or more targets is described. The method includes the step of selecting a first template having a first image of a target and cyclically repeated steps of accumulating new images of the target, producing updated templates containing the new images, and tracking the target using the updated templates. Embodiments of the method use techniques directed to detection and mitigation of target occlusion events. | 05-26-2011 |
20110129117 | SYSTEM AND METHOD FOR IDENTIFYING PRODUCE - An apparatus, method and system are presented for identifying produce. Multiple images of a produce item are captured using five different types of illumination. The captured images are processed to determine parameters of the produce item and those parameters are compared to parameters of known produce to identify the produce item. | 06-02-2011 |
20110129118 | SYSTEMS AND METHODS FOR TRACKING NATURAL PLANAR SHAPES FOR AUGMENTED REALITY APPLICATIONS - Disclosed are systems and methods for tracking planar shapes for augmented-reality (AR) applications. Systems for real-time recognition and camera six-degrees-of-freedom pose estimation from planar shapes are disclosed. Recognizable shapes can be augmented with 3D content. Recognizable shapes can take the form of a predefined library that is updated online using a network. Shapes can be added to the library when the user points to a shape and asks the system to start recognizing it. The systems perform shape recognition by analyzing contour structures and generating projective invariant signatures. Image features are further extracted for pose estimation and tracking. Sample points are matched by evolving an active contour in real time. | 06-02-2011 |
20110129119 | MULTI-OBJECT TRACKING WITH A KNOWLEDGE-BASED, AUTONOMOUS ADAPTATION OF THE TRACKING MODELING LEVEL - The invention proposes a method for object and object configuration tracking based on sensory input data, the method comprising the steps of: | 06-02-2011 |
20110129120 | PROCESSING CAPTURED IMAGES HAVING GEOLOCATIONS | 06-02-2011 |
20110129121 | REAL-TIME FACE TRACKING IN A DIGITAL IMAGE ACQUISITION DEVICE - An image processing apparatus for tracking faces in an image stream iteratively receives an acquired image from the image stream potentially including one or more face regions. The acquired image is sub-sampled at a specified resolution to provide a sub-sampled image. An integral image is then calculated for at least a portion of the sub-sampled image. Fixed size face detection is applied to at least a portion of the integral image to provide a set of candidate face regions. Responsive to the set of candidate face regions produced and any previously detected candidate face regions, the resolution is adjusted for sub-sampling a subsequent acquired image. | 06-02-2011 |
20110135147 | SYSTEM AND METHOD FOR OBSTACLE DETECTION USING FUSION OF COLOR SPACE INFORMATION - A method comprises receiving an image of the area, the image representing the area in a first color space; converting the received image to at least one second color space to produce a plurality of converted images, each converted image corresponding to one of a plurality of color sub-spaces in the at least one second color space; calculating upper and lower thresholds for at least two of the plurality of color sub-spaces; applying the calculated upper and lower thresholds to the converted images corresponding to the at least two color sub-spaces to segment the corresponding converted images; fusing the segmented converted images corresponding to the at least two color sub-spaces to segment the received image; and updating the segmentation of the received image based on edge density data in the received image. | 06-09-2011 |
20110135148 | METHOD FOR MOVING OBJECT DETECTION AND HAND GESTURE CONTROL METHOD BASED ON THE METHOD FOR MOVING OBJECT DETECTION - A method for moving object detection includes the steps: obtaining successive images of the moving object and dividing the successive images into blocks; selecting one block, calculating color feature values of the block at a current time point and a following time point; according to the color feature values, obtaining an active part of the selected block; comparing the color feature value of the selected block at the current time point with that of the other blocks at the following time point to obtain a similarity relating to each of the other blocks, and defining a maximum similarity as a local correlation part; obtaining a motion-energy patch of the block according to the active part and the local correlation part; repeating the steps to obtain all motion-energy patches to form a motion-energy map; and acquiring the moving object at the current time point in the motion-energy map. | 06-09-2011 |
20110135149 | Systems and Methods for Tracking Objects Under Occlusion - A method for tracking objects in a scene may include receiving visual-based information of the scene with a vision-based tracking system and telemetry-based information of the scene with a RTLS-based tracking system. The method may also include determining a location and identity of a first object in the scene using a combination of the visual-based information and the telemetry-based information. Another method for tracking objects in a scene may include detecting a location and identity of a first object and determining a telemetry-based measurement between the first object and a second object using a real time locating system (RTLS)-based tracking system. The method may further include determining a location and identity of the second object based on the detected location of the first object and the determined measurement. A system for tracking objects in a scene may include visual-based and telemetry-based information receivers and an object tracker. | 06-09-2011 |
20110135150 | METHOD AND APPARATUS FOR TRACKING OBJECTS ACROSS IMAGES - A method and apparatus for tracking objects across images. The method includes retrieving object location in a current frame, determining the appearance and motion signatures of the object in the current frame, predicting the new location of the object based on object dynamics, searching for a location with similar appearance and motion signatures in a next frame, and utilizing the location with similar appearance and motion signatures to determine the final location of the object in the next frame. | 06-09-2011 |
20110135151 | METHOD AND APPARATUS FOR SELECTIVELY SUPPORTING RAW FORMAT IN DIGITAL IMAGE PROCESSOR - A digital image processing apparatus and method for supporting a RAW format (a sensor data format before image processing is performed) selectively support a user-desired region of a captured image in a RAW format. A method of supporting a RAW format in a digital image processing apparatus includes setting at least one portion of an image displayed in a live-view mode as a region of interest (ROI), storing the ROI in a RAW format, storing a non-ROI of the displayed image, which is a portion of the image other than the ROI, in a compression format, and compositing the stored ROI with the stored non-ROI. | 06-09-2011 |
20110135152 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus includes: a detection unit detecting the faces of persons from frames of moving-image contents; a first specifying unit specifying the persons corresponding to the detected faces by extracting feature amounts of the detected faces and verifying the extracted feature amounts in a first database in which the feature amounts of the faces are registered in correspondence with person identifying information; a voice analysis unit analyzing the voices acquired when the faces of the persons are detected from the frames of the moving-image contents and generating voice information; and a second specifying unit specifying the persons corresponding to the detected faces by verifying the voice information corresponding to the face of a person which is not specified by the first specifying unit in a second database in which the voice information is registered in correspondence with the person identifying information. | 06-09-2011 |
20110135153 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM - An image processing device includes a facial region extraction unit extracting a facial region, an identification information acquisition unit acquiring identification information for identifying a face in the facial region, and first and second integrated processing units performing integrated processing. The first and second integrated processing units determine a threshold value on the basis of a relationship between an estimated area and a position of the face being tracked, calculate a similarity between a face being tracked and a face pictured in an image to be stored in a predetermined storage period, and determine if the face being tracked and the stored face image are the face of the same person. | 06-09-2011 |
20110135154 | LOCATION-BASED SIGNATURE SELECTION FOR MULTI-CAMERA OBJECT TRACKING - Disclosed herein are a method, system, and computer program product for determining a correspondence between a first object ( | 06-09-2011 |
20110142281 | CONVERTING AIRCRAFT ENHANCED VISION SYSTEM VIDEO TO SIMULATED REAL TIME VIDEO - A method for overcoming image latency issues of a synthetic vision system includes generating ( | 06-16-2011 |
20110142282 | VISUAL OBJECT TRACKING WITH SCALE AND ORIENTATION ADAPTATION - A method of tracking an object that appears in a plurality of image frames is provided. The method includes (a) dividing an identified object of one of the plurality of image frames into a plurality of object segments and (b) tracking a location of each of the plurality of object segments in the image frame. The method also includes (c) estimating at least one of scale and orientation of the object using the location of each of the plurality of object segments and (d) obtaining position of the object using the estimated scale and orientation. | 06-16-2011 |
20110142283 | APPARATUS AND METHOD FOR MOVING OBJECT DETECTION - An apparatus and method for moving object detection computes a corresponding frame difference for every two successive image frames of a moving object, and segments a current image frame of the two successive image frames into a plurality of homogeneous regions. At least a candidate region is further detected from the plurality of homogeneous regions. The system gradually merges the computed frame differences via a morphing-based technology and intersects them with the at least one candidate region, thereby obtaining the location and a complete outline of the moving object. | 06-16-2011 |
20110142284 | Method and Apparatus for Acquiring Accurate Background Infrared Signature Data on Moving Targets - A method for measuring an infrared signature of a moving target includes: tracking the moving target with a tracking system along a path from a start position to an end position, measuring infrared radiation data of the moving target along the path, repositioning the tracking system to the start position, retracing the path to measure the infrared radiation data of the background, and determining the infrared signature of the moving target by comparing the infrared radiation data of the moving object with the infrared radiation data of the background without the moving object. | 06-16-2011 |
20110142285 | SYSTEM AND METHOD FOR TRANSITIONING FROM A MISSILE WARNING SYSTEM TO A FINE TRACKING SYSTEM IN A DIRECTIONAL INFRARED COUNTERMEASURES SYSTEM - A method for transitioning a target from a missile warning system to a fine tracking system in a directional countermeasures system includes capturing at least one image within a field of view of the missile warning system. The method further includes identifying a threat from the captured image or images and identifying features surrounding the threat. These features are registered with the threat, and an image within a field of view of the fine tracking system is captured. The registered features are used to identify a location of the threat within this captured image. | 06-16-2011 |
20110142286 | DETECTIVE INFORMATION REGISTRATION DEVICE, TARGET OBJECT DETECTION DEVICE, ELECTRONIC DEVICE, METHOD OF CONTROLLING DETECTIVE INFORMATION REGISTRATION DEVICE, METHOD OF CONTROLLING TARGET OBJECT DETECTION DEVICE, CONTROL PROGRAM FOR DETECTIVE INFORMATION REGISTRATION DEVICE, AND CONTROL PROGRAM FOR TARGET OBJECT DETECTION DEVICE - A digital camera ( | 06-16-2011 |
20110150271 | MOTION DETECTION USING DEPTH IMAGES - A sensor system creates a sequence of depth images that are used to detect and track motion of objects within range of the sensor system. A reference image is created and updated based on a moving average (or other function) of a set of depth images. A new depth image is compared to the reference image to create a motion image, which is an image file (or other data structure) with data representing motion. The new depth image is also used to update the reference image. The data in the motion image is grouped and associated with one or more objects being tracked. The tracking of the objects is updated by the grouped data in the motion image. The new positions of the objects are used to update an application. For example, a video game system will update the position of images displayed in the video based on the new positions of the objects. In one implementation, avatars can be moved based on movement of the user in front of a camera. | 06-23-2011 |
20110150272 | SYSTEMS AND METHODS OF TRACKING OBJECT PATHS - Systems and methods for tracking the path of a user configurable object are provided. The method includes displaying a video data stream of a monitored region, configuring an object in the video data stream, configuring a valid path of the object, tracking a path of the object, and providing an alert to a user when the object travels outside of the valid path. | 06-23-2011 |
20110150273 | METHOD AND SYSTEM FOR AUTOMATED SUBJECT IDENTIFICATION IN GROUP PHOTOS - A system to automatically attach subject descriptions to a digital image containing one or more subjects is described. The system comprises a camera, a set of remotely readable badges attached to the subjects, where each badge has a readable identification, a receiver to read the badges, where the receiver can determine both the identification of each badge and the location of each badge, and a processor to combine the digital image with the identification and location information. By accessing a database containing the subject identification associated with each badge identification, the processor can attach subject identification information to each subject in the image. | 06-23-2011 |
20110150274 | METHODS FOR AUTOMATIC SEGMENTATION AND TEMPORAL TRACKING - In one embodiment, a method of detecting a centerline of a vessel is provided. The method comprises steps of acquiring a 3D image volume, initializing a centerline, initializing a Kalman filter, predicting a next center point using the Kalman filter, checking validity of the prediction made using the Kalman filter, performing template matching, updating the Kalman filter based on the template matching, and repeating the steps of predicting, checking, performing and updating for a predetermined number of times. Methods of automatic vessel segmentation and temporal tracking of the segmented vessel are further described with reference to the method of detecting the centerline. | 06-23-2011 |
20110150275 | MODEL-BASED PLAY FIELD REGISTRATION - A method, apparatus, and system are described for model-based playfield registration. An input video image is processed. The processing of the video image includes extracting key points relating to the video image. Further, it is determined whether enough key points relating to the video image were extracted; a direct estimation of the video image is performed if enough key points have been extracted, and then a homography matrix of a final video image based on the direct estimation is generated. | 06-23-2011 |
20110150276 | Identifying a characteristic of an individual utilizing facial recognition and providing a display for the individual - A method may include automatically remotely identifying at least one characteristic of an individual via facial recognition; and providing a display for the individual, the display having a content at least partially based on the identified at least one characteristic of the individual. A system may include a facial recognition module configured for automatically remotely identifying at least one characteristic of an individual via facial recognition; and a display module coupled with the facial recognition module, the display module configured for providing a display for the individual, the display having a content at least partially based on the identified at least one characteristic of the individual. | 06-23-2011 |
20110150277 | IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREOF - In an image included in a moving image, a specific area is registered as a reference area, and a specific hue range of the reference area is set as a first feature amount based on the distribution of hues of pixels in the reference area. When the occupation ratio of pixels having hues included in a second feature amount, obtained by expanding the hue range of the first feature amount in a surrounding area larger than the reference area, is smaller than a predetermined ratio, an area having a high degree of correlation is identified from an image using the second feature amount in the subsequent matching process. When the occupation ratio is equal to or larger than the predetermined ratio, an area having a high degree of correlation is identified from an image using the first feature amount in the subsequent matching process. | 06-23-2011 |
20110150278 | INFORMATION PROCESSING APPARATUS, PROCESSING METHOD THEREOF, AND NON-TRANSITORY STORAGE MEDIUM - An information processing apparatus comprising: a storage unit configured to store image features of multiple targets and mutual relationship information of the multiple targets; an input unit configured to input an image; a detection unit configured to detect a region of a target from the input image; an identification unit configured to, based on the stored image features and image features of the detected region, identify the target of the region; and an estimation unit configured to, in the case where both a first region in which a target was identified and a second region in which a target could not be identified are present in the input image, estimate a candidate for the target in the second region based on the mutual relationship information and the target in the first region. | 06-23-2011 |
20110150279 | IMAGE PROCESSING APPARATUS, PROCESSING METHOD THEREFOR, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An image processing apparatus comprising: an input unit configured to input a plurality of images obtained by capturing a target object from different viewpoints; a detection unit configured to detect a plurality of line segments from each of the plurality of input images; a setting unit configured to set, for each of the plurality of detected line segments, a reference line which intersects with the line segment; an array derivation unit configured to obtain a pattern array in which a plurality of pixel value change patterns on the set reference line are aligned; and a decision unit configured to decide association of the detected line segments between the plurality of images by comparing the pixel value change patterns, contained in the obtained pattern array, between the plurality of images. | 06-23-2011 |
20110150280 | SUBJECT TRACKING APPARATUS, SUBJECT REGION EXTRACTION APPARATUS, AND CONTROL METHODS THEREFOR - A subject tracking apparatus which performs subject tracking based on the degree of correlation between a reference image and an input image is disclosed. The degree of correlation between each of a plurality of reference images based on images input at different times, and the input image is obtained. If the maximum degree of correlation between a reference image based on a first input image among the plurality of reference images and the input image is equal to or higher than a threshold, a region with a maximum degree of correlation with a first reference image is determined as a subject region. Otherwise, a region with a maximum degree of correlation with a reference image based on an image input later than the first input image is determined as a subject region. | 06-23-2011 |
20110150281 | METHOD AND DEVICE FOR DETERMINING THE ORIENTATION OF A CROSS-WOUND BOBBIN TUBE - A method and device for determining the orientation of a cross-wound bobbin tube ( | 06-23-2011 |
20110150282 | BACKGROUND IMAGE AND MASK ESTIMATION FOR ACCURATE SHIFT-ESTIMATION FOR VIDEO OBJECT DETECTION IN PRESENCE OF MISALIGNMENT - Disclosed herein are a method, system, and computer program product for aligning an input video frame from a video sequence with a background model associated with said video sequence. The background model includes a plurality of model blocks. | 06-23-2011 |
20110150283 | APPARATUS AND METHOD FOR PROVIDING ADVERTISING CONTENT - Disclosed herein are an apparatus and method for providing advertising content effectively. The apparatus for providing advertising content comprises: an image processing unit for extracting an object from a captured image; a long-distance analysis unit for creating long-distance analysis information obtained by analyzing the object at a first distance; a short-distance analysis unit for creating short-distance analysis information obtained by analyzing the object at a second distance that is shorter than the first distance; and a content selection unit for selecting advertising content using the long-distance analysis information and the short-distance analysis information. | 06-23-2011 |
20110150284 | METHOD AND TERMINAL FOR DETECTING AND TRACKING MOVING OBJECT USING REAL-TIME CAMERA MOTION - A method is provided for detecting and tracking a moving object using real-time camera motion estimation, including generating a feature map representing a change in an input pattern in an input image, extracting feature information of the image, estimating a global motion for recognizing a motion of a camera using the extracted feature information, correcting the input image by reflecting the estimated global motion, and detecting a moving object using the corrected image. | 06-23-2011 |
20110150285 | LIGHT EMITTING DEVICE AND METHOD FOR TRACKING OBJECT - A technique and a light emitting device are provided that can smoothly read out data while tracking a position of the light emitting device (an object). The light emitting device expresses data with “a change in the change of a color (switching of changes)”. The light emitting device specifies an object and the position thereof first with a primary change and thereafter expresses data with, so to speak, a secondary change (switching of the primary change). The primary change means that G and B alternately turn on (indicated by G*B) and so on. The secondary change means a change from the condition (G*B), in which G and B alternately turn on, to the condition (B*R) in which B and R alternately turn on. Thus, since data is expressed by changes in the color-change condition, it is easier to freely express data while the position of an object is specified. | 06-23-2011 |
20110158473 | DETECTING METHOD FOR DETECTING MOTION DIRECTION OF PORTABLE ELECTRONIC DEVICE - A detecting method is provided for detecting motion direction of a portable electronic device. The portable electronic device senses a plurality of continuous images in time sequence via an image sense unit. The differences among the plurality of images are analyzed by a process unit. Consequently the process unit determines the motion direction of the portable electronic device, generates motion data based on the differences, and sends a control signal corresponding to the motion direction of the device and the motion data. | 06-30-2011 |
20110158474 | IMAGE OBJECT TRACKING AND SEGMENTATION USING ACTIVE CONTOURS - A method of image object tracking and segmentation is provided. The method includes defining an initial contour for tracking an image object and partitioning the initial contour into a plurality of contour segments. The method also includes estimating a weighted length of each of the plurality of contour segments and generating a desired contour by converging the plurality of contour segments to a plurality of edges of the image object using the estimated weighted length. | 06-30-2011 |
20110158475 | Position Measuring Method And Position Measuring Instrument - The present invention provides a position measuring instrument, comprising a GPS position detecting device | 06-30-2011 |
20110158476 | ROBOT AND METHOD FOR RECOGNIZING HUMAN FACES AND GESTURES THEREOF - A robot and a method for recognizing human faces and gestures are provided, and the method is applicable to a robot. In the method, a plurality of face regions within an image sequence captured by the robot are processed by a first classifier, so as to locate a current position of a specific user from the face regions. Changes of the current position of the specific user are tracked to move the robot accordingly. While the current position of the specific user is tracked, a gesture feature of the specific user is extracted by analyzing the image sequence. An operating instruction corresponding to the gesture feature is recognized by processing the gesture feature through a second classifier, and the robot is controlled to execute a relevant action according to the operating instruction. | 06-30-2011 |
20110158477 | REDUCING EFFECTS OF ROTATIONAL MOTION - A method and system for improving image quality by correcting errors introduced by rotational motion of an object being imaged is provided. The object is associated with a fiducial mark. The method provides a computer executable methodology for detecting a rotation and selectively reordering, deleting and/or reacquiring projection data. | 06-30-2011 |
20110158478 | HEAD MOUNTED DISPLAY - A head mounted display is provided that can display a necessary and sufficient amount of display information in an easily viewable manner even when a large number of identifying objects are detected. A see-through-type head mounted display includes a display unit configured to project image light corresponding to display information onto an eye of a user, thus allowing the user to visually recognize an image corresponding to the image light while allowing external light to pass therethrough. The head mounted display selects the identifying objects for which associated information is displayed by the display unit, based on a result detected within an imaging area. The head mounted display displays the selected associated information in association with the identifying objects which are visually recognized by the user through the display unit in a see-through manner. | 06-30-2011 |
20110158479 | METHOD AND DEVICE FOR ALIGNING A NEEDLE - A method and a device for use in conjunction with an imaging modality ( | 06-30-2011 |
20110164785 | TUNABLE WAVELET TARGET EXTRACTION PREPROCESSOR SYSTEM - The present invention is a target tracking system for enhanced target identification, target acquisition and track performance that is significantly superior to other methods. Specifically, the target tracking system incorporates an intelligent Tunable Wavelet Target Extraction Preprocessor (TWTEP). The TWTEP, which defines target characteristics in the presence of noise and clutter, 1) enhances and augments the target within the video scene to provide a better tracking source for the externally provided Track Process, 2) implements a tunable target definition from the video image to provide a highly resolved target delineation and selection, 3) utilizes a weighted pseudo-covariance technique to define the target area for shape determination and extraction, 4) implements a target definition and extraction process, and 5) defines methodologies for presentation of filtered video and images for external processing. | 07-07-2011 |
20110164786 | CLOSE-UP SHOT DETECTING APPARATUS AND METHOD, ELECTRONIC APPARATUS AND COMPUTER PROGRAM - A close-up shot detection device includes a motion detection element that calculates the amount of motion between at least two frames or fields constituting a video image for every predetermined unit which is composed of one pixel or a plurality of adjacent pixels constituting the frame or field; a binarization element that binarizes the calculated amount of motion; a large-area specifying element that specifies, as a large area, a connected area in which the number of units is equal to or larger than a predetermined threshold, among connected areas which are obtained by connecting a predetermined number of units having the same binarized amount of motion; and a close-up shot specifying element that, when at least one of preset criteria for the specified large area satisfies a predetermined condition, specifies a frame or field having the specified large area as a close-up shot. Consequently, a close-up shot can be easily and rapidly detected. | 07-07-2011 |
20110164787 | METHOD AND SYSTEM FOR APPLYING COSMETIC AND/OR ACCESSORIAL ENHANCEMENTS TO DIGITAL IMAGES - A method for creating a virtual makeover includes inputting an initial digital image into, and initiating a virtual makeover at, a local processor. Instructions are transmitted from the main server to the local processor. Positions of facial features are isolated within the digital image at the local processor. Facial regions within the digital image are defined based on the positions of the facial features at the local processor. After receiving input, cosmetic or accessorial enhancements are applied to the digital image at the local processor. A final digital image is generated including the enhancements. The final digital image is then displayed. At least the defining, applying, and generating steps include instructions written in a non-flash format for execution in a flash-based wrapper. | 07-07-2011 |
20110164788 | METHOD AND DEVICE FOR DETERMINING LEAN ANGLE OF BODY AND POSE ESTIMATION METHOD AND DEVICE - Provided are a method and device for determining a lean angle of a body and a pose estimation method and device. The method for determining a lean angle of a body of the present invention includes: a head-position obtaining step for obtaining a position of a head; a search region determination step for determining a plurality of search regions spaced at an angle around the head; an energy function calculating step for calculating a value of an energy function for each search region; and a lean angle determining step for determining the lean angle of the search region with the largest or smallest value of the energy function as the lean angle of the body. The pose estimation method of the present invention includes a body lean-angle obtaining step for obtaining a lean angle of a body, and a pose estimation step for performing a pose estimation based on the lean angle of the body. | 07-07-2011 |
20110170739 | Automated Acquisition of Facial Images - Described is a technology by which medical patient facial images are acquired and maintained for associating with a patient's records and/or other items. A video camera may provide video frames, such as captured when a patient is being admitted to a hospital. Face detection may be employed to clip the facial part from the frame. Multiple images of a patient's face may be displayed on a user interface to allow selection of a representative image. Also described is obtaining the patient images by processing electronic documents (e.g., patient records) to look for a face pictured therein. | 07-14-2011 |
20110170740 | Automatic image capture - A method of automatically capturing images with precision uses an intelligent mobile device having a camera loaded with an appropriate image capture application. When a user initializes the application, the camera starts taking images of the object. Each image is qualified to determine whether it is in focus and entirely within the field of view of the camera. Two or more qualified images are captured and stored for subsequent processing. The qualified images are aligned with each other by an appropriate perspective transformation so they each fill a common frame. Averaging of the aligned images reduces noise and a sharpening filter enhances edges, which produces a sharper image. The processed image is then converted into a two-level, black and white image which may be presented to the user for approval prior to submission via wireless or WiFi to a remote location. | 07-14-2011 |
20110170741 | IMAGE PROCESSING DEVICE AND STORAGE MEDIUM STORING IMAGE PROCESSING PROGRAM - There is provided an image processing device that includes a processor configured to execute instructions that cause the processor to provide functional units including: a setting unit that sets a plurality of extraction target ranges in a motion image configured of a plurality of frame images that are chronologically in succession with one another, each extraction target range being configured of a group of frame images that are selected from among the plurality of frame images constituting the motion image and that are chronologically in succession with one another, and the plurality of extraction target ranges being set such that there is no common frame image shared among the extraction target ranges; a selecting unit that selects a representative frame image from among the group of frame images in an extraction target range, the representative frame image being such a frame image whose difference from another representative frame image is the largest among differences of the frame images belonging to the extraction target range from the another representative frame image, the another representative frame image being selected from one of the extraction target ranges that is positioned chronologically adjacent to the extraction target range from which the representative frame image is selected; and a layout image generating unit that generates a layout image in which the selected representative frame images are laid out in such a pattern that indicates a chronological relationship among the representative frame images. | 07-14-2011 |
20110170742 | IMAGE PROCESSING DEVICE, OBJECT SELECTION METHOD AND PROGRAM - There is provided an image processing device including: a data storage unit that stores object identification data for identifying an object operable by a user and feature data indicating a feature of appearance of each object; an environment map storage unit that stores an environment map representing a position of one or more objects existing in a real space and generated based on an input image obtained by imaging the real space using an imaging device and the feature data stored in the data storage unit; and a selecting unit that selects at least one object recognized as being operable based on the object identification data, out of the objects included in the environment map stored in the environment map storage unit, as a candidate object being a possible operation target by a user. | 07-14-2011 |
20110170743 | METHOD FOR DETECTING OBJECT MOVEMENT AND DETECTION SYSTEM - This invention relates to a method for detecting object movement by dynamically updating reference image data. By dynamically updating the reference image data, the impact of ambient light changes can be reduced and the detection error of object movement caused by using fixed reference image data under varying ambient light can also be avoided. The present invention further provides a detection system. | 07-14-2011 |
20110170744 | VIDEO-BASED VEHICLE DETECTION AND TRACKING USING SPATIO-TEMPORAL MAPS - Systems and methods for detecting and tracking objects, such as motor vehicles, within video data. The systems and methods analyze video data, for example, to count objects, determine object speeds, and track the path of objects without relying on the detection and identification of background data within the captured video data. The detection system uses one or more scan lines to generate a spatio-temporal map. A spatio-temporal map is a time progression of a slice of video data representing a history of pixel data corresponding to a scan line. The detection system detects objects in the video data based on intersections of lines within the spatio-temporal map. Once the detection system has detected an object, the detection system may record the detection for counting purposes, display an indication of the object in association with the video data, determine the speed of the object, etc. | 07-14-2011 |
20110170745 | Body Gesture Control System for Operating Electrical and Electronic Devices - A body gesture control system for operating electrical and electronic devices includes an image sensor device and an image processor device to process body gesture images captured by the image sensor device for recognizing the body gesture. The image processor device includes an image calculation unit and a gesture change detection unit electrically connected therewith. The image calculation unit is used to calculate gesture regions of the captured body gesture images and the gesture change detection unit is operated to detect changes of the captured body gesture images and to thereby determine a body gesture recognition signal. | 07-14-2011 |
20110170746 | CAMERA BASED SENSING IN HANDHELD, MOBILE, GAMING OR OTHER DEVICES - Method and apparatus are disclosed to enable rapid TV camera and computer based sensing in many practical applications, including, but not limited to, handheld devices, cars, and video games. Several unique forms of social video games are disclosed. | 07-14-2011 |
20110170747 | Interactivity Via Mobile Image Recognition - Systems and methods of interacting with a virtual space, in which a mobile device is used to electronically capture image data of a real-world object, the image data is used to identify information related to the real-world object, and the information is used to interact with software to control at least one of: (a) an aspect of an electronic game; and (b) a second device local to the mobile device. Contemplated systems and methods can be used for gaming, in which the image data can be used to identify a name of the real-world object, to classify the real-world object, to identify the real-world object as a player in the game, to identify the real-world object as a goal object or as having some other value in the game, or to use the image data to identify the real-world object as a goal object in the game. | 07-14-2011 |
20110176707 | IMAGE ANALYSIS BY OBJECT ADDITION AND RECOVERY - The invention described herein is generally directed to methods for analyzing an image. In particular, crowded field images may be analyzed for unidentified, unobserved objects based on an iterative analysis of modified images including artificial objects or removed real objects. The results can provide an estimate of the completeness of analysis of the image, an estimate of the number of objects that are unobserved in the image, and an assessment of the quality of other similar images. | 07-21-2011 |
20110176708 | Task-Based Imaging Systems - A task-based imaging system for obtaining data regarding a scene for use in a task includes an image data capturing arrangement for (a) imaging a wavefront of electromagnetic energy from the scene to an intermediate image over a range of spatial frequencies, (b) modifying phase of the wavefront, (c) detecting the intermediate image, and (d) generating image data over the range of spatial frequencies. The task-based imaging system also includes an image data processing arrangement for processing the image data and performing the task. The image data capturing and image data processing arrangements cooperate so that signal-to-noise ratio (SNR) of the task-based imaging system is greater than SNR of the task-based imaging system without phase modification of the wavefront over the range of spatial frequencies. | 07-21-2011 |
20110182469 | 3D CONVOLUTIONAL NEURAL NETWORKS FOR AUTOMATIC HUMAN ACTION RECOGNITION - Systems and methods are disclosed to recognize human action from one or more video frames. | 07-28-2011 |
20110182470 | MOBILE COMMUNICATION TERMINAL HAVING IMAGE CONVERSION FUNCTION AND METHOD - A mobile communication terminal having an image conversion function arranges and displays area-specific images in a three-dimensional (3D) space on the basis of distance information of the area-specific images of a two-dimensional (2D) image. | 07-28-2011 |
20110182471 | HANDLING INFORMATION FLOW IN PRINTED TEXT PROCESSING - Systems, methods and computer-readable media for processing an image are disclosed. The system comprises a processor, an image capturing unit in communication with the processor, an inspection surface positioned so that at least a portion of the inspection surface is within a field of view (FOV) of the image capturing unit, and an output device. The system has software that monitors the FOV of the image capturing unit for at least one event. The inspection surface is capable of supporting an object of interest. The image capturing unit is in a video mode while the software is monitoring for the at least one event. | 07-28-2011 |
20110182472 | EYE GAZE TRACKING - This invention relates to a method of performing eye gaze tracking of at least one eye of a user, by determining the position of the center of the eye, said method comprising the steps of: | 07-28-2011 |
20110182473 | SYSTEM AND METHOD FOR VIDEO SIGNAL SENSING USING TRAFFIC ENFORCEMENT CAMERAS - A system and method for determining the state of a traffic signal light, such as being red, yellow, or green, by employing a plurality of traffic enforcement cameras to be used in determining if a traffic violation has occurred. The system and method automatically predicts, tracks and captures violation events, such as violating a red traffic signal light, to use the video for any number of reasons, particularly for traffic enforcement purposes. There may be provided a tracking camera, a signal camera and an enforcement camera used to capture the video and other pertinent information relating to the event. The signal camera may be operatively connected to a processing unit that runs a video signal sensing (VSS) software unit to determine the active state of the signal. Advantageously, this allows the monitoring of an intersection for signal light violations without the need for a connection to the light itself. | 07-28-2011 |
20110182474 | EFFICIENT SYSTEM AND METHOD FOR FACE TRACKING - A method of scanning a scene using an image sensor includes dividing the scene into multiple first portions and scanning a first portion for the presence of objects in an object class. The method further includes continuing the scanning of the multiple first portions for the presence of other objects in the scene. The method also selects a second portion of the scene in response to detecting an object in the first portion, and then tracks the object in the selected second portion. The second portion of the scene is selected based on estimating motion of the object detected in the first portion, so that the object may still be located in the second portion. | 07-28-2011 |
20110188705 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM - A frequency component of noise that is included in both images, and a frequency component of a first image that does not include said noise, are estimated based on first image data obtained through imaging, using an imaging device, a first image that includes a specific image pattern, and based on second image data obtained by imaging, using the imaging device, a second image that does not include the specific image pattern; and weighting relative to frequencies is controlled, when calculating a correlation between the first image data and third image data obtained through imaging a third image with the imaging device, based on the estimated individual frequency components. | 08-04-2011 |
20110188706 | Redundant Spatial Ensemble For Computer-Aided Detection and Image Understanding - Described herein is a technology for facilitating computer-aided detection and image understanding. In one implementation, an input set of training images of a target structure, such as an anatomical structure, is received. The input set of training images is spatially realigned to different landmarks to generate multiple bags of training images. At least one of the multiple bags comprises substantially all the training images in the input set, but realigned to a landmark. The multiple bags of training images may be used to train a spatial ensemble of detectors, which can be employed to generate an output result by automatically detecting a target structure in an input image. | 08-04-2011 |
20110188707 | System and Method for Pleographic Subject Identification, Targeting, and Homing Utilizing Electromagnetic Imaging in at Least One Selected Band - The inventive data processing system and method enable automatic recognition of images captured using various electromagnetic (EM) imaging systems and techniques, and more particularly relate to a system and method for applying pleographic processing for subject identification, recognition, matching, targeting, and/or homing, utilizing one or more EM imaging systems or devices in at least one selected EM band. | 08-04-2011 |
20110194731 | METHOD OF DETERMINING REFERENCE FEATURES FOR USE IN AN OPTICAL OBJECT INITIALIZATION TRACKING PROCESS AND OBJECT INITIALIZATION TRACKING METHOD - A method of determining reference features for use in an optical object initialization tracking process is disclosed, said method comprising the following steps: a) capturing, with at least one camera, at least one current image of a real environment, or synthetically generating the at least one current image by rendering a virtual model of a real object to be tracked, and extracting current features from the at least one current image, b) providing reference features adapted for use in an optical object initialization tracking process, c) matching a plurality of the current features with a plurality of the reference features, d) estimating at least one parameter associated with the current image based on a number of current and reference features which were matched, and determining for each of the reference features which were matched with one of the current features whether they were correctly or incorrectly matched, e) wherein the steps a) to d) are processed iteratively multiple times, wherein in step a) of every respective iterative loop a respective new current image is captured by at least one camera and steps a) to d) are processed with respect to the respective new current image, and f) determining at least one indicator associated with reference features which were correctly matched and/or with reference features which were incorrectly matched, wherein the at least one indicator is determined depending on how often the respective reference feature has been correctly matched or incorrectly matched, respectively. | 08-11-2011 |
20110194732 | IMAGE RECOGNITION APPARATUS AND METHOD - An image recognition apparatus detects a specific object image from an image to be processed, calculates a coincidence degree between an object recognizability state of the object image and that of an object in registered image information, and calculates a similarity between the image feature of the object image and the image feature in the registered image information. Based on the similarity and coincidence degree, the image recognition apparatus recognizes whether the object of the object image is that of the registered image information. When the similarity is lower than a first threshold and the coincidence degree is equal to or higher than a second threshold, the image recognition apparatus recognizes that the object of the object image is different from that of the registered image information. | 08-11-2011 |
20110200225 | ADVANCED BACKGROUND ESTIMATION TECHNIQUE AND CIRCUIT FOR A HYPER-SPECTRAL TARGET DETECTION METHOD - A system, circuit and methods for target detection from hyper-spectral image data are disclosed. Filter coefficients are determined using a modified constrained energy minimization (CEM) method. The modified CEM method can operate on a circuit operable to perform constrained linear programming optimization. A filter comprising the filter coefficients is applied to a plurality of pixels of the hyper-spectral image data to form CEM values for the pixels, and one or more target pixels are identified from the CEM values. The process may be repeated to enhance target recognition by using filter coefficients determined by excluding the identified target pixels from the hyper-spectral image data. | 08-18-2011 |
20110200226 | CUSTOMER BEHAVIOR COLLECTION METHOD AND CUSTOMER BEHAVIOR COLLECTION APPARATUS - According to one embodiment, a computer selects trajectory data on a person positioned in an image monitoring area from trajectory data on relevant persons. The computer selects selling space image data obtained when the person corresponding to the trajectory data is positioned in the image monitoring area. The computer analyzes the selling space image data to extract a person image. The computer checks the person image extracted from the selling space image data against image data on each customer to search for customer image data obtained by taking an image of the person in the person image. The computer stores, upon detecting the customer image data obtained by taking an image of the person in the person image, identification information on transaction data stored in association with the customer image data, in association with identification information on the trajectory data. | 08-18-2011 |
20110200227 | ANALYSIS OF DATA FROM MULTIPLE TIME-POINTS - Described herein is a technology for facilitating analysis of data across multiple time-points. In one implementation, first and second images acquired at respective first and second different time-points are received. In addition, first and second findings associated with the first and second images respectively are also received. The first and second findings are associated with at least one region of interest. A correspondence between the first and second findings may be automatically determined by aligning the first and second findings. A longitudinal analysis result may then be generated by correlating the first and second findings. | 08-18-2011 |
20110200228 | TARGET TRACKING SYSTEM AND A METHOD FOR TRACKING A TARGET - A target tracking system including a tracking module arranged to perform model-based tracking of a target based on received measurements from a sensor. A detector is arranged to detect as a target performs a manoeuvre. An output switching module is arranged to switch from a first output mode in which model estimations of the tracking module are forwarded, to at least a second output mode in which only reliable outputs are forwarded, in response to information indicating the detection of a target manoeuvre being received from the detector. Also a collision avoidance system, a method for tracking a target and a computer program product. | 08-18-2011 |
20110200229 | Object Detecting with 1D Range Sensors - Moving objects are classified based on maximum margin classification and discriminative probabilistic sequential modeling of range data acquired by a scanner with a set of one or more 1D laser line scanners. The range data in the form of 2D images is pre-processed and then classified. The classifier is composed of appearance classifiers, sequence classifiers with different inference techniques, and state machine enforcement of a structure of the objects. | 08-18-2011 |
20110200230 | METHOD AND DEVICE FOR ANALYZING SURROUNDING OBJECTS AND/OR SURROUNDING SCENES, SUCH AS FOR OBJECT AND SCENE CLASS SEGMENTING - The invention relates to a method and an object detection device for analysing objects in the environment and/or scenes in the environment. The object detection device includes a data processing and/or evaluation device. In the data processing and/or evaluation device, image data (x | 08-18-2011 |
20110206236 | NAVIGATION METHOD AND APPARATUS - An automated guidance system for a moving frame. The automated guidance system has an imaging system disposed on the frame; a motion sensing system coupled to the frame and configured for sensing movement of the frame; and a processor communicably connected to the imaging system for receiving image data from the imaging system and generating optical flow from image data of the frame's surroundings. The processor is communicably connected to the motion sensing system for receiving motion data of the frame from the motion sensing system. The processor is configured for determining, from kinematically aided dense optical flow, a correction to frame kinematic errors due to errors in motion data from the motion sensing system. | 08-25-2011 |
20110206237 | RECOGNITION APPARATUS AND METHOD THEREOF, AND COMPUTER PROGRAM - A recognition apparatus for recognizing a position and an orientation of a target object, inputs a captured image of the target object captured by an image capturing apparatus; detects a plurality of feature portions from the captured image, and to extract a plurality of feature amounts indicating image characteristics in each of the plurality of feature portions; inputs property information indicating respective physical properties in the plurality of feature portions on the target object; inputs illumination information indicating an illumination condition at the time of capturing the captured image; determines respective degrees of importance of the plurality of extracted feature amounts based on the respective physical properties indicated by the property information and the illumination condition indicated by the illumination information; and recognizes the position and the orientation of the target object based on the plurality of feature amounts and the respective degrees of importance thereof. | 08-25-2011 |
20110206238 | PHARMACEUTICAL RECOGNITION AND IDENTIFICATION SYSTEM AND METHOD OF USE - An electronic pharmaceutical recognition and identification system is provided along with a method of use. In certain example embodiments a user can take a digital picture of a pharmaceutical with a portable appliance comprising a telephone, then text that picture to a predetermined telephone number, wait a short period of time for a pharmaceutical identification server system to electronically recognize and identify the pharmaceutical in question, and then automatically receive a text message back from the server system that includes various predetermined information regarding the pharmaceutical in question, such as its name, pictures of it, warnings, whether or not a prescription is required, as well as usage and interaction information. Fixed appliances are also provided that can passively interface with a pharmaceutical dispensing system to ensure that the prescribed pharmaceutical is being dispensed. | 08-25-2011 |
20110206239 | INPUT APPARATUS, REMOTE CONTROLLER AND OPERATING DEVICE FOR VEHICLE - An input apparatus for a vehicle includes: an operation element operable by an occupant of the vehicle; a biological information acquisition element acquiring biological information of the occupant; an unawakened state detection element detecting an unawakened state of the occupant based on the biological information, wherein the unawakened state is defined by a predetermined state different from an awakened state; and an operation disabling element disabling an operation input from the operation element when the unawakened state detection element detects the unawakened state. | 08-25-2011 |
20110206240 | DETECTING CONCEALED THREATS - Potential threat items may be concealed inside objects, such as portable electronic devices, that are subject to imaging, for example, at a security checkpoint. Data from an imaged object can be compared to pre-determined object data to determine a class for the imaged object. Further, an object can be identified inside a container (e.g., a laptop inside luggage). One-dimensional Eigen projections can be used to partition the imaged object into partitions, and feature vectors from the partitions and the object image data can be used to generate layout feature vectors. One or more layout feature vectors can be compared to training data for threat versus non-threat-containing items from the imaged object's class to determine if the imaged object contains a potential threat item. | 08-25-2011 |
20110211729 | Method for Generating Visual Hulls for 3D Objects as Sets of Convex Polyhedra from Polygonal Silhouettes - A visual hull for a 3D object is generated by using a set of silhouettes extracted from a set of images. First, a set of convex polyhedra is generated as a coarse 3D model of the object. Then for each image, the convex polyhedra are refined by projecting them to the image and determining the intersections with the silhouette in the image. The visual hull of the object is represented as union of the convex polyhedra. | 09-01-2011 |
20110216938 | Apparatus for detecting lane-marking on road - The image processing ECU periodically acquires road-surface images and extracts edge points in the acquired road-surface image. Subsequently, the ECU determines the operating mode and extracts the edge line when the operating mode is either a dotted mode or a frame-accumulation mode. The edge points are transformed, for example by a Hough transform, to extract an edge line that most frequently passes through the edge points. The extracted edge line denotes the lane marking. The ECU outputs a signal to activate a buzzer alert when determining that the vehicle may depart from the lane. | 09-08-2011 |
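As a rough illustration of the edge-point and Hough-transform step this entry describes (not the patent's actual implementation), the sketch below assumes OpenCV and numpy are available; the function name and all threshold values are placeholders.

```python
# Hypothetical sketch: extract edge points from a road-surface image and keep the
# line segment supported by the most edge points as the lane-marking candidate.
import cv2
import numpy as np

def detect_lane_line(road_gray):
    edges = cv2.Canny(road_gray, 50, 150)                      # edge points
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=80, minLineLength=40, maxLineGap=10)
    if segments is None:
        return None
    # Longest segment as a rough proxy for the line passing through the most edge points.
    x1, y1, x2, y2 = max(segments[:, 0, :],
                         key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]))
    return int(x1), int(y1), int(x2), int(y2)
```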
20110216939 | APPARATUS AND METHOD FOR TRACKING TARGET - A target tracking apparatus and method according to an exemplary embodiment of the present invention may quickly and accurately perform target detection and tracking in a photographed image given as consecutive frames by acquiring at least one target candidate image most similar to a photographed image of a previous frame among prepared reference target images, determining one of the target candidate images as a target confirmation image based on the photographed image, calculating a homography between the determined target confirmation image and the photographed image, searching the photographed image of the previous frame for feature points according to the calculated homography, and tracking an inter-frame change of the found feature points from the previous frame to the current frame. | 09-08-2011 |
20110216940 | TARGET DETECTION DEVICE AND TARGET DETECTION METHOD - Disclosed is a target detection device which can match a moving object in a captured image to an identifier when a plurality of identifiers begin to be received within a short time, or when the number of identifiers received is larger than the number of detected position histories. The device ( | 09-08-2011 |
20110216941 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND ELECTRONIC APPARATUS - The present invention relates to an information processing apparatus, an information processing method, a program, and an electronic apparatus that are capable of detecting a movement of a hand of the user with ease. | 09-08-2011 |
20110216942 | IMAGE-CAPTURING APPARATUS AND METHOD, EXPRESSION EVALUATION APPARATUS, AND PROGRAM - An image-capturing apparatus for capturing an image by using a solid-state image-capturing device may include a face detector configured to detect a face of a human being on the basis of an image signal in a period until an image signal obtained by image capturing is recorded on a recording medium; an expression evaluation section configured to evaluate the expression of the detected face and to compute an expression evaluation value indicating the degree to which the detected face is close to a specific expression in relation to expressions other than the specific expression; and a notification section configured to notify notification information corresponding to the computed expression evaluation value to an image-captured person. | 09-08-2011 |
20110216943 | IMAGE-CAPTURING APPARATUS AND METHOD, EXPRESSION EVALUATION APPARATUS, AND PROGRAM - An image-capturing apparatus for capturing an image by using a solid-state image-capturing device may include a face detector configured to detect a face of a human being on the basis of an image signal in a period until an image signal obtained by image capturing is recorded on a recording medium; an expression evaluation section configured to evaluate the expression of the detected face and to compute an expression evaluation value indicating the degree to which the detected face is close to a specific expression in relation to expressions other than the specific expression; and a notification section configured to notify notification information corresponding to the computed expression evaluation value to an image-captured person. | 09-08-2011 |
20110222724 | SYSTEMS AND METHODS FOR DETERMINING PERSONAL CHARACTERISTICS - Systems and methods are disclosed for determining personal characteristics from images by generating a baseline gender model and an age estimation model using one or more convolutional neural networks (CNNs); capturing correspondences of faces by face tracking, and applying incremental learning to the CNNs and enforcing correspondence constraint such that CNN outputs are consistent and stable for one person. | 09-15-2011 |
20110222725 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING PROGRAM - An image processing device receives a captured image as input from an image capturing device installed in a conveying mechanism that conveys and tests works. The image processing device causes the image capturing device to capture images a plurality of times at a predetermined time interval. Based on the position of the work detected from the captured image output from the image capturing device by capturing the images at a predetermined time interval and the target position set by the user's operation, the image processing device derives the delay time required for capturing the image at the timing when the work is positioned near the target position, and sets the derived delay time for the image capturing timing of the image capturing device. | 09-15-2011 |
20110222726 | GESTURE RECOGNITION APPARATUS, METHOD FOR CONTROLLING GESTURE RECOGNITION APPARATUS, AND CONTROL PROGRAM - A gesture recognition apparatus is caused to correctly recognize the start and end of a gesture, without use of a special unit, by a natural manipulation of a user and low-load processing for the gesture recognition apparatus. The gesture recognition apparatus that recognizes the gesture from action of a recognition object taken in a moving image includes: a gravity center tracking unit that detects a specific subject having a specific feature from the moving image; a moving speed determining unit that computes a moving speed per unit time of the specific subject; a moving pattern extracting unit that extracts a moving pattern of the specific subject; and a start/end judgment unit that discriminates movement of the specific subject as an instruction (such as an instruction to start or end gesture recognition processing) input to the gesture recognition apparatus when the moving speed and the moving pattern satisfy predetermined conditions. | 09-15-2011 |
20110222727 | Object Localization Using Tracked Object Trajectories - A method of processing a video sequence is provided that includes tracking a first object and a second object for a specified number of frames, determining similarity between a trajectory of the first object and a trajectory of the second object over the specified number of frames, and merging the first object and the second object into a single object when the trajectory of the first object and the trajectory of the second object are sufficiently similar, whereby an accurate location and size for the single object are obtained. | 09-15-2011 |
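A minimal sketch of the trajectory-similarity test described in the entry above, assuming per-frame (x, y) centers and bounding boxes are already available from a tracker; the mean-distance metric, the threshold, and the box-union merge are illustrative assumptions, not the patent's exact criterion.

```python
# Hypothetical sketch: merge two tracked objects when their trajectories over N
# frames stay close on average.
import numpy as np

def should_merge(traj_a, traj_b, max_mean_dist=5.0):
    """traj_a, traj_b: arrays of shape (N, 2) holding per-frame (x, y) centers."""
    a, b = np.asarray(traj_a, float), np.asarray(traj_b, float)
    mean_dist = np.linalg.norm(a - b, axis=1).mean()
    return mean_dist <= max_mean_dist

def merge_objects(box_a, box_b):
    """Union of two (x_min, y_min, x_max, y_max) boxes as the merged object's location/size."""
    return (min(box_a[0], box_b[0]), min(box_a[1], box_b[1]),
            max(box_a[2], box_b[2]), max(box_a[3], box_b[3]))
```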
20110222728 | Method and Apparatus for Scaling an Image in Segments - A method and an apparatus for scaling an image in segments are disclosed. The method includes: identifying scene features in each input video frame, and obtaining information about distribution of multiple features in the video frame; obtaining multiple feature distribution areas corresponding to the information about distribution of the multiple features, and obtaining multiple scale coefficients; and scaling the corresponding multiple feature distribution areas in each video frame according to the multiple scale coefficients. | 09-15-2011 |
20110222729 | APPARATUS AND METHOD FOR FINDING A MISPLACED OBJECT USING A DATABASE AND INSTRUCTIONS GENERATED BY A PORTABLE DEVICE - The basic invention uses a portable device that can contain a camera, a database, and a text, voice or visual entry to control the storage of an image into the database. Furthermore, the stored image can be associated with text, color, visual or audio data. The stored images can be used to guide the user towards a target whose current location the user does not recall. The user's commands can be issued verbally, textually or by scrolling through the target images in the database until the desired one is found. This target can be shoes, pink sneakers, a toy or some other comparable item that the user needs to find. | 09-15-2011 |
20110222730 | Red Eye False Positive Filtering Using Face Location and Orientation - An image is acquired including a red eye defect and non red eye defect regions having a red color. An initial segmentation of candidate redeye regions is performed. A location and orientation of one or more faces within the image are determined. The candidate redeye regions are analyzed based on the determined location and orientation of the one or more faces to determine a probability that each redeye region appears at a position of an eye. Any confirmed redeye regions having at least a certain threshold probability of being a false positive are removed as candidate redeye defect regions. The remaining redeye defect regions are corrected and a red eye corrected image is generated. | 09-15-2011 |
20110222731 | Computer Controlled System for Laser Energy Delivery to the Retina - An embodiment of the invention provides a method that captures a diagnostic image of a retina having at least one lesion, wherein the lesion includes a plurality of spots to be treated. Information is received from a user interface, wherein the information includes a duration, intensity, and/or wavelength of treatment for each of the spots. A real-time image of the retina is captured; and, a composite image is created by linking the diagnostic image to the real-time image. At least one updated real-time image of the retina is obtained using eye tracking and/or image stabilization; and, an annotated image is created by modifying the composite image based on the updated real-time image. A localized laser beam is delivered to each of the spots according to the information, the composite image, and the annotated image. | 09-15-2011 |
20110228975 | METHODS AND APPARATUS FOR ESTIMATING POINT-OF-GAZE IN THREE DIMENSIONS - Methods for determining a point-of-gaze (POG) of a user in three dimensions are disclosed. In particular embodiments, the methods involve: presenting a three-dimensional scene to both eyes of the user; capturing image data including both eyes of the user; estimating first and second line-of-sight (LOS) vectors in a three-dimensional coordinate system for the user's first and second eyes based on the image data; and determining the POG in the three-dimensional coordinate system using the first and second LOS vectors. | 09-22-2011 |
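The entry above combines two line-of-sight (LOS) vectors into a single 3D point-of-gaze. One common way to do this, shown as a hedged numpy sketch below, is to take the midpoint of the shortest segment between the two LOS rays; the patent's exact estimator may differ, and all names here are illustrative.

```python
# Hypothetical sketch: 3D point-of-gaze as the midpoint of the common perpendicular
# between the two eyes' line-of-sight rays.
import numpy as np

def point_of_gaze(c1, d1, c2, d2):
    """c1, c2: 3D eye centers; d1, d2: 3D LOS direction vectors."""
    c1, d1, c2, d2 = (np.asarray(v, float) for v in (c1, d1, c2, d2))
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # near-parallel LOS vectors: no stable estimate
        return None
    t1 = (b * e - c * d) / denom   # parameter along the first LOS ray
    t2 = (a * e - b * d) / denom   # parameter along the second LOS ray
    return ((c1 + t1 * d1) + (c2 + t2 * d2)) / 2.0
```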
20110228976 | PROXY TRAINING DATA FOR HUMAN BODY TRACKING - Synthesized body images are generated for a machine learning algorithm of a body joint tracking system. Frames from motion capture sequences are retargeted to several different body types, to leverage the motion capture sequences. To avoid providing redundant or similar frames to the machine learning algorithm, and to provide a compact yet highly variegated set of images, dissimilar frames can be identified using a similarity metric. The similarity metric is used to locate frames which are sufficiently distinct, according to a threshold distance. For realism, noise is added to the depth images based on noise sources which a real world depth camera would often experience. Other random variations can be introduced as well. For example, a degree of randomness can be added to retargeting. For each frame, the depth image and a corresponding classification image, with labeled body parts, are provided. 3-D scene elements can also be provided. | 09-22-2011 |
20110228977 | IMAGE CAPTURING DEVICE AND METHOD FOR ADJUSTING A POSITION OF A LENS OF THE IMAGE CAPTURING DEVICE - A method for adjusting a position of a lens of an image capturing device obtains a plurality of images of a monitored scene by the lens, detects a motion area in the monitored scene, and detects if a human face is in the motion area. The method further moves the lens according to movement data of the human face if the human face is detected, or moves the lens according to movement data of the motion area if the human face is not detected. | 09-22-2011 |
20110228978 | FOREGROUND OBJECT DETECTION SYSTEM AND METHOD - A foreground object detection system and method establishes a background model by reading N frames of a video stream generated by a camera. The detection system further reads each frame of the video stream and, for each of the N frames, detects the pixel value difference and the brightness value difference for each pair of corresponding pixels in two consecutive frames. In detail, by comparing the pixel value difference with a pixel threshold and by comparing the brightness value difference with a brightness threshold, the detection system may determine whether a pixel is a foreground or background pixel. | 09-22-2011 |
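A minimal sketch of the per-pixel test described above, assuming 8-bit BGR frames and treating the two thresholds as jointly required (the entry does not say whether the comparisons are combined with AND or OR); the threshold values are placeholders.

```python
# Hypothetical sketch: mark a pixel as foreground when both its color difference and
# its brightness difference between two consecutive frames exceed their thresholds.
import numpy as np

def foreground_mask(prev_bgr, curr_bgr, pixel_thresh=30, brightness_thresh=20):
    prev = prev_bgr.astype(np.int16)
    curr = curr_bgr.astype(np.int16)
    pixel_diff = np.abs(curr - prev).max(axis=2)                # largest per-channel change
    brightness_diff = np.abs(curr.mean(axis=2) - prev.mean(axis=2))
    return (pixel_diff > pixel_thresh) & (brightness_diff > brightness_thresh)
```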
20110228979 | Moving-object detection apparatus, moving-object detection method and moving-object detection program - Disclosed herein is a moving-object detection apparatus having a plurality of moving-object detection processing devices configured to detect a moving object on the basis of a motion vector computed by making use of a present image and a past image wherein the moving-object detection processing devices are set to operate differently from each other in at least one of the resolution of the present and past images, the time distance between the present and past images and the search area of the motion vector in order to detect the moving object. | 09-22-2011 |
20110228980 | CONTROL APPARATUS AND VEHICLE SURROUNDING MONITORING APPARATUS - A control apparatus that improves the usability of a vehicle surrounding monitoring apparatus without confusing the monitoring party while monitoring the surroundings of a vehicle. A detection area setting section ( | 09-22-2011 |
20110228981 | METHOD AND SYSTEM FOR PROCESSING IMAGE DATA - A method for processing image data representing a segmentation mask, comprises generating two-dimensional shape representations of a three-dimensional object on the basis of a plurality of parameter sets; and matching motion blocks of the segmentation mask with the two-dimensional shape representations to obtain a best fit parameter set. Thereby, for example, a distance between the three-dimensional object and a camera position may be determined. | 09-22-2011 |
20110228982 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing device includes a learning image input unit configured to input a learning image, in which a tracked object is captured under different shooting conditions, together with the shooting conditions; a feature response calculation unit configured to calculate a response of one or more integrated features with respect to the learning image while changing a parameter in accordance with the shooting conditions; a feature learning unit configured to recognize spatial distribution of the one or more integrated features in the learning image based on a calculation result of the response and evaluate a relationship between the shooting conditions and the parameter and a spatial relationship among the integrated features so as to learn a feature of the tracked object; and a feature storage unit configured to store a learning result of the feature. | 09-22-2011 |
20110228983 | INFORMATION PROCESSOR, INFORMATION PROCESSING METHOD AND PROGRAM - Disclosed herein is an information processor including: a storage section configured to store feature quantity data of a target object and audio data associated with the target object; an acquisition section configured to acquire an image of the target object; a recognition section configured to recognize an object included in the image based on the feature quantity data stored in the storage section; and a reproduction section configured to reproduce the audio data associated with the recognized object and output a reproduced sound from an output device worn by a user. | 09-22-2011 |
20110228984 | SYSTEMS, METHODS AND ARTICLES FOR VIDEO ANALYSIS - A video analysis system including a video output device monitoring an area for activity, a video analyzer processing output of the video output device and identifying an event in near-real-time, and a persistent database archiving the event for an operational lifetime of the video analysis system and accessible in near-real-time. | 09-22-2011 |
20110228985 | APPROACHING OBJECT DETECTION SYSTEM - An approaching object detection system in which an approaching object can be accurately detected while reducing the load of calculation processing. A first moving region detection unit ( | 09-22-2011 |
20110235855 | Color Gradient Object Tracking - A system and method are provided for color gradient object tracking. A tracking area is illuminated with a chromatic light source. A color value is measured, defined by at least three attributes, reflected from an object in the tracking area, and analyzed with respect to chromatic light source characteristics. A lookup table (LUT) is accessed that cross-references color values to positions in the tracking area, and in response to accessing the LUT, the object position in the tracking area is determined. The LUT is initially built by illuminating the tracking area with the light source. A test object is inserted into the tracking area in a plurality of determined positions, and the reflected color value is measured at each determined position. The color value measurements are correlated to determined positions. As a result, a color gradient can be measured between a first determined position and a second determined position. | 09-29-2011 |
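A toy sketch of the calibration and lookup flow described in the entry above: reflected color values measured with a test object at known positions populate the table, and a new measurement is resolved to the position of the nearest stored color. The nearest-neighbor lookup and the class interface are assumptions for illustration; in practice the table would be interpolated to exploit the color gradient between calibration positions, as the entry suggests.

```python
# Hypothetical sketch: a color-to-position lookup table for gradient-based tracking.
import numpy as np

class ColorPositionLUT:
    def __init__(self):
        self.colors, self.positions = [], []

    def add_calibration_sample(self, color_rgb, position_xy):
        # Reflected color measured with the test object at a known position.
        self.colors.append(color_rgb)
        self.positions.append(position_xy)

    def locate(self, measured_rgb):
        # Return the calibrated position whose stored color is closest to the measurement.
        colors = np.asarray(self.colors, float)
        dists = np.linalg.norm(colors - np.asarray(measured_rgb, float), axis=1)
        return self.positions[int(np.argmin(dists))]
```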
20110235856 | METHOD AND SYSTEM FOR COMPOSING AN IMAGE BASED ON MULTIPLE CAPTURED IMAGES - A mobile multimedia device may be operable to capture consecutive image samples of a scene. The scene may comprise one or more objects such as faces or moving objects which may be identifiable by the mobile multimedia device. An image of the scene may be created by the mobile multimedia device utilizing a plurality of the captured consecutive image samples based on the identifiable objects. The image of the scene may be composed by selecting at least a portion of the captured consecutive image samples based on the identified one or more smiling faces. The image of the scene may be composed in such a way that the identified moving object, which may occur in the scene, may be eliminated from the composed image of the scene. | 09-29-2011 |
20110235857 | DEVICE AND METHOD FOR CONTROLLING STREETLIGHTS - A method for controlling streetlights located at a streetlight control area using a streetlight power control system controls an image capturing device to capture digital images of at least one route section of the streetlight control area at a predetermined interval. Light of a streetlight corresponding to the streetlight power controller is automatically adjusted by turning on or off the streetlight and by increasing or decreasing the intensity of the streetlight. | 09-29-2011 |
20110235858 | Grouping Digital Media Items Based on Shared Features - Methods, apparatuses, and systems for grouping digital media items based on shared features. Multiple digital images are received. Metadata about the digital images is obtained either by analyzing the digital images or by receiving metadata from a source separate from the digital images or both. The obtained metadata is analyzed by data processing apparatus to identify a common feature among two or more of the digital images. A grouping of the two or more images is formed by the data processing apparatus based on the identified common feature. | 09-29-2011 |
20110235859 | Signal processor - A signal processor includes an input unit, an extraction unit, a calculation unit, a determination unit, and an output unit. The input unit receives a moving image including a plurality of images. The extraction unit analyzes the moving image and extracts a representative image from the moving image. The calculation unit calculates a change amount of a partial moving image including the representative image. The change amount indicates a degree of change. The determination unit uses the change amount to judge which of the representative image and at least a part of the moving image is to be output. The output unit outputs the representative image or the partial moving image according to a corresponding output format. | 09-29-2011 |
20110235860 | Method to estimate 3D abdominal and thoracic tumor position to submillimeter accuracy using sequential x-ray imaging and respiratory monitoring - A method of estimating target motion for image guided radiotherapy (IGRT) systems is provided. The method includes acquiring by a kV imaging system sequential images of a target motion, computing by the kV imaging system from the sequential images an image-based estimation of the target motion expressed in a patient coordinate system, transforming by the kV imaging system the image-based estimation in the patient coordinate system to an estimate in a projection coordinate system, reformulating by the kV imaging system the projection coordinate system in a converging iterative form to force a convergence of the projection coordinate system to output a resolved estimation of the target motion, and displaying by the kV imaging system the resolved estimation of the target motion. | 09-29-2011 |
20110235861 | METHOD AND APPARATUS FOR ESTIMATING ROAD SHAPE - An apparatus estimates a shape of a road on which a vehicle travels. The apparatus is mounted on the vehicle. In the apparatus, information indicative of a plurality of detection points is received through transmission and reception of electromagnetic waves. The detection points are given as a plurality of candidates for edges of the road. It is determined whether or not a distance between each detection point and the vehicle is equal to or larger than a predetermined value. A first approximated curve for each detection point having a distance equal to or larger than the predetermined value is detected, and a second approximated curve for each detection point having a distance less than the predetermined value is detected. The shape of the road is estimated by merging the first and second approximated curves. | 09-29-2011 |
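A small numpy sketch of the split-and-fit idea in the entry above: detection points are divided into far and near groups by the distance test, one approximated curve is fitted per group, and the road shape is taken from the merged pair. The quadratic fits and the threshold value are assumptions, not the patent's specified curve model.

```python
# Hypothetical sketch: fit separate approximated curves to far and near road-edge
# detection points; the merged pair describes the road shape.
import numpy as np

def estimate_road_curves(points_xy, dist_threshold=30.0):
    """points_xy: (N, 2) detections as (forward distance x, lateral offset y)."""
    pts = np.asarray(points_xy, float)
    dist = np.linalg.norm(pts, axis=1)
    near, far = pts[dist < dist_threshold], pts[dist >= dist_threshold]
    # One approximated curve per group; the merged road shape uses the near-range
    # curve close to the vehicle and the far-range curve beyond the threshold.
    near_curve = np.polyfit(near[:, 0], near[:, 1], 2) if len(near) >= 3 else None
    far_curve = np.polyfit(far[:, 0], far[:, 1], 2) if len(far) >= 3 else None
    return near_curve, far_curve
```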
20110235862 | FIELD OF IMAGING - Embodiments of the present invention provide a computer-based method for providing image data of a region of a target object ( | 09-29-2011 |
20110235863 | PROVISION OF IMAGE DATA - A method and apparatus are disclosed for providing image data. The method includes the steps of providing incident radiation from a radiation source at a target object and, via at least one detector, detecting an intensity of radiation scattered by the target object. Also via the at least one detector an intensity of radiation provided by the radiation source absent the target object is detected. Image data is provided via an iterative process responsive to the intensity of radiation detected absent the target object and the detected intensity of radiation scattered by the target object. | 09-29-2011 |
20110235864 | MOVING OBJECT TRAJECTORY ESTIMATING DEVICE - A moving object trajectory estimating device has: a surrounding information acquisition part that acquires information on surroundings of a moving object; a trajectory estimating part that specifies another moving object around the moving object based on the acquired surrounding information and estimates a trajectory of the specified moving object; and a recognition information acquisition part that acquires recognition information on a recognizable area of the specified moving object, and the trajectory estimating part estimates a trajectory of the specified moving object, based on the acquired recognition information of the specified moving object. | 09-29-2011 |
20110243376 | METHOD AND A DEVICE FOR DETECTING OBJECTS IN AN IMAGE - Detection of an object of a specified object category in an image. With the method, it is provided that: (1) at least two detectors are provided which are respectively set up for the purpose of detecting an object of the specified object category with a specified object size, wherein object sizes differ for the detectors, (2) the image is evaluated by the detectors in order to check whether an object of the specified object category is located in the image, and (3) an object of the specified object category is detected in the image when on the basis of the evaluation of the image by at least one of the detectors it is determined that an object of the specified object category is located in the image. A system suitable for implementing the method for detecting an object of a specified object category in an image is also described. | 10-06-2011 |
20110243377 | SYSTEM AND METHOD FOR PREDICTING OBJECT LOCATION - A system for predicting object location includes a video capture system for capturing a plurality of video frames, each of the video frames having a first area, an object isolation element for locating an object in each of the plurality of video frames, the object being located at a first actual position in a first video frame and being located at a second actual position in a second video frame, and a trajectory calculation element configured to analyze the first actual position and the second actual position to determine an object trajectory, the object trajectory comprising past trajectory and predicted future trajectory, wherein the predicted future trajectory is used to determine a second area in a subsequent video frame in which to search for the object, wherein the second area is different in size than the first area. | 10-06-2011 |
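A minimal sketch of the prediction step described in the entry above, under the assumption of simple linear extrapolation from two consecutive actual positions; the window size is a placeholder and the patent's trajectory model may be more elaborate.

```python
# Hypothetical sketch: extrapolate the next object position from two observations and
# center a reduced search rectangle on the prediction.
import numpy as np

def predict_search_area(pos_prev, pos_curr, frame_size, window_size=(80, 80)):
    pos_prev, pos_curr = np.asarray(pos_prev, float), np.asarray(pos_curr, float)
    velocity = pos_curr - pos_prev                 # pixels per frame
    predicted = pos_curr + velocity                # expected position in the next frame
    half = np.asarray(window_size, float) / 2.0
    w, h = frame_size
    x0, y0 = np.clip(predicted - half, 0, [w - 1, h - 1])
    x1, y1 = np.clip(predicted + half, 0, [w - 1, h - 1])
    return int(x0), int(y0), int(x1), int(y1)      # search area smaller than the full frame
```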
20110243378 | METHOD AND APPARATUS FOR OBJECT TRACKING AND LOITERING DETECTION - A method and apparatus for object tracking and loitering detection are provided. The method includes: wavelet-converting an input image by converting the input image into an image of a frequency domain to generate a frequency domain image and separating the frequency domain image according to a frequency band and a resolution; extracting object information including essential information about the input image from the frequency domain image; performing a fractal affine transform on the object information; and compensating for a difference between object information about a previous image and the object information about the input image by using a coefficient which is obtained by the fractal affine transform. | 10-06-2011 |
20110243379 | VEHICLE POSITION DETECTION SYSTEM - A system stores reference data generated by associating image feature point data with an image-capturing position and a recorded vehicle event. The system generates data for matching by extracting image feature points from an actually-captured image. The system generates information on an actual vehicle event, extracts first reference data whose image-capturing position is located in a vicinity of an estimated position of the vehicle, and extracts second reference data that includes a recorded vehicle event that matches the actual vehicle event. The system performs matching between at least one of the first reference data and the second reference data, and the data for matching, and determines a position of the vehicle based on the matching. | 10-06-2011 |
20110243380 | COMPUTING DEVICE INTERFACE - A computing device configured for providing an interface is described. The computing device includes a processor and instructions stored in memory. The computing device projects a projected image from a projector. The computing device also captures an image including the projected image using a camera. The camera operates in a visible spectrum. The computing device calibrates itself, detects a hand and tracks the hand based on a tracking pattern in a search space. The computing device also performs an operation. | 10-06-2011 |
20110243381 | METHODS FOR TRACKING OBJECTS USING RANDOM PROJECTIONS, DISTANCE LEARNING AND A HYBRID TEMPLATE LIBRARY AND APPARATUSES THEREOF - A method, non-transitory computer readable medium, and apparatus that track an object include utilizing random projections to represent an object in a region of an initial frame in a transformed space with at least one less dimension. One of a plurality of regions in a subsequent frame with the closest similarity between the represented object and one or more of a plurality of templates is identified as a location for the object in the subsequent frame. A learned distance is applied for template matching, and techniques that incrementally update the distance metric online are utilized in order to model the appearance of the object and increase the discrimination between the object and the background. A hybrid template library, with stable templates and hybrid templates that contain appearances of the object during the initial stage of tracking as well as more recent ones, is utilized to achieve robustness with respect to pose variation and illumination changes. | 10-06-2011 |
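A hedged numpy sketch of the random-projection step in this entry: a fixed Gaussian matrix maps flattened patches into a lower-dimensional space, and the candidate region nearest the template library in that space is selected. Plain Euclidean distance stands in for the patent's learned distance metric, and all names are illustrative.

```python
# Hypothetical sketch: compare candidate regions with a template library in a
# randomly projected, lower-dimensional space.
import numpy as np

rng = np.random.default_rng(0)

def make_projection(patch_dim, reduced_dim=64):
    # Gaussian random projection; the same matrix must be reused for all patches.
    return rng.normal(0.0, 1.0 / np.sqrt(reduced_dim), size=(reduced_dim, patch_dim))

def best_candidate(candidates, templates, projection):
    """candidates: (M, D) flattened patches; templates: (T, D) template library."""
    proj_c = candidates @ projection.T            # (M, reduced_dim)
    proj_t = templates @ projection.T             # (T, reduced_dim)
    # Distance from every candidate to its nearest template in the projected space.
    dists = np.linalg.norm(proj_c[:, None, :] - proj_t[None, :, :], axis=2).min(axis=1)
    return int(np.argmin(dists))                  # index of the most similar candidate region
```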
20110243382 | X-Ray Inspection System and Method - The present specification discloses an X-ray system for processing X-ray data to determine an identity of an object under inspection. The X-ray system includes an X-ray source for transmitting X-rays, where the X-rays have a range of energies, through the object; a detector array for detecting the transmitted X-rays, where each detector outputs a signal proportional to an amount of energy deposited at the detector by a detected X-ray; and at least one processor that reconstructs an image from the signal, where each pixel within the image represents an associated mass attenuation coefficient of the object under inspection at a specific point in space and for a specific energy level, fits each pixel to a function to determine the mass attenuation coefficient of the object under inspection at that point in space, and uses the function to determine the identity of the object under inspection. | 10-06-2011 |
20110243383 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing device includes a reference background storage unit that stores a reference background image, an estimation unit that detects an object from an input image and estimates an approximate position and an approximate shape of the object that is detected, a background difference image generation unit that generates a background difference image obtained based on a difference value between the input image and the reference background image, a failure determination unit that determines whether a failure occurs in the background difference image based on a comparison between the background difference image that is generated by the background difference image generation unit and the object that is estimated by the estimation unit, a failure type identification unit that identifies a type of the failure, and a background image update unit that updates the reference background image in a manner to correspond to the type of the failure. | 10-06-2011 |
20110243384 | IMAGE PROCESSING APPARATUS AND METHOD AND PROGRAM - There are provided an image processing apparatus, a method, and a program capable of appropriately adjusting the stereoscopic effect in a stereoscopic image containing a person. The attention point serving as the provisional cross point position is set to a person's eye, and the cross point position is shifted backwards from the attention point as the percentage of the image occupied by the face increases, thereby adjusting the stereoscopic effect so as to increase the area of the object which is projected forward from the cross point. Regarding the calculation of the back shift amount, the back shift amount is set to increase as the percentage of the standard image occupied by the face increases, the coefficient is set to be smaller as the number of pixels at positions nearer than the attention point increases, and the set coefficient kb is multiplied by the back shift amount. | 10-06-2011 |
20110243385 | Moving object detection apparatus, moving object detection method, and program - Disclosed herein is a moving object detection apparatus including: an image input processing section configured to input an analysis image composed of an image taken by a camera in order to establish a designated region inside the analysis image; a first detection processing section configured to detect an image of a moving object which moves within the designated region established by the image input processing section and which is at a distance in a first range from the camera; and a second detection processing section configured to detect an image of the moving object which moves within the designated region established by the image input processing section and which is at a distance in a second range from the camera, the second range being farther than the first range. | 10-06-2011 |
20110243386 | Method and System for Multiple Object Detection by Sequential Monte Carlo and Hierarchical Detection Network - A method and system for detecting multiple objects in an image is disclosed. A plurality of objects in an image is sequentially detected in an order specified by a trained hierarchical detection network. In the training of the hierarchical detection network, the order for object detection is automatically determined. The detection of each object in the image is performed by obtaining a plurality of sample poses for the object from a proposal distribution, weighting each of the plurality of sample poses based on an importance ratio, and estimating a posterior distribution for the object based on the weighted sample poses. | 10-06-2011 |
20110243387 | Analysis of Radiographic Images - The present invention therefore provides a method for the analysis of radiographic images, comprising the steps of acquiring a plurality of projection images of a patient, acquiring a surrogate signal indicative of the location of a target structure in the patient, reconstructing a plurality of volumetric images of the patient from the projection images, each volumetric image being reconstructed from projection images having a like breathing phase, identifying the position of the target structure such as a tumour in each volumetric image, associating a surrogate signal with each of the projection images, and determining a relationship between the surrogate signal and the position of the target structure. Multiple projection images having a like breathing phase can be grouped for reconstruction, to provide sufficient numbers for reconstruction. The analysis of the multiple values of the surrogate associated with each breathing phase can be used to determine the mean surrogate value and its variation. Multiple values of the surrogate signal associated with the same nominal breathing phase can be used to determine a mean value of the surrogate signal for the target position associated with that phase and a variation of the value of the surrogate signal for the target position associated with that phase. The breathing phase of specific projection images can be obtained by analysis of one or more features in the images, such as the method we described in U.S. Pat. No. 7,356,112, or otherwise. | 10-06-2011 |
20110243388 | IMAGE DISPLAY APPARATUS, IMAGE DISPLAY METHOD, AND PROGRAM - An image display apparatus may include a display section for presenting an image. The apparatus may also include a viewing angle calculation section for determining a viewing angle of a user relative to the display section. Additionally, the apparatus may include an image generation section for generating first image data representing a first image, and for supplying the first image data to the display section for presentation of the first image. The image generation section may generate the first image data based on the user's viewing angle, second image data representing a second image, and third image data representing a third image. The second image may include an object viewed from a first viewing angle and the third image may include the object viewed from a second viewing angle, the first viewing angle and the second viewing angle being different from each other and from the user's viewing angle. | 10-06-2011 |
20110243389 | METHOD OF DETECTING PARTICLES BY DETECTING A VARIATION IN SCATTERED RADIATION - A smoke detecting method which uses a beam of radiation such as a laser ( | 10-06-2011 |
20110249861 | CONTENT INFORMATION PROCESSING DEVICE, CONTENT INFORMATION PROCESSING METHOD, CONTENT INFORMATION PROCESSING PROGRAM, AND PERSONAL DIGITAL ASSISTANT - An information processing apparatus that includes a reproduction unit to reproduce video content comprising a plurality of frames; a memory to store a table including object identification information identifying an object image, and frame identification information identifying a frame of the plurality of frames that includes the object image; and a processor to extract the frame including the object image from the video content and generate display data of a reduced image corresponding to the frame for display. | 10-13-2011 |
20110249862 | IMAGE DISPLAY DEVICE, IMAGE DISPLAY METHOD, AND IMAGE DISPLAY PROGRAM - According to one embodiment, an image display device that displays acquired image frames includes: an image processing unit that detects a location of a target in a first image frame among the image frames and generates a first predicted location of the target in a second image frame acquired at a first time when a predetermined number of frames or predetermined period of time has passed since the first image frame is acquired; a script processing unit that generates at least one tracking image that starts from the location of the target in the first image frame and heads toward the first predicted location in the second image frame; a synthesis unit that generates combined images where the at least one tracking image is put on image frames between the first and second image frame; and a display unit that displays the combined images. | 10-13-2011 |
20110249863 | INFORMATION PROCESSING DEVICE, METHOD, AND PROGRAM - An information processing device includes a face detection unit that detects a face area from a target image, a feature point detection unit that detects a feature point of the detected face area, a determination unit that determines an attention area that is an area to which attention is paid in the face area based on the detected feature point, a reference color extraction unit that extracts a reference color that is color setting obtained from the target image in the determined attention area, an adjustment unit that adjusts the extracted reference color to a color setting for a modified image generated from the target image as a base, and a generation unit that generates the modified image from the target image by drawing the attention area using the color setting for the modified image. | 10-13-2011 |
20110249864 | MEASUREMENT OF THREE-DIMENSIONAL MOTION CHARACTERISTICS - A system for measurement of three-dimensional motion of an object is provided. The system includes a light projection means adapted for projecting, for distinct time intervals, light of at least two different colors with a cross-sectional pattern of fringe lines onto a surface of the object and also includes image acquisition means for capturing an image of the object during an exposure time, wherein the distinct time intervals are within the duration of the exposure time. The system further includes image processing means adapted for processing the image to obtain a different depth map for each color based on a projected pattern of fringe lines on the object as viewed from the position of the image acquisition means, to determine corresponding points on the depth maps of each color, and to determine a three-dimensional motion characteristic of the object based on the positions of corresponding points on the depth maps. | 10-13-2011 |
20110249865 | APPARATUS, METHOD AND COMPUTER-READABLE MEDIUM PROVIDING MARKER-LESS MOTION CAPTURE OF HUMAN - Provided are an apparatus, method and computer-readable medium providing marker-less motion capture of a human. The apparatus may include a two-dimensional (2D) body part detection unit to detect, from input images, candidate 2D body part locations of candidate 2D body parts; a three-dimensional (3D) lower body part computation unit to compute 3D lower body parts using the detected candidate 2D body part locations; a 3D upper body computation unit to compute 3D upper body parts based on a body model; and a model rendering unit to render the model in accordance with a result of the computed 3D upper body parts. | 10-13-2011 |
20110249866 | METHODS AND SYSTEMS FOR THREE DIMENSIONAL OPTICAL IMAGING, SENSING, PARTICLE LOCALIZATION AND MANIPULATION - Embodiments include methods, systems, and/or devices that may be used to image, obtain three-dimensional information from a scene, and/or locate multiple small particles and/or objects in three dimensions. A point spread function (PSF) with a predefined three dimensional shape may be implemented to obtain high Fisher information in 3D. The PSF may be generated via a phase mask, an amplitude mask, a hologram, or a diffractive optical element. The small particles may be imaged using the 3D PSF. The images may be used to find the precise location of the object using an estimation algorithm such as maximum likelihood estimation (MLE), expectation maximization, or Bayesian methods, for example. Calibration measurements can be used to improve the theoretical model of the optical system. Fiduciary particles/targets can also be used to compensate for drift and other types of movement of the sample relative to the detector. | 10-13-2011 |
20110249867 | DETECTION OF OBJECTS IN DIGITAL IMAGES - A system and method to detect objects in a digital image. At least one image representing at least one frame of a video sequence is received. A given color channel of the image is extracted. At least one blob that stands out from a background of the given color channel is identified. One or more features are extracted from the blob. The one or more features are provided to a plurality of pre-learned object models each including a set of pre-defined features associated with a pre-defined blob type. The one or more features are compared to the set of pre-defined features. The blob is determined to be of a type that substantially matches a pre-defined blob type associated with one of the pre-learned object models. At least a location of an object is visually indicated within the image that corresponds to the blob. | 10-13-2011 |
20110249868 | LINE-OF-SIGHT DIRECTION DETERMINATION DEVICE AND LINE-OF-SIGHT DIRECTION DETERMINATION METHOD - Provided are a line-of-sight direction determination device and a line-of-sight direction determination method capable of highly precisely and accurately determining a line-of-sight direction from immediately after start of measurement without indication of an object to be carefully observed and adjustment work done in advance. The line-of-sight direction determination device ( | 10-13-2011 |
20110255738 | Method and Apparatus for Visual Search Stability - Various methods for visual search stability are provided. One example method includes determining a plurality of image matching distances for a captured object depicted in a video frame, where each image matching distance is indicative of a quality of a match between the captured object and a respective object match result. The example method further includes including, in a candidate pool, an indication of the object match results having image matching distances in a candidate region, discarding the object match results having image matching distances in a non-candidate region, and analyzing the object match results with image matching distances in a potential candidate region to include, in the candidate pool, indications of select object match results with image matching distances in the potential candidate region. Similar and related example methods and example apparatuses are also provided. | 10-20-2011 |
20110255739 | IMAGE CAPTURING DEVICE AND METHOD WITH OBJECT TRACKING - A method for dynamically tracking a specific object in a monitored area obtains an image of the monitored area by one of a plurality of image capturing devices in the monitored area, and detects the specific object in the obtained image. The method further determines adjacent image capturing devices in the monitored area according to the path table upon the condition that the specific object is detected, and adjusts a detection sensitivity of each of the adjacent image capturing devices. | 10-20-2011 |
20110255740 | VEHICLE TRACKING SYSTEM AND TRACKING METHOD THEREOF - The present invention discloses a vehicle tracking system and method, and the tracking method comprises the steps of capturing a bright object from an image by the bright object segmentation; labeling the bright object by a connected component labeling method and forming a connected component object; identifying, analyzing and combining the characteristics of the connected component object to form a lamp object by the bright object recognition; tracking the trajectory of the lamp object by a multi-vehicle tracking method; and identifying the type of a vehicle having the lamp object by the vehicle detection/recognition and counting the number of various vehicles. | 10-20-2011 |
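Entry 20110255740 above walks through bright-object segmentation followed by connected-component labeling and grouping of lamp objects. The sketch below illustrates just that front end under simple assumptions: a fixed brightness threshold and a pairing rule (similar height, similar area, bounded lateral spacing) stand in for the patent's recognition and tracking stages.

```python
# Minimal bright-object segmentation and lamp pairing in the spirit of 20110255740.
import cv2
import numpy as np

def find_lamp_pairs(frame_bgr, bright_thresh=220, max_dy=10, max_dx=200):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, bright_thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    blobs = [(tuple(centroids[i]), stats[i, cv2.CC_STAT_AREA])
             for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 20]
    pairs = []
    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)):
            (x1, y1), a1 = blobs[i]
            (x2, y2), a2 = blobs[j]
            # Lamps of one vehicle: similar height, similar area, bounded spacing.
            if abs(y1 - y2) < max_dy and abs(x1 - x2) < max_dx and 0.5 < a1 / a2 < 2.0:
                pairs.append((blobs[i][0], blobs[j][0]))
    return pairs

frame = np.zeros((120, 320, 3), np.uint8)
frame[60:66, 80:90] = 255                     # left lamp
frame[60:66, 230:240] = 255                   # right lamp
print(find_lamp_pairs(frame))                 # -> one candidate lamp pair
```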
20110255741 | METHOD AND APPARATUS FOR REAL-TIME PEDESTRIAN DETECTION FOR URBAN DRIVING - A computer implemented method for detecting the presence of one or more pedestrians in the vicinity of the vehicle is disclosed. Imagery of a scene is received from at least one image capturing device. A depth map is derived from the imagery. A plurality of pedestrian candidate regions of interest (ROIs) is detected from the depth map by matching each of the plurality of ROIs with a 3D human shape model. At least a portion of the candidate ROIs is classified by employing a cascade of classifiers tuned for a plurality of depth bands and trained on a filtered representation of data within the portion of candidate ROIs to determine whether at least one pedestrian is proximal to the vehicle. | 10-20-2011 |
20110255742 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION STORAGE MEDIUM - A situation data obtaining unit obtains situation data describing a situation of an image capturing target of which image is captured by an image capturing device for producing an image to be output. Based on the situation data, a simulation process executing unit carries out a simulation process for simulating a behavior of the image capturing target after the situation of the image capturing target, described by the situation data. A combined screen image output unit outputs a result of the simulation process by the simulation process executing unit. The simulation process executing unit changes the behavior of the image capturing target in the simulation process in response to an operation received from a user. | 10-20-2011 |
20110255743 | OBJECT RECOGNITION USING HAAR FEATURES AND HISTOGRAMS OF ORIENTED GRADIENTS - A system and method to detect objects in a digital image. At least one image representing at least one frame of a video sequence is received. A sliding window of different window sizes at different locations is placed in the image. A cascaded classifier including a plurality of increasingly accurate layers is applied to each window size and each location. Each layer includes a plurality of classifiers. An area of the image within a current sliding window is evaluated using one or more weak classifiers in the plurality of classifiers based on at least one of Haar features and Histograms of Oriented Gradients features. An output of each weak classifier is a weak decision as to whether the area of the image includes an instance of an object of a desired object type. A location of the zero or more images associated with the desired object type is identified. | 10-20-2011 |
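Entry 20110255743 above describes a sliding window evaluated by a cascade of increasingly accurate layers, each built from weak classifiers. The following sketch shows that control flow only; the weak classifier here is a placeholder rather than the Haar or HOG features named in the abstract, and the window size, step, and thresholds are assumed values.

```python
# Illustrative sliding-window cascade with early rejection (cf. 20110255743).
import numpy as np

def run_cascade(image, layers, window=64, step=16):
    """layers: list of (weak_classifiers, threshold); each weak classifier is
    a callable patch -> float in [0, 1]."""
    h, w = image.shape[:2]
    detections = []
    for y in range(0, h - window + 1, step):
        for x in range(0, w - window + 1, step):
            patch = image[y:y + window, x:x + window]
            accepted = True
            for weak_clfs, thresh in layers:        # increasingly accurate layers
                score = sum(clf(patch) for clf in weak_clfs)
                if score < thresh:                  # early rejection saves work
                    accepted = False
                    break
            if accepted:
                detections.append((x, y, window, window))
    return detections

# Example with a trivial "mean brightness" weak classifier (an assumption):
img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
weak = lambda p: float(p.mean() > 100)
print(run_cascade(img, layers=[([weak, weak], 1.0)]))
```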
20110255744 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 10-20-2011 |
20110255745 | IMAGE ANALYSIS PLATFORM FOR IDENTIFYING ARTIFACTS IN SAMPLES AND LABORATORY CONSUMABLES - A High-resolution Image Acquisition and Processing Instrument (HIAPI) performs at least five simultaneous measurements in a noninvasive fashion, namely: (a) determining the volume of a liquid sample in wells (or microtubes) containing liquid sample, (b) detection of precipitate, objects or artifacts within microliter plate wells, (c) classification of colored samples in microliter plate wells or microtubes; (d) determination of contaminant (e.g. water concentration); (e) air bubbles; (f) problems with the actual plate. Remediation of contaminant is also possible. | 10-20-2011 |
20110255746 | SYSTEM FOR USING THREE-DIMENSIONAL MODELS TO ENABLE IMAGE COMPARISONS INDEPENDENT OF IMAGE SOURCE - A method for identifying an object based at least in part on a reference database including two-dimensional images of objects includes the following steps: (a) providing a three-dimensional model reference database containing a plurality of estimated three-dimensional models, wherein each estimated three-dimensional model is derived from a corresponding two-dimensional image from the two-dimensional reference database; (b) sampling at least one image of an object to be identified; (c) implementing at least one identification process to identify the object, the identification process employing data from the three-dimensional model reference database. | 10-20-2011 |
20110255747 | MOVING OBJECT DETECTION APPARATUS AND MOVING OBJECT DETECTION METHOD - A moving object detection apparatus includes: an image input unit which receives a plurality of pictures included in video; a trajectory calculating unit which calculates a plurality of trajectories from the pictures; a subclass classification unit which classifies the trajectories into a plurality of subclasses; an inter-subclass approximate geodetic distance calculating unit which calculates, for each of the subclasses, an inter-subclass approximate geodetic distance representing similarity between the subclass and another subclass, using an inter-subclass distance that is a distance including a minimum value of a linear distance between each of trajectories belonging to the subclass and one of trajectories belonging to the other subclass; and a segmentation unit which performs segmentation by determining, based on the calculated inter-subclass approximate geodetic distance, a set of subclasses including similar trajectories as one class. | 10-20-2011 |
20110255748 | ARTICULATED OBJECT REGION DETECTION APPARATUS AND METHOD OF THE SAME - An articulated object region detection apparatus includes: a subclass classification unit which classifies trajectories into subclasses; a distance calculating unit which calculates, for each of the subclasses, a point-to-point distance and a geodetic distance between the subclass and another subclass; and a region detection unit which detects, as a region having an articulated motion, two subclasses to which trajectories corresponding to two regions connected via the same articulation and indicating the articulated motion belong, based on a temporal change in the point-to-point distance and a temporal change in the geodetic distance between two given subclasses. | 10-20-2011 |
20110262001 | VIEWPOINT DETECTOR BASED ON SKIN COLOR AREA AND FACE AREA - In a particular illustrative embodiment, a method of determining a viewpoint of a person based on skin color area and face area is disclosed. The method includes receiving image data corresponding to an image captured by a camera, the image including at least one object to be displayed at a device coupled to the camera. The method further includes determining a viewpoint of the person relative to a display of the device coupled to the camera. The viewpoint of the person may be determined by determining a face area of the person based on a determined skin color area of the person and tracking a face location of the person based on the face area. One or more objects displayed at the display may be moved in response to the determined viewpoint of the person. | 10-27-2011 |
20110262002 | HAND-LOCATION POST-PROCESS REFINEMENT IN A TRACKING SYSTEM - A tracking system having a depth camera tracks a user's body in a physical space and derives a model of the body, including an initial estimate of a hand position. Temporal smoothing is performed when the initial estimate moves by less than a threshold level from frame to frame, while little or no smoothing is performed when the movement is more than the threshold. The smoothed estimate is used to define a local volume for searching for a hand extremity to define a new hand position. Another process generates stabilized upper body points that can be used as reliable reference positions, such as by detecting and accounting for occlusions. The upper body points and a prior estimated hand position are used to define an arm vector. A search is made along the vector to detect a hand extremity to define a new hand position. | 10-27-2011 |
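Entry 20110262002 above hinges on a simple rule: smooth the hand estimate only while the frame-to-frame motion stays below a threshold, and pass larger, deliberate movements through unfiltered. A small sketch of that rule follows; the blend factor and threshold are assumed values chosen for illustration.

```python
# Thresholded temporal smoothing of a hand estimate (cf. 20110262002).
import numpy as np

def smooth_hand(prev, current, threshold=0.05, alpha=0.3):
    """prev, current: 3D positions (meters). Returns the new filtered estimate."""
    prev, current = np.asarray(prev, float), np.asarray(current, float)
    if np.linalg.norm(current - prev) < threshold:
        # Small jitter: blend toward the new measurement.
        return (1.0 - alpha) * prev + alpha * current
    # Large, deliberate movement: follow the measurement directly.
    return current

print(smooth_hand([0.10, 0.20, 1.00], [0.11, 0.20, 1.00]))  # jitter -> smoothed
print(smooth_hand([0.10, 0.20, 1.00], [0.40, 0.25, 0.90]))  # real move -> raw
```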
20110262003 | OBJECT LEARNING METHOD, OBJECT TRACKING METHOD USING THE SAME, AND OBJECT LEARNING AND TRACKING SYSTEM - The present invention relates to an object learning method that minimizes time required for learning an object, an object tracking method using the object learning method, and an object learning and tracking system. The object learning method includes: receiving an image to be learned through a camera to generate a front image by a terminal; generating m view points used for object learning and generating first images obtained when viewing the object from the m view points using the front image; generating second images by performing radial blur on the first images; separating an area used for learning from the second images to obtain reference patches; and storing pixel values of the reference patches. | 10-27-2011 |
20110262004 | Learning Device and Learning Method for Article Transport Facility - A learning control device performs a positioning process, a first image capturing process, and a first deviation amount calculating process in which a reference position deviation amount in the horizontal direction between the imaging reference position and a detection mark is derived based on image information captured in the first image capturing process to derive a position adjustment amount from the derived reference position deviation amount, and the learning control device further includes a positioning correcting process in which the position adjustment device is operated to adjust a position of the second learn assist member based on the derived movement adjustment amount when the reference position deviation amount derived in the first deviation amount calculating process falls outside a set tolerance range. A second image capturing process, and a second deviation amount calculating process may be further provided. | 10-27-2011 |
20110262005 | OBJECT DETECTING METHOD AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING AN OBJECT DETECTION PROGRAM - An object detecting method includes dividing a standard pattern into two or more areas radially from a central point; selecting, in each divided area of the standard pattern, a standard pattern pixel position at the maximum distance from the area dividing central point as a standard pattern representative point; dividing a determined pattern into two or more areas; selecting, in each divided area of the determined pattern, a determined pattern pixel position at the maximum distance from the area dividing central point as a determined pattern representative point; determining a positional difference between the standard pattern representative point and the determined pattern representative point in the corresponding divided areas; and determining the determined pattern as a target object when the positional differences in all of the divided areas are within a predetermined range. | 10-27-2011 |
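The matching rule in entry 20110262005 lends itself to a compact sketch: divide a binary pattern into radial sectors around a center, keep the farthest "on" pixel in each sector as that sector's representative point, and accept a candidate when every sector's representative points agree within a tolerance. The sector count and tolerance below are assumptions.

```python
# Radial-sector representative-point matching in the spirit of 20110262005.
import numpy as np

def representative_points(mask, center, sectors=8):
    """mask: 2D boolean array; center: (row, col). One point per sector, or None."""
    ys, xs = np.nonzero(mask)
    ang = np.arctan2(ys - center[0], xs - center[1])            # [-pi, pi]
    sec = ((ang + np.pi) / (2 * np.pi) * sectors).astype(int) % sectors
    dist = np.hypot(ys - center[0], xs - center[1])
    reps = [None] * sectors
    for s in range(sectors):
        idx = np.nonzero(sec == s)[0]
        if idx.size:
            k = idx[np.argmax(dist[idx])]                       # farthest pixel
            reps[s] = (int(ys[k]), int(xs[k]))
    return reps

def patterns_match(std_reps, cand_reps, tol=3.0):
    """Match when every sector's representative points lie within tolerance."""
    for a, b in zip(std_reps, cand_reps):
        if a is None or b is None:
            return False
        if np.hypot(a[0] - b[0], a[1] - b[1]) > tol:
            return False
    return True

mask = np.zeros((50, 50), bool)
mask[10:40, 10:40] = True
reps = representative_points(mask, center=(25, 25))
print(patterns_match(reps, reps))                               # -> True
```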
20110262006 | INTERFACE APPARATUS, GESTURE RECOGNITION METHOD, AND GESTURE RECOGNITION PROGRAM - An interface apparatus is configured to output an operation signal to a target apparatus operated in accordance with a gesture command. In the interface apparatus, a reference object detection unit detects a reference object having a feature similar to a predetermined reference feature value from an image captured by an image capture unit and generates reference information identifying the reference object. Based on the reference information, an operating object identifying unit identifies as the operating object a feature object included in the image and satisfying a predetermined identification condition in terms of a relative relationship with the reference object and extracts operating object information identifying the operating object. An operation signal generation unit starts detecting the gesture command according to a change in position of the identified operating object and generates the operation signal corresponding to the gesture command. | 10-27-2011 |
20110262007 | SHAPE MEASUREMENT APPARATUS AND CALIBRATION METHOD - The shape measurement apparatus calculates a characteristic amount for a plurality of points of interest on a surface of a measurement target object, based on an image obtained by image capturing with a camera, calculates an orientation of a normal line based on a value of the characteristic amount by referencing data stored in advance in a storage device, and restores the three-dimensional shape of the surface of the measurement target object based on a result of the calculation. The storage device stores a plurality of data sets generated respectively for a plurality of reference positions arranged in a field of view of the camera, and the data set to be referenced is switched depending on a position of a point of interest. | 10-27-2011 |
20110262008 | Method for Determining Position Data of a Target Object in a Reference System - A method for determining the position data of a target object in a reference system from an observation position at a distance. A three-dimensional reference model of the surroundings of the target object is provided, the reference model including known geographical location data. An image of the target object and its surroundings, resulting from the observation position for an observer, is matched with the reference model. The position data of the sighted target object in the reference model is determined as relative position data with respect to known location data of the reference model. | 10-27-2011 |
20110262009 | METHOD AND APPARATUS FOR IDENTIFYING OBSTACLE IN IMAGE - A method for identifying barriers in images is disclosed. In the method, images of a current frame and an N frame which is nearest to the current frame are obtained, the obtained images are divided in the same way, and each frame yields a plurality of divided block regions; the motion barrier confidence of each block region corresponding to the current frame and the N frame nearest to the current frame is calculated; whether each block region in the image of the current frame represents a barrier is decided successively according to the motion barrier confidence of each block region corresponding to the current frame and the N frame nearest to the current frame; and the barriers in the images are determined according to each block region. | 10-27-2011 |
20110262010 | ARRANGEMENT AND METHOD RELATING TO AN IMAGE RECORDING DEVICE - An input system for a digital camera may include a portion for taking at least one image to be used as a control image; and a controller to control at least one operation of the digital camera based on a control command recognized from the control image, the control command controlling a function of the camera. | 10-27-2011 |
20110268316 | MULTIPLE CENTROID CONDENSATION OF PROBABILITY DISTRIBUTION CLOUDS - Systems and methods are disclosed for identifying objects captured by a depth camera by condensing classified image data into centroids of probability that captured objects are correctly identified entities. Output exemplars are processed to detect spatially localized clusters of non-zero probability pixels. For each cluster, a centroid is generated, generally resulting in multiple centroids for each differentiated object. Each centroid may be assigned a confidence value, indicating the likelihood that it corresponds to a true object, based on the size and shape of the cluster, as well as the probabilities of its constituent pixels. | 11-03-2011 |
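Entry 20110268316 above condenses classified probability maps into centroids with confidences. The sketch below takes a plausible reading of that step: connected runs of non-zero probability pixels become clusters, each cluster is reduced to a probability-weighted centroid, and a confidence is derived from the cluster's size and summed probability. The confidence formula and minimum cluster size are assumptions.

```python
# Condensing a probability map into scored centroids (cf. 20110268316).
import cv2
import numpy as np

def condense_to_centroids(prob_map, min_pixels=10):
    """prob_map: float32 array in [0, 1]. Returns [(x, y, confidence), ...]."""
    mask = (prob_map > 0).astype(np.uint8)
    n, labels = cv2.connectedComponents(mask)
    out = []
    for i in range(1, n):
        ys, xs = np.nonzero(labels == i)
        if ys.size < min_pixels:                   # ignore tiny clusters
            continue
        w = prob_map[ys, xs]
        cx, cy = np.average(xs, weights=w), np.average(ys, weights=w)
        confidence = float(w.sum() / (w.size + 1.0))   # size- and probability-based score
        out.append((float(cx), float(cy), confidence))
    return out

demo = np.zeros((60, 60), np.float32)
demo[10:20, 10:20] = 0.8
print(condense_to_centroids(demo))
```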
20110268317 | Data Capture and Identification System and Process - An identification method and process for objects from digitally captured images thereof that uses data characteristics to identify an object from a plurality of objects in a database. The data is broken down into parameters such as a Shape Comparison, Grayscale Comparison, Wavelet Comparison, and Color Cube Comparison with object data in one or more databases to identify the actual object of a digital image. | 11-03-2011 |
20110268318 | PHOTO DETECTING APPARATUS AND SYSTEM HAVING THE SAME - A photo detecting apparatus may include a signal processing unit, a control register unit, and a register data changing unit. The signal processing unit is configured to process electric signals converted from incident light to generate image data. The control register unit supplies a set value to the signal processing unit, the set value controlling operation of the signal processing unit. The control register unit stores a first set value supplied through a first bus, the first set value corresponding to an initial set value based on a decoded external control signal. In addition, the register data changing unit supplies a second set value to the control register unit through a second bus, separate from the first bus, when the first set value is to be changed. | 11-03-2011 |
20110268319 | DETECTING AND TRACKING OBJECTS IN DIGITAL IMAGES - There is provided an improved solution for detecting and tracking objects in digital images. The solution comprises selecting a neighborhood for each pixel under observation, the neighborhood being of known size and form, and reading pixel values of the neighborhood. Further the solution comprises selecting at least one set of coefficients for weighting each neighborhood such that each pixel value of each neighborhood is weighted with at least one coefficient; searching for an existence of at least one object feature at each pixel under observation on the basis of a combination of weighted pixel values at each neighborhood; and verifying the existence of the object in the digital image on the basis of the searches of existence of at least one object feature at a predetermined number of pixels. | 11-03-2011 |
20110268320 | METHOD AND APPARATUS FOR DETECTING AND SEPARATING OBJECTS OF INTEREST IN SOCCER VIDEO BY COLOR SEGMENTATION AND SHAPE ANALYSIS - Substantial elimination of errors in the detection and location of overlapping human objects in an image of a playfield is achieved, in accordance with at least one aspect of the invention, by performing a predominately shape-based analysis of one or more characteristics obtained from a specified portion of the candidate non-playfield object, by positioning a human object model substantially over the specified portion of the candidate non-playfield object in accordance with information based at least in part on information from the shape-based analysis, and removing an overlapping human object from the portion of the candidate non-playfield object identified by the human object model. In one exemplary embodiment, the human object model is an ellipse whose major and minor axes are variable in relation to one or more characteristics identified from the specified portion of the candidate non-playfield object. | 11-03-2011 |
20110268321 | PERSON-JUDGING DEVICE, METHOD, AND PROGRAM - A person-judging device comprises: an obstruction storage which stores information indicating an area of an obstruction which is extracted from an image based on a video signal from an external camera, the obstruction being extracted from the image; head portion range calculation means which, when a portion of an object which is extracted from the image is hidden by the obstruction, assumes that a potential range of grounding points where the object touches a reference face on the image is the area of the obstruction which is stored in the obstruction storage, and which, based on the assumed range and the correlation between the height of a person and the size and position of the head portion that are previously provided, calculates the potential range of the head portion on the image by assuming that a portion farthest from the grounding points of the object is the head portion of the person; and head portion detection means that judges whether an area including a shape corresponding to the head portion exists in the calculated range of the head portion. | 11-03-2011 |
20110274314 | REAL-TIME CLOTHING RECOGNITION IN SURVEILLANCE VIDEOS - Systems and methods are disclosed to recognize clothing from videos by detecting and tracking a human; performing face alignment and occlusion detection; and performing age and gender estimation, skin area extraction, and clothing segmentation, the results of which are provided to a linear support vector machine (SVM) to recognize clothing worn by the human. | 11-10-2011 |
20110274315 | METHOD, DEVICE, AND COMPUTER-READABLE MEDIUM OF OBJECT DETECTION - Disclosed are an object detection method and an object detection device. The object detection method comprises a step of obtaining plural detection results of a current frame according to plural object detection methods; a step of setting initial probabilities of the plural detection results of the current frame; a step of calculating a movement frequency distribution diagram representing movement frequencies of respective pixels in the current frame; a step of obtaining detection results of a previous frame; a step of updating the probabilities of the plural detection results of the current frame; and a step of determining a final list of detected objects based on the updated probabilities of the plural detection results of the current frame. | 11-10-2011 |
20110274316 | METHOD AND APPARATUS FOR RECOGNIZING LOCATION OF USER - A method of recognizing a location of a user is provided, which includes detecting the two eyes and mouth of the user's face, calculating a ratio of a distance between the two eyes to a distance between a middle point of the two eyes and the mouth, calculating a rotation angle of the face according to the ratio, and detecting a distance between the face and the camera based on the rotation angle. | 11-10-2011 |
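The geometry behind entry 20110274316 is simple enough to sketch: the ratio of inter-eye distance to the distance from the eye midpoint to the mouth shrinks as the head turns, giving a rough rotation angle, and the apparent eye spacing gives a rough camera distance. The frontal ratio, inter-pupillary distance, and focal length below are assumed constants, not values from the patent.

```python
# Rough yaw and distance from eye/mouth landmarks (cf. 20110274316).
import math

FRONTAL_RATIO = 1.0    # assumed eye-distance / eye-mid-to-mouth ratio at 0 degrees
EYE_DIST_CM = 6.3      # assumed true inter-pupillary distance
FOCAL_PX = 800.0       # assumed camera focal length in pixels

def head_yaw_and_distance(left_eye, right_eye, mouth):
    ex = (left_eye[0] + right_eye[0]) / 2.0
    ey = (left_eye[1] + right_eye[1]) / 2.0
    d_eyes = math.dist(left_eye, right_eye)
    d_mouth = math.dist((ex, ey), mouth)
    ratio = d_eyes / d_mouth
    # Horizontal eye spacing foreshortens roughly with cos(yaw).
    yaw = math.degrees(math.acos(max(-1.0, min(1.0, ratio / FRONTAL_RATIO))))
    distance_cm = FOCAL_PX * EYE_DIST_CM / d_eyes
    return yaw, distance_cm

print(head_yaw_and_distance((300, 240), (360, 240), (330, 300)))
```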
20110274317 | MATCHING WEIGHT INFORMATION EXTRACTION DEVICE - The matching weight information extraction device includes a matching weight information extraction unit. The matching weight information extraction unit analyzes a change in a time direction of at least either an input video or features of a plurality of dimensions extracted from the video, in association with the dimensions. Further, the matching weight information extraction unit calculates weight information to be used for matching for each of the dimensions as matching weight information, according to a degree of the change in the time direction. | 11-10-2011 |
20110280438 | IMAGE PROCESSING METHOD, INTEGRATED CIRCUIT FOR IMAGE PROCESSING AND IMAGE PROCESSING SYSTEM - An image processing method includes: identifying at least one moving object of a current image according to the current image and at least one image different from the current image; and utilizing a processing circuit to generate an adjusted current image by performing a first image adjustment operation upon the at least one moving object of the current image and performing a second image adjustment operation upon a surrounding region of the at least one moving object of the current image, where the first image adjustment operation is different from the second image adjustment operation. | 11-17-2011 |
20110280439 | TECHNIQUES FOR PERSON DETECTION - Techniques are disclosed that involve the detection of persons. For instance, embodiments may receive, from an image sensor, one or more images (e.g., thermal images, infrared images, visible light images, three dimensional images, etc.) of a detection space. Based at least on the one or more images, embodiments may detect the presence of person(s) in the detection space. Also, embodiments may determine one or more characteristics of such detected person(s). Exemplary characteristics include (but are not limited to) membership in one or more demographic categories and/or activities of such persons. Further, based at least on such person detection and characteristics determining, embodiments may control delivery of content to an output device. | 11-17-2011 |
20110280440 | Method and Apparatus Pertaining to Rendering an Image to Convey Levels of Confidence with Respect to Materials Identification - A control circuit accesses image information regarding an image of a target. This information comprises, at least in part, information regarding material content of the target. The control circuit also accesses confidence information regarding at least one degree of confidence as pertains to the target's material content. The control circuit uses this confidence information to facilitate rendering the image such that the rendered image integrally conveys information both about materials included in the target and a relative degree of confidence that the materials are correctly identified. | 11-17-2011 |
20110280441 | PROJECTOR AND PROJECTION CONTROL METHOD - A method controls a projection of a projector. The method predetermines hand gestures, and assigns an operation function of an input device to each of the predetermined hand gestures. When an electronic file is projected onto a screen, the projector receives an image of a speaker captured by an image-capturing device connected to the projector. The projector identifies whether a hand gesture of the speaker matches one of the predetermined hand gestures. If the hand gesture matches one of the hand gestures, the projector may execute a corresponding assigned operation function. | 11-17-2011 |
20110280442 | OBJECT MONITORING SYSTEM AND METHOD - An object monitoring system and method identify a foreground object from a current frame of a video stream of a monitored area. The object monitoring system determines whether an object has entered or exited the monitored area according to the foreground object, and generates a security alarm. The object monitoring system searches N pieces of reference images just before an image is captured at the time of a generation of the security alarm, and detects information related to the object from the N pieces of reference images. By comparing the related information with vector descriptions of human body models stored in a feature database, a holder or a remover of the object can be recognized. | 11-17-2011 |
20110280443 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - An image processing apparatus includes an identification criterion creating unit that creates an identification criterion so as to enable identification of specific regions in a target image to be processed that is selected in chronological order from among images constituting a set of time-series images; includes a feature data calculating unit that calculates the feature data of each segmented region in the target image to be processed; and includes a specific region identifying unit that, based on the feature data of each segmented region, identifies the specific regions in the target image to be processed by using the identification criterion. Moreover, the identification criterion creating unit creates the identification criterion based on the pieces of feature data of the specific regions identified in the images that have been already processed. | 11-17-2011 |
20110280444 | CAMERA AND CORRESPONDING METHOD FOR SELECTING AN OBJECT TO BE RECORDED - A camera is described having an image capturing device, an evaluation and control unit and a storage unit, the evaluation and control unit analyzes an image sequence having at least two successively captured images recorded by the image capturing device to segment and stabilize at least one object to be recorded during the image recording. The evaluation and control unit ascertains a deliberate panning movement of the camera and compares it with ascertained movements of objects represented in the captured images, the evaluation and control unit determining at least one object as an object to be recorded, the ascertained movement of which is most consistent with the camera's ascertained panning movement, and the evaluation and control unit storing an image section of the image captured by the image capturing device in the storage unit which represents the at least one object to be recorded. Also described is a corresponding method. | 11-17-2011 |
20110280445 | METHOD AND SYSTEM FOR ANALYZING AN IMAGE GENERATED BY AT LEAST ONE CAMERA - A method for analyzing an image of a real object, particularly a printed media object, generated by at least one camera comprises the following steps: generating at least a first image by the camera capturing at least one real object, defining a first search domain comprising multiple data sets of the real object, each of the data sets being indicative of a respective portion of the real object, and analyzing at least one characteristic property of the first image of the camera with respect to the first search domain, in order to determine whether the at least one characteristic property corresponds to information of at least a particular one of the data sets of the first search domain. If it is determined that the at least one characteristic property corresponds to information of at least a particular one of the data sets, a second search domain comprising only the particular one of the data sets is defined and the second search domain is used for analyzing the first image and/or at least a second image generated by the camera. | 11-17-2011 |
20110280446 | Method and Apparatus for Selective Disqualification of Digital Images - An unsatisfactory scene is disqualified as an image acquisition control for a camera. An image is acquired. One or more eye regions are determined. The eye regions are analyzed to determine whether they are blinking, and if so, then the scene is disqualified as a candidate for a processed, permanent image while the eye is completing the blinking. | 11-17-2011 |
20110280447 | METHODS AND SYSTEMS FOR CONTENT PROCESSING - Cell phones and other portable devices are equipped with a variety of technologies by which existing functionality can be improved, and new functionality can be provided. Some relate to visual search capabilities, and determining appropriate actions responsive to different image inputs. Others relate to processing of image data. Still others concern metadata generation, processing, and representation. Yet others relate to coping with fixed focus limitations of cell phone cameras, e.g., in reading digital watermark data. Still others concern user interface improvements. A great number of other features and arrangements are also detailed. | 11-17-2011 |
20110286627 | METHOD AND APPARATUS FOR TRACKING AND RECOGNITION WITH ROTATION INVARIANT FEATURE DESCRIPTORS - Various methods for tracking and recognition with rotation invariant feature descriptors are provided. One example method includes generating an image pyramid of an image frame, detecting a plurality of interest points within the image pyramid, and extracting feature descriptors for each respective interest point. According to some example embodiments, the feature descriptors are rotation invariant. Further, the example method may also include tracking movement by matching the feature descriptors to feature descriptors of a previous frame and performing recognition of an object within the image frame based on the feature descriptors. Related example methods and example apparatuses are also provided. | 11-24-2011 |
20110286628 | SYSTEMS AND METHODS FOR OBJECT RECOGNITION USING A LARGE DATABASE - A method of organizing a set of recognition models of known objects stored in a database of an object recognition system includes determining a classification model for each known object and grouping the classification models into multiple classification model groups. Each classification model group identifies a portion of the database that contains the recognition models of the known objects having classification models that are members of the classification model group. The method also includes computing a representative classification model for each classification model group. Each representative classification model is derived from the classification models that are members of the classification model group. When a target object is to be recognized, the representative classification models are compared to a classification model of the target object to enable selection of a subset of the recognition models of the known objects for comparison to a recognition model of the target object. | 11-24-2011 |
20110286629 | Method for reconstruction of a two-dimensional sectional image corresponding to a sectional plane through a recorded object and x-ray device - A method for reconstruction of a two-dimensional sectional image corresponding to a sectional plane through a recorded object from two-dimensional projection images recorded along a recording trajectory at different projection angles with an X-ray device is proposed. The sectional plane having at least two intersection points with the imaging trajectory is selected. After selection of the sectional plane, an intermediate function on the sectional plane is determined by backprojection of the projection images processed with a differentiation filter. The object densities forming the sectional image are determined from the intermediate function by a two-dimensional iterative deconvolution method. | 11-24-2011 |
20110286630 | Visualization of Medical Image Data With Localized Enhancement - Systems and methods for visualization of medical image data with localized enhancement. In one implementation, image data of a structure of interest is resampled within a predetermined plane to generate at least one background image of the structure of interest. In addition, at least one local image is reconstructed to visually enhance at least one local region of interest associated with the structure of interest. The local image and the background image are then combined to generate a composite image. | 11-24-2011 |
20110286631 | REAL TIME TRACKING/DETECTION OF MULTIPLE TARGETS - A mobile platform detects and tracks at least one target in real-time, by tracking at least one target, and creating an occlusion mask indicating an area in a current image to detect a new target. The mobile platform searches the area of the current image indicated by the occlusion mask to detect the new target. The use of a mask to instruct the detection system where to look for new targets increases the speed of the detection task. Additionally, to achieve real-time operation, the detection and tracking is performed in the limited time budget of the (inter) frame duration. Tracking targets is given higher priority than detecting new targets. After tracking is completed, detection is performed in the remaining time budget for the frame duration. Detection for one frame, thus, may be performed over multiple frames. | 11-24-2011 |
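Entry 20110286631 above describes a time-budgeting strategy: tracking of already-known targets runs first in each frame, and detection of new targets only consumes whatever remains of the inter-frame budget, possibly resuming across several frames. The sketch below shows that control flow only; the tracker and detector objects are placeholders, and the 30 fps budget is an assumption.

```python
# Frame-budgeted tracking-then-detection scheduling (cf. 20110286631).
import time

FRAME_BUDGET_S = 1.0 / 30.0              # assumed 30 fps inter-frame budget

class DummyTracker:                      # placeholder for a real per-target tracker
    def track(self, frame):
        return (0, 0, 10, 10)            # tracked region (x, y, w, h)

class DummyDetector:                     # placeholder detector that can stop early
    def detect_partial(self, frame, exclude, budget):
        deadline = time.perf_counter() + budget
        while time.perf_counter() < deadline:
            pass                         # scan part of the frame, skipping `exclude`

def process_frame(frame, trackers, detector):
    start = time.perf_counter()
    occlusion_mask = [t.track(frame) for t in trackers]   # tracking has priority
    remaining = FRAME_BUDGET_S - (time.perf_counter() - start)
    if remaining > 0:                    # detection only in the leftover time
        detector.detect_partial(frame, exclude=occlusion_mask, budget=remaining)
    return occlusion_mask

process_frame(frame=None, trackers=[DummyTracker()], detector=DummyDetector())
```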
20110286632 | ASSEMBLY COMPRISING A RADAR AND AN IMAGING ELEMENT - An assembly comprising a radar and a camera for both deriving data relating to a golf ball and a golf club at launch, radar data relating to the ball and club being illustrated in an image provided by the camera. The data illustrated may be trajectories of the ball/club/club head, directions and/or angles, such as an angle of a face of the golf club striking the ball, the lie angle of the club head or the like. An assembly of this type may also be used for defining an angle or direction in the image and rotating e.g. an image of the golfer to have the determined direction or angle coincide with a predetermined angle/direction in order to be able to compare different images. | 11-24-2011 |
20110286633 | System And Method For Detecting, Tracking And Counting Human Objects of Interest - A method of identifying, tracking, and counting human objects of interest based upon at least one pair of stereo image frames taken by at least one image capturing device, comprising the steps of: obtaining said stereo image frames and converting each said stereo image frame to a rectified image frame using calibration data obtained for said at least one image capturing device; generating a disparity map based upon a pair of said rectified image frames; generating a depth map based upon said disparity map and said calibration data; identifying the presence or absence of said objects of interest from said depth map and comparing each of said objects of interest to existing tracks comprising previously identified objects of interest; for each said presence of an object of interest, adding said object of interest to one of said existing tracks if said object of interest matches said one existing track, or creating a new track comprising said object of interest if said object of interest does not match any of said existing tracks; updating each said existing track; and maintaining a count of said objects of interest in a given time period based upon said existing tracks created or modified during said given time period. | 11-24-2011 |
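The track bookkeeping in entry 20110286633 can be summarized briefly: each object of interest found in the current frame is matched to the nearest existing track within a gating distance, otherwise a new track is created, and counts are taken over the tracks active in a period. The sketch below shows that matching step only; the gating distance and the dictionary-based track records are assumptions.

```python
# Nearest-track matching for counting objects of interest (cf. 20110286633).
import math

def update_tracks(tracks, detections, frame_idx, gate=50.0):
    """tracks: list of dicts with 'pos' and 'last_seen'; detections: [(x, y), ...]."""
    for det in detections:
        best, best_d = None, gate
        for tr in tracks:
            d = math.dist(det, tr["pos"])
            if d < best_d:
                best, best_d = tr, d
        if best is not None:                       # matched an existing track
            best["pos"], best["last_seen"] = det, frame_idx
        else:                                      # start a new track
            tracks.append({"pos": det, "last_seen": frame_idx, "start": frame_idx})
    return tracks

tracks = []
update_tracks(tracks, [(100, 80)], frame_idx=0)
update_tracks(tracks, [(105, 82), (300, 200)], frame_idx=1)
print(len(tracks))   # -> 2 tracks in this small example
```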
20110293136 | System and Method for Adapting Generic Classifiers for Object Detection in Particular Scenes Using Incremental Training - A generic classifier is adapted to detect an object in a particular scene, wherein the particular scene was unknown when the classifier was trained with generic training data. A camera acquires a video of frames of the particular scene. A model of the particular scene is constructed using the frames in the video. The classifier is applied to the model to select negative examples, and new negative examples are added to the training data while removing another set of existing negative examples from the training data based on an uncertainty measure. Selected positive examples are also added to the training data and the classifier is retrained until a desired accuracy level is reached to obtain a scene specific classifier. | 12-01-2011 |
20110293137 | ANALYSIS OF THREE-DIMENSIONAL SCENES - A method for processing data includes receiving a depth map of a scene containing a humanoid form. The depth map is processed so as to identify three-dimensional (3D) connected components in the scene, each connected component including a set of the pixels that are mutually adjacent and have mutually-adjacent depth values. Separate, first and second connected components are identified as both belonging to the humanoid form, and a representation of the humanoid form is generated including both of the first and second connected components. | 12-01-2011 |
20110293138 | DETECTION APPARATUS AND OBSTACLE DETECTION SYSTEM FOR VEHICLES USING THE SAME - A detection apparatus includes a housing, a circuit board, an image detection module, an ultrasonic detection module, and a connecting terminal. The image detection module includes a barrel, one or more lenses received in the barrel, and an image sensor configured to receive light through the lens and generate image signals. The image sensor is electrically connected to the circuit board. The ultrasonic detection module includes a piezoelectric member fixed to the housing to emit ultrasonic waves and receive reflected ultrasonic waves, and an ultrasonic control module operable to apply a voltage on the piezoelectric member, receive alternating voltages generated by the piezoelectric member, and generate voltage signals when receiving the voltages from the piezoelectric member. The ultrasonic control module is electrically connected to the piezoelectric member and the circuit board. The connecting terminal is electrically connected to the circuit board to output the image signals and the voltage signals. | 12-01-2011 |
20110293139 | METHOD OF AUTOMATICALLY TRACKING AND PHOTOGRAPHING CELESTIAL OBJECTS AND PHOTOGRAPHIC APPARATUS EMPLOYING THIS METHOD - A method of automatically tracking and photographing a celestial object, includes inputting latitude information, photographing azimuth angle information and photographing elevation angle information of a photographic apparatus; inputting star map data of a certain range including data on a location of a celestial object from the latitude information, the photographing azimuth angle information and the photographing elevation angle information; calculating a deviation amount between a location of the celestial object that is imaged in a preliminary image obtained by the photographic apparatus and the location of the celestial object which is defined in the input star map data; correcting at least one of the photographing azimuth angle information and the photographing elevation angle information using the deviation amount; and performing a celestial-object auto-tracking photographing operation based on the corrected at least one of the photographing azimuth angle information and the photographing elevation angle information. | 12-01-2011 |
20110293140 | Dataset Creation For Tracking Targets With Dynamically Changing Portions - A mobile platform visually detects and/or tracks a target that includes a dynamically changing portion, or otherwise undesirable portion, using a feature dataset for the target that excludes the undesirable portion. The feature dataset is created by providing an image of the target and identifying the undesirable portion of the target. The identification of the undesirable portion may be automatic or by user selection. An image mask is generated for the undesirable portion. The image mask is used to exclude the undesirable portion in the creation of the feature dataset for the target. For example, the image mask may be overlaid on the image and features are extracted only from unmasked areas of the image of the target. Alternatively, features may be extracted from all areas of the image and the image mask used to remove features extracted from the undesirable portion. | 12-01-2011 |
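Entry 20110293140 above builds a feature dataset while excluding a dynamically changing portion of the target via an image mask. A possible reading of that step is sketched below using OpenCV's ORB as a stand-in feature extractor: the mask zeroes out the excluded region so keypoints and descriptors come only from the stable area. The rectangle marking the excluded region and the feature count are assumptions.

```python
# Masked feature extraction for a target with a dynamic portion (cf. 20110293140).
import cv2
import numpy as np

def build_feature_dataset(target_img_gray, excluded_rect):
    """excluded_rect: (x, y, w, h) of the dynamic/undesirable portion."""
    mask = np.full(target_img_gray.shape, 255, dtype=np.uint8)
    x, y, w, h = excluded_rect
    mask[y:y + h, x:x + w] = 0                   # exclude the changing region
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(target_img_gray, mask)
    return keypoints, descriptors

img = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
kps, desc = build_feature_dataset(img, excluded_rect=(100, 60, 80, 40))
print(len(kps))                                  # features only from unmasked area
```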
20110293141 | DETECTION OF VEHICLES IN AN IMAGE - The invention concerns a traffic surveillance system that is used to detect and track vehicles in video taken of a road from a low mounted camera. The inventors have discovered that even in heavily occluded scenes, due to traffic density or the angle of low mounted cameras capturing the images, at least one horizontal edge of the windshield is least likely to be occluded for each individual vehicle in the image. Thus, it is an advantage of the invention that the direct detection of a windshield on its own can be used to detect a vehicle in a single image. Multiple models are projected ( | 12-01-2011 |
20110293142 | METHOD FOR RECOGNIZING OBJECTS IN A SET OF IMAGES RECORDED BY ONE OR MORE CAMERAS - Method for improving the visibility of objects and recognizing objects in a set of images recorded by one or more cameras, the images of said set of images being made from mutual different geometric positions, the method comprising the steps of recording a set or subset of images by means of one camera which is moved rather freely and which makes said images during its movement, thus providing an array of subsequent images, estimating the camera movement between subsequent image recordings, also called ego-motion hereinafter, based on features of those recorded images, registering the camera images using a synthetic aperture method, recognizing said objects. | 12-01-2011 |
20110293143 | FUNCTIONAL IMAGING - A method includes generating a kinetic parameter value for a VOI in a functional image of a subject based on motion corrected projection data using an iterative algorithm, including determining a motion correction for projection data corresponding to the VOI based on the VOI, motion correcting the projection data corresponding to the VOI to generate the motion corrected projection data, and estimating the at least one kinetic parameter value based on the motion corrected projection data or image data generated with the motion corrected projection data. In another embodiment, a method includes registering functional image data indicative of tracer uptake in a scanned patient with image data from a different imaging modality, identifying a VOI in the image based on the registered images, generating at least one kinetic parameter for the VOI, and generating a feature vector including the at least one generated kinetic parameter and at least one bio-marker. | 12-01-2011 |
20110293144 | Method and System for Rendering an Entertainment Animation - Systems and methods for rendering an entertainment animation. The system can comprise a user input unit for receiving a non-binary user input signal; an auxiliary signal source for generating an auxiliary signal; a classification unit for classifying the non-binary user input signal with reference to the auxiliary signal; and a rendering unit for rendering the entertainment animation based on classification results from the classification unit. | 12-01-2011 |
20110293145 | DRIVING SUPPORT DEVICE, DRIVING SUPPORT METHOD, AND PROGRAM - Provided are a driving support device, a driving support method, and a program, in which the driver can more intuitively and accurately determine the distance to another vehicle in the side rear. A driving support device ( | 12-01-2011 |
20110299727 | Specific Absorption Rate Measurement and Energy-Delivery Device Characterization Using Thermal Phantom and Image Analysis - A system for use in characterizing an energy applicator includes a test fixture assembly. The test fixture assembly includes an interior area defined therein. The system also includes a thermally-sensitive medium disposed in the interior area of the test fixture assembly. The thermally-sensitive medium includes a cut-out portion defining a void in the thermally-sensitive medium. The cut-out portion is configured to receive at least a portion of the energy applicator therein. | 12-08-2011 |
20110299728 | AUTOMATIC DEPTH CAMERA AIMING - Automatic depth camera aiming is provided by a method which includes receiving from the depth camera one or more observed depth images of a scene. The method further includes, if a point of interest of a target is found within the scene, determining if the point of interest is within a far range relative to the depth camera. The method further includes, if the point of interest of the target is within the far range, operating the depth camera with a far logic, or if the point of interest of the target is not within the far range, operating the depth camera with a near logic. | 12-08-2011 |
20110299729 | APPARATUS AND METHOD FOR MEASURING GOLF CLUB SHAFT FLEX AND GOLF SIMULATION SYSTEM INCORPORATING THE SAME - A method for measuring shaft flex comprises capturing at least one image of a shaft during movement of the shaft through a swing plane and examining the at least one image to determine the flex of the shaft. | 12-08-2011 |
20110299730 | VEHICLE LOCALIZATION IN OPEN-PIT MINING USING GPS AND MONOCULAR CAMERA - Described herein is a method and system for vehicle localization in an open pit mining environment having intermittent or incomplete GPS coverage. The system comprises GPS receivers associated with the vehicles and providing GPS measurements when available, as well as one or more cameras | 12-08-2011 |
20110299731 | INFORMATION PROCESSING DEVICE AND METHOD, AND PROGRAM - An information processing device includes a first calculation unit which calculates a score of each sample image including a positive image in which an object as an identification object is present and a negative image in which the object as the identification object is not present, for each weak identifier of an identifier including a plurality of weak identifiers, a second calculation unit which calculates the number of scores when the negative image is processed, which are scores less than a minimum score among scores when the positive image is processed; and a realignment unit which realigns the weak identifiers in order from a weak identifier in which the number calculated by the second calculation unit is a maximum. | 12-08-2011 |
20110299732 | SYSTEM OF DRONES PROVIDED WITH RECOGNITION BEACONS - The present invention relates to a system ( | 12-08-2011 |
20110299733 | SYSTEM AND METHOD FOR PROCESSING RADAR IMAGERY - The present invention relates to a system and method for processing imagery, such as may be derived from a coherent imaging system e.g. a synthetic aperture radar (SAR). The system processes sequences of SAR images of a region taken in at least two different passes and generates Coherent Change Detection (CCD) base images from corresponding images of each pass. A reference image is formed from one or more of the CCD base images, and an incoherent change detection image is formed by comparison between a given CCD base image and the reference image. The technique is able to detect targets from tracks left in soft ground, or from shadow areas caused by vehicles, and so does not rely on a reflection directly from the target itself. The technique may be implemented on data recorded in real time, or may be done in post-processing on a suitable computer system. | 12-08-2011 |
20110299734 | METHOD AND SYSTEM FOR DETECTING TARGET OBJECTS - With a method and a system for detecting target objects, which are detected by a sensor device, for example, by radar, laser or passive reception of electromagnetic waves, through an imaging electro-optical sensor with subsequent digital image evaluation, it is proposed, for a rapid allocation of the image sensor with changeable direction that takes into account the different importance of the individual target objects, to predefine in an assessment device different assessment criteria for a target parameter of the respective target objects and to derive therefrom a prioritization value for each individual target. Based on the prioritization values a ranking is compiled of the target objects for detection by the image sensor, and the target objects are successively detected by the image sensor in the order given by the ranking and evaluated, in particular classified, in an image evaluation device. | 12-08-2011 |
20110299735 | METHOD OF USING STRUCTURAL MODELS FOR OPTICAL RECOGNITION - A method and system for recognizing all varieties of objects in an image by using structure models are disclosed. Structural elements are sought when comparing a structural model with an image but only within a framework of one or more generated hypotheses. The method for identifying objects includes preliminarily creating a structural model of objects by specifying a plurality of basic geometric structural elements corresponding to one or more portions of the object, recording a spatial characteristic of each identified basic geometric structural element, and recording a relational characteristic for each specified basic geometric structural element. Objects in the image are isolated and a list of hypotheses for each object is provided. Hypotheses are tested by determining if the corresponding group of basic geometric structural elements corresponds to another supposed object described in a classifier. Results of testing of hypotheses may be saved and the results may be used to identify objects. | 12-08-2011 |
20110305366 | Adaptive Action Detection - Described is providing an action model (classifier) for automatically detecting actions in video clips, in which unlabeled data of a target dataset is used to adaptively train the action model based upon similar actions in a labeled source dataset. The target dataset comprising unlabeled video data is processed into a background model. The action model is generated from the background model using a source dataset comprising labeled data for an action of interest. The action model is iteratively refined, generally by fixing a current instance of the action model and using the current instance of the action model to search for a set of detected regions (subvolumes), and then fixing the set of subvolumes and updating the current instance of the action model based upon the set of subvolumes, and so on, for a plurality of iterations. | 12-15-2011 |
20110305367 | STORAGE MEDIUM HAVING IMAGE RECOGNITION PROGRAM STORED THEREIN, IMAGE RECOGNITION APPARATUS, IMAGE RECOGNITION SYSTEM, AND IMAGE RECOGNITION METHOD - A game apparatus obtains a captured image captured by a camera. First, the game apparatus detects an object area of the captured image that includes a predetermined image object based on pixel values obtained at a first pitch across the captured image. Then, the game apparatus detects a predetermined image object from an image of the object area based on pixel values obtained at a second pitch smaller than the first pitch across the object area of the captured image. | 12-15-2011 |
20110305368 | STORAGE MEDIUM HAVING IMAGE RECOGNITION PROGRAM STORED THEREIN, IMAGE RECOGNITION APPARATUS, IMAGE RECOGNITION SYSTEM, AND IMAGE RECOGNITION METHOD - A game apparatus detects a predetermined image object including a first graphic pattern with a plurality of inner graphic patterns drawn therein from a captured image captured by an image-capturing section. The game apparatus first obtains the captured image captured by the image-capturing section, and detects an area of the first graphic pattern from the captured image. Then, the game apparatus detects the plurality of inner graphic patterns from within the detected area, and calculates center positions of the inner graphic patterns so as to detect the position of the predetermined image object. | 12-15-2011 |
20110305369 | PORTABLE WIRELESS MOBILE DEVICE MOTION CAPTURE AND ANALYSIS SYSTEM AND METHOD - Portable wireless mobile device motion capture and analysis system and method configured to display motion capture/analysis data on a mobile device. System obtains data from motion capture elements and analyzes the data. Enables unique displays associated with the user, such as 3D overlays onto images of the user to visually depict the captured motion data. Ratings associated with the captured motion can also be displayed. Predicted ball flight path data can be calculated and displayed. Data shown on a time line can also be displayed to show the relative peaks of velocity for various parts of the user's body. Based on the display of data, the user can determine the equipment that fits best and immediately purchase the equipment via the mobile device. Custom equipment may be ordered through an interface on the mobile device from a vendor that can assemble custom-built equipment to order and ship it. Includes active and passive golf shot count capabilities. | 12-15-2011 |
20110311099 | METHOD OF EVALUATING THE HORIZONTAL SPEED OF A DRONE, IN PARTICULAR A DRONE CAPABLE OF PERFORMING HOVERING FLIGHT UNDER AUTOPILOT - The method operates by estimating the differential movement of the scene picked up by a vertically-oriented camera. Estimation includes periodically and continuously updating a multiresolution representation of the pyramid-of-images type, modeling a given picked-up image of the scene at different, successively-decreasing resolutions. For each new picked-up image, an iterative algorithm of the optical-flow type is applied to said representation. The method also responds to the data produced by the optical-flow algorithm to obtain at least one texturing parameter representative of the level of microcontrasts in the picked-up scene and an approximation of the speed; a battery of predetermined criteria is subsequently applied to these parameters. If the battery of criteria is satisfied, then the system switches from the optical-flow algorithm to an algorithm of the corner-detector type. | 12-22-2011 |
20110311100 | Method, Apparatus and Computer Program Product for Providing Object Tracking Using Template Switching and Feature Adaptation - A method, apparatus and computer program product are provided that may enable devices to provide improved object tracking, such as in connection with computer vision, multimedia content analysis and retrieval, augmented reality, human computer interaction and region-based image processing. In this regard, a method includes adjusting parameters of a portion of an input frame having a target object to match a template size and then performing feature-based image registration between the portion of the input frame and an active template and at least one selected inactive template. The method may also enable switching the selected inactive template to be an active template for a subsequent frame based at least on a matching score between the portion of the input frame and the selected inactive template and determine a position of a target object in the input frame based on one of the active template or the selected inactive template. | 12-22-2011 |
20110311101 | METHOD AND SYSTEM TO SEGMENT DEPTH IMAGES AND TO DETECT SHAPES IN THREE-DIMENSIONALLY ACQUIRED DATA - A method and system analyze data acquired by image systems to more rapidly identify objects of interest in the data. In one embodiment, z-depth data are segmented such that neighboring image pixels having similar z-depths are given a common label. Blobs, or groups of pixels with a same label, may be defined to correspond to different objects. Blobs preferably are modeled as primitives to more rapidly identify objects in the acquired image. In some embodiments, a modified connected component analysis is carried out where image pixels are pre-grouped into regions of different depth values, preferably using a depth-value histogram. The histogram is divided into regions and image cluster centers are determined. A depth group value image containing blobs is obtained, with each pixel being assigned to one of the depth groups. | 12-22-2011 |
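A minimal sketch of the depth-histogram pre-grouping described in the entry above: pixels are binned by z-depth, and connected components are labeled within each depth group to obtain blobs. The bin count, the use of scipy.ndimage.label, and the blob summary fields are assumptions made for illustration.

```python
# Depth-group image plus per-group connected components (illustrative sketch).
import numpy as np
from scipy import ndimage

def depth_group_image(depth: np.ndarray, n_groups: int = 8) -> np.ndarray:
    """Assign every pixel to one of `n_groups` depth bins (cluster centers)."""
    hist, edges = np.histogram(depth, bins=n_groups)
    return np.digitize(depth, edges[1:-1])   # group index per pixel

def blobs_per_group(groups: np.ndarray):
    """Run connected components inside each depth group to get labeled blobs."""
    blobs = []
    for g in np.unique(groups):
        labeled, count = ndimage.label(groups == g)
        for i in range(1, count + 1):
            ys, xs = np.where(labeled == i)
            blobs.append({"group": int(g), "size": ys.size,
                          "bbox": (ys.min(), xs.min(), ys.max(), xs.max())})
    return blobs

if __name__ == "__main__":
    z = np.full((120, 160), 3.0)
    z[40:80, 60:100] = 1.2                 # a near object in front of far background
    groups = depth_group_image(z, n_groups=4)
    print(sorted(blobs_per_group(groups), key=lambda b: -b["size"])[:2])
```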
20110317871 | SKELETAL JOINT RECOGNITION AND TRACKING SYSTEM - A system and method are disclosed for recognizing and tracking a user's skeletal joints with a NUI system and further, for recognizing and tracking only some skeletal joints, such as for example a user's upper body. The system may include a limb identification engine which may use various methods to evaluate, identify and track positions of body parts of one or more users in a scene. In examples, further processing efficiency may be achieved by segmenting the field of view in smaller zones, and focusing on one zone at a time. Moreover, each zone may have its own set of predefined gestures which are recognized. | 12-29-2011 |
20110317872 | Low Threshold Face Recognition - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, are disclosed for reducing the impact of lighting conditions and biometric distortions, while providing a low-computation solution for reasonably effective (low threshold) face recognition. In one aspect, the methods include processing a captured image of a face of a user seeking to access a resource by conforming a subset of the captured face image to a reference model. The reference model corresponds to a high information portion of human faces. The methods further include comparing the processed captured image to at least one target profile corresponding to a user associated with the resource, and selectively recognizing the user seeking access to the resource based on a result of said comparing. | 12-29-2011 |
20110317873 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 12-29-2011 |
20110317874 | Information Processing Device And Information Processing Method - An image acquisition unit of an information processing device acquires data for a moving image including an image of a user and captured by an image capturing device. A tracking processing unit uses a particle filter to perform visual tracking in the moving image so as to estimate a head contour of the user. A gesture detection unit identifies a facial region in an area inside the head contour, acquires a parameter indicating the orientation of the face, and keeps a history of the parameters. When the time-dependent change in the orientation of the face meets a predetermined criterion, it is determined that a gesture has been made. The output data generation unit generates output data dependent on a result of detecting a gesture. The output control unit controls the generated output data so as to display the data on the display, for example. | 12-29-2011 |
20110317875 | Identifying and Redressing Shadows in Connection with Digital Watermarking and Fingerprinting - The present disclosure relates generally to cell phones and cameras, and to shadow detection in images captured by such cell phones and cameras. One claim recites a method comprising: identifying a shadow cast by a camera on a subject being imaged; and using a programmed electronic processor, redressing the shadow in connection with: i) reading a digital watermark from imagery captured of the subject, or ii) calculating a fingerprint from the imagery captured of the subject. Another claim recites a method comprising: identifying a shadow cast by a cell phone on a subject being imaged by a camera included in the cell phone; and using a programmed electronic processor, determining a proximity of the camera to the subject based on an analysis of the shadow. Of course, other claims and combinations are provided too. | 12-29-2011 |
20110317876 | Optical Control System for Heliostats - A method of aligning a reflector with a target includes receiving, at a first reflector, light from a light source. The first reflector is configured to reflect light from the light source onto a target, illuminating the target in a first target region. A first image of the target is captured, using an imaging device. The first reflector is configured to reflect light from the light source onto the target, illuminating the target in a second target region. A second image of the target is captured, using the imaging device. The differences between the first image and the second image are compared to determine the alignment of the first reflector with respect to at least one of the light source and the target. | 12-29-2011 |
20110317877 | METHOD OF MOTION DETECTION AND AUTONOMOUS MOTION TRACKING USING DYNAMIC SENSITIVITY MASKS IN A PAN-TILT CAMERA - A method of identifying motion within a field of view includes capturing at least two sequential images within the field of view. Each of the images includes a respective array of pixel values. An array of difference values between corresponding ones of the pixel values in the sequential images is calculated. A sensitivity region map corresponding to the field of view is provided. The sensitivity region map includes a plurality of regions having different threshold values. A presence of motion is determined by comparing the difference values to corresponding ones of the threshold values. | 12-29-2011 |
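The per-region thresholding in the entry above reduces to comparing a frame difference against a spatially varying sensitivity map. A minimal numpy sketch follows; the array shapes and threshold values are illustrative assumptions.

```python
# Frame differencing against a per-region sensitivity map (illustrative sketch).
import numpy as np

def motion_mask(frame_a: np.ndarray, frame_b: np.ndarray,
                sensitivity_map: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels whose change exceeds the local threshold."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return diff >= sensitivity_map

if __name__ == "__main__":
    h, w = 120, 160
    a = np.zeros((h, w), dtype=np.uint8)
    b = a.copy()
    b[10:20, 10:20] = 30                   # small change in the left half
    b[10:20, 100:110] = 30                 # same change in the right half
    sens = np.full((h, w), 50, dtype=np.int16)
    sens[:, w // 2:] = 20                  # right half of the field of view is more sensitive
    mask = motion_mask(a, b, sens)
    print("motion detected:", bool(mask.any()),
          "changed pixels:", int(mask.sum()))
```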
20120002840 | METHOD OF AND ARRANGEMENT FOR LINKING IMAGE COORDINATES TO COORDINATES OF REFERENCE MODEL - A method of linking image coordinates to coordinates in a reference model is disclosed. The method includes acquiring a 2½D or 3D input image representing a body of a living being and including at least two image boundaries of at least two parts within said body, acquiring a 3D reference model representative of a reference living being describing in a reference model coordinate system at least two reference boundaries of the at least two parts within said body, and overlaying the reference model and the input image. The method further includes adjusting at least a portion of one of the reference boundaries and/or at least one of the image boundaries such that this reference boundary and this image boundary substantially coincide, while the adjusted reference boundary does not intersect with the remaining reference boundaries and/or the adjusted image boundary does not intersect with the remaining image boundaries. | 01-05-2012 |
20120002841 | INFORMATION PROCESSING APPARATUS, THREE-DIMENSIONAL POSITION CALCULATION METHOD, AND PROGRAM - An information processing apparatus includes a region segmentation unit configured to segment each of a plurality of images shot by an imaging apparatus for shooting an object from a plurality of viewpoints, into a plurality of regions based on colors of the object, an attribute determination unit configured to determine, based on regions in proximity to intersections between scanning lines set on the each image and boundary lines of the regions segmented by the region segmentation unit in the each image, attributes of the intersections, a correspondence processing unit configured to obtain corresponding points between the images based on the determined intersections' attributes, and a three-dimensional position calculation unit configured to calculate a three-dimensional position of the object based on the obtained corresponding points. | 01-05-2012 |
20120002842 | DEVICE AND METHOD FOR DETECTING MOVEMENT OF OBJECT - A device for detecting a movement of an object includes: an image shooting unit to generate a first image and a second image by continuous shooting; a detection unit to detect a movement region based on a difference between the first and second images; an edge detection unit to detect an edge region in the first image; a deletion unit to delete the edge region from the movement region; and a decision unit to determine a degree of object movement in accordance with the movement region from which a part has been deleted by the deletion unit. | 01-05-2012 |
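A minimal sketch of the "movement region minus edge region" idea from the entry above, using OpenCV's Sobel gradient as the edge detector. The difference threshold, edge threshold, and the final scoring as a pixel fraction are illustrative choices, not taken from the application.

```python
# Frame difference with edge pixels removed before scoring (illustrative sketch).
import cv2
import numpy as np

def object_motion_score(img1: np.ndarray, img2: np.ndarray,
                        diff_thresh: int = 25, edge_thresh: float = 100.0) -> float:
    """Fraction of pixels that changed, excluding pixels on strong edges of img1."""
    diff = cv2.absdiff(img1, img2)
    moving = diff > diff_thresh
    gx = cv2.Sobel(img1, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img1, cv2.CV_32F, 0, 1)
    edges = cv2.magnitude(gx, gy) > edge_thresh
    moving &= ~edges                      # delete the edge region from the movement region
    return float(moving.mean())

if __name__ == "__main__":
    a = np.zeros((100, 100), dtype=np.uint8)
    b = a.copy()
    b[40:60, 40:60] = 255                 # an object appears between the two shots
    print("motion score:", object_motion_score(a, b))
```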
20120002843 | DROWSINESS ASSESSMENT DEVICE AND PROGRAM - Local maxima and local minima values are derived from eyelid openness time series data in a segment in which a continuous closed-eye period of extracted blinks lasts a specific time duration (for example 1 second) or longer. When plural local minima values are present in such a segment, blinks are extracted as the eyelid openness passes over and back through each value of a variable closed-eye threshold that is slid in set steps towards a low value, from the derived local maxima value towards the local minima value, and an inter-blink interval is derived. A blink burst is determined to have occurred when the derived inter-blink interval is 1 second or less (and, say, greater than 0.2 seconds). Blink bursts can thereby be detected with good precision, and the state of drowsiness can be assessed with good precision. | 01-05-2012 |
20120008825 | SYSTEM AND METHOD FOR DYNAMICALLY TRACKING AND INDICATING A PATH OF AN OBJECT - A system for dynamically tracking and indicating a path of an object comprises an object position system for generating three-dimensional object position data comprising an object trajectory, a software element for receiving the three-dimensional object position data, the software element also for determining whether the three-dimensional object position data indicates that an object has exceeded a boundary, and a graphics system for displaying the object trajectory. | 01-12-2012 |
20120008826 | METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR DETECTING OBJECTS IN DIGITAL IMAGES - Method, device, and computer program product for detecting an object in a digital image are provided. The method includes providing a detection window and determining at least one area of the object in the digital image by traversing the detection window by a first step size onto a set of pixels. Further, at each pixel, presence of at least one portion of the object in the detection window is detected. Upon detection of the presence of the object, the detection window is shifted by a second step size to neighbouring pixels. Further, the detection window is selected as an area of the object if the at least one portion of the object is present in at least a threshold number of detection windows at the neighbouring pixels. Thereafter, an object area representing the object in the digital image is selected based on the at least one area. | 01-12-2012 |
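The entry above describes a coarse scan at one step size followed by a vote among neighbouring windows at a smaller step size. The sketch below illustrates that control flow with a pluggable per-window detector; the step sizes, vote threshold, and the toy brightness detector are assumptions, not values from the application.

```python
# Two-step-size window scan with a neighbourhood vote threshold (illustrative sketch).
import numpy as np

def scan_for_object(img, window, detect, step1=16, step2=4, min_votes=5):
    """Return windows (y, x, h, w) confirmed by enough neighbouring detections."""
    wh, ww = window
    h, w = img.shape[:2]
    hits = []
    for y in range(0, h - wh + 1, step1):          # coarse traversal at the first step size
        for x in range(0, w - ww + 1, step1):
            if not detect(img[y:y + wh, x:x + ww]):
                continue
            votes = 0                               # fine check around the hit at the second step size
            for dy in range(-step2, step2 + 1, step2):
                for dx in range(-step2, step2 + 1, step2):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - wh and 0 <= xx <= w - ww and \
                       detect(img[yy:yy + wh, xx:xx + ww]):
                        votes += 1
            if votes >= min_votes:
                hits.append((y, x, wh, ww))
    return hits

if __name__ == "__main__":
    frame = np.zeros((128, 128), dtype=np.uint8)
    frame[32:64, 32:64] = 255
    bright = lambda patch: patch.mean() > 100       # toy "portion of object" detector
    print(scan_for_object(frame, (32, 32), bright))
```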
20120008827 | METHOD AND DEVICE FOR IDENTIFYING OBJECTS AND FOR TRACKING OBJECTS IN A PRODUCTION PROCESS - An object ( | 01-12-2012 |
20120008828 | TARGET-LINKED RADIATION IMAGING SYSTEM - An imaging detection system includes at least one location detection device configured to determine coordinates of a target, at least one detector configured to detect events from a source associated with the target, and a processor coupled in communication with the at least one location detection device and the at least one detector. The processor is configured to receive the coordinates from the at least one location detection device and the events from the at least one detector, translate the events using the coordinates acquired from the at least one location detection device to compensate for a relative motion between the source and the at least one detector, and output a processed data set having the events translated based on the coordinates. | 01-12-2012 |
20120008829 | METHOD, DEVICE, AND COMPUTER-READABLE MEDIUM FOR DETECTING OBJECT IN DISPLAY AREA - Disclosed are a method and a device for detecting an object in a display area. The method comprises a step of generating a first image prepared to be displayed; a step of displaying the generated first image on a screen; a step of capturing a second image of the screen including the display area; and a step of comparing the generated first image with the captured second image so as to detect the object in the display area. | 01-12-2012 |
20120008830 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREFOR, AND COMPUTER-READABLE STORAGE MEDIUM - An information processing apparatus for estimating a position and orientation of a target object in a three-dimensional space, inputs a plurality of captured images obtained by imaging the target object from a plurality of viewpoints, clips, for each of the input captured images, a partial image corresponding to a region occupied by a predetermined partial space in the three-dimensional space, from the captured image, extracts, from a plurality of partial images clipped from the plurality of captured images, feature information indicating a feature of the plurality of partial images, stores dictionary information indicating a position and orientation of an object in association with feature information of the object corresponding to the position and orientation, and estimates the position and orientation of the target object by comparing the feature information of the extracted target object and the feature information indicated in the dictionary information. | 01-12-2012 |
20120008831 | OBJECT POSITION CORRECTION APPARATUS, OBJECT POSITION CORRECTION METHOD, AND OBJECT POSITION CORRECTION PROGRAM - An object position correction apparatus is provided with an observing device that detects an object to be observed to obtain an observed value, an observation history data base that records an observation history of the object, a position estimation history data base that records the estimated history of the position of the object, a prediction distribution forming unit that forms a prediction distribution that represents an existence probability at the position of the object, an object position estimation unit that estimates the ID and the position of the object, a center-of-gravity position calculation unit that calculates the center-of-gravity position of the observed values, an object position correction unit that carries out a correction on the estimated position of the object, and a display unit that displays the corrected position of the object. | 01-12-2012 |
20120008832 | REGION-OF-INTEREST VIDEO QUALITY ENHANCEMENT FOR OBJECT RECOGNITION - A video-based object recognition system and method provides selective, local enhancement of image data for improved object-based recognition. A frame of video data is analyzed to detect objects to receive further analysis, these local portions of the frame being referred to as a region of interest (ROI). A video quality metric (VQM) value is calculated locally for each ROI to assess the quality of the ROI. Based on the VQM value calculated with respect to the ROI, a particular video quality enhancement (VQE) function is selected and applied to the ROI to cure deficiencies in the quality of the ROI. Based on the enhanced ROI, objects within the defined region can be accurately identified. | 01-12-2012 |
20120014558 | POSITION-DEPENDENT GAMING, 3-D CONTROLLER, AND HANDHELD AS A REMOTE - Methods and systems for using a position of a mobile device with an integrated display as an input to a video game or other presentation are presented. Embodiments include rendering an avatar on a mobile device such that it appears to overlay a competing user in the real world. Using the mobile device's position, view direction, and the other user's mobile device position, an avatar (or vehicle, etc.) is depicted at an apparently inertially stabilized location of the other user's mobile device or body. Some embodiments may estimate the other user's head and body positions and angles and reflect them in the avatar's gestures. | 01-19-2012 |
20120014559 | Method and System for Semantics Driven Image Registration - A method and system for automatic semantics driven registration of medical images is disclosed. Anatomic landmarks and organs are detected in a first image and a second image. Pathologies are also detected in the first image and the second image. Semantic information is automatically extracted from text-based documents associated with the first and second images, and the second image is registered to the first image based the detected anatomic landmarks, organs, and pathologies, and the extracted semantic information. | 01-19-2012 |
20120014560 | METHOD FOR AUTOMATIC STORYTELLING FOR PHOTO ALBUMS USING SOCIAL NETWORK CONTEXT - A method for automatically selecting and organizing a subset of photos from a set of photos provided by a user, who has an account on at least one social network providing some context, for creating a summarized photo album with a storytelling structure. The method comprises: arranging the set of photos into a three-level hierarchy of acts, scenes and shots; checking whether photos contain people or not; obtaining an aesthetic measure of the photos; creating and ranking face clusters; selecting the most aesthetic photo of each face cluster; and selecting photos with people, until a predefined number of photos of the summarized album is reached, picking the ones which optimize the function: | 01-19-2012 |
20120014561 | IMAGE TAKING APPARATUS AND IMAGE TAKING METHOD - An image taking apparatus according to an aspect of the invention comprises: an image pickup device which picks up an object image and outputs the picked-up image data; a face detection device which detects human faces in the image data; a face-distance calculating device which calculates the distance between the faces among a plurality of faces detected by the face detection device; and a controlling device which controls the image pickup device to start shooting, after a shooting instruction is issued, in the case where the distance between the faces calculated by the face-distance calculating device is not greater than a first predetermined threshold value. The image taking apparatus allows shooting the moment the distance between the faces becomes no greater than the predetermined threshold value. | 01-19-2012 |
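The shutter rule in the entry above amounts to checking whether any pair of detected faces is closer than a threshold. A minimal sketch follows, assuming face detection has already produced (x, y, w, h) boxes; normalising the centre-to-centre distance by face width is an illustrative choice, not something stated in the abstract.

```python
# Distance-triggered shutter decision over detected face boxes (illustrative sketch).
import itertools
import math

def should_shoot(face_boxes, max_distance_ratio=1.5):
    """Trigger when the closest pair of faces is within the distance threshold.

    The centre-to-centre distance is expressed in units of the mean face width,
    so the rule adapts to how far the subjects stand from the camera.
    """
    if len(face_boxes) < 2:
        return False
    for (x1, y1, w1, h1), (x2, y2, w2, h2) in itertools.combinations(face_boxes, 2):
        c1 = (x1 + w1 / 2, y1 + h1 / 2)
        c2 = (x2 + w2 / 2, y2 + h2 / 2)
        dist = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
        if dist <= max_distance_ratio * (w1 + w2) / 2:
            return True
    return False

if __name__ == "__main__":
    print(should_shoot([(100, 80, 60, 60), (180, 85, 60, 60)]))   # close -> True
    print(should_shoot([(100, 80, 60, 60), (400, 85, 60, 60)]))   # far   -> False
```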
20120014562 | EFFICIENT METHOD FOR TRACKING PEOPLE - In accordance with one embodiment, a method to track persons includes generating a first and second set of facial coefficient vectors by: (i) providing a first and second image containing a plurality of persons; (ii) locating faces of persons in each image; and (iii) generating a facial coefficient vector for each face by extracting from the images coefficients sufficient to locally identify each face, then tracking the persons within the images, the tracking including comparing the first set of facial coefficient vectors to the second set of facial coefficient vectors to determine for each person in the first image if there is a corresponding person in the second image. Optionally, the method includes using estimated locations in combination with the vector distance between facial coefficient vectors to track persons. | 01-19-2012 |
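A minimal sketch of the tracking step in the entry above: each facial coefficient vector from the first image is matched to its nearest vector in the second image, with a distance cut-off deciding whether a corresponding person exists. The Euclidean metric and the threshold value are assumptions; the optional use of estimated locations is omitted.

```python
# Nearest-neighbour matching of facial coefficient vectors (illustrative sketch).
import numpy as np

def match_faces(vectors_a: np.ndarray, vectors_b: np.ndarray, max_dist: float = 1.0):
    """Return a list of (index_in_image1, index_in_image2 or None) correspondences."""
    matches = []
    for i, va in enumerate(vectors_a):
        dists = np.linalg.norm(vectors_b - va, axis=1)
        j = int(np.argmin(dists)) if len(dists) else None
        matches.append((i, j if j is not None and dists[j] <= max_dist else None))
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    faces_img1 = rng.normal(size=(3, 128))                 # 3 faces, 128-D coefficients
    faces_img2 = faces_img1[[2, 0]] + rng.normal(scale=0.01, size=(2, 128))
    print(match_faces(faces_img1, faces_img2))
    # expected: person 0 -> 1, person 1 -> None, person 2 -> 0
```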
20120020514 | OBJECT DETECTION APPARATUS AND OBJECT DETECTION METHOD - An object detection apparatus detects an object captured in a determination image according to a feature amount of the object preliminarily learned by use of a learning image. The apparatus includes a detector that causes strong classifiers to operate in order of increasing classification accuracy, continues processing when a strong classifier has determined that the object is captured in the determination image, and determines that the object has not been detected, without operating any strong classifier of higher classification accuracy, when a strong classifier has determined that the object is not captured in the determination image. Each strong classifier receives, as an input, the classification result of the strong classifier of lower classification accuracy and determines whether or not the object is captured in the determination image according to a plurality of estimation values and the input classification result. | 01-26-2012 |
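The cascade in the entry above runs strong classifiers from lowest to highest accuracy, feeds each stage the previous stage's verdict, and stops at the first rejection. The sketch below illustrates that control flow with placeholder classifiers; the stage interface and the toy scoring rules are assumptions, not the learned classifiers of the application.

```python
# Accuracy-ordered cascade with early rejection (illustrative sketch).
from typing import Callable, Sequence

# Each stage receives (image, previous_result) and returns True if the object
# still appears to be captured in the determination image.
Stage = Callable[[object, bool], bool]

def cascade_detect(image, stages: Sequence[Stage]) -> bool:
    """Run stages ordered by ascending accuracy; reject early, accept at the end."""
    prev = True                       # no prior evidence against the object yet
    for stage in stages:              # stages[0] = least accurate, typically cheapest
        prev = stage(image, prev)
        if not prev:                  # a rejection ends the cascade immediately
            return False
    return True

if __name__ == "__main__":
    cheap  = lambda img, prev: sum(img) > 10          # crude evidence test
    strict = lambda img, prev: prev and max(img) > 5  # reuses the earlier verdict
    print(cascade_detect([1, 2, 9], [cheap, strict]))   # True
    print(cascade_detect([1, 1, 1], [cheap, strict]))   # False (rejected early)
```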
20120020515 | MULTI-PHENOMENOLOGY OBJECT DETECTION - Method and system for utilizing multiple phenomenological techniques to resolve closely spaced objects during imaging includes detecting a plurality of closely spaced objects through the imaging of a target area by an array, and spreading electromagnetic radiation received from the target area across several pixels. During the imaging, different phenomenological techniques may be applied to capture discriminating features that may affect a centroid of the electromagnetic radiation received on the array. Comparing the locations of the centroids over multiple images may be used to resolve a number of objects imaged by the array. Examples of such phenomenological discriminating techniques may include imaging the target area in multiple polarities of light or in multiple spectral bands of light. Another embodiment includes time-lapse imaging of the target area, to compare time lapse centroids for multiple movement signal characteristics over pluralities of pixels on the array. | 01-26-2012 |
20120020516 | SYSTEM AND METHOD FOR MONITORING MOTION OBJECT - A motion object monitoring system captures an image of a scene and distance data between points in the scene and a time-of-flight (TOF) camera by the TOF camera. A 3D model of the scene is built according to the image of the scene and the distance data. The motion object monitoring system gives numbers to the monitored objects according to specific features of the monitored objects. The specific features of the monitored objects are obtained by detecting the built 3D model of the scene. Only one of the numbers of each of the monitored objects is stored, instead of repeatedly storing the numbers of same motion objects. The motion object monitoring system analyzes the stored numbers, and displays an analysis result. The motion object monitoring system also determines a movement of each of the motion objects according to corresponding numbers of the motion objects. | 01-26-2012 |
20120020517 | METHOD FOR DETECTING, IN PARTICULAR COUNTING, ANIMALS - A method for detecting, in particular counting, animals that pass a predefined place in a walk-through direction with the aid of at least a camera, wherein the camera successively records pictures of the defined place and wherein the camera generates signals that represent these pictures and supplies these signals to signal processing means for further processing, wherein a multiplicity of the recorded pictures are processed with the aid of the signal processing means. | 01-26-2012 |
20120020518 | PERSON TRACKING DEVICE AND PERSON TRACKING PROGRAM - A two-dimensional moving track calculating unit | 01-26-2012 |
20120020519 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - An image processing apparatus includes a region setting unit configured to set a specific region where a reflection may occur in an image, a size setting unit configured to set a size of an object to be detected in association with a position in the image, and a changed region detection unit configured to detect a changed region by comparing a background model and an input image, wherein the changed region detection unit outputs the changed region in the specific region based on the size of the object associated with a position of the changed region, in a case where the changed region extends beyond a boundary of the specific region. | 01-26-2012 |
20120020520 | METHOD AND APPARATUS FOR DETECTING MOTION OF IMAGE IN OPTICAL NAVIGATOR - A system and method for determining a motion vector uses both a main block from an image and at least one ancillary block relating to the main block from the image. The main block and ancillary block are then tracked from image to image to provide a motion vector. The use of a composite tracking unit allows for more accurate correlation and identification of a motion vector. | 01-26-2012 |
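A minimal sketch of the composite block matching described in the entry above: the motion vector is the displacement that jointly minimises the matching cost of a main block and an ancillary block held at a fixed offset from it. The block size, search range, ancillary offset, and SAD cost are illustrative assumptions.

```python
# Composite main + ancillary block matching for a motion vector (illustrative sketch).
import numpy as np

def composite_motion_vector(prev, curr, main_xy, block=16,
                            ancillary_offset=(16, 0), search=8):
    """Return (dy, dx) minimising the summed SAD of the main and ancillary blocks."""
    def sad(img, y, x, ref):
        if y < 0 or x < 0:
            return np.inf                     # displaced block fell outside the frame
        patch = img[y:y + block, x:x + block]
        if patch.shape != ref.shape:
            return np.inf
        return np.abs(patch.astype(np.int32) - ref.astype(np.int32)).sum()

    my, mx = main_xy
    ay, ax = my + ancillary_offset[0], mx + ancillary_offset[1]
    ref_main = prev[my:my + block, mx:mx + block]
    ref_anc = prev[ay:ay + block, ax:ax + block]

    best, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = sad(curr, my + dy, mx + dx, ref_main) + \
                   sad(curr, ay + dy, ax + dx, ref_anc)
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    prev = rng.integers(0, 255, (120, 160), dtype=np.uint8)
    curr = np.roll(prev, shift=(3, -2), axis=(0, 1))     # scene shifted by (3, -2)
    print(composite_motion_vector(prev, curr, main_xy=(40, 60)))   # -> (3, -2)
```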
20120020521 | OBJECT POSITION ESTIMATION APPARATUS, OBJECT POSITION ESTIMATION METHOD, AND OBJECT POSITION ESTIMATION PROGRAM - An object-state change determination unit calculates a correspondence relationship between each of a plurality of observed values obtained from a plurality of objects and each of a plurality of the latest object states to be recorded in an object state information storage unit, and determines presence or absence of a change in the object state so that only in a case where there is a change in the object state, an object position is estimated with high precision by using a batch estimation unit, while in the case of no change in the object state, the result of a position estimation of the object with high precision, recorded in the object state information storage unit, is outputted as a result of an object position estimation. | 01-26-2012 |
20120020522 | MOBILE IMAGING DEVICE AS NAVIGATOR - Embodiments of the invention are directed to obtaining information based on directional orientation of a mobile imaging device, such as a camera phone. Visual information is gathered by the camera and used to determine a directional orientation of the camera, to search for content based on the direction, to manipulate 3D virtual images of a surrounding area, and to otherwise use the directional information. Direction and motion can be determined by analyzing a sequence of images. Distance from a current location, inputted search parameters, and other criteria can be used to expand or filter content that is tagged with such criteria. Search results with distance indicators can be overlaid on a map or a camera feed. Various content can be displayed for a current direction, or desired content, such as a business location, can be displayed only when the camera is oriented toward the desired content. | 01-26-2012 |
20120020523 | INFORMATION CREATION DEVICE FOR ESTIMATING OBJECT POSITION AND INFORMATION CREATION METHOD AND PROGRAM FOR ESTIMATING OBJECT POSITION - Score determination means | 01-26-2012 |
20120020524 | TRACKED OBJECT DETERMINATION DEVICE, TRACKED OBJECT DETERMINATION METHOD AND TRACKED OBJECT DETERMINATION PROGRAM - Determination of whether a moving object appearing in input video is an object tracked and captured by a cameraman is enabled. A moving object is determined to be a subject image to which the cameraman pays attention based on the time difference between the time when a movement state determined by a motion vector of the moving object changes and the time when a shooting state determined by a motion vector of the camera motion changes. | 01-26-2012 |
20120020525 | DATA PROCESSING APPARATUS AND DATA PROCESSING METHOD - A data processing apparatus ( | 01-26-2012 |
20120027248 | Foreground Analysis Based on Tracking Information - Techniques for performing foreground analysis are provided. The techniques include identifying a region of interest in a video scene, applying a background subtraction algorithm to the region of interest to detect a static foreground object in the region of interest, and determining whether the static foreground object is abandoned or removed, wherein determining whether the static foreground object is abandoned or removed comprises performing a foreground analysis based on edge energy and region growing, and pruning one or more false alarms using one or more track statistics. | 02-02-2012 |
20120027249 | Multispectral Detection of Personal Attributes for Video Surveillance - Techniques for detecting an attribute in video surveillance include generating training sets of multispectral images, generating a group of multispectral box features comprising receiving input of a detector size of a width and height, a number of spectral bands in the multispectral images, and integer values representing a minimum and maximum width and height of multispectral box features, fixing a feature width and height, generating feature building blocks with the fixed width and height, placing a feature building block at a same location for each spectral band level, and enumerating combinations of the feature building blocks through each spectral level until all sizes within the integer values have been covered, wherein each combination determines a multispectral box feature, using the training sets to select multispectral box features to generate a multispectral attribute detector, and using the multispectral attribute detector to identify a location of an attribute in video surveillance. | 02-02-2012 |
20120027250 | DATA DIFFERENCE GUIDED IMAGE CAPTURING - Methods and apparatuses are disclosed. Previously stored images of one or more geographic areas may be viewed by online users. A new low-resolution image may be acquired and aspects of the new low-resolution image may be compared with a corresponding one of the previously stored images to determine an amount of change. A determination may be made regarding whether to acquire a new high-resolution image based on the determined amount of change and a freshness score associated with the one of the previously stored images. In another embodiment, a new image may be captured and corresponding location data may be obtained. A corresponding previously stored image may be obtained and compared with the new image to determine an amount of change. The new image may be uploaded to a remote computing device based on the determined amount of change and a freshness score of the previously stored image. | 02-02-2012 |
20120027251 | DEVICE WITH MARKINGS FOR CONFIGURATION - A device including a network interface is marked for determination of the position or orientation of the device. In particular, the markings can include a pattern and proportions that enable determination of at least one of a position and an orientation of the device relative to a station using appearance of the markings as observed from the station. | 02-02-2012 |
20120027252 | HAND GESTURE DETECTION - A method for detecting presence of a hand gesture in video frames includes receiving video frames having an original resolution, downscaling the received video frames into video frames having a lower resolution, and detecting a motion corresponding to the predefined hand gesture in the downscaled video frames based on temporal motion information in the downscaled video frames. The method also includes detecting a hand shape corresponding to the predefined hand gesture in a candidate search window within one of the downscaled video frames using a binary classifier. The candidate search window corresponds to a motion region containing the detected motion. The method further includes determining whether the received video frames contain the predefined hand gesture based on the hand shape detection. | 02-02-2012 |
20120027253 | ILLUMINATION APPARATUS AND BRIGHTNESS ADJUSTING METHOD - An illumination apparatus comprises a control unit, an image capturing unit, a processor unit, a comparison unit, an adjustment unit and an illumination unit. The control unit generates a start signal at a predetermined time. The image capturing unit captures a plurality of images of the ambient road conditions according to the start signal. The processor unit extracts vehicle edges from the captured images to obtain the current traffic volume. The adjustment unit generates different pulse voltages according to the volume of traffic. The illumination unit emits light according to the different pulse voltages. | 02-02-2012 |
20120027254 | Information Processing Apparatus and Information Processing Method for Drawing Image that Reacts to Input Information - In an information processing apparatus, an external-information acquisition unit acquires external information such as an image, a sound, textual information, and numerical information from an input apparatus. A field-image generation unit generates, as an image, a “field” that acts on a particle for a predetermined time step based on the external information. An intermediate-image memory unit stores an intermediate image that is generated in the process of generating a field image by the field-image generation unit. A field-image memory unit stores the field image generated by the field-image generation unit. A particle-image generation unit generates data of a particle image to be output finally by using the field image stored in the field-image memory unit. | 02-02-2012 |
20120027255 | VEHICLE DETECTION APPARATUS - A vehicle detection apparatus comprises an other-vehicle detection module configured to detect points of light in an image captured by a vehicle to which the vehicle detection module is mounted and to detect other vehicles based on the points of light, a vehicle lane-line detection module configured to detect a vehicle lane-line in the captured image, and a region sectioning module configured to section the captured image based on the detected vehicle lane-line into an own vehicle lane region, an oncoming vehicle lane region, and a vehicle lane exterior region. Other vehicles are detected by the other-vehicle detection module by detecting points of light based on respective detection conditions set for each of the sectioned regions. | 02-02-2012 |
20120027256 | Automatic Media Sharing Via Shutter Click - A computer-implemented method for automatically sharing media between users is provided. Collections of images are received from different users, where each collection is associated with a particular user and the users may be associated with each other. The collections are grouped into one or more albums based on the content of the images in the collection, where each album is associated with a particular user. The albums from the different users are grouped into one or more event groups based on the content of the albums. The event groups are then shared automatically, without user intervention, between the different users based on their associations with each other and their individual sharing preferences. | 02-02-2012 |
20120027257 | METHOD AND AN APPARATUS FOR DISPLAYING A 3-DIMENSIONAL IMAGE - A three-dimensional (3D) image display device may display a perceived 3D image. A location tracking unit may determine a viewing distance from a screen to a viewer. An image processing unit may calculate a 3D image pixel period based on the determined viewing distance, may determine a color of at least one of pixels and sub-pixels displaying the 3D image based on the calculated 3D image pixel period, and may control the 3D image to be displayed based on the determined color. | 02-02-2012 |
20120027258 | OBJECT DETECTION DEVICE - An object detection device including: an imaging unit ( | 02-02-2012 |
20120027259 | SYNCHRONIZATION OF TWO IMAGE SEQUENCES OF A PERIODICALLY MOVING OBJECT - A method and an apparatus for correlating two image sequences of a periodically moving object with respect to the periodicity are described. A first frame sequence of the object moving with a first periodicity is acquired. Therein the first frame sequence comprises at least one cycle of motion. A second frame sequence of the object moving with a second periodicity is acquired. Therein the second frame sequence comprises at least one cycle of motion. The first and the second frame sequences are synchronized with respect to the respective periodicity such that same phases of motion of the periodically moving object are correlated to be presented simultaneously. The present invention allows comparison of sequences representing a periodic motion even when the sequences have different numbers of frames for the same cycle of motion. Thereby, e.g. image sequences of a beating heart acquired before and after a therapy may be presented in a synchronised way and therefore may be easily compared. | 02-02-2012 |
20120027260 | ASSOCIATING A SENSOR POSITION WITH AN IMAGE POSITION - A system for associating a sensor position with an image position comprises position information means | 02-02-2012 |
20120027261 | Method and Apparatus for Performing 2D to 3D Registration - A method and apparatus for performing 2D to 3D registration includes an initialization step and a refinement step. The initialization step is directed to identifying an orientation and a position by knowing orientation information where data images are captured and by identifying centers of relevant bodies. The refinement step uses normalized mutual information and pattern intensity algorithms to register the 2D image to the 3D volume. | 02-02-2012 |
20120033852 | SYSTEM AND METHOD TO FIND THE PRECISE LOCATION OF OBJECTS OF INTEREST IN DIGITAL IMAGES - The present invention is a method and system to precisely locate objects of interest in any given image scene space, which finds the presence of objects based upon pattern-matching geometric relationships to a master, known set. The method and system prepare images for feature and attribute detection and identify the presence of potential objects of interest, then narrow down the objects based upon how well they match a pre-designated master template. Matching takes place by finding all objects, plotting each object's area, and juxtaposing a sweet-spot overlap of its area on master objects, which in turn forms a glyph shape. The glyph shape is recorded, along with all other formed glyphs in an image's scene space, and then mapped to form sets using a classifier and finally a pattern-matching algorithm. The resulting object-of-interest matches are then refined to plot the contour boundaries of each object's grouped elements (the arrangement of contiguous pixels of the given object, called a Co-Glyph) and finally snapped to the actual dimensions of its components, e.g., the x, y of a character or an individual living cell. | 02-09-2012 |
20120033853 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - The present invention refers to an information processing apparatus comprising: an obtaining unit adapted to obtain an image of an object; a face region detection unit adapted to detect a face region of the object from the image; an eye region detection unit adapted to detect an eye region of the object; a generation unit adapted to generate a high-resolution image and low-resolution image of the face region detected by the face region detection means; a first extraction unit adapted to extract a first feature amount indicating a direction of a face existing in the face region from the low-resolution image; a second extraction unit adapted to extract a second feature amount indicating a direction of an eye existing in the eye region from the high-resolution image; and an estimation unit adapted to estimate a gaze direction of the object from the first feature amount and the second feature amount. | 02-09-2012 |
20120033854 | IMAGE PROCESSING APPARATUS - Provided are an image processing apparatus and method for counting moving objects in an image, the apparatus including: a motion detection unit which detects motion in an image; an object detection unit which detects objects based on the motion detected by the motion detection unit; an outline generation unit which generates at least one reference outline of which a size is adjusted according to a preset parameter based on a location in the image; and a calculation unit which calculates a number of objects having substantially a same size as that of the at least one reference outline from among the objects detected by the object detection unit, wherein the preset parameter is adjusted according to at least one circumstantial parameter. | 02-09-2012 |
20120033855 | PREDICTIVE FLIGHT PATH AND NON-DESTRUCTIVE MARKING SYSTEM AND METHOD - Systems and methods for acquiring and targeting an object placed in motion, tracking the object's movement, and while tracking, measuring the object's characteristics and marking the object with an external indicator until the object comes to rest is provided. The systems and methods include an acquisition and tracking system, a data capture system, and a marking control system. Through the components of the system, an object moving through two or three dimensional space can be externally marked to assist with improving the performance of striking the object. | 02-09-2012 |
20120033856 | SYSTEM AND METHOD FOR ENABLING MEANINGFUL INTERACTION WITH VIDEO BASED CHARACTERS AND OBJECTS - The present disclosure provides a system and method for enabling meaningful body-to-body interaction with virtual video-based characters or objects in an interactive imaging environment including: capturing a corpus of video-based interaction data, processing the captured video using a segmentation process that corresponds to the capture setup in order to generate binary video data, labeling the corpus by assigning a description to clips of silhouette video, processing the labeled corpus of silhouette motion data to extract horizontal and vertical projection histograms for each frame of silhouette data, and estimating the motion state automatically from each frame of segmentation data using the processed model. Virtual characters or objects are represented using video captured from video-based motion, thereby creating the illusion of real characters or objects in an interactive imaging experience. | 02-09-2012 |
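One concrete piece of the pipeline in the entry above is the extraction of horizontal and vertical projection histograms from each frame of binary silhouette data. A minimal numpy sketch follows; normalising by silhouette area is an illustrative choice, not specified in the abstract.

```python
# Horizontal/vertical projection histograms of a binary silhouette frame (illustrative sketch).
import numpy as np

def projection_histograms(silhouette: np.ndarray):
    """Return (horizontal, vertical) projection histograms of a binary frame."""
    sil = silhouette.astype(np.float32)
    area = sil.sum() + 1e-6                # avoid division by zero on empty frames
    horizontal = sil.sum(axis=0) / area    # one value per column
    vertical = sil.sum(axis=1) / area      # one value per row
    return horizontal, vertical

if __name__ == "__main__":
    frame = np.zeros((6, 8), dtype=np.uint8)
    frame[1:5, 2:4] = 1                    # a small upright silhouette
    h, v = projection_histograms(frame)
    print(np.round(h, 2))
    print(np.round(v, 2))
```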
20120033857 | SELECTIVE AND ADAPTIVE ILLUMINATION OF A TARGET - There are provided a method and a system for illuminating one or more target in a scene. An image of the scene is acquired using a sensing device that may use an infrared sensor for example. From the image, an illumination controller determines an illumination figure, such that the illumination figure adaptively matches at least a position of the target in the image. The target is the selectively illuminated using an illumination device, according to the illumination figure. | 02-09-2012 |
20120039505 | DYNAMICALLY RESIZING TEXT AREA ON A DISPLAY DEVICE - Dynamically resizing a text area in which text is displayed on a display device. A camera device periodically captures snapshots of a user's gaze point and head position while reading text, and the captured snapshots are used to detect movement of the user's head. Head movement suggests that the text area is too wide for comfortable viewing. Accordingly, the width of the text area is automatically resized, responsive to detecting head movement. Preferably, the resized width is set to the position of the user's gaze point prior to the detected head movement. The text is then preferably reflowed within the resized text area. Optionally, the user may be prompted to confirm whether the resizing will be performed. | 02-16-2012 |
20120039506 | METHOD FOR IDENTIFYING AN OBJECT IN A VIDEO ARCHIVE - The invention concerns a method for identifying an object in a video archive including multiple images acquired in a network of cameras including a phase of characterisation of the object to be identified and a phase of searching for the said object in the said archive, where the said characterisation phase consists in defining for the said object at least one semantic characteristic capable of being extracted, even in low-resolution images, from the said video archive. | 02-16-2012 |
20120039507 | Information Processing Device And Information Processing Method - An image acquisition unit of an information processing device acquires data for moving image including an image of a user and captured by an image capturing device. An initial processing unit determines correspondence between an amount of movement of the user and a parameter defining an image to be ultimately output in a conversion information storage unit. A tracking processing unit uses a particle filter to perform visual tracking in the moving image so as to estimate the magnification and translation amount of the user's head contour. The input value conversion unit converts the amount of movement of the user into the parameter defining an image using the magnification and the translation amount as parameters. The output data generation unit generates an image based on the parameter. The output control unit controls the generated image so as to be displayed on a display device. | 02-16-2012 |
20120039508 | TARGET DETECTING METHOD AND APPARATUS - Target detecting method and apparatus are disclosed. In the target detecting method, edges in a first direction in an input image may be detected to obtain an edge image comprising a plurality of edges in the first direction; and one or more candidate targets may be generated according to the plurality of edges in the first direction, a region between any two of the plurality of edges in the first direction in the input image corresponding to one of the candidate targets. | 02-16-2012 |
20120039509 | INFORMATION-INPUTTING DEVICE INPUTTING CONTACT POINT OF OBJECT ON RECORDING SURFACE AS INFORMATION - Structure and function for inputting information preferably includes a display device having two cameras in respective corners thereof. At least one computer readable medium preferably has program instructions configured to cause at least one processing structure to: (i) extract an object located on a plane of the display device from an image that includes the plane of the object, (ii) determine whether the object is a writing implement by determining, when a plurality of objects are extracted from the image, that one of the plurality of objects that satisfies a prescribed condition is the writing implement, (iii) calculate a position of a contact point between the writing implement and the plane as information to be input if the object has been determined as the writing implement, and (iv) input the information representing a position on the plane indicated by the object. | 02-16-2012 |
20120039510 | SYSTEM AND METHOD FOR REMOTELY MONITORING AND/OR VIEWING IMAGES FROM A CAMERA OR VIDEO DEVICE - A system and method are provided for remotely monitoring images from an image capturing device. Image data from an image capturing component is received where image data represents images of a scene in a field of view of the image capturing component. The image data may be analyzed to determine that the scene has changed. A determination may be made that the scene has changed. In response to this determination being made, a communication may be transmitted to a designated device, recipient or network location. The communication may be informative that a scene change or event occurred. The communication may be in the form of a notification or an actual image or series of images of the scene after the change or event. | 02-16-2012 |
20120039511 | Information Processing Apparatus, Information Processing Method, and Computer Program - An information processing apparatus that executes processing for creating an environmental map includes a camera that photographs an image, a self-position detecting unit that detects a position and a posture of the camera on the basis of the image, an image-recognition processing unit that detects an object from the image, a data constructing unit that is inputted with information concerning the position and the posture of the camera and information concerning the object and executes processing for creating or updating the environmental map, and a dictionary-data storing unit having stored therein dictionary data in which object information is registered. The image-recognition processing unit executes processing for detecting an object from the image acquired by the camera with reference to the dictionary data. The data constructing unit applies the three-dimensional shape data registered in the dictionary data to the environmental map and executes object arrangement on the environmental map. | 02-16-2012 |
20120045090 | MULTI-MODE VIDEO EVENT INDEXING - Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. A high-quality analytic mode is selected from multiple modes and applied to video input images via a hardware device to determine object activity within the video input images if the determined level of detected quality of object distinctiveness meets a threshold level of quality, else a low-quality analytic mode is selected and applied to the video input images via a hardware device to determine object activity within the video input images, wherein the low-quality analytic mode is different from the high-quality analytic mode. | 02-23-2012 |
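The mode selection in the entry above hinges on a measured quality of object distinctiveness crossing a threshold. The sketch below illustrates that switch with a crude contrast-based score; the scoring function, threshold value, and mode names are placeholders, not the metric or modes used in the application.

```python
# Threshold-based switch between high- and low-quality analytic modes (illustrative sketch).
import numpy as np

def object_distinctiveness(frame: np.ndarray, background: np.ndarray) -> float:
    """Crude foreground/background separability score in [0, 1]."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return float(diff.mean() / 255.0)

def select_analytic_mode(frame, background, threshold: float = 0.15) -> str:
    score = object_distinctiveness(frame, background)
    return "high_quality_mode" if score >= threshold else "low_quality_mode"

if __name__ == "__main__":
    bg = np.zeros((120, 160), dtype=np.uint8)
    clear = bg.copy(); clear[20:100, 40:120] = 220    # strongly distinct object
    faint = bg.copy(); faint[20:100, 40:120] = 15     # barely distinct object
    print(select_analytic_mode(clear, bg), select_analytic_mode(faint, bg))
```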
20120045091 | System and Method for 3D Wireframe Reconstruction from Video - In one or more aspects of the present disclosure, a method, a computer program product and a system for reconstructing scene features of an object in 3D space using structure-from-motion feature-tracking includes acquiring a first camera frame at a first camera position; extracting image features from the first camera frame; initializing a first set of 3D points from the extracted image features; acquiring a second camera frame at a second camera position; predicting a second set of 3D points by converting their positions and variances to the second camera position; projecting the predicted 3D positions to an image plane of the second camera to obtain 2D predictions of the image features; measuring an innovation of the predicted 2D image features; and updating estimates of 3D points based on the measured innovation to reconstruct scene features of the object image in 3D space. | 02-23-2012 |
20120045092 | Hierarchical Video Sub-volume Search - Described is a technology by which video, which may be relatively high-resolution video, is efficiently processed to determine whether the video contains a specified action. The video corresponds to a spatial-temporal volume. The volume is searched with a top-k search that finds a plurality of the most likely sub-volumes simultaneously in a single search round. The score volumes of larger spatial resolution videos may be down-sampled into lower-resolution score volumes prior to searching. | 02-23-2012 |
20120045093 | METHOD AND APPARATUS FOR RECOGNIZING OBJECTS IN MEDIA CONTENT - An approach is provided for recognizing objects in media content. The capture manager determines to detect, at a device, one or more objects in a content stream. Next, the capture manager determines to capture one or more representations of the one or more objects in the content stream. Then, the capture manager associates the one or more representations with one or more instances of the content stream. | 02-23-2012 |
20120045094 | TRACKING APPARATUS, TRACKING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - The present invention provides a tracking apparatus for tracking a target designated on an image which is captured by an image sensing element, including a calculation unit configured to calculate, for each of feature candidate colors, a first area of a pixel group which includes a pixel of a feature candidate color of interest and in which pixels of colors similar to the feature candidate color of interest continuously appear, a second area of pixels of colors similar to the feature candidate color of interest in the plurality of pixels, and a ratio of the first area to the second area, and an extraction unit configured to extract a feature candidate color having the smallest first area as a feature color of the target from feature candidate colors for each of which the ratio of the first area to the second area is higher than a predetermined reference ratio. | 02-23-2012 |
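A minimal sketch of the feature-colour selection rule in the entry above: for each candidate colour, the connected area around a seed pixel (first area) is compared with the total area of similarly coloured pixels (second area), and the candidate with the smallest first area among those whose ratio exceeds the reference is chosen. The colour-similarity radius, the seed-pixel interface, and the reference ratio value are assumptions made for illustration.

```python
# Feature-colour selection by connected-area ratio (illustrative sketch).
import numpy as np
from scipy import ndimage

def colour_areas(img, candidate, seed_yx, sim_thresh=30.0):
    """Return (first_area, second_area) for one candidate colour and its seed pixel."""
    dist = np.linalg.norm(img.astype(np.float32) - np.asarray(candidate, np.float32),
                          axis=2)
    similar = dist <= sim_thresh                  # all similar pixels (second area)
    labeled, _ = ndimage.label(similar)
    first = int((labeled == labeled[seed_yx]).sum()) if similar[seed_yx] else 0
    return first, int(similar.sum())

def pick_feature_colour(img, candidates, min_ratio=0.8):
    """Smallest connected area among candidates whose area ratio beats min_ratio."""
    best = None
    for colour, seed in candidates:
        first, second = colour_areas(img, colour, seed)
        if second and first / second > min_ratio:
            if best is None or first < best[1]:
                best = (colour, first)
    return best[0] if best else None

if __name__ == "__main__":
    img = np.zeros((60, 60, 3), dtype=np.uint8)
    img[10:20, 10:20] = (255, 0, 0)                # compact red patch (good feature colour)
    img[:, 40:45] = (0, 255, 0)                    # green also appears elsewhere
    img[50:55, 0:5] = (0, 255, 0)
    cands = [((255, 0, 0), (15, 15)), ((0, 255, 0), (30, 42))]
    print(pick_feature_colour(img, cands))         # -> (255, 0, 0)
```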
20120045095 | IMAGE PROCESSING APPARATUS, METHOD THEREOF, PROGRAM, AND IMAGE CAPTURING APPARATUS - An image processing apparatus stores model information representing a subject model belonging to a specific category, detects the subject from an input image by referring to the model information, determines a region for which an image correction is to be performed within a region occupied by the detected subject in the input image, stores, for a local region of the image, a plurality of correction data sets representing correspondence between a feature vector representing a feature before correction and a feature vector representing a feature after correction, selects at least one of the correction data sets to be used to correct a local region included in the region determined to undergo the image correction, and corrects the region determined to undergo the image correction using the selected correction data sets. | 02-23-2012 |
20120045096 | MONITORING CAMERA TERMINAL - A monitoring camera terminal has an imaging portion for imaging a monitoring target area allocated to an own-terminal, an object extraction portion for processing a frame image imaged by the imaging portion to extract an imaged object, an ID addition portion for adding an ID to the object extracted by the object extraction portion, an object map creation portion for creating, for each object extracted by the object extraction portion, an object map associating the ID added to the object with a coordinate position in the frame image, and a tracing portion for tracing an object in the monitoring target area allocated to the own-terminal using the object maps created by the object map creation portion. | 02-23-2012 |
20120045097 | HIGH ACCURACY BEAM PLACEMENT FOR LOCAL AREA NAVIGATION - An improved method of high accuracy beam placement for local area navigation in the field of semiconductor chip manufacturing. This invention demonstrates a method where high accuracy navigation to the site of interest within a relatively large local area (e.g. an area 200 μm×200 μm) is possible even where the stage/navigation system is not normally capable of such high accuracy navigation. The combination of large area, high-resolution scanning, digital zoom and registration of the image to an idealized coordinate system enables navigation around a local area without relying on stage movements. Once the image is acquired any sample or beam drift will not affect the alignment. Preferred embodiments thus allow accurate navigation to a site on a sample with sub-100 nm accuracy, even without a high-accuracy stage/navigation system. | 02-23-2012 |
20120045098 | ARCHITECTURES AND METHODS FOR CREATING AND REPRESENTING TIME-DEPENDENT IMAGERY - The present invention pertains to geographical image processing of time-dependent imagery. Various assets acquired at different times are stored and processed according to acquisition date in order to generate one or more image tiles for a geographical region of interest. The different image tiles are sorted based on asset acquisition date. Multiple image tiles for the same region of interest may be available. In response to a user request for imagery as of a certain date, one or more image tiles associated with assets from prior to that date are used to generate a time-based geographical image for the user. | 02-23-2012 |
20120057745 | DETECTION OF OBJECTS USING RANGE INFORMATION - A system and method for detecting objects and background in digital images using range information includes receiving the digital image representing a scene; identifying range information associated with the digital image and including distances of pixels in the scene from a known reference location; generating a cluster map based at least upon an analysis of the range information and the digital image, the cluster map grouping pixels of the digital image by their distances from a viewpoint; identifying objects in the digital image based at least upon an analysis of the cluster map and the digital image; and storing an indication of the identified objects in a processor-accessible memory system. | 03-08-2012 |
20120057746 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - A processing device and method are provided. According to illustrative embodiments, the device and method are implemented by detecting a face region of an image, setting at least one action region according to the position of the face region, processing image data corresponding to the at least one action region to determine whether or not a predetermined action has been performed, and performing processing corresponding to the predetermined action when it is determined that the predetermined action has been performed. | 03-08-2012 |
20120057747 | IMAGE PROCESSING SYSTEM AND IMAGE PROCESSING METHOD - An image processing system performs a position-matching operation on first and second images, which are obtained by photographing the same object a plurality of times. A plurality of shift points are detected in the second image. The shift points correspond to fixed points, which are dispersed throughout the whole of the first image. The second image is divided into a plurality of partial images, the vertices of which are positioned at the same coordinates as the fixed points in the first image. Each of the partial images is shifted to the shift points to transform the partial images so that corresponding transformed partial images are produced. The transformed partial images are combined to form a combined image. | 03-08-2012 |
20120057748 | APPARATUS WHICH DETECTS MOVING OBJECT FROM IMAGE AND METHOD THEREOF - An image processing apparatus includes an input unit configured to input a plurality of time-sequential still images, a setting unit configured to set, in a still image among the plurality of still images, a candidate region that is a candidate of a region in which an object exists, and to acquire a likelihood of the candidate region, a motion acquisition unit configured to acquire motion information indicating a motion of the object based on the still image and another still image that is time-sequential to the still image, a calculation unit configured to calculate a weight corresponding to an appropriateness of the motion indicated by the motion information as a motion of the object, a correction unit configured to correct the likelihood based on the weight, and a detection unit configured to detect the object from the still image based on the corrected likelihood. | 03-08-2012 |
20120057749 | INATTENTION DETERMINING DEVICE - An inattention determining device includes a range changing unit and an inattention determining unit. When a curve detection result is output from a curve detector, the range changing unit changes a first predetermined range to a second predetermined range by a predetermined amount in the curve direction before a turning direction of an acquisition result is changed in the curve direction of the curve detection result. The inattention determining unit determines whether or not a driver is in an inattention state on the basis of the second predetermined range. | 03-08-2012 |
20120057750 | System And Method For Data Assisted Chroma-Keying - The invention illustrates a system and method of displaying a base image and an overlay image comprising: capturing a base image of a real event; receiving instrumentation data based on the real event; identifying a visual segment within the base image based on the instrumentation data; and rendering an overlay image within the visual segment. | 03-08-2012 |
20120057751 | Particle Tracking Methods - A method for tracking an object in video data comprises the steps of determining a plurality of particles for estimating a location of the object in the video data, determining a weight for each of the plurality of the particles, wherein the weights of two or more particles are determined substantially in parallel, and estimating the location of the object in the video data based upon the determined particle weights. | 03-08-2012 |
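A minimal sketch of the parallel weighting step in the entry above, assuming a Gaussian likelihood on the grey value sampled at each particle position against a template mean; the likelihood model, parameter names, and the weighted-mean location estimate are illustrative choices rather than the disclosed method.

```python
# Vectorised particle weighting: all weights computed in one (parallel) pass.
import numpy as np

def estimate_location(particles, frame_gray, template_mean, sigma=10.0):
    """particles: Nx2 array of (row, col) hypotheses; frame_gray: HxW image."""
    rows = np.clip(particles[:, 0].astype(int), 0, frame_gray.shape[0] - 1)
    cols = np.clip(particles[:, 1].astype(int), 0, frame_gray.shape[1] - 1)
    diff = frame_gray[rows, cols].astype(float) - template_mean
    weights = np.exp(-0.5 * (diff / sigma) ** 2)   # Gaussian appearance likelihood (assumed)
    weights /= weights.sum() + 1e-12               # normalise
    return weights @ particles                     # weighted location estimate
```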
20120057752 | METHOD OF, AND APPARATUS AND COMPUTER SOFTWARE FOR, IMPLEMENTING IMAGE ANALYSIS PROTOCOLS - A computer-based method for the development of an image analysis protocol for analyzing image data, the image data containing images including image objects, in particular biological image objects such as biological cells. The image analysis protocol, once developed, is operable in an image analysis software system to report on one or more measurements conducted on selected ones of the image objects. The development process includes defining target identification settings to identify at least two different target sets of image objects, defining one or more pair-wise linking relationships between the target sets, and defining one or more measurements to be performed using said pair-wise linking relationship(s). | 03-08-2012 |
20120057753 | SYSTEMS AND METHODS FOR TRACKING A MODEL - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose. | 03-08-2012 |
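The voxel-grid downsampling mentioned in the entry above might look roughly like the sketch below: depth pixels are back-projected to 3-D points and bucketed into fixed-size voxels. The pinhole intrinsics (fx, fy, cx, cy), the voxel edge length, and the helper name are assumptions.

```python
# Assumed pinhole back-projection followed by voxel-grid quantisation.
import numpy as np

def depth_to_voxels(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5, voxel=0.05):
    """depth: HxW depth image in metres; returns the set of occupied voxel indices."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(float)
    valid = z > 0
    x = (u[valid] - cx) * z[valid] / fx
    y = (v[valid] - cy) * z[valid] / fy
    points = np.stack([x, y, z[valid]], axis=1)
    keys = np.floor(points / voxel).astype(int)    # voxel index per 3-D point
    return np.unique(keys, axis=0)                 # one entry per occupied voxel
```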
20120057754 | IMAGE SELECTION BASED ON IMAGE CONTENT - An image capture system comprises an image input and processing unit. The image input obtains image information which is then passed to the processing unit. The processing unit is coupled to the image input for determining image metrics on the image information. The processing unit initiates a capture sequence when the image metrics meet a predetermined condition. The capture sequence may store one or more images, or it may indicate that one or more images have been detected. In one embodiment, the image input is a CMOS or CCD sensor. | 03-08-2012 |
20120057755 | METHOD AND SYSTEM FOR CONTROLLING LIGHTING - A method is provided to control the lighting ambience in a space by means of a plurality of controllable light sources ( | 03-08-2012 |
20120063637 | ARRAY OF SCANNING SENSORS - An array of image sensors is arranged to cover a field of view for an image capture system. Each sensor has a field of view segment which is adjacent to the field of view segment covered by another image sensor. The adjacent field of view (FOV) segments share an overlap area. Each image sensor comprises sets of light sensitive elements which capture image data using a scanning technique which proceeds in a sequence providing for image sensors sharing overlap areas to be exposed in the overlap area during the same time period. At least two of the image sensors capture image data in opposite directions of traversal for an overlap area. This sequencing provides closer spatial and temporal relationships between the data captured in the overlap area by the different image sensors. The closer spatial and temporal relationships reduce artifact effects at the stitching boundaries, and improve the performance of image processing techniques applied to improve image quality. | 03-15-2012 |
20120063638 | EGOMOTION USING ASSORTED FEATURES - A system and method are disclosed for estimating camera motion of a visual input scene using points and lines detected in the visual input scene. The system includes a camera server comprising a stereo pair of calibrated cameras, a feature processing module, a trifocal motion estimation module and an optional adjustment module. The stereo pair of the calibrated cameras and its corresponding stereo pair of cameras after camera motion form a first and a second trifocal tensor. The feature processing module is configured to detect points and lines in the visual input data comprising a plurality of image frames. The feature processing module is further configured to find point correspondence between detected points and line correspondence between detected lines in different views. The trifocal motion estimation module is configured to estimate the camera motion using the detected points and lines associated with the first and the second trifocal tensor. | 03-15-2012 |
20120063639 | INFORMATION PROCESSING DEVICE, RECOGNITION METHOD THEREOF AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An information processing device detects a background region from an image, extracts multiple partial regions from the image, sets multiple local regions for each of the multiple partial regions, selects a local region including a region other than the background region from among the multiple local regions and calculates a local feature amount from the selected local region, and determines a partial region that includes a recognition target object from among the multiple partial regions based on the calculated local feature amount. | 03-15-2012 |
20120063640 | IMAGE PROCESSING APPARATUS, IMAGE FORMING SYSTEM, AND IMAGE FORMING METHOD - An upstream image processing apparatus determines, when geometric conversion is instructed, whether the result of downstream correction processing changes due to the geometric conversion, and if it changes, the apparatus changes the conversion to geometric conversion that does not cause a change in the correction result. Then, the geometric conversion is performed on a target image, and the resultant image is transmitted to a downstream image processing apparatus. Together therewith, instruction information indicating an instruction for correction processing and instruction information indicating geometric transformation processing for performing geometric transformation processing to the instructed degree are transmitted to the downstream image processing apparatus. The downstream image processing apparatus adds an instruction for image processing as appropriate, and thereafter transmits the resultant data to an image forming apparatus. The image forming apparatus forms an image by performing correction processing and geometric transformation processing that have been instructed. | 03-15-2012 |
20120063641 | SYSTEMS AND METHODS FOR DETECTING ANOMALIES FROM DATA - The present disclosure concerns methods and/or systems for processing, detecting and/or notifying for the presence of anomalies or infrequent events from data. Some of the disclosed methods and/or systems may be used on large-scale data sets. Certain applications are directed to analyzing sensor surveillance records to identify aberrant behavior. The sensor data may be from a number of sensor types including video and/or audio. Certain applications are directed to methods and/or systems that use compressive sensing. Certain applications may be performed in substantially real time. | 03-15-2012 |
20120063642 | SIMILARITY ANALYZING DEVICE, IMAGE DISPLAY DEVICE, IMAGE DISPLAY PROGRAM STORAGE MEDIUM, AND IMAGE DISPLAY METHOD - A similarity analyzing device includes: an image acquisition section which acquires picked-up images with which image pick-up dates and/or times are associated; and an image registration section which registers a face image showing a picked-up face and with which an image pick-up date and/or time is associated. The device further includes: a degree of similarity calculation section which detects a face in each of picked-up images acquired by the image acquisition section and calculates the degree of similarity between the detected face and the face in the face image registered in the image registration section; and a degree of similarity reduction section in which the larger the difference between the image pick-up date and/or time associated with the picked-up image and that associated with the face image is, the more the degree of similarity of the face calculated by the degree of similarity calculation section is reduced. | 03-15-2012 |
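The date-dependent reduction of similarity described in the entry above could be realised, for example, with a simple decay factor that shrinks as the gap between the capture dates grows; the exponential form and its half-life are assumptions, not the disclosed rule.

```python
# Larger capture-date gap -> smaller factor -> lower adjusted similarity.
from datetime import date

def adjusted_similarity(raw_similarity, picked_up_on, registered_on, half_life_days=365.0):
    gap_days = abs((picked_up_on - registered_on).days)
    decay = 0.5 ** (gap_days / half_life_days)     # assumed exponential reduction
    return raw_similarity * decay

# Example: a 0.9 face match taken two years after the registered face image.
print(adjusted_similarity(0.9, date(2012, 3, 15), date(2010, 3, 15)))   # roughly 0.22
```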
20120063643 | Methods, Systems, and Products for Gesture-Activation - Methods, systems, and products are disclosed for recognizing gestures. A sequence of images is captured by a camera and compared to a stored sequence of images in memory. A gesture is then recognized in the stored sequence of images. | 03-15-2012 |
20120063644 | DISTANCE-BASED POSITION TRACKING METHOD AND SYSTEM - A pre-operative stage of a distance-based position tracking method ( | 03-15-2012 |
20120070033 | METHODS FOR OBJECT-BASED IDENTIFICATION, SORTING AND RANKING OF TARGET DETECTIONS AND APPARATUSES THEREOF - A method, non-transitory computer readable medium, and apparatus that provides object-based identification, sorting and ranking of target detections includes determining a target detection score for each pixel in each of one or more images for each of one or more targets. A region around one or more of the pixels with the determined detection scores which are higher than the determined detection scores for the remaining pixels in each of the one or more images is identified. An object based score for each of the identified regions in each of the one or more images is determined. The one or more identified regions with the determined object based score for each region is provided. | 03-22-2012 |
20120070034 | METHOD AND APPARATUS FOR DETECTING AND TRACKING VEHICLES - The present invention relates to a method and apparatus for detecting and tracking vehicles. One embodiment of a system for detecting and tracking an object (e.g., vehicle) in a field of view includes a moving object indication stage for detecting a candidate object in a series of input video frames depicting the field of view and a track association stage that uses a joint probabilistic graph matching framework to associate an existing track with the candidate object. | 03-22-2012 |
20120070035 | METHOD AND INTERFACE OF RECOGNIZING USER'S DYNAMIC ORGAN GESTURE AND ELECTRIC-USING APPARATUS USING THE INTERFACE - A method of recognizing a user's dynamic organ for use in an electric-using apparatus includes comparing a background image and a target image, which are inputted through an imaging element, to detect a candidate region including portions of the target image that are different between the background image and the target image; scanning the candidate region using a window; generating a HOG (histograms of oriented gradients) descriptor of a region of the target image that is scanned when it is judged that the scanned region includes a dynamic organ; measuring a resemblance value between the HOG descriptor of the scanned region and a HOG descriptor of a query template for a gesture of the dynamic organ; and judging that the scanned region includes the gesture of the dynamic organ when the resemblance value meets a predetermined condition. | 03-22-2012 |
20120070036 | Method and Interface of Recognizing User's Dynamic Organ Gesture and Electric-Using Apparatus Using the Interface - A method of recognizing a user's dynamic organ for use in an electric-using apparatus includes scanning a difference image, which reflects brightness difference between a target image and a comparative image that are inputted through an imaging element, using a window; generating a HOG (histograms of oriented gradients) descriptor of a region of the difference image that is scanned when it is judged that the scanned region includes a dynamic organ; measuring a resemblance value between the HOG descriptor of the scanned region and a HOG descriptor of a query template for a gesture of the dynamic organ; and judging that the scanned region includes the gesture of the dynamic organ when the resemblance value meets a predetermined condition, wherein the comparative image is one of frame images previous to the target image. | 03-22-2012 |
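Both of the two preceding entries hinge on comparing a HOG descriptor of a scanned window against that of a query template; below is a compact sketch using scikit-image's hog() with cosine similarity standing in for the resemblance value. The HOG parameters and the threshold are assumptions, and window and template are assumed to share the same size so the descriptors have equal length.

```python
# Assumed HOG + cosine-similarity test for a scanned window against a query template.
import numpy as np
from skimage.feature import hog

def window_matches_gesture(window, query_template, threshold=0.8):
    """window, query_template: equally sized 2-D (grayscale) patches."""
    d_win = hog(window, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    d_qry = hog(query_template, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    resemblance = float(np.dot(d_win, d_qry) /
                        (np.linalg.norm(d_win) * np.linalg.norm(d_qry) + 1e-12))
    return resemblance >= threshold   # judged to contain the gesture of the dynamic organ
```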
20120070037 | Method for estimating the motion of a carrier relative to an environment and computing device for navigation system | 03-22-2012 |
20120076353 | INTERACTIVE DISPLAY - Embodiments are disclosed herein that relate to the front-projection of an interactive display. One disclosed embodiment provides an interactive display system comprising a projector and a display screen configured to display an image projected by the projector, the display screen comprising a retroreflective layer and a diffuser layer covering the retroreflective layer, the diffuser layer being configured to diffusely reflect only a portion of light incident on the diffuser layer from the projector such that another portion of light passes through the diffuser layer and is reflected by the retroreflective layer back through the diffuser layer. The interactive display system also comprises a camera configured to capture images of the display screen via light reflected by the retroreflective layer to identify via the images a user gesture performed between the projector and the display screen. | 03-29-2012 |
20120076354 | IMAGE RECOGNITION BASED UPON A BROADCAST SIGNATURE - Methods and apparatus for processing image data are disclosed. In one embodiment, a method includes capturing, via an image sensor, an image that includes a plurality of objects including a target object, and receiving, from the target object, via a medium other than the image sensor, distinguishing information that is broadcast by the target object. The distinguishing information distinguishes the target object from other objects, and is used to select, within the captured image, the target object from among the other objects. | 03-29-2012 |
20120076355 | 3D OBJECT TRACKING METHOD AND APPARATUS - A 3D object tracking method and apparatus in which a model of an object to be tracked is divided into a plurality of polygonal planes and the object is tracked using texture data of the respective planes and geometric data between the respective planes to enable more precise tracking. The 3D object tracking method includes modeling the object to be tracked to generate a plurality of planes, and tracking the plurality of planes, respectively. The modeling of the object includes selecting points from among the plurality of planes, respectively, and calculating projective invariants using the selected points. | 03-29-2012 |
20120076356 | ANOMALY DETECTION APPARATUS - Behavior authority may change depending on the behavior performed by a person. It is therefore necessary to change the criteria for judging whether a behavior is anomalous or normal in association with the changed behavior authority. Herein, an anomaly detection apparatus is provided which calculates behavior authority information whose criteria for judging anomalous and normal behaviors are changed according to the behavior performed by the person, detects whether the behavior shown by the person is anomalous, and issues an alarm when an anomalous behavior is detected. | 03-29-2012 |
20120076357 | VIDEO PROCESSING APPARATUS, METHOD AND SYSTEM - According to one embodiment, a video processing apparatus includes an acquisition unit, a first extraction unit, a generation unit, a second extraction unit, a computation unit and a selection unit. The acquisition unit is configured to acquire video streams. A first extraction unit is configured to analyze at least one of the moving pictures and the sounds for each video stream and to extract feature values. A generation unit is configured to generate segments by dividing each video stream, and to generate associated segment groups. A second extraction unit is configured to extract, as common video segment groups, the associated segment groups whose number of associated segments is greater than or equal to a threshold. A computation unit is configured to compute a summarization score. A selection unit is configured to select segments used for a summarized video as summarization segments from the common video segment groups based on the summarization score. | 03-29-2012 |
20120076358 | Methods for and Apparatus for Generating a Continuum of Three-Dimensional Image Data - The present invention provides methods and apparatus for generating a continuum of image data sprayed over three-dimensional models. The three-dimensional models can be representative of features captured by the image data and based upon multiple image data sets capturing the features. The image data can be captured at multiple disparate points along another continuum. | 03-29-2012 |
20120076359 | SYSTEM AND METHOD FOR TRACKING AN ELECTRONIC DEVICE - A system for tracking a spatially manipulated user controlling object using a camera associated with a processor. While the user spatially manipulates the controlling object, an image of the controlling object is picked up via a video camera, and the camera image is analyzed to isolate the part of the image pertaining to the controlling object for mapping the position and orientation of the device in a two-dimensional space. Robust data processing systems and a computerized method employ calibration and tracking algorithms such that minimal user intervention is required for achieving and maintaining successful tracking of the controlling object in changing backgrounds and lighting conditions. | 03-29-2012 |
20120076360 | IMAGE PROCESSING APPARATUS, METHOD AND COMPUTER-READABLE STORAGE MEDIUM - An image processing apparatus includes a receiver, a registration section, a determination section, and a controller. The receiver receives broadcast waves including signals of a plurality of channels. The registration section registers a recognition target. The determination section determines whether or not the recognition target, registered in the registration section, exists in a frame of an image including the signals of the plurality of channels included in the broadcast waves received by the receiver. The controller sequentially switches, in accordance with a determination result obtained by the determination section, the plurality of channels received by the receiver. | 03-29-2012 |
20120076361 | OBJECT DETECTION DEVICE - A depth histogram is created for each of a plurality of local regions of the depth image by grouping, according to specified depths, the depth information for the individual pixels that are contained in the local regions. A degree of similarity between two of the depth histograms for two of the local regions at different positions in the depth image is calculated as a feature. A depth image for training that has a high degree of certainty is defined as a positive example, a depth image for training that has a low degree of certainty is defined as a negative example, a classifier that is suitable for classifying the positive example and the negative example is constructed, and an object that is a target of detection is detected in the depth image, using the classifier and based on the feature. | 03-29-2012 |
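A small sketch of the depth-histogram feature in the entry above, grouping each local region's pixels into depth bins and using histogram intersection as an assumed similarity measure between two regions; the bin count, depth range, and (row, col, height, width) region encoding are illustrative.

```python
# Per-region depth histograms plus an assumed histogram-intersection similarity.
import numpy as np

def depth_histogram(depth_patch, bins=16, max_depth=10.0):
    hist, _ = np.histogram(depth_patch, bins=bins, range=(0.0, max_depth))
    return hist / (hist.sum() + 1e-12)             # normalised depth histogram

def region_similarity_feature(depth_image, region_a, region_b):
    """region_a, region_b: (row, col, height, width) local regions of the depth image."""
    ra, ca, ha, wa = region_a
    rb, cb, hb, wb = region_b
    h_a = depth_histogram(depth_image[ra:ra + ha, ca:ca + wa])
    h_b = depth_histogram(depth_image[rb:rb + hb, cb:cb + wb])
    return float(np.minimum(h_a, h_b).sum())       # similarity used as a feature
```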
20120082338 | ATTITUDE ESTIMATION BY REDUCING NOISE WITH DRAGBACK - In general, in one embodiment, a starfield image as seen by an object is analyzed. Compressive samples are taken of the starfield image and, in the compressed domain, processed to remove noise. Stars in the starfield image are identified and used to determine an attitude of the object. | 04-05-2012 |
20120082339 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus may include an obtaining unit to obtain a number of users from information on detection of a face region including a face in a captured image provided at the apparatus. The apparatus also may include a setting unit to set a display region for content and a display region for a captured image in a display screen; and a display image generation unit to generate a display image to be displayed in the display region for a captured image, in accordance with the information on the detection, the number of users, and the display region set for a captured image. | 04-05-2012 |
20120082340 | SYSTEM AND METHOD FOR PROVIDING MOBILE RANGE SENSING - The present invention provides an improved method for estimating the range of objects in images from various distances comprising receiving a set of images of the scene having multiple objects from at least one camera in motion. Due to the motion of the camera, each of the images is obtained at a different camera location. Then an object visible in multiple images is selected. Data related to approximate camera positions and orientations and the images of the visible object are used to estimate the location of the object relative to a reference coordinate system. Based on the computed data, a projected location of the visible object is computed and the orientation angle of the camera for each image is refined. Additionally, pairs of cameras with various locations can obtain dense stereo for regions of the image at various ranges. | 04-05-2012 |
20120082341 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - A method is provided for displaying physical objects. The method comprises capturing an input image of physical objects, and matching a three-dimensional model to the physical objects. The method further comprises producing a modified partial image by at least one of modifying a portion of the matched three-dimensional model, or modifying a partial image extracted from the input image using the matched three-dimensional model. The method also comprises displaying an output image including the modified partial image superimposed over the input image. | 04-05-2012 |
20120082342 | 3 DIMENSION TRACKING SYSTEM FOR SURGERY SIMULATION AND LOCALIZATION SENSING METHOD USING THE SAME - The 3-dimensional tracking system according to the present disclosure includes: a photographing unit for photographing an object; a recognizing unit for recognizing a marker attached to the object by binarizing an image of the object photographed by the photographing unit; an extracting unit for extracting a 2-dimensional coordinate of the marker recognized by the recognizing unit; and a calculating unit for calculating a 3-dimensional coordinate from the 2-dimensional coordinate of the marker by using an intrinsic parameter of the photographing unit. | 04-05-2012 |
20120082343 | DETECTING A CHANGE BETWEEN IMAGES OR IN A SEQUENCE OF IMAGES - Detecting a change between images is performed more effectively when a measure of change is used for the detection that depends on a length of the code blocks to which the images are individually entropy-encoded, and which are allocated to different sections of the respective image, since the length of these code blocks is also available without decoding. This uses the fact that the length or amount of data of a code block directly depends, in large parts, on the entropy and hence on the complexity of the allocated image section, and that changes between images are, with high probability, also reflected in a change of complexity. | 04-05-2012 |
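Since the entry above keys change detection to the lengths of the entropy-coded blocks, a toy version only needs the per-section lengths of two encoded images; the relative-difference measure and its threshold below are assumptions.

```python
# Flag sections whose code-block length (a proxy for complexity) changed noticeably.
def changed_sections(block_lengths_prev, block_lengths_curr, rel_threshold=0.15):
    """Each argument: list of code-block byte lengths, one per image section."""
    flags = []
    for prev_len, curr_len in zip(block_lengths_prev, block_lengths_curr):
        rel_diff = abs(curr_len - prev_len) / max(prev_len, 1)
        flags.append(rel_diff > rel_threshold)   # complexity change -> likely content change
    return flags
```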
20120082344 | METHOD AND APPARATUS FOR COMPRESSED SENSING - Method and apparatus for compressed sensing yields acceptable quality reconstructions of an object from reduced numbers of measurements. A component x of a signal or image is represented as a vector having m entries. | 04-05-2012 |
20120087539 | METHOD OF DETECTING FEATURE POINTS OF AN OBJECT IN A SYSTEM FOR MOTION DETECTION - A method of detecting feature points of an object in a system for motion detection includes obtaining a first image of the object from a first camera and a second image of the object from a second camera, extracting a foreground image from each of the first image and the second image, based on an assumption that the foreground image is a T-pose image, segmenting the foreground image into a first set of sections, identifying a first set of feature points associated with the first set of sections, obtaining a T-pose image with a set of predetermined feature points, and determining whether the foreground image is a T-pose image by comparing the first set of feature points with the set of predetermined feature points. | 04-12-2012 |
20120087540 | COMPUTING DEVICE AND METHOD FOR MOTION DETECTION - A computing device for motion detection in a system capable of detecting feature points of an object of interest is disclosed. The computing device includes a vector forming unit to form a plurality of vectors associated with a set of the feature points and form a vector set based on the vectors, a posture identifying unit to identify a match of a posture in a database based on the vector set, a motion similarity unit to identify a set of predetermined postures in the database based on the matched posture and an immediately previous matched posture, and a motion identifying unit to identify a predetermined motion in the database based on the set of predetermined postures. | 04-12-2012 |
20120087541 | CAMERA FOR DETECTING DRIVER'S STATE - The present invention provides a camera for detecting a driver's drowsiness state, which can increase the number of pixels in an image of a driver's eye even when using an image sensor having the same number of pixels as a conventional camera instead of a high definition camera. The camera of the present invention is, thus, capable of determining whether the driver's eyes are open or closed. The camera for detecting the driver's state according to the present invention includes a cylindrical lens mounted in front of the camera configured so as to enlarge an image in the vertical direction, a convex lens located in the rear of the cylindrical lens, an image sensor for taking an image of a driver's face formed by the cylindrical lens and the convex lens, and an image processor for extracting an eye area from the image of the driver's face and determining whether the driver's eyes are open or closed. | 04-12-2012 |
20120087542 | LASER DETECTION DEVICE AND LASER DETECTION METHOD - A laser detection method and apparatus for detection of laser beams can each perform operations for producing an interference image from detected light radiation, recording the interference image, and processing the recorded interference image in order to detect laser radiation. In order to allow more robust and faster laser detection, the apparatus and method can detect a spatially defined point distribution from the interference image and transform the point distribution such that a grid interval remains between a point grid in the point distribution, and a fixed position, which is independent of a position in the original image, is associated with the point grid. The apparatus and method can further detect a grid interval in the point grid that was transformed, and detect the position of the point grid from the point distribution by filtering with the assistance of the grid interval. | 04-12-2012 |
20120087543 | IMAGE-BASED HAND DETECTION APPARATUS AND METHOD - An image-based hand detection apparatus includes a hand image detection unit for detecting a hand image corresponding to a shape of a hand clenched to form a fist from an image input. A feature point extraction unit extracts feature points from an area, having lower brightness than a reference value, in the detected hand image. An image rotation unit compares the feature points of the detected hand image with feature points of hand images stored in a hand image storage unit, and rotates the detected hand image or the stored hand images. A matching unit compares the detected hand image with the stored hand images and generates a result of the comparison. If at least one of the stored hand images is matched with the detected hand image, a hand shape recognition unit selects the at least one of the stored hand images as a matching hand image. | 04-12-2012 |
20120087544 | SUBJECT TRACKING DEVICE, SUBJECT TRACKING METHOD, SUBJECT TRACKING PROGRAM PRODUCT AND OPTICAL DEVICE - A subject tracking device includes: a tracking zone setting unit that sets an area where a main subject is present within a captured image as a tracking zone; a tracking unit that tracks the main subject based upon an image output corresponding to the tracking zone; and an arithmetic operation unit that determines through arithmetic operation image-capturing conditions based upon an image output corresponding to a central area within the tracking zone. | 04-12-2012 |
20120087545 | Fusing depth and pressure imaging to provide object identification for multi-touch surfaces - An apparatus for inputting information into a computer includes a 3d sensor that senses 3d information and produces a 3d output. The apparatus includes a 2d sensor that senses 2d information and produces a 2d output. The apparatus includes a processing unit which receives the 2d and 3d output and produces a combined output that is a function of the 2d and 3d output. A method for inputting information into a computer is also disclosed. The method includes the steps of producing a 3d output with a 3d sensor that senses 3d information. There is the step of producing a 2d output with a 2d sensor that senses 2d information. There is the step of receiving the 2d and 3d output at a processing unit. There is the step of producing a combined output with the processing unit that is a function of the 2d and 3d output. | 04-12-2012 |
20120093357 | VEHICLE THREAT IDENTIFICATION ON FULL WINDSHIELD HEAD-UP DISPLAY - A method to dynamically register a graphic identifying a potentially threatening vehicle onto a driving scene of a vehicle utilizing a substantially transparent windscreen head up display includes monitoring a vehicular environment, identifying the potentially threatening vehicle based on the monitored vehicular environment, determining the graphic identifying the potentially threatening vehicle, dynamically registering a location of the graphic upon the substantially transparent windscreen head up display corresponding to the driving scene of the vehicle, and displaying the graphic upon the substantially transparent windscreen head up display, wherein the substantially transparent windscreen head up display includes one of light emitting particles or microstructures over a predefined region of the windscreen permitting luminescent display while permitting vision therethrough. | 04-19-2012 |
20120093358 | CONTROL OF REAR-VIEW AND SIDE-VIEW MIRRORS AND CAMERA-COORDINATED DISPLAYS VIA EYE GAZE - An adaptive vision system includes a vision component to present an image to a user, a sensor for detecting a vision characteristic of the user and generating a sensor signal representing the vision characteristic of the user; and a processor in communication with the sensor and the vision component, wherein the processor receives the sensor signal, analyzes the sensor signal based upon an instruction set to determine the vision characteristic of the user, and configures the visual component based upon the vision characteristic of the user to modify the image presented to the user. | 04-19-2012 |
20120093359 | Batch Detection Association for Enhanced Target Discrimination in Dense Detection Environments - The embodiments described herein relate to systems and techniques for processing batch detection information received from one or more sensors configured to observe objects of interest. In particular, the systems and techniques are configured to enhance track performance particularly in dense target environments. A substantially large number of batch detections can be processed in a number of phases of varying complexity. An initial phase performs relatively low complexity processing on substantially all detections obtained over an extended batch period, approximating object motion with a simplified model (e.g., linear). The batch detections are divided and redistributed into swaths according to the resulting approximations. A subsequent phase performs greater complexity (e.g., quadratic) processing on the divided sets of detections. The subdivision and redistribution of detections lends itself to parallelization. Beneficially, detections over extended batch periods can be processed very efficiently to provide improved target tracking and discrimination in dense target environments. | 04-19-2012 |
20120093360 | HAND GESTURE RECOGNITION - Systems, methods, and machine readable and executable instructions are provided for hand gesture recognition. A method for hand gesture recognition can include detecting, with an image input device in communication with a computing device, movement of an object. A hand pose associated with the moving object is recognized and a response corresponding to the hand pose is initiated. | 04-19-2012 |
20120093361 | TRACKING SYSTEM AND METHOD FOR REGIONS OF INTEREST AND COMPUTER PROGRAM PRODUCT THEREOF - In one exemplary embodiment, a tracking system for region-of-interest (ROI) performs a feature-point detection locally on an ROI of an image frame at an initial time via a feature point detecting and tracking module, and tracks the detected features. A linear transformation module finds out a transform relationship between two ROIs of two consecutive image frames, by using a plurality of corresponding feature points. An estimation and update module predicts and corrects a moving location for the ROI at a current time. Based on the result corrected by the estimation and update module, an outlier rejection module removes at least an outlier outside the ROI. | 04-19-2012 |
20120093362 | DEVICE AND METHOD FOR DETECTING SPECIFIC OBJECT IN SEQUENCE OF IMAGES AND VIDEO CAMERA DEVICE - A device for detecting a specific object includes: a suspect object region detection unit configured to create a foreground mask of each frame of image in a sequence of images and perform an inter-frame differential process on the foreground masks to detect a suspect object region; a unit for modeling a region with high incidence of false positive configured to, if at least one suspect object region is detected, determine a suspect object region satisfying a predetermined condition as a region with high incidence of false positive and build a model of each determined region; and a post-processing unit configured to match each suspect object region not determined as a region with high incidence of false positive against at least one corresponding model, to detect the specific object according to a sequence of mismatching suspect object regions, and to determine absence of the specific object if no suspect object region is detected. | 04-19-2012 |
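As a simplified, assumed stand-in for the foreground-mask differencing described in the entry above, the sketch below uses OpenCV's MOG2 background subtractor to build per-frame foreground masks, differences consecutive masks, and keeps connected blobs above a size floor as suspect object regions; the subtractor choice and the min_area value are not taken from the entry.

```python
# Assumed foreground model (MOG2) + inter-frame mask differencing + blob extraction.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2()

def suspect_regions(prev_mask, frame, min_area=50):
    curr_mask = subtractor.apply(frame)                               # foreground mask of this frame
    diff = (cv2.absdiff(curr_mask, prev_mask) > 0).astype(np.uint8)   # inter-frame differential
    n, labels, stats, _ = cv2.connectedComponentsWithStats(diff)
    boxes = [tuple(stats[i, :4]) for i in range(1, n)                 # (x, y, w, h) per blob
             if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return curr_mask, boxes
```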
20120093363 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - When a detection-target subject is imaged with an image pickup device having line-defect pixels, the detection-target subject is imaged, with the image pickup device or the detection-target subject rotated at a predetermined angle so that the edge of one side of the detection-target subject is not parallel to each of horizontal and vertical scanning lines of the image pickup device, and a gray-scale image is captured by a control apparatus. In the gray-scale image, the luminance of each of the line-defect pixels is corrected by interpolation with luminances of pixels adjacent to both sides of the line-defect pixel. The gray-scale image is subjected to sub-pixel processing to detect the edge of the detection-target subject. When the detection-target subject is a component in a rectangular shape, rotation is made so that four sides are not parallel to each of the horizontal and vertical scanning lines of the image pickup device. | 04-19-2012 |
20120093364 | OBJECT TRACKING DEVICE, OBJECT TRACKING METHOD, AND OBJECT TRACKING PROGRAM - An object tracking apparatus is provided that enables the possibility of erroneous tracking to be further reduced. An object tracking apparatus ( | 04-19-2012 |
20120093365 | CONFERENCE SYSTEM, MONITORING SYSTEM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND A NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - To provide a conference system, a monitoring system, an image processing apparatus, an image processing method, and a non-transitory computer-readable storage medium that stores a computer image processing program capable of accurately and effectively recognizing an object based on a fisheye-distorted image photographed at a wide angle. | 04-19-2012 |
20120093366 | IMAGE SELECTING APPARATUS, CAMERA, AND METHOD OF SELECTING IMAGE - An image selecting apparatus comprises an input unit | 04-19-2012 |
20120093367 | METHOD AND APPARATUS FOR ASSESSING THE THREAT STATUS OF LUGGAGE - A method and apparatus for assessing a threat status of a piece of luggage. The method comprises the steps of scanning the piece of luggage with penetrating radiation to generate image data and processing the image data with a computing device to identify one or more objects represented by the image data. The method also includes further processing the image data to compensate the image data for interaction between the object and the penetrating radiation to produce compensated image data and then determine the threat status of the piece of luggage. | 04-19-2012 |
20120093368 | ADAPTIVE SUBJECT TRACKING METHOD, APPARATUS, AND COMPUTER READABLE RECORDING MEDIUM - The present invention relates to a method for adaptively tracking a subject. The method includes the steps of: comparing a first block which indicates a region corresponding to a specific subject in a first frame with at least one block included in a second frame and determining a specific block among at least one block in the second frame which has the highest degree of similarity to the first block as a second block which indicates a region corresponding to the specific subject in the second frame; and detecting the specific subject from at least part of the whole region in the second frame by using a subject detection technology, if the degree of similarity between the first block and the second block is less than a predetermined threshold value, and resetting the second block in the second frame based on a region corresponding to the detected specific subject. | 04-19-2012 |
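The compare-then-fall-back logic of the entry above can be sketched as below, with OpenCV's normalized cross-correlation standing in for the block-similarity measure; detect_subject() is a hypothetical detector supplied by the caller, and the similarity threshold is an assumption.

```python
# Track by template matching; redetect when the best similarity drops below a threshold.
import cv2

def track_or_redetect(prev_block, frame, detect_subject, sim_threshold=0.6):
    """prev_block: patch cut from the previous frame; frame: current grayscale frame."""
    scores = cv2.matchTemplate(frame, prev_block, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(scores)
    if best_score >= sim_threshold:
        x, y = best_loc
        return (x, y, prev_block.shape[1], prev_block.shape[0])   # keep tracking this block
    return detect_subject(frame)   # similarity too low: reset the block via detection
```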
20120093369 | METHOD, TERMINAL DEVICE, AND COMPUTER-READABLE RECORDING MEDIUM FOR PROVIDING AUGMENTED REALITY USING INPUT IMAGE INPUTTED THROUGH TERMINAL DEVICE AND INFORMATION ASSOCIATED WITH SAME INPUT IMAGE - The present invention relates to a method for providing augmented reality (AR) by using an image inputted to a terminal and information relating to the inputted image. The method includes the steps of: (a) acquiring recognition information on an object included in the image inputted through the terminal; (b) instructing to search detailed information on the recognized object and providing a tag accessible to the detailed information, if the searched detailed information is acquired, on a location of the object appearing on a screen of the terminal in a form of the augmented reality; and (c) displaying the detailed information corresponding to the tag, if the tag is selected, in the form of the augmented reality; wherein, at the step (b), the information on the location of the object is acquired by applying an image recognition process to the inputted image. | 04-19-2012 |
20120099762 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - In the case of detecting a face contained in an image, the face is detected in all directions of the image by combining the rotation of a detector in the face detecting direction and the rotation of the image itself. If the angle made by the image direction and the detecting direction of the detector is an angle at which image deterioration readily occurs, the detection range of the detector is made narrower than that for an angle at which image deterioration hardly occurs. | 04-26-2012 |
20120099763 | IMAGE RECOGNITION APPARATUS - An image recognition part of an image recognition apparatus recognizes an object based on a target area in an outside-vehicle image obtained by a camera installed in a vehicle. A position identifying part identifies an optical axis position of the camera relative to the vehicle based on the outside-vehicle image, and an area changing part changes a position of the target area in the outside-vehicle image according to the optical axis position of the camera. Therefore, it is possible to recognize an object properly based on the target area in the outside-vehicle image even though the optical axis position of the camera is displaced. | 04-26-2012 |
20120099764 | CALCULATING TIME TO GO AND SIZE OF AN OBJECT BASED ON SCALE CORRELATION BETWEEN IMAGES FROM AN ELECTRO OPTICAL SENSOR - A method and a system for calculating a time to go value between a vehicle and an intruding object. A first image of the intruding object at a first point of time is retrieved. A second image of the intruding object at a second point of time is retrieved. The first image and the second image are filtered so that the first image and the second image become independent of absolute signal energy and so that edges become enhanced. An X fractional pixel position and a Y fractional pixel position are set to zero. The X fractional pixel position denotes a horizontal displacement at sub pixel level and the Y fractional pixel position denotes a vertical displacement at sub pixel level. A scale factor is selected. The second image is scaled with the scale factor and resampled to the X fractional pixel position and the Y fractional pixel position, which results in a resampled scaled image. Correlation values are calculated between the first image and the resampled scaled image for different horizontal displacements at pixel level and different vertical displacements at pixel level for the resampled scaled image. A maximum correlation value at a subpixel level is found based on the correlation values. The X fractional pixel position and the Y fractional pixel position are also updated. j is set to j=j+1 and scaling of the second image, calculation of correlation values, finding the maximum correlation value and setting of j to j=j+1 are repeated a predetermined number of times. i is set to i=i+1 and selecting the scale factor, scaling of the second image, calculation of correlation values, finding the maximum correlation value, setting of j to j=j+1, and setting of i to i=i+1 are repeated a predetermined number of times. A largest maximum correlation value is found among the maximum correlation values, together with the scale factor associated with the largest maximum correlation value. The time to go is calculated based on the scale factor. | 04-26-2012 |
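Setting aside the fractional-pixel refinement loops, the core of the entry above is a search over scale factors followed by the standard looming relation; the sketch below shrinks the later image over a coarse grid of candidate factors, keeps the factor giving the highest normalized correlation against the earlier image, and converts it to a time-to-go via ttg ≈ dt / (expansion − 1). The scale grid, the OpenCV calls, and the omission of sub-pixel updates are simplifications of the described method.

```python
# Coarse scale search + looming relation; a simplified take on the described method.
import cv2
import numpy as np

def time_to_go(img1, img2, dt, scales=np.linspace(0.75, 0.99, 25)):
    """img1, img2: grayscale crops of the intruding object at times t and t + dt."""
    img1 = img1.astype(np.float32)
    best_scale, best_corr = None, -np.inf
    for s in scales:                                   # shrink the later (larger) image
        shrunk = cv2.resize(img2, None, fx=s, fy=s,
                            interpolation=cv2.INTER_AREA).astype(np.float32)
        if shrunk.shape[0] > img1.shape[0] or shrunk.shape[1] > img1.shape[1]:
            continue
        corr = float(cv2.matchTemplate(img1, shrunk, cv2.TM_CCOEFF_NORMED).max())
        if corr > best_corr:
            best_scale, best_corr = s, corr
    expansion = 1.0 / best_scale                       # apparent growth between the frames
    return dt / (expansion - 1.0)                      # time to go, in the units of dt
```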
20120099765 | METHOD AND SYSTEM OF VIDEO OBJECT TRACKING - Methods and systems are provided to determine a target tracking box that surrounds a moving target. The pixels that define an image within the target tracking box can be classified as background pixels, foreground pixels, and changing pixels which may include pixels of an articulation, such as a portion of the target that moves relatively to the target tracking box. Identification of background image pixels improves the signal-to-noise ratio of the image, which is defined as the ratio of the number of pixels belonging to the foreground to the number of changing pixels, and which is used to track the moving target. Accordingly, tracking of small and multiple moving targets becomes possible because of the increased signal-to-noise ratio. | 04-26-2012 |
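A toy rendering of the pixel bookkeeping in the entry above: pixels close to a background model count as background, remaining pixels that changed strongly since the previous frame count as changing, the rest are foreground, and the signal-to-noise ratio is the foreground-to-changing count ratio. Both tolerance values are assumptions.

```python
# Classify tracking-box pixels and report the foreground/changing ratio used as SNR.
import numpy as np

def box_snr(curr_box, prev_box, background_model, bg_tol=10, change_tol=10):
    """All arguments: equally sized grayscale patches of the target tracking box."""
    curr = curr_box.astype(int)
    is_background = np.abs(curr - background_model.astype(int)) < bg_tol
    is_changing = ~is_background & (np.abs(curr - prev_box.astype(int)) >= change_tol)
    is_foreground = ~is_background & ~is_changing
    return int(is_foreground.sum()) / max(int(is_changing.sum()), 1)   # higher = cleaner track
```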
20120106781 | SIGNATURE BASED DRIVE-THROUGH ORDER TRACKING SYSTEM AND METHOD - A system and method for providing signature-based drive-through order tracking. An image with respect to a vehicle at a POS unit can be captured at an order point and a delivery point (e.g., a payment point and a pick-up point) utilizing an image capturing unit by detecting the presence of the vehicle at each point utilizing a vehicle presence sensor. The captured image can be processed in order to extract a small region of interest and can be reduced to a unique signature. The extracted signature of the vehicle at the order point can be stored into a database together with the corresponding order and the vehicle image. The signature extracted at the delivery point can be matched with the signature stored in the database. If a match is found, the order associated with the vehicle together with the images captured at the delivery point and the order point can be displayed in a user interface at the delivery point to ensure that the right order is delivered to a customer. | 05-03-2012 |
20120106782 | Detector for chemical, biological and/or radiological attacks - This specification generally relates to methods and algorithms for detection of chemical, biological, and/or radiological attacks. The methods use one or more sensors that can have visual, audio, and/or thermal sensing abilities and can use algorithms to determine, based on behavior patterns of people, whether there has been a chemical, biological, and/or radiological attack. | 05-03-2012 |
20120106783 | OBJECT TRACKING METHOD - An object tracking method includes steps of obtaining multiple first classifications of pixels within a first focus frame in a first frame picture, wherein the first focus frame includes an object to be tracked and has a first rectangular frame in a second frame picture; performing a positioning process to obtain a second rectangular frame; and obtaining color features of pixels around the second rectangular frame sequentially and establishing multiple second classifications according to the color feature. The established second classifications are compared with the first classifications sequentially to obtain an approximation value, compared with a predetermined threshold. The second rectangular frame is progressively adjusted, so as to establish a second focus frame. By analyzing color features of the pixels of the object and with a classification manner, the efficacy of detecting a shape and size of the object so as to update information of the focus frame is achieved. | 05-03-2012 |
20120106784 | APPARATUS AND METHOD FOR TRACKING OBJECT IN IMAGE PROCESSING SYSTEM - A method, apparatus, and system track an object in an image or a video. Pose information is extracted using a relation of at least one feature point extracted in a first Region of Interest (RoI). A pose is estimated using the pose information. A second RoI is set using the pose. The second RoI is then estimated using a filtering scheme. | 05-03-2012 |
20120106785 | METHODS AND SYSTEMS FOR PRE-PROCESSING TWO-DIMENSIONAL IMAGE FILES TO BE CONVERTED TO THREE-DIMENSIONAL IMAGE FILES - Disclosed herein are methods and systems of efficiently, effectively, and accurately preparing images for a 2D to 3D conversion process by pre-treating occlusions and transparencies in original 2D images. A single 2D image, or a sequence of images, is ingested, segmented into discrete elements, and the discrete elements are individually reconstructed. The reconstructed elements are then re-composited and ingested into a 2D to 3D conversion process. | 05-03-2012 |
20120106786 | OBJECT DETECTING DEVICE - An object detecting device includes a camera ECU that detects an object from image data of a predetermined area that has been captured by a monocular camera, a fusion processing portion that calculates the pre-correction horizontal width of the detected object, a numerical value calculating portion that estimates the length in the image depth direction of the calculated pre-correction horizontal width, and a collision determining portion that corrects the pre-correction horizontal width calculated by the fusion processing portion, based on the estimated length in the image depth direction. | 05-03-2012 |
20120106787 | APPARATUS AND METHODS FOR ANALYSING GOODS PACKAGES - An apparatus for constructing a data model of a goods package from a series of images, one of the series of images comprising an image of the goods package, comprises a processor and a memory for storing one or more routines. When the one or more routines are executed under control of the processor the apparatus extracts element data from goods package elements in the series of images and constructs the data model by associating element data from a number of visible sides of the goods package with the goods package. The apparatus may also analyse a candidate character string read in an OCR process from one of the series of images of the goods package. The apparatus may also analyse a barcode read from an image of a goods package. | 05-03-2012 |
20120106788 | Image Measuring Device, Image Measuring Method, And Computer Program - Provided are an image measuring device, an image measuring method, and a computer program, capable of performing accurate calibration and accurately measuring a desired physical quantity even in the case of an object to be measured having a shape in which selection and tracking of target points are difficult or an object to be measured moving as time elapses. Frame images are played back frame by frame, and selection of a plurality of frame images is accepted from frame images played back frame by frame. A synthesized image in which the selected and accepted frame images are superimposed is generated. The generated synthesized image is displayed, and a predetermined physical quantity is measured on the displayed synthesized image. | 05-03-2012 |
20120106789 | IMAGE PROCESSING APPARATUS AND METHOD AND PROGRAM - An image processing apparatus includes an image input configured to receive image data, a target extraction device configured to extract an object from the image data as a target object based on recognizing a first movement by the object, and a gesture recognition device configured to issue a command based on recognizing a second movement by the target object. | 05-03-2012 |
20120106790 | Face or Other Object Detection Including Template Matching - A template matching module is configured to program a processor to apply multiple differently-tuned object detection classifier sets in parallel to a digital image to determine one or more of an object type, configuration, orientation, pose or illumination condition, and to dynamically switch between object detection templates to match a determined object type, configuration, orientation, pose, blur, exposure and/or directional illumination condition. | 05-03-2012 |
20120106791 | IMAGE PROCESSING APPARATUS AND METHOD THEREOF - An image processing apparatus such as a surveillance apparatus and method thereof are provided. The image processing apparatus includes: an object detecting unit which detects a plurality of moving objects from at least one of two or more images obtained by photographing a surveillance area from two or more view points, respectively; a depth determination unit which determines depths of the moving objects based on the two or more images, wherein the depth determination unit determines the moving objects as different objects if the moving objects have different depths. | 05-03-2012 |
20120106792 | USER INTERFACE APPARATUS AND METHOD USING MOVEMENT RECOGNITION - A movement recognition method and a user interface are provided. A skin color is detected from a reference face area of an image. A movement-accumulated area, in which movements are accumulated, is detected from sequentially accumulated image frames. Movement information corresponding to the skin color is detected from the detected movement-accumulated area. A user interface screen is created and displayed using the detected movement information. | 05-03-2012 |
20120106793 | METHOD AND SYSTEM FOR IMPROVING THE QUALITY AND UTILITY OF EYE TRACKING DATA - A system and method for interpreting eye-tracking data are provided. The system and method comprise receiving raw data from an eye tracking study performed using an eye tracking mechanism and structural information pertaining to an electronic document that was the subject of the study. The electronic document and its structural information are used to compute a plurality of transition probability values. The eye-tracking data and the transition probability values are used to compute a plurality of gaze probability values. Using the transition probability values and the gaze probability values, a maximally probable transition sequence corresponding to the most likely direction of the user's gaze upon the document is identified. | 05-03-2012 |
20120106794 | METHOD AND APPARATUS FOR TRAJECTORY ESTIMATION, AND METHOD FOR SEGMENTATION - A trajectory estimation apparatus includes: an image acceptance unit which accepts images that are temporally sequential and included in the video; a hierarchical subregion generating unit which generates subregions at hierarchical levels by performing hierarchical segmentation on each of the images accepted by the image acceptance unit such that, among subregions belonging to hierarchical levels different from each other, a spatially larger subregion includes spatially smaller subregions; and a representative trajectory estimation unit which estimates, as a representative trajectory, a trajectory, in the video, of a subregion included in a certain image, by searching for a subregion that is most similar to the subregion included in the certain image, across hierarchical levels in an image different from the certain image. | 05-03-2012 |
20120106795 | SYSTEM AND METHOD FOR OPTIMIZING CAMERA SETTINGS - There is provided a recognition system. The recognition system is coupled to an image capturing device, and determines a first matching percentage by comparing a first live image with a first reference image, determines a second matching percentage by comparing a second live image with the first reference image, compares the first matching percentage with the second matching percentage to determine a direction of adjustment of a setting of the image capturing device, and generates a feedback signal to adjust the setting based on the direction of adjustment. The first live image and second live image are captured by the image capturing device. | 05-03-2012 |
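The feedback scheme in 20120106795 above amounts to a simple hill climb on a camera setting: compare two live captures against the reference image and keep stepping the setting in whichever direction raises the matching percentage. The sketch below illustrates that idea only; `capture` and `match_score` are hypothetical stand-ins for the image capturing device and the recognition comparison, and the step size is arbitrary.

```python
# A minimal hill-climbing sketch of the feedback loop; not the patented implementation.
# capture(setting) and match_score(image, reference) are hypothetical callables supplied
# by the caller.

def optimize_setting(capture, match_score, reference, setting, step=1.0, iterations=20):
    """Nudge one camera setting in whichever direction improves the match score."""
    prev_score = match_score(capture(setting), reference)
    for _ in range(iterations):
        candidate = setting + step
        score = match_score(capture(candidate), reference)
        if score >= prev_score:      # this direction of adjustment helps: keep it
            setting, prev_score = candidate, score
        else:                        # it got worse: reverse the direction of adjustment
            step = -step
    return setting
```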
20120106796 | CREATING A CUSTOMIZED AVATAR THAT REFLECTS A USER'S DISTINGUISHABLE ATTRIBUTES - A capture system captures detectable attributes of a user. A differential system compares the detectable attributes with a normalized model of attributes, wherein the normalized model of attributes characterize normal representative attribute values across a sample of a plurality of users and generates differential attributes representing the differences between the detectable attributes and the normalized model of attributes. Multiple separate avatar creator systems receive the differential attributes and each apply the differential attributes to different base avatars to create custom avatars which reflect a selection of the detectable attributes of the user which are distinguishable from the normalized model of attributes. | 05-03-2012 |
20120106797 | IDENTIFICATION OF OBJECTS IN A VIDEO - Techniques related to identifying objects in a video are generally described. One example method for identifying a moving object in a video may include generating a background frame and a foreground frame based on the video, comparing the foreground and the background frames at each corresponding location, acquiring an object area based on the comparison, determining whether the object area contains a moving object based on the size and shape of the object area, identifying the moving object against templates of target objects, and updating the background frame according to the comparison. | 05-03-2012 |
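A rough NumPy-only reading of the pipeline in 20120106797 above (foreground/background comparison, a crude size test on the object area, and a running background update) might look like the sketch below; the thresholds, the blending factor, and the omission of the shape test and template matching are all assumptions made for illustration.

```python
import numpy as np

def detect_moving_object(foreground_frame, background_frame,
                         diff_thresh=25, min_area=200, alpha=0.05):
    """Return a candidate moving-object mask and an updated background frame."""
    diff = np.abs(foreground_frame.astype(np.int16) - background_frame.astype(np.int16))
    mask = diff > diff_thresh                       # pixels that changed enough
    has_object = mask.sum() >= min_area             # crude size test for the object area
    # Update the background only where nothing moved, with an exponential running average.
    updated = background_frame.astype(np.float32)
    updated[~mask] = (1 - alpha) * updated[~mask] + alpha * foreground_frame[~mask]
    if not has_object:
        mask = np.zeros_like(mask)
    return mask, updated.astype(background_frame.dtype)
```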
20120106798 | SYSTEM AND METHOD FOR EXTRACTING REPRESENTATIVE FEATURE - A representative feature extraction system which selects a representative feature from an input data group includes: occurrence distribution memory means for memorizing an occurrence distribution with respect to feature quantities assumed to be input; evaluation value calculation means for calculating, with respect to each of data items in the data group, the sum of distances to the other data items included in the data group based on the occurrence distribution, to determine an evaluation value for the data item; and data selecting means for selecting the data item having the smallest evaluation value as a representative feature of the data group. | 05-03-2012 |
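The selection rule in 20120106798 above (pick the item whose summed distance to the other items is smallest) is essentially a medoid. The sketch below shows that rule with plain Euclidean distances; the occurrence-distribution weighting mentioned in the abstract is left out, so treat it as an illustration rather than the patented calculation.

```python
import numpy as np

def representative_feature(features):
    """features: (n, d) array. Return the row with the smallest sum of distances to the others."""
    diffs = features[:, None, :] - features[None, :, :]    # pairwise difference vectors
    dists = np.sqrt((diffs ** 2).sum(axis=-1))             # (n, n) Euclidean distance matrix
    scores = dists.sum(axis=1)                             # evaluation value per data item
    return features[np.argmin(scores)]
```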
20120106799 | TARGET DETECTION METHOD AND APPARATUS AND IMAGE ACQUISITION DEVICE - The present invention provides a target detection method comprising the following steps: controlling a modulated light emitting device to emit optical pulse signals with a first light intensity and a second light intensity to a target to be detected and a background, wherein the capabilities of the target to be detected and the background to reflect the light pulse signals are different; controlling an image sensor to acquire images of the target to be detected and the background, wherein the image sensor comprises a plurality of image acquisition regions and successively scans the same image acquisition region once in the first light intensity and once in the second light intensity to obtain a first light intensity image and a second light intensity image, and stores them into corresponding locations in a first frame image and a second frame image respectively; and distinguishing the target to be detected from the background using the first frame image and the second frame image. The present invention also provides a target detection apparatus and an image acquisition device. This invention can precisely detect targets, even moving targets, in a strong light background. | 05-03-2012 |
20120114171 | EDGE DIVERSITY OBJECT DETECTION - Methods for detecting objects in an image. The method includes a) receiving magnitude and orientation values for each pixel in an image and b) assigning each pixel to one of a predetermined number of orientation bins based on the orientation value of each pixel. The method also includes c) determining, for a first pixel, a maximum of all the pixel magnitude values for each orientation bin in a predetermined region surrounding the first pixel. The method also includes d) summing the maximum pixel magnitude values for each of the orientation bins in the predetermined region surrounding the first pixel, e) assigning the sum to the first pixel and f) repeating steps c), d) and e) for all the pixels in the image. | 05-10-2012 |
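Steps a) through f) of 20120114171 above translate almost directly into code: bin each pixel by gradient orientation, then for every pixel take the maximum magnitude per bin inside a surrounding window and sum those maxima. The unoptimized sketch below assumes orientations in radians; the bin count and window radius are illustrative choices, not values from the patent.

```python
import numpy as np

def edge_diversity_map(magnitude, orientation, n_bins=8, radius=4):
    """magnitude, orientation: (H, W) arrays; orientation in radians in [0, 2*pi)."""
    h, w = magnitude.shape
    bins = np.floor(orientation / (2 * np.pi) * n_bins).astype(int) % n_bins
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            m, b = magnitude[y0:y1, x0:x1], bins[y0:y1, x0:x1]
            # Maximum magnitude observed in each orientation bin of the window, then summed.
            out[y, x] = sum(m[b == k].max() for k in range(n_bins) if np.any(b == k))
    return out
```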
20120114172 | TECHNIQUES FOR FACE DETECTION AND TRACKING - Techniques are disclosed that involve face detection. For instance, face detection tasks may be decomposed into sets of one or more sub-tasks. In turn the sub-tasks of the sets may be allocated across multiple image frames. This allocation may be based on a resource budget. In addition, face tracking tasks may be performed. | 05-10-2012 |
20120114173 | IMAGE PROCESSING DEVICE, OBJECT TRACKING DEVICE, AND IMAGE PROCESSING METHOD - An edge extracting unit of a contour image generator generates an edge image of an input image using an edge extraction filter, etc. A foreground processing unit extracts the foreground from the input image using a background image and expands the foreground to generate an expanded foreground image. The foreground processing unit further generates a foreground boundary image constructed of the boundary of the expanded foreground region. A mask unit masks the edge image using the expanded foreground image to eliminate edges in the background. A synthesis unit synthesizes the masked edge image and the foreground boundary image to generate a contour image. | 05-10-2012 |
20120114174 | Voxel map generator and method thereof - A volume cell (VOXEL) map generation apparatus includes an inertia measurement unit to calculate inertia information by calculating inertia of a volume cell (VOXEL) map generator, a Time of Flight (TOF) camera to capture an image of an object, thereby generating a depth image of the object and a black-and-white image of the object, an estimation unit to calculate position and posture information of the VOXEL map generator by performing an Iterative Closest Point (ICP) algorithm on the basis of the depth image of the object, and to recursively estimate a position and posture of the VOXEL map generator on the basis of VOXEL map generator inertia information calculated by the inertia measurement unit and VOXEL map generator position and posture information calculated by the ICP algorithm, and a grid map construction unit to configure a grid map based on the recursively estimated VOXEL map generator position and posture. | 05-10-2012 |
20120114175 | OBJECT POSE RECOGNITION APPARATUS AND OBJECT POSE RECOGNITION METHOD USING THE SAME - An object pose recognition apparatus and method. The object pose recognition method includes acquiring first image data of an object to be recognized and 3-dimensional (3D) point cloud data of the first image data, and storing the first image data and the 3D point cloud data in a database, receiving input image data of the object photographed by a camera, extracting feature points from the stored first image data and the input image data, matching the stored 3D point cloud data and the input image data based on the extracted feature points and calculating a pose of the photographed object, and shifting the 3D point cloud data based on the calculated pose of the object, restoring second image data based on the shifted 3D point cloud data, and re-calculating the pose of the object using the restored second image data and the input image data. | 05-10-2012 |
20120114176 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes an object detection unit configured to detect an object from an image, a tracking unit configured to track the detected object, a trajectory management unit configured to manage a trajectory of the object being tracked, and a specific object detection unit configured to detect a specific object from the image. In a case where the specific object detection unit detects that the object being tracked by the tracking unit is the specific object, the trajectory management unit manages the trajectory of the object being tracked at time points before the detection as the trajectory of the specific object. | 05-10-2012 |
20120114177 | IMAGE PROCESSING SYSTEM, IMAGE CAPTURE APPARATUS, IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREFOR, AND PROGRAM - There is provided an image processing system in which an image capture apparatus and an image processing apparatus are connected to each other via a network. When a likelihood indicating the probability that a detection target object detected from a captured image is a predetermined type of object does not meet a designated criterion, the image capture apparatus generates tentative object information for the detection target object, and transmits it to the image processing apparatus. The image processing apparatus detects, from detection targets designated by the tentative object information, a detection target as the predetermined type of object. | 05-10-2012 |
20120114178 | VISION SYSTEM AND METHOD OF ANALYZING AN IMAGE - A vision system comprises a camera that captures an image and a processor coupled to process the received image to determine at least one feature descriptor for the image. The processor includes an interface to access annotated map data that includes geo-referenced feature descriptors. The processor is configured to perform a matching procedure between the at least one feature descriptor determined for the at least one image and the retrieved geo-referenced feature descriptors. | 05-10-2012 |
20120114179 | FACE DETECTION DEVICE, IMAGING APPARATUS AND FACE DETECTION METHOD - A face detection device for detecting the face of a person in an input image may include the following elements: a face detection circuit including a hardware circuit configured to detect a face in an input image; a signal processing circuit configured to perform signal processing based on an input image signal in accordance with a rewritable program including a face detection program for detecting a face in an input image; and a controller configured to allow the face detection circuit and the signal processing circuit to perform face detection on an image of a frame or on respective images of adjacent frames among consecutive frames, and to control face detection by the signal processing circuit on the basis of a face detection result obtained by the face detection circuit. | 05-10-2012 |
20120114180 | Identification Of Objects In A 3D Video Using Non/Over Reflective Clothing - A computing system generates a depth map from at least one image, detects objects in the depth map, and identifies anomalies in the objects from the depth map. Another computing system identifies at least one anomaly in an object in a depth map, and uses the anomaly to identify future occurrences of the object. A system includes a three dimensional (3D) imaging system to generate a depth map from at least one image, an object detector to detect objects within the depth map, and an anomaly detector to detect anomalies in the detected objects, wherein the anomalies are logical gaps and/or logical protrusions in the depth map. | 05-10-2012 |
20120121123 | INTERACTIVE DEVICE AND METHOD THEREOF - An interactive device is provided. The interactive device has a display device; a camera, for continuously filming a plurality of images in front of the display device, wherein the plurality of images includes at least one first object; and a processor, connected to the display device and the camera, for receiving the plurality of images, displaying the plurality of images on the display device, determining occurrence of an interactive movement of the first object in the plurality of images, designating an interactive object in the plurality of images when the interactive movement is detected, analyzing at least one characteristic of the interactive object, and controlling displayed images on the display device according to a trace of the interactive object. | 05-17-2012 |
20120121124 | Method for optical pose detection - The tracking and compensation of patient motion during a magnetic resonance imaging (MRI) acquisition is an unsolved problem. A self-encoded marker where each feature on the pattern is augmented with a 2-D barcode is provided. Hence, the marker can be tracked even if it is not completely visible in the camera image. Furthermore, it offers considerable advantages over a simple checkerboard marker in terms of processing speed, since it makes the correspondence search of feature points and marker-model coordinates, which are required for the pose estimation, redundant. Significantly improved accuracy relative to a planar checkerboard pattern is obtained for both phantom experiments and in-vivo experiments with substantial patient motion. In an alternative aspect, a marker having non-coplanar features can be employed to provide improved motion tracking. Such a marker provides depth cues that can be exploited to improve motion tracking. The aspects of non-coplanar patterns and self-encoded patterns can be practiced independently or in combination. | 05-17-2012 |
20120121125 | METHODS AND SYSTEMS FOR SOLAR SHADE ANALYSIS - A device for performing solar shade analysis combines a spherical reflective dome and a ball compass mounted on a platform, with a compass alignment mark and four dots in the corners of the platform. A user may place the device on a surface of a roof, or in another location where solar shading analysis is required. A user, while standing above the device can take a photo of the device. The photographs can then be used in order to evaluate solar capacity and perform shade analysis for potential sites for solar photovoltaic systems. By using the device in conjunction with a mobile device having a camera, photographs may be taken and uploaded, to be analyzed and processed to determine a shading percentage. For example, the solar shade analysis system may calculate the percentage of time that the solar photovoltaic system might be shaded for each month of the year. These measurements and data, or similar measurements and data, may be valuable when applying for solar rebates or solar installation permits. | 05-17-2012 |
20120121126 | METHOD AND APPARATUS FOR ESTIMATING FACE POSITION IN 3 DIMENSIONS - An apparatus and method for estimating a three-dimensional face position. The method of estimating the three-dimensional face position includes acquiring two-dimensional image information from a single camera, detecting a face region of a user from the two-dimensional image information, calculating the size of the detected face region, estimating a distance between the single camera and the user's face using the calculated size of the face region, and obtaining positional information of the user's face in a three-dimensional coordinate system using the estimated distance between the single camera and the user's face. Accordingly, it is possible to estimate the distance between the user and the single camera using the size of the face region of the user in the image information acquired by the single camera so as to acquire the three-dimensional position coordinates of the user. | 05-17-2012 |
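Under a pinhole-camera assumption, the size-to-distance step in 20120121126 above reduces to similar triangles: range ≈ focal length × real face width / face width in pixels, after which the 2-D detection back-projects to 3-D coordinates. The focal length, principal point, and assumed physical face width in the sketch below are hypothetical calibration values, not numbers from the patent.

```python
def face_position_3d(face_box, focal_px=800.0, real_face_width_m=0.16,
                     principal_point=(320.0, 240.0)):
    """face_box = (u, v, w, h): detected face rectangle in pixels (top-left corner and size)."""
    u, v, w, h = face_box
    z = focal_px * real_face_width_m / w         # similar-triangles range estimate
    cx, cy = principal_point
    x = ((u + w / 2.0) - cx) * z / focal_px      # back-project the face centre to 3-D
    y = ((v + h / 2.0) - cy) * z / focal_px
    return x, y, z
```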
20120121127 | IMAGE PROCESSING APPARATUS AND NON-TRANSITORY STORAGE MEDIUM STORING IMAGE PROCESSING PROGRAM - An image processing apparatus executes: acquiring, on a first image having a pattern with first areas and second areas of a different color from the first areas, a center position of the pattern where the first areas and the second areas cross; acquiring boundary positions between the first and second areas; converting the first image to a second image whose image distortion is corrected by using the center position and the boundary positions; acquiring, by scanning the second image, expectation values, which are areas including points where the first and second areas cross, excluding the center position; acquiring an intersection position on the second image based on the expectation values; acquiring, by inverting the second image to the first image, the positions on the first image corresponding to the center position and the intersection position; and determining the points corresponding to the acquired positions as features. | 05-17-2012 |
20120121128 | OBJECT TRACKING SYSTEM - The present invention provides a system, method and computer program product for tracking the movement of a plurality of targets, wherein the detected movement is used for the modification of an interactive environment. The system comprises one or more imaging devices configured to capture two or more images of at least some of a plurality of target identifiers with one or more of a plurality of targets. The system further comprises a processing module which is operatively coupled to the one or more imaging devices, and configured to receive and process the two or more images. During the processing a first location parameter and a second location parameter for a predetermined region are determined. The one or more movement parameters are at least in part determined from the first and second location parameters and used for the modification of the interactive environment. | 05-17-2012 |
20120121129 | IMAGE PROCESSING APPARATUS - An image processing apparatus includes a first searcher. The first searcher searches for, from a designated image, one or at least two first partial images each of which represents a face portion. A second searcher searches for, from the designated image, one or at least two second partial images each of which represents a rear of a head. A first setter sets a region corresponding to the one or at least two first partial images detected by the first searcher as a reference region for an image quality adjustment. A second setter sets a region different from a region corresponding to the one or at least two second partial images detected by the second searcher as the reference region. A start-up controller selectively starts up the first setter and the second setter so that the first setter has priority over the second setter. | 05-17-2012 |
20120121130 | FLEXIBLE COMPUTER VISION - A method for flexible interest point computation, comprising: producing multiple octaves of a digital image, wherein each octave of said multiple scale octaves comprises multiple layers; initiating a process comprising detection and description of interest points, wherein said process is programmed to progress layer-by-layer over said multiple layers of each of said multiple octaves, and to continue to a next octave of said multiple octaves upon completion of all layers of a current octave of said multiple octaves; upon the detection and the description of each interest point of said interest points during said process, recording an indication associated with said interest point in a memory, such that said memory accumulates indications during said process; and upon interruption to said process, returning a result being based at least on said indications. | 05-17-2012 |
20120121131 | METHOD AND APPARATUS FOR ESTIMATING POSITION OF MOVING VEHICLE SUCH AS MOBILE ROBOT - An apparatus for estimating a position of a moving vehicle such as a robot includes a feature point matching unit which generates vectors connecting feature points of a previous image frame and feature points of a current image frame corresponding to the feature points of the previous image frame, and determines spatial correlations between the feature points of the current image frame, a clustering unit which configures at least one motion cluster by grouping at least one vector among the vectors based on the spatial correlations in a feature space, and a noise removal unit which removes noise from each motion cluster, wherein the position of the moving vehicle is estimated based on the at least one motion cluster. | 05-17-2012 |
20120121132 | OBJECT RECOGNITION METHOD, OBJECT RECOGNITION APPARATUS, AND AUTONOMOUS MOBILE ROBOT - To carry out satisfactory object recognition in a short time. An object recognition method in accordance with an exemplary aspect of the present invention is an object recognition method for recognizing a target object by using a preliminarily-created object model. The object recognition method generates a range image of an observed scene, detects interest points from the range image, extracts first features, the first features being features of an area containing the interest points, carries out a matching process between the first features and second features, the second features being features of an area in the range image of the object model, calculates a transformation matrix based on a result of the matching process, the transformation matrix being for projecting the second features on a coordinate system of the observed scene, and recognizes the target object with respect to the object model based on the transformation matrix. | 05-17-2012 |
20120121133 | SYSTEM FOR DETECTING VARIATIONS IN THE FACE AND INTELLIGENT SYSTEM USING THE DETECTION OF VARIATIONS IN THE FACE - A face change detection system is provided, comprising an image input unit acquiring a plurality of input images, a face extraction unit extracting a face region of the input images, and a face change extraction unit detecting a face change in the input images by calculating an amount of change in the face region. | 05-17-2012 |
20120121134 | CONTROL APPARATUS, CONTROL METHOD, AND PROGRAM - The present invention relates to a control apparatus, a control method, and a program in which, when automatic image-recording is performed, the frequency with which image-recording is performed can be suitably changed in accordance with, for example, a user's intention or the state of an imaging apparatus. | 05-17-2012 |
20120121135 | POSITION AND ORIENTATION CALIBRATION METHOD AND APPARATUS - A position and orientation measuring apparatus calculates a difference between an image feature of a two-dimensional image of an object and a projected image of a three-dimensional model in a stored position and orientation of the object projected on the two-dimensional image. The position and orientation measuring apparatus further calculates a difference between three-dimensional coordinate information and a three-dimensional model in the stored position and orientation of the object. The position and orientation measuring apparatus then converts a dimension of the first difference and/or the second difference to cause the first difference and the second difference to have an equivalent dimension and corrects the stored position and orientation. | 05-17-2012 |
20120128201 | BI-MODAL DEPTH-IMAGE ANALYSIS - A depth-image analysis system calculates first mode skeletal data representing a human target in an observed scene if a portion of the human target is observed with a first set of joint positions, and calculates second mode skeletal data representing the human target in the observed scene if the portion of the human target is observed with a second set of joint positions different than the first set of joint positions. The first mode skeletal data and the second mode skeletal data have different skeletal joint constraints. | 05-24-2012 |
20120128202 | Image processing apparatus, image processing method and computer readable information recording medium - An image processing apparatus includes an obtaining part configured to obtain a plurality of images including a photographing object photographed by a photographing part; a determination part configured to detect a shift in position between a first image and a second image included in the plurality of images obtained by the obtaining part, and determine whether the first image is suitable for being superposed on the second image; a selection part configured to select a certain number of images from the plurality of images based on a determination result of the determination part; and a synthesis part configured to synthesize the certain number of images selected by the selection part. | 05-24-2012 |
20120128203 | MOTION ANALYZING APPARATUS - A sensor unit is installed to a target object and detects a given physical amount. A data acquisition unit acquires output data of the sensor unit in a period including a first period for which a real value of a value of m time integrals of the physical amount is known and a second period that is a target for motion analysis. An error time function estimating unit performs m time integrals of the output data of the sensor unit and estimates a time function of an error of a value of the physical amount detected by the sensor unit with respect to the real value of the value of the physical amount detected by the sensor unit based on a difference between a value of m time integrals of the output data and the real value for the first period. | 05-24-2012 |
20120128204 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus includes a selection unit configured to select a plurality of specific areas of a target object, a learning unit configured to learn a detection model that relates to each of the plurality of specific areas, a generation unit configured to generate an area combination as a combination of specific areas selected from the plurality of specific areas, a recognition unit configured to recognize the target object based on the detection model and the area combination, and an addition unit configured to add a new specific area based on a recognition result obtained by the recognition unit. If the new specific area is added by the addition unit, the learning unit further learns a detection model that relates to the new specific area. | 05-24-2012 |
20120128205 | APPARATUS FOR PROVIDING SPATIAL CONTENTS SERVICE AND METHOD THEREOF - Disclosed herein is an apparatus for providing spatial contents service which includes a spatial contents insertion unit, a spatial contents generation unit, a topological relationship generation unit, and a spatial contents composition unit. The spatial contents insertion unit extracts spatial objects included in an image. The spatial contents generation unit generates primary spatial contents corresponding to the image. The topological relationship generation unit compares spatial location information of the primary spatial contents with spatial location information of one or more pieces of secondary spatial contents, and defines a spatial topological relationship between the primary spatial contents and the secondary spatial contents. The spatial contents composition unit couples or links the secondary spatial contents, which has a spatial topological relationship with the primary spatial contents, to the primary spatial contents. | 05-24-2012 |
20120128206 | OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD, AND COMPUTER-READABLE MEDIUM RECORDING PROGRAM THEREFOR - An object detection device includes: an obtaining unit successively obtaining frame images; a first determination unit determining whether a first similarity between a reference image and a first image region in one of the obtained frame images is less than a first threshold value; a second determination unit determining whether a second similarity between the reference image and a second image region, included in a frame image obtained before the one of the frame images and corresponding to the first image region, is less than a second threshold value larger than the first threshold value, when the first determination unit determines that the first similarity is not less than the first threshold value; and a detection unit detecting the first image region as a region of a particular object image when the second determination unit determines that the second similarity is not less than the second threshold value. | 05-24-2012 |
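The two-threshold test in 20120128206 above can be written out literally: the current frame only needs to clear the looser first threshold, and the region is reported as the particular object only if the corresponding region of an earlier frame cleared the stricter second threshold. The similarity values and threshold numbers below are placeholders.

```python
def detect_with_two_thresholds(sim_current, sim_previous, t_first=0.5, t_second=0.7):
    """Return True when the first image region should be detected as the particular object."""
    assert t_second > t_first, "the second threshold is defined to be larger than the first"
    if sim_current < t_first:          # first determination fails: no detection
        return False
    return sim_previous >= t_second    # second determination on the earlier frame's region
```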
20120128207 | DATA ANALYSIS DEVICE, DATA ANALYSIS METHOD, AND PROGRAM - Provided is a data analysis device for automatically detecting a step on the ground based on point cloud data representing a three-dimensional shape of a feature surface. A space subject to analysis is divided into a plurality of subspaces. A boundary search unit ( | 05-24-2012 |
20120128208 | Human Tracking System - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities. | 05-24-2012 |
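One simple way to realize the downsampling-into-voxels step of 20120128208 above is to block-average the depth image and then drop cells that match a background depth model; the block size and tolerance below are illustrative assumptions, and the extremity-finding and model-adjustment stages are omitted.

```python
import numpy as np

def voxelize_depth(depth, background_depth, block=8, tol=100):
    """depth, background_depth: (H, W) arrays in millimetres. Returns a foreground voxel grid."""
    h, w = depth.shape
    h, w = h - h % block, w - w % block                     # crop to a multiple of the block size
    d = depth[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    bg = background_depth[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    foreground = np.abs(d - bg) > tol                       # cells that differ from the background
    return np.where(foreground, d, 0.0)                     # isolated foreground voxels
```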
20120128209 | IMAGE ANALYSIS DEVICE AND IMAGE ANALYSIS PROGRAM - Problem to be Solved: | 05-24-2012 |
20120128210 | Method for Traffic Sign Recognition - The invention relates to a method for traffic sign recognition that analyzes and classifies the image data of a sensor ( | 05-24-2012 |
20120128211 | DISTANCE CALCULATION DEVICE FOR VEHICLE - Provided is a distance calculation device for a vehicle, which can accurately calculate the distance to an object, for example, even when the sunshine condition in an image capture environment changes. In the device, an image quality estimation means ( | 05-24-2012 |
20120134532 | ABNORMAL BEHAVIOR DETECTION SYSTEM AND METHOD USING AUTOMATIC CLASSIFICATION OF MULTIPLE FEATURES - Described herein are a system and a method for abnormal behavior detection using automatic classification of multiple features. Features from various sources, including those extracted from camera input through digital image analysis, are used as input to machine learning algorithms. These algorithms group the features and produce models of normal and abnormal behaviors. Outlying behaviors, such as those identified by their lower frequency, are deemed abnormal. Human supervision may optionally be employed to ensure the accuracy of the models. Once created, these models can be used to automatically classify features as normal or abnormal. This invention is suitable for use in the automatic detection of abnormal traffic behavior such as running of red lights, driving in the wrong lane, or driving against traffic regulations. | 05-31-2012 |
20120134533 | TEMPORAL THERMAL IMAGING METHOD FOR DETECTING SUBSURFACE OBJECTS AND VOIDS - A temporal thermal survey method to locate at a given area whether or not there is a subsurface object or void site. The method uses thermal inertia change detection. It locates temporal heat flows from naturally heated subsurface objects or faulty structures such as corrosion damage. The added value over earlier methods is the use of empirical methods to specify the optimum times for locating subsurface objects or voids amidst clutter and undisturbed host materials. Thermal inertia, or thermal effusivity, is the bulk material resistance to temperature change. Surface temperature highs and lows are shifted in time at the subsurface object or void site relative to the undisturbed host material sites. The Dual-band Infra-Red Effusivity Computed Tomography (DIRECT) method verifies the optimum two times to detect thermal inertia outliers at the subsurface object or void border with undisturbed host materials. | 05-31-2012 |
20120134534 | CONTROL COMPUTER AND SECURITY MONITORING METHOD USING THE SAME - A method for performing security surveillance using a control computer sends an image obtaining request from the control computer to a preset channel of a network video recorder (NVR) or a digital video recorder (DVR), and receives captured images from the preset channel of the NVR or the DVR. The method further detects a specified object in the captured images, and stores/outputs an image area of the specified object in a storage device of the control computer or a terminal device. | 05-31-2012 |
20120134535 | METHOD FOR ADJUSTING PARAMETERS OF VIDEO OBJECT DETECTION ALGORITHM OF CAMERA AND THE APPARATUS USING THE SAME - An apparatus for a video object detection algorithm of a camera includes a video object detection training module and a video object detection application module. The video object detection training module is configured to generate an optimum correspondence between quantified values of environmental variables and parameters of a video object detection algorithm according to a stream of training video signals and a video object detection reference result. The video object detection application module is configured to perform video object detection on a stream of training video signals based on the optimum correspondence between the quantified values of the environmental variables and the parameters of the video object detection algorithm. | 05-31-2012 |
20120134536 | Image Processing Apparatus and Method, and Program - An image processing apparatus includes a depth image obtaining unit configured to obtain a depth image including information on distances from an image-capturing position to a subject in a two-dimensional image to be captured; a local tip portion detection unit configured to detect, as a local tip portion, a portion of the subject at a depth and a position close to the image-capturing position; a projecting portion detection unit configured to detect the local tip portion of a block of interest as a projecting portion in a case where, when each of a plurality of blocks is set as the block of interest, that local tip portion is the local tip portion closest to the image-capturing position within an area formed of the plurality of blocks adjacent to the block of interest; and a tracking unit configured to continuously track the position of the projecting portion. | 05-31-2012 |
20120134537 | SYSTEM AND METHOD FOR EXTRACTING THREE-DIMENSIONAL COORDINATES - A system and method for extracting 3D coordinates, the method includes obtaining, by a stereoscopic image photographing unit, two images of a target object, and obtaining 3D coordinates of the object on the basis of coordinates of each pixel of the two images, measuring, by a Time of Flight (TOF) sensor unit, a value of a distance to the object, and obtaining 3D coordinates of the object on the basis of the measured distance value, mapping pixel coordinates of each image to the 3D coordinates obtained through the TOF sensor unit, and calibrating the mapped result, determining whether each set of pixel coordinates and the distance value to the object measured through the TOF sensor unit are present, calculating a disparity value on the basis of the distance value or the pixel coordinates, and calculating 3D coordinates of the object on the basis of the calculated disparity value. | 05-31-2012 |
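The disparity calculation in 20120134537 above presumably rests on the usual pinhole-stereo relation disparity = focal length × baseline / depth, which lets a TOF range fill in a missing stereo correspondence and vice versa. The helpers below state only that relation; the focal length and baseline are made-up calibration numbers.

```python
def disparity_from_depth(depth_m, focal_px=700.0, baseline_m=0.12):
    """Convert a range measurement (metres), e.g. from a TOF sensor, into a disparity (pixels)."""
    return focal_px * baseline_m / depth_m

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Inverse relation: recover depth (metres) from a measured stereo disparity (pixels)."""
    return focal_px * baseline_m / disparity_px
```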
20120134538 | OBJECT TRACKING DEVICE CAPABLE OF TRACKING OBJECT ACCURATELY, OBJECT TRACKING METHOD, AND STORAGE MEDIUM - An object tracking device capable of accurately tracking an object as a tracking target. The device receives an image signal having a plurality of frame images and tracks a specific object in the image signal. The device sets a predetermined number of small areas in a reference area indicative of an area where an image of the object is formed in the preceding frame image. The object tracking device detects a motion vector of the object in each of the small areas, and determines a change of the object according to the motion vector to thereby obtain shape change information. The device corrects the location and size of the reference area according to the shape change information to thereby correct the reference area to a corrected reference area, and tracks the object using the corrected reference area. | 05-31-2012 |
20120134539 | OBSERVATION APPARATUS AND OBSERVATION METHOD - Provided are an observation apparatus and an observation method that allow a state change of an observation target to be observed after image-acquisition is started. An observation apparatus | 05-31-2012 |
20120134540 | METHOD AND APPARATUS FOR CREATING SURVEILLANCE IMAGE WITH EVENT-RELATED INFORMATION AND RECOGNIZING EVENT FROM SAME - An apparatus for creating a surveillance image with event-related information includes an event detection unit configured to detect an event in the surveillance image, an encoding unit configured to encode the surveillance image into a bit stream of the surveillance image, an event information creation unit configured to create event-related information based on the detected event, and a parsing unit configured to parse the encoded surveillance image and insert the event-related information into the bit stream of the encoded surveillance image. | 05-31-2012 |
20120134541 | OBJECT TRACKING DEVICE CAPABLE OF DETECTING INTRUDING OBJECT, METHOD OF TRACKING OBJECT, AND STORAGE MEDIUM - An object tracking device that is capable of detecting that an intruding object has entered an image frame of image data where a tracking target object is being tracked. A plurality of sub areas are set in a preceding or current frame target area indicative of a position of the tracking target object in a preceding or current frame of moving image data, and a feature value of each sub area is determined. If the feature value exceeds a first threshold value in at least one of the sub areas and at the same time the number of the at least one of the sub areas does not reach a reference value, it is determined that an intruding object different from the tracking target object has entered an area in which the tracking target object is positioned in the current frame. | 05-31-2012 |
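The decision rule in 20120134541 above flags an intruder when the per-sub-area feature value exceeds the first threshold in some sub-areas, but in fewer of them than the reference value (otherwise the tracked target itself has probably changed). A literal sketch, with placeholder feature values and thresholds:

```python
def intruder_entered(sub_area_features, first_threshold=30.0, reference_count=6):
    """sub_area_features: one feature value (e.g. a frame-difference score) per sub-area."""
    exceeded = sum(1 for f in sub_area_features if f > first_threshold)
    # Some sub-areas changed, but not so many that the tracked object as a whole has changed.
    return 0 < exceeded < reference_count
```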
20120140981 | System and Method for Combining Visible and Hyperspectral Imaging with Pattern Recognition Techniques for Improved Detection of Threats - Systems and methods for detecting unknown samples wherein pattern recognition algorithms are applied to a visible image of a first target area comprising a first unknown sample to thereby generate a first set of target data. If comparison of the first set of target data to reference data results in a match, the first unknown sample is identified and a hyperspectral image of a second target area comprising a second unknown sample is obtained to generate a second set of test data. If comparison of the second set of test data to reference data results in a match, the second unknown sample is identified as a known material. Identification of an unknown through hyperspectral imaging can also trigger the visible camera to obtain an image. In addition, the visible and hyperspectral cameras can be run continuously to simultaneously obtain visible and hyperspectral images. | 06-07-2012 |
20120140982 | IMAGE SEARCH APPARATUS AND IMAGE SEARCH METHOD - According to one embodiment, an image search apparatus includes an image input module to which an image is input, an event detection module which detects events from the input image and determines levels depending on the types of the detected events, an event controlling module which retains the events detected by the event detection module for each of the levels, and an output module which outputs the events retained by the event controlling module for each of the levels. | 06-07-2012 |
20120140983 | METHOD FOR DETECTION OF SPECIMEN REGION, APPARATUS FOR DETECTION OF SPECIMEN REGION, AND PROGRAM FOR DETECTION OF SPECIMEN REGION - A method for detecting a specimen region includes: a first step in which a first region detecting unit detects a first region, which is a region with contrast, in a first image of an object for observation photographed under illumination with visible light; a second step in which a second region detecting unit detects a second region, which is a region with contrast, in a second image of the object for observation photographed under illumination with ultraviolet light; and a third step in which a specimen region defining unit defines, based on the first and second regions, the specimen region where the specimen exists in the object for observation. | 06-07-2012 |
20120140984 | DRIVING SUPPORT SYSTEM, DRIVING SUPPORT PROGRAM, AND DRIVING SUPPORT METHOD - Provided is a driving support system that includes an image recognition unit that performs image recognition processing to recognize if a recognition object associated with any of the support processes is included in image data captured by an on-vehicle camera and a recognition area information storage unit that stores information regarding a set recognition area in the image data that is set depending on a recognition accuracy of the recognition object set for execution of the support process. A candidate process extraction unit is also included for extracting at least one execution candidate support process from the plurality of support processes and a support process execution management unit that allows execution of the extracted execution candidate support process on a condition that a position in the image data of the recognition object recognized by the image recognition processing is included in the set recognition area. | 06-07-2012 |
20120140985 | IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREFOR - A parameter is computed for each of a plurality of images captured in time series, based on information obtained from the image, and a normal reference image (an image captured before an image targeted for processing) is stored. A degree of similarity between the image targeted for processing and the normal reference image is computed, and a parameter to be used in image processing applied to the image targeted for processing is computed by weighted addition such that, the higher the degree of similarity, the higher the weight given to the parameter computed from the normal reference image relative to the parameter computed from the image targeted for processing. | 06-07-2012 |
20120140986 | PROVIDING IMAGE DATA - Embodiments of the present invention provide a method of providing image data for constructing an image of a region of a target object, comprising providing incident radiation from a radiation source at a target object, detecting, by at least one detector, a portion of radiation scattered by the target object with the incident radiation or an aperture at first and second positions, and providing image data via an iterative process responsive to the detected radiation, wherein in said iterative process image data is provided corresponding to a portion of radiation scattered by the target object and not detected by the detector. | 06-07-2012 |
20120140987 | Methods and Systems for Discovering Styles Via Color and Pattern Co-Occurrence - Methods and systems for discovering styles via color and pattern co-occurrence are disclosed. According to one embodiment, a computer-implemented method comprises collecting a set of fashion images, selecting at least one subset within the set of fashion images, the subset comprising at least one image containing a fashion item, and computing a set of segments by segmenting the at least one image into at least one dress segment. Color and pattern representations of the set of segments are computed by using a color analysis method and a pattern analysis method respectively. A graph is created wherein each graph node corresponds to one of a color representation or a pattern representation computed for the set of segments. Weights of edges between nodes of the graph indicate a degree of how the corresponding colors or patterns complement each other in a fashion sense. | 06-07-2012 |
20120140988 | OBSTACLE DETECTION DEVICE AND METHOD AND OBSTACLE DETECTION SYSTEM - An obstacle region candidate point relating unit assumes that a pixel in an image corresponds to a point on a road surface, and associates pixels between images at two times on the basis of the amount of movement of a vehicle in question, a road plane, and a flow of the image estimated. When a pixel corresponds to a shadow of the vehicle in question or the moving object therearound appearing on the road surface, the ratio of intensities of the pixel values of the spectral images between two images should be approximately the same as the ratio of the spectral characteristics of the sunshine in the sun and the shade. Therefore, when the ratio of intensities is approximately the same as the ratio of the spectral characteristics, the obstacle determining unit does not determine that the pixel in question is a point corresponding to the obstacle. Only when the ratio of intensities is not approximately the same as the ratio of the spectral characteristics, the obstacle determining unit determines that the pixel in question is a point corresponding to the obstacle. | 06-07-2012 |
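The shadow test in 20120140988 above compares each pixel's intensity ratio across the two spectral images with the known sun/shade spectral ratio and discards pixels that match it. A minimal per-pixel version is sketched below; the reference ratio and tolerance are invented values, not calibration data from the patent.

```python
import numpy as np

def obstacle_candidates(band_a, band_b, sun_shade_ratio=1.3, tolerance=0.15):
    """band_a, band_b: co-registered spectral images as float arrays of the same shape."""
    ratio = band_a / np.maximum(band_b, 1e-6)                 # per-pixel intensity ratio
    shadow_like = np.abs(ratio - sun_shade_ratio) <= tolerance
    return ~shadow_like                                       # True where a real obstacle may be
```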
20120148092 | AUTOMATIC TRAFFIC VIOLATION DETECTION SYSTEM AND METHOD OF THE SAME - Disclosed herein are a system and method for the automatic detection of traffic and parking violations. Camera input is digitally analyzed for vehicle type and location. This information is then processed against local traffic and parking regulations to detect violations. Detectable driving offenses include, but are not limited to: no scooters, buses only, and scooters only lane violations. Detectable parking offenses include, but are not limited to: parking or loitering in bus stops, parking next to fire hydrants, and parking in no-parking zones. Camera input, detected vehicle information, and violations can be stored for later search and retrieval. The system may be configured to signal the authorities or other automated analysis systems about specific violations. When coupled with automatic license plate recognition, vehicles may be automatically matched against a registration database and reported or ticketed. | 06-14-2012 |
20120148093 | Blob Representation in Video Processing - A method of processing a video sequence is provided that includes receiving a frame of the video sequence, identifying a plurality of blobs in the frame, computing at least one interior point of each blob of the plurality of blobs, and using the interior points in further processing of the video sequence. The interior points may be used, for example, in object tracking. | 06-14-2012 |
20120148094 | IMAGE BASED DETECTING SYSTEM AND METHOD FOR TRAFFIC PARAMETERS AND COMPUTER PROGRAM PRODUCT THEREOF - An image-based detecting system for traffic parameters first sets a range of a vehicle lane for monitoring control, and sets an entry detection window and an exit detection window in the vehicle lane. When the entry detection window detects an event of a vehicle passing by using the image information captured at the entry detection window, a plurality of feature points are detected in the entry detection window, and will be tracked hereafter. Then, the feature points belonging to the same vehicle are grouped to obtain at least a location tracking result of single vehicle. When the tracked single vehicle moves to the exit detection window, according to the location tracking result and the time correlation through estimating the information captured at the entry detection window and the exit detection window, at least a traffic parameter is estimated. | 06-14-2012 |
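Once a vehicle tracked from the entry window is observed at the exit window, the basic traffic parameter in 20120148094 above follows from the timestamps alone: speed = window spacing / travel time. The distances and times below are hypothetical.

```python
def vehicle_speed_kmh(entry_time_s, exit_time_s, window_distance_m):
    """Average speed of one tracked vehicle between the entry and exit detection windows."""
    travel_time = exit_time_s - entry_time_s
    if travel_time <= 0:
        raise ValueError("exit must be observed after entry")
    return window_distance_m / travel_time * 3.6   # m/s -> km/h

# Example: windows 20 m apart crossed in 1.5 s gives 48.0 km/h.
print(vehicle_speed_kmh(10.0, 11.5, 20.0))
```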
20120148095 | IMAGE PROCESSING APPARATUS - An image processing apparatus includes a detector. The detector detects one or at least two object images, each of which is coincident with a dictionary image, from each of K (K: an integer of two or more) continuous shot images. A classifier executes, on the K continuous shot images, a process of classifying the detected object images according to a common object. A determiner determines an attribute of the equal to or less than K object images belonging to each of one or at least two classified object image groups. A first excluder excludes a continuous shot image satisfying an error condition out of the K continuous shot images, based on the determined result. A selector selects a part of one or at least two continuous shot images remaining after the exclusion as a specific image. | 06-14-2012 |
20120148096 | APPARATUS AND METHOD FOR CONTROLLING IMAGE USING MOBILE PROJECTOR - Disclosed is an image control system using a mobile projector, including a first apparatus configured to determine, when a first picture is projected and a user input for a specific image is received, whether the projected first picture is projected onto the specific image, and if so, control the specific image to perform an operation corresponding to the user input, and a second apparatus configured to receive the user input from the first apparatus, determine whether the first picture is projected onto the specific image, and if so, perform an operation corresponding to the user input. | 06-14-2012 |
20120148097 | 3D MOTION RECOGNITION METHOD AND APPARATUS - Disclosed are a three-dimensional motion recognition method and an apparatus using a motion template method and an optical flow tracking method of feature points. The three dimensional (3D) motion recognition method through feature-based stereo matching according to an exemplary embodiment of the present disclosure includes: obtaining a plurality of images from a plurality of cameras; extracting feature points from a single reference image; and comparing and tracking the feature points of the reference image and another comparison image photographed at the same time using an optical flow method. | 06-14-2012 |
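For the feature extraction and optical-flow tracking step of 20120148097 above, an off-the-shelf combination of OpenCV's Shi-Tomasi detector and pyramidal Lucas-Kanade tracker gives the general flavour; this stands in for, rather than reproduces, the patented method, and the parameter values are arbitrary.

```python
import cv2
import numpy as np

def track_features(reference_gray, comparison_gray, max_points=200):
    """Detect corners in the reference image and track them into the comparison image."""
    pts = cv2.goodFeaturesToTrack(reference_gray, maxCorners=max_points,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    tracked, status, _err = cv2.calcOpticalFlowPyrLK(reference_gray, comparison_gray,
                                                     pts, None)
    ok = status.ravel() == 1                      # keep only successfully tracked points
    return pts[ok].reshape(-1, 2), tracked[ok].reshape(-1, 2)
```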
20120148098 | ELECTRONIC CAMERA - An electronic camera includes an imager. The imager outputs an electronic image corresponding to an optical image captured on an imaging surface. A first generator generates a first notification forward of the imaging surface. A searcher searches for one or at least two face images, each having a size exceeding a reference, from the electronic image outputted from the imager. A controller controls a generation manner of the first generator with reference to an attribute of each of the one or at least two face images detected by the searcher. | 06-14-2012 |
20120148099 | SYSTEM AND METHOD FOR MEASURING FLIGHT INFORMATION OF A SPHERICAL OBJECT WITH HIGH-SPEED STEREO CAMERA - Disclosed is a method for automatically extracting centroids and features of a spherical object required to measure a flight speed, a flight direction, a rotation speed, and a rotation axis of the spherical object in a system for measuring flight information of the spherical object with a high-speed stereo camera. | 06-14-2012 |
20120148100 | POSITION AND ORIENTATION MEASUREMENT DEVICE AND POSITION AND ORIENTATION MEASUREMENT METHOD - A position and orientation measurement device includes a grayscale image input unit that inputs a grayscale image of an object, a distance image input unit that inputs a distance image of the object, an approximate position and orientation input unit that inputs an approximate position and orientation of the object with respect to the position and orientation measurement device, and a position and orientation calculator that updates the approximate position and orientation. The position and orientation calculator calculates a first position and orientation so that an object image on an image plane and a projection image of the three-dimensional shape model overlap each other, associates the three-dimensional shape model with the image features of the grayscale image and the distance image, and calculates a second position and orientation on the basis of a result of the association. | 06-14-2012 |
20120148101 | METHOD AND APPARATUS FOR EXTRACTING TEXT AREA, AND AUTOMATIC RECOGNITION SYSTEM OF NUMBER PLATE USING THE SAME - Disclosed is a method of extracting a text area, the method including generating a text area prediction value within an input second image based on a plurality of text area data stored in a database including geometric information about a text area of a first image, generating a text recognition result value by determining whether a text is recognized with respect to a probable text area within the input second image, and selecting a text area within the second image by combining the generated text area prediction value and text recognition result value. | 06-14-2012 |
20120148102 | MOBILE BODY TRACK IDENTIFICATION SYSTEM - There is provided a mobile body track identification system that determines which mobile body matches which detected track with a high precision irrespective of frequent interruption of tracks of a mobile body detected in a tracking area. Herein, hypotheses are generated by use of sets of track-coupling candidate/identification pairs, which combines track-coupling candidates, combining tracks of a mobile body detected in a predetermined time in the past, and identifications of the mobile body and which satisfies a predetermined condition. Next, identification likelihoods are calculated as likelihoods of detecting identifications in connection with tracks indicated by track-coupling candidates included in track-coupling candidate/identification pairs ascribed to each of the selected hypotheses. Identification likelihoods are integrated per each track-coupling candidate/identification pair, thus calculating an identification likelihood regarding the selected hypothesis. A most-probable hypothesis is estimated based on identification likelihoods of hypotheses. | 06-14-2012 |
20120148103 | METHOD AND SYSTEM FOR AUTOMATIC OBJECT DETECTION AND SUBSEQUENT OBJECT TRACKING IN ACCORDANCE WITH THE OBJECT SHAPE - A method and system for automatic object detection and subsequent object tracking in accordance with the object shape in digital video systems having at least one camera for recording and transmitting video sequences. In accordance with the method and system, an object detection algorithm based on a Gaussian mixture model and expanded object tracking based on Mean-Shift are combined with each other in object detection. The object detection is expanded in accordance with a model of the background by improved removal of shadows, the binary mask generated in this way is used to create an asymmetric filter core, and then the actual algorithm for the shape-adaptive object tracking, expanded by a segmentation step for adapting the shape, is initialized, and therefore a determination at least of the object shape or object contour or the orientation of the object in space is made possible. | 06-14-2012 |
20120148104 | PEDESTRIAN-CROSSING MARKING DETECTING METHOD AND PEDESTRIAN-CROSSING MARKING DETECTING DEVICE - Provided are a pedestrian-crossing marking detecting method and a pedestrian-crossing marking detecting device, wherein the existence of pedestrian crossing markings and the positions thereof can be detected accurately from within a picked up image, even when detection of the intensity edges of painted sections is difficult. In the pedestrian-crossing mark detecting device ( | 06-14-2012 |
20120155702 | System and Method for Detecting Nuclear Material in Shipping Containers - A system and method for detecting metal contraband such as weapons-related material in shipping containers, where a container is scanned with at least one penetrating beam, preferably a tomographic x-ray beam, and at least one image is formed. The image can be analyzed by a pattern recognizer to find voids representing metal. The voids can be further classified with respect to their 2- or 3-dimensional geometric shapes. Container ID and contents or bill of lading information can be combined along with other parameters such as total container weight to allow a processor to generate a detection probability. The processor can use artificial intelligence methods to classify suspicious containers for manual inspection. | 06-21-2012 |
20120155703 | MICROPHONE ARRAY STEERING WITH IMAGE-BASED SOURCE LOCATION - Methods and systems for beam forming an audio signal based on a location of an object relative to the listening device, the location being determined from positional data deduced from an optical image including the object. In an embodiment, an object's position is tracked based on video images of the object and the audio signal received from a microphone array located at a fixed position is filtered based on the tracked object position. Beam forming techniques may be applied to emphasize portions of an audio signal associated with sources near the object. | 06-21-2012 |
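The beam-forming step described in 20120155703 can be illustrated with a minimal delay-and-sum sketch: given microphone positions and an object position obtained from video tracking, each channel is time-aligned toward the object and the channels are summed. This is a generic sketch, not the patent's filter design; the function names, the fixed speed of sound, and the integer-sample alignment are illustrative assumptions.

```python
# Minimal delay-and-sum sketch (not the patent's filter design): steer a fixed
# microphone array toward a source position obtained from video-based tracking.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, illustrative constant

def delay_and_sum(signals, mic_positions, source_position, sample_rate):
    """signals: (num_mics, num_samples) array; positions in meters."""
    mics = np.asarray(mic_positions, dtype=float)
    source = np.asarray(source_position, dtype=float)
    distances = np.linalg.norm(mics - source, axis=1)
    # Align every channel to the farthest microphone so all shifts are non-negative.
    delays = (distances.max() - distances) / SPEED_OF_SOUND
    shifts = np.round(delays * sample_rate).astype(int)
    num_mics, num_samples = signals.shape
    out = np.zeros(num_samples)
    for ch, shift in enumerate(shifts):
        out[shift:] += signals[ch, :num_samples - shift]
    return out / num_mics

# Usage (hypothetical names): delay_and_sum(x, mic_xyz, tracked_xyz, 48000),
# where tracked_xyz is the object position deduced from the optical image.
```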
20120155704 | LOCALIZED WEATHER PREDICTION THROUGH UTILIZATION OF CAMERAS - Described herein are various technologies pertaining to predicting an amount of electrical power that is to be generated by a power system at a future point in time, wherein the power system utilizes a renewable energy resource to generate electrical power. A camera is positioned to capture an image of sky over a geographic region of interest. The image is analyzed to predict an amount of solar radiation that is to be received by the power source at a future point in time. The predicted solar radiation is used to predict an amount of electrical power that will be output by the power system at the future point in time. A computational resource of a data center that is powered by way of the power source is managed as a function of the predicted amount of power. | 06-21-2012 |
20120155705 | FIRST PERSON SHOOTER CONTROL WITH VIRTUAL SKELETON - A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured aiming vector control, and a virtual weapon is aimed in proportion to the gestured aiming vector control. | 06-21-2012 |
20120155706 | RANGE IMAGE GENERATION APPARATUS, POSITION AND ORIENTATION MEASUREMENT APPARATUS, RANGE IMAGE PROCESSING APPARATUS, METHOD OF CONTROLLING RANGE IMAGE GENERATION APPARATUS, AND STORAGE MEDIUM - A range image generation apparatus comprises: a generation unit adapted to generate a first range image of a target measurement object at one of a predetermined in-plane resolution and a predetermined depth-direction range resolving power; an extraction unit adapted to extract range information from the first range image generated by the generation unit; and a decision unit adapted to decide, as a parameter based on the range information extracted by the extraction unit, one of an in-plane resolution and a depth-direction range resolving power of a second range image to be generated by the generation unit, wherein the generation unit generates the second range image using the parameter decided by the decision unit. | 06-21-2012 |
20120155707 | IMAGE PROCESSING APPARATUS AND METHOD OF PROCESSING IMAGE - An image processing apparatus includes a first detecting unit configured to detect an object in an image; a determining unit configured to determine a moving direction of the object detected by the first detecting unit; and a second detecting unit configured to perform detection processing of detecting whether the object detected by the first detecting unit is a specific object on the basis of the moving direction of the object determined by the determining unit. | 06-21-2012 |
20120155708 | APPARATUS AND METHOD FOR MEASURING TARGET POINT IN VIDEO - Disclosed are an apparatus and method for measuring a target point in a video. In the apparatus and method for measuring a target point in a video, a target point is recognized in a video including the target point set as a measuring target, information regarding the target point is extracted by using location information of the recognized target point and map information of the surroundings of the recognized target point, and the extracted target point is displayed in the video while providing detailed map information regarding the target point. Accordingly, a user can be quickly provided with detailed information regarding the location of the target point or an object present in a visual range and geo-spatial information of the surroundings. | 06-21-2012 |
20120155709 | Detecting Orientation of Digital Images Using Face Detection Information - A method of automatically establishing the correct orientation of an image using facial information. This method is based on the exploitation of the inherent property of image recognition algorithms in general and face detection in particular, where the recognition is based on criteria that are highly orientation-sensitive. By applying a detection algorithm to images in various orientations, or alternatively by rotating the classifiers, and comparing the number of faces successfully detected in each orientation, one may determine the most likely correct orientation. Such a method can be implemented as an automated or semi-automatic method to guide users in viewing, capturing or printing of images. | 06-21-2012 |
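A minimal sketch of the orientation test described in 20120155709, assuming a caller-supplied face detector (the `detect_faces` callable here is hypothetical and is assumed to return a list of face bounding boxes): the image is tried at each 90-degree rotation and the orientation yielding the most detected faces wins.

```python
# Hedged sketch of orientation estimation by face counting: try each 90-degree
# rotation and keep the one in which the supplied detector finds the most faces.
import numpy as np

def estimate_orientation(image, detect_faces):
    best_angle, best_count = 0, -1
    for k, angle in enumerate((0, 90, 180, 270)):
        rotated = np.rot90(image, k=k)      # rotate the image rather than the classifiers
        count = len(detect_faces(rotated))
        if count > best_count:
            best_angle, best_count = angle, count
    return best_angle                       # most likely correct orientation, in degrees
```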
20120155710 | PAPER-SHEET HANDLING APPARATUS AND PAPER-SHEET HANDLING METHOD - A paper-sheet handling apparatus ( | 06-21-2012 |
20120163656 | METHOD AND APPARATUS FOR IMAGE-BASED POSITIONING - A method and apparatus are provided for image-based positioning, comprising: capturing a first image with an image capturing device, wherein said first image includes at least one object; moving the platform and capturing a second image with the image capturing device, the second image including the at least one object; capturing in the first image an image of a surface and capturing in the second image a second image of the surface; processing the plurality of images of the object and the surface using a combined feature-based process and surface tracking process to track the location of the surface; and, finally, determining the location of the platform by processing the combined feature-based process and surface-based process. | 06-28-2012 |
20120163657 | Summary View of Video Objects Sharing Common Attributes - Disclosed herein are a method, system, and computer program product for displaying on a display device ( | 06-28-2012 |
20120163658 | Temporal-Correlations-Based Mode Connection - Disclosed herein are a system, method, and computer program product for updating a scene model ( | 06-28-2012 |
20120163659 | IMAGING APPARATUS, IMAGING METHOD, AND COMPUTER READABLE STORAGE MEDIUM - An imaging apparatus includes an imaging unit that generates a pair of pieces of image data mutually having a parallax by capturing a subject, an image processing unit that performs special effect processing, which is capable of producing a visual effect by combining a plurality of pieces of image processing, on a pair of images corresponding to the pair of pieces of image data, and a region setting unit that sets a region where the image processing unit performs the special effect processing on the pair of images. | 06-28-2012 |
20120163660 | PROCESSING SYSTEM - A processing system for plate-like objects is provided, with an exposure device and an object carrier with an object carrier surface for receiving the object. The exposure device and the carrier are movable relative to one another, such that the exact position of the object relative to the carrier is determinable. An edge detection device is provided which comprises at least one edge illumination unit having an illumination area, within which an object edge located in the respective object edge area has light directed onto it from the side of the carrier. At least one edge image detection unit is provided on a side of the object located opposite the carrier, the edge image detection unit imaging an edge section of the object edges located in the illumination area as an edge image, such that the respective edge image is detectable in its exact position relative to the carrier. | 06-28-2012 |
20120163661 | APPARATUS AND METHOD FOR RECOGNIZING MULTI-USER INTERACTIONS - An apparatus for recognizing multi-user interactions includes: a pre-processing unit for receiving a single visible light image to perform pre-processing; a motion region detecting unit for detecting a motion region from the image to generate motion blob information; a skin region detecting unit for extracting information on a skin color region from the image to generate a skin blob list; a Haar-like detecting unit for performing Haar-like face and eye detection by using only contrast information from the image; a face tracking unit for recognizing a face of a user from the image by using the skin blob list and results of the Haar-like face and eye detection; and a hand tracking unit for recognizing a hand region of the user from the image. | 06-28-2012 |
20120163662 | METHOD FOR BUILDING OUTDOOR MAP FOR MOVING OBJECT AND APPARATUS THEREOF - The method for building an outdoor map for a moving object according to an exemplary embodiment of the present invention includes: receiving a real satellite image for an outdoor space to which the moving object is to move; calculating pixel information including sizes of length and width pixels and a physical distance of one pixel in the real satellite image; measuring a reference position coordinate for a reference position selected from the real satellite image; and linking a pixel number corresponding to the reference position, the reference position coordinate, and the pixel information to the real satellite image in order to build the outdoor map for the moving object, and further includes creating information on a road network in which the moving object navigates based on the pixel number corresponding to the reference position, the reference position coordinate, and the pixel information. | 06-28-2012 |
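The pixel-to-position linkage described in 20120163662 amounts to simple arithmetic once a reference pixel, its measured coordinate, and the physical size of one pixel are known. The sketch below assumes a north-up satellite image and a local equirectangular approximation; all names and the Earth-radius constant are illustrative, not taken from the application.

```python
# Hedged sketch: convert a pixel in a north-up satellite image to latitude/longitude,
# given a reference pixel with a known coordinate and the physical size of one pixel.
import math

EARTH_RADIUS_M = 6378137.0  # illustrative spherical Earth radius

def pixel_to_latlon(px, py, ref_px, ref_py, ref_lat, ref_lon, meters_per_pixel):
    dx_m = (px - ref_px) * meters_per_pixel      # eastward offset in meters
    dy_m = (ref_py - py) * meters_per_pixel      # northward offset (image y grows downward)
    dlat = math.degrees(dy_m / EARTH_RADIUS_M)
    dlon = math.degrees(dx_m / (EARTH_RADIUS_M * math.cos(math.radians(ref_lat))))
    return ref_lat + dlat, ref_lon + dlon
```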
20120163663 | SECURITY USE RESTRICTIONS FOR A MEDICAL COMMUNICATION MODULE AND HOST DEVICE - System and method for interfacing with a medical device. The system has a host device and a communication module. The host device has a user interface configured to input and display information relating to the interfacing with the medical device. The communication module is locally coupled to the host device and configured to communicate wirelessly with the medical device. The system, implemented by the host device and the communication module, is configured to communicate with the medical device with functions. The system, implemented by at least one of the host device and the communication module, has validation layers configured for use by users, each of the users having access to at least one of the validation layers based on a validation condition, each individual one of the functions being operational through the user interface only with one of the validation layers. | 06-28-2012 |
20120163664 | METHOD AND SYSTEM FOR INPUTTING CONTACT INFORMATION - A method and a system for inputting contact information are provided. The method includes: acquiring a content attribute of a current edit box; starting up a camera device, and entering a shoot preview interface of the camera device; placing a text content of contact information to be input in the shoot preview interface of the camera device, and shooting the text content of the contact information; analyzing and recognizing the text content located near the positioning identifier in the preview interface in an image through an optical character recognition technology, and extracting a contact information character string conforming to the content attribute of the current edit box; and inputting a recognition result character string into the current edit box. | 06-28-2012 |
20120163665 | Method of object location in airborne imagery using recursive quad space image processing - A method and computer workstation are disclosed which determine the location in the ground space of a selected point in a digital image of the earth obtained by an airborne camera. The method includes the steps of: (a) performing independently and in parallel a recursive partitioning of the image space and the ground space into successively smaller quadrants until a pixel coordinate in the image assigned to the selected point is within a predetermined limit (Δ) of the center of a final recursively partitioned quadrant in the image space. The method further includes a step of (b) calculating a geo-location of the point in the ground space corresponding to the selected point in the image space from the final recursively partitioned quadrant in the ground space corresponding to the final recursively partitioned quadrant in the image space. | 06-28-2012 |
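The image-space half of the recursion in 20120163665 can be sketched as follows: the current quadrant is split in four, the sub-quadrant containing the selected pixel is chosen, and the descent stops once the quadrant center lies within the limit Δ of the pixel. The parallel ground-space partitioning, which reuses the same quadrant choices to produce the geo-location, is only indicated in a comment; the function and parameter names are illustrative.

```python
# Hedged sketch of the image-space recursion: subdivide into quadrants, descend into
# the one containing the selected pixel, and stop once the quadrant center is within
# delta of that pixel (delta > 0 assumed). The ground space would be partitioned in
# parallel with the same quadrant choices (not shown) to yield the geo-location.

def refine_quadrant(x, y, x0, y0, x1, y1, delta):
    """(x, y): selected pixel; (x0, y0)-(x1, y1): current quadrant bounds."""
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    if abs(x - cx) <= delta and abs(y - cy) <= delta:
        return (x0, y0, x1, y1)                     # final quadrant; map its ground-space twin
    nx0, nx1 = (x0, cx) if x < cx else (cx, x1)     # horizontal half containing x
    ny0, ny1 = (y0, cy) if y < cy else (cy, y1)     # vertical half containing y
    return refine_quadrant(x, y, nx0, ny0, nx1, ny1, delta)
```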
20120163666 | Object Processing Employing Movement - Directional albedo of a particular article, such as an identity card, is measured and stored. When the article is later presented, it can be confirmed to be the same particular article by re-measuring the albedo function, and checking for correspondence against the earlier-stored data. The re-measuring can be performed through use of a handheld optical device, such as a camera-equipped cell phone. The albedo function can serve as random key data in a variety of cryptographic applications. The function can be changed during the life of the article. A variety of other features are also detailed. | 06-28-2012 |
20120163667 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 06-28-2012 |
20120163668 | TRANSLATION AND DISPLAY OF TEXT IN PICTURE - A method performed by a mobile terminal may include displaying an image via a display of the mobile device. A first user selection of a portion of the image is received and text in the selected portion of the image is identified. The identified text is displayed via the display. A second user selection of at least a portion of the identified text is received. The portion of the identified text is translated from a first language into a second language that differs from the first language. The translated text, in the second language, is displayed over the image via the display. | 06-28-2012 |
20120163669 | Systems and Methods for Detecting a Tilt Angle from a Depth Image - A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels. | 06-28-2012 |
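A minimal sketch of the tilt computation in 20120163669, assuming the caller has already selected the two pixel portions (shoulder points and hip/knee-midpoint points) as camera-space coordinates; the centroid-based spine vector and the choice of vertical axis are assumptions for illustration, not the application's exact procedure.

```python
# Hedged sketch: estimate a body tilt angle from two pixel groups taken from the depth
# image, one around the shoulders and one around the midpoint of hips and knees.
# Each group is an (N, 3) array of camera-space points (x, y, z); names are illustrative.
import numpy as np

def tilt_angle_degrees(upper_points, lower_points):
    upper = np.asarray(upper_points, dtype=float).mean(axis=0)
    lower = np.asarray(lower_points, dtype=float).mean(axis=0)
    spine = upper - lower                           # vector from lower body to shoulders
    vertical = np.array([0.0, 1.0, 0.0])            # assumed "up" axis of the camera space
    cos_a = np.dot(spine, vertical) / (np.linalg.norm(spine) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
```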
20120163670 | BEHAVIORAL RECOGNITION SYSTEM - Embodiments of the present invention provide a method and a system for analyzing and learning behavior based on an acquired stream of video frames. Objects depicted in the stream are determined based on an analysis of the video frames. Each object may have a corresponding search model used to track an object's motion frame-to-frame. Classes of the objects are determined and semantic representations of the objects are generated. The semantic representations are used to determine objects' behaviors and to learn about behaviors occurring in an environment depicted by the acquired video streams. This way, the system learns rapidly and in real-time normal and abnormal behaviors for any environment by analyzing movements or activities or absence of such in the environment and identifies and predicts abnormal and suspicious behavior based on what has been learned. | 06-28-2012 |
20120170799 | MOVABLE RECOGNITION APPARATUS FOR A MOVABLE TARGET - A movable recognition apparatus and a method thereof, which identify an activity configuration of at least one movable target, provide a plurality of distance measuring devices arranged as a two-dimensional matrix on a plane of a specific space to detect and obtain a plurality of vertical distance values between the movable target and the plane. Then, an analyzing device is applied to establish a contour graph corresponding to the movable target by means of referencing the vertical distance values and to identify the activity configuration in accordance with the shape change of the contour graph. Therefore, the movable recognition apparatus can perform the identification task conveniently while meeting privacy requirements, in addition to providing accuracy of the identified activity configuration. | 07-05-2012 |
20120170800 | SYSTEMS AND METHODS FOR CONTINUOUS PHYSICS SIMULATION FROM DISCRETE VIDEO ACQUISITION - A computer implemented method for processing video is provided. A first image and a second image are captured by a camera. A feature present in the first camera image and the second camera image is identified. A first location value of the feature within the first camera image is identified. A second location value of the feature within the second camera image is identified. An intermediate location value of the feature based at least in part on the first location value and the second location value is determined. The intermediate location value and the second location value are communicated to a physics simulation. | 07-05-2012 |
20120170801 | System for Food Recognition Method Using Portable Devices Having Digital Cameras - The present invention relates to a method for automatic food recognition by means of portable devices equipped with digital cameras. | 07-05-2012 |
20120170802 | SCENE ACTIVITY ANALYSIS USING STATISTICAL AND SEMANTIC FEATURES LEARNT FROM OBJECT TRAJECTORY DATA - Trajectory information of objects appearing in a scene can be used to cluster trajectories into groups of trajectories according to the trajectories' relative distances from one another for scene activity analysis. By doing so, a database of trajectory data can be maintained that includes the trajectories to be clustered into trajectory groups. This database can be used to train a clustering system, and with extracted statistical features of resultant trajectory groups a new trajectory can be analyzed to determine whether the new trajectory is normal or abnormal. Embodiments described herein can be used to determine whether a video scene is normal or abnormal. In the event that the new trajectory is identified as normal, the new trajectory can be annotated with the extracted semantic data. In the event that the new trajectory is determined to be abnormal, a user can be notified that an abnormal behavior has occurred. | 07-05-2012 |
20120170803 | SEARCHING RECORDED VIDEO - Embodiments of the disclosure provide for systems and methods for creating metadata associated with a video data. The metadata can include data about objects viewed within a video scene and/or events that occur within the video scene. Some embodiments allow users to search for specific objects and/or events by searching the recorded metadata. In some embodiments, metadata is created by receiving a video frame and developing a background model for the video frame. Foreground object(s) can then be identified in the video frame using the background model. Once these objects are identified they can be classified and/or an event associated with the foreground object may be detected. The event and the classification of the foreground object can then be recorded as metadata. | 07-05-2012 |
20120170804 | Method and apparatus for tracking target object - A method and apparatus for tracking a target object are provided. A plurality of images is received, and one of the images is selected as a current image. A specific color of the current image is extracted, and the current image is compared with a template image to search for a target object in the current image. If the target object is not found in the current image, a previous image containing the target object is searched for among the images received before the current image, and the target object is then searched for in the current image according to an object feature of that previous image. The object feature and an object location are updated in a storage unit when the target object is found. | 07-05-2012 |
20120170805 | OBJECT DETECTION IN CROWDED SCENES - Methods and systems are provided for object detection. A method includes automatically collecting a set of training data images from a plurality of images. The method further includes generating occluded images. The method also includes storing in a memory the generated occluded images as part of the set of training data images, and training an object detector using the set of training data images stored in the memory. The method additionally includes detecting an object using the object detector, the object detector detecting the object based on the set of training data images stored in the memory. | 07-05-2012 |
20120170806 | METHOD, TERMINAL, AND COMPUTER-READABLE RECORDING MEDIUM FOR SUPPORTING COLLECTION OF OBJECT INCLUDED IN INPUTTED IMAGE - The present invention relates to a method for supporting a collection of an object included in an image inputted through a terminal. The method includes the steps of: recognizing the identity of an object by using at least one of an object recognition technology, an optical character recognition technology, and a barcode recognition technology; getting a collection page including at least part of the information on an auto comment containing a phrase or sentence correctly combined under the grammar of a language by using the recognition information and the information on the image of the recognized object; allowing the collection page to be stored when a request for registration of the page is received; and providing a specific user with the information about a reward system. | 07-05-2012 |
20120170807 | APPARATUS AND METHOD FOR EXTRACTING DIRECTION INFORMATION IMAGE IN A PORTABLE TERMINAL - Provided is an apparatus and method for extracting a direction of an image in a portable terminal without using a sensor for extracting direction information of a captured image. An apparatus for extracting direction information of an image in a portable terminal includes a camera unit and a control unit, wherein the control unit extracts a detection direction of an object from the captured image as direction information, and stores the captured image together with the extracted direction information for a subsequent display of the captured image in a normal direction. | 07-05-2012 |
20120170808 | Obstacle Detection Device - The present invention provides an obstacle detection device that enables stable obstacle detection with less misdetections even when a bright section and a dark section are present in an obstacle and a continuous contour of the obstacle is present across the bright section and the dark section. The obstacle detection device includes a processed image generating unit that generates a processed image for detecting an obstacle from a picked-up image, a small region dividing unit that divides the processed image into plural small regions, an edge threshold setting unit that sets an edge threshold for each of the small regions from pixel values of the plural small regions and the processed image, an edge extracting unit that calculates a gray gradient value of each of the small regions from the plural small regions and the processed image and generates, using the edge threshold for the small region corresponding to the calculated gray gradient value, an edge image and a gradient direction image, and an obstacle recognizing unit that determines presence or absence of an obstacle from the edge image in a matching determination region set in the edge image and the gradient direction image corresponding to the edge image. The small region dividing unit divides the processed image into the plural small regions on the basis of an illumination state on the outside of the own vehicle. | 07-05-2012 |
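The per-region edge extraction in 20120170808 can be sketched generically: the processed image is split into small regions, an edge threshold is derived from each region's own gray-gradient statistics, and an edge image plus a gradient-direction image are produced. The mean-plus-k-sigma threshold and the fixed region size below are illustrative stand-ins for the thresholds the application derives from the illumination state outside the vehicle.

```python
# Hedged sketch of the per-region edge step: split the processed image into small
# regions, derive an edge threshold for each region from its own gray gradients, and
# build an edge image plus a gradient-direction image using those local thresholds.
import numpy as np

def local_edges(gray, region=32, k=1.5):
    gy, gx = np.gradient(gray.astype(float))
    magnitude, direction = np.hypot(gx, gy), np.arctan2(gy, gx)
    edges = np.zeros_like(magnitude, dtype=bool)
    h, w = gray.shape
    for y in range(0, h, region):
        for x in range(0, w, region):
            block = magnitude[y:y + region, x:x + region]
            threshold = block.mean() + k * block.std()        # per-region edge threshold
            edges[y:y + region, x:x + region] = block > threshold
    return edges, direction
```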
20120170809 | PROCEDURE FOR RECOGNIZING OBJECTS - A recognition and placement procedure that identifies, from a digital image captured with a digital camera, the position and orientation of a stored target object in a variety of positions, without digitally storing a wide variety of essential characters per pattern associated with the target object. | 07-05-2012 |
20120170810 | System and Method for Linking Real-World Objects and Object Representations by Pointing - A system and method are described for selecting and identifying a unique object or feature in the system user's three-dimensional (“3-D”) environment in a two-dimensional (“2-D”) virtual representation of the same object or feature in a virtual environment. The system and method may be incorporated in a mobile device that includes position and orientation sensors to determine the pointing device's position and pointing direction. The mobile device incorporating the present invention may be adapted for wireless communication with a computer-based system that represents static and dynamic objects and features that exist or are present in the system user's 3-D environment. The mobile device incorporating the present invention will also have the capability to process information regarding a system user's environment and calculating specific measures for pointing accuracy and reliability. | 07-05-2012 |
20120170811 | METHOD AND APPARATUS FOR WHEEL ALIGNMENT - A vehicle wheel alignment method and system is provided. A three-dimensional target is attached to a vehicle wheel known to be in alignment. The three-dimensional target has multiple target elements thereon, each of which has known geometric characteristics and 3D spatial relationship with one another. | 07-05-2012 |
20120170812 | DRIVING SUPPORT DISPLAY DEVICE - Disclosed is a driving support display device that composites and displays images acquired from a plurality of cameras, whereby images which are easy for the user to understand and which are accurate in the areas near the borders of partial images are provided. An image composition unit ( | 07-05-2012 |
20120170813 | METHOD OF MEASURING THE OUTLINE OF A FEATURE - A method of measuring an outline of a feature on a surface includes providing a substrate. The substrate includes a feature on a surface of the substrate. The feature includes walls. The surface of the substrate is illuminated. Edges of the walls are illuminated to measure a first contour and a second contour of the feature. An outline of the feature is calculated based on the first contour and the second contour. | 07-05-2012 |
20120177249 | METHOD OF DETECTING LOGOS, TITLES, OR SUB-TITLES IN VIDEO FRAMES - Detecting a static graphic object (such as a logo, title, or sub-title) in a sequence of video frames may be accomplished by analyzing each selected one of a plurality of pixels in a video frame of the sequence of video frames. Basic conditions for the selected pixel may be tested to determine whether the selected pixel is a static pixel. When the selected pixel is a static pixel, a static similarity measure and a forward motion similarity measure may be determined for the selected pixel. A temporal score for the selected pixel may be determined based at least in part on the similarity measures. Finally, a static graphic object decision for the selected pixel may be made based at least in part on the temporal score. | 07-12-2012 |
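A simplified sketch of the temporal scoring in 20120177249, assuming grayscale frames: pixels whose values barely change between frames raise a running score, other pixels decay it, and the score is finally thresholded into a static-graphic mask. The forward motion similarity measure of the abstract is omitted here, and the thresholds and gain are illustrative.

```python
# Hedged sketch: accumulate a per-pixel temporal score from frame-to-frame similarity
# and declare pixels static (logo/title candidates) once the score passes a threshold.
import numpy as np

def update_static_score(prev_frame, curr_frame, score, diff_thresh=6, gain=0.1):
    """Frames are 2-D grayscale arrays; `score` is a float array of the same shape."""
    static_similarity = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) < diff_thresh
    # Raise the score where the pixel barely changed, decay it elsewhere.
    score = np.where(static_similarity, score + gain, score * (1.0 - gain))
    return np.clip(score, 0.0, 1.0)

def static_mask(score, decision_thresh=0.8):
    return score > decision_thresh          # per-pixel static-graphic decision
```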
20120177250 | BOUNDARY DETECTION DEVICE FOR VEHICLES - In a lane boundary detection device, a plurality of edge components are extracted from a captured image capturing the periphery of the own vehicle. Candidates of a curve (including straight lines) that is to be the boundary of a driving area are extracted as boundary candidates based on the placement of the plurality of edge components. Then, an angle formed by a tangent in a predetermined section of each extracted boundary candidate and a vertical line in the captured image is calculated. Boundary candidates of which the formed angle is less than an angle reference value are set to have low probability. The boundary candidate having the highest probability among the boundary candidates is set as the boundary of the driving area. | 07-12-2012 |
20120177251 | IMAGE ANALYSIS BY OBJECT ADDITION AND RECOVERY - The invention described herein is generally directed to methods for analyzing an image. In particular, crowded field images may be analyzed for unidentified, unobserved objects based on an iterative analysis of modified images including artificial objects or removed real objects. The results can provide an estimate of the completeness of analysis of the image, an estimate of the number of objects that are unobserved in the image, and an assessment of the quality of other similar images. | 07-12-2012 |
20120183175 | METHOD FOR IDENTIFYING A SCENE FROM MULTIPLE WAVELENGTH POLARIZED IMAGES - Techniques for identifying images of a scene including illuminating the scene with a beam of 3 or more wavelengths, polarized according to a determined direction; simultaneously acquiring for each wavelength an image X | 07-19-2012 |
20120183176 | PERFORMING REVERSE TIME IMAGING OF MULTICOMPONENT ACOUSTIC AND SEISMIC DATA - A technique includes performing reverse time imaging to determine an image in a region of interest. The reverse time imaging includes modeling a pressure wavefield and a gradient wavefield in the region of interest based at least in part on particle motion data and pressure data acquired by sensors in response to energy being produced by at least one source. | 07-19-2012 |
20120183177 | IMAGE SURVEILLANCE SYSTEM AND METHOD OF DETECTING WHETHER OBJECT IS LEFT BEHIND OR TAKEN AWAY - An image surveillance system and a method of detecting whether an object is left behind or taken away are provided. The image surveillance system includes: a foreground detecting unit which detects a foreground region based on a pixel information difference between a background image and a current input image; a still region detecting unit which detects a candidate still region by clustering foreground pixels of the foreground region, and determines whether the candidate still region is a falsely detected still region or a true still region; and an object detecting unit which determines whether an object is left behind or taken away, based on edge information about the true still region. | 07-19-2012 |
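The edge-based decision in 20120183177 can be approximated as below, under the assumption that a confirmed true still region and a background image are already available: if the current frame shows clearly more edge energy inside the region than the background does, the object was likely left behind; the opposite suggests it was taken away. Gradient magnitude stands in for a real edge detector and the margin is an illustrative parameter.

```python
# Hedged sketch of the left-behind / taken-away decision: compare edge strength inside
# the confirmed still region between the current frame and the background image.
import numpy as np

def edge_strength(gray, mask):
    gy, gx = np.gradient(gray.astype(float))
    return float((np.hypot(gx, gy) * mask).sum())

def classify_still_region(background, current, region_mask, margin=1.2):
    bg_edges = edge_strength(background, region_mask)
    cur_edges = edge_strength(current, region_mask)
    if cur_edges > margin * bg_edges:
        return "left_behind"     # edges appeared that the background lacks
    if bg_edges > margin * cur_edges:
        return "taken_away"      # edges present in the background have vanished
    return "undecided"
```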
20120183178 | METHOD AND DEVICE FOR RECOGNITION OF INFORMATION APPLIED ON PACKAGES - Embodiments describe a system and method for reading the information on bundled packages wrapped in transparent film. The film can obscure information on the outside of the packages making the automated identification and tracking of the packages difficult. Embodiments described herein provide a system and method for capturing the unique information regardless of the obscuring effects of packaging films. A camera that is insensitive to UV light captures visible light emitted by labels after the labels are irradiated by UV light. The light emission induces greater contrast overcoming any distortion that might have occurred due to the transparent packaging film. | 07-19-2012 |
20120189160 | LINE-OF-SIGHT DETECTION APPARATUS AND METHOD THEREOF - A line-of-sight detection apparatus includes a detection unit configured to detect a face from image data, a first extraction unit configured to extract a feature amount corresponding to a direction of the face from the image data, a calculation unit configured to calculate a line-of-sight reliability of each of a right eye and a left eye based on the face, a selection unit configured to select an eye according to the line-of-sight reliability, a second extraction unit configured to extract a feature amount of an eye region of the selected eye from the image data, and an estimation unit configured to estimate a line of sight of the face based on the feature amount corresponding to the face direction and the feature amount of the eye region. | 07-26-2012 |
20120189161 | VISUAL ATTENTION APPARATUS AND CONTROL METHOD BASED ON MIND AWARENESS AND DISPLAY APPARATUS USING THE VISUAL ATTENTION APPARATUS - Disclosed are a visual attention apparatus based on mind awareness and an image output apparatus using the same. Exemplary embodiments of the present invention can reduce data throughput by performing object segmentation and context analysis according to downsampling and colors and approximate shapes of input images so as to detect attention regions using extrinsic visual attention and intrinsic visual attention. In addition, the exemplary embodiments of the present invention can detect the attention regions having different viewpoints for each user by detecting the attention regions due to the extrinsic visual attention and the intrinsic visual attention and processing and displaying the attention regions as various regions of interest, thereby increasing the image immersion and the utility of contents. | 07-26-2012 |
20120189162 | MOBILE UNIT POSITION DETECTING APPARATUS AND MOBILE UNIT POSITION DETECTING METHOD - The mobile unit position detecting apparatus generates target data by extracting a target from an image shot by the image capturing device, extracts target setting data that best matches the target data, is prerecorded in a recording unit and is shot for each target, obtains a target ID corresponding to the extracted target setting data from the recording unit, detects position data associated with the obtained target ID, tracks the target in the image shot by the image capturing device, and calculates an aspect ratio of the target being tracked in the image. If the aspect ratio is equal to or lower than a threshold value, the mobile unit position detecting apparatus outputs the detected position data. | 07-26-2012 |
20120189163 | APPARATUS AND METHOD FOR RECOGNIZING HAND ROTATION - An apparatus and a method are provided that can intuitively and easily recognize hand rotation. The apparatus for recognizing a hand rotation includes a camera for photographing a plurality of hand image data, a detector for extracting circles through fingers of the hand image data and a controller for recognizing hand rotation through changes in positions and sizes of the circles extracted from each of the plurality of hand image data. | 07-26-2012 |
20120189164 | RULE-BASED COMBINATION OF A HIERARCHY OF CLASSIFIERS FOR OCCLUSION DETECTION - A person detection system includes a face detector configured to detect a face in an input video sequence, the face detector outputting a face keyframe to be stored if a face is detected; and a person detector configured to detect a person in the input video sequence if the face detector fails to detect a face, the person detector outputting a person keyframe to be stored, if a person is detected in the input video sequence. | 07-26-2012 |
20120189165 | METHOD OF PROCESSING BODY INSPECTION IMAGE AND BODY INSPECTION APPARATUS - A method of processing a body inspection image and a body inspection apparatus are disclosed. In one embodiment, the method may comprise recognizing a target region by means of pattern recognition, and performing privacy protection processing on the recognized target region. The target region may comprise a head and/or crotch part. According to the present disclosure, it is possible to achieve a compromise between privacy protection and body inspection. | 07-26-2012 |
20120195459 | CLASSIFICATION OF TARGET OBJECTS IN MOTION - A method for classifying objects in motion that includes providing, to a processor, feature data for one or more classes of objects to be classified, wherein the feature data is indexed by object class, orientation, and sensor. The method also includes providing, to the processor, one or more representative models for characterizing one or more orientation motion profiles for the one or more classes of objects in motion. The method also includes acquiring, via the processor, feature data for a target object in motion from multiple sensors and/or for multiple times, along with the trajectory of the target object in motion, to classify the target object based on the feature data, the one or more orientation motion profiles and the trajectory of the target object in motion. | 08-02-2012 |
20120195460 | CONTEXT AWARE AUGMENTATION INTERACTIONS - A mobile platform renders different augmented reality objects based on the spatial relationship, such as the proximity and/or relative positions between real-world objects. The mobile platform detects and tracks a first object and a second object in one or more captured images. The mobile platform determines the spatial relationship of the objects, e.g., the proximity or distance between objects and/or the relative positions between objects. The proximity may be based on whether the objects appear in the same image or the distance between the objects. Based on the spatial relationship of the objects, the augmentation object to be rendered is determined, e.g., by searching a database. The selected augmentation object is rendered and displayed. | 08-02-2012 |
20120195461 | CORRELATING AREAS ON THE PHYSICAL OBJECT TO AREAS ON THE PHONE SCREEN - A mobile platform renders an augmented reality graphic to indicate selectable regions of interest on a captured image or scene. The region of interest is an area that is defined on the image of a physical object, which when selected by the user can generate a specific action. The mobile platform captures and displays a scene that includes an object and detects the object in the scene. A coordinate system is defined within the scene and used to track the object. A selectable region of interest is associated with one or more areas on the object in the scene. An indicator graphic is rendered for the selectable region of interest, where the indicator graphic identifies the selectable region of interest. | 08-02-2012 |
20120195462 | FLAME IDENTIFICATION METHOD AND DEVICE USING IMAGE ANALYSES IN HSI COLOR SPACE - In a flame identification method and device for identifying any flame image in a plurality of frames captured consecutively from a monitored area, for each image frame, intensity foreground pixels are obtained based on intensity values of pixels, a fire-like image region containing the intensity foreground pixels is defined when an intensity foreground area corresponding to the intensity foreground pixels is greater than a predetermined intensity foreground area threshold, and saturation foreground pixels are obtained from all pixels in the fire-like image region based on saturation values thereof to obtain a saturation foreground area corresponding to the saturation foreground pixels. Linear regression analyses are performed on two-dimensional coordinates each formed by the intensity and saturation pixel areas associated with a corresponding image frame to generate a determination coefficient. Whether a flame image exists in the image frames is determined based on the determination coefficient and a predetermined identification threshold. | 08-02-2012 |
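A hedged sketch of the per-frame measurements and the regression step in 20120195462: for each frame the intensity-foreground area and a saturation-foreground area inside the fire-like region are collected, and the coefficient of determination of a linear fit across frames is compared with an identification threshold. The thresholds and the assumption that flame pixels are bright with low saturation are illustrative choices, not taken from the application.

```python
# Hedged sketch of the flame test: per frame, measure the intensity-foreground area and
# a saturation-foreground area inside the fire-like region, then regress one area
# against the other across frames and use the coefficient of determination (R^2)
# as the flicker-consistency score.
import numpy as np

def frame_areas(intensity, saturation, i_thresh=180, s_thresh=0.25, min_area=50):
    fg = intensity > i_thresh                        # intensity foreground pixels
    if fg.sum() < min_area:
        return None                                  # no fire-like region in this frame
    sat_fg = fg & (saturation < s_thresh)            # assumed low-saturation flame pixels
    return float(fg.sum()), float(sat_fg.sum())

def flame_score(area_pairs):
    a = np.array(area_pairs, dtype=float)            # shape (num_frames, 2)
    slope, intercept = np.polyfit(a[:, 0], a[:, 1], 1)
    predicted = slope * a[:, 0] + intercept
    ss_res = ((a[:, 1] - predicted) ** 2).sum()
    ss_tot = ((a[:, 1] - a[:, 1].mean()) ** 2).sum() + 1e-9
    return 1.0 - ss_res / ss_tot                     # compare to an identification threshold

# Usage: pairs = [p for p in (frame_areas(I, S) for I, S in hsi_frames) if p is not None]
```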
20120195463 | IMAGE PROCESSING DEVICE, THREE-DIMENSIONAL IMAGE PRINTING SYSTEM, AND IMAGE PROCESSING METHOD AND PROGRAM - The image processing device includes a three-dimensional image data input unit which enters three-dimensional image data representing a three-dimensional image, a subject extractor which extracts a subject from the three-dimensional image data, a spatial vector calculator which calculates a spatial vector of the subject from a plurality of planar image data having different viewpoints contained in the three-dimensional image data, and a three-dimensional image data recorder which records the spatial vector and the three-dimensional image data in association with each other. | 08-02-2012 |
20120195464 | AUGMENTED REALITY SYSTEM AND METHOD FOR REMOTELY SHARING AUGMENTED REALITY SERVICE - An augmented reality (AR) system and method for remotely sharing an AR service is provided. The AR system includes a plurality of client devices and a host device. The AR system allows information related to a marker and information related to an AR object to be shared between client devices participating in an AR session, which may be separated by a reference distance, through a host device. Accordingly, an AR service may be shared between the client devices. | 08-02-2012 |
20120195465 | PERSONNEL SECURITY SCREENING SYSTEM WITH ENHANCED PRIVACY - The present invention is directed towards processing security images of people subjected to X-ray radiation. The present invention processes a generated image by dividing the generated image into at least two regions or mask images, separately processing the at least two regions of the image, and viewing the resultant processed region images either alone or as a combined image. | 08-02-2012 |
20120195466 | IMAGE-BASED SURFACE TRACKING - A method of image-tracking by using an image capturing device ( | 08-02-2012 |
20120195467 | Object Information Derived from Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 08-02-2012 |
20120195468 | Object Information Derived from Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 08-02-2012 |
20120195469 | FORMATION OF A TIME-VARYING SIGNAL REPRESENTATIVE OF AT LEAST VARIATIONS IN A VALUE BASED ON PIXEL VALUES - A method of forming a time-varying signal representative of at least variations in a value based on pixel values from a sequence of images, the signal corresponding in length to the sequence of images, includes obtaining the sequence of images. A plurality of groups ( | 08-02-2012 |
20120195470 | HIGH CONTRAST RETROREFLECTIVE SHEETING AND LICENSE PLATES - The present disclosure relates to the formation of high contrast, wavelength independent retroreflective sheeting made by including a light scattering material on at least a portion of the retroreflective sheeting. The light scattering material reduces the brightness of the retroreflective sheeting without substantially changing the appearance of the retroreflective sheeting when viewed under scattered light. | 08-02-2012 |
20120201417 | APPARATUS AND METHOD FOR PROCESSING SENSORY EFFECT OF IMAGE DATA - A method and apparatus is capable of processing a sensory effect of image data. The apparatus includes an image analyzer that analyzes depth information and texture information about at least one object included in an image. A motion analyzer analyzes a motion of a user. An image matching processor matches the motion of the user to the image. An image output unit outputs the image to which the motion of the user is matched, and a sensory effect output unit outputs a texture of an object touched by the body of the user to the body of the user. | 08-09-2012 |
20120201418 | DIGITAL RIGHTS MANAGEMENT OF CAPTURED CONTENT BASED ON CAPTURE ASSOCIATED LOCATIONS - A certification is received from a user stating that captured content does not comprise a particular restricted element and a request from the user for an adjustment of a digital rights management rule identified for the captured content based on the captured content comprising the particular restricted element. At least one term of the digital rights management rule is adjusted to reflect that the captured content does not comprise the particular restricted element. The usage of the captured content by the user is monitored to determine whether the usage matches the certification statement. | 08-09-2012 |
20120201419 | MAP INFORMATION DISPLAY APPARATUS, MAP INFORMATION DISPLAY METHOD, AND PROGRAM - A map information display apparatus for displaying map information on the basis of information on image-capturing times and image-capturing positions that are respectively associated with a plurality of captured images includes a captured image extraction unit configured to extract images captured within a predetermined time period that includes the image-capturing time of a predetermined captured image from among the plurality of captured images; a map area selection unit configured to select an area of a map so as to include the image-capturing positions of the captured images extracted by the captured image extraction unit by using as a reference the image-capturing position of the predetermined captured image; and a map information display unit configured to display map information in such a manner that the area of the map, which is selected by the map area selection unit, is displayed. | 08-09-2012 |
20120201420 | Object Recognition and Describing Structure of Graphical Objects - Methods for processing machine-readable forms or documents of non-fixed format are disclosed. The methods make use of, for example, a structural description of characteristics of document elements, a description of a logical structure of the document, and methods of searching for document elements by using the structural description. A structural description of the spatial and parametric characteristics of document elements and the logical connections between elements may include a hierarchical logical structure of the elements, specification of an algorithm of determining the search constraints, specification of characteristics of searched elements, and specification of a set of parameters for a compound element identified on the basis of the aggregate of its components. The method of describing the logical structure of a document and methods of searching for elements of a document may be based on the use of the structural description. | 08-09-2012 |
20120201421 | System and Method for Automatic Registration Between an Image and a Subject - A patient defines a patient space in which an instrument can be tracked and navigated. An image space is defined by image data that can be registered to the patient space. A tracking device can be connected to a member in a known manner that includes imageable portions that generate image points in the image data. Selected image slices or portions can be used to register reconstructed image data to the patient space. | 08-09-2012 |
20120201422 | SIGNAL PROCESSING APPARATUS - A signal processing apparatus for displaying an input image in the state in which a part of the image is enlarged, displays an enlarged image obtained by enlarging a part of a designated object in the input image so that the enlarged image is superimposed at a position in accordance with the position of the designated object. | 08-09-2012 |
20120201423 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE PROCESSING PROGRAM AND RECORDING MEDIUM - There are provided an image processing apparatus, an image processing method and an image processing program for transforming a target image having no contour of straight line portions. | 08-09-2012 |
20120207345 | TOUCHLESS HUMAN MACHINE INTERFACE - A system and method for receiving input from a user is provided. The system includes at least one camera configured to receive an image of a hand of the user and a controller configured to analyze the image and issue a command based on the analysis of the image. | 08-16-2012 |
20120207346 | Detecting and Localizing Multiple Objects in Images Using Probabilistic Inference - An object detection system is disclosed herein. The object detection system allows detection of one or more objects of interest using a probabilistic model. The probabilistic model may include voting elements usable to determine which hypotheses for locations of objects are probabilistically valid. The object detection system may apply an optimization algorithm such as a simple greedy algorithm to find hypotheses that optimize or maximize a posterior probability or log-posterior of the probabilistic model or a hypothesis receiving a maximal probabilistic vote from the voting elements in a respective iteration of the algorithm. Locations of detected objects may then be ascertained based on the found hypotheses. | 08-16-2012 |
20120207347 | IMAGE ROTATION FROM LOCAL MOTION ESTIMATES - A measure of frame-to-frame rotation is determined. Integral projection vector gradients are determined and normalized for a pair of images. Locations of primary maximum and minimum peaks of the integral projection vector gradients are determined. Based on normalized distances between the primary maximum and minimum peaks, a global image rotation is determined. | 08-16-2012 |
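The building blocks of 20120207347 can be sketched as follows: an integral projection vector (row sums), its normalized gradient, and the locations of the primary maximum and minimum peaks. How the change in normalized peak separation maps to a rotation angle is not spelled out in the abstract, so the sketch only returns that change as a rotation proxy; calibration to degrees is left to the caller.

```python
# Hedged sketch of the described building blocks: integral projection vectors, their
# normalized gradients, and the primary maximum/minimum peak locations.
import numpy as np

def projection_peak_distance(gray):
    rows = gray.astype(float).sum(axis=1)                 # horizontal integral projection vector
    grad = np.gradient(rows)
    grad = grad / (np.abs(grad).max() + 1e-9)             # normalize the projection gradient
    peak_max, peak_min = int(np.argmax(grad)), int(np.argmin(grad))
    return abs(peak_max - peak_min) / float(len(grad))    # normalized peak separation

def rotation_proxy(frame_a, frame_b):
    # A change in normalized peak separation between frames indicates global rotation;
    # mapping this value to an angle requires a caller-supplied calibration.
    return projection_peak_distance(frame_b) - projection_peak_distance(frame_a)
```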
20120207348 | VEHICLE DETECTION APPARATUS - A vehicle detection apparatus includes a lamp candidate extraction unit that extracts, as a lamp candidate, a pixel region that may correspond to a tail lamp of a vehicle from pixel regions that an integration processing unit creates by extracting and integrating pixels of an image, and a grouping unit that regroups those groups containing the lamp candidate among the groups generated by grouping position data detected by a position detection unit and then regroups all groups. In the regrouping processing, the threshold used for regrouping groups containing the lamp candidate is set to permit regrouping more easily than the threshold used for subsequently regrouping all groups. | 08-16-2012 |
20120207349 | TARGETED CONTENT ACQUISITION USING IMAGE ANALYSIS - A method is provided in which a tag is affixed to a known individual that is to be identified within a known field of view of an image capture system. The tag is a physical tag comprising at least a known feature. Subsequent to affixing the tag to the known individual, image data is captured within the known field of view of the image capture system, which is then provided to a processor. Image analysis is performed on the captured image data to detect the at least a known feature. In dependence upon detecting the at least a known feature, an occurrence of the known individual within the captured image data is identified. | 08-16-2012 |
20120207350 | APPARATUS FOR IDENTIFICATION OF AN OBJECT QUEUE, METHOD AND COMPUTER PROGRAM - In daily life, people are often forced to join a queue in order, for example, to pay at a checkout or to be dealt with at an airport, etc. Because of the various forms of a queue, these are not usually recorded automatically, but are analyzed manually. For example, if a long queue is formed at a supermarket, as a result of which the predicted waiting time for the customers rises above a threshold value, this situation can be identified by the checkout personnel, and a further checkout can be opened. A device | 08-16-2012 |
20120207351 | METHOD AND EXAMINATION APPARATUS FOR EXAMINING AN ITEM UNDER EXAMINATION IN THE FORM OF A PERSON AND/OR A CONTAINER - An examination apparatus examines an item including a person or a container and has a determination unit for determining a relevance level which can be assigned to the item under examination, in particular a hazard level, and an image capture unit for capturing an image of the item under examination. The examination apparatus has a database, an automated evaluation unit for automatically evaluating at least one section of the image using the database, an evaluation unit operated by a user for the visual evaluation of a section of the image by the user, and an input unit for inputting at least one evaluation input by the user, and a database processing unit for processing the database. The database processing unit processes a database entry using the evaluation input in conjunction with the determination of the relevance level. | 08-16-2012 |
20120207352 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 08-16-2012 |
20120207353 | System And Method For Detecting And Tracking An Object Of Interest In Spatio-Temporal Space - The present invention provides a system and method for detecting and tracking a moving object. First, robust change detection is applied to find initial candidate regions in consecutive frames. These initial detections in consecutive frames are stacked to produce space-time bands, which are extracted by a Hough transform and an entropy-minimization-based band detection algorithm. | 08-16-2012 |
20120207354 | IMAGE SENSING APPARATUS AND METHOD FOR CONTROLLING THE SAME - Upon receiving an instruction from a user to start sensing a still image, an image sensing apparatus performs scene determination based on an evaluation value of scene determination from an image sensed immediately after the luminance of the image converges to a predetermined range of a target luminance. The image sensing apparatus can accurately determine a scene of the image even with an image sensor having a narrow dynamic range. | 08-16-2012 |
20120207355 | X-RAY CT APPARATUS AND IMAGE DISPLAY METHOD OF X-RAY CT APPARATUS - The X-ray CT apparatus which includes an X-ray generator and an X-ray detector for acquiring projection data of an object from plural angles and creates an arbitrary cross-sectional image of the object on the basis of the projection data includes: an extraction section which extracts a region, which includes a target organ moving periodically, from the cross-sectional image; a synchronous phase determination section which determines a synchronous phase, which is used when creating a synchronous cross-sectional image synchronized with periodic motion of the target organ, on the basis of continuity of the target organ in a direction perpendicular to the cross-sectional image; a synchronous cross-sectional image creating section which creates the synchronous cross-sectional image on the basis of projection data corresponding to the synchronous phase determined by the synchronous phase determination section; and a display unit which displays the synchronous cross-sectional image. | 08-16-2012 |
20120213403 | Simultaneous Image Distribution and Archiving - The present specification discloses a storage system for enabling the substantially concurrent storage and access of data that has three dimensional images processed to identify a presence of a threat item. The system includes a source of data, a temporary storage memory for receiving and temporarily storing the data, a long term storage, and multiple workstations adapted to display three dimensional images. The temporary storage memory is adapted to support multiple file input/output operations executing substantially concurrently, including the receiving of data, transmitting of data to workstations, and transmitting of data to long term storage. | 08-23-2012 |
20120213404 | AUTOMATIC EVENT RECOGNITION AND CROSS-USER PHOTO CLUSTERING - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for automatic event recognition and photo clustering. In one aspect, methods include receiving, from a first user, first image data corresponding to a first image, receiving, from a second user, second image data corresponding to a second image, comparing the first image data and the second image data, and determining that the first image and the second image correspond to a coincident event based on the comparing. | 08-23-2012 |
20120213405 | MOVING OBJECT DETECTION APPARATUS - A moving object detection apparatus generates frame difference image data each time a frame data is captured, based on the captured frame data and previous frame data, and such frame difference image data is divided into pixel blocks. Subsequently, for each of the pixel blocks a discrete cosine transformation (DCT), a two-dimensional DCT coefficient is calculated, and such two-dimensional DCT coefficients are accumulated and stored. The value of each element of the two-dimensional DCT coefficient is arranged to form a characteristic vector, and, for each of the pixel blocks at the same position of the frame difference image data, the characteristic vector is generated and then such characteristic vector is arranged to form a time-series vector. The time-series vector derived from moving-object-capturing pixel blocks is used to calculate a principal component vector and a principal component score. | 08-23-2012 |
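A rough sketch of the block-DCT feature idea in 20120213405, assuming SciPy and scikit-learn; the 8x8 block size, the flattened per-block feature layout, and the helper names are assumptions for illustration rather than the patent's parameters.

```python
# Frame difference -> fixed-size blocks -> 2-D DCT per block, then a
# principal-component step over one block's time series of coefficient vectors.
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA

BLOCK = 8  # assumed block size

def block_dct_features(prev_frame, curr_frame):
    """Return a dict mapping block coordinates to flattened 2-D DCT coefficients
    of the frame-difference block (the 'characteristic vector')."""
    diff = curr_frame.astype(np.float32) - prev_frame.astype(np.float32)
    h = (diff.shape[0] // BLOCK) * BLOCK
    w = (diff.shape[1] // BLOCK) * BLOCK
    feats = {}
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            coeffs = dctn(diff[y:y + BLOCK, x:x + BLOCK], norm='ortho')
            feats[(y // BLOCK, x // BLOCK)] = coeffs.ravel()
    return feats

def principal_component_scores(per_frame_feats, block_pos, n_components=3):
    """Stack one block's characteristic vectors over time (needs at least
    n_components frames) and compute principal-component scores and vectors."""
    series = np.stack([f[block_pos] for f in per_frame_feats])  # (T, BLOCK*BLOCK)
    pca = PCA(n_components=n_components)
    return pca.fit_transform(series), pca.components_
```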
20120213406 | SUBJECT DESIGNATING DEVICE AND SUBJECT TRACKING APPARATUS - A subject designating device includes: a representative value calculation unit that calculates a representative value for each image of a brightness image and chrominance images based upon pixel values indicated at pixels present within a first subject area; a second image generation unit that creates a differential image by subtracting the representative value from pixel values indicated at pixels present within a second subject area; a binarizing unit that binarizes the differential image; a synthesizing unit that creates a synthetic image by combining binary images in correspondence to the brightness image and the chrominance images; a mask extraction unit that extracts a mask constituted with a white pixel cluster from the synthetic image; an evaluation value calculation unit that calculates an evaluation value indicating a likelihood of the mask representing the subject; and a subject designating unit that designates the subject in the target image based upon the evaluation value. | 08-23-2012 |
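The representative-value and differential-image steps of 20120213406 might look roughly like the following sketch, assuming OpenCV; the YCrCb split, the fixed threshold, and the choice to map pixels close to the representative value to white are all assumptions rather than the patent's exact criteria.

```python
# Per-channel median inside a first subject area, subtraction over a second
# area, per-channel binarization, OR-combination, and largest-cluster mask.
import cv2
import numpy as np

def subject_mask(image_bgr, first_area, second_area, thresh=40):
    """Areas are (y0, y1, x0, x1) tuples; returns a 0/1 mask over second_area."""
    y0, y1, x0, x1 = first_area
    Y0, Y1, X0, X1 = second_area
    ycc = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb).astype(np.int16)

    binaries = []
    for ch in range(3):  # brightness plus two chrominance planes
        rep = int(np.median(ycc[y0:y1, x0:x1, ch]))        # representative value
        diff = np.abs(ycc[Y0:Y1, X0:X1, ch] - rep)         # differential image
        binaries.append((diff < thresh).astype(np.uint8))  # binarize

    synthetic = binaries[0] | binaries[1] | binaries[2]    # synthetic image
    n, labels, stats, _ = cv2.connectedComponentsWithStats(synthetic, connectivity=8)
    if n <= 1:
        return synthetic
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])   # biggest white cluster
    return (labels == largest).astype(np.uint8)
```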
20120213407 | IMAGE CAPTURE AND POST-CAPTURE PROCESSING - Image data of a scene is captured. Spectral profile information is obtained for the scene. A database of plural spectral profiles is accessed, each of which maps a material to a corresponding spectral profile reflected therefrom. The spectral profile information for the scene is matched against the database, and materials for objects in the scene are identified by using matches between the spectral profile information for the scene against the database. Metadata which identifies materials for objects in the scene is constructed, and the metadata is embedded with the image data for the scene. | 08-23-2012 |
20120213408 | SYSTEM OF CONTROLLING DEVICE IN RESPONSE TO GESTURE - A control system includes: an input unit through which a signal for a gesture and a background of the gesture is input; a gesture recognition unit which recognizes the gesture on the basis of the input signal; an attribute recognition unit which recognizes an attribute of a background target of the recognized gesture on the basis of the input signal; and a command transmitting unit which generates a control command on the basis of a combination of the recognized gesture and the background target attribute and transmits the control command to a device. | 08-23-2012 |
20120213409 | DECODER-SIDE REGION OF INTEREST VIDEO PROCESSING - The disclosure is directed to decoder-side region-of-interest (ROI) video processing. A video decoder determines whether ROI assistance information is available. If not, the decoder defaults to decoder-side ROI processing. The decoder-side ROI processing may estimate the reliability of ROI extraction in the bitstream domain. If ROI reliability is favorable, the decoder applies bitstream domain ROI extraction. If ROI reliability is unfavorable, the decoder applies pixel domain ROI extraction. The decoder may apply different ROI extraction processes for intra-coded (I) and inter-coded (P or B) data. The decoder may use color-based ROI generation for intra-coded data, and coded block pattern (CBP)-based ROI generation for inter-coded data. ROI refinement may involve shape-based refinement for intra-coded data, and motion- and color-based refinement for inter-coded data. | 08-23-2012 |
20120213410 | METHODS AND APPARATUS FOR DETECTING A COMPOSITION OF AN AUDIENCE OF AN INFORMATION PRESENTING DEVICE - Methods and apparatus for detecting a composition of an audience of an information presenting device are disclosed. A disclosed example method includes: capturing at least one image of the audience; determining a number of people within the at least one image; prompting the audience to identify its members if a change in the number of people is detected based on the number of people determined to be within the at least one image; and if a number of members identified by the audience is different from the determined number of people after a predetermined number of prompts of the audience, adjusting a value to avoid excessive prompting of the audience. | 08-23-2012 |
20120213411 | IMAGE TARGET IDENTIFICATION DEVICE, IMAGE TARGET IDENTIFICATION METHOD, AND IMAGE TARGET IDENTIFICATION PROGRAM - A device is provided with a luminance histogram calculation unit which generates a luminance histogram showing appearance frequency of luminance values contained within the infrared image and determines a luminance value corresponding to a peak in the luminance histogram as a background luminance level of the background; a luminance shift calculation unit which sets the background luminance value as an intermediate value in luminance range width of the infrared image and generates a luminance shift image by linearly shifting other luminance values in the infrared image based on the intermediate value; a reversed image processing unit which generates a reversed shift image wherein the luminance level of the luminance shift image is reversed; and a luminance calculation processing unit which generates a calculation-processed image by performing calculation processing based on the difference in the luminance values at corresponding positions in the luminance shift image and the reversed shift image. | 08-23-2012 |
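The luminance-shift scheme of 20120213411 can be sketched compactly, assuming an 8-bit infrared image; using an absolute difference as the final calculation processing is an assumption of this sketch, not necessarily the patent's operation.

```python
# Histogram peak as background level, shift background to mid-range,
# reverse the shifted image, and combine by absolute difference.
import numpy as np

def target_enhance(ir_image):
    """ir_image: 2-D uint8 infrared image."""
    hist = np.bincount(ir_image.ravel(), minlength=256)
    background = int(np.argmax(hist))                  # peak = background luminance
    shift = 128 - background                           # move background to mid-level
    shifted = np.clip(ir_image.astype(np.int16) + shift, 0, 255)
    reversed_img = 255 - shifted                       # luminance-reversed image
    return np.abs(shifted - reversed_img).astype(np.uint8)
```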
20120219174 | EXTRACTING MOTION INFORMATION FROM DIGITAL VIDEO SEQUENCES - A method for analyzing a digital video sequence of a scene to extract background motion information and foreground motion information, comprising: analyzing at least a portion of a plurality of image frames captured at different times to determine corresponding one-dimensional image frame representations; combining the one-dimensional frame representations to form a two-dimensional spatiotemporal representation of the video sequence; using a data processor to identify a set of trajectories in the two-dimensional spatiotemporal representation of the video sequence; analyzing the set of trajectories to identify a set of foreground trajectory segments representing foreground motion information and a set of background trajectory segments representing background motion information; and storing an indication of the foreground motion information or the background motion information or both in a processor-accessible memory. | 08-30-2012 |
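A minimal sketch of the two-dimensional spatiotemporal representation in 20120219174; reducing each frame to its column means is one possible one-dimensional representation and is an assumption here.

```python
# Stack one 1-D profile per frame into a (time x width) spatiotemporal image.
import numpy as np

def spatiotemporal_image(gray_frames):
    """gray_frames: iterable of equal-sized 2-D arrays -> (T x width) array."""
    profiles = [frame.mean(axis=0) for frame in gray_frames]  # 1-D per frame
    return np.stack(profiles)  # row t is the profile of frame t

# Near-vertical structures in the result correspond to static background;
# slanted streaks correspond to foreground motion trajectories.
```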
20120219175 | ASSOCIATING AN OBJECT IN AN IMAGE WITH AN ASSET IN A FINANCIAL APPLICATION - The invention relates to a method for associating an object in an image with an asset of a number of assets in a financial application. The method includes receiving the image of the object comprising global positioning system (GPS) data, where the image is captured using an image-taking device with GPS functionality and processing the image to generate processed GPS data. The method further includes determining, using the processed GPS data, a geographic location of the object in the image, and identifying, using the geographic location, the object by performing a recognition analysis of the image. The method further includes associating, based on the recognition analysis, the object in the image with the asset of the assets of an owner in the financial application, and storing, in the financial application, the image of the object associated with the asset of the assets of the owner. | 08-30-2012 |
20120219176 | Method and Apparatus for Pattern Tracking - A method and apparatus for pattern tracking. The method includes the steps of performing a foreground detection process to determine a hand-pill-hand region, performing image segmentation to separate the determined hand portion of the hand-pill-hand region from the pill portion thereof, building three reference models, one for each hand region and one for the pill region, initializing a dynamic model for tracking the hand-pill-hand region, determining N possible next positions for the hand-pill-hand region, for each such determined position, determining various features, building a new model for that region in accordance with the determined position, for each position, comparing the new model and a reference model, determining a position whose new model generates a highest similarity score, determining whether that similarity score is greater than a predetermined threshold, and wherein if it is determined that the similarity score is greater than the predetermined threshold, the object is tracked. | 08-30-2012 |
20120219177 | COMPUTER-READABLE STORAGE MEDIUM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - First, a series of edge pixels representing a contour of an object or of a design represented in the object are detected from an image acquired from a capturing apparatus. Then, a plurality of straight lines are generated on the basis of the series of detected edge pixels, and vertices of the contour are detected on the basis of the plurality of straight lines. Further, relative positions and orientations of the capturing apparatus and the object relative to each other are calculated on the basis of the detected vertices, and a virtual camera in a virtual space is set on the basis of the positions and the orientations. Then, a virtual space image obtained by capturing the virtual space with the virtual camera is displayed on a display device. | 08-30-2012 |
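The final pose-recovery step in 20120219177 is commonly done with a PnP solve once the contour vertices are known; the sketch below assumes OpenCV, a planar rectangular marker of known size, and known camera intrinsics, all placeholder assumptions rather than the patent's procedure.

```python
# Recover relative camera pose from four detected contour vertices.
import cv2
import numpy as np

MARKER_MM = 80.0  # assumed physical marker size
OBJECT_POINTS = np.array([[-1, -1, 0], [1, -1, 0], [1, 1, 0], [-1, 1, 0]],
                         np.float32) * (MARKER_MM / 2)

def camera_pose(vertices_px, camera_matrix, dist_coeffs=None):
    """vertices_px: (4, 2) detected corners, ordered like OBJECT_POINTS."""
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS,
                                  np.asarray(vertices_px, np.float32),
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)   # orientation of the marker relative to the camera
    return rotation, tvec               # these parameters drive the virtual camera
```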
20120219178 | COMPUTER-READABLE STORAGE MEDIUM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - A position of a predetermined object or a predetermined design is sequentially detected from images. Then, an amount of movement of the predetermined object or the predetermined design is calculated on the basis of: a position, in a first image, of the predetermined object or the predetermined design detected from the first image; and a position, in a second image, of the predetermined object or the predetermined design detected from the second image acquired before the first image. Then, when the amount of movement is less than a first threshold, the position, in the first image, of the predetermined object or the predetermined design detected from the first image is corrected to the position, in the second image, of the predetermined object or the predetermined design detected from the second image. | 08-30-2012 |
20120219179 | COMPUTER-READABLE STORAGE MEDIUM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - A position of a predetermined object or design is sequentially detected from images. Then, an amount of movement of the predetermined object or design is calculated on the basis of: a position, in a first image, of the predetermined object or design detected from the first image; and a position, in a second image, of the predetermined object or design detected from the second image acquired before the first image. Then, when the amount of movement is less than a first threshold, the position, in the first image, of the predetermined object or design detected from the first image is corrected to a position internally dividing, in a predetermined ratio, line segments connecting: the position, in the first image, of the predetermined object or design detected from the first image; to the position, in the second image, of the predetermined object or design detected from the second image. | 08-30-2012 |
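The correction rule in 20120219178 and 20120219179 above reduces to a few lines: below a movement threshold, the new position is pulled toward the previous one, either entirely (the first of the two entries) or by internal division of the connecting segment (this entry). The default ratio and threshold are illustrative assumptions.

```python
# Suppress jitter in a sequentially detected position.
import numpy as np

def correct_position(prev_pos, new_pos, threshold=3.0, ratio=0.8):
    """ratio=1.0 reproduces the 'snap to previous position' variant."""
    prev_pos = np.asarray(prev_pos, float)
    new_pos = np.asarray(new_pos, float)
    movement = np.linalg.norm(new_pos - prev_pos)
    if movement < threshold:
        # internal division: ratio toward the previous position
        return ratio * prev_pos + (1.0 - ratio) * new_pos
    return new_pos
```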
20120219180 | Automatic Detection of Vertical Gaze Using an Embedded Imaging Device - A method of detecting and applying a vertical gaze direction of a face within a digital image includes analyzing one or both eyes of a face within an acquired image, including determining a degree of coverage of an eye ball by an eye lid within the digital image. Based on the determined degree of coverage of the eye ball by the eye lid, an approximate direction of vertical eye gaze is determined. A further action is selected based on the determined approximate direction of vertical eye gaze. | 08-30-2012 |
20120219181 | AUGMENTED REALITY-BASED FILE TRANSFER METHOD AND FILE TRANSFER SYSTEM THEREOF - An augmented reality-based file transfer method and a related file transfer system integrated with cloud computing are provided. The file transfer method is applied to file transmission between a first device and a second device wirelessly connected to each other, wherein the first device includes a file, a display unit, and an input unit electronically connected to the display unit. The file transfer method includes the following steps: when an image stored in the first device is opened, displaying the file and the image on the display unit of the first device, wherein the image comprises a face image of the second user; when the file is dragged to the face image of the second user shown in the image via the input unit and is then released, generating a command; and transferring the file from the first device to the second device according to the command. | 08-30-2012 |
20120219182 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING PROGRAM - An image processing apparatus to extract a print image to be printed onto a print medium from an original image, the image processing apparatus includes: a detecting unit that detects a specific area, which includes a plurality of pixels having a low degree of variation in pixel values, from the original image, based on a predetermined detection criterion; and an extracting unit that, when an extraction range having a predetermined shape including the print image is set in the original image, extracts the print image so that the specific area is disposed in a non-print area, which is not printed on the print medium, within the extraction range. | 08-30-2012 |
20120219183 | 3D Object Detecting Apparatus and 3D Object Detecting Method - A 3D-object detecting apparatus may include a detection-image creating device configured to detect a 3D object on an image-capture surface from an image captured by an image-capture device and to create a detection image in which a silhouette of only the 3D object is left; a density-map creating device configured to determine the 3D object's spatial densities at corresponding coordinate points in a coordinate plane on the basis of the detection image and mask images obtained for the corresponding coordinate points on the basis of virtual cuboids arranged for the corresponding coordinate points and to create a density map having pixels for the corresponding coordinate points such that the pixels have pixel values corresponding to the determined spatial densities; and a 3D-object position detecting device that detects the position of the 3D object as a representative point in a high-density region in the density map. | 08-30-2012 |
20120219184 | MONITORING OF VIDEO IMAGES - A characteristic motion in a video is identified by determining pairs of moving features that have an indicative relationship between the motions of the two moving features in the pair. For example, the motion of a pedestrian is identified by an indicative relationship between the motions of the pedestrian's feet. This indicative relationship may be that one of the feet moves relative to the surroundings while the other remains stationary. | 08-30-2012 |
20120219185 | APPARATUS AND METHOD FOR DETERMINING A LOCATION IN A TARGET IMAGE - An apparatus and a computer-implemented method are provided for determining a location in a target image (T) of a site on a surface of a physical object using two or more reference images (I | 08-30-2012 |
20120219186 | Continuous Linear Dynamic Systems - Aspects of the present invention include systems and methods for segmentation and recognition of action primitives. In embodiments, a framework, referred to as the Continuous Linear Dynamic System (CLDS), comprises two sets of Linear Dynamic System (LDS) models, one to model the dynamics of individual primitive actions and the other to model the transitions between actions. In embodiments, the inference process estimates the best decomposition of the whole sequence into continuous alternating between the two set of models, using an approximate Viterbi algorithm. In this way, both action type and action boundary may be accurately recognized. | 08-30-2012 |
20120219187 | Data Capture and Identification System and Process - An identification method and process for objects from digitally captured images thereof that uses data characteristics to identify an object from a plurality of objects in a database. The data is broken down into parameters such as a Shape Comparison, Grayscale Comparison, Wavelet Comparison, and Color Cube Comparison with object data in one or more databases to identify the actual object of a digital image. | 08-30-2012 |
20120219188 | METHOD OF PROVIDING A DESCRIPTOR FOR AT LEAST ONE FEATURE OF AN IMAGE AND METHOD OF MATCHING FEATURES - A method of providing a descriptor for at least one feature of an image comprises the steps of providing an image captured by a capturing device and extracting at least one feature from the image, and assigning a descriptor to the at least one feature, the descriptor depending on at least one parameter which is indicative of an orientation, wherein the at least one parameter is determined from the orientation of the capturing device measured by a tracking system. The invention also relates to a method of matching features of two or more images. | 08-30-2012 |
20120219189 | METHOD AND DEVICE FOR DETECTING FATIGUE DRIVING AND THE AUTOMOBILE USING THE SAME - The present application discloses a method and device for detecting fatigue driving, comprising: analyzing an eye image in the driver's eye image area with a rectangular feature template to obtain the upper eyelid line; determining the eye closure state according to the curvature or curvature feature value of the upper eyelid line; and collecting statistics on the eye closure state and thereby determining whether the driver is in a fatigue state. The present application determines whether the eyes are open or closed according to the shape of the upper eyelid, which is more accurate because the upper eyelid line has higher relative contrast, anti-interference capacity, and adaptability to changes in facial expression. | 08-30-2012 |
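One way to picture the eyelid-curvature test in 20120219189 is to fit a parabola to sampled upper-eyelid points and treat a nearly flat fit as a closed eye; the parabola fit, the curvature threshold, and the running closed-frame ratio below are assumptions of this sketch, not the patent's exact features.

```python
# Curvature of the upper eyelid as a closure cue, plus a simple fatigue statistic.
import numpy as np

def eye_closed(eyelid_points, curvature_threshold=0.02):
    """eyelid_points: (N, 2) array of (x, y) samples along the upper eyelid, N >= 3."""
    x, y = eyelid_points[:, 0], eyelid_points[:, 1]
    a, _, _ = np.polyfit(x, y, deg=2)       # y ~= a*x^2 + b*x + c
    return abs(a) < curvature_threshold     # a nearly flat eyelid line -> eye closed

def fatigued(closed_flags, ratio_threshold=0.4):
    """closed_flags: recent per-frame booleans; PERCLOS-style closed-frame ratio."""
    return float(np.mean(closed_flags)) > ratio_threshold
```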
20120224743 | SMARTPHONE-BASED METHODS AND SYSTEMS - Methods and arrangements involving portable devices, such as smartphones and tablet computers, are disclosed. Exemplary arrangements utilize the camera portions of such devices to identify nearby subjects, and take actions based thereon. Others rely on near field chip (RFID) identification of objects, or on identification of audio streams (e.g., music, voice). Some of the detailed technologies concern improvements to the user interfaces associated with such devices. Others involve use of these devices in connection with shopping, text entry, sign language interpretation, and vision-based discovery. Still other improvements are architectural in nature, e.g., relating to evidence-based state machines, and blackboard systems. Yet other technologies concern use of linked data in portable devices—some of which exploit GPU capabilities. Still other technologies concern computational photography. A great variety of other features and arrangements are also detailed. | 09-06-2012 |
20120224744 | IMAGE ANALYSIS METHOD - A moving feature is recognized in a video sequence by comparing its movement with a characteristic pattern. Possible trajectories through the video sequence are generated for an object by identifying potential matches of points in pairs of frames of the video sequence. When looking for the characteristic pattern, a number of possible trajectories are analyzed. The possible trajectories may be selected so that they are suitable for analysis. This may include selecting longer trajectories that can be easier to analyze. Thereby where the object being tracked is momentarily behind another object a continuous trajectory is generated. | 09-06-2012 |
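The trajectory-generation idea in 20120224744, matching points across pairs of frames while tolerating a skipped frame so that a momentary occlusion does not break the track, can be sketched with a simple greedy nearest-neighbour linker; the max_step and max_skip parameters and the greedy assignment are illustrative assumptions.

```python
# Link per-frame detections into candidate trajectories across frame pairs.
import numpy as np

def link_trajectories(frames_points, max_step=20.0, max_skip=1):
    """frames_points: list of (N_t, 2) arrays of detected points per frame.
    Returns trajectories as lists of (frame_index, point) pairs."""
    trajectories = []
    for t, pts in enumerate(frames_points):
        for p in pts:
            best = None
            for traj in trajectories:
                last_t, last_p = traj[-1]
                gap = t - last_t
                if 0 < gap <= 1 + max_skip:              # allow skipping occluded frames
                    d = np.linalg.norm(p - last_p)
                    if d <= max_step * gap and (best is None or d < best[0]):
                        best = (d, traj)
            if best is not None:
                best[1].append((t, p))                   # extend the closest trajectory
            else:
                trajectories.append([(t, p)])            # start a new trajectory
    return trajectories
```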
20120224745 | EVALUATION OF GRAPHICAL OUTPUT OF GRAPHICAL SOFTWARE APPLICATIONS EXECUTING IN A COMPUTING ENVIRONMENT - Graphic objects generated by a software application executing in a computing environment are evaluated. The computing environment includes a graphical user interface for managing I/O functions, a data storage device for storing computer usable program code and data, and a data processing engine in communication with the graphical user interface and the data storage device. The data processing engine receives and processes origin data from the data storage device to produce projected values for data points in the graphic image intended to be displayed. The data processing engine also creates and processes a snapshot of the displayed graphic object to produce actual values of data points in the displayed graphic object, compares the projected values to the actual values, and outputs an indication of the degree of similarity between the intended graphic object and the displayed graphic object. | 09-06-2012 |
20120224746 | CLASSIFIER ANOMALIES FOR OBSERVED BEHAVIORS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A combination of a self organizing map (SOM) and an adaptive resonance theory (ART) network may be used to identify a variety of different anomalous inputs at each cluster layer. As progressively higher layers of the cortex model component represent progressively higher levels of abstraction, anomalies occurring in the higher levels of the cortex model represent observations of behavioral anomalies corresponding to progressively complex patterns of behavior. | 09-06-2012 |
20120224747 | In-Vehicle Apparatus for Recognizing Running Environment of Vehicle - An in-vehicle running-environment recognition apparatus including an input unit for inputting an image signal from in-vehicle imaging devices for photographing the external environment of a vehicle, an image processing unit for detecting a first image area by processing the image signal, the first image area having a factor which prevents recognition of the external environment, an image determination unit for determining a second image area based on at least one of the size of the first image area, the position thereof, and the set-up positions of the in-vehicle imaging devices having the first image area, environment recognition processing being performed in the second image area, the first image area being detected by the image processing unit, and an environment recognition unit for recognizing the external environment of the vehicle based on the second image area. | 09-06-2012 |
20120230537 | TAG INFORMATION MANAGEMENT APPARATUS, TAG INFORMATION MANAGEMENT SYSTEM, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND TAG INFORMATION MANAGEMENT METHOD - A tag data management apparatus for managing tag data indicative of an attribute of content data, comprising: an extraction section that extracts positional information included in the content data, the positional information being indicative of a position associated with the content data; and a priority order determination section that determines a priority order of the content data, based on the positional information extracted by the extraction section. | 09-13-2012 |
20120230538 | PROVIDING INFORMATION ASSOCIATED WITH AN IDENTIFIED REPRESENTATION OF AN OBJECT - Methods, apparatus, systems, and computer program products are described herein that provide for using video or still shot analysis, such as AR or the like, to assist the user of mobile devices with receiving information corresponding to an abstraction or representation of a subject. Some subjects are difficult to capture in a video or still shot. The methods and devices described herein capture representations of difficult to capture or unavailable subjects and present information related to the subject with the representation. In an embodiment, the representation is a screenshot and the information is provided related to the application that is represented by the screenshot. Various other types of representations, including depictions, advertisements, portions of, and identifying marks, can be identified by the system and method, and information presented relating to the corresponding subjects. In some cases, the information is customized with financial information of the user. | 09-13-2012 |
20120230539 | PROVIDING LOCATION IDENTIFICATION OF ASSOCIATED INDIVIDUALS BASED ON IDENTIFYING THE INDIVIDUALS IN CONJUNCTION WITH A LIVE VIDEO STREAM - Systems, methods, and computer program products are provided for using real-time video analysis, such as AR or the like to assist the user of a mobile device with commerce activities. Through the use of real-time vision object recognition faces, physical features, objects, logos, artwork, products, locations and other features that can be recognized in the real-time video stream can be matched to data associated with such to assist the user with commerce activity. The commerce activity may include, but is not limited to: identifying individuals associated with the user, identifying locations associated with individuals who are associated with the user, identifying groups of individuals who share a trait, or the like. In specific embodiments, the data that is matched to the images in the real-time video stream is specific to financial institutions, such as customer financial behavior history, customer purchase power/transaction history and the like. | 09-13-2012 |
20120230540 | DYNAMICALLY IDENTIFYING INDIVIDUALS FROM A CAPTURED IMAGE - Embodiments of the invention are directed to methods and apparatuses for capturing a real-time video stream using a mobile device, determining, using a processor, which images from the real-time video stream are associated with individuals meeting user-defined criteria, and presenting on a display of the real-time video stream, one or more indicators, each indicator being associated with an image determined to be a person meeting the predefined criteria. | 09-13-2012 |
20120230541 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - An image processing apparatus includes a first acquisition unit configured to obtain identification information for a plurality of blocks of an image, a second acquisition unit configured to obtain information to be used for image processing from a pixel value of a region of the image determined based on the identification information, and an image processing unit configured to perform image processing of the image based on the information obtained by the second acquisition unit. | 09-13-2012 |
20120230542 | METHOD FOR CREATING AND USING AFFECTIVE INFORMATION IN A DIGITAL IMAGING SYSTEM - An image file for storing a still digital image and metadata related to the still digital image, the image file including digital image data representing the still digital image, and metadata that categorizes the still digital image as an important digital image, wherein the categorization uses a range of levels and the range of levels includes at least three different integer values. | 09-13-2012 |
20120230543 | Object Information Derived from Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 09-13-2012 |
20120230544 | APPARATUS AND METHOD FOR FINDING A MISPLACED OBJECT USING A DATABASE AND INSTRUCTIONS GENERATED BY A PORTABLE DEVICE - The basic invention uses a portable device that can contain a camera, a database, and a text, voice, or visual entry to control the storage of an image into a database. Furthermore, the stored image can be associated with text, color, visual, or audio data. The stored images can be used to guide the user towards a target whose current location the user does not recall. The user's commands can be issued verbally, textually, or by scrolling through the target images in the database until the desired one is found. This target can be shoes, pink sneakers, a toy, or some comparable item that the user needs to find. | 09-13-2012 |
20120230545 | Face Recognition Apparatus and Methods - One or more facial recognition categories are assigned to a face region detected in an input image ( | 09-13-2012 |
20120230546 | GENERIC OBJECT-BASED IMAGE RECOGNITION APPARATUS WITH EXCLUSIVE CLASSIFIER, AND METHOD FOR THE SAME - The present invention provides an image recognition apparatus with enhanced performance and robustness. | 09-13-2012 |
20120230547 | EYE TRACKING - An eye tracking apparatus and method of eye monitoring, comprising a target display adapted to project a moveable image of a target into a user's field of vision, an illumination source adapted to project a reference point onto a user's eye, a sensor adapted to monitor a user's eye, and a processor adapted to determine the position of a feature of a user's eye relative to the reference point, wherein the apparatus is arranged such that said determined position provides a direct indication of eye direction relative to the target direction. | 09-13-2012 |
20120237080 | METHOD FOR DETECTION OF MOVING OBJECT OF APPROXIMATELY KNOWN SIZE IN CONDITIONS OF LOW SIGNAL-TO-NOISE RATIO - The invention provides a method for detection of a moving object when signal-to-noise ratio is low. A field of view is presented as a regularly updated frame of data points. A state of the object is defined by an “azimuth—speed” pair (i.e., a hypothesis). On each update, a detection system performs two steps. At the first step, the brightness of data points of a new frame is replaced by the average brightness of points surrounding this point. At the second step, the brightness of data points of this frame is being accumulated separately for each hypothesis. On each update, one of hypotheses produces the accumulated frame with the brightest point. This hypothesis is considered the best; its frame is displayed on a screen. The object is detected when the best hypothesis stabilizes in a sequence of updates and the movement of the brightest point becomes consistent with this hypothesis. | 09-20-2012 |
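The hypothesis-accumulation scheme in 20120237080 is essentially a track-before-detect loop; the sketch below assumes NumPy/SciPy, a small square object, and a discrete grid of per-update pixel displacements standing in for the azimuth-speed hypotheses. All names and sizes are illustrative assumptions.

```python
# Smooth each frame to the object size, then shift-and-accumulate per hypothesis;
# the brightest accumulated point of the best hypothesis indicates the object.
import numpy as np
from scipy.ndimage import uniform_filter

class HypothesisAccumulator:
    def __init__(self, shape, velocities, object_size=3):
        self.object_size = object_size
        # one accumulator per hypothesized (dy, dx) displacement per update
        self.acc = {v: np.zeros(shape, np.float64) for v in velocities}

    def update(self, frame):
        smoothed = uniform_filter(frame.astype(np.float64), self.object_size)
        for (dy, dx), acc in self.acc.items():
            # motion-compensate the accumulated history, then add the new evidence
            self.acc[(dy, dx)] = np.roll(acc, shift=(dy, dx), axis=(0, 1)) + smoothed

    def best(self):
        v = max(self.acc, key=lambda v: self.acc[v].max())
        peak = np.unravel_index(np.argmax(self.acc[v]), self.acc[v].shape)
        return v, peak   # winning hypothesis and its brightest point
```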
20120237081 | ANOMALOUS PATTERN DISCOVERY - A trajectory of movement of an object is tracked in a video data image field that is partitioned into a plurality of different grids. Global image features from video data relative to the trajectory are extracted and compared to a learned trajectory model to generate a global anomaly detection confidence decision value as a function of fitting to the learned trajectory model. Local image features are also extracted for each of the image field grids that include the object trajectory, which are compared to learned feature models for the grids to generate local anomaly detection confidence decisions for each grid as a function of fitting to the learned feature models for the grids. The global anomaly detection confidence decision value and the local anomaly detection confidence decision values for the grids are combined into a fused anomaly decision with respect to the tracked object. | 09-20-2012 |
20120237082 | VIDEO BASED MATCHING AND TRACKING - An analytical device is disclosed that analyzes whether a first image is similar to (or the same as) as a second image. The analytical device analyzes the first image by combining at least a part (or all) of the first image with at least a part (or all) of the second image, and by analyzing at least a part (or all) of the combined image. Part or all of the combination may be analyzed with respect to the abstraction of the first image and/or the abstraction of the second image. The abstraction may be based on a Bag of Features (BoF) description, based on a histogram of intensity values, or based on other types of abstraction methodologies. The analysis may involve comparing one or more aspects of the combination (such as the entropy or randomness of the combination) with the one or more aspects of the abstracted first image and/or abstracted second image. Based on the comparison, the analytical device may determine whether the first image is similar to or the same as the second image. The analytical device may work with a variety of images in a variety of applications including a video tracking system, a biometric analytic system, or a database image analytical system. | 09-20-2012 |
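The entropy-based comparison in 20120237082 can be approximated with intensity histograms as the abstraction; treating the summed histogram as the combination of the two images and using a 0.1-bit margin are assumptions of this sketch, not the patent's criterion.

```python
# Compare the entropy of a combined histogram with the entropies of the parts;
# similar images add little randomness when combined.
import numpy as np

def _entropy(hist):
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def images_similar(img_a, img_b, bins=32, margin=0.1):
    """img_a, img_b: 2-D arrays with values in [0, 255]."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256))
    h_comb = ha + hb                          # histogram of the combined pixel set
    gain = _entropy(h_comb) - max(_entropy(ha), _entropy(hb))
    return gain < margin                      # little added entropy -> similar
```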
20120237083 | AUTOMATIC OBSTACLE LOCATION MAPPING - A method of automatic obstacle location mapping comprises receiving an indication of a feature to be identified in a defined area. An instance of the feature is found within an image. A report is then generated conveying the location of said feature. | 09-20-2012 |
20120237084 | SYSTEM AND METHOD FOR IDENTIFYING THE EXISTENCE AND POSITION OF TEXT IN VISUAL MEDIA CONTENT AND FOR DETERMINING A SUBJECT'S INTERACTIONS WITH THE TEXT - A reading meter system and method is provided for identifying the existence and position of text in visual media content (e.g., a document to be displayed (or being displayed) on a computer monitor or other display device) and determining if a subject has interacted with the text and/or the level of the subject's interaction with the text (e.g., whether the subject looked at the text, whether the subject read the text, whether the subject comprehended the text, whether the subject perceived and made sense of the text, and/or other levels of the subject's interaction with the text). The determination may, for example, be based on data generated from an eye tracking device. The reading meter system may be used alone and/or in connection with an emotional response tool (e.g., a software-based tool for determining the subject's emotional response to the text and/or other elements of the visual media content on which the text appears). If used together, the reading meter system and emotional response tool advantageously may both receive, and perform processing on, eye data generated from a common eye tracking device. | 09-20-2012 |
20120237085 | METHOD FOR DETERMINING THE POSE OF A CAMERA AND FOR RECOGNIZING AN OBJECT OF A REAL ENVIRONMENT - A method for determining the pose of a camera ( | 09-20-2012 |
20120237086 | MOVING BODY POSITIONING DEVICE - Provided is a moving body positioning device, using an external monitoring camera, that serves as an essential element for monitoring and tracing a moving body. | 09-20-2012 |
20120243729 | LOGIN METHOD BASED ON DIRECTION OF GAZE - A method of authenticating a user of a computing device is proposed, together with computing device on which the method is implemented. A plurality of objects is displayed on a display screen. The plurality of objects includes at least objects that make up a sequence of objects pre-selected as the user's passcode. In response to a trigger signal an image of the user's face is captured while looking at one of the objects on the display screen. A determination of which object is in the direction of the user's gaze is made from the photograph and whether or not the gaze is on the correct object in the sequence of the passcode. This is repeated for each object in the sequence of the passcode. | 09-27-2012 |
20120243730 | COLLABORATIVE CAMERA SERVICES FOR DISTRIBUTED REAL-TIME OBJECT ANALYSIS - A collaborative object analysis capability is depicted and described herein. The collaborative object analysis capability enables a group of cameras to collaboratively analyze an object, even when the object is in motion. The analysis of an object may include one or more of identification of the object, tracking of the object while the object is in motion, analysis of one or more characteristics of the object, and the like. In general, a camera is configured to discover the camera capability information for one or more neighboring cameras, and to generate, on the basis of such camera capability information, one or more actions to be performed by one or more neighboring cameras to facilitate object analysis. The collaborative object analysis capability also enables additional functions related to object analysis, such as alerting functions, archiving functions (e.g., storing captured video, object tracking information, object recognition information, and so on), and the like. | 09-27-2012 |
20120243731 | IMAGE PROCESSING METHOD AND IMAGE PROCESSING APPARATUS FOR DETECTING AN OBJECT - An image processing method and an image processing apparatus for detecting an object are provided. The image processing method includes the following steps: partitioning an image into at least a first sub-image covering a first zone and a second sub-image covering a second zone according to a designed trait; and performing an image detection process upon the first sub-image for checking whether the object is within the first zone to generate a first detecting result. The object is a human face, and the image detection process is a face detection process. | 09-27-2012 |
20120243732 | Adaptable Framework for Cloud Assisted Augmented Reality - A mobile platform efficiently processes sensor data, including image data, using distributed processing in which latency sensitive operations are performed on the mobile platform, while latency insensitive, but computationally intensive operations are performed on a remote server. The mobile platform acquires sensor data, such as image data, and determines whether there is a trigger event to transmit the sensor data to the server. The trigger event may be a change in the sensor data relative to previously acquired sensor data, e.g., a scene change in an image. When a change is present, the sensor data may be transmitted to the server for processing. The server processes the sensor data and returns information related to the sensor data, such as identification of an object in an image or a reference image or model. The mobile platform may then perform reference based tracking using the identified object or reference image or model. | 09-27-2012 |
20120243733 | MOVING OBJECT DETECTING DEVICE, MOVING OBJECT DETECTING METHOD, MOVING OBJECT DETECTION PROGRAM, MOVING OBJECT TRACKING DEVICE, MOVING OBJECT TRACKING METHOD, AND MOVING OBJECT TRACKING PROGRAM - A moving object detecting device | 09-27-2012 |
20120243734 | Determining Detection Certainty In A Cascade Classifier - Disclosed are embodiments for determining detection certainty in a cascade classifier ( | 09-27-2012 |
20120243735 | ADJUSTING DISPLAY FORMAT IN ELECTRONIC DEVICE - A display format adjustment system includes a receiving module, a visual condition determination module, a display format determination module, and a display control |