Entries |
Document | Title | Date |
20080199043 | Image Enhancement in Sports Recordings - A video signal representing rapid ball movement is produced from a series of source images. An initial image position for the moving ball is identified by, for each image, producing a difference image between sequential images. In the difference image, image elements representing a change in content below a threshold are allocated a first value, and those representing a change in content above or equal to the threshold are allocated a second value. A set of candidates is then identified, where each candidate is represented by a group of neighboring image elements that all contain the second value. The group must fulfill a ball size criterion. A ball selection algorithm selects an initial image position from the set of ball candidates. The ball is tracked, and a composite image sequence is generated wherein a synthetic trace representing the path of the moving ball is shown as successively added image data. | 08-21-2008 |
20080199044 | Image Processing Apparatus, Image Processing Method, and Program - Disclosed herein is an image processing apparatus for recognizing, from a taken image, an object corresponding to a registered image registered in advance, including: an image taker configured to take an image of a subject to obtain the taken image of the subject, a recognizer configured to recognize, from the taken image, an object corresponding to the registered image, a first specified area tracker configured to execute first specified area tracking processing for tracking, in the taken image, a first tracking area specified on the basis of a result of recognition by the recognizer, and a second specified area tracker configured to execute second specified area tracking processing for tracking a second specified area specified on the basis of a result of the first specified area tracking processing. | 08-21-2008 |
20080205700 | Apparatus and Method for Assisted Target Designation - A method for assisting a user to designate a target as viewed on a video image displayed on a video display by use of a user operated pointing device. The method includes the steps of evaluating, prior to target designation, one or more tracking functions indicative of a result which would be generated by designating a target at a current pointing direction of the pointing device, and providing to the user, prior to target designation, an indication indicative of the result. | 08-28-2008 |
20080205701 | ENHANCED INPUT USING FLASHING ELECTROMAGNETIC RADIATION - Enhanced input using flashing electromagnetic radiation, in which first and second images, captured on a first side of a screen, of an object and an ambient electromagnetic radiation emitter disposed on a second side of the screen, are accessed. The first image is captured while the object is illuminated with projected electromagnetic radiation, and the second image is captured while the projected electromagnetic radiation is extinguished. A position of the object relative to the screen is determined based on comparing the first and second images. An application is controlled based on the determined position. | 08-28-2008 |
20080205702 | BACKGROUND IMAGE GENERATION APPARATUS - The information of a detection area is obtained by using radar, the information is sent to a mobile body detection unit, the position of a mobile body existing within the detection area is detected, a zone excluding a predetermined range surrounding the mobile body is identified by using a nonexistence zone identification unit, the information of the detection area at the time is obtained, a zone which does not include a mobile body is accurately generated by a background image generation unit, then the information of the detection area is obtained by a camera, and the difference between the generated background image and the aforementioned information is detected by a difference process unit; thereby an accurate position of the mobile body is detected. | 08-28-2008 |
20080205703 | Methods and Apparatus for Automatically Tracking Moving Entities Entering and Exiting a Specified Region - Techniques for tracking entities using a single overhead camera are provided. A foreground region is detected in a video frame of the single overhead camera corresponding to one or more entities. It is determined if the foreground region is associated with an existing tracker. It is determined whether the detected foreground region is the result of at least one of a merger of two or more smaller foreground regions having corresponding existing trackers and a split of a larger foreground region having a corresponding existing tracker when the detected foreground region is not associated with an existing tracker. The detected foreground region is tracked via at least one existing tracker when the foreground region is associated with an existing tracker or the foreground region is the result of at least one of a merger and a split. | 08-28-2008 |
20080212830 | Efficient Calculation of Ensquared Energy in an Imaging System - Systems and methods are provided for determining an ensquared energy associated with an imaging system. In one embodiment of the invention, a focal plane array captures an image of a target comprising a plurality of point sources, each point source being associated with a pixel within the focal plane array. An image analysis component estimates an ensquared energy value for the imaging system from respective intensity values of the associated pixels and known relative positions of the plurality of point sources. | 09-04-2008 |
20080212831 | REMOTE CONTROL OF AN IMAGE CAPTURING UNIT IN A PORTABLE ELECTRONIC DEVICE - A method and computer program product are described herein for remotely controlling a first image capturing unit in a portable electronic device as well as to such a portable electronic device. The portable electronic device may include a first and a second image capturing unit. The device detects and tracks an object via the second capturing unit and detects changes in an area of the object. These changes are then used for controlling the first image capturing unit remotely. When the control involves capturing of images an improved image quality can be obtained. Also the time it takes to capture an image is reduced. | 09-04-2008 |
20080212832 | DISCRIMINATOR GENERATING APPARATUS AND OBJECT DETECTION APPARATUS - A discriminator generating apparatus includes a learning unit. | 09-04-2008 |
20080212833 | ENHANCEMENT OF AIMPOINT IN SIMULATED TRAINING SYSTEMS - Embodiments of the invention provide improved systems and methods for tracking targets in a simulation environment. Merely by way of example, an exemplary embodiment provides a reflected laser target tracking system that tracks a target with a video camera and associated computational logic. In certain embodiments, a closed loop algorithm may be used to predict future positions of targets based on formulas derived from prior tracking points. Hence, the target's next position may be predicted. In some cases, targets may be filtered and/or sorted based on predicted positions. In certain embodiments, equations (including without limitation, first order equations and second order equations) may be derived from one or more video frames. Such equations may also be applied to one or more successive frames of video received and/or produced by the system. In certain embodiments, these formulas also may be used to compute predicted positions for targets; this prediction may, in some cases, compensate for inherent delays in the processing pipeline. | 09-04-2008 |
20080212834 | User interface using camera and method thereof - A user interface using a camera and a method thereof, wherein two or more images that were shot in time sequence are preprocessed to form N×M matrices, and then each element of the matrices is compared. The comparison is thus made (N+1)(M+1) times to select a result of the highest similarity and produce a motion vector. The interface and method help to produce more accurate motion vectors and to obviate the inaccuracy introduced by low-pass filtering. | 09-04-2008 |
20080212835 | Object Tracking by 3-Dimensional Modeling - Disclosed is a method for tracking 3-dimensional objects, or some of these objects' features, using range imaging for depth-mapping merely a few points on the surface area of each object, mapping them onto a geometrical 3-dimensional model, finding the object's pose, and deducing the spatial positions of the object's features, including those not captured by the range imaging. | 09-04-2008 |
20080212836 | Visual Tracking Using Depth Data - Real-time visual tracking using depth-sensing camera technology results in illumination-invariant tracking performance. Depth-sensing (time-of-flight) cameras provide real-time depth and color images of the same scene. Depth windows regulate the tracked area by controlling shutter speed. A potential field is derived from the depth image data to provide edge information of the tracked target. A mathematically representable contour can model the tracked target. Based on the depth data, determining a best fit between the contour and the edge of the tracked target provides position information for tracking. Applications using depth-sensor-based visual tracking include head tracking, hand tracking, body-pose estimation, robotic command determination, and other human-computer interaction systems. | 09-04-2008 |
20080219501 | Motion Measuring Device, Motion Measuring System, In-Vehicle Device, Motion Measuring Method, Motion Measurement Program, and Computer-Readable Storage - An embodiment of the present invention includes: a tracking object image extracting section that extracts a tracking object image, which represents a tracking object, from an image captured by a monocular camera; a two-dimensional displacement calculating section that calculates, as actual movement amounts, amounts of inter-frame movement of the tracking object image; a two-dimensional plane projecting section that generates on a two-dimensional plane a projected image of a three-dimensional model, which represents in three dimensions a capturing object captured by the monocular camera; a small motion generating section that calculates, as estimated movement amounts, amounts of inter-frame movement of the projected image; and a three-dimensional displacement estimating section that estimates amounts of three-dimensional motion of the tracking object on the basis of the actual movement amounts and the estimated movement amounts. | 09-11-2008 |
20080219502 | TRACKING BIMANUAL MOVEMENTS - Hands may be tracked before, during, and after occlusion, and a gesture may be recognized. Movement of two occluded hands may be tracked as a unit during an occlusion period. A type of synchronization characterizing the two occluded hands during the occlusion period may be determined based on the tracked movement of the occluded hands. Based on the determined type of synchronization, it may be determined whether directions of travel for each of the two occluded hands change during the occlusion period. Implementations may determine that a first hand and a second hand are occluded during an occlusion period, the first hand having come from a first direction and the second hand having come from a second direction. The first hand may be distinguished from the second hand after the occlusion period based on a determined type of synchronization characterizing the two hands, and a behavior of the two hands. | 09-11-2008 |
20080219503 | MEANS FOR USING MICROSTRUCTURE OF MATERIALS SURFACE AS A UNIQUE IDENTIFIER - A method and apparatus for the visual identification of materials for tracking an object comprises parameter setting, acquisition and identification phases. The parameter setting phase comprises the steps of defining acquisition parameters for the objects. The acquisition phase comprises the steps of digitally acquiring two-dimensional template image of an object, applying a flattening function and generating downsampled template version of the flattened template and storing it in a reference database with the flattened template. The identification phase comprises the steps of digitally acquiring a snapshot image, applying the flattening function and generating one downsampled version, cross-correlating the downsampled version of the flattened snapshot with the corresponding downsampled templates of the reference database, and selecting templates according to the value of the signal to noise ratio, for the selected templates, cross-correlating the flattened snapshot image with the reference flattened template, and identifying the object by finding the best corresponding template. | 09-11-2008 |
20080219504 | AUTOMATIC MEASUREMENT OF ADVERTISING EFFECTIVENESS - An automated system for measuring information about a target image in a video is described. One embodiment includes receiving a set of one or more video images for the video, automatically finding the target image in at least a subset of the video images, determining one or more statistics regarding the target image being in the video, and reporting the one or more statistics. | 09-11-2008 |
20080219505 | Object Detection System - An object detection system is provided with a plurality of image capture units for capturing images of surroundings of the system, a distance information calculation unit for dividing a captured image which constitutes a reference of captured images captured by the plurality of image capture units into a plurality of pixel blocks, individually retrieving corresponding pixel positions within the other captured image for the pixel blocks, and individually calculating distance information, and a histogram generation module for dividing a range image representing the individual distance information of the pixel blocks calculated by the distance information calculation unit into a plurality of segments having predetermined sizes, providing histograms relating to the distance information for the respective divided segments, and casting the distance information of the pixel blocks to the histograms of the respective segments. | 09-11-2008 |
20080219506 | Method and apparatus for automatic object identification - A method and system for processing image data to identify objects in an image. The method and system operate using various resolutions of the image to identify the objects. Information obtained while processing the image at one resolution is employed when processing the image at another resolution. | 09-11-2008 |
20080219507 | Passive Touch System And Method Of Detecting User Input - A method of tracking an object of interest preferably includes (i) acquiring a first image and a second image representing different viewpoints of the object of interest; (ii) processing the first image into a first image data set and the second image into a second image data set; (iii) processing the first image data set and the second image data set to generate a background data set associated with a background; (iv) generating a first difference map by determining differences between the first image data set and the background data set and a second difference map by determining differences between the second image data set and the background data set; (v) detecting a first relative position of the object of interest in the first difference map and a second relative position of the object of interest in the second difference map; and (vi) producing an absolute position of the object of interest from the first and second relative positions of the object of interest. | 09-11-2008 |
20080226126 | Object-Tracking Apparatus, Microscope System, and Object-Tracking Program - An object-tracking apparatus. | 09-18-2008 |
20080226127 | LINKING TRACKED OBJECTS THAT UNDERGO TEMPORARY OCCLUSION - A method and system is configured to characterize regions of an environment by the likelihoods of transition of a target from each region to another. The likelihoods of transition between regions is preferably used in combination with conventional object-tracking algorithms to determine the likelihood that a newly-appearing object in a scene corresponds to a recently-disappeared target. The likelihoods of transition may be predefined based on the particular environment, or may be determined based on prior appearances and disappearances in the environment, or a combination of both. The likelihoods of transition may also vary as a function of the time of day, day of the week, and other factors that may affect the likelihoods of transitions between regions in the particular surveillance environment. | 09-18-2008 |
20080226128 | SYSTEM AND METHOD FOR USING FEATURE TRACKING TECHNIQUES FOR THE GENERATION OF MASKS IN THE CONVERSION OF TWO-DIMENSIONAL IMAGES TO THREE-DIMENSIONAL IMAGES - The present invention is directed to systems and methods for controlling 2-D to 3-D image conversion and/or generation. The methods and systems use auto-fitting techniques to create a mask based upon tracking features from frame to frame. When features are determined to be missing they are added prior to auto-fitting the mask. | 09-18-2008 |
20080226129 | Cart Inspection for Suspicious Items - Methods and apparatus provide for a Cart Inspector to create a suspicion level for a transaction when a video image of the transaction portrays an item(s) left in a shopping cart. Specifically, the Cart Inspector obtains video data associated with a time(s) of interest. The video data originates from a video camera that monitors a transaction area. The Cart Inspector analyzes the video data with respect to target image(s) associated with a transaction in the transaction area during the time(s) of interest. The Cart Inspector creates an indication of a suspicion level for the transaction based on analysis of the target image(s). Creation of a high suspicion level for the transaction indicates that the transaction's corresponding video images most likely portray occurrences where the purchase price of an item transported through the transaction area was not included in the total amount paid by the customer. | 09-18-2008 |
20080232641 | SYSTEM AND METHOD FOR THE MEASUREMENT OF RETAIL DISPLAY EFFECTIVENESS - The present invention relates to the measurement of human activities through video, particularly in retail environments. A method for measuring retail display effectiveness in accordance with an embodiment of the present invention includes: detecting a moving object in a field of view of an imaging device, the imaging device obtaining image data of a product display; tracking the object in the field of view of the imaging device to obtain a track; and obtaining statistics for the track with regard to the product display. | 09-25-2008 |
20080232642 | System and method for 3-D recursive search motion estimation - A method for 3-D recursive search motion estimation is provided to estimate a motion vector for a current block in a current frame. The method includes the following steps. First, provide a spatial prediction by selecting at least one motion vector for at least one neighboring block in the current frame. Then, provide a temporal prediction. After that, estimate the motion vector for the current block based on the spatial prediction and the temporal prediction. The temporal prediction is obtained by selecting at least one most frequent motion vector from a plurality of motion vectors for a plurality of blocks in a corresponding region of a previous frame, wherein the corresponding region encloses a previous block whose location corresponds to that of the current block in the current frame. | 09-25-2008 |
20080232643 | Bitmap tracker for visual tracking under very general conditions - System and method for visually tracking a target object silhouette in a plurality of video frames under very general conditions. The tracker does not make any assumption about the object or the scene. The tracker works by approximating, in each frame, a PDF (probability distribution function) of the target's bitmap and then estimating the maximum a posteriori bitmap. The PDF is marginalized over all possible motions per pixel, thus avoiding the stage in which optical flow is determined. This is an advantage over other general-context trackers that do not use the motion cue at all or rely on the error-prone calculation of optical flow. Using a Gibbs distribution with a first order neighborhood system yields a bitmap PDF whose maximization may be transformed into that of a quadratic pseudo-Boolean function, the maximum of which is approximated via a reduction to a maximum-flow problem. | 09-25-2008 |
20080232644 | Storage medium having information processing program stored thereon and information processing apparatus - A motion information obtaining step successively obtains motion information from a motion sensor. An imaging information obtaining step successively obtains imaging information from an imaging means. An invalid information determination step determines whether the imaging information is valid information or invalid information for predetermined processing. A motion value calculation step calculates a motion value representing a magnitude of a motion of the operation apparatus in accordance with the motion information. A processing step executes, when the imaging information is determined as the invalid information in the invalid information determination step and when the motion value calculated in the motion value calculation step is within a predetermined value range, predetermined processing in accordance with the most recent valid imaging information among valid imaging information previously obtained. | 09-25-2008 |
20080232645 | TRACKING A SURFACE IN A 3-DIMENSIONAL SCENE USING NATURAL VISUAL FEATURES OF THE SURFACE - A facility for determining the 3-dimensional location and orientation of a subject surface in a distinguished perspective image of the subject surface is described. The subject surface has innate visual features, a subset of which are selected. The facility uses the location of the selected visual features in a perspective image of the subject surface that precedes the distinguished perspective image in time to identify search zones in the distinguished perspective image. The facility searches the identified search zones for the selected visual features to determine the 2-dimensional locations at which the selected visual features occur. Based on the determined 2-dimensional locations, the facility determines the 3-dimensional location and orientation of the subject surface in the distinguished perspective image. | 09-25-2008 |
20080240496 | APPROACH FOR RESOLVING OCCLUSIONS, SPLITS AND MERGES IN VIDEO IMAGES - Aspects of the present invention provide a solution for resolving an occlusion in a video image. Specifically, an embodiment of the present invention provides an environment in which portions of a video image in which occlusions have occurred may be determined and analyzed to determine the type of occlusion. Furthermore, regions of the video image may be analyzed to determine which object in the occlusion the region belongs to. The determinations and analysis may use such factors as pre-determined attributes of an object, such as color or texture of the object and/or a temporal association of the object, among others. | 10-02-2008 |
20080240497 | Method for tracking objects in videos using forward and backward tracking - A method tracks an object in a sequence of frames of a video. The method is provided with a set of tracking modules. Frames of a video are buffered in a memory buffer. First, an object is tracked in the buffered frames forward in time using a selected one of the set of tracking modules. Second, the object is tracked in the buffered frames backward in time using the selected tracking module. Then, a tracking error is determined from the first tracking and the second tracking. If the tracking error is less than a predetermined threshold, then additional frames are buffered in the memory buffer and the first tracking, the second tracking and the determining steps are repeated. Otherwise, if the error is greater than the predetermined threshold, then a different tracking module is selected and the first tracking, the second tracking and the determining steps are repeated. | 10-02-2008 |
20080240498 | RUNWAY SEGMENTATION USING VERTICES DETECTION - Methods and apparatus are provided for locating a runway by detecting an object (or blob) within data representing a region of interest provided by a vision sensor. The vertices of the object are determined by finding points on the contour of the object nearest to the four corners of the region of interest. The runway can then be identified to the pilot of the aircraft by extending lines between the vertices to identify the location of the runway. | 10-02-2008 |
20080240499 | Jointly Registering Images While Tracking Moving Objects with Moving Cameras - A method tracks a moving object by registering a current image in a sequence of images with a previous image. The sequence of images is acquired of a scene by a moving camera. The registering produces a registration result. The moving object is tracked in the registered image to produce a tracking result. The registered current image is registered with the previous image using the tracking results for all the images in the sequence. | 10-02-2008 |
20080240500 | IMAGE PROCESSING METHODS - A method of image processing, the method comprising receiving an image frame including a plurality of pixels, each of the plurality of pixels including an image information, conducting a first extraction based on the image information to identify foreground pixels related to a foreground object in the image frame and background pixels related to a background of the image frame, scanning the image frame in regions, identifying whether each of the regions includes a sufficient number of foreground pixels, identifying whether each of regions including a sufficient number of foreground pixels includes a foreground object, clustering regions including a foreground object into at least one group, each of the at least one group corresponding to a different foreground object in the image frame, and conducting a second extraction for each of at least one group to identify whether a foreground pixel in the each of the at least one group is to be converted to a background pixel. | 10-02-2008 |
20080240501 | Measurement system, lithographic apparatus and method for measuring a position dependent signal of a movable object - An encoder-type measurement system is configured to measure a position dependent signal of a movable object, the measurement system including at least one sensor mountable on the movable object a sensor target object mountable on a substantially stationary frame, and a mounting device configured to mount the sensor target object on the substantially stationary frame. The measurement system further includes a compensation device configured to compensate movements and/or deformations of the sensor target object with respect to the substantially stationary frame. The compensation device may include a passive or an active damping device and/or a feedback position control system. In an alternative embodiment, the compensation device includes a gripping device which fixes the position of the sensor target object during a high accuracy movement of the movable object. | 10-02-2008 |
20080240502 | Depth mapping using projected patterns - Apparatus for mapping an object includes an illumination assembly, which includes a single transparency containing a fixed pattern of spots. A light source transilluminates the single transparency with optical radiation so as to project the pattern onto the object. An image capture assembly captures an image of the pattern that is projected onto the object using the single transparency. A processor processes the image captured by the image capture assembly so as to reconstruct a three-dimensional (3D) map of the object. | 10-02-2008 |
20080240503 | Image Processing Apparatus And Image Pickup Apparatus Mounting The Same, And Image Processing Method - A coding unit codes a moving image. An object detector detects an object from within a picture contained in the moving image, and generates, for each picture, object detection information containing at least the number of objects detected within an identical picture. When a codestream is generated from coded data generated by the coding unit, a stream generator describes the object detection information in a prescribed region of the codestream. | 10-02-2008 |
20080240504 | Integrating Object Detectors - An N-object detector comprises an N-object decision structure incorporating decision sub-structures of N object detectors. Some decision sub-structures have multiple different versions composed of the same classifiers with the classifiers rearranged. Said multiple versions associated with an object detector are arranged in the N-object decision structure so that the order in which the classifiers are evaluated is dependent upon the results of the evaluation of a classifier of another object detector. Each version of the same decision sub-structure produces the same logical behaviour as the other versions. Such an N-object decision structure is generated by generating multiple candidate N-object decision structures and analysing the expected computational cost of these candidates to select one of them. | 10-02-2008 |
20080240505 | Feature information collecting apparatuses, methods, and programs - Apparatuses, methods, and programs acquire vehicle position information that represents a current position of a vehicle, acquire image information of a vicinity of the vehicle, and carry out image recognition processing of a target feature that is included in the image information to determine a position of the target feature. The apparatuses, methods, and programs store recognition position information that is based on the acquired vehicle position information and that represents the determined recognition position of the target feature. The apparatuses, methods, and programs determine an estimated position of the target feature based on a set of a plurality of stored recognition position information for the target feature, the plurality of stored recognition position information for the target feature being stored due to the target feature being subject to image recognition processing a plurality of times. | 10-02-2008 |
20080247599 | Method for Detecting Objects Left-Behind in a Scene - A method detects an object left behind in a scene by updating a set of background models using a sequence of images acquired of the scene by a camera. Each background model is updated at a different temporal scale, ranging from short term to long term. A foreground mask is determined from each background model after the updating for a particular image of the sequence. A motion image is updated from the set of foreground masks. In the motion image, each pixel has an associated evidence value. The evidence values are compared with an evidence threshold to detect and signal an object left behind in the scene. | 10-09-2008 |
20080247600 | IMAGE RECORDING DEVICE, PLAYER DEVICE, IMAGING DEVICE, PLAYER SYSTEM, METHOD OF RECORDING IMAGE, AND COMPUTER PROGRAM - An imaging device detects a face of a subject from an image in response to inputting of the image containing the subject, and generates face data related to the face. The imaging device generates face data management information managing the face data and controls recording of the input image, the generated face data and the face data management information on a recording unit with the input image mapped to the face data and the face data management information. The face data contains a plurality of information components recorded in a predetermined recording order. The face data management information, in a data structure responsive to the recording order of the information components of the face data, contains a train of consecutively assigned bits. The information components are assigned predetermined flags in the recording order. Each flag represents the presence or absence of the information component corresponding to the flag in the face data. | 10-09-2008 |
20080247601 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus such as a monitor system for executing image processing to present a suspicious object effectively. An object detecting unit detects an object contained in an image, an associating unit associates a plurality of objects detected by the object detecting unit with each other, an evaluating unit evaluates (e.g., as being suspicious) an object detected by the object detecting unit, and an association evaluating unit evaluates another object associated by the associating unit with the object evaluated by the evaluating unit, in accordance with the evaluation made by the evaluating unit. | 10-09-2008 |
20080253609 | TRACKING WORKFLOW IN MANIPULATING MEDIA ITEMS - A computer-implemented method is described including receiving input specifying an image frame from among a series of image frames, and automatically detecting one or more points in the specified image frame that would be suitable for tracking a point in the series of image frames. In addition, a computer-implemented method is described including choosing a first position of a point on a first image frame of a plurality of image frames, and displaying in a bounded region on the first image frame content relating to a second image frame of the plurality of image frames, wherein the content displayed in the bounded region includes a second position of the point at a different time than the first position of the point. | 10-16-2008 |
20080253610 | Three dimensional shape reconstitution device and estimation device - A face model providing portion provides a stored average face model to an estimation portion estimating an affine parameter for obtaining a head pose. An individual face model learning portion obtains a result of tracking feature points by the estimation portion and learns an individual face model. The individual face model learning portion terminates the learning when a free energy of the individual face model exceeds a free energy of the average face model, and switches the face model provided to the estimation portion from the average face model to the individual face model. While learning the individual face model, an observation matrix is factorized using a reliability matrix showing the reliability of each observation value forming the observation matrix, with emphasis on the feature points having higher reliability. | 10-16-2008 |
20080253611 | Analyst cueing in guided data extraction - The Analyst Cueing method addresses the issues of locating desired targets of interest from among very large datasets in a timely and efficient manner. The combination of computer aided methods for classifying targets and cueing a prioritized list for an analyst produces a robust system for generalized human-guided data mining. Incorporating analyst feedback adaptively trains the computerized portion of the system in the identification and labeling of targets and regions of interest. This system dramatically improves analyst efficiency and effectiveness in processing data captured from a wide range of deployed sensor types. | 10-16-2008 |
20080253612 | Method and an Arrangement for Locating and Picking Up Objects From a Carrier - The invention relates to a method for locating and picking up objects that are placed on a carrier. A scanning operation is performed over the carrier. The scanning is performed by a line laser scanner whose results are used to generate a virtual surface that represents the area that has been scanned. The virtual surface is compared to a pre-defined virtual object corresponding to an object to be picked from the carrier, whereby a part of the virtual surface that matches the pre-defined virtual object is identified. A robot arm is then caused to move to a location corresponding to the identified part of the virtual surface and pick up an object from the carrier at this location. | 10-16-2008 |
20080253613 | System and Method for Cooperative Remote Vehicle Behavior - A method for facilitating cooperation between humans and remote vehicles comprises creating image data, detecting humans within the image data, extracting gesture information from the image data, mapping the gesture information to a remote vehicle behavior, and activating the remote vehicle behavior. Alternatively, voice commands can be used to activate the remote vehicle behavior. | 10-16-2008 |
20080253614 | METHOD AND APPARATUS FOR DISTRIBUTED ANALYSIS OF IMAGES - A method and apparatus for intelligent distributed analyses of images including capturing the images and analyzing the captured images, where feature information is extracted from the captured images. The extracted feature information is used in determining whether a predefined condition is met, and the extracted feature information is transmitted for further analysis when the predefined condition is met. The extracted feature information is stored and is used to generate statistical information related to the extracted feature information. Further, additional feature information is provided from other databases to implement further analysis including an event detection or recognition. Accordingly, distributed intelligent analysis of images is provided for analyzing captured images to efficiently and effectively implement event detection or recognition. | 10-16-2008 |
20080260205 | Image Processing Device and Method - The present invention relates to an image processing device and a corresponding image processing method for processing medical image data showing at least two image objects, including a segmentation unit for detection and/or segmentation of image objects in said image data. To allow a more accurate and better segmentation of target objects which are hard to localize and detect, it is proposed that the segmentation unit comprises: a selection unit ( | 10-23-2008 |
20080260206 | IMAGE PROCESSING APPARATUS AND COMPUTER PROGRAM PRODUCT - An image processing apparatus includes a feature-quantity calculating unit that calculates feature quantities of target regions each indicating a tracking object in respective target images, the target images being obtained by capturing the tracking object at a plurality of time points; a provisional-tracking processing unit that performs provisional tracking of the target region by associating the target regions of the target images with each other using the calculated feature quantities; and a final-tracking processing unit that acquires a final tracking result of the target region based on a result of the provisional tracking. | 10-23-2008 |
20080260207 | Vehicle environment monitoring apparatus - A vehicle environment monitoring apparatus capable of extracting an image of a monitored object in an environment around a vehicle by separating the same from the background image with a simple configuration having a single camera mounted on the vehicle is provided. The apparatus includes a first image portion extracting processing unit to extract first image portions (A | 10-23-2008 |
20080267449 | 3-D MODELING - A system comprising an imaging device adapted to capture images of a target object at multiple angles. The system also comprises storage coupled to the imaging device and adapted to store a generic model of the target object. The system further comprises processing logic coupled to the imaging device and adapted to perform an iterative process by which the generic model is modified in accordance with the target object. During each iteration of the iterative process, the processing logic obtains structural and textural information associated with at least one of the captured images and modifies the generic model with the structural and textural information. The processing logic displays the generic model. | 10-30-2008 |
20080267450 | Position Tracking Device, Position Tracking Method, Position Tracking Program and Mixed Reality Providing System - The present invention has a simpler structure than before and is designed to precisely detect the position of a real environment's target object on a screen. The present invention generates a special marker image MKZ including a plurality of areas whose brightness levels gradually change in X and Y directions, displays the special marker image MKZ on the screen of a liquid crystal display | 10-30-2008 |
20080267451 | System and Method for Tracking Moving Objects - A method for tracking an object that is embedded within images of a scene, including: in a sensor unit that includes a movable sensor, generating, storing and transmitting over a communication link a succession of images of a scene. In a remote control unit, receiving the succession of images, receiving a user command for selecting an object of interest in a given image of the received succession of images, determining object data associated with the object, and transmitting the object data through the link to the sensor unit. In the sensor unit, identifying the given image of the stored succession of images and the object of interest using the object data, and tracking the object in other images of the stored succession of images, the other images being later than the given image. In the case that the object cannot be located in the latest image of the stored succession of images, using information from images in which the object was located to predict an estimated real-time location of the object and generating a direction command to the movable sensor for generating a real-time image of the scene and locking on the object. | 10-30-2008 |
20080267452 | APPARATUS AND METHOD OF DETERMINING SIMILAR IMAGE - An apparatus of determining a similar image contains a subject-region-detecting unit that detects a subject region from a received image, a pixel-value-distribution-generating unit that generates pixel value distribution of pixels included in the subject region detected by the subject-region-detecting unit, and a determination unit that determines whether or not an image relative to the subject region is similar to a previously registered subject image based on the pixel value distribution generated by the pixel-value-distribution-generating unit and a registered pixel value distribution of the previously registered subject image. | 10-30-2008 |
20080267453 | METHOD FOR ESTIMATING THE POSE OF A PTZ CAMERA - Provided is an iterative method of estimating the pose of a moving PTZ camera. The first step is to use an image registration method on a reference image and a current image to calculate a matrix that estimates the motion of sets of points corresponding to the same object in both images. Information about the absolute camera pose, embedded in the matrix obtained in the first step, is used to simultaneously recalculate both the starting positions in the reference image and the motion estimate. The recalculated starting positions and motion estimate are used to determine the pose of the camera in the current image. The current image is taken as a new reference image, a new current image is selected and the process is repeated in order to determine the pose of the camera in the new current image. The entire process is repeated until the camera stops moving. | 10-30-2008 |
20080273750 | Apparatus and Method For Automatically Detecting Objects - A device automatically detects boundary lines on the road from an image captured by a camera mounted on the vehicle. The device includes a controller that performs image processing on the image to compute the velocity information for each pixel in the image, and, on the basis of the computed velocity information for each pixel in the image, extracts the pixels that contain velocity information, detects the oblique lines formed by the extracted pixels, and detects the boundary lines on the road on the basis of the detected oblique lines. | 11-06-2008 |
20080273751 | Detection and Tracking of Moving Objects from a Moving Platform in Presence of Strong Parallax - Among other things, methods, systems and computer program products are described for detecting and tracking a moving object in a scene. One or more residual pixels are identified from video data. At least two geometric constraints are applied to the identified one or more residual pixels. A disparity of the one or more residual pixels to the applied at least two geometric constraints is calculated. Based on the detected disparity, the one or more residual pixels are classified as belonging to parallax or independent motion and the parallax classified residual pixels are filtered. Further, a moving object is tracked in the video data. Tracking the object includes representing the detected disparity in probabilistic likelihood models. Tracking the object also includes accumulating the probabilistic likelihood models within a number of frames during the parallax filtering. Further, tracking the object includes based on the accumulated probabilistic likelihood models, extracting an optimal path of the moving object. | 11-06-2008 |
20080273752 | SYSTEM AND METHOD FOR VEHICLE DETECTION AND TRACKING - A method for vehicle detection and tracking includes acquiring video data including a plurality of frames, comparing a first frame of the acquired video data against a set of one or more vehicle detectors to form vehicle hypotheses, pruning and verifying the vehicle hypotheses using a set of coarse-to-fine constraints to detect a vehicle, and tracking the detected vehicle within one or more subsequent frames of the acquired video data by fusing shape template matching with one or more vehicle detectors. | 11-06-2008 |
20080273753 | System for Detecting Image Abnormalities - An image capture system for capturing images of an object, the image capture system comprising a moving platform such as an airplane, one or more image capture devices mounted to the moving platform, and a detection computer. The image capture device has a sensor for capturing an image. The detection computer executes an abnormality detection algorithm for detecting an abnormality in an image immediately after the image is captured and then automatically and immediately causing a re-shoot of the image. Alternatively, the detection computer sends a signal to the flight management software executed on a computer system to automatically schedule a re-shoot of the image. When the moving platform is an airplane, the detection computer schedules a re-shoot of the image such that the image is retaken before landing the airplane. | 11-06-2008 |
20080273754 | APPARATUS AND METHOD FOR DEFINING AN AREA OF INTEREST FOR IMAGE SENSING - A method for defining an area of interest or a trip line using a camera by tracking the movement of a person within a field of view of the camera. The area of interest is defined by a path or boundary indicated by the person's movement. Alternatively, a trip line comprising a path between a starting point and a stopping point may be defined by tracking the movement of the person within the camera's field of view. An occupancy sensor may be structured to sense the movement of an occupant within an area, and to adjust the lighting in the area accordingly if the occupant enters the area of interest or crosses the trip line. The occupancy sensor includes an image sensor coupled to a processor, an input facility such as a pushbutton to receive input, and an output facility such as an electronic beeper to provide feedback to the person defining the area of interest or the trip line. | 11-06-2008 |
20080273755 | CAMERA-BASED USER INPUT FOR COMPACT DEVICES - A camera is used to detect a position and/or orientation of an object such as a user's finger as an approach for providing user input, for example to scroll through data, control a cursor position, and provide input to control a video game based on a position of a user's finger. Input may be provided to a handheld device, including, for example, cell phones, video game systems, portable music (MP3) players, portable video players, personal data assistants (PDAs), audio/video equipment remote controls, and consumer digital cameras, or other types of devices. | 11-06-2008 |
20080273756 | POINTING DEVICE AND MOTION VALUE CALCULATING METHOD THEREOF - A pointing device is provided. A sensor generates a motion detection signal by sensing motion. A calculator receives the motion detection signal, calculates a motion value based on the motion detection signal, calculates a conversion motion value based on an angle of the motion value, and outputs the conversion motion value. An interface outputs the conversion motion value inputted from the calculator. By limiting a motion angle, the pointing device can provide a positioning operation suitable for a motion intended by a user. The user can optionally use a motion control method in all directions according to need. | 11-06-2008 |
20080279420 | VIDEO AND AUDIO MONITORING FOR SYNDROMIC SURVEILLANCE FOR INFECTIOUS DISEASES - We present, in exemplary embodiments of the present invention, novel systems and methods for syndromic surveillance that can automatically monitor symptoms that may be associated with the early presentation of a syndrome (e.g., fever, coughing, sneezing, runny nose, sniffling, rashes). Although not so limited, the novel surveillance systems described herein can be placed in common areas occupied by a crowd of people, in accordance with local and national laws applicable to such surveillance. Common areas may include public areas (e.g., an airport, train station, sports arena) and private areas (e.g., a doctor's waiting room). The monitored symptoms may be transmitted to a responder (e.g., a person, an information system) outside of the surveillance system, such that the responder can take appropriate action to identify, treat and quarantine potentially infected individuals, as necessary. | 11-13-2008 |
20080279421 | OBJECT DETECTION USING COOPERATIVE SENSORS AND VIDEO TRIANGULATION - Methods and apparatus are provided for detecting and tracking a target. Images are captured from a field of view by at least two cameras mounted on one or more platforms. These images are analyzed to identify landmarks within the images that can be used to track the target's position from frame to frame. The images are fused (merged) with information about the target or platform position from at least one sensor to detect and track the target. The target's position with respect to the position of the platform is displayed, or the position of the platform relative to the target is displayed. | 11-13-2008 |
20080285797 | METHOD AND SYSTEM FOR BACKGROUND ESTIMATION IN LOCALIZATION AND TRACKING OF OBJECTS IN A SMART VIDEO CAMERA - Aspects of a method and system for change detection in localization and tracking of objects in a smart video camera are provided. A programmable surveillance video camera comprises processors for detecting objects in a video signal based on an object mask. The processors may generate a textual representation of the video signal by utilizing a description language to indicate characteristics of the detected objects, such as shape, texture, color, and/or motion, for example. The object mask may be based on a detection field value generated for each pixel in the video signal by comparing a first observation field and a second observation field associated with each of the pixels. The first observation field may be based on a difference between an input video signal value and an estimated background value while the second observation field may be based on a temporal difference between first observation fields. | 11-20-2008 |
20080285798 | Obstacle detection apparatus and a method therefor - An apparatus of detecting an object on a road surface includes a stereo set of video cameras mounted on a vehicle to produce right and left images, a storage to store the right and left images, a parameter computation unit to compute a parameter representing road planarity constraint based on the images of the storage, a corresponding point computation unit to compute correspondence between a first point on one of the right and left images and a second point on the other, which corresponds to the first point, based on the parameter, an image transformation unit to produce a transformed image from the one image using the correspondence, and a detector to detect an object having a dimension larger than a given value in a vertical direction with respect to the road surface, using the correspondence and the transformed image. | 11-20-2008 |
20080285799 | APPARATUS AND METHOD FOR DETECTING OBSTACLE THROUGH STEREOVISION - According to an apparatus and method for detecting an obstacle through stereovision, an image capturing module comprises a plurality of cameras and is used for capturing a plurality of images; an image processing module edge-detects the images to generate a plurality of edge objects and object information corresponding to each edge object; an object detection module matches a focus and a horizontal spacing interval of the cameras according to the object information to generate a relative object distance corresponding to each edge object; a group module compares the relative object distance with a threshold distance, groups the edge objects whose relative object distance is smaller than the threshold distance into an obstacle, and obtains a relative obstacle distance corresponding to the obstacle. | 11-20-2008 |
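The distance-and-grouping step described in the abstract above rests on standard pinhole stereo geometry: depth Z = f·B/d for focal length f, camera baseline B, and disparity d. The following is a minimal sketch under assumed inputs (the representation of edge objects as `(x, disparity_px)` pairs and all parameter names are illustrative assumptions, not details from the patent):

```python
def stereo_obstacle_distance(edge_objects, focal_px, baseline_m, threshold_m):
    """edge_objects: list of (x_position, disparity_px) pairs.
    Converts each disparity to a depth via Z = f * B / d, keeps edge
    objects nearer than threshold_m as the obstacle group, and returns
    the obstacle's distance (the nearest grouped depth), or None."""
    grouped = []
    for x, disparity_px in edge_objects:
        if disparity_px <= 0:
            continue  # zero/negative disparity: no finite depth
        z = focal_px * baseline_m / disparity_px  # pinhole stereo depth
        if z < threshold_m:
            grouped.append(z)
    return min(grouped) if grouped else None
```

With a 700-pixel focal length and a 0.1 m baseline, a 35-pixel disparity maps to 2 m, while a 2-pixel disparity maps to 35 m and falls outside a 5 m obstacle threshold.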
20080285800 | INFORMATION PROCESSING APPARATUS AND METHOD, AND PROGRAM - An information processing apparatus includes an obtaining unit configured to obtain feature quantities of an image; and a detector configured to detect a gazing point at which a user gazes within the image, wherein the gazing point detected by the detector among the feature quantities obtained by the obtaining unit or the feature quantities extracted from the image in a predetermined range containing the gazing point is stored. | 11-20-2008 |
20080285801 | Visual Tracking Eye Glasses In Visual Head And Eye Tracking Systems - The invention relates to the application area of camera-based head and eye tracking systems. The performance of such systems typically suffers when eye glasses are worn, as the frames of the glasses interfere with the tracking of the facial features utilized by the system. This invention describes how the appearance of the glasses can be utilized by such a tracking system, not only eliminating the interference of the glasses with the tracking but also aiding the tracking of the facial features. The invention utilizes a shape model of the glasses which can be tracked by a specialized tracker to derive 3D pose information. | 11-20-2008 |
20080285802 | TAILGATING AND REVERSE ENTRY DETECTION, ALARM, RECORDING AND PREVENTION USING MACHINE VISION - Unauthorized entry into controlled access areas using tailgating or reverse entry methods is detected using machine vision methods. Camera images of the controlled area are processed to identify and track objects in the controlled area. In a preferred embodiment, this processing includes 3D surface analysis to distinguish and classify objects in the field of view. Feature extraction, color analysis, and pattern recognition may also be used for identification and tracking of objects. Integration with security monitoring and control systems provides notification when a tailgating or reverse entry event has occurred. More reliable operation in practical circumstances is thus obtained, such as when multiple people are using an entrance or exit under variable light and shadow conditions. Electronic access control systems may further be combined with the machine vision methods of the invention to more effectively prevent tailgating or reverse entry. | 11-20-2008 |
20080292140 | Tracking people and objects using multiple live and recorded surveillance camera video feeds - Tracking a target across a region is disclosed. A graphical user interface is provided that displays, in a first region, video from a field of view of a main video device, and, in a plurality of second regions, video from a field of view of each of a plurality of perimeter video devices (PVDs). The field of view of each PVD is proximate to the main video device's field of view. A selection of one of the plurality of PVDs is received. In response, video from a field of view of the selected PVD is displayed in the first region, and a plurality of candidate PVDs is identified. Each candidate PVD has a field of view proximate to the field of view of the selected PVD. The plurality of second regions is then repopulated with video from a field of view of each of the plurality of identified candidate PVDs. | 11-27-2008 |
20080298636 | METHOD FOR DETECTING WATER REGIONS IN VIDEO - A computer-based method for automatic detection of water regions in a video includes the steps of estimating a water map of the video and outputting the water map to an output medium, such as a video analysis system. The method may further include the steps of training a water model from the water map; re-classifying the water map using the water model by detecting water pixels in the video; and refining the water map. | 12-04-2008 |
20080298637 | Head Pose Assessment Methods and Systems - Improvements are provided to effectively assess a user's face and head pose such that a computer or like device can track the user's attention towards a display device(s). Then the region of the display or graphical user interface that the user is turned towards can be automatically selected without requiring the user to provide further inputs. A frontal face detector is applied to detect the user's frontal face and then key facial points such as left/right eye center, left/right mouth corner, nose tip, etc., are detected by component detectors. The system then tracks the user's head by an image tracker and determines yaw, tilt and roll angle and other pose information of the user's head through a coarse-to-fine process according to key facial points and/or confidence output by a pose estimator. | 12-04-2008 |
20080304705 | SYSTEM AND METHOD FOR SIDE VISION DETECTION OF OBSTACLES FOR VEHICLES - This invention provides a system and method for object detection and collision avoidance for objects and vehicles located behind the cab or front section of an elongated, and possibly tandem, vehicle. Through the use of narrow-baseline stereo vision that can be vertically oriented relative to the ground/road surface, the system and method can employ relatively inexpensive cameras, in a stereo relationship, on a low-profile mounting, to perform reliable detection with good range discrimination. The field of detection is sufficiently behind and aside the rear area to assure an adequate safety zone in most instances. Moreover, this system and method allows all equipment to be maintained on the cab of a tandem vehicle, rather than the interchangeable, and more-prone-to-damage cargo section and/or trailer. One or more cameras can be mounted on, or within, the mirror on each side, on aerodynamic fairings or other exposed locations of the vehicle. Image signals received from each camera can be conditioned before they are matched and compared for disparities viewed above the ground surface, and according to predetermined disparity criteria. | 12-11-2008 |
20080304706 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - There is provided an information processing apparatus, comprising: an obtaining unit which obtains video data captured by an image capturing apparatus disposed in a monitored space, location information regarding a location of a moving object in the monitored space, and existence information regarding a capturing period of the moving object in the video data; and a display processing unit which processes a display of a trajectory of the moving object in the monitored space based on the location information, the display processing unit processing a display of the trajectory so that the portion of the trajectory that corresponds to the capturing period is distinguishable from the other portions of the trajectory, based on the existence information. | 12-11-2008 |
20080304707 | Information Processing Apparatus, Information Processing Method, and Computer Program - An information processing apparatus that executes processing for creating an environmental map includes a camera that photographs an image, a self-position detecting unit that detects a position and a posture of the camera on the basis of the image, an image-recognition processing unit that detects an object from the image, a data constructing unit that is inputted with information concerning the position and the posture of the camera and information concerning the object and executes processing for creating or updating the environmental map, and a dictionary-data storing unit having stored therein dictionary data in which object information is registered. The image-recognition processing unit executes processing for detecting an object from the image acquired by the camera with reference to the dictionary data. The data constructing unit applies the three-dimensional shape data registered in the dictionary data to the environmental map and executes object arrangement on the environmental map. | 12-11-2008 |
20080310676 | Method and System for Optoelectronic Detection and Location of Objects - Disclosed are methods and systems for optoelectronic detection and location of moving objects. The disclosed methods and systems capture one-dimensional images of a field of view through which objects may be moving, make measurements in those images, select from among those measurements those that are likely to correspond to objects in the field of view, make decisions responsive to various characteristics of the objects, and produce signals that indicate those decisions. The disclosed methods and systems provide excellent object discrimination, electronic setting of a reference point, no latency, high repeatability, and other advantages that will be apparent to one of ordinary skill in the art. | 12-18-2008 |
20080310677 | OBJECT DETECTION SYSTEM AND METHOD INCORPORATING BACKGROUND CLUTTER REMOVAL - A method and system for optically detecting an object within a field of view where detection is difficult because of background clutter within the field of view that obscures the object. A camera is panned with movement of the object to motion stabilize the object against the background clutter while taking a plurality of image frames of the object. A frame-by-frame analysis is performed to determine variances in the intensity of each pixel, over time, from the collected frames. From this analysis a variance image is constructed that includes an intensity variance value for each pixel. Pixels representing background clutter will typically vary considerably in intensity from frame to frame, while pixels making up the object will vary little or not at all. A binary threshold test is then applied to each variance value and the results are used to construct a final image. The final image may be a black and white image that clearly shows the object as a silhouette. | 12-18-2008 |
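The variance-image idea in the abstract above is compact enough to illustrate directly: after the camera pans to stabilize the object, object pixels change little from frame to frame while background clutter varies strongly, so thresholding per-pixel variance yields the silhouette. A minimal NumPy sketch (the threshold value and the "low variance means object" polarity follow the abstract; the function and parameter names are illustrative):

```python
import numpy as np

def variance_silhouette(frames, var_thresh):
    """Build a variance image from motion-stabilized frames and apply a
    binary threshold: low-variance pixels are taken as the object."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    variance = stack.var(axis=0)   # per-pixel intensity variance over time
    return variance < var_thresh   # binary threshold test -> silhouette
```

Because the test is per-pixel and needs only a handful of frames, the same routine works whether the clutter motion comes from the scene or from the camera pan itself.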
20080310678 | Pedestrian Detecting Apparatus - A first pedestrian judging unit judges, on the basis of the size and motion state of a target three-dimensional object, whether the object is a pedestrian. A second pedestrian judging unit judges, on the basis of shape data on the object, whether the object is a pedestrian. A pedestrian judging unit finally determines that the object is a pedestrian when both the first and second pedestrian judging units judge the object as a pedestrian, when the second pedestrian judging unit judges the object as a pedestrian, when the first pedestrian judging unit judges the object as a pedestrian and a result of this judgment is held for a preset period, or when the first pedestrian judging unit judges the object as a pedestrian in a current judgment operation and the second pedestrian judging unit judged the object as a pedestrian in the previous judging operation. | 12-18-2008 |
20080317281 | MEDICAL MARKER TRACKING WITH MARKER PROPERTY DETERMINATION - A method for tracking at least one medical marker is provided, wherein actual properties of the at least one marker are compared with nominal properties of the at least one marker. A basis for subsequent use of information obtained from the at least one marker is formed based on the comparison. | 12-25-2008 |
20080317282 | Vehicle-Use Image Processing System, Vehicle-Use Image Processing Method, Vehicle-Use Image Processing Program, Vehicle, and Method of Formulating Vehicle-Use Image Processing System - A system or the like capable of detecting lane marks more accurately by preventing false lane marks from being erroneously detected as true lane marks. A vehicle-use image processing system ( | 12-25-2008 |
20080317283 | SIGNAL PROCESSING METHOD AND DEVICE FOR MULTI APERTURE SUN SENSOR - The disclosure relates to a signal processing method for a multi-aperture sun sensor comprising the following steps: reading the information of sunspots in a row from a centroid coordinate memory, judging whether sunspots are absent in that row, identifying the row and column index of the sunspots in the complete row, selecting the corresponding calibration parameter based on the row and column index, calculating attitude with the attitude calculation module corresponding to the identified sunspots, averaging the accumulated attitude of all sunspots, and outputting the final attitude. A signal processing device for a multi-aperture sun sensor is also presented, comprising a sunspot absence judgment and identification module and an attitude calculation module. The disclosure implements the integration of sun sensors without an additional image processor or attitude processor, reduces field programmable gate array resources, and improves the reliability of sun sensors. | 12-25-2008 |
20080317284 | Face tracking device - A face tracking device for tracking an orientation of a person's face using a cylindrical head model, the face tracking device comprising: an image means for continuously shooting the person's face and for obtaining first image data based on a shot of the person's face; an extraction means for extracting second image data from the first image data, the second image data corresponding to a facial area of the person's face; a determination means for determining whether the second image data is usable as an initial value required for the cylindrical head model; and a face orientation detection means for detecting the orientation of the person's face using the cylindrical head model and the initial value determined to be usable by the determination means. | 12-25-2008 |
20080317285 | IMAGING DEVICE, IMAGING METHOD AND COMPUTER PROGRAM - With a digital still camera, a user freely detects a smiling face on a touchpanel displaying a through image and selects a subject having that smiling face. The digital still camera displays the smiling face as a smiling face detection target and a non-target detected face on the through image in a distinctly different manner to discriminate the smiling face detection target from the non-target detected face. For example, when persons at an event such as a party are photographed at a relatively large viewing angle, an auto photographing operation may be performed in response to smiling face detections on condition that at least two members of the party are smiling. | 12-25-2008 |
20080317286 | SECURITY DEVICE AND SYSTEM - A security device and system is disclosed. This security device is particularly useful in a security system where there are many security cameras to be monitored. This device automatically highlights to a user a camera feed in which an incident is occurring. This assists a user in identifying incidents and in making an appropriate decision regarding whether or not to intervene. This highlighting is performed by a trigger signal generated in accordance with a comparison between a sequence of representations of sensory data and other corresponding sequences of representations of sensory data. | 12-25-2008 |
20080317287 | Image processing apparatus for reducing effects of fog on images obtained by vehicle-mounted camera and driver support apparatus which utilizes resultant processed images - Kalman filter processing is applied to each of successive images of a scene obscured by fog, captured by an onboard camera of a vehicle. The measurement matrix for the Kalman filter is established based on currently estimated characteristics of the fog, and intrinsic luminance values of a scene portrayed by a current image constitute the state vector for the Kalman filter. Adaptive filtering for removing the effects of fog from the images is thereby achieved, with the filtering being optimized in accordance with the degree of image deterioration caused by the fog. | 12-25-2008 |
20090003651 | OBJECT SEGMENTATION RECOGNITION - A system for segmenting radiographic images of a cargo container can include an object segmentation recognition module adapted to perform a series of functions. The functions can include receiving a plurality of radiographic images of a cargo container, each image generated using a different energy level and segmenting each of the radiographic images using one or more segmentation modules to generate segmentation data representing one or more image segments. The functions can also include identifying image layers within the radiographic images using a plurality of layer analysis modules by providing the plurality of radiographic images and the segmentation data as input to the layer analysis modules, and determining adjusted atomic number values for an atomic number image based on the image layers. The functions can include adjusting the atomic number image based on the adjusted atomic number values for the regions of interest to generate an adjusted atomic number image and identifying regions of interest within the adjusted atomic number image based on an image characteristic. The functions can also include providing coordinates of each region of interest and the adjusted atomic number image as output. | 01-01-2009 |
20090003652 | REAL-TIME FACE TRACKING WITH REFERENCE IMAGES - A method of tracking a face in a reference image stream using a digital image acquisition device includes acquiring a full resolution main image and an image stream of relatively low resolution reference images each including one or more face regions. One or more face regions are identified within two or more of the reference images. A relative movement is determined between the two or more reference images. A size and location are determined of the one or more face regions within each of the two or more reference images. Concentrated face detection is applied to at least a portion of the full resolution main image in a predicted location for candidate face regions having a predicted size as a function of the determined relative movement and the size and location of the one or more face regions within the reference images, to provide a set of candidate face regions for the main image. | 01-01-2009 |
20090003653 | Trajectory processing apparatus and method - A trajectory processing apparatus comprises a trajectory database configured to store a position coordinate of a movable body detected from a camera image in association with data that specifies the camera image from which the movable body is detected, and a camera image database configured to store the camera image. A control section fetches the position coordinate of the movable body and the specifying data for the camera image from which the movable body is detected from the trajectory database. Further, the position coordinate of the movable body fetched from the trajectory database is displayed in a display section as a trajectory of the movable body. Furthermore, the control section acquires from the camera image database the camera image specified by the specifying data fetched from the trajectory database. Moreover, this camera image is displayed in the display section. | 01-01-2009 |
20090010490 | SYSTEM AND PROCESS FOR DETECTING, TRACKING AND COUNTING HUMAN OBJECTS OF INTEREST - A method of identifying, tracking, and counting human objects of interest based upon at least one pair of stereo image frames taken by at least one image capturing device, comprising the steps of: obtaining said stereo image frames and converting each said stereo image frame to a rectified image frame using calibration data obtained for said at least one image capturing device; generating a disparity map based upon a pair of said rectified image frames; generating a depth map based upon said disparity map and said calibration data; identifying the presence or absence of said objects of interest from said depth map and comparing each of said objects of interest to existing tracks comprising previously identified objects of interest; for each said presence of an object of interest, adding said object of interest to one of said existing tracks if said object of interest matches said one existing track, or creating a new track comprising said object of interest if said object of interest does not match any of said existing tracks; updating each said existing track; and maintaining a count of said objects of interest in a given time period based upon said existing tracks created or modified during said given time period. | 01-08-2009 |
20090010491 | METHOD AND APPARATUS FOR PROVIDING PICTURE FILE - A method and an apparatus for providing a picture file are provided. The picture file providing apparatus includes a controller which searches for one or more picture files based on a location of a subject, and a screen display unit which forms a display screen to display the one or more picture files that were found, in order to provide a user with the direction information included in each picture file. Each picture file includes picture data, information on a location in which the picture data was created, and information on a direction of a captured image of a subject included in the picture data. | 01-08-2009 |
20090010492 | IMAGE RECOGNITION DEVICE, FOCUS ADJUSTMENT DEVICE, IMAGING APPARATUS, IMAGE RECOGNITION METHOD AND FOCUS ADJUSTMENT METHOD - An image recognition device includes a detection unit which is configured to detect a first difference between partial information of at least a part of the first image information and the reference information and to detect a second difference between partial information of at least a part of the second image information and the reference information. A recognition unit is configured to recognize a first area corresponding to the reference image in the first image information. A calculation unit is configured to calculate a determination value based on a reference area in the second image information corresponding to the first area by weighting the second difference. The recognition unit is configured to recognize a second area corresponding to the reference image in the second image information based on at least one of the second difference and the determination value. | 01-08-2009 |
20090010493 | Motion-Validating Remote Monitoring System - A method of autonomously monitoring a remote site, including the steps of locating a primary detector at a site to be monitored; creating one or more geospatial maps of the site using an overhead image of the site; calibrating the primary detector to the geospatial map using a detector-specific model; detecting an object in motion at the site; tracking the moving object on the geospatial map; and alerting a user to the presence of motion at the site. In addition, thermal image data from infrared cameras, rather than optical/visual image data, is used to create detector-specific models and geospatial maps in substantially the same way that optical cameras and optical image data would be used. | 01-08-2009 |
20090016570 | METHOD AND APPARATUS FOR CALIBRATING SAMPLING OPERATIONS FOR AN OBJECT DETECTION PROCESS - One embodiment of the present invention provides a system that detects an object in an image. During operation, the system determines a relationship between sampling parameters and a detection rate for an object detection process. The system also determines a relationship between the sampling parameters and a detection speed for the object detection process. The system uses the determined relationships to generate specific sampling parameters. Next, the system performs the object detection process, wherein the object detection process uses the sampling parameters to sample locations in the image. This sampling process is used to refine the search for the object by identifying locations that respond to an object detector and are hence likely to be proximate to an instance of the object. | 01-15-2009 |
20090022364 | MULTI-POSE FACE TRACKING USING MULTIPLE APPEARANCE MODELS - A system and method are provided for tracking a face moving through multiple frames of a video sequence. A predicted position of a face in a video frame is obtained. Similarity matching for both a color model and an edge model is performed to derive correlation values for each about the predicted position. The correlation values are then combined to determine a best position and scale match to track a face in the video. | 01-22-2009 |
20090022365 | METHOD AND APPARATUS FOR MEASURING POSITION AND ORIENTATION OF AN OBJECT - An information processing method includes acquiring an image of an object captured by an imaging apparatus, acquiring an angle of inclination measured by an inclination sensor mounted on the object or the imaging apparatus, detecting a straight line from the captured image, and calculating a position and orientation of the object or the imaging apparatus, on which the inclination sensor is mounted, based on the angle of inclination, an equation of the detected straight line on the captured image, and an equation of a straight line in a virtual three-dimensional space that corresponds to the detected straight line. | 01-22-2009 |
20090022366 | SYSTEM AND METHOD FOR ANALYZING VIDEO FROM NON-STATIC CAMERA - A novel system and method of treating the output of moving cameras, in particular one that enables the application of conventional "static camera" algorithms, e.g., to enable the continuous vigilance of computer surveillance technology to be applied to moving cameras that cover a wide area. According to the invention, a single camera is deployed to cover an area that might otherwise require many static cameras and a corresponding number of processing units. A novel system for processing the main video enables long-term change detection, particularly the observation that a static object has been moved or has appeared, for instance detecting the parking and departure of vehicles in a parking lot, the arrival of trains in stations, delivery of goods, arrival and dispersal of people, or any other application. | 01-22-2009 |
20090022367 | THREE-DIMENSIONAL SHAPE DETECTING DEVICE AND THREE-DIMENSIONAL SHAPE DETECTING METHOD - A three-dimensional shape detection device which can detect a three-dimensional shape of an object to be picked up even in the case that an image pick-up part with a narrow dynamic range is used is disclosed. An image of the object to be picked up is captured under a plurality of different exposure conditions in a state in which each of a plurality of kinds of patterned lights alternately disposing bright and dark portions is time-sequentially projected onto the object, and a plurality of brightness images are generated for the respective exposure conditions. Further, based on the plurality of brightness images, a coded image is formed for each exposure condition and a code edge position for a space code is obtained for every exposure condition. Based on the plurality of code edge positions for every exposure condition obtained in this manner, one code edge position for calculating a three-dimensional shape of the object to be picked up is determined such that the three-dimensional shape of the object is calculated. | 01-22-2009 |
20090022368 | MONITORING DEVICE, MONITORING METHOD, CONTROL DEVICE, CONTROL METHOD, AND PROGRAM - The present invention relates to a monitoring device, monitoring method, control device, control method, and program that use information on a face direction or gaze direction of a person to cause a device to perform processing in accordance with a movement or status of the person. A target detector | 01-22-2009 |
20090028384 | Three-dimensional road map estimation from video sequences by tracking pedestrians - Estimation of a 3D layout of roads and paths traveled by pedestrians is achieved by observing the pedestrians and estimating road parameters from the pedestrian's size and position in a sequence of video frames. The system includes a foreground object detection unit to analyze video frames of a 3D scene and detect objects and object positions in video frames, an object scale prediction unit to estimate 3D transformation parameters for the objects and to predict heights of the objects based at least in part on the parameters, and a road map detection unit to estimate road boundaries of the 3D scene using the object positions to generate the road map. | 01-29-2009 |
20090028385 | DETECTING AN OBJECT IN AN IMAGE USING EDGE DETECTION AND MORPHOLOGICAL PROCESSING - A representation of an object in a live event is detected in an image of the event. A location of the object in the live event is translated to an estimated location in the image based on camera sensor and/or registration data. A search area is determined around the estimated location in the image. A direction of motion of the object in the image is also determined. A representation of the object is identified in the search area by detecting edges of the object, e.g., perpendicular to the direction of motion and parallel to the direction of motion, performing morphological processing, and matching against a model or other template of the object. Based on the position of the representation of the object, the camera sensor and/or registration data can be updated, and a graphic can be located in the image substantially in real time. | 01-29-2009 |
20090028386 | AUTOMATIC TRACKING APPARATUS AND AUTOMATIC TRACKING METHOD - An automatic tracking apparatus is provided, which is capable of resolving a failure occurring in an automatic tracking operation in connection with a zooming operation, and capable of tracking an object in a stable manner while a zooming-up operation or a zooming-down operation is carried out at a high speed. | 01-29-2009 |
20090028387 | Apparatus and method for recognizing position of mobile robot - Provided is an apparatus for recognizing the position of a mobile robot. The apparatus includes an image capturing unit which is loaded into a mobile robot and captures an image; an illuminance determining unit which determines illuminance at a position where an image is to be captured; a light-emitting unit which emits light toward the position; a light-emitting control unit which controls the light-emitting unit according to the determined illuminance; a driving control unit which controls the speed of the mobile robot according to the determined illuminance; and a position recognizing unit which recognizes the position of the mobile robot by comparing a pre-stored image to the captured image. | 01-29-2009 |
20090034789 | MOVING THING RECOGNITION SYSTEM - A moving thing recognition system using a camera on a path along which the moving thing (such as a train, vehicle or ship) is proceeding, in which a check aligning device is used to align each picture file of the moving thing with virtual checks to compare the body type and speed of the moving thing. The virtual checks of the moving thing are provided by taking the length of a fixed marking article or some other fixed article on the path of the moving thing as a reference. Thereby, under the circumstance that the path of the moving thing is unchanged and there is no emitted signal, accurate recognition can be obtained. | 02-05-2009 |
20090034790 | Method for customs inspection of baggage and cargo - A method and system of inspecting baggage to be transported from a location of origin to a destination is provided that includes generating scan data representative of a piece of baggage while the baggage is at the location of origin, and storing the scan data in a database. Rendered views representative of the content of the baggage are provided, where the rendered views are based on the scan data retrieved from the database over a network. The rendered views are presented at a destination different from the origin. | 02-05-2009 |
20090034791 | Image processing for person and object re-identification - A device and method for processing an image to create appearance and shape labeled images of a person or object captured within the image. The appearance and shape labeled images are unique properties of the person or object and can be used to re-identify the person or object in subsequent images. The appearance labeled image is an aggregate of pre-stored appearance labels that are assigned to image segments of the image based on calculated appearance attributes of each image segment. The shape labeled image is an aggregate of pre-stored shape labels that are assigned to image segments of the image based on calculated shape attributes of each image segment. An identifying descriptor of the person or object can be computed based on both the appearance labeled image and the shape labeled image. The descriptor can be compared with other descriptors of later captured images to re-identify a person or object. | 02-05-2009 |
20090034792 | REDUCING LATENCY IN A DETECTION SYSTEM - A first multi-dimensional digital image of a scan region is generated. The scan region is included in a materials-detection apparatus and is configured to receive and move containers through the materials-detection apparatus. A pre-defined background range of values is accessed, the background range of values representing a range of values associated with non-target materials and the background range of values being distinct from values associated with the target materials. A value of a voxel included in the multi-dimensional digital image is compared to the background range of values to determine whether the value of the voxel is within the background range of values. If the value of the voxel is within the background range of values, the voxel is identified as a voxel representing a low-density material. A second multi-dimensional digital image that disregards the identified voxel is generated to compress the first multi-dimensional digital image. | 02-05-2009 |
20090034793 | Fast Crowd Segmentation Using Shape Indexing - A method for performing crowd segmentation includes receiving video image data (S | 02-05-2009 |
20090034794 | Conduct inference apparatus - In a conduct inference process, feature points are extracted from a capture image. The extracted feature points are collated with conduct inference models to select conduct inference models in each of which an accordance ratio between a target vector and a movement vector is within a tolerance. Among the selected conduct inference models, one conduct inference model in which a distance from a relative feature point to a return point is shortest is selected. Then, a specific conduct designated in the selected conduct inference model is tentatively determined as a specific conduct the driver intends to perform. Furthermore, based on the tentatively determined specific conduct, it is determined whether the specific conduct is probable. When it is determined that the specific conduct is probable, an alarm process is executed to output an alarm to the driver. | 02-05-2009 |
20090034795 | METHOD FOR GEOLOCALIZATION OF ONE OR MORE TARGETS - The subject of the invention is a method for geolocalization of one or more stationary targets from an aircraft by means of a passive optronic sensor. The sensor acquires at least one image I | 02-05-2009 |
20090034796 | INCAPACITY MONITOR - A method of monitoring incapacity of a subject which includes the steps of continuously monitoring eye and eyelid movement of at least one eye of the subject; analyzing eye and eyelid movements to obtain measures of ocular quiescence and the duration of an interval of no eye or eyelid movement; and if the duration of ocular quiescence exceeds a predetermined value, providing a potential incapacity warning and requesting a response within a predetermined period, and applying an emergency procedure if no response is made within a predetermined interval. | 02-05-2009 |
20090041297 | Human detection and tracking for security applications - A computer-based system for performing scene content analysis for human detection and tracking may include a video input to receive a video signal; a content analysis module, coupled to the video input, to receive the video signal from the video input, and analyze scene content from the video signal and determine an event from one or more objects visible in the video signal; a data storage module to store the video signal, data related to the event, or data related to configuration and operation of the system; and a user interface module, coupled to the content analysis module, to allow a user to configure the content analysis module to provide an alert for the event, wherein, upon recognition of the event, the content analysis module produces the alert. | 02-12-2009 |
20090041298 | IMAGE CAPTURE SYSTEM AND METHOD - Video capture systems, methods and computer program products can be provided and configured to capture video sequences of one or more participants during an activity. The video capture system can be configured to include one or more video capture devices positioned at predetermined locations in an activity area; a tracking device configured to track a location of the participant during the activity; a content storage device communicatively coupled to the video capture devices and configured to store video content received from the video capture devices; and a content assembly device communicatively coupled to the content storage device and to the tracking device, and configured to use tracking information from the tracking device to retrieve video sequences of the participant from the content storage device and to assemble the retrieved video sequences into a composite participant video. | 02-12-2009 |
20090041299 | Method and Apparatus for Recognition of an Object by a Machine - Disclosed is a method and apparatus for recognition of an object by a machine including isolating and processing an image to help facilitate recognition of the object by the machine. | 02-12-2009 |
20090041300 | HEADLIGHT SYSTEM FOR VEHICLES, PREFERABLY FOR MOTOR VEHICLES - Headlight system for vehicles, preferably for motor vehicles. | 02-12-2009 |
20090041301 | FRAME OF REFERENCE REGISTRATION SYSTEM AND METHOD - A system for assisting in work carried out on a workpiece and having a frame of reference. The system includes a referencing arrangement to register the position of a first location in the frame of reference of the system; a tool holder for holding a tool to assist with the work; a data interface to receive image data relating to the workpiece; and a processing arrangement to register the image data within the frame of reference of the system. The position of the tool holder is known within the frame of reference of the system. The image data represents an image which is indexed by position relative to the first location. The processing arrangement utilizes the relative position of the image represented by the image data with respect to the first location and the position of the first location in the frame of reference of the system. | 02-12-2009 |
20090041302 | Object type determination apparatus, vehicle, object type determination method, and program for determining object type - An object type determination apparatus, an object type determination method, a vehicle, and a program for determining an object type, capable of accurately determining the type of the object by appropriately determining periodicity in movement of the object from images, are provided. The object type determination apparatus includes an object area extracting means ( | 02-12-2009 |
20090046893 | SYSTEM AND METHOD FOR TRACKING AND ASSESSING MOVEMENT SKILLS IN MULTIDIMENSIONAL SPACE - Accurate simulation of sport to quantify and train performance constructs by employing sensing electronics for determining, in essentially real time, the player's three dimensional positional changes in three or more degrees of freedom (three dimensions); and computer controlled sport specific cuing that evokes or prompts sport specific responses from the player that are measured to provide meaningful indicia of performance. The sport specific cuing is characterized as a virtual opponent that is responsive to, and interactive with, the player in real time. The virtual opponent continually delivers and/or responds to stimuli to create realistic movement challenges for the player. | 02-19-2009 |
20090052737 | Method and Apparatus for Detecting a Target in a Scene - A method of detecting a target in a scene is described that comprises the step of taking one or more data sets, each data set comprising a plurality of normalised data elements, each normalised data element corresponding to the return from a part of the scene normalised to a reference return for the same part of the scene. The method then involves thresholding ( | 02-26-2009 |
20090052738 | SYSTEM AND METHOD FOR COUNTING FOLLICULAR UNITS - A system and method for counting follicular units using an automated system comprises acquiring an image of a body surface having skin and follicular units, filtering the image to remove skin components in the image, processing the resulting image to segment it, and filtering noise to eliminate all elements other than hair follicles of interest so that hair follicles in an area of interest can be counted. The system may comprise an image acquisition device and an image processor for performing the method. In another aspect, the system and method also classifies the follicular units based on the number of hairs in the follicular unit. | 02-26-2009 |
20090052739 | HUMAN PURSUIT SYSTEM, HUMAN PURSUIT APPARATUS AND HUMAN PURSUIT PROGRAM - A human pursuit system includes a plurality of cameras installed on a ceiling with their shooting directions directed toward a floor. A parallax of an object reflected in an overlapping image domain is calculated on the basis of at least a portion of the overlapping image domain where images overlap among the images shot by the plurality of cameras, an object whose calculated parallax is equal to or greater than a predetermined threshold value is detected as a human, a pattern image including the detected human object is extracted, and pattern matching is applied to the extracted pattern image and the image shot by the camera to thereby pursue the human movement trajectory. | 02-26-2009 |
20090052740 | MOVING OBJECT DETECTING DEVICE AND MOBILE ROBOT - A moving object detecting device measures a congestion degree of a space and utilizes the congestion degree for tracking. In performing the tracking, a direction measured by a laser range sensor is heavily weighted when the congestion degree is low. When the congestion degree is high, sensor fusion is performed by heavily weighting a direction measured by image processing on a captured image to obtain a moving object estimating direction, and a distance is obtained by the laser range sensor in the moving object estimating direction. | 02-26-2009 |
20090052741 | Subject tracking method, subject tracking device, and computer program product - A subject tracking method, includes: calculating a similarity factor indicating a level of similarity between an image contained in a search frame at each search frame position and a template image by shifting the search frame within a search target area set in each of individual frames of input images input in time sequence; determining a position of the search frame for which a highest similarity factor value has been calculated, within each input image to be a position (subject position) at which a subject is present; tracking the subject position thus determined through the individual frames of input images; calculating a difference between a highest similarity factor value and a second highest similarity factor value; and setting the search target area for a next frame based upon the calculated difference. | 02-26-2009 |
20090060270 | Image Detection Method - An image detection method is performed by a computer to determine whether or not an image in a region shot by a camera changes. According to the method, consecutive images shot by the camera are captured, and at least one anchored frame for the consecutive images is set. Whether or not the images in the anchored frame change is determined, and a signal is transmitted indicating whether the detected region is normal. Then, a notification signal is transmitted automatically to remind supervisors to closely observe the detected region. | 03-05-2009 |
20090060271 | METHOD AND APPARATUS FOR MANAGING VIDEO DATA - A method for managing video data including selecting a target object from a monitored area monitored by at least one image capturing device, extracting feature data of the selected target object, detecting motion of an object occurring in video data corresponding to the monitored area, comparing feature data of the object causing the detected motion with the extracted feature data of the target object, and outputting information related to the object causing the motion when the comparing step determines the object causing the motion is the target object. | 03-05-2009 |
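The compare-on-motion step might look like the following, with a normalized intensity histogram standing in for the unspecified feature data and an assumed L1 distance threshold:

```python
import numpy as np

def intensity_histogram(image, bins=8):
    """Toy feature extractor: a normalized intensity histogram (the patent's
    feature data is unspecified; a histogram stands in for it here)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def is_target(moving_object_img, target_features, max_distance=0.5):
    """Compare features of the object that caused the detected motion against
    the stored target features; report a match when the L1 distance is small."""
    feats = intensity_histogram(moving_object_img)
    return np.abs(feats - target_features).sum() <= max_distance
```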
20090060272 | SYSTEM AND METHOD FOR OVERLAYING COMPUTER GENERATED HIGHLIGHTS IN A DISPLAY OF MILLIMETER WAVE IMAGERY - A system and method for overlaying computer-generated highlights in a display of millimeter wave imagery is disclosed. In a particular embodiment, visible spectrum and algorithmically created images are displayed adjacent to corresponding millimeter wave imagery on a graphical user interface (GUI). The millimeter wave imagery is used to detect a threat such as a concealed object. A computer generated highlight coinciding with a location of the detected concealed object is used to automatically overlay at least one of the visible spectrum images, algorithmically created images, and millimeter wave imagery. The computer generated highlight is encoded with information valuable for aiding the user when viewing and assessing the image data. | 03-05-2009 |
20090060273 | SYSTEM FOR EVALUATING AN IMAGE - In a system for evaluating an image, a processing device includes an input for receiving image data representing the image and another input for receiving distance information on a distance of an object relative to an image plane of the image. The distance information may be determined based on a three-dimensional image including depth information captured utilizing a | 03-05-2009 |
20090060274 | IMAGE PICK-UP APPARATUS HAVING A FUNCTION OF RECOGNIZING A FACE AND METHOD OF CONTROLLING THE APPARATUS - It is judged whether or not a human face detecting mode is set (S | 03-05-2009 |
20090060275 | MOVING BODY IMAGE EXTRACTION APPARATUS AND COMPUTER READABLE STORAGE MEDIUM STORING PROGRAM - A moving body image extraction apparatus calculates difference intensity relating to a background portion with respect to a plurality of frames of a continuous shoot, calculates a value by dividing the difference intensity of an arbitrary frame of the plurality of frames by the summed difference intensity for the plurality of frames, and outputs an extracted image of a moving body in the arbitrary frame based on the calculated value. | 03-05-2009 |
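The per-pixel weighting the abstract describes (difference intensity of one frame divided by the summed difference intensity over all frames) can be written directly:

```python
import numpy as np

def moving_body_alpha(frames, background, target_idx):
    """Per-pixel extraction weight for the continuous-shoot frames: the
    target frame's difference intensity against the background, divided by
    the summed difference intensity over all frames. Pixels the moving body
    occupied only in the target frame get a weight near 1; static pixels
    get 0."""
    diffs = [np.abs(f.astype(float) - background) for f in frames]
    total = np.sum(diffs, axis=0)
    total[total == 0] = 1.0   # static background pixels: avoid 0/0, weight 0
    return diffs[target_idx] / total
```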
20090060276 | METHOD FOR DETECTING AND/OR TRACKING OBJECTS IN MOTION IN A SCENE UNDER SURVEILLANCE THAT HAS INTERFERING FACTORS; APPARATUS; AND COMPUTER PROGRAM - A method for detection and/or tracking of objects in motion | 03-05-2009 |
20090060277 | BACKGROUND MODELING WITH FEATURE BLOCKS - Video content analysis of a video may include: modeling a background of the video; detecting at least one target in a foreground of the video based on the feature blocks of the video; and tracking each target of the video. Modeling a background of the video may include: dividing each frame of the video into image blocks; determining features for each image block of each frame to obtain feature blocks for each frame; determining a feature block map for each frame based on the feature blocks of each frame; and determining a background feature block map to model the background of the video based on at least one of the feature block maps. | 03-05-2009 |
20090060278 | STATIONARY TARGET DETECTION BY EXPLOITING CHANGES IN BACKGROUND MODEL - A sequence of video frames of an area of interest is obtained. A first background model of the area of interest is constructed based on a first parameter. A second background model of the area of interest is constructed based on a second parameter, the second parameter being different from the first parameter. A difference between the first and second background models is determined. A stationary target is determined based on the determined difference. An alert concerning the stationary target is generated. | 03-05-2009 |
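One plausible reading of the two-background-model idea above, with the update (learning) rate as the differing parameter, is a fast and a slow running average whose per-pixel difference flags stationary targets:

```python
import numpy as np

def stationary_mask(frames, alpha_fast=0.5, alpha_slow=0.01, thresh=50.0):
    """Maintain two running-average background models that differ only in
    update rate (the 'parameter' of the abstract). An object that stops in
    the scene is absorbed quickly by the fast model but slowly by the slow
    one, so a large per-pixel difference between the models indicates a
    stationary target. Parameter values here are illustrative."""
    fast = frames[0].astype(float).copy()
    slow = frames[0].astype(float).copy()
    for f in frames[1:]:
        fast = (1 - alpha_fast) * fast + alpha_fast * f
        slow = (1 - alpha_slow) * slow + alpha_slow * f
    return np.abs(fast - slow) > thresh
```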
20090067673 | METHOD AND APPARATUS FOR DETERMINING THE POSITION OF A VEHICLE, COMPUTER PROGRAM AND COMPUTER PROGRAM PRODUCT - The present invention relates to an apparatus and a method for determining the position of a vehicle moved along a path, with markers, particularly code carriers or barcodes, being located along the path. The method is characterized in that the markers are detected with a digital camera placed on the vehicle and that, by means of image processing, from a position of at least one marker image in the detection or coverage range of the digital camera, a position of the vehicle relative to the given marker or markers is determined in the main vehicle movement direction along the path and in at least one direction at right angles to the main movement direction. The invention also relates to a computer program and a computer program product. | 03-12-2009 |
20090067674 | Monitoring device - The invention concerns a monitoring device with a multi-camera device and an object tracking device for the high-resolution observation of moving objects. The object tracking device comprises an image integration device for the generation of a total image from the individual images of the multi-camera device and a cut-out definition device for the definition, independent of the borders of the individual images, of the cut-out to be observed. | 03-12-2009 |
20090074244 | Wide luminance range colorimetrically accurate profile generation method - Generating a color profile for a digital input device. Color values for at least one color target positioned within a first scene are measured, the color target having multiple color patches. An image of the first scene is generated using the digital input device, the first scene including the color target(s). Color values from a portion of the image corresponding to the color target are extracted and a color profile is generated, based on the measured color values and the extracted color values. The generated color profile is used to transform the color values of an image of a second scene captured under the same lighting conditions as the first scene. Using this generated color profile to transform images is likely to result in more colorimetrically accurate transformations of images created under real-world lighting conditions. | 03-19-2009 |
20090074245 | Miniature autonomous agents for scene interpretation - A miniature autonomous apparatus for performing scene interpretation, comprising: image acquisition means, image processing means, memory means and communication means, the processing means comprising means for determining an initial parametric representation of the scene; means for updating the parametric representation according to predefined criteria; means for analyzing the image, comprising means for determining, for each pixel of the image, whether it is a hot pixel, according to predefined criteria; means for defining at least one target from the hot pixels; means for measuring predefined parameters for at least one target; and means for determining, for at least one target whether said target is of interest, according to application-specific criteria, and wherein said communication means are adapted to output the results of said analysis. | 03-19-2009 |
20090074246 | METHOD AND SYSTEM FOR THE AUTOMATIC DETECTION OF EVENTS IN SPORT FIELDS - The present invention refers to the problem of the automatic detection of events in sport fields, in particular Goal/NoGoal events, by signalling to the match management, which can autonomously take the final decision upon the event. The system is not invasive for the field structures, nor does it require interrupting the game or modifying the rules thereof; it only aims at detecting objectively the event occurrence and at providing support in the referees' decisions by means of specific signalling of the detected events. | 03-19-2009 |
20090074247 | Obstacle detection method - A method is provided for the detection of an obstacle in a road, in particular of a pedestrian, in the surroundings in the range of view of an optical sensor attached to a movable carrier such as in particular a vehicle, wherein a first image is taken by means of the optical sensor at a first time and a second image is taken at a later second time, a first transformed image is produced by a transformation of the first taken image from the image plane of the optical sensor into the road plane, a further transformed image is produced from the first transformed image while taking account of the carrier movement in the time period between the first time and the second time, the further transformed image is transformed back from the road plane into the image plane and an image stabilization is carried out based on the image transformed back into the image plane and on the second taken image. | 03-19-2009 |
20090074248 | GESTURE-CONTROLLED INTERFACES FOR SELF-SERVICE MACHINES AND OTHER APPLICATIONS - A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measure is used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines. | 03-19-2009 |
20090080695 | Electro-optical Foveated Imaging and Tracking System - Conventional electro-optical imaging systems cannot achieve wide field of view (FOV) and high spatial resolution imaging simultaneously due to format size limitations of image sensor arrays. To implement wide field of regard imaging with high resolution, mechanical scanning mechanisms are typically used. Still, sensor data processing and communication speed is constrained due to the large amount of data if large format image sensor arrays are used. This invention describes an electro-optical imaging system that achieves wide FOV global imaging for suspect object detection and local high resolution for object recognition and tracking. It mimics the foveated imaging property of human eyes. There is no mechanical scanning for changing the region of interest (ROI). Two relatively small format image sensor arrays are used to respectively acquire a global low resolution image and a local high resolution image. The ROI is detected and located by analysis of the global image. A lens array along with an electronically addressed switch array and a magnification lens is used to pick out and magnify the local image. The global image and local image are processed by the processor, and can be fused for display. Three embodiments of the invention are described. | 03-26-2009 |
20090080696 | Automated person identification and location for search applications - A “be on the look out” or BOLO device is an unsupervised device that can be deployed at a particular location to watch for a specific target or person. A camera produces scene images that the BOLO device analyzes to determine if they contain a pattern matching a target descriptor. If a matching pattern is found, then the BOLO device emits an alarm signal. The alarm signal can contain the BOLO device's location or identification. A location database can produce the device's location when given the device's identification. A target transmitter can supply new target descriptors to deployed BOLO devices. | 03-26-2009 |
20090080697 | Imaging position analyzing method - The imaging position of each of the frames in image data of a plurality of frames captured while a vehicle is traveling is accurately determined. | 03-26-2009 |
20090080698 | Image display apparatus and computer program product - A comprehensive degree of relevance of other moving picture contents with respect to a moving-picture content to be processed is calculated by using any one of or all of content information, frame information, and image characteristics, to display a virtual space in which a visualized content corresponding to a moving picture content to be displayed, which is selected based on the degree of relevance, is located at a position away from a layout position of the visualized content corresponding to the moving picture content to be processed, according to the degree of relevance. | 03-26-2009 |
20090080699 | 3D Beverage Container Localizer - Objects placed on a flat surface are identified and localized by using a single view image. The single view image in the perspective projection is transformed to a normalized image in a pseudo plan view to enhance detection of the bottom or top shapes of the objects. One or more geometric features are detected from the normalized image by processing the normalized image. The detected geometric features are analyzed to determine the identity and the location of the objects on the flat surface. | 03-26-2009 |
20090080700 | PROJECTILE TRACKING SYSTEM - A system and method for determining the track of a projectile use a thermal signature of the projectile. Sequential infrared image frames are acquired from a sensor at a given position. A set of frames containing spots with characteristics consistent with a projectile in flight are identified. A possible projectile track solution for said spots is identified. A thermal signature value for each pixel of each spot of the possible solution is determined. The determined thermal signature is then compared to an actual thermal signature for a substantially similar projectile track to ascertain whether the determined thermal signature substantially matches the actual thermal signature, which indicates that the possible projectile track solution is the correct solution. | 03-26-2009 |
20090080701 | Method for object tracking - The present invention relates to a method for the recognition and tracking of a moving object, in particular of a pedestrian, from a motor vehicle, at which a camera device is arranged. An image of the environment including picture elements is taken in the range of view of the camera device ( | 03-26-2009 |
20090080702 | Method for the recognition of obstacles - A method is provided for the recognition of an obstacle, in particular a pedestrian, located in the travel path of a movable carrier such as in particular a motor vehicle, in the environment in the range of view of an optical sensor attached to the movable carrier, wherein a first image is taken by means of the optical sensor at a first time and a second image is taken at a later second time, wherein a first transformed lower part image is generated by a projection of an image section of the first taken image lying below the horizon from the image plane of the optical sensor into the ground plane, wherein a first transformed upper part image is generated by a projection of an image section of the first taken image lying above the horizon from the image plane of the optical sensor into a virtual plane parallel to the ground plane, wherein a second transformed lower part image is generated by a projection of an image section of the second taken image lying below the horizon from the image plane of the optical sensor into the ground plane, wherein a second transformed upper part image is generated by a projection of an image section of the second taken image lying above the horizon from the image plane of the optical sensor into a virtual plane parallel to the ground plane, wherein a lower difference part image is determined from the first and second transformed lower part images, an upper difference part image is determined from the first and second transformed upper part images and it is determined by evaluation of the lower difference part image and of the upper difference part image whether an obstacle is located in the travel path of the movable carrier. | 03-26-2009 |
20090087023 | Method and System for Detecting and Tracking Objects in Images - The invention describes a method and system for detecting and tracking an object in a sequence of images. For each image, the invention determines an object descriptor from a tracking region in the current image, in which the tracking region corresponds to the location of the object in the previous image. A regression function is applied to the descriptor to determine a motion of the object from the previous image to the current image, in which the motion has a matrix Lie group structure. The location of the tracking region is updated using the motion of the object. | 04-02-2009 |
20090087024 | CONTEXT PROCESSOR FOR VIDEO ANALYSIS SYSTEM - Embodiments of the present invention provide a method and a system for mapping a scene depicted in an acquired stream of video frames that may be used by a machine-learning behavior-recognition system. A background image of the scene is segmented into a plurality of regions representing various objects of the background image. Statistically similar regions may be merged and associated. The regions are analyzed to determine their z-depth order in relation to a video capturing device providing the stream of the video frames and other regions, using occlusions between the regions and data about foreground objects in the scene. An annotated map describing the identified regions and their properties is created and updated. | 04-02-2009 |
20090087025 | Shadow and highlight detection system and method of the same in surveillance camera and recording medium thereof - A method and system for detecting a shadow region and a highlight region from a foreground region in a surveillance system, and a recording medium thereof, are provided. The system includes an image capturing unit to capture a new image, a background model unit to receive the new image and update a stored background model with the new image, a difference image obtaining unit to compare the new image with the background model and to obtain a difference image between the new image and the background model, a penumbra region extraction unit to extract a partial shadow region or a partial highlight region by measuring a sharpness of an edge of the difference image and expanding a background region, and an umbra region extraction unit to extract a complete shadow region or a complete highlight region based on the result of the extraction by the penumbra region extraction unit. | 04-02-2009 |
20090087026 | METHOD AND SYSTEM OF MATERIAL IDENTIFICATION USING BINOCULAR STEREOSCOPIC AND MULTI-ENERGY TRANSMISSION IMAGES - The present invention provides a method and system of material identification using binocular stereoscopic and multi-energy transmission images. With the method, any obstacle that dominates the ray absorption can be peeled off from the objects that overlap in the direction of a ray beam. The object that is unobvious due to a relatively small amount of ray absorption will thus stand out, and the material property of the object, such as organic, mixture, metal and the like, can be identified. This method lays a foundation for automatic identification of harmful objects, such as explosives, drugs, etc., concealed in a freight container. | 04-02-2009 |
20090087027 | ESTIMATOR IDENTIFIER COMPONENT FOR BEHAVIORAL RECOGNITION SYSTEM - An estimator/identifier component for a computer vision engine of a machine-learning based behavior-recognition system is disclosed. The estimator/identifier component may be configured to classify an object being one of two or more classification types, e.g., as being a vehicle or a person. Once classified, the estimator/identifier may evaluate the object to determine a set of kinematic data, static data, and a current pose of the object. The output of the estimator/identifier component may include the classifications assigned to a tracked object, as well as the derived information and object attributes. | 04-02-2009 |
20090087028 | Hand Washing Monitoring System - A hand washing monitoring system ( | 04-02-2009 |
20090087029 | 4D GIS based virtual reality for moving target prediction - The technology of the 4D-GIS system deploys a GIS-based algorithm used to determine the location of a moving target through registering the terrain image obtained from a Moving Target Indication (MTI) sensor or small Unmanned Aerial Vehicle (UAV) camera with the digital map from GIS. For motion prediction the target state is estimated using an Extended Kalman Filter (EKF). In order to enhance the prediction of the moving target's trajectory, a fuzzy logic reasoning algorithm is used to estimate the destination of a moving target through synthesizing data from GIS, target statistics, tactics and other past experience derived information, such as the likely moving direction of targets in correlation with the nature of the terrain and the surmised mission. | 04-02-2009 |
20090087030 | Digital Image Processing Using Face Detection Information - A method of processing a digital image using face detection within the image achieves one or more desired image processing parameters. A group of pixels is identified that correspond to an image of a face within the digital image. Default values are determined of one or more parameters of at least some portion of the digital image. Values are adjusted of the one or more parameters within the digitally-detected image based upon an analysis of the digital image including the image of the face and the default values. | 04-02-2009 |
20090092282 | System and Method for Tracking Objects with a Synthetic Aperture - A computer implemented method tracks 3D positions of an object moving in a scene. A sequence of images is acquired of the scene with a set of cameras such that at each time instant a set of images is acquired of the scene, in which each image includes pixels. Each set of images is aggregated into a synthetic aperture image including the pixels, and the pixels in each set of images are matched corresponding to multiple locations and multiple depths of a target window with an appearance model to determine scores for the multiple locations and multiple depths. A particular location and a particular depth having a maximal score is selected as the 3D position of the moving object. | 04-09-2009 |
20090092283 | SURVEILLANCE AND MONITORING SYSTEM - A system having one or more devices for detection, surveillance and monitoring. Video images of scenes with persons from the devices may be processed and provided to a biometrics component for standoff biometric acquisition and matching. Various remote and internal databases may be resorted to for biometric matching. Matching results may go to the history component and the strategy and association component. The output of the latter component may be subject to behavior inference and analysis. The system may be interconnected with outside entities such as an access control system. | 04-09-2009 |
20090092284 | Light Modulation Techniques for Imaging Objects in or around a Vehicle - Method and system for obtaining information about an object in a compartment in a vehicle includes directing illumination into the compartment, spatial or temporally modulating the illumination, receiving light reflected from an object in the compartment, and analyzing the reflected light to obtain information about the object. The compartment may be a passenger compartment of an automobile, the trunk of an automobile or the interior of a trailer of a truck. The illumination may be directed from a light source and the reflected light received at a receiver spaced apart from the light source. Analysis of the reflected light may therefore entail applying a triangulation calculation to enable a determination of a distance between the light source and illuminated point on the object. The same method and system can be adapted for monitoring the environment around the vehicle. | 04-09-2009 |
20090092285 | METHOD OF LOCAL TRACING OF CONNECTIVITY AND SCHEMATIC REPRESENTATIONS PRODUCED THEREFROM - A schematic diagram detailing a circuit that was reverse engineered from a plurality of images taken of the circuit is provided. The schematic diagram includes at least one circuit element that was represented as an object in at least one of the plurality of images, such that signal continuity information was determined through local tracing of connectivity between a first image and a second image of the plurality of images. A method of tracing the connectivity within the plurality of images to produce the schematic diagram is also disclosed. | 04-09-2009 |
20090092286 | IMAGE GENERATING APPARATUS, IMAGE GENERATING PROGRAM, IMAGE GENERATING PROGRAM RECORDING MEDIUM AND IMAGE GENERATING METHOD - When an obstacle does not exist in a horizontal direction in a direction of a virtual camera, a PC coordinate is set as a point of gaze. When the player character comes close to a high wall while the procedure of S | 04-09-2009 |
20090092287 | Mixed Media Reality Recognition With Image Tracking - An MMR system integrating image tracking and recognition comprises a plurality of mobile devices, a pre-processing server or MMR gateway, and an MMR matching unit, and may include an MMR publisher. The MMR matching unit receives an image query from the pre-processing server or MMR gateway and sends it to one or more of the recognition units to identify a result including a document, the page, and the location on the page. Image tracking information also is provided for determining relative locations of images within a document page. The mobile device includes an image tracker for providing at least a portion of the image tracking information. The present invention also includes methods for image tracking-assisted recognition, recognition of multiple images using a single image query, and improved image tracking using MMR recognition. | 04-09-2009 |
20090097704 | ON-CHIP CAMERA SYSTEM FOR MULTIPLE OBJECT TRACKING AND IDENTIFICATION - Apparatus and methods provide multiple object identification and tracking using an object recognition system, such as a camera system. One method of tracking multiple objects includes constructing a first set of objects in real time as a camera scans an image of a first frame row by row. A second set of objects is constructed concurrently in real time as the camera scans an image of a second frame row by row. The first and second sets of objects are stored separately in memory and the sets of objects are compared. Based on the comparison between the first frame (previous frame) and the second frame (current frame), a unique ID is assigned to an object in the second frame (current frame). | 04-16-2009 |
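The frame-to-frame ID assignment this entry describes could be sketched with bounding-box overlap as the comparison; the overlap criterion is an assumption, since the abstract does not say how the two frames' object sets are compared:

```python
def overlap(a, b):
    """Intersection area of two boxes (y0, x0, y1, x1), inclusive coordinates."""
    y0, x0 = max(a[0], b[0]), max(a[1], b[1])
    y1, x1 = min(a[2], b[2]), min(a[3], b[3])
    return max(0, y1 - y0 + 1) * max(0, x1 - x0 + 1)

def assign_ids(prev, curr, next_id):
    """prev: {id: box} from the previous frame; curr: boxes constructed from
    the current frame's row-by-row scan. Each current object takes the id of
    the best-overlapping previous object, or a fresh unique id when nothing
    overlaps (a new object entering the scene)."""
    ids = {}
    for box in curr:
        best = max(prev, key=lambda i: overlap(prev[i], box), default=None)
        if best is not None and overlap(prev[best], box) > 0:
            ids[best] = box
        else:
            ids[next_id] = box
            next_id += 1
    return ids, next_id
```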
20090097705 | OBTAINING INFORMATION BY TRACKING A USER - A device may obtain tracking information of a face or a head of a user, determine a position and orientation of the user, and determine a direction of focus of the user based on the tracking information, the position, and the orientation. In addition, the device may retrieve information associated with a location at which the user focused. | 04-16-2009 |
20090097706 | SYSTEMS AND METHODS FOR DETERMINING IF OBJECTS ARE IN A QUEUE - Systems and methods that determine a position value of a first object and a position value of a second object, and compare the position value of the first object with the position value of the second object to determine if the second object is in a queue with the first object are provided. | 04-16-2009 |
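A toy version of the position-value comparison, using hypothetical gap and lateral-offset criteria (the abstract does not disclose the actual test):

```python
def in_queue(first, second, direction=(1.0, 0.0), max_gap=3.0, max_lateral=1.0):
    """Hypothetical queue test: project the offset from the first object to
    the second onto the queue direction (a unit vector). The second object is
    'in the queue' when it sits behind the first within `max_gap` and within
    `max_lateral` of the queue line."""
    dx = second[0] - first[0]
    dy = second[1] - first[1]
    along = dx * direction[0] + dy * direction[1]
    lateral = abs(-dx * direction[1] + dy * direction[0])
    return 0.0 < along <= max_gap and lateral <= max_lateral
```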
20090097707 | Method of controlling digital image processing apparatus for face detection, and digital image processing apparatus employing the method - Provided is a method of controlling a digital image processing apparatus for detecting a face from continuously input images, the method comprising operations (a) to (c). In (a), if a face is detected, image information of a body area is stored. In (b), if the face is not detected, a body having the image information stored in (a) is detected. In (c), if a current body is detected after a previous body was detected in (b), an image characteristic of the previously detected body is compared to an image characteristic of the currently detected body, and a movement state of the face is determined according to the comparison result. | 04-16-2009 |
20090097708 | Image-Processing System and Image-Processing Method - A vehicle-periphery-image-providing system may include an image-capturing unit, a viewpoint-change unit, an image-composition unit, an object-detection unit, a line-width-setting unit, and a line-selection unit. The image-capturing units, such as cameras, capture images outside a vehicle periphery and generate image-data items. The viewpoint-change unit generates a bird's-eye-view image for each image-data item based on the image-data item so that end portions of the real spaces corresponding to two adjacent bird's-eye-view images overlap each other. The image-composition unit generates a bird's-eye-view-composite image by combining the bird's-eye-view images according to a predetermined layout. The object-detection unit detects an object existing in the real space corresponding to a portion where the bird's-eye-view images of the bird's-eye-view-composite image are joined to each other. The line-width-setting unit sets the width of the line image corresponding to the joining portion. The line-selection unit adds a line image having the set width to an overlap portion of one of the bird's-eye-view images. | 04-16-2009 |
20090097709 | SIGNAL PROCESSING APPARATUS - A signal processing apparatus for displaying an input image in the state in which a part of the image is enlarged displays an enlarged image obtained by enlarging a part of a designated object in the input image so that the enlarged image is superimposed at a position in accordance with the position of the designated object. | 04-16-2009 |
20090097710 | METHODS AND SYSTEM FOR COMMUNICATION AND DISPLAYING POINTS-OF-INTEREST - A method for displaying point-of-interest coordinate locations in perspective images and for coordinate-based information transfer between perspective images on different platforms includes providing a shared reference image of a region overlapping the field of view of the perspective view. The perspective view is then correlated with the shared reference image so as to generate a mapping between the two views. This mapping is then used to derive a location of a given coordinate from the shared reference image within the perspective view and the location is indicated in the context of the perspective view on a display. | 04-16-2009 |
20090097711 | Detecting apparatus of human component and method thereof - Disclosed are an apparatus and a method of detecting a human component from an input image. The apparatus includes a training database (DB) to store positive and negative samples of a human component, an image processor to calculate a difference image for the input image, a sub-window processor to extract a feature population from a difference image that is calculated by the image processor for the positive and negative samples of a predetermined human component stored in the training DB, and a human classifier to detect a human component corresponding to a human component model using the human component model that is learned from the feature population. | 04-16-2009 |
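The difference-image step described in this abstract could look like the following sketch. Plain absolute frame differencing is an assumption here; the abstract does not fix the exact operator:

```python
import numpy as np

def difference_image(frame_a, frame_b):
    """Absolute per-pixel difference between two grayscale frames.

    A minimal stand-in for the difference image the abstract describes;
    intermediate math is done in int16 to avoid uint8 wrap-around.
    """
    a = frame_a.astype(np.int16)
    b = frame_b.astype(np.int16)
    return np.abs(a - b).astype(np.uint8)
```

The feature population for the classifier would then be extracted from sub-windows of such difference images.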
20090103775 | Multi-Tracking of Video Objects - An inventive method for video object tracking includes the steps of selecting an object, choosing an object type for the object, and enabling one of multiple object tracking processes responsive to the object type chosen. In a preferred embodiment selecting the object includes one of segmenting the object by using a region, selecting points on the boundary of an object, aggregating regions or combining a selected region and selected points on a boundary of an object. The object tracking processes can be expanded to include tracking processes adapted to newly created object types. | 04-23-2009 |
20090103776 | Method of Non-Uniformity Compensation (NUC) of an Imager - The present invention provides for simple and streamlined boresight correlation of FLIR-to-missile video. Boresight correlation is performed with un-NUCed missile video, which allows boresight correlation and NUC to be performed simultaneously thereby reducing the time required to acquire a target and fire the missile. The current approach uses the motion of the missile seeker for NUCing to produce spatial gradient filtering in the missile image by differencing images as the seeker moves. This compensates DC non-uniformities in the image. A FLIR image is processed with a matching displace and subtract spatial filter constructed based on the tracked scene motion. The FLIR image is resampled to match the missile image resolution, and the two images are preprocessed and correlated using conventional methods. Improved NUC is provided by cross-referencing multiple measurements of each area of the scene as viewed by different pixels in the imager. This approach is based on the simple yet novel premise that every pixel in the array that looks at the same thing should see the same thing. As a result, the NUC terms adapt to non-uniformities in the imager and not the scene. | 04-23-2009 |
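The "displace and subtract" spatial filter mentioned above can be sketched as a shifted self-difference. The displacement `(dx, dy)` stands in for the tracked scene motion, and the zeroed-border handling is an assumption, not from the patent:

```python
import numpy as np

def displace_and_subtract(image, dx, dy):
    """Difference an image with a copy of itself shifted by (dx, dy).

    Each pixel in the valid region has the pixel displaced by (dx, dy)
    subtracted from it, producing a spatial-gradient-like image; pixels
    with no displaced counterpart are left at zero.
    """
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float64)
    # Valid destination window once the shift is applied.
    y0, y1 = max(dy, 0), h + min(dy, 0)
    x0, x1 = max(dx, 0), w + min(dx, 0)
    out[y0:y1, x0:x1] = (
        image[y0:y1, x0:x1].astype(np.float64)
        - image[y0 - dy:y1 - dy, x0 - dx:x1 - dx].astype(np.float64)
    )
    return out
```

Because each output value is a difference of two pixel readings, any offset common to both readings cancels, which is the intuition behind using such a filter while NUCing.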
20090103777 | Lock and hold structured light illumination - A method, system, and associated program code, for 3-dimensional image acquisition, using structured light illumination, of a surface-of-interest under observation by at least one camera. One aspect includes: illuminating the surface-of-interest, while static/at rest, with structured light to obtain initial depth map data therefor; while projecting a hold pattern comprised of a plurality of snake-stripes at the static surface-of-interest, assigning an identity to and an initial lock position of each of the snake-stripes of the hold pattern; and while projecting the hold pattern, tracking, from frame-to-frame each of the snake-stripes. Another aspect includes: projecting a hold pattern comprised of a plurality of snake-stripes; as the surface-of-interest moves into a region under observation by at least one camera that also comprises the projected hold pattern, assigning an identity to and an initial lock position of each snake-stripe as it sequentially illuminates the surface-of-interest; and while projecting the hold pattern, tracking, from frame-to-frame, each snake-stripe while it passes through the region. Yet another aspect includes: projecting, in sequence at the surface-of-interest positioned within a region under observation by at least one camera, a plurality of snake-stripes of a hold pattern by opening/moving a shutter cover; as each of the snake-stripes sequentially illuminates the surface-of-interest, assigning an identity to and an initial lock position of that snake-stripe; and while projecting the hold pattern, tracking, from frame-to-frame, each of the snake-stripes once it has illuminated the surface-of-interest and entered the region. | 04-23-2009 |
20090103778 | Composition determining apparatus, composition determining method, and program - A composition determining apparatus includes a subject detecting unit configured to detect one or more specific subjects in an image based on image data; a subject orientation detecting unit configured to detect subject orientation information indicating an orientation in the image of the subject detected by the subject detecting unit, the detection of the subject orientation information being performed for each of the detected subjects; and a composition determining unit configured to determine a composition based on the subject orientation information. When a plurality of subjects are detected by the subject detecting unit, the composition determining unit determines a composition based on a relationship among a plurality of pieces of the subject orientation information corresponding to the plurality of subjects. | 04-23-2009 |
20090103779 | MULTI-SENSORIAL HYPOTHESIS BASED OBJECT DETECTOR AND OBJECT PURSUER - The invention relates to a method for multi-sensorial object detection, wherein sensor information is evaluated together from several different sensor signal flows having different sensor signal properties. For said evaluation, the at least two sensor signal flows are not adapted to each other and/or projected onto each other, but object hypotheses are generated in each of the at least two sensor signal flows and characteristics for at least one classifier are generated based on said object hypotheses. Said object hypotheses are subsequently evaluated by means of a classifier and are associated with one or more categories. At least two categories are identified and the object is associated with one of the two categories. | 04-23-2009 |
20090103780 | Hand-Gesture Recognition Method - One embodiment of the invention includes a method of providing device inputs. The method includes illuminating hand gestures performed via a bare hand of a user in a foreground of a background surface with at least one infrared (IR) light source. The method also includes generating a first plurality of silhouette images associated with the bare hand based on an IR light contrast between the bare hand and the background surface and generating a second plurality of silhouette images associated with the bare hand based on an IR light contrast between the bare hand and the background surface. The method also includes determining a plurality of three-dimensional features of the bare hand relative to the background surface based on a parallax separation of the bare hand in the first plurality of silhouette images relative to the second plurality of silhouette images. The method also includes determining a provided input gesture based on the plurality of three-dimensional features of the bare hand and comparing the provided input gesture with a plurality of predefined gesture inputs in a gesture library. The method further includes providing at least one device input corresponding to interaction with displayed visual content based on the provided input gesture corresponding to one of the plurality of predefined gesture inputs. | 04-23-2009 |
20090110235 | SYSTEM AND METHOD FOR SELECTION OF AN OBJECT OF INTEREST DURING PHYSICAL BROWSING BY FINGER FRAMING - A system and method for selecting an object from a plurality of objects in a physical environment is disclosed. The method may include framing an object located in a physical environment by positioning an aperture at a selected distance from a user's eye, the position of the aperture being selected such that the aperture substantially encompasses the object as viewed from the user's perspective, detecting the aperture by analyzing image data including the aperture and the physical environment, and selecting the object substantially encompassed by the detected aperture. The method may further include identifying the selected object based on its geolocation, collecting and merging data about the identified object from a plurality of data sources, and displaying the collected and merged data. | 04-30-2009 |
20090110236 | Method And System For Object Detection And Tracking - Disclosed is a method and system for object detection and tracking. Spatio-temporal information for a foreground/background appearance module is updated, based on a new input image and the accumulated previous appearance information and foreground/background labeling information over time. Object detection is performed according to the new input image and the updated spatio-temporal information and transmitted previous information over time, based on the labeling result generated by the object detection. The information for the foreground/background appearance module is repeatedly updated until a convergent condition is reached. The produced labeling result from object detection is considered as a new tracking measurement for further updating on a tracking prediction module. A final tracking result may be obtained through the updated tracking prediction module, which is determined by the current tracking measurement and the previous observed tracking results. The tracking object location at the next time is predicted. The returned predicted appearance information for the foreground/background object is used as the input for updating the foreground and background appearance module. The returned labeling information is used as the information over time for the object detection. | 04-30-2009 |
20090110237 | METHOD FOR POSITIONING A NON-STRUCTURAL OBJECT IN A SERIES OF CONTINUING IMAGES - A method for positioning a non-structural object in a series of continuing images is disclosed, which comprises the steps of: establishing a pattern representing a target object while analyzing the pattern for obtaining positions relative to a representative feature of the pattern; picking up a series of continuing images including the target object for utilizing the brightness variations at the boundary defining the representative feature which are detected in the series of continuing images to calculate and thus obtain a predictive candidate position of the representative feature in an image picked up next to the series of continuing images; calculating the differences between the boundaries defining the representative feature at the predictive candidate position in the series of continuing images and also calculating the similarities between the pattern and those boundaries; and using the differences and the similarities to calculate and thus obtain the position of the representative feature in the image picked up next to the series of continuing images. | 04-30-2009 |
20090110238 | Automatic correlation modeling of an internal target - A method and apparatus to automatically control the timing of an image acquisition by an imaging system in developing a correlation model of movement of a target within a patient. | 04-30-2009 |
20090110239 | System and method for revealing occluded objects in an image dataset - Disclosed are a system and method for identifying objects in an image dataset that occlude other objects and for transforming the image dataset to reveal the occluded objects. In some cases, occluding objects are identified by processing the image dataset to determine the relative positions of visual objects. Occluded objects are then revealed by removing the occluding objects from the image dataset or by otherwise de-emphasizing the occluding objects so that the occluded objects are seen behind them. A visual object may be removed simply because it occludes another object, because of privacy concerns, or because it is transient. When an object is removed or de-emphasized, the objects that were behind it may need to be “cleaned up” so that they show up well. To do this, information from multiple images can be processed using interpolation techniques. The image dataset can be further transformed by adding objects to the images. | 04-30-2009 |
20090110240 | METHOD FOR DETECTING A MOVING OBJECT IN AN IMAGE STREAM - The invention relates to a method for detecting a moving object in a stream of images taken at successive instants, of the type comprising, for each zone of a predefined set of zones of at least one pixel of the image constituting a current image, a step ( | 04-30-2009 |
20090110241 | IMAGE PROCESSING APPARATUS AND METHOD FOR OBTAINING POSITION AND ORIENTATION OF IMAGING APPARATUS - An image processing apparatus obtains location information of each image feature in a captured image based on image coordinates of the image feature in the captured image. The image processing apparatus selects location information usable to calculate a position and an orientation of the imaging apparatus among the obtained location information. The image processing apparatus obtains the position and the orientation of the imaging apparatus based on the selected location information and an image feature corresponding to the selected location information among the image features included in the captured image. | 04-30-2009 |
20090116691 | METHOD FOR LOCATING AN OBJECT ASSOCIATED WITH A DEVICE TO BE CONTROLLED AND A METHOD FOR CONTROLLING THE DEVICE - The invention describes a method for locating an object (B | 05-07-2009 |
20090116692 | REALTIME OBJECT TRACKING SYSTEM - A real-time computer vision system tracks one or more objects moving in a scene using a target location technique which does not involve searching. The imaging hardware includes a color camera, frame grabber and processor. The software consists of the low-level image grabbing software and a tracking algorithm. The system tracks objects based on the color, motion and/or shape of the object in the image. A color matching function is used to compute three measures of the target's probable location based on the target color, shape and motion. The method then computes the most probable location of the target using a weighting technique. Once the system is running, a graphical user interface displays the live image from the color camera on the computer screen. The operator can then use the mouse to select a target for tracking. The system will then keep track of the moving target in the scene in real-time. | 05-07-2009 |
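The weighting step that fuses the color, shape and motion measures of the target's probable location could be as simple as a normalized weighted average. The abstract does not specify the weighting technique, so the sketch below is only one plausible reading:

```python
def fuse_location_estimates(estimates, weights):
    """Combine per-cue (x, y) location measures into one estimate.

    `estimates` holds one (x, y) pair per cue (e.g. color, shape,
    motion); `weights` holds the corresponding confidence weights.
    Returns the weighted mean location.
    """
    total = sum(weights)
    x = sum(w * e[0] for e, w in zip(estimates, weights)) / total
    y = sum(w * e[1] for e, w in zip(estimates, weights)) / total
    return (x, y)
```

A cue that is momentarily unreliable (e.g. color under changing lighting) would simply be given a smaller weight rather than being dropped.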
20090116693 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing method is provided for an image processing apparatus which executes processing by allocating a plurality of weak discriminators to form a tree structure having branches corresponding to types of objects so as to detect objects included in image data. Each weak discriminator calculates a feature amount to be used in a calculation of an evaluation value of the image data, and discriminates whether or not the object is included in the image data by using the evaluation value. The weak discriminator allocated to a branch point in the tree structure further selects a branch destination using at least some of the feature amounts calculated by weak discriminators included in each branch destination. | 05-07-2009 |
20090123028 | Target Position Setting Device And Parking Assist Device With The Same - A target position setting device includes a distance meter, an imager, first and second calculating portions, a determination portion, and a setting portion. The distance meter measures a distance to an object around a vehicle. The imager takes an image of an environment around the vehicle. The first calculating portion calculates a first candidate of a target position of the vehicle according to a measuring result of the distance meter. The second calculating portion calculates a second candidate of the target position of the vehicle according to an imaging result of the imager. The determination portion determines whether a relationship between the first candidate and the second candidate meets a given condition. The setting portion sets the target position according to the second candidate of the target position when the determination portion determines that the relationship between the first candidate and the second candidate meets the given condition. | 05-14-2009 |
20090123029 | Display-and-image-pickup apparatus, object detection program and method of detecting an object - A display-and-image-pickup apparatus includes: a display-and-image-pickup panel having an image display function and an image pickup function; an image producing means for producing a predetermined processed image on the basis of a picked-up image of a proximity object obtained through the use of the display-and-image-pickup panel; an image processing means for obtaining information about the proximity object through selectively using one of two obtaining modes on the basis of at least one of the picked-up image and the processed image; and a switching means for switching processes so that, in the case where the parameter is increasing, one of the two obtaining modes is switched to the other obtaining mode when the parameter reaches a threshold value, and in the case where the parameter is decreasing, the other obtaining mode is switched to the one obtaining mode when the parameter reaches a smaller threshold value. | 05-14-2009 |
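The two-threshold switching rule described in this abstract is a hysteresis scheme: the upward and downward transitions use different thresholds so the mode does not chatter near a single boundary. A minimal sketch, with placeholder mode names 'A' and 'B' standing in for the two obtaining modes:

```python
def switch_mode(mode, parameter, upper, lower):
    """Hysteresis switching between two obtaining modes.

    While the parameter is increasing, mode 'A' flips to 'B' at the
    larger threshold `upper`; while it is decreasing, 'B' flips back
    to 'A' only once the parameter falls to the smaller threshold
    `lower`. Otherwise the current mode is kept.
    """
    if mode == 'A' and parameter >= upper:
        return 'B'
    if mode == 'B' and parameter <= lower:
        return 'A'
    return mode
```

With `upper > lower`, a parameter hovering between the two thresholds leaves the mode unchanged, which is the point of the asymmetric thresholds.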
20090123030 | Method For The Autostereoscopic Presentation Of Image Information With Adaptation To Suit Changes In The Head Position Of The Observer - For continuous tracking without noticeable skips during physical changes in head position, the intensities of all subpixels of the matrix screen are reduced in order to form intensity focuses for subpixel groups behind barrier elements, which comprise a number n of subpixels, including a subpixel reserve, in the image lines. In the case of parallel alterations, these intensity focuses are then displaced by a constant absolute value continuously through directly adjacent subpixels and also through subpixel group boundaries with different stereo image views. Distance changes involve the intensity focuses being increasingly widened or compressed relative to the screen edges. The intensities of the individual subpixels can be altered by means of simple multiplication by standardized constant or variable intensity factors which can be ascertained as a function of motion. | 05-14-2009 |
20090129628 | METHOD FOR DETERMINING THE POSITION OF AN OBJECT FROM A DIGITAL IMAGE - A method for determining the position of an object point in a scene from a digital image thereof acquired through an optical system is presented. The image comprises a set of image points corresponding to object points, and the positions of the object points are determined by means of predetermined vectors associated with the image points. Each predetermined vector represents the inverted direction of a light ray in the object space that will produce the corresponding image point through the optical system, including all distortion effects of the optical system. | 05-21-2009 |
20090129629 | Method And Apparatus For Adaptive Object Detection - Disclosed is a method and apparatus for adaptive object detection, which may be applied in detecting an object having an ellipse feature. The method for adaptive object detection comprises performing object shape detection based on the foreground extracted from the object; determining whether the object is occluded according to the detected feature statistics of the object; if the object is not occluded, determining whether to switch object shape detection to ellipse detection; if the object is occluded or a switch to ellipse detection is necessary, performing ellipse detection on the foreground; when the foreground is detected to have ellipse features, continuing to track the object; and when the current detection is ellipse detection, determining whether the ellipse detection can switch back to object shape detection. | 05-21-2009 |
20090129630 | 3D TEXTURED OBJECTS FOR VIRTUAL VIEWPOINT ANIMATIONS - 3d textured objects are provided for virtual viewpoint animations. In one aspect, an image of an event is obtained from a camera and an object in the image is automatically detected. For example, the event may be a sports event and the object may be a stationary object which is detected based on a known location, color and shape. A 3d model of the object is combined with a textured 3d model of the event to depict a virtual viewpoint which differs from a viewpoint of the camera. The textured 3d model of the event has texture applied from an image of the event, while the 3d model of the object does not have such texture applied, in one approach. In another aspect, an object in the image such as a participant in a sporting event is represented in the virtual viewpoint by a textured 3d kinematics model. | 05-21-2009 |
20090129631 | Method of Tracking the Position of the Head in Real Time in a Video Image Stream - The invention relates to a method of tracking the position of the bust of a user on the basis of a video image stream, said bust comprising the user's torso and head, the method comprising the determination of the position of the torso on a first image, in which method a virtual reference frame is associated with the torso on said first image, and in which method, for a second image, a new position of the virtual reference frame is determined on said second image, and, a relative position of the head with respect to said new position of the virtual reference frame is measured by comparison with the position of the virtual reference frame on said first image, so as to determine independently the movements of the head and the torso. | 05-21-2009 |
20090129632 | Method of object detection - A method is set forth for the detection of an object, in particular a pedestrian in a road, in the surroundings in the range of view of an optical sensor attached to a carrier such as, in particular, a vehicle, wherein, from the range of view of the optical sensor, a relevant spatial region disposed below the horizon is determined, a gray scale image is produced by means of the optical sensor which includes a relevant image region corresponding to the relevant spatial region, and a search for a possible object is only made in this relevant image region corresponding to the relevant spatial region disposed below the horizon for the detection of the object. | 05-21-2009 |
20090136089 | 3D inspection of an object using x-rays - A method is presented for a 3D inspection of an object or bag in order to check for explosives or contraband. The method is applicable to Computed Tomography, Laminography or any other method that can be used to produce images of slices through the object. According to this method, it is not necessary to reconstruct the slice image with a high resolution as is required for visual display, but it is sufficient to reconstruct the image at only a sample or a set of points or pixels that are sparsely distributed within the reconstructed slice. The properties of the object are then analyzed only at these sparsely distributed pixels within the slice to make a determination for the presence or absence of explosives or contraband. This process of image reconstruction and analysis is repeated over several slices spaced through the volume of the object. In another embodiment of this invention, the set of points or pixels at which the image is reconstructed are offset spatially with respect to the set of pixels in the adjacent or neighboring slice. This invention greatly reduces the computational burden, hence simplifies the hardware and software design, speeds up the scanning process and allows for a more complete and uniform inspection of the entire volume of the object. | 05-28-2009 |
20090136090 | House Displacement Judging Method, House Displacement Judging Device - To attain a house change judging method and device which can judge a change with high precision and is capable of fully automating the judgment, the present invention provides a house change judging method for judging a change of a house ( | 05-28-2009 |
20090141935 | MOTION COMPENSATED CT RECONSTRUCTION OF HIGH CONTRAST OBJECTS - Cardiac CT imaging using gated reconstruction is currently limited in its temporal and spatial resolution. According to an exemplary embodiment of the present invention, an examination apparatus is provided in which an identification of a high contrast object is performed. This high contrast object is then followed through the phases, resulting in a motion vector field of the high contrast object, on the basis of which a motion compensated reconstruction is then performed. | 06-04-2009 |
20090141936 | Object-Tracking Computer Program Product, Object-Tracking Device, and Camera - A computer performs following steps according to a program for tracking an object. Template matching of each frame of an input image to a plurality of template images is performed, a template image having a highest similarity with an image within a predetermined region of the input image is selected as a selected template among the plurality of template images and the predetermined region of the input image is extracted as a matched region. With reference to an image within the matched region thus extracted, by tracking motion between frames, motion of an object is tracked between the images of the plurality of frames. It is determined as to whether or not a result of template matching satisfies an update condition for updating the plurality of template images. In a case that the update condition is determined to be satisfied, at least one of the plurality of template images is updated. | 06-04-2009 |
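The template-selection step above (pick the template with the highest similarity to an image region) can be sketched with normalized cross-correlation as the similarity score. The metric is an assumption; the abstract does not name one:

```python
import numpy as np

def best_template(image_patch, templates):
    """Pick the template most similar to a grayscale image patch.

    Scores each template against the patch with normalized
    cross-correlation (mean-subtracted, scale-invariant) and returns
    the index of the best template together with its score.
    """
    def ncc(a, b):
        a = a.astype(np.float64) - a.mean()
        b = b.astype(np.float64) - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom else 0.0

    scores = [ncc(image_patch, t) for t in templates]
    best = int(np.argmax(scores))
    return best, scores[best]
```

A tracker along the lines of the abstract would call this per frame on the predicted region, then check the returned score against an update condition before refreshing any template.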
20090141937 | Subject Extracting Method, Subject Tracking Method, Image Synthesizing Method, Computer Program for Extracting Subject, Computer Program for Tracking Subject, Computer Program for Synthesizing Images, Subject Extracting Device, Subject Tracking Device, and Image Synthesizing Device - A binary mask image for extracting subject is generated by binarizing an image after image-processing (processed image) with a predefined threshold value. Based on an image before image-processing (pre-processing image) and the binary mask image for extracting subject, a subject image in which only a subject included in the pre-processing image is extracted is generated by eliminating a background region from the pre-processing image. | 06-04-2009 |
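The binarize-and-mask procedure in this abstract maps directly onto array operations. A minimal sketch assuming single-channel images and a background filled with zeros:

```python
import numpy as np

def extract_subject(pre_image, processed_image, threshold):
    """Extract the subject from a pre-processing image via a binary mask.

    The processed image is binarized with `threshold` to form the mask;
    pixels where the mask is set keep their value from the pre-processing
    image, and the background region is eliminated (set to zero).
    """
    mask = processed_image >= threshold
    subject = np.where(mask, pre_image, 0)
    return mask, subject
```

The same mask could then serve the tracking and synthesizing aspects of the invention, since it delimits the subject region per frame.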
20090141938 | ROBOT VISION SYSTEM AND DETECTION METHOD - A robot vision system for outputting a disparity map includes a stereo camera for receiving left and right images and outputting a disparity map between the two images; an encoder for encoding either the left image or the right image into a motion compensation-based video bit-stream; and a decoder for extracting an encoding type of an image block, a motion vector, and a DCT coefficient from the video bit-stream. Further, the system includes a person detector for detecting and labeling person blocks in the image using the disparity map between the left image and the right image, the block encoding type, and the motion vector, and detecting a distance from the labeled person to the camera; and an obstacle detector for detecting a closer obstacle than the person using the block encoding type, the motion vector, and the DCT coefficient extracted from the video bit-stream, and the disparity map. | 06-04-2009 |
20090141939 | Systems and Methods for Analysis of Video Content, Event Notification, and Video Content Provision - A method for remote event notification over a data network is disclosed. The method includes receiving video data from any source, analyzing the video data with reference to a profile to select a segment of interest associated with an event of significance, encoding the segment of interest, and sending to a user a representation of the segment of interest for display at a user display device. A further method for sharing video data based on content according to a user-defined profile over a data network is disclosed. The method includes receiving the video data, analyzing the video data for relevant content according to the profile, consulting a profile to determine a treatment of the relevant content, and sending data representative of the relevant content according to the treatment. | 06-04-2009 |
20090141940 | Integrated Systems and Methods For Video-Based Object Modeling, Recognition, and Tracking - The present disclosure relates to systems and methods for modeling, recognizing, and tracking object images in video files. In one embodiment, a video file, which includes a plurality of frames, is received. An image of an object is extracted from a particular frame in the video file, and a subsequent image is also extracted from a subsequent frame. A similarity value is then calculated between the extracted images from the particular frame and subsequent frame. If the calculated similarity value exceeds a predetermined similarity threshold, the extracted object images are assigned to an object group. The object group is used to generate an object model associated with images in the group, wherein the model is comprised of image features extracted from optimal object images in the object group. Optimal images from the group are also used for comparison to other object models for purposes of identifying images. | 06-04-2009 |
20090141941 | IMAGE PROCESSING APPARATUS AND METHOD FOR ESTIMATING ORIENTATION - A method of estimating an orientation of one or more of a plurality of objects disposed on a plane, from one or more video images of a scene, which includes the objects on the plane produced from a view of the scene by a video camera. The method comprises receiving for each of the one or more objects, object tracking data, which provides a position of the object on the plane in the video images with respect to time, determining from the object tracking data a plurality of basis vectors associated with at least one of the objects, each basis vector corresponding to a factor, which can influence the orientation of the object and each basis vector being related to the movement or location of the one or more objects, and combining the basis vectors in accordance with a blending function to calculate an estimate of the orientation of the object on the plane, the blending function including blending coefficients which determine a relative magnitude of each basis vector used in the blending function. | 06-04-2009 |
20090147991 | METHOD, SYSTEM, AND COMPUTER PROGRAM FOR DETECTING AND CHARACTERIZING MOTION - A method for motion detection/characterization is provided including the steps of (a) capturing a series of time lapsed images of the target, wherein the target moves between at least two of such images; (b) generating a motion distribution in relation to the target across the series of images; and (c) identifying motion of the target based on analysis of the motion distribution. In a further aspect of motion detection/characterization in accordance with the invention, motion is detected/characterized based on calculation of a color distribution for a series of images. A system and computer program for presenting an augmented environment based on the motion detection/characterization is also provided. An interface means based on the motion detection/characterization is also provided. | 06-11-2009 |
20090147992 | THREE-LEVEL SCHEME FOR EFFICIENT BALL TRACKING - A three-level ball detection and tracking method is disclosed. The ball detection and tracking method employs three levels to generate multiple ball candidates rather than a single one. The ball detection and tracking method constructs multiple trajectories using candidate linking, then uses optimization criteria to determine the best ball trajectory. | 06-11-2009 |
20090147993 | HEAD-TRACKING SYSTEM - A head-tracking system and a method for operating a head-tracking system in which a stationary reference point is detected are provided. A detector for detecting the position of a head is calibrated based on the detected stationary reference point. In one example implementation, the detection of the stationary reference point is used to determine the position of the head. | 06-11-2009 |
20090147994 | TORO: TRACKING AND OBSERVING ROBOT - The present invention provides a method for tracking entities, such as people, in an environment over long time periods. A region-based model is generated to model beliefs about entity locations. Each region corresponds to a discrete area representing a location where an entity is likely to be found. Each region includes one or more positions which more precisely specify the location of an entity within the region so that the region defines a probability distribution of the entity residing at different positions within the region. A region-based particle filtering method is applied to entities within the regions so that the probability distribution of each region is updated to indicate the likelihood of the entity residing in a particular region as the entity moves. | 06-11-2009 |
20090147995 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM - An information processing apparatus includes information input units which input observation information from a real space; an event detection unit which generates event information, including estimated position and identification information for users present in the real space, by analyzing the input information; and an information integration processing unit which sets hypothesis probability distribution data regarding user position and user identification information and generates analysis information, including the user position information, through hypothesis updating and selection based on the event information. The event detection unit detects a face area in an image frame supplied from an image information input unit, extracts face attribute information from the face area, and calculates and outputs a face attribute score corresponding to the extracted face attribute information to the information integration processing unit, and the information integration processing unit applies the face attribute score to calculate target face attribute expectation values. | 06-11-2009 |
20090154768 | METHOD OF MOTION DETECTION AND AUTONOMOUS MOTION TRACKING USING DYNAMIC SENSITIVITY MASKS IN A PAN-TILT CAMERA - A method of identifying motion within a field of view includes capturing at least two sequential images within the field of view. Each of the images includes a respective array of pixel values. An array of difference values between corresponding ones of the pixel values in the sequential images is calculated. A sensitivity region map corresponding to the field of view is provided. The sensitivity region map includes a plurality of regions having different threshold values. A presence of motion is determined by comparing the difference values to corresponding ones of the threshold values. | 06-18-2009 |
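As an illustration only (not code from the patent), the per-region thresholding that this abstract describes can be sketched in NumPy; the array names, frame values, and threshold values below are invented for the example:

```python
import numpy as np

def detect_motion(frame_a, frame_b, region_map, thresholds):
    """Flag motion where the inter-frame pixel difference exceeds the
    threshold assigned to that pixel's sensitivity region."""
    diff = np.abs(frame_a.astype(int) - frame_b.astype(int))
    # region_map holds a region index per pixel; look up its threshold.
    per_pixel_threshold = thresholds[region_map]
    return diff > per_pixel_threshold

frame_a = np.array([[10, 10], [10, 10]])
frame_b = np.array([[30, 10], [12, 40]])
region_map = np.array([[0, 0], [1, 1]])   # two sensitivity regions
thresholds = np.array([15, 5])            # region 0 is less sensitive
motion = detect_motion(frame_a, frame_b, region_map, thresholds)
```

With these values the pixel difference of 2 in region 1 is ignored while a difference of 30 is flagged, which is the point of the dynamic sensitivity mask: the same change can count as motion in one region and noise in another.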
20090154769 | Moving robot and moving object detecting method and medium thereof - A moving robot, a moving object detecting method, and a medium thereof are disclosed. The moving object detecting method includes transforming an omni-directional image captured by the moving robot into a panoramic image, comparing the panoramic image with a previous panoramic image and estimating a movement region of the moving object based on the comparison, and recognizing that the moving object has moved within the estimated movement region when the area of the estimated movement region exceeds a reference area. | 06-18-2009 |
20090154770 | Moving Amount Calculation System and Obstacle Detection System - An arithmetic device ( | 06-18-2009 |
20090161911 | Moving Object Detection Apparatus And Method - Disclosed is a moving object detection apparatus and method. The apparatus comprises an image capture module, an image alignment module, a temporal differencing module, a distance transform module, and a background subtraction module. The image capture module derives a plurality of images in a time series. The image alignment module aligns the images if the image capture module is situated on a movable platform. The temporal differencing module performs temporal differencing on the captured or aligned images and generates a difference image. The distance transform module transforms the difference image into a distance map. The background subtraction module applies the distance map to background subtraction and compares the results with the currently captured image, so as to obtain information on moving objects. | 06-25-2009 |
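The first two stages described in this abstract, temporal differencing followed by a distance transform, can be sketched as follows. This is an illustrative assumption of how such modules might work, not the patented implementation; a brute-force distance map stands in for an efficient transform:

```python
import numpy as np

def difference_mask(prev, curr, thresh=10):
    """Temporal differencing: mark pixels whose value changed by at
    least `thresh` between two consecutive frames."""
    return np.abs(curr.astype(int) - prev.astype(int)) >= thresh

def distance_map(mask):
    """Brute-force distance transform: for every pixel, the Euclidean
    distance to the nearest changed pixel in the difference mask."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = np.min(np.hypot(ys - y, xs - x))
    return out

prev = np.zeros((3, 3), dtype=np.uint8)
curr = prev.copy()
curr[0, 0] = 200                      # one changed (moving) pixel
dmap = distance_map(difference_mask(prev, curr))
```

In practice an O(n) transform such as `scipy.ndimage.distance_transform_edt` would replace the double loop; the resulting map gives background subtraction a soft notion of "near recent motion" rather than a hard binary mask.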
20090161912 | METHOD FOR OBJECT DETECTION - In one aspect, the present invention is directed to a method for object detection, the method comprising the steps of: dividing a digital image into a plurality of sub-windows of substantially the same dimensions; processing the image of each of the sub-windows by a cascade of homogeneous classifiers (each of the homogeneous classifiers produces a CRV, a value related to the likelihood that a sub-window comprises an image of the object of interest, and each of the classifiers has increasing accuracy in identifying features associated with the object of interest); and, upon all of the classifiers of the cascade classifying a sub-window as comprising an image of the object of interest, applying a post-classifier to the cascade CRVs for evaluating the likelihood that the sub-window comprises an image of the object of interest, wherein the post-classifier differs from the homogeneous classifiers. | 06-25-2009 |
20090169052 | Object Detector - An object position area ( | 07-02-2009 |
20090169053 | COLLABORATIVE TRACKING - Disclosed is a system ( | 07-02-2009 |
20090169054 | METHOD OF ADJUSTING SELECTED WINDOW SIZE OF IMAGE OBJECT - A method of adjusting the selected window size of an image object is applicable to tracking a target object in a video. The video includes a plurality of frames, and the target object has a display range that changes as each frame is played back. According to a variation trend of the display range of the target object, the number of variations corresponding to the trend is recorded; when it reaches a threshold value, the selected window size is reset, such that the target object is enclosed by a selected window whose size is closer to that of the target object. | 07-02-2009 |
20090175496 | Image processing device and method, recording medium, and program - The present invention relates to image processing apparatus and method, a recording medium, and a program for providing reliable tracking of a tracking point. When a right eye | 07-09-2009 |
20090175497 | LOCATION MEASURING DEVICE AND METHOD - With apparatus and method for measuring in three dimensions by applying an estimating process to points corresponding to feature points in a plurality of motion image frames, high speed and high accuracy are realized. The apparatus comprises: a first track determining section ( | 07-09-2009 |
20090175498 | LOCATION MEASURING DEVICE AND METHOD - To realize high speed and high precision with device and method of three-dimensional measurement by applying estimating process to points corresponding to feature points in a plurality of motion frame images. With the device and method of calculating location information through processes of choosing a stereo pair, relative orientation, and bundle adjustment and using corresponding points of feature points extracted from respective motion frame images, each process is made up of two stages. To the first process section (stages: | 07-09-2009 |
20090175499 | Systems and methods for identifying objects and providing information related to identified objects - Systems and methods for identifying an object and presenting additional information about the identified object are provided. The techniques of the present invention can allow the user to specify modes to help with identifying objects. Furthermore, the additional information can be provided with different levels of detail depending on user selection. Apparatus for presenting a user with a log of the identified objects is also provided. The user can customize the log by, for example, creating a multi-media album. | 07-09-2009 |
20090175500 | Object tracking apparatus - An object tracking apparatus tracks an object on image data captured continuously. The object tracking apparatus includes an object color adjusting unit and a particle filter processing unit. The object color adjusting unit calculates tendency of color change in regions on image data and adjusts a color of the object set as an object color based on the tendency of color change to obtain a reference color. The particle filter processing unit estimates a region corresponding to the object on image data based on likelihood of each particle calculated by comparing a color around each particle with the reference color, using particles which move on image data according to a predefined rule. | 07-09-2009 |
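One update step of a color-based particle filter of the kind this abstract describes might look like the following sketch. The Gaussian motion rule, color-likelihood model, and all names and values are illustrative assumptions, not the patented design:

```python
import numpy as np

def particle_filter_step(particles, image, reference_color, motion_noise, rng):
    """One tracking step: propagate particles, weight them by color
    similarity to the reference color, estimate, and resample."""
    # Predefined motion rule: a Gaussian random walk, clipped to the image.
    particles = particles + rng.normal(0, motion_noise, particles.shape)
    particles = np.clip(particles, 0, np.array(image.shape[:2]) - 1)
    # Likelihood of each particle from the color at its position.
    ys, xs = particles.astype(int).T
    color_dist = np.linalg.norm(image[ys, xs] - reference_color, axis=1)
    weights = np.exp(-0.5 * (color_dist / 20.0) ** 2)
    weights /= weights.sum()
    # Estimated object position: the weighted mean of the particles.
    estimate = (particles * weights[:, None]).sum(axis=0)
    # Resample particles in proportion to their weights.
    particles = particles[rng.choice(len(particles), len(particles), p=weights)]
    return particles, estimate

rng = np.random.default_rng(0)
image = np.zeros((20, 20, 3))
image[10:15, 10:15] = [255, 0, 0]          # red object to track
reference_color = np.array([255.0, 0.0, 0.0])
ys, xs = np.mgrid[0:20:5, 0:20:5]          # 16 particles on a grid
particles = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
particles, estimate = particle_filter_step(
    particles, image, reference_color, motion_noise=0.0, rng=rng)
```

The abstract's distinctive addition, adjusting the reference color from the tendency of color change in the scene, would slot in before the likelihood computation; this sketch keeps the reference color fixed.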
20090175501 | Imaging control apparatus and imaging control method - An imaging control apparatus includes preset information management means for holding and managing unit preset information including positional information indicative of the position of an imaging field changing mechanism that changes the imaging field of view of an imaging unit, and reference image data, the preset information management means, as a registration process in response to a registration instruction, producing and holding unit preset information including positional information indicative of the position of the imaging field changing mechanism when the registration instruction is issued and reference image data related to the positional information and produced based on an image signal obtained through imaging performed by the imaging unit when the registration instruction is issued; operation screen display control means for controlling display of an operation image used to select among preset items that correspond to respective sets of unit preset information held in the preset information management means, the operation screen display control means displaying and presenting, for each of the preset items, the reference image data contained in the corresponding unit preset information on the operation screen; and drive control means for carrying out drive control for changing the position of the imaging field changing mechanism, the drive control means carrying out the drive control in such a way that when a preset item is selected and entered on the operation screen, the imaging field changing mechanism is positioned as indicated by the positional information in the unit preset information that corresponds to the selected and entered preset item. | 07-09-2009 |
20090175502 | Methods for discriminating moving objects in motion image sequences - In an exemplary embodiment of the present invention, an automated, computerized method is provided for classifying pixel values in a motion sequence of images. According to a feature of the present invention, the method comprises the steps of determining spectral information relevant to the sequence of images, and utilizing the spectral information to classify a pixel as one of background, shadow and object. | 07-09-2009 |
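A common spectral rationale behind background/shadow/object discrimination is that a cast shadow dims every color channel by roughly the same factor, preserving chromaticity. The sketch below illustrates that idea only; the thresholds and function names are invented assumptions, not the patented method:

```python
import numpy as np

def classify_pixel(pixel, background, change_tol=10, dark_tol=0.15, chroma_tol=0.05):
    """Classify a pixel against its background model: 'background' if
    essentially unchanged, 'shadow' if uniformly darkened (spectral
    ratios preserved), otherwise 'object'."""
    pixel = np.asarray(pixel, dtype=float)
    background = np.asarray(background, dtype=float)
    if np.all(np.abs(pixel - background) <= change_tol):
        return "background"
    # A cast shadow dims every channel by roughly the same factor,
    # so the per-channel ratio to the background stays nearly equal.
    ratio = pixel / np.maximum(background, 1e-6)
    if ratio.mean() < 1 - dark_tol and ratio.std() < chroma_tol:
        return "shadow"
    return "object"
```

For example, a pixel at half the background brightness in all three channels classifies as shadow, while a pixel that brightens one channel and darkens the others classifies as object.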
20090185715 | SYSTEM AND METHOD FOR DEFORMABLE OBJECT RECOGNITION - The present invention provides a system and method for detecting deformable objects in images even in the presence of partial occlusion, clutter and nonlinear illumination changes. A holistic approach for deformable object detection is disclosed that combines the advantages of a match metric based on the normalized gradient direction of the model points, the decomposition of the model into parts, and a search method that takes all search results for all parts into account at the same time. Despite the fact that the model is decomposed into sub-parts, the relevant size of the model that is used for the search at the highest pyramid level is not reduced. Hence, the present invention does not suffer from the speed limitations, due to a reduced number of pyramid levels, that prior art methods have. | 07-23-2009 |
20090185716 | DUST DETECTION SYSTEM AND DIGITAL CAMERA - A dust detection system comprising a receiver, a dust extraction block, a memory, and an image correction block is provided. The receiver receives an image signal. The dust extraction block generates a dust image signal on the basis of the image signal. The memory stores an intrinsic-flaw image signal corresponding to an intrinsic-flaw image including sub-images of dust that the dust extraction block extracts during initialization. The image correction block generates a corrected dust-image signal on the basis of the intrinsic-flaw image signal and a normal dust-image signal. The normal dust-image signal corresponds to a normal dust image including sub-images of dust that the dust extraction block extracts after initialization. The corrected dust image is the normal dust image from which the sub-images of dust in the intrinsic-flaw image have been deleted. | 07-23-2009 |
20090185717 | OBJECT DETECTION SYSTEM WITH IMPROVED OBJECT DETECTION ACCURACY - In a system for detecting a target object, a similarity determining unit sets a block in a picked-up image and compares the part of the picked-up image contained in the block with pattern image data while changing the location of the block in the picked-up image, to determine a similarity of each part of the picked-up image contained in a corresponding one of the differently located blocks with respect to the pattern image data. A specifying unit extracts, from all of the differently located blocks, those blocks whose determined similarity is equal to or greater than a predetermined threshold similarity, and specifies, in the picked-up image, a target area based on a frequency distribution of the extracted blocks. | 07-23-2009 |
20090190797 | RECOGNIZING IMAGE ENVIRONMENT FROM IMAGE AND POSITION - A method of recognizing the environment of an image from an image and position information associated with the image includes acquiring the image and its associated position information; using the position information to acquire an aerial image correlated to the position information; identifying the environment of the image from the acquired aerial image; and storing the environment of the image in association with the image for subsequent use. | 07-30-2009 |
20090190798 | SYSTEM AND METHOD FOR REAL-TIME OBJECT RECOGNITION AND POSE ESTIMATION USING IN-SITU MONITORING - Provided are a system and method for real-time object recognition and pose estimation using in-situ monitoring. The method includes the steps of: a) receiving 2D and 3D image information, extracting evidences from the received 2D and 3D image information, recognizing an object by comparing the evidences with a model, and expressing locations and poses by probabilistic particles; b) probabilistically fusing the various locations and poses and finally determining a location and a pose by filtering out inaccurate information; c) generating an ROI by receiving the 2D and 3D image information and the location and pose from step b) and collecting and calculating environmental information; d) selecting an evidence or a set of evidences probabilistically by receiving the information from step c) and proposing a cognitive action of a robot for collecting additional evidence; and e) repeating steps a) and b) and steps c) and d) in parallel until a result of object recognition and pose estimation is probabilistically satisfactory. | 07-30-2009 |
20090190799 | METHOD FOR CHARACTERIZING THE EXHAUST GAS BURN-OFF QUALITY IN COMBUSTION SYSTEMS - A method for characterizing a flue gas burnout quality of a combustion process in a combustion system having a gas burnout zone includes optically detecting in a visible wavelength range, in a flow cross section of the gas burnout zone, low-soot combustion regions, regions without combustion, and sooting regions, so as to provide a plurality of successive individual images, the regions without combustion and the sooting regions having different dynamics. The plurality of successive individual images are analyzed so as to distinguish regions of transition, to the low-soot combustion regions, of the regions without combustion and the sooting regions. | 07-30-2009 |
20090196459 | Image manipulation and processing techniques for remote inspection device - A remote inspection apparatus has an imager disposed in an imager head and capturing image data. An active display unit receives the image data in digital form and graphically renders the image data on an active display. Movement tracking sensors track movement of the imager head and/or image display unit. In some aspects, a computer processor located in the active display unit employs information from movement tracking sensors tracking movement of the imager head to generate and display a marker indicating a position of the imager head. In additional aspects, the computer processor employs information from movement tracking sensors tracking movement of the active display unit to control movement of the imager head. In other aspects, the computer processor employs information from movement tracking sensors tracking movement of the active display unit to modify the image data rendered on the active display. | 08-06-2009 |
20090196460 | EYE TRACKING SYSTEM AND METHOD - An eye tracking system and method is provided giving persons with severe disabilities the ability to access a computer through eye movement. A system comprising a head tracking system, an eye tracking system, a display device, and a processor which calculates the gaze point of the user is provided. The eye tracking method comprises determining the location and orientation of the head, determining the location and orientation of the eye, calculating the location of the center of rotation of the eye, and calculating the gaze point of the eye. A method for inputting to an electronic device a character selected by a user through alternate means is provided, the method comprising placing a cursor near the character to be selected by said user, shifting the characters on a set of keys which are closest to the cursor, tracking the movement of the character to be selected with the cursor, and identifying the character to be selected by comparing the direction of movement of the cursor with the direction of movement of the characters of the set of keys which are closest to the cursor. | 08-06-2009 |
20090196461 | IMAGE CAPTURE DEVICE AND PROGRAM STORAGE MEDIUM - An image capture device includes a capture unit configured to capture an image of an object, an object detection unit configured to detect the object in the image captured by the capture unit, an angle detection unit configured to detect an angle of the object detected by the object detection unit, and a control unit configured to perform a predetermined control operation for the image capture device based on the angle of the object detected by the angle detection unit. | 08-06-2009 |
20090196462 | VIDEO AND AUDIO CONTENT ANALYSIS SYSTEM - The present invention is directed to various methods and systems for analysis and processing of video and audio signals from a plurality of sources in real-time or off-line. According to some embodiments of the present invention, analysis and processing applications are dynamically installed in the processing units. | 08-06-2009 |
20090202107 | Object detection and recognition system - An object recognition system is provided including at least one image capturing device configured to capture at least one image, wherein the image includes a plurality of pixels and is represented in an image data set, an object detection device configured to identify a plurality of pixels corresponding to objects from the at least one image, wherein an object includes a plurality of pixels and is represented in an object data set, wherein the object data set includes a set of features corresponding to each pixel in the object, and an image recognition device configured to recognize objects of interest present in an object by image correlation against a set of template images to recognize an object as one of the templates. | 08-13-2009 |
20090202108 | ASSAYING AND IMAGING SYSTEM IDENTIFYING TRAITS OF BIOLOGICAL SPECIMENS - A method and system are provided for assaying specimens. In connection with such a system or method, plural multi-pixel target images of a field of view are obtained at different corresponding points in time over a given sample period. A background image is obtained using a plural set of the target images. For a range of points in time, the background image is removed from the target images to produce corresponding background-removed target images. Analysis is performed using at least a portion of the background-removed target images to identify visible features of the specimens. A holding structure is provided to hold a set of discrete specimen containers. A positioning mechanism is provided to position a plural subset of the containers so as to place the moving specimens within those containers within the field of view of the camera. | 08-13-2009 |
20090208052 | INTERACTIVE DEVICE AND METHOD FOR TRANSMITTING COMMANDS FROM A USER - According to the present invention, an interactive device is provided comprising a display, a camera, and an image analysing means. The interactive device comprises means to acquire an image with the camera, the analysing means detecting at least a human face in the acquired image and displaying on the display at least a pattern where the human face was detected. The interactive device further comprises means to determine a halo region extending at least around the pattern, means to add into the halo region at least one interactive zone related to a command, means to detect movement in the interactive zone, and means to execute the command by said device. | 08-20-2009 |
20090208053 | AUTOMATIC IDENTIFICATION AND REMOVAL OF OBJECTS IN AN IMAGE, SUCH AS WIRES IN A FRAME OF VIDEO - A wire tracking system is described that provides a method and system for automatically locating wires in a digital image and tracking the located wires through a sequence of digital images. The wire tracking system is particularly good at removing wires from complex shots where background replacement is difficult. The wire tracking system performs complex signal processing to automatically remove the wire from the original image while preserving grain and background detail. Thus, the wire tracking system provides a reliable method of automatically identifying wires and replacing the wires with a reconstructed background image, and frees artists to make other enhancements to the scene. | 08-20-2009 |
20090208054 | MEASURING A COHORT'S VELOCITY, ACCELERATION AND DIRECTION USING DIGITAL VIDEO - A computer implemented method, apparatus, and computer program product for identifying positional data for an object moving in an area of interest. Positional data for each camera in a set of cameras associated with the object is retrieved. The positional data identifies a location of each camera in the set of cameras within the area of interest. The object is within an image capture range of each camera in the set of cameras. Metadata describing video data captured by the set of cameras is analyzed using triangulation analytics and the positional data for the set of cameras to identify a location of the object. The metadata is generated in real time as the video data is captured by the set of cameras. The positional data for the object is identified based on locations of the object over a given time interval. The positional data describes motion of the object. | 08-20-2009 |
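The triangulation analytics mentioned in this abstract reduce, in the planar two-camera case, to intersecting two sighting rays. The sketch below shows only that geometric core under invented names and coordinates; it is not the patented system, which works from video metadata and multiple cameras:

```python
import math

def triangulate(cam1, bearing1, cam2, bearing2):
    """Locate an object on the ground plane from two camera positions
    and the bearing (radians, measured from the x-axis) at which each
    camera sees it: intersect the two sighting rays."""
    (x1, y1), (x2, y2) = cam1, cam2
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # Solve cam1 + t*d1 = cam2 + s*d2 for t (a 2x2 linear system).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    t = ((x2 - x1) * (-d2[1]) - (-d2[0]) * (y2 - y1)) / det
    return (x1 + t * d1[0], y1 + t * d1[1])

# Two cameras 10 units apart both sight the object at 45 degrees
# inward; the sighting rays cross at (5, 5).
point = triangulate((0.0, 0.0), math.pi / 4, (10.0, 0.0), 3 * math.pi / 4)
```

Repeating this fix at successive timestamps and differencing the positions over the time interval yields the velocity, acceleration, and direction the abstract refers to.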
20090208055 | Efficient detection of broken line segments in a scanned image - Systems and methods are presented for detecting and repairing broken lines within an image from a plurality of edge segments comprising a plurality of pixels and having associated first and second endpoints. A characteristic angle is determined for each edge segment. A normal distance is determined for each line segment according the distance of closest approach to a reference point for a line defined by the first and second endpoints of each edge segment. At least one line within the scanned image is located according to the determined characteristic angles and the determined normal distance for the plurality of edge segments. | 08-20-2009 |
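The characteristic angle and normal distance described in this abstract form a per-segment signature: two edge segments of one broken line share nearly the same signature. A minimal sketch of that idea, with invented tolerances and no handling of the angle wrap-around near vertical lines:

```python
import math

def line_signature(p1, p2):
    """Characteristic angle (mod 180 degrees) and normal distance --
    the closest approach to the origin (the reference point) -- of the
    line defined by the two endpoints of an edge segment."""
    (x1, y1), (x2, y2) = p1, p2
    angle = math.atan2(y2 - y1, x2 - x1) % math.pi
    # Perpendicular distance from the origin to the infinite line.
    dist = abs(x1 * y2 - x2 * y1) / math.hypot(x2 - x1, y2 - y1)
    return angle, dist

def same_line(seg_a, seg_b, ang_tol=0.05, dist_tol=1.5):
    """Two segments likely belong to one broken line when both their
    characteristic angles and their normal distances agree."""
    a1, d1 = line_signature(*seg_a)
    a2, d2 = line_signature(*seg_b)
    return abs(a1 - a2) < ang_tol and abs(d1 - d2) < dist_tol
```

Two collinear horizontal segments at y = 3 match (angle 0, normal distance 3 for both), while a parallel segment at y = 10 does not; matched segments would then be merged to repair the broken line.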
20090208056 | REAL-TIME FACE TRACKING IN A DIGITAL IMAGE ACQUISITION DEVICE - An image processing apparatus for tracking faces in an image stream iteratively receives a new acquired image from the image stream, the image potentially including one or more face regions. The acquired image is sub-sampled ( | 08-20-2009 |
20090208057 | VIRTUAL CONTROLLER FOR VISUAL DISPLAYS - Virtual controllers for visual displays are described. In one implementation, a camera captures an image of hands against a background. The image is segmented into hand areas and background areas. Various hand and finger gestures isolate parts of the background into independent areas, which are then assigned control parameters for manipulating the visual display. Multiple control parameters can be associated with attributes of multiple independent areas formed by two hands, for advanced control including simultaneous functions of clicking, selecting, executing, horizontal movement, vertical movement, scrolling, dragging, rotational movement, zooming, maximizing, minimizing, executing file functions, and executing menu choices. | 08-20-2009 |
20090208058 | IMAGING SYSTEM FOR VEHICLE - An imaging system for a vehicle includes an imaging device having a field of view exteriorly and forward of the vehicle in its direction of travel, and an image processor operable to process the captured images in accordance with an algorithm. The algorithm comprises a sign recognition routine and a character recognition routine. The image processor processes the image data captured by the imaging device to detect signs in the field of view of the imaging device and applies the sign recognition routine to determine a sign type of the detected sign. The image processor is operable to apply the character recognition routine to the image data to determine information on the detected sign. The image processor applies the character recognition routine to the captured images in response to an output of the sign recognition routine being indicative of the detected sign being a sign type of interest. | 08-20-2009 |
20090214077 | Method For Determining The Self-Motion Of A Vehicle - A method and a device for determining the self-motion of a vehicle in an environment are provided, in which at least part of the environment is recorded via snapshots by an imaging device mounted on the vehicle. At least two snapshots are analyzed for determining the optical flows of image points, reference points that seem to be stationary from the point of view of the imaging device being ascertained from the optical flows. The reference points are collected in an observed set, new reference points being dynamically added to the observed set with the aid of a first algorithm, and existing reference points being dynamically removed from the observed set with the aid of a second algorithm. | 08-27-2009 |
20090214078 | Method for Handling Static Text and Logos in Stabilized Images - To handle static text and logos in stabilized images without destabilizing the static text and logos, a method of handling overlay subpictures in stabilized images includes detecting an overlay subpicture in an input image, separating the overlay subpicture from the input image, stabilizing the input image to form a stabilized image, and merging the overlay subpicture with the stabilized image to obtain an output image. | 08-27-2009 |
20090214079 | SYSTEMS AND METHODS FOR RECOGNIZING A TARGET FROM A MOVING PLATFORM - Systems and methods for recognizing a location of a target are provided. One system includes a camera configured to generate first data representing an object resembling the target, a memory storing second data representing a template of the target, and a processor. The processor is configured to receive the first data and the second data, and determine that the object is the target if the object matches the template within a predetermined percentage error. A method includes receiving first data representing an object resembling the target, receiving second data representing a template of the target, and determining that the object is the target if the object matches the template within a predetermined percentage error. Also provided are computer-readable mediums including processor instructions for executing the above method. | 08-27-2009 |
20090214080 | METHODS AND APPARATUS FOR RUNWAY SEGMENTATION USING SENSOR ANALYSIS - Systems and methods for determining whether a region of interest (ROI) includes a runway are provided. One system includes a camera for capturing an image of the ROI, an analysis module for generating a binary large object (BLOB) of at least a portion of the ROI, and a synthetic vision system including a template of the runway. The system further includes a segmentation module for determining if the ROI includes the runway based on a comparison of the template and the BLOB. One method includes the steps of identifying a position for each corner on the BLOB and forming a polygon on the BLOB based on the position of each corner. The method further includes the step of determining that the BLOB represents the runway based on a comparison of the polygon and a template of the runway. Also provided are computer-readable mediums storing instructions for performing the above method. | 08-27-2009 |
20090214081 | APPARATUS AND METHOD FOR DETECTING OBJECT - A disparity profile indicating a relation between a perpendicular position in time-series images and a disparity in a target monitoring area, based on the arrangement of a camera, is calculated. Processing areas are set by setting the height of each processing area, using a length at the bottom of the image obtained by converting a reference value of an object's height according to the profile, while setting the position of the bottom of each processing area on the image. An object taller than a certain height with respect to the monitoring area is detected in each processing area; the detection results are unified according to the disparity of the object, so that the object is detected over the whole monitoring area. The position and speed of the object detected by the object primary detection unit are then estimated. | 08-27-2009 |
20090220122 | TRACKING SYSTEM FOR ORTHOGNATHIC SURGERY - Systems and methods are provided for measuring relative movement between two portions of the facial skeleton. A target ( | 09-03-2009 |
20090220123 | APPARATUS AND METHOD FOR COUNTING NUMBER OF OBJECTS - An image processing apparatus includes a first detecting unit configured to detect an object based on an upper body of a person and a second detecting unit configured to detect an object based on a face of a person. The image processing apparatus determines a level of congestion of objects contained in an input image, selects the first detecting unit when the level of congestion is low, and selects the second detecting unit when the level of congestion is high. The image processing apparatus counts the number of objects detected by the selected first or second detecting unit from the image. Thus, the image processing apparatus can detect an object and count the number of objects with high precision even when the level of congestion is high and the objects tend to overlap one another. | 09-03-2009 |
20090220124 | AUTOMATED SCORING SYSTEM FOR ATHLETICS - Disclosed are methods and systems for utilizing motion capture techniques, for example, video based motion capture techniques, for capturing and modeling the captured 3D movement of an athlete through a defined space. The model is then compared with an intended motion pattern in order to identify deviations and/or form breaks that, in turn, may be used in combination with a scoring algorithm to quantify the athlete's execution of the intended motion pattern to produce an objective score. It is anticipated that these methods and systems will be particularly useful for training and judging in those sports that have struggled with the vagaries introduced by the subjective nature of human scoring. | 09-03-2009 |
20090220125 | IMAGE RECONSTRUCTION BY POSITION AND MOTION TRACKING - A system, method, and apparatus provide the ability to reconstruct an image from an object. A hand-held image acquisition device is configured to acquire local image information from a physical object. A tracking system obtains displacement information for the hand-held acquisition device while the device is acquiring the local image information. An image reconstruction system computes the inverse of the displacement information and combines the inverse with the local image information to transform the local image information into a reconstructed local image information. A display device displays the reconstructed local image information. | 09-03-2009 |
20090232353 | METHOD AND SYSTEM FOR MARKERLESS MOTION CAPTURE USING MULTIPLE CAMERAS - A completely automated, end-to-end method and system for markerless motion capture performs segmentation of articulating objects in Laplacian Eigenspace and is applicable to poses of some complexity. 3D voxel representations of acquired images are mapped to a higher-dimensional space (…) | 09-17-2009 |
20090232354 | ADVERTISEMENT INSERTION SYSTEMS AND METHODS FOR DIGITAL CAMERAS BASED ON OBJECT RECOGNITION - Digital cameras include an image capture system, an object recognition system and an advertisement insertion system. The image capture system captures a visible image as a digital image. The object recognition system recognizes visible objects in the digital image. The advertisement insertion system inserts an advertising-related image into the digital image in response to a visible object in the digital image that was recognized. The user of the digital camera may be compensated for exposure to the advertising-related image. | 09-17-2009 |
20090232355 | REGISTRATION OF 3D POINT CLOUD DATA USING EIGENANALYSIS | 09-17-2009 |
20090232356 | Tracking System and Method for Tracking Objects - Disclosed are a tracking system and a method for locating a plurality of objects. The tracking system includes an identification module, a receiver, a processing module, and a transmitter. The identification module is configured to obtain unit identification information associated with one or more traceable units. The receiver is configured to receive information on a spatial location and unit identification information of the one or more traceable units. The processing module is electronically coupled to the identification module and the receiver and is configured to identify the one or more traceable units based on the obtained unit identification information and the received unit identification information. The processing module is further configured to determine locations of the one or more traceable units based on the spatial-location information of the one or more identified traceable units. The transmitter is electronically coupled to the processing module. | 09-17-2009 |
20090232357 | DETECTING BEHAVIORAL DEVIATIONS BY MEASURING EYE MOVEMENTS - According to one embodiment of the present invention, a computer implemented method, apparatus, and computer usable program product is provided for detecting behavioral deviations in members of a cohort group. A member of a cohort group is identified. Each member of the cohort group shares a common characteristic. Ocular metadata associated with the member of the cohort group is generated in real-time. The ocular metadata describes movements of an eye of the member of the cohort group. The ocular metadata is analyzed to identify patterns of ocular movements. In response to the patterns of ocular movements indicating behavioral deviations in the member of the cohort group, the member of the cohort group is identified as a person of interest. A person of interest may be subjected to an increased level of monitoring and/or other security measures. | 09-17-2009 |
20090232358 | Method and apparatus for processing an image - There is provided an efficient, fast image processing apparatus with low error probability for rapidly scrutinizing a digitized video image frame and processing said image frame to detect and characterize features of interest while ignoring other features of said image frame. There is further provided an efficient, fast image processing method with low error probability for rapidly scrutinizing a digitized video image frame and processing said image frame to detect and characterize features of interest while ignoring other features of said image frame. In a first embodiment of the invention an image processing apparatus comprises an imaging device coupled to a digital electronic image processor. Video data from the imaging device is linked to a location data source. Objects of interest in a scene are identified by comparing computed Maximally Stable Extremal Regions (MSERs) of captured images with MSERs of images of objects contained in an object template database. | 09-17-2009 |
20090238404 | METHODS FOR USING DEFORMABLE MODELS FOR TRACKING STRUCTURES IN VOLUMETRIC DATA - A computerized method for tracking of a 3D structure in a 3D image including a plurality of sequential image frames, one of which is a current image frame, includes representing the 3D structure being tracked with a parametric model with parameters for local shape deformations. A predicted state vector is created for the parametric model using a kinematic model. The parametric model is deformed using the predicted state vector, and a plurality of actual points for the 3D structure is determined using a current frame of the 3D image, and displacement values and measurement vectors are determined using differences between the plurality of actual points and a plurality of predicted points. The displacement values and the measurement vectors are filtered to generate an updated state vector and an updated covariance matrix, and an updated parametric model is generated for the current image frame using the updated state vector. | 09-24-2009 |
20090238405 | METHOD AND SYSTEM FOR ENABLING A USER TO PLAY A LARGE SCREEN GAME BY MEANS OF A MOBILE DEVICE - The present invention relates to a system and method for determining and tracking one or more objects, or one or more image sections within each image of a video stream to be displayed on a user's mobile device, comprising: (a) one or more video streams to be run on a streaming server; (b) an image capture software component for capturing images of said one or more video streams, according to a first group of one or more sets of rules; (c) a receiver for receiving one or more commands generated by a user and transferring said commands to an extra-layer software component; (d) an extra-layer software component for: (d.1.) determining one or more objects or image sections within the captured images; (d.2.) tracking said objects or image sections within said captured images; and (d.3.) processing said captured images, to generate corresponding images to be displayed on a mobile device screen, according to a second group of one or more sets of rules and according to the user's commands received by means of said receiver; (e) a compression software component for compressing the images, processed by means of said extra-layer software component, according to a third group of one or more sets of rules; (f) a data software component for providing groups of one or more sets of rules to said image capture software component, said extra-layer software component and said compression software component; and (g) a transmitter for transmitting the compressed images to a mobile device. 
The system and method further comprise a relayout software component for: (a) determining one or more objects or image sections within each image of the one or more video streams; (b) tracking said objects or image sections within said each image of said one or more video streams; and (c) processing said each image, to generate corresponding images to be displayed on a mobile device screen, according to a first group of one or more sets of rules and according to the user's commands received by means of the receiver. | 09-24-2009 |
20090238406 | Dynamic state estimation - According to an implementation, a set of particles is provided for use in estimating a location of a state of a dynamic system. A local-mode seeking mechanism is applied to move one or more particles in the set of particles, and the number of particles in the set of particles is modified. The location of the state of the dynamic system is estimated using particles in the set of particles. Another implementation provides dynamic state estimation using a particle filter for which the particle locations are modified using a local-mode seeking algorithm based on a mean-shift analysis and for which the number of particles is adjusted using a Kullback-Leibler-distance sampling process. The mean-shift analysis may reduce degeneracy in the particles, and the sampling process may reduce the computational complexity of the particle filter. The implementation may be useful with non-linear and non-Gaussian systems. | 09-24-2009 |
20090238407 | Object detecting apparatus and method for detecting an object - An apparatus for detecting an object, includes: a candidate point detection unit detecting a candidate point between the ground and an object from an image; a tracking unit calculating positions of the candidate point at a first time and a second time; a difference calculation unit calculating a difference between an estimated position at the second time and the candidate point position at the second time; and a state determination unit determining a new state of the candidate point at the second time based on the difference, and changing the search threshold value or a state. | 09-24-2009 |
20090238408 | IMAGE-SIGNAL PROCESSOR, IMAGE-SIGNAL PROCESSING METHOD, AND PROGRAM - An image-signal processing apparatus configured to track an object moving in an image includes a setting unit configured to set an eliminating area in an image constituting a moving image; a motion-vector detecting unit configured to detect an object in the image constituting a moving image and detect a motion vector corresponding to the object using an area excluding the eliminating area in the image; and an estimating unit configured to estimate a position to which the object moves on the basis of the detected motion vector. | 09-24-2009 |
20090238409 | Method for testing a motion vector - A method for testing a motion vector is described, which includes: providing at least one item of motion information assigned to the image sequence; storing a first image section of the first image in a first buffer memory and storing a second image section of the second image in a second buffer memory, whereby a position of the first image section in the first image and a position of the second image section in the second image have a mutual offset, which is dependent on the at least one item of motion information; determining a first image block in the first image section and a second image block in the second image section using the motion vector; and comparing the contents of the first and of the second image block. | 09-24-2009 |
20090238410 | FACE RECOGNITION WITH COMBINED PCA-BASED DATASETS - A face recognition method for working with two or more collections of facial images is provided. A representation framework is determined for a first collection of facial images including at least principal component analysis (PCA) features. A representation of said first collection is stored using the representation framework. A modified representation framework is determined based on statistical properties of original facial image samples of a second collection of facial images and the stored representation of the first collection. The first and second collections are combined without using original facial image samples. A representation of the combined image collection (super-collection) is stored using the modified representation framework. A representation of a current facial image, determined in terms of the modified representation framework, is compared with one or more representations of facial images of the combined collection. Based on the comparing, it is determined which, if any, of the facial images within the combined collection matches the current facial image. | 09-24-2009 |
20090245570 | METHOD AND SYSTEM FOR OBJECT DETECTION IN IMAGES UTILIZING ADAPTIVE SCANNING - An object detection method and system for detecting an object in an image utilizing an adaptive image scanning strategy is disclosed herein. An initial rough shift can be determined based on the size of a scanning window and the image can be scanned continuously for several detections of similar sizes using the rough shift. The scanning window can be classified with respect to a cascade of homogeneous classification functions covering one or more features of the object. The size and scanning direction of the scanning window can be adaptively changed depending on the probability of the object occurrence in accordance with scan acceleration. The object can be detected by an object detector and can be localized with higher precision and accuracy. | 10-01-2009 |
20090245571 | Digital video target moving object segmentation method and system - A digital video target moving object segmentation method and system is designed for processing a digital video stream for segmentation of every target moving object that appears in the video content. The proposed method and system are characterized by the operations of a multiple background imagery extraction process and a background imagery updating process for extracting characteristic background imagery whose content includes the motional background objects in addition to the static background scenes; and wherein the multiple background imagery extraction process is based on a background difference threshold comparison method, while the background imagery updating process is based on a background-matching and weight-counting method. This feature allows an object mask to be defined based on the characteristic background imagery, which can mask both the motional background objects as well as the static background scenes. | 10-01-2009 |
20090245572 | Control apparatus and method - The invention discloses a control apparatus for a user to control an electronic apparatus. The control apparatus of the invention includes a monitoring module, a sensing module, a first processing module, and a first transmitting module. The monitoring module is used to monitor the user's eyeball(s), and generates related eyeball-movement information. The sensing module is used to monitor a body portion of the user, and generates related body portion-movement information. The first processing module is connected to the monitoring module and the sensing module respectively, for calculating the control information in accordance with the eyeball-movement information and the body portion-movement information. Additionally, the first transmitting module is connected to the first processing module, for transmitting the control information to the electronic apparatus, which can act according to the control information. | 10-01-2009 |
20090245573 | OBJECT MATCHING FOR TRACKING, INDEXING, AND SEARCH - A camera system comprises an image capturing device, object detection module, object tracking module, and match classifier. The object detection module receives image data and detects objects appearing in one or more of the images. The object tracking module temporally associates instances of a first object detected in a first group of the images. The first object has a first signature representing features of the first object. The match classifier matches object instances by analyzing data derived from the first signature of the first object and a second signature of a second object detected in a second image. The second signature represents features of the second object derived from the second image. The match classifier determines whether the second signature matches the first signature. A training process automatically configures the match classifier using a set of possible object features. | 10-01-2009 |
20090245574 | OPTICAL POINTING DEVICE AND METHOD OF DETECTING CLICK EVENT IN OPTICAL POINTING DEVICE - Provided is a method of detecting a click event by sensing a motion of a finger corresponding to a click on a sensing area of an optical pointing device, the method including: obtaining an image of the finger from the sensing area; sensing a change in the image of the finger; analyzing a horizontal movement of the finger based on the change in the image of the finger; and generating a click signal when the horizontal movement of the finger is within a predetermined range. | 10-01-2009 |
20090245575 | METHOD, APPARATUS, AND PROGRAM STORAGE MEDIUM FOR DETECTING OBJECT - In an object detecting method according to an aspect of the invention, a specific kind of object such as a human head can be detected with high accuracy even if the detecting target object appears in various shapes. The object detecting method includes a primary evaluated value computing step of applying plural filters to an image of an object detecting target to compute plural feature quantities and of obtaining a primary evaluated value corresponding to each feature quantity; a secondary evaluated value computing step of obtaining a secondary evaluated value by integrating the plural primary evaluated values obtained in the primary evaluated value computing step; and a region extracting step of comparing the secondary evaluated value obtained in the secondary evaluated value computing step and a threshold to extract a region where an existing probability of the specific kind of object is higher than the threshold. | 10-01-2009 |
20090245576 | METHOD, APPARATUS, AND PROGRAM STORAGE MEDIUM FOR DETECTING OBJECT - The invention relates to an object detecting method for detecting a specific kind of object such as a human head and a human face from an image expressed by two-dimensionally arrayed pixels, the object detecting method including an image group producing step of producing an image group including an original image of the object detecting target and at least one thinned-out image by thinning out pixels constituting the original image at a predetermined rate or by thinning out the pixels at the predetermined rate in a stepwise manner; and a stepwise detection step of detecting the specific kind of object from the original image by sequentially repeating plural extraction processes from an extraction process of applying a filter acting on a relatively small region to a relatively small image toward an extraction process of applying a filter acting on a relatively wide region to a relatively large image. | 10-01-2009 |
20090245577 | Tracking Processing Apparatus, Tracking Processing Method, and Computer Program - A tracking processing apparatus includes: first state-variable-sample-candidate generating means for generating state variable sample candidates at first present time; plural detecting means each for performing detection concerning a predetermined detection target related to a tracking target; sub-information generating means for generating sub-state variable probability distribution information at present time; second state-variable-sample-candidate generating means for generating state variable sample candidates at second present time; a state-variable-sample acquiring means for selecting state variable samples out of the state variable sample candidates at the first present time and the state variable sample candidates at the second present time at random according to a predetermined selection ratio set in advance; and estimation-result generating means for generating main state variable probability distribution information at the present time as an estimation result. | 10-01-2009 |
20090245578 | METHOD OF DETECTING PREDETERMINED OBJECT FROM IMAGE AND APPARATUS THEREFOR - In an object detecting method, an imaging condition of an image pickup unit is determined, a detecting method is selected based on the determined imaging condition, and at least one predetermined object is detected from an image picked up through the image pickup unit according to the selected detecting method. | 10-01-2009 |
20090245579 | PROBABILITY DISTRIBUTION CONSTRUCTING METHOD, PROBABILITY DISTRIBUTION CONSTRUCTING APPARATUS, STORAGE MEDIUM OF PROBABILITY DISTRIBUTION CONSTRUCTING PROGRAM, SUBJECT DETECTING METHOD, SUBJECT DETECTING APPARATUS, AND STORAGE MEDIUM OF SUBJECT DETECTING PROGRAM - A probability distribution constructing method extracts, from plural images obtained by repeatedly photographing a field using a fixedly disposed camera, a subject shape similar to a subject of a specific type that repeatedly appears in various sizes, in accordance with the size of the similar subject shape and positional information of the camera with respect to the view angle. Subsequently, the method determines the similar subject shape, calculates an appearance probability distribution of the size of the subject, and detects the subject using the appearance probability distribution. | 10-01-2009 |
20090245580 | MODIFYING PARAMETERS OF AN OBJECT DETECTOR BASED ON DETECTION INFORMATION - Embodiments of an object detection unit configured to modify parameters for one or more object detectors based on detection information are provided. | 10-01-2009 |
20090252373 | Method and System for detecting polygon Boundaries of structures in images as particle tracks through fields of corners and pixel gradients - A stochastic method and system for detecting polygon structures in images, by detecting a set of best matching corners of predetermined acuteness α of a polygon model from a set of similarity scores based on GDM features of corners, and tracking polygon boundaries as particle tracks using a sequential Monte Carlo approach. The tracking involves initializing polygon boundary tracking by selecting pairs of corners from the set of best matching corners to define a first side of a corresponding polygon boundary; tracking all intermediate sides of the polygon boundaries using a particle filter, and terminating polygon boundary tracking by determining the last side of the tracked polygon boundaries to close the polygon boundaries. The particle tracks are then blended to determine polygon matches, which may be made available, such as to a user, for ranking and inspection. | 10-08-2009 |
20090252374 | IMAGE SIGNAL PROCESSING APPARATUS, IMAGE SIGNAL PROCESSING METHOD, AND PROGRAM - An image signal processing apparatus includes a detecting unit configured to detect a motion vector of a tracking point provided in an object in a moving image, a computing unit configured to compute a reliability parameter representing the reliability of the detected motion vector, a determining unit configured to determine whether the detected motion vector is adopted by comparing the computed reliability parameter with a boundary, an accumulating unit configured to accumulate the reliability parameter, and a changing unit configured to change the boundary on the basis of the accumulated reliability parameters. | 10-08-2009 |
20090252375 | Position Detection System, Position Detection Method, Program, Object Determination System and Object Determination Method - There is provided a position detection system including an imaging unit to capture an image of a projection plane of an electromagnetic wave, an electromagnetic wave emission unit to emit the electromagnetic wave to the projection plane, a control unit to control emission of the electromagnetic wave by the electromagnetic wave emission unit, and a position detection unit including a projected image detection section to detect a projected image of an object existing between the electromagnetic wave emission unit and the projection plane based on a difference between an image of the projection plane captured during emission of the electromagnetic wave by the electromagnetic wave emission unit and an image of the projection plane captured during no emission of the electromagnetic wave, and a position detection section to detect a position of the object based on a position of the projected image of the object. | 10-08-2009 |
20090257621 | Method and System for Dynamic Feature Detection - Disclosed are methods and systems for dynamic feature detection of physical features of objects in the field of view of a sensor. Dynamic feature detection substantially reduces the effects of accidental alignment of physical features with the pixel grid of a digital image by using the relative motion of objects or material in and/or through the field of view to capture and process a plurality of images that correspond to a plurality of alignments. Estimates of the position, weight, and other attributes of a feature are based on an analysis of the appearance of the feature as it moves in the field of view and appears at a plurality of pixel grid alignments. The resulting reliability and accuracy is superior to prior art static feature detection systems and methods. | 10-15-2009 |
20090257622 | METHOD FOR REMOTE SPECTRAL ANALYSIS OF GAS PLUMES - A method for reducing the effects of background radiation introduced into gaseous plume spectral data obtained by an aerial imaging sensor includes capturing spectral data of a gaseous plume with its obscured background along a first line of observation and capturing a second image of the previously obscured background along a different line of observation. The parallax shift of the plume enables the visual access needed to capture the radiometric data emanating exclusively from the background. The images are then put into correspondence on a pixel-by-pixel basis to produce a mapping. An image-processing algorithm is applied to the mapped images to reduce the effects of background radiation and derive information about the content of the plume. | 10-15-2009 |
20090262976 | POSITION-DETERMINING SYSTEM AND METHOD - A position-determining system for determining position and orientation of an object on a work surface parallel to an X-Y plane of a Cartesian coordinate system includes an image-capturing device, a processor and a recognition assistant. The image-capturing device is directed towards the work surface for capturing images of the object and sending the images to the processor. The processor processes the images captured by the image-capturing device. The recognition assistant is attached on the object. The recognition assistant includes a first recognition assistant part and a second recognition assistant part configured to be readily recognizable in an image examined by the processor. Then the processor determines position and orientation of the object via a template matching algorithm. | 10-22-2009 |
20090262977 | VISUAL TRACKING SYSTEM AND METHOD THEREOF - The present invention provides a visual tracking system and its method comprising: a sensor unit, for capturing monitored scenes continuously; an image processor unit, for detecting when a target enters into a monitored scene, and extracting its characteristics to establish at least one model, and calculating the matching scores of the models; a hybrid tracking algorithm unit, for combining the matching scores to produce optimal matching results; a visual probability data association filter, for receiving the optimal matching results to eliminate the interference and output a tracking signal; an active moving platform, for driving the platform according to the tracking signal to situate the target at the center of the image. Therefore, the visual tracking system of the present invention can help a security camera system to record the target in details and maximize the visual information of the intruding target. | 10-22-2009 |
20090262978 | Automatic Detection Of Fires On Earth's Surface And Of Atmospheric Phenomena Such As Clouds, Veils, Fog Or The Like, Using A Satellite System - A method for automatically detecting fires on Earth's surface using a satellite system is provided. The method includes acquiring multi-spectral images of the Earth at different times, using a multi-spectral satellite sensor, each multi-spectral image being a collection of single-spectral images each associated with a respective wavelength (λ), and each single-spectral image being made up of pixels each indicative of a spectral radiance (R…) | 10-22-2009 |
20090262979 | Determining a Material Flow Characteristic in a Structure - A volume of a patient can be mapped with a system operable to identify a plurality of locations and save a plurality of locations of a mapping instrument. The mapping instrument can include one or more electrodes that can sense a voltage that can be correlated to a three dimensional location of the electrode at the time of the sensing or measurement. Therefore, a map of a volume can be determined based upon the sensing of the plurality of points without the use of other imaging devices. An implantable medical device can then be navigated relative to the mapping data. | 10-22-2009 |
20090262980 | Method and Apparatus for Determining Tracking a Virtual Point Defined Relative to a Tracked Member - A volume of a patient can be mapped with a system operable to identify a plurality of locations and save a plurality of locations of a mapping instrument. The mapping instrument can include one or more electrodes that can sense a voltage that can be correlated to a three dimensional location of the electrode at the time of the sensing or measurement. Therefore, a map of a volume can be determined based upon the sensing of the plurality of points without the use of other imaging devices. An implantable medical device can then be navigated relative to the mapping data. | 10-22-2009 |
20090262981 | IMAGE PROCESSING APPARATUS AND METHOD THEREOF - An image processing apparatus estimates an estimated object region including an object on an input image on the basis of stored object data, obtains a similarity distribution of the estimated object region and peripheral regions thereof by at least one classifier, and obtains an object region coordinate and a template image on the basis of the similarity distribution. | 10-22-2009 |
20090262982 | Determining a Location of a Member - A volume of a patient can be mapped with a system operable to identify a plurality of locations and save a plurality of locations of a mapping instrument. The mapping instrument can include one or more electrodes that can sense a voltage that can be correlated to a three dimensional location of the electrode at the time of the sensing or measurement. Therefore, a map of a volume can be determined based upon the sensing of the plurality of points without the use of other imaging devices. An implantable medical device can then be navigated relative to the mapping data. | 10-22-2009 |
20090262983 | Image processing based on object information - A CPU divides an image into plural regions and, for each of the regions, generates a histogram and calculates an average brightness Y_ave. The CPU determines a focus location on the image by using focus location information, sets a region at the determined location as an emphasis region, and sets the average brightness Y_ave of the emphasis region as a brightness criterion Y_std. The CPU uses the brightness criterion Y_std to determine non-usable regions. By using the regions not excluded as non-usable regions, the CPU calculates an image quality adjustment average brightness Y′_ave, i.e. the average brightness of the entire image, with a weighting W in accordance with the locations of the regions reflected thereto, and executes a brightness correction by using the calculated image quality adjustment average brightness Y′_ave. | 10-22-2009 |
20090262984 | Multiple Camera Control System - A multiple camera tracking system for interfacing with an application program running on a computer is provided. The tracking system includes two or more video cameras arranged to provide different viewpoints of a region of interest, and are operable to produce a series of video images. A processor is operable to receive the series of video images and detect objects appearing in the region of interest. The processor executes a process to generate a background data set from the video images, generate an image data set for each received video image, compare each image data set to the background data set to produce a difference map for each image data set, detect a relative position of an object of interest within each difference map, and produce an absolute position of the object of interest from the relative positions of the object of interest and map the absolute position to a position indicator associated with the application program. | 10-22-2009 |
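The difference-map and relative-position steps in this abstract can be sketched in a few lines. This is a toy single-camera version under assumed grayscale images as nested lists; the threshold and the centroid-based position are illustrative choices, not the patent's method.

```python
def difference_map(image, background, threshold=30):
    """Mark pixels whose change relative to the background data set
    meets or exceeds the threshold."""
    return [[1 if abs(p - b) >= threshold else 0
             for p, b in zip(img_row, bg_row)]
            for img_row, bg_row in zip(image, background)]

def relative_position(diff):
    """Centroid of the changed pixels, or None if nothing changed."""
    pts = [(x, y) for y, row in enumerate(diff)
           for x, v in enumerate(row) if v]
    if not pts:
        return None
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))

bg  = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
img = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
pos = relative_position(difference_map(img, bg))  # object centered at (1, 1)
```

In the multi-camera system, one such relative position per viewpoint would be combined into an absolute position and mapped to the application's position indicator.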
20090268941 | VIDEO MONITOR FOR SHOPPING CART CHECKOUT - A system ensures payment for the purchase of merchandise carried through a checkout aisle on the lower tray of a shopping cart. For that purpose, the system includes a controller with an embedded program for identifying a virtual structure substantially equivalent to the physical structure of the tray. Further, the system includes a sensor that determines when a cart is positioned at the checkout aisle. The system also includes a camera for creating an image of the physical structure of the tray and transmitting the image to the controller. The controller includes a means for activating the embedded program to compare the image with the virtual structure. As a result of the comparison, the controller determines whether merchandise is on the physical structure of the tray. During the comparison, the controller removes the virtual structure from the image. | 10-29-2009 |
20090268942 | Methods and apparatus for detection of motion picture piracy for piracy prevention - A copier's camera or camcorder in a motion-picture audience region is detected by illuminating the audience region with invisible infrared light, and locating any copier's camera or camcorder within the audience region by imaging the audience region with one or more infrared-light-sensitive cameras. The image captured by the infrared-sensitive camera(s) during a performance may be correlated with information about the audience region, such as row and seat numbers. Copiers may be identified by their presence at seats where copying activity is detected, and the infrared images may be preserved as evidence of the piracy. | 10-29-2009 |
20090268943 | COMPOSITION DETERMINATION DEVICE, COMPOSITION DETERMINATION METHOD, AND PROGRAM - A composition determination device includes: a subject detection unit configured to detect a subject in an image based on acquired image data; an actual subject size detection unit configured to detect the actual size which can be viewed as being equivalent to actual measurements, for each subject detected by the subject detection unit; a subject distinguishing unit configured to distinguish relevant subjects from subjects detected by the subject detection unit, based on determination regarding whether or not the actual size detected by the actual subject size detection unit is an appropriate value corresponding to a relevant subject; and a composition determination unit configured to determine a composition with only relevant subjects, distinguished by the subject distinguishing unit, as objects. | 10-29-2009 |
20090268944 | LINE OF SIGHT DETECTING DEVICE AND METHOD - A line of sight detecting method includes estimating a face direction of an object person based on a shot face image of the object person, detecting a part of an eye outline in the face image of the object person, detecting a pupil in the face image of the object person, and estimating the direction of a line of sight of the object person based on the correlation of the pupil position in the eye outline and the face direction with respect to the direction of the line of sight, and the pupil position and the face direction of the object person. | 10-29-2009 |
20090268945 | ARCHITECTURE FOR CONTROLLING A COMPUTER USING HAND GESTURES - Architecture for implementing a perceptual user interface. The architecture comprises alternative modalities for controlling computer application programs and manipulating on-screen objects through hand gestures or a combination of hand gestures and verbal commands. The perceptual user interface system includes a tracking component that detects object characteristics of at least one of a plurality of objects within a scene, and tracks the respective object. Detection of object characteristics is based at least in part upon image comparison of a plurality of images relative to a coarse mapping of the images. A seeding component iteratively seeds the tracking component with object hypotheses based upon the presence of the object characteristics and the image comparison. A filtering component selectively removes the tracked object from the object hypotheses and/or at least one object hypothesis from the set of object hypotheses based upon predetermined removal criteria. | 10-29-2009 |
20090274339 | Behavior recognition system - A system for recognizing various human and creature motion gaits and behaviors is presented. These behaviors are defined as combinations of “gestures” identified on various parts of a body in motion. For example, the leg gestures generated when a person runs are different than when a person walks. The system described here can identify such differences and categorize these behaviors. Gestures, as previously defined, are motions generated by humans, animals, or machines. Multiple gestures on a body (or bodies) are recognized simultaneously and used in determining behaviors. If multiple bodies are tracked by the system, then overall formations and behaviors (such as military goals) can be determined. | 11-05-2009 |
20090279736 | MAGNETIC RESONANCE EYE TRACKING SYSTEMS AND METHODS - Embodiments of magnetic resonance eye tracking systems and methods are disclosed. One embodiment, among others, comprises a method that receives magnetic resonance based data and determines direction of a subject's gaze based on the data. | 11-12-2009 |
20090279737 | PROCESSING METHOD FOR CODED APERTURE SENSOR - A method of processing for a coded aperture imaging apparatus which is useful for target identification and tracking. The method uses a statistical scene model and, preferably using several frames of data, determines a likelihood of the position and/or velocity of one or more targets assumed to be in the scene. The method preferably applies a recursive Bayesian filter or Bayesian batch filter to determine a probability distribution of likely state parameters. The method acts upon the acquired data directly without requiring any processing to form an image. | 11-12-2009 |
20090279738 | Apparatus for image recognition - An image recognition apparatus includes an image recognition unit, an evaluation value calculation unit, and a motion extraction unit. The image recognition unit uses motion vectors that are generated in the course of coding image data into MPEG format data or in the course of decoding the MPEG coded data by the evaluation value calculation unit and the motion extraction unit as well as two dimensional DCT coefficients and encode information such as picture types and block types for generating the evaluation values that represent feature of the image. The apparatus further includes an update unit for recognizing the object in the image based on the determination rules for a unit of macro block. The apparatus can thus accurately detect the motion of the object based on the evaluation values derived from DCT coefficients even when generation of the motion vectors is difficult. | 11-12-2009 |
20090285449 | SYSTEM FOR OPTICAL RECOGNITION OF THE POSITION AND MOVEMENT OF AN OBJECT ON A POSITIONING DEVICE - The optical recognition system determines the position and/or movement of an object ( | 11-19-2009 |
20090285450 | IMAGE-BASED SYSTEM AND METHODS FOR VEHICLE GUIDANCE AND NAVIGATION - A method of estimating position and orientation of a vehicle using image data is provided. The method includes capturing an image of a region external to the vehicle using a camera mounted to the vehicle, and identifying in the image a set of feature points of the region. The method further includes subsequently capturing another image of the region from a different orientation of the camera, and identifying in the image the same set of feature points. A pose estimation of the vehicle is generated based upon the identified set of feature points and corresponding to the region. Each of the steps is repeated with respect to a different region at least once so as to generate at least one succeeding pose estimation of the vehicle. The pose estimations are then propagated over a time interval by chaining the pose estimation and each succeeding pose estimation one with another according to the sequence in which each was generated. | 11-19-2009 |
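The pose-chaining step that this abstract describes amounts to composing successive relative transformations. A minimal 2D sketch, assuming each pose is a rigid rotation-plus-translation expressed as a 3x3 homogeneous matrix (the patent's poses would be full 3D, and the specific functions here are illustrative):

```python
import math

def pose(theta, tx, ty):
    """A 2D rigid pose (rotation theta, translation tx, ty) as a
    3x3 homogeneous matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def compose(a, b):
    """Matrix product a @ b: apply pose b first, then pose a."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def chain(poses):
    """Chain relative pose estimations in the sequence they were generated."""
    result = pose(0.0, 0.0, 0.0)  # identity: start at the origin
    for p in poses:
        result = compose(result, p)
    return result

# Two successive pure translations chain to their sum.
total = chain([pose(0.0, 1.0, 0.0), pose(0.0, 2.0, 0.0)])
```

Chaining in generation order is what lets per-region relative estimates accumulate into a trajectory over the time interval.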
20090290755 | System Having a Layered Architecture For Constructing a Dynamic Social Network From Image Data - A system having a layered architecture for constructing a dynamic social network from image data of actors and events. It may have a low layer for capturing raw data and identifying actors and events. The system may have a middle layer that receives actor and event information from the low layer and puts it into a two-dimensional matrix. A high layer of the system may add weighted relationship information to the matrix to form the basis for constructing a social network. The system may have a sliding window, thus making the social network dynamic. | 11-26-2009 |
20090290756 | METHODS AND APPARATUS FOR DETECTING A COMPOSITION OF AN AUDIENCE OF AN INFORMATION PRESENTING DEVICE - Methods and apparatus for detecting a composition of an audience of an information presenting device are disclosed. A disclosed example method includes: capturing at least one image of the audience; determining a number of people within the at least one image; prompting the audience to identify its members if a change in the number of people is detected based on the number of people determined to be within the at least one image; and if a number of members identified by the audience is different from the determined number of people after a predetermined number of prompts of the audience, adjusting a value to avoid excessive prompting of the audience. | 11-26-2009 |
20090296984 | System and Method for Three-Dimensional Object Reconstruction from Two-Dimensional Images - A system and method for three-dimensional (3D) acquisition and modeling of a scene using two-dimensional (2D) images are provided. The system and method provides for acquiring first and second images of a scene, applying a smoothing function to the first image to make feature points of objects, e.g., corners and edges of the objects, in the scene more visible, applying at least two feature detection functions to the first image to detect feature points of objects in the first image, combining outputs of the at least two feature detection functions to select object feature points to be tracked, applying a smoothing function to the second image, applying a tracking function on the second image to track the selected object feature points, and reconstructing a three-dimensional model of the scene from an output of the tracking function. | 12-03-2009 |
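The step of combining the outputs of two feature detection functions can be sketched as a mutual-agreement filter: a feature point survives only if both detectors report it. This is one plausible reading of "combining outputs", not the patent's actual rule; the point lists and tolerance below are hypothetical.

```python
def combine_detections(points_a, points_b, tol=1.5):
    """Keep a feature point from detector A only if detector B
    found a point within `tol` pixels of it."""
    def near(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tol ** 2
    return [p for p in points_a if any(near(p, q) for q in points_b)]

corners     = [(10, 10), (40, 25), (70, 70)]   # e.g. a corner detector
edge_points = [(10, 11), (71, 69)]             # e.g. an edge-based detector
tracked = combine_detections(corners, edge_points)  # (40, 25) is dropped
```

Points agreed on by both detectors are then the ones handed to the tracking function on the second image.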
20090296985 | Efficient Multi-Hypothesis Multi-Human 3D Tracking in Crowded Scenes - System and methods are disclosed to perform multi-human 3D tracking with a plurality of cameras. At each view, a module receives each camera output and provides 2D human detection candidates. A plurality of 2D tracking modules are connected to the CNNs, each 2D tracking module managing 2D tracking independently. A 3D tracking module is connected to the 2D tracking modules to receive promising 2D tracking hypotheses. The 3D tracking module selects trajectories from the 2D tracking modules to generate 3D tracking hypotheses. | 12-03-2009 |
20090296986 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD AND PROGRAM - An image processing device includes: a tracking unit to track a predetermined point on an image as a tracking point, in accordance with an operation of a user; a display control unit to display, on the image, tracking point candidates which are greater in number than the objects moving on the image and fewer in number than the pixels of the image; and a setting unit to set the tracking point candidates as the tracking points for the next frame of the tracking unit, in response to an operation by the user. | 12-03-2009 |
20090296987 | ROAD LANE BOUNDARY DETECTION SYSTEM AND ROAD LANE BOUNDARY DETECTING METHOD - A road lane boundary detection system includes a detection region setting unit that sets a certain region in a road image, as a target detection region to be searched for detection of a road lane boundary, and a detecting unit that processes image data in the target detection region set by the detection region setting unit, so as to detect the road lane boundary. The detection region setting unit sets a first detection region as the target detection region if no road lane boundary is detected, and sets a second detection region as the target detection region if the road lane boundary is detected, such that the first and second detection regions are different in size from each other. | 12-03-2009 |
20090296988 | CHARACTER INPUT APPARATUS AND CHARACTER INPUT METHOD - A character input apparatus includes a liquid crystal monitor | 12-03-2009 |
20090296989 | Method for Automatic Detection and Tracking of Multiple Objects - A method for automatically detecting and tracking objects in a scene. The method acquires video frames from a video camera; extracts discriminative features from the video frames; detects changes in the extracted features using background subtraction to produce a change map; uses the change map to form hypotheses estimating an approximate number of people, along with the uncertainty, at user-specified locations; and, using the estimate, tracks people and updates the hypotheses to refine the estimated people count and locations. | 12-03-2009 |
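A crude version of the count-with-uncertainty estimate can be sketched from the change map alone. This is a deliberately simplified stand-in (area divided by a nominal per-person area, with a fixed one-person uncertainty); the patent's hypothesis update would be considerably richer, and all numbers here are hypothetical.

```python
def estimate_people(change_map, person_area=4.0):
    """Rough people count from the changed-pixel area of a binary
    change map, returned with a crude uncertainty of one person."""
    foreground = sum(sum(row) for row in change_map)
    estimate = foreground / person_area
    return estimate, 1.0

change = [[1, 1, 0, 0],
          [1, 1, 1, 1],
          [0, 0, 1, 1]]
count, sigma = estimate_people(change)  # 8 foreground pixels, ~2 people
```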
20090304229 | OBJECT TRACKING USING COLOR HISTOGRAM AND OBJECT SIZE - A solution for monitoring an area uses color histograms and size information (e.g., heights and widths) for blob(s) identified in an image of the area and model(s) for existing object track(s) for the area. Correspondence(s) between the blob(s) and the object track(s) are determined using the color histograms and size information. Information on an object track is updated based on the type of correspondence(s). The solution can process merges, splits and occlusions of foreground objects as well as temporal and spatial fragmentations. | 12-10-2009 |
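The blob-to-track correspondence test that this abstract describes can be sketched with histogram intersection plus a size gate. The thresholds, field names, and the intersection measure are illustrative assumptions, not the patent's specific model.

```python
def hist_intersection(h1, h2):
    """Similarity in [0, 1] between two normalized color histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def matches_track(blob, track, size_tol=0.3, hist_thresh=0.7):
    """A blob corresponds to an existing object track only if both
    its size (width/height) and its color histogram agree."""
    size_ok = (abs(blob["w"] - track["w"]) <= size_tol * track["w"] and
               abs(blob["h"] - track["h"]) <= size_tol * track["h"])
    return size_ok and hist_intersection(blob["hist"], track["hist"]) >= hist_thresh

track = {"w": 20, "h": 40, "hist": [0.5, 0.3, 0.2]}
blob  = {"w": 22, "h": 38, "hist": [0.45, 0.35, 0.2]}
ok = matches_track(blob, track)
```

Requiring both cues to agree is what lets such a scheme distinguish merges, splits, and fragmentations: a merge, for instance, shows up as one blob whose size matches no single track but whose histogram overlaps several.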
20090304230 | Detecting and tracking targets in images based on estimated target geometry - A system for detecting and tracking targets captured in images, such as people and object targets that are captured in video images from a surveillance network. Targets can be detected by an efficient, geometry-driven approach that determines likely target configuration of the foreground imagery based on estimated geometric information of possible targets. The detected targets can be tracked using a centralized tracking system. | 12-10-2009 |
20090304231 | Method of automatically detecting and tracking successive frames in a region of interest by an electronic imaging device - A method of automatically detecting and tracking successive frames in a region of interest by an electronic imaging device includes: decomposing a frame into intensity, color and direction features according to human perceptions; filtering an input image by a Gaussian pyramid to obtain levels of pyramid representations by down sampling; calculating the features of the pyramid representations; using a linear center-surround operator similar to a biological perception to expedite the calculation of a mean value of the peripheral region; using the difference of each feature between a small central region and the peripheral region as a measured value; overlaying the pyramid feature maps to obtain a conspicuity map and unify the conspicuity maps of the three features; obtaining a saliency map of the frames by linear combination; and using the saliency map for a segmentation to mark a region of interest of a frame in the large region of the conspicuity maps. | 12-10-2009 |
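The center-surround measurement at the heart of this abstract can be sketched on a single feature channel: each pixel's salience is its difference from the mean of its surround. This is a minimal 3x3-neighborhood version on one pyramid level, assuming grayscale nested lists; the real method operates across pyramid scales and three feature channels.

```python
def center_surround(img):
    """Absolute difference between each pixel (center) and the mean
    of its 3x3 neighborhood excluding itself (surround)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nbrs = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))
                    if (ny, nx) != (y, x)]
            out[y][x] = abs(img[y][x] - sum(nbrs) / len(nbrs))
    return out

flat = [[5] * 4 for _ in range(4)]   # uniform region: nothing salient
spot = [[0] * 4 for _ in range(4)]
spot[1][1] = 9                       # one bright pixel stands out
sal_flat = center_surround(flat)
sal_spot = center_surround(spot)
```

Summing such maps over features and scales, with a linear combination, would yield the saliency map the abstract uses for segmentation.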
20090304232 | VISUAL AXIS DIRECTION DETECTION DEVICE AND VISUAL LINE DIRECTION DETECTION METHOD - Provided is a visual axis direction detection device capable of obtaining a highly accurate visual axis direction detection result without performing a particular calibration for each of examinees. The device ( | 12-10-2009 |
20090304233 | RECOGNITION APPARATUS AND RECOGNITION METHOD - A barcode recognition apparatus includes an image interface, an image analysis unit, an image conversion unit, and a bar recognition unit. The image interface acquires an image including a barcode captured by a camera. The image analysis unit analyzes a characteristic of an input image acquired from the camera, and decides an image conversion method for the conversion from the input image into an image for recognition processing on the basis of the analysis result. The image conversion unit converts the input image into an image for recognition processing by the image conversion method decided by the image analysis unit. The bar recognition unit performs barcode recognition processing for the image for recognition processing obtained by the image conversion unit. | 12-10-2009 |
20090304234 | TRACKING POINT DETECTING DEVICE AND METHOD, PROGRAM, AND RECORDING MEDIUM - A tracking point detecting device includes: a frame decimation unit for decimating the frame interval of a moving image configured of multiple frame images continuing temporally; a first detecting unit for detecting, of two consecutive frames of the decimated moving image, a temporally-subsequent frame pixel corresponding to a predetermined pixel of a temporally-previous frame; a forward-direction detecting unit for detecting the pixel corresponding to a predetermined pixel of a temporally-previous frame of the decimated moving image, at each of the decimated frames in the same direction as time; an opposite-direction detecting unit for detecting the pixel corresponding to the detected pixel of a temporally-subsequent frame of the decimated moving image, at each of the decimated frames in the opposite direction of time; and a second detecting unit for detecting a predetermined pixel of each of the decimated frames by employing the pixel positions detected in the forward and opposite directions. | 12-10-2009 |
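The decimate-then-fill idea can be sketched loosely as follows: track only on every k-th frame, then recover the in-between positions from both time directions and combine them. This is a strongly simplified linear-interpolation stand-in for the patent's pixel-correspondence detection; all functions and values are illustrative.

```python
def decimate(frames, step):
    """Keep every `step`-th frame of the sequence."""
    return frames[::step]

def fill_between(p_prev, p_next, step):
    """Positions of a tracking point on the decimated-away frames,
    combining a forward estimate (from p_prev) and a backward
    estimate (from p_next) by averaging."""
    pts = []
    for i in range(1, step):
        fwd = p_prev + (p_next - p_prev) * i / step           # forward pass
        bwd = p_next - (p_next - p_prev) * (step - i) / step  # backward pass
        pts.append((fwd + bwd) / 2.0)
    return pts

frames = list(range(10))           # stand-in for 10 frame images
kept = decimate(frames, 3)         # frames 0, 3, 6, 9 remain
mid = fill_between(0.0, 6.0, 3)    # point moves from 0 to 6 over 3 frames
```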
20090310820 | IMPROVEMENTS RELATING TO TARGET TRACKING - A method and system are disclosed for tracking a target imaged in video footage. The target may, for example, be a person moving through a crowd. The method comprises the steps of: identifying a target in a first frame; generating a population of sub-templates by sampling from a template area defined around the target position; and searching for instances of the sub-templates in a second frame so as to locate the target in the second frame. Sub-templates whose instances are not consistent with the new target position are removed from the population and replaced by newly sampled sub-templates. The method can then be repeated so as to find the target in further frames. It can be implemented in a system comprising video imaging means, such as a CCTV camera, and processing means operable to carry out the method. | 12-17-2009 |
20090310821 | DETECTION OF AN OBJECT IN AN IMAGE - The invention provides a method, system, and program product for detecting an object in a digital image. In one embodiment, the invention includes: deriving an initial object indication mask based on pixel-wise differences between a first digital image and a second digital image, at least one of which includes the object; performing an edge finding operation on both the first and second digital images, wherein the edge finding operation includes marking added edges; generating a plurality of straight linear runs of pixels across an image containing the object, wherein each of the plurality of straight linear runs starts and ends on an added edge and is contained within the initial object indication mask; and forming a final object indication mask by retaining only pixels that are part of at least one of the plurality of straight linear runs. | 12-17-2009 |
20090310822 | Feedback object detection method and system - A feedback object detection method and system. The system includes an object segmentation element, an object tracking element and an object prediction element. The object segmentation element extracts the object from an image according to prediction information of the object provided by the object prediction element. Then, the object tracking element tracks the extracted object to generate motion information of the object, such as moving speed and moving direction. The object prediction element generates the prediction information, such as predicted position and predicted size of the object, according to the motion information. The feedback of the prediction information to the object segmentation element facilitates accurately extracting foreground pixels from the image. | 12-17-2009 |
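The feedback loop this entry describes reduces, in its simplest constant-velocity form, to: derive motion from two observations, then feed the extrapolated position back to segmentation. A minimal sketch under that assumption (the functions and coordinates are hypothetical):

```python
def update_motion(prev_pos, cur_pos):
    """Motion information: per-frame velocity from two observed positions
    of the tracked object."""
    return (cur_pos[0] - prev_pos[0], cur_pos[1] - prev_pos[1])

def predict(cur_pos, velocity):
    """Prediction information fed back to the segmentation element:
    the expected position in the next frame."""
    return (cur_pos[0] + velocity[0], cur_pos[1] + velocity[1])

v = update_motion((10, 10), (13, 12))   # object moving +3, +2 per frame
nxt = predict((13, 12), v)
```

Segmentation can then restrict its search to a window around `nxt`, which is what makes the foreground extraction both faster and more accurate.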
20090310823 | Object tracking method using spatial-color statistical model - An object tracking method utilizing spatial-color statistical models is used for tracking an object in different frames. A first object is extracted from a first frame and a second object is extracted from a second frame. The first object is divided into several first blocks and the second object is divided into several second blocks according to pixel parameters of each pixel within the first object and the second object. The comparison between the first blocks and the second blocks is made to find the corresponding relation therebetween. The second object is identified as the first object according to the corresponding relation. | 12-17-2009 |
20090316951 | MOBILE IMAGING DEVICE AS NAVIGATOR - Embodiments of the invention are directed to obtaining information based on directional orientation of a mobile imaging device, such as a camera phone. Visual information is gathered by the camera and used to determine a directional orientation of the camera, to search for content based on the direction, to manipulate 3D virtual images of a surrounding area, and to otherwise use the directional information. Direction and motion can be determined by analyzing a sequence of images. Distance from a current location, inputted search parameters, and other criteria can be used to expand or filter content that is tagged with such criteria. Search results with distance indicators can be overlaid on a map or a camera feed. Various content can be displayed for a current direction, or desired content, such as a business location, can be displayed only when the camera is oriented toward the desired content. | 12-24-2009 |
20090316952 | GESTURE RECOGNITION INTERFACE SYSTEM WITH A LIGHT-DIFFUSIVE SCREEN - One embodiment of the invention includes a gesture recognition interface system. The interface system may comprise at least one light source positioned to illuminate a first side of a light-diffusive screen. The interface system may also comprise at least one camera positioned on a second side of the light-diffusive screen, the second side being opposite the first side, and configured to receive a plurality of images based on a brightness contrast difference between the light-diffusive screen and an input object. The interface system may further comprise a controller configured to determine a given input gesture based on changes in relative locations of the input object in the plurality of images. The controller may further be configured to initiate a device input associated with the given input gesture. | 12-24-2009 |
20090316953 | Adaptive match metric selection for automatic target recognition - An automatic target recognition system with adaptive metric selection. The novel system includes an adaptive metric selector for selecting a match metric based on the presence or absence of a particular feature in an image and a matcher for identifying a target in the image using the selected match metric. In an illustrative embodiment, the adaptive metric selector is designed to detect a shadow in the image and select a first metric if a shadow is detected and not cut off, and select a second metric otherwise. The system may also include an automatic target cuer for detecting targets in a full-scene image and outputting one or more target chips, each chip containing one target. The adaptive metric selector adaptively selects the match metric for each chip separately, and may also adaptively select an appropriate chip size such that a shadow in the chip is not unnecessarily cut off. | 12-24-2009 |
20090316954 | INPUT APPARATUS AND IMAGE FORMING APPARATUS - An input apparatus for enabling a user to enter an instruction into a main apparatus has high durability and offers superior operability. The input apparatus includes a table device having a table with a variable size. An image of plural virtual keys that is adapted to the size of the table is projected by a projector unit onto the table. Position information about a finger of the user that is placed on the table is detected by a position detecting device contactlessly. One of the plural virtual keys that corresponds to the position of the finger of the user detected by the position detecting device is detected by a key detecting device based on information about the image of the plural virtual keys and a result of the detection made by the position detecting device. | 12-24-2009 |
20090316955 | IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM - An image processing system includes: an object detecting unit that detects a moving body object from image data of an image of a predetermined area; an object-occurrence-position detecting unit that detects an occurrence position of the object detected by the object detecting unit; and a valid-object determining unit that determines that the object detected by the object detecting unit is a valid object when the object is present in a mask area set as a non-detection target in the image of the predetermined area and the occurrence position of the object in the mask area detected by the object-occurrence-position detecting unit is outside the mask area. | 12-24-2009 |
20090316956 | Image Processing Apparatus - An image processing accuracy estimation unit estimates an image processing accuracy by calculating the size of an object at which the accuracy of measurement of the distance of the object photographed by an on-vehicle camera becomes a permissible value or less. An image post-processing area determination unit determines, in accordance with the estimated image processing accuracy, a partial area inside a detection area of the object as an image post-processing area for which image post-processing is carried out, and divides the determined image post-processing area into lattice cells. An image processing unit processes the image photographed by the on-vehicle camera to detect a candidate for the object and calculates a three-dimensional position of the detected object candidate. An image post-processing unit calculates, in each individual cell inside the determined area, the probability that the detected object is present and determines the presence or absence of the object. | 12-24-2009 |
20090324008 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR PROVIDING GESTURE ANALYSIS - A method for providing gesture analysis may include analyzing image data using a skin detection model generated with respect to detecting skin of a specific user, tracking a portion of the image data correlating to a skin region, and performing a gesture recognition for the tracked portion of the image based on comparing features recognized in the skin region to stored features corresponding to a predefined gesture. An apparatus and computer program product corresponding to the method are also provided. | 12-31-2009 |
20090324009 | Method and system for the determination of object positions in a volume - A method or a system embodiment determines positional information about a moveable object to which is affixed a pattern of stripes having reference lines. A method determines image lines of stripe images of each stripe within at least two video frames, uses the image lines to prescribe planes having lines of intersection, and determines a transformation mapping reference lines to lines of intersection. Position information about the object may be derived from the transformation. A system embodiment comprises a pattern of stripes in a known fixed relationship to an object, reference lines characterizing the stripes, two or more cameras at known locations, a digital computer adapted to receive video frames from the pixel arrays of the cameras, and a program stored in the computer's memory. The program performs some or all of the method. When there are two or more moveable objects, an embodiment may further determine the position information about a first object to be transformed to a local coordinate system fixed with respect to a second object. | 12-31-2009 |
20090324010 | Neural network-controlled automatic tracking and recognizing system and method - A neural network-controlled automatic tracking and recognizing system includes a fixed field of view collection module, a full functions variable field of view collection module, a video image recognition algorithm module, a neural network control module, a suspect object track-tracking module, a database comparison and alarm judgment module, a monitored characteristic recording and rule setting module, a light monitoring and control module, a backlight module, an alarm output/display/storage module, and security monitoring sensors. The invention also relates to the operation method of the system. | 12-31-2009 |
20090324011 | METHOD OF DETECTING MOVING OBJECT - Proposed is a method of detecting a moving object, including: providing an image-set at least including a first image and a second image correlated in a time series, the first image preceding the second image; defining a detecting region and a detecting direction so as to construct a virtual gate in the first image; estimating the motion vector in a time series; comparing, by the virtual gate, the second image with the first image so as to determine a difference therebetween in terms of an object's position and motion vector; and retrieving the object to be an effective moving object upon determination of the object as lying within the detecting region defined in the virtual gate and moving in a direction substantially the same as the detecting direction. This invention presents a moving object detection method without the need to construct a background model a priori. | 12-31-2009 |
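The direction test in the virtual-gate step above amounts to checking that the object's motion vector is substantially aligned with the detecting direction, which can be done with cosine similarity. A minimal sketch; the threshold and vectors are illustrative assumptions.

```python
def crosses_gate(motion, gate_dir, min_cos=0.9):
    """True if the motion vector points substantially in the gate's
    detecting direction (cosine similarity above a threshold)."""
    dot = motion[0] * gate_dir[0] + motion[1] * gate_dir[1]
    norm = ((motion[0] ** 2 + motion[1] ** 2) ** 0.5 *
            (gate_dir[0] ** 2 + gate_dir[1] ** 2) ** 0.5)
    return norm > 0 and dot / norm >= min_cos

gate = (1.0, 0.0)                             # gate watches left-to-right motion
same_way  = crosses_gate((5.0, 0.5), gate)    # nearly aligned: accepted
wrong_way = crosses_gate((-4.0, 0.0), gate)   # opposite direction: rejected
```

An object passing the position check but failing this direction check would not be retrieved as an effective moving object.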
20090324012 | SYSTEM AND METHOD FOR CONTOUR TRACKING IN CARDIAC PHASE CONTRAST FLOW MR IMAGES - A method for tracking a contour in cardiac phase contrast flow magnetic resonance (MR) images includes estimating a global translation of a contour in a reference image in a time sequence of cardiac phase contrast flow MR images to a contour in a current image in the time sequence of images by finding a 2-dimensional translation vector that maximizes a similarity function of the contour in the reference image and the current image calculated over a bounding rectangle containing the contour in the reference image, estimating an affine transformation of the contour in the reference image to the contour in the current image, and performing a constrained local deformation of the contour in the current image. | 12-31-2009 |
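The global-translation step in this abstract can be sketched as a brute-force search for the 2D shift that best aligns the reference region with the current image. For simplicity the sketch minimizes mean squared difference over the overlap rather than maximizing the patent's similarity function; images are assumed to be small grayscale nested lists.

```python
def best_translation(ref, cur, search=2):
    """Find the (dx, dy) within +/-search that best maps the reference
    image onto the current image (minimum mean squared difference
    over the overlapping pixels)."""
    h, w = len(ref), len(ref[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err, n = 0, 0
            for y in range(h):
                for x in range(w):
                    cy, cx = y + dy, x + dx
                    if 0 <= cy < h and 0 <= cx < w:
                        err += (ref[y][x] - cur[cy][cx]) ** 2
                        n += 1
            err /= n
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

ref = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
cur = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 5, 6, 7], [0, 9, 10, 11]]
shift = best_translation(ref, cur)  # content moved by (+1, +1)
```

The recovered translation would seed the subsequent affine estimation and the constrained local deformation of the contour.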
20090324013 | Image processing apparatus and image processing method - An image processing apparatus, a feature point tracking method and a feature point tracking program, which enable efficient feature point tracking by taking the easiness of convergence of a displacement amount according to the image pattern into account in a hierarchical gradient method, are provided. A displacement calculating unit reads a hierarchical tier image with the smallest image size from each of a reference pyramid py | 12-31-2009 |
20090324014 | RETRIEVING SCENES FROM MOVING IMAGE DATA - A computer system, method and computer program that retrieves, from at least one piece of moving image data, at least one scene that includes moving image content to be retrieved. The computer system includes a storage unit that stores a locus of a model of the moving image to be retrieved and velocity variation of the model; a first calculation unit that calculates a first vector including the locus and the velocity variation of the model; a second calculation unit that calculates a second vector regarding the moving image content to be retrieved included in the at least one piece of moving image data; a third calculation unit that calculates a degree of similarity between the first and second vectors; and a selection unit that selects, at least one scene which includes the moving image content to be retrieved, on the basis of the degree of similarity. | 12-31-2009 |
20090324015 | EMITTER TRACKING SYSTEM - An improved emitter tracking system. In aspects of the present teachings, the presence of a desired emitter may be established by a relatively low-power emitter detection module, before images of the emitter and/or its surroundings are captured with a relatively high-power imaging module. Capturing images of the emitter may be synchronized with flashes of the emitter, to increase the signal-to-noise ratio of the captured images. | 12-31-2009 |
20090324016 | MOVING TARGET DETECTING APPARATUS, MOVING TARGET DETECTING METHOD, AND COMPUTER READABLE STORAGE MEDIUM HAVING STORED THEREIN A PROGRAM CAUSING A COMPUTER TO FUNCTION AS THE MOVING TARGET DETECTING APPARATUS - To extract a target pixel that shows a moving target in an image containing a complicated background. An image storing section | 12-31-2009 |
20090324017 | CAPTURING AND PROCESSING FACIAL MOTION DATA - Capturing and processing facial motion data includes: coupling a plurality of sensors to target points on a facial surface of an actor; capturing frame by frame images of the plurality of sensors disposed on the facial surface of the actor using at least one motion capture camera disposed on a head-mounted system; performing, in the head-mounted system, a tracking function on the frame by frame images of the plurality of sensors to accurately map the plurality of sensors for each frame; and generating, in the head-mounted system, a modeled surface representing the facial surface of the actor. | 12-31-2009 |
20090324018 | Efficient And Accurate 3D Object Tracking - A method of tracking an object in an input image stream, the method comprising iteratively applying the steps of: (a) rendering a three-dimensional object model according to a previously predicted state vector from a previous tracking loop or the state vector from an initialisation step; (b) extracting a series of point features from the rendered object; (c) localising corresponding point features in the input image stream; (d) deriving a new state vector from the point feature locations in the input image stream. | 12-31-2009 |
20100002908 | Pedestrian Tracking Method and Pedestrian Tracking Device - A pedestrian tracking method and a pedestrian tracking device with a simple structure can estimate the motion of a pedestrian in images without using color information, making it possible to achieve a robust pedestrian tracking. The pedestrian tracking device ( | 01-07-2010 |
20100002909 | Method and device for detecting in real time interactions between a user and an augmented reality scene - The invention consists in a system for detection in real time of interactions between a user and an augmented reality scene, the interactions resulting from the modification of the appearance of an object present in the image. After having created ( | 01-07-2010 |
20100002910 | Method and Apparatus for Developing Synthetic Three-Dimensional Models from Imagery - A method and apparatus for modeling an object in software are disclosed. The method includes generating a three-dimensional geometry of the object from a plurality of points obtained from a plurality of images of the object, the images having been acquired from a plurality of perspectives; and generating a three-dimensional model from the three-dimensional geometry for integration into an object recognition system. The apparatus may be a program storage medium encoded with instructions that, when executed by a computer, perform such a method or a computer programmed to perform such a method. | 01-07-2010 |
20100008539 | SYSTEMS AND METHODS FOR IMPROVED TARGET TRACKING FOR TACTICAL IMAGING - Certain embodiments provide systems and methods for target image acquisition using sensor data. The system includes at least one sensor adapted to detect an event and generate a signal based at least in part on the event. The system also includes an imager obtaining an image of a target and target area based on a target tracking and recognition algorithm. The imager is configured to trigger image acquisition based at least in part on the signal from the sensor. The imager adjusts the target tracking and recognition algorithm based at least in part on sensor data in the signal. In certain embodiments, the imager may also adjust an image acquisition threshold for obtaining an image based on the sensor data. | 01-14-2010 |
20100008540 | Method for Object Detection - A method for object detection from a visual image of a scene. The method includes: using a first-order predicate logic formalism to specify a set of logical rules to encode contextual knowledge regarding the object to be detected; inserting the specified logical rules into a knowledge base; obtaining the visual image of the scene; applying specific object feature detectors to some or all pixels in the visual image of the scene to obtain responses at those locations; using the obtained responses to generate logical facts indicative of whether specific features or parts of the object are present or absent at that location in the visual image; inserting the generated logical facts into the knowledge base; and combining the logical facts with the set of logical rules to determine whether the object is present or absent at a particular location in the scene. | 01-14-2010 |
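One plausible reading of "combining the logical facts with the set of logical rules" is simple forward chaining over Horn clauses. The sketch below is illustrative only; the rule encoding, the fact names, and the `object_present` conclusion label are invented for the example:

```python
def object_present(facts, rules):
    """Forward-chain simple Horn rules over detector facts.
    Each rule is (premises_set, conclusion); all names are assumptions."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # fire a rule when all premises are already derived
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return 'object_present' in derived
```

In the patent's framing, the facts would come from per-pixel feature detectors and the rules from the hand-specified contextual knowledge.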
20100008541 | Method for Presenting Images to Identify Target Objects - A method presents a set of images to a viewer. The images include objects, which can be either distractor objects or target objects. A prevalence of the target objects is substantially lower than the distractor objects. Each image is segmented into portions so that each portion includes one object. The portions are then combined into a combined image. The combined image is presented to a viewer so that the target objects can be accurately and rapidly identified. The combining of the portions can be random or ordered in either the spatial or temporal domain. | 01-14-2010 |
20100008542 | Object detection method and apparatus - An object detection method and apparatus is provided. When an object pixel having a target pixel value is found while an image including an object is scanned at intervals of a preset number of pixels, whether or not each pixel around the object pixel has the target pixel value is sequentially determined, while spreading to pixels around the object pixel, to find an entire pixel region constituting the object and position values of the found pixels are stored. This ensures that an entire pixel region of the object is simply, easily, quickly, and correctly found. | 01-14-2010 |
20100014707 | Vehicle and road sign recognition device - A vehicle and road sign recognition device each includes: image capturing means ( | 01-21-2010 |
20100014708 | TARGET RANGE-FINDING METHOD AND DEVICE - The present invention provides a target range-finding method and device. The device includes a marking portion on the target, which is set with an area or size and defined by a first and second measurement edge. An image acquisition device includes a lens and operating screen. The operating screen displays the target image captured by the image acquisition device. A measuring mark selection unit selects the position of the first and second measurement edges of the target image from the operating screen of the image acquisition device. A processing unit calculates the range of the target. The target range-finding device presents better range-finding accuracy, ease-of-operation and higher efficiency as well as improved applicability. | 01-21-2010 |
20100014709 | Super-resolving moving vehicles in an unregistered set of video frames - A method is provided for accurately determining the registration for a moving vehicle over a number of frames so that the vehicle can be super-resolved. Instead of causing artifacts in a super-resolved image, the moving vehicle can be specifically registered and super-resolved individually. This method is very accurate, as it uses a mathematical model that captures motion with a minimal number of parameters and uses all available image information to solve for those parameters. Methods are provided that implement the vehicle registration algorithm and super-resolve moving vehicles using the resulting vehicle registration. One advantage of this system is that better images of moving vehicles can be created without requiring costly new aerial surveillance equipment. | 01-21-2010 |
20100014710 | METHOD AND SYSTEM FOR TRACKING POSITIONS OF HUMAN EXTREMITIES - A method for tracking positions of human extremities is disclosed. A left image of a first extremity portion is retrieved using a first picturing device and an outline candidate position of the first extremity portion is obtained according to feature information of the left image. A right image of the first extremity portion is retrieved using a second picturing device and a depth candidate position of the first extremity portion is obtained according to depth information of the right image. Geometry relations between the outline candidate position and the depth candidate position and a second extremity portion of a second extremity position are calculated to determine whether a current extremity position of the first extremity portion is required to be updated. | 01-21-2010 |
20100021005 | Time Managing Device of a Computer System and Related Method - A time managing device of a computer system including a graphic user interface capable of displaying application windows is disclosed. The time managing device includes an image capturing device, a sight-line detecting unit and a reminding unit. The image capturing device is used for capturing a user image corresponding to a user. The sight-line detecting unit is coupled to the image capturing device and used for analyzing a user sight-line state according to the user image to generate a sight-line detection result. The reminding unit is coupled to the sight-line detecting unit and the graphic user interface, and used for issuing a reminder for a predetermined application window displayed on the graphic user interface according to a predetermined time and the sight-line detection result. | 01-28-2010 |
20100021006 | OBJECT TRACKING METHOD AND SYSTEM - An object tracking method uses a system having an object identifying device and at least one video tracking device, wherein the object identifying device monitors an area to identify an object entering the area and the video tracking device wired/wirelessly connected to the object identifying device monitors the area monitored by the object identifying device. The method includes: extracting, at the object identifying device, object identification information of the object; providing, at the object identifying device, the object identification information to the video tracking device; tracking, at the video tracking device, the object to extract physical information of the object; mapping, at the video tracking device, the physical information to the object identification information to generate object information of the object; and storing, at the video tracking device, the object information in a memory of the video tracking device. | 01-28-2010 |
20100021007 | RECONSTRUCTION OF DATA PAGE FROM IMAGED DATA - The present invention relates to an electronic device ( | 01-28-2010 |
20100021008 | System and Method for Face Tracking - Improved face tracking is provided during determination of an image by an imaging device using a low power face tracking unit. In one embodiment, image data associated with a frame and one or more face detection windows from a face detection unit may be received by the face tracking unit. The face detection windows are associated with the image data of the frame. A face list may be determined based on the face detection windows and one or more faces may be selected from the face list to generate an output face list. The output face list may then be provided to a processor of an imaging device for the detection of an image based on at least one of coordinate and scale values of the one or more faces on the output face list. | 01-28-2010 |
20100021009 | METHOD FOR MOVING TARGETS TRACKING AND NUMBER COUNTING - The invention discloses a method for tracking moving targets and counting their number, comprising the steps of: a) continuously acquiring video images comprising moving targets; b) acquiring the video image of a current frame and pre-processing it; c) segmenting the target region of the processed image and extracting the target region; d) matching the target region of the current frame obtained in step c) with that of the previous frame, based on online feature selection, to establish a match tracking link; and e) determining the number of targets corresponding to each match tracking link based on the target region tracks recorded by the link. The invention addresses the low precision of counting results caused by adverse conditions commonly met in practice, such as spatially uneven illumination, complicated changes over time, and pronounced changes in people's posture as they walk by. | 01-28-2010 |
20100027839 | SYSTEM AND METHOD FOR TRACKING MOVEMENT OF JOINTS - A first image is obtained. At least one moving object indicated by the at least one image is selected. At least one joint that is associated with the at least one moving object is identified. At least one second image including the at least one moving object with the at least one joint is obtained and the movement of the at least one joint is tracked in a three-dimensional space. | 02-04-2010 |
20100027840 | System and method for bullet tracking and shooter localization - A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes image processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares solution for all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios. | 02-04-2010 |
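The Kalman-filter trajectory step in the abstract above can be sketched with a textbook constant-velocity filter updated once per frame. The state layout (x, y, vx, vy) and the noise parameters are assumptions for the example, not values from the patent:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over 2-D position measurements.
    State = [x, y, vx, vy]; q and r are assumed noise levels."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]], float)   # motion model
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # observe position only
    Q, R = q * np.eye(4), r * np.eye(2)
    x, P = np.zeros(4), np.eye(4)
    x[:2] = measurements[0]
    out = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q                   # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
        x = x + K @ (np.asarray(z, float) - H @ x)      # update with new image data
        P = (np.eye(4) - K @ H) @ P
        out.append(x.copy())
    return out
```

The patent's combinatorial least-squares step for locating the shooter from several such trajectories is a separate computation layered on top of these per-projectile estimates.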
20100027841 | METHOD AND SYSTEM FOR DETECTING A SIGNAL STRUCTURE FROM A MOVING VIDEO PLATFORM - The present invention aims at providing a method for detecting a signal structure from a moving vehicle. The method for detecting signal structure includes capturing an image from a camera mounted on the moving vehicle. The method further includes restricting a search space by predefining candidate regions in the image, extracting a set of features of the image within each candidate region and detecting the signal structure accordingly. | 02-04-2010 |
20100027842 | OBJECT DETECTION METHOD AND APPARATUS THEREOF - An object detection method and an apparatus thereof are provided. In the object detection method, a plurality of images in an image sequence is sequentially received. When a current image is received, a latest background image is established by referring to the current image and the M images previous to the current image, so as to update one of N background images, wherein M and N are positive integers. Next, color models of the current image and the background images are analyzed to determine whether a pixel in the current image belongs to a foreground object. Accordingly, the accuracy in object detection is increased by instantly updating the background images. | 02-04-2010 |
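A minimal way to keep a background model current, in the spirit of the abstract above though not necessarily its exact method, is exponential blending of each new frame into the background plus a difference threshold for foreground; `alpha` and `thresh` are assumed values:

```python
import numpy as np

def update_background(bg, frame, alpha=0.1):
    """Blend the current frame into the background model.
    alpha (learning rate) is an illustrative assumption."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30):
    """Pixels differing from the background by more than thresh are foreground."""
    return np.abs(frame.astype(float) - bg) > thresh
```

The patent maintains N background images and full color models rather than this single grayscale average, but the update-then-compare loop is the same shape.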
20100027843 | SURFACE UI FOR GESTURE-BASED INTERACTION - Disclosed is a unique system and method that facilitates gesture-based interaction with a user interface. The system involves an object-sensing configuration that includes a sensing plane vertically or horizontally located between at least two imaging components on one side and a user on the other. The imaging components can acquire input images taken of a view of and through the sensing plane. The images can include objects which are on the sensing plane and/or in the background scene, as well as the user as he interacts with the sensing plane. By processing the input images, one output image can be returned which shows the user objects that are in contact with the plane. Thus, objects located at a particular depth can be readily determined. Any other objects located beyond that depth can be “removed” and not seen in the output image. | 02-04-2010 |
20100027844 | MOVING OBJECT RECOGNIZING APPARATUS - Provided is a moving object recognizing apparatus capable of effectively showing reliability of result of image processing involved in moving object recognition and issuing alarms in an appropriate manner when needed. The moving object recognizing apparatus includes a data acquisition unit ( | 02-04-2010 |
20100034422 | OBJECT TRACKING USING LINEAR FEATURES - A method of tracking objects within an environment comprises acquiring sensor data related to the environment, identifying linear features within the sensor data, and determining a set of tracked linear features using the linear features identified within the sensor data and a previous set of tracked linear features, the set of tracked linear features being used to track objects within the environment. | 02-11-2010 |
20100034423 | SYSTEM AND METHOD FOR DETECTING AND TRACKING AN OBJECT OF INTEREST IN SPATIO-TEMPORAL SPACE - The present invention provides a system and method for detecting and tracking a moving object. First, robust change detection is applied to find initial candidate regions in consecutive frames. These initial detections in consecutive frames are stacked to produce space-time bands which are extracted by Hough transform and entropy minimization based band detection algorithm. | 02-11-2010 |
20100034424 | POINTING SYSTEM FOR LASER DESIGNATOR - A system for illuminating an object of interest includes a platform and a gimbaled sensor associated with an illuminator. The gimbaled sensor provides sensor data corresponding to a sensed condition associated with an area. The gimbaled sensor is configured to be articulated with respect to the platform. A first transceiver transceives communications to and from a ground control system. The ground system includes an operator control unit allowing a user to select and transmit to the first transceiver at least one image feature corresponding to the object of interest. An optical transmitter is configured to emit a signal operable to illuminate a portion of the sensed area proximal to the object of interest. A correction subsystem is configured to determine an illuminated-portion-to-object-of-interest error and, in response to the error determination, cause the signal to illuminate the object of interest. | 02-11-2010 |
20100034425 | METHOD, APPARATUS AND SYSTEM FOR GENERATING REGIONS OF INTEREST IN VIDEO CONTENT - A method, apparatus and system for generating regions of interest in a video content include identifying the program content of received video content, categorizing the scene content of the identified program content and defining at least one region of interest in at least one of the characterized scenes by identifying at least one of a location and an object of interest in the scenes. In one embodiment of the invention, a region of interest is defined using user preference information for the identified program content and the categorized scene content. | 02-11-2010 |
20100046796 | METHOD OF RECOGNIZING A MOTION PATTERN OF AN OBJECT - A method and a motion recognition system are disclosed for recognizing a motion pattern of at least one object by determining relative motion-blur variations around the at least one object in an image or a sequence of images. Motion blur parameters are extracted from the motion blur in the images, and based thereon the motion blur variations are determined by computing the variations between the motion blur parameters. | 02-25-2010 |
20100046797 | METHODS AND SYSTEMS FOR AUDIENCE MONITORING - Systems and methods for audience monitoring are provided that include receiving an input including a recording or live feed of an audience composed of several persons, detecting foreground of the input, performing blob segmentation of the input, and analyzing human presence on each segmented blob by identifying at least one person, identifying a spatial distribution of at least one identified person, determining a dwell time of at least one identified person, determining a temporal distribution of at least one identified person, and determining a gaze direction of at least one identified person. Such detecting provides the ability to track individual persons present in the audience, and how long they remain in the audience. The method also provides the ability to determine gaze direction of persons in the audience, and how long one or more persons are gazing in a particular direction. | 02-25-2010 |
20100046798 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - In an image processing apparatus that performs tracking processing based on a correlation between frame images, when an object that is a tracking target is missed and a frame indicating the tracking target is set to a uniform background during tracking processing, a display of the frame may blur. An image processing apparatus is provided which detects a tracking target candidate region which has a highest correlation with a set tracking target region, calculates a difference between an evaluation value acquired in the tracking target candidate region and an evaluation value acquired in a peripheral region of the tracking target candidate region, and stops tracking if the difference is less than a threshold value. | 02-25-2010 |
20100046799 | METHODS AND SYSTEMS FOR DETECTING OBJECTS OF INTEREST IN SPATIO-TEMPORAL SIGNALS - Methods and systems detect objects of interest in a spatio-temporal signal. According to one embodiment, a system processes a digital spatio-temporal input signal containing zero or more foreground objects of interest superimposed on a background. The system comprises a foreground/background separation module, a foreground object grouping module, an object classification module, and a feedback connection. The foreground/background separation module receives the spatio-temporal input signal and, according to one or more adaptable parameters, produces foreground/background labels designating elements of the spatio-temporal input signal as either foreground or background. The foreground object grouping module is connected to the foreground/background separation module and identifies groups of selected foreground-labeled elements as foreground objects. The object classification module is connected to the foreground object grouping module and generates object-level information related to the foreground object. The object-level information adapts the one or more adaptable parameters of the foreground/background separation module, via the feedback connection. | 02-25-2010 |
20100054533 | Digital Image Processing Using Face Detection Information - A method of processing a digital image using face detection within the image achieves one or more desired image processing parameters. A group of pixels is identified that correspond to an image of a face within the digital image. Default values are determined of one or more parameters of at least some portion of the digital image. Values are adjusted of the one or more parameters within the digitally-detected image based upon an analysis of the digital image including the image of the face and the default values. | 03-04-2010 |
20100054534 | SYSTEM AND METHOD FOR INTERACTING WITH A MEDIA DEVICE USING FACES AND PALMS OF VIDEO DISPLAY VIEWERS - Systems and methods that allow for user interaction with, and control of, televisions and other media devices are disclosed. A television set is provided with a face and/or palm detection device configured to identify faces and/or palms and map them into coordinates. The mapped coordinates may be translated into data inputs which may be used to interact with applications related to the television. In some embodiments, multiple faces and/or palms may be detected and inputs may be received from each of them. The inputs received by mapping the coordinates may include inputs for interactive television programs in which viewers are asked to vote on or rank some aspect of the program. | 03-04-2010 |
20100054535 | Video Object Classification - Techniques for classifying one or more objects in at least one video, wherein the at least one video comprises a plurality of frames are provided. One or more objects in the plurality of frames are tracked. A level of deformation is computed for each of the one or more tracked objects in accordance with at least one change in a plurality of histograms of oriented gradients for a corresponding tracked object. Each of the one or more tracked objects is classified in accordance with the computed level of deformation. | 03-04-2010 |
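The deformation cue above (change in histograms of oriented gradients between frames) can be sketched as follows. This is a simplified whole-image orientation histogram rather than the patent's per-object computation, and the bin count and L1 distance are assumptions:

```python
import numpy as np

def orientation_histogram(img, bins=9):
    """Magnitude-weighted histogram of unsigned gradient orientations
    (a simplified HOG-style descriptor; binning is an assumption)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)    # fold orientation into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s else hist

def deformation(h1, h2):
    """Deformation level as the L1 change between successive histograms."""
    return float(np.abs(h1 - h2).sum())
```

A rigid object's histogram stays nearly constant from frame to frame, so its deformation score stays low; an articulating object (e.g., a pedestrian) keeps reshuffling gradient orientations and scores high.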
20100054536 | ESTIMATING A LOCATION OF AN OBJECT IN AN IMAGE - An implementation provides a method including forming a metric surface in a particle-based framework for tracking an object, the metric surface relating to a particular image in a sequence of digital images. Multiple hypotheses are formed of a location of the object in the particular image, based on the metric surface. The location of the object is estimated based on probabilities of the multiple hypotheses. | 03-04-2010 |
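Estimating a location from the probabilities of multiple hypotheses, as in the particle-based framework above, often reduces to a weighted mean over particle positions. A minimal sketch (names are assumptions; the metric-surface construction that produces the weights is not shown):

```python
import numpy as np

def estimate_location(particles, weights):
    """Combine location hypotheses by their probabilities:
    normalized-weight mean of 2-D particle positions."""
    w = np.asarray(weights, float)
    w = w / w.sum()                         # normalize to probabilities
    pts = np.asarray(particles, float)
    return tuple((pts * w[:, None]).sum(axis=0))
```

Other estimators (maximum-weight particle, local mode) fit the same interface; the weighted mean is simply the most common choice.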
20100054537 | VIDEO FINGERPRINTING - A method for fingerprinting video comprising identifying motion in a video as a function of time; using the identified motion to create a motion fingerprint; identifying peaks and/or troughs in the motion fingerprint, and using these to create a reduced size points of interest motion fingerprint. Reduced size fingerprints for a plurality of known videos can be prepared and stored for later comparison with reduced size fingerprints for unknown videos, thereby providing a mechanism for identifying the unknown videos. | 03-04-2010 |
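Reducing a motion-versus-time signal to its peaks and troughs, as the fingerprinting abstract above describes, can be sketched with a simple local-extremum scan (no prominence filtering; any such refinement is omitted):

```python
def motion_fingerprint(motion):
    """Reduce a per-frame motion signal to its local peaks and troughs,
    giving a compact points-of-interest fingerprint (illustrative sketch)."""
    pts = []
    for i in range(1, len(motion) - 1):
        if motion[i] > motion[i - 1] and motion[i] > motion[i + 1]:
            pts.append((i, motion[i], 'peak'))
        elif motion[i] < motion[i - 1] and motion[i] < motion[i + 1]:
            pts.append((i, motion[i], 'trough'))
    return pts
```

Matching an unknown video then amounts to comparing its reduced point list against the stored lists for known videos, which is far cheaper than comparing full motion traces.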
20100061591 | OBJECT RECOGNITION DEVICE - An object recognition device detects a position of a vehicle based on a running path obtained by GPS, vehicle speed, steering angle, etc., and also detects the position of the vehicle based on a result of recognition of an object obtained using a captured image of a camera. The device computes a positioning accuracy in detecting the vehicle position, which accuracy mostly deteriorates as a movement distance of the vehicle increases. | 03-11-2010 |
20100061592 | SYSTEM AND METHOD FOR ANALYZING THE MOVEMENT AND STRUCTURE OF AN OBJECT - A system and method for analyzing the movement and structure of an object ( | 03-11-2010 |
20100061593 | Extrapolation system for solar access determination - An extrapolation system includes acquiring a first orientation-referenced image at a first position, acquiring a second orientation-referenced image at a second position having a vertical offset from the first position, and processing the first orientation-referenced image and the second orientation-referenced image to provide an output parameter extrapolated to a third position that has an offset from the first position and the second position. | 03-11-2010 |
20100061594 | DETECTION OF MOTOR VEHICLE LIGHTS WITH A CAMERA - A method for detecting front headlights and tail lights of a motor vehicle with a colour camera sensor is presented. The colour camera sensor comprises a plurality of red pixels, i.e. image points which are only sensitive in the red spectral range, and a plurality of pixels of other colours. In a first evaluation stage, only the intensity of the red pixels in the image is analysed in order to select relevant points of light in the image. | 03-11-2010 |
20100061595 | INVENTORY MANAGEMENT SYSTEM - The location of objects in a building is recorded in the inventory management system. The objects are moved through the building with a vehicle. The vehicle transmits wireless messages indicating actions of the vehicle, such as loading or unloading of objects. A camera captures images of an area in which the vehicle moves. Positions of the vehicle are automatically detected from the captured images. The information about locations of objects is updated using the detected positions at time points indicated by the messages. In an embodiment the actions of the vehicle are signalled with light signals and picked up via the camera. | 03-11-2010 |
20100067738 | IMAGE ANALYSIS USING A PRE-CALIBRATED PATTERN OF RADIATION - A system and method of image content analysis using a pattern generator that emits a regular and pre-calibrated pattern of non-visible electromagnetic radiation from a surface in range of a camera adapted to perceive the pattern. The camera captures images of the perceived pattern and other objects within the camera's range, and outputs image data. The image data is analyzed to determine attributes of the objects and area within the camera's range. The pattern provides a known background, which enables an improved and simplified image analysis. | 03-18-2010 |
20100067739 | Sequential Stereo Imaging for Estimating Trajectory and Monitoring Target Position - A method for determining a position of a target includes obtaining a first image of the target, obtaining a second image of the target, wherein the first and the second images have different image planes and are generated at different times, processing the first and second images to determine whether the target in the first image corresponds spatially with the target in the second image, and determining the position of the target based on a result of the act of processing. Systems and computer products for performing the method are also described. | 03-18-2010 |
20100067740 | Pedestrian Detection Device and Pedestrian Detection Method - A near-infrared night vision device to which a pedestrian detection device is applied includes a near-infrared projector, a near-infrared camera, a display and an ECU. By executing programs, the ECU constitutes a pedestrian candidate extraction portion and a determination portion. The pedestrian candidate extraction portion extracts pedestrian candidate regions from near-infrared images. The determination portion normalizes the sizes and the brightnesses of the pedestrian candidates extracted by the pedestrian candidate extraction portion, and then computes the degrees of similarity between the normalized pedestrian candidates. The determination portion determines that a pedestrian candidate having two or more other pedestrian candidates whose degree of similarity with the pedestrian candidate is greater than or equal to a predetermined value is not a pedestrian. | 03-18-2010 |
20100067741 | Real-time tracking of non-rigid objects in image sequences for which the background may be changing - A method and apparatus are disclosed for tracking an arbitrarily moving object in a sequence of images where the background may be changing. The tracking is based on visual features, such as color or texture, where regions of images (such as those which represent the object being tracked or the background) can be characterized by statistical distributions of feature values. The method improves on the prior art by incorporating a means whereby characterizations of the background can be rapidly re-learned for each successive image frame. This makes the method robust against the scene changes that occur when the image capturing device moves. It also provides robustness in difficult tracking situations, such as when the tracked object passes in front of backgrounds with which it shares similar colors or other features. Furthermore, a method is disclosed for automatically detecting and correcting certain kinds of errors which may occur when employing this or other tracking methods. | 03-18-2010 |
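The per-frame background re-learning in 20100067741 can be illustrated with a likelihood-ratio weighting over feature histograms. The histogram representation and the `object_likelihood` ratio are assumed simplifications; the patent speaks only generally of statistical distributions of feature values:

```python
from collections import Counter

def background_histogram(pixels, object_region):
    # re-learn the background feature distribution for the CURRENT frame,
    # counting every pixel index that lies outside the tracked object's region
    return Counter(v for i, v in enumerate(pixels) if i not in object_region)

def object_likelihood(pixel_value, obj_hist, bg_hist, eps=1e-6):
    # ratio test: large when the feature value is typical of the object
    # but rare in the freshly re-learned background
    return obj_hist.get(pixel_value, 0) / (bg_hist.get(pixel_value, 0) + eps)
```

Because `background_histogram` is rebuilt every frame, features the object shares with a new background are automatically down-weighted.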
20100067742 | OBJECT DETECTING DEVICE, IMAGING APPARATUS, OBJECT DETECTING METHOD, AND PROGRAM - An object detecting device includes a calculating unit configured to calculate gradient intensity and gradient orientation of luminance for a plurality of regions in an image and calculate a frequency distribution of the luminance gradient intensity as to the calculated luminance gradient orientation for each of the regions, and a determining unit configured to determine whether or not an identified object is included in the image by comparing a plurality of frequency distributions calculated for each of the regions. | 03-18-2010 |
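A minimal version of the per-region frequency distribution of luminance gradient intensity over gradient orientation described in 20100067743 (in the spirit of HOG descriptors) might look like this. The central-difference gradients, the bin count, and the L1 comparison are illustrative choices:

```python
import math

def orientation_histogram(img, bins=8):
    # img: 2D list of luminance values; accumulate gradient magnitude
    # into orientation bins over the interior pixels of the region
    h = [0.0] * bins
    rows, cols = len(img), len(img[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi   # unsigned orientation
            h[min(int(ang / math.pi * bins), bins - 1)] += mag
    return h

def histogram_distance(h1, h2):
    # L1 distance between histograms -- an assumed comparison measure
    return sum(abs(a - b) for a, b in zip(h1, h2))
```

A vertical edge concentrates all of its gradient energy in the horizontal-gradient bin.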
20100067743 | SYSTEM AND METHOD FOR TRACKING AN ELECTRONIC DEVICE - A system is provided for tracking a spatially manipulated controlling object using a camera associated with a processor. While the user spatially manipulates the controlling object, an image of the controlling object is picked up via a video camera, and the camera image is analyzed to isolate the part of the image pertaining to the controlling object for mapping the position and orientation of the device in a two-dimensional space. Robust data processing systems and a computerized method employ calibration and tracking algorithms such that minimal user intervention is required for achieving and maintaining successful tracking of the controlling object in changing backgrounds and lighting conditions. | 03-18-2010 |
20100067744 | Method and Single Laser Device for Detecting Magnifying Optical Systems - The invention comprises illuminating a scene where said magnifying optical system (OP) may occur with at least one pulse generated by first laser transmitter (E). The laser transmitter (E) and a first detector of the scene thus illuminated (D | 03-18-2010 |
20100074469 | Vehicle and road sign recognition device - The present invention includes: image capturing means ( | 03-25-2010 |
20100074470 | Combination detector and object detection method using the same - Provided are a detector and a method of detecting an object using the detector. The method includes combining a first detector and a second detector in a combination scheme to form a multi-layer combination detector, the second detector being of a type different from that of the first detector, processing a binary classification detection with respect to an inputted sample starting from an uppermost layer detector, allowing a sample of an object detected from a current layer to approach a lower layer, while rejecting a sample of a non-object detected from the current layer whereby the rejected non-object may not approach the lower layer, and outputting a sample passing through all layers as a detected object. | 03-25-2010 |
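The layered reject-early scheme of 20100074470 is essentially a classifier cascade: a sample reaches a lower layer only if every layer above accepted it. A sketch with stand-in classifiers (the real layers would be detectors of different types, per the abstract):

```python
def cascade_detect(sample, layers):
    # layers: ordered list of binary classifiers, uppermost layer first;
    # a rejection at any layer stops the sample from reaching lower layers
    for classify in layers:
        if not classify(sample):
            return False          # rejected as a non-object
    return True                   # passed all layers: detected object
```

Early rejection is what makes such cascades cheap: most non-objects are discarded by the first, fastest layers.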
20100074471 | Gesture Processing with Low Resolution Images with High Resolution Processing for Optical Character Recognition for a Reading Machine - A portable reading machine that operates in several modes and performs image preprocessing prior to optical character recognition. The portable reading machine receives a low resolution image and a high resolution image of a scene, processes the low resolution image to recognize a user-initiated gesture using a gesturing item that indicates a command from the user to the reading machine, and processes the high resolution image to recognize text in the image of the scene, according to the command from the user to the machine. | 03-25-2010 |
20100074472 | SYSTEM FOR AUTOMATED SCREENING OF SECURITY CAMERAS - The present invention involves a system for automatically screening closed circuit television (CCTV) cameras for large and small scale security systems, as used for example in parking garages. The system includes six primary software elements, each of which performs a unique function within the operation of the security system to provide intelligent camera selection for operators, resulting in a marked decrease of operator fatigue in a CCTV system. Real-time image analysis of video data is performed wherein a single pass of a video frame produces a terrain map which contains parameters indicating the content of the video. Based on the parameters of the terrain map, the system is able to make decisions about which camera an operator should view based on the presence and activity of vehicles and pedestrians, furthermore discriminating vehicle traffic from pedestrian traffic. The system is compatible with existing CCTV systems and is comprised of modular elements to facilitate integration and upgrades. | 03-25-2010 |
20100080415 | OBJECT-TRACKING SYSTEMS AND METHODS - A system and method for tracking, identifying, and labeling objects or features of interest is provided. In some embodiments, tracking is accomplished using unique signature of the feature of interest and image stabilization techniques. According to some aspects a frame of reference using predetermined markers is defined and updated based on a change in location of the markers and/or specific signature information. Individual objects or features within the frame may also be tracked and identified. Objects may be tracked by comparing two still images, determining a change in position of an object between the still images, calculating a movement vector of the object, and using the movement vector to update the location of an image device. | 04-01-2010 |
20100080416 | EYE DETECTION SYSTEM USING A SINGLE CAMERA - A system and a method for detecting the eyes of a driver of a vehicle using a single camera. The method includes determining a set of positional parameters corresponding to a driving seat of the vehicle. The camera is positioned at a pre-determined location inside the vehicle, and a set of parameters corresponding to the camera is determined. The location of the driver's eyes is detected using the set of positional parameters, an image of the driver's face and the set of parameters corresponding to the camera. | 04-01-2010 |
20100080417 | Object-Tracking Systems and Methods - A system and method for tracking, identifying, and labeling objects or features of interest is provided. In some embodiments, tracking is accomplished using unique signature of the feature of interest and image stabilization techniques. According to some aspects a frame of reference using predetermined markers is defined and updated based on a change in location of the markers and/or specific signature information. Individual objects or features within the frame may also be tracked and identified. Objects may be tracked by comparing two still images, determining a change in position of an object between the still images, calculating a movement vector of the object, and using the movement vector to update the location of an image device. | 04-01-2010 |
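The two-still comparison in 20100080415/20100080417 can be reduced to a centroid shift: the movement vector is the displacement of the object's center of mass between the two images. This sketch assumes binary object masks and ignores the signature-matching and image-stabilization steps:

```python
def movement_vector(mask_a, mask_b):
    # movement vector = shift of the object's centroid between two stills;
    # masks are 2D lists of 0/1 marking object pixels
    def centroid(mask):
        pts = [(x, y) for y, row in enumerate(mask)
               for x, v in enumerate(row) if v]
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
    (ax, ay), (bx, by) = centroid(mask_a), centroid(mask_b)
    return bx - ax, by - ay
```

The resulting vector could then be used to update the pointing of an imaging device, as the abstract describes.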
20100080418 | PORTABLE SUSPICIOUS INDIVIDUAL DETECTION APPARATUS, SUSPICIOUS INDIVIDUAL DETECTION METHOD, AND COMPUTER-READABLE MEDIUM - Cameras provided to glasses successively take subject images around a wearer of the glasses. The subject images are searched to detect human face regions, and if human face regions are detected, feature quantities of each face are calculated to detect the face direction and the eye direction, and an eye-gaze direction is detected based on them. Whether or not each person with the detected human face region is looking at the cameras is determined from the eye-gaze direction, and if there is a human face looking at the cameras for a given period of time or more, a person with the human face is determined as being a suspicious individual, and a warning message indicating the detection of the suspicious individual is output to the wearer. Furthermore, the detection information and images can be provided to a device in a remote location. | 04-01-2010 |
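The "looking at the cameras for a given period of time or more" rule in 20100080418 reduces to a longest-run test over per-frame gaze decisions. The person identifiers and the frame threshold below are hypothetical:

```python
def detect_suspicious(gaze_log, min_frames=30):
    # gaze_log: dict person_id -> list of booleans, one per frame,
    # True when that person's eye-gaze direction meets the camera
    suspicious = []
    for pid, frames in gaze_log.items():
        run = best = 0
        for looking in frames:
            run = run + 1 if looking else 0   # length of current gaze run
            best = max(best, run)
        if best >= min_frames:                # sustained gaze -> flag
            suspicious.append(pid)
    return suspicious
```

A person who glances repeatedly but never sustains a gaze stays below the threshold and is not flagged.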
20100086174 | METHOD OF AND APPARATUS FOR PRODUCING ROAD INFORMATION - An embodiment of the present invention discloses a method of producing road information for use in a map database including: acquiring a source image from an image sequence obtained by means of a terrestrial based camera mounted on a moving vehicle; determining a road color sample from pixels associated with a predefined area in the source image representative of the road surface in front of or behind the moving vehicle; generating a road surface image from the source image in dependence of the road color sample; and, producing road information in dependence of the road surface image and position and orientation data associated with the source image. | 04-08-2010 |
20100086175 | Image Processing Apparatus, Image Processing Method, Program, and Recording Medium - An image processing apparatus includes a detector, a setting unit, and an image generator. The detector detects a target object image region from a first image. When one or more predetermined parameters are applicable to a target object within the region detected by the detector, the setting unit sets the relevant target object image region as a first region. The image generator then generates a second image by applying predetermined processing to either the image portion within the first region, or to the image portions in a second region containing image portions within the first image that are not contained in the first region. | 04-08-2010 |
20100086176 | Learning Apparatus and Method, Recognition Apparatus and Method, Program, and Recording Medium - A learning apparatus includes an image generator, a feature point extractor, a feature value calculator, and a classifier generator. The image generator generates, from an input image, images having differing scale coefficients. The feature point extractor extracts feature points from each image generated by the image generator. The feature value calculator calculates feature values for the feature points by filtering the feature points using a predetermined filter. The classifier generator generates one or more classifiers for detecting a predetermined target object from an image by means of statistical learning using the feature values. | 04-08-2010 |
20100086177 | IMAGE PROCESSING APPARATUS AND METHOD - An image processing apparatus which is capable of suppressing an increase in the circuit size of buffers between data-processing circuits, thereby enabling an associated component thereof to be implemented by hardware. A position control unit sequentially shifts a position of a sub window image by a predetermined skip amount in a predetermined scanning direction for scanning, and further repeats the scanning for skipped sub window images after shifting a start position of the scanning, to thereby determine positions of all sub window images, each as an area from which a face image is to be detected. | 04-08-2010 |
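The skip-and-revisit scanning order of 20100086177 can be sketched in one dimension: a coarse pass visits every `skip`-th sub-window position, then the scan restarts from each shifted start position until all positions have been covered:

```python
def scan_positions(width, skip):
    # enumerate sub-window start positions: one coarse pass per shifted
    # start, so skipped positions are revisited on later passes
    order = []
    for start in range(skip):
        order.extend(range(start, width, skip))
    return order
```

Every position is eventually visited exactly once, but candidate detections can surface after the first coarse pass rather than only at the end of a dense scan.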
20100092030 | SYSTEM AND METHOD FOR COUNTING PEOPLE NEAR EXTERNAL WINDOWED DOORS - A system for counting objects, such as people, is provided having a camera ( | 04-15-2010 |
20100092031 | SELECTIVE AND ADAPTIVE ILLUMINATION OF A TARGET - There are provided a method and a system for illuminating one or more targets in a scene. An image of the scene is acquired using a sensing device that may use an infrared sensor, for example. From the image, an illumination controller determines an illumination figure, such that the illumination figure adaptively matches at least a position of the target in the image. The target is then selectively illuminated using an illumination device, according to the illumination figure. | 04-15-2010 |
20100092032 | METHODS AND APPARATUS TO FACILITATE OPERATIONS IN IMAGE BASED SYSTEMS - Vision based systems may select actions based on analysis of images to redistribute objects. Actions may include action type, action axis and/or action direction. Analysis may determine whether an object is accessible by a robot, whether an upper surface of a collection of objects meet a defined criteria and/or whether clusters of objects preclude access. | 04-15-2010 |
20100092033 | METHOD FOR TARGET GEO-REFERENCING USING VIDEO ANALYTICS - A method to geo-reference a target between subsystems of a targeting system is provided. The method includes receiving a target image formed at a sender subsystem location, generating target descriptors for a first selected portion of the target image, sending target location information and the target descriptors from a sender subsystem of the targeting system to a receiver subsystem of the targeting system, pointing an optical axis of a camera of the receiver subsystem at the target based on the target location information received from the sending subsystem, forming a target image at a receiver subsystem location when the optical axis is pointed at the target, and identifying a second selected portion of the target image formed at the receiver subsystem location that is correlated to the first selected portion of the target image formed at the sender subsystem location. | 04-15-2010 |
20100092034 | METHOD AND SYSTEM FOR POSITION DETERMINATION USING IMAGE DEFORMATION - A method and system of position determination using image deformation is provided. One implementation involves receiving an image of a visual tag, the image captured by an image capturing device, wherein the visual tag has a predefined position associated therewith; based on the image determining a distance of the image capturing device from the visual tag, and determining an angular position of the image capturing device relative to the visual tag; and determining position of the image capturing device based on said distance and said angular position. | 04-15-2010 |
20100092035 | AUTOMATIC RECOGNITION APPARATUS - The invention concerns an apparatus for automatic recognition of objects, which includes a device for capturing images of one object, or of a plurality of objects, which are to be recognized. The objects to be evaluated are manually introduced into a field of view of said camera. The invented apparatus possesses an image recognition device, whereby, from an image of an object within the field of view of the camera, an identification-signal representing the object is generated. The data acquired therefrom can serve, for example, a weighing scale, which has been equipped with the invented automatic recognition apparatus. | 04-15-2010 |
20100092036 | METHOD AND APPARATUS FOR DETECTING TARGETS THROUGH TEMPORAL SCENE CHANGES - A system and method for detecting a target in imagery is disclosed. At least one image region exhibiting changes in at least intensity is detected from among at least a pair of aligned images. A distribution of changes in at least intensity inside the at least one image region is determined using an unsupervised learning method. The distribution of changes in at least intensity is used to identify pixels experiencing changes of interest. At least one target from the identified pixels is identified using a supervised learning method. The distribution of changes in at least intensity is a joint hue and intensity histogram when the pair of images pertain to color imagery. The distribution of changes in at least intensity is an intensity histogram when the pair of images pertain to grey-level imagery. | 04-15-2010 |
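For the grey-level case in 20100092036, the unsupervised step over the intensity-change distribution can be approximated by outlier selection: pixels whose change is far from the bulk of the change distribution are the "changes of interest". The mean-plus-k-sigma cut is an assumed simplification of the histogram-based method in the abstract:

```python
def change_pixels(img1, img2, k=1.0):
    # flag pixels whose intensity change is an outlier relative to the
    # overall change distribution (stand-in for the unsupervised step)
    diffs = [abs(a - b) for a, b in zip(img1, img2)]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    thr = mean + k * var ** 0.5
    return [i for i, d in enumerate(diffs) if d > thr]
```

The flagged pixel indices would then feed the supervised target-identification stage.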
20100092037 | METHOD AND SYSTEM FOR VIDEO INDEXING AND VIDEO SYNOPSIS - In a system and method for generating a synopsis video from a source video, at least three different source objects are selected according to one or more defined constraints, each source object being a connected subset of image points from at least three different frames of the source video. One or more synopsis objects are sampled from each selected source object by temporal sampling using image points derived from specified time periods. For each synopsis object a respective time for starting its display in the synopsis video is determined, and for each synopsis object and each frame a respective color transformation for displaying the synopsis object may be determined. The synopsis video is displayed by displaying selected synopsis objects at their respective time and color transformation, such that in the synopsis video at least three points that each derive from different respective times in the source video are displayed simultaneously. | 04-15-2010 |
20100092038 | SYSTEM AND METHOD OF DETECTING OBJECTS - The present invention is a system and a method of segmenting and detecting objects which can be approximated by planar or nearly planar surfaces in order to detect one or more objects with threats or potential threats. The method includes capturing imagery of the scene proximate a platform, producing a depth map from the imagery and tessellating the depth map into a number of patches. The method also includes classifying the plurality of patches as threat patches and projecting the threat patches into a pre-generated vertical support histogram to facilitate selection of the projected threat patches having a score value within a sufficiency criterion. The method further includes grouping the selected patches having the score value using a plane fit to obtain a region of interest and processing the region of interest to detect said object. | 04-15-2010 |
20100092039 | Digital Image Processing Using Face Detection Information - A method of processing a digital image using face detection within the image achieves one or more desired image processing parameters. A group of pixels is identified that correspond to an image of a face within the digital image. Default values are determined of one or more parameters of at least some portion of the digital image. Values are adjusted of the one or more parameters within the digitally-detected image based upon an analysis of the digital image including the image of the face and the default values. | 04-15-2010 |
20100098292 | Image Detecting Method and System Thereof - An image detecting method and a system thereof are provided. The image detecting method includes the following steps. An original image is captured. A moving-object image of the original image is created. An edge-straight-line image of the original image is created, wherein the edge-straight-line image comprises a plurality of edge-straight-lines. Whether the original image has a mechanical moving-object image is detected according to the length, the parallelism and the gap of the part of the edge-straight-lines corresponding to the moving-object image. | 04-22-2010 |
20100098293 | Structure and Motion with Stereo Using Lines - A system and method are disclosed for estimating camera motion and structure reconstruction of a scene using lines. The system includes a line detection module, a line correspondence module, a temporal line tracking module and structure and motion module. The line detection module is configured to detect lines in visual input data comprising a plurality of image frames. The line correspondence module is configured to find line correspondence between detected lines in the visual input data. The temporal line tracking module is configured to track the detected lines temporally across the plurality of the image frames. The structure and motion module is configured to estimate the camera motion using the detected lines in the visual input data and to reconstruct three-dimensional lines from the estimated camera motion. | 04-22-2010 |
20100098294 | METHOD AND APPARATUS FOR DETECTING LANE - A method and an apparatus for detecting a lane are disclosed. The lane detecting apparatus includes: a region-of-interest setup setting a region of interest including a road region of a current lane in an acquired image; a road sign verifier verifying existence of a road sign within the set region of interest; an ROI setup calculating a difference value between a lane prediction result and previous lane information when there exists a road sign, and setting an ROI based on the calculated difference value; and a lane detector detecting a lane by extracting lane markings based on the set ROI. Accordingly, a lane can be more accurately detected even in a road environment including a road sign by removing the road sign to extract only the necessary lane markings. | 04-22-2010 |
20100098295 | CLEAR PATH DETECTION THROUGH ROAD MODELING - A method for detecting a clear path of travel for a vehicle including fusion of clear path detection by image analysis and road geometry data describing road geometry includes monitoring an image from a camera device on the vehicle, analyzing the image through clear path detection analysis to determine a clear path of travel within the image, monitoring the road geometry data, analyzing the road geometry data to determine an impact of the data to the clear path, modifying the clear path based upon the analysis of the road geometry data, and utilizing the clear path in navigation of the vehicle. | 04-22-2010 |
20100104134 | Interaction Using Touch and Non-Touch Gestures - A computer interface may use touch- and non-touch-based gesture detection systems to detect touch and non-touch gestures on a computing device. The systems may each capture an image, and interpret the image as corresponding to a predetermined gesture. The systems may also generate similarity values to indicate the strength of a match between a captured image and corresponding gesture, and the system may combine gesture identifications from both touch- and non-touch-based gesture identification systems to ultimately determine the gesture. A threshold comparison algorithm may be used to apply different thresholds for different gesture detection systems and gesture types. | 04-29-2010 |
20100104135 | MARKER GENERATING AND MARKER DETECTING SYSTEM, METHOD AND PROGRAM - A marker generating system is characterized in having a special feature extracting element that extracts a portion, as a special feature, including a distinctive pattern in a video image not including a marker; a unique special feature selecting element that, based on the extracted special feature, selects a special feature of an image, as a unique special feature, that does not appear on the video image; and a marker generating element that generates a marker based on the unique special feature. | 04-29-2010 |
20100104136 | METHOD AND APPARATUS FOR DETECTING THE PLACEMENT OF A GOLF BALL FOR A LAUNCH MONITOR - A novel method and apparatus for detecting the placement of a golf ball for a launch monitor is disclosed. The method comprises capturing an image of a scan zone that is adjacent to the launch monitor and in the field of view of the launch monitor's image sensor, analyzing the scan zone image for the placement of an object, and determining if the object is likely the golf ball. An apparatus is also disclosed that implements the golf ball detection method. | 04-29-2010 |
20100119109 | MULTI-CORE MULTI-THREAD BASED KANADE-LUCAS-TOMASI FEATURE TRACKING METHOD AND APPARATUS - A multi-core multi-thread based Kanade-Lucas-Tomasi (KLT) feature tracking method includes subdividing an input image into regions and allocating a core to each region; extracting KLT features for each region in parallel and in real time; and tracking the extracted features in the input image. Said extracting the features is carried out based on single-region/multi-thread/single-core architecture, while said tracking the features is carried out based on multi-feature/multi-thread/single-core architecture. | 05-13-2010 |
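The region/core allocation of 20100119109 can be mimicked with a thread pool: the input is subdivided and each region is handed to its own worker. The local-maxima "features" below stand in for real KLT corner features, and features straddling region borders are ignored in this sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_region_features(region, offset):
    # toy feature extractor: global indices of local maxima in the region
    return [offset + i for i in range(1, len(region) - 1)
            if region[i] > region[i - 1] and region[i] > region[i + 1]]

def parallel_extract(signal, n_regions=2):
    # subdivide the input and extract features per region in parallel,
    # mirroring the one-core-per-region allocation in the abstract
    size = len(signal) // n_regions
    chunks = [(signal[k * size:(k + 1) * size], k * size)
              for k in range(n_regions)]
    with ThreadPoolExecutor(max_workers=n_regions) as pool:
        results = pool.map(lambda c: extract_region_features(*c), chunks)
        return sorted(f for feats in results for f in feats)
```

In a real KLT pipeline each worker would run corner detection (minimum-eigenvalue selection) on its region; the partitioning and merge pattern stay the same.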
20100119110 | IMAGE DISPLAY DEVICE, COMPUTER READABLE STORAGE MEDIUM STORING IMAGE PROCESSING PROGRAM, AND IMAGE PROCESSING METHOD - An image processing apparatus includes an area dividing unit that divides an image obtained by capturing inside of a body lumen into one or more areas by using a value of a specific wavelength component that is specified in accordance with a degree of absorption or scattering in vivo from a plurality of wavelength components included in the image or wavelength components obtained by conversion of the plurality of wavelength components; and a target-of-interest site specifying unit that specifies a target-of-interest site in the area by using a discriminant criterion in accordance with an area obtained by the division. | 05-13-2010 |
20100119111 | TIME EXPANSION FOR DISPLAYING PATH INFORMATION - Embodiments of the present invention provide systems and methods for displaying sequential information representing a path. The sequential information can include a number of tokens representing a path. A representation of the tokens and path of the sequential information can be displayed. An instruction to adjust the representation of the path of the sequential information can be received. For example, the instruction can comprise a user instruction, including but not limited to a user manipulation of a slider control of a user interface through which the representation of the sequence is displayed. The displayed representation of the path of the sequential information can be updated based on and corresponding to the instruction. For example, the user can click and drag or otherwise manipulate the slider control, and the displayed representation of the path can be expanded and/or contracted based on the user's movement of the slider control. | 05-13-2010 |
20100119112 | GRAPHICAL REPRESENTATIONS FOR AGGREGATED PATHS - Techniques for displaying path-related information. Techniques are provided for generating and displaying graphical representations for a path. For example, radial histograms, radial vector plots, and other graphical representations may be rendered for multiple paths aggregated together. | 05-13-2010 |
20100119113 | METHOD AND APPARATUS FOR DETECTING OBJECTS - A method for detecting an object on an image representable by picture elements includes: “determining first and second adaptive thresholds for picture elements of the image, depending on an average intensity in a region around the respective picture element”, “determining partial objects of picture elements of a first type that are obtained based on a comparison with the first adaptive threshold”, “determining picture elements of a second type that are obtained based on a comparison with the second adaptive threshold” and “combining a first and a second one of the partial objects to an extended partial object by picture elements of the second type, when a minimum distance exists between the first and the second of the partial objects, wherein the object to be detected can be described by a sum of the partial objects of picture elements of the first type and/or the obtained extended partial objects”. | 05-13-2010 |
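In one dimension, the two-threshold grouping of 20100119113 reads like hysteresis linking: runs of first-type (strong) samples form partial objects, and two partial objects merge when the gap between them is short and bridged by second-type (weak) samples. Fixed global thresholds replace the patent's adaptive, neighbourhood-averaged ones for brevity:

```python
def detect_objects(signal, t1, t2, max_gap=2):
    # t1 > t2: strong/weak thresholds (adaptive in the patent, fixed here)
    strong = [v >= t1 for v in signal]
    weak = [v >= t2 for v in signal]
    # collect runs of strong samples as [start, end] partial objects
    runs, i = [], 0
    while i < len(signal):
        if strong[i]:
            j = i
            while j + 1 < len(signal) and strong[j + 1]:
                j += 1
            runs.append([i, j])
            i = j + 1
        else:
            i += 1
    # merge adjacent partial objects bridged by weak samples over a small gap
    merged = []
    for run in runs:
        if merged and run[0] - merged[-1][1] - 1 <= max_gap and \
           all(weak[k] for k in range(merged[-1][1] + 1, run[0])):
            merged[-1][1] = run[1]   # extend the previous partial object
        else:
            merged.append(run)
    return merged
```

A weak bridge joins the first two strong runs into one extended partial object, while a gap of sub-threshold samples keeps the last run separate.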
20100124356 | DETECTING OBJECTS CROSSING A VIRTUAL BOUNDARY LINE - An approach that detects objects crossing a virtual boundary line is provided. Specifically, an object detection tool provides this capability. The object detection tool comprises a boundary component configured to define a virtual boundary line in a video region of interest, and establish a set of ground patch regions surrounding the virtual boundary line. The object detection tool further comprises an extraction component configured to extract a set of attributes from each of the set of ground patch regions, and update a ground patch history model with the set of attributes from each of the set of ground patch regions. An analysis component is configured to analyze the ground patch history model to detect whether an object captured in at least one of the set of ground patch regions is crossing the virtual boundary line in the video region of interest. | 05-20-2010 |
20100124357 | SYSTEM AND METHOD FOR MODEL BASED PEOPLE COUNTING - An approach that allows for model based people counting is provided. In one embodiment, there is a generating tool configured to generate a set of person-shape models based on results of a cumulative training process; a detecting tool configured to detect persons in a camera field-of-view by using the set of person-shape models, and a counting tool configured to track detected persons upon crossing by the detected persons of a previously established virtual boundary. | 05-20-2010 |
20100124358 | METHOD FOR TRACKING MOVING OBJECT - A method for tracking a moving object is provided. The method detects the moving object in a plurality of continuous images so as to obtain space information of the moving object in each of the images. In addition, appearance features of the moving object in each of the images are captured to build an appearance model. Finally, the space information and the appearance model are combined to track a moving path of the moving object in the images. Accordingly, the present invention is able to keep tracking the moving object even if the moving object leaves the monitoring frame and returns again, so as to assist the supervisor in finding abnormal acts and taking subsequent actions. | 05-20-2010 |
20100124359 | METHOD AND SYSTEM FOR AUTOMATIC DETECTION OF A CLASS OF OBJECTS - An apparatus and method for providing automatic threat detection using passive millimeter wave detection and image processing analysis. | 05-20-2010 |
20100124360 | METHOD AND APPARATUS FOR RECORDING EVENTS IN VIRTUAL WORLDS - A method and an apparatus for recording an event in a virtual world. The method includes acquiring camera view regions of avatars joining the event; identifying one or more key avatars and/or key objects based on information about the targets in the camera view regions of the avatars; setting one or more recorders for the identified one or more key avatars and/or key objects for recording the event such that the one or more key avatars and/or key objects are located in the camera view regions of the one or more recorders. The apparatus includes devices configured to perform the steps of the method. | 05-20-2010 |
20100128926 | ITERATIVE MOTION SEGMENTATION - An image processing device which simultaneously secures and extracts a background image, at least two object images, a shape of each object image and motion of each object image, from among plural images, the image processing device including an image input unit ( | 05-27-2010 |
20100128927 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - By a method such as foreground extraction or facial extraction, the area of a target object is detected from an input image, and the feature amount such as the center of gravity, size, and inclination is acquired. Using the value of a temporarily-set internal parameter, edge image generation, particle generation, and transition are carried out, and a contour is estimated by obtaining the probability density distribution by observing the likelihood. Comparing a feature amount obtained from the estimated contour with a feature amount of the area of the target object, the temporary setting is reset by determining that the value for the temporary setting is not appropriate when the degree of matching between the two is smaller than a reference value. When the degree of matching is larger than the reference value, the value of the parameter is determined to be the final value. | 05-27-2010 |
20100128928 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing apparatus including a dynamic body detecting unit for detecting a dynamic body contained in a moving image, a dynamic body region setting unit for, during a predetermined time from a time point the dynamic body is detected by the dynamic body detecting unit, setting a region containing the dynamic body at the detection time point as a dynamic body region, and a fluctuation removable processing unit for performing a fluctuation removal process on a region other than the dynamic body region set by the dynamic body region setting unit. | 05-27-2010 |
20100128929 | IMAGE PROCESSING APPARATUS AND METHOD FOR TRACKING A LOCATION OF A TARGET SUBJECT - A digital image processing apparatus has a tracking function for tracking a location variation of a set tracking area on a plurality of frame images. The digital image processing apparatus includes a similarity calculation unit that calculates a similarity by varying a location of a template on one frame image. The similarity calculation unit calculates a second direction similarity by fixing a first direction location of the template in a first direction on the one frame image and by varying a second direction location of the template in a second direction which is perpendicular to the first direction, and then calculates a first direction similarity by fixing the second direction location of the template at a location where the second direction similarity is the highest and by varying the first direction location of the template in the first direction on the one frame image. | 05-27-2010 |
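The two-pass search of 20100128929 replaces a full 2D template scan with two 1D scans: hold the template's horizontal location fixed while scoring vertical candidates, then hold the best vertical location and scan horizontally. A minimal Python sketch, assuming a negative-SSD similarity and a known starting column `x0` (both illustrative choices; the patent does not fix the similarity measure):

```python
import numpy as np

def _similarity(frame, template, y, x):
    """Negative SSD between the template and the frame patch at (y, x)."""
    h, w = template.shape
    patch = frame[y:y + h, x:x + w]
    return -float(np.sum((patch - template) ** 2))

def two_pass_track(frame, template, x0):
    """Locate the template with two 1D searches instead of a full 2D scan.

    Pass 1 fixes the horizontal location at x0 and varies the vertical
    location; pass 2 fixes the best vertical location and varies the
    horizontal one.
    """
    h, w = template.shape
    H, W = frame.shape
    best_y = max(range(H - h + 1),
                 key=lambda y: _similarity(frame, template, y, x0))
    best_x = max(range(W - w + 1),
                 key=lambda x: _similarity(frame, template, best_y, x))
    return best_y, best_x
```

The cost drops from O(H·W) similarity evaluations to O(H + W), at the price of depending on a reasonable starting column.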
20100128930 | DETECTION OF ABANDONED AND VANISHED OBJECTS - Disclosed herein are a method and system for classifying a detected region of change of a video frame as one of an abandoned object event and an object removal event, wherein a plurality of boundary blocks define a boundary of said region of change. For each one of a set of said boundary blocks (…). | 05-27-2010 |
20100135527 | Image recognition algorithm, method of identifying a target image using same, and method of selecting data for transmission to a portable electronic device - An image recognition algorithm includes a keypoints-based comparison and a region-based color comparison. A method of identifying a target image using the algorithm includes: receiving an input at a processing device, the input including data related to the target image; performing a retrieving step including retrieving an image from an image database, and, until the image is either accepted or rejected, designating the image as a candidate image; performing an image recognition step including using the processing device to perform an image recognition algorithm on the target and candidate images in order to obtain an image recognition algorithm output; and performing a comparison step including: if the image recognition algorithm output is within a pre-selected range, accepting the candidate image as the target image; and if the image recognition algorithm output is not within the pre-selected range, rejecting the candidate image and repeating the retrieving, image recognition, and comparison steps. | 06-03-2010 |
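The retrieve–recognize–compare loop of 20100135527 reduces to iterating over candidate images until one scores inside the pre-selected range. A minimal Python sketch, where `recognize` and `accept_range` stand in for the patent's keypoint/color algorithm and its acceptance band (both are assumptions for illustration):

```python
def identify_target(target, database, recognize, accept_range):
    """Retrieve candidates one by one; accept the first whose recognition
    score falls inside the pre-selected range, otherwise reject it and
    continue.  Returns None when every candidate is rejected."""
    lo, hi = accept_range
    for candidate in database:
        if lo <= recognize(target, candidate) <= hi:
            return candidate    # accepted as the target image
    return None                 # all candidates rejected
```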
20100135528 | ANALYZING REPETITIVE SEQUENTIAL EVENTS - Techniques for analyzing one or more sequential events performed by a human actor to evaluate efficiency of the human actor are provided. The techniques include identifying one or more segments in a video sequence as one or more components of one or more sequential events performed by a human actor, integrating the one or more components into one or more sequential events by incorporating a spatiotemporal model and one or more event detectors, and analyzing the one or more sequential events to analyze behavior of the human actor. | 06-03-2010 |
20100135529 | Systems and methods for tracking images - Image tracking as described herein can include: segmenting a first image into regions; determining an overlap of intensity distributions in the regions of the first image, and segmenting a second image into regions such that an overlap of intensity distributions in the regions of the second image is substantially similar to the overlap of intensity distributions in the regions of the first image. In certain embodiments, images can depict a heart at different points in time and the tracked regions can be the left ventricle cavity and the myocardium. In such embodiments, segmenting the second image can include generating first and second curves that track the endocardium and epicardium boundaries, and the curves can be generated by minimizing functions containing a coefficient based on the determined overlap of intensity distributions in the regions of the first image. | 06-03-2010 |
20100135530 | METHODS AND SYSTEMS FOR CREATING A HIERARCHICAL APPEARANCE MODEL - A method for creating an appearance model of an object includes receiving an image of the object and creating a hierarchical appearance model of the object from the image of the object. The hierarchical appearance model has a plurality of layers, each layer including one or more nodes. Nodes in each layer contain information of the object with a corresponding level of detail. Nodes in different layers of the hierarchical appearance model correspond to different levels of detail. | 06-03-2010 |
20100135531 | Position Alignment Method, Position Alignment Device, and Program - A position alignment method, a position alignment device, and a program in which processing load can be reduced are proposed. A group of some points in a first set of points extracted from an object appearing in one image and a group of some points in a second set of points extracted from an object appearing in another image are used as a reference, and the second set of points is aligned with respect to the first set of points. Thereafter, all the points in the first set of points and all the points in the aligned second set of point are used as a reference, and the second set of points is aligned with respect to the first set of points. | 06-03-2010 |
20100135532 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM FOR STORING PROGRAM - An image processing apparatus comprises an image capture unit configured to capture an image, a characteristic part detector configured to detect a characteristic part of a face from the image captured by the image capture unit, an outline generator configured to generate a pseudo outline of the face based on positions of the characteristic part detected by the characteristic part detector and a correction unit configured to correct the image based on the pseudo outline generated by the outline generator. | 06-03-2010 |
20100142758 | Method for Providing Photographed Image-Related Information to User, and Mobile System Therefor - System for providing a mobile user with object-related information about an object visible to the user, the system including a camera directable toward the object, a local interest points and semi global geometry (LIPSGG) extraction processor, and a remote LIPSGG identifier. The camera acquires an image of at least a portion of the object; the LIPSGG extraction processor, coupled with the camera, extracts an LIPSGG model of the object from the image; the remote LIPSGG identifier, coupled with the LIPSGG extraction processor via a network, receives the LIPSGG model from the LIPSGG extraction processor, identifies the object according to the LIPSGG model, retrieves the object-related information, and provides it to the mobile user operating the camera. | 06-10-2010 |
20100150399 | APPARATUS AND METHOD FOR OPTICAL GESTURE RECOGNITION - An optical gesture recognition system is shown having a first light source and a first optical receiver configured to receive reflected light from an object when the first light source is activated and output a first measured reflectance value corresponding to an amplitude of the reflected light. A processor is configured to receive the first measured reflectance value and to compare the first measured reflectance value at first and second points in time to track motion of the object and identify a gesture of the object corresponding to the tracked motion of the object. | 06-17-2010 |
20100150400 | INFORMATION PROCESSOR, INFORMATION PROCESSING METHOD, AND COMPUTER READABLE MEDIUM - A first movement control section sequentially moves a first image to multiple first positions. A first comparison section compares the moved first image with a second image. A target first position selection section selects a target first position based on the result of said comparison. After the target first position is selected, the second movement control section sequentially moves the first image to multiple second positions located in the periphery of the target first position. The second comparison section compares the moved first image with the second image. A target second position selection section selects a target second position based on the result of said comparison. A second position alignment execution section performs geometric transformation based on the difference between the position of the first image and the target second position and aligns the positions of the first and second images. | 06-17-2010 |
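The two-stage alignment of 20100150400 — comparing at a sparse set of "first positions", then comparing densely in the periphery of the selected target first position — can be sketched in Python. The negative-SSD score and the wrap-around `np.roll` shifts are illustrative simplifications, not the patent's method:

```python
import numpy as np

def best_offset(img, ref, candidates):
    """Return the candidate (dy, dx) whose shifted copy of `img` best
    matches `ref` (negative SSD; an illustrative comparison metric)."""
    def score(dy, dx):
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        return -float(np.sum((shifted - ref) ** 2))
    return max(candidates, key=lambda c: score(*c))

def coarse_to_fine_align(img, ref, step=4, radius=2):
    """Sparse grid of 'first positions', then a dense search in the
    periphery of the winning position."""
    h, w = img.shape
    coarse = [(dy, dx) for dy in range(-h // 2, h // 2, step)
                       for dx in range(-w // 2, w // 2, step)]
    cy, cx = best_offset(img, ref, coarse)
    fine = [(cy + dy, cx + dx) for dy in range(-radius, radius + 1)
                               for dx in range(-radius, radius + 1)]
    return best_offset(img, ref, fine)
```

The final geometric transformation of the patent would then be a translation by the returned offset.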
20100150401 | Target tracker - For tracking a target object in a time series of frames of image data, a tracking object designation acceptor accepts a designation of a tracking object, a target color setter sets a color of the designated tracking object as a target color, and a particle filter processor employs particles for measurements that determine color likelihoods by comparing the target color with colors in the vicinities of the particles. While the color likelihoods meet a criterion, the processor estimates a region of the tracking object in a frame of image data in accordance with the measurement results; when the color likelihoods fail to meet the criterion, it uses particles for measurements that determine luminance likelihoods based on luminance differences between frames in the time series, and estimates the region of the tracking object from those measurement results. The target color is then updated with a color in whichever region was estimated. | 06-17-2010 |
20100158312 | Method for tracking and processing image - The invention relates to an image processing method that can be used to calibrate the background quickly. When the external environment changes, for example due to lights being switched, the background color is calibrated quickly and the background is updated accordingly. The method can be used not only to update the background, but also to eliminate renewed convergence of the background. | 06-24-2010 |
20100158313 | COUPLING ALIGNMENT APPARATUS AND METHOD - An apparatus for axially aligning a first coupling member and a second coupling member that can be connected so as to form a rotating assembly. The apparatus includes a measurement arrangement configured to be mounted onto the first coupling member and to be rotated therewith. The measurement arrangement includes an emitter arrangement configured to emit first and second signals in the direction of the second coupling member so as to cause at least a portion of said first and second signals to be reflected by the second coupling member. The measurement apparatus further has a capture arrangement configured to capture at least a portion of the first and second reflected signals. The apparatus includes a control arrangement configured to determine an offset in axial alignment between the first and second coupling member based on at least the first and second reflected signals. | 06-24-2010 |
20100158314 | METHOD AND APPARATUS FOR MONITORING TREE GROWTH - A system for identifying forest stands within an area of interest that are exhibiting abnormal growth determines a relationship between vegetation index (VI) values determined from a first and a second image of the area of interest. From the relationship, an expected or predicted VI value for each forest stand is determined and compared with the actual VI value computed for the forest stand from the first image. Those forest stands with a difference between the actual and predicted VI values that exceed a threshold are identified as exhibiting abnormal growth. | 06-24-2010 |
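The stand-screening logic of 20100158314 amounts to fitting a relationship between the two images' VI values, predicting each stand's VI from it, and flagging stands with large residuals. A Python sketch assuming a simple linear relationship (the abstract leaves the form of the relationship open):

```python
import numpy as np

def abnormal_stands(vi_a, vi_b, threshold):
    """Flag forest stands whose vegetation index departs from the value
    predicted by a linear fit of vi_b against vi_a across all stands.
    Returns a boolean array: True = abnormal growth."""
    vi_a = np.asarray(vi_a, dtype=float)
    vi_b = np.asarray(vi_b, dtype=float)
    slope, intercept = np.polyfit(vi_a, vi_b, 1)   # fitted relationship
    predicted = slope * vi_a + intercept
    return np.abs(vi_b - predicted) > threshold
```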
20100158315 | SPORTING EVENT IMAGE CAPTURE, PROCESSING AND PUBLICATION - Systems, methods and software are disclosed for capturing and/or importing and processing media items such as digital images or video (…). | 06-24-2010 |
20100158316 | Action estimating apparatus, method for updating estimation model, and program - A storage unit stores a model defining a position or a locus of a feature point of an occupant in each specific action. An action estimation unit compares the feature point with each of the models to detect an estimated action. A detecting unit detects that a specific action is being performed as a definite action. A first generating unit generates a new definite model corresponding to the definite action by modifying a position or a locus of the feature point according to an in-action feature point when the definite action is being performed. A second generating unit generates a new non-definite model using the in-action feature point according to a correspondence between the feature point in the definite action and the feature point of a non-definite model other than the definite model. An update unit updates the definite action model and the non-definite action model. | 06-24-2010 |
20100166256 | Method and apparatus for identification and position determination of planar objects in images - A method of identifying a planar object in source images is disclosed. In at least one embodiment, the method includes: retrieving a first source image obtained by a first terrestrial based camera; retrieving a second source image obtained by a second terrestrial based camera; retrieving position data associated with the first and second source image; retrieving orientation data associated with the first and second source image; performing a looking axis rotation transformation on the first and second source image by use of the associated position and orientation data to obtain first and second intermediate images, wherein the first and second intermediate images have an identical looking axis; performing a radial logarithmic space transformation on the first and second intermediate images to obtain first and second radial logarithmic data images; detecting an area in the first image potentially being a planar object; comparing the potential planar object with an area having similar dimensions and similar RGB characteristics in the second radial logarithmic data image; and finally, identifying the area as a planar object and determining its position. At least one embodiment of the method enables very efficient detection of planar perpendicular objects in subsequent images. | 07-01-2010 |
20100166257 | METHOD AND APPARATUS FOR DETECTING SEMI-TRANSPARENCIES IN VIDEO - A method and apparatus for detecting semi-transparencies in video is disclosed. | 07-01-2010 |
20100166258 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR PROVIDING HAND SEGMENTATION FOR GESTURE ANALYSIS - A method for providing hand segmentation for gesture analysis may include determining a target region based at least in part on depth range data corresponding to an intensity image. The intensity image may include data descriptive of a hand. The method may further include determining a point of interest of a hand portion of the target region, determining a shape corresponding to a palm region of the hand, and removing a selected portion of the target region to identify a portion of the target region corresponding to the hand. An apparatus and computer program product corresponding to the method are also provided. | 07-01-2010 |
20100166259 | OBJECT ENUMERATING APPARATUS AND OBJECT ENUMERATING METHOD - An object enumerating apparatus comprises means for generating and binarizing inter-frame differential data from moving image data representative of a photographed object under detection, means for extracting feature data from a plurality of the inter-frame binary differential data directly adjacent to each other on a pixel-by-pixel basis through cubic higher-order local auto-correlation, means for calculating a coefficient of each factor vector from the feature data and a factor matrix comprised of a plurality of factor vectors previously generated through learning using a factor analysis and arranged for one object under detection, and means for adding a plurality of the coefficients for one object under detection and rounding off the sum to the closest integer, which represents a quantity. Owing to small fluctuations in the sum of coefficients and accurate matching with the quantity of objects intended for recognition, recognition can be accomplished with robustness to differences in scale and speed of objects and to dynamic changes thereof. | 07-01-2010 |
20100166260 | METHOD FOR AUTOMATIC DETECTION AND TRACKING OF MULTIPLE TARGETS WITH MULTIPLE CAMERAS AND SYSTEM THEREFOR - A method for automatically detecting and tracking multiple targets in a multi-camera surveillance zone and system thereof. In each camera view of the system, only a simple object detection algorithm is needed. The detection results from multiple cameras are fused into a posterior distribution, named TDP, based on the Bayesian rule. This TDP distribution represents a likelihood of presence of some moving targets on the ground plane. To properly handle the tracking of multiple moving targets over time, a sample-based framework which combines Markov Chain Monte Carlo (MCMC), Sequential Monte Carlo (SMC), and Mean-Shift Clustering is provided. The detection and tracking accuracy is evaluated on both synthesized videos and real videos. The experimental results show that this method and system can accurately track a varying number of targets. | 07-01-2010 |
20100166261 | SUBJECT TRACKING APPARATUS AND CONTROL METHOD THEREFOR, IMAGE CAPTURING APPARATUS, AND DISPLAY APPARATUS - A subject tracking apparatus extracts a subject region which is similar to a reference image on the basis of a degree of correlation with the reference image for tracking a predetermined subject from images supplied in a time series manner. Further, the subject tracking apparatus detects the position of the predetermined subject in the subject region on the basis of the distribution of characteristic pixels representing the predetermined subject contained in the subject region, and corrects the subject region so as to reduce a shift in position of the predetermined subject in the subject region. Moreover, the corrected subject region is taken as the result of tracking the predetermined subject, and the reference image is updated with the corrected subject region as the reference image to be used for the next supplied image. | 07-01-2010 |
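The correction step of 20100166261 — shifting the extracted subject region so the subject sits at its center, based on the distribution of characteristic pixels — can be sketched with a simple centroid. The `(top, left, height, width)` box representation and the plain centroid are assumptions for illustration:

```python
import numpy as np

def recenter_region(characteristic_mask, region):
    """Shift a tracking box so its center moves onto the centroid of the
    characteristic pixels found inside it.  `region` is
    (top, left, height, width); the corrected box has the same size."""
    top, left, h, w = region
    window = characteristic_mask[top:top + h, left:left + w]
    ys, xs = np.nonzero(window)
    if len(ys) == 0:
        return region                                # nothing to correct against
    cy, cx = ys.mean() + top, xs.mean() + left       # centroid in image coords
    return (int(round(cy - h / 2)), int(round(cx - w / 2)), h, w)
```

The corrected box would then serve as the reference image for the next frame, as the abstract describes.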
20100166262 | MULTI-MODAL OBJECT SIGNATURE - Disclosed herein are a method and system for appearance-invariant tracking of an object in an image sequence. A track is associated with the image sequence, wherein the track has an associated track signature comprising at least one mode. The method detects the object in a frame of the image sequence (…). | 07-01-2010 |
20100172541 | TARGETING METHOD, TARGETING DEVICE, COMPUTER READABLE MEDIUM AND PROGRAM ELEMENT - According to an exemplary embodiment a targeting method for targeting a first object from an entry point to a target point in an object (…). | 07-08-2010 |
20100172542 | BUNDLING OF DRIVER ASSISTANCE SYSTEMS - A traffic sign recognition system including a detection mechanism adapted for detecting a candidate traffic sign and a recognition mechanism adapted for recognizing the candidate traffic sign as being an electronic traffic sign. A partitioning mechanism may be adapted for partitioning the image frames into a first partition and a second partition. The detection mechanism may use the first partition of the image frames and the recognition mechanism may use the second partition of the image frames. When the candidate traffic sign is detected as an electronic traffic sign, the recognition mechanism may use both the first partition and the second partition of the image frames. | 07-08-2010 |
20100177929 | ENHANCED SAFETY DURING LASER PROJECTION - The present invention is directed to systems and methods that provide enhanced eye safety for image projection systems. In particular, the instant invention provides enhanced eye safety for long throw laser projection systems. | 07-15-2010 |
20100177930 | METHODS FOR DETERMINING A WAVEFRONT POSITION - The present disclosure relates to methods for determining a wavefront position of a liquid on a surface of an assay test strip, including: placing a liquid on the surface of the test strip; acquiring one or more signals from the surface of the test strip at one or more times; and comparing the one or more acquired signals to a threshold, wherein the wavefront position is a position on the surface of the test strip where a signal is greater than or less than a threshold (e.g., a fixed or dynamic threshold). Such methods may be used to determine the wavefront velocity of a liquid on a surface of an assay test strip and the transit time of a liquid sample to traverse the one or more positions on the surface of the assay test strip. | 07-15-2010 |
20100177931 | VIRTUAL OBJECT ADJUSTMENT VIA PHYSICAL OBJECT DETECTION - Various embodiments related to the location and adjustment of a virtual object on a display in response to a detected physical object are disclosed. One disclosed embodiment provides a computing device comprising a multi-touch display, a processor and memory comprising instructions executable by the processor to display on the display a virtual object, to detect a change in relative location between the virtual object and a physical object that constrains a viewable area of the display, and to adjust a location of the virtual object on the display in response to detecting the change in relative location between the virtual object and the physical object. | 07-15-2010 |
20100177932 | OBJECT DETECTION APPARATUS AND OBJECT DETECTION METHOD - An object detection apparatus includes an image acquisition unit that acquires image data, a reading unit that reads the acquired image data in a predetermined image area at predetermined resolution, an object area detection unit that detects an object area from first image data read by the reading unit, an object discrimination unit that discriminates a predetermined object from the object area detected by the object area detection unit, and a determination unit that determines an image area and resolution used to read second image data which is captured later than the first image data from the object area detected by the object area detection unit, wherein the reading unit reads the second image data from the image area at the resolution determined by the determination unit. | 07-15-2010 |
20100183192 | SYSTEM AND METHOD FOR OBJECT MOTION DETECTION BASED ON MULTIPLE 3D WARPING AND VEHICLE EQUIPPED WITH SUCH SYSTEM - The present invention relates to a technique for detecting dynamic (i.e., moving) objects using sensor signals with 3D information and can be deployed e.g. in driver assistance systems. | 07-22-2010 |
20100183193 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND INTEGRATED CIRCUIT FOR PROCESSING IMAGES - This image processing apparatus, for photographed images taken at a predetermined time interval and input sequentially, specifies an image area as the target of predetermined processing. The apparatus (i) has processing capability to generate, in accordance with a particular input photographed image, reduced images at K (K≧1) ratios within the predetermined time interval, (ii) selects, for each photographed image that is input, M (M≦K) or fewer ratios from among L (L>K) different ratios in accordance with ratios indicated for a photographed image input prior to the photographed image, (iii) compares each of the reduced images generated at the selected M or fewer ratios with template images, and (iv) in accordance with the comparison results, specifies the image area. | 07-22-2010 |
20100183194 | THREE-DIMENSIONAL MEASURING DEVICE - A three-dimensional measuring device includes an irradiation device configured to irradiate and switch among a multiplicity of light patterns having different periods and having a striped light intensity distribution on at least a measurement object, a camera having an imaging element capable of imaging reflected light from the measurement object irradiated by the light pattern, a rack configured to cause relative change in positional relationship between the imaging element and the measurement object, and a control device configured to perform three-dimensional measurements based on image data imaged by the camera. The control device performs the three-dimensional measurements by performing a phase shift method calculation of height data as a first height data for each pixel unit of image data based on a multiply phase-shifted image data obtained by irradiating on a first position a multiply phase-shifted first light pattern having a first period. | 07-22-2010 |
20100183195 | Method and Apparatus for Object Detection in an Image - A method and apparatus for detecting at least one of a location and a scale of an object in an image. The method comprising distinguishing the trailing and leading edges of a moving object in at least one portion of the image, applying a symmetry detection filter to at least a portion of the image to produce symmetry scores relating to the at least one portion of the image, and identifying at least one location corresponding to locally maximal symmetry scores of the symmetry scores relating to the at least one portion of the image, and utilizing the at least one location of the locally maximal symmetry scores to detect at least one of a location and a scale of the object in the image, wherein the scale relates to the size of the symmetry detection filter. | 07-22-2010 |
20100183196 | DYNAMIC TRACKING OF SOFT TISSUE TARGETS WITH ULTRASOUND IMAGES, WITHOUT USING FIDUCIAL MARKERS - An apparatus and method of dynamically tracking a soft tissue target with ultrasound images, without the use of fiducial markers. In one embodiment, the apparatus includes an ultrasound imager to generate a reference ultrasound and a first ultrasound image having a soft tissue target, and a processing device coupled to the ultrasound imager to receive the reference ultrasound image and the first ultrasound image, to register the first ultrasound image with the reference ultrasound image, and to determine a displacement of the soft tissue target based on registration of the first ultrasound image with the reference ultrasound image. | 07-22-2010 |
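The abstract of 20100183196 does not specify how the first ultrasound image is registered with the reference; a brute-force rigid-translation sketch in Python illustrates the idea of registering one image to the other and reading the soft-tissue displacement off the best shift (the integer-shift search and SSD score are assumptions, not the patent's method):

```python
import numpy as np

def register_translation(moving, reference, max_shift=5):
    """Estimate the (dy, dx) displacement that best registers `moving`
    onto `reference` by exhaustive search over small integer shifts."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = -float(np.sum((shifted - reference) ** 2))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

A practical system would use a subpixel or deformable registration, but the returned offset plays the same role as the patent's "displacement of the soft tissue target".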
20100195867 | VISUAL TARGET TRACKING USING MODEL FITTING AND EXEMPLAR - A method of tracking a target includes receiving an observed depth image of the target from a source and analyzing the observed depth image with a prior-trained collection of known poses to find an exemplar pose that represents an observed pose of the target. The method further includes rasterizing a model of the target into a synthesized depth image having a rasterized pose and adjusting the rasterized pose of the model into a model-fitting pose based, at least in part, on differences between the observed depth image and the synthesized depth image. Either the exemplar pose or the model-fitting pose is then selected to represent the target. | 08-05-2010 |
20100195868 | TARGET-LOCKING ACQUISITION WITH REAL-TIME CONFOCAL (TARC) MICROSCOPY - Presented herein is a real-time target-locking confocal microscope that follows an object moving along an arbitrary path, even as it simultaneously changes its shape, size and orientation. This Target-locking Acquisition with Realtime Confocal (TARC) microscopy system integrates fast image processing and rapid image acquisition using, for example, a Nipkow spinning-disk confocal microscope. The system acquires a 3D stack of images, performs a full structural analysis to locate a feature of interest, moves the sample in response, and then collects the next 3D image stack. In this way, data collection is dynamically adjusted to keep a moving object centered in the field of view. The system's capabilities are demonstrated by target-locking freely-diffusing clusters of attractive colloidal particles, and actively-transported quantum dots (QDs) endocytosed into live cells free to move in three dimensions for several hours. During this time, both the colloidal clusters and live cells move distances several times the length of the imaging volume. Embodiments may be applied to other applications, such as manufacturing, open water observation of marine life, aerial observation of flying animals, or medical devices, such as tumor removal. | 08-05-2010 |
20100195869 | VISUAL TARGET TRACKING - A visual target tracking method includes representing a human target with a machine-readable model configured for adjustment into a plurality of different poses and receiving an observed depth image of the human target from a source. The observed depth image is compared to the model. A refine-z force vector is then applied to one or more force-receiving locations of the model to move a portion of the model towards a corresponding portion of the observed depth image if that portion of the model is Z-shifted from that corresponding portion of the observed depth image. | 08-05-2010 |
20100195870 | TRACKING METHOD AND DEVICE ADOPTING A SERIES OF OBSERVATION MODELS WITH DIFFERENT LIFE SPANS - The present invention relates to a tracking method and a tracking device adopting multiple observation models with different life spans. The tracking method is suitable for tracking an object in a low frame rate video or with abrupt motion, and uses three observation models with different life spans to track and detect a specific subject in frame images of a video sequence. An observation model I performs online learning with one frame image prior to the current image, an observation model II performs online learning with five frames prior to the current image, and an observation model III is offline trained. The three observation models are combined by a cascade particle filter so that the specific subject in the low frame rate video or the object with abrupt motion can be tracked quickly and accurately. | 08-05-2010 |
20100202656 | Ultrasonic Doppler System and Method for Gesture Recognition - A method and system recognizes an unknown gesture by directing an ultrasonic signal at an object making the unknown gesture. A set of Doppler signals is acquired from the ultrasonic signal after reflection by the object. Doppler features are extracted from the reflected Doppler signals, and the features are classified using a set of Doppler models storing the Doppler features and identities of known gestures, with one Doppler model for each known gesture, to recognize and identify the unknown gesture. | 08-12-2010 |
20100202657 | SYSTEM AND METHOD FOR OBJECT DETECTION FROM A MOVING PLATFORM - The present invention relates to a system and method for detecting one or more targets belonging to a first class (e.g., moving and/or stationary people), from a moving platform in a 3D-rich environment. The framework described here is implemented using a number of monocular or stereo cameras distributed around the vehicle to provide 360 degrees coverage. Furthermore, the framework described here utilizes numerous filters to reduce the number of false positive identifications of the targets. | 08-12-2010 |
20100202658 | Drowsiness detector - A drowsiness detector detects the drowsiness by measuring a distance of an eyebrow at three points from a reference line defined by an inner eye corner and an outer eye corner. The three distances of the eyebrow from the reference line are respectively standardized by an inter-eye distance between the inner eye corners of the left and right eyes, and are respectively compared with thresholds for determining the rise of the eyebrow. The rise of the eyebrow is then translated as the start of the drowsiness, and is associated with an operation such as a doze prevention operation or the like. | 08-12-2010 |
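The geometric measurement in the drowsiness-detector entry above reduces to point-to-line distances normalized by a fixed facial scale. The sketch below uses assumed landmark coordinates and assumed thresholds; it is only the geometry, not the patent's landmark detector.

```python
# Minimal geometry sketch: distance of eyebrow points from the reference
# line through the inner and outer eye corners, standardized by the
# inter-eye distance between the two inner corners (assumed coordinates).
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    return num / math.hypot(bx - ax, by - ay)

def eyebrow_rise(brow_pts, inner, outer, other_inner):
    """Standardized distances of eyebrow points from the eye-corner line;
    dividing by the inter-eye distance gives scale invariance."""
    inter_eye = math.dist(inner, other_inner)
    return [point_line_distance(p, inner, outer) / inter_eye for p in brow_pts]

# Toy frontal face: eye corners on y = 0, eyebrow roughly 20 px above,
# inner eye corners 40 px apart.
d = eyebrow_rise([(10, 20), (20, 22), (30, 20)], (0, 0), (40, 0), (-40, 0))
# A rise of these values past per-point thresholds would be read as the
# start of drowsiness in the scheme described above.
```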
20100202659 | IMAGE SAMPLING IN STOCHASTIC MODEL-BASED COMPUTER VISION - A method for tracking a target in computer vision is disclosed. The method generates an integral image ( | 08-12-2010 |
20100202660 | OBJECT TRACKING SYSTEMS AND METHODS - An object tracking method may include: receiving frames of data containing image information of an object; performing an object segmentation to obtain an object motion result; and using the object motion result to conduct an object tracking. In particular, the object segmentation may include: extracting motion vectors from the frames of data; estimating a global motion using the motion vectors; and subtracting the global motion from the motion vectors to generate an object motion result. | 08-12-2010 |
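The segmentation steps in the entry above (extract motion vectors, estimate global motion, subtract it) can be sketched directly. The componentwise median as the global-motion estimator and the residual threshold are assumptions for illustration, not the patent's stated choices.

```python
# Hedged sketch of object segmentation by global-motion subtraction:
# blocks whose residual motion (after removing estimated camera motion)
# exceeds a threshold are marked as belonging to the moving object.
from statistics import median

def segment_moving_object(motion_vectors, thresh=2.0):
    gx = median(v[0] for v in motion_vectors)   # global motion estimate
    gy = median(v[1] for v in motion_vectors)
    residuals = [(vx - gx, vy - gy) for vx, vy in motion_vectors]
    mask = [(rx * rx + ry * ry) ** 0.5 > thresh for rx, ry in residuals]
    return (gx, gy), mask

# Camera panning right by ~5 px; one block (the object) moves differently.
vecs = [(5, 0)] * 7 + [(12, 4)]
(gx, gy), mask = segment_moving_object(vecs)
```

The median is robust here because the object occupies a minority of blocks; a mean estimator would be dragged toward the object's motion.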
20100202661 | MOVING OBJECT DETECTION APPARATUS AND COMPUTER READABLE STORAGE MEDIUM STORING MOVING OBJECT DETECTION PROGRAM - The approaching object detection unit in a moving object detection apparatus for a moving picture calculates a moving distance of each characteristic point in an image frame obtained at time point t−1, on the basis of an image frame obtained at time point t and the image frame obtained at time point t−1, and, on the basis of the image frame obtained at time point t−1 and an image frame obtained at time point t+m, detects a characteristic point in the image frame obtained at time point t−1 whose moving distance is less than a prescribed value. | 08-12-2010 |
20100208939 | STATISTICAL OBJECT TRACKING IN COMPUTER VISION - A method and system for object tracking in computer vision. The tracked object is recognized from an image that has been acquired with the camera of the computer vision system. The image is processed by randomly generating samples in the search space and then computing fitness functions. Regions of high fitness attract more samples. Computations may be stored into a tree structure. The method provides efficient means for sampling from a very peaked probability density function that can be expressed as a product of factor functions. | 08-19-2010 |
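The sampling idea in the entry above — drawing from a peaked density expressed as a product of factor functions, with high-fitness regions attracting more samples — can be sketched with plain weighted resampling. The tree-structured bookkeeping the abstract mentions is omitted, and the factor functions below are assumptions for the example.

```python
# Hedged sketch: uniform samples over the search space are weighted by the
# product of factor functions and resampled, so that samples concentrate
# in regions where the product density is high.
import math
import random

def weighted_resample(samples, factors, rng):
    weights = []
    for s in samples:
        w = 1.0
        for f in factors:
            w *= f(s)          # product of factor functions
        weights.append(w)
    return rng.choices(samples, weights=weights, k=len(samples))

rng = random.Random(0)
factors = [lambda x: math.exp(-(x - 3) ** 2),       # peaked fitness term
           lambda x: 1.0 if x >= -5 else 0.0]       # hard constraint term
samples = [rng.uniform(-10, 10) for _ in range(500)]
resampled = weighted_resample(samples, factors, rng)
# Mass concentrates near the peak of the product density at x = 3.
```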
20100208940 | PRE TENSION MONITORING SOLUTION - The present invention relates to a tension monitoring system comprising: —at least one camera for acquiring at least one image of at least one pattern located on an object of interest, wherein the pattern comprises a plurality of points and where each point is arranged on the object in such a way as to follow the movement of the object; —a computational device; wherein the computational device is arranged to analyze the acquired image for detecting the position of each pattern point using an image analysis algorithm arranged to determine the geometrical centre of a point using a contrast detection method, determining the distance between at least two pattern portions, and calculating the tension induced in the object using a reference value of distance between the two pattern portions when the object is mechanically relaxed. | 08-19-2010 |
20100208941 | ACTIVE COORDINATED TRACKING FOR MULTI-CAMERA SYSTEMS - A method and system for coordinated tracking of objects is disclosed. A plurality of images is received from a plurality of nodes, each node comprising at least one image capturing device. At least one target in the plurality of images is identified to produce at least one local track corresponding to each of the plurality of nodes having the at least one target in its field of view. The at least one local track corresponding to each of the plurality of nodes is fused according to a multi-hypothesis tracking method to produce at least one fused track corresponding to the at least one target. At least one of the plurality of nodes is assigned to track the at least one target based on minimizing at least one cost function comprising a cost matrix using the k-best algorithm for tracking at least one target for each of the plurality of nodes. The at least one fused track is sent to the at least one of the plurality of nodes assigned to track the at least one target based on the at least one fused track. | 08-19-2010 |
20100215213 | TARGETING METHOD, TARGETING DEVICE, COMPUTER READABLE MEDIUM AND PROGRAM ELEMENT - This invention will introduce a fast and effective target approach planning method, preferably for needle guided percutaneous interventions using a rotational X-ray device. According to an exemplary embodiment, a targeting method for targeting a first object in an object under examination is provided, wherein the method comprises selecting a first two-dimensional image of a three-dimensional data volume representing the object under examination, determining a target point in the first two-dimensional image, and displaying an image of the three-dimensional data volume with the selected target point. Furthermore, the method comprises positioning the said image of the three-dimensional data volume by scrolling and/or rotating such that a suitable path of approach crossing the target point has a first direction parallel to an actual viewing direction of the said image of the three-dimensional data volume, and generating a second two-dimensional image out of the three-dimensional data volume, wherein a normal of the plane of the second two-dimensional image is oriented parallel to the first direction and crosses the target point. | 08-26-2010 |
20100215214 | IMAGE PROCESSING METHOD - A method and apparatus for localizing an area in relative movement and for determining the speed and direction thereof in real time is disclosed. Each pixel of an image is smoothed using its own time constant. A binary value corresponding to the existence of a significant variation in the amplitude of the smoothed pixel from the prior frame, and the amplitude of the variation, are determined, and the time constant for the pixel is updated. For each particular pixel, two matrices are formed that include a subset of the pixels spatially related to the particular pixel. The first matrix contains the binary values of the subset of pixels. The second matrix contains the amplitude of the variation of the subset of pixels. In the first matrix, it is determined whether the pixels along an oriented direction relative to the particular pixel have binary values representative of significant variation, and, for such pixels, it is determined in the second matrix whether the amplitude of these pixels varies in a known manner indicating movement in the oriented direction. In each of several domains, a histogram of the values in the first and second matrices falling in such domain is formed. Using the histograms, it is determined whether there is an area having the characteristics of the particular domain. The domains include luminance, hue, saturation, speed (V), oriented direction (D | 08-26-2010 |
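The per-pixel smoothing step in the entry above can be sketched for a single pixel. The update rule, threshold, and adaptation constants below are assumptions chosen to illustrate the mechanism: each pixel carries its own time constant, a large change of the smoothed value produces a binary flag plus an amplitude, and the time constant is then shortened where motion is seen and lengthened where it is not.

```python
# Minimal sketch of per-pixel temporal smoothing with an adaptive,
# per-pixel time constant (constants are illustrative assumptions).

def update_pixel(smoothed, sample, tau, thresh=10.0,
                 tau_min=1.0, tau_max=8.0):
    """One temporal update for one pixel; returns the new smoothed value,
    a (binary flag, amplitude) pair, and the adapted time constant."""
    new_smoothed = smoothed + (sample - smoothed) / tau
    amplitude = abs(new_smoothed - smoothed)
    significant = amplitude >= thresh
    # Adapt: react faster where variation is significant, slower elsewhere.
    tau = max(tau_min, tau - 1) if significant else min(tau_max, tau + 1)
    return new_smoothed, (significant, amplitude), tau

s, tau = 100.0, 2.0
s, (flag, amp), tau = update_pixel(s, 140.0, tau)   # large jump: flagged
```

Run over a whole frame, the flags would populate the first (binary) matrix and the amplitudes the second, as the abstract describes.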
20100215215 | OBJECT DETECTING APPARATUS, INTERACTIVE SYSTEM, OBJECT DETECTING METHOD, INTERACTIVE SYSTEM REALIZING METHOD, AND RECORDING MEDIUM - This is provided with a plurality of retroreflective sheets, each of which is attached to a screen and retroreflectively reflects received light, an imaging unit which photographs the retroreflective sheets, and an MCU which analyzes a differential picture obtained by photographing. The MCU detects, from the differential picture, a shade area corresponding to a part of the retroreflective sheet which is covered by a foot of a player. The detection of the shade area corresponds to the detection of the foot of the player. This is because, when the foot is placed on the retroreflective sheet, the corresponding part is not captured in the differential picture and instead appears as a shade area. It is possible to detect a foot without attaching and fixing a reflecting sheet to the foot. | 08-26-2010 |
20100215216 | Localization system and method - Disclosed herein is a localization system and method to recognize the location of an autonomous mobile platform. In order to recognize the location of the autonomous mobile platform, a beacon (three-dimensional structure) having a recognizable image pattern is disposed at a location desired by a user, the mobile platform which knows image pattern information of the beacon photographs the image of the beacon and finds and analyzes a pattern to be recognized from the photographed image. A relative distance and a relative angle of the mobile platform are computed using the analysis of the pattern such that the location of the mobile platform is accurately recognized. | 08-26-2010 |
20100215217 | Method and System of Tracking and Stabilizing an Image Transmitted Using Video Telephony - Herein described is a system and method that tracks the face of a person engaged in a videophone conversation. In addition to performing facial tracking, the invention provides stabilization of facial images that are transmitted during the videophone conversation. The face is tracked by employing one or more algorithms that correlate videophone captured facial images against a stored facial image. The face may be better identified by way of employing one or more voice recognition algorithms. The one or more voice recognition algorithms may correlate utterances of the person engaged in a conversation to one or more stored utterances. The identified utterances are subsequently mapped to a stored facial image. In a representative embodiment, the system used for performing facial tracking and image stabilization comprises an image sensor, a lens, an actuator, and a controller/processor. | 08-26-2010 |
20100220891 | AUGMENTED REALITY METHOD AND DEVICES USING A REAL TIME AUTOMATIC TRACKING OF MARKER-FREE TEXTURED PLANAR GEOMETRICAL OBJECTS IN A VIDEO STREAM - The invention relates to a method and to devices for the real-time tracking of one or more substantially planar geometrical objects of a real scene in at least two images of a video stream for an augmented-reality application. After receiving a first image of the video stream ( | 09-02-2010 |
20100220892 | DRIVER IMAGING APPARATUS AND DRIVER IMAGING METHOD - An imaging mechanism captures an image of a face of a driver of a vehicle. A first image processor performs image processing on a wide portion of the face of the driver in a first image using a first image captured by the imaging mechanism. A second image processor performs image processing on a part of the face of the driver in a second image captured by the imaging mechanism at a higher exposure than the exposure of the first image, using the second image. | 09-02-2010 |
20100226531 | MAKEUP SIMULATION SYSTEM, MAKEUP SIMULATOR, MAKEUP SIMULATION METHOD, AND MAKEUP SIMULATION PROGRAM - According to the present invention, a makeup simulation system applying makeup to a video having an image of the face of a user captured thereon is characterized by image capturing means for capturing the image of the face of the user and outputting the video, control means for receiving the video output from the image capturing means, performing image processing on the video, and outputting the video; and display means for displaying the video output from the control means, wherein the control means includes face recognition means for recognizing the face of the user from the video based on predetermined tracking points; and makeup processing means for applying a predetermined makeup on the face of the user included in the video based on the tracking points and outputting the video to the display means. | 09-09-2010 |
20100226532 | Object Detection Apparatus, Method and Program - An object detection apparatus for detecting an object from an image obtained by taking a front view picture of a road in a traveling direction of a vehicle includes a camera unit for taking the front view picture of the road and inputting the image; a dictionary modeling the object; a search unit for searching the image with a search window; a histogram production unit for producing a histogram by comparing the image in the search window with the dictionary and counting a detection frequency in a direction parallel to a road plane; and a detection unit for detecting the detection object by detecting a unimodal distribution from the histogram. | 09-09-2010 |
20100226533 | METHOD OF IMAGE PROCESSING - The present invention relates to a method of identifying a target object in an image using image processing. It further relates to apparatus and computer software implementing the method. The method includes storing template data representing a template orientation field indicative of an orientation of each of a plurality of features of a template object; receiving image data representing the image; processing the image data to generate an image orientation field indicating an orientation corresponding to the plurality of image features; processing the image orientation field using the template orientation field to generate a match metric indicative of an extent of matching between at least part of the template orientation field and at least part of the image orientation field; and using the match metric to determine whether or not the target object has been identified in the image. Image and/or template confidence data is used to generate the match metric. | 09-09-2010 |
20100226534 | FUSION FOR AUTOMATED TARGET RECOGNITION - A method of predicting a target type in a set of target types from at least one image is provided. At least one image is obtained. A first and second set of confidence values and associated azimuth angles are determined for each target type in the set of target types from the at least one image. The first and second sets of confidence values are fused for each of the azimuth angles to produce a fused curve for each target type in the set of target types. When multiple images are obtained, first and second sets of possible detections are compiled corresponding to regions of interest in the multiple images. The possible detections are associated by regions of interest. The fused curves are produced for every region of interest. In the embodiments, the target type is predicted from the set of target types based on criteria concerning the fused curve. | 09-09-2010 |
20100226535 | AUGMENTING A FIELD OF VIEW IN CONNECTION WITH VISION-TRACKING - The claimed subject matter relates to an architecture that can employ vision-monitoring techniques to enhance an experience associated with elements of a local environment. In particular, the architecture can establish gaze- or eye-tracking attributes in connection with a user. In addition, a location and a head or face-based perspective of the user can also be obtained. By aggregating this information, the architecture can identify a current field of view of the user, and then map that field of view to a modeled view in connection with a geospatial model of the environment. In addition, the architecture can select additional content that relates to an entity in the view or a modeled entity in the modeled view, and further present the additional content to the user. | 09-09-2010 |
20100226536 | VIDEO SIGNAL DISPLAY DEVICE, VIDEO SIGNAL DISPLAY METHOD, STORAGE MEDIUM, AND INTEGRATED CIRCUIT - A technical problem is to inhibit variation in the correction between frames of a moving image while maintaining a correction amount of the overall image. The video signal display device has an attraction point determination portion ( | 09-09-2010 |
20100226537 | DETECTION AND TRACKING OF INTERVENTIONAL TOOLS - The present invention relates to minimally invasive X-ray guided interventions, in particular to an image processing and rendering system and a method for improving visibility and supporting automatic detection and tracking of interventional tools that are used in electrophysiological procedures. According to the invention, this is accomplished by calculating differences between 2D projected image data of a preoperatively acquired 3D voxel volume showing a specific anatomical region of interest or a pathological abnormality (e.g. an intracranial arterial stenosis, an aneurysm of a cerebral, pulmonary or coronary artery branch, a gastric carcinoma or sarcoma, etc.) in a tissue of a patient's body and intraoperatively recorded 2D fluoroscopic images showing the aforementioned objects in the interior of said patient's body, wherein said 3D voxel volume has been generated in the scope of a computed tomography, magnetic resonance imaging or 3D rotational angiography based image acquisition procedure and said 2D fluoroscopic images have been co-registered with the 2D projected image data. After registration of the projected 3D data with each of said X-ray images, comparison of the 2D projected image data with the 2D fluoroscopic images—based on the resulting difference images—allows removing common patterns and thus enhancing the visibility of interventional instruments which are inserted into a pathological tissue region, a blood vessel segment or any other region of interest in the interior of the patient's body. Automatic image processing methods to detect and track those instruments are also made easier and more robust by this invention. Once the 2D-3D registration is completed for a given view, all the changes in the system geometry of an X-ray system used for generating said fluoroscopic images can be applied to a registration matrix.
Hence, use of said method as claimed is not limited to the same X-ray view during the whole procedure. | 09-09-2010 |
20100226538 | OBJECT DETECTION APPARATUS AND METHOD THEREFOR - An image processing apparatus includes a moving image input unit configured to input a moving image, an object likelihood information storage unit configured to store object likelihood information in association with a corresponding position in an image for each object size in each frame included in the moving image, a determination unit configured to determine a pattern clipping position where a pattern is clipped out based on the object likelihood information stored in the object likelihood information storage unit, and an object detection unit configured to detect an object in an image based on the object likelihood information of the pattern clipped out at the pattern clipping position determined by the determination unit. | 09-09-2010 |
20100232643 | Method, Apparatus, and Computer Program Product For Object Tracking - A method for object tracking is provided. The method may include identifying a first interest point, receiving a video frame, and detecting, via a processor, a second interest point in the video frame using a scale space image pyramid. The method may further include matching the second interest point with the first interest point, and determining a motion estimation based on the matched interest points. Similar apparatuses and computer program products are also provided. | 09-16-2010 |
20100232644 | SYSTEM AND METHOD FOR COUNTING THE NUMBER OF PEOPLE - This invention discloses a method and system for counting the number of people. First, first face information is stored in a memory. Then, it is determined whether an image is a complexion region, and whether the complexion region is a real face. Next, one-to-one similarity matching is performed between the potential face information and the first face information: when the similarity matching achieves a predetermined condition, the potential face information is used to update the first face information; when the similarity matching does not achieve the predetermined condition and the potential face is a real face, the potential face is treated as second face information and added to the memory, and the first face information is marked as occluded. Finally, the number of people in front of the camera is counted according to the faces stored in the memory. | 09-16-2010 |
20100232645 | MODEL-BASED SPECT HEART ORIENTATION ESTIMATION - When estimating a position or orientation of a patient's heart, a mesh model of a nominal heart is overlaid on a SPECT or PET image of the patient's heart and manipulated to conform to the image of the patient's heart. A mesh adaptation protocol applies opposing forces to the mesh model to constrain the mesh model from changing shape and to pull the mesh model to the shape of the patient's heart. A heart orientation estimator ( | 09-16-2010 |
20100232646 | SUBJECT TRACKING APPARATUS, IMAGING APPARATUS AND SUBJECT TRACKING METHOD - A subject tracking apparatus includes a region extraction section extracting a region similar to a reference image in a first image based on respective feature amounts of the first image being picked up and the reference image being set, a motion vector calculating section calculating a motion vector in each of a plurality of regions in the first image using a second image and the first image, the second image being picked up at a different time from that of the first image, and a control section determining an object region of subject tracking in the first image based on an extraction result in the region extraction section and a calculation result in the motion vector calculating section. | 09-16-2010 |
20100232647 | THREE-DIMENSIONAL RECOGNITION RESULT DISPLAYING METHOD AND THREE-DIMENSIONAL VISUAL SENSOR - In the present invention, whether three-dimensional measurement or checking processing against a model is properly performed can easily be confirmed from the setting information and the recognition processing result. After setting processing is performed on a three-dimensional visual sensor including a stereo camera, a real workpiece is imaged, the three-dimensional measurement is performed on an edge included in a produced stereo image, and restored three-dimensional information is checked against a three-dimensional model to compute a position of the workpiece and a rotation angle for an attitude indicated by the three-dimensional model. Thereafter, perspective transformation of the three-dimensional information on the edge obtained through measurement processing, and of the three-dimensional model to which coordinate transformation has already been applied based on the recognition result, is performed into the coordinate system of the camera that performs the imaging, and the projection images are displayed so that they can be checked against each other. | 09-16-2010 |
20100232648 | IMAGING APPARATUS, MOBILE BODY DETECTING METHOD, MOBILE BODY DETECTING CIRCUIT AND PROGRAM - An imaging apparatus includes: a moving body detecting section that detects if an object in an image is a moving body which makes a motion between frames; and an attribute determining section that determines a similarity indicating whether or not the object detected as the moving body is similar among a plurality of frames, and a change in luminance of the object based on a texture and luminance of the object, and, when determining that the object is a light/shadow-originated change in luminance, adds attribute information indicating the light/shadow-originated change in luminance to the object detected as the moving body. | 09-16-2010 |
20100232649 | Locating Device for a Magnetic Resonance System - The present utility model provides a locating device for a magnetic resonance system comprising an image sensor, an image display for displaying images acquired by the abovementioned image sensor, and a locator, which locator has at least one locating mark. There is no need for the abovementioned locating device for a magnetic resonance system to use a laser for locating and, therefore, the case where an operator is hurt by the laser will not occur. On the other hand, due to the use of the image sensor and the image display, the remote control of adjustment conditions can be accomplished, therefore there is no need to repeatedly enter into a magnetic resonance examination room to carry out operations during the adjustment process, which saves time and costs for the adjustments. | 09-16-2010 |
20100239119 | SYSTEM FOR IRIS DETECTION TRACKING AND RECOGNITION AT A DISTANCE - A stand-off range, or at-a-distance, iris detection and tracking system for iris recognition, having a head/face/eye locator, a zoom-in iris capture mechanism and an iris recognition module. The system may obtain iris information of a subject with or without his or her knowledge or cooperation. This information may be sufficient for identification of the subject, verification of identity and/or storage in a database. | 09-23-2010 |
20100239120 | IMAGE OBJECT-LOCATION DETECTION METHOD - An image object-location detection method includes dividing a target image into a plurality of image blocks, calculating a plurality of sharpness values respectively corresponding to the plurality of image blocks, and analyzing the plurality of sharpness values to accordingly select image blocks corresponding to object-locations in the target image from the plurality of image blocks. | 09-23-2010 |
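The block-sharpness pipeline in the entry above (divide into blocks, score each, select the sharpest) can be sketched in pure Python. The mean-absolute-gradient sharpness measure is an assumption for illustration; the abstract does not prescribe a specific metric.

```python
# Illustrative sketch: split a grayscale image into fixed-size blocks,
# score each block by mean absolute horizontal gradient (an assumed
# sharpness measure), and pick the sharpest block as an object location.

def block_sharpness(image, block):
    h, w = len(image), len(image[0])
    scores = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            grad = sum(abs(image[y][x + 1] - image[y][x])
                       for y in range(by, by + block)
                       for x in range(bx, bx + block - 1))
            scores[(by, bx)] = grad / (block * (block - 1))
    return scores

# 4x8 toy image: left half flat (no detail), right half with strong edges.
img = [[0, 0, 0, 0, 0, 255, 0, 255] for _ in range(4)]
scores = block_sharpness(img, 4)
best = max(scores, key=scores.get)   # (row, col) of the sharpest block
```

In-focus objects carry high local gradient energy, so thresholding or ranking these per-block scores localizes them, which is the selection step the abstract describes.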
20100239121 | METHOD AND SYSTEM FOR ASCERTAINING THE POSITION AND ORIENTATION OF A CAMERA RELATIVE TO A REAL OBJECT - The invention relates to a method for ascertaining the position and orientation of a camera ( | 09-23-2010 |
20100239122 | METHOD FOR CREATING AND/OR UPDATING TEXTURES OF BACKGROUND OBJECT MODELS, VIDEO MONITORING SYSTEM FOR CARRYING OUT THE METHOD, AND COMPUTER PROGRAM - Video monitoring systems are used for camera-supported monitoring of relevant areas, and usually comprise a plurality of monitoring cameras placed in the relevant areas for recording monitoring scenes. The monitoring scenes may be, for example, parking lots, intersections, streets, plazas, but also regions within buildings, plants, hospitals, or the like. In order to simplify the analysis of the monitoring scenes by monitoring personnel, the invention proposes displaying at least the background of the monitoring scene on a monitor as a virtual reality in the form of a three-dimensional scene model using background object models. The invention proposes a method for creating and/or updating textures of background object models in the three-dimensional scene model, wherein a background image of the monitoring scene is formed from one or more camera images | 09-23-2010 |
20100239123 | METHODS AND SYSTEMS FOR PROCESSING OF VIDEO DATA | 09-23-2010 |
20100239124 | IMAGE PROCESSING APPARATUS AND METHOD - It is an object to accurately detect an image of an object from an image created by photographing. A computer | 09-23-2010 |
20100239125 | DIGITAL IMAGE PROCESSING APPARATUS, TRACKING METHOD, RECORDING MEDIUM FOR STORING COMPUTER PROGRAM FOR EXECUTING THE TRACKING METHOD, AND DIGITAL IMAGE PROCESSING APPARATUS ADOPTING THE TRACKING METHOD - A digital image processing apparatus and tracking method are provided to rapidly and accurately track a subject location in video images. The apparatus searches for a target image that is most similar to a reference image, in a current frame image in which each pixel has luminance data and other data, the reference image being smaller than the current frame image, and includes a similarity calculator for calculating a degree of similarity between the reference image and each of a plurality of matching images that have the same size as the reference image and are portions of the current frame image; and a target image determination unit for determining one of the plurality of matching images as the target image using the degree of similarity obtained by the similarity calculator. The similarity calculator calculates the degree of similarity by applying greater weight to the other data than to the luminance data. | 09-23-2010 |
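The weighting idea in the entry above can be sketched as a weighted sum-of-absolute-differences similarity. The specific weights and the YCbCr pixel layout are assumptions for the example; the point is only that down-weighting luminance makes the match less sensitive to lighting changes.

```python
# Hedged sketch: SAD similarity with chrominance weighted more heavily
# than luminance (weights and color layout are illustrative assumptions).

def weighted_sad(ref, cand, w_luma=1.0, w_chroma=3.0):
    """Lower is more similar; pixels are (Y, Cb, Cr) tuples."""
    total = 0.0
    for (y1, cb1, cr1), (y2, cb2, cr2) in zip(ref, cand):
        total += w_luma * abs(y1 - y2)
        total += w_chroma * (abs(cb1 - cb2) + abs(cr1 - cr2))
    return total

ref = [(100, 120, 130), (110, 118, 128)]
bright = [(140, 120, 130), (150, 118, 128)]   # same colors, brighter
recolor = [(100, 150, 130), (110, 148, 128)]  # same brightness, new color
# A brightness-only change (e.g. a shadow passing over the subject) scores
# as more similar than a genuine color change of the same magnitude.
```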
20100246884 | METHOD AND SYSTEM FOR DIAGNOSTICS SUPPORT - A method for displaying a diagnostic image acquires the diagnostic digital image and applies one or more pattern recognition algorithms to the acquired diagnostic digital image, detecting at least one feature within the acquired diagnostic digital image. At least a portion of the acquired diagnostic digital image displays with a marking at the location of the at least one detected feature. At least one detected feature displays under a first set of image display settings for a first interval, then under at least a second set of image display settings for a second interval. | 09-30-2010 |
20100246885 | SYSTEM AND METHOD FOR MONITORING MOTION OBJECT - A motion object monitoring system captures images of monitored objects in a monitored area, and gives numbers to the monitored objects according to specific features of the monitored objects. The specific features of the monitored objects are obtained by detecting the captured images. Only one of the numbers of each of the monitored objects is stored, instead of repeatedly storing the numbers of the same motion objects. The motion object monitoring system analyzes the stored numbers, and displays an analysis result. The motion object monitoring system also determines a movement of each of the motion objects according to corresponding numbers of the motion objects. | 09-30-2010 |
20100246886 | MOVING OBJECT IMAGE TRACKING APPARATUS AND METHOD - An apparatus includes a first computation unit computing first angular-velocity instruction values for driving first and second rotation units to track a moving object, using a detected tracking error and detected angles, when the moving object exists in a first range separated from the zenith by at least a preset distance; a second computation unit computing second angular-velocity instruction values for driving the first and second rotation units to track the moving object and avoid a zenith singular point, using the detected angles, the detected tracking error and an estimated traveling direction; and a control unit controlling the first and second rotation units to eliminate differences between the first angular-velocity instruction values and the angular velocities when the moving object exists in the first range, and to eliminate differences between the second angular-velocity instruction values and the angular velocities when the moving object exists in a second range within the preset distance of the zenith. | 09-30-2010 |
20100246887 | METHOD AND APPARATUS FOR OBJECT TRACKING - There is described an apparatus and method for tracking objects in video. In particular, there is described a method and apparatus that improves the realism of the object in the captured scene. This improvement is effected by identifying a first and last frame in a video and subjecting the detected path of the object to a correcting function which improves the output positional data. | 09-30-2010 |
20100246888 | IMAGING APPARATUS, IMAGING METHOD AND COMPUTER PROGRAM FOR DETERMINING AN IMAGE OF A REGION OF INTEREST - The present invention relates to an imaging apparatus for determining an image of a region of interest, wherein a motion generation unit ( | 09-30-2010 |
20100254571 | FACE IMAGE PICKUP DEVICE AND METHOD - There are provided a face image pickup device and a face image pickup method which can stably acquire a face image by appropriate illumination, and a program thereof. The face image pickup device comprises a camera which picks up an image of a face of a target person, an illumination light source which illuminates the face of the target person with near-infrared light having an arbitrary light amount, and a computer. The computer detects an area including an eye from the face image of the target person picked up by the camera. The computer measures a brightness distribution in the detected area. Thereafter, the computer controls the illumination light source so as to change the amount of near-infrared light based on the measured brightness distribution. | 10-07-2010 |
20100254572 | CONTINUOUS EXTENDED RANGE IMAGE PROCESSING - Methods and systems for image processing are provided. A method for processing images of a scene includes receiving image data of a reference and a current frame; generating N motion vectors that describe motion of the image data within the scene by computing a correlation function on the reference and current frames at each of N registration points; registering the current frame based on the N motion vectors to produce a registered current frame; and updating the image data of the scene based on the registered current frame. Optionally, registered frames may be oversampled. Techniques for generating the N motion vectors according to roll, zoom, shift and optical flow calculations, updating image data of the scene according to switched and intermediate integration approaches, re-introducing smoothed motion into image data of the scene, re-initializing the process, and processing images of a scene and moving target within the scene are provided. | 10-07-2010 |
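The registration step this abstract describes (a correlation function evaluated between the reference and current frames to produce a motion vector) can be illustrated with a brute-force integer-shift search over small grayscale frames. This is a sketch only; practical systems use FFT-based correlation at each of the N registration points, and the names here are assumptions:

```python
def best_shift(ref, cur, max_shift=2):
    """Find the integer (dy, dx) shift of `cur` that best matches `ref`.

    ref, cur: equally sized 2-D lists of pixel intensities. The score is
    a plain sum-of-products correlation over the overlapping region,
    maximized by exhaustive search over shifts up to max_shift.
    """
    h, w = len(ref), len(ref[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        score += ref[y][x] * cur[yy][xx]
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

The returned (dy, dx) plays the role of one motion vector; registering the current frame then amounts to shifting it by the negated vector before integration.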
20100260376 | MAPPER COMPONENT FOR MULTIPLE ART NETWORKS IN A VIDEO ANALYSIS SYSTEM - Techniques are disclosed for detecting the occurrence of unusual events in a sequence of video frames. Importantly, what is determined as unusual need not be defined in advance, but can be determined over time by observing a stream of primitive events and a stream of context events. A mapper component may be configured to parse the event streams and supply input data sets to multiple adaptive resonance theory (ART) networks. Each individual ART network may generate clusters from the set of input data supplied to that ART network. Each cluster represents an observed statistical distribution of a particular thing or event being observed by that ART network. | 10-14-2010 |
20100260377 | MOBILE DETECTOR, MOBILE DETECTING PROGRAM, AND MOBILE DETECTING METHOD - When a mobile is detected using an imaging device installed in the mobile, the image of a partial area is enlarged/reduced depending on variation in distance to the detection object mobile and is then compared under a fixed scale, thus causing an increase in computation cost. In order to eliminate the need for enlargement/reduction processing or deformation correction processing every time collation is performed, an input image is converted into a virtual plane image in which the size or shape of the detection object mobile on the image does not vary with the distance between the mobiles. Using a pair of virtual plane images obtained at two different times, points are made to correspond and the mobile is detected based on the gap between corresponding points. | 10-14-2010 |
20100260378 | SYSTEM AND METHOD FOR DETECTING THE CONTOUR OF AN OBJECT ON A MOVING CONVEYOR BELT - A system for detecting the contour of an object situated on a surface includes an image acquisition assembly, wherein there is relative motion between the image acquisition assembly and the object. The image acquisition assembly includes a line detector, operable for scanning the surface line by line by virtue of the relative motion. Each line is scanned during a scan cycle, the line being transverse to the direction of the relative motion. A light source is operable for emitting light toward the line detector during active periods between idle periods, such that during each of the active periods the light is emitted for at least one cycle synchronized with the scan cycle, allowing the line detector to acquire a first group of at least one lit scan line. During each of the idle periods, lasting for at least another cycle synchronized with the scan cycle, no light is emitted, allowing the line detector to acquire a second group of at least one unlit scan line. The object passes between the line detector and the light source by virtue of the relative motion. A processor is coupled with the image acquisition assembly and receives and analyzes scan lines acquired by the line detector. For each of the first group of at least one lit scan line and a successive one of the second group of at least one unlit scan line, the processor identifies a token pattern consisting of a lit segment of the first group adjoining an unlit segment of the second group. The processor searches along the first group and the successive second group for locations where the token pattern ends or reappears, thereby defining edges of the object, and combines the collection of the defined edges to produce a contour of the object. | 10-14-2010 |
20100260379 | Image Processing Apparatus And Image Sensing Apparatus - A tracking process portion includes a search area setting portion for setting a search area in the input image, an image analysis portion for analyzing an image in the search area, an auxiliary track value setting portion for setting an auxiliary track value based on a result of the analysis, a track value setting portion for setting a track value based on a result of the analysis and deciding whether the set track value is correct or not, and a track target detection portion for detecting a track object from the image in the search area based on the track value. If the set track value is incorrect, the track value setting portion performs a switching operation for setting the auxiliary track value as the track value. | 10-14-2010 |
20100260380 | DEVICE FOR OPTICALLY MEASURING AND/OR TESTING OBLONG PRODUCTS - A device for optically measuring and/or testing oblong products moving in a longitudinal direction. The device includes a plurality of cameras arranged in a plane perpendicular to the longitudinal direction, and distributed around the longitudinal direction. Each of the cameras has a fixed focus. The device further includes a displacing device adapted to displace each of the cameras simultaneously and jointly over the same distance toward the surface of the oblong product to focus on the oblong product, wherein the device defines a center that is located in the plane. | 10-14-2010 |
20100260381 | SUBJECT TRACKING DEVICE AND CAMERA - A subject tracking device includes: a first similarity factor calculation unit that compares an input image assuming characteristics quantities corresponding to a plurality of characteristics components, with a template image assuming characteristics quantities corresponding to the plurality of characteristics components, and calculates a similarity factor indicating a level of similarity between the input image and the template image in correspondence to each of the plurality of characteristics components; a normalization unit that normalizes similarity factors corresponding to the plurality of characteristics components having been calculated by the first similarity factor calculation unit; and a second similarity factor calculation unit that calculates a similarity factor indicating a level of similarity between the input image and the template image based upon results of normalization achieved via the normalization unit. | 10-14-2010 |
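The two-stage computation in this abstract (per-component similarity factors, normalization of those factors, then a combined similarity) might look like the following sketch. The inverse-distance score and the weighted-average combination are stand-ins for the patent's unspecified formulas, and all names are assumptions:

```python
def combined_similarity(input_feats, template_feats):
    """Two-stage similarity between an input image and a template image.

    Each argument maps a characteristic component (e.g. 'hue', 'edge')
    to a feature vector. Stage one scores each component with an
    inverse-distance similarity; the scores are normalized to sum to 1;
    stage two combines them into a single similarity factor.
    """
    sims = {}
    for name in input_feats:
        # Euclidean distance between the component's feature vectors.
        dist = sum((a - b) ** 2
                   for a, b in zip(input_feats[name], template_feats[name])) ** 0.5
        sims[name] = 1.0 / (1.0 + dist)          # higher = more similar
    total = sum(sims.values())
    normalized = {k: v / total for k, v in sims.items()}  # sums to 1
    # Weight each raw similarity by its normalized share.
    return sum(normalized[k] * sims[k] for k in sims)
```

When every component matches perfectly the result is 1.0; a single poorly matching component pulls the combined factor down in proportion to its normalized weight.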
20100266158 | SYSTEM AND METHOD FOR OPTICALLY TRACKING A MOBILE DEVICE - A system and method for optically tracking a mobile device uses a first displacement value along a first direction and a second displacement value along a second direction, which are produced using frames of image data of a navigation surface, to compute first and second tracking values that indicate the current position of the mobile device. The first tracking value is computed using the second displacement value and the sine of a tracking angle value, while the second tracking value is computed using the second displacement value and the cosine of the tracking angle value. The tracking angle value is an angle value derived using at least one previous second displacement value. | 10-21-2010 |
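The trigonometric step stated in this abstract is concrete enough to write down: the first tracking value combines the second displacement value with the sine of the tracking angle, and the second tracking value combines it with the cosine. A minimal sketch (the function name and calling convention are assumptions):

```python
import math

def tracking_values(d2, tracking_angle):
    """Resolve the second-direction displacement d2 into the two tracking
    values the abstract describes: d2 * sin(angle) and d2 * cos(angle).

    The patent derives tracking_angle from at least one previous
    second-direction displacement value; here it is simply a parameter.
    """
    return d2 * math.sin(tracking_angle), d2 * math.cos(tracking_angle)
```

For a tracking angle of 90 degrees the whole displacement lands in the first tracking value; at 0 degrees it lands entirely in the second.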
20100266159 | HUMAN TRACKING APPARATUS, HUMAN TRACKING METHOD, AND HUMAN TRACKING PROCESSING PROGRAM - A human tracking apparatus and method capable of highly accurately tracking the movement of persons photographed in moving images includes: an image memory | 10-21-2010 |
20100266160 | Image Sensing Apparatus And Data Structure Of Image File - An image sensing apparatus includes an image sensing portion which generates image data of an image by image sensing, and a record control portion which records image data of a main image generated by the image sensing portion together with main additional information obtained from the main image in a recording medium, in which the record control portion records sub additional information obtained from a sub image taken at a timing different from that of the main image in the recording medium in association with the image data of the main image and the main additional information. | 10-21-2010 |
20100266161 | METHOD AND APPARATUS FOR PRODUCING LANE INFORMATION - A method of producing lane information for use in a map database is disclosed. In at least one embodiment, the method includes acquiring one or more source images of a road surface and associated position and orientation data, the road having a direction and lane markings parallel to the direction of the road; acquiring road information representative of the direction of said road; transforming the one or more source images to obtain a transformed image in dependence of the road information, wherein each column of pixels of the transformed image corresponds to a surface parallel to the direction of said road; applying a filter with asymmetrical mask on the transformed image to obtain a filtered image; and producing lane information from the filtered image in dependence of the position and orientation data associated with the one or more source images. | 10-21-2010 |
20100266162 | Methods, Systems, And Computer Program Products For Protecting Information On A User Interface Based On A Viewability Of The Information - Methods, systems, and computer program products for protecting information on a user interface based on a viewability of the information are disclosed. According to one method, a viewing position of a person other than a user with respect to information on a user interface is identified. An information viewability threshold is determined based on the information on the user interface. Further, an action associated with the user interface is performed based on the identified viewing position and the determined information viewability threshold. | 10-21-2010 |
20100272314 | OBSTRUCTION DETECTOR - An optical reader of a form is discussed where the form has a stored known boundary or boundaries. When the boundaries in a captured image do not match those of the stored known boundaries, it may be determined that an obstruction exists that will interfere with a correct reading of the form. The boundary may be printed, blank, and may include quiet areas, or combinations thereof in stored known patterns. A captured image of the form is compared to retrieved, stored boundary information and differences are noted. The differences may be thresholded to determine if an obstruction exists. If an obstruction is detected, the operator may be signaled, and the location may be displayed or highlighted. The form may be discarded, or the obstruction may be cleared and the form re-processed. | 10-28-2010 |
20100272315 | Automatic Measurement of Morphometric and Motion Parameters of the Coronary Tree From A Rotational X-Ray Sequence - Automatic measurement of morphometric and motion parameters of a coronary target includes extracting reference frames from input data of a coronary target at different phases of a cardiac cycle, extracting a three-dimensional centerline model for each phase of the cardiac cycle based on the references frames and projection matrices of the coronary target, tracking a motion of the coronary target through the phases based on the three-dimensional centerline models, and determining a measurement of morphologic and motion parameters of the coronary target based on the motion. | 10-28-2010 |
20100272316 | Controlling An Associated Device - In an illustrative embodiment a computer-implemented process for controlling an associated device utilizing an automated location tracking and control system to produce an action associates a target with a blind node having a wireless transmitter, wherein the target moves within a predetermined area among a set of reference nodes. The computer-implemented process performs a continuous data acquisition based on a target movement data, wherein the continuous data acquisition is repeated within a predetermined interval, performs a continuous calculation of a target location using the target movement to form target location vectors, wherein the continuous calculation is repeated within the predetermined interval, performs a transmission of current coordinate information using the target location vectors, and transforms received current coordinate information into a device control code, wherein the device control code is a set of voltages. The computer-implemented process transmits the device control code to an associated device, and responsive to the device control code, controls an action on the associated device in real time, wherein the action is directed to the tracked object. | 10-28-2010 |
20100278383 | SYSTEM AND METHOD FOR RECOGNITION OF A THREE-DIMENSIONAL TARGET - A system for recognition of a target three-dimensional object is disclosed. The system may include a photon-counting detector and a three-dimensional integral imaging system. The three-dimensional integral imaging system may be positioned between the photon-counting detector and the target three-dimensional object. | 11-04-2010 |
20100278384 | Human body pose estimation - Techniques for human body pose estimation are disclosed herein. Depth map images from a depth camera may be processed to calculate a probability that each pixel of the depth map is associated with one or more segments or body parts of a body. Body parts may then be constructed of the pixels and processed to define joints or nodes of those body parts. The nodes or joints may be provided to a system which may construct a model of the body from the various nodes or joints. | 11-04-2010 |
20100278385 | FACIAL EXPRESSION RECOGNITION APPARATUS AND FACIAL EXPRESSION RECOGNITION METHOD THEREOF - A facial expression recognition apparatus and a facial expression recognition method thereof are provided. The facial expression recognition apparatus comprises a gray image generating unit, a face edge detection unit, a motion skin extraction unit, a face contour generating unit and a facial expression recognition unit. The gray image generating unit generates a gray image according to an original image. The face edge detection unit outputs a face edge detection result according to the gray image. The motion skin extraction unit generates a motion skin extraction result according to the original image, and generates a face and background division result according to the motion skin extraction result. The face contour generating unit outputs a face contour according to the gray image, the face edge detection result and the face and background division result. The facial expression recognition unit outputs a facial expression recognition result according to the face contour. | 11-04-2010 |
20100278386 | VIDEOTRACKING - A method for tracking an object in a sequence of video frames includes the following steps: creating a model with characteristic features for the object to be tracked; and performing a template matching algorithm in individual frames on the basis of the created model for determining a position of the object in the respective frame. An apparatus arrangement for performing the method includes at least one video camera ( | 11-04-2010 |
20100278387 | Passive Electro-Optical Tracker - A passive electro-optical tracker uses a two-band IR intensity ratio to discriminate high-speed projectiles and obtain a speed estimate from their temperature, as well as determining the trajectory back to the source of fire. In an omnidirectional system a hemispheric imager with an MWIR spectrum splitter forms two CCD images of the environment. Three methods are given to determine the azimuth and range of a projectile, one for clear atmospheric conditions and two for nonhomogeneous atmospheric conditions. The first approach uses the relative intensity of the image of the projectile on the pixels of a CCD camera to determine the azimuthal angle of trajectory with respect to the ground, and its range. The second calculates this angle using a different algorithm. The third uses a least squares optimization over multiple frames based on a triangle representation of the smeared image to yield a real-time trajectory estimate. | 11-04-2010 |
20100278388 | SYSTEM AND METHOD FOR GENERATING A DYNAMIC BACKGROUND - A system and methodology that counts a number of moving objects, including pedestrians, within predetermined areas. According to certain embodiments, a system comprises an image sensing device and a data processing device. The image sensing device is situated at a predetermined area. The image sensing device retrieves a series of images of the moving objects within the predetermined area. The data processing device is coupled to the image sensing device. The data processing device processes the retrieved images to generate a dynamic background of the predetermined area and determine a flow of the moving objects thereon. | 11-04-2010 |
20100284565 | Method and apparatus for fingerprint motion tracking using an in-line array - A fingerprint motion tracking method and system is provided for sensing features of a fingerprint along an axis of finger motion, where a linear sensor array has a plurality of substantially contiguous sensing elements configured to capture substantially contiguous overlapping segments of image data. A processing element is configured to receive segments of image data captured by the linear sensor array and to generate fingerprint motion data. Multiple sensor arrays may be included for generating directional data. The motion tracking data may be used in conjunction with a fingerprint image sensor to reconstruct a fingerprint image using the motion data either alone or together with the directional data. | 11-11-2010 |
20100284566 | PICTURE DATA MANAGEMENT APPARATUS AND PICTURE DATA MANAGEMENT METHOD - A landmark used as a key for organizing images captured by, e.g., a digital camera is adequately selected. An association degree adding section ( | 11-11-2010 |
20100284567 | SYSTEM AND PRACTICE FOR SURVEILLANCE PRIVACY-PROTECTION CERTIFICATION AND REGISTRATION - There is provided an apparatus for the certification of privacy compliance. The apparatus includes a registry of at least one of enrolled video surveillance operators, approved surveillance hardware devices, approved surveillance software programs, approved surveillance system installers, and approved entities that manage surveillance systems. The apparatus further includes a registry searcher, in signal communication with the registry, for receiving queries to the registry, and for determining whether at least one of a particular surveillance operator, a particular surveillance hardware device, a particular surveillance software program, a particular surveillance system installer, and a particular entity that manages a particular surveillance system is on the registry based on a given query. | 11-11-2010 |
20100284568 | OBJECT RECOGNITION APPARATUS AND OBJECT RECOGNITION METHOD - An object recognition apparatus recognizes an object from video data for a predetermined time period generated by a camera, analyzes the recognition result, and determines a minimum size and moving speed of the faces recognized in the received frame images. Then, the object recognition apparatus determines a lower limit value of a frame rate and resolution from the determined minimum size and moving speed of the faces. | 11-11-2010 |
20100284569 | LANE RECOGNITION SYSTEM, LANE RECOGNITION METHOD, AND LANE RECOGNITION PROGRAM - To provide a lane recognition system which can improve the lane recognition accuracy by suppressing noises that are likely to be generated respectively in an original image and a bird's-eye image. The lane recognition system recognizes a lane based on an image. The system includes: a synthesized bird's-eye image creation module which creates a synthesized bird's-eye image by connecting a plurality of bird's-eye images that are obtained by transforming respective partial regions of original images picked up at a plurality of different times into bird's-eye images; a lane line candidate extraction module which detects a lane line candidate by using information of the original images or the bird's-eye images created from the original images, and the synthesized bird's-eye image; and a lane line position estimation module which estimates a lane line position based on information of the lane line candidate. | 11-11-2010 |
20100284570 | SYSTEM AND METHOD FOR GAS LEAKAGE DETECTION - Imaging system and method for detecting the presence of a substance that has a detectable signature in a known spectral band. The system comprises a thermal imaging sensor and optics, and two interchangeable band-pass uncooled filters located between the optics and the detector. A first filter transmits electromagnetic radiation in a first spectral band that includes the known spectral band and blocks electromagnetic radiation for other spectral bands. A second filter transmits only electromagnetic radiation in a second spectral band in which the substance has no detectable signature. The system also includes a processor for processing the images to obtain a reconstructed fused image involving using one or more transforms aimed at obtaining similarity between one or more images acquired with the first filter and one or more images acquired with the second filter before reconstructing the fused image. | 11-11-2010 |
20100290668 | LONG DISTANCE MULTIMODAL BIOMETRIC SYSTEM AND METHOD - A system for multimodal biometric identification has a first imaging system that detects one or more subjects in a first field of view, including a targeted subject having a first biometric characteristic and a second biometric characteristic; a second imaging system that captures a first image of the first biometric characteristic according to first photons, where the first biometric characteristic is positioned in a second field of view smaller than the first field of view, and the first image includes first data for biometric identification; a third imaging system that captures a second image of the second biometric characteristic according to second photons, where the second biometric characteristic is positioned in a third field of view which is smaller than the first and second fields of view, and the second image includes second data for biometric identification. At least one active illumination source emits the second photons. | 11-18-2010 |
20100290669 | IMAGE JUDGMENT DEVICE - The present invention provides an image judgment device that can prevent increase in a storage capacity to store element characteristic information. The image judgment device stores the element characteristic information for each element that a characteristic part of a sample object has and first and second positional information defining a position of each element, selects either the first or the second positional information, acquires image characteristic information for a partial image that is in an image frame and considered as an element specified by the first positional information in a characteristic extraction method based on a first axis when the first positional information is selected, extracts image characteristic information for a partial image that is in an image frame and considered as an element specified by the second positional information in a characteristic extraction method based on a second axis, which is acquired by rotating the first axis, when the second positional information is selected, specifies element characteristic information for an element corresponding to a position of the partial image, and judges whether or not the characteristic part appears in the image frame with use of the specified element characteristic information and the extracted image characteristic information. | 11-18-2010 |
20100290670 | IMAGE PROCESSING APPARATUS, DISPLAY DEVICE, AND IMAGE PROCESSING METHOD - According to one embodiment, an image processing apparatus includes an extracted coordinates setting module, an image generator, and an output module. The extracted coordinates setting module sets extracted coordinates in a captured image along a direction in which a viewpoint moves with respect to an object in the captured image. The image generator sequentially extracts partial areas from the captured image in which perspective deformation of the object has been corrected based on the extracted coordinates, and generates a plurality of partial area images from the partial areas. The partial areas are in a size corresponding to the viewing angle of the human eye calculated according to an angle of view of the captured image. The output module outputs a moving image including the partial area images as frames. | 11-18-2010 |
20100290671 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An association degree evaluation unit acquires pieces of position information of an image sensing apparatus at respective times within an adjacent time range to an imaging time of a designated image of those sensed by the image sensing apparatus. Furthermore, the association degree evaluation unit acquires pieces of position information of a moving object at the respective times within the adjacent time range. Then, the association degree evaluation unit calculates a similarity between routes of the image sensing apparatus and moving object based on the acquired position information group, and decides a degree of association between the designated image and moving object based on the calculated similarity. An associating unit registers information indicating the degree of association in association with the designated image. | 11-18-2010 |
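The route-comparison step of this abstract (match the image sensing apparatus's positions against the moving object's positions over the same time range, then turn the match into a degree of association) can be sketched as a mean point-to-point distance mapped to a similarity score. The distance measure and the mapping are illustrative assumptions:

```python
def route_similarity(route_a, route_b):
    """Similarity between two time-aligned routes.

    route_a, route_b: equal-length lists of (x, y) positions sampled at
    the same times (e.g. the image sensing apparatus and the moving
    object). The mean point-to-point distance is mapped into (0, 1],
    where 1.0 means the routes coincide.
    """
    assert len(route_a) == len(route_b) and route_a
    total = 0.0
    for (ax, ay), (bx, by) in zip(route_a, route_b):
        total += ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    mean_dist = total / len(route_a)
    return 1.0 / (1.0 + mean_dist)
```

The resulting score could serve directly as the degree of association registered with the designated image: routes that ran together score near 1.0, unrelated routes decay toward 0.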
20100290672 | MOVING OBJECT DETECTING DEVICE, MOVING OBJECT DETECTING METHOD, AND COMPUTER PROGRAM - An apparatus for detecting movement of an object captured by an imaging device, the apparatus includes a moving object detection unit that is (1) operable to detect movement of an object based on a first moving object detecting process, and (2) operable to detect movement of the object based on a second moving object detecting process. The apparatus also includes an output unit operable to generate an output based on the detection by the moving object detection unit based on at least one of the first and second moving object detecting processes. | 11-18-2010 |
20100290673 | IMAGE PROCESSING DEVICE, ELECTRONIC INSTRUMENT, AND INFORMATION STORAGE MEDIUM - An image processing device includes a weighted image generation section that generates a weighted image in which at least one of an object-of-interest area of an input image and an edge of a background area other than the object-of-interest area is weighted, a composition grid generation section that generates a composition grid that includes grid lines that are weighted, and a composition evaluation section that performs composition evaluation calculations on the input image based on the weighted image and the composition grid. | 11-18-2010 |
20100296697 | OBJECT TRACKER AND OBJECT TRACKING METHOD - Referring to FIG. | 11-25-2010 |
20100296698 | MOTION OBJECT DETECTION METHOD USING ADAPTIVE BACKGROUND MODEL AND COMPUTER-READABLE STORAGE MEDIUM - A motion object detection method using an adaptive background model and a computer-readable storage medium are provided. In the motion object detection method, a background model establishing step is firstly performed to establish a background model to provide a plurality of background brightness reference values. Then, a foreground object detecting step is performed to use the background model to detect foreground objects. In the background model establishing step, a plurality of brightness weight values are firstly provided in accordance with the brightness of background pixels, wherein each of the brightness weight values is determined in accordance with the corresponding background pixel. Thereafter, the background brightness reference values are calculated based on the brightness of the background pixels and the brightness weight values. In addition, a computer can perform the motion object detection method after reading the computer-readable storage medium. | 11-25-2010 |
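The two steps this abstract outlines (brightness-dependent weights per background pixel, weighted background brightness reference values, then thresholded foreground detection) can be sketched per pixel. The particular weight formula and threshold below are illustrative assumptions, not the patent's formulas:

```python
def build_background(frames):
    """Background model establishing step: one weighted brightness
    reference value per pixel position.

    frames: list of equal-length brightness lists (flattened background
    frames). Brighter samples receive larger weights, an assumed stand-in
    for the patent's brightness-dependent weight values.
    """
    n = len(frames[0])
    refs = []
    for i in range(n):
        samples = [f[i] for f in frames]
        weights = [1.0 + s / 255.0 for s in samples]   # brightness-dependent
        refs.append(sum(w * s for w, s in zip(weights, samples)) / sum(weights))
    return refs

def detect_foreground(frame, refs, threshold=25.0):
    """Foreground object detecting step: flag pixels whose brightness
    deviates from the background reference by more than the threshold."""
    return [abs(p - r) > threshold for p, r in zip(frame, refs)]
```

A pixel that brightens or darkens sharply relative to its reference is flagged as foreground; pixels tracking their reference stay in the background.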
20100296699 | APPARATUS AND METHOD OF IMAGE ANALYSIS - A method of analysing a captured image comprising an instance of a target object comprises the steps of: for each of a plurality of different brightness threshold levels, generating contours from the captured digital image that indicate where in the captured digital image the pixel values of the captured digital image cross the respective brightness threshold level; identifying instances of a contour corresponding to a characteristic feature of said target object, the instances being detected at substantially similar image positions in the contours derived using at least two of the respective brightness threshold levels; and estimating a homography which maps the characteristic feature of the target object to its representation in the captured image, based upon the two or more instances of that target object's corresponding contour. | 11-25-2010 |
20100296700 | METHOD AND DEVICE FOR DETECTING THE COURSE OF A TRAFFIC LANE - A method for detecting the course of a traffic lane, including the following steps: | 11-25-2010 |
20100296701 | PERSON TRACKING METHOD, PERSON TRACKING APPARATUS, AND PERSON TRACKING PROGRAM STORAGE MEDIUM - A person tracking method capable of tracking movements of a person captured by a camera through lighter processing in comparison with tracking processing that employs a Kalman filter or the like is provided. The method includes: detecting a head on each frame image; calculating a feature quantity that features a person whose head is detected on the frame images; calculating a relevance ratio that represents a degree of agreement between a feature quantity on a past frame image and a feature quantity on a current frame image, which belong to each person whose head is detected on the current frame image; and determining that a head, which is the basis for calculating a relevance ratio representing a degree of agreement that is at least a first threshold as well as being the maximum degree of agreement, is the head of the same person as the person whose head was detected on the past frame image. | 11-25-2010 |
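The matching rule in this abstract (associate a detected head with the tracked person whose relevance ratio is both maximal and at least a first threshold) can be sketched as follows, with cosine similarity standing in for the patent's unspecified degree-of-agreement measure and all names assumed:

```python
def match_person(current_feat, past_people, first_threshold=0.7):
    """Return the id of the tracked person matching a detected head.

    current_feat: feature-quantity vector of the head on the current
    frame. past_people: dict mapping person id -> feature-quantity vector
    from past frames. Accepts the best match only if its relevance ratio
    reaches the first threshold; otherwise returns None (new person).
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    best_id, best_ratio = None, 0.0
    for person_id, past_feat in past_people.items():
        ratio = cosine(current_feat, past_feat)
        if ratio > best_ratio:
            best_id, best_ratio = person_id, ratio
    return best_id if best_ratio >= first_threshold else None
```

Because only a per-head feature comparison is needed, this kind of rule is indeed lighter than maintaining a Kalman filter per tracked person.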
20100296702 | PERSON TRACKING METHOD, PERSON TRACKING APPARATUS, AND PERSON TRACKING PROGRAM STORAGE MEDIUM - A person tracking method capable of obtaining information representing a correspondence between a shot image and a three-dimensional real space, without actual measurement, thereby enabling lighter processing is provided. The method includes: calculating a statistically average correspondence between the size of a person's head and the position representing the height of the head on the shot image, the camera looking down on and imaging the measured space; detecting a position and a size of a head on each of measured frame images; calculating, based on positions and sizes of heads on plural past measured frame images and the correspondence, a movement feature quantity representing a possibility that a head on a current measured frame image is of the same person on the past measured frame images; and determining that the head on the current measured frame image is of the same person on the past measured frame images. | 11-25-2010 |
20100296703 | METHOD AND DEVICE FOR DETECTING AND CLASSIFYING MOVING TARGETS - Horizontal velocity profile sensing techniques, methods and systems may be used to detect and classify moving targets, including but not limited to a person, an animal, or a vehicle, or any other object that lends itself to characterization. Such techniques, methods and systems may be implemented with an autonomous stand-alone device, for example, as an unattended ground sensor, or it may constitute part of a sensor system. An exemplary illustrative non-limiting implementation allows the device to be fixed to a location, while detecting and classifying moving targets. In another exemplary illustrative non-limiting implementation, the device may be placed on a moving or rotating platform and used to detect stationary objects. | 11-25-2010 |
20100296704 | SYSTEM AND METHOD FOR ANALYZING VIDEO FROM NON-STATIC CAMERA - A novel system and method of treating the output of moving cameras, in particular ones that enable the application of conventional “static camera” algorithms, e.g., to enable the continuous vigilance of computer surveillance technology to be applied to moving cameras that cover a wide area. According to the invention, a single camera is deployed to cover an area that might require many static cameras and a corresponding number of processing units. A novel system for processing the main video sufficiently enables long-term change detection, particularly the observation that a static object has been moved or has appeared, for instance detecting the parking and departure of vehicles in a parking lot, the arrival of trains in stations, delivery of goods, arrival and dispersal of people, or any other application. | 11-25-2010 |
20100303289 | DEVICE FOR IDENTIFYING AND TRACKING MULTIPLE HUMANS OVER TIME - A system recognizes human beings in their natural environment, without special sensing devices attached to the subjects, uniquely identifies them and tracks them in three dimensional space. The resulting representation is presented directly to applications as a multi-point skeletal model delivered in real-time. The device efficiently tracks humans and their natural movements by understanding the natural mechanics and capabilities of the human muscular-skeletal system. The device also uniquely recognizes individuals in order to allow multiple people to interact with the system via natural movements of their limbs and body as well as voice commands/responses. | 12-02-2010 |
20100303290 | Systems And Methods For Tracking A Model - An image such as a depth image of a scene may be received, observed, or captured by a device and a model of a user in the depth image may be generated. The background of a received depth image may be removed to isolate a human target in the received depth image. A model may then be adjusted to fit within the isolated human target in the received depth image. To adjust the model, a joint or a bone may be magnetized to the closest pixel of the isolated human target. The joint or the bone may then be refined such that it is further adjusted to a pixel equidistant between two edges of the body part of the isolated human target where the joint or bone was magnetized. | 12-02-2010 |
20100303291 | Virtual Object - An image of a scene may be observed, received, or captured. The image may then be scanned to determine one or more signals emitted or reflected by an indicator that belongs to an input object. Upon determining the one or more signals, the signals may be grouped together into a cluster that may be used to generate a first vector that may indicate the orientation of the input object in the captured scene. The first vector may then be tracked, a virtual object and/or an avatar associated with the first vector may be rendered, and/or controls to perform in an application executing on the computer environment may be determined based on the first vector. | 12-02-2010 |
20100303292 | APPARATUS AND METHOD FOR DETECTING MOVEMENT DIRECTION OF OBJECT - An apparatus for detecting the movement direction of an object includes a converging lens, an image sensor and an image processor. The converging lens has an axial chromatic aberration between first and second rays of different wavelengths. The image sensor is for receiving and converting the first and second rays into first and second electronic image signals associated with the object. The image processor is configured for analyzing whether the object is closer to an object plane associated with the first ray or closer to an object plane associated with the second ray when the object moves to different positions, and determining the movement direction of the object based on the analyzed positions of the object relative to the object plane associated with the first ray and the object plane associated with the second ray. | 12-02-2010 |
20100303293 | System and Method for Linking Real-World Objects and Object Representations by Pointing - A system and method are described for selecting and identifying a unique object or feature in the system user's three-dimensional (“3-D”) environment in a two-dimensional (“2-D”) virtual representation of the same object or feature in a virtual environment. The system and method may be incorporated in a mobile device that includes position and orientation sensors to determine the pointing device's position and pointing direction. The mobile device incorporating the present invention may be adapted for wireless communication with a computer-based system that represents static and dynamic objects and features that exist or are present in the system user's 3-D environment. The mobile device incorporating the present invention will also have the capability to process information regarding a system user's environment and to calculate specific measures for pointing accuracy and reliability. | 12-02-2010 |
20100303294 | Method and Device for Finding and Tracking Pairs of Eyes - A method for finding and subsequently tracking the 3-D coordinates of a pair of eyes in at least one face, including receiving image data, which contains a sequence of at least one digital video signal of at least one image sensor, finding eyes or tracking previously found eyes in the image data, ascertaining the 3-D coordinates of the found or tracked eyes, associating the found or tracked eyes with a pair of eyes and providing the 3-D coordinates of the pair of eyes. | 12-02-2010 |
20100303295 | X-Ray Monitoring - Apparatus for monitoring in real time the movement of a plurality of substances in a mixture, such as oil, water and air flowing through a pipe, comprises an X-ray scanner arranged to make a plurality of scans of the mixture over a monitoring period to produce a plurality of scan data sets, and control means arranged to analyze the data sets to identify volumes of each of the substances and to measure their movement. By identifying volumes of each of the substances in each of a number of layers and for each of a number of scans, real time analysis and imaging of the substances can be achieved. | 12-02-2010 |
20100303296 | MONITORING CAMERA SYSTEM, MONITORING CAMERA, AND MONITORING CAMERA CONTROL APPARATUS - A system includes a plurality of image capturing units configured to capture an object image to generate video data, a video coding unit configured to code each of the generated video data, a measurement unit configured to measure a recognition degree representing a feature of the object from each of the generated video data, and a control unit configured to control the video coding unit to code each of the video data based on the measured recognition degree. | 12-02-2010 |
20100303297 | COLOR CALIBRATION FOR OBJECT TRACKING - To calibrate a tracking system a computing device locates an object in one or more images taken by an optical sensor. The computing device determines environment colors included in the image, the environment colors being colors in the one or more images that are not emitted by the object. The computing device determines one or more trackable colors that, if assumed by the object, will enable the computing device to track the object. | 12-02-2010 |
20100303298 | SELECTIVE SOUND SOURCE LISTENING IN CONJUNCTION WITH COMPUTER INTERACTIVE PROCESSING - A method and apparatus for capturing image and sound during interactivity with a computer program is provided. The apparatus includes an image capture unit that is configured to capture one or more image frames. Also provided is a sound capture unit. The sound capture unit is configured to identify one or more sound sources. The sound capture unit generates data capable of being analyzed to determine a zone of focus at which to process sound to the substantial exclusion of sounds outside of the zone of focus. In this manner, sound that is captured and processed for the zone of focus is used for interactivity with the computer program. | 12-02-2010 |
20100310120 | METHOD AND SYSTEM FOR TRACKING MOVING OBJECTS IN A SCENE - A method and system for tracking moving objects in a scene is described. One embodiment acquires a digital video signal corresponding to the scene; identifies in the digital video signal one or more candidate moving objects; locates at least one candidate moving object in the digital video signal subsequent to identification of the at least one candidate moving object; tracks candidate moving objects that, for at least a predetermined period after they have been identified, continue to be located in the digital video signal; assigns a score to each tracked candidate moving object in accordance with how long after passage of the predetermined period the tracked candidate moving object has continued to be located in the digital video signal; combines the respective scores of the tracked candidate moving objects to obtain an overall score for the scene; and indicates to a user whether the overall score satisfies a predetermined criterion. | 12-09-2010 |
20100310121 | System and method for passive automatic target recognition (ATR) - A passive automatic target recognition (ATR) system includes a range map processor configured to generate range-to-pixel map data based on digital elevation map data and parameters of a passive image sensor. The passive image sensor is configured to passively acquire image data. The passive ATR system also includes a detection processor configured to identify a region of interest (ROI) in the passively acquired sensor image data based on the range-to-pixel map data, and an ATR processor configured to generate an ATR decision for the ROI. | 12-09-2010 |
20100310122 | Method and Device for Detecting Stationary Targets - Techniques for detecting stationary targets in videos or frame images are described. According to one aspect of the present invention, a sequence of frame images is received from a video system. Each of the frame images is divided into a plurality of image blocks, and a background image is divided into a plurality of corresponding background image blocks. Characteristic values of the image blocks in each of the frame images are calculated. A plurality of characteristic value sequences is then formed, each comprising a predefined number of characteristic values for each of the image blocks in the frame images. A histogram of each of the characteristic value sequences is computed to determine whether one of the image blocks in one of the frame images contains a stationary target. | 12-09-2010 |
20100310123 | METHOD AND SYSTEM FOR ACTIVELY DETECTING AND RECOGNIZING PLACARDS - A method and a system for actively detecting and recognizing a placard are provided. In the present method, an image capturing device is moved according to a maneuver rule, wherein the image capturing device captures an image continuously during the movement. Then whether a placard exists in the image or not is determined. If a placard exists in the image, a content of the placard is identified and a corresponding action is executed. The method repeatedly processes the foregoing steps to further continuously move the image capturing device and determine whether the placard exists in a newly captured image so as to achieve a purpose of detecting and recognizing placards actively. | 12-09-2010 |
20100310124 | METHOD OF AND DEVICE FOR DETERMINING THE DISTANCE BETWEEN AN INTEGRATED CIRCUIT AND A SUBSTRATE - In a method of determining the distance (d) between an integrated circuit | 12-09-2010 |
20100310125 | Method and Device for Detecting Distance, Identifying Positions of Targets, and Identifying Current Position in Smart Portable Device - A method for detecting distance in a smart portable device includes acquiring an image of a target object, calculating a length of a side of the target object in the image, acquiring a predicted length of the side of the target object, and determining a distance between the smart portable device and the target object according to the length of the side of the target object in the image and the predicted length. | 12-09-2010 |
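The range estimate in entry 20100310125 (distance from the measured image-space length of a side and its known real-world length) follows the standard pinhole camera model, d = f × W / w. A minimal sketch, where the function name and the numeric values are illustrative assumptions rather than values from the published abstract:

```python
def estimate_distance(focal_length_px: float,
                      real_width_m: float,
                      image_width_px: float) -> float:
    """Pinhole-model range estimate: d = f * W / w.

    focal_length_px: camera focal length expressed in pixels
    real_width_m:    known (predicted) physical length of the target side
    image_width_px:  measured length of that side in the image
    """
    return focal_length_px * real_width_m / image_width_px

# A 0.5 m side that spans 100 px with an 800 px focal length
# is estimated to be 800 * 0.5 / 100 = 4.0 m away.
print(estimate_distance(800.0, 0.5, 100.0))  # 4.0
```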
20100310126 | OPTICAL TRIANGULATION - The present invention relates to a method for determining the extension of a trajectory in a space-time volume of measure images. The space-time volume of measure images is generated by a measuring method utilizing a measuring system comprising a first light source and a sensor. The measuring method comprises a step of, in a predetermined operating condition of the measuring system, moving a measure object along a first direction of movement in relation to the measuring system while the first light source illuminates the measure object whereby the sensor generates a measure image of the measure object at each time instant in a set of at least two subsequent time instants, thus generating said space-time volume of measure images wherein a feature point of the measure object maps to a trajectory in the space-time volume. | 12-09-2010 |
20100310127 | SUBJECT TRACKING DEVICE AND CAMERA - A subject tracking device includes: an input unit that sequentially inputs input images; an arithmetic operation unit that calculates a first similarity level between an initial template image and a target image and a second similarity level between an update template image and the target image; a position determining unit that determines a subject position based upon at least one of the first and the second similarity level; a decision-making unit that decides whether or not to update the update template image based upon the first and the second similarity level; and an update unit that generates a new update template image based upon the initial template image multiplied by a first weighting coefficient and the target image multiplied by a second weighting coefficient, and updates the update template image with the newly generated update template image, if the update template image is decided to be updated. | 12-09-2010 |
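The template update in entry 20100310127 is a weighted combination of the initial template and the current target image. A minimal sketch of that update step; the weight values and array contents here are illustrative assumptions:

```python
import numpy as np

def update_template(initial: np.ndarray, target: np.ndarray,
                    w_initial: float = 0.7, w_target: float = 0.3) -> np.ndarray:
    """New update template = initial template * w1 + target image * w2."""
    return w_initial * initial + w_target * target

initial = np.full((4, 4), 100.0)  # stand-in for the initial template image
target = np.full((4, 4), 60.0)    # stand-in for the current target image
new_template = update_template(initial, target)
print(float(new_template[0, 0]))  # 0.7 * 100 + 0.3 * 60 = 88.0
```

Keeping a fixed share of the initial template, as the entry describes, limits drift: the updated template can never wander arbitrarily far from the appearance the track started with.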
20100310128 | System and Method for Remote Measurement of Displacement and Strain Fields - A computer-implemented method for measuring full field deformation characteristics of a deformable body. The method includes determining optical setup design parameters for measuring displacement and strain fields, and generating and applying a dot pattern on a planar side of a deformable body. A sequence of images of the dot pattern is acquired before and after deformation of the body. Irregular objects are eliminated from the images based on dot light intensity threshold and the object area or another geometrical cutoff criterion. The characteristic points of the dots are determined, and the characteristic points are matched between two or more of the sequential images. The displacement vector of the characteristic points is found, and mesh free or other techniques are used to estimate the full field displacement based on the displacement vector of the characteristic points. Strain tensor or other displacement-derived quantities can also be estimated using mesh-free or other analysis techniques. | 12-09-2010 |
20100316253 | PERVASIVE SENSING - A method of electronically monitoring a subject, for example in a home care environment, to determine the presence of the subject in zones of the environment as a function of time includes fusing data from image and wearable sensors. A grid display for displaying the presence in the zones is also provided. | 12-16-2010 |
20100316254 | USE OF Z-ORDER DATA IN AN IMAGE SENSOR - Systems and methods are provided for detecting objects of an object class, such as faces, in an image sensor. In some embodiments, the image sensor can include a detector with an image buffer. The image buffer can store image data in raster order. The detector can read the data out in Z order to perform object detection. The detector can then compute feature responses using the Z-ordered image data and determine whether any objects of the object class are present based on the feature responses. In some embodiments, the detector can downscale the image data while the object detection is performed and use the downscaled image data to continue the detection process. In some embodiments, the detector can perform detection even if the image is rotated. | 12-16-2010 |
20100316255 | DRIVER ASSISTANCE SYSTEM FOR MONITORING DRIVING SAFETY AND CORRESPONDING METHOD FOR DETECTING AND EVALUATING A VEHICLE MOVEMENT - A driver assistance system for monitoring driving safety has a mobile electronic unit including a video sensor, a computer unit for image data processing, and an acoustic output unit, which detects the immediate surroundings of the vehicle from the data of the video sensor and outputs a warning or information via an output unit when the computer unit detects a dangerous situation. The mobile electronic unit detects noises within the vehicle or from the outside via an acoustic input unit, and incorporates the information in the assessment of driving safety. | 12-16-2010 |
20100316256 | OBJECT DETECTION APPARATUS AND METHOD THEREOF - An image processing apparatus includes a discrimination unit configured to sequentially perform discrimination of whether each of a plurality of image data includes a predetermined object using a parameter stored in a storage unit, an update unit configured to update the parameter stored in the storage unit, and a control unit configured to, when the discrimination unit discriminates that the predetermined object is included, control the update unit to update the parameter and the discrimination unit to perform the discrimination on current image data using the updated parameter, and when the discrimination unit discriminates that the predetermined object is not included, control the update unit to maintain the parameter stored in the storage unit and the discrimination unit to perform the discrimination on next image data using the maintained parameter. By using this image processing apparatus, the processing can be speeded up without increasing a size of a circuit. | 12-16-2010 |
20100316257 | MOVABLE OBJECT STATUS DETERMINATION - Embodiments of the present invention relate to automated methods and systems for determining a degree of presence of a movable object in a physical space. Video images are used to define a region of interest | 12-16-2010 |
20100322471 | Motion invariant generalized hyperspectral targeting and identification methodology and apparatus therefor - The present disclosure relates to a method and system for enhancing the ability of nuclear, chemical, and biological (“NBC”) sensors, specifically mobile sensors, to detect, analyze, and identify NBC agents on a surface, in an aerosol, in a vapor cloud, or other similar environment. Embodiments include the use of a two-stage approach including targeting and identification of a contaminant. Spectral imaging sensors may be used for both wide-field detection (e.g., for scene classification) and narrow-field identification. | 12-23-2010 |
20100322472 | OBJECT TRACKING IN COMPUTER VISION - A method and system for object tracking in computer vision. The tracked object is recognized from an image that has been acquired with the camera of the computer vision system. The image is processed by randomly generating samples in the search space and then computing fitness functions. Regions of high fitness attract more samples. The random selection may be based on standard deviation or other weights. Computations are stored into a tree structure. The tree structure can be used as prior information for next image. | 12-23-2010 |
20100322473 | DECENTRALIZED TRACKING OF PACKAGES ON A CONVEYOR - A decentralized tracking system is discussed herein. The decentralized tracking system can be comprised of two or more tracking elements and be used to track packages moving on a conveyor system. Each tracking element can operate independently, despite being highly sophisticated and dynamically coordinated with one or more other tracking elements. The conveyor system can be a modular and/or accumulation conveyor system that has sorting functionality. The decentralized tracking system can be used to divert packages for sortation by, for example, embedding a destination zone into the package's tracking data and/or preprogramming conveyor zones to sort specific packages based on a package identifier. | 12-23-2010 |
20100322474 | DETECTING MULTIPLE MOVING OBJECTS IN CROWDED ENVIRONMENTS WITH COHERENT MOTION REGIONS - Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The constraint is enforced that selected coherent motion regions contain disjoint sets of tracks defined in a three-dimensional space that includes a time dimension. An algorithm operates directly on raw, unconditioned low-level feature point tracks, and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, each a measure of a maximum distance between a pair of feature point tracks. | 12-23-2010 |
20100322475 | OBJECT AREA DETECTING DEVICE, OBJECT AREA DETECTING SYSTEM, OBJECT AREA DETECTING METHOD AND PROGRAM - Detection of an overlaid object is enabled even if a stationary object is overlaid with another stationary object or a moving object. A data processing device includes a first unit which detects an object area in a plurality of time-series continuous input images, a second unit which detects a stationary area in the object area from the plurality of continuous input images, a third unit which stores information of the stationary area as time-series background information, and a fourth unit which compares the time-series background information with the object area to thereby detect each object included in the object area. | 12-23-2010 |
20100322476 | VISION BASED REAL TIME TRAFFIC MONITORING - A system and method for detecting and tracking one or more vehicles using a system for obtaining two-dimensional visual data depicting traffic flow on a road is disclosed. In one exemplary embodiment, the system and method identifies groups of features for determining traffic data. The features are classified as stable features or unstable features based on whether each feature is on the frontal face of a vehicle close to the road plane. In another exemplary embodiment, the system and method identifies vehicle base fronts as a basis for determining traffic data. In yet another exemplary embodiment, the system and method includes an automatic calibration procedure based on identifying two vanishing points. | 12-23-2010 |
20100322477 | DEVICE AND METHOD FOR DETECTING A PLANT - A device for detecting a plant includes a two-dimensional camera for detecting a two-dimensional image of a plant leaf having a high two-dimensional resolution, and a three-dimensional camera for detecting a three-dimensional image of the plant leaf having a high three-dimensional resolution. The two-dimensional camera is a conventional high-resolution color camera, for example, and the three-dimensional camera is a TOF camera, for example. A processor for merging the two-dimensional image and the three-dimensional image creates a three-dimensional result representation having a higher resolution than the three-dimensional image of the 3D camera, which may include, among other things, the border of a leaf. The three-dimensional result representation serves to characterize a plant leaf, such as to calculate the surface area of the leaf, the alignment of the leaf, or serves to identify the leaf. | 12-23-2010 |
20100322478 | Restoration apparatus for weather-degraded image and driver assistance system - In a restoration apparatus, an estimating unit divides a captured original image into a plurality of local pixel blocks, and estimates a luminance level of airlight in each of the plurality of local pixel blocks. A calculating unit directly calculates, from a particle-affected luminance model, a luminance level of each pixel of each of the plurality of local pixel blocks in the original image to thereby generate, based on the luminance level of each pixel of each of the plurality of local pixel blocks, a restored image of the original image. The particle-affected luminance model expresses an intrinsic luminance of a target observed by the image pickup device as a function of the luminance level of airlight and an extinction coefficient. The extinction coefficient represents the concentration of particles in the atmosphere. | 12-23-2010 |
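A particle-affected luminance model of the kind entry 20100322478 describes is commonly written in the standard haze (Koschmieder) form I = J·t + A·(1 − t), with transmission t = exp(−β·d), where A is the airlight luminance and β the extinction coefficient. A minimal sketch of inverting that model to restore intrinsic luminance; the exact model in the patent may differ, and the numeric values are illustrative assumptions:

```python
import math

def restore_luminance(observed: float, airlight: float,
                      beta: float, distance: float) -> float:
    """Invert the haze model I = J*t + A*(1 - t), with transmission
    t = exp(-beta * d), to recover the intrinsic luminance J."""
    t = math.exp(-beta * distance)
    return (observed - airlight * (1.0 - t)) / t

# A target of intrinsic luminance 100 seen through particles (A = 200,
# beta = 0.1, d = 10) appears brighter; inverting the model recovers 100.
observed = 100.0 * math.exp(-1.0) + 200.0 * (1.0 - math.exp(-1.0))
print(round(restore_luminance(observed, 200.0, 0.1, 10.0), 6))  # 100.0
```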
20100322479 | SYSTEMS AND METHODS FOR 3-D TARGET LOCATION - A target is imaged in a three-dimensional real space using two or more video cameras. A three-dimensional image space combined from two video cameras of the two or more video cameras is displayed to a user using a stereoscopic display. A right eye and a left eye of the user are imaged as the user is observing the target in the stereoscopic video display, a right gaze line of the right eye and a left gaze line of the left eye are calculated in the three-dimensional image space, and a gazepoint in the three-dimensional image space is calculated as the intersection of the right gaze line and the left gaze line using a binocular eyetracker. A real target location is determined by translating the gazepoint in the three-dimensional image space to the real target location in the three-dimensional real space from the locations and the positions of the two video cameras using a processor. | 12-23-2010 |
20100322480 | Systems and Methods for Remote Tagging and Tracking of Objects Using Hyperspectral Video Sensors - Detection and tracking of an object by exploiting its unique reflectance signature. This is done by examining every image pixel and computing how closely that pixel's spectrum matches a known object spectral signature. The measured radiance spectra of the object can be used to estimate its intrinsic reflectance properties that are invariant to a wide range of illumination effects. This is achieved by incorporating radiative transfer theory to compute the mapping between the observed radiance spectra to the object's reflectance spectra. The consistency of the reflectance spectra allows for object tracking through spatial and temporal gaps in coverage. Tracking an object then uses a prediction process followed by a correction process. | 12-23-2010 |
20100329508 | Detecting Ground Geographic Features in Images Based on Invariant Components - Systems, devices, features, and methods for detecting geographic features in images, such as, for example, to develop a navigation database are disclosed. For example, a method of detecting a path marking from collected images includes collecting a plurality of images of geographic areas along a path. An image of the plurality of images is selected. Components that represent an object on the path in the selected image are determined. In one embodiment, the determined components are independent or invariant to scale of the object. The determined components are compared to reference components in a data library. If the determined components substantially meet a matching threshold with the reference components, the object in the selected image is identified to be a path marking corresponding to the reference components in the data library. | 12-30-2010 |
20100329509 | METHOD AND SYSTEM FOR GESTURE RECOGNITION - A method and a system for gesture recognition are provided for recognizing a gesture performed by a user in front of an electronic product having a video camera. In the present method, an image containing the upper body of the user is captured and a hand area in the image is obtained. The hand area is fully scanned by a first couple of concentric circles. During the scanning, a proportion of a number of skin color pixels on an inner circumference of the first couple of concentric circles and a proportion of a number of skin color pixels on an outer circumference of the first couple of concentric circles are used to determine a number of fingertips in the hand area. The gesture is recognized by the number of fingertips and an operation function of the electronic product is executed according to an operating instruction corresponding to the recognized gesture. | 12-30-2010 |
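The fingertip test in entry 20100329509 compares the fraction of skin-color pixels on the inner and outer circumferences of a pair of concentric circles: a fingertip candidate shows a high skin proportion on the inner circle and a low one on the outer circle. A minimal sketch of that proportion test on a binary skin mask; the sampling scheme, radii, and mask are illustrative assumptions:

```python
import math

def skin_ratio_on_circle(mask, cx, cy, radius, samples=72):
    """Fraction of points sampled on a circle of the given radius,
    centered at (cx, cy), that land on skin pixels (mask value 1)."""
    hits = 0
    for k in range(samples):
        a = 2.0 * math.pi * k / samples
        x = int(round(cx + radius * math.cos(a)))
        y = int(round(cy + radius * math.sin(a)))
        if 0 <= y < len(mask) and 0 <= x < len(mask[0]) and mask[y][x]:
            hits += 1
    return hits / samples

# Toy mask: a skin disk of radius 3 centered at (7, 7) in a 15x15 grid.
mask = [[1 if (x - 7) ** 2 + (y - 7) ** 2 <= 9 else 0
         for x in range(15)] for y in range(15)]
print(skin_ratio_on_circle(mask, 7, 7, 2))  # inner circle: 1.0 (all skin)
print(skin_ratio_on_circle(mask, 7, 7, 6))  # outer circle: 0.0 (no skin)
```

Counting the points where the inner ratio is high and the outer ratio is low would give the number of fingertips used to recognize the gesture.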
20100329510 | METHOD AND DEVICE FOR DISPLAYING THE SURROUNDINGS OF A VEHICLE - In a method for displaying on a display device the surroundings of a vehicle, the surroundings are detected by at least one detection sensor as an image of the surroundings while the vehicle is traveling or at a standstill. A surroundings image from a given surrounding area is ascertained by the detection sensor in different vehicle positions, and/or at least one surroundings image from the given surrounding area is ascertained by each of at least two detection sensors situated at a distance from one another, and in each case a composite surroundings image is obtained from the surroundings images and displayed by the display device. | 12-30-2010 |
20100329511 | Apparatus and method for detecting hands of subject in real time - An apparatus and method can effectively detect both the hands and the hand shape of a user from images input through cameras. A skin image detecting skin regions from one of the input images and a stereoscopic distance image are used. For hand detection, background and noise are eliminated from a combined image of the skin image and the distance image, and regions corresponding to the two actual hands are detected from effective images having a high probability of containing hands. For hand shape detection, a non-skin region is eliminated from the skin image based on the stereoscopic distance information, hand shape candidate regions are detected from the remaining region after elimination, and finally a hand shape is determined. | 12-30-2010 |
20100329512 | METHOD FOR REALTIME TARGET DETECTION BASED ON REDUCED COMPLEXITY HYPERSPECTRAL PROCESSING - There is provided a method for real-time target detection comprising detecting a preprocessed pixel as a target and/or a background, based on a library, and refining the library by extracting a sample from the target or the background. | 12-30-2010 |
20110002505 | System and Method For Analysis of Image Data - A method and apparatus for optical damage assessment using an existing imaging focal plane array and a fixed or moving set of optics and filters. Advantages include cost reductions and improved reliability due to fewer components and therefore fewer points of failure. | 01-06-2011 |
20110002506 | Eye Beautification - Sub-regions within a face image are identified to be enhanced by applying a localized smoothing kernel to luminance data corresponding to the sub-regions of the face image. An enhanced face image is generated including an enhanced version of the face that includes certain original pixels in combination with pixels corresponding to the one or more enhanced sub-regions of the face. | 01-06-2011 |
20110002507 | Obstacle detection procedure for motor vehicle - The present invention concerns an obstacle detection procedure within the area surrounding a motor vehicle. | 01-06-2011 |
20110002508 | DIGITALLY-GENERATED LIGHTING FOR VIDEO CONFERENCING APPLICATIONS - A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation and Lambertian and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model is generated which is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model. | 01-06-2011 |
20110002509 | MOVING OBJECT DETECTION METHOD AND MOVING OBJECT DETECTION APPARATUS - A moving object detection method with which a region of a moving object is accurately extracted without being affected by a change in shape or size or occlusion of the moving object and in which a distance indicating a similarity between trajectories of an image in each of the blocks included in video is calculated. | 01-06-2011 |
20110007938 | Thermal and short wavelength infrared identification systems - A method and apparatus for preventing fratricide including an emitter that emits a signaling code at a wavelength, the signaling code representing a coded message; a receiver that captures an image of a field of view including the emitter and generates image information corresponding to the captured image; a translation system that: receives the image information, and decodes the coded message from the image information; and an output device that outputs the decoded message. | 01-13-2011 |
20110007939 | Image-based tracking - A method of image-tracking by using an image capturing device. The method comprises: performing an image-capture of a scene by using an image capturing device; and tracking movement of the image capturing device by analyzing a set of images by using an image processing algorithm. | 01-13-2011 |
20110007940 | AUTOMATED TARGET DETECTION AND RECOGNITION SYSTEM AND METHOD - Methods and apparatus are provided for recognizing particular objects of interest in a captured image. One or more salient features that are correlative to an object of interest are detected within a captured image. The captured image is segmented into one or more regions of interest that include a detected salient feature. A covariance appearance model is generated for each of the one or more regions of interest, and first and second comparisons are conducted. The first comparisons comprise comparing each of the generated covariance appearance models to a plurality of stored covariance appearance models, and the second comparisons comprise comparing each of the generated covariance appearance models to each of the other generated covariance appearance models. Based on the first and second comparisons, a determination is made as to whether each of the one or more detected salient features is a particular object of interest. | 01-13-2011 |
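A covariance appearance model, as named in the abstract above, is the covariance matrix of per-pixel feature vectors drawn from a region. The sketch below shows the construction; the Frobenius distance used to compare two models is a simplification (region-covariance work typically uses a Riemannian metric), and the feature choice is an assumption.

```python
def covariance_model(features):
    """Covariance matrix of a region's per-pixel feature vectors."""
    n, d = len(features), len(features[0])
    mean = [sum(f[i] for f in features) / n for i in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in features:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (f[i] - mean[i]) * (f[j] - mean[j])
    return [[c / (n - 1) for c in row] for row in cov]  # sample covariance

def frobenius_distance(a, b):
    """Simple matrix distance, used here in place of the usual
    Riemannian metric on covariance matrices (an assumed simplification)."""
    return sum((a[i][j] - b[i][j]) ** 2
               for i in range(len(a)) for j in range(len(a))) ** 0.5
```

Both the first comparison (generated model vs. stored models) and the second (generated model vs. other generated models) could then be pairwise calls to such a distance function.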
20110007941 | PRECISELY LOCATING FEATURES ON GEOSPATIAL IMAGERY - Methods for locating a feature on geospatial imagery and systems for performing those methods are disclosed. An accuracy level of each of a plurality of geospatial vector datasets available in a database can be determined. Each of the plurality of geospatial vector datasets corresponds to the same spatial region as the geospatial imagery. The geospatial vector dataset having the highest accuracy level may be selected. When the selected geospatial vector dataset and the geospatial imagery are misaligned, the selected geospatial vector dataset is aligned to the geospatial imagery. The location of the feature on the geospatial imagery is then determined based on the selected geospatial vector dataset and outputted via a display device. | 01-13-2011 |
20110007942 | Real-Time Tracking System - There is provided a real-time tracking system and a method associated therewith for identifying and tracking objects moving in a physical region, typically for producing a physical effect, in real-time, in response to the movement of each object. The system scans a plane, which intersects a physical space, in order to collect reflection-distance data as a function of position along the plane. The reflection-distance data is then processed by a shape-analysis subsystem in order to locate among the reflection-distance data, a plurality of discontinuities, which are in turn associated to one or more detected objects. Each detected object is identified and stored in an identified-object structure. The scanning and processing is repeated for a number of iterations, wherein each detected object is identified with respect to the previously scanned objects, through matching with the identified-object structures, in order to follow the course of each particular object. | 01-13-2011 |
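Locating discontinuities in reflection-distance data along a scanned plane, as the abstract above describes, amounts to segmenting a 1-D range profile wherever consecutive readings jump. A minimal sketch (the jump threshold and index-pair output format are illustrative assumptions):

```python
def find_objects(ranges, jump=0.5):
    """Split a 1-D scan of reflection distances into object segments
    wherever consecutive readings differ by more than `jump`
    (i.e., at a discontinuity)."""
    segments, start = [], 0
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > jump:
            segments.append((start, i - 1))  # close the current segment
            start = i
    segments.append((start, len(ranges) - 1))
    return segments
```

Each returned segment would correspond to one detected object, which the shape-analysis subsystem could then match against identified-object structures across scan iterations.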
20110007943 | Registration Apparatus, Checking Apparatus, Data Structure, and Storage Medium (amended) - A registration apparatus, a checking apparatus, a data structure, and a storage medium that are capable of achieving an improved authentication accuracy are provided. The registration apparatus includes an image acquisition unit configured to acquire a venous image for a vein of a living body, an extraction unit configured to extract a parameter resistant to affine transformation from part of the venous image, and a registration unit configured to register the parameter extracted by the extraction unit in storage means. The part of the venous image is set as a target for extracting the parameter resistant to affine transformation. | 01-13-2011 |
20110007944 | SYSTEM AND METHOD FOR OCCUPANCY ESTIMATION - A system generates occupancy estimates based on a Kinetic-Motion (KM)-based model that predicts the movements of occupants through a region divided into a plurality of segments. The system includes a controller for executing an algorithm representing the KM-based model. The KM-based model includes state equations that define each of the plurality of segments as containing congested portions and uncongested portions. The state equations define the movement of occupants based, in part, on the distinctions made between congested and uncongested portions of each segment. | 01-13-2011 |
20110007945 | FAST ALGORITHM FOR STREAMING WAVEFRONT - The invention is generally directed to the field of image processing, and more particularly to a method and an apparatus for determining a wavefront of an object, in particular a human eye. The invention discloses a method and an apparatus for real-time wavefront sensing of an optical system utilizing two different algorithms for detecting centroids of a centroid image as provided by a Hartmann-Shack wavefront sensor. A first algorithm detects an initial position of all centroids and a second algorithm detects incremental changes of all centroids detected by said first algorithm. | 01-13-2011 |
20110007946 | UNIFIED SYSTEM AND METHOD FOR ANIMAL BEHAVIOR CHARACTERIZATION WITH TRAINING CAPABILITIES - In general, the present invention is directed to systems and methods for finding the position and shape of an object using video. The invention includes a system with a video camera coupled to a computer in which the computer is configured to automatically provide object segmentation and identification, object motion tracking (for moving objects), object position classification, and behavior identification. In a preferred embodiment, the present invention may use background subtraction for object identification and tracking, a probabilistic approach with expectation-maximization for motion tracking and object classification, and decision tree classification for behavior identification. Thus, the present invention is capable of automatically monitoring a video image to identify, track and classify the actions of various objects and the objects' movements within the image. The image may be provided in real time or from storage. The invention is particularly useful for monitoring and classifying animal behavior for testing drugs and genetic mutations, but may be used in any of a number of other surveillance applications. | 01-13-2011 |
20110013804 | Method for Normalizing Displaceable Features of Objects in Images - A method normalizes a feature of an object in an image. The feature of the object is extracted from a 2D or 3D image. The feature is displaceable within a displacement zone in the object, and wherein the feature has a location within the displacement zone. An associated description of the feature is determined. Then, the feature is displaced to a best location in the displacement zone to produce a normalized feature. | 01-20-2011 |
20110013805 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND INTERFACE APPARATUS - In order to detect a specific detection object from an input image, a color serving as a reference is calculated in a reference image region. The difference for each color component between each pixel in the detection window and the reference color is calculated. Whether or not the detection object is included in the detection window is discriminated by a feature vector indicating how the difference is distributed in the detection window. | 01-20-2011 |
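The reference-color scheme in the abstract above, computing a reference color from a reference region, taking per-pixel differences in a detection window, and summarizing how the differences are distributed, can be sketched as below. The mean-color reference, the averaged channel difference, and the 4-bin histogram feature are illustrative assumptions.

```python
def reference_color(region):
    """Mean color of the reference image region (list of (r, g, b) pixels)."""
    n = len(region)
    return tuple(sum(p[c] for p in region) / n for c in range(3))

def difference_histogram(window, ref, bins=4, max_diff=256.0):
    """Feature vector: histogram of per-pixel color differences from the
    reference color, describing how the difference is distributed
    across the detection window."""
    hist = [0] * bins
    for p in window:
        d = sum(abs(p[c] - ref[c]) for c in range(3)) / 3.0  # mean channel diff
        hist[min(int(d * bins / max_diff), bins - 1)] += 1
    return hist
```

A downstream classifier would then discriminate "object present / absent" from this feature vector, e.g. by comparing it against histograms learned from training windows.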
20110013806 | Methods of object search and recognition - Embodiments of the invention disclose techniques for processing of machine-readable forms of unfixed or flexible format. An auxiliary brief description may be optionally specified to determine the spatial orientation of the image. A method of searching for elements of a document comprises the following main operations in addition to the operations of preliminary image processing: selecting the varieties of structural description from several available variants, determining the orientation of the image, selecting the text objects, where the text must be recognized, and determining the minimal required volume of recognition, recognizing the text objects, searching for elements of the form. Searching for elements of the form comprises the following actions: selecting a searched element in the structural description, gaining the algorithm of search constraints from the structural description, searching for the element, testing the obtained variants. | 01-20-2011 |
20110019873 | PERIPHERY MONITORING DEVICE AND PERIPHERY MONITORING METHOD - A flow calculating section | 01-27-2011 |
20110019874 | DEVICE AND METHOD FOR DETERMINING GAZE DIRECTION - An eye tracker device ( | 01-27-2011 |
20110019875 | IMAGE DISPLAY DEVICE - On a table type image display device A, a display ( | 01-27-2011 |
20110026764 | DETECTION OF OBJECTS USING RANGE INFORMATION - A system and method for detecting objects and background in digital images using range information includes receiving the digital image representing a scene; identifying range information associated with the digital image and including distances of pixels in the scene from a known reference location; generating a cluster map based at least upon an analysis of the range information and the digital image, the cluster map grouping pixels of the digital image by their distances from a viewpoint; identifying objects in the digital image based at least upon an analysis of the cluster map and the digital image; and storing an indication of the identified objects in a processor-accessible memory system. | 02-03-2011 |
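A cluster map that "groups pixels of the digital image by their distances from a viewpoint", as in the entry above, can be approximated by quantizing each pixel's range value. Fixed-width depth bins are an assumption here; the patent's analysis combines range with image content.

```python
def cluster_map(depth, bin_width=1.0):
    """Assign each pixel a cluster id by quantizing its distance from the
    viewpoint; pixels at similar depths share an id."""
    # Collect the occupied depth bins, then relabel them 0..k-1.
    ids = sorted({int(d // bin_width) for row in depth for d in row})
    remap = {b: i for i, b in enumerate(ids)}
    return [[remap[int(d // bin_width)] for d in row] for row in depth]
```

Connected regions sharing a cluster id would then be candidate objects (foreground) or background, depending on their distance.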
20110026765 | SYSTEMS AND METHODS FOR HAND GESTURE CONTROL OF AN ELECTRONIC DEVICE - Systems and methods of generating device commands based upon hand gesture commands are disclosed. An exemplary embodiment generates image information from a series of captured images, generates commands based upon hand gestures made by a user that emulate device commands generated by a remote control device, identifies a hand gesture made by the user from the received image information, determines a hand gesture command based upon the identified hand gesture, compares the determined hand gesture command with the plurality of predefined hand gesture commands to identify a corresponding matching hand gesture command from the plurality of predefined hand gesture commands, generates an emulated remote control device command based upon the identified matching hand gesture command, and controls the media device based upon the generated emulated remote control device command. | 02-03-2011 |
20110026766 | MOVING IMAGE EXTRACTING APPARATUS, PROGRAM AND MOVING IMAGE EXTRACTING METHOD - There is provided a moving image extracting apparatus including a movement detecting unit which detects movement of an imaging apparatus at the time when imaging a moving image based on the moving image imaged by the imaging apparatus, an object detecting unit which detects an object from the moving image, a salient object selecting unit which selects an object detected by the object detecting unit over a period of predetermined length or longer as a salient object within a segment in which movement of the imaging apparatus is detected by the movement detecting unit, and an extracting unit which extracts a segment including the salient object selected by the salient object selecting unit from the moving image. | 02-03-2011 |
20110026767 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An apparatus stores a luminance signal and a color signal extracted from a tracking area in image data and determines a correlation with the stored luminance signal, thereby extracting an area where a specified object exists in another image data to update the tracking area using the position information of the extracted area. If a sufficient correlation cannot be obtained from the luminance signal, the apparatus makes a comparison with the stored color signal to determine whether the specified object is lost. The apparatus updates the luminance signal every time the tracking area is updated, but does not update the color signal even if the tracking area is updated or updates the color signal at a period longer than a period at which the luminance signal is updated. | 02-03-2011 |
20110026768 | Tracking a Spatial Target - Apparatuses and methods for tracking a dermatological feature are disclosed. One method includes establishing an imaging reference proximate to an identified dermatological feature, wherein the imaging reference has a known color spectrum and known physical dimensions. A digital image sequence is obtained containing one or more images of the identified dermatological feature and the imaging reference. At least one trait of the identified dermatological feature is estimated using the imaging reference and at least one image of the digital image sequence. | 02-03-2011 |
20110026769 | PRESENTATION DEVICE - A presentation device comprises an image capture portion for capturing an image of a subject and generating a raw image thereof; a detection portion adapted to analyze whether a first marker is present in the raw image, and if the first marker is present in the raw image, to detect an existing position of the first marker within the raw image; a storage portion for storing a positional relationship of a synthesis position at which a mask image for masking at least a portion of the raw image is synthesized with the raw image relative to the existing position of the first marker; a synthesized image generation portion adapted to determine the synthesis position according to the positional relationship with the detected existing position, and to synthesize the mask image at the determined synthesis position within the raw image to generate a synthesized image; and an output portion for outputting the synthesized image. | 02-03-2011 |
20110026770 | Person Following Using Histograms of Oriented Gradients - A method for using a remote vehicle having a stereo vision camera to detect, track, and follow a person, the method comprising: detecting a person using a video stream from the stereo vision camera and histogram of oriented gradient descriptors; estimating a distance from the remote vehicle to the person using depth data from the stereo vision camera; tracking a path of the person and estimating a heading of the person; and navigating the remote vehicle to an appropriate location relative to the person. | 02-03-2011 |
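The histogram-of-oriented-gradients descriptor named in the entry above can be illustrated at its smallest scale: one cell of a grayscale image, central-difference gradients, and unsigned orientation bins. The 9-bin unsigned scheme is the common convention, assumed here; a full person detector tiles many such cells and feeds the concatenated, block-normalized histograms to a classifier.

```python
import math

def hog_cell(gray):
    """Histogram of oriented gradients for one cell of a grayscale image:
    central differences, magnitude-weighted votes into 9 unsigned
    orientation bins of 20 degrees each."""
    bins = [0.0] * 9
    h, w = len(gray), len(gray[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]
            gy = gray[y + 1][x] - gray[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
            bins[min(int(ang / 20.0), 8)] += mag
    return bins
```

For a vertical intensity edge, all gradient energy lands in the 0-degree bin, which is what makes the descriptor sensitive to the oriented contours of a human silhouette.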
20110033084 | IMAGE CLASSIFICATION SYSTEM AND METHOD THEREOF - An image classification system configured to classify a target and method thereof is provided, wherein the system includes at least one light source configured to emit light with at least one line pattern towards the target, wherein at least a portion of the emitted light and line pattern is reflected by the target. The system further includes an imager configured to receive at least a portion of the reflected light and line pattern, such that an obtained 2-D line pattern is produced that is representative of at least a portion of the emitted light and line pattern reflected by the target, and a controller configured to compare the 2-D line pattern to at least one previously obtained 2-D line pattern stored in a database, such that the controller classifies the 2-D line pattern as a function of the comparison. | 02-10-2011 |
20110033085 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An apparatus includes a storage unit configured to store an attribute of each pixel existing inside a tracking target area set on an image and an attribute of a pixel existing adjacent to the pixel, an allocation unit configured to allocate an evaluation value to a pixel to be evaluated according to a result of comparison between an attribute of the pixel to be evaluated and an attribute of a pixel existing inside the tracking target area and a result of comparison between an attribute of a pixel existing adjacent to the pixel to be evaluated and an attribute of a pixel existing adjacent to the pixel existing inside the tracking target area, and a changing unit configured to change the tracking target area based on the allocated evaluation value. | 02-10-2011 |
20110033086 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An apparatus includes a storage unit configured to classify pixels existing inside a tracking target area set on an image and pixels existing outside the tracking target area according to an attribute and to store a result of classification of the pixels on a storage medium, a first derivation unit configured to derive a first ratio of the pixels existing inside the tracking target area and having the attribute to the pixels existing outside the tracking target area and having the attribute, a second derivation unit configured to derive a second ratio of pixels, whose first ratio is higher than a first predetermined value, to all pixels existing inside the tracking target area, and a determination unit configured, if the second ratio is higher than a second predetermined value, to determine that the tracking target area can be tracked. | 02-10-2011 |
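The two-ratio trackability test in the entry above can be sketched directly: the first ratio measures how distinctive an attribute is (inside count vs. outside count), and the second ratio measures what fraction of the target area is covered by distinctive attributes. The threshold values and the attribute representation (simple labels) are illustrative assumptions.

```python
from collections import Counter

def trackable(inside_attrs, outside_attrs, r1_min=2.0, r2_min=0.5):
    """Decide whether a tracking target area can be tracked.
    First ratio: per-attribute count inside the area vs. outside it.
    Second ratio: fraction of inside pixels whose attribute's first
    ratio exceeds r1_min."""
    inside, outside = Counter(inside_attrs), Counter(outside_attrs)
    distinctive = {a for a, n in inside.items()
                   if n / max(outside.get(a, 0), 1) > r1_min}
    second = sum(n for a, n in inside.items() if a in distinctive) / len(inside_attrs)
    return second > r2_min
```

Intuitively, an area dominated by colors that also fill the background fails the test, since no attribute distinguishes the target from its surroundings.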
20110033087 | VIDEO CONTENT ANALYSIS - A video content analysis (VCA) system generates an output regarding a detected condition that provides an indication of a confidence level regarding the detected condition. One example VCA system determines whether a first characteristic of a detected object in a field of vision of the video content analysis system satisfies a first criterion. If so, a first signal is generated under selected conditions. The VCA system also determines whether a second characteristic of the detected object satisfies a corresponding second criterion. If so, a second, different signal is generated if the first and second criteria are satisfied. The first and second signals indicate respective, different confidence levels that an event has occurred. A disclosed example includes a VCA as part of a security system. | 02-10-2011 |
20110038508 | SYSTEM AND METHOD FOR PERFORMING OPTICAL NAVIGATION USING PORTIONS OF CAPTURED FRAMES OF IMAGE DATA - A system and method for performing optical navigation selectively uses portions of captured frames of image data for cross-correlation for displacement estimation, which can reduce the power consumption and/or increase the tracking performance at higher speed usage. | 02-17-2011 |
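Estimating displacement from a portion of a frame rather than the whole frame, as the entry above describes, can be sketched with a block-matching search. Sum of absolute differences is used here in place of true cross-correlation (an assumed simplification), and the patch location and search radius are illustrative parameters.

```python
def estimate_displacement(prev, curr, top, left, size, search=2):
    """Estimate (dy, dx) motion by matching one `size`x`size` portion of
    the previous frame against the current frame using sum of absolute
    differences, instead of correlating the full frames."""
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = 0
            for y in range(size):
                for x in range(size):
                    cost += abs(prev[top + y][left + x] -
                                curr[top + y + dy][left + x + dx])
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best
```

Because the cost loop runs over only the selected portion, the arithmetic scales with the patch area rather than the frame area, which is where the power saving would come from.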
20110044497 | SYSTEM, METHOD AND PROGRAM PRODUCT FOR CAMERA-BASED OBJECT ANALYSIS - A system, method and program product for camera-based object analyses including object recognition, object detection, and/or object categorization. An exemplary embodiment of the computerized method for analyzing objects in images obtained from a camera system includes receiving image(s) having pixels from the camera system; calculating a pool of features for each pixel; then deriving either a pool of radial moment of features from the pool of features and a geometric center of the image(s) or a pool of central moments of features from the pool of features; then calculating a normalized descriptor, based on an area of the image(s) and either of the derived pool of moments of features; and then based on the normalized descriptor, a computer then either recognizes, detects, and/or categorizes an object(s) in the image(s). | 02-24-2011 |
20110044498 | VISUALIZING AND UPDATING LEARNED TRAJECTORIES IN VIDEO SURVEILLANCE SYSTEMS - Techniques are disclosed for visually conveying a trajectory map. The trajectory map provides users with a visualization of data observed by a machine-learning engine of a behavior recognition system. Further, the visualization may provide an interface used to guide system behavior. For example, the interface may be used to specify that the behavior recognition system should alert (or not alert) when a particular trajectory is observed to occur. | 02-24-2011 |
20110044499 | INTER-TRAJECTORY ANOMALY DETECTION USING ADAPTIVE VOTING EXPERTS IN A VIDEO SURVEILLANCE SYSTEM - A sequence layer in a machine-learning engine configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses the voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an n-gram trie for those label sequences. The sequence layer computes the entropies for the nodes in the n-gram trie and determines a sliding window length and vote count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene as well as issue alerts for inter-sequence and intra-sequence anomalies. | 02-24-2011 |
20110044500 | Light Information Receiving Method, Unit and Method for Recognition of Light-Emitting Objects - A light information receiving method, a method and a unit for the recognition of light-emitting objects are provided. The light information receiving method includes the following steps. A light-emitting object array is captured to obtain a plurality of images, wherein the light-emitting object array includes at least one light-emitting object. A temporal filtering process is performed to the images to recognize a light-emitting object. A light-emitting status of the light-emitting object array is recognized according to the light-emitting object location. A decoding process is performed according to the light-emitting status to output an item of information. | 02-24-2011 |
20110044501 | Systems and methods for personalized motion control - End users, unskilled in the art, generate motion recognizers from example motions, without substantial programming, without limitation to any fixed set of well-known gestures, and without limitation to motions that occur substantially in a plane or are substantially predefined in scope. From example motions for each class of motion to be recognized, a system automatically generates motion recognizers using machine learning techniques. Those motion recognizers can be incorporated into an end-user application, with the effect that when a user of the application supplies a motion, those motion recognizers will recognize the motion as an example of one of the known classes of motion. Motion recognizers can also be tuned to improve recognition rates for subsequent motions, allowing end users to add new example motions. | 02-24-2011 |
20110044502 | MOTION DETECTION METHOD, APPARATUS AND SYSTEM - A motion detection method, apparatus and system are disclosed in the present invention, which relates to the video image processing field. The present invention can effectively overcome the influence of the background on motion detection and the problem of object “conglutination” to avoid false detection, thereby accomplishing object detection in complex scenes with a high precision. The motion detection method disclosed in embodiments of the present invention comprises: acquiring detection information of the background scene and detection information of the current scene, wherein the current scene is a scene comprising an object(s) to be detected and the same background scene; and calculating the object(s) to be detected according to the detection information of the background scene and the detection information of the current scene. The present invention is applicable to any scenes where moving objects need to be detected, e.g., automatic passenger flow statistical systems in railway, metro and bus sectors, and is particularly applicable to detection and calibration of objects in places where brightness varies greatly. | 02-24-2011 |
20110044503 | VEHICLE TRAVEL SUPPORT DEVICE, VEHICLE, VEHICLE TRAVEL SUPPORT PROGRAM - A vehicle travel support device determines presence of a recognition inhibiting factor of a lane mark on a road on which a vehicle is traveling with high accuracy, irrespective of an imaging history by a vehicular camera from the same position. The vehicle travel support system generates an edge image by extracting an edge or actualizing an edge in an image obtained through the vehicular camera. When Hough transform of the edge image is performed, votes for a specified vote value of a linear component are evaluated in a ρ-θ space (Hough space). Presence of a recognition inhibiting factor of a lane mark on a road is determined by determining whether or not the votes of a specified vote value in a specified region denoting a standard travel lane of a vehicle in the real space are ≧ a threshold in the ρ-θ space. | 02-24-2011 |
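The ρ-θ voting described in the entry above is the standard Hough transform for lines: each edge point votes for every (ρ, θ) pair it could lie on, and a strong linear component such as a lane mark produces a vote peak. A minimal sketch (the accumulator resolution and the peak-vs-threshold test are illustrative assumptions):

```python
import math

def hough_votes(edge_points, n_theta=180, rho_step=1.0):
    """Vote each edge point (x, y) into a rho-theta accumulator:
    rho = x*cos(theta) + y*sin(theta), quantized per bin."""
    acc = {}
    for x, y in edge_points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / rho_step)
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc

def lane_mark_visible(edge_points, threshold):
    """Treat the lane mark as recognizable when the peak vote count
    in the accumulator reaches the threshold."""
    acc = hough_votes(edge_points)
    return max(acc.values()) >= threshold
```

When the peak stays below the threshold inside the region denoting the standard travel lane, the device would conclude that a recognition inhibiting factor (glare, occlusion, worn paint) is present.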
20110044504 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND PROGRAM - An information processing device, including: a three-dimensional information generating section for obtaining position and attitude of a moving camera or three-dimensional positions of feature points by successively receiving captured images from different viewpoints, and updating status data using observation information which includes tracking information of the feature points, the status data including three-dimensional positions of the feature points within the images and position and attitude information of the camera; and a submap generating section for generating submaps by dividing an area for which the three-dimensional position is to be calculated. The three-dimensional information generating section obtains position and attitude of the camera or three-dimensional positions of the feature points by generating status data corresponding to the submaps not including information about feature points outside of a submap area for each of the generated submaps and updating the generated status data corresponding to the submaps. | 02-24-2011 |
20110044505 | EQUIPMENT OPERATION SAFETY MONITORING SYSTEM AND METHOD AND COMPUTER-READABLE MEDIUM RECORDING PROGRAM FOR EXECUTING THE SAME - Provided are equipment operation safety monitoring system and method and computer-readable medium having a program recorded thereon, the program allowing a computer to execute the method. The equipment operation safety monitoring system includes an image input unit, an integrated image generation unit, a guideline generation unit, and an image output unit. The image input unit is mounted on heavy equipment and inputs a plurality of images acquired by photographing partitioned areas in all the directions around the heavy equipment. The integrated image generation unit generates an integrated image including the areas in all the directions around the heavy equipment by using the plurality of the images. The guideline generation unit generates a guideline indicating a position separated by a predetermined distance from the heavy equipment. The image output unit illustrates the guideline on the integrated image and outputs the integrated image. | 02-24-2011 |
20110044506 | TARGET ANALYSIS APPARATUS, METHOD AND COMPUTER-READABLE MEDIUM - Provided is a target analysis apparatus, method and computer-readable medium based on a depth image and an intensity image of a target. The target analysis apparatus may include a body detection unit to detect a body of the target from the intensity image of the target, a foreground segmentation unit to calculate an intensity threshold value in accordance with intensity values from the detected body, to transform the intensity image into a binary image using the intensity threshold value, and to mask the depth image of the target using the binary image as a mask to thereby obtain a masked depth image, and an active portion detection unit to detect an active portion of the body of the target from the masked depth image. | 02-24-2011 |
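The foreground-segmentation step in the entry above (intensity threshold from body pixels, binary image, masked depth) can be sketched as below. Deriving the threshold as 90% of the mean body intensity is an assumed margin; the patent only specifies that the threshold is computed from the detected body's intensity values.

```python
def mask_depth(intensity, depth, body_pixels, margin=0.9):
    """Threshold the intensity image using the detected body's intensities,
    then mask the depth image with the resulting binary image.
    `margin` scales the mean body intensity (an assumption)."""
    thresh = margin * sum(intensity[y][x] for y, x in body_pixels) / len(body_pixels)
    h, w = len(intensity), len(intensity[0])
    binary = [[1 if intensity[y][x] >= thresh else 0 for x in range(w)]
              for y in range(h)]
    masked = [[depth[y][x] if binary[y][x] else 0 for x in range(w)]
              for y in range(h)]
    return binary, masked
```

The masked depth image keeps range data only where the binary image is set, which is what lets the active-portion detector work on body pixels alone.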
20110044507 | METHOD AND ASSISTANCE SYSTEM FOR DETECTING OBJECTS IN THE SURROUNDING AREA OF A VEHICLE - A method for determining relevant objects in a vehicle moving on a roadway. An assistance function is executed in relation to a position of a relevant object, and the relevant objects are determined on the basis of an image evaluation of images of a surrounding area of the vehicle. The images are detected by way of camera sensors. By way of a radar sensor, positions of stationary objects in the surrounding area of the vehicle are determined. A profile of a roadway edge is determined using the positions of the stationary objects, and the image evaluation is carried out in relation to the roadway edge profile determined. A driver assistance system suitable for carrying out the method is also described. | 02-24-2011 |
20110044508 | APPARATUS AND METHOD FOR RAY TRACING USING PATH PREPROCESS - Disclosed is an apparatus and method for ray-tracing using a path preprocess. The method for ray-tracing including launching a ray from a transmitting point at angles with regular intervals, setting a first side of an object where the launched ray is projected as a reference patch, and searching predetermined preprocessed path data for a counterpart patch corresponding to a second side of another object, the second side being exposed to the projected ray reflected or diffracted from the set reference patch, and tracing a transmission path of the reflected or diffracted ray. | 02-24-2011 |
20110051999 | Device and method for detecting targets in images based on user-defined classifiers - A device and method for detecting targets of interest in an image, such as people or objects of a certain type. Targets are detected based on an optimized strong classifier descriptor that can be based on a combination of weak classifier descriptors. The weak classifier descriptors can include a user-defined weak classifier descriptor that is defined by a user to represent a shape or appearance attribute that is characteristic of parts of the target of interest. The strong classifier descriptor can be optimized by selecting a subset of weak classifier descriptors that exhibit improved performance in detecting targets in training images. | 03-03-2011 |
20110052000 | DETECTING ANOMALOUS TRAJECTORIES IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for determining anomalous trajectories of objects tracked over a sequence of video frames. In one embodiment, a symbol trajectory may be derived from observing an object moving through a scene. The symbol trajectory represents semantic concepts extracted from the trajectory of the object. Whether the symbol trajectory is anomalous may be determined, based on previously observed symbol trajectories. A user may be alerted upon determining that the symbol trajectory is anomalous. | 03-03-2011 |
20110052001 | AUTOMATIC ERROR DETECTION FOR INVENTORY TRACKING AND MANAGEMENT SYSTEMS USED AT A SHIPPING CONTAINER YARD - A method automatically detects errors in a container inventory database associated with a container inventory tracking system of a container storage facility. A processor in the inventory tracking system performs a method that: obtains a first data record, identifies an event (e.g., pickup, drop-off, or movement) associated with the first record, provides a list of error types based on the identified event, and determines whether a data error has occurred through a checking process. In each of the checking steps, the processor selects an error type from the list of error types, determines a search criterion based on the selected error type and the first data record, queries the database using the search criterion, compares query results with the first data record to detect data conflicts between them, and upon the detection of the data conflicts, reports that a data error of the selected error type has been detected. | 03-03-2011 |
20110052002 | FOREGROUND OBJECT TRACKING - Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate foreground objects that are extracted from the background model of a scene. Spurious foreground objects are filtered before the foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real-time surveillance applications. | 03-03-2011 |
20110052003 | FOREGROUND OBJECT DETECTION IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate foreground objects that are extracted from the background model of a scene. Spurious foreground objects are filtered before the detected foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real-time surveillance applications. | 03-03-2011 |
20110052004 | CAMERA DEVICE AND IDENTITY RECOGNITION METHOD UTILIZING THE SAME - A camera device includes an image capturing module, a face detection module, a light detection and ranging (LIDAR) system, a storage module, and a microprocessor. The image capturing module continuously captures images of a determined field. The face detection module detects a face to be tested in the images and records the coordinates of the face in the image. The LIDAR system scans the face to be tested in the determined field according to the coordinates to obtain three-dimensional information of the face to be tested. The storage module stores three-dimensional information of a determined face. The microprocessor compares the three-dimensional information of the face to be tested with the three-dimensional information of the determined face, and then outputs a recognition signal. | 03-03-2011 |
20110052005 | Designation of a Characteristic of a Physical Capability by Motion Analysis, Systems and Methods - Motion analysis is used to classify or rate human capability in a physical domain via a minimized movement and data collection protocol producing a discrete, overall figure of merit of the selected physical capability. The minimal protocol is determined by data mining of a more extensive movement and data collection. Protocols are relevant in medical, sports and occupational applications. Kinematic, kinetic, body type, Electromyography (EMG), Ground Reactive Force (GRF), demographic, and psychological data are encompassed. Resulting protocols are capable of transforming raw data representing specific human motions into an objective rating of a skill or capability related to those motions. | 03-03-2011 |
20110052006 | EXTRACTION OF SKELETONS FROM 3D MAPS - A method for processing data includes receiving a temporal sequence of depth maps of a scene containing a humanoid form having a head. The depth maps include a matrix of pixels having respective pixel depth values. A digital processor processes at least one of the depth maps so as to find a location of the head and estimates dimensions of the humanoid form based on the location. The processor tracks movements of the humanoid form over the sequence using the estimated dimensions. | 03-03-2011 |
20110052007 | GESTURE RECOGNITION METHOD AND INTERACTIVE SYSTEM USING THE SAME - A gesture recognition method for an interactive system includes the steps of: capturing image windows with an image sensor; obtaining information of object images associated with at least one pointer in the image windows; calculating a position coordinate of the pointer relative to the interactive system according to the position of the object images in the image windows when a single pointer is identified according to the information of object images; and performing gesture recognition according to a relation between the object images in the image window when a plurality of pointers are identified according to the information of object images. The present invention further provides an interactive system. | 03-03-2011 |
20110052008 | System and Method for Image Based Sensor Calibration - Apparatus and methods are disclosed for the calibration of a tracked imaging probe for use in image-guided surgical systems. The invention uses actual image data collected from an easily constructed calibration jig to provide data for the calibration algorithm. The calibration algorithm analytically develops a geometric relationship between the probe and the image so objects appearing in the collected image can be accurately described with reference to the probe. The invention can be used with either two or three dimensional image data-sets. The invention also has the ability to automatically determine the image scale factor when two dimensional data-sets are used. | 03-03-2011 |
20110058708 | OBJECT TRACKING APPARATUS AND OBJECT TRACKING METHOD - Candidate contour curves for a tracking object in the current frame are determined using a particle filter, based on the existence probability distribution of the tracking object in a frame which is one frame previous to the current frame. To match a candidate curve against a contour image of the current frame, a processing to search for the closest contour to the candidate curves is divided for each knot constituting the candidate contour curve and is executed in parallel by a plurality of processors. Each image data on a search region for each knot to be processed are copied from a contour image stored in an image storage to the respective local memories. | 03-10-2011 |
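The per-knot parallel contour search above is specific to the patent, but the particle-filter cycle it builds on (predict candidates from the previous frame's probability distribution, weight each candidate against the observation, resample) can be sketched in one dimension. Everything below, including the toy likelihood that stands in for matching a candidate curve against a contour image, is an invented illustration:

```python
import random

def particle_filter_step(particles, measure, motion_std=0.5, rng=random):
    """One predict-weight-resample cycle of a basic particle filter.
    `particles` are scalar position hypotheses; `measure(p)` returns a
    likelihood for hypothesis p (a stand-in for contour matching)."""
    # predict: diffuse each hypothesis with process noise
    moved = [p + rng.gauss(0.0, motion_std) for p in particles]
    # weight: score each hypothesis against the observation
    weights = [measure(p) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # resample: draw a new particle set proportional to the weights
    cum, acc = [], 0.0
    for w in weights:
        acc += w
        cum.append(acc)
    resampled = []
    for _ in moved:
        u = rng.random()
        i = next((j for j, c in enumerate(cum) if c >= u), len(cum) - 1)
        resampled.append(moved[i])
    return resampled

rng = random.Random(42)
true_pos = 10.0
likelihood = lambda p: 1.0 / (1.0 + (p - true_pos) ** 2)  # peaks at the target
cloud = [rng.uniform(0.0, 20.0) for _ in range(300)]
for _ in range(20):
    cloud = particle_filter_step(cloud, likelihood, rng=rng)
estimate = sum(cloud) / len(cloud)
```

After a few iterations the particle cloud concentrates around the likelihood peak, which is the "existence probability distribution" carried from frame to frame.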
20110058709 | VISUAL TARGET TRACKING USING MODEL FITTING AND EXEMPLAR - A method of tracking a target includes receiving an observed depth image of the target from a source and analyzing the observed depth image with a prior-trained collection of known poses to find an exemplar pose that represents an observed pose of the target. The method further includes rasterizing a model of the target into a synthesized depth image having a rasterized pose and adjusting the rasterized pose of the model into a model-fitting pose based, at least in part, on differences between the observed depth image and the synthesized depth image. Either the exemplar pose or the model-fitting pose is then selected to represent the target. | 03-10-2011 |
20110064267 | CLASSIFIER ANOMALIES FOR OBSERVED BEHAVIORS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A combination of a self organizing map (SOM) and an adaptive resonance theory (ART) network may be used to identify a variety of different anomalous inputs at each cluster layer. As progressively higher layers of the cortex model component represent progressively higher levels of abstraction, anomalies occurring in the higher levels of the cortex model represent observations of behavioral anomalies corresponding to progressively complex patterns of behavior. | 03-17-2011 |
20110064268 | VIDEO SURVEILLANCE SYSTEM CONFIGURED TO ANALYZE COMPLEX BEHAVIORS USING ALTERNATING LAYERS OF CLUSTERING AND SEQUENCING - Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A video surveillance system may be configured to observe a scene (as depicted in a sequence of video frames) and, over time, develop hierarchies of concepts including classes of objects, actions and behaviors. That is, the video surveillance system may develop models at progressively more complex levels of abstraction used to identify what events and behaviors are common and which are unusual. When the models have matured, the video surveillance system issues alerts on unusual events. | 03-17-2011 |
20110064269 | OBJECT POSITION TRACKING SYSTEM AND METHOD - A method of tracking an object is provided. The method includes obtaining sensed positions of the object at a plurality of time instants and predicting a future position of the object by applying fuzzy predictive rules to the sensed positions of the object obtained from at least two previous time instants. | 03-17-2011 |
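The abstract above does not give a concrete rule base; as an invented illustration of fuzzy predictive rules applied to sensed positions, the sketch below blends a hold-position predictor and a linear-extrapolation predictor through speed memberships (the membership ramp, the `fast_at` parameter, and the 1-D setting are all assumptions):

```python
def fuzzy_predict(p_prev, p_curr, fast_at=5.0):
    """Predict the next 1-D position from the two most recent sensed
    positions using two toy fuzzy rules:
      IF speed is SLOW THEN next position ~= current position
      IF speed is FAST THEN next position ~= linear extrapolation
    Defuzzification is the membership-weighted average of the two rules."""
    speed = abs(p_curr - p_prev)
    mu_fast = min(speed / fast_at, 1.0)   # ramp membership: 0 (still) .. 1 (fast)
    mu_slow = 1.0 - mu_fast
    hold = p_curr
    extrapolate = p_curr + (p_curr - p_prev)
    return (mu_slow * hold + mu_fast * extrapolate) / (mu_slow + mu_fast)

# a stationary object stays put; a fast object is fully extrapolated;
# intermediate speeds yield a blend of the two predictors
still = fuzzy_predict(10.0, 10.0)
fast = fuzzy_predict(0.0, 6.0)
blend = fuzzy_predict(0.0, 2.5)
```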
20110064270 | OPTICAL TRACKING DEVICE AND POSITIONING METHOD THEREOF - The present invention discloses an optical tracking device and a positioning method thereof. The optical tracking device comprises several light-emitting units, several image tracking units, an image processing unit, an analysis unit, and a calculation unit. First, the light-emitting units are correspondingly disposed on a carrier in geometric distribution and provide light sources. Secondly, the image tracking units track the plurality of light sources and capture images. The images are subjected to image processing by the image processing unit to obtain light source images corresponding to the light sources from each image. Then the analysis unit analyzes the light source images to obtain positions and colors corresponding to the light-emitting units. Lastly, the calculation unit establishes three-dimensional coordinates corresponding to the light-emitting units based on the positions and colors and calculates the position of the carrier based on the three-dimensional coordinates. | 03-17-2011 |
20110064271 | METHOD FOR DETERMINING A THREE-DIMENSIONAL REPRESENTATION OF AN OBJECT USING A SEQUENCE OF CROSS-SECTION IMAGES, COMPUTER PROGRAM PRODUCT, AND CORRESPONDING METHOD FOR ANALYZING AN OBJECT AND IMAGING SYSTEM - The method comprises, for each cross-section image, determining the position of the object (O) in relation to the cross-section plane at the moment the cross-section image is captured, and determining a three-dimensional representation (V) of the object (O) using cross-section images (X | 03-17-2011 |
20110064272 | Method and apparatus for three-dimensional tracking of infra-red beacons - A method for processing data includes identifying a time signature of an infra-red (IR) beacon. Image data associated with the IR beacon is identified using the time signature. | 03-17-2011 |
20110069865 | METHOD AND APPARATUS FOR DETECTING OBJECT USING PERSPECTIVE PLANE - A method and apparatus for detecting an object using a perspective plane are disclosed. The method includes determining a perspective plane for a background scene, and determining a moving object within the background scene based upon the determined perspective plane. Used with a visual surveillance device, the method and apparatus are capable of efficiently detecting objects and tracking the movements of the corresponding objects. | 03-24-2011 |
20110069866 | Image processing apparatus and method - Provided is an image processing apparatus. The image processing apparatus may extract a three-dimensional (3D) silhouette image in an input color image and/or an input depth image. Motion capturing may be performed using the 3D silhouette image and 3D body modeling may be performed. | 03-24-2011 |
20110069867 | TECHNIQUE FOR REGISTERING IMAGE DATA OF AN OBJECT - A technique of registering image data of an object | 03-24-2011 |
20110069868 | SIGNAL PROCESSING SYSTEM AND SIGNAL PROCESSING PROGRAM - A dedicated base vector is acquired based on the known spectral characteristic of a subject to be identified and on a spectral characteristic of the imaging system, the latter comprising the spectral characteristic of the color imaging system used to acquire images of subjects (including the identification target) and the spectral characteristic of the illumination light used during image acquisition. A weighting factor for the dedicated base vector is calculated based on an image signal obtained by imaging the subject with the color imaging system, the dedicated base vector, and the spectral characteristic of the imaging system. An identification result for the subject having the known spectral characteristic is calculated based on the weighting factor for the dedicated base vector and output as an output signal. | 03-24-2011 |
20110069869 | SYSTEM AND METHOD FOR DEFINING AN ACTIVATION AREA WITHIN A REPRESENTATION SCENERY OF A VIEWER INTERFACE - The invention describes a system ( | 03-24-2011 |
20110075884 | Automatic Retrieval of Object Interaction Relationships - A method for automatically retrieving interaction information between objects, including: with a server, transforming a first image and a second image submitted to said server from a source into first and second sets of parameters, respectively; searching a database for an interaction relationship between the first and second images using the first and second sets of parameters; and returning a representation of the interaction relationship to the source. | 03-31-2011 |
20110081043 | USING VIDEO-BASED IMAGERY FOR AUTOMATED DETECTION, TRACKING, AND COUNTING OF MOVING OBJECTS, IN PARTICULAR THOSE OBJECTS HAVING IMAGE CHARACTERISTICS SIMILAR TO BACKGROUND - A system and method to automatically detect, track and count individual moving objects in a high density group without regard to background content, embodiments performing better than a trained human observer. Select embodiments employ thermal videography to detect and track even those moving objects having thermal signatures that are similar to a complex stationary background pattern. The method allows tracking an object that need not be identified every frame of the video, that may change polarity in the imagery with respect to background, e.g., switching from relatively light to dark or relatively hot to cold and vice versa, or both. The methodology further provides a permanent record of an “episode” of objects in motion, permitting reprocessing with different parameters any number of times. Post-processing of the recorded tracks allows easy enumeration of the number of objects tracked within the FOV of the imager. | 04-07-2011 |
20110081044 | Systems And Methods For Removing A Background Of An Image - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may then be discarded to isolate one or more voxels associated with a foreground object such as a human target and the isolated voxels associated with the foreground object may be processed. | 04-07-2011 |
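The voxel-grid downsampling step shared by this and the following entry can be pictured as block-averaging the depth image; the sketch below is a simplified stand-in (treating zero depth as an invalid sample is an assumption, not a detail from the abstract):

```python
def depth_to_voxel_grid(depth, block=2):
    """Downsample a 2-D depth image into a coarser grid by averaging each
    block x block patch of valid (non-zero) depth samples; a cell with no
    valid samples stays 0. This mimics generating a 'grid of voxels' so
    that later stages (background removal, model fitting) work on far
    fewer elements than raw pixels."""
    h, w = len(depth), len(depth[0])
    grid = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            vals = [depth[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))
                    if depth[y][x] > 0]
            row.append(sum(vals) / len(vals) if vals else 0)
        grid.append(row)
    return grid

# a 4x4 depth frame: a far object top-right, a near one bottom-left,
# zeros where the sensor returned no measurement
frame = [
    [0,   0,   800, 820],
    [0,   0,   810, 830],
    [500, 500, 0,   0],
    [500, 500, 0,   0],
]
grid = depth_to_voxel_grid(frame, block=2)
```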
20110081045 | Systems And Methods For Tracking A Model - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose. | 04-07-2011 |
20110081046 | METHOD OF IMPROVING THE RESOLUTION OF A MOVING OBJECT IN A DIGITAL IMAGE SEQUENCE - A method of improving the resolution of a small moving object in a digital image sequence comprises the steps of: | 04-07-2011 |
20110081047 | ELECTRONIC APPARATUS AND IMAGE DISPLAY METHOD - According to one embodiment, an electronic apparatus detects face images in a still image. The apparatus sets positions and sizes of display ranges on the still image such that the display ranges include the face images respectively, the display ranges being associated with display areas obtained by dividing a display screen. The apparatus displays partial images included in the display ranges on the display areas in order to display the face images on the display areas respectively, and changes the position and size of each of the display ranges such that a display mode of the display screen is caused to transit from a first display mode in which the face images are displayed on the display areas respectively to a second display mode in which an entire image of the still image is displayed on the display screen. | 04-07-2011 |
20110081048 | METHOD AND APPARATUS FOR TRACKING MULTIPLE OBJECTS AND STORAGE MEDIUM - The present invention relates to a method and an apparatus for tracking multiple objects and a storage medium. More particularly, the present invention relates to a method and an apparatus for tracking multiple objects that performs object detection on only one subset of objects per camera image, regardless of the number N of objects to be tracked, and tracks all objects across images while detection proceeds, thereby tracking multiple objects in real time, and a storage medium. The method for tracking multiple objects according to the exemplary embodiment of the present invention includes: (a) performing object detection with respect to only objects of one subset among multiple objects with respect to an input image at a predetermined time; and (b) tracking all objects among images from an image of a time prior to the predetermined time with respect to all objects in the input image while step (a) is performed. | 04-07-2011 |
20110085698 | Measuring Turbulence and Winds Aloft using Solar and Lunar Observable Features - Presented is a system and method for detecting turbulence in the atmosphere comprising an image capturing device for capturing a plurality of images of a visual feature of a celestial object such as the sun, combined with a lens having a focal length adapted to focus an image onto the image capturing device such that the combination of the lens and the image capturing device are adapted to resolve a distortion caused by a turbule of turbulent air, and an image processor adapted to compare said plurality of images of said visual feature to detect the transit of a turbule of turbulent air in between said image capturing device and said celestial object, and compute a measurement of the angular velocity of the turbule. A second plurality of images is used to triangulate the distance to the turbule and the velocity of the turbule. | 04-14-2011 |
20110085699 | Method and apparatus for tracking image patch considering scale - A method and apparatus for tracking an image considering scale are provided. A registered image patch may be divided into a scale-invariant image patch and a scale-variant image patch according to a predetermined scale invariance index (SII). If a registered image patch within an image is a scale-invariant image patch, the scale-invariant image patch is tracked by adjusting its position, while if the registered image patch is a scale-variant image patch, the scale-variant image patch is tracked by adjusting its position and scale. | 04-14-2011 |
20110085700 | Systems and Methods for Generating Bio-Sensory Metrics - Neuromarketing processing systems and methods are described that provide marketers with a window into the mind of the consumer with a scientifically validated, quantitatively-based means of bio-sensory measurement. The neuromarketing processing system generates, from bio-sensory inputs, quantitative models of consumers' responses to information in the consumer environment, under an embodiment. The quantitative models provide information including consumers' emotion, engagement, cognition, and feelings. The information in the consumer environment includes advertising, packaging, in-store marketing, and online marketing. | 04-14-2011 |
20110085701 | STRUCTURE DETECTION APPARATUS AND METHOD, AND COMPUTER-READABLE MEDIUM STORING PROGRAM THEREOF - A plurality of candidate points are extracted from image data. The plurality of candidate points are normalized, and a set of representative points composing a form model that is most similar to a preset form is selected from the plurality of candidate points. Further, the candidate points and the form model are compared with each other, and correction is performed by adding a region forming the structure or by deleting a region, or the like. Accordingly, the structure is detected in the image data. | 04-14-2011 |
20110085702 | OBJECT TRACKING BY HIERARCHICAL ASSOCIATION OF DETECTION RESPONSES - Systems, methods, and computer readable storage media are described that can provide a multi-level hierarchical framework to progressively associate detection responses, in which different methods and models are adopted to improve tracking robustness. A modified transition matrix for the Hungarian algorithm can be used to solve the association problem that considers not only initialization, termination and transition of tracklets but also false alarm hypotheses. A Bayesian inference approach can be used to automatically estimate a scene structure model as the high-level knowledge for the long-range trajectory association. | 04-14-2011 |
20110085703 | Method and apparatus for automatic object identification - A method and system for processing image data to identify objects in an image. Terrain types are identified in the image. A second image is generated identifying areas of the image which border regions of different intensities by identifying a gradient magnitude value for each pixel of the image. A filtered image is generated from the second image, the filtered image identifying potential objects which have a smaller radius than the size of a filter and a different brightness than background pixels surrounding the potential objects. The second image and the filtered image are compared to identify potential objects as an object. A potential object is identified as an object if the potential object has a gradient magnitude greater than a threshold gradient magnitude, and the threshold gradient magnitude is based on the terrain type identified in the portion of the image where the potential object is located. | 04-14-2011 |
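The gradient-magnitude image and terrain-dependent threshold described above can be sketched as follows; central differences and the per-terrain threshold table are simplifications invented for the example:

```python
import math

def gradient_magnitude(img):
    """Per-pixel gradient magnitude via central differences (borders fall
    back to one-sided differences), yielding an image that highlights
    areas bordering regions of different intensities."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]
            gy = img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]
            out[y][x] = math.hypot(gx, gy)
    return out

def detect(img, terrain, thresholds):
    """Flag pixels whose gradient magnitude exceeds the threshold chosen
    for the terrain type identified at that pixel, so that detection
    sensitivity varies with terrain."""
    g = gradient_magnitude(img)
    return [[g[y][x] > thresholds[terrain[y][x]] for x in range(len(img[0]))]
            for y in range(len(img))]

# a bright blob on a uniform "grass" field: the blob's border is flagged
flat = [[10, 10, 10], [10, 90, 10], [10, 10, 10]]
hits = detect(flat, [["grass"] * 3] * 3, {"grass": 50.0})
```

Note that the flagged pixels form the ring around the blob rather than its symmetric center, since central differences vanish at an intensity peak; that matches the abstract's emphasis on "areas which border regions of different intensities".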
20110085704 | Markerless motion capturing apparatus and method - A markerless motion capturing apparatus and method is provided. The markerless motion capturing apparatus may track a pose and a motion of a performer from an image, inputted from a camera, without using a marker or a sensor, thereby broadening its range of applications and the choice of capture locations. | 04-14-2011 |
20110085705 | DETECTION OF BODY AND PROPS - A system and method for detecting and tracking targets including body parts and props is described. In one aspect, the disclosed technology acquires one or more depth images, generates one or more classification maps associated with one or more body parts and one or more props, tracks the one or more body parts using a skeletal tracking system, tracks the one or more props using a prop tracking system, and reports metrics regarding the one or more body parts and the one or more props. In some embodiments, feedback may occur between the skeletal tracking system and the prop tracking system. | 04-14-2011 |
20110085706 | DEVICE AND METHOD FOR LOCALIZING AN OBJECT OF INTEREST IN A SUBJECT - The present invention relates to a device, a method and a computer program which allow for the localization of an object of interest in a subject. The device includes a registration unit ( | 04-14-2011 |
20110091068 | Secure Tracking Of Tablets - A method of tracking and tracing tablets, in particular pharmaceutical tablets, includes reading, i.e. detecting, code structure from the tablet, reading additional information from the package on an information sheet, and then comparing the readings to verify authenticity. The code structure may be two-dimensional or three-dimensional. The detected code may further be compared with information stored in a database. | 04-21-2011 |
20110091069 | INFORMATION PROCESSING APPARATUS AND METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - An information processing apparatus comprises: an extraction unit configured to extract a person from a video obtained by capturing a real space; a holding unit configured to hold a movement estimation rule corresponding to a partial region specified in the video; a determination unit configured to determine whether a region where the person has disappeared from the video or appeared in the video corresponds to the partial region; and an estimation unit configured to estimate, based on the movement estimation rule corresponding to the partial region determined to correspond, a movement of the person after the person has disappeared from the video or before the person has appeared in the video. | 04-21-2011 |
20110091070 | COMBINING MULTI-SENSORY INPUTS FOR DIGITAL ANIMATION - Animating digital characters based on motion captured performances, including: receiving sensory data collected using a variety of collection techniques including optical video, electro-oculography, and at least one of optical, infrared, and inertial motion capture; and managing and combining the collected sensory data to aid cleaning, tracking, labeling, and re-targeting processes. Keywords include Optical Video Data and Inertial Motion Capture. | 04-21-2011 |
20110091071 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus including an image acquisition unit that acquires a target image; a face part extraction unit that extracts a face region including a face part from the target image; an identification unit that identifies a model face part by comparing the face part to a plurality of model face parts stored in a storage unit; and an illustration image determination unit that determines an illustration image corresponding to the identified model face part. | 04-21-2011 |
20110091072 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND CONTROL METHOD FOR IMAGE PROCESSING APPARATUS - An image processing apparatus capable of communicating with a plurality of servers stores image data including an object of recognition, and a plurality of recognition dictionaries. The image processing apparatus establishes communication with one of the servers to receive, from the server with which the communication has been established, designation information designating a recognition dictionary for recognizing the object of recognition included in the image data. The image processing apparatus identifies the recognition dictionary designated in the received designation information from among the stored recognition dictionaries and uses the identified recognition dictionary to recognize the object of recognition included in the image data. | 04-21-2011 |
20110091073 | MOVING OBJECT DETECTION APPARATUS AND MOVING OBJECT DETECTION METHOD - To provide a moving object detection apparatus which accurately performs region extraction, regardless of the pose or size of a moving object. The moving object detection apparatus includes: an image receiving unit receiving the video sequence; a motion analysis unit calculating movement trajectories based on motions of the image; a segmentation unit performing segmentation so as to divide the movement trajectories into subsets, and setting a part of the movement trajectories as common points shared by the subsets; a distance calculation unit calculating a distance representing a similarity between a pair of movement trajectories, for each of the subsets; a geodesic distance calculation unit transforming the calculated distance into a geodesic distance; an approximate geodesic distance calculation unit calculating an approximate geodesic distance bridging over the subsets, by integrating geodesic distances including the common points; and a region extraction unit performing clustering on the calculated approximate geodesic distance. | 04-21-2011 |
20110091074 | MOVING OBJECT DETECTION METHOD AND MOVING OBJECT DETECTION APPARATUS - A moving object detection method includes: extracting NL long-term trajectories (NL≧2) over TL pictures (TL≧3) and NS short-term trajectories (NS>NL) over TS pictures (TL>TS≧2), using movement trajectories; calculating a geodetic distance between the NL long-term trajectories and a geodetic distance between the NS short-term trajectories (S | 04-21-2011 |
20110096954 | OBJECT AND MOVEMENT DETECTION - Motions, positions or configurations of, for example, a human hand can be recognised by transmitting a plurality of transmit signals in respective time frames; receiving a plurality of receive signals; determining a plurality of channel impulse responses using the transmit and receive signals; defining a matrix of impulse responses, with impulse responses for adjacent time frames adjacent each other; and analysing the matrix for patterns ( | 04-28-2011 |
20110096955 | SECURE ITEM IDENTIFICATION AND AUTHENTICATION SYSTEM AND METHOD BASED ON UNCLONABLE FEATURES - The present invention is a method and apparatus for protection of various items against counterfeiting using physical unclonable features of item microstructure images. The protection is based on the proposed identification and authentication protocols coupled with portable devices. In both cases a special transform is applied to data that provides a unique representation in the secure key-dependent domain of reduced dimensionality that also simultaneously resolves performance-security-complexity and memory storage requirement trade-offs. The enrolled database needed for the identification can be stored in the public domain without any risk of being used by counterfeiters. Additionally, it can be easily transported to various portable devices due to its small size. Notably, the proposed transformations are chosen in such a way as to guarantee the best possible performance in terms of identification accuracy with respect to identification in the raw data domain. The authentication protocol is based on the proposed transform jointly with distributed source coding. Finally, extensions of the described techniques to the protection of artworks and to secure key exchange and extraction are disclosed in the invention. | 04-28-2011 |
20110096956 | VEHICLE PERIPHERY MONITORING DEVICE - A vehicle periphery monitoring device is operable to report a high contact possibility between a vehicle and an object at an appropriate time or frequency according to the type of the object. When the object is determined to be a human being and the position of the object in real space is contained in a first contact determination area, a high contact possibility between the vehicle and the object is reported. On the other hand, when the object is determined to be a quadruped animal and the real spatial position of the object is contained in a second contact determination area, the corresponding report is made. The second contact determination area has an overlapped area that overlaps with the first contact determination area, and an overflowed area that has at least a part thereof overflowing from the first contact determination area. | 04-28-2011 |
20110103642 | Multipass Data Integration For Automatic Detection And Classification Of Objects - Classification of a potential target is accomplished by receiving image information, detecting a potential target within the image information and determining a plurality of features forming a feature set associated with the potential target. The location of the potential target is compared with a detection database to determine if it is close to an element in the detection database. If not, a single-pass classifier receives a potential target's feature set, classifies the potential target, and transmits the location, feature set and classification to the detection database. If it is close, a fused multi-pass feature determiner determines fused multi-pass features of the potential target and a multi-pass classifier receives the potential target's feature set and fused multi-pass features, classifies the potential target, and transmits its location, feature set, fused multi-pass features and classification to the detection database. | 05-05-2011 |
20110103643 | IMAGING SYSTEM WITH INTEGRATED IMAGE PREPROCESSING CAPABILITIES - An electronic device may have a camera module. The camera module may include a camera sensor and associated image preprocessing circuitry. The image preprocessing circuitry may analyze images from the camera module to perform motion detection, facial recognition, and other operations. The image preprocessing circuitry may generate signals that indicate the presence of a user and that indicate the identity of the user. The electronic device may receive the signals from the camera module and may use the signals in implementing power saving functions. The electronic device may enter a power conserving mode when the signals do not indicate the presence of a user, but may keep the camera module powered in the power conserving mode. When the camera module detects that a user is present, the signals from the camera module may activate the electronic device and direct the electronic device to enter an active operating mode. | 05-05-2011 |
20110103644 | METHOD AND APPARATUS FOR IMAGE DETECTION WITH UNDESIRED OBJECT REMOVAL - A method and image detection device are provided for removal of undesired objects from image data. In one embodiment, a method includes detecting image data for a first frame, detecting image data for a second frame, and detecting motion of an undesired object based, at least in part, on image data for the first and second frames. Image data of the first frame may be replaced with image data of the second frame to generate corrected image data, wherein the undesired object is removed from the corrected image data. The corrected image data may be stored. | 05-05-2011 |
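The frame-replacement idea in this abstract can be sketched with a simple absolute-difference motion test: where two frames differ strongly (assumed to be the moving, undesired object), the first frame's pixels are replaced with the second frame's. The threshold value and the per-pixel replacement rule here are illustrative assumptions, not the patented method.

```python
import numpy as np

def remove_moving_object(frame1, frame2, threshold=25):
    """Replace pixels of frame1 that differ strongly from frame2
    (assumed to belong to the moving, undesired object)."""
    diff = np.abs(frame1.astype(int) - frame2.astype(int))
    mask = diff > threshold          # motion mask (assumed test)
    corrected = frame1.copy()
    corrected[mask] = frame2[mask]   # fill from the second frame
    return corrected
```

In practice the mask would be cleaned with morphological operations before replacement; this sketch omits that step.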
20110103645 | Motion Detecting Apparatus - A motion detecting apparatus includes a fetcher which repeatedly fetches an object scene image having a designated resolution. An assigner assigns a plurality of areas, each of which has a representative point, to the object scene image in a manner to have an overlapping amount that differs depending on the size of the designated resolution. A divider divides each of a plurality of images respectively corresponding to the plurality of areas assigned by the assigner into a plurality of partial images, using the representative points as a base point. A detector detects a difference in brightness between a pixel corresponding to the representative point and surrounding pixels, from each of the plurality of partial images divided by the divider. A creator creates motion information indicating a motion of the object scene image fetched by the fetcher, based on a detection result of the detector. | 05-05-2011 |
20110103646 | METHOD FOR GENERATING A DENSITY IMAGE OF AN OBSERVATION ZONE - A method for generating a density image of an observation zone over a given time interval, in which method a plurality of images of the observation zone is acquired and, for each image acquired, the following steps are carried out: a) detection of zones of pixels standing out from the fixed background of the image, b) detection of individuals, c) for each individual detected, determination of the elementary surface areas occupied by this individual, and d) incrementation of a level of intensity of the elementary surface areas thus determined in the density image. | 05-05-2011 |
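Step d) of this abstract, accumulating occupancy counts into a density image, can be sketched as follows. Representing each detected individual's occupied elementary surface areas as an axis-aligned box is an assumption made for illustration only.

```python
import numpy as np

def accumulate_density(shape, detections):
    """Build a density image by incrementing the cells covered by each
    detected individual's occupied area (here assumed to be a box
    given as (row0, col0, row1, col1), end-exclusive)."""
    density = np.zeros(shape, dtype=int)
    for (y0, x0, y1, x1) in detections:
        density[y0:y1, x0:x1] += 1   # step d): increment intensity level
    return density
```

Accumulated over all images in the time interval, high values mark frequently occupied areas.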
20110103647 | Device and Method for Classifying Vehicles - Device for classifying objects, in particular vehicles, on a roadway, with a sensor, which operates according to the light-section procedure and is directed onto the roadway to detect the surface contour of an object, and an evaluation unit connected to the sensor that classifies the object on the basis of the detected surface contour. | 05-05-2011 |
20110103648 | Method and apparatus for automatic object identification - A method and system for processing image data to identify objects in an image. A gradient vector image is generated from the image, the gradient vector image identifying a gradient magnitude value and a gradient direction for each pixel of the image. Lines are identified in the gradient vector image. It is determined whether the identified lines are perpendicular, whether more than a predetermined number of pixels on each of the lines identified as perpendicular have a gradient magnitude greater than a predetermined threshold, and whether the individual lines which are identified as perpendicular are within a predetermined distance of each other. A portion of the image is identified as an object if the identified lines are perpendicular, more than the predetermined number of pixels on each of the lines have a gradient magnitude greater than the predetermined threshold, and are within a predetermined distance of each other. | 05-05-2011 |
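The first step of this abstract, building a gradient vector image with a magnitude and a direction per pixel, can be sketched with central differences. The specific difference operator is an assumption; the patent does not name one.

```python
import numpy as np

def gradient_vector_image(gray):
    """Per-pixel gradient magnitude and direction (radians)
    via central differences; borders are left at zero."""
    gray = gray.astype(float)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]   # horizontal difference
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]   # vertical difference
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)
    return magnitude, direction
```

The line-finding and perpendicularity tests of the abstract would then operate on these two arrays.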
20110103649 | Complex Wavelet Tracker - The present invention relates to a video tracker which allows automatic tracking of a selected area over video frames. Motion of the selected area is defined by a parametric motion model. In addition to simple displacement of the area, it can also detect motions such as rotation, scaling and shear, depending on the motion model. The invention realizes the tracking of the selected area by estimating the parameters of this motion model in the complex discrete wavelet domain. The invention can achieve the result in a non-iterative, direct way. Estimation carried out in the complex discrete wavelet domain provides a robust tracking opportunity without being affected by noise and illumination changes in the video, as opposed to intensity-based methods. The invention can easily be adapted to many fields in addition to video tracking. | 05-05-2011 |
20110110557 | Geo-locating an Object from Images or Videos - The present invention discloses a novel method, computer program product, and system for determining a spatial location of a target object from the selection of points in multiple images that correspond to the object location within the images. In one aspect, the method includes collecting location and orientation information of one or more image sensors producing the images; the collected location and orientation information is then used to determine the spatial location of the target object. | 05-12-2011 |
20110110558 | Apparatus, System, and Method for Automatic Airborne Contaminant Analysis - An apparatus, system, and method are disclosed for locating, classifying, and quantifying airborne contaminants. In one embodiment, the apparatus contains an air sampler, an imaging device, a processing module, and a user interface. The air sampler may contain at least one opening into which ambient air is flowable. The imaging device may produce images of the ambient air within an interior volume of the air sampler. The processing module may receive the images produced by the imaging device and may locate, classify, and quantify specific airborne contaminants, such as mold and pollen spores. Data concerning the airborne contaminants can be output to a user at a user interface. | 05-12-2011 |
20110110559 | Optical Positioning Apparatus And Positioning Method Thereof - An optical positioning apparatus and method are adapted for determining a position of an object in a three-dimensional coordinate system which has a first axis, a second axis and a third axis perpendicular to one another. The optical positioning apparatus includes a host device which has a first optical sensor and a second optical sensor located along the first axis with a first distance therebetween, and a processor connected with the optical sensors, and a calibrating device placed in the sensitivity range of the optical sensors with a second distance between an origin of the second axis and a coordinate of the calibrating device projected in the second axis. The optical sensors sense the calibrating device to make the processor execute a calibrating procedure, and then sense the object to make the processor execute a positioning procedure for determining the position of the object in the three-dimensional coordinate system. | 05-12-2011 |
20110110560 | Real Time Hand Tracking, Pose Classification and Interface Control - A hand gesture from a camera input is detected using an image processing module of a consumer electronics device. The detected hand gesture is identified from a vocabulary of hand gestures. The electronics device is controlled in response to the identified hand gesture. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract. | 05-12-2011 |
20110110561 | FACIAL MOTION CAPTURE USING MARKER PATTERNS THAT ACCOMMODATE FACIAL SURFACE - Capturing facial surface using marker patterns laid out on the facial surface by adapting the marker patterns to contours of the facial surface and the motion range of a head, including: generating a facial action coding system (FACS) matrix by capturing FACS poses; generating a pattern to wrap over the facial surface using the FACS poses as a guide; capturing and tracking marker motions of the pattern; stabilizing the marker motions of the pattern using a head stabilization transform to remove head motions from the marker motions; and generating and applying a plurality of FACS matrix weights to the stabilized marker motions. | 05-12-2011 |
20110116682 | OBJECT DETECTION METHOD AND SYSTEM - An object detection method and an object detection system, suitable for detecting moving object information of a video stream having a plurality of images, are provided. The method performs a moving object foreground detection on each of the images, so as to obtain a first foreground detection image comprising a plurality of moving objects. The method also performs a texture object foreground detection on each of the images, so as to obtain a second foreground detection image comprising a plurality of texture objects. The moving objects in the first foreground detection image and the texture objects in the second foreground detection image are selected and filtered, and then the remaining moving objects or texture objects after the filtering are output as real moving object information. | 05-19-2011 |
20110116683 | REDUCING MOTION ARTEFACTS IN MRI - The invention relates to motion correction in magnetic resonance imaging (MRI), implemented as an MRI apparatus or system, computer programs for such, and a method. A motion pattern of a region of interest (ROI) is estimated by: selecting a fixed point at an anatomical position that is pre-determined to be little or not affected by motion, and rotating a point in the ROI that is affected by motion on the basis of motion detected by a navigator or other methods. From the estimated motion pattern of the ROI, the field of view (FOV) may be adapted by adjusting the gradients and the bandwidth of the RF pulses of the MR system in the acquisition sequence to avoid or reduce motion artefacts. Alternatively, motion correction is carried out on the reconstructed images. | 05-19-2011 |
20110116684 | SYSTEM AND METHOD FOR VISUALLY TRACKING WITH OCCLUSIONS - Described herein are tracking algorithm modifications to handle occlusions when processing a video stream including multiple image frames. Specifically, system and methods for handling both partial and full occlusions while tracking moving and non-moving targets are described. The occlusion handling embodiments described herein may be appropriate for a visual tracking system with supplementary range information. | 05-19-2011 |
20110116685 | INFORMATION PROCESSING APPARATUS, SETTING CHANGING METHOD, AND SETTING CHANGING PROGRAM - Disclosed herein is an information processing apparatus including: a detection block configured to detect persons from an image; and a setting changing block configured such that if one of the persons detected by the detection block from the image is designated, then the setting changing block identifies a plurality of attributes of the designated person based on the image of the person, before changing user interface settings using attribute-specific setting information associated with a combination of the identified multiple attributes. | 05-19-2011 |
20110123066 | PRECISELY LOCATING FEATURES ON GEOSPATIAL IMAGERY - Methods for locating a feature on geospatial imagery and systems for performing those methods are disclosed. An accuracy level of each of a plurality of geospatial vector datasets available in a database can be determined. Each of the plurality of geospatial vector datasets corresponds to the same spatial region as the geospatial imagery. The geospatial vector dataset having the highest accuracy level may be selected. When the selected geospatial vector dataset and the geospatial imagery are misaligned, the selected geospatial vector dataset is aligned to the geospatial imagery. The location of the feature on the geospatial imagery is then determined based on the selected geospatial vector dataset and outputted via a display device. | 05-26-2011 |
20110123067 | Method And System for Tracking a Target - A method and system for tracking one or more targets is described. The method includes the step of selecting a first template having a first image of a target and cyclically repeated steps of accumulating new images of the target, producing updated templates containing the new images, and tracking the target using the updated templates. Embodiments of the method use techniques directed to detection and mitigation of target occlusion events. | 05-26-2011 |
20110129117 | SYSTEM AND METHOD FOR IDENTIFYING PRODUCE - An apparatus, method and system are presented for identifying produce. Multiple images of a produce item are captured using five different types of illumination. The captured images are processed to determine parameters of the produce item, and those parameters are compared to parameters of known produce to identify the produce item. | 06-02-2011 |
20110129118 | SYSTEMS AND METHODS FOR TRACKING NATURAL PLANAR SHAPES FOR AUGMENTED REALITY APPLICATIONS - The present system discloses systems and methods for tracking planar shapes for augmented-reality (AR) applications. Systems for real-time recognition and camera six-degrees-of-freedom pose estimation from planar shapes are disclosed. Recognizable shapes can be augmented with 3D content. Recognizable shapes can be in the form of a predefined library that is updated online using a network. Shapes can be added to the library when the user points to a shape and asks the system to start recognizing it. The systems perform shape recognition by analyzing contour structures and generating projective invariant signatures. Image features are further extracted for pose estimation and tracking. Sample points are matched by evolving an active contour in real time. | 06-02-2011 |
20110129119 | MULTI-OBJECT TRACKING WITH A KNOWLEDGE-BASED, AUTONOMOUS ADAPTATION OF THE TRACKING MODELING LEVEL - The invention proposes a method for object and object configuration tracking based on sensory input data, the method comprising the steps of: | 06-02-2011 |
20110129120 | PROCESSING CAPTURED IMAGES HAVING GEOLOCATIONS | 06-02-2011 |
20110129121 | REAL-TIME FACE TRACKING IN A DIGITAL IMAGE ACQUISITION DEVICE - An image processing apparatus for tracking faces in an image stream iteratively receives an acquired image from the image stream potentially including one or more face regions. The acquired image is sub-sampled at a specified resolution to provide a sub-sampled image. An integral image is then calculated for at least a portion of the sub-sampled image. Fixed size face detection is applied to at least a portion of the integral image to provide a set of candidate face regions. Responsive to the set of candidate face regions produced and any previously detected candidate face regions, the resolution is adjusted for sub-sampling a subsequent acquired image. | 06-02-2011 |
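The integral image (summed-area table) mentioned in this abstract is a standard structure that lets any rectangular sum be read in constant time, which is what makes fixed-size face detection over many windows cheap. A minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1+1, x0:x1+1] in O(1) from the integral image."""
    total = ii[y1, x1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total
```

A detector can thus evaluate rectangle features over every candidate window without re-summing pixels.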
20110135147 | SYSTEM AND METHOD FOR OBSTACLE DETECTION USING FUSION OF COLOR SPACE INFORMATION - A method comprises receiving an image of the area, the image representing the area in a first color space; converting the received image to at least one second color space to produce a plurality of converted images, each converted image corresponding to one of a plurality of color sub-spaces in the at least one second color space; calculating upper and lower thresholds for at least two of the plurality of color sub-spaces; applying the calculated upper and lower thresholds to the converted images corresponding to the at least two color sub-spaces to segment the corresponding converted images; fusing the segmented converted images corresponding to the at least two color sub-spaces to segment the received image; and updating the segmentation of the received image based on edge density data in the received image. | 06-09-2011 |
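The per-sub-space thresholding and fusion steps of this abstract can be sketched as follows. Centering the upper and lower thresholds on the channel mean plus/minus a multiple of the standard deviation, and fusing with a logical OR, are illustrative assumptions; the patent does not fix these rules.

```python
import numpy as np

def threshold_channel(chan, k=1.0):
    """Lower and upper thresholds for one color sub-space
    (assumed: mean +/- k standard deviations)."""
    mu, sigma = chan.mean(), chan.std()
    return mu - k * sigma, mu + k * sigma

def segment(chan, lo, hi):
    """Mark pixels OUTSIDE [lo, hi] as obstacle candidates."""
    return (chan < lo) | (chan > hi)

def fuse(masks):
    """Fuse per-sub-space segmentations (assumed: logical OR)."""
    out = masks[0]
    for m in masks[1:]:
        out = out | m
    return out
```

The fused mask would then be refined with the edge-density update the abstract describes.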
20110135148 | METHOD FOR MOVING OBJECT DETECTION AND HAND GESTURE CONTROL METHOD BASED ON THE METHOD FOR MOVING OBJECT DETECTION - A method for moving object detection includes the steps: obtaining successive images of the moving object and dividing the successive images into blocks; selecting one block, calculating color feature values of the block at a current time point and a following time point; according to the color feature values, obtaining an active part of the selected block; comparing the color feature value of the selected block at the current time point with that of the other blocks at the following time point to obtain a similarity relating to each of the other blocks, and defining a maximum similarity as a local correlation part; obtaining a motion-energy patch of the block according to the active part and the local correlation part; repeating the steps to obtain all motion-energy patches to form a motion-energy map; and acquiring the moving object at the current time point in the motion-energy map. | 06-09-2011 |
20110135149 | Systems and Methods for Tracking Objects Under Occlusion - A method for tracking objects in a scene may include receiving visual-based information of the scene with a vision-based tracking system and telemetry-based information of the scene with a RTLS-based tracking system. The method may also include determining a location and identity of a first object in the scene using a combination of the visual-based information and the telemetry-based information. Another method for tracking objects in a scene may include detecting a location and identity of a first object and determining a telemetry-based measurement between the first object and a second object using a real time locating system (RTLS)-based tracking system. The method may further include determining a location and identity of the second object based on the detected location of the first object and the determined measurement. A system for tracking objects in a scene may include visual-based and telemetry-based information receivers and an object tracker. | 06-09-2011 |
20110135150 | METHOD AND APPARATUS FOR TRACKING OBJECTS ACROSS IMAGES - A method and apparatus for tracking objects across images. The method includes retrieving object location in a current frame, determining the appearance and motion signatures of the object in the current frame, predicting the new location of the object based on object dynamics, searching for a location with similar appearance and motion signatures in a next frame, and utilizing the location with similar appearance and motion signatures to determine the final location of the object in the next frame. | 06-09-2011 |
20110135151 | METHOD AND APPARATUS FOR SELECTIVELY SUPPORTING RAW FORMAT IN DIGITAL IMAGE PROCESSOR - A digital image processing apparatus and method for supporting a RAW format (a sensor data format before image processing is performed) selectively supports a user-desired region of a captured image in a RAW format. A method of supporting a RAW format in a digital image processing apparatus includes setting at least one portion of an image displayed in a live-view mode as a region of interest (ROI), storing the ROI in a RAW format, storing a non-ROI of the displayed image, which is a portion of the image other than the ROI, in a compression format, and compositing the stored ROI with the stored non-ROI. | 06-09-2011 |
20110135152 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus includes: a detection unit detecting the faces of persons from frames of moving-image contents; a first specifying unit specifying the persons corresponding to the detected faces by extracting feature amounts of the detected faces and verifying the extracted feature amounts in a first database in which the feature amounts of the faces are registered in correspondence with person identifying information; a voice analysis unit analyzing the voices acquired when the faces of the persons are detected from the frames of the moving-image contents and generating voice information; and a second specifying unit specifying the persons corresponding to the detected faces by verifying the voice information corresponding to the face of a person which is not specified by the first specifying unit in a second database in which the voice information is registered in correspondence with the person identifying information. | 06-09-2011 |
20110135153 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM - An image processing device includes a facial region extraction unit extracting a facial region, an identification information acquisition unit acquiring identification information for identifying a face in the facial region, and first and second integrated processing units performing integrated processing. The first and second integrated processing units determine a threshold value on the basis of a relationship between an estimated area and a position of the face being tracked, calculate a similarity between a face being tracked and a face pictured in an image to be stored in a predetermined storage period, and determine if the face being tracked and the stored face image are the face of the same person. | 06-09-2011 |
20110135154 | LOCATION-BASED SIGNATURE SELECTION FOR MULTI-CAMERA OBJECT TRACKING - Disclosed herein are a method, system, and computer program product for determining a correspondence between a first object ( | 06-09-2011 |
20110142281 | CONVERTING AIRCRAFT ENHANCED VISION SYSTEM VIDEO TO SIMULATED REAL TIME VIDEO - A method for overcoming image latency issues of a synthetic vision system include generating ( | 06-16-2011 |
20110142282 | VISUAL OBJECT TRACKING WITH SCALE AND ORIENTATION ADAPTATION - A method of tracking an object that appears in a plurality of image frames is provided. The method includes (a) dividing an identified object of one of the plurality of image frames into a plurality of object segments and (b) tracking a location of each of the plurality of object segments in the image frame. The method also includes (c) estimating at least one of scale and orientation of the object using the location of each of the plurality of object segments and (d) obtaining position of the object using the estimated scale and orientation. | 06-16-2011 |
20110142283 | APPARATUS AND METHOD FOR MOVING OBJECT DETECTION - An apparatus and method for moving object detection computes a corresponding frame difference for every two successive image frames of a moving object, and segments a current image frame of the two successive image frames into a plurality of homogeneous regions. At least a candidate region is further detected from the plurality of homogeneous regions. The system gradually merges the computed frame differences via a morphing-based technology and intersects them with the at least a candidate region, thereby obtaining the location and a complete outline of the moving object. | 06-16-2011 |
20110142284 | Method and Apparatus for Acquiring Accurate Background Infrared Signature Data on Moving Targets - A method for measuring an infrared signature of a moving target includes: tracking the moving target with a tracking system along a path from a start position to an end position, measuring infrared radiation data of the moving target along the path, repositioning the tracking system to the start position, retracing the path to measure the infrared radiation data of the background, and determining the infrared signature of the moving target by comparing the infrared radiation data of the moving object with the infrared radiation data of the background without the moving object. | 06-16-2011 |
20110142285 | SYSTEM AND METHOD FOR TRANSITIONING FROM A MISSILE WARNING SYSTEM TO A FINE TRACKING SYSTEM IN A DIRECTIONAL INFRARED COUNTERMEASURES SYSTEM - A method for transitioning a target from a missile warning system to a fine tracking system in a directional countermeasures system includes capturing at least one image within a field of view of the missile warning system. The method further includes identifying a threat from the captured image or images and identifying features surrounding the threat. These features are registered with the threat, and an image within a field of view of the fine tracking system is captured. The registered features are used to identify a location of the threat within this captured image. | 06-16-2011 |
20110142286 | DETECTIVE INFORMATION REGISTRATION DEVICE, TARGET OBJECT DETECTION DEVICE, ELECTRONIC DEVICE, METHOD OF CONTROLLING DETECTIVE INFORMATION REGISTRATION DEVICE, METHOD OF CONTROLLING TARGET OBJECT DETECTION DEVICE, CONTROL PROGRAM FOR DETECTIVE INFORMATION REGISTRATION DEVICE, AND CONTROL PROGRAM FOR TARGET OBJECT DETECTION DEVICE - A digital camera ( | 06-16-2011 |
20110150271 | MOTION DETECTION USING DEPTH IMAGES - A sensor system creates a sequence of depth images that are used to detect and track motion of objects within range of the sensor system. A reference image is created and updated based on a moving average (or other function) of a set of depth images. A new depth image is compared to the reference image to create a motion image, which is an image file (or other data structure) with data representing motion. The new depth image is also used to update the reference image. The data in the motion image is grouped and associated with one or more objects being tracked. The tracking of the objects is updated by the grouped data in the motion image. The new positions of the objects are used to update an application. For example, a video game system will update the position of images displayed in the video based on the new positions of the objects. In one implementation, avatars can be moved based on movement of the user in front of a camera. | 06-23-2011 |
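The reference-image update and motion-image steps of this abstract can be sketched with an exponential moving average and a per-pixel depth-deviation test. The blending weight and the threshold are illustrative assumptions.

```python
import numpy as np

def update_reference(reference, depth, alpha=0.1):
    """Moving-average reference image: blend in the new depth frame."""
    return (1 - alpha) * reference + alpha * depth

def motion_image(reference, depth, threshold=50):
    """Binary motion mask: pixels whose depth departs from the
    reference by more than the threshold (assumed test)."""
    return (np.abs(depth - reference) > threshold).astype(np.uint8)
```

The resulting mask is what would then be grouped and assigned to tracked objects.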
20110150272 | SYSTEMS AND METHODS OF TRACKING OBJECT PATHS - Systems and methods for tracking the path of a user configurable object are provided. The method includes displaying a video data stream of a monitored region, configuring an object in the video data stream, configuring a valid path of the object, tracking a path of the object, and providing an alert to a user when the object travels outside of the valid path. | 06-23-2011 |
20110150273 | METHOD AND SYSTEM FOR AUTOMATED SUBJECT IDENTIFICATION IN GROUP PHOTOS - A system to automatically attach subject descriptions to a digital image containing one or more subjects is described. The system comprises a camera, a set of remotely readable badges attached to the subjects, where each badge has a readable identification, a receiver to read the badges, where the receiver can determine both the identification of each badge and the location of each badge, and a processor to combine the digital image with the identification and location information. By accessing a database containing the subject identification associated with each badge identification, the processor can attach subject identification information to each subject in the image. | 06-23-2011 |
20110150274 | METHODS FOR AUTOMATIC SEGMENTATION AND TEMPORAL TRACKING - In one embodiment, a method of detecting the centerline of a vessel is provided. The method comprises steps of acquiring a 3D image volume, initializing a centerline, initializing a Kalman filter, predicting a next center point using the Kalman filter, checking validity of the prediction made using the Kalman filter, performing template matching, updating the Kalman filter based on the template matching, and repeating the steps of predicting, checking, performing and updating a predetermined number of times. Methods of automatic vessel segmentation and temporal tracking of the segmented vessel are further described with reference to the method of detecting the centerline. | 06-23-2011 |
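The predict/update cycle of the Kalman filter named in this abstract can be sketched for a single coordinate of the center point with a constant-velocity model. The state layout and noise values are illustrative assumptions, not the patented configuration.

```python
import numpy as np

class CenterPointKF:
    """Constant-velocity Kalman filter for one coordinate of a
    center point -- an illustrative stand-in, not the patented method."""
    def __init__(self, x0, q=1e-3, r=1e-1):
        self.x = np.array([x0, 0.0])                 # state: [position, velocity]
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, 1.0], [0.0, 1.0]])  # transition (dt = 1)
        self.H = np.array([[1.0, 0.0]])              # we observe position only
        self.Q = q * np.eye(2)                       # process noise
        self.R = np.array([[r]])                     # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]                             # predicted next center point

    def update(self, z):
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]
```

In the method above, `update` would be fed the position found by template matching after the validity check passes.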
20110150275 | MODEL-BASED PLAY FIELD REGISTRATION - A method, apparatus, and system are described for model-based playfield registration. An input video image is processed. The processing of the video image includes extracting key points relating to the video image. Further, whether enough key points relating to the video image were extracted is determined; a direct estimation of the video image is performed if enough key points have been extracted, and a homography matrix of the final video image is then generated based on the direct estimation. | 06-23-2011 |
20110150276 | Identifying a characteristic of an individual utilizing facial recognition and providing a display for the individual - A method may include automatically remotely identifying at least one characteristic of an individual via facial recognition; and providing a display for the individual, the display having a content at least partially based on the identified at least one characteristic of the individual. A system may include a facial recognition module configured for automatically remotely identifying at least one characteristic of an individual via facial recognition; and a display module coupled with the facial recognition module, the display module configured for providing a display for the individual, the display having a content at least partially based on the identified at least one characteristic of the individual. | 06-23-2011 |
20110150277 | IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREOF - In an image included in a moving image, a specific area is registered as a reference area, and a specific hue range of the reference area is set as a first feature amount based on the distribution of hues of pixels in the reference area. When the occupation ratio of pixels having hues included in a second feature amount, obtained by expanding the hue range of the first feature amount in a surrounding area larger than the reference area, is smaller than a predetermined ratio, an area having a high degree of correlation is identified from an image using the second feature amount in the subsequent matching process. When the occupation ratio is equal to or larger than the predetermined ratio, an area having a high degree of correlation is identified from an image using the first feature amount in the subsequent matching process. | 06-23-2011 |
20110150278 | INFORMATION PROCESSING APPARATUS, PROCESSING METHOD THEREOF, AND NON-TRANSITORY STORAGE MEDIUM - An information processing apparatus comprising: a storage unit configured to store image features of multiple targets and mutual relationship information of the multiple targets; an input unit configured to input an image; a detection unit configured to detect a region of a target from the input image; an identification unit configured to, based on the stored image features and image features of the detected region, identify the target of the region; and an estimation unit configured to, in the case where both a first region in which a target was identified and a second region in which a target could not be identified are present in the input image, estimate a candidate for the target in the second region based on the mutual relationship information and the target in the first region. | 06-23-2011 |
20110150279 | IMAGE PROCESSING APPARATUS, PROCESSING METHOD THEREFOR, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An image processing apparatus comprising: an input unit configured to input a plurality of images obtained by capturing a target object from different viewpoints; a detection unit configured to detect a plurality of line segments from each of the plurality of input images; a setting unit configured to set, for each of the plurality of detected line segments, a reference line which intersects with the line segment; an array derivation unit configured to obtain a pattern array in which a plurality of pixel value change patterns on the set reference line are aligned; and a decision unit configured to decide association of the detected line segments between the plurality of images by comparing the pixel value change patterns, contained in the obtained pattern array, between the plurality of images. | 06-23-2011 |
20110150280 | SUBJECT TRACKING APPARATUS, SUBJECT REGION EXTRACTION APPARATUS, AND CONTROL METHODS THEREFOR - A subject tracking apparatus which performs subject tracking based on the degree of correlation between a reference image and an input image is disclosed. The degree of correlation between each of a plurality of reference images based on images input at different times, and the input image is obtained. If the maximum degree of correlation between a reference image based on a first input image among the plurality of reference images and the input image is equal to or higher than a threshold, a region with a maximum degree of correlation with a first reference image is determined as a subject region. Otherwise, a region with a maximum degree of correlation with a reference image based on an image input later than the first input image is determined as a subject region. | 06-23-2011 |
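The two-reference fallback rule above can be sketched roughly as below; a negative sum of absolute differences stands in for a real correlation measure, and the threshold is hypothetical.

```python
# Toy sketch (not the actual method): try the reference from the first input
# image; if its best correlation clears a threshold, use that region,
# otherwise fall back to a reference based on a later input image.

def best_match(reference, regions):
    def score(region):
        # correlation stand-in: higher (less negative) = better match
        return -sum(abs(a - b) for a, b in zip(reference, region))
    best = max(regions, key=score)
    return best, score(best)

def track(first_ref, newer_ref, regions, threshold=-10):
    region, corr = best_match(first_ref, regions)
    if corr >= threshold:                      # older reference still reliable
        return region
    return best_match(newer_ref, regions)[0]   # fall back to newer reference
```

Preferring the older reference while it still matches well guards against the reference drifting onto the background over time.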
20110150281 | METHOD AND DEVICE FOR DETERMINING THE ORIENTATION OF A CROSS-WOUND BOBBIN TUBE - A method and device for determining the orientation of a cross-wound bobbin tube ( | 06-23-2011 |
20110150282 | BACKGROUND IMAGE AND MASK ESTIMATION FOR ACCURATE SHIFT-ESTIMATION FOR VIDEO OBJECT DETECTION IN PRESENCE OF MISALIGNMENT - Disclosed herein are a method, system, and computer program product for aligning an input video frame from a video sequence with a background model associated with said video sequence. The background model includes a plurality of model blocks ( | 06-23-2011 |
20110150283 | APPARATUS AND METHOD FOR PROVIDING ADVERTISING CONTENT - Disclosed herein are an apparatus and method for providing advertising content effectively. The apparatus for providing advertising content comprises: an image processing unit for extracting an object from a captured image; a long-distance analysis unit for creating long-distance analysis information obtained by analyzing the object at a first distance; a short-distance analysis unit for creating short-distance analysis information obtained by analyzing the object at a second distance that is shorter than the first distance; and a content selection unit for selecting advertising content using the long-distance analysis information and the short-distance analysis information. | 06-23-2011 |
20110150284 | METHOD AND TERMINAL FOR DETECTING AND TRACKING MOVING OBJECT USING REAL-TIME CAMERA MOTION - A method is provided for detecting and tracking a moving object using real-time camera motion estimation, including generating a feature map representing a change in an input pattern in an input image, extracting feature information of the image, estimating a global motion for recognizing a motion of a camera using the extracted feature information, correcting the input image by reflecting the estimated global motion, and detecting a moving object using the corrected image. | 06-23-2011 |
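The pipeline above can be illustrated with a 1-D toy (real 2-D feature maps and motion estimation are omitted): undo the estimated global camera shift, then difference the frames so only independently moving content remains.

```python
# Toy 1-D illustration of global-motion compensation before differencing.

def undo_global_shift(frame, shift):
    # shift the 1-D frame back by the estimated camera motion (wrap-around)
    n = len(frame)
    return [frame[(i + shift) % n] for i in range(n)]

def detect_moving(prev, curr, camera_shift):
    corrected = undo_global_shift(curr, camera_shift)
    return [i for i, (a, b) in enumerate(zip(prev, corrected)) if a != b]

prev = [1, 2, 3, 4, 0]
curr = [2, 3, 4, 0, 1]        # camera panned: content shifted left by one
print(detect_moving(prev, curr, camera_shift=-1))   # → [] (camera motion only)
```

Without the compensation step, every pixel would differ between the frames and the pan would be misread as object motion.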
20110150285 | LIGHT EMITTING DEVICE AND METHOD FOR TRACKING OBJECT - A technique and a light emitting device are provided that can smoothly read out data while tracking a position of the light emitting device (an object). The light emitting device expresses data with “a change in the change of a color (switching of changes)”. The light emitting device specifies an object and the position thereof with a first primary change and thereafter expresses data with, so to speak, a secondary change (switching of the primary change). The primary change means that G and B alternately turn on (indicated by G*B) and so on. The secondary change means a change from the condition (G*B), in which G and B alternately turn on, to the condition (B*R), in which B and R alternately turn on. Thus, since data is expressed by changes of the color-condition changes, it is easier to freely express data while the position of an object is specified. | 06-23-2011 |
20110158473 | DETECTING METHOD FOR DETECTING MOTION DIRECTION OF PORTABLE ELECTRONIC DEVICE - A detecting method is provided for detecting motion direction of a portable electronic device. The portable electronic device senses a plurality of continuous images in time sequence via an image sense unit. The differences among the plurality of images are analyzed by a process unit. Consequently the process unit determines the motion direction of the portable electronic device, generates motion data based on the differences, and sends a control signal corresponding to the motion direction of the device and the motion data. | 06-30-2011 |
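A hedged sketch of the idea above using 1-D "images": find the shift that best aligns two consecutive captures (exhaustive search stands in for the real analysis), then report the implied device motion direction. The sign convention is an assumption.

```python
# Estimate device motion direction from the shift between two 1-D captures.

def motion_direction(prev, curr, max_shift=2):
    def cost(shift):
        pairs = [(prev[i], curr[i + shift])
                 for i in range(len(prev)) if 0 <= i + shift < len(curr)]
        return sum(abs(a - b) for a, b in pairs) / len(pairs)
    best = min(range(-max_shift, max_shift + 1), key=cost)
    # the scene shifts opposite to the device: content moving right in the
    # image suggests the device moved left
    return "left" if best > 0 else "right" if best < 0 else "still"

print(motion_direction([0, 0, 9, 0, 0], [0, 0, 0, 9, 0]))   # → left
```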
20110158474 | IMAGE OBJECT TRACKING AND SEGMENTATION USING ACTIVE CONTOURS - A method of image object tracking and segmentation is provided. The method includes defining an initial contour for tracking an image object and partitioning the initial contour into a plurality of contour segments. The method also includes estimating a weighted length of each of the plurality of contour segments and generating a desired contour by converging the plurality of contour segments to a plurality of edges of the image object using the estimated weighted length. | 06-30-2011 |
20110158475 | Position Measuring Method And Position Measuring Instrument - The present invention provides a position measuring instrument, comprising a GPS position detecting device | 06-30-2011 |
20110158476 | ROBOT AND METHOD FOR RECOGNIZING HUMAN FACES AND GESTURES THEREOF - A robot and a method for recognizing human faces and gestures are provided, and the method is applicable to a robot. In the method, a plurality of face regions within an image sequence captured by the robot are processed by a first classifier, so as to locate a current position of a specific user from the face regions. Changes of the current position of the specific user are tracked to move the robot accordingly. While the current position of the specific user is tracked, a gesture feature of the specific user is extracted by analyzing the image sequence. An operating instruction corresponding to the gesture feature is recognized by processing the gesture feature through a second classifier, and the robot is controlled to execute a relevant action according to the operating instruction. | 06-30-2011 |
20110158477 | REDUCING EFFECTS OF ROTATIONAL MOTION - A method and system for improving image quality by correcting errors introduced by rotational motion of an object being imaged is provided. The object is associated with a fiducial mark. The method provides a computer executable methodology for detecting a rotation and selectively reordering, deleting and/or reacquiring projection data. | 06-30-2011 |
20110158478 | HEAD MOUNTED DISPLAY - A head mounted display capable of displaying a necessary and sufficient amount of display information in an easily viewable manner, even when a large number of identifying objects are detected, is provided. A see-through-type head mounted display includes a display unit which is configured to project image light corresponding to display information onto an eye of a user, thus allowing the user to visually recognize an image corresponding to the image light while allowing external light to pass therethrough. The head mounted display selects, based on a result detected within an imaging area, the identifying objects for which associated information is displayed by the display unit. The head mounted display displays the selected associated information in association with the identifying objects which are visually recognized by the user through the display unit in a see-through manner. | 06-30-2011 |
20110158479 | METHOD AND DEVICE FOR ALIGNING A NEEDLE - A method and a device for use in conjunction with an imaging modality ( | 06-30-2011 |
20110164785 | TUNABLE WAVELET TARGET EXTRACTION PREPROCESSOR SYSTEM - The present invention is a target tracking system for enhanced target identification, target acquisition and track performance that is significantly superior to other methods. Specifically, the target tracking system incorporates an intelligent Tunable Wavelet Target Extraction Preprocessor (TWTEP). The TWTEP, which defines target characteristics in the presence of noise and clutter, 1) enhances and augments the target within the video scene to provide a better tracking source for the externally provided Track Process, 2) implements a tunable target definition from the video image to provide a highly resolved target delineation and selection, 3) utilizes a weighted pseudo-covariance technique to define the target area for shape determination and extraction, 4) implements a target definition and extraction process, and 5) defines methodologies for presentation of filtered video and images for external processing. | 07-07-2011 |
20110164786 | CLOSE-UP SHOT DETECTING APPARATUS AND METHOD, ELECTRONIC APPARATUS AND COMPUTER PROGRAM - A close-up shot detection device includes a motion detection element that calculates the amount of motion between at least two frames or fields constituting a video image for every predetermined unit, which is composed of one pixel or a plurality of adjacent pixels constituting the frame or field; a binarization element that binarizes the calculated amount of motion; a large-area specifying element that specifies, as a large area, a connected area in which the number of units is equal to or larger than a predetermined threshold, among connected areas which are obtained by connecting a predetermined number of units having the same binarized amount of motion; and a close-up shot specifying element that, when at least one preset criterion for the specified large area satisfies a predetermined condition, specifies a frame or field having the specified large area as a close-up shot. Consequently, a close-up shot can be easily and rapidly detected. | 07-07-2011 |
20110164787 | METHOD AND SYSTEM FOR APPLYING COSMETIC AND/OR ACCESSORIAL ENHANCEMENTS TO DIGITAL IMAGES - A method for creating a virtual makeover includes inputting an initial digital image into, and initiating a virtual makeover at, a local processor. Instructions are transmitted from the main server to the local processor. Positions of facial features are isolated within the digital image at the local processor. Facial regions within the digital image are defined, based on the positions of the facial features, at the local processor. After receiving input, cosmetic enhancements or accessorial enhancements are applied to the digital image at the local processor. A final digital image is generated including the enhancements. The final digital image is then displayed. At least the defining, applying, and generating steps include instructions written in a non-flash format for execution in a flash-based wrapper. | 07-07-2011 |
20110164788 | METHOD AND DEVICE FOR DETERMINING LEAN ANGLE OF BODY AND POSE ESTIMATION METHOD AND DEVICE - Provided are a method and device for determining a lean angle of a body and a pose estimation method and device. The method for determining a lean angle of a body of the present invention includes: a head-position obtaining step for obtaining a position of a head; a search region determination step for determining a plurality of search region spaced with an angle around the head; an energy function calculating step for calculating a value of an energy function for the search region; and a lean angle determining step for determining the lean angle of a search region with a largest or smallest value of the energy function as the lean angle of the body. The pose estimation method of the present invention includes a body lean-angle obtaining step, for obtaining a lean angle of a body; and a pose estimation step, for performing a pose estimation based on the lean angle of the body. | 07-07-2011 |
20110170739 | Automated Acquisition of Facial Images - Described is a technology by which medical patient facial images are acquired and maintained for associating with a patient's records and/or other items. A video camera may provide video frames, such as captured when a patient is being admitted to a hospital. Face detection may be employed to clip the facial part from the frame. Multiple images of a patient's face may be displayed on a user interface to allow selection of a representative image. Also described is obtaining the patient images by processing electronic documents (e.g., patient records) to look for a face pictured therein. | 07-14-2011 |
20110170740 | Automatic image capture - A method of automatically capturing images with precision uses an intelligent mobile device having a camera loaded with an appropriate image capture application. When a user initializes the application, the camera starts taking images of the object. Each image is qualified to determine whether it is in focus and entirely within the field of view of the camera. Two or more qualified images are captured and stored for subsequent processing. The qualified images are aligned with each other by an appropriate perspective transformation so they each fill a common frame. Averaging of the aligned images reduces noise, and a sharpening filter enhances edges, which produces a sharper image. The processed image is then converted into a two-level, black-and-white image which may be presented to the user for approval prior to submission via wireless or WiFi to a remote location. | 07-14-2011 |
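A minimal sketch of the averaging and binarization steps described above (alignment and sharpening omitted); flat pixel lists stand in for images, and the threshold is an assumption.

```python
# Average aligned frames to suppress noise, then binarize to a two-level image.

def average_frames(frames):
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def binarize(image, threshold=128):
    return [255 if px >= threshold else 0 for px in image]

shots = [[100, 250, 30], [110, 240, 20], [90, 245, 25]]   # noisy captures
avg = average_frames(shots)          # noise-reduced: [100.0, 245.0, 25.0]
print(binarize(avg))                 # → [0, 255, 0] (two-level image)
```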
20110170741 | IMAGE PROCESSING DEVICE AND STORAGE MEDIUM STORING IMAGE PROCESSING PROGRAM - There is provided an image processing device that includes a processor configured to execute instructions that cause the processor to provide functional units including: a setting unit that sets a plurality of extraction target ranges in a motion image configured of a plurality of frame images that are chronologically in succession with one another, each extraction target range being configured of a group of frame images that are selected from among the plurality of frame images constituting the motion image and that are chronologically in succession with one another, and the plurality of extraction target ranges being set such that there is no common frame image shared among the extraction target ranges; a selecting unit that selects a representative frame image from among the group of frame images in an extraction target range, the representative frame image being such a frame image whose difference from another representative frame image is the largest among differences of the frame images belonging to the extraction target range from the another representative frame image, the another representative frame image being selected from one of the extraction target ranges that is positioned chronologically adjacent to the extraction target range from which the representative frame image is selected; and a layout image generating unit that generates a layout image in which the selected representative frame images are laid out in such a pattern that indicates a chronological relationship among the representative frame images. | 07-14-2011 |
20110170742 | IMAGE PROCESSING DEVICE, OBJECT SELECTION METHOD AND PROGRAM - There is provided an image processing device including: a data storage unit that stores object identification data for identifying an object operable by a user and feature data indicating a feature of appearance of each object; an environment map storage unit that stores an environment map representing a position of one or more objects existing in a real space and generated based on an input image obtained by imaging the real space using an imaging device and the feature data stored in the data storage unit; and a selecting unit that selects at least one object recognized as being operable based on the object identification data, out of the objects included in the environment map stored in the environment map storage unit, as a candidate object being a possible operation target by a user. | 07-14-2011 |
20110170743 | METHOD FOR DETECTING OBJECT MOVEMENT AND DETECTION SYSTEM - This invention relates to a method for detecting object movement by dynamically updating a reference image data. By dynamically updating the reference image data, the impact of the ambient light change can be reduced and the detection error of object movement caused by using fixed reference image data under varying ambient light can also be avoided. The present invention further provides a detection system. | 07-14-2011 |
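The dynamically updated reference image can be sketched as below. The exponential running average is an assumed update rule; the abstract only says the reference is updated dynamically to absorb ambient-light change.

```python
# Frame differencing against a reference that tracks slow lighting drift.

def detect_and_update(reference, frame, alpha=0.1, diff_threshold=30):
    moved = any(abs(f - r) > diff_threshold for f, r in zip(frame, reference))
    # blend the new frame into the reference so slow lighting changes
    # do not accumulate into false motion detections
    updated = [(1 - alpha) * r + alpha * f for r, f in zip(reference, frame)]
    return moved, updated

moved, ref = detect_and_update([100, 100], [100, 180], alpha=0.5)
print(moved, ref)   # → True [100.0, 140.0]
```

A fixed reference would eventually flag the entire scene as "moving" after a lighting change; the blended reference converges to the new conditions instead.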
20110170744 | VIDEO-BASED VEHICLE DETECTION AND TRACKING USING SPATIO-TEMPORAL MAPS - Systems and methods for detecting and tracking objects, such as motor vehicles, within video data. The systems and method analyze video data, for example, to count objects, determine object speeds, and track the path of objects without relying on the detection and identification of background data within the captured video data. The detection system uses one or more scan lines to generate a spatio-temporal map. A spatio-temporal map is a time progression of a slice of video data representing a history of pixel data corresponding to a scan line. The detection system detects objects in the video data based on intersections of lines within the spatio-temporal map. Once the detection system has detected an object, the detection system may record the detection for counting purposes, display an indication of the object in association with the video data, determine the speed of the object, etc. | 07-14-2011 |
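A toy spatio-temporal map as described above: stack one scan line per frame into rows, so an object crossing the line traces a slanted band whose slope reflects its speed. Frames are nested lists here purely for illustration.

```python
# Build a spatio-temporal map from a fixed scan line across frames.

def spatio_temporal_map(frames, scan_row):
    return [frame[scan_row] for frame in frames]   # one row per time step

frames = [
    [[0, 0, 0], [9, 0, 0]],   # t=0: object at column 0 on scan row 1
    [[0, 0, 0], [0, 9, 0]],   # t=1: object at column 1
    [[0, 0, 0], [0, 0, 9]],   # t=2: object at column 2
]
st_map = spatio_temporal_map(frames, scan_row=1)
print([row.index(9) for row in st_map])   # → [0, 1, 2], a diagonal line
```

Detecting the object then reduces to finding lines in this 2-D map, which is why no background model of the full scene is needed.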
20110170745 | Body Gesture Control System for Operating Electrical and Electronic Devices - A body gesture control system for operating electrical and electronic devices includes an image sensor device and an image processor device to process body gesture images captured by the image sensor device for recognizing the body gesture. The image processor device includes an image calculation unit and a gesture change detection unit electrically connected therewith. The image calculation unit is used to calculate gesture regions of the captured body gesture images and the gesture change detection unit is operated to detect changes of the captured body gesture images and to thereby determine a body gesture recognition signal. | 07-14-2011 |
20110170746 | CAMERA BASED SENSING IN HANDHELD, MOBILE, GAMING OR OTHER DEVICES - Method and apparatus are disclosed to enable rapid TV camera and computer based sensing in many practical applications, including, but not limited to, handheld devices, cars, and video games. Several unique forms of social video games are disclosed. | 07-14-2011 |
20110170747 | Interactivity Via Mobile Image Recognition - Systems and methods of interacting with a virtual space, in which a mobile device is used to electronically capture image data of a real-world object, the image data is used to identify information related to the real-world object, and the information is used to interact with software to control at least one of: (a) an aspect of an electronic game; and (b) a second device local to the mobile device. Contemplated systems and methods can be used for gaming, in which the image data can be used to identify a name of the real-world object, to classify the real-world object, to identify the real-world object as a player in the game, or to identify the real-world object as a goal object or as having some other value in the game. | 07-14-2011 |
20110176707 | IMAGE ANALYSIS BY OBJECT ADDITION AND RECOVERY - The invention described herein is generally directed to methods for analyzing an image. In particular, crowded field images may be analyzed for unidentified, unobserved objects based on an iterative analysis of modified images including artificial objects or removed real objects. The results can provide an estimate of the completeness of analysis of the image, an estimate of the number of objects that are unobserved in the image, and an assessment of the quality of other similar images. | 07-21-2011 |
20110176708 | Task-Based Imaging Systems - A task-based imaging system for obtaining data regarding a scene for use in a task includes an image data capturing arrangement for (a) imaging a wavefront of electromagnetic energy from the scene to an intermediate image over a range of spatial frequencies, (b) modifying phase of the wavefront, (c) detecting the intermediate image, and (d) generating image data over the range of spatial frequencies. The task-based imaging system also includes an image data processing arrangement for processing the image data and performing the task. The image data capturing and image data processing arrangements cooperate so that signal-to-noise ratio (SNR) of the task-based imaging system is greater than SNR of the task-based imaging system without phase modification of the wavefront over the range of spatial frequencies. | 07-21-2011 |
20110182469 | 3D CONVOLUTIONAL NEURAL NETWORKS FOR AUTOMATIC HUMAN ACTION RECOGNITION - Systems and methods are disclosed to recognize human action from one or more video frames by performing | 07-28-2011 |
20110182470 | MOBILE COMMUNICATION TERMINAL HAVING IMAGE CONVERSION FUNCTION AND METHOD - A mobile communication terminal having an image conversion function arranges and displays area-specific images in a three-dimensional (3D) space on the basis of distance information of the area-specific images of a two-dimensional (2D) image. | 07-28-2011 |
20110182471 | HANDLING INFORMATION FLOW IN PRINTED TEXT PROCESSING - Systems, methods and computer-readable media for processing an image are disclosed. The system comprises a processor, an image capturing unit in communication with the processor, an inspection surface positioned so that at least a portion of the inspection surface is within a field of view (FOV) of the image capturing unit, and an output device. The system has software that monitors the FOV of the image capturing unit for at least one event. The inspection surface is capable of supporting an object of interest. The image capturing unit is in a video mode while the software is monitoring for the at least one event | 07-28-2011 |
20110182472 | EYE GAZE TRACKING - This invention relates to a method of performing eye gaze tracking of at least one eye of a user, by determining the position of the center of the eye, said method comprising the steps of: | 07-28-2011 |
20110182473 | SYSTEM AND METHOD FOR VIDEO SIGNAL SENSING USING TRAFFIC ENFORCEMENT CAMERAS - A system and method for determining the state of a traffic signal light, such as being red, yellow, or green, by employing a plurality of traffic enforcement cameras to be used in determining if a traffic violation has occurred. The system and method automatically predicts, tracks and captures violation events, such as violating a red traffic signal light, to use the video for any number of reasons, particularly for traffic enforcement purposes. There may be provided a tracking camera, a signal camera and an enforcement camera used to capture the video and other pertinent information relating to the event. The signal camera may be operatively connected to a processing unit that runs a video signal sensing (VSS) software unit to determine the active state of the system. Advantageously, this allows the monitoring of intersections for signal light violations without the need for a connection to the light itself. | 07-28-2011 |
20110182474 | EFFICIENT SYSTEM AND METHOD FOR FACE TRACKING - A method of scanning a scene using an image sensor includes (a) dividing the scene into multiple first portions; and scanning a first portion for presence of objects in an object class. The method further includes continuing the scanning of the multiple first portions for presence of other objects in the scene. The method also selects a second portion of the scene, in response to detecting an object in the first portion; and then tracking the object in the selected second portion. The second portion of the scene is selected based on estimating motion of the object detected in the first portion, so that it may still be located in the second portion. | 07-28-2011 |
20110188705 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM - A frequency component of noise that is included in both images, and a frequency component of a first image that does not include said noise are estimated based on first image data obtained through imaging, using an imaging device, a first image that includes a specific image pattern, and based on second image data, obtained by imaging, using the imaging device, second image data that does not include the specific image pattern; and weighting is controlled, relative to frequencies, when calculating a correlation between the first image data and third image data, obtained through imaging a third image through the imaging device, based on the estimated individual frequency components. | 08-04-2011 |
20110188706 | Redundant Spatial Ensemble For Computer-Aided Detection and Image Understanding - Described herein is a technology for facilitating computer-aided detection and image understanding. In one implementation, an input set of training images of a target structure, such as an anatomical structure, is received. The input set of training images is spatially realigned to different landmarks to generate multiple bags of training images. At least one of the multiple bags comprises substantially all the training images in the input set, but realigned to a landmark. The multiple bags of training images may be used to train a spatial ensemble of detectors, which can be employed to generate an output result by automatically detecting a target structure in an input image. | 08-04-2011 |
20110188707 | System and Method for Pleographic Subject Identification, Targeting, and Homing Utilizing Electromagnetic Imaging in at Least One Selected Band - The inventive data processing system and method enable automatic recognition of images captured using various electromagnetic (EM) imaging systems and techniques, and more particularly provide a system and method for applying pleographic processing for subject identification, recognition, matching, targeting, and/or homing, utilizing one or more EM imaging systems or devices in at least one selected EM band. | 08-04-2011 |
20110194731 | METHOD OF DETERMINING REFERENCE FEATURES FOR USE IN AN OPTICAL OBJECT INITIALIZATION TRACKING PROCESS AND OBJECT INITIALIZATION TRACKING METHOD - A method of determining reference features for use in an optical object initialization tracking process is disclosed, said method comprising the following steps: a) capturing at least one current image of a real environment or synthetically generated by rendering a virtual model of a real object to be tracked with at least one camera and extracting current features from the at least one current image, b) providing reference features adapted for use in an optical object initialization tracking process, c) matching a plurality of the current features with a plurality of the reference features, d) estimating at least one parameter associated with the current image based on a number of current and reference features which were matched, and determining for each of the reference features which were matched with one of the current features whether they were correctly or incorrectly matched, e) wherein the steps a) to d) are processed iteratively multiple times, wherein in step a) of every respective iterative loop a respective new current image is captured by at least one camera and steps a) to d) are processed with respect to the respective new current image, and f) determining at least one indicator associated to reference features which were correctly matched and/or to reference features which were incorrectly matched, wherein the at least one indicator is determined depending on how often the respective reference feature has been correctly matched or incorrectly matched, respectively. | 08-11-2011 |
20110194732 | IMAGE RECOGNITION APPARATUS AND METHOD - An image recognition apparatus detects a specific object image from an image to be processed, calculates a coincidence degree between an object recognisability state of the object image and that of an object in registered image information, and calculates a similarity between the image feature of the object image and the image feature in the registered image information. Based on the similarity and coincidence degree, the image recognition apparatus recognizes whether the object of the object image is that of the registered image information. When the similarity is lower than the first threshold and the coincidence degree is equal to or higher than the second threshold, the image recognition apparatus recognizes that the object of the object image is different from that of the registered image information. | 08-11-2011 |
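A minimal sketch of the decision rule in this abstract; the thresholds are hypothetical, and "coincidence" abstracts the match between the recognisability states of the object image and the registered image.

```python
# Combine feature similarity with recognisability-state coincidence.

def recognize(similarity, coincidence, sim_threshold=0.7, coin_threshold=0.8):
    if similarity < sim_threshold and coincidence >= coin_threshold:
        return "different"    # same viewing conditions, yet features disagree
    if similarity >= sim_threshold:
        return "same"
    return "undecided"        # low similarity may stem from differing states

print(recognize(0.3, 0.9))    # → different
```

The point of the rule is that low similarity is only conclusive when the two images were captured under comparable recognisability conditions.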
20110200225 | ADVANCED BACKGROUND ESTIMATION TECHNIQUE AND CIRCUIT FOR A HYPER-SPECTRAL TARGET DETECTION METHOD - A system, circuit and methods for target detection from hyper-spectral image data are disclosed. Filter coefficients are determined using a modified constrained energy minimization (CEM) method. The modified CEM method can operate on a circuit operable to perform constrained linear programming optimization. A filter comprising the filter coefficients is applied to a plurality of pixels of the hyper-spectral image data to form CEM values for the pixels, and one or more target pixels are identified from the CEM values. The process may be repeated to enhance target recognition by using filter coefficients determined by excluding the identified target pixels from the hyper-spectral image data. | 08-18-2011 |
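A sketch of the classical CEM filter this abstract builds on (the modified, linear-programming variant is not reproduced here): w = R⁻¹d / (dᵀR⁻¹d), which gives unit response on the target spectrum d while minimizing total output energy over the image pixels.

```python
# Classical constrained energy minimization (CEM) filter for one target spectrum.
import numpy as np

def cem_filter(pixels, target):
    X = np.asarray(pixels, dtype=float)   # pixels as rows, spectral bands as cols
    R = X.T @ X / len(X)                  # sample correlation matrix
    Rinv_d = np.linalg.solve(R, target)
    return Rinv_d / (target @ Rinv_d)     # normalize so that w^T d == 1

pixels = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]]
target = np.array([1.0, 0.0])
w = cem_filter(pixels, target)
print(target @ w)                         # → 1.0 (constraint satisfied)
```

Applying w to each pixel yields the CEM values from which target pixels are thresholded; re-running after excluding detected targets, as the abstract describes, sharpens R toward the background alone.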
20110200226 | CUSTOMER BEHAVIOR COLLECTION METHOD AND CUSTOMER BEHAVIOR COLLECTION APPARATUS - According to one embodiment, a computer selects trajectory data on a person positioned in an image monitoring area from trajectory data on relevant persons. The computer selects selling space image data obtained when the person corresponding to the trajectory data is positioned in the image monitoring area. The computer analyzes the selling space image data to extract a person image. The computer checks the person image extracted from the selling space image data against image data on each customer to search for customer image data obtained by taking an image of the person in the person image. Upon detecting such customer image data, the computer stores identification information on transaction data stored in association with the customer image data, in association with identification information on the trajectory data. | 08-18-2011 |
20110200227 | ANALYSIS OF DATA FROM MULTIPLE TIME-POINTS - Described herein is a technology for facilitating analysis of data across multiple time-points. In one implementation, first and second images acquired at respective first and second different time-points are received. In addition, first and second findings associated with the first and second images respectively are also received. The first and second findings are associated with at least one region of interest. A correspondence between the first and second findings may be automatically determined by aligning the first and second findings. A longitudinal analysis result may then be generated by correlating the first and second findings. | 08-18-2011 |
20110200228 | TARGET TRACKING SYSTEM AND A METHOD FOR TRACKING A TARGET - A target tracking system including a tracking module arranged to perform model-based tracking of a target based on received measurements from a sensor. A detector is arranged to detect when a target performs a manoeuvre. An output switching module is arranged to switch from a first output mode, in which model estimations of the tracking module are forwarded, to at least a second output mode, in which only reliable outputs are forwarded, in response to information indicating the detection of a target manoeuvre being received from the detector. Also disclosed are a collision avoidance system, a method for tracking a target and a computer program product. | 08-18-2011 |
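The output switching described above can be sketched minimally as follows; the reliability flags are placeholders for whatever test the system applies to its outputs.

```python
# Forward all model estimations in the first mode; after a manoeuvre is
# detected, forward only outputs flagged as reliable.

def select_outputs(estimates, manoeuvre_detected, reliable):
    if not manoeuvre_detected:
        return estimates                   # first mode: model estimations
    return [e for e, ok in zip(estimates, reliable) if ok]   # second mode

print(select_outputs([1.2, 3.4, 5.6], True, [True, False, True]))   # → [1.2, 5.6]
```

The rationale is that motion-model predictions degrade during a manoeuvre, so the system falls back to outputs it can still trust until normal tracking resumes.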
20110200229 | Object Detecting with 1D Range Sensors - Moving objects are classified based on maximum margin classification and discriminative probabilistic sequential modeling of range data acquired by a set of one or more 1D laser line scanners. The range data in the form of 2D images is pre-processed and then classified. The classifier is composed of appearance classifiers, sequence classifiers with different inference techniques, and state machine enforcement of a structure of the objects. | 08-18-2011 |
20110200230 | METHOD AND DEVICE FOR ANALYZING SURROUNDING OBJECTS AND/OR SURROUNDING SCENES, SUCH AS FOR OBJECT AND SCENE CLASS SEGMENTING - The invention relates to a method and an object detection device for analysing objects in the environment and/or scenes in the environment. The object detection device includes a data processing and/or evaluation device. In the data processing and/or evaluation device, image data (x | 08-18-2011 |
20110206236 | NAVIGATION METHOD AND APPARATUS - An automated guidance system for a moving frame. The automated guidance system has an imaging system disposed on the frame; a motion sensing system coupled to the frame and configured for sensing movement of the frame; and a processor communicably connected to the vision system for receiving image data from the vision system and generating optical flow from image data of the frame's surroundings. The processor is communicably connected to the motion sensing system for receiving motion data of the frame from the motion sensing system. The processor is configured for determining, from kinematically aided dense optical flow, corrections to frame kinematic errors due to errors in motion data from the motion sensing system. | 08-25-2011 |
20110206237 | RECOGNITION APPARATUS AND METHOD THEREOF, AND COMPUTER PROGRAM - A recognition apparatus for recognizing a position and an orientation of a target object, inputs a captured image of the target object captured by an image capturing apparatus; detects a plurality of feature portions from the captured image, and to extract a plurality of feature amounts indicating image characteristics in each of the plurality of feature portions; inputs property information indicating respective physical properties in the plurality of feature portions on the target object; inputs illumination information indicating an illumination condition at the time of capturing the captured image; determines respective degrees of importance of the plurality of extracted feature amounts based on the respective physical properties indicated by the property information and the illumination condition indicated by the illumination information; and recognizes the position and the orientation of the target object based on the plurality of feature amounts and the respective degrees of importance thereof. | 08-25-2011 |
20110206238 | PHARMACEUTICAL RECOGNITION AND IDENTIFICATION SYSTEM AND METHOD OF USE - An electronic pharmaceutical recognition and identification system is provided along with a method of use. In certain example embodiments a user can take a digital picture of a pharmaceutical with a portable appliance comprising a telephone, then text that picture to a predetermined telephone number, wait a short period of time for a pharmaceutical identification server system to electronically recognize and identify the pharmaceutical in question, and then automatically receive a text message back from the server system that includes various predetermined information regarding the pharmaceutical in question, such as its name, pictures of it, warnings, whether or not a prescription is required, as well as usage and interaction information. Fixed appliances are also provided that can passively interface with a pharmaceutical dispensing system to ensure that the prescribed pharmaceutical is being dispensed. | 08-25-2011 |
20110206239 | INPUT APPARATUS, REMOTE CONTROLLER AND OPERATING DEVICE FOR VEHICLE - An input apparatus for a vehicle includes: an operation element operable by an occupant of the vehicle; a biological information acquisition element acquiring biological information of the occupant; an unawakened state detection element detecting an unawakened state of the occupant based on the biological information, wherein the unawakened state is defined by a predetermined state different from an awakened state; and an operation disabling element disabling an operation input from the operation element when the unawakened state detection element detects the unawakened state. | 08-25-2011 |
20110206240 | DETECTING CONCEALED THREATS - Potential threat items may be concealed inside objects, such as portable electronic devices, that are subject to imaging, for example, at a security checkpoint. Data from an imaged object can be compared to pre-determined object data to determine a class for the imaged object. Further, an object can be identified inside a container (e.g., a laptop inside luggage). One-dimensional Eigen projections can be used to partition the imaged object into partitions, and feature vectors from the partitions and the object image data can be used to generate layout feature vectors. One or more layout feature vectors can be compared to training data for threat versus non-threat-containing items from the imaged object's class to determine if the imaged object contains a potential threat item. | 08-25-2011 |
20110211729 | Method for Generating Visual Hulls for 3D Objects as Sets of Convex Polyhedra from Polygonal Silhouettes - A visual hull for a 3D object is generated by using a set of silhouettes extracted from a set of images. First, a set of convex polyhedra is generated as a coarse 3D model of the object. Then for each image, the convex polyhedra are refined by projecting them to the image and determining the intersections with the silhouette in the image. The visual hull of the object is represented as the union of the convex polyhedra. | 09-01-2011 |
20110216938 | Apparatus for detecting lane-marking on road - The image processing ECU periodically acquires road-surface images and extracts edge points in the acquired road-surface image. Subsequently, the ECU determines the operating mode and extracts the edge line when the operating mode is either a dotted mode or a frame-accumulation mode. The edge points are transformed, e.g., by a Hough transform, to extract an edge line that most frequently passes through the edge points. The extracted edge line denotes the lane marking. The ECU outputs a signal to activate a buzzer alert when determining that the vehicle may depart from the lane. | 09-08-2011 |
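The Hough voting step described in the abstract above can be sketched as follows: each edge point votes for every (rho, theta) line passing through it, and the most-voted accumulator bin gives the line that passes through the most edge points. The function name, bin sizes, and accumulator layout are illustrative choices, not taken from the patent.

```python
import numpy as np

def hough_dominant_line(points, theta_bins=180, rho_res=1.0):
    """Return (rho, theta) of the line passing through the most edge points.

    points: iterable of (x, y) edge coordinates. A minimal sketch of
    Hough voting; a production system would also smooth and threshold
    the accumulator before picking peaks.
    """
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, theta_bins, endpoint=False)
    # rho = x*cos(theta) + y*sin(theta) for every point/angle pair
    rhos = pts[:, 0:1] * np.cos(thetas) + pts[:, 1:2] * np.sin(thetas)
    max_rho = np.abs(rhos).max() + rho_res
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = np.zeros((n_rho, theta_bins), dtype=int)
    rho_idx = np.round((rhos + max_rho) / rho_res).astype(int)
    for t in range(theta_bins):
        np.add.at(acc[:, t], rho_idx[:, t], 1)   # one vote per edge point
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r * rho_res - max_rho, thetas[t]
```

Note that (rho, theta) and (-rho, pi - theta) describe the same line, so a caller comparing results should test point-to-line distance rather than raw parameters.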
20110216939 | APPARATUS AND METHOD FOR TRACKING TARGET - A target tracking apparatus and method according to an exemplary embodiment of the present invention may quickly and accurately perform target detection and tracking in a photographed image given as consecutive frames by acquiring at least one target candidate image most similar to a photographed image of a previous frame among prepared reference target images, determining one of the target candidate images as a target confirmation image based on the photographed image, calculating a homography between the determined target confirmation image and the photographed image, searching the photographed image of the previous frame for feature points according to the calculated homography, and tracking an inter-frame change of the found feature points from the previous frame to a current frame. | 09-08-2011 |
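The homography step above relates feature points in the confirmation image to their positions in the photographed frame. A minimal sketch of applying a 3x3 homography to 2D points, assuming the homography has already been estimated (the helper name is hypothetical):

```python
import numpy as np

def project_points(H, pts):
    """Map 2D points through a 3x3 homography H.

    Converts points to homogeneous coordinates, applies H, and
    dehomogenizes. Illustrative helper; the patent does not specify
    how the homography itself is computed.
    """
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # (x, y) -> (x, y, 1)
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide by w
```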
20110216940 | TARGET DETECTION DEVICE AND TARGET DETECTION METHOD - Disclosed is a target detection device which can match a moving object in a captured image to an identifier when a plurality of identifiers begin to be received within a short time, or when the number of identifiers received is larger than the number of detected position histories. The device ( | 09-08-2011 |
20110216941 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND ELECTRONIC APPARATUS - The present invention relates to an information processing apparatus, an information processing method, a program, and an electronic apparatus that are capable of detecting a movement of a hand of the user with ease. | 09-08-2011 |
20110216942 | IMAGE-CAPTURING APPARATUS AND METHOD, EXPRESSION EVALUATION APPARATUS, AND PROGRAM - An image-capturing apparatus for capturing an image by using a solid-state image-capturing device may include a face detector configured to detect a face of a human being on the basis of an image signal in a period until an image signal obtained by image capturing is recorded on a recording medium; an expression evaluation section configured to evaluate the expression of the detected face and to compute an expression evaluation value indicating the degree to which the detected face is close to a specific expression in relation to expressions other than the specific expression; and a notification section configured to notify notification information corresponding to the computed expression evaluation value to an image-captured person. | 09-08-2011 |
20110216943 | IMAGE-CAPTURING APPARATUS AND METHOD, EXPRESSION EVALUATION APPARATUS, AND PROGRAM - An image-capturing apparatus for capturing an image by using a solid-state image-capturing device may include a face detector configured to detect a face of a human being on the basis of an image signal in a period until an image signal obtained by image capturing is recorded on a recording medium; an expression evaluation section configured to evaluate the expression of the detected face and to compute an expression evaluation value indicating the degree to which the detected face is close to a specific expression in relation to expressions other than the specific expression; and a notification section configured to notify notification information corresponding to the computed expression evaluation value to an image-captured person. | 09-08-2011 |
20110222724 | SYSTEMS AND METHODS FOR DETERMINING PERSONAL CHARACTERISTICS - Systems and methods are disclosed for determining personal characteristics from images by generating a baseline gender model and an age estimation model using one or more convolutional neural networks (CNNs); capturing correspondences of faces by face tracking, and applying incremental learning to the CNNs and enforcing correspondence constraint such that CNN outputs are consistent and stable for one person. | 09-15-2011 |
20110222725 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING PROGRAM - An image processing device receives a captured image as input from an image capturing device installed in a conveying mechanism that conveys and tests works. The image processing device causes the image capturing device to capture images a plurality of times at a predetermined time interval. Based on the position of the work detected from the captured image output from the image capturing device by capturing the images at a predetermined time interval and the target position set by the user's operation, the image processing device derives the delay time required for capturing the image at the timing when the work is positioned near the target position, and sets the derived delay time for the image capturing timing of the image capturing device. | 09-15-2011 |
20110222726 | GESTURE RECOGNITION APPARATUS, METHOD FOR CONTROLLING GESTURE RECOGNITION APPARATUS, AND CONTROL PROGRAM - A gesture recognition apparatus is caused to correctly recognize the start and end of a gesture, without the use of a special unit, by a natural manipulation of a user and low-load processing for the gesture recognition apparatus. The gesture recognition apparatus that recognizes the gesture from action of a recognition object taken in a moving image includes: a gravity center tracking unit that detects a specific subject having a specific feature from the moving image; a moving speed determining unit that computes a moving speed per unit time of the specific subject; a moving pattern extracting unit that extracts a moving pattern of the specific subject; and a start/end judgment unit that discriminates movement of the specific subject as an instruction (such as an instruction to start or end gesture recognition processing) input to the gesture recognition apparatus when the moving speed and the moving pattern satisfy predetermined conditions. | 09-15-2011 |
20110222727 | Object Localization Using Tracked Object Trajectories - A method of processing a video sequence is provided that includes tracking a first object and a second object for a specified number of frames, determining similarity between a trajectory of the first object and a trajectory of the second object over the specified number of frames, and merging the first object and the second object into a single object when the trajectory of the first object and the trajectory of the second object are sufficiently similar, whereby an accurate location and size for the single object is obtained. | 09-15-2011 |
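The trajectory-similarity merge in the entry above can be sketched with a mean per-frame distance between tracked centroids: tracks that stay close over the specified number of frames are grouped into a single object. The greedy grouping, the distance measure, and the threshold are illustrative assumptions; the abstract does not name its similarity criterion.

```python
import numpy as np

def merge_similar_tracks(tracks, max_mean_dist=5.0):
    """Greedily group tracks whose trajectories stay close.

    tracks: list of (N, 2) point sequences, one per tracked object,
    covering the same N frames. Returns groups of track indices; each
    group would become one merged object.
    """
    groups, assigned = [], set()
    for i, ti in enumerate(tracks):
        if i in assigned:
            continue
        group = [i]
        assigned.add(i)
        for j in range(i + 1, len(tracks)):
            if j in assigned:
                continue
            # mean per-frame Euclidean distance between the two trajectories
            mean_dist = np.linalg.norm(
                np.asarray(ti, float) - np.asarray(tracks[j], float), axis=1
            ).mean()
            if mean_dist <= max_mean_dist:
                group.append(j)
                assigned.add(j)
        groups.append(group)
    return groups
```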
20110222728 | Method and Apparatus for Scaling an Image in Segments - A method and an apparatus for scaling an image in segments are disclosed. The method includes: identifying scene features in each input video frame, and obtaining information about distribution of multiple features in the video frame; obtaining multiple feature distribution areas corresponding to the information about distribution of the multiple features, and obtaining multiple scale coefficients; and scaling the corresponding multiple feature distribution areas in each video frame according to the multiple scale coefficients. | 09-15-2011 |
20110222729 | APPARATUS AND METHOD FOR FINDING A MISPLACED OBJECT USING A DATABASE AND INSTRUCTIONS GENERATED BY A PORTABLE DEVICE - The basic invention uses a portable device that can contain a camera, a database, and a text, voice or visual entry to control the storage of an image into a database. Furthermore, the stored image can be associated with text, color, visual or audio data. The stored images can be used to guide the user towards a target whose current location the user does not recall. The user's commands can be issued verbally, textually or by scrolling through the target images in the database until the desired one is found. This target can be shoes, pink sneakers, a toy or some comparable items that the user needs to find. | 09-15-2011 |
20110222730 | Red Eye False Positive Filtering Using Face Location and Orientation - An image is acquired including a red eye defect and non red eye defect regions having a red color. An initial segmentation of candidate redeye regions is performed. A location and orientation of one or more faces within the image are determined. The candidate redeye regions are analyzed based on the determined location and orientation of the one or more faces to determine a probability that each redeye region appears at a position of an eye. Any confirmed redeye regions having at least a certain threshold probability of being a false positive are removed as candidate redeye defect regions. The remaining redeye defect regions are corrected and a red eye corrected image is generated. | 09-15-2011 |
20110222731 | Computer Controlled System for Laser Energy Delivery to the Retina - An embodiment of the invention provides a method that captures a diagnostic image of a retina having at least one lesion, wherein the lesion includes a plurality of spots to be treated. Information is received from a user interface, wherein the information includes a duration, intensity, and/or wavelength of treatment for each of the spots. A real-time image of the retina is captured; and, a composite image is created by linking the diagnostic image to the real-time image. At least one updated real-time image of the retina is obtained using eye tracking and/or image stabilization; and, an annotated image is created by modifying the composite image based on the updated real-time image. A localized laser beam is delivered to each of the spots according to the information, the composite image, and the annotated image. | 09-15-2011 |
20110228975 | METHODS AND APPARATUS FOR ESTIMATING POINT-OF-GAZE IN THREE DIMENSIONS - Methods for determining a point-of-gaze (POG) of a user in three dimensions are disclosed. In particular embodiments, the methods involve: presenting a three-dimensional scene to both eyes of the user; capturing image data including both eyes of the user; estimating first and second line-of-sight (LOS) vectors in a three-dimensional coordinate system for the user's first and second eyes based on the image data; and determining the POG in the three-dimensional coordinate system using the first and second LOS vectors. | 09-22-2011 |
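Intersecting the two line-of-sight vectors above in 3D is commonly done by taking the midpoint of the shortest segment between the two (generally skew) lines, since noisy gaze estimates rarely intersect exactly. The construction below is a standard closest-point formula, offered as an illustrative sketch; the abstract does not commit to this particular method.

```python
import numpy as np

def gaze_point(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two lines of sight.

    o1, o2: eye positions; d1, d2: (not necessarily unit) gaze
    directions. Solves for parameters s, t minimizing
    |(o1 + s*d1) - (o2 + t*d2)| and returns the segment midpoint.
    """
    o1, d1, o2, d2 = (np.asarray(v, float) for v in (o1, d1, o2, d2))
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:            # (nearly) parallel lines of sight
        s, t = 0.0, e / c             # closest point on line 2 to o1
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    return (o1 + s * d1 + o2 + t * d2) / 2.0
```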
20110228976 | PROXY TRAINING DATA FOR HUMAN BODY TRACKING - Synthesized body images are generated for a machine learning algorithm of a body joint tracking system. Frames from motion capture sequences are retargeted to several different body types, to leverage the motion capture sequences. To avoid providing redundant or similar frames to the machine learning algorithm, and to provide a compact yet highly variegated set of images, dissimilar frames can be identified using a similarity metric. The similarity metric is used to locate frames which are sufficiently distinct, according to a threshold distance. For realism, noise is added to the depth images based on noise sources which a real world depth camera would often experience. Other random variations can be introduced as well. For example, a degree of randomness can be added to retargeting. For each frame, the depth image and a corresponding classification image, with labeled body parts, are provided. 3-D scene elements can also be provided. | 09-22-2011 |
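The dissimilar-frame filtering step above keeps a frame only if it is sufficiently distinct from every frame already kept, according to a threshold distance. The sketch below uses Euclidean distance on flattened frames as a stand-in similarity metric; the patent's actual metric over retargeted depth frames is unspecified in the abstract, and the function name is hypothetical.

```python
import numpy as np

def select_distinct_frames(frames, min_dist=1.0):
    """Greedily keep frames at least `min_dist` from all kept frames.

    frames: iterable of array-likes (e.g. flattened depth images or
    pose vectors). Returns the compact, variegated subset described
    in the abstract, under the Euclidean-distance assumption.
    """
    kept = []
    for f in frames:
        arr = np.asarray(f, float).ravel()
        # keep only if sufficiently distinct from everything so far
        if all(np.linalg.norm(arr - k) >= min_dist for k in kept):
            kept.append(arr)
    return kept
```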
20110228977 | IMAGE CAPTURING DEVICE AND METHOD FOR ADJUSTING A POSITION OF A LENS OF THE IMAGE CAPTURING DEVICE - A method for adjusting a position of a lens of an image capturing device obtains a plurality of images of a monitored scene by the lens, detects a motion area in the monitored scene, and detects if a human face is in the motion area. The method further moves the lens according to movement data of the human face if the human face is detected, or moves the lens according to movement data of the motion area if the human face is not detected. | 09-22-2011 |
20110228978 | FOREGROUND OBJECT DETECTION SYSTEM AND METHOD - A foreground object detection system and method establishes a background model by reading N frames of a video stream generated by a camera. The detection system further reads each frame of the video stream, detects the pixel value difference and the brightness value difference for each pair of two corresponding pixels of two consecutive frames for each of the N frames of the video stream. In detail, by comparing the pixel value difference with a pixel threshold and by comparing the brightness value difference with a brightness threshold, the detection system may determine a foreground or background pixel. | 09-22-2011 |
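The two-threshold test above (pixel value difference versus a pixel threshold, brightness difference versus a brightness threshold) can be sketched on a pair of consecutive frames. This is a simplification: the patent maintains a background model over N frames, and the thresholds and the mean-channel brightness proxy here are illustrative assumptions.

```python
import numpy as np

def foreground_mask(prev, curr, pix_thresh=25, bright_thresh=15):
    """Mark a pixel as foreground when both its pixel value difference
    and its brightness difference against the previous frame exceed
    their thresholds.

    prev, curr: (H, W, 3) uint8 frames. Returns an (H, W) boolean mask.
    """
    prev = prev.astype(int)
    curr = curr.astype(int)
    pix_diff = np.abs(curr - prev).sum(axis=2)                 # summed channel difference
    bright_diff = np.abs(curr.mean(axis=2) - prev.mean(axis=2))  # crude brightness proxy
    return (pix_diff > pix_thresh) & (bright_diff > bright_thresh)
```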
20110228979 | Moving-object detection apparatus, moving-object detection method and moving-object detection program - Disclosed herein is a moving-object detection apparatus having a plurality of moving-object detection processing devices configured to detect a moving object on the basis of a motion vector computed by making use of a present image and a past image wherein the moving-object detection processing devices are set to operate differently from each other in at least one of the resolution of the present and past images, the time distance between the present and past images and the search area of the motion vector in order to detect the moving object. | 09-22-2011 |
20110228980 | CONTROL APPARATUS AND VEHICLE SURROUNDING MONITORING APPARATUS - A control apparatus that improves the usability of a vehicle surrounding monitoring apparatus without confusing the monitoring party while monitoring the surroundings of a vehicle. A detection area setting section ( | 09-22-2011 |
20110228981 | METHOD AND SYSTEM FOR PROCESSING IMAGE DATA - A method for processing image data representing a segmentation mask, comprises generating two-dimensional shape representations of a three-dimensional object on the basis of a plurality of parameter sets; and matching motion blocks of the segmentation mask with the two-dimensional shape representations to obtain a best fit parameter set. Thereby, for example, a distance between the three-dimensional object and a camera position may be determined. | 09-22-2011 |
20110228982 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing device includes a learning image input unit configured to input a learning image, in which a tracked object is captured on different shooting conditions, together with the shooting conditions, a feature response calculation unit configured to calculate a response of one or more integrated features, with respect to the learning image while changing a parameter in accordance with the shooting conditions, a feature learning unit configured to recognize spatial distribution of the one or more integrated features in the learning image based on a calculation result of the response and evaluate a relationship between the shooting conditions and the parameter and a spatial relationship among the integrated features so as to learn a feature of the tracked object, and a feature storage unit configured to store a learning result of the feature. | 09-22-2011 |
20110228983 | INFORMATION PROCESSOR, INFORMATION PROCESSING METHOD AND PROGRAM - Disclosed herein is an information processor including: a storage section configured to store feature quantity data of a target object and audio data associated with the target object; an acquisition section configured to acquire an image of the target object; a recognition section configured to recognize an object included in the image based on the feature quantity data stored in the storage section; and a reproduction section configured to reproduce the audio data associated with the recognized object and output a reproduced sound from an output device worn by the user. | 09-22-2011 |
20110228984 | SYSTEMS, METHODS AND ARTICLES FOR VIDEO ANALYSIS - A video analysis system including a video output device monitoring an area for activity, a video analyzer processing output of the video output device and identifying an event in near-real-time, and a persistent database archiving the event for an operational lifetime of the video analysis system and accessible in near-real-time. | 09-22-2011 |
20110228985 | APPROACHING OBJECT DETECTION SYSTEM - An approaching object detection system in which an approaching object can be accurately detected while reducing the load of calculation processing. A first moving region detection unit ( | 09-22-2011 |
20110235855 | Color Gradient Object Tracking - A system and method are provided for color gradient object tracking. A tracking area is illuminated with a chromatic light source. A color value is measured, defined by at least three attributes, reflected from an object in the tracking area, and analyzed with respect to chromatic light source characteristics. A lookup table (LUT) is accessed that cross-references color values to positions in the tracking area, and in response to accessing the LUT, the object position in the tracking area is determined. The LUT is initially built by illuminating the tracking area with the light source. A test object is inserted into the tracking area in a plurality of determined positions, and the reflected color value is measured at each determined position. The color value measurements are correlated to determined positions. As a result, a color gradient can be measured between a first determined position and a second determined position. | 09-29-2011 |
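The lookup step in the entry above can be sketched as a nearest-neighbor search: the calibration pass stores (position, reflected color) pairs, and tracking returns the position whose stored color is closest to the observed reflection. Nearest-neighbor matching in the three-attribute color space is an illustrative simplification; the patent also exploits the measured color gradient between calibration positions, and both function names are hypothetical.

```python
import numpy as np

def build_color_lut(positions, measured_colors):
    """Pair each calibration position with the color reflected there."""
    return [(tuple(p), np.asarray(c, float))
            for p, c in zip(positions, measured_colors)]

def locate(lut, color):
    """Return the LUT position whose stored color is nearest
    (Euclidean, over the three color attributes) to the observation."""
    color = np.asarray(color, float)
    return min(lut, key=lambda entry: np.linalg.norm(entry[1] - color))[0]
```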
20110235856 | METHOD AND SYSTEM FOR COMPOSING AN IMAGE BASED ON MULTIPLE CAPTURED IMAGES - A mobile multimedia device may be operable to capture consecutive image samples of a scene. The scene may comprise one or more objects such as faces or moving objects which may be identifiable by the mobile multimedia device. An image of the scene may be created by the mobile multimedia device utilizing a plurality of the captured consecutive image samples based on the identifiable objects. The image of the scene may be composed by selecting at least a portion of the captured consecutive image samples based on the identified one or more smiling faces. The image of the scene may be composed in such a way that the identified moving object, which may occur in the scene, may be eliminated from the composed image of the scene. | 09-29-2011 |
20110235857 | DEVICE AND METHOD FOR CONTROLLING STREETLIGHTS - A method for controlling streetlights located in a streetlight control area uses a streetlight power control system to control an image capturing device that captures digital images of at least one route section of the streetlight control area at a predetermined interval. The light of a streetlight corresponding to the streetlight power controller is automatically adjusted by turning the streetlight on or off and by increasing or decreasing its intensity. | 09-29-2011 |
20110235858 | Grouping Digital Media Items Based on Shared Features - Methods, apparatuses, and systems for grouping digital media items based on shared features. Multiple digital images are received. Metadata about the digital images is obtained either by analyzing the digital images or by receiving metadata from a source separate from the digital images or both. The obtained metadata is analyzed by data processing apparatus to identify a common feature among two or more of the digital images. A grouping of the two or more images is formed by the data processing apparatus based on the identified common feature. | 09-29-2011 |
20110235859 | Signal processor - A signal processor includes an input unit, an extraction unit, a calculation unit, a determination unit, and an output unit. The input unit receives a moving image including a plurality of images. The extraction unit analyzes the moving image and extracts a representative image from the moving image. The calculation unit calculates a change amount of a partial moving image including the representative image. The change amount indicates a degree of change. The determination unit uses the change amount to judge whether the representative image or at least a part of the moving image is output. The output unit outputs the representative image or the partial moving image according to a corresponding output format. | 09-29-2011 |
20110235860 | Method to estimate 3D abdominal and thoracic tumor position to submillimeter accuracy using sequential x-ray imaging and respiratory monitoring - A method of estimating target motion for image guided radiotherapy (IGRT) systems is provided. The method includes acquiring by a kV imaging system sequential images of a target motion, computing by the kV imaging system from the sequential images an image-based estimation of the target motion expressed in a patient coordinate system, transforming by the kV imaging system the image-based estimation in the patient coordinate system to an estimate in a projection coordinate system, reformulating by the kV imaging system the projection coordinate system in a converging iterative form to force a convergence of the projection coordinate system to output a resolved estimation of the target motion, and displaying by the kV imaging system the resolved estimation of the target motion. | 09-29-2011 |
20110235861 | METHOD AND APPARATUS FOR ESTIMATING ROAD SHAPE - An apparatus estimates a shape of a road on which a vehicle travels. The apparatus is mounted on the vehicle. In the apparatus, information indicative of a plurality of detection points is received through transmission and reception of electromagnetic waves. The detection points are given as a plurality of candidates for edges of the road. It is determined whether or not a distance between each detection point and the vehicle is equal to or larger than a predetermined value. A first approximated curve for each detection point having the distance equal to or larger than the predetermined value is detected, and a second approximated curve for a detection point having the distance less than the predetermined value is detected. The shape of the road is estimated by merging the first and second approximated curves. | 09-29-2011 |
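The near/far split and curve merge above can be sketched by fitting separate polynomials to near and far detection points and blending the fits. The split distance, polynomial degree, and simple coefficient averaging are illustrative assumptions; the abstract only states that the two approximated curves are merged.

```python
import numpy as np

def estimate_road_edge(points, vehicle_xy=(0.0, 0.0), split_dist=20.0, deg=2):
    """Fit x = f(y) polynomials to near and far edge detections, then
    blend them into one merged curve (coefficient vector, highest
    degree first, as returned by np.polyfit).

    points: (N, 2) array of (x, y) radar/laser edge detections.
    """
    pts = np.asarray(points, float)
    dist = np.linalg.norm(pts - np.asarray(vehicle_xy, float), axis=1)
    near, far = pts[dist < split_dist], pts[dist >= split_dist]
    # fit each partition independently, skipping underdetermined ones
    fits = [np.polyfit(p[:, 1], p[:, 0], deg) for p in (near, far) if len(p) > deg]
    return np.mean(fits, axis=0)      # naive blend of the two curves
```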
20110235862 | FIELD OF IMAGING - Embodiments of the present invention provide a computer-based method for providing image data of a region of a target object ( | 09-29-2011 |
20110235863 | PROVISION OF IMAGE DATA - A method and apparatus are disclosed for providing image data. The method includes the steps of providing incident radiation from a radiation source at a target object and, via at least one detector, detecting an intensity of radiation scattered by the target object. Also via the at least one detector an intensity of radiation provided by the radiation source absent the target object is detected. Image data is provided via an iterative process responsive to the intensity of radiation detected absent the target object and the detected intensity of radiation scattered by the target object. | 09-29-2011 |
20110235864 | MOVING OBJECT TRAJECTORY ESTIMATING DEVICE - A moving object trajectory estimating device has: a surrounding information acquisition part that acquires information on surroundings of a moving object; a trajectory estimating part that specifies another moving object around the moving object based on the acquired surrounding information and estimates a trajectory of the specified moving object; and a recognition information acquisition part that acquires recognition information on a recognizable area of the specified moving object, and the trajectory estimating part estimates a trajectory of the specified moving object, based on the acquired recognition information of the specified moving object. | 09-29-2011 |
20110243376 | METHOD AND A DEVICE FOR DETECTING OBJECTS IN AN IMAGE - Detection of an object of a specified object category in an image. With the method, it is provided that: (1) at least two detectors are provided which are respectively set up for the purpose of detecting an object of the specified object category with a specified object size, wherein object sizes differ for the detectors, (2) the image is evaluated by the detectors in order to check whether an object of the specified object category is located in the image, and (3) an object of the specified object category is detected in the image when on the basis of the evaluation of the image by at least one of the detectors it is determined that an object of the specified object category is located in the image. A system suitable for implementing the method for detecting an object of a specified object category in an image is also described. | 10-06-2011 |
20110243377 | SYSTEM AND METHOD FOR PREDICTING OBJECT LOCATION - A system for predicting object location includes a video capture system for capturing a plurality of video frames, each of the video frames having a first area, an object isolation element for locating an object in each of the plurality of video frames, the object being located at a first actual position in a first video frame and being located at a second actual position in a second video frame, and a trajectory calculation element configured to analyze the first actual position and the second actual position to determine an object trajectory, the object trajectory comprising past trajectory and predicted future trajectory, wherein the predicted future trajectory is used to determine a second area in a subsequent video frame in which to search for the object, wherein the second area is different in size than the first area. | 10-06-2011 |
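The trajectory prediction above reduces the search area in the next frame to a region around an extrapolated position. A constant-velocity extrapolation from the two actual positions is the simplest such model; the abstract does not name its motion model, and the function name is hypothetical.

```python
def predict_search_center(p1, p2):
    """Constant-velocity prediction of the next position from the last
    two actual positions: next = p2 + (p2 - p1). The tracker would then
    search a reduced window centered here, sized differently from the
    full first area.
    """
    (x1, y1), (x2, y2) = p1, p2
    return (2 * x2 - x1, 2 * y2 - y1)
```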
20110243378 | METHOD AND APPARATUS FOR OBJECT TRACKING AND LOITERING DETECTION - A method and apparatus for object tracking and loitering detection are provided. The method includes: wavelet-converting an input image by converting the input image into an image of a frequency domain to generate a frequency domain image and separating the frequency domain image according to a frequency band and a resolution; extracting object information including essential information about the input image from the frequency domain image; performing a fractal affine transform on the object information; and compensating for a difference between object information about a previous image and the object information about the input image by using a coefficient which is obtained by the fractal affine transform. | 10-06-2011 |
20110243379 | VEHICLE POSITION DETECTION SYSTEM - A system stores reference data generated by associating image feature point data with an image-capturing position and a recorded vehicle event. The system generates data for matching by extracting image feature points from an actually-captured image. The system generates information on an actual vehicle event, extracts first reference data whose image-capturing position is located in a vicinity of an estimated position of the vehicle, and extracts second reference data that includes a recorded vehicle event that matches the actual vehicle event. The system performs matching between at least one of the first reference data and the second reference data, and the data for matching, and determines a position of the vehicle based on the matching. | 10-06-2011 |
20110243380 | COMPUTING DEVICE INTERFACE - A computing device configured for providing an interface is described. The computing device includes a processor and instructions stored in memory. The computing device projects a projected image from a projector. The computing device also captures an image including the projected image using a camera. The camera operates in a visible spectrum. The computing device calibrates itself, detects a hand and tracks the hand based on a tracking pattern in a search space. The computing device also performs an operation. | 10-06-2011 |
20110243381 | METHODS FOR TRACKING OBJECTS USING RANDOM PROJECTIONS, DISTANCE LEARNING AND A HYBRID TEMPLATE LIBRARY AND APPARATUSES THEREOF - A method, non-transitory computer readable medium, and apparatus that tracks an object includes utilizing random projections to represent an object in a region of an initial frame in a transformed space with at least one fewer dimension. One of a plurality of regions in a subsequent frame with a closest similarity between the represented object and one or more of a plurality of templates is identified as a location for the object in the subsequent frame. A learned distance is applied for template matching, and techniques that incrementally update the distance metric online are utilized in order to model the appearance of the object and increase the discrimination between the object and the background. A hybrid template library, with stable templates and hybrid templates that contain appearances of the object from the initial stage of tracking as well as more recent ones, is utilized to achieve robustness with respect to pose variation and illumination changes. | 10-06-2011 |
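The dimensionality-reduction step above — representing an appearance vector in a transformed space with fewer dimensions via random projections — can be sketched as below. The sparse {-1, 0, +1} matrix is one common choice (Achlioptas-style), assumed here for illustration; the template-matching and distance-learning stages are omitted.

```python
import random

def random_projection_matrix(out_dim, in_dim, seed=0):
    """Fixed sparse random matrix with entries in {-1, 0, +1}."""
    rng = random.Random(seed)
    return [[rng.choice((-1, 0, 0, 1)) for _ in range(in_dim)]
            for _ in range(out_dim)]

def project(matrix, vec):
    """Map a high-dimensional appearance vector into the transformed space."""
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Represent 16-pixel patches in 4 dimensions; candidates are matched by distance there.
R = random_projection_matrix(4, 16)
template = [1.0] * 16
candidate = [1.0] * 15 + [2.0]          # differs from the template in one pixel
t = project(R, template)
close = sq_dist(t, project(R, candidate))
```

Because the projection is linear, small appearance changes stay small in the transformed space, which is what makes nearest-template matching there meaningful.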
20110243382 | X-Ray Inspection System and Method - The present specification discloses an X-ray system for processing X-ray data to determine an identity of an object under inspection. The X-ray system includes an X-ray source for transmitting X-rays having a range of energies through the object, a detector array for detecting the transmitted X-rays, where each detector outputs a signal proportional to an amount of energy deposited at the detector by a detected X-ray, and at least one processor that reconstructs an image from the signal, where each pixel within the image represents an associated mass attenuation coefficient of the object under inspection at a specific point in space and for a specific energy level, fits each pixel to a function to determine the mass attenuation coefficient of the object under inspection at the point in space, and uses the function to determine the identity of the object under inspection. | 10-06-2011 |
20110243383 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing device includes a reference background storage unit that stores a reference background image, an estimation unit that detects an object from an input image and estimates an approximate position and an approximate shape of the object that is detected, a background difference image generation unit that generates a background difference image obtained based on a difference value between the input image and the reference background image, a failure determination unit that determines whether a failure occurs in the background difference image based on a comparison between the background difference image that is generated by the background difference image generation unit and the object that is estimated by the estimation unit, a failure type identification unit that identifies a type of the failure, and a background image update unit that updates the reference background image in a manner to correspond to the type of the failure. | 10-06-2011 |
20110243384 | IMAGE PROCESSING APPARATUS AND METHOD AND PROGRAM - There are provided an image processing apparatus, a method, and a program capable of appropriately adjusting the stereoscopic effect in a stereoscopic image containing a person. The attention point serving as the provisional cross point position is set to a person's eye, and the cross point position is shifted backwards from the attention point as the percentage of the image occupied by the face increases, thereby adjusting the stereoscopic effect so as to increase the area of the object which is projected forward from the cross point. In calculating the back shift amount, the back shift amount is set to increase as the percentage of the standard image occupied by the face increases, a coefficient kb is set to be smaller as the number of pixels at positions nearer than the attention point increases, and the back shift amount is multiplied by the set coefficient kb. | 10-06-2011 |
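A toy version of the back-shift rule described above — the shift grows with the face's share of the image, and a coefficient kb shrinks as more pixels lie nearer than the attention point — might look like the following. The linear forms and `base_shift` constant are assumptions for illustration only, not the apparatus's actual formula.

```python
def back_shift_amount(face_ratio, near_pixel_ratio, base_shift=10.0):
    """Hypothetical back-shift rule: grows with face ratio, damped by kb."""
    shift = base_shift * face_ratio      # larger face -> cross point shifted further back
    kb = 1.0 - near_pixel_ratio          # more near pixels -> smaller coefficient
    return kb * shift

small_face = back_shift_amount(0.1, 0.0)   # face fills 10% of the standard image
large_face = back_shift_amount(0.6, 0.0)   # face fills 60%: larger back shift
damped = back_shift_amount(0.6, 0.5)       # half the pixels are nearer: kb halves it
```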
20110243385 | Moving object detection apparatus, moving object detection method, and program - Disclosed herein is a moving object detection apparatus including: an image input processing section configured to input an analysis image composed of an image taken by a camera in order to establish a designated region inside the analysis image; a first detection processing section configured to detect an image of a moving object which moves within the designated region established by the image input processing section and which is at a distance in a first range from the camera; and a second detection processing section configured to detect an image of the moving object which moves within the designated region established by the image input processing section and which is at a distance in a second range from the camera, the second range being farther than the first range. | 10-06-2011 |
20110243386 | Method and System for Multiple Object Detection by Sequential Monte Carlo and Hierarchical Detection Network - A method and system for detecting multiple objects in an image is disclosed. A plurality of objects in an image is sequentially detected in an order specified by a trained hierarchical detection network. In the training of the hierarchical detection network, the order for object detection is automatically determined. The detection of each object in the image is performed by obtaining a plurality of sample poses for the object from a proposal distribution, weighting each of the plurality of sample poses based on an importance ratio, and estimating a posterior distribution for the object based on the weighted sample poses. | 10-06-2011 |
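The per-object detection step above — draw sample poses from a proposal distribution, weight each by an importance ratio, and estimate the posterior from the weighted samples — is standard importance sampling and can be sketched in one dimension. The triangular target and uniform proposal are toy assumptions, not the trained distributions of the hierarchical detection network.

```python
def normalized_weights(samples, target_pdf, proposal_pdf):
    """Importance ratios w_i = p(x_i) / q(x_i), normalized to sum to 1."""
    raw = [target_pdf(x) / proposal_pdf(x) for x in samples]
    total = sum(raw)
    return [w / total for w in raw]

def posterior_mean(samples, weights):
    """Point estimate of the pose from the weighted sample set."""
    return sum(w * x for w, x in zip(weights, samples))

# Toy 1-D pose: proposal uniform on [0, 4]; target peaks at pose 2.
samples = [0.0, 1.0, 2.0, 3.0, 4.0]
target = lambda x: max(0.0, 1.0 - abs(x - 2.0) / 2.0)   # triangular density
proposal = lambda x: 0.25                                # uniform density on [0, 4]
w = normalized_weights(samples, target, proposal)
est = posterior_mean(samples, w)
```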
20110243387 | Analysis of Radiographic Images - The present invention therefore provides a method for the analysis of radiographic images, comprising the steps of acquiring a plurality of projection images of a patient, acquiring a surrogate signal indicative of the location of a target structure in the patient, reconstructing a plurality of volumetric images of the patient from the projection images, each volumetric image being reconstructed from projection images having a like breathing phase, identifying the position of the target structure such as a tumour in each volumetric image, associating a surrogate signal with each of the projection images, and determining a relationship between the surrogate signal and the position of the target structure. Multiple projection images having a like breathing phase can be grouped for reconstruction, to provide sufficient numbers for reconstruction. The analysis of the multiple values of the surrogate associated with each breathing phase can be used to determine the mean surrogate value and its variation. Multiple values of the surrogate signal associated with the same nominal breathing phase can be used to determine a mean value of the surrogate signal for the target position associated with that phase and a variation of the value of the surrogate signal for the target position associated with that phase. The breathing phase of specific projection images can be obtained by analysis of one or more features in the images, such as the method we described in U.S. Pat. No. 7,356,112, or otherwise. | 10-06-2011 |
20110243388 | IMAGE DISPLAY APPARATUS, IMAGE DISPLAY METHOD, AND PROGRAM - An image display apparatus may include a display section for presenting an image. The apparatus may also include a viewing angle calculation section for determining a viewing angle of a user relative to the display section. Additionally, the apparatus may include an image generation section for generating first image data representing a first image, and for supplying the first image data to the display section for presentation of the first image. The image generation section may generate the first image data based on the user's viewing angle, second image data representing a second image, and third image data representing a third image. The second image may include an object viewed from a first viewing angle and the third image may include the object viewed from a second viewing angle, the first viewing angle and the second viewing angle being different from each other and from the user's viewing angle. | 10-06-2011 |
20110243389 | METHOD OF DETECTING PARTICLES BY DETECTING A VARIATION IN SCATTERED RADIATION - A smoke detecting method which uses a beam of radiation such as a laser ( | 10-06-2011 |
20110249861 | CONTENT INFORMATION PROCESSING DEVICE, CONTENT INFORMATION PROCESSING METHOD, CONTENT INFORMATION PROCESSING PROGRAM, AND PERSONAL DIGITAL ASSISTANT - An information processing apparatus that includes a reproduction unit to reproduce video content comprising a plurality of frames; a memory to store a table including object identification information identifying an object image, and frame identification information identifying a frame of the plurality of frames that includes the object image; and a processor to extract the frame including the object image from the video content and generate display data of a reduced image corresponding to the frame for display. | 10-13-2011 |
20110249862 | IMAGE DISPLAY DEVICE, IMAGE DISPLAY METHOD, AND IMAGE DISPLAY PROGRAM - According to one embodiment, an image display device that displays acquired image frames includes: an image processing unit that detects a location of a target in a first image frame among the image frames and generates a first predicted location of the target in a second image frame acquired at a first time when a predetermined number of frames or predetermined period of time has passed since the first image frame was acquired; a script processing unit that generates at least one tracking image that starts from the location of the target in the first image frame and heads toward the first predicted location in the second image frame; a synthesis unit that generates combined images where the at least one tracking image is put on image frames between the first and second image frames; and a display unit that displays the combined images. | 10-13-2011 |
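The tracking image that "starts from the location of the target in the first image frame and heads toward the first predicted location" can be sketched as linear interpolation across the in-between frames. This is an illustrative guess at the geometry, not the device's actual script processing; `tracking_path` is a hypothetical helper.

```python
def tracking_path(start, end, n_frames):
    """Positions of the tracking image for each frame between start and end."""
    return [(start[0] + (end[0] - start[0]) * k / n_frames,
             start[1] + (end[1] - start[1]) * k / n_frames)
            for k in range(1, n_frames)]

# Target detected at (0, 0); predicted location (10, 5) five frames later.
path = tracking_path((0.0, 0.0), (10.0, 5.0), 5)
```

Each position in `path` would be overlaid on the corresponding intermediate frame by the synthesis unit.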
20110249863 | INFORMATION PROCESSING DEVICE, METHOD, AND PROGRAM - An information processing device includes a face detection unit that detects a face area from a target image, a feature point detection unit that detects a feature point of the detected face area, a determination unit that determines an attention area that is an area to which attention is paid in the face area based on the detected feature point, a reference color extraction unit that extracts a reference color that is color setting obtained from the target image in the determined attention area, an adjustment unit that adjusts the extracted reference color to a color setting for a modified image generated from the target image as a base, and a generation unit that generates the modified image from the target image by drawing the attention area using the color setting for the modified image. | 10-13-2011 |
20110249864 | MEASUREMENT OF THREE-DIMENSIONAL MOTION CHARACTERISTICS - A system for measurement of three-dimensional motion of an object is provided. The system includes a light projection means adapted for projecting, for distinct time intervals, light of at least two different colors with a cross-sectional pattern of fringe lines onto a surface of the object and also includes image acquisition means for capturing an image of the object during an exposure time, wherein the distinct time intervals are within the duration of the exposure time. The system further includes image processing means adapted for processing the image to obtain a different depth map for each color based on a projected pattern of fringe lines on the object as viewed from the position of the image acquisition means, to determine corresponding points on the depth maps of each color, and to determine a three-dimensional motion characteristic of the object based on the positions of corresponding points on the depth maps. | 10-13-2011 |
20110249865 | APPARATUS, METHOD AND COMPUTER-READABLE MEDIUM PROVIDING MARKER-LESS MOTION CAPTURE OF HUMAN - Provided are an apparatus, method and computer-readable medium providing marker-less motion capture of a human. The apparatus may include a two-dimensional (2D) body part detection unit to detect, from input images, candidate 2D body part locations of candidate 2D body parts; a three-dimensional (3D) lower body part computation unit to compute 3D lower body parts using the detected candidate 2D body part locations; a 3D upper body computation unit to compute 3D upper body parts based on a body model; and a model rendering unit to render the model in accordance with a result of the computed 3D upper body parts. | 10-13-2011 |
20110249866 | METHODS AND SYSTEMS FOR THREE DIMENSIONAL OPTICAL IMAGING, SENSING, PARTICLE LOCALIZATION AND MANIPULATION - Embodiments include methods, systems, and/or devices that may be used to image, obtain three-dimensional information from a scene, and/or locate multiple small particles and/or objects in three dimensions. A point spread function (PSF) with a predefined three dimensional shape may be implemented to obtain high Fisher information in 3D. The PSF may be generated via a phase mask, an amplitude mask, a hologram, or a diffractive optical element. The small particles may be imaged using the 3D PSF. The images may be used to find the precise location of the object using an estimation algorithm such as maximum likelihood estimation (MLE), expectation maximization, or Bayesian methods, for example. Calibration measurements can be used to improve the theoretical model of the optical system. Fiduciary particles/targets can also be used to compensate for drift and other types of movement of the sample relative to the detector. | 10-13-2011 |
20110249867 | DETECTION OF OBJECTS IN DIGITAL IMAGES - A system and method to detect objects in a digital image. At least one image representing at least one frame of a video sequence is received. A given color channel of the image is extracted. At least one blob that stands out from a background of the given color channel is identified. One or more features are extracted from the blob. The one or more features are provided to a plurality of pre-learned object models each including a set of pre-defined features associated with a pre-defined blob type. The one or more features are compared to the set of pre-defined features. The blob is determined to be of a type that substantially matches a pre-defined blob type associated with one of the pre-learned object models. At least a location of an object is visually indicated within the image that corresponds to the blob. | 10-13-2011 |
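A minimal sketch of the blob-identification step above, assuming that a blob "stands out from a background" means its pixels exceed a fixed threshold in the extracted color channel, and that blobs are 4-connected components; the feature extraction and pre-learned object models are omitted. The threshold and image are illustrative.

```python
from collections import deque

def find_blobs(channel, threshold):
    """4-connected components of pixels whose value exceeds `threshold`."""
    h, w = len(channel), len(channel[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for sy in range(h):
        for sx in range(w):
            if channel[sy][sx] > threshold and not seen[sy][sx]:
                q, blob = deque([(sy, sx)]), []
                seen[sy][sx] = True
                while q:                              # breadth-first flood fill
                    y, x = q.popleft()
                    blob.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and channel[ny][nx] > threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                blobs.append(blob)
    return blobs

# A 4x6 single color channel with two bright regions against a dark background.
img = [
    [0, 9, 9, 0, 0, 0],
    [0, 9, 9, 0, 0, 0],
    [0, 0, 0, 0, 8, 8],
    [0, 0, 0, 0, 8, 0],
]
blobs = find_blobs(img, 5)
sizes = sorted(len(b) for b in blobs)
```

Features such as area (`len(blob)`) or bounding box would then be compared against each pre-learned blob type.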
20110249868 | LINE-OF-SIGHT DIRECTION DETERMINATION DEVICE AND LINE-OF-SIGHT DIRECTION DETERMINATION METHOD - Provided are a line-of-sight direction determination device and a line-of-sight direction determination method capable of highly precisely and accurately determining a line-of-sight direction from immediately after start of measurement without indication of an object to be carefully observed and adjustment work done in advance. The line-of-sight direction determination device ( | 10-13-2011 |
20110255738 | Method and Apparatus for Visual Search Stability - Various methods for visual search stability are provided. One example method includes determining a plurality of image matching distances for a captured object depicted in a video frame, where each image matching distance is indicative of a quality of a match between the captured object and a respective object match result. The example method further includes including, in a candidate pool, an indication of the object match results having image matching distances in a candidate region, discarding the object match results having image matching distances in a non-candidate region, and analyzing the object match results with image matching distances in a potential candidate region to include, in the candidate pool, indications of select object match results with image matching distances in the potential candidate region. Similar and related example methods and example apparatuses are also provided. | 10-20-2011 |
20110255739 | IMAGE CAPTURING DEVICE AND METHOD WITH OBJECT TRACKING - A method for dynamically tracking a specific object in a monitored area obtains an image of the monitored area by one of a plurality of image capturing devices in the monitored area, and detects the specific object in the obtained image. The method further determines adjacent image capturing devices in the monitored area according to a path table upon the condition that the specific object is detected, and adjusts a detection sensitivity of each of the adjacent image capturing devices. | 10-20-2011 |
20110255740 | VEHICLE TRACKING SYSTEM AND TRACKING METHOD THEREOF - The present invention discloses a vehicle tracking system and method, and the tracking method comprises the steps of capturing a bright object from an image by bright object segmentation; labeling the bright object by a connected component labeling method to form a connected component object; identifying, analyzing and combining the characteristics of the connected component object to form a lamp object by bright object recognition; tracking the trajectory of the lamp object by a multi-vehicle tracking method; and identifying the type of a vehicle having the lamp object by vehicle detection/recognition and counting the number of various vehicles. | 10-20-2011 |
20110255741 | METHOD AND APPARATUS FOR REAL-TIME PEDESTRIAN DETECTION FOR URBAN DRIVING - A computer implemented method for detecting the presence of one or more pedestrians in the vicinity of the vehicle is disclosed. Imagery of a scene is received from at least one image capturing device. A depth map is derived from the imagery. A plurality of pedestrian candidate regions of interest (ROIs) is detected from the depth map by matching each of the plurality of ROIs with a 3D human shape model. At least a portion of the candidate ROIs is classified by employing a cascade of classifiers tuned for a plurality of depth bands and trained on a filtered representation of data within the portion of candidate ROIs to determine whether at least one pedestrian is proximal to the vehicle. | 10-20-2011 |
20110255742 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION STORAGE MEDIUM - A situation data obtaining unit obtains situation data describing a situation of an image capturing target of which image is captured by an image capturing device for producing an image to be output. Based on the situation data, a simulation process executing unit carries out a simulation process for simulating a behavior of the image capturing target after the situation of the image capturing target, described by the situation data. A combined screen image output unit outputs a result of the simulation process by the simulation process executing unit. The simulation process executing unit changes the behavior of the image capturing target in the simulation process in response to an operation received from a user. | 10-20-2011 |
20110255743 | OBJECT RECOGNITION USING HAAR FEATURES AND HISTOGRAMS OF ORIENTED GRADIENTS - A system and method to detect objects in a digital image. At least one image representing at least one frame of a video sequence is received. A sliding window of different window sizes at different locations is placed in the image. A cascaded classifier including a plurality of increasingly accurate layers is applied to each window size and each location. Each layer includes a plurality of classifiers. An area of the image within a current sliding window is evaluated using one or more weak classifiers in the plurality of classifiers based on at least one of Haar features and Histograms of Oriented Gradients features. An output of each weak classifier is a weak decision as to whether the area of the image includes an instance of an object of a desired object type. A location of the zero or more images associated with the desired object type is identified. | 10-20-2011 |
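The cascade structure above — a sequence of increasingly accurate layers, each summing weak decisions, with a window rejected as soon as any layer fails — can be sketched as follows. The weak classifiers here are invented toys operating on a `(mean_intensity, edge_energy)` pair, not actual Haar or HOG feature evaluations.

```python
def run_cascade(window, layers):
    """Each layer sums weak decisions; reject as soon as a layer fails."""
    for weak_classifiers, layer_threshold in layers:
        score = sum(clf(window) for clf in weak_classifiers)
        if score < layer_threshold:
            return False          # early rejection: most windows exit cheaply
    return True                   # survived every layer: report a detection

# Hypothetical weak classifiers on a window summarized as (mean_intensity, edge_energy).
brightish = lambda w: 1 if w[0] > 100 else 0
edgy      = lambda w: 1 if w[1] > 0.5 else 0
layers = [([brightish], 1), ([brightish, edgy], 2)]   # cheap layer first

hit = run_cascade((120, 0.8), layers)    # passes both layers
miss = run_cascade((80, 0.8), layers)    # rejected by the first layer
```

The early-exit design is why such cascades stay fast over a sliding window of many sizes and locations: the accurate, expensive layers only run on the few windows the cheap layers cannot rule out.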
20110255744 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 10-20-2011 |
20110255745 | IMAGE ANALYSIS PLATFORM FOR IDENTIFYING ARTIFACTS IN SAMPLES AND LABORATORY CONSUMABLES - A High-resolution Image Acquisition and Processing Instrument (HIAPI) performs at least five simultaneous measurements in a noninvasive fashion, namely: (a) determining the volume of a liquid sample in wells (or microtubes) containing liquid sample; (b) detection of precipitate, objects or artifacts within microtiter plate wells; (c) classification of colored samples in microtiter plate wells or microtubes; (d) determination of contaminant (e.g., water concentration); (e) air bubbles; (f) problems with the actual plate. Remediation of contaminant is also possible. | 10-20-2011 |
20110255746 | SYSTEM FOR USING THREE-DIMENSIONAL MODELS TO ENABLE IMAGE COMPARISONS INDEPENDENT OF IMAGE SOURCE - A method for identifying an object based at least in part on a reference database including two-dimensional images of objects includes the following steps: (a) providing a three-dimensional model reference database containing a plurality of estimated three-dimensional models, wherein each estimated three-dimensional model is derived from a corresponding two-dimensional image from the two-dimensional reference database; (b) sampling at least one image of an object to be identified; (c) implementing at least one identification process to identify the object, the identification process employing data from the three-dimensional model reference database. | 10-20-2011 |
20110255747 | MOVING OBJECT DETECTION APPARATUS AND MOVING OBJECT DETECTION METHOD - A moving object detection apparatus includes: an image input unit which receives a plurality of pictures included in video; a trajectory calculating unit which calculates a plurality of trajectories from the pictures; a subclass classification unit which classifies the trajectories into a plurality of subclasses; an inter-subclass approximate geodetic distance calculating unit which calculates, for each of the subclasses, an inter-subclass approximate geodetic distance representing similarity between the subclass and another subclass, using an inter-subclass distance that is a distance including a minimum value of a linear distance between each of trajectories belonging to the subclass and one of trajectories belonging to the other subclass; and a segmentation unit which performs segmentation by determining, based on the calculated inter-subclass approximate geodetic distance, a set of subclasses including similar trajectories as one class. | 10-20-2011 |
20110255748 | ARTICULATED OBJECT REGION DETECTION APPARATUS AND METHOD OF THE SAME - An articulated object region detection apparatus includes: a subclass classification unit which classifies trajectories into subclasses; a distance calculating unit which calculates, for each of the subclasses, a point-to-point distance and a geodetic distance between the subclass and another subclass; and a region detection unit which detects, as a region having an articulated motion, two subclasses to which trajectories corresponding to two regions connected via the same articulation and indicating the articulated motion belong, based on a temporal change in the point-to-point distance and a temporal change in the geodetic distance between two given subclasses. | 10-20-2011 |
20110262001 | VIEWPOINT DETECTOR BASED ON SKIN COLOR AREA AND FACE AREA - In a particular illustrative embodiment, a method of determining a viewpoint of a person based on skin color area and face area is disclosed. The method includes receiving image data corresponding to an image captured by a camera, the image including at least one object to be displayed at a device coupled to the camera. The method further includes determining a viewpoint of the person relative to a display of the device coupled to the camera. The viewpoint of the person may be determined by determining a face area of the person based on a determined skin color area of the person and tracking a face location of the person based on the face area. One or more objects displayed at the display may be moved in response to the determined viewpoint of the person. | 10-27-2011 |
20110262002 | HAND-LOCATION POST-PROCESS REFINEMENT IN A TRACKING SYSTEM - A tracking system having a depth camera tracks a user's body in a physical space and derives a model of the body, including an initial estimate of a hand position. Temporal smoothing is performed when the initial estimate moves by less than a threshold level from frame to frame, while little or no smoothing is performed when the movement is more than the threshold. The smoothed estimate is used to define a local volume for searching for a hand extremity to define a new hand position. Another process generates stabilized upper body points that can be used as reliable reference positions, such as by detecting and accounting for occlusions. The upper body points and a prior estimated hand position are used to define an arm vector. A search is made along the vector to detect a hand extremity to define a new hand position. | 10-27-2011 |
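The threshold-gated temporal smoothing described above — heavy smoothing when the initial estimate moves by less than a threshold between frames, little or none when it moves more — can be sketched as below. The threshold and blend factor are assumed values, and `smooth_hand` is a hypothetical name; the local-volume search and arm-vector stages are omitted.

```python
def smooth_hand(prev, raw, threshold=2.0, alpha=0.3):
    """Blend away sub-threshold jitter; pass large motions through unchanged."""
    d = ((raw[0] - prev[0]) ** 2 + (raw[1] - prev[1]) ** 2
         + (raw[2] - prev[2]) ** 2) ** 0.5
    if d >= threshold:
        return raw                      # large motion: trust the new estimate
    # small motion: blend toward the new estimate to suppress frame-to-frame jitter
    return tuple(p + alpha * (r - p) for p, r in zip(prev, raw))

big = smooth_hand((0.0, 0.0, 0.0), (5.0, 0.0, 0.0))    # 5 units: passes through
small = smooth_hand((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))  # 1 unit: heavily damped
```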
20110262003 | OBJECT LEARNING METHOD, OBJECT TRACKING METHOD USING THE SAME, AND OBJECT LEARNING AND TRACKING SYSTEM - The present invention relates to an object learning method that minimizes the time required for learning an object, an object tracking method using the object learning method, and an object learning and tracking system. The object learning method includes: receiving, by a terminal, an image to be learned through a camera to generate a front image; generating m view points used for object learning and generating first images obtained when viewing the object from the m view points using the front image; generating second images by performing radial blur on the first images; separating an area used for learning from the second images to obtain reference patches; and storing pixel values of the reference patches. | 10-27-2011 |
20110262004 | Learning Device and Learning Method for Article Transport Facility - A learning control device performs a positioning process, a first image capturing process, and a first deviation amount calculating process in which a reference position deviation amount in the horizontal direction between the imaging reference position and a detection mark is derived based on image information captured in the first image capturing process, to derive a position adjustment amount from the derived reference position deviation amount. The learning control device further includes a positioning correcting process in which the position adjustment device is operated to adjust a position of the second learn assist member based on the derived position adjustment amount when the reference position deviation amount derived in the first deviation amount calculating process falls outside a set tolerance range. A second image capturing process and a second deviation amount calculating process may be further provided. | 10-27-2011 |
20110262005 | OBJECT DETECTING METHOD AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING AN OBJECT DETECTION PROGRAM - An object detecting method includes dividing a standard pattern into two or more areas radially from a central point; selecting, in each divided area of the standard pattern, a standard pattern pixel position at the maximum distance from the area dividing central point as a standard pattern representative point; dividing a determined pattern into two or more areas; selecting, in each divided area of the determined pattern, a determined pattern pixel position at the maximum distance from the area dividing central point as a determined pattern representative point; determining a positional difference between the standard pattern representative point and the determined pattern representative point in the corresponding divided areas; and determining the determined pattern as a target object when the positional differences in all of the divided areas are within a predetermined range. | 10-27-2011 |
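The steps above — radial division around a central point, one farthest-pixel representative per divided area, then per-area positional differences checked against a range — can be sketched as follows. The sector count, tolerance, and sample patterns are assumptions for illustration.

```python
import math

def representative_points(pixels, center, n_sectors=4):
    """Farthest pixel from `center` within each angular sector (None if empty)."""
    reps = [None] * n_sectors
    best = [-1.0] * n_sectors
    sector_width = 2 * math.pi / n_sectors
    for (x, y) in pixels:
        angle = math.atan2(y - center[1], x - center[0]) % (2 * math.pi)
        s = min(int(angle / sector_width), n_sectors - 1)
        d = math.hypot(x - center[0], y - center[1])
        if d > best[s]:
            best[s], reps[s] = d, (x, y)
    return reps

def max_point_difference(reps_a, reps_b):
    """Largest positional difference between matched representative points."""
    return max(math.hypot(a[0] - b[0], a[1] - b[1])
               for a, b in zip(reps_a, reps_b) if a and b)

# A standard pattern and a candidate with one pixel displaced by 1 unit.
std  = [(2, 1), (-1, 2), (-2, -1), (1, -2)]
cand = [(2, 1), (-1, 2), (-3, -1), (1, -2)]
diff = max_point_difference(representative_points(std, (0, 0)),
                            representative_points(cand, (0, 0)))
is_match = diff <= 1.5    # within the predetermined range -> target object
```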
20110262006 | INTERFACE APPARATUS, GESTURE RECOGNITION METHOD, AND GESTURE RECOGNITION PROGRAM - An interface apparatus is configured to output an operation signal to a target apparatus operated in accordance with a gesture command. In the interface apparatus, a reference object detection unit detects a reference object having a feature similar to a predetermined reference feature value from an image captured by an image capture unit and generates reference information identifying the reference object. Based on the reference information, an operating object identifying unit identifies as the operating object a feature object included in the image and satisfying a predetermined identification condition in terms of a relative relationship with the reference object and extracts operating object information identifying the operating object. An operation signal generation unit starts detecting the gesture command according to a change in position of the identified operating object and generates the operation signal corresponding to the gesture command. | 10-27-2011 |
20110262007 | SHAPE MEASUREMENT APPARATUS AND CALIBRATION METHOD - The shape measurement apparatus calculates a characteristic amount for a plurality of points of interest on a surface of a measurement target object, based on an image obtained by image capturing with a camera, calculates an orientation of a normal line based on a value of the characteristic amount by referencing data stored in advance in a storage device, and restores the three-dimensional shape of the surface of the measurement target object based on a result of the calculation. The storage device stores a plurality of data sets generated respectively for a plurality of reference positions arranged in a field of view of the camera, and the data set to be referenced is switched depending on a position of a point of interest. | 10-27-2011 |
20110262008 | Method for Determining Position Data of a Target Object in a Reference System - A method for determining the position data of a target object in a reference system from an observation position at a distance. A three-dimensional reference model of the surroundings of the target object is provided, the reference model including known geographical location data. An image of the target object and its surroundings, resulting from the observation position for an observer, is matched with the reference model. The position data of the sighted target object in the reference model is determined as relative position data with respect to known location data of the reference model. | 10-27-2011 |
20110262009 | METHOD AND APPARATUS FOR IDENTIFYING OBSTACLE IN IMAGE - A method for identifying obstacles in images is disclosed. In the method, images of the current frame and of the N frames nearest to the current frame are obtained, and the obtained images are divided in the same way, so that the image of each frame yields a plurality of divided block regions. The motion obstacle confidence of each block region is calculated across the current frame and the N nearest frames. Whether each block region in the image of the current frame is an obstacle is then decided successively according to the motion obstacle confidence of that block region, and the obstacles in the images are determined according to each block region. | 10-27-2011 |
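The block-region confidence step described in the entry above can be sketched roughly as follows. The block size, the mean-absolute-difference confidence measure, and the threshold are illustrative assumptions, not the patent's actual method:

```python
def block_motion_confidence(prev, curr, block=2):
    """Divide two grayscale frames (lists of rows) into block x block
    regions and score each region by the mean absolute pixel difference
    between the frames (an illustrative confidence measure)."""
    h, w = len(curr), len(curr[0])
    scores = {}
    for by in range(0, h, block):
        for bx in range(0, w, block):
            total, n = 0, 0
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    total += abs(curr[y][x] - prev[y][x])
                    n += 1
            scores[(by // block, bx // block)] = total / n
    return scores

def obstacle_blocks(scores, threshold=10.0):
    """Block coordinates whose confidence meets a (hypothetical) threshold."""
    return sorted(k for k, v in scores.items() if v >= threshold)
```

A real implementation would accumulate confidence over the N nearest frames rather than a single frame pair.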
20110262010 | ARRANGEMENT AND METHOD RELATING TO AN IMAGE RECORDING DEVICE - An input system for a digital camera may include a portion for taking at least one image to be used as a control image; and a controller to control at least one operation of the digital camera based on a control command recognized from the control image, the control command controlling a function of the camera. | 10-27-2011 |
20110268316 | MULTIPLE CENTROID CONDENSATION OF PROBABILITY DISTRIBUTION CLOUDS - Systems and methods are disclosed for identifying objects captured by a depth camera by condensing classified image data into centroids of probability that captured objects are correctly identified entities. Output exemplars are processed to detect spatially localized clusters of non-zero probability pixels. For each cluster, a centroid is generated, generally resulting in multiple centroids for each differentiated object. Each centroid may be assigned a confidence value, indicating the likelihood that it corresponds to a true object, based on the size and shape of the cluster, as well as the probabilities of its constituent pixels. | 11-03-2011 |
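The condensation step in the entry above can be sketched as a connected-component pass over a per-pixel probability grid. The 4-connectivity choice and the confidence measure (mean member probability) are stand-ins for the patent's size/shape-based confidence:

```python
from collections import deque

def condense_centroids(prob, min_p=0.0):
    """Find 4-connected clusters of pixels with probability > min_p and
    return one (centroid_x, centroid_y, confidence) triple per cluster,
    with the centroid weighted by the member probabilities."""
    h, w = len(prob), len(prob[0])
    seen = [[False] * w for _ in range(h)]
    out = []
    for y in range(h):
        for x in range(w):
            if prob[y][x] > min_p and not seen[y][x]:
                q, members = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    members.append((cy, cx, prob[cy][cx]))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and prob[ny][nx] > min_p and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                total = sum(p for _, _, p in members)
                cx_ = sum(x * p for _, x, p in members) / total
                cy_ = sum(y * p for y, _, p in members) / total
                out.append((cx_, cy_, total / len(members)))
    return out
```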
20110268317 | Data Capture and Identification System and Process - An identification method and process for objects from digitally captured images thereof that uses data characteristics to identify an object from a plurality of objects in a database. The data is broken down into parameters such as a Shape Comparison, Grayscale Comparison, Wavelet Comparison, and Color Cube Comparison with object data in one or more databases to identify the actual object of a digital image. | 11-03-2011 |
20110268318 | PHOTO DETECTING APPARATUS AND SYSTEM HAVING THE SAME - A photo detecting apparatus may include a signal processing unit, a control register unit, and a register data changing unit. The signal processing unit is configured to process electric signals converted from incident light to generate image data. The control register unit supplies a set value to the signal processing unit, the set value controlling operation of the signal processing unit. The control register unit stores a first set value supplied through a first bus, the first set value corresponding to an initial set value based on a decoded external control signal. In addition, the register data changing unit supplies a second set value to the control register unit through a second bus, separate from the first bus, when the first set value is to be changed. | 11-03-2011 |
20110268319 | DETECTING AND TRACKING OBJECTS IN DIGITAL IMAGES - There is provided an improved solution for detecting and tracking objects in digital images. The solution comprises selecting a neighborhood for each pixel under observation, the neighborhood being of known size and form, and reading pixel values of the neighborhood. Further the solution comprises selecting at least one set of coefficients for weighting each neighborhood such that each pixel value of each neighborhood is weighted with at least one coefficient; searching for an existence of at least one object feature at each pixel under observation on the basis of a combination of weighted pixel values at each neighborhood; and verifying the existence of the object in the digital image on the basis of the searches of existence of at least one object feature at a predetermined number of pixels. | 11-03-2011 |
20110268320 | METHOD AND APPARATUS FOR DETECTING AND SEPARATING OBJECTS OF INTEREST IN SOCCER VIDEO BY COLOR SEGMENTATION AND SHAPE ANALYSIS - Substantial elimination of errors in the detection and location of overlapping human objects in an image of a playfield is achieved, in accordance with at least one aspect of the invention, by performing a predominately shape-based analysis of one or more characteristics obtained from a specified portion of the candidate non-playfield object, by positioning a human object model substantially over the specified portion of the candidate non-playfield object in accordance with information based at least in part on information from the shape-based analysis, and removing an overlapping human object from the portion of the candidate non-playfield object identified by the human object model. In one exemplary embodiment, the human object model is an ellipse whose major and minor axes are variable in relation to one or more characteristics identified from the specified portion of the candidate non-playfield object. | 11-03-2011 |
20110268321 | PERSON-JUDGING DEVICE, METHOD, AND PROGRAM - A person-judging device comprises: an obstruction storage which stores information indicating an area of an obstruction which is extracted from an image based on a video signal from an external camera, the obstruction being extracted from the image; head portion range calculation means which, when a portion of an object which is extracted from the image is hidden by the obstruction, assumes that a potential range of grounding points where the object touches a reference face on the image is the area of the obstruction which is stored in the obstruction storage, and which, based on the assumed range and the correlation between the height of a person and the size and position of the head portion that are previously provided, calculates the potential range of the head portion on the image by assuming that a portion farthest from the grounding points of the object is the head portion of the person; and head portion detection means that judges whether an area including a shape corresponding to the head portion exists in the calculated range of the head portion. | 11-03-2011 |
20110274314 | REAL-TIME CLOTHING RECOGNITION IN SURVEILLANCE VIDEOS - Systems and methods are disclosed to recognize clothing from videos by detecting and tracking a human; performing face alignment and occlusion detection; and feeding the results of age and gender estimation, skin area extraction, and clothing segmentation to a linear support vector machine (SVM) to recognize clothing worn by the human. | 11-10-2011 |
20110274315 | METHOD, DEVICE, AND COMPUTER-READABLE MEDIUM OF OBJECT DETECTION - Disclosed are an object detection method and an object detection device. The object detection method comprises a step of obtaining plural detection results of a current frame according to plural object detection methods; a step of setting initial probabilities of the plural detection results of the current frame; a step of calculating a movement frequency distribution diagram representing movement frequencies of respective pixels in the current frame; a step of obtaining detection results of a previous frame; a step of updating the probabilities of the plural detection results of the current frame; and a step of determining a final list of detected objects based on the updated probabilities of the plural detection results of the current frame. | 11-10-2011 |
20110274316 | METHOD AND APPARATUS FOR RECOGNIZING LOCATION OF USER - A method of recognizing the location of a user is provided, which includes detecting the user's two eyes and mouth in their face, calculating a ratio of the distance between the two eyes to the distance between the middle point of the two eyes and the mouth, calculating a rotation angle of the face according to the ratio, and detecting the distance between the face and the camera based on the rotation angle. | 11-10-2011 |
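The geometry in the entry above can be sketched as follows. The cosine foreshortening model, the frontal ratio, and the calibration constants (`ref_eye_px`, `ref_distance_cm`) are hypothetical placeholders, not the patent's actual values:

```python
import math

def face_rotation_and_distance(eye_l, eye_r, mouth,
                               frontal_ratio=1.0, ref_eye_px=60.0,
                               ref_distance_cm=50.0):
    """Estimate horizontal head rotation from the ratio of eye-to-eye
    distance to eyes-midpoint-to-mouth distance, then scale a reference
    calibration to estimate the camera-to-face distance."""
    eye_dist = math.hypot(eye_r[0] - eye_l[0], eye_r[1] - eye_l[1])
    mid = ((eye_l[0] + eye_r[0]) / 2, (eye_l[1] + eye_r[1]) / 2)
    mouth_dist = math.hypot(mouth[0] - mid[0], mouth[1] - mid[1])
    ratio = eye_dist / mouth_dist
    # A frontal face shows the full eye span; yaw foreshortens it by cos(angle).
    angle = math.degrees(math.acos(min(1.0, ratio / frontal_ratio)))
    # Undo the foreshortening, then apply a pinhole-style inverse scaling.
    true_eye_px = eye_dist / max(math.cos(math.radians(angle)), 1e-6)
    distance = ref_distance_cm * ref_eye_px / true_eye_px
    return angle, distance
```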
20110274317 | MATCHING WEIGHT INFORMATION EXTRACTION DEVICE - The matching weight information extraction device includes a matching weight information extraction unit. The matching weight information extraction unit analyzes a change in a time direction of at least either an input video or features of a plurality of dimensions extracted from the video, in association with the dimensions. Further, the matching weight information extraction unit calculates weight information to be used for matching for each of the dimensions as matching weight information, according to a degree of the change in the time direction. | 11-10-2011 |
20110280438 | IMAGE PROCESSING METHOD, INTEGRATED CIRCUIT FOR IMAGE PROCESSING AND IMAGE PROCESSING SYSTEM - An image processing method includes: identifying at least one moving object of a current image according to the current image and at least one image different from the current image; and utilizing a processing circuit to generate an adjusted current image by performing a first image adjustment operation upon the at least one moving object of the current image and performing a second image adjustment operation upon a surrounding region of the at least one moving object of the current image, where the first image adjustment operation is different from the second image adjustment operation. | 11-17-2011 |
20110280439 | TECHNIQUES FOR PERSON DETECTION - Techniques are disclosed that involve the detection of persons. For instance, embodiments may receive, from an image sensor, one or more images (e.g., thermal images, infrared images, visible light images, three dimensional images, etc.) of a detection space. Based at least on the one or more images, embodiments may detect the presence of person(s) in the detection space. Also, embodiments may determine one or more characteristics of such detected person(s). Exemplary characteristics include (but are not limited to) membership in one or more demographic categories and/or activities of such persons. Further, based at least on such person detection and characteristics determining, embodiments may control delivery of content to an output device. | 11-17-2011 |
20110280440 | Method and Apparatus Pertaining to Rendering an Image to Convey Levels of Confidence with Respect to Materials Identification - A control circuit accesses image information regarding an image of a target. This information comprises, at least in part, information regarding material content of the target. The control circuit also accesses confidence information regarding at least one degree of confidence as pertains to the target's material content. The control circuit uses this confidence information to facilitate rendering the image such that the rendered image integrally conveys information both about materials included in the target and a relative degree of confidence that the materials are correctly identified. | 11-17-2011 |
20110280441 | PROJECTOR AND PROJECTION CONTROL METHOD - A method controls a projection of a projector. The method predetermines hand gestures, and assigns an operation function of an input device to each of the predetermined hand gestures. When an electronic file is projected onto a screen, the projector receives an image of a speaker captured by an image-capturing device connected to the projector. The projector identifies whether a hand gesture of the speaker matches one of the predetermined hand gestures. If the hand gesture matches one of the hand gestures, the projector may execute a corresponding assigned operation function. | 11-17-2011 |
20110280442 | OBJECT MONITORING SYSTEM AND METHOD - An object monitoring system and method identify a foreground object from a current frame of a video stream of a monitored area. The object monitoring system determines whether an object has entered or exited the monitored area according to the foreground object, and generates a security alarm. The object monitoring system searches N reference images captured just before the image taken at the time the security alarm is generated, and detects information related to the object from the N reference images. By comparing the related information with vector descriptions of human body models stored in a feature database, a holder or a remover of the object can be recognized. | 11-17-2011 |
20110280443 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - An image processing apparatus includes an identification criterion creating unit that creates an identification criterion so as to enable identification of specific regions in a target image to be processed that is selected in chronological order from among images constituting a set of time-series images; includes a feature data calculating unit that calculates the feature data of each segmented region in the target image to be processed; and includes a specific region identifying unit that, based on the feature data of each segmented region, identifies the specific regions in the target image to be processed by using the identification criterion. Moreover, the identification criterion creating unit creates the identification criterion based on the pieces of feature data of the specific regions identified in the images that have been already processed. | 11-17-2011 |
20110280444 | CAMERA AND CORRESPONDING METHOD FOR SELECTING AN OBJECT TO BE RECORDED - A camera is described having an image capturing device, an evaluation and control unit and a storage unit, the evaluation and control unit analyzes an image sequence having at least two successively captured images recorded by the image capturing device to segment and stabilize at least one object to be recorded during the image recording. The evaluation and control unit ascertains a deliberate panning movement of the camera and compares it with ascertained movements of objects represented in the captured images, the evaluation and control unit determining at least one object as an object to be recorded, the ascertained movement of which is most consistent with the camera's ascertained panning movement, and the evaluation and control unit storing an image section of the image captured by the image capturing device in the storage unit which represents the at least one object to be recorded. Also described is a corresponding method. | 11-17-2011 |
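The selection step in the entry above, picking the object whose motion best matches the camera's panning movement, can be sketched with cosine similarity as the (assumed) consistency measure:

```python
import math

def select_recorded_object(pan_vector, object_motions):
    """Pick the object whose image-plane motion vector is most consistent
    with the camera's ascertained panning vector. `object_motions` is a
    list of (label, (dx, dy)) pairs; cosine similarity is an illustrative
    consistency measure, not the patent's exact criterion."""
    def cos_sim(a, b):
        na, nb = math.hypot(*a), math.hypot(*b)
        if na == 0 or nb == 0:
            return -1.0  # a stationary object cannot match a pan
        return (a[0] * b[0] + a[1] * b[1]) / (na * nb)
    return max(object_motions, key=lambda o: cos_sim(pan_vector, o[1]))
```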
20110280445 | METHOD AND SYSTEM FOR ANALYZING AN IMAGE GENERATED BY AT LEAST ONE CAMERA - A method for analyzing an image of a real object, particularly a printed media object, generated by at least one camera comprises the following steps: generating at least a first image by the camera capturing at least one real object, defining a first search domain comprising multiple data sets of the real object, each of the data sets being indicative of a respective portion of the real object, and analyzing at least one characteristic property of the first image of the camera with respect to the first search domain, in order to determine whether the at least one characteristic property corresponds to information of at least a particular one of the data sets of the first search domain. If it is determined that the at least one characteristic property corresponds to information of at least a particular one of the data sets, a second search domain comprising only the particular one of the data sets is defined and the second search domain is used for analyzing the first image and/or at least a second image generated by the camera. | 11-17-2011 |
20110280446 | Method and Apparatus for Selective Disqualification of Digital Images - An unsatisfactory scene is disqualified as an image acquisition control for a camera. An image is acquired. One or more eye regions are determined. The eye regions are analyzed to determine whether they are blinking, and if so, then the scene is disqualified as a candidate for a processed, permanent image while the eye is completing the blinking. | 11-17-2011 |
20110280447 | METHODS AND SYSTEMS FOR CONTENT PROCESSING - Cell phones and other portable devices are equipped with a variety of technologies by which existing functionality can be improved, and new functionality can be provided. Some relate to visual search capabilities, and determining appropriate actions responsive to different image inputs. Others relate to processing of image data. Still others concern metadata generation, processing, and representation. Yet others relate to coping with fixed focus limitations of cell phone cameras, e.g., in reading digital watermark data. Still others concern user interface improvements. A great number of other features and arrangements are also detailed. | 11-17-2011 |
20110286627 | METHOD AND APPARATUS FOR TRACKING AND RECOGNITION WITH ROTATION INVARIANT FEATURE DESCRIPTORS - Various methods for tracking and recognition with rotation invariant feature descriptors are provided. One example method includes generating an image pyramid of an image frame, detecting a plurality of interest points within the image pyramid, and extracting feature descriptors for each respective interest point. According to some example embodiments, the feature descriptors are rotation invariant. Further, the example method may also include tracking movement by matching the feature descriptors to feature descriptors of a previous frame and performing recognition of an object within the image frame based on the feature descriptors. Related example methods and example apparatuses are also provided. | 11-24-2011 |
20110286628 | SYSTEMS AND METHODS FOR OBJECT RECOGNITION USING A LARGE DATABASE - A method of organizing a set of recognition models of known objects stored in a database of an object recognition system includes determining a classification model for each known object and grouping the classification models into multiple classification model groups. Each classification model group identifies a portion of the database that contains the recognition models of the known objects having classification models that are members of the classification model group. The method also includes computing a representative classification model for each classification model group. Each representative classification model is derived from the classification models that are members of the classification model group. When a target object is to be recognized, the representative classification models are compared to a classification model of the target object to enable selection of a subset of the recognition models of the known objects for comparison to a recognition model of the target object. | 11-24-2011 |
20110286629 | Method for reconstruction of a two-dimensional sectional image corresponding to a sectional plane through a recorded object and x-ray device - A method for reconstruction of a two-dimensional sectional image corresponding to a sectional plane through a recorded object from two-dimensional projection images recorded along a recording trajectory at different projection angles with an X-ray device is proposed. The sectional plane having at least two intersection points with the recording trajectory is selected. After selection of the sectional plane, an intermediate function on the sectional plane is determined by backprojection of the projection images processed with a differentiation filter. The object densities forming the sectional image are determined from the intermediate function by a two-dimensional iterative deconvolution method. | 11-24-2011 |
20110286630 | Visualization of Medical Image Data With Localized Enhancement - Systems and methods for visualization of medical image data with localized enhancement. In one implementation, image data of a structure of interest is resampled within a predetermined plane to generate at least one background image of the structure of interest. In addition, at least one local image is reconstructed to visually enhance at least one local region of interest associated with the structure of interest. The local image and the background image are then combined to generate a composite image. | 11-24-2011 |
20110286631 | REAL TIME TRACKING/DETECTION OF MULTIPLE TARGETS - A mobile platform detects and tracks at least one target in real-time, by tracking at least one target, and creating an occlusion mask indicating an area in a current image to detect a new target. The mobile platform searches the area of the current image indicated by the occlusion mask to detect the new target. The use of a mask to instruct the detection system where to look for new targets increases the speed of the detection task. Additionally, to achieve real-time operation, the detection and tracking is performed in the limited time budget of the (inter) frame duration. Tracking targets is given higher priority than detecting new targets. After tracking is completed, detection is performed in the remaining time budget for the frame duration. Detection for one frame, thus, may be performed over multiple frames. | 11-24-2011 |
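The prioritized time-budget scheme in the entry above, tracking first, then detecting in whatever time remains, can be sketched as a per-frame loop. The callable-based structure and millisecond clock are illustrative assumptions:

```python
def process_frame(trackers, detector_tiles, frame_budget_ms, clock):
    """One frame of prioritized work: run every tracker first, then spend
    the remaining time budget detecting in unmasked tiles. Unfinished
    tiles stay queued, so detection for one frame may span several frames.
    `clock` returns elapsed milliseconds; `trackers` and `detector_tiles`
    are callables standing in for real tracking/detection work."""
    start = clock()
    for track in trackers:          # tracking has higher priority
        track()
    done = []
    while detector_tiles and clock() - start < frame_budget_ms:
        tile = detector_tiles.pop(0)
        done.append(tile())         # detect within this unmasked tile
    return done
```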
20110286632 | ASSEMBLY COMPRISING A RADAR AND AN IMAGING ELEMENT - An assembly comprising a radar and a camera for both deriving data relating to a golf ball and a golf club at launch, radar data relating to the ball and club being illustrated in an image provided by the camera. The data illustrated may be trajectories of the ball/club/club head, directions and/or angles, such as an angle of a face of the golf club striking the ball, the lie angle of the club head or the like. An assembly of this type may also be used for defining an angle or direction in the image and rotating e.g. an image of the golfer to have the determined direction or angle coincide with a predetermined angle/direction in order to be able to compare different images. | 11-24-2011 |
20110286633 | System And Method For Detecting, Tracking And Counting Human Objects of Interest - A method of identifying, tracking, and counting human objects of interest based upon at least one pair of stereo image frames taken by at least one image capturing device, comprising the steps of: obtaining said stereo image frames and converting each said stereo image frame to a rectified image frame using calibration data obtained for said at least one image capturing device; generating a disparity map based upon a pair of said rectified image frames; generating a depth map based upon said disparity map and said calibration data; identifying the presence or absence of said objects of interest from said depth map and comparing each of said objects of interest to existing tracks comprising previously identified objects of interest; for each said presence of an object of interest, adding said object of interest to one of said existing tracks if said object of interest matches said one existing track, or creating a new track comprising said object of interest if said object of interest does not match any of said existing tracks; updating each said existing track; and maintaining a count of said objects of interest in a given time period based upon said existing tracks created or modified during said given time period. | 11-24-2011 |
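Two steps of the pipeline in the entry above can be sketched compactly: the disparity-to-depth conversion and a greedy nearest-track association. The focal length, baseline, and matching radius are made-up example values, and greedy matching is a simplification of the patent's track-update logic:

```python
import math

def depth_from_disparity(d_px, focal_px=700.0, baseline_m=0.12):
    """Classic rectified-stereo relation: depth = f * B / d."""
    return float('inf') if d_px <= 0 else focal_px * baseline_m / d_px

def update_tracks(tracks, detections, max_dist=0.5):
    """Each detection extends the closest existing track within max_dist
    metres, otherwise it opens a new track. Counting objects of interest
    then reduces to counting tracks created or modified in a period."""
    for det in detections:
        best, best_d = None, max_dist
        for tr in tracks:
            d = math.dist(det, tr[-1])
            if d < best_d:
                best, best_d = tr, d
        if best is not None:
            best.append(det)
        else:
            tracks.append([det])
    return tracks
```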
20110293136 | System and Method for Adapting Generic Classifiers for Object Detection in Particular Scenes Using Incremental Training - A generic classifier is adapted to detect an object in a particular scene, wherein the particular scene was unknown when the classifier was trained with generic training data. A camera acquires a video of frames of the particular scene. A model of the particular scene is constructed using the frames in the video. The classifier is applied to the model to select negative examples, and new negative examples are added to the training data while another set of existing negative examples is removed from the training data based on an uncertainty measure. Selected positive examples are also added to the training data, and the classifier is retrained until a desired accuracy level is reached to obtain a scene-specific classifier. | 12-01-2011 |
20110293137 | ANALYSIS OF THREE-DIMENSIONAL SCENES - A method for processing data includes receiving a depth map of a scene containing a humanoid form. The depth map is processed so as to identify three-dimensional (3D) connected components in the scene, each connected component including a set of the pixels that are mutually adjacent and have mutually-adjacent depth values. Separate, first and second connected components are identified as both belonging to the humanoid form, and a representation of the humanoid form is generated including both of the first and second connected components. | 12-01-2011 |
20110293138 | DETECTION APPARATUS AND OBSTACLE DETECTION SYSTEM FOR VEHICLES USING THE SAME - A detection apparatus includes a housing, a circuit board, an image detection module, an ultrasonic detection module, and a connecting terminal. The image detection module includes a barrel, one or more lenses received in the barrel, and an image sensor configured to receive light through the lens and generate image signals. The image sensor is electrically connected to the circuit board. The ultrasonic detection module includes a piezoelectric member fixed to the housing to emit ultrasonic waves and receive reflected ultrasonic waves, and an ultrasonic control module operable to apply a voltage on the piezoelectric member, receive alternating voltages generated by the piezoelectric member, and generate voltage signals when receiving the voltages from the piezoelectric member. The ultrasonic control module is electrically connected to the piezoelectric member and the circuit board. The connecting terminal is electrically connected to the circuit board to output the image signals and the voltage signals. | 12-01-2011 |
20110293139 | METHOD OF AUTOMATICALLY TRACKING AND PHOTOGRAPHING CELESTIAL OBJECTS AND PHOTOGRAPHIC APPARATUS EMPLOYING THIS METHOD - A method of automatically tracking and photographing a celestial object, includes inputting latitude information, photographing azimuth angle information and photographing elevation angle information of a photographic apparatus; inputting star map data of a certain range including data on a location of a celestial object from the latitude information, the photographing azimuth angle information and the photographing elevation angle information; calculating a deviation amount between a location of the celestial object that is imaged in a preliminary image obtained by the photographic apparatus and the location of the celestial object which is defined in the input star map data; correcting at least one of the photographing azimuth angle information and the photographing elevation angle information using the deviation amount; and performing a celestial-object auto-tracking photographing operation based on the corrected at least one of the photographing azimuth angle information and the photographing elevation angle information. | 12-01-2011 |
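The correction step in the entry above, offsetting the mount by the deviation between where a celestial object appears in a preliminary image and where the star map predicts it, can be sketched as follows. The plate scale and the sign convention are made-up assumptions:

```python
def correct_pointing(azimuth_deg, elevation_deg,
                     catalog_xy, image_xy, deg_per_px=0.01):
    """Correct the photographing azimuth/elevation by the pixel offset
    between the predicted (star map) and observed (preliminary image)
    positions of a reference celestial object, scaled by a hypothetical
    plate scale in degrees per pixel."""
    dx = image_xy[0] - catalog_xy[0]
    dy = image_xy[1] - catalog_xy[1]
    # Sign convention assumed: +x image offset means the mount points too
    # far in +azimuth, +y offset means too low in elevation.
    return azimuth_deg - dx * deg_per_px, elevation_deg - dy * deg_per_px
```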
20110293140 | Dataset Creation For Tracking Targets With Dynamically Changing Portions - A mobile platform visually detects and/or tracks a target that includes a dynamically changing portion, or otherwise undesirable portion, using a feature dataset for the target that excludes the undesirable portion. The feature dataset is created by providing an image of the target and identifying the undesirable portion of the target. The identification of the undesirable portion may be automatic or by user selection. An image mask is generated for the undesirable portion. The image mask is used to exclude the undesirable portion in the creation of the feature dataset for the target. For example, the image mask may be overlaid on the image and features are extracted only from unmasked areas of the image of the target. Alternatively, features may be extracted from all areas of the image and the image mask used to remove features extracted from the undesirable portion. | 12-01-2011 |
20110293141 | DETECTION OF VEHICLES IN AN IMAGE - The invention concerns a traffic surveillance system that is used to detect and track vehicles in video taken of a road from a low mounted camera. The inventors have discovered that even in heavily occluded scenes, due to traffic density or the angle of low mounted cameras capturing the images, at least one horizontal edge of the windshield is least likely to be occluded for each individual vehicle in the image. Thus, it is an advantage of the invention that the direct detection of a windshield on its own can be used to detect a vehicle in a single image. Multiple models are projected ( | 12-01-2011 |
20110293142 | METHOD FOR RECOGNIZING OBJECTS IN A SET OF IMAGES RECORDED BY ONE OR MORE CAMERAS - Method for improving the visibility of objects and recognizing objects in a set of images recorded by one or more cameras, the images of said set being made from mutually different geometric positions, the method comprising the steps of recording a set or subset of images by means of one camera which is moved rather freely and which makes said images during its movement, thus providing an array of subsequent images; estimating the camera movement between subsequent image recordings, also called ego-motion hereinafter, based on features of those recorded images; registering the camera images using a synthetic aperture method; and recognizing said objects. | 12-01-2011 |
20110293143 | FUNCTIONAL IMAGING - A method includes generating a kinetic parameter value for a VOI in a functional image of a subject based on motion corrected projection data using an iterative algorithm, including determining a motion correction for projection data corresponding to the VOI based on the VOI, motion correcting the projection data corresponding to the VOI to generate the motion corrected projection data, and estimating the at least one kinetic parameter value based on the motion corrected projection data or image data generated with the motion corrected projection data. In another embodiment, a method includes registering functional image data indicative of tracer uptake in a scanned patient with image data from a different imaging modality, identifying a VOI in the image based on the registered images, generating at least one kinetic parameter for the VOI, and generating a feature vector including the at least one generated kinetic parameter and at least one biomarker. | 12-01-2011 |
20110293144 | Method and System for Rendering an Entertainment Animation - Systems and methods for rendering an entertainment animation. The system can comprise a user input unit for receiving a non-binary user input signal; an auxiliary signal source for generating an auxiliary signal; a classification unit for classifying the non-binary user input signal with reference to the auxiliary signal; and a rendering unit for rendering the entertainment animation based on classification results from the classification unit. | 12-01-2011 |
20110293145 | DRIVING SUPPORT DEVICE, DRIVING SUPPORT METHOD, AND PROGRAM - Provided are a driving support device, a driving support method, and a program, in which the driver can more intuitively and accurately determine the distance to another vehicle in the side rear. A driving support device ( | 12-01-2011 |
20110299727 | Specific Absorption Rate Measurement and Energy-Delivery Device Characterization Using Thermal Phantom and Image Analysis - A system for use in characterizing an energy applicator includes a test fixture assembly. The test fixture assembly includes an interior area defined therein. The system also includes a thermally-sensitive medium disposed in the interior area of the test fixture assembly. The thermally-sensitive medium includes a cut-out portion defining a void in the thermally-sensitive medium. The cut-out portion is configured to receive at least a portion of the energy applicator therein. | 12-08-2011 |
20110299728 | AUTOMATIC DEPTH CAMERA AIMING - Automatic depth camera aiming is provided by a method which includes receiving from the depth camera one or more observed depth images of a scene. The method further includes, if a point of interest of a target is found within the scene, determining if the point of interest is within a far range relative to the depth camera. The method further includes, if the point of interest of the target is within the far range, operating the depth camera with a far logic, or if the point of interest of the target is not within the far range, operating the depth camera with a near logic. | 12-08-2011 |
20110299729 | APPARATUS AND METHOD FOR MEASURING GOLF CLUB SHAFT FLEX AND GOLF SIMULATION SYSTEM INCORPORATING THE SAME - A method for measuring shaft flex comprises capturing at least one image of a shaft during movement of the shaft through a swing plane and examining the at least one image to determine the flex of the shaft. | 12-08-2011 |
20110299730 | VEHICLE LOCALIZATION IN OPEN-PIT MINING USING GPS AND MONOCULAR CAMERA - Described herein is a method and system for vehicle localization in an open pit mining environment having intermittent or incomplete GPS coverage. The system comprises GPS receivers associated with the vehicles and providing GPS measurements when available, as well as one or more cameras. | 12-08-2011 |
20110299731 | INFORMATION PROCESSING DEVICE AND METHOD, AND PROGRAM - An information processing device includes a first calculation unit which calculates a score of each sample image, including a positive image in which an object as an identification object is present and a negative image in which the object as the identification object is not present, for each weak identifier of an identifier including a plurality of weak identifiers; a second calculation unit which calculates the number of scores, when the negative image is processed, that are less than a minimum score among scores when the positive image is processed; and a realignment unit which realigns the weak identifiers in order from the weak identifier for which the number calculated by the second calculation unit is a maximum. | 12-08-2011 |
20110299732 | SYSTEM OF DRONES PROVIDED WITH RECOGNITION BEACONS - The present invention relates to a system of drones provided with recognition beacons. | 12-08-2011 |
20110299733 | SYSTEM AND METHOD FOR PROCESSING RADAR IMAGERY - The present invention relates to a system and method for processing imagery, such as may be derived from a coherent imaging system, e.g. a synthetic aperture radar (SAR). The system processes sequences of SAR images of a region taken in at least two different passes and generates Coherent Change Detection (CCD) base images from corresponding images of each pass. A reference image is formed from one or more of the CCD base images, and an incoherent change detection image is formed by comparison between a given CCD base image and the reference image. The technique is able to detect targets from tracks left in soft ground, or from shadow areas caused by vehicles, and so does not rely on a reflection directly from the target itself. The technique may be implemented on data recorded in real time, or may be applied in post-processing on a suitable computer system. | 12-08-2011 |
20110299734 | METHOD AND SYSTEM FOR DETECTING TARGET OBJECTS - A method and a system detect target objects, which are detected by a sensor device, for example by radar, laser or passive reception of electromagnetic waves, through an imaging electro-optical sensor with subsequent digital image evaluation. For a rapid allocation of the image sensor with changeable direction that takes into account the different importance of the individual target objects, it is proposed to predefine, in an assessment device, different assessment criteria for a target parameter of the respective target objects and to derive therefrom a prioritization value for each individual target. Based on the prioritization values, a ranking of the target objects for detection by the image sensor is compiled, and the target objects are successively detected by the image sensor in the order given by the ranking and evaluated, in particular classified, in an image evaluation device. | 12-08-2011 |
20110299735 | METHOD OF USING STRUCTURAL MODELS FOR OPTICAL RECOGNITION - A method and system for recognizing all varieties of objects in an image by using structure models are disclosed. Structural elements are sought when comparing a structural model with an image but only within a framework of one or more generated hypotheses. The method for identifying objects includes preliminarily creating a structural model of objects by specifying a plurality of basic geometric structural elements corresponding to one or more portions of the object, recording a spatial characteristic of each identified basic geometric structural element, and recording a relational characteristic for each specified basic geometric structural element. Objects in the image are isolated and a list of hypotheses for each object is provided. Hypotheses are tested by determining if the corresponding group of basic geometric structural elements corresponds to another supposed object described in a classifier. Results of testing of hypotheses may be saved and the results may be used to identify objects. | 12-08-2011 |
20110305366 | Adaptive Action Detection - Described is providing an action model (classifier) for automatically detecting actions in video clips, in which unlabeled data of a target dataset is used to adaptively train the action model based upon similar actions in a labeled source dataset. The target dataset comprising unlabeled video data is processed into a background model. The action model is generated from the background model using a source dataset comprising labeled data for an action of interest. The action model is iteratively refined, generally by fixing a current instance of the action model and using the current instance of the action model to search for a set of detected regions (subvolumes), and then fixing the set of subvolumes and updating the current instance of the action model based upon the set of subvolumes, and so on, for a plurality of iterations. | 12-15-2011 |
20110305367 | STORAGE MEDIUM HAVING IMAGE RECOGNITION PROGRAM STORED THEREIN, IMAGE RECOGNITION APPARATUS, IMAGE RECOGNITION SYSTEM, AND IMAGE RECOGNITION METHOD - A game apparatus obtains a captured image captured by a camera. First, the game apparatus detects an object area of the captured image that includes a predetermined image object based on pixel values obtained at a first pitch across the captured image. Then, the game apparatus detects a predetermined image object from an image of the object area based on pixel values obtained at a second pitch smaller than the first pitch across the object area of the captured image. | 12-15-2011 |
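The two-pitch traversal described in the entry above can be sketched roughly as follows. This is an illustrative sketch only, not the patented method: the brightness test standing in for object detection, the pitch values, and the bounding-box padding are all assumptions.

```python
# Coarse-to-fine detection sketch: sample pixels at a large pitch to
# find a candidate object area, then re-scan only that area at a
# smaller pitch. "Object pixel" here just means value >= 200.

def find_object_area(image, coarse_pitch=4):
    """Return a bounding box (r0, c0, r1, c1) of coarse-pitch hits, or None."""
    hits = [(r, c)
            for r in range(0, len(image), coarse_pitch)
            for c in range(0, len(image[0]), coarse_pitch)
            if image[r][c] >= 200]
    if not hits:
        return None
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    # pad by the coarse pitch so the fine scan cannot miss border pixels
    return (max(min(rows) - coarse_pitch, 0),
            max(min(cols) - coarse_pitch, 0),
            min(max(rows) + coarse_pitch, len(image) - 1),
            min(max(cols) + coarse_pitch, len(image[0]) - 1))

def fine_scan(image, area, fine_pitch=1):
    """List every object pixel inside the candidate area at the fine pitch."""
    r0, c0, r1, c1 = area
    return [(r, c)
            for r in range(r0, r1 + 1, fine_pitch)
            for c in range(c0, c1 + 1, fine_pitch)
            if image[r][c] >= 200]

# 16x16 test image with a bright 3x3 blob around (8, 8)
img = [[0] * 16 for _ in range(16)]
for r in range(7, 10):
    for c in range(7, 10):
        img[r][c] = 255
area = find_object_area(img)
pixels = fine_scan(img, area)
```

The point of the two pitches is that the expensive per-pixel test runs at full resolution only inside the small candidate area.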
20110305368 | STORAGE MEDIUM HAVING IMAGE RECOGNITION PROGRAM STORED THEREIN, IMAGE RECOGNITION APPARATUS, IMAGE RECOGNITION SYSTEM, AND IMAGE RECOGNITION METHOD - A game apparatus detects a predetermined image object including a first graphic pattern with a plurality of inner graphic patterns drawn therein from a captured image captured by an image-capturing section. The game apparatus first obtains the captured image captured by the image-capturing section, and detects an area of the first graphic pattern from the captured image. Then, the game apparatus detects the plurality of inner graphic patterns from within the detected area, and calculates center positions of the inner graphic patterns so as to detect the position of the predetermined image object. | 12-15-2011 |
20110305369 | PORTABLE WIRELESS MOBILE DEVICE MOTION CAPTURE AND ANALYSIS SYSTEM AND METHOD - Portable wireless mobile device motion capture and analysis system and method configured to display motion capture/analysis data on a mobile device. System obtains data from motion capture elements and analyzes the data. Enables unique displays associated with the user, such as 3D overlays onto images of the user to visually depict the captured motion data. Ratings associated with the captured motion can also be displayed. Predicted ball flight path data can be calculated and displayed. Data shown on a time line can also be displayed to show the relative peaks of velocity for various parts of the user's body. Based on the display of data, the user can determine the equipment that fits the best and immediately purchase the equipment, via the mobile device. Custom equipment may be ordered through an interface on the mobile device from a vendor that can assemble-to-order custom-built equipment and ship the equipment. Includes active and passive golf shot count capabilities. | 12-15-2011 |
20110311099 | METHOD OF EVALUATING THE HORIZONTAL SPEED OF A DRONE, IN PARTICULAR A DRONE CAPABLE OF PERFORMING HOVERING FLIGHT UNDER AUTOPILOT - The method operates by estimating the differential movement of the scene picked up by a vertically-oriented camera. Estimation includes periodically and continuously updating a multiresolution representation of the image-pyramid type, modeling a given picked-up image of the scene at different, successively-decreasing resolutions. For each new picked-up image, an iterative algorithm of the optical flow type is applied to said representation. The method also provides for responding to the data produced by the optical-flow algorithm to obtain at least one texturing parameter representative of the level of microcontrasts in the picked-up scene and an approximation of the speed, to which parameters a battery of predetermined criteria is subsequently applied. If the battery of criteria is satisfied, then the system switches from the optical-flow algorithm to an algorithm of the corner detector type. | 12-22-2011 |
20110311100 | Method, Apparatus and Computer Program Product for Providing Object Tracking Using Template Switching and Feature Adaptation - A method, apparatus and computer program product are provided that may enable devices to provide improved object tracking, such as in connection with computer vision, multimedia content analysis and retrieval, augmented reality, human computer interaction and region-based image processing. In this regard, a method includes adjusting parameters of a portion of an input frame having a target object to match a template size and then performing feature-based image registration between the portion of the input frame and an active template and at least one selected inactive template. The method may also enable switching the selected inactive template to be an active template for a subsequent frame based at least on a matching score between the portion of the input frame and the selected inactive template and determine a position of a target object in the input frame based on one of the active template or the selected inactive template. | 12-22-2011 |
20110311101 | METHOD AND SYSTEM TO SEGMENT DEPTH IMAGES AND TO DETECT SHAPES IN THREE-DIMENSIONALLY ACQUIRED DATA - A method and system analyzes data acquired by image systems to more rapidly identify objects of interest in the data. In one embodiment, z-depth data are segmented such that neighboring image pixels having similar z-depths are given a common label. Blobs, or groups of pixels with a same label, may be defined to correspond to different objects. Blobs preferably are modeled as primitives to more rapidly identify objects in the acquired image. In some embodiments, a modified connected component analysis is carried out where image pixels are pre-grouped into regions of different depth values preferably using a depth value histogram. The histogram is divided into regions and image cluster centers are determined. A depth group value image containing blobs is obtained, with each pixel being assigned to one of the depth groups. | 12-22-2011 |
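The depth-grouping and blob-labeling steps described in the entry above can be sketched as follows. This is an illustrative sketch, not the patented method: fixed-width depth bins stand in for the histogram-derived cluster centers, and 4-connectivity is an assumption.

```python
# Segment a z-depth image: pre-group pixels into coarse depth bins,
# then run a connected-component pass so that neighboring pixels with
# the same depth group share one blob label.

from collections import deque

def depth_groups(depth, bin_width=50):
    """Quantize each depth value into a coarse group index."""
    return [[d // bin_width for d in row] for row in depth]

def label_blobs(groups):
    """4-connected component labeling over the depth-group image."""
    h, w = len(groups), len(groups[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for r in range(h):
        for c in range(w):
            if labels[r][c]:
                continue
            next_label += 1
            queue = deque([(r, c)])
            labels[r][c] = next_label
            while queue:                      # breadth-first flood fill
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny][nx]
                            and groups[ny][nx] == groups[y][x]):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
    return labels, next_label

# toy depth map: a near object (depth ~100) on a far background (~400)
depth = [[400] * 6 for _ in range(6)]
for r in range(2, 5):
    for c in range(2, 5):
        depth[r][c] = 100
labels, n_blobs = label_blobs(depth_groups(depth))
```

Each resulting blob can then be matched against shape primitives, as the entry describes.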
20110317871 | SKELETAL JOINT RECOGNITION AND TRACKING SYSTEM - A system and method are disclosed for recognizing and tracking a user's skeletal joints with a NUI system and further, for recognizing and tracking only some skeletal joints, such as for example a user's upper body. The system may include a limb identification engine which may use various methods to evaluate, identify and track positions of body parts of one or more users in a scene. In examples, further processing efficiency may be achieved by segmenting the field of view into smaller zones, and focusing on one zone at a time. Moreover, each zone may have its own set of predefined gestures which are recognized. | 12-29-2011 |
20110317872 | Low Threshold Face Recognition - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, are disclosed for reducing the impact of lighting conditions and biometric distortions, while providing a low-computation solution for reasonably effective (low threshold) face recognition. In one aspect, the methods include processing a captured image of a face of a user seeking to access a resource by conforming a subset of the captured face image to a reference model. The reference model corresponds to a high information portion of human faces. The methods further include comparing the processed captured image to at least one target profile corresponding to a user associated with the resource, and selectively recognizing the user seeking access to the resource based on a result of said comparing. | 12-29-2011 |
20110317873 | Image Capture and Identification System and Process - A digital image of an object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 12-29-2011 |
20110317874 | Information Processing Device And Information Processing Method - An image acquisition unit of an information processing device acquires data for a moving image including an image of a user and captured by an image capturing device. A tracking processing unit uses a particle filter to perform visual tracking in the moving image so as to estimate a head contour of the user. A gesture detection unit identifies a facial region in an area inside the head contour, acquires a parameter indicating the orientation of the face, and keeps a history of the parameters. When time-dependent change in the orientation of the face meets a predetermined criterion, it is determined that a gesture is made. The output data generation unit generates output data dependent on a result of detecting a gesture. The output control unit controls the generated output data so as to display the data on the display, for example. | 12-29-2011 |
20110317875 | Identifying and Redressing Shadows in Connection with Digital Watermarking and Fingerprinting - The present disclosure relates generally to cell phones and cameras, and to shadow detection in images captured by such cell phones and cameras. One claim recites a method comprising: identifying a shadow cast by a camera on a subject being imaged; and using a programmed electronic processor, redressing the shadow in connection with: i) reading a digital watermark from imagery captured of the subject, or ii) calculating a fingerprint from the imagery captured of the subject. Another claim recites a method comprising: identifying a shadow cast by a cell phone on a subject being imaged by a camera included in the cell phone; and using a programmed electronic processor, determining a proximity of the camera to the subject based on an analysis of the shadow. Of course, other claims and combinations are provided too. | 12-29-2011 |
20110317876 | Optical Control System for Heliostats - A method of aligning a reflector with a target includes receiving, at a first reflector, light from a light source. The first reflector is configured to reflect light from the light source onto a target, illuminating the target in a first target region. A first image of the target is captured, using an imaging device. The first reflector is configured to reflect light from the light source onto the target, illuminating the target in a second target region. A second image of the target is captured, using the imaging device. The differences between the first image and the second image are compared to determine the alignment of the first reflector with respect to at least one of the light source and the target. | 12-29-2011 |
20110317877 | METHOD OF MOTION DETECTION AND AUTONOMOUS MOTION TRACKING USING DYNAMIC SENSITIVITY MASKS IN A PAN-TILT CAMERA - A method of identifying motion within a field of view includes capturing at least two sequential images within the field of view. Each of the images includes a respective array of pixel values. An array of difference values between corresponding ones of the pixel values in the sequential images is calculated. A sensitivity region map corresponding to the field of view is provided. The sensitivity region map includes a plurality of regions having different threshold values. A presence of motion is determined by comparing the difference values to corresponding ones of the threshold values. | 12-29-2011 |
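The differencing-against-a-sensitivity-map flow in the entry above can be sketched as follows. This is an illustrative sketch, not the patented method: the two-zone map layout and the threshold values are assumptions.

```python
# Motion detection with a sensitivity region map: per-pixel absolute
# differences between two sequential frames are compared against a
# map holding a different threshold per region of the field of view.

def motion_present(frame_a, frame_b, threshold_map):
    """True if any pixel's difference reaches its region's threshold."""
    for row_a, row_b, row_t in zip(frame_a, frame_b, threshold_map):
        for a, b, t in zip(row_a, row_b, row_t):
            if abs(a - b) >= t:
                return True
    return False

# 4x4 frames; left half is a low-sensitivity zone (threshold 100),
# right half a high-sensitivity zone (threshold 10)
thresholds = [[100, 100, 10, 10] for _ in range(4)]
frame1 = [[0] * 4 for _ in range(4)]
frame2 = [[0] * 4 for _ in range(4)]

frame2[1][0] = 50   # a change of 50 in the low-sensitivity zone
ignored = motion_present(frame1, frame2, thresholds)

frame2[1][3] = 50   # the same change in the high-sensitivity zone
detected = motion_present(frame1, frame2, thresholds)
```

The same magnitude of change is ignored in one zone and reported in the other, which is the point of the per-region thresholds.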
20120002840 | METHOD OF AND ARRANGEMENT FOR LINKING IMAGE COORDINATES TO COORDINATES OF REFERENCE MODEL - A method of linking image coordinates to coordinates in a reference model is disclosed. The method includes acquiring a 2½D or 3D input image representing a body of a living being and including at least two image boundaries of at least two parts within said body, acquiring a 3D reference model representative of a reference living being describing in a reference model coordinate system at least two reference boundaries of the at least two parts within said body, and overlaying the reference model and the input image. The method further includes adjusting at least a portion of one of the reference boundaries and/or at least one of the image boundaries such that this reference boundary and this image boundary substantially coincide, while the adjusted reference boundary does not intersect with the remaining reference boundaries and/or the adjusted image boundary does not intersect with the remaining image boundaries. | 01-05-2012 |
20120002841 | INFORMATION PROCESSING APPARATUS, THREE-DIMENSIONAL POSITION CALCULATION METHOD, AND PROGRAM - An information processing apparatus includes a region segmentation unit configured to segment each of a plurality of images shot by an imaging apparatus for shooting an object from a plurality of viewpoints, into a plurality of regions based on colors of the object, an attribute determination unit configured to determine, based on regions in proximity to intersections between scanning lines set on the each image and boundary lines of the regions segmented by the region segmentation unit in the each image, attributes of the intersections, a correspondence processing unit configured to obtain corresponding points between the images based on the determined intersections' attributes, and a three-dimensional position calculation unit configured to calculate a three-dimensional position of the object based on the obtained corresponding points. | 01-05-2012 |
20120002842 | DEVICE AND METHOD FOR DETECTING MOVEMENT OF OBJECT - A device for detecting a movement of an object includes: an image shooting unit to generate a first image and a second image by continuous shooting; a detection unit to detect a movement region based on a difference between the first and second images; an edge detection unit to detect an edge region in the first image; a deletion unit to delete the edge region from the movement region; and a decision unit to determine a degree of object movement in accordance with the movement region from which the edge region has been deleted by the deletion unit. | 01-05-2012 |
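The movement-minus-edges idea in the entry above can be sketched as follows. This is an illustrative sketch, not the patented method: the neighbor-difference edge test, the thresholds, and the toy frames are assumptions.

```python
# Frame differencing with edge deletion: small camera jitter lights up
# edge pixels in the difference mask, so edge pixels found in the first
# frame are removed before counting genuine object movement.

def diff_mask(img_a, img_b, thresh=30):
    return [[abs(a - b) >= thresh for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

def edge_mask(img, thresh=30):
    """A pixel is an edge if it differs sharply from a right/down neighbor."""
    h, w = len(img), len(img[0])
    return [[(c + 1 < w and abs(img[r][c] - img[r][c + 1]) >= thresh) or
             (r + 1 < h and abs(img[r][c] - img[r + 1][c]) >= thresh)
             for c in range(w)] for r in range(h)]

def movement_pixel_count(img_a, img_b):
    move = diff_mask(img_a, img_b)
    edge = edge_mask(img_a)
    return sum(m and not e
               for mr, er in zip(move, edge) for m, e in zip(mr, er))

# first frame: flat dark left half, flat bright right half
img1 = [[0] * 4 + [200] * 4 for _ in range(8)]
# second frame: the edge jitters one column left, and a small object
# appears in the flat dark area at (6, 1)
img2 = [[0] * 3 + [200] * 5 for _ in range(8)]
img2[6][1] = 200
moving = movement_pixel_count(img1, img2)
```

The jittered edge produces eight difference pixels that are all deleted, leaving only the one genuinely new object pixel.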
20120002843 | DROWSINESS ASSESSMENT DEVICE AND PROGRAM - Local maxima values and local minima values are derived from eyelid openness time series data in a segment in which a continuous closed eye period of extracted blinks is a specific time duration (for example 1 second) or longer. When plural local minima values are present in the segment of a continuous closed eye period of 1 second or longer, blinks are extracted passing over and back through each value of a variable closed eye threshold that is slid, in set steps, from the derived local maxima value towards the local minima value, and an inter-blink interval is derived. Determination is made that a blink burst has occurred when the derived inter-blink interval is 1 second or less, and, say, greater than 0.2 seconds, thereby detecting a blink burst. Blink bursts can be detected with good precision, and the state of drowsiness can be assessed with good precision. | 01-05-2012 |
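The final interval test in the entry above can be sketched as follows. This is an illustrative sketch, not the patented method: it starts from already-extracted blink timestamps (which the entry obtains via the sliding closed-eye threshold), and the timestamps are made up.

```python
# Blink-burst detection: flag an interval between consecutive blinks
# as part of a burst when it is at most 1 s and greater than 0.2 s.

BURST_MIN, BURST_MAX = 0.2, 1.0  # seconds, per the stated window

def blink_bursts(blink_times):
    """Return the consecutive-blink intervals that qualify as bursts."""
    intervals = [b - a for a, b in zip(blink_times, blink_times[1:])]
    return [dt for dt in intervals if BURST_MIN < dt <= BURST_MAX]

# blink timestamps in seconds: two burst-range gaps, one long gap,
# and one gap too short to count
times = [0.0, 0.6, 1.4, 4.0, 4.1]
bursts = blink_bursts(times)
```

Only the 0.6 s and 0.8 s gaps fall inside the window; the 2.6 s gap is ordinary blinking and the 0.1 s gap is below the lower bound.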
20120008825 | SYSTEM AND METHOD FOR DYNAMICALLY TRACKING AND INDICATING A PATH OF AN OBJECT - A system for dynamically tracking and indicating a path of an object comprises an object position system for generating three-dimensional object position data comprising an object trajectory, a software element for receiving the three-dimensional object position data, the software element also for determining whether the three-dimensional object position data indicates that an object has exceeded a boundary, and a graphics system for displaying the object trajectory. | 01-12-2012 |
20120008826 | METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR DETECTING OBJECTS IN DIGITAL IMAGES - Method, device, and computer program product for detecting an object in a digital image are provided. The method includes providing a detection window and determining at least one area of the object in the digital image by traversing the detection window by a first step size onto a set of pixels. Further, at each pixel, presence of at least one portion of the object in the detection window is detected. Upon detection of the presence of the object, the detection window is shifted by a second step size to neighbouring pixels. Further, the detection window is selected as an area of the object if the at least one portion of the object is present in at least a threshold number of detection windows at the neighbouring pixels. Thereafter, an object area representing the object in the digital image is selected based on the at least one area. | 01-12-2012 |
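The two-step-size traversal in the entry above can be sketched as follows. This is an illustrative sketch, not the patented method: the per-window brightness test standing in for a real detector, the step sizes, and the hit threshold are assumptions.

```python
# Detection-window traversal with two step sizes: the window moves by a
# large first step; on a hit it also probes neighbouring positions at a
# small second step, and the location is accepted only if the object is
# seen in at least min_hits of those windows.

def window_hit(image, r, c, size=3, level=200):
    """Toy per-window detector: at least half the window is bright."""
    vals = [image[r + dr][c + dc] for dr in range(size) for dc in range(size)]
    return sum(v >= level for v in vals) >= (size * size) // 2

def detect(image, size=3, step1=3, step2=1, min_hits=3):
    h, w = len(image), len(image[0])
    areas = []
    for r in range(0, h - size + 1, step1):          # coarse first step
        for c in range(0, w - size + 1, step1):
            if not window_hit(image, r, c, size):
                continue
            # on a coarse hit, probe neighbouring windows at the fine step
            hits = sum(window_hit(image, nr, nc, size)
                       for nr in range(max(r - step2, 0),
                                       min(r + step2, h - size) + 1)
                       for nc in range(max(c - step2, 0),
                                       min(c + step2, w - size) + 1))
            if hits >= min_hits:                     # threshold number of windows
                areas.append((r, c))
    return areas

# 9x9 image with a bright 5x5 object centered at (4, 4)
img = [[0] * 9 for _ in range(9)]
for r in range(2, 7):
    for c in range(2, 7):
        img[r][c] = 255
areas = detect(img)
```

Requiring several neighbouring windows to agree suppresses isolated spurious hits while keeping the coarse step cheap.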
20120008827 | METHOD AND DEVICE FOR IDENTIFYING OBJECTS AND FOR TRACKING OBJECTS IN A PRODUCTION PROCESS - An object is identified and tracked in a production process. | 01-12-2012 |
20120008828 | TARGET-LINKED RADIATION IMAGING SYSTEM - An imaging detection system includes at least one location detection device configured to determine coordinates of a target, at least one detector configured to detect events from a source associated with the target, and a processor coupled in communication with the at least one location detection device and the at least one detector. The processor is configured to receive the coordinates from the at least one location detection device and the events from the at least one detector, translate the events using the coordinates acquired from the at least one location detection device to compensate for a relative motion between the source and the at least one detector, and output a processed data set having the events translated based on the coordinates. | 01-12-2012 |
20120008829 | METHOD, DEVICE, AND COMPUTER-READABLE MEDIUM FOR DETECTING OBJECT IN DISPLAY AREA - Disclosed are a method and a device for detecting an object in a display area. The method comprises a step of generating a first image prepared to be displayed; a step of displaying the generated first image on a screen; a step of capturing a second image of the screen including the display area; and a step of comparing the generated first image with the captured second image so as to detect the object in the display area. | 01-12-2012 |
20120008830 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREFOR, AND COMPUTER-READABLE STORAGE MEDIUM - An information processing apparatus for estimating a position and orientation of a target object in a three-dimensional space, inputs a plurality of captured images obtained by imaging the target object from a plurality of viewpoints, clips, for each of the input captured images, a partial image corresponding to a region occupied by a predetermined partial space in the three-dimensional space, from the captured image, extracts, from a plurality of partial images clipped from the plurality of captured images, feature information indicating a feature of the plurality of partial images, stores dictionary information indicating a position and orientation of an object in association with feature information of the object corresponding to the position and orientation, and estimates the position and orientation of the target object by comparing the feature information of the extracted target object and the feature information indicated in the dictionary information. | 01-12-2012 |
20120008831 | OBJECT POSITION CORRECTION APPARATUS, OBJECT POSITION CORRECTION METHOD, AND OBJECT POSITION CORRECTION PROGRAM - An object position correction apparatus is provided with an observing device that detects an object to be observed to obtain an observed value, an observation history data base that records an observation history of the object, a position estimation history data base that records the estimated history of the position of the object, a prediction distribution forming unit that forms a prediction distribution that represents an existence probability at the position of the object, an object position estimation unit that estimates the ID and the position of the object, a center-of-gravity position calculation unit that calculates the center-of-gravity position of the observed values, an object position correction unit that carries out a correction on the estimated position of the object, and a display unit that displays the corrected position of the object. | 01-12-2012 |
20120008832 | REGION-OF-INTEREST VIDEO QUALITY ENHANCEMENT FOR OBJECT RECOGNITION - A video-based object recognition system and method provides selective, local enhancement of image data for improved object-based recognition. A frame of video data is analyzed to detect objects to receive further analysis, these local portions of the frame being referred to as a region of interest (ROI). A video quality metric (VQM) value is calculated locally for each ROI to assess the quality of the ROI. Based on the VQM value calculated with respect to the ROI, a particular video quality enhancement (VQE) function is selected and applied to the ROI to cure deficiencies in the quality of the ROI. Based on the enhanced ROI, objects within the defined region can be accurately identified. | 01-12-2012 |
20120014558 | POSITION-DEPENDENT GAMING, 3-D CONTROLLER, AND HANDHELD AS A REMOTE - Methods and systems for using a position of a mobile device with an integrated display as an input to a video game or other presentation are presented. Embodiments include rendering an avatar on a mobile device such that it appears to overlay a competing user in the real world. Using the mobile device's position, view direction, and the other user's mobile device position, an avatar (or vehicle, etc.) is depicted at an apparently inertially stabilized location of the other user's mobile device or body. Some embodiments may estimate the other user's head and body positions and angles and reflect them in the avatar's gestures. | 01-19-2012 |
20120014559 | Method and System for Semantics Driven Image Registration - A method and system for automatic semantics driven registration of medical images is disclosed. Anatomic landmarks and organs are detected in a first image and a second image. Pathologies are also detected in the first image and the second image. Semantic information is automatically extracted from text-based documents associated with the first and second images, and the second image is registered to the first image based on the detected anatomic landmarks, organs, and pathologies, and the extracted semantic information. | 01-19-2012 |
20120014560 | METHOD FOR AUTOMATIC STORYTELLING FOR PHOTO ALBUMS USING SOCIAL NETWORK CONTEXT - A method for automatically selecting and organizing a subset of photos from a set of photos provided by a user, who has an account on at least one social network providing some context, for creating a summarized photo album with a storytelling structure. The method comprises: arranging the set of photos into a three-level hierarchy of acts, scenes and shots; checking whether photos are photos with people or not; obtaining an aesthetic measure of the photos; creating and ranking face clusters; selecting the most aesthetic photo of each face cluster; and selecting photos with people, until a predefined number of photos of the summarized album is complete, picking the ones which optimize a predefined selection function. | 01-19-2012 |
20120014561 | IMAGE TAKING APPARATUS AND IMAGE TAKING METHOD - An image taking apparatus according to an aspect of the invention comprises: an image pickup device which picks up an object image and outputs the picked-up image data; a face detection device which detects human faces in the image data; a face-distance calculating device which calculates the distance between the faces among a plurality of faces detected by the face detection device; and a controlling device which controls the image pickup device to start shooting, after a shooting instruction is issued, in the case where the distance between the faces calculated by the face-distance calculating device is not greater than a first predetermined threshold value. The image taking apparatus allows shooting the moment the distance between the faces becomes no greater than the predetermined threshold value. | 01-19-2012 |
20120014562 | EFFICIENT METHOD FOR TRACKING PEOPLE - In accordance with one embodiment, a method to track persons includes generating a first and second set of facial coefficient vectors by: (i) providing a first and second image containing a plurality of persons; (ii) locating faces of persons in each image; and (iii) generating a facial coefficient vector for each face by extracting from the images coefficients sufficient to locally identify each face, then tracking the persons within the images, the tracking including comparing the first set of facial coefficient vectors to the second set of facial coefficient vectors to determine for each person in the first image if there is a corresponding person in the second image. Optionally, the method includes using estimated locations in combination with the vector distance between facial coefficient vectors to track persons. | 01-19-2012 |
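The vector-comparison step in the entry above can be sketched as follows. This is an illustrative sketch, not the patented method: the coefficient vectors are made up, and the nearest-neighbor rule with a maximum-distance gate is an assumption.

```python
# Track people across two frames by matching each face's coefficient
# vector in the second frame to the nearest vector in the first frame;
# faces farther than max_dist from every known vector stay unmatched.

import math

def match_faces(vecs_a, vecs_b, max_dist=1.0):
    """Map each index in vecs_b to the nearest index in vecs_a, or None."""
    matches = {}
    for j, vb in enumerate(vecs_b):
        dists = [math.dist(va, vb) for va in vecs_a]   # Euclidean distance
        i = min(range(len(dists)), key=dists.__getitem__)
        matches[j] = i if dists[i] <= max_dist else None
    return matches

frame1 = [(0.1, 0.9, 0.2), (0.8, 0.1, 0.5)]            # two known faces
frame2 = [(0.82, 0.12, 0.48),                          # person 1 again
          (0.1, 0.88, 0.22),                           # person 0 again
          (5.0, 5.0, 5.0)]                             # a new face
matches = match_faces(frame1, frame2)
```

Estimated screen locations, as the entry notes, could be folded into the same distance to disambiguate similar faces.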
20120020514 | OBJECT DETECTION APPARATUS AND OBJECT DETECTION METHOD - An object detection apparatus detects an object captured in a determination image according to a feature amount of the object preliminarily learned by use of a learning image. The apparatus includes a detector that causes strong classifiers to operate in order from lower classification accuracy, continues processing when a strong classifier has determined that the object is captured in the determination image, and determines that the object has not been detected, without causing the strong classifiers of higher classification accuracy to operate, when a strong classifier has determined that the object is not captured in the determination image. Each strong classifier inputs the classification result of the strong classifier of lower classification accuracy and determines whether or not the object is captured in the determination image according to a plurality of estimation values and the input classification result. | 01-26-2012 |
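The rejection-cascade structure in the entry above can be sketched as follows. This is an illustrative sketch, not the patented method: the toy "classifiers" are threshold tests on a single score, and the thresholds are assumptions.

```python
# Cascade of classifiers ordered by increasing accuracy: each stage can
# reject early, skipping the costlier stages, and each stage also sees
# the previous stage's result.

def make_stage(threshold):
    """Build a toy stage: pass when the score clears the threshold."""
    def stage(score, prev_result):
        # prev_result is available to the stage's decision, as described
        return score >= threshold and prev_result
    return stage

def cascade_detect(score, stages):
    """Run stages from least to most accurate; reject on first failure."""
    result = True
    for stage in stages:
        result = stage(score, result)
        if not result:
            return False      # early rejection: higher stages never run
    return True

# later stages are "more accurate" here simply via stricter thresholds
stages = [make_stage(t) for t in (0.2, 0.5, 0.8)]
easy_reject = cascade_detect(0.1, stages)   # fails the cheapest stage
accepted = cascade_detect(0.9, stages)      # passes all three stages
```

Running the cheap, low-accuracy stages first means most negative windows are discarded before the expensive stages execute.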
20120020515 | MULTI-PHENOMENOLOGY OBJECT DETECTION - Method and system for utilizing multiple phenomenological techniques to resolve closely spaced objects during imaging includes detecting a plurality of closely spaced objects through the imaging of a target area by an array, and spreading electromagnetic radiation received from the target area across several pixels. During the imaging, different phenomenological techniques may be applied to capture discriminating features that may affect a centroid of the electromagnetic radiation received on the array. Comparing the locations of the centroids over multiple images may be used to resolve a number of objects imaged by the array. Examples of such phenomenological discriminating techniques may include imaging the target area in multiple polarities of light or in multiple spectral bands of light. Another embodiment includes time-lapse imaging of the target area, to compare time lapse centroids for multiple movement signal characteristics over pluralities of pixels on the array. | 01-26-2012 |
20120020516 | SYSTEM AND METHOD FOR MONITORING MOTION OBJECT - A motion object monitoring system uses a time-of-flight (TOF) camera to capture an image of a scene and distance data between the TOF camera and points in the scene. A 3D model of the scene is built according to the image of the scene and the distance data. The motion object monitoring system assigns numbers to the monitored objects according to specific features of the monitored objects, which are obtained by detecting the built 3D model of the scene. Only one number is stored for each monitored object, instead of repeatedly storing the numbers of the same motion objects. The motion object monitoring system analyzes the stored numbers and displays an analysis result, and also determines a movement of each of the motion objects according to the corresponding numbers. | 01-26-2012 |
20120020517 | METHOD FOR DETECTING, IN PARTICULAR COUNTING, ANIMALS - A method for detecting, in particular counting, animals that pass a predefined place in a walk-through direction with the aid of at least one camera, wherein the camera successively records pictures of the defined place and generates signals that represent these pictures, supplying these signals to signal processing means for further processing, wherein a multiplicity of the recorded pictures are processed with the aid of the signal processing means. | 01-26-2012 |
20120020518 | PERSON TRACKING DEVICE AND PERSON TRACKING PROGRAM - A two-dimensional moving track calculating unit | 01-26-2012 |
20120020519 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - An image processing apparatus includes a region setting unit configured to set a specific region where a reflection may occur in an image, a size setting unit configured to set a size of an object to be detected in association with a position in the image, and a changed region detection unit configured to detect a changed region by comparing a background model and an input image, wherein the changed region detection unit outputs the changed region in the specific region based on the size of the object associated with a position of the changed region, in a case where the changed region extends beyond a boundary of the specific region. | 01-26-2012 |
20120020520 | METHOD AND APPARATUS FOR DETECTING MOTION OF IMAGE IN OPTICAL NAVIGATOR - A system and method for determining a motion vector uses both a main block from an image and at least one ancillary block relating to the main block from the image. The main block and ancillary block are then tracked from image to image to provide a motion vector. The use of a composite tracking unit allows for more accurate correlation and identification of a motion vector. | 01-26-2012 |
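Tracking a main block together with a related ancillary block can be illustrated with a combined sum-of-absolute-differences (SAD) search. The function names, and the choice of placing the ancillary block directly below the main block, are assumptions made for illustration:

```python
def sad(img, x, y, block):
    """Sum of absolute differences between a block and the patch at (x, y)."""
    return sum(abs(img[y + r][x + c] - block[r][c])
               for r in range(len(block)) for c in range(len(block[0])))

def find_motion(prev_img, next_img, x, y, size, search):
    """Track the block at (x, y) in prev_img together with an ancillary
    block just below it; the shift minimising their combined SAD wins."""
    main = [row[x:x + size] for row in prev_img[y:y + size]]
    anc = [row[x:x + size] for row in prev_img[y + size:y + 2 * size]]
    best, best_cost = (0, 0), float('inf')
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = (sad(next_img, x + dx, y + dy, main) +
                    sad(next_img, x + dx, y + dy + size, anc))
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best  # motion vector (dx, dy)
```

Requiring both blocks to match disambiguates shifts that a uniform main block alone could not, which is the "composite tracking unit" advantage the abstract claims.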
20120020521 | OBJECT POSITION ESTIMATION APPARATUS, OBJECT POSITION ESTIMATION METHOD, AND OBJECT POSITION ESTIMATION PROGRAM - An object-state change determination unit calculates a correspondence relationship between each of a plurality of observed values obtained from a plurality of objects and each of a plurality of the latest object states recorded in an object state information storage unit, and determines the presence or absence of a change in the object state. Only when there is a change in the object state is the object position estimated with high precision using a batch estimation unit; when there is no change, the high-precision position estimation result already recorded in the object state information storage unit is output as the result of the object position estimation. | 01-26-2012 |
20120020522 | MOBILE IMAGING DEVICE AS NAVIGATOR - Embodiments of the invention are directed to obtaining information based on directional orientation of a mobile imaging device, such as a camera phone. Visual information is gathered by the camera and used to determine a directional orientation of the camera, to search for content based on the direction, to manipulate 3D virtual images of a surrounding area, and to otherwise use the directional information. Direction and motion can be determined by analyzing a sequence of images. Distance from a current location, inputted search parameters, and other criteria can be used to expand or filter content that is tagged with such criteria. Search results with distance indicators can be overlaid on a map or a camera feed. Various content can be displayed for a current direction, or desired content, such as a business location, can be displayed only when the camera is oriented toward the desired content. | 01-26-2012 |
20120020523 | INFORMATION CREATION DEVICE FOR ESTIMATING OBJECT POSITION AND INFORMATION CREATION METHOD AND PROGRAM FOR ESTIMATING OBJECT POSITION - Score determination means | 01-26-2012 |
20120020524 | TRACKED OBJECT DETERMINATION DEVICE, TRACKED OBJECT DETERMINATION METHOD AND TRACKED OBJECT DETERMINATION PROGRAM - Enables determination of whether a moving object appearing in input video is an object tracked and captured by a cameraman. A moving object is determined to be a subject image to which the cameraman pays attention based on the time difference between the time when a movement state, determined by a motion vector of the moving object, changes and the time when a shooting state, determined by a motion vector of the camera motion, changes. | 01-26-2012 |
20120020525 | DATA PROCESSING APPARATUS AND DATA PROCESSING METHOD - A data processing apparatus | 02-02-2012 |
20120027248 | Foreground Analysis Based on Tracking Information - Techniques for performing foreground analysis are provided. The techniques include identifying a region of interest in a video scene, applying a background subtraction algorithm to the region of interest to detect a static foreground object in the region of interest, and determining whether the static foreground object is abandoned or removed, wherein determining whether the static foreground object is abandoned or removed comprises performing a foreground analysis based on edge energy and region growing, and pruning one or more false alarms using one or more track statistics. | 02-02-2012 |
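Background subtraction followed by a persistence check gives a rough sense of how a static foreground object might be flagged; the edge-energy analysis, region growing, and track-statistic pruning steps of the abstract are omitted, and all names here are illustrative:

```python
def foreground_mask(background, frame, threshold):
    """Binary mask: 1 where the frame differs from the background model."""
    return [[1 if abs(p - b) >= threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def static_pixels(masks):
    """Pixels flagged as foreground in every mask are candidates for
    abandoned/removed objects and warrant further analysis."""
    h, w = len(masks[0]), len(masks[0][0])
    return [[1 if all(m[r][c] for m in masks) else 0
             for c in range(w)] for r in range(h)]
```

Pixels that differ from the background only transiently (noise, passers-by) drop out of the intersection, leaving only the static foreground.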
20120027249 | Multispectral Detection of Personal Attributes for Video Surveillance - Techniques for detecting an attribute in video surveillance include generating training sets of multispectral images, generating a group of multispectral box features comprising receiving input of a detector size of a width and height, a number of spectral bands in the multispectral images, and integer values representing a minimum and maximum width and height of multispectral box features, fixing a feature width and height, generating feature building blocks with the fixed width and height, placing a feature building block at a same location for each spectral band level, and enumerating combinations of the feature building blocks through each spectral level until all sizes within the integer values have been covered, and wherein each combination determines a multispectral box feature, using the training sets to select multispectral box features to generate a multispectral attribute detector, and using the multispectral attribute detector to identify a location of an attribute in video surveillance. | 02-02-2012 |
20120027250 | DATA DIFFERENCE GUIDED IMAGE CAPTURING - Methods and apparatuses are disclosed. Previously stored images of one or more geographic areas may be viewed by online users. A new low-resolution image may be acquired and aspects of the new low-resolution image may be compared with a corresponding one of the previously stored images to determine an amount of change. A determination may be made regarding whether to acquire a new high-resolution image based on the determined amount of change and a freshness score associated with the one of the previously stored images. In another embodiment, a new image may be captured and corresponding location data may be obtained. A corresponding previously stored image may be obtained and compared with the new image to determine an amount of change. The new image may be uploaded to a remote computing device based on the determined amount of change and a freshness score of the previously stored image. | 02-02-2012 |
20120027251 | DEVICE WITH MARKINGS FOR CONFIGURATION - A device including a network interface is marked for determination of the position or orientation of the device. In particular, the markings can include a pattern and proportions that enable determination of at least one of a position and an orientation of the device relative to a station using appearance of the markings as observed from the station. | 02-02-2012 |
20120027252 | HAND GESTURE DETECTION - A method for detecting presence of a hand gesture in video frames includes receiving video frames having an original resolution, downscaling the received video frames into video frames having a lower resolution, and detecting a motion corresponding to the predefined hand gesture in the downscaled video frames based on temporal motion information in the downscaled video frames. The method also includes detecting a hand shape corresponding to the predefined hand gesture in a candidate search window within one of the downscaled video frames using a binary classifier. The candidate search window corresponds to a motion region containing the detected motion. The method further includes determining whether the received video frames contain the predefined hand gesture based on the hand shape detection. | 02-02-2012 |
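The downscale-then-detect pipeline of the entry above can be sketched as average pooling followed by frame differencing; the binary hand-shape classifier stage is omitted, and all names are illustrative:

```python
def downscale(frame, factor):
    """Average-pool a grayscale frame by an integer factor."""
    h, w = len(frame) // factor, len(frame[0]) // factor
    return [[sum(frame[r * factor + i][c * factor + j]
                 for i in range(factor) for j in range(factor)) / factor ** 2
             for c in range(w)] for r in range(h)]

def motion_region(prev_frame, next_frame, threshold):
    """Bounding box (top, left, bottom, right) of moving pixels in the
    downscaled frames, or None when nothing moved."""
    h, w = len(prev_frame), len(prev_frame[0])
    moved = [(r, c) for r in range(h) for c in range(w)
             if abs(next_frame[r][c] - prev_frame[r][c]) >= threshold]
    if not moved:
        return None
    rows = [r for r, _ in moved]
    cols = [c for _, c in moved]
    return (min(rows), min(cols), max(rows), max(cols))
```

The returned box plays the role of the candidate search window: a shape classifier would only be run inside it, which is the main cost saving of detecting motion at low resolution first.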
20120027253 | ILLUMINATION APPARATUS AND BRIGHTNESS ADJUSTING METHOD - An illumination apparatus comprises a control unit, an image capturing unit, a processor unit, a comparison unit, an adjustment unit and an illumination unit. The control unit generates a start signal at a predetermined time. The image capturing unit captures a plurality of images of the ambient road condition according to the start signal. The processor unit extracts the edges of vehicles from the captured images to obtain the current traffic volume. The adjustment unit generates different pulse voltages according to the different volumes of traffic. The illumination unit emits light according to the different pulse voltages. | 02-02-2012 |
20120027254 | Information Processing Apparatus and Information Processing Method for Drawing Image that Reacts to Input Information - In an information processing apparatus, an external-information acquisition unit acquires external information such as an image, a sound, textual information, and numerical information from an input apparatus. A field-image generation unit generates, as an image, a “field” that acts on a particle for a predetermined time step based on the external information. An intermediate-image memory unit stores an intermediate image that is generated in the process of generating a field image by the field-image generation unit. A field-image memory unit stores the field image generated by the field-image generation unit. A particle-image generation unit generates data of a particle image to be output finally by using the field image stored in the field-image memory unit. | 02-02-2012 |
20120027255 | VEHICLE DETECTION APPARATUS - A vehicle detection apparatus comprises an other-vehicle detection module configured to detect points of light in an image captured by a vehicle to which the vehicle detection module is mounted and to detect other vehicles based on the points of light, a vehicle lane-line detection module configured to detect a vehicle lane-line in the captured image, and a region sectioning module configured to section the captured image based on the detected vehicle lane-line into an own vehicle lane region, an oncoming vehicle lane region, and a vehicle lane exterior region. Other vehicles are detected by the other-vehicle detection module by detecting points of light based on respective detection conditions set for each of the sectioned regions. | 02-02-2012 |
20120027256 | Automatic Media Sharing Via Shutter Click - A computer-implemented method for automatically sharing media between users is provided. Collections of images are received from different users, where each collection is associated with a particular user and the users may be associated with each other. The collections are grouped into one or more albums based on the content of the images in the collection, where each album is associated with a particular user. The albums from the different users are grouped into one or more event groups based on the content of the albums. The event groups are then shared automatically, without user intervention, between the different users based on their associations with each other and their individual sharing preferences. | 02-02-2012 |
20120027257 | METHOD AND AN APPARATUS FOR DISPLAYING A 3-DIMENSIONAL IMAGE - A three-dimensional (3D) image display device may display a perceived 3D image. A location tracking unit may determine a viewing distance from a screen to a viewer. An image processing unit may calculate a 3D image pixel period based on the determined viewing distance, may determine a color of at least one of pixels and sub-pixels displaying the 3D image based on the calculated 3D image pixel period, and may control the 3D image to be displayed based on the determined color. | 02-02-2012 |
20120027258 | OBJECT DETECTION DEVICE - An object detection device including an imaging unit | 02-02-2012 |
20120027259 | SYNCHRONIZATION OF TWO IMAGE SEQUENCES OF A PERIODICALLY MOVING OBJECT - A method and an apparatus for correlating two image sequences of a periodically moving object with respect to the periodicity is described. A first frame sequence of the object moving with the first periodicity is acquired. Therein the first frame sequence comprises at least one cycle of motion. A second frame sequence of the object moving with the second periodicity is acquired. Therein the second frame sequence comprises at least one cycle of motion. The first and the second frame sequences are synchronized with respect to the respective periodicity such that same phases of motion of the periodically moving object are correlated to be presented simultaneously. The present invention allows to compare sequences representing a periodical motion with a different number of frames in each of the sequences for the same cycle of motion. Thereby, e.g. image sequences of a beating heart acquired before and after a therapy may be presented in a synchronised way and therefore may be easily compared. | 02-02-2012 |
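Synchronizing two sequences that cover the same motion cycle with different frame counts amounts to mapping both onto a common phase axis. A minimal index-mapping sketch follows; nearest-frame rounding is an assumption, and an actual implementation would interpolate frames rather than indices:

```python
def synchronized_indices(n_frames_a, n_frames_b, n_phases):
    """For n_phases common phase points over one motion cycle, return the
    (index_a, index_b) frame pair showing the same phase in each sequence."""
    pairs = []
    for k in range(n_phases):
        phase = k / n_phases  # fraction of the cycle, 0 <= phase < 1
        pairs.append((round(phase * n_frames_a) % n_frames_a,
                      round(phase * n_frames_b) % n_frames_b))
    return pairs
```

Played back pair-by-pair, the two sequences then show the same phase of the heartbeat simultaneously even though one cycle spans, say, 4 frames in one acquisition and 8 in the other.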
20120027260 | ASSOCIATING A SENSOR POSITION WITH AN IMAGE POSITION - A system for associating a sensor position with an image position comprises position information means | 02-02-2012 |
20120027261 | Method and Apparatus for Performing 2D to 3D Registration - A method and apparatus for performing 2D to 3D registration includes an initialization step and a refinement step. The initialization step is directed to identifying an orientation and a position by knowing orientation information where data images are captured and by identifying centers of relevant bodies. The refinement step uses normalized mutual information and pattern intensity algorithms to register the 2D image to the 3D volume. | 02-02-2012 |
20120033852 | SYSTEM AND METHOD TO FIND THE PRECISE LOCATION OF OBJECTS OF INTEREST IN DIGITAL IMAGES - The present invention is a method and system to precisely locate objects of interest in any given image scene space, which finds the presence of objects based upon pattern matching of geometric relationships to a master, known set. The method and system prepares images for feature and attribute detection and identifies the presence of potential objects of interest, then narrows down the objects based upon how well they match a pre-designated master template. Matching takes place by finding all objects, plotting each object's area, and juxtaposing a sweet-spot overlap of that area on master objects, which in turn forms a glyph shape. The glyph shape is recorded, along with all other formed glyphs in an image's scene space, and then mapped to form sets using a classifier and finally a pattern matching algorithm. The resulting objects-of-interest matches are then refined to plot the contour boundaries of the object's grouped elements (an arrangement of contiguous pixels of the given object, called a Co-Glyph) and finally snapped to the component's actual dimensions, e.g., the x, y of a character or individual living cell. | 02-09-2012 |
20120033853 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - The present invention refers to an information processing apparatus comprising: an obtaining unit adapted to obtain an image of an object; a face region detection unit adapted to detect a face region of the object from the image; an eye region detection unit adapted to detect an eye region of the object; a generation unit adapted to generate a high-resolution image and low-resolution image of the face region detected by the face region detection unit; a first extraction unit adapted to extract a first feature amount indicating a direction of a face existing in the face region from the low-resolution image; a second extraction unit adapted to extract a second feature amount indicating a direction of an eye existing in the eye region from the high-resolution image; and an estimation unit adapted to estimate a gaze direction of the object from the first feature amount and the second feature amount. | 02-09-2012 |
20120033854 | IMAGE PROCESSING APPARATUS - Provided are an image processing apparatus and method for counting moving objects in an image, the apparatus including: a motion detection unit which detects motion in an image; an object detection unit which detects objects based on the motion detected by the motion detection unit; an outline generation unit which generates at least one reference outline of which a size is adjusted according to a preset parameter based on a location in the image; and a calculation unit which calculates a number of objects having substantially a same size as that of the at least one reference outline from among the objects detected by the object detection unit, wherein the preset parameter is adjusted according to at least one circumstantial parameter. | 02-09-2012 |
20120033855 | PREDICTIVE FLIGHT PATH AND NON-DESTRUCTIVE MARKING SYSTEM AND METHOD - Systems and methods for acquiring and targeting an object placed in motion, tracking the object's movement, and while tracking, measuring the object's characteristics and marking the object with an external indicator until the object comes to rest is provided. The systems and methods include an acquisition and tracking system, a data capture system, and a marking control system. Through the components of the system, an object moving through two or three dimensional space can be externally marked to assist with improving the performance of striking the object. | 02-09-2012 |
20120033856 | SYSTEM AND METHOD FOR ENABLING MEANINGFUL INTERACTION WITH VIDEO BASED CHARACTERS AND OBJECTS - The present disclosure provides a system and method for enabling meaningful body-to-body interaction with virtual video-based characters or objects in an interactive imaging environment including: capturing a corpus of video-based interaction data, processing the captured video using a segmentation process that corresponds to the capture setup in order to generate binary video data, labeling the corpus by assigning a description to clips of silhouette video, processing the labeled corpus of silhouette motion data to extract horizontal and vertical projection histograms for each frame of silhouette data, and estimating the motion state automatically from each frame of segmentation data using the processed model. Virtual characters or objects are represented using video captured from video-based motion, thereby creating the illusion of real characters or objects in an interactive imaging experience. | 02-09-2012 |
20120033857 | SELECTIVE AND ADAPTIVE ILLUMINATION OF A TARGET - There are provided a method and a system for illuminating one or more targets in a scene. An image of the scene is acquired using a sensing device that may use an infrared sensor, for example. From the image, an illumination controller determines an illumination figure, such that the illumination figure adaptively matches at least a position of the target in the image. The target is then selectively illuminated using an illumination device, according to the illumination figure. | 02-09-2012 |
20120039505 | DYNAMICALLY RESIZING TEXT AREA ON A DISPLAY DEVICE - Dynamically resizing a text area in which text is displayed on a display device. A camera device periodically captures snapshots of a user's gaze point and head position while reading text, and the captured snapshots are used to detect movement of the user's head. Head movement suggests that the text area is too wide for comfortable viewing. Accordingly, the width of the text area is automatically resized, responsive to detecting head movement. Preferably, the resized width is set to the position of the user's gaze point prior to the detected head movement. The text is then preferably reflowed within the resized text area. Optionally, the user may be prompted to confirm whether the resizing will be performed. | 02-16-2012 |
20120039506 | METHOD FOR IDENTIFYING AN OBJECT IN A VIDEO ARCHIVE - The invention concerns a method for identifying an object in a video archive comprising multiple images acquired by a network of cameras. The method includes a phase of characterisation of the object to be identified and a phase of searching for the said object in the said archive, where the said characterisation phase consists in defining for the said object at least one semantic characteristic capable of being extracted from the said video archive, even in low-resolution images. | 02-16-2012 |
20120039507 | Information Processing Device And Information Processing Method - An image acquisition unit of an information processing device acquires data for moving image including an image of a user and captured by an image capturing device. An initial processing unit determines correspondence between an amount of movement of the user and a parameter defining an image to be ultimately output in a conversion information storage unit. A tracking processing unit uses a particle filter to perform visual tracking in the moving image so as to estimate the magnification and translation amount of the user's head contour. The input value conversion unit converts the amount of movement of the user into the parameter defining an image using the magnification and the translation amount as parameters. The output data generation unit generates an image based on the parameter. The output control unit controls the generated image so as to be displayed on a display device. | 02-16-2012 |
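The particle filter mentioned above can be illustrated in one dimension with the classic predict/weight/resample loop. This tracks a scalar position rather than the head contour's magnification and translation, and the particle count, noise model, and weighting function are all illustrative assumptions:

```python
import random

def particle_filter_track(observations, n_particles=500, noise=1.0):
    """Track a 1D position through noisy observations: predict by random
    walk, weight particles by closeness to the observation, resample."""
    particles = [observations[0] + random.gauss(0, noise)
                 for _ in range(n_particles)]
    estimates = []
    for z in observations:
        particles = [p + random.gauss(0, noise) for p in particles]    # predict
        weights = [1.0 / (1e-6 + abs(p - z)) for p in particles]       # weight
        particles = random.choices(particles, weights, k=n_particles)  # resample
        estimates.append(sum(particles) / n_particles)                 # estimate
    return estimates
```

In the visual-tracking setting of the abstract, each particle would carry a full state (contour translation and magnification) and the weight would come from matching the hypothesised contour against the image, but the loop structure is the same.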
20120039508 | TARGET DETECTING METHOD AND APPARATUS - Target detecting method and apparatus are disclosed. In the target detecting method, edges in a first direction in an input image may be detected to obtain an edge image comprising a plurality of edges in the first direction; and one or more candidate targets may be generated according to the plurality of edges in the first direction, a region between any two of the plurality of edges in the first direction in the input image corresponding to one of the candidate targets. | 02-16-2012 |
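Detecting edges in a first direction and forming a candidate target between each pair of such edges might be sketched as follows; the column-gradient heuristic, threshold, and names are assumptions for illustration:

```python
def vertical_edge_columns(image, threshold):
    """Columns whose summed horizontal-gradient magnitude exceeds the
    threshold are treated as edges in the 'first direction' (vertical)."""
    h, w = len(image), len(image[0])
    strength = [sum(abs(image[r][c + 1] - image[r][c]) for r in range(h))
                for c in range(w - 1)]
    return [c for c, s in enumerate(strength) if s >= threshold]

def candidate_regions(edge_columns):
    """Each pair of consecutive edge columns bounds one candidate target."""
    return list(zip(edge_columns, edge_columns[1:]))
```

The region between any two detected edges becomes one candidate target, as in the abstract; a later verification stage would decide which candidates are real.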
20120039509 | INFORMATION-INPUTTING DEVICE INPUTTING CONTACT POINT OF OBJECT ON RECORDING SURFACE AS INFORMATION - Structure and function for inputting information preferably includes a display device having two cameras in respective corners thereof. At least one computer readable medium preferably has program instructions configured to cause at least one processing structure to: (i) extract an object located on a plane of the display device from an image that includes the plane of the object, (ii) determine whether the object is a writing implement by determining, when a plurality of objects are extracted from the image, that one of the plurality of objects that satisfies a prescribed condition is the writing implement, (iii) calculate a position of a contact point between the writing implement and the plane as information to be input if the object has been determined as the writing implement, and (iv) input the information representing a position on the plane indicated by the object. | 02-16-2012 |
20120039510 | SYSTEM AND METHOD FOR REMOTELY MONITORING AND/OR VIEWING IMAGES FROM A CAMERA OR VIDEO DEVICE - A system and method are provided for remotely monitoring images from an image capturing device. Image data from an image capturing component is received, where the image data represents images of a scene in the field of view of the image capturing component. The image data may be analyzed to determine that the scene has changed. In response to a determination that the scene has changed, a communication may be transmitted to a designated device, recipient or network location. The communication may indicate that a scene change or event occurred, and may be in the form of a notification or an actual image or series of images of the scene after the change or event. | 02-16-2012 |
20120039511 | Information Processing Apparatus, Information Processing Method, and Computer Program - An information processing apparatus that executes processing for creating an environmental map includes a camera that photographs an image, a self-position detecting unit that detects a position and a posture of the camera on the basis of the image, an image-recognition processing unit that detects an object from the image, a data constructing unit that is inputted with information concerning the position and the posture of the camera and information concerning the object and executes processing for creating or updating the environmental map, and a dictionary-data storing unit having stored therein dictionary data in which object information is registered. The image-recognition processing unit executes processing for detecting an object from the image acquired by the camera with reference to the dictionary data. The data constructing unit applies the three-dimensional shape data registered in the dictionary data to the environmental map and executes object arrangement on the environmental map. | 02-16-2012 |
20120045090 | MULTI-MODE VIDEO EVENT INDEXING - Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. If the determined quality of object distinctiveness meets a threshold level of quality, a high-quality analytic mode is selected from multiple modes and applied to the video input images via a hardware device to determine object activity within them; otherwise a low-quality analytic mode, different from the high-quality analytic mode, is selected and applied to the video input images via a hardware device to determine object activity within them. | 02-23-2012 |
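The threshold-driven choice between analytic modes can be sketched with a crude distinctiveness proxy; using intensity variance as the measure is an assumption of this sketch, since the abstract does not specify how distinctiveness is quantified:

```python
def distinctiveness(frame):
    """Crude distinctiveness proxy: variance of pixel intensities."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def select_mode(frame, quality_threshold):
    """Run the expensive high-quality analytics only when objects are
    distinct enough for it to pay off; otherwise fall back."""
    return ("high-quality" if distinctiveness(frame) >= quality_threshold
            else "low-quality")
```

The point of the mode switch is robustness: a low-contrast or noisy feed would mislead the high-quality analytics, so a cheaper, more tolerant pipeline handles it instead.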
20120045091 | System and Method for 3D Wireframe Reconstruction from Video - In one or more aspects of the present disclosure, a method, a computer program product and a system for reconstructing scene features of an object in 3D space using structure-from-motion feature-tracking includes acquiring a first camera frame at a first camera position; extracting image features from the first camera frame; initializing a first set of 3D points from the extracted image features; acquiring a second camera frame at a second camera position; predicting a second set of 3D points by converting their positions and variances to the second camera position; projecting the predicted 3D positions to an image plane of the second camera to obtain 2D predictions of the image features; measuring an innovation of the predicted 2D image features; and updating estimates of 3D points based on the measured innovation to reconstruct scene features of the object image in 3D space. | 02-23-2012 |
20120045092 | Hierarchical Video Sub-volume Search - Described is a technology by which video, which may be relatively high-resolution video, is efficiently processed to determine whether the video contains a specified action. The video corresponds to a spatial-temporal volume. The volume is searched with a top-k search that finds a plurality of the most likely sub-volumes simultaneously in a single search round. The score volumes of larger spatial resolution videos may be down-sampled into lower-resolution score volumes prior to searching. | 02-23-2012 |
20120045093 | METHOD AND APPARATUS FOR RECOGNIZING OBJECTS IN MEDIA CONTENT - An approach is provided for recognizing objects in media content. The capture manager determines to detect, at a device, one or more objects in a content stream. Next, the capture manager determines to capture one or more representations of the one or more objects in the content stream. Then, the capture manager associates the one or more representations with one or more instances of the content stream. | 02-23-2012 |
20120045094 | TRACKING APPARATUS, TRACKING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - The present invention provides a tracking apparatus for tracking a target designated on an image which is captured by an image sensing element, including a calculation unit configured to calculate, for each of feature candidate colors, a first area of a pixel group which includes a pixel of a feature candidate color of interest and in which pixels of colors similar to the feature candidate color of interest continuously appear, a second area of pixels of colors similar to the feature candidate color of interest in the plurality of pixels, and a ratio of the first area to the second area, and an extraction unit configured to extract a feature candidate color having the smallest first area as a feature color of the target from feature candidate colors for each of which the ratio of the first area to the second area is higher than a predetermined reference ratio. | 02-23-2012 |
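The first-area/second-area ratio test in the entry above can be sketched with a flood fill for the connected ("first") area and a global pixel count for the "second" area. Exact-match colour similarity and the seed-point interface are simplifying assumptions of this sketch:

```python
def connected_area(image, r, c, color):
    """Flood-fill size of the same-colored region containing (r, c)."""
    h, w = len(image), len(image[0])
    seen, stack = set(), [(r, c)]
    while stack:
        y, x = stack.pop()
        if (y, x) in seen or not (0 <= y < h and 0 <= x < w):
            continue
        if image[y][x] != color:
            continue
        seen.add((y, x))
        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return len(seen)

def pick_feature_color(image, candidates, min_ratio):
    """candidates: (color, seed_row, seed_col) triples. Keep colors whose
    connected area dominates all same-colored pixels (ratio > min_ratio),
    then choose the smallest connected area as the target's feature color."""
    best = None
    for color, r, c in candidates:
        first = connected_area(image, r, c, color)          # first area
        second = sum(row.count(color) for row in image)     # second area
        if second and first / second > min_ratio and (best is None
                                                      or first < best[1]):
            best = (color, first)
    return best[0] if best else None
```

A high ratio means the colour occurs almost nowhere outside the target region, and the smallest such area is the most specific marker for re-finding the target in later frames.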
20120045095 | IMAGE PROCESSING APPARATUS, METHOD THEREOF, PROGRAM, AND IMAGE CAPTURING APPARATUS - An image processing apparatus stores model information representing a subject model belonging to a specific category, detects the subject from an input image by referring to the model information, determines a region for which an image correction is to be performed within a region occupied by the detected subject in the input image, stores, for a local region of the image, a plurality of correction data sets representing correspondence between a feature vector representing a feature before correction and a feature vector representing a feature after correction, selects at least one of the correction data sets to be used to correct a local region included in the region determined to undergo the image correction, and corrects the region determined to undergo the image correction using the selected correction data sets. | 02-23-2012 |
20120045096 | MONITORING CAMERA TERMINAL - A monitoring camera terminal has an imaging portion for imaging a monitoring target area allocated to an own-terminal, an object extraction portion for processing a frame image imaged by the imaging portion to extract an imaged object, an ID addition portion for adding an ID to the object extracted by the object extraction portion, an object map creation portion for creating, for each object extracted by the object extraction portion, an object map associating the ID added to the object with a coordinate position in the frame image, and a tracing portion for tracing an object in the monitoring target area allocated to the own-terminal using the object maps created by the object map creation portion. | 02-23-2012 |
20120045097 | HIGH ACCURACY BEAM PLACEMENT FOR LOCAL AREA NAVIGATION - An improved method of high accuracy beam placement for local area navigation in the field of semiconductor chip manufacturing. This invention demonstrates a method where high accuracy navigation to the site of interest within a relatively large local area (e.g. an area 200 μm×200 μm) is possible even where the stage/navigation system is not normally capable of such high accuracy navigation. The combination of large area, high-resolution scanning, digital zoom and registration of the image to an idealized coordinate system enables navigation around a local area without relying on stage movements. Once the image is acquired any sample or beam drift will not affect the alignment. Preferred embodiments thus allow accurate navigation to a site on a sample with sub-100 nm accuracy, even without a high-accuracy stage/navigation system. | 02-23-2012 |
20120045098 | ARCHITECTURES AND METHODS FOR CREATING AND REPRESENTING TIME-DEPENDENT IMAGERY - The present invention pertains to geographical image processing of time-dependent imagery. Various assets acquired at different times are stored and processed according to acquisition date in order to generate one or more image tiles for a geographical region of interest. The different image tiles are sorted based on asset acquisition date. Multiple image tiles for the same region of interest may be available. In response to a user request for imagery as of a certain date, one or more image tiles associated with assets from prior to that date are used to generate a time-based geographical image for the user. | 02-23-2012 |
20120057745 | DETECTION OF OBJECTS USING RANGE INFORMATION - A system and method for detecting objects and background in digital images using range information includes receiving the digital image representing a scene; identifying range information associated with the digital image and including distances of pixels in the scene from a known reference location; generating a cluster map based at least upon an analysis of the range information and the digital image, the cluster map grouping pixels of the digital image by their distances from a viewpoint; identifying objects in the digital image based at least upon an analysis of the cluster map and the digital image; and storing an indication of the identified objects in a processor-accessible memory system. | 03-08-2012 |
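As a toy illustration of the cluster-map idea above (grouping pixels by their distance from the viewpoint), the sketch below quantises a depth map into distance bands; the 2 m split and the example depths are arbitrary assumptions, not values from the application:

```python
import numpy as np

def cluster_map(depth, edges):
    """Assign each pixel the index of the depth interval it falls into -
    a crude stand-in for a cluster map that groups pixels of the digital
    image by their distance from the viewpoint."""
    return np.digitize(depth, edges)

# Hypothetical 2x3 depth image (metres): a near object against a far wall.
depth = np.array([[0.5, 0.6, 5.0],
                  [0.4, 4.8, 5.1]])
labels = cluster_map(depth, edges=[2.0])  # 0 = nearer than 2 m, 1 = farther
```

A real system would refine these bands with the image data (edges, color) before identifying objects, as the abstract's joint analysis of the cluster map and the image suggests.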
20120057746 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - A processing device and method are provided. According to illustrative embodiments, the device and method are implemented by detecting a face region of an image, setting at least one action region according to the position of the face region, processing image data corresponding to the at least one action region to determine whether or not a predetermined action has been performed, and performing processing corresponding to the predetermined action when it is determined that the predetermined action has been performed. | 03-08-2012 |
20120057747 | IMAGE PROCESSING SYSTEM AND IMAGE PROCESSING METHOD - An image processing system performs a position-matching operation on first and second images, which are obtained by photographing the same object a plurality of times. A plurality of shift points are detected in the second image. The shift points correspond to fixed points, which are dispersed throughout the whole of the first image. The second image is divided into a plurality of partial images, the vertices of which are positioned at the same coordinates as the fixed points in the first image. Each of the partial images are shifted to the shift points to transform the partial images so that corresponding transformed partial images are produced. The transformed partial images are combined to form a combined image. | 03-08-2012 |
20120057748 | APPARATUS WHICH DETECTS MOVING OBJECT FROM IMAGE AND METHOD THEREOF - An image processing apparatus includes an input unit configured to input a plurality of time-sequential still images, a setting unit configured to set, in a still image among the plurality of still images, a candidate region that is a candidate of a region in which an object exists, and to acquire a likelihood of the candidate region, a motion acquisition unit configured to acquire motion information indicating a motion of the object based on the still image and another still image that is time-sequential to the still image, a calculation unit configured to calculate a weight corresponding to an appropriateness of the motion indicated by the motion information as a motion of the object, a correction unit configured to correct the likelihood based on the weight, and a detection unit configured to detect the object from the still image based on the corrected likelihood. | 03-08-2012 |
20120057749 | INATTENTION DETERMINING DEVICE - An inattention determining device includes a range changing unit and an inattention determining unit. When a curve detection result is output from a curve detector, the range changing unit changes a first predetermined range to a second predetermined range by a predetermined amount in the curve direction before a turning direction of an acquisition result is changed in the curve direction of the curve detection result. The inattention determining unit determines whether or not a driver is in an inattention state on the basis of the second predetermined range. | 03-08-2012 |
20120057750 | System And Method For Data Assisted Chroma-Keying - The invention illustrates a system and method of displaying a base image and an overlay image comprising: capturing a base image of a real event; receiving an instrumentation data based on the real event; identifying a visual segment within the base image based on the instrumentation data; and rendering an overlay image within the visual segment. | 03-08-2012 |
20120057751 | Particle Tracking Methods - A method for tracking an object in a video data, comprises the steps of determining a plurality of particles for estimating a location of the object in the video data, determining a weight for each of the plurality of the particles, wherein the weights of two or more particles are determined substantially in parallel, and estimating the location of the object in the video data based upon the determined particle weights. | 03-08-2012 |
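The parallel weighting step in the abstract above lends itself to vectorisation. The Gaussian likelihood and the weighted-mean estimate below are illustrative assumptions (the claim does not fix a likelihood model), sketched in numpy:

```python
import numpy as np

def track_step(particles, observation, noise=1.0):
    """Weight every particle at once (the claim's weighting 'substantially
    in parallel'), then estimate the object location as the weighted mean
    of the particle positions."""
    d2 = np.sum((particles - observation) ** 2, axis=1)  # squared distances
    weights = np.exp(-d2 / (2 * noise ** 2))             # Gaussian likelihood
    weights /= weights.sum()                             # normalise to 1
    return weights @ particles, weights
```

A full tracker would also resample the particles and propagate them through a motion model between frames; this sketch shows only the weighting and estimation steps named in the claim.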
20120057752 | METHOD OF, AND APPARATUS AND COMPUTER SOFTWARE FOR, IMPLEMENTING IMAGE ANALYSIS PROTOCOLS - A computer-based method for the development of an image analysis protocol for analyzing image data, the image data containing images including image objects, in particular biological image objects such as biological cells. The image analysis protocol, once developed, is operable in an image analysis software system to report on one or more measurements conducted on selected ones of the image objects. The development process includes defining target identification settings to identify at least two different target sets of image objects, defining one or more pair-wise linking relationships between the target sets, and defining one or more measurements to be performed using said pair-wise linking relationship(s). | 03-08-2012 |
20120057753 | SYSTEMS AND METHODS FOR TRACKING A MODEL - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose. | 03-08-2012 |
20120057754 | IMAGE SELECTION BASED ON IMAGE CONTENT - An image capture system comprises an image input and processing unit. The image input obtains image information which is then passed to the processing unit. The processing unit is coupled to the image input for determining image metrics on the image information. The processing unit initiates a capture sequence when the image metrics meet a predetermined condition. The capture sequence may store one or more images, or it may indicate that one or more images have been detected. In one embodiment, the image input is a CMOS or CCD sensor. | 03-08-2012 |
20120057755 | METHOD AND SYSTEM FOR CONTROLLING LIGHTING - A method is provided to control the lighting ambience in a space by means of a plurality of controllable light sources ( | 03-08-2012 |
20120063637 | ARRAY OF SCANNING SENSORS - An array of image sensors is arranged to cover a field of view for an image capture system. Each sensor has a field of view segment which is adjacent to the field of view segment covered by another image sensor. The adjacent field of view (FOV) segments share an overlap area. Each image sensor comprises sets of light sensitive elements which capture image data using a scanning technique which proceeds in a sequence providing for image sensors sharing overlap areas to be exposed in the overlap area during the same time period. At least two of the image sensors capture image data in opposite directions of traversal for an overlap area. This sequencing provides closer spatial and temporal relationships between the data captured in the overlap area by the different image sensors. The closer spatial and temporal relationships reduce artifact effects at the stitching boundaries, and improve the performance of image processing techniques applied to improve image quality. | 03-15-2012 |
20120063638 | EGOMOTION USING ASSORTED FEATURES - A system and method are disclosed for estimating camera motion of a visual input scene using points and lines detected in the visual input scene. The system includes a camera server comprising a stereo pair of calibrated cameras, a feature processing module, a trifocal motion estimation module and an optional adjustment module. The stereo pair of the calibrated cameras and its corresponding stereo pair of cameras after camera motion form a first and a second trifocal tensor. The feature processing module is configured to detect points and lines in the visual input data comprising a plurality of image frames. The feature processing module is further configured to find point correspondence between detected points and line correspondence between detected lines in different views. The trifocal motion estimation module is configured to estimate the camera motion using the detected points and lines associated with the first and the second trifocal tensor. | 03-15-2012 |
20120063639 | INFORMATION PROCESSING DEVICE, RECOGNITION METHOD THEREOF AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An information processing device detects a background region from an image, extracts multiple partial regions from the image, sets multiple local regions for each of the multiple partial regions, selects a local region including a region other than the background region from among the multiple local regions and calculates a local feature amount from the selected local region, and determines a partial region that includes a recognition target object from among the multiple partial regions based on the calculated local feature amount. | 03-15-2012 |
20120063640 | IMAGE PROCESSING APPARATUS, IMAGE FORMING SYSTEM, AND IMAGE FORMING METHOD - An upstream image processing apparatus determines, when geometric conversion is instructed, whether the result of downstream correction processing changes due to the geometric conversion, and if it changes, the apparatus changes the conversion to geometric conversion that does not cause a change in the correction result. Then, the geometric conversion is performed on a target image, and the resultant image is transmitted to a downstream image processing apparatus. Together therewith, instruction information indicating an instruction for correction processing and instruction information indicating geometric transformation processing for performing geometric transformation processing to the instructed degree are transmitted to the downstream image processing apparatus. The downstream image processing apparatus adds an instruction for image processing as appropriate, and thereafter transmits the resultant data to an image forming apparatus. The image forming apparatus forms an image by performing correction processing and geometric transformation processing that have been instructed. | 03-15-2012 |
20120063641 | SYSTEMS AND METHODS FOR DETECTING ANOMALIES FROM DATA - The present disclosure concerns methods and/or systems for processing, detecting and/or notifying for the presence of anomalies or infrequent events from data. Some of the disclosed methods and/or systems may be used on large-scale data sets. Certain applications are directed to analyzing sensor surveillance records to identify aberrant behavior. The sensor data may be from a number of sensor types including video and/or audio. Certain applications are directed to methods and/or systems that use compressive sensing. Certain applications may be performed in substantially real time. | 03-15-2012 |
20120063642 | SIMILARITY ANALYZING DEVICE, IMAGE DISPLAY DEVICE, IMAGE DISPLAY PROGRAM STORAGE MEDIUM, AND IMAGE DISPLAY METHOD - A similarity analyzing device includes: an image acquisition section which acquires picked-up images with which image pick-up dates and/or times are associated; and an image registration section which registers a face image showing a picked-up face and with which an image pick-up date and/or time is associated. The device further includes: a degree of similarity calculation section which detects a face in each of picked-up images acquired by the image acquisition section and calculates the degree of similarity between the detected face and the face in the face image registered in the image registration section; and a degree of similarity reduction section in which the larger the difference between the image pick-up date and/or time associated with the picked-up image and that associated with the face image is, the more the degree of similarity of the face calculated by the degree of similarity calculation section is reduced. | 03-15-2012 |
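The date-dependent reduction in the abstract above could be any monotone decay; an exponential half-life (a hypothetical choice, including the function name and the default constant) is one simple way to realise "the larger the difference, the more the similarity is reduced":

```python
from datetime import datetime

def adjusted_similarity(raw_score, picked_up_at, registered_at,
                        half_life_days=365.0):
    """Reduce a raw face-similarity score as the gap between the probe
    image's pick-up date and the registered face image's pick-up date
    grows: the larger the gap, the smaller the reported similarity."""
    gap_days = abs((picked_up_at - registered_at).days)
    return raw_score * 0.5 ** (gap_days / half_life_days)
```

With this shape, a probe image taken on the registration date keeps its raw score, and the score halves for every `half_life_days` of separation.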
20120063643 | Methods, Systems, and Products for Gesture-Activation - Methods, systems, and products are disclosed for recognizing gestures. A sequence of images is captured by a camera and compared to a stored sequence of images in memory. A gesture is then recognized in the stored sequence of images. | 03-15-2012 |
20120063644 | DISTANCE-BASED POSITION TRACKING METHOD AND SYSTEM - A pre-operative stage of a distance-based position tracking method ( | 03-15-2012 |
20120070033 | METHODS FOR OBJECT-BASED IDENTIFICATION, SORTING AND RANKING OF TARGET DETECTIONS AND APPARATUSES THEREOF - A method, non-transitory computer readable medium, and apparatus that provides object-based identification, sorting and ranking of target detections includes determining a target detection score for each pixel in each of one or more images for each of one or more targets. A region around one or more of the pixels with the determined detection scores which are higher than the determined detection scores for the remaining pixels in each of the one or more images is identified. An object based score for each of the identified regions in each of the one or more images is determined. The one or more identified regions with the determined object based score for each region are provided. | 03-22-2012 |
20120070034 | METHOD AND APPARATUS FOR DETECTING AND TRACKING VEHICLES - The present invention relates to a method and apparatus for detecting and tracking vehicles. One embodiment of a system for detecting and tracking an object (e.g., vehicle) in a field of view includes a moving object indication stage for detecting a candidate object in a series of input video frames depicting the field of view and a track association stage that uses a joint probabilistic graph matching framework to associate an existing track with the candidate object. | 03-22-2012 |
20120070035 | METHOD AND INTERFACE OF RECOGNIZING USER'S DYNAMIC ORGAN GESTURE AND ELECTRIC-USING APPARATUS USING THE INTERFACE - A method of recognizing a user's dynamic organ for use in an electric-using apparatus includes comparing a background image and a target image, which are inputted through an imaging element, to detect a candidate region including portions of the target image that are different between the background image and the target image; scanning the candidate region using a window; generating a HOG (histograms of oriented gradients) descriptor of a region of the target image that is scanned when it is judged that the scanned region includes a dynamic organ; measuring a resemblance value between the HOG descriptor of the scanned region and a HOG descriptor of a query template for a gesture of the dynamic organ; and judging that the scanned region includes the gesture of the dynamic organ when the resemblance value meets a predetermined condition. | 03-22-2012 |
20120070036 | Method and Interface of Recognizing User's Dynamic Organ Gesture and Electric-Using Apparatus Using the Interface - A method of recognizing a user's dynamic organ for use in an electric-using apparatus includes scanning a difference image, which reflects brightness difference between a target image and a comparative image that are inputted through an imaging element, using a window; generating a HOG (histograms of oriented gradients) descriptor of a region of the difference image that is scanned when it is judged that the scanned region includes a dynamic organ; measuring a resemblance value between the HOG descriptor of the scanned region and a HOG descriptor of a query template for a gesture of the dynamic organ; and judging that the scanned region includes the gesture of the dynamic organ when the resemblance value meets a predetermined condition, wherein the comparative image is one of frame images previous to the target image. | 03-22-2012 |
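Both gesture-recognition entries above hinge on comparing a HOG descriptor of a scanned region against a query template. A single-cell, numpy-only caricature of that pipeline follows; real HOG uses a grid of cells with block normalisation, and the 0.8 threshold is an assumption, not a value from either application:

```python
import numpy as np

def hog_descriptor(patch, bins=9):
    """One orientation histogram over the whole patch, weighted by
    gradient magnitude and L2-normalised - a single-cell HOG stand-in."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm else hist

def resembles(scanned, template, threshold=0.8):
    """Judge the scanned region a match when the cosine similarity
    (the 'resemblance value') meets the assumed threshold."""
    return float(scanned @ template) >= threshold
```

A vertical-edge patch and its transpose produce orientation histograms concentrated in different bins, so only the self-comparison clears the threshold.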
20120070037 | Method for estimating the motion of a carrier relative to an environment and computing device for navigation system | 03-22-2012 |
20120076353 | INTERACTIVE DISPLAY - Embodiments are disclosed herein that relate to the front-projection of an interactive display. One disclosed embodiment provides an interactive display system comprising a projector and a display screen configured to display an image projected by the projector, the display screen comprising a retroreflective layer and a diffuser layer covering the retroreflective layer, the diffuser layer being configured to diffusely reflect only a portion of light incident on the diffuser layer from the projector such that another portion of light passes through the diffuser layer and is reflected by the retroreflective layer back through the diffuser layer. The interactive display system also comprises a camera configured to capture images of the display screen via light reflected by the retroreflective layer to identify via the images a user gesture performed between the projector and the display screen. | 03-29-2012 |
20120076354 | IMAGE RECOGNITION BASED UPON A BROADCAST SIGNATURE - Methods and apparatus for processing image data are disclosed. In one embodiment, a method includes capturing, via an image sensor, an image that includes a plurality of objects including a target object, and receiving, from the target object, via a medium other than the image sensor, distinguishing information that is broadcast by the target object. The distinguishing information distinguishes the target object from other objects, and is used to select, within the captured image, the target object from among the other objects. | 03-29-2012 |
20120076355 | 3D OBJECT TRACKING METHOD AND APPARATUS - A 3D object tracking method and apparatus in which a model of an object to be tracked is divided into a plurality of polygonal planes and the object is tracked using texture data of the respective planes and geometric data between the respective planes to enable more precise tracking. The 3D object tracking method includes modeling the object to be tracked to generate a plurality of planes, and tracking the plurality of planes, respectively. The modeling of the object includes selecting points from among the plurality of planes, respectively, and calculating projective invariants using the selected points. | 03-29-2012 |
20120076356 | ANOMALY DETECTION APPARATUS - Behavior authority may change depending on the behavior performed by a person, so the criteria for judging whether a behavior is anomalous or normal must change along with that authority. An anomaly detection apparatus is therefore provided that calculates behavior authority information whose judgment criteria for anomalous and normal behaviors are changed according to the behavior performed by the person, detects whether or not the behavior shown by the person is anomalous, and issues an alarm when an anomalous behavior is detected. | 03-29-2012 |
20120076357 | VIDEO PROCESSING APPARATUS, METHOD AND SYSTEM - According to one embodiment, a video processing apparatus includes an acquisition unit, a first extraction unit, a generation unit, a second extraction unit, a computation unit and a selection unit. The acquisition unit is configured to acquire video streams. The first extraction unit is configured to analyze at least one of the moving pictures and the sounds for each video stream and to extract feature values. The generation unit is configured to generate segments by dividing each video stream, and to generate associated segment groups. The second extraction unit is configured to extract, as common video segment groups, the associated segment groups whose number of associated segments is greater than or equal to a threshold. The computation unit is configured to compute a summarization score. The selection unit is configured to select segments to be used for a summarized video as summarization segments from the common video segment groups based on the summarization score. | 03-29-2012 |
20120076358 | Methods for and Apparatus for Generating a Continuum of Three-Dimensional Image Data - The present invention provides methods and apparatus for generating a continuum of image data sprayed over three-dimensional models. The three-dimensional models can be representative of features captured by the image data and based upon multiple image data sets capturing the features. The image data can be captured at multiple disparate points along another continuum. | 03-29-2012 |
20120076359 | SYSTEM AND METHOD FOR TRACKING AN ELECTRONIC DEVICE - A system for tracking a spatially manipulated controlling object operated by a user, using a camera associated with a processor. While the user spatially manipulates the controlling object, an image of the controlling object is picked up via a video camera, and the camera image is analyzed to isolate the part of the image pertaining to the controlling object for mapping the position and orientation of the device in a two-dimensional space. Robust data processing systems and a computerized method employ calibration and tracking algorithms such that minimal user intervention is required for achieving and maintaining successful tracking of the controlling object in changing backgrounds and lighting conditions. | 03-29-2012 |
20120076360 | IMAGE PROCESSING APPARATUS, METHOD AND COMPUTER-READABLE STORAGE MEDIUM - An image processing apparatus includes a receiver, a registration section, a determination section, and a controller. The receiver receives broadcast waves including signals of a plurality of channels. The registration section registers a recognition target. The determination section determines whether or not the recognition target, registered in the registration section, exists in a frame of an image including the signals of the plurality of channels included in the broadcast waves received by the receiver. The controller sequentially switches, in accordance with a determination result obtained by the determination section, the plurality of channels received by the receiver. | 03-29-2012 |
20120076361 | OBJECT DETECTION DEVICE - A depth histogram is created for each of a plurality of local regions of the depth image by grouping, according to specified depths, the depth information for the individual pixels that are contained in the local regions. A degree of similarity between two of the depth histograms for two of the local regions at different positions in the depth image is calculated as a feature. A depth image for training that has a high degree of certainty is defined as a positive example, a depth image for training that has a low degree of certainty is defined as a negative example, a classifier that is suitable for classifying the positive example and the negative example is constructed, and an object that is a target of detection is detected in the depth image, using the classifier and based on the feature. | 03-29-2012 |
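The per-region depth histogram and the similarity feature from the abstract above can be mocked up directly; the bin edges and the choice of histogram intersection as the similarity measure are assumptions for illustration:

```python
import numpy as np

def depth_histogram(region, edges):
    """Group a local region's per-pixel depths into specified depth bins,
    normalised so regions of different sizes stay comparable."""
    hist, _ = np.histogram(region, bins=edges)
    return hist / max(hist.sum(), 1)

def histogram_similarity(h1, h2):
    """Histogram intersection between two local regions' depth histograms:
    1.0 for identical depth distributions, 0.0 for disjoint ones. This
    scalar is the feature fed to the trained classifier."""
    return float(np.minimum(h1, h2).sum())
```

Two regions on the same surface produce near-identical depth distributions (similarity near 1), while a region on an object and one on the background behind it do not, which is what makes the feature discriminative.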
20120082338 | ATTITUDE ESTIMATION BY REDUCING NOISE WITH DRAGBACK - In general, in one embodiment, a starfield image as seen by an object is analyzed. Compressive samples are taken of the starfield image and, in the compressed domain, processed to remove noise. Stars in the starfield image are identified and used to determine an attitude of the object. | 04-05-2012 |
20120082339 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus may include an obtaining unit to obtain a number of users from information on detection of a face region including a face in a captured image provided at the apparatus. The apparatus also may include a setting unit to set a display region for content and a display region for a captured image in a display screen; and a display image generation unit to generate a display image to be displayed in the display region for a captured image, in accordance with the information on the detection, the number of users, and the display region set for a captured image. | 04-05-2012 |
20120082340 | SYSTEM AND METHOD FOR PROVIDING MOBILE RANGE SENSING - The present invention provides an improved method for estimating range of objects in images from various distances comprising receiving a set of images of the scene having multiple objects from at least one camera in motion. Due to the motion of the camera, each of the images are obtained at different camera locations. Then an object visible in multiple images is selected. Data related to approximate camera positions and orientations and the images of the visible object are used to estimate the location of the object relative to a reference coordinate system. Based on the computed data, a projected location of the visible object is computed and the orientation angle of the camera for each image is refined. Additionally, pairs of cameras with various locations can obtain dense stereo for regions of the image at various ranges. | 04-05-2012 |
20120082341 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - A method is provided for displaying physical objects. The method comprises capturing an input image of physical objects, and matching a three-dimensional model to the physical objects. The method further comprises producing a modified partial image by at least one of modifying a portion of the matched three-dimensional model, or modifying a partial image extracted from the input image using the matched three-dimensional model. The method also comprises displaying an output image including the modified partial image superimposed over the input image. | 04-05-2012 |
20120082342 | 3 DIMENSION TRACKING SYSTEM FOR SURGERY SIMULATION AND LOCALIZATION SENSING METHOD USING THE SAME - The 3-dimensional tracking system according to the present disclosure includes: a photographing unit for photographing an object; a recognizing unit for recognizing a marker attached to the object by binarizing an image of the object photographed by the photographing unit; an extracting unit for extracting a 2-dimensional coordinate of the marker recognized by the recognizing unit; and a calculating unit for calculating a 3-dimensional coordinate from the 2-dimensional coordinate of the marker by using an intrinsic parameter of the photographing unit. | 04-05-2012 |
20120082343 | DETECTING A CHANGE BETWEEN IMAGES OR IN A SEQUENCE OF IMAGES - Detecting a change between images is performed more effectively when a measure of change is used for the detection that depends on a length of the code blocks into which the images are individually entropy-encoded, and which are allocated to different sections of the respective image, since the length of these code blocks is also available without decoding. This uses the fact that the length or amount of data of a code block directly depends, in large part, on the entropy and hence on the complexity of the allocated image section, and that changes between images are, with high probability, also reflected in a change of complexity. | 04-05-2012 |
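Because a code block's length tracks the entropy of its image section, change detection can run on lengths alone. In the sketch below, zlib stands in for the image codec's entropy coder (a substitution for illustration, not the codec the application targets), and the 20% threshold is arbitrary:

```python
import zlib

def tile_code_lengths(tiles, level=6):
    """Entropy-code each tile (here: raw bytes through zlib) and record
    only the code length; with a block-based image codec these lengths
    are available without decoding the blocks."""
    return [len(zlib.compress(t, level)) for t in tiles]

def changed_tiles(prev_lengths, cur_lengths, rel_thresh=0.2):
    """Flag tiles whose code length moved by more than rel_thresh,
    taking a complexity change as evidence of a change between images."""
    return [abs(cur - prev) / max(prev, 1) > rel_thresh
            for prev, cur in zip(prev_lengths, cur_lengths)]
```

A flat tile compresses to a few bytes while a busy tile barely compresses at all, so replacing one with the other shows up as a large relative jump in code length without either tile being decoded.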
20120082344 | METHOD AND APPARATUS FOR COMPRESSED SENSING - Method and apparatus for compressed sensing yields acceptable quality reconstructions of an object from reduced numbers of measurements. A component x of a signal or image is represented as a vector having m entries. | 04-05-2012 |
20120087539 | METHOD OF DETECTING FEATURE POINTS OF AN OBJECT IN A SYSTEM FOR MOTION DETECTION - A method of detecting feature points of an object in a system for motion detection includes obtaining a first image of the object from a first camera and a second image of the object from a second camera, extracting a foreground image from each of the first image and the second image, based on an assumption that the foreground image is a T-pose image, segmenting the foreground image into a first set of sections, identifying a first set of feature points associated with the first set of sections, obtaining a T-pose image with a set of predetermined feature points, and determining whether the foreground image is a T-pose image by comparing the first set of feature points with the set of predetermined feature points. | 04-12-2012 |
20120087540 | COMPUTING DEVICE AND METHOD FOR MOTION DETECTION - A computing device for motion detection in a system capable of detecting feature points of an object of interest is disclosed. The computing device includes a vector forming unit to form a plurality of vectors associated with a set of the feature points and form a vector set based on the vectors, a posture identifying unit to identify a match of a posture in a database based on the vector set, a motion similarity unit to identify a set of predetermined postures in the database based on the matched posture and an immediately previous matched posture, and a motion identifying unit to identify a predetermined motion in the database based on the set of predetermined postures. | 04-12-2012 |
20120087541 | CAMERA FOR DETECTING DRIVER'S STATE - The present invention provides a camera for detecting a driver's drowsiness state, which can increase the number of pixels in an image of a driver's eye even when using an image sensor having the same number of pixels as a conventional camera instead of a high definition camera. The camera of the present invention is, thus, capable of determining whether the driver's eyes are open or closed. The camera for detecting the driver's state according to the present invention includes a cylindrical lens mounted in front of the camera configured so as to enlarge an image in the vertical direction, a convex lens located in the rear of the cylindrical lens, an image sensor for taking an image of a driver's face formed by the cylindrical lens and the convex lens, and an image processor for extracting an eye area from the image of the driver's face and determining whether the driver's eyes are open or closed. | 04-12-2012 |
20120087542 | LASER DETECTION DEVICE AND LASER DETECTION METHOD - A laser detection method and apparatus for detection of laser beams can each perform operations for producing an interference image from detected light radiation, recording the interference image, and processing the recorded interference image in order to detect laser radiation. In order to allow more robust and faster laser detection, the apparatus and method can detect a spatially defined point distribution from the interference image and transform the point distribution such that a grid interval remains between a point grid in the point distribution, and a fixed position, which is independent of a position in the original image, is associated with the point grid. The apparatus and method can further detect a grid interval in the point grid that was transformed, and detect the position of the point grid from the point distribution by filtering with the assistance of the grid interval. | 04-12-2012 |
20120087543 | IMAGE-BASED HAND DETECTION APPARATUS AND METHOD - An image-based hand detection apparatus includes a hand image detection unit for detecting a hand image corresponding to a shape of a hand clenched to form a fist from an image input. A feature point extraction unit extracts feature points from an area, having lower brightness than a reference value, in the detected hand image. An image rotation unit compares the feature points of the detected hand image with feature points of hand images stored in a hand image storage unit, and rotates the detected hand image or the stored hand images. A matching unit compares the detected hand image with the stored hand images and generates a result of the comparison. If at least one of the stored hand images is matched with the detected hand image, a hand shape recognition unit selects the at least one of the stored hand images as a matching hand image. | 04-12-2012 |
20120087544 | SUBJECT TRACKING DEVICE, SUBJECT TRACKING METHOD, SUBJECT TRACKING PROGRAM PRODUCT AND OPTICAL DEVICE - A subject tracking device includes: a tracking zone setting unit that sets an area where a main subject is present within a captured image as a tracking zone; a tracking unit that tracks the main subject based upon an image output corresponding to the tracking zone; and an arithmetic operation unit that determines, through arithmetic operation, image-capturing conditions based upon an image output corresponding to a central area within the tracking zone. | 04-12-2012 |
20120087545 | Fusing depth and pressure imaging to provide object identification for multi-touch surfaces - An apparatus for inputting information into a computer includes a 3d sensor that senses 3d information and produces a 3d output. The apparatus includes a 2d sensor that senses 2d information and produces a 2d output. The apparatus includes a processing unit which receives the 2d and 3d output and produces a combined output that is a function of the 2d and 3d output. A method for inputting information into a computer is also provided. The method includes the steps of producing a 3d output with a 3d sensor that senses 3d information. There is the step of producing a 2d output with a 2d sensor that senses 2d information. There is the step of receiving the 2d and 3d output at a processing unit. There is the step of producing a combined output with the processing unit that is a function of the 2d and 3d output. | 04-12-2012 |
20120093357 | VEHICLE THREAT IDENTIFICATION ON FULL WINDSHIELD HEAD-UP DISPLAY - A method to dynamically register a graphic identifying a potentially threatening vehicle onto a driving scene of a vehicle utilizing a substantially transparent windscreen head up display includes monitoring a vehicular environment, identifying the potentially threatening vehicle based on the monitored vehicular environment, determining the graphic identifying the potentially threatening vehicle, dynamically registering a location of the graphic upon the substantially transparent windscreen head up display corresponding to the driving scene of the vehicle, and displaying the graphic upon the substantially transparent windscreen head up display, wherein the substantially transparent windscreen head up display includes one of light emitting particles or microstructures over a predefined region of the windscreen permitting luminescent display while permitting vision therethrough. | 04-19-2012 |
20120093358 | CONTROL OF REAR-VIEW AND SIDE-VIEW MIRRORS AND CAMERA-COORDINATED DISPLAYS VIA EYE GAZE - An adaptive vision system includes a vision component to present an image to a user, a sensor for detecting a vision characteristic of the user and generating a sensor signal representing the vision characteristic of the user; and a processor in communication with the sensor and the vision component, wherein the processor receives the sensor signal, analyzes the sensor signal based upon an instruction set to determine the vision characteristic of the user, and configures the vision component based upon the vision characteristic of the user to modify the image presented to the user. | 04-19-2012 |
20120093359 | Batch Detection Association for Enhanced Target Discrimination in Dense Detection Environments - The embodiments described herein relate to systems and techniques for processing batch detection information received from one or more sensors configured to observe objects of interest. In particular, the systems and techniques are configured to enhance track performance, particularly in dense target environments. A substantially large number of batch detections can be processed in a number of phases of varying complexity. An initial phase performs relatively low complexity processing on substantially all detections obtained over an extended batch period, approximating object motion with a simplified model (e.g., linear). The batch detections are divided and redistributed into swaths according to the resulting approximations. A subsequent phase performs greater complexity (e.g., quadratic) processing on the divided sets of detections. The subdivision and redistribution of detections lends itself to parallelization. Beneficially, detections over extended batch periods can be processed very efficiently to provide improved target tracking and discrimination in dense target environments. | 04-19-2012 |
20120093360 | HAND GESTURE RECOGNITION - Systems, methods, and machine readable and executable instructions are provided for hand gesture recognition. A method for hand gesture recognition can include detecting, with an image input device in communication with a computing device, movement of an object. A hand pose associated with the moving object is recognized and a response corresponding to the hand pose is initiated. | 04-19-2012 |
20120093361 | TRACKING SYSTEM AND METHOD FOR REGIONS OF INTEREST AND COMPUTER PROGRAM PRODUCT THEREOF - In one exemplary embodiment, a tracking system for region-of-interest (ROI) performs a feature-point detection locally on an ROI of an image frame at an initial time via a feature point detecting and tracking module, and tracks the detected features. A linear transformation module determines a transform relationship between two ROIs of two consecutive image frames, by using a plurality of corresponding feature points. An estimation and update module predicts and corrects a moving location for the ROI at a current time. Based on the result corrected by the estimation and update module, an outlier rejection module removes at least one outlier outside the ROI. | 04-19-2012 |
20120093362 | DEVICE AND METHOD FOR DETECTING SPECIFIC OBJECT IN SEQUENCE OF IMAGES AND VIDEO CAMERA DEVICE - A device for detecting a specific object includes: a suspect object region detection unit configured to create a foreground mask of each frame of image in a sequence of images and perform an inter-frame differential process on the foreground masks to detect a suspect object region; a unit for modeling regions with a high incidence of false positives configured to, if at least one suspect object region is detected, determine a suspect object region satisfying a predetermined condition as a region with a high incidence of false positives and build a model of each determined region; and a post-processing unit configured to match each suspect object region not determined as a region with a high incidence of false positives against at least one corresponding model, to detect the specific object according to a sequence of mismatching suspect object regions, and to determine absence of the specific object if no suspect object region is detected. | 04-19-2012 |
20120093363 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - When a detection-target subject is imaged with an image pickup device having line-defect pixels, the detection-target subject is imaged, with the image pickup device or the detection-target subject rotated at a predetermined angle so that the edge of one side of the detection-target subject is not parallel to each of horizontal and vertical scanning lines of the image pickup device, and a gray-scale image is captured by a control apparatus. In the gray-scale image, the luminance of each of the line-defect pixels is corrected by interpolation with luminances of pixels adjacent to both sides of the line-defect pixel. The gray-scale image is subjected to sub-pixel processing to detect the edge of the detection-target subject. When the detection-target subject is a component in a rectangular shape, rotation is made so that four sides are not parallel to each of the horizontal and vertical scanning lines of the image pickup device. | 04-19-2012 |
20120093364 | OBJECT TRACKING DEVICE, OBJECT TRACKING METHOD, AND OBJECT TRACKING PROGRAM - An object tracking apparatus is provided that enables the possibility of erroneous tracking to be further reduced. | 04-19-2012 |
20120093365 | CONFERENCE SYSTEM, MONITORING SYSTEM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND A NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - To provide a conference system, a monitoring system, an image processing apparatus, an image processing method and a non-transitory computer-readable storage medium that stores an image processing program capable of accurately and effectively recognizing an object based on a fisheye-distorted image photographed at a wide angle. | 04-19-2012 |
20120093366 | IMAGE SELECTING APPARATUS, CAMERA, AND METHOD OF SELECTING IMAGE - An image selecting apparatus comprises an input unit. | 04-19-2012 |
20120093367 | METHOD AND APPARATUS FOR ASSESSING THE THREAT STATUS OF LUGGAGE - A method and apparatus for assessing a threat status of a piece of luggage. The method comprises the steps of scanning the piece of luggage with penetrating radiation to generate image data and processing the image data with a computing device to identify one or more objects represented by the image data. The method also includes further processing the image data to compensate the image data for interaction between the object and the penetrating radiation to produce compensated image data and then determine the threat status of the piece of luggage. | 04-19-2012 |
20120093368 | ADAPTIVE SUBJECT TRACKING METHOD, APPARATUS, AND COMPUTER READABLE RECORDING MEDIUM - The present invention relates to a method for adaptively tracking a subject. The method includes the steps of: comparing a first block which indicates a region corresponding to a specific subject in a first frame with at least one block included in a second frame and determining a specific block among at least one block in the second frame which has the highest degree of similarity to the first block as a second block which indicates a region corresponding to the specific subject in the second frame; and detecting the specific subject from at least part of the whole region in the second frame by using a subject detection technology, if the degree of similarity between the first block and the second block is less than a predetermined threshold value, and resetting the second block in the second frame based on a region corresponding to the detected specific subject. | 04-19-2012 |
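The block matching with re-detection fallback described in the 20120099768 abstract above can be sketched as follows. This is a minimal illustration only, not the patented implementation: the normalized-correlation similarity measure, the search window, the 0.7 threshold, and the names `similarity` and `track_block` are all assumptions made for the sketch. When the returned flag is false, a caller would re-run full subject detection and reset the block, as the abstract describes.

```python
import numpy as np

def similarity(a, b):
    """Normalized correlation between two equal-sized blocks (1.0 = identical)."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def track_block(frame, template, prev_xy, search=8, thresh=0.7):
    """Scan a window around the previous position for the best-matching block.

    Returns (x, y, ok); ok=False means the best similarity fell below the
    threshold, so the caller should fall back to full subject re-detection."""
    h, w = template.shape
    px, py = prev_xy
    best, best_xy = -1.0, prev_xy
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = px + dx, py + dy
            if x < 0 or y < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            s = similarity(template, frame[y:y + h, x:x + w])
            if s > best:
                best, best_xy = s, (x, y)
    return best_xy[0], best_xy[1], best >= thresh
```

In practice a library routine such as OpenCV's template matching would replace the double loop; the exhaustive search is kept here only to make the per-candidate comparison explicit.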
20120093369 | METHOD, TERMINAL DEVICE, AND COMPUTER-READABLE RECORDING MEDIUM FOR PROVIDING AUGMENTED REALITY USING INPUT IMAGE INPUTTED THROUGH TERMINAL DEVICE AND INFORMATION ASSOCIATED WITH SAME INPUT IMAGE - The present invention relates to a method for providing augmented reality (AR) by using an image inputted to a terminal and information relating to the inputted image. The method includes the steps of: (a) acquiring recognition information on an object included in the image inputted through the terminal; (b) instructing to search detailed information on the recognized object and providing a tag accessible to the detailed information, if the searched detailed information is acquired, on a location of the object appearing on a screen of the terminal in a form of the augmented reality; and (c) displaying the detailed information corresponding to the tag, if the tag is selected, in the form of the augmented reality; wherein, at the step (b), the information on the location of the object is acquired by applying an image recognition process to the inputted image. | 04-19-2012 |
20120099762 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - In a case where detecting a face contained in an image, the face is detected in all directions of the image by combining the rotation of a detector in the face detecting direction, and the rotation of the image itself. If the angle made by the image direction and the detecting direction of the detector is an angle at which image deterioration readily occurs, the detection range of the detector is made narrower than that for an angle at which image deterioration hardly occurs. | 04-26-2012 |
20120099763 | IMAGE RECOGNITION APPARATUS - An image recognition part of an image recognition apparatus recognizes an object based on a target area in an outside-vehicle image obtained by a camera installed in a vehicle. A position identifying part identifies an optical axis position of the camera relative to the vehicle based on the outside-vehicle image, and an area changing part changes a position of the target area in the outside-vehicle image according to the optical axis position of the camera. Therefore, it is possible to recognize an object properly based on the target area in the outside-vehicle image even though the optical axis position of the camera is displaced. | 04-26-2012 |
20120099764 | CALCULATING TIME TO GO AND SIZE OF AN OBJECT BASED ON SCALE CORRELATION BETWEEN IMAGES FROM AN ELECTRO OPTICAL SENSOR - A method and a system for calculating a time to go value between a vehicle and an intruding object. A first image of the intruding object at a first point of time is retrieved. A second image of the intruding object at a second point of time is retrieved. The first image and the second image are filtered so that they become independent of absolute signal energy and so that edges become enhanced. An X fractional pixel position and a Y fractional pixel position are set to zero. The X fractional pixel position denotes a horizontal displacement at sub-pixel level and the Y fractional pixel position denotes a vertical displacement at sub-pixel level. A scale factor is selected. The second image is scaled with the scale factor and resampled to the X fractional pixel position and the Y fractional pixel position, which results in a resampled scaled image. Correlation values are calculated between the first image and the resampled scaled image for different horizontal and vertical displacements at pixel level of the resampled scaled image. A maximum correlation value at sub-pixel level is found based on the correlation values, and the X fractional pixel position and the Y fractional pixel position are updated. j is set to j+1, and the scaling of the second image, the calculation of correlation values, the finding of the maximum correlation value and the setting of j to j+1 are repeated a predetermined number of times. i is set to i+1, and the selection of the scale factor, the scaling of the second image, the calculation of correlation values, the finding of the maximum correlation value, the setting of j to j+1, and the setting of i to i+1 are repeated a predetermined number of times. A largest maximum correlation value is found among the maximum correlation values, together with the scale factor associated with it. The time to go is calculated based on that scale factor. | 04-26-2012 |
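The core of the scale-correlation search in the 20120099764 abstract can be sketched as below. This is an illustrative reduction, not the patented method: it omits the sub-pixel refinement loops, uses nearest-neighbour resampling, and the closing `time_to_go` formula is the widely used pinhole-camera approximation (time to contact ≈ Δt / (s − 1) for apparent-size growth factor s), which may differ from the patent's exact computation. All function names are invented for the sketch.

```python
import numpy as np

def rescale(img, s):
    """Nearest-neighbour rescale of img about its centre by factor s (s > 1 enlarges)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sy = np.clip(np.round(cy + (ys - cy) / s).astype(int), 0, h - 1)
    sx = np.clip(np.round(cx + (xs - cx) / s).astype(int), 0, w - 1)
    return img[sy, sx]

def best_scale(img1, img2, scales):
    """Rescale img1 by each candidate factor, correlate against img2, keep the winner."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        d = np.sqrt((a * a).sum() * (b * b).sum())
        return 0.0 if d == 0 else (a * b).sum() / d
    return max(scales, key=lambda s: ncc(rescale(img1, s), img2))

def time_to_go(scale, dt):
    """Pinhole-camera approximation: if apparent size grows by `scale` over dt,
    the time to contact is roughly dt / (scale - 1)."""
    return dt / (scale - 1.0)
```

For example, an object whose image grows by a factor of 1.25 between frames taken 0.5 s apart yields a time to go of about 2 s.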
20120099765 | METHOD AND SYSTEM OF VIDEO OBJECT TRACKING - Methods and systems are provided to determine a target tracking box that surrounds a moving target. The pixels that define an image within the target tracking box can be classified as background pixels, foreground pixels, and changing pixels which may include pixels of an articulation, such as a portion of the target that moves relatively to the target tracking box. Identification of background image pixels improves the signal-to-noise ratio of the image, which is defined as the ratio between the number of pixels belonging to the foreground to the number of changing pixels, and which is used to track the moving target. Accordingly, tracking of small and multiple moving targets becomes possible because of the increased signal-to-noise ratio. | 04-26-2012 |
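The pixel classification and signal-to-noise ratio defined in the 20120099765 abstract can be sketched as follows. This is a simplified interpretation, not the patented classifier: the exact rules for labelling a pixel as "changing", the tolerance value, and the names `classify_box` and `box_snr` are assumptions; here a changing pixel is one that deviates from the background model and has also moved since the previous frame, approximating an articulating part of the target.

```python
import numpy as np

def classify_box(curr, prev, background, tol=10):
    """Label each pixel inside a tracking box: 0 = background, 1 = foreground,
    2 = changing (deviates from the background model AND moved since the
    previous frame, e.g. an articulating part of the target)."""
    labels = np.zeros(curr.shape, dtype=int)
    not_bg = np.abs(curr - background) > tol
    moved = np.abs(curr - prev) > tol
    labels[not_bg] = 1
    labels[not_bg & moved] = 2
    return labels

def box_snr(labels):
    """Signal-to-noise ratio as the abstract defines it: the number of
    foreground pixels over the number of changing pixels."""
    changing = int((labels == 2).sum())
    foreground = int((labels == 1).sum())
    return foreground / changing if changing else float("inf")
```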
20120106781 | SIGNATURE BASED DRIVE-THROUGH ORDER TRACKING SYSTEM AND METHOD - A system and method for providing signature-based drive-through order tracking. An image with respect to a vehicle at a POS unit can be captured at an order point and a delivery point (e.g., a payment point and a pick-up point) utilizing an image capturing unit by detecting the presence of the vehicle at each point utilizing a vehicle presence sensor. The captured image can be processed in order to extract a small region of interest and can be reduced to a unique signature. The extracted signature of the vehicle at the order point can be stored into a database together with the corresponding order and the vehicle image. The signature extracted at the delivery point can be matched with the signature stored in the database. If a match is found, the order associated with the vehicle together with the images captured at the delivery point and the order point can be displayed in a user interface at the delivery point to ensure that the right order is delivered to a customer. | 05-03-2012 |
20120106782 | Detector for chemical, biological and/or radiological attacks - This specification generally relates to methods and algorithms for detection of chemical, biological, and/or radiological attacks. The methods use one or more sensors that can have visual, audio, and/or thermal sensing abilities and can use algorithms to determine, from the behavior patterns of people, whether there has been a chemical, biological and/or radiological attack. | 05-03-2012 |
20120106783 | OBJECT TRACKING METHOD - An object tracking method includes steps of obtaining multiple first classifications of pixels within a first focus frame in a first frame picture, wherein the first focus frame includes an object to be tracked and has a first rectangular frame in a second frame picture; performing a positioning process to obtain a second rectangular frame; and obtaining color features of pixels around the second rectangular frame sequentially and establishing multiple second classifications according to the color features. The established second classifications are compared with the first classifications sequentially to obtain an approximation value, which is compared with a predetermined threshold. The second rectangular frame is progressively adjusted, so as to establish a second focus frame. By analyzing color features of the object's pixels in a classification manner, the method detects the shape and size of the object so as to update the information of the focus frame. | 05-03-2012 |
20120106784 | APPARATUS AND METHOD FOR TRACKING OBJECT IN IMAGE PROCESSING SYSTEM - A method, apparatus, and system track an object in an image or a video. Pose information is extracted using a relation of at least one feature point extracted in a first Region of Interest (RoI). A pose is estimated using the pose information. A second RoI is set using the pose, and the second RoI is estimated using a filtering scheme. | 05-03-2012 |
20120106785 | METHODS AND SYSTEMS FOR PRE-PROCESSING TWO-DIMENSIONAL IMAGE FILES TO BE CONVERTED TO THREE-DIMENSIONAL IMAGE FILES - Disclosed herein are methods and systems of efficiently, effectively, and accurately preparing images for a 2D to 3D conversion process by pre-treating occlusions and transparencies in original 2D images. A single 2D image, or a sequence of images, is ingested, segmented into discrete elements, and the discrete elements are individually reconstructed. The reconstructed elements are then re-composited and ingested into a 2D to 3D conversion process. | 05-03-2012 |
20120106786 | OBJECT DETECTING DEVICE - An object detecting device includes a camera ECU that detects an object from image data of a predetermined area that has been captured by a monocular camera, a fusion processing portion that calculates the pre-correction horizontal width of the detected object, a numerical value calculating portion that estimates the length in the image depth direction of the calculated pre-correction horizontal width, and a collision determining portion that corrects the pre-correction horizontal width calculated by the fusion processing portion, based on the estimated length in the image depth direction. | 05-03-2012 |
20120106787 | APPARATUS AND METHODS FOR ANALYSING GOODS PACKAGES - An apparatus for constructing a data model of a goods package from a series of images, one of the series of images comprising an image of the goods package, comprises a processor and a memory for storing one or more routines. When the one or more routines are executed under control of the processor the apparatus extracts element data from goods package elements in the series of images and constructs the data model by associating element data from a number of visible sides of the goods package with the goods package. The apparatus may also analyse a candidate character string read in an OCR process from one of the series of images of the goods package. The apparatus may also analyse a barcode read from an image of a goods package. | 05-03-2012 |
20120106788 | Image Measuring Device, Image Measuring Method, And Computer Program - Provided are an image measuring device, an image measuring method, and a computer program, capable of performing accurate calibration and accurately measuring a desired physical quantity even in the case of an object to be measured having a shape in which selection and tracking of target points are difficult, or an object to be measured that moves as time elapses. Frame images are played back frame by frame, and selection of a plurality of frame images is accepted from the frame images played back frame by frame. A synthesized image in which the selected and accepted frame images are superimposed is generated. The generated synthesized image is displayed, and a predetermined physical quantity is measured on the displayed synthesized image. | 05-03-2012 |
20120106789 | IMAGE PROCESSING APPARATUS AND METHOD AND PROGRAM - An image processing apparatus includes an image input configured to receive image data, a target extraction device configured to extract an object from the image data as a target object based on recognizing a first movement by the object, and a gesture recognition device configured to issue a command based on recognizing a second movement by the target object. | 05-03-2012 |
20120106790 | Face or Other Object Detection Including Template Matching - A template matching module is configured to program a processor to apply multiple differently-tuned object detection classifier sets in parallel to a digital image to determine one or more of an object type, configuration, orientation, pose or illumination condition, and to dynamically switch between object detection templates to match a determined object type, configuration, orientation, pose, blur, exposure and/or directional illumination condition. | 05-03-2012 |
20120106791 | IMAGE PROCESSING APPARATUS AND METHOD THEREOF - An image processing apparatus such as a surveillance apparatus and method thereof are provided. The image processing apparatus includes: an object detecting unit which detects a plurality of moving objects from at least one of two or more images obtained by photographing a surveillance area from two or more view points, respectively; a depth determination unit which determines depths of the moving objects based on the two or more images, wherein the depth determination unit determines the moving objects as different objects if the moving objects have different depths. | 05-03-2012 |
20120106792 | USER INTERFACE APPARATUS AND METHOD USING MOVEMENT RECOGNITION - A movement recognition method and a user interface are provided. A skin color is detected from a reference face area of an image. A movement-accumulated area, in which movements are accumulated, is detected from sequentially accumulated image frames. Movement information corresponding to the skin color is detected from the detected movement-accumulated area. A user interface screen is created and displayed using the detected movement information. | 05-03-2012 |
20120106793 | METHOD AND SYSTEM FOR IMPROVING THE QUALITY AND UTILITY OF EYE TRACKING DATA - A system and method for interpreting eye-tracking data are provided. The system and method comprise receiving raw data from an eye tracking study performed using an eye tracking mechanism and structural information pertaining to an electronic document that was the subject of the study. The electronic document and its structural information are used to compute a plurality of transition probability values. The eye-tracking data and the transition probability values are used to compute a plurality of gaze probability values. Using the transition probability values and the gaze probability values, a maximally probable transition sequence corresponding to the most likely direction of the user's gaze upon the document is identified. | 05-03-2012 |
20120106794 | METHOD AND APPARATUS FOR TRAJECTORY ESTIMATION, AND METHOD FOR SEGMENTATION - A trajectory estimation apparatus includes: an image acceptance unit which accepts images that are temporally sequential and included in the video; a hierarchical subregion generating unit which generates subregions at hierarchical levels by performing hierarchical segmentation on each of the images accepted by the image acceptance unit such that, among subregions belonging to hierarchical levels different from each other, a spatially larger subregion includes spatially smaller subregions; and a representative trajectory estimation unit which estimates, as a representative trajectory, a trajectory, in the video, of a subregion included in a certain image, by searching for a subregion that is most similar to the subregion included in the certain image, across hierarchical levels in an image different from the certain image. | 05-03-2012 |
20120106795 | SYSTEM AND METHOD FOR OPTIMIZING CAMERA SETTINGS - There is provided a recognition system. The recognition system is coupled to an image capturing device, and determines a first matching percentage by comparing a first live image with a first reference image, determines a second matching percentage by comparing a second live image with the first reference image, compares the first matching percentage with the second matching percentage to determine a direction of adjustment of a setting of the image capturing device, and generates a feedback signal to adjust the setting based on the direction of adjustment. The first live image and second live image are captured by the image capturing device. | 05-03-2012 |
20120106796 | CREATING A CUSTOMIZED AVATAR THAT REFLECTS A USER'S DISTINGUISHABLE ATTRIBUTES - A capture system captures detectable attributes of a user. A differential system compares the detectable attributes with a normalized model of attributes, wherein the normalized model of attributes characterize normal representative attribute values across a sample of a plurality of users and generates differential attributes representing the differences between the detectable attributes and the normalized model of attributes. Multiple separate avatar creator systems receive the differential attributes and each apply the differential attributes to different base avatars to create custom avatars which reflect a selection of the detectable attributes of the user which are distinguishable from the normalized model of attributes. | 05-03-2012 |
20120106797 | IDENTIFICATION OF OBJECTS IN A VIDEO - Techniques related to identifying objects in a video are generally described. One example method for identifying a moving object in a video may include generating a background frame and a foreground frame based on the video, comparing the foreground and the background frames at each corresponding location, acquiring an object area based on the comparison, determining if the object area contains a moving object based on the size and shape of the object area, identifying the moving object against templates of target objects, and updating the background frame according to the comparison. | 05-03-2012 |
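The background-comparison and background-update steps in the 20120106797 abstract resemble classical background subtraction with a running-average model, which can be sketched as below. This is a generic illustration under that assumption, not the patented technique; the threshold, the blending rate `alpha`, and the name `detect_and_update` are invented, and the abstract's size/shape test and template matching stages are omitted.

```python
import numpy as np

def detect_and_update(frame, background, thresh=25.0, alpha=0.05):
    """One step of background subtraction with model update: threshold
    |frame - background| to obtain the object mask, then blend the frame
    into the background model (running average) only where no object was
    found, so the moving object is not absorbed into the background."""
    diff = np.abs(frame.astype(float) - background)
    mask = diff > thresh
    new_bg = background.copy()
    new_bg[~mask] = (1 - alpha) * background[~mask] + alpha * frame[~mask]
    return mask, new_bg
```

Updating only the unmasked pixels is one common policy; libraries such as OpenCV offer more robust mixture-of-Gaussians models for the same role.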
20120106798 | SYSTEM AND METHOD FOR EXTRACTING REPRESENTATIVE FEATURE - A representative feature extraction system which selects a representative feature from an input data group includes: occurrence distribution memory means for memorizing an occurrence distribution with respect to feature quantities assumed to be input; evaluation value calculation means for calculating, with respect to each of data items in the data group, the sum of distances to the other data items included in the data group based on the occurrence distribution, to determine an evaluation value for the data item; and data selecting means for selecting the data item having the smallest evaluation value as a representative feature of the data group. | 05-03-2012 |
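The selection rule in the 20120106798 abstract, picking the item whose summed distance to all other items is smallest, is essentially a medoid computation, and can be sketched as below. The occurrence-distribution weighting is approximated here by optional per-item weights; that simplification, the Euclidean distance, and the name `representative_feature` are assumptions of this sketch, not the patented formulation.

```python
import numpy as np

def representative_feature(data, weights=None):
    """Return the index of the item whose (optionally weighted) sum of
    Euclidean distances to all other items is smallest -- the medoid of
    the group, used here as its representative feature."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    w = np.ones(n) if weights is None else np.asarray(weights, dtype=float)
    best_i, best_cost = 0, float("inf")
    for i in range(n):
        # evaluation value for item i: weighted distances to every other item
        cost = sum(w[j] * np.linalg.norm(data[i] - data[j])
                   for j in range(n) if j != i)
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i
```

Weighting makes distances toward frequently occurring items more expensive, pulling the representative toward them, which mirrors the abstract's use of an occurrence distribution in the evaluation value.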
20120106799 | TARGET DETECTION METHOD AND APPARATUS AND IMAGE ACQUISITION DEVICE - The present invention provides a target detection method comprising the following steps: controlling a modulated light emitting device to emit optical pulse signals with a first light intensity and a second light intensity toward a target to be detected and a background, wherein the target to be detected and the background reflect the light pulse signals differently; controlling an image sensor to acquire images of the target to be detected and the background, wherein the image sensor comprises a plurality of image acquisition regions and successively scans the same image acquisition region once at the first light intensity and once at the second light intensity to obtain a first light intensity image and a second light intensity image, and stores them into corresponding locations in a first frame image and a second frame image respectively; and distinguishing the target to be detected from the background using the first frame image and the second frame image. The present invention also provides a target detection apparatus and an image acquisition device. The invention can precisely detect targets, even moving targets, against a strong light background. | 05-03-2012 |
20120114171 | EDGE DIVERSITY OBJECT DETECTION - Methods for detecting objects in an image. The method includes a) receiving magnitude and orientation values for each pixel in an image and b) assigning each pixel to one of a predetermined number of orientation bins based on the orientation value of each pixel. The method also includes c) determining, for a first pixel, a maximum of all the pixel magnitude values for each orientation bin in a predetermined region surrounding the first pixel. The method also includes d) summing the maximum pixel magnitude values for each of the orientation bins in the predetermined region surrounding the first pixel, e) assigning the sum to the first pixel and f) repeating steps c), d) and e) for all the pixels in the image. | 05-10-2012 |
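Steps a) through f) of the 20120114171 abstract can be sketched as follows. This is a minimal per-pixel illustration, not the patented implementation: the number of orientation bins, the square neighbourhood of radius `radius`, and the name `edge_diversity_map` are assumptions, and a production version would vectorize the neighbourhood maxima rather than loop over pixels.

```python
import numpy as np

def edge_diversity_map(magnitude, orientation, n_bins=4, radius=1):
    """Steps b)-f) of the abstract: bin each pixel by edge orientation
    (orientation in radians, [0, 2*pi)), then, for every pixel, sum the
    neighbourhood's maximum magnitude per orientation bin. Pixels surrounded
    by strong edges in many different orientations score highest."""
    h, w = magnitude.shape
    bins = np.floor(orientation / (2 * np.pi) * n_bins).astype(int) % n_bins
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            m = magnitude[y0:y1, x0:x1]
            b = bins[y0:y1, x0:x1]
            total = 0.0
            for k in range(n_bins):
                sel = m[b == k]
                if sel.size:
                    total += float(sel.max())
            out[y, x] = total
    return out
```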
20120114172 | TECHNIQUES FOR FACE DETECTION AND TRACKING - Techniques are disclosed that involve face detection. For instance, face detection tasks may be decomposed into sets of one or more sub-tasks. In turn, the sub-tasks of the sets may be allocated across multiple image frames. This allocation may be based on a resource budget. In addition, face tracking tasks may be performed. | 05-10-2012 |
20120114173 | IMAGE PROCESSING DEVICE, OBJECT TRACKING DEVICE, AND IMAGE PROCESSING METHOD - An edge extracting unit of a contour image generator generates an edge image of an input image using an edge extraction filter, etc. A foreground processing unit extracts the foreground from the input image using a background image and expands the foreground to generate an expanded foreground image. The foreground processing unit further generates a foreground boundary image constructed of the boundary of the expanded foreground region. A mask unit masks the edge image using the expanded foreground image to eliminate edges in the background. A synthesis unit synthesizes the masked edge image and the foreground boundary image to generate a contour image. | 05-10-2012 |
20120114174 | Voxel map generator and method thereof - A volume cell (VOXEL) map generation apparatus includes an inertia measurement unit to calculate inertia information by calculating inertia of a volume cell (VOXEL) map generator, a Time of Flight (TOF) camera to capture an image of an object, thereby generating a depth image of the object and a black-and-white image of the object, an estimation unit to calculate position and posture information of the VOXEL map generator by performing an Iterative Closest Point (ICP) algorithm on the basis of the depth image of the object, and to recursively estimate a position and posture of the VOXEL map generator on the basis of VOXEL map generator inertia information calculated by the inertia measurement unit and VOXEL map generator position and posture information calculated by the ICP algorithm, and a grid map construction unit to configure a grid map based on the recursively estimated VOXEL map generator position and posture. | 05-10-2012 |
20120114175 | OBJECT POSE RECOGNITION APPARATUS AND OBJECT POSE RECOGNITION METHOD USING THE SAME - An object pose recognition apparatus and method. The object pose recognition method includes acquiring first image data of an object to be recognized and 3-dimensional (3D) point cloud data of the first image data, and storing the first image data and the 3D point cloud data in a database, receiving input image data of the object photographed by a camera, extracting feature points from the stored first image data and the input image data, matching the stored 3D point cloud data and the input image data based on the extracted feature points and calculating a pose of the photographed object, and shifting the 3D point cloud data based on the calculated pose of the object, restoring second image data based on the shifted 3D point cloud data, and re-calculating the pose of the object using the restored second image data and the input image data. | 05-10-2012 |
20120114176 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes an object detection unit configured to detect an object from an image, a tracking unit configured to track the detected object, a trajectory management unit configured to manage a trajectory of the object being tracked, and a specific object detection unit configured to detect a specific object from the image. In a case where the specific object detection unit detects that the object being tracked by the tracking unit is the specific object, the trajectory management unit manages the trajectory of the object at time points before the time point at which the object was detected to be the specific object as the trajectory of the specific object. | 05-10-2012 |
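The retroactive relabelling this abstract describes — once a tracked object is identified as the specific object, its *past* trajectory is attributed to it — fits in a few lines. A minimal sketch; the class and method names are illustrative:

```python
class TrajectoryManager:
    """Keeps per-track point histories; when a tracked object is later
    identified as the specific object, its entire past trajectory is
    re-attributed to that specific object."""

    def __init__(self):
        self.tracks = {}    # track_id -> list of (t, x, y) observations
        self.specific = {}  # label -> trajectory of a confirmed specific object

    def update(self, track_id, t, x, y):
        # Record every observation, even before the object is identified.
        self.tracks.setdefault(track_id, []).append((t, x, y))

    def mark_specific(self, track_id, label):
        # The pre-detection history becomes the specific object's trajectory.
        self.specific[label] = list(self.tracks[track_id])

tm = TrajectoryManager()
tm.update(7, 0, 10, 10)
tm.update(7, 1, 12, 11)
tm.update(7, 2, 14, 12)
tm.mark_specific(7, "person_A")   # identified only at t=2
```

The point of keeping the full history is that detection of the specific object (e.g. a face match) may succeed only after the object has already been tracked for many frames.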
20120114177 | IMAGE PROCESSING SYSTEM, IMAGE CAPTURE APPARATUS, IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREFOR, AND PROGRAM - There is provided an image processing system in which an image capture apparatus and an image processing apparatus are connected to each other via a network. When a likelihood indicating the probability that a detection target object detected from a captured image is a predetermined type of object does not meet a designated criterion, the image capture apparatus generates tentative object information for the detection target object, and transmits it to the image processing apparatus. The image processing apparatus detects, from detection targets designated by the tentative object information, a detection target as the predetermined type of object. | 05-10-2012 |
20120114178 | VISION SYSTEM AND METHOD OF ANALYZING AN IMAGE - A vision system comprises a camera that captures an image and a processor coupled to process the received image to determine at least one feature descriptor for the image. The processor includes an interface to access annotated map data that includes geo-referenced feature descriptors. The processor is configured to perform a matching procedure between the at least one feature descriptor determined for the image and the retrieved geo-referenced feature descriptors. | 05-10-2012 |
20120114179 | FACE DETECTION DEVICE, IMAGING APPARATUS AND FACE DETECTION METHOD - A face detection device for detecting the face of a person in an input image may include the following elements: a face detection circuit including a hardware circuit configured to detect a face in an input image; a signal processing circuit configured to perform signal processing based on an input image signal in accordance with a rewritable program including a face detection program for detecting a face in an input image; and a controller configured to allow the face detection circuit and the signal processing circuit to perform face detection on an image of a frame or on respective images of adjacent frames among consecutive frames, and to control face detection by the signal processing circuit on the basis of a face detection result obtained by the face detection circuit. | 05-10-2012 |
20120114180 | Identification Of Objects In A 3D Video Using Non/Over Reflective Clothing - A computing system generates a depth map from at least one image, detects objects in the depth map, and identifies anomalies in the objects from the depth map. Another computing system identifies at least one anomaly in an object in a depth map, and uses the anomaly to identify future occurrences of the object. A system includes a three dimensional (3D) imaging system to generate a depth map from at least one image, an object detector to detect objects within the depth map, and an anomaly detector to detect anomalies in the detected objects, wherein the anomalies are logical gaps and/or logical protrusions in the depth map. | 05-10-2012 |
20120121123 | INTERACTIVE DEVICE AND METHOD THEREOF - An interactive device is provided. The interactive device has a display device; a camera, for continuously filming a plurality of images in front of the display device, wherein the plurality of images includes at least one first object; and a processor, connected to the display device and the camera, for receiving the plurality of images, displaying the plurality of images on the display device, determining occurrence of an interactive movement of the first object in the plurality of images, designating an interactive object in the plurality of images when the interactive movement is detected, analyzing at least one characteristic of the interactive object, and controlling displayed images on the display device according to a trace of the interactive object. | 05-17-2012 |
20120121124 | Method for optical pose detection - The tracking and compensation of patient motion during a magnetic resonance imaging (MRI) acquisition is an unsolved problem. A self-encoded marker where each feature on the pattern is augmented with a 2-D barcode is provided. Hence, the marker can be tracked even if it is not completely visible in the camera image. Furthermore, it offers considerable advantages over a simple checkerboard marker in terms of processing speed, since it makes the correspondence search of feature points and marker-model coordinates, which are required for the pose estimation, redundant. Significantly improved accuracy relative to a planar checkerboard pattern is obtained for both phantom experiments and in-vivo experiments with substantial patient motion. In an alternative aspect, a marker having non-coplanar features can be employed to provide improved motion tracking. Such a marker provides depth cues that can be exploited to improve motion tracking. The aspects of non-coplanar patterns and self-encoded patterns can be practiced independently or in combination. | 05-17-2012 |
20120121125 | METHODS AND SYSTEMS FOR SOLAR SHADE ANALYSIS - A device for performing solar shade analysis combines a spherical reflective dome and a ball compass mounted on a platform, with a compass alignment mark and four dots in the corners of the platform. A user may place the device on a surface of a roof, or in another location where solar shading analysis is required. A user, while standing above the device can take a photo of the device. The photographs can then be used in order to evaluate solar capacity and perform shade analysis for potential sites for solar photovoltaic systems. By using the device in conjunction with a mobile device having a camera, photographs may be taken and uploaded, to be analyzed and processed to determine a shading percentage. For example, the solar shade analysis system may calculate the percentage of time that the solar photovoltaic system might be shaded for each month of the year. These measurements and data, or similar measurements and data, may be valuable when applying for solar rebates or solar installation permits. | 05-17-2012 |
20120121126 | METHOD AND APPARATUS FOR ESTIMATING FACE POSITION IN 3 DIMENSIONS - An apparatus and method for estimating a three-dimensional face position. The method of estimating the three-dimensional face position includes acquiring two-dimensional image information from a single camera, detecting a face region of a user from the two-dimensional image information, calculating the size of the detected face region, estimating a distance between the single camera and the user's face using the calculated size of the face region, and obtaining positional information of the user's face in a three-dimensional coordinate system using the estimated distance between the single camera and the user's face. Accordingly, it is possible to estimate the distance between the user and the single camera from the size of the user's face region in the image information acquired by the single camera, and thereby acquire the three-dimensional position coordinates of the user. | 05-17-2012 |
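The size-to-distance step here is the standard pinhole relation: an object of known physical width W appearing w pixels wide at focal length f (in pixels) lies at Z = f·W/w, and the face centre then back-projects to camera coordinates. A sketch under the assumption of an average face width of 0.16 m (an assumed constant, not from the patent):

```python
def face_position_3d(face_w_px, face_u, face_v, f_px, cx, cy,
                     real_face_w=0.16):
    """Pinhole-model sketch: estimate distance from apparent face width,
    then back-project the face centre (face_u, face_v) to 3-D camera
    coordinates. real_face_w = 0.16 m is an assumed average face width."""
    z = f_px * real_face_w / face_w_px   # distance in metres
    x = (face_u - cx) * z / f_px         # lateral offset
    y = (face_v - cy) * z / f_px         # vertical offset
    return x, y, z

# A 160-px-wide face centred on the principal point, f = 800 px.
x, y, z = face_position_3d(face_w_px=160, face_u=320, face_v=240,
                           f_px=800, cx=320, cy=240)
```

The accuracy of the estimate is bounded by how much real face widths vary around the assumed constant, which is why a single-camera method like this trades precision for hardware simplicity.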
20120121127 | IMAGE PROCESSING APPARATUS AND NON-TRANSITORY STORAGE MEDIUM STORING IMAGE PROCESSING PROGRAM - An image processing apparatus executes: acquiring, on a first image having a pattern of first areas and second areas that have a different color from the first areas, a center position of the pattern where the first areas and the second areas cross; acquiring boundary positions between the first and second areas; converting the first image to a second image having its image distortion corrected, using the center position and the boundary positions; acquiring, by scanning the second image, expectation values for areas including the points where the first and second areas cross, excluding the center position; acquiring an intersection position of each intersection on the second image based on the expectation values; acquiring the positions on the first image corresponding to the center position and the intersection positions by inverse conversion from the second image to the first image; and determining the points corresponding to the acquired positions as features. | 05-17-2012 |
20120121128 | OBJECT TRACKING SYSTEM - The present invention provides a system, method and computer program product for tracking the movement of a plurality of targets, wherein the detected movement is used for the modification of an interactive environment. The system comprises one or more imaging devices configured to capture two or more images of at least some of a plurality of target identifiers associated with one or more of a plurality of targets. The system further comprises a processing module which is operatively coupled to the one or more imaging devices and configured to receive and process the two or more images. During the processing, a first location parameter and a second location parameter for a predetermined region are determined. One or more movement parameters are determined, at least in part, from the first and second location parameters and used for the modification of the interactive environment. | 05-17-2012 |
20120121129 | IMAGE PROCESSING APPARATUS - An image processing apparatus includes a first searcher. The first searcher searches for, from a designated image, one or at least two first partial images each of which represents a face portion. A second searcher searches for, from the designated image, one or at least two second partial images each of which represents a rear of a head. A first setter sets a region corresponding to the one or at least two first partial images detected by the first searcher as a reference region for an image quality adjustment. A second setter sets a region different from a region corresponding to the one or at least two second partial images detected by the second searcher as the reference region. A start-up controller selectively starts up the first setter and the second setter so that the first setter has priority over the second setter. | 05-17-2012 |
20120121130 | FLEXIBLE COMPUTER VISION - A method for flexible interest point computation, comprising: producing multiple octaves of a digital image, wherein each octave of said multiple octaves comprises multiple layers; initiating a process comprising detection and description of interest points, wherein said process is programmed to progress layer-by-layer over said multiple layers of each of said multiple octaves, and to continue to a next octave of said multiple octaves upon completion of all layers of a current octave of said multiple octaves; upon the detection and the description of each interest point of said interest points during said process, recording an indication associated with said interest point in a memory, such that said memory accumulates indications during said process; and upon interruption to said process, returning a result being based at least on said indications. | 05-17-2012 |
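The interruptible layer-by-layer loop this abstract describes maps naturally onto a generator: indications accumulate as each point is described, so stopping early still yields a valid partial result. A minimal sketch — `detect` is a caller-supplied stub standing in for any real detector:

```python
def interest_points_interruptible(octaves, detect):
    """Progress layer-by-layer within each octave, then move to the next
    octave. A snapshot of the accumulated indications is yielded after
    every described point, so an interruption at any moment still
    returns a usable (partial) result."""
    indications = []
    for o, layers in enumerate(octaves):
        for l, layer in enumerate(layers):
            for kp in detect(layer):
                indications.append((o, l, kp))   # record the indication
                yield list(indications)          # snapshot after each point

# Octave 0 has one layer with two "points"; octave 1 has one layer with one.
octaves = [[[1, 2]], [[3]]]
gen = interest_points_interruptible(octaves, lambda layer: layer)
first = next(gen)   # one point described so far
second = next(gen)  # two points described; we now "interrupt"
gen.close()
```

Structuring the computation this way is what makes the interest-point budget "flexible": the caller decides when to stop, trading completeness for latency.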
20120121131 | METHOD AND APPARATUS FOR ESTIMATING POSITION OF MOVING VEHICLE SUCH AS MOBILE ROBOT - An apparatus of estimating a position of a moving vehicle such as a robot includes a feature point matching unit which generates vectors connecting feature points of a previous image frame and feature points of a current image frame, corresponding to the feature points of the previous image frame, and determines spatial correlations between the feature points of the current image frame, a clustering unit which configures at least one motion cluster by grouping at least one vector among the vectors based on the spatial correlations in a feature space, and a noise removal unit removing noise from each motion cluster, wherein the position of the moving vehicle is estimated based on the at least one motion cluster. | 05-17-2012 |
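The pipeline in this abstract — form motion vectors from matched feature points, cluster them in a feature space, then remove noise per cluster — can be sketched with direction-bin clustering and a median/MAD outlier test. Both choices are illustrative stand-ins for the patent's unspecified clustering and noise-removal stages:

```python
import math

def cluster_motion_vectors(prev_pts, curr_pts, n_dir_bins=8, mad_k=3.0):
    """Group feature-point motion vectors by quantised direction, then
    drop magnitude outliers from each cluster (median/MAD is a stand-in
    for the noise-removal stage)."""
    clusters = {}
    for (x0, y0), (x1, y1) in zip(prev_pts, curr_pts):
        dx, dy = x1 - x0, y1 - y0
        ang = math.atan2(dy, dx) % (2 * math.pi)
        b = int(ang / (2 * math.pi) * n_dir_bins) % n_dir_bins
        clusters.setdefault(b, []).append((dx, dy))
    cleaned = {}
    for b, vecs in clusters.items():
        mags = sorted(math.hypot(dx, dy) for dx, dy in vecs)
        med = mags[len(mags) // 2]
        dev = sorted(abs(m - med) for m in mags)
        mad = dev[len(dev) // 2] or 1e-9      # avoid a zero scale
        cleaned[b] = [v for v in vecs
                      if abs(math.hypot(*v) - med) <= mad_k * mad]
    return cleaned

# Four consistent rightward moves and one spurious long match.
prev = [(0, 0)] * 5
curr = [(1, 0), (1, 0), (1, 0), (1, 0), (50, 0)]
cleaned = cluster_motion_vectors(prev, curr)
```

After noise removal, the surviving dominant cluster approximates the camera's ego-motion, which is what the position-estimation step would consume.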
20120121132 | OBJECT RECOGNITION METHOD, OBJECT RECOGNITION APPARATUS, AND AUTONOMOUS MOBILE ROBOT - To carry out satisfactory object recognition in a short time. An object recognition method in accordance with an exemplary aspect of the present invention is an object recognition method for recognizing a target object by using a preliminarily-created object model. The object recognition method generates a range image of an observed scene, detects interest points from the range image, extracts first features, the first features being features of an area containing the interest points, carries out a matching process between the first features and second features, the second features being features of an area in the range image of the object model, calculates a transformation matrix based on a result of the matching process, the transformation matrix being for projecting the second features on a coordinate system of the observed scene, and recognizes the target object with respect to the object model based on the transformation matrix. | 05-17-2012 |
20120121133 | SYSTEM FOR DETECTING VARIATIONS IN THE FACE AND INTELLIGENT SYSTEM USING THE DETECTION OF VARIATIONS IN THE FACE - A face change detection system is provided, comprising an image input unit acquiring a plurality of input images, a face extraction unit extracting a face region of the input images, and a face change extraction unit detecting a face change in the input images by calculating an amount of change in the face region. | 05-17-2012 |
20120121134 | CONTROL APPARATUS, CONTROL METHOD, AND PROGRAM - The present invention relates to a control apparatus, a control method, and a program in which, when performing automatic image-recording, the frequency with which image-recording is performed can be suitably changed in accordance with, for example, a user's intention or the state of an imaging apparatus. | 05-17-2012 |
20120121135 | POSITION AND ORIENTATION CALIBRATION METHOD AND APPARATUS - A position and orientation measuring apparatus calculates a first difference between an image feature of a two-dimensional image of an object and a projected image of a three-dimensional model in a stored position and orientation of the object projected on the two-dimensional image. The position and orientation measuring apparatus further calculates a second difference between three-dimensional coordinate information and the three-dimensional model in the stored position and orientation of the object. The position and orientation measuring apparatus then converts a dimension of the first difference and/or the second difference to cause the first difference and the second difference to have an equivalent dimension, and corrects the stored position and orientation. | 05-17-2012 |
20120128201 | BI-MODAL DEPTH-IMAGE ANALYSIS - A depth-image analysis system calculates first mode skeletal data representing a human target in an observed scene if a portion of the human target is observed with a first set of joint positions, and calculates second mode skeletal data representing the human target in the observed scene if the portion of the human target is observed with a second set of joint positions different than the first set of joint positions. The first mode skeletal data and the second mode skeletal data have different skeletal joint constraints. | 05-24-2012 |
20120128202 | Image processing apparatus, image processing method and computer readable information recording medium - An image processing apparatus includes an obtaining part configured to obtain a plurality of images including a photographing object photographed by a photographing part; a determination part configured to detect a shift in position between a first image and a second image included in the plurality of images obtained by the obtaining part, and determine whether the first image is suitable for being superposed on the second image; a selection part configured to select a certain number of images from the plurality of images based on a determination result of the determination part; and a synthesis part configured to synthesize the certain number of images selected by the selection part. | 05-24-2012 |
20120128203 | MOTION ANALYZING APPARATUS - A sensor unit is installed to a target object and detects a given physical amount. A data acquisition unit acquires output data of the sensor unit in a period including a first period for which the real value of m time integrals of the physical amount is known and a second period that is a target for motion analysis. An error time function estimating unit performs m time integrals of the output data of the sensor unit and estimates a time function of the error of the value of the physical amount detected by the sensor unit with respect to its real value, based on a difference between the value of m time integrals of the output data and the known real value for the first period. | 05-24-2012 |
20120128204 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus includes a selection unit configured to select a plurality of specific areas of a target object, a learning unit configured to learn a detection model that relates to each of the plurality of specific areas, a generation unit configured to generate an area combination as a combination of specific areas selected from the plurality of specific areas, a recognition unit configured to recognize the target object based on the detection model and the area combination, and an addition unit configured to add a new specific area based on a recognition result obtained by the recognition unit. If the new specific area is added by the addition unit, the learning unit further learns a detection model that relates to the new specific area. | 05-24-2012 |
20120128205 | APPARATUS FOR PROVIDING SPATIAL CONTENTS SERVICE AND METHOD THEREOF - Disclosed herein is an apparatus for providing spatial contents service which includes a spatial contents insertion unit, a spatial contents generation unit, a topological relationship generation unit, and a spatial contents composition unit. The spatial contents insertion unit extracts spatial objects included in an image. The spatial contents generation unit generates primary spatial contents corresponding to the image. The topological relationship generation unit compares spatial location information of the primary spatial contents with spatial location information of one or more pieces of secondary spatial contents, and defines a spatial topological relationship between the primary spatial contents and the secondary spatial contents. The spatial contents composition unit couples or links the secondary spatial contents, which has a spatial topological relationship with the primary spatial contents, to the primary spatial contents. | 05-24-2012 |
20120128206 | OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD, AND COMPUTER-READABLE MEDIUM RECORDING PROGRAM THEREFOR - An object detection device includes: an obtaining unit successively obtaining frame images; a first determination unit determining whether a first similarity between a reference image and a first image region in one of the obtained frame images is less than a first threshold value; a second determination unit determining whether a second similarity between the reference image and a second image region, included in a frame image obtained before the one of the frame images and corresponding to the first image region, is less than a second threshold value larger than the first threshold value, when the first determination unit determines that the first similarity is not less than the first threshold value; and a detection unit detecting the first image region as a region of a particular object image when the second determination unit determines that the second similarity is not less than the second threshold value. | 05-24-2012 |
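The two-threshold logic of this abstract reduces to a single predicate: the current frame must clear the lower threshold, and the corresponding region in an earlier frame must clear the higher one, which suppresses one-frame false positives. A minimal sketch (names and values are illustrative):

```python
def is_particular_object(sim_curr, sim_prev, t1, t2):
    """Two-threshold detection: the current region must match the
    reference at least weakly (>= t1) AND the corresponding earlier
    region must have matched it strongly (>= t2, with t2 > t1)."""
    assert t2 > t1, "the second threshold must be larger than the first"
    return sim_curr >= t1 and sim_prev >= t2

hit = is_particular_object(sim_curr=0.5, sim_prev=0.9, t1=0.4, t2=0.8)
```

Requiring strong past evidence lets the current-frame threshold be set permissively, so the detector keeps locking onto an object even when it is momentarily blurred or occluded.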
20120128207 | DATA ANALYSIS DEVICE, DATA ANALYSIS METHOD, AND PROGRAM - Provided is a data analysis device for automatically detecting a step on the ground based on point cloud data representing a three-dimensional shape of a feature surface. A space subject to analysis is divided into a plurality of subspaces. A boundary search unit ( | 05-24-2012 |
20120128208 | Human Tracking System - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities. | 05-24-2012 |
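The voxel-grid downsampling step this abstract describes — back-project the depth image to 3-D points, then keep one representative point per voxel cell — can be sketched as follows. Camera intrinsics and voxel size are illustrative parameters:

```python
import numpy as np

def depth_to_voxels(depth, f_px, cx, cy, voxel_size):
    """Back-project a depth image to 3-D points, then downsample by
    keeping the first point that lands in each voxel cell."""
    h, w = depth.shape
    voxels = {}
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            if z <= 0:                       # invalid / missing depth
                continue
            x = (u - cx) * z / f_px          # pinhole back-projection
            y = (v - cy) * z / f_px
            key = (int(x // voxel_size), int(y // voxel_size),
                   int(z // voxel_size))
            voxels.setdefault(key, (x, y, z))  # one point per cell
    return voxels

depth = np.ones((2, 2))   # four pixels, all at depth 1.0
coarse = depth_to_voxels(depth, f_px=1.0, cx=0.0, cy=0.0, voxel_size=10.0)
fine = depth_to_voxels(depth, f_px=1.0, cx=0.0, cy=0.0, voxel_size=0.5)
```

The voxel size controls the trade-off the abstract implies: a coarser grid collapses more pixels into each cell, making the subsequent extremity search over the human target cheaper.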
20120128209 | IMAGE ANALYSIS DEVICE AND IMAGE ANALYSIS PROGRAM - Problem to be Solved: | 05-24-2012 |
20120128210 | Method for Traffic Sign Recognition - The invention relates to a method for traffic sign recognition that analyzes and classifies the image data of a sensor ( | 05-24-2012 |
20120128211 | DISTANCE CALCULATION DEVICE FOR VEHICLE - Provided is a distance calculation device for a vehicle, which can accurately calculate the distance to an object, for example, even when the sunshine condition in an image capture environment changes. In the device, an image quality estimation means ( | 05-24-2012 |
20120134532 | ABNORMAL BEHAVIOR DETECTION SYSTEM AND METHOD USING AUTOMATIC CLASSIFICATION OF MULTIPLE FEATURES - Described herein are a system and a method for abnormal behavior detection using automatic classification of multiple features. Features from various sources, including those extracted from camera input through digital image analysis, are used as input to machine learning algorithms. These algorithms group the features and produce models of normal and abnormal behaviors. Outlying behaviors, such as those identified by their lower frequency, are deemed abnormal. Human supervision may optionally be employed to ensure the accuracy of the models. Once created, these models can be used to automatically classify features as normal or abnormal. This invention is suitable for use in the automatic detection of abnormal traffic behavior such as running of red lights, driving in the wrong lane, or driving against traffic regulations. | 05-31-2012 |
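The frequency criterion this abstract names — behaviours identified as outlying by their lower frequency are deemed abnormal — can be sketched in a few lines. The relative-frequency cut-off is an assumed parameter, not a value from the patent:

```python
from collections import Counter

def build_behavior_model(observed_behaviors, min_fraction=0.05):
    """Label behaviours whose relative frequency falls below
    min_fraction as abnormal; frequency is the outlier criterion the
    abstract describes, and the cut-off value is an assumption."""
    counts = Counter(observed_behaviors)
    total = sum(counts.values())
    return {b: ('normal' if c / total >= min_fraction else 'abnormal')
            for b, c in counts.items()}

model = build_behavior_model(['straight'] * 98 + ['red_light'] * 2)
```

Once built, the model classifies new feature observations by lookup, matching the abstract's two-phase design (learn frequencies, then classify automatically, with optional human review of the labels).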
20120134533 | TEMPORAL THERMAL IMAGING METHOD FOR DETECTING SUBSURFACE OBJECTS AND VOIDS - A temporal thermal survey method to locate at a given area whether or not there is a subsurface object or void site. The method uses thermal inertia change detection. It locates temporal heat flows from naturally heated subsurface objects or faulty structures such as corrosion damage. The added value over earlier methods is the use of empirical methods to specify the optimum times for locating subsurface objects or voids amidst clutter and undisturbed host materials. Thermal inertia, or thermal effusivity, is the bulk material resistance to temperature change. Surface temperature highs and lows are shifted in time at the subsurface object or void site relative to the undisturbed host material sites. The Dual-band Infra-Red Effusivity Computed Tomography (DIRECT) method verifies the optimum two times to detect thermal inertia outliers at the subsurface object or void border with undisturbed host materials. | 05-31-2012 |
20120134534 | CONTROL COMPUTER AND SECURITY MONITORING METHOD USING THE SAME - A method for performing security surveillance using a control computer sends an image obtaining request from the control computer to a preset channel of a network video recorder (NVR) or a digital video recorder (DVR), and receives captured images from the preset channel of the NVR or the DVR. The method further detects a specified object in the captured images, and stores/outputs an image area of the specified object in a storage device of the control computer or a terminal device. | 05-31-2012 |
20120134535 | METHOD FOR ADJUSTING PARAMETERS OF VIDEO OBJECT DETECTION ALGORITHM OF CAMERA AND THE APPARATUS USING THE SAME - An apparatus for a video object detection algorithm of a camera includes a video object detection training module and a video object detection application module. The video object detection training module is configured to generate an optimum correspondence between quantified values of environmental variables and parameters of a video object detection algorithm according to a stream of training video signals and a video object detection reference result. The video object detection application module is configured to perform video object detection on a stream of input video signals based on the optimum correspondence between the quantified values of the environmental variables and the parameters of the video object detection algorithm. | 05-31-2012 |
20120134536 | Image Processing Apparatus and Method, and Program - An image processing apparatus includes a depth image obtaining unit configured to obtain a depth image including information on distances from an image-capturing position to a subject in a two-dimensional image to be captured; a local tip portion detection unit configured to detect a portion of the subject at a depth and position close to the image-capturing position as a local tip portion; a projecting portion detection unit configured to, with each block set as a block of interest, detect the local tip portion of the block of interest as a projecting portion in a case where it is the local tip portion closest to the image-capturing position within an area formed of the plurality of blocks adjacent to the block of interest; and a tracking unit configured to continuously track the position of the projecting portion. | 05-31-2012 |
20120134537 | SYSTEM AND METHOD FOR EXTRACTING THREE-DIMENSIONAL COORDINATES - A system and method for extracting 3D coordinates, the method includes obtaining, by a stereoscopic image photographing unit, two images of a target object, and obtaining 3D coordinates of the object on the basis of coordinates of each pixel of the two images, measuring, by a Time of Flight (TOF) sensor unit, a value of a distance to the object, and obtaining 3D coordinates of the object on the basis of the measured distance value, mapping pixel coordinates of each image to the 3D coordinates obtained through the TOF sensor unit, and calibrating the mapped result, determining whether each set of pixel coordinates and the distance value to the object measured through the TOF sensor unit are present, calculating a disparity value on the basis of the distance value or the pixel coordinates, and calculating 3D coordinates of the object on the basis of the calculated disparity value. | 05-31-2012 |
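The disparity calculation in the final step of this abstract rests on the standard stereo relation d = f·B/Z, which lets a TOF-measured depth stand in for a missing stereo match (and vice versa). A minimal sketch with illustrative intrinsics:

```python
def disparity_from_depth(z_m, focal_px, baseline_m):
    """Stereo relation d = f*B/Z: convert a TOF depth to the disparity
    the stereo pair would have produced at that point."""
    return focal_px * baseline_m / z_m

def depth_from_disparity(d_px, focal_px, baseline_m):
    """Inverse of the same relation: Z = f*B/d."""
    return focal_px * baseline_m / d_px

# A point 2 m away, f = 700 px, baseline = 10 cm.
d = disparity_from_depth(2.0, focal_px=700.0, baseline_m=0.1)
z = depth_from_disparity(d, focal_px=700.0, baseline_m=0.1)
```

Because the relation is exactly invertible, calibrating the TOF depths against the stereo disparities (the mapping step the abstract mentions) reduces to fitting the effective f·B product.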
20120134538 | OBJECT TRACKING DEVICE CAPABLE OF TRACKING OBJECT ACCURATELY, OBJECT TRACKING METHOD, AND STORAGE MEDIUM - An object tracking device capable of accurately tracking an object as a tracking target. The device receives an image signal having a plurality of frame images and tracks a specific object in the image signal. The device sets a predetermined number of small areas in a reference area indicative of an area where an image of the object is formed in the preceding frame image. The object tracking device detects a motion vector of the object in each of the small areas, and determines a change of the object according to the motion vector to thereby obtain shape change information. The device corrects the location and size of the reference area according to the shape change information to thereby correct the reference area to a corrected reference area, and tracks the object using the corrected reference area. | 05-31-2012 |
20120134539 | OBSERVATION APPARATUS AND OBSERVATION METHOD - Provided are an observation apparatus and an observation method that allow a state change of an observation target to be observed after image-acquisition is started. An observation apparatus | 05-31-2012 |
20120134540 | METHOD AND APPARATUS FOR CREATING SURVEILLANCE IMAGE WITH EVENT-RELATED INFORMATION AND RECOGNIZING EVENT FROM SAME - An apparatus for creating a surveillance image with event-related information includes an event detection unit configured to detect an event in the surveillance image, an encoding unit configured to encode the surveillance image into a bit stream of the surveillance image, an event information creation unit configured to create event-related information based on the detected event, and a parsing unit configured to parse the encoded surveillance image and insert the event-related information into the bit stream of the encoded surveillance image. | 05-31-2012 |
20120134541 | OBJECT TRACKING DEVICE CAPABLE OF DETECTING INTRUDING OBJECT, METHOD OF TRACKING OBJECT, AND STORAGE MEDIUM - An object tracking device that is capable of detecting that an intruding object has entered an image frame of image data where a tracking target object is being tracked. A plurality of sub areas are set in a preceding or current frame target area indicative of a position of the tracking target object in a preceding or current frame of moving image data, and a feature value of each sub area is determined. If the feature value exceeds a first threshold value in at least one of the sub areas and at the same time the number of the at least one of the sub areas does not reach a reference value, it is determined that an intruding object different from the tracking target object has entered an area in which the tracking target object is positioned in the current frame. | 05-31-2012 |
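The decision rule in the abstract above, at least one sub-area exceeding the first threshold but fewer than the reference value, translates almost directly into code (parameter names are assumed):

```python
def intruder_detected(feature_values, first_threshold, reference_count):
    """True when at least one sub-area changes strongly, yet fewer
    than reference_count do -- a partial disturbance of the target
    area, read as an intruding object rather than motion of the
    whole tracking target."""
    exceeding = sum(1 for v in feature_values if v > first_threshold)
    return 0 < exceeding < reference_count
```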
20120140981 | System and Method for Combining Visible and Hyperspectral Imaging with Pattern Recognition Techniques for Improved Detection of Threats - Systems and methods for detecting unknown samples, wherein pattern recognition algorithms are applied to a visible image of a first target area comprising a first unknown sample to thereby generate a first set of target data. If comparison of the first set of target data to reference data results in a match, the first unknown sample is identified, and a hyperspectral image of a second target area comprising a second unknown sample is obtained to generate a second set of test data. If comparison of the second set of test data to reference data results in a match, the second unknown sample is identified as a known material. Identification of an unknown through hyperspectral imaging can also trigger the visible camera to obtain an image. In addition, the visible and hyperspectral cameras can be run continuously to simultaneously obtain visible and hyperspectral images. | 06-07-2012 |
20120140982 | IMAGE SEARCH APPARATUS AND IMAGE SEARCH METHOD - According to one embodiment, an image search apparatus includes, an image input module which is input with an image, an event detection module which detects events from the input image input by the image input module, and determines levels, depending on types of the detected events, an event controlling module which retains the events detected by the event detection module, for each of the levels, and an output module which outputs the events retained by the event controlling module, for each of the levels. | 06-07-2012 |
20120140983 | METHOD FOR DETECTION OF SPECIMEN REGION, APPARATUS FOR DETECTION OF SPECIMEN REGION, AND PROGRAM FOR DETECTION OF SPECIMEN REGION - A method for detecting the specimen region includes the first step for the first region detecting unit to detect the first region which is a region with contrast in the first image of an object for observation which is photographed under illumination with visible light, the second step for the second region detecting unit to detect the second region which is a region with contrast in the second image of the object for observation which is photographed under illumination with ultraviolet light, and the third step for the specimen region defining unit to define, based on the first and second regions mentioned above, the specimen region where there exists the specimen in the object for observation. | 06-07-2012 |
20120140984 | DRIVING SUPPORT SYSTEM, DRIVING SUPPORT PROGRAM, AND DRIVING SUPPORT METHOD - Provided is a driving support system that includes an image recognition unit that performs image recognition processing to recognize if a recognition object associated with any of the support processes is included in image data captured by an on-vehicle camera and a recognition area information storage unit that stores information regarding a set recognition area in the image data that is set depending on a recognition accuracy of the recognition object set for execution of the support process. A candidate process extraction unit is also included for extracting at least one execution candidate support process from the plurality of support processes and a support process execution management unit that allows execution of the extracted execution candidate support process on a condition that a position in the image data of the recognition object recognized by the image recognition processing is included in the set recognition area. | 06-07-2012 |
20120140985 | IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREFOR - A parameter for each of a plurality of images captured in time series is computed based on information obtained from the image and from a normal reference image (an image captured and stored before the image targeted for processing). A degree of similarity between the image targeted for processing and the normal reference image is computed, and a parameter to be used in image processing applied to the image targeted for processing is computed by weighted addition, such that the higher the degree of similarity, the higher the weight given to the parameter computed from the normal reference image relative to the parameter computed from the image targeted for processing. | 06-07-2012 |
20120140986 | PROVIDING IMAGE DATA - Embodiments of the present invention provide a method of providing image data for constructing an image of a region of a target object, comprising providing incident radiation from a radiation source at a target object, detecting, by at least one detector, a portion of radiation scattered by the target object with the incident radiation or an aperture at first and second positions, and providing image data via an iterative process responsive to the detected radiation, wherein in said iterative process image data is provided corresponding to a portion of radiation scattered by the target object and not detected by the detector. | 06-07-2012 |
20120140987 | Methods and Systems for Discovering Styles Via Color and Pattern Co-Occurrence - Methods and systems for discovering styles via color and pattern co-occurrence are disclosed. According to one embodiment, a computer-implemented method comprises collecting a set of fashion images, selecting at least one subset within the set of fashion images, the subset comprising at least one image containing a fashion item, and computing a set of segments by segmenting the at least one image into at least one dress segment. Color and pattern representations of the set of segments are computed by using a color analysis method and a pattern analysis method respectively. A graph is created wherein each graph node corresponds to one of a color representation or a pattern representation computed for the set of segments. Weights of edges between nodes of the graph indicate a degree of how the corresponding colors or patterns complement each other in a fashion sense. | 06-07-2012 |
20120140988 | OBSTACLE DETECTION DEVICE AND METHOD AND OBSTACLE DETECTION SYSTEM - An obstacle region candidate point relating unit assumes that a pixel in an image corresponds to a point on a road surface, and associates pixels between images at two times on the basis of the amount of movement of a vehicle in question, a road plane, and a flow of the image estimated. When a pixel corresponds to a shadow of the vehicle in question or the moving object therearound appearing on the road surface, the ratio of intensities of the pixel values of the spectral images between two images should be approximately the same as the ratio of the spectral characteristics of the sunshine in the sun and the shade. Therefore, when the ratio of intensities is approximately the same as the ratio of the spectral characteristics, the obstacle determining unit does not determine that the pixel in question is a point corresponding to the obstacle. Only when the ratio of intensities is not approximately the same as the ratio of the spectral characteristics, the obstacle determining unit determines that the pixel in question is a point corresponding to the obstacle. | 06-07-2012 |
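The shadow test in the abstract above compares a pixel's between-frame intensity ratio to the known sun/shade spectral ratio, and keeps only mismatching pixels as obstacle candidates. A sketch of that comparison with an assumed relative tolerance (the patent does not specify one):

```python
def is_obstacle_candidate(intensity_ratio, spectral_ratio, rel_tol=0.1):
    """A pixel whose between-frame intensity ratio approximately
    matches the sun/shade spectral ratio is treated as a moving
    shadow on the road surface, not as an obstacle point."""
    is_shadow = abs(intensity_ratio - spectral_ratio) <= rel_tol * spectral_ratio
    return not is_shadow
```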
20120148092 | AUTOMATIC TRAFFIC VIOLATION DETECTION SYSTEM AND METHOD OF THE SAME - Disclosed herein are a system and method for the automatic detection of traffic and parking violations. Camera input is digitally analyzed for vehicle type and location. This information is then processed against local traffic and parking regulations to detect violations. Detectable driving offenses include, but are not limited to: no scooters, buses only, and scooters only lane violations. Detectable parking offenses include, but are not limited to: parking or loitering in bus stops, parking next to fire hydrants, and parking in no-parking zones. Camera input, detected vehicle information, and violations can be stored for later search and retrieval. The system may be configured to signal the authorities or other automated analysis systems about specific violations. When coupled with automatic license plate recognition, vehicles may be automatically matched against a registration database and reported or ticketed. | 06-14-2012 |
20120148093 | Blob Representation in Video Processing - A method of processing a video sequence is provided that includes receiving a frame of the video sequence, identifying a plurality of blobs in the frame, computing at least one interior point of each blob of the plurality of blobs, and using the interior points in further processing of the video sequence. The interior points may be used, for example, in object tracking. | 06-14-2012 |
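A blob's raw centroid can fall outside the blob (e.g. a U-shape), which is presumably why the abstract above computes interior points instead. A minimal sketch of one such scheme, snapping the centroid to the nearest blob pixel; this is an illustration, not the patent's method:

```python
def interior_point(blob_pixels):
    """Return a point guaranteed to lie inside the blob: the blob
    pixel nearest to the (possibly exterior) centroid."""
    n = len(blob_pixels)
    cx = sum(x for x, y in blob_pixels) / n
    cy = sum(y for x, y in blob_pixels) / n
    return min(blob_pixels, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
```

For a U-shaped blob the centroid lands in the hollow of the U, but the returned point is always one of the blob's own pixels.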
20120148094 | IMAGE BASED DETECTING SYSTEM AND METHOD FOR TRAFFIC PARAMETERS AND COMPUTER PROGRAM PRODUCT THEREOF - An image-based detecting system for traffic parameters first sets a range of a vehicle lane for monitoring control, and sets an entry detection window and an exit detection window in the vehicle lane. When the entry detection window detects an event of a vehicle passing by using the image information captured at the entry detection window, a plurality of feature points are detected in the entry detection window, and will be tracked hereafter. Then, the feature points belonging to the same vehicle are grouped to obtain at least a location tracking result of single vehicle. When the tracked single vehicle moves to the exit detection window, according to the location tracking result and the time correlation through estimating the information captured at the entry detection window and the exit detection window, at least a traffic parameter is estimated. | 06-14-2012 |
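Given matched entry/exit timestamps for a tracked vehicle and the known ground distance between the two detection windows, one of the traffic parameters (average speed) follows directly. A sketch with assumed units; the function name and interface are illustrative:

```python
def estimate_speed_kmh(window_gap_m, t_entry_s, t_exit_s):
    """Average speed, in km/h, between the entry and exit detection
    windows, from their ground spacing (m) and timestamps (s)."""
    dt = t_exit_s - t_entry_s
    if dt <= 0:
        raise ValueError("exit time must follow entry time")
    return window_gap_m / dt * 3.6
```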
20120148095 | IMAGE PROCESSING APPARATUS - An image processing apparatus includes a detector, which detects one or more object images, each coincident with a dictionary image, from each of K continuous shot images (K: an integer of two or more). A classifier executes, on the K continuous shot images, a process of classifying the detected object images according to a common object. A determiner determines an attribute of the K or fewer object images belonging to each of the one or more classified object image groups. A first excluder excludes any continuous shot image satisfying an error condition from the K continuous shot images, based on the determined result. A selector selects a part of the one or more continuous shot images remaining after the exclusion as a specific image. | 06-14-2012 |
20120148096 | APPARATUS AND METHOD FOR CONTROLLING IMAGE USING MOBILE PROJECTOR - Disclosed is an image control system using a mobile projector, including a first apparatus configured to determine, when a first picture is projected and a user input for a specific image is received, whether the projected first picture is projected onto the specific image, and if so, control the specific image to perform an operation corresponding to the user input, and a second apparatus configured to receive the user input from the first apparatus, determine whether the first picture is projected onto the specific image, and if so, perform an operation corresponding to the user input. | 06-14-2012 |
20120148097 | 3D MOTION RECOGNITION METHOD AND APPARATUS - Disclosed are a three-dimensional motion recognition method and an apparatus using a motion template method and an optical flow tracking method of feature points. The three dimensional (3D) motion recognition method through feature-based stereo matching according to an exemplary embodiment of the present disclosure includes: obtaining a plurality of images from a plurality of cameras; extracting feature points from a single reference image; and comparing and tracking the feature points of the reference image and another comparison image photographed at the same time using an optical flow method. | 06-14-2012 |
20120148098 | ELECTRONIC CAMERA - An electronic camera includes an imager, which outputs an electronic image corresponding to an optical image captured on an imaging surface. A first generator generates a first notification forward of the imaging surface. A searcher searches the electronic image outputted from the imager for one or more face images, each having a size exceeding a reference. A controller controls a generation manner of the first generator with reference to an attribute of each face image detected by the searcher. | 06-14-2012 |
20120148099 | SYSTEM AND METHOD FOR MEASURING FLIGHT INFORMATION OF A SPHERICAL OBJECT WITH HIGH-SPEED STEREO CAMERA - Disclosed is a method for automatically extracting centroids and features of a spherical object required to measure a flight speed, a flight direction, a rotation speed, and a rotation axis of the spherical object in a system for measuring flight information of the spherical object with a high-speed stereo camera. | 06-14-2012 |
20120148100 | POSITION AND ORIENTATION MEASUREMENT DEVICE AND POSITION AND ORIENTATION MEASUREMENT METHOD - A position and orientation measurement device includes a grayscale image input unit that inputs a grayscale image of an object, a distance image input unit that inputs a distance image of the object, an approximate position and orientation input unit that inputs an approximate position and orientation of the object with respect to the position and orientation measurement device, and a position and orientation calculator that updates the approximate position and orientation. The position and orientation calculator calculates a first position and orientation so that an object image on an image plane and a projection image of the three-dimensional shape model overlap each other, associates the three-dimensional shape model with the image features of the grayscale image and the distance image, and calculates a second position and orientation on the basis of a result of the association. | 06-14-2012 |
20120148101 | METHOD AND APPARATUS FOR EXTRACTING TEXT AREA, AND AUTOMATIC RECOGNITION SYSTEM OF NUMBER PLATE USING THE SAME - Disclosed is a method of extracting a text area, the method including generating a text area prediction value within an input second image based on a plurality of text area data stored in a database including geometric information about a text area of a first image, generating a text recognition result value by determining whether a text is recognized with respect to a probable text area within the input second image, and selecting a text area within the second image by combining the generated text area prediction value and text recognition result value. | 06-14-2012 |
20120148102 | MOBILE BODY TRACK IDENTIFICATION SYSTEM - There is provided a mobile body track identification system that determines which mobile body matches which detected track with a high precision irrespective of frequent interruption of tracks of a mobile body detected in a tracking area. Herein, hypotheses are generated by use of sets of track-coupling candidate/identification pairs, which combines track-coupling candidates, combining tracks of a mobile body detected in a predetermined time in the past, and identifications of the mobile body and which satisfies a predetermined condition. Next, identification likelihoods are calculated as likelihoods of detecting identifications in connection with tracks indicated by track-coupling candidates included in track-coupling candidate/identification pairs ascribed to each of the selected hypotheses. Identification likelihoods are integrated per each track-coupling candidate/identification pair, thus calculating an identification likelihood regarding the selected hypothesis. A most-probable hypothesis is estimated based on identification likelihoods of hypotheses. | 06-14-2012 |
20120148103 | METHOD AND SYSTEM FOR AUTOMATIC OBJECT DETECTION AND SUBSEQUENT OBJECT TRACKING IN ACCORDANCE WITH THE OBJECT SHAPE - A method and system for automatic object detection and subsequent object tracking in accordance with the object shape in digital video systems having at least one camera for recording and transmitting video sequences. In accordance with the method and system, an object detection algorithm based on a Gaussian mixture model and expanded object tracking based on Mean-Shift are combined with each other in object detection. The object detection is expanded in accordance with a model of the background by improved removal of shadows, the binary mask generated in this way is used to create an asymmetric filter core, and then the actual algorithm for the shape-adaptive object tracking, expanded by a segmentation step for adapting the shape, is initialized, and therefore a determination at least of the object shape or object contour or the orientation of the object in space is made possible. | 06-14-2012 |
20120148104 | PEDESTRIAN-CROSSING MARKING DETECTING METHOD AND PEDESTRIAN-CROSSING MARKING DETECTING DEVICE - Provided are a pedestrian-crossing marking detecting method and a pedestrian-crossing marking detecting device, wherein the existence of pedestrian crossing markings and the positions thereof can be detected accurately from within a picked up image, even when detection of the intensity edges of painted sections is difficult. In the pedestrian-crossing mark detecting device ( | 06-14-2012 |
20120155702 | System and Method for Detecting Nuclear Material in Shipping Containers - A system and method for detecting metal contraband such as weapons related material in shipping containers where a container is scanned with at least one penetrating beam, preferably a tomographic x-ray beam, and at least one image is formed. The image can be analyzed by a pattern recognizer to find voids representing metal. The voids can be further classified with respect to their 2 or 3-dimensional geometric shapes. Container ID and contents or bill of lading information can be combined along with other parameters such as total container weight to allow a processor to generate a detection probability. The processor can use artificial intelligence methods to classify suspicious containers for manual inspection. | 06-21-2012 |
20120155703 | MICROPHONE ARRAY STEERING WITH IMAGE-BASED SOURCE LOCATION - Methods and systems for beam forming an audio signal based on a location of an object relative to the listening device, the location being determined from positional data deduced from an optical image including the object. In an embodiment, an object's position is tracked based on video images of the object and the audio signal received from a microphone array located at a fixed position is filtered based on the tracked object position. Beam forming techniques may be applied to emphasize portions of an audio signal associated with sources near the object. | 06-21-2012 |
20120155704 | LOCALIZED WEATHER PREDICTION THROUGH UTILIZATION OF CAMERAS - Described herein are various technologies pertaining to predicting an amount of electrical power that is to be generated by a power system at a future point in time, wherein the power system utilizes a renewable energy resource to generate electrical power. A camera is positioned to capture an image of sky over a geographic region of interest. The image is analyzed to predict an amount of solar radiation that is to be received by the power source at a future point in time. The predicted solar radiation is used to predict an amount of electrical power that will be output by the power system at the future point in time. A computational resource of a data center that is powered by way of the power source is managed as a function of the predicted amount of power. | 06-21-2012 |
20120155705 | FIRST PERSON SHOOTER CONTROL WITH VIRTUAL SKELETON - A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured aiming vector control, and a virtual weapon is aimed in proportion to the gestured aiming vector control. | 06-21-2012 |
20120155706 | RANGE IMAGE GENERATION APPARATUS, POSITION AND ORIENTATION MEASUREMENT APPARATUS, RANGE IMAGE PROCESSING APPARATUS, METHOD OF CONTROLLING RANGE IMAGE GENERATION APPARATUS, AND STORAGE MEDIUM - A range image generation apparatus comprises: a generation unit adapted to generate a first range image of a target measurement object at one of a predetermined in-plane resolution and a predetermined depth-direction range resolving power; an extraction unit adapted to extract range information from the first range image generated by the generation unit; and a decision unit adapted to decide, as a parameter based on the range information extracted by the extraction unit, one of an in-plane resolution and a depth-direction range resolving power of a second range image to be generated by the generation unit, wherein the generation unit generates the second range image using the parameter decided by the decision unit. | 06-21-2012 |
20120155707 | IMAGE PROCESSING APPARATUS AND METHOD OF PROCESSING IMAGE - An image processing apparatus includes a first detecting unit configured to detect an object in an image; a determining unit configured to determine a moving direction of the object detected by the first detecting unit; and a second detecting unit configured to perform detection processing of detecting whether the object detected by the first detecting unit is a specific object on the basis of the moving direction of the object determined by the determining unit. | 06-21-2012 |
20120155708 | APPARATUS AND METHOD FOR MEASURING TARGET POINT IN VIDEO - Disclosed are an apparatus and method for measuring a target point in a video. In the apparatus and method for measuring a target point in a video, a target point is recognized in a video including the target point set as a measuring target, information regarding the target point is extracted by using location information of the recognized target point and map information of the surroundings of the recognized target point, and the extracted target point is displayed in the video while providing detailed map information regarding the target point. Accordingly, a user can be quickly provided with detailed information regarding the location of the target point or an object present in a visual range and geo-spatial information of the surroundings. | 06-21-2012 |
20120155709 | Detecting Orientation of Digital Images Using Face Detection Information - A method of automatically establishing the correct orientation of an image using facial information. This method exploits an inherent property of image recognition algorithms in general, and face detection in particular: recognition is based on criteria that are highly orientation sensitive. By applying a detection algorithm to the image in various orientations, or alternatively by rotating the classifiers, and comparing the number of faces successfully detected in each orientation, one may conclude which orientation is most likely correct. The method can be implemented as an automated or semi-automatic method to guide users in viewing, capturing, or printing images. | 06-21-2012 |
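The core idea of the entry above, run a face detector at each candidate rotation and keep the rotation with the most detections, can be sketched with the detector abstracted as a callable (this interface is hypothetical, not the patent's):

```python
def likely_orientation(image, detect_faces, angles=(0, 90, 180, 270)):
    """Return the rotation angle at which detect_faces(image, angle)
    reports the most faces -- taken as the upright orientation."""
    counts = {a: detect_faces(image, a) for a in angles}
    return max(counts, key=counts.get)
```

In practice detect_faces would rotate the image (or the classifiers) by the given angle and run an orientation-sensitive face detector over the result.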
20120155710 | PAPER-SHEET HANDLING APPARATUS AND PAPER-SHEET HANDLING METHOD - A paper-sheet handling apparatus ( | 06-21-2012 |
20120163656 | METHOD AND APPARATUS FOR IMAGE-BASED POSITIONING - Method and apparatus are provided for image-based positioning, comprising capturing a first image with an image capturing device, the first image including at least one object; moving the platform and capturing a second image with the image capturing device, the second image including the at least one object; capturing in the first image an image of a surface and in the second image a second image of the surface; processing the plurality of images of the object and the surface using a combined feature-based process and surface tracking process to track the location of the surface; and, finally, determining the location of the platform from the combined feature-based and surface-based processes. | 06-28-2012 |
20120163657 | Summary View of Video Objects Sharing Common Attributes - Disclosed herein are a method, system, and computer program product for displaying on a display device ( | 06-28-2012 |
20120163658 | Temporal-Correlations-Based Mode Connection - Disclosed herein are a system, method, and computer program product for updating a scene model ( | 06-28-2012 |
20120163659 | IMAGING APPARATUS, IMAGING METHOD, AND COMPUTER READABLE STORAGE MEDIUM - An imaging apparatus includes an imaging unit that generates a pair of pieces of image data mutually having a parallax by capturing a subject, an image processing unit that performs special effect processing, which is capable of producing a visual effect by combining a plurality of pieces of image processing, on a pair of images corresponding to the pair of pieces of image data, and a region setting unit that sets a region where the image processing unit performs the special effect processing on the pair of images. | 06-28-2012 |
20120163660 | PROCESSING SYSTEM - A processing system for plate-like objects is provided, with an exposure device and an object carrier with an object carrier surface for receiving the object. The exposure device and the carrier are movable relative to one another, such that the exact position of the object relative to the carrier is determinable. An edge detection device is provided which comprises at least one edge illumination unit having an illumination area, within which an object edge located in the respective object edge area has light directed onto it from the side of the carrier. At least one edge image detection unit is provided on a side of the object located opposite the carrier, the edge image detection unit imaging an edge section of the object edges located in the illumination area as an edge image, such that the respective edge image is detectable in its exact position relative to the carrier. | 06-28-2012 |
20120163661 | APPARATUS AND METHOD FOR RECOGNIZING MULTI-USER INTERACTIONS - An apparatus for recognizing multi-user interactions includes: a pre-processing unit for receiving a single visible light image to perform pre-processing; a motion region detecting unit for detecting a motion region from the image to generate motion blob information; a skin region detecting unit for extracting information on a skin color region from the image to generate a skin blob list; a Haar-like detecting unit for performing Haar-like face and eye detection by using only contrast information from the image; a face tracking unit for recognizing a face of a user from the image by using the skin blob list and results of the Haar-like face and eye detection; and a hand tracking unit for recognizing a hand region of the user from the image. | 06-28-2012 |
20120163662 | METHOD FOR BUILDING OUTDOOR MAP FOR MOVING OBJECT AND APPARATUS THEREOF - The method for building an outdoor map for a moving object according to an exemplary embodiment of the present invention includes: receiving a real satellite image for an outdoor space to which the moving object is to move; calculating pixel information including sizes of length and width pixels and a physical distance of one pixel in the real satellite image; measuring a reference position coordinate for a reference position selected from the real satellite image; and linking a pixel number corresponding to the reference position, the reference position coordinate, and the pixel information to the real satellite image in order to build the outdoor map for the moving object, and further includes creating information on a road network in which the moving object navigates based on the pixel number corresponding to the reference position, the reference position coordinate, and the pixel information. | 06-28-2012 |
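The pixel information in the entry above (a physical ground distance per pixel) lets map pixels be converted into displacements from the surveyed reference position. A sketch with assumed axis conventions (image y grows downward; names are illustrative):

```python
def pixel_offset_to_metres(pixel, ref_pixel, metres_per_pixel):
    """Ground-plane displacement (east, north) in metres of a map
    pixel from the reference pixel, given the per-pixel ground
    distance measured from the satellite image."""
    east = (pixel[0] - ref_pixel[0]) * metres_per_pixel
    north = (ref_pixel[1] - pixel[1]) * metres_per_pixel
    return east, north
```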
20120163663 | SECURITY USE RESTRICTIONS FOR A MEDICAL COMMUNICATION MODULE AND HOST DEVICE - System and method for interfacing with a medical device. The system has a host device and a communication module. The host device has a user interface configured to input and display information relating to the interfacing with the medical device. The communication module is locally coupled to the host device and configured to communicate wirelessly with the medical device. The system, implemented by the host device and the communication module, is configured to communicate with the medical device with functions. The system, implemented by at least one of the host device and the communication module, has validation layers configured for use by users, each of the users having access to at least one of the validation layers based on a validation condition, each individual one of the functions being operational through the user interface only with one of the validation layers. | 06-28-2012 |
20120163664 | METHOD AND SYSTEM FOR INPUTTING CONTACT INFORMATION - A method and a system for inputting contact information are provided. The method includes: acquiring a content attribute of a current edit box; starting up a camera device, and entering a shoot preview interface of the camera device; placing a text content of contact information to be input in the shoot preview interface of the camera device, and shooting the text content of the contact information; analyzing and recognizing the text content located near the positioning identifier in the preview interface in an image through an optical character recognition technology, and extracting a contact information character string conforming to the content attribute of the current edit box; and inputting a recognition result character string into the current edit box. | 06-28-2012 |
20120163665 | Method of object location in airborne imagery using recursive quad space image processing - A method and computer workstation are disclosed which determine the location in the ground space of selected point in a digital image of the earth obtained by an airborne camera. The method includes the steps of: (a) performing independently and in parallel a recursive partitioning of the image space and the ground space into successively smaller quadrants until a pixel coordinate in the image assigned to the selected point is within a predetermined limit (Δ) of the center of a final recursively partitioned quadrant in the image space. The method further includes a step of (b) calculating a geo-location of the point in the ground space corresponding to the selected point in the image space from the final recursively partitioned quadrant in the ground space corresponding to the final recursively partitioned quadrant in the image space. | 06-28-2012 |
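The image-space half of step (a) above can be sketched as a recursion that halves the quadrant containing the selected pixel until the quadrant center is within Δ of it; the ground-space recursion runs in parallel with the same splits. Names and the termination test are assumptions:

```python
def refine_quadrant(px, py, x0, y0, x1, y1, delta):
    """Recursively descend into the sub-quadrant containing (px, py)
    until the quadrant center lies within delta of the point;
    returns the final quadrant bounds (x0, y0, x1, y1)."""
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    if abs(px - cx) <= delta and abs(py - cy) <= delta:
        return (x0, y0, x1, y1)
    x0, x1 = (x0, cx) if px < cx else (cx, x1)
    y0, y1 = (y0, cy) if py < cy else (cy, y1)
    return refine_quadrant(px, py, x0, y0, x1, y1, delta)
```

Each call halves the quadrant in both axes, so the center-to-point distance shrinks geometrically and the recursion terminates for any delta > 0.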
20120163666 | Object Processing Employing Movement - Directional albedo of a particular article, such as an identity card, is measured and stored. When the article is later presented, it can be confirmed to be the same particular article by re-measuring the albedo function, and checking for correspondence against the earlier-stored data. The re-measuring can be performed through use of a handheld optical device, such as a camera-equipped cell phone. The albedo function can serve as random key data in a variety of cryptographic applications. The function can be changed during the life of the article. A variety of other features are also detailed. | 06-28-2012 |
20120163667 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 06-28-2012 |
20120163668 | TRANSLATION AND DISPLAY OF TEXT IN PICTURE - A method performed by a mobile terminal may include displaying an image via a display of the mobile device. A first user selection of a portion of the image is received and text in the selected portion of the image is identified. The identified text is displayed via the display. A second user selection of at least a portion of the identified text is received. The portion of the identified text is translated from a first language into a second language that differs from the first language. The translated text, in the second language, is displayed over the image via the display. | 06-28-2012 |
20120163669 | Systems and Methods for Detecting a Tilt Angle from a Depth Image - A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels. | 06-28-2012 |
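The tilt-angle computation outlined in this abstract can be illustrated with a minimal sketch. All names below are assumptions for illustration, and the centroid-of-pixel-group step stands in for whatever pixel selection the application actually claims:

```python
import math

def tilt_angle_deg(upper_pixels, lower_pixels):
    """Estimate body tilt from two pixel groups in a depth image.

    upper_pixels / lower_pixels are lists of (x, y) image coordinates,
    e.g. pixels around the shoulders and around the hip-knee midpoint.
    Returns the angle, in degrees, between the upper-to-lower body axis
    and the vertical image axis (0 = upright; y grows downward).
    """
    ux = sum(p[0] for p in upper_pixels) / len(upper_pixels)
    uy = sum(p[1] for p in upper_pixels) / len(upper_pixels)
    lx = sum(p[0] for p in lower_pixels) / len(lower_pixels)
    ly = sum(p[1] for p in lower_pixels) / len(lower_pixels)
    # Angle of the body axis relative to vertical.
    return math.degrees(math.atan2(lx - ux, ly - uy))
```

For an upright body the two centroids share an x coordinate and the function returns 0; a sideways lean shifts the lower centroid horizontally and yields a nonzero angle.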
20120163670 | BEHAVIORAL RECOGNITION SYSTEM - Embodiments of the present invention provide a method and a system for analyzing and learning behavior based on an acquired stream of video frames. Objects depicted in the stream are determined based on an analysis of the video frames. Each object may have a corresponding search model used to track an object's motion frame-to-frame. Classes of the objects are determined and semantic representations of the objects are generated. The semantic representations are used to determine objects' behaviors and to learn about behaviors occurring in an environment depicted by the acquired video streams. In this way, the system rapidly learns, in real time, normal and abnormal behaviors for any environment by analyzing movements or activities or absence of such in the environment, and identifies and predicts abnormal and suspicious behavior based on what has been learned. | 06-28-2012 |
20120170799 | MOVABLE RECOGNITION APPARATUS FOR A MOVABLE TARGET - A movable recognition apparatus and a method thereof, which identify an activity configuration of at least a movable target, provide a plurality of distance measuring devices arranged as a two-dimensional matrix on a plane of a specific space to detect and obtain a plurality of vertical distance values between the movable target and the plane. Then, an analyzing device is applied to establish a contour graph corresponding to the movable target by means of referencing the vertical distance values and to identify the activity configuration in accordance with the shape change of the contour graph. Therefore, the movable recognition apparatus can perform the identification task conveniently while meeting privacy requirements, in addition to ensuring accuracy of the identified activity configuration. | 07-05-2012 |
20120170800 | SYSTEMS AND METHODS FOR CONTINUOUS PHYSICS SIMULATION FROM DISCRETE VIDEO ACQUISITION - A computer implemented method for processing video is provided. A first image and a second image are captured by a camera. A feature present in the first camera image and the second camera image is identified. A first location value of the feature within the first camera image is identified. A second location value of the feature within the second camera image is identified. An intermediate location value of the feature based at least in part on the first location value and the second location value is determined. The intermediate location value and the second location value are communicated to a physics simulation. | 07-05-2012 |
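The intermediate-location step this abstract describes can be sketched as a simple interpolation between a feature's positions in two consecutive frames. This is only an illustrative reading; the function name, the `alpha` parameter, and the choice of linear interpolation are assumptions, not the claimed method:

```python
def intermediate_location(loc1, loc2, alpha=0.5):
    """Interpolate a feature's position between two camera frames.

    loc1, loc2: (x, y) locations of the same feature in the first and
    second camera images. alpha is the fraction of the inter-frame
    interval (0 -> loc1, 1 -> loc2). The result can be fed to a physics
    simulation alongside the second location, as the abstract describes.
    """
    return (loc1[0] + alpha * (loc2[0] - loc1[0]),
            loc1[1] + alpha * (loc2[1] - loc1[1]))
```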
20120170801 | System for Food Recognition Method Using Portable Devices Having Digital Cameras - The present invention relates to a method for automatic food recognition by means of portable devices equipped with digital cameras. | 07-05-2012 |
20120170802 | SCENE ACTIVITY ANALYSIS USING STATISTICAL AND SEMANTIC FEATURES LEARNT FROM OBJECT TRAJECTORY DATA - Trajectory information of objects appearing in a scene can be used to cluster trajectories into groups of trajectories according to each trajectory's relative distance between each other for scene activity analysis. By doing so, a database of trajectory data can be maintained that includes the trajectories to be clustered into trajectory groups. This database can be used to train a clustering system, and with extracted statistical features of resultant trajectory groups a new trajectory can be analyzed to determine whether the new trajectory is normal or abnormal. Embodiments described herein, can be used to determine whether a video scene is normal or abnormal. In the event that the new trajectory is identified as normal the new trajectory can be annotated with the extracted semantic data. In the event that the new trajectory is determined to be abnormal a user can be notified that an abnormal behavior has occurred. | 07-05-2012 |
20120170803 | SEARCHING RECORDED VIDEO - Embodiments of the disclosure provide for systems and methods for creating metadata associated with a video data. The metadata can include data about objects viewed within a video scene and/or events that occur within the video scene. Some embodiments allow users to search for specific objects and/or events by searching the recorded metadata. In some embodiments, metadata is created by receiving a video frame and developing a background model for the video frame. Foreground object(s) can then be identified in the video frame using the background model. Once these objects are identified they can be classified and/or an event associated with the foreground object may be detected. The event and the classification of the foreground object can then be recorded as metadata. | 07-05-2012 |
20120170804 | Method and apparatus for tracking target object - A method and apparatus for tracking a target object are provided. A plurality of images is received, and one of the images is selected as a current image. A specific color of the current image is extracted. And the current image is compared with a template image to search a target object in the current image. If the target object is not found in the current image, a previous image with the target object is searched in the images received before the current image. And the target object is searched in the current image according to an object feature of the previous image. The object feature and an object location are updated into a storage unit when the target object is found. | 07-05-2012 |
20120170805 | OBJECT DETECTION IN CROWDED SCENES - Methods and systems are provided for object detection. A method includes automatically collecting a set of training data images from a plurality of images. The method further includes generating occluded images. The method also includes storing in a memory the generated occluded images as part of the set of training data images, and training an object detector using the set of training data images stored in the memory. The method additionally includes detecting an object using the object detector, the object detector detecting the object based on the set of training data images stored in the memory. | 07-05-2012 |
20120170806 | METHOD, TERMINAL, AND COMPUTER-READABLE RECORDING MEDIUM FOR SUPPORTING COLLECTION OF OBJECT INCLUDED IN INPUTTED IMAGE - The present invention relates to a method for supporting a collection of an object included in an image inputted through a terminal. The method includes the steps of: recognizing the identity of an object by using at least one of an object recognition technology, an optical character recognition technology, and a barcode recognition technology; getting a collection page including at least part of the information on an auto comment containing a phrase or sentence correctly combined under the grammar of a language by using the recognition information and the information on the image of the recognized object; allowing the collection page to be stored when a request for registration of the page is received; and providing a specific user with the information about a reward system. | 07-05-2012 |
20120170807 | APPARATUS AND METHOD FOR EXTRACTING DIRECTION INFORMATION IMAGE IN A PORTABLE TERMINAL - Provided is an apparatus and method for extracting a direction of an image in a portable terminal without using a sensor for extracting direction information of a captured image. An apparatus for extracting direction information of an image in a portable terminal includes a camera unit and a control unit, wherein the control unit extracts a detection direction of an object from the captured image as direction information, and stores the captured image together with the extracted direction information for a subsequent display of the captured image in a normal direction. | 07-05-2012 |
20120170808 | Obstacle Detection Device - The present invention provides an obstacle detection device that enables stable obstacle detection with less misdetections even when a bright section and a dark section are present in an obstacle and a continuous contour of the obstacle is present across the bright section and the dark section. The obstacle detection device includes a processed image generating unit that generates a processed image for detecting an obstacle from a picked-up image, a small region dividing unit that divides the processed image into plural small regions, an edge threshold setting unit that sets an edge threshold for each of the small regions from pixel values of the plural small regions and the processed image, an edge extracting unit that calculates a gray gradient value of each of the small regions from the plural small regions and the processed image and generates, using the edge threshold for the small region corresponding to the calculated gray gradient value, an edge image and a gradient direction image, and an obstacle recognizing unit that determines presence or absence of an obstacle from the edge image in a matching determination region set in the edge image and the gradient direction image corresponding to the edge image. The small region dividing unit divides the processed image into the plural small regions on the basis of an illumination state on the outside of the own vehicle. | 07-05-2012 |
20120170809 | PROCEDURE FOR RECOGNIZING OBJECTS - A recognition and placement procedure that identifies from a digital image captured with a digital camera the position and orientation of a stored target object in a variety of positions, without digitally storing a wide variety of essential characters per pattern associated with the target object. | 07-05-2012 |
20120170810 | System and Method for Linking Real-World Objects and Object Representations by Pointing - A system and method are described for selecting and identifying a unique object or feature in the system user's three-dimensional (“3-D”) environment in a two-dimensional (“2-D”) virtual representation of the same object or feature in a virtual environment. The system and method may be incorporated in a mobile device that includes position and orientation sensors to determine the pointing device's position and pointing direction. The mobile device incorporating the present invention may be adapted for wireless communication with a computer-based system that represents static and dynamic objects and features that exist or are present in the system user's 3-D environment. The mobile device incorporating the present invention will also have the capability to process information regarding a system user's environment and calculating specific measures for pointing accuracy and reliability. | 07-05-2012 |
20120170811 | METHOD AND APPARATUS FOR WHEEL ALIGNMENT - A vehicle wheel alignment method and system is provided. A three-dimensional target is attached to a vehicle wheel known to be in alignment. The three-dimensional target has multiple target elements thereon, each of which has known geometric characteristics and 3D spatial relationship with one another. | 07-05-2012 |
20120170812 | DRIVING SUPPORT DISPLAY DEVICE - Disclosed is a driving support display device that composites and displays images acquired from a plurality of cameras, whereby images which are easy for the user to understand and which are accurate in the areas near the borders of partial images are provided. An image composition unit ( | 07-05-2012 |
20120170813 | METHOD OF MEASURING THE OUTLINE OF A FEATURE - A method of measuring an outline of a feature on a surface includes providing a substrate. The substrate includes a feature on a surface of the substrate. The feature includes walls. The surface of the substrate is illuminated. Edges of the walls are illuminated to measure a first contour and a second contour of the feature. An outline of the feature is calculated based on the first contour and the second contour. | 07-05-2012 |
20120177249 | METHOD OF DETECTING LOGOS, TITLES, OR SUB-TITLES IN VIDEO FRAMES - Detecting a static graphic object (such as a logo, title, or sub-title) in a sequence of video frames may be accomplished by analyzing each selected one of a plurality of pixels in a video frame of the sequence of video frames. Basic conditions for the selected pixel may be tested to determine whether the selected pixel is a static pixel. When the selected pixel is a static pixel, a static similarity measure and a forward motion similarity measure may be determined for the selected pixel. A temporal score for the selected pixel may be determined based at least in part on the similarity measures. Finally, a static graphic object decision for the selected pixel may be made based at least in part on the temporal score. | 07-12-2012 |
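A heavily simplified sketch of the per-pixel temporal scoring this abstract outlines follows. It collapses the abstract's separate static and forward-motion similarity measures into a single frame-to-frame difference test; the function name, thresholds, and that simplification are all assumptions for illustration:

```python
def static_pixel_score(values, diff_threshold=4, min_static_fraction=0.9):
    """Temporal score for one pixel location across a frame sequence.

    values: the pixel's intensity in each successive video frame.
    Returns (score, is_static), where score is the fraction of
    frame-to-frame transitions whose absolute change stays within
    diff_threshold, and is_static is the final per-pixel decision.
    """
    diffs = [abs(b - a) for a, b in zip(values, values[1:])]
    static = sum(1 for d in diffs if d <= diff_threshold)
    score = static / len(diffs)
    return score, score >= min_static_fraction
```

A logo pixel that barely changes over many frames scores near 1.0, while moving scene content scores low and is rejected.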
20120177250 | BOUNDARY DETECTION DEVICE FOR VEHICLES - In a lane boundary detection device, a plurality of edge components are extracted from a captured image capturing the periphery of the own vehicle. Candidates of a curve (including straight lines) that is to be the boundary of a driving area are extracted as boundary candidates based on the placement of the plurality of edge components. Then, an angle formed by a tangent in a predetermined section of each extracted boundary candidate and a vertical line in the captured image is calculated. Boundary candidates of which the formed angle is less than an angle reference value are set to have low probability. The boundary candidate having the highest probability among the boundary candidates is set as the boundary of the driving area. | 07-12-2012 |
20120177251 | IMAGE ANALYSIS BY OBJECT ADDITION AND RECOVERY - The invention described herein is generally directed to methods for analyzing an image. In particular, crowded field images may be analyzed for unidentified, unobserved objects based on an iterative analysis of modified images including artificial objects or removed real objects. The results can provide an estimate of the completeness of analysis of the image, an estimate of the number of objects that are unobserved in the image, and an assessment of the quality of other similar images. | 07-12-2012 |
20120183175 | METHOD FOR IDENTIFYING A SCENE FROM MULTIPLE WAVELENGTH POLARIZED IMAGES - Techniques for identifying images of a scene including illuminating the scene with a beam of 3 or more wavelengths, polarized according to a determined direction; simultaneously acquiring for each wavelength an image X | 07-19-2012 |
20120183176 | PERFORMING REVERSE TIME IMAGING OF MULTICOMPONENT ACOUSTIC AND SEISMIC DATA - A technique includes performing reverse time imaging to determine an image in a region of interest. The reverse time imaging includes modeling a pressure wavefield and a gradient wavefield in the region of interest based at least in part on particle motion data and pressure data acquired by sensors in response to energy being produced by at least one source. | 07-19-2012 |
20120183177 | IMAGE SURVEILLANCE SYSTEM AND METHOD OF DETECTING WHETHER OBJECT IS LEFT BEHIND OR TAKEN AWAY - An image surveillance system and a method of detecting whether an object is left behind or taken away are provided. The image surveillance system includes: a foreground detecting unit which detects a foreground region based on a pixel information difference between a background image and a current input image; a still region detecting unit which detects a candidate still region by clustering foreground pixels of the foreground region, and determines whether the candidate still region is a falsely detected still region or a true still region; and an object detecting unit which determines whether an object is left behind or taken away, based on edge information about the true still region. | 07-19-2012 |
20120183178 | METHOD AND DEVICE FOR RECOGNITION OF INFORMATION APPLIED ON PACKAGES - Embodiments describe a system and method for reading the information on bundled packages wrapped in transparent film. The film can obscure information on the outside of the packages making the automated identification and tracking of the packages difficult. Embodiments described herein provide a system and method for capturing the unique information regardless of the obscuring effects of packaging films. A camera that is insensitive to UV light captures visible light emitted by labels after the labels are irradiated by UV light. The light emission induces greater contrast overcoming any distortion that might have occurred due to the transparent packaging film. | 07-19-2012 |
20120189160 | LINE-OF-SIGHT DETECTION APPARATUS AND METHOD THEREOF - A line-of-sight detection apparatus includes a detection unit configured to detect a face from image data, a first extraction unit configured to extract a feature amount corresponding to a direction of the face from the image data, a calculation unit configured to calculate a line-of-sight reliability of each of a right eye and a left eye based on the face, a selection unit configured to select an eye according to the line-of-sight reliability, a second extraction unit configured to extract a feature amount of an eye region of the selected eye from the image data, and an estimation unit configured to estimate a line of sight of the face based on the feature amount corresponding to the face direction and the feature amount of the eye region. | 07-26-2012 |
20120189161 | VISUAL ATTENTION APPARATUS AND CONTROL METHOD BASED ON MIND AWARENESS AND DISPLAY APPARATUS USING THE VISUAL ATTENTION APPARATUS - Disclosed are a visual attention apparatus based on mind awareness and an image output apparatus using the same. Exemplary embodiments of the present invention can reduce data throughput by performing object segmentation and context analysis according to downsampling and colors and approximate shapes of input images so as to detect attention regions using extrinsic visual attention and intrinsic visual attention. In addition, the exemplary embodiments of the present invention can detect the attention regions having different viewpoints for each user by detecting the attention regions due to the extrinsic visual attention and the intrinsic visual attention and processing and displaying the attention regions as various regions of interest, thereby increasing the image immersion and the utility of contents. | 07-26-2012 |
20120189162 | MOBILE UNIT POSITION DETECTING APPARATUS AND MOBILE UNIT POSITION DETECTING METHOD - The mobile unit position detecting apparatus generates target data by extracting a target from an image shot by the image capturing device, extracts target setting data that best matches the target data, is prerecorded in a recording unit and is shot for each target, obtains a target ID corresponding to the extracted target setting data from the recording unit, detects position data associated with the obtained target ID, tracks the target in the image shot by the image capturing device, and calculates an aspect ratio of the target being tracked in the image. If the aspect ratio is equal to or lower than a threshold value, the mobile unit position detecting apparatus outputs the detected position data. | 07-26-2012 |
20120189163 | APPARATUS AND METHOD FOR RECOGNIZING HAND ROTATION - An apparatus and a method are provided that can intuitively and easily recognize hand rotation. The apparatus for recognizing a hand rotation includes a camera for photographing a plurality of hand image data, a detector for extracting circles through fingers of the hand image data and a controller for recognizing hand rotation through changes in positions and sizes of the circles extracted from each of the plurality of hand image data. | 07-26-2012 |
20120189164 | RULE-BASED COMBINATION OF A HIERARCHY OF CLASSIFIERS FOR OCCLUSION DETECTION - A person detection system includes a face detector configured to detect a face in an input video sequence, the face detector outputting a face keyframe to be stored if a face is detected; and a person detector configured to detect a person in the input video sequence if the face detector fails to detect a face, the person detector outputting a person keyframe to be stored, if a person is detected in the input video sequence. | 07-26-2012 |
20120189165 | METHOD OF PROCESSING BODY INSPECTION IMAGE AND BODY INSPECTION APPARATUS - A method of processing a body inspection image and a body inspection apparatus are disclosed. In one embodiment, the method may comprise recognizing a target region by means of pattern recognition, and performing privacy protection processing on the recognized target region. The target region may comprise a head and/or crotch part. According to the present disclosure, it is possible to achieve a compromise between privacy protection and body inspection. | 07-26-2012 |
20120195459 | CLASSIFICATION OF TARGET OBJECTS IN MOTION - A method for classifying objects in motion that includes providing, to a processor, feature data for one or more classes of objects to be classified, wherein the feature data is indexed by object class, orientation, and sensor. The method also includes providing, to the processor, one or more representative models for characterizing one or more orientation motion profiles for the one or more classes of objects in motion. The method also includes acquiring, via a processor, feature data for a target object in motion from multiple sensors and/or for multiple times and trajectory of the target object in motion to classify the target object based on the feature data, the one or more orientation motion profiles and the trajectory of the target object in motion. | 08-02-2012 |
20120195460 | CONTEXT AWARE AUGMENTATION INTERACTIONS - A mobile platform renders different augmented reality objects based on the spatial relationship, such as the proximity and/or relative positions between real-world objects. The mobile platform detects and tracks a first object and a second object in one or more captured images. The mobile platform determines the spatial relationship of the objects, e.g., the proximity or distance between objects and/or the relative positions between objects. The proximity may be based on whether the objects appear in the same image or the distance between the objects. Based on the spatial relationship of the objects, the augmentation object to be rendered is determined, e.g., by searching a database. The selected augmentation object is rendered and displayed. | 08-02-2012 |
20120195461 | CORRELATING AREAS ON THE PHYSICAL OBJECT TO AREAS ON THE PHONE SCREEN - A mobile platform renders an augmented reality graphic to indicate selectable regions of interest on a captured image or scene. The region of interest is an area that is defined on the image of a physical object, which when selected by the user can generate a specific action. The mobile platform captures and displays a scene that includes an object and detects the object in the scene. A coordinate system is defined within the scene and used to track the object. A selectable region of interest is associated with one or more areas on the object in the scene. An indicator graphic is rendered for the selectable region of interest, where the indicator graphic identifies the selectable region of interest. | 08-02-2012 |
20120195462 | FLAME IDENTIFICATION METHOD AND DEVICE USING IMAGE ANALYSES IN HSI COLOR SPACE - In a flame identification method and device for identifying any flame image in a plurality of frames captured consecutively from a monitored area, for each image frame, intensity foreground pixels are obtained based on intensity values of pixels, a fire-like image region containing the intensity foreground pixels is defined when an intensity foreground area corresponding to the intensity foreground pixels is greater than a predetermined intensity foreground area threshold, and saturation foreground pixels are obtained from all pixels in the fire-like image region based on saturation values thereof to obtain a saturation foreground area corresponding to the saturation foreground pixels. Linear regression analyses are performed on two-dimensional coordinates each formed by the intensity and saturation pixel areas associated with a corresponding image frame to generate a determination coefficient. Whether a flame image exists in the image frames is determined based on the determination coefficient and a predetermined identification threshold. | 08-02-2012 |
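The determination coefficient this abstract derives from linear regression over per-frame (intensity area, saturation area) pairs can be sketched as an ordinary R-squared computation. This is an illustrative reading only; the function name and the assumption that the coefficient is plain R-squared of a least-squares line are mine, not the patent's:

```python
def determination_coefficient(points):
    """R-squared of a least-squares line fit to 2-D points.

    points: one (intensity_foreground_area, saturation_foreground_area)
    pair per image frame. A flame's flicker tends to make the two areas
    co-vary, so a high coefficient supports a flame decision.
    """
    n = len(points)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    if sxx == 0 or syy == 0:
        return 0.0  # degenerate: no variation along one axis
    return sxy * sxy / (sxx * syy)
```

The result would then be compared against a predetermined identification threshold, as the abstract describes.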
20120195463 | IMAGE PROCESSING DEVICE, THREE-DIMENSIONAL IMAGE PRINTING SYSTEM, AND IMAGE PROCESSING METHOD AND PROGRAM - The image processing device includes a three-dimensional image data input unit which enters three-dimensional image data representing a three-dimensional image, a subject extractor which extracts a subject from the three-dimensional image data, a spatial vector calculator which calculates a spatial vector of the subject from a plurality of planar image data having different viewpoints contained in the three-dimensional image data, and a three-dimensional image data recorder which records the spatial vector and the three-dimensional image data in association with each other. | 08-02-2012 |
20120195464 | AUGMENTED REALITY SYSTEM AND METHOD FOR REMOTELY SHARING AUGMENTED REALITY SERVICE - An augmented reality (AR) system and method for remotely sharing an AR service is provided. The AR system includes a plurality of client devices and a host device. The AR system allows information related to a marker and information related to an AR object to be shared between client devices participating in an AR session, which may be separated by a reference distance, through a host device. Accordingly, an AR service may be shared between the client devices. | 08-02-2012 |
20120195465 | PERSONNEL SECURITY SCREENING SYSTEM WITH ENHANCED PRIVACY - The present invention is directed towards processing security images of people subjected to X-ray radiation. The present invention processes a generated image by dividing the generated image into at least two regions or mask images, separately processing the at least two regions of the image, and viewing the resultant processed region images either alone or as a combined image. | 08-02-2012 |
20120195466 | IMAGE-BASED SURFACE TRACKING - A method of image-tracking by using an image capturing device ( | 08-02-2012 |
20120195467 | Object Information Derived from Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 08-02-2012 |
20120195468 | Object Information Derived from Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 08-02-2012 |
20120195469 | FORMATION OF A TIME-VARYING SIGNAL REPRESENTATIVE OF AT LEAST VARIATIONS IN A VALUE BASED ON PIXEL VALUES - A method of forming a time-varying signal representative of at least variations in a value based on pixel values from a sequence of images, the signal corresponding in length to the sequence of images, includes obtaining the sequence of images. A plurality of groups ( | 08-02-2012 |
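A minimal sketch of forming such a time-varying signal follows: one sample per image, each derived from the pixel values of a chosen group. The function name, the (x, y) coordinate convention, and the use of a plain mean as the derived value are assumptions for illustration:

```python
def group_signal(frames, group):
    """Form a time-varying signal from a pixel group across an image sequence.

    frames: sequence of images, each indexed as frame[y][x].
    group: list of (x, y) pixel coordinates defining one group.
    Returns one sample per frame (the group's mean pixel value), so the
    signal's length matches the length of the image sequence.
    """
    return [sum(frame[y][x] for (x, y) in group) / len(group)
            for frame in frames]
```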
20120195470 | HIGH CONTRAST RETROREFLECTIVE SHEETING AND LICENSE PLATES - The present disclosure relates to the formation of high contrast, wavelength independent retroreflective sheeting made by including a light scattering material on at least a portion of the retroreflective sheeting. The light scattering material reduces the brightness of the retroreflective sheeting without substantially changing the appearance of the retroreflective sheeting when viewed under scattered light. | 08-02-2012 |
20120201417 | APPARATUS AND METHOD FOR PROCESSING SENSORY EFFECT OF IMAGE DATA - A method and apparatus is capable of processing a sensory effect of image data. The apparatus includes an image analyzer that analyzes depth information and texture information about at least one object included in an image. A motion analyzer analyzes a motion of a user. An image matching processor matches the motion of the user to the image. An image output unit outputs the image to which the motion of the user is matched, and a sensory effect output unit outputs a texture of an object touched by the body of the user to the body of the user. | 08-09-2012 |
20120201418 | DIGITAL RIGHTS MANAGEMENT OF CAPTURED CONTENT BASED ON CAPTURE ASSOCIATED LOCATIONS - A certification is received from a user stating that captured content does not comprise a particular restricted element, along with a request from the user for an adjustment of a digital rights management rule identified for the captured content based on the captured content comprising the particular restricted element. At least one term of the digital rights management rule is adjusted to reflect that the captured content does not comprise the particular restricted element. The usage of the captured content by the user is monitored to determine whether the usage matches the certification statement. | 08-09-2012 |
20120201419 | MAP INFORMATION DISPLAY APPARATUS, MAP INFORMATION DISPLAY METHOD, AND PROGRAM - A map information display apparatus for displaying map information on the basis of information on image-capturing times and image-capturing positions that are respectively associated with a plurality of captured images includes a captured image extraction unit configured to extract images captured within a predetermined time period that includes the image-capturing time of a predetermined captured image from among the plurality of captured images; a map area selection unit configured to select an area of a map so as to include the image-capturing positions of the captured images extracted by the captured image extraction unit by using as a reference the image-capturing position of the predetermined captured image; and a map information display unit configured to display map information in such a manner that the area of the map, which is selected by the map area selection unit, is displayed. | 08-09-2012 |
20120201420 | Object Recognition and Describing Structure of Graphical Objects - Methods for processing machine-readable forms or documents of non-fixed format are disclosed. The methods make use of, for example, a structural description of characteristics of document elements, a description of a logical structure of the document, and methods of searching for document elements by using the structural description. A structural description of the spatial and parametric characteristics of document elements and the logical connections between elements may include a hierarchical logical structure of the elements, specification of an algorithm of determining the search constraints, specification of characteristics of searched elements, and specification of a set of parameters for a compound element identified on the basis of the aggregate of its components. The method of describing the logical structure of a document and methods of searching for elements of a document may be based on the use of the structural description. | 08-09-2012 |
20120201421 | System and Method for Automatic Registration Between an Image and a Subject - A patient defines a patient space in which an instrument can be tracked and navigated. An image space is defined by image data that can be registered to the patient space. A tracking device can be connected to a member in a known manner that includes imageable portions that generate image points in the image data. Selected image slices or portions can be used to register reconstructed image data to the patient space. | 08-09-2012 |
20120201422 | SIGNAL PROCESSING APPARATUS - A signal processing apparatus for displaying an input image in the state in which a part of the image is enlarged, displays an enlarged image obtained by enlarging a part of a designated object in the input image so that the enlarged image is superimposed at a position in accordance with the position of the designated object. | 08-09-2012 |
20120201423 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE PROCESSING PROGRAM AND RECORDING MEDIUM - There are provided an image processing apparatus, an image processing method and an image processing program for transforming a target image having no contour of straight line portions. | 08-09-2012 |
20120207345 | TOUCHLESS HUMAN MACHINE INTERFACE - A system and method for receiving input from a user is provided. The system includes at least one camera configured to receive an image of a hand of the user and a controller configured to analyze the image and issue a command based on the analysis of the image. | 08-16-2012 |
20120207346 | Detecting and Localizing Multiple Objects in Images Using Probabilistic Inference - An object detection system is disclosed herein. The object detection system allows detection of one or more objects of interest using a probabilistic model. The probabilistic model may include voting elements usable to determine which hypotheses for locations of objects are probabilistically valid. The object detection system may apply an optimization algorithm such as a simple greedy algorithm to find hypotheses that optimize or maximize a posterior probability or log-posterior of the probabilistic model or a hypothesis receiving a maximal probabilistic vote from the voting elements in a respective iteration of the algorithm. Locations of detected objects may then be ascertained based on the found hypotheses. | 08-16-2012 |
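The greedy inference sketched in this abstract can be illustrated in a few lines: voting elements cast probabilistic votes for location hypotheses, the hypothesis receiving the maximal total vote is selected in each iteration, and its supporting voters are withdrawn before the next round. Everything below (the vote-matrix layout, the stopping score, the claiming rule) is an illustrative assumption, not taken from the patent:

```python
import numpy as np

def greedy_detect(votes, max_objects=5, min_score=1.0):
    """Greedily select object hypotheses from a voting matrix.

    votes[i, h] is the (hypothetical) probabilistic vote of voting
    element i for location hypothesis h.  Each round picks the
    hypothesis with the highest total vote, then removes the voting
    elements it claimed so they cannot support later detections.
    """
    votes = votes.astype(float).copy()
    active = np.ones(votes.shape[0], dtype=bool)  # unclaimed voters
    detections = []
    for _ in range(max_objects):
        scores = votes[active].sum(axis=0)        # total vote per hypothesis
        best = int(np.argmax(scores))
        if scores.size == 0 or scores[best] < min_score:
            break                                 # remaining support too weak
        detections.append(best)
        # claim the voters that contributed to this hypothesis
        claimed = votes[:, best] > 0
        active &= ~claimed
    return detections
```

Withdrawing claimed voters is what lets multiple nearby objects be detected without one strong hypothesis absorbing all the votes.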
20120207347 | IMAGE ROTATION FROM LOCAL MOTION ESTIMATES - A measure of frame-to-frame rotation is determined. Integral projection vector gradients are determined and normalized for a pair of images. Locations of primary maximum and minimum peaks of the integral projection vector gradients are determined. Based on normalized distances between the primary maximum and minimum peaks, a global image rotation is determined. | 08-16-2012 |
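One building block of this entry, the integral projection vector and its normalized gradient with primary maximum and minimum peaks, can be sketched as follows. The abstract does not give the rotation formula itself, so only the peak localization is shown, and the column-wise projection is an assumed choice:

```python
import numpy as np

def projection_gradient_peaks(image):
    """Locate the primary peaks of an integral-projection-vector gradient.

    The horizontal integral projection vector is the column-wise sum of
    the image; its gradient highlights strong vertical structure.  The
    locations of the maximum and minimum gradient peaks can then be
    compared between two frames (the rotation estimate derived from the
    normalized peak distances is not specified in the abstract and is
    omitted here).
    """
    proj = image.sum(axis=0).astype(float)      # integral projection vector
    grad = np.gradient(proj)                    # projection-vector gradient
    norm = np.linalg.norm(grad)
    if norm > 0:
        grad = grad / norm                      # normalized, as in the abstract
    return int(np.argmax(grad)), int(np.argmin(grad))
```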
20120207348 | VEHICLE DETECTION APPARATUS - A vehicle detection apparatus includes a lamp candidate extraction unit that extracts, as a lamp candidate, a pixel region that may correspond to a tail lamp of a vehicle from pixel regions that an integration processing unit creates by extracting and integrating pixels of an image, and a grouping unit that regroups groups containing the lamp candidate among the groups generated by grouping position data detected by a position detection unit and then regroups all groups. In the regrouping processing, the threshold used for regrouping groups containing the lamp candidate is set to allow regrouping more readily than the threshold used for subsequently regrouping all groups. | 08-16-2012 |
20120207349 | TARGETED CONTENT ACQUISITION USING IMAGE ANALYSIS - A method is provided in which a tag is affixed to a known individual that is to be identified within a known field of view of an image capture system. The tag is a physical tag comprising at least a known feature. Subsequent to affixing the tag to the known individual, image data is captured within the known field of view of the image capture system, which is then provided to a processor. Image analysis is performed on the captured image data to detect the at least a known feature. In dependence upon detecting the at least a known feature, an occurrence of the known individual within the captured image data is identified. | 08-16-2012 |
20120207350 | APPARATUS FOR IDENTIFICATION OF AN OBJECT QUEUE, METHOD AND COMPUTER PROGRAM - In daily life, people are often forced to join a queue in order, for example, to pay at a checkout or to be dealt with at an airport. Because of the various forms a queue can take, queues are not usually recorded automatically but are analyzed manually. For example, if a long queue forms at a supermarket, as a result of which the predicted waiting time for the customers rises above a threshold value, this situation can be identified by the checkout personnel, and a further checkout can be opened. | 08-16-2012 |
20120207351 | METHOD AND EXAMINATION APPARATUS FOR EXAMINING AN ITEM UNDER EXAMINATION IN THE FORM OF A PERSON AND/OR A CONTAINER - An examination apparatus examines an item including a person or a container and has a determination unit for determining a relevance level which can be assigned to the item under examination, in particular a hazard level, and an image capture unit for capturing an image of the item under examination. The examination apparatus has a database, an automated evaluation unit for automatically evaluating at least one section of the image using the database, an evaluation unit operated by a user for the visual evaluation of a section of the image by the user, and an input unit for inputting at least one evaluation input by the user, and a database processing unit for processing the database. The database processing unit processes a database entry using the evaluation input in conjunction with the determination of the relevance level. | 08-16-2012 |
20120207352 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 08-16-2012 |
20120207353 | System And Method For Detecting And Tracking An Object Of Interest In Spatio-Temporal Space - The present invention provides a system and method for detecting and tracking a moving object. First, robust change detection is applied to find initial candidate regions in consecutive frames. These initial detections in consecutive frames are stacked to produce space-time bands, which are extracted by a Hough transform and an entropy-minimization-based band detection algorithm. | 08-16-2012 |
20120207354 | IMAGE SENSING APPARATUS AND METHOD FOR CONTROLLING THE SAME - Upon receiving an instruction from a user to start sensing a still image, an image sensing apparatus performs scene determination based on an evaluation value of scene determination from an image sensed immediately after the luminance of the image converges to a predetermined range of a target luminance. The image sensing apparatus can accurately determine a scene of the image even when the image sensor has a narrow dynamic range. | 08-16-2012 |
20120207355 | X-RAY CT APPARATUS AND IMAGE DISPLAY METHOD OF X-RAY CT APPARATUS - The X-ray CT apparatus which includes an X-ray generator and an X-ray detector for acquiring projection data of an object from plural angles and creates an arbitrary cross-sectional image of the object on the basis of the projection data includes: an extraction section which extracts a region, which includes a target organ moving periodically, from the cross-sectional image; a synchronous phase determination section which determines a synchronous phase, which is used when creating a synchronous cross-sectional image synchronized with periodic motion of the target organ, on the basis of continuity of the target organ in a direction perpendicular to the cross-sectional image; a synchronous cross-sectional image creating section which creates the synchronous cross-sectional image on the basis of projection data corresponding to the synchronous phase determined by the synchronous phase determination section; and a display unit which displays the synchronous cross-sectional image. | 08-16-2012 |
20120213403 | Simultaneous Image Distribution and Archiving - The present specification discloses a storage system for enabling the substantially concurrent storage and access of data that has three dimensional images processed to identify a presence of a threat item. The system includes a source of data, a temporary storage memory for receiving and temporarily storing the data, a long term storage, and multiple workstations adapted to display three dimensional images. The temporary storage memory is adapted to support multiple file input/output operations executing substantially concurrently, including the receiving of data, transmitting of data to workstations, and transmitting of data to long term storage. | 08-23-2012 |
20120213404 | AUTOMATIC EVENT RECOGNITION AND CROSS-USER PHOTO CLUSTERING - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for automatic event recognition and photo clustering. In one aspect, methods include receiving, from a first user, first image data corresponding to a first image, receiving, from a second user, second image data corresponding to a second image, comparing the first image data and the second image data, and determining that the first image and the second image correspond to a coincident event based on the comparing. | 08-23-2012 |
20120213405 | MOVING OBJECT DETECTION APPARATUS - A moving object detection apparatus generates frame difference image data each time frame data is captured, based on the captured frame data and previous frame data, and such frame difference image data is divided into pixel blocks. Subsequently, for each of the pixel blocks a discrete cosine transformation (DCT) is performed, a two-dimensional DCT coefficient is calculated, and such two-dimensional DCT coefficients are accumulated and stored. The value of each element of the two-dimensional DCT coefficient is arranged to form a characteristic vector, and, for each of the pixel blocks at the same position of the frame difference image data, the characteristic vector is generated and then such characteristic vectors are arranged to form a time-series vector. The time-series vector derived from moving-object-capturing pixel blocks is used to calculate a principal component vector and a principal component score. | 08-23-2012 |
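The per-block two-dimensional DCT of the frame-difference image described above can be sketched with a plain NumPy implementation (the 8x8 block size is an assumption; the abstract does not fix it):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix, so the 2-D transform is C @ B @ C.T."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def block_dct_features(diff_image, block=8):
    """Divide a frame-difference image into blocks and compute the
    two-dimensional DCT coefficients of each block, flattened into a
    characteristic vector per block position."""
    c = dct_matrix(block)
    h, w = diff_image.shape
    feats = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = diff_image[y:y + block, x:x + block].astype(float)
            feats[(y // block, x // block)] = (c @ b @ c.T).ravel()
    return feats
```

Accumulating these per-block vectors over successive frames gives the time-series vectors from which the abstract's principal components would be computed.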
20120213406 | SUBJECT DESIGNATING DEVICE AND SUBJECT TRACKING APPARATUS - A subject designating device includes: a representative value calculation unit that calculates a representative value for each image of a brightness image and chrominance images based upon pixel values indicated at pixels present within a first subject area; a second image generation unit that creates a differential image by subtracting the representative value from pixel values indicated at pixels present within a second subject area; a binarizing unit that binarizes the differential image; a synthesizing unit that creates a synthetic image by combining binary images in correspondence to the brightness image and the chrominance images; a mask extraction unit that extracts a mask constituted with a white pixel cluster from the synthetic image; an evaluation value calculation unit that calculates an evaluation value indicating a likelihood of the mask representing the subject; and a subject designating unit that designates the subject in the target image based upon the evaluation value. | 08-23-2012 |
20120213407 | IMAGE CAPTURE AND POST-CAPTURE PROCESSING - Image data of a scene is captured. Spectral profile information is obtained for the scene. A database of plural spectral profiles is accessed, each of which maps a material to a corresponding spectral profile reflected therefrom. The spectral profile information for the scene is matched against the database, and materials for objects in the scene are identified by using matches between the spectral profile information for the scene against the database. Metadata which identifies materials for objects in the scene is constructed, and the metadata is embedded with the image data for the scene. | 08-23-2012 |
20120213408 | SYSTEM OF CONTROLLING DEVICE IN RESPONSE TO GESTURE - A control system includes: an input unit through which a signal for a gesture and a background of the gesture is input; a gesture recognition unit which recognizes the gesture on the basis of the input signal; an attribute recognition unit which recognizes an attribute of a background target of the recognized gesture on the basis of the input signal; and a command transmitting unit which generates a control command on the basis of a combination of the recognized gesture and the background target attribute and transmits the control command to a device. | 08-23-2012 |
20120213409 | DECODER-SIDE REGION OF INTEREST VIDEO PROCESSING - The disclosure is directed to decoder-side region-of-interest (ROI) video processing. A video decoder determines whether ROI assistance information is available. If not, the decoder defaults to decoder-side ROI processing. The decoder-side ROI processing may estimate the reliability of ROI extraction in the bitstream domain. If ROI reliability is favorable, the decoder applies bitstream domain ROI extraction. If ROI reliability is unfavorable, the decoder applies pixel domain ROI extraction. The decoder may apply different ROI extraction processes for intra-coded (I) and inter-coded (P or B) data. The decoder may use color-based ROI generation for intra-coded data, and coded block pattern (CBP)-based ROI generation for inter-coded data. ROI refinement may involve shape-based refinement for intra-coded data, and motion- and color-based refinement for inter-coded data. | 08-23-2012 |
20120213410 | METHODS AND APPARATUS FOR DETECTING A COMPOSITION OF AN AUDIENCE OF AN INFORMATION PRESENTING DEVICE - Methods and apparatus for detecting a composition of an audience of an information presenting device are disclosed. A disclosed example method includes: capturing at least one image of the audience; determining a number of people within the at least one image; prompting the audience to identify its members if a change in the number of people is detected based on the number of people determined to be within the at least one image; and if a number of members identified by the audience is different from the determined number of people after a predetermined number of prompts of the audience, adjusting a value to avoid excessive prompting of the audience. | 08-23-2012 |
20120213411 | IMAGE TARGET IDENTIFICATION DEVICE, IMAGE TARGET IDENTIFICATION METHOD, AND IMAGE TARGET IDENTIFICATION PROGRAM - A device is provided with a luminance histogram calculation unit which generates a luminance histogram showing appearance frequency of luminance values contained within the infrared image and determines a luminance value corresponding to a peak in the luminance histogram as a background luminance level of the background; a luminance shift calculation unit which sets the background luminance value as an intermediate value in luminance range width of the infrared image and generates a luminance shift image by linearly shifting other luminance values in the infrared image based on the intermediate value; a reversed image processing unit which generates a reversed shift image wherein the luminance level of the luminance shift image is reversed; and a luminance calculation processing unit which generates a calculation-processed image by performing calculation processing based on the difference in the luminance values at corresponding positions in the luminance shift image and the reversed shift image. | 08-23-2012 |
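A rough sketch of the luminance pipeline in this entry for an 8-bit infrared image: histogram peak as the background level, linear shift of that level to mid-range, luminance reversal, and a difference of the two images. The 0-255 range and mid-value 128 are assumptions:

```python
import numpy as np

def luminance_shift_and_difference(ir_image):
    """Assumed 8-bit version of the abstract's pipeline: the histogram
    peak is taken as the background luminance level, luminances are
    shifted linearly so that level sits at the middle of the range, the
    shifted image is luminance-reversed, and the per-pixel difference of
    the two forms the calculation-processed image."""
    hist = np.bincount(ir_image.ravel(), minlength=256)
    background = int(np.argmax(hist))            # peak = background level
    shift = 128 - background                     # move peak to mid-range
    shifted = np.clip(ir_image.astype(int) + shift, 0, 255)
    reversed_img = 255 - shifted                 # luminance-reversed image
    return np.abs(shifted - reversed_img)        # calculation-processed image
```

Background pixels land near the mid-value, so their difference is small, while targets far from the background level stand out.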
20120219174 | EXTRACTING MOTION INFORMATION FROM DIGITAL VIDEO SEQUENCES - A method for analyzing a digital video sequence of a scene to extract background motion information and foreground motion information, comprising: analyzing at least a portion of a plurality of image frames captured at different times to determine corresponding one-dimensional image frame representations; combining the one-dimensional frame representations to form a two-dimensional spatiotemporal representation of the video sequence; using a data processor to identify a set of trajectories in the two-dimensional spatiotemporal representation of the video sequence; analyzing the set of trajectories to identify a set of foreground trajectory segments representing foreground motion information and a set of background trajectory segments representing background motion information; and storing an indication of the foreground motion information or the background motion information or both in a processor-accessible memory. | 08-30-2012 |
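The first two steps of this entry, collapsing each frame to a one-dimensional representation and stacking the results over time, can be sketched as follows (the column-wise mean is one plausible projection; the abstract leaves the reduction unspecified):

```python
import numpy as np

def spatiotemporal_image(frames):
    """Collapse each frame to a one-dimensional representation and stack
    the rows over time into a 2-D spatiotemporal image, in which a
    steadily moving object traces a sloped line and the static
    background traces horizontal bands."""
    rows = [f.mean(axis=0) for f in frames]   # one 1-D vector per frame
    return np.stack(rows)                     # shape: (num_frames, width)
```

Trajectory extraction then becomes a line-finding problem in this single 2-D image rather than a frame-by-frame tracking problem.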
20120219175 | ASSOCIATING AN OBJECT IN AN IMAGE WITH AN ASSET IN A FINANCIAL APPLICATION - The invention relates to a method for associating an object in an image with an asset of a number of assets in a financial application. The method includes receiving the image of the object comprising global positioning system (GPS) data, where the image is captured using an image-taking device with GPS functionality and processing the image to generate processed GPS data. The method further includes determining, using the processed GPS data, a geographic location of the object in the image, and identifying, using the geographic location, the object by performing a recognition analysis of the image. The method further includes associating, based on the recognition analysis, the object in the image with the asset of the assets of an owner in the financial application, and storing, in the financial application, the image of the object associated with the asset of the assets of the owner. | 08-30-2012 |
20120219176 | Method and Apparatus for Pattern Tracking - A method and apparatus for pattern tracking. The method includes the steps of performing a foreground detection process to determine a hand-pill-hand region, performing image segmentation to separate the determined hand portion of the hand-pill-hand region from the pill portion thereof, building three reference models, one for each hand region and one for the pill region, initializing a dynamic model for tracking the hand-pill-hand region, determining N possible next positions for the hand-pill-hand region, determining various features for each such determined position and building a new model for that region in accordance with the determined position, comparing, for each position, the new model and a reference model, determining the position whose new model generates the highest similarity score, and determining whether that similarity score is greater than a predetermined threshold; if it is determined that the similarity score is greater than the predetermined threshold, the object is tracked. | 08-30-2012 |
20120219177 | COMPUTER-READABLE STORAGE MEDIUM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - First, a series of edge pixels representing a contour of an object or of a design represented in the object are detected from an image acquired from a capturing apparatus. Then, a plurality of straight lines are generated on the basis of the series of detected edge pixels, and vertices of the contour are detected on the basis of the plurality of straight lines. Further, relative positions and orientations of the capturing apparatus and the object relative to each other are calculated on the basis of the detected vertices, and a virtual camera in a virtual space is set on the basis of the positions and the orientations. Then, a virtual space image obtained by capturing the virtual space with the virtual camera is displayed on a display device. | 08-30-2012 |
20120219178 | COMPUTER-READABLE STORAGE MEDIUM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - A position of a predetermined object or a predetermined design is sequentially detected from images. Then, an amount of movement of the predetermined object or the predetermined design is calculated on the basis of: a position, in a first image, of the predetermined object or the predetermined design detected from the first image; and a position, in a second image, of the predetermined object or the predetermined design detected from the second image acquired before the first image. Then, when the amount of movement is less than a first threshold, the position, in the first image, of the predetermined object or the predetermined design detected from the first image is corrected to the position, in the second image, of the predetermined object or the predetermined design detected from the second image. | 08-30-2012 |
20120219179 | COMPUTER-READABLE STORAGE MEDIUM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - A position of a predetermined object or design is sequentially detected from images. Then, an amount of movement of the predetermined object or design is calculated on the basis of: a position, in a first image, of the predetermined object or design detected from the first image; and a position, in a second image, of the predetermined object or design detected from the second image acquired before the first image. Then, when the amount of movement is less than a first threshold, the position, in the first image, of the predetermined object or design detected from the first image is corrected to a position internally dividing, in a predetermined ratio, line segments connecting: the position, in the first image, of the predetermined object or design detected from the first image; to the position, in the second image, of the predetermined object or design detected from the second image. | 08-30-2012 |
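The correction rules in this entry and the preceding one differ only in where the corrected point lands: the earlier variant snaps back to the previous detection, while this one takes a point internally dividing the segment between the two detections. A hedged sketch covering both (the threshold and ratio values are illustrative, not from the abstracts):

```python
def correct_position(p_new, p_prev, threshold=3.0, ratio=0.5):
    """Jitter suppression for a sequentially detected position: when the
    detected movement is below a threshold, replace the new position
    either with the previous one (ratio=0.0 reproduces the snap-back
    variant) or with a point internally dividing the segment between
    the two detections (the interpolating variant)."""
    dx = p_new[0] - p_prev[0]
    dy = p_new[1] - p_prev[1]
    if (dx * dx + dy * dy) ** 0.5 >= threshold:
        return p_new                     # real motion: keep the detection
    # small motion: treat it as detection noise and damp it
    return (p_prev[0] + ratio * dx, p_prev[1] + ratio * dy)
```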
20120219180 | Automatic Detection of Vertical Gaze Using an Embedded Imaging Device - A method of detecting and applying a vertical gaze direction of a face within a digital image includes analyzing one or both eyes of a face within an acquired image, including determining a degree of coverage of an eye ball by an eye lid within the digital image. Based on the determined degree of coverage of the eye ball by the eye lid, an approximate direction of vertical eye gaze is determined. A further action is selected based on the determined approximate direction of vertical eye gaze. | 08-30-2012 |
20120219181 | AUGMENTED REALITY-BASED FILE TRANSFER METHOD AND FILE TRANSFER SYSTEM THEREOF - An augmented reality-based file transfer method and a related file transfer system integrated with cloud computing are provided. The file transfer method is applied to file transmission between a first device and a second device wirelessly connected to each other, wherein the first device includes a file, a display unit, and an input unit electronically connected to the display unit. The file transfer method includes the following steps: when an image stored in the first device is opened, displaying the file and the image on the display unit of the first device, wherein the image comprises a face image of the second user; when the file is dragged to the face image of the second user shown in the image via the input unit and is then released, generating a command; and transferring the file from the first device to the second device according to the command. | 08-30-2012 |
20120219182 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING PROGRAM - An image processing apparatus to extract a print image to be printed onto a print medium from an original image, the image processing apparatus includes: a detecting unit that detects a specific area, which includes a plurality of pixels having a low degree of variation in pixel values, from the original image, based on a predetermined detection criterion; and an extracting unit that, when an extraction range having a predetermined shape including the print image is set in the original image, extracts the print image so that the specific area is disposed in a non-print area, which is not printed on the print medium, within the extraction range. | 08-30-2012 |
20120219183 | 3D Object Detecting Apparatus and 3D Object Detecting Method - A 3D-object detecting apparatus may include a detection-image creating device configured to detect a 3D object on an image-capture surface from an image captured by an image-capture device and to create a detection image in which a silhouette of only the 3D object is left; a density-map creating device configured to determine the 3D object's spatial densities at corresponding coordinate points in a coordinate plane on the basis of the detection image and mask images obtained for the corresponding coordinate points on the basis of virtual cuboids arranged for the corresponding coordinate points and to create a density map having pixels for the corresponding coordinate points such that the pixels have pixel values corresponding to the determined spatial densities; and a 3D-object position detecting device that detects the position of the 3D object as a representative point in a high-density region in the density map. | 08-30-2012 |
20120219184 | MONITORING OF VIDEO IMAGES - A characteristic motion in a video is identified by determining pairs of moving features that have an indicative relationship between the motions of the two moving features in the pair. For example, the motion of a pedestrian is identified by an indicative relationship between the motions of the pedestrian's feet. This indicative relationship may be that one of the feet moves relative to the surroundings while the other remains stationary. | 08-30-2012 |
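The indicative relationship described here, one member of a feature pair staying roughly stationary while the other moves, can be sketched as a simple pairwise test over per-feature displacement magnitudes; the thresholds and the displacement summary are illustrative assumptions:

```python
def indicative_pairs(tracks, still_eps=0.5, move_min=2.0):
    """Find feature pairs with the walking-gait relationship described
    above: over a given interval, one member of the pair stays roughly
    stationary while the other moves.  `tracks` maps a feature id to
    its displacement magnitude over the interval."""
    ids = list(tracks)
    pairs = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            da, db = tracks[a], tracks[b]
            if (da <= still_eps and db >= move_min) or \
               (db <= still_eps and da >= move_min):
                pairs.append((a, b))
    return pairs
```

Alternating which member of a pair is the stationary one across successive intervals would strengthen the evidence that the pair is a pedestrian's feet rather than an arbitrary feature pairing.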
20120219185 | APPARATUS AND METHOD FOR DETERMINING A LOCATION IN A TARGET IMAGE - An apparatus and a computer-implemented method are provided for determining a location in a target image (T) of a site on a surface of a physical object using two or more reference images (I | 08-30-2012 |
20120219186 | Continuous Linear Dynamic Systems - Aspects of the present invention include systems and methods for segmentation and recognition of action primitives. In embodiments, a framework, referred to as the Continuous Linear Dynamic System (CLDS), comprises two sets of Linear Dynamic System (LDS) models, one to model the dynamics of individual primitive actions and the other to model the transitions between actions. In embodiments, the inference process estimates the best decomposition of the whole sequence into a continuous alternation between the two sets of models, using an approximate Viterbi algorithm. In this way, both action type and action boundary may be accurately recognized. | 08-30-2012 |
20120219187 | Data Capture and Identification System and Process - An identification method and process for objects from digitally captured images thereof that uses data characteristics to identify an object from a plurality of objects in a database. The data is broken down into parameters such as a Shape Comparison, Grayscale Comparison, Wavelet Comparison, and Color Cube Comparison with object data in one or more databases to identify the actual object of a digital image. | 08-30-2012 |
20120219188 | METHOD OF PROVIDING A DESCRIPTOR FOR AT LEAST ONE FEATURE OF AN IMAGE AND METHOD OF MATCHING FEATURES - A method of providing a descriptor for at least one feature of an image comprises the steps of providing an image captured by a capturing device and extracting at least one feature from the image, and assigning a descriptor to the at least one feature, the descriptor depending on at least one parameter which is indicative of an orientation, wherein the at least one parameter is determined from the orientation of the capturing device measured by a tracking system. The invention also relates to a method of matching features of two or more images. | 08-30-2012 |
20120219189 | METHOD AND DEVICE FOR DETECTING FATIGUE DRIVING AND THE AUTOMOBILE USING THE SAME - The present application discloses a method and device for detecting fatigue driving, comprising: analyzing an eye image in the driver's eye image area with a rectangular feature template to obtain the upper eyelid line; determining the eye closure state according to the curvature or curvature feature value of the upper eyelid line; and collecting statistics on the eye closure state and thereby determining whether the driver is in a fatigue state. The present application determines whether the eyes are open or closed according to the shape of the upper eyelid, which is more accurate because the upper eyelid line offers higher relative contrast, resistance to interference, and adaptability to changes in facial expression. | 08-30-2012 |
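The closing step of this entry, collecting statistics on per-frame eye-closure decisions to decide fatigue, is commonly realized as a PERCLOS-style closed-frame ratio; the sliding-window version below is a sketch under assumed window and threshold values, not the patent's specification:

```python
def fatigue_from_closures(closed_flags, window=30, perclos_threshold=0.4):
    """Collect statistics on per-frame eye-closure decisions: over the
    most recent window of frames, flag fatigue when the fraction of
    closed-eye frames exceeds a threshold.  The window size and the
    threshold are illustrative assumptions."""
    if len(closed_flags) < window:
        return False                      # not enough evidence yet
    recent = closed_flags[-window:]
    return sum(recent) / window > perclos_threshold
```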
20120224743 | SMARTPHONE-BASED METHODS AND SYSTEMS - Methods and arrangements involving portable devices, such as smartphones and tablet computers, are disclosed. Exemplary arrangements utilize the camera portions of such devices to identify nearby subjects, and take actions based thereon. Others rely on near field chip (RFID) identification of objects, or on identification of audio streams (e.g., music, voice). Some of the detailed technologies concern improvements to the user interfaces associated with such devices. Others involve use of these devices in connection with shopping, text entry, sign language interpretation, and vision-based discovery. Still other improvements are architectural in nature, e.g., relating to evidence-based state machines, and blackboard systems. Yet other technologies concern use of linked data in portable devices—some of which exploit GPU capabilities. Still other technologies concern computational photography. A great variety of other features and arrangements are also detailed. | 09-06-2012 |
20120224744 | IMAGE ANALYSIS METHOD - A moving feature is recognized in a video sequence by comparing its movement with a characteristic pattern. Possible trajectories through the video sequence are generated for an object by identifying potential matches of points in pairs of frames of the video sequence. When looking for the characteristic pattern, a number of possible trajectories are analyzed. The possible trajectories may be selected so that they are suitable for analysis; this may include selecting longer trajectories, which are easier to analyze. Thereby, even where the object being tracked is momentarily behind another object, a continuous trajectory is generated. | 09-06-2012 |
20120224745 | EVALUATION OF GRAPHICAL OUTPUT OF GRAPHICAL SOFTWARE APPLICATIONS EXECUTING IN A COMPUTING ENVIRONMENT - Graphic objects generated by a software application executing in a computing environment are evaluated. The computing environment includes a graphical user interface for managing I/O functions, a data storage device for storing computer usable program code and data, and a data processing engine in communication with the graphical user interface and the data storage device. The data processing engine receives and processes origin data from the data storage device to produce projected values for data points in the graphic image intended to be displayed. The data processing engine also creates and processes a snapshot of the displayed graphic object to produce actual values of data points in the displayed graphic object, compares the projected values to the actual values, and outputs an indication of the degree of similarity between the intended graphic object and the displayed graphic object. | 09-06-2012 |
20120224746 | CLASSIFIER ANOMALIES FOR OBSERVED BEHAVIORS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A combination of a self organizing map (SOM) and an adaptive resonance theory (ART) network may be used to identify a variety of different anomalous inputs at each cluster layer. As progressively higher layers of the cortex model component represent progressively higher levels of abstraction, anomalies occurring in the higher levels of the cortex model represent observations of behavioral anomalies corresponding to progressively complex patterns of behavior. | 09-06-2012 |
20120224747 | In-Vehicle Apparatus for Recognizing Running Environment of Vehicle - An in-vehicle running-environment recognition apparatus including an input unit for inputting an image signal from in-vehicle imaging devices for photographing the external environment of a vehicle, an image processing unit for detecting a first image area by processing the image signal, the first image area having a factor which prevents recognition of the external environment, an image determination unit for determining a second image area based on at least any one of the size of the first image area, the position thereof, and the set-up positions of the in-vehicle imaging devices having the first image area, environment recognition processing being performed in the second image area, the first image area being detected by the image processing unit, and an environment recognition unit for recognizing the external environment of the vehicle based on the second image area. | 09-06-2012 |
20120230537 | TAG INFORMATION MANAGEMENT APPARATUS, TAG INFORMATION MANAGEMENT SYSTEM, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND TAG INFORMATION MANAGEMENT METHOD - A tag data management apparatus for managing tag data indicative of an attribute of content data, comprising: an extraction section that extracts positional information included in the content data, the positional information being indicative of a position associated with the content data; and a priority order determination section that determines a priority order of the content data, based on the positional information extracted by the extraction section. | 09-13-2012 |
20120230538 | PROVIDING INFORMATION ASSOCIATED WITH AN IDENTIFIED REPRESENTATION OF AN OBJECT - Methods, apparatus, systems, and computer program products are described herein that provide for using video or still-shot analysis, such as AR or the like, to assist the user of mobile devices with receiving information corresponding to an abstraction or representation of a subject. Some subjects are difficult to capture in a video or still shot. The methods and devices described herein capture representations of difficult-to-capture or unavailable subjects and present information related to the subject with the representation. In an embodiment, the representation is a screenshot and the information provided relates to the application that is represented by the screenshot. Various other types of representations, including depictions, advertisements, portions of subjects, and identifying marks, can be identified by the system and method, and information presented relating to the corresponding subjects. In some cases, the information is customized with financial information of the user. | 09-13-2012 |
20120230539 | PROVIDING LOCATION IDENTIFICATION OF ASSOCIATED INDIVIDUALS BASED ON IDENTIFYING THE INDIVIDUALS IN CONJUNCTION WITH A LIVE VIDEO STREAM - Systems, methods, and computer program products are provided for using real-time video analysis, such as AR or the like, to assist the user of a mobile device with commerce activities. Through the use of real-time vision object recognition, faces, physical features, objects, logos, artwork, products, locations, and other features that can be recognized in the real-time video stream can be matched to associated data to assist the user with commerce activity. The commerce activity may include, but is not limited to: identifying individuals associated with the user, identifying locations associated with individuals who are associated with the user, identifying groups of individuals who share a trait, or the like. In specific embodiments, the data that is matched to the images in the real-time video stream is specific to financial institutions, such as customer financial behavior history, customer purchase power/transaction history, and the like. | 09-13-2012 |
20120230540 | DYNAMICALLY INDENTIFYING INDIVIDUALS FROM A CAPTURED IMAGE - Embodiments of the invention are directed to methods and apparatuses for capturing a real-time video stream using a mobile device, determining, using a processor, which images from the real-time video stream are associated with individuals meeting user-defined criteria, and presenting, on a display of the real-time video stream, one or more indicators, each indicator being associated with an image determined to be a person meeting the user-defined criteria. | 09-13-2012 |
20120230541 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - An image processing apparatus includes a first acquisition unit configured to obtain identification information for a plurality of blocks of an image, a second acquisition unit configured to obtain information to be used for image processing from a pixel value of a region of the image determined based on the identification information, and an image processing unit configured to perform image processing of the image based on the information obtained by the second acquisition unit. | 09-13-2012 |
20120230542 | METHOD FOR CREATING AND USING AFFECTIVE INFORMATION IN A DIGITAL IMAGING SYSTEM - An image file for storing a still digital image and metadata related to the still digital image, the image file including digital image data representing the still digital image, and metadata that categorizes the still digital image as an important digital image, wherein the categorization uses a range of levels and the range of levels includes at least three different integer values. | 09-13-2012 |
20120230543 | Object Information Derived from Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 09-13-2012 |
20120230544 | APPARATUS AND METHOD FOR FINDING A MISPLACED OBJECT USING A DATABASE AND INSTRUCTIONS GENERATED BY A PORTABLE DEVICE - The basic invention uses a portable device that can contain a camera, a database, and a text, voice, or visual entry to control the storage of an image into a database. Furthermore, the stored image can be associated with text, color, visual, or audio data. The stored images can be used to guide the user towards a target whose current location the user does not recall. The user's commands can be issued verbally, textually, or by scrolling through the target images in the database until the desired one is found. This target can be shoes, pink sneakers, a toy, or some comparable item that the user needs to find. | 09-13-2012 |
20120230545 | Face Recognition Apparatus and Methods - One or more facial recognition categories are assigned to a face region detected in an input image. | 09-13-2012 |
20120230546 | GENERIC OBJECT-BASED IMAGE RECOGNITION APPARATUS WITH EXCLUSIVE CLASSIFIER, AND METHOD FOR THE SAME - The present invention provides an image recognition apparatus with enhanced performance and robustness. | 09-13-2012 |
20120230547 | EYE TRACKING - An eye tracking apparatus and method of eye monitoring, comprising a target display adapted to project a moveable image of a target into a user's field of vision, an illumination source adapted to project a reference point onto a user's eye, a sensor adapted to monitor a user's eye, and a processor adapted to determine the position of a feature of a user's eye relative to the reference point, wherein the apparatus is arranged such that said determined position provides a direct indication of eye direction relative to the target direction. | 09-13-2012 |
20120237080 | METHOD FOR DETECTION OF MOVING OBJECT OF APPROXIMATELY KNOWN SIZE IN CONDITIONS OF LOW SIGNAL-TO-NOISE RATIO - The invention provides a method for detection of a moving object when the signal-to-noise ratio is low. A field of view is presented as a regularly updated frame of data points. A state of the object is defined by an “azimuth-speed” pair (i.e., a hypothesis). On each update, a detection system performs two steps. In the first step, the brightness of each data point of a new frame is replaced by the average brightness of the points surrounding it. In the second step, the brightness of the data points of this frame is accumulated separately for each hypothesis. On each update, one of the hypotheses produces the accumulated frame with the brightest point. This hypothesis is considered the best; its frame is displayed on a screen. The object is detected when the best hypothesis stabilizes over a sequence of updates and the movement of the brightest point becomes consistent with this hypothesis. | 09-20-2012 |
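The two-step update loop described above can be sketched as follows. The neighborhood radius, the hypothesis set expressed as per-update (dy, dx) shifts, and the wrap-around `np.roll` shifting are simplifying assumptions, not details from the patent.

```python
import numpy as np

def smooth(frame, radius=1):
    """Step 1: replace each point by the average brightness of its neighborhood."""
    h, w = frame.shape
    padded = np.pad(frame, radius, mode="edge")
    out = np.zeros_like(frame, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy:radius + dy + h,
                          radius + dx:radius + dx + w]
    return out / (2 * radius + 1) ** 2

class HypothesisDetector:
    """Step 2: accumulate brightness separately for each azimuth-speed hypothesis."""
    def __init__(self, shape, hypotheses):
        self.acc = {h: np.zeros(shape) for h in hypotheses}  # (dy, dx) shifts

    def update(self, frame):
        frame = smooth(frame)
        best, best_val = None, -np.inf
        for (dy, dx), acc in self.acc.items():
            # shift the accumulator along the hypothesized motion, then add
            shifted = np.roll(np.roll(acc, dy, axis=0), dx, axis=1)
            self.acc[(dy, dx)] = shifted + frame
            peak = self.acc[(dy, dx)].max()
            if peak > best_val:
                best, best_val = (dy, dx), peak
        return best, best_val  # best hypothesis and its brightest point
```

In line with the abstract, the object would be declared detected once the best hypothesis remains stable over several updates and the brightest point moves consistently with it.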
20120237081 | ANOMALOUS PATTERN DISCOVERY - A trajectory of movement of an object is tracked in a video data image field that is partitioned into a plurality of different grids. Global image features from video data relative to the trajectory are extracted and compared to a learned trajectory model to generate a global anomaly detection confidence decision value as a function of fitting to the learned trajectory model. Local image features are also extracted for each of the image field grids that include the object trajectory, and these are compared to learned feature models for the grids to generate local anomaly detection confidence decisions for each grid as a function of fitting to the learned feature models for the grids. The global anomaly detection confidence decision value and the local anomaly detection confidence decision values for the grids are combined into a fused anomaly decision with respect to the tracked object. | 09-20-2012 |
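One hedged reading of the final fusion step, with the equal weighting and the use of the strongest local grid as illustrative choices rather than the patented rule:

```python
def fuse_anomaly(global_conf, local_confs, w_global=0.5):
    """Fuse a global trajectory-model confidence with per-grid local
    confidences (all in [0, 1]) into a single anomaly decision value.
    Takes the strongest local grid and mixes it with the global value."""
    local = max(local_confs) if local_confs else 0.0
    return w_global * global_conf + (1 - w_global) * local
```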
20120237082 | VIDEO BASED MATCHING AND TRACKING - An analytical device is disclosed that analyzes whether a first image is similar to (or the same as) a second image. The analytical device analyzes the first image by combining at least a part (or all) of the first image with at least a part (or all) of the second image, and by analyzing at least a part (or all) of the combined image. Part or all of the combination may be analyzed with respect to the abstraction of the first image and/or the abstraction of the second image. The abstraction may be based on a Bag of Features (BoF) description, based on a histogram of intensity values, or based on other types of abstraction methodologies. The analysis may involve comparing one or more aspects of the combination (such as the entropy or randomness of the combination) with one or more aspects of the abstracted first image and/or abstracted second image. Based on the comparison, the analytical device may determine whether the first image is similar to or the same as the second image. The analytical device may work with a variety of images in a variety of applications including a video tracking system, a biometric analytic system, or a database image analytical system. | 09-20-2012 |
20120237083 | AUTOMATIC OBSTACLE LOCATION MAPPING - A method of automatic obstacle location mapping comprises receiving an indication of a feature to be identified in a defined area. An instance of the feature is found within an image. A report is then generated conveying the location of said feature. | 09-20-2012 |
20120237084 | SYSTEM AND METHOD FOR IDENTIFYING THE EXISTENCE AND POSITION OF TEXT IN VISUAL MEDIA CONTENT AND FOR DETERMINING A SUBJECT'S INTERACTIONS WITH THE TEXT - A reading meter system and method is provided for identifying the existence and position of text in visual media content (e.g., a document to be displayed (or being displayed) on a computer monitor or other display device) and determining if a subject has interacted with the text and/or the level of the subject's interaction with the text (e.g., whether the subject looked at the text, whether the subject read the text, whether the subject comprehended the text, whether the subject perceived and made sense of the text, and/or other levels of the subject's interaction with the text). The determination may, for example, be based on data generated from an eye tracking device. The reading meter system may be used alone and/or in connection with an emotional response tool (e.g., a software-based tool for determining the subject's emotional response to the text and/or other elements of the visual media content on which the text appears). If used together, the reading meter system and emotional response tool advantageously may both receive, and perform processing on, eye data generated from a common eye tracking device. | 09-20-2012 |
20120237085 | METHOD FOR DETERMINING THE POSE OF A CAMERA AND FOR RECOGNIZING AN OBJECT OF A REAL ENVIRONMENT - A method for determining the pose of a camera and for recognizing an object of a real environment is provided. | 09-20-2012 |
20120237086 | MOVING BODY POSITIONING DEVICE - Provided is a moving body positioning device that uses an external monitoring camera and serves as an essential element for monitoring and tracing a moving body. | 09-20-2012 |
20120243729 | LOGIN METHOD BASED ON DIRECTION OF GAZE - A method of authenticating a user of a computing device is proposed, together with a computing device on which the method is implemented. A plurality of objects is displayed on a display screen. The plurality of objects includes at least the objects that make up a sequence of objects pre-selected as the user's passcode. In response to a trigger signal, an image of the user's face is captured while looking at one of the objects on the display screen. From the captured image, a determination is made of which object is in the direction of the user's gaze, and whether or not the gaze is on the correct object in the sequence of the passcode. This is repeated for each object in the sequence of the passcode. | 09-27-2012 |
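The per-object authentication loop above can be sketched as below; `capture_image` and `gazed_object` are hypothetical helpers standing in for the trigger-driven camera capture and the gaze-direction estimation.

```python
def authenticate(passcode, capture_image, gazed_object):
    """passcode: pre-selected sequence of object ids.
    capture_image: callable returning a face image on each trigger signal.
    gazed_object: callable mapping a face image to the id of the object
    determined to be in the direction of the user's gaze."""
    for expected in passcode:
        image = capture_image()          # triggered capture of the user's face
        if gazed_object(image) != expected:
            return False                 # gaze fell on the wrong object
    return True                          # every object matched in sequence
```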
20120243730 | COLLABORATIVE CAMERA SERVICES FOR DISTRIBUTED REAL-TIME OBJECT ANALYSIS - A collaborative object analysis capability is depicted and described herein. The collaborative object analysis capability enables a group of cameras to collaboratively analyze an object, even when the object is in motion. The analysis of an object may include one or more of identification of the object, tracking of the object while the object is in motion, analysis of one or more characteristics of the object, and the like. In general, a camera is configured to discover the camera capability information for one or more neighboring cameras, and to generate, on the basis of such camera capability information, one or more actions to be performed by one or more neighboring cameras to facilitate object analysis. The collaborative object analysis capability also enables additional functions related to object analysis, such as alerting functions, archiving functions (e.g., storing captured video, object tracking information, object recognition information, and so on), and the like. | 09-27-2012 |
20120243731 | IMAGE PROCESSING METHOD AND IMAGE PROCESSING APPARATUS FOR DETECTING AN OBJECT - An image processing method and an image processing apparatus for detecting an object are provided. The image processing method includes the following steps: partitioning an image into at least a first sub-image covering a first zone and a second sub-image covering a second zone according to a designed trait; and performing an image detection process upon the first sub-image for checking whether the object is within the first zone to generate a first detecting result. The object is a human face, and the image detection process is a face detection process. | 09-27-2012 |
20120243732 | Adaptable Framework for Cloud Assisted Augmented Reality - A mobile platform efficiently processes sensor data, including image data, using distributed processing in which latency sensitive operations are performed on the mobile platform, while latency insensitive, but computationally intensive operations are performed on a remote server. The mobile platform acquires sensor data, such as image data, and determines whether there is a trigger event to transmit the sensor data to the server. The trigger event may be a change in the sensor data relative to previously acquired sensor data, e.g., a scene change in an image. When a change is present, the sensor data may be transmitted to the server for processing. The server processes the sensor data and returns information related to the sensor data, such as identification of an object in an image or a reference image or model. The mobile platform may then perform reference based tracking using the identified object or reference image or model. | 09-27-2012 |
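A minimal sketch of the trigger test in the entry above, using a grayscale-histogram distance with an assumed threshold as a stand-in for the platform's scene-change measure; `send_to_server` is a hypothetical callable for the offload path.

```python
import numpy as np

def scene_changed(prev, curr, bins=32, threshold=0.25):
    """Decide whether a trigger event occurred by comparing grayscale
    histograms of the previous and current frames."""
    h1, _ = np.histogram(prev, bins=bins, range=(0, 256))
    h2, _ = np.histogram(curr, bins=bins, range=(0, 256))
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    distance = 0.5 * np.abs(h1 - h2).sum()   # in [0, 1]
    return bool(distance > threshold)

def maybe_offload(prev, curr, send_to_server):
    """Transmit the frame only on a trigger event (scene change);
    otherwise keep processing locally with reference-based tracking."""
    if prev is None or scene_changed(prev, curr):
        return send_to_server(curr)          # e.g. returns an identified object
    return None
```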
20120243733 | MOVING OBJECT DETECTING DEVICE, MOVING OBJECT DETECTING METHOD, MOVING OBJECT DETECTION PROGRAM, MOVING OBJECT TRACKING DEVICE, MOVING OBJECT TRACKING METHOD, AND MOVING OBJECT TRACKING PROGRAM - A moving object detecting device is provided. | 09-27-2012 |
20120243734 | Determining Detection Certainty In A Cascade Classifier - Disclosed are embodiments for determining detection certainty in a cascade classifier. | 09-27-2012 |
20120243735 | ADJUSTING DISPLAY FORMAT IN ELECTRONIC DEVICE - A display format adjustment system includes a receiving module, a visual condition determination module, a display format determination module, and a display control module. The receiving module receives content for display in a first display format. The visual condition determination module determines a visual condition of a viewer in front of a display. The display format determination module determines a second display format based on the first display format and the visual condition of the viewer. The display control module displays the content in the second display format on the display. | 09-27-2012 |
20120243736 | ADJUSTING PRINT FORMAT IN ELECTRONIC DEVICE - A print format adjustment system includes a receiving module, a visual condition determination module, a print format determination module, and a print control module. The receiving module receives content for printing in a first print format. The visual condition determination module establishes the sharpness of vision of a viewer in front of a display, at a predetermined view distance. The print format determination module determines a second print format based on both the first print format and the visual condition of the viewer. The print control module prints the content in the second print format. | 09-27-2012 |
20120243737 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, RECORDING MEDIUM, AND PROGRAM - An image processing apparatus includes: a calculating unit that calculates an evaluation value, which is expressed as a sum of confidence degrees obtained by mixing, at a predetermined mixing ratio, a matching degree of a first feature quantity and a matching degree of a second feature quantity between a target image containing an object to be tracked and a comparison image which is an image of a comparison region compared to the target image of a first frame, when the mixing ratio is varied and obtaining the mixing ratio when the evaluation value is maximum; and a detecting unit that detects an image corresponding to the target image of a second frame based on the confidence degrees in which the mixing ratio is set when the evaluation value is the maximum. | 09-27-2012 |
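The search for the mixing ratio that maximizes the evaluation value, as described above, might look like the following; the matching degrees are made-up inputs and the exhaustive sweep is an assumption for illustration.

```python
def best_mixing_ratio(match1, match2, steps=100):
    """match1 / match2: per-comparison-region matching degrees of the first
    and second feature quantities. Returns the mixing ratio that maximizes
    the evaluation value (the sum of the mixed confidence degrees)."""
    best_ratio, best_eval = 0.0, float("-inf")
    for i in range(steps + 1):
        alpha = i / steps
        # confidence degree per comparison region = mixed matching degree
        evaluation = sum(alpha * m1 + (1 - alpha) * m2
                         for m1, m2 in zip(match1, match2))
        if evaluation > best_eval:
            best_ratio, best_eval = alpha, evaluation
    return best_ratio, best_eval
```

Note that with this purely linear mixing the optimum always lands at 0 or 1; a nonlinear confidence measure would be needed for an intermediate ratio to win, so this shows only the skeleton of the search loop.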
20120243738 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - An image processing device comprises: a tracking area setting unit that sets a tracking area in an input moving image obtained by photographing an object; a following feature point setting unit that detects a feature point that exhibits a motion in correlation with the motion of the tracking area and sets the detected feature point as a following feature point; a motion detection unit that detects movement over time of the following feature point within the input image; and a clip area setting unit that sets a clip area of an image to be employed when a partial image including the tracking area is clipped out of the input image for either recording or displaying or both recording and displaying, and that sets a size and a position of the clip area on the basis of a motion detection result obtained by the motion detection unit. | 09-27-2012 |
20120243739 | INFORMATION PROCESSING DEVICE, OBJECT RECOGNITION METHOD, PROGRAM, AND TERMINAL DEVICE - There is provided an information processing device including a database that stores feature quantities of two or more images, the database being configured such that identification information for identifying an object in each image and an attribute value related to a lighting condition under which the object was imaged are associated with a feature quantity of each image, an acquisition unit configured to acquire an input image captured by an imaging device, and a recognition unit configured to recognize an object in the input image by checking a feature quantity determined from the input image against the feature quantity of each image stored in the database. The feature quantities stored in the database include feature quantities of a plurality of images of an identical object captured under different lighting conditions. | 09-27-2012 |
20120243740 | Scene Determination and Prediction - A system and method for scene determination is disclosed. The system comprises a communication interface, an object detector, a temporal pattern module and a scene determination module. The communication interface receives a video including at least one frame. The at least one frame includes information describing a scene. The object detector detects a presence of an object in the at least one frame and generates at least one detection result based at least in part on the detection. The temporal pattern module generates a temporal pattern associated with the object based at least in part on the at least one detection result. The scene determination module determines a type of the scene based at least in part on the temporal pattern. | 09-27-2012 |
20120243741 | Object Recognition For Security Screening and Long Range Video Surveillance - A method of detecting an object in image data that is deemed to be a threat includes annotating sections of at least one training image to indicate whether each section is a component of the object, encoding a pattern grammar describing the object using a plurality of first order logic based predicate rules, training distinct component detectors to each identify a corresponding one of the components based on the annotated training images, processing image data with the component detectors to identify at least one of the components, and executing the rules to detect the object based on the identified components. | 09-27-2012 |
20120243742 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing device includes a conversion unit that performs conversion such that an area including a feature point and the periphery thereof in a specific target is set as a first area and, when one pixel in the first area is set as a reference pixel, an area including the reference pixel and pixels in the periphery thereof is set as a second area, and, based on a comparison result of the feature amount of the reference pixel and the feature amount of another pixel in the second area, the feature amount of another pixel is converted for each pixel in the second area, and a calculation unit that calculates a feature amount to be used in an identification process for identifying the specific target by performing computation for the value of each pixel in the second area which is obtained from the conversion for each reference pixel. | 09-27-2012 |
20120243743 | DEVICE FOR INTERACTION WITH AN AUGMENTED OBJECT - A device is provided for interacting with at least one augmented object. | 09-27-2012 |
20120243744 | SECURITY ELEMENT COMPRISING A SUBSTRATE BEARING AN OPTICAL STRUCTURE AND A REFERENCE PATTERN, AND ASSOCIATED METHOD - The invention relates to a security element comprising a substrate bearing an optical structure and a reference pattern, and to an associated method. | 09-27-2012 |
20120243745 | Methods and Apparatus for Automatic Testing of a Graphical User Interface - Methods and apparatus in a computer for automatically testing computer programs involve opening a predefined graphical user interface (GUI) on a screen of the computer; loading a set of program script instructions, associated with the predefined GUI, from a script database in communication with the computer; reading the loaded set of program script instructions; retrieving, based on the loaded set, data and at least one image object corresponding to the predefined GUI from a data and image object database in communication with the computer; taking a screenshot of the predefined GUI that includes at least one image object of the predefined GUI; determining whether an image object in the screenshot matches an image object retrieved from the data and image object database; and, if so, determining a target position on the screen of the matching image object based on data retrieved from the data and image object database, and activating a control function adapted to control the predefined GUI based on the loaded set of program script instructions and the target position. | 09-27-2012 |
20120250936 | INTERACTIVE INPUT SYSTEM AND METHOD - A method of determining locations of at least two pointers in a captured image frame comprises generating a vertical intensity profile (VIP) from the captured image frame, the VIP comprising peaks generally corresponding to the at least two pointers; determining if the peaks are closely spaced and, if the peaks are closely spaced, fitting a curve to the VIP; analyzing the fitted curve to determine peak locations of the fitted curve; and registering the peak locations as the pointer locations. | 10-04-2012 |
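A rough illustration of the VIP procedure in the entry above, with assumed details: a column-sum profile, a fixed closeness threshold, and a global quartic polynomial fit standing in for the curve-fitting step.

```python
import numpy as np

def pointer_locations(frame, close_thresh=6):
    """Locate pointers in a captured image frame via its vertical
    intensity profile (VIP): one summed intensity value per column."""
    vip = frame.sum(axis=0)
    # crude peak picking: local maxima above the mean intensity
    peaks = [x for x in range(1, len(vip) - 1)
             if vip[x] > vip[x - 1] and vip[x] >= vip[x + 1]
             and vip[x] > vip.mean()]
    if len(peaks) == 2 and abs(peaks[0] - peaks[1]) < close_thresh:
        # closely spaced peaks: fit a curve to the VIP and take its maxima
        xs = np.arange(len(vip))
        fitted = np.polyval(np.polyfit(xs, vip, deg=4), xs)
        peaks = [x for x in range(1, len(xs) - 1)
                 if fitted[x] > fitted[x - 1] and fitted[x] >= fitted[x + 1]]
    return peaks
```

In practice the curve would likely be fitted only around the merged peak region rather than across the whole profile.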
20120250937 | SCENE ENHANCEMENTS IN OFF-CENTER PERIPHERAL REGIONS FOR NONLINEAR LENS GEOMETRIES - A technique of enhancing a scene containing one or more off-center peripheral regions within an initial distorted image captured with a large field of view includes determining and extracting an off-center region of interest (hereinafter “ROI”) within the image. Geometric correction is applied to reconstruct the off-center ROI into a rectangular frame of reference as a reconstructed ROI. A quality of reconstructed pixels is determined within the reconstructed ROI. Image analysis is selectively applied to the reconstructed ROI based on the quality of the reconstructed pixels. | 10-04-2012 |
20120250938 | Method and System for Recording and Transferring Motor Vehicle Information - An improved system and method for capturing and uploading pertinent information related to a motor vehicle that is accurate, simple to use, and may be implemented on a wide array of mobile devices in a cost-effective manner. Methods are also disclosed for users of the mobile devices to send identifying information to a database, where the identifying information is compared to other motor vehicle identifying information located in the database. | 10-04-2012 |
20120250939 | OBJECT DETECTION SYSTEM AND METHOD THEREFOR - In an object detection system with a first and a second image processing apparatus, the first image processing apparatus includes a reduction unit configured to reduce an input image, a first detection unit configured to detect a predetermined object from a reduction image reduced by the reduction unit, and a transmission unit configured to transmit the input image and a first detection result detected by the first detection unit to the second image processing apparatus, and the second image processing apparatus includes a reception unit configured to receive the input image and the first detection result from the first image processing apparatus, a second detection unit configured to detect the predetermined object from the input image, and an output unit configured to output the first detection result and a second detection result detected by the second detection unit. | 10-04-2012 |
20120250940 | TERMINAL DEVICE, OBJECT CONTROL METHOD, AND PROGRAM - An apparatus is disclosed comprising a memory storing instructions and a control unit executing the instructions to detect an object of interest within an image of real space, detect an orientation and a position of the object, and generate a modified image. The generating comprises determining a region of the image of real space based on the detected orientation and position. The instructions may further include instructions to display a virtual image of the object in the region, change the virtual image based on a detected user input, the changed virtual image being maintained within the region, and display the modified image. | 10-04-2012 |
20120250941 | SOUND REPRODUCTION PROGRAM AND SOUND REPRODUCTION DEVICE - A sound reproduction program is provided which improves the precision of musical score recognition when reading and reproducing sound from a musical score. The sound reproduction program is stored in a terminal including an image pickup unit and a display unit, and makes a computer execute a function of reading a musical score image at every predetermined time as a sampling image by a camera device. | 10-04-2012 |
20120250942 | Image Capture and Identification System and Process - A digital image of an object is captured, and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 10-04-2012 |
20120250943 | REAL-TIME CAMERA DICTIONARY - Information display equipment that can display translated words and/or translation information in real time. The information display equipment relates to a camera dictionary that can perform dictionary display in real time. In addition, this equipment distinguishes characters included in an object photographed by a photographing portion. Then this equipment extracts information corresponding to these characters from a dictionary. Examples of the information corresponding to the characters are translated words or illustrative examples for a certain term. Then a display portion displays the information corresponding to the characters. | 10-04-2012 |
20120250944 | IMAGE DETERMINING DEVICE - To determine the state of a subject person with a simple structure, an image determining device includes: an imaging unit that captures an image from a first direction, the image including the subject person; a first detector that detects size information from the image, the size information being about the subject person in the first direction; a second detector that detects position-related information, the position-related information being different from the information detected by the first detector; and a determining unit that determines the state of the subject person, based on a result of the detection performed by the first detector and a result of the detection performed by the second detector. | 10-04-2012 |
20120257787 | COMPUTER-READABLE STORAGE MEDIUM HAVING INFORMATION PROCESSING PROGRAM STORED THEREIN, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING SYSTEM - A computer-readable storage medium has stored therein an information processing program for causing a computer of an information processing apparatus to operate as: means for sequentially obtaining an image; specific object detection means for detecting a specific object from the obtained image; means for detecting, on the basis of a pixel value obtained from a central region of the detected specific object, first region information on the central region; means for determining whether or not a result of the detection meets a predetermined condition; means for detecting, on the basis of a pixel value obtained from a surrounding region of the specific object that is present around the central region, second region information on the surrounding region; and means for outputting at least the second region information detected by the second region information detection means when a result of the determination is positive. | 10-11-2012 |
20120257788 | COMPUTER-READABLE STORAGE MEDIUM HAVING INFORMATION PROCESSING PROGRAM STORED THEREIN, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING SYSTEM - A computer-readable storage medium has stored therein an information processing program that causes a computer of an information processing apparatus to operate as: means for sequentially obtaining an image; means for detecting a specific object from the obtained image; means for detecting, on the basis of a first threshold and a pixel value obtained from a first region of the detected specific object, first region information on the first region; calculation means for calculating a second threshold on the basis of the pixel value obtained from the first region when the first region information is detected; means for detecting, on the basis of the second threshold calculated by the calculation means and a pixel value obtained from a second region of the detected specific object that is different from the first region, second region information on the second region; and means for outputting at least the second region information detected. | 10-11-2012 |
20120257789 | Method and Apparatus for Motion Recognition - A motion recognition apparatus is provided. The motion recognition apparatus includes an event sensor configured to sense a portion of an object, where a motion occurs, and output events, a color sensor configured to photograph the object and output a color image, a motion area check unit configured to check a motion area which refers to an area where the motion occurs, using spatiotemporal correlations of the events, a shape recognition unit configured to recognize color information and shape information of an area corresponding to the motion area in the color image, a motion estimation unit configured to estimate a motion trajectory using the motion area, the color information, and the shape information, and an operation pattern determination unit configured to determine an operation pattern of the portion where the motion occurs, based on the estimated motion trajectory. | 10-11-2012 |
20120257790 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing apparatus that includes a move detecting unit that detects a move of a subject contained in a moving image from plural frame images, based on an image signal that indicates the moving image including the frame image and delay time information that indicates a delay time of an image pickup, and a correcting unit that corrects the image signal, based on the image signal and move information that indicates a move of a detected subject. | 10-11-2012 |
20120257791 | APPARATUS AND METHOD FOR DETECTING A VERTEX OF AN IMAGE - Disclosed are an apparatus and a method for detecting a vertex of an image with a high degree of accuracy, reducing the time needed to detect the vertex by minimizing the interactive operations required of the user, even with a touch input part having low sensing precision. The method includes inputting a vertex position of an image, setting an ROI, detecting a plurality of edges, detecting a candidate straight line group based on the edges, removing from the candidate straight line group any candidate straight line that forms an angle less than a critical angle with respect to a base candidate straight line, and determining, as an optimal vertex, an intersection point between a remaining candidate straight line and the base candidate straight line at the position of minimum distance from the input vertex position. | 10-11-2012 |
20120257792 | Method for Geo-Referencing An Imaged Area - A method for geo-referencing an area by an imaging optronics system which comprises acquiring M successive images by a detector, the imaged area being distributed between these M images, with M≧1. It comprises: measuring P distances d | 10-11-2012 |
20120257793 | VIDEO OBJECT CLASSIFICATION - Techniques for classifying one or more objects in at least one video, wherein the at least one video comprises a plurality of frames are provided. One or more objects in the plurality of frames are tracked. A level of deformation is computed for each of the one or more tracked objects in accordance with at least one change in a plurality of histograms of oriented gradients for a corresponding tracked object. Each of the one or more tracked objects is classified in accordance with the computed level of deformation. | 10-11-2012 |
20120257794 | SYSTEMS AND METHODS FOR MANAGING ERRORS UTILIZING AUGMENTED REALITY - Methods for managing errors utilizing augmented reality are provided. One system includes a transceiver configured to communicate with a systems management console, a capture device for capturing environmental inputs, a memory storing code comprising an augmented reality module, and a processor. The processor, when executing the code comprising the augmented reality module, is configured to perform the method below. One method includes capturing an environmental input, identifying a target device in the captured environmental input, and querying the systems management console regarding a status condition for the target device. Also provided are physical computer storage mediums including a computer program product for performing the above method. | 10-11-2012 |
20120263346 | VIDEO-BASED DETECTION OF MULTIPLE OBJECT TYPES UNDER VARYING POSES - Training data object images are clustered as a function of motion direction attributes and resized from respective original into same aspect ratios. Motionlet detectors are learned for each of the sets from features extracted from the resized object blobs. A deformable sliding window is applied to detect an object blob in input by varying window size, shape or aspect ratio to conform to a shape of the detected input video object blob. A motion direction of an underlying image patch of the detected input video object blob is extracted and motionlet detectors selected and applied that have similar motion directions. An object is thus detected within the detected blob and semantic attributes of an underlying image patch extracted if a motionlet detector fires, the extracted semantic attributes available for use for searching for the detected object. | 10-18-2012 |
20120263347 | THREE-DIMENSIONAL SCANNER AND ROBOT SYSTEM - A three-dimensional scanner according to one aspect of the embodiments includes an irradiation unit, an imaging unit, a position detecting unit, and a scanning-region determining unit. The irradiation unit emits a slit-shaped light beam while changing an irradiation position with respect to a measuring object. The imaging unit sequentially captures images of the measuring object irradiated with the light beam. The position detecting unit detects a position of the light beam in an image captured by the imaging unit by scanning the image. The scanning-region determining unit determines a scanning region in an image as a scanning target by the position detecting unit based on a position of the light beam in an image captured by the imaging unit before the image as a scanning target. | 10-18-2012 |
20120263348 | ORIENTATION INVARIANT OBJECT IDENTIFICATION USING MODEL-BASED IMAGE PROCESSING - A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-Dimensional space. It is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images. | 10-18-2012 |
20120263349 | MONITORING STATE DISPLAY APPARATUS, MONITORING STATE DISPLAY METHOD, AND MONITORING STATE DISPLAY PROGRAM - The present invention makes it possible to grasp a problem area of a system quickly and accurately. The present invention has: a reference position allocation unit | 10-18-2012 |
20120263350 | METHOD FOR IDENTIFYING LOST OR UNASSIGNABLE LUGGAGE - A method identifies lost or non-assignable luggage, in particular air travel luggage, which is subjected to a security inspection after check-in by being channeled through an x-ray scanner and the x-ray images being analyzed. In order to improve and simplify the method for identifying lost or non-assignable luggage without mechanized effort, the x-ray images taken in the x-ray scanner, which contain details about the contents of the luggage, as well as assigned optically or electronically output information about the luggage, are saved in a computer for later evaluation. | 10-18-2012 |
20120263351 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 10-18-2012 |
20120269382 | Object Recognition Device and Object Recognition Method - An object recognition device includes: an image-capturing unit mounted to a mobile body; an image generation unit that converts images captured by the image-capturing unit at different time points to corresponding synthesized images as seen vertically downwards from above; a detection unit that compares together a plurality of the synthesized images and detects corresponding regions; and a recognition unit that recognizes an object present upon the road surface from a difference between the corresponding regions. | 10-25-2012 |
20120269383 | RELIABILITY IN DETECTING RAIL CROSSING EVENTS - A method, data processing system, apparatus, and computer program product for monitoring objects. A plurality of images of an area is received. An object in the area is identified from the plurality of images. A plurality of points in a region within the area is identified from a first image in the plurality of images. The plurality of points has a fixed relationship with each other and the region. The object in the area is monitored to determine whether the object has entered the region. A determination that the object has not entered the region is made in response to identifying an absence of a number of the plurality of points in a second image in the plurality of images. | 10-25-2012 |
20120269384 | Object Detection in Depth Images - A method for detecting an object in a depth image includes determining a detection window covering a region in the depth image, wherein a location of the detection window is based on a location of a candidate pixel in the depth image, wherein a size of the detection window is based on a depth value of the candidate pixel and a size of the object. A foreground region in the detection window is segmented based on the depth value of the candidate pixel and the size of the object. A feature vector is determined based on depth values of the pixels in the foreground region and the feature vector is classified to detect the object. | 10-25-2012 |
20120269385 | SYSTEMS AND METHODS FOR OBJECT IMAGING - A method for imaging an object is provided. The method includes acquiring tomographic image data of the object at a plurality of frequencies, generating a composite image of the object at each of the plurality of frequencies using the acquired tomographic image data, determining a scaling factor for a first material at each of the plurality of frequencies, determining a scaling factor for a second material at each of the plurality of frequencies, and decomposing the composite images into a first discrete image and a second discrete image using the determined scaling factors, wherein the first discrete image contains any region of the object composed of the first material and the second discrete image contains any region of the object composed of the second material. | 10-25-2012 |
20120269386 | Motion Tracking - In one embodiment, one or more computing devices receive an identifying feature of a target entity, the identifying feature requiring that the target entity be in a line of sight of a camera for the camera to recognize the identifying feature; locate the target entity using the camera based on the identifying feature; and track the target entity using the camera based on the identifying feature. | 10-25-2012 |
20120269387 | SYSTEMS AND METHODS FOR DETECTING THE MOVEMENT OF AN OBJECT - Systems and methods are provided for detecting a movement of an object marked with a marker. The system includes a sensor configured to capture a first image of the marker and to capture a second image of the marker after the first image, each of the first and second images having pixels each having a visual intensity. A controller is configured to compare the first image and the second image by comparing the visual intensity of each of the pixels of the first image and the second image, determine an area of overlap between the first image and the second image based on the comparison, calculate a change in position of the marker in the second image relative to the marker in the first image based on the area of overlap, and detect the movement of the object based on the change in position of the marker. | 10-25-2012 |
20120269388 | ONLINE REFERENCE PATCH GENERATION AND POSE ESTIMATION FOR AUGMENTED REALITY - A reference patch of an unknown environment is generated on the fly for positioning and tracking. The reference patch is generated using a captured image of a planar object with two perpendicular sets of parallel lines. The planar object is detected in the image and axes of the world coordinate system are defined using the vanishing points for the two sets of parallel lines. The camera rotation is recovered based on the defined axes, and the reference patch of at least a portion of the image of the planar object is generated using the recovered camera rotation. The reference patch can then be used for vision based detection and tracking. The planar object may be detected in the image as sets of parallel lines or as a rectangle. | 10-25-2012 |
20120269389 | INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING INFORMATION PROCESSING APPARATUS, AND STORAGE MEDIUM - An information processing apparatus comprising: an obtaining unit configured to obtain image data; a detection unit configured to detect an object from the image data; an attribute determination unit configured to determine an attribute indicating a characteristic of the object detected by the detection unit; a registration unit configured to register the image data in at least one of a plurality of dictionaries based on the attribute determined by the attribute determination unit; and an adding unit configured to add, when the image data is registered in not less than two dictionaries, link information concerning the image data registered in the other dictionary to the image data registered in one dictionary. | 10-25-2012 |
20120269390 | IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND STORAGE MEDIUM - An image processing apparatus comprising a storage unit configured to store image data; a readout unit configured to read out the image data stored in the storage unit; a detection unit configured to detect a target object from the image data read out by the readout unit; a conversion unit configured to convert a resolution of the image data read out by the readout unit; and a write unit configured to write the image data having the resolution converted by the conversion unit in the storage unit, wherein the readout unit outputs the readout image data in parallel to the detection unit and the conversion unit. | 10-25-2012 |
20120269391 | ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - An environment recognition device obtains a luminance of a target portion existing in a detection area, obtains a height of the target portion, and provisionally determines a specific object corresponding to the target portion or determines a specific object corresponding to grouped target objects, according to the luminance and the height of the target portion based on the association (specific object table) of a range of luminance and a range of height from a road surface with the specific object which is retained in a data retaining unit. | 10-25-2012 |
20120269392 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - A plurality of images obtained by capturing a recognition target object from different viewpoint positions is acquired, and a portion set on the recognition target object in the image is received as a set portion for each of the images. A plurality of feature points is set in each of the images so as to set a larger number of feature points at the set portion than at an unset portion other than the set portion. The recognition target object is learned using image feature amounts at the feature points. | 10-25-2012 |
20120269393 | ARTICULATION REGION DISPLAY APPARATUS, ARTICULATION REGION DETECTING APPARATUS, ARTICULATION REGION BELONGINGNESS CALCULATING APPARATUS, ARTICULATED OBJECT REGION BELONGINGNESS CALCULATING APPARATUS, AND ARTICULATION REGION DISPLAY METHOD - An articulation region display apparatus includes: an articulatedness calculating unit calculating an articulatedness, based on a temporal change in a point-to-point distance and a temporal change in a geodetic distance between given trajectories; an articulation detecting unit detecting, as an articulation region, a region corresponding to a first trajectory based on the articulatedness between the trajectories, the first trajectory being in a state where the regions corresponding to the first trajectory and a second trajectory are present on the same rigid body, the regions corresponding to the first trajectory and third trajectory are present on the same rigid body, and the region corresponding to the second trajectory is connected with the region corresponding to the third trajectory via the same joint; and a display control unit transforming the articulation region into a form visually recognized by a user, and outputting the transformed articulation region. | 10-25-2012 |
20120269394 | SYSTEMS AND METHODS FOR GENERATING ENHANCED SCREENSHOTS - Systems and methods for generating and providing enhanced screenshots may include executing instructions stored in memory to evaluate at least a portion of a viewing frustum generated by the instructions to determine one or more objects included therein, obtain metadata associated with the one or more objects, and generate at least one enhanced screenshot indicative of the at least a portion of the viewing frustum by associating the metadata of each of the one or more objects with a location of each of the one or more objects within the at least one enhanced screenshot to create hotspots indicative of each of the one or more objects such that selection of at least one hotspot by a computing system causes at least a portion of the metadata associated with the at least one hotspot to be displayed on a display device of a computing system. | 10-25-2012 |
20120269395 | Automated Service Measurement, Monitoring and Management - In a method and system of service management, a radiative sensor is positioned to observe an area of interest. At least one frame of data of the area of interest is electronically acquired from the radiative sensor. The acquired frame of data is electronically processed to determine the presence or absence of at least one object in the area of interest. Based on the presence or absence of the object in the area of interest, (1) an alert is electronically caused to be generated in response to also electronically detecting another object in another area of interest, and/or (2) a timer is electronically caused to initiate or terminate counting a period of time. | 10-25-2012 |
20120269396 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 10-25-2012 |
20120269397 | DETECTION OF AN OBJECT IN AN IMAGE - The invention provides a method, system, and program product for detecting an object in a digital image. In one embodiment, the invention includes: deriving an initial object indication mask based on pixel-wise differences between a first digital image and a second digital image, at least one of which includes the object; performing an edge finding operation on both the first and second digital images, wherein the edge finding operation includes marking added edges; generating a plurality of straight linear runs of pixels across an image containing the object, wherein each of the plurality of straight linear runs starts and ends on an added edge and is contained within the initial object indication mask; and forming a final object indication mask by retaining only pixels that are part of at least one of the plurality of straight linear runs. | 10-25-2012 |
20120275645 | Method and Apparatus for Calibrating and Re-Aligning an Ultrasound Image Plane to a Navigation Tracker - The present disclosure relates to acquiring image data of a subject with an imaging system that has been calibrated. The imaging system can include an ultrasound imaging system that collects one or more images based on a plane of image acquisition. The plane of image acquisition can be calibrated to a tracking device associated with the ultrasound transducer. | 11-01-2012 |
20120275646 | METHOD, APPARATUS AND SYSTEM FOR DETERMINING IF A PIECE OF LUGGAGE CONTAINS A LIQUID PRODUCT - A method, an apparatus and a system are provided for determining if a piece of luggage contains a liquid product comprised of a container holding a body of liquid. The piece of luggage is scanned with an X-ray scanner to generate X-ray image data conveying an image of the piece of luggage and contents thereof. The X-ray image data is processed with a computer to detect a liquid product signature in the X-ray image data and determine if a liquid product is present in the piece of luggage. A detection signal is released at an output of the computer conveying whether a liquid product was identified in the piece of luggage. The detection signal may, for example, be used in rendering a visual representation of the piece of luggage on a display device to convey information to an operator as to the presence of a liquid product in the piece of luggage. | 11-01-2012 |
20120275647 | IMAGE PROCESSING SYSTEM, IMAGE PROCESSING APPARATUS AND SERVER IN THE IMAGE PROCESSING SYSTEM, AND DATA CONTROL METHOD AND STORAGE MEDIUM STORING PROGRAM THEREOF - When an image processing apparatus transmits information about image data stored therein to a server, the server determines whether or not the image data contains confidential information, and transmits the determination result to the image processing apparatus. When the image processing apparatus receives, from the server, determination result indicating whether or not the image data contains confidential information, then if the determination result indicates that the image data contains specific information, the image processing apparatus limits use of the image data. | 11-01-2012 |
20120275648 | IMAGING DEVICE AND IMAGING METHOD AND PROGRAM - An imaging device includes an image input section which sequentially inputs image data with a predetermined time interval, a face detector to detect a face area of a subject from the image data, a rotational motion detector to detect a rotational motion between two frames of image data input with the predetermined time interval, and a controller to control the imaging device to execute a predetermined operation when the rotational motion is detected by the rotational motion detector. The rotational motion detector detects a candidate of rotational motion between the two frames of image data, calculates a coordinate of a rotation center and a rotational angle of the candidate, and determines whether or not the candidate is the rotational motion from a central coordinate of the detected face area, the coordinate of the rotation center, and the rotational angle. | 11-01-2012 |
20120275649 | FOREGROUND OBJECT TRACKING - Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered before the foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real time surveillance applications. | 11-01-2012 |
20120275650 | METHOD AND APPARATUS FOR DETECTING AND PROCESSING SPECIFIC PATTERN FROM IMAGE - In an image within which a face pattern is detected, when a ratio of a skin color pixel is equal to or smaller than a first threshold value in a first region and a ratio of a skin color pixel is equal to or greater than a second threshold value in a second region, the vicinity of the first region is determined to be a face candidate position at which the face pattern can exist. Face detection is carried out on the face candidate position. The second region is arranged in a predetermined position relative to the first region. | 11-01-2012 |
20120275651 | SYSTEM AND METHOD FOR DETECTING POTENTIAL PROPERTY INSURANCE FRAUD - A system and method for assessing a condition of property for insurance purposes includes a sensor for acquiring a spectral image. In a preferred embodiment, the spectral image is post-processed to generate at least one spectral radiance plot, the plot used as input to a radiative transfer computer model. The output of the model establishes a spectral signature for the property. Over a period of time, spectral signatures can be compared to generate a spectral difference, which can be used to determine whether a change in the condition of the property was potentially fraudulently caused. | 11-01-2012 |
20120275652 | SYSTEM AND METHOD FOR USING FEATURE TRACKING TECHNIQUES FOR THE GENERATION OF MASKS IN THE CONVERSION OF TWO-DIMENSIONAL IMAGES TO THREE-DIMENSIONAL IMAGES - The present invention is directed to systems and methods for controlling 2-D to 3-D image conversion and/or generation. The methods and systems use auto-fitting techniques to create a mask based upon tracking features from frame to frame. When features are determined to be missing they are added prior to auto-fitting the mask. | 11-01-2012 |
20120281872 | DETECTING AN INTEREST POINT IN AN IMAGE USING EDGES - Technology is described for detecting an interest point in an image using edges. An example method can include the operation of computing locally normalized edge magnitudes and edge orientations for the image using a processor to form a normalized gradient image. The normalized gradient image can be divided into a plurality of image orientation maps having edge orientations. Orientation dependent filtering can be applied to the image orientation maps to form response images. A further operation can be summing the response images to obtain an aggregated filter response image. Maxima can be identified in spatial position and scale in the aggregated filter response image. Maxima in the aggregated filter response image can be defined as interest points. | 11-08-2012 |
20120281873 | INCORPORATING VIDEO META-DATA IN 3D MODELS - A moving object detected and tracked within a field of view environment of a 2D data feed of a calibrated video camera is represented by a 3D model through localizing a centroid of the object and determining an intersection with a ground-plane within the field of view environment. An appropriate 3D mesh-based volumetric model for the object is initialized by using a back-projection of a corresponding 2D image as a function of the centroid and the determined ground-plane intersection. Nonlinear dynamics of a tracked motion path of the object are represented as a collection of different local linear models. A texture of the object is projected onto the 3D model, and 2D tracks of the object are upgraded to 3D motion to drive the 3D model by learning a weighted combination of the different local linear models that minimizes an image re-projection error of model movement. | 11-08-2012 |
20120281874 | Method, material, and apparatus to improve acquisition of human frontal face images using image template - The present invention provides a method and apparatus that improve acquisition of human frontal facial images by utilizing a frontal facial image template. | 11-08-2012 |
20120281875 | WAFER DETECTING APPARATUS - A wafer detecting apparatus detects storage states of a plurality of wafers stored in a wafer container. The plurality of wafers are stored substantially horizontal in slots in the wafer container to be transferred in and out of a front opening of the wafer container. The wafer detecting apparatus includes a vertically extending illumination device that emits light through the front opening onto the plurality of wafers and an imaging device that receives the light reflected from the plurality of wafers. The imaging device is arranged substantially directly in front of the wafer container and the illumination device is arranged in at least one of left and right sides of the imaging device. | 11-08-2012 |
20120281876 | APPARATUS AND METHOD FOR DETERMINING KIND OF STEEL MATERIAL - An apparatus for determining a kind of a steel material includes an image pickup device | 11-08-2012 |
20120281877 | APPARATUS AND METHOD FOR DETERMINING KIND OF STEEL MATERIAL - An apparatus | 11-08-2012 |
20120281878 | TARGET-OBJECT DISTANCE MEASURING DEVICE AND VEHICLE MOUNTED WITH THE DEVICE - In a target-object distance measuring device and a vehicle on which the device is mounted, a human body detection device is utilized for calculating a distance between an image capturing device and a human body candidate in an actual space based on the size of the human body candidate in the image. The head width is approximately 15-16 cm. A body height in the actual space of the human candidate in the image is estimated based on the ratio between the head-width in the extracted image and at least one size of the human body feature, such as total height, in the extracted human body candidate region, and the distance from the image capturing device to the human body candidate in the actual space is calculated based on the estimated body height in the actual space and the body height of the human body candidate in the image. | 11-08-2012 |
20120281879 | Method and System for 2D Detection of Localized Light Contributions - The invention relates to a detection system for determining whether a light contribution of a light source is present at a selected position within a 2D scene. The light contribution includes an embedded code comprising a repeating sequence of N symbols. The detection system includes a camera and a processing unit. The camera is configured to acquire a series of images of the scene via specific open/closure patterns of the shutter. Each image includes a plurality of pixels, each pixel representing an intensity of the light output of the light source at a different physical position within the scene. The processing unit is configured to process the acquired series of images to determine whether the light contribution of the first light source is present at the selected physical position within the scene by e.g. correlating a sequence of pixels of the acquired series corresponding to the selected physical position with the first sequence of N symbols. | 11-08-2012 |
20120281880 | Sensing Data from Physical Objects - Directional albedo of a particular article, such as an identity card, is measured and stored. When the article is later presented, it can be confirmed to be the same particular article by re-measuring the albedo function, and checking for correspondence against the earlier-stored data. The re-measuring can be performed through us of a handheld optical device, such as a camera-equipped cell phone. The albedo function can serve as random key data in a variety of cryptographic applications. The function can be changed during the life of the article. A variety of other features are also detailed. | 11-08-2012 |
20120288138 | SYSTEM AND METHOD FOR TRAFFIC SIGNAL DETECTION - A method and system may determine a location of a vehicle, collect an image using a camera associated with the vehicle, analyze the image in conjunction with the location of the vehicle and/or previously collected information on the location of traffic signals or other objects (e.g., traffic signs), and using this analysis locate an image of a traffic signal within the collected image. The position (e.g., a geographic position) of the signal may be determined, and stored for later use. The identification of the signal may be used to provide an output such as the status of the signal (e.g., a green light). | 11-15-2012 |
20120288139 | SMART BACKLIGHTS TO MINIMIZE DISPLAY POWER CONSUMPTION BASED ON DESKTOP CONFIGURATIONS AND USER EYE GAZE - Methods and devices to conserve power on a mobile device determine an active region on a display and dim the portion of the display backlight corresponding to the non-active regions. The method includes detecting an active region and a non-active region on a display. The detection may be based on a user interaction with the display or processing an image of the user to determine where on the display the user is looking. The method may control a brightness of a backlight of the display depending on the active and non-active regions. | 11-15-2012 |
20120288140 | METHOD AND SYSTEM FOR SELECTING A VIDEO ANALYSIS METHOD BASED ON AVAILABLE VIDEO REPRESENTATION FEATURES - A method is performed for selecting a video analysis method based on available video representation features. The method includes: determining a plurality of available video representation features for a first video output from a first video source and for a second video output from a second video source; and analyzing the plurality of video representation features as compared to at least one threshold to select one of a plurality of video analysis methods to track an object between the first and the second videos. | 11-15-2012 |
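A toy version of the feature-driven selection step, assuming the "available video representation features" can be modeled as sets of feature names and that the threshold counts features shared by both sources; the method names are placeholders, not terms from the application:

```python
def select_tracking_method(features_a, features_b, min_shared=3):
    """Pick a cross-camera tracking method based on how many representation
    features the two video sources have in common. Falls back to a coarser
    method when too few features are shared (all names are illustrative)."""
    shared = len(set(features_a) & set(features_b))
    return 'appearance-model tracking' if shared >= min_shared else 'trajectory-only tracking'
```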
20120288141 | Device, Method and Program for Processing Image - Disclosed herein is a device for processing a moving image, the device including: a selection unit which selects an image group composed of a plurality of still images including a target image from the moving image, according to specified information for specifying the target image among the plurality of still images included in the moving image; an acquisition unit which performs an acquisition process of acquiring the plurality of still images included in the image group from the moving image; and a synthesis unit which performs a synthesis process of synthesizing the plurality of acquired still images and generating a high-resolution image of the target image having a pixel density higher than that of the target image, wherein the selection unit has a function for performing selection by a first mode for selecting the target image and a still image which is located behind the target image in time-series order. | 11-15-2012 |
20120288142 | OBJECT TRACKING - In general, the subject matter described in this specification can be embodied in methods, systems, and program products. A computing system accesses an indication of a first template that includes a region of a first image. The region of the first image includes a graphical representation of a face. The computing system receives a second image. The computing system identifies indications of multiple candidate templates. Each respective candidate template from the multiple candidate templates includes a respective candidate region of the second image. The computing system compares at least the first template to each of the multiple candidate templates, to identify a matching template from among the multiple candidate templates that includes a candidate region that matches the region of the first image that includes the graphical representation of the face. | 11-15-2012 |
20120288143 | MOTION TRACKING SYSTEM FOR REAL TIME ADAPTIVE IMAGING AND SPECTROSCOPY - This invention relates to a system that adaptively compensates for subject motion in real-time in an imaging system. An object orientation marker ( | 11-15-2012 |
20120288144 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND MOTION DETECTION SYSTEM - According to one embodiment, an image processing apparatus includes an integrator and a motion determination unit. The motion determination unit determines movement of an object. The integrator integrates information on a first frame in a unit domain in the image of each frame, and integrates information on a second frame while inverting a sign of a signal level in the integration of the first frame. The motion determination unit makes the motion determination in the unit domain according to the integration result of the integrator. | 11-15-2012 |
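The sign-inverting integration can be sketched as a running sum over one unit domain: the first frame's samples are added and the second frame's subtracted, so a static region cancels to near zero while a changed region leaves a residual. The threshold test standing in for the motion determination is an assumption:

```python
def motion_in_domain(frame_a, frame_b, threshold):
    """Integrate frame A's signal levels over one unit domain, then frame B's
    with the sign inverted; a large residual suggests the content moved."""
    acc = 0
    for a, b in zip(frame_a, frame_b):
        acc += a   # first-frame integration
        acc += -b  # second frame integrated with inverted sign
    return abs(acc) >= threshold
```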
20120288145 | ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. The environment recognition device obtains a luminance of a target portion in a detection area; obtains a height of the target portion; derives a white balance correction value, assuming that white balancing is performed on the obtained luminance; derives the corrected luminance by subtracting from the obtained luminance the white balance correction value and a color correction value based upon a color correction intensity indicating a degree of influence of environment light; and provisionally determines a specific object corresponding to the target portion from the corrected luminance of the target portion based on an association of a luminance range and the specific object retained in a data retaining unit. | 11-15-2012 |
20120288146 | ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. The environment recognition device provisionally determines a specific object corresponding to a target portion from a luminance of a target portion, groups, as a target object, adjacent target portions provisionally determined to correspond to a same specific object, groups, as the target object, the target portions corresponding to a same specific object with respect to the target object and the luminance, when differences in horizontal distance and in height from the target object of target portions fall within a first predetermined range, and determines that the target object is the specific object when a ratio of target portion of which luminance is included in a predetermined luminance range with respect to all target portions in a specific region in the target object is equal to or more than a predetermined threshold value. | 11-15-2012 |
20120288147 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - In order to enable a substitute representative image to be properly selected in a case where a representative image representing a plurality of series of related images has been deleted, when a deletion or the like of the representative image is performed, whether or not the representative image is associated with an image file that is subsequently read out is discriminated on the basis of attribute information of each image. If it is associated as a result of the discrimination, a system control unit stores the attribute information of the currently read-out image file and repeats the processing until an image number equals a minimum number or an image file that is not an unrelated image is detected. Even if the representative image is deleted, a representative image can thus be selected from the images associated with the deleted representative image. | 11-15-2012 |
20120288148 | IMAGE RECOGNITION APPARATUS, METHOD OF CONTROLLING IMAGE RECOGNITION APPARATUS, AND STORAGE MEDIUM - An image recognition apparatus comprising: an obtaining unit configured to obtain one or more images; a detection unit configured to detect a target object image from each of one or more images; a cutting unit configured to cut out one or more local regions from the target object image; a feature amount calculation unit configured to calculate a feature amount from each of one or more local regions to recognize the target object; a similarity calculation unit configured to calculate, for each of one or more local regions, a similarity between the feature amounts; and a registration unit configured to, if there is a pair of feature amounts whose similarity is not less than a threshold, register, for each of one or more regions, one of the feature amounts as dictionary data for the target object. | 11-15-2012 |
20120288149 | ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. An environment recognition device | 11-15-2012 |
20120288150 | ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. The environment recognition device obtains luminances of a target portion in a detection area of a luminance image, assigns a color identifier to the target portion according to the luminances of the target portion based on an association between color identifiers and luminance ranges retained in a data retaining unit, and groups target portions that are assigned one of one or more color identifiers associated with a same specific object and whose position differences in the width direction and in the height direction are within a predetermined range. | 11-15-2012 |
20120288151 | ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. The environment recognition device provisionally determines a specific object corresponding to a target portion from a luminance of the target portion, groups adjacent target portions provisionally determined to correspond to a same specific object as a target object, derives a representative distance that is a representative value of the relative distances of target portions in the target object, and groups further target portions as the target object, the target portions corresponding to the same specific object with respect to the target object and the luminance, when a difference in horizontal distance from the target object and a difference in height from the target object fall within a first predetermined range and a difference between the relative distance and the representative distance of the target portions falls within a second predetermined range. | 11-15-2012 |
20120288152 | OBJECT RECOGNITION APPARATUS, CONTROL METHOD FOR OBJECT RECOGNITION APPARATUS AND STORAGE MEDIUM - An object recognition apparatus comprises: an extraction unit configured to extract a partial region from an image and extract a feature amount; a recognition unit configured to recognize whether the partial region is a target object based on the feature amount and one of a first recognition model including a feature amount of a positive example indicating the target object and a negative example indicating a background and a second recognition model including that of the positive example; an updating unit configured to update the first recognition model by adding the feature amount; and an output unit configured to output an object region recognized as being the target object, wherein the recognition unit performs recognition based on the first recognition model if the object region was output for a previous image, and based on the second recognition model if not. | 11-15-2012 |
20120288153 | APPARATUS FOR DETECTING OBJECT FROM IMAGE AND METHOD THEREFOR - An image processing apparatus stores a background model in which a feature amount is associated with time information for each state at each position of an image to be a background, extracts a feature amount for each position of an input video image, compares the feature amount in the input video image with that of each state in the background model, to determine the state similar to the input video image, and updates the time information of the state similar to the input video image, determines a foreground area in the input video image based on the time information of the state similar to the input video image, detects a predetermined subject from the foreground area, and updates the time information of the state in the background model. | 11-15-2012 |
20120288154 | Road-Shoulder Detecting Device and Vehicle Using Road-Shoulder Detecting Device - Disclosed is a road-shoulder detecting device including a distance-information calculating portion for calculating the presence of a physical object and the distance from the subject vehicle to the object from input three-dimensional image information relating to the environment around the vehicle, a vehicular road surface detecting portion for detecting the vehicular road surface of the subject vehicle from a distance image, a height difference calculating portion for measuring the height difference between the detected vehicular road surface and an off-road region, and a road shoulder decision portion for deciding, from the height difference, whether the road shoulder is the boundary between the vehicular road surface and the off-road region in a case where the off-road region is lower than the vehicular road surface. | 11-15-2012 |
20120288155 | SYSTEMS AND METHODS OF TRACKING OBJECTS IN VIDEO - Systems and methods for identifying, tracking, and using objects in a video or similar electronic content, including methods for tracking one or more moving objects in a video. This can involve tracking one or more feature points within a video scene and separating those feature points into multiple layers based on motion paths. Each such motion layer can be further divided into different clusters, for example, based on distances between points. These clusters can then be used as an estimate to define the boundaries of the objects in video. Objects can also be compared with one another in cases in which identified objects should be combined and considered a single object. For example, if two objects in the first two frames have significantly overlapping areas, they may be considered the same object. Objects in each frame can further be compared to determine the life of the objects across the frames. | 11-15-2012 |
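The motion-layer separation step can be approximated by greedily grouping feature points whose net displacement vectors are close to each other; the tolerance value and the compare-to-first-member rule are illustrative simplifications of whatever clustering the patent actually uses:

```python
def split_into_motion_layers(tracks, tol=2.0):
    """tracks: {point_id: (dx, dy)} net displacement of each feature point.
    Greedily assign each point to the first layer whose reference point
    moves within `tol` of it in both axes; otherwise start a new layer."""
    layers = []
    for pid, (dx, dy) in tracks.items():
        for layer in layers:
            rx, ry = tracks[layer[0]]  # layer's first member as reference
            if abs(dx - rx) <= tol and abs(dy - ry) <= tol:
                layer.append(pid)
                break
        else:
            layers.append([pid])
    return layers
```

Points riding on the same object share a motion path and land in one layer; a differently moving object splits off into its own layer, which can then be sub-clustered by spatial distance as the abstract describes.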
20120294476 | Salient Object Detection by Composition - A computing device configured to determine, for each of a plurality of locations in an image, a saliency measure based at least on a cost of composing parts of the image in the location from parts of the image outside of the location is described herein. The computing device is further configured to select one or more of the locations as representing salient objects of the image based at least on the saliency measures. | 11-22-2012 |
20120294477 | Searching for Images by Video - Techniques describe submitting a video clip as a query by a user. A process retrieves images and information associated with the images in response to the query. The process decomposes the video clip into a sequence of frames to extract the features in a frame and to quantize the extracted features into descriptive words. The process further tracks the extracted features as points in the frame, a first set of points to correspond to a second set of points in consecutive frames to construct a sequence of points. Then the process identifies the points that satisfy criteria of being stable points and being centrally located in the frame to represent the video clip as a bag of descriptive words for searching for images and information related to the video clip. | 11-22-2012 |
20120294478 | SYSTEMS AND METHODS FOR IDENTIFYING GAZE TRACKING SCENE REFERENCE LOCATIONS - A system is provided for identifying reference locations within the environment of a device wearer. The system includes a scene camera mounted on eyewear or headwear coupled to a processing unit. The system may recognize objects with known geometries that occur naturally within the wearer's environment or objects that have been intentionally placed at known locations within the wearer's environment. One or more light sources may be mounted on the headwear that illuminate reflective surfaces at selected times and wavelengths to help identify scene reference locations and glints projected from known locations onto the surface of the eye. The processing unit may control light sources to adjust illumination levels in order to help identify reference locations within the environment and corresponding glints on the surface of the eye. Objects may be identified substantially continuously within video images from scene cameras to provide a continuous data stream of reference locations. | 11-22-2012 |
20120294479 | IMAGE IDENTIFICATION APPARATUS AND METHOD - According to one embodiment, an image identification apparatus comprises an image pickup unit, an illumination unit, an illumination control unit and an identification unit. The image pickup unit is configured to pick up an image of an identified object. The illumination unit is configured to irradiate light towards the image pickup area of the image pickup unit. The illumination control unit is configured to change the irradiation condition of the illumination unit in accordance with the image pickup timing of the image pickup unit. The identification unit is configured to identify the identified object according to the image picked up by the image pickup unit. | 11-22-2012 |
20120294480 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image of a prescribed frame of images of respective frames is set as a target image, and an area including a prescribed pattern is detected from the target image as a specific area. An image other than the target image is set as a non-target image, and the specific area in the non-target image is predicted. The images of the respective frames are encoded so that the specific area is encoded to have higher image quality than an area other than the specific area. In encoding, the images of the respective frames are encoded so that the specific area in the non-target image is not referred to from another frame. | 11-22-2012 |
20120294481 | ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. The environment recognition device obtains a luminance of each of a plurality of blocks formed by dividing a detection area; derives an edge direction based on a direction in which an edge of the luminance of each block extends; associates the blocks with each other based on the edge direction so as to generate an edge trajectory; groups a region enclosed by the plurality of edge trajectories as a target object; and determines the target object as a specific object. | 11-22-2012 |
20120294482 | ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. The environment recognition device includes: a position information obtaining unit that obtains position information of a target portion in a detection area, the position information including a relative distance to a subject vehicle; a grouping unit that groups the target portions as a target object based on the position information; a luminance obtaining unit that obtains a luminance of an image of the target object; a luminance distribution generating unit that generates a histogram of the luminance of the image of the target object; and a floating substance determining unit that determines whether or not the target object is a floating substance based on a statistical analysis on the histogram. | 11-22-2012 |
20120294483 | Image Analysis for Disposal of Explosive Ordinance and Safety Inspections - Hazardous objects in the field of explosives ordnance disposal or safety controls are identified using a sensor and image data generating arrangement and a comparison unit. The sensor and image data generating arrangement examines the object and produces an image thereof, which is compared by the comparison unit to known stored reference images. These reference images are digital images of reference objects. In this manner safety controls and explosives ordnance disposals can be organized safely and efficiently. | 11-22-2012 |
20120294484 | ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. The environment recognition device retains beforehand shape information that is information on a shape of a specific object; obtains a luminance of each of target portions formed by dividing a detection area and extracts a target portion including an edge; obtains a relative distance of the target portion including an edge; and determines a specific object indicated by the shape information by performing a Hough transform on the target portion having the edge based on the shape information according to the relative distance. | 11-22-2012 |
20120294485 | ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. The device obtains position information of a target portion in a detection area, including a relative distance from a subject vehicle; groups continuous target portions, of which position differences in a width direction vertical to an advancing direction of the vehicle and in a depth direction parallel to the advancing direction fall within a first predetermined distance, into a target object; determines that the target object is a candidate of a wall when the target portions forming the target object form a tilt surface tilting at a predetermined angle or more with respect to a plane vertical to the advancing direction; and determines that continuous wall candidates of which position differences in the width and depth directions fall within a second predetermined distance longer than the first predetermined distance are a wall. | 11-22-2012 |
20120294486 | DETECTING STEREOSCOPIC IMAGES - To detect the presence of the left and right constituent images of a stereoscopic image packed within an image frame or within a sequence of image frames, images are unpacked according to each one of said known formats; a candidate measure is formed according to each unpacking and the candidate measures are compared to identify the presence of left and right images packed according to an identified format. The candidate measure may be a low pass filtered measure of the difference between the left and right images and may be a high pass filtered measure of the activity in either the left or the right image. | 11-22-2012 |
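The unpack-and-compare idea can be sketched by scoring each candidate packing with the mean absolute difference between its two unpacked halves: the true left and right views of one scene should differ least. This toy sketch omits the low-pass and high-pass filtering stages the abstract mentions, and only tries two packings:

```python
def detect_packing(frame):
    """frame: 2-D list of grayscale rows with even width and height.
    Unpack according to each known format and return the format whose two
    halves are most similar (lowest mean absolute difference)."""
    h, w = len(frame), len(frame[0])

    def mad(a, b):
        total = sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))
        return total / (len(a) * len(a[0]))

    side = mad([row[:w // 2] for row in frame], [row[w // 2:] for row in frame])
    top = mad(frame[:h // 2], frame[h // 2:])
    return 'side-by-side' if side < top else 'top-bottom'
```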
20120294487 | OBJECT DETECTING DEVICE, IMAGE DIVIDING DEVICE, INTEGRATED CIRCUIT, METHOD OF DETECTING OBJECT, OBJECT DETECTING PROGRAM, AND RECORDING MEDIUM - An object detection device is provided with a plurality of processor units each detecting an object included in an image. The object detection device generates divided images by dividing the image, taking into consideration both the processing load for detection of an object by each processor element and the transfer load for transfer of the divided images to the processor elements. Independently of each other, the processor elements detect an object in each of the divided images. | 11-22-2012 |
20120294488 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 11-22-2012 |
20120294489 | METHOD FOR AUTOMATICALLY FOLLOWING HAND MOVEMENTS IN AN IMAGE SEQUENCE - A method for following hand movements in an image flow includes receiving an image flow in real time, locating in each image of the received image flow a hand contour delimiting an image zone of the hand, extracting the postural characteristics from the image zone of the hand located in each image, and determining the hand movements in the image flow from the postural characteristics extracted from each image. The extraction of the postural characteristics of the hand in each image includes locating, in the image zone of the hand, the center of the palm of the hand by searching for the pixel of the hand image zone that is furthest from the hand contour. | 11-22-2012 |
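Finding the hand pixel farthest from the contour is a distance transform. A self-contained sketch using a multi-source BFS over a binary mask (the 0/1 mask representation and 4-connectivity are assumptions; a production version would use a proper distance transform such as OpenCV's):

```python
from collections import deque

def palm_center(mask):
    """mask: 2-D list of 0/1, where 1 marks a hand pixel. Returns the (row,
    col) of the hand pixel farthest from the hand contour, taken here as the
    palm center. Distances come from a BFS seeded at every contour pixel."""
    h, w = len(mask), len(mask[0])

    def on_contour(y, x):
        # A hand pixel is on the contour if any 4-neighbour is background
        # or out of bounds.
        if not mask[y][x]:
            return False
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                return True
        return False

    dist = [[None] * w for _ in range(h)]
    q = deque((y, x) for y in range(h) for x in range(w) if on_contour(y, x))
    for y, x in q:
        dist[y][x] = 0
    while q:  # multi-source BFS: grid distance to the nearest contour pixel
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return max(((y, x) for y in range(h) for x in range(w) if mask[y][x]),
               key=lambda p: dist[p[0]][p[1]])
```

For a solid 5×5 mask the unique farthest-from-contour pixel is the geometric center.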
20120294490 | Secondary Market And Vending System For Devices - A recycling kiosk for recycling and financial remuneration for submission of a mobile telephone is disclosed herein. The recycling kiosk includes an inspection area with at least one camera and a plurality of electrical connectors in order to perform a visual analysis and an electrical analysis of the mobile telephone for determination of a value of the mobile telephone. The recycling kiosk also includes a processor, a display and a user interface. | 11-22-2012 |
20120294491 | System and Method for Automatic Registration Between an Image and a Subject - A patient defines a patient space in which an instrument can be tracked and navigated. An image space is defined by image data that can be registered to the patient space. A tracking device can be connected to a member in a known manner that includes imageable portions that generate image points in the image data. The tracking device can be tracked to register patient space to image space. | 11-22-2012 |
20120300978 | Device and Method for Determining the Orientation of an Eye - In a device or a method for determining the direction of vision of an eye, a starting point or a final point of a light beam reflected by a part of the eye and detected by a detector system, or of a light beam projected by a projection system onto or into the eye two-dimensionally, describes a pattern of a scanning movement in the eye. The inventive method uses a displacement device that guides the center of the pattern of movement into the pupil or macula center of the eye, and a determination device that uses the pattern of movement of the scanning movement to determine the pupil center or macula center. | 11-29-2012 |
20120300979 | PLANAR MAPPING AND TRACKING FOR MOBILE DEVICES - Real time tracking and mapping is performed using images of unknown planar object. Multiple images of the planar object are captured. A new image is selected as a new keyframe. Homographies are estimated for the new keyframe and each of a plurality of previous keyframes for the planar object that are spatially distributed. A graph structure is generated using the new keyframe and each of the plurality of previous keyframes and the homographies between the new keyframe and each of the plurality of previous keyframes. The graph structure is used to create a map of the planar object. The planar object is tracked based on the map and subsequently captured images. | 11-29-2012 |
20120300980 | LEARNING DEVICE, LEARNING METHOD, AND PROGRAM - Disclosed is a learning device. A feature-quantity calculation unit extracts a feature quantity from each feature point of a learning image. An acquisition unit acquires a classifier already obtained by learning as a transfer classifier. A classifier generation unit substitutes feature quantities into weak classifiers constituting the transfer classifier, calculates error rates of the weak classifiers on the basis of classification results of the weak classifiers and a weight of the learning image, and iterates a process of selecting a weak classifier of which the error rate is minimized a plurality of times. In addition, the classifier generation unit generates a classifier for detecting a detection target by linearly coupling a plurality of selected weak classifiers. | 11-29-2012 |
20120300981 | METHOD FOR OBJECT DETECTION AND APPARATUS USING THE SAME - A method for object detection and an apparatus using the same are provided. In the method, an image is captured, in which the image includes a plurality of sampling-windows. A first-stage sub-classifier of a classifier is used to detect whether the sampling-windows contain an object therein. The classifier is rotated at least one time by a predetermined rotation angle, and the first-stage sub-classifier is used to detect whether the sampling-windows contain the object after each rotation; when the object is detected within the sampling-windows, detection of whether the sampling-windows contain the object continues sequentially from a second-stage sub-classifier to an N-th stage sub-classifier. | 11-29-2012 |
20120300982 | IMAGE IDENTIFICATION DEVICE, IMAGE IDENTIFICATION METHOD, AND RECORDING MEDIUM - The invention provides an image identification device that classifies block images obtained by dividing a target image into predetermined categories, using a separating plane whose learning has been completed in advance for each of the categories. The image identification device includes a target image input unit that inputs the target image, a block image generation unit that divides the target image into blocks to generate the block images, a feature quantity computing unit that computes feature quantities of the block images, and a category determination unit that determines whether or not the block images are classified into one of the categories, using the separating plane and coordinate positions corresponding to magnitudes of feature quantities of the block images in a feature quantity space, wherein the feature quantity computing unit uses, as the feature quantity of a given target block image, local feature quantities and a global feature quantity. | 11-29-2012 |
20120300983 | SYSTEMS AND METHODS FOR MULTI-PASS ADAPTIVE PEOPLE COUNTING UTILIZING TRAJECTORIES - People are counted in a segment of video with a video processing system that is configured with a first set of parameters, producing a first output. Based on this first output, a second set of parameters is chosen, and people are counted in the segment of video again using the second set of parameters, producing a second output. People are also counted with the video played forward and with the video played backward, and the results of these two counts are reconciled to produce a more accurate people count. | 11-29-2012 |
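The abstract does not specify its forward/backward reconciliation rule; as a loudly hypothetical placeholder, one could average per-interval tallies from the two passes, which damps count errors that only one playback direction produces:

```python
def reconcile_counts(forward_counts, backward_counts):
    """forward_counts / backward_counts: per-interval people tallies from the
    forward and backward passes. Toy reconciliation: rounded mean per
    interval, then summed. The patent's actual rule is not given here."""
    return sum(round((f + b) / 2) for f, b in zip(forward_counts, backward_counts))
```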
20120300984 | RECORDING THE LOCATION OF A POINT OF INTEREST ON AN OBJECT - A method of recording the location of a point of interest on an object, the method comprising capturing a digital image of an object having a point of interest, accessing a three-dimensional virtual model of the object, aligning the image with the model, calculating the location of the point of interest with respect to the model, and recording the calculated point of interest location. Also, a system for performing the method. | 11-29-2012 |
20120300985 | AUTHENTICATION SYSTEM, AND METHOD FOR REGISTERING AND MATCHING AUTHENTICATION INFORMATION - A certain amount of unique data of a target is extracted from image information that was read, and it is determined whether or not the target is valid on the basis of the extracted unique data. Processes are executed by means of an image reading unit which extracts an image by scanning a target, an individual difference data calculating unit which calculates individual difference data from the obtained image, an individual difference data comparing unit which compares the calculated individual difference data, and a determination unit which determines whether or not to grant validation. | 11-29-2012 |
20120308076 | APPARATUS AND METHODS FOR TEMPORALLY PROXIMATE OBJECT RECOGNITION - Object recognition apparatus and methods useful for extracting information from an input signal. In one embodiment, the input signal is representative of an element of an image, and the extracted information is encoded into patterns of pulses. The patterns of pulses are directed via transmission channels to a plurality of detector nodes configured to generate an output pulse upon detecting an object of interest. Upon detecting a particular object, a given detector node elevates its sensitivity to that particular object when processing subsequent inputs. In one implementation, one or more of the detector nodes are also configured to prevent adjacent detector nodes from generating detection signals in response to the same object representation. The object recognition apparatus modulates properties of the transmission channels by promoting contributions from channels carrying information used in object recognition. | 12-06-2012 |
20120308077 | Computer-Vision-Assisted Location Check-In - In one embodiment, an uploaded multimedia object comprising a photo image or video is subjected to computer vision algorithms to detect and isolate objects within the multimedia object, and the isolated object is searched against a photographic location database containing images of a plurality of locations. Upon detecting a matching object, the location information associated with the photograph in the database containing the matching object may be leveraged to automatically check the user in to the associated location. | 12-06-2012 |
20120308078 | STORAGE MEDIUM STORING IMAGE PROCESSING PROGRAM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND IMAGE PROCESSING SYSTEM - An image processing apparatus writes a virtual space image obtained by imaging a virtual space in which objects are arranged from a virtual camera to an output area. When a pointer image representing a positional relationship between a referential position and an arrangement position of the object is depicted on the virtual space image stored in the output area, the pointer image to be depicted is changed in correspondence with conditions, such as the height of the virtual camera and the attribute of the object. | 12-06-2012 |
20120308079 | IMAGE PROCESSING DEVICE AND DROWSINESS ASSESSMENT DEVICE - An object of the present invention is to reduce false detection of an eyelid from a face image. According to the present invention, it is determined whether the amount of the change in the position of an eyelid outline candidate line during blinking matches the normal movement of an eyelid. When it is determined that the amount of the change in the position of the eyelid outline candidate line does not match the normal movement of the eyelid during blinking, the eyelid outline candidate line is not set as an eyelid outline. Therefore, it is possible to reduce false detection of the eyelid from the face image. | 12-06-2012 |
20120308080 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus includes a motion vector detector configured to detect, based on a first image and a second image different from the first image among a plurality of images, motion vectors representing a movement of an object on the second image with respect to an object on the first image; a first calculation unit configured to calculate an acceleration of the object on the image based on the motion vectors; a second calculation unit configured to calculate an object position representing a position of an object on an interpolation image interpolated between the images adjacent in a time direction among the images based on the acceleration; and an interpolation processing unit configured to interpolate the interpolation image on which the object is drawn at the object position. | 12-06-2012 |
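The acceleration-based interpolation this entry describes can be sketched in one dimension: three tracked positions from consecutive frames (as motion vectors would supply) give a velocity and an acceleration, from which the object's position on an interpolation frame between the second and third frames follows. All names and the constant-acceleration model are illustrative:

```python
def interpolated_position(p0, p1, p2, t):
    """Fit constant-acceleration motion p(n) = p0 + v*n + 0.5*a*n**2
    through positions at frames 0, 1, 2, then evaluate at frame time
    1 + t, i.e. a fraction t of the way from the second frame to the
    third, giving the object position on the interpolation image."""
    a = p2 - 2 * p1 + p0      # acceleration from motion-vector differences
    v = p1 - p0 - 0.5 * a     # initial velocity consistent with p(1) = p1
    n = 1 + t
    return p0 + v * n + 0.5 * a * n * n
```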
20120308081 | POSITION INFORMATION ACQUIRING APPARATUS, POSITION INFORMATION ACQUIRING APPARATUS CONTROL METHOD, AND STORAGE MEDIUM - A position information acquiring apparatus comprises: a first acquiring unit configured to acquire first position information of the position information acquiring apparatus upon image capturing; a first storage unit configured to store image data generated by the image capturing and the first position information in a memory in association with each other; a second acquiring unit configured to acquire second position information of the position information acquiring apparatus upon image capturing; and a second storage unit configured to store the second position information in the memory in association with the image data when the second position information higher in accuracy than the first position information is acquired after the first storage unit stores the image data and the first position information in association with each other. | 12-06-2012 |
20120308082 | RECOGNITION OBJECT DETECTING APPARATUS - A recognition object detecting apparatus is provided which includes an imaging unit which generates image data representing a taken image, and a detection unit which detects a recognition object from the image represented by the image data. The imaging unit has a characteristic in which a relation between luminance and output pixel values varies depending on a luminance range. The detection unit binarizes the output pixel values of the image represented by the image data by using a plurality of threshold values to generate a plurality of binary images, and detects the recognition object based on the plurality of binary images. | 12-06-2012 |
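The multi-threshold binarization step can be sketched directly; the threshold values and the toy image below are assumptions:

```python
def binarize(image, threshold):
    """One binary image: 1 where the output pixel value meets the threshold."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def multi_threshold_binaries(image, thresholds):
    """One binary image per threshold, each suited to a different
    luminance range of the sensor's response."""
    return [binarize(image, t) for t in thresholds]

# A bright blob survives the high threshold; dimmer structure
# only appears in the low-threshold binary image.
image = [[10, 200], [40, 250]]
low, high = multi_threshold_binaries(image, [30, 180])
```

Detection would then combine evidence across the resulting binary images.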
20120314899 | NATURAL USER INTERFACES FOR MOBILE IMAGE VIEWING - The mobile image viewing technique described herein provides a hands-free interface for viewing large imagery (e.g., 360 degree panoramas, parallax image sequences, and long multi-perspective panoramas) on mobile devices. The technique controls the imagery displayed on a display of a mobile device by movement of the mobile device. The technique uses sensors to track the mobile device's orientation and position, and a front-facing camera to track the user's viewing distance and viewing angle. The technique adjusts the view of the rendered imagery on the mobile device's display according to the tracked data. In one embodiment the technique can employ a sensor fusion methodology that combines viewer tracking using the front-facing camera with gyroscope data from the mobile device to produce a robust signal that defines the viewer's 3D position relative to the display. | 12-13-2012 |
20120314900 | OBJECT TRACKING - The disclosure describes examples of systems, methods, program storage devices, and computer program products for tracking an object, where a reference image of the tracked object is outputted to an operator. | 12-13-2012 |
20120314901 | Fall Detection and Reporting Technology - Fall detection and reporting technology, in which output from at least one sensor configured to sense, in a room of a building, activity associated with a patient falling is monitored and a determination is made to capture one or more images of the room based on the monitoring. An image of the room is captured with a camera positioned to include the patient within a field of view of the camera and the captured image of the room is analyzed to detect a state of the patient at a time of capturing the image. A potential fall event for the patient is determined based on the detected state of the patient and a message indicating the potential fall event for the patient is sent based on the determination of the potential fall event for the patient. Techniques are also described for fall detection and reporting using an on-body sensing device. | 12-13-2012 |
20120314902 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - Provided is an image processing apparatus including a hand shape recognition unit that performs hand shape recognition on an input image to detect a position and a size of a hand with a specific shape in the input image, a determination region setting unit that sets a region in a vicinity of the hand on the input image as a determination region used to recognize a gesture performed using the hand, based on the position and the size of the hand, and a gesture recognition unit that recognizes the gesture by monitoring movement of the hand to the determination region. | 12-13-2012 |
20120314903 | METHOD OF RE-SAMPLING ULTRASOUND DATA - The present invention relates to multi-dimensional filtering of ultrasound scan data for antialiasing or reconstruction for the purpose of re-sampling. In particular, the present invention provides a method of re-sampling ultrasound scan data, comprising the steps of: a) obtaining sampled ultrasound scan data acquired from a beamforming system, the sampled data being defined by an original n-dimensional sample coordinate system having n axes, that is defined by the ultrasound probe and scan geometry and in which the samples are spaced uniformly along each axis when measured in units appropriate to that axis; b) defining desired target sample positions in a target n-dimensional coordinate system, that are uniformly spaced along each axis when measured in units appropriate to that axis; c) mapping the target sample positions defined in step (b) into said original n-dimensional sample coordinate system of step (a); d) quantizing the positions of the mapped target samples of step (c) so that they fall on simple exact integer subspacings between the original sample positions; e) designing a set of n-dimensional linear filter kernels according to application of Nyquist-Shannon Sampling Theory, one for each different target sample position relative to the original sample positions of its nearest neighbors, and using the original sample coordinates of the sampled data of step (a) and the desired target sample positions of step (d) in their respective n-dimensional spaces, said n-dimensional filter being separable along each of the original scan dimensions; and f) applying to the sampled data of step (a) the set of n-dimensional linear filter kernels designed in step (e), each filter being applied to calculate the target sample, thereby obtaining re-sampled data. | 12-13-2012 |
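Step (d), snapping mapped target positions onto simple integer sub-spacings of the original grid, together with a one-dimensional stand-in for steps (e)-(f), can be sketched as follows. The two-tap linear kernel is an illustrative assumption, not the Nyquist-Shannon kernel design the claim calls for:

```python
def quantize_position(x, subdivisions):
    """Snap a mapped target position to the nearest 1/subdivisions
    sub-spacing between original sample positions (step d)."""
    return round(x * subdivisions) / subdivisions

def resample_1d(samples, x, subdivisions):
    """Toy 1-D resampling at a quantized position: the fractional
    phase selects a kernel, here simple linear-interpolation weights
    (a stand-in for the designed filter kernels of step e)."""
    xq = quantize_position(x, subdivisions)
    i = int(xq)
    phase = xq - i
    if i + 1 >= len(samples):
        return samples[i]
    return (1 - phase) * samples[i] + phase * samples[i + 1]
```

Because the quantized phases form a small finite set, one kernel per phase can be designed and stored in advance, which is the efficiency the quantization step buys.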
20120314904 | IMAGE COLLATION SYSTEM, IMAGE COLLATION METHOD AND COMPUTER PROGRAM - An image collation system includes: a first direction estimating unit for estimating a first imaging direction of a reference object that matches an imaging direction of a collation target object by comparing global characteristics between an image of the collation target object and the three-dimensional data of the reference object; a second direction estimating unit for generating an image corresponding to the first imaging direction of the reference object, and estimating a second imaging direction of the reference object that matches the imaging direction of the collation target object by comparing local characteristics between the image of the collation target object and the generated image corresponding to the first imaging direction; and an image conformity determining unit for generating an image corresponding to the second imaging direction of the reference object, and determining whether the image of the collation target object matches the generated image corresponding to the second imaging direction. | 12-13-2012 |
20120314905 | System and Process for Detecting, Tracking and Counting Human Objects of Interest - A method of identifying, tracking, and counting objects of interest based upon at least one pair of stereo image frames taken by at least one image capturing device comprises the steps of: obtaining said at least one pair of stereo image frames and converting each said stereo image frame to a rectified image frame using calibration data obtained from said at least one image capturing device; identifying the presence or absence of said objects of interest and comparing each of said objects of interest to existing tracks comprising previously identified objects of interest; and for each said presence of an object of interest, adding said object of interest to one of said existing tracks if said object of interest matches said one existing track, or creating a new track comprising said object of interest if said object of interest does not match any of said existing tracks. | 12-13-2012 |
20120314906 | Device for Updating a Photometric Model - A photometric model includes at least one Gaussian model of a measurable physical magnitude in an image supplied by the camera and it is defined by the mean and the variance of the physical magnitude. A device comprises: means for computing the mean based on the current value of the physical magnitude, these means including a first summer mounted in a closed loop; means for measuring the difference between the mean and the current value of the physical magnitude, these means including a second summer; means for reducing the difference, these means including an automatic regulator. The first summer, the second summer and the automatic regulator are assembled in a closed-loop control of the first summer so as to update the model slowly in a period of stability of the observed scene and rapidly in a period of transition of the observed scene. Application: video surveillance, background subtraction. | 12-13-2012 |
20120314907 | SYSTEM AND METHOD FOR PREDICTING OBJECT LOCATION - A system for predicting object location includes a video capture system for capturing a plurality of video frames, each of the video frames having a first area, an object isolation element for locating an object in each of the plurality of video frames, the object being located at a first actual position in a first video frame and being located at a second actual position in a second video frame, and a trajectory calculation element configured to analyze the first actual position and the second actual position to determine an object trajectory, the object trajectory comprising past trajectory and predicted future trajectory, wherein the predicted future trajectory is used to determine a second area in a subsequent video frame in which to search for the object, wherein the second area is different in size than the first area. | 12-13-2012 |
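The prediction step, extrapolating a trajectory from two actual positions to bound the search area in a subsequent frame, can be sketched as constant-velocity extrapolation; the window-sizing rule and all names are assumptions:

```python
def predict_next_position(p1, p2):
    """Constant-velocity extrapolation from two actual (x, y) positions."""
    return (2 * p2[0] - p1[0], 2 * p2[1] - p1[1])

def search_area(center, half_width, half_height):
    """A reduced second search area centred on the predicted position,
    returned as (x_min, y_min, x_max, y_max); its size relative to the
    full frame is an illustrative choice."""
    cx, cy = center
    return (cx - half_width, cy - half_height,
            cx + half_width, cy + half_height)

predicted = predict_next_position((10, 20), (14, 23))
window = search_area(predicted, 5, 4)
```

Searching only this smaller window, rather than the full first area, is what makes the predicted trajectory useful.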
20120321128 | VIDEO FEED TARGET TRACKING - Technologies for object tracking can include accessing a video feed that captures an object in at least a portion of the video feed; operating a generative tracker to capture appearance variations of the object; operating a discriminative tracker to discriminate the object from the object's background, where operating the discriminative tracker can include using a sliding window to process data from the video feed, and advancing the sliding window to focus the discriminative tracker on recent appearance variations of the object; training the generative tracker and the discriminative tracker based on the video feed, where the training can include updating the generative tracker based on an output of the discriminative tracker, and updating the discriminative tracker based on an output of the generative tracker; and tracking the object with information based on an output from the generative tracker and an output from the discriminative tracker. | 12-20-2012 |
20120321129 | Methods for identifying rooftops using elevation data sets - In an embodiment, a method for identifying building unit rooftops, and their associated heights and locations is provided. The method includes subtracting a bare earth layer from a first return layer within a LIDAR data set for a geographic area of interest to form an above ground level (AGL) layer data set. A height mask is then applied to the AGL layer data set to form a building units data set. The building units data set includes data representative of potential building unit rooftops. This data set is refined through the application of a series of filters and masks to remove clutter (e.g., trees, bushes and other non-building unit structures) to refine the data set. | 12-20-2012 |
20120321130 | SYSTEM AND METHOD FOR CONFIDENCE-BASED MARKER ADJUSTMENT - A tracking system for improving observability of a marker in an image. The tracking system includes a memory unit that stores data; an imaging unit that images the marker and the image; a processor unit that detects the marker in the image; and a communication unit that transmits and receives data. The processor unit determines a first confidence level indicating a visibility of the marker to a user. | 12-20-2012 |
20120321131 | IMAGE-RELATED HANDLING SUPPORT SYSTEM, INFORMATION PROCESSING APPARATUS, AND IMAGE-RELATED HANDLING SUPPORT METHOD - An image handling support system extracts an image corresponding to a support target image from images associated with a user, using information, derived from the user's activity on an SNS, that indicates associations established between the user and the images. Then, the image handling support system supports handling of the support target image based on the extracted image. The image handling support system provides support for acquisition of an output image that matches various image conditions and complies with the user's preference, without requiring a complicated operation to be performed in advance. | 12-20-2012 |
20120321132 | METHOD OF AUTOMATICALLY TRACKING AND PHOTOGRAPHING CELESTIAL OBJECTS, AND CELESTIAL-OBJECT AUTO-TRACKING PHOTOGRAPHING APPARATUS - A method of automatically tracking and photographing a celestial object, which moves due to diurnal motion, while moving an imaging area on an imaging surface of an image sensor so that an image of the celestial object becomes stationary, includes calculating theoretical linear movement amounts and a theoretical rotational angle amount of the imaging area per a specified time; obtaining a movable-amount data table which stores data on actual linearly-movable amounts and an actual rotatable amount of the imaging area; and setting an exposure time for completing a celestial-object auto-tracking photographing operation while moving the imaging area within the range of movement thereof by comparing the theoretical linear movement amounts and the theoretical rotational angle amount with the actual linearly-movable amounts and the actual rotatable amount of the imaging area stored in the movable-amount data table. | 12-20-2012 |
20120321133 | METHOD FOR DETERMINING RELATIVE MOTION WITH THE AID OF AN HDR CAMERA - In a method for detecting a motion of an object with the aid of an image recording system (e.g., HDR camera) which includes an image sensor, a first reset and a second reset are performed at a time interval during the exposure of the image sensor, an extent of a region of constant brightness is measured from the image of an object, and the motion (direction, velocity, and optionally acceleration) of the object is ascertained from the relationship between the measured extent and the time interval between the first and second resets. This motion determination is achieved with the aid of a single image. | 12-20-2012 |
20120321134 | FACE TRACKING METHOD AND DEVICE - Device and method for tracking human face are provided. The device may include an image collection unit to receive a video image and output a current frame image included in the received video image to a prediction unit, the prediction unit to predict a 2-dimensional (2D) position of a key point of a human face in a current frame image output through the image collection unit based on 2D characteristics and 3-dimensional (3D) characteristics of a human face in a previous image obtained through a face fitting unit, and to output the predicted 2D position of the key point to the face fitting unit, and the face fitting unit to obtain the 2D characteristics and the 3D characteristics by fitting a predetermined 2D model and 3D model of the human face based on the 2D position of the key point predicted by the prediction unit using at least one condition. | 12-20-2012 |
20120321135 | PATTERN POSITION DETECTING METHOD - A pattern position detecting method capable of reducing time for detecting a component position includes: acquiring a model image of a target; dividing the acquired model image into reference images each including a specific pattern; acquiring a detected image of the target; matching origins of the reference images respectively with predetermined positions on the detected image; comparing a region within the detected image with corresponding one of the reference images while moving the origin of the reference image in X and Y directions from the corresponding predetermined position and sequentially acquiring correlation values; integrating the correlation values at respective comparison positions within an integrated XY plane to generate integrated correlation values; and recognizing a value of integrated XY coordinates at a peak of the integrated correlation values as deviation of the specific patterns in the reference images from the predetermined positions of the target within the XY plane. | 12-20-2012 |
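The integration step of this entry, summing per-reference correlation values at each comparison offset and reading the pattern deviation off the integrated peak, can be sketched as follows (the toy correlation maps are assumed data):

```python
def integrated_peak(correlation_maps):
    """Sum correlation values at each (dx, dy) comparison position
    across all reference images, then return the offset at the peak
    of the integrated values, i.e. the recognized deviation."""
    integrated = {}
    for cmap in correlation_maps:
        for offset, value in cmap.items():
            integrated[offset] = integrated.get(offset, 0.0) + value
    return max(integrated, key=integrated.get)

# Two noisy per-reference maps that individually peak in different
# places but agree at offset (1, 0) once integrated.
maps = [{(0, 0): 0.6, (1, 0): 0.5},
        {(0, 0): 0.1, (1, 0): 0.7}]
```

Integrating before taking the peak is what lets several small reference images vote jointly, reducing detection time compared with matching one large model image.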
20120321136 | OPENING MANAGEMENT THROUGH GAIT DETECTION - Embodiments of the present invention provide a method, system and computer program product for managing an opening through gait recognition. In an embodiment of the invention, a method for managing an opening through gait recognition is provided. The method includes capturing imagery, for example through the use of a Web cam, of a moving object as the moving object approaches an automated door. The method additionally includes determining from the captured imagery the presence or absence of a gait of the moving object. Finally, the method includes managing an automated opening of the door according to the determined presence or absence of a gait of the moving object. | 12-20-2012 |
20120321137 | METHOD FOR BUILDING AND EXTRACTING ENTITY NETWORKS FROM VIDEO - A computer implemented method for deriving an attribute entity network (AEN) from video data is disclosed, comprising the steps of: extracting at least two entities from the video data; tracking the trajectories of the at least two entities to form at least two tracks; deriving at least one association between the at least two entities by detecting at least one event involving the at least two entities, said detecting of at least one event being based on detecting at least one spatio-temporal motion correlation between the at least two entities; and constructing the AEN by creating a graph wherein the at least two entities form at least two nodes and the at least one association forms a link between the at least two nodes. | 12-20-2012 |
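Construction of the attribute entity network, nodes for entities and links for event-derived associations, can be sketched with a plain adjacency structure; the entity names and event labels below are assumptions:

```python
def build_entity_network(associations):
    """Build an undirected graph: each entity becomes a node and each
    detected association (entity_a, entity_b, event_label) a link."""
    graph = {}
    for a, b, label in associations:
        graph.setdefault(a, {})[b] = label
        graph.setdefault(b, {})[a] = label
    return graph

# Hypothetical associations derived from spatio-temporal track correlation.
aen = build_entity_network([("person_1", "vehicle_3", "enters"),
                            ("person_1", "person_2", "meets")])
```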
20120321138 | SUSPICIOUS BEHAVIOR DETECTION SYSTEM AND METHOD - There is provided a suspicious behavior detection system capable of specifying and identifying a suspicious person exhibiting abnormal behavior. A suspicious behavior detection system is a system to detect suspicious behavior of a monitored subject, by using images captured by a stereo camera. The suspicious behavior detection system has an ambulatory path acquisition unit which acquires ambulatory path information of the monitored subject, and a behavioral identification unit which identifies behavior of the monitored subject based on the ambulatory path information, and automatically determines suspicious behavior of the monitored subject. | 12-20-2012 |
20120328150 | METHODS FOR ASSISTING WITH OBJECT RECOGNITION IN IMAGE SEQUENCES AND DEVICES THEREOF - A method, non-transitory computer readable medium, and apparatus that assist with object recognition includes determining when at least one eye of an observer fixates on a location in one or more of a sequence of fixation tracking images. The determined fixation location in the one or more of the sequence of fixation tracking images is correlated to a corresponding one of one or more sequence of field of view images. At least the determined fixation location in each of the correlated sequence of field of view images is classified based on at least one of a classification input or a measurement and comparison of one or more features of the determined fixation location in each of the correlated sequence of field of view images against one or more stored measurement feature values. The determined classification of the fixation location in each of the correlated sequence of field of view images is output. | 12-27-2012 |
20120328151 | High Accuracy Beam Placement for Local Area Navigation - An improved method of high accuracy beam placement for local area navigation in the field of semiconductor chip manufacturing. Preferred embodiments of the present invention can also be used to rapidly navigate to one single bit cell in a memory array or similar structure, for example to characterize or correct a defect in that individual bit cell. High-resolution scanning is used to scan only a “strip” of cells on one edge of the array (along either the X axis or the Y axis) to locate the row containing the desired cell, followed by a similar high-speed scan along the located row (in the remaining direction) until the desired cell location is reached. This allows pattern-recognition tools to be used to automatically “count” the cells necessary to navigate to the desired cell, without the large expenditure of time required to image the entire array. | 12-27-2012 |
20120328152 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND STORAGE MEDIUM - A high-resolution image obtained by an image sensing operation by an image sensing unit, and a low-resolution image having a resolution lower than the high-resolution image are acquired. An object which satisfies a predetermined condition is detected from the low-resolution image, and an object recognition processing for a region corresponding to the object in the high-resolution image is performed, thus correcting geometric distortions of the region. | 12-27-2012 |
20120328153 | DEVICE AND METHOD FOR MONITORING VIDEO OBJECTS - The invention relates to a device for monitoring video objects. | 12-27-2012 |
20120328154 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing apparatus for calculating an evaluation value representing quality of a moving image. A second acquisition unit is configured to acquire position information representing a position of a chart image in each frame image of the input moving image. A cutout unit is configured to cut out, from each frame image of the input moving image, a partial image including the chart image based on the position information and generate a converted moving image having the cutout partial image as a frame image. A conversion unit is configured to frequency-convert the converted moving image at least in a temporal direction. A calculation unit is configured to calculate the evaluation value based on a frequency component value obtained by the conversion unit. | 12-27-2012 |
20120328155 | APPARATUS, METHOD, AND PROGRAM FOR DETECTING OBJECT FROM IMAGE - An image processing apparatus includes a detection unit configured to scan an input image and each of images at different resolutions, which are generated from the input image, by a predetermined-sized window to detect an object in the image, a storage unit configured to store a detection result of the detection unit, and a control unit configured to, if there is no free space in the storage unit to store a new detection result of the detection unit, store the new detection result instead of a detection result of an image at higher resolution than resolution of an image from which the new detection result is acquired. | 12-27-2012 |
20120328156 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING COMPUTER PROGRAM - An image processing apparatus according to the present invention includes a first identifying unit configured to identify the position of at least a part of a layer boundary based on a tomography image of a target to be captured, a setting unit configured to set a search range for a portion whose position has not been identified by the first identifying unit based on a depth directional position of the layer boundary whose position has been identified by the first identifying unit, and a second identifying unit configured to identify the position of a layer boundary portion whose position has not been identified based on a luminance value in the search range having been set. | 12-27-2012 |
20120328157 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An apparatus stores a luminance signal and a color signal extracted from a tracking area in image data and determines a correlation with the stored luminance signal, thereby extracting an area where a specified object exists in another image data to update the tracking area using the position information of the extracted area. If a sufficient correlation cannot be obtained from the luminance signal, the apparatus makes a comparison with the stored color signal to determine whether the specified object is lost. The apparatus updates the luminance signal every time the tracking area is updated, but does not update the color signal even if the tracking area is updated or updates the color signal at a period longer than a period at which the luminance signal is updated. | 12-27-2012 |
20120328158 | AUGMENTED REALITY METHOD AND DEVICES USING A REAL TIME AUTOMATIC TRACKING OF MARKER-FREE TEXTURED PLANAR GEOMETRICAL OBJECTS IN A VIDEO STREAM - Methods and devices for the real-time tracking of an object in a video stream for an augmented-reality application are disclosed herein. | 12-27-2012 |
20130004016 | USER IDENTIFICATION BY GESTURE RECOGNITION - A user can be identified and/or authenticated to an electronic device by analyzing aspects of a motion or gesture made by that user. At least one imaging element of the device can capture image information including the motion or gesture, and can determine time-dependent information about that motion or gesture in two or three dimensions of space. The time-dependent information can be used to identify varying speeds, motions, and other such aspects that are indicative of a particular user. The way in which a gesture or motion is made, in addition to the motion or gesture itself, can be used to authenticate an individual user. While other persons can learn the basic gesture or motion, the way in which each person makes that gesture or motion will generally be at least slightly different, which can be used to prevent unauthorized access to sensitive information, protected functionality, or other such content. | 01-03-2013 |
20130004017 | Context-Based Target Recognition - A method and apparatus for identifying a target object. A group of objects is identified in an image. The group of objects provides a context for identifying the target object in the image. The target object is searched for in the image using the context provided by the group of objects. | 01-03-2013 |
20130004018 | METHOD AND APPARATUS FOR DETECTING OBJECT USING VOLUMETRIC FEATURE VECTOR AND 3D HAAR-LIKE FILTERS - In a method of detecting a specific object using a multi-dimensional image including the specific object, with respect to each window position of the image subjected to window sliding by applying a previously generated 3D cube filter, data of the area corresponding to the window position is normalized in a previously defined specific form. After the corresponding part of the normalized data is assigned to each cell in the 3D cube filter, the volume of each cell is calculated, thereby expressing the volumes of the cells as one volumetric feature vector. The volumetric feature vector is applied to a classifier so as to decide whether or not the data of the area corresponding to the window position corresponds to the specific object. | 01-03-2013 |
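The volumetric feature extraction, assigning normalized data to cells of a 3D cube filter and taking per-cell volumes as one feature vector, can be sketched as follows; the cube layout and toy data are assumptions:

```python
def volumetric_feature_vector(volume, cells_per_axis):
    """Split an n*n*n normalized data cube into cells_per_axis**3 cells
    and return the per-cell sums ("volumes") as one flat volumetric
    feature vector, ready to hand to a classifier."""
    n, c = len(volume), cells_per_axis
    s = n // c  # cell side length
    feats = []
    for i in range(c):
        for j in range(c):
            for k in range(c):
                feats.append(sum(volume[x][y][z]
                                 for x in range(i * s, (i + 1) * s)
                                 for y in range(j * s, (j + 1) * s)
                                 for z in range(k * s, (k + 1) * s)))
    return feats
```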
20130004019 | IMAGE RECOGNITION DEVICE, IMAGE RECOGNITION METHOD AND RECORDING MEDIUM FOR IMAGE RECOGNITION PROGRAM - An image recognition device includes a processor, and a memory coupled to the processor, wherein the processor executes a process includes detecting a plurality of contour points arranged on a contour line of a given region in an image, detecting a first contour point and a second contour point, in the contour points, the first and second contour points corresponding to respective ends of a first shortcut line formed by connecting portions of the contour line across an external area of the given region, and determining, based on a length of the first shortcut line or a length of a first route that extends along the contour line between the first and second contour points, whether or not a portion surrounded by the first shortcut line and the first route, not contained in the given region, is a depressed portion. | 01-03-2013 |
20130004020 | TRACKING APPARATUS, TRACKING METHOD, AND STORAGE MEDIUM TO STORE TRACKING PROGRAM - A tracking apparatus includes a face detection unit, a face corresponding region detection unit, a face tracking unit, a peripheral part tracking unit, a tracking switch unit. The face tracking unit tracks the face in accordance with a result detected by the face detection unit or a result detected by the face corresponding region detection unit. The peripheral part tracking unit tracks, as a part other than the face, a part of the subject having a preset positional relationship with the face. The tracking switch unit switches to the tracking of a part other than the face by the peripheral part tracking unit when the face is not tracked by the face tracking unit. | 01-03-2013 |
20130004021 | VEHICLE PERIMETER MONITORING DEVICE - An imaging means mounted on a vehicle performs imaging resulting in grayscale images having brightness values corresponding to object temperature, and objects around the vehicle are detected from said images. On the basis of said grayscale images, display images to be displayed on a display device mounted on the vehicle are generated and displayed on the display device. The display images are generated by lowering the brightness of areas not corresponding to the objects detected in the grayscale images. The display device is positioned in the vehicle width direction at no more than a prescribed distance away from an imaginary line passing through the center of rotation of the vehicle steering wheel and extending in the longitudinal direction of the vehicle. Accordingly, because display images are generated in which only the objects are spotlighted, the driver can quickly comprehend the objects present when using a display device. | 01-03-2013 |
20130004022 | AUGMENTED REALITY METHOD AND DEVICES USING A REAL TIME AUTOMATIC TRACKING OF MARKER-FREE TEXTURED PLANAR GEOMETRICAL OBJECTS IN A VIDEO STREAM - Methods and devices for the real-time tracking of one or more objects of a real scene in a video stream for an augmented-reality application are disclosed herein. | 01-03-2013 |
20130004023 | IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM - An image processing system includes: an object detecting unit that detects a moving body object from image data of an image of a predetermined area; an object-occurrence-position detecting unit that detects an occurrence position of the object detected by the object detecting unit; and a valid-object determining unit that determines that the object detected by the object detecting unit is a valid object when the object is present in a mask area set as a non-detection target in the image of the predetermined area and the occurrence position of the object in the mask area detected by the object-occurrence-position detecting unit is outside the mask area. | 01-03-2013 |
20130011009 | RECOGNITION SYSTEM BASED ON AUGMENTED REALITY AND REMOTE COMPUTING AND RELATED METHOD THEREOF - A recognition system based on augmented reality and remote computing includes a terminal touch screen, a terminal processing unit, a remote database, and a remote computing unit. The terminal touch screen fetches a recognition characteristic of an object to be recognized. The terminal processing unit transmits the recognition characteristic and an icon of an application program required to be executed to the remote computing unit. The remote database stores data corresponding to the recognition characteristic of the object to be recognized. The remote computing unit receives the recognition characteristic and the icon, fetches name and address information from the remote database according to the recognition characteristic and the icon, and transmits the name and address information to the terminal processing unit to make a terminal module enter the application program. | 01-10-2013 |
20130011010 | THREE-DIMENSIONAL IMAGE PROCESSING DEVICE AND THREE DIMENSIONAL IMAGE PROCESSING METHOD - A 3D image processing device comprising: an object detecting unit, for detecting a first location for an object in a first image and a second location for the object in a second image; a disparity determining unit, coupled to the object detecting unit, for computing a disparity result for the object between the first image and the second image according to the first location and the second location; a displacement computing unit, coupled to the disparity determining unit, for computing a first displacement distance of the first image and a second displacement distance of the second image according to the disparity result; and a displacement unit, coupled to the displacement computing unit, for moving the first image and the second image to generate a first displaced image and a second displaced image, according to the first displacement distance and the second displacement distance. | 01-10-2013 |
20130011011 | Insect Image Recognition and Instant Active Response - A device for detecting insects on substrates such as lettuce and other leaves. The device has a microscope lens which magnifies a portion of the leaf and sends an image of the leaf portion to an image recognition system. If the image recognition system detects the presence of an insect, further steps are taken to remove the insect. | 01-10-2013 |
20130011012 | OBJECT DETECTION DEVICE, METHOD AND PROGRAM - When scores of classifiers for discriminating an image to be discriminated are sequentially obtained in a predetermined order, positions of saturated pixels in the image to be discriminated are detected. For each classifier which outputs the score based on pixel values at the detected position, the score is obtained by obtaining a value determined based on a difference between a discontinuing threshold set in advance correspondingly to the identified classifier and a discontinuing threshold set in advance correspondingly to a classifier immediately before the identified classifier. For each of the other classifiers, the score is obtained by obtaining an output obtained by applying the classifier to the image to be discriminated. A sum of the scores obtained so far is compared with the discontinuing threshold. If the sum exceeds the discontinuing threshold, the score of the next classifier is obtained. | 01-10-2013 |
20130011013 | MEASUREMENT APPARATUS, MEASUREMENT METHOD, AND FEATURE IDENTIFICATION APPARATUS - It is an object to measure a position of a feature around a road. An image memory unit stores images in which neighborhood of the road is captured. Further, a three-dimensional point cloud model memory unit | 01-10-2013 |
20130011014 | SURVEILLANCE SYSTEM AND METHOD - A method of performing a surveillance of a plurality of surveillance zones operable using a computerised system communicably interfaced with a plurality of input devices, each of the plurality of surveillance zones being monitored by at least one of the plurality of input devices, the method including the steps of: (i) the computerised system receiving input data captured by each of the plurality of input devices, the received input data representing characteristics of the surveillance zones under surveillance by the respective input devices; (ii) the computerised system comparing at least one characteristic of each surveillance zone against at least one surveillance ranking parameter; (iii) the computerised system assigning priority rating values to the results of each comparison between the at least one characteristic and the at least one surveillance ranking parameter. | 01-10-2013 |
20130011015 | BIOMETRIC AUTHENTICATION DEVICE, BIOMETRIC AUTHENTICATION PROGRAM, AND BIOMETRIC AUTHENTICATION METHOD - A biometric authentication device that authenticates a user using biological features of the user, the biometric authentication device includes: an illumination unit configured to illuminate a target which represents the biological features; an image sensor configured to obtain a first captured image by capturing the target illuminated by the illumination unit, and obtain a second captured image by capturing the target not illuminated by the illumination unit; an acquisition unit configured to acquire from a storage unit a mask which has a target area approximating the shape of the target in the first and second captured images obtained by the image sensor; and a detection unit configured to detect light other than illumination light illuminated by the illumination unit based on the mask acquired by the acquisition unit and at least one of the first and second images. | 01-10-2013 |
20130011016 | DETECTION OF OBJECTS IN DIGITAL IMAGES - A method to detect objects in a digital image. At least one image representing at least one frame of a video sequence is received. A given color channel of the image is extracted. At least one blob that stands out from a background of the given color channel is identified. One or more features are extracted from the blob. The one or more features are provided to a plurality of pre-learned object models each including a set of pre-defined features associated with a pre-defined blob type. The one or more features are compared to the set of pre-defined features. The blob is determined to be of a type that substantially matches a pre-defined blob type associated with one of the pre-learned object models. At least a location of an object is visually indicated within the image that corresponds to the blob. | 01-10-2013 |
20130016875 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR GENERATION OF TRACEABILITY MATRICES FROM VIDEO MEDIA - A method, apparatus and computer program product for generating a traceability matrix document regarding a system under development, provides for: providing input information regarding the system under development to a processing framework, processing the input information by the processing framework, and automatically creating a traceability matrix document regarding the system under development from the input information. | 01-17-2013 |
20130016876 | SCALE INDEPENDENT TRACKING PATTERN - In one aspect, a computer implemented method of motion capture, the method includes tracking the motion of a dynamic object bearing a pattern configured such that a first portion of the pattern is tracked at a first resolution and a second portion of the pattern is tracked at a second resolution. The method further includes causing data representing the motion to be stored to a computer readable medium. | 01-17-2013 |
20130016877 | MULTI-VIEW OBJECT DETECTION USING APPEARANCE MODEL TRANSFER FROM SIMILAR SCENES - View-specific object detectors are learned as a function of scene geometry and object motion patterns. Motion directions are determined for object images extracted from a training dataset and collected from different camera scene viewpoints. The object images are categorized into clusters as a function of similarities of their determined motion directions, the object images in each cluster are acquired from the same camera scene viewpoint. Zenith angles are estimated for object image poses in the clusters relative to a position of a horizon in the cluster camera scene viewpoint, and azimuth angles of the poses as a function of a relation of the determined motion directions of the clustered images to the cluster camera scene viewpoint. Detectors are thus built for recognizing objects in input video, one for each of the clusters, and associated with the estimated zenith angles and azimuth angles of the poses of the respective clusters. | 01-17-2013 |
20130016878 | Image Processing Device and Image Processing Method Thereof - An image processing device and an image processing method thereof. The image processing device comprises a memory module, an object detection module and a processing module. The memory module is provided for storing a plurality of images captured by a camera module, and the image comprises at least one object. The object detection module retrieves one of the images as a reference image and compares the remaining images with the reference image to locate a region where the object with corresponding contour and color is situated. If the object detection module compares and determines that the object with the corresponding contour and color is situated in different regions of the reference image and the remaining images, the processing module will remove the corresponding object in the reference image or superimpose each corresponding object in the reference image simultaneously. | 01-17-2013 |
20130016879 | TRACKING METHOD - The present invention relates to a method for tracking at least one object in a sequence of frames, each frame comprising a pixel array, wherein a depth value is associated to each pixel. The method comprises grouping at least some of said pixels of each frame into several regions, grouping said regions into clusters (B | 01-17-2013 |
20130022232 | CUSTOMIZED AUDIO CONTENT RELATING TO AN OBJECT OF INTEREST - A device/system and method for creating customized audio segments related to an object of interest are disclosed. The device and/or system can create an additional level of interaction with the object of interest by creating customized audio segments based on the identity of the object of interest and/or the user's interaction with the object of interest. Thus, the mobile device can create an interactive environment for a user interacting with an otherwise inanimate object. | 01-24-2013 |
20130022233 | IDENTIFYING TRUE FEATURE MATCHES FOR VISION BASED NAVIGATION - An example embodiment includes a method for identifying true feature matches from a plurality of candidate feature matches for vision based navigation. A weight for each of the plurality of candidate feature matches can be set. The method also includes iteratively performing for N iterations: calculating a fundamental matrix for the plurality of candidate feature matches using a weighted estimation that accounts for the weight of each of the plurality of candidate feature matches; calculating a distance from the fundamental matrix for each of the plurality of candidate feature matches; and updating the weight for each of the plurality of candidate feature matches as a function of the distance for the respective candidate feature match. After N iterations, candidate feature matches having a distance less than a distance threshold can be selected as true feature matches. | 01-24-2013 |
20130022234 | OBJECT TRACKING - Methods, devices, and systems for object tracking are described herein. One or more method embodiments include receiving an initial set of track points associated with a trajectory of an object, compressing the initial set of track points into a plurality of track segments, each track segment having a start track point and an end track point, and storing the plurality of track segments to represent the trajectory of the object. | 01-24-2013 |
20130022235 | INTERACTIVE SECRET SHARING - Interactive secret sharing includes receiving video data from a source and interpreting the video data to track an observed path of a device. In addition, position information is received from the device, and the position information is interpreted to track a self-reported path of the device. If the observed path is within a threshold tolerance of the self-reported path, access is provided to a restricted resource. | 01-24-2013 |
20130022236 | Apparatus Capable of Detecting Location of Object Contained in Image Data and Detection Method Thereof - An apparatus capable of detecting location of object contained in image data and its detecting method are disclosed. The apparatus comprises an image capturing module, a weight assignment module, and a processing module. The image capturing module is for capturing an image. The weight assignment module performs the pixel weight/probability assignment according to the priori information and the image, and figures out the initial gravity center of the object according to the object location initialization. The processing module performs the statistical analysis according to the result of the pixel weight/probability assignment and the initial gravity center of the object so as to obtain the analysis result and update the object location. The processing module determines whether or not the analysis result meets the preset value, if it does, the processing module outputs an estimated result; if it doesn't, the processing module repeats the foregoing processes. | 01-24-2013 |
20130022237 | METHOD FOR STAND OFF INSPECTION OF TARGET IN MONITORED SPACE - This invention addresses remote inspection of target in monitored space. A three dimensional (3D) microwave image of the space is obtained using at least two emitters. The data undergoes coherent processing to obtain maximum intensity of the objects in the area. This image is combined with a 3D video image obtained using two or more video cameras synchronized with the microwave emitters. The images are converted into digital format and transferred into one coordinate system. The distance l is determined between the microwave and the video image. If l | 01-24-2013 |
20130022238 | Systems and methods for tracking and authenticating goods - Systems and methods for identifying, tracking, tracing and determining the authenticity of a good include an imaging system, a database, and an authentication center. The imaging system is configured to capture an image of a unique signature associated with a good. The unique signature can be, for example, a random structure or pattern unique to the particular good. The imaging system is configured to process the image to identify at least one metric that distinguishes the unique signature from unique signatures of other goods. The database is configured to receive information related to the good and its unique signature from the imaging system, and to store the information therein. The authentication center is configured to analyze the field image with respect to the information stored in the database to determine whether the unique signature in the field image is a match to the captured image stored in the database. | 01-24-2013 |
20130022239 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - An image processing apparatus is capable of appropriately extracting a frame of an output target from a moving image. The image processing apparatus includes an analysis unit configured to analyze a plurality of analysis regions in each of a plurality of frames included in the moving image, an extraction unit configured to extract the frame of the output target from among the plurality of frames by comparing analysis results of the plurality of analysis regions in each of the plurality of frames analyzed by the analysis unit for analysis regions corresponding to each other between the plurality of frames, and an output unit configured to output the frame of the output target extracted by the extraction unit. | 01-24-2013 |
20130022240 | Remote Automated Planning and Tracking of Recorded Data - The invention is a system for the remote automated planning and tracking of recorded data. The inventive system preferably is a software product that is used in conjunction with a foreign object tracking system and a visual inspection tracking system to provide an automated eddy current and visual inspection planning and tracking approach. The system provides a link between, for example, visual inspection of nuclear power plant steam generator secondary sides with eddy current inspection testing of the steam generator tubes from the primary side. This allows for possible loose part indications from the eddy current testing to be available to visual inspectors through the foreign object tracking system for subsequent visual inspection and possible retrieval. | 01-24-2013 |
20130022241 | ENHANCING GMAPD LADAR IMAGES USING 3-D WALLIS STATISTICAL DIFFERENCING - A method for processing XYZ point cloud of a scene acquired by a GmAPD LADAR includes: performing on a computing device a three-dimensional statistical differencing on the XYZ point cloud obtained from the GmAPD LADAR to produce a SD point cloud; and displaying an image of the SD point cloud. | 01-24-2013 |
20130022242 | IDENTIFYING ANOMALOUS OBJECT TYPES DURING CLASSIFICATION - Techniques are disclosed for identifying anomaly object types during classification of foreground objects extracted from image data. A self-organizing map and adaptive resonance theory (SOM-ART) network is used to discover object type clusters and classify objects depicted in the image data based on pixel-level micro-features that are extracted from the image data. Importantly, the discovery of the object type clusters is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. The SOM-ART network is adaptive and able to learn while discovering the object type clusters and classifying objects and identifying anomaly object types. | 01-24-2013 |
20130022243 | METHODS AND APPARATUSES FOR FACE DETECTION - Methods and apparatuses are provided for face detection. A method may include selecting a face detection parameter subset from a plurality of face detection parameter subsets. Each face detection parameter subset may include a subset of face posture models from a set of face posture models and a subset of image patch scales from a set of image patch scales. The method may further include using the selected face detection parameter subset for performing face detection in an image. Corresponding apparatuses are also provided. | 01-24-2013 |
20130022244 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus comprises: an acquisition unit configured to acquire a plurality of image data items; an object detecting unit configured to detect a plurality of objects from the acquired image data items; a characteristic value calculation unit configured to calculate a characteristic value of each of the objects; a cluster information generating unit configured to, when the characteristic value of at least one of the objects meets a predetermined condition, generate first cluster information indicating a first cluster to which the at least one object belongs; a selecting unit configured to select an object for which a thumbnail is to be generated from among the at least one object indicated in the first cluster information; and a thumbnail generating unit configured to generate the thumbnail of the object using one of the image data items indicating the selected object. | 01-24-2013 |
20130028467 | SEARCHING RECORDED VIDEO - Embodiments of the disclosure provide for systems and methods for creating metadata associated with a video data. The metadata can include data about objects viewed within a video scene and/or events that occur within the video scene. Some embodiments allow users to search for specific objects and/or events by searching the recorded metadata. In some embodiments, metadata is created by receiving a video frame and developing a background model for the video frame. Foreground object(s) can then be identified in the video frame using the background model. Once these objects are identified they can be classified and/or an event associated with the foreground object may be detected. The event and the classification of the foreground object can then be recorded as metadata. | 01-31-2013 |
20130028468 | Example-Based Object Retrieval for Video Surveillance - Methods and apparatus are provided for example-based object retrieval that can retrieve objects from video images in real-time. An object of interest is identified in a sequence of images by obtaining an identification from a user of an example object having at least one attribute of interest; generating a query object based on the identified example object, wherein the query object has a substantially similar viewpoint as objects in the sequence of images and wherein the query object comprises a plurality of attributes that are substantially similar as the example object; and processing the sequence of images to identify the object of interest based on a similarity metric to the query object. | 01-31-2013 |
20130028469 | METHOD AND APPARATUS FOR ESTIMATING THREE-DIMENSIONAL POSITION AND ORIENTATION THROUGH SENSOR FUSION - An apparatus and method of estimating a three-dimensional (3D) position and orientation based on a sensor fusion process. The method of estimating the 3D position and orientation may include determining a position of a marker in a two-dimensional (2D) image, determining a depth of a position in a depth image corresponding to the position of the marker in the 2D image to be a depth of the marker, estimating a 3D position of the marker calculated based on the depth of the marker as a marker-based position of a remote apparatus, estimating an inertia-based position and an inertia-based orientation by receiving inertial information associated with the remote apparatus, estimating a fused position based on a weighted sum of the marker-based position and the inertia-based position, and outputting the fused position and the inertia-based orientation. | 01-31-2013 |
20130028470 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER READABLE RECORDING DEVICE - An image processing apparatus includes a corresponding region connecting unit that connects regions that depict the same target between a series of images captured in time series, thereby setting at least one connected region; a connected region feature data calculation unit that calculates feature data of the connected region; a digest index value calculation unit that calculates a digest index value corresponding to a degree at which the target depicted in the series of images is aggregated in each image of the series of images, based on the feature data; and a digest image detector that detects a digest image based on the digest index value. | 01-31-2013 |
20130028471 | IMAGE PROCESSING APPARATUS FOR CONVERTING IMAGE IN CHARACTERISTIC REGION OF ORIGINAL IMAGE INTO IMAGE OF BRUSHSTROKE PATTERNS - The importance detection unit 52 detects importance of each pixel composing the original image thus acquired. In addition, the importance map generation unit 52 generates an importance map indicating distribution of the importance detected for each pixel. The characteristic region detection unit 61 detects a characteristic region of the original image, from the original image thus acquired. The determination unit 62 determines a brushstroke pattern that should be applied to the characteristic region thus detected, from at least two types of brushstroke patterns stored in a storage unit. The brushstroke pattern conversion unit 63 converts an image in the characteristic region into an image, to which the brushstroke pattern is applied, based on the brushstroke pattern thus determined. The adjustment unit 64 adjusts color of the image of the brushstroke pattern being the image in the characteristic region, based on the importance map thus generated. | 01-31-2013 |
20130028472 | MULTI-HYPOTHESIS PROJECTION-BASED SHIFT ESTIMATION - A method for determining a shift between two images, determining a first correlation in a first direction, the first correlation being derived from a first image projection characteristics and a second image projection characteristics, and a second correlation in a second direction, the second correlation being derived from the first image projection characteristics and the second image projection characteristics. The method determines a set of hypotheses from a first plurality of local maxima of the first correlation and a second plurality of local maxima of the second correlation. The method then calculates a two-dimensional correlation score between the first image and the second image based on a shift indicated in at least one of the set of hypotheses, and selecting one of the set of hypotheses as the shift between the first image and the second image based on the calculated two-dimensional correlation score. | 01-31-2013 |
20130028473 | SYSTEM AND METHOD FOR PERIODIC LANE MARKER IDENTIFICATION AND TRACKING - A system and method for determining the presence and period of dashed line lane markers in a roadway. The system includes an imager configured to capture a plurality of high dynamic range images exterior of the vehicle and a processor, in communication with the at least one imager such that the processor is configured to process at least one high dynamic range image. The period of the dashed lane markers in the image is calculated for detecting the presence of the dashed lane marker and for tracking the vehicle within the markers. The processor communicates an output for use by the vehicle for use in lane departure warning (LDW) and/or other driver assist features. | 01-31-2013 |
20130028474 | METHOD AND SYSTEM FOR DYNAMIC FEATURE DETECTION - Disclosed are methods and systems for dynamic feature detection of physical features of objects in the field of view of a sensor. Dynamic feature detection substantially reduces the effects of accidental alignment of physical features with the pixel grid of a digital image by using the relative motion of objects or material in and/or through the field of view to capture and process a plurality of images that correspond to a plurality of alignments. Estimates of the position, weight, and other attributes of a feature are based on an analysis of the appearance of the feature as it moves in the field of view and appears at a plurality of pixel grid alignments. The resulting reliability and accuracy is superior to prior art static feature detection systems and methods. | 01-31-2013 |
20130028475 | LIGHT POSITIONING SYSTEM USING DIGITAL PULSE RECOGNITION - In one aspect, the present disclosure relates to a method of detecting information transmitted by a light source in a complementary metal-oxide-semiconductor (CMOS) image sensor by detecting a frequency of light pulses produced by the light source. In some embodiments, the method includes capturing on the CMOS image sensor with a rolling shutter an image in which different portions of the CMOS image sensor are exposed at different points in time; detecting visible distortions that include alternating stripes in the image; measuring a width of the alternating stripes present in the image; and selecting a symbol based on the width of the alternating stripes present in the image to recover information encoded in the frequency of light pulses produced by the light source captured in the image. | 01-31-2013 |
20130028476 | POSE TRACKING PIPELINE - A method of tracking a target includes receiving from a source a depth image of a scene including the human subject. The depth image includes a depth for each of a plurality of pixels. The method further includes identifying pixels of the depth image that belong to the human subject and deriving from the identified pixels of the depth image one or more machine readable data structures representing the human subject as a body model including a plurality of shapes. | 01-31-2013 |
20130028477 | IMAGE PROCESSING METHOD AND THERMAL IMAGING CAMERA - For a thermal imaging camera ( | 01-31-2013 |
20130028478 | OBJECT INSPECTION WITH REFERENCED VOLUMETRIC ANALYSIS SENSOR - A positioning method and system for non-destructive inspection of an object include providing at least one volumetric analysis sensor having sensor reference targets; providing a sensor model of a pattern of at least some of the sensor reference targets; providing object reference targets on at least one of the object and an environment of the object; providing an object model of a pattern of at least some of the object reference targets; providing a photogrammetric system including at least one camera and capturing at least one image in a field of view, at least a portion of the sensor reference and the object reference targets being apparent on the image; determining a sensor spatial relationship and an object spatial relationship; determining a sensor-to-object spatial relationship of the at least one volumetric analysis sensor with respect to the object; repeating the steps and tracking a displacement of the volumetric analysis sensor and the object. | 01-31-2013 |
20130028479 | LANE RECOGNITION DEVICE - The lane mark recognition device is equipped with a lane mark detecting unit which executes a lane mark detection process in each predetermined control cycle, and adds a detection presence/absence data to a ring buffer, a detection presence/absence data addition inhibiting unit which inhibits addition of the detection presence/absence data to the ring buffer when the vehicle is traveling in the intersection, and a lane mark position recognizing unit which recognizes a relative position of the vehicle and the lane mark, when the lane mark is detected in the situation where a lane mark detection rate calculated from the data of the ring buffer is higher than a reliability threshold value. | 01-31-2013 |
20130028480 | GENERIC SUBSTANCE INFORMATION RETRIEVAL USING MOBILE DEVICE - A data processing system configured for computer visualization of drugs for drug interaction information retrieval is disclosed. For each of multiple different substances, and using a camera of a mobile or other computing device, imagery of at least one external characteristic of a physical body of the substance is acquired. An identity of each of the multiple different substances is determined based upon the at least one external characteristic from the acquired imagery. Drug interaction data is retrieved for each of the multiple different substances using the determined identities. Drug interaction data for at least one of the multiple different substances is correlated with at least one other of the multiple different substances. At least one generic substance and/or cost information of at least one of the multiple different substances is identified. The correlated drug interaction data, the at least one generic substance, and/or the cost information are displayed. | 01-31-2013 |
20130034262 | Hands-Free Voice/Video Session Initiation Using Face Detection - A communication system includes a telecommunication appliance connected to a communication network, an image acquisition appliance coupled to the telecommunication appliance, software executing on the telecommunication appliance from a non-transitory physical medium, the software providing a first function enabling detecting that an image acquired by the camera comprises a human face in at least a portion of the image, and a second function initiating a communication event directed to a pre-programmed destination, the second function initiated by the first function detecting the human face image portion. | 02-07-2013 |
20130034263 | Adaptive Threshold for Object Detection - Systems and methods for developing and using adaptive threshold values for different input images for object detection are disclosed. In embodiments, detector response histogram-based systems and methods train models for predicting optimal threshold values for different images. In embodiments, when training the model, an optimal threshold value for an image is defined as the value that maximizes the reduction of false positive image patches while preserving as many true positive image patches as possible. Once trained, the model may be used to set different threshold values for different images by inputting a detector response histogram for the image patches of an image into the model to determine a threshold value for detection. | 02-07-2013 |
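The threshold-selection rule defined in the entry above — maximize false-positive reduction while preserving true positives — can be sketched with a simple exhaustive scan standing in for the trained histogram-based model; the scores, labels, and the 0.95 recall floor below are illustrative assumptions:

```python
def pick_threshold(scores, labels, min_tp_recall=0.95):
    """Choose the detection threshold that removes the most false-positive
    patches while keeping at least `min_tp_recall` of the true positives.
    (An exhaustive scan over labeled data, standing in for the trained
    predictor described in the abstract.)"""
    total_tp = sum(labels)
    best_t, best_fp_removed = min(scores), -1
    for t in sorted(set(scores)):
        kept_tp = sum(1 for s, y in zip(scores, labels) if y and s >= t)
        removed_fp = sum(1 for s, y in zip(scores, labels) if not y and s < t)
        if total_tp and kept_tp / total_tp >= min_tp_recall and removed_fp > best_fp_removed:
            best_t, best_fp_removed = t, removed_fp
    return best_t

# True patches score high and false patches low, so 0.7 keeps every true
# positive while discarding all three false positives.
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
print(pick_threshold(scores, labels))  # -> 0.7
```

At training time this optimal value would become the regression target; at test time the model predicts it from the detector-response histogram alone, without labels.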
20130034264 | LOCOMOTION ANALYSIS METHOD AND LOCOMOTION ANALYSIS APPARATUS - An exemplary locomotion analysis method includes steps of: acquiring a depth map including an image of a measured object, filtering out a background image of the depth map according to a depth threshold, finding the image of the measured object in the residual image of the depth map, calculating three-dimensional (3D) coordinates of the measured object according to the found image of the measured object, recording the 3D coordinates to reconstruct a 3D moving track of the measured object and performing a locomotion analysis of the measured object according to the 3D moving track. Moreover, an exemplary locomotion analysis apparatus applied to the above method also is provided. | 02-07-2013 |
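The pipeline in the locomotion-analysis entry above — depth-threshold background removal followed by 3D coordinate computation — can be sketched as below. The pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) and the centroid-based position estimate are illustrative assumptions, not details from the publication:

```python
def object_pixels(depth_map, depth_threshold):
    """Keep pixels closer than the depth threshold; everything at or
    beyond it is treated as background and filtered out."""
    return [(u, v, d)
            for v, row in enumerate(depth_map)
            for u, d in enumerate(row)
            if 0 < d < depth_threshold]

def centroid_3d(pixels, fx=1.0, fy=1.0, cx=0.0, cy=0.0):
    """Back-project the surviving pixels through an assumed pinhole model
    and average them to get one 3D coordinate for the measured object."""
    pts = [((u - cx) * d / fx, (v - cy) * d / fy, d) for u, v, d in pixels]
    n = len(pts)
    return tuple(sum(c) / n for c in zip(*pts))
```

Calling `centroid_3d(object_pixels(frame, threshold))` once per depth frame and appending the results yields the 3D moving track the abstract describes.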
20130034265 | APPARATUS AND METHOD FOR RECOGNIZING GESTURE, AND NON-TRANSITORY COMPUTER READABLE MEDIUM THEREOF - According to one embodiment, a time series information acquisition unit acquires time series information of a position or a size of a specific part of a user's body. An operation segment detection unit detects a movement direction of the specific part from the time series information, and detects a plurality of operation segments each segmented by two of a start point, a turning point and an end point of the movement direction. A recognition unit specifies a first operation segment to be recognized and a second operation segment following the first operation segment among the plurality of operation segments, and recognizes a motion of the specific part in the first operation segment by using a first feature extracted from the time series information of the first operation segment and a second feature extracted from the time series information of the second operation segment. | 02-07-2013 |
20130034266 | METHOD AND SYSTEM FOR DETECTION AND TRACKING EMPLOYING MULTI-VIEW MULTI-SPECTRAL IMAGING - Multi-view multi-spectral detection and tracking system comprising at least one imager, at least one of the at least one imager being a multi spectral imager, the at least one imager acquiring at least two detection sequences, and at least two tracking sequences, each sequence including at least one image, each acquired image being associated with respective image attributes, an object detector, coupled with the at least one imager, detecting objects of interest in the scene, according to the detection sequence of images and the respective image attributes, an object tracker coupled with the object detector, the object tracker tracking the objects of interest in the scene and determining dynamic spatial characteristics and dynamic spectral characteristics for each object of interest according to the tracking sequences of images and the respective image attributes and an object classifier, coupled with the object tracker, classifying the objects of interest according to the dynamic spatial characteristics and the dynamic spectral characteristics. | 02-07-2013 |
20130034267 | Data Capture and Identification System and Process - An identification method and process for objects from digitally captured images thereof that uses data characteristics to identify an object from a plurality of objects in a database. The data is broken down into parameters such as a Shape Comparison, Grayscale Comparison, Wavelet Comparison, and Color Cube Comparison with object data in one or more databases to identify the actual object of a digital image. | 02-07-2013 |
20130034268 | METHOD AND SYSTEM FOR USE IN PERFORMING SECURITY SCREENING - A method and apparatus for screening luggage are provided. X-ray images derived by scanning the luggage with X-rays are received and processed with an automated threat detection (ATD) engine. A determination is then made whether to subject respective ones of the X-ray images to further visual inspection by a human operator at least in part based on results obtained by the ATD engine. In certain cases, visual inspection by a human operator is by-passed and the ATD results are relied upon in order to mark luggage for further inspection or to mark luggage as clear. In another aspect, X-ray images derived by scanning the luggage using two or more X-ray scanning devices are pooled at a centralized location. ATD operations are applied to the X-ray images, which are then provided “on-demand” to a human operator for visual inspection. Results of the visual inspection are entered by the human operator and then conveyed to on-site screening technicians associated with respective X-ray scanning devices. | 02-07-2013 |
20130034269 | PROCESSING-TARGET IMAGE GENERATION DEVICE, PROCESSING-TARGET IMAGE GENERATION METHOD AND OPERATION SUPPORT SYSTEM - A processing-target image generation device generates a processing-target image which is an object to be subjected to an image conversion process for acquiring an output image based on an input image taken by an image-taking part. A coordinates correspondence part causes input coordinates, spatial coordinates, and projection coordinates to correspond to each other, the input coordinates being on an input image plane on which the input image is located, the spatial coordinates being on a space model on which the input image is projected, the projection coordinates being on a processing-target image plane on which the processing-target image is positioned and the image projected on the space model is re-projected. | 02-07-2013 |
20130039531 | METHOD AND APPARATUS FOR CONTROLLING MULTI-EXPERIENCE TRANSLATION OF MEDIA CONTENT - A method or apparatus for controlling a media device using gestures may include, for example, modifying media content to generate first updated media content according to a comparison of first information descriptive of a first environment of the source device to second information descriptive of a second environment of the recipient device, capturing images of a gesture, identifying a command from the gesture, and modifying the first updated media content to generate second updated media content according to the command. Other embodiments are disclosed. | 02-14-2013 |
20130039532 | PARKING LOT INFORMATION SYSTEM USING IMAGE TECHNOLOGY FOR IDENTIFYING AVAILABLE PARKING SPACES - A parking lot information system comprising a digital camera for obtaining an image of parking spaces in the parking lot where each parking space is marked with a visual identifier, a computer coupled to the digital camera for identifying available parking spaces by recognizing the identifiers marking the available parking spaces, and a display coupled to the computer for displaying information on the available parking spaces. | 02-14-2013 |
20130039533 | METHODS AND SYSTEMS FOR IMAGE DETECTION - A method is provided for image detection. The method includes measuring a temperature of an analog-to-digital (A/D) converter of an imaging system during an imaging scan of an object, and correcting a gain of the A/D converter based on the measured temperature of the A/D converter. | 02-14-2013 |
20130039534 | MOTION DETECTION METHOD FOR COMPLEX SCENES - A motion detection method for complex scenes has steps of receiving an image frame including a plurality of pixels, each of the pixels including first pixel information; performing a multi-background generation module based on the plurality of pixels; generating a plurality of background pixels based on the multi-background generation module; performing a moving object detection module; and deriving the background pixel based on the moving object detection module. | 02-14-2013 |
20130039535 | METHOD AND APPARATUS FOR REDUCING COMPLEXITY OF A COMPUTER VISION SYSTEM AND APPLYING RELATED COMPUTER VISION APPLICATIONS - A method for reducing complexity of a computer vision system and applying related computer vision applications includes: obtaining instruction information, wherein the instruction information is used for a computer vision application; obtaining image data from a camera module and defining at least one region of recognition corresponding to the image data by user gesture input on a touch-sensitive display; outputting a recognition result of the aforementioned at least one region of recognition; and searching at least one database according to the recognition result. Associated apparatus are also provided. For example, the apparatus includes an instruction information generator, a processing circuit, and a database management module, where the instruction information generator obtains the instruction information, and the processing circuit obtains the image data from the camera module, defines the aforementioned at least one region of recognition and outputs a recognition result of the at least one region of recognition. | 02-14-2013 |
20130039536 | Method and System for Optoelectronic Detection and Location of Objects - Disclosed are methods and systems for optoelectronic detection and location of moving objects. The disclosed methods and systems capture one-dimensional images of a field of view through which objects may be moving, make measurements in those images, select from among those measurements those that are likely to correspond to objects in the field of view, make decisions responsive to various characteristics of the objects, and produce signals that indicate those decisions. The disclosed methods and systems provide excellent object discrimination, electronic setting of a reference point, no latency, high repeatability, and other advantages that will be apparent to one of ordinary skill in the art. | 02-14-2013 |
20130039537 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - An image processing apparatus includes a character recognition unit configured to perform character recognition of a character region where characters exist in an image to generate character code, a detection unit configured to detect a region of the image where a feature change in the image is small, and a placement unit configured to place data obtained from the character code in the detected region. | 02-14-2013 |
20130039538 | BALL TRAJECTORY AND BOUNCE POSITION DETECTION - Disclosed in some examples is a method, system and medium relating to determining a ball trajectory and bounce position on a playing surface. An example method includes recording a first and a second sequence of ball images before and after a ball bounce on the playing surface; constructing a composite image of the trajectory of the ball from the first and second sequences; and determining a bounce position of the ball from the composite image. | 02-14-2013 |
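One way to realize the bounce determination in the entry above is to fit the pre-bounce and post-bounce image tracks with straight lines and intersect them; treating the track as locally linear near the bounce is a simplifying assumption (a parabolic fit would follow the same pattern):

```python
def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b to (x, y) samples."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def bounce_position(before, after):
    """Intersect the incoming and outgoing tracks extracted from the two
    image sequences; the crossing approximates where the ball met the
    playing surface."""
    a1, b1 = fit_line(before)
    a2, b2 = fit_line(after)
    x = (b2 - b1) / (a1 - a2)
    return x, a1 * x + b1

# Descending track y = -x + 10, ascending track y = x - 10:
# the two lines cross at (10, 0), the estimated bounce position.
print(bounce_position([(0, 10), (2, 8), (4, 6)],
                      [(12, 2), (14, 4), (16, 6)]))
```

In practice the ball centers fed to `bounce_position` would come from the composite trajectory image that the abstract constructs from the two recorded sequences.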
20130039539 | Portable Electronic Device - A portable electronic device includes a light source, which includes at least one luminescence diode and emits light during operation. The portable electronic device also includes a device for detecting an object in the beam path of the light emitted by the light source during operation. The device is designed to reduce the luminous flux of the light emitted by the light source during operation if the object is identified for a minimum duration within a minimum distance from the light source in the beam path. | 02-14-2013 |
20130039540 | INFORMATION PROVIDING DEVICE, INFORMATION PROVIDING PROCESSING PROGRAM, RECORDING MEDIUM HAVING INFORMATION PROVIDING PROCESSING PROGRAM RECORDED THEREON, AND INFORMATION PROVIDING METHOD - There are provided an information providing device, an information providing processing program, and an information providing method which can efficiently recommend information related to a shooting spot matching a user's preference. An information providing server is configured to determine a coincidence between user object information included in image data registered by a given user and representative object information of a location whose position can be specified, and to notify the user of location information associated with the representative object information based on a decision result of the coincidence. | 02-14-2013 |
20130039541 | ROBOT SYSTEM, ROBOT CONTROL DEVICE AND METHOD FOR CONTROLLING ROBOT - A robot system includes a robot having a movable section, an image capture unit provided on the movable section, an output unit that allows the image capture unit to capture a target object and a reference mark and outputs a captured image in which the reference mark is imaged as a locus image, an extraction unit that extracts the locus image from the captured image, an image acquisition unit that performs image transformation on the basis of the extracted locus image by using the point spread function so as to acquire an image after the transformation from the captured image, a computation unit that computes a position of the target object on the basis of the acquired image, and a control unit that controls the robot so as to move the movable section toward the target object in accordance with the computed position. | 02-14-2013 |
20130039542 | SITUATIONAL AWARENESS - Police officers are provided with client devices capable of capturing multimedia and streaming multimedia. The client devices can upload captured multimedia to a central server or share streams in real time. A network operation center can review the multimedia in real time or afterwards. Situational awareness is the provision of multimedia to a police officer as that officer approaches the location of an incident. The multimedia may be real time streams as the officer responds to a particular location, or the multimedia may be historical files as the officer familiarizes himself with incidents as he patrols a new neighborhood. Since the client device also reports real time reporting patterns, police officers can review high resolution and fidelity patrolling and incident reports to analyze the efficacy of patrol coverage. Since the client device may run supplementary applications, example applications are disclosed. | 02-14-2013 |
20130039543 | STOCK ANALYTIC MONITORING - In selected embodiments video footage is automatically analyzed to determine whether product stock levels at particular product locations are low. Video analytics may be employed to track product removal from shelves and determine approximate quantities of product remaining on each shelf based on product size and dedicated shelf area. In selected implementations an alarm notification is generated to alert store personnel that restocking is appropriate. Such an alarm notification optionally includes a still image of the area corresponding to the alarm together with data related to the product and projected quantities needed to restock the shelf. In some embodiments the system automatically identifies the store personnel who are currently located in areas near where the alarm event occurred and the notification is wirelessly distributed to their mobile devices. | 02-14-2013 |
20130044911 | PARTICLE FILTER - A particle filter is suitable for performing particle filtering on a frame to track a particular object in the frame. The particle filter includes a frame cache, an observation model generator, and a particle filter controller. The frame cache is connected to a system memory through a system bus, in which the system memory stores all image blocks of the frame; and the frame cache obtains the at least one image block of the frame from the system memory and stores the obtained image block. The observation model generator reads at least one pixel from the frame cache, and generates an observation model corresponding to the object and the read image block according to the read pixel. The particle filter controller obtains the observation model from the observation model generator, and determines and outputs an object tracking result of the object according to the observation model. | 02-21-2013 |
20130044912 | USE OF ASSOCIATION OF AN OBJECT DETECTED IN AN IMAGE TO OBTAIN INFORMATION TO DISPLAY TO A USER - Camera(s) capture a scene, including an object that is portable. An image of the scene is processed to segment therefrom a portion corresponding to the object, which is then identified from among a set of predetermined real world objects. An identifier of the object is used, with a set of associations between object identifiers and user identifiers, to obtain a user identifier that identifies a user at least partially from among a set of users. Specifically, the user identifier may identify a group of users that includes the user (“weak identification”) or alternatively the user identifier may identify the user uniquely (“strong identification”) in the set. The user identifier is used either alone or in combination with user input to obtain and store in memory, information to be output to the user. At least a portion of the obtained information is thereafter output, e.g. displayed by projection into the scene. | 02-21-2013 |
20130044913 | Plane Detection and Tracking for Structure from Motion - Plane detection and tracking algorithms are described that may take point trajectories as input and provide as output a set of inter-image homographies. The inter-image homographies may, for example, be used to generate estimates for 3D camera motion, camera intrinsic parameters, and plane normals using a plane-based self-calibration algorithm. A plane detection and tracking algorithm may obtain a set of point trajectories for a set of images (e.g., a video sequence, or a set of still photographs). A 2D plane may be detected from the trajectories, and trajectories that follow the 2D plane through the images may be identified. The identified trajectories may be used to compute a set of inter-image homographies for the images as output. | 02-21-2013 |
20130044914 | METHODS FOR DETECTING AND RECOGNIZING A MOVING OBJECT IN VIDEO AND DEVICES THEREOF - A method, non-transitory computer readable medium, and apparatus that extracts at least one key image from one or more images of an object. Outer boundary makers for an identifier of the object in the at least one key image are detected. An identification sequence from the identifier of the object between the outer boundary markers in the at least one key image is recognized. The recognized identification sequence of the object in the at least one key image is provided. | 02-21-2013 |
20130044915 | METHOD AND APPARATUS FOR RECOGNIZING CHARACTERS - A method and an apparatus for recognizing characters using an image are provided. A camera is activated according to a character recognition request and a preview mode is set for displaying an image photographed through the camera in real time. An auto focus of the camera is controlled and an image having a predetermined level of clarity is obtained for character recognition from the images obtained in the preview mode. The image for character recognition is character-recognition-processed so as to extract recognition result data. A final recognition character row is drawn that excludes non-character data from the recognition result data. A first word is combined including at least one character of the final recognition character row and a predetermined maximum number of characters. A dictionary database that stores dictionary information on various languages using the first word is searched, so as to provide the user with the corresponding word. | 02-21-2013 |
20130044916 | METHOD AND APPARATUS OF PUSH & PULL GESTURE RECOGNITION IN 3D SYSTEM - The present invention provides a method and apparatus of PUSH & PULL gesture recognition in a 3D system. The method comprises determining whether the gesture is PUSH or PULL as a function of distances from the object performing the gesture to the cameras and the characteristics of moving traces of the object in the image planes of the two cameras. | 02-21-2013 |
20130051611 | IMAGE OVERLAYING AND COMPARISON FOR INVENTORY DISPLAY AUDITING - Image overlaying and comparison for inventory display auditing is disclosed herein. An example method to perform inventory display auditing disclosed herein comprises overlaying a reference image over a current image displayed on a camera display, the reference image corresponding to an inventory display to be audited, comparing the reference image and the current image to determine whether the current image and the reference image correspond to a same scene and when the reference image and the current image are determined to correspond to the same scene, indicating a difference region in the current image displayed on the camera display, the difference region being a first region of the current image that differs from a corresponding first region of the reference image. | 02-28-2013 |
20130051612 | SEGMENTING SPATIOTEMPORAL DATA BASED ON USER GAZE DATA - A segmentation task is specified to a user, and gaze data generated by monitoring eye movements of the user viewing spatiotemporal data as a plurality of frames is received. The gaze data includes fixation locations based on the user's gaze throughout the frames. A first frame and a second frame of the frames are selected based on the fixation locations. Segmentation is performed on the first and second frames to segment first and second objects, respectively, from the first and second frames based on a region of interest associated with the first and second frames, the region of interest corresponding to a location of one of the fixation locations. A determination is made as to whether the first and second objects are relevant to the segmentation task, and if so, association data associating the first object with the second object is generated. | 02-28-2013 |
20130051613 | MODELING OF TEMPORARILY STATIC OBJECTS IN SURVEILLANCE VIDEO DATA - A foreground object blob having a bounding box detected in frame image data is classified by a finite state machine as a background, moving foreground, or temporally static object, namely as the temporally static object when the detected bounding box is distinguished from a background model of a scene image of the video data input and remains static in the scene image for a threshold period. The bounding box is tracked through matching masks in subsequent frame data of the video data input, and the object sub-classified within a visible sub-state, an occluded sub-state, or another sub-state that is not visible and not occluded as a function of a static value ratio. The ratio is a number of pixels determined to be static by tracking in a foreground region of the background model corresponding to the tracked object bounding box over a total number of pixels of the foreground region. | 02-28-2013 |
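The static-value-ratio sub-classification in the entry above can be sketched as follows; the two cut-off ratios are illustrative assumptions, and the third sub-state (described in the abstract only as "not visible and not occluded") is labeled here as "partially-visible" for convenience:

```python
def sub_state(static_mask, bbox, visible_hi=0.8, occluded_lo=0.2):
    """Classify a temporally static object's sub-state from the fraction
    of pixels in its bounding box that tracking marks as static.
    The 0.8 / 0.2 cut-offs are made-up values for illustration."""
    x0, y0, x1, y1 = bbox
    region = [static_mask[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    ratio = sum(region) / len(region)   # the abstract's static value ratio
    if ratio >= visible_hi:
        return "visible"
    if ratio <= occluded_lo:
        return "occluded"
    return "partially-visible"
```

Here `static_mask` is a per-pixel 0/1 map of pixels determined to be static by tracking, and `bbox` is the tracked object's bounding box in that mask.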
20130051614 | SIGN LANGUAGE RECOGNITION SYSTEM AND METHOD - A sign language recognition method includes a depth-sensing camera capturing an image of a gesture of a signer and gathering data about distances between a number of points on the signer and the depth-sensing camera, building a three-dimensional (3D) model of the gesture, comparing the 3D model of the gesture with a number of 3D models of different gestures to find out the representations of the 3D model of the gesture, and displaying or vocalizing the representations of the 3D model of the gesture. | 02-28-2013 |
20130051617 | METHOD FOR SENSING MOTION AND DEVICE FOR IMPLEMENTING THE SAME - A method for sensing a motion of an object is to be implemented by a motion recognition device that includes an image acquiring unit and a processor. In the method, the image acquiring unit is configured to acquire a series of image frames by detecting intensity of light received thereby. The processor is configured to receive at least one of the image frames and to determine whether an object is detected in the at least one of the image frames. When an object is detected, the processor is further configured to receive the image frames from the image acquiring unit, and to determine a motion of the object with respect to a three-dimensional coordinate system according to the image frames thus received. | 02-28-2013 |
20130051618 | Method for controlling a light emission of a headlight of a vehicle - A method for controlling a light emission of at least one headlight of a vehicle, which has a traffic sign recognition device. The method includes receiving at least one traffic sign recognition signal from an interface to the traffic sign recognition device. In this instance, the at least one traffic sign recognition signal represents a traffic sign recognized in a course of the road currently being traveled by the vehicle. The method also includes setting a debounce time and/or a debounce stretch for a change in the light emission of the at least one headlight between first and second radiation characteristics as a function of the at least one traffic sign recognition signal. Finally, the method includes delaying the change in the light emission of the at least one headlight by the debounce time set and/or the debounce stretch set, to control light emission of the at least one headlight. | 02-28-2013 |
20130051621 | ADAPTIVE IMAGE ACQUISITION AND PROCESSING WITH IMAGE ANALYSIS FEEDBACK - Described are systems, methods, computer programs, and user interfaces for image location, acquisition, analysis, and data correlation that uses human-in-the-loop processing, Human Intelligence Tasks (HIT), and/or automated image processing. Results obtained using image analysis are correlated to non-spatial information useful for commerce and trade. For example, images of regions of interest of the earth are used to count items (e.g., cars in a store parking lot to predict store revenues), detect events (e.g., unloading of a container ship, or evaluating the completion of a construction project), or quantify items (e.g., the water level in a reservoir, the area of a farming plot). | 02-28-2013 |
20130051622 | Method For Calculating Weight Ratio By Quality Grade In Grain Appearance Quality Grade Discrimination Device - A method is provided for calculating a weight ratio by quality grade using a grain appearance quality grade discrimination device. The method involves the steps of imaging a plurality of grains; discriminating the quality grade of the grains on the basis of data of the imaged grains; tallying, by quality grade, the number of pixels in said data of the imaged grains with regard to the grains whose quality grade has been discriminated; multiplying the number of pixels tallied by quality grade by a weight conversion coefficient per pixel predetermined by quality grade, and thereby converting said number of pixels into a weight by quality grade; and calculating the weight ratio by quality grade of the grains on the basis of the weight by quality grade. | 02-28-2013 |
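The per-grade arithmetic in the entry above reduces to a few lines; the grade names and conversion coefficients below are made-up examples, not values from the publication:

```python
def weight_ratio_by_grade(pixels_by_grade, coeff_by_grade):
    """Convert per-grade pixel tallies into weights via the per-pixel
    weight-conversion coefficients, then normalize into a weight ratio."""
    weights = {g: pixels_by_grade[g] * coeff_by_grade[g] for g in pixels_by_grade}
    total = sum(weights.values())
    return {g: w / total for g, w in weights.items()}

# 1000 first-grade pixels at 0.002 g/pixel and 500 second-grade pixels
# at 0.004 g/pixel both weigh 2.0 g, giving a 50/50 weight ratio.
print(weight_ratio_by_grade({"1st": 1000, "2nd": 500},
                            {"1st": 0.002, "2nd": 0.004}))
```

The key point of the method is that the ratio is computed by weight rather than by grain count, which the per-pixel coefficients make possible without weighing the sample.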
20130058523 | UNSUPERVISED PARAMETER SETTINGS FOR OBJECT TRACKING ALGORITHMS - A method for automatically optimizing a parameter set for a tracking algorithm comprising receiving a series of image frames and processing the image frames using a tracking algorithm with an initialized parameter set. An updated parameter set is then created according to the processed image frames utilizing estimated tracking analytics. The parameters are validated using a performance metric that may be manually or automatically performed using a GUI. The image frames are collected from a video camera with a fixed set-up at a fixed location. The image frames may include a training traffic video or a training video for tracking humans. | 03-07-2013 |
20130058524 | IMAGE PROCESSING SYSTEM PROVIDING SELECTIVE ARRANGEMENT AND CONFIGURATION FOR AN IMAGE ANALYSIS SEQUENCE - A computer-implemented method of processing a selected image using multiple processing operations is provided. An image analysis sequence having multiple processing steps is constructed. The image analysis sequence is constructed in response to receipt of multiple processing operation selections. Individual processing steps in the image analysis sequence are associated with a processing operation that is indicated in a corresponding processing operation selection. The processing steps are arranged in response to receipt of arrangement information that relates to a selective arrangement of the processing steps. At least one of the processing steps in the image analysis sequence is configured such that the processing operation associated with the processing step processes a specified input image to generate an output image when the processing step is performed. A display signal is generated for display of the output image at a display device. | 03-07-2013 |
20130058525 | OBJECT TRACKING DEVICE - In an object tracking device, a search region setting unit sets the search region of an object in a frame image at a present point in time, based on an object region in a frame image at a previous point in time, zoom center coordinates in the frame image at the previous point in time, and a ratio between the zoom scaling factor of the frame image at the previous point in time and the zoom scaling factor of the frame image at the present point in time. A normalizing unit normalizes the image of a search region of the object included in the frame image at the present point in time to a fixed size. A matching unit searches the normalized image of the search region for an object region similar to a template image. | 03-07-2013 |
20130058526 | DEVICE FOR AUTOMATED DETECTION OF FEATURE FOR CALIBRATION AND METHOD THEREOF - A method for automated detection of feature for calibration is provided, which includes capturing images of a polyhedral structure including a plurality of rectangular planes and triangular planes in different directions through a plurality of cameras, and generating a plurality of image files, each of the rectangular planes having calibration objects formed thereon to be used as input values of a calibration engine, and each of the triangular planes having a marker formed thereon to grasp absolute and relative relationships between the rectangular planes; searching for the calibration objects in the image files; searching for the same plane in which the calibration objects are formed using the calibration objects; and indexing the respective calibration objects formed on the same plane. | 03-07-2013 |
20130058527 | SENSOR DATA PROCESSING - A method and apparatus for processing sensor data comprising measuring a value of a first parameter of a scene using a first sensor (e.g. a camera) to produce a first image of the scene, measuring a value of a second parameter of the scene using a second sensor (e.g. a laser scanner) to produce a second image, identifying a first point of the first image that corresponds to a class of features of the scene, identifying a second point of the second image that corresponds to the class of features, projecting the second point onto the first image, determining a similarity value between the first point and the projection of the second point onto the first image, and comparing the determined similarity value to a predetermined threshold value. The method or apparatus may be used on an autonomous vehicle. | 03-07-2013 |
20130058528 | METHOD AND SYSTEM FOR DETECTING VEHICLE POSITION BY EMPLOYING POLARIZATION IMAGE - Disclosed are a method and a system for detecting a vehicle position by employing a polarization image. The method comprises a step of capturing a polarization image by using a polarization camera; a step of acquiring two road shoulders in the polarization image based on a difference between a road surface and each of the two road shoulders in the polarization image, and determining a part between the two road shoulders as the road surface; a step of detecting at least one vehicle bottom from the road surface based on a significant pixel value difference between each wheel and the road surface in the polarization image; and a step of generating a vehicle position from the vehicle bottom based on a pixel value difference between a vehicle outline corresponding to the vehicle bottom and background in the polarization image. | 03-07-2013 |
20130058529 | VISUAL INPUT OF VEHICLE OPERATOR - The present invention relates to a method for determining a vehicle operator's visual input of an object in the operator's surroundings, which method comprises receiving an object position signal indicative of the position of at least one object, receiving an operator motion input signal indicative of operator physiological data comprising information relating to body motion of the operator, estimating an operator eye-gaze direction, and determining a visual input quality value representative of level of visual input of the at least one object received by the operator, based on the object position signal and the estimated operator eye-gaze direction. | 03-07-2013 |
20130058530 | IMAGE PROCESSING APPARATUS AND METHOD - An information processing apparatus comprises a first imaging section configured to image the holding surface of a holding platform on which an object is held from different directions, a recognition section configured to read out the characteristics of the object image of an object contained in the first imaged image based on each of the first imaged images that are respectively imaged by the first imaging section from different directions and compare the read characteristics with the pre-stored characteristics of each object, thereby recognizing the object corresponding to the object image for every first imaged image, and a determination section configured to determine the recognition result of the object held on the holding platform based on the recognition result of the object image for every first imaged image. | 03-07-2013 |
20130058531 | Electronic Toll Management and Vehicle Identification - Identifying a vehicle in a toll system includes accessing image data for a first vehicle and obtaining license plate data from the accessed image data for the first vehicle. A set of records is accessed. The license plate data for the first vehicle is compared with the license plate data for vehicles in the set of records. Based on the comparison of the license plate data, a set of vehicles is identified from the vehicles having records in the set of records. Second vehicle identifier data is accessed for the first vehicle and for a vehicle in the set of vehicles. Using a processing device, the second vehicle identifier data for the first vehicle is compared with the second vehicle identifier data for the vehicle in the set of vehicles. The vehicle in the set of vehicles is identified as the first vehicle based on results of the comparison. | 03-07-2013 |
20130058532 | Tracking An Object With Multiple Asynchronous Cameras - The path and/or position of an object is tracked using two or more cameras which run asynchronously, so there is no need to provide a common timing signal to each camera. Captured images are analyzed to detect a position of the object in the image. Equations of motion for the object are then solved based on the detected positions and a transformation which relates the detected positions to a desired coordinate system in which the path is to be described. The position of an object can also be determined from a position which meets a distance metric relative to lines of position from three or more images. The images can be enhanced to depict the path and/or position of the object as a graphical element. Further, statistics such as maximum object speed and distance traveled can be obtained. Applications include tracking the position of a game object at a sports event. | 03-07-2013 |
20130058533 | IMAGE RECONSTRUCTION BY POSITION AND MOTION TRACKING - A system, method, and apparatus provide the ability to reconstruct an image from an object. A hand-held image acquisition device is configured to acquire local image information from a physical object. A tracking system obtains displacement information for the hand-held acquisition device while the device is acquiring the local image information. An image reconstruction system computes the inverse of the displacement information and combines the inverse with the local image information to transform the local image information into a reconstructed local image information. A display device displays the reconstructed local image information. | 03-07-2013 |
20130058534 | Method for Road Sign Recognition - The invention relates to a method and to a device for the recognition of road signs ( | 03-07-2013 |
20130058535 | DETECTION OF OBJECTS IN AN IMAGE USING SELF SIMILARITIES - An image processor ( | 03-07-2013 |
20130064420 | AUTOMATED SYSTEM AND METHOD FOR OPTICAL CLOUD SHADOW DETECTION OVER WATER - System and method for detecting cloud shadows over water from ocean color imagery received from remote sensors. | 03-14-2013 |
20130064421 | RESOLVING HOMOGRAPHY DECOMPOSITION AMBIGUITY BASED ON VIEWING ANGLE RANGE - The homography between captured images of a planar object is determined and decomposed into at least one possible solution, and typically at least two ambiguous solutions. The removal of the ambiguity between the two solutions, or validation of a single solution, is performed using a viewing angle range. The viewing angle range may be used by comparing the viewing angle range to the orientation of each solution as derived from the rotation matrix resulting from the homography decomposition. Any solution with an orientation outside the viewing angle range may be eliminated as a solution. | 03-14-2013 |
20130064422 | METHOD FOR DETECTING DENSITY OF AREA IN IMAGE - Light is allowed to be incident from above wells provided on a microplate M and the light transmitted to the lower surface is received to obtain an original image of the wells (Step S | 03-14-2013 |
20130064423 | FEATURE EXTRACTION AND PROCESSING FROM SIGNALS OF SENSOR ARRAYS - Feature extraction includes extracting features from signals of a plurality of sensors of a sensor array, including, for each sensor, obtaining a signal of the sensor corresponding to responses of the sensor during one or more exposures to samples, computing a baseline function from the signal, and computing the features based on the baseline function and values corresponding to responses of the sensor during each exposure. Feature vectors are formed from the features of the sensors. The features in each feature vector correspond to the same exposure. At least one of computing the baseline function by interpolating baseline values corresponding to responses of the sensor prior to each exposure, and forming the feature vectors by combining features of at least one sensor with features of at least one redundant sensor of the sensor array in the feature vectors is performed. | 03-14-2013 |
20130064424 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM - An image corresponding to a pattern having a first size is detected from a first detection region in an acquired, first image, and an image corresponding to a pattern having a second size is detected from a second detection region different from the first detection region in the first image. | 03-14-2013 |
20130064425 | IMAGE RECOGNIZING APPARATUS, IMAGE RECOGNIZING METHOD, AND PROGRAM - An image recognizing apparatus is equipped with: a detecting unit configured to detect, from an input image, a candidate area for a target of recognition, based on a likelihood of a partial area in the input image; an extracting unit configured to extract, from a plurality of candidate areas detected by the detecting unit, a set of the candidate areas which are in an overlapping relation; a classifying unit configured to classify an overlapping state of the set of the candidate areas; and a discriminating unit configured to discriminate whether or not the respective candidate areas are the target of recognition, based on the overlapping state of the set of the candidate areas and the respective likelihoods of the candidate areas. | 03-14-2013 |
20130064426 | EFFICIENT SYSTEM AND METHOD FOR BODY PART DETECTION AND TRACKING - A method is provided for detecting a body part in a video stream from a mobile device. A video stream of a human subject is received from a camera connected to the mobile device. The video stream has frames. A first frame of the video stream is identified for processing. This first frame is then partitioned into observation windows, each observation window having pixels. In each observation window, non-skin-toned pixels are eliminated; and the remaining pixels are compared to determine a degree of entropy of the pixels in the observation window. In any observation window having a degree of entropy above a predetermined threshold, a bounded area is made around the region of high entropy pixels. The consistency of the entropy is analyzed in the bounded area. If the bounded area has inconsistently high entropy, a body part is determined to be detected at that bounded area. | 03-14-2013 |
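The entropy test described in the entry above can be illustrated with a short sketch. The Shannon-entropy computation over an observation window's pixel histogram is standard; the skin-tone gating step is omitted here, and the threshold value of 4.0 bits is an illustrative assumption, not a figure from the patent:

```python
import numpy as np

# Hedged sketch of the per-window entropy test described above.
# The entropy threshold (4.0 bits) is an illustrative assumption.
def window_entropy(pixels):
    """pixels: 1-D array of 8-bit intensity values from one observation window."""
    hist = np.bincount(pixels, minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())  # Shannon entropy in bits

def high_entropy(pixels, threshold=4.0):
    # A window whose remaining (skin-toned) pixels exceed the entropy
    # threshold would be bounded and analyzed further.
    return window_entropy(pixels) > threshold
```

A uniform window yields zero entropy, while a window whose pixels are spread across all 256 levels approaches the maximum of 8 bits.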
20130064427 | METHODS AND SYSTEMS FOR OBJECT TRACKING - Methods and systems for object tracking are disclosed in which the bandwidth of a “slow” tracking system (e.g., an optical tracking system) is augmented with sensor data generated by a “fast” tracking system (e.g., an inertial tracking system). The tracking data generated by the respective systems can be used to estimate and/or predict a position, velocity, and orientation of a tracked object that can be updated at the sample rate of the “fast” tracking system. The methods and systems disclosed herein generally involve an estimation algorithm that operates on raw sensor data (e.g., two-dimensional pixel coordinates in a captured image) as opposed to first processing and/or calculating object position and orientation using a triangulation or “back projection” algorithm. | 03-14-2013 |
20130064428 | STRUCTURE DETECTION APPARATUS AND METHOD, AND COMPUTER-READABLE MEDIUM STORING PROGRAM THEREOF - A plurality of candidate points are extracted from image data. The plurality of candidate points are normalized, and a set of representative points composing form model that is most similar to set form is selected from the plurality of candidate points. Further, the candidate points and the form model are compared with each other, and correction is performed by adding a region forming structure or by deleting a region, or the like. Accordingly, the structure is detected in image data. | 03-14-2013 |
20130064429 | IMAGE PROCESSING DEVICE, OBJECT SELECTION METHOD AND PROGRAM - There is provided an image processing device including: a data storage unit that stores object identification data for identifying an object operable by a user and feature data indicating a feature of appearance of each object; an environment map storage unit that stores an environment map representing a position of one or more objects existing in a real space and generated based on an input image obtained by imaging the real space using an imaging device and the feature data stored in the data storage unit; and a selecting unit that selects at least one object recognized as being operable based on the object identification data, out of the objects included in the environment map stored in the environment map storage unit, as a candidate object being a possible operation target by a user. | 03-14-2013 |
20130064430 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM - An image processing device comprises: image display means which displays at least one of transformation target images containing an object of interest; first reference point receiving/determining means which receives information on first reference point candidates according to a user operation, receives according to a user operation a determination signal targeted at the first reference point candidates displayed on the transformation target image based on the information on the first reference point candidates, and determines first reference points based on the information on the first reference point candidates targeted by the received determination signal; second reference point receiving/determining means which determines second reference points by receiving information on the second reference points; and geometric transformation means which outputs a transformed image by conducting geometric transformation to the transformation target image based on the first reference points determined by the first reference point receiving/determining means and the second reference points determined by the second reference point receiving/determining means. | 03-14-2013 |
20130070961 | System and Method for Providing Temporal-Spatial Registration of Images - A video imaging system for use with or in a mobile video capturing system (e.g., an airplane or UAV). A multi-camera rig containing a number of cameras (e.g., 4) receives a series of mini-frames (e.g., from respective field steerable mirrors (FSMs)). The mini-frames received by the cameras are supplied to (1) an image registration system that calibrates the system by registering relationships corresponding to the cameras and/or (2) an image processor that processes the mini-frames in real-time to produce a video signal. The cameras can be infra-red (IR) cameras or other electro-optical cameras. By creating a rigid model of the relationships between the mini-frames of the plural cameras, real-time video stitching can be accelerated by reusing the movement relationship of a first mini-frame of a first camera on corresponding mini-frames of the other cameras in the system. | 03-21-2013 |
20130070962 | EGOMOTION ESTIMATION SYSTEM AND METHOD - A computer-implemented method for determining an egomotion parameter using an egomotion estimation system is provided. First and second image frames are obtained. A first portion of the first image frame and a second portion of the second image frame are selected to respectively obtain a first sub-image and a second sub-image. A transformation is performed on each of the first sub-image and the second sub-image to respectively obtain a first perspective image and a second perspective image. The second perspective image is iteratively adjusted to obtain multiple adjusted perspective images. Multiple difference values are determined, each corresponding to the difference between the first perspective image and one of the adjusted perspective images. A translation vector for an egomotion parameter is determined. The translation vector corresponds to one of the multiple difference values. | 03-21-2013 |
20130070963 | ADAPTIVE FEATURE RECOGNITION TOOL - The present invention provides an adaptive feature recognition tool that can be used to determine the location and/or count discrete features on an object being manufactured in a relatively quick time fashion. The tool can include an elongated rigid member that has a first end with a generally planar surface, the generally planar surface having a plurality of contrast targets thereon. The elongated rigid member can also have a second end for placement at a desired location, for example placement on a plurality of features whose number and/or location(s) on the object is desired. In addition, an exposure device that is operable to expose specific subsets of the plurality of contrast targets to a line-of-sight digital imaging device can be included. | 03-21-2013 |
20130070964 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An information processing apparatus includes an identifying unit, a character recognition unit, an obtaining unit, a correcting unit, and an output unit. The identifying unit identifies a still image included in a moving image. The character recognition unit performs character recognition on the still image identified by the identifying unit. The obtaining unit obtains information about the moving image. The correcting unit corrects, on the basis of the information obtained by the obtaining unit, a character recognition result generated by the character recognition unit. The output unit outputs the character recognition result corrected by the correcting unit in association with the moving image. | 03-21-2013 |
20130070965 | IMAGE PROCESSING METHOD AND APPARATUS - An image processing method and apparatus for obtaining a wide dynamic range image, the method including: obtaining a plurality of low dynamic range images having different exposure levels for a same scene; generating a motion map representing whether motion occurred, depending on brightness ranks of the plurality of low dynamic range images; obtaining weights for the plurality of low dynamic range images; generating a weight map by combining the weights and the motion map; and generating a wide dynamic range image by fusing the plurality of low dynamic range images and the weight map. According to the image processing method and apparatus, it is possible to accurately detect a motion area using a rank map, obtain a wide dynamic range image at a higher operation speed, and reduce the possibility that a phenomenon such as color warping occurs, by directly combining images without using a tone mapping process. | 03-21-2013 |
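The fusion step in the entry above (per-pixel weights combined with a motion map, then a weighted sum of the LDR exposures) can be sketched as follows. The well-exposedness weighting (a Gaussian around mid-tone) is a common choice borrowed from exposure-fusion practice, not necessarily the weighting this patent uses, and the motion map is assumed binary (1 = static, 0 = motion):

```python
import numpy as np

# Illustrative sketch of exposure fusion with a motion map, loosely
# following the abstract above; the mid-tone Gaussian weighting is an
# assumption, not the patent's specific weighting function.
def fuse_ldr(images, motion_maps):
    """images: list of float arrays in [0, 1]; motion_maps: 1 where static, 0 where motion."""
    weights = []
    for img, motion in zip(images, motion_maps):
        # Well-exposedness weight: favor mid-tone pixels.
        w = np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2))
        # Combine with the motion map so moving areas are down-weighted
        # (small epsilon keeps the per-pixel normalization well-defined).
        weights.append(w * motion + 1e-6)
    weights = np.array(weights)
    weights /= weights.sum(axis=0)  # normalize weights per pixel
    return (weights * np.array(images)).sum(axis=0)
```

Fusing a dark and a mid-tone exposure this way pulls each output pixel toward the better-exposed input, without any tone-mapping pass.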
20130070966 | Method and device for checking the visibility of a camera for surroundings of an automobile - A method for checking the visibility of a camera for surroundings of an automobile is proposed which includes receiving a camera image and a step of dividing the camera image into a plurality of partial images. A visibility value and a blindness value are determined for each partial image based on a number of objects detected in the particular partial image. A visibility probability is subsequently determined for each of the partial images based on the blindness values and the visibility values of the particular partial images. | 03-21-2013 |
20130070967 | MOTION ANALYSIS THROUGH GEOMETRY CORRECTION AND WARPING - An object in a hot atmosphere with a temperature greater than 400 F in a gas turbine moves in a 3D space. The movement may include a vibrational movement. The movement includes a rotational movement about an axis and a translational movement along the axis. Images of the object are recorded with a camera, which may be a high-speed camera. The object is provided with a pattern that is tracked in images. Warpings of sub-patches in a reference image of the object are determined to form standard format warped areas. The warpings are applied piece-wise to areas in following images to create corrected images. Standard tracking such as SSD tracking is applied to the piece-wise corrected images to determine a movement of the object. The image correction and object tracking are performed by a processor. | 03-21-2013 |
20130070968 | APPARATUS AND METHOD FOR CALCULATING ENERGY CONSUMPTION BASED ON THREE-DIMENSIONAL MOTION TRACKING - An apparatus and method calculate an energy consumption based on 3D motion tracking. The method includes setting at least one specific portion of an analysis target as a reference point, analyzing the reference point before and after the lapse of a predetermined time, and determining an energy consumption of the analysis target on the basis of the analyzed reference point. | 03-21-2013 |
20130077818 | DETECTION METHOD OF OPTICAL NAVIGATION DEVICE - A detection method of an optical navigation device is disclosed. The method determines whether an object has been lifted from the optical navigation device. The method includes steps of reading the detection image detected by the optical navigation device, calculating the image signal value thereof during non-lift status, and integrating a historical threshold value with the image signal value according to adaptive factors for generating an adjustment threshold value serving as the navigation threshold of the detection image. The historical threshold value is the navigation threshold of a former detection image of the detection image. A step of comparing the adjustment threshold with the image signal value for determining whether the image signal value passes the navigation threshold or not may also be included. If the image signal value does not pass the navigation threshold, the object is determined as in the lift status. | 03-28-2013 |
20130077819 | BUILDING FOOTPRINT EXTRACTION APPARATUS, METHOD AND COMPUTER PROGRAM PRODUCT - A system, method and computer program product cooperate to extract a building footprint from other data associated with a property. Imagery data of real property is input to a computing device, the imagery data containing a plurality of parcels. A processing circuit detects contrasts of candidate man-made structures on a parcel of the plurality of parcels. The candidate man-made structures are then associated with the parcel. A building footprint is then extracted by distinguishing a man-made structure on said parcel from natural terrain, recognizing that man-made structures when viewed from above generally show a strong contrast from background terrain. Remaining candidate man-made structures are removed by observing that they have features inconsistent with predetermined extraction logic. | 03-28-2013 |
20130077820 | MACHINE LEARNING GESTURE DETECTION - A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human subject observed with a sensor such as a depth camera. A gesture detection module is trained via machine learning to identify one or more features of a virtual skeleton and indicate if the feature(s) collectively indicate a particular gesture. | 03-28-2013 |
20130077821 | Enhancing Video Using Super-Resolution - A method and apparatus for processing images. A portion of a selected image in which a moving object is present is identified. The selected image is one of a sequence of images. Pixels in a region of interest are identified in the selected image. First values are identified for a first portion of the pixels using the images and first transformations. The first portion of the pixels corresponds to the background in the selected image. A first transformation is configured to align features of the background between one image in the images and the selected image. Second values are identified for a second portion of the pixels using the images and second transformations. The second portion of the pixels corresponds to the moving object in the selected image. A second transformation is configured to align features of the moving object between one image in the images and the selected image. | 03-28-2013 |
20130077822 | METHOD FOR CREATING AN INDEX USING AN ALL-IN-ONE PRINTER AND ADJUSTABLE GROUPING PARAMETERS - A method for indexing and printing images on a printing system, the method includes inputting images with metadata into the printing system; by means of the metadata, selectively grouping the images into a plurality of groups by a controller of the printing system; selecting at least one representative image as an index image from each group; selecting an output format; and using the index images to create an index image file corresponding to the selected output format. | 03-28-2013 |
20130077823 | SYSTEMS AND METHODS FOR NON-CONTACT HEART RATE SENSING - An embodiment generally relates to systems and methods for estimating heart rates of individuals using non-contact imaging. A processing module can process multi-spectral video images of individuals and detect skin blobs within different images of the multi-spectral video images. The skin blobs can be converted into time series signals and processed with a band pass filter. Further, the time series signals can be processed to separate pulse signals from unnecessary signals. The heart rate of the individual can be estimated according to the resulting time series signal processing. | 03-28-2013 |
20130077824 | HEURISTIC MOTION DETECTION METHODS AND SYSTEMS FOR INTERACTIVE APPLICATIONS - A method is provided for motion detection comprising acquiring a series of images comprising a current image and a previous image, determining a plurality of optical flow vectors, each representing movement of one of a plurality of visual elements from a first location in the previous image to a second location in the current image, storing the optical flow vectors in a current vector map associated with time information, and determining motion by calculating an intensity ratio between the current vector map and at least one prior vector map. | 03-28-2013 |
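The vector-map comparison in the entry above can be illustrated with a small sketch. The abstract does not define "intensity ratio", so it is assumed here to be the ratio of summed optical-flow-vector magnitudes between the current and a prior vector map; the threshold value is likewise an illustrative assumption:

```python
import numpy as np

# Illustrative sketch of the motion-detection heuristic above. The
# "intensity ratio" is assumed to be the ratio of summed flow-vector
# magnitudes; the abstract does not specify its exact definition.
def intensity_ratio(current_map, prior_map, eps=1e-6):
    """Each map is an (N, 2) array of optical-flow vectors (dx, dy)."""
    cur = np.linalg.norm(current_map, axis=1).sum()
    pri = np.linalg.norm(prior_map, axis=1).sum()
    return cur / (pri + eps)

def motion_detected(current_map, prior_map, threshold=2.0):
    # Motion is flagged when flow intensity jumps relative to the prior map.
    return intensity_ratio(current_map, prior_map) > threshold
```

A frame whose visual elements suddenly move much farther than in the prior frame produces a large ratio and trips the detector; a calm frame following an active one does not.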
20130077825 | IMAGE PROCESSING APPARATUS - There is provided an image processing apparatus. The image processing apparatus includes: a color reproducing unit for reproducing a luminance of a color phase, which is not set to each pixel of a pair of image data composed of Bayer array, based upon the adjacent pixels; and a matching processing unit for extracting blocks with a predetermined size from the pair of image data whose luminance is reproduced, and executing a matching process so as to specify blocks having high correlation. The color reproducing unit and the matching processing unit respectively execute the luminance reproduction and the matching process with only a color phase with the highest degree of occupation in the Bayer array. | 03-28-2013 |
20130077826 | Method and apparatus for three-dimensional tracking of infra-red beacons - A method for processing data includes identifying a time signature of an infra-red (IR) beacon. Image data associated with the IR beacon is identified using the time signature. | 03-28-2013 |
20130077827 | AUTOMATED CRYSTAL IDENTIFICATION ACHIEVED VIA MODIFIABLE TEMPLATES - A nuclear imaging system ( | 03-28-2013 |
20130077828 | IMAGE PROCESSING - Apparatus and method for processing a sequence of images of a scene, the method including: tracking a region of interest in the sequence of images (e.g. using a Self Adaptive Discriminant filter), selecting a particular image in the sequence, selecting a set of images from the sequence, the set of images including one or more images that precede the particular image in the sequence of images; and determining a value indicative of the level of change between the region of interest in the particular image and the regions of interest in the images in the set of images (e.g. using a Change Detection Process). | 03-28-2013 |
20130083959 | Multi-Modal Sensor Fusion - A method and apparatus for processing images. A sequence of images for a scene is received from an imaging system. An object in the scene is detected using the sequence of images. A viewpoint of the imaging system is registered to a model of the scene using a region in the model of the scene in which an expected behavior of the object is expected to occur. | 04-04-2013 |
20130083960 | FUNCTION-CENTRIC DATA SYSTEM - Various embodiments of the invention provide a function centric data system that reduces avionics system weight and power requirements. In some embodiments, the function centric data system is housed in a vibration resistant package. A variety of functions typically performed by other avionics systems are incorporated into the system, allowing centralized power and processing management, reducing weight and improving system reliability. In some embodiments, the function centric data system is configured to provide high rate data sampling, allowing ground stations to apply sophisticated failure prediction algorithms, reducing maintenance costs and mean time between flights. Embodiments include methods of wireless networking with automatic handoffs and adaptive multi-hop topologies to allow this data to be promptly transferred when the aircraft lands. Embodiments also include methods for data processing to predict imminent failures using Bayesian statistics and catastrophe prediction methods. | 04-04-2013 |
20130083961 | IMAGE INFORMATION PROCESSING APPARATUS AND IMAGE INFORMATION PROCESSING METHOD - According to one embodiment, a viewer image processing module detects facial image data on a viewer from a shot image signal obtained by shooting the viewer, a viewed program image processing module detects facial image data on a performer included in program data the viewer is viewing, and a synchronous control module creates viewer information that correlates facial image data on the performer, facial image data on the viewer, and program information on the program with one another and transmits the viewer information to a viewing data entry module. | 04-04-2013 |
20130083962 | IMAGE PROCESSING APPARATUS - An image processing apparatus includes a definer. The definer defines a target image on a designated image. A first detector detects a degree of overlapping between the target image and a first specific object image appearing on the designated image. A second detector detects a degree of overlapping between the target image and a second specific object image appearing on the designated image. A modifier modifies the target image when the degree of overlapping detected by the first detector falls below a first reference or the degree of overlapping detected by the second detector is equal to or more than a second reference. A restrictor restricts a process of the modifier when the degree of overlapping detected by the first detector is equal to or more than the first reference and the degree of overlapping detected by the second detector falls below the second reference. | 04-04-2013 |
20130083963 | ELECTRONIC CAMERA - An electronic camera includes an imager. The imager repeatedly outputs an image representing a scene captured on an imaging surface. A searcher searches for a specific object image from the image outputted from the imager by executing a plurality of comparing processes respectively corresponding to a plurality of postures possibly taken by the imager in a direction around an axis orthogonal to the imaging surface. An executer executes a processing operation different depending on a search result of the searcher. A recorder repeatedly records the image outputted from the imager in parallel with a process of the imager. A restrictor executes a restricting process of restricting the comparing process executed by the searcher to any one of the plurality of comparing processes, in association with a process of the recorder. | 04-04-2013 |
20130083964 | METHOD AND SYSTEM FOR THREE DIMENSIONAL MAPPING OF AN ENVIRONMENT - A three-dimensional modeling system includes a multi-axis range sensor configured to capture a first set of three-dimensional data representing characteristics of objects in an environment; a data sensor configured to capture a first set of sensor data representing distances between at least a subset of the objects and the data sensor; a computer-readable memory configured to store each of the first set of three-dimensional data and the first set of sensor data; a mobile base; a processor; and a computer-readable medium containing programming instructions configured to, when executed, instruct the processor to process the first set of three-dimensional data and the first set of sensor data to generate a three-dimensional model of the environment. | 04-04-2013 |
20130083965 | APPARATUS AND METHOD FOR DETECTING OBJECT IN IMAGE - An apparatus and method detect an object in an original image captured by an image capturing device. The apparatus and method detect a location of the object using a thermal image for the captured image, designate a region of the detected object as an image inpainting region, restore a region corresponding to the region of the detected object using its surrounding information, examine a difference between the restored image and the original image, and separate an object region from the original image, thereby more accurately detecting the object. | 04-04-2013 |
20130083966 | Match, Expand, and Filter Technique for Multi-View Stereopsis - In accordance with one or more aspects of a match, expand, and filter technique for multi-view stereopsis, features across multiple images of an object are matched to obtain a sparse set of patches for the object. The sparse set of patches is expanded to obtain a dense set of patches for the object, and the dense set of patches is filtered to remove erroneous patches. Optionally, reconstructed patches can be converted into 3D mesh models. | 04-04-2013 |
20130083967 | System and Method for Extracting Features in a Medium from Data Having Spatial Coordinates - Systems and methods are provided for extracting various features from data having spatial coordinates. Based on a few known data points in a point cloud, other data points can be interpolated for a given parameter using probabilistic methods, thereby generating a greater number of data points. Using the greater number of data points, a Boolean function, related in part to the given parameter, can be used to extract more detailed features. Based on the Boolean values, a shape of a body having the characteristic(s) defined by the Boolean function can be constructed in a layered manner. The extraction of the features may be carried out automatically by a computing device. | 04-04-2013 |
20130083968 | VEHICLE PERIPHERY MONITORING DEVICE - A vehicle periphery monitoring device includes: a first edge image generation element | 04-04-2013 |
20130083969 | COLOR IMAGE PROCESSING METHOD, COLOR IMAGE PROCESSING DEVICE, AND COLOR IMAGE PROCESSING PROGRAM - An object area detection means detects an object area which is an area to be subjected to image processing from an input image. A reflection component reconstruction means calculates color information of the object area and a perfect diffusion component, which is a low-frequency component of the object area, and reconstructs a surface reflection component based on the color information and the low-frequency component. A surface reflection component correction means corrects the reconstructed surface reflection component according to a reference surface reflection component that is the surface reflection component set in advance according to the object area. A reproduced color calculation means calculates a reproduced color that is a color obtained by correcting each pixel included in the input image by using the perfect diffusion component and the corrected surface reflection component and generates an output image based on the reproduced color. | 04-04-2013 |
20130083970 | IMAGE PROCESSING - Apparatus and method for processing a sequence of images of a scene, the method including: tracking a region of interest in the sequence of images (e.g. using a Self Adaptive Discriminant filter); selecting a particular image in the sequence; selecting a set of images from the sequence, the set having one or more images that precede the particular image in the sequence of images; for each pixel in the region of interest in the particular image, determining a value for a parameter; for each pixel in the region of interest of each image in the set of images, determining a value for the parameter; and comparing a function of the determined values for the region of interest in the particular image to a further function of the determined values for the regions of interest in the images in the set of images. | 04-04-2013 |
20130089234 | TRAJECTORY INTERPOLATION APPARATUS AND METHOD - A trajectory interpolation apparatus is disclosed. The first storage part stores first time and first location information of a movable body at the first time. The second storage part stores second time and second location information of the movable body at the second time. The calculation part calculates a first moving distance from the first time and a second moving distance from the second time based on a relationship between the time and the speed stored in the second storage part, regarding third time between the first time and the second time. The determination part determines, as the interpolation point, one of intersection points for a circle in which the first location is set as its center and the first moving distance is set as its radius, and another circle in which the second location is set as its center and the second moving distance is set as its radius. | 04-11-2013 |
20130089235 | MOBILE APPARATUS AND METHOD FOR CONTROLLING THE SAME - A method of controlling a mobile apparatus includes acquiring a first original image and a second original image, extracting a first feature point of the first original image and a second feature point of the second original image, generating a first blurring image and a second blurring image by blurring the first original image and the second original image, respectively, calculating a similarity between at least two images of the first original image, the second original image, the first blurring image, and the second blurring image, determining a change in scale of the second original image based on the calculated similarity, and controlling at least one of an object recognition and a position recognition by matching the second feature point of the second original image to the first feature point of the first original image based on the change in scale. | 04-11-2013 |
20130089236 | Iris Recognition Systems - The present invention concerns a method for capturing an image of an iris free of specularities from a spectacle-wearing user for use in an iris recognition identification system, which includes an illumination source and an image capture device. The method comprises illuminating the user's eye from a first illumination position associated with a first optical path, and capturing a first image of the eye; and determining if the first image comprises a specular image in a first region of interest, the specular image being formed by light reflected from the spectacles. If a specular image is present, the method further comprises illuminating the eye from a second illumination position associated with a second optical path different to the first optical path, such that the specular image is shifted to a second region; and capturing a second image of the eye. | 04-11-2013 |
20130089237 | SENSORS AND SYSTEMS FOR THE CAPTURE OF SCENES AND EVENTS IN SPACE AND TIME - Various embodiments comprise apparatuses and methods including a light sensor. The light sensor includes a first electrode, a second electrode, a third electrode, and a light-absorbing semiconductor in electrical communication with each of the first electrode, the second electrode, and the third electrode. A light-obscuring material to substantially attenuate an incidence of light onto a portion of the light-absorbing semiconductor is disposed between the second electrode and the third electrode. An electrical bias is to be applied between the second electrode, and the first and the third electrodes and a current flowing through the second electrode is related to the light incident on the light sensor. Additional methods and apparatuses are described. | 04-11-2013 |
20130094694 | THREE-FRAME DIFFERENCE MOVING TARGET ACQUISITION SYSTEM AND METHOD FOR TARGET TRACK IDENTIFICATION - Embodiments of a target-tracking system and method of determining an initial target track in a high-clutter environment are generally described herein. The target-tracking system may register image information of first and second warped images with image information of a reference image. Pixels of the warped images may be offset based on the outputs of the registration to align each warped image with the reference image. A three-frame difference calculation may be performed on the offset images and the reference image to generate a three-frame difference output image. Clutter suppression may be performed on the three-frame difference image to generate a clutter-suppressed output image for use in target-track identification. The clutter suppression may include performing a gradient operation on a background image to remove any gradient objects. | 04-18-2013 |
20130094695 | METHOD AND APPARATUS FOR AUTO-DETECTING ORIENTATION OF FREE-FORM DOCUMENT USING BARCODE - Method and apparatus for detecting the orientation of a document using barcode decoding. The method includes (1) capturing an image of the document with an imaging arrangement having a solid-state imager; (2) determining a presence of a barcode in the captured image of the document; (3) decoding the barcode; (4) determining an up-direction of the document as a function of an orientation of the barcode in the document; and (5) setting an orientation of the document in the captured image based upon the up-direction of the document. In one implementation, the barcode is configured with orientation data indicating the up-direction of the document. | 04-18-2013 |
20130094696 | Integrated Background And Foreground Tracking - Systems and methods for tracking the foreground and background objects in a video image sequence. The systems and methods provide for determining a camera model based on a first group of feature points extracted from a video frame, extracting three-dimensional (3D) information for the first group of feature points based on the camera model and a previous camera model for a previous video frame, reassigning feature points to the first group or a second group, based on mapping each feature point to a corresponding 3D model using the extracted 3D information for each feature point, and determining a new camera model when a number of reassigned feature points is greater than a predetermined threshold. | 04-18-2013 |
20130094697 | CAPTURING, ANNOTATING, AND SHARING MULTIMEDIA TIPS - Systems and methods are provided herein that can help people share tacit knowledge about how to operate and repair products in their environment. Systems and methods provided herein let users record video and improve the usefulness of recorded content by helping users add annotations and other meta-data to their videos at the point of capture. | 04-18-2013 |
20130094698 | VIDEO PROCESSING DEVICE FOR EMBEDDING TIME-CODED METADATA AND METHODS FOR USE THEREWITH - A video processing device includes a content analyzer that receives a video signal and generates content recognition data based on the video signal, wherein the content recognition data is associated with at least one timestamp included in the video signal. A metadata search device generates time-coded metadata in response to content recognition data and in accordance with the at least one time stamp. A metadata association device generates a processed video signal from the video signal, wherein the processed video signal includes the time-coded metadata. | 04-18-2013 |
20130094699 | FOREST FIRE SMOKE DETECTION METHOD USING RANDOM FOREST CLASSIFICATION - A forest fire smoke detection method using random forest classification is provided. In the method, a first reference value is set. For consecutively captured frames, images between the frames are compared, each block, in which a number of pixels, motions of which have been identified, is equal to or greater than the first reference value, is set as a candidate block, and a keyframe is selected. The selected keyframe is compared with at least one frame previous to the keyframe and then a plurality of feature vectors are extracted from the candidate blocks. The extracted feature vectors are learned using different random forest algorithms. Probabilities output to terminal nodes for classes are accumulated, and two first cumulative probability histograms are generated. The two first cumulative probability histograms are averaged, and then a second cumulative probability histogram is generated. A detected state of each candidate block is determined. | 04-18-2013 |
20130094700 | Aerial Survey Video Processing - An aerial survey video processing apparatus for analyzing aerial survey video. The apparatus includes a feature tracking section adapted to associate identified features with items in a list of features being tracked, based on a predicted location of the features being tracked. The tracking section updates the list of features being tracked with the location of the associated identified features. | 04-18-2013 |
20130094701 | ADAPTIVE CROSS PARTITION FOR LEARNING WEAK CLASSIFIERS - Systems and methods are disclosed to perform object detection for images from an image sensor by reusing a 1-dimensional feature from a previously learned weak classifier and selecting a new feature to construct a 2-dimensional feature space; and cross partitioning the 2-dimensional space to learn optimal outputs for instances in each domain within a boosting framework. | 04-18-2013 |
20130094702 | Arrangements Involving Social Graph Data and Image Processing - This technology concerns, in one aspect, using a person's social network graph data as a virtual visual cortex—taking image input from a smartphone or the like, and processing it with the graph data to yield a personalized form of processing based on the imagery. The user's network graph data is typically updated by such processing—providing a form of virtual image memory that can influence future social network behavior. In another aspect, the technology concerns identifying content (e.g., audio) by both fingerprint-based and watermark-based techniques, and arrangements employing such identification data. A great number of other features and arrangements are also detailed. | 04-18-2013 |
20130094703 | METHOD FOR VISUALIZING ZONES OF HIGHER ACTIVITY IN SURVEILLANCE SCENES - The invention relates to a method for visualizing zones of higher activity in a monitoring scene monitored by at least one monitoring device ( | 04-18-2013 |
20130094704 | SYSTEM FOR DETECTING BONE CANCER METASTASES - The invention relates to a detection system for automatic detection of bone cancer metastases from a set of isotope bone scan images of a patient's skeleton, the system comprising a shape identifier unit, a hotspot detection unit, a hotspot feature extraction unit, a first artificial neural network unit, a patient feature extraction unit, and a second artificial neural network unit. | 04-18-2013 |
20130094705 | Method and Apparatus for Projective Volume Monitoring - According to one aspect of the teachings presented herein, a projective volume monitoring apparatus is configured to detect objects intruding into a monitoring zone. The projective volume monitoring apparatus is configured to detect the intrusion of objects of a minimum object size relative to a protection boundary, based on an advantageous processing technique that represents range pixels obtained from stereo correlation processing in spherical coordinates and maps those range pixels to a two-dimensional histogram that is defined over the projective coordinate space associated with capturing the stereo images used in correlation processing. The histogram quantizes the horizontal and vertical solid angle ranges of the projective coordinate space into a grid of cells. The apparatus flags range pixels that are within the protection boundary and accumulates them into corresponding cells of the histogram, and then performs clustering on the histogram cells to detect object intrusions. | 04-18-2013 |
20130094706 | INFORMATION PROCESSING APPARATUS AND PROCESSING METHOD THEREOF - An information processing apparatus acquires a plurality of geometric features and normals at the respective geometric features from a target object arranged at a first position. The information processing apparatus also acquires a plurality of normals corresponding to the respective geometric features of the target object from a shape model for the target object that is arranged at a second position different from the first position. The information processing apparatus calculates direction differences between the acquired normals for respective pairs of corresponding geometric features of the target object and shape model. The information processing apparatus determines whether or not occlusion occurs at each geometric feature by comparing the calculated direction differences with each other. | 04-18-2013 |
20130094707 | METHOD FOR VERIFYING A SURVEYING INSTRUMENT'S EXTERNAL ORIENTATION - Verifying a surveying instrument's external orientation during a measurement process, comprising directing the imaging means onto a reference object and detecting a first photographing direction of the imaging means, taking a first image of the reference object in the first photographing direction, memorizing the first image and the first photographing direction as being indicative of the surveying instrument's external orientation, re-directing the imaging means onto the reference object and detecting a second photographing direction of the imaging means, taking a second image of the reference object in the second photographing direction, and comparing, by image processing, a first imaged position of the reference object in the first image with a second imaged position in the second image, as well as the first photographing direction with the second, and verifying the surveying instrument's external orientation based on disparities between the first and second imaged positions and/or between the first and second photographing directions. | 04-18-2013 |
20130094708 | Object Information Derived from Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 04-18-2013 |
20130094709 | APPARATUS AND METHOD FOR DETECTING SPECIFIC OBJECT PATTERN FROM IMAGE - A face area is detected from an image captured by an image pickup device, pixel values of the image are adjusted based on information concerning the detected face area, a person area is detected from the adjusted image, and the detected face area is integrated with the detected person area. With this configuration, it is possible to accurately detect an object even in a case, for example, where the brightness is varied. | 04-18-2013 |
20130094710 | POSITIONAL LOCATING SYSTEM AND METHOD - A method and system are disclosed for locating or otherwise generating positional information for an object, such as, but not limited to, generating positional coordinates for an object attached to an athlete engaging in an athletic event. The positional coordinates may be processed with other telemetry and biometrical information to provide real-time performance metrics while the athlete engages in the athletic event. | 04-18-2013 |
20130094711 | IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREOF - In an image included in a moving image, a specific area is registered as a reference area, and a specific hue range of the reference area is set as a first feature amount based on the distribution of hues of pixels in the reference area. When the occupation ratio of pixels having hues included in a second feature amount, obtained by expanding the hue range of the first feature amount in a surrounding area larger than the reference area, is smaller than a predetermined ratio, an area having a high degree of correlation is identified from an image using the second feature amount in the subsequent matching process. When the occupation ratio is equal to or larger than the predetermined ratio, an area having a high degree of correlation is identified from an image using the first feature amount in the subsequent matching process. | 04-18-2013 |
20130094712 | SYSTEMS AND METHODS FOR EYE TRACKING USING RETROREFLECTOR-ENCODED INFORMATION - Embodiments of the present invention are directed to eye tracking systems and methods that can be used in uncontrolled environments and under a variety of lighting conditions. In one aspect, an eye tracking system ( | 04-18-2013 |
20130101155 | CONTROLLER FOR AN IMAGE STABILIZING ORTHOGONAL TRANSFER CHARGE-COUPLED DEVICE - An apparatus includes a video sensing device, a velocity vector estimator (VVE) coupled to the video sensing device, a controller coupled to the velocity vector estimator, and an orthogonal transfer charge-coupled device (OTCCD) coupled to the controller. The video sensing device transmits a plurality of image frames to the velocity vector estimator. The controller receives a location of an object in a current frame, stores locations of the object in one or more previous frames, predicts a motion trajectory and the location of the object along it in a subsequent frame as a function of the locations of the object in the current frame and the one or more previous frames, and transmits the predicted location of the object to the OTCCD. The OTCCD shifts its image array of pixels as a function of the predicted location of the object. | 04-25-2013 |
20130101156 | METHOD AND APPARATUS PERTAINING TO NON-INVASIVE IDENTIFICATION OF MATERIALS - A control circuit having access to information regarding a plurality of models for different materials along with feasibility criteria processes imaging information for an object (as provided, for example, by a non-invasive imaging apparatus) to facilitate identifying the materials that comprise that object, by using the plurality of models to identify candidate materials for portions of the imaging information and then using the feasibility criteria to reduce the candidate materials by avoiding at least one of unlikely materials and combinations of materials, to thereby yield useful material-identification information. | 04-25-2013 |
20130101157 | OPTIMIZING THE DETECTION OF OBJECTS IN IMAGES - A system and method detect objects in a digital image. At least positional data associated with a vehicle is received. Geographical information associated with the positional data is received. A probability of detecting a target object within a corresponding geographic area associated with the vehicle is determined based on the geographical data. The probability is compared to a given threshold. An object detection process is at least one of activated and maintained in an activated state in response to the probability being one of above and equal to the given threshold. The object detection process detects target objects within at least one image representing at least one frame of a video sequence of an external environment. The object detection process is at least one of deactivated and maintained in a deactivated state in response to the probability being below the given threshold. | 04-25-2013 |
20130101158 | DETERMINING DIMENSIONS ASSOCIATED WITH AN OBJECT - Devices, methods, and systems for determining dimensions associated with an object are described herein. One system includes a range camera configured to produce a range image of an area in which the object is located, and a computing device configured to determine the dimensions of the object based, at least in part, on the range image. | 04-25-2013 |
20130101159 | IMAGE AND VIDEO BASED PEDESTRIAN TRAFFIC ESTIMATION - Person detection and tracking techniques may be used to estimate pedestrian traffic in locations equipped with cameras. Persons detected in video data from the cameras may help determine existing pedestrian traffic data. Future pedestrian traffic estimation may be performed to estimate pedestrian traffic characteristics (such as volume, direction, etc.). Such traffic estimation may be provided to users for route planning/congestion information. A traffic map can be derived based on the number of people at or expected to be at certain locations. The map may be provided to users to provide traffic data and/or estimations. | 04-25-2013 |
20130101160 | GENERATION OF A DISPARITY RESULT WITH LOW LATENCY - A system for generating disparity results comprises an interface, a first memory, a second memory, and a processor. The interface is for receiving a first element of a first set of image data and a first element of a second set of image data. The first memory is for storing the first element of the first set of image data. The second memory is for storing the first element of the second set of image data. The processor is for generating a disparity result for a first element before all elements of the first data set and the second data set have been received. The disparity result is generated using a low latency image processing system that processes a plurality of elements of the first set of image data and a plurality of elements of the second set of image data. | 04-25-2013 |
20130101161 | METHOD AND DEVICE FOR DISTINGUISHING A SELF-LUMINOUS OBJECT FROM A REFLECTING OBJECT - A method and device for distinguishing a self-luminous object from a reflecting object in a detection range of a camera of a vehicle having at least one headlight, when the object is illuminated by the headlight, are described. The method includes a step of receiving a relative position of the object with respect to the vehicle and a brightness value of the object from the camera. Furthermore, the method includes a step of comparing the brightness value to a self-luminous value expected at the relative position and a reflection value expected at the relative position. Moreover, the method includes a step of classifying the object as self-luminous, if the brightness value is within a self-luminous tolerance range about the self-luminous value or as reflecting, if the brightness value is within a reflection tolerance range about the reflection value. | 04-25-2013 |
20130101162 | Multimedia System with Processing of Multimedia Data Streams - A media system is disclosed that records and/or stores images, video, and/or audio representing a scene in its field of view into a multimedia data stream. The media system extracts and/or frames one or more particular objects from the images, video, and/or audio of the multimedia data stream and/or from images, video, and/or audio of previously recorded multimedia data streams to provide a processed multimedia data stream. The media system plays back the images, video, and/or audio of the processed multimedia data stream. | 04-25-2013 |
20130101163 | METHOD AND/OR APPARATUS FOR LOCATION CONTEXT IDENTIFIER DISAMBIGUATION - The subject matter disclosed herein relates to a method, apparatus, and/or system for obtaining one or more images captured at a mobile device and determining a location context identifier (LCI) identifying an area including a location of the mobile device based, at least in part, on the one or more captured images. The LCI may be selected from among a plurality of LCIs. | 04-25-2013 |
20130101164 | METHOD OF REAL-TIME CROPPING OF A REAL ENTITY RECORDED IN A VIDEO SEQUENCE - A method of real-time cropping of a real entity in motion in a real environment and recorded in a video sequence, the real entity being associated with a virtual entity, the method comprising the following steps: extraction (S | 04-25-2013 |
20130101165 | METHOD AND DEVICE FOR LOCATING PERSONS IN A PRESCRIBED AREA - The invention relates to a method and device for locating persons ( | 04-25-2013 |
20130101166 | EVALUATING FEATURES IN AN IMAGE POSSIBLY CORRESPONDING TO AN INTERSECTION OF A PALLET STRINGER AND A PALLET BOARD - A programmable computer-implemented method is provided for finding possible corners of a pallet in an image. The method may comprise: acquiring a grey scale image including one or more pallets; determining, using a computer, horizontal cross correlations between the image and a first step-edge template to generate a set of horizontal cross correlation results; determining, using the computer, vertical cross correlations between the image and a second step-edge template to generate a set of vertical cross correlation results; and determining, using the computer, a first set of pixels, each such pixel respectively corresponding to a possible first corner of the one or more pallets, using a first corner template, the set of horizontal cross correlation results and the set of vertical cross correlation results. | 04-25-2013 |
20130101167 | IDENTIFYING, MATCHING AND TRACKING MULTIPLE OBJECTS IN A SEQUENCE OF IMAGES - A method of tracking scored candidate objects in a sequence of image frames acquired by a camera is provided. The scored candidate objects may comprise a set of existing scored candidate objects associated with a prior image frame and a set of new scored candidate objects associated with a next image frame. | 04-25-2013 |
20130101168 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - According to one embodiment, an information processing apparatus includes an acquirement unit and a reporting unit. The acquirement unit is configured to acquire an image captured by an image capturing section. When a similarity, representing the degree to which the image of an object captured by the image capturing section resembles the reference image of each commodity, meets a condition for determining the captured commodity as one of the commodities corresponding to the reference images, the reporting unit is configured to report that the captured commodity is determined as the commodity meeting the condition and corresponding to the reference image. | 04-25-2013 |
20130101169 | IMAGE PROCESSING METHOD AND APPARATUS FOR DETECTING TARGET - An image processing method for detecting a target includes: an image acquiring unit for acquiring depth information of an image; a histogram creating unit for creating a histogram on the depth information of the image; a critical value setting unit for setting a critical value of the depth information for detecting a region of a detection object from the image; an image processing unit for extracting a region of the detection object from the image by using the set critical value of the depth information; a data verifying unit for verifying whether the extracted region of the detection object corresponds to the target; and a storage unit for storing the extracted region of the detection object. A target is detected based on depth information of an image. | 04-25-2013 |
20130101170 | METHOD OF IMAGE PROCESSING AND DEVICE THEREFORE - Disclosed are an image processing method and an image processing apparatus. The image processing method includes dividing the image into a plurality of regions; setting a portion of the divided regions to a first region of interest; detecting a candidate region for a target from the first region of interest; determining if the detected candidate region corresponds to the target; detecting a target region by using the candidate region if the candidate region corresponds to the target; estimating a pose of the target by using the detected target region; and performing modeling with respect to the target. | 04-25-2013 |
20130108102 | Abandoned Object Recognition Using Pedestrian Detection | 05-02-2013 |
20130108103 | IMAGE PROCESSING | 05-02-2013 |
20130108104 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM | 05-02-2013 |
20130108105 | APPARATUS AND METHOD FOR MASKING PRIVACY REGION BASED ON MONITORED VIDEO IMAGE | 05-02-2013 |
20130108106 | SHOTSPOT SYSTEM | 05-02-2013 |
20130108107 | VISION RECOGNITION APPARATUS AND METHOD | 05-02-2013 |
20130108108 | Information Processing Apparatus, Information Processing Method, and Computer Program | 05-02-2013 |
20130108109 | Method and device for the detection of moving objects in a video image sequence | 05-02-2013 |
20130108110 | SYSTEM AND METHOD FOR REMOTELY MONITORING AND/OR VIEWING IMAGES FROM A CAMERA OR VIDEO DEVICE | 05-02-2013 |
20130108111 | SYSTEM AND METHOD FOR TRACKING MOVING OBJECTS | 05-02-2013 |
20130108112 | POSITION AND ORIENTATION MEASUREMENT METHOD AND POSITION AND ORIENTATION MEASUREMENT APPARATUS | 05-02-2013 |
20130108113 | METHOD AND APPARATUS FOR CONTROLLING USER EQUIPMENT | 05-02-2013 |
20130114849 | SERVER-ASSISTED OBJECT RECOGNITION AND TRACKING FOR MOBILE DEVICES - Exemplary embodiments for performing server-assisted object recognition and tracking are disclosed herein. For example, in certain embodiments of the disclosed technology, one or more objects are efficiently recognized and tracked on a mobile device by using a remote server that can provide high capacity computing and storage resources. With the benefit of high-speed image processing on a remote server and high-bandwidth communication networks connecting the mobile device and the remote server, it is possible to identify an object and to track changes in the object's characteristics or location, so that a user experiences seamless, real-time tracking. | 05-09-2013 |
20130114850 | SYSTEMS AND METHODS FOR HIGH-RESOLUTION GAZE TRACKING - A system is mounted within eyewear or headwear to unobtrusively produce and track reference locations on the surface of one or both eyes of an observer. The system utilizes multiple illumination sources and/or multiple cameras to generate and observe glints from multiple directions. The use of multiple illumination sources and cameras can compensate for the complex, three-dimensional geometry of the head and anatomical variations of the head and eye region that occurs among individuals. The system continuously tracks the initial placement and any slippage of eyewear or headwear. In addition, the use of multiple illumination sources and cameras can maintain high-precision, dynamic eye tracking as an eye moves through its full physiological range. Furthermore, illumination sources placed in the normal line-of-sight of the device wearer increase the accuracy of gaze tracking by producing reference vectors that are close to the visual axis of the device wearer. | 05-09-2013 |
20130114851 | RELATIVE POSE ESTIMATION OF NON-OVERLAPPING CAMERAS USING THE MOTION OF SUBJECTS IN THE CAMERA FIELDS OF VIEW - A relative pose between two cameras is determined by using input data obtained from the motion of subjects, such as pedestrians, between the fields of view of two cameras, determining trajectory information for the subjects, and computing homographies relating lines obtained from trajectories in the first image data to lines obtained from the trajectories in the second image data. The two fields of view need not overlap. | 05-09-2013 |
20130114852 | HUMAN FACE RECOGNITION METHOD AND APPARATUS - A human face recognition method and apparatus are provided. A processor of the human face recognition apparatus calculates red, green, and blue component statistic information for each of a plurality of human face images. The processor uses an independent component analysis algorithm to analyze component statistic information of two colors and derive a piece of first component information and a piece of second component information. The processor transforms the pieces of first component information and second component information into a frequency domain to derive a piece of first frequency-domain information and a piece of second frequency-domain information. The processor calculates an energy value of the first frequency-domain information within a frequency range. The energy value is used to decide whether the human face images are captured from a human being. | 05-09-2013 |
20130114853 | LOW-LIGHT FACE DETECTION - A face is detected within a camera's field of view despite inadequate illumination. In various embodiments, multiple images of the inadequately illuminated field of view are obtained and summed into a composite image. The composite image is tone-mapped based on a facial lighting model, and a bounded group of pixels in the tone-mapped image having a lighting distribution indicative of a face is identified. Facial features are resolved within the bounded group of pixels. | 05-09-2013 |
20130114854 | TRACKING APPARATUS AND TRACKING METHOD - A tracking apparatus includes an image data acquisition unit, a tracking process unit, a contrast information acquisition unit, a contrast information similarity evaluation unit, and a control unit. The image data acquisition unit acquires image data. The tracking process unit detects a candidate position of a tracking target in image data. The contrast information acquisition unit acquires contrast information at the candidate position. The contrast information similarity evaluation unit evaluates a similarity between contrast information at a position of the tracking target decided in a past frame and current frame. The control unit decides the position of the tracking target in the current frame based on the evaluation of the similarity. | 05-09-2013 |
20130114855 | Methods and Systems for Detection and Identification of Concealed Materials - Methods and systems for efficiently and accurately detecting and identifying concealed materials. The system includes an analysis subsystem configured to process a number of pixelated images, the number of pixelated images obtained by repeatedly illuminating regions with an electromagnetic radiation source from a number of electromagnetic radiation sources, each repetition performed with a different wavelength. The number of pixelated images, after processing, constitute a vector of processed data at each pixel from a number of pixels. At each pixel, the vector of processed data is compared to a predetermined vector corresponding to a predetermined material, presence of the predetermined material being determined by the comparison. | 05-09-2013 |
20130114856 | METHOD FOR POSE INVARIANT FINGERPRINTING - A computer-implemented method for matching objects is disclosed. At least two images where one of the at least two images has a first target object and a second of the at least two images has a second target object are received. At least one first patch from the first target object and at least one second patch from the second target object are extracted. A distance-based part encoding between each of the at least one first patch and the at least one second patch based upon a corresponding codebook of image parts including at least one of part type and pose is constructed. A viewpoint of one of the at least one first patch is warped to a viewpoint of the at least one second patch. A parts level similarity measure based on the view-invariant distance measure for each of the at least one first patch and the at least one second patch is applied to determine whether the first target object and the second target object are the same or different objects. | 05-09-2013 |
20130114857 | IMAGE PROCESSING DEVICE AND METHOD, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING PROGRAM - An image processing device includes: an entire image display control portion that performs control to display an entire image of a predetermined region in an entire image display window; and a cutout image display control portion that performs control to enlarge a plurality of tracking subjects included in the entire image and display the tracking subjects in a cutout image display window. The cutout image display control portion performs the control in such a manner that one cutout image including the tracking subjects is displayed in the cutout image display window in a case where relative distances among the tracking subjects are equal to or smaller than a predetermined value, and that two cutout images including the respective tracking subjects are displayed in the cutout image display window in a case where the relative distances among the tracking subjects are larger than the predetermined value. | 05-09-2013 |
20130114858 | Method for Detecting a Target in Stereoscopic Images by Learning and Statistical Classification on the Basis of a Probability Law - A method for the detection of a target present in at least two images of the same scene acquired simultaneously by different cameras comprises, under development conditions, a prior target-learning step, said learning step including a step of modeling of the data X corresponding to an area of interest in the images by a distribution law P such that P(X)=P(X | 05-09-2013 |
20130114859 | IMAGE INFORMATION ACQUIRING APPARATUS, IMAGE INFORMATION ACQUIRING METHOD AND IMAGE INFORMATION ACQUIRING PROGRAM - An image information acquiring apparatus of the present invention includes an acoustic wave detector having, disposed on a reception surface thereof, a plurality of elements that detect acoustic waves generated by an object corresponding to a reconstruction area; an acoustic signal generator that generates acoustic signals that are used in image reconstruction, from the detected acoustic waves; an element selector that selects elements that are used in image reconstruction; and a reconstructor that performs image reconstruction of a point of interest using acoustic signals based on the acoustic waves detected by the selected elements, the image information acquiring apparatus being configured such that, for each selected element, there exists another selected element located at a symmetrical position with respect to a point at which the reception surface is intersected by a perpendicular line drawn from the point of interest to the reception surface. | 05-09-2013 |
20130121526 | COMPUTING 3D SHAPE PARAMETERS FOR FACE ANIMATION - A three-dimensional shape parameter computation system and method for computing three-dimensional human head shape parameters from two-dimensional facial feature points. A series of images containing a user's face is captured. Embodiments of the system and method deduce the 3D parameters of the user's head by examining a series of captured images of the user over time and in a variety of head poses and facial expressions, and then computing an average. An energy function is constructed over a batch of frames containing 2D face feature points obtained from the captured images, and the energy function is minimized to solve for the head shape parameters valid for the batch of frames. Head pose parameters and facial expression and animation parameters can vary over each captured image in the batch of frames. In some embodiments this minimization is performed using a modified Gauss-Newton minimization technique using a single iteration. | 05-16-2013 |
20130121527 | SYSTEMS AND METHODS FOR ANALYSIS OF VIDEO CONTENT, EVENT NOTIFICATION, AND VIDEO CONTENT PROVISION - A method for remote event notification over a data network is disclosed. The method includes receiving video data from any source, analyzing the video data with reference to a profile to select a segment of interest associated with an event of significance, encoding the segment of interest, and sending to a user a representation of the segment of interest for display at a user display device. A further method for sharing video data based on content according to a user-defined profile over a data network is disclosed. The method includes receiving the video data, analyzing the video data for relevant content according to the profile, consulting a profile to determine a treatment of the relevant content, and sending data representative of the relevant content according to the treatment. | 05-16-2013 |
20130121528 | INFORMATION PRESENTATION DEVICE, INFORMATION PRESENTATION METHOD, INFORMATION PRESENTATION SYSTEM, INFORMATION REGISTRATION DEVICE, INFORMATION REGISTRATION METHOD, INFORMATION REGISTRATION SYSTEM, AND PROGRAM - An information presentation device includes an object information acquiring unit and an information presentation control unit. The object information acquiring unit acquires object identification information and relative positional information on the relative position between an object and a camera. The object identification information and the relative positional information are obtained by performing processing for detecting and identifying the object for image data. The information presentation control unit controls presentation of information on the basis of the object identification information and the relative positional information. | 05-16-2013 |
20130121529 | MILLIMETER-WAVE SUBJECT SURVEILLANCE WITH BODY CHARACTERIZATION FOR OBJECT DETECTION - An imaging apparatus may include an interrogating apparatus, such as a scanner, configured to transmit toward and receive from a test subject in a target position, electromagnetic radiation in a frequency range of about 100 MHz to about 2 THz. The interrogating apparatus or scanner may produce an image signal representative of the received radiation. A controller may store in memory reference-image data for at least one reference subject. The controller may produce test-image data from the image signal and may compare at least a portion of the test-image data with at least a portion of the reference-image data for the at least one reference subject. | 05-16-2013 |
20130121530 | MICROSCOPY METHOD FOR IDENTIFYING BIOLOGICAL TARGET OBJECTS - The invention relates to a microscopy method for identifying target objects ( | 05-16-2013 |
20130121531 | SYSTEMS AND METHODS FOR AUGMENTING A REAL SCENE - Systems and devices for augmenting a real scene in a video stream are disclosed herein. | 05-16-2013 |
20130121532 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 05-16-2013 |
20130121533 | INTER-TRAJECTORY ANOMALY DETECTION USING ADAPTIVE VOTING EXPERTS IN A VIDEO SURVEILLANCE SYSTEM - A sequence layer in a machine-learning engine configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses the voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an ngram trie for those label sequences. The sequence layer computes the entropies for the nodes in the ngram trie and determines a sliding window length and vote count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene as well as issue alerts for inter-sequence and intra-sequence anomalies. | 05-16-2013 |
20130121534 | Image Processing Apparatus And Image Sensing Apparatus - A tracking process portion includes a search area setting portion for setting a search area in the input image, an image analysis portion for analyzing an image in the search area, an auxiliary track value setting portion for setting an auxiliary track value based on a result of the analysis, a track value setting portion for setting a track value based on a result of the analysis and deciding whether the set track value is correct or not, and a track target detection portion for detecting a track object from the image in the search area based on the track value. If the set track value is incorrect, the track value setting portion performs a switching operation for setting the auxiliary track value as the track value. | 05-16-2013 |
20130121535 | DETECTION DEVICE AND METHOD FOR TRANSITION AREA IN SPACE - Provided is a transition area detection device capable of detecting, with high precision, a transition area in a space without using a positioning sensor. The transition area detection device has a corresponding point search-use feature point selection unit for selecting feature points used for determining a reference image from among feature points of an input image (captured image), a geometric transformation parameter calculation-use feature point selection unit for selecting feature points used for calculating geometric transformation parameters from among feature points of the input image and feature points of the reference image, and a degree of similarity calculation-use feature point selection unit for selecting feature points used for obtaining a degree of similarity between the captured image and the reference image from among the feature points of the input image and the feature points of the reference image. | 05-16-2013 |
20130129141 | Methods and Apparatus for Facial Feature Replacement - Three dimensional models corresponding to a target image and a reference image are selected based on a set of feature points defining facial features in the target image and the reference image. The set of feature points defining the facial features in the target image and the reference image are associated with corresponding 3-dimensional models. A 3D motion flow between the 3-dimensional models is computed. The 3D motion flow is projected onto a 2D image plane to create a 2D optical field flow. The target image and the reference image are warped using the 2D optical field flow. A selected feature from the reference image is copied to the target image. | 05-23-2013 |
20130129142 | AUTOMATIC TAG GENERATION BASED ON IMAGE CONTENT - Automatic extraction of data from and tagging of a photo (or video) having an image of identifiable objects is provided. A combination of image recognition and extracted metadata, including geographical and date/time information, is used to find and recognize objects in a photo or video. Upon finding a matching identifier for a recognized object, the photo or video is automatically tagged with one or more keywords associated with and corresponding to the recognized objects. | 05-23-2013 |
20130129143 | Global Classifier with Local Adaption for Object Detection - Aspects of the present invention include object detection training systems and methods, and object detection systems and methods that have been trained. Embodiments presented herein include hybrid learning approaches that combine global classification and local adaptations, which automatically adjust model complexity according to data distribution. Embodiments of the present invention automatically determine the model complexity of the local learning algorithm according to the distribution of ambiguous samples, and the local adaptation from the global classifier avoids the common under-training problem of local classifiers. | 05-23-2013 |
20130129144 | APPARATUS AND METHOD FOR DETECTING OBJECT USING PTZ CAMERA - An apparatus for detecting an object includes a filter for filtering a current input image and a background model generated based on a previous input image, a homography matrix estimation unit for estimating a homography matrix between the current input image and the background model, an image alignment unit for converting the background model by applying the homography matrix to a filtered background model and aligning a converted background model and a filtered current input image, and a foreground/background detection unit for detecting a foreground by comparing corresponding pixels between the converted background model and the filtered current input image. | 05-23-2013 |
20130129145 | ORIENTATION CORRECTION METHOD FOR ELECTRONIC DEVICE USED TO PERFORM FACIAL RECOGNITION AND ELECTRONIC DEVICE THEREOF - A method of performing facial recognition and tracking of an image captured by an electronic device includes: utilizing a camera of the electronic device to capture an image including at least a face; displaying the image on a display screen of the electronic device; determining a degree of orientation of the electronic device; and adjusting an orientation of scanning lines used to scan the image for performing face detection so that the orientation of the scanning lines corresponds to the orientation of the electronic device. | 05-23-2013 |
20130129146 | Methods, Circuits, Devices, Apparatuses and Systems for Providing Image Composition Rules, Analysis and Improvement - The present invention includes methods, circuits, devices, apparatuses and systems for analyzing, characterizing and/or rating the composition of images. Further embodiments of the present invention include methods, circuits, devices, apparatuses and systems for providing instructive feedback or automatic corrective actions, relating to the quality of the composition of an image, to a user of an imaging device (e.g., a digital camera or camera phone), optionally in real time while the user is preparing to acquire an image. Embodiments of the present invention may further include methods, circuits, devices, apparatuses and systems for extracting image composition related rules based on analysis of composition parameters of rated images. | 05-23-2013 |
20130129147 | AUTOMATIC DETECTION OF FIRES ON EARTH'S SURFACE AND OF ATMOSPHERIC PHENOMENA SUCH AS CLOUDS, VEILS, FOG OR THE LIKE, BY MEANS OF A SATELLITE SYSTEM - A method is provided for automatically detecting fires on Earth's surface by satellite. The method includes: acquiring multi-spectral images of the Earth at different times, each a collection of single-spectral images each associated with a respective wavelength, each image being made up of pixels each indicative of a spectral radiance from a respective area of the Earth; computing an adaptive predictive model predicting spectral radiances at a considered time for considered pixels based on previously acquired spectral radiances of the considered pixels and those previously predicted for the considered pixels by the adaptive predictive model; comparing acquired spectral radiances of the considered pixels at a considered time with those predicted at the same considered time for the considered pixels by the adaptive predictive model; and detecting fires or atmospheric phenomena in areas of the Earth's surface or atmosphere corresponding to the considered pixels based on an outcome of the comparison. | 05-23-2013 |
20130129148 | OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD, AND PROGRAM - An object detection device that can accurately identify an object candidate in captured stereo images as an object or a road surface. The object detection device ( | 05-23-2013 |
20130136298 | SYSTEM AND METHOD FOR TRACKING AND RECOGNIZING PEOPLE - A tracking and recognition system is provided. The system includes a computer vision-based identity recognition system configured to recognize one or more persons, without a priori knowledge of the respective persons, via an online discriminative learning of appearance signature models of the respective persons. The computer vision-based identity recognition system includes a memory physically encoding one or more routines, which when executed, cause the performance of constructing pairwise constraints between the unlabeled tracking samples. The computer vision-based identity recognition system also includes a processor configured to receive unlabeled tracking samples collected from one or more person trackers and to execute the routines stored in the memory via one or more algorithms to construct the pairwise constraints between the unlabeled tracking samples. | 05-30-2013 |
20130136299 | METHOD AND APPARATUS FOR RECOVERING DEPTH INFORMATION OF IMAGE - An image processing apparatus and method may estimate binocular disparity maps of middle views from among a plurality of views through use of images of the plurality of views. The image processing apparatus may detect a moving object from the middle views based on the binocular disparity maps of the frames. Pixels in the middle views may be separated into dynamic pixels and static pixels through detection of the moving object. The image processing apparatus may apply bundle optimization and a local three-dimensional (3D) line model-based temporal optimization to the middle views so as to enhance binocular disparity values of the static pixels and dynamic pixels. | 05-30-2013 |
20130136300 | Tracking Three-Dimensional Objects - Method and apparatus for tracking three-dimensional (3D) objects are disclosed. In one embodiment, a method of tracking a 3D object includes constructing a database to store a set of two-dimensional (2D) images of the 3D object using a tracking background, where the tracking background includes at least one known pattern, receiving a tracking image, determining whether the tracking image matches at least one image in the database in accordance with feature points of the tracking image, and providing information about the tracking image in response to the tracking image matching the at least one image in the database. The method of constructing a database also includes capturing the set of 2D images of the 3D object with the tracking background, extracting a set of feature points from each 2D image, and storing the set of feature points in the database. | 05-30-2013 |
20130136301 | METHOD FOR CALIBRATION OF A SENSOR UNIT AND ACCESSORY COMPRISING THE SAME - Method, means, portable terminal accessory and system for calibrating a sensor device comprising a positioning unit detecting the position of the electronic device, an image capturing unit capturing an image of the environment around the electronic device, a processing unit detecting the presence of at least one identifiable object in the image captured and from a comparison of the position of the object in relation to the position of the user determining the heading of a user of the electronic device. Once the heading of the user is determined, it is used to calibrate one or more sensor devices or sensor functionalities in the electronic device. | 05-30-2013 |
20130136302 | APPARATUS AND METHOD FOR CALCULATING THREE DIMENSIONAL (3D) POSITIONS OF FEATURE POINTS - An apparatus for calculating spatial coordinates is disclosed. The apparatus may extract a plurality of feature points from an input image, calculate a direction vector associated with the feature points, and calculate spatial coordinates of the feature points based on a distance between the feature points and the direction vector. | 05-30-2013 |
20130136303 | OBJECT DETECTION APPARATUS, METHOD FOR CONTROLLING THE OBJECT DETECTION APPARATUS, AND STORAGE MEDIUM - An object detection apparatus comprises a detection unit configured to calculate a detection likelihood from each of the plurality of frame images obtained by an image obtaining unit, and to detect a target object from each of the frame images based on the detection likelihood; and a tracking unit configured to calculate a tracking likelihood of the target object from each of the plurality of frame images, and to track the target object over the plurality of frame images based on the tracking likelihood, wherein the detection unit detects the target object from the frame images obtained by the image obtaining unit based on the tracking likelihood of the target object that is calculated by the tracking unit from the frame images, and the detection likelihood of the target object that is calculated by the detection unit from the frame images. | 05-30-2013 |
20130136304 | APPARATUS AND METHOD FOR CONTROLLING PRESENTATION OF INFORMATION TOWARD HUMAN OBJECT - A human object recognition unit recognizes a human object included in a captured image data. A degree-of-interest estimation unit estimates a degree of interest of the human object in acquiring information, based on a recognition result obtained by the human object recognition unit. An information acquisition unit acquires information as a target to be presented to the human object. An information editing unit generates information to be presented to the human object from the information acquired by the information acquisition unit, based on the degree of interest estimated by the degree-of-interest estimation unit. An information display unit outputs the information generated by the information editing unit. | 05-30-2013 |
20130136305 | Pattern generation using diffractive optical elements - Apparatus ( | 05-30-2013 |
20130136306 | OBJECT IDENTIFICATION DEVICE - An object identification device identifying an image region of an identification target includes an imaging unit receiving two polarization lights and imaging respective polarization images, a brightness calculation unit dividing the two polarization images into processing regions and calculating a brightness sum value between the two polarization images for each processing region, a differential polarization degree calculation unit calculating a differential polarization degree for each processing region, a selecting condition determination unit determining whether the differential polarization degree satisfies a predetermined selecting condition, and an object identification processing unit specifying the processing region based on the differential polarization degree or the brightness sum value depending on whether the predetermined selecting condition is satisfied and identifying plural processing regions that are specified as the processing regions as the image region of the identification target. | 05-30-2013 |
20130136307 | METHOD FOR COUNTING OBJECTS AND APPARATUS USING A PLURALITY OF SENSORS - According to one embodiment of the present invention, a method for counting objects involves using an image sensor and a depth sensor, and comprises the steps of: acquiring an image from the image sensor and acquiring a depth map from the depth sensor, the depth map indicating depth information on the subject in the image; acquiring boundary information on objects in the image; applying the boundary information to the depth map to generate a corrected depth map; identifying the depth pattern of the objects from the corrected depth map; and counting the identified objects. | 05-30-2013 |
20130142383 | Scanned Image Projection System with Gesture Control Input - An imaging system ( | 06-06-2013 |
20130142384 | ENHANCED NAVIGATION THROUGH MULTI-SENSOR POSITIONING - Enhanced navigation and positional metadata are provided based upon position determination utilizing data provided by multiple different systems of sensors. Infrastructure (fixed-sensor) data provides an initial location determination of humans, and user-specific sensors that are co-located with their respective users provide identification of the users whose locations were determined. Navigation instructions provided based on the determined locations are enhanced by additional sensor data that is received from other user-specific sensors that are co-located with the users. Additionally, user privacy can be maintained by only utilizing sensor data authorized by the user, or by publishing fixed sensor data, identifying locations and movements of users, but not their identity, thereby enabling a user's computing device to match such information to the information obtained from user-specific sensors to determine the user's location. | 06-06-2013 |
20130142385 | VEHICLE GHOSTING ON FULL WINDSHIELD DISPLAY - A method to display a ghosting image upon a transparent windscreen head-up display in a vehicle includes monitoring an operating environment of the vehicle, monitoring a driver registration input, determining a registered desired location graphic illustrating a future desired location for the vehicle based upon the operating environment of the vehicle and the driver registration input, and displaying the registered desired location graphic upon the head-up display. | 06-06-2013 |
20130142386 | System And Method For Evaluating Focus Direction Under Various Lighting Conditions - A system and method for generating a direction confidence measure includes a camera sensor device that captures blur images of a photographic target. A depth estimator calculates matching errors for the blur images. The depth estimator then generates the direction confidence measure by utilizing the matching errors and a dynamic optimization constant that is selected depending upon image characteristics of the blur images. | 06-06-2013 |
20130142387 | Identifying a Target Object Using Optical Occlusion - Methods and apparatuses are described for identifying a target object using optical occlusion. A head-mounted display perceives a characteristic of a reference object. The head-mounted display detects a change of the perceived characteristic of the reference object and makes a determination that a detected object caused the change of the perceived characteristic. In response to making the determination, the head-mounted display identifies the detected object as the target object. | 06-06-2013 |
20130142388 | ARRIVAL TIME ESTIMATION DEVICE, ARRIVAL TIME ESTIMATION METHOD, ARRIVAL TIME ESTIMATION PROGRAM, AND INFORMATION PROVIDING APPARATUS - An arrival time estimation device includes an image input unit configured to input an image signal for each frame, an object detecting unit configured to detect an object indicated by the image signal input through the image input unit, and an arrival time calculating unit configured to calculate a rotation matrix indicating rotation of an optical axis of an imaging device that captures the image signal based on a direction vector indicating a direction to the object detected by the object detecting unit, to calculate a change in a distance to the object based on a vector obtained by multiplying a past direction vector by the calculated rotation matrix and a current direction vector, and to calculate an arrival time to the object based on the calculated distance change. | 06-06-2013 |
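The final step of the arrival-time estimation above reduces to dividing the remaining distance by the observed closing speed. A hedged sketch of just that step (the rotation-matrix and direction-vector machinery of the abstract is omitted; the function name and signature are illustrative):

```python
def arrival_time(prev_distance, curr_distance, dt):
    """Time-to-contact estimate: remaining distance divided by the
    closing speed observed between two frames dt seconds apart.
    Returns None when the object is not approaching."""
    closing_speed = (prev_distance - curr_distance) / dt
    if closing_speed <= 0:
        return None
    return curr_distance / closing_speed
```

For example, an object 9 m away that closed 1 m over the last second is estimated to arrive in 9 s; a receding object yields no estimate.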
20130142389 | EYE STATE DETECTION APPARATUS AND METHOD OF DETECTING OPEN AND CLOSED STATES OF EYE - An eye state detection apparatus includes a camera, a first calculator, a memory, a second calculator, and a third calculator. The camera obtains a plurality of face images of a driver. The first calculator calculates an opening amount of an eye of the driver based on each face image. The memory stores the opening amounts calculated by the first calculator. The second calculator groups the opening amounts into a plurality of groups in a sequential manner, calculates a group distribution of each group, calculates an entire distribution of all of the opening amounts, and sets the entire distribution as a reference distribution when a difference among the group distributions is within a predetermined range. The third calculator calculates an opening degree of the eye based on the reference distribution of the opening amounts when the reference distribution of the opening amounts is calculated by the second calculator. | 06-06-2013 |
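The grouping-and-reference logic of the eye-state abstract above can be sketched roughly as follows, using variance as the "group distribution" and scalar opening amounts; the group size and tolerance are illustrative assumptions, not values from the patent:

```python
from statistics import mean, pvariance

def reference_distribution(openings, group_size=10, tolerance=0.5):
    """Group sequential eye-opening amounts and accept the distribution
    of all amounts as the reference only while the per-group variances
    agree to within the tolerance; otherwise return None."""
    groups = [openings[i:i + group_size]
              for i in range(0, len(openings), group_size)]
    variances = [pvariance(g) for g in groups if len(g) > 1]
    if variances and max(variances) - min(variances) <= tolerance:
        # Distribution is stable: mean and variance of all amounts
        # serve as the reference for later opening-degree computation.
        return mean(openings), pvariance(openings)
    return None
```

When the driver's blinking pattern changes abruptly between groups, no reference is set and the opening degree cannot yet be computed.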
20130142390 | MONOCULAR 3D POSE ESTIMATION AND TRACKING BY DETECTION - Methods and apparatus are described for monocular 3D human pose estimation and tracking, which are able to recover poses of people in realistic street conditions captured using a monocular, potentially moving camera. Embodiments of the present invention provide a three-stage estimation process. | 06-06-2013 |
20130142391 | Face Recognition Performance Using Additional Image Features - A technique is provided for recognizing faces in an image stream using a digital image acquisition device. A first acquired image is received from an image stream. A first face region is detected within the first acquired image having a given size and a respective location within the first acquired image. First faceprint data uniquely identifying the first face region are extracted along with first peripheral region data around the first face region. The first faceprint and peripheral region data are stored, and the first peripheral region data are associated with the first face region. The first face region is tracked until a face lock is lost. A second face region is detected within a second acquired image from the image stream. Second peripheral region data around the second face region are extracted. The second face region is identified upon matching the first and second peripheral region data. | 06-06-2013 |
20130142392 | INFORMATION PROCESSING DEVICE AND METHOD, PROGRAM, AND RECORDING MEDIUM - An information processing device includes: an outline extraction unit extracting an outline of a subject from a picked-up image of the subject; a characteristic amount extraction unit extracting a characteristic amount, by extracting sample points from points making up the outline, for each of the sample points; an estimation unit estimating a posture of a high degree of matching as a posture of the subject by calculating a degree of the characteristic amount extracted in the characteristic amount extraction unit being matched with each of a plurality of characteristic amounts that are prepared in advance and represent predetermined postures different from each other; and a determination unit determining accuracy of estimation by the estimation unit using a matching cost when the estimation unit carries out the estimation. | 06-06-2013 |
20130148844 | Passenger Detector - A passenger detector includes an image taker, an image processor and a storage unit. The image taker is used for taking an image of a passenger sitting on a seat. The image processor is connected to the image taker. The image processor is used to learn and identify features of the image and possibilities of states of the passenger and integrate the possibilities to select the most likely state of the passenger. The storage unit is connected to the image processor. The storage unit is used to store image data before and after taking the image. | 06-13-2013 |
20130148845 | VEHICLE OCCUPANCY DETECTION USING TIME-OF-FLIGHT SENSOR - Vehicle occupancy detection involves projecting modulated light onto an occupant from a light source outside of a vehicle. Reflections of the light source are received at a detector located outside of the vehicle. Three-dimensional data is determined based on a time-of-flight of the reflections, and the occupant is detected based on the three-dimensional data. | 06-13-2013 |
20130148846 | CHANGING PARAMETERS OF SEQUENTIAL VIDEO FRAMES TO DETECT DIFFERENT TYPES OF OBJECTS - First and second camera parameters are optimized for detecting a respective retroreflective and non-retroreflective object. A sequential series of first and second video frames are captured based on the first and second camera parameters, and the retroreflective and non-retroreflective objects are detected in a camera scene based on the respective first and second video frames of the series. | 06-13-2013 |
20130148847 | POST-PROCESSING A MULTI-SPECTRAL IMAGE FOR ENHANCED OBJECT IDENTIFICATION - What is disclosed is a system and method for post-processing a multi-spectral image which has already been processed for pixel classification. A binary image is received which contains pixels that have been classified using a pixel classification method. Each pixel in the image has an associated intensity value and has a pixel value of 1 or 0 depending on whether the pixel has been classified as a material of interest or not. The image is divided into a plurality of blocks of pixels. On a block by block basis, pixel values in a block are changed according to a threshold-based filtering criterion such that pixels in the same block all have the same binary value. Once all the blocks have been processed, contiguous pixels having the same binary value are grouped to form separate objects. In such a manner, pixel classification errors in the post-processed binary image can be reduced. | 06-13-2013 |
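The block-by-block filtering described above can be sketched as follows, assuming square blocks and a fraction-of-ones threshold (both are illustrative choices; the abstract does not fix the block shape or the filtering criterion):

```python
import numpy as np

def postprocess_binary(image, block=4, threshold=0.5):
    """Snap each block x block tile of a binary classification map to a
    single value: 1 when the fraction of 1-pixels in the tile reaches
    the threshold, 0 otherwise."""
    out = image.copy()
    h, w = image.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            tile = image[r:r + block, c:c + block]
            # Majority-style vote: the whole tile takes one value.
            out[r:r + block, c:c + block] = int(tile.mean() >= threshold)
    return out
```

Isolated misclassified pixels are absorbed by their block, after which connected-component grouping yields the separate objects.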
20130148848 | METHOD AND APPARATUS FOR VIDEO ANALYTICS BASED OBJECT COUNTING - A video analytics based object counting method which can obtain and process video frames from one or more video sources is proposed. By setting a variety of parameters, calculating a reference point, and building a mapping table, a sampled reference image can be constructed to obtain image pixel variation information according to these parameters. Using the changed values of multiple sampling line segments and a pre-defined reference object, total object counts can be estimated by analyzing the number of triggered sampling line segments and their directional states. | 06-13-2013 |
20130148849 | IMAGE PROCESSING DEVICE AND METHOD - An image processing device that accesses a storage unit that stores a feature point of a recognition-target object, the device includes an obtaining unit mounted with a user and configured to obtain image data in a direction of a field of view of the user; a recognizing unit configured to recognize the recognition-target object included in the image data by extracting a feature point from the image data and associating the extracted feature point and the feature point of the recognition-target object stored in the storage unit with each other; a calculating unit configured to calculate a location change amount of the feature point corresponding to the recognition-target object recognized by the recognizing unit from a plurality of the image data obtained at different times and calculate a motion vector of the recognition-target object from the location change amount; and a determining unit configured to determine a movement. | 06-13-2013 |
20130148850 | USER DETECTING APPARATUS, USER DETECTING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM STORING A USER DETECTING PROGRAM - A user detecting apparatus includes: a memory; and a processor that executes a procedure, the procedure including: obtaining a first image and a second image, extracting a user-associated area from the first image according to a given condition, dividing the user-associated area into a plurality of areas, storing a histogram of each of the plurality of areas in the memory, detecting from the second image a corresponding area that corresponds to an area that is one of the plurality of areas and has a first reference histogram according to similarity, and changing a reference histogram used for a third image from the first reference histogram to a second reference histogram. | 06-13-2013 |
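The histogram-similarity matching in the abstract above can be illustrated with intensity histograms and histogram intersection as the similarity measure (an assumption for the sketch; the abstract does not name a specific similarity, and the function names are illustrative):

```python
import numpy as np

def intensity_hist(patch, bins=8):
    """Normalized intensity histogram of an image patch."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def best_match(reference_hist, candidate_patches, bins=8):
    """Index of the candidate patch whose histogram is most similar to
    the reference, scored by histogram intersection (sum of bin-wise
    minima; 1.0 means identical normalized histograms)."""
    scores = [float(np.minimum(reference_hist, intensity_hist(p, bins)).sum())
              for p in candidate_patches]
    return int(np.argmax(scores))
```

In use, the reference histogram stored from the first image is compared against candidate areas of the second image, and the highest-scoring candidate becomes the corresponding area.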
20130148851 | KEY-FRAME SELECTION FOR PARALLEL TRACKING AND MAPPING - A method of selecting a first image from a plurality of images for constructing a coordinate system of an augmented reality system. A first image feature in the first image corresponding to the feature of the marker is determined. A second image feature in a second image is determined based on a second pose of a camera, said second image feature having a visual match to the first image feature. A reconstructed position of the feature of the marker in a three-dimensional (3D) space is determined based on positions of the first and second image features and the first and second camera poses. A reconstruction error is determined based on the reconstructed position of the feature of the marker and a pre-determined position of the marker. | 06-13-2013 |
20130148852 | METHOD, APPARATUS AND SYSTEM FOR TRACKING AN OBJECT IN A SEQUENCE OF IMAGES - A method of tracking an object in a sequence of images is disclosed. | 06-13-2013 |
20130148853 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus and method may accurately separate only humans among moving objects, and also accurately separate even humans who have no motion, via human segmentation using depth data and face detection technology. The apparatus includes a face detecting unit to detect a human face in an input color image, a background model producing/updating unit to produce a background model using depth data of an input first frame and face detection results, a candidate region extracting unit to produce a candidate region as a human body region by comparing the background model with depth data of an input second or subsequent frame, and to extract a final candidate region by removing a region containing a moving object other than a human from the candidate region, and a human body region extracting unit to extract the human body region from the candidate region. | 06-13-2013 |
20130148854 | METHOD, SYSTEM AND APPARATUS FOR DETERMINING A SUBJECT AND A DISTRACTOR IN AN IMAGE - A method of identifying a subject and a distractor in a target image is disclosed. The method receives a reference image comprising image content corresponding to image content of the target image. A first saliency map, which defines a distribution of visual attraction values identifying salient regions within the target image, and a second saliency map, which defines a distribution of visual attraction values identifying salient regions within the reference image, are determined. The method compares image content in salient regions of the first saliency map and the second saliency map. The subject is identified by a salient region of the target image sharing image content with a salient region of the reference image. The distractor is identified based on at least one remaining salient region of the target image. | 06-13-2013 |
20130148855 | POSITIONING INFORMATION FORMING DEVICE, DETECTION DEVICE, AND POSITIONING INFORMATION FORMING METHOD - Provided is a positioning information forming device which improves object detection accuracy. This device comprises a synthesis unit. | 06-13-2013 |
20130156260 | PROBLEM STATES FOR POSE TRACKING PIPELINE - A human subject is tracked within a scene of an observed depth image supplied to a pose tracking pipeline. An indication of a problem state is received from the pose tracking pipeline, and an identification of the problem state is supplied to the pose tracking pipeline. A virtual skeleton is received from the pose tracking pipeline that includes a plurality of skeletal points defined in three-dimensions. The pose tracking pipeline selects a three-dimensional position of at least one of the plurality of skeletal points in accordance with the identification of the problem state supplied to the pose-tracking pipeline. | 06-20-2013 |
20130156261 | METHOD AND APPARATUS FOR OBJECT DETECTION USING COMPRESSIVE SENSING - In one embodiment, the method for object detection using compressive sensing includes receiving, by a decoder, measurements. The measurements are coded data that represents video data. The method further includes estimating, by the decoder, probability density functions based upon the measurements. The method further includes identifying, by the decoder, a background image and at least one foreground image based upon the estimated probability density functions. The method further includes examining the at least one foreground image to detect at least one object of interest. | 06-20-2013 |
20130156262 | Voting-Based Pose Estimation for 3D Sensors - A pose of an object is estimated by first defining a set of pair features as pairs of geometric primitives, wherein the geometric primitives include oriented surface points, oriented boundary points, and boundary line segments. Model pair features are determined based on the set of pair features for a model of the object. Scene pair features are determined based on the set of pair features from data acquired by a 3D sensor, and then the model pair features are matched with the scene pair features to estimate the pose of the object. | 06-20-2013 |
20130156263 | VERIFICATION METHOD, VERIFICATION DEVICE, AND COMPUTER PRODUCT - A verification device is disclosed. | 06-20-2013 |
20130156264 | MINIMIZING DRIFT USING DEPTH CAMERA IMAGES - A device may obtain, from a camera associated with a reference object, depth image data including objects in a first frame and a second frame; identify features of the objects in the first frame and the second frame; and track movements of the features between the first frame and the second frame. The device may also identify independently moving features in the second frame, based on the tracking movements; remove the independently moving features from the depth image data to obtain a static feature set; and process the depth image data corresponding to the static feature set to detect changes in the relative position of objects in the first frame and the second frame. The processor may further translate the changes in relative position into corresponding movement data of the camera and provide the corresponding movement data to an inertial navigation system. | 06-20-2013 |
20130156265 | System and Method for Analyzing Three-Dimensional (3D) Media Content - A system and method are provided that use point of gaze information to determine what portions of 3D media content are actually being viewed to enable a 3D media content viewing experience to be improved. Tracking eye movements of viewers to obtain such point of gaze information are used to control characteristics of the 3D media content during consumption of that media, and/or to improve or otherwise adjust or refine the 3D media content during creation thereof by a media content provider. Outputs may be generated to illustrate what in the 3D media content was viewed at incorrect depths. Such outputs may then be used in subsequent or offline analyses, e.g., by editors for media content providers when generating the 3D media itself, in order to gauge the 3D effects. A quality metric can be computed based on the point of gaze information, which can be used to analyze the interactions between viewers and the 3D media content being displayed. The quality metric may also be calibrated in order to accommodate offsets and other factors and/or to allow for aggregation of results obtained for multiple viewers. | 06-20-2013 |
20130156266 | FUNCTION EXTENSION DEVICE, FUNCTION EXTENSION METHOD, COMPUTER-READABLE RECORDING MEDIUM, AND INTEGRATED CIRCUIT - An object recognition unit recognizes, from real-space video data, a body included in the video data. A function setting unit retains function information in which is prescribed a function configured from a pair of operation and processing that can be set for each type of body. In addition, the function setting unit sets, to each body recognized by the object recognition unit, a function that can be set, based on the type of each body. A selection determination unit determines a selected body selected by a user as the body to be operated among the respective bodies recognized by the object recognition unit. An operation determination unit determines the operation that the user has performed on the selected body. A processing determination unit determines the processing for the operation that has been determined by the operation determination unit among the operations configuring the function set by the function setting unit. | 06-20-2013 |
20130156267 | DIAGNOSIS ASSISTANCE SYSTEM AND COMPUTER READABLE STORAGE MEDIUM - Provided is a diagnosis assistance system. The system includes, an imaging unit, an analysis unit, an operation unit, and a display unit. The analysis unit extracts a subject region from each of the plurality of image frames generated by the imaging unit, divides the extracted subject region into a plurality of regions, and analyzes the divided regions correlated among the plurality of image frames, thereby calculating a predetermined feature quantity indicating motions of the divided regions. The operation unit allows a user to select a region serving as a display target of an analysis result by the analysis unit. The display unit displays the calculated feature quantity regarding the selected region. | 06-20-2013 |
20130163810 | INFORMATION INQUIRY SYSTEM AND METHOD FOR LOCATING POSITIONS - An information inquiry system includes an information acquisition unit, to acquire information of a bus route. An image capture unit captures an image of an object on the bus route. An information processing unit compares the object in the image with the information of the bus route to locate the object and access information of the located object. A storage unit stores the information of the object. An output unit displays the information of the bus route as a map and highlights the located object in the map. An information inquiring method is also provided. | 06-27-2013 |
20130163811 | LAPTOP DETECTION - Provided herein are devices, systems, and methods for the detection of objects (e.g., laptop computers, electronics, explosives, etc.) within luggage. In particular, methods are provided for the detection of laptop computers within luggage (e.g., luggage containing other metallic objects and/or electronic devices). | 06-27-2013 |
20130163812 | INFORMATION PROCESSOR, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM - An information processor includes an image capturing part configured to obtain a displayed screen image; a storage part configured to store the screen image each time the screen image is obtained; an image comparison part configured to generate one or more difference pixels by comparing a screen image stored last and the obtained screen image; a difference region determination part configured to determine the smallest rectangular region including the difference pixels as a difference region based on a predetermined rectangle formed of a predetermined number of pixels, the screen image being divided using the predetermined rectangle as a unit; a compressed difference image generation part configured to generate a compressed difference image by compressing a difference image using the predetermined rectangle as a unit, the difference region being cut out from the screen image into the difference image; and an image transmission part configured to transmit the compressed difference image. | 06-27-2013 |
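The difference-region step of the abstract above (the smallest rectangle covering all changed pixels, aligned to a fixed pixel grid) can be sketched as follows; the tile size and function name are illustrative assumptions:

```python
def diff_region(prev, curr, tile=16):
    """Smallest tile-aligned rectangle covering all pixels that differ
    between two equally sized screen images (2-D lists of pixel values).
    Returns (top, left, bottom, right) in pixels, or None if identical."""
    changed = [(r, c)
               for r, row in enumerate(prev)
               for c, v in enumerate(row)
               if curr[r][c] != v]
    if not changed:
        return None
    rs = [r for r, _ in changed]
    cs = [c for _, c in changed]
    # Round the bounding box outward to the tile grid.
    top = (min(rs) // tile) * tile
    left = (min(cs) // tile) * tile
    bottom = -(-(max(rs) + 1) // tile) * tile   # ceiling division
    right = -(-(max(cs) + 1) // tile) * tile
    return top, left, bottom, right
```

Only this region then needs to be cut out, compressed, and transmitted, which is the bandwidth saving the abstract is after.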
20130163813 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus for compositing a plurality of images that are shot with different exposures, comprises an object detection unit configured to detect object regions from the images; a main object determination unit configured to determine a main object region from among the object regions; a distance calculation unit configured to calculate object distance information regarding distances to the main object region for the object regions; and a compositing unit configured to composite the object regions of the plurality of images using a compositing method based on the object distance information, so as to generate a high dynamic range image. | 06-27-2013 |
20130163814 | IMAGE SENSING APPARATUS, INFORMATION PROCESSING APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM - Face recognition data to be used in recognizing a person corresponding to a face image is managed upon associating the feature amount of the face image, a first person's name, and a second person's name different from the first person's name with each other for each registered person. A person corresponding to a face image included in a captured image is identified using the feature amount managed in the face recognition data, and the second person's name for the identified person is stored in a storage in association with the captured image. When the image stored in the storage is read out and displayed on a display device, the first person's name which corresponds to the second person's name associated with the readout image is displayed on the display device together with the readout image. | 06-27-2013 |
20130163815 | 3D RECONSTRUCTION OF TRAJECTORY - Disclosed is a method of determining a 3D trajectory of an object from at least two observed trajectories of the object in a scene. The observed trajectories are captured in a series of images by at least one camera, each of the images in the series being associated with a pose of the camera. First and second points of the object from separate parallel planes of the scene are selected. A first set of 2D capture locations corresponding to the first point and a second set of 2D capture locations corresponding to the second point are used to determine an approximated 3D trajectory of the object. | 06-27-2013 |
20130163816 | Prioritized Contact Transport Stream - A detection process, contact recognition process, classification process, and identification process are applied to raw sensor data to produce an identified contact record set containing one or more identified contact records. A prioritization process is applied to the identified contact record set to assign a contact priority to each contact record in the identified contact record set. Data are removed from the contact records in the identified contact record set based on the contact priorities assigned to those contact records. A first contact stream is produced from the resulting contact records. The first contact stream is streamed in a contact transport stream. The contact transport stream may include and stream additional contact streams. The contact transport stream may be varied dynamically over time based on parameters such as available bandwidth, contact priority, presence/absence of contacts, system state, and configuration parameters. | 06-27-2013 |
20130163817 | METHOD AND AN APPARATUS FOR GENERATING IMAGE CONTENT - A method and a system for generating image content. The method and system allow segments of a panoramic scene to be generated with reduced distortion. The method and system reduce the amount of distortion by mapping pixel data onto a pseudo camera focal plane which is provided substantially perpendicularly to the focal location of the camera that captured the image. A camera arrangement can implement the method and system. | 06-27-2013 |
20130163818 | METHOD FOR THE AUTHENTICATION AND/OR IDENTIFICATION OF A SECURITY ITEM - A method for authenticating and/or identifying a security article that includes a transparent or translucent substrate and, on a side of a first face of the substrate, a first image. The method includes superimposing at least partially the first image of the article with a second image. The second image may be produced by an electronic imager. The second image may be situated on the side of a second face of the substrate that is opposite to the first face. The method permits observation of an authentication and/or identification information item of the security article during a change of the angle of observation of the first and second superimposed images. | 06-27-2013 |
20130163819 | SYSTEM AND METHOD FOR IDENTIFYING IMAGE LOCATIONS SHOWING THE SAME PERSON IN DIFFERENT IMAGES - The same person is automatically recognized in different images from his or her clothing. Color pixel values of a first and second image are captured and areas are selected for a determination whether they show the same person. First histograms of pixels in the areas are computed, representing sums of contributions from pixels with color values in histogram bins. Each histogram bin corresponds to a combination of a range of color values and a range of heights in the areas. The ranges of color values are normalized relative to a distribution of color pixel values in the areas. Furthermore, second histograms of pixels in the areas are computed, the second histograms representing sums of contributions from pixels with color values in further histogram bins. The further histogram bins are at least partly unnormalized. First and second histogram intersection scores of the first and second histograms are computed. A combined detection score is computed from the first and second histogram intersection scores. | 06-27-2013 |
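The joint height-and-color binning above can be illustrated for a single grey channel; histogram intersection follows the abstract, while the specific bin counts and the grey-scale simplification are assumptions for the sketch:

```python
import numpy as np

def height_color_hist(patch, color_bins=4, height_bins=4):
    """Joint (height band, color range) histogram of a grey-scale patch:
    each bin counts pixels whose row falls in one height band and whose
    value falls in one color range.  Flattened and normalized to sum 1."""
    h, w = patch.shape
    hist = np.zeros((height_bins, color_bins))
    for r in range(h):
        hb = min(r * height_bins // h, height_bins - 1)
        for c in range(w):
            cb = min(int(patch[r, c]) * color_bins // 256, color_bins - 1)
            hist[hb, cb] += 1
    flat = hist.ravel()
    return flat / max(flat.sum(), 1)

def intersection(h1, h2):
    """Histogram intersection score in [0, 1]."""
    return float(np.minimum(h1, h2).sum())
```

Because the bins encode height as well as color, a red-shirt/blue-jeans person and a blue-shirt/red-trousers person produce different histograms even though their overall color content matches.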
20130170696 | CLUSTERING-BASED OBJECT CLASSIFICATION - An example of a method for identifying objects in video content according to the disclosure includes receiving video content of a scene captured by a video camera, detecting an object in the video content, identifying a track that the object follows over a series of frames of the video content, extracting object features for the object from the video content, and classifying the object based on the object features. Classifying the object further comprises: determining a track-level classification for the object using spatially invariant object features, determining a global-clustering classification for the object using spatially variant features, and determining an object type for the object based on the track-level classification and the global-clustering classification for the object. | 07-04-2013 |
20130170697 | Personal Augmented Reality - A method, article of manufacture and system for receiving, from a user device located at an environment at which a user is viewing an event both with and without the use of the user device, image data containing an image of the environment; receiving a request from the user device for an item of information that relates to the event, for placement into the image of the environment; responsive to receiving the request, accessing a database to retrieve the item of information; generating a scaled image of the item of information based on dimensions of the environment; and transmitting the scaled image for placement into the image of the environment on the user device to generate an augmented reality image. | 07-04-2013 |
20130170698 | IMAGE ACQUISITION SYSTEMS - Image acquisition systems are described herein. One image acquisition system includes an image recording device configured to determine and record a tracking error associated with a raw image of a moving subject, and a computing device configured to deblur the raw image using the tracking error. | 07-04-2013 |
20130170699 | Techniques for Context-Enhanced Confidence Adjustment for Gesture - Techniques are provided for a gesture device to detect a series of gestures performed by a user and execute corresponding electronic commands associated with the gestures. The gesture device detects a gesture constituting movements from a user in three-dimensional space and generates a confidence score value for the gesture. The gesture device selects an electronic command associated with the gesture and compares the electronic command with a prior electronic command associated with a prior gesture previously detected by the gesture device in order to determine a compatibility metric between the electronic command and the prior electronic command. The gesture device then adjusts the confidence score value based on the compatibility metric to obtain a modified confidence score value. The electronic command is executed by the gesture device when the modified confidence score value is greater than a predetermined threshold confidence score value. | 07-04-2013 |
20130170700 | Image Capturing Device Capable of Simplifying Characteristic Value Sets of Captured Images and Control Method Thereof - An image capturing device capable of simplifying characteristic value sets of captured images and a control method thereof. The image capturing device comprises a characteristic conversion module, a data storage module, a characteristic simplification module, a template storage module and a recognition module. The characteristic conversion module converts an image captured by the image capturing device into a characteristic image, and the characteristic image includes a group of first characteristic value sets. The data storage module stores a lookup table which comprises second characteristic value sets. The characteristic simplification module performs a simplification process according to the lookup table to produce a simplified group of characteristic value sets. Finally, the recognition module compares the simplified group of characteristic value sets with the plurality of templates stored in the template storage module to recognize a specific object in the image. | 07-04-2013 |
20130170701 | COMPUTER-READABLE RECORDING MEDIUM AND ROAD SURFACE SURVEY DEVICE - A road surface survey device specifies a position at which an abnormality is detected by only one of two methods: detection of an abnormality on the pavement of a road surface from an image of the road captured by a camera, or detection from a change in acceleration measured by a G sensor while a car runs on the road surface. Further, the road surface survey device derives conditions under which the abnormality that was not detected at the specified position can be detected, and outputs an instruction for a resurvey of the specified position under the derived conditions. | 07-04-2013 |
20130170702 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 07-04-2013 |
20130170703 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD, AND PROGRAM - An image processing device for recognizing, from an imaged image, an object corresponding to a registered image registered beforehand, comprising: an obtaining unit configured to obtain the imaged image; a recognizing unit configured to recognize an object corresponding to the registered image from the imaged image; and a detecting unit configured to detect, based on the registered image corresponding to an object recognized from the imaged image, an area where another object overlaps the object corresponding to that registered image. | 07-04-2013 |
20130170704 | IMAGE PROCESSING APPARATUS AND IMAGE MANAGEMENT METHOD - Provided is an image processing apparatus comprising: an acquisition unit that acquires location information indicating a photographed point and date/time information indicating a photographed date/time for each of a plurality of images representing image data obtained by photographing; a determination unit that determines whether the photographed point of each image is a main photographed point or a sub-photographed point on the basis of the location information and the date/time information; and a recording unit that, if the photographed point of the image is the main photographed point, records information indicating the location of the main photographed point in association with the image data of the image, and that, if the photographed point of the image is the sub-photographed point, records information indicating the locations of the sub-photographed point and of the main photographed point in association with the image data of the image. | 07-04-2013 |
20130170705 | METHOD OF DETECTING PARTICLES BY DETECTING A VARIATION IN SCATTERED RADIATION - A smoke detecting method which uses a beam of radiation such as a laser ( | 07-04-2013 |
20130170706 | GUIDANCE DEVICE, GUIDANCE METHOD, AND GUIDANCE PROGRAM - Image recognition is performed based on a surrounding image and a recognition template used for the image recognition of a marker object, and a recognition confidence level used for determining if the marker object can be recognized in the surrounding image is calculated. A determination is made as to whether the recognition confidence level has increased as compared with the recognition confidence level calculated based on the surrounding image acquired at the guidance output point. If it is determined that the recognition confidence level has increased, the image of the marker object, generated based on the surrounding image acquired at the guidance output point, is stored as a new template to be used for the image recognition of the marker object. This increases the possibility of recognizing the marker object based on the new template, thus increasing the recognition accuracy of the marker object. | 07-04-2013 |
20130170707 | METHOD OF DETECTING SPACE DEBRIS - A method of detecting space debris includes: generating a virtual space debris in accordance with the law of conservation of mass by applying a debris breakup model to an object of breakup origin; calculating an orbit of each virtual space debris based on a debris orbit propagation model; and generating appearance frequency distribution of a motion vector of each virtual space debris on the celestial sphere based on the orbit calculation. The above operations are executed multiple times. The method further includes setting a search range vector based on a motion vector having a high level of the appearance frequency distribution of the motion vector, and applying a stacking method to regions in images captured at time intervals during the fixed point observation, the regions being shifted along the search range vector sequentially in the order of capture, thereby detecting space debris appearing on the images. | 07-04-2013 |
20130170708 | PROCESSING SAR IMAGERY - A method and apparatus ( | 07-04-2013 |
20130177200 | METHOD AND APPARATUS FOR MULTIPLE OBJECT TRACKING WITH K-SHORTEST PATHS - Trajectories of objects are estimated by determining the optimal solution(s) of a tracking model on the basis of an occupancy probability distribution. The occupancy probability distribution is the probability of presence of objects over a set of discrete points in the spatio-temporal space at a number of time steps. The tracking model is defined by the set of discrete points, a virtual source location and a virtual sink location, wherein objects in the tracking model are creatable in the virtual source location and are removable in the virtual sink location. | 07-11-2013 |
20130177201 | RECOGNIZING TEXT AT MULTIPLE ORIENTATIONS - Systems, methods, and apparatus, including software tangibly stored on a computer readable medium, involve identifying text in an electronic document. An electronic document that includes an image object is received. In a first region of the image object, a first set of text characters having a first orientation are recognized. In a second region of the image object, a second set of text characters having a second orientation are recognized. The electronic document is modified to include a first text object identifying the first set of text characters and a second text object identifying the second set of text characters. Each identification of text characters includes a set of values that each represent an individual text character recognized in the corresponding region. | 07-11-2013 |
20130177202 | Method for Controlling a Headlamp System for a Vehicle, and Headlamp System - In a method for controlling a headlamp system for a vehicle, the headlamp system having two headlamps, set apart from each other, road users are detected in front of the vehicle in the driving direction, and a first total light pattern is able to be produced, in which the illumination range on a first side of a center axis is greater than on the other, second side of this center axis, and a second total light pattern is able to be produced, in which the total light pattern is controllable such that it has an illumination range in the direction of at least one detected road user that is less than the distance to the detected road user, and which has an illumination range in another direction that is greater than the distance to the detected road user. During the switch from the first total light pattern to the second total light pattern, the illumination range of at least one headlamp on the first side of the center axis is first reduced to at least such an extent that it is less than the distance to the detected road user, the second total light pattern being produced subsequently. | 07-11-2013 |
20130177203 | OBJECT TRACKING AND PROCESSING - A method includes tracking an object in each of a plurality of frames of video data to generate a tracking result. The method also includes performing object processing of a subset of frames of the plurality of frames selected according to a multi-frame latency of an object detector or an object recognizer. The method includes combining the tracking result with an output of the object processing to produce a combined output. | 07-11-2013 |
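Combining a per-frame tracking result with detector output that carries a multi-frame latency, as this entry describes, might look like the toy sketch below; the drift-correction rule and all names are assumptions for illustration:

```python
def combine(tracked, detected, latency):
    """Fuse per-frame tracked positions with detector output that arrives
    late: detected[j] describes frame j*latency but only becomes available
    `latency` frames later. When it arrives, shift subsequent tracked
    positions by the tracker-vs-detector offset (a toy drift-correction
    scheme, assumed for illustration, not the patented combination)."""
    out = list(tracked)
    for j, det in enumerate(detected):
        ref = j * latency             # frame the detection describes
        avail = ref + latency         # frame at which it becomes available
        offset = det - tracked[ref]   # accumulated tracker drift
        for i in range(avail, len(out)):
            out[i] = tracked[i] + offset
    return out

# 1-D positions from a fast tracker, corrected by a slow detector.
fused = combine([0, 1, 2, 3, 4, 5], [0.5, 3.5], latency=2)
```

Each detection retroactively measures the tracker's drift at an earlier frame and applies the correction from the moment the detection is ready, so the cheap tracker supplies every frame while the expensive detector keeps it honest.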
20130177204 | APPARATUS AND METHOD FOR TRACKING HAND - Disclosed are an apparatus for tracking a location of a hand, includes: a skin color image detector for detecting a skin color region from an image input from an image device using a predetermined skin color of a user; a face tracker for tracking a face using the detected skin color image; a motion detector for setting a ROI using location information of the tracked face, and detecting a motion image from the set ROI; a candidate region extractor for extracting a candidate region with respect to a hand of the user using the skin color image detected by the skin color image detector and the motion image detected by the motion detector; and a hand tracker for tracking a location of the hand in the extracted candidate region to find out a final location of the hand. | 07-11-2013 |
20130177205 | EXTERIOR ENVIRONMENT RECOGNITION DEVICE AND EXTERIOR ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. An exterior environment recognition device obtains an image in a detection area, generates a block group by grouping, based on a first relative relationship between blocks, multiple blocks in an area extending from a plane corresponding to a road surface to a predetermined height in the obtained image, divides the block group into two in a horizontal direction of the image, and determines, based on a second relative relationship between two divided block groups, whether the block group is a first person candidate which is a candidate of a person. | 07-11-2013 |
20130177206 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes: an image processing unit that executes image processing on an input image; a point light source detection unit that detects a point light source included in the input image; a scene determination unit that determines whether or not the input image shows a vivid scene based on a detection result of the point light source detection unit and an image signal of the input image; and a control unit that controls the image processing unit to change image processing for the input image in accordance with a determination result of the scene determination unit. | 07-11-2013 |
20130177207 | VEHICLE PERIPHERY MONITORING APPARATUS - A vehicle periphery monitoring apparatus displays a detection line on a display unit, with side portions of the detection line positioned on far-off spots that are farther than a spot on which a center portion of the detection line is positioned. In addition, based on the distance of the respective spots on which the portions of the detection line are positioned, the apparatus includes a parameter table that defines different parameters for a short distance portion, a middle distance portion, and a long distance portion of the detection line. The apparatus detects a moving object based on an actual-detected brightness change of a pixel along the detection line and a predefined brightness change of a pixel along the detection line that is defined by the parameter of the parameter table. | 07-11-2013 |
20130177208 | GENERATING MAGNETIC FIELD MAP FOR INDOOR POSITIONING - There is provided an apparatus caused to acquire information indicating a measured magnetic field vector and information relating to an uncertainty measure of the measured magnetic field vector in at least one known location inside the building, wherein the indicated magnetic field vector represents magnitude and direction of the earth's magnetic field affected by the local structures of the building, and to generate the indoor magnetic field map for at least part of the building on the basis of at least the acquired information and the floor plan. | 07-11-2013 |
20130177209 | IMAGE CACHE - Techniques described herein provide a method for automatically and intelligently creating and updating an OCR cache while performing OCR using a computing device. An image captured using a camera coupled to the computing device may be matched against prior images stored in the OCR cache. If a match is found, the OCR cache may be updated with new or better information utilizing the new image. The matched prior image may be retained in the OCR cache, or the new captured image may replace the matched prior image in the OCR cache. In one embodiment, techniques are described to remove or reduce glare before storing the image in the OCR cache. In some embodiments, glare is removed or reduced in the absence of performing OCR. | 07-11-2013 |
20130177210 | METHOD AND APPARATUS FOR RECOGNIZING LOCATION OF USER - A method of recognizing the location of a user is provided. The method includes detecting the two eyes and mouth of the user's face, calculating the ratio of the distance between the two eyes to the distance between the midpoint of the two eyes and the mouth, calculating a rotation angle of the face according to the ratio, and detecting the distance between the face and the camera based on the rotation angle. | 07-11-2013 |
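The landmark geometry this entry relies on can be sketched as follows; the frontal-ratio constant and the cosine projection model are illustrative assumptions, not the claimed formula:

```python
import math

def face_ratio(eye_l, eye_r, mouth):
    """Ratio of the inter-eye distance to the distance from the eye
    midpoint to the mouth, from 2-D landmark coordinates."""
    d_eyes = math.hypot(eye_r[0] - eye_l[0], eye_r[1] - eye_l[1])
    mid = ((eye_l[0] + eye_r[0]) / 2, (eye_l[1] + eye_r[1]) / 2)
    d_mouth = math.hypot(mouth[0] - mid[0], mouth[1] - mid[1])
    return d_eyes / d_mouth

def yaw_angle_deg(ratio, frontal_ratio=1.2):
    """Head yaw from the ratio: turning the head shrinks the projected
    inter-eye distance roughly by cos(yaw) while the eye-to-mouth distance
    stays nearly constant, so ratio ~ frontal_ratio * cos(yaw). Both the
    frontal ratio and this projection model are assumptions."""
    c = max(-1.0, min(1.0, ratio / frontal_ratio))
    return math.degrees(math.acos(c))

r = face_ratio((-30, 0), (30, 0), (0, 50))   # frontal face: ratio 60/50
```

Once the yaw is known, the foreshortening it causes can be undone before the apparent face size is converted into a camera-to-face distance.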
20130177211 | TRAVEL PATH ESTIMATION APPARATUS AND PROGRAM - A characteristic point extraction section acquires an image captured by an image capture device and extracts characteristic points from the captured image, a vehicle lane boundary point selection section selects vehicle lane boundary points that indicate vehicle lanes from the extracted characteristic points, a distribution determination section determines the distribution of the vehicle lane boundary points, a system noise setting section sets each system noise based on the distribution of vehicle lane boundary points, and a travel path parameter estimation section stably predicts travel path parameters based on the vehicle lane boundary points, past estimation results, and the system noise that has been set. | 07-11-2013 |
20130182890 | APPARATUS FOR DETECTING HUMANS ON CONVEYOR BELTS USING ONE OR MORE IMAGING DEVICES - A system for detecting a class of objects at a location, for example humans on a conveyor belt. A thermal camera may be used to detect objects and to detect the variance of the heat distribution of objects to classify them. Objects detected in an image from one camera may be detected in an image from another camera using geometric correction. A color camera may be used to detect the number of edges and the number of colors of an object to classify it. A color camera may be used with an upright human body classifier to detect humans in an area, and blobs corresponding to the detected humans may be tracked in a thermal or color camera image to detect if a human enters an adjacent forbidden area such as a conveyor belt. | 07-18-2013 |
20130182891 | METHOD AND SYSTEM FOR MAP GENERATION FOR LOCATION AND NAVIGATION WITH USER SHARING/SOCIAL NETWORKING - Methods and systems for map generation for location and navigation with user sharing/social networking may comprise determining a position of a wireless communication device (WCD) and capturing images of the surroundings of the WCD. Data associated with objects in the surroundings of the WCD may be extracted from the captured images, positions of the objects may be determined, and the determined positions and the data may then be uploaded to a database. The objects may comprise structural and/or textual features in the surroundings of the WCD. The position of the WCD may be determined utilizing sensors in the WCD to measure a distance from a last determined or known position. The sensors may comprise a pedometer, an altimeter, a camera, and/or a compass. The positions of the extracted objects may be determined utilizing known optical characteristics of a camera in the WCD. | 07-18-2013 |
20130182892 | GESTURE IDENTIFICATION USING AN AD-HOC MULTIDEVICE NETWORK - Methods, systems, and computer-readable media for establishing an ad hoc network of devices that can be used to interpret gestures. Embodiments of the invention use a network of sensors with an ad hoc spatial configuration to observe physical objects in a performance area. The performance area may be a room or other area within range of the sensors. Initially, devices within the performance area, or with a view of the performance area, are identified. Once identified, the sensors go through a discovery phase to locate devices within an area. Once the discovery phase is complete and the devices within the ad hoc network are located, the combined signals received from the devices may be used to interpret gestures made within the performance area. | 07-18-2013 |
20130182893 | SYSTEM AND METHOD FOR VIDEO EPISODE VIEWING AND MINING - Systems and methods for video episode viewing and mining comprise: receiving video data comprising a plurality of frames representing images of one or more objects within a physical area; identifying a plurality of events within the video data, wherein an event represents a movement of an object of interest from a first location in a grid associated with the physical area to a second location in the grid; generating a plurality of event data records reflecting the plurality of events; and determining one or more frequent episodes from the plurality of event data records, wherein an episode comprises a series of events associated with a particular object of interest. | 07-18-2013 |
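A minimal version of the episode mining this entry describes — counting repeated series of grid-to-grid movements per object of interest — could look like this; the event encoding, episode length, and support threshold are all assumptions:

```python
from collections import Counter

def frequent_episodes(events, length=2, min_count=2):
    """Count consecutive event subsequences per object of interest and
    keep those occurring at least `min_count` times. A simple serial-
    episode sketch; the entry's actual mining method is not specified."""
    per_obj = {}
    for obj, cell in events:            # each event: (object_id, grid_cell)
        per_obj.setdefault(obj, []).append(cell)
    counts = Counter()
    for seq in per_obj.values():
        for i in range(len(seq) - length + 1):
            counts[tuple(seq[i:i + length])] += 1
    return {ep: c for ep, c in counts.items() if c >= min_count}

events = [("a", "A"), ("a", "B"), ("a", "A"), ("a", "B"),
          ("b", "A"), ("b", "B")]
episodes = frequent_episodes(events)
```

Here the movement A→B recurs across both objects and within object "a", so it surfaces as a frequent episode while the one-off B→A does not.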
20130182894 | METHOD AND APPARATUS FOR CAMERA TRACKING - A camera pose tracking apparatus may track a camera pose based on frames photographed using at least three cameras, may extract and track at least one first feature in multiple-frames, and may track a pose of each camera in each of the multiple-frames based on first features. When the first features are tracked in the multiple-frames, the camera pose tracking apparatus may track each camera pose in each of at least one single-frame based on at least one second feature of each of the at least one single-frame. Each of the at least one second feature may correspond to one of the at least one first feature, and each of the at least one single-frame may be a previous frame of an initial frame of which the number of tracked second features is less than a threshold, among frames consecutive to multiple-frames. | 07-18-2013 |
20130182895 | Spectral Domain Optical Coherence Tomography Analysis and Data Mining Systems and Related Methods and Computer Program Products - Methods for analyzing images acquired using an image acquisition system include receiving a plurality of images from at least one image acquisition system; selecting at least a portion of a set of images for analysis using at least one attribute of image metadata; selecting at least one method for deriving quantitative information from the at least a portion of the set of images; processing the selected at least a portion of the set of images with the selected at least one method for deriving quantitative information to generate an intermediate set of quantitative data associated with the at least a portion of the set of images; and storing the intermediate set of quantitative data and the metadata in a reference database, the reference database including intermediate sets of quantitative data and associated metadata for images associated with a plurality of subjects. | 07-18-2013 |
20130182896 | GRADIENT ESTIMATION APPARATUS, GRADIENT ESTIMATION METHOD, AND GRADIENT ESTIMATION PROGRAM - A gradient estimation apparatus includes a feature point extracting unit configured to extract feature points on an image captured by an imaging unit, an object detecting unit configured to detect image regions indicating objects from the image captured by the imaging unit, and a gradient calculating unit configured to calculate a gradient of the road surface on which the objects are located, based on the coordinates of the feature points extracted by the feature point extracting unit in the image regions indicating the objects detected by the object detecting unit and the amounts of movements of the coordinates of the feature points over a predetermined time. | 07-18-2013 |
20130182897 | SYSTEMS AND METHODS FOR CAPTURING MOTION IN THREE-DIMENSIONAL SPACE - Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on reflections therefrom or shadows cast thereby. | 07-18-2013 |
20130182898 | IMAGE PROCESSING DEVICE, METHOD THEREOF, AND PROGRAM - An image processing device includes a difference image generation unit which generates a difference image by obtaining a difference between frames of a cutout image which is obtained by cutting out a predetermined region on a photographed image; a feature amount extracting unit which extracts a feature amount from the difference image; and a recognition unit which recognizes a specific movement of an object on the photographed image based on the feature amount which is obtained from the plurality of difference images which are aligned in time sequence. | 07-18-2013 |
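The inter-frame difference image at the core of this entry is a standard operation; a pure-Python sketch (the threshold value is an assumption) looks like:

```python
def difference_image(frame_a, frame_b, threshold=10):
    """Binary difference between two frames: 1 where the pixel change is
    at or above the threshold, 0 otherwise. This is the generic technique;
    the threshold value is an assumption."""
    return [[1 if abs(a - b) >= threshold else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

prev = [[10, 10], [10, 10]]
curr = [[10, 80], [10, 10]]
mask = difference_image(prev, curr)
```

Feature amounts extracted from a time-ordered stack of such masks then feed the movement recognizer the entry describes.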
20130182899 | INFORMATION PROCESSING APPARATUS, STORE SYSTEM AND METHOD - An information processing apparatus comprises: a similarity calculation unit that calculates a similarity showing the degree of similarity between the image of a captured object and the reference image of each registered commodity, which is registered in a dictionary together with a superior category showing information relevant to each registered commodity; a determination unit that compares the degree of similarity between the reference image and each image acquired by an acquisition unit, and determines whether or not the degree of similarity of the superior category, obtained by adding the similarities of a plurality of varieties belonging to the same superior category, meets a specified condition; and a reporting unit that, when the determination unit determines that the specified condition is met, reports the information relevant to a commodity corresponding to the plurality of varieties meeting the specified condition as a candidate for the captured commodity. | 07-18-2013 |
20130182900 | IMAGE PROCESSING APPARATUS, IMAGE SENSING APPARATUS, CONTROL METHOD, AND RECORDING MEDIUM - For obtained raw moving image data, an image processing apparatus decides a focal distance at which a specific subject is focused on. The respective pixels of image signals in each frame of the raw moving image data correspond to light beams having different combinations of pupil regions through which the light beams have passed, and incident directions in an imaging optical system. More specifically, the image processing apparatus generates, from the image signals of each frame of the raw moving image data, a pair of images corresponding to light beams having passed through different pupil regions, and decides, based on a defocus amount at the position of the specific subject that is calculated from the pair of images, the focal distance at which the specific subject is focused on. | 07-18-2013 |
20130182901 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - An information processing apparatus includes a region acquisition unit configured to obtain a specific region of a subject, a tomographic image acquisition unit configured to obtain a tomographic image of the subject, and a display control unit configured to cause a display unit to display a region indicating probability of existence of the specific region in the obtained tomographic image. | 07-18-2013 |
20130182902 | SYSTEMS AND METHODS FOR CAPTURING MOTION IN THREE-DIMENSIONAL SPACE - Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on edge points thereof. | 07-18-2013 |
20130182903 | ROBOT APPARATUS AND POSITION AND ORIENTATION DETECTING METHOD - A robot apparatus includes a reference-model storing unit configured to store a reference model of an object, a feature-value-table storing unit configured to store a feature value table that associates position data and orientation data of the reference model and a feature value, a photographed-image acquiring unit configured to capture a photographed image of the object, a detecting unit configured to calculate a photographed image feature value from the photographed image, and a driving control unit configured to control a robot main body on the basis of the position data and the orientation data to change the position and the orientation of a gripping unit. | 07-18-2013 |
20130182904 | SYSTEM AND METHOD FOR VIDEO CONTENT ANALYSIS USING DEPTH SENSING - A method and system for performing video content analysis based on two-dimensional image data and depth data are disclosed. Video content analysis may be performed on the two-dimensional image data, and then the depth data may be used along with the results of the video content analysis of the two-dimensional data for tracking and event detection. | 07-18-2013 |
20130182905 | SYSTEM AND METHOD FOR BUILDING AUTOMATION USING VIDEO CONTENT ANALYSIS WITH DEPTH SENSING - A method and system for monitoring buildings (including houses and office buildings) by performing video content analysis based on two-dimensional image data and depth data are disclosed. Occupation and use of such buildings may be monitored with higher accuracy to provide higher energy efficiency usage, to control operation of components therein, and/or provide better security. Height data may be obtained from depth data to provide greater reliability in object detection, object classification and/or event detection. | 07-18-2013 |
20130182906 | DISTANCE MEASUREMENT DEVICE AND ENVIRONMENT MAP GENERATION APPARATUS - Based on an image imaged by an imaging unit ( | 07-18-2013 |
20130182907 | FEELING-EXPRESSING-WORD PROCESSING DEVICE, FEELING-EXPRESSING-WORD PROCESSING METHOD, AND FEELING-EXPRESSING-WORD PROCESSING PROGRAM - The present approach enables an impression of the atmosphere of a scene or an object present in the scene at the time of photography to be pictured in a person's mind as though the person were actually at the photographed scene. A feeling-expressing-word processing device has: a feeling information calculating unit | 07-18-2013 |
20130188825 | IMAGE RECOGNITION-BASED STARTUP METHOD - An image recognition-based startup method is provided. The startup method includes the steps of: generating a movement image signal; generating a behavior recognition signal; and performing a startup of an information apparatus. An image recognition unit performs behavior recognition on the movement image signal generated by a capturing unit. Then, the image recognition unit generates a startup control signal to instruct the information apparatus to perform a startup action. Users can start up or restart the information apparatus by performing behaviors such as shaking one's head, opening one's eyes, closing one's eyes, or blinking. | 07-25-2013 |
20130188826 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - Provided is an image processing apparatus including a moving object detection unit configured to detect a moving object which is an image different from a background in a current image, a temporary pause determination unit configured to determine whether the moving object is paused for a predetermined time period or more, a reliability processing unit configured to calculate non-moving object reliability for a pixel of the current image using the current image and a temporarily paused image including a temporarily paused object serving as the moving object which is paused for a predetermined time period or more, the non-moving object reliability representing likelihood of being a non-moving object which is an image different from the background that does not change for a predetermined time period or more, and a non-moving object detection unit configured to detect the non-moving object from the current image based on the non-moving object reliability. | 07-25-2013 |
20130188827 | HUMAN TRACKING METHOD AND APPARATUS USING COLOR HISTOGRAM - A human tracking method using a color histogram is disclosed. The human tracking method using the color histogram according to the present invention can more adaptively perform human tracking using different target color histograms according to the human poses, instead of applying only one target color histogram to the tracking process of one person, such that the accuracy of human tracking can be increased. The human tracking method includes performing color space conversion of input video data; calculating a state equation of a particle based on the color-space conversion data; calculating the state equation, and calculating human pose-adaptive observation likelihood; resampling the particle using the observation likelihood, and estimating a state value of the human; and updating a target color histogram. | 07-25-2013 |
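The observation-likelihood step in color-histogram particle filters is commonly built on a histogram similarity such as the Bhattacharyya coefficient; the sketch below uses that standard choice (an assumption here, not necessarily this entry's exact likelihood):

```python
import math

def normalize(hist):
    """Normalize a histogram so its bins sum to 1."""
    total = sum(hist)
    return [v / total for v in hist]

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient between two normalized histograms:
    1.0 for identical distributions, 0.0 for disjoint support. A common
    observation likelihood for color-based particle filters."""
    return sum(math.sqrt(p * q) for p, q in zip(h1, h2))

target = normalize([4, 2, 2])      # e.g. a pose-specific target histogram
candidate = normalize([2, 1, 1])   # a particle's measured histogram
sim = bhattacharyya(target, candidate)
```

Keeping one target histogram per pose, as the entry proposes, simply means selecting which `target` to compare each particle against before resampling.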
20130188828 | PREVENTING CLASSIFICATION OF OBJECT CONTEXTUAL INFORMATION - Technology is disclosed for preventing classification of objects, e.g., in an augmented reality system. The technology can identify a set of objects to be classified, determine whether context information for one or more objects in the identified set of objects to be classified is identified as not to be employed during classification, and during classification of two different objects, include context information for one object but not the other. | 07-25-2013 |
20130188829 | ANALYSIS APPARATUS, ANALYSIS METHOD, AND STORAGE MEDIUM - An analysis apparatus analyzes an image and performs counting of the number of object passages. The analysis apparatus executes the counting and outputs the execution status of the counting. | 07-25-2013 |
20130188830 | MOTION TRACKING SYSTEM FOR REAL TIME ADAPTIVE IMAGING AND SPECTROSCOPY - This invention relates to a system that adaptively compensates for subject motion in real-time in an imaging system. An object orientation marker ( | 07-25-2013 |
20130188831 | POSITIONING INFORMATION PROCESSING APPARATUS AND METHOD FOR CONTROLLING THE SAME - There is provided an information processing apparatus. An obtaining unit obtains positioning information from information associated with an image file. The positioning information includes positioning method information that indicates a positioning method and position information that indicates a position determined by the positioning method. A changing unit changes the position indicated by the position information. A determining unit determines whether or not an amount of change made by the changing unit is greater than or equal to a predetermined threshold. An updating unit updates the positioning method information associated with the image file, when it was determined that the amount of change is greater than or equal to the predetermined threshold. | 07-25-2013 |
20130188832 | SYSTEMS AND METHODS FOR ADAPTIVE VOLUME IMAGING - Systems and methods which provide volume imaging by implementing survey and target imaging modes are shown. According to embodiments, a survey imaging mode is implemented to provide a volume image of a relatively large survey area. A target of interest is preferably identified within the survey area for use in a target imaging mode. Embodiments implement a target imaging mode to provide a volume image of a relatively small target area corresponding to the identified target of interest. The target imaging mode preferably adapts the beamforming, volume field of view, and/or other signal and image processing algorithms to the target area. In operation according to embodiments, the target imaging mode provides a volume image of a target area with improved volume rate and image quality. | 07-25-2013 |
20130188833 | STATIONARY TARGET DETECTION BY EXPLOITING CHANGES IN BACKGROUND MODEL - One or more video frames may be obtained, and a first background model may be constructed based on a first parameter. A second background model may be constructed using the one or more video frames based on a second parameter, the second parameter being different from the first parameter. A difference between the first and second background models may be determined. One or more stationary targets may be determined based on the determined difference. The one or more stationary targets may be classified. An alert concerning the one or more classified stationary targets may be generated. | 07-25-2013 |
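The two-background-model idea in the abstract above can be sketched in a few lines. This is an illustrative guess at one concrete realization, not the patented implementation: the "first parameter" and "second parameter" are taken to be two learning rates of an exponential running average, and the threshold and toy frame values are made up. A stopped object is absorbed quickly by the fast model but slowly by the slow one, so the two models disagree exactly where a stationary target sits.

```python
def update_background(bg, frame, learning_rate):
    """Exponential running average: bg <- (1 - a) * bg + a * frame, per pixel."""
    return [(1.0 - learning_rate) * b + learning_rate * f
            for b, f in zip(bg, frame)]

def stationary_mask(bg_fast, bg_slow, threshold=30.0):
    """Pixels where the two background models disagree beyond a threshold."""
    return [abs(f - s) >= threshold for f, s in zip(bg_fast, bg_slow)]

# Toy 1-D "frames": a bright object (value 200) parks at index 2 from frame 4 on.
frames = [[50, 50, 50, 50]] * 3 + [[50, 50, 200, 50]] * 40

bg_fast = list(map(float, frames[0]))
bg_slow = list(map(float, frames[0]))
for frame in frames:
    bg_fast = update_background(bg_fast, frame, learning_rate=0.20)
    bg_slow = update_background(bg_slow, frame, learning_rate=0.01)

mask = stationary_mask(bg_fast, bg_slow)  # True only where the object parked
```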
20130188834 | GAZE POINT DETECTION METHOD AND GAZE POINT DETECTION DEVICE - A gaze point detection device is disclosed. | 07-25-2013 |
20130188835 | FEELING-EXPRESSING-WORD PROCESSING DEVICE, FEELING-EXPRESSING-WORD PROCESSING METHOD, AND FEELING-EXPRESSING-WORD PROCESSING PROGRAM - The present approach enables the impression of the atmosphere of a scene, or of an object present in the scene at the time of photography, to be pictured in a person's mind as though the person were actually at the photographed scene. A feeling-expressing-word processing device has a feeling information calculating unit. | 07-25-2013 |
20130188836 | METHOD AND APPARATUS FOR PROVIDING HAND DETECTION - A method for providing hand detection may include receiving feature transformed image data for a series of image frames, determining asymmetric difference data indicative of differences between feature transformed image data of a plurality of frames of the series of image frames and a reference frame, and determining a target area based on an intersection of the asymmetric difference data. An apparatus and computer program product corresponding to the method are also provided. | 07-25-2013 |
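The intersection step described in the hand-detection abstract above can be roughly illustrated as follows. This is a sketch under assumptions, not the patented method: the feature transform is omitted and plain intensity differences are used instead, and the reference frame, threshold, and toy data are invented. Only a region that differs from the reference in every frame (e.g. a hand held in view) survives the intersection; transient noise does not.

```python
def diff_mask(frame, reference, threshold=20):
    """Binary map of pixels differing from the reference beyond a threshold."""
    return [abs(f - r) >= threshold for f, r in zip(frame, reference)]

def intersect(masks):
    """Pixelwise AND across all difference masks."""
    return [all(bits) for bits in zip(*masks)]

reference = [10, 10, 10, 10, 10]
frames = [
    [10, 90, 90, 10, 80],   # hand over indices 1-2, transient noise at index 4
    [10, 90, 90, 10, 10],
    [10, 90, 90, 10, 10],
]
target = intersect([diff_mask(f, reference) for f in frames])
```

The noise at index 4 appears in only one difference mask, so the intersection keeps just the persistent hand region.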
20130188837 | POSITIONING SYSTEM - An object of the present invention is to provide a positioning system which makes it possible to perform positioning processing in a positioning target mobile object with a smaller calculation amount. | 07-25-2013 |
20130195313 | AUTOMATIC POSITIONING OF IMAGING PLANE IN ULTRASONIC IMAGING - The invention is directed to a method for ultrasonic imaging in which two-dimensional images are used to automatically position the imaging plane. | 08-01-2013 |
20130195314 | PHYSICALLY-CONSTRAINED RADIOMAPS - A system for providing positioning functionality in an apparatus. Apparatuses may comprise various sensor resources and may utilize these resources to sense information at a location. For example, an apparatus may sense visual, signal and/or field information at a location. The apparatus may then compare the sensed information to a mapping database in order to determine position. | 08-01-2013 |
20130195315 | IDENTIFYING REGIONS OF TEXT TO MERGE IN A NATURAL IMAGE OR VIDEO FRAME - In several aspects of described embodiments, an electronic device and method use a camera to capture an image or a frame of video of an environment outside the electronic device followed by identification of blocks of regions in the image. Each block that contains a region is checked, as to whether a test for presence of a line of pixels is met. When the test is met for a block, that block is identified as pixel-line-present. Pixel-line-present blocks are used to identify blocks that are adjacent. One or more adjacent block(s) may be merged with a pixel-line-present block when one or more rules are found to be satisfied, resulting in a merged block. The merged block is then subject to the above-described test, to verify presence of a line of pixels therein, and when the test is satisfied the merged block is processed normally, e.g. classified as text or non-text. | 08-01-2013 |
20130195316 | SYSTEM AND METHOD FOR FACE CAPTURE AND MATCHING - According to an example, a face capture and matching system may include a memory storing machine readable instructions to receive captured images of an area monitored by an image capture device, and detect one or more faces in the captured images. The memory may further store machine readable instructions to track movement of the one or more detected faces in the area monitored by the image capture device, and based on the one or more tracked detected faces, select one or more images from the captured images to be used for identifying the one or more tracked detected faces. The memory may further store machine readable instructions to select one or more fusion techniques to identify the one or more tracked detected faces using the one or more selected images. The face capture and matching system may further include a processor to implement the machine readable instructions. | 08-01-2013 |
20130195317 | APPARATUS FOR MEASURING TRAFFIC USING IMAGE ANALYSIS AND METHOD THEREOF - Disclosed are an apparatus and method for measuring traffic of moving objects by analyzing an image expressed in a spatiotemporal domain. The traffic measuring apparatus includes a feature extraction unit that sets a virtual measurement line in an input image, generates a spatiotemporal domain image expressing the input image in a spatiotemporal domain based on the virtual measurement line, and extracts image features from the spatiotemporal domain image, and a traffic estimation unit that estimates the number of objects passing the virtual measurement line by accumulating the image features over time. Accordingly, the traffic measuring apparatus may accurately measure in real-time the traffic of objects such as pedestrians through analysis of the input image so as to be utilized in a variety of fields. | 08-01-2013 |
20130195318 | Real-Time Face Tracking in a Digital Image Acquisition Device - An image processing apparatus for tracking faces in an image stream iteratively receives an acquired image from the image stream including one or more face regions. The acquired image is sub-sampled at a specified resolution to provide a sub-sampled image. An integral image is then calculated for at least a portion of the sub-sampled image. Fixed size face detection is applied to at least a portion of the integral image to provide a set of candidate face regions. Responsive to the set of candidate face regions produced and any previously detected candidate face regions, the resolution is adjusted for sub-sampling a subsequent acquired image. | 08-01-2013 |
20130195319 | Real-Time Face Tracking in a Digital Image Acquisition Device - An image processing apparatus for tracking faces in an image stream iteratively receives an acquired image from the image stream including one or more face regions. The acquired image is sub-sampled at a specified resolution to provide a sub-sampled image. An integral image is then calculated for at least a portion of the sub-sampled image. Fixed size face detection is applied to at least a portion of the integral image to provide a set of candidate face regions. Responsive to the set of candidate face regions produced and any previously detected candidate face regions, the resolution is adjusted for sub-sampling a subsequent acquired image. | 08-01-2013 |
20130195320 | Real-Time Face Tracking in a Digital Image Acquisition Device - An image processing apparatus for tracking faces in an image stream iteratively receives an acquired image from the image stream including one or more face regions. The acquired image is sub-sampled at a specified resolution to provide a sub-sampled image. An integral image is then calculated for at least a portion of the sub-sampled image. Fixed size face detection is applied to at least a portion of the integral image to provide a set of candidate face regions. Responsive to the set of candidate face regions produced and any previously detected candidate face regions, the resolution is adjusted for sub-sampling a subsequent acquired image. | 08-01-2013 |
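The integral-image step shared by the three face-tracking abstracts above is the standard summed-area table: once built, the pixel sum over any rectangle costs four table lookups, which is what makes fixed-size sliding-window face detection cheap. A minimal sketch (the 3×3 toy image is made up):

```python
def integral_image(img):
    """ii[y][x] = sum of img over the rectangle rows [0, y), cols [0, x)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom][left:right] from four table lookups."""
    return (ii[bottom][right] - ii[top][right]
            - ii[bottom][left] + ii[top][left])

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
```

For example, `box_sum(ii, 1, 1, 3, 3)` returns 5 + 6 + 8 + 9 = 28 without revisiting the pixels.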
20130195321 | PANTOGRAPH MONITORING SYSTEM AND METHOD - A method for automatic diagnostics of images related to pantographs, comprising the steps of: capturing an image that shows a pantograph of a locomotive, the image being taken from an aerial view during the travel of the locomotive, the image comprising the gliding area of a plurality of slippers of the pantograph; identifying, by means of a module for classifying the pantograph model, the model of the pantograph within a plurality of pantograph models, on the basis of the image captured; determining, by means of a module for classifying materials, a material of which the slippers are composed among a plurality of materials, on the basis of the pantograph model identified; and determining a value related to the state of wear for each one of the plurality of slippers, on the basis of the type of material determined. | 08-01-2013 |
20130202152 | Selecting Visible Regions in Nighttime Images for Performing Clear Path Detection - A method provides for determining visible regions in a captured image during a nighttime lighting condition. An image is captured from an image capture device mounted to a vehicle. An intensity histogram of the captured image is generated. An intensity threshold is applied to the intensity histogram for identifying visible candidate regions of a path of travel. The intensity threshold is determined from a training technique that utilizes a plurality of training-based captured images of various scenes. An objective function is used to determine objective function values for each correlating intensity value of each training-based captured image. The objective function values and associated intensity values for each of the training-based captured images are processed for identifying a minimum objective function value and associated optimum intensity threshold for identifying the visible candidate regions of the captured image. | 08-08-2013 |
20130202153 | Object Tracking with Opposing Image Capture Devices - Systems and methods of compensating for tracking motion of an object are disclosed. One such method includes receiving a series of images captured by each of a plurality of image capture devices. The image capture devices are arranged in an orthogonal configuration of two opposing pairs. The method further includes computing a series of positions of the object and orientations of the object, by processing the images captured by each of the plurality of image capture devices. | 08-08-2013 |
20130202154 | PRODUCT IMAGING DEVICE, PRODUCT IMAGING METHOD, IMAGE CONVERSION DEVICE, IMAGE PROCESSING DEVICE, IMAGE PROCESSING SYSTEM, PROGRAM, AND INFORMATION RECORDING MEDIUM - A product imaging device is disclosed. | 08-08-2013 |
20130208943 | DEVICES AND METHODS FOR TRACKING MOVING OBJECTS - The present invention is directed to methods and devices for tracking moving characters of at least two moving objects. The present invention identifies an occlusive condition(s) of the moving objects on a video frame(s), analyzes information of a physical framework(s) of the moving objects, finds and records a position(s) of the occluded moving object(s) on the video frames and tracks a moving character(s) of the moving objects. The present invention further provides devices and methods for tracking the moving characters of behaviors of living bodies. | 08-15-2013 |
20130208944 | METHOD AND APPARATUS FOR OBJECT TRACKING VIA HYPERSPECTRAL IMAGERY - A computer-implemented method for tracking a small sample size user-identified object comprising extracting a plurality of blocks of pixels from a first frame of a plurality of frames of a scene detected by a hyperspectral (HS) sensor, comparing a reference sample of the object with the plurality of blocks to generate a first attribute set corresponding to contrasting HS response values of the reference sample and HS response values of each block of the plurality of blocks, comparing a test sample of a portion of the first frame to each block of the plurality of blocks to generate a second attribute set corresponding to contrasting HS response values of the test samples and HS response values of each block of the plurality of blocks and determining if the object exists in two or more of the frames by comparing the first HS attribute set with the second HS attribute set. | 08-15-2013 |
20130208945 | METHOD FOR THE DETECTION AND TRACKING OF LANE MARKINGS - In a method for the detection and tracking of lane markings from a motor vehicle, an image of a space located in front of the vehicle is captured by means of an image capture device at regular intervals. The picture elements that meet a predetermined detection criterion are identified as detected lane markings in the captured image. At least one detected lane marking as a lane marking to be tracked is subjected to a tracking process. At least one test zone is defined for each detected lane marking. With the aid of intensity values of the picture elements associated with the test zone, at least one parameter is determined. The detected lane marking is assigned to one of several lane marking categories, depending on the parameter. | 08-15-2013 |
20130208946 | INFORMATION DETECTION APPARATUS AND INFORMATION DETECTION METHOD - According to one embodiment, an information detection apparatus includes an image input unit, a symbol detection unit, a service information detection unit and an output unit. The image input unit inputs an image captured by an image capturing apparatus. The symbol detection unit configured to detect a first symbol and a second symbol, which are predetermined, according to the image input by the image input unit. The service information detection unit configured to detect a service information existing at a relative position predetermined for the first symbol and the second symbol in the image when the first symbol and the second symbol are detected by the symbol detection unit according to the image input by the image input unit. The output unit configured to output the service information detected by the service information detection unit. | 08-15-2013 |
20130208947 | OBJECT TRACKING APPARATUS AND CONTROL METHOD THEREOF - A control method of an object tracking apparatus for tracking a target tracking-object includes receiving a first frame including the target tracking-object, distinguishing between the target tracking-object and a background in the first frame, generating histograms of color values for the target tracking-object and the background, comparing the histograms corresponding to the target tracking-object and the background to determine reliable data of the target tracking-object and reliable data of the background, and estimating a next position of the target tracking-object in a second frame based on the reliable data of the target tracking-object and the background. | 08-15-2013 |
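One plausible reading of the histogram comparison above, sketched loosely. The log-likelihood-ratio scoring, bin count, and toy pixel values are assumptions, not the patented method: each pixel is scored by how much more likely its color is under the target histogram than under the background histogram, and high-ratio pixels serve as the "reliable" target evidence.

```python
import math
from collections import Counter

def histogram(pixels, n_bins=8, max_value=256):
    """Normalized histogram of color values, with a small floor to avoid log(0)."""
    bin_width = max_value // n_bins
    counts = Counter(min(p // bin_width, n_bins - 1) for p in pixels)
    total = len(pixels)
    return [(counts.get(b, 0) + 1e-6) / total for b in range(n_bins)]

def reliability(pixel, hist_target, hist_background, n_bins=8, max_value=256):
    """Log-likelihood ratio: positive means the pixel looks like the target."""
    b = min(pixel // (max_value // n_bins), n_bins - 1)
    return math.log(hist_target[b] / hist_background[b])

target_pixels = [200, 210, 205, 220]      # bright object
background_pixels = [10, 20, 15, 30, 25]  # dark surroundings

h_t = histogram(target_pixels)
h_b = histogram(background_pixels)
```

A bright pixel such as 215 then scores positive (reliable target data), while a dark pixel such as 12 scores negative (reliable background data).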
20130208948 | TRACKING AND IDENTIFICATION OF A MOVING OBJECT FROM A MOVING SENSOR USING A 3D MODEL - A system and method for detection, tracking, classification, and/or identification of a moving object from a moving sensor uses a three-dimensional (3D) model. The system facilitates generation of a 3D model using images from a variety of sensors, in particular passive two-dimensional (2D) image capture devices. 2D images are processed to determine viewpoint and find moving objects in the 2D images. Conventional techniques or an innovative technique can be used to find segments of 2D images having moving objects. Viewpoint and segment information is used for generation of a 3D model of an object, in particular using both object motion and sensor motion to generate the 3D model. | 08-15-2013 |
20130208949 | METHOD FOR IDENTIFYING AND DEFINING BASIC PATTERNS FORMING THE TREAD PATTERN OF A TYRE - A tyre tread, having circumferentially juxtaposed elements separated from one another by identically shaped boundaries and having at least one basic pattern, is inspected by: producing an image of the tyre tread; identifying tread wear indicators on the image; grouping together sub-sets of the indicators according to the basic pattern(s) included in the indicators; determining a characteristic point of each of the sub-sets of the indicators; determining a sequence of distances by computing distances between the characteristic points of each of the sub-sets of the indicators; comparing the sequence of distances with a known sequence of distances between characteristic points of the basic pattern(s) to confirm coincidence thereof; and projecting a shape of a boundary between elements of the tyre tread onto a surface to be inspected according to the known sequence of distances between characteristic points of the basic pattern(s). | 08-15-2013 |
20130216092 | Image Capture - An apparatus including a processor configured to move automatically a sub-set of pixels defining a target captured image within a larger set of available pixels in a direction of an edge of the target captured image when a defined area of interest within the target captured image approaches the edge of the target captured image and configured to provide a pre-emptive user output when the sub-set of pixels approaches an edge of the set of available pixels. | 08-22-2013 |
20130216093 | WALKING ASSISTANCE SYSTEM AND METHOD - An example walking assistance method includes obtaining an image captured by a camera. The image includes distance information indicating distances between the camera and objects captured by the camera. Next, the method determines whether one or more objects appear in the captured image. If so, the method creates a 3D scene model according to the captured image and the distances between the camera and the captured objects. Next, the method determines whether one or more specific objects appear in the created 3D scene model, and determines that one or more obstacles are present when no specific object appears in the captured image. The method then creates an obstacle audio file based on the determined obstacles, and outputs the created obstacle audio file through an audio output device, to warn that one or more obstacles lie ahead. | 08-22-2013 |
20130216094 | SYSTEMS, METHODS AND COMPUTER PROGRAM PRODUCTS FOR IDENTIFYING OBJECTS IN VIDEO DATA - Image based operating systems and methods are provided that identify objects in video data and then take appropriate action in a wide variety of environments. In some embodiments, the image based operating systems and methods allow a user to activate other devices and systems by making a gesture. | 08-22-2013 |
20130216095 | VERIFICATION OBJECT SPECIFYING APPARATUS, VERIFICATION OBJECT SPECIFYING PROGRAM, AND VERIFICATION OBJECT SPECIFYING METHOD - In a verification object specifying apparatus that specifies a verification object for biometric authentication, a biometric information acquisition unit acquires biometric information from a biometric information source part. An abnormality detection unit detects an abnormal portion in the biometric information source part based on the biometric information. A verification object specifying unit determines whether biometric information located in the abnormal portion is to be included in a verification object, and specifies biometric information to be used as the verification object based on the determination result. The verification object specifying apparatus causes a registration unit to register the biometric information as registration information when serving as a registration apparatus, and causes a verification unit to verify the biometric information against registration information when serving as a verification apparatus. | 08-22-2013 |
20130216096 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION PROCESSING SYSTEM - There is provided an image management apparatus, in which an image conversion section is configured to generate a setup image, in a setup image format, within a first image pyramid structure for display, the setup image being converted from an object image, in an object image format, included within a second image pyramid structure in response to a request for the setup image. | 08-22-2013 |
20130216097 | IMAGE-FEATURE DETECTION - An embodiment is a method for detecting image features, the method including extracting a stripe from a digital image, the stripe including a plurality of blocks; processing the plurality of blocks for localizing one or more keypoints; and detecting one or more image features based on the one or more localized keypoints. | 08-22-2013 |
20130216098 | MAP GENERATION APPARATUS, MAP GENERATION METHOD, MOVING METHOD FOR MOVING BODY, AND ROBOT APPARATUS - Map construction is performed in a crowded environment where there are many people. The apparatus includes a successive image acquisition unit that obtains images taken while a robot is moving, a local feature quantity extraction unit that extracts a local feature quantity at each feature point from the images, a feature quantity matching unit that performs matching among the quantities extracted from the input images by the extraction unit, an invariant feature quantity calculation unit that calculates, as an invariant feature quantity, an average of the quantities matched by the matching unit over a predetermined number of images, a distance information acquisition unit that calculates distance information corresponding to each invariant feature quantity based on the position of the robot at the times when the images were obtained, and a map generation unit that generates a local metrical map as a hybrid map. | 08-22-2013 |
20130216099 | IMAGING SYSTEM AND IMAGING METHOD - An imaging system comprises a whole image read out unit for reading out a whole image in a first resolution from an imaging device, a partial image region selecting unit for selecting a region of a partial image in a part of the whole image which is read out, a partial image read out unit for reading out the partial image in the selected region in a second resolution from the imaging device, a characteristic region setting unit for setting a characteristic region, in which a characteristic object exists, within the partial image, a characteristic region image read out unit for reading out an image of the characteristic region, which is set, in a third resolution from the imaging device, and a resolution setting unit for setting the first, second, and third resolutions. | 08-22-2013 |
20130216100 | OBJECT IDENTIFICATION USING SPARSE SPECTRAL COMPONENTS - One or more systems and/or techniques are provided to identify and/or classify objects of interest (e.g., potential granular objects) from a radiographic examination of the object. Image data of the object is transformed using a spectral transformation, such as a Fourier transformation, to generate image data in a spectral domain. Using the image data in the spectral domain, one or more one-dimensional spectral signatures can be generated and features of the signatures can be extracted and compared to features of one or more known objects. If one or more features of the signatures correspond (e.g., within a predetermined tolerance) to the features of a known object to which the feature(s) is compared, the object of interest may be identified and/or classified based upon the correspondence. | 08-22-2013 |
20130223676 | APPARATUS AND METHOD FOR SPATIALLY RELATING VIEWS OF SKY IMAGES ACQUIRED AT SPACED APART LOCATIONS - Homography-based imaging apparatus and method are provided. The apparatus may include a processor. | 08-29-2013 |
20130223677 | System for counting trapped insects - This is a method of counting trapped insects without having to open the trap. The trap is made with a transparent polymer forming at least one side, which allows an image of the insects trapped inside to be made with an imaging device. | 08-29-2013 |
20130223678 | Time in Line Tracking System and Method - A method of determining the amount of time it will take a person (P) waiting in a line (L) to move between two points. The method includes acquiring a facial pattern of the person at a first point in the line and recording the time at which the facial pattern was obtained. Next, a facial pattern of the person is acquired when the person arrives at a second point in the line. The two facial patterns are compared and, when a match is found, a lapsed time is established. By subtracting the two times, the transit time of the person from the first point to the second point is established, and this time is displayed at the entry point of the line. | 08-29-2013 |
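The timing bookkeeping described in the time-in-line abstract above reduces to storing an entry timestamp per matched facial pattern and subtracting on exit. A minimal sketch, with the facial matching itself abstracted away behind a hypothetical `face_id` key (whatever identifier the face matcher returns for a matched pair of patterns):

```python
entry_times = {}  # hypothetical store: face_id -> entry timestamp (seconds)

def seen_at_entry(face_id, timestamp):
    """Record when a facial pattern was first acquired at the line's entry point."""
    entry_times[face_id] = timestamp

def seen_at_exit(face_id, timestamp):
    """Return transit time in seconds, or None if the face was never enrolled."""
    start = entry_times.pop(face_id, None)
    return None if start is None else timestamp - start

seen_at_entry("face-42", timestamp=1000)
wait = seen_at_exit("face-42", timestamp=1330)  # person took 330 s to transit
```

The most recent `wait` values would then be averaged and shown on the display at the line's entry point.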
20130223679 | MOVEMENT ANALYSIS AND/OR TRACKING SYSTEM - A novel system analyzes and/or tracks the motion of moved or moving objects that carry marker elements glowing in different, defined colors. Cameras record movements and continuously store digital color images. A transformer unit converts the color images into RGB color space. Three color intensities are present for each color pixel. A grayscale image production unit adopts the maximum of the three color intensities for each pixel as the grayscale value. A localization unit exclusively compares each grayscale value with a defined threshold value and stores grayscale values above the threshold value as a member of a pixel cloud that represents a potential marker element. A measuring unit measures the geometry of every pixel cloud exclusively in the grayscale image and deletes pixel clouds that can be excluded as marker elements. An identification unit determines the color of the confirmed pixel clouds in the digitally stored color image. | 08-29-2013 |
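The grayscale and localization stages in the movement-analysis abstract above are concrete enough to sketch: the grayscale value of each pixel is the maximum of its three color intensities, so a marker glowing strongly in any single channel stays bright, and pixels above a threshold become members of potential marker pixel clouds. The marker colors and the threshold below are made-up values.

```python
def max_channel_gray(rgb_image):
    """Grayscale value per pixel = max of the R, G, B intensities."""
    return [[max(r, g, b) for (r, g, b) in row] for row in rgb_image]

def threshold_pixels(gray, threshold):
    """Coordinates of pixels above threshold (members of potential pixel clouds)."""
    return [(y, x)
            for y, row in enumerate(gray)
            for x, value in enumerate(row)
            if value > threshold]

image = [[(10, 10, 10), (250, 5, 5)],    # red marker at (0, 1)
         [(5, 240, 5), (12, 9, 11)]]     # green marker at (1, 0)

gray = max_channel_gray(image)
candidates = threshold_pixels(gray, threshold=200)
```

Per the abstract, geometry checks on each pixel cloud happen in this grayscale image; only the confirmed clouds are looked up again in the stored color image to identify their marker color.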
20130223680 | RECOGNITION SYSTEM, RECOGNITION METHOD AND COMPUTER READABLE MEDIUM - A recognition system includes an acquisition module configured to acquire an image data generated by an image sensor, a first generation module configured to generate a graphical user interface which contains the image data, and an input module configured to detect an input on the graphical user interface, the input indicating a position designation on the image data. The recognition system further includes a second generation module configured to overlay a frame-line on the image data of the graphical user interface based on the position designation detected by the input module, and a calculation module configured to calculate one or more feature values of an object image within the frame-line. | 08-29-2013 |
20130223681 | APPARATUS AND METHOD FOR IDENTIFYING FAKE FACE - An apparatus for identifying a fake face is provided. A first eye image acquirer acquires a first eye image by taking a picture of a subject while radiating a first ray having a first wavelength. A second eye image acquirer acquires a second eye image by taking a picture of the subject while radiating a second ray having a second wavelength that is shorter than the first wavelength. A controller extracts a first area and a second area having brighter lightness than the first area from each of the first and second eye images, calculates a lightness of the first area and a lightness of the second area in the first eye image, and a lightness of the first area and a lightness of the second area in the second eye image, and determines whether the subject uses a fake face based on the calculated lightness. | 08-29-2013 |
20130223682 | ARTICLE RECOGNITION SYSTEM AND ARTICLE RECOGNITION METHOD - According to embodiments, an article recognition system is disclosed. The article recognition system comprises an image sensor configured to capture an image of an article, and a determining module configured to determine a value indicative of darkness of the captured image and compare the determined value with a reference value. The article recognition system further comprises a changing module configured to change the reference value when the determined value is less than the reference value, and an extracting module configured to identify the article on the basis of the captured image when the determined value is greater than the reference value. | 08-29-2013 |
20130223683 | Method and Apparatus for Generating Image Description Vector, Image Detection Method and Apparatus - This invention relates to a method and an apparatus for generating an image description vector, and an image detection method and apparatus. The method for generating an image description vector comprises: an encoding step of encoding each of a plurality of pixel regions of an image into M pieces of N-bit binary codes, wherein each bit of an N-bit binary code represents a neighbouring pixel region which is in the neighbourhood of a corresponding pixel region; and a generating step of generating an image description vector of the image based on matching at least one of the M pieces of N-bit binary codes of each pixel region of the plurality of pixel regions with a particular code pattern, where M is an integer of 3 or larger, and N is an integer of 3 or larger. | 08-29-2013 |
20130223684 | OVERLAY-BASED ASSET LOCATION AND IDENTIFICATION SYSTEM - A network asset location system and methods of its use and operation are disclosed. In one aspect, the network asset location system includes a mobile application component executable on a mobile device including a camera and a display, the mobile application component configured to receive image data from the camera and display an image on the display based on the image data and overlay information identifying one or more network assets identifiable in the image data. The network asset location system also includes an asset management tracking engine configured to receive the image data and generate the overlay information including an identification of a location of at least one of the one or more network assets within the image. | 08-29-2013 |
20130223685 | CALIBRATION OF A PROBE IN PTYCHOGRAPHY - A method of providing image data for constructing an image of a region of a target object, comprising providing a reference diffraction pattern of a reference target object; determining an initial guess for a probe function based upon the reference diffraction pattern; and determining, by an iterative process based on the initial guess for the probe function and an initial guess for an object function, image data for a target object responsive to an intensity of radiation detected by at least one detector. | 08-29-2013 |
20130223686 | MOVING OBJECT PREDICTION DEVICE, HYPOTHETICAL MOVABLE OBJECT PREDICTION DEVICE, PROGRAM, MOVING OBJECT PREDICTION METHOD AND HYPOTHETICAL MOVABLE OBJECT PREDICTION METHOD - A position, behavior state and movement state of a moving object are detected, together with plural categories of track segment region and stationary object regions, using an environment detection section. A presence probability is applied to the detected track segment regions and stationary object regions and a presence probability map is generated, using a map generation section. A moving object position distribution and movement state distribution are generated by a moving object generation section based on the detected moving object position, behavior state and movement state, and recorded on the presence probability map. The moving object position distribution is moved by a position update section based on the moving object movement state distribution. The moved position distribution is changed by a distribution change section based on the presence probabilities of the presence probability map, and a future position distribution of the moving object is predicted on the presence probability map. Consequently, the future position of the moving object can be predicted with good precision under various conditions. | 08-29-2013 |
20130223687 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus according to an embodiment includes a lung field region extracting unit, a lung field bottom region extracting unit, and a detecting unit. The lung field region extracting unit is configured to extract, based on pixel values of pixels constituting a three-dimensional medical image capturing a chest of a subject, a lung field region from the three-dimensional medical image. The lung field bottom region extracting unit is configured to extract a lung field bottom region from the lung field region. The detecting unit is configured to detect a vertex position of the lung field bottom region on the head side of the subject. | 08-29-2013 |
20130223688 | System and Method for Capturing, Storing, Analyzing and Displaying Data Related to the Movements of Objects - A system and method capture and store data relating to the movements of objects in a specified area and enable this data to be displayed in a graphically meaningful and useful manner. Video data is collected and video metadata is generated relating to objects (persons) appearing in the video data and their movements over time. The movements of the objects are then analyzed to detect the movements within a region of interest. This detection of movement allows a user, such as a manager of a store, to make informed decisions as to the infrastructure and operation of the store. One detection method relates to the number of people that are present in a region of interest for a specified time period. A second detection method relates to the number of people that remain or dwell in a particular area for a particular time period. A third detection method determines the flow of people and the direction they take within a region of interest. A fourth detection method relates to the number of people that enter a certain area by crossing a virtual line, a tripwire. | 08-29-2013 |
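The fourth detection method above (counting entries across a virtual tripwire) can be reduced to a sign test of each tracked position against the directed line. The following is a minimal sketch, assuming tracks are sequences of 2D points and the tripwire is given by two endpoints; the function names are illustrative, not from the patent:

```python
def side(p, a, b):
    """Sign of point p relative to the directed line a->b (z of cross product)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def count_tripwire_crossings(track, a, b):
    """Count sign changes of a trajectory with respect to the line a->b.

    For simplicity this tests against the infinite line through a and b;
    a production tripwire would also check that the crossing point lies
    between the two endpoints.
    """
    crossings = 0
    prev = side(track[0], a, b)
    for p in track[1:]:
        cur = side(p, a, b)
        if prev != 0 and cur != 0 and (prev > 0) != (cur > 0):
            crossings += 1
        if cur != 0:  # ignore points exactly on the line
            prev = cur
    return crossings
```

A track that steps from one side of the line to the other and back counts as two crossings; a track that stays on one side counts zero.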
20130230206 | FOLIAGE PENETRATION BASED ON 4D LIDAR DATASETS - A method for detecting terrain through foliage includes the steps of: receiving point cloud data in a three-dimensional (3D) space from an airborne platform, in which the point cloud data includes foliage that obscures the terrain; reformatting the point cloud data from the 3D space into a one-dimensional (1D) space to form a 1D signal; and decomposing the 1D signal using a wavelet transform (WT) to form a decomposed WT signal. The decomposed WT signal is reconstructed to form a low-pass filtered profile. The method classifies the low-pass filtered profile as terrain. The terrain includes a natural terrain, or a ground profile. | 09-05-2013 |
20130230207 | METHOD FOR QUANTIFYING THE NUMBER OF FREE FIBERS EMANATING FROM A SURFACE - The present disclosure provides a method for counting the number of fibers emanating from the surface of a web substrate. | 09-05-2013 |
20130230208 | VISUAL OCR FOR POSITIONING - A mobile device can receive OCR library information associated with a coarse position. The coarse position can be determined by the mobile device, or by a network server configured to communicate with the mobile device. A camera on the mobile device can obtain images of human-readable information in an area near the coarse position. The view finder image can be processed with an OCR engine that is utilizing the OCR library information to determine one or more location string values. A location database can be searched based on the location string values. The position of the mobile device can be estimated and displayed. The estimated position can be adjusted based on the proximity of the mobile device to other features in the image. | 09-05-2013 |
20130230209 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND COMPUTER-READABLE MEDIUM - The present invention improves the recognition rate of an augmented reality marker and the processing speed thereof simultaneously. In the present invention, a CPU binarizes an actual image captured by an image sensor in accordance with adaptive thresholding, and detects an augmented reality marker from within the binarized image. Then, the CPU determines a binarization threshold based on the augmented reality marker, and after binarizing the actual image captured by the image sensor in accordance with a fixed threshold binarization method using the binarization threshold, recognizes the augmented reality marker based on the binarized image. | 09-05-2013 |
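The two-stage scheme described above (adaptive thresholding to locate the marker, then a fixed threshold derived from the marker region) might be sketched as follows. The box-filter local mean and the midpoint rule for the fixed threshold are illustrative assumptions, not the patent's exact formulas:

```python
import numpy as np

def adaptive_binarize(img, block=15, c=5):
    """Binarize each pixel against its local mean (stage 1, adaptive)."""
    h, w = img.shape
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    # integral image gives each block-sum in O(1)
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    win = (ii[block:, block:] - ii[:-block, block:]
           - ii[block:, :-block] + ii[:-block, :-block])
    local_mean = win / (block * block)
    return (img > local_mean - c).astype(np.uint8)

def marker_threshold(img, marker_mask):
    """Fixed threshold from the detected marker region (stage 2):
    midpoint between the region's dark and light mean intensities."""
    vals = img[marker_mask.astype(bool)]
    dark = vals[vals < vals.mean()]
    light = vals[vals >= vals.mean()]
    return (dark.mean() + light.mean()) / 2.0
```

Once the marker is found with the (slower, illumination-robust) adaptive pass, the fixed threshold makes subsequent recognition a single cheap comparison per pixel.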
20130230210 | Object Information Derived From Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 09-05-2013 |
20130230211 | POSTURE ESTIMATION DEVICE AND POSTURE ESTIMATION METHOD - The present invention is a posture estimation device for estimating a wide variety of 3-dimensional postures by using a skeletal model. | 09-05-2013 |
20130236047 | ENHANCED DATA ASSOCIATION OF FUSION USING WEIGHTED BAYESIAN FILTERING - A method of associating targets from at least two object detection systems. An initial prior correspondence matrix is generated based on prior target data from a first object detection system and a second object detection system. Targets are identified in a first field-of-view of the first object detection system based on a current time step. Targets are identified in a second field-of-view of the second object detection system based on the current time step. The prior correspondence matrix is adjusted based on respective targets entering and leaving the respective fields-of-view. A posterior correspondence matrix is generated as a function of the adjusted prior correspondence matrix. A correspondence is identified in the posterior correspondence matrix between a respective target of the first object detection system and a respective target of the second object detection system. | 09-12-2013 |
20130236048 | IMAGE PROCESSOR FOR FEATURE DETECTION - Disclosed embodiments include an image processor for feature detection comprising a single non-planar chip containing a plurality of integrated sensing and processing resources across two or more layers adapted to capture image frames and extract image features. In a particular embodiment, the non-planar chip is a three dimensional CMOS integrated circuit (3D CMOS IC) with vertical distribution of sensing and processing resources across two or more vertical integrated circuit layers. The 3D CMOS IC implements two or more feature detectors in a single chip by reusing a plurality of circuits employed for gradient and keypoint detection. Feature detectors include a scale invariant feature transform detector (SIFT), a Harris-based feature detector, and a Hessian-based feature detector. | 09-12-2013 |
20130236049 | INDOOR USER POSITIONING METHOD USING MOTION RECOGNITION UNIT - An indoor user positioning method including storing user information on a user terminal and user feature information detected from a feature detection device in a central server, detecting the position of the user terminal periodically and storing the detected position in a database, detecting by a motion recognition device attribute information on a user at the front thereof and transmitting the detected attribute information to the central server, extracting user terminals corresponding to the position of the user that the motion recognition device recognizes from the user terminals stored in the database in order to select target users, and comparing the user feature information on the target users stored in the database with the user attribute information that the motion recognition device transmits in order to specify a user at the front of the motion recognition device. | 09-12-2013 |
20130236050 | METHOD OF POST-CORRECTION OF 3D FEATURE POINT-BASED DIRECT TEACHING TRAJECTORY - There is provided a method of post-correction of a 3D feature point-based direct teaching trajectory, which improves direct teaching performance by extracting shape-based feature points based on curvature and velocity and improving a direct teaching trajectory correction algorithm using the shape-based feature points. Particularly, there is provided a method of post-correction of a 3D feature point-based direct teaching trajectory, which makes it possible to extract and post-correct a 3D (i.e., spatial) trajectory, as well as a 2D (i.e., planar) trajectory, with higher accuracy. | 09-12-2013 |
20130236051 | COMPUTER READABLE MEDIA CAN PERFORM INTERFERENCE IMAGE DETERMINING METHOD AND INTERFERENCE IMAGE DETERMINING APPARATUS - A computer readable medium having at least one program code recorded thereon. An interference image determining method can be performed when the program code is read and executed. The interference image determining method comprises: (a) controlling a light source to illuminate an object on a detecting surface to generate an image; (b) controlling a sensor to capture a current frame of the image; (c) utilizing an image characteristic included in the current frame to determine an interference image part of the current frame; and (d) updating a defined interference image according to the determined interference image part. | 09-12-2013 |
20130236052 | Digital Image Processing Using Face Detection and Skin Tone Information - A technique for processing a digital image uses face detection to achieve one or more desired image processing parameters. A group of pixels is identified that corresponds to a face image within the digital image. A skin tone is detected for the face image by determining one or more default color or tonal values, or combinations thereof, for the group of pixels. Values of one or more parameters are adjusted for the group of pixels that correspond to the face image based on the detected skin tone. | 09-12-2013 |
20130236053 | OBJECT IDENTIFICATION SYSTEM AND METHOD - According to embodiments, an object identification system is disclosed. The object identification system comprises a dictionary file comprising multiple records, each record including: an object identification code, and one or more standard images, wherein each standard image is related to one of the object identification codes. The object identification system further comprises a computation module configured to calculate a similarity by comparing an image data produced by an image sensor with the standard images in each record, and an identification module configured to identify one or more of the object identification codes based on the calculated similarity. The object identification system further comprises a production module configured to produce a graphical user interface that displays each of one or more standard images that are related to one of the object identification codes specified by a user. | 09-12-2013 |
20130236054 | Feature Detection Filter Using Orientation Fields - A target object is found in a target image, by using a computer for determining an orientation field of at least plural pixels of the target image, where the orientation field describes pixels at discrete positions of the target image being analyzed according to an orientation, and location. The orientation field is mapped against an orientation field in model images in a database to compute match values between the orientation field of the target image and orientation field of model images in the database. The match values are thresholded, and those match values that exceed the threshold to are counted to determine a match between the target and the model. | 09-12-2013 |
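Comparing orientation fields is commonly done on doubled angles, since edge orientation is only defined modulo π: cos 2Δθ is 1 for aligned orientations and −1 for perpendicular ones. A hedged sketch of such a match score follows; the gradient-based orientation estimate and the 0.9 agreement cutoff are assumptions, not the patent's method:

```python
import numpy as np

def orientation_field(img):
    """Per-pixel edge orientation (mod pi) and gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return np.arctan2(gy, gx) % np.pi, np.hypot(gx, gy)

def match_score(theta_t, theta_m, mag_t, mag_m, mag_thresh=1.0):
    """Fraction of strong-gradient pixels whose orientations agree.

    cos(2*dtheta) handles the mod-pi ambiguity: 1 = aligned,
    -1 = perpendicular orientations.
    """
    strong = (mag_t > mag_thresh) & (mag_m > mag_thresh)
    if not strong.any():
        return 0.0
    agree = np.cos(2 * (theta_t[strong] - theta_m[strong])) > 0.9
    return float(agree.mean())
```

Thresholding this score over all model images in the database, and counting how many positions exceed the threshold, yields the match count described in the abstract.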
20130236055 | IMAGE ANALYSIS DEVICE FOR CALCULATING VECTOR FOR ADJUSTING A COMPOSITE POSITION BETWEEN IMAGES - The image processing device calculates a vector for adjusting a composite position between images. | 09-12-2013 |
20130236056 | EVENT DETECTION SYSTEM AND METHOD USING IMAGE ANALYSIS - Provided is an event detection system including an image acquisition unit that acquires an image of a predetermined region, an image analysis unit that obtains focus data including a focus distance and a focus gain of the acquired image, an event occurrence determination unit that determines based on the focus data whether an event has occurred, and an alarm generation unit that generates an alarm signal according to an event signal transmitted from the event occurrence determination unit. | 09-12-2013 |
20130236057 | DETECTING APPARATUS OF HUMAN COMPONENT AND METHOD THEREOF - Disclosed are an apparatus and a method of detecting a human component from an input image. The apparatus includes a training database (DB) to store positive and negative samples of a human component, an image processor to calculate a difference image for the input image, a sub-window processor to extract a feature population from a difference image that is calculated by the image processor for the positive and negative samples of a predetermined human component stored in the training DB, and a human classifier to detect a human component corresponding to a human component model using the human component model that is learned from the feature population. | 09-12-2013 |
20130236058 | System And Process For Detecting, Tracking And Counting Human Objects Of Interest - A method of identifying, tracking, and counting human objects of interest based upon at least one pair of stereo image frames taken by at least one image capturing device, comprising the steps of: obtaining said stereo image frames and converting each said stereo image frame to a rectified image frame using calibration data obtained for said at least one image capturing device; generating a disparity map based upon a pair of said rectified image frames; generating a depth map based upon said disparity map and said calibration data; identifying the presence or absence of said objects of interest from said depth map and comparing each of said objects of interest to existing tracks comprising previously identified objects of interest; for each said presence of an object of interest, adding said object of interest to one of said existing tracks if said object of interest matches said one existing track, or creating a new track comprising said object of interest if said object of interest does not match any of said existing tracks; updating each said existing track; and maintaining a count of said objects of interest in a given time period based upon said existing tracks created or modified during said given time period. | 09-12-2013 |
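The disparity-to-depth step in the pipeline above follows the standard stereo relation depth = f·B/d, for focal length f in pixels, baseline B, and disparity d in pixels. A small sketch (treating zero disparity as infinite depth is an assumption, since the abstract does not specify invalid-pixel handling):

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Convert a disparity map (pixels) to a depth map (meters).

    depth = f * B / d; pixels with zero or negative disparity are
    assigned infinite depth.
    """
    d = np.asarray(disparity, dtype=float)
    depth = np.full(d.shape, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth
```

For example, with an 800-pixel focal length and a 10 cm baseline, a 2-pixel disparity corresponds to a depth of 40 m; the depth map is then segmented for head-like blobs as in the abstract.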
20130236059 | FLUORESCENCE REFLECTION IMAGING DEVICE WITH TWO WAVELENGTHS - A first light source has a first wavelength corresponding to an excitation wavelength of a fluorophore. The excitation wavelength and an emission wavelength of the fluorophore delineate a predetermined interval. A second light source has a second wavelength offset with respect to the first wavelength so as to be outside said predetermined interval. The offset between the first and second wavelengths is between 30 nm and 100 nm. A camera comprises a filter opaque to the first and second wavelengths and transparent to the emission wavelength and to wavelengths substantially higher than the higher of the first and second wavelengths. The light sources and camera are synchronized to alternately activate one of the light sources and make the camera alternately acquire a fluorescence image and a background noise image. | 09-12-2013 |
20130236060 | Image Analysis Method and Image Analysis Device - An image analysis method includes acquiring fluorescent images of frames in time-series. Each fluorescent image comprises pixels in which pixel data are acquired in the time-series. The method further includes setting analysis areas to the acquired fluorescent images, calculating a classification value of the analysis areas, classifying the images into one or more groups on the basis of the classification value, calculating an average image of the analysis area every group, subtracting the average image from each image of the analysis area every group to calculate a new image of the analysis area, and calculating a correlation value on the basis of the new images every group. | 09-12-2013 |
20130236061 | Image Analysis Method and Image Analysis Device - An image analysis method includes acquiring fluorescent images of frames in time-series. Each fluorescent image comprises pixels in which pixel data are acquired in the time-series. The method further includes setting analysis areas to the fluorescent images, selecting the fluorescent images of two or more frames to be used in analysis, extracting data pairs each comprising two pixels in which acquisition time intervals are the same in the analysis area of each of the selected fluorescent images, and performing product sum calculation of each of the data pairs for all of the selected images to calculate a correlation value. | 09-12-2013 |
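The product-sum calculation over pixel pairs separated by a fixed acquisition-time interval is, in effect, a lag correlation. A minimal single-pixel-series sketch using the usual fluorescence-correlation normalization G(τ) = ⟨I(t)I(t+τ)⟩/⟨I⟩² − 1 (the normalization is an assumption, not stated in the abstract):

```python
import numpy as np

def lag_correlation(series, lag):
    """Product-sum correlation of all sample pairs separated by `lag`,
    normalized fluorescence-correlation style: G = <I(t)I(t+lag)>/<I>^2 - 1."""
    s = np.asarray(series, dtype=float)
    pairs = s[:-lag] * s[lag:] if lag > 0 else s * s
    return pairs.mean() / (s.mean() ** 2) - 1.0
```

A constant signal gives G = 0 at every lag; a signal that fluctuates on a characteristic time scale gives a G(τ) curve that decays with lag, which is what the per-group analysis in the abstract extracts.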
20130236062 | INFORMATION PROCESSING APPARATUS, PROCESSING METHOD THEREOF, AND COMPUTER-READABLE STORAGE MEDIUM - An information processing apparatus that calculates information on a position and an orientation of an image capture device relative to an object captured by the image capture device, holds three-dimensional information including a plurality of line segments that constitute the object, acquires an image of the object captured by the image capture device, detects an image feature indicating a line segment from the acquired image, calculates a position and orientation of the image capture device based on correspondence between the image feature indicating the detected line segment and the held line segment, and determines, for each of the held line segments, whether to use the line segment for the calculation of the position and orientation thereafter, based on at least one of a result of detection of the image feature, and information acquired in the calculation of the position and orientation. | 09-12-2013 |
20130243240 | Camera-Based 3D Climate Control - A climate control unit is controlled by constructing background and foreground models of an environment from images acquired of the environment by a camera. The background model represents the environment when unoccupied, and there is one foreground model for each person in the environment. A 2D location of each person in the environment is determined using the background and foreground models. A 3D location of each person is determined using the 2D locations and inferences made from the images. The controlling of the climate control unit is according to the 3D locations. | 09-19-2013 |
20130243241 | METHOD, APPARATUS, AND MANUFACTURE FOR SMILING FACE DETECTION - A method, apparatus, and manufacture for smiling face detection is provided. For each frame, a list of new smiling faces for the frame is generated by performing smiling face detection employing an object classifier that is trained to distinguish between smiling faces and all objects in the frame that are not smiling faces. For the first frame, the list of new smiling faces is employed as an input smiling face list for the next frame. For each frame after the first frame, a list of tracked smiles for the frame is generated by tracking smiling faces in the frame from the input smiling face list for the frame. Further, a list of new smiling faces is generated for the next frame by combining the list of new smiling faces for the frame with the list of tracked smiles for the frame. | 09-19-2013 |
20130243242 | User identification system and method for identifying user - The present invention discloses an identification system which includes an image sensor, a storage unit and a comparing unit. The image sensor captures a plurality of images of the motion trajectory generated by a user at different timings. The storage unit has stored motion vector information of a group of users including or not including the user generating the motion trajectory. The comparing unit compares the plurality of images with the motion vector information to identify the user. The present invention also provides an identification method. | 09-19-2013 |
20130243243 | IMAGE PROCESSOR, IMAGE PROCESSING METHOD, CONTROL PROGRAM AND RECORDING MEDIUM - In processing for detecting, from a certain image, a target included in a registered image, a processing load or a processing time is reduced. | 09-19-2013 |
20130243244 | Apparatus, Method, and Computer Program Product for Medical Diagnostic Imaging Assistance - An apparatus for medical diagnostic imaging assistance includes memory that stores first feature information representing the feature of a lesion mask or non-lesion mask, a sampling unit that acquires a plurality of samples by sampling from the memory based on the first feature information, a machine-learning unit that generates a first discrimination condition corresponding to each of the samples by carrying out a machine-learning step on the multiple samples, and a statistical processing unit that generates a second discrimination condition by carrying out a statistical processing step under the first discrimination condition, in which a detection function determines whether a lesion candidate mask is an actual lesion by referring to second feature information representing the feature of a lesion candidate mask under a second discrimination condition. | 09-19-2013 |
20130243245 | PERSONALIZING CONTENT BASED ON MOOD - In order to increase the efficacy of a mood-based playlisting system, a mood sensor such as a camera may be used to provide mood information to the mood model. When the mood sensor includes a camera, the camera may be used to capture an image of the user. The image is analyzed to determine a mood for the user so that content may be selected responsive to the mood of the user. | 09-19-2013 |
20130243246 | MONITORING DEVICE, RELIABILITY CALCULATION PROGRAM, AND RELIABILITY CALCULATION METHOD - A monitoring device has a detection target person storage part in which a feature of a face of each detection target person is stored, an image processor that processes images captured with a plurality of imaging devices having different imaging areas, and detects the image in which the detection target person stored in the detection target person storage part is captured, a detection information storage part in which detection information is stored, the detection information including the detection target person, imaging area, and imaging date and time with respect to the image in which the detection target person detected by the image processor is captured, and a reliability calculator that calculates a degree of detection reliability in the image processor based on a time-space rationality, the time-space rationality being determined from a plurality of pieces of detection information on each detection target person. | 09-19-2013 |
20130243247 | PERIPHERAL INFORMATION GENERATING APPARATUS, CONVEYANCE, PERIPHERAL INFORMATION GENERATING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - The peripheral information generating apparatus includes (i) a projection section for forming a projection pattern L, which at least partially has a continuous profile, on a road by irradiating the road with light, (ii) an image capturing section, and (iii) an image analyzing section for generating peripheral information, which indicates a peripheral situation of the peripheral information generating apparatus and of the road, by analyzing the projection pattern. | 09-19-2013 |
20130243248 | METHOD FOR DIFFERENTIATING BETWEEN BACKGROUND AND FOREGROUND OF SCENERY AND ALSO METHOD FOR REPLACING A BACKGROUND IN IMAGES OF A SCENERY - The present invention relates to a method for differentiating between background and foreground in images or films of scenery recorded by an electronic camera. The invention additionally relates to a method for replacing the background in recorded images or films of scenery whilst maintaining the foreground. | 09-19-2013 |
20130243249 | ELECTRONIC DEVICE AND METHOD FOR RECOGNIZING IMAGE AND SEARCHING FOR CONCERNING INFORMATION - A method for recognizing an image of an object and searching for concerning information is provided. The method includes: capturing an image of an object; analyzing the image to extract characteristic from the image, and generating a request for searching for further information associated with the image; searching for further information associated with the image from a searchable database according to the request and the extracted characteristic of the image; and displaying all of the information which has been found. A related electronic device is also provided. | 09-19-2013 |
20130243250 | LOCATION OF IMAGE CAPTURE DEVICE AND OBJECT FEATURES IN A CAPTURED IMAGE - A method for matching a region on an object of interest with a geolocation in a coordinate system is disclosed. In one embodiment, an image of a region on an object of interest is captured on an image capture device. The image is processed to detect a located feature using a feature detection algorithm. Further processing of the located feature is performed to derive a first feature descriptor using a feature descriptor extraction algorithm. The feature descriptor is stored in a memory. A database of feature descriptors having geolocation information associated with the feature descriptors is searched for a match to the first feature descriptor. The geolocation information is then made available for access. | 09-19-2013 |
20130243251 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - An image processing device includes a memory unit, a candidate pupil detecting unit and a pupil determining unit. The memory unit is used in storing information regarding a pupil size. The candidate pupil detecting unit detects noncircular candidate pupils from an image in which an eye area is captured. The pupil determining unit extrapolates the shapes of the candidate pupils that are detected by the candidate pupil detecting unit and, based on the pupil size stored in the memory unit, determines a pupil from among the candidate pupils. | 09-19-2013 |
20130243252 | LOITERING DETECTION IN A VIDEO SURVEILLANCE SYSTEM - A behavioral recognition system may include both a computer vision engine and a machine learning engine configured to observe and learn patterns of behavior in video data. Certain embodiments may be configured to learn patterns of behavior consistent with a person loitering and generate alerts for same. Upon receiving information of a foreground object remaining in a scene over a threshold period of time, a loitering detection module evaluates whether the object trajectory corresponds to a random walk. Upon determining that the trajectory does correspond, the loitering detection module generates a loitering alert. | 09-19-2013 |
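One simple proxy for "the trajectory corresponds to a random walk" is that net displacement stays small relative to total path length: a directed walk has a ratio near 1, while a random walk's ratio shrinks as the track grows. This heuristic and its threshold are illustrative, not the patent's criterion:

```python
import math

def looks_like_loitering(track, ratio_thresh=0.3):
    """Flag a trajectory whose net displacement is small relative to its
    total path length, as a random (loitering-like) walk would be."""
    path = sum(math.dist(a, b) for a, b in zip(track, track[1:]))
    if path == 0:
        return True  # stationary object
    net = math.dist(track[0], track[-1])
    return net / path < ratio_thresh
```

In a deployment this test would run only on foreground objects that have already exceeded the dwell-time threshold, as the abstract describes, so brief pauses do not trigger alerts.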
20130243253 | IMAGE MONITORING SYSTEM AND IMAGE MONITORING PROGRAM - An image monitoring system includes a recorder that records an image captured by a camera via a network. The system is controlled to display the present image captured by the camera or a past image recorded on the recorder. A detector detects a moving object from the image captured by the camera, the detector including a resolution converter for generating an image with a resolution lower than the resolution of the image captured by the camera. A moving object is detected from the image generated by the resolution converter and positional information on the detected moving object is output. The detected moving object is merged with the image captured by the camera on the basis of the positional information. | 09-19-2013 |
20130243254 | Foreground Analysis Based on Tracking Information - Techniques for performing foreground analysis are provided. The techniques include identifying a region of interest in a video scene, detecting a static foreground object in the region of interest, and determining whether the static foreground object is abandoned or removed, wherein said determining comprises performing a foreground analysis based on tracking information and pruning one or more false alarms using one or more track statistics. | 09-19-2013 |
20130243255 | SYSTEM FOR FAST, PROBABILISTIC SKELETAL TRACKING - A system and method are disclosed for recognizing and tracking a user's skeletal joints with a NUI system. The system includes one or more experts for proposing one or more skeletal hypotheses each representing a user pose within a given frame. Each expert is generally computationally inexpensive. The system further includes an arbiter for resolving the skeletal hypotheses from the experts into a best state estimate for a given frame. The arbiter may score the various skeletal hypotheses based on different methodologies. The one or more skeletal hypotheses resulting in the highest score may be returned as the state estimate for a given frame. It may happen that the experts and arbiter are unable to resolve a single state estimate with a high degree of confidence for a given frame. It is a further goal of the present system to capture any such uncertainty as a factor in how a state estimate is to be used. | 09-19-2013 |
20130243256 | Multispectral Detection of Personal Attributes for Video Surveillance - Techniques, systems, and articles of manufacture for multispectral detection of attributes for video surveillance. A method includes generating one or more training sets of one or more multispectral images, generating a group of one or more multispectral box features, using the one or more training sets to select one or more of the one or more multispectral box features to generate a multispectral attribute detector, and using the multispectral attribute detector to identify a location of an attribute in video surveillance, wherein using the multispectral attribute detector comprises, for one or more locations on each spectral band level of the multispectral image, applying the multispectral attribute detector and producing an output indicating attribute detection or an output indicating no attribute detection, and wherein the attribute corresponds to the multispectral attribute detector. | 09-19-2013 |
20130243257 | SYSTEMS AND METHODS FOR TRACKING A MODEL - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose. | 09-19-2013 |
20130243258 | METHODS AND APPARATUS FOR ESTIMATING POINT-OF-GAZE IN THREE DIMENSIONS - Methods for determining a point-of-gaze (POG) of a user in three dimensions are disclosed. In particular embodiments, the methods involve: presenting a three-dimensional scene to both eyes of the user; capturing image data including both eyes of the user; estimating first and second line-of-sight (LOS) vectors in a three-dimensional coordinate system for the user's first and second eyes based on the image data; and determining the POG in the three-dimensional coordinate system using the first and second LOS vectors. | 09-19-2013 |
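When the two line-of-sight vectors do not intersect exactly (they rarely do, due to estimation noise), a common choice for the 3D point-of-gaze is the midpoint of the shortest segment between the two lines. A sketch of that computation follows; the midpoint rule is an assumed combination strategy, since the abstract does not specify how the two LOS vectors are combined:

```python
import numpy as np

def point_of_gaze(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two lines of sight.

    Each line is o + t*d. Solves the standard 2x2 system for the
    parameters of the closest points, then averages those points.
    """
    o1, d1, o2, d2 = (np.asarray(v, dtype=float) for v in (o1, d1, o2, d2))
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:      # (near-)parallel lines of sight
        t1, t2 = 0.0, e / c
    else:
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
    p1, p2 = o1 + t1 * d1, o2 + t2 * d2
    return (p1 + p2) / 2.0
```

For example, eyes at (±3 cm, 0, 0) with LOS vectors converging on a point 1 m ahead recover that point exactly; with noisy vectors the midpoint degrades gracefully.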
20130243259 | OBJECT DETECTION DEVICE AND OBJECT DETECTION METHOD - Disclosed is an object detection method capable of detecting with high precision information relating to a jointed object from image data. | 09-19-2013 |
20130251192 | ESTIMATED POSE CORRECTION - Embodiments are disclosed that relate to the correction of an estimated pose determined from depth image data. One disclosed embodiment provides, on a computing system, a method of obtaining a representation of a pose of an articulated object from image data capturing the articulated object. The method comprises receiving the depth image data, obtaining an initial estimated skeleton of the articulated object from the depth image data, applying a random forest subspace regression function to the initial estimated skeleton, and determining the representation of the pose based upon a result of applying the random forest subspace regression to the initial estimated skeleton. | 09-26-2013 |
20130251193 | METHOD OF FILTERING AN IMAGE - For each of a plurality of homogeneity regions in relative pixel space, a deviation associated therewith with respect to a pixel to be filtered is given by a sum of associated difference values, each difference value given by the absolute value of a difference between a value of a pixel to be filtered and that of a neighboring pixel selected in accordance with the selected homogeneity region. The filtered pixel value is responsive to values of the neighboring pixels for the homogeneity region with minimum deviation. The relative pixel locations of each homogeneity region are symmetric relative to a radially-extending axis extending outwards in a corresponding polar direction therealong from a relatively central pixel location to a boundary of the homogeneity region, including all relative pixels intersected by the radially-extending axis, different homogeneity regions being associated with different polar directions. | 09-26-2013 |
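The minimum-deviation selection can be illustrated with four axis-aligned homogeneity regions: for each region, sum the absolute differences between the pixel to be filtered and the region's neighbors, then average the neighbors of the winning region. The specific offsets below are a hypothetical choice; the patent allows more general polar-direction regions:

```python
import numpy as np

# Relative offsets along four polar directions (a hypothetical choice).
REGIONS = {
    'E': [(0, 1), (0, 2)],
    'W': [(0, -1), (0, -2)],
    'N': [(-1, 0), (-2, 0)],
    'S': [(1, 0), (2, 0)],
}

def homogeneity_filter(img, y, x):
    """Filter pixel (y, x): average the neighbors of the directional region
    whose summed absolute difference from the center pixel is minimum.

    Assumes (y, x) lies at least two pixels from the image border.
    """
    center = float(img[y, x])
    best, best_dev = None, None
    for offsets in REGIONS.values():
        vals = [float(img[y + dy, x + dx]) for dy, dx in offsets]
        dev = sum(abs(center - v) for v in vals)
        if best_dev is None or dev < best_dev:
            best, best_dev = vals, dev
    return sum(best) / len(best)
```

Near an intensity edge the filter picks the region that lies along the pixel's own side of the edge, so edges survive while noise in flat areas is averaged out, which is the edge-preserving behavior the abstract describes.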
20130251194 | RANGE-CUED OBJECT SEGMENTATION SYSTEM AND METHOD - Objects in a range map image are clustered into regions of interest responsive to range as determined from either a separate ranging system or from a top-down transformation of a range map image from a stereo vision system. A relatively-central location of each region of interest is transformed to mono-image geometry, and the corresponding portion of an associated mono-image is searched radially outwards from the relatively-central location along a plurality of radial search paths, along which the associated image pixels are filtered using an Edge-Preserving Smoothing filter in order to find an edge of the associated object along the radial search path. Edge locations for each of the radial search paths are combined in an edge profile vector that provides for discriminating the object. | 09-26-2013 |
20130251195 | ELECTRONIC DEVICE AND METHOD FOR MEASURING POINT CLOUD OF OBJECT - A method obtains an original point-cloud of the object, filters discrete points from the original point-cloud, determines a first sub-point-cloud and a second sub-point-cloud from the filtered point-cloud, creates an updated point-cloud of the object based on the first sub-point-cloud and the second sub-point-cloud, and determines points to be fitted from the updated point-cloud. The method further fits a figure according to the determined points, determines a reference figure according to the fitted figure, determines a first point from the first sub-point-cloud and a second point from the second sub-point-cloud, calculates a gap width and a gap height of the updated point-cloud according to the first determined point, the second determined point, and the reference figure, and displays the gap width and the gap height on a display device. | 09-26-2013 |
20130251196 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM - A non-transitory information storage medium stores a program for causing a computer to execute processing including obtaining a search region for a search of the outside of one object and selecting any object in the search region as a search result from among a plurality of other objects. | 09-26-2013 |
20130251197 | METHOD AND A DEVICE FOR OBJECTS COUNTING - A method and a device for object counting in image processing include acquiring the depth image of a frame; detecting objects according to the depth image; associating the identical object across different frames to form a trajectory; and determining the number of objects according to the number of trajectories. The device includes an acquisition module for acquiring the depth image of a frame; a detection module for detecting objects according to the depth image; an association module for associating the identical object across different frames to form a trajectory; and a determining module for determining the number of objects according to the number of trajectories. | 09-26-2013 |
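The count-by-trajectories idea in the entry above can be sketched with a nearest-neighbour associator (the greedy matching and the distance gate are illustrative simplifications, not the patent's method):

```python
def count_objects(frames, max_dist=5.0):
    """Associate detections across frames into trajectories by
    nearest-neighbour matching; the object count is the number of
    trajectories formed."""
    trajectories = []  # each trajectory keeps its last known position
    for detections in frames:
        unmatched = list(range(len(trajectories)))
        for (x, y) in detections:
            best, best_d = None, max_dist
            for i in unmatched:
                tx, ty = trajectories[i]
                d = ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5
                if d < best_d:
                    best, best_d = i, d
            if best is None:
                trajectories.append((x, y))      # start a new trajectory
            else:
                trajectories[best] = (x, y)      # extend an existing one
                unmatched.remove(best)
    return len(trajectories)
```

Two detections drifting along separate paths over three frames yield two trajectories, hence a count of two.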
20130251198 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND STORAGE MEDIUM - In an information processing apparatus, an object recognition process is performed on each image of a multi-viewpoint image group based on focal length information and information about objects. Objects are specified in each image. A target object is determined based on relationship information indicating a relationship between the objects. An image that contains the determined target object is generated. | 09-26-2013 |
20130251199 | SYSTEM AND METHOD OF ESTIMATING PAGE POSITION - A method captures a video image frame of a book, estimates a position of at least a first endpoint of the book's spine, applies an edge detection operation to the video image frame to generate an edge image, applies a Hough transform to a first region in the edge image to obtain a plurality of line estimates, and rejects line estimates that do not substantially intersect with an estimated endpoint of the book's spine. For line estimates that are not rejected, one or more clusters of their angles with respect to an estimated endpoint of the book's spine are detected, and an average angle is generated from each cluster of angles. An average angle is selected, and the angular position of the turning leaf in the book's image is estimated responsive to the currently selected average angle. | 09-26-2013 |
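The angle-clustering step in the entry above might be sketched as a simple gap-based grouping of the surviving line angles (the gap threshold and one-dimensional clustering are assumptions; the patent does not specify the clustering method):

```python
def cluster_angles(angles, gap=5.0):
    """Group line-estimate angles (degrees) into clusters wherever
    consecutive sorted angles differ by less than `gap`, and return
    the average angle of each cluster."""
    if not angles:
        return []
    angles = sorted(angles)
    clusters, current = [], [angles[0]]
    for a in angles[1:]:
        if a - current[-1] < gap:
            current.append(a)       # same cluster: small angular gap
        else:
            clusters.append(current)
            current = [a]           # large gap: start a new cluster
    clusters.append(current)
    return [sum(c) / len(c) for c in clusters]
```

One of the returned averages would then be selected as the angular position of the turning leaf.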
20130251200 | IMAGE PROCESSING DEVICE THAT ANALYZES MOTION OF OBJECT - An image processing device that analyzes the motion of an object is disclosed. | 09-26-2013 |
20130251201 | SYSTEM AND METHOD FOR RECOMMENDING BUDDIES IN SOCIAL NETWORK - A system for recommending buddies in a social network includes a data management module, a face detection and characteristics extraction module, a face matching module, a user avatar determining module, a buddy recommendation computing module and a buddy recommendation control module. The buddy recommendation control module extracts photo album data of respective users from a data management module, controls the face detection and characteristics extraction module to perform face detection and extraction of face characteristics data, and constructs a photo album face characteristics table comprising extracted face characteristics data for respective users. The buddy recommendation control module controls the user avatar determining module to determine user avatar characteristics of each user and controls the buddy recommendation computing module to generate the buddy recommendation data based on the constructed photo album face characteristics tables, user information and buddy information in the data management modules of respective users. | 09-26-2013 |
20130251202 | Facial Features Detection - There is described a method for facial features detection in a picture frame containing a skin tone area. | 09-26-2013 |
20130251203 | PERSON DETECTION DEVICE AND PERSON DETECTION METHOD - Provided is a person detection device with which it is possible to estimate the state of a part of a person from an image. | 09-26-2013 |
20130251204 | VALIDATION ANALYSIS OF HUMAN TARGET - Technology for testing a target recognition, analysis, and tracking system is provided. A searchable repository of recorded and synthesized depth clips and associated ground truth tracking data is provided. Data in the repository is used by one or more processing devices each including at least one instance of a target recognition, analysis, and tracking pipeline to analyze performance of the tracking pipeline. An analysis engine provides at least a subset of the searchable set responsive to a request to test the pipeline and receives tracking data output from the pipeline on the at least subset of the searchable set. A report generator outputs an analysis of the tracking data relative to the ground truth in the at least subset to provide an output of the error relative to the ground truth. | 09-26-2013 |
20130251205 | Relative Pose Estimation of Non-Overlapping Cameras Using the Motion of Subjects in the Camera Fields of View - A relative pose between two cameras is determined by using input data obtained from the motion of subjects, such as pedestrians, between the fields of view of two cameras, determining trajectory information for the subjects, and computing homographies relating lines obtained from trajectories in the first image data to lines obtained from the trajectories in the second image data. The two fields of view need not overlap. | 09-26-2013 |
20130251206 | OBJECT DETECTION METHOD AND OBJECT DETECTOR USING THE METHOD - An object detection method, and an object detector using the method, are disclosed. | 09-26-2013 |
20130251207 | Method and System to Detect the Microcalcifications in X-Ray Images Using Nonlinear Energy Operator - A method and system to detect microcalcifications (MC) in different types of images (X-ray images, mammograms, computed tomography) with varied densities using a nonlinear energy operator (NEO) is disclosed, to favor precise detection of early breast cancer. Such microcalcifications are associated with both high intensity and high frequency content. The same NEO output is useful to detect and remove irrelevant curvilinear structures (CLS), thereby helping to reduce false alarms in the microcalcification detection technique. This is effective on different datasets (scanned film, and mammograms with large spatial resolution such as CR and DR) of varied breast composition (dense, fatty glandular, fatty), demonstrated quantitatively by free-response receiver operating characteristic (FROC) analysis. Importantly, the method and apparatus of the invention can be used in conjunction with machine learning techniques such as SVM to favor detection of incipient or small microcalcifications, thus benefiting radiologists in confirming detection of microcalcifications in X-ray images/mammograms and reducing death rates. | 09-26-2013 |
20130259297 | IMAGE-RELATED SOCIAL NETWORK METHODS AND ARRANGEMENTS - In one aspect, a user captures an image of a physical object (e.g., a grocery item) with a smartphone. The depicted object is identified, such as by extracting fingerprint or watermark data from the imagery. Other imagery depicting that object—or depicting related objects—is identified on the web, and is displayed to the user on the smartphone screen. The user may select one or more of these images and direct that they be posted to a social network account (e.g., Pinterest) associated with the user. In another aspect, the user's location is sensed (e.g., an aisle of a department store), and a collection of images depicting nearby products is presented to the user for selection and posting to a social networking service. A great variety of other features and arrangements are also detailed. | 10-03-2013 |
20130259298 | METHODS AND APPARATUS TO COUNT PEOPLE IN IMAGES - An example method includes analyzing frame pairs of a plurality of frame pairs captured over a period of time to identify a redundant person indication detected in an overlap region, the overlap region corresponding to an intersection of a first field of view of a first image sensor and a second field of view of a second image sensor, each of the frame pairs including a first frame captured by the first image sensor and a second frame captured by the second image sensor; eliminating the identified redundant person indication to form a conditioned set of person indications for the period of time; grouping similarly located ones of the person indications of the conditioned set to form groups; analyzing the groups to identify redundant groups detected in the overlap region; and eliminating the redundant groups from a people tally generated based on the groups. | 10-03-2013 |
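The redundancy-elimination step in the entry above might be sketched as follows, assuming both sensors report indications in a shared coordinate frame, the overlap region is given as an x-interval, and a simple distance gate decides whether two indications are the same person (all three are assumptions for illustration):

```python
def conditioned_count(frame_a, frame_b, overlap_x, same_person_dist=2.0):
    """Count people seen by two image sensors, dropping sensor-B
    indications that fall inside the overlap region and duplicate a
    sensor-A indication."""
    lo, hi = overlap_x
    kept_b = []
    for (bx, by) in frame_b:
        redundant = False
        if lo <= bx <= hi:  # only the overlap region can hold duplicates
            for (ax, ay) in frame_a:
                if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= same_person_dist:
                    redundant = True
                    break
        if not redundant:
            kept_b.append((bx, by))
    return len(frame_a) + len(kept_b)
```

A person standing in the overlap is reported by both sensors but tallied once; people outside the overlap are tallied normally.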
20130259299 | METHODS AND APPARATUS TO COUNT PEOPLE IN IMAGES - Methods and apparatus to count people in images are disclosed. An example method includes, in response to determining that a first person indication of a first frame obtained via a first image sensor is redundant to a second person indication of a second frame obtained via a second image sensor, storing a first coordinate of the first person indication in connection with a second coordinate of the second person indication in a database; and in response to detecting a third person indication in a third frame obtained via the first image sensor, querying the database with a third coordinate of the third person indication to determine whether the third coordinate matches the first coordinate. | 10-03-2013 |
20130259300 | AUTOMATIC DETECTION OF SWARM ATTACKS - Methods and apparatus for detecting a swarm attack based on a plurality of convergence hypotheses related to correlated movements of entities in an area of interest. Projected tracks for the entities are determined based on position reports received for the entities. At least one of the convergence hypotheses is updated based, at least in part, on the projected tracks, and a convergence hypothesis is output when a score assigned to the hypothesis exceeds a threshold value. | 10-03-2013 |
20130259301 | STAIN DETECTION - A method of detecting staining on a media item is described. An example method includes receiving an image of the media item, including a plurality of pixels having different intensity values within a range of intensity values, applying central weighting to the received image, applying a threshold to each pixel in the centrally-weighted image to transform each pixel to a binary value and thereby create a binary evaluation image, and comparing each pixel in the evaluation image with the corresponding pixel in a binary reference image to create a difference image including (i) a stain pixel at each spatial location at which the evaluation image has a low intensity pixel and the binary reference image has a high intensity pixel, and (ii) a non-stain pixel at all other spatial locations. The media item is identified as stained in the event that the difference image meets a staining criterion. | 10-03-2013 |
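The threshold-and-compare core of the entry above can be sketched directly (the central weighting is omitted here, and the pixel-count staining criterion is an assumption; the patent leaves the criterion open):

```python
def detect_stain(image, reference, threshold=128, min_stain_pixels=3):
    """Binary stain check: a pixel is a stain pixel where the thresholded
    evaluation image is low but the binary reference image is high; the
    item is stained if enough stain pixels accumulate."""
    stain_pixels = 0
    for row_img, row_ref in zip(image, reference):
        for value, ref in zip(row_img, row_ref):
            evaluation = 1 if value >= threshold else 0  # binarise pixel
            if evaluation == 0 and ref == 1:  # dark where it should be bright
                stain_pixels += 1
    return stain_pixels >= min_stain_pixels
```

Note that a dark pixel only counts as stain where the reference expects a bright one, so printed features of the media item are not flagged.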
20130259302 | Method of tracking objects - A method of object tracking is provided, comprising: creating areas of a tracking object and a non-tracking object respectively; determining whether the state of the tracking object and the non-tracking object is separation, proximity, or overlap; creating at least one separation template image of a separation area of the tracking object and/or the non-tracking object if the tracking object is proximate the non-tracking object; fetching all feature points of an overlapping area of the tracking object and the non-tracking object if the tracking object and the non-tracking object overlap; performing a match between each of the feature points and the separation template image so as to calculate a corresponding matching error score respectively; and comparing the matching error score of each feature point with that of the separation template image so as to determine whether the feature point belongs to the tracking object or the non-tracking object. | 10-03-2013 |
20130259303 | A SYSTEM AND METHOD FOR TRACKING MOVING OBJECTS - A method for tracking an object that is embedded within images of a scene, including: in a sensor unit, generating, storing, and transmitting over a communication link a succession of images of a scene; in a remote control unit, receiving the images, receiving a command for selecting an object of interest in a given image, determining object data associated with the object, and transmitting the object data to the sensor unit; and in the sensor unit, identifying the given image and the object of interest using the object data, and tracking the object in other images. If the object cannot be located in the latest image of the stored succession of images, information from images in which the object was located is used to predict its estimated real-time location, and direction commands are generated to the movable sensor for generating real-time images of the scene and locking onto the object. | 10-03-2013 |
20130259304 | TARGET AND METHOD OF DETECTING, IDENTIFYING, AND DETERMINING 3-D POSE OF THE TARGET - We disclose a photogrammetry target that includes a background having a first color and a plurality of ovoid regions located on the background and having a second color contrasting the first color. We further disclose a method and system for detecting the target and processing image data captured from the target to discern therefrom at least one of a distance to the target, identification of the target, or pose of the target. | 10-03-2013 |
20130259305 | COMMODITY MANAGEMENT APPARATUS - A commodity management apparatus comprises an extraction section configured to extract a plurality of objects shown in a display state image which is obtained by color-photographing a display space in which a plurality of commodities are displayed, wherein the plurality of commodities respectively have labels which indicate objects of different colors according to the timing at which labels are respectively adhered to the commodities, a discrimination section configured to discriminate color of each object extracted by the extraction section, and a counting section configured to respectively count the plurality of objects extracted by the extraction section according to each color discriminated by the discrimination section. | 10-03-2013 |
20130259306 | AUTOMATIC REVOLVING DOOR AND AUTOMATIC REVOLVING DOOR CONTROL METHOD - An exemplary automatic revolving door control method includes obtaining a preset number of successive images captured by a camera. The images include distance information, obtained by time-of-flight (TOF) technology, for the objects captured in the images. The method creates successive 3D scene models and determines whether one or more persons appear in the created models. The method further determines the foremost person of the one or more persons as the person being monitored, and determines whether the moving direction of that person is toward the entrance. The method determines the distance moved by the person being monitored between two created 3D scene models, determines the time taken to move that distance, and thereby determines the moving speed of the person being monitored, so as to rotate the automatic revolving door at a speed matching that of the person being monitored. | 10-03-2013 |
20130259307 | OBJECT DETECTION APPARATUS AND METHOD THEREFOR - An object detection apparatus includes a first detection unit configured to detect a first portion of an object from an input image, a second detection unit configured to detect a second portion different from the first portion of the object, a first estimation unit configured to estimate a third portion of the object based on the first portion, a second estimation unit configured to estimate a third portion of the object based on the second portion, a determination unit configured to determine whether the third portions, which have been respectively estimated by the first and second estimation units, match each other, and an output unit configured to output, if the third portions match each other, a detection result of the object based on at least one of a detection result of the first or second detection unit and an estimation result of the first or second estimation unit. | 10-03-2013 |
20130259308 | SYSTEM AND METHOD OF ROOM DECORATION FOR USE WITH A MOBILE DEVICE - The present disclosure includes systems and computer-implemented methods for redesigning rooms in a house using digital image analysis. The analysis includes defining room parameters based on the architectural shape of the room as determined from an analysis of the walls, ceiling, windows, and doors, performing a room size calibration and defining an empty 3D room. Using the analyzed digital image, redesign can progress with selecting types of inner surfaces of the room from a pre-defined collection of architectural shapes, selecting types of furniture in the room, and selecting types of lighting. Then, a 3D model of the redesigned room is generated wherein the architectural shape is in the form of 2D and wherein the 2D image has an associated 3D image. At least one image of the redesigned 3D room may be generated and stored, and may be transmitted to a receiver wherein the corresponding showroom picture is displayed. | 10-03-2013 |
20130259309 | DRIVING SUPPORT APPARATUS - There is provided a driving support apparatus. A recognition controller determines, in a smoke-like object determination unit, whether an object detected by processing a captured image in an object detection unit is a smoke-like object. When the detected object is determined to be a smoke-like object, the recognition controller checks the range distribution in the region of the smoke-like object, adds the result as "density" attribute information, and transmits the result to a controller. The controller decides, in a support operation level decision unit, whether pre-crash brake control can be executed, and the intensity of the operation, based on the attribute information of the smoke-like object. Thus, even if a smoke-like object is detected, an appropriate driving support process according to the conditions can be executed. | 10-03-2013 |
20130259310 | OBJECT DETECTION METHOD, OBJECT DETECTION APPARATUS, AND PROGRAM - An object detection method includes an image acquisition step of acquiring an image including a target object, a layer image generation step of generating a plurality of layer images by one or both of enlarging and reducing the image at a plurality of different scales, a first detection step of detecting a region of at least a part of the target object as a first detected region from each of the layer images, a selection step of selecting at least one of the layer images based on the detected first detected region and learning data learned in advance, a second detection step of detecting a region of at least a part of the target object in the selected layer image as a second detected region, and an integration step of integrating a detection result detected in the first detection step and a detection result detected in the second detection step. | 10-03-2013 |
20130259311 | Method and Apparatus for Spawning Specialist Belief Propagation Networks - A method and apparatus for processing image data is provided. The method includes the steps of employing a main processing network for classifying one or more features of the image data, employing a monitor processing network for determining one or more confusing classifications of the image data, and spawning a specialist processing network to process image data associated with the one or more confusing classifications. | 10-03-2013 |
20130259312 | Eye Gaze Based Location Selection for Audio Visual Playback - In response to the detection of what the user is looking at on a display screen, the playback of audio or visual media associated with that region may be modified. For example, video in the region the user is looking at may be sped up or slowed down. A still image in the region of interest may be transformed into a moving picture. Audio associated with an object depicted in the region of interest on the display screen may be activated in response to user gaze detection. | 10-03-2013 |
20130266174 | SYSTEM AND METHOD FOR ENHANCED OBJECT TRACKING - A system and method are provided for object tracking using depth data, amplitude data and/or intensity data. In some embodiments, time of flight (ToF) sensor data may be used to enable enhanced image processing, the method including acquiring depth data for an object imaged by a ToF sensor; acquiring amplitude data and/or intensity data for an object imaged by a ToF sensor; applying an image processing algorithm to process the depth data and the amplitude data and/or the intensity data; and tracking object movement based on an analysis of the depth data and the amplitude data and/or the intensity data. | 10-10-2013 |
20130266175 | ROAD STRUCTURE DETECTION AND TRACKING - A method for detecting road edges in a road of travel for clear path detection is provided. Input images are captured at successive time-step frames. An illumination intensity image and a yellow image are generated from the captured image. Edge analysis is performed. The line candidates identified in a next frame are tracked. A vanishing point is estimated in the next frame based on the tracked line candidates. Respective line candidates are selected in the next frame. A region of interest is identified in the captured image for each line candidate. Features relating to the line candidate are extracted from the region of interest and input to a classifier. The classifier assigns a confidence value to the line candidate identifying a probability of whether the line candidate is a road edge. A line candidate is identified as a reliable road edge if its confidence value is greater than a predetermined value. | 10-10-2013 |
20130266176 | SYSTEM AND METHOD FOR SCRIPT AND ORIENTATION DETECTION OF IMAGES USING ARTIFICIAL NEURAL NETWORKS - A system and method for script and orientation detection of images using artificial neural networks (ANNs) are disclosed. In one example, textual content in the image is extracted. Further, a vertical component run (VCR) and horizontal component run (HCR) are obtained by vectorizing each connected component in the extracted textual content. Furthermore, a zonal density run (ZDR) is obtained for each connected component in the extracted textual content. In addition, a concatenated vertical document vector (VDV), horizontal document vector (HDV), and zonal density vector (ZDV) is computed by normalizing the obtained VCR, HCR, and ZDR, respectively, for each connected component. Moreover, the script in the image is determined using a script detection ANN module and the concatenated VDV, HDV, and ZDV of the image. Also, the orientation of the image is determined using an orientation detection ANN module and the concatenated VDV, HDV, and ZDV of the image. | 10-10-2013 |
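The zonal density run (ZDR) from the entry above can be illustrated for a single connected component: split its binary bitmap into a grid of zones and report the fraction of set pixels per zone (the 2×2 grid size and the fraction-based normalization are assumptions; the patent normalizes the runs into a zonal density vector without fixing these details):

```python
def zonal_density_vector(bitmap, zones=2):
    """Compute a zonal density vector for one connected component:
    split the binary bitmap into zones x zones cells and report the
    fraction of set pixels in each cell, row-major."""
    h, w = len(bitmap), len(bitmap[0])
    zh, zw = h // zones, w // zones   # assumes dimensions divide evenly
    vector = []
    for zy in range(zones):
        for zx in range(zones):
            total = on = 0
            for y in range(zy * zh, (zy + 1) * zh):
                for x in range(zx * zw, (zx + 1) * zw):
                    total += 1
                    on += bitmap[y][x]
            vector.append(on / total)
    return vector
```

Concatenated with the normalized vertical and horizontal component runs, vectors like this form the per-component feature fed to the script and orientation detection ANNs.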
20130266177 | Method and Device for Detecting an Object in an Image - A method for detecting an object in an image by means of an image processing device, includes several steps of object search in the image at different search scales. During at least one of the search steps, portions of the image are excluded from the search. The size of the portions decreases as the search scale increases. | 10-10-2013 |
20130266178 | REAL-TIME QUALITY CONTROL OF EM CALIBRATION | 10-10-2013 |
20130266179 | Initialization for Robust Video-Based Structure from Motion - An initialization technique that may, for example, be used in an adaptive reconstruction algorithm implemented by structure from motion (SFM) techniques. The initialization technique computes an initial reconstruction from a subset of frames in an image sequence. The initialization technique may be performed to determine and reconstruct a set of initial keyframes covering a portion of the image sequence according to the point trajectories. In the initialization technique, a set of temporally spaced keyframe candidates is determined and two initial keyframes are selected from the set of keyframe candidates. The two initial keyframes are reconstructed, and then one or more additional keyframes between the two initial keyframes are selected and reconstructed. | 10-10-2013 |
20130266180 | Keyframe Selection for Robust Video-Based Structure from Motion - An adaptive technique is described for iteratively selecting and reconstructing keyframes to fully cover an image sequence that may, for example, be used in an adaptive reconstruction algorithm implemented by a structure from motion (SFM) technique. A next keyframe to process may be determined according to an adaptive keyframe selection technique. The determined keyframe may be reconstructed and added to the current reconstruction. A global optimization may be performed on the current reconstruction. One or more outlier points may be determined and removed from the reconstruction. One or more inlier points may be determined and recovered. If the number of inlier points that were added exceeds a threshold, then global optimization may again be performed. If the current reconstruction is a projective construction, self-calibration may be performed to upgrade the projective reconstruction to a Euclidean reconstruction. | 10-10-2013 |
20130266181 | OBJECT TRACKING AND BEST SHOT DETECTION SYSTEM - A method and system using face tracking and object tracking is disclosed. The method and system use face tracking, location, and/or recognition to enhance object tracking, and use object tracking and/or location to enhance face tracking. | 10-10-2013 |
20130266182 | HUMAN BODY POSE ESTIMATION - Techniques for human body pose estimation are disclosed herein. Depth map images from a depth camera may be processed to calculate a probability that each pixel of the depth map is associated with one or more segments or body parts of a body. Body parts may then be constructed of the pixels and processed to define joints or nodes of those body parts. The nodes or joints may be provided to a system which may construct a model of the body from the various nodes or joints. | 10-10-2013 |
20130266183 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 10-10-2013 |
20130266184 | Methods for Automatic Segmentation and Temporal Tracking - In one embodiment, a method of detecting the centerline of a vessel is provided. The method comprises steps of acquiring a 3D image volume, initializing a centerline, initializing a Kalman filter, predicting a next center point using the Kalman filter, checking validity of the prediction made using the Kalman filter, performing template matching, updating the Kalman filter based on the template matching, and repeating the steps of predicting, checking, performing, and updating for a predetermined number of times. Methods of automatic vessel segmentation and temporal tracking of the segmented vessel are further described with reference to the method of detecting the centerline. | 10-10-2013 |
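The predict/update loop in the entry above can be sketched with a constant-velocity Kalman filter applied per coordinate of the center point (the constant-velocity model, the noise parameters, and the initial state are illustrative assumptions; in the patent the measurement comes from template matching, and a validity check precedes each update):

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of the
    centerline's center point (F = [[1,1],[0,1]], H = [1,0])."""
    def __init__(self, q=0.01, r=0.01):
        self.x = [0.0, 0.0]                    # state: position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
        self.q, self.r = q, r                  # process / measurement noise

    def predict(self):
        """Propagate state one step; return the predicted position."""
        self.x = [self.x[0] + self.x[1], self.x[1]]
        p = self.P
        self.P = [
            [p[0][0] + p[1][0] + p[0][1] + p[1][1] + self.q, p[0][1] + p[1][1]],
            [p[1][0] + p[1][1], p[1][1] + self.q],
        ]
        return self.x[0]

    def update(self, z):
        """Fold in a position measurement (e.g. a template-match result)."""
        s = self.P[0][0] + self.r              # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        y = z - self.x[0]                      # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p = self.P
        self.P = [
            [(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
            [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]],
        ]
```

Run once per axis, the filter's prediction supplies the next candidate center point; after a few measurements along a straight vessel the predicted position closely anticipates the next point.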
20130272569 | TARGET IDENTIFICATION SYSTEM, TARGET IDENTIFICATION SERVER, AND TARGET IDENTIFICATION TERMINAL - A computer and a terminal apparatus retain position information about targets. The terminal apparatus includes: a capturing portion that captures an image of the target; a position information acquisition portion that acquires information about a position to capture the target; an orientation information acquisition portion that acquires information about an orientation to capture the target; and a communication portion that transmits the image, the position information, and the orientation information to the computer. The computer identifies at least one first target candidate as a candidate for the captured target from the targets based on the position information about the targets, the acquired position information, and the acquired orientation information. The computer identifies at least one second target candidate from at least the one first target candidate based on a distance from the terminal apparatus to the captured target. | 10-17-2013 |
20130272570 | ROBUST AND EFFICIENT LEARNING OBJECT TRACKER - This disclosure presents methods, systems, computer-readable media, and apparatuses for optically tracking the location of one or more objects. The techniques may involve accumulation of initial image data, establishment of a dataset library containing image features, and tracking using a plurality of modules or trackers, for example an optical flow module, decision forest module, and color tracking module. Tracking outputs from the optical flow, decision forest and/or color tracking modules are synthesized to provide a final tracking output. The dataset library may be updated in the process. | 10-17-2013 |
20130272571 | HUMAN SUBMENTAL PROFILE MEASUREMENT - An imaging system captures images of a human submental profile in a dimension controlled environment and utilizes image analysis algorithms for detecting submental changes. Instead of implementing a strict posture control of a subject, the imaging system allows the subject to freely move his/her head in an up-and-down direction and a high speed camera captures this movement through a series of images at varying head-to-shoulder angles. The image analysis algorithms may accurately compare before and after images at similar head-to-shoulder angles to identify changes in a human submental profile using a series of measurements and checkpoints. | 10-17-2013 |
20130272572 | APPARATUS AND METHOD FOR MAPPING A THREE-DIMENSIONAL SPACE IN MEDICAL APPLICATIONS FOR DIAGNOSTIC, SURGICAL OR INTERVENTIONAL MEDICINE PURPOSES - The present invention relates to an apparatus and to a method for mapping a three-dimensional space in medical applications for diagnostic, surgical or interventional medicine purposes. The apparatus and the method according to the invention use acquisition means, capable of recording two-dimensional images of said three-dimensional space from at least a first recording position and from a second recording position, and a reference target, comprising a plurality of marker elements and movable between a first target point and a second target point of said three-dimensional space. A processing unit, adapted to receive data indicative of a first image and of a second image of said three-dimensional space, comprises computerized means adapted to calculate registration data to register the two-dimensional reference systems, used to express the coordinates of the points of said first image and of said second image, with the three-dimensional reference system, defined by the marker elements of said reference target. | 10-17-2013 |
20130272573 | MULTI-VIEW OBJECT DETECTION USING APPEARANCE MODEL TRANSFER FROM SIMILAR SCENES - View-specific object detectors are learned as a function of scene geometry and object motion patterns. Motion directions are determined for object images extracted from a training dataset and collected from different camera scene viewpoints. The object images are categorized into clusters as a function of similarities of their determined motion directions, the object images in each cluster are acquired from the same camera scene viewpoint. Zenith angles are estimated for object image poses in the clusters relative to a position of a horizon in the cluster camera scene viewpoint, and azimuth angles of the poses as a function of a relation of the determined motion directions of the clustered images to the cluster camera scene viewpoint. Detectors are thus built for recognizing objects in input video, one for each of the clusters, and associated with the estimated zenith angles and azimuth angles of the poses of the respective clusters. | 10-17-2013 |
20130272574 | Interactivity Via Mobile Image Recognition - Systems and methods of interacting with a virtual space, in which a mobile device is used to electronically capture image data of a real-world object, the image data is used to identify information related to the real-world object, and the information is used to interact with software to control at least one of: (a) an aspect of an electronic game; and (b) a second device local to the mobile device. Contemplated systems and methods can be used in gaming, in which the image data can be used to identify a name of the real-world object, to classify the real-world object, to identify the real-world object as a player in the game, or to identify the real-world object as a goal object or as having some other value in the game. | 10-17-2013 |
20130272575 | OBJECT DETECTION USING EXTENDED SURF FEATURES - Systems, apparatus and methods are described including generating gradient images from an input image, where the gradient images include gradient images created using 2D filter kernels. Feature descriptors are then generated from the gradient images and object detection performed by applying the descriptors to a boosting cascade classifier that includes logistic regression base classifiers. | 10-17-2013 |
20130272576 | HUMAN HEAD DETECTION IN DEPTH IMAGES - Systems, devices and methods are described including receiving a depth image and applying a template to pixels of the depth image to determine a location of a human head in the depth image. The template includes a circular shaped region and a first annular shaped region surrounding the circular shaped region. The circular shaped region specifies a first range of depth values. The first annular shaped region specifies a second range of depth values that are larger than depth values of the first range of depth values. | 10-17-2013 |
20130272577 | LANE RECOGNITION DEVICE - Provided is a lane recognition device capable of extracting linear elements derived from lane marks from a linear element extraction image obtained by processing a captured image and recognizing lane boundary lines. Local areas | 10-17-2013 |
20130272578 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus includes a first setting unit setting a relative position-posture relationship between a 3D-shaped model of an object and a viewpoint from which the model is observed as a base position-posture, a detector detecting geometric features of the model observed from the viewpoint in the base position-posture as base geometric features, a second setting unit setting a relative position-posture relationship between the model and a viewpoint as a reference position-posture, a retrieval unit retrieving reference geometric features corresponding to the base geometric features of the model observed from the viewpoint in the reference position-posture, a first calculation unit calculating similarity degrees between the base geometric features and the reference geometric features, and a second calculation unit calculating evaluation values of correspondences between the base geometric features and the reference geometric features in accordance with the similarity degrees. | 10-17-2013 |
20130279742 | AUTOMATIC RECORD DETECTION DEVICE AND METHOD FOR STOP ARM VIOLATION EVENT - An automatic record detection device and method for stop arm violation events, comprising: a plurality of image fetching units to fetch external video signals; at least one analog-to-digital converter to process said external video signals into digital data; and a processor unit to detect dynamic images in said digital data based on a set sensitivity value. When said dynamic image of a violating vehicle fulfills an image block number at the set sensitivity value, the processor unit determines that said dynamic image triggers a violation event and generates said digital data based on said external video signal. As such, a user can search said dynamic images of said digital data by examining said triggered violation event, to find out the license plate number of a violating vehicle and send it to the agency concerned for prosecution, thus saving enormous time and manpower in viewing and searching through said digital data. | 10-24-2013 |
20130279743 | ANOMALOUS RAILWAY COMPONENT DETECTION - A method and system for inspecting railway components. The method includes receiving a stream of images containing railway components, detecting a railway component in each image, generating a plurality of feature vectors for each railway component image, measuring the dissimilarity between the railway component and a set of railway components detected in preceding images, in a sliding window, based on the feature vectors. | 10-24-2013 |
20130279744 | SYSTEMS AND METHODS FOR CONTROLLING OUTPUT OF CONTENT BASED ON HUMAN RECOGNITION DATA DETECTION - Systems and methods for controlling output of content based on human recognition data captured by one or more sensors of an electronic device are provided. The control of the output of particular content may be based on an action of a rule defined for the particular content, and may be performed when at least one human feature detection related condition of the rule is satisfied. In some embodiments, the action may include granting access to requested content when detected human feature data satisfies at least one human feature detection related condition of a rule defined for the requested content. In other embodiments the action may include altering a presentation of content, during the presentation of the content, when detected human feature data satisfies at least one human feature detection related condition of a rule defined for the presented content. | 10-24-2013 |
20130279745 | IMAGE RECOGNITION DEVICE, IMAGE RECOGNITION METHOD, AND IMAGE RECOGNITION PROGRAM - An image recognition device includes an image acquiring unit configured to acquire an image, and an object recognition unit configured to calculate gradient directions and gradient values of intensity of the image acquired by the image acquiring unit, to scan the gradient values of each acquired gradient direction with a window, to calculate a rectangular feature value, and to recognize a target object using a classifier based on the rectangular feature value. | 10-24-2013 |
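Scanning gradient values with a window and computing a rectangular feature value, as in the entry above, is commonly made efficient with an integral image (summed-area table), which gives any rectangle's sum in constant time. This is a standard-technique sketch, not the patent's exact feature definition:

```python
import numpy as np

def integral_image(a):
    """Summed-area table: ii[r, c] = sum of a[:r+1, :c+1]."""
    return a.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of a[r0:r1, c0:c1] in O(1) using four integral-image lookups."""
    s = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        s -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        s -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s
```

With one integral image per gradient direction, each window's rectangular feature value costs four lookups regardless of window size.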
20130279746 | IMAGE RECOGNITION DEVICE, IMAGE RECOGNITION METHOD, AND IMAGE RECOGNITION PROGRAM - An image recognition device includes an image acquiring unit configured to acquire an image, and an object recognition unit configured to extract feature points from the image acquired by the image acquiring unit, to detect coordinates of the extracted feature points in a three-dimensional spatial coordinate system, and to determine a raster scan region which is used to recognize a target object based on the detection result. | 10-24-2013 |
20130279747 | FEELING-EXPRESSING-WORD PROCESSING DEVICE, FEELING-EXPRESSING-WORD PROCESSING METHOD, AND FEELING-EXPRESSING-WORD PROCESSING PROGRAM - The present approach enables an impression of the atmosphere of a scene or an object present being photographed to be pictured in a person's mind as if the person were actually at the photographed scene. A feeling-expressing-word processing device has: a feeling information calculating unit for analyzing a photographed image, and calculating feeling information which indicates a situation of a scene portrayed in the photographed image, a condition of an object present in the scene, a temporal change in the scene, or a movement of the object; a feeling-expressing-word extracting unit for extracting, from among feeling-expressing words which express feelings and are stored in a feeling-expressing-word database in association with the feeling information, a feeling-expressing word which corresponds to the feeling information calculated by the feeling information calculating unit; and a superimposing unit for superimposing the feeling-expressing word extracted by the feeling-expressing-word extracting unit on the photographed image. | 10-24-2013 |
20130279748 | OBJECT IDENTIFICATION USING OPTICAL CODE READING AND OBJECT RECOGNITION - An object identification system comprises an optical code reader that scans an optical code of an object and decodes a portion of the optical code. Using the decoded portion of the optical code, a database filter unit generates a filtered subset of feature models from a set of feature models of known objects stored in a database. An image capture device captures an image of the object, and a feature detector unit detects visual features in the image. A comparison unit compares the detected visual features to the filtered subset of feature models to identify a match between the object and a known object. | 10-24-2013 |
20130279749 | SYSTEM AND METHOD FOR DETECTING TARGET RECTANGLES IN AN IMAGE - A system and method for detecting target rectangles in an image acquired from a camera or computer file in which average pixel values from small horizontal and vertical rectangles called sub-exlons are computed from an edge-detected and binarized version of the image. The sub-exlons are then used to scan for exlons, which are combinations of horizontal and vertical sub-exlons that make up the four possible types of corner (upper left, upper right, lower left, and lower right). Possible corners that are located are then matched up and a rectangle is confirmed by scanning for properly oriented horizontal and vertical connecting lines composed from the previously computed sub-exlons. All confirmed rectangles are stored in an array of found rectangles, which is then processed to filter out rectangles of the wrong aspect ratio. Finally, the most prevalent concentric group of rectangles is located by rejecting rectangles whose geometric center is too far from the center of mass of the entire group of located rectangles. This final group of concentric rectangles is then used to automatically target a robotic shooting mechanism or to provide feedback to robot operators for manual targeting. | 10-24-2013 |
20130279750 | IDENTIFICATION OF FOREIGN OBJECT DEBRIS - System and method for identification of foreign object debris, FOD, in a sample, based on comparison of edge features identified in images of the sample taken at a reference point in time and at a later time (when FOD may already be present). The rate of success of identification of the FOD is increased by compensation for relative movement between the imaging camera and the sample, which may include not only processing the sample's image by erosion of the imaging data but also a preceding spatial widening of edge features that may be indicative of FOD. | 10-24-2013 |
20130279751 | KEYPOINT UNWARPING - Apparatus and methods to unwarp at least portions of distorted, electronically-captured images are described. Keypoints, instead of an entire image, may be unwarped and used in various machine-vision algorithms, such as object recognition, image matching, and 3D reconstruction algorithms. When using unwarped keypoints, the machine-vision algorithms may perform reliably irrespective of distortions that may be introduced by one or more image capture systems. | 10-24-2013 |
20130279752 | AUTOMATED IMAGING OF PREDETERMINED REGIONS IN SERIES OF SLICES - A method for the magnified depiction of samples is disclosed. At least two sections from a sample, which are present on at least one sample carrier, are depicted in magnified form using an apparatus for the magnified depiction of samples. The sample carrier is connected to the apparatus via a sample carrier holder. The position of the depicted sample carrier regions in relation to the apparatus and the magnification stage used are recorded. At least one selected feature contained in the image information from the sections depicted in magnified form is used to define local coordinate systems, which are specific to the respective section, for the at least two sections depicted in magnified form. | 10-24-2013 |
20130279753 | Distance-Varying Illumination and Imaging Techniques for Depth Mapping - A method for mapping includes projecting a pattern onto an object ( | 10-24-2013 |
20130279754 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 10-24-2013 |
20130279755 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM - The present invention includes: a database which stores a position on a map and feature information in an image which can be taken by an imaging device at the position, to be associated with each other; an extraction means for extracting the feature information from the image; an estimation means for estimating the position at which the imaging device exists on a map on the basis of the extracted feature information referring to the database; a display means for displaying an estimated current position of the imaging device; a determination means for determining whether or not an imaging direction of the imaging device is varied by a predetermined amount from a direction in which the image from which the feature information is extracted is taken during the imaging; and a control means for controlling the extraction means so that new feature information is extracted, upon determining that the direction is varied by the predetermined amount; wherein the estimation means combines the new feature information and the extracted feature information and re-estimates the position on the map at which the imaging device exists when the new feature information is extracted. | 10-24-2013 |
20130279756 | COMPUTER VISION BASED HAND IDENTIFICATION - There is provided a method for computer vision based hand identification, the method comprising: obtaining an image of an object; detecting in the image at least two different types of shape features of the object; obtaining information of each type of shape feature; combining the information of each type of shape feature to obtain combined information; and determining that the object is a hand based on the combined information. | 10-24-2013 |
20130287248 | REAL-TIME VIDEO TRACKING SYSTEM - A method for detecting and tracking a target includes detecting the target using a plurality of feature cues, fusing the plurality of feature cues to form a set of target hypotheses, tracking the target based on the set of target hypotheses and a scene context analysis, and updating the tracking of the target based on a target motion model. | 10-31-2013 |
20130287249 | SYSTEM AND METHOD FOR DETECTING MOVING OBJECTS USING VIDEO CAPTURING DEVICE - Provided is a system for detecting moving objects. The system includes a video capturing device and a detection unit. The video capturing device captures “n” pieces of consecutive images during a time period, where “n” represents a positive integer. The detection unit selects one of the images as a reference image and processes the other n−1 pieces of images. The detection unit differentiates the n−1 pieces of images relative to the reference image, grays the differentiated n−1 pieces of images, binarizes the grayed n−1 pieces of images, blurs the binarized n−1 pieces of images, dilates the blurred n−1 pieces of images, and detects edges from the dilated n−1 pieces of images. | 10-31-2013 |
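The core of the pipeline above (difference against a reference frame, binarize, dilate) can be sketched in numpy. This minimal version assumes grayscale inputs, so the gray/blur/edge stages are omitted; names and the threshold are illustrative:

```python
import numpy as np

def dilate3x3(binary):
    """Binary dilation with a 3x3 square structuring element."""
    rows, cols = binary.shape
    padded = np.pad(binary, 1)
    out = np.zeros_like(binary)
    for dr in range(3):
        for dc in range(3):
            out |= padded[dr:dr + rows, dc:dc + cols]
    return out

def motion_masks(frames, ref_index=0, threshold=30):
    """Difference each frame against the reference frame, binarize,
    and dilate, returning one motion mask per non-reference frame."""
    ref = frames[ref_index].astype(np.int32)
    masks = []
    for i, frame in enumerate(frames):
        if i == ref_index:
            continue
        diff = np.abs(frame.astype(np.int32) - ref)   # differentiate
        binary = (diff >= threshold).astype(np.uint8)  # binarize
        masks.append(dilate3x3(binary))                # dilate
    return masks
```

An edge detector (e.g. the gradient sketch given for another entry) would then run on the dilated masks to outline moving objects.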
20130287250 | METHOD AND APPARATUS FOR TRACKING OBJECT IN IMAGE DATA, AND STORAGE MEDIUM STORING THE SAME - Disclosed is a system for tracking an object in an image. A method for tracking an object in an image according to an exemplary embodiment of the present invention includes generating an object model represented by multiple patch histograms, in which the object is divided into N partial patch regions and a histogram is built from each patch region; estimating the probability of each image pixel being an object pixel; and determining the most promising location of the object in the image using the estimated object probability values. According to the exemplary embodiment of the present invention, it is possible to improve separability from the background compared with a single-histogram model, to increase tracking performance, and to search the object region more accurately than the mean-shift method of the related art. | 10-31-2013 |
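The per-pixel object-probability step above is essentially histogram back-projection. A single-histogram sketch illustrates the idea (the entry builds one histogram per patch; bin count and names here are assumptions):

```python
import numpy as np

def object_probability_map(image, object_pixels, bins=8):
    """Back-project a histogram built from known object pixels to get a
    per-pixel estimate of P(object | pixel intensity)."""
    hist, _ = np.histogram(object_pixels, bins=bins, range=(0, 256))
    prob = hist / max(hist.sum(), 1)
    # Map each pixel intensity to its histogram bin, then look up its probability.
    bin_idx = np.clip(np.asarray(image, dtype=np.int64) * bins // 256, 0, bins - 1)
    return prob[bin_idx]
```

The tracker would combine N such maps (one per patch histogram) and pick the location maximizing the aggregated probability.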
20130287251 | IMAGE RECOGNITION DEVICE, IMAGE RECOGNITION METHOD, AND IMAGE RECOGNITION PROGRAM - An image recognition device includes an image acquiring unit configured to acquire an image, and an object recognition unit configured to calculate gradient directions and gradient values of intensity of the image acquired by the image acquiring unit, to scan the gradient values of each acquired gradient direction with windows, calculate a rectangular feature value, and extract a window in which a target object is recognized to be present using a classifier based on the calculated rectangular feature value through the use of a first recognition unit, and to calculate a predetermined feature value from the window extracted by the first recognition unit and recognize the target object using a classifier based on the predetermined feature value through the use of a second recognition unit. | 10-31-2013 |
20130287252 | Computer Vision Based Method for Extracting Features Relating to the Developmental Stages of Trichuris Spp. Eggs - There is provided a computer vision based method for extracting features relating to the developmental stages of Trichuris spp. eggs. | 10-31-2013 |
20130287253 | REVERSE GEO-CODING FOR TRACK PATH - Methods and systems are disclosed for associating non-geographical information to track paths. Among other things, meaningful labels for the track paths can be formulated. In one aspect, a method performed by an application executing on a computer system includes receiving a set of images taken during a trip, a corresponding set of acquisition times, and a track path of the trip. The method further includes identifying landmarks near the received track path. Furthermore, the method includes receiving from a human user of the application a landmark selection from the identified landmarks and one or more image selections from the received set of images. In response to receiving the human user's selections, the method can associate the one or more selected images with the selected landmark. Additionally, the method includes matching the received set of images to the received track path based on the association. | 10-31-2013 |
20130287254 | Method and Device for Detecting an Object in an Image - A method for detecting at least one object in an image including a pixel array, by means of an image processing device, including searching out the silhouette of the object in the image only if pixels of the image are at the minimum or maximum level. | 10-31-2013 |
20130287255 | METHOD AND APPARATUS OF DETECTING REAL SIZE OF ORGAN OR LESION IN MEDICAL IMAGE - A method of detecting a real size of an object from a medical image is provided. The method includes: receiving depth information of the object, which is a distance between the object and the skin of a patient; measuring a distance between the pinhole and the skin; detecting a magnified size of the object from the medical image; and calculating the real size of the object based on the depth information of the object, the distance between the pinhole and the skin, the magnified size of the object, and the distance between the pinhole and the scintillator. | 10-31-2013 |
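The calculation described above follows pinhole-camera similar triangles: the magnification is the pinhole-to-detector (scintillator) distance divided by the pinhole-to-object distance, where the latter is the pinhole-to-skin distance plus the object's depth below the skin. A sketch under those assumptions (all variable names are hypothetical):

```python
def real_object_size(magnified_size, depth_below_skin,
                     pinhole_to_skin, pinhole_to_scintillator):
    """Recover an object's real size from its magnified size in a
    pinhole-geometry image, via similar triangles."""
    pinhole_to_object = pinhole_to_skin + depth_below_skin
    magnification = pinhole_to_scintillator / pinhole_to_object
    return magnified_size / magnification
```

For example, an object imaged at 20 mm with the detector twice as far from the pinhole as the object measures 10 mm in reality.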
20130287256 | METHOD AND SYSTEM FOR REAL TIME IMAGE RECOGNITION ON A MOBILE DEVICE - The various embodiments herein provide a method and system for real time image searching on a mobile device. The method comprises installing an image recognition application in the mobile device, capturing one or more images using the mobile device and recognizing a plurality of images in successive frames by ranking one or more feature points of the captured images through the image recognition application. The ranking of feature points is performed by generating a random forest for the images, obtaining a plurality of feature points in the captured images using a feature based method, matching the images captured through the mobile device with the plurality of images stored in the random forest, designating a rank for the tracked feature points in the images, determining the stable features of the images, recognizing the matched image based on stable features and delivering the content based on the recognized object. | 10-31-2013 |
20130287257 | FOREGROUND SUBJECT DETECTION - Classifying pixels in a digital image includes receiving a primary image from a primary image sensor. The primary image includes a plurality of primary pixels. Depth information from a depth sensor is also received. The depth information and the primary image are cooperatively used to identify whether a primary pixel images a foreground subject or a background subject. | 10-31-2013 |
20130287258 | CODE SYMBOL READING SYSTEM - A digital-imaging based system reads graphical indicia, including code symbols, on objects such as, but not limited to, code symbol menus by the user pointing his or her finger at the particular code symbol on the code symbol menu to be read, while digital images of the code symbol menu and the pointing finger are automatically captured, buffered and processed. | 10-31-2013 |
20130287259 | IMAGE PROCESSING DEVICE, IMAGE CAPTURING DEVICE, AND IMAGE PROCESSING METHOD - An image processing device for tracking, in a second image captured after a first image, a subject included in the first image includes: a segmentation unit that divides the first image into a plurality of segments based on similarity in pixel values; an indication unit that indicates a position of the subject in the first image; a region setting unit that sets, as a target region, a region including at least an indicated segment which is a segment at the indicated position; an extraction unit that extracts a feature amount from the target region; and a tracking unit that tracks the subject by searching the second image for a region similar to the target region using the extracted feature amount. | 10-31-2013 |
20130287260 | X-RAY CT APPARATUS, SUBSTANCE IDENTIFYING METHOD, AND IMAGE PROCESSING APPARATUS - An X-ray CT apparatus of an embodiment displays an image of the inside of an object based on projection data obtained by scanning the object, and comprises a generator, a converter, an image forming part and an identifier. The generator scans the object with each of X-rays of different energy levels and generates multiple projection data. The converter converts the multiple projection data into multiple new projection data corresponding to multiple reference substances. The image forming part reconstructs each of the multiple new projection data converted by the converter, thereby forming multiple reference substance images corresponding to the multiple reference substances. The identifier identifies a target substance based on a correlation of pixel values in the multiple reference substance images. | 10-31-2013 |
20130294642 | AUGMENTING VIDEO WITH FACIAL RECOGNITION - A video segment including interactive links to information about an actor appearing in the segment may be prepared in an automatic or semi-automatic process. A computer may detect an actor's face appearing in a frame of digital video data by processing the video file with a facial detection algorithm. A user-selectable link may be generated and activated along a track of the face through multiple frames of the video data. The user-selectable link may include a data address for obtaining additional information about an actor identified with the face. The video data may be associated with the user-selectable link and stored in a computer memory. When later viewing the video segment via a media player, a user may select the link to obtain further information about the actor. | 11-07-2013 |
20130294643 | TIRE DETECTION FOR ACCURATE VEHICLE SPEED ESTIMATION - In some aspects of the present application, a computer-implemented method for determining the speed of a motor vehicle in a vehicle speed detection system is disclosed. The method can include receiving a plurality of images of a motor vehicle traveling on a road, each of the images being separated in time by a known interval; determining, for each of at least two of the images, a point of contact where a same tire of the vehicle contacts a surface of the road based, in part, on one or more identified features of the vehicle in one or more of the plurality of images; and using the points of contact and the time interval separations to calculate a speed at which the vehicle is traveling on the road. | 11-07-2013 |
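Once the tire's road-contact point has been located in each frame, the speed calculation described above reduces to distance over time. A sketch assuming a known image-to-ground scale factor (the calibration input `metres_per_pixel` and all names are illustrative):

```python
import math

def vehicle_speed_mps(contact_points, frame_interval_s, metres_per_pixel):
    """Average speed from the same tire's road-contact point in
    successive frames, given in image (pixel) coordinates."""
    steps = list(zip(contact_points, contact_points[1:]))
    pixels = sum(math.dist(a, b) for a, b in steps)  # total pixel displacement
    return pixels * metres_per_pixel / (frame_interval_s * len(steps))
```

Using the contact point rather than an arbitrary body feature avoids perspective-induced error, since the contact point lies on the known road plane.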
20130294644 | SYSTEM AND METHOD FOR REPAIRING COMPOSITE PARTS - A composite repair system and method for assisting in the repair of a cured composite part in which a damaged portion has been cut out and removed, exposing a plurality of composite plies and their cut edges, which are then taper sanded to expose a plurality of taper-sanded surfaces and their corresponding ply boundaries. The ply boundaries may be traced by a user with a marking device. The composite repair system may comprise an image capturing device to obtain an image of the traced ply boundaries and a computing device for creating a map of the traced ply boundaries based on the image. The map may be used to manufacture filler plies having peripheral edges shaped to correspond with the ply boundaries for replacing the damaged portion of the composite part. | 11-07-2013 |
20130294645 | METHOD AND APPARATUS FOR SINGLE-PARTICLE LOCALIZATION USING WAVELET ANALYSIS - Accurate localization of isolated particles is important in single particle based super-resolution microscopy. It allows the imaging of biological samples with nanometer-scale resolution using a simple fluorescence microscopy setup. Nevertheless, conventional techniques for localizing single particles can take minutes to hours of computation time because they require up to a million localizations to form an image. In contrast, the present particle localization techniques use wavelet-based image decomposition and image segmentation to achieve nanometer-scale resolution in two dimensions within seconds to minutes. This two-dimensional localization can be augmented with localization in a third dimension based on a fit to the imaging system's point-spread function (PSF), which may be asymmetric along the optical axis. For an astigmatic imaging system, the PSF is an ellipse whose eccentricity and orientation varies along the optical axis. When implemented with a mix of CPU/GPU processing, the present techniques are fast enough to localize single particles while imaging (in real-time). | 11-07-2013 |
20130294646 | METHOD AND SYSTEM FOR ANALYZING INTERACTIONS - Techniques are provided for tracking different types of subjects and labeling the tracks according to subject type. In an implementation, the tracking includes tracking first and second subject types using video, and also tracking subjects of the first type using Wi-Fi tags provided to the subjects of the first type. The video and Wi-Fi tracks can be compared in order to identify and label which video tracks are associated with subjects of the first type and which video tracks are associated with subjects of the second type. Upon the tracks having been labeled, interactions between the different subject types can be identified and analyzed. | 11-07-2013 |
20130294647 | VISUAL MONITORING - Systems, methods, and computer program products for monitoring events. For example, a system for monitoring events, may comprise reporting and alerting tools operable to determine one or more values of one or more parameters associated with events, to attempt to match one or more accessible images to the one or more determined values, and to enable display of one or more images which matched the one or more determined values with events. | 11-07-2013 |
20130294648 | INTUITIVE COMPUTING METHODS AND SYSTEMS - A smart phone senses audio, imagery, and/or other stimulus from a user's environment, and acts autonomously to fulfill inferred or anticipated user desires. In one aspect, the detailed technology concerns phone-based cognition of a scene viewed by the phone's camera. The image processing tasks applied to the scene can be selected from among various alternatives by reference to resource costs, resource constraints, other stimulus information (e.g., audio), task substitutability, etc. The phone can apply more or less resources to an image processing task depending on how successfully the task is proceeding, or based on the user's apparent interest in the task. In some arrangements, data may be referred to the cloud for analysis, or for gleaning. Cognition, and identification of appropriate device response(s), can be aided by collateral information, such as context. A great number of other features and arrangements are also detailed. | 11-07-2013 |
20130294649 | Mobile Image Search and Indexing System and Method - A computer-implemented system and method are described for image searching and image indexing that may be incorporated in a mobile device that is part of an object identification system. A computer-implemented system and method relating to a MISIS client and MISIS server that may be associated with a mobile pointing and identification system for the searching and indexing of objects in in situ images in geographic space taken from the perspective of a system user located near the surface of the Earth, including horizontal, oblique, and airborne perspectives. | 11-07-2013 |
20130294650 | IMAGE GENERATION DEVICE - An image generation device includes: an object information obtaining unit which obtains a location of an object; an image information obtaining unit which obtains images captured from a moving object and locations of the moving object of a time when the respective images are captured; a traveling direction obtaining unit which obtains directions of travel of the moving object of the time when the respective images are captured; and an image cropping unit which calculates a direction of view covering both a direction from a location of the moving object toward the location of the object and one of a direction of travel of the moving object and an opposite direction to the direction of travel, and crops an image, which is one of the images, into a cropped image, which is a portion of an angle of view of the image, based on the calculated direction of view. | 11-07-2013 |
20130294651 | SYSTEM AND METHOD FOR GESTURE RECOGNITION - A system and method for gesture spotting and recognition are provided. Systems and methods are also provided employing Hidden Markov Models (HMM) and geometrical feature distributions of a hand trajectory of a user to achieve adaptive gesture recognition. The system and method provide for acquiring a sequence of input images of a specific user and recognizing a gesture of the specific user from the sequence of input images based on a gesture model and geometrical features extracted from a hand trajectory of the user. State transition points of the gesture model are detected and the geometrical features of the hand trajectory of the user are extracted based on the relative positions of the detected state transition points and a starting point of the gesture. The system and method further provide for adapting the gesture model and geometrical feature distribution for the specific user based on adaptation data. | 11-07-2013 |
20130301873 | BALLOT ADJUDICATION IN VOTING SYSTEMS UTILIZING BALLOT IMAGES - Methods, systems, and devices are described for adjudicating votes made on voter-marked paper ballots. Voter-marked paper ballots may be scanned to obtain optical image data of the voter-marked paper ballots. The optical image may be analyzed to determine the votes contained in the ballot for tabulation purposes. One or more votes on the ballot may be identified as requiring adjudication by an election official. Adjudication information, according to various embodiments, is appended to the optical images of the voter-marked paper ballots such that the image of the ballot and the image of the adjudication information may be viewed in an optical image. The optical image may be stored in a file format that allows the ballot image and the appended adjudication information to be viewed using readily available image viewers. | 11-14-2013 |
20130301874 | METHOD AND SYSTEM FOR REALTIME DE-DUPLICATION OF OBJECTS IN AN ENTITY-RELATIONSHIP GRAPH - Method, system, and programs for realtime de-duplication of objects. A received object is hashed to generate a hashed object, which is then used to generate a query for an inverted index. Candidate matching objects are determined based on the query of the inverted index. From the candidate matching objects, a matched object that corresponds to the received object is determined. | 11-14-2013 |
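The entry above describes the classic pipeline of hashing a received object, querying an inverted index for candidates, and selecting the best match. A minimal sketch of that pipeline, assuming character shingles as the hash features and Jaccard similarity as the matching criterion (both illustrative choices, not the patent's actual method):

```python
from collections import defaultdict

def shingles(text, k=3):
    """Hash an object into a set of k-character shingle hashes."""
    t = text.lower()
    return {hash(t[i:i + k]) for i in range(max(1, len(t) - k + 1))}

class Deduper:
    """Toy de-duplicator: an inverted index from shingle hash -> object ids."""
    def __init__(self, threshold=0.5):
        self.index = defaultdict(set)   # shingle hash -> set of object ids
        self.objects = {}               # object id -> shingle set
        self.threshold = threshold

    def add(self, obj_id, text):
        s = shingles(text)
        self.objects[obj_id] = s
        for h in s:
            self.index[h].add(obj_id)

    def match(self, text):
        """Query the inverted index for candidate matching objects, then
        pick the best by Jaccard similarity; None if below threshold."""
        s = shingles(text)
        candidates = set()
        for h in s:
            candidates |= self.index.get(h, set())
        best, best_sim = None, 0.0
        for c in candidates:
            o = self.objects[c]
            sim = len(s & o) / len(s | o)
            if sim > best_sim:
                best, best_sim = c, sim
        return best if best_sim >= self.threshold else None
```

The inverted index keeps matching near-constant in the number of stored objects: only objects sharing at least one hashed feature with the query are ever scored.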
20130301875 | AUGMENTED REALITY VIRTUAL AUTOMOTIVE X-RAY HAVING SERVICE INFORMATION - A tool for providing a user with information on a particular object related to a position and an orientation of the tool with respect to the object includes an image capturing device configured to capture an image of the object. The tool further includes a position and orientation sensor configured to determine the position of the tool with respect to the object, a processor configured to determine from the image the type of object, a display configured to display the image of the object, the display further configured to display additional information in addition to the image of the object in response to the determination of the type of object, and the processor further configured to determine a change in one of the position and the orientation of the sensor and the tool and further configured to modify the display. | 11-14-2013 |
20130301876 | VIDEO ANALYSIS | 11-14-2013 |
20130301877 | IMAGE READING OUT CONTROL APPARATUS, IMAGE READING OUT CONTROL METHOD THEREOF AND STORAGE MEDIUM - An image reading out control apparatus comprises: a detection unit configured to detect an object on an image captured by an image capturing unit; and a control unit configured to control an interval to extract images of a region including the object from the image capturing unit, according to the moving speed of the object detected by the detection unit. | 11-14-2013 |
20130301878 | SYSTEM AND METHOD OF BOOK LEAF TRACKING - A method of book leaf tracking comprises receiving a video image comprising a book, estimating the current position and orientation of the book within the video image in response to a fiduciary marker of the book visible in the image, estimating the visibility of one or more predetermined features of the book, calculating a range of leaf turning angles that is consistent with the detected visibility of the or each predetermined feature of the book for the estimated current position and orientation of the book, and estimating the angle of a turning leaf of the book responsive to the calculated range. | 11-14-2013 |
20130301879 | OPERATING A COMPUTING DEVICE BY DETECTING ROUNDED OBJECTS IN AN IMAGE - A method is disclosed for operating a computing device. One or more images of a scene captured by an image capturing device of the computing device is processed. The scene includes an object of interest that is in motion and that has a rounded shape. The one or more images are processed by detecting a rounded object that corresponds to the object of interest. Position information is determined based on a relative position of the rounded object in the one or more images. One or more processes are implemented that utilize the position information determined from the relative position of the rounded object. | 11-14-2013 |
20130301880 | DISPLACEMENT DETECTION APPARATUS AND METHOD - A displacement detection method includes the steps of: capturing a first frame and a second frame; selecting a first block with a predetermined size in the first frame and selecting a second block with the predetermined size in the second frame; determining a displacement according to the first block and the second block; comparing the displacement with at least one threshold; and adjusting the predetermined size according to a comparison result of comparing the displacement and the threshold. The present invention further provides a displacement detection apparatus. | 11-14-2013 |
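The displacement-detection steps above (match a block between two frames, compare the displacement with a threshold, adjust the block size) could be sketched as follows. Sum-of-absolute-differences matching and the specific grow/shrink rule are illustrative assumptions:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def block(frame, y, x, size):
    return [row[x:x + size] for row in frame[y:y + size]]

def displacement(frame1, frame2, y, x, size, search=2):
    """Find the (dy, dx) shift whose block in frame2 best matches the
    reference block in frame1, by exhaustive search over a window."""
    ref = block(frame1, y, x, size)
    best, best_cost = (0, 0), float("inf")
    h, w = len(frame2), len(frame2[0])
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= h - size and 0 <= xx <= w - size:
                cost = sad(ref, block(frame2, yy, xx, size))
                if cost < best_cost:
                    best, best_cost = (dy, dx), cost
    return best

def adjust_block_size(size, disp, threshold=1, lo=2, hi=8):
    """Grow the block when the displacement exceeds the threshold,
    shrink it otherwise, mirroring the adaptive step in the abstract."""
    dy, dx = disp
    if abs(dy) > threshold or abs(dx) > threshold:
        return min(hi, size + 2)
    return max(lo, size - 2)
```

A larger block is more robust to noise under fast motion, while a smaller block is cheaper and more precise when the scene is nearly still, which is the motivation for tying block size to the measured displacement.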
20130301881 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing apparatus including a dynamic body detecting unit for detecting a dynamic body contained in a moving image, a dynamic body region setting unit for, during a predetermined time from a time point the dynamic body is detected by the dynamic body detecting unit, setting a region containing the dynamic body at the detection time point as a dynamic body region, and a fluctuation removable processing unit for performing a fluctuation removal process on a region other than the dynamic body region set by the dynamic body region setting unit. | 11-14-2013 |
20130301882 | ORIENTATION STATE ESTIMATION DEVICE AND ORIENTATION STATE ESTIMATION METHOD - Disclosed is an orientation state estimation device capable of estimating with high accuracy the orientation state of a jointed body. An orientation state estimation device ( | 11-14-2013 |
20130308820 | MOTION DETECTION THROUGH STEREO RECTIFICATION - A motion detecting engine is provided. Given a pair of stereo rectified images in which the stereo rectified images are taken at different times from one or more sensors that are oriented perpendicular to a stereo baseline and parallel to each other, for each feature in one of the stereo rectified images, the motion detecting engine associates a subject feature with the same feature in the other stereo rectified image to form a feature association. For each feature association, the motion detecting engine forms a feature motion track following a subject feature association from one of the stereo rectified images to the other stereo rectified image. The motion detecting engine then differentiates feature motion tracks from other feature motion tracks that are parallel to the stereo baseline. The feature motion tracks being differentiated by the motion detecting engine represent detected objects that are moving with respect to the ground. | 11-21-2013 |
20130308821 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing device includes a processor, and a memory which stores an instruction, which when executed by the processor, causes the processor to execute an operation including obtaining an image including information on a traveling direction of a vehicle and information of side regions of the image relative to the traveling direction. The operation includes reducing a size of an arbitrary region of the image nonlinearly toward a center of the image from side ends of one of the regions of the image, extracting feature points from the arbitrary region, calculating traveling amounts of the feature points included in a plurality of arbitrary regions which are obtained at different timings and determining an approaching object which approaches the vehicle in accordance with the traveling amounts of the feature points. | 11-21-2013 |
20130308822 | METHOD AND SYSTEM FOR CALCULATING THE GEO-LOCATION OF A PERSONAL DEVICE - The method comprises performing said calculation by using data provided by an image recognition process which identifies at least one geo-referenced image of an object located in the surroundings of said personal device. The system is arranged for implementing the method of the present invention. | 11-21-2013 |
20130308823 | Method For Quantitatively Determining Eyelash Clumping - A method for quantifying clumping in fibrous materials is disclosed herein. In some examples, the method may be utilized for quantifying clumping in keratinous materials such as eyelashes when a composition such as mascara is applied thereto. | 11-21-2013 |
20130308824 | DETECTION OF DISEASE-RELATED RETINAL NERVE FIBER LAYER THINNING - Methods, apparatuses, and computer readable media for detecting abnormalities in a characteristic of an eye using eye-imaging methods are presented. A plurality of images of the eye are received over time. Each image includes a plurality of pixels, which can be partitioned into blocks of pixels with varying sizes, called pixel partitions. A value is determined for each pixel partition, e.g., an average of the pixel values. A pixel partition set may be identified, which includes a pixel partition from each image, corresponding to a common region of a patient's eye. A regression model is computed for each pixel partition set using the values determined for each pixel partition. The regression model computes a rate of change of the retinal nerve fiber thickness at individual pixel partitions over time. An abnormality may be identified by comparing the rates of change of the model and the expected age-related rate of change. | 11-21-2013 |
20130308825 | CAPTURED IMAGE RECOGNITION DEVICE, CAPTURED IMAGE RECOGNITION SYSTEM, AND CAPTURED IMAGE RECOGNITION METHOD - Provided is a captured image recognition device that enables the performance of an image recognition function to be sufficiently evinced. A field-of-view splitting estimation unit ( | 11-21-2013 |
20130315440 | Fiducial Tracking System - The present disclosure is directed to fiducial tracking system. The fiducial tracking system includes a first device having a fiducial pattern disposed thereon and a second device having an image capture device disposed thereon. The image capturing device is configured to obtain a fiducial image of the fiducial pattern. A controller is also provided that is configured to receive the fiducial image, correct the fiducial image for lens distortion, find correspondence between the fiducial image and a model image, estimate a camera pose, and transform a position of the second device to model coordinates. | 11-28-2013 |
20130315441 | SYSTEM FOR EXTRACTING TEXT FROM A DOCUMENT - A system including a data processing system, a network interface for communicating over a network, and a program memory storing instructions configured to cause the data processing system to implement a method for extracting textual information from images of a document containing text characters. The method includes receiving a plurality of digital images of the document over the network. Each of the captured digital images is automatically analyzed using an optical character recognition process to determine extracted textual data. The extracted textual data for the captured digital images are merged to determine the textual information for the document, wherein differences between the extracted textual data for the captured digital images are analyzed to determine the textual information for the document. | 11-28-2013 |
20130315442 | OBJECT DETECTING APPARATUS AND OBJECT DETECTING METHOD - According to an embodiment, an object detecting apparatus includes an image acquiring unit and a determining unit. The image acquiring unit is configured to acquire a target image within a second range included in a first range. The determining unit is configured to determine whether an object not captured in a reference image captured by the image capturing unit while the image capturing unit moves within the first range is captured in the target image, on the basis of a difference between each frequency of pixel values in a histogram for a first region and each frequency of pixel values in a second region of the target image corresponding to the first region, the first region being one of regions each extending in a direction of blurring caused by movement. | 11-28-2013 |
20130315443 | VEHICULAR PARKING CONTROL SYSTEM AND VEHICULAR PARKING CONTROL METHOD USING THE SAME - Provided are a vehicular parking control system capable of removing temporary obstacles from an image of objects within a parking space so that an available parking space can be searched for, and a vehicular parking control method using the same. The vehicular parking control system includes: a camera configured to acquire an image of a parking space with reference to a position of a personal car; a sensing unit configured to sense an object in the parking space; and an electronic control unit configured to search for an available parking space by comparing an image pattern of an object within the image of the parking space acquired from the camera with a preset reference image pattern, identifying the type of the object, and removing a contour of the object, the type of which has been identified as a temporary obstacle, from contours of objects in the parking space corresponding to a sensing signal sensed by the sensing unit. | 11-28-2013 |
20130315444 | STATIONARY TARGET DETECTION BY EXPLOITING CHANGES IN BACKGROUND MODEL - A computer-implemented method for processing one or more video frames may include obtaining one or more video frames; generating one or more blobs using the one or more video frames; classifying the one or more blobs to produce one or more classified blobs, wherein the one or more classified blobs include one or more of a stationary target, a moving target, a target insertion, a target removal, or a local change; and constructing a list of detected targets based on the one or more classified blobs. | 11-28-2013 |
20130315445 | SYSTEMS, METHODS AND APPARATUS FOR PROVIDING CONTENT BASED ON A COLLECTION OF IMAGES - Systems, methods, articles of manufacture and apparatus provide for an augmented media experience. In some embodiments, the recognition of an image (e.g., by a mobile device and/or a central server) results in providing at least one associated media file to a user (e.g., via a display device). | 11-28-2013 |
20130322682 | Profiling Activity Through Video Surveillance - Embodiments of the invention relate to profiling activity. Content is captured and keywords are identified in the captured content. In response to the keyword identification, rules associated with the keywords are identified. These rules are employed to identify and capture relevant content in real-time. | 12-05-2013 |
20130322683 | CUSTOMIZED HEAD-MOUNTED DISPLAY DEVICE - Customizing a head-mounted display device including a see-through display includes receiving a plurality of fit points of a user. The head-mounted display device may be assembled based on the fit points such that exit pupils of the see-through display are substantially aligned with pupils of the user when the head-mounted display device is worn by the user. | 12-05-2013 |
20130322684 | SURVEILLANCE INCLUDING A MODIFIED VIDEO DATA STREAM - Systems and computer program products provide surveillance including a modified video data stream. The systems and products include computer readable program code, when read by a processor, that is configured for receiving at an image processor a first video data stream and a second video data stream, each of the first and second video data streams may include a target object having an assigned tracking position tag. The code further includes extracting a first facial image of the target object from the first video data stream, determining a target object location in the second video data stream based at least in part on the tracking position tag and generating a modified video data stream including the first facial image superimposed on or adjacent to the target object location in the second video data stream. | 12-05-2013 |
20130322685 | SYSTEM AND METHOD FOR PROVIDING AN INTERACTIVE SHOPPING EXPERIENCE VIA WEBCAM - A system and method for providing an interactive shopping experience via webcam is disclosed. A particular embodiment includes enabling a user to select from a plurality of items of virtual apparel; obtaining an image of a user via a web-enabled camera (webcam); using a data processor to perform facial detection on the image to isolate an image of a face of the user; estimating the user's position according to a position and a size of the image of the user's face; modifying an image corresponding to the selected item of virtual apparel based on the size of the image of the user's face; and auto-fitting the modified image corresponding to the selected item of virtual apparel to the image of the user's face. | 12-05-2013 |
20130322686 | Profiling Activity Through Video Surveillance - Embodiments of the invention relate to profiling activity. Content is captured and keywords are identified in the captured content. In response to the keyword identification, rules associated with the keywords are identified. These rules are employed to identify and capture relevant content in real-time. | 12-05-2013 |
20130322687 | SURVEILLANCE INCLUDING A MODIFIED VIDEO DATA STREAM - Methods provide surveillance, including a modified video data stream, with computer readable program code, when read by a processor, that is configured for receiving at an image processor a first video data stream and a second video data stream. Each of the first and second video data streams may include a target object having an assigned tracking position tag. The methods may further include extracting a first facial image of the target object from the first video data stream, determining a target object location in the second video data stream based at least in part on the tracking position tag and generating a modified video data stream including the first facial image superimposed on or adjacent to the target object location in the second video data stream. | 12-05-2013 |
20130322688 | PERIODIC STATIONARY OBJECT DETECTION SYSTEM AND PERIODIC STATIONARY OBJECT DETECTION METHOD - A periodic stationary object detection system extracts a feature point of a three-dimensional object from image data on a predetermined region of a bird's eye view image for each of multiple sub regions included in the predetermined region, calculates waveform data corresponding to a distribution of the feature points in the predetermined region on the bird's eye view image, and judges whether or not the three-dimensional object having the extracted feature point is a periodic stationary object candidate on the basis of whether or not peak information of the waveform data is equal to or larger than a predetermined threshold value. | 12-05-2013 |
20130322689 | Intelligent Logo and Item Detection in Video - Techniques to follow objects in a video. An object detector detects the object, and an object tracker follows that object even when the detectable part cannot be seen. The object can be tagged in its display. The object can be individual team members in a video showing sports. A color filter that is based on colors of a uniform of a team of the team member can be used to restrict an area of said automated object detection. | 12-05-2013 |
20130322690 | SITUATION RECOGNITION APPARATUS AND METHOD USING OBJECT ENERGY INFORMATION - A situation recognition apparatus and method analyzes an image to convert a position and motion change rate of an object in a space and an object number change rate into energy information, and then changes the energy information into entropy in connection with an entropy theory of a measurement theory of a disorder within a space. Accordingly, the situation recognition apparatus and method recognizes an abnormal situation in the space and issues a warning for the recognized abnormal situation. Therefore, the situation recognition apparatus and method recognizes an abnormal situation within a space, thereby effectively preventing or perceiving a real-time incident at an early stage. | 12-05-2013 |
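The entry above converts per-object motion into "energy" and then into entropy to flag abnormal situations. A minimal sketch of that idea, where the energy definition (squared displacement per track) and the entropy threshold are illustrative assumptions:

```python
import math

def shannon_entropy(values):
    """Shannon entropy (bits) of a distribution of per-object energies."""
    total = sum(values)
    if total == 0:
        return 0.0
    ent = 0.0
    for v in values:
        if v > 0:
            p = v / total
            ent -= p * math.log2(p)
    return ent

def motion_energy(tracks):
    """Toy per-object 'energy': squared displacement between consecutive
    (x, y) positions, summed over each track."""
    energies = []
    for track in tracks:
        e = 0.0
        for (x0, y0), (x1, y1) in zip(track, track[1:]):
            e += (x1 - x0) ** 2 + (y1 - y0) ** 2
        energies.append(e)
    return energies

def is_abnormal(tracks, entropy_threshold=1.5):
    """Flag the scene when the entropy of the energy distribution exceeds
    a threshold (a stand-in for the patent's warning rule)."""
    return shannon_entropy(motion_energy(tracks)) > entropy_threshold
```

Entropy is low when motion is concentrated in one object (an ordered scene) and high when many objects move comparably (disorder), which is the link to the measurement-theoretic notion of disorder the abstract invokes.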
20130322691 | TARGET RECOGNITION SYSTEM AND TARGET RECOGNITION METHOD EXECUTED BY THE TARGET RECOGNITION SYSTEM - A target recognition system operatively connected to a stereo imaging device to capture a stereo image of an area ahead of the target recognition system, includes a parallax calculator to calculate parallax of the stereo image including two captured images; a target candidate detector to detect a candidate set of recognition target areas based on a luminance image of one of the captured images; and a target recognition processor to limit the candidate set of recognition target areas detected by the target candidate detector within a range of threshold values of characteristics in the candidate set of recognition target areas set in advance based on the parallax calculated by the parallax calculator to extract and output the one or more recognition targets. | 12-05-2013 |
20130322692 | TARGET RECOGNITION SYSTEM AND TARGET RECOGNITION METHOD EXECUTED BY THE TARGET RECOGNITION SYSTEM - A target recognition system and a target recognition method to recognize one or more recognition targets, operatively connected to an imaging device to capture an image of an area ahead of the target recognition system, each of which includes a recognition area detector to detect multiple recognition areas from the captured image; a recognition weighting unit to set recognition weight indicating existence probability of images of the recognition targets to the respective recognition areas detected by the recognition area detector; and a target recognition processor to recognize the one or more recognition targets in a specified recognition area based on the recognition weight set in the respective recognition area. | 12-05-2013 |
20130322693 | Temporal Thermal Imaging Method For Detecting Subsurface Objects and Voids - A temporal thermal survey method to locate at a given area whether or not there is a subsurface object or void site. The method uses thermal inertia change detection. It locates temporal heat flows from naturally heated subsurface objects or faulty structures such as corrosion damage. The added value over earlier methods is the use of empirical methods to specify the optimum times for locating subsurface objects or voids amidst clutter and undisturbed host materials. Thermal inertia, or thermal effusivity, is the bulk material resistance to temperature change. Surface temperature highs and lows are shifted in time at the subsurface object or void site relative to the undisturbed host material sites. The Dual-band Infra-Red Effusivity Computed Tomography (DIRECT) method verifies the optimum two times to detect thermal inertia outliers at the subsurface object or void border with undisturbed host materials. | 12-05-2013 |
20130322694 | Energy Efficient Routing Using An Impedance Factor - A method and system for calculating an energy efficient route is disclosed. A route calculation application calculates one or more routes from an origin to a destination. For each of the routes, the route calculation application uses impedance factor data associated with each segment in the route. The impedance factor is calculated using probe data when the probe data is available for a road segment. When probe data is unavailable, the impedance factor is calculated using machine learning techniques that analyze the results of the impedance factor classifications for road segments having probe data. | 12-05-2013 |
20130322695 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE CAPTURING APPARATUS - Face regions are detected from a captured image, and a weight of each detected face region is computed based on a size and/or a position of the detected face region. Then a previous priority ranking weight is computed based on a priority ranking determined in previous processing. A priority of the face region is computed from the weight and the previous priority ranking weight. For example, if the continuous processing number exceeds the threshold the priority ranking weight is reduced. After the processing is completed for all face regions, a priority ranking of each face region is determined based on the priority computed for each face region. | 12-05-2013 |
20130322696 | DIFFERING REGION DETECTION SYSTEM AND DIFFERING REGION DETECTION METHOD - The present invention enables detection of a local differing region between images. Inter-image difference information indicating a difference in feature amounts for each subregion between first and second images is generated based on a first feature amount vector that is a set of feature amounts respectively corresponding to a plurality of subregions in the first image and a second feature amount vector that is a set of feature amounts respectively corresponding to a plurality of subregions in the second image, a differing region that is an image region that differs between the first and second images is detected based on differences in the respective subregions indicated by the inter-image difference information, and detection information that indicates a result of the detection is outputted. | 12-05-2013 |
20130329942 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND A COMPUTER-READABLE STORAGE MEDIUM - A pattern light projection unit which stores information on pattern light including information on line patterns projects pattern light including line patterns having at least two lines and at least one reference line pattern serving as a reference of the line patterns on an object in accordance with the information on pattern light. An image pickup unit captures an image of the object to which the pattern light is projected. | 12-12-2013 |
20130329943 | SYSTEM AND METHOD FOR PROVIDING AUTOMOTIVE PURCHASE, INSURANCE QUOTE, AND VEHICLE FINANCING INFORMATION USING VEHICLE RECOGNITION - A system for providing vehicle information at an automobile point of purchase includes a user device having a camera or other image capturing device that is used to capture an image of an automobile. An application on or associated with the image capturing device can either transmit the image to a service provider for processing, or can implement one or more steps in a feature recognition process locally, and thereafter transmit the data to a service provider. In either case, the service provider can then complete the feature recognition processing and identify the automobile from the image. The service provider can then communicate with a make and model database to provide useful information on the vehicle, which can then be transmitted to the user device and conveniently displayed. | 12-12-2013 |
20130329944 | TRACKING AIRCRAFT IN A TAXI AREA - Tracking aircraft in a taxi area is described herein. One method includes receiving a video image of an aircraft while the aircraft is taxiing, determining a portion of the video image associated with the aircraft, determining a geographical track associated with the aircraft based, at least in part, on the portion of the video image, and mapping the determined geographical track to a coordinate system display while the aircraft is taxiing. | 12-12-2013 |
20130329945 | SELF-ADAPTIVE IMAGE-BASED OBSTACLE DETECTION METHOD - A self-adaptive image-based obstacle detection method comprises steps: capturing an original image; transforming the original image to an HSV color space, and retrieving a hue component (H) and a saturation component (S) of the HSV color space to form an HS-based image; dividing the HS-based image into image blocks; selecting one image block as a background block; using an obstacle recognition equation to determine whether each of the image blocks is similar to the background block; if no, deleting the image block; if yes, preserving the image block to form a binary obstacle image; and overlaying the binary obstacle image on the original image to filter out the background and obtain an initial ambit of an obstacle image. Then, three orderly movement flow equations are used to determine whether it is an obstacle. | 12-12-2013 |
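The HSV steps above (keep only hue and saturation, split into blocks, compare each block against a background block, keep or delete) could be sketched as follows. The mean-(H, S) block statistic and the tolerance value are illustrative assumptions, not the patent's recognition equation:

```python
import colorsys

def to_hs(pixel):
    """RGB (0-255) -> (hue, saturation), dropping V as in the abstract."""
    r, g, b = (c / 255.0 for c in pixel)
    h, s, _v = colorsys.rgb_to_hsv(r, g, b)
    return h, s

def block_hs_mean(block):
    """Mean (hue, saturation) of a block of RGB pixels."""
    hs = [to_hs(p) for row in block for p in row]
    n = len(hs)
    return sum(h for h, _ in hs) / n, sum(s for _, s in hs) / n

def obstacle_mask(blocks, background_block, tol=0.15):
    """Binary obstacle mask: True where a block's mean (H, S) differs from
    the background block by more than tol in either channel."""
    bh, bs = block_hs_mean(background_block)
    mask = []
    for blk in blocks:
        h, s = block_hs_mean(blk)
        mask.append(abs(h - bh) > tol or abs(s - bs) > tol)
    return mask
```

Discarding the V (brightness) channel makes the comparison largely invariant to illumination changes, which is why obstacle detection on an HS-based image tends to be more stable outdoors than direct RGB differencing.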
20130329946 | FAST POSE DETECTOR - Methods and apparatuses are presented for determining whether a gesture is being performed in a sequence of source images. In some embodiments, a method includes detecting a gesture in each of one or more reference images using one or more gesture models of a plurality of gesture models. The method may also include selecting a first gesture model from the one or more gesture models that most closely matches the detected gesture, prioritizing the first gesture model over other gesture models in the plurality of gesture models for searching for the gesture in the sequence of source images, and scanning the sequence of source images to determine whether the gesture is being performed, using the prioritized first gesture model. If the gesture is being performed, the method may end scanning prior to using another gesture model of the plurality of gesture models to determine whether the gesture is being performed. | 12-12-2013 |
20130329947 | IMAGE CAPTURING METHOD FOR IMAGE RECOGNITION AND SYSTEM THEREOF - An image capturing method includes providing at least three image capturing devices arranged along a same direction and an image processor, the at least three image capturing devices capturing at least three first images, determining a target object in the at least three first images, activating a first pair of image capturing devices of the at least three image capturing devices according to shooting angles of the target object in the at least three first images in order to capture a first pair of motion images, and the image processor performing image recognition to the target object of the first pair of motion images. | 12-12-2013 |
20130329948 | SUBJECT TRACKING DEVICE AND SUBJECT TRACKING METHOD - A subject tracking device includes a processor; and a memory which stores a plurality of instructions, which when executed by the processor, cause the processor to execute, detecting at least one subject candidate area in which it is probable that a tracking target subject appears on an image that is received from an imaging unit; calculating a degree of blur of the subject candidate area for each of the subject candidate areas; determining that the subject appears in a subject candidate area having a degree of blur in accordance with a moving speed of the subject, out of the subject candidate areas; and deciding movement of the subject depending on a movement direction from an area in which the subject appears on a previous image that is captured by the imaging unit before capturing the image, to the subject candidate area in which the subject appears on the image. | 12-12-2013 |
20130329949 | IMAGE RECOGNITION APPARATUS AND IMAGE RECOGNITION METHOD - An image recognition apparatus includes a reception part that receives an image that has been read; a determination part that determines a registered object to correspond to an object included in the received image that has been read from among previously registered plural objects; a reflecting part that reflects colors of the image that has been read in previously stored plural similar objects each similar to the registered object determined by the determination part; and a printing control part that causes a printing apparatus to print the plural similar objects in which the colors have been reflected by the reflecting part. | 12-12-2013 |
20130329950 | METHOD AND SYSTEM OF TRACKING OBJECT - Provided is a method and system for tracking an object that may track a point into which an object is to move by combining the object in an image and position coordinates of the object acquired through a position tracking apparatus provided to the object and thereby displaying the position coordinates. | 12-12-2013 |
20130329951 | METHOD AND APPARATUS FOR ESTIMATING A POSE OF A HEAD FOR A PERSON - A method of estimating a pose of a head for a person, includes estimating the pose of the head for the person based on a content, and generating a three-dimensional (3D) model of a face for the person. The method further includes generating pictorial structures of the face based on the estimated pose and the 3D model, and determining a refined pose of the head by locating parts of the face in the pictorial structures. | 12-12-2013 |
20130329952 | APPARATUS AND METHOD FOR PROCESSING ASYNCHRONOUS EVENT INFORMATION - An event information processing apparatus and method are provided that may process an asynchronous event. The event information processing apparatus may include a grouper which groups at least one item of event information generated at an identical time, a time information identifier which identifies basic time information associated with the grouped event information, and an information transmitter which arranges and thereby transmits the grouped event information and basic time information. | 12-12-2013 |
20130329953 | OPTICAL NON-CONTACTING APPARATUS FOR SHAPE AND DEFORMATION MEASUREMENT OF VIBRATING OBJECTS USING IMAGE ANALYSIS METHODOLOGY - Apparatuses and methods related to measuring motion or deformations of vibrating objects are provided. A plurality of images of an object are acquired in synchronization with a plurality of determined times of interest during oscillation of the object. The plurality of images are compared to obtain one or more quantities of interest of the object based at least in part on the plurality of images. | 12-12-2013 |
20130329954 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING SYSTEM - An image processing apparatus, connected to an imaging part to capture an image of workpieces conveyed on a conveyer, includes an interface that receives a signal indicating a travel distance of the conveyer, an interface that communicates with a control device for controlling a moving machine disposed downstream of an imaging area of the imaging part, a positional information acquisition unit that processes the image captured by the imaging part and thereby acquires positional information of a pre-registered workpiece in the image, a travel distance obtaining unit that obtains the travel distance of the conveyer synchronized with the control device, an initiating unit that initiates the capturing by the imaging part in response to an imaging command, and a transmission unit that transmits, to the control device, the positional information and the travel distance upon the capturing of the image used to acquire the positional information. | 12-12-2013 |
20130329955 | Real-Time Face Tracking with Reference Images - A method of tracking a face in a reference image stream using a digital image acquisition device includes acquiring a full resolution main image and an image stream of relatively low resolution reference images each including one or more face regions. One or more face regions are identified within two or more of the reference images. A relative movement is determined between the two or more reference images. A size and location are determined of the one or more face regions within each of the two or more reference images. Concentrated face detection is applied to at least a portion of the full resolution main image in a predicted location for candidate face regions having a predicted size as a function of the determined relative movement and the size and location of the one or more face regions within the reference images, to provide a set of candidate face regions for the main image. | 12-12-2013 |
20130329956 | METHOD OF IMPROVING THE RESOLUTION OF A MOVING OBJECT IN A DIGITAL IMAGE SEQUENCE - A method of improving the resolution of a small moving object in a digital image sequence comprises the steps of: | 12-12-2013 |
20130329957 | METHOD FOR DETECTING POINT OF GAZE AND DEVICE FOR DETECTING POINT OF GAZE - A gaze point detection device | 12-12-2013 |
20130329958 | PERSON TRACKING DEVICE, PERSON TRACKING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PERSON TRACKING PROGRAM - A person region information extraction unit ( | 12-12-2013 |
20130329959 | OBJECT DETECTION DEVICE - An object detection device includes an acquisition unit configured to acquire information indicating a temperature distribution, a storage unit configured to store background information indicating a temperature distribution when no target object exists, a detection unit configured to detect existence or absence of a target object, and an update unit configured to repeatedly update the background information. The update unit performs, with respect to a non-detection region, a first background updating process for the update of the background information based on the acquired information and performs, with respect to a detection region, a second background updating process for the update of the background information using a correction value. | 12-12-2013 |
20130336523 | NORMALIZED IMAGES FOR ITEM LISTINGS - Disclosed in some examples is a method including receiving a selection of an outline template; displaying the outline template in an image preview screen of a digital image capture device; responsive to a capture of an image of the digital image capture device, cropping the image to an outline of the outline template; positioning the cropped image over a second image and sending a combined image formed from the image positioned over the second image to a commerce server, the combined image for use as a product image. | 12-19-2013 |
20130336524 | Dynamic Hand Gesture Recognition Using Depth Data - The subject disclosure is directed towards a technology by which dynamic hand gestures are recognized by processing depth data, including in real-time. In an offline stage, a classifier is trained from feature values extracted from frames of depth data that are associated with intended hand gestures. In an online stage, a feature extractor extracts feature values from sensed depth data that corresponds to an unknown hand gesture. These feature values are input to the classifier as a feature vector to receive a recognition result of the unknown hand gesture. The technology may be used in real time, and may be robust to variations in lighting, hand orientation, and the user's gesturing speed and style. | 12-19-2013 |
20130336525 | SPECTRAL EDGE MARKING FOR STEGANOGRAPHY OR WATERMARKING - A system for detecting visibly hidden content on a print media ( | 12-19-2013 |
20130336526 | METHOD AND SYSTEM FOR WILDFIRE DETECTION USING A VISIBLE RANGE CAMERA - Wildfires are detected by controlling image scanning within the viewing range of a video camera to generate digital images that are analyzed to detect gray colored regions, and then to determine whether a detected gray colored region is smooth. Further analysis to determine movement in a gray colored smooth region uses a past image which is within a slow moving time range, as determined by a strategy for controlling the image scanning. Additional analysis connects a candidate region to a land portion of the image, and a support vector machine is applied to a covariance matrix of the candidate region to determine whether the region shows smoke from a wildfire. | 12-19-2013 |
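The covariance matrix fed to the support vector machine in entry 20130336526 is, in the usual region-covariance formulation, the covariance of per-pixel feature vectors over the candidate region. A minimal sketch under that assumption (the patent's exact feature set is not stated; position, intensity, and gradient magnitudes are a common choice):

```python
import numpy as np

def region_covariance(region):
    """Covariance descriptor of a grayscale region: each pixel yields
    a feature vector (x, y, intensity, |dI/dx|, |dI/dy|), and the 5x5
    covariance of these vectors summarizes the region for a
    downstream classifier such as an SVM."""
    h, w = region.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(region.astype(float))  # gradients along rows, cols
    feats = np.stack(
        [xs.ravel(), ys.ravel(), region.ravel(),
         np.abs(gx).ravel(), np.abs(gy).ravel()], axis=0)
    return np.cov(feats)
```

The fixed 5x5 descriptor size makes regions of any shape comparable, which is what lets a single SVM decide smoke versus non-smoke.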
20130336527 | FACIAL IMAGE QUALITY ASSESSMENT - An example method includes capturing, by a camera of a mobile computing device, an image, determining whether the image includes a representation of at least a portion of a face, and, when the image includes the representation of at least the portion of the face, analyzing characteristics of the image. The characteristics include at least one of a tonal distribution of the image that is associated with a darkness-based mapping of a plurality of pixels of the image, and a plurality of spatial frequencies of the image that are associated with a visual transition between adjacent pixels of the image. The method further includes classifying, by the mobile computing device, a quality of the image based at least in part on the analyzed characteristics of the image. | 12-19-2013 |
20130336528 | METHOD AND APPARATUS FOR IDENTIFYING INPUT FEATURES FOR LATER RECOGNITION - Disclosed are methods and apparatuses to recognize actors during normal system operation. The method includes defining actor input such as hand gestures, executing and detecting input, and identifying salient features of the actor therein. A model is defined from salient features, and a data set of salient features and/or model are retained, and may be used to identify actors for other inputs. A command such as “unlock” may be executed in response to actor input. Parameters may be applied to further define where, when, how, etc. actor input is executed, such as defining a region for a gesture. The apparatus includes a processor and sensor, the processor defining actor input, identifying salient features, defining a model therefrom, and retaining a data set. A display may also be used to show actor input, a defined region, relevant information, and/or an environment. A stylus or other non-human actor may be used. | 12-19-2013 |
20130336529 | METHOD AND APPARATUS FOR SEARCHING IMAGES - Disclosed are methods and apparatuses for searching images. An image is received and a first search path is defined for the image. The first search path may be a straight line, horizontal, and/or near the bottom of the image, and/or may begin at one edge and move toward the other. A transition is defined for the image, distinguishing a feature to be found. The image is searched for the transition along the first search path. When the transition is detected, the image is searched along a second search path that follows the transition. The apparatus includes an image sensor and a processor. The sensor is adapted to obtain images. The processor is adapted to define a first search path and a transition for the image, to search for the transition along the first search path, and to search along a second search path upon detecting the transition, following the transition. | 12-19-2013 |
20130336530 | Data Capture and Identification System and Process - An identification method and process for objects from digitally captured images thereof that uses data characteristics to identify an object from a plurality of objects in a database. The data is broken down into parameters such as a Shape Comparison, Grayscale Comparison, Wavelet Comparison, and Color Cube Comparison with object data in one or more databases to identify the actual object of a digital image. | 12-19-2013 |
20130336531 | SEQUENTIAL EVENT DETECTION FROM VIDEO - Human behavior is determined by sequential event detection by constructing a temporal-event graph with vertices representing primitive images of images of a video stream, and also of idle states associated with the respective primitive images. A human activity event is determined as a function of a shortest distance path of the temporal-event graph vertices. | 12-19-2013 |
20130336532 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM PRODUCT - According to one embodiment, an information processing apparatus includes: a detector configured to set a plurality of detection areas to a single piece of face image included in a video image that is based on input video data, with reference to a position of the face image to detect movements of an operator giving an operation instruction in the detection areas; and an output module configured to output operation data indicating the operation instruction based on a combination of the movements detected in the detection areas. | 12-19-2013 |
20130336533 | Object Information Derived from Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 12-19-2013 |
20130336534 | MULTI-MODE VIDEO EVENT INDEXING - Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. A high-quality analytic mode is selected from multiple modes and applied to video input images via a hardware device to determine object activity within the video input images if the determined level of detected quality of object distinctiveness meets a threshold level of quality, else a low-quality analytic mode is selected and applied to the video input images via a hardware device to determine object activity within the video input images, wherein the low-quality analytic mode is different from the high-quality analytic mode. | 12-19-2013 |
20130336535 | MULTI-MODE VIDEO EVENT INDEXING - Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. A high-quality analytic mode is selected from multiple modes and applied to video input images via a hardware device to determine object activity within the video input images if the determined level of detected quality of object distinctiveness meets a threshold level of quality, else a low-quality analytic mode is selected and applied to the video input images via a hardware device to determine object activity within the video input images, wherein the low-quality analytic mode is different from the high-quality analytic mode. | 12-19-2013 |
20130336536 | METHOD AND SYSTEM FOR DETECTING A STREAM OF ELECTROMAGNETIC PULSES, AND DEVICE INCLUDING SUCH A DETECTION SYSTEM AND INTENDED FOR ELECTROMAGNETICALLY GUIDING AMMUNITION TOWARD A TARGET - A method for detecting a stream of electromagnetic pulses emitted, according to a predefined occurrence law, in a scene observed using a detection system comprising a matrix detector and a processing unit for processing signals comprising the electromagnetic pulses. The method includes the following steps: acquiring and transmitting the signals from the matrix detector to the processing unit, and for each pixel of the detector calculating a subtraction signal between two signals acquired during two consecutive time windows of the same length, calculating a signal for accumulating the subtraction signals spaced apart in time by an interval defined by the predefined occurrence law, and thresholding the accumulation signal, the pulse being detected if the accumulation signal is greater than a predetermined threshold for at least one pixel, and locating the pulse detected in the observed scene from the coordinates of the pixel including the detected pulse. | 12-19-2013 |
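The per-pixel pipeline of entry 20130336536 (subtraction of two consecutive equal-length time windows, accumulation of differences spaced by the known repetition interval, thresholding of the accumulated signal) can be sketched in one dimension; `window`, `period`, and `n_pulses` are hypothetical parameters standing in for the predefined occurrence law:

```python
import numpy as np

def detect_pulse_train(signal, window, period, n_pulses, threshold):
    """Per-pixel detection: moving-window sums, subtraction of two
    consecutive windows of the same length, accumulation at the known
    pulse repetition period, then thresholding."""
    sums = np.convolve(signal, np.ones(window), mode="valid")
    diff = sums[window:] - sums[:-window]      # consecutive-window subtraction
    acc_len = len(diff) - period * (n_pulses - 1)
    acc = np.zeros(acc_len)
    for k in range(n_pulses):                  # accumulate aligned pulses
        acc += diff[k * period : k * period + acc_len]
    return bool(acc.max() > threshold), acc
```

Accumulating at the expected spacing lets weak pulses that are individually below threshold sum coherently while uncorrelated noise does not.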
20130343600 | SELF LEARNING FACE RECOGNITION USING DEPTH BASED TRACKING FOR DATABASE GENERATION AND UPDATE - Face recognition training database generation technique embodiments are presented that generally involve collecting characterizations of a person's face that are captured over time and as the person moves through an environment, to create a training database of facial characterizations for that person. As the facial characterizations are captured over time, they will represent the person's face as viewed from various angles and distances, at different resolutions, and under different environmental conditions (e.g., lighting and haze conditions). Further, over a long period of time where facial characterizations of a person are collected periodically, these characterizations can represent an evolution in the appearance of the person. This produces a rich training resource for use in face recognition systems. In addition, since a person's face recognition training database can be established before it is needed by a face recognition system, once employed, the training will be quicker. | 12-26-2013 |
20130343601 | GESTURE BASED HUMAN INTERFACES - A method for implementing gesture based human interfaces includes segmenting data generated by an IR camera of an active area and detecting objects in the active area. The objects are distinguished as either island objects or peninsula objects and a human hand is identified from among the peninsula objects. The motion of the human hand is tracked as a function of time and a gesture made by the human hand is recognized. | 12-26-2013 |
20130343602 | APPARATUS AND METHODS FOR USE IN FLASH DETECTION - The present embodiments provide methods, systems and apparatuses that detect, classify and locate flash events. In some implementations, some of the methods detect a flash event, trigger an imaging system in response to detecting the flash event to capture an image of an area that includes the flash event, and determine a location of the flash event. | 12-26-2013 |
20130343603 | METHOD AND SYSTEM FOR DETECTING MOTION CAPABLE OF REMOVING SHADOW BY HEAT - A method and system for detecting a motion of a target object in a thermal image by removing a shadow by heat of the target object from the thermal image. The motion detecting system includes: a learning unit obtaining at least one of size and brightness of a shadow by heat of a reference object based on characteristics of the shadow by heat of the reference object by temperature; and a detecting unit removing a shadow region of the target object from the thermal image including the target object based on at least one of the size and the brightness of the shadow by heat of the object. | 12-26-2013 |
20130343604 | VIDEO PROCESSING APPARATUS AND VIDEO PROCESSING METHOD - A video processing apparatus tracks an object in a video and performs detection processing for detecting that an object in the video is a specific object such that a number of times the detection processing is performed within a predetermined period on a tracking object not detected to be the specific object is more than a number of times the detection processing is performed within the predetermined period on a tracking object detected to be the specific object. | 12-26-2013 |
20130343605 | SYSTEMS AND METHODS FOR TRACKING HUMAN HANDS USING PARTS BASED TEMPLATE MATCHING - Systems and methods for tracking human hands using parts based template matching are described. One embodiment of the invention includes a processor, a reference camera and memory containing: a hand tracking application; and a finger template including an edge features template. In addition, the hand tracking application configures the processor to: detect at least one candidate finger in a frame of video data received from the reference camera, where each candidate finger is a grouping of pixels identified by searching the frame of video data for a grouping of pixels that have image gradient orientations that match the edge features of the finger template accounting for rotation and scaling differences; and verify the correct detection of a candidate finger by confirming that the colors of the pixels within the grouping of pixels identified as a candidate finger satisfy a skin color criterion. | 12-26-2013 |
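Entries 20130343605 and 20130343606 detect candidate fingers as pixel groupings whose image gradient orientations match a finger template's edge features. A minimal sketch with quantized unsigned orientations (the 8-bin quantization and the agreement-fraction score are assumptions, not the patents' exact formulation):

```python
import numpy as np

def orientation_map(img):
    """Quantized image gradient orientations: 8 bins over [0, pi),
    with flat pixels (negligible gradient magnitude) marked as -1."""
    gy, gx = np.gradient(img.astype(float))
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # unsigned orientation
    mag = np.hypot(gx, gy)
    bins = np.minimum((ang / np.pi * 8).astype(int), 7)
    bins[mag < 1e-3] = -1                         # ignore flat pixels
    return bins

def match_score(patch_bins, template_bins):
    """Fraction of the template's edge pixels whose quantized
    orientation agrees with the patch: a crude parts-based score."""
    valid = template_bins >= 0
    if not valid.any():
        return 0.0
    return float(np.mean(patch_bins[valid] == template_bins[valid]))
```

In the patents this matching is repeated over rotated and scaled template versions, and a skin-color (or alternate-view) check then verifies each candidate.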
20130343606 | SYSTEMS AND METHODS FOR TRACKING HUMAN HANDS BY PERFORMING PARTS BASED TEMPLATE MATCHING USING IMAGES FROM MULTIPLE VIEWPOINTS - Systems and methods for tracking human hands by performing parts based template matching using images captured from multiple viewpoints are described. One embodiment of the invention includes a processor, a reference camera, an alternate view camera, and memory containing: a hand tracking application; and a plurality of edge feature templates that are rotated and scaled versions of a finger template that includes an edge features template. In addition, the hand tracking application configures the processor to: detect at least one candidate finger in a reference frame, where each candidate finger is a grouping of pixels identified by searching the reference frame for a grouping of pixels that have image gradient orientations that match one of the plurality of edge feature templates; and verify the correct detection of a candidate finger in the reference frame by locating a grouping of pixels in an alternate view frame that correspond to the candidate finger. | 12-26-2013 |
20130343607 | METHOD FOR TOUCHLESS CONTROL OF A DEVICE - A system and method for computer vision based control of a device may include using a virtual line passing through an area of a user's eyes and through a user's hand (or any object controlled by the user) to a display of the device, to control the device. | 12-26-2013 |
20130343608 | INFORMATION DEVICE - An information system for a motor vehicle includes a translation aid incorporated in a navigation system. The translation aid can transform a text of a first script system into a second script system. A motor vehicle with such an information system and a method for using such information system are also disclosed. | 12-26-2013 |
20130343609 | DOCUMENT UNBENDING AND RECOLORING SYSTEMS AND METHODS - According to one aspect, a system for processing a document image is disclosed. In an exemplary embodiment, the system includes an edge-detection unit configured to identify an edge of a document from a document image. The system also includes a keystone-correction unit and a flattening unit. The keystone-correction unit is configured to correct keystone distortion in the document image. The flattening unit is configured to flatten content of the document in the document image. | 12-26-2013 |
20130343610 | SYSTEMS AND METHODS FOR TRACKING HUMAN HANDS BY PERFORMING PARTS BASED TEMPLATE MATCHING USING IMAGES FROM MULTIPLE VIEWPOINTS - Systems and methods for tracking human hands by performing parts based template matching using images captured from multiple viewpoints are described. One embodiment includes a processor, a reference camera, an alternate view camera, and memory containing: a hand tracking application; and a plurality of edge feature templates that are rotated and scaled versions of a finger template that includes an edge features template. In addition, the hand tracking application configures the processor to: detect at least one candidate finger in a reference frame, where each candidate finger is a grouping of pixels identified by searching the reference frame for a grouping of pixels that have image gradient orientations that match one of the plurality of edge feature templates; and verify the correct detection of a candidate finger in the reference frame by locating a grouping of pixels in an alternate view frame that correspond to the candidate finger. | 12-26-2013 |
20130343611 | Gestural interaction identification - A method of identifying gestural interaction comprises detecting a user with an imaging device, detecting with the imaging device the depth value at the centroid of the user with respect to the imaging device, detecting with the imaging device the closest distance of the user with respect to the imaging device, and, with a processor, identifying the initiation of a gestural interaction based on whether the ratio of the closest distance to the depth value at the centroid of the user is above a predetermined threshold. A computer program product for identifying initiation and termination of gestural interaction within a gestural interaction system comprises a computer readable storage medium having computer usable program code embodied therewith, the computer usable program code comprising computer usable program code that identifies the initiation of a gestural interaction by a user depending on whether a virtual bubble around the user has been broken. | 12-26-2013 |
20140003651 | Electronic Devices in Local Interactions between Users | 01-02-2014 |
20140003652 | INDIVIDUALIZING GENERIC COMMUNICATIONS | 01-02-2014 |
20140003653 | System and Method for Determining the Position of an Object Displaying Media Content | 01-02-2014 |
20140003654 | METHOD AND APPARATUS FOR IDENTIFYING LINE-OF-SIGHT AND RELATED OBJECTS OF SUBJECTS IN IMAGES AND VIDEOS | 01-02-2014 |
20140003655 | METHOD, APPARATUS AND SYSTEM FOR PROVIDING IMAGE DATA TO REPRESENT INVENTORY | 01-02-2014 |
20140003656 | SYSTEM OF A DATA TRANSMISSION AND ELECTRICAL APPARATUS | 01-02-2014 |
20140003657 | SETTING APPARATUS AND SETTING METHOD | 01-02-2014 |
20140003658 | METHOD AND APPARATUS FOR CODING OF EYE AND EYE MOVEMENT DATA | 01-02-2014 |
20140003659 | Passenger service unit with gesture control | 01-02-2014 |
20140003660 | HAND DETECTION METHOD AND APPARATUS | 01-02-2014 |
20140003661 | CAMERA APPARATUS AND METHOD FOR TRACKING OBJECT IN THE CAMERA APPARATUS | 01-02-2014 |
20140003662 | REDUCED IMAGE QUALITY FOR VIDEO DATA BACKGROUND REGIONS | 01-02-2014 |
20140003663 | METHOD OF DETECTING FACIAL ATTRIBUTES | 01-02-2014 |
20140003664 | DATA PROCESSOR, DATA PROCESSING SYSTEM, AND COMPUTER-READABLE RECORDING MEDIUM | 01-02-2014 |
20140003665 | Nail Region Detection Method, Program, Storage Medium, and Nail Region Detection Device | 01-02-2014 |
20140003666 | SENSING DEVICE AND METHOD USED FOR VIRTUAL GOLF SIMULATION APPARATUS | 01-02-2014 |
20140003667 | IMAGE PROCESSING DEVICE, OBJECT SELECTION METHOD AND PROGRAM | 01-02-2014 |
20140003668 | Image Capture and Identification System and Process | 01-02-2014 |
20140003669 | MOVING OBJECT DETECTING DEVICE, MOVING OBJECT DETECTING METHOD, AND COMPUTER PROGRAM | 01-02-2014 |
20140010405 | METHODS AND SYSTEMS FOR CREATING VIRTUAL TRIPS FROM SETS OF USER CONTENT ITEMS - A set of user content items, such as a set of photographs or audio or video recordings, is used to identify recommended locations. Each item of user content in the set includes a geographical identifier and a time-stamp indicative of the location and time of origin of the item. The items of user content are placed in time-order, and a route is determined that links, in time-order, the locations identified by the geographical identifiers. A recommended location is then identified, from a database of recommended locations, that is located near the determined route but is not geographically coincident with the locations of any of the items of user content. Information for a map containing the route and an illustration of the identified location are sent to a terminal device for presentation to a user. | 01-09-2014 |
20140010406 | METHOD OF POINT SOURCE TARGET DETECTION FOR MULTISPECTRAL IMAGING - A method of point source target detection for multispectral imaging is disclosed. In one embodiment, a background source spectral ratio is determined using at least one radiant source, such as baseline intensities, camera optics sensitivity properties and atmospheric transmission properties. Further, a spectral difference is computed for each pixel in an incoming frame by applying the background source spectral ratio to a spectral band-specific radiant intensity value of each pixel. Furthermore, offset biasing in the incoming frame is removed by applying spatial median filtering to each computed spectral difference in the incoming frame. | 01-09-2014 |
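Entry 20140010406 applies a background source spectral ratio to one band, differences it against another, and removes offset bias with spatial median filtering. A minimal two-band sketch under those assumptions (band names and the 3x3 kernel size are illustrative):

```python
import numpy as np

def spectral_difference(band_a, band_b, background_ratio, kernel=3):
    """Scale band A by the background spectral ratio, subtract band B,
    then remove offset bias by subtracting the local median of the
    difference (a spatial median filter)."""
    diff = band_a * background_ratio - band_b
    h, w = diff.shape
    pad = kernel // 2
    padded = np.pad(diff, pad, mode="edge")
    med = np.empty_like(diff)
    for i in range(h):
        for j in range(w):
            med[i, j] = np.median(padded[i:i + kernel, j:j + kernel])
    return diff - med
```

A point target occupies too few pixels to move the local median, so it survives the bias removal while smooth background structure cancels.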
20140010407 | IMAGE-BASED LOCALIZATION - Image-based localization technique embodiments are presented which provide a real-time approach for image-based video camera localization within large scenes that have been reconstructed offline using structure from motion or similar techniques. From monocular video, a precise 3D position and 3D orientation of the camera can be estimated on a frame by frame basis using only visual features. | 01-09-2014 |
20140010408 | LENS-ATTACHED MATTER DETECTOR, LENS-ATTACHED MATTER DETECTION METHOD, AND VEHICLE SYSTEM - A lens-attached matter detector includes an edge extractor configured to create an edge image based on an input image, divide the edge image into a plurality of areas including a plurality of pixels, and extract an area whose edge intensity is within a threshold range as an attention area, a brightness distribution extractor configured to obtain a brightness value of the attention area and a brightness value of a circumference area, a brightness change extractor configured to obtain the brightness value of the attention area and the brightness value of the circumference area for a predetermined time interval, and obtain a time series variation in the brightness value of the attention area, and an attached matter determiner configured to determine the presence or absence of attached matter based on the time series variation in the brightness value of the attention area. | 01-09-2014 |
20140010409 | OBJECT TRACKING DEVICE, OBJECT TRACKING METHOD, AND CONTROL PROGRAM - An object tracking device which tracks a target object in a time-series image including a plurality of frames has a location information acquisition unit that acquires location information of a target object in a first frame, the target object being a tracked target, a detailed contour model generation unit that generates a detailed contour model in the first frame, on the basis of the location information, the detailed contour model being formed with a plurality of contour points representing a contour of the target object, and a search location setting unit that sets a plurality of different search locations in a second frame, the second frame being any one of frames following the first frame. | 01-09-2014 |
20140010410 | IMAGE RECOGNITION SYSTEM, IMAGE RECOGNITION METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING IMAGE RECOGNITION PROGRAM - An image recognition system includes a partial area determination unit for determining a plurality of learning partial areas in a learning image including an object to be recognized, a partial area set generation unit for generating a learning partial area set including the learning partial area and a plurality of peripheral areas included in a predetermined range with reference to the learning partial area, and a learning unit for selecting, from a plurality of areas included in the learning partial area set, an area including an image suitable to be determined as an object to be identified included in the object to be recognized, and learning a classifier so as to determine a higher likelihood that the image included in the area is the object to be identified, based on a feature amount related to the selected area. | 01-09-2014 |
20140016818 | PRODUCE COLOR DATA CORRECTION METHOD AND AN APPARATUS THEREFOR - A produce recognition system comprises an image capture device arranged to (i) capture a first color image which is representative of a color image of a produce item, and (ii) capture a second color image which is representative of a color image of at least one target color swatch. The produce recognition system further comprises control circuitry arranged to (i) calculate one or more color correction factors based upon differences between the captured second color image and a store of reference color images, and (ii) apply the calculated one or more color correction factors to the captured first color image to correct for color variations in the color image of the produce item due to a combination of variations in natural lighting and variations in interior lighting. | 01-16-2014 |
20140016819 | METHOD AND SYSTEM FOR ISOLATED HOLE DETECTION AND GROWTH IN A DOCUMENT IMAGE - A method for detecting and growing isolated holes in a document image having a plurality of pixels is provided. The method includes isolating the pixels of the image to form a plurality of windows, each window having a target pixel; identifying a hole growth factor to grow an isolated hole in the received image; using the hole growth factor to identify tiered pixel patterns from a plurality of predefined, tiered pixel patterns, wherein each of the tiered pixel patterns has a predetermined hole growth factor; comparing the pixels within each window to the pixel patterns within the identified tier to identify a match between the pixels within the window and at least one of the pixel patterns; and changing a pixel value of the target pixel, when a match is identified, to grow the isolated hole by the hole growth factor. | 01-16-2014 |
20140016820 | DISTRIBUTED OBJECT TRACKING FOR AUGMENTED REALITY APPLICATION - One embodiment of the present invention provides a system for tracking and distributing annotations for a video stream. During operation, the system receives, at an annotation server, the video stream originating from a remote field computer, extracts a number of features from the received video stream, and identifies a group of features that matches a known feature group, which is associated with an annotation. The system further associates the identified group of features with the annotation, and forwards the identified group of features and the annotation to the remote field computer, thereby facilitating the remote field computer to associate the annotation with a group of locally extracted features and display the video stream with the annotation placed in a location based at least on locations of the locally extracted features. | 01-16-2014 |
20140016821 | SENSOR-AIDED WIDE-AREA LOCALIZATION ON MOBILE DEVICES - A mobile device uses vision and orientation sensor data jointly for six degree of freedom localization, e.g., in wide-area environments. An image or video stream is captured while receiving geographic orientation data and may be used to generate a panoramic cylindrical map of an environment. A bin of model features stored in a database is accessed based on the geographic orientation data. The model features are from a pre-generated reconstruction of the environment produced from extracted features from a plurality of images of the environment. The reconstruction is registered to a global orientation and the model features are stored in bins based on similar geographic orientations. Features from the panoramic cylindrical map are matched to model features in the bin to produce a set of corresponding features, which are used to determine a position and an orientation of the camera. | 01-16-2014 |
20140016822 | INFORMATION PROVIDING DEVICE AND INFORMATION PROVIDING METHOD - An information providing device according to the exemplary embodiment includes an object recognizing unit, a retrieving unit, an obtaining unit, and a transmitting unit. The object recognizing unit extracts an image of a specific object which appears in an image of a moving image content to be distributed to a terminal device. The retrieving unit requests a retrieval device to retrieve a similar image with the image of the specific object as a retrieval key and obtains a retrieval result from the retrieval device. The obtaining unit obtains recommendation information related to the image of the specific object which appears in an image of a moving image content, based on the retrieval result. The transmitting unit transmits the recommendation information to the terminal device. | 01-16-2014 |
20140016823 | METHOD OF VIRTUAL MAKEUP ACHIEVED BY FACIAL TRACKING - A method of applying virtual makeup and producing makeover effects on a 3D face model driven by facial tracking in real time includes the steps of: capturing static or live facial images of a user; performing facial tracking of the facial image and obtaining tracking points on the captured facial image; and producing makeover effects according to the tracking points in real time. Virtual makeup can be applied using a virtual makeup input tool such as a user's finger sliding over a touch panel screen, a mouse cursor, or an object passing through a makeup-allowed area. The makeup-allowed area for producing makeover effects is defined by extracting feature points from the facial tracking points, dividing the makeup-allowed area into segments and layers, and defining and storing parameters of the makeup-allowed area. Virtual visual effects including color series, alpha blending, and/or superposition are capable of being applied. The makeover effect takes into account lighting conditions, facial posture rotation, face size scaling, and face translation. | 01-16-2014 |
20140016824 | DEVICE AND METHOD FOR DETECTING ANGLE OF ROTATION FROM NORMAL POSITION OF IMAGE - An evaluation value indicative of the extent of lines in each direction is calculated for a pre-processed image in which 0s are filled in and extended in the lateral direction of the inputted image and which has been reduced to ⅛th in the longitudinal direction. To obtain the angle of rotation of an image from the change in the evaluation value obtained while the angle relative to the lateral direction of the pre-processed image is modified in small steps, a parallel line is drawn for each direction, a projection is taken, and the sum of squares serves as the evaluation value of the direction. The direction having the highest evaluation value serves as the obtained direction of rotation from the normal position. The projection of each direction references the point of intersection between the parallel line drawn for each direction and the coordinate line of the horizontal axis. | 01-16-2014 |
20140016825 | IMAGE PROCESSING APPARATUS, DISPLAY CONTROL METHOD AND PROGRAM - Aspects of the present invention include an apparatus comprising a recognition unit configured to recognize a real object in an image. The apparatus may further comprise a determining unit configured to determine a stability indicator indicating a stability of the recognition, and a display control unit configured to modify a display of a virtual object according to the stability indicator. | 01-16-2014 |
20140023229 | HANDHELD DEVICE AND METHOD FOR DISPLAYING OPERATION MANUAL - In a method for displaying an operation manual of a component of a product using a handheld device, an image of the component is captured by an image capturing device of the handheld device. The captured image is compared with each image template stored in an image template database to determine a matching image corresponding to the captured image in the image template database, and contents of the operation manual of the component corresponding to the matching image are displayed on a display screen of the handheld device. | 01-23-2014 |
20140023230 | GESTURE RECOGNITION METHOD AND APPARATUS WITH IMPROVED BACKGROUND SUPPRESSION - A gesture recognition method with improved background suppression includes the following steps. First, a plurality of images are sequentially captured. Next, a position of at least one object in each of the images is calculated to respectively obtain a moving vector of the object at different times. Then, an average brightness of the object in each of the images is calculated. Finally, magnitudes of the moving vectors of the object at different times are respectively adjusted according to the average brightness of the object in each of the images. There is further provided a gesture recognition apparatus using the method mentioned above. | 01-23-2014 |
20140023231 | IMAGE PROCESSING DEVICE, CONTROL METHOD, AND STORAGE MEDIUM FOR PERFORMING COLOR CONVERSION - An image processing device includes: a detecting unit, which detects a specific region of a human body from image data; a color selecting unit, which selects color information relative to the detected specific region; a correction amount acquisition unit, which acquires a correction amount corresponding to the selected color information; and a color conversion unit, which performs color conversion on the specific region based on the selected color information and the acquired correction amount. | 01-23-2014 |
20140023232 | METHOD OF DETECTING TARGET IN IMAGE AND IMAGE PROCESSING DEVICE - A method of detecting a target in an image. The method includes receiving an image; generating a plurality of scaled-down images based on the received image; generating integral column images of each of the plurality of scaled-down images by calculating integral values of pixels column by column; selecting and classifying a plurality of windows of the integral column images according to a feature arithmetic operation based on a recursive column calculation; and detecting the target on the basis of the classification results for the plurality of windows. | 01-23-2014 |
20140023233 | Method and System for Determining a Number of Transfer Objects - The invention proposes a method for determining a number of transfer objects. | 01-23-2014 |
20140023234 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 01-23-2014 |
20140023235 | METHOD OF CONTROLLING A FUNCTION OF A DEVICE AND SYSTEM FOR DETECTING THE PRESENCE OF A LIVING BEING - A method of controlling a function of a device includes obtaining a sequence of images. | 01-23-2014 |
20140023236 | PROCESSING IMAGES OF AT LEAST ONE LIVING BEING - A method of processing images of at least one living being includes obtaining a sequence of images. | 01-23-2014 |
20140023237 | Digitally-Generated Lighting for Video Conferencing Applications - A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation and Lambertian and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model is generated which is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model. | 01-23-2014 |
20140029788 | DETECTING OBJECTS WITH A DEPTH SENSOR - Detecting an object includes receiving depth data and infrared (IR) data from a depth sensor. A first background subtraction is performed on the IR data to create a first mask, and a second background subtraction is performed on the IR data to create a second mask. The first and second masks and the depth data are merged to create a third mask. | 01-30-2014 |
20140029789 | METHOD AND SYSTEM FOR VISION BASED INTERFACING WITH A COMPUTER - System and method which allow a user to interface with a machine/computer using an image capturing device (e.g. camera) instead of conventional physical interfaces e.g. keyboard, mouse. The system allows the user to interface from any physical and non-physical location within the POV of the camera at a distance that is determined by the resolution of the camera. Using images of the hand, the system may detect a change of hand states. If the new state is a known state that represents a hit state, the system would map the change of state to a key hit in a row and column of the keyboard and sends the function associated with that key for execution. In an embodiment, the system determines the row based on the rotation of the wrist and/or position of the hand. | 01-30-2014 |
20140029790 | METHOD AND DEVICE FOR DETECTING FOG AT NIGHT - A method of detecting the presence of an element (fog, rain, etc.) disturbing the visibility of a scene illuminated by a headlight. | 01-30-2014 |
20140029791 | LIGHT EMITTING SOURCE DETECTION DEVICE, LIGHT BEAM CONTROL DEVICE AND PROGRAM FOR DETECTING LIGHT EMITTING SOURCE - A processing section as a light emitting source detection device in a light beam control system changes irradiation parameters, one of an irradiation range and a luminance of a light beam of head lamps of an own vehicle. The light beam of the head lamps is irradiated toward a light object corresponding to a light source detected in captured image data. The processing section detects whether or not the luminance of the detected light source is changed after the change of the irradiation parameters, and sets a probability value of the detected light source to a value lower than a probability value of a light source when luminance is not changed even if the irradiation parameters are changed. When the probability value of the detected light source is not less than a predetermined threshold value, the processing section determines that the detected light source is a vehicle light source. | 01-30-2014 |
20140029792 | VEHICLE LIGHT SOURCE DETECTION DEVICE, LIGHT BEAM CONTROL DEVICE AND PROGRAM OF DETECTING VEHICLE LIGHT SOURCE - A vehicle light source detection device in a light beam control system detects a position of a light source appearing in captured image data. The device calculates a gradient of a road on which the own vehicle is running. The vehicle light source detection device estimates a vanishing point in the captured image data on the basis of the detected gradient of the road. The device further increases a reliability value of the detected light source as the position of the detected light source more closely approaches the vanishing point. When the reliability value of the detected light source is not less than a predetermined reference value, the device determines that the detected light source is a head lamp of an oncoming vehicle, and adjusts an irradiation range of the light beam of the head lamps of the own vehicle to avoid the oncoming vehicle. | 01-30-2014 |
20140029793 | METHOD OF OPTIMAL OUT-OF-BAND CORRECTION FOR MULTISPECTRAL REMOTE SENSING - A method of image processing. An expected band-averaged spectral radiances image vector is simulated from training hyperspectral data and at least one filter transmittance function corresponding to the at least one optical filter. A simulated measured band-averaged spectral radiances image vector is simulated from the training hyperspectral data and the at least one transmittance function. A realistic measured band-averaged spectral radiances image vector is provided from at least one optical filter. A cross-correlation matrix of the expected band-averaged spectral radiances image vector and the realistic measured band-averaged spectral radiances image vector is calculated. An auto-correlation matrix of the simulated measured band-averaged spectral radiances image vector is calculated. An optimal out-of-band transform matrix is generated by matrix-multiplying the cross-correlation matrix and an inverse of the auto-correlation matrix. A realistic recovered band-averaged spectral radiances image vector is generated by matrix-multiplying the optimal out-of-band transform matrix and the realistic measured band-averaged spectral radiances image vector, the realistic recovered band-averaged spectral radiances image vector being free of out-of-band effects. | 01-30-2014 |
20140029794 | METHOD AND APPARATUS FOR DETECTING A PUPIL - A method for detecting a pupil in an image of an eye comprises analysing the image, exploiting an expected shape of the pupil, to identify portions of the image that are candidate portions of the pupil. A first region of the image, which corresponds to a lower part of the pupil, is analysed in preference to a second region of the image, which corresponds to an upper part of the pupil, so as to reduce or avoid errors arising from artifacts that tend to be present in said second region of the image. A computer program and an apparatus for performing the method are also disclosed. | 01-30-2014 |
20140029795 | METHOD AND APPARATUS FOR IDENTIFYING PLAYING BALLS - A method for identifying a selected playing ball from a prescribed number of playing balls, wherein each of the playing balls is provided with a different symbol, wherein: a) the selected playing ball is moved from a starting position past a pickup area of an image recording unit, b) the mass centre of the depiction of the selected playing ball in the image is kept unaltered for a prescribed period, c) the image position and size of the depiction of the playing ball are ascertained, and a check is performed to determine whether portions of the depiction of the playing ball are situated outside a lateral edge of the image, and d) if portions of the depiction of the playing ball are situated outside said lateral edge, the playing ball is returned to the pickup area of the image recording unit and/or is repositioned and steps b) to d) are repeated. | 01-30-2014 |
20140029796 | METHOD FOR THE OPTICAL IDENTIFICATION OF OBJECTS IN MOTION - The method includes the steps of acquiring, by at least one camera having a predetermined magnification ratio, at least one image containing at least one coded information and at least one object and properly associating the coded information with the at least one object on the basis of the position of the coded information and of the at least one object along an advancing direction with respect to a fixed reference system. The above-mentioned position of the coded information is determined starting from the position of the coded information within the image acquired by the camera and on the basis of the distance of the coded information from the camera. Such a distance is in turn determined on the basis of the magnification factor of a reference physical dimension detected at a surface of the object or of the coded information. | 01-30-2014 |
20140029797 | METHOD FOR LOCATING ANIMAL TEATS - A method and apparatus for locating teats of an animal uses an automated three-dimensional image capturing device and includes automatically obtaining and storing a three-dimensional numerical image of the animal that includes a teat region of the animal; making the image available for review by an operator; receiving manually input data designating a location of the teats in the image; from the designated location of the teats, creating a teat position data file containing the location co-ordinates of each defined teat from within the image; updating an animal data folder with the teat position data file. The method references the teat position data file containing the location co-ordinates of each defined teat during an animal related operation involving connecting a milking or cleaning apparatus to the teats of an animal. | 01-30-2014 |
20140029798 | Matching An Approximately Located Query Image Against A Reference Image Set - Aspects of the invention pertain to matching a selected image/photograph against a database of reference images having location information. The image of interest may include some location information itself, such as latitude/longitude coordinates and orientation. However, the location information provided by a user's device may be inaccurate or incomplete. The image of interest is provided to a front end server, which selects one or more cells to match the image against. Each cell may have multiple images and an index. One or more cell match servers compare the image against specific cells based on information provided by the front end server. An index storage server maintains index data for the cells and provides them to the cell match servers. If a match is found, the front end server identifies the correct location and orientation of the received image, and may correct errors in an estimated location of the user device. | 01-30-2014 |
20140029799 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - An image processing device including a subject frame setting section which, by operating a subject detector which detects a subject captured in an image, sets a subject frame which surrounds a predetermined range of the subject detected from the image; an acceptance frame setting section which sets an acceptance frame with a range wider than the subject frame according to the context of the image; a position detecting section which detects a specified position on an image which is specified by a user; and a recognizing section which recognizes a subject which is a tracking target based on the acceptance frame set by the acceptance frame setting section and the specified position detected by the position detecting section. | 01-30-2014 |
20140029800 | POSITION AND ORIENTATION CALIBRATION METHOD AND APPARATUS - A position and orientation measuring apparatus calculates a difference between an image feature of a two-dimensional image of an object and a projected image of a three-dimensional model in a stored position and orientation of the object projected on the two-dimensional image. The position and orientation measuring apparatus further calculates a difference between three-dimensional coordinate information and a three-dimensional model in the stored position and orientation of the object. The position and orientation measuring apparatus then converts a dimension of the first difference and/or the second difference to cause the first difference and the second difference to have an equivalent dimension and corrects the stored position and orientation. | 01-30-2014 |
20140029801 | In-Video Product Annotation with Web Information Mining - A system provides product annotation in a video to one or more users. The system receives a video from a user, where the video includes multiple video frames. The system extracts multiple key frames from the video and generates a visual representation of each key frame. The system compares the visual representation of the key frame with a plurality of product visual signatures, where each visual signature identifies a product. Based on the comparison of the visual representation of the key frame and a product visual signature, the system determines whether the key frame contains the product identified by the visual signature of the product. To generate the plurality of product visual signatures, the system collects multiple training images comprising multiple expert product images obtained from an expert product repository, each of which is associated with multiple product images obtained from multiple web resources. | 01-30-2014 |
20140037134 | GESTURE RECOGNITION USING DEPTH IMAGES - Methods, apparatuses, and articles associated with gesture recognition using depth images are disclosed herein. In various embodiments, an apparatus may include a face detection engine configured to determine whether a face is present in one or more gray images of respective image frames generated by a depth camera, and a hand tracking engine configured to track a hand in one or more depth images generated by the depth camera. The apparatus may further include a feature extraction and gesture inference engine configured to extract features based on results of the tracking by the hand tracking engine, and infer a hand gesture based at least in part on the extracted features. Other embodiments may also be disclosed and claimed. | 02-06-2014 |
20140037135 | CONTEXT-DRIVEN ADJUSTMENT OF CAMERA PARAMETERS - A system and method for adjusting the parameters of a camera based upon the elements in an imaged scene are described. The frame rate at which the camera captures images can be adjusted based upon whether the object of interest appears in the camera's field of view, to improve the camera's power consumption. The exposure time can be set based on the distance of an object from the camera to improve the quality of the acquired camera data. | 02-06-2014 |
20140037136 | Method and System for Determining Poses of Vehicle-Mounted Cameras for In-Road Obstacle Detection - Poses of a movable camera relative to an environment are obtained by determining point correspondences from a set of initial images and then applying 2-point motion estimation to the point correspondences to determine a set of initial poses of the camera. A point cloud is generated from the set of initial poses and the point correspondences. Then, for each next image, the point correspondences and corresponding poses are determined, while updating the point cloud. | 02-06-2014 |
20140037137 | SYSTEMS AND METHODS FOR EFFICIENT 3D TRACKING OF WEAKLY TEXTURED PLANAR SURFACES FOR AUGMENTED REALITY APPLICATIONS - The present system provides an on-the-fly, simple-to-complex 6DOF registration approach using the direct method. On the fly means it does not require training time: a user points a phone/camera at a planar surface and can start tracking it instantly. Simple to complex means the system performs registration at multiple levels of complexity, from 2DOF to 6DOF. By increasing the complexity of the model, the system enables more surfaces to be tracked, and for surfaces that are tracked the system can avoid local-minima solutions, providing more robust and accurate 6DOF tracking. Even surfaces that are very weak in features can be tracked in 6DOF, and virtual content can be registered to them. The system enables playing Augmented Reality games on low-end devices such as mobile phones on almost any surface in the real world. | 02-06-2014 |
20140037138 | MOVING OBJECT RECOGNITION SYSTEMS, MOVING OBJECT RECOGNITION PROGRAMS, AND MOVING OBJECT RECOGNITION METHODS - The moving object recognition system includes: a camera that is installed in a vehicle and captures continuous single-view images; a moving object detecting unit that detects a moving object from the images captured by the camera; a relative approach angle estimating unit that estimates the relative approach angle of the moving object detected by the moving object detecting unit with respect to the camera; a collision risk calculating unit that calculates the risk of the moving object colliding with the vehicle, based on the relationship between the relative approach angle and the moving object direction from the camera toward the moving object; and a reporting unit that reports a danger to the driver of the vehicle in accordance with the risk calculated by the collision risk calculating unit. | 02-06-2014 |
20140037139 | DEVICE AND METHOD FOR RECOGNIZING GESTURE BASED ON DIRECTION OF GESTURE - A device and method for recognizing a gesture according to movement directions of an object. | 02-06-2014 |
20140037140 | METHOD FOR DETERMINING CORRESPONDENCES BETWEEN A FIRST AND A SECOND IMAGE, AND METHOD FOR DETERMINING THE POSE OF A CAMERA - A method for determining correspondences between a first and a second image, comprising the steps of providing a first image and a second image of the real environment, defining a warping function between the first and second image, determining the parameters of the warping function between the first image and the second image by means of an image registration method, determining a third image by applying the warping function with the determined parameters to the first image, determining a matching result by matching the third image and the second image, and determining correspondences between the first and the second image using the matching result and the warping function with the determined parameters. The method may be used in a keyframe based method for determining the pose of a camera based on the determined correspondences. | 02-06-2014 |
20140037141 | METHOD FOR EVALUATING A PLURALITY OF TIME-OFFSET PICTURES, DEVICE FOR EVALUATING PICTURES, AND MONITORING SYSTEM - The invention relates to a method for evaluating a plurality of chronologically staggered images, said method comprising the following steps: | 02-06-2014 |
20140037142 | METHOD AND SYSTEM FOR VEHICLE CLASSIFICATION - A method and system for vehicle classification, and more particularly a hierarchical vehicle classification system using a video and/or video image, a method and system of vehicle classification using a vehicle ground clearance measurement system, and a method and system for classifying passenger vehicles and measuring their properties, and more particularly for capturing a vehicle traveling along a road from a single camera and classifying the vehicle into a vehicle class. | 02-06-2014 |
20140037143 | AUDITING VIDEO ANALYTICS THROUGH ESSENCE GENERATION - Video analytics data is audited through review of selective subsets of visual images from a visual image stream as a function of a temporal relationship of the images to a triggering alert event. The subset comprehends an image contemporaneous with the triggering alert event and one or more other images occurring before or after the contemporaneous image. The generated subset may be presented for review to determine whether the triggering alert event is a true or false alert, or whether additional data from the visual image stream is required to make such a determination. If it is determined from the presented visual essence that the additional data is required to make the true or false determination, then additional data is presented from the visual image stream for review. | 02-06-2014 |
20140037144 | EYELID-DETECTION DEVICE, EYELID-DETECTION METHOD, AND RECORDING MEDIUM - A lower eyelid search window (W) is set. | 02-06-2014 |
20140044305 | OBJECT TRACKING - Embodiments are disclosed herein that relate to the automatic tracking of objects. For example, one disclosed embodiment provides a method of operating a mobile computing device having an image sensor. The method includes acquiring image data, identifying an inanimate moveable object in the image data, determining whether the inanimate moveable object is a tracked object, and, if the inanimate moveable object is a tracked object, then storing information regarding a state of the inanimate moveable object, detecting a trigger to provide a notification of the state of the inanimate moveable object, and providing an output of the notification of the state of the inanimate moveable object. | 02-13-2014 |
20140044306 | METHOD AND APPARATUS FOR DETECTING PROXIMATE INTERFACE ELEMENTS - A method, apparatus and computer program product are therefore provided in order to provide an efficient, aesthetically pleasing display of points of interest in an AR interface that maximizes usability and display efficiency. In this regard, the method, apparatus and computer program product may utilize a mobile terminal to perform pre-processing of interface elements to reduce display clutter and increase efficiency of display processing. Interface elements may be projected onto a cylindrical surface to locate the interface elements relative to the mobile terminal. Interface elements may be analyzed in the projection to identify interface elements that are proximate to one another. Data indicating that particular interface elements are proximate to one another may be stored in a data structure for reference prior to displaying of the interface elements in an AR interface. | 02-13-2014 |
20140044307 | SENSOR INPUT RECORDING AND TRANSLATION INTO HUMAN LINGUISTIC FORM - Systems, methods, and devices use a mobile device's sensor inputs to automatically draft natural language messages, such as text messages or email messages. In the various embodiments, sensor inputs may be obtained and analyzed to identify subject matter which a processor of the mobile device may reflect in words included in a communication generated for the user. In an embodiment, subject matter associated with a sensor data stream may be associated with a word, and the word may be used to assemble a natural language narrative communication for the user, such as a written message. | 02-13-2014 |
20140044308 | IMAGE DETERMINING METHOD AND OBJECT COORDINATE COMPUTING APPARATUS - An image determining method for determining which pixels in an image are specific image pixels of a specific image, comprising: (a) determining which pixels in the image have brightness values larger than a threshold value; (b) determining the pixels having brightness values larger than the threshold value as the specific image pixels; and (c) determining pixels in a predetermined range of at least one of the specific image pixels as the specific image pixels as well. | 02-13-2014 |
20140044309 | HUMAN TRACKING SYSTEM - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities. | 02-13-2014 |
20140044310 | METHOD OF IDENTIFYING AN OBJECT IN A VISUAL SCENE - An image of a visual scene, comprising a plurality of pixels, is acquired and an associated range map is either determined therefrom or separately acquired. Elements of the range map comprise distances from the camera for each pixel of the image. In one aspect, either the image or the range map is processed with a connected-components sieve filter that locates clusters of pixels or elements that are connected to one another along adjacent rows or columns, or diagonally. In another aspect, a cross-range value of a range-map element is compared with a down-range-responsive cross-range threshold of a boundary of a collision-possible space and the pixel or element is nulled or ignored if associated with a location that is not in the collision-possible space. The collision-possible space is responsive to an operating condition of a vehicle from which the image is acquired. | 02-13-2014 |
20140044311 | NEIGHBORING VEHICLE DETECTING APPARATUS - A neighboring vehicle detecting device according to the invention includes: a neighboring vehicle detecting part configured to detect a neighboring vehicle behind an own vehicle; a curved road detecting part configured to detect information related to a curvature radius of a curved road; a storing part configured to store a detection result of the curved road detecting part; and a processing part configured to set a detection target region behind the own vehicle based on the detection result of the curved road detecting part which is stored in the storing part and is related to the curved road behind the own vehicle, wherein the processing part detects the neighboring vehicle behind the own vehicle based on a detection result in the set detection target region by the neighboring vehicle detecting part, the neighboring vehicle traveling on a particular lane which has a predetermined relationship with a traveling lane of the own vehicle. | 02-13-2014 |
20140050352 | METHOD OF IDENTIFYING A TRACKED OBJECT FOR USE IN PROCESSING HYPERSPECTRAL DATA - The invention relates to a method of identifying a tracked object that has a known database of hyperspectral and spatial information. The method associates an identifier with the tracked object; selects a parameter associated with the hyperspectral or spatial information of the tracked object; detects a deviation in the selected parameter; compares the deviation with the database; and if the deviation exceeds a predetermined threshold, assigns a new identifier to the tracked object, and if the deviation does not exceed the predetermined threshold, continues tracking the tracked object. | 02-20-2014 |
20140050353 | EXTRACTING FEATURE QUANTITIES FROM AN IMAGE TO PERFORM LOCATION ESTIMATION - A feature extraction method for extracting a feature from an image includes receiving an image and measured acceleration data from a mobile device; obtaining a gravity vector in the image in a camera coordinate system based on the measured acceleration data; obtaining a vanishing point in the image in a vertical direction in a screen coordinate system using the gravity vector; obtaining differential vectors along two axes for each pixel in the screen coordinate system; obtaining a connection line vector connecting each of the pixels with the vanishing point; identifying a vertical edge based on determining that an angle formed by the differential vector and the connection line vector is within a certain threshold range; obtaining the sum of strengths of vertical edges and writing the sum in a predetermined variable array; extracting a keypoint based on the variable array; and calculating a feature quantity from the keypoint. | 02-20-2014 |
20140050354 | Automatic Gesture Recognition For A Sensor System - A method for gesture recognition including detecting one or more gesture-related signals using an associated plurality of detection sensors; and evaluating a gesture detected from the one or more gesture-related signals using an automatic recognition technique to determine if the gesture corresponds to one of a predetermined set of gestures. | 02-20-2014 |
20140050355 | METHOD AND SYSTEM FOR DETECTING SEA-SURFACE OIL - A behavioral recognition system may include both a computer vision engine and a machine learning engine configured to observe and learn patterns of behavior in video data. Certain embodiments may be configured to detect and evaluate the presence of sea-surface oil on the water surrounding an offshore oil platform. The computer vision engine may be configured to segment image data into detected patches or blobs of surface oil (foreground) present in the field of view of an infrared camera (or cameras). A machine learning engine may evaluate the detected patches of surface oil to learn to distinguish between sea-surface oil incident to the operation of an offshore platform and the appearance of surface oil that should be investigated by platform personnel. | 02-20-2014 |
20140050356 | MULTI-MODE VIDEO EVENT INDEXING - Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. A high-quality analytic mode is selected from multiple modes and applied to video input images via a hardware device to determine object activity within the video input images if the determined level of detected quality of object distinctiveness meets a threshold level of quality, else a low-quality analytic mode is selected and applied to the video input images via a hardware device to determine object activity within the video input images, wherein the low-quality analytic mode is different from the high-quality analytic mode. | 02-20-2014 |
20140050357 | METHOD FOR DETERMINING A PARAMETER SET DESIGNED FOR DETERMINING THE POSE OF A CAMERA AND/OR FOR DETERMINING A THREE-DIMENSIONAL STRUCTURE OF THE AT LEAST ONE REAL OBJECT - A method for determining a parameter set which is designed to be used for determining the pose of a camera with regard to at least one real object and/or for determining a three-dimensional structure of the at least one real object, comprises the steps of providing a reference image including at least a part of the at least one real object, capturing at least one current image including at least a part of the at least one real object, providing an initial estimate of a parameter set which includes at least the three-dimensional translation in the common coordinate system between the pose of the camera when capturing the reference image and the pose of the camera when capturing the current image, and the depth of at least a first point of the at least one real object in the common coordinate system, and determining an update of the estimate of the parameter set by means of an iterative minimization process, wherein in the iterative minimization process a first set of pixels in the reference image is compared with a computed set of pixels in the current image and the computed set of pixels in the current image used for the comparison varies at each iteration. | 02-20-2014 |
20140050358 | METHOD OF FACIAL LANDMARK DETECTION - Detecting facial landmarks in a face detected in an image may be performed by first cropping a face rectangle region of the detected face in the image and generating an integral image based at least in part on the face rectangle region. Next, a cascade classifier may be executed for each facial landmark of the face rectangle region to produce one response image for each facial landmark based at least in part on the integral image. A plurality of Active Shape Model (ASM) initializations may be set up. ASM searching may be performed for each of the ASM initializations based at least in part on the response images, each ASM search resulting in a search result having a cost. Finally, a search result of the ASM searches having a lowest cost function may be selected, the selected search result indicating locations of the facial landmarks in the image. | 02-20-2014 |
20140050359 | EXTRACTING FEATURE QUANTITIES FROM AN IMAGE TO PERFORM LOCATION ESTIMATION - A feature extraction method for extracting a feature from an image includes receiving an image and measured acceleration data from a mobile device; obtaining a gravity vector in the image in a camera coordinate system based on the measured acceleration data; obtaining a vanishing point in the image in a vertical direction in a screen coordinate system using the gravity vector; obtaining differential vectors along two axes for each pixel in the screen coordinate system; obtaining a connection line vector connecting each of the pixels with the vanishing point; identifying a vertical edge based on determining that an angle formed by the differential vector and the connection line vector is within a certain threshold range; obtaining the sum of strengths of vertical edges and writing the sum in a predetermined variable array; extracting a keypoint based on the variable array; and calculating a feature quantity from the keypoint. | 02-20-2014 |
20140050360 | SYSTEMS AND METHODS FOR PRESENCE DETECTION - Systems and methods are provided for presence detection using an image system. The image system may be a camera that is integrated into an electronic device. In some embodiments, the image system can accommodate multiple operating modes of the electronic device. For example, when the electronic device is operating in a normal power mode, control circuitry of the image system can detect when a user has left and is no longer using the electronic device. When the electronic device is operating in a power saving mode, the control circuitry can detect user presence (e.g., when a user has come back to the electronic device). In some embodiments, the control circuitry can adjust for both gradual and sudden light changes. | 02-20-2014 |
20140050361 | APPARATUS AND PROCESS FOR TREATING BIOLOGICAL, MICROBIOLOGICAL AND/OR CHEMICAL SAMPLES - A process for treating samples of biological, microbiological and/or chemical material, comprising at least a step of arranging a plurality of samples of biological, microbiological and/or chemical material on a corresponding plurality of housing seats ( | 02-20-2014 |
20140056470 | TARGET OBJECT ANGLE DETERMINATION USING MULTIPLE CAMERAS - Systems, methods, and computer media for determining the angle of a target object with respect to a device are provided herein. Target object information captured at approximately the same time by at least two cameras can be received. The target object information can comprise images or distances from the target object to the corresponding camera. An angle between the target object and the device can be determined based on the target object information. When the target object information includes images, the angle can be determined based on a correlation between two images. When the target object information includes distances from the target object to the corresponding camera, the angle can be calculated geometrically. | 02-27-2014 |
20140056471 | OBJECT TRACKING USING BACKGROUND AND FOREGROUND MODELS - Various arrangements for modeling a scene are presented. A plurality of images of the scene captured over a period of time may be received, each image comprising a plurality of pixels. A plurality of background models may be created using the plurality of images. At least one background model may be created for each pixel of the plurality of pixels. A plurality of foreground models may be created using the plurality of images. A foreground model may be created for each pixel of at least a first subset of pixels of the plurality of pixels. The background models and the foreground models may be indicative of the scene over the period of time. | 02-27-2014 |
20140056472 | HAND DETECTION, LOCATION, AND/OR TRACKING - Various arrangements for identifying a location of a hand of a person are presented. A group of pixels may be identified in an image of a scene as including the person. A reference point may be set for the group of pixels identified as the person. The hand may be identified using a local distance maximum from the reference point. An indication, such as coordinates, of the location of the hand may be output based on the local distance maximum. | 02-27-2014 |
20140056473 | OBJECT DETECTION APPARATUS AND CONTROL METHOD THEREOF, AND STORAGE MEDIUM - The object detection apparatus prevents or eliminates detection errors caused by changes of an object which frequently appears in a background. To this end, an object detection apparatus includes a detection unit which detects an object region by comparing an input video from a video input device and a background model, a selection unit which selects a region of a background object originally included in a video, a generation unit which generates background object feature information based on features included in the background object region, and a determination unit which determines whether or not the object region detected from the input video is a background object using the background object feature information. | 02-27-2014 |
20140056474 | METHOD AND APPARATUS FOR RECOGNIZING POLYGON STRUCTURES IN IMAGES - Technology is disclosed herein for recognizing and processing planar features in images such as walls of rooms. A method according to the technology receives a digital image at a computing device. The computing device recognizes a polygonal region of the digital image corresponding to a planar feature of an object captured in the digital image. The computing device further processes the polygonal region of the digital image according to user instructions. The processed polygonal region of the digital image is visualized on a display of the computing device in real time. | 02-27-2014 |
20140056475 | APPARATUS AND METHOD FOR RECOGNIZING A CHARACTER IN TERMINAL EQUIPMENT - A text recognition apparatus and method recognizes text in the image taken by a camera. The text recognition method of a mobile terminal includes displaying a preview image input from a camera; recognizing a text image where a pointer is placed on the preview image; displaying recognized text data and at least one action item corresponding to the recognized text data; and executing, when the action item is selected, an action mapped to the selected action item. | 02-27-2014 |
20140056476 | INCORPORATING VIDEO META-DATA IN 3D MODELS - A moving object tracked within a field of view environment of a two-dimensional data feed of a calibrated video camera is represented by a three-dimensional model. An appropriate three-dimensional mesh-based volumetric model for the object is initialized by using a back-projection of a corresponding two-dimensional image. A texture of the object is projected onto the three-dimensional model, and two-dimensional tracks of the object are upgraded to three-dimensional motion to drive a three-dimensional model. | 02-27-2014 |
20140056477 | VIDEO OBJECT FRAGMENTATION DETECTION AND MANAGEMENT - Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation ( | 02-27-2014 |
20140056478 | PRODUCT IDENTIFICATION USING MOBILE DEVICE - A method and apparatus for obtaining an image and providing one or more document files to a user is disclosed. The method may include receiving an image of a target object using an imaging device, analyzing the image to identify one or more features, and accessing a model database to identify an object model having features that match the identified features from the image. When the system determines that more than one model may be a match, the method looks for distinguishing features of the target object and selects a model that includes the distinguishing features. The method then includes, retrieving a document file that corresponds to the identified model from a file database, and providing the document file to a user. | 02-27-2014 |
20140064552 | System And Method For Utilizing Enhanced Scene Detection In A Depth Estimation Procedure - A system for performing an enhanced scene detection procedure including a sensor device for capturing blur images of a photographic target. The blur images each correspond to a scene type that is detected from among a first scene type, which is typically a pillbox blur scene, and a second scene type, which is typically a Gaussian blur scene. A scene detector performs an initial scene detection procedure to identify a candidate scene type for the blur images. The scene detector then performs the enhanced scene detection procedure to identify a final scene type for the blur images. | 03-06-2014 |
20140064553 | SYSTEM AND METHOD FOR LEAK DETECTION - This disclosure describes embodiments of systems and methods that can identify and image leaks and spills while simultaneously viewing the unchanging background. In one embodiment, the system includes an image capture device and an image processing device, which receives a first image frame and a second image frame from the image capture device. The image processing device can identify a region of variation in the second image frame that corresponds to a change in a scene parameter (e.g., temperature) as between the first image frame and the second image frame. These embodiments provide a normal dynamic range thermal image that can be colorized to identify the leak or spill as the leak or spill develops over time. The systems and methods can minimize false alarms, addressing potential issues that arise in connection with meteorological events (e.g., precipitation), noise sources, and relative motion between the image capture device and the scene. | 03-06-2014 |
20140064554 | IMAGE STATION MATCHING, PREPROCESSING, SPATIAL REGISTRATION AND CHANGE DETECTION WITH MULTI-TEMPORAL REMOTELY-SENSED IMAGERY - A method for collecting and processing remotely sensed imagery in order to achieve precise spatial co-registration (e.g., matched alignment) between multi-temporal image sets is presented. Such precise alignment or spatial co-registration of imagery can be used for change detection, image fusion, and temporal analysis/modeling. Further, images collected in this manner may be further processed in such a way that image frames or line arrays from corresponding photo stations are matched, co-aligned and if desired merged into a single image and/or subjected to the same processing sequence. A second methodology for automated detection of moving objects within a scene using a time series of remotely sensed imagery is also presented. Specialized image collection and preprocessing procedures are utilized to obtain precise spatial co-registration (image registration) between multitemporal image frame sets. In addition, specialized change detection techniques are employed in order to automate the detection of moving objects. | 03-06-2014 |
20140064555 | System and Method for Increasing Resolution of Images Obtained from a Three-Dimensional Measurement System - A system uses range and Doppler velocity measurements from a lidar system and images from a video system to estimate a six degree-of-freedom (6DOF) trajectory of a target. The 6DOF transformation parameters are used to transform multiple images to the frame time of a selected image, thus obtaining multiple images at the same frame time. These multiple images may be used to increase a resolution of the image at each frame time, obtaining the collection of the superresolution images. | 03-06-2014 |
20140064556 | OBJECT DETECTION SYSTEM AND COMPUTER PROGRAM PRODUCT - According to an embodiment, an object detection system includes an obtaining unit, an estimating unit, a setting unit, a calculating unit, and a detecting unit. The obtaining unit is configured to obtain an image in which an object is captured. The estimating unit is configured to estimate a condition of the object. The setting unit is configured to set, in the image, a plurality of areas that have at least one of a relative positional relationship altered according to the condition and a shape altered according to the condition. The calculating unit is configured to calculate a feature value of an image covering the areas. The detecting unit is configured to compare the calculated feature value with a feature value of a predetermined registered object, and detect the registered object corresponding to the object. | 03-06-2014 |
20140064557 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND A COMPUTER-READABLE NON-TRANSITORY MEDIUM - An image processing method includes recognizing a first object that is included in the image data, calculating a positional variation amount of a feature point that corresponds to the first object and is moving in an image center direction so as to calculate a moving speed of the first object on the basis of the positional variation amount, and determining whether or not the first object is a gaze target object of the user, in accordance with a behavior of the first object, the behavior being obtained on the basis of the positional variation amount of the first object among the plurality of image data, of which acquisition times are respectively different, and whether or not the object continuously exists, for a predetermined period of time, in a second region positioned inside the first region and including a center point of the image data. | 03-06-2014 |
20140064558 | OBJECT TRACKING APPARATUS AND METHOD AND CAMERA - The invention discloses an object tracking apparatus and method and a camera. The object tracking apparatus is used for determining, according to a predetermined object region containing an object in an initial image of an image sequence, an object region estimated to contain the object in each subsequent image thereof, and comprises a first tracking unit configured to determine a first candidate object region in each subsequent image, a size of which is fixed for each subsequent image; a second tracking unit configured to determine a second candidate object region in each subsequent image based on the first candidate object region thereof, the second candidate object region for each subsequent image being adapted to a shape or size of the object therein; and a weighting unit configured to calculate a weighted sum of the first and second candidate object regions of each subsequent image as the object region thereof. | 03-06-2014 |
20140064559 | COMMODITY RECOGNITION APPARATUS AND COMMODITY RECOGNITION METHOD - A commodity recognition apparatus detects an appearance feature amount of a commodity included in an image captured by an image capturing unit, extracts a candidate of the commodity included in the captured image by comparing the data of the appearance feature amount with the feature amount data in a recognition dictionary file, recognizes a character string included in the image captured by the image capturing unit, and determines a commodity of a recognition target according to the recognized character string and the extracted candidate of the commodity. | 03-06-2014 |
20140064560 | MOTION-VALIDATING REMOTE MONITORING SYSTEM - A method of autonomously monitoring a remote site, including the steps of locating a primary detector at a site to be monitored; creating one or more geospatial maps of the site using an overhead image of the site; calibrating the primary detector to the geospatial map using a detector-specific model; detecting an object in motion at the site; tracking the moving object on the geospatial map; and alerting a user to the presence of motion at the site. In addition, thermal image data from infrared cameras, rather than optical/visual image data, is used to create detector-specific models and geospatial maps in substantially the same way that optical cameras and optical image data would be used. | 03-06-2014 |
20140064561 | Object Information Derived from Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 03-06-2014 |
20140064562 | APPROACHING-OBJECT DETECTOR, APPROACHING OBJECT DETECTING METHOD, AND RECORDING MEDIUM STORING ITS PROGRAM - An approaching-object detector for detecting an object approaching an own vehicle includes: a memory; and a processor configured to perform a process, the process including extracting a plurality of corresponding feature points from chronologically captured images, which are obtained by capturing the object using an imaging device provided for the own vehicle, detecting a behavior among the captured images in regard to each of the plurality of feature points, determining whether or not the behavior is random in regard to each of the plurality of feature points, and determining whether or not the object is approaching the own vehicle based on a behavior of a feature point whose behavior is determined to be not random among the plurality of feature points, and outputting a result of the determination. | 03-06-2014 |
20140064563 | IMAGE PROCESSING APPARATUS, METHOD OF CONTROLLING IMAGE PROCESSING APPARATUS AND STORAGE MEDIUM - An image processing apparatus comprises: detection means for detecting a region corresponding to a diseased part reference region other than a diseased part region in an input image; and identifying means for identifying the diseased part region based on the corresponding region detected by the detection means. | 03-06-2014 |
20140072168 | VIDEO-TRACKING FOR VIDEO-BASED SPEED ENFORCEMENT - A method for tracking a moving vehicle includes detecting the vehicle by acquiring a series of temporally related image frames. In an initial frame, the detecting includes locating a reference feature representative of a vehicle. The method further includes setting the reference feature as a full-size template. The method includes tracking the vehicle by searching a current frame for features matching the full-size template and at least one scaled template. The tracking further includes setting as an updated template the one of the full-size and scaled templates most closely matching the feature. The method includes repeating the tracking using the updated template for each next frame in the series. | 03-13-2014 |
20140072169 | LOCATION DETERMINATION FOR AN OBJECT USING VISUAL DATA - A global position of an observed object is determined by obtaining a first global position of an observed object with at least one positioning device. A determination is made as to whether a set of stored visual characteristic information of at least one landmark matches a visual characteristic information set obtained from at least one captured image comprising a scene associated with the observed object. In response to the set of stored visual characteristic information matching the obtained visual characteristic information set, a second global position of the observed object is determined based on a set of stored location information associated with the at least one landmark and the first global position. | 03-13-2014 |
20140072170 | 3D HUMAN POSE AND SHAPE MODELING - Methods, devices and systems for performing video content analysis to detect humans or other objects of interest in a video image are disclosed. The detection of humans may be used to count a number of humans, to determine a location of each human and/or perform crowd analyses of monitored areas. | 03-13-2014 |
20140072171 | SYSTEM AND METHOD FOR GENERATING SEMANTIC ANNOTATIONS - In accordance with one aspect of the present technique, a method is disclosed. The method includes receiving a new video from one or more sensors and generating a new content graph (CG) based on the new video. The method also includes comparing the new CG with a plurality of prior CGs. The method further includes identifying a first portion of the new CG matching a portion of a first prior CG and a second portion of the new CG matching a portion of the second prior CG. The method further includes analyzing a first set of semantic annotations (SAs) associated with the portion of the first prior CG and a second set of SAs associated with the portion of the second prior CG. The method further includes generating a sequence of SAs for the new video based on the analysis of the first and the second set of SAs. | 03-13-2014 |
20140072172 | TECHNIQUES FOR FACE DETECTION AND TRACKING - Techniques are disclosed that involve face detection. For instance, face detection tasks may be decomposed into sets of one or more sub-tasks. In turn, the sub-tasks of the sets may be allocated across multiple image frames. This allocation may be based on a multiple layer, quad-tree approach. In addition, face tracking tasks may be performed. | 03-13-2014 |
20140072173 | LOCATION DETERMINATION FOR AN OBJECT USING VISUAL DATA - A global position of an observed object is determined by obtaining a first global position of an observed object with at least one positioning device. A determination is made as to whether a set of stored visual characteristic information of at least one landmark matches a visual characteristic information set obtained from at least one captured image comprising a scene associated with the observed object. In response to the set of stored visual characteristic information matching the obtained visual characteristic information set, a second global position of the observed object is determined based on a set of stored location information associated with the at least one landmark and the first global position. | 03-13-2014 |
20140072174 | LOCATION-BASED SIGNATURE SELECTION FOR MULTI-CAMERA OBJECT TRACKING - Disclosed herein are a method, system, and computer program product for determining a correspondence between a first object ( | 03-13-2014 |
20140072175 | FAST ARTICULATED MOTION TRACKING - The present technology relates to a computer-implemented method for tracking an object in a sequence of multi-view input video images comprising the steps of acquiring a model of the object, tracking the object in the multi-view input video image sequence, and using the model. | 03-13-2014 |
20140072176 | METHOD AND APPARATUS FOR IDENTIFYING A POSSIBLE COLLISION OBJECT - An imaging unit is arranged in a motor vehicle. The imaging unit is designed for providing, as a function of an image acquired by it, a digital source image of a predefined image size. A first intermediate image of a predefined first intermediate image size is generated by reducing a resolution of the source image for the sake of reducing pixels. Furthermore, a second intermediate image of the predefined image size is generated such that it comprises the first intermediate image. The second intermediate image is analyzed by a predefined detector in order to examine whether an object of a predefined object category is situated in the second intermediate image, the detector being designed for analyzing a predefined image detail and for detecting an object of a predefined object category of a predefined object size range. | 03-13-2014 |
20140079280 | AUTOMATIC DETECTION OF PERSISTENT CHANGES IN NATURALLY VARYING SCENES - A method for detecting a persistent change in a dynamically varying scene includes: obtaining a set of reference images of the scene; transforming the reference images into an abstract feature space; classifying pixels of the reference images in the abstract feature space; generating a stable reduced-reference image based on the classifications of corresponding pixels; obtaining a set of test images of the scene; transforming the test images into the abstract feature space; classifying pixels of the test images in the abstract feature space; generating a stable test image based on the classifications of corresponding pixels; and comparing the stable reduced-reference and test images to one another to detect a difference therein, the difference corresponding to a persistent change in the dynamically varying scene occurring between when the reference images and the test images were obtained. | 03-20-2014 |
20140079281 | AUGMENTED REALITY CREATION AND CONSUMPTION - Architectures and techniques for augmenting content on an electronic device are described herein. In particular implementations, a user may use a portable device (e.g., a smart phone, tablet computer, etc.) to capture images of an environment, such as a room, outdoors, and so on. As the images of the environment are captured, the portable device may send information to a remote device (e.g., server) to determine whether augmented reality content is associated with a textured target in the environment (e.g., a surface or portion of a surface). When such a textured target is identified, the augmented reality content may be sent to the portable device. The augmented reality content may be displayed in an overlaid manner on the portable device as real-time images are displayed. | 03-20-2014 |
20140079282 | System And Method For Detecting, Tracking And Counting Human Objects Of Interest Using A Counting System And A Data Capture Device - A system for counting and tracking objects of interest within a predefined area with a sensor that captures object data and a data capturing device that receives subset data to produce reports that provide information related to a time, geographic, behavioral, or demographic dimension. | 03-20-2014 |
20140079283 | GENERATING A REPRESENTATION OF AN OBJECT OF INTEREST - A volumetric image of a space is acquired from an imaging system. The space includes an object of interest and another object, and the volumetric image includes data representing the object of interest and the other object. A two-dimensional radiograph of the space is acquired from the imaging system. The two-dimensional radiograph of the space includes data representing the object of interest and the other object. The two-dimensional radiograph and the volumetric image are compared at the imaging system. A two-dimensional image is generated based on the comparison. The generated two-dimensional image includes the object of interest and excludes the other object. | 03-20-2014 |
20140079284 | ELECTRONIC SYSTEM - An electronic system comprises an image-sensing device and a processor coupled with the image-sensing device. The image-sensing device includes an image-sensing area configured to generate a picture. The picture includes a noisy region. The processor is configured to select a tracking region from the portion of the picture outside of the noisy region. The tracking region corresponds to an operative region of the image sensing area. | 03-20-2014 |
20140079285 | MOVEMENT PREDICTION DEVICE AND INPUT APPARATUS USING THE SAME - A movement prediction device includes a CCD camera (image pickup device) for obtaining image information and a control unit for performing prediction of the movement of an operation body. In the control unit, a region regulation unit identifies a movement detection region on the basis of the image information, a computation unit computes, for example, a motion vector of a center of gravity of the operation body and tracks the movement locus of the operation body which has entered the movement detection region, and a movement prediction unit performs prediction of the movement of the operation body on the basis of the movement locus. | 03-20-2014 |
20140079286 | METHOD AND APPARATUS OF OBJECT RECOGNITION - A method for object recognition, which recognizes an object included in an image, comprises extracting image features from the image; extracting a first candidate object matched to each of the image features with the highest similarity score from among objects within an object database which previously stores information about a target object for recognition; extracting a second candidate object based on a first matching score of the first candidate object; and, based on a second matching score of the second candidate object calculated by matching features of the second candidate object and the image features, recognizing whether the second candidate object is the target object included in the image. | 03-20-2014 |
20140079287 | SONAR IMAGING - A method for recognising a target in a sonar image, the method comprising: normalising a sonar image; using/defining multiple test objects; rotating each test object between multiple positions; using a projection of each test object in each position as a template, so that multiple templates are provided for each test object, each template corresponding to a different rotational position; applying the multiple templates for the multiple test objects to the normalised image; and creating at least one feature vector for the image for use in target recognition. | 03-20-2014 |
20140086448 | Appearance Model Based Automatic Detection in Sensor Images - Embodiments relate to appearance model based automatic detection in sensor images. In one arrangement, a detection apparatus of an Automated Threat Detection system utilizes generative, statistical models of human appearance in sensor images, such as MMW images, as a basis for comparison with images received by a sensor such as an MMW sensor. The detection apparatus can effectively replicate, through the models, the approach of human observers to provide a relatively high throughput of subjects. Additionally, by utilizing generative, statistical models as the basis for comparison against received MMW images, the detection apparatus can minimize detection errors caused by variance in body geometry, skin reflectance, and posture of the subject to maintain a relatively low rate of false alarms and a relatively high rate of detection of threats. | 03-27-2014 |
20140086449 | INTERACTION SYSTEM AND MOTION DETECTION METHOD - A motion detection method applied in an interaction system is provided. The method has the following steps of: retrieving a plurality of images; recognizing a target object from the retrieved images; calculating a first integral value of a position offset value of the target object along a first direction from the retrieved images; determining whether the calculated first integral value is larger than a first predetermined threshold value; and determining the target object as moving when the calculated first integral value is larger than the first predetermined threshold value. | 03-27-2014 |
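The integral test in the entry above is simple enough to sketch directly: summing successive offsets along one axis gives the net displacement, so back-and-forth jitter cancels instead of accumulating. The function names and the threshold are illustrative, not from the patent.

```python
def axis_integral(positions):
    """Integral (sum) of successive position offsets along one axis,
    i.e. the net displacement of the target object over the frames."""
    return sum(b - a for a, b in zip(positions, positions[1:]))

def is_moving(positions, threshold):
    """Declare motion only when the accumulated offset exceeds the
    threshold; oscillating jitter integrates to roughly zero."""
    return abs(axis_integral(positions)) > threshold

moving = is_moving([0, 3, 6, 9], threshold=5)      # steady drift: True
jitter = is_moving([5, 6, 5, 6, 5], threshold=5)   # oscillation: False
```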
20140086450 | FACIAL TRACKING METHOD - A facial tracking method for detecting and tracking at least one face image in a region during a time period. The facial tracking method includes a step of performing an image acquiring operation, a step of performing a facial detecting operation to detect whether there is any face image in the entire of a current photo image, and at least one step of performing a facial tracking operation. For performing the facial tracking operation, plural tracking frames are located around a face image of the current photo image, and a similarity between the face image of the current photo image and the image included in each tracking frame is calculated in order to judge whether the face image exists in the next photo image. By the facial tracking method of the present invention, the time period of tracking face images is largely reduced. | 03-27-2014 |
20140086451 | METHOD AND APPARATUS FOR DETECTING CONTINUOUS ROAD PARTITION - A method and an apparatus are disclosed for detecting a continuous road partition with a height, the method comprising the steps of: obtaining disparity maps having the continuous road partition, and U-disparity maps corresponding to the disparity maps; obtaining an intermediate detection result of the continuous road partition detected from the U-disparity maps of first N frames; and detecting the continuous road partition from the U-disparity map of a current frame, based on the obtained intermediate detection result. | 03-27-2014 |
20140086452 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR PERIODIC MOTION DETECTION IN MULTIMEDIA CONTENT - In an example embodiment a method, apparatus and computer program product are provided. The method includes facilitating selection of a region of interest (ROI) in a plurality of frames of a multimedia content. The ROI is associated with a motion of at least one object. An object mobility data matrix associated with the ROI is determined in the plurality of frames. The object mobility data matrix is indicative of a difference in motion of the at least one object in the plurality of frames. A projection of the object mobility data matrix is determined on a line. The motion of the at least one object in the ROI is determined across the plurality of frames as a periodic motion or a non-periodic motion based on the projection of the object mobility data matrix. | 03-27-2014 |
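Once the mobility matrix is projected onto a line, deciding periodic vs. non-periodic becomes a 1-D signal question. One common way to make that decision, sketched below under my own assumptions (the abstract names no test statistic), is a normalized autocorrelation peak: a repeating projection correlates strongly with a lagged copy of itself, an isolated transient does not.

```python
def is_periodic(signal, min_corr=0.8):
    """Best normalized autocorrelation over nonzero lags; a high peak
    suggests the projected motion repeats. The 0.8 cutoff is an
    illustrative assumption, not a value from the patent."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    energy = sum(c * c for c in centered) or 1.0
    best = max(
        sum(centered[i] * centered[i + lag] for i in range(n - lag)) / energy
        for lag in range(1, n // 2))
    return best >= min_corr

periodic = is_periodic([0, 1, 0, -1] * 8)   # repeating waveform: True
transient = is_periodic([1] + [0] * 15)     # single impulse: False
```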
20140086453 | TIRE DEFECT DETECTION METHOD - Provided is a tire defect detection method capable of accurately detecting a thinly extending convex defect of a tire surface. Prior to the start of Step S | 03-27-2014 |
20140093125 | Personalized Advertising at a Point of Sale Unit - Methods and apparatuses are provided to deliver advertisements personalized for customers proximate a point of sale unit, e.g., a fuel dispenser. An image capturing unit captures at least one image of an object disposed in a specified range of the point of sale unit, e.g., within 6 feet of the point of sale unit. A processor determines a visually perceptible characteristic of the object based on the captured image(s), selects an advertisement based on the determined characteristic, and outputs the selected advertisement from an advertisement unit proximate the point of sale unit. | 04-03-2014 |
20140093126 | LIGHT ID ERROR DETECTION AND CORRECTION FOR LIGHT RECEIVER POSITION DETERMINATION - A light receiver records images of light beams originating from a neighborhood of lights, and demodulates identifiers (IDs) from them at determined image positions. The receiver retrieves a set of neighbor IDs for each demodulated ID and a real-world position of the corresponding light. The receiver cross-references the demodulated IDs against the retrieved sets of neighbor IDs to reveal errors in the demodulated IDs. The receiver corrects the errors to produce correct IDs each indexing a real-world position that is correctly matched to one of the determined light beam positions. The receiver determines a position of the receiver relative to the light transmitter based on the correctly matched real-world and determined light beam positions. | 04-03-2014 |
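The cross-referencing step in the entry above can be sketched in a few lines: a demodulated ID that never appears in the neighbour sets retrieved for the *other* demodulated IDs is inconsistent with the lighting layout and is flagged as a likely demodulation error. The data shapes below (a list of IDs, a dict of neighbour sets) are my own assumptions.

```python
def flag_suspect_ids(demodulated, neighbors):
    """Return demodulated IDs that appear in no other demodulated ID's
    neighbour set: in a dense neighbourhood of lights, a correctly
    demodulated ID should be corroborated by its neighbours."""
    return [i for i in demodulated
            if not any(i in neighbors.get(j, set())
                       for j in demodulated if j != i)]

# Lights 1 and 2 corroborate each other; 9 belongs to a distant fixture,
# so it is flagged as a probable bit error to be corrected.
neighbor_sets = {1: {2, 3}, 2: {1, 3}, 9: {8, 10}}
suspects = flag_suspect_ids([1, 2, 9], neighbor_sets)  # → [9]
```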
20140093127 | Method and System for Using Fingerprints to Track Moving Objects in Video - A method and system for tracking moving objects in a sequence of images. In one illustrative embodiment, a current image in the sequence of images is segmented into a plurality of segments. Segments in the plurality of segments belonging to a same motion profile are fused together to form a set of master segments. A set of target segments is identified from the set of master segments. The set of target segments represent a set of moving objects in the current image. A set of fingerprints is created for use in tracking the set of moving objects in a number of subsequent images in the sequence of images. | 04-03-2014 |
20140093128 | THRESHOLD SETTING DEVICE FOR SETTING THRESHOLD USED IN BINARIZATION PROCESS, OBJECT DETECTION DEVICE, THRESHOLD SETTING METHOD, AND COMPUTER READABLE STORAGE MEDIUM - A threshold setting device, an object detection device, a threshold setting method, and a computer readable storage medium are shown. According to one implementation, the threshold setting device includes, an image acquisition unit, a ratio acquisition unit, and a setting unit. The image acquisition unit acquires an image including a specific object. The ratio acquisition unit acquires ratio information related to a ratio of a plurality of colors present in the specific object. The setting unit sets, based on the ratio information acquired by the ratio acquisition unit, a threshold used in a binarization process performed on the image including the specific object acquired by the image acquisition unit. | 04-03-2014 |
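One plausible reading of the entry above, sketched here under my own assumptions, is percentile thresholding: if the ratio information says a known fraction of the object's pixels belong to the bright colour, the binarization threshold can be placed so that exactly that fraction lands in the bright class. The patent does not spell out this rule; the sketch is illustrative.

```python
def threshold_from_ratio(pixels, bright_ratio):
    """Pick a binarization threshold so that roughly `bright_ratio` of the
    pixels fall into the bright class (a rank/percentile threshold)."""
    ranked = sorted(pixels)
    cut = int(round(len(ranked) * (1.0 - bright_ratio)))
    cut = max(0, min(len(ranked) - 1, cut))
    return ranked[cut]

# If 30% of the object is known to be the bright colour, the threshold
# lands so that 3 of these 10 pixels binarize to 1.
pix = list(range(10, 101, 10))
t = threshold_from_ratio(pix, 0.3)          # → 80
bright = sum(p >= t for p in pix)           # → 3
```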
20140093129 | OBJECT DETECTION APPARATUS AND METHOD - According to one embodiment, an object detection apparatus includes an acquisition unit, a first detector, a determination unit, and a second detector. The acquisition unit acquires frames in a time-series manner. The first detector detects a predetermined object in each of the frames. The determination unit stores detection results corresponding to the frames, compares a first detection result corresponding to a first frame of the frames with a second detection result corresponding to a second frame of the frames, and determines whether a false negative of the predetermined object exists in the second frame. The second detector detects the predetermined object in the second frame when it is determined that a false negative of the predetermined object exists. The second detector differs in performance from the first detector. | 04-03-2014 |
20140093130 | Systems and Methods For Sensing Occupancy - A computer implemented method for sensing occupancy of a workspace includes creating a difference image that represents luminance differences of pixels in past and current images of the workspace resulting from motion in the workspace, determining motion occurring in regions of the workspace based on the difference image, and altering a workspace environment based at least in part on the determined motion. The method also includes determining which pixels in the difference image represent persistent motion that can be ignored and determining which pixels representing motion in the difference image are invalid because the pixels are isolated from other pixels representing motion. | 04-03-2014 |
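Two of the filtering steps in the entry above are concrete enough to sketch: forming the luminance difference image between past and current frames, and invalidating motion pixels that are isolated from every other motion pixel. The 8-connectivity rule and the thresholds below are assumptions of mine, not values from the patent.

```python
def motion_pixels(prev, curr, luma_thresh=20):
    """Pixels whose luminance changed by more than luma_thresh between
    the past and current images (the difference image, binarized)."""
    h, w = len(prev), len(prev[0])
    return {(r, c) for r in range(h) for c in range(w)
            if abs(curr[r][c] - prev[r][c]) > luma_thresh}

def drop_isolated(pixels):
    """Invalidate motion pixels with no 8-connected motion neighbour;
    lone pixels are treated as sensor noise rather than occupancy."""
    return {(r, c) for (r, c) in pixels
            if any((r + dr, c + dc) in pixels
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))}

prev = [[0] * 4 for _ in range(4)]
curr = [[100, 100, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 100]]
motion = motion_pixels(prev, curr)          # cluster at (0,0),(0,1) + lone (3,3)
valid = drop_isolated(motion)               # the lone pixel is discarded
```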
20140098988 | Fitting Contours to Features - Various embodiments of methods and apparatus for feature point localization are disclosed. An object in an input image may be detected. A profile model may be applied to determine feature point locations for each object component of the detected object. Applying the profile model may include globally optimizing the feature points for each object component to find a global energy minimum. A component-based shape model may be applied to update the respective feature point locations for each object component. | 04-10-2014 |
20140098989 | MULTI-CUE OBJECT ASSOCIATION - Multiple discrete object blobs within a scene image captured by a single camera are distinguished as un-labeled against a background model within a first frame of a video data input. Object position and object appearance and/or object size attributes are determined for each of the blobs, and costs of assigning them to existing object tracks are determined as a function of the determined attributes and combined to generate respective combination costs. The un-labeled object blob that has the lowest combined cost of association with any of the existing object tracks is labeled with the label of the track having the lowest combined cost, that track is removed from consideration for labeling the remaining un-labeled object blobs, and the process is iteratively repeated until each of the track labels has been used to label one of the un-labeled blobs. | 04-10-2014 |
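The iterative lowest-cost labeling described in the entry above amounts to a greedy assignment loop, sketched below. The combined-cost function is left abstract (the patent combines position, appearance, and size cues in an unspecified way), so the example passes in an arbitrary cost callable.

```python
def associate(blobs, tracks, cost):
    """Repeatedly label the (blob, track) pair with the lowest combined
    cost, then retire both, until blobs or tracks run out: a greedy
    rendering of the iterative matching the abstract describes."""
    blobs, tracks = list(blobs), list(tracks)
    labels = {}
    while blobs and tracks:
        b, t = min(((b, t) for b in blobs for t in tracks),
                   key=lambda bt: cost(*bt))
        labels[b] = t
        blobs.remove(b)
        tracks.remove(t)
    return labels

# Hypothetical combined costs; b1/t1 is cheapest, so it is labeled first
# and both are removed before b2 is matched.
costs = {('b1', 't1'): 1, ('b1', 't2'): 5, ('b2', 't1'): 4, ('b2', 't2'): 2}
labels = associate(['b1', 'b2'], ['t1', 't2'], lambda b, t: costs[(b, t)])
# → {'b1': 't1', 'b2': 't2'}
```

Note that greedy matching is not globally optimal; an exhaustive or Hungarian-style assignment could give a lower total cost, but the abstract's "lowest cost first, then remove the track" wording matches the greedy loop.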
20140098990 | Distributed Position Identification - A method and apparatus for identifying a position of a mobile platform. Images are provided by a camera system on a first mobile platform. The images include images of a second platform. An identified position of the first mobile platform is generated using the images and the position information for the second platform. The position information for the second platform identifies a location of the second platform. | 04-10-2014 |
20140098991 | GAME DOLL RECOGNITION SYSTEM, RECOGNITION METHOD AND GAME SYSTEM USING THE SAME - The present invention discloses a game doll recognition system, a recognition method, and a game system using the same. The game doll recognition system is capable of recognizing a plurality of game dolls; it includes: a data storage unit storing identification data of the game dolls; an image capturing unit capturing at least one picture of a game doll to be recognized; a processor comparing the identification data with at least a part of the picture to verify the identity of the game doll to be recognized in the picture; and a display unit showing the identity of the game doll to be recognized. | 04-10-2014 |
20140098992 | Electronic device, selection method, acquisition method, electronic apparatus, synthesis method and synthesis program - An electronic device ( | 04-10-2014 |
20140098993 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 04-10-2014 |
20140098994 | ANALYZING A SEGMENT OF VIDEO - There is disclosed a quick and efficient method for analyzing a segment of video, the segment of video having a plurality of frames. A reference portion is acquired from a reference frame of the plurality of frames. Plural subsequent portions are then acquired, each from a corresponding subsequent frame of the plurality of frames. Each subsequent portion is then compared with the reference portion, and an event is detected based upon each comparison. There is also disclosed a method of optimizing video including selectively storing, labeling, or viewing video based on the occurrence of events in the video. Furthermore, there is disclosed a method for creating a video summary of video which allows a user to scroll through and access selected parts of a video. The methods disclosed also provide advancements in the field of video surveillance analysis. | 04-10-2014 |
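The reference-portion comparison in the entry above can be sketched as follows. The region tuple, the summed-absolute-difference metric, and the threshold are my own illustrative choices; the abstract does not say how portions are compared or what constitutes an event.

```python
def detect_events(frames, region, diff_thresh):
    """Compare the same portion of each subsequent frame against the
    reference frame's portion; report the indices of frames whose summed
    absolute pixel difference exceeds diff_thresh (an "event")."""
    r0, r1, c0, c1 = region
    ref = [row[c0:c1] for row in frames[0][r0:r1]]
    events = []
    for idx, frame in enumerate(frames[1:], start=1):
        cur = [row[c0:c1] for row in frame[r0:r1]]
        diff = sum(abs(a - b) for ra, rb in zip(ref, cur)
                   for a, b in zip(ra, rb))
        if diff > diff_thresh:
            events.append(idx)
    return events

# 2x2 frames: frame 1 matches the reference, frame 2 changes everywhere.
frames = [[[0, 0], [0, 0]], [[0, 0], [0, 0]], [[10, 10], [10, 10]]]
hits = detect_events(frames, (0, 2, 0, 2), diff_thresh=20)  # → [2]
```

Event indices like these are exactly what the selective storing/labeling and video-summary scrolling described in the abstract would consume.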
20140098995 | MOTION-CONTROLLED ELECTRONIC DEVICE AND METHOD THEREFOR - An electronic device obtains a motion of a displaced object in two captured video frames utilizing phase correlation of the two frames. The electronic device identifies a magnitude of the motion and an area in a phase correlation surface corresponding to an area of the object, and accordingly determines if the motion is a qualified motion operable to trigger a gesture command of the electronic device. The phase correlation surface is obtained from the phase correlation of the two frames. | 04-10-2014 |
20140098996 | IMAGE DISPLAY APPARATUS AND IMAGE DISPLAY METHOD - An image display apparatus is provided that can obtain a stable and easy to view detection frame and cut-out image in a captured image in which there is a possibility that a congested region and a non-congested region are mixed, such as an omnidirectional image. A congested region detecting section detects a congested region in a captured image by detecting a movement region of the captured image. An object detecting section detects images of targets in the captured image by performing pattern matching. A detection frame forming section forms a congested region frame that surrounds a congested region detected by the congested region detecting section, and an object detection frame that surrounds an image of a target detected by the object detecting section. | 04-10-2014 |
20140098997 | METHOD AND DEVICE FOR DETECTING OBJECTS IN THE SURROUNDINGS OF A VEHICLE - A method for detecting objects in the surroundings of a vehicle. The method includes reading in a first image, of a vehicle camera, which represents the surroundings taken using a first exposure time and reading in a second image of the vehicle camera, which was taken after the first image and using a second exposure time, the second exposure time differing from the first exposure time, and extracting an image detail from the second image, the image detail representing a smaller area of the surroundings than the first image. During the extracting, a position of the image detail in the second image is determined based on at least one parameter which represents information on travel of the vehicle and/or a position of an infrastructure measure in front of the vehicle and/or which is independent of a moving object that was detected in a preceding step in the image detail. | 04-10-2014 |
20140105453 | GESTURE IDENTIFICATION WITH NATURAL IMAGES - A method for gesture identification with natural images includes generating a series of variant images by using each two or more successive ones of the natural images, extracting an image feature from each of the variant images, and comparing the varying pattern of the image feature with a gesture definition to identify a gesture. The method is inherently insensitive to indistinctness of images, and supports the motion estimation in axes X, Y, and Z without requiring the detected object to maintain a fixed gesture. | 04-17-2014 |
20140105454 | TRACKING APPARATUS - A tracking apparatus includes a grouping setting unit, a tracking feature detection unit, and a tracking unit. The grouping setting unit groups a plurality of focus detection areas with an in-focus state. The tracking feature detection unit detects a feature amount of the tracking target in the areas of the grouped focus detection areas. The tracking unit tracks the tracking target in accordance with a first or a second tracking position, depending on the number of the set groups. | 04-17-2014 |
20140105455 | INFORMATION PROCESSING APPARATUS AND INPUT CONTROL METHOD - An information processing apparatus includes an image capturing section to capture an image of a hand; an extracting section to extract a hand area from the captured image; a reference line determining section to determine a reference pushdown line in the image on the hand area; a determining section to determine a pushdown move if the bottom part of the hand area comes below the reference pushdown line; a first position determining section to determine a depth position based on an aspect ratio of the hand area if the pushdown move is determined; a second position determining section to determine a lateral position based on a position of the bottom part of the hand area if the pushdown move is determined; and an input key determining section to determine an input key from the determined depth position and lateral position. | 04-17-2014 |
20140105456 | DEVICE, SYSTEM AND METHOD FOR DETERMINING COMPLIANCE WITH AN INSTRUCTION BY A FIGURE IN AN IMAGE - A system and method for determining a compliance with an instruction to assemble a figure according to a depiction of the figure on an output device, by presenting image data of the figure, capturing an image of the assembled figure, and comparing the figure captured in the image to the figure depicted on the output device. | 04-17-2014 |
20140105457 | METHOD FOR PROVIDING TARGET POINT CANDIDATES FOR SELECTING A TARGET POINT - Method for providing target point candidates forming a candidate set for selecting a target point from the candidate set by means of a geodetic measuring device. The measuring device is coarsely oriented toward the target point, and an image is recorded in the sighting direction. A search process for certain target object candidates in the recorded image is performed by means of image processing and wherein at least one respective point representing the target object candidate is associated with each of the target object candidates as a target point candidate. Candidates are associated with a candidate set. respective weight values are derived according to at least one value of a predetermined target point property of the candidates and associated with the target point candidates. The target point candidates from the candidate set are each provided together with respective information representing the weight value associated with the target point candidate. | 04-17-2014 |
20140105458 | METHOD FOR REMOTELY DETERMINING AN ABSOLUTE AZIMUTH OF A TARGET POINT - The invention relates to a method and system for remotely determining an absolute azimuth of a target point (B) by ground means, via the creation of an image bank georeferenced in the absolute azimuth only from a first point (P | 04-17-2014 |
20140105459 | LOCATION-AWARE EVENT DETECTION - Techniques for detecting one or more events are provided. The techniques include using multiple overlapping regions of interest on a video sequence to cover a location for one or more events, wherein each event is associated with at least one of the multiple overlapping regions of interest, applying multiple-instance learning to the video sequence to select one or more of the multiple overlapping regions of interest to construct one or more location-aware event models, and applying the models to the video sequence to detect the one or more events and to determine the one or more regions of interest that are associated with the one or more events. | 04-17-2014 |
20140105460 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - An image processing device, method and program in which a feature point derivation unit derives a plurality of characteristic points in an input moving image. A tracking subject feature point setting unit sets a feature point within a tracking subject, from the characteristic points. A background feature point setting unit sets a group of background feature points from the characteristic points. The background feature points are not located within the tracking subject. A motion detection unit detects movement over time of the background feature points. A clip area setting unit sets a size and a position of a clip area of an image to be employed which includes the feature point within the tracking subject, on the basis of the movement of the feature point within the tracking subject and the movement of the background feature points, when the motion detection unit detects movement of the background feature points. | 04-17-2014 |
20140105461 | METHODS AND APPARATUS TO COUNT PEOPLE IN IMAGES - Methods and apparatus to count people in images are disclosed. An example method includes analyzing frame pairs of a plurality of frame pairs captured over a period of time to identify a redundant person indication detected in an overlap region, the overlap region corresponding to an intersection of a first field of view and a second field of view; eliminating the identified redundant person indication to form a conditioned set of person indications for the period of time; grouping similarly located ones of the person indications of the conditioned set to form groups; analyzing the groups to identify redundant groups detected in the overlap region; and eliminating the redundant groups from a people tally generated based on the groups. | 04-17-2014 |
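The grouping step in the entry above, where similarly located person indications collapse into one tallied group, can be sketched with a greedy radius-based clustering. The radius value and the "first member is the group anchor" rule are assumptions for illustration; the patent does not define its grouping criterion.

```python
def count_people(detections, radius=2.0):
    """Greedily group similarly located (x, y) person indications: a
    detection within `radius` of a group's first member joins that group.
    The tally is one person per group, so duplicate indications of the
    same person (e.g. seen in both overlapping fields of view) count once."""
    groups = []
    for x, y in detections:
        for g in groups:
            gx, gy = g[0]  # first member doubles as the group anchor
            if (x - gx) ** 2 + (y - gy) ** 2 <= radius ** 2:
                g.append((x, y))
                break
        else:
            groups.append([(x, y)])
    return len(groups)

# Two close indications of one person plus one distant person → tally 2.
tally = count_people([(0, 0), (0.5, 0.5), (10, 10)])  # → 2
```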
20140105462 | METHODS AND APPARATUS FOR DETECTING A COMPOSITION OF AN AUDIENCE OF AN INFORMATION PRESENTING DEVICE - Methods and apparatus for detecting a composition of an audience of an information presenting device are disclosed. A disclosed example method includes maintaining a first count of a number of people detected in an environment based on image data representative of the environment; when the image data is indicative of a change in the number of people detected in the environment, presenting a request for identity information; determining if the people were compliant in providing the identity information based on a difference between the number of people appearing in the image data and a second number of received identity responses; and when the people were non-compliant in providing the identity information, increasing a second count maintained for the environment indicative of unidentified people in the room. | 04-17-2014 |
20140105463 | METHOD AND SYSTEM FOR MOTION DETECTION IN AN IMAGE - Embodiments for moving object detection in an image are disclosed. These include detecting a moving object in an input image by selecting video frames that are visually similar to the input image, generating a model motion image by estimating motion for each selected video frame, and detecting, using the model motion image, a moving object in the input image based on differences between the model motion image and the input image. | 04-17-2014 |
20140112526 | DETECTING EMBOSSED CHARACTERS ON FORM FACTOR - A portable computing device reads information embossed on a form factor utilizing a built-in digital camera and determines dissimilarity between each pair of embossed characters to confirm consistency. Techniques comprise capturing an image of a form factor having information embossed thereupon, and detecting embossed characters. The detecting utilizes a gradient image and one or more edge images with a mask corresponding to the regions for which specific information is expected to be found on the form factor. The embossed form factor may be a credit card, and the captured image may comprise an account number and an expiration date embossed upon the credit card. Detecting embossed characters may comprise detecting the account number and the expiration date of the credit card, and/or the detecting may utilize a gradient image and one or more edge images with a mask corresponding to the regions for the account number and expiration date. | 04-24-2014 |
20140112527 | SIMULTANEOUS TRACKING AND TEXT RECOGNITION IN VIDEO FRAMES - Architecture that enables optical character recognition (OCR) of text in video frames at the rate at which the frames are received. Additionally, conflation is performed on multiple text recognition results in the frame sequence. The architecture comprises an OCR text recognition engine and a tracker system; the tracker system establishes a common coordinate system in which OCR results from different frames may be compared and/or combined. From a set of sequential video frames, a keyframe is chosen from which the reference coordinate system is established. An estimated transformation from keyframe coordinates to subsequent video frames is computed using the tracker system. When text recognition is completed for any subsequent frame, the result coordinates can be related to the keyframe using the inverse transformation from the processed frame to the reference keyframe. The results can be rendered for viewing as the results are obtained. | 04-24-2014 |
20140112528 | OBJECT RECOGNITION IN LOW-LUX AND HIGH-LUX CONDITIONS - A system for capturing image data for gestures from a passenger or a driver in a vehicle with a dynamic illumination level comprises a low-lux sensor equipped to capture image data in an environment with an illumination level below an illumination threshold, a high-lux sensor equipped to capture image data in the environment with the illumination level above the illumination threshold, and an object recognition module for activating the sensors. The object recognition module determines the illumination level of the environment and activates the low-lux sensor if the illumination level is below the illumination threshold. If the illumination level is above the threshold, the object recognition module activates the high-lux sensor. | 04-24-2014 |
20140112529 | METHOD, APPARATUS, AND SYSTEM FOR CORRECTING MEDICAL IMAGE ACCORDING TO PATIENT'S POSE VARIATION - Provided is a method of correcting a medical image according to a patient's pose variation. The method includes attaching a marker to an object, generating a first non-real-time image and a first real-time image when the object is in a first pose, generating a second real-time image when the object is in a second pose, and correcting the first non-real-time image based on shift information of the marker when the object is changed from the first pose to the second pose. | 04-24-2014 |
20140112530 | IMAGE RECOGNITION DEVICE, IMAGE RECOGNITION METHOD, PROGRAM, AND INTEGRATED CIRCUIT - An image recognition device including: a first recognition unit that performs image recognition within an image to find a first object; an obtaining unit that obtains an attribute of the first object found by the first recognition unit; an object specifying unit that refers to object correspondence information showing identifiers of second objects and associating each identifier with an attribute, and specifies an identifier of one of the second objects that is associated with the attribute of the first object; an area specifying unit that refers to area value information showing values that are associated with the identifiers of the second objects and are related to a first area occupied by the first object, and specifies a second area within the image by using a value associated with the identifier of the one of the second objects; and a second recognition unit that performs image recognition within the second area to find the one of the second objects. | 04-24-2014 |
20140112531 | IMAGE PROCESSING APPARATUS AND METHOD FOR DETECTING TRANSPARENT OBJECT IN IMAGE - Provided is an image processing apparatus and method for detecting a transparent image from an input image. The image processing apparatus may include an image segmenting unit to segment an input image into a plurality of segments, a likelihood determining unit to determine a likelihood that a transparent object is present between adjacent segments among the plurality of segments, and an object detecting unit to detect the transparent object from the input image based on the likelihood. | 04-24-2014 |
20140112532 | METHOD FOR ANALYZING AN IMAGE RECORDED BY A CAMERA OF A VEHICLE AND IMAGE PROCESSING DEVICE - A method for analyzing an image recorded by a camera of a vehicle. The method includes a step of reading the image of the camera. Furthermore, the method includes a step of recognizing at least one object in a subsection of the image, the subsection imaging a smaller area of the vehicle surroundings than the image. Furthermore, the method includes a step of transmitting the subsection and/or information about the subsection of the image to a driver assistance module. Finally, the method includes a step of using the subsection of the image and/or the information about the subsection of the image instead of the entire image in the driver assistance module, in order to analyze the image recorded by the camera. | 04-24-2014 |
20140112533 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION PROCESSING SYSTEM - An information processing apparatus includes an input unit, an attention object detection unit, and a calculation unit. The input unit is configured to input a plurality of temporally continuous images taken by an image pickup apparatus. The attention object detection unit is configured to detect an attention object as an attention target from a first image which is an image taken at a first time point out of the plurality of images input. The calculation unit is configured to compare the first image with one or more second images which are one or more images taken at a time point previous to the first time point, to calculate, as a second time point, a time point when the attention object appears in the continuous plurality of images. | 04-24-2014 |
20140112534 | INFORMATION PROCESSING DEVICE AND STORAGE MEDIUM - There is provided an information processing device including a detection unit that detects a face image region in an image, a determination unit that determines a human attribute of at least one face image in the face image region detected by the detection unit, and a face image replacement unit that replaces the at least one face image with a natural face image of another person according to the human attribute determined by the determination unit. | 04-24-2014 |
20140112535 | METHOD AND APPARATUS FOR OBJECT DETECTION - A method for detecting a plurality of object regions in an image, wherein the plurality of object regions have similar specific structural features, comprises: an estimation step for estimating a common initial value for the specific structural features of the plurality of object regions; and a determination step for determining, for each of the plurality of object regions, a final value for the specific structural feature of the object region and a final position thereof separately based on the estimated common initial value. | 04-24-2014 |
20140112536 | SYSTEM AND METHOD FOR AUTOMATICALLY REGISTERING AN IMAGE TO A THREE-DIMENSIONAL POINT SET - A system and methods can create a synthetic image of a target from a 3D data set, by using an electro-optical (EO) image and sun geometry associated with the EO image. In some examples, a 3D surface model is created from a 3D data set. The 3D surface model establishes a local surface orientation at each point in the 3D data set. A surface shaded relief (SSR) is produced from the local surface orientation, from an EO image, and from sun geometry associated with the EO image. Points in the SSR that are in shadows are shaded appropriately. The SSR is projected into the image plane of the EO image. Edge-based registration extracts tie points from the projected SSR. The 3D data set converts the tie points to ground control points. A geometric bundle adjustment aligns the EO image geometry to the 3D data set. | 04-24-2014 |
20140112537 | SYSTEMS AND METHODS FOR INTELLIGENT MONITORING OF THOROUGHFARES USING THERMAL IMAGING - Various techniques are disclosed for systems and methods using thermal imaging to intelligently monitor thoroughfares. For example, an intelligent monitoring system may include an infrared imaging module, a processor, a communication module, a memory, and an adjustable component. The system may be mounted, installed, or otherwise disposed at various locations along thoroughfares, and capture thermal images of a scene that includes at least a portion of the thoroughfares. Various thermal image processing and analysis operations may be performed on the thermal images to generate comprehensive monitoring information including an indication of detected objects in the scene and at least one attribute associated with the objects. Various actions may be taken, such as generating various alarms and intelligently adjusting operation of various adjustable devices on thoroughfares, based on the monitoring information. The monitoring information may be shared among multiple instances of the system, and may be communicated to external devices. | 04-24-2014 |
20140112538 | PEDESTRIAN MOTION PREDICTING DEVICE - An object is to provide a pedestrian motion predicting device capable of accurately predicting the possibility of a rush-out before a pedestrian actually begins to rush out. According to the embodiments, the pedestrian is detected from input image data, a portion in which the detected pedestrian is imaged is cut out from the image data, the shape of the pedestrian in the cut-out partial image data is classified by collating it with a trained identifier group or a pedestrian recognition template group, and the rush-out of the pedestrian is predicted based on the resulting classification. | 04-24-2014 |
20140112539 | APPARATUS AND METHODS FOR REDUCING VISIBILITY OF A PERIPHERY OF AN IMAGE STREAM - Apparatus and methods are described for imaging a portion of a body of a subject that undergoes a motion cycle, including acquiring a plurality of image frames of the portion of the subject's body. A given feature is identified in at least some of the image frames. At least some image frames are image tracked with respect to the feature, and the image frames that have been image tracked with respect to the given feature are displayed as a stream of image frames. Visibility of a periphery of the displayed stream of image frames is at least partially reduced. Other applications are also described. | 04-24-2014 |
20140112540 | COLLECTION OF AFFECT DATA FROM MULTIPLE MOBILE DEVICES - A user interacts with various pieces of technology to perform numerous tasks and activities. Reactions can be observed and mental states inferred from these performances. Multiple devices, including mobile devices, can observe and record or transmit a user's mental state data. The mental state data collected from the multiple devices can be used to analyze the mental states of the user. The mental state data can be in the form of facial expressions, electrodermal activity, movements, or other detectable manifestations. Multiple cameras on the multiple devices can be usefully employed to collect facial data. An output can be rendered based on an analysis of the mental state data. | 04-24-2014 |
20140112541 | Method and System for Edge Detection - A method executed by a computer system for detecting edges comprises receiving an image comprising a plurality of pixels, determining a phase congruency value for a pixel, where the phase congruency value comprises a plurality of phase congruency components, and determining if the phase congruency value satisfies a phase congruency criteria. If the phase congruency value satisfies the phase congruency criteria, the computer system categorizes the pixel as an edge pixel. If the phase congruency value does not satisfy the phase congruency criteria, the computer system compares a first phase congruency component of the plurality of phase congruency components to a phase congruency component criteria. If the first phase congruency component satisfies the phase congruency component criteria, the computer system categorizes the pixel as an edge pixel, and if the first phase congruency component does not satisfy the phase congruency component criteria, categorizes the pixel as a non-edge pixel. | 04-24-2014 |
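The two-stage decision cascade described in the edge-detection abstract above (20140112541) can be sketched in a few lines. How the phase congruency components are computed from the image, how they are aggregated, and the threshold values are all illustrative assumptions here, not the patented criteria:

```python
def classify_pixel(pc_components, pc_threshold=0.5, component_threshold=0.4):
    """Categorize a pixel as edge or non-edge using the cascade in the abstract.

    pc_components: phase congruency components for one pixel (their derivation
    from filter banks is outside this sketch; thresholds are assumed values).
    """
    pc_value = sum(pc_components)  # assumed aggregation of the components
    if pc_value >= pc_threshold:
        # overall phase congruency criterion satisfied
        return "edge"
    if pc_components[0] >= component_threshold:
        # fall back to the first component against its own criterion
        return "edge"
    return "non-edge"
```

The point of the cascade is that a pixel failing the combined criterion still gets a second chance based on a single strong component, which recovers edges whose other components are weak.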
20140119594 | PEOPLE COUNTER INCLUDING SETTING INTERFACE AND METHOD FOR SETTING THE SAME - Disclosed is a people counter including a setting interface and a setting method thereof. Since a reference width used to count moving objects within an image is visibly arranged and displayed on a screen so that a detected width of a moving object can be compared with the reference width, setting and verification for counting are straightforward. In addition, since the interface can be freely moved for adjustment and comparison of the reference width using a pointing device such as a mouse, providing verification and resetting that are intuitive and practical compared with conventional manual adjustment schemes, count accuracy can be easily increased in different environments depending on the conditions or types of moving objects within an image. | 05-01-2014 |
20140119595 | METHODS AND APPARATUS FOR REGISTERING AND WARPING IMAGE STACKS - A set of images is processed to modify and register the images to a reference image in preparation for blending the images to create a high-dynamic range image. To modify and register a source image to a reference image, a processing unit generates a correspondence map for the source image based on a non-rigid dense correspondence algorithm, generates a warped source image based on the correspondence map, estimates one or more color transfer functions for the source image, and fills the holes in the warped source image. The holes in the warped source image are filled based on either a rigid transformation of a corresponding region of the source image or a transformation of the reference image based on the color transfer functions. | 05-01-2014 |
20140119596 | METHOD FOR RECOGNIZING GESTURE AND ELECTRONIC DEVICE - A method for recognizing a gesture adopted by an electronic device to recognize a gesture of at least a hand. In the method, a hand image of the hand is captured and the hand image includes a hand region. A geometric center of the hand region is calculated. At least a concentric circle is disposed on the hand region with the geometric center as the center of the concentric circles. A number of intersection points of each concentric circle and the hand region is calculated respectively to determine a feature vector of the gesture. According to the feature vector, a hand recognition is performed to recognize the gesture of the hand. | 05-01-2014 |
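The concentric-circle feature in the gesture-recognition abstract above (20140119596) can be illustrated as follows. The sampling density and the reading of "intersection points" as boundary crossings along each circle are assumptions made for this sketch:

```python
import math

def circle_feature(mask, center, radii, samples=360):
    """For each concentric circle around the hand's geometric center, count
    how many times the circle crosses the hand-region boundary.

    mask: 2D list of 0/1 values (hand region = 1); center: (row, col).
    The per-circle crossing counts form the gesture feature vector.
    """
    rows, cols = len(mask), len(mask[0])
    cy, cx = center
    feature = []
    for r in radii:
        inside = []
        for k in range(samples):
            a = 2 * math.pi * k / samples
            y = int(round(cy + r * math.sin(a)))
            x = int(round(cx + r * math.cos(a)))
            inside.append(0 <= y < rows and 0 <= x < cols and mask[y][x] == 1)
        # transitions around the closed circle = boundary intersection count
        crossings = sum(inside[k] != inside[k - 1] for k in range(samples))
        feature.append(crossings)
    return feature
```

Intuitively, a circle that cuts across extended fingers crosses the region boundary twice per finger, so the crossing counts at several radii discriminate between gestures such as an open palm and a fist.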
20140119597 | APPARATUS AND METHOD FOR TRACKING THE POSITION OF A PERIPHERAL VEHICLE - The present disclosure provides an apparatus and a method for tracking the position of a peripheral vehicle. The apparatus includes: a processor; memory; an image obtaining unit configured to receive one or more images from one or more cameras disposed on a vehicle; a peripheral vehicle detecting unit configured to analyze the one or more images to detect a peripheral vehicle in the one or more images; a position tracking unit configured to track the peripheral vehicle detected in the one or more images; a view converting unit configured to generate a view-converted image by converting a view of the image based on the tracked position of the peripheral vehicle; and an output controlling unit configured to output the view-converted image to a display provided in the vehicle. | 05-01-2014 |
20140119598 | Systems and Methods of Merging Multiple Maps for Computer Vision Based Tracking - Method, apparatus, and computer program product for merging multiple maps for computer vision based tracking are disclosed. In one embodiment, a method of merging multiple maps for computer vision based tracking comprises receiving a plurality of maps of a scene in a venue from at least one mobile device, identifying multiple keyframes of the plurality of maps of the scene, and merging the multiple keyframes to generate a global map of the scene. | 05-01-2014 |
20140119599 | SYSTEMS AND METHODS FOR TRACKING HUMAN HANDS USING PARTS BASED TEMPLATE MATCHING WITHIN BOUNDED REGIONS - Systems and methods for tracking human hands using parts based template matching within bounded regions are described. One embodiment of the invention includes a processor; an image capture system configured to capture multiple images of a scene; and memory containing a plurality of templates that are rotated and scaled versions of a finger template. A hand tracking application configures the processor to: obtain a reference frame of video data and an alternate frame of video data from the image capture system; identify corresponding pixels within the reference and alternate frames of video data; identify at least one bounded region within the reference frame of video data containing pixels having corresponding pixels in the alternate frame of video data satisfying a predetermined criterion; and detect at least one candidate finger within the at least one bounded region in the reference frame of video data. | 05-01-2014 |
20140119600 | DETECTION APPARATUS, VIDEO DISPLAY SYSTEM AND DETECTION METHOD - According to one embodiment, a detection apparatus includes a detector and a detection area setting module. The detector is configured to detect a human face within a detection area which is a part of or a whole of an image captured by a camera, by varying a distance between the human face to be detected and the camera. The detection area setting module is configured to set the detection area to be narrower as the distance becomes longer. | 05-01-2014 |
20140119601 | COMPOSITION DETERMINATION DEVICE, COMPOSITION DETERMINATION METHOD, AND PROGRAM - A composition determination device includes: a subject detection unit configured to detect a subject in an image based on acquired image data; an actual subject size detection unit configured to detect the actual size which can be viewed as being equivalent to actual measurements, for each subject detected by the subject detection unit; a subject distinguishing unit configured to distinguish relevant subjects from subjects detected by the subject detection unit, based on determination regarding whether or not the actual size detected by the actual subject size detection unit is an appropriate value corresponding to a relevant subject; and a composition determination unit configured to determine a composition with only relevant subjects, distinguished by the subject distinguishing unit, as objects. | 05-01-2014 |
20140119602 | METHOD AND APPARATUS FOR IMPLEMENTING MOTION DETECTION - The present invention discloses a method and an apparatus for implementing motion detection. The method includes: obtaining a pixel value of an image to be detected in a video image sequence, a pixel average value of same positions in a preset number of frame images before the image to be detected, and scene luminance values of the preset number of frame images before the image to be detected; obtaining a pixel scene luminance value and an average scene luminance value by calculation according to the pixel value and the scene luminance values; obtaining P1 according to the pixel value and the pixel average value, and obtaining P2 according to the pixel scene luminance value and the average scene luminance value; and obtaining P3 by integrating the P1 and P2, and detecting, according to the P3, whether the image to be detected includes a motion image region. | 05-01-2014 |
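The P1/P2/P3 combination in the motion-detection abstract above (20140119602) can be sketched as below. The abstract does not give the actual formulas, so the absolute differences, the weighted sum, and the threshold are all assumptions:

```python
def detect_motion(pixel, pixel_avg, pixel_lum, avg_lum,
                  w1=0.5, w2=0.5, threshold=20.0):
    """Combine a pixel-difference cue and a scene-luminance cue into one
    motion score, as a sketch of the P1/P2/P3 scheme in the abstract.

    pixel:     pixel value in the image to be detected
    pixel_avg: average of the same position over preceding frames
    pixel_lum: pixel scene luminance value
    avg_lum:   average scene luminance over preceding frames
    """
    p1 = abs(pixel - pixel_avg)   # change against the temporal pixel average
    p2 = abs(pixel_lum - avg_lum) # change against the scene luminance average
    p3 = w1 * p1 + w2 * p2        # integrated motion score
    return p3, p3 >= threshold
```

Weighting the luminance term lets the detector discount pixel changes that are explained by a global lighting change rather than by actual motion, which is the apparent motivation for including scene luminance at all.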
20140119603 | METHODS OF AND APPARATUSES FOR RECOGNIZING MOTION OF OBJECTS, AND ASSOCIATED SYSTEMS - A method of recognizing motion of an object may include periodically obtaining depth data of a first resolution and two-dimensional data of a second resolution with respect to a scene using an image capturing device, wherein the second resolution is higher than the first resolution; determining a motion tracking region by recognizing a target object in the scene based on the depth data, such that the motion tracking region corresponds to a portion of a frame and the portion includes the target object; periodically obtaining tracking region data of the second resolution corresponding to the motion tracking region; and/or analyzing the motion of the target object based on the tracking region data. | 05-01-2014 |
20140119604 | METHOD, APPARATUS AND SYSTEM FOR DETECTING A SUPPORTING SURFACE REGION IN AN IMAGE - A method of detecting a supporting surface region in an image captured by a camera is disclosed. An object in the image is detected. One or more regions of the image in which a lower part of the detected object exists are determined. A degree of confidence for each of the regions is determined. The degree of confidence indicates likelihood of a corresponding region being a supporting surface region. One or more of the regions are selected based on each corresponding degree of confidence. Similarity of other regions in the image to at least one of the selected regions is determined. The supporting surface region is detected based on the determined similarity. | 05-01-2014 |
20140119605 | Method for Recognizing Traffic Signs - The invention relates to a method for recognizing traffic signs that include at least one main sign and one assigned additional sign, and to a corresponding device. A method according to the invention comprises the following steps: | 05-01-2014 |
20140119606 | System and Method for Real-Time Environment Tracking and Coordination - A configurable real-time environment tracking and command module (RTM) is provided to coordinate one or more than one devices or objects in a physical environment. A virtual environment is created to correlate with various objects and attributes within the physical environment. The RTM is able to receive data about attributes of physical objects and accordingly update the attributes of correlated virtual objects in the virtual environment. The RTM is also able to provide data extracted from the virtual environment to one or more than devices, such as robotic cameras, in real-time. An interface to the RTM allows multiple devices to interact with the RTM, thereby coordinating the devices. | 05-01-2014 |
20140119607 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes: a data processing section which processes input image data and obtains output image data; a face detecting section which detects a face image on the basis of the input image data and obtains information about a face image region in which the face image exists; and a processing controller which controls the process of the data processing section on the basis of the information about the face image region obtained in the face detecting section. | 05-01-2014 |
20140119608 | System and Methods for Improving Accuracy and Robustness of Abnormal Behavior Detection - A surveillance system which improves accuracy and robustness of abnormal behavior detection of a monitored object traversing a space includes a metadata processing module, a model building module, and a behavior assessment module. The metadata processing module generates trajectory information for a monitored object and determines attributes of the monitored object. The model building module generates and updates normal motion models based on at least one of the trajectory information, the attributes, and an abnormal behavior score. The behavior assessment module generates the abnormal behavior score based on one of a plurality of methods. A first one of the plurality of methods defines wrong direction behavior. A second one of the plurality of methods defines wandering/loitering behavior. A third one of the plurality of methods defines speeding behavior. | 05-01-2014 |
20140126767 | SYSTEM AND METHOD FOR DETERMINING THE THREE-DIMENSIONAL LOCATION AND ORIENTATION OF IDENTIFICATION MARKERS - A three-dimensional position and orientation tracking system comprises one or more pattern tags, each comprising a plurality of contrasting portions, a tracker for obtaining image information about the pattern tags, a database with geometric information describing patterns on pattern tags; and a controller for receiving and processing the image information from the tracker, accessing the database to retrieve geometric information, and comparing the image information with the geometric information. The contrasting portions are arranged in a rotationally asymmetric pattern and at least one of the contrasting portions on a pattern tag has a perimeter that has a mathematically describable curved section. The perimeter of the contrasting portion may comprise a conic section, including for example an ellipse or a circle. The tracking system can be implemented in a surgical monitoring system in which the pattern tags are attached to tracking markers or are themselves tracking markers. | 05-08-2014 |
20140126768 | Method for Initializing and Solving the Local Geometry or Surface Normals of Surfels Using Images in a Parallelizable Architecture - A system and method is described herein for solving for surface normals of objects in the scene observed in a video stream. The system and method may include sampling the video stream to generate a set of keyframes; generating hypothesis surface normals for a set of mappoints in each of the keyframes; warping patches of corresponding mappoints in a first keyframe to the viewpoint of a second keyframe with a warping matrix computed from each of the hypothesis surface normals; scoring warping errors between each hypothesis surface normal in the two keyframes; and discarding hypothesis surface normals with high warping errors between the first and second keyframes. | 05-08-2014 |
20140126769 | FAST INITIALIZATION FOR MONOCULAR VISUAL SLAM - Apparatuses and methods for fast visual simultaneous localization and mapping are described. In one embodiment, a three-dimensional (3D) target is initialized immediately from a first reference image and prior to processing a subsequent image. In one embodiment, one or more subsequent reference images are processed, and the 3D target is tracked in six degrees of freedom. In one embodiment, the 3D target is refined based on the one or more processed subsequent images. | 05-08-2014 |
20140126770 | PHR/EMR Retrieval System Based on Body Part Recognition and Method of Operation Thereof - A method of filtering an electronic medical record (EMR) based on a selected body part (SBP). The method may be controlled by one or more controllers and may include one or more acts of obtaining image information of a patient, analyzing the image information using an object recognition method, identifying a SBP of the patient based upon the analyzing of the image information, and filtering the EMR of the patient in accordance with the SBP. | 05-08-2014 |
20140126771 | ADAPTIVE SCALE AND/OR GRAVITY ESTIMATION - Systems, apparatus and methods for estimating gravity and/or scale in a mobile device are presented. A difference between an image-based pose and an inertia-based pose is used to update the estimations of gravity and/or scale. The image-based pose is computed from two poses and is scaled with the estimation of scale prior to the difference. The inertia-based pose is computed from accelerometer measurements, which are adjusted by the estimation for gravity. | 05-08-2014 |
20140126772 | COMMODITY RECOGNITION APPARATUS AND COMMODITY RECOGNITION METHOD - A commodity recognition apparatus comprises a candidate output element configured to output recognition target commodities as candidates of a recognized commodity in descending order of the similarity degrees calculated by a similarity degree calculation element, a distance measurement element configured to measure the distance from an image capturing section to a commodity photographed by the image capturing section, and a changing element configured to change the number of candidates of a recognized commodity output by the candidate output element according to the distance measured by the distance measurement element. | 05-08-2014 |
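A distance-dependent candidate count, as described in the commodity-recognition abstract above (20140126772), could be realized with a simple mapping. The abstract only says the number of candidates changes with the measured distance; the linear rule, the distance range, and the counts below are assumptions:

```python
def candidate_count(distance_cm, near=30.0, far=100.0,
                    max_candidates=8, min_candidates=2):
    """Return how many recognition candidates to display for a commodity
    at the given camera-to-commodity distance (illustrative rule only).

    Close commodities are photographed in detail, so more candidates are
    shown; distant ones yield less reliable features, so fewer are shown.
    """
    if distance_cm <= near:
        return max_candidates
    if distance_cm >= far:
        return min_candidates
    frac = (distance_cm - near) / (far - near)
    return max(min_candidates,
               round(max_candidates - frac * (max_candidates - min_candidates)))
```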
20140126773 | COMMODITY RECOGNITION APPARATUS AND COMMODITY RECOGNITION METHOD - A commodity recognition apparatus comprises a feature amount extraction unit configured to extract the appearance feature amount of a commodity contained in an image captured by an image capturing section; a distance measurement unit configured to measure the distance from the image capturing section to a commodity captured by the image capturing section; a file selection unit configured to select a recognition dictionary file corresponding to the distance measured by the distance measurement unit from the recognition dictionary files for each distance. | 05-08-2014 |
20140126774 | LIBRARY APPARATUS - Disclosed is a library apparatus capable of indicating the availability state of each slot that a magazine has. | 05-08-2014 |
20140126775 | COMMODITY RECOGNITION APPARATUS AND COMMODITY RECOGNITION METHOD - A commodity recognition apparatus comprises an image interface, a memory and a processor. The image interface is configured to acquire a commodity image captured by a camera. The memory is configured to store a candidate of a commodity recognized from the commodity image acquired by the image interface. The processor is configured to try to read a commodity recognition code from the commodity image acquired by the image interface and reset the candidate of the commodity stored in the memory if the commodity recognition code is read. | 05-08-2014 |
20140126776 | EYELID DETECTION DEVICE, EYELID DETECTION METHOD, AND RECORDING MEDIUM - Distances between edges that are detected and paired from images of a driver's face are computed sequentially. Based on changes in the computed distances, low-probability candidates for eyelid edge pairs are eliminated. The edge pairs ultimately remaining are detected as upper-eyelid and lower-eyelid edge pairings. This makes it possible to carry out detection that takes into account not only features near an eyelid edge but also eyelid movement. Accordingly, the edges of a driver's eyelids can be detected accurately. | 05-08-2014 |
20140126777 | ENHANCED FACE RECOGNITION IN VIDEO - The computational resources needed to perform processes such as image recognition can be reduced by determining appropriate frames of image information to use for the processing. In some embodiments, infrared imaging can be used to determine when a person is looking substantially towards a device, such that an image frame captured at that time will likely be adequate for facial recognition. In other embodiments, sound triangulation or motion sensing can be used to assist in determining which captured image frames to discard and which to select for processing based on any of a number of factors indicative of a proper frame for processing. | 05-08-2014 |
20140133697 | TURBINE INSPECTION SYSTEM, COMPUTER PROGRAM PRODUCT AND METHOD OF INSPECTING - The disclosure includes a system, a computer program product, and a method for inspecting a turbine system. In one embodiment, the system includes at least one computing device configured to inspect a turbine system by performing actions including: obtaining a set of pre-maintenance digital images of the turbine system, obtaining a set of post-maintenance digital images of the turbine system, comparing the set of pre-maintenance digital images with the set of post-maintenance digital images to identify an anomaly in the set of post-maintenance digital images, and comparing the set of post-maintenance digital images with a set of computer-modeled images of the turbine system to determine a type of the anomaly in response to identifying the anomaly. The post-maintenance digital images depict the turbine system after a maintenance process has been performed on the turbine system. | 05-15-2014 |
20140133698 | OBJECT DETECTION - Objects are detected in real time at full VGA resolution at 30 frames per second. A preprocessor performs run-length encoding (RLE) and generates a summed area table (SAT) of an image. The RLE and SAT are used to identify candidate objects and to iteratively refine their boundaries. A histogram of oriented gradients (HoG) and a support vector machine (SVM) then reliably classify the object. The method may be part of an advanced driver assistance system (ADAS). | 05-15-2014 |
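The summed area table named in the object-detection abstract above (20140133698) is a standard preprocessing structure: after one pass over the image, the sum of any axis-aligned rectangle can be read off with four lookups. This is a generic sketch of that structure, not the patented pipeline:

```python
def summed_area_table(img):
    """Build a summed area table: sat[y][x] holds the sum of all pixels
    at or above-left of (y, x), computed in a single pass."""
    h, w = len(img), len(img[0])
    sat = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            sat[y][x] = row_sum + (sat[y - 1][x] if y > 0 else 0)
    return sat

def region_sum(sat, top, left, bottom, right):
    """Sum over the inclusive rectangle in O(1) using four SAT lookups."""
    total = sat[bottom][right]
    if top > 0:
        total -= sat[top - 1][right]
    if left > 0:
        total -= sat[bottom][left - 1]
    if top > 0 and left > 0:
        total += sat[top - 1][left - 1]
    return total
```

Constant-time region sums are what make it practical to evaluate and refine many candidate object boundaries per frame at 30 frames per second.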
20140133699 | TARGET POINT ARRIVAL DETECTOR, METHOD OF DETECTING TARGET POINT ARRIVAL, STORAGE MEDIUM OF PROGRAM OF DETECTING TARGET POINT ARRIVAL AND VEHICLE-MOUNTED DEVICE CONTROL SYSTEM - A target point arrival detector for detecting that a vehicle arrives at a target point, based on an image ahead of the vehicle moving on a surface captured by an image capturing unit, includes a target point arrival signal output unit, using a processing circuit, to output a signal indicating that the vehicle arrives at the target point where the inclination of the surface ahead of the vehicle, with respect to the surface over which the vehicle moves, changes to a downward inclination, based on the captured image. | 05-15-2014 |
20140133700 | DETECTING DEVICE, DETECTION METHOD, AND COMPUTER PROGRAM PRODUCT - According to an embodiment, a detecting device includes a projecting unit, a calculator, and a detector. The projecting unit is configured to obtain a first projection position by projecting a capturing position of a captured image on a road surface, obtain a second projection position by projecting a spatial position in the captured image on the road surface, and obtain a third projection position by projecting an error position on the road surface. The calculator is configured to calculate an existence probability of an object on a line passing through the first and second projection positions so that an existence probability of the object between the second and third projection positions on the straight line is greater than between the first and third projection positions on the straight line. The detector is configured to detect a boundary between the road surface and the object by using the existence probability. | 05-15-2014 |
20140133701 | Systems and Methods for Tracking Objects - Various embodiments are disclosed for performing object tracking. One embodiment is a method for tracking an object in a plurality of frames, comprising obtaining a reference contour of an object in a reference frame and estimating, for a current frame after the reference frame, a contour of the object. The method further comprises comparing the reference contour with the estimated contour and determining at least one local region of the reference contour in the reference frame based on a difference between the reference contour and the estimated contour. Based on the difference, at least one corresponding region of the current frame is determined. The method further comprises computing a degree of similarity between the at least one corresponding region in the current frame and the at least one local region in the reference frame, adjusting the estimated contour in the current frame according to the degree of similarity, and designating the current frame as a new reference frame and a frame after the new reference as a new current frame. | 05-15-2014 |
20140133702 | Methods for Rapid Distinction between Debris and Growing Cells - Methods of rapid distinction between growing cells and debris, which acquire a time-lapse movie of specimen images, track features of each entity, and categorize each entity as growing cells or debris. | 05-15-2014 |
20140133703 | VIDEO OBJECT TRACKING USING MULTI-PATH TRAJECTORY ANALYSIS - A method for obtaining trajectory of an object using multi-path tracking mode is provided. The method includes marking a portion of the object in a frame of a video, obtaining consecutive frames in the video, and tracking the marked portion of the object in consecutive frames by estimating sum of absolute difference. The method further includes comparing the sum of absolute difference to a sum of absolute difference threshold, switching between the multi-path tracking mode and single path tracking mode based on the comparison of the sum of absolute difference to the sum of absolute difference threshold, and obtaining trajectory of the marked portion by combining the single path tracking mode and multi-path tracking mode. | 05-15-2014 |
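The matching and mode-switching steps in the entry above can be sketched in Python: compute the sum of absolute differences (SAD) between the marked template and every candidate position in the next frame, then switch between single-path and multi-path tracking by comparing the best SAD to a threshold. The function names, the exhaustive scan, and the threshold convention are illustrative assumptions, not taken from the patent.

```python
def sad(template, candidate):
    """Sum of absolute differences between two equal-sized pixel blocks."""
    return sum(abs(a - b)
               for row_t, row_c in zip(template, candidate)
               for a, b in zip(row_t, row_c))

def best_match(template, frame):
    """Scan the frame and return (score, (row, col)) of the lowest-SAD position."""
    h, w = len(template), len(template[0])
    best = None
    for r in range(len(frame) - h + 1):
        for c in range(len(frame[0]) - w + 1):
            candidate = [row[c:c + w] for row in frame[r:r + h]]
            s = sad(template, candidate)
            if best is None or s < best[0]:
                best = (s, (r, c))
    return best

def choose_mode(score, threshold):
    """Switch to multi-path tracking when the best match is poor (high SAD)."""
    return "multi-path" if score > threshold else "single-path"
```

A real tracker would restrict the scan to a search window around the previous location rather than the whole frame.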
20140133704 | COMMODITY RECOGNITION APPARATUS AND COMMODITY RECOGNITION METHOD - A commodity recognition apparatus acquires an image including a commodity captured by an image capturing module and displays the acquired image on a display module. The commodity recognition apparatus displays a frame border surrounding the commodity on at least a portion of the image displayed on the display module. Moreover, the commodity recognition apparatus recognizes the commodity existing in the frame border according to a feature amount of the image in the area surrounded by the frame border. The commodity recognition apparatus outputs information of the commodity recognized. | 05-15-2014 |
20140133705 | RED-EYE DETERMINATION DEVICE - A red-eye determination device includes a black eye position existence probability density distribution learning unit that records a black eye position detected in the daytime into a black eye position existence probability density distribution, a red-eye candidate detection unit that detects red-eye candidates from an image of the driver at night, and a red-eye determination unit that determines the red eye from the red-eye candidates. The red-eye determination unit determines the red eye on the basis of the relationship between a change in the direction of the face and the behavior of the red-eye candidate and determines, as the red eye, the red-eye candidate disposed at the position of high black eye position existence probability density with reference to the black eye position existence probability density distribution. | 05-15-2014 |
20140140572 | PARALLEL FACE DETECTION AND TRACKING SYSTEM - The present disclosure is directed to a parallel face detection and tracking system. In general, embodiments consistent with the present disclosure may be configured to distribute the processing load associated with the detection and tracking of different faces in an image between multiple data processors. If needed, processing load balancing and/or protective features may be implemented to prevent the data processors from becoming overwhelmed. In one embodiment, a device may comprise, for example, a communication module and at least one processing module. The communication module may be configured to receive at least image information that may be processed by a plurality of data processors in the data processing module. For example, each of the data processors may be configured to detect faces in the image information and/or track detected faces in the image information based on at least one criterion. | 05-22-2014 |
20140140573 | Pose Tracking through Analysis of an Image Pyramid - Techniques for tracking a pose of a textured target in an augmented reality environment are described herein. The techniques may include processing an initial image representing the textured target to generate feature relation information describing associations between features on different image layers of the initial image. The feature relation information may be used to locate features in different image layers of a subsequent image. Upon locating features in a highest resolution image of the subsequent image, the pose of the textured target may be determined for the subsequent image. | 05-22-2014 |
20140140574 | COMMODITY RECOGNITION APPARATUS AND COMMODITY RECOGNITION METHOD - A commodity recognition apparatus comprises an image interface, and a processor. The image interface acquires a commodity image captured by a camera. The processor detects a commodity image from the image acquired by the image interface. Then the processor carries out a commodity recognition processing for recognizing a commodity candidate according to feature amount extracted from the commodity image and a code reading processing for reading a commodity recognition code from the same commodity image in parallel. If the commodity recognition code is read in the code reading processing, the processor determines the commodity recognition code. If a photographing period of a commodity is ended while the commodity recognition code is not read, the processor outputs a result of the commodity recognition processing. | 05-22-2014 |
20140140575 | IMAGE CAPTURE WITH PRIVACY PROTECTION - A method of providing obscurant data includes receiving image data including an image of a target and receiving a preference setting corresponding to the target. Obscurant data of at least a portion of the image data corresponding to the target are determined using the received preference setting. A method of providing surveillance image data includes capturing image data including an image of a target, querying a database to receive a preference setting corresponding to the target, determining the obscurant data of the portion of the image data, and selectively modifying the received image data according to the determined obscurant data to provide the surveillance image data. | 05-22-2014 |
20140140576 | OBJECT DETECTION APPARATUS, DETECTION METHOD, AND PROGRAM - [Problem] In object detection in three-dimensional space using back projections of object areas extracted from a plurality of camera images, the influence of reduced extraction precision (such as a missing object area) in a camera image is lessened, enabling robust object detection. | 05-22-2014 |
20140140577 | EYELID DETECTION DEVICE | 05-22-2014 |
20140146997 | Systems and Methods for Tracking Objects - Various embodiments are disclosed for performing object tracking. One embodiment is a system for tracking an object in a plurality of frames, comprising a probability map generator configured to generate a probability map by estimating probability values of pixels in the frame, wherein the probability of each pixel corresponds to a likelihood of the pixel being located within the object. The system further comprises a contour model generator configured to identify a contour model of the object based on a temporal prediction method, a contour weighting map generator configured to derive a contour weighting map based on thickness characteristics of the contour model, a tracking refinement module configured to refine the probability map according to weight values specified in the contour weighting map, and an object tracker configured to track a location of the object within the plurality of frames based on the refined probability map. | 05-29-2014 |
20140146998 | SYSTEMS AND METHODS TO CLASSIFY MOVING AIRPLANES IN AIRPORTS - A sequence of video images is generated of a pavement area of an airport which contains one or more objects. A processor accesses a background model of the pavement area and determines in a current image a single cluster of foreground pixels that is not part of the background model and assigns a first value to each foreground pixel in the cluster to create a foreground mask. The background model is updated by learning new conditions. A convex hull is generated from the foreground mask. A ratio is determined from pixels captured by the convex hull and pixels in the foreground mask. A ratio higher than a threshold value indicates an object not being an airplane and an alert is displayed on a computer display. Images may be thermal images. A surveillance system based on the calculated ratio is disclosed. | 05-29-2014 |
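The abstract above leaves the exact ratio convention open; assuming it is the foreground pixel count divided by the area of its convex hull, a concave silhouette such as an airplane fills its hull poorly and scores lower than a compact vehicle. A minimal pure-Python sketch using Andrew's monotone chain and the shoelace formula, under that assumption:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(hull):
    """Shoelace formula for the area of a simple polygon."""
    n = len(hull)
    s = sum(hull[i][0] * hull[(i + 1) % n][1] - hull[(i + 1) % n][0] * hull[i][1]
            for i in range(n))
    return abs(s) / 2.0

def fill_ratio(mask_pixels):
    """Foreground pixel count divided by the area of its convex hull."""
    hull = convex_hull(mask_pixels)
    area = polygon_area(hull)
    return len(mask_pixels) / area if area else float("inf")
```

In practice a library routine (e.g. an OpenCV contour hull) would replace the hand-rolled hull, but the fill-ratio idea is the same.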
20140146999 | DEVICE, METHOD AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM FOR DETECTING OBJECT - A method for detecting objects is provided. The method comprises the steps outlined below. An image having pixels is acquired. Image blocks each corresponding to one of the pixels are generated. A specific image block is filtered using N filtering parameters that gradually enhance the blurriness of the specific image block to generate N filtering results. N RMSE values are computed, in which the M-th RMSE value is computed according to the M-th and the (M−1)-th filtering results. A slope of an approximate line is computed according to the RMSE values as the blurriness value of the specific image block. The above steps are repeated to generate the blurriness values of all the pixels. The blurriness value is compared to a threshold value to detect sharp pixels which are parts of a sharp object and further detect an in-focus object. | 05-29-2014 |
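The blurriness measure described above can be sketched with a repeated 3x3 box blur standing in for the unspecified filtering parameters: the M-th RMSE compares the M-th and (M−1)-th filtering results, and the least-squares slope over the RMSE sequence serves as the blurriness value of the block. The choice of box blur and all names below are assumptions for illustration.

```python
def box_blur(block):
    """3x3 mean filter with edge clamping."""
    h, w = len(block), len(block[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = [block[min(max(r + dr, 0), h - 1)][min(max(c + dc, 0), w - 1)]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            out[r][c] = sum(vals) / 9.0
    return out

def rmse(a, b):
    """Root-mean-square error between two equal-sized blocks."""
    n = sum(len(row) for row in a)
    return (sum((x - y) ** 2 for ra, rb in zip(a, b)
                for x, y in zip(ra, rb)) / n) ** 0.5

def blurriness_slope(block, n=5):
    """Blur the block n times with increasing strength, compute successive
    RMSE values, and return the least-squares slope of the RMSE sequence."""
    results = [block]
    for _ in range(n):
        results.append(box_blur(results[-1]))
    rmses = [rmse(results[m], results[m - 1]) for m in range(1, n + 1)]
    xs = list(range(1, n + 1))
    mx, my = sum(xs) / n, sum(rmses) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, rmses))
            / sum((x - mx) ** 2 for x in xs))
```

A sharp block loses detail quickly under blurring, so its RMSE sequence changes steeply; a block that is already blurry (or flat) yields a slope near zero.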
20140147000 | IMAGE TRACKING DEVICE AND IMAGE TRACKING METHOD THEREOF - An image tracking device and an image tracking method thereof are provided. The image tracking device includes an image capture interface, a storage means, and a processor means. The storage means has a multi-dimensional storage space for storing a plurality of first images, each dimension of the multi-dimensional storage space corresponding to a feature-related variance of a multi-dimensional variance. The processor means is configured to execute the following operations: marking a second image in the picture frame; calculating a multi-dimensional variance between the second image and each of the first images separately; determining whether the second image contains the object according to the multi-dimensional variance calculated; and if the second image is determined as one containing the object, storing the second image as one of the first images, in a specific subspace of the multi-dimensional storage space according to the multi-dimensional variance calculated. | 05-29-2014 |
20140147001 | Method and System for Controlling Computer Tomography Imaging - A method, a device, a system and a computer program are for controlling limited-area computer tomography imaging. The method includes determining location data of a first imaging object when the first imaging object is positioned in an imaging area, determining reference location data related to the first imaging object and adjusting the imaging area based on the location data of the first imaging object and said reference location data for imaging a second imaging object. The first and the second imaging object can be located at a distance determined by the reference location data from each other or symmetrically in relation to the reference location data. | 05-29-2014 |
20140147002 | USER AUTHENTICATION APPARATUS AND METHOD USING MOVEMENT OF PUPIL - A user authentication apparatus and method using movement of a pupil are capable of rapidly and accurately performing authentication with high security by storing the respective frequencies of objects mechanically moving on a screen and the security keys corresponding thereto, comparing a frequency detected from movement of a pupil gazing at any object with the frequency of that object, and performing the authentication based on the corresponding security key when the detected frequency is included in a predetermined range. | 05-29-2014 |
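The matching step in the entry above, comparing the frequency detected from pupil movement against each object's stored frequency and returning the corresponding security key when the difference falls within a predetermined range, might look like the following sketch (the tolerance value and the data layout are assumptions):

```python
def authenticate(detected_freq, objects, tolerance=0.2):
    """Compare the frequency detected from pupil movement against the stored
    frequency of each on-screen object; return the security key of the first
    object whose frequency matches within the tolerance, else None."""
    for freq, security_key in objects:
        if abs(detected_freq - freq) <= tolerance:
            return security_key
    return None
```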
20140147003 | Method and Apparatus for Facial Image Processing - The present invention provides a method for image processing, a corresponding apparatus and a computer program product. The method comprises performing face detection of an image, obtaining a coarse face segmentation region of at least one face and a contour edge of the at least one face based on the face detection; and adjusting the coarse face segmentation region based on the contour edge to obtain a fine face segmentation region. By using the method, corresponding apparatus and computer program product of the present invention, the coarse face region in an image can be precisely segmented, which provides a good basis for the subsequent image processing based on the fine face segmentation region. | 05-29-2014 |
20140147004 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE PROCESSING SYSTEM, AND STORAGE MEDIUM STORING PROGRAM - An image processing apparatus connectable to a terminal which captures an image includes an acquisition unit configured to acquire augmented information and attribute information from feature information extracted from a captured image, a processing unit configured to generate, if a plurality of pieces of the feature information is extracted, at least one piece of new augmented information by using a plurality of pieces of the augmented information acquired by the acquisition unit, based on the attribute information, and a transmission unit configured to transmit the new augmented information generated by the processing unit to the terminal. | 05-29-2014 |
20140147005 | METHOD AND APPARATUS FOR DETECTING FRAUD ATTEMPTS IN REVERSE VENDING MACHINES - A reverse vending machine, including: a chamber adapted to receive an object returned to the reverse vending machine; a plurality of cameras arranged around the perimeter of the chamber for viewing said object; a transparent or translucent plate arranged such that the cameras in use view the object obliquely through the transparent or translucent plate; and means adapted to couple light into the plate such that the light undergoes total internal reflection in the plate. Also, a method of detecting dirt in a reverse vending machine. | 05-29-2014 |
20140147006 | Object Information Derived From Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 05-29-2014 |
20140147007 | OBJECT DETECTOR AND OBJECT DETECTION METHOD - A solid object detection device detects solid objects in the periphery of a vehicle. A camera captures images including detection regions set in adjacent traffic lanes to the rear of the vehicle. A solid object assessment unit assesses whether or not a solid object is present in the detection regions. A lateral position detection unit detects a distance between the vehicle position and a dividing line that divides traffic lanes. A region setting unit enlarges the detection region on the side of the dividing line by an amount that increases with the distance to the dividing line. A traffic lane change detection unit detects a traffic lane change made by the vehicle. Upon detecting a traffic lane change by the vehicle, a smaller enlargement amount is used when enlarging the predetermined region outward in the vehicle-width direction. | 05-29-2014 |
20140153772 | SYSTEM AND METHOD OF DETERMINING MATERIAL REACTION OR SENSITIVITY USING HIGH-SPEED VIDEO FRAMES - A system and method for evaluating the sensitivity of energetic substances or materials for transportation, storage, and in-process scenarios are disclosed. The disclosure discusses a system and method that use a high-speed video device, a CPU or computer, sensitivity equipment for testing and assessing the substance or material reaction or explosion sensitivities, such as an electrostatic discharge device or impact assessment device, and software for running a set of rules or instructions for quantifying and determining whether a reaction has occurred. | 06-05-2014 |
20140153773 | Image-Based Indoor Position Determination - In one implementation, a method may comprise: determining a topological representation of an indoor portion of a building based, at least in part, on positions or number of lines in an image of the indoor portion of the building; and comparing the topological representation to one or more stored topological representations, for example in a digital map of the building, to determine a potential position of the indoor portion of the building. | 06-05-2014 |
20140153774 | GESTURE RECOGNITION APPARATUS, GESTURE RECOGNITION METHOD, AND RECORDING MEDIUM - A gesture recognition apparatus for recognizing a gesture of a predetermined operating body includes a storage unit, having stored therein, correspondence relationships between a plurality of coordinate ranges in relation to the operating body and to a plurality of operation target apparatuses, each of the plurality of coordinate ranges further corresponding to each operation target apparatus of the plurality of operation target apparatuses. An image capturing unit captures one or more images of the operating body, a coordinate detecting unit detects coordinates of the operating body based on the one or more captured images, and an operation target apparatus specifying unit selects an operation target apparatus corresponding to the detected coordinates of the operating body and the stored correspondence relationships. A gesture recognition processing unit recognizes a gesture associated with the operating body based on one or more captured images and corresponding to the selected operation target apparatus. | 06-05-2014 |
20140153775 | SIMILARITY DETERMINATION APPARATUS, SIMILARITY DETERMINATION SYSTEM, AND SIMILARITY DETERMINATION METHOD - A similarity determination apparatus, a similarity determination system, and a similarity determination method are provided, each of which calculates spectral information of an object, transforms the spectral information of the object into characteristic quantity, generates a similarity determination criterion from one or a plurality of items of characteristic quantity of a reference item, and checks the characteristic quantity of the object against the similarity determination criterion to determine similarity of the object with reference to the reference item. | 06-05-2014 |
20140153776 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing device including an image input unit configured to input captured image data of a portion of parking stalls that are compartmented at least by a first side line extending in a first direction and a second side line extending in the first direction, an image processing unit configured to generate edge image data by performing an edge extraction process on the captured image data, and a parking determination unit configured to obtain an integrated value of edge pixels of the first direction portion corresponding to the parking stalls in each position of a second direction that is orthogonal to the first direction based on the edge image data, and then to determine whether or not vehicles are parked in the parking stalls based on the obtained integrated value of each position of the second direction. | 06-05-2014 |
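The edge-integration test above can be sketched as follows: sum the edge pixels along the first direction at each position of the orthogonal second direction, then decide occupancy from the integrated values. The majority-of-columns decision rule below is an assumption for illustration; the patent only specifies a determination based on the integrated values.

```python
def stall_occupied(edge_map, threshold):
    """Integrate binary edge pixels along the first (row) direction at each
    position of the orthogonal second (column) direction; declare the stall
    occupied when more than half the columns carry a strong edge response."""
    n_cols = len(edge_map[0])
    integrated = [sum(row[c] for row in edge_map) for c in range(n_cols)]
    strong = sum(1 for v in integrated if v >= threshold)
    return strong > n_cols // 2
```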
20140153777 | LIVING BODY RECOGNIZING DEVICE - A living body recognizing device is equipped with a captured image acquiring unit which acquires captured images from infrared cameras having a characteristic that a luminance of an image portion of a target object becomes higher as a temperature of the target object becomes higher than that of the background, and vice versa, according to a temperature difference between the background and the target object, a living body image extracting unit which executes a first living body image extracting processing of extracting the image portion of the target object assumed to be a living body from a region in the captured image where the luminance is equal to or lower than a first threshold value, and a living body recognizing unit which recognizes an existence of the living body based on the image portion of the target object extracted by the living body image extracting unit. | 06-05-2014 |
20140153778 | Method For Displaying Successive Image Frames on a Display to Stabilize the Display of a Selected Feature in the Image Frames - A method for displaying successive image frames on a display. The method including: processing image data containing the successive image frames to identify features in an image frame and to display the image frame to a user with two or more of the identified features highlighted; manually selecting one of the identified features by a user; determining a portion of a subsequent image frame in which the selected feature is likely to be present; and if the selected feature is found in the portion of the subsequent image frame, displaying the subsequent image frame such that the position of the selected feature is stabilized. | 06-05-2014 |
20140153779 | Object Segmentation at a Self-Checkout - Techniques for segmenting an object are provided. The techniques include capturing an image of an object, dividing the image into one or more blocks, computing a confidence value for each of the one or more blocks, and eliminating one or more blocks from consideration based on the confidence value for each of the one or more blocks. | 06-05-2014 |
20140153780 | FACE SEARCHING AND DETECTION IN A DIGITAL IMAGE ACQUISITION DEVICE - A method of detecting a face in an image includes performing face detection within a first window of the image at a first location. A confidence level is obtained from the face detection indicating a probability of the image including a face at or in the vicinity of the first location. Face detection is then performed within a second window at a second location, wherein the second location is determined based on the confidence level. | 06-05-2014 |
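The confidence-driven placement of the second window can be illustrated on a 1-D signal: a high confidence at the current location shrinks the step to the next window position, while low confidence lets the scan skip ahead. The toy confidence function and the step formula below are assumptions, not the patent's classifier.

```python
def detect_confidence(signal, pos, width):
    """Toy stand-in for a face detector: confidence is the normalized mean
    value of the window (a real detector would run a classifier here)."""
    window = signal[pos:pos + width]
    return sum(window) / (width * 255.0)

def scan(signal, width, max_step=8):
    """Slide a window across the signal; the next window position is chosen
    from the current confidence, stepping finely near likely detections and
    coarsely elsewhere."""
    positions, pos = [], 0
    while pos + width <= len(signal):
        positions.append(pos)
        conf = detect_confidence(signal, pos, width)
        pos += max(1, round(max_step * (1.0 - conf)))
    return positions
```

With a bright "face" region embedded in a dark signal, the scan strides across the empty span and slows to single-sample steps over the region of interest.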
20140153781 | IMAGE FILTERING METHOD FOR DETECTING ORIENTATION COMPONENT OF EDGE AND IMAGE RECOGNIZING METHOD USING THE SAME - The present disclosure relates to an image filtering method for detecting an orientation component of an edge and an image recognizing method using the same. The image filtering method includes receiving an original image, generating a plurality of first images by filtering the original image with filters respectively generated along a plurality of channels, generating a second image by selecting a channel having a maximum value for each image unit, from the generated first images, and generating an output image whose edge is detected so as to maintain the consistency of channel by filtering the second image with filters respectively generated along the plurality of channels to generate a plurality of third images and comparing the channel of the second image with the channels of the third images. | 06-05-2014 |
20140161305 | METHODS AND APPARATUS TO MONITOR ENVIRONMENTS - Methods and apparatus to monitor environments are disclosed. An example method includes analyzing a plurality of three-dimensional data points having respective depth values representative of distances between a sensor and respective objects of an environment; when a first set of the three-dimensional data points has a first depth value less than a threshold, executing a first type of recognition analysis on a first area of the environment corresponding to the first set of the three-dimensional data points; and when a second set of the three-dimensional data points has a second depth value greater than the threshold, executing a second type of recognition analysis different than the first type of recognition analysis on a second area of the environment corresponding to the second set of the three-dimensional data points. | 06-12-2014 |
20140161306 | TECHNIQUES FOR IMPROVED IMAGE DISPARITY ESTIMATION - Techniques for improved image disparity estimation are described. In one embodiment, for example, an apparatus may comprise a processor circuit and an imaging management module, and the imaging management module may be operable by the processor circuit to determine a measured horizontal disparity factor and a measured vertical disparity factor for a rectified image array, determine a composite horizontal disparity factor for the rectified image array based on the measured horizontal disparity factor and an implied horizontal disparity factor, and determine a composite vertical disparity factor for the rectified image array based on the measured vertical disparity factor and an implied vertical disparity factor. Other embodiments are described and claimed. | 06-12-2014 |
20140161307 | Methods and Systems for Vascular Pattern Localization Using Temporal Features - A system and method of localizing vascular patterns by receiving frames from a video camera, identifying and tracking an object within the frames, determining temporal features associated with the object; and localizing vascular patterns from the frames based on the temporal features associated with the object. | 06-12-2014 |
20140161308 | OBJECT LOCALIZATION USING VERTICAL SYMMETRY - A symmetric object in an image is identified by (a) converting an edge map of the acquired image into a binary image map including binary pixel values; (b) dividing the binary image map within a scanning window into multiple bins; (c) summing binary pixel values in each bin; and (d) identifying the symmetric object based at least in part on the summed binary pixel values in the bins. | 06-12-2014 |
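One way to read steps (b)-(d) of the entry above: sum the binary pixel values per vertical bin of the scanning window, then compare mirrored bin pairs, treating a low asymmetry score as evidence of a vertically symmetric object. A sketch under that assumption (the window width is assumed divisible by the bin count):

```python
def symmetry_score(window, n_bins):
    """Sum binary pixel values in each vertical bin of the scanning window,
    then total the differences between mirrored bin pairs; a low score means
    a vertically symmetric object is likely centered in the window."""
    h, w = len(window), len(window[0])
    bin_w = w // n_bins  # assumes w is a multiple of n_bins
    sums = [sum(window[r][c] for r in range(h)
                for c in range(b * bin_w, (b + 1) * bin_w))
            for b in range(n_bins)]
    return sum(abs(sums[i] - sums[n_bins - 1 - i]) for i in range(n_bins // 2))
```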
20140161309 | GESTURE RECOGNIZING DEVICE AND METHOD FOR RECOGNIZING A GESTURE - A gesture recognizing device includes an image processing module. The image processing module is adapted to process an image and includes a skin color detection unit adapted to determine whether the area of a skin color of the image is larger than a threshold value; a feature detection unit electrically connected to the skin color detection unit and adapted to determine a hand image of the image; and an edge detection unit electrically connected to the feature detection unit and adapted to determine a mass center coordinate, the number of fingertips and coordinate locations of fingertips of the hand image. | 06-12-2014 |
20140161310 | Device and Method for Determining Gesture and Operation Method of Gesture Determining Device - A device for determining a gesture includes a light emitting unit, an image sensing device and a processing circuit. The light emitting unit emits a light beam. The image sensing device captures an image of a hand reflecting the light beam. The processing circuit obtains the image and determine a gesture of the hand by performing an operation on the image; wherein the operation includes: selecting pixels in the image having a brightness greater than or equal to a brightness threshold; sorting the selected pixels; selecting a first predetermined percentage of pixels from the sorted pixels; dividing the adjacent pixels in the first predetermined percentage of pixels into a same group; and determining the gesture of the hand according to the number of groups of pixels. A method for determining a gesture and an operation method of the aforementioned device are also provided. | 06-12-2014 |
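The pixel-selection and grouping operation described above (threshold by brightness, keep the brightest fraction, group adjacent pixels, and count the groups) can be sketched with a 4-connected flood fill; the default percentage and the connectivity choice are illustrative, and the group count is only a rough proxy for the fingertip count the patent derives.

```python
def count_groups(image, brightness_threshold, percent=0.5):
    """Select pixels at or above the brightness threshold, keep the brightest
    `percent` fraction, and count groups of 4-connected adjacent pixels."""
    bright = [(image[r][c], r, c) for r in range(len(image))
              for c in range(len(image[0]))
              if image[r][c] >= brightness_threshold]
    bright.sort(reverse=True)  # sort the selected pixels by brightness
    keep = {(r, c) for _, r, c in bright[:max(1, int(len(bright) * percent))]}
    groups, seen = 0, set()
    for start in keep:
        if start in seen:
            continue
        groups += 1
        stack = [start]  # flood-fill one connected group
        while stack:
            r, c = stack.pop()
            if (r, c) in seen:
                continue
            seen.add((r, c))
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (nr, nc) in keep and (nr, nc) not in seen:
                    stack.append((nr, nc))
    return groups
```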
20140161311 | SYSTEM AND METHOD FOR OBJECT IMAGE DETECTING - A system for detecting an object generates light control signals to operate one or more lights that separately radiate light toward an object to be detected. Light is radiated toward the object in response to the light control signals, and one or more items of image information, each including shadow information generated by the radiated light, are collected. The outline of the object is recognized by compounding the collected shadow information, thereby detecting the object. | 06-12-2014 |
20140161312 | SETTING APPARATUS, IMAGE PROCESSING APPARATUS, CONTROL METHOD OF SETTING APPARATUS, AND STORAGE MEDIUM - A setting apparatus for setting a detection processing region to detect a specific object from an image, the setting apparatus includes: an acquisition unit configured to acquire an input concerning the detection processing region from a user interface; and a setting unit configured to set the detection processing region to detect the specific object in accordance with evaluation information for the input concerning the detection processing region acquired from the user interface. | 06-12-2014 |
20140161313 | TRACKING DEVICE - A tracking device is provided which includes an image information acquiring unit configured to acquire image information in the form of successive frames; and a tracking unit configured to generate a plurality of sub images each smaller than a frame of the acquired image information, to calculate likelihoods against eye images, and to decide locations of eyes using a sub image having a large likelihood value. The tracking unit decides the locations of the sub images based on locations decided in the frame of image information acquired immediately before. | 06-12-2014 |
20140161314 | Object Search by Description - Systems and methods search video data for objects that satisfy a general object description. A database is populated with identified objects and object characteristics detected in video data with at least one identifier that specifies video image data. At least one search parameter is received that presents a general object description. The database is queried based upon the received at least one search parameter. At least one identifier is returned from the database based upon the at least one search parameter. | 06-12-2014 |
20140161315 | Irregular Event Detection in Push Notifications - Systems and methods of detecting irregular events include the extraction of values for measure in each of a plurality of notifications. The extracted values are stored in a measures database and a distribution is calculated for the values of each of the measures. The extracted values are compared to the calculated distributions to determine if an irregular event has occurred. An irregularity alert is produced if an irregular event has occurred. | 06-12-2014 |
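A common way to realize the compare-against-distribution step in the entry above is a z-score test over the stored measure values; the patent does not specify a distribution model, so the normal-style test below is an assumption.

```python
def is_irregular(history, value, z_threshold=3.0):
    """Compare a newly extracted measure value against the distribution of
    stored values; flag an irregular event when the value lies more than
    z_threshold standard deviations from the mean."""
    n = len(history)
    mean = sum(history) / n
    variance = sum((x - mean) ** 2 for x in history) / n
    std = variance ** 0.5
    if std == 0:
        return value != mean
    return abs(value - mean) / std > z_threshold
```

A production system would maintain the mean and variance incrementally rather than rescanning the measures database per notification.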
20140161316 | TIME-IN-STORE ESTIMATION USING FACIAL RECOGNITION - A method of monitoring the amount of time spent in a specified area by an individual comprises employing a first camera to automatically create one or more entrance images, each entrance image containing a face of an entering individual that passes a first location, and storing each entrance image in a database along with a corresponding entrance time that the entering individual passed the entrance location. A second camera is also employed to automatically create an exit image of a face of an exiting individual that passes a second location, and the exit image is recorded along with the corresponding exit time that the exiting individual passed the exit location. The exit image is then compared to the entrance images in the database to identify a matching entrance image containing the same face as the exit image. A stay time is then determined for the exiting individual by determining the difference between the entrance time corresponding to the matching entrance image and the exit time. | 06-12-2014 |
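The matching and stay-time computation above can be sketched by treating each stored entrance image as a face embedding and matching the exit embedding by nearest Euclidean distance; the embedding representation is an assumption, since the patent only specifies comparing face images.

```python
import math

def stay_time(exit_face, exit_time, entrance_db):
    """Match the exit face embedding against stored entrance embeddings by
    nearest Euclidean distance, then return the difference between the exit
    time and the matched entrance time (in the timestamps' own units)."""
    best = min(entrance_db, key=lambda rec: math.dist(rec[0], exit_face))
    _, entrance_time = best
    return exit_time - entrance_time
```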
20140161317 | EYELID DETECTION DEVICE, EYELID DETECTION METHOD, AND RECORDING MEDIUM - A secondary curve, the ends of which coincide with the inner corner and the outer corner of the eye, is determined successively, and the total of the edge values of the pixels overlapping the secondary curve is calculated as an evaluation value. Next, a characteristic curve is generated on the basis of data made up of the calculated evaluation value and the Y-coordinate of the intersection between the secondary curve and a straight line passing through the center of a line segment whose ends coincide with the inner corner and the outer corner of the eye. Then, the reference positions for the upper eyelid and the lower eyelid of the eye are set on the basis of the result of an attempt to detect a pixel group occurring because of the red-eye effect in a search area defined on the basis of peaks in the characteristic curve. | 06-12-2014 |
20140161318 | ASSESSMENT OF ROTOR BLADES - The present invention concerns a method of optically assessing a wind power installation or a part thereof, in particular a rotor blade, including the steps: orienting a camera on to a region to be assessed, recording a photograph of the region to be assessed with the camera, detecting the position of the photographed region, and associating the ascertained position with the photographed region. | 06-12-2014 |
20140161319 | INFORMATION PROCESSING APPARATUS, METHOD FOR TRACKING OBJECT AND PROGRAM STORAGE MEDIUM - Provided is a technology that enables early removal, from the set of tracking targets, of an object that does not need to be tracked, while also enabling continuous tracking of a target that is temporarily undetected, without removing it from the set of tracking targets. | 06-12-2014 |
20140161320 | METHOD AND SYSTEM FOR TRACKING MOTION OF A DEVICE - The present invention relates to a method for tracking the motion of a device across a surface. The method repeats the following steps: (a) acquiring, using the device, an input image showing an input area of the surface; (b) comparing the input image to a plurality of current reference images to estimate the displacement between the input image and each current reference image; (c) deciding whether to update each current reference image based on the displacements estimated in step (b), and if said decision is positive, updating the current reference image to form an updated reference image; and (d) determining, based on the displacements, the motion of the device across the surface from an area shown in a previously acquired image to the input area. The previously acquired image may be a previously acquired input image or one of the current reference images. | 06-12-2014 |
20140169621 | GESTURE PRE-PROCESSING OF VIDEO STREAM TO REDUCE PLATFORM POWER - Techniques are disclosed for processing a video stream to reduce platform power by employing a stepped and distributed pipeline process, wherein CPU-intensive processing is selectively performed. The techniques are particularly well-suited for efficient hand-based navigational gesture processing of a video stream, in accordance with some embodiments. The stepped and distributed nature of the process allows for a reduction in power needed to transfer image data from a given camera to memory prior to image processing. In one example case, for instance, the techniques are implemented in a user's computer system wherein initial threshold detection (image disturbance) and optionally user presence (hand image) processing components are proximate to or within the system's camera, and the camera is located in or proximate to the system's primary display. The computer system may be any mobile or stationary computing system having a display and camera that are internal and/or external to the system. | 06-19-2014 |
20140169622 | APPARATUS AND METHOD FOR MONITORING HAND WASHING - A method and apparatus to monitor and document that proper hygienic procedures are followed by food service providers, consisting of a camera, a processor controlling the camera, and software to accomplish the hand washing monitoring. The criteria for identifying the start and end of a hand washing event by monitoring activity in selected areas are presented. A record is created of the wash event, including a sequence of photographs taken during the event and additional related data such as start time, duration, location, and any employee identification. This record is available for recording or downloading to a server for further manipulation, including washer identification and statistical analysis. | 06-19-2014 |
20140169623 | ACTION RECOGNITION BASED ON DEPTH MAPS - A plurality of depth maps corresponding to respective depth measurements determined over a respective plurality of time frames may be obtained. A plurality of skeleton representations respectively corresponding to the respective time frames may be obtained. Each skeleton representation may include joints associated with an observed entity. Local feature descriptors corresponding to the respective time frames may be determined, based on the depth maps and the joints associated with the skeleton representations. An activity recognition associated with the observed entity may be determined, based on the obtained skeleton representations and the determined local feature descriptors. | 06-19-2014 |
20140169624 | IMAGE BASED PEDESTRIAN SENSING APPARATUS AND METHOD - An image based pedestrian sensing apparatus and method that rapidly senses a pedestrian within an image by setting a region of interest (ROI) corresponding to a size of an object in a front image of a vehicle (i.e., an image taken from in front of the vehicle), extracting pedestrian candidates based on motion of the object, and sequentially comparing the extracted pedestrian candidates with pedestrian feature databases (e.g., databases for each posture of the pedestrian) according to a distance to judge the pedestrian. | 06-19-2014 |
20140169625 | IMAGE PROCESSING APPARATUS, METHOD, AND STORAGE MEDIUM - An image processing apparatus includes: a processor configured to: store information on a reference area that has been extracted from a first image not including a target object by using a condition regarding color, generate, by using the condition, information on a target area in a second image that has been captured at a point in time different from a point in time at which the first image has been captured, determine, by using the target area and the reference area, whether or not there is an overlap between the reference area and the target object, when the overlap exists, identify an overlap area, and extract, by using the difference area between the reference area and the target area, and the overlap area, the target object from the second image. | 06-19-2014 |
20140169626 | SYSTEM AND METHOD FOR EFFECTIVE SECTION DETECTING OF HAND GESTURE - A system is provided for detecting an effective section of a gesture by recognizing the gesture, pose information and motion information included in the gesture from an acquired image. In addition, a controller determines whether a pose has been recognized based on the pose information and when the pose has been recognized, an effective section is detected based on a start point and an end point of the pose. Further, when the effective section for the pose is detected, the gesture is recognized based on the motion information. | 06-19-2014 |
20140169627 | IMAGE PROCESSING METHOD FOR DETECTING OBJECTS USING RELATIVE MOTION - An image based obstacle detection method. A camera mounted on a vehicle provides a set of image frames whilst the vehicle is in motion. The image frames define an image plane having a vertical aspect and a horizontal aspect. The relevancy of an object is determined by (i) selecting first and second feature points from the object that are spaced apart vertically in a first image frame; (ii) tracking the positions of the first and second feature points over at least a second image frame; and (iii) deciding the object to be relevant if the first and second feature points move dissimilar distances in physical space, within a tolerance, and deciding the object to be irrelevant otherwise. The motion of relevant objects is then estimated to determine if any relevant object is likely to become an obstacle to the vehicle. | 06-19-2014 |
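The relevance test in the entry above (two vertically spaced feature points that move dissimilar distances) reduces to a small comparison. The point format and tolerance value below are assumptions for illustration only.

```python
# Hypothetical sketch of the relative-motion relevance test: an object
# is deemed relevant if its upper and lower feature points move
# dissimilar distances between two frames, within a tolerance.

def is_relevant(p1_frame1, p1_frame2, p2_frame1, p2_frame2, tol=1.0):
    """p1/p2 are (x, y) positions of the upper and lower feature points
    in the first and second image frames."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    d1 = dist(p1_frame1, p1_frame2)  # displacement of the upper point
    d2 = dist(p2_frame1, p2_frame2)  # displacement of the lower point
    return abs(d1 - d2) > tol        # dissimilar motion -> relevant
```

Points on a flat road surface tend to move consistently under ego-motion, while points on an upright obstacle do not, which is the intuition the test exploits.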
20140169628 | Method and Device for Detecting the Gait of a Pedestrian for a Portable Terminal - A method and system for recognizing a pedestrian's step is provided for a portable terminal. The portable terminal has an acquisition unit. In each image of a sequence of images acquired by the acquisition unit, an object is detected which represents at least a part of a foot, a shoe, and/or a leg. The position of the object is determined in the respective image, and a pedestrian's step is recognized as a function of a position change of the object between at least two images of the sequence of images. | 06-19-2014 |
20140169629 | System to determine product characteristics, counts, and per unit weight details - A system for determining the characteristics of a volume of preferably small, fungible products within an acceptable size range. The system differentiates among products, even if in close contact, to identify acceptable product. The system may store data for later review or for dispensing of product in real time. The system may include a scale to determine a sample's weight, a camera to image the sample, an imaging table to permit viewing of the sample, a processor to determine the number of products in the sample, and a processor to determine the density of desired product. The system may also determine product count-per-weight, product volume-per-weight, and/or product surface-area-per-weight. These determinations may be useful, including in determining product processing and packaging options. | 06-19-2014 |
20140169630 | DRIVING ASSISTANCE APPARATUS AND DRIVING ASSISTANCE METHOD - A driving assistance device is provided with a turning state detection unit, an imaging unit, a solid object detection unit and a detection region modification unit. When the turning state detection unit detects that a host vehicle is in a turning state, the detection region modification unit alters a position of a detection region with respect to the host vehicle, or alters a shape or an area of the detection region based on the turning state of the host vehicle. For example, the detection region modification unit sets a shorter region length of the detection region as the turning radius of the host vehicle becomes smaller. Thereby, the region closest to the host vehicle is set, to a limited extent, as the detection region. | 06-19-2014 |
20140169631 | IMAGE RECOGNITION APPARATUS - An image recognition apparatus determines whether an image of a pedestrian is captured in a frame of video data captured by a vehicle mounted camera. A pre-processing unit determines a detection block from within a frame, and cuts out block image data corresponding to the detection block from the frame. Block data with a predetermined size that is smaller than the size of the detection block is created from the block image data. A neuro calculation unit executes neuro calculation on the block data, and calculates an output synapse. A post-processing unit determines whether a pedestrian exists within the detection block on the basis of the output synapse. When a pedestrian is detected, the post-processing unit creates result data, which is obtained by superimposing the detection block within which the pedestrian was detected onto the frame. | 06-19-2014 |
20140169632 | INSPECTING APPARATUS AND INSPECTING METHOD OF ABSORBENT SHEET-LIKE MEMBER RELATED TO ABSORBENT ARTICLE - An inspecting apparatus is provided which inspects whether or not a liquid absorbent particulate is deposited with a predetermined deposition pattern on an absorbent sheet-like member, the absorbent sheet-like member having a continuous web and a plurality of absorbent bodies, the continuous web being transported along a transport direction, the absorbent bodies being formed on one surface of the continuous web in a spaced apart manner in the transport direction, each absorbent body including the liquid absorbent particulate as a main material. The inspecting apparatus includes: an imaging process section which is adapted to image, from one side of a surface of the absorbent sheet-like member, a region on the absorbent sheet-like member where the absorbent body is expected to exist, and that is adapted to produce data relating to a planar image of the region as planar image data of the absorbent body; an extracting process section which is adapted to extract a proper quantity region from the planar image by performing a binarization process on the produced planar image data based on a threshold value, the proper quantity region being an imaged region in which the liquid absorbent particulate is of a specified amount or more; and a pass/fail determination process section that is adapted to perform a pass/fail determination process based on a value indicating a size of the proper quantity region. | 06-19-2014 |
20140177905 | METHODS AND SYSTEMS FOR CUSTOMIZING A PLENOPTIC MEDIA ASSET - Methods and systems are described for providing customized user experiences with media assets created using plenoptic content capture technology. The ability to increase the focus on different objects while the media asset is progressing may allow a user to more easily track the object. Conversely, the ability to decrease the focus on different objects while the media asset is progressing may block, or cloud the display of, the object from being seen by a user. | 06-26-2014 |
20140177906 | GENERATING STATIC SCENES - Implementations generally relate to generating static scenes. In some implementations, a method includes collecting photos associated with objects in at least one location. The method also includes collecting attention information associated with one or more of the objects. The method also includes generating an attention map based on the attention information. The method also includes generating a model of the at least one location based on the photos and the attention map. | 06-26-2014 |
20140177907 | FAULTY CART WHEEL DETECTION - A system and method of identifying carts exhibiting tendencies that are indicative of damaged or defective wheels. A shopping cart may be identified and tracked visually through one or more surveillance cameras. By comparing the cart's tracked movement to known symptomatic movement patterns, the system may identify defective or damaged carts. Alternatively, by analyzing movement and positioning of a cart's swiveling wheels, the system may identify defective or damaged carts. Alternatively, by identifying if a customer has abandoned a cart, the system may identify defective or damaged carts. A notification message may be transmitted to an associate to repair or replace the identified problematic cart. The notification may be displayed on a mobile computing device, a workstation, or other like systems. | 06-26-2014 |
20140177908 | SYSTEM OF OBJECT DETECTION - In a system of object detection, a color detector detects at least one image region in an input image having a color specifically pertinent to the object under detection, thereby obtaining an object width. A dynamic down-sampling unit adaptively performs down-sampling on the detected image region using a generated down-sampling factor according to the object width. An image feature generator receives the down-sampled image and accordingly generates image features for describing the object under detection, and a cascade of classifiers then operates on the image features. | 06-26-2014 |
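The adaptive down-sampling step in the entry above can be sketched as deriving a factor from the detected object width so that the object is reduced to a roughly fixed size before feature extraction. The target width and the nearest-pixel decimation below are assumptions for illustration.

```python
# Hypothetical sketch of width-driven adaptive down-sampling: the
# factor scales the detected region so the object ends up near a
# fixed target width. Target width and decimation scheme are assumed.

def downsample_factor(object_width, target_width=32):
    """Integer factor >= 1 that reduces the object to about target_width."""
    return max(1, object_width // target_width)

def downsample(image, factor):
    """Keep every factor-th pixel in each dimension (nearest-pixel
    decimation; a real system might average or filter first)."""
    return [row[::factor] for row in image[::factor]]
```

Normalizing the object to a fixed size this way lets the downstream cascade of classifiers operate at a single scale instead of scanning many.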
20140177909 | THREE-DIMENSIONAL INTERACTIVE DEVICE AND OPERATION METHOD THEREOF - A three-dimensional (3D) interactive device and an operation method thereof are provided. The 3D interactive device includes a projection unit, an image capturing unit, and an image processing unit. The projection unit projects an interactive pattern to a surface of a body, so that a user performs an interactive trigger operation on the interactive pattern by a gesture. The image capturing unit captures a depth image within an image capturing range. The image processing unit receives the depth image and determines whether the depth image includes a hand region of the user. If yes, the image processing unit performs hand geometric recognition on the hand region to obtain gesture interactive semantics. According to the gesture interactive semantics, the image processing unit controls the projection unit and the image capturing unit. Accordingly, the disclosure provides a portable, contact-free 3D interactive device. | 06-26-2014 |
20140177910 | DEVICE, METHOD, AND COMPUTER PROGRAM PRODUCT FOR DETECTING OBJECT - According to one embodiment, a device for detecting an object includes a first detection processor, a determination module, an area setting module, and a second detection processor. The first detection processor is configured to detect an object to be detected with respect to a frame image that constitutes input moving image data, with a first algorithm for searching for an area having a feature value similar to a feature value of the object by learning. The area setting module is configured to set, when a travel is smaller than a threshold, a second detection area inside a first detection area in which the object is detected with the first algorithm. The second detection processor is configured to detect the object in the second detection area with a second algorithm for searching, without learning, for the movement destination of a feature area in the frame image. | 06-26-2014 |
20140177911 | Real-Time Bicyclist Detection with Synthetic Training Data - A determination is made in real-time regarding whether a bicyclist is present in a target image. A target image is received. The target image is classified and an error value for the target image is determined using a linear classifier. If the error value does not exceed a threshold value, the classification is outputted. Otherwise, if the error value exceeds the threshold value, the target image is classified using a non-linear classifier. | 06-26-2014 |
20140177912 | COMMODITY READING APPARATUS, COMMODITY SALES DATA PROCESSING APPARATUS AND COMMODITY READING METHOD - In accordance with one embodiment, a commodity reading apparatus includes a determination unit configured to determine whether or not the object discriminated by the discrimination unit passes through a second area, different from a first area specified in a frame range of the video, during movement of the object to the first area; a first recognition unit configured to recognize a candidate commodity as a candidate of a sales commodity based on a feature amount appearing on the object in the first area if the determination unit determines that the object passes through the second area; and a second recognition unit configured to recognize commodity data represented by an optical mark on the object in the first area if the determination unit determines that the object does not pass through the second area. | 06-26-2014 |
20140177913 | ENHANCED CONTRAST FOR OBJECT DETECTION AND CHARACTERIZATION BY OPTICAL IMAGING - Enhanced contrast between an object of interest and background surfaces visible in an image is provided using controlled lighting directed at the object. Exploiting the falloff of light intensity with distance, a light source (or multiple light sources), such as an infrared light source, can be positioned near one or more cameras to shine light onto the object while the camera(s) capture images. The captured images can be analyzed to distinguish object pixels from background pixels. | 06-26-2014 |
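The inverse-square falloff that the entry above exploits can be sketched directly: with the light source near the camera, a nearby object reflects far more light than the distant background, so a simple intensity threshold separates object pixels. The threshold value and intensity model below are illustrative assumptions.

```python
# Hypothetical sketch of falloff-based contrast segmentation: nearby
# object pixels are much brighter than background pixels under a light
# source placed near the camera, so a threshold separates them.

def object_mask(image, threshold=128):
    """Label pixels brighter than the threshold as object (1), else 0."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def expected_intensity(source_power, distance):
    """Inverse-square falloff model, useful for choosing a threshold:
    a background twice as far away returns roughly a quarter the light."""
    return source_power / (distance ** 2)
```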
20140177914 | METHOD AND SYSTEM FOR VIDEO-BASED ROAD CHARACTERIZATION, LANE DETECTION AND DEPARTURE PREVENTION - A method and system for video-based road departure warning for a vehicle on a road, is provided. Road departure warning involves receiving an image of a road in front of the vehicle from a video imager, and detecting one or more road markings in the image corresponding to markings on the road. Then, analyzing the characteristics of an image region beyond the detected markings to determine a rating for drivability of the road corresponding to said image region, and detecting the lateral offset of the vehicle relative to the markings on the road based on the detected road markings. A warning signal is generated as function of said lateral offset and said rating. | 06-26-2014 |
20140177915 | METHOD AND APPARATUS FOR DETECTING OBJECT - A method and an apparatus for detecting an object are disclosed. The method comprises the steps of obtaining a plurality of depth images of the object; extracting foregrounds; fusing the foregrounds in a unified three-dimensional world coordinate system; calculating an appearance two-dimensional histogram from the fused foreground by dividing the foreground into vertical members and obtaining statistics of the numbers of foreground points in the vertical members; determining an overlapping region and the number of overlaps based on the placement of the stereo cameras; determining a detection parameter relating to a detection position based on the overlapping region and the number of overlaps; and detecting the object by the appearance two-dimensional histogram based on the determined detection parameter. | 06-26-2014 |
20140177916 | Object Information Derived From Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 06-26-2014 |
20140177917 | IMAGE PROCESSING DEVICE, OBJECT SELECTION METHOD AND PROGRAM - There is provided an image processing device including: a data storage unit that stores object identification data for identifying an object operable by a user and feature data indicating a feature of appearance of each object; an environment map storage unit that stores an environment map representing a position of one or more objects existing in a real space and generated based on an input image obtained by imaging the real space using an imaging device and the feature data stored in the data storage unit; and a selecting unit that selects at least one object recognized as being operable based on the object identification data, out of the objects included in the environment map stored in the environment map storage unit, as a candidate object being a possible operation target by a user. | 06-26-2014 |
20140177918 | Object Information Derived From Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 06-26-2014 |
20140177919 | Systems and Methods for Multi-Pass Adaptive People Counting - People are counted in a segment of video with a video processing system that is configured with a first set of parameters. This produces a first output. Based on this first output, a second set of parameters is chosen. People are then counted in the segment of video using the second set of parameters. This produces a second output. People are also counted with the video played forward and counted again with the video played backward. The results of these two counts are reconciled to produce a more accurate people count. | 06-26-2014 |
20140177920 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 06-26-2014 |
20140177921 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 06-26-2014 |
20140177922 | Object Information Derived From Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 06-26-2014 |
20140177923 | METHOD FOR TRACKING MULTIPLE IMAGE OBJECTS BASED ON A PENALTY GRAPH FOR AVOIDING HIJACKING OF A TRACKER - A method for tracking multiple image objects, includes resampling particles from each of the image objects tracked in a previous image, calculating respective weights of the resampled particles, and predicting and tracking locations of the image objects in a current image based on values obtained by multiplying the locations of the particles by corresponding weights, wherein when an image object of the previous image has a neighboring image object located within a threshold distance, weights of the particles sampled from the image object are calculated by multiplying image values of the current image by penalties, which are values of distances between the corresponding particles and the neighboring image object. The method is capable of avoiding a hijacking problem in which identifiers of nearby objects are confused with each other or disappear during the tracking of multiple objects. | 06-26-2014 |
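The penalty-weighted particle update described in the entry above can be sketched compactly: each particle's weight is its image likelihood multiplied by a penalty based on its distance to the neighboring tracked object, so particles drifting toward the neighbor are down-weighted. The penalty form (raw Euclidean distance) is an assumption; the patent only specifies that the penalty is a function of that distance.

```python
# Hypothetical sketch of penalty-weighted particle weights for
# multi-object tracking: particles near a neighboring tracked object
# receive low weight, reducing tracker "hijacking".

def penalty(particle, neighbor):
    """Distance from a particle to the neighboring object; used directly
    as the penalty so closer particles are penalized more."""
    return ((particle[0] - neighbor[0]) ** 2 +
            (particle[1] - neighbor[1]) ** 2) ** 0.5

def particle_weights(particles, likelihoods, neighbor):
    """Weight = image likelihood x penalty when a neighbor exists within
    the threshold distance (pass neighbor=None otherwise); normalized."""
    weights = []
    for p, like in zip(particles, likelihoods):
        w = like * penalty(p, neighbor) if neighbor is not None else like
        weights.append(w)
    total = sum(weights)  # normalize so the weights sum to one
    return [w / total for w in weights]
```

The tracked position would then be the weight-averaged particle location, as in a standard particle filter.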
20140185864 | PROBABILISTIC IDENTIFICATION OF SOLID MATERIALS IN HYPERSPECTRAL IMAGERY - Systems, methods and computer program products, for identification of materials based on hyperspectral imagery, are disclosed. An example system comprises one or more processors, a memory, a library of spectral signatures, a receiver, a model generator, and a material identifier. The receiver module is configured to receive a first spectral signature corresponding to a region of interest contained in the hyperspectral image. The model generator is configured to create a model search space including one or more model signatures based on the spectral signatures in the library, wherein each of the one or more model signatures approximate the first spectral signature. The material identifier is a material identifier configured to calculate a probability associated with a presence or absence of a material within the first spectral signature, based on the first spectral signature and the model search space and determine the presence or absence of the material in the region of interest based on the probability. | 07-03-2014 |
20140185865 | IMPLANT IDENTIFICATION SYSTEM AND METHOD - Objects implanted in a being are identified by acquiring a first internal medical image of the object from a first perspective; acquiring a second internal medical image of the object from a second perspective different than the first perspective; and receiving descriptive information about the object that is in addition to the first and second internal medical images. The object is identified based on the first internal medical image, the second internal medical image, and the descriptive information; one or more operational characteristics of the object are then determined and transmitted to a remote requestor that provided the first and second internal medical images. | 07-03-2014 |
20140185866 | OPTICAL NAVIGATION METHOD AND DEVICE USING SAME - The invention provides an optical navigation method, which includes: sequentially obtaining plural images including a first image, a second image, and a third image; choosing a main reference block in the first image; comparing the main reference block and the second image by block matching comparison to determine a first motion vector; resizing the main reference block according to the first motion vector to generate an ancillary reference block having a size smaller than the main reference block; and comparing the ancillary reference block and the third image by block matching comparison to determine a second motion vector. | 07-03-2014 |
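The block-matching comparison at the core of the entry above can be sketched with a sum-of-absolute-differences (SAD) search: the reference block from the first image is slid over the second image and the lowest-cost position yields the motion vector. The exhaustive search and SAD cost below are common illustrative choices, not necessarily the patent's exact comparison.

```python
# Hypothetical sketch of block matching: find where a reference block
# best matches in the next image using sum of absolute differences.

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def extract_block(image, top, left, height, width):
    return [row[left:left + width] for row in image[top:top + height]]

def motion_vector(ref_block, image):
    """Return the (row, col) of the lowest-SAD match; subtracting the
    block's original position from this gives the motion vector."""
    h, w = len(ref_block), len(ref_block[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(len(image) - h + 1):
        for dx in range(len(image[0]) - w + 1):
            cost = sad(ref_block, extract_block(image, dy, dx, h, w))
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best
```

The ancillary (smaller) reference block in the abstract would be matched the same way, just with a reduced `ref_block`.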
20140185867 | ANALYSIS SYSTEM AND ANALYSIS METHOD - An object of the present invention is to enable accurate classification of golf swings. A data processing apparatus computes a state of motion of a golf ball on the basis of still images captured by an imaging unit. The data processing apparatus divides a golf swing identified by swing data acquired during a golf swing by a player into golf swing paths of a back swing, a down swing and a follow through, assigns a path in two-dimensional coordinates having a vertical axis and a horizontal axis to any two of these three golf swing paths, calculates an angle formed between the golf swing path and the horizontal axis, for each of the two golf swing paths, and classifies the golf swing on the basis of an angular difference between the respective angles thus calculated. The present invention can be applied to, for example, an analysis system for analyzing a golf swing. | 07-03-2014 |
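The angle computation underlying the swing classification above can be sketched with `atan2`: measure each swing path's angle to the horizontal axis and classify by the angular difference. Representing a path by its two endpoints is a simplifying assumption; the patent operates on the full two-dimensional swing path.

```python
# Hypothetical sketch: angle of a swing path to the horizontal axis,
# and the angular difference used to classify the swing.
import math

def path_angle(start, end):
    """Angle in degrees between the line start->end and the horizontal
    axis, with (x, y) points in a vertical/horizontal coordinate plane."""
    return math.degrees(math.atan2(end[1] - start[1], end[0] - start[0]))

def angular_difference(path_a, path_b):
    """Absolute difference between the two path angles; the swing class
    would be chosen from ranges of this value."""
    return abs(path_angle(*path_a) - path_angle(*path_b))
```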
20140185868 | GESTURE RECOGNITION MODULE AND GESTURE RECOGNITION METHOD - A gesture recognition module for recognizing a gesture of a user, includes a detecting unit, including at least one image capture device, for capturing at least one image of a hand of the user, to obtain a first position and a second position of the hand sequentially; a computing unit, electrically coupled to the detecting unit, for determining a first angle between a first virtual straight line connected between a fixed reference point and the first position and a reference plane passing through the fixed reference point, and determining a second angle between a second virtual straight line connected between the fixed reference point and the second position and the reference plane; and a determining unit, electrically coupled to the computing unit, for determining a relation between the first angle and the second angle, to decide whether a gesture of the hand is a back-and-forth gesture. | 07-03-2014 |
20140185869 | SCENE CORRELATION - A method for maintaining north comprising the steps of locating north with a north finding gyroscope, tying north to a feature in a scene, correlating the feature to a target in the field of regard of a plurality of cameras, and determining a north factor and translating the north factor into a target vector relative to north. | 07-03-2014 |
20140185870 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes an object distance information acquisition unit configured to acquire information regarding a distance to an object detected from acquired image data, a distance information acquisition unit configured to divide the image data into small areas and acquire object distance information for each of the small areas, a determination unit configured to determine a blurring level for each of the small areas, and a blurring processing unit configured to perform blurring processing on the image data based on the blurring level. The determination unit identifies an object located within a predetermined distance from a main object based on the distance information acquired by the object distance information acquisition unit, and determines a distance group of the identified object as a non-blur area. | 07-03-2014 |
20140185871 | INFORMATION PROCESSING APPARATUS, CONTENT PROVIDING METHOD, AND COMPUTER PROGRAM - There is provided an information processing apparatus including a trigger recognition unit configured to acquire a captured image of a trigger and recognize predetermined trigger information included in the captured image, and a content acquisition unit configured to acquire a content including augmented reality information which is based on a state at a time of capturing the captured image or a state of content acquisition in the past and which corresponds to the predetermined trigger information recognized by the trigger recognition unit. | 07-03-2014 |
20140185872 | METHOD AND SYSTEM FOR RECOGNIZING HAND GESTURE USING SELECTIVE ILLUMINATION - A method and system that recognize a hand gesture using selective illumination that can reliably perform hand gesture recognition by effectively removing an unnecessary image and noise including static disturbance light and dynamic disturbance light are provided. The method of recognizing a hand gesture includes acquiring, by a controller from an imaging device a hand image, which is a recognition target. A static background image and a dynamic background image are removed by the controller from the hand image. The method further includes recognizing, by the controller, a gesture of the hand by extracting a characteristic point from the hand image in which the dynamic background image is removed. | 07-03-2014 |
20140185873 | THREE-DIMENSIONAL DATA PROCESSING AND RECOGNIZING METHOD - A three-dimensional data processing and recognizing method including scanning and re-constructing an object to be detected so as to obtain three-dimensional data for recognition of the object to be detected; and extracting data matching features from the three-dimensional data, so that the extracted data constitutes an interested target in order to display and recognize the object to be detected. The method provides a quick way to recognize an object to be detected, such as a cuboid, a cylinder, or a cutting tool. | 07-03-2014 |
20140185874 | THREE-DIMENSIONAL DATA PROCESSING AND RECOGNIZING METHOD - A three-dimensional data processing and recognizing method including scanning and re-constructing objects to be detected so as to obtain three-dimensional data for recognition of the objects to be detected; extracting data matching features from the three-dimensional data, so that the extracted data constitutes an interested target; with respect to the data matching features, merging and classifying adjacent data points as one group to form an image of the merged interested target; recognizing a cross section of the interested target; cutting the interested target by a plane that passes through a central point of the cross section and is perpendicular to it, in order to obtain a graph; and recognizing the shape of the interested target based on a property of the graph. | 07-03-2014 |
20140185875 | OBJECT AREA TRACKING APPARATUS, CONTROL METHOD, AND PROGRAM OF THE SAME - An object area tracking apparatus has: a face detection unit for detecting a face area on the basis of a feature amount of a face from a supplied image; a person's body detection unit for detecting an area of a person's body on the basis of a feature amount of the person's body; and a main object determination unit for obtaining a priority for each of the objects by using detection results by the face detection unit and the person's body detection unit and determining a main object of a high priority, wherein for the object detected only by the person's body detection unit, the priority is changed in accordance with a past detection result of the object in the face detection unit. | 07-03-2014 |
20140185876 | OBJECT COUNTER AND METHOD FOR COUNTING OBJECTS - An object counter performs a method for estimating the number of objects crossing a counting boundary. The method comprising: capturing, during a time period, a plurality of images representing moving images; registering, from the captured images, motion region areas passing across the counting boundary; calculating the integral of the registered motion region areas for forming a resulting total motion region area; and estimating the number of objects that have crossed the counting boundary by dividing the resulting total motion region area by a reference area. | 07-03-2014 |
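The counting scheme in the object-counter entry above reduces to accumulating the registered motion-region area over the time period and dividing the total by a per-object reference area. A minimal Python sketch under that reading of the abstract; the function name and the sample numbers are purely illustrative, not from the patent:

```python
def estimate_object_count(motion_areas, reference_area):
    """Estimate how many objects crossed a counting boundary.

    motion_areas: per-frame areas (in pixels) of motion regions registered
        as they pass across the counting boundary during the time period.
    reference_area: accumulated area (in pixels) that a single typical
        object is expected to contribute while crossing.
    """
    total_area = sum(motion_areas)      # integral of registered motion areas
    return total_area / reference_area  # fractional estimate of object count

# e.g. areas registered over five frames, reference area of 320 pixels
print(round(estimate_object_count([120, 300, 310, 150, 80], 320)))  # prints 3
```

Because the estimate is a ratio of areas, it degrades gracefully when objects overlap in a single motion region, which is the motivation the abstract implies for integrating area rather than counting blobs.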
20140185877 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND FILTER SETTING METHOD - Disclosed herein is an image processing apparatus including an image data acquisition section, a metadata acquisition section, a display section, a filter setting section, and a combination setting section. The image data acquisition section is configured to acquire image data from a camera. The metadata acquisition section is configured to acquire, from the camera, metadata representing information concerning an object of surveillance. The display section is configured to display a setting screen usable for setting a plurality of filters. The filter setting section is configured to perform filter setting using the information of the metadata. The combination setting section is configured to set a combination of the plurality of filters. The filter setting section and the combination setting section are provided on the same setting screen. | 07-03-2014 |
20140185878 | METHOD AND SYSTEM FOR ANALYZING AN IMAGE GENERATED BY AT LEAST ONE CAMERA - A method for analyzing an image of a real object, generated by at least one camera includes the following steps: generating at least a first image by the camera capturing at least one real object, defining a first search domain comprising multiple data sets of the real object, each of the data sets being indicative of a respective portion of the real object, and analyzing at least one characteristic property of the first image with respect to the first search domain, in order to determine whether the at least one characteristic property corresponds to information of at least a particular one of the data sets of the first search domain. If it is determined that the at least one characteristic property corresponds to information of at least a particular one of the data sets, a second search domain comprising only the particular one of the data sets is defined and the second search domain is used for analyzing the first image and/or at least a second image. | 07-03-2014 |
20140185879 | APPARATUS AND METHOD FOR DETECTING TRAFFIC LANE IN REAL TIME - An apparatus and a method for detecting traffic lanes in real time are disclosed. The disclosed real-time lane detection apparatus may include: a candidate area establisher unit configured to establish as a candidate area for lane detection an area having an intensity value which corresponds to the intensity of a traffic lane marking from among the intensity values in a color space of a color image; and a lane-marking determiner unit configured to determine a traffic lane marking from the established candidate area by using a line component of the candidate area. According to the present invention, traffic lanes can be detected accurately and quickly, even in environments where the lighting conditions of the road vary. | 07-03-2014 |
20140193029 | Text Detection in Images of Graphical User Interfaces - Systems and methods for text detection are provided. An image is received, and a set of connected components in the image are determined. For each connected component in the set, a bounding area is determined. A set of regions of the image are determined, based on the bounding area. Each region in the set of regions is classified and normalized based on the classification. The normalized set of regions is merged into a binary image. | 07-10-2014 |
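The first steps of the text-detection pipeline above (finding connected components in a binary image and computing a bounding area for each) can be sketched with a plain BFS labeling pass. This is a generic illustration of connected-component bounding boxes, not the patented method's actual implementation:

```python
from collections import deque

def connected_component_boxes(binary):
    """Return one bounding box (min_row, min_col, max_row, max_col) per
    4-connected component of a binary image given as a list of 0/1 rows."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                # BFS flood-fill from this unvisited foreground pixel
                queue = deque([(r, c)])
                seen[r][c] = True
                r0, c0, r1, c1 = r, c, r, c
                while queue:
                    y, x = queue.popleft()
                    r0, c0 = min(r0, y), min(c0, x)
                    r1, c1 = max(r1, y), max(c1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                boxes.append((r0, c0, r1, c1))
    return boxes

img = [[0, 1, 1, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 1]]
print(connected_component_boxes(img))  # prints [(0, 1, 1, 2), (2, 3, 2, 3)]
```

The later stages the abstract mentions (classifying each region, normalizing it, and merging the results into a binary image) would operate on the boxes this pass produces.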
20140193030 | GESTURE PRE-PROCESSING OF VIDEO STREAM WITH HOLD-OFF PERIOD TO REDUCE PLATFORM POWER - Techniques are disclosed for processing a video stream to reduce platform power by employing a stepped and distributed pipeline process, wherein CPU-intensive processing is selectively performed. In one example case, the techniques are implemented in a user's computer system wherein initial threshold detection (image disturbance) and optionally user presence (e.g., hand image) processing components are proximate to or within the system's camera, and the camera is located in or proximate to the system's primary display. The threshold detection and/or target presence stages can be selectively disabled for a hold-off period. The hold-off period may be, for example, in the range of 50 to 1000 mSec and triggered in response to an indication that a user of the system is unlikely to be making navigational gestures or that the system is not ready to process video, thereby conserving power by avoiding processing of video frames free of navigation gestures. | 07-10-2014 |
20140193031 | COMPRESSIVE SENSING WITH LOCAL GEOMETRIC FEATURES - Methods and apparatuses for compressive sensing that enable efficient recovery of features in an input signal based on acquiring a few measurements corresponding to the input signal. One method of compressive sensing includes folding an image to generate first and second folds, and recovering a feature of the image based on the first and second folds without reconstructing the image. One example of a compressive sensing apparatus includes a lens, a focal plane array coupled to the lens and configured to generate first and second folds based on the image, and a decoder configured to receive the first and second folds and to recover a feature of the image without reconstructing the image. The feature may be a local geometric feature or a corner. Compressive sensing methods and apparatuses for determining translation and rotation between two images are also disclosed. | 07-10-2014 |
20140193032 | IMAGE SUPER-RESOLUTION FOR DYNAMIC REARVIEW MIRROR - Method for applying super-resolution to images captured by a camera device of a vehicle includes receiving a plurality of image frames captured by the camera device. For each image frame, a region of interest is identified within the image frame requiring resolution related to detail per pixel to be increased. Spatially-implemented super-resolution is applied to the region of interest within each image to enhance image sharpness within the region of interest. | 07-10-2014 |
20140193033 | METHOD AND DEVICE FOR ROAD SIGN RECOGNITION - A method and a device are provided for recognizing road signs in image data. The method includes, but is not limited to segmenting an object in the image data that is a road sign for a predefined probability. A text mapped in the segmented image data is identified using a text recognition method, where this text comprises numbers and/or words and/or abbreviations and/or combinations thereof. A probability value is determined for the text being depicted on a road sign and, in case the probability value is smaller than or equal to a predefined threshold value, is selected as a potential road sign. In case the probability value is greater than the predefined threshold value, a classifier is applied to the segmented image data for recognizing the object as an actual road sign. | 07-10-2014 |
20140193034 | OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD AND OBJECT DETECTION PROGRAM - The present invention accurately detects an object from a video image where large distortion of an image may be generated for covering a wide field of view. An object detection device | 07-10-2014 |
20140193035 | Method and Device for Head Tracking and Computer-Readable Recording Medium - The present disclosure is directed to performing head tracking. The method comprises: (a) receiving an input of an image including a facial area; and (b) tracking a movement of the facial area. The step (b) comprises: (b-1) if a rotation angle of a facial area is within a predetermined angle range from a front side, searching for a location change of feature points within a facial area through a comparison with a template learned in advance; and (b-2) if a rotation angle of a facial area is beyond a predetermined angle range from a front side, searching for a location change of feature points within a facial area through a comparison with a facial area image frame previously inputted. | 07-10-2014 |
20140193036 | DISPLAY DEVICE AND METHOD FOR ADJUSTING OBSERVATION DISTANCES THEREOF - A method for adjusting the observation distance between a user and a display device is provided. When the display device determines that the user is squinting and that the distance between the user and the display device is larger than a predetermined value, the display device controls a driving unit to drive a display unit of the display device to move toward the user. A display device is also provided. | 07-10-2014 |
20140193037 | Displaying an Image on Multiple Dynamically Located Displays - Moving and/or still image(s) are aligned to multiple display devices. One or more display devices may be connected to a system using a variety of communicative connections. One or more imaging devices may be used to capture images of a plurality of display devices while the display devices display one or more unique identifiers. The captured image(s) are processed to determine the location of one or more of the connected display devices. Portions of an overall image may be distributed to one or more of the connected display devices to display the overall image across the display device(s). Location information for one or more display devices may be updated periodically, which may update the portion of an image displayed on one or more connected display devices. | 07-10-2014 |
20140193038 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An information processing system that acquires an image captured by an image pickup unit; acquires one or a plurality of templates each including one or a plurality of fields; compares the image to the one or plurality of templates; and outputs a result based on the comparison, the result indicating whether recognition of each of the one or plurality of fields of the one or plurality of templates was successful. | 07-10-2014 |
20140198944 | USE OF EMG FOR SUBTLE GESTURE RECOGNITION ON SURFACES - An apparatus, a method, and a computer program product for detecting a gesture of a body part relative to a surface are provided. The apparatus determines if the body part is in proximity of the surface. If the body part is in proximity of the surface, the apparatus determines if electrical activity sensed from the body part is indicative of contact between the body part and the surface. If the body part is in contact with the surface, the apparatus determines if motion activity sensed from the body part is indicative of the gesture. | 07-17-2014 |
20140198945 | Systems and Methods for Tracking an Object in a Video - Disclosed are various embodiments for tracking an object shown as moving in a video. One embodiment is a method for tracking an object in a video that comprises tracking in a first temporal direction an object in a plurality of video frames and generating a first tracking result, evaluating the first tracking result corresponding to tracking of the object in the first temporal direction, and stopping tracking in the first temporal direction upon the occurrence of a predefined event, wherein the predefined event is based on an evaluated tracking result. The method further comprises obtaining data identifying an object outline of the object upon stopping the tracking in the first temporal direction, tracking in a second temporal direction the object based on the data identifying the object outline of the object to generate a second tracking result, and generating a refined tracking result based at least on one of the first tracking result, the second tracking result, or a combination thereof. | 07-17-2014 |
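The forward/backward scheme in the tracking entry above can be sketched as a skeleton: track forward until a predefined event stops tracking, obtain a refined object outline at the stopping point, then track backward from it. All callables below are placeholders for application-specific components, and the "predefined event" is modeled simply as a tracking-quality check:

```python
def bidirectional_track(frames, track_step, tracking_ok, refine_outline):
    """Skeleton of bidirectional tracking over a frame sequence.

    track_step(frame, state) -> new tracker state for this frame
    tracking_ok(state)       -> False when the predefined stop event occurs
    refine_outline(state)    -> corrected object outline used to re-seed
                                the second (backward) tracking pass
    """
    forward, state = [], None
    stop_idx = len(frames)
    for i, frame in enumerate(frames):          # first temporal direction
        state = track_step(frame, state)
        forward.append(state)
        if not tracking_ok(state):              # predefined event: stop
            stop_idx = i
            break
    outline = refine_outline(forward[-1]) if forward else None
    backward, state = [], outline
    for frame in reversed(frames[:stop_idx]):   # second temporal direction
        state = track_step(frame, state)
        backward.append(state)
    backward.reverse()
    # refined result: prefer backward estimates where both passes exist
    return [b if b is not None else f for f, b in zip(forward, backward)]
```

With trivial stand-ins (`track_step=lambda frame, state: frame`, `tracking_ok=lambda s: s < 3`, `refine_outline=lambda s: s`) and `frames=[0, 1, 2, 3, 4]`, the forward pass stops at frame 3 and the refined result covers frames 0 through 2.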
20140198946 | METHOD SYSTEM AND COMPUTER PRODUCT FOR NON-DESTRUCTIVE OBJECT ANALYSIS - Aspects of the invention provide a solution for analyzing an object, such as a part of a turbo machine. A planar surface is generated using a curved reformat function based on a surface of a three-dimensional (3D) image of an object. A peel of the 3D image that is adjacent to the surface is determined. Based on the peel, a second planar surface is generated. These two, and/or other similarly generated planar surfaces can be analyzed to determine characteristics of the original object. | 07-17-2014 |
20140198947 | Methods and Systems for Video-Based Chew Counting Via Feature Tracking - A system and method of video-based chew counting by receiving image frames from a video camera, determining feature points within the image frames from the video camera, generating a motion signal based on movement of the feature points across the image frames from the video camera, and determining a chew count based on the motion signal. | 07-17-2014 |
20140198948 | VIDEO-BASED MOTION CAPTURE AND ADAPTATION - The disclosure provides an approach for estimating a state-space controller from a set of video frames depicting a motion of an entity. The approach includes incrementally optimizing parameters of the state-space controller and changing a structure of the state-space controller based on expanding subsets of the set of video frames. In one embodiment, a controller-estimation application greedily selects, at every stage of the incremental optimization, structure and parameters of the controller which minimize an objective function. In another embodiment, the controller-estimation application re-optimizes, after the incremental optimization, all parameters of the state-space controller based on all of the video frames. In yet a further embodiment, the controller-estimation application alters the structure of the state-space controller for robustness and compactness by adding cycles in the state-space controller and enforcing constraints on the structure of the state-space controller and adding and modifying state transition types, as appropriate. | 07-17-2014 |
20140198949 | PROJECTOR LIGHT BULB - A display system that includes a projector light bulb. The system includes a light fixture with a conventional light bulb socket. A projector light bulb is provided with a socket adapter for mating electrically with the light bulb socket. A projector, such as a pico projector, is fit into or mated with the socket adapter so as to be powered by the light bulb socket via the adapter. In lamp-type implementations, the projector is used to project onto a lamp shade-shaped rear projection screen or through a translucent shade, and also upon surfaces of the ceiling or objects above the projector light bulb. A light conditioning or directing assembly may be provided that directs a portion of the projected light onto the projection screen (or shade) and another portion up onto the ceiling so as to concurrently focus on two or more surfaces at two or more focal distances. | 07-17-2014 |
20140198950 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND COMPUTER READABLE MEDIUM - An image processing device is provided, the image processing device comprising: an image input unit configured to be input with a frame image of an imaging area imaged by a camera; an image processing unit configured to process the frame image input to the image input unit, and detect an object imaged in the frame image; and an operation frequency determination unit configured to determine a frequency of an operation clock of the image processing unit according to the number of objects detected by the image processing unit, wherein the operation frequency determination unit lowers the frequency of the operation clock of the image processing unit as the number of objects detected by the image processing unit becomes smaller. | 07-17-2014 |
20140198951 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - A dictionary for detection of an object is created from an image obtained by performing an image process, which depends on a first image process parameter, on the training image of the detection target object. The dictionary created based on the image process depending on the first image process parameter is determined, based on a result of detecting the object from an image obtained by performing an image process, which depends on the first image process parameter, on a photographed image based on the dictionary. A second image process parameter is determined, based on a result of detecting the object from an image obtained by performing an image process, which depends on the second image process parameter, on the photographed image using the determined dictionary. | 07-17-2014 |
20140198952 | IMAGE PROCESSING METHOD - A method and apparatus for localizing an area in relative movement and for determining the speed and direction thereof in real time are disclosed. Each pixel of an image is smoothed using its own time constant. A binary value corresponding to the existence of a significant variation in the amplitude of the smoothed pixel from the prior frame, and the amplitude of the variation, are determined, and the time constant for the pixel is updated. For each particular pixel, two matrices are formed that include a subset of the pixels spatially related to the particular pixel. The first matrix contains the binary values of the subset of pixels. The second matrix contains the amplitude of the variation of the subset of pixels. In the first matrix, it is determined whether the pixels along an oriented direction relative to the particular pixel have binary values representative of significant variation, and, for such pixels, it is determined in the second matrix whether the amplitude of these pixels varies in a known manner indicating movement in the oriented direction. In each of several domains, a histogram of the values in the first and second matrices falling in such domain is formed. Using the histograms, it is determined whether there is an area having the characteristics of the particular domain. The domains include luminance, hue, saturation, speed (V), oriented direction (D1), time constant (CO), first axis (x(m)), and second axis (y(m)). | 07-17-2014 |
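The per-pixel front end of the method above (smoothing each pixel with its own time constant, flagging significant variation, and updating the constant) can be sketched for a single pixel as below. This loosely follows the abstract; the update rule for the time constant and all parameter names are illustrative assumptions, not the patent's exact formulas:

```python
def update_pixel(prev_smoothed, new_value, time_constant, threshold):
    """One temporal step for a single pixel.

    Returns (smoothed, significant, variation, new_time_constant):
      smoothed     -- first-order smoothed pixel value
      significant  -- 1 if the variation meets the threshold, else 0
      variation    -- amplitude of the change from the smoothed value
      new_time_constant -- adapted constant for the next frame
    """
    smoothed = prev_smoothed + (new_value - prev_smoothed) / time_constant
    variation = abs(new_value - prev_smoothed)
    significant = 1 if variation >= threshold else 0
    # Illustrative adaptation: react faster where the scene is changing,
    # smooth harder where it is static (never below a constant of 1).
    if significant:
        time_constant = max(1, time_constant - 1)
    else:
        time_constant = time_constant + 1
    return smoothed, significant, variation, time_constant

# a pixel jumping from 100 to 110 with time constant 2 and threshold 5
print(update_pixel(100.0, 110.0, 2, 5.0))  # prints (105.0, 1, 10.0, 1)
```

In the full method, the binary `significant` flags and `variation` amplitudes for a neighborhood of pixels would populate the first and second matrices that the directional tests and histograms operate on.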
20140198953 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, CONTROL PROGRAM, AND RECORDING MEDIUM - An image processing device configured to detect a detection target, which is all or a part of a predetermined main body on an image has a detection target detection unit that detects an estimated detection target that the image processing device assumes to be the detection target from the image, a heterogeneous target determination unit that determines whether the estimated detection target detected by the detection target detection unit is an estimated heterogeneous target that the image processing device assumes to be a heterogeneous target, which is all or a part of a main body different in class from the main body, and a detection target determination unit that determines whether the estimated detection target detected by the detection target detection unit is the detection target based on a determination result of the heterogeneous target determination unit. | 07-17-2014 |
20140198954 | SYSTEMS AND METHODS OF DETECTING BODY MOVEMENTS USING GLOBALLY GENERATED MULTI-DIMENSIONAL GESTURE DATA - The disclosure describes systems and methods of detecting body movements using gesture data. The gesture data may be self-referenced and may be comprised by frames which may identify locations or positions of body parts of a subject with respect to a particular reference point within the frame. A classifier may process frames to learn body movements and store the frames of gesture data in a database. Data comprising frames of self-referenced gesture data may be received by a recognizer which recognizes movements of the subject identified by the frames by matching gesture data of the incoming frames to the classified self-referenced gesture data stored in the database. | 07-17-2014 |
20140205138 | DETECTING THE LOCATION OF A KEYBOARD ON A DESKTOP - Methods and systems for detecting the location of a keyboard on a desktop. The method includes receiving an image of the desktop with the keyboard situated thereon and analyzing the image of the desktop to identify an area of the image corresponding to the keyboard. In one example, the image of the desktop is a depth image and analyzing the image of the desktop includes identifying an image element of the depth image that forms part of the keyboard, identifying first and second corners of the keyboard from the identified image element and determining the area of the image corresponding to the keyboard based on the first and second corners. | 07-24-2014 |
20140205139 | OBJECT RECOGNITION SYSTEM IMPLEMENTING IMAGE DATA TRANSFORMATION - An object recognition system has a camera configured to generate source image data and a processor configured to access the source image data from the camera. The processor is also configured to access state data of the camera and generate transformed image data from the source image data based at least in part on the state data. The processor is also configured to detect an object in the transformed image data and to classify the detected object using the transformed image data. | 07-24-2014 |
20140205140 | SYSTEMS, DEVICES, AND METHODS FOR TRACKING MOVING TARGETS - A system for tracking a moving target having up to six degrees of freedom and rapidly determining positions of the target, said system includes an easy-to-locate precision optical target fixed to the target. This system includes at least two cameras positioned so as to view the optical target from different directions, with each of the at least two cameras being adapted to record two-dimensional images of the precision optical target defining precise target points. A computer processor is programmed to determine the target position in x, y, and z as well as pitch, roll, and yaw. In an embodiment, the system can be configured to utilize an iteration procedure whereby an approximate first-order solution is proposed and tested against the identified precise target points to determine residual errors, which can be divided by the local derivatives with respect to each component of rotation and translation to determine an iterative correction. | 07-24-2014 |
20140205141 | SYSTEMS AND METHODS FOR TRACKING AND DETECTING A TARGET OBJECT - A method for detecting and tracking a target object is described. The method includes performing motion-based tracking for a current video frame by comparing a previous video frame and the current video frame. The method also includes selectively performing object detection in the current video frame based on a tracked parameter. | 07-24-2014 |
20140205142 | METHOD AND APPARATUS OF ENVIRONMENT VISUALIZATION FOR TELE-OPERATION THROUGH HIERARCHIZATION OF OBJECT CHARACTERISTICS - A method and apparatus of environment visualization for tele-operation apply an augmented reality technology in which an image with various types of information is provided to a user. External factors are added to the image information provided to the operator by using hierarchization of object characteristics, so as to improve the attention of the operator and thereby help prevent mistakes of the operator. | 07-24-2014 |
20140205143 | EYES-OFF-THE-ROAD CLASSIFICATION WITH GLASSES CLASSIFIER - A method for determining an Eyes-Off-The-Road (EOTR) condition exists includes capturing image data corresponding to a driver from a monocular camera device. A detection of whether the driver is wearing eye glasses based on the image data using an eye glasses classifier. When it is detected that the driver is wearing eye glasses, a driver face location is detected from the captured image data and it is determined whether the EOTR condition exists based on the driver face location using an EOTR classifier. | 07-24-2014 |
20140205144 | IN-VEHICLE TARGET DETECTING DEVICE - In an in-vehicle target detecting device, a captured image which is an image capturing an area ahead of the own vehicle is acquired at a predetermined measurement cycle. Radio waves are transmitted and received. Positional information indicating at least an orientation and a distance of at least one target candidate in relation to the own vehicle is acquired. The target candidate reflects radio waves. Image recognition is performed to detect a detection object by searching a predetermined image search area in the captured image. At least an image detection position of the detection object in the captured image is stored. The image search area is set, based on an image-plane measurement position corresponding to a measurement position in the captured image. The measurement position is a position in three-dimensional space of the target candidate indicated by the positional information acquired at a timing at which the captured image is acquired. | 07-24-2014 |
20140205145 | Method and Apparatus for Tracking Objects in a Target Area of a Moving Organ - A method for tracking position of features of a moving organ from at least one sequence of image frames of the moving organ, comprising: | 07-24-2014 |
20140205146 | SYSTEMS AND METHODS OF TRACKING OBJECT MOVEMENTS IN THREE-DIMENSIONAL SPACE - The technology disclosed relates to tracking movement of a real world object in three-dimensional (3D) space. In particular, it relates to mapping, to image planes of a camera, projections of observation points on a curved volumetric model of the real world object. The projections are used to calculate a retraction of the observation points at different times during which the real world object has moved. The retraction is then used to determine translational and rotational movement of the real world object between the different times. | 07-24-2014 |
20140205147 | OBSTACLE ALERT DEVICE - An obstacle alert device is capable of clearly indicating to a driver the presence of an obstacle approaching a vehicle, without impairing visibility of the peripheral situation of the vehicle. The device includes: a photographed image acquisition section acquiring a photographed image of a scene in the periphery of the vehicle; a photographed-image-of-interest generation section generating a photographed image of interest based on the photographed image; a masked region setting section setting a masked region that hides at least a portion of the vehicle-periphery scene in the photographed image of interest; an object presence determination section determining whether an object is present in an outside region beyond the photographed image of interest; a clear indication image outputting section outputting, in case an object in the outside region moves toward the region corresponding to the photographed image of interest, a clear indication image including an indicator of the object's presence displayed at the end of the photographed image of interest on the side where the object is present; and a motion image outputting section outputting, in case the object in the outside region has entered the region corresponding to the photographed image of interest, an image in which the indicator becomes absorbed from the side of the masked region where the object is present. | 07-24-2014 |
20140205148 | VIDEO SEARCH DEVICE, VIDEO SEARCH METHOD, RECORDING MEDIUM, AND PROGRAM - A video search device for video searches in which a user specifies the position and orientation of an object that should appear in a video. A receiver receives input of a still image, two reference positions in the still image and two target positions in a video frame. An extractor extracts a reference image containing the two reference positions from the still image. A searcher searches for similar frame images in which local images similar to the reference image are depicted, from frame images in the video, traces movement tracks of two noteworthy pixels at start positions corresponding to the two reference positions in a local image when time advances or regresses from a similar frame image in the video, searches for a target frame image where the two movement tracks approach two target positions, and produces videos containing the similar frame image and the target frame image. | 07-24-2014 |
20140205149 | DOZE DETECTION METHOD AND APPARATUS THEREOF - A doze detection method, which accurately detects a blink burst and improves speed and accuracy of doze detection, includes measuring a state where the eye is substantially open as an open eye time and any other state as a closed eye time; defining a time shorter than an average blink interval of a healthy adult in an alert state as a first threshold time; defining a time longer than an average closed eye time of a healthy adult in an alert state as a second threshold time; and defining blinks as a blink burst when detecting an eye opening equal to or shorter than the first threshold time. A doze state is determined when the closed eye time of a blink among the blinks during the blink burst reaches at least the second threshold time, the blink occurring after an open eye time equal to or shorter than the first threshold time. | 07-24-2014 |
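The doze-detection rule above reduces to two threshold tests per blink: a short preceding open-eye interval marks a blink burst, and a long eye closure within such a burst indicates a doze. A minimal sketch of that rule; the data layout (a list of per-blink time pairs) is an illustrative assumption:

```python
def detect_doze(blinks, first_threshold, second_threshold):
    """Return True if the blink sequence indicates a doze state.

    blinks: time-ordered list of (open_eye_time, closed_eye_time) pairs in
        seconds, where open_eye_time is the eye-open interval immediately
        preceding the blink and closed_eye_time is the blink's closure.
    first_threshold: shorter than a healthy alert adult's average blink
        interval; open intervals at or below it mark a blink burst.
    second_threshold: longer than a healthy alert adult's average closed
        eye time; closures at or above it during a burst indicate a doze.
    """
    for open_t, closed_t in blinks:
        if open_t <= first_threshold and closed_t >= second_threshold:
            return True
    return False

# normal blink, then a burst ending in a long closure -> doze detected
print(detect_doze([(3.0, 0.2), (0.5, 0.15), (0.4, 0.9)], 1.0, 0.6))  # prints True
```

Requiring both conditions on the same blink is what lets the method stay fast: an isolated long closure outside a burst, or a burst of short closures, does not by itself trigger a doze decision.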
20140211983 | Information Technology Asset Location Using Visual Detectors - Mechanisms are provided for determining the physical location of a physical asset in a physical area. A plurality of physical assets are controlled to cause each physical asset to output a visual output pattern on visual output elements of the physical asset. An image of a target physical asset is captured that has the current state of the visual output elements. An identification of the target physical asset is determined based on the current state of the visual output elements. A physical location of the target physical asset is determined based on a physical location of the image capture device when the image was captured. Location data identifying the determined physical location of the target physical asset is stored in an asset database in association with configuration information for the physical asset. | 07-31-2014 |
20140211984 | Information Technology Asset Location Using Visual Detectors - Mechanisms are provided for determining the physical location of a physical asset in a physical area. A plurality of physical assets are controlled to cause each physical asset to output a visual output pattern on visual output elements of the physical asset. An image of a target physical asset is captured that has the current state of the visual output elements. An identification of the target physical asset is determined based on the current state of the visual output elements. A physical location of the target physical asset is determined based on a physical location of the image capture device when the image was captured. Location data identifying the determined physical location of the target physical asset is stored in an asset database in association with configuration information for the physical asset. | 07-31-2014 |
20140211985 | Image-Based Occupancy Sensor - An image-based occupancy sensor includes a motion detection module that receives and processes an image signal to generate a motion detection signal, a people detection module that receives the image signal and processes the image signal to generate a people detection signal, a face detection module that receives the image signal and processes the image signal to generate a face detection signal, and a sensor integration module that receives the motion detection signal from the motion detection module, receives the people detection signal from the people detection module, receives the face detection signal from the face detection module, and generates an occupancy signal using the motion detection signal, the people detection signal, and the face detection signal, with the occupancy signal indicating vacancy or occupancy, with an occupancy indication specifying that one or more people are detected within the monitored volume. | 07-31-2014 |
20140211986 | APPARATUS AND METHOD FOR MONITORING AND COUNTING TRAFFIC - A method and apparatus to monitor and document movement of bodies along or through selected regions are described for the directional counting of such bodies. Restricting the analysis to selected regions avoids excessive calculation and allows the use of an inexpensive image acquisition device and processor. Methods for determining the direction of movement are described. A record is created for counting events for recording or downloading to a server for further manipulation. | 07-31-2014 |
20140211987 | SUMMARIZING SALIENT EVENTS IN UNMANNED AERIAL VIDEOS - A method for summarizing image content from video images received from a moving camera includes detecting foreground objects in the images, determining moving objects of interest from the foreground objects, tracking the moving objects, rating movements of the tracked objects, and generating a list of highly rated segments within the video images based on the ratings. | 07-31-2014 |
20140211988 | ATTRIBUTE-BASED ALERT RANKING FOR ALERT ADJUDICATION - Alerts to object behaviors are prioritized for adjudication as a function of relative values of abandonment, foregroundness and staticness attributes. The attributes are determined from feature data extracted from video frame image data. The abandonment attribute indicates a level of likelihood of abandonment of an object. The foregroundness attribute quantifies a level of separation of foreground image data of the object from a background model of the image scene. The staticness attribute quantifies a level of stability of dimensions of a bounding box of the object over time. Alerts are also prioritized according to an importance or relevance value that is learned and generated from the relative abandonment, foregroundness and staticness attribute strengths. | 07-31-2014 |
20140211989 | Component Based Correspondence Matching for Reconstructing Cables - In a stereoscopic pair of images, global homography at the image level is applied to feature points extracted from connected components (CC) to identify corresponding CC's and feature points, and to discard any CC's that do not have a corresponding pair in the stereoscopic pair of images. Local homography at the CC level is then applied to individual footprint areas of the previously identified paired CC to further clean feature point correspondence. Any CC or feature point or pixel within a paired CC footprint not satisfying the local homography constraint is discarded. A correspondence is also extrapolated between unknown pixels within a paired CC footprint using a weighting mechanism and the unknown pixel's surrounding pixels that do have a known correspondence. This provides a dense correspondence of pixels, or feature points, which is then used to create a dense 3D point cloud of identified objects within a 3D space. | 07-31-2014 |
20140211990 | POSITION-SETUP FOR GESTURE-BASED GAME SYSTEM - Technologies are generally described for position-setup for a gesture-based game system. In some examples, a method performed under control of a gesture-based game system includes capturing, by an image capture unit, an image of a first player and an image of a second player, cropping, from the image of the first player and the image of the second player, a first sub-image of at least part of the first player and a second sub-image of at least part of the second player, respectively, determining whether to adjust the first sub-image and the second sub-image, if it is determined to adjust the first sub-image and the second sub-image, adjusting the first sub-image and the second sub-image, and merging the first adjusted sub-image and the second adjusted sub-image into an output image. | 07-31-2014 |
20140211991 | SYSTEMS AND METHODS FOR INITIALIZING MOTION TRACKING OF HUMAN HANDS - Systems and methods for initializing motion tracking of human hands are disclosed. One embodiment includes a processor; a reference camera; and memory containing: a hand tracking application; and a plurality of edge feature templates that are rotated and scaled versions of a base template. The hand tracking application configures the processor to: determine whether any pixels in a frame of video are part of a human hand, where a part of a human hand is identified by searching the frame of video data for a grouping of pixels that have image gradient orientations that match the edge features of one of the plurality of edge feature templates; track the motion of the part of the human hand visible in a sequence of frames of video; confirm that the tracked motion corresponds to an initialization gesture; and commence tracking the human hand as part of a gesture based interactive session. | 07-31-2014 |
20140211992 | SYSTEMS AND METHODS FOR INITIALIZING MOTION TRACKING OF HUMAN HANDS USING TEMPLATE MATCHING WITHIN BOUNDED REGIONS - Systems and methods for initializing motion tracking of human hands within bounded regions are disclosed. One embodiment includes: a processor; reference and alternate view cameras; and memory containing a plurality of templates that are rotated and scaled versions of a base template. In addition, a hand tracking application configures the processor to: obtain reference and alternate view frames of video data; generate a depth map; identify at least one bounded region within the reference frame of video data containing pixels having distances from the reference camera that are within a specific range of distances; determine whether any of the pixels within the at least one bounded region are part of a human hand; track the motion of the part of the human hand in a sequence of frames of video data obtained from the reference camera; and confirm that the tracked motion corresponds to a predetermined initialization gesture. | 07-31-2014 |
20140211993 | IMAGE-PROCESSING DEVICE, IMAGE-CAPTURING DEVICE, AND IMAGE-PROCESSING METHOD - An image processing device includes: a reference area setting unit which sets a reference area including an indicated segment; an extraction unit which extracts an interest object feature quantity indicating a first feature from the reference area; an interest area setting unit which sets an area of interest in a third image, based on a relationship between a position of a feature point extracted from a feature area which is an area corresponding to an object of interest and included in a second image and a position of a feature point in the third image corresponding to the extracted feature point; and a tracking unit which determines for each of two or more of plural segments included in the area of interest with use of the interest object feature quantity whether the segment is a segment corresponding to the object of interest. | 07-31-2014 |
20140211994 | HUMAN DETECTION AND TRACKING APPARATUS, HUMAN DETECTION AND TRACKING METHOD, AND HUMAN DETECTION AND TRACKING PROGRAM - A human detection and tracking apparatus prevents errors in a size of a person and a location of body parts between actual image data and a tracking result. Human frame detecting section detects, from first image data, a human frame as a region having high possibility of presence of a human, based on human feature data representing a feature of an entire human body. Body part frame location determining section determines a body part frame in the first image data, based on part feature data illustrating a feature of a body part of the human and a part frame determined as a region having high possibility of presence of a body part of the human in second image data previous to the first image data. Body part frame location correcting section corrects, based on the human frame, a location of the part frame determined in the first image data. | 07-31-2014 |
20140211995 | POINT-OF-GAZE ESTIMATION ROBUST TO HEAD ROTATIONS AND/OR ESTIMATION DEVICE ROTATIONS - Point-of-gaze of a user looking at a display is estimated, taking into account rotation of the user's head or rotation of the display. An image of an eye of the user is captured. The image is processed to determine coordinates in the image of defined eye features, sufficient to determine the eye's optical axis. At least one angle is determined, the at least one angle proportional to an angle between (i) a line coincident with an edge of the display and (ii) an intersection of the sagittal plane of the user's head with a plane of the display. An intersection of the eye's line-of-sight with the plane of the display is estimated using the eye's optical axis, and using the at least one angle to account for rotation of the user's head or the display. | 07-31-2014 |
20140211996 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 07-31-2014 |
20140211997 | TRACKING-FRAME INITIAL-POSITION SETTING DEVICE AND METHOD OF CONTROLLING OPERATION OF SAME - An image obtained by imaging a subject is displayed and a tracking frame is displayed at the central portion of a display screen. A target area is set surrounding the tracking frame and a high-frequency-component image is generated. A distance image indicating the distance to the subject image within an imaging zone is generated. An area, which represents a subject at a distance identical with that of the subject portion specified by the tracking frame displayed at the reference position, is decided upon as a search area. While a moving frame is moved within the search area of the high-frequency-component image, amounts of high-frequency component are calculated. The position of the moving frame at which the calculated amount of high-frequency component is maximized is adopted as the initial position of the tracking frame. | 07-31-2014 |
20140211998 | METHOD FOR LOCATING AN OBJECT USING A REFERENCE GRID - A method for locating an object by a reference grid, the object moving in a plane parallel to or identical to that of the grid. When crossing of a line of the grid is detected by the object, its heading is determined and, as a function of the detection, probabilities of the thus crossed line being a horizontal line and a vertical line respectively are obtained. Displacement of the object is assessed from the probabilities obtained and a horizontal and vertical pitch of the grid. A position of the object is then updated from a position of the object determined during a last line crossing of the grid and the displacement thus assessed. | 07-31-2014 |
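The line-crossing update in entry 20140211998 above can be illustrated with a simple heading-based probability model. This is a hedged sketch only: the |cos|/|sin| attribution of the crossing to a vertical or horizontal line, and the expected-pitch displacement, are assumed formulas chosen for demonstration, not the patented estimator.

```python
# Illustrative dead-reckoning update for a grid-crossing event.
# All probability and displacement formulas here are assumptions.
import math

def update_position(x, y, heading_rad, pitch_x, pitch_y):
    """Advance (x, y) after one detected line crossing.

    A heading near the x-axis crosses vertical grid lines more often, so
    the crossing is attributed to a vertical line with weight |cos h|
    (normalized against |sin h|); the displacement is the expected grid
    pitch along each axis, signed by the direction of travel.
    """
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    norm = abs(c) + abs(s) or 1.0
    p_vertical = abs(c) / norm      # probability the crossed line was vertical
    p_horizontal = abs(s) / norm    # probability it was horizontal
    dx = math.copysign(p_vertical * pitch_x, c)
    dy = math.copysign(p_horizontal * pitch_y, s)
    return x + dx, y + dy
```

Moving straight along an axis attributes every crossing to the perpendicular line family and advances by one full pitch.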
20140211999 | MEASURING DEVICE FOR DETERMINING THE SPATIAL POSITION OF AN AUXILIARY MEASURING INSTRUMENT - A positioning method continuously determines the spatial position of an auxiliary measuring instrument having several auxiliary-point markings in a fixed, known spatial distribution relative to one another. Camera images of the auxiliary-point markings are continually recorded using a camera having a surface sensor that includes pixels, and read-out processes are continually performed by reading out the pixels with regard to a respective current exposure value. Image positions of the imaged auxiliary-point markings in the respective current camera image are determined, with which the current spatial position of the auxiliary measuring instrument is derived. Respective current areas of interest on the surface sensor are continually set using image positions determined in at least one previously recorded camera image. The current image positions are determined using exclusively only at most those current exposure values that are received by pixels of the surface sensor lying within the currently set areas of interest. | 07-31-2014 |
20140212000 | SYSTEM AND METHOD FOR OPTIMIZING TRACKER SYSTEM - The present invention relates to the field of computation and simulation and covers methods to optimize the fiducial marker positions in optical object tracking systems by simulating the visibility. The method for optimizing the tracker system, which is realized to simulate camera and fiducial positions and pose estimation algorithm parameters to optimize the system, comprises the steps of: acquire mesh data representing possible active marker positions and orientations on a tracked object, pose data representing possible poses of the tracked object, and camera positions and orientations; compute visibility of each node from all camera viewports and generate a visibility value list; select the node with the highest visibility count as a marker placement node; remove nodes closer to the selected node than a threshold; remove the pose(s) having a predetermined number of selected nodes; check whether a predetermined percentage of all poses has the predetermined number of selected nodes; project selected node positions on the image plane of each camera viewport and calculate the pose of the mesh using the tracker algorithm to be optimized; calculate pose error and pose coverage by comparing algorithm results with initial data; record and output results; and select among the results a parameter set satisfying at least one constraint. | 07-31-2014 |
20140219497 | TEMPORAL WINNER TAKES ALL SPIKING NEURON NETWORK SENSORY PROCESSING APPARATUS AND METHODS - Apparatus and methods for contrast enhancement and feature identification. In one implementation, an image processing apparatus utilizes latency coding and a spiking neuron network to encode image brightness into spike latency. The spike latency is compared to a saliency window in order to detect early responding neurons. Salient features of the image are associated with the early responding neurons. An inhibitory neuron receives salient feature indication and provides inhibitory signal to the other neurons within an area of influence of the inhibitory neuron. The inhibition signal reduces probability of responses by the other neurons to stimulus that is proximate to the feature thereby increasing contrast within the encoded data. The contrast enhancement may facilitate feature identification within the image. Feature detection may be used for example for image compression, background removal and content distribution. | 08-07-2014 |
20140219498 | DATA ACQUISITION METHOD AND DEVICE FOR MOTION RECOGNITION, MOTION RECOGNITION SYSTEM AND COMPUTER READABLE STORAGE MEDIUM - A data acquisition method and device for motion recognition, a motion recognition system and a computer readable storage medium are disclosed. The data acquisition device for motion recognition comprises: an initial motion recognition module adapted to perform an initial recognition with respect to motion data collected by a sensor and provide motion data describing a predefined range around a motion trigger point to a data storage module for storage; a data storage module adapted to store motion data provided from the initial motion recognition module; and a communications module adapted to forward the motion data stored in the data storage module to a motion computing device for motion recognition. The present invention makes an initial selection of the motion data to be transmitted to the motion computing device under the same sampling rate. Consequently, the present invention reduces pressure on wireless channel transmission and wireless power consumption, and provides high accuracy in motion recognition while providing motion data at the same sampling rate. | 08-07-2014 |
20140219499 | VISUAL TRACKING FRAMEWORK - A computer program product tangibly embodied in a computer-readable storage medium includes instructions that when executed by a processor perform a method. The method includes identifying a frame of a video sequence, transforming a model into an initial guess for how the region appears in the frame, performing an exhaustive search of the frame, performing a plurality of optimization procedures, wherein at least one additional model parameter is taken into account as each subsequent optimization procedure is initiated. A system includes a computer readable storage medium, a graphical user interface, an input device, a model for texture and shape of the region, the model generated using the video sequence and stored in the computer readable storage medium, and a solver component. | 08-07-2014 |
20140219500 | IMAGE REPORTING METHOD - An image reporting method is provided. The image reporting method comprises the steps of retrieving an image representation of a sample structure from an image source; mapping a generic structure to the sample structure, the generic structure being related to the sample structure; determining a region of interest within the sample structure based on content of the image representation of the sample structure; providing a focused set of representations of diagnostic knowledge which is contextually appropriate to the region of interest and prompting the user to select at least one diagnostic finding from the focused set of knowledge representations or by entering free-form text; and generating a diagnostic report based on the selections and free-form text entries. | 08-07-2014 |
20140219501 | RECOGNIZING METHOD OF FLAKY OR BLOCKY PROHIBITED ARTICLES, EXPLOSIVES OR DRUGS - The present invention discloses recognizing methods of flaky or blocky prohibited articles, explosives or drugs. Specifically, the method for recognizing flaky prohibited articles, explosives or drugs comprises steps of: (1) reading in tomogram data of an object to be inspected for one tomogram; (2) pre-processing the tomogram data; (3) splitting the pre-processed tomogram data into a plurality of regions that have similar physical properties; (4) analyzing whether each of the split regions is a flaky region; (5) determining whether the flaky region recognized in the current tomogram can be merged with the flaky region detected from the previous tomogram, so as to form a flaky target; (6) determining whether each detected flaky target is complete or finished; (7) repeating steps (1)-(6) and processing each tomogram data layer by layer, until all of the tomogram data have been processed. | 08-07-2014 |
20140219502 | POSITION AND ORIENTATION MEASURING APPARATUS, INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - There is provided a position and orientation measurement apparatus, an information processing apparatus, and an information processing method capable of performing robust measurement of a position and orientation. In order to achieve the apparatuses and method, at least one coarse position and orientation of a target object is acquired from an image including the target object, at least one candidate position and orientation is newly generated as an initial value used for deriving a position and orientation of the target object based on the acquired coarse position and orientation, and the position and orientation of the target object in the image is derived by using model information of the target object and by performing, at least once, fitting processing of the candidate position and orientation generated as the initial value with the target object in the image. | 08-07-2014 |
20140219503 | THRONGING DETERMINATION DEVICE AND THRONGING DETERMINATION METHOD - A thronging determination device for determining occurrence of a thronging state in which persons are gathered locally, includes an image receiving unit that receives a moving image, an image dividing unit that divides an input image received by the image receiving unit into local regions, and a degree-of-congestion estimating unit that judges the degree of congestion in plural ones of the local regions. If the degree-of-congestion estimating unit judges that the degree of congestion in the plural ones of the local regions is lower than a prescribed value, a thronging determination is performed using local regions that are smaller in number than the local regions that have been used in estimating the degree of congestion. | 08-07-2014 |
20140219504 | OBJECT DETECTION DEVICE - It is an object of the invention to provide an object detection device capable of detecting an object for detection in an input image with high precision. In an object detection device | 08-07-2014 |
20140219505 | PEDESTRIAN BEHAVIOR PREDICTING DEVICE AND PEDESTRIAN BEHAVIOR PREDICTING METHOD - According to the present invention, a pedestrian is detected from an imaged image and a partial image including the pedestrian is extracted, shape information of the pedestrian acquired from the extracted partial image is accumulated and the shape information of a predetermined time before and the current shape information are compared using the accumulated shape information to detect change in the movement of the pedestrian, discontinuous movement estimating information indicating a discontinuous movement of the pedestrian that occurs following the change in the movement of the pedestrian is acquired from a storage means at the time the change in the movement of the pedestrian is detected, and a behavior of the pedestrian is predicted using the acquired discontinuous movement estimating information. | 08-07-2014 |
20140233787 | OBJECT DETECTION USING DIFFERENCE OF IMAGE FRAMES - Object detection using a difference between image frames may include receiving a first image of a field of view, receiving a second image of the field of view, determining a difference between portions of the first image and corresponding portions of the second image, and declaring based on the difference between the portions of the first image and the corresponding portions of the second image that a specific object has been detected in the field of view. | 08-21-2014 |
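The frame-differencing approach summarized in entry 20140233787 above can be sketched briefly. This is a minimal illustration of the general technique, not the patented method: grayscale NumPy-array frames are assumed as input, and the block size and both thresholds are assumed values.

```python
# Minimal sketch of object detection by differencing two frames of the
# same field of view. Block size and thresholds are illustrative only.
import numpy as np

def detect_change(frame1, frame2, block=16, pixel_thresh=25, block_thresh=0.5):
    """Return True when enough image blocks differ between the frames."""
    # Widen to a signed type before subtracting to avoid uint8 wraparound.
    diff = np.abs(frame1.astype(np.int16) - frame2.astype(np.int16))
    h, w = diff.shape
    changed_blocks = 0
    total_blocks = 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            total_blocks += 1
            # A block counts as changed if its mean absolute difference
            # exceeds the per-pixel threshold.
            if diff[y:y + block, x:x + block].mean() > pixel_thresh:
                changed_blocks += 1
    return changed_blocks / max(total_blocks, 1) > block_thresh
```

Comparing block-level statistics rather than raw pixels makes the declaration less sensitive to isolated pixel noise.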
20140233788 | SYSTEM, METHOD, AND SOFTWARE FOR OPTICAL DEVICE RECOGNITION ASSOCIATION - A system including an image capturing unit configured to capture an image of at least one medical device monitoring a patient, a database including images of a plurality of medical devices, where each image corresponds to a particular medical device, and a data collection server configured to receive the at least one image, receive patient identification data corresponding to the patient, and identify the medical device in the image by comparing the received image with the images stored in the database and matching the received image with the images stored in the database. | 08-21-2014 |
20140233789 | SYSTEMS AND METHODS FOR IMPLEMENTING AND USING OFF-CENTER EMBEDDED MEDIA MARKERS - Provided is an off-center embedded media marker, which may have the form of an iconic marker printed outside the boundary of a region of interest in a document or other article and indicating an available media object or a function associated with the aforesaid region of interest. This marker is used by defining a sight element with the boundary shape of the marker near the edge of a viewable portion of a display, aligning the sight element with the marker and capturing an image of a predetermined region of the document without using a visible region boundary on the hardcopy document. The media or function associated with the marker is automatically determined by performing a feature-based analysis of the captured image similarly to the techniques developed in connection with the conventional embedded media markers. Upon the determination, the associated media is retrieved or the associated function is performed. | 08-21-2014 |
20140233790 | MOTION ESTIMATION SYSTEMS AND METHODS - A motion estimation system is disclosed. The motion estimation system may include one or more memories storing instructions, and one or more processors configured to execute the instructions to receive, from a scanning device, scan data representing at least one object obtained by a scan over at least one of the plurality of sub-scanning regions, and generate, from the scan data, a sub-pointcloud for one of the sub-scanning regions. The sub-pointcloud includes a plurality of surface points of the at least one object in the sub-scanning region. The one or more processors may be further configured to execute the instructions to estimate the motion of the machine relative to the at least one object by comparing the sub-pointcloud with a reference sub-pointcloud. | 08-21-2014 |
20140233791 | SYNTHETIC APERTURE RADAR MAP APERTURE ANNEALING AND INTERPOLATION - A method for repairing, bridging, or extrapolating an existing aperture to improve image interpretability in synthetic aperture radar images. | 08-21-2014 |
20140233792 | System and Method for Detecting Motion in Compressed Video - A method and apparatus wherein the method includes the steps of parsing a stream of compressed video, obtaining macroblock size information from the parsed stream, computing factors derived from the macroblock size, wherein the factors include a normalized bit size, a bit size ratio and a neighbor score, computing corresponding adaptive threshold values derived from the relative frame characteristics of the compressed video, comparing the factors derived from the macroblock size information with the corresponding adaptive threshold values and detecting motion based upon combinations of the comparisons when the factors exceed the threshold value. | 08-21-2014 |
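The macroblock-size factors in entry 20140233792 above lend themselves to a short illustration. A minimal sketch under stated assumptions: per-macroblock compressed bit counts are given as a 2-D array, and the adaptive threshold (mean plus k standard deviations) and the 4-neighbor score are illustrative stand-ins for the patent's factors, not its exact formulas.

```python
# Illustrative motion detection from compressed-domain macroblock bit
# sizes: large blocks relative to frame statistics suggest motion, and
# a neighbor score suppresses isolated false positives.
import numpy as np

def detect_motion(mb_bits, k=2.0):
    """mb_bits: 2-D array of per-macroblock compressed bit sizes.
    Returns a boolean mask of macroblocks flagged as moving."""
    mb_bits = np.asarray(mb_bits, dtype=float)
    mean, std = mb_bits.mean(), mb_bits.std()
    # Adaptive threshold derived from frame-level statistics.
    thresh = mean + k * std
    candidates = mb_bits > thresh
    # Neighbor score: count flagged 4-neighbors of each candidate.
    padded = np.pad(candidates, 1)
    neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:])
    # Keep only candidates supported by at least one flagged neighbor.
    return candidates & (neighbors >= 1)
```

Because only parsed macroblock sizes are used, no pixel decoding of the compressed stream is required.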
20140233793 | DIFFERENTIATING ABANDONED AND REMOVED OBJECT USING TEMPORAL EDGE INFORMATION - Disclosed is a computer-implemented method for classifying a detected region of change of a video frame as indicating an abandoned object or a removed object. The detected region of change is classified as a removed object if the temporal change of the edge consistency is from a more consistent state to a less consistent state, and as an abandoned object if the temporal change of the edge consistency is from a less consistent state to a more consistent state. | 08-21-2014 |
20140233794 | METHOD, APPARATUS AND MEDICAL IMAGING SYSTEM FOR TRACKING MOTION OF ORGAN - A method of tracking motion of an organ, includes receiving organ shape data that includes a shape of an organ of an examinee at a moment of motion of the examinee, and loading first to Nth interpolation curves that represent spatiotemporal motion of respective organs of other examinees, the organs of the other examinees being the same type as the organ of the examinee. The method further includes estimating an interpolation curve that represents a spatiotemporal motion of the organ of the examinee based on the first to Nth interpolation curves and the organ shape data. | 08-21-2014 |
20140233795 | DRIVER ASSISTANCE SYSTEM, DRIVER ASSISTANCE METHOD AND INFORMATION STORAGE MEDIUM - A driver assistance system includes a mobile terminal provided on an automobile, and a detection server capable of communicating with the mobile terminal. The mobile terminal includes an image capture apparatus which captures images around the automobile, and an image transmission unit which transmits the captured image to the detection server. The detection server includes an image filter which carries out a working process for the image, and a detection engine which receives the image after the working process by the image filter as an input thereto and detects whether or not the image includes an object. If it is decided that the image includes an object, the detection server transmits the result of the decision to the mobile terminal. | 08-21-2014 |
20140233796 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM - An image processing device has a first image input unit that inputs a visible light image of a target area imaged by a first camera, a second image input unit that inputs a temperature distribution image of the target area imaged by a second camera, a first image processing section that processes the visible light image input to the first image input unit, and detects an object imaged in the visible light image, a second image processing section that processes the temperature distribution image input to the second image input unit and detects an object imaged in the temperature distribution image, and an image region extracting unit that extracts an image region not suited for the detection of the object in the first image processing section for the visible light image of the target area input to the first image input unit. | 08-21-2014 |
20140233797 | IMAGE ANALYSIS PLATFORM FOR IDENTIFYING ARTIFACTS IN SAMPLES AND LABORATORY CONSUMABLES - A High-resolution Image Acquisition and Processing Instrument (HIAPI) performs at least five simultaneous measurements in a noninvasive fashion, namely: (a) determining the volume of a liquid sample in wells (or microtubes) containing liquid sample; (b) detection of precipitate, objects or artifacts within microtiter plate wells; (c) classification of colored samples in microtiter plate wells or microtubes; (d) determination of contaminant (e.g., water concentration); (e) air bubbles; (f) problems with the actual plate. Remediation of contaminant is also possible. | 08-21-2014 |
20140233798 | ELECTRONIC DEVICE AND METHOD OF OPERATING ELECTRONIC DEVICE - An electronic device is provided. The electronic device includes a memory configured to store a plurality of digital images and a processor, wherein the processor tracks movement of an object recognized in the plurality of digital images, and the processor detects an amount of movement between the plurality of digital images, and selects one among a plurality of object tracking methods based on at least a part of the amount of the movement. | 08-21-2014 |
20140233799 | ELECTRONIC DEVICE AND OBJECT RECOGNITION METHOD IN ELECTRONIC DEVICE - An electronic device for fast object recognition is provided. The electronic device includes a first storage unit configured to store digital image data, and a processor configured to recognize an object in first image data, to receive a second object related to the first object in the first image data from a second storage unit, to store the first and second objects in the first storage unit, and to use one or more of the first and second objects stored in the first storage unit to recognize an object in second image data. | 08-21-2014 |
20140233800 | METHOD OF TRACKING OBJECT AND ELECTRONIC DEVICE SUPPORTING THE SAME - A method of tracking an object and an electronic device supporting the same are provided. The method includes predicting a movement of a tracked object, comparing features of current image information based on predicted information with features of each of key frames, selecting a particular key frame from the key frames according to a result of the comparison, and estimating a pose by correcting the movement of the object in the current image information based on the selected key frame, wherein the comparing of the features comprises defining a location value of the feature by relation with neighboring features. | 08-21-2014 |
20140233801 | METHOD AND ELECTRONIC DEVICE FOR PROCESSING OBJECT - A method of operating an electronic device is provided. The method includes tracking at least one object included in a plurality of digital images including at least a first image and a second image. The tracking of the object includes determining values of phase correlation between a part of the first image and a part of the second image, determining a position of a peak value among the values of the phase correlation, and determining a variance of the values of the phase correlation according to at least a part of the peak value. | 08-21-2014 |
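The phase-correlation step underlying 20140233801 can be sketched in pure Python for a 1-D signal: compute the normalized cross-power spectrum, invert it, take the peak position as the displacement, and use the variance of the correlation values as a confidence cue. The naive O(n²) DFT is for clarity only; a real implementation would use an FFT, and the variance-based confidence reading is an assumption of this sketch.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def phase_correlation(a, b):
    """Real part of the inverse DFT of the normalized cross-power spectrum."""
    fa, fb = dft(a), dft(b)
    r = [(fa[k].conjugate() * fb[k]) / (abs(fa[k].conjugate() * fb[k]) + 1e-12)
         for k in range(len(a))]
    return [v.real for v in idft(r)]

def peak_and_variance(corr):
    """Peak index gives the shift; variance of the surface hints at confidence."""
    peak = max(range(len(corr)), key=lambda i: corr[i])
    mean = sum(corr) / len(corr)
    var = sum((v - mean) ** 2 for v in corr) / len(corr)
    return peak, var

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = a[-3:] + a[:-3]            # a circularly shifted right by 3
peak, var = peak_and_variance(phase_correlation(a, b))
print(peak)  # 3
```

For an exact circular shift the correlation surface is a near-perfect impulse, so the peak is sharp and the variance is dominated by that single spike.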
20140233802 | Increased Quality of Image Objects Based on Depth in Scene - Systems, methods, and software for operating an image processing system are provided herein. In a first example, a method of operating an image processing system is provided. The method includes identifying object pixels associated with an object of interest in a scene, identifying additional pixels to associate with the object of interest, and performing an operation based on a depth of the object in the scene on target pixels comprised of the object pixels and the additional pixels to change a quality of the object of interest. | 08-21-2014 |
20140233803 | Increased Quality of Image Objects Based on Depth in Scene - Systems, methods, and software for operating an image processing system are provided herein. In a first example, a method of operating an image processing system is provided. The method includes identifying object pixels associated with an object of interest in a scene, identifying additional pixels to associate with the object of interest, and performing an operation based on a depth of the object in the scene on target pixels comprised of the object pixels and the additional pixels to change a quality of the object of interest. | 08-21-2014 |
20140233804 | METHOD AND APPARATUS FOR FINDING STICK-UP HEIGHT OF A PIPE OR FINDING A JOINT BETWEEN TWO PIPES IN A DRILLING ENVIRONMENT - The present invention generally relates to a method, apparatus, a computer program and computer for automated detection of pipe ( | 08-21-2014 |
20140241570 | USING A COMBINATION OF 2D AND 3D IMAGE DATA TO DETERMINE HAND FEATURES INFORMATION - A method of determining hand features information using both two dimensional (2D) image data and three dimensional (3D) image data is described. In one implementation, a method includes: receiving a 2D image frame; receiving 3D image data corresponding to the 2D image frame; using the 3D image data corresponding to the 2D image frame, transforming the 2D image frame; and using the 3D image data corresponding to the 2D image frame, scaling the 2D image frame, where the transforming and scaling results in a normalized 2D image frame, where the normalized 2D image frame is a scaled and transformed version of the 2D image frame, and where the scaling and transforming is performed using a computer. | 08-28-2014 |
20140241571 | SYSTEM AND METHOD FOR CONTAMINATION MONITORING - A system for contamination monitoring includes a tracking component, a material identification component, a procedural component, and a notification component. The tracking component tracks an individual and one or more objects in a work area using a three-dimensional tracking system. The material identification component identifies a material of the one or more objects based on a captured image. The procedural component determines that an object of the one or more objects is contaminated based on tracked locations of the one or more objects and the individual. The notification component provides a notification of the contamination. | 08-28-2014 |
20140241572 | Identification of Aircraft Surface Positions Using Camera Images - A method and apparatus for identifying a position of a surface on an aircraft. Image data for an image of the surface on the aircraft is received. The image data is processed to determine whether the position of the surface on the aircraft is a desired position. A surface position identification report comprising information identifying whether the position of the surface on the aircraft is the desired position is generated. | 08-28-2014 |
20140241573 | SYSTEM FOR AND METHOD OF TRACKING TARGET AREA IN A VIDEO CLIP - A system for and a method of tracking a target area in a video clip. In an embodiment, a video clip comprising a sequence of frames is obtained. The video clip includes a frame having an identified target area. A plane is identified in three-dimensional space for the target area, the target area being defined by a set of points on the plane. A position of the target area is estimated in a next frame of the video clip. A transformation matrix is generated from the position of the target area in the next frame. The transformation matrix is applied to the target area to determine its position in the next frame of the video clip. Data representing the position of the target area is stored in a data storage device. The target area can be tracked for each frame of the video clip in which at least a portion of the target area appears. Image data can be inserted into the tracked target area of each frame of the video clip. | 08-28-2014 |
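The core geometric step in 20140241573, applying a 3x3 transformation matrix to the points defining a planar target area, can be sketched as follows. The translation-only matrix here is a hypothetical example; in practice the matrix would be estimated from tracked features between frames.

```python
def apply_homography(h, points):
    """Map 2-D points through a 3x3 homography using homogeneous coordinates."""
    out = []
    for x, y in points:
        xh = h[0][0] * x + h[0][1] * y + h[0][2]
        yh = h[1][0] * x + h[1][1] * y + h[1][2]
        w  = h[2][0] * x + h[2][1] * y + h[2][2]
        out.append((xh / w, yh / w))  # perspective divide
    return out

# Target area: a quadrilateral defined by points on the tracked plane.
target = [(0, 0), (100, 0), (100, 50), (0, 50)]
# Hypothetical frame-to-frame transform: shift right 5 px, down 2 px.
H = [[1, 0, 5],
     [0, 1, 2],
     [0, 0, 1]]

print(apply_homography(H, target))
# [(5.0, 2.0), (105.0, 2.0), (105.0, 52.0), (5.0, 52.0)]
```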
20140241574 | TRACKING AND RECOGNITION OF FACES USING SELECTED REGION CLASSIFICATION - Methods, apparatuses, and articles associated with facial tracking and recognition are disclosed. In embodiments, facial images may be detected in video or still images and tracked. After normalization of the facial images, feature data may be extracted from selected regions of the faces to compare to associated feature data in known faces. The selected regions may be determined using a boosting machine learning process over a set of known images. After extraction, individual two-class comparisons may be performed between corresponding feature data from regions on the tested facial images and from the known facial image. The individual two-class classifications may then be combined to determine a similarity score for the tested face and the known face. If the similarity score exceeds a threshold, an identification of the known face may be output or otherwise used. Additionally, tracking with voting may be performed on faces detected in video. After a threshold of votes is reached, a given tracked face may be associated with a known face. | 08-28-2014 |
20140241575 | Wearable display-based remote collaboration apparatus and method - Disclosed herein are a wearable display-based remote collaboration apparatus and method. The wearable display-based remote collaboration apparatus includes an image acquisition unit, a recognition unit, an image processing unit, and a visualization unit. The image acquisition unit obtains image information associated with the present point of time of a worker. The recognition unit recognizes the location and motion of the worker based on the obtained image information. The image processing unit matches a virtual object, corresponding to an object of work included in the obtained image information, with the image information, and matches a motion of the object of work, matched with the image information, with the image information based on manipulation information. The visualization unit visualizes the image information processed by the image processing unit, and outputs the visualized image information. | 08-28-2014 |
20140241576 | APPARATUS AND METHOD FOR CAMERA TRACKING - A camera tracking apparatus including a sequence image input unit configured to obtain one or more image frames by decoding an input two-dimensional image, a two-dimensional feature point tracking unit configured to obtain a feature point track by extracting feature points from respective image frames obtained by the sequence image input unit, and comparing the extracted feature points with feature points extracted from a previous image frame, to connect feature points determined to be similar, and a three-dimensional reconstruction unit configured to reconstruct the feature point track obtained by the two-dimensional feature point tracking unit. | 08-28-2014 |
20140241577 | METHOD OF TRACKING MOVING OBJECT, METHOD OF DETERMINING DISPLAY STATE OF MOVING OBJECT, AND CONTROL APPARATUS FOR TRACKING MOVING OBJECT - A method of tracking a moving object includes measuring displacement of an object to be tracked, obtaining a particle of the object to be tracked using the measured displacement, and tracking the object using pose information of the object in an image thereof and the obtained particle. A control apparatus includes an imaging module to perform imaging of an object and generates an image, and a tracking unit to acquire displacement and pose information of the object using the generated image of the object, to set a particle of the object using the acquired displacement of the object, and to track the object using the pose information of the object and the particle. | 08-28-2014 |
20140241578 | VEHICLE-TO-VEHICLE DISTANCE CALCULATION APPARATUS AND METHOD - The distance to a target vehicle is calculated. To achieve this, a target vehicle traveling ahead is imaged and it is determined to what vehicle group, such as a light-duty vehicle group, standard passenger car group or heavy-duty vehicle group, the image of the target vehicle belongs. Representative vehicle widths are stored in a vehicle group table on a per-vehicle-group basis. The distance from one's own vehicle to a target vehicle is calculated using the vehicle width that corresponds to the vehicle group decided. | 08-28-2014 |
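The distance rule implied by 20140241578 follows the pinhole camera model: distance = focal_length x real_width / image_width, with the real width looked up per vehicle group. A minimal sketch, in which the representative group widths and focal length are assumed example values, not figures from the application:

```python
# Hypothetical representative real-world vehicle widths, in meters, per group.
VEHICLE_GROUP_WIDTHS_M = {
    "light_duty": 1.48,
    "standard_passenger": 1.70,
    "heavy_duty": 2.49,
}

def distance_to_vehicle(group, image_width_px, focal_length_px):
    """Pinhole model: Z = f * W / w, with W taken from the vehicle group table."""
    real_width = VEHICLE_GROUP_WIDTHS_M[group]
    return focal_length_px * real_width / image_width_px

# A standard passenger car imaged 85 px wide by a 1000 px focal-length camera:
print(round(distance_to_vehicle("standard_passenger", 85, 1000), 1))  # 20.0
```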
20140241579 | VEHICLE-TO-VEHICLE DISTANCE CALCULATION APPARATUS AND METHOD - The distance to a target vehicle is calculated comparatively accurately. To achieve this, a target vehicle traveling ahead of one's own vehicle is imaged by a camera and it is determined to what vehicle group, such as a light-duty vehicle group, standard passenger car group or heavy-duty vehicle group, the image of the target vehicle belongs. A first distance from one's own vehicle to the target vehicle is calculated by a circuit using the representative vehicle width of the vehicle group decided. A vanishing point is detected from the captured image by a vanishing point detection circuit and a second distance from one's own vehicle to the target vehicle is calculated utilizing the vanishing point. The distance to the target vehicle is decided from the first and second distances by a distance decision circuit, wherein the shorter the distance to the vanishing point, the more the value of a weighting coefficient of the second distance is reduced. | 08-28-2014 |
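The fusion rule in 20140241579, reducing the weight of the vanishing-point distance the closer the target is to the vanishing point, can be sketched with a simple linear weighting. The linear form and the reference pixel distance are assumptions of this sketch; the application only specifies that the weight shrinks with proximity.

```python
def fuse_distances(d1, d2, dist_to_vanishing_px, ref_px=200.0):
    """Blend width-based distance d1 and vanishing-point distance d2.
    d2's weight grows with the target's pixel distance from the vanishing
    point, capped at 1.0; the linear ramp and ref_px are illustrative."""
    w2 = min(1.0, dist_to_vanishing_px / ref_px)
    return (1.0 - w2) * d1 + w2 * d2

# Far from the vanishing point: rely entirely on the vanishing-point distance.
print(fuse_distances(20.0, 24.0, 400.0))  # 24.0
# Near the vanishing point: d2's contribution shrinks toward the width-based d1.
print(fuse_distances(20.0, 24.0, 50.0))   # 21.0
```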
20140241580 | OBJECT DETECTION APPARATUS - An object detection apparatus mounted in a vehicle for detecting a target object in various changing environmental conditions. In the apparatus, a storage prestores plural image recognition dictionaries each describing reference data for the target object, and plural image recognition techniques each used to detect the target object from an input image with use of one of the plural image recognition dictionaries. A first acquirer acquires an operating state of a lighting device of the vehicle. A selector selects, according to the acquired operating state of the lighting device, one of the plural image recognition dictionaries and one of the plural image recognition techniques. A detector detects the target object in the input image by applying image recognition processing thereto with use of the selected image recognition dictionary and technique. | 08-28-2014 |
20140241581 | METHOD AND SYSTEM FOR AUTOMATICALLY COUNTING PHYSICAL OBJECTS - A periphery band is around an excluded region. For automatically counting physical objects within the periphery band and the excluded region, an imaging sensor captures: a first image of the periphery band and the excluded region; and a second image of the periphery band and the excluded region. In response to the first image, a first number is counted of physical objects within the periphery band and the excluded region. Relevant motion is automatically detected within the periphery band, while ignoring motion within the excluded region. In response to the second image, a second number is counted of physical objects within the periphery band and the excluded region. In response to determining that a discrepancy exists between the detected relevant motion and the second number, the discrepancy is handled. | 08-28-2014 |
20140241582 | DIGITAL PROCESSING METHOD AND SYSTEM FOR DETERMINATION OF OBJECT OCCLUSION IN AN IMAGE SEQUENCE - A method and system for occlusion region detection and measurement between a pair of images are disclosed. A processing device receives a first image and a second image. The processing device estimates a field of motion vectors between the first image and the second image. The processing device motion compensates the first image toward the second image to obtain a motion-compensated image. The processing device compares a plurality of pixel values of the motion-compensated image to a plurality of pixels of the first image to estimate an error field. The processing device inputs the error field to a weighted error cost function to obtain an initial occlusion map. The processing device regularizes the initial occlusion map to obtain a regularized occlusion map. | 08-28-2014 |
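The occlusion test in 20140241582 can be reduced to a 1-D sketch: motion-compensate one "image" toward the other, compute a per-pixel error field, and threshold a weighted error cost to obtain an initial occlusion map. The weighting, the threshold, and the 1-D simplification are assumptions here, and the regularization step the application describes is omitted.

```python
def motion_compensate(image, motion):
    """Fetch each pixel from its motion-displaced source (1-D, border-clamped)."""
    n = len(image)
    out = [0] * n
    for i in range(n):
        src = min(max(i + motion[i], 0), n - 1)
        out[i] = image[src]
    return out

def occlusion_map(img1, img2, motion, weight=1.0, threshold=10.0):
    """Mark pixels whose weighted compensation error exceeds the threshold."""
    compensated = motion_compensate(img2, motion)
    error = [abs(a - b) for a, b in zip(img1, compensated)]
    return [1 if weight * e > threshold else 0 for e in error]

img1 = [10, 10, 50, 50, 90]
img2 = [10, 10, 50, 50, 0]   # last pixel has no valid correspondence
motion = [0, 0, 0, 0, 0]     # zero motion field, for simplicity

print(occlusion_map(img1, img2, motion))  # [0, 0, 0, 0, 1]
```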
20140241583 | SECURE SELF-CHECKOUT - Various embodiments allow scanning by a shopper using a barcode reader (e.g., a scanner) attached to or positioned near the shopping receptacle. As items are scanned, they are identified based on their barcode and added to an item list. Item verification can then be performed at checkout using imaging technology. For example, the shopping cart or shopping basket can be brought into the field of view of a computer-connected camera. The camera and computer can, working from the customer's item list developed when the items are scanned, observe each product in the receptacle and ring it up. If all products can be accounted for, the customer is free to leave; otherwise the customer is denied egress, informed of the problem, etc. A store employee can also be signaled to investigate. | 08-28-2014 |
20140247963 | OBJECT DETECTION VIA VALIDATION WITH VISUAL SEARCH - One exemplary embodiment involves receiving, at a computing device comprising a processor, a test image having a candidate object and a set of object images detected to depict a similar object as the test image. The embodiment involves localizing the object depicted in each one of the object images based on the candidate object in the test image to determine a location of the object in each respective object image and then generating a validation score for the candidate object in the test image based at least in part on the determined location of the object in the respective object image and known location of the object in the same respective object image. The embodiment also involves computing a final detection score for the candidate object based on the validation score that indicates a confidence level that the object in the test image is located as indicated by the candidate object. | 09-04-2014 |
20140247964 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM - An image inputter inputs image data in which a user has been captured. An extractor extracts a region corresponding to a hand of the user included in the image data input by the image inputter. A determinator determines whether or not the region corresponding to the hand of the user extracted by the extractor is in a stationary state. If it is determined by the determinator that the region corresponding to the hand of the user is in a stationary state, a recognizer recognizes the shape of the region corresponding to the hand of the user extracted by the extractor. Otherwise, if it is determined by the determinator that the region corresponding to the hand of the user is not in a stationary state, the recognizer recognizes the movement of the region corresponding to the hand of the user extracted by the extractor. An inputter inputs data associated with the shape or the movement recognized by the recognizer. | 09-04-2014 |
20140247965 | INDICATOR MARK RECOGNITION - A method and system for deciphering answer sheets for standardized tests and surveys using multiple-choice answer sheets. Multiple-choice answer sheets are typically scanned by automatic scanning machines, where the answers are deciphered and the information is gathered. An improved answer sheet includes a character, such as a symbol, a letter or a number in each bubble-type space on an answer sheet. A device, which may be a hand-held device, scans the marked-up answer sheet. Bubbles that are filled in may sometimes be hard to distinguish from un-marked spaces. The device that scans the answer sheets is equipped with optical mark recognition (OMR) software to detect marks. The device is also equipped with optical character recognition (OCR) software. If a bubble is not marked, the OCR software detects the character and correctly interprets the bubble as not marked. This allows for correct counting of the number of answers marked per sheet. | 09-04-2014 |
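The mark/character logic of 20140247965 can be sketched as a two-stage decision: optical mark recognition first judges the fill ratio of the bubble, and a cleanly readable printed character (the OCR result, stubbed out here as a boolean) confirms an unmarked bubble. The fill-ratio thresholds and the stubbed OCR flag are assumptions of this sketch.

```python
def fill_ratio(bubble_pixels, dark_level=128):
    """Fraction of pixels in the bubble darker than dark_level (0-255 scale)."""
    dark = sum(1 for p in bubble_pixels if p < dark_level)
    return dark / len(bubble_pixels)

def classify_bubble(bubble_pixels, ocr_reads_character, mark_threshold=0.6):
    """OMR first; if the bubble is not clearly filled but OCR still reads the
    printed character inside it, treat the bubble as unmarked."""
    if fill_ratio(bubble_pixels) >= mark_threshold:
        return "marked"
    if ocr_reads_character:
        return "unmarked"
    return "ambiguous"

filled = [5] * 9 + [200]            # 90% dark: pencil-filled bubble
printed_only = [5] * 2 + [200] * 8  # only the printed character's own ink

print(classify_bubble(filled, ocr_reads_character=False))       # marked
print(classify_bubble(printed_only, ocr_reads_character=True))  # unmarked
```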
20140247966 | Determining GPS Coordinates for Images - Systems, methods and articles of manufacture for GPS coordinate determination for images are described herein. Embodiments of the present disclosure relate to equipping an image with GPS coordinates where the image is uploaded onto a mapping site without GPS coordinates. The mapping site is able to equip the image with GPS coordinates by identifying a recognizable structure in the image and then comparing the recognizable structure with stored structures in the mapping site. The stored structures in the mapping site have GPS coordinates for each. The mapping site compares the recognizable structure of the image without GPS coordinates to a structure stored in the mapping site with GPS coordinates. The mapping site then tags the image without GPS coordinates with the GPS coordinates associated with the stored structure that matches the structure of the image. | 09-04-2014 |
20140247967 | Methods and Systems for Image Analysis Identification - A computer-implemented method for identifying a first object-of-interest is provided. The first object-of-interest includes two identifiers and a sample portion. The method includes imaging the first object-of-interest including the two identifiers. The imaging generates a first set of image data. The method further includes determining a position of the first object-of-interest in the field-of-view of an optical sensor and determining the two identifiers from the first set of image data. The method includes identifying the first object-of-interest based on the two identifiers. | 09-04-2014 |
20140254863 | Method for Detecting and Tracking Objects in Image Sequences of Scenes Acquired by a Stationary Camera - In a sequence of images of a scene acquired by a stationary camera, objects are detected and tracked by determining a first set of candidate foreground regions according to a background model. A second set of candidate foreground regions is determined according to a set of foreground models. Then, candidate foreground regions in the first set and the second set are validated to produce a final set of foreground regions in the image that include the objects. | 09-11-2014 |
20140254864 | SYSTEM AND METHOD FOR GESTURE DETECTION THROUGH LOCAL PRODUCT MAP - System and method for image detection that include collecting image data; at a processor, over a plurality of support regions of the image data, computing a dimensionality component of a support region of the image data, wherein the dimensionality component relates a nucleus pixel to the non-nucleus pixels of the support region; calculating a normalizing factor of the dimensionality component; for at least one weighted pattern of a pattern set, applying a weighted pattern to the dimensionality component to create a gradient vector, mapping the gradient vector to a probabilistic model, and normalizing the gradient vector by the normalizing factor; condensing probabilistic models of the plurality of support regions into a probabilistic distribution feature for at least one cell of the image data; applying a classifier to at least the probabilistic distribution feature; and detecting an object in the image data according to a result of the applied classifier. | 09-11-2014 |
20140254865 | Image Identification Method and System - Novel tools and techniques are described for identifying objects and/or persons. In one aspect, a method might comprise obtaining a digital image of an object(s) with a digital image recording device. The digital image may be transmitted to a remote computer system, and compared to multiple preexisting digital images using an image comparison software application running thereon. A set of preexisting digital images matching the digital image of the object(s) may be identified, and a (best match) keyphrase associated with the preexisting digital images may be determined. The keyphrase may be returned to a user computer for user confirmation or rejection. In some embodiments, a point cloud may be generated for each object in the image, and fitted with available 3D models, so as to confirm the keyphrase. In some embodiments, the confirmed keyphrase may be sent to a user computer for implementation in a cadastral survey application. | 09-11-2014 |
20140254866 | PREDICTIVE ANALYSIS USING VEHICLE LICENSE PLATE RECOGNITION - A method and a system for predictive analysis using vehicle license plate recognition are described. The system has a gateway, a web server, and a client device. The gateway is coupled to security devices. The web server has a management application configured to communicate with the gateway. The client device communicates with the gateway identified by the web server. The gateway monitors data from security devices coupled to the gateway. A predictive behavioral model is generated using historical data from the monitoring data comprising identified characters in license plates of vehicles monitored by the security devices. | 09-11-2014 |
20140254867 | USER BODY ANGLE, CURVATURE AND AVERAGE EXTREMITY POSITIONS EXTRACTION USING DEPTH IMAGES - Embodiments described herein use depth images to extract user behavior, wherein each depth image specifies that a plurality of pixels correspond to a user. In certain embodiments, information indicative of an angle and/or curvature of a user's body is extracted from a depth image. This can be accomplished by fitting a curve to a portion of a plurality of pixels (of the depth image) that correspond to the user, and determining the information indicative of the angle and/or curvature of the user's body based on the fitted curve. An application is then updated based on the information indicative of the angle and/or curvature of the user's body. In certain embodiments, one or more average extremity positions of a user, which can also be referred to as average positions of extremity blobs, are extracted from a depth image. An application is then updated based on the average positions of extremity blobs. | 09-11-2014 |
20140254868 | Visual Signature Determination System for Moving Targets - According to one embodiment, the visual signature of a moving target may be measured by measuring, using a photometer, an optical property of a moving target while the target moves along a path from a start position to an end position in front of a background. The photometer may be repositioned to measure optical properties of the background at the start position. The photometer may measure the optical property of the background along the path between the start position and the end position. The visual signature of the moving target may be determined by comparing the measured optical property of the moving target along the path to the measured optical property of the background along the path. | 09-11-2014 |
20140254869 | METHOD AND DEVICE FOR DETECTING DISPLACEMENT IN ELASTOGRAPHY - Disclosed are a method and a device for detecting displacement in elastography. The method comprises: acquiring a target point; acquiring a cross-correlation phase calculation location of the target point in a second frame image; calculating a cross-correlation phase according to the cross-correlation phase calculation location; calculating a longitudinal displacement result according to the cross-correlation phase; and calculating a gradient of the displacement result to obtain a strain result. Through the elastography method and device, I/Q-channel echo baseband signals of two frames before and after compression, obtained by downsampling, are acquired, displacement information between the two frames is rapidly detected by guided phase estimation, and axial gradient calculation is performed to obtain strain information, which can not only obtain a strain image of high quality but also reduce the calculation amount, thereby satisfying the clinical real-time requirement. | 09-11-2014 |
20140254870 | METHOD FOR RECOGNIZING MOTION GESTURE COMMANDS - A computer capable of recognizing gesture commands is disclosed. Suppose that a user makes a gesture of swinging a hand from side to side in front of a camera associated with a computer. A camera module receives frames with shots of the gesture in order. The camera module calculates a gradation difference between corresponding pixels of each frame and a background image to generate a set of binarized differential images. The camera module then combines differential images to generate composite images. In response to a determination that any of the composite images matches a reference pattern, the camera module outputs a computer command. The computer command can be used to control the power state of the computer or start a specific application within the computer. | 09-11-2014 |
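The pipeline in 20140254870, per-frame gradation difference against a background image, binarization, combination of the binarized differential images, and a match against a reference pattern, can be sketched with 1-D "frames". The threshold value and the choice of logical OR as the combination operator are assumptions of this sketch.

```python
def binarized_difference(frame, background, threshold=30):
    """1 where the gradation difference to the background exceeds the threshold."""
    return [1 if abs(f - b) > threshold else 0 for f, b in zip(frame, background)]

def combine(diff_images):
    """OR-combine binarized differential images into one composite image."""
    composite = [0] * len(diff_images[0])
    for diff in diff_images:
        composite = [c | d for c, d in zip(composite, diff)]
    return composite

def matches(composite, reference_pattern):
    """Exact-match comparison against a stored reference pattern."""
    return composite == reference_pattern

background = [10, 10, 10, 10, 10]
frames = [
    [200, 10, 10, 10, 10],   # hand at the left
    [10, 10, 200, 10, 10],   # hand in the middle
    [10, 10, 10, 10, 200],   # hand at the right
]
diffs = [binarized_difference(f, background) for f in frames]
composite = combine(diffs)
print(composite)                            # [1, 0, 1, 0, 1]
print(matches(composite, [1, 0, 1, 0, 1]))  # True
```

On a match, the device would emit the associated computer command, e.g. a power-state change or an application launch.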
20140254871 | IMAGE MOTION DETECTION METHOD, IMAGE PROCESSING METHOD AND APPARATUS USING THE METHODS - An image processing method for detecting image motion information between a first image unit and a second image unit is provided. The first image unit and second image unit respectively comprise a plurality of blocks, and each of the blocks comprises a plurality of pixels. The image motion detection method comprises: analyzing pixels at the same position in all blocks of the first image unit to generate a first image statistical information; analyzing pixels at the same position in all blocks of the second image unit to generate a second image statistical information; and comparing the first image statistical information with the second image statistical information to determine the image motion information. | 09-11-2014 |
20140254872 | OBJECT DETECTION APPARATUS, VEHICLE-MOUNTED DEVICE CONTROL SYSTEM AND STORAGE MEDIUM OF PROGRAM OF OBJECT DETECTION - An object detection apparatus, using at least one processing circuit, for detecting an object in an image capturing area based on parallax information generated from a plurality of images captured by a plurality of image capturing units, includes a parallax histogram information generator to generate vertical-direction parallax histogram information indicating a frequency profile of parallax values in each of vertical row areas in a captured image based on the parallax information; and an object image area extraction unit to extract, among parallax values having frequency exceeding a given frequency threshold, a group of pixels having parallax values existing within proximity of a given parallax value and having a pixel-to-pixel interval in an image left-to-right direction within a given range as an object image area displaying an object based on the vertical-direction parallax histogram information. | 09-11-2014 |
20140254873 | METHOD AND DEVICE FOR DETECTING INTERFERING OBJECTS IN THE AMBIENT AIR OF A VEHICLE - A method for detecting interfering objects in the ambient air of a vehicle includes determining line structures in at least one image section of an image of surroundings of the vehicle, determining a position of a first converging area of first line structures and a position of a second converging area of second line structures, and ascertaining interfering objects depicted in the image which represent objects present in the ambient air of the vehicle, based on the position of the first converging area and the position of the second converging area. | 09-11-2014 |
20140254874 | METHOD OF DETECTING AND DESCRIBING FEATURES FROM AN INTENSITY IMAGE - The invention provides methods of detecting and describing features from an intensity image. In one of several aspects, the method comprises the steps of providing an intensity image captured by a capturing device, providing a method for determining a depth of at least one element in the intensity image, in a feature detection process detecting at least one feature in the intensity image, wherein the feature detection is performed by processing image intensity information of the intensity image at a scale which depends on the depth of at least one element in the intensity image, and providing a feature descriptor of the at least one detected feature. For example, the feature descriptor contains at least one first parameter based on information provided by the intensity image and at least one second parameter which is indicative of the scale. | 09-11-2014 |
20140254875 | METHOD AND SYSTEM FOR AUTOMATIC OBJECTS LOCALIZATION - A method for automatic localization of objects in a mask. The method includes building a dictionary of atoms, wherein each atom models the presence of one object at one location, and iteratively determining the atom of said dictionary which is best correlated with said mask, until ending criteria are met. The system of the invention also automatically detects objects in a mask. At least one fixed camera is provided for acquiring video frames. A computation device is used for calibrating the at least one fixed camera, for extracting foreground silhouettes in each acquired video frame, for discretizing the ground plane into a non-regular grid of potential location points, for constructing a dictionary of atoms, and for finding object location points with the previous method. A propagating device is provided to propagate the result in at least one fixed camera view. | 09-11-2014 |
20140254876 | METHODS AND APPARATUS TO COUNT PEOPLE IN IMAGES - Methods and apparatus to count people in images are disclosed. An example method includes maintaining a history of instances in which a person is detected by a first image sensor and by a second sensor different than the first image sensor at approximately a same time, respective ones of the instances including a first coordinate at which a first person was detected via the first image sensor, and a second coordinate at which the first person was detected via the second image sensor; and, in response to first image data captured by the first image sensor including a second person at the first coordinate, determining whether second image data captured by the second image sensor includes the second person without comparing the first image data to the second image data. | 09-11-2014 |
20140270342 | METHODS AND SYSTEMS FOR DETECTION AND IDENTIFICATION OF CONCEALED MATERIALS - Methods and systems for efficiently and accurately detecting and identifying concealed materials. The system includes an analysis subsystem configured to process a number of pixelated images, the number of pixelated images obtained by repeatedly illuminating regions with an electromagnetic radiation source from a number of electromagnetic radiation sources, each repetition performed with a different wavelength. The number of pixelated images, after processing, constitute a vector of processed data at each pixel from a number of pixels. At each pixel, the vector of processed data is compared to a predetermined vector corresponding to a predetermined material, presence of the predetermined material being determined by the comparison. | 09-18-2014 |
20140270343 | EFFICIENT 360 DEGREE VIDEO PROCESSING - In accordance with the present disclosure, systems and method for efficient 360 degree video processing are described herein. A first image within a first frame of a video stream may be detected. The first frame may be partitioned into a first portion containing the first image and a second portion. An attempt may be made to detect the first image within a third portion of a subsequent frame that corresponds to the first portion of the first frame, and an attempt may not be made to detect a second image within a fourth portion of the subsequent frame that corresponds to the second portion of the first frame. Additionally, an attempt may be made to detect the second image within a fifth portion of at least one other subsequent frame that corresponds to the second portion of the first frame. | 09-18-2014 |
20140270344 | REDUCING OBJECT DETECTION TIME BY UTILIZING SPACE LOCALIZATION OF FEATURES - In one example, a method for exiting an object detection pipeline includes determining, while in the object detection pipeline, a number of features within a first tile of an image, wherein the image consists of a plurality of tiles, performing a matching procedure using at least a subset of the features within the first tile if the number of features within the first tile meets a threshold value, exiting the object detection pipeline if a result of the matching procedure indicates an object is recognized in the image, and presenting the result of the matching procedure. | 09-18-2014 |
20140270345 | METHOD AND APPARATUS FOR MOVEMENT ESTIMATION - A method for estimating movement of a mobile device includes: obtaining images from a camera communicatively coupled to a processor of the mobile device; identifying a stationary light source using at least one image of the images; calculating a displacement of the stationary light source based on a first location of the stationary light source in a first image of the images and a second location of the stationary light source in a second image of the images, the first image and the second image being captured at different times; and estimating movement of the mobile device based on the displacement. | 09-18-2014 |
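The displacement idea in the movement-estimation entry above can be sketched in a few lines: since the light source is stationary, its apparent shift between two images is (to first order) the negation of the camera's own image-plane motion. The function name and the first-order simplification are assumptions for illustration.

```python
def estimate_camera_shift(light_pos_t0, light_pos_t1):
    """Apparent displacement of a stationary light source between two
    images, negated to give the camera's own image-plane motion."""
    dx = light_pos_t1[0] - light_pos_t0[0]
    dy = light_pos_t1[1] - light_pos_t0[1]
    return (-dx, -dy)

# Light appears to drift 12 px right and 3 px down between the two images,
# so the device moved the opposite way in image-plane terms.
shift = estimate_camera_shift((100, 40), (112, 43))
```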
20140270346 | TRACKING TEXTURE RICH OBJECTS USING RANK ORDER FILTERING - A method of real-time tracking of an object includes capturing a first and a second image of the object. The object is detected in the first image and movement of the object is tracked between the images. Tracking of the object includes obtaining an initial pose of the camera; projecting an image of a model object onto the second image; determining a gradient profile of the second image from an edge point of the model object along a first direction that is normal to the edge of the model object; computing a radius on the gradient profile; determining a rank order of the peaks of the gradient profile along the radius; comparing the rank order with a predetermined rank order to generate a feature candidate point; and reducing a distance along the first direction between the feature candidate point and the edge point on the edge of the model object. | 09-18-2014 |
20140270347 | HIERARCHICAL IMAGE CLASSIFICATION SYSTEM - A technique for image processing that includes receiving a model image, an input image, and registering the input image with the model image. A modified input image is determined that includes a first component that is substantially free of error components with respect to the model image and a second component that is substantially free of non-error aspects with respect to the model image. The technique determines an improved alignment of the modified input image with the model image where the improved alignment and the first and second components are determined jointly. | 09-18-2014 |
20140270348 | MOTION BLUR AWARE VISUAL POSE TRACKING - Various methods, apparatuses and/or articles of manufacture are provided which may be implemented for use by an electronic device to track objects across two or more digital images. For example, an electronic device may generate a plurality of warped patches corresponding to a reference patch of a reference image, and combine two or more warped patches to form a blurred warped patch corresponding to the reference patch with a motion blur effect applied to a digital representation corresponding to a keypoint of an object to be tracked. | 09-18-2014 |
20140270349 | SYSTEMS AND METHODS FOR CLASSIFYING OBJECTS IN DIGITAL IMAGES CAPTURED USING MOBILE DEVICES - In one embodiment, a method includes receiving a digital image captured by a mobile device; and using a processor of the mobile device: generating a first representation of the digital image, the first representation being characterized by a reduced resolution; generating a first feature vector based on the first representation; comparing the first feature vector to a plurality of reference feature matrices; and classifying an object depicted in the digital image as a member of a particular object class based at least in part on the comparing. | 09-18-2014 |
20140270350 | DATA DRIVEN LOCALIZATION USING TASK-DEPENDENT REPRESENTATIONS - A computer implemented method for localization of an object, such as a license plate, in an input image includes generating a task-dependent representation of the input image based on relevance scores for the object to be localized. The relevance scores are output by a classifier for a plurality of locations in the input image, such as patches. The classifier is trained on patches extracted from training images and their respective relevance labels. One or more similar images are identified from a set of images, based on a comparison of the task-dependent representation of the input image and task-dependent representations of images in the set of images. A location of the object in the input image is identified based on object location annotations for the similar images. | 09-18-2014 |
20140270351 | CENTER OF MASS STATE VECTOR FOR ANALYZING USER MOTION IN 3D IMAGES - Techniques described herein determine a center of mass state vector based on a body model. The body model may be formed by analyzing a depth image of a user who is performing some motion. The center of mass state vector may include, for example, center-of-mass position, center-of-mass velocity, center-of-mass acceleration, orientation, angular velocity, angular acceleration, inertia tensor, and angular momentum. A center of mass state vector may be determined for an individual body part or for the body as a whole. The center of mass state vector(s) may be used to analyze the user's motion. | 09-18-2014 |
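Two components of the center-of-mass state vector described above, position and velocity, can be sketched with a mass-weighted average over body parts and a two-frame finite difference. The body-part masses, the dict layout, and the function names are illustrative assumptions; the patent's full vector also covers acceleration, orientation, and angular terms.

```python
def center_of_mass(parts):
    """parts: list of (mass, (x, y, z)) body parts from a body model.
    Returns the mass-weighted center-of-mass position."""
    total = sum(m for m, _ in parts)
    return tuple(sum(m * p[i] for m, p in parts) / total for i in range(3))

def com_state(parts_t0, parts_t1, dt):
    """Two-frame finite-difference state: COM position and velocity."""
    p0 = center_of_mass(parts_t0)
    p1 = center_of_mass(parts_t1)
    vel = tuple((b - a) / dt for a, b in zip(p0, p1))
    return {"position": p1, "velocity": vel}

# Toy body: 30 kg torso above 10 kg legs, shifting 0.1 m in x over 0.1 s.
body_t0 = [(30.0, (0.0, 1.0, 0.0)), (10.0, (0.0, 0.0, 0.0))]
body_t1 = [(30.0, (0.1, 1.0, 0.0)), (10.0, (0.1, 0.0, 0.0))]
state = com_state(body_t0, body_t1, dt=0.1)
```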
20140270352 | THREE DIMENSIONAL FINGERTIP TRACKING - Systems and methods for detecting and tracking the presence, location, orientation and/or motion of a hand or hand segments visible to an input source are disclosed herein. Hand, hand segment and fingertip location and tracking can be performed using ball fit methods. Analysis of hand, hand segment and fingertip location and tracking data can be used as input for a variety of systems and devices. | 09-18-2014 |
20140270353 | DICTIONARY DESIGN FOR COMPUTATIONALLY EFFICIENT VIDEO ANOMALY DETECTION VIA SPARSE RECONSTRUCTION TECHNIQUES - Methods, systems, and processor-readable media for pruning a training dictionary for use in detecting anomalous events from surveillance video. Training samples can be received, which correspond to normal events. A dictionary can then be constructed, which includes two or more classes of normal events from the training samples. Sparse codes are then generated for selected training samples with respect to the dictionary derived from the two or more classes of normal events. The size of the dictionary can then be reduced by removing redundant dictionary columns from the dictionary via analysis of the sparse codes. The dictionary is then optimized to yield a low reconstruction error and a high-interclass discriminability. | 09-18-2014 |
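The pruning step in the dictionary-design entry above can be sketched with a simple usage criterion: atoms (columns) that the sparse codes never assign meaningful weight to are dropped. This stand-in criterion, and the function name, are assumptions; the patent's redundancy analysis and discriminability optimization are more involved.

```python
import numpy as np

def prune_dictionary(D, sparse_codes, usage_tol=1e-8):
    """Drop dictionary columns whose atoms the sparse codes never use.
    D: (signal_dim, n_atoms); sparse_codes: (n_atoms, n_samples)."""
    usage = np.abs(sparse_codes).sum(axis=1)   # total coefficient mass per atom
    keep = usage > usage_tol
    return D[:, keep], keep

D = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])       # 3 atoms over a 2-D signal space
codes = np.array([[0.9, 0.0],        # atom 0 used by sample 0
                  [0.0, 1.1],        # atom 1 used by sample 1
                  [0.0, 0.0]])       # atom 2 never used -> pruned
D_small, kept = prune_dictionary(D, codes)
```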
20140270354 | METHODS AND APPARATUS TO MEASURE EXPOSURE TO LOGOS IN VEHICLE RACES - Methods and apparatus to measure logo exposure in vehicle races are disclosed. An example apparatus includes a vehicle database containing first time-location data identifying a first set of physical locations of a first vehicle at corresponding points in time, the first vehicle to display a first logo; a camera database containing time-camera view data identifying a set of views of a camera at corresponding points in time; and credit logic to determine whether to credit the first logo with an exposure to the camera based on the first time-location data and the time-camera view data. | 09-18-2014 |
20140270355 | METHODS AND APPARATUS TO ESTIMATE DEMOGRAPHY BASED ON AERIAL IMAGES - Methods and apparatus to estimate demography based on aerial images are disclosed. An example method includes analyzing a first aerial image of a first geographic area to detect a first plurality of objects, and estimating a demographic characteristic of the first geographic area based on the first plurality of objects. | 09-18-2014 |
20140270356 | SYSTEMS, METHODS AND DEVICES FOR ITEM PROCESSING - Methods, systems and devices for item processing. The systems can include a PASS module that can include features that receive inputs relating to an item for processing and provide those inputs to other components and/or modules of a PASS system and/or of another system. The PASS system can include a variety of modules, including the PASS module, and can collect information and/or inputs from the variety of modules of the PASS system and use that information in item processing. The methods of item processing can use the PASS system and the PASS module to perform a variety of functions including, for example, revenue protection, sorting of items, task management, sampling and data collection, redirecting of enroute items, and personnel management. | 09-18-2014 |
20140270357 | USER LOCATION SYSTEM - A user location system (ULS) can use images, such as video or still images, captured from at least one camera of an electronic device, such as a mobile device, to determine, via at least edge detection and image uniformity analysis, location of a user in an environment, such as in a cabin of a vehicle. The determined location of the user can then be used as an input to control at least one aspect of the environment. In the case of a vehicle, such input may be used to facilitate control of speed, safety features, climate, and/or audio playback, for example. | 09-18-2014 |
20140270358 | Online Learning Method for People Detection and Counting for Retail Stores - People detection can provide valuable metrics that can be used by businesses, such as retail stores. Such information can be used to influence any number of business decisions, such as employee hiring and product orders. The business value of this data hinges upon its accuracy. Thus, a method according to the principles of the current invention outputs metrics regarding people in a video frame within a stream of video frames through use of an object classifier configured to detect people. The method further comprises automatically updating the object classifier using data in at least a subset of the video frames in the stream of video frames. | 09-18-2014 |
20140270359 | METHODS AND SYSTEMS FOR AUTOMATIC AND SEMI-AUTOMATIC GEOMETRIC AND GEOGRAPHIC FEATURE EXTRACTION - Methods and systems for facilitating detecting features in sensor data are described. One example method implemented by a computing device includes receiving a first set of sensor data about a geographical region, and generating a second set of sensor data. The first set of sensor data includes data in a plurality of bands. The second set of sensor data is generated by receiving a first input designating a first sub-region of the geographical region, and determining a single band representation of at least a portion of the first set of sensor data associated with the first sub-region. | 09-18-2014 |
20140270360 | EDGEL SAMPLING FOR EDGE-BASED TRACKING - Embodiments include selecting edgels for edge based tracking by dividing a reference image frame (RF) into N×M bins of pixels and projecting a subset of the edgels per bin into a current image frame (CF) using an estimated pose, to identify valid bins of the RF as bins whose projected edgel falls within the borders of the CF. Then, K edgels of the RF with different orientations may be selected from each valid bin. The selected RF edgels may then be reduced by removing bins randomly, or by first removing bins from the center of the RF (then removing the next further-outward bins), until a desirable edgel number is obtained. Edge-based tracking can then be performed using the desirable edgel number, to track edges in the current frame that are found in the prior frame. | 09-18-2014 |
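The binning and per-bin selection described in the edgel-sampling entry above can be sketched as follows: edgels are grouped into a grid of bins, each bin is validated by testing one of its edgels against the current frame, and up to K edgels are kept per valid bin. The validity callback and edgel tuple layout are toy assumptions standing in for the pose-based projection.

```python
def sample_edgels(edgels, img_w, img_h, n_bins_x, n_bins_y, k, in_current_frame):
    """Group edgels (x, y, orientation) into an n x m grid of bins, keep
    bins whose first edgel passes the current-frame test, and return up
    to k edgels from each valid bin."""
    bw, bh = img_w / n_bins_x, img_h / n_bins_y
    bins = {}
    for e in edgels:
        key = (int(e[0] // bw), int(e[1] // bh))
        bins.setdefault(key, []).append(e)
    selected = []
    for key, members in sorted(bins.items()):
        if in_current_frame(members[0]):   # validate the bin via one edgel
            selected.extend(members[:k])   # keep up to k edgels per valid bin
    return selected

edgels = [(5, 5, 0.0), (6, 6, 1.0), (7, 7, 2.0), (95, 95, 0.5)]
inside = lambda e: e[0] < 50 and e[1] < 50   # toy "projects into CF" test
picked = sample_edgels(edgels, 100, 100, 2, 2, k=2, in_current_frame=inside)
```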
20140270361 | COMPUTER-BASED METHOD AND SYSTEM OF DYNAMIC CATEGORY OBJECT RECOGNITION - A computer-based method/system of dynamic category object recognition for estimating pose and/or positioning of target objects and target object's parts. The method/system may recognize a target object and the target object's parts. The method/system may segment and extract data corresponding to the target object and the target object's parts, and estimate the pose and positioning of the target object and the target object's parts using a plurality of stored object models. The dynamic method/system may supplement or modify the parameters of the plurality of stored object models and/or store learned object models. The learned object models assist in recognizing and estimating pose and/or positioning of newly encountered objects more accurately and with fewer processing steps. The method and system may include a processor, a sensor, an external device, a communications unit, and a database. | 09-18-2014 |
20140270362 | FAST EDGE-BASED OBJECT RELOCALIZATION AND DETECTION USING CONTEXTUAL FILTERING - Embodiments include detection or relocalization of an object in a current image from a reference image, such as using a simple and relatively fast and invariant edge orientation based edge feature extraction, then a weak initial matching combined with a strong contextual filtering framework, and then a pose estimation framework based on edge segments. Embodiments include fast edge-based object detection using instant learning with a sufficiently large coverage area for object re-localization. Embodiments provide a good trade-off between computational efficiency of the extraction and matching processes. | 09-18-2014 |
20140270363 | 3D Visual Proxemics: Recognizing Human Interactions in 3D From a Single Image - A unified framework detects and classifies people interactions in unconstrained user generated images. Previous approaches directly map people/face locations in two-dimensional image space into features for classification. Among other things, the disclosed framework estimates a camera viewpoint and people positions in 3D space and then extracts spatial configuration features from explicit three-dimensional people positions. | 09-18-2014 |
20140270364 | PERFORMING OBJECT DETECTION OPERATIONS VIA A GRAPHICS PROCESSING UNIT - In one embodiment of the present invention, a graphics processing unit (GPU) is configured to detect an object in an image using a random forest classifier that includes multiple, identically structured decision trees. Notably, the application of each of the decision trees is independent of the application of the other decision trees. In operation, the GPU partitions the image into subsets of pixels, and associates an execution thread with each of the pixels in the subset of pixels. The GPU then causes each of the execution threads to apply the random forest classifier to the associated pixel, thereby determining a likelihood that the pixel corresponds to the object. Advantageously, such a distributed approach to object detection more fully leverages the parallel architecture of the GPU than conventional approaches. In particular, the GPU performs object detection more efficiently using the random forest classifier than using a cascaded classifier. | 09-18-2014 |
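The per-pixel random forest evaluation described above can be sketched serially: each identically structured tree is walked independently for a pixel and the votes are averaged, which is exactly the work the patent assigns to one GPU thread per pixel. The tuple-based tree encoding and single-feature stumps are illustrative assumptions.

```python
def apply_tree(tree, value):
    """tree: nested (threshold, left, right) tuples; leaves are floats.
    Walks the tree on a scalar pixel feature."""
    while isinstance(tree, tuple):
        threshold, left, right = tree
        tree = left if value < threshold else right
    return tree

def random_forest_pixel(trees, pixel_value):
    """Average the independent trees' votes for one pixel; on the GPU,
    one execution thread would run this per pixel."""
    return sum(apply_tree(t, pixel_value) for t in trees) / len(trees)

# Two toy stumps that vote 1.0 ("object") only for bright pixels.
trees = [(100, 0.0, 1.0), (120, 0.0, 1.0)]
score_bright = random_forest_pixel(trees, 150)   # both trees vote object
score_dark = random_forest_pixel(trees, 50)      # neither tree votes object
```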
20140270365 | IMAGE PROCESSING OF IMAGES THAT INCLUDE MARKER IMAGES - An image processing method, includes: obtaining an image, the image having marker images and a background image; identifying presence of an object in the background image using a processor; and providing a signal for stopping a procedure if the presence of the object is identified. An image processing apparatus, includes: a processor configured for: obtaining an image, the image having marker images and a background image; identifying presence of an object in the background image; and providing a signal for stopping a procedure if the presence of the object is identified. A computer product having a non-transitory medium storing a set of instructions, an execution of which causes an image processing method to be performed, the method includes: obtaining an image, the image having marker images and a background image; identifying presence of an object in the background image; and providing a signal for stopping a procedure if the presence of the object is identified. | 09-18-2014 |
20140270366 | Dimension-Wise Spatial Layout Importance Selection: An Alternative Way to Handle Object Deformation - Systems and methods are disclosed for object detection by receiving an image; segmenting the image; extracting features from the image; and performing a dimension-wise spatial layout selection to pick up dimensions inside a discriminative spatial region for classification. | 09-18-2014 |
20140270367 | Selective Max-Pooling For Object Detection - Systems and methods are disclosed for object detection by receiving an image and extracting features therefrom; applying a learning process to determine sub-regions and select predetermined pooling regions; and performing selective max-pooling to choose one or more feature regions without noises. | 09-18-2014 |
20140270368 | ASSOCIATING SIGNAL INTELLIGENCE TO OBJECTS VIA RESIDUAL REDUCTION - Generally discussed herein are systems and apparatuses that are configured to, and techniques for, associating a SIGnal INTelligence (SIGINT) signal with an object or tracklet. According to an example a technique can include estimating Times of Arrival (ToAs) at each of a plurality of collectors of a first signal from each of a plurality of moving transmitters, each first signal transmitted from a transmitter on a tracklet extracted from video data and received at the plurality of collectors, wherein a location of each of the plurality of collectors is known, comparing each estimated ToA to a respective actual ToA of a SIGINT signal received at each of the collectors, or determining a likelihood that the signal corresponds to the SIGINT signal to determine whether the SIGINT signal was transmitted from a transmitter on the corresponding tracklet. | 09-18-2014 |
20140270369 | ASSOCIATING SIGNAL INTELLIGENCE TO OBJECTS VIA RESIDUAL REDUCTION - Generally discussed herein are systems and apparatuses that are configured to, and techniques for, associating a SIGINT signal with an object or tracklet. According to an example, a technique can include (1) estimating a first set of times, each time of the first set of times can indicate how much time it would take for a respective SIGINT signal of a set of SIGINT signals to travel from a point on a tracklet extracted from video data to a respective collector, (2) estimating a second set of times corresponding to times at which the video data corresponding to the point on the tracklet was gathered, or (3) associating the set of SIGINT signals with a tracklet of the plurality of tracklets based on the first set of times, the second set of times, and a set of ToAs of SIGINT signals at the plurality of collectors. | 09-18-2014 |
20140270370 | PERSON RECOGNITION APPARATUS AND PERSON RECOGNITION METHOD - A person recognition apparatus is disclosed that includes an image input unit, a face detection unit in which a face is expressed from the inputted image data as a score which takes a value in accordance with facial likeness, a facial feature point detection unit, a feature extraction unit, a feature data administrative unit, a person identification unit to calculate similarity between the amount calculated by the feature extraction unit and the amount stored in the feature data administrative unit, a number of candidates calculation unit which displays the stored images in descending order of the similarity and calculates a score from the face detection unit and the facial feature point detection unit, and a candidate confirmation unit in which images displayed in descending order of the similarity are subjected to visual inspection. | 09-18-2014 |
20140270371 | REAL-TIME TRACKING AND CORRELATION OF MICROSPHERES - Methods and apparatuses for tracking and correlating particles include an optical detector that captures a first and a second image of the particles. A video detector is used to capture a plurality of video frames of the particles. The video detector captures the video frames of the particles at a rate faster than the rate at which images are captured by the optical detector to track the movement of particles. A first image position of a particle in the first image of the particles is identified, and then the first image position of the particle is correlated to a second image position of the particle in the second image using the plurality of video frames. | 09-18-2014 |
20140270372 | ELECTRONIC DEVICE AND METHOD OF OPERATING THE SAME - A method and apparatus for image processing includes receiving images, detecting non-stationary objects in the images, displaying a first image that includes a non-stationary object, selecting a frame region including the non-stationary object in the first image, selecting a second image based on a low similarity with the first image, and replacing image data in the frame region of the first image with image data represented in the frame region of the second image. | 09-18-2014 |
20140270373 | ELECTRONIC DEVICE AND METHOD FOR SYNTHESIZING CONTINUOUSLY TAKEN IMAGES - An operation method of an electronic device is provided. The method includes detecting motion objects in each of two or more continuously captured images, determining whether the detected motion objects are synthesizable for use as wallpaper, and providing feedback according to whether the detected motion objects are synthesizable for use as wallpaper. | 09-18-2014 |
20140270374 | Systems, Methods, and Software for Detecting an Object in an Image - In an exemplary embodiment, software made in accordance with the present invention allows for template building, template matching, facial point detection, fitting a 3D face model, and other related concepts. Such techniques can be utilized, for example, in a process for fitting a deformable 3D face model to an image or video containing a face. Various corresponding and related methods and software are described. | 09-18-2014 |
20140270375 | System and Method for Identifying and Interpreting Repetitive Motions - A motion tracking system monitors the motions performed by a user based on motion data received from one or more sensors. The motion tracking system may include a motion tracking device with one or more sensors, a smart device with one or more sensors and/or a server. As the user interacts with the motion tracking system or smart device the motion data generated by one or more sensors is processed by a software application. The software application generates interpreted data based on the motion data and contextual data such as the equipment being used by the user. Feedback is then provided to the user during and/or after the user has performed a motion or a set of motions. The feedback provided to the user may be visual, audio or tactile. The application may be used to monitor a routine in a sporting, fitness, industrial or medical environment, for example. | 09-18-2014 |
20140270376 | FACIAL EXPRESSION RECOGNITION APPARATUS, IMAGE SENSING APPARATUS, FACIAL EXPRESSION RECOGNITION METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - A facial expression recognition apparatus ( | 09-18-2014 |
20140270377 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE RECORDING DEVICE - An image processing apparatus includes an abnormal portion candidate region detection unit configured to detect a candidate region of an abnormal portion based on color information of each pixel constituting an image obtained by capturing an image of an inside of a lumen of a subject, a border neighboring pixel identifying unit configured to identify a border neighboring pixel which is a pixel existing in proximity to a border of the candidate region, a feature data calculation unit configured to calculate feature data based on a pixel value of the border neighboring pixel, and an abnormal portion region distinguishing unit configured to distinguish an abnormal portion region based on the feature data. | 09-18-2014 |
20140270378 | VEHICLE VICINITY MONITORING DEVICE - When a pedestrian candidate and an animal candidate that are detected from an image imaged by an imaging device mounted in a vehicle are in a specified relationship in said image (such as existing nearby), the animal candidate is considered to be an item related to the pedestrian candidate, in other words, a pair object. Attention-arousing output directed at the animal candidate configuring the pair object is not generated. Therefore, a vehicle vicinity monitoring device is provided that reduces the frequency of attention-arousing directed at an animal (for example, a small animal such as a dog) being walked by a human. | 09-18-2014 |
20140286527 | SYSTEMS AND METHODS FOR ACCELERATED FACE DETECTION - A method for face detection is disclosed. The method includes evaluating a scanning window using a first weak classifier in a first stage classifier. The method also includes evaluating the scanning window using a second weak classifier in the first stage classifier based on the evaluation using the first weak classifier. | 09-25-2014 |
20140286528 | BIOMETRIC INFORMATION INPUT APPARATUS AND BIOMETRIC INFORMATION INPUT METHOD - A biometric information input apparatus includes an image capturing section configured to obtain a captured image of a biological object; a pliable part detecting section configured to detect whether there is a pliable part on a surface of the biological object by obtaining a distance to the surface of the biological object from the captured image and comparing the obtained distance with a predetermined distance to be compared set beforehand; and an extracting section configured to extract biometric information from the captured image if the pliable part is not detected by the pliable part detecting section. | 09-25-2014 |
20140286529 | Methods and Systems For Tracking Movement of Microscopic Worms and Worm-Like Organisms, and Software Therefor - Methods and systems for tracking one or more worms or worm-like organisms over a sequence of video frames in virtual real time. The methods and systems can include a robust organism model that accounts for shape changes that occur from one frame to another, such as peristaltic progression, longitudinal deformation, lateral deformation, and bending action. Other features disclosed include: features that allow a user to correct tracking errors, such as splitting a single organism track into two tracks, joining two organism tracks into a single track, switching locations of physical features (such as heads and tails of worms), deleting undesired tracked organisms, and manually tracing organism outlines for model fitting; features that allow a user to set tracking parameters by selecting one or more organisms having desired characteristics; features for automatedly resolving interactions between/among multiple organisms; and features for handling multiple tracking hypotheses, among others. | 09-25-2014 |
20140286530 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT - According to an embodiment, an image processing apparatus includes an acquisition unit, a setting unit, and a calculator. The acquisition unit is configured to acquire an image. The setting unit is configured to set a plurality of sampling points in a sampling area of the image, each sampling point being associated with a calculation area. The calculator is configured to calculate feature values of the image in the calculation area. The setting unit is configured to set the sampling points to provide at least one of an arrangement in which distances between the adjacent sampling points change with distances from a center of the sampling area, and an arrangement in which the sampling points exist on circumferences of a plurality of circles different in diameter. | 09-25-2014 |
20140286531 | SYSTEM FOR TRACKING A MOVING OBJECT, AND A METHOD AND A NON-TRANSITORY COMPUTER READABLE MEDIUM THEREOF - According to one embodiment, a plurality of moving objects is detected from a plurality of frames acquired in time series. Each of the moving objects is corresponded among the frames. A tracklet of each moving object corresponded is extracted and stored. A frame to calculate a position of a moving object is set to a notice frame. The frames are grouped into a first block including at least the notice frame, a second block positioned before the first block in time series, and a third block positioned after the first block in time series. A secondary tracklet included in the second block is acquired from the stored tracklets. The secondary tracklet is corresponded with tracklets included in the first block and the third block, based on a similarity between the secondary tracklet and each of the tracklets. The secondary tracklet is associated with the corresponded tracklets, as a tertiary tracklet. | 09-25-2014 |
20140286532 | HUMAN DETECTION DEVICE - In a human detection device | 09-25-2014 |
20140286533 | Method And System For Recognizing And Assessing Surgical Procedures From Video - A method and system for recognizing and assessing surgical procedures from a video or series of still images is described. Evaluation of surgical techniques of residents learning skills in areas such as cataract surgery is an important aspect of the learning process. The use of videos has become common in such evaluations, but is a time consuming manual process. The present invention increases the efficiency and speed of the surgical technique evaluation process by identifying and saving only information that is relevant to the evaluation process. Using image processing techniques of the present invention, an anatomic structure of a surgical procedure is located on a video, timing of predefined surgical stages is determined, and measurements are taken from frames of the predefined surgical stages to allow the performance of a surgeon to be assessed in an automated and efficient manner. | 09-25-2014 |
20140286534 | GENERATING MAGNETIC FIELD MAP FOR INDOOR POSITIONING - Disclosed is an apparatus caused to acquire information indicating a measured magnetic field vector and information relating to an uncertainty measure of the measured magnetic field vector in at least one known location inside the building, wherein the indicated magnetic field vector represents magnitude and direction of the earth's magnetic field affected by the local structures of the building, and to generate the indoor magnetic field map for at least part of the building on the basis of at least the acquired information and the floor plan. | 09-25-2014 |
20140286535 | Methods and Apparatuses for Gesture Recognition - Methods, apparatuses, and computer program products are herein provided for enabling hand gesture recognition using an example infrared (IR) enabled mobile terminal. One example method may include determining a hand region in at least one captured frame using an adaptive omnidirectional edge operator (AOEO). The method may further include determining a threshold for hand region extraction using a recursive binarization scheme. The method may also include determining a hand location using the determined threshold for the extracted hand region in the at least one captured frame. The method may also include determining a fingertip location based on the determined hand location. Similar and related example apparatuses and example computer program products are also provided. | 09-25-2014 |
20140294231 | AUTOMATICALLY DETERMINING FIELD OF VIEW OVERLAP AMONG MULTIPLE CAMERAS - Field of view overlap among multiple cameras is automatically determined as a function of the temporal overlap of object tracks determined within their fields-of-view. Object tracks with the highest similarity value are assigned into pairs, and portions of the assigned object track pairs having a temporally overlapping period of time are determined. Scene entry points are determined from object locations on the tracks at a beginning of the temporally overlapping period of time, and scene exit points from object locations at an ending of the temporally overlapping period of time. Boundary lines for the overlapping fields-of-view portions within the corresponding camera fields-of-view are defined as a function of the determined entry and exit points in their respective fields-of-view. | 10-02-2014 |
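The entry above keys its entry/exit-point computation to the temporally overlapping period of a pair of object tracks. A minimal sketch of finding that overlap, assuming tracks are reduced to (start, end) time intervals (a simplification of the patent's track pairs):

```python
def temporal_overlap(track_a, track_b):
    """Overlapping time interval of two tracks given as (start, end) times;
    returns None when the tracks do not overlap in time."""
    start = max(track_a[0], track_b[0])
    end = min(track_a[1], track_b[1])
    return (start, end) if start < end else None
```

Object locations at the returned `start` and `end` times would then supply the scene entry and exit points used to bound the overlapping fields-of-view.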
20140294232 | METHOD AND APPARATUS FOR REMOVING SHADOW FROM AERIAL OR SATELLITE PHOTOGRAPH - A method for removing a shadow from an aerial or satellite photograph, includes collecting aerial or satellite photographs, by the photographing information collecting unit; extracting buildings at each of the collected aerial or satellite photographs, by the building information extracting unit; and estimating a shadow area cast by the extracted buildings, by the shadow area estimating unit. Further, the method includes restoring a shaded image in the aerial or satellite photograph, which corresponds to the estimated shadow area, by the image restoration unit; and composing the restored image and a residual image of the aerial or satellite photograph except the shadow area, by the image composition unit. | 10-02-2014 |
20140294233 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - An information processing device includes a processor and a memory which stores a plurality of instructions which, when executed by the processor, cause the processor to execute: acquiring images which include a target object, the images being captured by a plurality of cameras on a time-series basis; calculating a plurality of distances from each of the plurality of cameras to the target object by using the images; and correcting, in a case where the target object has reached a predetermined x-y plane and a difference in an area of the target object between the images is equal to or less than a predetermined first threshold, the distance that has been calculated to a distance from the cameras to the x-y plane. | 10-02-2014 |
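The correction rule in the entry above reduces to a guarded substitution: once the object has reached the known plane and the inter-image area difference falls within the first threshold, the computed distance is snapped to the camera-to-plane distance. A minimal sketch with hypothetical names:

```python
def correct_distance(measured, plane_distance, reached_plane, area_diff, threshold):
    """Replace the computed camera-to-object distance with the known
    camera-to-plane distance when the plane-contact and area-difference
    conditions both hold; otherwise keep the measurement."""
    if reached_plane and area_diff <= threshold:
        return plane_distance
    return measured
```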
20140294234 | System and Method for Initiating Actions and Providing Feedback by Pointing at Object of Interest - A system and method are described for compiling feedback in command statements that relate to applications or services associated with spatial objects or features, pointing at such a spatial object or feature in order to identify the object of interest, and executing the command statements on a system server and attaching feedback information to the representation of this object or feature in a database of the system server. | 10-02-2014 |
20140294235 | FUNDUS IMAGE PROCESSING APPARATUS, FUNDUS IMAGE PROCESSING METHOD, AND RECORDING MEDIUM - A fundus image processing apparatus that processes a fundus image of an examinee's eye, the fundus image processing apparatus includes: a processor; and a memory storing computer readable instructions, when executed by the processor, causing the fundus image processing apparatus to: identify the optic disc included in the fundus image; identify a blood vessel included in the fundus image; calculate an upper diameter ratio which is a diameter ratio of those of the identified blood vessels that are positioned in a region above a height of the optic disc; calculate a lower diameter ratio which is a diameter ratio of those of the identified blood vessels that are positioned in a region under the height of the optic disc; and calculate an arteriovenous diameter ratio in the fundus of the examinee's eye based on the upper diameter ratio and the lower diameter ratio. | 10-02-2014 |
20140294236 | SYSTEMS AND METHODS FOR NOTE RECOGNITION - At least some aspects of the present disclosure feature systems and methods for note recognition. The note recognition system includes a sensor, a note recognition module, and a note extraction module. The sensor is configured to capture a visual representation of a scene having one or more notes. The note recognition module is coupled to the sensor. The note recognition module is configured to receive the captured visual representation and determine a general boundary of a note from the captured visual representation. The note extraction module is configured to extract content of the note from the captured visual representation based on the determined general boundary of the note. | 10-02-2014 |
20140294237 | Combined color image and depth processing - A method for image processing includes receiving a depth image of a scene containing a human subject and receiving a color image of the scene containing the human subject. A part of a body of the subject is identified in at least one of the images. A quality of both the depth image and the color image is evaluated, and responsively to the quality, one of the images is selected to be dominant in processing of the part of the body in the images. The identified part is localized in the dominant one of the images, while using supporting data from the other one of the images. | 10-02-2014 |
20140294238 | INSPECTION AND RECYCLING OF CONTAINERS - A method for examining filled containers, which are filled with CO2 | 10-02-2014 |
20140294239 | METHOD AND APPARATUS FOR AUTOMATIC DETECTION OF FEATURES IN AN IMAGE AND METHOD FOR TRAINING THE APPARATUS - In one or more embodiments described herein, there is provided a method of training an apparatus. The method trains the apparatus to automatically detect features of interest in an image. An image is received, the image being of at least one object for inspection, each image comprising a plurality of pixels. The image is segmented into a plurality of superpixels, each superpixel comprising a plurality of pixels which each have similar image data attributes to one another. The superpixels are classified into at least two classes in response to user input identifying at least one feature of interest in one or more of the superpixels. From a library of image data attributes, a subset of image data attributes is determined that provides preferential discrimination between the at least two classes. The apparatus is then trained using said determined subset of image data attributes to thereby enable the apparatus to classify superpixels of an image into the at least two classes. | 10-02-2014 |
20140301597 | WINDSHIELD LOCALIZATION FOR OCCUPANCY DETECTION - A system and method to capture an image of an oncoming target vehicle and localize the windshield of the target vehicle. Upon capturing an image, it is then analyzed to detect certain features of the target vehicle. Based on geometrical relationships of the detected features, the area of the image containing the windshield of the vehicle can then be identified and localized for downstream processing. | 10-09-2014 |
20140301598 | TRUE SPACE TRACKING OF AXISYMMETRIC OBJECT FLIGHT USING DIAMETER MEASUREMENT - Methods and apparatus for determining a trajectory of an axisymmetric object in 3-D physical space using a digital camera which records 2-D image data are described. In particular, based upon i) a characteristic length of the axisymmetric object, ii) a physical position of the camera determined from sensors associated with the camera (e.g., accelerometers) and iii) captured 2-D digital images from the camera, including a time at which each image is generated relative to one another, a position, a velocity vector and an acceleration vector can be determined in three-dimensional physical space for axisymmetric objects as a function of time. In one embodiment, the method and apparatus can be applied to determine the trajectories of objects in games which utilize axisymmetric objects, such as basketball, baseball, bowling, golf, soccer, rugby or football. | 10-09-2014 |
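The core geometric idea behind diameter-based tracking in the entry above is the pinhole-camera relation: an object of known physical diameter D imaged at d pixels by a camera of focal length f (in pixels) lies at range roughly Z = f * D / d. A minimal sketch (not the patented method, which also fuses camera-pose sensors and frame timing):

```python
def depth_from_diameter(focal_px, true_diameter, image_diameter_px):
    """Pinhole-camera range estimate from an object's apparent diameter:
    Z = f * D / d. Units of the result follow true_diameter."""
    return focal_px * true_diameter / image_diameter_px
```

For example, a 0.24 m basketball imaged at 120 px with a 1000 px focal length is at about 2 m; differencing such ranges across timestamped frames yields velocity and acceleration.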
20140301599 | METHOD FOR FACE RECOGNITION - A method for face recognition by a face recognition service server includes receiving a face image that is photographed from a face registration terminal or a face recognition terminal; detecting a face area of the received face image; and quantifying at least one quality factor for the detected face area in order to determine whether the received face image is suitable as a sample image required for face recognition. Further, the method includes selecting the received face image as the sample image required for face recognition when the quality factor satisfies a predetermined quality criterion. | 10-09-2014 |
20140301600 | TRUE SPACE TRACKING OF AXISYMMETRIC OBJECT FLIGHT USING DIAMETER MEASUREMENT - Methods and apparatus for determining a trajectory of an axisymmetric object in 3-D physical space using a digital camera which records 2-D image data are described. In particular, based upon i) a characteristic length of the axisymmetric object, ii) a physical position of the camera determined from sensors associated with the camera (e.g., accelerometers) and iii) captured 2-D digital images from the camera, including a time at which each image is generated relative to one another, a position, a velocity vector and an acceleration vector can be determined in three-dimensional physical space for axisymmetric objects as a function of time. In one embodiment, the method and apparatus can be applied to determine the trajectories of objects in games which utilize axisymmetric objects, such as basketball, baseball, bowling, golf, soccer, rugby or football. | 10-09-2014 |
20140301601 | TRUE SPACE TRACKING OF AXISYMMETRIC OBJECT FLIGHT USING DIAMETER MEASUREMENT - Methods and apparatus for determining a trajectory of an axisymmetric object in 3-D physical space using a digital camera which records 2-D image data are described. In particular, based upon i) a characteristic length of the axisymmetric object, ii) a physical position of the camera determined from sensors associated with the camera (e.g., accelerometers) and iii) captured 2-D digital images from the camera, including a time at which each image is generated relative to one another, a position, a velocity vector and an acceleration vector can be determined in three-dimensional physical space for axisymmetric objects as a function of time. In one embodiment, the method and apparatus can be applied to determine the trajectories of objects in games which utilize axisymmetric objects, such as basketball, baseball, bowling, golf, soccer, rugby or football. | 10-09-2014 |
20140301602 | Queue Analysis - A method and system for analysing a queue comprising: obtaining a first image acquired at a first time of a first position within a queue; obtaining a second image acquired at a second time of a second position within the queue; detecting a queue member within the first image; detecting a queue member within the second image; determining that the queue member detected within the second image is the same as the queue member detected within the first image; and determining a trajectory of the queue member within the queue based on a difference between the first time and the second time. | 10-09-2014 |
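Once the same queue member has been re-identified in both images, the trajectory step in the entry above amounts to displacement over elapsed time. A minimal sketch, with positions as hypothetical 2-D coordinates:

```python
def queue_speed(pos1, pos2, t1, t2):
    """Average speed of a re-identified queue member between two sightings:
    Euclidean displacement divided by the elapsed time."""
    dx = pos2[0] - pos1[0]
    dy = pos2[1] - pos1[1]
    return (dx * dx + dy * dy) ** 0.5 / (t2 - t1)
```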
20140301603 | SYSTEM AND METHOD FOR COMPUTER VISION CONTROL BASED ON A COMBINED SHAPE - A method and system for computer vision based control of a device are provided in which a shape detection algorithm is applied on an image to identify in the image a finger positioned over or near a user's lips. A device may be controlled based on the detection of the shape, for example, a change of volume of an audio output of the device may be caused. | 10-09-2014 |
20140301604 | METHOD AND SYSTEM FOR LUMINANCE ADJUSTMENT OF IMAGES IN AN IMAGE SEQUENCE - Disclosed are a method and apparatus for adjusting a set of luminance values associated with a set of visual elements in a current frame. | 10-09-2014 |
20140301605 | POSTURE ESTIMATION DEVICE AND POSTURE ESTIMATION METHOD - A posture estimation device that is capable of estimating, with high precision, the posture of an object comprising multiple parts. | 10-09-2014 |
20140307917 | ROBUST FEATURE FUSION FOR MULTI-VIEW OBJECT TRACKING - Multi-Task Multi-View Tracking (MTMVT) is used to visually identify and track an object. The MTMVT employs visual cues such as color, edge, and texture as complementary features to intensity in the target appearance representation, and combines a multi-view representation with robust multi-task learning to solve feature fusion tracking problems. To reduce computational demands, feature matrices are sparsely represented in a single matrix and then decomposed into a pair of matrices to improve robustness to outliers. Views and particles are further combined, based on their interdependency and commonality, into a single computational task. Probabilities are computed for each particle across all features and the particle with the greatest probability is selected as the target tracking result. | 10-16-2014 |
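The final selection step in the entry above — score every particle across all features, keep the most probable — can be sketched as below. Combining features by a naive log-space product is an assumption for illustration; the patent's multi-task fusion is more involved:

```python
import math

def select_particle(particle_feature_probs):
    """Combine each particle's per-feature probabilities (naive product,
    computed in log space for numerical stability) and return the index
    of the most probable particle."""
    best_index, best_logp = 0, float("-inf")
    for i, probs in enumerate(particle_feature_probs):
        logp = sum(math.log(p) for p in probs)
        if logp > best_logp:
            best_index, best_logp = i, logp
    return best_index
```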
20140307918 | TARGET-IMAGE DETECTING DEVICE, CONTROL METHOD AND CONTROL PROGRAM THEREOF, RECORDING MEDIUM, AND DIGITAL CAMERA - A method for controlling a target-image detecting device configured to detect a target image as a part of photographed images and as an image of a target object from the photographed images, includes sequentially obtaining a plurality of the photographed images that form a moving image, detecting a target image included in the obtained photographed image, generating a detection result and accumulating the generated detection result in memory as a detection history, referring to the detection history in the memory and deciding whether the detection result of the target image of the same target object is included in a latest predetermined number of detection results, outputting the detection result when it is included, and not outputting the detection result when it is not included. | 10-16-2014 |
20140307919 | GESTURE RECOGNITION DEVICE, GESTURE RECOGNITION METHOD, ELECTRONIC APPARATUS, CONTROL PROGRAM, AND RECORDING MEDIUM - A gesture recognition device configured to recognize a gesture of a hand from a captured image of a user, having a fingertip candidate detector that detects a fingertip candidate from the image, a finger detector that detects a skin area as a finger, the skin area extending by a predetermined length in a certain direction from the fingertip candidate detected by the fingertip candidate detector, and a gesture type specifying unit that specifies a type of the gesture based on the number of fingers detected by the finger detector. | 10-16-2014 |
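The last step in the entry above specifies the gesture type from the number of detected fingers. A minimal sketch; the specific finger-count-to-label mapping below is invented for illustration, since the abstract does not define one:

```python
def gesture_type(finger_count):
    """Map a detected finger count to a gesture label. The mapping itself
    is a hypothetical example, not taken from the patent."""
    labels = {0: "fist", 1: "point", 2: "peace", 5: "open hand"}
    return labels.get(finger_count, "unknown")
```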
20140307920 | SYSTEMS AND METHODS FOR TRACKING OCCLUDED OBJECTS IN THREE-DIMENSIONAL SPACE - Methods and systems for the tracking of one or more occluded objects in 3D space include creating an approximation of an object while it is occluded. | 10-16-2014 |
20140307921 | METHOD FOR LOCATING OBJECTS BY RESOLUTION IN THE THREE-DIMENSIONAL SPACE OF THE SCENE - In the field of video surveillance by calibrated cameras and locating objects of interest in images, a method uses, on the one hand, an initial presence map p | 10-16-2014 |
20140314269 | DEVICE, SYSTEM AND METHOD FOR RECOGNIZING ACTION OF DETECTED SUBJECT - The present disclosure discloses a device, a system and a method for recognizing the action of a detected subject. The device includes an input section for the user to input scene mode selected among a plurality of scene modes; a detection section for detecting the action of the detected subject and outputting an action signal when the device is disposed on the subject; and a microprocessor for processing the action signal according to the selected scene mode, to recognize and output the action of the detected subject in different scene modes. The system includes a device and a terminal, wherein the device is used to recognize the action of the detected subject based on a scene mode selected through the terminal by a user; and the terminal is used to display the action recognition result. The method includes recognizing the action based on a scene mode selected by a user. | 10-23-2014 |
20140314270 | DETECTION OF FLOATING OBJECTS IN MARITIME VIDEO USING A MOBILE CAMERA - A method and system for detecting floating objects in maritime video is disclosed. The horizon is detected within the video. Modeling of the sky and water is performed on the video. Objects are detected that are not water and sky within the video. | 10-23-2014 |
20140314271 | Systems and Methods for Pedestrian Detection in Images - System, apparatus, and method embodiments are provided for detecting the presence of a pedestrian in an image. In an embodiment, a method for determining whether a person is present in an image includes receiving a plurality of images, wherein each image comprises a plurality of pixels and determining a modified center symmetric local binary pattern (MS-LBP) for the plurality of pixels for each image, wherein the MS-LBP is calculated on a gradient magnitude map without using an interpolation process, and wherein a value for each pixel is a gradient magnitude. | 10-23-2014 |
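The descriptor in the entry above is a center-symmetric local binary pattern evaluated on a gradient magnitude map. The classic CS-LBP code for one pixel compares its four opposite neighbour pairs; a minimal sketch of that pairing (the "modified" variant's exact details are not given in the abstract):

```python
def cs_lbp(neighbors, threshold=0.0):
    """Center-symmetric LBP code for the 8 neighbours of one pixel, listed
    clockwise from top-left. Each of the 4 opposite pairs contributes one
    bit when its difference exceeds the threshold, giving a 4-bit code."""
    code = 0
    for i in range(4):
        if neighbors[i] - neighbors[i + 4] > threshold:
            code |= 1 << i
    return code
```

Applied to gradient magnitudes rather than raw intensities (and, per the entry, without interpolation), this yields one 0-15 code per pixel for the pedestrian classifier.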
20140314272 | METHOD OF TRACKING OBJECTS USING HYPERSPECTRAL IMAGERY - A method of tracking motion of at least one object of a group of moving objects using hyperspectral imaging includes, among other things, obtaining a series of hyperspectral image frames; comparing each frame in the series to a template to determine changes in the image between frames; identifying a group of pixels in each frame associated with the changes; identifying changes as motion of the moving objects; correlating the pixel groups frame to frame to spatially determine at least one parameter of the motion of the objects; and correlating the pixel groups with a spectral reflectance profile associated with the at least one object wherein the track of the at least one object is distinguishable from the tracks of other moving objects. | 10-23-2014 |
20140314273 | Method, Apparatus and Computer Program Product for Object Detection - In accordance with an example embodiment a method and apparatus is provided. The method comprises detecting presence of an object portion in at least one sub-window in an image based on a first classifier. The first classifier is associated with a first set of weak classifiers. A set of sample sub-windows is generated corresponding to the at least one sub-window by performing at least one of a row shifting and column shifting of the at least one sub-window. A presence of the object portion in the set of sample sub-windows is detected based on a second classifier. The second classifier is associated with a second set of weak classifiers. The presence of the object portion is determined in the at least one sub-window based on the comparison of a number of sample sub-windows in the set of sample sub-windows comprising the object portion with a predetermined threshold number. | 10-23-2014 |
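The confirmation stage in the entry above counts how many of the shifted sample sub-windows the second classifier accepts and compares that count with a threshold. A minimal sketch of the vote:

```python
def confirm_detection(window_hits, threshold):
    """Second-stage confirmation: the object portion is confirmed in the
    original sub-window when at least `threshold` of the shifted sample
    sub-windows were accepted by the second classifier."""
    return sum(1 for hit in window_hits if hit) >= threshold
```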
20140314274 | METHOD FOR OPTIMIZING SIZE AND POSITION OF A SEARCH WINDOW OF A TRACKING SYSTEM - A method for optimizing the position and size of a search window in a tracking system is disclosed. According to some embodiments of the present invention, the method may comprise: calculating a velocity of a tracked object based on a comparison of at least two previously captured consecutive frames; calculating an expected position of the tracked object in a subsequent frame based on the calculated velocity of the tracked object; determining possible positions of the tracked object in the subsequent frame, based on the last known position of the tracked object, the status of the tracked object, the expected position and the acceleration of the tracked object; and optimizing the size and position of the search window within the subsequent frame so that the search window covers the expected position and at least one of the possible positions of said tracked object. | 10-23-2014 |
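The prediction steps in the entry above can be sketched as a constant-velocity extrapolation with an inflated window; acceleration and object status, which the patent also uses, are omitted here for brevity, and all names are illustrative:

```python
def predict_search_window(last_pos, prev_pos, dt, half_size, margin=1.5):
    """Estimate the object's velocity from its last two positions, project
    the expected position one frame ahead, and return a search window
    (x_min, y_min, x_max, y_max) inflated by `margin` so it also covers
    nearby possible positions."""
    vx = (last_pos[0] - prev_pos[0]) / dt
    vy = (last_pos[1] - prev_pos[1]) / dt
    cx, cy = last_pos[0] + vx * dt, last_pos[1] + vy * dt
    r = half_size * margin
    return (cx - r, cy - r, cx + r, cy + r)
```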
20140314275 | Pedestrian Right of Way Monitoring and Reporting System and Method - A system and method for monitoring vehicle traffic and collecting data indicative of pedestrian right of way violations by vehicles is provided. The system comprises memory and logic for monitoring traffic intersections and recording evidence indicating that vehicles have violated pedestrian right of way. Two sensor modalities collecting video data and radar data of the intersection under observation are employed in one embodiment of the system. The violation evidence can be accessed remotely by a traffic official for issuing of traffic citations. | 10-23-2014 |
20140314276 | SYSTEM AND METHOD OF MEASURING DISTANCES RELATED TO AN OBJECT - A system and method for measuring distances related to a target object depicted in an image and the construction and delivery of supplemental window materials for fenestration. A digital image is obtained that contains a target object dimension and a reference object dimension in the same plane. The digital image may contain a target object dimension identified by an ancillary object and a reference object dimension in different planes. Fiducial patterns on the reference and optional ancillary objects are used that are recognized by an image analysis algorithm. Information regarding a target object and its immediate surroundings is provided to an automated or semi-automated measurement process, design and manufacturing system such that customized parts are provided to end users. The digital image contains a reference object having a reference dimension and calculating a constraint dimension from the digital image based on a reference dimension. The custom part is then designed and manufactured based on a calculated constraint dimension. | 10-23-2014 |
20140314277 | INCORPORATING VIDEO META-DATA IN 3D MODELS - A moving object tracked within a field of view environment of a two-dimensional data feed of a calibrated video camera is represented by a three-dimensional model. An appropriate three-dimensional mesh-based volumetric model for the object is initialized by using a back-projection of a corresponding two-dimensional image. A texture of the object is projected onto the three-dimensional model, and two-dimensional tracks of the object are upgraded to three-dimensional motion to drive a three-dimensional model. | 10-23-2014 |
20140314278 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, PROGRAM, AND IMAGE PROCESSING SYSTEM - An image processing apparatus including a region-of-interest decision unit that identifies an interest space region in which an object to be analyzed is likely to be present based on a distance image, which is shape information of an object space corresponding to a captured image to be analyzed acquired by a distance image sensor, to identify a region of interest in the captured image corresponding to the interest space region and an image analysis unit that performs different image analyses for the region of interest and other image regions. | 10-23-2014 |
20140321697 | KERNEL WITH ITERATIVE COMPUTATION - Provided are examples of a detecting engine for determining in which pixels in a hyperspectral scene materials of interest or targets are present. A collection of spectral references, typically five to a few hundred, is used to look through a million or more pixels per scene to identify detections. An example of the detecting engine identifies detections by calculating a kernel vector for each spectral reference in the collection. This calculation is quicker than the conventional Matched Filter kernel calculation, which computes a kernel for each scene pixel. Another example of the detecting engine selects pixels with high detection filter scores and calculates coherence scores for these pixels. This calculation is more efficient than the conventional Adaptive Cosine/Coherence Estimator calculation, which calculates a score for each scene pixel, most of which do not provide a detection. | 10-30-2014 |
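The per-reference kernel idea in the entry above follows the standard spectral matched filter: build one kernel vector from the background statistics and the reference signature, then apply it to every pixel with a single matrix-vector product. A minimal textbook sketch (not the patented iterative computation):

```python
import numpy as np

def matched_filter_scores(pixels, mean, cov_inv, reference):
    """Matched-filter detection scores, one kernel per spectral reference.
    pixels: (N, B) array of N spectra with B bands; mean: (B,) background
    mean; cov_inv: (B, B) inverse background covariance; reference: (B,)
    target signature."""
    s = reference - mean
    kernel = cov_inv @ s / (s @ cov_inv @ s)  # computed once per reference
    return (pixels - mean) @ kernel           # one score per scene pixel
```

Because the kernel depends only on the reference and the scene statistics, the cost per scene scales with the handful of references rather than the millions of pixels.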
20140321698 | METHOD FOR IMAGE-BASED STATUS DETERMINATION - Methods, systems, computer-readable media, and apparatuses for image-based status determination are presented. In some embodiments, a method includes capturing at least one image of a moving path. At least one feature within the at least one image is analyzed and based on the analysis of the at least one feature, a direction of movement of the moving path is determined. In some embodiments, a method includes capturing an image of an inclined path. At least one feature within the image is analyzed and based on analysis of the at least one feature, a determination is made whether the image was captured from a top position relative to the inclined path or a bottom position relative to the inclined path. | 10-30-2014 |
20140321699 | Method For Characterizing Confined Fission Tracks in Solids - A method for determining the position and its statistical uncertainty of a confined fission track in a crystal based on detecting confined fission track tips in a series of transmitted light images. A computer software program for: detecting confined fission track tips in a series of transmitted light images and assessing the viability of each tip using a scoring equation; writing to and loading from a computer database of confined fission tracks; modifying the scoring equation for assessing confined fission track tip viability based on the contents of the computer database. A computer database consisting of transmitted light images of confined fission tracks. A method for determining the statistical probability that a confined fission track is a real confined fission track. | 10-30-2014 |
20140321700 | LIGHT SENSING MODULE AND SYSTEM - A light sensing module used in a light sensing system incorporated with a processor includes at least one first light source, for emitting light; at least one first light sensor, for sensing the light emitted by the first light source, light reflected by an ambient object or ambient light, in order to obtain a sensing result; a control unit, for performing image detecting and object identification or ambient light sensing by computing according to the sensing result, and generating a computational result; and at least one interrupt driver, for sending an interrupt signal to the processor, in order to notify the processor to receive the computational result; wherein the processor disposes a type and a number of the first light sensor, and configures the control unit accordingly, so that the control unit performs computation on the sensing result to generate the computational result. | 10-30-2014 |
20140321701 | METHOD AND APPARATUS FOR RECOGNIZING DIRECTIONAL STRUCTURES ON A WINDOW PANE OF A VEHICLE - A method for recognizing directional structures on a window pane of a vehicle is described. The method includes carrying out an assessment of image points of an image of the window pane, which image points are disposed along an evaluation path, a course of the evaluation path being dependent on an expected orientation of the directional structures on the window pane. The method further includes recognizing a directional structure based on the assessment. | 10-30-2014 |
20140321702 | DIMINISHED AND MEDIATED REALITY EFFECTS FROM RECONSTRUCTION - Disclosed embodiments pertain to apparatus, systems, and methods for mixed reality. In some embodiments, a camera pose relative to a tracked object in a live image may be determined and used to render synthetic images from keyframes in a 3D model without the tracked object. Optical flow magnitudes for pixels in a first mask region relative to a subset of the synthetic images may be determined and the optical flow magnitudes may be used to determine pixels in each of the subset of synthetic images that correspond to pixels in the first mask. For each pixel in the first mask, a corresponding replacement pixel may be determined as a function of pixels in the subset of synthetic images that correspond to the corresponding pixel in the first mask. | 10-30-2014 |
20140321703 | IMAGE COMPOSITING DEVICE AND IMAGE COMPOSITING METHOD - It is an object to generate a desired composite image in which a motion area of a subject is correctly composited. | 10-30-2014 |
20140321704 | METHOD, SYSTEM AND APPARATUS FOR TRACKING OBJECTS OF A SCENE - A method of tracking objects of a scene is disclosed. The method determines two or more tracks which have merged. Each track is associated with at least one of the objects and has a corresponding graph structure. Each graph structure comprises at least one node representing the corresponding track. A new node representing the merged tracks is created. The graph structures are added as children nodes of the new node to create a merged graph structure. A split between the objects associated with one of the tracks represented by the nodes of the merged graph structure is determined. Similarity between one or more of the nodes in the merged graph structure and foreground areas corresponding to split objects is determined. One of the nodes in the merged graph structure is selected based on the determined similarity. A new graph structure for tracking the objects is created, the new graph structure having the selected node at the root of the new graph structure. | 10-30-2014 |
20140321705 | METHOD OF DETERMINING REFERENCE FEATURES FOR USE IN AN OPTICAL OBJECT INITIALIZATION TRACKING PROCESS AND OBJECT INITIALIZATION TRACKING METHOD - A method of determining reference features for use in an optical object initialization tracking process is disclosed, said method comprising the following steps: a) capturing at least one current image of a real environment or synthetically generated by rendering a virtual model of a real object to be tracked with at least one camera and extracting current features from the at least one current image, b) providing reference features adapted for use in an optical object initialization tracking process, c) matching a plurality of the current features with a plurality of the reference features, d) estimating at least one parameter associated with the current image based on a number of current and reference features which were matched, and determining for each of the reference features which were matched with one of the current features whether they were correctly or incorrectly matched, e) wherein the steps a) to d) are processed iteratively multiple times. | 10-30-2014 |
20140321706 | AUTOMATED, REMOTELY-VERIFIED ALARM SYSTEM WITH INTRUSION AND VIDEO SURVEILLANCE AND DIGITAL VIDEO RECORDING - An automated self-monitored alarm verification solution including at least a premises portion, a server portion, and an end user device portion. Alarm verification includes capturing by an image capture device at least one image in response to a detection event, and transmitting a first data signal including the image to a local signal processing device. The signal processing device transmits a second signal including at least a portion of the image to a remote hosted server according to at least a first set of predetermined parameters. After receiving the second signal, the server transmits a third signal including at least a portion of the image from the hosted server to a user device. Using the user device, a user views the image and indicates a validity status of the alarm based at least in part on the content of the image. Based at least upon either the validity status indicated by the user, or upon a failure to receive a message including a validity status from the user within a predetermined duration of time, the server portion may send an alarm signal to an emergency response service. | 10-30-2014 |
20140321707 | PREDICTIVE FLIGHT PATH AND NON-DESTRUCTIVE MARKING SYSTEM AND METHOD - Systems and methods for acquiring and targeting an object placed in motion, tracking the object's movement, and while tracking, measuring the object's characteristics and marking the object with an external indicator until the object comes to rest are provided. The systems and methods include an acquisition and tracking system, a data capture system, and a marking control system. Through the components of the system, an object moving through two or three dimensional space can be externally marked to assist with improving the performance of striking the object. | 10-30-2014 |
20140321708 | METHOD FOR DETERMINING THE POSE OF A CAMERA AND FOR RECOGNIZING AN OBJECT OF A REAL ENVIRONMENT - A method for determining the pose of a camera relative to a real environment includes the following steps: taking at least one image of a real environment by means of a camera, the image containing at least part of a real object, performing a tracking method that evaluates information with respect to correspondences between features associated with the real object and corresponding features of the real object as it is contained in the image of the real environment, so as to obtain conclusions about the pose of the camera, determining at least one parameter of an environmental situation, and performing the tracking method in accordance with the at least one parameter. Analogously, the method can also be utilized in a method for recognizing an object of a real environment in an image taken by a camera. | 10-30-2014 |
20140321709 | IMAGE PROCESSING APPARATUS, IMAGE-CAPTURING METHOD, AND VEHICLE - An image processing apparatus includes a light source configured to emit light onto a glass; an image-capturing unit configured to capture light from an image-capturing region including reflection light of the emitted light from the glass; an object detection filter used to detect an object attached to the glass, light from a portion of the image-capturing region entering the object detection filter; an exposure control unit configured to determine a first exposure amount used in image capturing for a first region where the object detection filter does not exist and a second exposure amount used in image capturing for a second region where the object detection filter exists; and an image analysis unit configured to analyze a captured image obtained by the image-capturing unit. The image-capturing unit switches an exposure amount used in an image-capturing process for the image-capturing region between the first exposure amount and the second exposure amount. | 10-30-2014 |
20140321710 | METHOD FOR THREE-DIMENSIONAL LOCALIZATION OF AN OBJECT FROM A TWO-DIMENSIONAL MEDICAL IMAGE - A method for determining the three-dimensional location of an object in real-time from a two-dimensional medical image obtained with a medical imaging system is provided. For example, the three-dimensional location of an interventional medical device or a marker positioned on such a device may be determined from a two-dimensional x-ray image obtained with an interventional x-ray imaging system. Template images corresponding to the object under different imaging geometries and orientations are produced and are compared to images acquired with the medical imaging system. Similarity measures, such as normalized cross correlation and normalized similarity integral, are used to determine the similarity between a selected template image and the medical images in different stages of refining the position information for the object. | 10-30-2014 |
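The abstract above names normalized cross correlation as one of its similarity measures for comparing template images against the acquired medical image. A minimal, illustrative sketch of NCC-based template selection over flattened intensity patches (plain Python; the function names are assumptions, not from the patent):

```python
from math import sqrt

def ncc(template, patch):
    """Normalized cross correlation between two equal-length intensity
    lists. Returns a value in [-1, 1]; 1 indicates a perfect linear
    (brightness/contrast-invariant) intensity match."""
    n = len(template)
    mt = sum(template) / n
    mp = sum(patch) / n
    t = [x - mt for x in template]
    p = [x - mp for x in patch]
    denom = sqrt(sum(x * x for x in t) * sum(x * x for x in p))
    return sum(a * b for a, b in zip(t, p)) / denom if denom else 0.0

def best_template(templates, patch):
    """Index and score of the template (e.g. the device rendered under
    one candidate geometry/orientation) that best explains the patch."""
    scores = [ncc(t, patch) for t in templates]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]
```

Because NCC subtracts the mean and normalizes by variance, a template that differs from the observed patch only by gain and offset still scores 1.0, which is why it is well suited to x-ray images with varying exposure.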
20140328510 | Method For Characterizing Fission Semi-Tracks in Solids - A method for determining the position and its statistical uncertainty of a fission semi-track in a crystal based on detecting the tip and etch figure of a fission semi-track in a series of transmitted light images. A computer software program for: detecting the tip and etch figure of a fission semi-track in a series of transmitted light images and assessing the viability of the tip using a scoring equation; writing to and loading from a computer database of fission semi-tracks; modifying the scoring equation for assessing fission semi-track tip viability based on the contents of the computer database. A computer database consisting of transmitted light images of fission semi-tracks. A method for determining the statistical probability that a fission semi-track is a real fission semi-track. | 11-06-2014 |
20140328511 | SUMMARIZING SALIENT EVENTS IN UNMANNED AERIAL VIDEOS - A method for summarizing image content from video images received from a moving camera includes detecting foreground objects in the images, determining moving objects of interest from the foreground objects, tracking the moving objects, rating movements of the tracked objects, and generating a list of highly rated segments within the video images based on the ratings. | 11-06-2014 |
20140328512 | SYSTEM AND METHOD FOR SUSPECT SEARCH - A system and method for detecting an object of interest. A system and method may generate a first signature for an object of interest based on an image of the object of interest. A system and method may generate a second signature for a candidate object based on an image of the candidate object. A system and method may calculate a similarity score by relating the first signature to the second signature and may determine the image of the candidate object is an image of the object of interest based on the similarity score. | 11-06-2014 |
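The signature-and-similarity pipeline in the suspect-search abstract above can be sketched with normalized feature vectors and a cosine similarity threshold. This is a toy illustration under assumed representations (the patent does not specify the signature contents); all names are hypothetical.

```python
from math import sqrt

def signature(features):
    """Toy appearance signature: an L2-normalised feature vector
    (a real system might use colour/texture histograms of the image)."""
    norm = sqrt(sum(v * v for v in features)) or 1.0
    return [v / norm for v in features]

def similarity(sig_a, sig_b):
    """Cosine similarity between two signatures, in [-1, 1]."""
    return sum(a * b for a, b in zip(sig_a, sig_b))

def is_match(sig_query, sig_candidate, threshold=0.9):
    """Decide whether the candidate image shows the object of interest
    by relating the two signatures via a similarity score."""
    return similarity(sig_query, sig_candidate) >= threshold
```

Because both signatures are unit-normalized, the score depends only on the direction of the feature vectors, so two images of the same object under different overall brightness can still match.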
20140328513 | Object Information Derived From Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 11-06-2014 |
20140328514 | OBJECT TRACKING DEVICE - In an object tracking device, a search region setting unit sets the search region of an object in a frame image at a present point in time, based on an object region in a frame image at a previous point in time, zoom center coordinates in the frame image at the previous point in time, and a ratio between the zoom scaling factor of the frame image at the previous point in time and the zoom scaling factor of the frame image at the present point in time. A normalizing unit normalizes the image of a search region of the object included in the frame image at the present point in time to a fixed size. A matching unit searches the normalized image of the search region for an object region similar to a template image. | 11-06-2014 |
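The zoom-compensated search-region geometry described above reduces to mapping the previous object box through the zoom transform about the zoom centre and enlarging it by a motion margin. A minimal sketch under a simple scale-about-a-point model (an assumption; the patent does not give the exact formula):

```python
def search_region(prev_box, zoom_center, zoom_ratio, margin=1.2):
    """Predict the search region in the current frame from the object box
    in the previous frame, given the zoom centre and the ratio between the
    current and previous zoom scaling factors.

    prev_box: (x, y, w, h) top-left corner plus size; zoom_center: (cx, cy).
    """
    x, y, w, h = prev_box
    cx, cy = zoom_center
    # A point p in the previous frame maps to cx + zoom_ratio * (p - cx).
    nx = cx + zoom_ratio * (x - cx)
    ny = cy + zoom_ratio * (y - cy)
    nw, nh = w * zoom_ratio, h * zoom_ratio
    # Enlarge by a margin to allow for object motion between frames.
    ex, ey = nw * (margin - 1) / 2, nh * (margin - 1) / 2
    return (nx - ex, ny - ey, nw * margin, nh * margin)
```

With `zoom_ratio = 1` this degenerates to the usual "previous box plus margin" search region; the zoom term matters only while the camera is actively zooming.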
20140328515 | POSITIONAL LOCATING SYSTEM AND METHOD - A method and system are disclosed for locating or otherwise generating positional information for an object, such as but not limited to generating positional coordinates for an object attached to an athlete engaging in an athletic event. The positional coordinates may be processed with other telemetry and biometric information to provide real-time performance metrics while the athlete engages in the athletic event. | 11-06-2014 |
20140328516 | Gesture Recognition Method, An Apparatus and a Computer Program for the Same - The invention concerns a gesture recognition method for gesture-based interaction at an apparatus. The method comprises receiving one or more images of an object; creating feature images for the received one or more images; determining binary values for pixels in corresponding locations of said feature images and concatenating the binary values to form a binary string for said pixel; repeating the previous step for each corresponding pixel of said feature image to form a feature map and forming a histogram representation of the feature map. The invention also concerns an apparatus and a computer program. | 11-06-2014 |
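The per-pixel bit-concatenation step in the gesture-recognition abstract above (threshold each feature image to one bit, concatenate the bits into a binary string per pixel, then histogram the resulting feature map) can be sketched directly. This is an illustrative reconstruction; the thresholding rule and names are assumptions.

```python
def binary_feature_map(feature_images, threshold=0.0):
    """For each pixel location, threshold each feature image to one bit and
    concatenate the bits into an integer code; returns the code map."""
    h = len(feature_images[0])
    w = len(feature_images[0][0])
    codes = [[0] * w for _ in range(h)]
    for img in feature_images:
        for y in range(h):
            for x in range(w):
                bit = 1 if img[y][x] > threshold else 0
                codes[y][x] = (codes[y][x] << 1) | bit
    return codes

def histogram(codes, n_features):
    """Histogram of the binary codes over the whole feature map; this is
    the fixed-length representation handed to a classifier."""
    hist = [0] * (1 << n_features)
    for row in codes:
        for code in row:
            hist[code] += 1
    return hist
```

With `k` feature images the codes span `2**k` bins, so the histogram length is independent of image size, which is what makes it usable as a gesture descriptor.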
20140328517 | SYSTEM AND METHODS FOR IDENTIFICATION OF IMPLANTED MEDICAL DEVICES AND/OR DETECTION OF RETAINED SURGICAL FOREIGN OBJECTS FROM MEDICAL IMAGES - A computer-based system and method(s) are described which detects and identifies implanted medical devices (“IMDs”) and/or retained surgical foreign objects (“RSFOs”) from diagnostic medical images. In some embodiments, the system provides further identification—information on the particular IMD and/or RSFO that has been recognized. For example, the system could be configured to provide information feedback regarding the IMD, such as detailed manual information, safety alerts, recalls, assess its' structural integrity, and/or suggested courses of action in a specific clinical setting/troubleshooting. Embodiments are contemplated in which the system is configured to report possible 3D locations of RSFOs in the surgical field/images. | 11-06-2014 |
20140334666 | CALIBRATION FREE, MOTION TOLERANT EYE-GAZE DIRECTION DETECTOR WITH CONTEXTUALLY AWARE COMPUTER INTERACTION AND COMMUNICATION METHODS - Eye tracking systems and methods include such exemplary features as a display device, at least one image capture device and a processing device. The display device displays a user interface including one or more interface elements to a user. The at least one image capture device detects a user's gaze location relative to the display device. The processing device electronically analyzes the location of user elements within the user interface relative to the user's gaze location and dynamically determine whether to initiate the display of a zoom window. The dynamic determination of whether to initiate display of the zoom window may further include analysis of the number, size and density of user elements within the user interface relative to the user's gaze location, the application type associated with the user interface or at the user's gaze location, and/or the structure of eye movements relative to the user interface. | 11-13-2014 |
20140334667 | AERIAL IMAGE SEGMENTATION FOR REFINERIES - A system receives a two-dimensional digital image of an aerial industrial plant area. Based on requirements of image processing, the image is zoomed in to different sub-images, which are referred to as first images. The system identifies circular tanks, vegetation areas, process areas, and buildings in the first image. The system formulates a second digital image by concatenating the first images. The system creates one or more polygons of the regions segmented in the second digital image. Each polygon encompasses a tank area, a vegetation area, a process area, or a building area in the second digital image, which is a concatenated image of the individual regions. The system displays the second digital image on a computer display device. | 11-13-2014 |
20140334668 | SYSTEM AND METHOD FOR VISUAL MOTION BASED OBJECT SEGMENTATION AND TRACKING - The PMP Growth algorithm described herein provides for image tracking, segmentation and processing in environments where the camera system moves around a great deal, i.e., causing image jumps from one image frame to the next. It is also operative in systems where the objects themselves are making quick movements that alter their path. Attributes of the PMP Growth algorithm allow tracking systems using the PMP Growth algorithm to follow objects a long distance in a scene. This detection and tracking method is designed to track objects within a sequence of video image frames, and includes detecting keypoints in a current image frame of the video image frames, assigning local appearance features to the detected keypoints, establishing Point-Motion-Pairs between two successive image frames of the video image frames, and accumulating additional matches between image locations to form complete coherent motion object models of the objects being tracked. The segmentation aspect permits the discovery of different coherently moving regions in the images. | 11-13-2014 |
20140334669 | LOCATION INFORMATION DETERMINED FROM DEPTH CAMERA DATA - The subject disclosure is directed towards obtaining relatively precise location data with respect to a mapped space, based upon depth camera coordinates, for tracking a user or other object within the space. Also described are usage scenarios and user experiences that are based upon the current location data. | 11-13-2014 |
20140334670 | Three-Dimensional Object Modelling Fitting & Tracking - Described herein is a method and system for marker-less three-dimensional modelling, fitting and tracking of a skeletal representation of an object in a three-dimensional point cloud. In particular, it concerns the tracking of a human user skeletal representation with respect to time. The method comprises inputting a three-dimensional point cloud derived from a depth map; predetermining a set of control points representing the skeleton of the user, determining a start-up skeleton pose, obtaining an orthographic representation of the user 3D point cloud projected onto a grid by sampling the 3D point cloud with a predetermined static size, determining a set of curvature centres points approximating central axes of main parts of the user, determining the torso plane, and refining and/or defining the principal direction of the body. The method then comprises the step of performing iterative local and global fittings of the set of control points onto the user 3D point cloud and the associated data, such as the curvature centre points, using topological and geometric constraints so as to track the skeleton posture over time. Stabilising the skeleton pose, resolving ambiguities, and providing a suitable output are then the last steps of a preferred embodiment of the invention. | 11-13-2014 |
20140334671 | OBJECT RECOGNITION APPARATUS AND METHOD - An object recognition apparatus and method are disclosed. The object recognition apparatus includes: a skin color DB storing skin color information; a pattern light generator that irradiates a pattern light onto an object, which is a part of a human body; an image acquisition unit that receives the pattern light reflected from the object and generates a pattern light image of the object; and an operation unit that recognizes the object based on the skin color information and the pattern light image. | 11-13-2014 |
20140334672 | METHOD FOR DETECTING PEDESTRIANS BASED ON FAR INFRARED RAY CAMERA AT NIGHT - The present invention relates to a method for detecting a pedestrian based on a far infrared ray (IR) camera at night, which provides a method of receiving a thermal image of a pedestrian from a far IR camera, setting a candidate using a DoG filter having a robust characteristic against image noise, and accurately detecting the pedestrian using a classifier based on a behavioral characteristic of the pedestrian. | 11-13-2014 |
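The DoG (Difference-of-Gaussians) candidate-setting step named in the pedestrian-detection abstract above can be sketched in one dimension: smooth the thermal signal at two scales and keep positions where the difference exceeds a threshold. This is a minimal illustration; the kernel sizes and threshold are assumptions, not values from the patent.

```python
from math import exp

def gaussian_kernel(sigma, radius):
    """Normalised 1-D Gaussian kernel of half-width `radius`."""
    k = [exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """1-D convolution with edge replication at the borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += signal[idx] * kv
        out.append(acc)
    return out

def dog_candidates(signal, sigma1=1.0, sigma2=2.0, radius=4, thresh=0.1):
    """Difference-of-Gaussians response; positions whose response exceeds
    the threshold are kept as (blob-like, noise-robust) candidates."""
    g1 = convolve(signal, gaussian_kernel(sigma1, radius))
    g2 = convolve(signal, gaussian_kernel(sigma2, radius))
    dog = [a - b for a, b in zip(g1, g2)]
    return [i for i, v in enumerate(dog) if v > thresh]
```

Subtracting the wider Gaussian from the narrower one acts as a band-pass filter, which is why DoG is robust against the pixel-level noise typical of far-IR thermal imagery: both single-pixel noise and slow background gradients are suppressed.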
20140334673 | COMMODITY RECOGNITION APPARATUS AND METHOD FOR RECOGNIZING COMMODITY BY THE SAME - A commodity recognition apparatus, which recognizes, from an image captured by an image capturing section and stored in a storage section, a commodity imaged in the image, identifies, for each image captured by the image capturing section, an image capturing condition of light source for the image. The commodity recognition apparatus selects the image captured by the image capturing section under a given image capturing condition identified and displays the selected image on a display section. | 11-13-2014 |
20140334674 | METHOD OF DETECTING DATA RELATING TO THERMAL ENERGY RADIATED IN A SCENE USING THE INFRARED RADIATION IMAGE PROCESSING - A method of detecting data relating to thermal energy radiated in a scene uses infrared radiation image processing. A sequential plurality of infrared radiation images of the scene are received that consist of at least two sequential series of images simultaneously detected from respective different points of sight (TC1, TC2), arranged in a predetermined geometrical relationship with respect to each other. Each of the images includes a pixel array, each pixel having a value which is representative of a pixel's fraction of the infrared radiation intensity associated with the array of the image. Successive images of the at least two sequential series of images are processed in order to determine a change in at least one thermal parameter that meets predetermined alarm criteria. An event in the environment is detected based on the change determined in the thermal parameter. | 11-13-2014 |
20140334675 | APPARATUS AND METHOD FOR EXTRACTING MOVEMENT PATH OF MUTUAL GEOMETRIC RELATIONSHIP FIXED CAMERA GROUP - Provided is an apparatus and method for extracting a movement path, the movement path extracting apparatus including an image receiver to receive an image from a camera group in which a mutual positional relationship among cameras is fixed, a geographic coordinates receiver to receive geographic coordinates of a moving object on which the camera group is fixed, and a movement path extractor to extract a movement path of the camera group based on a direction and a position of a reference camera of the camera group using the image and the geographic coordinates. | 11-13-2014 |
20140334676 | MONITORING METHOD AND CAMERA - A method of monitoring a scene by a camera ( | 11-13-2014 |
20140334677 | MULTI-COMPUTER VISION RECOGNITION SYSTEM FOR LEVEL CROSSING OBSTACLE - A multi-computer vision recognition system for a level crossing obstacle is disclosed, comprising vision image systems, a position determination system, an obstacle determination resolution system and a power unit, where the vision image systems, which may operate around the clock, operate simultaneously; information from each of the individual vision image systems is computed by using the position determination system, and the computed result is then introduced to the obstacle determination resolution system for determination, thereby improving the obstacle recognition results and accuracy. | 11-13-2014 |
20140334678 | SYSTEM AND DERIVATION METHOD - A system according to one embodiment includes a memory, an aneurysm identification device, a distortion-degree evaluation device, and a rupture risk derivation device. The memory stores medical image data. The aneurysm identification device identifies an aneurysm in the medical image data. The distortion-degree evaluation device quantitatively evaluates a distortion degree of the aneurysm. The rupture risk derivation device derives a rupture risk of the aneurysm from a result of the evaluation. | 11-13-2014 |
20140334679 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM - An information processing apparatus that executes processing for creating an environmental map includes a camera that photographs an image, a self-position detecting unit that detects a position and a posture of the camera on the basis of the image, an image-recognition processing unit that detects an object from the image, a data constructing unit that is inputted with information concerning the position and the posture of the camera and information concerning the object and executes processing for creating or updating the environmental map, and a dictionary-data storing unit storing dictionary data in which object information is registered. The image-recognition processing unit executes processing for detecting an object from the image with reference to the dictionary data. The data constructing unit applies the three-dimensional shape data to the environmental map and executes object arrangement on the environmental map. | 11-13-2014 |
20140334680 | IMAGE PROCESSING APPARATUS - Image processing apparatus | 11-13-2014 |
20140334681 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - Provided is an image processing apparatus including a candidate detection unit configured to detect each of candidate images serving as candidates for a main subject for a plurality of frames of image data, and a main subject determination unit configured to obtain a degree of stable presence of the candidate images detected by the candidate detection unit within the image data spanning the plurality of frames and to determine a main subject among the candidate images using the degree of stable presence. | 11-13-2014 |
20140334682 | MONITORING DEVICE USING SELECTIVE ATTENTION MODEL AND METHOD FOR MONITORING SAME - A monitoring device is provided, which includes an inputter configured to receive an input of a plurality of images captured at separate positions and a plurality of sound sources heard at separate positions, a saliency map generator configured to generate a plurality of mono saliency maps for the plurality of images and to generate a dynamic saliency map using the plurality of mono saliency maps generated, a position determinator configured to determine the positions of the sound sources through analysis of the plurality of sound sources, a scan path recognizer configured to generate scan paths of the plurality of images based on the generated dynamic saliency map and the determined positions of the sound sources, and an outputter configured to output the generated scan paths. | 11-13-2014 |
20140334683 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM - Provided is an image processing apparatus including a distance information acquisition unit that acquires distance information on a distance up to an object imaged by an image sensor, a pixel value information acquisition unit that acquires pixel value information of an image corresponding to the object, and a tracking unit that tracks the object that moves, based on the acquired distance information and the acquired pixel value information. | 11-13-2014 |
20140341421 | Method for Detecting Persons Using 1D Depths and 2D Texture - A method detects an object in a scene by first determining an active set of window positions from depth data. Specifically, the object can be a person. The depth data are acquired by a depth sensor. For each window position, perform the following steps. Assign a window size based on the depth data. Select a current window from the active set of window positions. Extract a joint feature from the depth data and texture data for the current window, wherein the texture data are acquired by a camera. Classify the joint feature to detect the object. The classifier is trained with joint training features extracted from training data including training depth data and training texture data acquired by the sensor and camera respectively. Finally, the active set of window positions is updated before processing the next current window. | 11-20-2014 |
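The depth-based window-size assignment above follows from simple pinhole geometry: a person of roughly fixed physical height projects to an image height inversely proportional to depth. A hedged sketch of that step (the height, focal length, and depth-gating values are illustrative assumptions):

```python
def window_size(depth_m, person_height_m=1.7, focal_px=500.0):
    """Pinhole-model window height in pixels for a person at a given
    depth: closer people occupy taller windows."""
    return person_height_m * focal_px / depth_m

def active_windows(depth_map, min_depth=0.5, max_depth=20.0):
    """Keep only window positions whose depth is plausible for a person,
    and assign each a depth-dependent window size.

    depth_map: {(x, y): depth_in_metres} for candidate window positions.
    """
    out = []
    for (x, y), d in depth_map.items():
        if min_depth <= d <= max_depth:
            out.append((x, y, window_size(d)))
    return out
```

Fixing the window size from depth removes the usual multi-scale sliding-window sweep: each position is classified at exactly one scale, which is where the speed advantage of depth-gated detection comes from.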
20140341422 | Systems and Methods for Facial Property Identification - Systems and methods are provided for facial property identification. For example, an image sample is acquired; a first effective area image of a face is acquired in the image sample; first textural features of the first effective area image are extracted; and the first textural features of the first effective area image are classified by race, gender and age using a race classifier, a gender classifier and an age classifier successively to obtain a race property, a gender property and an age property of the face. | 11-20-2014 |
20140341423 | METHOD FOR TRACKING AND FORECASTING MARINE ICE BODIES - A near-real-time tracking and integrated forecasting of marine ice bodies observable on satellite imagery. | 11-20-2014 |
20140341424 | TOOL TRACKING DURING SURGICAL PROCEDURES - A system and method for tracking a surgical implement in a patient can have an imaging system configured to obtain sequential images of the patient, and an image recognition system coupled to the imaging system and configured to identify the surgical implement in individual images. The image recognition system can be configured to identify the surgical implement relative to the patient in one of the images based, at least in part, on an identification of the surgical implement in at least one preceding one of the sequential images, and a probabilistic analysis of individual sections of the one of the images, the sections being selected by the image recognition system based on a position of the surgical implement in the patient as identified in the at least one preceding one of the images. | 11-20-2014 |
20140341425 | PROVIDING VISUAL EFFECTS FOR IMAGES - Implementations relate to providing visual effects for images. In some implementations, a method includes detecting one or more objects in an image. The method identifies one or more important objects of the objects, where the important objects are determined to have an importance measurement satisfying a predetermined threshold indicating their importance to a viewer of the image. The method determines an application of a visual image effect to the image based on the important objects. | 11-20-2014 |
20140341426 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND MEDICAL IMAGING DEVICE - An image processing apparatus includes a storage unit, a slice image generating unit, a region extracting unit and a tubular structure extracting unit. The storage unit stores a volume image of a three-dimensional region of a subject. The slice image generating unit generates a plurality of slice images corresponding to a plurality of slices each substantially perpendicular to a predetermined reference axis from the volume image. The region extracting unit extracts a target region from the plurality of slice images. The tubular structure extracting unit detects an end point from the extracted region, and extracts a tubular structure based on the end point. | 11-20-2014 |
20140341427 | SURVEILLANCE CAMERA SYSTEM AND SURVEILLANCE CAMERA CONTROL APPARATUS - A surveillance system includes an acquisition unit configured to acquire motion information of an object detected from an image captured by an image capturing unit, an association unit configured to associate a recognition processing result of the object with the motion information, and a determination unit configured to determine, from among a plurality of objects detected from the image captured by the image capturing unit, an object to be subjected to recognition processing based on the recognition result associated with the motion information. Thus, even when a plurality of abnormal regions is simultaneously present in a region to be monitored, face recognition can be efficiently performed. | 11-20-2014 |
20140341428 | APPARATUS AND METHOD FOR RECOGNIZING HUMAN BODY IN HYBRID MANNER - An apparatus and a method for recognizing a human body in a hybrid manner are provided. The method includes calculating body information used for recognizing a human body from an input image, detecting a region of the human body in a learning-based human body recognition manner by using the calculated body information, and tracing a movement of the detected region of the human body in a modeling-based human body recognition manner. Thereby, it is possible to quickly perform more accurate and precise recognition of the human body. | 11-20-2014 |
20140341429 | COMBINING MULTI-SENSORY INPUTS FOR DIGITAL ANIMATION - Animating digital characters based on motion captured performances, including: receiving sensory data collected using a variety of collection techniques including optical video, electro-oculography, and at least one of optical, infrared, and inertial motion capture; and managing and combining the collected sensory data to aid cleaning, tracking, labeling, and re-targeting processes. Keywords include Optical Video Data and Inertial Motion Capture. | 11-20-2014 |
20140341430 | Method and Device for Detecting Face, and Non-Transitory Computer-Readable Recording Medium for Executing the Method - In the present disclosure, a plurality of frames of input images sequentially received for a predetermined time interval is obtained, and a face detecting operation is performed on a first frame if a full detecting mode is implemented. If a face is detected from a specific region of the first frame during the face detecting operation, a face tracking mode is implemented, a second frame is divided to produce the divided input image portions of the second frame, and the face tracking operation is performed on a surrounding region of the specific region of the divided input image portions of the second frame that corresponds to the specific region in the first frame. If the face is not detected in the face tracking mode, a partial detecting mode is implemented, and the face detecting operation is performed on image portions resized on divided input image portions of a third frame to which a specific region of the third frame corresponding to the specific region of the first frame belongs. | 11-20-2014 |
20140341431 | PASSABLE SECURITY INSPECTION SYSTEM FOR PERSON - The present invention discloses a walk-through millimetre-wave body security inspection system, wherein a person to be inspected passes through an inspection passage for security inspection. The walk-through millimetre-wave body security inspection system provided in accordance with the present invention can perform a dynamic full-body scan of the person under inspection and obtain millimetre-wave images and optical images of the body, thereby enabling the detection of prohibited articles hidden within the person's clothing and an automatic alarm thereof. | 11-20-2014 |
20140341432 | OBJECT RECOGNITION DEVICE AND VEHICLE CONTROLLER - An object recognition device includes a sensor ( | 11-20-2014 |
20140341433 | METHOD FOR FINDING PATHS IN VIDEO - A system for detecting behavior of a target may include: a target detection engine, adapted to detect at least one target from one or more objects from a video surveillance system recording a scene; a path builder, adapted to create at least one mature path model from analysis of the behavior of a plurality of targets in the scene, wherein the at least one mature path model includes a model of expected target behavior with respect to the at least one path model; and a target behavior analyzer, adapted to analyze and identify target behavior with respect to the at least one mature path model. The system may further include an alert generator, adapted to generate an alert based on the identified behavior. | 11-20-2014 |
20140348377 | FIELD OF VISION CAPTURE - A system includes at least one sensor, and a computing device coupled to the at least one sensor. The computing device includes a processor, and a computer-readable storage media having computer-executable instructions embodied thereon. When executed by at least one processor, the computer-executable instructions cause the processor to identify a dominant eye of the occupant, determine a first position associated with the dominant eye of the occupant, determine a second position associated with the occupant, and determine a first line-of-sight by extending a line between the first position and the second position. | 11-27-2014 |
20140348378 | METHOD AND APPARATUS FOR DETECTING TRAFFIC VIDEO INFORMATION - The present invention provides a method and an apparatus for detecting traffic video information. The method includes: acquiring a traffic video stream; determining color features of each frame of image in the traffic video stream; calculating the inter-frame distance between adjacent frames according to the color features; calculating the boundary of an image clustered frames' group according to the inter-frame distance by adopting an image clustering evaluation standard in RGB space and an image clustering evaluation standard in YUV space respectively; and determining a final boundary of the image clustered frames' group according to the boundaries of the image clustered frames' group in RGB space and YUV space. By using the present invention, the stability of detection results in different environments may be improved. | 11-27-2014 |
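The inter-frame distance computation described in the abstract above can be sketched roughly as follows (an illustrative single-color-space simplification in Python; the histogram binning, L1 distance, and boundary threshold are assumptions, not the patent's actual RGB/YUV clustering evaluation standards):

```python
def color_histogram(frame, bins=4, depth=256):
    """Coarse per-channel color-feature vector of an RGB frame (list of (r, g, b) pixels)."""
    hist = [0] * (bins * 3)
    step = depth // bins
    for r, g, b in frame:
        hist[r // step] += 1
        hist[bins + g // step] += 1
        hist[2 * bins + b // step] += 1
    total = len(frame) * 3
    return [h / total for h in hist]

def inter_frame_distance(frame_a, frame_b, bins=4):
    """L1 distance between the color features of two adjacent frames."""
    ha, hb = color_histogram(frame_a, bins), color_histogram(frame_b, bins)
    return sum(abs(a - b) for a, b in zip(ha, hb))

def cluster_boundaries(frames, threshold):
    """Frame indices where the inter-frame distance jumps: candidate group boundaries."""
    return [i for i in range(1, len(frames))
            if inter_frame_distance(frames[i - 1], frames[i]) > threshold]
```

In the patented method this boundary detection would be run in both RGB and YUV space, and the two candidate boundaries reconciled into a final one.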
20140348379 | Position determination of an object by sensing a position pattern by an optical sensor - An apparatus for determining a position of an object relative to a representation of an image to be represented includes a position pattern generator for generating a position pattern subdivided into a plurality of pattern portions, each of the pattern portions having an unambiguous bit pattern of a plurality of bit patterns, and the bit patterns being Gray-coded in a generalized manner; a combination unit for combining the position pattern with the at least one image to be represented and for providing a corresponding combined image; an optical sensor for optically sensing an image section of the combined image, being correlated with the object position; a filter for extracting at least one bit pattern corresponding to a pattern portion of the position pattern, from the image section and for providing at least one corresponding extracted pattern portion; and a determiner for determining the object position based on the extracted bit pattern. A method for determining the position of an object is also disclosed. | 11-27-2014 |
20140348380 | METHOD AND APPARATUS FOR TRACKING OBJECTS - A method for tracking an object in an object tracking apparatus includes receiving an image frame of an image; detecting a target, a depth analogous obstacle and an appearance analogous obstacle; tracking the target, the depth analogous obstacle and the appearance analogous obstacle; and, when the detected target overlaps the depth analogous obstacle, comparing the variation of the tracking score of the target with that of the depth analogous obstacle. Further, the method includes continuously tracking the target when the variation of the tracking score of the target is below that of the depth analogous obstacle, and processing a next frame and re-detecting the target when the variation of the tracking score of the target is above that of the depth analogous obstacle. | 11-27-2014 |
20140348381 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus includes a detection unit which detects an object which is included in an image; and a determination unit which determines a positional relationship between a detected object and a protective barrier region in a depth direction of the image based on a feature amount relating to at least one of the detected object and the protective barrier region, when the detected object and the protective barrier region overlap with each other in the image. | 11-27-2014 |
20140348382 | PEOPLE COUNTING DEVICE AND PEOPLE TRAJECTORY ANALYSIS DEVICE - The people counting device includes an image acquisition unit to acquire an image from an imaging device, a head coordinate detection unit to detect a head coordinate of a target person from the image, a foot coordinate estimation unit to estimate a foot coordinate of the target person from the detected head coordinate, an individual region detection unit to perform region segmentation of the image and to give an attribute to each of regions, a foot coordinate correction unit to determine whether the target person overlaps another person based on the given attribute and to correct the foot coordinate of the target person estimated by the foot coordinate estimation unit when the persons are determined to overlap each other, a foot coordinate region inside/outside determination unit to determine whether the foot coordinate exists in a detection region set in the image, and a people counting unit to count foot coordinates. | 11-27-2014 |
20140348383 | OBJECT DETECTION APPARATUS - An object detection apparatus for detecting a target object in an input image. The apparatus includes a storage storing, for each of a plurality of part areas forming an area subject to image recognition processing, image recognition dictionaries used to recognize a target object and typed according to variations in appearance of a part of the target object to be detected in the part area. A part score calculator calculates, for each of the part areas, a part score indicative of a degree of similarity between the part area and each of at least some of the image recognition dictionaries. An integrated score calculator calculates an integrated score that is a weighted sum of the part scores for the respective part areas. A determiner determines, on the basis of the integrated score, whether or not the target object is present in the subject area. | 11-27-2014 |
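The integrated-score detection scheme above can be sketched as follows (a minimal illustration in Python; the dot-product similarity, per-part maximum over typed dictionaries, and fixed weights are assumptions about how such a detector might be realized, not the patent's claimed computation):

```python
def part_score(features, dictionary):
    """Similarity of a part area's feature vector to one recognition dictionary."""
    return sum(f * d for f, d in zip(features, dictionary))

def best_part_score(features, dictionaries):
    """Best similarity over the dictionaries typed by appearance variation for one part area."""
    return max(part_score(features, d) for d in dictionaries)

def integrated_score(part_features, part_dictionaries, weights):
    """Weighted sum of the per-part scores over all part areas."""
    return sum(w * best_part_score(f, ds)
               for w, f, ds in zip(weights, part_features, part_dictionaries))

def target_present(part_features, part_dictionaries, weights, threshold):
    """Decide presence of the target object from the integrated score."""
    return integrated_score(part_features, part_dictionaries, weights) >= threshold
```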
20140348384 | System for Managing Locations of Items - This document discloses a solution for maintaining information on locations of consumer products in a location tracking area. Customers are provided with personal electronic devices comprising a camera sensor arranged to capture images of consumer products provided in the location tracking area. The locations of the customers are also tracked in the location tracking area. The locations of the products may be maintained by monitoring locations where the personal electronic devices capture the images of the consumer products. | 11-27-2014 |
20140348385 | COMPUTER VISION COLLISION AVOIDANCE IN DRILLING OPERATIONS - A system and method for automatically preventing a collision between objects is described. One or more images of a working space may be collected, and a first object may be identified based on the one or more images. Three-dimensional coordinates of the first object may be determined, and a virtual boundary enclosing the identified first object in a three-dimensional coordinate system may be generated based on the three-dimensional coordinates of the first object, wherein the virtual boundary specifies a volume in the working space that a second object in the working space should not occupy. The coordinates in the three-dimensional coordinate system corresponding to the generated virtual boundary may be transmitted to a second processor, and the second processor may control the second object to perform an operation in the working space that includes the first object without contacting the virtual boundary of the first object. | 11-27-2014 |
20140348386 | METHOD AND A SYSTEM FOR OCCUPANCY LOCATION - A method for occupancy location includes capturing a spatially coded image of a scene, identifying a region of interest in the image, generating a pixel plausibility index for each image pixel in the region of interest, and classifying pixels as relating to occupancy responsive to the pixel plausibility index. | 11-27-2014 |
20140348387 | LESION CLASSIFICATION APPARATUS, AND METHOD OF MODIFYING LESION CLASSIFICATION DATA - A method of and apparatus for changing lesion classification data, the method including determining whether at least one mass is included in an image of an object, determining whether the at least one mass corresponds to a lesion by using first data including at least one first information, selecting a false negative (FN) mass which has been determined as not corresponding to the lesion among the at least one mass, based on a first input, and changing the first data to second data by using second information of the selected FN mass. | 11-27-2014 |
20140348388 | METHOD FOR VERIFYING A SURVEYING INSTRUMENT'S EXTERNAL ORIENTATION - Verifying a surveying instrument's external orientation during a measurement process, comprising directing the imaging means onto a reference object and detecting a first photographing direction of the imaging means, taking a first image of the reference object in the first photographing direction, memorizing the first image and the first photographing direction as being indicative of the surveying instrument's external orientation, re-directing the imaging means onto the reference object and detecting a second photographing direction of the imaging means, taking a second image of the reference object in the second photographing direction, comparing by image processing a first imaged position of the reference object in the first image with a second imaged position in the second image, as well as the first with the second photographing direction, and verifying the surveying instrument's external orientation based on disparities between the first and the second imaged position and/or between the first and the second photographing direction. | 11-27-2014 |
20140355819 | DEVICE AND METHOD FOR ALLOCATING DATA BASED ON AN ARRANGEMENT OF ELEMENTS IN AN IMAGE - A host device may include an imaging unit configured to capture an image of a guest device, and a communication unit configured to communicate with the guest device. The host device may include circuitry configured to identify, in the image of the guest device, identification information corresponding to the guest device, the identification information being displayed on a screen included on the guest device. The circuitry may calculate, based on the identification of the identification information in the image, an arrangement position of the guest device. The circuitry may assign, based on the calculated arrangement position, assigned data to the guest device. The circuitry may transmit one or more of the calculated arrangement position and information associated with the assigned data to the guest device. | 12-04-2014 |
20140355820 | ESTIMATING A POSE OF A CAMERA FOR VOLUME ESTIMATION - What is disclosed is a system and method for estimating a position (or pose) of a camera relative to a surface upon which an object rests in an image captured by that camera, such that a volume can be estimated for that object. In one embodiment, a matrix K is determined from parameters intrinsic to a camera used to capture the image. An amount of a camera translation T is determined with respect to a set of real-world coordinates in (X,Y,Z). An amount of a camera rotation matrix R is determined from camera angles measured with respect to the real-world coordinates. A distance Z | 12-04-2014 |
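The K, R, T quantities named in this abstract are the standard pinhole-camera decomposition, in which a world point X projects to the image as x = K(RX + T) followed by division by depth. A minimal sketch of that projection (standard textbook geometry, not the patent's volume-estimation procedure):

```python
def mat_vec(m, v):
    """Multiply a square matrix (list of rows) by a vector."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def project(K, R, T, X):
    """Pinhole projection: camera-frame point R*X + T, imaged through intrinsics K,
    then normalized by depth to give pixel coordinates."""
    cam = [c + t for c, t in zip(mat_vec(R, X), T)]
    x = mat_vec(K, cam)
    return (x[0] / x[2], x[1] / x[2])
```

With K containing the focal length and principal point, recovering R and T relative to the support surface is what lets metric distances, and hence a volume, be read off the image.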
20140355821 | Object Landmark Detection in Images - Techniques are provided to improve the performance and accuracy of landmark point detection using a Constrained Local Model. The accuracy of feature filters used by the model may be improved by supplying positive and negative sets of image data from training image regions of varying shapes and sizes to a linear support vector machine training algorithm. The size and shape of regions within which a feature filter is to be applied may be determined based on a variance in training image data for a landmark point with which the feature filter is associated. A sample image may be normalized and a confidence map generated for each landmark point by applying the feature filters as a convolution on the normalized image. A vector flow map may be pre-computed to improve the efficiency with which a mean landmark point is adjusted toward a corresponding landmark point in a sample image. | 12-04-2014 |
20140355822 | APPARATUS AND METHOD FOR MATCHING PARKING-LOT OUTLINE - An apparatus and a method for tracing a parking-lot are provided that include a controller configured to recognize at least one parking-lot from a previous image frame capturing the surroundings of a vehicle and extract a template according to a type of a parking-lot line of the recognized parking-lot. In addition, the controller is configured to generate a template transformed based on position information of the parking-lot and calculate similarity by comparing the template generated from the previous image frame with a parking-lot line recognized from a current image frame. A position of the parking-lot is determined according to the calculated similarity, and the controller is configured to correct the template based on information of a parking-lot line extracted from the determined position. | 12-04-2014 |
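The similarity calculation between a stored template and a candidate parking-lot line could be realized with normalized cross-correlation, a common template-matching measure (an illustrative assumption; the patent does not specify this particular metric):

```python
def template_similarity(template, patch):
    """Normalized cross-correlation between a template and an equally-sized image patch,
    both given as flat lists of intensities. Returns a value in [-1, 1]."""
    mt = sum(template) / len(template)
    mp = sum(patch) / len(patch)
    num = sum((t - mt) * (p - mp) for t, p in zip(template, patch))
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    dp = sum((p - mp) ** 2 for p in patch) ** 0.5
    return num / (dt * dp) if dt and dp else 0.0
```

A score near 1 indicates the transformed template lines up with the parking-lot line in the current frame; the best-scoring position would then be taken as the lot position.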
20140355823 | VIDEO SEARCH APPARATUS AND METHOD - The present invention relates to a video search apparatus and method, and more particularly, to a video search apparatus and method which can be used to search video data collected by a video capture apparatus, such as a closed circuit television (CCTV), for information desired by a user. | 12-04-2014 |
20140355824 | SPECTRAL IMAGE DATA PROCESSING APPARATUS AND TWO-DIMENSIONAL SPECTRAL APPARATUS - A spectral image data processing apparatus which conducts multivariate analysis on spectral image data of a sample, including: a region setting unit configured to set a region of interest for performing multivariate analysis in a sample in which a difference needs to be distinguished, the region of interest being set in accordance with spectral image data of the sample; and an analysis unit configured to perform the multivariate analysis with spectral image data inside the region of interest and spectral image data of region of non-interest which is a region other than the region of interest being distinguished from each other. | 12-04-2014 |
20140355825 | METHOD AND APPARATUS FOR ESTIMATING POSE - A method and apparatus for estimating a pose of a user using a depth image is provided, the method including recognizing the pose of the user from the depth image and tracking the pose of the user using a user model, the two being performed exclusively of one another to enhance precision of estimating the pose. | 12-04-2014 |
20140355826 | DETECTION DEVICE, LEARNING DEVICE, DETECTION METHOD, LEARNING METHOD, AND INFORMATION STORAGE DEVICE - A detection device includes an image acquisition section that acquires an image that has been captured by an imaging section, and includes an image of an object, a distance information acquisition section that acquires distance information based on a distance from the imaging section to the object when the imaging section has captured the image, a feature quantity calculation section that calculates a feature quantity from the acquired image, the feature quantity relating to at least one of a color, a brightness, a color difference, and a spectrum of the object, a learning feature quantity storage section that stores a learning feature quantity calculated by a learning process based on the distance from the imaging section to the object, and a detection section that detects a target area from the image based on the learning feature quantity, the distance information, and the feature quantity. | 12-04-2014 |
20140355827 | AMBIENT ENVIRONMENT DETERMINATION APPARATUS - In an ambient environment determination apparatus, an imager obtains a picture capturing an area ahead of a vehicle, and a street lamp is detected for each detection frame unit of the picture. Then, an urban area determination process is performed that determines whether or not the ambient environment of the vehicle is an urban area based on both of a street lamp detection result of a current detection frame unit and a street lamp detection result of a past detection frame unit of the picture. Further, in a period after the vehicle turns right or left, determination responsiveness with regard to whether the ambient environment is an urban area or a non-urban area is enhanced relative to that in a period other than the period after the right or left turn. | 12-04-2014 |
20140355828 | SETTING APPARATUS, SETTING METHOD, AND STORAGE MEDIUM - A setting apparatus which sets a detection region for a detection process of detecting a change of an image within a detection region corresponding to an object of detection inputs a first image in which the object of detection is present and a second image in which the object of detection is not present and determines the detection region from the first image and the second image such that the detection process may be performed on a detection region of a third image. | 12-04-2014 |
20140355829 | PEOPLE DETECTION APPARATUS AND METHOD AND PEOPLE COUNTING APPARATUS AND METHOD - According to an aspect of the present invention, there is provided a people counting apparatus including: a reception unit which receives a video of an area including an entrance captured by a video capture device; a line setting unit which sets an inline at the entrance and sets an outline such that a specific region is formed on a side of the inline; a detection unit which detects moving objects in the video using information differences between frames of the received video and detects human moving objects among the detected moving objects; a tracking unit which tracks the movement of each of the detected moving objects; and a counting unit which determines whether each of the moving objects passed the inline and the outline based on the tracked movement of each of the moving objects and counts the number of people based on the determination result. | 12-04-2014 |
20140355830 | METHOD AND APPARATUS FOR PROTECTING EYESIGHT - A method and an apparatus for controlling a display in order to secure an appropriate viewing distance between a digital device and a user who is viewing the digital device is provided. Accordingly, the method determines whether an object exists within a hazardous viewing distance using a 3D camera function provided in the digital device. If it is determined that an object exists within the hazardous viewing distance, the digital device detects a face or eyes from 2D images photographed by the camera. Next, the direction of the face is determined on the basis of the detected results, and it is determined whether a user is viewing a display screen of the digital device based on the determination. If it is determined that a user is viewing a display screen of a digital device, the digital device generates a warning that the user is positioned within a hazardous viewing distance. | 12-04-2014 |
20140355831 | APPARATUS, METHOD AND COMPUTER-READABLE RECORDING MEDIUM FOR DETECTING MOVING OBJECT USING DEPTH MAP - An apparatus, a method and a non-transitory computer-readable recording medium for detecting a moving object using a depth map is provided. The apparatus includes a segment image generator unit that generates a segment image to distinguish each object using a depth image of a current input frame; a background image generator unit that generates a current background image by applying a moving average method to the depth image and a background image of a previous input frame; and a moving mask generator unit that generates a moving mask by comparing the depth image with the current background image to thereby find moving parts in the depth image. | 12-04-2014 |
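The moving-average background update and moving-mask comparison described above can be sketched as follows (depth images flattened to lists of values; the update weight and difference threshold are illustrative assumptions):

```python
def update_background(background, depth, alpha=0.05):
    """Moving-average update: blend the current depth image into the running background."""
    return [(1 - alpha) * b + alpha * d for b, d in zip(background, depth)]

def moving_mask(depth, background, threshold):
    """1 where the current depth departs from the background (a moving part), 0 elsewhere."""
    return [1 if abs(d - b) > threshold else 0 for d, b in zip(depth, background)]
```

Because the background adapts slowly (small alpha), a person moving through the scene produces large depth differences and appears in the mask, while gradual scene changes are absorbed into the background.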
20140355832 | Method and Device for Following an Object in a Sequence of at Least Two Images - The present invention relates to a method for following an object in a sequence of at least two images termed previous and current. The said method comprises a step for forming a first set E | 12-04-2014 |
20140355833 | Image Processing Apparatus and Method - A method and apparatus for localizing an area in relative movement and for determining the speed and direction thereof in real time is disclosed. Each pixel of an image is smoothed using its own time constant. A binary value corresponding to the existence of a significant variation in the amplitude of the smoothed pixel from the prior frame, and the amplitude of the variation, are determined, and the time constant for the pixel is updated. For each particular pixel, two matrices are formed that include a subset of the pixels spatially related to the particular pixel. The first matrix contains the binary values of the subset of pixels. The second matrix contains the amplitude of the variation of the subset of pixels. In the first matrix, it is determined whether the pixels along an oriented direction relative to the particular pixel have binary values representative of significant variation, and, for such pixels, it is determined in the second matrix whether the amplitude of these pixels varies in a known manner indicating movement in the oriented direction. In each of several domains, a histogram of the values in the first and second matrices falling in such domain is formed. Using the histograms, it is determined whether there is an area having the characteristics of the particular domain. The domains include luminance, hue, saturation, speed (V), oriented direction (D1), time constant (CO), first axis (x(m)), and second axis (y(m)). | 12-04-2014 |
20140355834 | Object-Tracking Systems and Methods - A system and method for tracking, identifying, and labeling objects or features of interest, such as follicular units is provided. In some embodiments, tracking is accomplished using unique signature of the follicular unit and image stabilization techniques. According to some aspects pixel data of a region of interest in a first image is compared to pixel data of the regions of interest in a second image, and based on a result of the comparison of pixel data in the region of interest in the first and second images and the signature of the follicular unit, locating the follicular unit in the second image. In some embodiments the follicular unit is searched for in the direction of a motion vector. | 12-04-2014 |
20140363043 | AUTOMATED VISION-BASED CLUTTER DETECTOR AND NOTIFIER - A system and method of monitoring a customer space including obtaining visual data comprising image frames of the customer space over a period of time, defining a region of interest within the customer space, the region of interest corresponding to a portion of the customer space in which customers relocate objects, monitoring the region of interest for at least one predefined clutter condition, and generating a notification when the at least one predefined clutter condition is detected. | 12-11-2014 |
20140363044 | Efficient Machine-Readable Object Detection and Tracking - A method to improve the efficiency of the detection and tracking of machine-readable objects is disclosed. The properties of image frames may be pre-evaluated to determine whether a machine-readable object, even if present in the image frames, would be likely to be detected. After it is determined that one or more image frames have properties that may enable the detection of a machine-readable object, image data may be evaluated to detect the machine-readable object. When a machine-readable object is detected, the location of the machine-readable object in a subsequent frame may be determined based on a translation metric between the image frame in which the object was identified and the subsequent frame rather than a detection of the object in the subsequent frame. The translation metric may be identified based on an evaluation of image data and/or motion sensor data associated with the image frames. | 12-11-2014 |
20140363045 | PRECIPITATION REMOVAL FOR VISION-BASED PARKING MANAGEMENT SYSTEMS - Methods and systems receive a series of images and compare at least two of the images in the series of images to locate items that are in different positions to identify moving items. Such methods and systems further calculate a measure of the moving items within the series of images. Additionally, such methods and systems perform a continuously variable image correction to remove the moving items from the images to produce a series of corrected images. This “continuously variable image correction” increases the amount of image correction for a relatively higher measure of the moving items and decreases the amount of image correction for a relatively lower measure of the moving items, and does so continuously as the measure of the moving items changes within the series of images. | 12-11-2014 |
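The "continuously variable image correction" above can be sketched as follows (a deliberately simple Python illustration on flattened grayscale frames: the motion measure is the changed-pixel fraction between frames, and correction blends toward a temporal median in proportion to that measure; both choices are assumptions, not the patent's specific algorithm):

```python
def motion_measure(prev, curr, threshold):
    """Fraction of pixels that changed between two frames (a proxy for precipitation density)."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > threshold)
    return changed / len(curr)

def correct_frame(curr, temporal_median, measure):
    """Blend toward the temporal median in proportion to the motion measure:
    more moving items -> stronger correction, fewer -> lighter correction."""
    return [(1 - measure) * c + measure * m for c, m in zip(curr, temporal_median)]
```

Because `measure` is recomputed per frame, the correction strength rises and falls continuously as rain or snow intensity changes within the series of images.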
20140363046 | PRIORITIZATION OF FACIAL RECOGNITION MATCHES BASED ON LIKELY ROUTE - Prioritizing facial recognition matches includes obtaining identification information and a facial image for each visitor entering the monitored environment, the monitored environment having a plurality of cameras at known locations including entry and exit points. Itineraries of the visitors are obtained, and based on the entry points and the itineraries of the visitors, likely routes of the visitors through the monitored environment are determined. Responsive to receiving an image captured by a first camera at a first location at an image capture time, the database records are sorted for facial recognition matching with the image from the first camera based on the visitors whose routes are likely to place them in proximity to the first camera at the time of image capture. | 12-11-2014 |
20140363047 | ESTIMATOR TRAINING METHOD AND POSE ESTIMATING METHOD USING DEPTH IMAGE - An estimator training method and a pose estimating method using a depth image are disclosed, in which the estimator training method may train an estimator configured to estimate a pose of an object, based on an association between synthetic data and real data, and the pose estimating method may estimate the pose of the object using the trained estimator. | 12-11-2014 |
20140363048 | INTERACTIVE AND AUTOMATIC 3-D OBJECT SCANNING METHOD FOR THE PURPOSE OF DATABASE CREATION - Systems, methods, and devices are described for capturing compact representations of three-dimensional objects suitable for offline object detection, and storing the compact representations as object representation in a database. One embodiment may include capturing frames of a scene, identifying points of interest from different key frames of the scene, using the points of interest to create associated three-dimensional key points, and storing key points associated with the object as an object representation in an object detection database. | 12-11-2014 |
20140363049 | METHOD OF ESTIMATING OPTICAL FLOW ON THE BASIS OF AN ASYNCHRONOUS LIGHT SENSOR - A computer receives asynchronous information originating from a light sensor ( | 12-11-2014 |
20140363050 | THREE-DIMENSIONAL OBJECT DETECTION DEVICE - A three-dimensional object detection device includes an image capturing unit, an image conversion unit, a three-dimensional object detection unit, a light source detection unit, a degree-of-certainty assessment unit and a control unit. The degree-of-certainty assessment unit assesses a degree of certainty that a light source is headlights of another vehicle two lanes over. The control unit sets a threshold value so that the three-dimensional object is more difficult to detect in a forward area of a line connecting the light source and the image capturing unit in the detection frame when the degree of certainty is at a predetermined value or higher, and sets a threshold value so that the three-dimensional object is more difficult to detect in progression from a center side toward front or rear ends of the detection frame when the degree of certainty is less than the predetermined value. | 12-11-2014 |
20140369552 | Method of Establishing Adjustable-Block Background Model for Detecting Real-Time Image Object - A method of establishing an adjustable-block background model for detecting a real-time image object is provided to obtain a surveillance image by a surveillance apparatus. The surveillance image has a plurality of pixels. The method includes steps of: segmenting the surveillance image into a plurality of blocks each having a first pixel and at least one second pixel; defining the first pixel as a major color and comparing the first pixel with the at least one second pixel to determine a number and color information of the major color in the block; merging the blocks having the same major color into a large block to obtain a block background model; and performing image comparison to identify a moving object image. With the establishment of the block background model, a required memory space is effectively reduced while outstanding image display performance is still maintained. | 12-18-2014 |
20140369553 | METHOD FOR TRIGGERING SIGNAL AND IN-VEHICLE ELECTRONIC APPARATUS - A signal triggering method and an in-vehicle electronic apparatus are provided. A plurality of images of a driver is continuously captured by using an image capturing unit, and a face motion information or an eyes open-shut information is obtained by detecting a face motion or an eyes open/shut action of the driver through the images. When the face motion information or the eyes open-shut information matches a threshold information, a specific signal is triggered and transmitted to a specific device. | 12-18-2014 |
20140369554 | FACE BEAUTIFICATION SYSTEM AND METHOD OF USE THEREOF - A face beautification system and a method of face beautification. One embodiment of the face beautification system includes: (1) a coarse feature detector configured to generate an approximation of facial features in an image, (2) an edge-preserving filter configured to reduce distortions in the approximation, and (3) a feature enhancer operable to selectively filter a facial feature from said approximation and carry out an enhancement. | 12-18-2014 |
20140369555 | TRACKER ASSISTED IMAGE CAPTURE - A method for picture processing is described. A first tracking area is obtained. A second tracking area is also obtained. The method includes beginning to track the first tracking area and the second tracking area. Picture processing is performed once a portion of the first tracking area overlapping the second tracking area passes a threshold. | 12-18-2014 |
20140369556 | APPLYING SUPER RESOLUTION FOR QUALITY IMPROVEMENT OF OCR PROCESSING - Systems and methods for improving the quality of recognition of the object based on a series of frame images of objects are described herein. A plurality of images depicting the same object are received. A first image is selected from the plurality of images. The first image may be an image with the highest quality from plurality of images. For each image in the plurality of images, motion estimation of elements of an image in the plurality of images and the first image is performed. Based on the results of motion estimation, motion compensation and signal accumulation of the object in the images in the plurality of images using the first image are performed. A high resolution image of the object obtained based on the motion compensation and signal accumulation is generated. Character recognition on the resulting high resolution image is performed. | 12-18-2014 |
20140369557 | Systems and Methods for Feature-Based Tracking - Disclosed embodiments pertain to feature based tracking. In some embodiments, a camera pose may be obtained relative to a tracked object in a first image and a predicted camera pose relative to the tracked object may be determined for a second image subsequent to the first image based, in part, on a motion model of the tracked object. An updated SE(3) camera pose may then be obtained based, in part on the predicted camera pose, by estimating a plane induced homography using an equation of a dominant plane of the tracked object, wherein the plane induced homography is used to align a first lower resolution version of the first image and a first lower resolution version of the second image by minimizing the sum of their squared intensity differences. A feature tracker may be initialized with the updated SE(3) camera pose. | 12-18-2014 |
20140369558 | SYSTEMS AND METHODS FOR MACHINE CONTROL - A region of space may be monitored for the presence or absence of one or more control objects, and object attributes and changes thereto may be interpreted as control information provided as input to a machine or application. In some embodiments, the region is monitored using a combination of scanning and image-based sensing. | 12-18-2014 |
20140369559 | IMAGE RECOGNITION METHOD AND IMAGE RECOGNITION SYSTEM - An image recognition method includes the following steps: capturing a plurality of images; analyzing the images to get a target object; analyzing the target object to get color information and characteristic information; statistically computing a current image according to the color information and the characteristic information to get a probability distribution map; comparing a difference between the current image and a previous image of the current image to get dynamic information; and recognizing the target object according to the probability distribution map and the dynamic information. | 12-18-2014 |
20140369560 | Nuclear Image System and Method for Updating an Original Nuclear Image - A nuclear image system for updating an original nuclear image, the nuclear image system comprising: a data memory for storing the original three-dimensional nuclear image; a nuclear radiation detector, which is movable along a freely variable path, for measuring nuclear radiation, in order to obtain nuclear radiation values; a tracking system for tracking the nuclear radiation detector while measuring the nuclear radiation, so that detector coordinates are obtained which indicate a posture of the tracked nuclear radiation detector in relation to an image coordinate system of the nuclear image; a nuclear data input configured to receive the nuclear radiation values from the nuclear radiation detector and the detector coordinates from the tracking system, and to associate the nuclear radiation values with the respective detector coordinates; and an image updating module including an updating rule for changing the original nuclear image on the basis of the nuclear radiation values and the detector coordinates, wherein the image updating module is configured to generate an updated three-dimensional nuclear image by applying the updating rule to the original nuclear image. | 12-18-2014 |
20140369561 | SYSTEM AND METHOD FOR ENHANCING HUMAN COUNTING BY FUSING RESULTS OF HUMAN DETECTION MODALITIES - The present invention discloses a method and a system for enhancing the accuracy of human counting in at least one frame of a captured image in real time in a predefined area. The present invention detects humans in one or more frames by using at least one human detection modality for obtaining the characteristic result of the captured image. The invention further calculates an activity probability associated with each human detection modality. The characteristic results and the activity probability are selectively integrated by using a fusion technique for enhancing the accuracy of the human count and for selecting the most accurate human detection modality. The human counting is then performed based on the selection of the most accurate human detection modality. | 12-18-2014 |
20140369562 | IMAGE PROCESSOR - An image processor includes an LSRAM accessible at a higher speed than a frame memory and configured to hold a second image in a predetermined range of a first image, an image production unit configured to read an image in a predetermined range of the second image and produce a third image for rough search based on the read image, an MSRAM accessible at a higher speed than the frame memory and configured to hold the third image, a first search unit configured to read the third image and perform first motion search based on the third image, and a second search unit configured to read a fourth image in a predetermined range of the second image based on a search result by the first search unit and perform second motion search that is more detailed than the first motion search based on the fourth image. | 12-18-2014 |
20140369563 | Image Control Method for Defining Images for Waypoints Along a Trajectory - A method including: displaying on a display a reference image; displaying on the display a start position within the reference image; displaying on the display an end position within the reference image; determining a trajectory between the start position and the end position; and defining a target image for each of a plurality of waypoints along the determined trajectory. | 12-18-2014 |
20140369564 | MEDICAL IMAGE DIAGNOSTIC DEVICE AND METHOD FOR SETTING REGION OF INTEREST THEREFOR - The present invention comprises: capturing a medical image of a subject by an image-capturing unit; generating compressed image data by compressing on the basis of a plurality of pixels of uncompressed image data, where the uncompressed image data is image data of the captured medical image of the subject, by an image data compression unit; setting a search range of the compressed image data and also setting a search range of the uncompressed image data, by a search range setting unit; and setting a region of interest for the medical image on the basis of the search range of the uncompressed image data and the search range of the compressed image data, by a region-of-interest setting unit. | 12-18-2014 |
20140369565 | Systems and Methods for Multi-Pass Adaptive People Counting - People are counted in a segment of video by a video processing system configured with a first set of parameters, producing a first output. Based on this first output, a second set of parameters is chosen, and people are counted in the segment of video again using the second set of parameters, producing a second output. People are also counted with the video played forward and with the video played backward, and the results of these two counts are reconciled to produce a more accurate people count. | 12-18-2014 |
20140376768 | Systems and Methods for Tracking Location of Movable Target Object - An automated process uses a local positioning system to acquire location (i.e., position and orientation) data for one or more movable target objects. In cases where the target objects have the capability to move under computer control, this automated process can use the measured location data to control the position and orientation of such target objects. The system leverages the measurement and image capture capability of the local positioning system, and integrates controllable marker lights, image processing, and coordinate transformation computation to provide tracking information for vehicle location control. The resulting system enables position and orientation tracking of objects in a reference coordinate system. | 12-25-2014 |
20140376769 | METHOD FOR DETECTING LARGE SIZE AND PASSENGER VEHICLES FROM FIXED CAMERAS - A method for detecting parking occupancy includes receiving video data from a sequence of frames taken from an associated image capture device monitoring a parking area. The method includes determining at least one candidate region in the parking area. The method includes comparing a size of the candidate region to a size threshold. In response to the size of the candidate region meeting or exceeding the size threshold, the method includes determining whether the candidate region includes at least one object or no objects. The method includes classifying at least one object in the candidate region as belonging to one of at least two vehicle-types. The method further includes providing vehicle occupancy information to a user. | 12-25-2014 |
20140376770 | STEREOSCOPIC OBJECT DETECTION LEVERAGING ASSUMED DISTANCE - A method of object detection includes receiving a first image taken by a first stereo camera, receiving a second image taken by a second stereo camera, and offsetting the first image relative to the second image by an offset distance selected such that each corresponding pixel of offset first and second images depict a same object locus if the object locus is at an assumed distance from the first and second stereo cameras. The method further includes locating a target object in the offset first and second images. | 12-25-2014 |
20140376771 | SYSTEM FOR COLLECTING GROWTH INFORMATION OF CROPS IN GREENHOUSE - A system for collecting growth information of crops in a greenhouse is provided. With this system, it is possible to estimate the growth and yields of crops depending on the size of the greenhouse and the number of crops in the greenhouse by collecting growth information of the crops such as plant lengths, leaf areas, internode lengths, fruit color, and the number of fruits of the reference crops. | 12-25-2014 |
20140376772 | DEVICE, OPERATING METHOD AND COMPUTER-READABLE RECORDING MEDIUM FOR GENERATING A SIGNAL BY DETECTING FACIAL MOVEMENT - A device for generating a signal by detecting facial movement, and an operating method thereof, are provided. The device includes an image capture unit and a processing unit. The image capture unit obtains an image series. The processing unit receives the image series from the image capture unit, wherein the processing unit includes an image background removal module, an image extracting module, and a comparator, wherein the image background removal module processes each image of the image series to obtain a facial image, wherein the feature location module determines a location of a pair of nostrils in the facial image, defines a mouth searching frame, and acquires data of mouth movements through the mouth searching frame, wherein the comparator compares the data of mouth movements with predetermined facial information and generates a designated signal according to the comparison result. | 12-25-2014 |
20140376773 | TUNABLE OPERATIONAL PARAMETERS IN MOTION-CAPTURE AND TOUCHLESS INTERFACE OPERATION - The technology disclosed can provide for improved motion capture and touchless interface operations by enabling tunable control of operational parameters without compromising the quality of image-based recognition, tracking of conformation and/or motion, and/or characterization of objects (including objects having one or more articulating members, i.e., humans and/or animals and/or machines). Examples of tunable operational parameters include frame rate, field of view, contrast detection, light source intensity, pulse rate, and/or clock rate. Among other aspects, operational parameters can be changed based upon detecting presence and/or motion of an object indicating input (e.g., control information, input data, etc.) to the touchless interface, either alone or in conjunction with the presence (or absence or degree) of one or more conditions such as accuracy conditions, resource conditions, application conditions, others, and/or combinations thereof. | 12-25-2014 |
20140376774 | ELECTRONIC EQUIPMENT WITH IMAGE ANALYSIS FUNCTION AND RELATED METHOD - An electronic equipment for analyzing an image inside a light-proof container having a portable electronic device with a display screen therein is provided. The electronic equipment analyzes the gray values of each two adjacent pixels of the image to determine a number of boundary points of an area which is illumined by the display screen, linearly fits a number of straight-lines based on the boundary points in different directions, and determines an area bound by the intersections formed by the straight-lines. | 12-25-2014 |
20140376775 | ESTIMATION OF OBJECT PROPERTIES IN 3D WORLD - Objects within two-dimensional video data are modeled by three-dimensional models as a function of object type and motion through manually calibrating a two-dimensional image to the three spatial dimensions of a three-dimensional modeling cube. Calibrated three-dimensional locations of an object in motion in the two-dimensional image field of view of a video data input are determined and used to determine a heading direction of the object as a function of the camera calibration and determined movement between the determined three-dimensional locations. The two-dimensional object image is replaced in the video data input with an object-type three-dimensional polygonal model having a projected bounding box that best matches a bounding box of an image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box, and rendered with extracted image features. | 12-25-2014 |
20140376776 | METHOD FOR SEGMENTING OBJECTS IN IMAGES - A method for identifying an attribute of an object represented in an image comprising data defining a predetermined spatial granulation for resolving the object, where the object is in contact with another object. In an embodiment, the method comprises identifying data whose values indicate they correspond to locations completely within the object, determining a contribution to the attribute provided by the data, and identifying additional data whose values indicate they are not completely within the object. The method next interpolates second contributions to the attribute from the values of the additional data and finds the attribute of the object from the first contribution and second contributions. The attribute may be, for example, a volume, and the values may correspond, for example, to intensity. | 12-25-2014 |
20150010202 | Method for Determining Object Poses Using Weighted Features - A method for determining a pose of an object in a scene by determining a set of scene features from data acquired of the scene and matching the scene features to model features to generate weighted candidate poses when a scene feature matches one of the model features, wherein the weight of a candidate pose is proportional to the model weight. Then, the pose of the object is determined from the candidate poses based on the weights. | 01-08-2015 |
20150010203 | METHODS, APPARATUSES AND COMPUTER PROGRAM PRODUCTS FOR PERFORMING ACCURATE POSE ESTIMATION OF OBJECTS - An apparatus for determining a pose(s) of an object(s) may include a processor and memory storing executable computer code causing the apparatus to at least perform operations including receiving a detected image of at least one face and analyzing the image of the at least one face based on data of at least one model identifying one or more poses. The poses may be related in part to at least one of a position or an orientation of respective faces. The computer program code may further cause the apparatus to determine that the face corresponds to one of the poses based in part on one or more items of data of the image passing criteria identified by the model as corresponding to the pose. Corresponding methods and computer program products are also provided. | 01-08-2015 |
20150010204 | PERSON BEHAVIOR ANALYSIS DEVICE, PERSON BEHAVIOR ANALYSIS SYSTEM, PERSON BEHAVIOR ANALYSIS METHOD, AND MONITORING DEVICE - A behavior analysis/monitoring device includes: a person detection unit configured to detect a person(s) from image information obtained by capturing images covering an area around an item placement area; a part-of-interest detection unit configured to detect, for each person detected by the person detection unit, a part of interest set in a part of an upper body of the person excluding hands and arms; a position measurement unit configured to measure a position of the part of interest detected by the part-of-interest detection unit; and an item pick-up action determination unit configured to obtain a displacement of the part of interest based on the position of the part of interest obtained by the position measurement unit and to determine whether each person detected by the person detection unit performed an item pick-up action based on the displacement of the part of interest of the person. | 01-08-2015 |
20150010205 | APPARATUS AND METHOD FOR INSPECTING PINS ON A PROBE CARD - Embodiments described herein generally relate to methods and apparatuses for ensuring the integrity of probe card assemblies and verifying that probe cards are ready for testing. In one embodiment, an apparatus includes a stage that allows stable and precise movement of a sensor. The stage includes a first support, a second support, and a sensor carrier. A plurality of lifting devices is coupled to the second support and the sensor carrier, providing a more stable and precise movement for the sensor carrier. Methods for identifying objects other than the probes disposed on a surface of a probe card and for determining whether the probe card is ready for use are disclosed. | 01-08-2015 |
20150010206 | GAZE POSITION ESTIMATION SYSTEM, CONTROL METHOD FOR GAZE POSITION ESTIMATION SYSTEM, GAZE POSITION ESTIMATION DEVICE, CONTROL METHOD FOR GAZE POSITION ESTIMATION DEVICE, PROGRAM, AND INFORMATION STORAGE MEDIUM - A photographing unit photographs a face of the user who is looking at a screen displayed on a display unit. An area detecting unit detects, from the photographed image of the photographing unit, an eye area of the user and at least one of a face area of the user or a predetermined part area of the user other than the user's eyes. An areal size/position information obtaining unit obtains areal size information and position information of the eye area, and areal size information and position information of the at least one of the face area or the predetermined part area. A gaze position estimation unit estimates a position in the screen that the user is gazing at, based on the areal size information and the position information. | 01-08-2015 |
20150010207 | DRIVING ASSISTANCE DEVICE AND DRIVING ASSISTANCE METHOD - A current sensor includes: a magneto electric conversion element; and a magnetic field concentrating core applying a magnetic field caused by a measurement object current to the magneto electric conversion element. A planar shape of the magnetic field concentrating core perpendicular to a current flowing direction is a ring shape with a gap. The magneto electric conversion element is arranged in the gap. A part of a conductor for flowing the current is surrounded by the magnetic field concentrating core. The magnetic field concentrating core includes two first magnetic members and at least one second magnetic member, which are stacked alternately in the current flowing direction. Parts of the two first magnetic members adjacent to each other via the one second magnetic member are opposed to each other through a clearance or an insulator. | 01-08-2015 |
20150010208 | MEASURING QUALITY OF EXPERIENCE ASSOCIATED WITH A MOBILE DEVICE - Implementations and techniques for measuring quality of experience associated with a mobile device are generally disclosed. | 01-08-2015 |
20150010209 | REAL TIME PROCESSING OF VIDEO FRAMES - A method and system for real time processing of a sequence of video frames. A current frame in the sequence and at least one frame in the sequence occurring prior to the current frame are analyzed. Each frame includes a two-dimensional array of pixels. The sequence of video frames is received in synchronization with a recording of the video frames in real time. The analyzing includes performing a background subtraction on the at least one frame, which determines a background image and a static region mask associated with a static region consisting of a contiguous distribution of pixels in the current frame. The static region mask identifies each pixel in the static region upon the static region mask being superimposed on the current frame. A determination is made that a persistence requirement, a non-persistence duration requirement, and a persistence duration requirement have been satisfied. | 01-08-2015 |
20150010210 | REAL TIME PROCESSING OF VIDEO FRAMES - A method and system for real time processing of a sequence of video frames. A current frame in the sequence and at least one frame in the sequence occurring prior to the current frame are analyzed. The sequence of video frames is received in synchronization with a recording of the video frames in real time. The analyzing includes performing a background subtraction on the at least one frame, which determines a background image and a static region mask associated with a static region consisting of a contiguous distribution of pixels in the current frame. The static region mask identifies each pixel in the static region upon the static region mask being superimposed on the current frame. A status of a static object is determined as either an abandoned status if the static object is an abandoned object or a removed status if the static object is a removed object. | 01-08-2015 |
20150010211 | REAL TIME PROCESSING OF VIDEO FRAMES - A method and system for real time processing of a sequence of video frames. A current frame in the sequence and at least one frame in the sequence occurring prior to the current frame are analyzed. The sequence of video frames is received in synchronization with a recording of the video frames in real time. The analyzing includes performing a background subtraction on the at least one frame, which determines a background image and a static region mask associated with a static region consisting of a contiguous distribution of pixels in the current frame, which includes executing a mixture of 3 to 5 Gaussians algorithm coupled together in a linear combination by Gaussian weight coefficients to generate the background model, a foreground image, and the static region. The static region mask identifies each pixel in the static region upon the static region mask being superimposed on the current frame. | 01-08-2015 |
20150016666 | Method and Apparatus for Determining Geolocation of Image Contents - A method and apparatus for determining a location of an object depicted in an image are disclosed. The location of an object depicted in an image is determined based on one or more of a camera location at a time the image was captured, object distance data, and camera orientation. Object distance data can include distance-to-subject data or focal length data. Camera orientation information can include azimuth and elevation angle, which can be used to determine a direction from the camera in which an object is located and an elevation of an object with respect to the camera. In one embodiment, image and object data are stored in a database which can be accessed by users to search for images and objects. | 01-15-2015 |
20150016667 | OBJECT RECOGNITION FOR 3D MODELS AND 2D DRAWINGS - A first method is disclosed for recognizing 3D objects in 3D models created by 3D scanners, depth sensing cameras or created by a 3D modeling software application. A second method is disclosed for recognizing 2D objects in drawings. The 3D/2D objects can be individual objects that have simple forms or combined objects that are comprised of a plurality of individual objects that are attached to each other in a certain manner to form one entity. The first and second methods serve a variety of medical, engineering, industrial, gaming and augmented reality applications. | 01-15-2015 |
20150016668 | SETTLEMENT MAPPING SYSTEMS - A system detects settlements from images. A processor reads image data. The processor is programmed by processing only a portion of the image data designated as a settlement by a user. The processor transforms the image data into a settlement classification or a non-settlement classification by discriminating pixels within the images based on the user's prior designation. The system alters the appearance of the images rendered by the processor to differentiate settlements from non-settlements. | 01-15-2015 |
20150016669 | AUTOMATED REMOTE CAR COUNTING - A system for automated car counting comprises a satellite-based image collection subsystem; a data storage subsystem; and an analysis software module stored and operating on a computer coupled to the data storage subsystem. The satellite-based image collection subsystem collects images corresponding to a plurality of areas of interest and stores them in the data storage subsystem. The analysis module: (a) retrieves images corresponding to an area of interest from the data storage subsystem; (b) identifies a parking space in an image; (c) determines if there is a car located in the parking space; (d) determines a location, size, and angular direction of a car in a parking space; (e) determines an amount of overlap of a car with an adjacent parking space; (f) iterates steps (b)-(e) until no unprocessed parking spaces remain; and (g) iterates steps (a)-(f) until no unprocessed images corresponding to areas of interest remain. | 01-15-2015 |
20150016670 | METHODS AND SYSTEMS FOR IMAGE RECOGNITION - A method and system for image recognition are disclosed. The method includes the steps of acquiring image information for a target object to be recognized at a terminal device; transferring said image information to a server, wherein the server applies feature recognition techniques to the image information, and returns a recognition result; and presenting the recognition result returned by the server at the terminal device. The method and system consistent with the present disclosure may simplify user operations and improve the efficiency and intelligence level of an image recognition system. | 01-15-2015 |
20150016671 | SETTING APPARATUS, OUTPUT METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - A setting apparatus configures a setting for detecting that an object existing at different positions in images corresponding to different times has passed through a detection line or a detection area. The apparatus composites, on an image, an indication indicating the trajectory of an object in the image, and outputs the image on which the indication is composited as a setting window for setting the detection line or the detection area. | 01-15-2015 |
20150016672 | COMMODITY RECOGNITION APPARATUS AND COMMODITY RECOGNITION METHOD - The commodity recognition apparatus displays a frame for surrounding a commodity in an image captured by an image capturing module. Then the commodity recognition apparatus recognizes a candidate of the commodity imaged in the frame according to the feature amount of the image in the area surrounded by the frame, and outputs information of a highest ranked candidate commodity. If a change instruction for the candidate is received, the commodity recognition apparatus outputs information of the commodity other than the highest ranked candidate selected from the candidates. | 01-15-2015 |
20150016673 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus includes: an object setting section that sets an object image indicating an object which is placed in an image generated by an image capturing section; a detection section that detects the object included in a synthetic image in which the object image and the image generated by the image capturing section are synthesized; and an output section that outputs output information for setting a detection parameter used in a detection process performed by the detection section, that is, output information in which the synthetic image and a detection result of the detection process are associated. | 01-15-2015 |
20150016674 | METHOD AND APPARATUS FOR CONNECTING DEVICES USING EYE TRACKING - A method for connecting an electronic device using an eye-tracking technique and an electronic device that implements the method are provided. The method includes acquiring eye-tracking information, obtaining image information corresponding to the eye-tracking information, comparing the image information with specific information about at least one external device, and based on the comparison, determining a specific external device to be connected from among the at least one external device. | 01-15-2015 |
20150016675 | TERMINAL APPARATUS, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD - A terminal apparatus includes a first execution unit. The first execution unit includes an acquisition unit that acquires a captured image captured by an image capturing unit; a transmitting unit that transmits the captured image acquired by the acquisition unit to an image search system including a storage unit, in which objects and associated information are stored in association with each other, and an image search server that retrieves associated information associated with an object contained in the transmitted captured image and transmits the retrieved associated information to a transmission source of the captured image; a receiving unit that receives the associated information transmitted from the image search system based on the captured image transmitted from the transmitting unit; and a presenting unit that presents associated information corresponding to an externally-fed parameter, among the associated information received by the receiving unit. | 01-15-2015 |
20150016676 | SYSTEM AND METHOD FOR DETECTING OBJECT USING DEPTH INFORMATION - A system for detecting an object is provided. The system includes a depth image receiver that receives a depth image from a depth camera; a strong classifier that classifies an object region and a non-object region in the depth image based on a characteristic of an object; and an object detector that detects the classified object region, wherein the strong classifier comprises a plurality of weak classifiers which are cascade-connected to each other and classifies the object region and the non-object region by passing the depth image through the weak classifiers, the characteristic of the object is extracted based on a center depth value of the depth image, and the plurality of weak classifiers are generated through a training process for classifying positive training images among a multiple number of positive training images and a multiple number of negative training images. | 01-15-2015 |
20150016677 | HIGH ACCURACY BEAM PLACEMENT FOR LOCAL AREA NAVIGATION - An improved method of high accuracy beam placement for local area navigation in the field of semiconductor chip manufacturing. Preferred embodiments of the present invention can also be used to rapidly navigate to one single bit cell in a memory array or similar structure, for example to characterize or correct a defect in that individual bit cell. High-resolution scanning is used to scan only a “strip” of cells on the one edge of the array (along either the X axis and the Y axis) to locate a row containing the desired cell followed by a similar high-speed scan along the located row (in the remaining direction) until the desired cell location is reached. This allows pattern-recognition tools to be used to automatically “count” the cells necessary to navigate to the desired cell, without the large expenditure of time required to image the entire array. | 01-15-2015 |
20150016678 | APPARATUS AND METHOD OF PREDICTING TURNS OF VEHICLE - An apparatus predicts a turn of a vehicle based on a picked-up image of a forward view of the vehicle. The forward view is imaged by an on-vehicle sensor to repeatedly acquire images. The acquired images include position coordinate information of a light source and information indicating whether the light source is a light source of a preceding vehicle or a light source of an oncoming vehicle. Based on such information, it is determined whether or not the light source is a light source of an oncoming vehicle that has newly appeared in the images and whether the light source is in a predetermined area near the left end or in a predetermined area near the right end of the images. When the determination result is affirmative, it is determined that there is a curve in the traveling direction of the vehicle. | 01-15-2015 |
20150016679 | FEATURE EXTRACTION DEVICE, FEATURE EXTRACTION METHOD, AND FEATURE EXTRACTION PROGRAM | 01-15-2015 |
20150016680 | HYBRID PRECISION TRACKING - Disclosed herein are through-the-lens tracking systems and methods which can enable sub-pixel accurate camera tracking suitable for real-time set extensions. That is, the through-the-lens tracking can make an existing lower precision camera tracking and compositing system into a real-time VFX system capable of sub-pixel accurate real-time camera tracking. With this enhanced level of tracking accuracy the virtual cameras can be used to register and render real-time set extensions for both interior and exterior locations. | 01-15-2015 |
20150016681 | THREE-DIMENSIONAL OBJECT DETECTION DEVICE - A three-dimensional object detection device includes an image capturing unit, an image conversion unit, a three-dimensional object detection unit, a movement speed calculation unit, a three-dimensional object assessment unit, a non-detection-object assessment unit and a control unit. The image conversion unit converts a viewpoint of the images to create bird's-eye view images. The three-dimensional object detection unit detects a presence of a three-dimensional object within the predetermined detection area based on difference waveform information. The movement speed calculation unit calculates a movement speed of the three-dimensional object. The non-detection-object assessment unit detects an amount of variability in the movement speed of the three-dimensional object, and assesses whether the three-dimensional object is a non-detection object based on the amount of variability. The control unit inhibits the three-dimensional object assessment unit from assessing that the three-dimensional object is another vehicle based on the assessment results. | 01-15-2015 |
20150016682 | REFERENCE-BASED MOTION TRACKING DURING NON-INVASIVE THERAPY - During a focused-ultrasound or other non-invasive treatment procedure, the motion of the treatment target or other object(s) of interest can be tracked in real time based on the comparison of treatment images against a reference library of images that have been acquired prior to treatment for the anticipated range of motion and have been processed to identify the location of the target or other object(s) therein. | 01-15-2015 |
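A minimal sketch of the reference-library lookup this abstract describes: the current treatment image is matched against pre-acquired references whose target locations are already known, and the stored location of the best match is returned. Sum of squared differences is used as the matching score here, an assumption — the abstract does not specify the similarity metric:

```python
import numpy as np

def track_target(treatment_image, reference_images, target_positions):
    # Return the pre-identified target position stored with the reference
    # image that best matches the current treatment image (smallest sum of
    # squared differences over pixels).
    img = np.asarray(treatment_image, dtype=float)
    best_idx, best_score = None, float("inf")
    for i, ref in enumerate(reference_images):
        score = float(np.sum((img - np.asarray(ref, dtype=float)) ** 2))
        if score < best_score:
            best_idx, best_score = i, score
    return target_positions[best_idx]
```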
20150016683 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - Provided is an image processing device including a global motion detection unit configured to detect a global motion indicating a motion of an entire image, a local motion detection unit configured to detect a local motion indicating a motion of each of areas of an image, and a main subject determination unit configured to determine a main subject based on the global motion and the local motion. | 01-15-2015 |
20150016684 | OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD, AND OBJECT DETECTION PROGRAM - An object detection device includes a raster scan execution unit that executes a raster scan on an input image using a scan window in order to detect an object within the input image which is input by an image input unit, a scan point acquisition unit that acquires scan points of the scan window which are positions on the input image during the execution of the raster scan, and a size-changing unit that changes a relative size of the input image with respect to the scan window. When the relative size is changed by the size-changing unit, an offset is given to the starting positions of the scan points after the change with respect to the starting positions of the scan points before the change. | 01-15-2015 |
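The scan-point generation with an inter-pass offset can be sketched as follows; the window size, step, and the way the offset is applied are assumptions, not the patented scheme:

```python
def raster_scan_points(image_w, image_h, window, step, offset=(0, 0)):
    # Enumerate top-left scan-window positions for one raster pass. When the
    # relative image/window size changes, a nonzero offset shifts the new
    # pass's starting positions relative to the previous pass, so successive
    # passes sample interleaved positions rather than repeating the same grid.
    ox, oy = offset
    return [(x, y)
            for y in range(oy, image_h - window + 1, step)
            for x in range(ox, image_w - window + 1, step)]
```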
20150016685 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, AND PROGRAM - There is provided an information processing device including a control unit to generate play event information based on a determination whether detected behavior of a user is a predetermined play event. | 01-15-2015 |
20150016686 | IMAGE PROCESSING APPARATUS, METHOD, AND PROGRAM - An image processing apparatus, including a filtering unit that performs filtering on an image using a second order partial differential of a function representing a solid sphere and calculates a Hessian matrix, and an evaluation unit that discriminates a structure included in the image using eigenvalues and eigenvectors of the Hessian matrix, in which the filtering unit includes a correction unit that performs filtering on the image using a first order partial differential of a function representing a hollow sphere having the same radius as the radius of the solid sphere and obtains first order partial differential vectors, and carries out correction to cancel out one of response waveforms of the function representing the solid sphere in each direction, the response waveforms appearing at two positions symmetrically separated with respect to the center of the solid sphere, using values obtained by projecting the first order partial differential vectors onto directions of the eigenvectors. | 01-15-2015 |
20150016687 | METHOD, SYSTEM AND COMPUTER STORAGE MEDIUM FOR FACE DETECTION - In a face detection method, an image is preprocessed and corners are extracted from the preprocessed image. The corners are then filtered and combined to obtain a connected component of the corners. A centroid is extracted from the connected component of the corners and matched with a facial template. A matching probability of the centroid with the facial template is then calculated, and a region formed by centroids having a matching probability greater than or equal to a predetermined value is identified as a candidate face region. With the method described above, the accuracy and efficiency of face detection can be improved. In addition, the present invention provides a face detection system and a computer storage medium. | 01-22-2015 |
20150016688 | SALIENT POINT-BASED ARRANGEMENTS - A variety of methods and systems involving sensor-equipped portable devices, such as smartphones and tablet computers, are described. One particular embodiment decodes a digital watermark from imagery captured by the device and, by reference to watermark payload data, obtains salient point data corresponding to an object depicted in the imagery. Other embodiments obtain salient point data for an object through use of other technologies (e.g., NFC chips). The salient point data enables the device to interact with the object in a spatially-dependent manner. Many other features and arrangements are also detailed. | 01-15-2015 |
20150023549 | DETECTION OF ASTRONOMICAL OBJECTS - Methods and apparatus, including computer program products, implementing and using techniques for detecting astronomical objects. An image frame is received, which includes representations of one or more astronomical objects. The received image frame is divided into several swaths. One or more swaths are selected, which include full or partial representations of one or more astronomical objects. Each of the one or more swaths and each astronomical object represented within the one or more swaths can be designated by a base-limit pair. The base-limit pairs for the selected one or more swaths are compared with base-limit pairs for one or more corresponding swaths using a difference algorithm. A list of differences in the base-limit pairs is created. | 01-22-2015 |
20150023550 | AUTOMATIC EXTRACTION OF BUILT-UP FOOTPRINTS FROM HIGH RESOLUTION OVERHEAD IMAGERY THROUGH MANIPULATION OF ALPHA-TREE DATA STRUCTURES - A system for automatically extracting or isolating structures or areas of interest (e.g., built-up structures such as buildings, houses, shelters, tents; agricultural areas; etc.) from HR/VHR overhead imagery data by way of making as little as a single pass through a hierarchical data structure of input image components (where pixels are grouped into components based on any appropriate definition or measure of dissimilarity between adjacent pixels of the input image) to identify candidate components (e.g., possible structures of interest), without necessarily having to re-run the same operator configured with different threshold parameters for a plurality of values. | 01-22-2015 |
20150023551 | Method and device for detecting falls by image analysis - The present invention relates to a method for detecting a fall of a person by analysis of a stream of video images originating from an image capture device, comprising:
| 01-22-2015 |
20150023552 | SYSTEMS AND METHODS FOR DETERMINING IMAGE SAFETY - Systems and methods are provided for determining the safety of an image, which may be used to determine whether an image is appropriate for a given purpose or for use in a given context. Determining the safety of the image may include analyzing the image to determine the amount of skin exposed in various key body areas of each human represented in the image, such as a photograph. | 01-22-2015 |
20150023553 | IMAGE ANOMALY DETECTION IN A TARGET AREA USING POLARIMETRIC SENSOR DATA - A methodology for detecting image anomalies in a target area for classifying objects therein, in which at least two images of the target area are obtained from a sensor representing different polarization components. The methodology can be used to classify and/or discriminate manmade objects from natural objects in a target area, for example. A data cube is constructed from the at least two images with the at least two images being aligned, such as on a pixel-wise basis. A processor computes the global covariance of the data cube and thereafter locates a test window over a portion of the data cube. The local covariance of the contents of the test window is computed and objects are classified within the test window when an image anomaly is detected in the test window. For example, an image anomaly may be determined when a matrix determinant ratio of the local covariance and the global covariance exceeds a probability ratio threshold. The window can then be moved, e.g., by one or more pixels to form a new test window in the target area, and the above steps repeated until all of the pixels in the data cube have been included in at least one test window. | 01-22-2015 |
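The sliding-window covariance test can be sketched directly from this abstract. The det(local)/det(global) form of the ratio statistic, the window size, and the threshold value are assumptions where the abstract leaves details open:

```python
import numpy as np

def anomaly_mask(cube, window=5, ratio_threshold=4.0):
    # cube: (H, W, C) data cube of C aligned polarization-component images.
    # A pixel is flagged as anomalous when the determinant of the covariance
    # inside the local test window is large relative to the determinant of
    # the global covariance of the whole cube.
    H, W, C = cube.shape
    global_det = np.linalg.det(np.cov(cube.reshape(-1, C), rowvar=False))
    half = window // 2
    mask = np.zeros((H, W), dtype=bool)
    for y in range(half, H - half):
        for x in range(half, W - half):
            patch = cube[y - half:y + half + 1, x - half:x + half + 1]
            local_det = np.linalg.det(np.cov(patch.reshape(-1, C), rowvar=False))
            mask[y, x] = local_det / (global_det + 1e-30) > ratio_threshold
    return mask
```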
20150023554 | IMAGE PROCESSING APPARATUS, COMPUTER-READABLE MEDIUM STORING AN IMAGE PROCESSING PROGRAM, AND IMAGE PROCESSING METHOD - The image processing apparatus for detecting a moving object in a moving image includes a background generation unit configured to generate a background image of the moving image while updating the background image over time. The background generation unit includes a model derivation unit configured to derive a mixed distribution model having one or more distribution models for each pixel of interest, and a background value derivation unit configured to derive one or more background pixel values respectively corresponding to the one or more distribution models. The model derivation unit is configured to generate a new distribution model from pixel values of a plurality of pixels within a local region containing the pixel of interest in a first frame, and update the existing distribution model using a pixel value of the pixel of interest in a second frame that is different from the first frame. | 01-22-2015 |
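A single-Gaussian-per-pixel simplification of the mixed-distribution background model described above: each pixel's distribution is seeded from the pixel values in its local region in a first frame and then updated from a later frame. The blending rate and the 2.5-sigma foreground test are assumptions, not the patented parameters:

```python
import numpy as np

def init_model(frame, radius=1):
    # Seed each pixel's Gaussian from the values in its (2*radius+1)^2
    # neighbourhood of the first frame.
    H, W = frame.shape
    mean, var = np.empty((H, W)), np.empty((H, W))
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            region = frame[y0:y1, x0:x1]
            mean[y, x] = region.mean()
            var[y, x] = region.var() + 1e-4
    return mean, var

def update_model(mean, var, frame, alpha=0.1):
    # Blend each pixel of a later frame into the existing distribution.
    diff = frame - mean
    return mean + alpha * diff, (1 - alpha) * var + alpha * diff ** 2

def foreground_mask(mean, var, frame, k=2.5):
    # A pixel is foreground when it deviates from its background model by
    # more than k standard deviations.
    return np.abs(frame - mean) > k * np.sqrt(var)
```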
20150023555 | COMMODITY RECOGNITION APPARATUS AND COMMODITY RECOGNITION METHOD - In accordance with one embodiment, a commodity recognition apparatus receives, if a commodity is recognized as candidate of a target commodity by a recognition module, a selection input of the target commodity from the candidate; then adds appearance feature amount data to the feature amount data stored in a recognition dictionary file in association with an item of the target commodity the selection input of which is received; and meanwhile, writes log data containing the date and time when the appearance feature amount data is added to the recognition dictionary file in a log storage section. | 01-22-2015 |
20150023556 | METHOD AND APPARATUS FOR SELECTING SEED AREA FOR TRACKING NERVE FIBERS IN BRAIN - A method for selecting a seed area for tracking nerve fibers in a brain includes performing registration of an atlas which shows a plurality of areas which are included in the brain and image data which relates to the brain, displaying a brain area list with respect to the plurality of areas, selecting a first area from the atlas based on a first user input with respect to the brain area list, extracting an area of the image data which corresponds to the first area, as a seed area, based on a result of the registration, and generating a first image which corresponds to the seed area from the image data, and displaying the generated first image. | 01-22-2015 |
20150023557 | APPARATUS FOR RECOGNIZING OBJECTS, APPARATUS FOR LEARNING CLASSIFICATION TREES, AND METHOD FOR OPERATING SAME - An object recognition system is provided. The object recognition system for recognizing an object may include an input unit to receive, as an input, a depth image representing an object to be analyzed, and a processing unit to recognize a visible object part and a hidden object part of the object, from the depth image, by using a classification tree. The object recognition system may include a classification tree learning apparatus to generate the classification tree. | 01-22-2015 |
20150023558 | SYSTEM AND METHOD FOR FACE DETECTION AND RECOGNITION USING LOCALLY EVALUATED ZERNIKE AND SIMILAR MOMENTS - The present invention relates to a system ( | 01-22-2015 |
20150023559 | Image Processing Apparatus and Method - A method and apparatus for localizing an area in relative movement and for determining the speed and direction thereof in real time is disclosed. Each pixel of an image is smoothed using its own time constant. A binary value corresponding to the existence of a significant variation in the amplitude of the smoothed pixel from the prior frame, and the amplitude of the variation, are determined, and the time constant for the pixel is updated. For each particular pixel, two matrices are formed that include a subset of the pixels spatially related to the particular pixel. The first matrix contains the binary values of the subset of pixels. The second matrix contains the amplitude of the variation of the subset of pixels. In the first matrix, it is determined whether the pixels along an oriented direction relative to the particular pixel have binary values representative of significant variation, and, for such pixels, it is determined in the second matrix whether the amplitude of these pixels varies in a known manner indicating movement in the oriented direction. In each of several domains, a histogram of the values in the first and second matrices falling in such domain is formed. Using the histograms, it is determined whether there is an area having the characteristics of the particular domain. The domains include luminance, hue, saturation, speed (V), oriented direction (D | 01-22-2015 |
20150023560 | MULTI-CUE OBJECT ASSOCIATION - Multiple discrete objects within a scene image captured by a single camera are distinguished as un-labeled blobs against a background model within a first frame of a video data input. Object position, object appearance and/or object size attributes are determined for each of the blobs, and costs of assigning the blobs to existing object tracks are determined as a function of the determined attributes. The un-labeled object blob that has the lowest cost of association with any of the existing object tracks is labeled with the label of the track having the lowest cost, that track is removed from consideration for labeling the remaining un-labeled object blobs, and the process is iteratively repeated until each of the track labels has been used to label one of the un-labeled blobs. | 01-22-2015 |
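The iterative lowest-cost labeling loop can be sketched generically; the cost function itself (combining the position, appearance, and size cues) is left abstract here and supplied by the caller:

```python
def associate_blobs(blobs, tracks, cost):
    # Greedy association: repeatedly pick the un-labeled blob with the
    # lowest cost against any remaining track, label it with that track's
    # label, retire the track, and repeat until blobs or tracks run out.
    blobs, tracks = list(blobs), list(tracks)
    labels = {}
    while blobs and tracks:
        blob, track = min(((b, t) for b in blobs for t in tracks),
                          key=lambda pair: cost(pair[0], pair[1]))
        labels[blob] = track
        blobs.remove(blob)
        tracks.remove(track)
    return labels
```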
20150023561 | DYNAMIC ULTRASOUND PROCESSING USING OBJECT MOTION CALCULATION - A system and method for transforming ultrasound data includes acquiring ultrasound data, calculating object motion from the data, modifying a processing parameter, processing the ultrasound data according to the processing parameter, and outputting the processed ultrasound data. The system and method may additionally include the calculation of a data quality metric that can additionally or alternatively be used with object motion to modify a processing parameter. | 01-22-2015 |
20150030202 | SYSTEM AND METHOD FOR MOVING OBJECT DETECTION AND PROCESSING - A method is provided for an intelligent video processing system based on object detection. The method includes receiving an input video sequence corresponding to a video program, obtaining a plurality of frames of the input video sequence, and obtaining a computational constraint and a temporal rate constraint. The method also includes determining one or more regions of interest (ROIs) of the plurality of frames based on the computational constraint and temporal rate constraint, and selecting a desired set of frames from the plurality of frames based on the ROIs such that the desired set of frames substantially represent a view path of the plurality of frames. Further, the method includes detecting object occurrences from the selected desired set of frames such that a computational cost and a number of frames for detecting the object occurrences are under the computational constraint and temporal rate constraint. | 01-29-2015 |
20150030203 | METHOD AND APPARATUS FOR DETECTING SMOKE FROM IMAGE - Provided are a fire detecting apparatus and a method thereof for detecting a fire, the method includes operations of extracting a feature of at least one object in an input image by using a value of a brightness difference between pixels of the input image or by using an RGB value of the input image; converting the extracted feature of the at least one object into an N dimensional feature; and performing Support Vector Machine (SVM) machine learning on the N dimensional feature of the at least one object. | 01-29-2015 |
20150030204 | APPARATUS AND METHOD FOR ANALYZING IMAGE INCLUDING EVENT INFORMATION - An apparatus and method for analyzing an image including event information determine a pattern of at least one pixel group corresponding to the event information included in an input image, and analyze at least one of an appearance of an object and a motion of the object based on the at least one pattern. | 01-29-2015 |
20150030205 | HUMAN BODY SECURITY INSPECTION METHOD AND SYSTEM - The present invention provides a human body security inspection method and system. The method comprises: retrieving in real-time scanning row or column image data of a person to be inspected; transmitting in real-time the image data to an algorithm processing module and processing the image data by the module; automatically recognizing a suspicious matter by a suspicious matter automatic target recognition technique after retrieving the image data of an entire scanning image of the person; and selecting any one of the following three inspection modes to perform further processing on the basis of the recognition result of the suspicious matter: (1) the automatic target recognition technique alone; (2) a combination of the automatic target recognition technique and a privacy protection image; and (3) a combination of the automatic target recognition technique, a privacy protection image and human intervention. | 01-29-2015 |
20150030206 | Detecting and Tracking Point Features with Primary Colors - A feature tracking technique for detecting and tracking feature points with primary colors. An energy value may be computed for each color channel of a feature. If the energy of all the channels is above a threshold, then the feature may be tracked according to a feature tracking method using all channels. Otherwise, if the energy of all of the channels is below the threshold, then the feature is not tracked. If the energy of at least one (but not all) of the channels is below the threshold, then the feature is considered to have primary color, and the feature may be tracked according to the feature tracking method using only the one or more channels with energy above the threshold. The feature tracking techniques may, for example, be used to establish point trajectories in an image sequence for various Structure from Motion (SFM) techniques. | 01-29-2015 |
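The three-way decision in this abstract — track with all channels, track with only the strong channels (a primary-color feature), or drop the feature — can be sketched as below. The abstract does not fix the energy measure, so summed squared spatial gradients are used here as a stand-in:

```python
import numpy as np

def channel_tracking_decision(patch, energy_threshold):
    # Compute a per-channel energy for a feature window and decide which
    # channels, if any, to use for tracking.
    channels = patch.shape[2]
    strong = []
    for c in range(channels):
        gy, gx = np.gradient(patch[:, :, c].astype(float))
        if float(np.sum(gx ** 2 + gy ** 2)) >= energy_threshold:
            strong.append(c)
    if len(strong) == channels:
        return "track_all_channels", strong
    if not strong:
        return "do_not_track", strong
    return "track_primary_color", strong  # use only the strong channels
```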
20150036874 | AUTOMATIC GENERATION OF BUILT-UP LAYERS FROM HIGH RESOLUTION SATELLITE IMAGE DATA - A system for automatically extracting interesting structures or areas (e.g., built-up structures such as buildings, tents, etc.) from HR/VHR satellite imagery data using corresponding LR satellite imagery data. The system breaks down HR/VHR input satellite images into a plurality of components (e.g., groups of pixels), organizes the components into a first hierarchical data structure (e.g., a Max-Tree), generates a second hierarchical data structure (e.g., a KD-Tree) from feature elements (e.g., spectral and shape characteristics) of the components, uses LR satellite imagery data to categorize components as being of interest or not, uses the feature elements of the categorized components to train the second data structure to be able to classify all components of the first data structure as being of interest or not, classifies the components of the first data structure with the trained second data structure, and then maps components classified as being of interest into a resultant image. | 02-05-2015 |
20150036875 | METHOD AND SYSTEM FOR APPLICATION EXECUTION BASED ON OBJECT RECOGNITION FOR MOBILE DEVICES - Embodiments of the present invention enable mobile devices to behave as a dedicated remote control for different target devices through camera detection of a particular target device and autonomous execution of applications linked to the detected target device. Also, when identical target devices are detected, embodiments of the present invention may be configured to use visual identifiers and/or positional data associated with the target device for purposes of distinguishing the target device of interest. Additionally, embodiments of the present invention are capable of being placed in a surveillance mode in which camera detection procedures are constantly performed to locate target devices. Embodiments of the present invention may also enable users to engage this surveillance mode by pressing a button located on the mobile device. Furthermore, embodiments of the present invention may be trained to recognize target devices. | 02-05-2015 |
20150036876 | ASSOCIATING A CODE WITH AN OBJECT - Described are machine vision systems, methods, and apparatus, including computer program products for associating codes with objects. In an embodiment, a machine vision system includes an area-scan camera having a field of view (FOV), the area-scan camera disposed relative to a first workspace such that the FOV covers at least a portion of the first workspace and a dimensioner disposed relative to a second workspace. The machine vision system includes a machine vision processor configured to: determine an image location of a code in an image; determine a ray in a shared coordinate space that is a back-projection of the image location of the code; determine one or more surfaces of one or more objects based on dimensioning data; determine a first surface of the one or more surfaces that intersects the 3D ray; and associate the code with an object associated with the first surface. | 02-05-2015 |
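The back-projection and ray-surface intersection steps can be illustrated with a pinhole camera at the origin of the shared coordinate space and object surfaces simplified to axis-aligned rectangles at fixed depth; both simplifications are assumptions, not the system's actual surface model:

```python
def back_project(pixel, fx, fy, cx, cy):
    # Direction of the 3D ray through an image location for a pinhole
    # camera sitting at the origin of the shared coordinate space.
    u, v = pixel
    return ((u - cx) / fx, (v - cy) / fy, 1.0)

def associate_code(pixel, intrinsics, surfaces):
    # surfaces: (object_id, z, (xmin, xmax, ymin, ymax)) - each object's top
    # surface modeled as an axis-aligned rectangle at depth z, standing in
    # for dimensioner output. Returns the object whose surface the
    # back-projected ray hits first, or None if the ray misses everything.
    dx, dy, dz = back_project(pixel, *intrinsics)
    best = None
    for obj, z, (x0, x1, y0, y1) in surfaces:
        t = z / dz
        px, py = dx * t, dy * t
        if x0 <= px <= x1 and y0 <= py <= y1 and (best is None or t < best[0]):
            best = (t, obj)
    return None if best is None else best[1]
```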
20150036877 | SPARSE REDUCED (SPARE) FILTER - The disclosure provides a filtering engine for selecting sparse filter components used to detect a material of interest (or specific target) in a hyperspectral imaging scene and applying the sparse filter to a plurality of pixels in the scene. The filtering engine transforms a spectral reference representing the material of interest to principal components space using the eigenvectors of the scene. It then ranks sparse filter components based on each transformed component of the spectral reference. The filtering engine selects sparse filter components based on their ranks. The filtering engine performs the subset selection quickly because the computations are minimized; it processes only the spectral reference vector and covariance matrix of the scene to do the subset selection rather than process a plurality of pixels in the scene, as is typically done. The spectral filter scores for the plurality of pixels are calculated efficiently using the sparse filter. | 02-05-2015 |
20150036878 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - There is provided an image processing apparatus for composing a plurality of images captured while changing an exposure amount, including a displacement detection means for detecting a displacement amount between the plurality of images, a correction means for correcting a displacement between the images based on the displacement amount detected by the displacement detection means, a moving object region detection means for detecting a moving object region from the plurality of images for which the displacement has been corrected, an image composition means for composing the plurality of images for which the displacement has been corrected, and a moving object processing means for replacing a region corresponding to the moving object region of the composite image composed by the image composition means by an image obtained by performing weighted addition of the plurality of images. | 02-05-2015 |
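A compact sketch of the moving-object handling described above: a plain per-pixel mean stands in for the exposure composite, the moving-object region is taken to be the pixels whose values spread widely across the aligned frames, and the weighted addition defaults to favoring the last frame — all three choices are assumptions:

```python
import numpy as np

def compose_exposures(frames, motion_threshold=10.0, weights=None):
    # frames: aligned (displacement-corrected) exposures, each shape (H, W).
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    composite = stack.mean(axis=0)
    # Pixels with a large value spread across frames are treated as the
    # moving-object region.
    moving = (stack.max(axis=0) - stack.min(axis=0)) > motion_threshold
    if weights is None:
        weights = np.zeros(len(frames))
        weights[-1] = 1.0
    # Replace the moving region by a weighted addition of the frames.
    weighted = np.tensordot(np.asarray(weights, dtype=float), stack, axes=1)
    out = composite.copy()
    out[moving] = weighted[moving]
    return out, moving
```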
20150036879 | POSTURE ESTIMATING APPARATUS, POSTURE ESTIMATING METHOD AND STORING MEDIUM - The present invention aims to estimate a more consistent posture in regard to a multi-joint object. A target range image is first input, a human body region is extracted from the input range image, a target joint position candidate is calculated from the input range image, and a joint position is finally determined based on the calculated joint position candidate and a likelihood of each joint to estimate the posture. At this time, joint position permissible range information concerning inter-joint distance and angle of a human body model previously set by learning is obtained from a human body model storing unit, consistency is evaluated for a relation between the joint position candidates of a certain joint and those of other joints based on the obtained information, and thus the posture corresponding to the best combination of the joint positions is determined. | 02-05-2015 |
20150036880 | ANALYSIS SYSTEM - An analysis system | 02-05-2015 |
20150036881 | IDENTIFYING IOT DEVICES/OBJECTS/PEOPLE USING OUT-OF-BAND SIGNALING/METADATA IN CONJUNCTION WITH OPTICAL IMAGES - The disclosure relates to identifying an object associated with a nearby Internet of Things (IoT) device. In an aspect, a device receives identifying information associated with the nearby IoT device, detects a nearby object in a field of view of a camera application, determines whether or not the nearby object is associated with the nearby IoT device based on the received identifying information, and based on the nearby object being associated with the nearby IoT device, determines that the nearby object corresponds to the object associated with the nearby IoT device. | 02-05-2015 |
20150036882 | AUDITING VIDEO ANALYTICS THROUGH ESSENCE GENERATION - Video analytics data is audited through review of selective subsets of visual images from a visual image stream as a function of a temporal relationship of the images to a triggering alert event. The subset comprehends an image contemporaneous with the triggering alert event and one or more other images occurring before or after the contemporaneous image. The generated subset may be presented for review to determine whether the triggering alert event is a true or false alert, or whether additional data from the visual image stream is required to make such a determination. If it is determined from the presented visual essence that additional data is required to make the true or false determination, then additional data is presented from the visual image stream for review. | 02-05-2015 |
20150036883 | SYSTEM AND METHOD FOR IDENTIFYING A PARTICULAR HUMAN IN IMAGES USING AN ARTIFICIAL IMAGE COMPOSITE OR AVATAR - A system and method for detecting a particular human in a plurality of images of humans may include one or more processors to receive input data describing the appearance of the particular human via a graphical user interface (GUI). An image representing the particular human may be generated based on the input data. This may take the form of an avatar or artificial image. This artificial or processor-generated image may be used to identify one or more of the humans in said plurality of images as a candidate for the particular human. | 02-05-2015 |
20150036884 | RECOGNIZING GESTURES CAPTURED BY VIDEO - Motions and gestures can be detected using a video capture element of a computing device even when the video capture element is not able to accurately capture the motion. Information about the background in the image information can be determined, and the way in which that background information is occluded can be used to determine the motion. In at least some embodiments, edges are detected in the video information. Images of foreground objects can then be isolated from edges of background images by comparing histograms of multiple frames of video. The remaining data is indicative of a direction and speed of motion, which can be used to infer a determined gesture even though that gesture was not visible in the captured video information. | 02-05-2015 |
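The occlusion idea in this abstract can be illustrated on binary edge masks: motion is inferred from where background edges disappear rather than from the foreground object itself. The histogram comparison machinery of the abstract is reduced here to tracking the centroid of the occluded columns, a deliberate simplification:

```python
import numpy as np

def occlusion_centroid(background_edges, frame_edges):
    # Background edges that disappear in the current frame are taken to be
    # occluded by the moving foreground object.
    occluded = background_edges & ~frame_edges
    cols = np.where(occluded.any(axis=0))[0]
    return None if cols.size == 0 else float(cols.mean())

def infer_swipe(background_edges, frames_edges):
    # Infer gesture direction from how the occluded region drifts across
    # the frame sequence, without the gesture being directly visible.
    centroids = [occlusion_centroid(background_edges, f) for f in frames_edges]
    centroids = [c for c in centroids if c is not None]
    if len(centroids) < 2:
        return "none"
    return "right" if centroids[-1] > centroids[0] else "left"
```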
20150043770 | SPECKLE SENSING FOR MOTION TRACKING - Speckle sensing for motion tracking is described, for example, to track a user's finger or head in an environment to control a graphical user interface, to track a hand-held device, to track digits of a hand for gesture-based control, and to track 3D motion of other objects or parts of objects in a real-world environment. In various examples a stream of images of a speckle pattern from at least one coherent light source illuminating the object, or which is generated by a light source at the object to be tracked, is used to compute an estimate of 3D position of the object. In various examples the estimate is transformed using information about position and/or orientation of the object from another source. In various examples the other source is a time of flight system, a structured light system, a stereo system, a sensor at the object, or other sources. | 02-12-2015 |
20150043771 | HYBRID METHOD AND SYSTEM OF VIDEO AND VISION BASED ACCESS CONTROL FOR PARKING STALL OCCUPANCY DETERMINATION - Hybrid methods, systems and processor-readable media for video and vision based access control for parking occupancy determination. One or more image frames of a parking area of interest can be acquired from among two or more regions of interest defined with respect to the parking area of interest. The regions of interest can be analyzed for motion detection or image content change detection. An image content classification operation can be performed with respect to a first region of interest among the regions of interest based on the result of the image content change detection. An object tracking operation can then be performed with respect to a second region of interest among the regions of interest if the result of the image content classification operation indicates a presence of one or more objects of interest within the parking area of interest. | 02-12-2015 |
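The first-stage change detector that gates the heavier classification and tracking stages for each region of interest might look like the following; the pixel and area thresholds are assumptions:

```python
import numpy as np

def roi_content_changed(prev, curr, roi, pixel_threshold=15, changed_fraction=0.05):
    # roi = (x0, y0, x1, y1). Cheap image-content change detector run per
    # region of interest; only when it fires would the more expensive
    # classification and object-tracking stages be invoked.
    x0, y0, x1, y1 = roi
    a = np.asarray(prev, dtype=float)[y0:y1, x0:x1]
    b = np.asarray(curr, dtype=float)[y0:y1, x0:x1]
    return float((np.abs(a - b) > pixel_threshold).mean()) > changed_fraction
```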
20150043772 | METHOD OF, AND APPARATUS FOR, LANDMARK LOCATION - An apparatus for locating a landmark in a set of image data comprises a landmark location unit that is configured, for each of a plurality of image data items, to obtain from a first two-class classifier a first classification of the image data item as foreground or background, to obtain from a second two-class classifier a second classification of the image data item as foreground or background, and to combine the first classification and the second classification to obtain a combined classification, and wherein the landmark location unit is further configured to use the combined classifications for the plurality of image data items to determine a location for the landmark. | 02-12-2015 |
20150043773 | Accurate Positioning System Using Attributes - A Position Identification Solution offers a way to determine the position of a Mobile Device by defining a set of known positions and an associated set of objects, shapes, or attributes. A Mobile Device determines its position by scanning an object, shape, or attribute using an included camera, and a Mobile Application running on the Mobile Device recognizes a specific object, shape, or attribute, and determines a corresponding position, which is used to compute the position of the Mobile Device. The Position Identification Solution may use shapes, colors, or combinations of shapes and colors. The Position Identification Solution may be used together with other positioning systems in a Hybrid Positioning System to compute the position of the Mobile Device with increased accuracy. | 02-12-2015 |
20150043774 | Automatic Planning For Medical Imaging - Disclosed herein is a framework for facilitating automatic planning for medical imaging. In accordance with one aspect, the framework receives first image data of a subject. One or more imaging parameters may then be derived using a geometric model and at least one reference anatomical primitive detected in the first image data. The geometric model defines a geometric relationship between the detected reference anatomical primitive and the one or more imaging parameters. The one or more imaging parameters may be presented, via a user interface, for use in acquisition, reconstruction or processing of second image data of the subject. | 02-12-2015 |
20150043775 | OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD AND PROGRAM - A technique is proposed for enabling stable detection of an object even when the contrast of an image is lowered overall or partially. | 02-12-2015 |
20150043776 | Timing System and Method - A timing system that includes a glyph associated with an object to be timed and at least one camera for capturing images of the glyph or associated object. A computer generates a virtual line, associates the virtual line with at least one of the images, and determines when the glyph or associated object intersects, crosses or has crossed the virtual line. A database records the identity of the glyph and the time that the glyph or associated object intersected or crossed the virtual line. The invention also relates to a related method for determining the time a glyph or object associated therewith passes a predetermined point or line. | 02-12-2015 |
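Determining when a tracked glyph has crossed a virtual line, as this entry describes, reduces to a side-of-line test between consecutive positions. The sketch below is a hypothetical implementation using the sign of the 2-D cross product; the function names and the exact crossing rule are assumptions.

```python
# Hypothetical sketch of the virtual-line test: a crossing is registered when
# successive tracked positions fall on opposite sides of the line through
# points a and b (the sign of the 2-D cross product flips).

def side(a, b, p):
    """Signed area: >0 left of a->b, <0 right of it, 0 on the line."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed(a, b, p_prev, p_curr):
    s0, s1 = side(a, b, p_prev), side(a, b, p_curr)
    return s0 * s1 < 0 or (s0 != 0 and s1 == 0)   # flipped sides, or landed on line

a, b = (0.0, 0.0), (0.0, 10.0)        # vertical finish line at x = 0
print(crossed(a, b, (-3.0, 5.0), (2.0, 5.0)))   # True: moved across the line
print(crossed(a, b, (-3.0, 5.0), (-1.0, 5.0)))  # False: stayed on one side
```

A timing system would pair a positive result with the frame timestamps, interpolating between the two samples to estimate the instant of crossing.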
20150043777 | THREE DIMENSIONAL DETECTING DEVICE AND METHOD FOR DETECTING IMAGES THEREOF - A method includes generating a disparity map according to two sets of image data, identifying at least one object in the disparity map, mapping the at least one object onto a plane view, tracking at least one object on the plane view, and providing a robust algorithm for determining a cross-line time interval and a cross-line degree. The robust algorithm includes detecting whether the at least one object enters a predetermined region when the at least one object crosses a predetermined boundary. | 02-12-2015 |
20150043778 | SYSTEM AND METHOD FOR CONTEXTUALLY INTERPRETING IMAGE SEQUENCES - A system and method for contextually interpreting image sequences are provided. The method comprises receiving video from one or more video sources, and generating one or more questions associated with one or more portions of the video based on at least one user-defined objective. The method further comprises sending the one or more portions of the video and the one or more questions to one or more assistants, receiving one or more answers to the one or more questions from the one or more assistants, and determining a contextual interpretation of the video based on the one or more answers and the video. | 02-12-2015 |
20150049902 | Recognition Procedure for Identifying Multiple Items in Images - The disclosure includes a system and method for identifying multiple items in an image. An image recognition application receives a query image of items, computes features of the query image, for each feature finds an indexed image with closest matched features in a database, determines that the shape of the matched features is geometrically consistent, determines whether the query image matches the indexed image, responsive to the query image matching the indexed image, returns inliers, determines a region of interest where the match was found in the query image, removes inliers from the set of features to reduce the set of features and returns all matches found when the query image fails to match the indexed image. | 02-19-2015 |
20150049903 | METHODS AND SYSTEMS FOR DETECTING PATCH PANEL PORTS FROM AN IMAGE IN WHICH SOME PORTS ARE OBSCURED - A method of estimating one or more dimensions of a patch panel may include receiving an image of a patch panel that comprises a plurality of ports and one or more gaps, extracting, by a computing device, a region of interest from the received image, detecting, by the computing device, one or more line segments from the region of interest, determining whether one or more candidate ports can be identified based on at least a portion of the line segments, and in response to determining that one or more candidate ports can be identified, identifying one or more candidate ports, and determining, by the computing device, a gap length associated with the identified candidate ports. | 02-19-2015 |
20150049904 | REFLECTION BASED TRACKING SYSTEM - In one embodiment, a processor can receive data representing a view reflected by a mirror of a plurality of mirrors. The plurality of mirrors may be configured in a space to reflect a plurality of views of structures in the space. The mirror of the plurality of mirrors may include a uniquely identifiable feature distinguishable from other objects in the space. The processor can identify the mirror of the plurality of mirrors according to the uniquely identifiable feature. The processor can also determine an attribute of the structures according to the identified mirror and the data representing the view reflected by the mirror. | 02-19-2015 |
20150049905 | MAP GENERATION FOR AN ENVIRONMENT BASED ON CAPTURED IMAGES - Systems and methods for map generation for an environment based on captured images are disclosed. According to an aspect, a method includes capturing a first image of an environment. The method also includes identifying a reference in the first image. Further, the method includes generating, based on the identified reference, a map of the environment to use for physically orienting a computing device within the environment based on a second image including the reference. | 02-19-2015 |
20150049906 | HUMAN IMAGE TRACKING SYSTEM, AND HUMAN IMAGE DETECTION AND HUMAN IMAGE TRACKING METHODS THEREOF - Human image detection and tracking systems and methods are disclosed. A human image detection method comprises receiving depth image data from a depth image sensor by an image processing unit, removing a background image of the depth image sensor and outputting a foreground image by the image processing unit, receiving the foreground image and operating a graph-based segment on the foreground image to obtain a plurality of graph blocks by a human image detection unit, determining whether a potential human region exists in the graph blocks, determining whether the potential human region is a potential human head region, determining whether the potential human head region is a real human head region, and regarding the position of the real human head region as the human image position by the human image detection unit if the potential human head region is the real human head region. | 02-19-2015 |
20150049907 | HIGH ACCURACY IMAGE MATCHING APPARATUS AND HIGH ACCURACY IMAGE MATCHING METHOD USING A SKIN MARKER AND A FEATURE POINT IN A BODY - A high accuracy image matching apparatus and a high accuracy image matching method using a skin marker and a feature point in a body, which uses ultrasonic probe or a radiation probe as a portion of the marker for image matching, are disclosed. As an embodiment, the high accuracy image matching apparatus and the high accuracy image matching method using a skin marker and a feature point in a body, use the ultrasonic probe or the radiation probe as a portion of marker for image matching indicating an anatomical feature point to reduce an error in operation point, and more precise operation can be possible and better operation result can be obtained by using the ultrasonic probe or a radiation probe. | 02-19-2015 |
20150049908 | SUBJECT CHANGE DETECTION SYSTEM AND SUBJECT CHANGE DETECTION METHOD - The invention detects a change in a subject by detecting the subject from an image, acquiring a feature quantity distribution that indicates shape information of the detected subject, accumulating the shape information that is indicated by the acquired feature quantity distribution, and comparing the shape information from a predetermined period of time earlier with the current shape information by using the accumulated shape information. Here, the invention acquires the feature quantity distribution of the subject from a processing target area extracted from an image area that includes the subject. The invention detects a change in the subject by using accumulated shape change information acquired from the shape information. The invention detects a change in the subject by using averaged shape change information obtained by averaging the shape change information. | 02-19-2015 |
20150049909 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND PROGRAM - Provided is an information processing device including an image acquisition unit configured to acquire a captured image, a change detection unit configured to detect a change in a state of a subject in a network service recognized from the captured image, and a depiction change unit configured to change a depiction of the subject shown in the captured image in a case where the change detection unit detects the change in the state. | 02-19-2015 |
20150049910 | IMAGING WORKFLOW USING FACIAL AND NON-FACIAL FEATURES - A method for determining an impact score for a digital image includes providing the digital image wherein the digital image includes faces; using a processor to determine an image feature for the faces; using the processor to compute an object impact score for the faces, wherein the object impact score is based at least upon one of the determined image features; weighting the object impact score for the faces based on one of the determined image features for a face; using the processor to compute an impact score for the digital image by combining the weighted object impact scores for the faces in the image; and storing the computed impact score in a processor accessible memory. | 02-19-2015 |
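The score-then-weight-then-combine flow in this abstract can be sketched in a few lines. This is a minimal illustration under assumed feature names (`sharpness`, `smile`, `area`); the actual features, weights, and combination rule belong to the patent and are not reproduced here.

```python
# A minimal sketch, under assumed feature names, of combining per-face scores
# into one image impact score: score each face from its features, weight it
# (here by relative face area), then sum the weighted scores.

def face_impact(face):
    # assumed features: sharpness and smile strength, both in [0, 1]
    return 0.6 * face["sharpness"] + 0.4 * face["smile"]

def image_impact(faces):
    total_area = sum(f["area"] for f in faces)
    return sum(face_impact(f) * (f["area"] / total_area) for f in faces)

faces = [
    {"sharpness": 1.0, "smile": 0.5, "area": 300},   # large, sharp face
    {"sharpness": 0.2, "smile": 0.0, "area": 100},   # small, blurry face
]
print(round(image_impact(faces), 3))   # 0.63: dominated by the large face
```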
20150049911 | METHOD AND DEVICE FOR SAFEGUARDING A HAZARDOUS WORKING AREA OF AN AUTOMATED MACHINE - A method and device for safeguarding a hazardous working area includes a camera system for generating successive 3-D images of the working area, and an evaluation unit comprising a fail-safe foreign object detector, a classifier, a person tracker and a comparator. The foreign object detector generates a first signal comprising a first item of position information, representative of the position of a foreign object in the protection zone. The classifier attempts to identify the foreign object as a person. The person tracker tracks an identified person using a series of current 3-D images and, after each new 3-D image, determines a second item of position information representing the current position of the identified person. If the position of the foreign object according to the first item of position information and the position of the identified person are different from one another, a control signal for stopping the machine is generated. | 02-19-2015 |
20150049912 | IMAGE PROCESSING TO PREVENT ACCESS TO PRIVATE INFORMATION - A processing resource receives original image data by a surveillance system. The original image data captures at least private information and occurrence of activity in a monitored region. The processing resource applies one or more transforms to the original image data to produce transformed image data. Application of the one or more transforms sufficiently distorts portions of the original image data to remove the private information. The transformed image data includes the distorted portions to prevent access to the private information. However, the distorted portions of the video include sufficient image detail to discern occurrence of the activity in the monitored region. | 02-19-2015 |
20150055820 | MODEL FOR MAPPING SETTLEMENTS - A programmable media includes a graphical processing unit in communication with a memory element. The graphical processing unit is configured to detect one or more settlement regions from a high resolution remote sensed image based on the execution of programming code. The graphical processing unit identifies one or more settlements through the execution of the programming code that executes a multi-instance learning algorithm that models portions of the high resolution remote sensed image. The identification is based on spectral bands transmitted by a satellite and on selected designations of the image patches. | 02-26-2015 |
20150055821 | MULTI-TRACKER OBJECT TRACKING - Systems and approaches are provided for tracking an object using multiple tracking processes. By combining multiple lightweight tracking processes, object tracking can be robust, use a limited amount of power, and enable a computing device to respond to input corresponding to the motion of the object in real time. The multiple tracking processes can be run in parallel to determine the position of the object by selecting the results of the best performing tracker under certain heuristics or combining the results of multiple tracking processes in various ways. Further, other sensor data of a computing device can be used to improve the results provided by one or more of the tracking processes. | 02-26-2015 |
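One simple way to combine the outputs of several lightweight trackers, as this entry describes, is a robust per-axis statistic over their position estimates. The sketch below uses a median, which is only one of the combination strategies the abstract alludes to; the data shapes are assumptions.

```python
# Sketch of one way to fuse several lightweight trackers: gather each
# tracker's (x, y) estimate and take a per-axis median, which tolerates a
# single badly drifted tracker.

def fuse_positions(estimates):
    """estimates: list of (x, y) tuples from independent trackers."""
    def median(vals):
        vals = sorted(vals)
        n = len(vals)
        mid = n // 2
        return vals[mid] if n % 2 else (vals[mid - 1] + vals[mid]) / 2
    return (median([e[0] for e in estimates]),
            median([e[1] for e in estimates]))

# three trackers roughly agree; a fourth has drifted badly
print(fuse_positions([(100, 50), (102, 51), (101, 49), (300, 200)]))
```

Selecting the single best tracker under a performance heuristic, as the abstract also mentions, would replace the median with an argmax over per-tracker confidence scores.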
20150055822 | METHOD AND APPARATUS FOR USER RECOGNITION - A method for user recognition, comprising receiving user information from an information unit of a user; detecting the user face in a captured user image; and recognizing the user identity in the captured image according to a predetermined gesture associated with the user information and performed by the user. | 02-26-2015 |
20150055823 | IMAGE INSPECTION METHOD AND INSPECTION REGION SETTING METHOD - An image inspection method executed by an image inspection apparatus includes an acquisition step of acquiring an inspection target object image obtained by capturing an image of an inspection target object, a setting reading step of reading, from a storage device that stores inspection region defining information in advance, the inspection region defining information, an inspection region extraction step of extracting, as an inspection region image, a portion to be an inspection region from the inspection target object image, based on the inspection region defining information, and an inspection processing step of performing inspection on the inspection target object by analyzing the inspection region image. The inspection region defining information comprises information defining an initial contour of the inspection region and information defining a range based on the initial contour as a search range for searching a contour of the inspection region. | 02-26-2015 |
20150055824 | METHOD OF DETECTING A MAIN SUBJECT IN AN IMAGE - A method for detecting a main subject in an image comprises the steps of: (i) computing a plurality of saliency features from the image | 02-26-2015 |
20150055825 | OBJECT DETECTION APPARATUS AND STORAGE MEDIUM - Important information about an object is detected using less arithmetic processing. An object detection unit generates an edge image from a color image. The object detection unit evaluates symmetry of an image included in the edge image by performing processing in accordance with the position of a target pixel. The object detection unit identifies a symmetry center pixel forming an object having symmetry. The object detection unit detects an object width for each symmetry center pixel. The object detection unit identifies the width of the object in the vertical direction based on the width of the symmetry center pixels in the vertical direction, and identifies the width of the object in the horizontal direction based on the object width identified for each symmetry center pixel. | 02-26-2015 |
20150055826 | DUST REMOVAL TECHNOLOGY FOR DRIVER VISION LEVERAGE - A system includes a way of improving an image obscured by airborne particles. The system includes a decomposition processor and an image generation processor. The decomposition processor decomposes an object of interest moving in a first image at a first rate of speed into at least one first subspace vector. This decomposition processor also decomposes fine particles moving at a different rate of speed than the object in the first image into at least one second subspace vector. The fine particles obscure the object of interest in a second image. The image generation processor generates based, at least in part on the first subspace vector and the second subspace vector, an image of the object without some of the fine particles obscuring the object of interest. | 02-26-2015 |
20150055827 | Motion Capture While Fishing - Various implementations described herein are directed to a non-transitory computer readable medium having stored thereon computer-executable instructions which, when executed by a computer, may cause the computer to automatically receive motion capture data recorded by one or more cameras. The computer may analyze the motion capture data to detect a cast, catch, or bite. The computer may store a record of the cast, catch, or bite. | 02-26-2015 |
20150055828 | MOVING OBJECT DETECTION METHOD AND SYSTEM - A moving object detection method includes acquiring two depth image frames including depth information, which are obtained by continuously taking images of a moving object, the two depth image frames including a present depth image frame and at least one past depth image frame; dividing each of the two depth image frames into a plurality of blocks; calculating differences between numbers of pixels positioned in respective different depth areas in each of the plurality of blocks in the present depth image frame, and numbers of pixels positioned in respective different depth areas in each of the corresponding plurality of blocks in each of the at least one past depth image frame, which correspond to the plurality of blocks in the present depth image frame; and detecting a moving block in the present depth image frame based on the calculated difference and constituting the detected moving object. | 02-26-2015 |
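The per-block comparison described above, counting pixels per depth area in each block and diffing the counts between frames, can be sketched directly. The bin edges, threshold, and flattened-block representation below are assumptions for illustration, not the claimed parameters.

```python
# Illustrative sketch (parameter choices are assumptions): split a depth frame
# into blocks, count pixels per depth area in each block, and flag a block as
# moving when its per-area counts differ enough from the previous frame's.

def depth_histogram(block, bin_edges):
    counts = [0] * (len(bin_edges) + 1)
    for d in block:
        i = sum(1 for e in bin_edges if d >= e)   # index of the depth area
        counts[i] += 1
    return counts

def moving_blocks(prev_blocks, curr_blocks, bin_edges, thresh):
    moving = []
    for idx, (pb, cb) in enumerate(zip(prev_blocks, curr_blocks)):
        diff = sum(abs(p - c) for p, c in
                   zip(depth_histogram(pb, bin_edges),
                       depth_histogram(cb, bin_edges)))
        if diff >= thresh:
            moving.append(idx)
    return moving

edges = [100, 200]                      # near / middle / far depth areas
prev = [[50] * 16, [150] * 16]          # two flattened 4x4 blocks
curr = [[50] * 16, [250] * 16]          # object in block 1 moved farther away
print(moving_blocks(prev, curr, edges, thresh=8))   # [1]
```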
20150055829 | METHOD AND APPARATUS FOR TRACKING OBJECT - A method and an apparatus for tracking an object are disclosed. The method includes determining a first position by a first tracking template and determining a second position by a second tracking template, the first and second tracking templates being formed based on first and second feature sets, respectively, the first feature set being different from the second feature set, and the first feature set and the second feature set including one or more features; and determining a final position based on the first position and the second position, wherein the first tracking template is updated for each of a predetermined number of frames, the second tracking template is updated based on a predetermined rule, the second tracking template and the first tracking template are independently updated, and the update frequency of the second tracking template is lower than the first tracking template. | 02-26-2015 |
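The key structural idea in this entry, two independently updated templates with different update frequencies, can be sketched as a small class. The averaging of the two positions and the modulo-based update rules below are simplifying assumptions; the abstract leaves both the final-position rule and the second template's "predetermined rule" open.

```python
# A hedged sketch of the two-template scheme: two trackers with different
# feature sets are updated independently, the first every N frames and the
# second less often; their positions are averaged here for brevity.

class DualTemplateTracker:
    def __init__(self, update_every_a=5, update_every_b=20):
        self.update_every_a = update_every_a    # first template: frequent
        self.update_every_b = update_every_b    # second template: rarer rule
        self.updates_a = self.updates_b = 0

    def step(self, frame_idx, pos_a, pos_b):
        """pos_a / pos_b: positions found by the two template matchers."""
        if frame_idx % self.update_every_a == 0:
            self.updates_a += 1                 # re-learn template A
        if frame_idx % self.update_every_b == 0:
            self.updates_b += 1                 # re-learn template B
        return ((pos_a[0] + pos_b[0]) / 2, (pos_a[1] + pos_b[1]) / 2)

t = DualTemplateTracker()
for i in range(1, 101):
    final = t.step(i, (i, 0), (i + 2, 0))
print(t.updates_a, t.updates_b, final)   # template A updated 4x as often as B
```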
20150055830 | AUTOMATICALLY DETERMINING FIELD OF VIEW OVERLAP AMONG MULTIPLE CAMERAS - Field of view overlap among multiple cameras are automatically determined as a function of the temporal overlap of object tracks determined within their fields-of-view. Object tracks with the highest similarity value are assigned into pairs, and portions of the assigned object track pairs having a temporally overlapping period of time are determined. Scene entry points are determined from object locations on the tracks at a beginning of the temporally overlapping period of time, and scene exit points from object locations at an ending of the temporally overlapping period of time. Boundary lines for the overlapping fields-of-view portions within the corresponding camera fields-of-view are defined as a function of the determined entry and exit points in their respective fields-of-view. | 02-26-2015 |
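The temporal-overlap step above, finding the shared time window of a paired track and reading off entry and exit points at its ends, can be sketched with tracks stored as timestamp-to-position maps. The data shapes and helper names are assumptions for the example.

```python
# Sketch, with assumed data shapes, of the temporal-overlap step: given one
# track per camera as {timestamp: (x, y)}, find the shared time window, then
# read off scene entry/exit points at its start and end in each view.

def overlap_window(track_a, track_b):
    start = max(min(track_a), min(track_b))
    end = min(max(track_a), max(track_b))
    return (start, end) if start <= end else None

def entry_exit(track, window):
    start, end = window
    return track[start], track[end]

cam1 = {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (3, 0)}
cam2 = {2: (9, 5), 3: (8, 5), 4: (7, 5)}
w = overlap_window(cam1, cam2)
print(w, entry_exit(cam1, w), entry_exit(cam2, w))
```

Accumulating such entry/exit points over many paired tracks would give the point sets from which the boundary lines of the overlapping field-of-view regions are fitted.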
20150063627 | METHODS AND APPARATUS TO IDENTIFY COMPONENTS FROM IMAGES OF THE COMPONENTS - Methods and apparatus to identify components from images of components are disclosed. An example method includes generating first keypoint signatures of an object from at least one image of the object and identifying the object using the first keypoint signatures. Identifying the object comprises: comparing the first keypoint signatures to assembly reference keypoint signatures at a first level in a hierarchical database, the assembly reference keypoint signatures comprising keypoint signatures of multiple views of assemblies containing sets of components; and based on the comparison of the first keypoint signatures to the assembly reference keypoint signatures, comparing the first keypoint signatures to component keypoint signatures at a second level in a hierarchical database lower than the first level, the component reference keypoint signatures comprising keypoint signatures of the components. | 03-05-2015 |
20150063628 | ROBUST AND COMPUTATIONALLY EFFICIENT VIDEO-BASED OBJECT TRACKING IN REGULARIZED MOTION ENVIRONMENTS - A method and system for video-based object tracking includes detecting an initial instance of an object of interest in video captured of a scene being monitored and establishing a representation of a target object from the initial instance of the object. The dominant motion trajectory characteristics of the target object are then determined, and a frame-by-frame location of the target object can be collected in order to track the target object in the video. | 03-05-2015 |
20150063629 | GENERATION OF HIGH RESOLUTION POPULATION DENSITY ESTIMATION CELLS THROUGH EXPLOITATION OF HIGH RESOLUTION SATELLITE IMAGE DATA AND LOW RESOLUTION POPULATION DENSITY DATA SETS - Utilities (e.g., systems, methods, etc.) for automatically generating high resolution population density estimation data sets through manipulation of low resolution population density estimation data sets with high resolution overhead imagery data (e.g., such as overhead imagery data acquired by satellites, aircrafts, etc. of celestial bodies). Stated differently, the present utilities make use of high resolution overhead imagery data to determine how to distribute the population density of a large, low resolution cell (e.g., 1000 m) among a plurality of smaller, high resolution cells (e.g., 100 m) within the larger cell. | 03-05-2015 |
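The core redistribution step here, apportioning a low-resolution cell's population among its high-resolution subcells, can be sketched as proportional allocation. The idea of a per-subcell "settlement weight" derived from imagery is an assumption standing in for whatever evidence the utilities actually extract.

```python
# A minimal sketch, under the assumption that imagery yields a per-subcell
# "settlement weight": the low-resolution cell's population is distributed
# among its high-resolution subcells in proportion to those weights.

def redistribute(total_population, weights):
    s = sum(weights)
    if s == 0:                            # no settlement detected anywhere:
        return [total_population / len(weights)] * len(weights)   # spread evenly
    return [total_population * w / s for w in weights]

# one 1000 m cell with 900 people, split into 9 subcells of 100 m; weights
# could come from built-up area detected in the imagery (values assumed)
weights = [0, 0, 3, 1, 0, 2, 0, 0, 3]
cells = redistribute(900, weights)
print(cells[2], sum(cells))   # densest subcell gets 300.0; total is preserved
```

Note the invariant worth testing in any real implementation: the subcell populations must sum back to the low-resolution cell's total.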
20150063630 | APPARATUS AND METHOD FOR DETECTING OBSTACLE - An apparatus and method for detecting an obstacle include a camera to photograph first and second images at different points in time successively. A calculator is configured to calculate a movement distance and a rotation amount of a vehicle by comparing the two images photographed by the camera with each other. A rotation amount compensator is configured to compensate for the rotation amount of the first image based on the second image. A difference value calculator is configured to calculate a difference value between the first image of which the rotation amount is compensated for and the second image based on the calculated movement distance of the vehicle. An obstacle detector extracts a region having the difference value exceeding an expectation value to detect the obstacle. | 03-05-2015 |
20150063631 | DYNAMIC IMAGE ANALYZING SYSTEM AND OPERATING METHOD THEREOF - A dynamic image analyzing system is provided, comprising: a photographing unit for taking pictures to acquire images, and a processing unit. The processing unit includes a space information analyzing module, a virtual frame forming module, and a transforming module. The space information analyzing module is used to acquire space information of the user in the world coordinate system. The virtual frame forming module is used to access the space information of the user in the world coordinate system and to span a virtual operating frame in front of the user, wherein the virtual operating frame comprises a plurality of projecting coordinate systems distributed in front of the user, with an angle between every two projecting coordinate systems, and the projecting coordinate systems are sequentially sorted into a semi-arc surface. The transforming module is used to compute the position of the user's hands in the world coordinate system into the projecting position in the projecting coordinate systems. | 03-05-2015 |
20150063632 | SYSTEMS, DEVICES AND METHODS FOR TRACKING OBJECTS ON A DISPLAY - Systems, devices and methods for improved tracking with an electronic device are disclosed. The disclosures employ advanced exposure compensation and/or stabilization techniques. The tracking features may therefore be used in an electronic device to improve tracking performance under dramatically changing lighting conditions and/or when exposed to destabilizing influences, such as jitter. Historical data related to the lighting conditions and/or to the movement of a region of interest containing the tracked object are advantageously employed to improve the tracking system under such conditions. | 03-05-2015 |
20150063633 | SENSOR LOCATION AND LOGICAL MAPPING SYSTEM - A system to locate and map a data collection device includes at least one image. The image may be included with a subsystem or a component monitored by the system. A data collection device, such as a smart sensor, is configured to detect a stimulus and to capture the at least one image. The smart sensor is further configured to output an image signal indicating the at least one image. A main control module is in electrical communication with the at least one smart sensor. The main control module is configured to determine the image based on the image signal, and compare the at least one image to a stored image. The main control module is further configured to authenticate the at least one image in response to the at least one image matching the stored image. | 03-05-2015 |
20150063634 | SYSTEM AND METHOD FOR DETECTING CARGO CONTAINER SEALS - Systems and methods are disclosed for detecting container seals. In one implementation, a processing device receives one or more images and processes the one or more images to identify one or more areas of interest within at least one of the one or more images, the one or more areas of interest including one or more areas of the one or more images within which a container seal is relatively likely to be present. The processing device compares the one or more identified areas of interest with one or more templates, each of the one or more templates being associated with one or more respective seal types. The processing device identifies, based on the comparison, one or more matches between the one or more identified areas and the one or more templates, and provides, based on the identification of the one or more matches, an identification of the container seal. | 03-05-2015 |
20150063635 | METHOD AND APPARATUS FOR EVALUATING RESULTS OF GAZE DETECTION - The invention relates to a method and an apparatus for evaluating results of gaze detection, wherein these results are present or are obtained in the form of information which defines, for each of a multiplicity of successive times, a viewing direction detected at this time and a focal point identified thereby in a scene image assigned to this time. For this purpose, the invention provides for the following steps to be carried out: (a) a temporal change in the viewing direction and/or the focal point is evaluated in order to identify different viewing events which differ from one another by different speeds of an eye movement, wherein saccades and fixations and/or pursuit movements are detected as different types of viewing events and the identified viewing events are classified according to the type thereof; (b) a period of time spanned by the times is divided into intervals in such a manner that an interval corresponding to a duration of the particular viewing event is assigned to each of the identified viewing events, wherein at least some of these intervals each contain a sequence of a plurality of times; (c) precisely one of the times or a true subset of the times is selected in each case from each of the intervals assigned to a fixation or a pursuit movement; and (d) for each of these selected times, the focal point identified in the scene image assigned to the particular time is mapped to a position corresponding to this focal point in a reference image. | 03-05-2015 |
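The speed-based event split described above can be sketched as a velocity-threshold classifier. Treating the rule as an I-VT-style threshold is an assumption about the concrete criterion; the threshold value and the run-merging into intervals are likewise illustrative.

```python
# Sketch of the speed-based event split (an I-VT-style classifier, which is
# an assumption about the concrete rule): gaze samples whose angular speed
# exceeds a threshold are saccades, the rest fixations, and consecutive
# same-type samples are merged into one interval of (label, start, end).

def classify_events(speeds, saccade_thresh=30.0):
    """speeds: per-sample eye speeds (deg/s). Returns (label, start, end) runs."""
    events = []
    for i, s in enumerate(speeds):
        label = "saccade" if s > saccade_thresh else "fixation"
        if events and events[-1][0] == label:
            events[-1] = (label, events[-1][1], i)   # extend current interval
        else:
            events.append((label, i, i))
    return events

speeds = [5, 4, 6, 120, 140, 3, 2, 4]
print(classify_events(speeds))
```

The fixation intervals returned here are exactly the ones from which step (c) of the abstract would select a representative time for mapping into the reference image.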
20150063636 | METHOD AND APPARATUS FOR PROCESSING DIGITAL IMAGES - A method and apparatus for processing digital images are provided. The method includes: recognizing faces from a plurality of images of a jumping subject; determining respective priorities for the plurality of images of the jumping subject, wherein an index of the priorities is based on face recognition information; and aligning the plurality of images of the jumping subject based on the priorities. | 03-05-2015 |
20150063637 | IMAGE RECOGNITION METHOD AND ROBOT - An image recognition method according to one exemplary aspect of the present invention including the steps of: acquiring a shooting image generated by capturing an image of an object using an image generating device; acquiring subject distance information indicating a distance from the object to the image generating device at a target pixel in the shooting image; extracting an image pattern corresponding to the acquired subject distance information from a plurality of image patterns which are created for detecting one detection object in advance and are associated with the different distance information, respectively, and performing a pattern matching using the extracted image pattern against the shooting image. | 03-05-2015 |
20150063638 | COMMODITY REGISTRATION APPARATUS AND COMMODITY REGISTRATION METHOD - In accordance with one embodiment, a commodity registration apparatus recognizes and registers a commodity on the basis of a commodity image captured by an image capturing section, and detects whether or not there is a barcode in the captured commodity image. In a case in which a barcode is detected, the commodity-recognition registration based on the captured commodity image is restrained. This prevents commodity registration processing based on image recognition from being carried out even though a barcode is attached, which reduces wasted processing time. | 03-05-2015 |
20150063639 | COMMODITY REGISTRATION APPARATUS AND COMMODITY REGISTRATION METHOD - A commodity is learnt and stored in an HDD on the basis of a commodity image captured by an image capturing section. Commodity registration is carried out through a key input. A commodity not yet stored in the HDD is stored in the HDD as commodity data when the commodity registration is carried out through a key input; in this way, registration as a learnt commodity is realized. The target commodity captured by the image capturing section is then read from the commodity data stored in the HDD. In this way, the commodity image can be added and learnt when the operator inputs the unregistered commodity through a key operation. | 03-05-2015 |
20150063640 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - A person detection unit in an image processing apparatus receives an image captured by an image capturing unit and detects a person from the input image. An observation target determination unit determines an observation target in the image captured by the image capturing unit according to an action taken by the person detected by the person detection unit. | 03-05-2015 |
20150063641 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE RECORDING DEVICE - An image processing apparatus includes: an abnormality candidate region identifying unit configured to identify a candidate region for an abnormality from an image obtained by imaging inside of a lumen of a living body; a surrounding region determining unit configured to determine a surrounding region surrounding the candidate region; a shape information calculating unit configured to calculate shape information of the candidate region and shape information of the surrounding region in a depth direction with respect to a screen; and an abnormality region determining unit configured to determine whether or not the candidate region is an abnormality, based on a correlation between the shape information of the candidate region and the shape information of the surrounding region. | 03-05-2015 |
20150063642 | Computer-Vision-Assisted Location Check-In - In one embodiment, an uploaded multimedia object comprising a photo image or video is subjected to computer vision algorithms to detect and isolate objects within the multimedia object, and the isolated object is searched against a photographic location database containing images of a plurality of locations. Upon detecting a matching object, the location information associated with the photograph in the database containing the matching object may be leveraged to automatically check the user in to the associated location. | 03-05-2015 |
20150063643 | AUTOMATED, REMOTELY-VERIFIED ALARM SYSTEM WITH INTRUSION AND VIDEO SURVEILLANCE AND DIGITAL VIDEO RECORDING - An automated self-monitored alarm verification solution including at least a premises portion, a server portion, and an end user device portion. Alarm verification includes capturing by an image capture device at least one image in response to a detection event, and transmitting a first data signal including the image to a local signal processing device. The signal processing device transmits a second signal including at least a portion of the image to a remote hosted server according to at least a first set of predetermined parameters. After receiving the second signal, the server transmits a third signal including at least a portion of the image from the hosted server to a user device. Using the user device, a user views the image and indicates a validation status of the alarm based at least in part on the content of the image. Based at least upon either the validation status indicated by the user, or upon a failure to receive a message including a validation status from the user within a predetermined duration of time, the server portion may send an alarm signal to an emergency response service. | 03-05-2015 |
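The server-side verification flow described above reduces to a small decision rule: dispatch when the user confirms the alarm, or when no validation message arrives within the predetermined duration. A minimal sketch, with function and signal names as illustrative assumptions rather than terms from the application:

```python
def server_decision(user_response, timeout_expired):
    """Decide the server's action after forwarding the alarm image to the user.

    user_response: 'valid', 'false_alarm', or None if nothing has arrived yet.
    timeout_expired: True once the predetermined response window has elapsed.
    """
    if user_response == "valid":
        return "dispatch"            # user confirmed the alarm
    if user_response is None and timeout_expired:
        return "dispatch"            # no answer in time: fail safe, dispatch
    if user_response == "false_alarm":
        return "cancel"              # user invalidated the alarm
    return "wait"                    # still inside the response window
```

The notable design point is the timeout branch: silence is treated the same as confirmation, so an unreachable user never suppresses a real alarm.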
20150063644 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 03-05-2015 |
20150063645 | Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. | 03-05-2015 |
20150071487 | DYNAMIC LEARNING FOR OBJECT TRACKING - Techniques described herein relate to mobile computing device technologies, such as systems, methods, apparatuses, and computer-readable media for tracking an object from a plurality of objects. In one aspect, the plurality of objects may be similar. Techniques discussed herein propose dynamically learning information associated with each of the objects and discriminating between objects based on their differentiating features. In one implementation, this may be done by maintaining a database associated with each object and updating the dynamic database while the objects are tracked. The tracker uses algorithmic means to differentiate objects by focusing on the differences among them. For example, in one implementation, the method may weigh the differences between different fingers higher than their associated similarities to facilitate differentiating the fingers. | 03-12-2015 |
20150071488 | IMAGING SYSTEM WITH VANISHING POINT DETECTION USING CAMERA METADATA AND METHOD OF OPERATION THEREOF - A system and method of operation of an imaging system includes: an image sensor for capturing a source image having image metadata; a segment image calculated from the source image; a compass angle calculated for producing a maximum value of an a posteriori probability of the compass angle, the a posteriori probability of the compass angle based on the segment image and the image metadata; an x-axis vanishing point, a y-axis vanishing point, and a z-axis vanishing point calculated based on the compass angle and the image metadata; and a display unit for displaying a display image, the display image based on the source image, the x-axis vanishing point, the y-axis vanishing point, and the z-axis vanishing point. | 03-12-2015 |
20150071489 | Isotropic Feature Matching - A computer-implemented method and apparatus for detecting an object of interest. An edge image is generated from an image of a scene. A sectioned structure comprising a plurality of sections is generated for use in analyzing the edge image. The edge image is analyzed using the sectioned structure to detect a presence of the object of interest in the edge image. | 03-12-2015 |
20150071490 | THREE-DIMENSIONAL OBJECT DETECTION DEVICE AND THREE-DIMENSIONAL OBJECT DETECTION METHOD - A three-dimensional object detection device includes an image capturing unit, an image conversion unit, a three-dimensional object detection unit, a three-dimensional object assessment unit and a control unit. The image conversion unit converts images obtained by the image capturing unit to create bird's-eye view images. The three-dimensional object detection unit detects a presence of a three-dimensional object within a detection area based on differential waveform information or edge information. The stationary three-dimensional object assessment unit assesses whether the detected three-dimensional object is a shadow of a tree along a road traveled by the host vehicle. The three-dimensional object assessment unit assesses whether the three-dimensional object detected is a vehicle within the detection area. The control unit suppresses the assessment that the three-dimensional object is a vehicle when the detected three-dimensional object was determined to be a shadow of a tree along the road traveled by the host vehicle. | 03-12-2015 |
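The differential-waveform branch of a detector like this can be sketched in a few lines: two aligned bird's-eye-view frames are differenced, changed pixels are summed per column to form a waveform, and a tree-shadow assessment suppresses the vehicle verdict. The pixel and waveform thresholds here are illustrative assumptions, not values from the application:

```python
def differential_waveform(prev, curr, pix_thresh=20):
    """Per-column count of pixels that changed by more than pix_thresh between
    two aligned bird's-eye-view frames (lists of equal-length pixel rows)."""
    cols = len(prev[0])
    wave = [0] * cols
    for r in range(len(prev)):
        for c in range(cols):
            if abs(curr[r][c] - prev[r][c]) > pix_thresh:
                wave[c] += 1
    return wave

def assess_vehicle(prev, curr, wave_thresh=3, is_tree_shadow=False):
    """Report a vehicle only when the waveform peak clears the threshold AND
    the detection was not assessed as a roadside tree shadow (control unit)."""
    detected = max(differential_waveform(prev, curr)) >= wave_thresh
    return detected and not is_tree_shadow
```

The suppression gate mirrors the abstract's control unit: a tree-shadow assessment vetoes the vehicle assessment regardless of waveform strength.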
20150071491 | Method And Device For Optically Determining A Position And/Or Orientation Of An Object In Space - The invention relates to a method for optically determining the position and/or orientation of an object in space on the basis of images from at least one camera. | 03-12-2015 |
20150071492 | ABNORMAL BEHAVIOUR DETECTION - Methods and apparatus for determining whether a provided object track represents abnormal behaviour. | 03-12-2015 |
20150071493 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD OF THE INFORMATION PROCESSING APPARATUS, AND STORAGE MEDIUM - An information processing apparatus includes a self-position/self-orientation calculation unit that calculates the self-position and/or self-orientation in a predetermined coordinate system. The calculation is based on a marker in acquired captured image data when it is determined that the marker exists within a predetermined area and is imaged in the captured image data, and is based on received position information and physical amounts measured by sensors for autonomous navigation when the marker does not exist within the predetermined area or is not imaged in the captured image data. | 03-12-2015 |
20150071494 | METHOD AND APPARATUS FOR PROCESSING IMAGES - A method and apparatus for processing an image are provided, whereby a 3-dimensional (3D) hand model corresponding to a hand image obtained by capturing an image of a person's hand may be generated. The method includes preparing a 3D hand model database, receiving a 2-dimensional (2D) hand image obtained by capturing an image of a first hand, and detecting a first 3D hand model corresponding to the 2D hand image by using the 3D hand model database. | 03-12-2015 |
20150071495 | METHOD AND APPARATUS FOR LOCATING INFORMATION FROM SURROUNDINGS - An apparatus includes at least one processor, and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to determine that an input defining a piece of information to be located has been received, determine the location of the information in an environment of the apparatus, and report the location. | 03-12-2015 |
20150078613 | CONTEXT-SENSITIVE GESTURE CLASSIFICATION - Various arrangements for recognizing a gesture are presented. User input may be received that causes a gesture classification context to be applied from a plurality of gesture classification contexts. This gesture classification context may be applied, such as to a gesture analysis engine. After applying the gesture classification context, data indicative of a gesture performed by a user may be received. The gesture may be identified in accordance with the applied gesture classification context. | 03-19-2015 |
20150078614 | METHODS AND SYSTEMS FOR SCENE RECOGNITION - Methods and systems for scene recognition are provided. At least one dark region from an image is searched, and color information for pixels of the at least one dark region is calculated. It is determined whether a proportion of low colorfulness pixels to the pixels of the at least one dark region is greater than a predefined threshold, wherein when the color information of the respective pixel is less than a specific level, the respective pixel is determined as low colorfulness. When the proportion of low colorfulness pixels to the pixels of the at least one dark region is greater than the predefined threshold, a scene corresponding to the image is not determined as a backlight scene. | 03-19-2015 |
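Under the rule above, the classification reduces to a proportion test on the colorfulness of the dark-region pixels. A small sketch, where the colorfulness level and proportion threshold are assumed placeholder values:

```python
def is_backlight_scene(dark_region_colorfulness, level=10.0, proportion_thresh=0.5):
    """dark_region_colorfulness: one colorfulness value per dark-region pixel.

    Mostly colorless dark pixels suggest an ordinary shadow rather than a
    backlit subject, so a high low-colorfulness proportion vetoes 'backlight'.
    """
    if not dark_region_colorfulness:
        return False
    low = sum(1 for c in dark_region_colorfulness if c < level)
    proportion = low / len(dark_region_colorfulness)
    return proportion <= proportion_thresh
```

Note the inverted logic, exactly as the abstract states it: exceeding the threshold rules *out* the backlight classification.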
20150078615 | MARKING AND TRACKING AN AREA OF INTEREST DURING ENDOSCOPY - An area of interest of a patient's organ may be identified based on the presence of a possible lesion during an endoscopic procedure. The location of the area of interest may then be tracked relative to the camera view being displayed to the endoscopist in real-time or near real-time during the endoscopic procedure. If the area of interest is visually marked on the display, the visual marking is moved with the area of interest as it moves within the camera view. If the area of interest moves outside the camera view, a directional indicator may be displayed to indicate the location of the area of interest relative to the camera view to assist the endoscopist in relocating the area of interest. | 03-19-2015 |
20150078616 | Methods Systems Apparatuses Circuits and Associated Computer Executable Code for Video Based Subject Characterization, Categorization, Identification and/or Presence Response - Disclosed are methods, systems, apparatuses, circuits and associated computer executable code for providing video based subject characterization, categorization, identification and/or presence response. According to some embodiments, there is provided a system including a video acquisition module, a video analytics module to extract subject features and a subject presence response module adapted to generate a response to an identification of a specific subject or group of subjects. | 03-19-2015 |
20150078617 | MOBILE TERMINAL AND METHOD FOR GENERATING CONTROL COMMAND USING MARKER ATTACHED TO FINGER - A mobile terminal and a method for generating a control command using a marker attached to a finger are provided. | 03-19-2015 |
20150078618 | SYSTEM FOR TRACKING DANGEROUS SITUATION IN COOPERATION WITH MOBILE DEVICE AND METHOD THEREOF - A system for tracking a dangerous situation in cooperation with a mobile device, and a method thereof, are provided. The system comprises: a first surveillance device installed in a predetermined surveillance area, obtaining image data relating to a target causing the dangerous situation and providing the obtained image data; and a control center receiving the image data from the first surveillance device and, when it is determined based on the received image data that a dangerous situation has occurred, extracting feature data from the image data and transmitting metadata including the extracted feature data to at least one second surveillance device located in neighboring surveillance areas. | 03-19-2015 |
20150078619 | SYSTEM AND METHOD FOR DETECTING OBSTACLES USING A SINGLE CAMERA - The present application provides an obstacle detection system and method thereof. The obstacle detection method comprises: obtaining a first image captured by a camera at a first time point; identifying a vertical edge candidate in the first image, and measuring a first length of the vertical edge candidate based on the first image; obtaining a second image captured by the camera at a second time point; measuring a second length of the vertical edge candidate based on the second image; calculating a difference between the first length and the second length; and comparing the difference with a predetermined length difference threshold; if the difference is greater than the threshold, outputting a message that an obstacle is found. | 03-19-2015 |
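The core comparison is simple enough to sketch directly. The threshold value is an illustrative assumption, and the absolute difference is used since the abstract does not fix a direction of change:

```python
def detect_obstacle(first_length: float, second_length: float,
                    diff_thresh: float = 5.0):
    """Compare a vertical edge candidate's lengths measured at two time points.

    A true obstacle (vertical in the world) changes apparent length differently
    from a flat road marking as the camera moves, so a large change flags it.
    """
    difference = abs(second_length - first_length)
    if difference > diff_thresh:
        return "obstacle found"
    return None
```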
20150078620 | Aircraft, Methods for Providing Optical Information, Method for Transmission of Acoustic Information and Method for Observing or Tracking an Object - Provided is an aircraft having a spherical body which generates buoyancy or which may generate buoyancy when filled with gas, wherein the aircraft further comprises four actuation units arranged on the surface of the body for movement of the aircraft in a translation and/or rotation through air, and at least one camera arranged on or in the surface of the body. Further provided is a method for providing optical information to a person in the environment of a flying aircraft, a method for providing optical information about an object and/or surveying of an object, a method for transmission of acoustic information and a method for observing or tracking an object. | 03-19-2015 |
20150078621 | APPARATUS AND METHOD FOR PROVIDING CONTENT EXPERIENCE SERVICE - A method and apparatus for providing content experience service are disclosed. The apparatus includes a camera device tracking unit, a user behavior tracking unit, a real image acquisition unit, a motion information processing unit, a virtual space control unit, and a virtual multi-image generation unit. The camera device tracking unit collects camera motion information. The user behavior tracking unit collects user motion information. The real image acquisition unit photographs a space including the user, and separates a real image into a foreground and a background using a background key table. The motion information processing unit corrects the camera motion information and the user motion information. The virtual space control unit generates virtual space control information and virtual space information. The virtual multi-image generation unit generates a virtual multi-image, and provides the content experience service based on the generated virtual multi-image. | 03-19-2015 |
20150078622 | Systems and Methods for Image Based Tamper Detection - Various embodiments of the present invention provide systems and methods for monitoring movement, and in particular to systems and methods for monitoring monitor targets, and more particularly, to systems and methods for using images in relation to tamper detection. | 03-19-2015 |
20150078623 | ENHANCED FACE RECOGNITION IN VIDEO - The computational resources needed to perform processes such as image recognition can be reduced by determining appropriate frames of image information to use for the processing. In some embodiments, infrared imaging can be used to determine when a person is looking substantially towards a device, such that an image frame captured at that time will likely be adequate for facial recognition. In other embodiments, sound triangulation or motion sensing can be used to assist in determining which captured image frames to discard and which to select for processing based on any of a number of factors indicative of a proper frame for processing. | 03-19-2015 |
20150086071 | METHODS AND SYSTEMS FOR EFFICIENTLY MONITORING PARKING OCCUPANCY - A system and method for determining parking occupancy by constructing a parking area model based on a parking area, receiving image frames from at least one video camera, selecting at least one region of interest from the image frames, performing vehicle detection on the region(s) of interest, determining that there is a change in parking status for a parking space model associated with the region of interest, and updating parking status information for a parking space associated with the parking space model. | 03-26-2015 |
20150086072 | METHODS AND SYSTEMS FOR MONITORING A WORKER PERFORMING A CROWDSOURCED TASK - The disclosed embodiments illustrate methods and systems for monitoring a worker performing a crowdsourced task being presented on a computing device. The method comprises performing at least one of a facial detection processing or an eye tracking processing, on a video stream captured by a camera of the computing device. An inattention instance of the worker is determined based on at least one of the facial detection processing or the eye tracking processing. Further, the inattention instance is communicated to a crowdsourcing platform, wherein the crowdsourced task is received from the crowdsourcing platform. | 03-26-2015 |
20150086073 | IMAGE FRAME PROCESSING INCLUDING USAGE OF ACCELERATION DATA IN ASSISTING OBJECT LOCATION - Apparatuses, methods and storage medium associated with computing, including processing of image frames, are disclosed herein. In embodiments, an apparatus may include an accelerometer and an image processing engine having an object tracking function. The object tracking function may be arranged to track an object from one image frame to another image frame. The object tracking function may use acceleration data output by the accelerometer to assist in locating the object in an image frame. Other embodiments may be described and claimed. | 03-26-2015 |
20150086074 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - Provided is an information processing device including an image acquisition unit configured to acquire a captured image, a parameter acquisition unit configured to acquire a parameter associated with a user of a portable terminal, a specification unit configured to specify the user from the captured image on the basis of the parameter, and a display control unit configured to perform control in a manner that, in a case where the user is not specified by the specification unit, information indicating a movement of the user necessary to specify the user is displayed in the portable terminal. | 03-26-2015 |
20150086075 | SYSTEM AND METHOD FOR FACE TRACKING - Improved face tracking is provided during determination of an image by an imaging device using a low power face tracking unit. In one embodiment, image data associated with a frame and one or more face detection windows from a face detection unit may be received by the face tracking unit. The face detection windows are associated with the image data of the frame. A face list may be determined based on the face detection windows and one or more faces may be selected from the face list to generate an output face list. The output face list may then be provided to a processor of an imaging device for the detection of an image based on at least one of coordinate and scale values of the one or more faces on the output face list. | 03-26-2015 |
20150086076 | Face Recognition Performance Using Additional Image Features - A technique is provided for recognizing faces in an image stream using a digital image acquisition device. A first acquired image is received from an image stream. A first face region is detected within the first acquired image having a given size and a respective location within the first acquired image. First faceprint data uniquely identifying the first face region are extracted along with first peripheral region data around the first face region. The first faceprint and peripheral region data are stored, and the first peripheral region data are associated with the first face region. The first face region is tracked until a face lock is lost. A second face region is detected within a second acquired image from the image stream. Second peripheral region data around the second face region are extracted. The second face region is identified upon matching the first and second peripheral region data. | 03-26-2015 |
20150092978 | METHOD AND SYSTEM FOR RECOGNITION OF ABNORMAL BEHAVIOR - A method for recognizing abnormal behavior is disclosed, the method includes: capturing at least one video stream of data on one or more subjects; extracting body skeleton data from the at least one video stream of data; classifying the extracted body skeleton data as normal behavior or abnormal behavior; and generating an alert, if the extracted skeleton data is classified as abnormal behavior. | 04-02-2015 |
20150092979 | METHOD AND APPARATUS FOR IMAGE COLLECTION AND ANALYSIS - A system that incorporates teachings of the subject disclosure may include, for example, a processor that can detect an event, access location information for a group of mobile communication devices that are each automatically capturing images, and identify a subset of the group of mobile communication devices that are in proximity to the event based on the location information. The processor can provide first image analysis criteria to the subset of the group of mobile communication devices without providing the first image analysis criteria to remaining devices of the group of mobile communication devices where the first image analysis criteria includes first characteristics associated with an object. The processor can receive a first target image that includes the object from a first mobile communication device of the subset of the group of mobile communication devices, where the first target image is selected by the first mobile communication device from among a plurality of images captured by the first mobile communication device based on first image pattern recognition performed by the first mobile communication device utilizing the first image analysis criteria. Other embodiments are disclosed. | 04-02-2015 |
20150092980 | TRACKING PROGRAM AND METHOD - In one embodiment, the present disclosure provides a computer implemented method of determining energy expenditure associated with a user's movement. A plurality of video images of a subject are obtained. From the plurality of video images, a first location is determined of a first joint of the subject at a first time. From the plurality of video images, a second location is determined of the first joint of the subject at a second time. The movement of the first joint of the subject between the first and second location is associated with an energy associated with the movement. | 04-02-2015 |
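The abstract above does not give the energy formula, so the sketch below uses a kinetic-energy-like proxy (a coefficient times body mass times squared joint speed over the interval) purely as an assumed placeholder for the association step; the mass and coefficient are hypothetical parameters:

```python
import math

def joint_energy(pos_t1, pos_t2, t1, t2, mass_kg=70.0, coeff=0.1):
    """Associate the movement of one tracked joint between two video frames
    with an energy value: displacement -> speed -> kinetic-energy-style proxy."""
    distance = math.dist(pos_t1, pos_t2)   # straight-line joint displacement
    dt = t2 - t1
    speed = distance / dt
    return coeff * mass_kg * speed ** 2 * dt
```

Summing this per joint and per frame pair would give a crude whole-session expenditure estimate under the stated assumptions.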
20150092981 | APPARATUS AND METHOD FOR PROVIDING ACTIVITY RECOGNITION BASED APPLICATION SERVICE - An apparatus includes an image receiving module configured to collect a depth image provided from a camera, a human body detection module configured to detect a human body from the collected depth image, and an activity recognition module configured to recognize an action of the human body on the basis of a 3-dimensional action volume extracted from the human body and a previously learned action model. | 04-02-2015 |
20150092982 | IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREOF - An image processing apparatus is provided. The apparatus includes: a processor configured to process an image according to a preset process in response to receiving the image; and a controller configured to control the processor in order to detect a figure of a human within a video frame based on a feature vector data value according to histograms of oriented gradients (HOG) algorithm of the video frame of the image input to the processor, wherein the controller divides the video frame into a foreground corresponding to a region which includes a moving object and a background corresponding to a region which excludes the foreground, removes the background, converts a target region having a preset area including at least a part of the foreground without the background into a binary image, and derives the feature vector data value from the binary image using a lookup table. | 04-02-2015 |
20150092983 | METHOD FOR CALIBRATION FREE GAZE TRACKING USING LOW COST CAMERA - A method and device for eye gaze estimation with regard to a sequence of images. The method comprises receiving a sequence of first video images and a corresponding sequence of first eye images of a user watching the first video images; determining first saliency maps associated with at least a part of the first video images; estimating the associated first gaze points from the first saliency maps associated with the video images that correspond to the first eye images; storing pairs of first eye images/first gaze points in a database; for a new eye image, called second eye image, estimating an associated second gaze point from the estimated first gaze points and from a second saliency map associated with a second video image associated with the second eye image; and storing the second eye image and its associated second gaze point in the database. | 04-02-2015 |
20150092984 | FILTERING DEVICE AND ENVIRONMENT RECOGNITION SYSTEM - A filtering device includes an evaluation value deriving module that derives, for a pair of generated images having mutual relevance, multiple evaluation values indicative of correlations between any one of blocks extracted from one of the images and multiple blocks extracted from the other image, respectively, a reference waveform part setting module that sets a reference waveform part of a transition waveform comprised of the multiple evaluation values, the reference waveform part containing the evaluation value having the highest correlation, and a difference value determining module that determines whether one or more similar waveform parts similar to the reference waveform part exist in the transition waveform, and determines, based on the result of the determination, whether the evaluation value with the highest correlation is valid as a difference value. | 04-02-2015 |
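The validity test in the last module above can be sketched as an ambiguity check on the evaluation-value curve (here SAD values, where lower means higher correlation): the best match is rejected when a similar trough exists elsewhere in the waveform. The margin and exclusion-window values are illustrative assumptions:

```python
def best_match_is_valid(sad_values, margin=2.0, exclusion=1):
    """sad_values: SAD per candidate block (lower = higher correlation).

    The best-correlating candidate is judged valid only if no other candidate
    outside its immediate neighbourhood comes within `margin` of it, i.e. no
    similar waveform part exists elsewhere in the transition waveform.
    """
    best_i = min(range(len(sad_values)), key=lambda i: sad_values[i])
    best = sad_values[best_i]
    for i, v in enumerate(sad_values):
        if abs(i - best_i) <= exclusion:
            continue                 # skip the reference waveform part itself
        if v - best < margin:
            return False             # ambiguous: a similar trough elsewhere
    return True
```

This is the classic repeated-texture guard in stereo matching: a unique trough yields a trustworthy difference value, twin troughs do not.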
20150092985 | UPDATING FILTER PARAMETERS OF A SYSTEM - Techniques are disclosed for estimating one or more parameters in a system. A device obtains measurements corresponding to a first set of features and a second set of features. The device estimates the parameters using an extended Kalman filter based on the measurements corresponding to the first set of features and the second set of features. The measurements corresponding to the first set of features are used to update the one or more parameters, and information corresponding to the first set of features. The measurements corresponding to the second set of features are used to update the parameters and uncertainty corresponding to the parameters. In one example, information corresponding to the second set of features is not updated during the estimating. Moreover, the parameters are estimated without projecting the information corresponding to the second set of features into a null-space. | 04-02-2015 |
20150092986 | FACE RECOGNITION USING DEPTH BASED TRACKING - Face recognition training database generation technique embodiments are presented that generally involve collecting characterizations of a person's face that are captured over time and as the person moves through an environment, to create a training database of facial characterizations for that person. As the facial characterizations are captured over time, they will represent the person's face as viewed from various angles and distances, different resolutions, and under different environmental conditions (e.g., lighting and haze conditions). Further, over a long period of time where facial characterizations of a person are collected periodically, these characterizations can represent an evolution in the appearance of the person. This produces a rich training resource for use in face recognition systems. In addition, since a person's face recognition training database can be established before it is needed by a face recognition system, once employed, the training will be quicker. | 04-02-2015 |
20150092987 | METHOD OF PROVIDING A DESCRIPTOR FOR AT LEAST ONE FEATURE OF AN IMAGE AND METHOD OF MATCHING FEATURES - A method of providing a descriptor for at least one feature of an image comprises the steps of providing an image captured by a capturing device and extracting at least one feature from the image, and assigning a descriptor to the at least one feature, the descriptor depending on at least one parameter which is indicative of an orientation, wherein the at least one parameter is determined from the orientation of the capturing device measured by a tracking system. The invention also relates to a method of matching features of two or more images. | 04-02-2015 |
20150098607 | Deformable Surface Tracking in Augmented Reality Applications - A computer implemented method for tracking a marker on a deformable surface in augmented reality (AR) applications, comprising: detecting image-key-points in a currently processed video frame of a video-captured scene; and performing key-point-correspondence searching, matching the image-key-points with model-key-points identified from an original image of the marker, comprising: calculating a key-point matching score for each image-key-point; applying a key-point matching score filter to the key-point matching scores; and restricting the searching of the image-key-points in the currently processed video frame to within the same mesh block determined in a previously processed video frame of the captured video frames. | 04-09-2015 |
20150098608 | MANAGEMENT OF ELECTRONIC RACKS - Computer program product and method for determining a location and identity of an electronic rack by a remote electronic device is disclosed. The method may include capturing an image of the electronic rack with a camera attached to the remote electronic device. The method may further include determining a visual identifying trait of the captured image of the electronic rack. The method may further include comparing the visual identifying trait of the captured image of the electronic rack to a known identifying trait of a known electronic rack identified in an inventory. The method may include identifying the electronic rack based on the comparison of the visual identifying trait of the captured image to the known identifying trait. The method may further include determining a location of the electronic rack by the remote electronic device. The method may further include recording the location of the identified electronic rack. | 04-09-2015 |
20150098609 | Real-Time Multiclass Driver Action Recognition Using Random Forests - An action recognition system recognizes driver actions by using a random forest model to classify images of the driver. A plurality of predictions is generated using the random forest model. Each prediction is generated by one of the plurality of decision trees and each prediction comprises a predicted driver action and a confidence score. The plurality of predictions is regrouped into a plurality of groups with each of the plurality of groups associated with one of the driver actions. The confidence scores are combined within each group to determine a combined score associated with each group. The driver action associated with the highest combined score is selected. | 04-09-2015 |
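The regroup-and-combine step above can be sketched with simple confidence summation; the abstract only says the scores are "combined", so summation is an assumption (averaging or voting would fit the description equally well):

```python
from collections import defaultdict

def combine_tree_predictions(predictions):
    """predictions: list of (driver_action, confidence) pairs, one per tree.

    Regroup the per-tree predictions by driver action, sum the confidence
    scores within each group, and return the action with the highest combined
    score together with that score.
    """
    scores = defaultdict(float)
    for action, confidence in predictions:
        scores[action] += confidence
    return max(scores.items(), key=lambda kv: kv[1])
```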
20150098610 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING DEVICE AND CONTROL METHOD AND CONTROL PROGRAM THEREOF, AND COMMUNICATION TERMINAL AND CONTROL METHOD AND CONTROL PROGRAM THEREOF - Merchandise management is implemented by recognizing a piece of merchandise in an image on a video in real time. A piece of merchandise and m first local features, which are respectively 1-dimensional to i-dimensional feature vectors, are stored in association with each other. Then n feature points are extracted from an image on a video captured by an imaging unit, and n second local features, which are respectively 1-dimensional to j-dimensional feature vectors, are generated. The smaller of the dimension counts i and j is selected, and the merchandise is recognized as existing in the image on the video when it is determined that a prescribed proportion or more of the m first local features of the selected number of dimensions correspond to the n second local features of the selected number of dimensions. | 04-09-2015 |
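The dimension-selection and proportion test above reads naturally as a nearest-neighbour match over truncated feature vectors. A sketch, with the distance threshold and match proportion as assumed parameters:

```python
import math

def recognize(first_feats, second_feats, dim_i, dim_j,
              proportion=0.5, dist_thresh=0.5):
    """first_feats: m stored local features; second_feats: n query features.

    Truncate both sides to the smaller dimension count, count stored features
    that have at least one close query feature, and recognize the merchandise
    when the matched proportion reaches the prescribed value.
    """
    d = min(dim_i, dim_j)            # select the smaller number of dimensions
    matched = 0
    for f in first_feats:
        for s in second_feats:
            if math.dist(f[:d], s[:d]) < dist_thresh:
                matched += 1
                break                # one correspondence per stored feature
    return matched / len(first_feats) >= proportion
```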
20150098611 | DETERMINATION METHOD, DETERMINATION DEVICE, DETERMINATION SYSTEM, AND COMPUTER PROGRAM - A determination method for determining reliability of a selective binding amount of a substance to be examined obtained as detection intensity of a label when a labeled substance to be examined binds to a selective binding substance fixed as a spot on a carrier includes: determining a position of the spot in image data obtained by imaging the detection intensity in the carrier and extracting a pixel group corresponding to the spot; calculating a ratio or a difference between a median value of the detection intensity of the pixel group extracted at the determining and a median value of the detection intensity of the pixel group excluding a certain top proportion of and/or a certain bottom proportion of pixels; and determining quality of the reliability based on the ratio or the difference calculated at the calculating and a certain reference value. | 04-09-2015 |
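The ratio test in the abstract above amounts to comparing a plain median with a trimmed median. A minimal sketch, with the trim proportion and reference value chosen arbitrarily for illustration:

```python
# Compare the median of all spot pixels with the median after excluding a
# top/bottom proportion of pixels; a ratio near 1 suggests a clean spot.
from statistics import median

def spot_reliability(intensities, trim=0.1, reference=1.2):
    pixels = sorted(intensities)
    k = int(len(pixels) * trim)                 # pixels dropped from each end
    trimmed = pixels[k:len(pixels) - k] if k else pixels
    ratio = median(pixels) / median(trimmed)
    return ratio, ratio <= reference            # True -> reliability judged good

# A spot whose bright outlier pixels do not shift the median:
ratio, reliable = spot_reliability([10, 11, 12, 12, 13, 13, 14, 90, 95, 100])
```

Because the median is robust, a clean spot yields a ratio close to 1 even with a few saturated pixels; a badly contaminated spot moves the untrimmed median and pushes the ratio past the reference value.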
20150098612 | METHOD AND APPARATUS FOR DETECTING LIGHT SOURCE OF VEHICLE - In a light control system, a captured image of a cruising direction of a vehicle is acquired, and a light source is extracted from the captured image. A probability for estimating a light source to be a vehicle light source originating from a vehicle is calculated based on light source parameters for differentiating the light source. A dark section that is darker than the periphery and is present below the light source in the captured image is extracted. The probability is set to be higher for the light source of which the dark section is extracted. The light source having a probability that is a reference value set in advance or higher is estimated to be a light source of another vehicle. When the dark section that is detected as a shadow of a vehicle is detected, the probability of the light source being a vehicle light source is set to be high. | 04-09-2015 |
20150098613 | APPARATUS AND METHODS FOR VIDEO ALARM VERIFICATION - A method for verification of alarms is disclosed. The method involves receiving an alarm signal trigger associated with an alarm signal, receiving video data from a premise associated with the alarm signal, rapidly analyzing the video data to test for the existence of a significant event, and when a significant event exists, sending a representation of a segment of interest of the video data, the segment of interest being associated with the significant event, to a user. | 04-09-2015 |
20150098614 | OBJECT TRACKING BASED ON DYNAMICALLY BUILT ENVIRONMENT MAP DATA - A computer-implemented method of tracking a target object in an object recognition system includes acquiring a plurality of images with a camera. The method further includes simultaneously tracking the target object and dynamically building environment map data from the plurality of images. The tracking of the target object includes attempting to estimate a target pose of the target object with respect to the camera based on at least one of the plurality of images and based on target map data. Next, the method determines whether the tracking of the target object with respect to the camera is successful. If not, then the method includes inferring the target pose with respect to the camera based on the dynamically built environment map data. In one aspect the method includes fusing the inferred target pose with the actual target pose even if tracking is successful to improve robustness. | 04-09-2015 |
20150098615 | DYNAMIC EXTENSION OF MAP DATA FOR OBJECT DETECTION AND TRACKING - A computer-implemented method of tracking a target object in an object recognition system includes acquiring a plurality of images with a camera and simultaneously tracking the target object and dynamically building online map data from the plurality of images. Tracking of the target object is based on the online map data and the offline map data. In one aspect, tracking the target object includes enabling only one of the online map data and offline map data for tracking based on whether tracking is successful. In another aspect, tracking the target object includes fusing the online map data with the offline map data to generate a fused online model. | 04-09-2015 |
20150098616 | OBJECT RECOGNITION AND MAP GENERATION WITH ENVIRONMENT REFERENCES - Exemplary methods, apparatuses, and systems for performing object detection on a mobile device are disclosed. A reference dataset comprising a set of reference keyframes for an object captured in a plurality of different lighting environments is obtained. An image of the object in a current lighting environment is captured. Reference keyframes are grouped into respective subsets according to one or more of: a reference keyframe camera position and orientation (pose), a reference keyframe lighting environment, or a combination thereof. Feature points of the image are compared with feature points of the reference keyframes in each of the respective subsets. A candidate subset of reference keyframes from the respective subsets is selected in response to the comparing feature points. A reference keyframe from the candidate subset of reference keyframes is selected for triangulation with the image of the object. | 04-09-2015 |
20150098617 | Method and Apparatus for Establishing a North Reference for Inertial Measurement Units using Scene Correlation - A scene correlation-based target system and related methods are provided. A reference image depicts a remotely-positioned object having identifiable characteristics, wherein a reference directional vector is established relative to the reference image. A target image of a general vicinity of the remotely-positioned object has an unknown directional vector, the target image having at least a portion of the identifiable characteristics. An inertial measuring unit has a scene correlation system, wherein the scene correlation system matches the portion of the identifiable characteristics of the target image with the identifiable characteristics of the reference image, wherein a slew angle between the reference image and the target image is calculated. A target image directional vector is derived from the calculated slew angle and the reference directional vector. | 04-09-2015 |
20150098618 | METHOD AND APPARATUS FOR ESTIMATING ORIENTATION OF BODY, AND COMPUTER READABLE STORAGE MEDIUM OF RECORDING THE METHOD - A method of estimating an orientation of a body is provided. The method includes determining a reference point based on a first region of a body, calculating a translation matrix of a world coordinate system based on the reference point; determining a first vector based on the reference point and a second region of the body, calculating a first rotation matrix rotated by an angle a about a first rotation vector, which is perpendicular to the first vector and a Z-axis of the world coordinate system, as a first rotation axis, determining a second vector based on the first vector and a third region of the body, calculating a second rotation matrix rotated by an angle β about the Z-axis of the world coordinate system as a second rotation axis, and calculating a transformation matrix based on the translation matrix, the first rotation matrix, and the second rotation matrix. | 04-09-2015 |
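The final composition step can be illustrated with homogeneous 4x4 matrices. For simplicity this sketch fixes the first rotation axis to the X-axis, whereas the application derives it from the body vectors; the angles and reference point are made-up values:

```python
# Compose transformation = Rz(beta) * Rx(alpha) * T(reference point),
# using plain nested lists as 4x4 homogeneous matrices.
import math

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rot_x(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def rot_z(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def apply(m, point):
    v = list(point) + [1.0]
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

transform = mat_mul(rot_z(math.pi / 2), mat_mul(rot_x(0.0), translation(1.0, 0.0, 0.0)))
```

Applying `transform` to the origin translates it to the reference point (1, 0, 0) and then rotates it by 90 degrees about Z onto (0, 1, 0).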
20150098619 | METHODS AND SYSTEMS FOR DETERMINING AND TRACKING EXTREMITIES OF A TARGET - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may then be determined. | 04-09-2015 |
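The downsampling step — generating a grid of voxels from the depth image — can be sketched as simple block averaging; the cell size and the mean-depth aggregation are assumptions for illustration:

```python
# Bin depth pixels into a coarse grid, one voxel per cell-by-cell block,
# each voxel holding the mean depth of the pixels inside it.
def voxelize(depth_image, cell=2):
    h, w = len(depth_image), len(depth_image[0])
    grid = []
    for y in range(0, h, cell):
        row = []
        for x in range(0, w, cell):
            block = [depth_image[j][i]
                     for j in range(y, min(y + cell, h))
                     for i in range(x, min(x + cell, w))]
            row.append(sum(block) / len(block))   # mean depth of the block
        grid.append(row)
    return grid

depth = [[1, 1, 3, 3],
         [1, 1, 3, 3],
         [5, 5, 7, 7],
         [5, 5, 7, 7]]
voxels = voxelize(depth)
```

Working on the coarse voxel grid instead of raw depth pixels makes the later background removal and extremity search far cheaper.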
20150098620 | Position Estimation - Methods and systems are described for determining eye position and/or for determining eye movement based on glints. An exemplary computer-implemented method involves: (a) causing a camera that is attached to a head-mounted display (HMD) to record a video of the eye; (b) while the video of the eye is being recorded, causing a plurality of light sources that are attached to the HMD and generally directed towards the eye to switch on and off according to a predetermined pattern, wherein the predetermined pattern is such that at least two of the light sources are switched on at any given time while the video of the eye is being recorded; (c) analyzing the video of the eye to detect controlled glints that correspond to the plurality of light sources; and (d) determining a measure of eye position based on the controlled glints. | 04-09-2015 |
20150104062 | PROBABILISTIC NEURAL NETWORK BASED MOVING OBJECT DETECTION METHOD AND AN APPARATUS USING THE SAME - The present disclosure proposes a method of moving object detection in variable bit-rate video streams based on probabilistic neural networks, and the method features a background generation module and a moving object detection module. The background generation module produces a model of background images which express properties of variable bit-rate video streams. The moving object detection module distinguishes a moving object in both low and high bit-rate video streams in an efficient manner. The detection result is generated by calculating the output value of the probabilistic neural networks. | 04-16-2015 |
20150104063 | CRUISING ZONE DIVISION LINE RECOGNITION APPARATUS - A cruising zone division line recognition apparatus has an image acquisition device that acquires an image including a road surface ahead of a vehicle, and an image recognition device. The image recognition device adds blurring to an area including the road surface in the acquired image and recognizes a cruising zone division line from the image to which blurring has been added. When blurring is added, a cruising zone division line that is an intermittent double line included in a captured image can be made unclear. Therefore, the recognized cruising zone division line can be prevented from becoming a discontinuous, disjointed line. | 04-16-2015 |
20150104064 | METHOD AND SYSTEM FOR DETECTION OF FOREIGN OBJECTS IN MARITIME ENVIRONMENTS - The present invention provides techniques for detecting foreign objects in a region of interest in maritime environments. Image data indicative of a sequence of successively acquired images of the region of interest is analyzed to determine candidate points of interest, and data indicative of said points is processed to identify candidate points that are adjacently accumulated in different locations in the image data. Grouping data may then be generated based on the identified accumulations of candidate points indicative of a group of said candidate points. The grouping data is processed to identify spatio-temporal correlation between the points in the group and determine a corresponding track function, thereby enabling detection of a presence of a foreign object in the image data. | 04-16-2015 |
20150104065 | APPARATUS AND METHOD FOR RECOGNIZING OBJECT IN IMAGE - An apparatus and method for recognizing an object are provided. The apparatus includes an input component configured to receive an input image that includes a target object, and a processor configured to recognize a target object in the received image using image-object correlation information that represents a correlation between an image and an object. | 04-16-2015 |
20150104066 | Method for improving tracking in crowded situations using rival compensation - A method for tracking objects across a number of image frames includes tracking objects in the frames based on appearance models of each foreground region corresponding to each of the objects and determining if a plurality of the tracked objects overlap. Where a plurality of the tracked objects overlap, the method creates compensated appearance models for each of the plurality of overlapping objects by attenuating common appearance features among the corresponding appearance models; and tracks the plurality of overlapping objects based on the created compensated appearance models. | 04-16-2015 |
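The attenuation of common appearance features can be sketched with colour-histogram models; the feature names, weights, and the multiplicative attenuation rule are illustrative guesses, not taken from the application:

```python
# When tracked objects overlap, down-weight features shared with the
# rival's appearance model so matching relies on distinguishing features.
def compensate(model, rival):
    """model, rival: feature -> weight dicts (e.g. colour-histogram bins)."""
    return {feat: w * (1.0 - rival.get(feat, 0.0)) for feat, w in model.items()}

person_a = {"red": 0.8, "blue": 0.1, "green": 0.1}
person_b = {"red": 0.7, "white": 0.2, "green": 0.1}
compensated_a = compensate(person_a, person_b)
# "red", strong in both models, is attenuated; "blue" is untouched
```

After compensation, matching within the overlap region is driven by the features unique to each object, which is what keeps the tracks from swapping identities.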
20150104067 | METHOD AND APPARATUS FOR TRACKING OBJECT, AND METHOD FOR SELECTING TRACKING FEATURE - A method and an apparatus for tracking an object, and a method for selecting a tracking feature are disclosed. The object tracking method includes tracking, based on a previously selected first tracking feature, the object in a sequence of video frames having the object; when a scene of the video frame is changed, selecting a second tracking feature with optimal tracking performance for the changed scene; and continuing tracking the object based on the selected second tracking feature. According to the object tracking method, a feature with optimal tracking performance for a corresponding scene can be dynamically selected in response to the changed scene in the tracking of a hand, thus it is possible to perform accurate tracking. | 04-16-2015 |
20150104068 | SYSTEM AND METHOD FOR LOCATING FIDUCIALS WITH KNOWN SHAPE - This invention provides a system and method for determining the pose of shapes that are known to a vision system that undergo both affine transformation and deformation. The object image with fiducial is acquired. The fiducial has affine parameters, including degrees of freedom (DOFs), search ranges and search step sizes, and control points with associated DOFs and step sizes. Each 2D affine parameter's search range and the distortion control points' DOFs are sampled and all combinations are obtained. The coarsely specified fiducial is transformed for each combination and a match metric is computed for the transformed fiducial, generating a score surface. Peaks are computed on this surface, as potential candidates, which are refined until a match metric is maximized. The refined representation exceeding a predetermined score is returned as potential shapes in the scene. Alternately the candidate with the best score can be used as a training fiducial. | 04-16-2015 |
20150104069 | Method for Discovering Augmented Reality Object, and Terminal - A method for discovering an augmented reality (AR) object is provided that is applicable to the field of AR technologies. The method for discovering an AR object includes, when it is determined, according to a pre-generated navigation route and a movement speed of a terminal, that a vicinity of a preselected AR object is reached, determining a status of the terminal; when it is determined that the terminal is in a searching state, starting a camera and acquiring a picture; and when the acquired picture includes the AR object, notifying of the AR object. | 04-16-2015 |
20150110344 | IMAGE AND MAP-BASED DETECTION OF VEHICLES AT INTERSECTIONS - A system, device, and methods for image and map-based detection of vehicles at intersections. Once example computer-implemented method for detecting objects includes receiving, from the one or more sensors disposed on a vehicle, image data representative of an image and detecting an object on the image. The method further includes identifying a path extending from the vehicle to the detected object on the image and retrieving map data including lane information. The method further includes comparing the path to a representation of the lane information and determining the position of the detected object based on a comparison of the path, representation of the lane information, and the image. | 04-23-2015 |
20150110345 | REMOTE TRACKING OF OBJECTS - The presently disclosed subject matter includes a tracking system and method for tracking objects by a sensing unit operable to communicate over a communication link with a control center, enabling execution of a command generated at the control center with respect to a selected object in an image captured by the sensing unit, notwithstanding a time delay between the time when the sensing unit acquires the image with the selected object and the time when the command is received at the sensing unit with respect to the selected object. | 04-23-2015 |
20150110346 | SPECTRAL IMAGING BASED DECISION SUPPORT, TREATMENT PLANNING AND/OR INTERVENTION GUIDANCE - A method includes obtaining first spectral image data, which includes at least a first component corresponding to a targeted first K-edge based contrast agent administered to a subject if a target of the targeted first K-edge based contrast agent is present in the subject, decomposing the first spectral image data into at least the first component, reconstructing the first component thereby generating a first image of the targeted first K-edge contrast agent, determining if the targeted first K-edge contrast agent is present in the first image, and generating a signal indicating the targeted first K-edge contrast agent is present in the first image in response to determining the targeted first K-edge contrast agent is present in the first image. | 04-23-2015 |
20150110347 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - An image processing device includes a processor; and a memory which stores a plurality of instructions, which when executed by the processor, cause the processor to execute: acquiring a first image and a second image which are captured by cameras having optical axes different from each other; calculating a calculation length of a first portion of a user in a world coordinate system based on parallax, in a camera coordinate system, of the first portion which is included in the first image and the second image; and detecting a non-synchronization state of the image capturing timings of the first image and the second image based on a change amount of the calculation length. | 04-23-2015 |
20150110348 | SYSTEMS AND METHODS FOR AUTOMATED DETECTION OF REGIONS OF INTEREST IN RETINAL IMAGES - Embodiments disclose systems and methods that aid in screening, diagnosis and/or monitoring of medical conditions. The systems and methods may allow, for example, for automated identification and localization of lesions and other anatomical structures from medical data obtained from medical imaging devices, computation of image-based biomarkers including quantification of dynamics of lesions, and/or integration with telemedicine services, programs, or software. | 04-23-2015 |
20150110349 | FACE TRACKING APPARATUSES AND METHODS - A face tracking apparatus includes: a face region detector; a segmentation unit; an occlusion probability calculator; and a tracking unit. The face region detector is configured to detect a face region based on an input image. The segmentation unit is configured to segment the face region into a plurality of sub-regions. The occlusion probability calculator is configured to calculate occlusion probabilities for the plurality of sub-regions. The tracking unit is configured to track a face included in the input image based on the occlusion probabilities. | 04-23-2015 |
20150110350 | Enhanced Stereo Imaging-Based Metrology - A solution for evaluating an object using physical three-dimensional locations of the various points on the object derived from image data concurrently acquired by two or more cameras (e.g., stereo image data) is provided. Image data concurrently acquired by at least two cameras at each of multiple instants is processed to identify one or more points of an object visible in the image data. A physical three-dimensional location of each such point can be calculated at each instant using the corresponding image data. Additionally, a physical three-dimensional location of one or more points of the object visible only in the image data acquired by one camera can be calculated for each of the three different instants using the image data in which the corresponding point is visible and the physical three-dimensional location of one or more of the points visible in the image data acquired by at least two cameras. | 04-23-2015 |
20150110351 | Face Detection - A data processing system for performing face detection on a stream of frames of image data, the data processing system comprising: a skin patch identifier configured to identify one or more patches of skin colour in a first frame and characterise each patch in the first frame using a respective patch construct of a predefined shape; a first search tile generator configured to generate one or more first search tiles from the one or more patch constructs; and a face detector configured to detect faces in the stream by performing face detection in one or more frames of the stream within the first search tiles. | 04-23-2015 |
20150110352 | Skin Colour Probability Map - A data processing system for performing face detection on a stream of frames of image data, the data processing system comprising: a face detector configured to detect a first face candidate in a first frame by performing face detection within first search tiles defined for the first frame; a colour measurement unit configured to calculate a set of colour parameters including an average colour of the first face candidate expressed according to a predefined colour space; a transformation unit configured to: transform a second frame into the predefined colour space, one of the axes of the colour space being substantially oriented in the direction of maximum variation according to a predetermined distribution of skin colour; and form a skin colour probability map for the second frame by calculating the probability that a given colour is a skin colour from a measure of the colour space distance of that colour from the calculated average colour; and a search tile generator configured to generate second search tiles based on the skin colour probability map for use by the face detector, the second search tiles defining areas of the second frame within which the face detector is to perform face detection so as to detect one or more second face candidates in the second frame. | 04-23-2015 |
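The probability-map construction can be sketched as a fall-off with colour-space distance from the measured average face colour. The Gaussian shape and the bandwidth here are illustrative assumptions; the application builds the map in a transformed colour space oriented along the skin-colour distribution:

```python
# Probability that each pixel is skin, decaying with its colour-space
# distance from the average colour of a detected face candidate.
import math

def skin_probability_map(frame, avg_colour, bandwidth=40.0):
    """frame: 2D grid of (r, g, b) tuples -> grid of probabilities in [0, 1]."""
    def prob(colour):
        dist = math.dist(colour, avg_colour)        # colour-space distance
        return math.exp(-(dist / bandwidth) ** 2)   # 1.0 at the average colour
    return [[prob(px) for px in row] for row in frame]

frame = [[(200, 150, 130), (10, 200, 30)],
         [(205, 148, 128), (0, 0, 255)]]
pmap = skin_probability_map(frame, avg_colour=(202, 149, 129))
```

Thresholding such a map yields the high-probability regions from which the second-frame search tiles would be generated.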
20150110353 | PAPER-SHEET HANDLING APPARATUS AND PAPER-SHEET HANDLING METHOD - A paper-sheet handling apparatus ( | 04-23-2015 |
20150110354 | Isolate Extraneous Motions - A system may receive image data and capture motion with respect to a target in a physical space and recognize a gesture from the captured motion. It may be desirable to isolate aspects of captured motion to differentiate random and extraneous motions. For example, a gesture may comprise motion of a user's right arm, and it may be desirable to isolate the motion of the user's right arm and exclude an interpretation of any other motion. Thus, the isolated aspect may be the focus of the received data for gesture recognition. Alternately, the isolated aspects may be an aspect of the captured motion that is removed from consideration when identifying a gesture from the captured motion. For example, gesture filters may be modified to correspond to the user's natural lean to eliminate the effect the lean has on the registry of a motion with a gesture filter. | 04-23-2015 |
20150110355 | VISION-2-VISION CONTROL SYSTEM - A method for controlling an object space having an associated object environment includes the steps of, defining a target set of coordinates in the object space, recognizing the presence of a predetermined object in the object space, and determining a coordinate location of the recognized predetermined object in the object space. The method further includes determining the spatial relationship between the recognized predetermined object and the target set of coordinates, comparing the spatial relationship with predetermined spatial relationship criteria, and if the determined spatial relationship criteria falls within the predetermined spatial relationship criteria, modifying the object space environment. | 04-23-2015 |
20150110356 | Scene Recognition Method and Apparatus - A scene recognition method includes acquiring an image and sensor data corresponding to the image and determining, in accordance with the sensor data, whether a scene of the image is a non-high-dynamic range (HDR) scene. The method also includes extracting an image feature of the image when it is not determined whether the scene of the image is the non-HDR scene and determining, in accordance with the image feature, whether the scene of the image is an HDR scene. | 04-23-2015 |
20150117703 | OBJECT IDENTIFICATION SYSTEM AND METHOD - An object identification method is provided. The method includes dividing an input video into a number of video shots, each containing one or more video frames. The method also includes detecting target-class object occurrences and related-class object occurrences in each video shot. Further, the method includes generating hint information including a small subset of frames representing the input video and performing object tracking and recognition based on the hint information. The method also includes fusing tracking and recognition results and outputting labeled objects based on the combined tracking and recognition results. | 04-30-2015 |
20150117704 | BUS LANE INFRACTION DETECTION METHOD AND SYSTEM - This disclosure provides methods and systems for forming a trajectory of a moving vehicle captured with an image capturing device. According to one exemplary embodiment, a method forms a trajectory of a moving vehicle and determines if the vehicle is moving in one of a permitted manner and an unpermitted manner relative to the appropriate motor vehicle lane restriction laws and/or regulations. | 04-30-2015 |
20150117705 | Hybrid Parking Detection - The present invention combines the strengths of the background-subtraction and edge-detection algorithms for parking detection. Being computationally efficient, the background-subtraction algorithm is used whenever possible. On the other hand, being robust, the edge-detection algorithm is used at calibration points, or when the background-subtraction algorithm cannot reliably determine the parking state. | 04-30-2015 |
20150117706 | VISUAL OBJECT TRACKING METHOD - A visual object tracking method includes the steps of: setting an object window having a target in a video image; defining a search window greater than the object window; analyzing an image pixel of the object window to generate a color histogram for defining a color filter which includes a dominant color characteristic of the target; using the color filter to generate an object template and a dominant color map in the object window and the search window respectively, the object template including a shape characteristic of the target, the dominant color map including at least one candidate block; comparing the similarity between the object template and the candidate block to obtain a probability distribution map, and using the probability distribution map to compute the mass center of the target. The method generates the probability map by the color and shape characteristics to compute the mass center. | 04-30-2015 |
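The last step above — computing the mass centre from the probability distribution map — is a probability-weighted mean of the pixel coordinates. A minimal sketch with a made-up 3x3 map:

```python
# Mass centre of the target: probability-weighted average of coordinates
# over the probability distribution map.
def mass_center(prob_map):
    total = x_sum = y_sum = 0.0
    for y, row in enumerate(prob_map):
        for x, p in enumerate(row):
            total += p
            x_sum += p * x
            y_sum += p * y
    return x_sum / total, y_sum / total

prob_map = [[0.0, 0.0, 0.0],
            [0.0, 0.2, 0.8],
            [0.0, 0.2, 0.8]]
cx, cy = mass_center(prob_map)   # pulled toward the high-probability column
```

This is the same centroid computation used by mean-shift-style trackers over a back-projected histogram.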
20150117707 | SYSTEMS AND METHODS FOR DETERMINING MOTION SALIENCY - Techniques for determining motion saliency in video content using center-surround receptive fields. In some implementations, images or frames from a video may be apportioned into non-overlapped regions, for example, by applying a rectilinear grid. For each grid region, or cell, motion consistency may be measured between the center and surround area of that cell across frames of the video. Consistent motion across the center-surround area may indicate that the corresponding region has low variation. The larger the difference between center-surround motions in a cell, the more likely the region has high motion saliency. | 04-30-2015 |
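One plausible reading of the centre-surround measure, with the grid, motion vectors, and difference metric all assumed for illustration: for each interior cell, compare the cell's motion with the average motion of its eight neighbours.

```python
# Centre-surround motion saliency on a rectilinear grid of per-cell
# motion vectors: large centre-vs-surround difference -> salient motion.
import math

def motion_saliency(grid):
    """grid: 2D list of (dx, dy) motion vectors, one per grid cell."""
    h, w = len(grid), len(grid[0])
    saliency = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            surround = [grid[y + j][x + i]
                        for j in (-1, 0, 1) for i in (-1, 0, 1) if (i, j) != (0, 0)]
            sx = sum(v[0] for v in surround) / len(surround)
            sy = sum(v[1] for v in surround) / len(surround)
            cx, cy = grid[y][x]
            saliency[y][x] = math.hypot(cx - sx, cy - sy)
    return saliency

grid = [[(1.0, 0.0)] * 3 for _ in range(3)]   # uniform background motion...
grid[1][1] = (5.0, 0.0)                       # ...with one cell moving differently
sal = motion_saliency(grid)
```

Cells whose motion matches their surround score near zero (low variation), while a cell moving against its surround scores high, matching the behaviour the abstract describes.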
20150117708 | Three Dimensional Close Interactions - Described herein is a method for detecting, identifying and tracking hand, hand parts, and fingers on the hand ( | 04-30-2015 |
20150117709 | Robust Scale Estimation in Real-Time Monocular SFM for Autonomous Driving - A method for performing three-dimensional (3D) localization requiring only a single camera including capturing images from only one camera; generating a cue combination from sparse features, dense stereo and object bounding boxes; correcting for scale in monocular structure from motion (SFM) using the cue combination for estimating a ground plane; and performing localization by combining SFM, ground plane and object bounding boxes to produce a 3D object localization. | 04-30-2015 |
20150117710 | System for Locating Mobile Display Devices - A system is provided in which a mobile display device, such as a cellular phone, a computing tablet, a mobile computer or any other portable device that incorporates a computing element and a display element, is adapted to display a predetermined pattern. The pattern can be either visible or invisible to the human eye. An image sensor (most likely, but not necessarily, an optical one) detects said pattern, and the location of the MDD is determined in relation to the sensor. In one embodiment, the MDD displays a pattern on the device's standard display, for example four points of unique character, such as color or blinking pattern. Such a method will allow a cost-effective way to implement the system, as it requires no additional cost in hardware. In another embodiment, the system recognizes the display unit of the MDD (instead of a specific pattern on the MDD's display) and can determine the location of the MDD without any change to the MDD's hardware or software. In yet another embodiment, the system includes bi-directional wireless communication between the receiver and the MDD and software that allows the MDD to exchange information with the system, such as the exact dimensions of the display. | 04-30-2015 |
20150117711 | IMAGE PROCESSING METHOD USING SENSED EYE POSITION - A method is described for processing an image previously captured by a camera and stored in a processor readable memory. The method involves detecting a first face within the stored image and detecting a position of the first face within the stored image. The method additionally involves performing region-specific image processing on the stored image based on the detected position of the first face. A computer readable storage medium for storing instructions for processing an image previously captured by a camera and a hand-held camera are also described. | 04-30-2015 |
20150117712 | COMPUTER VISION BASED CONTROL OF A DEVICE USING MACHINE LEARNING - A method for computer vision based control of a device, the method comprising: obtaining a first frame comprising an image of an object within a field of view; identifying the object by applying computer vision algorithms; storing image related shape information of the identified object; obtaining a second frame comprising an image of the object within a field of view and identifying the object in the second frame by using the image related shape information from the first frame; and controlling the device based on the identification of the object. | 04-30-2015 |
20150117713 | Determine Spatiotemporal Causal Interactions in Data - Techniques for detecting outliers in data and determining spatiotemporal causal interactions in the data are discussed. A process collects global positioning system (GPS) points in logs and identifies geographical locations to represent the area where the service vehicles travelled with a passenger. The process models traffic patterns by: partitioning the area into regions, segmenting the GPS points from the logs into time bins, and identifying the GPS points associated with transporting the passenger. The process projects the identified GPS points onto the regions to construct links connecting GPS points located in two or more regions. Furthermore, the process builds a three-dimensional unit cube to represent features of each link. The points farthest away from a center of data cluster are detected as outliers, which represent abnormal traffic patterns. The process constructs outlier trees to evaluate relationships of the outliers and determines the spatiotemporal causal interactions in the data. | 04-30-2015 |
20150125027 | ENHANCED OUTLIER REMOVAL FOR 8 POINT ALGORITHM USED IN CAMERA MOTION ESTIMATION - A method to filter outliers in an image-aided motion-estimation system is provided. The method includes selecting eight-image-points in a first image received from a moving imaging device at at least one processor; selecting eight-image-points in a second image that correspond to the selected eight-image-points in the first image at the at least one processor, the second image being received from the moving imaging device; scaling the selected image-points at the at least one processor so the components of the selected image-points are between two selected values on the order of magnitude of 1; building an 8-by-9 matrix (A) from the scaled selected image-points at the at least one processor; determining a condition number for the 8-by-9 matrix at the at least one processor; and rejecting the 8-by-9 matrix built from the selected image-points when the determined condition number is greater than or equal to a condition-number threshold. | 05-07-2015 |
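The condition-number rejection step in the abstract above can be sketched roughly as follows; the function names, the scaling scheme, and the threshold value are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def scale_points(pts):
    """Scale coordinates so their components are on the order of magnitude
    of 1, as the abstract prescribes before building the matrix."""
    pts = np.asarray(pts, dtype=float)
    scale = np.abs(pts).max()
    return pts / scale if scale > 0 else pts

def build_epipolar_matrix(pts1, pts2):
    """Build the 8-by-9 matrix A of the eight-point algorithm: one row per
    correspondence (x1, y1) <-> (x2, y2)."""
    A = np.zeros((8, 9))
    for i, ((x1, y1), (x2, y2)) in enumerate(zip(pts1, pts2)):
        A[i] = [x2 * x1, x2 * y1, x2, y2 * x1, y2 * y1, y2, x1, y1, 1.0]
    return A

def accept_correspondences(pts1, pts2, cond_threshold=1e4):
    """Reject the eight-point sample when A is ill-conditioned; the
    condition number is the ratio of largest to smallest singular value."""
    A = build_epipolar_matrix(scale_points(pts1), scale_points(pts2))
    s = np.linalg.svd(A, compute_uv=False)
    cond = s[0] / s[-1] if s[-1] > 0 else np.inf
    return cond < cond_threshold
```

Degenerate samples (e.g. repeated points) drive the smallest singular value toward zero, so the condition number blows up and the sample is rejected.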
20150125028 | ELECTRONIC DEVICE AND VIDEO OBJECT MOTION TRAJECTORY MODIFICATION METHOD THEREOF - An electronic device and a video object motion trajectory modification method thereof are provided. The electronic device includes a video providing unit and a processing unit. The video providing unit is configured to provide a video. The processing unit is configured to extract a video segment from the video. The video segment includes a plurality of successive frames which include a common object. The processing unit is further configured to calculate at least one curve and one control point thereof. The at least one curve corresponds to a motion trajectory of the common object in the successive frames. The processing unit is also configured to adjust the at least one curve via the control point to modify the motion trajectory. The video object motion trajectory modification method is applied to the electronic device to implement the aforesaid operations. | 05-07-2015 |
20150125029 | METHOD, TV SET AND SYSTEM FOR RECOGNIZING TV STATION LOGO - The present disclosure discloses a method, a TV set and a system for recognizing a TV station logo. The method includes: obtaining a TV screen image; for each of a plurality of pre-stored standard TV station logos, selecting a matching area of the standard TV station logo from the TV screen image according to position information of the standard TV station logo, the position information indicating a position of the standard TV station logo in a TV screen; and recognizing a TV station logo in the TV screen image by matching the standard TV station logos with their respective matching areas. The present disclosure reduces the size of the matching area for logo recognition, solves the low speed problem of conventional logo recognition methods due to the selected matching area being large, and brings the effects of reducing the matching area and improving the speed for logo recognition. | 05-07-2015 |
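The position-restricted matching idea above (compare each stored logo only against the matching area at its known screen position, rather than scanning the whole frame) might look like this rough sketch; the sum-of-squared-differences mismatch measure and the data layout are assumptions:

```python
def match_logo(screen, logos):
    """Recognize the TV station logo in a grayscale screen image.
    logos maps a station name to (top, left, pattern): the stored logo
    pattern and its known position in the TV screen. Only the small
    matching area at that position is compared, which is what speeds up
    recognition relative to a full-frame search."""
    best, best_err = None, float("inf")
    for name, (top, left, pattern) in logos.items():
        # Sum of squared differences over the matching area only.
        err = sum((screen[top + y][left + x] - pattern[y][x]) ** 2
                  for y in range(len(pattern))
                  for x in range(len(pattern[0])))
        if err < best_err:
            best, best_err = name, err
    return best
```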
20150125030 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND PROGRAM - The present technique relates to an image processing device, an image processing system, an image processing method, and a program that enable simple and accurate detection of regions showing a detection object such as a body hair in images. | 05-07-2015 |
20150125031 | THREE-DIMENSIONAL OBJECT DETECTION DEVICE - A three-dimensional object detection device has an image capturing device, a three-dimensional object detection unit, a high-luminance area assessment unit and a controller. The image capturing device captures images of an area including a right-side detection area or a left-side detection area rearward of a vehicle. The three-dimensional object detection unit detects a three-dimensional object based on the images acquired by the image capturing device. The high-luminance area assessment unit assesses a first detection area including a high-luminance area complying with a predetermined reference on either the right-side detection area or the left-side detection area. The controller suppresses detection of the three-dimensional object based on image information of the first detection area that was detected, and maintains or promotes detection of the three-dimensional object based on image information of a second detection area other than the first detection area within the right-side detection area or the left-side detection area. | 05-07-2015 |
20150125032 | OBJECT DETECTION DEVICE - The object detection device according to the present invention includes: an image obtainer configured to obtain, from a camera for taking images of a predetermined image sensed area, the images of the predetermined image sensed area at a predetermined time interval sequentially; a difference image creator configured to calculate a difference image between images obtained sequentially by the image obtainer; and a determiner configured to determine whether each of a plurality of blocks obtained by dividing the difference image in a horizontal direction and a vertical direction is a motion region in which a detection target in motion is present or a rest region in which an object at rest is present. The determiner is configured to determine, with regard to each of the plurality of blocks, whether a block is the motion region or the rest region, based on pixel values of a plurality of pixels constituting this block. | 05-07-2015 |
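A minimal pure-Python sketch of the block-wise motion/rest decision described in the abstract above (the block size and both thresholds are illustrative, not values from the patent):

```python
def difference_image(prev, curr):
    """Absolute per-pixel difference between two equally sized grayscale
    frames, each given as a 2-D list of pixel values."""
    return [[abs(c - p) for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def classify_blocks(diff, block_h, block_w, pixel_thresh=10, ratio_thresh=0.5):
    """Divide the difference image into blocks and label each block
    'motion' when enough of its pixels changed, else 'rest'."""
    h, w = len(diff), len(diff[0])
    labels = []
    for by in range(0, h, block_h):
        row = []
        for bx in range(0, w, block_w):
            pixels = [diff[y][x]
                      for y in range(by, min(by + block_h, h))
                      for x in range(bx, min(bx + block_w, w))]
            changed = sum(1 for v in pixels if v >= pixel_thresh)
            row.append("motion" if changed / len(pixels) >= ratio_thresh
                       else "rest")
        labels.append(row)
    return labels
```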
20150125033 | BONE FRAGMENT TRACKING - A method of determining bone fragment navigation may include receiving pre-operative 2D image data of a reference bone structure and a bone fragment. The reference bone structure may include a first set of fiducial markers provided thereon, and the bone fragment may include a second set of fiducial markers provided thereon. The method may further include performing a 2D-3D registration between the pre-operative 2D image data and a 3D model of the reference bone structure and the bone fragment, after manual repositioning of the bone fragment, receiving second 2D image data, performing 2D-2D registration of the first set of fiducial markers and the second set of fiducial markers between the pre-operative 2D image data and the second 2D image data, and determining 3D movement of the bone fragment based at least in part on the 2D-2D registration. | 05-07-2015 |
20150125034 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - To calculate the position and orientation of a target object with high accuracy, an information processing apparatus converts an image feature on a two-dimensional image into a corresponding position in a three-dimensional space, acquires a first registration error between the converted image feature and a geometric feature of a model, acquires a second registration error between a distance point and the geometric feature of the model, and then derives the position and orientation of the target object based on the acquired first registration error and the acquired second registration error. | 05-07-2015 |
20150125035 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM FOR POSITION AND ORIENTATION MEASUREMENT OF A MEASUREMENT TARGET OBJECT - To perform robust position and orientation measurement even in a situation where noise exists, an image including a target object is obtained, an approximate position and orientation of the target object included in the obtained image are obtained, information related to a shadow region of the target object in the obtained image is estimated, the approximate position and orientation are corrected on the basis of the estimated information related to the shadow region, and a position and orientation of the target object in the image are derived on the basis of the corrected approximate position and orientation and held model information. | 05-07-2015 |
20150125036 | Extraction of Video Fingerprints and Identification of Multimedia Using Video Fingerprinting - A video fingerprinting algorithm extracts characteristic features from regions of interest in a media object, such as a video signal. The regions of interest contain the perceptually important parts of the video signal. A fingerprint may be extracted from a target media object, and the fingerprint of the target media content may then be matched against multiple regions of interest of known reference fingerprints. This matching may allow identification of complex scenes, inserts, and different versions of the same content presented in, for example, different formats of the media object. | 05-07-2015 |
20150125037 | HEURISTIC MOTION DETECTION METHODS AND SYSTEMS FOR INTERACTIVE APPLICATIONS - A method is provided for motion detection comprising acquiring a series of images of an audience in a viewing area comprising a current image and a previous image, determining a plurality of optical flow vectors, each representing movement of one of a plurality of visual elements from a first location in the previous image to a second location in the current image, storing the optical flow vectors in a current vector map associated with time information, and determining motion by calculating an intensity ratio between the current vector map and at least one prior vector map. The audience is in a theater or other venue having at least one view screen. A video camera captures images of the audience. Audience movements are interpreted and used to control images on the view screen. | 05-07-2015 |
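One way to read the intensity-ratio computation above (current vector map versus one or more prior vector maps) is the following sketch; treating each map as a list of (dx, dy) optical-flow vectors, and averaging the prior maps, are assumptions rather than the patented formula:

```python
import math

def intensity(vector_map):
    """Total magnitude of the optical-flow vectors in one vector map."""
    return sum(math.hypot(dx, dy) for dx, dy in vector_map)

def motion_ratio(current_map, prior_maps):
    """Ratio of the current map's intensity to the mean intensity of the
    prior maps; values well above 1 suggest a burst of audience motion."""
    prior = sum(intensity(m) for m in prior_maps) / len(prior_maps)
    return intensity(current_map) / prior if prior > 0 else float("inf")
```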
20150125038 | RECOGNITION APPARATUS, METHOD, AND COMPUTER PROGRAM PRODUCT - In an embodiment, a recognition apparatus includes: an obtaining unit configured to obtain positions of a specific part in a coordinate system having a first axis to an n-th axis (n≧2); a calculating unit configured to calculate a movement vector of the specific part; a principal axis selecting unit configured to select a principal axis; a turning point setting unit configured to set, as a turning point, a position at which there is a change in the principal axis; a section setting unit configured to set a determination target section and an immediately previous section; a determining unit configured to calculate an evaluation value of the determination target section and an evaluation value of the immediately previous section and determine which of the first axis to the n-th axis is advantageous; and a presenting unit configured to present the determined result. | 05-07-2015 |
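The principal-axis selection and turning-point detection described in the abstract above can be illustrated roughly as follows; 2-D positions in the example and the function names are assumptions (the patent allows a first axis through an n-th axis):

```python
def principal_axes(positions):
    """For each movement vector between consecutive positions of the
    specific part, select the principal axis: the index of the component
    with the largest absolute displacement."""
    axes = []
    for a, b in zip(positions, positions[1:]):
        v = [bi - ai for ai, bi in zip(a, b)]
        axes.append(max(range(len(v)), key=lambda i: abs(v[i])))
    return axes

def turning_points(axes):
    """Indices at which the principal axis changes, i.e. candidate
    turning points bounding the determination target sections."""
    return [i + 1 for i, (p, q) in enumerate(zip(axes, axes[1:])) if p != q]
```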
20150131848 | SUPPORT VECTOR MACHINE BASED OBJECT DETECTION SYSTEM AND ASSOCIATED METHOD - An exemplary object detection method includes generating feature block components representing an image frame, and analyzing the image frame using the feature block components. For each feature block row of the image frame, feature block components associated with the feature block row are evaluated to determine a partial vector dot product for detector windows that overlap a portion of the image frame including the feature block row, such that each detector window has an associated group of partial vector dot products. The method can include determining a vector dot product associated with each detector window based on the associated group of partial vector dot products, and classifying an image frame portion corresponding with each detector window as an object or non-object based on the vector dot product. Each feature block component can be moved from external memory to internal memory only once when implementing the exemplary object detection method. | 05-14-2015 |
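The row-streaming accumulation of partial dot products could be sketched like this: each feature-block row is visited exactly once, and its contribution is added to every detector window overlapping it. A linear SVM with one weight vector per window row, and the bias handling, are assumptions for illustration:

```python
def classify_windows(feature_rows, row_weights, bias=0.0):
    """feature_rows: one feature vector per feature block row of the frame.
    row_weights: linear-SVM weights, one vector per row of the detector
    window. Windows slide vertically one row at a time."""
    win_rows = len(row_weights)
    n_windows = len(feature_rows) - win_rows + 1
    scores = [0.0] * n_windows
    for r, row in enumerate(feature_rows):      # each row streamed once
        # Add this row's partial dot product to every overlapping window.
        for t in range(max(0, r - win_rows + 1), min(r, n_windows - 1) + 1):
            w = row_weights[r - t]
            scores[t] += sum(f * wi for f, wi in zip(row, w))
    return [s - bias > 0 for s in scores]       # object / non-object
```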
20150131849 | MOBILE IMAGE ACQUISITION - A set of conditions associated with monitoring a given environment is received. One or more locations in the given environment are determined, based on at least a portion of the received set of conditions, for which data is to be acquired. The given environment is traversed through the one or more locations. Data at the one or more locations is acquired. The acquired data is stored for subsequent review. One or more of the above steps are performed under control of a processing device at least a part of which is mounted on a mobile platform that is configured to move through the given environment. Determination of the one or more locations in the given environment may further include determining an extended region to be observed. | 05-14-2015 |
20150131850 | IDENTIFYING USER ACTIVITIES USING EYE TRACKING DATA, MOUSE EVENTS, AND KEYSTROKES - A computing device classifies user activities for a person interacting with a computer user interface using one or more user interface devices. The computing device receives eye tracking data for the person, which includes a sequence of fixations ordered temporally. Each fixation corresponds to a plurality of consecutive measured gaze points. Each fixation has a duration and location based on the corresponding gaze points. For each fixation, the computing device determines a plurality of features for the fixation, including characteristics of the fixation, context features based on preceding or subsequent fixations, and user interaction features based on information from the user interface devices during the fixation. The computing device assigns a user activity label to the fixation according to the features. The label is selected from a predefined set. The computing device then analyzes the fixations and their assigned user activity labels to make recommendations. | 05-14-2015 |
20150131851 | SYSTEM AND METHOD FOR USING APPARENT SIZE AND ORIENTATION OF AN OBJECT TO IMPROVE VIDEO-BASED TRACKING IN REGULARIZED ENVIRONMENTS - A system and method for optimizing video-based tracking of an object of interest are provided. A video of a regularized motion environment that comprises multiple video frames is acquired, and an initial instance of an object of interest in one of the frames is then detected. An expected size and orientation of the object of interest as a function of the location of the object is then determined. The location of the object of interest is then determined in a next subsequent frame using the expected size and orientation of the object of interest. | 05-14-2015 |
20150131852 | OBJECT POSITION DETERMINATION - In embodiments, apparatuses, methods and storage media for human-computer interaction are described. In embodiments, an apparatus may include one or more light sources and a camera. Through capture of images by the camera, the apparatus may detect positions of objects of a user, within a three-dimensional (3-D) interaction region within which to track positions of the objects of the user. The apparatus may utilize multiple light sources, which may be disposed at different distances to the display and may illuminate the objects in a direction other than the image capture direction. The apparatus may selectively illuminate individual light sources to facilitate detection of the objects in the direction toward the display. The camera may also capture images in synchronization with the selective illumination. Other embodiments may be described and claimed. | 05-14-2015 |
20150131853 | STEREO MATCHING SYSTEM AND METHOD FOR GENERATING DISPARITY MAP USING SAME - A stereo matching system comprises a face detection unit configured to detect a face area using either one of a reference image and a target image provided thereto and to extract information about the detected face area, and a support window setting unit configured to compare the information about the detected face area with a preset value to set the size of a support window. | 05-14-2015 |
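The face-size-to-window-size comparison might reduce to something as simple as this sketch; the preset value and the two window sizes are invented for illustration (the abstract does not give concrete numbers):

```python
def support_window_size(face_width, face_height, preset=40,
                        small=5, large=9):
    """Choose a stereo-matching support window size by comparing the
    detected face area against a preset value: a larger face (a closer
    subject, hence larger disparities) gets the larger window."""
    face_size = max(face_width, face_height)
    return large if face_size >= preset else small
```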
20150131854 | IMAGE MEASUREMENT APPARATUS, IMAGE MEASUREMENT METHOD, AND IMAGE MEASURING PROGRAM STORAGE MEDIUM - A partial image extracting unit extracts images of a predetermined size and constant magnification from a tissue region. A mask generating unit generates a mask for removing a region not intended for measurement from the tissue region for each extracted image. A complete mask generating unit generates a temporary complete mask in which the masks generated for each of the images are integrated together, and generates a complete mask in which close portions among unmasked portions in the temporary complete mask are unified into one or more target regions. A measuring unit measures information pertaining to an object to be measured included in the image, and this information is measured for each of the images extracted by the partial image extracting unit. A region information calculating unit calculates, for each target region, information pertaining to the object to be measured from the measured information and from the complete mask. | 05-14-2015 |
20150131855 | GESTURE RECOGNITION DEVICE AND CONTROL METHOD FOR THE SAME - A gesture recognition device configured to detect a gesture from an acquired image and generate a command issued to a control target instrument according to the gesture, the gesture recognition device comprising: an image acquisition unit configured to acquire an image; a gesture acquisition unit configured to detect a target region performing a gesture from the acquired image, and acquire the gesture based on motion or a shape of the detected target region; a face detection unit configured to detect a face included in the acquired image; a correlation unit configured to correlate the detected target region with the detected face using a human body model representing a shape of a human body; a personal identification unit configured to identify a user corresponding to the detected face; and a command generation unit configured to generate a command issued to the control target instrument based on the identified user and the acquired gesture. | 05-14-2015 |
20150131856 | MONITORING DEVICE AND MONITORING METHOD - A monitoring device configured to monitor whether the appearance of a target person is in a proper state suitable for a working environment, and comprising: an image input unit configured to input an image of the target person; a detector configured to analyze the image input by the image input unit and detect a predetermined region of a human body of the target person; a state estimator configured to estimate a state of the predetermined region detected by the detector; a proper state acquisition unit configured to acquire proper state information on a proper state of the predetermined region according to the working environment; and a controller configured to determine whether a present state of the predetermined region of the target person is proper by comparing an estimation result of the state estimator to the proper state information, and perform control according to a determination result. | 05-14-2015 |
20150131857 | VEHICLE RECOGNIZING USER GESTURE AND METHOD FOR CONTROLLING THE SAME - A vehicle is provided that is capable of preventing malfunction or inappropriate operation of the vehicle due to a passenger error by distinguishing a gesture of a driver from that of the passenger when a gesture of a user is recognized, and a method for controlling the same is provided. The vehicle includes an image capturing unit mounted inside the vehicle and configured to capture a gesture image of a gesture area including a gesture of a driver or a passenger. A controller is configured to detect an object of interest in the gesture image captured by the image capturing unit and determine whether the object of interest belongs to the driver. In addition, the controller is configured to recognize a gesture expressed by the object of interest and generate a control signal that corresponds to the gesture when the object of interest belongs to the driver. | 05-14-2015 |
20150131858 | TRACKING DEVICE AND TRACKING METHOD - A non-transitory computer-readable medium storing a program for tracking a feature point in an image that causes a computer to execute a process, the process includes: calculating first values indicating degree of corner for respective pixels in another image, based on change of brightness values in horizontal direction and vertical direction; calculating second values indicating degree of similarity between respective areas in the another image and a reference area around the feature point in the image, based on comparison between the respective areas and the reference area; calculating third values indicating overall degree of corner and similarity, based on the first values and the second values; and tracking the feature point by identifying a point in the another image corresponding to the feature point in the image, based on the third values. | 05-14-2015 |
20150131859 | METHOD AND APPARATUS FOR TRACKING OBJECT, AND METHOD AND APPARATUS FOR CALCULATING OBJECT POSE INFORMATION - A method and apparatus for tracking an object, and a method and apparatus for calculating object pose information are provided. The method of tracking the object obtains object feature point candidates by using a difference between pixel values of neighboring frames. A template matching process is performed in a predetermined region having the object feature point candidates as the center. Accordingly, it is possible to reduce a processing time needed for the template matching process. The method of tracking the object is robust in terms of sudden changes in lighting and partial occlusion. In addition, it is possible to track the object in real time. In addition, since the pose of the object, the pattern of the object, and the occlusion of the object are determined, detailed information on action patterns of the object can be obtained in real time. | 05-14-2015 |
20150131860 | AIRPORT TARGET TRACKING SYSTEM - A system for tracking objects using an intelligent Video processing system in the context of airport surface monitoring. The system addresses airport surface monitoring operational issues such as all weather conditions, high robustness, and low false report rate. The output can be used to complement existing airport surface monitoring systems. By combining the use of multi-sensors and an adverse weather optimized system, the system is capable of producing an improved stream of information for the target object over traditional computer vision based airport surface monitoring systems. | 05-14-2015 |
20150131861 | MULTI-VIEW OBJECT DETECTION USING APPEARANCE MODEL TRANSFER FROM SIMILAR SCENES - View-specific object detectors are learned as a function of scene geometry and object motion patterns. Motion directions are determined for object images extracted from a training dataset and collected from different camera scene viewpoints. The object images are categorized into clusters as a function of similarities of their determined motion directions, the object images in each cluster are acquired from the same camera scene viewpoint. Zenith angles are estimated for object image poses in the clusters relative to a position of a horizon in the cluster camera scene viewpoint, and azimuth angles of the poses as a function of a relation of the determined motion directions of the clustered images to the cluster camera scene viewpoint. Detectors are thus built for recognizing objects in input video, one for each of the clusters, and associated with the estimated zenith angles and azimuth angles of the poses of the respective clusters. | 05-14-2015 |
20150131862 | HUMAN TRACKING SYSTEM - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities. | 05-14-2015 |
20150131863 | Representative Motion Flow Extraction For Effective Video Classification and Retrieval - Technologies are generally described herein for extracting a representative motion flow from a video. Technologies are also generally described herein for retrieving a video utilizing the representative motion flow. The representative motion flow may be extracted utilizing a sliding window approach to generate interesting motion flows. The representative motion flow may be generated based on the interesting motion flows. | 05-14-2015 |
20150139482 | SYSTEM AND METHOD FOR UPDATING GEOGRAPHIC DATA - According to one aspect, embodiments of the invention provide a system and method for utilizing the effort expended by a user in responding to a CAPTCHA request to automatically transcribe text from images in order to verify, retrieve and/or update geographic data associated with geographic locations at which the images were recorded. | 05-21-2015 |
20150139483 | Interactive Controls For Operating Devices and Systems - An electric device (e.g., module, interactive controller/switch) comprising a gesture sensor can use the gesture sensor to determine (e.g., detect, recognize, identify, etc.) a gesture performed by a user. If the electric device recognizes the gesture as corresponding to a gestural command to control or operate another device and/or system (e.g., such as a light), then the electric device can instruct the other device/system to function or operate in accordance with the gestural command (e.g., turn on or off). In some embodiments, the electric device can also comprise an audio sensor configured to capture audio data. The captured audio data can include a vocal command given by the user. The electric device can analyze the captured audio data. Based on the analysis, if the electric device recognizes the vocal command, then the electric device can cause the other device/system to function or operate in accordance with the vocal command. | 05-21-2015 |
20150139484 | TIME SCALE ADAPTIVE MOTION DETECTION - A method and system for efficient non-persistent object motion detection comprises evaluating a video segment to identify at least two first pixel classes corresponding to a plurality of stationary pixels and a plurality of pixels in apparent motion, and evaluating the video segment to identify at least two second pixel classes corresponding to a background and a foreground indicative of the presence of a non-persistent object. The first pixel classes and the second pixel classes can be combined to define a final motion mask in the selected video segment indicative of the presence of a non-persistent object. An output can provide an indication that the object is in motion. | 05-21-2015 |
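Combining the two per-pixel classifications described above into the final motion mask can be sketched as: a pixel enters the mask only when it is both in apparent motion and on the non-persistent foreground. The 0/1 list-of-lists representation is an assumption:

```python
def final_motion_mask(flow_class, fg_class):
    """flow_class: 1 where a pixel is in apparent motion, 0 where
    stationary. fg_class: 1 where a pixel belongs to the non-persistent
    foreground, 0 for background. The final mask is their intersection."""
    return [[1 if m and f else 0 for m, f in zip(mrow, frow)]
            for mrow, frow in zip(flow_class, fg_class)]
```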
20150139485 | POSE-ALIGNED NETWORKS FOR DEEP ATTRIBUTE MODELING - Technology is disclosed for inferring human attributes from images of people. The attributes can include, for example, gender, age, hair, and/or clothing. The technology uses part-based models, e.g., Poselets, to locate multiple normalized part patches from an image. The normalized part patches are provided into trained convolutional neural networks to generate feature data. Each convolution neural network applies multiple stages of convolution operations to one part patch to generate a set of fully connected feature data. The feature data for all part patches are concatenated and then provided into multiple trained classifiers (e.g., linear support vector machines) to predict attributes of the image. | 05-21-2015 |
20150139486 | ELECTRONIC EYEGLASSES AND METHOD OF MANUFACTURE THERETO - A system and methods for recognizing certain eye or eyelid gestures, such as the opening or closing of an eyelid or movement of the pupil, as signals to trigger certain predesigned desired events. An embodiment comprises electronic glasses placed in front of the eye to recognize certain eye or eyelid gestures as signals to control an electronic device such as a chair for those with special needs, TVs, car systems, or some video games. This is achieved through an apparatus and method of integrated circuits (ICs) for detecting black or white color, for controlling the speed of movement of a wheelchair, and for a wireless control system for the wheelchair. The electronic glasses embodiment is designed using optical probes (IR), some high capacity relays and some amplifiers driving transistors, which in turn drive relays that separate the control circuit from the power circuits. | 05-21-2015 |
20150139487 | IMAGE PROCESSOR WITH STATIC POSE RECOGNITION MODULE UTILIZING SEGMENTED REGION OF INTEREST - An image processing system comprises an image processor having image processing circuitry and an associated memory. The image processor is configured to implement a gesture recognition system comprising a static pose recognition module. The static pose recognition module is configured to identify a region of interest in at least one image, to represent the region of interest as a segmented region of interest comprising a union of segment sets from respective ones of a plurality of lines, to estimate features of the segmented region of interest, and to recognize a static pose of the segmented region of interest based on the estimated features. The lines from which the respective segment sets are taken illustratively comprise respective parallel lines configured as one of horizontal lines, vertical lines and rotated lines. A given one of the segments in one of the sets may be represented by a pair of segment coordinates. | 05-21-2015 |
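Representing a region of interest as a union of per-line segment sets, each segment given by a pair of segment coordinates, might look like the following sketch (horizontal lines and a binary mask input are assumptions; the abstract also allows vertical and rotated lines):

```python
def segment_rows(mask):
    """Represent a binary ROI mask as, per horizontal line, the list of
    (start, end) coordinate pairs covering the runs of 1s on that line."""
    segments = []
    for row in mask:
        runs, start = [], None
        for x, v in enumerate(row + [0]):    # sentinel closes a final run
            if v and start is None:
                start = x
            elif not v and start is not None:
                runs.append((start, x - 1))
                start = None
        segments.append(runs)
    return segments

def roi_area(segments):
    """A simple feature of the segmented ROI: its area, computed as the
    sum of segment lengths over all lines."""
    return sum(e - s + 1 for runs in segments for s, e in runs)
```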
20150139488 | METHOD AND SYSTEM OF IDENTIFYING NON-DISTINCTIVE IMAGES/OBJECTS IN A DIGITAL VIDEO AND TRACKING SUCH IMAGES/OBJECTS USING TEMPORAL AND SPATIAL QUEUES - A method and system to identify and locate images/objects which may be characterized as non-distinctive or “feature-less” and which would be difficult to locate by conventional means comprises a plurality of steps including: identifying first and second frame markers; increasing granularity between the frame markers; identifying at least one dominant object between the frame markers, normalizing its shape and identifying its edges; dissecting the dominant object into at least two equally sized sections; identifying the shape and characteristics of at least one section (the analyzed section) of the dominant object, thereby creating section data; applying geometric modeling such that section data from the analyzed section is used to determine the overall shape, facets and configuration of the dominant object, thereby forming a geometric model; comparing the geometric model to a known reference database of objects like the non-distinctive object (the reference object); and assessing the probability that the geometric model so formed represents the desired non-distinctive object. | 05-21-2015 |
20150139489 | TASK ASSISTANCE SYSTEM, TASK ASSISTANCE METHOD, AND PROGRAM - A task assistance system includes: a location detection unit ( | 05-21-2015 |
20150139490 | DISCRIMINATION CONTAINER GENERATION DEVICE AND PATTERN DETECTION DEVICE - A discriminator generation device | 05-21-2015 |
20150139491 | AUTOMATIC OCCLUSION REGION IDENTIFICATION USING RADIATION IMAGING MODALITY - Among other things, one or more systems and/or techniques for identifying an occlusion region in an image representative of an object subjected to examination is provided for herein. Such systems and/or techniques may find particular application in the context of object recognition analysis. An image is generated of the object and an orientation of the object is determined from the image. Based upon the determined orientation of the object relative to the direction the object is translated during examination, one or more parameters utilized for segmenting a second image of the object, identifying features in the image, and/or determining if the image comprises an occlusion region may be adjusted. In this way, the parameters utilized may be a function of the determined orientation of the object, which may mitigate false positives of detected occlusion regions. | 05-21-2015 |
20150139492 | IMAGE RECOGNITION APPARATUS AND DATA REGISTRATION METHOD FOR IMAGE RECOGNITION APPARATUS - An image recognition apparatus is provided, comprising: an extraction unit extracting feature amount data of a subject from an image; a database registering a plurality of pieces of feature amount data extracted from different images of one registered object; and a comparing unit identifying whether or not the subject is the registered object by comparing the feature amount data extracted by the extraction unit and the feature amount data of the registered object registered in the database, a registration unit, using an image for registration, adding feature amount data of the registered object to the database in accordance with a predetermined condition which includes a first condition: if new data, which is the feature amount data extracted from the image for registration, is similar to registered data, which is the feature amount data of the registered object already registered in the database, the new data is not added. | 05-21-2015 |
20150139493 | COMMODITY RECOGNITION APPARATUS AND COMMODITY RECOGNITION METHOD - In accordance with one embodiment, a commodity recognition apparatus detects, from a captured image, an object imaged in the captured image and extracts an appearance feature amount of the object from the image of the object; compares the extracted appearance feature amount with feature amount data of a dictionary file in which feature amount data indicating the surface information of a commodity is stored for each recognition target commodity to calculate a similarity degree indicating how similar the appearance feature amount is to the feature amount data for each recognition target commodity; recognizes whether or not the object is a commodity based on the calculated similarity degree; and specifies and notifies the reason in a case in which the object is not recognized as a commodity. | 05-21-2015 |
20150139494 | SLOW CHANGE DETECTION SYSTEM - A slow change detection system includes: an image acquisition unit adapted to acquire a photographed image including consecutive images of a monitored object; a reference image acquisition unit adapted to acquire a reference image corresponding to the photographed image of the monitored object; a reference image area extraction unit adapted to extract, from the reference image, a reference image area that is an area corresponding to a photographed area of the photographed image; a change detection unit adapted to acquire a change area that is an area of the photographed image which is different from the reference image area and acquire a slow change area by excluding, from the acquired change area, a sudden change area derived from a history of the change area; and a display control unit adapted to superimpose and display information indicating the slow change area on the photographed image. | 05-21-2015 |
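The persistence test in this entry, separating slow changes from sudden ones using the history of change masks, can be sketched in a few lines. This is a hedged illustration only, not the patented method: `tol`, `min_persist`, and the shape of `history` are assumptions.

```python
import numpy as np

# Illustrative sketch: a pixel belongs to the change area when it differs from
# the reference image, and to the slow change area only when that difference
# has persisted through the recent history of change masks (sudden changes are
# thereby excluded). Parameters are assumed, not taken from the patent.

def slow_change_area(photo, reference, history, tol=10, min_persist=3):
    change = np.abs(photo.astype(int) - reference.astype(int)) > tol
    persistence = np.sum(history, axis=0)            # frames each pixel has changed
    slow = change & (persistence >= min_persist)     # exclude sudden changes
    return change, slow
```

The returned `slow` mask is what a display control unit would superimpose on the photographed image.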
20150139495 | ELECTRONIC DEVICE AND METHOD FOR PROCESSING IMAGE THEREOF - A method for processing an image of an electronic device is provided, which includes detecting at least one motion object from image information and, when there are two motion objects, determining whether the motion objects are separated from or combined with each other. At least one motion object is extracted when the motion objects are separated from or combined with each other, it is determined whether or not there is a preset motion object synthesis sequence, and a synthesis time point before and after the time point of separation or combination of the motion objects is recommended. | 05-21-2015 |
20150139496 | METHOD FOR PROCESSING IMAGE AND ELECTRONIC DEVICE THEREOF - A method of operating an electronic device is provided. The method of operating an electronic device includes displaying an image of a first resolution, determining at least a partial region of the image, and displaying an image of a second resolution corresponding to the partial region. | 05-21-2015 |
20150139497 | LIVENESS DETECTION - An image of a portion of a person's body is accessed, the image having been captured by an image capture device. Using the image, measurements of characteristics in the image are obtained, the characteristics in the image having been selected based on a statistical analysis of characteristics (i) in a plurality of first images taken directly of a person and (ii) in a plurality of second images taken of an image of a person. Based on a liveness function, a score for the image is determined using the obtained measurements of the characteristics in the image. A threshold value is accessed. The score of the image is compared to the accessed threshold value. Based on the comparison of the score of the image to the accessed threshold value, the image is determined to have been taken by the image capture device imaging the portion of the person's body. | 05-21-2015 |
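The score-versus-threshold decision in this entry can be illustrated with a hypothetical linear liveness function. The weights, bias, and threshold below are made up for illustration; the patent does not disclose a specific functional form here.

```python
# Hypothetical liveness function (assumed linear form): combine the measured
# image characteristics into a score and compare it to the accessed threshold.

def liveness_score(measurements, weights, bias=0.0):
    """Weighted sum of characteristic measurements."""
    return bias + sum(w * m for w, m in zip(weights, measurements))

def is_live(measurements, weights, threshold, bias=0.0):
    """True when the score meets the accessed threshold value."""
    return liveness_score(measurements, weights, bias) >= threshold
```

In practice the weights would come from the statistical analysis of the two image populations mentioned in the abstract.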
20150146915 | HARDWARE CONVOLUTION PRE-FILTER TO ACCELERATE OBJECT DETECTION - Systems, apparatus, articles, and methods are described related to a hardware-based convolution pre-filter to accelerate object detection. | 05-28-2015 |
20150146916 | Method of reducing computational demand for image tracking - A method of reducing computational demand for image tracking includes first receiving a plurality of images of a load hoisted with hanging wire in a preset transitional space, setting at least one monitoring point each at the load and the hanging wire, respectively, providing a tracking frame at the outer periphery of each monitoring point, allowing each tracking frame to track its corresponding monitoring point so that each monitoring point is kept at a relative location within its corresponding tracking frame and the tracking frame produces the same displacement as the monitoring point, and lastly computing the displacement of each tracking frame, thus obtaining the displacement of each monitoring point corresponding to the tracking frame and calculating the respective displacements of the hanging wire and the load. With this image tracking method, the processing quantity of whole image data is effectively decreased and the computational demand of equipment is reduced. | 05-28-2015 |
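The core bookkeeping of this entry, a monitoring point held at a fixed offset inside a small tracking frame so that the frame's displacement equals the point's displacement, can be sketched as follows. The function names and tuple conventions are illustrative assumptions, not from the patent.

```python
# Illustrative sketch: because each monitoring point keeps a fixed relative
# offset inside its tracking frame, tracking only the small frame (instead of
# the whole image) yields the point's displacement directly.

def frame_displacement(prev_frame_origin, curr_frame_origin):
    """Displacement (dx, dy) of a tracking frame between two images."""
    return (curr_frame_origin[0] - prev_frame_origin[0],
            curr_frame_origin[1] - prev_frame_origin[1])

def point_position(frame_origin, relative_offset):
    """Monitoring point position, kept at a fixed offset within its frame."""
    return (frame_origin[0] + relative_offset[0],
            frame_origin[1] + relative_offset[1])
```

Only the pixels inside each small frame need to be processed per image, which is where the computational saving comes from.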
20150146917 | METHOD AND SYSTEM FOR VIDEO-BASED VEHICLE TRACKING ADAPTABLE TO TRAFFIC CONDITIONS - A method and system for adaptable video-based object tracking includes acquiring video data from a scene of interest and identifying an initial instance of an object of interest in the acquired video data. A representation of a target object is then established. One or more motion parameters associated with said scene of interest are used to adjust the size of a search neighborhood associated with said target object. The target object is then tracked frame-by-frame in the video data. | 05-28-2015 |
20150146918 | VIDEO DEVICE FOR REALTIME PEDALING FREQUENCY ESTIMATION - A video device for realtime pedaling frequency estimation is mounted on a bike and comprises an image capture unit capturing continuous dynamic images of an upper body of a biker; an image recognition unit recognizing images of symmetric regions of the biker and images of swings of the symmetric regions from the continuous dynamic images; a microprocessor calculating a frequency of periodical swings of the biker from the images of the symmetric regions and the images of the swings of the symmetric regions, and then obtaining a pedaling frequency; and a display device presenting the pedaling frequency, wherein a widely-used intelligent handheld device replaces the sensors, display device, and complicated circuits of the conventional cyclometer, and wherein a novel image recognition technology is used to measure the pedaling frequency with considerable accuracy, at a lower cost, and in a convenient way. | 05-28-2015 |
20150146919 | APPARATUS AND METHOD FOR DETECTING PEDESTRIANS - Provided is an image processing apparatus for detecting pedestrians. The image processing apparatus includes a lane detecting module configured to extract a lane coordinate value from an input image and a pedestrian detecting module configured to set, as a pedestrian region of interest (ROI), a region between a first line passing through ends of first left and right lanes and a second line passing through ends of second left and right lanes which are respectively disposed above the left and right lanes, and search for the pedestrian ROI by using a predetermined window to detect a pedestrian region having a pedestrian feature. | 05-28-2015 |
20150146920 | GESTURE RECOGNITION METHOD AND APPARATUS UTILIZING ASYNCHRONOUS MULTITHREADED PROCESSING - An image processing system comprises an image processor configured to establish a main processing thread and a parallel processing thread for respective portions of a multithreaded gesture recognition process. The parallel processing thread is configured to utilize buffer circuitry of the image processor, such as one or more double buffers of the buffer circuitry, so as to permit the parallel processing thread to run asynchronously to the main processing thread. The parallel processing thread implements one of noise estimation, background estimation and static hand pose recognition for the multithreaded gesture recognition process. Additional processing threads may be established to run in parallel with the main processing thread. For example, the image processor may establish a first parallel processing thread implementing the noise estimation, a second parallel processing thread implementing the background estimation, and a third parallel processing thread implementing the static hand pose recognition. | 05-28-2015 |
20150146921 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - The present technology relates to an information processing apparatus, an information processing method, and a program capable of searching for and tracking a person desired to be searched for and tracked from images captured by a plurality of cameras with high precision. | 05-28-2015 |
20150146922 | TARGET DETECTION DEVICE AND TARGET DETECTION METHOD - A target detection device that determines whether input data acquired from a data input module contains a detection target, the target detection device including: a multi-level data generation module for generating, from the input data, a plurality of data mutually different in an information level, the information level being a degree representing the detection target; an evaluation value calculation module for calculating, for each of the plurality of data, an evaluation value representing a degree of likelihood of the detection target; and a target determination module for determining that the input data contains the detection target when an increasing degree by which the evaluation value calculated for each of the plurality of data mutually different in the information level increases according to increase of the information level is equal to or more than a lower limit value of the increasing degree where the input data contains the detection target. | 05-28-2015 |
20150146923 | SYSTEMS AND METHODS FOR TRACKING A MODEL - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose. | 05-28-2015 |
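The downsampling step in this entry, generating a coarser grid from a depth image, can be sketched as block-averaging of valid depth samples. This is an illustrative reduction under assumptions (block size, zeros meaning "no sample"), not the patented voxelization.

```python
import numpy as np

# Illustrative voxel-grid-style downsampling (assumed scheme): average the
# non-zero depths in each fixed-size block; zero means "no depth sample".

def downsample_depth(depth, block=2):
    h, w = depth.shape
    out = np.zeros((h // block, w // block), dtype=float)
    for i in range(h // block):
        for j in range(w // block):
            cell = depth[i*block:(i+1)*block, j*block:(j+1)*block]
            valid = cell[cell > 0]
            out[i, j] = valid.mean() if valid.size else 0.0
    return out
```

Extremity estimation and model adjustment would then operate on the much smaller grid.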
20150146924 | IMAGE ANALYZING DEVICE, IMAGE ANALYZING METHOD, AND RECORDING MEDIUM STORING IMAGE ANALYZING PROGRAM - An image analyzing device, an image analyzing method, and a recording medium storing an image analyzing program are provided. Each of the image analyzing device, the image analyzing method, and the recording medium storing the image analyzing program recognizes an area where a target is displayed based on a feature value of an input image, generates space recognition information to recognize spatial properties of each portion of the input image, divides the image into a plurality of similar areas according to similarity in feature value of the input image, the similar area having a similar feature value, obtains specified attribute data of the spatial properties to be referred to, from image areas around the recognized area where the target is displayed, recognizes the spatial properties according to the space recognition information, and determines whether a result of recognition is appropriate at the portion where the target is displayed. | 05-28-2015 |
20150146925 | METHOD FOR RECOGNIZING A SPECIFIC OBJECT INSIDE AN IMAGE AND ELECTRONIC DEVICE THEREOF - A method and an apparatus for recognizing a specific object inside an image in an electronic device are provided. The method includes displaying at least one image; detecting at least one gesture; selecting a recognition function related to at least one object existing in the at least one image according to the detected at least one gesture; and recognizing the at least one object using the selected recognition function. | 05-28-2015 |
20150146926 | POWER EFFICIENT USE OF A DEPTH SENSOR ON A MOBILE DEVICE - Systems, apparatus and methods in a mobile device to enable and disable a depth sensor for tracking pose of the mobile device are presented. A mobile device relying on a camera without a depth sensor may provide inadequate pose estimates, for example, in low light situations. A mobile device with a depth sensor uses substantial power when the depth sensor is enabled. Embodiments described herein enable a depth sensor only when images are expected to be inadequate, for example, accelerating or moving too fast, when inertial sensor measurements are too noisy, light levels are too low or high, an image is too blurry, or a rate of images is too slow. By only using a depth sensor when images are expected to be inadequate, battery power in the mobile device may be conserved and pose estimations may still be maintained. | 05-28-2015 |
20150146927 | ACCELERATED OBJECT RECOGNITION IN AN IMAGE - A method for recognizing an object ( | 05-28-2015 |
20150146928 | APPARATUS AND METHOD FOR TRACKING MOTION BASED ON HYBRID CAMERA - An apparatus and method for tracing a motion of an object using high-resolution image data and low-resolution depth data acquired by a hybrid camera in a motion analysis system used for tracking a motion of a human being. The apparatus includes a data collecting part, a data fusion part, a data partitioning part, a correspondence point tracking part, and a joint tracking part. Accordingly, it is possible to precisely track a motion of the object by fusing the high-resolution image data and the low-resolution depth data, which are acquired by the hybrid camera. | 05-28-2015 |
20150146929 | STATIONARY TARGET DETECTION BY EXPLOITING CHANGES IN BACKGROUND MODEL - Provided is a computer-implemented method for processing one or more video frames. The method can include generating, by a processor, a change in value of one or more pixels obtained from the one or more video frames; classifying, by the processor, the change in value of the one or more pixels to produce one or more classes of the change in value of the one or more pixels, wherein the one or more classes include one or more of a stationary target, a moving target, a target insertion, a target removal, or a local change; and constructing, by the processor, a listing of detected targets based on the one or more classes. | 05-28-2015 |
20150294140 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD AND PROGRAM - Provided are an information processing system, an information processing method and a program capable of suitably monitoring a moving body related to a plurality of imaging apparatuses. The information processing system of the present invention includes: an input unit | 10-15-2015 |
20150294144 | PASSENGER COUNTING SYSTEM, PASSENGER COUNTING METHOD AND PASSENGER COUNTING PROGRAM - A passenger counting system that can count correctly the number of persons present in a vehicle, including persons sitting on the back seat, is provided. The passenger counting system includes an image capturing device | 10-15-2015 |
20150294148 | EYE GAZE DETECTION APPARATUS, COMPUTER-READABLE RECORDING MEDIUM STORING EYE GAZE DETECTION PROGRAM AND EYE GAZE DETECTION METHOD - An eye-gaze-detection apparatus includes an image acquisition unit that acquires a face image of a user from an imaging apparatus, the face image being captured by the imaging apparatus; a feature-quantity extraction unit that extracts a feature quantity of the face image; an eye-gaze-calculation-determination unit that determines whether or not an eye-gaze calculation process is performed by referring to a rule database, based on the feature quantity extracted by the feature quantity extraction unit, the rule database storing a rule set associating a condition including the feature quantity of the face image with information indicating whether or not the eye gaze calculation process is performed; and an eye-gaze-calculation unit that performs the eye-gaze-calculation process for the user, based on the feature quantity of the face image acquired by the image acquisition unit, when the eye-gaze-calculation-determination unit determines that the eye-gaze-calculation process is performed. | 10-15-2015 |
20150294151 | EDUCATION SITE IMPROVEMENT SUPPORT SYSTEM, EDUCATION SITE IMPROVEMENT SUPPORT METHOD, INFORMATION PROCESSING APPARATUS, COMMUNICATION TERMINAL, AND CONTROL METHODS AND CONTROL PROGRAMS OF INFORMATION PROCESSING APPARATUS AND COMMUNICATION TERMINAL - An apparatus of this invention is directed to an information processing apparatus that aims to improve the quality of the ITC education based on the history of reactions or evaluations to education conducted by different educators to different educatees in different education site environments in education using education application software. An information processing apparatus includes an education site history accumulator that accumulates the history of pieces of education site information representing the reactions or evaluations of education site participants including an educator and an educatee at an education site using an education application software, and the education application software in association with each other, an education site information receiver that receives, from a communication terminal, the pieces of education site information acquired by the communication terminal or a device connected to the communication terminal, and an analysis information generator that generates analysis information of the education site from the received pieces of education site information and the history of the pieces of education site information. | 10-15-2015 |
20150294152 | METHOD OF DETECTION OF POINTS OF INTEREST IN A DIGITAL IMAGE | 10-15-2015 |
20150294156 | SIGHT INFORMATION COLLECTION IN HEAD WORN COMPUTING - Aspects of the present invention relate to methods and systems for collecting and using eye heading and sight heading information in head worn computing. | 10-15-2015 |
20150294158 | Method and System for Tracking Objects - There is provided a system for tracking objects. The system includes a processor and a memory for storing a plurality of sensory data frames. The processor determines a first hypothesized location for each of the objects in each of the plurality of sensory data frames. For each of the plurality of sensory data frames, the processor determines probabilities that the first hypothesized location of each of the objects in a sensory data frame of the plurality of sensory data frames is the same as the first hypothesized location of another object in an adjacent sensory data frame. The processor computes a first optimal trajectory for each of the objects using an algorithm based on the probabilities, checks the first optimal trajectory for each of the objects, and accepts or rejects the first optimal trajectory for each of the objects. | 10-15-2015 |
20150294159 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD AND PROGRAM - Provided are an information processing system, an information processing method and a program capable of suitably predicting, when tracking a person with a plurality of video cameras, the video camera in which a moving body currently appearing in one video camera will subsequently appear. The information processing system of the present invention includes an interior view angle person position acquisition unit | 10-15-2015 |
20150294163 | IMAGE PROCESSING DEVICE - Provided is an image processing device in which a region targeted for edge extraction of a taken image is divided into a plurality of partial regions | 10-15-2015 |
20150294167 | METHOD AND SYSTEM FOR DETECTING TRAFFIC LIGHTS - A method for detecting traffic lights is provided. The method includes: obtaining a color image captured by a camera; converting the color image into a first monochrome scale image; converting the first monochrome scale image into a first binary image; identifying a first set of candidate blobs in the first binary image based on at least one predetermined geometric parameter; and determine whether a first region in the color image, which first region corresponds to one of the first set of candidate blobs, is a green traffic light using a green traffic light classifier. The accuracy and efficiency may be improved. | 10-15-2015 |
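The candidate-blob stage of this entry (binary image, connected components, geometric filter) can be sketched generically. The connectivity, area bounds, and function names below are assumptions for illustration; the classifier stage is omitted.

```python
import numpy as np
from collections import deque

# Illustrative sketch of blob candidate extraction: label 4-connected blobs in
# a binary image and keep those whose pixel area falls within an assumed range
# plausible for a traffic-light lamp (a simple geometric parameter).

def candidate_blobs(binary, min_area=2, max_area=100):
    h, w = binary.shape
    seen = np.zeros_like(binary, dtype=bool)
    blobs = []
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                q, pixels = deque([(y, x)]), []
                seen[y, x] = True
                while q:                      # breadth-first flood fill
                    cy, cx = q.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if min_area <= len(pixels) <= max_area:   # geometric filter
                    blobs.append(pixels)
    return blobs
```

Each surviving blob's region in the color image would then be passed to the green-traffic-light classifier.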
20150294469 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD, AND PROGRAM - An image processing device for recognizing an object corresponding to a registered image registered beforehand from an imaged image, comprising: an obtaining unit configured to obtain the imaged image; a recognizing unit configured to recognize an object corresponding to the registered image from the imaged image; and a detecting unit configured to detect, based on a registered image corresponding to an object recognized from the imaged image thereof, an area where another object is overlapped with the object corresponding to the registered image thereof. | 10-15-2015 |
20150294478 | IMAGE PROCESSING DEVICE USING DIFFERENCE CAMERA - A fast and stable image processing system detecting a mark from a difference image is described. The system includes a display displaying a first image and a second image alternately. A camera captures the first image that contains a mark and the second image. An image processing device detects the mark from a non-zero pixel region of a difference image between the first image that is captured and the second image that is captured. | 10-15-2015 |
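The difference step in this entry is simple enough to sketch directly: subtract the two captured images and keep the non-zero region. The bounding-box return convention and the `tol` parameter are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch: the mark survives as the non-zero region of the
# difference image between the frame containing the mark and the frame
# without it. `tol` absorbs small sensor noise (an assumed parameter).

def detect_mark(first_image, second_image, tol=0):
    diff = np.abs(first_image.astype(int) - second_image.astype(int))
    ys, xs = np.nonzero(diff > tol)
    if ys.size == 0:
        return None
    return (ys.min(), xs.min(), ys.max(), xs.max())  # bounding box of the mark
```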
20150294479 | FALLBACK DETECTION IN MOTION ESTIMATION - Techniques related to managing the use of motion estimation in video processing are discussed. Such techniques may include dividing two video frames each into corresponding regions, generating phase plane correlations for the corresponding regions, determining whether the video frames are motion estimation correlated based on the phase plane correlations, and providing a video frame prediction mode indicator based on the determination. | 10-15-2015 |
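A phase plane correlation between two corresponding regions can be sketched with NumPy FFTs. This is a textbook phase-correlation illustration under my own conventions (argument order, peak-strength return), not the patented fallback logic.

```python
import numpy as np

# Illustrative phase plane correlation: the peak of the inverse FFT of the
# normalized cross-power spectrum sits at the translation between the regions,
# and its height indicates how well the regions are correlated by pure motion.

def phase_correlation(curr, prev):
    f_curr, f_prev = np.fft.fft2(curr), np.fft.fft2(prev)
    cross = f_curr * np.conj(f_prev)
    cross /= np.maximum(np.abs(cross), 1e-12)        # keep phase only
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak, corr.max()
```

A low or inconsistent peak across the regions would indicate the frames are not motion-estimation correlated, which is when a fallback prediction mode becomes attractive.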
20150294481 | MOTION INFORMATION PROCESSING APPARATUS AND METHOD - A motion information processing apparatus according to embodiments includes an acquiring unit and an output unit. The acquiring unit acquires motion information indicating a motion of a person. The output unit outputs support information used to support a motion relating to rehabilitation for the person whose motion information is acquired by the acquiring unit. | 10-15-2015 |
20150294482 | System And Process For Detecting, Tracking And Counting Human Objects Of Interest - A system is disclosed that includes: at least one image capturing device at the entrance to obtain images; a reader device; and a processor for extracting objects of interest from the images and generating tracks for each object of interest, and for matching objects of interest with objects associated with RFID tags, and for counting the number of objects of interest associated with, and not associated with, particular RFID tags. | 10-15-2015 |
20150294483 | VISION-BASED MULTI-CAMERA FACTORY MONITORING WITH DYNAMIC INTEGRITY SCORING - A human monitoring system includes a plurality of cameras and a visual processor. The plurality of cameras are disposed about a workspace area, where each camera is configured to capture a video feed that includes a plurality of image frames, and the plurality of image frames are time-synchronized between the respective cameras. The visual processor is configured to receive the plurality of image frames from the plurality of vision-based imaging devices and determine an integrity score for each respective image frame. The processor may then isolate a foreground section from two or more of the views, determine a principle body axis for each respective foreground section, and determine a location point according to a weighted least squares function amongst the various principle body axes. | 10-15-2015 |
20150302087 | Object Information Derived From Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image. | 10-22-2015 |
20150302253 | Systems and Method for Identifying Locations of Infrastructure Assets Using Aerial Imagery - Embodiments include a computer-implemented method for identifying locations of infrastructure assets. The method including identifying a location of a first infrastructure asset and a location of a second infrastructure asset, identifying a region of interest extending at least partially between the locations of the first and second infrastructure assets, obtaining an aerial image of the region of interest, determining that the aerial image of the region of interest comprises a third infrastructure asset, determining a location of the third infrastructure asset based at least in part on the aerial image, and storing the location of the third infrastructure asset in an infrastructure asset datastore. | 10-22-2015 |
20150302256 | PROGRAM, METHOD, AND SYSTEM FOR DISPLAYING IMAGE RECOGNITION PROCESSING SUITABILITY - Provided is a system for displaying image recognition processing suitability which provides the degree to which a photographed image in which a monitoring target object is present at any potential location is suitable for an image recognition process in such a way that a user can understand it easily. A resolution evaluation unit | 10-22-2015 |
20150302258 | OBJECT DETECTION DEVICE - An object detection device detects an object being recognized (such as a pedestrian) in a frame image, and identifies an area where the detected object is present. A subsequent frame image is then input. The object detection device detects the object being recognized in the subsequent frame image, and identifies an area where the detected object is present in that image. When the distance from the center coordinates of one area to the center coordinates of the other is smaller than a reference distance, the object detection device determines that the detected object in the subsequent frame image is identical to the detected object in the preceding frame image. | 10-22-2015 |
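The identity test in this entry reduces to a center-distance comparison, sketched below. The reference distance and the tuple convention for centers are illustrative assumptions.

```python
import math

# Illustrative sketch: two detections in consecutive frames are treated as the
# same object when their area centers are closer than a reference distance.

def same_object(center_a, center_b, reference_distance):
    return math.dist(center_a, center_b) < reference_distance
```

Real systems would typically add gating on size or appearance, but the abstract describes only the distance criterion.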
20150302259 | DRIVING ASSISTANCE DEVICE AND IMAGE PROCESSING PROGRAM - A driving assistance device includes: an imaging unit configured to image surroundings of a vehicle, an image processing unit configured to perform processing on the image imaged by the imaging unit and to generate a display image, and a display unit configured to display the display image generated by the image processing unit. The image processing unit, in response to a set condition, among areas in which the image imaged by the imaging unit is divided, generates a display image in which a visibility of a display area corresponding to a first area which is an imaging area at a side far from the vehicle is lower than a visibility of an ordinary display image thereof. | 10-22-2015 |
20150302262 | METHOD AND DEVICE FOR DETECTING VARIABLE MESSAGE TRAFFIC SIGNS - A method for detecting variable message traffic signs for a vehicle, includes: reading in a vehicle position, comparing the vehicle position to position information of at least one variable message traffic sign in order to determine a presence of a variable message traffic sign in a predefined area around the vehicle and in response to provide a proximity information, and varying a detection instruction of information of a variable message traffic sign in response to the proximity information to detect a variable message traffic sign. | 10-22-2015 |
20150302575 | SUN LOCATION PREDICTION IN IMAGE SPACE WITH ASTRONOMICAL ALMANAC-BASED CALIBRATION USING GROUND BASED CAMERA - A method for predicting location of the sun in an image space. The method includes providing a set of calibration images and offline intrinsic calibration of a camera and optical element. An extrinsic parameter calibration is then performed based on the calibration images and mapping between local three dimensional coordinates and real world three dimensional coordinates to provide an extrinsic projection matrix. The method also includes providing a real time image of the sky and determining sun location in spherical space based on the extrinsic projection matrix and a real time sun location in the world coordinate system for the real time image. A three dimensional vector is then mapped to provide a corrected two dimensional ideal point. Next, an inverse affine transformation is performed to provide a two dimensional real image point in image space. | 10-22-2015 |
20150302576 | Retraction Based Three-Dimensional Tracking of Object Movements - The technology disclosed relates to tracking movement of a real world object in three-dimensional (3D) space. In particular, it relates to mapping, to image planes of a camera, projections of observation points on a curved volumetric model of the real world object. The projections are used to calculate a retraction of the observation points at different times during which the real world object has moved. The retraction is then used to determine translational and rotational movement of the real world object between the different times. | 10-22-2015 |
20150302586 | THREE-DIMENSIONAL OBJECT DETECTION DEVICE - A three-dimensional object detection device includes an image capturing unit, a detection area setting unit, an image conversion unit, a three-dimensional object detection unit, and a relative movement speed calculation unit. The detection area setting unit sets a detection area in a lateral direction rearward of the host vehicle. The image conversion unit converts a viewpoint of the images obtained by the image capturing unit to create bird's-eye view images. The three-dimensional object detection unit detects a presence of a three-dimensional object within the detection area by vehicle width direction detection processing. The detection area setting unit widens the detection area rearward with respect to a direction of vehicle progress when the three-dimensional object is detected in the detection area and the relative movement speed of the three-dimensional object, as calculated by relative movement speed calculation unit, is at a predetermined value or greater. | 10-22-2015 |
20150302591 | SYSTEM FOR DETECTING OBSTACLE USING ROAD SURFACE MODEL SETTING AND METHOD THEREOF - A system for detecting an obstacle includes an image acquisitor configured to acquire image data around a camera. An obstacle detector is configured to apply a road surface model using a horizon or a vanishing point to the image data and perform a sliding window on a road surface region to detect the obstacle. | 10-22-2015 |
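The sliding-window pass over the road surface region can be illustrated generically. The window size, stride, and a precomputed horizon row are all assumptions for illustration, not the patented configuration.

```python
# Illustrative sketch: enumerate windows only below the horizon row given by
# the road surface model, so the obstacle classifier never scans the sky.

def sliding_windows(width, height, horizon_row, win=32, stride=16):
    """Yield (x, y, w, h) windows covering the region below the horizon."""
    for y in range(horizon_row, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            yield (x, y, win, win)
```

Restricting the scan to the road surface region is the main saving over a full-image sliding window.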
20150302606 | COLLISION WARNING SYSTEM - A method of estimating a time to collision (TTC) of a vehicle with an object comprising: acquiring a plurality of images of the object; and determining a TTC from the images that is responsive to a relative velocity and relative acceleration between the vehicle and the object. | 10-22-2015 |
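The TTC in the entry above is responsive to both relative velocity and relative acceleration, which amounts to solving a quadratic for the first moment the gap to the object closes. A minimal sketch, assuming a sign convention (negative relative velocity means closing) and a constant-speed fallback that are illustrative, not taken from the filing:

```python
import math

def time_to_collision(distance, rel_velocity, rel_acceleration):
    """Smallest positive t solving
    distance + rel_velocity*t + 0.5*rel_acceleration*t**2 = 0,
    i.e. the moment the gap to the object closes.
    Returns math.inf if the object is never reached."""
    d, v, a = distance, rel_velocity, rel_acceleration
    if abs(a) < 1e-9:                       # constant relative speed
        return -d / v if v < 0 else math.inf
    disc = v * v - 2.0 * a * d              # discriminant of 0.5*a*t^2 + v*t + d
    if disc < 0:
        return math.inf                     # gap never reaches zero
    roots = [(-v - math.sqrt(disc)) / a, (-v + math.sqrt(disc)) / a]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else math.inf
```

For example, an object 20 m ahead closing at 10 m/s with no relative acceleration yields a TTC of 2 s; adding 2 m/s² of closing acceleration shortens it to about 1.71 s.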
20150302607 | IMAGE PROCESSING APPARATUS AND METHOD - A method and apparatus for localizing an area in relative movement and for determining the speed and direction thereof in real time is disclosed. Each pixel of an image is smoothed using its own time constant. A binary value corresponding to the existence of a significant variation in the amplitude of the smoothed pixel from the prior frame, and the amplitude of the variation, are determined, and the time constant for the pixel is updated. For each particular pixel, two matrices are formed that include a subset of the pixels spatially related to the particular pixel. The first matrix contains the binary values of the subset of pixels. The second matrix contains the amplitude of the variation of the subset of pixels. In the first matrix, it is determined whether the pixels along an oriented direction relative to the particular pixel have binary values representative of significant variation, and, for such pixels, it is determined in the second matrix whether the amplitude of these pixels varies in a known manner indicating movement in the oriented direction. In each of several domains, a histogram of the values in the first and second matrices falling in such domain is formed. Using the histograms, it is determined whether there is an area having the characteristics of the particular domain. The domains include luminance, hue, saturation, speed (V), oriented direction (D | 10-22-2015 |
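The per-pixel smoothing step described in the entry above (each pixel smoothed with its own time constant, a significance bit and amplitude derived from the variation, and the time constant then updated) can be sketched for a single pixel as follows; the specific update rule and all numeric defaults are illustrative assumptions, not taken from the filing:

```python
def update_pixel(value, smoothed, tau, threshold=12.0,
                 tau_min=2.0, tau_max=16.0):
    """One update step of adaptive exponential smoothing for one pixel.
    value    : current frame's pixel intensity
    smoothed : running smoothed estimate from the previous step
    tau      : this pixel's time constant (larger = slower adaptation)
    Returns (significant-change bit, change amplitude, new smoothed, new tau)."""
    amplitude = abs(value - smoothed)
    changed = amplitude >= threshold
    # adapt: shorten tau where the pixel is changing, lengthen where static
    tau = max(tau - 1.0, tau_min) if changed else min(tau + 1.0, tau_max)
    smoothed = smoothed + (value - smoothed) / tau
    return changed, amplitude, smoothed, tau
```

Running this per pixel per frame yields exactly the two quantities the abstract feeds into its matrices: the binary significance value and the variation amplitude.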
20150307024 | VEHICLE PERIPHERAL OBSTACLE NOTIFICATION SYSTEM - A vehicle peripheral obstacle notification system, used on a construction machine, detects a moving obstacle, gives composite display of the detected moving obstacle, and notifies a user thereof, the moving obstacle being at a location not appearing in a composite bird's-eye view obtained by onboard cameras acquiring images of the surroundings of the vehicle including an image immediately under the vehicle body. The system has a composite image formation part that extracts composite image formation areas from the surrounding images to compose a composite bird's-eye image. A moving obstacle is detected using the surrounding images and it is decided whether the detected position of the moving obstacle is inside or outside the composite image formation areas. If the detected position is determined to be outside the composite image formation areas, information about the detected position is extracted and an image thereof is output for superimposition onto the composite bird's-eye image. | 10-29-2015 |
20150310253 | HANDLING GLARE IN EYE TRACKING - Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of light sources of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a selected combination of light sources for eye tracking based on a determination of occlusion detected in the image arising from a transparent or semi-transparent optical structure positioned between the eye and the camera and project light via the selected combination of light sources for eye tracking. | 10-29-2015 |
20150310257 | OBJECT IDENTIFICATION USING 3-D CURVE MATCHING - The claimed subject matter provides for systems and/or methods for identification of instances of an object of interest in 2D images by creating a database of 3D curve models of each desired instance and comparing an image of an object of interest against such 3D curve models of instances. The present application describes identifying and verifying the make and model of a car, possibly from a single image, after the models have been populated with training data of test images of many makes and models of cars. In one embodiment, an identification system may be constructed by generating a 3D curve model by back-projecting edge points onto a visual hull reconstruction from silhouettes of an instance. The system and methods employ chamfer distance and orientation distance, which provide reasonable verification performance, as well as an appearance model for the taillights of the car to increase the robustness of the system. | 10-29-2015 |
20150310260 | Determining Which Participant is Speaking in a Videoconference - Aspects herein describe methods and systems of receiving, by one or more cameras, images in which the images comprise facial images of individuals. Aspects of the disclosure describe extracting the facial images from the images received, sorting the extracted facial images into separate groups wherein each group corresponds to the facial images of each individual, and selecting, for each individual, a preferred facial image from each group. The preferred facial images selected are transmitted to a client for display. Aspects of the disclosure also describe selecting either a facial recognition algorithm or an audio triangulation algorithm to use to determine which individual is speaking wherein the selection is based on whether lip movement of one or more of the individuals is visible in the images received from the cameras. | 10-29-2015 |
20150310263 | FACIAL EXPRESSION TRACKING - The description relates to facial tracking. One example can include an orientation structure configured to position the wearable device relative to a user's face. The example can also include a camera secured by the orientation structure parallel to or at a low angle to the user's face to capture images across the user's face. The example can further include a processor configured to receive the images and to map the images to parameters associated with an avatar model. | 10-29-2015 |
20150310264 | Dynamic Gesture Recognition Using Features Extracted from Multiple Intervals - In one embodiment, an image processor comprises image processing circuitry and an associated memory. The image processor is configured to implement a gesture recognition system utilizing the image processing circuitry and the memory. The gesture recognition system implemented by the image processor comprises a dynamic gesture recognition module. The dynamic gesture recognition module is configured to establish a dynamic gesture recognition interval comprising a plurality of image frames, to extract one or more first features from the dynamic gesture recognition interval, to adjust the dynamic gesture recognition interval, to extract one or more second features from the adjusted dynamic gesture recognition interval, and to recognize a dynamic gesture based at least in part on at least a subset of the extracted first and second features. | 10-29-2015 |
20150310265 | Method and System for Proactively Recognizing an Action of a Road User - A method and system are provided to proactively recognize an action of a road user in road traffic, wherein an image of the road user, which is structured in a pixel-wise manner, is captured by way of at least one camera, and corresponding image data is generated. Image data of multiple pixels is grouped in each case by cells, wherein the image comprises multiple cells. A respective centroid is determined based on the image data within a cell. For each of the pixels, the respective distance from the centroids of a plurality of cells is ascertained, wherein a feature vector that is assigned to the pixel is formed based on coordinates of the respective pixel and the centroids. The feature vector is compared to at least one reference vector cluster, and a pose is associated with the road user based on the comparison. The pose is representative of the road user planning to carry out the action. | 10-29-2015 |
20150310266 | DEVICE AND METHOD FOR DETERMINING GESTURE AND OPERATION METHOD OF GESTURE DETERMINING DEVICE - A device for determining a gesture includes a light emitting unit, an image sensing device and a processing circuit. The light emitting unit emits a light beam. The image sensing device captures an image of a hand reflecting the light beam. The processing circuit obtains the image and determines a gesture of the hand by performing an operation on the image; wherein the operation includes: selecting pixels in the image having a brightness greater than or equal to a brightness threshold; dividing the selected pixels; and determining the gesture of the hand according to a number of groups of divided pixels. A method for determining a gesture and an operation method of the aforementioned device are also provided. | 10-29-2015 |
20150310273 | STATIC OCCLUSION HANDLING USING DIRECTIONAL PIXEL REPLICATION IN REGULARIZED MOTION ENVIRONMENTS - This disclosure provides a static occlusion handling method and system for use with appearance-based video tracking algorithms where static occlusions are present. The method and system assume that the objects to be tracked move according to structured motion patterns within a scene, such as vehicles moving along a roadway. A primary concept is to replicate pixels associated with the tracked object from previous frames to current or future frames when the tracked object coincides with a static occlusion, where the predicted motion of the tracked object is a basis for replication of the pixels. | 10-29-2015 |
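The directional pixel replication described in the entry above can be sketched as follows; the interface (frames as 2-D lists, an explicit list of occluded pixel coordinates, and a single (dy, dx) motion estimate) is an illustrative assumption, not the filing's actual implementation:

```python
def replicate_occluded_pixels(prev_frame, cur_frame, occluded, motion):
    """Where a tracked object is hidden by a static occlusion, copy the
    pixel the object showed one frame earlier, displaced along its
    predicted motion.
    prev_frame, cur_frame : 2-D lists of pixel values (same shape)
    occluded : list of (y, x) positions hidden by the static occlusion
    motion   : (dy, dx) predicted per-frame displacement of the object"""
    dy, dx = motion
    h, w = len(cur_frame), len(cur_frame[0])
    out = [row[:] for row in cur_frame]
    for y, x in occluded:
        sy = min(max(y - dy, 0), h - 1)   # where the pixel was last frame,
        sx = min(max(x - dx, 0), w - 1)   # clamped to the frame bounds
        out[y][x] = prev_frame[sy][sx]
    return out
```

The replicated pixels keep the object's appearance model intact while it passes behind the occlusion, which is what lets an appearance-based tracker survive the occlusion.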
20150310274 | METHOD AND SYSTEM FOR AUTOMATICALLY LOCATING STATIC OCCLUSIONS - This disclosure provides a method and system to locate/detect static occlusions associated with an image captured scene including a tracked object. According to an exemplary method, static occlusions are automatically located by monitoring the motion of single or multiple objects in a scene over time and with the use of an associated accumulator array. | 10-29-2015 |
20150310275 | Method and device for calculating number and moving direction of pedestrians - A method for calculating the number and moving direction of pedestrians is provided, in which feature points of a current frame image are extracted; the feature points of the current frame image are compared with those of a selected historical frame image, to obtain moving feature points of the current frame image; directional weighted counting is performed on the moving feature points of the current frame image to obtain the moving direction of the pedestrians; and edge points of pedestrian images are extracted from a foreground image of the current frame image, and joint weighted counting is performed on the edge points of the pedestrian images and the moving feature points of the current frame image according to correction coefficients of locations of the respective points, to obtain the number of the pedestrians. A device for calculating the number and moving direction of pedestrians is also provided. | 10-29-2015 |
20150310276 | METHOD FOR THE AUTOMATIC CORRECTION OF ALIGNMENT ERRORS IN STAR TRACKER SYSTEMS - A method for the automatic correction of alignment errors in individual star trackers (R, S) of star tracker systems ( | 10-29-2015 |
20150310278 | SYSTEM AND METHOD FOR BEHAVIORAL RECOGNITION AND INTERPRETATION OF ATTRACTION - A mobile system for identifying behavior of being physically attracted towards an individual includes a portable computing device carried by a user having at least one camera. For this system, interested individuals are those that exhibit behavioral indications of physical attraction, referred to here as a “check out”. The software includes an individual detector technique, such as infrared, that detects the presence of individuals in the proximity of the device's camera. The software further includes a body position identifier that determines whether an individual detected by the individual detector is facing the portable device. | 10-29-2015 |
20150310280 | MOTION EVENT RECOGNITION AND VIDEO SYNCHRONIZATION SYSTEM AND METHOD - Enables recognition of events within motion data obtained from portable wireless motion capture elements and video synchronization of the events with video as the events occur or at a later time, based on location and/or time of the event or both. May use integrated camera or external cameras with respect to mobile device to automatically generate generally smaller event videos of the event on the mobile device or server. Also enables analysis or comparison of movement associated with the same user, other user, historical user or group of users. Provides low memory and power utilization and greatly reduces storage for video data that corresponds to events such as a shot, move or swing of a player, a concussion of a player or other medical-related events, or events such as the first steps of a child or falling events. | 10-29-2015 |
20150310307 | METHOD AND APPARATUS FOR ANALYZING MEDIA CONTENT - Aspects of the subject disclosure may include, for example, a method for determining a first set of features in first images of first media content, generating a similarity score by processing the first set of features with a favorability model derived by identifying generative features and discriminative features of second media content that is favored by a viewer, and providing the similarity score to a network for predicting a response by the viewer to the first media content. Other embodiments are disclosed. | 10-29-2015 |
20150310310 | ELECTRONIC DEVICE LOCALIZATION BASED ON IMAGERY - An electronic device includes one or more imaging cameras. After a reset of the device or other specified event, the electronic device identifies an estimate of the device's pose based on location data such as Global Positioning System (GPS) data, cellular tower triangulation data, wireless network address location data, and the like. The one or more imaging cameras may be used to capture imagery of the local environment of the electronic device, and this imagery is used to refine the estimated pose to identify a refined pose of the electronic device. The refined pose may be used to identify additional imagery information, such as environmental features, that can be used to enhance the location based functionality of the electronic device. | 10-29-2015 |
20150310458 | SYSTEM AND METHOD FOR VIDEO-BASED DETECTION OF DRIVE-OFFS AND WALK-OFFS IN VEHICULAR AND PEDESTRIAN QUEUES - A system and method for detecting customer drive-off/walk-off from a customer queue. An embodiment includes acquiring images of a retail establishment, said images including at least a portion of a customer queue region, determining a queue configuration within the images, analyzing the images to detect entry of a customer into the customer queue, tracking a customer detected in the customer queue as the customer progresses within the queue, analyzing the images to detect if the customer leaves the customer queue, and generating a drive-off notification if a customer leaves the queue. | 10-29-2015 |
20150310606 | STORE RESOURCE EVENT DETECTION - A computer vision system ( | 10-29-2015 |
20150310624 | METHOD AND SYSTEM FOR PARTIAL OCCLUSION HANDLING IN VEHICLE TRACKING USING DEFORMABLE PARTS MODEL - Provided is a method and system of tracking partially occluded objects using an elastic deformation model. According to an exemplary method and system, partially occluded vehicles are detected and tracked in a scene including side-by-side drive-thru lanes. A method for updating an event sequence includes acquiring video data of a queue area from at least one image source; searching the frames for subjects located at least near a region of interest (ROI) of defined start points in the video data; tracking a movement of each detected subject through the queue area over a subsequent series of frames; using the tracking, determining if a location of a tracked subject reaches a predefined merge point where multiple queues in the queue area converge into a single queue lane; in response to the tracked subject reaching the predefined merge point, computing an observed sequence of where the tracked subject places among other subjects approaching an end-event point; and, updating a sequence of end-events to match the observed sequence of subjects in the single queue lane. | 10-29-2015 |
20150310627 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR INDICATING HOSTILE FIRE - A network for indicating and communicating detection of hostile fire, and systems, methods, and computer program products thereof. Hostile fire is optically detected and identified at a first vehicle and such identification is transmitted from the first vehicle to one or more other vehicles in the network. Data regarding hostile fire directed at the first vehicle can be stored at one or more of the other vehicles and even retransmitted to other vehicles or base stations. | 10-29-2015 |
20150310628 | METHOD FOR REDUCING FALSE OBJECT DETECTION IN STOP-AND-GO SCENARIOS - Video-based object tracking accuracy is improved by a tentative identification of objects to be tracked. An identified blob that does not encompass a previously established set of tracking features (“tracker”) triggers initialization of an infant tracker. If that tracker remains the only tracker or becomes the “oldest” tracker associated with a blob identified in subsequent video frames, the “age” of the tracker is increased. If, in subsequent frames, the tracker is encompassed by a blob that is associated with an “older” tracker, the “age” of the tracker is decreased. Infant trackers that reach or exceed a threshold “age” are promoted to adult status. Adult trackers can be processed as being associated with valid objects. Trackers established for blobs identified due to mask segmentation tend not to cause false object detections. When segmentation is corrected, blob segments are combined and redundant trackers for the associated object are demoted and ignored. | 10-29-2015 |
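The infant/adult tracker aging scheme in the entry above can be sketched as a small state machine; the threshold value and method names below are illustrative assumptions, not taken from the filing:

```python
class Tracker:
    """Tentative ('infant') tracker that is promoted to adult status once
    its age crosses a threshold, mirroring the tentative-identification
    scheme: only adult trackers are treated as valid detected objects."""
    ADULT_AGE = 5   # frames of seniority required for promotion (assumed)

    def __init__(self):
        self.age = 0
        self.adult = False

    def observe(self, is_oldest_for_blob):
        """Call once per frame in which this tracker's blob was seen.
        is_oldest_for_blob: True if this is the only/oldest tracker
        associated with the blob, False if an older tracker claims it."""
        if is_oldest_for_blob:
            self.age += 1
        else:
            self.age = max(self.age - 1, 0)   # demote redundant tracker
        if self.age >= self.ADULT_AGE:
            self.adult = True                 # now reported as a valid object
        return self.adult
```

A tracker spawned by an over-segmented blob keeps being out-aged by the original tracker once segmentation is corrected, so it never reaches adult status and never produces a false object detection.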
20150317514 | IMAGE PROCESSING APPARATUS AND METHOD OF PROCESSING IMAGE - An image processing apparatus includes a first detecting unit configured to detect an object in an image; a determining unit configured to determine a moving direction of the object detected by the first detecting unit; and a second detecting unit configured to perform detection processing of detecting whether the object detected by the first detecting unit is a specific object on the basis of the moving direction of the object determined by the determining unit. | 11-05-2015 |
20150317516 | METHOD AND SYSTEM FOR REMOTE CONTROLLING - A method of remote controlling is disclosed. The method comprises: capturing image data from a scene at the vicinity of an appliance, processing the image data to recognize at least one gesture of an individual present in the scene, and changing a state of the appliance based on the at least one recognized gesture. | 11-05-2015 |
20150317519 | OBJECT DETECTION AND EXTRACTION FROM IMAGE SEQUENCES - Object detection and extraction is performed from image sequences utilizing circular buffers for both source images and tracking. The detection and extraction process is performed in relation to the previous and current image, and the current and next image, including: alignment, absolute difference, removal of non-overlaps, and contour detection in difference images. An intersection is performed on these two outputs to retain contours of the current image only. Recovery of missing contour information is performed utilizing gradient tracing, followed by morphological dilation. A splitting process is performed if additional objects are found in a bounding box area. A mask image bounded by object contour is created, color attributes assigned, object verification performed and outliers removed. Then untracked objects are removed from the mask and a mask is output for moving objects with rectangular boundary box information. | 11-05-2015 |
20150317520 | METHOD AND APPARATUS FOR EXTRACTION OF STATIC SCENE PHOTO FROM SEQUENCE OF IMAGES - An apparatus and method for extracting a static background image from a non-static image sequence or video sequence having at least three spatially overlapping frames is presented. Obscured static background areas are filled, according to the disclosure, with actual content as the background area becomes visible over time as non-static objects move with respect to the background. Consecutive image frames are stored in tracking buffers from which alignment is performed and absolute differences determined. Object contours are found in the difference image and bounding boxes determined as object masks. The background is then filled from areas outside these object masks to arrive at a static background image. | 11-05-2015 |
20150317521 | ANALYSIS CONTROL SYSTEM - An analysis control system | 11-05-2015 |
20150317524 | METHOD AND DEVICE FOR TRACKING-BASED VISIBILITY RANGE ESTIMATION - A method is provided for tracking-based visibility range estimation for a vehicle, the method including a step of tracking an object detected in a first image at a first point in time and in a second image at a second point in time, a step of ascertaining a first object luminance of the object and a first distance to the object at the first point in time and also ascertaining a second object luminance of the object and a second distance to the object at the second point in time, and also a step of determining an atmospheric extinction coefficient using the first object luminance, the second object luminance, the first distance, and the second distance, the atmospheric extinction coefficient being in direct correlation to visibility range. | 11-05-2015 |
20150317540 | Providing Image Search Templates - Techniques for providing image search templates are provided. An image search template may be associated with an image search query to aid the user in capturing an image that will be appropriate for processing the search query. The template may be displayed as an overlay during an image capturing process to indicate an appropriate image capturing pose, range, angle, or other view characteristics that may provide more accurate search results. The template may also be used in the image search query to segment the image and identify features relevant to the search query. Images in an image database may be clustered using characteristics of the images or metadata associated with the images in order to establish groups of images from which templates may be derived. The generated templates may be provided to users to assist in capturing images to be used as search engine queries. | 11-05-2015 |
20150317541 | METHODS AND SYSTEMS FOR CUSTOMIZING A PLENOPTIC MEDIA ASSET - Methods and systems are described for providing customized user experiences with media assets created using plenoptic content capture technology. The ability to increase the focus on different objects while the media asset is progressing may allow a user to more easily track the object. Conversely, the ability to decrease the focus on different objects while the media asset is progressing may block, or cloud the display of, the object from being seen by a user. | 11-05-2015 |
20150317797 | Pedestrian tracking and counting method and device for near-front top-view monitoring video - Provided are a pedestrian tracking and counting method and device for a near-front top-view monitoring video, wherein the method includes that a video image under a current monitoring scene is acquired, the acquired video image is compared with a background image, and when it is determined that the video image is a foreground image, each blob in the foreground image is segmented and combined to acquire a target blob representing an individual pedestrian, and tracking and counting are performed according to the centre-of-mass coordinate of each target blob in a detection area to acquire the number of pedestrians under the current monitoring scene. Thus the accuracy of a counting result can be improved. | 11-05-2015 |
20150317828 | METHOD AND SYSTEM FOR GEO-REFERENCING AT LEAST ONE SENSOR IMAGE - Various embodiments relate to a method ( | 11-05-2015 |
20150324352 | SYSTEMS AND METHODS FOR DYNAMICALLY COLLECTING AND EVALUATING POTENTIAL IMPRECISE CHARACTERISTICS FOR CREATING PRECISE CHARACTERISTICS - Aspects of the present disclosure are directed to systems and methods for evaluating an individual's affect or emotional state by extracting emotional meaning from audio, visual and/or textual input into a handset, mobile communication device or other peripheral device. The audio, visual and/or textual input may be collected, gathered or obtained using one or more data modules which may include, but are not limited to, a microphone, a camera, an accelerometer and a peripheral device. The data modules collect one or more sets of potential imprecise characteristics which may then be analyzed and/or evaluated. When analyzing and/or evaluating the imprecise characteristics, the imprecise characteristics may be assigned one or more weighted descriptive values and a weighted time value. The weighted descriptive values and the weighted time value are then compiled or fused to create one or more precise characteristics which may define the emotional state of an individual. | 11-12-2015 |
20150324634 | MONITORING A WAITING AREA | 11-12-2015 |
20150324642 | DETERMINING AN ORIENTATION OF A MOBILE DEVICE - Methods, systems, and devices are described for determining an orientation of a mobile device. One method includes capturing, at the mobile device, an image of at least one illuminated object defining an illuminated reference axis; determining a first angle between the illuminated reference axis and a device reference axis of the mobile device; determining a second angle between the illuminated reference axis and a common reference axis; estimating a third angle between the device reference axis and the common reference axis; and determining an orientation of the mobile device based at least in part on the first angle, the second angle, and the third angle. | 11-12-2015 |
20150324647 | METHOD FOR DETERMINING THE LENGTH OF A QUEUE - Method for determining the length of a queue of objects in a predefined region having at least one entrance and at least one exit, in the course of which errors in the acquisition of objects entering or exiting the region are corrected during the determination of the length of the queue. In a first step, a specific entry signature E of each object entering the predefined region through the at least one entrance is determined with the aid of at least one first image sensor. Thereafter, the specific entry signature E is stored in an entry list of a calculation unit, each entry signature E being provided with an index value i reflecting the temporal sequence of the entries. In addition, a value L reflecting the length of the queue of the objects is increased by one. Furthermore, a specific exit signature A of an object exiting the predefined region through the at least one exit is determined with the aid of at least one second image sensor, the specific exit signature A being stored in an exit list of a calculation unit as exit signature A | 11-12-2015 |
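The entry/exit-signature bookkeeping in the entry above can be sketched as follows; the matching callback and the specific correction policy (treating entries skipped over by a matched exit as missed exits) are illustrative assumptions, not taken from the filing:

```python
class QueueLengthEstimator:
    """Sketch of the entry/exit-signature scheme: each entering object's
    signature E is stored with a running index i and the length L is
    incremented; an exit signature A is matched against stored entries,
    and mis-detected events are corrected so L stays consistent."""

    def __init__(self, match):
        self.match = match        # match(entry_sig, exit_sig) -> bool
        self.entries = []         # [(index, signature)] in entry order
        self.next_index = 0
        self.length = 0           # L, current queue length

    def on_entry(self, signature):
        self.entries.append((self.next_index, signature))
        self.next_index += 1
        self.length += 1

    def on_exit(self, exit_signature):
        # find the earliest stored entry whose signature matches the exit
        for pos, (idx, sig) in enumerate(self.entries):
            if self.match(sig, exit_signature):
                # anything that entered before it must also have left
                # (its exit was missed), so drop those entries as well
                self.entries = self.entries[pos + 1:]
                self.length = len(self.entries)
                return idx
        return None               # spurious exit detection: ignore
```

With an exact-match callback, three entries followed by an exit matching the second entry corrects the length to one, absorbing the missed exit of the first object.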
20150324652 | OBJECT RECOGNITION APPARATUS - In an object recognition apparatus mounted on a vehicle, comprising: a plurality of recognizers each adapted to conduct object recognition ahead of the vehicle at intervals; and an object continuity determiner adapted to conduct object continuity determination based on a result of the object recognition conducted by the recognizers; the object continuity determiner determines that, when a first object recognized by any of the object recognizers at time (N) is present at a position within a predetermined area defined by a position of a second object recognized by other of the object recognizers at time (N−1) earlier than the time (N), the first object and the second object are identical to each other to be one object which is kept recognized continuously for a time period ranging from at least the time (N−1) to the time (N). | 11-12-2015 |
20150324655 | Distributive Hierarchical Model for Object Recognition in Video - Various examples are provided for object recognition in video. In one example, among others, a system includes processing circuitry including a processor. The processing circuitry is configured to process a sequence of images to recognize an object in the images, the recognition of the object based upon a hierarchical model. In another example, a method includes determining input data from a plurality of overlapping pixel patches of a video image; determining a plurality of corresponding states based at least in part upon the input data and an over-complete dictionary of filters; and determining a cause based at least in part upon the plurality of corresponding states. The cause may be used to identify an object in the video image. | 11-12-2015 |
20150324967 | SYSTEMS AND METHODS FOR REAL-TIME TUMOR TRACKING - Various embodiments disclose systems and methods for tracking regions (e.g., tumor locations) within living organisms. Some embodiments provide real-time, highly accurate, low latency measurements of tumor location even as the tumor moves with internal body motions. Such measurements may be suitable for closed-loop radiation delivery applications where radiation therapy may be continuously guided to the tumor site even as the tumor moves. Tumor motion may be associated with periodic motion (e.g., respiratory, cardiac) or aperiodic motion (e.g., gross patient motion, internal bowel motion). Various embodiments facilitate accurate radiation delivery to tumor sites exhibiting significant motion, e.g., lung, breast, and liver tumors. | 11-12-2015 |
20150324988 | AUTOMATED TONAL BALANCING - A system for automated tonal balancing, comprising a rectification server that groups and processes images for use in tone-matching and provides them to a tone-matching server, that then performs tone-matching operations on the images and provides them as output for review or storage, and methods for tonal balancing using the system of the invention. | 11-12-2015 |
20150325000 | METHOD AND APPARATUS FOR MOTION DETECTION - Image analysis techniques may be employed to identify moving and/or static object within a sequence of spatial data frames ( | 11-12-2015 |
20150325001 | DEVICE AND METHOD FOR MOTION ESTIMATION AND COMPENSATION - A device for motion estimation in video image data is provided. The device comprises a motion estimation unit ( | 11-12-2015 |
20150325003 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR VISUAL ODOMETRY USING RIGID STRUCTURES IDENTIFIED BY ANTIPODAL TRANSFORM - The subject matter described herein includes methods for visual odometry using rigid structures identified by an antipodal transform. One exemplary method includes receiving a sequence of images captured by a camera. The method further includes identifying rigid structures in the images using an antipodal transform. The method further includes identifying correspondence between rigid structures in different image frames. The method further includes estimating motion of the camera based on motion of corresponding rigid structures among the different image frames. | 11-12-2015 |
20150325004 | MOTION INFORMATION PROCESSING APPARATUS AND METHOD - A motion information processing apparatus of an embodiment includes processing circuitry. The processing circuitry obtains motion information of an object person who executes a walking motion. The processing circuitry generates track information, which indicates a position of a landing point of a foot of the object person and a movement of the object person, based on the motion information obtained. The processing circuitry performs control in such a manner that the track information generated is displayed on a display. | 11-12-2015 |
20150325029 | MECHANISM FOR FACILITATING DYNAMIC SIMULATION OF AVATARS CORRESPONDING TO CHANGING USER PERFORMANCES AS DETECTED AT COMPUTING DEVICES - A mechanism is described for facilitating dynamic simulation of avatars based on user performances according to one embodiment. A method of embodiments, as described herein, includes capturing, in real-time, an image of a user, the image including a video image over a plurality of video frames. The method may further include tracking changes in size of the user image, the tracking of the changes may include locating one or more positions of the user image within each of the plurality of video frames, computing, in real-time, user performances based on the changes in the size of the user image over the plurality of video frames, and dynamically scaling an avatar associated with the user such that the avatar is dynamically simulated corresponding to the user performances. | 11-12-2015 |
20150331078 | Device and method for calibrating tracking systems in imaging systems - A device and a method for calibrating the coordinate system of imaging systems having a tracking system prior or during image data acquisition, e.g. by way of magnetic resonance tomography. | 11-19-2015 |
20150332083 | OBJECT DETECTION SYSTEM - An airborne mine countermeasure system includes a processor coupled to a memory having stored therein software instructions that, when executed by the processor, cause the processor to perform a series of image processing operations. The operations include obtaining input image data from an external image sensor, and extracting a sequence of 2-D slices from the input image data. The operations also include performing a 3-D connected region analysis on the sequence of 2-D slices, and extracting 3-D invariant features in the image data. The operations further include performing coarse filtering, performing fine recognition and outputting an image processing result having an indication of the presence of any mines within the input image data. | 11-19-2015 |
20150332091 | DEVICE AND METHOD OF PROCESSING IMAGE - An image processing device includes a first imaging unit configured to capture an image including an object; a display configured to display the image captured by the first imaging unit; a second imaging unit configured to capture a position of eyes of a user; a gaze map generator configured to generate a gaze map including information about a gaze zone on the display according to passage of time based on the position of the eyes; and an image processor configured to generate a motion picture based on the generated gaze map and the captured image. | 11-19-2015 |
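The gaze-map entry above accumulates where a user looks on the display over time. A minimal sketch of that accumulation step, assuming a simple fixed grid over the display (grid size and coordinates are illustrative, not taken from the patent):

```python
import numpy as np

# Hypothetical sketch: gaze points sampled over time are binned into a
# coarse grid over the display, so the most-watched zone can drive later
# processing. Grid and display dimensions are illustrative assumptions.

def build_gaze_map(gaze_points, grid=(4, 4), display=(400, 400)):
    gmap = np.zeros(grid, dtype=int)
    cell_h = display[0] / grid[0]
    cell_w = display[1] / grid[1]
    for x, y in gaze_points:
        gmap[int(y // cell_h), int(x // cell_w)] += 1  # one vote per sample
    return gmap

# A user dwelling mostly on the top-left of the display:
points = [(10, 10), (20, 30), (50, 40), (390, 390)]
gaze_map = build_gaze_map(points)
```

The resulting counts per cell stand in for the patent's "gaze zone according to passage of time"; a real implementation would weight by dwell duration rather than raw sample count.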
20150332093 | OBJECT TRACKING APPARATUS AND METHOD THEREOF - A method of tracking an object by an object tracking apparatus is provided. The method includes calculating, by performing probability propagation between a set of frames in which tracking of a target object is completed and a set of frames in which such tracking is not completed among a plurality of frames of an image, a probability map for the target object in each frame included in the set of frames where tracking is completed; selecting a frame from the set of frames where tracking is not completed, based on the calculated probability map; and determining a location of the target object in the selected frame. | 11-19-2015 |
20150332095 | RELEVANT IMAGE DETECTION IN A CAMERA, RECORDER, OR VIDEO STREAMING DEVICE - The filtering tasks that are conventionally applied in a video monitoring application, to distinguish images that may be relevant to the application, are distributed to the image source, or near-source devices. Source devices, such as cameras and playback devices, and near-source devices, such as video concentrators and streaming devices, are configured to include video processing tools that can be used to pre-filter the image data to identify frames or segments of frames that include image information that is likely to be relevant to the receiving video monitoring application. In this manner, the receiving processor need not spend time and resources processing images that are pre-determined to be irrelevant to the receiving application. | 11-19-2015 |
20150332097 | SHORT-TIME STOPPING DETECTION FROM RED LIGHT CAMERA VIDEOS - A method for detecting a vehicle running a stop signal positioned at an intersection includes acquiring a sequence of frames from at least one video camera monitoring an intersection being signaled by the stop signal. The method includes defining a first region of interest (ROI) including a road region located before the intersection on the image plane. The method includes searching the first ROI for a candidate violating vehicle. In response to detecting the candidate violating vehicle, the method includes tracking at least one trajectory of the detected candidate violating vehicle across a number of frames. The method includes classifying the candidate violating vehicle as belonging to one of a violating vehicle and a non-violating vehicle based on the at least one trajectory. | 11-19-2015 |
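The red-light entry above classifies a tracked candidate as violating or non-violating based on its trajectory. A hedged sketch of that final classification step, assuming a simple image-plane stop line and a known red phase (names and coordinates are illustrative, not the patent's method):

```python
# Illustrative sketch: a vehicle tracked through the approach ROI is
# labeled "violating" if its trajectory crosses the stop line during the
# red phase. Trajectory format and threshold convention are assumptions.

def classify_trajectory(trajectory, stop_line_y, red_phase):
    """trajectory: list of (frame_index, y_position) points, y decreasing
    as the vehicle moves past the stop line toward the intersection.
    red_phase: set of frame indices during which the signal is red."""
    for (f0, y0), (f1, y1) in zip(trajectory, trajectory[1:]):
        crossed = y0 > stop_line_y >= y1  # moved past the stop line
        if crossed and f1 in red_phase:
            return "violating"
    return "non-violating"

# A vehicle crossing y=100 at frame 12, while frames 10-20 are red:
run = [(10, 130), (11, 115), (12, 95), (13, 80)]
label = classify_trajectory(run, stop_line_y=100, red_phase=set(range(10, 21)))
```

The same trajectory is non-violating when the crossing falls outside the red phase, which is why the abstract tracks across "a number of frames" before classifying.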
20150332098 | SYSTEM AND METHOD FOR ESTIMATING VEHICLE DYNAMICS USING FEATURE POINTS IN IMAGES FROM MULTIPLE CAMERAS - A system and method for estimating dynamics of a mobile platform by matching feature points in overlapping images from cameras on the platform, such as cameras in a surround-view camera system on a vehicle. The method includes identifying overlap image areas for any two cameras in the surround-view camera system, identifying common feature points in the overlap image areas, and determining that the common feature points in the overlap image areas are not at the same location. The method also includes estimating three-degree of freedom vehicle dynamic parameters from the matching between the common feature points, and estimating vehicle dynamics of one or more of pitch, roll and height variation using the vehicle dynamic parameters. | 11-19-2015 |
20150332448 | OBJECT DETECTION METHODS, DISPLAY METHODS AND APPARATUSES - Disclosed are object detection methods, display methods and apparatuses. The method includes obtaining slice data of inspected luggage in the CT system; generating 3D volume data of objects in the luggage from the slice data; for each object, determining a semantic description including at least a quantifier description of the object based on the 3D volume data; and upon reception of a user selection of an object, presenting the semantic description of the selected object while displaying a 3D image of the object. The above solutions can create a 3D model for objects in the inspected luggage in a relatively accurate manner, and thus provide a better basis for subsequent shape feature extraction and security inspection, and reduce the omission factor. | 11-19-2015 |
20150332457 | INCREASING ACCURACY OF A PHYSIOLOGICAL SIGNAL OBTAINED FROM A VIDEO OF A SUBJECT - What is disclosed is a system and method for increasing the accuracy of physiological signals obtained from video of a subject being monitored for a desired physiological function. In one embodiment, image frames of a video are received. Successive batches of image frames are processed. For each batch, pixels associated with an exposed body region of the subject are isolated and processed to obtain a time-series signal. If movement occurred during capture of these image frames that is below a pre-defined threshold level then parameters of a predictive model are updated using this batch's time-series signal. Otherwise, the last updated predictive model is used to generate a predicted time-series signal for this batch. The time-series signal is fused with the predicted time-series signal to obtain a fused time-series signal. The time-series signal for each batch is processed to obtain a physiological signal for the subject corresponding to the physiological function. | 11-19-2015 |
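The physiological-signal entry above turns each batch of frames into a time-series sample and fuses it with a model prediction. A much-simplified sketch of those two steps, assuming per-frame averaging over a body-region mask and a fixed fusion weight (both are illustrative assumptions, not the patent's predictive model):

```python
import numpy as np

# Simplified sketch: one time-series sample per frame (mean intensity over
# the exposed-body-region pixels), then a weighted-average fusion of the
# measured and predicted signals. The fusion weight is an assumption.

def batch_time_series(frames, roi_mask):
    """One sample per frame: mean intensity over the body-region pixels."""
    return np.array([frame[roi_mask].mean() for frame in frames])

def fuse(measured, predicted, weight=0.7):
    return weight * measured + (1.0 - weight) * predicted

frames = [np.full((8, 8), 100.0) + i for i in range(5)]  # slowly brightening
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True                                     # exposed body region
signal = batch_time_series(frames, mask)                  # 100, 101, ... 104
fused = fuse(signal, predicted=np.full(5, 100.0))
```

In the patent the fusion only kicks in when motion exceeds a threshold; here the weighted average simply illustrates how a measured and a predicted signal combine into one.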
20150332458 | METHOD FOR DETECTING POSITIONS OF TISSUES AND APPARATUS USING THE SAME - Disclosed is a method for detecting positions of body tissues and an apparatus using the method. The apparatus according to the present invention comprises a surgery information storage unit storing an examined first image associated with a target bone of surgery, a position measuring unit measuring position values of multiple points on a surface of the target bone of surgery before and after cutting, and a registration control unit for acquiring a second image regarding the remaining bone after cutting by applying the shape of the bone changed according to the progression of bone cutting to the first image, and for performing position registration with respect to the second image by using the position values of multiple points on the surface of the target bone of surgery after cutting, as measured by the position measuring unit. | 11-19-2015 |
20150332463 | INTEGRATION OF OPTICAL AREA MONITORING WITH INDUSTRIAL MACHINE CONTROL - An industrial safety system is provided that integrates optical safety monitoring with machine control. The safety system includes an imaging sensor device supporting pixel array processing functions that allow time-of-flight (TOF) analysis to be performed on selected portions of the pixel array, while two-dimensional imaging analysis is performed on the remaining portions of the array, reducing processing load and response time relative to performing TOF analysis for all pixels of the array. The portion of the pixel array designated for TOF analysis can be pre-defined through configuration of the imaging sensor device, or can be dynamically selected based on object detection and classification by the two-dimensional imaging analysis. The imaging sensor device can also implement a number of safety and redundancy functions to achieve a high degree of safety integrity. | 11-19-2015 |
20150332466 | HUMAN HEAD DETECTION IN DEPTH IMAGES - Systems, devices and methods are described including receiving a depth image and applying a template to pixels of the depth image to determine a location of a human head in the depth image. The template includes a circular shaped region and a first annular shaped region surrounding the circular shaped region. The circular shaped region specifies a first range of depth values. The first annular shaped region specifies a second range of depth values that are larger than depth values of the first range of depth values. | 11-19-2015 |
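The head-detection entry above applies a template with a circular depth region surrounded by an annular region of larger depths. A minimal sketch of scoring one candidate location, assuming illustrative radii and depth ranges (none of these values come from the patent):

```python
import numpy as np

# Minimal sketch: a circular region (head) whose depths must fall in one
# range, surrounded by an annulus (background) whose depths must be larger.
# Radii and depth ranges below are illustrative assumptions.

def head_score(depth, cx, cy, r=5, head_range=(900, 1100), bg_min=1100):
    h, w = depth.shape
    yy, xx = np.ogrid[:h, :w]
    dist2 = (yy - cy) ** 2 + (xx - cx) ** 2
    circle = dist2 <= r ** 2
    annulus = (dist2 > r ** 2) & (dist2 <= (2 * r) ** 2)
    head_ok = np.mean((depth[circle] >= head_range[0]) & (depth[circle] <= head_range[1]))
    bg_ok = np.mean(depth[annulus] >= bg_min)
    return float(head_ok * bg_ok)  # 1.0 when both regions match perfectly

# Synthetic depth image: a "head" at depth 1000 in front of a wall at 2000.
depth = np.full((40, 40), 2000.0)
yy, xx = np.ogrid[:40, :40]
depth[(yy - 20) ** 2 + (xx - 20) ** 2 <= 25] = 1000.0
score = head_score(depth, 20, 20)
```

A detector would evaluate this score over many candidate centers (and radii, to handle scale) and keep the maximum; the two-region constraint is what rejects flat surfaces that merely fall in the head depth range.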
20150332471 | USER HAND DETECTING DEVICE FOR DETECTING USER'S HAND REGION AND METHOD THEREOF - Technology for a method of detecting a user hand by a user hand detecting device. The method according to an aspect of the present invention includes extracting a first mask image from a depth image in which the user hand is imaged; extracting a second mask image having a preset skin color value among regions corresponding to the first mask image in a color image in which the user hand is imaged; generating a skin color value histogram model in a color space different from a region of the color image corresponding to a color region of the second mask image; generating a skin color probability image of the different color space from the color image using the skin color value histogram model and an algorithm for detecting a skin color region; and combining the skin color probability image with the second mask image and detecting the user's hand region. | 11-19-2015 |
20150332475 | DETECTING AND COMPENSATING FOR MOTION BETWEEN A FLASH AND A NO-FLASH IMAGE - Techniques disclosed herein involve determining motion occurring in a scene between the capture of two successively-captured images of the scene using intensity gradients of pixels within the images. These techniques can be used alone or with other motion-detection techniques to identify where motion has occurred in the scene, which can be further used to reduce artifacts that may be generated when images are combined. | 11-19-2015 |
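The flash/no-flash entry above relies on the observation that absolute intensities differ between the two exposures but intensity gradients are comparatively stable, so large gradient differences indicate motion. A rough sketch of that test, with an illustrative threshold:

```python
import numpy as np

# Rough sketch: compare gradient magnitudes of a flash and a no-flash
# exposure; pixels whose gradients changed a lot are flagged as motion.
# The threshold value is an illustrative assumption.

def motion_mask(img_a, img_b, threshold=0.5):
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))  # row- and column-gradients
        return np.hypot(gx, gy)
    return np.abs(grad_mag(img_a) - grad_mag(img_b)) > threshold

# A static scene brightened uniformly by the flash triggers no motion:
scene = np.tile(np.arange(10.0), (10, 1))
mask_static = motion_mask(scene, scene + 40.0)
# Shifting the scene between exposures does:
moved = np.roll(scene, 3, axis=1)
mask_moved = motion_mask(scene, moved)
```

Uniform brightening leaves gradients untouched, which is exactly why a gradient-based test avoids the false positives a plain intensity difference would produce between flash and no-flash frames.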
20150334269 | PROCESSING APPARATUS, PROCESSING SYSTEM, AND PROCESSING METHOD - A processing apparatus includes a distance image acquirer, a moving-object detector, and a danger-level determining unit. The distance image acquirer acquires a distance image containing distance information of each pixel. The moving-object detector detects a moving object from the distance image. The danger-level determining unit determines a danger level of the moving object by use of the distance image, and outputs the danger level to a controller that controls a controlled unit in accordance with the danger level. | 11-19-2015 |
20150339519 | MONITORING DEVICE, MONITORING SYSTEM, AND MONITORING METHOD - A monitoring device is provided with a person image analyzer which has a person detector which detects a person from captured moving images and acquires positional information which relates to a person area, and an area state determinator which determines an area state which indicates the state of people in the person area based on the positional information, a mask image setter which sets the mask image which corresponds to the area state, and a moving image output controller which generates and outputs output moving images where the person area is changed to the mask image which corresponds to the area state based on the positional information and the area state which are output from the person image analyzer. | 11-26-2015 |
20150339520 | OBJECT RECOGNITION METHOD AND OBJECT RECOGNITION APPARATUS USING THE SAME - An object recognition method and an object recognition apparatus using the same are provided. In one or more embodiments, a real-time image including a first object is acquired, and a chamfer distance transform is performed on the first object of the real-time image to produce a chamfer image including a first modified object. Preset image templates each including a second object are acquired, and the chamfer distance transform is performed on the second object of each preset image template to produce a chamfer template including a second modified object. When the difference between the first modified object and the second modified object is less than a first preset error threshold, the object recognition apparatus may operate according to a control command corresponding to the preset image template. | 11-26-2015 |
20150339521 | PRIVACY PROTECTION METHOD OF HUMAN BODY SECURITY INSPECTION AND HUMAN BODY SECURITY INSPECTION SYSTEM - The present invention provides a privacy protection method and a human body security inspection system having the same function. The privacy protection method comprises the steps of: acquiring in real-time scanning row or column image data of a person to be inspected; displaying a physical profile image and an outline image of the person to be inspected, on the basis of the processed image of the scanning row or column image data; transmitting the physical profile image to an equipment end display in a human body security inspection system and displaying it thereon, and displaying the outline image of the person to be inspected on a remote operation end display of the human body security inspection system; performing suspicious matter recognition based on the outline image; and correspondingly displaying a suspected frame on the physical profile image, based on the suspicious matter recognized in the outline image. | 11-26-2015 |
20150339523 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - An image processing apparatus including an acquisition unit configured to acquire results of analysis processing for a plurality of images; a designation unit configured to designate a type of a target to be detected from an image; and a determination unit configured to determine, among the plurality of images, an image used for detection processing of the detection target designated by the designation unit based on the type of the detection target designated by the designation unit and the result of the analysis processing. | 11-26-2015 |
20150339528 | AUTOMATED EMISSIONS CAPTURE DETERMINATION - A method of determining emissions captured can include receiving information indicative of transmission of light through a region nearby an air handling apparatus. The information can include information indicative of a first condition wherein emissions to be captured by the air handling apparatus are suppressed and information indicative of a second condition wherein the emissions to be captured by the air handling apparatus are present. A processor circuit can be used to determine a difference between the information about the transmission of light during the first condition and information about the transmission of light during the second condition. The processor circuit can be used to determine a portion of the emissions captured by the air handling apparatus using information about the determined difference. | 11-26-2015 |
20150339529 | DETECTION APPARATUS FOR DETECTING MOVEMENT OF OBJECT, DETECTION METHOD AND STORAGE MEDIUM - The average value calculation section acquires luminance information and color information from images continuously captured frame by frame by an image capture unit. The detection method determination section determines either one or both of the luminance information and the color information to use in order to detect movement of the predetermined object, based on the acquired luminance information and color information. The motion detection section detects movement of the predetermined object using either one or both of the luminance information and the color information, based on the determination result. | 11-26-2015 |
20150339537 | CAMERA POSITION POSTURE EVALUATING DEVICE, CAMERA POSITION POSTURE EVALUATING METHOD, AND CAMERA POSITION POSTURE EVALUATING PROGRAM - There is provided a camera position posture evaluating device which can calculate a value which indicates an evaluation of a state of a camera from a viewpoint of to what degree an object appears in an image suitably for image processing. | 11-26-2015 |
20150345942 | CALCULATION OF THE DURATION TIME IN A CONFINED SPACE - Techniques are provided for calculating an average time people spend at a particular location, such as a store. A plurality of entrance times is identified, where each entrance time corresponds to when one or more objects entered the particular location. A plurality of exit times is also identified, where each exit time corresponds to when one or more objects exited the particular location. An average entrance time is determined based on the plurality of entrance times. An average exit time is determined based on the plurality of exit times. An average time people spend at the particular location is determined based on a difference between the average exit time and the average entrance time. | 12-03-2015 |
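The dwell-time entry above reduces to a simple arithmetic rule: average dwell time equals mean exit time minus mean entrance time. A tiny sketch (timestamps in minutes, purely for illustration):

```python
# Sketch of the calculation described above: the average time spent at a
# location is the mean exit time minus the mean entrance time. Timestamps
# are in minutes and purely illustrative.

def average_dwell_time(entrance_times, exit_times):
    avg_in = sum(entrance_times) / len(entrance_times)
    avg_out = sum(exit_times) / len(exit_times)
    return avg_out - avg_in

# Three visitors entering at 0, 10, 20 and leaving at 30, 40, 50 minutes:
dwell = average_dwell_time([0, 10, 20], [30, 40, 50])  # (40) - (10) = 30
```

Note the trick the abstract leans on: individual entrances and exits need not be matched to the same person, because the difference of the two means equals the mean of the per-person differences when counts are equal.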
20150347814 | EFFICIENT FOREST SENSING BASED EYE TRACKING - Methods, systems, computer-readable media, and apparatuses for novel eye tracking methodologies are presented. Specifically, after an initial determination of a person's eyes within a field of view (FOV), methods of the present disclosures may track the person's eyes even with part of the face occluded, and may quickly re-acquire the eyes even if the person's eyes exit the FOV. Each eye may be tracked individually, at a faster rate of eye tracking due to the novel methodology, and successful eye tracking even at low image resolution and/or quality is possible. In some embodiments, the eye tracking methodology of the present disclosures includes a series of sub-tracker techniques, each performing different eye-tracking functions that, when combined, generate a highest-confidence location of where the eye has moved to in the next image frame. | 12-03-2015 |
20150347827 | IMAGE CAPTURE, PROCESSING AND DELIVERY AT GROUP EVENTS - Methods, systems, and devices are disclosed for image acquisition and distribution of individuals at large events. In one aspect, a method for providing an image of attendees at an event includes operating one or more image capturing devices to record images of attendees of an event situated at locations in an event venue, processing the images to form a processed image, and distributing the processed image to the individual. The processing includes mapping the locations to a grid including coordinates corresponding to predetermined positions associated with the event venue, defining an image space containing an individual at a particular location in the event venue based on the coordinates, and forming the processed image based on the image space. | 12-03-2015 |
20150347828 | INFORMATION PROCESSING APPARATUS AND METHOD FOR CONTROLLING THE SAME - An information processing apparatus detects a moving member that moves in a background area and that includes an object other than a recognition target. The apparatus sets a partial area as a background undetermined area if the moving member is present in the background area and sets a partial area as a background determined area if it is regarded that the recognition target is not present in the background area in each of the partial areas set as the background undetermined area. The apparatus recognizes an operation caused by the recognition target that moves in the background determined area. | 12-03-2015 |
20150347829 | MONITORING INDIVIDUALS USING DISTRIBUTED DATA SOURCES - One or more processors receive data from one or more devices including an image of an individual and information that indicates the identity of the individual. One or both of the image and the information include data that indicates a location. One or more processors analyze the image of an individual and the information to generate a set of identifying characteristics for the individual. Based on a result of the analysis, one or more processors determine whether the set of identifying characteristics of the individual matches a recorded set of identifying characteristics of that individual within a threshold. In response to a determination that there is a match within the threshold, one or more processors associate the location with the individual. | 12-03-2015 |
20150347840 | AUTONOMOUS VEHICLE, AND OBJECT RECOGNIZING METHOD IN AUTONOMOUS VEHICLE - An autonomous vehicle includes a travel vehicle main body, a model data storage, a photographic device, a search region determiner, an image feature point detector, a feature amount calculator and a position detector. The travel vehicle main body autonomously travels to a target position. The model data storage stores model data related to a geometric feature of an object. The photographic device photographs a periphery of the travel vehicle main body at the target position to acquire image data. The search region determiner predicts a position of the object based on the image data, and determines a search region of a predetermined range including the predicted position of the object. The image feature point detector detects a feature point of the image data with respect to the search region. The feature amount calculator calculates a feature amount of a matching candidate point extracted from the feature point. The position detector matches the feature amount of the matching candidate point with the model data to recognize the position of the object based on the image data. | 12-03-2015 |
20150347842 | Eyetracker Mounts for Use with Handheld Devices - A device for performing eyetracking on a handheld device includes an eyetracking camera and an eyetracking camera boom mount. The eyetracking camera boom mount physically and electrically connects a handheld device and the eyetracking camera. The eyetracking camera boom mount includes an extension boom that positions the eyetracking camera behind the user's hands. The extension boom provides the eyetracking camera with a view of the user's eyes that is unobstructed by the user's hands. The device can further include an operating scene camera for monitoring a person's hand operations on the handheld device. The operating scene camera can be mounted on the same extension boom as the eyetracking camera or on a separate extension boom. | 12-03-2015 |
20150347845 | PHOTOGRAPHIC SCENE REPLACEMENT SYSTEM - A photographic scene replacement system includes a photographic scene with a detectable pattern. The system operates to capture a digital photograph of a subject and the photographic scene having the detectable pattern with a digital camera when the subject is arranged between the digital camera and the photographic scene. The system also operates to process the digital photograph at least in part by automatically detecting the detectable pattern in the digital photograph, to distinguish the subject from the photographic scene in the digital photograph. | 12-03-2015 |
20150347846 | TRACKING USING SENSOR DATA - Tracking using sensor data is described, for example, where a plurality of machine learning predictors are used to predict a plurality of complementary, or diverse, parameter values of a process describing how the sensor data arises. In various examples a selector selects which of the predicted values are to be used, for example, to control a computing device. In some examples the tracked parameter values are pose of a moving camera or pose of an object moving in the field of view of a static camera; in some examples the tracked parameter values are of a 3D model of a hand or other articulated or deformable entity. The machine learning predictors have been trained in series, with training examples being reweighted after training an individual predictor, to favour training examples on which the set of predictors already trained performs poorly. | 12-03-2015 |
20150347847 | IMAGE PROCESSING METHOD AND ELECTRONIC DEVICE IMPLEMENTING THE SAME - An image processing method and an electronic device implementing the method are provided. The method includes the electronic device sequentially receiving an image captured by a camera and determining whether a frame of the received image satisfies a predetermined condition. If the predetermined condition is satisfied, the electronic device recognizes an object in the frame. Then the electronic device tracks the object in the frame through tracking data created based on a feature extracted from the recognized object. | 12-03-2015 |
20150347856 | METHOD AND SYSTEM FOR DETECTING SEA-SURFACE OIL - A behavioral recognition system may include both a computer vision engine and a machine learning engine configured to observe and learn patterns of behavior in video data. Certain embodiments may be configured to detect and evaluate the presence of sea-surface oil on the water surrounding an offshore oil platform. The computer vision engine may be configured to segment image data into detected patches or blobs of surface oil (foreground) present in the field of view of an infrared camera (or cameras). A machine learning engine may evaluate the detected patches of surface oil to learn to distinguish between sea-surface oil incident to the operation of an offshore platform and the appearance of surface oil that should be investigated by platform personnel. | 12-03-2015 |
20150347864 | EXTRACTION OF USER BEHAVIOR FROM DEPTH IMAGES - Embodiments described herein use depth images to extract user behavior, wherein each depth image specifies that a plurality of pixels correspond to a user. In certain embodiments, information indicative of an angle and/or curvature of a user's body is extracted from a depth image. This can be accomplished by fitting a curve to a portion of a plurality of pixels (of the depth image) that correspond to the user, and determining the information indicative of the angle and/or curvature of the user's body based on the fitted curve. An application is then updated based on the information indicative of the angle and/or curvature of the user's body. In certain embodiments, one or more average extremity positions of a user, which can also be referred to as average positions of extremity blobs, are extracted from a depth image. An application is then updated based on the average positions of extremity blobs. | 12-03-2015 |
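The depth-image entry above extracts the angle of a user's body by fitting a curve to the pixels labeled as the user. A hedged sketch of the simplest case, fitting a degree-1 curve (a line) and reporting lean from vertical; the degree and the angle convention are assumptions for the example:

```python
import math
import numpy as np

# Illustrative sketch: fit a line to the (x, y) pixels labeled as the user
# in a depth image and report the body's lean angle from vertical.

def body_lean_angle(user_pixels):
    """user_pixels: iterable of (x, y) image coordinates belonging to the user."""
    pts = np.asarray(user_pixels, dtype=float)
    # Fit x as a function of y so an upright body (vertical line) has slope 0.
    slope, _ = np.polyfit(pts[:, 1], pts[:, 0], deg=1)
    return math.degrees(math.atan(slope))

# A perfectly upright "body": x constant as y varies -> 0 degrees.
upright = [(50, y) for y in range(100)]
# A body leaning so x advances one pixel per row -> 45 degrees.
leaning = [(y, y) for y in range(100)]
angles = (body_lean_angle(upright), body_lean_angle(leaning))
```

A higher polynomial degree in the same fit would capture the *curvature* the abstract mentions (e.g. a bent-forward posture) rather than just the overall lean.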
20150347871 | DATA FUSION ANALYSIS FOR MARITIME AUTOMATIC TARGET RECOGNITION - A system and method for performing Automatic Target Recognition by combining the outputs of several classifiers. In one embodiment, feature vectors are extracted from radar images and fed to three classifiers. The classifiers include a Gaussian mixture model neural network, a radial basis function neural network, and a vector quantization classifier. The class designations generated by the classifiers are combined in a weighted voting system, i.e., the mode of the weighted classification decisions is selected as the overall class designation of the target. A confidence metric may be formed from the extent to which the class designations of the several classifiers are the same. This system is also designed to handle unknown target types and subsequent re-integration at a later time, effectively, artificially and automatically increasing the training database size. | 12-03-2015 |
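The target-recognition entry above combines the three classifiers by weighted voting: the overall designation is the class with the largest total weight, and agreement among classifiers yields a confidence metric. A small sketch with illustrative weights (the class names and weights are not from the patent):

```python
from collections import defaultdict

# Sketch of weighted-voting fusion: each classifier votes for a class with
# an assigned weight; the overall designation is the class with the largest
# total weight. Weights and labels below are illustrative assumptions.

def fuse_classifiers(votes):
    """votes: list of (class_label, weight) pairs, one per classifier."""
    totals = defaultdict(float)
    for label, weight in votes:
        totals[label] += weight
    winner = max(totals, key=totals.get)
    # Confidence: fraction of classifiers that agree with the winner.
    agreement = sum(1 for label, _ in votes if label == winner) / len(votes)
    return winner, agreement

# Three classifiers (e.g. GMM, RBF, VQ networks) with unequal weights:
decision, confidence = fuse_classifiers(
    [("frigate", 0.5), ("tanker", 0.3), ("frigate", 0.2)]
)
```

Low agreement could be used, as the abstract suggests, to route a target into an "unknown" bucket for later re-integration into the training database.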
20150348235 | DISTRIBUTED PATH PLANNING FOR MOBILE SENSORS - A method plans paths of a set of mobile sensors with changeable positions and orientations in an environment. Each sensor includes a processor, an imaging system and a communication system. A desired resolution of coverage of the environment is defined, and an achieved resolution of the coverage is initialized. For each time instant and each sensor, an image of the environment is acquired using the imaging system. The achieved resolution is updated according to the image. The sensor is moved to a next position and orientation based on the achieved resolution and the desired resolution. Then, local information of the sensor is distributed to other sensors using the communication system to optimize a coverage of the environment. | 12-03-2015 |
20150348252 | Systems and Methods of Monitoring Waste - Systems, methods, and computer-readable media are disclosed for monitoring waste. Example methods may include monitoring a waste compartment of a waste container, the waste compartment configured to receive waste items, and determining a waste level of waste items in the waste compartment. Methods may include identifying a waste haul threshold indicative of a predetermined waste level at which a waste haul notification is triggered, determining that the waste level meets the waste haul threshold, and triggering the waste haul notification indicating that the waste container is to be emptied based at least in part on the waste level. | 12-03-2015 |
20150348257 | SYSTEMS AND METHODS FOR YAW ESTIMATION - Systems and methods of automatic detection of a facial feature are disclosed. Moreover, methods and systems of yaw estimation of a human head based on a geometrical model are also disclosed. | 12-03-2015 |
20150348265 | Plane Detection and Tracking for Structure from Motion - Plane detection and tracking algorithms are described that may take point trajectories as input and provide as output a set of inter-image homographies. The inter-image homographies may, for example, be used to generate estimates for 3D camera motion, camera intrinsic parameters, and plane normals using a plane-based self-calibration algorithm. A plane detection and tracking algorithm may obtain a set of point trajectories for a set of images (e.g., a video sequence, or a set of still photographs). A 2D plane may be detected from the trajectories, and trajectories that follow the 2D plane through the images may be identified. The identified trajectories may be used to compute a set of inter-image homographies for the images as output. | 12-03-2015 |
20150348274 | INCREMENTAL PRINCIPAL COMPONENT PURSUIT FOR VIDEO BACKGROUND MODELING - An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter. | 12-03-2015 |
20150348282 | IMAGE PROCESSING AND ITEM TRANSPORT - Embodiments relate generally to new and useful systems and methods for processing digital images of items to facilitate item transport and/or handling. In some embodiments, a method is provided for operating an image processing and item transport system having a computing system that includes a digital image processing component. In some embodiments, the method can involve the digital image processing component acquiring a digital image and processing the digital image to obtain a digital representation of an item represented in the digital image, to estimate a dimensional size and/or weight of the item represented in the digital image, and to convert the digital image to a format suitable for outputting. In some embodiments, the method can include processing the estimated dimensional size and/or weight to identify personnel and/or equipment capable of transporting the item represented in the digital image, and outputting a transport instruction identifying the personnel and/or equipment capable of transporting the item represented in the digital image. | 12-03-2015 |
20150348382 | MONITOR INFORMATION PROCESSING DEVICE AND METHOD, AND PROGRAM AND RECORDING MEDIUM - Illicit movement is detected on the basis of authentication result information (D | 12-03-2015 |
20150350608 | SYSTEM AND METHOD FOR ACTIVITY MONITORING USING VIDEO DATA - Embodiments of a method and system described herein enable capture of video data streams from multiple, different devices and the processing of the video data streams. The video data streams are merged such that various data protocols can all be processed with the same worker processors on different types of operating systems, which are typically distributed. An embodiment uses a mobile device (such as a mobile phone) as a device and deploys a video sensor application on the mobile device for encoding consecutive video files, time stamping the consecutive video files, and pushing the consecutive video files to a file server to produce a stable stream of video data, thereby avoiding the inefficiencies associated with having video processing in the data flow loop. | 12-03-2015 |
20150355730 | PERSPECTIVE TRACKING SYSTEM - Resolution of perspective in three dimensions is necessary for intermeshing real players into simulated environments during virtual training exercises. With the advent of high-resolution image sensors, it has become possible to sense position and orientation using image capture devices. The combination of small-sized sensors and image recognition tracking algorithms allows the tracking element to be placed directly on the device whose perspective is desired. This provides a solution to determining perspective, as it yields a direct measurement from the center axis of the observer. This invention employs a perspective tracking device to determine a point-of-gaze or a point-of-aim in a three-dimensional space to a high degree of accuracy. Point-of-gaze may be used to determine views for head mounted displays and aim-points for weapons. The invention may operate in an unconstrained space, allowing simulation participants to operate in a larger, open environment. Areas of interest in the environment are bounded by area-of-interest markers which identify the region and its physical constraints. | 12-10-2015 |
20150356344 | WRINKLE DETECTION APPARATUS AND WRINKLE DETECTION METHOD - A wrinkle detection apparatus is an apparatus for detecting a wrinkle area of skin included in an image. The wrinkle detection apparatus includes: an image obtaining unit that obtains the image including the skin; an area estimation unit that estimates a plurality of image areas, each of the plurality of image areas having a different gloss level of the skin; a parameter determination unit that determines one or more parameter values for each of the plurality of estimated image areas, the one or more parameter values being used to detect the wrinkle area; and a wrinkle detection processing unit that detects the wrinkle area in the image by using the determined one or more parameter values for each of the plurality of image areas. | 12-10-2015 |
20150356345 | SYSTEMS AND METHODS FOR DETECTING, IDENTIFYING AND TRACKING OBJECTS AND EVENTS OVER TIME - A system for detecting, identifying and tracking objects of interest over time is configured to derive object identification data from images captured from one or more image capture devices. In some embodiments of the system, the one or more image capture devices perform a first object detection and identification analysis on images captured by the one or more image capture devices. The system may then transmit the captured images to a server that performs a second object detection and identification analysis on the captured images. In various embodiments, the second analysis is more detailed than the first analysis. The system may also be configured to compile data from the one or more image capture devices and server into a timeline of object of interest detection and identification data over time. | 12-10-2015 |
20150356349 | SYSTEM AND METHODS OF ADAPTIVE SAMPLING FOR EMOTIONAL STATE DETERMINATION - Systems, methods, and non-transitory computer readable media for determining the emotional state of a user are described herein. In one example, the method for determining the emotional state of the user comprises receiving a feed from a sensor at a default sampling frequency, and analyzing the feed to determine facial features of a user. The method further comprises computing an emotional quotient of the user based on the facial features, determining a trigger to re-compute the sampling frequency of the feed based in part on the emotional quotient, and computing a new sampling frequency based in part on the trigger. Thereafter, the method comprises generating instructions for the sensor to capture the feed at the new sampling frequency. | 12-10-2015 |
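The recompute-on-trigger step described above might look like the following sketch, where the trigger is a large swing in the emotional quotient between consecutive readings. All names and constants are illustrative assumptions, not taken from the patent:

```python
def next_sampling_hz(current_hz, quotient_history,
                     default_hz=1.0, max_hz=10.0, delta=0.2):
    """Recompute the sensor sampling frequency from recent emotional
    quotients (values in [0, 1]). A large swing between the last two
    readings triggers a higher rate; a stable signal decays back
    toward the default rate."""
    if len(quotient_history) < 2:
        return default_hz
    swing = abs(quotient_history[-1] - quotient_history[-2])
    if swing > delta:                    # trigger: emotion is changing fast
        return min(current_hz * 2.0, max_hz)
    return max(current_hz / 2.0, default_hz)
```

The doubling/halving policy keeps the sensor cheap when the state is stable and responsive when it is not, which is the trade-off the abstract describes.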
20150356354 | SYSTEMS AND METHODS FOR SEMANTICALLY CLASSIFYING AND NORMALIZING SHOTS IN VIDEO - The present disclosure relates to systems and methods for classifying videos based on video content. For a given video file including a plurality of frames, a subset of frames is extracted for processing. Frames that are too dark, blurry, or otherwise poor classification candidates are discarded from the subset. Generally, material classification scores that describe type of material content likely included in each frame are calculated for the remaining frames in the subset. The material classification scores are used to generate material arrangement vectors that represent the spatial arrangement of material content in each frame. The material arrangement vectors are subsequently classified to generate a scene classification score vector for each frame. The scene classification results are averaged (or otherwise processed) across all frames in the subset to associate the video file with one or more predefined scene categories related to overall types of scene content of the video file. | 12-10-2015 |
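The frame-filtering and score-averaging steps above can be sketched as follows: frames that are too dark (low mean intensity) or too blurry (low variance of a Laplacian response, a common blur heuristic) are discarded, and per-frame score vectors are averaged. The thresholds and function names are illustrative assumptions:

```python
import numpy as np

def usable_frames(frames, min_brightness=30.0, min_sharpness=50.0):
    """Drop frames that are poor classification candidates.
    Brightness = mean intensity; sharpness = variance of the
    Laplacian response over the frame interior."""
    kept = []
    lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
    for f in frames:
        f = f.astype(float)
        if f.mean() < min_brightness:
            continue                        # too dark
        h, w = f.shape
        # valid 2D convolution with the 3x3 Laplacian kernel
        resp = sum(lap[i, j] * f[i:h - 2 + i, j:w - 2 + j]
                   for i in range(3) for j in range(3))
        if resp.var() < min_sharpness:
            continue                        # too blurry (flat response)
        kept.append(f)
    return kept

def scene_score(per_frame_scores):
    """Average per-frame scene classification score vectors."""
    return np.mean(np.asarray(per_frame_scores, float), axis=0)
```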
20150356358 | OBSTACLE DETECTION DEVICE AND OBSTACLE DETECTION METHOD - By a disparity computation unit | 12-10-2015 |
20150356743 | PHOTOGRAPHIC SUBJECT TRACKING DEVICE AND CAMERA - A photographic subject tracking device includes: a first degree-of-similarity calculation unit that calculates degree of similarity between a template image for tracking and an image in search area; a photographic subject position identification unit that identifies a tracked photographic subject position in the input image based on calculated degree of similarity; a second degree-of-similarity calculation unit that calculates a degree of similarity between each of multiple template images for resizing determination, which are generated based on template image for tracking, and image in search area; a matching position identification unit that identifies matching positions of the multiple template images for resizing determination, respectively, in the input image based on calculated degrees of similarity; and a size changing unit that changes an image size of template image for tracking and template images for resizing determination based on a density of the plurality of matching positions identified. | 12-10-2015 |
20150356745 | MULTI-MODE VIDEO EVENT INDEXING - Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. If the determined quality of object distinctiveness meets a threshold level, a high-quality analytic mode is selected from multiple modes and applied to the video input images via a hardware device to determine object activity within the video input images; otherwise a low-quality analytic mode, different from the high-quality analytic mode, is selected and applied to the video input images via a hardware device to determine object activity within the video input images. | 12-10-2015 |
20150356746 | SYSTEMS AND METHODS FOR TRACKING OBJECT ASSOCIATION OVER TIME - A system and method for tracking association of two or more objects over time, according to various embodiments, is configured to determine the association based at least in part on an image. The system may be configured to capture the image, identify two or more objects of interest within the image, determine whether the two or more objects are associated in the image, and store image association data for the two or more objects. In various embodiments the system is configured to create a timeline of object association over time for display to a user. | 12-10-2015 |
20150356840 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION PROCESSING SYSTEM - There is provided an information processing apparatus including an obtaining unit configured to obtain a plurality of segments compiled from at least one media source, wherein each segment of the plurality of segments contains at least one image frame within which a specific target object is found to be captured, and a providing unit configured to provide image frames of the obtained plurality of segments for display along a timeline and in conjunction with a tracking status indicator that indicates a presence of the specific target object within the plurality of segments in relation to time. | 12-10-2015 |
20150360877 | SYSTEM FOR LOADING PARCEL AND METHOD THEREOF - Disclosed is a system for loading a parcel, which loads an object on a loading transportation vehicle through a conveyor belt of a loading unit, including: an image information acquiring unit which acquires image information acquired by photographing the parcel on the conveyor belt; an object recognition unit which measures the size of the object from the image information and calculates a rotation state of the object; and a control unit which controls the speed of the conveyor belt according to the size and the rotation state of the object. | 12-17-2015 |
20150363633 | METHOD AND SYSTEM FOR DISPLAYING STEREO IMAGE BY CASCADE STRUCTURE AND ANALYZING TARGET IN IMAGE - A method and a system for analyzing a target in a stereo image by displaying the stereo image using a cascade structure are disclosed. The method includes, for the input stereo image, generating, based on a first relevant feature, rule or model of the stereo image, at least a first first-level structure map, each of the first first-level structure maps being generated based on an individual tolerance level of the first relevant feature, rule or model, and each of the first first-level structure maps including the target at an individual first division level; and at least partly integrating the first first-level structure maps and analyzing the target in the stereo image, to obtain a structure map of a first-level target analysis result including the target. | 12-17-2015 |
20150363636 | IMAGE RECOGNITION SYSTEM, IMAGE RECOGNITION APPARATUS, IMAGE RECOGNITION METHOD, AND COMPUTER PROGRAM - A relative direction relationship is acquired between first and second input area images of a particular person taken from different directions. The particular person is identified by comparing a feature of the first input area image with a feature of a first one of registered area images of the particular person or another person taken from at least three directions, comparing a feature of the second input area image with a feature of a second registered area image of the same person as the person of the first registered area image, and determining whether the person in the first and second input area images is the same person in the first and second registered area images. The first and second registered area images are selected such that the relation between the first and second registered area images is similar to the relation between the first and second input area images. | 12-17-2015 |
20150363637 | ROBOT CLEANER, APPARATUS AND METHOD FOR RECOGNIZING GESTURE - A robot cleaner is provided. The robot cleaner includes a camera obtaining an image including a user; and a control unit extracting an arm image, including an arm, from the image obtained by the camera, calculating an angle of the user's arm from the arm image, and determining a function intended by the calculated arm angle so as to control execution of the function. | 12-17-2015 |
20150363638 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND PROGRAM - To provide an information processing system, an information processing method, and a program whereby person tracking can be carried out effectively using videos from a plurality of video cameras. An information processing system includes: an appearance time score computation unit which computes the time from when a first mobile body exits the frame of a video of a first video camera to when a second mobile body enters the frame of a video of a second video camera; and a person association unit which, on the basis of an attribute of the first mobile body, a degree of similarity between the first mobile body and the second mobile body, and the time, determines whether the first mobile body and the second mobile body are the same mobile body. | 12-17-2015 |
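One way to combine the three cues named in the entry above (attribute agreement, appearance similarity, and camera-to-camera transit time) into a same-person decision is sketched below. The Gaussian time score and the equal weighting are assumptions for illustration, not the patented scoring:

```python
import math

def same_person(attr_match, similarity, transit_sec,
                expected_sec=30.0, sigma=10.0, threshold=0.5):
    """Decide whether two mobile bodies seen by different cameras are
    the same person. attr_match and similarity are in [0, 1];
    transit_sec is the exit-to-entry time between the two cameras.
    The time cue scores highest when the transit time matches the
    expected walk time between the cameras."""
    time_score = math.exp(-((transit_sec - expected_sec) ** 2)
                          / (2.0 * sigma ** 2))
    score = (attr_match + similarity + time_score) / 3.0
    return score >= threshold
```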
20150363639 | 3D Gesture Stabilization for Robust Input Control in Mobile Environments - A non-contact gesture sensor is mounted within a vehicle for vehicle occupants to enter control commands by using hand gestures. The effects of vehicle motion and vibration are stabilized by an electronic circuit that includes an inertial motion sensor (IMU) in rigidly fixed relation to the gesture sensor. An adaptive filter processes the gesture sensor signal and the IMU sensor signal by modeling the arm and hand as a semi-rigid articulated body using a transfer function that relates accelerations measured by the IMU with vehicle motion-induced accelerations of the hand. The filter calculates a noise-reduced gesture signal by subtracting out the motion-induced accelerations and measurement noise. The filter also outputs a confidence measure that controls a threshold circuit that inhibits use of the filtered gesture signal when confidence in the filter's system estimation is low. | 12-17-2015 |
20150363642 | IMAGE RECOGNITION DEVICE AND METHOD FOR REGISTERING FEATURE DATA IN IMAGE RECOGNITION DEVICE - An image recognition device has a database in which pieces of feature data of a plurality of objects are registered while divided into classes for each of the plurality of objects; an identification unit that identifies an unknown object by evaluating which feature data of the class registered in the database is most similar to feature data obtained from an image of the unknown object, and a feature data registration unit that registers feature data in the database. The database is capable of setting a plurality of classes to an identical object. The feature data registration unit, in adding new feature data with respect to a first object already registered in the database, sets a new class other than an existing class with respect to the first object. | 12-17-2015 |
20150363643 | FUSION-BASED OBJECT-RECOGNITION - An object-recognition method and system employing a Bayesian fusion algorithm to iteratively improve the probability of correspondence between captured object images and database object images by fusing probability data associated with each of a plurality of object image captures. | 12-17-2015 |
20150363644 | ACTIVITY RECOGNITION SYSTEMS AND METHODS - An activity recognition system is disclosed. A plurality of temporal features is generated from a digital representation of an observed activity using a feature detection algorithm. An observed activity graph comprising one or more clusters of temporal features generated from the digital representation is established, wherein each one of the one or more clusters of temporal features defines a node of the observed activity graph. At least one contextually relevant scoring technique is selected from similarity scoring techniques for known activity graphs, the at least one contextually relevant scoring technique being associated with activity ingestion metadata that satisfies device context criteria defined based on device contextual attributes of the digital representation, and a similarity activity score is calculated for the observed activity graph as a function of the at least one contextually relevant scoring technique, the similarity activity score being relative to at least one known activity graph. | 12-17-2015 |
20150363646 | EMITTER TRACKING SYSTEM - An improved emitter tracking system. In aspects of the present teachings, the presence of a desired emitter may be established by a relatively low-power emitter detection module, before images of the emitter and/or its surroundings are captured with a relatively high-power imaging module. Capturing images of the emitter may be synchronized with flashes of the emitter, to increase the signal-to-noise ratio of the captured images. | 12-17-2015 |
20150363649 | METHOD AND APPARATUS FOR UPDATING SCENE MODEL AND VIDEO SURVEILLANCE - The present invention relates to the method for updating scene model and video surveillance. A method is provided for updating a scene model in a video which is composed of a plurality of visual elements, comprising: a classifying step for classifying the visual elements in a scene into stationary visual elements and moving visual elements according to their appearance change rates; a border determining step for determining borders from the scene according to a spatial distribution information of the stationary visual elements and the moving visual elements; and an updating step for updating the scene model according to the determined borders in said scene. | 12-17-2015 |
20150363655 | SMART FACE REDACTION IN NEAR INFRARED VEHICLE WINDSHIELD IMAGES - A system and method for redaction of faces in a windshield within an image that includes detecting a windshield within the captured image via a selected detection process, extracting a windshield region from the detected windshield within the image, and selectively applying an obscuration process to at least a portion of the extracted windshield region. A redacted image is then generated obscuring the face or faces in the windshield using the selectively applied obscuration process. | 12-17-2015 |
20150363673 | SYSTEM AND METHOD FOR SCALABLE SEMANTIC STREAM PROCESSING - A system for collaborative analysis from different processes on different data sources. The system uses a unique approach to lightweight temporary data structures in order to allow communication of interim results among processes, and construction of semantically appropriate reports. The data structures are generated in near real time and their lightweight nature supports massive scaling, including many diverse streaming inputs. | 12-17-2015 |
20150363933 | METHOD AND APPARATUS FOR ESTIMATING POSITION OF PART OF OBJECT - An apparatus configured to estimate a position of a part of an object in an image includes: an image receiver configured to receive the image; a reference point setter configured to set a reference point in the image; a controller configured to generate information about the reference point by repeating a process a predetermined number of times, the process comprising obtaining one piece of direction information about a probability and a direction that the reference point is to be moved to the part of the object by a classifier, and resetting the reference point by moving the reference point a predetermined distance based on the one piece of direction information; and a location estimator configured to estimate a position of the part of the object in the image by using the information about the reference point as the reference point is reset the predetermined number of times. | 12-17-2015 |
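The iterative reference-point scheme above can be sketched as a simple loop over a classifier that returns a probability and a step direction. The classifier here is a hypothetical stand-in (the patent's classifier is learned), and the fixed step size and step count are illustrative:

```python
def estimate_part_position(start, classifier, steps=10, step_size=2.0):
    """Move a reference point toward an object part by repeatedly
    querying `classifier`, a callable point -> (prob, (dx, dy)) with
    (dx, dy) a unit direction, and stepping a fixed distance along
    that direction. Returns the final estimate and the trail of
    reference-point positions."""
    x, y = start
    trail = [(x, y)]
    for _ in range(steps):
        prob, (dx, dy) = classifier((x, y))
        x, y = x + step_size * dx, y + step_size * dy   # reset the point
        trail.append((x, y))
    # the reference point after the predetermined number of resets
    # serves as the position estimate for the part
    return (x, y), trail
```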
20150363935 | ROBOT, ROBOTIC SYSTEM, AND CONTROL DEVICE - A robot includes an arm adapted to move an object, an input reception section adapted to receive input of information (information of a control point in an object coordinate system in a restricted sense) defined by a coordinate system set to the object, and a control section adapted to make the arm operate based on a taken image obtained by imaging the object and the information input. | 12-17-2015 |
20150363938 | Method for Stereo Visual Odometry using Points, Lines and Planes - A method determines a motion between a first and second coordinate system, by first extracting a first set of primitives from a 3D image acquired in the first coordinate system from an environment, and extracting a second set of primitives from a 3D image acquired in the second coordinate system from the environment. Motion hypotheses are generated for different combinations of the first and second sets of primitives using a RANdom SAmple Consensus procedure. Each motion hypothesis is scored using a scoring function learned using parameter learning techniques. Then, a best motion hypothesis is selected as the motion between the first and second coordinate system. | 12-17-2015 |
20150371078 | ANALYSIS PROCESSING SYSTEM - An analysis processing system | 12-24-2015 |
20150371080 | REAL-TIME HEAD POSE TRACKING WITH ONLINE FACE TEMPLATE RECONSTRUCTION - Provided are methods and apparatus for tracking a head pose with online face template reconstruction. The method comprises the steps of retrieving a plurality of frames of images of the user; comparing each of the retrieved frames with a predetermined face template to determine one or more head poses that are monitored successfully and obtain head pose information of the determined one or more head poses; and reconstructing, during the step of comparing, the face template from the obtained head pose information; wherein the reconstructed face template is compared with subsequently retrieved images such that the head poses of the user are tracked in time. | 12-24-2015 |
20150371082 | ADAPTIVE TRACKING SYSTEM FOR SPATIAL INPUT DEVICES - An adaptive tracking system for spatial input devices provides real-time tracking of spatial input devices for human-computer interaction in a Spatial Operating Environment (SOE). The components of an SOE include gestural input/output; network-based data representation, transit, and interchange; and spatially conformed display mesh. The SOE comprises a workspace occupied by one or more users, a set of screens which provide the users with visual feedback, and a gestural control system which translates user motions into command inputs. Users perform gestures with body parts and/or physical pointing devices, and the system translates those gestures into actions such as pointing, dragging, selecting, or other direct manipulations. The tracking system provides the requisite data for creating an immersive environment by maintaining a model of the spatial relationships between users, screens, pointing devices, and other physical objects within the workspace. | 12-24-2015 |
20150371083 | ADAPTIVE TRACKING SYSTEM FOR SPATIAL INPUT DEVICES - An adaptive tracking system for spatial input devices provides real-time tracking of spatial input devices for human-computer interaction in a Spatial Operating Environment (SOE). The components of an SOE include gestural input/output; network-based data representation, transit, and interchange; and spatially conformed display mesh. The SOE comprises a workspace occupied by one or more users, a set of screens which provide the users with visual feedback, and a gestural control system which translates user motions into command inputs. Users perform gestures with body parts and/or physical pointing devices, and the system translates those gestures into actions such as pointing, dragging, selecting, or other direct manipulations. The tracking system provides the requisite data for creating an immersive environment by maintaining a model of the spatial relationships between users, screens, pointing devices, and other physical objects within the workspace. | 12-24-2015 |
20150371088 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE MEDIUM - An information processing apparatus including a memory storing instructions, and at least one processor configured to process the instructions to obtain an orientation, size, and position of a first subject in a first image, and an orientation, size, and position of a second subject in a second image, generate an estimated position of the first subject in the second image based on the orientation, size, and position of the first subject in the first image and the orientation and size of the second subject in the second image, calculate a distance between the estimated position of the first subject in the second image and the position of the second subject in the second image; and determine whether or not the first subject is the second subject based on the distance. | 12-24-2015 |
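A toy version of the distance test in the entry above: extrapolate the first subject's position using the second subject's orientation and a size-scaled step, then threshold the distance to the observed second position. The dict schema, step formula, and tolerance are illustrative assumptions, not the patent's actual computation:

```python
import math

def same_subject(first, second, tolerance=0.5):
    """Decide whether the subject in image 1 is the subject in image 2.
    Each subject is a dict with 'pos' (x, y), 'size', and 'heading'
    (radians). The first subject's position is extrapolated along the
    second subject's heading by a step scaled by the apparent size
    change, and the residual distance is compared against a
    size-relative tolerance."""
    scale = second['size'] / first['size']
    step = first['size'] * scale
    est_x = first['pos'][0] + step * math.cos(second['heading'])
    est_y = first['pos'][1] + step * math.sin(second['heading'])
    dist = math.hypot(est_x - second['pos'][0], est_y - second['pos'][1])
    return dist <= tolerance * second['size']
```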
20150371089 | AUTOMATED REMOTE CAR COUNTING - A system for automated car counting comprises a satellite-based image collection subsystem; a data storage subsystem; and an analysis software module stored and operating on a computer coupled to the data storage subsystem. The satellite-based image collection subsystem collects images corresponding to a plurality of areas of interest and stores them in the data storage subsystem. The analysis module: (a) retrieves images corresponding to an area of interest from the data storage subsystem; (b) identifies a parking space in an image; (c) determines if there is a car located in the parking space; (d) determines a location, size, and angular direction of a car in a parking space; (e) determines an amount of overlap of a car with an adjacent parking space; (f) iterates steps (b)-(e) until no unprocessed parking spaces remain; and (g) iterates steps (a)-(f) until no unprocessed images corresponding to areas of interest remain. | 12-24-2015 |
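Step (e) of the analysis above, determining the amount of overlap of a car with an adjacent parking space, reduces to rectangle intersection if the car's angular direction is ignored. The sketch below makes that simplifying assumption (a full implementation would handle rotated boxes):

```python
def overlap_fraction(car, space):
    """Fraction of the car's axis-aligned bounding box that falls
    inside a parking space. Boxes are (x_min, y_min, x_max, y_max)."""
    ix = max(0.0, min(car[2], space[2]) - max(car[0], space[0]))
    iy = max(0.0, min(car[3], space[3]) - max(car[1], space[1]))
    car_area = (car[2] - car[0]) * (car[3] - car[1])
    return (ix * iy) / car_area if car_area > 0 else 0.0
```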
20150371093 | IMAGE PROCESSING APPARATUS - For a lane line search range set in a captured image obtained through imaging of the travel direction of a vehicle, a first lane line detection process detects a lane line based on a luminance image for more than half of rows within the search range, and detects a lane line based on a color difference image for the other rows, while a second lane line detection process detects a lane line based on the color difference image for a greater number of rows than in the first process. Then, a mode switchover control process performs mode switchover determination between the first and second modes based on the number of rows where a lane line of a predefined width or wider has been detected, from among the rows where a lane line has been detected, and performs mode switchover control based on the determination result. | 12-24-2015 |
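The mode-switchover determination above can be sketched as a rule on the count of rows that saw a line of at least the predefined width among all rows where a line was detected. The 0.5 fraction and the mapping of fractions to modes are illustrative stand-ins for the patent's actual criterion:

```python
def choose_mode(current_mode, wide_line_rows, detected_rows,
                wide_fraction=0.5):
    """Pick the lane-detection mode: 1 = luminance-first (first
    process), 2 = color-difference-heavy (second process). Switches
    based on how many detected rows contained a line of at least the
    predefined width."""
    if detected_rows == 0:
        return current_mode             # nothing detected: keep the mode
    if wide_line_rows / detected_rows >= wide_fraction:
        return 2    # many wide lines: favor the color difference image
    return 1        # mostly narrow lines: favor luminance-based detection
```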
20150371096 | HAZARD DETECTION FROM A CAMERA IN A SCENE WITH MOVING SHADOWS - Computerized methods are performable by a driver assistance system while the host vehicle is moving. The driver assistance system includes a camera connectible to a processor. First and second image frames are captured from the field of view of the camera. Corresponding image points of the road are tracked from the first image frame to the second image frame. Image motion between the corresponding image points of the road is processed to detect a hazard in the road. The corresponding image points are determined to be of a moving shadow cast on the road to avoid a false positive detection of a hazard in the road or the corresponding image points are determined not to be of a moving shadow cast on the road to verify detection of a hazard in the road. | 12-24-2015 |
20150371102 | METHOD FOR RECOGNIZING AND LOCATING OBJECT - A method for recognizing and locating an object includes an offline mode process and an online mode process. In the offline mode process, plural sampled edge points of a template image of the object and respective gradient angles and a gravity position of the plural sampled edge points are obtained, and plural similarity score tables are obtained according to the plural sampled edge points, a predetermined detecting distance range and a predetermined gradient angle difference range. In the online mode process, plural edge points of a live image and respective gradient angles are obtained, plural predictive gravity positions are calculated, and plural similarity scores corresponding to the plural predictive gravity positions are summed up. The predictive gravity position with the local maximum of the similarity scores higher than a threshold value is correlated with the gravity position of the template image so as to recognize and locate the object. | 12-24-2015 |
20150371113 | METHOD AND APPARATUS FOR GENERATING TEMPORALLY CONSISTENT SUPERPIXELS - A method and an apparatus for generating superpixels for a sequence of images. A cluster assignment generator generates a cluster assignment for a first image of the sequence of images, e.g. by clustering pixels of the first image into superpixels or by retrieving an initial cluster assignment for the first image and processing only contour pixels with regard to their cluster assignment. A label propagator initializes subsequent images based on a label propagation using backward optical flow. A contour pixel processor then processes only contour pixels with regard to their cluster assignment for subsequent images of the sequence of images. | 12-24-2015 |
20150371384 | CORRELATED DIFFUSION IMAGING SYSTEM AND METHOD FOR IDENTIFICATION OF BIOLOGICAL TISSUE OF INTEREST - There is disclosed a novel form of imaging referred to in this disclosure as “correlated diffusion imaging” or CDI, in which the tissue being imaged is characterized by a joint correlation of diffusion signal attenuation across multiple gradient pulse strengths and timings. Advantageously, by taking into account signal attenuation at different water diffusion motion sensitivities, correlated diffusion imaging can provide significantly improved delineation between cancerous tissue and healthy tissue when compared to existing diffusion imaging modalities. In an embodiment, the method comprises performing quantitative evaluation using receiver operating characteristic (ROC) curve analysis, tissue class separability analysis, and visual assessment to study correlated diffusion imaging for the task of identification of biological tissue of interest. In another embodiment, the method comprises comparing T2-weighted imaging results with those obtained using standard diffusion imaging (via the apparent diffusion coefficient (ADC)) and with those obtained using CDI for tissue characterization and analysis. In still another embodiment, the method comprises a dual-stage signal mixing configuration of CDI that provides better visualization of anatomical information while preserving strong delineation between cancerous tissue and healthy tissue. | 12-24-2015 |
20150371403 | TARGET OBJECT IDENTIFYING DEVICE, TARGET OBJECT IDENTIFYING METHOD AND TARGET OBJECT IDENTIFYING PROGRAM - Monitoring target matching means | 12-24-2015 |
20150371520 | VISION BASED SYSTEM FOR DETECTING DISTRESS BEHAVIOR - A system and method for detecting a distress condition of a person in a monitored location. The system is configured to receive an image stream of the monitored location, and detect a human body or body part within the monitored location. The system maintains and updates a list of areas in which lack of movement is permitted, e.g. beds, sofas, and chairs. Upon detecting that the person is no longer moving and is located in a new area that is not in the list of areas, the system enters an acknowledgement session in which it asks the person to perform a certain action if everything is fine. If the given action is detected within a pre-determined period, the system updates the list of areas to add the new area; otherwise, the system executes a pre-defined function responding to the distress condition, e.g. calling 911. | 12-24-2015 |
20150374557 | Systems and Methods for Monitoring and Controlling an Absorbent Article Converting Line - The present disclosure relates to methods and apparatuses for monitoring substrates advancing along a converting apparatus in a machine direction. The apparatus may include an analyzer connected with a line scan camera through a communication network. The analyzer may be configured as a field programmable gate array, an application specific integrated circuit, or a graphical processing unit. In addition, the line scan camera may include a linear array of pixel data and define a linear field of view, wherein the line scan camera is arranged such that the linear field of view extends in the machine direction. The apparatus may further include an illumination source that illuminates the linear field of view. In operation, the substrate is advanced in the machine direction such that a portion of the substrate advances through the linear field of view. In turn, the apparatus may be configured to perform various monitoring and/or control functions. | 12-31-2015 |
20150378431 | EYE-CONTROLLED USER INTERFACE - Techniques for providing an eye-controlled user interface for an electronic device are described. In some examples, a process includes establishing a control link between a device and a visual control circuit, the visual control circuit having an image sensor and a visual feature disposed substantially proximate to the image sensor at a control point, receiving an image by the image sensor, evaluating the image to determine whether an eye is oriented substantially toward the control point, determining whether a control action is intended, and, if the control action is intended, deriving the control action, and using the control link to perform the control action. | 12-31-2015 |
20150379354 | METHOD AND SYSTEM FOR DETECTING MOVING OBJECTS - A moving objects detection method is disclosed. The method may include: identifying a plurality of feature points based on a plurality of video frames; selecting, from the plurality of feature points, a first and a second group of feature points based on correlations between the plurality of feature points; and identifying in at least one video frame two segments based on the first and the second groups of feature points, respectively, as detected moving objects, where a correlation between two feature points may include a distance component and a movement difference component, where the distance component is related to a distance between the two feature points, and the movement difference component is related to a difference between corresponding movements of the two feature points. A moving objects detection system is also provided. | 12-31-2015 |
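The two-component correlation in this abstract (a distance term plus a movement-difference term) can be illustrated with a small sketch. The exponential kernels, the `alpha` weight, the threshold, and the greedy grouping below are all invented for illustration and are not taken from the patent:

```python
import math

def correlation(p1, p2, alpha=0.5):
    """Illustrative correlation between two tracked feature points.

    Each point is (x, y, dx, dy): image position plus frame-to-frame
    movement. Combines a distance component and a movement-difference
    component, mirroring the abstract's two-component correlation.
    """
    dist = math.hypot(p1[0] - p2[0], p1[1] - p2[1])
    move_diff = math.hypot(p1[2] - p2[2], p1[3] - p2[3])
    # High correlation when points are close AND move alike.
    return alpha * math.exp(-dist / 100.0) + (1 - alpha) * math.exp(-move_diff / 5.0)

def group_points(points, threshold=0.6):
    """Greedy grouping: a point joins a group if it correlates strongly
    with any member; otherwise it seeds a new group."""
    groups = []
    for p in points:
        for g in groups:
            if any(correlation(p, q) >= threshold for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

# Two clusters: one moving right, one moving down.
pts = [(10, 10, 3, 0), (12, 11, 3, 0), (200, 200, 0, 4), (203, 201, 0, 4)]
print(len(group_points(pts)))  # 2 groups -> two detected moving objects
```

Points that are both near each other and moving consistently end up in the same group; each resulting group corresponds to one detected moving object.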
20150379355 | A SURVEILLANCE SYSTEM - An automated surveillance system is disclosed in this specification. The system comprises a computing system arranged to receive a plurality of surveillance feeds from a surveillance network and detect characteristics of the surveillance feeds that are indicative of categorised events. Each of the surveillance feeds has a geospatial reference tag that identifies the origin of content contained within the feed. The surveillance system determines a response reaction to detected events and identifies potential response units in the vicinity of the surveillance location using the geospatial reference tag. | 12-31-2015 |
20150379371 | Object Detection Utilizing Geometric Information Fused With Image Data - Two-dimensional and three-dimensional data of a physical scene are combined and analyzed together to identify physical objects physically present in the physical scene. Image features obtained from the two-dimensional data and geometric features obtained from the three-dimensional data are combined with one another such that corresponding image features are associated with corresponding geometric features. Automated object detection mechanisms are directed to the combination of image and geometric features and consider them together in identifying physical objects from the physical scene. Such automated object detection mechanisms utilize machine learning such as selecting and tuning multiple classifiers, with each classifier identifying potential objects based on a specific set of image and geometric features, and further identifying and adjusting weighting factors to be applied to the results of such classifiers, with the weighted combination of the output of the multiple classifiers providing the resulting object identification. | 12-31-2015 |
20150379717 | INFORMATION PROCESSING APPARATUS AND METHOD FOR UPDATING FEATURE VALUES OF PRODUCTS FOR OBJECT RECOGNITION - An information processing apparatus includes a storage unit, an image capturing unit, and a processing unit. The storage unit stores a plurality of feature values to be used for object recognition and an update program for the feature values, with respect to each of products registered for sale. The image capturing unit is configured to acquire an image of a product registered for sale. The processing unit is configured to extract a feature value of the product from the acquired image, select one of the plurality of the feature values corresponding to the product as a replacement target, by executing the update program corresponding to the product, and replace the selected feature value with the extracted feature value. | 12-31-2015 |
20150379726 | BODY MOTION DETECTION DEVICE AND METHOD - A contrast calculating unit calculates, as each of a contrast of a high frequency component and a contrast of a low frequency component of a transformed radiographic image, a contrast in a gradient direction of an edge portion in an analysis region with each of analysis points set by an analysis point setting unit being the center of the analysis region. A ratio calculating unit calculates, for each gradient direction, a ratio of the contrast of the high frequency component to the contrast of the low frequency component. A determining unit determines the smallest ratio as an index indicating the body motion, and determines whether or not there is a body motion during an imaging operation to take the radiographic image based on a result of statistical processing of the indexes at the analysis points. A display control unit displays a result of the determination on a display unit. | 12-31-2015 |
20150379727 | MOTION BASED ADAPTIVE RENDERING - An apparatus, system and method are provided to determine a motion of pixels in local regions of a scene, classify the motion into a speed category, and make decisions on how to render blocks of pixels. In one implementation the motion in a tile is classified into at least three different speed regimes. If the pixels in a tile are in a quasi-static speed regime, a determination is made whether or not to reuse a fraction of pixels from the previous frame. If the pixels are determined to be in a high speed regime, a decision is made whether or not a sampling rate may be reduced. | 12-31-2015 |
20150379729 | AUTOMATICALLY DETERMINING FIELD OF VIEW OVERLAP AMONG MULTIPLE CAMERAS - Field of view overlap among multiple cameras is automatically determined as a function of the temporal overlap of object tracks determined within their fields-of-view. Object tracks with the highest similarity value are assigned into pairs, and portions of the assigned object track pairs having a temporally overlapping period of time are determined. Scene entry points are determined from object locations on the tracks at a beginning of the temporally overlapping period of time, and scene exit points from object locations at an ending of the temporally overlapping period of time. Boundary lines for the overlapping fields-of-view portions within the corresponding camera fields-of-view are defined as a function of the determined entry and exit points in their respective fields-of-view. | 12-31-2015 |
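As a toy illustration of the temporal-overlap idea (a simplified sketch, not the patented procedure), tracks of the same object from two cameras sharing a clock can be intersected in time, and the endpoints of the overlap read off as scene entry and exit points in each camera:

```python
def temporal_overlap(track_a, track_b):
    """Tracks are dicts mapping frame time -> (x, y) position.
    Returns the (start, end) of the temporally overlapping period."""
    times = sorted(set(track_a) & set(track_b))
    return (times[0], times[-1]) if times else None

def entry_exit_points(track, span):
    """Object locations at the start (entry) and end (exit) of the
    overlapping period, in this camera's own image coordinates."""
    t0, t1 = span
    return track[t0], track[t1]

# Toy tracks of one object seen by two cameras with a shared clock.
cam1 = {0: (5, 5), 1: (6, 5), 2: (7, 5), 3: (8, 5)}
cam2 = {2: (0, 9), 3: (1, 9), 4: (2, 9)}
span = temporal_overlap(cam1, cam2)
print(span)                           # frames 2..3 overlap
print(entry_exit_points(cam1, span))  # entry/exit in camera 1
print(entry_exit_points(cam2, span))  # entry/exit in camera 2
```

In the abstract, boundary lines of the overlapping field-of-view region would then be drawn through these entry/exit points in each camera's own coordinates.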
20160004906 | Tracking at Least One Pedestrian - Tracking at least one pedestrian is disclosed. Initially, four or more first images are received showing a first pedestrian and a roadway or a walkway. First static and dynamic characteristics and second static and dynamic characteristics of the first pedestrian are determined. The second static characteristic is compared to the first static characteristic and the second dynamic characteristic is compared to the first dynamic characteristic. It is then determined that the first pedestrian is traversing a portion of the roadway or the walkway. | 01-07-2016 |
20160004908 | SHAPE RECOGNITION DEVICE, SHAPE RECOGNITION PROGRAM, AND SHAPE RECOGNITION METHOD - Provided are a shape recognition device, a shape recognition program, and a shape recognition method capable of obtaining more accurate information for recognizing an outer shape of a target object. A shape recognition device according to the present invention includes: an outer shape detection unit that detects an outer shape of a hand; an extraction point setting unit that sets a plurality of points inside of the detected outer shape as extraction points; a depth level detection unit that measures respective spatial distances to points on a surface of the hand as depth levels, the points respectively corresponding to the plurality of extraction points; and a hand orientation recognition unit that determines which of a palmar side and a back side the hand shows, on the basis of a criterion for fluctuations in the measured depth levels. | 01-07-2016 |
20160004909 | TRACKING USING MULTILEVEL REPRESENTATIONS - Tracking a target object in frames of video data, including receiving a first tracking position associated with the target object in a first frame of a video sequence; identifying, for a second frame of the video sequence, a plurality of representation levels and at least one node for each representation level; determining, by a processor, a second tracking position in the second frame by estimating motion of the target object in the second frame between the first frame and the second frame; determining, at each representation level by the processor, a value for each node based on a conditional property of the node in the second frame; and adjusting, by the processor, the second tracking position based on the values determined for each of the nodes and interactions between at least some of the nodes at different representation levels. | 01-07-2016 |
20160004913 | APPARATUS AND METHOD FOR VIDEO ANALYTICS - An apparatus for Video Analytics (VA), including an object identifier configured to identify an object in a video image, a property extractor configured to extract properties of the object, a property-of-interest designator configured to designate at least some of the extracted properties as properties of interest for identifying a target, and a target recognizer configured to recognize an object identified in a video image as a target in a case where properties of the object are similar to properties of interest. | 01-07-2016 |
20160004916 | Road Region Detection - A road region detection method is provided. The method includes: obtaining a first image captured by a camera at a first time point and a second image captured by the camera at a second time point. | 01-07-2016 |
20160004924 | REAL-TIME VIDEO TRACKING SYSTEM - A method for detecting and tracking a target includes detecting the target using a plurality of feature cues, fusing the plurality of feature cues to form a set of target hypotheses, tracking the target based on the set of target hypotheses and a scene context analysis, and updating the tracking of the target based on a target motion model. | 01-07-2016 |
20160004929 | SYSTEM AND METHOD FOR ROBUST MOTION DETECTION - A method and system for detecting objects of interest in a camera-monitored area are disclosed. Statistical analysis of block feature data, particularly Sobel edge and spatial high-frequency responses, is used to model the background of the scene and to segregate foreground objects from the background. This technique provides a robust motion detection scheme that catches genuine motion while remaining immune to false alarms. | 01-07-2016 |
20160005172 | SYSTEM AND METHOD FOR DETERMINING ROTATION INVARIANT FEATURE DESCRIPTORS FOR POINTS OF INTEREST IN DIGITAL IMAGES - A system and method for determining rotation invariant feature descriptors for points of interest in digital images for image matching are disclosed. In one embodiment, a point of interest in each of two or more digital images is identified. Further, the digital images are transformed to change location of the point of interest in each of the digital images to a principal point. Furthermore, a rotation invariant feature descriptor is determined for the point of interest in each of the transformed digital images for image matching. | 01-07-2016 |
20160005173 | REMOTE POINTING METHOD - The present invention relates to a remote pointing method. A remote pointing method according to the present invention comprises capturing images with first and second cameras that are spatially separated from each other; detecting a pointing part in a first image captured by the first camera; determining a region of interest including the pointing part in a second image captured by the second camera; and extracting stereoscopic coordinates of the pointing part within the region of interest. | 01-07-2016 |
20160005174 | SYSTEM AND METHOD FOR SYNCHRONIZING FIDUCIAL MARKERS - A system of active fiducial markers for pose calculation of head-mounted displays is described, in which information is exchanged among the markers for the purpose of synchronization. Fiducial patterns are made more easily identifiable by selective duty cycles of subgroups of synchronized emitters, and a means of propagating pose information among markers is described. | 01-07-2016 |
20160005175 | SERVICE PROVISION DEVICE, AND METHOD - A non-transitory recording medium storing a program that causes a computer to execute a process, the process includes: imaging a given object from plural different angles, and extracting from the plural obtained captured images, one or plural captured images having a feature amount that differs from a feature amount in another captured image by more than a specific reference amount; and providing the one or the plural extracted captured images as determination-use images employable in determination as to whether or not the given object is included in a captured image. | 01-07-2016 |
20160005176 | METHOD AND DEVICE FOR CALIBRATION-FREE GAZE ESTIMATION - The invention relates to a method of gaze estimation. To determine the position of the gaze without calibrating the system used for determining the gaze, the method comprises: detecting at least a location of the centre of at least an eye on at least an eye image of a viewer watching at least a video image displayed on a screen; determining at least a first position of the gaze of the viewer on the screen by using the at least a detected location of the centre of the at least an eye and a mapping function based on the centre-bias property of human gaze distribution. The invention also relates to a device configured for estimating the gaze. | 01-07-2016 |
20160008662 | BALL TRACKER CAMERA | 01-14-2016 |
20160012274 | VEHICLE SAFETY SYSTEM AND OPERATING METHOD THEREOF | 01-14-2016 |
20160012281 | SYSTEMS AND METHODS OF GESTURE RECOGNITION | 01-14-2016 |
20160012282 | Object Sensing Device | 01-14-2016 |
20160012283 | Stereoscopic Camera Apparatus | 01-14-2016 |
20160012297 | METHOD AND APPARATUS FOR SURVEILLANCE | 01-14-2016 |
20160012306 | BLOOD DETECTION SYSTEM WITH REAL-TIME CAPABILITY AND METHOD OF OPERATION THEREOF | 01-14-2016 |
20160012308 | Image Capture and Identification System and Process | 01-14-2016 |
20160012597 | FEATURE TRACKABILITY RANKING, SYSTEMS AND METHODS | 01-14-2016 |
20160012598 | VISUAL AND PHYSICAL MOTION SENSING FOR THREE-DIMENSIONAL MOTION CAPTURE | 01-14-2016 |
20160012608 | OBJECT TRACKING DEVICE, OBJECT TRACKING METHOD, AND COMPUTER-READABLE MEDIUM | 01-14-2016 |
20160012609 | Method and System for Cluster-Based Video Monitoring and Event Categorization | 01-14-2016 |
20160012611 | System And Method Of Measuring Distances Related To An Object Utilizing Ancillary Objects | 01-14-2016 |
20160014297 | SALIENT POINT-BASED ARRANGEMENTS | 01-14-2016 |
20160018904 | Gesture Recognition in Vehicles - A method and system for performing gesture recognition of a vehicle occupant employing a time of flight (TOF) sensor and a computing system in a vehicle. An embodiment of the method of the invention includes the steps of receiving one or more raw frames from the TOF sensor, performing clustering to locate one or more body part clusters of the vehicle occupant, calculating the location of the tip of the hand of the vehicle occupant, determining whether the hand has performed a dynamic or a static gesture, retrieving a command corresponding to one of the determined static or dynamic gestures, and executing the command. | 01-21-2016 |
20160019412 | METHOD FOR PERFORMING A FACE TRACKING FUNCTION AND AN ELECTRIC DEVICE HAVING THE SAME - A method for performing a face tracking function in an electric device is provided. The electric device has a touch panel, a camera, and a processor. The method includes the following steps. A touch signal is received by the touch panel. During a video call, a face tracking mode is entered by the processor based on the touch signal. Face tracking is performed by the processor on a frame captured from the camera to obtain at least one region of interest (ROI) of the captured frame, each ROI containing an image of a face. A target frame is generated by the processor by combining the at least one ROI. The target frame is transmitted to another electric device by the processor, so that the target frame is shown on the other electric device as a video call frame. | 01-21-2016 |
20160019427 | VIDEO SURVEILLENCE SYSTEM FOR DETECTING FIREARMS - The present invention teaches that a computer system can be taught to analyze a stream of video surveillance imagery for individuals carrying firearms, using a machine vision and machine learning system of the cascading classifier type trained by special methods adapted to firearm recognition, in particular exposure of the classifier to pre-categorized firearm images. The system may poll a number of types of recognition methods before making a positive firearm recognition. | 01-21-2016 |
20160019436 | GRID DATA PROCESSING METHOD AND APPARATUS - The present invention discloses a grid data record processing method. The method comprises: acquiring influence parameters of the lag time of an insulator on which flashover has occurred, the lag time being the time interval from the insulator flashover to the tripping of a corresponding breaker in a substation; determining the lag time according to the acquired influence parameters and a lag-time evaluation model; and determining trip-up records caused by the insulator flashover from grid data records according to the lag time. With the method and apparatus according to embodiments of the present invention, trip-up records caused by insulator flashover can be efficiently determined from grid data records. | 01-21-2016 |
20160019683 | OBJECT DETECTION METHOD AND DEVICE - Disclosed is an object detection method used to detect an object in an image pair corresponding to a current frame. The image pair includes an original image of the current frame and a disparity map of the same current frame. The original image of the current frame includes at least one of a grayscale image and a color image of the current frame. The object detection method comprises steps of obtaining a first detection object detected in the disparity map of the current frame; acquiring an original detection object detected in the original image of the current frame; correcting, based on the original detection object, the first detection object so as to obtain a second detection object; and outputting the second detection object. | 01-21-2016 |
20160019698 | SYSTEMS AND METHODS FOR PEOPLE COUNTING IN SEQUENTIAL IMAGES - Methods for counting persons in images, and systems therefor, are provided. The method can include obtaining image data for multiple sequential images of a physical area acquired by a camera. The method can also include, based on the image data, generating a background mask for at least one image from the multiple images, the background mask indicating pixels identified as corresponding to non-moving regions and pixels identified as corresponding to moving regions that meet an exclusion criterion. The method additionally includes, based on the background mask, generating a foreground mask for the at least one image identifying pixels in the image associated with persons, and computing an estimate of the number of persons in the physical area based at least on the number of foreground pixels and a pre-defined relationship between a number of pixels and a number of persons for the camera. | 01-21-2016 |
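The final counting step (foreground pixel count divided by a camera-specific "pixels per person" calibration) can be sketched in a few lines. The function name and the calibration constant below are illustrative assumptions, not from the patent:

```python
def estimate_person_count(foreground_mask, pixels_per_person):
    """foreground_mask: 2D list of 0/1 values marking person pixels.
    pixels_per_person: camera-specific calibration constant embodying the
    'pre-defined relationship between a number of pixels and a number of
    persons' mentioned in the abstract."""
    fg = sum(sum(row) for row in foreground_mask)
    return round(fg / pixels_per_person)

# 30 rows, each with 40 foreground pixels -> 1200 foreground pixels.
mask = [[1] * 40 + [0] * 60 for _ in range(30)]
print(estimate_person_count(mask, 600))  # ~2 persons
```

In practice the calibration constant depends on camera height, lens, and typical person size in the image, so it is determined per camera.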
20160019700 | METHOD FOR TRACKING A TARGET IN AN IMAGE SEQUENCE, TAKING THE DYNAMICS OF THE TARGET INTO CONSIDERATION - A method for tracking a target in a sequence of images comprises: a step of detecting objects, a temporal association step, aiming to associate the objects detected in the current image with the objects detected in the previous image based on their respective positions, a step of determining a second target in the current image according to a search area determined for a previous image of the sequence, a step of determining the detected object that best corresponds to the dynamics of a final target, a step of updating the search area for the current image based on the position of the target that best corresponds to the dynamics of the final target, and a step of searching for the final target in the search area for the current image by comparing areas of the current image with a reference model representative of the final target. | 01-21-2016 |
20160026032 | ELECTRONIC SHELF (eShelf) - The invention is an electronic shelf (eShelf). The eShelf uses highly conductive electrodes to solve the long-line addressing problems using very-simple, low-cost manufacturing processes to build very-long, reflective, “no-power”, full-color, liquid crystal displays (LCDs) with perfect image retention. The electronic shelf is composed of an eSheet cholesteric LCD attached to a shelf product sensor pad that can turn a normal store aisle into an interactive, full-color, fun and informative shopping experience. The eShelf is the next generation of in-store smart technology combining product management with customer interaction and advertising. The true success of the eShelf will depend on the countless apps that will run on or interact with the eShelf to help customers make their purchasing decisions. These software applications will allow the eShelf to interact with the customer's smart mobile device, such as, a tablet, smartphone, smartwatch, or Google Glass. | 01-28-2016 |
20160026245 | System and Method for Probabilistic Object Tracking Over Time - A system and method are provided for object tracking in a scene over time. The method comprises obtaining tracking data from a tracking device, the tracking data comprising information associated with at least one point of interest being tracked; obtaining position data from a scene information provider, the scene being associated with a plurality of targets, the position data corresponding to targets in the scene; applying a probabilistic graphical model to the tracking data and the position data to predict a target of interest associated with an entity being tracked; and performing at least one of: using the target of interest to determine a refined point of interest; and outputting at least one of the refined point of interest and the target of interest. | 01-28-2016 |
20160026257 | WEARABLE UNIT FOR SELECTIVELY WITHHOLDING ACTIONS BASED ON RECOGNIZED GESTURES - A wearable apparatus and method are provided for selectively disregarding triggers originating from persons other than a user of the wearable apparatus. The wearable apparatus comprises a wearable image sensor configured to capture image data from an environment of the user of the wearable apparatus. The wearable apparatus also includes at least one processing device programmed to receive the captured image data and identify in the image data a trigger. The trigger is associated with at least one action to be performed by the wearable apparatus. The processing device is also programmed to determine, based on the image data, whether the trigger identified in the image data is associated with a person other than the user of the wearable apparatus, and forgo performance of the at least one action if the trigger identified in the image data is determined to be associated with a person other than the user. | 01-28-2016 |
20160026847 | PUPIL DETECTION - Embodiments that relate to determining an estimated pupil region of an eye are disclosed. In one embodiment a method includes receiving an image of an eye, with the image comprising a plurality of pixels. A rough pupil region is generated using at least a subset of the plurality of pixels. A plurality of pupil boundary point candidates are extracted from the rough pupil region, with each of the candidates weighted based on color values of at least two neighbor pixels. A parametric curve may be fitted to the weighted pupil boundary point candidates to determine the estimated pupil region of the eye of the user. | 01-28-2016 |
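One concrete way to realize the "parametric curve fitted to weighted pupil boundary point candidates" is a weighted algebraic circle fit (the Kåsa method shown here is one standard choice, not necessarily the curve used in the patent; the weights would come from the neighbor-pixel color analysis described in the abstract):

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [v - f * u for v, u in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_circle(points, weights):
    """Weighted algebraic (Kasa) circle fit: find D, E, F minimizing
    sum w * (x^2 + y^2 + D*x + E*y + F)^2, then recover center and radius."""
    S = [[0.0] * 3 for _ in range(3)]
    t = [0.0] * 3
    for (x, y), w in zip(points, weights):
        row = (x, y, 1.0)
        z = -(x * x + y * y)
        for i in range(3):
            t[i] += w * row[i] * z
            for j in range(3):
                S[i][j] += w * row[i] * row[j]
    D, E, F = solve3(S, t)
    a, b = -D / 2.0, -E / 2.0
    return (a, b), (a * a + b * b - F) ** 0.5

# Four exact points on a circle centred at (2, 3) with radius 5.
pts = [(7.0, 3.0), (-3.0, 3.0), (2.0, 8.0), (2.0, -2.0)]
center, radius = fit_circle(pts, [1.0] * 4)
print(center, radius)  # close to (2.0, 3.0) and 5.0
```

Down-weighting unreliable boundary candidates simply means giving them smaller entries in `weights`; the normal equations then pull the fitted circle toward the trusted points.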
20160026848 | GLOBAL-SCALE OBJECT DETECTION USING SATELLITE IMAGERY - A system for performing global-scale object detection using satellite imagery, comprising an object detection server that receives and analyzes image data to identify objects within an image via a curated computational method, and a curation interface that enables a user to curate image information for use in object identification; and a method for performing global-scale object detection via a curated computational method. | 01-28-2016 |
20160026853 | WEARABLE APPARATUS AND METHODS FOR PROCESSING IMAGE DATA - A wearable apparatus and method are provided for processing images including product descriptors. In one implementation, a wearable apparatus for processing images including a product descriptor is provided. The wearable apparatus includes a wearable image sensor configured to capture a plurality of images from an environment of a user of the wearable apparatus. The wearable apparatus also includes at least one processing device programmed to analyze the plurality of images to identify one or more of the plurality of images that include an occurrence of the product descriptor. Based on analysis of the one or more identified images, the at least one processing device is also programmed to determine information related to the occurrence of the product descriptor. The at least one processing device is further configured to cause the information and an identifier of the product descriptor to be stored in a memory. | 01-28-2016 |
20160026857 | IMAGE PROCESSOR COMPRISING GESTURE RECOGNITION SYSTEM WITH STATIC HAND POSE RECOGNITION BASED ON DYNAMIC WARPING - An image processing system comprises an image processor having image processing circuitry and an associated memory. The image processor is configured to implement a gesture recognition system comprising a static pose recognition module. The static pose recognition module is configured to identify a hand region of interest in at least one image, to extract a contour of the hand region of interest, to compute a feature vector based at least in part on the extracted contour, and to recognize a static pose of the hand region of interest utilizing a dynamic warping operation based at least in part on the feature vector. | 01-28-2016 |
20160026863 | PUPIL DETECTION DEVICE AND PUPIL DETECTION METHOD - A pupil detection device includes an identification unit that identifies a pupil region from a captured image of an eye, an extractor that extracts a contour of the pupil region identified by the identification unit, a selector that selects a plurality of points on the contour of the pupil region extracted by the extractor, a center calculator that calculates a center of a circle passing through the plurality of points selected by the selector, and a pupil detector that detects a center of the pupil region from the center of a circle calculated by the center calculator. | 01-28-2016 |
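The "center of a circle passing through the plurality of points" in this abstract is, for three selected contour points, the classical circumcenter. A minimal sketch (illustrative only; the patent's center calculator may use more points or a different formulation):

```python
def circumcenter(p1, p2, p3):
    """Center of the unique circle through three non-collinear contour
    points; per the abstract, this estimates the pupil center."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy)
          + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx)
          + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return ux, uy

# Three points on a circle centred at (1, 1) with radius 1.
print(circumcenter((2, 1), (0, 1), (1, 2)))  # (1.0, 1.0)
```

With more than three contour points, one would compute centers for several point triples and average them, or switch to a least-squares fit, to suppress contour noise.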
20160026865 | VISION-BASED SYSTEM FOR DYNAMIC WEATHER DETECTION - A method of detecting a dynamic weather event includes the steps of: (a) receiving video images of a scene from a camera; (b) dividing each of the video images into multiple regions, in which a region is defined by a range of distances from the camera to objects in the scene; (c) selecting a region; and (d) segmenting the selected region into a plurality of three-dimensional (3D) image patches, in which each 3D image patch includes a time-sequence of T patches, with each patch comprised of N×M pixels, wherein N, M and T are integer numbers. The method also includes the following steps: measuring an image intensity level in each of the 3D image patches; masking 3D image patches containing image intensity levels that are above a first threshold level, or below a second threshold level; and extracting features in each 3D image patch that is not discarded by the masking step. Based on the extracted features, the method makes a binary decision on detecting a dynamic weather event. | 01-28-2016 |
20160026867 | WEARABLE APPARATUS AND METHOD FOR CAPTURING IMAGE DATA USING MULTIPLE IMAGE SENSORS - A wearable apparatus and method are provided for capturing image data. In one implementation, a wearable apparatus for capturing image data is provided. The wearable apparatus includes a plurality of image sensors for capturing image data of an environment of a user. Each of the image sensors is associated with a different field of view. The wearable apparatus also includes a processing device programmed to process image data captured by at least two of the image sensors to identify an object in the environment. The processing device is also programmed to identify a first image sensor, which has a first optical axis closer to the object than a second optical axis of a second image sensor. After identifying the first image sensor, the processing device is also programmed to process image data from the first image sensor using a first processing scheme, and process image data from the second image sensor using a second processing scheme. | 01-28-2016 |
20160026868 | WEARABLE APPARATUS AND METHOD FOR PROCESSING IMAGES INCLUDING PRODUCT DESCRIPTORS - A wearable apparatus and method are provided for processing images including product descriptors. In one implementation, a wearable apparatus for processing images including a product descriptor is provided. The wearable apparatus includes a wearable image sensor configured to capture a plurality of images from an environment of a user of the wearable apparatus. The wearable apparatus also includes at least one processing device programmed to analyze the plurality of images to identify one or more of the plurality of images that include an occurrence of the product descriptor. Based on analysis of the one or more identified images, the at least one processing device is also programmed to determine information related to the occurrence of the product descriptor. The at least one processing device is further configured to cause the information and an identifier of the product descriptor to be stored in a memory. | 01-28-2016 |
20160026870 | WEARABLE APPARATUS AND METHOD FOR SELECTIVELY PROCESSING IMAGE DATA - A wearable apparatus and method are provided for capturing image data. In one implementation, a wearable apparatus for selectively processing images is provided. The wearable apparatus includes an image sensor configured to capture a plurality of images from an environment of a user. The wearable apparatus also includes at least one processing device programmed to access at least one rule for classifying images. The at least one processing device is also programmed to classify, according to the at least one rule, at least a first subset of the plurality of images as key images and at least a second subset of the plurality of images as auxiliary images. The at least one processing device is further programmed to delete at least some of the auxiliary images. | 01-28-2016 |
20160026871 | OBTAINING INFORMATION FROM AN ENVIRONMENT OF A USER OF A WEARABLE CAMERA SYSTEM - A wearable apparatus and method are provided for executing actions based on triggers identified in an environment of a user. In one implementation, a wearable apparatus for storing information related to objects identified in an environment of a user is provided. The wearable apparatus includes a wearable image sensor configured to capture a plurality of images from the environment of the user and at least one processing device. The processing device may be programmed to process the plurality of images to detect an object entering a receptacle, process at least one of the plurality of images that includes the object to determine at least a type of the object, and based on the type of the object, generate information related to an action to be taken related to the object. | 01-28-2016 |
20160026876 | SYSTEM AND METHOD OF RECOGNIZING TRAVELLED LANE OF VEHICLE - Disclosed are a system and a method of recognizing a travelled lane of a vehicle, which recognize a currently travelled lane by using an image obtained through a camera of a vehicle or information received from a road system. Further, the system and the method detect whether the vehicle changes a lane by using sensing data of the vehicle, current location information, road map data, and the like, and when it is confirmed that the vehicle changes a lane or enters a new lane, the system and the method make the vehicle immediately recognize a currently travelled lane by combining information about the change of a lane or the new lane with information about the first recognized travelled lane. Accordingly, the present invention may make a vehicle recognize a travelled lane even when it is difficult to recognize a lane by a camera, and rapidly obtain information about the change of a lane and an entry lane, thereby determining a final lane. Further, the present invention provides a corresponding vehicle and surrounding vehicles with information about the final lane, so that the information may be utilized in an advanced driver assistance system (ADAS) or a V2V application service. | 01-28-2016 |
20160026880 | DRIVING ASSIST SYSTEM FOR VEHICLE AND METHOD THEREOF - A driving assist system for a vehicle and a method thereof include a broadband camera which photographs the surrounding area of the vehicle to create an image including four channels of light information having different wavelengths. The broadband image data and the position of the vehicle are matched so that a road and an obstacle are easily recognized using a minimal number of cameras while driving the vehicle. Recognition performance of a drivable area is significantly improved, navigation for the vehicle is easily measured, and the biometric recognition abilities of a driver monitoring camera are improved, thereby improving the convenience of the driver through the improved performance of the driving assist device. | 01-28-2016 |
20160026888 | IDENTIFYING OBJECTS IN AN IMAGE USING CODED REFERENCE IDENTIFIERS - Image processing is performed to identify an image of a physical object within a digital image. A boundary of the image of the physical object may be determined. A coded reference identifier that is contained within the boundary of the image of the physical object may be recognized. A database record for the coded reference identifier may be associated with a database record for the physical object. | 01-28-2016 |
20160026890 | DEFINING REGION FOR MOTION DETECTION - A method, performed by a computer device, may include receiving a request to set up motion detection for a camera. The method may include generating a selection grid for a field of view associated with the camera, wherein the selection grid includes a plurality of grid elements; selecting one or more grid elements of the plurality of grid elements; and configuring motion detection for a video feed from the camera based on the selected one or more grid elements. | 01-28-2016 |
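The grid-based configuration in 20160026890 can be sketched in a few lines of Python; the function names, the 3×4 grid, and the pixel-level motion test below are illustrative assumptions, not taken from the application:

```python
def make_grid(width, height, rows, cols):
    """Selection grid over the camera field of view: each grid element
    is a rectangular sub-region given as (x, y, w, h)."""
    cw, ch = width // cols, height // rows
    return {(r, c): (c * cw, r * ch, cw, ch)
            for r in range(rows) for c in range(cols)}

def motion_in_selected(grid, selected, changed_pixels):
    """Report motion only when a changed pixel falls inside one of the
    selected grid elements, ignoring the rest of the field of view."""
    for px, py in changed_pixels:
        for key in selected:
            x, y, w, h = grid[key]
            if x <= px < x + w and y <= py < y + h:
                return True
    return False

grid = make_grid(640, 480, rows=3, cols=4)
# motion detection configured for only the top-left grid element
alert = motion_in_selected(grid, {(0, 0)}, [(10, 10)])
```

Selecting only element (0, 0) confines motion alerts to the top-left region of the field of view; pixels changing elsewhere are ignored.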
20160026898 | METHOD AND SYSTEM FOR OBJECT DETECTION WITH MULTI-SCALE SINGLE PASS SLIDING WINDOW HOG LINEAR SVM CLASSIFIERS - The invention provides methods and systems for reliably detecting objects in a received video stream from a camera. Objects are selected and a bound around selected objects is calculated and displayed. Bounded objects can be tracked. Bounding is performed by using Histogram of Oriented Gradients and linear Support Vector Machine classifiers. | 01-28-2016 |
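As a rough illustration of the HOG half of 20160026898, a single-cell orientation histogram and a sliding-window generator can be written in pure Python; the real system would feed such histograms over many cells to a linear SVM classifier, so everything below is a simplified sketch:

```python
import math

def hog_cell(img, bins=9):
    """Orientation histogram (unsigned gradients) for one cell.

    img: 2D list of grayscale intensities. Interior pixels only,
    central differences; magnitude-weighted votes into `bins` bins
    covering 0..180 degrees.
    """
    h, w = len(img), len(img[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang // (180.0 / bins)) % bins] += mag
    return hist

def sliding_windows(width, height, win, stride):
    """Yield top-left corners of a square sliding detection window."""
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            yield x, y

# A vertical edge: all gradients are horizontal (gx != 0, gy == 0),
# so every vote lands in the 0-degree bin.
img = [[0, 0, 0, 9, 9, 9] for _ in range(6)]
hist = hog_cell(img)
```

Multi-scale detection then amounts to running `sliding_windows` over a resized image pyramid and scoring each window's concatenated cell histograms with the trained linear SVM.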
20160027167 | System and Method for Analyzing and Processing Food Product - Systems and methods are described that provide a fast and simple way of processing meat or food products. Information is compiled and analyzed regarding the condition of a carcass, meat product, styling of the meat product and associated tray or package. Information is used in various processes, including determining which further processing steps are required. The information is also stored for future reference and analysis. | 01-28-2016 |
20160027177 | Creating Camera Clock Transforms from Image Information - Systems and methods are provided for using imagery depicting a timekeeping device to determine a clock offset for a particular image capture device. The clock offset can be used to correct timestamps associated with one or more images captured by such image capture device. One example method includes analyzing imagery depicting at least in part a timekeeping device to determine a first time displayed by the timekeeping device in the imagery. The method includes determining whether the first time comprises a 12-hour value or a 24-hour value. The method includes, when it is determined that the first time comprises a 12-hour value, determining a corresponding 24-hour value for the 12-hour value based at least in part on information contained within a plurality of images. The method includes determining a clock offset between the 24-hour value and the first timestamp. One example system includes a timestamp correction engine for correcting timestamps. | 01-28-2016 |
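The clock-offset idea in 20160027177 reduces to simple timestamp arithmetic once the displayed time has been read from the imagery and disambiguated to a 24-hour value; the sketch below assumes that disambiguation (the `is_pm` flag, which the application infers from information in other images) has already been done:

```python
from datetime import datetime, timedelta

def clock_offset(displayed_hhmm, is_pm, capture_timestamp):
    """Offset between a clock face seen in an image and the camera
    timestamp of that image.

    displayed_hhmm: (hour, minute) read from the depicted timekeeping
    device; for a 12-hour dial, `is_pm` converts it to a 24-hour value.
    """
    hour, minute = displayed_hhmm
    if is_pm and hour < 12:
        hour += 12
    elif not is_pm and hour == 12:
        hour = 0
    displayed = capture_timestamp.replace(hour=hour, minute=minute,
                                          second=0, microsecond=0)
    return displayed - capture_timestamp

def correct_timestamp(timestamp, offset):
    """Apply the derived offset to another image from the same camera."""
    return timestamp + offset

ts = datetime(2016, 1, 28, 14, 3, 0)
off = clock_offset((2, 10), True, ts)   # clock shows 2:10 PM -> 14:10
```

Here the camera clock runs seven minutes slow, so adding `off` to any of its timestamps corrects them.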
20160027181 | Accelerating Object Detection - Accelerating object detection techniques are described. In one or more implementations, adaptive sampling techniques are used to extract features from an image. Coarse features are extracted from the image and used to generate an object probability map. Then, dense features are extracted from high-probability object regions of the image identified in the object probability map to enable detection of an object in the image. In one or more implementations, cascade object detection techniques are used to detect an object in an image. In a first stage, exemplars in a first subset of exemplars are applied to features extracted from the multiple regions of the image to detect object candidate regions. Then, in one or more validation stages, the object candidate regions are validated by applying exemplars from the first subset of exemplars and one or more additional subsets of exemplars. | 01-28-2016 |
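The coarse-to-fine sampling in 20160027181 can be mimicked with a toy "objectness" score; the bright-pixel fraction used below is a stand-in for the coarse features, not the method of the application:

```python
def coarse_probability_map(image, cell):
    """Score each coarse cell; here the (hypothetical) score is the
    fraction of bright pixels, standing in for a cheap objectness cue."""
    h, w = len(image), len(image[0])
    probs = {}
    for cy in range(0, h, cell):
        for cx in range(0, w, cell):
            pix = [image[y][x]
                   for y in range(cy, min(cy + cell, h))
                   for x in range(cx, min(cx + cell, w))]
            probs[(cx, cy)] = sum(p > 128 for p in pix) / len(pix)
    return probs

def dense_regions(probs, threshold=0.5):
    """Only cells above threshold get the expensive dense feature pass."""
    return sorted(cell for cell, p in probs.items() if p >= threshold)

# Right half of the image is bright; only those cells pass to the
# dense-feature stage.
img = [[255 if x >= 4 else 0 for x in range(8)] for _ in range(8)]
probs = coarse_probability_map(img, 4)
hot = dense_regions(probs)
```

The saving comes from the dense extractor touching only the `hot` cells rather than the full image.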
20160027208 | IMAGE ANALYSIS METHOD - A method for analysing a point cloud, the method comprising: | 01-28-2016 |

20160034751 | OBJECT TRACKING AND BEST SHOT DETECTION SYSTEM - A method and system using face tracking and object tracking is disclosed. The method and system use face tracking, location, and/or recognition to enhance object tracking, and use object tracking and/or location to enhance face tracking. | 02-04-2016 |
20160034760 | Method for Accurately Determining the Position and Orientation of Each of a Plurality of Identical Recognition Target Objects in a Search Target Image - Embodiments of the invention relate to detecting the number, position, and orientation of objects when a plurality of recognition target objects are present in a search target image. Dictionary image data is provided, including a recognition target pattern, a plurality of feature points of the recognition target pattern, and an offset (O | 02-04-2016 |
20160034773 | ROBUST INDUSTRIAL OPTICAL CHARACTER RECOGNITION - Systems, methods, and articles to provide robust optical character recognition (OCR) for use in industrial environments. One or more implementations include utilizing Histogram of Oriented Gradients (HOG) features with a sliding window approach as a robust and computationally efficient method of OCR. The implementations are relatively simple to use because there are relatively few parameters to adjust, which allows a non-expert user to setup or modify the system and achieve desirable performance. One reason this is possible is because the implementations described herein do not require character segmentation, which can be difficult to optimize. | 02-04-2016 |
20160034779 | High Speed Searching For Large-Scale Image Databases - Embodiments are provided to search for a dictionary image corresponding to a target image. The method includes detecting keypoints in a set of dictionary images. The set of dictionary images includes at least one dictionary image having a plurality of pixels. At least one random pair of pixels is selected among the detected keypoints of the dictionary image on the basis of candidate coordinates for pixels distributed around the detected keypoints of the dictionary image. A feature vector of each keypoint of the dictionary image is calculated, including calculating a difference in brightness between the selected pairs of pixels of the dictionary image. The calculated difference in brightness is an element of the feature vector. Keypoints of a target image are detected. | 02-04-2016 |
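The brightness-difference descriptor in 20160034779 is close in spirit to BRIEF-style binary features; a minimal sketch (with hypothetical helper names and a fixed random seed for reproducibility) follows:

```python
import random

def make_pairs(n, radius, seed=0):
    """Candidate coordinates: n random pixel pairs distributed around a
    keypoint, within `radius` pixels of it."""
    rng = random.Random(seed)
    return [((rng.randint(-radius, radius), rng.randint(-radius, radius)),
             (rng.randint(-radius, radius), rng.randint(-radius, radius)))
            for _ in range(n)]

def binary_descriptor(image, keypoint, pairs):
    """Feature vector: each element is the sign of the brightness
    difference between one pixel pair around the keypoint."""
    kx, ky = keypoint
    desc = []
    for (dx1, dy1), (dx2, dy2) in pairs:
        a = image[ky + dy1][kx + dx1]
        b = image[ky + dy2][kx + dx2]
        desc.append(1 if a < b else 0)
    return desc

def hamming(d1, d2):
    """Descriptor distance used to match a target keypoint against the
    dictionary-image keypoints."""
    return sum(x != y for x, y in zip(d1, d2))

image = [[(x * 7 + y * 13) % 256 for x in range(9)] for y in range(9)]
pairs = make_pairs(16, 2)
d = binary_descriptor(image, (4, 4), pairs)
```

Matching then reduces to finding the dictionary keypoint whose descriptor has the smallest Hamming distance to the target's, which is fast even for large databases.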
20160034784 | ABNORMALITY DETECTION APPARATUS, ABNORMALITY DETECTION METHOD, AND RECORDING MEDIUM STORING ABNORMALITY DETECTION PROGRAM - An abnormality detection apparatus, an abnormality detection method, and an abnormality detection program are provided. Each of the abnormality detection apparatus, the abnormality detection method, and the abnormality detection program extracts a target image to be monitored and a reference image, respectively, from target video to be monitored, detects an abnormality based on a difference between the target image to be monitored and the reference image, and displays an image indicating a difference between the target image to be monitored and the reference image on a monitor. Moreover, an abnormality detection system is provided including the abnormality detection apparatus, an imaging device that captures an image of a target to be monitored, and a monitor. | 02-04-2016 |
20160035080 | ADVANCED AIRCRAFT VISION SYSTEM UTILIZING MULTI-SENSOR GAIN SCHEDULING - An enhanced vision system is provided for an aircraft performing a landing maneuver. According to non-limiting embodiments, a processor onboard the aircraft receives data from sensors or systems onboard the aircraft and determines a position of the aircraft relative to a runway using the data. Responsive to this determination, the processor adjusts the gain of a first vision system and a second vision system. Images from the first vision system and the second vision system are merged and displayed to the pilot until the completion of the landing maneuver. | 02-04-2016 |
20160035081 | Methods and Systems for Object Detection using Laser Point Clouds - Methods and systems for object detection using laser point clouds are described herein. In an example implementation, a computing device may receive laser data indicative of a vehicle's environment from a sensor and generate a two dimensional (2D) range image that includes pixels indicative of respective positions of objects in the environment based on the laser data. The computing device may modify the 2D range image to provide values to given pixels that map to portions of objects in the environment lacking laser data, which may involve providing values to the given pixels based on the average value of neighboring pixels positioned next to the given pixels. Additionally, the computing device may determine normal vectors of sets of pixels that correspond to surfaces of objects in the environment based on the modified 2D range image and may use the normal vectors to provide object recognition information to systems of the vehicle. | 02-04-2016 |
20160035098 | STATE ESTIMATION APPARATUS, STATE ESTIMATION METHOD, AND INTEGRATED CIRCUIT - The purpose of the present invention is to provide a state estimation apparatus that appropriately estimates the internal state of an observation target by determining likelihoods from a plurality of observations. An observation obtaining unit of the state estimation system obtains, at given time intervals, a plurality of pieces of observation data from an observable event. The observation selecting unit selects a piece of observation data from the plurality of pieces of observation data obtained by the observation obtaining unit based on posterior probability distribution data obtained at a preceding time t−1. The likelihood obtaining unit obtains likelihood data based on the observation data selected by the observation selecting unit and predicted probability distribution data obtained through prediction processing using the posterior probability distribution data. The posterior probability distribution estimation unit estimates posterior probability distribution data representing a state of the observable event based on the predicted probability distribution data and the likelihood data obtained by the likelihood obtaining unit. The prior probability distribution output unit outputs prior probability distribution data based on the posterior probability distribution data estimated by the posterior probability distribution estimation unit as prior probability distribution data at the next time t+1. | 02-04-2016 |
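The predict/update cycle described in 20160035098 follows the standard Bayesian filtering recursion; a minimal discrete two-state sketch (the sticky transition model and the likelihood values are toy assumptions, not from the application) is:

```python
def predict(posterior, transition):
    """Prediction step: propagate the previous posterior through the
    state-transition model to get the predicted distribution."""
    states = range(len(posterior))
    return [sum(posterior[j] * transition[j][i] for j in states)
            for i in states]

def update(predicted, likelihood):
    """Correction step: weight the prediction by the likelihood of the
    selected observation, then renormalise to a posterior."""
    unnorm = [p * l for p, l in zip(predicted, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two hidden states; a sticky transition model and an observation
# whose likelihood strongly favours state 1.
posterior = [0.5, 0.5]
transition = [[0.9, 0.1], [0.1, 0.9]]
predicted = predict(posterior, transition)
posterior = update(predicted, likelihood=[0.2, 0.8])
```

The resulting posterior is then fed back as the prior distribution for the next time step, exactly as the prior probability distribution output unit does.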
20160035099 | DEPTH ESTIMATION APPARATUS, IMAGING DEVICE, AND DEPTH ESTIMATION METHOD - A depth estimation apparatus including: an imaging device which generates a first image signal and a second image signal by imaging an object at different phases; a storage unit configured to store model data defining a relationship between (i) lens blur and phase difference of the object in images and (ii) position of the object in the images in the depth axis; and a detecting unit configured to detect a position of the object in the depth axis from the first image signal and the second image signal, using the model data, wherein a phase difference between the first image signal and the second image signal is smaller than or equal to 15% in terms of a base line length. | 02-04-2016 |
20160039429 | VEHICLE DRIVER IDENTIFICATION - A user gesture is detected based on received data from one or more motion sensors. User gesture attributes are identified including at least one of hand vectoring, wrist articulation, and finger articulation from the gesture including respective movements of each of a plurality of a user's fingers. Based on the gesture attributes, a user and an action to be performed in a vehicle are identified. The action is performed in the vehicle to control at least one vehicle component based on the gesture. | 02-11-2016 |
20160042221 | DETERMINING LENS CHARACTERISTICS - Embodiments relating to determining characteristics of eyeglass lenses are disclosed. A head-mounted display device comprises a camera communicatively coupled to a computing device and including an optical axis having a center point. Light sources are configured to emit light rays toward the lens to produce lens glints. The light sources are in a light source plane that is spaced from a lens plane by an offset distance of between 8 mm and 12 mm. The light sources are either spaced vertically from a line perpendicular to the light source plane and extending through the center point by a distance between 13 mm and 53 mm, or spaced horizontally from the line by a distance of between 13 mm and 80 mm. Lens characterization program logic identifies an image location of each lens glint, and outputs an estimated lens shape model comprising the one or more lens characteristics. | 02-11-2016 |
20160042226 | SENTIMENT ANALYSIS IN A VIDEO CONFERENCE - In an approach to determine a sentiment of an attendee of a video conference, the computer receives a video of an attendee of a video conference and, then, determines, based, at least in part, on the video of the attendee, a first sentiment of the attendee. Furthermore, in the approach the computer receives an indication of an attendee activity on a first application and determines, based, in part, on the attendee activity whether the first sentiment of the attendee is related to the video conference. | 02-11-2016 |
20160042227 | SYSTEM AND METHOD FOR DETERMINING VIEW INVARIANT SPATIAL-TEMPORAL DESCRIPTORS FOR MOTION DETECTION AND ANALYSIS - A method and system for determining view invariant spatial-temporal descriptors encoding details of both motion dynamics and posture interactions that are highly representative and discriminative. The method and system determine posture interaction descriptors and motion dynamics descriptors by utilizing a cosine similarity approach, thereby rendering the descriptors view invariant. | 02-11-2016 |
20160042228 | SYSTEMS AND METHODS FOR RECOGNITION AND TRANSLATION OF GESTURES - A system for recognizing hand gestures, comprising a gesture database configured to store information related to a plurality of gestures; a recognition controller configured to capture data related to a hand gesture being performed by a user; a recognition module configured to: determine hand characteristic information from the captured data, determine finger characteristic information from the captured data, compare the hand and finger characteristic information to the information stored in the database to determine a most likely gesture, and output the determined most likely gesture. | 02-11-2016 |
20160042231 | DETECTION APPARATUS FOR DETECTING MOVEMENT OF OBJECT, DETECTION METHOD AND STORAGE MEDIUM - The average value calculation section acquires luminance information and color information from captured images continuously captured frame by frame by an image capture unit. The detection method determination section determines, based on the acquired luminance information and color information, whether to use either one or both of them to detect movement of the predetermined object. The motion detection section detects movement of the predetermined object using either one or both of the luminance information and color information, based on the determined result. | 02-11-2016 |
20160042242 | INFORMATION PROCESSOR, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM - An information processor includes: a similarity data generation portion generating, for a position within the search range, similarity data that represents the calculated similarity to the image in the reference block in association with the position within the search range; a similarity correction portion smoothing the similarity data in the spatial direction on the basis of the similarity data; a result evaluation portion detecting a position with a maximum similarity value in each piece of the smoothed similarity data; a depth image generation portion generating a depth image by associating the position of the subject in the depth direction with an image plane; and an output information generation section performing given information processing on the basis of the subject position in a three-dimensional space using the depth image and outputting the result of the information processing. | 02-11-2016 |
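The similarity-data pipeline in 20160042242 (compute similarity per candidate position, smooth spatially, take the extremum) can be sketched one-dimensionally; sum-of-absolute-differences as the similarity measure and the 3-tap box filter are assumptions for illustration (SAD is a dissimilarity, so the best match is the minimum rather than the maximum):

```python
def sad_curve(reference, row, search_range):
    """Similarity data: sum of absolute differences between the
    reference block (a 1-D strip here, for brevity) and each candidate
    position within the search range."""
    n = len(reference)
    return [sum(abs(reference[i] - row[s + i]) for i in range(n))
            for s in range(search_range)]

def smooth(curve):
    """Spatially smooth the similarity data (3-tap box filter) so an
    isolated noisy extremum does not win the match."""
    padded = [curve[0]] + curve + [curve[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3.0
            for i in range(len(curve))]

def best_match(curve):
    """Position with the best (lowest SAD) similarity value."""
    return min(range(len(curve)), key=curve.__getitem__)

ref = [10, 20, 30]
row = [90, 90, 10, 20, 30, 90, 90, 90]
disparity = best_match(smooth(sad_curve(ref, row, 6)))
```

The found position (here an exact match at offset 2) feeds the depth image generation: disparity maps to subject position along the depth axis.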
20160042243 | OBJECT MONITORING SYSTEM, OBJECT MONITORING METHOD, AND MONITORING TARGET EXTRACTION PROGRAM - First imaging means | 02-11-2016 |
20160042525 | APPARATUS AND METHOD FOR VISUALIZATION OF REGION OF INTEREST - There is provided an apparatus for visualizing a region of interest (ROI) in a Computer Aided Diagnosis (CAD) system. The apparatus includes: an image receiver configured to receive images; an ROI acquirer configured to acquire the ROI from a current image; and an ROI visualizer configured to, in response to acquisition of the ROI from the current image, output visualization information for visualizing the ROI acquired from the current image based on a change between the ROI acquired from the current image and an ROI acquired from a previous image. | 02-11-2016 |
20160042528 | METHOD AND APPARATUS FOR DETERMINING A SEQUENCE OF TRANSITIONS - An apparatus and a method of determining a sequence of transitions for a varying state of a system, wherein the system is described by a finite number n of states, and wherein a transition from a current state to a next state causes a cost in dependence of a distance that is dependent on a previous state, the current state, and the next state. The method comprises: combining each two consecutive states to generate super states, wherein the cost for a transition from a current super state to a next super state only depends on the current super state and the next super state; in an iterative process, applying a dynamic programming algorithm to the super states in order to determine a minimum accumulated cost for each varying super state and to determine a preceding super state that led to the minimum accumulated cost; and after a final iteration, determining a final super state with the minimum accumulated cost and retrieving the sequence of the preceding super states leading to the final super state with the minimum accumulated cost. | 02-11-2016 |
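The super-state construction in 20160042528 turns a second-order transition cost into a first-order one, after which an ordinary Viterbi-style dynamic program with backpointers applies; the toy direction-change cost below is a hypothetical distance function, not one from the application:

```python
from itertools import product

def min_cost_sequence(states, steps, triple_cost):
    """DP over 'super states' (pairs of consecutive states).

    triple_cost(prev, cur, nxt) depends on three original states; on
    super states (prev, cur) -> (cur, nxt) it depends on only the two
    super states, so a first-order recursion finds the minimum
    accumulated cost and the preceding super state that led to it.
    """
    supers = list(product(states, repeat=2))
    cost = {s: 0.0 for s in supers}
    back = []
    for _ in range(steps):
        new_cost, pointers = {}, {}
        for cur, nxt in supers:
            # only super states ending where this one starts may precede it
            candidates = [p for p in supers if p[1] == cur]
            best = min(candidates,
                       key=lambda p: cost[p] + triple_cost(p[0], cur, nxt))
            new_cost[(cur, nxt)] = cost[best] + triple_cost(best[0], cur, nxt)
            pointers[(cur, nxt)] = best
        cost = new_cost
        back.append(pointers)
    # after the final iteration, take the cheapest final super state and
    # retrieve the sequence of preceding super states leading to it
    final = min(cost, key=cost.get)
    path = [final]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return cost[final], list(reversed(path))

# toy second-order cost: penalise changes of direction
cost, path = min_cost_sequence(
    [0, 1], steps=3,
    triple_cost=lambda a, b, c: abs((c - b) - (b - a)))
```

With this cost, a constant sequence incurs zero penalty, so the DP returns an accumulated cost of zero along the all-(0, 0) super-state path.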
20160042533 | X-RAY IMAGING SYSTEM AND IMAGE PROCESSING DEVICE - An X-ray imaging system includes an X-ray imaging device and an image processing device including a reconstruction unit and an estimation unit. The X-ray imaging device uses a Talbot or Talbot-Lau interferometer including gratings disposed in a line. The X-ray imaging device obtains sets of moire fringe images by fringe scanning multiple times between which arrangement of the gratings is changed. In the fringe scanning, one of the gratings is moved relatively to the remaining grating. The reconstruction unit generates, on the basis of the sets, a reconstructed image which is a differential phase image, an X-ray absorption image and/or a small-angle scattering image. The estimation unit estimates, on the basis of the reconstructed image, a relative position of the moved grating from a reference position of the grating at each imaging in the fringe scanning. | 02-11-2016 |
20160044206 | Information Conveying Method and System - An information conveying method is implemented by an information conveying system coupled to a first electronic device associated with a service provider and a second electronic device associated with a user. The system is programmed to: receive a reference image; create a data packet based on to-be-conveyed information associated with the service provider, and link the data packet to the reference image; upon receiving a captured image from the second electronic device, generate a characteristic code of the captured image; determine whether the captured image matches the reference image; and when the determination made is affirmative, transmit the data packet to the second electronic device. | 02-11-2016 |
20160048721 | SYSTEM AND METHOD FOR ACCURATELY ANALYZING SENSED DATA - A system for analyzing sensed data. A triggering mechanism is responsive to the presence of a target. A sensor acquires sensed data of the target, for example, an image. A processor analyzes the sensed data to detect the target. The signals generated by the triggering mechanism and the sensor are reconciled. In the reconciliation of the signals, when a pair of signals each indicate the presence of the target within a predefined time period, a target data set corresponding to the pair of signals is generated. When the presence of the target is indicated by only one of the triggering mechanism and the sensor, it is determined whether the detection of the target or the failure to detect the target is more reliable. If the signal indicating detection of the target is determined to be more reliable, a target data set is generated. A method for analyzing sensed data is also disclosed. | 02-18-2016 |
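The pairing step of the reconciliation in 20160048721 can be sketched as a windowed match between trigger firings and sensor detections; the one-second window is an assumption, and the reliability rule for the unpaired signals is left out:

```python
def reconcile(trigger_times, detection_times, window=1.0):
    """Pair trigger firings with sensor detections that fall within
    `window` seconds of each other; unpaired signals on either side are
    returned for a separate reliability decision."""
    detections = sorted(detection_times)
    paired, unmatched_triggers = [], []
    for t in sorted(trigger_times):
        match = next((d for d in detections if abs(d - t) <= window), None)
        if match is not None:
            detections.remove(match)
            paired.append((t, match))      # -> emit a target data set
        else:
            unmatched_triggers.append(t)   # trigger only
    return paired, unmatched_triggers, detections  # leftover = sensor only

pairs, trig_only, sens_only = reconcile([1.0, 5.0], [1.3, 9.0])
```

Each entry in `pairs` yields a target data set immediately; entries in `trig_only` and `sens_only` go through the which-signal-is-more-reliable decision described in the abstract.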
20160048726 | Three-Dimensional Hand Tracking Using Depth Sequences - In the field of Human-computer interaction (HCI), i.e., the study of the interfaces between people (i.e., users) and computers, understanding the intentions and desires of how the user wishes to interact with the computer is a very important problem. The ability to understand human gestures, and, in particular, hand gestures, as they relate to HCI, is a very important aspect in understanding the intentions and desires of the user in a wide variety of applications. In this disclosure, a novel system and method for three-dimensional hand tracking using depth sequences is described. Some of the major contributions of the hand tracking system described herein include: 1.) a robust hand detector that is invariant to scene background changes; 2.) a bi-directional tracking algorithm that prevents detected hands from always drifting closer to the front of the scene (i.e., forward along the z-axis of the scene); and 3.) various hand verification heuristics. | 02-18-2016 |
20160048738 | Method and System for Recognizing User Activity Type - The present invention discloses a method and system for recognizing a user activity type, where the method includes: collecting an image of a location in which a user is located; extracting, from the image, characteristic data of an environment in which the user is located and characteristic data of the user; and obtaining, by recognition, an activity type of the user by using an image recognition model related to an activity type or an image library related to an activity type and the characteristic data. | 02-18-2016 |
20160048740 | SYSTEM AND METHOD FOR PROCESSING IMAGE DATA - The PLACEMETER PLATFORM APPARATUSES, METHODS AND SYSTEMS (“PM-PLATFORM”) transform sensor data and/or feedback via PM-PLATFORM components into notifications, updates, coupons, promotions, transactions and/or activities. In one implementation, the PM-PLATFORM comprises a sensor, a memory, and a processor disposed in communication with the sensor and memory, the memory storing processor-issuable instructions to receive raw environment data at a sensor for at least two discrete points in time, analyze the received raw environment data locally to determine an at least one occupancy metric, store the occupancy metric, receive further raw environment data for a further point in time, process the further raw environment data to determine a further occupancy metric, compare the further occupancy metric to at least one previous occupancy metric, and issue a notification based on the comparison. | 02-18-2016 |
20160048953 | TARGETS, FIXTURES, AND WORKFLOWS FOR CALIBRATING AN ENDOSCOPIC CAMERA - The present disclosure relates to calibration assemblies and methods for use with an imaging system, such as an endoscopic imaging system. A calibration assembly includes: an interface for constraining engagement with an endoscopic imaging system; a target coupled with the interface so as to be within the field of view of the imaging system, the target including a plurality of markers having calibration features that include identification features; and a processor configured to identify from first and second images obtained at first and second relative spatial arrangements between the imaging system and the target, respectively, at least some of the markers from the identification features, and using the identified markers and calibration feature positions within the images to generate calibration data. | 02-18-2016 |
20160048967 | METHOD AND DEVICE FOR DETERMINING A DISTANCE BETWEEN TWO OPTICAL BOUNDARY SURFACES WHICH ARE SPACED APART FROM EACH OTHER ALONG A FIRST DIRECTION - A method is provided for determining the distance between two optical boundary surfaces spaced apart from each other in a first direction. A first image is ascertained wherein the plane in which the pattern is acquired coincides with a first of the two optical boundary surfaces or has the smallest distance to the first optical boundary surface in the first direction. A position of the first image in the first direction is determined. A second image is ascertained wherein the plane in which the pattern is acquired coincides with a second of the two optical boundary surfaces or has the smallest distance to the second optical boundary surface in the first direction. The position of the second image in the first direction is determined. The distance is calculated by means of the determined positions of the first and second images. | 02-18-2016 |
20160048975 | ASSEMBLY COMPRISING A RADAR AND AN IMAGING ELEMENT - An assembly comprising a radar and a camera for both deriving data relating to a golf ball and a golf club at launch, radar data relating to the ball and club being illustrated in an image provided by the camera. The data illustrated may be trajectories of the ball/club/club head, directions and/or angles, such as an angle of a face of the golf club striking the ball, the lie angle of the club head or the like. An assembly of this type may also be used for defining an angle or direction in the image and rotating e.g. an image of the golfer to have the determined direction or angle coincide with a predetermined angle/direction in order to be able to compare different images. | 02-18-2016 |
20160048976 | CAMERA APPARATUS AND METHOD FOR TRACKING OBJECT IN THE CAMERA APPARATUS - Disclosed are a camera apparatus capable of tracking a target object based on motion of the camera sensed by a motion sensor and a method for tracking an object in the camera apparatus. The method includes obtaining, by an electronic device including a first sensor and a second sensor, one or more images corresponding to at least one object using the first sensor, displaying the one or more images via a display operatively coupled with the electronic device, obtaining motion data corresponding to movement of at least part of the electronic device identified in relation with obtaining the one or more images, tracking, from a corresponding displayed image of the one or more images, a position corresponding to the at least one object based at least in part on the motion data, and presenting, via the display, at least one portion of information with respect to the position corresponding to the at least one object. | 02-18-2016 |
20160048977 | Method and Device for Detecting Face, and Non-Transitory Computer-Readable Recording Medium for Executing the Method - In the present disclosure, a plurality of frames of input images sequentially received for a predetermined time interval is obtained, and a face detecting operation is performed on a first frame if a full detecting mode is implemented. If a face is detected from a specific region of the first frame during the face detecting operation, a face tracking mode is implemented, a second frame is divided to produce the divided input image portions of the second frame, and the face tracking operation is performed on a surrounding region of the specific region of the divided input image portions of the second frame that corresponds to the specific region in the first frame. If the face is not detected in the face tracking mode, a partial detecting mode is implemented, and the face detecting operation is performed on image portions resized on divided input image portions of a third frame to which a specific region of the third frame corresponding to the specific region of the first frame belongs. | 02-18-2016 |
20160055373 | IMAGE PROCESSING SYSTEM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing apparatus including: a communication unit receiving first feature amounts, which include coordinates of feature points in an image acquired by another image processing apparatus, and position data showing a position in the image of a pointer that points at a location in a real space; an input image acquisition unit acquiring an input image by image pickup of the real space; a feature amount generating unit generating second feature amounts including coordinates of feature points set in the acquired input image; a specifying unit comparing the first feature amounts and the second feature amounts and specifying, based on a comparison result and the position data, a position in the input image of the location in the real space being pointed at by the pointer; and an output image generating unit generating an output image displaying an indicator indicating the specified position. | 02-25-2016 |
20160055389 | VIDEO PROCESSING APPARATUS, VIDEO PROCESSING METHOD, AND RECORDING MEDIUM - A video processing apparatus includes: a first detection unit configured to detect a moving object from a movie; a second detection unit configured to detect an object having a predetermined shape from the movie; an extraction unit configured to extract a partial region of a region in which the second detection unit has detected the object having the predetermined shape in the movie; and a discrimination unit configured to discriminate whether the object detected by the second detection unit is a certain object depending on a ratio of a size of an overlapping region to a size of an extracted region extracted by the extraction unit, the overlapping region being a region where a region in which the first detection unit has detected the moving object in the movie and the extracted region overlap with each other. | 02-25-2016 |
20160055645 | PEOPLE COUNTING DEVICE AND PEOPLE COUNTING METHOD - A people counting device includes an edge extracting unit configured to extract an edge from a planar image of a target area, and a circle candidate detecting unit configured to detect a circle candidate included in the planar image based on the edge extracted by the edge extracting unit. The people counting device further includes a person determining unit configured to calculate a brightness gradient for each edge pixel constituting an edge of each circle candidate detected by the circle candidate detecting unit and determine that a circle candidate whose uniformity of brightness gradients for the edge pixels of the circle candidate is higher than a reference is a person's head portion, and a people counting unit configured to count the number of circle candidates determined to be a person's head portion by the person determining unit. | 02-25-2016 |
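The head test in the people-counting abstract above hinges on one measurable quantity: how uniformly the brightness gradients of a circle candidate's edge pixels point along the radial direction. This is an illustrative sketch only, not the patented implementation; the `(pixel, gradient)` candidate representation and the `0.8` reference value are assumptions:

```python
import math

def gradient_uniformity(center, edge_pixels):
    """Mean alignment between each edge pixel's brightness gradient and the
    radial direction from the circle center (1.0 = perfectly uniform, as a
    head-like dome lit from above would produce)."""
    cx, cy = center
    total = 0.0
    for (x, y), (gx, gy) in edge_pixels:
        rx, ry = x - cx, y - cy
        rn, gn = math.hypot(rx, ry), math.hypot(gx, gy)
        if rn == 0 or gn == 0:
            continue
        total += (rx * gx + ry * gy) / (rn * gn)  # cosine similarity
    return total / len(edge_pixels)

def count_heads(circle_candidates, reference=0.8):
    """Count candidates whose gradient uniformity exceeds the reference."""
    return sum(1 for center, pixels in circle_candidates
               if gradient_uniformity(center, pixels) > reference)
```

A circle whose gradients all point outward scores 1.0; tangential (random-edge) gradients score near 0, so only dome-like candidates are counted.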
20160055647 | MEDICAL IMAGE PROCESSING APPARATUS AND METHOD FOR MEDICAL IMAGE PROCESSING - A medical image processing apparatus according to an embodiment includes processing circuitry. The processing circuitry obtains a plurality of medical image groups in which respective motions of a part inside a subject have been photographed in time series and executes certain processing on the acquired medical image groups. The processing circuitry analyzes the motions in the respective medical image groups. The processing circuitry generates a medical image in which the motions in the respective medical image groups substantially match with each other based on the analyzed motions. | 02-25-2016 |
20160055648 | NON-UNIFORM CURVE SAMPLING METHOD FOR OBJECT TRACKING - A method of tracking an object in a plurality of image frames includes receiving an initial contour associated with an edge of the object in a first one of the image frames. A plurality of first measurement points distributed non-uniformly along the initial contour are determined. The first measurement points are biased to relatively high-information portions of the initial contour. A set of subsequent contours is estimated from the initial contour in a second image frame. An identical plurality of second measurement points is determined along each of the set of estimated subsequent contours in the second image frame using the same non-uniform distribution as the first measurement points in the first image frame. The method selects at least one contour of the set of estimated subsequent contours using a confidence measure determined from the second measurement points as distributed along the selected subsequent contour. | 02-25-2016 |
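One plausible way to bias measurement points toward high-information portions of a contour, as the abstract above describes, is systematic resampling over a per-point information score. The `(point, info)` contour representation is a hypothetical stand-in for whatever edge-strength or curvature measure the patent uses:

```python
def nonuniform_samples(contour_info, n):
    """Pick n measurement points along a contour, biased toward
    high-information portions.  contour_info: list of (point, info) pairs,
    where info scores the local information content (e.g. edge strength).
    Systematic resampling: one sample per equal slice of accumulated info."""
    total = sum(info for _, info in contour_info)
    quota = total / n
    picked, acc, target = [], 0.0, quota / 2  # first sample mid-slice
    for point, info in contour_info:
        acc += info
        while acc >= target and len(picked) < n:
            picked.append(point)
            target += quota
    return picked
```

A point carrying much of the contour's information budget is sampled more than once, which matches the abstract's bias toward informative portions.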
20160058407 | MEDICAL IMAGE PROCESSING APPARATUS AND MEDICAL IMAGE PROCESSING METHOD - A medical image processing apparatus includes a processing circuitry. The processing circuitry obtains volume data including a tubular organ. The processing circuitry extracts the tubular organ from the volume data. The processing circuitry calculates each of a plurality of feature quantities at a plurality of positions in the tubular organ. The processing circuitry calculates a graph indicating a distribution of the plurality of feature quantities at the plurality of positions. The processing circuitry displays the graph and the tubular organ on a display, the displayed graph being aligned with the displayed tubular organ. | 03-03-2016 |
20160063303 | METHOD AND APPARATUS FOR EYE GAZE TRACKING - The invention relates to a method and apparatus of an eye gaze tracking system. In particular, the present invention relates to a method and apparatus of an eye gaze tracking system using a generic camera under a normal environment, featuring low cost and simple operation. The present invention also relates to a method and apparatus of an accurate eye gaze tracking system that can tolerate large illumination changes. | 03-03-2016 |
20160063312 | IMAGE ANALYSIS APPARATUS, IMAGE ANALYSIS METHOD, AND IMAGE ANALYSIS PROGRAM - An image analysis apparatus that analyzes a skin condition from a video of the face of a subject captured with an imaging part includes a tracking part configured to track the amount of changes of multiple tracking points arranged in advance in an analysis region of the face based on a change in the expression of the face included in the video, and obtain the compression ratio of the skin in the analysis region based on the amount of changes, and a skin condition analysis part configured to analyze the skin condition of the subject based on the compression ratio obtained by the tracking part. | 03-03-2016 |
20160063319 | METHOD AND APPARATUS FOR TRACKING GAZE - A method for tracking a gaze includes determining a position of a first center point of a cornea by using at least two lighting reflection points detected from an eyeball area of a first face image of a user and calculating a first vector connecting at least two first image feature points detected from the first face image to the position of the first center point of the cornea. A position of a second center point of the cornea is determined using the first vector and a position of the feature point detected from the second face image. A second vector is determined using the position of the second center point of the cornea and a position of a center point of a pupil. The gaze of the user is tracked by using the second vector. An apparatus for tracking a gaze is also disclosed. | 03-03-2016 |
20160063330 | Methods and Systems for Vision-Based Motion Estimation - Aspects of the present invention are related to methods and systems for vision-based computation of ego-motion. | 03-03-2016 |
20160063341 | LOCATION CALCULATION SYSTEM AND NON-TRANSITORY COMPUTER-READABLE MEDIUM - A location calculation system that includes circuitry that stores vector information regarding a plurality of first vectors each connecting a specific point-relevant location, that is associated with each of a plurality of specific points set along a road, with a location of a predetermined feature, in relation to position information regarding each position of the plurality of specific points; stores predetermined feature location information regarding the location of the predetermined feature; obtains image data ahead of a moving body; calculates a location of the predetermined feature in an image of the image data as obtained by the circuitry; calculates a location of the moving body based on the location of the predetermined feature as calculated by the circuitry and the vector information; and outputs the location of the moving body as calculated by the circuitry. | 03-03-2016 |
20160063344 | LONG-TERM STATIC OBJECT DETECTION - Software for static object detection that performs the following operations: (i) detecting an object that is present in at least one image of a set of images, wherein the set of images correspond to a time period; (ii) identifying a set of corner points for the detected object; (iii) tracking the object's presence in the set of images over the time period, wherein the object's presence is determined by matching the set of images to a template generated based on the identified corner points; and (iv) identifying the object as a static object when an amount of time corresponding to the object's presence in the set of images is greater than a predefined threshold. | 03-03-2016 |
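Operation (iv) of the static-object abstract above reduces to a time-threshold check on how long an object's template kept matching. A minimal sketch, assuming the corner-point template matching of operations (i)-(iii) has already produced per-object lists of matched frame indices:

```python
def find_static_objects(presence, fps, min_seconds):
    """presence: dict mapping object id -> list of frame indices in which
    the object's corner-point template matched.  An object is flagged as
    static when its matched presence spans more than min_seconds."""
    static = []
    for obj_id, frames in presence.items():
        duration = (max(frames) - min(frames)) / fps
        if duration > min_seconds:
            static.append(obj_id)
    return static
```

At 30 fps, an object matched across 300 frames has been present for 10 seconds and would exceed a 5-second threshold.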
20160063589 | APPARATUS AND METHOD FOR SMART PHOTOGRAPHY - A method and a system for smart photography are disclosed. The system may include a retrieval module comprising one or more processors configured to determine a description of the item from the identity of the item. The system may include a sensor configured to capture an image including an image of the item. The system may include an image processing module comprising the one or more processors configured to identify the image of the item in the captured image using the description of the item and configured to generate from the captured image a generated image comprising the image of the item and an identification indicator of the image of the item. The image may be identified based on comparing the image with descriptions of other identified items. Optionally, the system may include an identification device configured to determine an identity of an item. | 03-03-2016 |
20160063702 | MEDICAL IMAGE PROCESSING APPARATUS, MEDICAL IMAGE PROCESSING METHOD AND MEDICAL IMAGE DEVICE - A medical image processing apparatus according to an embodiment includes an estimation circuitry and a tracking circuitry. The estimation circuitry is configured to estimate the activity of the myocardium across a plurality of images at different time phases from a group of images where a plurality of images containing a myocardium are chronologically arranged. The tracking circuitry is configured to set a search range for tracking the myocardium in the group according to the activity of the myocardium and perform the tracking. | 03-03-2016 |
20160063708 | A SYSTEM AND METHOD FOR OPTIMIZING FIDUCIAL MARKER AND CAMERA POSITIONS/ORIENTATIONS - A method for optimizing fiducial marker and camera positions/orientations, which simulates camera and fiducial positions and a pose estimation algorithm to find the best possible marker/camera placement, comprises the steps of: acquiring mesh data representing possible camera positions and feasible orientation boundaries of cameras in the environment of the tracked object; acquiring mesh data representing possible active marker positions and feasible orientation placements of markers on a tracked object; acquiring pose data representing possible poses of the tracked object under working conditions; initializing the control parameter for camera placement; creating initial solution strings for camera placement; solving the marker placement problem for the current camera placement; evaluating the quality of the current LED and camera placement, taking pose coverage, pose accuracy, number of placed markers, number of placed cameras, etc. into account; and determining if a stopping criterion is satisfied. | 03-03-2016 |
20160063710 | THREE-DIMENSIONAL OBJECT RECOGNITION APPARATUS, THREE-DIMENSIONAL OBJECT RECOGNITION METHOD, AND VEHICLE - A three-dimensional object recognition apparatus according to the invention includes: an omnidirectional sensor that measures surrounding objects in all directions, and generates positional information capable of specifying positions of the objects as a result of the measurement; a three-dimensional measurement device that measures an object within a certain measurement range among the surrounding objects, and generates three-dimensional shape information capable of specifying a three-dimensional shape of the object as a result of the measurement; and a control unit that updates a shape to be recognized as the three-dimensional shape of the object based on the three-dimensional shape information generated by the three-dimensional measurement device when the object is within the measurement range of the three-dimensional measurement device. The control unit tracks the object based on the positional information generated by the omnidirectional sensor, even after the object has moved out of the measurement range of the three-dimensional measurement device. | 03-03-2016 |
20160063719 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR DISPARITY ESTIMATION OF FOREGROUND OBJECTS IN IMAGES - In an example embodiment, method, apparatus and computer program product are provided. The method includes facilitating receipt of first image (I | 03-03-2016 |
20160063727 | SYSTEMS AND METHODS FOR IMAGE SCANNING - A method for image scanning by an electronic device is described. The method includes obtaining an image pyramid including a plurality of scale levels and at least a first pyramid level for a frame. The method also includes providing a scanning window. The method further includes scanning at least two of the plurality of scale levels of the frame at a plurality of scanning window locations. A number of scanning window locations is equal for each scale level of the at least two scale levels of the first pyramid level. | 03-03-2016 |
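The image-scanning abstract above requires the number of scanning-window locations to be equal for each scanned scale level of a pyramid level. One way to satisfy that is to compute a single location grid and reuse it, scaled, at every level; a sketch under that assumption (the window size, step, and scale handling here are illustrative, not the disclosed method):

```python
def scanning_locations(width, height, win, step):
    """Top-left corners of a scanning window slid over one scale level."""
    return [(x, y)
            for y in range(0, height - win + 1, step)
            for x in range(0, width - win + 1, step)]

def scan_pyramid(base_w, base_h, scales, win=24, step=8):
    """Scan several scale levels of one pyramid level.  The base level's
    location grid is reused (rescaled) at every scale, so the number of
    scanning-window locations is equal across scale levels."""
    locations = scanning_locations(base_w, base_h, win, step)
    return {s: [(int(x * s), int(y * s)) for (x, y) in locations]
            for s in scales}
```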
20160063728 | Intelligent Nanny Assistance - Methods and systems of an intelligent nanny assistant are described. A method may involve determining whether a subject of concern is approaching a predefined area of an environment. The method may also involve controlling one or more devices in the environment to provide information in a way that attracts the subject of concern to move away from the predefined area in response to a determination that the subject of concern is approaching the predefined area. | 03-03-2016 |
20160063730 | POSTURE DETECTION SYSTEM WITH RETROREFLECTOR COMPRISING A WIRE-MESHING - The general field of the invention is that of systems for detecting the posture of a moving object. The system may include a fixed electro-optical device of known orientation comprising an emission source, an image sensor and image analysis means, and an optical assembly comprising an optical retroreflector arranged on the moving object. The optical retroreflector of the system is an optical sphere of variable index comprising a transparent hemisphere and a reflecting hemisphere. It comprises a meshing comprising at least three opaque wires, of small thickness and known geometrical arrangement. The image of the retroreflector lit by the source forms a reflection on the image sensor, said reflection comprising at least the two images of the shadow of one of the three wires. The image analysis means detect the orientation of the leak line given by said images, said orientation being representative of one of the parameters of the posture of the moving object. | 03-03-2016 |
20160070989 | System and method for pet face detection - Pet faces can be detected within an image using a cascaded adaboost classifier trained to identify pet faces with a high detection rate and a high false positive rate. The identified potential windows are further processed by a number of category aware cascade classifiers, which are each trained to identify pet faces of a particular category with a high detection rate and low false positive rate. The potential pet faces identified by the category aware classifiers may be processed to verify whether a pet's face is present or not. | 03-10-2016 |
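The pet-face abstract above describes a two-tier structure: a permissive first cascade (high detection rate, high false-positive rate) followed by category-aware cascades that each reject non-faces with a low false-positive rate. That control flow can be sketched with classifier callables; the window representation and thresholds below are hypothetical:

```python
def cascade_detect(windows, first_stage, category_stages):
    """Two-tier cascade: a permissive first classifier keeps nearly every
    true face (tolerating many false positives); a window survives if any
    category-aware second-stage classifier then accepts it."""
    candidates = [w for w in windows if first_stage(w)]
    return [w for w in candidates
            if any(stage(w) for stage in category_stages)]
```

With a loose first threshold and strict per-category thresholds, most non-faces pass stage one but are pruned by stage two.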
20160071275 | SYSTEMS AND METHODS FOR LIVENESS ANALYSIS - In a system for determining liveness of an image presented for authentication, a reference signal is rendered on a display, and a reflection of the rendered signal from a target is analyzed to determine liveness thereof. The analysis includes spatially and/or temporally band pass filtering the reflected signal, and determining RGB values for each frame in the reflected signal and/or each pixel in one or more frames of the reflected signal. Frame level and/or pixel-by-pixel correlations between the determined RGB values and the rendered signal are computed, and a determination of whether an image presented is live or fake is made using either or both correlations. | 03-10-2016 |
20160071276 | SYSTEM AND METHOD FOR DETERMINING GEO-LOCATION(S) IN IMAGES - Determining GPS coordinates of some image point(s) positions in at least two images using a processor configured by program instructions. Receiving position information of some of the positions where an image capture device captured an image. Determining geometry by triangulating various registration objects in the images. Determining GPS coordinates of the image point(s) positions in at least one of the images. Saving GPS coordinates to memory. This system and method may be used to determine GPS coordinates of objects in an image. | 03-10-2016 |
20160071277 | OBJECT RETRIEVAL APPARATUS AND OBJECT RETRIEVAL METHOD - An object retrieval apparatus includes a storage and a retrieval. The storage stores first to N-th space index information relating to X | 03-10-2016 |
20160071280 | HARDWARE ARCHITECTURE FOR REAL-TIME EXTRACTION OF MAXIMALLY STABLE EXTREMAL REGIONS (MSERs) - Hardware architecture for real-time extraction of maximally stable extremal regions (MSERs) is disclosed. The architecture includes a communication interface and processing circuitry that are configured in hardware to receive a data stream of an intensity image in real-time and provide labels for image regions within the intensity image that match a given intensity threshold. The communication interface and processing circuitry are also configured in hardware to find extremal regions within the intensity image based upon the labels and to determine MSER ellipses parameters based upon the extremal regions and MSER criteria. In at least one embodiment, the MSER criteria include minimum and maximum MSER areas, and an acceptable growth rate value for MSER area. In another embodiment, the MSER criteria include a nested MSER tolerance value. | 03-10-2016 |
20160071285 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing device, apparatus, method and non-transitory computer-readable storage medium are disclosed. An information processing device may include a memory storing instructions, and at least one processor configured to process the instructions to generate a comparison image by transforming a reference image, associate the comparison image with a class variable representing an object included in the reference image, calculate a degree of difference between an input patch which is an image representing a sub-region of an input image and a comparison patch which is an image representing a sub-region of the comparison image, estimate a displacement vector between the input patch and the comparison patch, calculate a first degree of reliability corresponding to the displacement vector and the class variable on the basis of the displacement vector and the degree of difference, calculate a second degree of reliability for each comparison patch on the basis of the first degree of reliability, and identify the object is represented by the class variable associated with the comparison image including the comparison patch whose second degree of reliability is greater than a predetermined threshold value, as a recognition target. | 03-10-2016 |
20160071287 | SYSTEM AND METHOD OF TRACKING AN OBJECT - The invention relates to detecting and tracking objects in a sequence of images. In particular, a method, software and system for tracking a non-rigid object in a plurality of images. Initially, a first set of parameters for a parameterised shape model are generated ( | 03-10-2016 |
20160071428 | SCORING DEVICE AND SCORING METHOD - A scoring device has an acquisition unit that acquires image data in which a singer is photographed, a detector that detects a feature associated with an expression or a facial motion during singing as a facial feature of the singer from the image data acquired by the acquisition unit, a calculator that calculates a score for singing action of the singer based on the feature detected by the detector, and an output unit that outputs the score. | 03-10-2016 |
20160078272 | METHOD AND SYSTEM FOR DISMOUNT DETECTION IN LOW-RESOLUTION UAV IMAGERY - A method for dismount detection in low-resolution UAV imagery, comprising providing an input image, processing a grayscale distribution of the input image, determining a rough classification in the input image based on the grayscale distribution, determining the optimal parameters based on the rough classification, estimating one or more potential dismount locations, applying an area filter to the one or more potential dismount locations, removing undesired locations from the one or more potential dismount locations, applying one or more secondary filters to the resulting one or more potential dismount locations, assigning a probability to the one or more potential dismount locations, and assessing desirability of the one or more potential dismount locations. | 03-17-2016 |
20160078273 | GLOBAL-SCALE DAMAGE DETECTION USING SATELLITE IMAGERY - A system for performing global-scale damage detection using satellite imagery, comprising a damage detection server that receives and analyzes image data to identify objects within an image via a curated computational method, and a curation interface that enables a user to curate image information for use in object identification; and a method for performing global-scale damage detection via the curated computational method. | 03-17-2016 |
20160078287 | METHOD AND SYSTEM OF TEMPORAL SEGMENTATION FOR GESTURE ANALYSIS - A method, system and non-transitory computer readable medium for recognizing gestures are disclosed. The method includes capturing at least one three-dimensional (3D) video stream of data on a subject; extracting a time-series of skeletal data from the at least one 3D video stream of data; isolating a plurality of points of abrupt content change called temporal cuts, the plurality of temporal cuts defining a set of non-overlapping adjacent segments partitioning the time-series of skeletal data; identifying, among the plurality of temporal cuts, temporal cuts of the time-series of skeletal data having a positive acceleration; and classifying each of the one or more pairs of consecutive cuts with the positive acceleration as a gesture boundary. | 03-17-2016 |
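The cut-then-filter pipeline in the gesture-segmentation abstract above can be sketched on a 1-D skeletal signal: isolate abrupt content changes as temporal cuts, then keep only cuts with positive acceleration as gesture boundaries. This is an illustrative reduction (the real method operates on full skeletal time-series, and the threshold is an assumption):

```python
def temporal_cuts(series, change_threshold):
    """Indices where the frame-to-frame content change is abrupt."""
    return [i for i in range(1, len(series))
            if abs(series[i] - series[i - 1]) > change_threshold]

def gesture_boundaries(series, cuts):
    """Keep only cuts where the local acceleration (second difference of
    the skeletal signal) is positive, per the abstract."""
    return [i for i in cuts
            if 0 < i < len(series) - 1
            and (series[i + 1] - 2 * series[i] + series[i - 1]) > 0]
```

A downward spike (deceleration into rest) produces a cut with positive second difference at its trough, which is where a gesture boundary lands.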
20160078288 | MOVING BODY POSITION ESTIMATING DEVICE, MOVING BODY POSITION ESTIMATING METHOD, AND NON-TRANSITORY RECORDING MEDIUM - A moving body position estimating device includes an acquisition unit and a processor. The acquisition unit acquires information including first data relating to a first image and second data relating to a second image including a difference between the first image and the second image accompanying a movement of a moving body. The processor implements estimating, based on the information, a direction of a rotation accompanying the movement; detecting, based on the direction of the rotation, first feature points inside a first region inside the first image and second feature points inside a second region inside the first image; and determining first corresponding points inside a third region inside the second image and estimating a change of a position of the moving body based on each of the first feature points and each of the first corresponding points. | 03-17-2016 |
20160078290 | SCANNER GESTURE RECOGNITION - A scanner having an integrated camera is used to capture gestures made in a field of view of the camera. The captured gestures are translated to scanner commands recognized by the scanner. The scanner executes the recognized commands. | 03-17-2016 |
20160078295 | ATTRIBUTE-BASED ALERT RANKING FOR ALERT ADJUDICATION - Alerts to object behaviors are prioritized for adjudication as a function of relative values of abandonment, foregroundness and staticness attributes. The attributes are determined from feature data extracted from video frame image data. The abandonment attribute indicates a level of likelihood of abandonment of an object. The foregroundness attribute quantifies a level of separation of foreground image data of the object from a background model of the image scene. The staticness attribute quantifies a level of stability of dimensions of a bounding box of the object over time. Alerts are also prioritized according to an importance or relevance value that is learned and generated from the relative abandonment, foregroundness and staticness attribute strengths. | 03-17-2016 |
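The alert-ranking abstract above learns an importance value from the relative abandonment, foregroundness and staticness attribute strengths. As a stand-in for that learned model, a simple weighted sum makes the prioritization concrete; the attribute dictionaries and weights are illustrative assumptions:

```python
def rank_alerts(alerts, weights):
    """Prioritize alerts for adjudication.  Each alert carries abandonment,
    foregroundness and staticness strengths in [0, 1]; a weighted sum
    stands in for the learned relevance value described in the abstract."""
    def relevance(alert):
        return sum(weights[k] * alert[k]
                   for k in ("abandonment", "foregroundness", "staticness"))
    return sorted(alerts, key=relevance, reverse=True)
```

An alert with uniformly strong attributes outranks one with weak attributes regardless of arrival order.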
20160078296 | IMAGE PICKUP APPARATUS, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD - An information processing apparatus comprises: an object detection unit to detect an object included in a frame image based on a feature amount of the frame image, and generate and output object information concerning the detected object; an event detection unit to detect an event of the object based on the object information output by the object detection unit, and generate and output event concern information concerning the detected event of the object; and a transmission unit to transmit the frame image, the event concern information concerning the frame image output by the event detection unit, and time information concerning the frame image, in association with one another. Thus, even in a case where event detection timing and event occurrence timing are different from each other, it is possible to perform a display by which a user can easily confirm the event. | 03-17-2016 |
20160078301 | IMAGING PROCESSING SYSTEM AND METHOD AND MANAGEMENT APPARATUS - An imaging processing system includes one or more image capturing apparatuses, a reading unit configured to read biometric information from an authentication object person, a similarity calculation unit configured to calculate similarity based on a result of comparing biometric information read by the reading unit with true biometric information of the authentication object person, an authentication unit configured to perform authentication based on a comparison between the similarity calculated by the similarity calculation unit and a preliminarily set threshold, and a control unit configured to control, if the authentication performed by the authentication unit is successful, imaging processing, which is performed by the image capturing apparatus, based on the similarity calculated by the similarity calculation unit. | 03-17-2016 |
20160078318 | TERMINAL DEVICE, INFORMATION PROCESSING DEVICE, OBJECT IDENTIFYING METHOD, PROGRAM, AND OBJECT IDENTIFYING SYSTEM - A device, apparatus, and method provide logic for processing information. In one implementation, a device may include an image acquisition unit configured to acquire an image, and a transmission unit configured to transmit information associated with the image to an information processing apparatus, such as a server. The server may be associated with a first feature quantity dictionary. The device also may include a receiving unit configured to receive a second feature quantity dictionary from the server in response to the transmission. The second feature quantity dictionary may include less information than the first feature quantity dictionary, and the server may generate the second feature quantity dictionary based on the image information and the first feature quantity dictionary. The device may include an identification unit configured to identify an object within the image using the second feature quantity dictionary. | 03-17-2016 |
20160078321 | INTERFACING AN EVENT BASED SYSTEM WITH A FRAME BASED PROCESSING SYSTEM - A method of interfacing an event based processing system with a frame based processing system is presented. The method includes converting multiple events into a frame. The events may be generated from an event sensor. The method also includes inputting the frame into the frame based processing system. | 03-17-2016 |
20160078323 | METHOD AND APPARATUS FOR COUNTING PERSON - A counting method and apparatus are provided. The method and/or apparatus includes generating a regression tree by inputting information about a moving object contained in a plurality of images, in response to a new image being input, inputting information about a moving object contained in the new input image to the regression tree, and determining the number of people contained in the new image based on a result value of the regression tree. | 03-17-2016 |
20160078623 | Method and apparatus for acquiring and fusing ultrasound images with pre-acquired images - A method of fusing ultrasound images with pre-acquired images includes acquiring a first 3D ultrasound image in real-time; identifying at least one known reference pattern or object within a pre-acquired 3D image and the 3D ultrasound image; registering the real-time 3D ultrasound image with the pre-acquired 3D image by using the reference pattern or object; fusing the 3D ultrasound images of a sequence of real-time 3D ultrasound images with the pre-acquired 3D image using data of the co-registration; and displaying fusion tomographic images, each including the corresponding real-time ultrasound image and the pre-acquired image. The registration between real-time 3D ultrasound images and the pre-acquired 3D image is performed continuously in the background by continuing to acquire the real-time 3D ultrasound images and to register them with the pre-acquired 3D images using the reference pattern or patterns or the object or objects. | 03-17-2016 |
20160078627 | SYSTEM AND METHOD FOR DETERMINING THE THREE-DIMENSIONAL LOCATION AND ORIENTATION OF IDENTIFICATION MARKERS - A three-dimensional position and orientation tracking system comprises one or more pattern tags, each comprising a plurality of contrasting portions, a tracker for obtaining image information about the pattern tags, a database with geometric information describing patterns on pattern tags; and a controller for receiving and processing the image information from the tracker, accessing the database to retrieve geometric information, and comparing the image information with the geometric information. The contrasting portions are arranged in a rotationally asymmetric pattern and at least one of the contrasting portions on a pattern tag comprising a perimeter with a polygonal shape. The pattern tags may be borne on tracking markers that have a three-dimensional shaped surface. The tracking system can be implemented in a surgical monitoring system in which the pattern tags are attached to tracking markers or are themselves tracking markers. A method associated with the system employs the rotationally asymmetric patterns on the tags to determine the three-dimensional locations and orientations of items bearing the tags using non-stereo image information. | 03-17-2016 |
20160078636 | IMAGE-BASED SURFACE TRACKING - A method of image-tracking by using an image capturing device ( | 03-17-2016 |
20160078904 | CONTENT MANAGEMENT SYSTEM, MANAGEMENT CONTENT GENERATING METHOD, MANAGEMENT CONTENT PLAY BACK METHOD, AND RECORDING MEDIUM - In a content management system, the still image extracting unit extracts a plurality of frames of still image data from the moving image data based on the motion of the person of interest. The scene determining unit determines a scene of the moving image including a still image corresponding to each of the plurality of frames of the still image data. The management marker registration unit registers, as a management marker, each of the plurality of frames of still image data or an image feature amount of each still image in association with a scene of a moving image corresponding to each still image. The management image generator generates management image data including at least two pieces of the still image data. | 03-17-2016 |
20160085297 | NON-TRANSITORY COMPUTER READABLE MEDIUM, INFORMATION PROCESSING APPARATUS, AND POSITION CONVERSION METHOD - A non-transitory computer readable medium stores a program that causes a computer to execute a position conversion process. The process includes storing actual layout information and virtual layout information, the actual layout information defining a layout of a first subject within an actual structure, the virtual layout information defining a layout of a second subject within a virtual structure; acquiring a human position within the actual structure; and converting the human position in the actual layout information into the human position in the virtual layout information. | 03-24-2016 |
20160085310 | TRACKING HAND/BODY POSE - Tracking hand or body pose from image data is described, for example, to control a game system, natural user interface or for augmented reality. In various examples a prediction engine takes a single frame of image data and predicts a distribution over a pose of a hand or body depicted in the image data. In examples, a stochastic optimizer has a pool of candidate poses of the hand or body which it iteratively refines, and samples from the predicted distribution are used to replace some candidate poses in the pool. In some examples a best candidate pose from the pool is selected as the current tracked pose and the selection processes uses a 3D model of the hand or body. | 03-24-2016 |
20160085312 | GESTURE RECOGNITION SYSTEM - A gesture recognition system includes a candidate node detection unit coupled to receive an input image in order to generate a candidate node; a posture recognition unit configured to recognize a posture according to the candidate node; a multiple hands tracking unit configured to track multiple hands by pairing between successive input images; and a gesture recognition unit configured to obtain motion accumulation amount according to tracking paths from the multiple hands tracking unit, thereby recognizing a gesture. | 03-24-2016 |
20160085496 | APPARATUS AND METHOD OF CONTROLLING MOBILE TERMINAL BASED ON ANALYSIS OF USER'S FACE - An apparatus and method of controlling a mobile terminal by detecting a face or an eye in an input image are provided. The method includes performing face recognition on an input image captured by an image input unit equipped on the front face of the mobile terminal; determining, based on the face recognition, user state information that includes whether a user exists, a direction of the user's face, a distance from the mobile terminal, and/or a position of the user's face; and performing a predetermined function of the mobile terminal according to the user state information. According to the method, functions of the mobile terminal may be controlled without direct inputs from the user. | 03-24-2016 |
20160086015 | METHOD AND SYSTEM FOR AUTOMATED FACE DETECTION AND RECOGNITION - The present invention relates to a figure recognition system and method for automatic detection, tracking, and recognition of a human face image. 2D image data in the surveillance zone are remotely collected using an optical sensor, the faces of all persons in the surveillance zone are detected, and their positions are determined. While the video sequence is processed, each detected face's feature coordinates are estimated, and the detected face and its features are tracked into the next frame. The image quality of each detected face is assessed according to parameters of focus, brightness, contrast, and the presence of glasses. The recognition methods stored in the repository are adjusted for each detected face in view of the computed face-image-quality value, and a biometric feature set is generated using the recognition method selected for each detected face. The figure is recognized against the watch list by comparing the biometric feature set of each detected face with template sets stored in the database. A new-user registration process is performed, and the recognition method adapts automatically in view of the watch list. | 03-24-2016 |
20160086023 | APPARATUS AND METHOD FOR CONTROLLING PRESENTATION OF INFORMATION TOWARD HUMAN OBJECT - A human object recognition unit recognizes a human object included in a captured image data. A degree-of-interest estimation unit estimates a degree of interest of the human object in acquiring information, based on a recognition result obtained by the human object recognition unit. An information acquisition unit acquires information as a target to be presented to the human object. An information editing unit generates information to be presented to the human object from the information acquired by the information acquisition unit, based on the degree of interest estimated by the degree-of-interest estimation unit. An information display unit outputs the information generated by the information editing unit. | 03-24-2016 |
20160086024 | OBJECT DETECTION METHOD, OBJECT DETECTION APPARATUS, AND PROGRAM - An object detection method includes an image acquisition step of acquiring an image including a target object, a layer image generation step of generating a plurality of layer images by one or both of enlarging and reducing the image at a plurality of different scales, a first detection step of detecting a region of at least a part of the target object as a first detected region from each of the layer images, a selection step of selecting at least one of the layer images based on the detected first detected region and learning data learned in advance, a second detection step of detecting a region of at least a part of the target object in the selected layer image as a second detected region, and an integration step of integrating a detection result detected in the first detection step and a detection result detected in the second detection step. | 03-24-2016 |
20160086025 | POSE TRACKER WITH MULTI THREADED ARCHITECTURE - Tracking pose of an articulated entity from image data is described, for example, to control a game system, natural user interface or for augmented reality. In various examples a plurality of threads execute on a parallel computing unit, each thread processing data from an individual frame of a plurality of frames of image data captured by an image capture device. In examples, each thread is computing an iterative optimization process whereby a pool of partially optimized candidate poses is being updated. In examples, one or more candidate poses from an individual thread are sent to one or more of the other threads and used to replace or add to candidate poses at the receiving thread(s). | 03-24-2016 |
20160086031 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - An image processing apparatus includes a processor that executes a process. The process includes converting an input image into multiple types of images based on values of pixels in the input image, and selecting at least one image from the multiple types of images based on luminance information of the multiple types of images. | 03-24-2016 |
20160086039 | METHOD AND DEVICE FOR AUTOMATIC DETECTION AND TRACKING OF ONE OR MULTIPLE OBJECTS OF INTEREST IN A VIDEO - The invention relates to a method for automatic detection and tracking of one or multiple objects of interest in a video sequence comprising several successive frames. | 03-24-2016 |
20160086040 | WIDE BASELINE OBJECT DETECTION STEREO SYSTEM - When detecting an object of interest, such as a bicyclist passing a truck, two downward looking cameras both capture images of the cyclist and detect the cyclist as a deviation from the flat ground plane. The ground plane is reconstructed using a homography (projection) matrix of each camera and compared. Where the camera images do not agree, the ground is not flat. The cyclist is located as the intersection of the rays extending to either end of the area of disagreement between the images. | 03-24-2016 |
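The ground-plane comparison in entry 20160086040 can be sketched as a minimal illustration: one camera's view of the ground is warped into the other's frame via the ground-plane homography, and pixels where the two views disagree mark off-ground objects. The threshold and toy images here are hypothetical, not taken from the patent.

```python
import numpy as np

def ground_disagreement(img_a, img_b_warped, thresh=10):
    """Flag pixels where the two ground-plane projections disagree.

    img_a:        grayscale view from camera A (ground assumed flat)
    img_b_warped: camera B's view warped into A's frame via the
                  ground-plane homography; where the scene really is
                  flat ground, the two images should match.
    Returns a boolean mask of deviating pixels, i.e. candidate
    off-ground objects such as a cyclist passing the truck.
    """
    diff = np.abs(img_a.astype(np.int32) - img_b_warped.astype(np.int32))
    return diff > thresh

# Toy example: identical "ground" except one raised patch.
ground = np.full((8, 8), 100, dtype=np.uint8)
view_b = ground.copy()
view_b[3:5, 3:5] = 200          # an object above the ground plane
mask = ground_disagreement(ground, view_b)
print(mask.sum())               # number of disagreeing pixels
```

In a full system the warp itself would come from each camera's calibrated homography (e.g. via a projective warp of the source image), and the object would then be localized at the intersection of rays through the disagreement region.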
20160086046 | ENHANCED CONTRAST FOR OBJECT DETECTION AND CHARACTERIZATION BY OPTICAL IMAGING BASED ON DIFFERENCES BETWEEN IMAGES - Enhanced contrast between an object of interest and background surfaces visible in an image is provided using controlled lighting directed at the object. Exploiting the falloff of light intensity with distance, a light source (or multiple light sources), such as an infrared light source, can be positioned near one or more cameras to shine light onto the object while the camera(s) capture images. The captured images can be analyzed to distinguish object pixels from background pixels. | 03-24-2016 |
20160086050 | SALIENT FEATURES TRACKING APPARATUS AND METHODS USING VISUAL INITIALIZATION - Apparatus and methods for detecting and utilizing saliency in digital images. In one implementation, salient objects may be detected based on analysis of pixel characteristics. Least frequently occurring pixel values may be deemed as salient. Pixel values in an image may be compared to a reference. Color distance may be determined based on a difference between reference color and pixel color. Individual image channels may be scaled when determining saliency in a multi-channel image. Areas of high saliency may be analyzed to determine object position, shape, and/or color. Multiple saliency maps may be additively or multiplicatively combined in order to improve detection performance (e.g., reduce number of false positives). Methodologies described herein may enable robust tracking of objects utilizing fewer determination resources. Efficient implementation of the methods described below may allow them to be used for example on board a robot (or autonomous vehicle) or a mobile determining platform. | 03-24-2016 |
20160086051 | APPARATUS AND METHODS FOR TRACKING SALIENT FEATURES - Apparatus and methods for detecting and utilizing saliency in digital images. In one implementation, salient objects may be detected based on analysis of pixel characteristics. Least frequently occurring pixel values may be deemed as salient. Pixel values in an image may be compared to a reference. Color distance may be determined based on a difference between reference color and pixel color. Individual image channels may be scaled when determining saliency in a multi-channel image. Areas of high saliency may be analyzed to determine object position, shape, and/or color. Multiple saliency maps may be additively or multiplicatively combined in order to improve detection performance (e.g., reduce number of false positives). Methodologies described herein may enable robust tracking of objects utilizing fewer determination resources. Efficient implementation of the methods described below may allow them to be used for example on board a robot (or autonomous vehicle) or a mobile determining platform. | 03-24-2016 |
20160086052 | APPARATUS AND METHODS FOR SALIENCY DETECTION BASED ON COLOR OCCURRENCE ANALYSIS - Apparatus and methods for detecting and utilizing saliency in digital images. In one implementation, salient objects may be detected based on analysis of pixel characteristics. Least frequently occurring pixel values may be deemed as salient. Pixel values in an image may be compared to a reference. Color distance may be determined based on a difference between reference color and pixel color. Individual image channels may be scaled when determining saliency in a multi-channel image. Areas of high saliency may be analyzed to determine object position, shape, and/or color. Multiple saliency maps may be additively or multiplicatively combined in order to improve detection performance (e.g., reduce number of false positives). Methodologies described herein may enable robust tracking of objects utilizing fewer determination resources. Efficient implementation of the methods described below may allow them to be used for example on board a robot (or autonomous vehicle) or a mobile determining platform. | 03-24-2016 |
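The color-occurrence idea shared by the three entries above (least frequently occurring pixel values deemed salient) can be sketched for a single-channel image. The rarity measure below is an illustrative assumption, not the patents' exact formulation; multi-channel images would scale and combine per-channel maps as the abstracts describe.

```python
import numpy as np

def rarity_saliency(img):
    """Per-pixel saliency as inverse frequency of the pixel's value.

    img is a 2-D uint8 array (single channel). Values that occur
    least often in the image receive the highest saliency.
    """
    counts = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    freq = counts / img.size            # occurrence frequency per value
    return 1.0 - freq[img]              # rare values -> high saliency

# A mostly uniform image with one rare value is most salient there.
img = np.zeros((10, 10), dtype=np.uint8)
img[5, 5] = 255
sal = rarity_saliency(img)
r, c = np.unravel_index(np.argmax(sal), sal.shape)
print(int(r), int(c))                   # -> 5 5
```

High-saliency regions found this way could then be analyzed for position, shape, and color, as the abstracts note.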
20160086055 | ENHANCED CONTRAST FOR OBJECT DETECTION AND CHARACTERIZATION BY OPTICAL IMAGING USING FORMED DIFFERENCE IMAGES - Enhanced contrast between an object of interest and background surfaces visible in an image is provided using controlled lighting directed at the object. Exploiting the falloff of light intensity with distance, a light source (or multiple light sources), such as an infrared light source, can be positioned near one or more cameras to shine light onto the object while the camera(s) capture images. The captured images can be analyzed to distinguish object pixels from background pixels. | 03-24-2016 |
20160086337 | METHOD FOR DETECTING OBJECT AND OBJECT DETECTING APPARATUS - A method for detecting an object includes inputting information of a moving object included in a plurality of images and generating a regression tree. In response to input of a new image, the method inputs information of a moving object included in the new image into the regression tree, and determines a size of a person included in the new image based on a resultant value of the regression tree. | 03-24-2016 |
20160086342 | REGION DETECTION DEVICE, REGION DETECTION METHOD, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, PROGRAM, AND RECORDING MEDIUM - The image processing apparatus includes a region detection unit that detects a face region of the attention person, an attention person movement region of the moving image, the entire region of the attention person, and an attention person transfer region of the moving image, a region image extraction unit that extracts an image of the face region of the attention person, an image of the attention person movement region of the moving image, an image of the entire region of the attention person, and an image of the attention person transfer region of the moving image, which respectively correspond to the face region of the attention person, the attention person movement region of the moving image, the entire region of the attention person, and the attention person transfer region of the moving image, from the still image, and a composite image generation unit that generates a composite image. | 03-24-2016 |
20160086344 | VISUAL TRACKING OF AN OBJECT - Method for visual tracking of at least one object represented by a cluster of points with which information is associated, characterised in that it includes steps to: receive (E1) data representing a set of space-time events, determine (E2) the probability that an event in the set belongs to the cluster of points representing the at least one object, for each event in the received set, determine (E3) whether or not an event belongs to the cluster of points as a function of the determined probability for the event considered, for each event in the received set, update (E4) information associated with the cluster of points for at least one object, for each event for which it was determined in the previous step that it belongs to the cluster of points, calculate (E4, E5) the position, size and orientation of the at least one object as a function of the updated information. | 03-24-2016 |
20160086346 | REMOTE OPERATED SELECTIVE TARGET TREATMENT SYSTEM - A remote operated selective target treatment system including a firing robot having a weapon and an optoelectronic sighting device, a central processing unit and a control screen displaying the prepared image of the target, and a control device. The central processing unit prepares the image intended for display, and includes an input module receiving digital images, an image analyzer receiving the image from the input module and detaching the target image from its environment, a modelling device modelling the contour of the image, a comparator connected to a library of silhouettes receiving the modelled image and checking it against the silhouettes, and an exclusion module receiving an image from the comparator and using a library of masks to apply a mask to the image and transmit the prepared image for display on screen. | 03-24-2016 |
20160086348 | RECOGNITION APPARATUS, METHOD, AND COMPUTER PROGRAM PRODUCT - In an embodiment, a recognition apparatus includes: an obtaining unit configured to obtain positions of a specific part in a coordinate system having a first axis to an n-th axis (n≧2); a calculating unit configured to calculate a movement vector of the specific part; a principal axis selecting unit configured to select a principal axis; a turning point setting unit configured to set, as a turning point, a position at which there is a change in the principal axis; a section setting unit configured to set a determination target section and an immediately previous section; a determining unit configured to calculate an evaluation value of the determination target section and an evaluation value of the immediately previous section and determine which of the first axis to the n-th axis is advantageous; and a presenting unit configured to present the determined result. | 03-24-2016 |
20160086349 | TRACKING HAND POSE USING FOREARM-HAND MODEL - Tracking hand pose from image data is described, for example, to control a natural user interface or for augmented reality. In various examples an image is received from a capture device, the image depicting at least one hand in an environment. For example, a hand tracker accesses a 3D model of a hand and forearm and computes pose of the hand depicted in the image by comparing the 3D model with the received image. | 03-24-2016 |
20160086350 | APPARATUSES, METHODS AND SYSTEMS FOR RECOVERING A 3-DIMENSIONAL SKELETAL MODEL OF THE HUMAN BODY - The ARS offers tracking, estimation of position, orientation and full articulation of the human body from marker-less visual observations obtained by a camera, for example an RGBD camera. An ARS may provide hypotheses of the 3D configuration of body parts or the entire body from a single depth frame. The ARS may also propagate estimations of the 3D configuration of body parts and the body by mapping or comparing data from the previous frame and the current frame. The ARS may further compare the estimations and the hypotheses to provide a solution for the current frame. An ARS may select, merge, refine, and/or otherwise combine data from the estimations and the hypotheses to provide a final estimation corresponding to the 3D skeletal data and may apply the final estimation data to capture parameters associated with a moving or still body. | 03-24-2016 |
20160092719 | METHOD FOR MONITORING, IDENTIFICATION, AND/OR DETECTION USING A CAMERA BASED ON A COLOR FEATURE - The present disclosure relates to a method for camera-based monitoring, identification, and detection using color features. In some aspects, the method comprises: 1) starting the camera, the image processing unit, and the display unit at the beginning of testing; 2) using the camera to capture the color characteristic value; and 3) moving the untested object into the detection area of the camera to be detected, wherein the image processing unit extracts the mean color characteristic value from the color pixels of the detection area. | 03-31-2016 |
20160092727 | TRACKING HUMANS IN VIDEO IMAGES - A processor accesses a first video image and a second video image from a sequence of video images and applies a patch descriptor technique to determine a first portion of the first video image that encompasses a first person. The processor determines a location of the first person in the second video image by comparing keypoints in the first portion of the first video image to one or more keypoints in the second video image. | 03-31-2016 |
20160092732 | METHOD AND APPARATUS FOR RECOGNITION AND MATCHING OF OBJECTS DEPICTED IN IMAGES - A method includes identifying one or more objects in one or more images of real-world scenes associated with a user, adding the identified one or more objects to a list of real-world objects associated with the user, assigning each object in the list of real-world objects to an object class based on object recognition, and providing a notification to the user that a content item has been associated with an object class assigned to one of the objects on the list of real-world objects associated with the user. A computer readable storage medium stores one or more computer programs, and an apparatus includes a processor-based device. | 03-31-2016 |
20160092733 | SYSTEM AND METHOD FOR SEAT OCCUPANCY DETECTION FROM CEILING MOUNTED CAMERA USING ROBUST ADAPTIVE THRESHOLD CRITERIA - A method for detecting sitting behavior includes acquiring a sequence of frames capturing a scene-of-interest at an overhead view. The method includes detecting at least one empty seat within the scene-of-interest and associating the seat as being unoccupied and the frame as a reference frame. The method includes extracting reference features describing a region of the unoccupied seat in the reference frame and quantifying the reference features to form a reference feature vector. The method includes extracting features describing the region in a given frame and quantifying the features to form a current feature vector. The method includes measuring a change in a feature vector over time using the reference feature vector and the current feature vector. The method includes and determining a status of the seat in the given frame as being one of occupied and unoccupied based on the change in the feature vector. | 03-31-2016 |
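The reference-versus-current feature-vector comparison in entry 20160092733 can be sketched as follows. The feature vectors, distance measure, and threshold here are illustrative placeholders for whatever region features (e.g. color histograms) an implementation would actually use.

```python
import numpy as np

def seat_status(ref_vec, cur_vec, thresh=0.25):
    """Classify a seat region as 'occupied' or 'unoccupied'.

    ref_vec: feature vector of the seat region extracted from the
             reference frame in which the seat is known to be empty.
    cur_vec: feature vector of the same region in the current frame.
    The change in the feature vector over time is measured relative
    to the reference; a large change suggests occupancy.
    """
    ref = np.asarray(ref_vec, dtype=np.float64)
    cur = np.asarray(cur_vec, dtype=np.float64)
    change = np.linalg.norm(cur - ref) / (np.linalg.norm(ref) + 1e-9)
    return "occupied" if change > thresh else "unoccupied"

print(seat_status([1.0, 0.0, 0.0], [1.0, 0.05, 0.0]))  # small change
print(seat_status([1.0, 0.0, 0.0], [0.2, 0.9, 0.4]))   # large change
```

The "robust adaptive threshold" of the title would in practice adapt `thresh` to scene conditions rather than keep it fixed as in this sketch.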
20160092734 | SYSTEM AND METHOD FOR DETECTING SETTLE DOWN TIME USING COMPUTER VISION TECHNIQUES - A method for detecting settle-down time in a space includes acquiring a sequence of frames capturing a select space from a first camera. The method includes determining an initial time for computing a duration it takes for an associated occupant to settle into a seat in the select space. The method includes determining one or more candidate frames from the sequence of frames where one or both of a sitting behavior and seat occupancy is observed at the seat. The method includes determining a final frame and a final time associated with the final frame from the one or more candidate frames. The method includes computing the settle-down time using the initial and the final times. | 03-31-2016 |
20160092735 | SCANNING WINDOW IN HARDWARE FOR LOW-POWER OBJECT-DETECTION IN IMAGES - An apparatus includes a hardware sensor array including a plurality of pixels arranged along at least a first dimension and a second dimension of the array, each of the pixels capable of generating a sensor reading. A hardware scanning window array includes a plurality of storage elements arranged along at least a first dimension and a second dimension of the hardware scanning window array, each of the storage elements capable of storing a pixel value based on one or more sensor readings. Peripheral circuitry systematically transfers pixel values, based on sensor readings, into the hardware scanning window array, to cause different windows of pixel values to be stored in the hardware scanning window array at different times. Control logic coupled to the hardware sensor array, the hardware scanning window array, and the peripheral circuitry provides control signals to the peripheral circuitry to control the transfer of pixel values. | 03-31-2016 |
20160092736 | SYSTEM AND METHOD FOR OBJECT RE-IDENTIFICATION - A method of identifying, with a camera, an object in an image of a scene, by determining the distinctiveness of each of a number of attributes of an object of interest, independent of the camera viewpoint, determining the detectability of each of the attributes based on the relative orientation of a candidate object in the image of the scene, determining a camera setting for viewing the candidate object based on the distinctiveness of an attribute, so as to increase the detectability of the attribute, and capturing an image of the candidate object with the camera setting to determine the confidence that the candidate object is the object of interest. | 03-31-2016 |
20160092738 | Method and System for Motion Vector-Based Video Monitoring and Event Categorization - A computer system processes a video stream to detect a start of a first motion event candidate in the video stream, and in response to detecting the start of the first motion event candidate in the video stream, initiates event recognition processing on a first video segment associated with the start of the first motion event candidate. Initiating the event recognition processing further includes: determining a motion track of a first object identified in the first video segment; generating a representative motion vector for the first motion event candidate based on the motion track of the first object; and sending the representative motion vector for the first motion event candidate to an event categorizer, where the event categorizer assigns a respective motion event category to the first motion event candidate based on the representative motion vector of the first motion event candidate. | 03-31-2016 |
20160092741 | OPTIMIZING THE DETECTION OF OBJECTS IN IMAGES - A method, system, and computer program product, for detecting objects of interest in a digital image. At least positional data associated with a vehicle is received. Geographical information associated with the positional data is received. A probability of detecting an object of interest within a corresponding geographic area associated with the vehicle is determined based on the geographical data. The probability is compared to a given threshold. An object detection process is at least one of activated and maintained in an activated state in response to the probability being one of above and equal to the given threshold. The object detection process detects objects of interest within at least one image representing at least one frame of a video sequence of an external environment. | 03-31-2016 |
20160092742 | METHOD FOR INSTANT RECOGNITION OF TRAFFIC LIGHTS COUNTDOWN IMAGE - A method for instant recognition of traffic lights countdown image that can quickly scan and confirm the circular feature image of a traffic light, and retrieve the countdown image thereof by calculating the displacement ratio from the circular image, then enhance, crop, and convert the countdown image to display a feature image thereof, and perform a similarity comparison with collected data to calculate the percentage of similarity. The method eventually brings out a result from the image comparisons, so as to fulfill the effectiveness of searching and instantly recognizing the countdown image of a traffic light. | 03-31-2016 |
20160092753 | Method and System to Characterize Video Background Changes as Abandoned or Removed Objects - A method and system for analyzing video data in a security system. An analysis compares a current frame to a background model. The analysis system compares the background model to the current frame to identify changed pixel patches. The analysis system uses morphological image processing to generate masks based on the changed pixel patches. Next, the analysis system applies the masks to the background model and the current frames to determine whether the changed pixel patches are characteristic of abandoned or removed objects within the video data. | 03-31-2016 |
20160093035 | POSITION DETECTION DEVICE, PROJECTOR, AND POSITION DETECTION METHOD - A position detection device capable of preventing an operation that is not intended by an operator from being detected as an input is provided. The position detection device includes: a detecting section that detects an indicator that performs an operation with respect to a screen, and an indicator different from the indicator; an imaging section that forms a captured image obtained by imaging a range including the screen; and a control section that detects a motion of the indicator with respect to the screen and a position of the indicator with respect to the screen based on data on the captured image of the imaging section to determine whether or not to detect an operation based on the indicator as an input. | 03-31-2016 |
20160093036 | POSITION DETECTION DEVICE, PROJECTOR, AND POSITION DETECTION METHOD - A position detection device capable of preventing an operation that is not intended by an operator from being detected as an input is provided. The position detection device includes: a detecting section that detects an indicator that performs an operation with respect to a screen, and a target different from the indicator; an imaging section that images a range including the screen to form a captured image; and a control section that detects a motion of the indicator based on data on the captured image of the imaging section to determine whether or not to detect an operation based on the indicator as an input. | 03-31-2016 |
20160093038 | INFORMATION PROCESSING APPARATUS RECOGNIZING MULTI-TOUCH OPERATION, CONTROL METHOD THEREOF, AND STORAGE MEDIUM - Conventionally, when an input operation is recognized based on a three-dimensional measurement of users' hands, it was difficult to discriminate whether a plurality of detected hands belongs to a single user or to a plurality of users. An information processing apparatus according to the present invention includes an image acquiring unit configured to acquire information about an image capturing a space on an operation surface, a position identifying unit configured to identify the position where each of a plurality of objects to be used for an operational input has entered into the space based on information about the acquired image, and an association unit configured to identify a combination of a plurality of objects based on the position identified for each of the plurality of objects and associate the combined plurality of objects with each other. | 03-31-2016 |
20160093051 | SYSTEMS AND METHODS FOR A DUAL MODALITY SENSOR SYSTEM - The present disclosure provides systems and methods for using two imaging modalities for imaging an object at two different resolutions. For example, the system may utilize a first modality (e.g., ultrasound or electromagnetic radiation) to generate image data at a first resolution. The system may then utilize the other modality to generate image data of portions of interest at a second resolution that is higher than the first resolution. In another embodiment, one imaging modality may be used to resolve an ambiguity, such as ghost images, in image data generated using another imaging modality. | 03-31-2016 |
20160093052 | METHOD AND APPARATUS FOR DETECTING OBSTACLE BASED ON MONOCULAR CAMERA - Method and apparatus for detecting obstacle based on monocular camera are provided. The method includes: obtaining a target frame image and its adjacent frame image shot by the monocular camera; deleting an unstable feature point from an initial feature point set of the adjacent frame image to obtain a preferred feature point set; dividing a target feature point set to obtain several target feature point subsets; judging whether the target feature point subset corresponds to obstacle based on a change of a distance between points within a ground projection point set of the target feature point subset from adjacent frame time instant to target frame time instant; and determining a union of all the target feature point subsets which are judged as corresponding to obstacles as an obstacle point set of the target frame image. | 03-31-2016 |
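The change-of-distance test on ground-projection points in entry 20160093052 can be sketched as a minimal illustration. The idea: feature points truly on the road surface keep their projected pairwise distances under ego-motion, while projections of points on an elevated obstacle stretch or shrink between frames. The tolerance and toy point sets are hypothetical, not from the patent.

```python
import numpy as np

def is_obstacle(proj_t0, proj_t1, tol=0.05):
    """Judge a feature-point subset as obstacle vs. ground.

    proj_t0, proj_t1: (N, 2) arrays of the subset's ground-projection
    points at the adjacent frame time and the target frame time.
    Returns True when pairwise distances change beyond tolerance,
    i.e. the points cannot all lie on the flat ground.
    """
    def pairwise(p):
        d = p[:, None, :] - p[None, :, :]
        return np.sqrt((d ** 2).sum(-1))
    d0 = pairwise(np.asarray(proj_t0, dtype=np.float64))
    d1 = pairwise(np.asarray(proj_t1, dtype=np.float64))
    rel = np.abs(d1 - d0) / (d0 + 1e-9)
    return bool(rel.max() > tol)

ground_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
# Rigid translation of true ground points preserves distances:
print(is_obstacle(ground_pts, ground_pts + [0.0, 2.0]))   # False
# Projections of an elevated object spread as the camera approaches:
print(is_obstacle(ground_pts, ground_pts * 1.5))          # True
```

Subsets judged as obstacles would then be unioned into the frame's obstacle point set, as the abstract describes.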
20160093053 | DETECTION METHOD AND DETECTION APPARATUS FOR DETECTING THREE-DIMENSIONAL POSITION OF OBJECT - A detection apparatus for detecting a three-dimensional position of an object includes: an image storage unit that stores sequentially two images imaged when a robot is moving; a position/orientation information storage unit that stores position/orientation information of the robot when each image is imaged; a position information storage unit that detects an object from each image and stores position information of the object; a line-of-sight information calculating unit that calculates line-of-sight information of the object in a robot coordinate system using the position/orientation information of the robot which is associated with each image and the position information of the object; and a three-dimensional position detecting unit that detects a three-dimensional position of the object based on an intersection point of the line-of-sight information. | 03-31-2016 |
20160093055 | INFORMATION PROCESSING APPARATUS, METHOD FOR CONTROLLING SAME, AND STORAGE MEDIUM - An apparatus includes an extraction unit configured to extract an area showing an arm of a user from a captured image of a space into which the user inserts the arm, a reference point determination unit configured to determine a reference point within a portion corresponding to a hand of the user in the area, a feature amount acquisition unit configured to obtain a feature amount of the hand corresponding to an angle around the reference point, and an identification unit configured to identify a shape of the hand in the image by using a result of comparison between the feature amount obtained by the feature amount acquisition unit and a feature amount obtained from dictionary data indicating a state of the hand. The feature amount is obtained from the dictionary data corresponding to an angle around a predetermined reference point. | 03-31-2016 |
20160093064 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - An image processing apparatus is provided with a spatial information calculation unit for calculating spatial information of a subject, which is the information of an area in which the subject in an image is predicted to be present, a first area setting unit for setting a first area in the image based on the spatial information, a second area setting unit for setting a second area outside the first area, a first feature amount calculation unit for calculating a first feature amount of the first area, a second feature amount calculation unit for calculating a second feature amount of the second area, the second feature amount being a feature amount of the same type as the first feature amount, and a saliency calculation unit for calculating a degree of visual saliency of the subject. | 03-31-2016 |
20160093065 | METHOD FOR DETECTING AN OBJECT IN AN ENVIRONMENTAL REGION OF A MOTOR VEHICLE, DRIVER ASSISTANCE SYSTEM AND MOTOR VEHICLE - A method for detecting an object captured by a camera in an environmental region of a vehicle based on a temporal sequence of images of the environmental region is disclosed. An electronic evaluation device is used to determine at least one characteristic pixel of the object in a first image of the sequence of images, and the determined characteristic pixel is tracked in at least a second image, the tracking providing a flow vector having a vertical component and a horizontal component. A first depth component, which is perpendicular to the vertical component and the horizontal component, is determined based on the vertical component, and a second depth component, perpendicular to the vertical component and the horizontal component, is determined based on the horizontal component. When the first and second depth components correspond within a tolerance range, a validated final depth component is provided. | 03-31-2016 |
20160093097 | ORIENTATION INVARIANT OBJECT IDENTIFICATION USING MODEL-BASED IMAGE PROCESSING - A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-Dimensional space. To limit the search space, it is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images. The system also includes the generation of shadows and allows the user to manipulate the sun angle to approximate the lighting conditions of the test range in the provided video. | 03-31-2016 |
20160094705 | Message Read Confirmation Using Eye Tracking - An electronic device generates a message read confirmation by using eye tracking. The device tracks a position of a user's eye while the user is viewing a displayed electronic message. The device generates a plurality of features associated with the user's viewing of the electronic message based on the tracked position of the eye. The generated features include, for example, a number of lines of the displayed electronic message viewed by the user. The device then generates a message read confirmation after determining that the user has read the displayed electronic message based on the generated plurality of features. The tracking of the eye position can be implemented by capturing images representing the eye position. Based on analyzing a series of the captured images, the device can also determine that the eye has stayed within a threshold distance and, responsively, enhance (e.g., zoom) the displayed electronic message. | 03-31-2016 |
20160096509 | VEHICLE ACCESS SYSTEM - A method for an access system for a vehicle includes the steps of recording a first optical information in a specific area surrounding the vehicle and detecting an object approaching the vehicle as a function of the recorded first optical information. As a function of detecting the approaching object, a second optical information in the surrounding area is recorded, and gesture information of the object is identified as a function of the second optical information. A locking device or door opening device of the access system is triggered as a function of gesture information. | 04-07-2016 |
20160098600 | DATASET CREATION FOR TRACKING TARGETS WITH DYNAMICALLY CHANGING PORTIONS - A mobile platform visually detects and/or tracks a target that includes a dynamically changing portion, or otherwise undesirable portion, using a feature dataset for the target that excludes the undesirable portion. The feature dataset is created by providing an image of the target and identifying the undesirable portion of the target. The identification of the undesirable portion may be automatic or by user selection. An image mask is generated for the undesirable portion. The image mask is used to exclude the undesirable portion in the creation of the feature dataset for the target. For example, the image mask may be overlaid on the image and features are extracted only from unmasked areas of the image of the target. Alternatively, features may be extracted from all areas of the image and the image mask used to remove features extracted from the undesirable portion. | 04-07-2016 |
20160098601 | METHOD AND SYSTEM FOR A MOBILE TERMINAL TO ACHIEVE USER INTERACTION BY SIMULATING A REAL SCENE - A method and a system for a mobile terminal to achieve user interaction by simulating a real scene are disclosed. The method comprises: formulating a scene task for a 3D virtual scene; uploading the information of the 3D virtual scene and the scene task to a server to obtain a shared link; searching for and transmitting the shared link to nearby mobile terminals, sending an invitation and waiting for participation of the nearby mobile terminals; if the invitation is accepted by the nearby mobile terminals, then reading the information of the 3D virtual scene and the scene task and uploading corresponding personal information by the nearby mobile terminals; and changing locations of user roles in the 3D virtual scene according to positioning information of the mobile terminal, receiving a user operation instruction to make interactions via the user roles, and recording the user behaviors corresponding to the personal information. | 04-07-2016 |
20160098606 | Approaching-Object Detection System and Vehicle - An approaching-object detection system detects an approaching object based on images captured by an imaging device. The approaching-object detection system comprises an extraction unit, a distortion correction unit, an object detection unit, and an approaching-object classification unit. The extraction unit extracts a partial image of one far side and a partial image of the other far side from each of the images. The distortion correction unit corrects distortion of the partial images. The object detection unit detects an object from the corrected partial images through pattern matching with reference to preinstalled image information on the object. The approaching-object classification unit classifies the detection result of the object detection unit as an approaching object or not. | 04-07-2016 |
20160098612 | STATISTICAL APPROACH TO IDENTIFYING AND TRACKING TARGETS WITHIN CAPTURED IMAGE DATA - A facility implementing systems and/or methods for creating statistically significant signatures for targets of interest and using those signatures to identify and locate targets of interest within image data, such as an array of pixels, captured still images, video data, etc., is described. Embodiments of the facility generate statistically significant signatures based at least in part on series approximations (e.g., Fourier Series, Gram-Charlier Series) of image data. The disclosed techniques allow for a high degree of confidence in identifying and tracking targets of interest within visual data and are highly tolerant of translation and rotation in identifying objects using the statistically significant signatures. | 04-07-2016 |
20160098620 | METHOD AND SYSTEM FOR OBJECT IDENTIFICATION - A system and method for object classification is provided. The system includes a computing device that typically comprises a processor configured to receive data and detect an object within the data. Once an object is detected, it can be decomposed into sub-objects and connectivities. Based on the sub-objects and connectivities, parameters can be generated. Moreover, based on at least one of the sub-objects, connectivities and parameters, objective measures can be generated. The object can then be classified based on the objective measures. The parameters can be linked into linked parameters. Linked classification measures can be generated based on linked parameters. The system can also detect environment objects that form the environment of the detected object. Similar to an object, an environment object can be decomposed into environment sub-objects, and subsequently into environment parameters. Objective measure generation can then be further based on the environment objects. | 04-07-2016 |
20160098830 | Method and System for Determining a Number of Transfer Objects Which Move Within An Observed Region - The invention proposes a method for determining a number of transfer objects which move within an observed region. | 04-07-2016 |
20160098833 | System and Method for Measurement of Myocardial Mechanical Function - There is provided a system and method for evaluation of cardiac images, wherein enhanced evaluation of myocardial mechanical function is possible. The system and method include segmentation of cardiac images obtained via cMRI or other imaging modalities, wherein the segmentation allows for fusion of these images with images obtained from a different modality, such as echocardiogram and/or LE-MRI. The fused images may then be used to provide a diagnosis or a recommendation for a procedure, such as implantation of a cardiac pacemaker. Moreover, follow-up evaluation may be done using only one imaging modality, such as echocardiogram, for example. The system and method disclosed herein further provide additional post-processing, such as computation of mid-myocardial strain, which can further be useful in diagnosis and planning. | 04-07-2016 |
20160100092 | OBJECT TRACKING DEVICE AND TRACKING METHOD THEREOF - An object tracking device and a tracking method thereof are provided. The method, adopted by an object tracking device, includes: detecting, by a first multimedia sensor, an environment to generate a first multimedia sensor output; monitoring, by a processing circuit, the first multimedia sensor output from the first multimedia sensor; configuring, by the processing circuit, a setting for a second multimedia sensor based on the first multimedia sensor output; and monitoring, by the second multimedia sensor, the environment based on the setting to generate a second multimedia output. | 04-07-2016 |
20160104033 | SYSTEM AND METHOD FOR FACE CAPTURE AND MATCHING - According to an example, a face capture and matching system may include a memory storing machine readable instructions to receive captured images of an area monitored by an image capture device, and detect one or more faces in the captured images. The memory may further store machine readable instructions to track movement of the one or more detected faces in the area monitored by the image capture device, and based on the one or more tracked detected faces, select one or more images from the captured images to be used for identifying the one or more tracked detected faces. The memory may further store machine readable instructions to select one or more fusion techniques to identify the one or more tracked detected faces using the one or more selected images. The face capture and matching system may further include a processor to implement the machine readable instructions. | 04-14-2016 |
20160104042 | SYSTEMS, METHODS, AND DEVICES FOR IMAGE MATCHING AND OBJECT RECOGNITION IN IMAGES USING FEATURE POINT OPTIMIZATION - An image matching technique locates feature points in a template image such as a logo and then does the same in a test image. Feature points of a template image are determined under various transformations and used to determine a set of composite feature points for each template image. The composite feature points are used to determine if the template image is present in a test image. | 04-14-2016 |
20160104046 | DEVICE AND METHOD FOR SAFEGUARDING AN AUTOMATICALLY OPERATING MACHINE - A device for safeguarding a monitoring area in which an automatically operating machine is arranged, having a camera system for monitoring the monitoring area, a configuration unit for defining at least one protection area within the monitoring area, and an analysis unit for triggering a safety-related function. The camera system supplies camera images of the protection area and the analysis unit analyzes whether a foreign object is present in the protection area or penetrates into the protection area. The analysis unit is configured to classify a foreign object, which is present in the protection area or penetrates into the protection area, by analysis of the camera images, to determine on the basis of one or more features characteristic of welding sparks whether the foreign object is a welding spark. The analysis unit triggers the safety-related function if the foreign object has not been recognized as a welding spark. | 04-14-2016 |
20160104047 | IMAGE RECOGNITION SYSTEM FOR A VEHICLE AND CORRESPONDING METHOD - An image recognition system and method for a vehicle, including at least two camera units, each being configured to record an image of a road in the vicinity of the vehicle and to provide image data representing the respective image of the road, a first image processor configured to combine the image data provided by the at least two camera units into a first top-view image. The first top-view image is aligned to a road image plane, a first feature extractor configured to extract lines from the first top-view image, a second feature extractor configured to extract an optical flow from the first top-view image and a second top-view image, generated before the first top-view image by the first image processor, and a kerb detector configured to detect kerbs in the road based on the extracted lines and the extracted optical flow and provide kerb data representing the detected kerbs. | 04-14-2016 |
20160104059 | IDENTIFYING VISUAL STORM SIGNATURES FROM SATELLITE IMAGES - Satellite images from vast historical archives are analyzed to predict severe storms. We extract and summarize important visual storm evidence from satellite image sequences in a way similar to how meteorologists interpret these images. The method extracts and fits local cloud motions from image sequences to model the storm-related cloud patches. Image data of an entire year are adopted to train the model. The historical storm reports since the year 2000 are used as the ground-truth and statistical priors in the modeling process. Experiments demonstrate the usefulness and potential of the algorithm for producing improved storm forecasts. A preferred method applies cloud motion estimation in image sequences. This aspect of the invention is important because it extracts and models certain patterns of cloud motion, in addition to capturing the cloud displacement. | 04-14-2016 |
20160104282 | DRUG INFORMATION ACQUISITION DEVICE AND METHOD - A row of V-shaped grooves is formed in the bottom of each of the imaging trays. | 04-14-2016 |
20160104295 | METHODS AND SYSTEM FOR BLASTING VIDEO ANALYSIS - Blasting video analysis techniques and systems are presented using vibration compensated background analysis with automated determination of blast origin coordinates and highest point coordinates using blast outline coordinates for post-origin frames. Blast expansion trajectories of the highest particle are estimated in frames preceding the highest point frame, and estimated blast parameters including maximum height and initial blast velocity are computed independent of blast data. | 04-14-2016 |
20160104297 | METHOD AND DEVICE FOR COUNTING OBJECTS IN IMAGE DATA IN FRAMES, A FRAME OF SAID IMAGE DATA IN FRAMES INCLUDING AT LEAST ONE OBJECT, SUCH AS CANS, BOTTLES, AND PACKAGING, COMPUTER PROGRAM AND COMPUTER PROGRAM PRODUCT - A method for counting objects such as cans, bottles, and packaging in image data in frames, a frame of said image data in frames including an image of at least one object, the method comprising the steps of: identification of a most pronounced characteristic for each object in a frame of said image data in frames; tracking of a position of the characteristic in the image data in frames by identifying the characteristic in at least one additional frame; and modification of a count by determining that the position is outside a border. | 04-14-2016 |
20160110591 | LARGE VENUE SURVEILLANCE AND REACTION SYSTEMS AND METHODS USING DYNAMICALLY ANALYZED EMOTIONAL INPUT - Certain example embodiments relate to large venue surveillance and reaction systems and/or methods that take into account both subjective emotional attributes of persons having relations to the large venues, and objective measures such as, for example, actual or expected wait times, current staffing levels, numbers of customers to be serviced, etc. Pre-programmed scenarios are run in real-time as events stream in over one or more electronic interfaces, with each scenario being implemented as a logic sequence that takes into account at least an aspect of a representation of an inferred emotional state. The scenarios are run to (a) determine whether an incident might be occurring and/or might have occurred, and/or (b) dynamically determine a responsive action to be taken. A complex event processing engine may be used in this regard. The analysis may be used in certain example embodiments to help improve customer satisfaction at the large venue. | 04-21-2016 |
20160110592 | IMAGE PROCESSING DEVICE, METHOD AND PROGRAM FOR MOVING GESTURE RECOGNITION USING DIFFERENCE IMAGES - An image processing device includes a difference image generation unit which generates a difference image by obtaining a difference between frames of a cutout image which is obtained by cutting out a predetermined region on a photographed image; a feature amount extracting unit which extracts a feature amount from the difference image; and a recognition unit which recognizes a specific movement of an object on the photographed image based on the feature amount which is obtained from the plurality of difference images which are aligned in time sequence. | 04-21-2016 |
20160110593 | IMAGE BASED GROUND WEIGHT DISTRIBUTION DETERMINATION - A sequence of images is processed to interpret movements of a user. The user's contour and center of gravity are determined and tracked. Based on points of contact between the user and the environment, and upon tracked movement of the center of gravity, forces impressed by the user upon the points of contact with the environment may be deduced by constraint analysis. This center-of-mass model of user movements may be used in conjunction with a skeletal model of the user to provide verification of the validity of the skeletal model. The center-of-mass model may also be used in place of the skeletal model during those times when use of the skeletal model is problematic. | 04-21-2016 |
20160110594 | SCALE INDEPENDENT TRACKING PATTERN - In one aspect, a computer implemented method of motion capture, the method includes tracking the motion of a dynamic object bearing a pattern configured such that a first portion of the pattern is tracked at a first resolution and a second portion of the pattern is tracked at a second resolution. The method further includes causing data representing the motion to be stored to a computer readable medium. | 04-21-2016 |
20160110602 | AREA INFORMATION ESTIMATING DEVICE, AREA INFORMATION ESTIMATING METHOD, AND AIR CONDITIONING APPARATUS - An area information estimating device has a camera that photographs an image within a sensing area, a person detector that detects a person from the image from the camera, and a position calculator that calculates an existence position of the person in the sensing area based on a coordinate on the image at which the person is detected by the person detector. The person detector excludes from the detection target any person for whom the relationship between the coordinate and a size on the image does not satisfy a predetermined condition. | 04-21-2016 |
20160110603 | SCANNING WINDOW IN HARDWARE FOR LOW-POWER OBJECT-DETECTION IN IMAGES - An apparatus includes a hardware sensor array including a plurality of pixels arranged along at least a first dimension and a second dimension of the array, each of the pixels capable of generating a sensor reading. A hardware scanning window array includes a plurality of storage elements arranged along at least a first dimension and a second dimension of the hardware scanning window array, each of the storage elements capable of storing a pixel value based on one or more sensor readings. Peripheral circuitry systematically transfers pixel values, based on sensor readings, into the hardware scanning window array, to cause different windows of pixel values to be stored in the hardware scanning window array at different times. Control logic, coupled to the hardware sensor array, the hardware scanning window array, and the peripheral circuitry, provides control signals to the peripheral circuitry to control the transfer of pixel values. | 04-21-2016 |
20160110604 | TRACKING APPARATUS, TRACKING METHOD, AND NON-TRANSITORY STORAGE MEDIUM STORING TRACKING PROGRAM - A tracking apparatus includes a tracking subject setting unit, a tracking subject search unit, an obstructing detection unit, and an obstructing area detection unit. The tracking subject search unit searches for the tracking subject by using at least one of luminance of the image data, color of the image data, a result of face detection, and a result of focus detection. The obstructing detection unit detects that the tracking subject is obstructed by comparing standard focus detection information with focus detection information obtained at a tracking position. The obstructing area detection unit detects an obstructing area. The tracking subject search unit exclusively sets a search area around the obstructing area and searches the set search area for the tracking subject by using information different from the result of focus detection, when the obstructing of the tracking subject is detected. | 04-21-2016 |
20160110605 | SYSTEMS AND METHODS FOR IMAGE-FEATURE-BASED RECOGNITION - Methods and systems are described herein that allow a user to capture a single image snapshot from video, print, or the world around him or her, and obtain additional information relating to the media itself or items of interest displayed in the snapshot. A fingerprint of the snapshot is used as a query and transmitted to the server. Image Feature-Based Recognition, as described herein, uses a feature index to identify a smaller set of candidate matches from a larger database of images based on the fingerprint. Novel methods and systems using a distance metric and a radical hash table design exploit probabilistic effects and allow distinct image features to be preferred over redundant ones, allowing only the more distinctive data points to remain resident within the index, yielding a lean index that can be quickly used in the identification process. | 04-21-2016 |
20160110609 | METHOD FOR OBTAINING A MEGA-FRAME IMAGE FINGERPRINT FOR IMAGE FINGERPRINT BASED CONTENT IDENTIFICATION, METHOD FOR IDENTIFYING A VIDEO SEQUENCE, AND CORRESPONDING DEVICE - A temporal section that is defined by boundary images is selected in a video sequence. A maximum of k stable image frames are selected in the temporal section of image frames having a lowest temporal activity. Image fingerprints are computed from the selected stable image frames. A mega-frame image fingerprint data structure is constructed from the computed fingerprints. | 04-21-2016 |
20160110610 | IMAGE PROCESSOR, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM - Disclosed herein is an image processor including: a depth image acquisition portion adapted to acquire a depth image for an image frame making up a movie, the depth image representing, as a pixel value on an image plane, a subject distance from an imaging device; an edge extraction portion adapted to generate an edge image for the image frame and identify a picture area of a tracked target in the depth image on the basis of an outline of the tracked target represented by tracking results for the image frame at a previous time so as to extract an edge in an area limited on the basis of the picture area; and a tracking section adapted to compare the extracted edge against an outline candidate for the tracked target to find a likelihood so as to estimate the outline of the tracked target and output the outline as tracking results. | 04-21-2016 |
20160110612 | METHODS AND SYSTEMS FOR OBJECT-RECOGNITION AND LINK INTEGRATION IN A COMPOSITE VIDEO STREAM - Disclosed herein are methods and systems for object recognition and link integration in a composite video stream. One embodiment takes the form of a process that includes detecting an object of interest in a set of video frames. The process also includes tracking the movements of the detected object of interest across a subset of the video frames in the set of video frames. The process further includes generating a composite video stream from the video frames in the subset. The composite video stream shows the tracked movements of the detected object of interest without showing background data from the video frames in the subset. The process also includes outputting the generated composite video stream. | 04-21-2016 |
20160110613 | SYSTEM AND METHOD FOR CROWD COUNTING AND TRACKING - A video analytic system includes a depth stream sensor, spatial analysis module, temporal analysis module, and analytics module. The spatial analysis module iteratively identifies objects of interest based on local maximum or minimum depth stream values within each frame, removes identified objects of interest, and repeats until all objects of interest have been identified. The temporal analysis module associates each object of interest in the current frame with an object of interest identified in a previous frame, wherein the temporal analysis module utilizes the association between current frame objects of interest and previous frame objects of interest to generate temporal features related to each object of interest. The analytics module detects events based on the received temporal features. | 04-21-2016 |
20160110614 | PERSON DETECTION SYSTEM, METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - A system for distinguishing a first group of persons from a second group of persons among plural persons who are present inside an area, and the system includes processing circuitry that: detects first positions of the plural persons who are present inside the area based on a heat image; detects second positions of the second group of persons who are present inside the area based on identification signals transmitted from portable wireless terminals of the second group of persons; and determines, as the first group of persons, those who are present at positions different from the second positions based on the first and second positions. | 04-21-2016 |
20160110616 | APPARATUS FOR RECOGNIZING LANE PARTITION LINES - An apparatus for recognizing lane partition lines on opposite sides of a traveling lane in a processing area of a forward image captured by a camera mounted in a vehicle. In the apparatus, a lane change determiner is configured to determine whether or not there is a lane change made by the vehicle. A processing area changer is configured to, while it is determined by the lane change determiner that there is a lane change, change the processing area from a predefined processing area to a processing area that can accommodate the lane change. | 04-21-2016 |
20160110622 | METHOD, COMPUTER PROGRAM PRODUCT, AND SYSTEM FOR PROVIDING A SENSOR-BASED ENVIRONMENT - A method, computer program product, and system for visual identification of an item selected by a person during a transaction within an environment having a plurality of items. The method includes acquiring, using a visual sensor disposed within the environment, image information that includes at least a portion of the selected item, and analyzing the image information to determine a set of possible items for classifying the selected item. The method further includes selecting, based on personal profile information associated with the person, one of the set of possible items to thereby identify the selected item. | 04-21-2016 |
20160110623 | METHOD AND APPARATUS FOR SETTING REGION OF INTEREST - According to a first aspect of the present invention, a method for setting a region of interest may include: detecting a mark in an image frame photographed by a camera; and setting a region of interest using the detected mark. | 04-21-2016 |
20160110624 | Methods and Systems for Detection of a Consumable in a System - A computer-implemented method for detecting a presence of an object-of-interest in a system is provided. The method includes imaging a first object-of-interest including an identifier, wherein the imaging generates a first set of image data, and determining the portion of the image data including the identifier based on a predetermined location. The method further includes dividing the portion of the image data including the identifier into at least two segments. Next, the presence of the object-of-interest is determined by determining if intensity values within each segment exceed a presence threshold. | 04-21-2016 |
20160110625 | METHOD FOR NEAR-REALTIME WORKSPACE MAPPING - Motorized machinery, such as overhead cranes, is widely used in industries all over the world. It is not easy to move crane payloads without oscillation, increasing the likelihood of obstacle collisions and other accidents. One possible solution to such problems could be aiding the operator with a dynamic map of the workspace that shows the current position of obstacles. This method discloses the use of a camera to take images of the workspace, using image blurring to smooth the obtained images, and drawing contours to produce an individual, near real-time map of the workspace. In one or more embodiments, known obstacles may be tagged in a manner which is readable by the camera. This image and historical images of the same workspace are layered on top of one another to produce a map of obstacles on the workspace floor. This imaging and layering can produce a near real-time map of obstacles that can be used to guide heavy motorized machinery around a workspace without incident. | 04-21-2016 |
20160110631 | APPARATUS AND METHOD FOR DETECTING OBJECT USING MULTI-DIRECTIONAL INTEGRAL IMAGE - An apparatus and method for detecting an object using a multi-directional integral image are disclosed. The apparatus includes an area segmentation unit, an integral image calculation unit, and an object detection unit. The area segmentation unit places windows having a size of x*y on a full image having w*h pixels so that they overlap each other at their edges, thereby segmenting the full image into a single area, a double area and a quadruple area. The integral image calculation unit calculates a single directional integral image for the single area, and calculates multi-directional integral images for the double and quadruple areas. The object detection unit detects an object for the full image using the single directional integral image and the multi-directional integral images. | 04-21-2016 |
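The single-directional integral image this entry builds on is the standard summed-area table, which lets any rectangular sum be read off in four lookups. A minimal sketch (illustrative names; the multi-directional variants for the double and quadruple areas are not shown):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            # Running row sum plus the table entry directly above.
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def box_sum(ii, x0, y0, x1, y1):
    """Sum over the inclusive rectangle (x0,y0)-(x1,y1) in O(1)."""
    total = ii[y1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total
```

Constant-time box sums are what make sliding-window object detectors over w*h pixels tractable, which is why the apparatus precomputes integral images per segmented area.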
20160110878 | MOTION ESTIMATION IN REAL-TIME VISUAL ODOMETRY SYSTEM - A motion determination system is disclosed. The system may receive a first camera image and a second camera image. The system may receive a first range image corresponding to the first camera image. The system may generate a first range map by fusing the first camera image and the first range image. The system may iteratively process a plurality of first features in the first range map to determine a change in position of the machine. The plurality of second features may correspond to the plurality of first features, and each of the plurality of first and second features is denoted by feature points in an image space of the camera. | 04-21-2016 |
20160110880 | IMAGE PROCESSING FOR LAUNCH PARAMETERS MEASUREMENT OF OBJECTS IN FLIGHT - An example embodiment includes a method of measuring launch parameters of an object in flight. The method includes capturing images of an object in flight. A radius of the object and a center of the object are identified in each of the images. A velocity, an elevation angle, and an azimuth angle are calculated based on the radius of the object, the center of the object, and pre-measured camera alignment values. The method further includes cropping the images to a smallest square that bounds the object and flattening the images from spherical representations to Cartesian representations. The method also includes converting the Cartesian representations to polar coordinates with a range of candidate centers of rotations. Based on a fit of the polar image pair, the spin axis and spin rate are measured. | 04-21-2016 |
20160110882 | APPARATUS AND METHOD FOR DETECTING MULTIPLE OBJECTS USING ADAPTIVE BLOCK PARTITIONING - An apparatus for detecting multiple objects using adaptive block partitioning is disclosed. An object contour extracting unit is configured to extract contour information of an object using a local binary pattern (LBP) and a difference image between adjacent images. An adaptive block partitioning unit is configured to perform block partitioning of an object that is not overlapped, based on the extracted contour information. A motion quantization unit is configured to calculate a motion orientation histogram (MOH) of the object by performing N-directional quantization on a motion vector. An object detection unit is configured to detect the object using a block of the partitioned object, the contour information, and the MOH, and to estimate a moving direction of the object after labeling the detected object. The apparatus may process data effectively through eight-directional quantization of a motion vector of an object using motion information provided in advance from an ISP chip, detect the proper area of the object in units of blocks while minimizing motion error of the object through the adaptively sized blocks and the orientation histogram, and simultaneously estimate the moving direction of the object along with its detection. | 04-21-2016 |
20160110884 | SYSTEMS AND METHODS FOR IDENTIFYING OBJECTS WITHIN VIDEO CONTENT AND ASSOCIATING INFORMATION WITH IDENTIFIED OBJECTS - Systems and methods for identifying objects, such as advertised items or other content, within video content, which may be sequitur or non-sequitur in nature. The identified objects may then be selected from within the video content by a user to access metadata associated with the objects. The identified objects may be identified to viewers by cues. Cues may be oral, visual, or both oral and visual. One or more frames corresponding to a period of video depicting identified objects are displayed in separate object identifiers that may be viewed by the viewer and from within which the identified objects may be selected by the viewer. | 04-21-2016 |
20160110885 | CLOUD BASED VIDEO DETECTION AND TRACKING SYSTEM - A method for detecting and tracking multiple moving targets from airborne video within the framework of a cloud computing infrastructure. The invention simultaneously utilizes information from an optical flow generator and an active-learning histogram matcher in a complementary manner so as to rule out erroneous data that may otherwise, separately, yield false target information. The invention utilizes user-based voice-to-text color feature description for track matching with hue features from image pixels. | 04-21-2016 |
20160112682 | SYSTEM AND METHOD OF AUTOMATICALLY DETERMINING MATERIAL REACTION OR SENSITIVITY USING IMAGES - The disclosure extends to systems, methods and computer program products for automatically determining whether an energetic substance or a material has experienced a reaction (“go”) or a non-reaction (“no-go”) during an insult from an impact, friction, ESD or other small-scale sensitivity testing device. The systems, methods, and computer program products of the disclosure use a video capturing device, a CPU or computer, sensitivity test equipment, and a set of rules or instructions to be followed for quantifying and determining whether a reaction has occurred or not. | 04-21-2016 |
20160117546 | METHOD FOR DRIVER FACE DETECTION IN VIDEOS - A method for use with a stream of images defining a video. The method includes the steps of periodically conducting a face finding operation on an image in the stream. With respect to the last image in the stream preceding the image in which one or more faces were found, a tracker based upon wavelet decomposition is used to find a face for each face found in the last image for which no counterpart was found in the image. | 04-28-2016 |
20160117555 | APPARATUS AND METHOD FOR ROBUST EYE/GAZE TRACKING - At least one image registering unit records at least one series of images representing a subject. A control unit controls an operation sequence for the at least one image registering unit in such a manner that a subsequent data processing unit receives a repeating sequence of image frames therefrom, wherein each period contains at least one image frame of a first resolution and at least one image frame of a second resolution being different from the first resolution. Based on the registered image frames, the data processing unit produces eye/gaze tracking data with respect to the subject. | 04-28-2016 |
20160117558 | METHOD AND APPARATUS FOR SECURING COMPUTER VIDEO AND AUDIO SUBSYSTEMS - In general, embodiments of the invention include methods and apparatuses for securing otherwise unsecured computer audio and video subsystems. Embodiments of the invention perform watermarking of video and/or audio data streams output by a computer system. Additional security features that are included in embodiments of the invention include fingerprinting, snooping, capturing streams for local or remote analytics or archiving, and mixing of secure system content with local audio and video content. | 04-28-2016 |
20160117564 | SCANNING WINDOW IN HARDWARE FOR LOW-POWER OBJECT-DETECTION IN IMAGES - An apparatus includes a hardware sensor array including a plurality of pixels arranged along at least a first dimension and a second dimension of the array, each of the pixels capable of generating a sensor reading. A hardware scanning window array includes a plurality of storage elements arranged along at least a first dimension and a second dimension of the hardware scanning window array, each of the storage elements capable of storing a pixel value based on one or more sensor readings. Peripheral circuitry systematically transfers pixel values, based on sensor readings, into the hardware scanning window array, to cause different windows of pixel values to be stored in the hardware scanning window array at different times. Control logic, coupled to the hardware sensor array, the hardware scanning window array, and the peripheral circuitry, provides control signals to the peripheral circuitry to control the transfer of pixel values. | 04-28-2016 |
20160117824 | POSTURE ESTIMATION METHOD AND ROBOT - An image recognition method according to one aspect of the present invention acquires a camera image generated by capturing a subject using a camera (three-dimensional sensor). A plurality of coordinates corresponding to a plurality of pixels included in a predetermined area in the camera image are acquired. Subject distance information indicating a distance from the subject to the camera in the plurality of pixels is acquired. Then the posture of the subject surface included in the subject in the predetermined area is estimated based on the plurality of coordinates and the plurality of pieces of subject distance information that have been acquired. | 04-28-2016 |
20160117827 | APPARATUS AND METHOD FOR VISUALIZING LOITERING OBJECTS - A method for visualizing loitering objects includes: detecting at least one object determined to have been in a selected area of an input image for a preset time period; obtaining representative still images of each of the detected at least one object in respective time periods during the preset time period; and displaying the representative still images in a time order, or generating a video summary in which images of each of the detected at least one object, respectively included in the representative still images, are displayed together on a single image with indication of the time order. | 04-28-2016 |
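The first step this entry describes, finding objects that have stayed in a selected area for a preset time, reduces to dwell-time accounting over per-frame detections. A hedged sketch (the frame representation and threshold are assumptions; the real system would also track identities and handle gaps):

```python
def find_loiterers(frames, min_dwell):
    """frames: list of sets of object ids detected inside the selected
    area, one set per frame. Returns the ids whose total dwell time
    reaches min_dwell frames (consecutiveness is not required here).
    """
    dwell = {}
    for present in frames:
        for oid in present:
            dwell[oid] = dwell.get(oid, 0) + 1
    return sorted(oid for oid, n in dwell.items() if n >= min_dwell)
```

The representative-still and video-summary steps would then sample one frame per sub-period for each id this function returns.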
20160117828 | METHOD FOR ESTIMATING POSITION OF TARGET BY USING IMAGES ACQUIRED FROM CAMERA AND DEVICE AND COMPUTER-READABLE RECORDING MEDIUM USING THE SAME - A method for estimating a position of a target by using an image acquired from a camera is provided. The method includes the steps of: (a) setting multiple virtual estimated reference points by dividing a view-path; (b) comparing altitude values of the respective estimated reference points with those of respective points on terrain; (c) searching neighboring virtual estimated reference points among the multiple virtual estimated reference points to satisfy a requirement under which a difference between an altitude z | 04-28-2016 |
20160117837 | MODIFICATION OF AT LEAST ONE PARAMETER USED BY A VIDEO PROCESSING ALGORITHM FOR MONITORING OF A SCENE - There is provided a method, a device and a system for modifying at least one parameter used by a video processing algorithm, such as a motion detection algorithm, an object detection algorithm, or an object tracking algorithm, for monitoring of a scene ( | 04-28-2016 |
20160117838 | MULTIPLE-MEDIA PERFORMANCE MECHANISM - Embodiments of a system and method are disclosed for adjusting the execution of a multi-media performance in response to a live performance. Embodiments track and receive data regarding elements of the live performance, analyze the live performance data, determine the pace of the live performance, and use this data to appropriately execute elements of the accompanying multi-media performance. Some embodiments may perform this analysis and adjustment automatically. Certain embodiments may use voice detection devices, motion detection devices and musical detection devices to track the live performance. The live performance may be analyzed in relation to an electronic script, which may contain instructions related to the execution of the multi-media elements. | 04-28-2016 |
20160117839 | IMAGE OUTPUT DEVICE, IMAGE OUTPUT METHOD, AND COMPUTER PROGRAM PRODUCT - According to an embodiment, an image output device includes an extractor, a search unit, an associate unit, and a controller. The extractor is configured to extract a first parameter that varies in accordance with a movement of an object from at least one first image of the object, and extract a second parameter that varies in accordance with a movement of the object from each second image of the object. The search unit is configured to search for a second parameter similar to the first parameter. The associate unit is configured to associate the first image from which the first parameter is extracted with the second image from which the second parameter that is retrieved with respect to the first parameter is extracted. The controller is configured to instruct an output unit to output an image based on the first and second images that are associated with each other. | 04-28-2016 |
20160117840 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM - An information processing apparatus includes a tracking unit configured to identify a tracking region including a tracking object in an image, an identification unit configured to identify a motion region inside the tracking region in the image, a derivation unit configured to derive a ratio of the motion region relative to the tracking region, and a determination unit configured to determine whether to continue tracking the tracking object based on the derived ratio. | 04-28-2016 |
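The ratio test this entry describes, deciding whether to keep tracking based on how much of the tracking region is still moving, can be sketched directly. The mask representation, box convention, and ratio threshold are illustrative assumptions:

```python
def continue_tracking(motion_mask, track_box, min_ratio=0.2):
    """motion_mask: 2D grid of 0/1 motion flags for the whole image.
    track_box: (x0, y0, x1, y1), inclusive pixel bounds of the tracking
    region. Tracking continues while the fraction of moving pixels
    inside the box stays at or above min_ratio.
    """
    x0, y0, x1, y1 = track_box
    area = (x1 - x0 + 1) * (y1 - y0 + 1)
    moving = sum(motion_mask[y][x]
                 for y in range(y0, y1 + 1)
                 for x in range(x0, x1 + 1))
    return moving / area >= min_ratio
```

A low ratio suggests the tracker has drifted onto static background, which is the failure case this determination is meant to catch.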
20160117841 | OBJECT DETECTION APPARATUS - An object detection apparatus for detecting an object around a moving object carrying the apparatus by transmitting a probe wave and receiving reflections of the probe wave from the object via a plurality of ranging sensors attached to the moving object. In the apparatus, a tentative same-object determiner is configured to, if it is determined by a same-object determiner that the objects detected in the current and previous cycles are not the same, determine whether or not the objects detected in the current and previous cycles are likely to be the same. A determination suspender is configured to, if it is determined that the objects detected in the current and previous cycles are likely to be the same, suspend determining that the objects detected in the current and previous cycles are not the same. | 04-28-2016 |
20160117848 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, POI INFORMATION CREATION SYSTEM, WARNING SYSTEM, AND GUIDANCE SYSTEM - Even when the brightness of an image in a video changes, detection of an object in the background is realized by accurately separating the object in the background from the background of the image. An image processing device in accordance with the present invention executes a process of converting a color space of a processing target image and acquiring color information on the converted color space, a process of calculating, for the target image, an average value of the brightness of color information on a target region that contains an object to be detected, and a process of comparing, for the target region, the brightness of color information on each pixel with the calculated average value, a process of generating a corrected image with corrected brightness/darkness, and a process of extracting the object on the basis of the corrected image (See FIG. | 04-28-2016 |
20160125232 | DYNAMIC FACE IDENTIFICATION - Systems and methods associated with dynamic face identification are disclosed. One example method includes matching a query face against a set of clusters in a dynamic collection. Matching the query face against the set of clusters may facilitate identifying a person associated with the query face. The example method also includes matching the query face against a set of images in a static gallery to identify the person. Matching the query face against the static gallery may be performed when matching the query face against the set of clusters fails to identify the person. The example method also includes updating the set of clusters in the dynamic collection using the query face. | 05-05-2016 |
20160125234 | Athletic Attribute Determinations from Image Data - Systems and methods for determining athletic attributes are disclosed. Aspects of this disclosure relate to determining athletic attributes of an athlete from image data. One or more determinations may be based on alterations of image data between different images, such as alterations in pixels representing objects or portions of objects. Image data may be utilized to determine whether certain thresholds are met. Various threshold levels may be applied to one or more objects represented in the image data. Landmarks/distance calibrations may be utilized from time-stamped image data to allow for precise measuring of performance (including, but not limited to: sprint or agility times, flight time for vertical jump, distance for throws). Data retrieved or derived from the image data may be used in scoring and/or ranking athletes. Such data may be used to provide training advice or regimes to the athletes or other individuals, such as coaches or trainers. | 05-05-2016 |
20160125235 | IMAGE SEGMENTATION METHOD AND IMAGE SEGMENTATION DEVICE - An image segmentation method and an image segmentation device are provided. The method comprises receiving a video image of a dynamic movement of a target object, acquiring a full-image optical flow of the video image to estimate a first displacement of each pixel therein, acquiring a background optical flow of the video image to estimate a second displacement of a background pixel therein; comparing the first displacement with the second displacement to obtain a foreground region of the target object; extracting feature points in the video image in the foreground region, calculating a probability density of the feature points to determine a number of the target objects; performing visual tracking and movement trajectory analysis on the target object to track the same; performing stationary judgment and image segmentation on the target object according to an interframe displacement of the feature points, an interframe cutting window similarity and tracking box scaling. | 05-05-2016 |
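The foreground step in this entry compares each pixel's full-image flow displacement against the estimated background displacement. A minimal sketch of that comparison, with flows as grids of (dx, dy) pairs and an assumed displacement-difference threshold:

```python
def foreground_mask(full_flow, background_flow, threshold=1.0):
    """Mark a pixel as foreground (1) when its optical-flow displacement
    differs from the background displacement by more than `threshold`
    pixels; otherwise background (0).
    """
    mask = []
    for row_f, row_b in zip(full_flow, background_flow):
        mask_row = []
        for (fx, fy), (bx, by) in zip(row_f, row_b):
            diff = ((fx - bx) ** 2 + (fy - by) ** 2) ** 0.5
            mask_row.append(1 if diff > threshold else 0)
        mask.append(mask_row)
    return mask
```

Pixels whose motion matches the background (e.g. camera ego-motion) drop out, leaving only independently moving target regions for the later feature-point and tracking stages.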
20160125236 | IMAGE IDENTIFICATION METHOD AND IMAGE IDENTIFICATION DEVICE - An image identification method and an image identification device are provided. The method comprises acquiring a hand feature region within a sight from a first view by skin color detection; acquiring a feature and a position of a tip of a finger from the hand feature region by performing a pattern recognition for a morphological feature of a stretched hand; recording an interframe displacement of a feature point of the tip of the finger when the tip of the finger delimits a periphery of a target object to obtain a delimiting trajectory from the interframe displacement, closing the delimiting trajectory to form a full-perimeter geometry; projecting the full-perimeter geometry on a plane where a direction of the sight is perpendicular to a plane where the target object is located to obtain a projection region, performing an image identification using the projection region as an identification region of the target object. | 05-05-2016 |
20160125242 | METHOD, SYSTEM AND APPARATUS FOR PROVIDING IMPROVED AUDIENCE PARTICIPATION - The present disclosure provides an apparatus for detecting placards in a captured image, comprising: input circuitry operable to receive the captured image; detector circuitry operable to detect placards in the captured image on a basis of a predetermined shape and/or colour of the placards, the detector circuitry being operable to detect placards of a plurality of different shapes and/or colours; and counter circuitry operable to count a number of detected placards of each different shape and/or colour. | 05-05-2016 |
20160125243 | HUMAN BODY PART DETECTION SYSTEM AND HUMAN BODY PART DETECTION METHOD - A human body part detection system includes: a learning model storing unit storing a learning model; a depth image acquisition unit acquiring a depth image; a foreground human extraction unit extracting a human area; and a human body part detection unit detecting the human body part based on the human area and the learning model. The detection unit calculates a direction of a geodesic path at a first point based on a shortest geodesic path from a base point to the first point, selects a pixel pair at positions obtained after rotating the positions of a pixel pair used for calculation of the feature in the learning model in accordance with the direction, calculates a feature at the first point based on the depth of the selected pair, and determines a label corresponding to the human body part based on the feature at the first point and the learning model. | 05-05-2016 |
20160125247 | SURVEILLANCE SYSTEM AND SURVEILLANCE METHOD - A surveillance system including at least one image capture device and a processor, and a surveillance method are provided. The image capture device is coupled to the processor and captures surveillance images. The processor analyzes the correlation between multiple pieces of on-site data corresponding to the surveillance images and event information. Each piece of on-site data includes time information and detail information. Therefore, the processor determines that the event information is more relevant to the surveillance image corresponding to the detail information having a higher occurrence frequency during the duration of the event information. | 05-05-2016 |
20160125248 | METHOD AND SERVICE SERVER FOR PROVIDING PASSENGER DENSITY INFORMATION - A method and service server for providing passenger density information are provided. The service server for providing passenger density information of a car according to an embodiment of the invention may include: a motion vector detection unit that detects motion vectors generated by the movements of passengers from a captured image of the inside of the car; a head recognition unit that recognizes the heads of passengers from the image; and a density information generation unit that generates the passenger density information of the car by using one or more of the motion vectors, a result of head recognition of the passengers, and tag sensor information received from sensors installed in the car. | 05-05-2016 |
20160125249 | BLUR OBJECT TRACKER USING GROUP LASSO METHOD AND APPARATUS - A method and apparatus for tracking an object across a plurality of sequential images, where certain of the images contain motion blur. A plurality of normal templates of a clear target object image and a plurality of blur templates of the target object are generated. In the next subsequent image frame, a plurality of bounding boxes of potential object tracking positions are generated about the target object location in the preceding image frame. For each bounding box, a reconstruction error is generated to determine which bounding box has the maximum probability of being the object tracking result in the subsequent image frame. | 05-05-2016 |
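The candidate-selection step can be illustrated with a much simpler stand-in for the group-lasso reconstruction error: score each candidate patch by its distance to the nearest template (normal or blur) and keep the best. This is a hedged sketch, not the patented sparse-coding formulation:

```python
def best_candidate(candidates, templates):
    """candidates: flattened candidate patches (lists of pixel values).
    templates: flattened normal + blur templates of the target.
    Returns the index of the candidate with the smallest distance to
    its nearest template -- a crude proxy for minimum reconstruction
    error / maximum tracking probability.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    errors = [min(dist(c, t) for t in templates) for c in candidates]
    return errors.index(min(errors))
```

Including blur templates alongside clear ones is what lets a blurred candidate still score a low error instead of being rejected as drift.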
20160125252 | IMAGE RECOGNITION APPARATUS, PROCESSING METHOD THEREOF, AND PROGRAM - An image recognition apparatus ( | 05-05-2016 |
20160125586 | ICE ANALYSIS BASED ON ACTIVE AND PASSIVE RADAR IMAGES - An ice analyzer includes processing circuitry configured to receive a radiometer image including a geographic area including ice, receive a radar image including at least a portion of the geographic area, perform ice/water discrimination of the radiometer image and the radar image, generate a passive ice/water mask and an active ice/water mask based on the ice/water discrimination, merge the passive ice/water mask and the active ice/water mask into a typing mask, and type the ice based on the typing mask. | 05-05-2016 |
20160125587 | APPARATUS, METHOD, AND PROGRAM PRODUCT FOR TRACKING ITEMS - Apparatuses, methods, systems, and program products are disclosed for tracking items. An identification module identifies an item using one or more sensors of an information handling device. A location module receives location data for the item in response to identifying the item. A communication module shares the location data with one or more different information handling devices. | 05-05-2016 |
20160125598 | APPARATUS AND METHOD FOR AUTOMATED DETECTION OF LUNG CANCER - A method of computer aided detection of cancerous nodules of lung tissue, the method comprising: loading a plurality of contiguous tomographic images of a target area having a common axis; automatically detecting an object of interest in one of the loaded tomographic images of the target area; tracking the relative motion of the detected object of interest in adjacent ones of the loaded contiguous tomographic images; and identifying a potential lung cancer nodule responsive to the tracking. | 05-05-2016 |
20160125609 | Three Dimensional Recognition from Unscripted Sources Technology (TRUST) - The invention is a device and method for recognizing individuals of interest by analyzing images taken under real world lighting conditions with imperfect viewing. Recognition attributes are identified by running a plurality of processing algorithms on the image data which a) extract indices of recognition that are markers relating to specific individuals, b) create morphable, three dimensional computer graphics models of candidate individuals based on the indices of recognition, c) apply the viewing conditions from the real world data imagery to the three dimensional models, and d) declare recognition based on a high degree of correlation between the morphed model and the raw data image within a catalog of the indices of recognition of individuals of interest. The invention further encompasses the instantiation of the processing on very high throughput processing elements that may include FPGAs or GPUs. | 05-05-2016 |
20160125618 | METHOD, DEVICE, AND SYSTEM FOR PRE-PROCESSING A VIDEO STREAM FOR SUBSEQUENT MOTION DETECTION PROCESSING - There is provided a method for pre-processing a video stream for subsequent motion detection processing. The method comprises receiving a video stream of images, wherein each image in the video stream is represented by a first plurality of bits; enhancing the video stream of images by, for each image in the video stream: comparing the image to at least one previous image in the video stream so as to identify pixels where the image differs from the at least one previous image in the video stream, enhancing the image in those pixels where the image differs from the at least one previous image in the video stream; and converting the enhanced video stream of images so as to produce a converted video stream of images for subsequent motion detection processing, wherein each image in the converted video stream is represented by a second plurality of bits being lower than the first plurality of bits. | 05-05-2016 |
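The two-stage pipeline this entry describes, boost changed pixels, then reduce bit depth, can be sketched as below. Frame layout, gain, and bit widths are illustrative assumptions:

```python
def preprocess(frames, gain=2, out_bits=4, in_bits=8):
    """For each frame, amplify pixels that differ from the previous frame
    (so motion survives quantization), then convert each pixel from
    in_bits down to out_bits by discarding low-order bits.
    frames: list of 2D lists of in_bits-wide pixel values.
    """
    max_in, shift = (1 << in_bits) - 1, in_bits - out_bits
    out, prev = [], None
    for frame in frames:
        ref = prev if prev is not None else frame  # first frame: no change
        enhanced = []
        for row, prow in zip(frame, ref):
            enhanced.append([min(max_in, p * gain) if p != q else p
                             for p, q in zip(row, prow)])
        out.append([[p >> shift for p in row] for row in enhanced])
        prev = frame
    return out
```

Enhancing before converting is the point of the claim: changed pixels are pushed toward the top of the range so they remain distinguishable after the bit-depth reduction that makes downstream motion detection cheaper.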
20160125620 | DEVICE, SYSTEM AND METHOD FOR AUTOMATED DETECTION OF ORIENTATION AND/OR LOCATION OF A PERSON - The present invention relates to a device, system and method for automated detection of orientation and/or location of a person. To increase the robustness and accuracy, the proposed device comprises an image data interface ( | 05-05-2016 |
20160125633 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT TO REPRESENT MOTION IN COMPOSITE IMAGES - In an example embodiment a method, apparatus and computer program product are provided. The method includes facilitating access of a plurality of images associated with a scene comprising at least one moving object, and segmenting the plurality of images into foreground regions and background regions based on changes in corresponding image regions between the images. The foreground regions comprise the at least one moving object. The method includes determining at least one object parameter associated with the at least one moving object in the foreground regions and generating a background image based on the background regions, and modifying at least one of the foreground regions and the background image to represent a motion of the at least one moving object based on the at least one object parameter. The method includes generating a composite image based on the modified at least one of the foreground regions and the background image. | 05-05-2016 |
20160125645 | GRADING AND MONITORING OF A GEOGRAPHICAL REGION - A grading and monitoring system that evaluates a quality index of a neighborhood via satellite images is described. The system utilizes a fuzzy-logic rule-based technique in determining the quality of the neighborhood. The crisp input parameters that define the characteristics of a neighborhood are first fuzzified, and based on a set of rules obtained from an expert's knowledge, an output fuzzy set of type-2 is obtained. Further, the output fuzzy set is aggregated and type-reduced to obtain an output crisp value corresponding to the neighborhood's quality. The system also monitors changes in the neighborhood quality at predetermined time intervals. | 05-05-2016 |
20160132532 | MULTI-TIER INTELLIGENT INFRASTRUCTURE MANAGEMENT SYSTEMS FOR COMMUNICATIONS SYSTEMS AND RELATED EQUIPMENT AND METHODS - Methods of identifying available connector ports on rack mounted equipment use an image capture device to capture an image of a front face of an equipment rack. The captured image is compared to at least one stored image. A patch cord insertion status of at least one connector port included on an item of equipment that is mounted on the equipment rack is then determined based at least in part on the comparison of the captured image to the at least one stored image. | 05-12-2016 |
20160132714 | FIRE URGENCY ESTIMATOR IN GEOSYNCHRONOUS ORBIT (FUEGO) - A fire detector is disclosed that successively images a particular area from geosynchronous Earth orbit satellite to attain very good signal-to-noise ratios against Poisson fluctuations within one second. Differences between such images allow for the automatic detection of small fires greater than 12 square meters. Imaging typically takes place in transparent bands of the infrared spectrum, thereby rendering smoke from the fire and light clouds somewhat transparent. Several algorithms are disclosed that can help reduce false fire alarms, and their efficiencies are shown. Early fire detection and response would be of great value in the United States and other nations, as wild land fires destroy property and lives and contribute around five percent of the US global carbon dioxide contribution. Such apparatus would incorporate modern imaging detectors, software, and algorithms able to detect heat from early and small fires, and yield detection times on a scale of minutes. | 05-12-2016 |
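The core detection step, differencing successive geosynchronous IR frames and flagging pixels whose brightness jumps, can be sketched as follows. Function name and threshold are illustrative; the disclosed system adds false-alarm-rejection algorithms on top of this:

```python
def detect_hotspots(prev_img, curr_img, threshold):
    """Return (x, y) coordinates of pixels whose infrared brightness rose
    by more than `threshold` between two successive frames of the same
    scene -- candidate new fires.
    """
    hits = []
    for y, (prow, crow) in enumerate(zip(prev_img, curr_img)):
        for x, (p, c) in enumerate(zip(prow, crow)):
            if c - p > threshold:
                hits.append((x, y))
    return hits
```

Differencing against a recent frame of the same fixed scene (possible only from geosynchronous orbit) cancels the static background, which is what lets fires as small as 12 square meters rise above the Poisson noise floor.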
20160132716 | METHOD AND DEVICE FOR RECOGNIZING DANGEROUSNESS OF OBJECT - Disclosed is an object dangerousness recognition method comprising steps of generating, based on an image captured by a stereo camera, a heterogeneous point cloud of an object in the image, each point in the heterogeneous point cloud having depth information and planar image information; determining, based on the depth information and the planar image information of each point in the heterogeneous point cloud, a solid shape of the object, and then, generating a first dangerousness parameter according to the solid shape; determining, based on the depth information and the planar image information of each point in the heterogeneous point cloud, a surface feature of the object, and then, generating a second dangerousness parameter according to the surface feature; and generating, based on the first and second dangerousness parameters, a comprehensive dangerousness parameter of the object. | 05-12-2016 |
20160132728 | Near Online Multi-Target Tracking with Aggregated Local Flow Descriptor (ALFD) - Systems and methods are disclosed to track targets in a video by capturing a video sequence, detecting data association between detections and targets, where detections are generated using one or more image-based detectors (tracking-by-detection); identifying one or more targets of interest and estimating a motion of each individual; and applying an Aggregated Local Flow Descriptor to accurately measure an affinity between a pair of detections and a Near Online Multi-target Tracking to perform multiple target tracking given a video sequence. | 05-12-2016 |
20160132729 | METHODS AND APPARATUS TO MEASURE BRAND EXPOSURE IN MEDIA STREAMS - Methods and apparatus to measure brand exposure in media streams are disclosed. An example apparatus disclosed herein includes a brand identifier detector to compare first data associated with a first scene of a media stream with second data associated with a reference scene including a first brand identifier to detect the first brand identifier in the first scene of the media stream. The example apparatus also includes a measure and tracking module to combine respective locations of the first brand identifier in respective frames of a first sequence of image frames forming the first scene to determine a weighted location for the first brand identifier. The example apparatus further includes a report generator to report appearance data corresponding to the first brand identifier, the appearance data including the weighted location for the first brand identifier. | 05-12-2016 |
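The "weighted location" combination in the brand-exposure entry above can be illustrated with a confidence-weighted centroid. A minimal sketch under the assumption that each per-frame detection carries an (x, y) position and a weight; the actual weighting scheme is not specified by the abstract:

```python
def weighted_location(detections):
    """Combine per-frame (x, y, weight) detections of a brand identifier
    into a single weighted location. Returns None when the total weight
    is zero. The weighting scheme is an illustrative assumption."""
    total_w = sum(w for _, _, w in detections)
    if total_w == 0:
        return None
    x = sum(px * w for px, _, w in detections) / total_w
    y = sum(py * w for _, py, w in detections) / total_w
    return (x, y)
```

With equal weights this reduces to the plain centroid of the per-frame locations.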
20160132731 | VIDEO SURVEILLANCE SYSTEM, VIDEO PROCESSING APPARATUS, VIDEO PROCESSING METHOD, AND VIDEO PROCESSING PROGRAM - A video processing apparatus includes a video analyzer that analyzes video data captured by a surveillance camera, detects an event belonging to a specific category, and outputs a detection result, a display controller that displays, together with a video of the video data, a category setting screen for setting a category of an event included in the video, and a learning data accumulator that accumulates, as learning data together with the video data, category information set in accordance with an operation by an operator to the category setting screen. The video analyzer performs learning processing by using the learning data accumulated in the learning data accumulator. | 05-12-2016 |
20160132732 | Remote Heart Rate Estimation - For remote heart rate estimation, a method detects an object of interest (OOI) in each image of a video data and tracks the OOI in each image of the video data. The method identifies a region of interest (ROI) within the OOI and generates a plurality of super pixels from a plurality of pixels in each ROI. The method further generates a super-pixel time series from the plurality of super pixels in each image and removes interfering signals from the super-pixel time series. The method further models the super-pixel time series as a super-pixel model and calculates a heart beat signal from the super-pixel model. The method calculates heart characteristics from the heart beat signal. The heart characteristics include one or more of a heart rate, an inter-beat interval, and a heart rate variability. | 05-12-2016 |
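The final step of the remote heart-rate entry above, recovering a heart rate from a cleaned super-pixel time series, can be approximated by picking the dominant frequency in a physiological band. A simplified sketch, not the model-based estimator the abstract describes; the band limits are assumptions:

```python
import cmath


def heart_rate_bpm(signal, fps, lo_bpm=40, hi_bpm=180):
    """Estimate a heart rate from a (pre-cleaned) time series by picking
    the dominant DFT frequency inside a physiological band. A simplified
    stand-in for the super-pixel model in the abstract."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]  # remove the DC component
    best_bpm, best_power = None, -1.0
    for k in range(1, n // 2):
        bpm = (k * fps / n) * 60.0  # frequency bin k in beats per minute
        if lo_bpm <= bpm <= hi_bpm:
            coeff = sum(centered[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t in range(n))
            power = abs(coeff) ** 2
            if power > best_power:
                best_bpm, best_power = bpm, power
    return best_bpm
```

The inter-beat interval and heart-rate variability the abstract mentions would need beat-to-beat timing, which this frequency-domain sketch does not provide.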
20160132752 | Classifying User Activities Using Eye Fixation Clustering, Fixation Features, and Regions of Interest - A computing device classifies user activities. The device receives eye tracking data for a person viewing a page having multiple contiguous regions. The eye tracking data comprises a temporal sequence of fixations, where each fixation has a duration and a location. The device partitions the fixations into clusters, where each cluster has a consecutive sub-sequence of the fixations. The device assigns a provisional user activity label to each fixation based on a set of characteristics of the fixation. The device also groups together consecutive fixations that have the same label to partition the fixations into groups. For each group that matches a respective cluster, the device retains the provisional label assignment as a final user activity label assigned to each of the fixations in the respective group. The device also reconciles non-matching groups with non-matching clusters, using the regions, to form a set of non-overlapping modified groups. | 05-12-2016 |
20160132754 | INTEGRATED REAL-TIME TRACKING SYSTEM FOR NORMAL AND ANOMALY TRACKING AND THE METHODS THEREFOR - The ability to identify anomalous behavior in video recordings is important for security and public safety. Current identification techniques, however, suffer from a number of limitations. The present invention describes a novel identification technique that permits unsupervised, automatic identification of moving objects and anomaly detection in real-time recordings (MovA). The present invention specifically utilizes a novel real-time manifold learning system (RML), which generates a semantic crowd behavior descriptor that the inventors call a Trackogram. The Trackogram can be used to identify anomalous crowd behavior collected from video recordings in a real-time manner. MovA can be used to detect anomalies in standard video datasets. Importantly, MovA is also able to identify anomalies in night-vision stereo sequences. Ultimately, MovA could be incorporated into a number of existing products, including video monitoring cameras or night-vision goggles. | 05-12-2016 |
20160133002 | METHOD AND DEVICE TO DETERMINE LANDMARK FROM REGION OF INTEREST OF IMAGE - At least some example embodiments disclose a device and a method for determining a landmark of an image. The device may compare, to a key landmark set as a reference landmark, a first candidate landmark detected from a region of interest (ROI) of an input image and a second candidate landmark tracked from a previous frame, and determine a landmark similar to the key landmark to be a final landmark. | 05-12-2016 |
20160133014 | Marking And Tracking An Area Of Interest During Endoscopy - An area of interest of a patient's organ may be identified based on the presence of a possible lesion during an endoscopic procedure. The location of the area of interest may then be tracked relative to the camera view being displayed to the endoscopist in real-time or near real-time during the endoscopic procedure. If the area of interest is visually marked on the display, the visual marking is moved with the area of interest as it moves within the camera view. If the area of interest moves outside the camera view, a directional indicator may be displayed to indicate the location of the area of interest relative to the camera view to assist the endoscopist in relocating the area of interest. | 05-12-2016 |
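The directional indicator in the endoscopy entry above, pointing toward an area of interest that has left the camera view, reduces to an angle from the view centre. A minimal sketch; the coordinate conventions and return format are assumptions:

```python
import math


def direction_indicator(roi_center, view_width, view_height):
    """Return None when the tracked area of interest is inside the camera
    view, otherwise an angle in degrees (measured from the view centre,
    0 = right, counted clockwise with y pointing down) hinting where the
    area lies. A sketch of the directional-indicator idea above."""
    x, y = roi_center
    if 0 <= x < view_width and 0 <= y < view_height:
        return None  # visible: the on-screen marker tracks it directly
    dx = x - view_width / 2.0
    dy = y - view_height / 2.0
    return math.degrees(math.atan2(dy, dx)) % 360.0
```

A display layer would render this angle as an arrow at the edge of the view; that rendering is outside the scope of the sketch.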
20160133021 | IMAGING POSITION DETERMINATION DEVICE AND IMAGING POSITION DETERMINATION METHOD - An imaging position determination device includes an image reception unit that acquires an image and a position of a person within a monitoring area, an eye state detection unit that detects an open and closed state of eyes of a person from the image acquired by the image reception unit, an eye state map creation unit that creates an eye state map which shows an eye state of the person in the monitoring area based on the open and closed state of eyes of the person that is acquired by the eye state detection unit, and an adjustment amount estimation unit that determines an imaging position of the person in the monitoring area based on the eye state map that is created by the eye state map creation unit. | 05-12-2016 |
20160133022 | SYSTEMS AND METHODS FOR TRACKING AN OBJECT - A method for tracking an object by an electronic device is described. The method includes detecting an object position in an initial frame to produce a detected object position. The method also includes measuring one or more landmark positions based on the detected object position or a predicted object position. The method further includes predicting the object position in a subsequent frame based on the one or more landmark positions. The method additionally includes determining whether object tracking is lost. The method also includes avoiding performing object detection for the subsequent frame in a case that object tracking is maintained. | 05-12-2016 |
20160133044 | Alternate Viewpoint Image Enhancement - In one embodiment, panoramic images, images bubbles, or any two-dimensional views of three-dimensional subject matter are enhanced with one or more alternate viewpoints. A controller receives data indicative of a point on the two-dimensional perspective and accesses a three-dimensional location based on the point. The controller selects an image bubble based on the three-dimensional location. The three-dimensional location may be determined according to a depth map corresponding to the point. A portion of the image bubble is extracted and incorporated into the two-dimensional perspective. The resulting image may be a seamless enhanced resolution image or include a picture-in-picture enhanced resolution window including subject matter surrounding the selected point. | 05-12-2016 |
20160135903 | SINGLE-MARKER NAVIGATION - A medical data processing method for determining the spatial relationship of a first medical device relative to a second medical device, the method being constituted to be executed by a computer and comprising the following steps: d) acquiring first medical device position data comprising first medical device position information describing the position of the first medical device, wherein the first medical device position data is acquired based on reading, from a marker device having for example a fixed spatial relationship relative to the first medical device, the first medical device position information or information which allows access to the first medical device position information; e) acquiring second medical device position data comprising second medical position information describing the position of the second medical device; f) determining, based on the first medical device position data and the second medical device position data, relative position data comprising relative position information describing the spatial relationship of the second medical device relative to the first medical device. | 05-19-2016 |
20160137157 | OBJECT RECOGNITION DEVICE - An object recognition device includes: a radar that detects objects in the vicinity of a vehicle; a camera that detects objects by capturing an image of the vicinity of the vehicle; and an identical object recognition unit configured to recognize that an object detected by the radar and an object detected by the camera are the same object when the objects are present within a predetermined position range. The identical object recognition unit determines whether the objects are the same object by reducing the predetermined position range when detection using the radar or the camera is interrupted and then detection is started again. | 05-19-2016 |
20160137202 | TRAVEL LANE MARKING RECOGNITION APPARATUS - A travel lane marking probability calculating unit calculates a travel lane marking probability of each of travel lane marking candidates based on a calculation condition. A travel lane marking recognizing unit recognizes, as a travel lane marking, a travel lane marking candidate having a travel lane marking probability that is a threshold value or higher, among the travel lane marking candidates. A lane change detecting unit detects that an own vehicle is in the midst of a lane change. When the own vehicle is in the midst of a lane change, a condition changing unit changes the calculation condition to allow the travel lane marking probability to be more easily increased compared to when the own vehicle is not in the midst of a lane change, or changes the threshold value to be lower than that when the own vehicle is not in the midst of a lane change. | 05-19-2016 |
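The threshold adjustment in the lane-marking entry above (making recognition easier while a lane change is in progress) can be sketched as a simple conditional threshold. The probability values and threshold levels here are illustrative:

```python
def recognize_markings(candidates, lane_change_in_progress,
                       base_threshold=0.7, relaxed_threshold=0.5):
    """Keep candidates whose travel-lane-marking probability clears a
    threshold, lowering the threshold while a lane change is in progress,
    per the abstract above. `candidates` maps candidate id -> probability;
    the threshold values are illustrative assumptions."""
    threshold = relaxed_threshold if lane_change_in_progress else base_threshold
    return {cid for cid, p in candidates.items() if p >= threshold}
```

The abstract's alternative (raising the calculated probability itself during a lane change) would achieve the same effect as lowering the threshold.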
20160138893 | REMOTELY GUIDED GUN-FIRED AND MORTAR ROUNDS - A method for guiding a gun-fired or mortar-fired round towards an intended target. The method includes: capturing image data from an image pick-up device during a descent of the round; transmitting the image data to a control platform remotely located from the round; receiving the image data at the control platform; displaying the image data to a user on a monitor at the control platform; manually identifying one or more features in the image data on the monitor; and stabilizing the image on the monitor based on the one or more identified features. | 05-19-2016 |
20160140385 | USER IDENTIFICATION SYSTEM AND METHOD FOR IDENTIFYING USER - The present invention discloses an identification system which includes an image sensor, a storage unit and a comparing unit. The image sensor captures a plurality of images of the motion trajectory generated by a user at different timings. The storage unit has stored motion vector information of a group of users including or not including the user generating the motion trajectory. The comparing unit compares the plurality of images with the motion vector information to identify the user. The present invention also provides an identification method. | 05-19-2016 |
20160140386 | SYSTEM AND METHOD FOR TRACKING AND RECOGNIZING PEOPLE - A tracking and recognition system is provided. The system includes a computer vision-based identity recognition system configured to recognize one or more persons, without a priori knowledge of the respective persons, via online discriminative learning of appearance signature models of the respective persons. The computer vision-based identity recognition system includes a memory physically encoding one or more routines which, when executed, cause pairwise constraints to be constructed between the unlabeled tracking samples. The computer vision-based identity recognition system also includes a processor configured to receive unlabeled tracking samples collected from one or more person trackers and to execute the routines stored in the memory via one or more algorithms to construct the pairwise constraints between the unlabeled tracking samples. | 05-19-2016 |
20160140391 | AUTOMATIC TARGET SELECTION FOR MULTI-TARGET OBJECT TRACKING - Techniques related to automatic target object selection from multiple tracked objects for imaging devices are discussed. Such techniques may include generating one or more object selection metrics such as accumulated distances from frame center, accumulated velocities, and trajectory comparisons of predicted to actual trajectories for tracked objects and selecting the target object based on the object selection metric or metrics. | 05-19-2016 |
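One of the selection metrics named in the multi-target entry above, accumulated distance from the frame centre, can be sketched as follows. Combining it with the other listed metrics (accumulated velocities, trajectory comparisons) is not modeled here:

```python
def select_target(tracks, frame_center):
    """Pick the target whose accumulated distance from the frame centre
    is smallest, one of the object selection metrics listed above.
    `tracks` maps object id -> list of (x, y) positions over recent
    frames; structure and tie-breaking are illustrative assumptions."""
    cx, cy = frame_center

    def accumulated_distance(positions):
        # Sum of Euclidean distances of each tracked position to centre.
        return sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
                   for x, y in positions)

    return min(tracks, key=lambda oid: accumulated_distance(tracks[oid]))
```

A fuller implementation would normalize each metric and weight them before choosing, since raw distances and velocities are on different scales.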
20160140392 | METHOD AND SYSTEM FOR PROCESSING VIDEO CONTENT - Various aspects of a method and system to process video content for extraction of moving objects from image sequences of the video content are disclosed herein. In an embodiment, the method includes determination of one or more object contours of one or more moving objects in the video content. A first object bounding box (OBB) that encompasses a first object contour of a first moving object is created based on the determined one or more object contours. A first object mask for the first moving object is generated in a first destination image frame, based on infilling of the first object contour in the created first OBB. | 05-19-2016 |
20160140393 | IDENTIFICATION APPARATUS AND IDENTIFICATION METHOD - An identification apparatus for identifying vehicles and/or vehicle components includes: an image signal receiving unit receiving from an image signal source an image signal containing image data of a vehicle and/or of at least one vehicle component; a memory unit storing data sets of vehicles and/or of at least one vehicle component; a comparison unit comparing the received image data with image data of the data sets stored in the memory unit and identifying those data sets which contain image data that match the received image data; and an image signal data generating unit which generates output image signal data for at least one output image, the output image signal data containing at least a portion of the received image data and data from the data sets stored in the memory unit which have been identified by the comparison unit. | 05-19-2016 |
20160140394 | VISUAL OBJECT TRACKING SYSTEM WITH MODEL VALIDATION & MANAGEMENT - System, apparatus, method, and computer-readable media for on-the-fly object tracking in captured image data. An image or video stream is processed to detect and track an object in concurrence with generation of the stream by a camera module. In one exemplary embodiment, HD image frames are processed at a rate of 30 fps, or more, to track one or more target objects. In embodiments, object detection is validated prior to employing detected object descriptor(s) as learning data to generate or update an object model. A device platform including a camera module and comporting with the exemplary architecture may provide 3A functions based on objects robustly tracked in accordance with embodiments. | 05-19-2016 |
20160140395 | ADAPTIVE SAMPLING FOR EFFICIENT ANALYSIS OF EGO-CENTRIC VIDEOS - A method, non-transitory computer-readable medium, and apparatus for adaptive sampling an ego-centric video to extract features for performing an analysis are disclosed. For example, the method captures the ego-centric video, determines a spatio-temporal location of interest within the ego-centric video, applies an adaptive sampling centered around the spatio-temporal location of interest to obtain one or more spatio-temporal patches, extracts one or more features using the one or more spatio-temporal patches and performs an analysis based on the one or more features. | 05-19-2016 |
20160140397 | SYSTEM AND METHOD FOR VIDEO CONTENT ANALYSIS USING DEPTH SENSING - A method and system for performing video content analysis based on two-dimensional image data and depth data are disclosed. Video content analysis may be performed on the two-dimensional image data, and then the depth data may be used along with the results of the video content analysis of the two-dimensional data for tracking and event detection. | 05-19-2016 |
20160140399 | OBJECT DETECTION APPARATUS AND METHOD THEREFOR, AND IMAGE RECOGNITION APPARATUS AND METHOD THEREFOR - An object detection apparatus includes an extraction unit configured to extract a plurality of partial areas from an acquired image, a distance acquisition unit configured to acquire a distance from a viewpoint for each pixel in the extracted partial area, an identification unit configured to identify whether the partial area includes a predetermined object, a determination unit configured to determine, among the partial areas identified to include the predetermined object by the identification unit, whether to integrate identification results of a plurality of partial areas that overlap each other based on the distances of the pixels in the overlapping partial area, and an integration unit configured to integrate the identification results of the plurality of partial areas determined to be integrated to detect a detection target object from the integrated identification result of the plurality of partial areas. | 05-19-2016 |
20160140420 | MEANS FOR USING MICROSTRUCTURE OF MATERIALS SURFACE AS A UNIQUE IDENTIFIER - Methods are described to automatically authenticate an object by comparing object images with reference images, the object images being characterized in that the visual elements used for comparison are non-disturbing to the naked eye. Some of the described approaches provide the operator with visible features to locate the area to be imaged. Ways for real-time implementation are also proposed, enabling user-friendly detection using mobile devices such as smartphones. | 05-19-2016 |
20160140695 | ADAPTIVE PATH SMOOTHING FOR VIDEO STABILIZATION - Techniques and architectures for video stabilization can transform a shaky video to a steady-looking video. A path smoothing process can generate an optimized camera path for video stabilization. With a large smoothing kernel, a path smoothing process can remove both high frequency jitters and low frequency bounces, and at the same time can preserve discontinuous camera motions (such as quick panning or scene transition) to avoid excessive cropping or geometry distortion. A sliding window based implementation includes a path smoothing process that can be used for real-time video stabilization. | 05-19-2016 |
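The path-smoothing entry above rests on kernel smoothing of the camera path. A centred moving average is the simplest version; the discontinuity-preserving behaviour the abstract mentions (keeping quick pans and scene transitions intact) is deliberately omitted from this sketch:

```python
def smooth_path(path, window=5):
    """Smooth a 1-D camera-path signal with a centred moving average,
    the simplest stand-in for the large-kernel path smoothing described
    above. The window shrinks at the boundaries instead of padding."""
    half = window // 2
    out = []
    for i in range(len(path)):
        lo, hi = max(0, i - half), min(len(path), i + half + 1)
        out.append(sum(path[lo:hi]) / (hi - lo))
    return out
```

A sliding-window variant, as in the abstract, would apply this incrementally over a buffer of recent frames rather than over the whole path at once.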
20160140727 | A METHOD FOR OBJECT TRACKING - The present invention relates to a method for object tracking in which tracking is realized based on object classes, the object classifiers are trainable without a need for supervision, tracking errors are reduced, and robustness is increased. | 05-19-2016 |
20160140728 | HEAD MOUNTED DISPLAY, DISPLAY SYSTEM, CONTROL METHOD OF HEAD MOUNTED DISPLAY, AND COMPUTER PROGRAM - A transmission-type head mounted display includes a detection unit that detects a first target from outside scenery, an image display unit which is capable of transmitting the outside scenery and is capable of displaying an image, and a display image setting unit that causes the image display unit to display a first moving image which is a moving image associated with the detected first target. | 05-19-2016 |
20160140732 | TOPOLOGY DETERMINATION FOR NON-OVERLAPPING CAMERA NETWORK - Image-matching tracks the movements of the objects from initial camera scenes to ending camera scenes in non-overlapping cameras. Paths are defined through scenes for pairings of initial and ending cameras by different respective scene entry and exit points. For each of said camera pairings a combination path having a highest total number of tracked movements relative to all other combinations of one path through the initial and ending camera scene is chosen, and the scene exit point of the selected path through the initial camera and the scene entry point of the selected path into the ending camera define a path connection of the initial camera scene to the ending camera scene. | 05-19-2016 |
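The topology-determination entry above selects, per camera pairing, the exit/entry path combination with the most tracked movements. A minimal sketch of that selection, assuming the movement counts have already been accumulated by image matching:

```python
def best_path_connection(movement_counts):
    """Given tracked-movement counts keyed by (exit_point, entry_point)
    path pairs for one initial/ending camera pairing, return the pair
    with the highest count: the path connection selected above.
    `movement_counts` is a dict like {("exitA", "entry2"): 17, ...};
    the key format is an illustrative assumption."""
    if not movement_counts:
        return None
    return max(movement_counts, key=movement_counts.get)
```

Running this once per camera pairing yields the network topology: each returned pair links one camera's scene exit to another camera's scene entry.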
20160140848 | LANE CORRECTION SYSTEM, LANE CORRECTION APPARATUS AND METHOD OF CORRECTING LANE - The embodiment provides a method of correcting a lane. The method includes receiving first lane information detected by a lane departure warning system; comparing the received first lane information with previously stored second lane information to identify a degree of variation of a lane as a function of time; sensing whether a fault detection of the received first lane information exists according to the identified degree of variation of the lane; correcting the received first lane information when the fault detection of the first lane information is sensed; and transmitting the corrected lane information to the lane departure warning system. | 05-19-2016 |
20160148040 | APPARATUS AND METHOD FOR OBJECT DETECTION BASED ON DOMINANT PIXEL INFORMATION - Provided are an apparatus and a method for detecting an object in an image, and particularly, an apparatus and a method for detecting a vehicle in an image. The present invention has been made in an effort to provide an apparatus and a method for object detection based on dominant pixel information which generate an average image and a standard deviation image of training object images, acquire a feature area representing a feature of a training object, and detect an object by using, as a feature vector, a value acquired by calculating a similarity between the average image in the feature area and a target image, thereby efficiently detecting a target object with a small amount of calculation. | 05-26-2016 |
20160148044 | SYSTEM AND METHOD FOR COMPUTER VISION BASED TRACKING OF AN OBJECT - A system and method for computer vision based tracking of a human form may include detecting a shape of an object in an image of a space and determining the probability of the object having a human-form shape based on movement of the object. If the probability of the object being a human form is above a predetermined threshold, the object is tracked; if the probability is below the threshold, the tracking is terminated. Occupancy in the space may be determined based on the tracking of the object. | 05-26-2016 |
20160148054 | Fast Object Tracking Framework For Sports Video Recognition - A solution is provided for object tracking in a sports video. A determination is made whether a position of the object was identified in a previous video frame. If the position of the object was identified in the previous video frame, a new position of the object is identified in a current video frame based on the identified position of the object in the previous video frame. An expected position of the object is identified based on the identified position of the object in the previous video frame and a trained object classification model. A determination is made whether the new position is consistent with the expected position; if the new position is consistent with the expected position, the new position is stored as the position of the object in the current frame. | 05-26-2016 |
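The consistency check in the sports-video entry above (accepting a new position only when it agrees with the position a trained model expects) can be sketched with a Euclidean tolerance. The tolerance value and the fallback behaviour on rejection are assumptions:

```python
def update_track(prev_pos, new_pos, expected_pos, tolerance=20.0):
    """Accept the newly matched position only if it is consistent with
    the position an (assumed) trained model expects, per the check in
    the abstract above. Positions are (x, y) pixel tuples; the Euclidean
    tolerance is an illustrative assumption."""
    if prev_pos is None or new_pos is None:
        return None  # no prior track or no match: nothing to verify
    dx = new_pos[0] - expected_pos[0]
    dy = new_pos[1] - expected_pos[1]
    if (dx * dx + dy * dy) ** 0.5 <= tolerance:
        return new_pos  # consistent: store as the current-frame position
    return None  # inconsistent: reject and fall back to re-detection
```

Returning None models the abstract's implicit failure path, where the tracker would presumably re-run full detection rather than trust the inconsistent match.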
20160148058 | TRAFFIC VIOLATION DETECTION - A method for detecting a vehicle running a stop signal positioned at an intersection includes acquiring a sequence of frames from at least one video camera monitoring an intersection being signaled by the stop signal. The method includes defining a first region of interest (ROI) including a road region located before the intersection on the image plane. The method includes searching the first ROI for a candidate violating vehicle. In response to detecting the candidate violating vehicle, the method includes tracking at least one trajectory of the detected candidate violating vehicle across a number of frames. The method includes classifying the candidate violating vehicle as belonging to one of a violating vehicle and a non-violating vehicle based on the at least one trajectory. | 05-26-2016 |
20160148063 | TRAFFIC LIGHT DETECTION - A method and a system for traffic light detection are provided. The method may include: obtaining a color image; calculating pixel response values for pixels of the color image, respectively, where each of the pixel response values may be calculated using R, G, and B values of a corresponding pixel directly, such that pixel response values of red traffic light pixels are substantially distributed on a first side of a predetermined range and pixel response values of green traffic light pixels are substantially distributed on a second side of the predetermined range which is opposite to the first side; identifying pixels whose pixel response values are distributed on the first side or the second side as candidate pixels; identifying candidate blobs based on the candidate pixels; and verifying whether the candidate blobs are traffic lights. Efficiency and reliability may be improved. | 05-26-2016 |
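The per-pixel response value in the traffic-light entry above, computed directly from R, G, and B so that red-light pixels land on one side of a range and green-light pixels on the other, could take a normalized red-minus-green form. The specific formula and cut-offs here are assumptions, not the patented calculation:

```python
def pixel_response(r, g, b):
    """Map a pixel's R, G, B values to a scalar that pushes red-light
    pixels toward the positive side and green-light pixels toward the
    negative side of a range, in the spirit of the direct response
    calculation above. This particular formula is an assumption."""
    total = r + g + b
    if total == 0:
        return 0.0
    return (r - g) / total  # normalized to [-1, 1]


def classify_candidates(pixels, red_cut=0.3, green_cut=-0.3):
    """Label pixels as red/green candidates by which side of the range
    their response falls on; the cut-off values are illustrative."""
    labels = []
    for r, g, b in pixels:
        v = pixel_response(r, g, b)
        labels.append("red" if v >= red_cut
                      else "green" if v <= green_cut else "none")
    return labels
```

Candidate pixels would then be grouped into blobs and verified, as the abstract's later steps describe; those steps are not sketched here.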
20160148068 | IMAGE PROCESSING APPARATUS AND METHOD, AND ELECTRONIC DEVICE - Embodiments of the present disclosure provide an image processing apparatus and method, and an electronic device, wherein the apparatus includes: an image acquiring unit configured to acquire an image shot in real-time; a detecting unit configured to detect an object needing a fuzzy processing and/or a part thereof in the image shot in real-time; and a processing unit configured to perform a fuzzy processing of the detected object and/or a part thereof. The object needing a fuzzy processing and/or a part thereof is automatically detected in the shooting process to perform corresponding fuzzy processing, rather than recording the shot image in advance, reproducing the recorded image and searching therein, and finally performing a fuzzy processing. Thus the present disclosure can increase the efficiency of the fuzzy processing, improve the accuracy and integrity of the processing result, and reduce the risk of secret divulgation caused by omission or error processing. | 05-26-2016 |
20160148071 | SYSTEMS AND METHODS FOR OBJECT DETECTION - An object detection system and a method of detecting an object in an image are disclosed. In an embodiment, a method for detecting the object includes computing one or more feature planes of one or more types for each image pixel of the image. A plurality of cells is defined in the image, where each cell includes first through n | 05-26-2016 |
20160148367 | OPERATING A COMPUTING DEVICE BY DETECTING ROUNDED OBJECTS IN AN IMAGE - A method is disclosed for operating a computing device. One or more images of a scene captured by an image capturing device of the computing device is processed. The scene includes an object of interest that is in motion and that has a rounded shape. The one or more images are processed by detecting a rounded object that corresponds to the object of interest. Position information is determined based on a relative position of the rounded object in the one or more images. One or more processes are implemented that utilize the position information determined from the relative position of the rounded object. | 05-26-2016 |
20160148373 | IMAGE RECORDING SYSTEM - Image recording devices and systems are disclosed along with methods for image recording. The systems can be in communication with a manual imaging device having an imaging probe configured to scan a volume of tissue and output scan images. The systems can be further configured to electronically receive first and second images and to calculate an image-to-image spacing between the first and second images. The systems can further perform an image quality analysis on the scan images and record the scan images if movement of the imaging probe is detected and the scan images satisfy the image quality analysis. The systems can also include a position tracking system. Position sensors and/or orientation sensors can be coupled to the imaging probe to determine the position and orientation of the imaging probe. The systems can be configured to associate the position and orientation data with the scanned images. | 05-26-2016 |
20160148381 | OBJECT RECOGNITION DEVICE AND OBJECT RECOGNITION METHOD - A category selection portion selects a face orientation based on an error between the positions of feature points (the eyes and the mouth) on the faces of each face orientation and the positions of feature points, corresponding to the feature points on the faces of each category, on the face of a collation face image. A collation portion collates the registered face images of the face orientation selected by the category selection portion and the collation face image with each other, and the face orientations are determined so that face orientation ranges where the error with respect to each individual face orientation is within a predetermined value are in contact with each other or overlap each other. The collation face image and the registered face images can be more accurately collated with each other. | 05-26-2016 |
20160148390 | METHOD AND SYSTEM FOR PROCESSING A SEQUENCE OF IMAGES TO IDENTIFY, TRACK, AND/OR TARGET AN OBJECT ON A BODY OF WATER - Airborne tracking systems use sensors to track objects of interest. In order to track the objects of interest, the sensors need to be steered such that the object is kept, ideally, in the center of the sensor's field of view. Automatic steering of optical sensors requires the generation of a track on an object of interest. When tracking boats on the water, current approaches to image processing may generate multiple detections on the object of interest. Embodiments of the present disclosure solve the track multiplicity problem by grouping tracks associated with the object of interest into a cluster and by estimating a most likely location of the object within the cluster of tracks. Based on the estimated location, embodiments of the present disclosure output a single track for the object. The single track is used by an automatic steering system to maintain a sensor aimed at the object of interest. | 05-26-2016 |
20160148391 | METHOD AND SYSTEM FOR HUMAN MOTION RECOGNITION - A system and method for human motion recognition are provided. The system includes a video sequence decomposer, a feature extractor, and a motion recognition module. The video sequence decomposer decomposes a video sequence into a plurality of atomic actions. The feature extractor extracts features from each of the plurality of atomic actions, the features including at least a motion feature and a shape feature. The motion recognition module performs motion recognition for each of the plurality of atomic actions in response to the features. | 05-26-2016 |
20160153906 | METHOD OF DETECTING PARTICLES BY DETECTING A VARIATION IN SCATTERED RADIATION | 06-02-2016 |
20160154827 | TERMINAL APPARATUS, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD | 06-02-2016 |
20160154992 | WRINKLE CARE SUPPORT DEVICE AND METHOD FOR SUPPORTING WRINKLE CARE | 06-02-2016 |
20160154996 | ROBOT CLEANER AND METHOD FOR CONTROLLING A ROBOT CLEANER | 06-02-2016 |
20160154999 | OBJECT RECOGNITION IN A 3D SCENE | 06-02-2016 |
20160155011 | SYSTEM AND METHOD FOR PRODUCT IDENTIFICATION | 06-02-2016 |
20160155232 | METHOD, SYSTEM AND APPARATUS FOR DISPLAYING SURGICAL ENGAGEMENT PATHS | 06-02-2016 |
20160155233 | CONTROL DEVICE WITH PASSIVE REFLECTOR | 06-02-2016 |
20160155235 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM | 06-02-2016 |
20160155239 | Cell Tracking Device and Method, and Non-Transitory Storage Medium Storing Computer-Readable Cell Tracking Programs | 06-02-2016 |
20160155241 | Target Detection Method and Apparatus Based On Online Training | 06-02-2016 |
20160159320 | DETECTION OF SEATBELT POSITION IN A VEHICLE - Method for detecting seatbelt positioning in a vehicle having a seatbelt assembly and an image sensor, and a vehicle having the same. The seatbelt assembly includes belt webbing with a predefined webbing pattern. An image sensor is configured to take an image of at least a portion of the belt webbing. A controller is operatively connected to the image sensor and has a processor and tangible, non-transitory memory on which is recorded instructions for executing a method for detecting positioning of the belt webbing. The controller is configured to determine a latch status of the seatbelt assembly as being latched or unlatched. If the latch status is latched, the controller is configured to take an image of at least a portion of the belt webbing with the image sensor. The method includes determining if the belt webbing is in a preferred position based at least partially on said image. | 06-09-2016 |
20160162039 | METHOD AND SYSTEM FOR TOUCHLESS ACTIVATION OF A DEVICE - A method and system are provided for computer vision based control of a device by obtaining an image via a camera, the camera in communication with a device; detecting in the image a user pointing at the camera; and controlling the device based on the detection of the user pointing at the camera. | 06-09-2016 |
20160162577 | Method for Segmenting and Tracking Content in Videos Using Low-Dimensional Subspaces and Sparse Vectors - A method segments and tracks content in a video stream including sets of one or more images by first determining measured data from each set of one or more images. An adaptive step-size parameter and a low-dimensional subspace characterizing motion of the content in the measured data are initialized. A sparse vector representing a sparse component that characterizes the motion of the content different from the motion of the content characterized by the low-dimensional subspace is determined. A change in the low-dimensional subspace for the measured data is determined using a proximal point iteration and the parameter, which is updated according to the change. A low-rank subspace matrix representing the low-dimensional subspace is updated according to the change and the parameter. Then, the low-rank matrix representing the low-dimensional subspace and the sparse vector are outputted. | 06-09-2016 |
20160162727 | ELECTRONIC DEVICE AND EYE-DAMAGE REDUCTION METHOD OF THE ELECTRONIC DEVICE - In a method for reducing eye damage, caused by watching a display screen, executed in an electronic device, a start time of eye exposure to the display screen is set. At least one image of an object in front of the display screen is captured using an image capturing device. If there is a face region and an eye region of a person in the image, a period of time that the person continuously views the display screen is calculated. If the period of time exceeds a preset time, a message to take a break is issued. | 06-09-2016 |
20160162733 | METHOD AND A DEVICE FOR TRACKING CHARACTERS THAT APPEAR ON A PLURALITY OF IMAGES OF A VIDEO STREAM OF A TEXT - The tracking method comprises, for at least a first image of the text having at least a first line of characters: | 06-09-2016 |
20160162735 | METHOD FOR DETECTING FACE DIRECTION OF A PERSON - A method for detecting face direction of a person includes receiving a face image of the person. The method further includes determining whether the person is wearing glasses, based on the face image. The method also includes determining whether the number of reflection points of light in a glasses region of the face image is four or more at the time of detecting the glasses region. The method also includes aligning the reflection points of light in order of size, upon determining that the number of reflection points of light is four or more. The method also includes detecting two virtual images of the light, based on the aligning. The method also includes detecting a face direction vector based on the two virtual images of the light. | 06-09-2016 |
20160162742 | LIDAR-BASED CLASSIFICATION OF OBJECT MOVEMENT - Within machine vision, object movement is often estimated by applying image evaluation techniques to visible light images, utilizing techniques such as perspective and parallax. However, the precision of such techniques may be limited due to visual distortions in the images, such as glare and shadows. Instead, lidar data may be available (e.g., for object avoidance in automated navigation), and may serve as a high-precision data source for such determinations. Respective lidar points of a lidar point cloud may be mapped to voxels of a three-dimensional voxel space, and voxel clusters may be identified as objects. The movement of the lidar points may be classified over time, and the respective objects may be classified as moving or stationary based on the classification of the lidar points associated with the object. This classification may yield precise results, because voxels in three-dimensional voxel space present clearly differentiable statuses when evaluated over time. | 06-09-2016 |
20160162750 | Method Of Generating A Training Image For An Automated Vehicle Object Recognition System - In a method of generating a training image for teaching of a camera-based object recognition system suitable for use on an automated vehicle, which shows an object to be recognized in a natural object environment, the training image is generated as a synthetic image by combining a base image taken by a camera with a template image: a structural feature obtained from the base image is replaced with a structural feature obtained from the template image by means of a shift-map algorithm. | 06-09-2016 |
20160163039 | INFORMATION PROCESSING DEVICE, MAP UPDATE METHOD, PROGRAM, AND INFORMATION PROCESSING SYSTEM - There is provided an information processing device including: a global map acquiring unit that acquires at least a part of a global map representing positions of objects in a real space where a plurality of users are in activity; a local map generating unit that generates a local map representing positions of nearby objects detectable by a device of one user among the plurality of users; and an updating unit that updates the global map based on position data of objects included in the local map. | 06-09-2016 |
20160163046 | SYSTEM AND METHOD FOR QUANTIFICATION OF ESCHERICHIA COLI BACTERIA IN WATER - A system and method for quantification of Escherichia coli bacteria in water. | 06-09-2016 |
20160163064 | APPARATUS AND METHOD FOR RESOURCE-ADAPTIVE OBJECT DETECTION AND TRACKING - An apparatus for providing object information based on an image sequence including a plurality of images is provided. The apparatus includes an object detector for conducting object detection on three or more images of the plurality of images of the image sequence to obtain the object information, wherein each image of the image sequence on which object detection is conducted, is an object-detected image of the image sequence, and wherein each image of the image sequence on which object detection is not conducted, is not an object-detected image of the image sequence. Moreover, the apparatus includes an object tracker for conducting object tracking on one or more images of the image sequence to obtain the object information. | 06-09-2016 |
20160163065 | BACKGROUND MODEL FOR COMPLEX AND DYNAMIC SCENES - Systems and methods for viewing a scene depicted in a sequence of video frames and identifying and tracking objects between separate frames of the sequence. Each tracked object is classified based on known categories and a stream of context events associated with the object is generated. A sequence of primitive events based on the stream of context events is generated and stored together, along with detailed data and generalized data related to an event. All of the data is then evaluated to learn patterns of behavior that occur within the scene. | 06-09-2016 |
20160163091 | ELECTRONIC APPARATUS AND METHOD FOR INCREMENTAL POSE ESTIMATION AND PHOTOGRAPHING THEREOF - An electronic apparatus and a method for incremental pose estimation and photographing are provided. In the method, at least two images of a 3D object are captured at different positions encircling the 3D object. Displacements and angular displacements are detected when capturing the images. Features of the 3D object in the images, the displacements and the angular displacements are used to estimate a central position of the 3D object and a distance between the electronic apparatus and the 3D object. A circumference suitable for capturing the images of the 3D object is estimated based on the distance and is divided into several segments, and a timing interval of a timer is adjusted based on a length of the segments. A camera of the electronic apparatus is triggered at intervals set by the timer to capture the images of the 3D object. | 06-09-2016 |
20160171283 | Data-Enhanced Video Viewing System and Methods for Computer Vision Processing | 06-16-2016 |
20160171285 | METHOD OF DETECTING OBJECT IN IMAGE AND IMAGE PROCESSING DEVICE | 06-16-2016 |
20160171293 | GESTURE TRACKING AND CLASSIFICATION | 06-16-2016 |
20160171295 | Human Body Pose Estimation | 06-16-2016 |
20160171296 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD | 06-16-2016 |
20160171301 | METHOD AND APPARATUS FOR TRACKING TARGET OBJECT | 06-16-2016 |
20160171305 | PERSONAL AUGMENTED REALITY | 06-16-2016 |
20160171311 | Computer Vision Pipeline and Methods for Detection of Specified Moving Objects | 06-16-2016 |
20160171313 | MACHINE-IMPLEMENTED METHOD AND SYSTEM FOR RECOGNIZING A PERSON HAILING A PUBLIC PASSENGER VEHICLE | 06-16-2016 |
20160171317 | MONITORING METHOD AND APPARATUS USING A CAMERA | 06-16-2016 |
20160171318 | A METHOD FOR DETECTING A VEHICLE SUNVISOR'S STATE | 06-16-2016 |
20160171319 | DRIVER CHECK APPARATUS | 06-16-2016 |
20160171331 | PERFORMING OBJECT DETECTION IN AN IMAGE | 06-16-2016 |
20160171339 | USER TERMINAL DEVICE AND METHOD OF RECOGNIZING OBJECT THEREOF | 06-16-2016 |
20160171346 | IMAGE RECOGNITION METHOD AND APPARATUS, IMAGE VERIFICATION METHOD AND APPARATUS, LEARNING METHOD AND APPARATUS TO RECOGNIZE IMAGE, AND LEARNING METHOD AND APPARATUS TO VERIFY IMAGE | 06-16-2016 |
20160171429 | Realogram Scene Analysis of Images: Shelf and Label Finding | 06-16-2016 |
20160171684 | Device, System and Method for Skin Detection | 06-16-2016 |
20160171702 | OPTICAL TRACKING | 06-16-2016 |
20160171705 | METHOD AND DEVICE FOR AUTOMATICALLY IDENTIFYING A POINT OF INTEREST IN A DEPTH MEASUREMENT ON A VIEWED OBJECT | 06-16-2016 |
20160171713 | APPARATUS FOR GENERATING MOTION EFFECTS AND COMPUTER READABLE MEDIUM FOR THE SAME | 06-16-2016 |
20160171714 | REGISTRATION SYSTEM FOR REGISTERING AN IMAGING DEVICE WITH A TRACKING DEVICE | 06-16-2016 |
20160171715 | METHOD AND APPARATUS FOR TRACKING AN OBJECT | 06-16-2016 |
20160171852 | REAL-TIME VIDEO ANALYSIS FOR SECURITY SURVEILLANCE | 06-16-2016 |
20160175615 | APPARATUS, METHOD, AND PROGRAM FOR MOVABLE PART TRACKING AND TREATMENT | 06-23-2016 |
20160180149 | VIDEO SURVEILLANCE SYSTEM AND METHOD FOR FRAUD DETECTION | 06-23-2016 |
20160180156 | SYSTEM AND METHOD FOR DETECTING, TRACKING AND COUNTING HUMAN OBJECTS OF INTEREST USING A COUNTING SYSTEM AND A DATA CAPTURE DEVICE | 06-23-2016 |
20160180157 | METHOD FOR SETTING A TRIDIMENSIONAL SHAPE DETECTION CLASSIFIER AND METHOD FOR TRIDIMENSIONAL SHAPE DETECTION USING SAID SHAPE DETECTION CLASSIFIER | 06-23-2016 |
20160180159 | METHOD AND SYSTEM FOR DETECTING PEDESTRIANS | 06-23-2016 |
20160180171 | BACKGROUND MAP FORMAT FOR AUTONOMOUS DRIVING | 06-23-2016 |
20160180173 | Method and System for Queue Length Analysis | 06-23-2016 |
20160180176 | OBJECT DETECTION APPARATUS | 06-23-2016 |
20160180187 | METHOD OF GENERATING DESCRIPTOR FOR INTEREST POINT IN IMAGE AND APPARATUS IMPLEMENTING THE SAME | 06-23-2016 |
20160180195 | Augmenting Layer-Based Object Detection With Deep Convolutional Neural Networks | 06-23-2016 |
20160180196 | OBJECT RE-IDENTIFICATION USING SELF-DISSIMILARITY | 06-23-2016 |
20160180197 | SYSTEM AND METHOD TO IMPROVE OBJECT TRACKING USING MULTIPLE TRACKING SYSTEMS | 06-23-2016 |
20160180509 | COMMODITY IDENTIFICATION DEVICE AND COMMODITY RECOGNITION NAVIGATION METHOD | 06-23-2016 |
20160180516 | POSITION DETECTION APPARATUS, LENS APPARATUS, IMAGE PICKUP SYSTEM, MACHINE TOOL APPARATUS, POSITION DETECTION METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM WHICH ARE CAPABLE OF DETECTING ABNORMALITY | 06-23-2016 |
20160180530 | ERROR ESTIMATION IN REAL-TIME VISUAL ODOMETRY SYSTEM | 06-23-2016 |
20160180531 | Method To Determine Distance Of An Object From An Automated Vehicle With A Monocular Device | 06-23-2016 |
20160180532 | SYSTEM FOR IDENTIFYING A POSITION OF IMPACT OF A WEAPON SHOT ON A TARGET | 06-23-2016 |
20160180533 | Sequencing Products Recognized in a Shelf Image | 06-23-2016 |
20160180535 | GEOREFERENCING METHOD AND SYSTEM | 06-23-2016 |
20160180537 | METHOD AND SYSTEM FOR MODIFYING A BEACON LIGHT SOURCE FOR USE IN A LIGHT BASED POSITIONING SYSTEM | 06-23-2016 |
20160180541 | SENSOR NOISE PROFILE | 06-23-2016 |
20160180542 | INFORMATION PROCESSING APPARATUS, NON-TRANSITORY STORAGE MEDIUM ENCODED WITH COMPUTER READABLE INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD | 06-23-2016 |
20160180543 | VIDEO TRACKER HAVING DIGITAL SIGNAL PROCESSOR | 06-23-2016 |
20160180545 | METHOD AND ELECTRONIC DEVICE FOR OBJECT TRACKING IN A LIGHT-FIELD CAPTURE | 06-23-2016 |
20160180546 | SYSTEM AND METHOD TO IMPROVE OBJECT TRACKING USING TRACKING FINGERPRINTS | 06-23-2016 |
20160180547 | METHOD, SYSTEM AND APPARATUS FOR PROCESSING AN IMAGE | 06-23-2016 |
20160180549 | Distinguishing Between Stock Keeping Units Using Hough Voting Methodology | 06-23-2016 |
20160180550 | METHOD FOR MEASURING OBJECT AND SMART DEVICE | 06-23-2016 |
20160180667 | DOORBELL CAMERA PACKAGE DETECTION | 06-23-2016 |
20160180865 | VIDEO-BASED SOUND SOURCE SEPARATION | 06-23-2016 |
20160188953 | AUTOMATED REMOTE CAR COUNTING - A system for analysis of remotely sensed image data of parking facilities, storage lots, or road regions, for determining patterns over time and determining time-based information such as facility capacities or vehicle movement. | 06-30-2016 |
20160188965 | Image Processing Sensor Systems - An image processing sensor system functions as a standalone unit to capture images and process the resulting signals to detect objects or events of interest. The processing significantly improves the selectivity and specificity of detecting objects and events, such as a series of motions that may precede a fall by a patient who is at elevated risk of falling. | 06-30-2016 |
20160188968 | OBJECT DETECTION APPARATUS, OBJECT DETECTION METHOD, AND OBJECT DETECTION SYSTEM - An object detection apparatus is capable of estimating the size of a moving object easily based on images. An object detection apparatus ( | 06-30-2016 |
20160188976 | GENERATION OF HIGH RESOLUTION POPULATION DENSITY DATA SETS THROUGH EXPLOITATION OF HIGH RESOLUTION OVERHEAD IMAGERY DATA AND LOW RESOLUTION POPULATION DENSITY DATA SETS - Utilities (e.g., systems, methods, etc.) for automatically generating high resolution population density estimation data sets through manipulation of low resolution population density estimation data sets with high resolution overhead imagery data (e.g., such as overhead imagery data acquired by satellites, aircrafts, etc. of celestial bodies). Stated differently, the present utilities make use of high resolution overhead imagery data to determine how to distribute the population density of a large, low resolution cell (e.g., 1000 m) among a plurality of smaller, high resolution cells (e.g., 100 m) within the larger cell. | 06-30-2016 |
20160188978 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING DEVICE PROGRAM, AND RECORDING MEDIUM - An information processing method includes: acquiring setting pseudo multipole information on a pseudo multipole (S | 06-30-2016 |
20160188980 | Video Triggered Analyses - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving a video feed of a scene that includes an object in at least a portion of the scene. Tracking the object using an object tracking algorithm. Detecting a change in the object from a first frame of the video feed to a second frame of the video feed. Automatically causing an analysis to be performed on a portion of the video feed that includes the object and the change in the object in response to detecting the change. Other implementations of this aspect include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices. | 06-30-2016 |
20160188982 | ATTRIBUTE-BASED ALERT RANKING FOR ALERT ADJUDICATION - Alerts to object behaviors are prioritized for adjudication as a function of relative values of abandonment, foregroundness and staticness attributes. The attributes are determined from feature data extracted from video frame image data. The abandonment attribute indicates a level of likelihood of abandonment of an object. The foregroundness attribute quantifies a level of separation of foreground image data of the object from a background model of the image scene. The staticness attribute quantifies a level of stability of dimensions of a bounding box of the object over time. Alerts are also prioritized according to an importance or relevance value that is learned and generated from the relative abandonment, foregroundness and staticness attribute strengths. | 06-30-2016 |
20160189365 | METHOD FOR IDENTIFYING LOCATION OF ELECTRONIC APPARATUS, ELECTRONIC APPARATUS, AND OPERATING METHOD OF SERVER - A method of identifying a location of an electronic apparatus is provided. The method includes obtaining information about a region in which the electronic apparatus is located, obtaining an image through an image sensor included in the electronic apparatus, identifying location information about one or more objects included in the image, and identifying a location of the electronic apparatus by using the information about the region and the location information. | 06-30-2016 |
20160189381 | SIGNAL DETECTION, RECOGNITION AND TRACKING WITH FEATURE VECTOR TRANSFORMS - A method for obtaining object surface topology in which image frames of a scene (e.g., video frames from a user passing a smartphone camera over an object) are transformed into dense feature vectors, and feature vectors are correlated to obtain high precision depth maps. Six dimensional pose is determined from the video sequence, and then used to register patches of pixels from the frames. Registered patches are aligned and then correlated to local shifts. These local shifts are converted to precision depth maps, which are used to characterize surface detail of an object. Feature vector transforms are leveraged in a signal processing method comprising several levels of interacting loops. At a first loop level, a structure from motion loop process extracts anchor features from image frames. At another level, an interacting loop process extracts surface texture, as noted. At additional levels, object forms are segmented from the images, and objects are counted and/or measured. At still a higher level, the lower level data structures providing feature extraction, 3D structure and pose estimation, and object surface registration are exploited by higher level loop processes for object identification (e.g., using machine learning classification), digital watermark or bar code reading and image recognition from the registered surfaces stored in lower level data structures. | 06-30-2016 |
20160189384 | METHOD FOR DETERMINING THE POSE OF A CAMERA AND FOR RECOGNIZING AN OBJECT OF A REAL ENVIRONMENT - A method for determining the pose of a camera relative to a real environment includes the following steps: taking at least one image of a real environment by means of a camera, the image containing at least part of a real object, performing a tracking method that evaluates information with respect to correspondences between features associated with the real object and corresponding features of the real object as it is contained in the image of the real environment, so as to obtain conclusions about the pose of the camera, determining at least one parameter of an environmental situation, and performing the tracking method in accordance with the at least one parameter. Analogously, the method can also be utilized in a method for recognizing an object of a real environment in an image taken by a camera. | 06-30-2016 |
20160189385 | Geometric Fingerprinting for Localization of a Device - Systems, apparatuses, and methods are provided for developing a fingerprint database for and determining the geographic location of an end-user device (e.g., vehicle, mobile phone, smart watch, etc.) with the database. A fingerprint database may be developed by receiving a depth map for a location in a path network, and then identifying physical structures within the depth map. The depth map may be divided, at each physical structure, into one or more horizontal planes at one or more elevations from a road level. Two-dimensional feature geometries may be extracted from the horizontal planes. At least a portion of the extracted feature geometries may be encoded into the fingerprint database. | 06-30-2016 |
20160189391 | MOBILE, WEARABLE, AUTOMATED TARGET TRACKING SYSTEM - The mobile, wearable, automated target tracking system is designed to enable an image and/or sound recording device, such as a video camera or directional microphone, to automatically follow a subject (or target) in order to keep that subject within the image frame or sound range that is being recorded. The automated target tracking system makes it possible to capture both the action and subject simultaneously without requiring a cameraman to manually operate the equipment. The indoor/outdoor, automated tracking system is designed to be independent of the video/sound recording device and may utilize a smartphone for location sensing and control. Both the target (or subject) and the tracking device may be moving, so the tracking device is designed to adjust position on 3 axes, azimuth (pan), elevation (tilt) and horizon (roll). Since the compact, battery-operated tracking device is mobile and wearable, it enables the user to capture the subject and all the action while also participating in the activity at the same time. | 06-30-2016 |
20160189392 | OBJECT TRACKING APPARATUS, CONTROL METHOD THEREFOR AND STORAGE MEDIUM - An image capture apparatus functions as an object tracking apparatus for tracking an object included in provided images, registers a partial image indicating an object as a template, and performs template matching for estimating a region by using the template and histogram matching for registering a histogram of a partial image indicating the object and estimating a region by using the histogram. In a case where a distance between the estimation region based on the histogram matching and the estimation region based on the template matching is within a predetermined range, the estimation region based on the template matching is employed as an object region, and in a case where the distance between the estimation region based on the histogram matching and the estimation region based on the template matching is not within the predetermined range, the estimation region based on the histogram matching is employed as the object region. | 06-30-2016 |
20160189395 | INFORMATION PROCESSING APPARATUS, RECORDING MEDIUM, AND INFORMATION PROCESSING METHOD - An information processing apparatus includes a first acquisition unit, a second acquisition unit, and an associating unit. The first acquisition unit acquires first identification information stored on an object carried by a person. The second acquisition unit acquires second identification information identifying the person. When the same combination as a combination of the first identification information acquired on first date and time, and the second identification information acquired on second date and time corresponding to the first date and time is acquired on third date and time different from the first date and time, the associating unit associates the first identification information with the second identification information in the combination. | 06-30-2016 |
20160189499 | Photo comparison and security process called the Flicker Process. - A monitoring support apparatus which supports a monitoring system using a comparison method for real time and archived film and/or photographs. It relates to image capturing devices and, particularly, to an image capturing device which can automatically compare photographs and/or film and compare the differences in a selected time or an archive to a present situation. This relates to systems for video viewing/monitoring films or photographs and determining what changes have occurred. The process comprises: a general Flicker Process: Step | 06-30-2016 |
20160195926 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD | 07-07-2016 |
20160196468 | EXTRACTION OF USER BEHAVIOR FROM DEPTH IMAGES | 07-07-2016 |
20160196474 | IMAGE PROCESSING APPARATUS AND LANE PARTITION LINE RECOGNITION SYSTEM INCLUDING THE SAME | 07-07-2016 |
20160196543 | INFORMATION PROCESSING APPARATUS, STORE SYSTEM AND INFORMATION PROCESSING METHOD | 07-07-2016 |
20160196652 | METHOD AND APPARATUS | 07-07-2016 |
20160196654 | MAP CREATION APPARATUS, MAP CREATION METHOD, AND COMPUTER-READABLE RECORDING MEDIUM | 07-07-2016 |
20160196663 | TRACKING APPARATUS | 07-07-2016 |
20160196728 | METHOD AND SYSTEM FOR DETECTING A SECURITY BREACH IN AN ORGANIZATION | 07-07-2016 |
20160202065 | OBJECT LINKING METHOD, OBJECT LINKING APPARATUS, AND STORAGE MEDIUM | 07-14-2016 |
20160202756 | GAZE TRACKING VIA EYE GAZE MODEL | 07-14-2016 |
20160203367 | VIDEO PROCESSING APPARATUS, VIDEO PROCESSING METHOD, AND VIDEO PROCESSING PROGRAM | 07-14-2016 |
20160203371 | Method and Apparatus for Projective Volume Monitoring | 07-14-2016 |
20160203372 | METHOD AND CONTROL DEVICE FOR IDENTIFYING AN OBJECT IN A PIECE OF IMAGE INFORMATION | 07-14-2016 |
20160203376 | OBJECT ESTIMATION APPARATUS AND OBJECT ESTIMATION METHOD | 07-14-2016 |
20160203610 | METHOD AND APPARATUS FOR RECOGNIZING OBJECT | 07-14-2016 |
20160203614 | METHOD AND APPARATUS OF DETECTING OBJECT USING EVENT-BASED SENSOR | 07-14-2016 |
20160203615 | DIRECTIONAL OBJECT DETECTION | 07-14-2016 |
20160252492 | BREATH ALCOHOL IGNITION INTERLOCK SYSTEM | 09-01-2016 |
20160252646 | SYSTEM AND METHOD FOR VIEWING IMAGES ON A PORTABLE IMAGE VIEWING DEVICE RELATED TO IMAGE SCREENING | 09-01-2016 |
20160253560 | VISIBILITY ENHANCEMENT DEVICES, SYSTEMS, AND METHODS | 09-01-2016 |
20160253565 | COMPUTER-READABLE MEDIUM STORING THEREIN IMAGE PROCESSING PROGRAM, IMAGE PROCESSING DEVICE, AND IMAGE PROCESSING METHOD | 09-01-2016 |
20160253576 | AUTOMATIC IDENTIFICATION OF CHANGES IN OBJECTS | 09-01-2016 |
20160253579 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM | 09-01-2016 |
20160253808 | DETERMINATION OF OBJECT DATA BY TEMPLATE-BASED UAV CONTROL | 09-01-2016 |
20160253821 | IDENTIFYING AN OBJECT IN A VOLUME BASED ON CHARACTERISTICS OF LIGHT REFLECTED BY THE OBJECT | 09-01-2016 |
20160255283 | METHOD FOR DIFFERENTIATING BETWEEN BACKGROUND AND FOREGROUND OF SCENERY AND ALSO METHOD FOR REPLACING A BACKGROUND IN IMAGES OF A SCENERY | 09-01-2016 |
20160377529 | System for Assessment of Reflective Objects Along a Roadway - A system for classifying different types of sheeting materials of road signs depicted in a videostream compares estimated retroreflectivity values against known minimum retroreflectivity values for each of a plurality of colors. Once a road sign has been identified in the videostream, the frames associated with that road sign are analyzed to determine each of a plurality of colors present on the road sign. An estimated retroreflectivity for each of the plurality of colors present on the road sign is then determined. By comparing the estimated retroreflectivity for each of the plurality of colors against known minimum retroreflectivity values for the corresponding color for different types of sheeting materials, an accurate determination of the classification of the sheeting material of the road sign is established. Preferably, certain conditions of gross failure of the sheeting material are filtered out before classification of the sheeting material is determined. | 12-29-2016 |
20160379049 | VIDEO MONITORING METHOD, VIDEO MONITORING SYSTEM AND COMPUTER PROGRAM PRODUCT - The present disclosure relates to a video monitoring method, a video monitoring system and a computer program product. The video monitoring method comprises: obtaining video data collected by a video data collecting apparatus; and, based on pre-set scene information and the video data, determining and compiling statistics on monitored objects in the scene to be monitored corresponding to the scene information. | 12-29-2016 |
20160379053 | Method and Apparatus for Identifying Object - A method and apparatus for identifying an object are disclosed. The method includes: performing linear feature detection on an image to be identified by using a linear feature detection method to obtain detected linear features, wherein the linear feature detection method transforms detection of linear features in an image space to detection of extremal points in another space and assigns larger weights to continuous image points than to discrete image points during the transformation by using a continuous cluster factor; and identifying an object to be identified from the detected linear features by considering characteristics of the object to be identified. The method and apparatus for identifying an object of the invention, when used to detect and identify weak linear objects in high resolution remote sensing images, can effectively suppress the system noise and ambient noise, thereby successfully identifying the object of interest and avoiding false alarms. Moreover, short line segments can also be identified. | 12-29-2016 |
20160379055 | GRAPH-BASED FRAMEWORK FOR VIDEO OBJECT SEGMENTATION AND EXTRACTION IN FEATURE SPACE - A method for graph-based spatiotemporal video segmentation and automatic target object extraction in high-dimensional feature space includes using a processor to automatically analyze an entire volumetric video sequence; using the processor to construct a high-dimensional feature space that includes color, motion, time, and location information so that pixels in the entire volumetric video sequence are reorganized according to their unique and distinguishable feature vectors; using the processor to create a graph model that fuses the appearance, spatial, and temporal information of all pixels of the video sequence in the high-dimensional feature space; and using the processor to group pixels in the graph model that are inherently similar and assign the same labels to them to form semantic spatiotemporal key segments. | 12-29-2016 |
20160379060 | IMAGE SURVEILLANCE METHOD AND IMAGE SURVEILLANCE DEVICE THEREOF - An image surveillance method is applied to surveillance of a plurality of targets. The image surveillance method includes setting a priority of each target in advance, identifying at least one of the plurality of targets to obtain the priority of the at least one of the plurality of targets when the at least one of the plurality of targets enters at least one surveillance region, and determining whether to monitor the at least one of the plurality of targets according to the priority of the at least one of the plurality of targets. | 12-29-2016 |
20160379061 | METHODS, DEVICES AND SYSTEMS FOR DETECTING OBJECTS IN A VIDEO - Methods, devices and systems for performing video content analysis to detect humans or other objects of interest in a video image are disclosed. The detection of humans may be used to count a number of humans, to determine a location of each human, and/or to perform crowd analyses of monitored areas. | 12-29-2016 |
20160379062 | 3-D MODEL BASED METHOD FOR DETECTING AND CLASSIFYING VEHICLES IN AERIAL IMAGERY - A computer implemented method for determining a vehicle type of a vehicle detected in an image is disclosed. An image having a detected vehicle is received. A number of vehicle models having salient feature points are projected on the detected vehicle. A first set of features derived from each of the salient feature locations of the vehicle models is compared to a second set of features derived from corresponding salient feature locations of the detected vehicle to form a set of positive match scores (p-scores) and a set of negative match scores (n-scores). The detected vehicle is classified as one of the vehicle models based at least in part on the set of p-scores and the set of n-scores. | 12-29-2016 |
20160379067 | VEHICLE VISION SYSTEM WITH DIRT DETECTION - A vision system for a vehicle includes a camera having an image sensor and a lens, with the lens exposed to the environment exterior the vehicle. An image processor is operable to process multiple frames of image data captured by the camera and processes captured image data to detect a blob in a frame of captured image data. Responsive to processing a first frame of captured image data, and responsive to the image processor determining a first threshold likelihood that a detected blob is indicative of a contaminant, the image processor adjusts processing when processing subsequent frames of captured image data. Responsive to the image processor determining a second threshold likelihood that the detected blob is indicative of a contaminant when processing subsequent frames of captured image data, the image processor determines that the detected blob is representative of a contaminant at the lens of the camera. | 12-29-2016 |
20160379075 | IMAGE RECOGNITION DEVICE AND IMAGE RECOGNITION METHOD - An image recognition device includes: a plurality of first charge storage circuits that store signal charges generated by photoelectric conversion sections; a plurality of second charge storage circuits that store signal charges generated by the photoelectric conversion sections; a first charge read circuit section that reads a pixel signal and outputs an image as a first image; a second charge read circuit section that reads a pixel signal and outputs an image as a second image; a read circuit selection section that selects one of the first charge read circuit section and the second charge read circuit section; and a feature amount determination section, wherein the feature amount determination section determines a detection target subject according to a feature amount of a subject in the second image, and whether to perform the determination for a subject in the first image is determined based on the determination result. | 12-29-2016 |
20160379076 | ARTICLE RECOGNITION APPARATUS AND IMAGE PROCESSING METHOD - According to one embodiment, an article recognition apparatus includes an image acquisition unit, a recognition unit, a region detection unit, a storage unit, and a determination unit. The recognition unit recognizes each of the articles. The region detection unit determines article region information. The storage unit stores article information including a reference value for the article region information. The determination unit determines that an unrecognized article exists if the reference value for the article region information of each article recognized by the recognition unit does not match the article region information. | 12-29-2016 |
20160379365 | CAMERA CALIBRATION DEVICE, CAMERA CALIBRATION METHOD, AND CAMERA CALIBRATION PROGRAM - A technique is disclosed for easily performing calibration of a camera by using an MMS. A calibration device for a camera that is configured to photograph the sun includes an in-image sun position identifying unit | 12-29-2016 |
20160379367 | IMAGE PROCESSING APPARATUS - According to one embodiment, an image processing apparatus includes an image acquiring unit, a commodity identifying unit, a commodity map generator and a commodity determination unit. The image acquiring unit acquires a number of images that are photographed, with photography ranges being varied gradually. The commodity identifying unit identifies a commodity and the position of that commodity based on the photographed images. The commodity map generator generates a commodity map from the photographed images, based on the commodity and commodity position identified by the commodity identifying unit. The commodity determination unit generates a commodity inspection map which represents differences between the commodity and commodity position shown in the commodity map and those shown in commodity layout plan information representing a commodity layout plan. | 12-29-2016 |
20160379369 | WIRELESS AIRCRAFT AND METHODS FOR OUTPUTTING LOCATION INFORMATION OF THE SAME - The present invention provides a wireless aircraft and a method for outputting location information that reduce cost, simplify the process, and output the necessary information. The wireless aircraft | 12-29-2016 |
20160379370 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - To decrease the time and work needed for the measurement of a target object, an information processing apparatus includes an acquisition unit configured to acquire measurement data on a target object from a measuring apparatus, an extraction unit configured to extract a partial region of the target object that contains a geometric feature for use in estimation of a position-and-orientation of the target object, based on one or more pieces of measurement data acquired by the acquisition unit, a determination unit configured to determine a position-and-orientation of the measuring apparatus configured to measure the partial region extracted by the extraction unit, and an output unit configured to output the position-and-orientation determined by the determination unit. | 12-29-2016 |
20160379373 | Methods Circuits Devices Systems and Associated Computer Executable Code for Multi Factor Image Feature Registration and Tracking - Disclosed are methods, circuits, devices, systems and associated executable code for Multi factor image feature registration and tracking, wherein utilized factors include both static and dynamic parameters within a video feed. Assessed factors may originate from a heterogeneous set of sensors including both video and audio sensors. Acoustically acquired scene information may supplement optically acquired information. | 12-29-2016 |
20160379375 | Camera Tracking Method and Apparatus - A camera tracking method includes obtaining an image set of a current frame; separately extracting feature points of each image in the image set of the current frame; obtaining a matching feature point set of the image set according to a rule that scene depths of adjacent regions on an image are close to each other; separately estimating a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of the current frame and a three-dimensional location of the scene point in a local coordinate system of a next frame; estimating a motion parameter of the binocular camera on the next frame using invariance of center-of-mass coordinates to rigid transformation according to the three-dimensional location of the scene point corresponding to the matching feature points; and optimizing the motion parameter of the binocular camera on the next frame. | 12-29-2016 |
20170236000 | METHOD OF EXTRACTING FEATURE OF IMAGE TO RECOGNIZE OBJECT | 08-17-2017 |
20170236009 | AUTOMATED CAMERA STITCHING | 08-17-2017 |
20170236010 | IMAGE PICKUP APPARATUS, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD | 08-17-2017 |
20170236014 | AUGMENTED OBJECT DETECTION USING STRUCTURED LIGHT | 08-17-2017 |
20170236015 | IMAGE PROCESSING DEVICE, ALARMING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD | 08-17-2017 |
20170236030 | OBJECT DETECTION APPARATUS, OBJECT DETECTION METHOD, AND STORAGE MEDIUM | 08-17-2017 |
20170236037 | METHODS FOR OBJECT RECOGNITION AND RELATED ARRANGEMENTS | 08-17-2017 |
20170236277 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING PROGRAM, AND IMAGE PROCESSING METHOD | 08-17-2017 |
20170236285 | POSITION DETERMINING TECHNIQUES USING IMAGE ANALYSIS OF MARKS WITH ENCODED OR ASSOCIATED POSITION DATA | 08-17-2017 |
20170236293 | Enhanced Contrast for Object Detection and Characterization By Optical Imaging Based on Differences Between Images | 08-17-2017 |
20170236301 | INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING INFORMATION PROCESSING APPARATUS, AND STORAGE MEDIUM | 08-17-2017 |
20170236302 | IMAGE PROCESSING METHOD, MOBILE DEVICE AND METHOD FOR GENERATING A VIDEO IMAGE DATABASE | 08-17-2017 |
20170237968 | SYSTEMS AND METHODS FOR FACILITATING THREE-DIMENSIONAL RECONSTRUCTION OF SCENES FROM VIDEOS | 08-17-2017 |
20180022621 | Control of Industrial Water Treatment Via Digital Imaging | 01-25-2018 |
20180024050 | Determining A Weed Percentage And Agricultural Control Device | 01-25-2018 |
20180024244 | System and Method for Increasing Resolution of Images Obtained from a Three-Dimensional Measurement System | 01-25-2018 |
20180024639 | AUTOMATED LEARNING AND GESTURE BASED INTEREST PROCESSING | 01-25-2018 |
20180024641 | METHOD AND SYSTEM FOR 3D HAND SKELETON TRACKING | 01-25-2018 |
20180025211 | CELL TRACKING CORRECTION METHOD, CELL TRACKING CORRECTION DEVICE, AND STORAGE MEDIUM WHICH STORES NON-TRANSITORY COMPUTER-READABLE CELL TRACKING CORRECTION PROGRAM | 01-25-2018 |
20180025220 | IMAGE CAPTURE, PROCESSING AND DELIVERY AT GROUP EVENTS | 01-25-2018 |
20180025227 | METHODS AND APPARATUS TO MEASURE BRAND EXPOSURE IN MEDIA STREAMS | 01-25-2018 |
20180025230 | Method and System for Motion Vector-Based Video Monitoring and Event Categorization | 01-25-2018 |
20180025232 | SURVEILLANCE | 01-25-2018 |
20180025235 | CROWDSOURCING THE COLLECTION OF ROAD SURFACE INFORMATION | 01-25-2018 |
20180025239 | METHOD AND IMAGE PROCESSING APPARATUS FOR IMAGE-BASED OBJECT FEATURE DESCRIPTION | 01-25-2018 |
20180025245 | SKETCH MISRECOGNITION CORRECTION SYSTEM BASED ON EYE GAZE MONITORING | 01-25-2018 |
20180025500 | METHOD OF TRACKING ONE OR MORE MOBILE OBJECTS IN A SITE AND A SYSTEM EMPLOYING SAME | 01-25-2018 |
20180027242 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM | 01-25-2018 |
20190143891 | ALERT CONTROL APPARATUS, ALERT CONTROL METHOD, AND RECORDING MEDIUM | 05-16-2019 |
20190145768 | Object Distance Detection Device | 05-16-2019 |
20190147216 | PUPIL POSITIONING DEVICE AND METHOD AND DISPLAY DRIVER OF VIRTUAL REALITY DEVICE | 05-16-2019 |
20190147220 | DETECTING OBJECTS IN VIDEO DATA | 05-16-2019 |
20190147221 | POSE ESTIMATION AND MODEL RETRIEVAL FOR OBJECTS IN IMAGES | 05-16-2019 |
20190147224 | NEURAL NETWORK BASED FACE DETECTION AND LANDMARK LOCALIZATION | 05-16-2019 |
20190147229 | METHODS AND SYSTEMS FOR PLAYING MUSICAL ELEMENTS BASED ON A TRACKED FACE OR FACIAL FEATURE | 05-16-2019 |
20190147233 | METHOD OF DETERMINING JOINT STRESS FROM SENSOR DATA | 05-16-2019 |
20190147235 | RECOGNITION OF ACTIVITY IN A VIDEO IMAGE SEQUENCE USING DEPTH INFORMATION | 05-16-2019 |
20190147237 | Human Body Posture Data Acquisition Method and System, and Data Processing Device | 05-16-2019 |
20190147243 | IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM | 05-16-2019 |
20190147245 | THREE-DIMENSIONAL OBJECT DETECTION FOR AUTONOMOUS ROBOTIC SYSTEMS USING IMAGE PROPOSALS | 05-16-2019 |
20190147246 | SYSTEM AND METHOD FOR PROVIDING AUGMENTED REALITY INTERACTIONS OVER PRINTED MEDIA | 05-16-2019 |
20190147253 | Autonomous Vehicle Lane Boundary Detection Systems and Methods | 05-16-2019 |
20190147256 | SERVER DEVICE AND IN-VEHICLE DEVICE | 05-16-2019 |
20190147257 | METHOD FOR DETECTING TRAFFIC SIGNS | 05-16-2019 |
20190147260 | Systems and Methods for Moving Object Predictive Locating, Reporting, and Alerting | 05-16-2019 |
20190147264 | CONCENTRATION DETERMINATION APPARATUS, CONCENTRATION DETERMINATION METHOD, AND PROGRAM FOR CONCENTRATION DETERMINATION | 05-16-2019 |
20190147265 | DISTRACTED DRIVING DETERMINATION APPARATUS, DISTRACTED DRIVING DETERMINATION METHOD, AND PROGRAM FOR DISTRACTED DRIVING DETERMINATION | 05-16-2019 |
20190147268 | EYELID OPENING/CLOSING DETERMINATION APPARATUS AND DROWSINESS DETECTION APPARATUS | 05-16-2019 |
20190147269 | INFORMATION PROCESSING APPARATUS, DRIVER MONITORING SYSTEM, INFORMATION PROCESSING METHOD AND COMPUTER-READABLE STORAGE MEDIUM | 05-16-2019 |
20190147283 | DEEP CONVOLUTIONAL NEURAL NETWORKS FOR CRACK DETECTION FROM IMAGE DATA | 05-16-2019 |
20190147284 | SPATIO-TEMPORAL ACTION AND ACTOR LOCALIZATION | 05-16-2019 |
20190147285 | OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD AND NON-TRANSITORY COMPUTER READABLE MEDIUM | 05-16-2019 |
20190147292 | IMAGE RETRIEVING APPARATUS, IMAGE RETRIEVING METHOD, AND SETTING SCREEN USED THEREFOR | 05-16-2019 |
20190147587 | SYSTEM AND METHOD FOR TOOL MAPPING | 05-16-2019 |
20190147596 | SYSTEMS AND METHODS FOR HORIZON IDENTIFICATION IN AN IMAGE | 05-16-2019 |
20190147597 | MAXIMUM CONNECTED DOMAIN MARKING METHOD, TARGET TRACKING METHOD, AND AUGMENTED REALITY/VIRTUAL REALITY APPARATUS | 05-16-2019 |
20190147601 | INFORMATION PROCESSING APPARATUS, BACKGROUND IMAGE UPDATE METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM | 05-16-2019 |
20190147602 | HYBRID AND SELF-AWARE LONG-TERM OBJECT TRACKING | 05-16-2019 |
20190147605 | DETECTION TARGET POSITIONING DEVICE, DETECTION TARGET POSITIONING METHOD, AND SIGHT TRACKING DEVICE | 05-16-2019 |
20190147607 | SYSTEMS AND METHODS FOR GAZE TRACKING FROM ARBITRARY VIEWPOINTS | 05-16-2019 |
20190147610 | End-to-End Tracking of Objects | 05-16-2019 |
20190147611 | OBJECT SENSING SYSTEM, OBJECT SENSING METHOD, AND RECORDING MEDIUM STORING PROGRAM CODE | 05-16-2019 |
20190147613 | ESTIMATION OF HUMAN ORIENTATION IN IMAGES USING DEPTH INFORMATION | 05-16-2019 |
20190147615 | INFORMATION PROVIDING APPARATUS AND INFORMATION PROVIDING METHOD | 05-16-2019 |
20190147616 | Method and device for image rectification | 05-16-2019 |
20190147617 | METHOD AND APPARATUS FOR PROCESSING A PLURALITY OF UNDIRECTED GRAPHS | 05-16-2019 |
20190147647 | SYSTEM AND METHOD FOR DETERMINING GEO-LOCATION(S) IN IMAGES | 05-16-2019 |
20190149696 | Aerial Imaging Privacy Enhancement System | 05-16-2019 |
20190149740 | Image tracking device | 05-16-2019 |
20220136315 | INFORMATION PROCESSING DEVICE - An information processing device of the present invention includes: a matching means that executes a matching process of performing matching between an object within a captured image obtained by capturing a pre-passing side region of a gate and a previously registered object; a distance estimating means that estimates a distance from the gate to the object within the captured image by using a reference value set based on an attribute of the object within the captured image; and a gate controlling means that controls opening and closing of the gate based on a result of the matching and the estimated distance to the object within the captured image. | 05-05-2022 |
20220136316 | INFORMATION PROCESSING DEVICE - An information processing device of the present invention includes: a matching means that executes a matching process of performing matching between an object within a captured image obtained by capturing a pre-passing side region of a gate and a previously registered object; a distance estimating means that estimates a distance from the gate to the object within the captured image by using a reference value set based on an attribute of the object within the captured image; and a gate controlling means that controls opening and closing of the gate based on a result of the matching and the estimated distance to the object within the captured image. | 05-05-2022 |
20220138451 | SYSTEM AND METHOD FOR IMPORTANCE RANKING FOR COLLECTIONS OF TOP-DOWN AND TERRESTRIAL IMAGES - A method includes obtaining an object comprising a plurality of images, each image comprising at least one of: one or more tie points or one or more stray points. The method also includes determining an importance score of each tie point of the one or more tie points in the images. The method also includes determining an importance score of each stray point of the one or more stray points in the images. The method also includes determining an importance score of the object based on the importance score of the one or more tie points and the importance score of the one or more stray points. The method also includes providing the importance score of the object as an output for selection of the object for image labeling. | 05-05-2022 |
20220138459 | RECOGNITION SYSTEM OF HUMAN BODY POSTURE, RECOGNITION METHOD OF HUMAN BODY POSTURE, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - A recognition system of human body posture includes a source image device, a storage device, and a processing device. The storage device is configured to store a posture recognition model and the posture recognition model is configured for inputting a skeleton image and outputting a recognition result. The skeleton image includes a skeleton and the skeleton includes a plurality of joints and a plurality of limbs. Each of the limbs corresponds to a limb color, and each of the limb colors is different from each other. The processing device is configured to: generate the skeleton images from the pending recognition images; input the skeleton images into the posture recognition model respectively to output the recognition result which corresponds to the skeleton images inputted; and determine whether abnormal information is sent according to the recognition result. | 05-05-2022 |
20220138485 | PROCESS FOR DETECTION OF THE PRESENCE OF AN OBJECT IN A FIELD OF VISION OF A FLIGHT TIME SENSOR - In an embodiment, a method for detecting the presence of at least one object in the field of view of a time-of-flight sensor includes successively generating, by the time-of-flight sensor, histograms, each histogram comprising several classes, each associating a number of photons detected with a given acquisition period; adding several successively generated histograms so as to obtain a summed histogram; and analyzing the summed histogram to detect the presence of at least one object in the field of view of the time-of-flight sensor. | 05-05-2022 |
20220138493 | METHOD AND APPARATUS WITH ADAPTIVE OBJECT TRACKING - Disclosed is a method and apparatus for adaptive tracking of a target object. The method includes estimating a dynamic characteristic of an object in an input image based on frames of the input image, determining a size of a crop region for a current frame of the input image based on the dynamic characteristic of the object, generating a cropped image by cropping the current frame based on the size of the crop region, and generating a result of tracking the object for the current frame using the cropped image. | 05-05-2022 |
20220138926 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus acquires a plurality of images related to a specified article, decides suitability of each of the plurality of images for estimating a state of the article, and outputs state information indicating an estimated state of the specified article based on a result of the decision and at least a part of the plurality of images. | 05-05-2022 |
20220138959 | ASSET TRACKING SYSTEMS - The disclosed technology includes image-based systems and methods for object tracking within an asset area. Some exemplary methods include receiving an indication of a first object entering an asset area and receiving data indicative of a plurality of captured images. The methods also include performing, by at least one processor, object classification of the first object based on one or more of the plurality of captured images. The methods further include determining a first object location of the first object based at least in part on the object classification, and outputting an indication of the first object location. | 05-05-2022 |
20220138964 | FRAME PROCESSING AND/OR CAPTURE INSTRUCTION SYSTEMS AND TECHNIQUES - Techniques and systems are provided for processing one or more frames or images. For instance, a process for determining exposure for one or more frames includes obtaining a motion map for one or more frames. The process includes determining, based on the motion map, motion associated with the one or more frames of a scene. The motion corresponds to movement of one or more objects in the scene relative to a camera used to capture the one or more frames. The process includes determining, based on the determined motion, a number of frames and an exposure duration for capturing the number of frames. The process further includes sending a request to capture the number of frames using the determined exposure duration. | 05-05-2022 |
20220138967 | Markerless Motion Capture of Animate Subject with Prediction of Future Motion - A motion prediction system for predicting the motion of a random animate subject. A first neural network is a markerless motion capture network, trained to receive video data of the subject and to process the video data to generate a time sequence of musculoskeletal motion capture data. A second neural network is a motion prediction network, trained to receive the musculoskeletal motion capture data and to process the data to generate a prediction of the subject's future location based on changes in the positions of joints and/or muscles. | 05-05-2022 |
20220139077 | STITCHED IMAGE - Various embodiments associated with a composite image are described. In one embodiment, a handheld device comprises a launch component configured to cause a launch of a projectile. The projectile is configured to capture a plurality of images. Individual images of the plurality of images are of different segments of an area. The system also comprises an image stitch component configured to stitch the plurality of images into a composite image. The composite image is of a higher resolution than a resolution of individual images of the plurality of images. | 05-05-2022 |
20220139078 | UNMANNED AERIAL VEHICLE, COMMUNICATION METHOD, AND PROGRAM - The present disclosure relates to an unmanned aerial vehicle, a communication method, and a program capable of more accurately identifying an identification target. | 05-05-2022 |
20220139086 | DEVICE AND METHOD FOR GENERATING OBJECT IMAGE, RECOGNIZING OBJECT, AND LEARNING ENVIRONMENT OF MOBILE ROBOT - According to the present invention, disclosed are a device and a method for generating an object image, recognizing an object, and learning the environment of a mobile robot. While the autonomous mobile robot is being charged, a deep learning algorithm allows the robot to create a map and load environment information acquired during its autonomous movement; the device and method may be used in an application that determines a location by recognizing objects such as furniture, checking the locations of the recognized objects, and marking those locations on the map. | 05-05-2022 |
20220139093 | TRAVEL ENVIRONMENT ANALYSIS APPARATUS, TRAVEL ENVIRONMENT ANALYSIS SYSTEM, AND TRAVEL ENVIRONMENT ANALYSIS METHOD - An object is to provide a travel environment analysis apparatus that accurately estimates an event occurring outside of a vehicle. A travel environment analysis apparatus includes a gaze point concentration area detector, a gaze point concentration area event estimation unit, and an information output unit. The gaze point concentration area detector sequentially detects a gaze point concentration area, which is an area outside of a plurality of vehicles that is gazed at by occupants of the plurality of vehicles, based on line-of-sight information related to the occupants' lines of sight. The gaze point concentration area event estimation unit estimates, when a newly appearing gaze point concentration area is detected, an event occurring in the new gaze point concentration area. The information output unit outputs information on the new gaze point concentration area and information on the event. | 05-05-2022 |
20220139180 | CUSTOM EVENT DETECTION FOR SURVEILLANCE CAMERAS - A system trains and uses event recognition models for recognizing custom event types defined by a user within the camera feed of a surveillance camera. The camera can be fixed-view, with a relatively constant position and angle, and the background of the video images can likewise be relatively constant. A user interface receives, from a user, positive and negative samples of the event in question, such as a designation of live or pre-recorded portions of a camera feed as positive or negative examples of the event. Based on the samples, the system trains an event recognition model (e.g., using few-shot learning techniques) to detect occurrences of custom event types in the camera feed. A response is performed based on detected occurrences of the event. The user can flag mistakes (false positives or false negatives), which can be incorporated into the model to enhance its accuracy. | 05-05-2022 |