Patent application number | Description | Published |
--- | --- | --- |
20120304085 | Multi-Sensor Surveillance System with a Common Operating Picture - A method and apparatus for processing video data streams for an area. The video data streams are generated by cameras, and objects in the area are identified from images in the streams. First locations are identified for the objects using the images and are defined using a coordinate system for the images. Graphical representations are formed for the objects using the images and are displayed in second locations in a model of the area on a display system, with respect to features in the area that are represented in the model. The second locations are defined using a geographic coordinate system for the model. The first location of an object corresponds to the second location of its graphical representation. | 11-29-2012 |
20130083959 | Multi-Modal Sensor Fusion - A method and apparatus for processing images. A sequence of images for a scene is received from an imaging system. An object in the scene is detected using the sequence of images. A viewpoint of the imaging system is registered to a model of the scene using a region in the model of the scene in which an expected behavior of the object is expected to occur. | 04-04-2013 |
20140093127 | Method and System for Using Fingerprints to Track Moving Objects in Video - A method and system for tracking moving objects in a sequence of images. In one illustrative embodiment, a current image in the sequence of images is segmented into a plurality of segments. Segments in the plurality of segments belonging to a same motion profile are fused together to form a set of master segments. A set of target segments is identified from the set of master segments. The set of target segments represent a set of moving objects in the current image. A set of fingerprints is created for use in tracking the set of moving objects in a number of subsequent images in the sequence of images. | 04-03-2014 |
20140132733 | Backfilling Points in a Point Cloud - An apparatus, system, and method for increasing points in a point cloud. In one illustrative embodiment, a two-dimensional image of a scene and the point cloud of the scene are received. At least a portion of the points in the point cloud are mapped to the two-dimensional image to form transformed points. A fused data array is created using the two-dimensional image and the transformed points. New points for the point cloud are identified using the fused data array. The new points are added to the point cloud to form a new point cloud. | 05-15-2014 |
20150036870 | Automated Graph Local Constellation (GLC) Method of Correspondence Search for Registration of 2-D and 3-D Data - According to an embodiment, a 2-dimensional (2-D) image of a geographical region is transformed via a regional maxima transform (RMT) or an edge segmenting and boundary filling (ESBF) transform to produce a filtered 2-D image. The filtered 2-D image is iteratively eroded and opened to produce a processed EO 2-D image, 2-D object shape morphology is extracted from the processed EO 2-D image, and 2-D shape properties are extracted from the 2-D object shape morphology. A height slice of a 3-dimensional (3-D) point cloud comprising 3-D coordinate and intensity measurements of the geographical region is generated, and slice object shape morphology is extracted from the height slice. Slice shape properties are extracted from the slice object shape morphology, and the 2-D image is constellation matched to the height slice based on the 2-D shape properties and the slice shape properties. | 02-05-2015 |
20150036916 | Stereo-Motion Method of Three-Dimensional (3-D) Structure Information Extraction from a Video for Fusion with 3-D Point Cloud Data - According to an embodiment, a method for generating a 3-D stereo structure comprises registering and rectifying a first image frame and a second image frame by local correction matching, extracting a first scan line from the first image frame, extracting a second scan line from the second image frame corresponding to the first scan line, calculating a pixel distance between the first scan line and the second scan line for each pixel for a plurality of pixel shifts, calculating a smoothed pixel distance for each pixel for the pixel shifts by filtering the pixel distance for each pixel over the pixel shifts, and determining a scaled height for each pixel of the first scan line, the scaled height comprising a pixel shift from among the pixel shifts corresponding to a minimal distance of the smoothed pixel distance for the pixel. | 02-05-2015 |
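The coordinate mapping at the heart of 20120304085, taking an object's location in the image coordinate system to a second location in the model's geographic coordinate system, can be illustrated with a planar homography. This is a sketch under an assumption the abstract does not make: the patent never names the transform, and the function `image_to_geo` and the 3x3 matrix `H` are illustrative, not from the filing.

```python
import numpy as np

def image_to_geo(pixel_xy, H):
    """Map an image-plane coordinate to a map coordinate with a 3x3
    homography H. Assumes the viewed area is approximately planar."""
    x, y = pixel_xy
    u, v, w = H @ np.array([x, y, 1.0])
    return (u / w, v / w)

# Example: a pure scale-by-two mapping from pixels to map units.
H = np.diag([2.0, 2.0, 1.0])
```

In practice `H` would be estimated from surveyed ground-control points rather than written by hand.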
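The fusion step of 20140093127, merging segments that belong to the same motion profile into master segments, reduces to grouping segment ids by their motion estimates once each segment has one. A minimal sketch, assuming quantized motion vectors used as dictionary keys; the names `fuse_segments` and `segment_motion` are illustrative assumptions:

```python
from collections import defaultdict

def fuse_segments(segment_motion):
    """Group segment ids that share the same quantized motion vector.
    segment_motion maps segment id -> (dx, dy) motion estimate; each
    returned group is one candidate master segment."""
    groups = defaultdict(list)
    for seg_id, motion in segment_motion.items():
        groups[motion].append(seg_id)
    return [sorted(ids) for ids in groups.values()]
```

Target segments and their fingerprints would then be derived from these master segments, a step the abstract leaves unspecified.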
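The backfilling flow of 20140132733, scattering transformed point-cloud samples into a fused 2-D data array and then identifying new points from that array, can be sketched with a depth image whose empty cells are filled from the nearest known sample on the same row. The nearest-on-row fill rule is an assumption for illustration; the abstract only says new points are identified using the fused data array.

```python
import numpy as np

def backfill_depths(shape, samples):
    """Scatter known (u, v, depth) samples into a fused 2-D array, then
    propose a new point for each empty cell on any row that has at
    least one known depth, copying the row's nearest known depth."""
    fused = np.full(shape, np.nan)
    for u, v, z in samples:
        fused[v, u] = z
    new_points = []
    for v in range(shape[0]):
        known = np.flatnonzero(~np.isnan(fused[v]))
        if known.size == 0:
            continue  # nothing on this row to copy from
        for u in np.flatnonzero(np.isnan(fused[v])):
            nearest = known[np.argmin(np.abs(known - u))]
            new_points.append((int(u), v, float(fused[v, nearest])))
    return fused, new_points
```

The proposed points would then be back-projected into 3-D and appended to form the new point cloud.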
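The height-slice step of 20150036870, cutting a band of the 3-D point cloud at a chosen elevation before extracting slice shape morphology, is simple to show. A minimal sketch, assuming points are stored as rows of (x, y, z) and the band is half-open:

```python
import numpy as np

def height_slice(points, z_lo, z_hi):
    """Return the points whose height (z) falls in [z_lo, z_hi)."""
    pts = np.asarray(points, dtype=float)
    mask = (pts[:, 2] >= z_lo) & (pts[:, 2] < z_hi)
    return pts[mask]
```

Morphology and shape properties would then be computed on the footprint of each slice, mirroring the processing applied to the filtered 2-D image.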
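The scan-line loop of 20150036916 is the most directly codeable of the listed methods: compute a per-pixel distance for every candidate pixel shift, filter each pixel's distance profile across the shifts, and take the shift with the minimal smoothed distance as the scaled height. A sketch under stated assumptions: absolute difference as the pixel distance, a moving-average filter for the smoothing, and edge replication at the end of the line, none of which the abstract pins down.

```python
import numpy as np

def scanline_heights(line_a, line_b, max_shift=4, smooth=3):
    """For each pixel of line_a, pick the shift of line_b whose
    smoothed distance is minimal; that shift plays the role of the
    scaled height in the abstract."""
    line_a = np.asarray(line_a, dtype=float)
    line_b = np.asarray(line_b, dtype=float)
    n = line_a.size
    dist = np.empty((max_shift + 1, n))
    for s in range(max_shift + 1):
        # shift line_b left by s, repeating its last value at the edge
        shifted = np.concatenate([line_b[s:], np.full(s, line_b[-1])])
        dist[s] = np.abs(line_a - shifted)
    # filter each pixel's distance profile across neighbouring shifts
    kernel = np.ones(smooth) / smooth
    smoothed = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, dist)
    return np.argmin(smoothed, axis=0)
```

With rectified frames, the recovered shifts are disparities, which is why the abstract treats them as scaled heights for fusion with point-cloud data.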