Patent application number | Description | Published |
20080291279 | Method and System for Performing Video Flashlight - In an immersive surveillance system, videos or other data from a large number of cameras and other sensors are managed and displayed by a video processing system that overlays the data within a rendered 2D or 3D model of a scene. The system has a viewpoint selector configured to allow a user to selectively identify a viewpoint from which to view the site. A video control system receives data identifying the viewpoint and, based on the viewpoint, automatically selects the subset of the plurality of cameras that is generating video relevant to the view from that viewpoint, and causes video from that subset to be transmitted to the video processing system. As the viewpoint changes, the cameras communicating with the video processor are changed, handing off to cameras generating video relevant to the new viewpoint. Playback in the immersive environment is provided by synchronization of time-stamped video recordings. Navigation of the viewpoint on constrained paths in the model, as well as map-based navigation, is also provided. | 11-27-2008 |
20090237508 | METHOD AND APPARATUS FOR PROVIDING IMMERSIVE SURVEILLANCE - A method and apparatus for providing immersive surveillance wherein a remote security guard may monitor a scene using a variety of imagery sources that are rendered upon a model to provide a three-dimensional conceptual view of the scene. Using a view selector, the security guard may dynamically select a camera view to be displayed on his conceptual model, perform a walk through of the scene, identify moving objects and select the best view of those moving objects and so on. | 09-24-2009 |
20090310867 | BUILDING SEGMENTATION FOR DENSELY BUILT URBAN REGIONS USING AERIAL LIDAR DATA - A method for extracting a 3D terrain model identifying at least buildings and terrain from LIDAR data is disclosed, comprising the steps of generating a point cloud representing terrain and buildings mapped by LIDAR; classifying points in the point cloud, the point cloud having ground and non-ground points, the non-ground points representing buildings and clutter; segmenting the non-ground points into buildings and clutter; and calculating a fit between at least one building segment and at least one rectilinear structure, wherein the fit yields the rectilinear structure with the fewest vertices. The step of calculating further comprises the steps of (a) calculating a fit of a rectilinear structure to the at least one building segment, wherein each of the vertices has an angle that is a multiple of 90 degrees; (b) counting the number of vertices; (c) rotating the at least one building segment about an axis by a predetermined increment; and (d) repeating steps (a)-(c) until the rectilinear structure with the fewest vertices is found. | 12-17-2009 |
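The rotate-fit-count loop of steps (a)-(c) above can be illustrated with a minimal sketch. This is not the patented method: it substitutes axis-aligned bounding-box area for the abstract's vertex count as the fitness score (for a simple rectangular footprint the two criteria select the same alignment), and all function names are illustrative assumptions.

```python
import math

def rotate(points, theta):
    """Rotate 2D points by theta radians about the origin (step c)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def bbox_area(points):
    """Area of the axis-aligned bounding box: our stand-in fit score."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def best_alignment(points, increment_deg=1.0):
    """Sweep rotations in fixed increments and keep the angle whose
    axis-aligned fit is tightest (steps a, b, d of the abstract)."""
    best_theta, best_score = 0.0, float("inf")
    theta = 0.0
    while theta < 90.0:  # rectilinear fits repeat every 90 degrees
        score = bbox_area(rotate(points, math.radians(theta)))
        if score < best_score:
            best_theta, best_score = theta, score
        theta += increment_deg
    return best_theta, best_score
```

For a 4x2 rectangle rotated by 30 degrees, the sweep recovers the 60-degree counter-rotation that re-aligns it with the axes, at which point the bounding-box area drops back to the true footprint area of 8.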
20100073482 | Method and apparatus for providing a scalable multi-camera distributed video processing and visualization surveillance system - A scalable architecture for providing real-time multi-camera distributed video processing and visualization. An exemplary system comprises at least one video capture and storage system for capturing and storing a plurality of input videos, at least one vision-based alarm system for detecting and reporting alarm situations or events, and at least one video rendering system (e.g., a video flashlight system) for displaying an alarm situation in a context that speeds up comprehension and response. One advantage of the present architecture is that these systems are all scalable, such that additional sensors (e.g., cameras, motion sensors, infrared sensors, chemical sensors, biological sensors, temperature sensors, and the like) can be added in large numbers without overwhelming the ability of security forces to comprehend the alarm situation. | 03-25-2010 |
20100103196 | SYSTEM AND METHOD FOR GENERATING A MIXED REALITY ENVIRONMENT - A system and method for generating a mixed-reality environment is provided. The system and method provide a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining the user's real-world scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting them into the user's field of view. Rendering the synthetic objects on the user-worn display creates the effect for the user that the synthetic objects are present in the real world. | 04-29-2010 |
20110077813 | AUDIO BASED ROBOT CONTROL AND NAVIGATION - A computer-implemented method for unattended detection of a current terrain to be traversed by a mobile device is disclosed. Visual input of the current terrain is received for a plurality of positions. Audio input corresponding to the current terrain is received for the plurality of positions. The visual input is fused with the audio input using a classifier. The type of the current terrain is classified with the classifier. The classifier may also be employed to predict the type of terrain proximal to the current terrain. The classifier is constructed using an expectation-maximization (EM) method. | 03-31-2011 |
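The fusion of visual and audio input by a classifier, as described above, can be sketched in miniature. The patent builds its classifier with expectation-maximization; the stand-in below skips EM and simply fuses hand-set per-terrain Gaussian likelihoods from each modality (late fusion by summing log-likelihoods), so every model, feature value, and terrain name here is a made-up assumption.

```python
import math

def gaussian_logpdf(x, mean, var):
    """Log-density of a 1D Gaussian: the per-modality evidence term."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def fuse_and_classify(visual_feat, audio_feat, models):
    """Late fusion: sum visual and audio log-likelihoods per terrain
    class and return the argmax (a toy stand-in for the EM-trained
    classifier of the abstract)."""
    best, best_ll = None, float("-inf")
    for terrain, m in models.items():
        ll = (gaussian_logpdf(visual_feat, *m["visual"])
              + gaussian_logpdf(audio_feat, *m["audio"]))
        if ll > best_ll:
            best, best_ll = terrain, ll
    return best

# Hypothetical (mean, variance) models for two terrain types.
terrain_models = {
    "gravel": {"visual": (0.8, 0.05), "audio": (0.7, 0.05)},
    "grass":  {"visual": (0.2, 0.05), "audio": (0.3, 0.05)},
}
print(fuse_and_classify(0.75, 0.65, terrain_models))  # -> gravel
```

In a fuller version the `(mean, variance)` pairs would be fitted by EM from labeled traversal data rather than set by hand.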
20110176722 | SYSTEM AND METHOD OF PROCESSING STEREO IMAGES - The present invention is a system and method for processing stereo images using a real-time, robust, and accurate stereo matching approach based on a coarse-to-fine architecture. At each image pyramid level, non-centered matching windows and adaptive upsampling of coarse-level disparities are used to generate estimated disparity maps via the ACTF approach. To minimize propagation of disparity errors from coarser to finer levels, the present invention performs an iterative optimization at each level that minimizes a cost function to generate smooth disparity maps with crisp occlusion boundaries. | 07-21-2011 |
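The coarse-to-fine idea behind the abstract above (match at a coarse pyramid level, upsample the disparities, then refine with a narrow search at the finer level) can be sketched in 1D. This is a deliberately minimal illustration, not the ACTF method: it uses only two levels, centered absolute-difference costs, and no adaptive windows, occlusion handling, or iterative optimization; all names are assumptions.

```python
def match_1d(left, right, search, init=None):
    """Brute-force best disparity per pixel by absolute difference,
    optionally searching only around an initial per-pixel estimate."""
    disp = []
    for i, lv in enumerate(left):
        center = init[i] if init is not None else 0
        best_d, best_cost = center, float("inf")
        for d in range(center - search, center + search + 1):
            j = i - d  # candidate match position in the right signal
            if 0 <= j < len(right):
                cost = abs(lv - right[j])
                if cost < best_cost:
                    best_d, best_cost = d, cost
        disp.append(best_d)
    return disp

def coarse_to_fine(left, right):
    # Coarse level: half resolution, wide search range.
    coarse = match_1d(left[::2], right[::2], search=4)
    # Upsample the coarse disparities (doubling both index and
    # magnitude), then refine with a narrow +/-1 search at full
    # resolution, so errors cannot grow unchecked at the fine level.
    init = [2 * coarse[min(i // 2, len(coarse) - 1)]
            for i in range(len(left))]
    return match_1d(left, right, search=1, init=init)
```

Run on a signal shifted by a constant disparity of 2, the coarse level finds disparity 1 at half resolution and the fine level refines the doubled estimate to the exact answer everywhere away from the border.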
20120206596 | UNIFIED FRAMEWORK FOR PRECISE VISION-AIDED NAVIGATION - A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units. | 08-16-2012 |
20140176603 | METHOD AND APPARATUS FOR MENTORING VIA AN AUGMENTED REALITY ASSISTANT - A method and apparatus for training and guiding users, comprising: generating a scene understanding based on video and audio input of a scene of a user performing a task in the scene; correlating the scene understanding with a knowledge base to produce a task understanding, comprising one or more goals, of the user's current activity; reasoning, based on the task understanding and the user's current state, a next step for advancing the user toward completing one of the one or more goals of the task understanding; and overlaying the scene with an augmented reality view comprising one or more visual and audio representations of the next step to the user. | 06-26-2014 |
20140316191 | Biofeedback Virtual Reality Sleep Assistant - Biofeedback virtual reality sleep assistant technologies monitor one or more physiological parameters while presenting an immersive environment. The presentation of the immersive environment changes over time in response to changes in the values of the physiological parameters. The changes in the presentation of the immersive environment are configured using biofeedback technology and are designed to promote sleep. | 10-23-2014 |
20140316192 | Biofeedback Virtual Reality Sleep Assistant - Biofeedback virtual reality sleep assistant technologies monitor one or more physiological parameters while presenting an immersive environment. The presentation of the immersive environment changes over time in response to changes in the values of the physiological parameters. The changes in the presentation of the immersive environment are configured using biofeedback technology and are designed to promote sleep. | 10-23-2014 |