Patent application number | Description | Published |
20090182786 | APPLICATION COHERENCY MANAGER - An application coherency manager (ACM) implements and manages the interdependencies of simulation, data, and platform information to simplify the task of organizing and executing large simulations composed of numerous models and data files. One or more file systems or repositories store raw data in the form of files, data, or models, and a graphical user interface (GUI) enables a user to enter queries involving the files, data, or models and to receive results. One or more coherency checking modules (CCMs) are operative to determine the types and versions of, and the compatibility between, the files, data, or models. A database stores processed information about the file systems or repositories and the results of previous queries, and a data aggregator and manager (DAM) manages the flow of information between the file system or repository, the GUI, the CCMs, and the database. The invention is applicable to simulation and non-simulation applications such as document control, source code control, image libraries, etc. | 07-16-2009 |
20090184809 | RAPID PROTOTYPING AND MACHINE VISION FOR RECONFIGURABLE INTERFACES - A system and method, including software to aid in the generation of panels and control instruments, rapidly generates a station that can support a variety of control interfaces. Rapid-Prototyped Panels, or RP-Panels, replicate existing systems (for simulation, training, gaming, etc.) or realize new designs (for human factors testing, as functional products, etc.). The controls have tactile and visual characteristics similar or identical to their functional component counterparts such as buttons, knobs, switches, pedals, joysticks, steering wheels, and touch panels, but are modular and use alternative data transfer modes (potentiometers, fiber optics, RFID, machine vision, etc.) to track and analyze the response of the controls. The response is then transmitted to the host programs. With this method, a user can design and fabricate a reconfigurable interface to interact with virtual environments for various applications such as simulation, training, virtual instrumentation, gaming, human factors testing, etc. | 07-23-2009 |
20090322671 | TOUCH SCREEN AUGMENTED REALITY SYSTEM AND METHOD - An improved augmented reality (AR) system integrates a human interface and computing system into a single, hand-held device. A touch-screen display and a rear-mounted camera allow a user to interact with the AR content in a more intuitive way. A database stores graphical images or textual information about objects to be augmented. A processor is operative to analyze the imagery from the camera to locate one or more fiducials associated with a real object, determine the pose of the camera based upon the position or orientation of the fiducials, search the database to find graphical images or textual information associated with the real object, and display the graphical images or textual information in overlying registration with the imagery from the camera. | 12-31-2009 |
20100045701 | AUTOMATIC MAPPING OF AUGMENTED REALITY FIDUCIALS - Systems and methods expedite and improve the process of configuring an augmented reality environment. A method of pose determination according to the invention includes the step of placing at least one synthetic fiducial in a real environment to be augmented. A camera, which may include apparatus for obtaining directly measured camera location and orientation (DLMO) information, is used to acquire an image of the environment. The natural and synthetic fiducials are detected, and the pose of the camera is determined using a combination of the natural fiducials, the synthetic fiducial if visible in the image, and the DLMO information if determined to be reliable or necessary. The invention is not limited to architectural environments, and may be used with instrumented persons, animals, vehicles, and any other augmented or mixed reality applications. | 02-25-2010 |
20110102419 | ORIENTATION INVARIANT OBJECT IDENTIFICATION USING MODEL-BASED IMAGE PROCESSING - A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-Dimensional space. To limit the search space, it is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images. The system also includes the generation of shadows and allows the user to manipulate the sun angle to approximate the lighting conditions of the test range in the provided video. | 05-05-2011 |
20120263348 | ORIENTATION INVARIANT OBJECT IDENTIFICATION USING MODEL-BASED IMAGE PROCESSING - A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-Dimensional space. It is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images. | 10-18-2012 |
20140320486 | ORIENTATION INVARIANT OBJECT IDENTIFICATION USING MODEL-BASED IMAGE PROCESSING - A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-Dimensional space. To limit the search space, it is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images. The system also includes the generation of shadows and allows the user to manipulate the sun angle to approximate the lighting conditions of the test range in the provided video. | 10-30-2014 |
20150253775 | ALL WEATHER AUTONOMOUSLY DRIVEN VEHICLES - Autonomously driven vehicles operate in rain, snow, and other adverse weather conditions. An on-board vehicle sensor has a beam with a diameter that is only intermittently blocked by rain, snow, dust, or other obscurant particles. This allows an obstacle detection processor to distinguish between obstacles, terrain variations, and obscurant particles, thereby enabling the vehicle driving control unit to disregard the presence of obscurant particles along the route taken by the vehicle. The sensor may form part of a LADAR or RADAR system or a video camera. The obstacle detection processor may receive time-spaced frames divided into cells or pixels, whereby groups of connected cells or pixels, and/or cells or pixels that persist over longer periods of time, are interpreted to be obstacles or terrain variations. The system may further include an input for receiving weather-specific configuration parameters to adjust the operation of the obstacle detection processor. | 09-10-2015 |
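The model-based object identification abstracts (20110102419, 20120263348, 20140320486) describe matching a synthetic rendering against a "truth chip" by searching three degrees of freedom: distance, heading, and pitch angle, with flat ground and near-zero camera roll assumed. A minimal sketch of that search loop, assuming a hypothetical `render` callback standing in for the real graphics pipeline and sum-of-squared-differences as the comparison metric (the abstracts do not specify one):

```python
from itertools import product
from typing import Callable, List, Sequence, Tuple

Image = List[List[float]]
Pose = Tuple[float, float, float]  # (distance, heading, pitch) -- the 3 searched DOF

def ssd(a: Image, b: Image) -> float:
    """Sum of squared pixel differences between two equal-sized images."""
    return sum((pa - pb) ** 2 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def best_pose(truth_chip: Image,
              render: Callable[[Pose], Image],
              distances: Sequence[float],
              headings: Sequence[float],
              pitches: Sequence[float]) -> Tuple[Pose, float]:
    """Exhaustively search the 3-D pose space (flat ground and zero camera
    roll assumed, per the abstracts) for the rendering closest to the chip."""
    best_found, best_score = None, float("inf")
    for pose in product(distances, headings, pitches):
        score = ssd(truth_chip, render(pose))
        if score < best_score:
            best_found, best_score = pose, score
    return best_found, best_score
```

A real implementation would render the 3-D vehicle model (with shadows and an adjustable sun angle, as the abstracts note) rather than use a toy callback, but the pose-space enumeration has the same shape.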
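The all-weather driving abstract (20150253775) describes classifying cells in time-spaced frames as obstacles when they form connected groups and persist over time, while short-lived isolated returns (rain, snow, dust) are discarded. A small sketch of that persistence filter on binary occupancy grids; the function names, thresholds, and 4-connectivity choice are illustrative assumptions, not details from the patent:

```python
from typing import Dict, List, Set, Tuple

Cell = Tuple[int, int]

def connected_groups(frame: List[List[int]]) -> List[Set[Cell]]:
    """Group occupied cells (value 1) into 4-connected clusters."""
    rows, cols = len(frame), len(frame[0])
    seen: Set[Cell] = set()
    groups: List[Set[Cell]] = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] and (r, c) not in seen:
                stack, group = [(r, c)], set()
                while stack:
                    y, x = stack.pop()
                    if ((y, x) in seen or not (0 <= y < rows and 0 <= x < cols)
                            or not frame[y][x]):
                        continue
                    seen.add((y, x))
                    group.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
                groups.append(group)
    return groups

def persistent_obstacles(frames: List[List[List[int]]],
                         min_size: int = 2,
                         min_frames: int = 2) -> Set[Cell]:
    """Keep cells belonging to a sufficiently large cluster in at least
    `min_frames` of the time-spaced frames; isolated, transient returns
    (obscurant particles) never accumulate enough hits and are dropped."""
    counts: Dict[Cell, int] = {}
    for frame in frames:
        for group in connected_groups(frame):
            if len(group) >= min_size:
                for cell in group:
                    counts[cell] = counts.get(cell, 0) + 1
    return {cell for cell, n in counts.items() if n >= min_frames}
```

For example, a 2x2 block present in every frame survives the filter, while a single raindrop return that appears in one frame and a different position in the next does not.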