Patent application number | Description | Published |
20080306628 | Multi-Modal Push Planner for Humanoid Robots - A multi-modal planning method and system that searches for a path in the most constrained mode first, and then expands the search to paths in less constrained modes. By searching for a path in the most constrained mode first, fewer resources are wasted searching for paths that do not lead to a feasible path in the most constrained mode. Multi-modal planning is performed by precomputing the feasibility and utility of transition configurations between two adjacent modes. The feasibility is used to exclude infeasible transition configurations in the most constrained mode from being sampled. The utility is used to bias sampling of the transition configurations so that transition configurations with higher utility are sampled with higher probability. Paths of configurations with higher utility and efficiency are obtained by biasing the sampling of the transition configurations. | 12-11-2008 |
20090080699 | 3D Beverage Container Localizer - Objects placed on a flat surface are identified and localized using a single view image. The single view image in perspective projection is transformed to a normalized image in a pseudo plan view to enhance detection of the bottom or top shapes of the objects. One or more geometric features are detected by processing the normalized image. The detected geometric features are analyzed to determine the identity and the location of the objects on the flat surface. | 03-26-2009 |
20090105879 | EVALUATION OF COMMUNICATION MIDDLEWARE IN A DISTRIBUTED HUMANOID ROBOT ARCHITECTURE - A blackboard system based on a publish-subscribe architecture for selecting and exchanging information among a plurality of processing modules, using filters that implement conditions described in a procedural language to reduce the amount of information transmitted between the processing modules. More than one filter may be dynamically installed in a message hub to select and collect the published information to be sent to a certain subscribing module. By using a procedural language to describe the filters, the message hub can more intelligently select the information to be sent to the subscribing module. This reduces the amount of information transmitted via communication channels. Further, the subscribing module may be relieved of the task of filtering the information received from the message hub, allowing it to devote more resources to other operations. | 04-23-2009 |
20090290758 | Rectangular Table Detection Using Hybrid RGB and Depth Camera Sensors - Objects having a flat surface, such as a table, are detected by processing a depth image and a color image. A mask indicating an area likely to include an object having the flat surface is generated by processing a depth image containing the depth information. A color image corresponding to the depth image is then cropped using the mask to detect a portion of the color image that likely includes the object having the flat surface. Geometric features of the cropped color image, such as lines, are then detected to determine the location and orientation of the object having the flat surface. A subset of the detected geometric features is selected as outlines of the flat surface. | 11-26-2009 |
20110004341 | Panoramic Attention For Humanoid Robots - A robot that uses less storage and fewer computational resources to embody panoramic attention. The robot includes a panoramic attention module with multiple levels that are hierarchically structured to process different levels of information. The top level of the panoramic attention module receives information about entities detected in the robot's environment and maps the entities onto a panoramic map maintained by the robot. By mapping and storing high-level entity information instead of low-level sensory information in the panoramic map, the storage and computational resources needed for panoramic attention can be reduced significantly. Further, the mapping and storing of high-level entity information in the panoramic map also facilitates consistent and logical processing of different conceptual levels of information. | 01-06-2011 |
20120191460 | SYNCHRONIZED GESTURE AND SPEECH PRODUCTION FOR HUMANOID ROBOTS - A system or method for generating gestures in a robot during generation of a speech output by the robot by analyzing a speech text and selecting appropriate gestures from a plurality of candidate gestures. The speech text is analyzed and tagged with information relevant to generating the gestures. Based on the speech text, the tagged information and other relevant information, a gesture identifier is selected. A gesture template corresponding to the gesture identifier is retrieved and then processed by adding relevant parameters to generate a gesture descriptor representing a gesture to be taken by the robot. A gesture motion is planned based on the gesture descriptor and analysis of timing associated with the speech. Actuator signals for controlling actuators such as those in the arms and hands are generated based on the planned gesture motion. | 07-26-2012 |
20120314020 | MOVE-IT: MONITORING, OPERATING, VISUALIZING, EDITING INTEGRATION TOOLKIT FOR RECONFIGURABLE PHYSICAL COMPUTING - A user interface screen for displaying data associated with operation of a robot, where the user interface screen includes one or more windows that can be rotated and then minimized into an icon to free up space for other windows. As user input for moving a window is received, the window moves to an edge of the screen. As further user input is received, the window is rotated about an axis and then minimized into an icon. In this way, the windows presented on the screen can be operated intuitively by the user. | 12-13-2012 |
20130293582 | METHOD TO GENERATE VIRTUAL DISPLAY SURFACES FROM VIDEO IMAGERY OF ROAD BASED SCENERY - Generating a virtual model of the environment in front of a vehicle based on images captured using an image capturing device. The images captured on an image capturing device of a vehicle are processed to extract features of interest. Based on the extracted features, a virtual model of the environment is constructed. The virtual model includes one or more surfaces. Each of the surfaces may be used as a reference surface to attach and move graphical elements generated to implement augmented reality (AR). As the vehicle moves, the graphical elements move as if they are affixed to one of the surfaces. By presenting graphical elements that move together with real objects in front of the vehicle, a driver perceives the graphical elements as part of the actual environment, which reduces distraction or confusion associated with the graphical elements. | 11-07-2013 |
20140267263 | AUGMENTED REALITY HEADS UP DISPLAY (HUD) FOR LEFT TURN SAFETY CUES - A method, augmented reality driving system and device safely guide a vehicle driver through a left turn. A vehicle navigator detects a left turn based upon the proximity and speed of a vehicle. A target sensor determines a current position and a relative vector for an oncoming vehicle approaching the left turn in a lane of opposing traffic. An augmented reality controller three-dimensionally maps a forward view including the oncoming vehicle and spatially overlays an augmented reality display on a volumetric heads up display for a driver of the vehicle by projecting a target path of the oncoming vehicle, based upon the current position and relative vector, and by projecting a left turn path. | 09-18-2014 |
20140267398 | AUGMENTED REALITY HEADS UP DISPLAY (HUD) FOR YIELD TO PEDESTRIAN SAFETY CUES - An augmented reality driver system, device, and method safely guide a vehicle driver to yield to pedestrians. A vehicle navigator determines a turn lane based upon proximity to a vehicle. A target sensor detects a pedestrian entering the turn lane and determines a crosswalk path across the turn lane. An augmented reality controller three-dimensionally maps a forward view including the pedestrian, and spatially overlays an augmented reality display on the volumetric heads up display for a driver of the vehicle by projecting a yielding indication adjacent to the crosswalk path. | 09-18-2014 |
20140267402 | VOLUMETRIC HEADS-UP DISPLAY WITH DYNAMIC FOCAL PLANE - A heads-up display device for displaying graphic elements in view of a user while the user views an environment through a display screen. The heads-up display device includes at least one projector that projects a graphic element on a frontal focal plane in view of the user while the user views the environment through the display screen, and at least one projector that projects a graphic element on a ground-parallel focal plane in view of the user while the user views the environment through the display screen. The projector that projects the graphic element on the frontal focal plane is mounted on an actuator that linearly moves the projector so as to cause the frontal focal plane to move in a direction of a line-of-sight of the user. The projector that projects the graphic element on the ground-parallel focal plane is fixedly arranged such that the ground-parallel focal plane is static. | 09-18-2014 |
20140268353 | 3-DIMENSIONAL (3-D) NAVIGATION - One or more embodiments of techniques or systems for 3-dimensional (3-D) navigation are provided herein. A heads-up display (HUD) component can project graphic elements on focal planes around an environment surrounding a vehicle. The HUD component can cause these graphic elements to appear volumetric or 3-D by moving or adjusting a distance between a focal plane and the vehicle. Additionally, a target position for graphic elements can be adjusted. This enables the HUD component to project graphic elements as moving avatars. In other words, adjusting the focal plane distance and the target position enables graphic elements to be projected in three dimensions along the x, y, and z axes. Further, a moving avatar can be ‘animated’ by sequentially projecting the avatar on different focal planes, thereby providing an occupant with the perception that the avatar is moving towards or away from the vehicle. | 09-18-2014 |
20140272812 | DRIVER TRAINING SYSTEM USING HEADS-UP DISPLAY AUGMENTED REALITY GRAPHICS ELEMENTS - A driver training system includes a training controller, a heads-up display device, and a driving cue adherence controller. The training controller is configured to receive inputs related to an operational state of a vehicle and an environment surrounding the vehicle, and to determine a driving cue based on the received inputs. The heads-up display device is configured to present the driving cue as an augmented reality graphic element in view of a driver by projecting graphic elements on a windshield of the vehicle. The driving cue adherence controller is configured to continuously determine a current level of adherence to the driving cue, and an aggregate level of adherence to the driving cue based on the continuously determined current level of adherence to the driving cue over a predetermined time period. The heads-up display device is configured to present the continuously determined aggregate level of adherence in view of the driver. | 09-18-2014 |
20140282259 | INFORMATION QUERY BY POINTING - Navigating through objects or items on a display device using a first input device to detect the pointing of a user's finger to an object or item, and a second input device to receive the user's indication of selection of the object or item. An image of the user's hand is captured by the first input device and processed to determine a location on the display device corresponding to the location of the fingertip of the pointing finger. The object or item corresponding to the location of the fingertip is selected after the second input device receives a predetermined user input. | 09-18-2014 |
20140354691 | VOLUMETRIC HEADS-UP DISPLAY WITH DYNAMIC FOCAL PLANE - A heads-up display device for displaying graphic elements in view of a user while the user views an environment through a display screen. The heads-up display device includes at least one projector that projects a graphic element on a frontal focal plane in view of the user while the user views the environment through the display screen, and at least one projector that projects a graphic element on a ground-parallel focal plane in view of the user while the user views the environment through the display screen. The projector that projects the graphic element on the frontal focal plane is mounted on an actuator that linearly moves the projector so as to cause the frontal focal plane to move in a direction of a line-of-sight of the user. The projector that projects the graphic element on the ground-parallel focal plane is fixedly arranged such that the ground-parallel focal plane is static. | 12-04-2014 |
20140354692 | VOLUMETRIC HEADS-UP DISPLAY WITH DYNAMIC FOCAL PLANE - A heads-up display device for displaying graphic elements in view of a user while the user views an environment through a display screen. The heads-up display device includes at least one projector that projects a graphic element on a dynamic, frontal focal plane in view of the user while the user views the environment through the display screen, and at least one projector that projects a graphic element on a static, ground-parallel focal plane in view of the user while the user views the environment through the display screen. A controller determines a target graphic element position and a graphic element size based on the target graphic element position for the graphic element projected on the frontal focal plane, so as to provide the user with an immersive three-dimensional heads-up display. | 12-04-2014 |
20140362195 | ENHANCED 3-DIMENSIONAL (3-D) NAVIGATION - One or more embodiments of techniques or systems for 3-dimensional (3-D) navigation are provided herein. A heads-up display (HUD) component can project, render, display, or present graphic elements on focal planes around an environment surrounding a vehicle. The HUD component can cause these graphic elements to appear volumetric or 3-D by moving or adjusting a distance between a focal plane and the vehicle. Objects within the environment may be tracked and identified, and corresponding graphic elements may be projected on, near, or around respective objects. For example, the HUD component may project graphic elements or pointers on pedestrians to alert the driver or operator of the vehicle to their presence. These pointers may stay ‘stuck’ on a pedestrian as he or she walks within the environment. Metadata associated with objects may be presented, such as address information, ratings, telephone numbers, logos, etc. | 12-11-2014 |
20140365228 | INTERPRETATION OF AMBIGUOUS VEHICLE INSTRUCTIONS - Various exemplary embodiments relate to a command interpreter for use in a vehicle control system in a vehicle for interpreting user commands, a vehicle interaction system including such a command interpreter, a vehicle including such a vehicle interaction system, and related method and non-transitory machine-readable storage medium, including: a memory and a processor, the processor being configured to: receive, from at least one human via a first input device, a first input having a first type; receive a second input having a second type via a second input device, wherein the second type comprises at least one of sensed information describing a surrounding environment of the vehicle and input received from at least one human; interpret both the first input and the second input to generate a system instruction; and transmit the system instruction to a different system of the vehicle. | 12-11-2014 |
20140375543 | SHARED COGNITION - A system includes at least one sensor, at least one display, and a computing device coupled to the at least one sensor and the at least one display. The computing device includes a processor and a computer-readable storage medium having computer-executable instructions embodied thereon. When executed by at least one processor, the computer-executable instructions cause the processor to receive information from at least a first occupant, identify an object based at least partially on the received information, and present, on the at least one display, a first image associated with the object to a second occupant. The first image is aligned substantially between an eye position of the second occupant and the object such that the at least one display appears either to overlay the first image over the object or to position the first image adjacent to the object with respect to the eye position of the second occupant. | 12-25-2014 |
20150022426 | SYSTEM AND METHOD FOR WARNING A DRIVER OF A POTENTIAL REAR END COLLISION - A system for indicating braking intensity to a main vehicle has an observational device monitoring positional and speed data of at least one vehicle proximate the main vehicle. A control unit is coupled to the observational device. The control unit processes the positional and speed data monitored by the observational device and generates graphical representations of the at least one vehicle proximate the main vehicle and graphical representations of a braking intensity level of the at least one vehicle proximate the main vehicle. | 01-22-2015 |
20150062168 | SYSTEM AND METHOD FOR PROVIDING AUGMENTED REALITY BASED DIRECTIONS BASED ON VERBAL AND GESTURAL CUES - A method and system for providing augmented reality based directions. The method and system include receiving a voice input based on verbal cues provided by one or more vehicle occupants in a vehicle. The method and system also include receiving a gesture input and a gaze input based on gestural cues and gaze cues provided by the one or more vehicle occupants in the vehicle. The method and system additionally include determining directives based on the voice input, the gesture input and the gaze input and associating the directives with the surrounding environment of the vehicle. Additionally, the method and system include generating augmented reality graphical elements based on the directives and the association of the directives with the surrounding environment of the vehicle. The method and system further include displaying the augmented reality graphical elements on a heads-up display system of the vehicle. | 03-05-2015 |
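The utility-biased, feasibility-filtered sampling described in application 20080306628 can be illustrated with a short sketch. This is not the patented implementation; all names (`sample_transition`, the dictionary-based `feasible`/`utility` tables) are hypothetical, and it only shows the core idea: infeasible transition configurations are excluded from the candidate pool, and the remaining candidates are drawn with probability proportional to their precomputed utility.

```python
import random

def sample_transition(configs, feasible, utility, rng=None):
    """Sample one transition configuration: drop infeasible candidates,
    then draw the rest weighted by precomputed utility (illustrative)."""
    rng = rng or random.Random()
    candidates = [c for c in configs if feasible[c]]   # feasibility filter
    weights = [utility[c] for c in candidates]         # utility bias
    return rng.choices(candidates, weights=weights, k=1)[0]

# Hypothetical example: "c" is infeasible and never sampled; "a" has
# higher utility than "b", so it is sampled far more often.
rng = random.Random(42)
configs = ["a", "b", "c"]
feasible = {"a": True, "b": True, "c": False}
utility = {"a": 0.9, "b": 0.1, "c": 0.5}
samples = [sample_transition(configs, feasible, utility, rng) for _ in range(1000)]
```

Because the weights are only a bias, low-utility but feasible configurations are still sampled occasionally, which keeps the planner from discarding them outright.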
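The filtered message hub of application 20090105879 can likewise be sketched. In this minimal, assumed design (the `MessageHub` class and its method names are invented for illustration), each subscriber installs a filter in the hub as an ordinary callable, standing in for a condition described in a procedural language; the hub evaluates the filter before delivery, so unwanted messages never reach the subscribing module.

```python
class MessageHub:
    """Toy publish-subscribe hub where filters run inside the hub,
    reducing traffic to subscribers (illustrative sketch only)."""

    def __init__(self):
        self._subs = []  # list of (filter, callback) pairs

    def subscribe(self, filt, callback):
        """Install a filter (a predicate over messages) for a subscriber."""
        self._subs.append((filt, callback))

    def publish(self, message):
        """Deliver the message only to subscribers whose filter accepts it."""
        for filt, callback in self._subs:
            if filt(message):
                callback(message)

# Hypothetical usage: the subscriber only ever receives "pose" messages,
# so it spends no resources discarding irrelevant traffic itself.
hub = MessageHub()
received = []
hub.subscribe(lambda m: m.get("topic") == "pose", received.append)
hub.publish({"topic": "pose", "x": 1.0})
hub.publish({"topic": "image", "data": "..."})
```

Running the filter in the hub rather than in the subscriber is what saves channel bandwidth: the rejected message is dropped before it is ever transmitted.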
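The depth-mask-then-crop step of application 20090290758 can be sketched with NumPy. This is an assumed simplification, not the patented pipeline: the function name `crop_by_depth` and the fixed near/far depth band are invented for illustration. It shows only the first two stages, building a mask from the depth image and cropping the corresponding color image to the mask's bounding box; line detection on the crop is omitted.

```python
import numpy as np

def crop_by_depth(depth, color, near, far):
    """Mask pixels whose depth lies in [near, far], then crop the color
    image to the mask's bounding box (illustrative sketch only)."""
    mask = (depth >= near) & (depth <= far)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no pixels in the depth band
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return color[y0:y1, x0:x1]

# Hypothetical example: a 3x3 near-depth region inside a 6x6 frame.
depth = np.full((6, 6), 10.0)
depth[2:5, 1:4] = 1.0
color = np.arange(36).reshape(6, 6)
crop = crop_by_depth(depth, color, near=0.5, far=2.0)
```

Restricting the later geometric-feature search to this crop is what makes the hybrid RGB-plus-depth approach cheaper than scanning the full color image.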