| Patent application number | Description | Published |
| --- | --- | --- |
20140067869 | METHOD AND APPARATUS FOR CONTENT ASSOCIATION AND HISTORY TRACKING IN VIRTUAL AND AUGMENTED REALITY - A machine-implemented method includes establishing a virtual or augmented reality data entity and establishing a state for that entity, the state having a state time and state properties including a state spatial arrangement. The data entity and state are stored, and are subsequently received and outputted at a time other than the state time so as to exhibit a “virtual time machine” functionality. An apparatus includes a processor, a data store, and an output. A data entity establisher, a state establisher, a storer, a data entity receiver, a state receiver, and an outputter are instantiated on the processor. | 03-06-2014 |
20140118570 | METHOD AND APPARATUS FOR BACKGROUND SUBTRACTION USING FOCUS DIFFERENCES - First and second images are captured at first and second focal lengths, the second focal length being longer than the first. Element sets are defined with a first element of the first image and a corresponding second element of the second image. An element set is identified as background if its second element is more in-focus than, or as in-focus as, the first element. Background elements are subtracted from further analysis. Comparisons are based on relative focus, e.g. whether image elements are more or less in-focus; measurement of absolute focus or of absolute focus change is not necessary, and the images need not be in focus. More than two images, multiple element sets, and/or multiple categories and relative-focus relationships may also be used. | 05-01-2014 |
20140125557 | METHOD AND APPARATUS FOR A THREE DIMENSIONAL INTERFACE - Method and apparatus for interacting with a three dimensional interface. In the method, a three dimensional interface with at least one virtual object is generated. An interaction zone is defined and generated, enclosing some or all of the object. A stimulus of the interaction zone is defined, e.g. approach or contact with a finger or stylus, and a response to the stimulus is defined, e.g. changes to the object, system actions, feedback, etc. When the stimulus is sensed, the response is executed. The apparatus includes a processor that generates a three dimensional interface with at least one virtual object, defines an interaction zone for the object, and defines a stimulus and a response. A display outputs the interface and object. A camera or other sensor detects stimulus of the interaction zone, whereupon the processor generates a response signal. The apparatus may be part of a head mounted display. | 05-08-2014 |
20140139340 | METHOD AND APPARATUS FOR POSITION AND MOTION INSTRUCTION - World data is established, including real-world position and/or real-world motion of an entity. Target data is established, including planned or ideal position and/or motion for the entity. Guide data is established, including information for guiding a person or other subject in bringing world data into match with target data. The guide data is outputted to the subject as virtual and/or augmented reality data. Evaluation data may be established, including a comparison of world data with target data. World data, target data, guide data, and/or evaluation data may be dynamically updated. Subjects may be instructed in positions and motions by using guide data to bring world data into match with target data, and by receiving evaluation data. Applications include physical therapy, sports, recreation, medical treatment, fabrication, diagnostics, repair of mechanical systems, etc. | 05-22-2014 |
20140159862 | METHOD AND APPARATUS FOR USER-TRANSPARENT SYSTEM CONTROL USING BIO-INPUT - A wearable sensor vehicle includes a bio-input sensor and a processor. When the vehicle is worn, the sensor is arranged so as to sense bio-input from the user. The sensor senses bio-input, the processor compares the bio-input to a standard, and if the standard is met the processor indicates a response. The user may be uniquely identified from the bio-input. One or more systems on or communicating with the vehicle may be controlled transparently, without requiring direct action by the user. Control actions may include security identification of the user, logging in to accounts or programs, setting preferences, etc. The sensor collects bio-input substantially without instruction or dedicated action from the user; the processor compares bio-input against the standard substantially without instruction or dedicated action from the user; and the processor generates and/or implements a response based on the bio-input substantially without instruction or dedicated action from the user. | 06-12-2014 |
20150092021 | APPARATUS FOR BACKGROUND SUBTRACTION USING FOCUS DIFFERENCES - First and second images are captured at first and second focal lengths, the second focal length being longer than the first. Element sets are defined with a first element of the first image and a corresponding second element of the second image. An element set is identified as background if its second element is more in-focus than, or as in-focus as, the first element. Background elements are subtracted from further analysis. Comparisons are based on relative focus, e.g. whether image elements are more or less in-focus; measurement of absolute focus or of absolute focus change is not necessary, and the images need not be in focus. More than two images, multiple element sets, and/or multiple categories and relative-focus relationships may also be used. | 04-02-2015 |
20150093022 | METHODS FOR BACKGROUND SUBTRACTION USING FOCUS DIFFERENCES - First and second images are captured at first and second focal lengths, the second focal length being longer than the first. Element sets are defined with a first element of the first image and a corresponding second element of the second image. An element set is identified as background if its second element is more in-focus than, or as in-focus as, the first element. Background elements are subtracted from further analysis. Comparisons are based on relative focus, e.g. whether image elements are more or less in-focus; measurement of absolute focus or of absolute focus change is not necessary, and the images need not be in focus. More than two images, multiple element sets, and/or multiple categories and relative-focus relationships may also be used. | 04-02-2015 |
20150093030 | METHODS FOR BACKGROUND SUBTRACTION USING FOCUS DIFFERENCES - First and second images are captured at first and second focal lengths, the second focal length being longer than the first. Element sets are defined with a first element of the first image and a corresponding second element of the second image. An element set is identified as background if its second element is more in-focus than, or as in-focus as, the first element. Background elements are subtracted from further analysis. Comparisons are based on relative focus, e.g. whether image elements are more or less in-focus; measurement of absolute focus or of absolute focus change is not necessary, and the images need not be in focus. More than two images, multiple element sets, and/or multiple categories and relative-focus relationships may also be used. | 04-02-2015 |
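The relative-focus background test that recurs in the abstracts above (20140118570, 20150092021, 20150093022, 20150093030) can be sketched in a few lines: an element set is background when the element from the longer-focal-length image is at least as in-focus as its counterpart. The applications do not prescribe a focus measure, so the local-variance `sharpness` metric and the patch-list image representation below are illustrative assumptions only, not the claimed implementation.

```python
def sharpness(patch):
    """Local variance as a stand-in relative-focus measure (assumed, not
    specified by the applications); only comparisons between patches matter."""
    mean = sum(patch) / len(patch)
    return sum((p - mean) ** 2 for p in patch) / len(patch)

def classify_element_sets(first_image, second_image):
    """Pair corresponding elements (patches) from the first (shorter focal
    length) and second (longer focal length) images; mark a set as background
    when the second element is more in-focus than, or as in-focus as, the
    first. No absolute focus value is ever computed or compared to a threshold."""
    labels = []
    for first_elem, second_elem in zip(first_image, second_image):
        is_background = sharpness(second_elem) >= sharpness(first_elem)
        labels.append("background" if is_background else "foreground")
    return labels

# Toy "images" as lists of patches: under the variance metric, a flat patch
# reads as less in-focus than a high-contrast one.
near = [[10, 10, 10, 10], [0, 100, 0, 100]]   # first image (shorter focal length)
far  = [[0, 100, 0, 100], [10, 10, 10, 10]]   # second image (longer focal length)
print(classify_element_sets(near, far))  # ['background', 'foreground']
```

Note that the test uses only the ordering of the two focus values, matching the abstracts' point that neither image needs to be in focus.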
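The interaction-zone pattern described in 20140125557 — a zone enclosing a virtual object, a defined stimulus such as a fingertip approaching or contacting the zone, and a response executed when the stimulus is sensed — can likewise be sketched. The `InteractionZone` class, the spherical zone geometry, and the highlight response below are hypothetical illustrations; the application leaves the zone shape and response open.

```python
import math

class InteractionZone:
    """A zone enclosing a virtual object, with a defined stimulus (a tracked
    point entering the zone) and a defined response (an arbitrary callable).
    The sphere test is an assumed geometry, not the patent's implementation."""

    def __init__(self, center, radius, response):
        self.center = center      # (x, y, z) of the enclosed virtual object
        self.radius = radius      # extent of the zone around the object
        self.response = response  # callable executed when the stimulus occurs

    def sense(self, point):
        """Treat approach/contact (the point falling inside the zone) as the
        stimulus; execute the response and report whether it fired."""
        if math.dist(point, self.center) <= self.radius:
            self.response()
            return True
        return False

events = []
zone = InteractionZone(center=(0.0, 0.0, 1.0), radius=0.2,
                       response=lambda: events.append("object highlighted"))

zone.sense((1.0, 1.0, 1.0))  # fingertip far from the object: no response
zone.sense((0.0, 0.1, 1.0))  # fingertip inside the zone: response fires
print(events)  # ['object highlighted']
```

In practice the tracked point would come from the camera or other sensor the abstract mentions; here it is supplied directly to keep the sketch self-contained.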