Patent application number | Description | Published |
--- | --- | --- |
20130131985 | WEARABLE ELECTRONIC IMAGE ACQUISITION AND ENHANCEMENT SYSTEM AND METHOD FOR IMAGE ACQUISITION AND VISUAL ENHANCEMENT - The system comprises a wearable, electronic image acquisition and processing system (or visual enhancement system) to guide visually impaired individuals through their environment, providing information to the user about nearby objects of interest, potentially dangerous obstacles, their location, and potential paths to their destination. | 05-23-2013 |
20150103011 | Holographic Interaction Device - A holographic interaction device is described. In one or more implementations, an input device includes an input portion comprising a plurality of controls that are configured to generate signals to be processed as inputs by a computing device that is communicatively coupled to the controls. The input device also includes a holographic recording mechanism disposed over a surface of the input portion; the holographic recording mechanism is configured to output a hologram, in response to receipt of light from a light source, that is viewable by a user over the input portion. | 04-16-2015 |
20150160785 | Object Detection in Optical Sensor Systems - Object detection techniques for use in conjunction with optical sensors are described. In one or more implementations, a plurality of inputs are received, each of the inputs being received from a respective one of a plurality of optical sensors. Each of the plurality of inputs is classified using machine learning as to whether the input is indicative of detection of an object by the respective optical sensor. | 06-11-2015 |
20150199018 | 3D SILHOUETTE SENSING SYSTEM - A 3D silhouette sensing system is described which comprises a stereo camera and a light source. In an embodiment, a 3D sensing module triggers the capture of pairs of images by the stereo camera at the same time that the light source illuminates the scene. A series of pairs of images may be captured at a predefined frame rate. Each pair of images is then analyzed to track both a retroreflector in the scene, which can be moved relative to the stereo camera, and an object which is between the retroreflector and the stereo camera and therefore partially occludes the retroreflector. In processing the image pairs, silhouettes are extracted for each of the retroreflector and the object and these are used to generate a 3D contour for each of the retroreflector and object. | 07-16-2015 |
20150205445 | Global and Local Light Detection in Optical Sensor Systems - Global and local light detection techniques in optical sensor systems are described. In one or more implementations, a global lighting value is generated that describes a global lighting level for a plurality of optical sensors based on a plurality of inputs received from the plurality of optical sensors. An illumination map is generated that describes local lighting conditions of respective ones of the plurality of optical sensors based on the plurality of inputs received from the plurality of optical sensors. Object detection is performed using an image captured using the plurality of optical sensors along with the global lighting value and the illumination map. | 07-23-2015 |
20150242680 | POLARIZED GAZE TRACKING - Embodiments that relate to determining gaze locations are disclosed. In one embodiment a method includes shining light along an outbound light path to the eyes of a user wearing glasses. Upon detecting the glasses, the light is dynamically polarized in a polarization pattern that switches between a random polarization phase and a single polarization phase, wherein the random polarization phase includes a first polarization along an outbound light path and a second polarization orthogonal to the first polarization along a reflected light path. The single polarization phase has a single polarization. During the random polarization phase, glares reflected from the glasses are filtered out and pupil images are captured. Glint images are captured during the single polarization phase. Based on pupil characteristics and glint characteristics, gaze locations are repeatedly detected. | 08-27-2015 |
20150271449 | Integrated Interactive Space - Techniques for implementing an integrated interactive space are described. In implementations, video cameras that are positioned to capture video at different locations are synchronized such that aspects of the different locations can be used to generate an integrated interactive space. The integrated interactive space can enable users at the different locations to interact, such as via video interaction, audio interaction, and so on. In at least some embodiments, techniques can be implemented to adjust an image of a participant during a video session such that the participant appears to maintain eye contact with other video session participants at other locations. Techniques can also be implemented to provide a virtual shared space that can enable users to interact with the space, and can also enable users to interact with one another and/or objects that are displayed in the virtual shared space. | 09-24-2015 |
20150279083 | REAL-TIME THREE-DIMENSIONAL RECONSTRUCTION OF A SCENE FROM A SINGLE CAMERA - A combination of three computational components may provide memory and computational efficiency while producing results with little latency, e.g., output can begin with the second frame of video being processed. Memory usage may be reduced by maintaining key frames of video and pose information for each frame of video. Additionally, only one global volumetric structure may be maintained for the frames of video being processed. To be computationally efficient, only depth information may be computed from each frame. Through fusion of multiple depth maps from different frames into a single volumetric structure, errors may average out over several frames, leading to a final output with high quality. | 10-01-2015 |
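The abstract of application 20150205445 can be sketched concretely: a global lighting value summarizes the overall light level across all optical sensors, an illumination map records each sensor's local lighting condition, and object detection compares the captured image against a threshold derived from both. This is only an illustrative interpretation of the abstract; the function names, the mean-based statistics, and the 50/50 blend of local and global levels are assumptions, not details from the patent.

```python
import numpy as np

def global_lighting_value(sensor_inputs):
    """Single scalar describing the global lighting level (assumed: the mean)."""
    return float(np.mean(sensor_inputs))

def illumination_map(sensor_inputs):
    """Per-sensor local lighting conditions (assumed: the raw readings)."""
    return np.asarray(sensor_inputs, dtype=float)

def detect_objects(image, sensor_inputs, contrast=0.5):
    """Flag sensors whose image value falls well below expected lighting,
    e.g. because a fingertip occludes the sensor."""
    g = global_lighting_value(sensor_inputs)
    imap = illumination_map(sensor_inputs)
    # Blend local and global levels so one shadowed sensor is not
    # mistaken for an object (the blend ratio is a hypothetical choice).
    threshold = contrast * (0.5 * imap + 0.5 * g)
    return image < threshold

sensors = [100, 102, 98, 100]         # ambient readings per optical sensor
image = np.array([99, 101, 20, 100])  # sensor 2 is occluded by an object
hits = detect_objects(image, sensors) # → only sensor 2 is flagged
```

Using both statistics, as the abstract describes, lets detection tolerate uneven ambient light: a sensor under a shadow has a lower local expectation, so it is not falsely flagged.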
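The key claim in application 20150279083, that fusing depth maps from multiple frames into a single structure lets per-frame errors average out, can be illustrated with a minimal per-pixel running average. This is a toy stand-in for the patent's volumetric fusion, assuming pre-aligned depth maps; the function name and parameters are hypothetical.

```python
import numpy as np

def fuse_depth_maps(depth_maps):
    """Fuse a sequence of noisy, pre-aligned depth maps by incremental
    per-pixel averaging (errors shrink roughly as 1/sqrt(n))."""
    fused = np.zeros_like(depth_maps[0], dtype=float)
    count = 0
    for d in depth_maps:
        count += 1
        # Incremental mean: move 1/count of the way toward the new map.
        fused += (d - fused) / count
    return fused

rng = np.random.default_rng(0)
true_depth = np.full((4, 4), 2.0)  # metres; a flat wall for simplicity
noisy = [true_depth + rng.normal(0, 0.05, true_depth.shape) for _ in range(30)]

fused = fuse_depth_maps(noisy)
single_err = np.abs(noisy[0] - true_depth).mean()
fused_err = np.abs(fused - true_depth).mean()
# fused_err is much smaller than single_err: noise averages out over frames.
```

The actual application maintains one global volumetric structure and key-frame poses rather than a per-pixel grid, but the statistical effect it relies on, multiple observations cancelling independent errors, is the same one shown here.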