Patent application number | Description | Published |
--- | --- | --- |
20110234481 | ENHANCING PRESENTATIONS USING DEPTH SENSING CAMERAS - A depth camera and an optional visual camera are used in conjunction with a computing device and projector to display a presentation and automatically correct the geometry of the projected presentation. Interaction with the presentation (switching slides, pointing, etc.) is achieved by utilizing gesture recognition/human tracking based on the output of the depth camera and (optionally) the visual camera. Additionally, the output of the depth camera and/or visual camera can be used to detect occlusions between the projector and the screen (or other target area) in order to adjust the presentation so that it is not projected onto the occlusion and, optionally, to reorganize the presentation to avoid the occlusion. | 09-29-2011 |
20120056982 | DEPTH CAMERA BASED ON STRUCTURED LIGHT AND STEREO VISION - A depth camera system uses a structured light illuminator and multiple sensors, such as infrared light detectors, for example in a system which tracks the motion of a user in a field of view. One sensor can be optimized for shorter range detection while another sensor is optimized for longer range detection. The sensors can have a different baseline distance from the illuminator, as well as a different spatial resolution, exposure time and sensitivity. In one approach, depth values are obtained from each sensor by matching to the structured light pattern, and the depth values are merged to obtain a final depth map which is provided as an input to an application. The merging can involve unweighted averaging, weighted averaging, accuracy measures and/or confidence measures. In another approach, additional depth values which are included in the merging are obtained using stereoscopic matching among pixel data of the sensors. | 03-08-2012 |
20120075534 | INTEGRATED LOW POWER DEPTH CAMERA AND PROJECTION DEVICE - A video projector device includes a visible light projector to project an image on a surface or object, and a visible light sensor, which can be used to obtain depth data regarding the object using a time-of-flight principle. The sensor can be a charge-coupled device which obtains color images as well as obtaining depth data. The projected light can be provided in successive frames. A frame can include a gated sub-frame of pulsed light followed by continuous light, while the sensor is gated, to obtain time-of-flight data; an ungated sub-frame of pulsed light followed by continuous light, while the sensor is ungated, to obtain reflectivity data; and a background sub-frame of no light followed by continuous light, while the sensor is gated, to determine a level of background light. A color sub-frame projects continuous light, while the sensor is active. | 03-29-2012 |
20120082346 | TIME-OF-FLIGHT DEPTH IMAGING - Techniques are provided for determining depth to objects. A depth image may be determined based on two light intensity images. This technique may compensate for differences in reflectivity of objects in the field of view. However, there may be some misalignment between pixels in the two light intensity images. An iterative process may be used to relax a requirement for an exact match between the light intensity images. The iterative process may involve modifying one of the light intensity images based on a smoothed version of a depth image that is generated from the two light intensity images. Then, new values may be determined for the depth image based on the modified image and the other light intensity image. Thus, pixel misalignment between the two light intensity images may be compensated for. | 04-05-2012 |
20120154542 | PLURAL DETECTOR TIME-OF-FLIGHT DEPTH MAPPING - A depth-mapping method comprises exposing first and second detectors oriented along different optical axes to light dispersed from a scene, and furnishing an output responsive to a depth coordinate of a locus of the scene. The output increases with an increasing first amount of light received by the first detector during a first period, and decreases with an increasing second amount of light received by the second detector during a second period different than the first. | 06-21-2012 |
20120154557 | COMPREHENSION AND INTENT-BASED CONTENT FOR AUGMENTED REALITY DISPLAYS - A method and system are provided that enhance a user's experience when using a near-eye display device, such as a see-through display device or a head-mounted display device. The user's intent to interact with one or more objects in the scene is determined. An optimized image is generated for the user based on the user's intent. The optimized image is displayed to the user, via the see-through display device. The optimized image visually enhances the appearance of objects that the user intends to interact with in the scene and diminishes the appearance of objects that the user does not intend to interact with in the scene. The optimized image can visually enhance the appearance of objects in a way that increases the user's comprehension. | 06-21-2012 |
20130129224 | COMBINED DEPTH FILTERING AND SUPER RESOLUTION - Systems and methods for increasing the resolution of a depth map by identifying and updating false depth pixels are described. In some embodiments, a depth pixel of the depth map is initially assigned a confidence value based on curvature values and localized contrast information. The curvature values may be generated by applying a Laplacian filter or other edge detection filter to the depth pixel and its neighboring pixels. The localized contrast information may be generated by determining a difference between the maximum and minimum depth values associated with the depth pixel and its neighboring pixels. A false depth pixel may be identified by comparing a confidence value associated with the false depth pixel with a particular threshold. The false depth pixel may be updated by assigning a new depth value based on an extrapolation of depth values associated with neighboring pixel locations. | 05-23-2013 |
20130131836 | SYSTEM FOR CONTROLLING LIGHT ENABLED DEVICES - A system for controlling infrared (IR) enabled devices by projecting coded IR pulses from an active illumination depth camera is described. In some embodiments, a gesture recognition system includes an active illumination depth camera such as a depth camera that utilizes time-of-flight (TOF) or structured light techniques for obtaining depth information. The gesture recognition system may detect the performance of a particular gesture associated with a particular electronic device, determine a set of device instructions in response to detecting the particular gesture, and transmit the set of device instructions to the particular electronic device utilizing coded IR pulses. The coded IR pulses may imitate the IR pulses associated with a remote control protocol. In some cases, the coded IR pulses transmitted may also be used by the active illumination depth camera for determining depth information. | 05-23-2013 |
20130188022 | 3D ZOOM IMAGER - A 3D imager comprising two cameras having fixed wide-angle and narrow-angle FOVs, respectively, that overlap to provide an active space for the imager, and a controller that determines distances to features in the active space responsive to distances provided by the cameras and to a division of the active space into near, intermediate, and far zones. | 07-25-2013 |
20140002611 | TIME-OF-FLIGHT DEPTH IMAGING | 01-02-2014 |
20140055591 | CALIBRATION OF EYE TRACKING SYSTEM - Embodiments are disclosed that relate to calibrating an eye tracking system for a computing device. For example, one disclosed embodiment provides, in a computing device comprising a gaze estimation system, a method of calibrating the gaze estimation system. The method includes receiving a request to log a user onto the computing device, outputting a passcode entry display image to a display device, receiving image data from one or more eye tracking cameras, and from the image data, determining a gaze scanpath representing a path of a user's gaze on the passcode entry display image. The method further includes comparing the gaze scanpath to a stored scanpath for the user, and calibrating the gaze estimation system based upon a result of comparing the gaze scanpath to the stored scanpath for the user. | 02-27-2014 |
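Several of the abstracts above (e.g. 20120056982 and 20130129224) describe combining per-pixel depth values by weighted averaging with confidence measures. As a minimal illustrative sketch of that idea, assuming two already-aligned depth maps with per-pixel confidence weights (the function name, the confidence model, and the zero-confidence handling are assumptions for illustration, not taken from the patent texts):

```python
import numpy as np

def merge_depth_maps(depth_near, conf_near, depth_far, conf_far):
    """Merge two aligned per-pixel depth maps by confidence-weighted averaging.

    Hypothetical sketch: each depth map comes from one sensor (e.g. a
    short-range and a long-range sensor), with a per-pixel confidence
    weight in [0, 1]. Where both confidences are zero, the merged depth
    falls back to 0 rather than dividing by zero.
    """
    total = conf_near + conf_far
    safe_total = np.where(total == 0, 1.0, total)  # avoid division by zero
    return (depth_near * conf_near + depth_far * conf_far) / safe_total

# Toy 2x2 example: near sensor is confident in the top row,
# far sensor in the left column; both agree on the bottom-right pixel.
depth_near = np.array([[0.5, 0.6], [0.7, 0.8]])
conf_near  = np.array([[1.0, 1.0], [0.0, 0.5]])
depth_far  = np.array([[0.5, 1.0], [0.9, 0.8]])
conf_far   = np.array([[1.0, 0.0], [1.0, 0.5]])

print(merge_depth_maps(depth_near, conf_near, depth_far, conf_far))
```

The abstracts also mention unweighted averaging as an alternative; that is the special case where both confidence maps are all ones.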