Patent application number | Description | Published |
--- | --- | --- |
20130242054 | GENERATING HI-RES DEWARPED BOOK IMAGES - Systems and methods for generating high-resolution dewarped images from an image of a document captured by a 3D stereo digital camera system, or by a mobile phone camera capturing a sequence of images, which may improve OCR performance. Example embodiments include a compact stereo camera with two sensors mounted at fixed locations, and a multi-resolution pipeline to process and dewarp the images using a three-dimensional surface model based on curve profiles of the computed depth map. Example embodiments also include a mobile phone including a camera which captures a sequence of images, and a processor which computes a disparity map using the captured sequence of image frames, computes a model of the document page by generating a cylindrical three-dimensional geometric surface using the computed disparity map, and renders a dewarped image from the computed model. | 09-19-2013 |
20140313122 | SYSTEMS AND METHODS FOR ENABLING GESTURE CONTROL BASED ON DETECTION OF OCCLUSION PATTERNS - Described is an approach to enabling gesture interactions for the viewport widget in a graphical user interface (GUI) library. The gesture interactions may include continuous operations such as panning, zooming and rotating of the viewport's content with fingers (or styluses). The approach is based on using a camera to detect occlusion patterns in a sensor grid rendered over the viewport. The sensor grid consists of sensor blobs, which are small blobs of pixels with a distinct color. A sensor blob is aware of its location in both the viewport's coordinate system and the camera's coordinate system, and triggers an occlusion event at the location when it is occluded by a finger (or stylus). Robust techniques are devised to eliminate unintentional gestures, provide visual guidance and feedback for interactions, and minimize the visual interference of the sensor grid with the viewport's content. | 10-23-2014 |
20140313136 | SYSTEMS AND METHODS FOR FINGER POSE ESTIMATION ON TOUCHSCREEN DEVICES - Described are systems and methods for estimating finger pose of a user during a tactile input event. In one implementation, the system incorporates: a touch-sensitive display device configured to detect a tactile event and to determine a contact point of an object and the touch-sensitive display device, the contact point associated with the tactile event; a camera configured to capture an image of an area proximal to the surface of the touch-sensitive display device; and a central processing unit configured, in response to the detection of the tactile event, to determine information on a pose of the object based on the captured image and the determined contact point. | 10-23-2014 |
20140313363 | SYSTEMS AND METHODS FOR IMPLEMENTING AND USING GESTURE BASED USER INTERFACE WIDGETS WITH CAMERA INPUT - Described is an approach to gesture interaction that is based on user interface widgets. In order to detect user gestures, the widgets are provided with hotspots that are monitored using a camera for predetermined patterns of occlusion. A hotspot is a region where the user interacts with the widget by making a gesture over it. The user's gesture may be detected without the user physically touching the surface displaying the widget. The aforesaid hotspots are designed to be visually salient and suggestive of the type of gestures that can be received from the user. The described techniques are advantageous in relation to conventional systems, such as systems utilizing finger tracking, in that they can better support complex tasks with repeated user actions. In addition, they provide better perceived affordance than conventional systems that attempt to use widgets not designed for gesture input, or in-the-air gesture detection techniques that lack any visual cues. | 10-23-2014 |
20150212595 | SYSTEMS AND METHODS FOR HIDING AND FINDING DIGITAL CONTENT ASSOCIATED WITH PHYSICAL OBJECTS VIA CODED LIGHTING - A method involving: designating, based on an instruction received from a user, an area within an illumination field of a projector; using the projector to project a light encoded with coordinate information; receiving content or content information from the user; associating, using a processing unit, the designated area within the illumination field of the projector with the content or the content information received from the user; detecting the light encoded with the coordinate information using a mobile device positioned within the illumination field of the projector; determining a position of the mobile device within the illumination field of the projector based on the detected light encoded with the coordinate information; and causing, on condition that the determined position of the mobile device is within the designated area, the mobile device to display the content. | 07-30-2015 |
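The coded-lighting approach in application 20150212595 relies on the projector embedding coordinate information into the light itself, so a device that merely samples the light at its own position can recover where it sits in the illumination field. The abstract does not specify the encoding; a common way to realize such a scheme is temporal Gray-code modulation, sketched below. All function names and the Gray-code choice are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of coordinate-encoded lighting (assumed Gray-code
# temporal modulation; the patent abstract does not name an encoding).

def gray_encode(n: int) -> int:
    """Binary-reflected Gray code of n (adjacent coordinates differ by one bit)."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Invert the Gray code by cascading XORs of right-shifted copies."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def emitted_bits(coord: int, nbits: int) -> list:
    """On/off pattern (MSB first) the projector would flash at this coordinate."""
    g = gray_encode(coord)
    return [(g >> i) & 1 for i in range(nbits - 1, -1, -1)]

def decode_position(samples: list) -> int:
    """Reassemble the bit sequence a device sampled and recover its coordinate."""
    g = 0
    for b in samples:
        g = (g << 1) | b
    return gray_decode(g)

def in_designated_area(pos: int, area: tuple) -> bool:
    """Display content only when the decoded position falls inside the area."""
    lo, hi = area
    return lo <= pos <= hi
```

For example, a device at column 300 of a 1024-column field would observe `emitted_bits(300, 10)` over ten frames, `decode_position` would return 300, and the content is shown only if 300 lies within the user-designated interval. Gray coding is a natural fit here because a device straddling two adjacent columns sees patterns differing in a single bit, bounding the decoding error to one position.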