Patent application number | Description | Published |
--- | --- | --- |
20120314914 | ENHANCED FACE RECOGNITION IN VIDEO - The computational resources needed to perform processes such as image recognition can be reduced by determining appropriate frames of image information to use for the processing. In some embodiments, infrared imaging can be used to determine when a person is looking substantially towards a device, such that an image frame captured at that time will likely be adequate for facial recognition. In other embodiments, sound triangulation or motion sensing can be used to assist in determining which captured image frames to discard and which to select for processing based on any of a number of factors indicative of a proper frame for processing. | 12-13-2012 |
20130004016 | USER IDENTIFICATION BY GESTURE RECOGNITION - A user can be identified and/or authenticated to an electronic device by analyzing aspects of a motion or gesture made by that user. At least one imaging element of the device can capture image information including the motion or gesture, and can determine time-dependent information about that motion or gesture in two or three dimensions of space. The time-dependent information can be used to identify varying speeds, motions, and other such aspects that are indicative of a particular user. The way in which a gesture or motion is made, in addition to the motion or gesture itself, can be used to authenticate an individual user. While other persons can learn the basic gesture or motion, the way in which each person makes that gesture or motion will generally be at least slightly different, which can be used to prevent unauthorized access to sensitive information, protected functionality, or other such content. | 01-03-2013 |
20130016102 | SIMULATING THREE-DIMENSIONAL FEATURES - Image information displayed on an electronic device can be modified based at least in part upon a relative position of a user with respect to a device. Mapping, topological or other types of positional data can be used to render image content from a perspective that is consistent with a viewing angle for the current relative position of the user. As that viewing angle changes, as a result of movement of the user and/or the device, the content can be re-rendered or otherwise updated to display the image content from a perspective that reflects the change in viewing angle. Simulations of effects such as parallax and occlusions can be used with the change in perspective to provide a consistent user experience that provides a sense of three-dimensional content even when that content is rendered on a two-dimensional display. Lighting, shading and/or other effects can be used to enhance the experience. | 01-17-2013 |
20130258117 | USER-GUIDED OBJECT IDENTIFICATION - A user attempting to obtain information about an object can capture image information including a view of that object, and the image information can be used with a matching or identification process to provide information about that type of object to the user. In order to narrow the search space to a specific category, and thus improve the accuracy of the results and the speed at which results can be obtained, the user can be guided to capture image information with an appropriate orientation. An outline or other graphical guide can be displayed over image information captured by a computing device, in order to guide the user in capturing the object from an appropriate direction and with an appropriate scale for the type of matching and/or information used for the matching. Such an approach enables three-dimensional objects to be analyzed using conventional two-dimensional identification algorithms, among other such processes. | 10-03-2013 |
20130342459 | FINGERTIP LOCATION FOR GESTURE INPUT - A user can use a finger, or other such object, to provide input to a computing device. The finger does not have to contact the device, but can be positioned and/or oriented in such a way that the device can determine an input that the user is attempting to provide, such as an element or icon that the user intends to select. One or more cameras can capture image information, which can be analyzed to attempt to determine the location and/or orientation of the finger. If the finger is at least partially outside a field of view of the camera(s), the device can use a sensor (e.g., EMF) to attempt to determine a location of at least a portion of the finger, which can be used with the image information to determine the location and/or orientation of the finger. Other estimation processes can be used as well. | 12-26-2013 |
20140085245 | DISPLAY INTEGRATED CAMERA ARRAY - Motions or gestures can provide input to an electronic device by capturing images of a feature used to provide the motions or gestures, then analyzing the images. Conventional cameras have a limited field of view, creating a “dead zone” near the device that is outside the field of view. Various embodiments utilize an array of detectors positioned behind a display screen that are configured to operate as a large, low resolution camera. The array can resolve objects within a distance of the device sufficient to cover at least a portion of the dead zone. In some embodiments the device can include one or more infrared (IR) emitters to emit IR light that can be reflected by an object in the dead zone and detected by the detectors. The use of multiple emitters at different locations enables at least some depth information to be determined from the array images. | 03-27-2014 |
20140126777 | ENHANCED FACE RECOGNITION IN VIDEO - The computational resources needed to perform processes such as image recognition can be reduced by determining appropriate frames of image information to use for the processing. In some embodiments, infrared imaging can be used to determine when a person is looking substantially towards a device, such that an image frame captured at that time will likely be adequate for facial recognition. In other embodiments, sound triangulation or motion sensing can be used to assist in determining which captured image frames to discard and which to select for processing based on any of a number of factors indicative of a proper frame for processing. | 05-08-2014 |
20140247346 | APPROACHES FOR DEVICE LOCATION AND COMMUNICATION - An electronic device can utilize image capture technology to detect the presence and location of another device. Using this information, the electronic device can display, in a user interface, a graphical element representing a detected device, along with identity information and the location of the detected device relative to the electronic device. The location of each detected device relative to the electronic device can be tracked and thus the graphical element can be updated in the user interface. | 09-04-2014 |
20140253440 | Dorsal Touch Input - A back touch sensor positioned on a back surface of a device accepts user input in the form of touches. The touches on the back touch sensor map to keys on a virtual keyboard, a pointer input, and so forth. Touches on a touch sensor positioned on a front surface provide additional input while also allowing the user to grasp and hold the device. | 09-11-2014 |
20150062006 | FEATURE TRACKING FOR DEVICE INPUT - A user can emulate touch screen events with motions and gestures that the user performs at a distance from a computing device. A user can utilize specific gestures, such as a pinch gesture, to designate portions of motion that are to be interpreted as input, to differentiate from other portions of the motion. A user can then perform actions such as text input by performing motions with the pinch gesture that correspond to words or other selections recognized by a text input program. A camera-based detection approach can be used to recognize the location of features performing the motions and gestures, such as a hand, finger, and/or thumb of the user. | 03-05-2015 |
20150062121 | THREE-DIMENSIONAL INTERFACE FOR CONTENT LOCATION - Instances of content, such as search results or browse items, can be displayed using a plurality of three-dimensional elements, with selected pieces of information for each instance placed upon faces, sides, or other portions of those elements. A user can view similar information for each of the instances of content by rotating the elements, such as by interacting with an input element or rotating a portable computing device rendering the elements. The user can apply various filtering criteria or value ranges, whereby the relative position of the elements in three-dimensional space can be adjusted based at least in part upon the applied values. By rotating the elements, applying criteria, and changing the camera view of the elements, a user can quickly compare a large number of instances of content according to a number of different criteria, and can quickly locate items of interest from a large selection of items. | 03-05-2015 |
20150078623 | ENHANCED FACE RECOGNITION IN VIDEO - The computational resources needed to perform processes such as image recognition can be reduced by determining appropriate frames of image information to use for the processing. In some embodiments, infrared imaging can be used to determine when a person is looking substantially towards a device, such that an image frame captured at that time will likely be adequate for facial recognition. In other embodiments, sound triangulation or motion sensing can be used to assist in determining which captured image frames to discard and which to select for processing based on any of a number of factors indicative of a proper frame for processing. | 03-19-2015 |
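Several of the abstracts above (notably the three "ENHANCED FACE RECOGNITION IN VIDEO" filings) turn on the same idea: gate an expensive recognition step behind a cheap per-frame suitability check, and only process frames likely to yield a good result. A minimal sketch of that gating pattern, with hypothetical names and a placeholder gaze score (none of this code is from the applications themselves):

```python
# Sketch of frame gating for face recognition: score each frame by how
# directly the subject faces the camera (e.g., via IR imaging, as one of
# the abstracts suggests), and pass only high-scoring frames onward.
# `gaze_score` is a stand-in for whatever cheap per-frame check is used.

def select_frames(frames, gaze_score, threshold=0.8):
    """Return the frames worth running face recognition on.

    frames      -- iterable of frame objects
    gaze_score  -- callable mapping a frame to a 0..1 "looking at device" score
    threshold   -- minimum score for a frame to be kept
    """
    return [f for f in frames if gaze_score(f) >= threshold]

# Toy usage: frames are dicts with a precomputed score attached.
frames = [
    {"id": 0, "score": 0.20},  # subject looking away -- discard
    {"id": 1, "score": 0.90},  # facing camera -- keep
    {"id": 2, "score": 0.85},  # facing camera -- keep
]
kept = select_frames(frames, gaze_score=lambda f: f["score"])
print([f["id"] for f in kept])  # -> [1, 2]
```

The design point the abstracts share is that the gating check (gaze direction, sound triangulation, motion sensing) is far cheaper than the recognition step it guards, so discarding unsuitable frames early reduces total computation.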