Patent application number | Description | Published |
20130091462 | MULTI-DIMENSIONAL INTERFACE - A device can display content on a page associated with a dimension. A user can adjust the orientation of the device to adjust the displayed orientation of the page, enabling pages for additional dimensions to be displayed. The user can select one of these dimensions and adjust the orientation of the device to access content for the selected dimension. The change in orientation can be a tilt or flick of the device in a first direction to select a dimension, after which the user can tilt or flick the device in another direction to view pages, sub-dimensions, or other groupings of content within that dimension. Such an approach can enable a user to quickly locate content corresponding to a sub-dimension without having to scroll down a long page of content or otherwise manually navigate to specific content. | 04-11-2013 |
20130258117 | USER-GUIDED OBJECT IDENTIFICATION - A user attempting to obtain information about an object can capture image information including a view of that object, and the image information can be used with a matching or identification process to provide information about that type of object to the user. In order to narrow the search space to a specific category, and thus improve the accuracy of the results and the speed at which results can be obtained, the user can be guided to capture image information with an appropriate orientation. An outline or other graphical guide can be displayed over image information captured by a computing device, in order to guide the user in capturing the object from an appropriate direction and with an appropriate scale for the type of matching and/or information used for the matching. Such an approach enables three-dimensional objects to be analyzed using conventional two-dimensional identification algorithms, among other such processes. | 10-03-2013 |
20130342672 | USING GAZE DETERMINATION WITH DEVICE INPUT - A computing device, in a locked operational state, captures image information of a user, which is analyzed to determine the direction of the user's gaze. When the user's gaze is determined to be substantially in the direction of the device, a predetermined input from the user, such as a tap or a voice command, will provide the user with access to at least some functionality of the device that was previously unavailable. If, however, the computing device detects what appears to be the predetermined input but the user's gaze is not directed at the device, the computing device will remain in the locked operational state. Therefore, in accordance with various embodiments, gaze determination is utilized as an indication that the user intends to unlock at least some additional functionality of the computing device. | 12-26-2013 |
20140211067 | USER-GUIDED OBJECT IDENTIFICATION - A user attempting to obtain information about an object can capture image information including a view of that object, and the image information can be used with a matching or identification process to provide information about that type of object to the user. In order to narrow the search space to a specific category, and thus improve the accuracy of the results and the speed at which results can be obtained, the user can be guided to capture image information with an appropriate orientation. An outline or other graphical guide can be displayed over image information captured by a computing device, in order to guide the user in capturing the object from an appropriate direction and with an appropriate scale for the type of matching and/or information used for the matching. Such an approach enables three-dimensional objects to be analyzed using conventional two-dimensional identification algorithms, among other such processes. | 07-31-2014 |
20140337791 | Mobile Device Interfaces - Electronic devices, interfaces for electronic devices, and techniques for interacting with such interfaces and electronic devices are described. For instance, this disclosure describes an example electronic device that includes sensors, such as multiple front-facing cameras to detect orientation and/or location of the electronic device relative to an object and one or more inertial sensors. Users of the device may perform gestures on the device by moving the device in-air and/or by moving their head, face, or eyes relative to the device. In response to these gestures, the device may perform operations. | 11-13-2014 |
20150052475 | PROJECTIONS TO FIX POSE OF PANORAMIC PHOTOS - Aspects of the disclosure relate generally to adding or correcting orientation and/or location information for panoramic images. For example, some panoramic images may be associated with inaccurate orientation (or location) information or simply no orientation information at all. In this regard, a user may be provided with the ability to add or adjust the orientation of a panoramic image. The panoramic image can be projected onto a plane using stereographic projection and displayed relative to its location (and orientation, if available) on a map. This can allow a user to quickly identify inaccuracies and to make adjustments or corrections. | 02-19-2015 |
20150189188 | USER-GUIDED OBJECT IDENTIFICATION - A user attempting to obtain information about an object can capture image information including a view of that object, and the image information can be used with a matching or identification process to provide information about that type of object to the user. In order to narrow the search space to a specific category, and thus improve the accuracy of the results and the speed at which results can be obtained, the user can be guided to capture image information with an appropriate orientation. An outline or other graphical guide can be displayed over image information captured by a computing device, in order to guide the user in capturing the object from an appropriate direction and with an appropriate scale for the type of matching and/or information used for the matching. Such an approach enables three-dimensional objects to be analyzed using conventional two-dimensional identification algorithms, among other such processes. | 07-02-2015 |
20150287316 | Encoding Location-Based Reminders - Systems and methods for encoding location-based reminders are provided. Data indicative of a request for a location-based reminder can be received. The data indicative of the request can include data indicative of a user placement of a reminder in a visual representation of the geographic area, such as an image captured of the geographic area or a visual representation of the three-dimensional model of the geographic area. A selected location within a three-dimensional model of a geographic area can be identified based on the data indicative of the user placement of the reminder. Three-dimensional geographic coordinates corresponding to the selected location can be determined using the three-dimensional model and associated with the location-based reminder. The location-based reminder can then be triggered based at least in part on signals indicative of user position and/or orientation in the geographic area. | 10-08-2015 |
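The gaze-gated unlock described in application 20130342672 can be sketched as a simple guard: the predetermined input only takes effect when gaze toward the device is also detected. This is a minimal illustrative sketch, not the claimed implementation; the `Device` class and `handle_input` method are hypothetical names, and real systems would derive the two boolean signals from camera and touch/voice pipelines.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """Hypothetical device model for illustrating the gaze-gated unlock."""
    locked: bool = True

    def handle_input(self, predetermined_input: bool, gaze_toward_device: bool) -> None:
        # Per the abstract: the predetermined input (e.g. a tap or voice
        # command) unlocks functionality only when the user's gaze is
        # substantially in the direction of the device. Input without
        # gaze leaves the device in the locked operational state.
        if self.locked and predetermined_input and gaze_toward_device:
            self.locked = False

d = Device()
d.handle_input(predetermined_input=True, gaze_toward_device=False)
print(d.locked)  # True: input without gaze is ignored
d.handle_input(predetermined_input=True, gaze_toward_device=True)
print(d.locked)  # False: gaze plus input unlocks the device
```

The design point is that gaze acts as an intent signal gating an otherwise ambiguous input, so accidental taps or overheard voice commands cannot unlock the device.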
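The panorama-posing approach in application 20150052475 relies on stereographic projection to flatten a spherical panorama onto a plane for display on a map. A minimal sketch of the underlying math, assuming points on the panorama sphere are given as latitude/longitude and projected from the north pole; the function name and conventions are illustrative, not taken from the application.

```python
import math

def stereographic_project(lat_deg: float, lon_deg: float) -> tuple[float, float]:
    """Project a point on the unit sphere (latitude/longitude in degrees)
    onto the plane z = 0, using stereographic projection from the north
    pole (0, 0, 1): (x, y, z) maps to (x / (1 - z), y / (1 - z))."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Convert spherical coordinates to Cartesian on the unit sphere.
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    # The projection is undefined at the pole used as the center.
    if abs(1.0 - z) < 1e-12:
        raise ValueError("cannot project the projection pole itself")
    return x / (1.0 - z), y / (1.0 - z)

# A point on the equator at longitude 0 lands at (1, 0) on the plane.
print(stereographic_project(0.0, 0.0))  # (1.0, 0.0)
```

Stereographic projection preserves angles (it is conformal), which helps a user visually compare the projected panorama against map features when correcting its orientation.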