| Patent application number | Description | Published |
|---|---|---|
| 20100317332 | MOBILE DEVICE WHICH AUTOMATICALLY DETERMINES OPERATING MODE - A mobile device such as a cell phone is used to remotely control an electronic appliance such as a television or personal computer. In a setup phase, the mobile device captures an image of the electronic appliance and identifies and stores scale-invariant features of the image. A user interface configuration such as a virtual keypad configuration, and a communication protocol, can be associated with the stored data. Subsequently, in an implementation phase, another image of the electronic appliance is captured and compared to the stored features in a library to identify a match. In response, the associated user interface configuration and communication protocol are implemented to control the electronic appliance. In a polling and reply process, the mobile device captures a picture of a display of the electronic appliance and compares it to image data which is transmitted by the electronic appliance. | 12-16-2010 |
20100317371 | CONTEXT-BASED INTERACTION MODEL FOR MOBILE DEVICES - A context-aware mobile device such as a cell phone automatically determines appropriate user interface (UI) settings to implement at different times and/or locations. A behavior of the mobile device is tracked by determining locations visited and UI settings which are manually configured by the user. Patterns in the movement and UI settings relative to one another and to time are detected. When a particular location or time is subsequently reached which corresponds to the pattern, an appropriate UI setting can be implemented, thereby relieving the user of this task. Locations can be detected by electromagnetic signals at different locations, such as from a Wi-Fi network, Bluetooth network, RF or infrared beacon, or a wireless point-of-sale terminal. An identifier from the signals such as an SSID can be stored. Labels for locations can be automatically assigned, or the user can be prompted to provide a label for commonly visited locations. | 12-16-2010 |
20110181527 | Device, Method, and Graphical User Interface for Resizing Objects - A method for resizing a currently selected user interface object includes simultaneously displaying on a touch-sensitive display the currently selected user interface object having a center, and a plurality of resizing handles for the currently selected user interface object. The method also includes detecting a first contact on a first resizing handle in the plurality of resizing handles, and detecting movement of the first contact across the touch-sensitive display. The method further includes, in response to detecting movement of the first contact, when a second contact is detected on the touch-sensitive display while detecting movement of the first contact, resizing the currently selected user interface object about the center of the currently selected user interface object. | 07-28-2011 |
20110181528 | Device, Method, and Graphical User Interface for Resizing Objects - A method for resizing a currently selected user interface object includes simultaneously displaying on a touch-sensitive display the currently selected user interface object having a center, and a plurality of resizing handles for the currently selected user interface object. The method also includes detecting a first contact on a first resizing handle in the plurality of resizing handles, and detecting movement of the first contact across the touch-sensitive display. The method further includes, in response to detecting movement of the first contact, when a second contact is detected on the touch-sensitive display while detecting movement of the first contact, resizing the currently selected user interface object about the center of the currently selected user interface object. | 07-28-2011 |
20110181529 | Device, Method, and Graphical User Interface for Selecting and Moving Objects - A method performed at a computing device with a touch-sensitive display includes: displaying a plurality of user interface objects on the display, including a currently selected first user interface object; detecting a first contact on the first user interface object; detecting movement of the first contact across the display; moving the first user interface object in accordance with the movement of the first contact; while detecting movement of the first contact across the display: detecting a first finger gesture on a second user interface object; and, in response: selecting the second user interface object; moving the second user interface object in accordance with movement of the first contact subsequent to detecting the first finger gesture; and continuing to move the first user interface object in accordance with the movement of the first contact. | 07-28-2011 |
20110185321 | Device, Method, and Graphical User Interface for Precise Positioning of Objects - A method includes, at a computing device with a touch-sensitive display: displaying a user interface object on the touch-sensitive display; detecting a contact on the user interface object; while continuing to detect the contact on the user interface object: detecting an M-finger gesture, distinct from the contact, in a first direction on the touch-sensitive display, where M is an integer; and, in response to detecting the M-finger gesture, translating the user interface object a predefined number of pixels in a direction in accordance with the first direction. | 07-28-2011 |
20120026100 | Device, Method, and Graphical User Interface for Aligning and Distributing Objects - At a multifunction device with a display and a touch-sensitive surface, a plurality of objects are displayed on the display. The device detects a first contact on the touch-sensitive surface. While detecting the first contact, the device detects a first gesture that includes movement of a second contact and a third contact on the touch-sensitive surface. In response to detecting the first gesture, the device determines a contact axis based on a location of the second contact relative to a location of the third contact on the touch-sensitive surface. The device determines an object-alignment axis based on the contact axis, and repositions one or more of the objects so as to align at least a subset of the objects on the display along the object-alignment axis. | 02-02-2012 |
20120030568 | Device, Method, and Graphical User Interface for Copying User Interface Objects Between Content Regions - An electronic device displays a user interface object in a first content region on a touch-sensitive display. The device detects a first finger input on the user interface object. While detecting the first finger input, the device detects a second finger input on the touch-sensitive display. When the first finger input is an M-finger contact, wherein M is an integer, in response to detecting the second finger input, the device selects a second content region and displays a copy of the user interface object in the second content region. After detecting the second finger input, the device detects termination of the first finger input while the copy of the user interface object is displayed in the second content region. In response to detecting termination of the first finger input, the device maintains display of the copy of the user interface object in the second content region. | 02-02-2012 |
20120030569 | Device, Method, and Graphical User Interface for Reordering the Front-to-Back Positions of Objects - At a multifunction device with a display and a touch-sensitive surface, a plurality of objects are displayed on the display. The plurality of objects have a first layer order. A first contact is detected at a location on the touch-sensitive surface that corresponds to a location of a respective object of the plurality of objects. While detecting the first contact, a gesture that includes a second contact is detected on the touch-sensitive surface. In response to detecting the gesture, the plurality of objects are reordered in accordance with the gesture to create a second layer order that is different from the first layer order. In some embodiments, the position of the respective object within the first order is different from the position of the respective object within the second order. | 02-02-2012 |
20120030570 | Device, Method, and Graphical User Interface for Copying Formatting Attributes - An electronic device simultaneously displays on a touch-sensitive display a first user interface object and a second user interface object. The second user interface object has formatting attributes, one or more of which are distinct from corresponding formatting attributes in the first user interface object. The device detects a first contact on the first user interface object and a second contact on the second user interface object. While continuing to detect the first contact and the second contact, the device detects movement of the second contact across the touch-sensitive display, and moves the second user interface object in accordance with the movement of the second contact. The device changes one or more formatting attributes for the second user interface object to match corresponding formatting attributes for the first user interface object if the second user interface object contacts the first user interface object while moving. | 02-02-2012 |
20120030624 | Device, Method, and Graphical User Interface for Displaying Menus - An electronic device displays one or more user interface objects on a touch-sensitive display. The device detects a finger contact on the touch-sensitive display at a first location away from any of the displayed user interface objects, and determines whether a predefined condition is satisfied. The predefined condition includes that the finger contact continues to be detected on the touch-sensitive display. While the predefined condition is satisfied: the device displays a first menu of a plurality of menus; detects a rotation of the finger contact; and replaces display of the first menu with display of a second menu of the plurality of menus in accordance with the rotation of the finger contact. | 02-02-2012 |
20120188174 | Device, Method, and Graphical User Interface for Navigating and Annotating an Electronic Document - A device, configured to operate in a first operational mode at some times and in a second operational mode at other times, detects a first gesture having a first gesture type; in response to detecting the first gesture: in accordance with a determination that the device is in the first operational mode, performs an operation having a first operation type; and, in accordance with a determination that the device is in the second operational mode, performs an operation having a second operation type; detects a second gesture having a second gesture type; and in response to detecting the second gesture: in accordance with a determination that the device is in the first operational mode, performs an operation having the second operation type; and in accordance with a determination that the device is in the second operational mode, performs an operation having the first operation type. | 07-26-2012 |
20120192056 | Device, Method, and Graphical User Interface with a Dynamic Gesture Disambiguation Threshold - An electronic device with a display, a touch-sensitive surface, one or more processors, and memory detects a first portion of a gesture, and determines that the first portion has a first gesture characteristic. The device selects a dynamic disambiguation threshold in accordance with the first gesture characteristic. The dynamic disambiguation threshold is used to determine whether to perform a first type of operation or a second type of operation when a first kind of gesture is detected. The device determines that the gesture is of the first kind of gesture. After selecting the dynamic disambiguation threshold, the device determines whether the gesture meets the dynamic disambiguation threshold. When the gesture meets the dynamic disambiguation threshold, the device performs the first type of operation, and when the gesture does not meet the dynamic disambiguation threshold, the device performs the second type of operation. | 07-26-2012 |
20120192057 | Device, Method, and Graphical User Interface for Navigating through an Electronic Document - An electronic device with a display and a touch-sensitive surface stores a document having primary content, supplementary content, and user-generated content. The device displays a representation of the document in a segmented user interface on the display. Primary content of the document is displayed in a first segment of the segmented user interface and supplementary content of the document is concurrently displayed in a second segment of the segmented user interface distinct from the first segment. The device receives a request to view user-generated content of the document. In response to the request, the device maintains display of the previously displayed primary content, ceases to display at least a portion of the previously displayed supplementary content, and displays user-generated content of the document in a third segment of the segmented user interface distinct from the first segment and the second segment. | 07-26-2012 |
20120192065 | Device, Method, and Graphical User Interface for Navigating and Annotating an Electronic Document - A device, configured to operate in a first operational mode at some times and in a second operational mode at other times, detects a first gesture having a first gesture type; in response to detecting the first gesture: in accordance with a determination that the device is in the first operational mode, performs an operation having a first operation type; and, in accordance with a determination that the device is in the second operational mode, performs an operation having a second operation type; detects a second gesture having a second gesture type; and in response to detecting the second gesture: in accordance with a determination that the device is in the first operational mode, performs an operation having the second operation type; and in accordance with a determination that the device is in the second operational mode, performs an operation having the first operation type. | 07-26-2012 |
20120192068 | Device, Method, and Graphical User Interface for Navigating through an Electronic Document - An electronic device with a display and a touch-sensitive surface stores a document having primary content, supplementary content, and user-generated content. The device displays a representation of the document in a segmented user interface on the display. Primary content of the document is displayed in a first segment of the segmented user interface and supplementary content of the document is concurrently displayed in a second segment of the segmented user interface distinct from the first segment. The device receives a request to view user-generated content of the document. In response to the request, the device maintains display of the previously displayed primary content, ceases to display at least a portion of the previously displayed supplementary content, and displays user-generated content of the document in a third segment of the segmented user interface distinct from the first segment and the second segment. | 07-26-2012 |
20120192093 | Device, Method, and Graphical User Interface for Navigating and Annotating an Electronic Document - A device, configured to operate in a first operational mode at some times and in a second operational mode at other times, detects a first gesture having a first gesture type; in response to detecting the first gesture: in accordance with a determination that the device is in the first operational mode, performs an operation having a first operation type; and, in accordance with a determination that the device is in the second operational mode, performs an operation having a second operation type; detects a second gesture having a second gesture type; and in response to detecting the second gesture: in accordance with a determination that the device is in the first operational mode, performs an operation having the second operation type; and in accordance with a determination that the device is in the second operational mode, performs an operation having the first operation type. | 07-26-2012 |
20120192101 | Device, Method, and Graphical User Interface for Navigating through an Electronic Document - An electronic device with a display and a touch-sensitive surface stores a document having primary content, supplementary content, and user-generated content. The device displays a representation of the document in a segmented user interface on the display. Primary content of the document is displayed in a first segment of the segmented user interface and supplementary content of the document is concurrently displayed in a second segment of the segmented user interface distinct from the first segment. The device receives a request to view user-generated content of the document. In response to the request, the device maintains display of the previously displayed primary content, ceases to display at least a portion of the previously displayed supplementary content, and displays user-generated content of the document in a third segment of the segmented user interface distinct from the first segment and the second segment. | 07-26-2012 |
20120192102 | Device, Method, and Graphical User Interface for Navigating through an Electronic Document - An electronic device with a display and a touch-sensitive surface stores a document having primary content, supplementary content, and user-generated content. The device displays a representation of the document in a segmented user interface on the display. Primary content of the document is displayed in a first segment of the segmented user interface and supplementary content of the document is concurrently displayed in a second segment of the segmented user interface distinct from the first segment. The device receives a request to view user-generated content of the document. In response to the request, the device maintains display of the previously displayed primary content, ceases to display at least a portion of the previously displayed supplementary content, and displays user-generated content of the document in a third segment of the segmented user interface distinct from the first segment and the second segment. | 07-26-2012 |
20120192117 | Device, Method, and Graphical User Interface with a Dynamic Gesture Disambiguation Threshold - An electronic device with a display, a touch-sensitive surface, one or more processors, and memory detects a first portion of a gesture, and determines that the first portion has a first gesture characteristic. The device selects a dynamic disambiguation threshold in accordance with the first gesture characteristic. The dynamic disambiguation threshold is used to determine whether to perform a first type of operation or a second type of operation when a first kind of gesture is detected. The device determines that the gesture is of the first kind of gesture. After selecting the dynamic disambiguation threshold, the device determines whether the gesture meets the dynamic disambiguation threshold. When the gesture meets the dynamic disambiguation threshold, the device performs the first type of operation, and when the gesture does not meet the dynamic disambiguation threshold, the device performs the second type of operation. | 07-26-2012 |
20120192118 | Device, Method, and Graphical User Interface for Navigating through an Electronic Document - An electronic device with a display and a touch-sensitive surface stores a document having primary content, supplementary content, and user-generated content. The device displays a representation of the document in a segmented user interface on the display. Primary content of the document is displayed in a first segment of the segmented user interface and supplementary content of the document is concurrently displayed in a second segment of the segmented user interface distinct from the first segment. The device receives a request to view user-generated content of the document. In response to the request, the device maintains display of the previously displayed primary content, ceases to display at least a portion of the previously displayed supplementary content, and displays user-generated content of the document in a third segment of the segmented user interface distinct from the first segment and the second segment. | 07-26-2012 |
20120235925 | Device, Method, and Graphical User Interface for Establishing an Impromptu Network - An electronic device with a touch-sensitive surface and a device motion sensor detects a predefined gesture on the touch-sensitive surface. The predefined gesture has one or more gesture components. The device detects a predefined movement of the electronic device with the device motion sensor. The predefined movement has one or more movement components. In response to detecting the predefined gesture and the predefined movement, the device, in accordance with a determination that the one or more gesture components and the one or more movement components satisfy predefined concurrency criteria, performs a predefined operation that is associated with concurrent detection of the predefined gesture and the predefined movement, and in accordance with a determination that the one or more gesture components and the one or more movement components do not satisfy the predefined concurrency criteria, foregoes performing the predefined operation. | 09-20-2012 |
20120240025 | Device, Method, and Graphical User Interface for Automatically Generating Supplemental Content - An electronic device with a display and a touch-sensitive surface displays a portion of a document in a primary user interface for the document. The portion of the document includes a respective author-specified term. The respective author-specified term is associated with corresponding additional information supplied by an author of the document, and the corresponding additional information is not concurrently displayed with the author-specified term in the portion of the document. The device also receives a request to annotate the respective author-specified term in the portion of the document; and in response to the request to annotate the respective author-specified term: annotates the respective author-specified term in the primary user interface; and generates instructions for displaying, in a supplemental user interface for the document distinct from the primary user interface, the respective author-specified term and at least a portion of the corresponding additional information for the respective author-specified term. | 09-20-2012 |
20120240037 | Device, Method, and Graphical User Interface for Displaying Additional Snippet Content - An electronic device concurrently displays snippets including a first snippet and a second snippet. The first snippet includes first displayed snippet content corresponding to a first portion of content from a document associated with the first snippet. The second snippet includes second displayed snippet content corresponding to a second portion of content from a document associated with the second snippet. The device detects a gesture associated with the first snippet, which includes detecting a first contact and a second contact and detecting movement of the first contact relative to the second contact. In response, the device modifies the first snippet to include an additional portion of content from the document associated with the first snippet that was not included in the first displayed snippet content and maintains display of the second snippet without adding any additional content from the document associated with the second snippet. | 09-20-2012 |
20120240042 | Device, Method, and Graphical User Interface for Establishing an Impromptu Network - An electronic device with a touch-sensitive surface and a device motion sensor detects a predefined gesture on the touch-sensitive surface. The predefined gesture has one or more gesture components. The device detects a predefined movement of the electronic device with the device motion sensor. The predefined movement has one or more movement components. In response to detecting the predefined gesture and the predefined movement, the device, in accordance with a determination that the one or more gesture components and the one or more movement components satisfy predefined concurrency criteria, performs a predefined operation that is associated with concurrent detection of the predefined gesture and the predefined movement, and in accordance with a determination that the one or more gesture components and the one or more movement components do not satisfy the predefined concurrency criteria, foregoes performing the predefined operation. | 09-20-2012 |
20120240074 | Device, Method, and Graphical User Interface for Navigating Between Document Sections - An electronic device with a display and a touch-sensitive surface displays a page of a first multi-page section of a document and a navigation bar configured to navigate through only pages in the first multi-page section of the document. The device detects a predefined gesture at a location on the touch-sensitive surface that corresponds to a predefined portion of the navigation bar. In response to detecting the predefined gesture, the device displays a navigation user interface that enables selection of a page of the document that is outside of the first multi-page section. The device receives an input in the navigation user interface that indicates selection of a page of a second multi-page section of the document outside of the first multi-page section. In response to receiving the input, the device displays the selected page of the second multi-page section of the document. | 09-20-2012 |
20130047115 | CREATING AND VIEWING DIGITAL NOTE CARDS - Systems, techniques, and methods are presented for creating digital note cards and presenting a graphical user interface for interacting with digital note cards. For example, content from an electronic book can be displayed in a graphical user interface. Input can be received in the graphical user interface highlighting a portion of the content and creating a note, the note including user generated content. A digital note card can be created where one side of the digital note card includes the highlighted text, and the other side of the digital note card includes the note. The digital note card can be displayed in the graphical user interface. | 02-21-2013 |
20130073932 | Interactive Content for Digital Books - This disclosure describes systems, methods, and computer program products for presenting interactive content for digital books. In some implementations, a graphical user interface (GUI) is presented that allows a user to view and interact with content embedded in a digital book. The interactive content can include, but is not limited to, text, image galleries, multimedia presentations, video, HTML, animated and static diagrams, charts, tables, visual dictionaries, review questions, three-dimensional (3D) animation and any other known media content. For example, various touch gestures can be used by the user to move through images and multimedia presentations, play video, answer review questions, manipulate three-dimensional objects, and interact with HTML. | 03-21-2013 |
20130073998 | AUTHORING CONTENT FOR DIGITAL BOOKS - This disclosure describes systems, methods, and computer program products for authoring content for digital books. In some implementations, a single graphical user interface (GUI) is presented that allows an author to design a layout for the digital book, including editing text and inserting various types of interactive elements in the text. The GUI functions as both a digital book layout design tool and a word processor to facilitate the building of a digital book. The relative page location of inserted widgets can be determined by a user-selectable anchor point placed within the text. An outline view of the digital book can be created and presented in the GUI based on a hierarchical structure determined by paragraph styles applied to the text. The GUI can provide a hybrid glossary and index page for allowing the author to create and manage a glossary and index for the digital book. | 03-21-2013 |
20130155071 | Document Collaboration Effects - Various features and processes related to document collaboration are disclosed. In some implementations, animations are presented when updating a local document display to reflect changes made to the document at a remote device. In some implementations, a user can selectively highlight changes made by collaborators in a document. In some implementations, a user can select an identifier associated with another user to display a portion of a document that includes the other user's cursor location. In some implementations, text in document chat sessions can be automatically converted into hyperlinks which, when selected, cause a document editor to perform an operation. | 06-20-2013 |
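Several of the listed applications describe variations on one gesture-handling idea: a second, concurrent contact changes how a drag on a resize handle is interpreted (e.g., 20110181527/20110181528, which resize the selected object about its center while a second contact is held). The following is a minimal illustrative sketch of that behavior, not the claimed implementation; the `UIObject` class, the single-axis simplification, and the `resize` helper are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UIObject:
    # Axis-aligned bounding box; (x, y) is the top-left corner.
    x: float
    y: float
    width: float
    height: float

    @property
    def center(self):
        return (self.x + self.width / 2, self.y + self.height / 2)

def resize(obj: UIObject, handle_dx: float, second_contact_down: bool) -> UIObject:
    """Drag the right-edge resizing handle by `handle_dx` pixels.

    If a second contact is detected while the handle moves
    (second_contact_down=True), the resize is anchored at the object's
    center, so both edges move symmetrically; otherwise the opposite
    edge stays fixed. Hypothetical simplification: horizontal axis only.
    """
    cx, _ = obj.center
    if second_contact_down:
        # Center-anchored resize: width grows by twice the handle motion
        # and the left edge shifts so the center stays put.
        obj.width += 2 * handle_dx
        obj.x = cx - obj.width / 2
    else:
        # Default resize: left edge fixed, right edge follows the contact.
        obj.width += handle_dx
    return obj
```

With a 100-pixel-wide object at the origin, a 10-pixel handle drag with a second contact held yields a 120-pixel width with the center unchanged, whereas the same drag alone yields a 110-pixel width with the left edge fixed.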