Patent application number | Description | Published |
--- | --- | --- |
20090005087 | Newsreader for Mobile Device - Providing information to a mobile device can include receiving a translation request from a mobile device, wherein the translation request includes a resource locator identifying information in a native format; accessing the information identified by the resource locator, wherein the information is retrieved from a local cache if available and otherwise is retrieved from a source associated with the resource locator; translating at least a portion of the information identified by the resource locator to generate a translated file in a supported format; and transmitting the translated file to the mobile device. Further, the information retrieved from a source associated with the resource locator can be stored in the local cache. Additionally, the information identified by the resource locator can be cleared from the local cache after a predetermined amount of time. | 01-01-2009 |
20120307096 | Metadata-Assisted Image Filters - This disclosure pertains to devices, methods, systems, and computer readable media for generating and/or interpreting image metadata to determine input parameters for various image processing routines, e.g., filters that distort or enhance an image, in a way that provides an intuitive experience for both the user and the software developer. Such techniques may attach the metadata to image frames and then send the image frames down an image processing pipeline to one or more image processing routines. Image metadata may include face location information, and the image processing routine may include an image filter that processes the image metadata in order to keep the central focus (or foci) of the image filter substantially coincident with one or more of the faces represented in the face location information. The generated and/or interpreted metadata may also be saved to a metadata track for later application to unfiltered image data. | 12-06-2012 |
20130050263 | Device, Method, and Graphical User Interface for Managing and Interacting with Concurrently Open Software Applications - While in a first mode, a first electronic device displays on a touch-sensitive display a first application view that corresponds to a first application. In response to detecting a first input, the electronic device enters a second mode, and concurrently displays in a first predefined area an initial group of application icons with at least a portion of the first application view adjacent to the first predefined area. While in the second mode, in response to detecting a first touch gesture on an application icon that corresponds to a second application, the electronic device displays a popup view corresponding to a full-screen-width view of the second application on a second electronic device. In response to detecting one or more second touch gestures within the popup view, the electronic device performs an action in the second application that updates a state of the second application. | 02-28-2013 |
20130328917 | SMART COVER PEEK - A tablet device includes a display configured to present visual content, a sensor array configured to detect a status of a foldable flap in relation to the display, and a processor configured to operate the tablet device in accordance with the determined status of the foldable flap in relation to the display. In one embodiment, the processor receives a setting value and uses the setting value to execute an application in accordance with the determined relationship of the flap and the display. | 12-12-2013 |
20140035823 | Dynamic Context-Based Language Determination - Methods, systems, computer-readable media, and apparatuses for facilitating message composition are presented. In some embodiments, an electronic computing device can receive user input and determine a set of contextual attributes based on the user input. Based on the set of contextual attributes, the device can determine the language desired for the message composition and switch the keyboard layout to one corresponding to that language. Further, the device can determine one or more languages that may be used in the message composition based on the set of contextual attributes and enable functionalities associated with those languages. Additionally, in some embodiments, the device can determine one or more languages from the user's dictation based on the set of contextual attributes and generate a textual representation of the audio input. | 02-06-2014 |
20140040741 | Smart Auto-Completion - Auto-completion techniques are provided. In some embodiments, a multimedia object can be determined based upon a received textual input. A displayable representation of the multimedia object can be provided as an auto-complete suggestion. In response to user selection of the displayable representation, the received textual input can be replaced with a representation that enables the multimedia object to be accessed. In some embodiments, a mathematical operation can be performed based upon the received textual input. The result of the operation can be provided as an auto-complete suggestion. In response to user selection of the suggestion, the received textual input can be replaced with the result of the mathematical calculation. | 02-06-2014 |
20140089417 | COMPLEX HANDLING OF CONDITIONAL MESSAGES - System, methods, and apparatuses are provided for outputting a notification of a conditional communication (e.g., a message) based on one or more conditions. The output can be controlled by a receiver's device or by a server communicably coupled with the receiver's device, on which a set of applications are loaded. For example, a filter can determine whether a communication has an associated condition. The condition can require specific values of a variable property associated with the receiver's device. A selector module can determine a subset of application(s) that are used to obtain the variable property. When the condition is satisfied, a notification of the communication can be output. | 03-27-2014 |
20140218372 | INTELLIGENT DIGITAL ASSISTANT IN A DESKTOP ENVIRONMENT - Methods and systems related to interfaces for interacting with a digital assistant in a desktop environment are disclosed. In some embodiments, a digital assistant is invoked on a user device by a gesture following a predetermined motion pattern on a touch-sensitive surface of the user device. In some embodiments, a user device selectively invokes a dictation mode or a command mode to process a speech input depending on whether an input focus of the user device is within a text input area displayed on the user device. In some embodiments, a digital assistant performs various operations in response to one or more objects being dragged and dropped onto an iconic representation of the digital assistant displayed on a graphical user interface. In some embodiments, a digital assistant is invoked to cooperate with the user to complete a task that the user has already started on a user device. | 08-07-2014 |
20140358409 | Location-Based Features for Commute Assistant - Some embodiments provide a commute application that provides a first presentation of several stops along a route. The commute application also receives a selection of a stop from the several stops along the route. The commute application further provides a second presentation for displaying several different routes that traverse through the selected stop. | 12-04-2014 |
20140358410 | User Interface Tools for Commute Assistant - Some embodiments provide a commute application that receives a selection of a route from several different routes. Each route in the several different routes includes several stops. In response, the commute application also provides a dynamic focus table that includes a first portion for displaying a schedule of stops along the selected route and a second portion for displaying metadata regarding the selected route. The metadata presented in the second portion is automatically updated whenever a different schedule of stops is displayed in the first portion of the dynamic focus table. | 12-04-2014 |
20140358411 | Architecture for Distributing Transit Data - Some embodiments provide a program that receives, from several data providers, route data and graphical representations of route data (e.g., transit systems, schedules, stops, etc.) for different localities. The program also stores this data on a set of servers for later retrieval and transmission to commute applications operating in different localities. The program further retrieves, from external vendors, location data of transit vehicles that traverse routes based on the route data and schedule data. The location data is transmitted to commute applications. | 12-04-2014 |
20150058723 | Device, Method, and Graphical User Interface for Moving a User Interface Object Based on an Intensity of a Press Input - An electronic device, with a touch-sensitive surface and a display, includes one or more sensors to detect intensity of contacts with the touch-sensitive surface. The device displays a user interface object on the display. The device further detects a press input on the touch-sensitive surface while a focus selector is at a first location in a user interface. In response to detecting the press input on the touch-sensitive surface, upon determining that the press input has an intensity above a predefined activation threshold, the device moves the user interface object directly to the first location in the user interface; and upon determining that the press input has an intensity below the predefined activation threshold and meets gradual-movement criteria, the device moves the user interface object toward the first location in the user interface in accordance with the intensity of the press input. | 02-26-2015 |
20150062052 | Device, Method, and Graphical User Interface for Transitioning Between Display States in Response to a Gesture - An electronic device displays a user interface in a first display state. The device detects a first portion of a gesture on a touch-sensitive surface, including detecting intensity of a respective contact of the gesture. In response to detecting the first portion of the gesture, the device displays an intermediate display state between the first display state and a second display state. In response to detecting the end of the gesture: if intensity of the respective contact had reached a predefined intensity threshold prior to the end of the gesture, the device displays the second display state; otherwise, the device redisplays the first display state. After displaying an animated transition between a first display state and a second display state, the device, optionally, detects an increase of the contact intensity. In response, the device displays a continuation of the animation in accordance with the increasing intensity of the respective contact. | 03-05-2015 |
20150067495 | Device, Method, and Graphical User Interface for Providing Feedback for Changing Activation States of a User Interface Object - An electronic device with a touch-sensitive surface, a display, and one or more sensors to detect intensity of contacts with the touch-sensitive surface displays a user interface object having a plurality of activation states; detects a contact on the touch-sensitive surface; and detects an increase of intensity of the contact from a first intensity to a second intensity. In response to detecting the increase in intensity, the device: changes activation states M times, and generates a tactile output on the touch-sensitive surface corresponding to each change in activation state. The device detects a decrease of intensity of the contact from the second intensity to the first intensity; and in response to detecting the decrease in intensity, the device: changes activation states N times, and generates a tactile output on the touch-sensitive surface corresponding to each change in activation state, where N is different from M. | 03-05-2015 |
20150067497 | Device, Method, and Graphical User Interface for Providing Tactile Feedback for Operations Performed in a User Interface - An electronic device with a touch-sensitive surface and a display displays a user interface object on the display, detects a contact on the touch-sensitive surface, and detects a first movement of the contact across the touch-sensitive surface, the first movement corresponding to performing an operation on the user interface object, and, in response to detecting the first movement, the device performs the operation and generates a first tactile output on the touch-sensitive surface. The device also detects a second movement of the contact across the touch-sensitive surface, the second movement corresponding to reversing the operation on the user interface object, and in response to detecting the second movement, the device reverses the operation and generates a second tactile output on the touch-sensitive surface, where the second tactile output is different from the first tactile output. | 03-05-2015 |
20150067513 | Device, Method, and Graphical User Interface for Facilitating User Interaction with Controls in a User Interface - An electronic device, with a touch-sensitive surface and a display, includes one or more sensors to detect intensity of contacts with the touch-sensitive surface. The device displays, on the display, a first control for controlling a first operation. The device detects, on the touch-sensitive surface, a first input that corresponds to the first control; and in response to detecting the first input: in accordance with a determination that the first input meets first control-activation criteria but does not include a contact with a maximum intensity above a respective intensity threshold, the device performs the first operation; and in accordance with a determination that the first input includes a contact with an intensity above the respective intensity threshold, the device displays a second control for performing a second operation associated with the first operation. | 03-05-2015 |
20150067560 | Device, Method, and Graphical User Interface for Manipulating Framed Graphical Objects - An electronic device with a touch-sensitive surface, a display, and one or more sensors to detect intensity of contacts with the touch-sensitive surface displays a graphical object inside of a frame on the display, and detects a gesture. Detecting the gesture includes: detecting a contact on the touch-sensitive surface while a focus selector is over the graphical object, and detecting movement of the contact across the touch-sensitive surface. In response to detecting the gesture: in accordance with a determination that the contact meets predefined intensity criteria, the device removes the graphical object from the frame; and in accordance with a determination that the contact does not meet the predefined intensity criteria, the device adjusts an appearance of the graphical object inside of the frame. | 03-05-2015 |
20150067563 | Device, Method, and Graphical User Interface for Moving and Dropping a User Interface Object - An electronic device detects a contact associated with a focus selector that controls movement of a respective user interface object; and while continuously detecting the contact, the device detects first movement of the contact. In response to detecting the first movement of the contact, the device moves the focus selector and the respective user interface object, and determines an intensity of the contact. The device detects second movement of the contact and in response to detecting the second movement of the contact: when the contact meets respective intensity criteria, the device moves the focus selector and the user interface object; and when the contact does not meet the respective intensity criteria, the device moves the focus selector without moving the user interface object. | 03-05-2015 |
20150067596 | Device, Method, and Graphical User Interface for Displaying Additional Information in Response to a User Contact - An electronic device, with a touch-sensitive surface and a display, includes one or more sensors to detect intensity of contacts with the touch-sensitive surface. The device detects a contact on the touch-sensitive surface while a focus selector corresponding to the contact is at a respective location on the display associated with additional information not initially displayed on the display. While the focus selector is at the respective location, upon determining that the contact has an intensity above a respective intensity threshold before a predefined delay time has elapsed with the focus selector at the respective location, the device displays the additional information associated with the respective location without waiting until the predefined delay time has elapsed; and upon determining that the contact has an intensity below the respective intensity threshold, the device waits until the predefined delay time has elapsed to display the additional information associated with the respective location. | 03-05-2015 |
20150067601 | Device, Method, and Graphical User Interface for Displaying Content Associated with a Corresponding Affordance - An electronic device with a display, a touch-sensitive surface, and sensors to detect intensity of contacts with the touch-sensitive surface displays, on the display, an affordance corresponding to respective content at a respective size and detects a gesture that includes an increase in intensity of a contact followed by a subsequent decrease in intensity of the contact. In response to the increase in intensity, the device decreases a size of the affordance below the respective size. In response to the subsequent decrease in intensity: when a maximum intensity of the contact is above a content-display intensity threshold, the device ceases to display the affordance and displays at least a portion of the respective content; and when a maximum intensity of the contact is below the content-display intensity threshold, the device increases the size of the affordance to the respective size and forgoes displaying the respective content. | 03-05-2015 |
20150067602 | Device, Method, and Graphical User Interface for Selecting User Interface Objects - An electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface displays a first user interface object and detects first movement of the contact that corresponds to movement of a focus selector toward the first user interface object. In response to detecting the first movement, the device moves the focus selector to the first user interface object; and determines an intensity of the contact. After detecting the first movement, the device detects second movement of the contact. In response to detecting the second movement of the contact, when the contact meets selection criteria based on an intensity of the contact, the device moves the focus selector and the first user interface object; and when the contact does not meet the selection criteria, the device moves the focus selector without moving the first user interface object. | 03-05-2015 |
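Several of the applications above (e.g., 20150067596) describe gating a display behavior on both a contact-intensity threshold and a predefined delay: a hard press reveals additional information immediately, while a lighter press waits out the delay. The following is a minimal illustrative sketch of that decision logic only; the function name, normalized intensity scale, and the specific threshold and delay values are assumptions for illustration, not drawn from any claimed implementation or real API.

```python
# Illustrative sketch of the intensity-gated display logic described in
# application 20150067596. All identifiers and constants below are
# hypothetical; real devices would read calibrated sensor values.

INTENSITY_THRESHOLD = 0.5  # assumed normalized intensity (0.0-1.0) for an immediate reveal
PREDEFINED_DELAY = 1.0     # assumed delay in seconds for a below-threshold press

def display_delay(contact_intensity: float) -> float:
    """Return how long to wait before showing the additional information.

    A press at or above the intensity threshold shows the information
    immediately (zero delay); a lighter press waits the predefined delay.
    """
    if contact_intensity >= INTENSITY_THRESHOLD:
        return 0.0
    return PREDEFINED_DELAY
```

Under these assumptions, `display_delay(0.8)` yields an immediate reveal while `display_delay(0.2)` defers display until the delay elapses, matching the two branches described in the abstract.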