Patent application number | Description | Published |
--- | --- | --- |
20090094560 | HANDLE FLAGS - The claimed subject matter provides techniques to effectuate and facilitate efficient and flexible selection of display objects. The system can include devices and components that acquire gestures from pointing instrumentalities and thereafter ascertain velocities and proximities in relation to the displayed objects. Based at least upon these ascertained velocities and proximities falling below or within threshold levels, the system displays flags associated with the display objects. | 04-09-2009 |
20100100716 | Conserving Power Using Predictive Modelling and Signaling - Methods and systems for conserving power using predictive models and signaling are described. Parameters of a power management policy are set based on predictions derived from user activity and/or signals received from a remote computer which define a user preference. In an embodiment, the power management policy involves putting the computer into a sleep state and periodically waking it up. On waking, the computer determines whether to remain awake or to return to the sleep state dependent upon the output of a predictive model or signals that encode whether a remote user has requested that the computer remain awake. Before returning to the sleep state, a wake-up timer is set, and this timer triggers the computer to subsequently wake up. The length of time that the timer is set to may depend on factors such as the request from the remote user, context sensors and usage data. | 04-22-2010 |
20130127738 | DYNAMIC SCALING OF TOUCH SENSOR - Embodiments are disclosed that relate to dynamically scaling a mapping between a touch sensor and a display screen. One disclosed embodiment provides a method including setting a first user interface mapping that maps an area of the touch sensor to a first area of the display screen, receiving a user input from the user input device that changes a user interaction context of the user interface, and in response to the user input, setting a second user interface mapping that maps the area of the touch sensor to a second area of the display screen. The method further comprises providing to the display device an output of a user interface image representing the user input at a location based on the second user interface mapping. | 05-23-2013 |
20130262888 | CONSERVING POWER USING PREDICTIVE MODELLING AND SIGNALING - Methods and systems for conserving power using predictive models and signaling are described. Parameters of a power management policy are set based on predictions derived from user activity and/or signals received from a remote computer which define a user preference. In an embodiment, the power management policy involves putting the computer into a sleep state and periodically waking it up. On waking, the computer determines whether to remain awake or to return to the sleep state dependent upon the output of a predictive model or signals that encode whether a remote user has requested that the computer remain awake. Before returning to the sleep state, a wake-up timer is set, and this timer triggers the computer to subsequently wake up. The length of time that the timer is set to may depend on factors such as the request from the remote user, context sensors and usage data. | 10-03-2013 |
20140350928 | Method For Finding Elements In A Webpage Suitable For Use In A Voice User Interface - A voice interface for web pages or other documents identifies interactive elements such as links, obtains one or more phrases of each interactive element, such as link text, title text and alternative text for images, and adds the phrases to a grammar which is used for speech recognition. A click event is generated for an interactive element having a phrase which is a best match for the voice command of a user. In one aspect, the phrases of currently-displayed elements of the document are used for speech recognition. In another aspect, phrases which are not displayed, such as title text and alternative text for images, are used in the grammar. In another aspect, updates to the document are detected and the grammar is updated accordingly so that the grammar is synchronized with the current state of the document. | 11-27-2014 |
20140350941 | Method For Finding Elements In A Webpage Suitable For Use In A Voice User Interface (Disambiguation) - A disambiguation process for a voice interface for web pages or other documents. The process identifies interactive elements such as links, obtains one or more phrases of each interactive element, such as link text, title text and alternative text for images, and adds the phrases to a grammar which is used for speech recognition. A group of interactive elements is identified as potential best matches to a voice command when there is no single, clear best match. The disambiguation process modifies a display of the document to provide unique labels for each interactive element in the group, and the user is prompted to provide a subsequent spoken command to identify one of the unique labels. The selected unique label is identified and a click event is generated for the corresponding interactive element. | 11-27-2014 |
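The two voice-interface applications above (20140350928 and 20140350941) describe collecting phrases (link text, title text, alt text) from interactive elements, matching a voice command against them, and falling back to a labeled disambiguation step when no single clear best match exists. The abstracts do not specify a matching algorithm; the sketch below is a minimal illustration using Python's `difflib` similarity ratio as a stand-in scorer, with a hypothetical `threshold` and element structure.

```python
from difflib import SequenceMatcher


def best_match(command, elements, threshold=0.6):
    """Score each interactive element's phrases against a voice command.

    Returns the single best element when exactly one clears the
    threshold, or the ambiguous candidate group otherwise.
    """
    scored = []
    for elem in elements:
        # Each element contributes several phrases; take its best score.
        score = max(
            (SequenceMatcher(None, command.lower(), p.lower()).ratio()
             for p in elem["phrases"]),
            default=0.0,
        )
        scored.append((score, elem))
    # Elements that clear the threshold are candidate matches.
    top = [e for s, e in scored if s >= threshold]
    if len(top) == 1:
        return top[0]   # single clear match: a click event would fire here
    return top          # ambiguous: label each candidate for a follow-up command
```

In the disambiguation flow of 20140350941, an ambiguous group would then be overlaid with unique labels, and the user's next spoken command would select one label from the group.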
Patent application number | Description | Published |
--- | --- | --- |
20080214233 | CONNECTING MOBILE DEVICES VIA INTERACTIVE INPUT MEDIUM - A mobile device connection system is provided. The system includes an input medium to detect a device position or location. An analysis component determines a device type and establishes a connection with the device. The input medium can include vision systems to detect device presence and location where connections are established via wireless technologies. | 09-04-2008 |
20080250012 | IN SITU SEARCH FOR ACTIVE NOTE TAKING - A system and method that facilitates and effectuates in situ search for active note taking. The system and method includes receiving gestures from a stylus and a tablet associated with the system. Upon recognizing the gesture as belonging to a set of known and recognized gestures, the system creates an embeddable object, initiates a search with terms indicated by the gesture, associates the search results with the created object and inserts the object in close proximity with the terms that instigated the search. | 10-09-2008 |
20090128483 | ADVANCED NAVIGATION TECHNIQUES FOR PORTABLE DEVICES - The present invention provides a unique system and method that facilitates navigating smoothly and gracefully through any type of content viewable on portable devices such as cell-phones, PDAs, and/or any other hybrids thereof. In addition, such navigation can be performed while preserving perspective and context with respect to a larger amount of content. Pointing devices can also be used to navigate through content—the amount or detail of the content being dependent on the speed of the pointing device. Additionally, a semi-transparent overview of content can be overlaid on a zoomed-in portion of content to provide perspective to the zoomed-in portion. Content shown in the semi-transparent overview can depend on the location of the pointing device with respect to the content. | 05-21-2009 |
20090187824 | SELF-REVELATION AIDS FOR INTERFACES - Systems and/or methods are provided that facilitate revealing assistance information associated with a user interface. An interface can obtain input information related to interactions between the interface and a user. In addition, the interface can output assistance information in situ with the user interface. Further, a decision component determines the in situ assistance information output by the interface based at least in part on the obtained input information. | 07-23-2009 |
20100053154 | METHODS FOR AUTOMATED AND SEMIAUTOMATED COMPOSITION OF VISUAL SEQUENCES, FLOWS, AND FLYOVERS BASED ON CONTENT AND CONTEXT - A system with the ability to dynamically compose a sequence of visual views or flows allowing a single object or region, or multiple objects or regions, to be viewed from different perspectives and visual distances is described. The sequence of views can provide smooth flyovers over positions and details on objects that are deemed to be of interest, with changes in zoom level and/or velocity that are functions of the estimated complexity and/or unfamiliarity with features of the object. In an example, a flyover displaying different views on a map of a city arterial system on a small-screened mobile device is composed based on current traffic conditions, swooping up and down with parabolic trajectories, based on distances being traversed, and pausing at times over key traffic jams and other findings of interest based on the estimated visual complexity and predicted atypicality of situations. | 03-04-2010 |
20100318293 | RETRACING STEPS - Techniques for creating breadcrumbs for a trail of activity are described. The trail of activity may be created by recording movement information based on inferred actions of walking, not walking, or changing floor levels. The movement information may be recorded with an accelerometer and a pressure sensor. A representation of a list of breadcrumbs may be visually displayed on a user interface of a mobile device, in a reverse order to retrace steps. In some implementations, a compass may additionally or alternatively be used to collect directional information relative to the earth's magnetic poles. | 12-16-2010 |
20120240043 | Self-Revelation Aids for Interfaces - Systems and/or methods are provided that facilitate revealing assistance information associated with a user interface. An interface can obtain input information related to interactions between the interface and a user. In addition, the interface can output assistance information in situ with the user interface. Further, a decision component determines the in situ assistance information output by the interface based at least in part on the obtained input information. | 09-20-2012 |
20130115879 | Connecting Mobile Devices via Interactive Input Medium - A mobile device connection system is provided. The system includes an input medium to detect a device position or location. An analysis component determines a device type and establishes a connection with the device. The input medium can include vision systems to detect device presence and location where connections are established via wireless technologies. | 05-09-2013 |
20130217416 | CLIENT CHECK-IN - Client check-in techniques are described. In embodiments thereof, a mobile device includes a communication interface for notification communication with one or more other devices associated with the mobile device. The mobile device has a selectable control for user selection to initiate a check-in notification that indicates a location of the mobile device and a timestamp of the date and time. The mobile device also includes a check-in service that is implemented to initiate communication of the check-in notification to the other associated devices responsive to a user selection to initiate the check-in notification. | 08-22-2013 |
20130225152 | AUTOMATICALLY QUIETING MOBILE DEVICES - In implementations of automatically quieting mobile devices, a mobile device includes a communication interface for communicating with other devices that are associated with the mobile device, and the other devices correspond to respective users of the devices. A device quiet service is implemented to initiate a device quiet control that quiets one or more of the other associated devices that are controllable by the mobile device, and the device quiet service initiates communication of the device quiet control to the associated devices. A device quiet control can be initiated to restrict communication functions of the other associated devices, such as for a designated time duration. Alternatively or in addition, a device quiet control can quiet the other associated devices at a designated location, during an event, within a designated quiet zone, and/or quiet the associated devices that are proximate the mobile device at a location. | 08-29-2013 |
20130227431 | PRIVATE INTERACTION HUBS - In embodiments of private interaction hubs, a mobile device has memory storage to maintain hub data that is associated with a private interaction hub, where the hub data includes multiple types of displayable data that is editable by different types of device applications. The memory storage at the device also maintains private data that is displayable and is viewable with one of the device applications. The mobile device also includes a display device to display the multiple types of the hub data in a hub user interface of a hub application. The display device can also display the private data and a subset of the hub data that are both associated with a device application in a device application user interface. | 08-29-2013 |
20130295872 | MOBILE DEVICE EMERGENCY SERVICE - Mobile device emergency service techniques are described. In embodiments, a client device includes one or more modules implemented at least partially in hardware and configured to implement an emergency service. The emergency service is configured to support operations including generating a user interface for display on a display device, receiving one or more inputs usable to form an emergency contacts list that includes a plurality of emergency contacts, and causing the emergency contacts list to be communicated to one or more other client devices for use in generating a message to be communicated automatically and without user intervention to the emergency contacts in the emergency contacts list responsive to a trigger. | 11-07-2013 |
20130298037 | HUB COORDINATION SERVICE - In implementations of a hub coordination service, a device includes a communication interface for communication coordination with one or more associated devices of the device, and the associated devices correspond to hub members. A hub manager is implemented to receive a task input to create a task for one or more of the hub members to complete. The hub manager can register the task in a hub that is a private, shared space of the hub members, and then initiate communication of the task to respective associated devices of the one or more hub members for notification of the task to be completed. | 11-07-2013 |
20130303143 | MOBILE DEVICE SAFE DRIVING - In embodiments of mobile device safe driving, a mobile device can display a device lock screen on an integrated display device, and transition from the device lock screen to display a driving mode lock screen. The transition to display the driving mode lock screen occurs without receiving a PIN code entered on the device lock screen. The mobile device implements a safe driving service that activates a safe driving mode and disables features of the mobile device while the safe driving mode is activated. | 11-14-2013 |
20130305319 | HUB KEY SERVICE - In embodiments of a hub key service, a device includes a communication interface for communication coordination with one or more associated devices of the device, and the associated devices correspond to hub members. A hub manager is implemented to generate an electronic key that includes access permissions, which are configurable to enable controlled access for the hub members, such as to a building, vehicle, media device, or location. The hub manager can then correlate the electronic key with the device to enable access to the building, vehicle, media device, or location with the device utilized as the electronic key. | 11-14-2013 |
20130305354 | RESTRICTED EXECUTION MODES - In embodiments of restricted execution modes, a mobile device can display a device lock screen on an integrated display device, and transition from the device lock screen to display a shared space user interface of a shared space. The transition to display the shared space user interface occurs without receiving a PIN code entered on the device lock screen. The mobile device implements a restricted execution service that activates a restricted execution mode and restricts access of a device application to device content while the restricted execution mode is activated. The restricted execution service can also allow a shared device application that is included in the shared space access to the device content while the restricted execution mode is activated. | 11-14-2013 |
20140055499 | METHODS FOR AUTOMATED AND SEMIAUTOMATED COMPOSITION OF VISUAL SEQUENCES, FLOWS, AND FLYOVERS BASED ON CONTENT AND CONTEXT - A system with the ability to dynamically compose a sequence of visual views or flows allowing a single object or region, or multiple objects or regions, to be viewed from different perspectives and visual distances is described. The sequence of views can provide smooth flyovers over positions and details on objects that are deemed to be of interest, with changes in zoom level and/or velocity that are functions of the estimated complexity and/or unfamiliarity with features of the object. In an example, a flyover displaying different views on a map of a city arterial system on a small-screened mobile device is composed based on current traffic conditions, swooping up and down with parabolic trajectories, based on distances being traversed, and pausing at times over key traffic jams and other findings of interest based on the estimated visual complexity and predicted atypicality of situations. | 02-27-2014 |
20140100779 | METHODS FOR AUTOMATED AND SEMIAUTOMATED COMPOSITION OF VISUAL SEQUENCES, FLOWS, AND FLYOVERS BASED ON CONTENT AND CONTEXT - A system with the ability to dynamically compose a sequence of visual views or flows allowing a single object or region, or multiple objects or regions, to be viewed from different perspectives and visual distances is described. The sequence of views can provide smooth flyovers over positions and details on objects that are deemed to be of interest, with changes in zoom level and/or velocity that are functions of the estimated complexity and/or unfamiliarity with features of the object. In an example, a flyover displaying different views on a map of a city arterial system on a small-screened mobile device is composed based on current traffic conditions, swooping up and down with parabolic trajectories, based on distances being traversed, and pausing at times over key traffic jams and other findings of interest based on the estimated visual complexity and predicted atypicality of situations. | 04-10-2014 |
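The "Retracing Steps" abstract above (20100318293) infers walking, not walking, or floor changes from an accelerometer and a pressure sensor, records each inference as a breadcrumb, and presents the list in reverse order to retrace steps. As a minimal sketch, assuming made-up sensor summaries (an accelerometer variance and a pressure delta) and arbitrary, hypothetical thresholds:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Breadcrumb:
    action: str   # "walking", "stopped", or "floor_change"
    detail: str   # e.g. "up"/"down" for a floor change


@dataclass
class Trail:
    crumbs: List[Breadcrumb] = field(default_factory=list)

    def record(self, accel_variance: float, pressure_delta: float) -> None:
        # Hypothetical thresholds: a pressure change implies a floor change
        # (barometric pressure falls as you ascend, so a negative delta
        # means "up"); accelerometer variance separates walking from rest.
        if abs(pressure_delta) > 0.1:
            self.crumbs.append(Breadcrumb(
                "floor_change", "up" if pressure_delta < 0 else "down"))
        elif accel_variance > 0.5:
            self.crumbs.append(Breadcrumb("walking", "segment"))
        else:
            self.crumbs.append(Breadcrumb("stopped", "pause"))

    def retrace(self) -> List[Breadcrumb]:
        # Show the breadcrumbs in reverse order to walk the trail back.
        return list(reversed(self.crumbs))
```

A real implementation would sample the sensors continuously and fold a compass heading into each breadcrumb, as the abstract mentions; the class above only illustrates the record-then-reverse structure.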
Patent application number | Description | Published |
--- | --- | --- |
20100321275 | MULTIPLE DISPLAY COMPUTING DEVICE WITH POSITION-BASED OPERATING MODES - Described is a multiple display computing device, including technology for automatically selecting among various operating modes so as to display content on the displays based upon their relative positions. For example, concave modes correspond to inwardly facing viewing surfaces of both displays, such as for viewing private content from a single viewpoint. Convex modes have outwardly facing viewing surfaces, such that private content is shown on one display and public content on another. Neutral modes are those in which the viewing surfaces of the displays are generally on a common plane, for single user or multiple user/collaborative viewing depending on each display's output orientation. The displays may be movably coupled to one another, or may be implemented as two detachable computer systems coupled by a network connection. | 12-23-2010 |
20120214542 | AUTOMATIC ANSWERING OF A MOBILE PHONE - The present disclosure relates to a mobile phone and a method for answering such a phone automatically without user input. In one embodiment, the mobile phone detects that a call is being received. A proximity sensor is then used to detect the presence of a nearby object. For example, this allows a determination to be made whether the mobile phone is within a pocket of the user while the phone is ringing. Then a determination is made whether the proximity sensor changes states. For example, if a user removes the phone from their pocket, the proximity sensor switches from detecting something proximal to detecting that the phone is no longer in the user's pocket. Next, a determination is made whether the proximity sensor is again next to an object, such as an ear. If so, the mobile phone can be automatically answered without further user input. | 08-23-2012 |
20120231838 | CONTROLLING AUDIO OF A DEVICE - Techniques and tools are described for controlling an audio signal of a mobile device. For example, information indicative of acceleration of the mobile device can be received and correlation between the information indicative of acceleration and exemplar whack event data can be determined. An audio signal of the mobile device can be controlled based on the correlation. | 09-13-2012 |
20130117365 | EVENT-BASED MEDIA GROUPING, PLAYBACK, AND SHARING - Exemplary methods, apparatus, and systems are disclosed for capturing, organizing, sharing, and/or displaying media. For example, using embodiments of the disclosed technology, a unified playback and browsing experience for a collection of media can be created automatically. For instance, heuristics and metadata can be used to assemble and add narratives to the media data. Furthermore, this representation of media can recompose itself dynamically as more media is added to the collection. While a collection may use a single user's content, sometimes media that is desirable to include in the collection is captured by friends and/or others at the same event. In certain embodiments, media content related to the event can be automatically collected and shared among selected groups. Further, in some embodiments, new media can be automatically incorporated into a media collection associated with the event, and the playback experience dynamically updated. | 05-09-2013 |
20130124207 | VOICE-CONTROLLED CAMERA OPERATIONS - A computing device (e.g., a smart phone, a tablet computer, digital camera, or other device with image capture functionality) causes an image capture device to capture one or more digital images based on audio input (e.g., a voice command) received by the computing device. For example, a user's voice (e.g., a word or phrase) is converted to audio input data by the computing device, which then compares (e.g., using an audio matching algorithm) the audio input data to an expected voice command associated with an image capture application. In another aspect, a computing device activates an image capture application and captures one or more digital images based on a received voice command. In another aspect, a computing device transitions from a low-power state to an active state, activates an image capture application, and causes a camera device to capture digital images based on a received voice command. | 05-16-2013 |
20130324194 | AUTOMATIC ANSWERING OF A MOBILE PHONE - The present disclosure relates to a mobile phone and a method for answering such a phone automatically without user input. In one embodiment, the mobile phone detects that a call is being received. A proximity sensor is then used to detect the presence of a nearby object. For example, this allows a determination to be made whether the mobile phone is within a pocket of the user while the phone is ringing. Then a determination is made whether the proximity sensor changes states. For example, if a user removes the phone from their pocket, the proximity sensor switches from detecting something proximal to detecting that the phone is no longer in the user's pocket. Next, a determination is made whether the proximity sensor is again next to an object, such as an ear. If so, the mobile phone can be automatically answered without further user input. | 12-05-2013 |
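The automatic-answering abstracts above (20120214542 and 20130324194) describe a proximity-sensor sequence: near while the phone rings in a pocket, far when it is pulled out, near again when held to the ear, at which point the call is answered without user input. A toy state machine capturing that sequence (the sensor API, timing, and debouncing are omitted; everything here is illustrative, not the patented implementation):

```python
class AutoAnswer:
    """Answers a ringing call on the proximity sequence
    near -> far -> near (pocket -> in hand -> at ear)."""

    def __init__(self):
        self.ringing = False
        self.history = []

    def start_ringing(self, proximity_near: bool) -> None:
        # A call arrives; record the initial proximity-sensor state.
        self.ringing = True
        self.history = [proximity_near]

    def on_proximity(self, near: bool) -> bool:
        # Called on each proximity reading while the call rings.
        if not self.ringing:
            return False
        if near != self.history[-1]:
            self.history.append(near)
        # In pocket (near), pulled out (far), held to the ear (near again).
        if self.history[-3:] == [True, False, True]:
            self.ringing = False
            return True   # answer automatically, no further user input
        return False
```

The key design point the abstracts emphasize is that a single "near" reading is not enough, since the phone may simply still be in the pocket; it is the full state-change sequence that signals intent to answer.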