Patent application number | Description | Published |
--- | --- | --- |
20090077501 | Method and apparatus for selecting an object within a user interface by performing a gesture - One embodiment of the present invention provides a system that facilitates invoking a command. During operation, the system suggests with a graphic element a gesture to use to invoke a command. The system then receives the gesture from a user at a device. Note that the gesture is received via an input mechanism, and also note that the gesture is a predetermined manipulation of the input mechanism. The system then determines a graphic element within the user interface that is associated with the gesture. Finally, upon determining the object associated with the gesture, the system invokes the command associated with the graphic element. | 03-19-2009 |
20110181524 | Copy and Staple Gestures - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 07-28-2011 |
20110185299 | Stamp Gestures - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 07-28-2011 |
20110185300 | BRUSH, CARBON-COPY, AND FILL GESTURES - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 07-28-2011 |
20110185318 | EDGE GESTURES - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 07-28-2011 |
20110185320 | Cross-reference Gestures - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 07-28-2011 |
20110191704 | CONTEXTUAL MULTIPLEXING GESTURES - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 08-04-2011 |
20110191718 | Link Gestures - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 08-04-2011 |
20110191719 | Cut, Punch-Out, and Rip Gestures - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 08-04-2011 |
20110205163 | Off-Screen Gestures to Create On-Screen Input - Bezel gestures for touch displays are described. In at least some embodiments, the bezel of a device is used to extend functionality that is accessible through the use of so-called bezel gestures. In at least some embodiments, off-screen motion can be used, by virtue of the bezel, to create screen input through a bezel gesture. Bezel gestures can include single-finger bezel gestures, multiple-finger/same-hand bezel gestures, and/or multiple-finger, different-hand bezel gestures. | 08-25-2011 |
20110209039 | MULTI-SCREEN BOOKMARK HOLD GESTURE - Embodiments of a multi-screen bookmark hold gesture are described. In various embodiments, a hold input is recognized at a first screen of a multi-screen system, and the hold input is recognized when held in place proximate an edge of a journal page that is displayed on the first screen. A motion input is recognized at a second screen of the multi-screen system while the hold input remains held in place. A bookmark hold gesture can then be determined from the recognized hold and motion inputs, and the bookmark hold gesture is effective to bookmark the journal page at a location of the hold input on the first screen. | 08-25-2011 |
20110209057 | MULTI-SCREEN HOLD AND PAGE-FLIP GESTURE - Embodiments of a multi-screen hold and page-flip gesture are described. In various embodiments, a hold input is recognized at a first screen of a multi-screen system, and the hold input is recognized when held to select a journal page that is displayed on the first screen. A motion input is recognized at a second screen of the multi-screen system, and the motion input is recognized while the hold input remains held in place. A hold and page-flip gesture can then be determined from the recognized hold and motion inputs, and the hold and page-flip gesture is effective to maintain the display of the journal page while one or more additional journal pages are flipped for display on the second screen. | 08-25-2011 |
20110209058 | MULTI-SCREEN HOLD AND TAP GESTURE - Embodiments of a multi-screen hold and tap gesture are described. In various embodiments, a hold input is recognized at a first screen of a multi-screen system, and the hold input is recognized when held to select a displayed object on the first screen. A tap input is recognized at a second screen of the multi-screen system, and the tap input is recognized when the displayed object continues being selected. A hold and tap gesture can then be determined from the recognized hold and tap inputs. | 08-25-2011 |
20110209088 | Multi-Finger Gestures - Bezel gestures for touch displays are described. In at least some embodiments, the bezel of a device is used to extend functionality that is accessible through the use of so-called bezel gestures. In at least some embodiments, off-screen motion can be used, by virtue of the bezel, to create screen input through a bezel gesture. Bezel gestures can include single-finger bezel gestures, multiple-finger/same-hand bezel gestures, and/or multiple-finger, different-hand bezel gestures. | 08-25-2011 |
20110209089 | MULTI-SCREEN OBJECT-HOLD AND PAGE-CHANGE GESTURE - Embodiments of a multi-screen object-hold and page-change gesture are described. In various embodiments, a hold input is recognized at a first screen of a multi-screen system, and the hold input is recognized when held in place to select a displayed object on the first screen. A motion input is recognized at a second screen of the multi-screen system, where the motion input is recognized while the displayed object remains held in place and is effective to change one or more journal pages. An object-hold and page-change gesture can then be determined from the recognized hold and motion inputs. | 08-25-2011 |
20110209093 | RADIAL MENUS WITH BEZEL GESTURES - Bezel gestures for touch displays are described. In at least some embodiments, the bezel of a device is used to extend functionality that is accessible through the use of so-called bezel gestures. In at least some embodiments, off-screen motion can be used, by virtue of the bezel, to create screen input through a bezel gesture. Bezel gestures can include single-finger bezel gestures, multiple-finger/same-hand bezel gestures, and/or multiple-finger, different-hand bezel gestures. | 08-25-2011 |
20110209097 | Use of Bezel as an Input Mechanism - Bezel gestures for touch displays are described. In at least some embodiments, the bezel of a device is used to extend functionality that is accessible through the use of so-called bezel gestures. In at least some embodiments, off-screen motion can be used, by virtue of the bezel, to create screen input through a bezel gesture. Bezel gestures can include single-finger bezel gestures, multiple-finger/same-hand bezel gestures, and/or multiple-finger, different-hand bezel gestures. | 08-25-2011 |
20110209098 | On and Off-Screen Gesture Combinations - Bezel gestures for touch displays are described. In at least some embodiments, the bezel of a device is used to extend functionality that is accessible through the use of so-called bezel gestures. In at least some embodiments, off-screen motion can be used, by virtue of the bezel, to create screen input through a bezel gesture. Bezel gestures can include single-finger bezel gestures, multiple-finger/same-hand bezel gestures, and/or multiple-finger, different-hand bezel gestures. | 08-25-2011 |
20110209099 | Page Manipulations Using On and Off-Screen Gestures - Bezel gestures for touch displays are described. In at least some embodiments, the bezel of a device is used to extend functionality that is accessible through the use of so-called bezel gestures. In at least some embodiments, off-screen motion can be used, by virtue of the bezel, to create screen input through a bezel gesture. Bezel gestures can include single-finger bezel gestures, multiple-finger/same-hand bezel gestures, and/or multiple-finger, different-hand bezel gestures. | 08-25-2011 |
20110209100 | MULTI-SCREEN PINCH AND EXPAND GESTURES - Embodiments of multi-screen pinch and expand gestures are described. In various embodiments, a first input is recognized at a first screen of a multi-screen system, and the first input includes a first motion input. A second input is recognized at a second screen of the multi-screen system, and the second input includes a second motion input. A pinch gesture or an expand gesture can then be determined from the first and second motion inputs that are associated with the recognized first and second inputs. | 08-25-2011 |
20110209101 | MULTI-SCREEN PINCH-TO-POCKET GESTURE - Embodiments of a multi-screen pinch-to-pocket gesture are described. In various embodiments, a first motion input to a first screen region is recognized at a first screen of a multi-screen system, and the first motion input is recognized to select a displayed object. A second motion input to a second screen region is recognized at a second screen of the multi-screen system, and the second motion input is recognized to select the displayed object. A pinch-to-pocket gesture can then be determined from the recognized first and second motion inputs within the respective first and second screen regions, the pinch-to-pocket gesture effective to pocket the displayed object. | 08-25-2011 |
20110209102 | MULTI-SCREEN DUAL TAP GESTURE - Embodiments of a multi-screen dual tap gesture are described. In various embodiments, a first tap input to a displayed object is recognized at a first screen of a multi-screen system. A second tap input to the displayed object is recognized at a second screen of the multi-screen system, and the second tap input is recognized approximately when the first tap input is recognized. A dual tap gesture can then be determined from the recognized first and second tap inputs. | 08-25-2011 |
20110209103 | MULTI-SCREEN HOLD AND DRAG GESTURE - Embodiments of a multi-screen hold and drag gesture are described. In various embodiments, a hold input is recognized at a first screen of a multi-screen system when the hold input is held in place. A motion input is recognized at a second screen of the multi-screen system, and the motion input is recognized to select a displayed object while the hold input remains held in place. A hold and drag gesture can then be determined from the recognized hold and motion inputs. | 08-25-2011 |
20110209104 | MULTI-SCREEN SYNCHRONOUS SLIDE GESTURE - Embodiments of a multi-screen synchronous slide gesture are described. In various embodiments, a first motion input is recognized at a first screen of a multi-screen system, and the first motion input is recognized when moving in a particular direction across the first screen. A second motion input is recognized at a second screen of the multi-screen system, where the second motion input is recognized when moving in the particular direction across the second screen and approximately when the first motion input is recognized. A synchronous slide gesture can then be determined from the recognized first and second motion inputs. | 08-25-2011 |
20120236026 | Brush, Carbon-Copy, and Fill Gestures - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 09-20-2012 |
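Several of the multi-screen abstracts above share one recognition pattern: a hold input is recognized on a first screen, and while it remains held in place, a tap or motion input on the second screen completes the gesture. The sketch below illustrates that general pattern only; it is not taken from any of the listed patents, and all names (`TouchEvent`, `recognize_gesture`, the gesture labels) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    screen: int          # which screen of the two-screen system (1 or 2)
    kind: str            # "hold", "tap", or "motion"
    held: bool = False   # True while a hold input remains held in place

def recognize_gesture(events):
    """Return a gesture name when a hold on one screen is combined with
    a tap or motion on the other screen while the hold is still held.
    Hypothetical recognizer for illustration; not any patent's method."""
    hold_screen = None
    for ev in events:
        if ev.kind == "hold" and ev.held:
            hold_screen = ev.screen
        elif hold_screen is not None and ev.screen != hold_screen:
            if ev.kind == "tap":
                return "hold-and-tap"
            if ev.kind == "motion":
                return "hold-and-drag"
    return None

# Example: hold on screen 1, then tap on screen 2 while the hold persists
events = [TouchEvent(1, "hold", held=True), TouchEvent(2, "tap")]
print(recognize_gesture(events))  # hold-and-tap
```

The same skeleton covers the hold-and-page-flip, bookmark-hold, and object-hold variants described above by swapping what the second screen's input is interpreted to do.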