Patent application number | Description | Published |
--- | --- | --- |
20130127867 | FREESTYLE DRAWING SUPPORTED BY STENCIL EDGE SHAPES - A graphical user interface displays a shape that is moveable from a first position in the graphical user interface to a second position. An input device receives freeform drawing data. The graphical user interface displays an edgeless subset of the freeform drawing data that is (i) drawn by a user at a time in which the edgeless subset of the freeform drawing data is in a different position in the graphical user interface than each of a plurality of edges of the shape and (ii) located in a predetermined region with respect to each of the plurality of edges. A processor detects an edge touching subset of the freeform drawing data that is drawn by the user at a time in which the edge touching subset of the freeform drawing data is touching at least one of the plurality of edges of the shape. | 05-23-2013 |
20130127878 | PHYSICS RULES BASED ANIMATION ENGINE - At an animation authoring component, an inputted movement of an object displayed in a graphical user interface is received. Further, at a physics animation rule engine, a physics generated movement of the object that results from a set of physics animation rules is applied to the inputted movement. In addition, at the graphical user interface, the inputted movement of the object is displayed in addition to the physics generated movement of the object. At the animation authoring component, the physics generated movement of the object in addition to the inputted movement of the object is recorded. | 05-23-2013 |
20130127910 | Drawing Support Tool - A graphical user interface displays a shape. Further, a buffer region that is adjacent to an edge of the shape is displayed at the graphical user interface. In addition, a set of drawing data located within the buffer region is received from a user input device. A first subset of the drawing data that is located in the buffer region at a predetermined distance from the edge and a second subset of the drawing data in the buffer region that is located at a distance from the edge that exceeds the predetermined distance are determined with a processor. Further, the first subset of drawing data is displayed. In addition, the second subset of drawing data is prevented from being displayed at the distance. The process also displays the second subset of drawing data at the predetermined distance. | 05-23-2013 |
20130132878 | TOUCH ENABLED DEVICE DROP ZONE - A touch enabled device includes a touch enabled graphical user interface that displays a canvas region and a drop zone region. The canvas region displays an object. The drop zone region displays an area that is distinct from the canvas region. Further, the touch enabled device includes a processor that positions the object within the drop zone upon receiving a request to move the object from the canvas to the drop zone and a batch processing command that the processor performs on the drop zone region such that the batch processing command is performed on the object within the drop zone and any other objects within the drop zone region. | 05-23-2013 |
20130132888 | USER INTERFACE FOR A TOUCH ENABLED DEVICE - A graphical user interface displays a first portion of a data file. Further, a switch indicator is displayed. In addition, a first input that has a first proximity within a range of predetermined first proximities with respect to the switch indicator is received at a processor operably connected to the graphical user interface. In addition, a second portion of the data file based on the first input is displayed at the graphical user interface. Further, a second input that has a second proximity within a range of predetermined second proximities with respect to the switch indicator is received at the processor. The range of predetermined second proximities is distinct from the range of predetermined first proximities. In addition, the graphical user interface displays a subset of the second portion of the data file based on the second input. | 05-23-2013 |
20130132907 | SHAPE PIXEL RENDERING - A list of coordinate locations of an icon is determined. Further, each of the coordinate locations is tagged with scaling data to generate a plurality of tagged coordinate locations for the icon. The scaling data indicates an automatic scaling adjustment based on a size of the icon being changed to accommodate a resolution of a display. In addition, the icon is automatically scaled based on the plurality of tagged coordinate locations for the icon and the resolution of the display to generate a scaled icon. The scaled icon is rendered in the display. | 05-23-2013 |
20130198158 | Enhanced Content and Searching Features Provided by a Linked-To Website - Methods and systems are disclosed that allow a linked-to web page to be provided using information about a linked-from web page. The linked-to web page, for example, may be provided with enhanced content, additional content, suggestion features, or searching features. Certain of the methods are useful in the context of a user using a search engine web page to search for and link to other web pages. An exemplary method can be performed by a server that provides such a linked-to web page. The server receives a request to provide the linked-to web page and parses the request to identify information, such as search terms that were entered on the search engine web page. The server can use the search terms or other information associated with the linked-from web page to determine what content should be provided or how it should be provided for the linked-to web page. | 08-01-2013 |
20130326342 | OBJECT SCALABILITY AND MORPHING AND TWO WAY COMMUNICATION METHOD - In various example embodiments, a system and method for providing scalability and morphing for transitions between states of a user interface are provided. In example embodiments, a request to transition from a departure state of a first page of the user interface to a destination state of a second page of the user interface is received. A plurality of elements and element attributes within the departure state and the destination state are identified. For each of the elements, a transition mechanism to be applied to an element to transition to the destination state is determined. The transition to the destination state is automatically generated by applying, for each of the elements, the determined transition mechanism. | 12-05-2013 |
20130335333 | EDITING CONTENT USING MULTIPLE TOUCH INPUTS - Multitouch capabilities can be used to allow a user to adjust one or more application control parameters at the same time as editing commands are provided via touch input. The control parameters may relate to how/what edit commands are provided, such as allowing for varying brush characteristics, colors, gradients, and the like used in editing graphics or other document content. Additionally or alternatively, the control parameters may relate to a design canvas or other depiction of the document, such as allowing rotation, position, or magnification of the canvas while the document is edited. | 12-19-2013 |
20140032772 | METHODS AND SYSTEMS FOR USING METADATA TO REPRESENT SOCIAL CONTEXT INFORMATION - A method includes establishing an interaction session between a plurality of devices associated with a plurality of users, respectively. Access to an asset by a first device in the interaction session is detected. Session metadata relating to the interaction session is associated with the asset. The asset may be an asset that was generated by another device during another interaction session or it may have been generated by the first device in the interaction session. | 01-30-2014 |
20140040789 | TOOL CONFIGURATION HISTORY IN A USER INTERFACE - An example device may present content within a user interface on a display screen. The user interface may support a tool that is controllable by a user to modify the content. Such a tool may be configurable to have various effects on the content. The tool may have a current configuration that specifies a current effect of the tool on the content presented in the user interface, and the current configuration may be distinct from a previous configuration that specifies a previous effect of the tool on the content. The device presents a first icon that indicates the current configuration of a tool and may present a second icon that indicates a previous configuration of the tool. The device may detect user input that indicates a request that the current configuration be replaced with the previous configuration. The device may configure the tool according to the previous configuration. | 02-06-2014 |
20140267063 | Touch Input Layout Configuration - Touch input layout creation is described. In one or more implementations, a number of touch inputs is determined that were detected through proximity to a touchscreen device. A user interface is configured to have a number of cells based on the determined number of touch inputs, the cells configured to have a size along a first axis based at least in part on an available area along the first axis within the user interface to display the cells and a size along a second axis based at least in part on a location of one or more of the touch inputs. | 09-18-2014 |
20140289747 | Methods and Systems for Using a Mobile Device for Application Input - An instance of a runtime environment at each of a first and second computing device can allow an application at the first computing device to access hardware resources of the second computing device via the runtime environment. For instance, one device can comprise a mobile device and the other device can comprise a desktop computer, a laptop computer, or a home entertainment device. The first and second instance of the runtime environment can be configured to communicate with one another through a common messaging format of the runtime environment. For example, an editing or design application at one device may use a touch-enabled display at the second device to select tools, manipulate 3-D representations, or otherwise provide input data. As another example, a game at a mobile device can use the runtime environment to provide visual and audio output using a television set-top box running the runtime environment. | 09-25-2014 |
20140327628 | PHYSICAL OBJECT DETECTION AND TOUCHSCREEN INTERACTION - An input device, such as a multifunction straight edge or a keyboard, has a recognizable contact shape when placed on a touchscreen display surface of a computing device. The contact shape of the input device can be a defined pattern of contact points, and a location and orientation of the input device on the touchscreen display surface is determinable from the defined pattern of the contact points. The input device includes an interaction module that interfaces with a companion module of the computing device. The companion module can initiate a display of an object responsive to the input device being recognized on the touchscreen display surface. The interaction module can receive a user input to the input device, and communicate the user input to the companion module of the computing device to modify the display of the object on the touchscreen display surface. | 11-06-2014 |
20140331141 | CONTEXT VISUAL ORGANIZER FOR MULTI-SCREEN DISPLAY - In various example embodiments, a system and method for context visual organization for multi-screen display are provided. In example embodiments, assets are retrieved from one or more external sources. The assets are organized into containers that are viewable across multiple display devices that function as a single display. Each of the containers includes a portion of the plurality of assets that correspond to a context of the container. The assets are displayed in their respective containers across the multiple display devices. An indication of a touch gesture applied to one of the multiple display devices to manipulate an object presented on the multiple display devices is received. An action based on the touch gesture is performed. | 11-06-2014 |
20140376887 | MOBILE DEVICE VIDEO SELECTION AND EDIT - In embodiments of mobile device video selection and edit, a mobile device includes an integrated digital camera that records video clips, and implements a video service that interfaces with the digital camera. A video capture user interface can be displayed that includes a selectable control to mark a video segment of a video clip while the video clip is being recorded or played back for viewing. A video select user interface can display portions of the video clips in a grid format with marked video segments identified by video segment selectors, which can be selected to increase or decrease the length of a marked video segment. A video arrange user interface can then display a list view of the marked video segments, as well as a shareable video compilation of the marked video segments. | 12-25-2014 |
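Several of the abstracts above describe concrete geometric rules. As one illustration, the buffer-region behavior in application 20130127910 ("Drawing Support Tool") amounts to partitioning drawing points near a shape edge: points within a predetermined distance of the edge are shown where drawn, while points farther away (but still inside the buffer region) are shown at the predetermined distance instead. The sketch below is a minimal, hypothetical rendering of that rule for a vertical edge; the function name, parameters, and the exact snapping geometry are assumptions, not the claimed implementation.

```python
def snap_to_edge(points, edge_x, snap_distance, buffer_width):
    """Apply a buffer-region snap rule near a vertical edge at x = edge_x.

    Hypothetical sketch of the rule in application 20130127910:
    - points beyond `buffer_width` from the edge are outside the buffer
      region and are left out of this rule entirely;
    - points within `snap_distance` of the edge (the "first subset")
      are displayed where they were drawn;
    - points in the buffer region but beyond `snap_distance` (the
      "second subset") are displayed at `snap_distance` from the edge.
    Returns the list of (x, y) positions at which points are displayed.
    """
    displayed = []
    for x, y in points:
        d = abs(x - edge_x)
        if d > buffer_width:
            continue  # outside the buffer region; rule does not apply
        if d <= snap_distance:
            displayed.append((x, y))  # first subset: shown as drawn
        else:
            # second subset: shown at the predetermined distance,
            # on the same side of the edge as it was drawn
            side = 1 if x >= edge_x else -1
            displayed.append((edge_x + side * snap_distance, y))
    return displayed
```

For example, with an edge at x = 0, a snap distance of 2, and a buffer width of 10, a point drawn at (6, 5) would be displayed at (2, 5), while a point at (1, 5) would be displayed unchanged.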