Entries
Document | Title | Date |
20080235628 | 3-D DISPLAY FOR TIME-BASED INFORMATION - A computer-implemented method of displaying information about first and second pluralities of time-based events, the method involving: displaying perspective representations of each of a plurality of timelines including a first timeline and a second timeline, wherein the perspective representation of the first timeline is made up of perspective images of representations of the events of the first plurality of events arrayed along the first timeline at locations in time corresponding to those events and the perspective representation of the second timeline is made up of perspective images of representations of the events of the second plurality of events arrayed along the second timeline at locations in time corresponding to those events; enabling a user to select a current time; and in response to the user selecting the current time, displaying perspective representations of a portion of each of the first and second timelines as determined by the user selected current time. | 09-25-2008 |
20080270944 | METHOD AND SYSTEM FOR CROSS-SCREEN COMPONENT COMMUNICATION IN DYNAMICALLY CREATED COMPOSITE APPLICATIONS - A method and system for cross-screen component communication in dynamically created composite applications. Meta-data in the mark-up for a source component (e.g. eXtensible Markup Language—XML information) in a dynamically created composite application includes indications of which screens target components are located on. These indications are contained in definitions of logical connections established between components referred to as “cross-page wire” definitions. Executable objects, referred to as “cross-page wire” executable objects, are generated based on the cross-page wire definitions in the source component mark-up. The cross-page wire executable objects are executed by a run-time entity, such as a “property broker” or the like, in response to a change in a property value for which the cross-page wire has been defined, in order to deliver a new value of that property to one or more target components located on screens different from the screen on which the source component is located. | 10-30-2008 |
20080270945 | Interior spaces in a geo-spatial environment - A method, apparatus, and system of interior spaces in a geo-spatial environment are disclosed. In one embodiment, the method includes providing a plurality of user profiles, each user profile in the plurality of user profiles including an associated specific geographic location, selecting a user profile in the plurality of user profiles, generating a first virtual interior space view of a first structure associated with a first specific geographic location of the selected user profile in the plurality of user profiles, and generating, with the first virtual interior space view, at least one wiki profile associated with the first virtual interior space view. The method may also include generating, with the at least one wiki profile, content appended to the at least one wiki profile. Furthermore, the method may include capturing a second virtual interior space view of a second structure associated with a second specific geographic location. | 10-30-2008 |
20080270946 | Multidimensional Structured Data Visualization Method and Apparatus, Text Visualization Method and Apparatus, Method and Apparatus for Visualizing and Graphically Navigating the World Wide Web, Method and Apparatus for Visualizing Hierarchies - A method of displaying correlations among information objects includes receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed. | 10-30-2008 |
20080288893 | Method and Device for Visualization of Information - The invention relates to a method of visualizing information about objects for the benefit of a user. The method comprises providing a substantially 2-D space with a plurality of separate locations; providing a set of objects with a plurality of objects; linking each of the objects to an associated location in the 2-D space; representing a 3-D virtual environment which is defined by the 2-D space; and at least accentuating and representing objects on the basis of altitudes in the 3-D virtual environment relative to the 2-D space. | 11-20-2008 |
20080295035 | Projection of visual elements and graphical elements in a 3D UI - A method includes determining user interface data based at least on a projection of visual element information from a projector at a first location in a three-dimensional space onto a screen object defined in the space. The screen object forms at least a portion of a user interface. The method includes determining an area of the screen object viewable by a camera positioned in a second location in the three dimensional space, and communicating the user interface data corresponding to the area to a display interface suitable for coupling to one or more displays. Apparatus and computer-readable media are also disclosed. | 11-27-2008 |
20080295036 | Display Control Apparatus, Display Method, and Computer Program - A display control apparatus includes a search unit for searching for a second content related to a first content in accordance with at least part of metadata attached to each of the first content and the second content, a generating unit for generating a three-dimensional display model, the three-dimensional display model including a first layer and a second layer, the first layer having one of a first image and a first character representing the first content arranged therewithin, and the second layer having one of a second image and a second character representing the second content arranged therewithin, and a display control unit for controlling displaying one of the first image and the first character and one of the second image and the second character using the three-dimensional display model. | 11-27-2008 |
20080307366 | REFLECTIONS IN A MULTIDIMENSIONAL USER INTERFACE ENVIRONMENT - A graphical user interface has a back surface disposed from a viewing surface to define a depth. A visualization receptacle is disposed between the back surface and the viewing surface and contains a visualization object. A reflection surface is defined such that a reflection of the visualization object is displayed on the reflection surface. | 12-11-2008 |
20090064051 | INTERACTIVE SYSTEM FOR VISUALIZATION AND RETRIEVAL OF VIDEO DATA - A cube-based three-dimensional interactive system is provided for visualization and retrieval of video data. The system includes at least one interactive cube having eight nodes, each node being linked to a specific data item. The data items on the nodes are organized in space or time. Textual information about a respective data item appears upon traversing the corresponding node, and the data item opens up upon selection. The interactive cube can be expanded to include more than eight nodes, such as 12 or 18 nodes. The system can also have multiple cubes that are connected to form a one-level, multi-level or multi-dimensional hypercube. | 03-05-2009 |
20090094556 | USER DEFINED SCENARIOS IN A THREE DIMENSIONAL GEO-SPATIAL SYSTEM - A method, apparatus, and article of manufacture provide the ability to store user defined scenarios in a three-dimensional system. A 3D view of a real world scene is displayed, using a three-dimensional (3D) graphics application. Plug-ins are installed into the 3D graphics application. A user selects a subset of the plug-ins, defines settings for the subset of plug-ins, and defines a visualization trait for each plug-in in the subset. The user associates an identification of the selected subset, the settings, and the visualization trait with a scenario bookmark that is saved. The bookmark can be selected by a user to display a visualization of a scenario based on the selected subset, settings, and visualization trait. | 04-09-2009 |
20090094557 | SUN-SHADOW SIMULATION IN A GEOSPATIAL SYSTEM - A method, apparatus, and article of manufacture provide the ability to display a sun and shadow simulation in a 3D system. A 3D view of a real world scene is displayed, using a 3D graphics application, on a display device. A plug-in is installed into the application. A calendar period (e.g., a month, day, and year) is defined by the user. A timeline arc is displayed with the calendar period defining a radius of the arc, and starting and stopping endpoints of the timeline arc defining an interval of time during the calendar period. A timeline slider is displayed on the arc that indicates a time of day within the calendar period. A visualization is displayed, in the 3D view, of shadows cast by a sun on objects in the 3D view. A position of the sun is based on the calendar period and the time of day. | 04-09-2009 |
20090094558 | Viewport overlays to expose alternate data representations - A method, apparatus, and article of manufacture provide the ability to display (using a 3D graphics application) an overlaid window containing an alternate data representation in a three-dimensional system. A first 3D view of a real world scene (that includes a first set of data layers) is displayed on a display device. The user selects a set of entities that together define an alternate representation of the first 3D view. The alternate representation is a second set of data layers that is different from the first set of data layers. An overlaid window is displayed on top of the first 3D view and displays the alternate representation. | 04-09-2009 |
20090113348 | METHOD AND APPARATUS FOR A USER INTERFACE WITH PRIORITY DATA - A system and corresponding method for providing a 3-dimensional (3-D) user interface displays images in a 3-D coordinate system. The method includes receiving user data input information. The method also compares the user data input information to frequently used terms and generates priority information based on the comparison. The generated priority information is displayed as holographic images in a 3-D coordinate system. Sensors are configured to sense user interaction within the 3-D coordinate system, so that a processor may receive user interaction information including the selected priority information from the sensors. The sensors are able to provide information to the processor that enables the processor to correlate user interaction with images in the 3-D coordinate system. The system may be used for interconnecting or communicating between two or more components connected to an interconnection medium (e.g., a bus) within a single computer or digital data processing system. | 04-30-2009 |
20090265667 | Techniques for Providing Three-Dimensional Virtual-World Presentations - A technique for providing a three-dimensional (3D) virtual-world (VW) presentation includes selecting a 3D real-world (RW) presentation. One or more messages, including 3D VW presentation steps that are associated with the 3D RW presentation, are then received at a VW presentation object that includes a VW presentation root script. The one or more messages are passed from the VW presentation root script to VW relay scripts (RSs) included in respective VW presentation objects associated with the 3D VW presentation. The one or more messages are then broadcast from the VW RSs to VW presentation execution scripts (PESs) that are associated with the 3D VW presentation. Finally, the 3D VW presentation is provided based on executed ones of the VW PESs. | 10-22-2009 |
20090282369 | System and Method for Multi-Dimensional Organization, Management, and Manipulation of Remote Data - The Quantum Matrix system is a multi-dimensional, multi-threaded exchange environment for data organization, management, and manipulation. Data is organized into a multi-dimensional structure of nodes. Nodes may represent data, absence of data, or another set of nodes. The multi-dimensional structure or portions of it can be automatically created from a file system. One or more associations are also defined for the multi-dimensional structure. An association indicates a relationship between a node and another node, data, or a set of nodes. The multi-dimensional structure is then displayed three-dimensionally and navigated. Relational logic, Boolean algebra, or a scripting language can be applied to the nodes, data, and associations to produce a resultant set of nodes. Furthermore, portions of the multi-dimensional structure can be isolated with the use of planes to ease navigation. In addition, the Quantum Matrix system may have a client-server architecture, with the client running on a mobile device. | 11-12-2009 |
20090300551 | INTERACTIVE PHYSICAL ACTIVITY AND INFORMATION-IMPARTING SYSTEM AND METHOD - A method of imparting information includes interactively selectively displaying information to a person or user, based on the physical location of the person relative to a display screen upon which the information is displayed. A tracking system is operatively coupled to the display that selectively displays the information. The tracking system tracks the physical location of the person, and displays different information depending upon the physical location of the person. The display may include displays of virtual objects, such as cubes or other shapes. The view of the objects may be varied within the display as the user moves within physical space, varying the apparent position of the virtual objects as the user moves. The varying of the apparent position of the virtual objects may reveal information that was not visible to the user in other virtual positions (corresponding to other physical positions of the user). | 12-03-2009 |
20090327969 | SEMANTIC ZOOM IN A VIRTUAL THREE-DIMENSIONAL GRAPHICAL USER INTERFACE - A GUI adapted for use with portable electronic devices such as media players is provided in which interactive objects are arranged in a virtual three-dimensional space (i.e., one represented on a two-dimensional display screen). The user manipulates controls on the player to maneuver through the 3-D space by zooming and steering to objects of interest which can represent various types of content, information or interactive experiences. The 3-D space mimics real space in that close objects appear larger to the user while distant objects appear smaller. The close objects will typically represent higher level content, information, or interactive experiences while the distant objects represent more detailed content, information, or experiences. This GUI navigation feature, referred to as a semantic zoom, makes it easy for the user to maintain a clear understanding of his location within the 3-D space at all times. | 12-31-2009 |
20100058247 | METHODS AND SYSTEMS OF A USER INTERFACE - One embodiment of the application provides a method including segmenting a 3D polygon mesh into a plurality of widgets, defining a state variable for each widget, defining a behavior for each widget, and assembling a three-dimensional user interface from the widgets, the state variables, and the behaviors. | 03-04-2010 |
20100107127 | Apparatus and method for manipulating virtual object - Disclosed is a virtual object manipulating apparatus and method. The virtual object manipulating apparatus connects a virtual object in a 3D virtual world with a virtual object manipulating apparatus, senses a grab signal from a user, and determines a grab type of the virtual object based on the sensed grab signal and the connection between the virtual object and the virtual object manipulating apparatus. | 04-29-2010 |
20100125812 | METHOD AND APPARATUS FOR MARKING A POSITION OF A REAL WORLD OBJECT IN A SEE-THROUGH DISPLAY - A method for marking a position of a real world object on a see-through display is provided. The method includes capturing an image of a real world object with an imaging device. A viewing angle and a distance to the object are determined. A real world position of the object is calculated based on the viewing angle to the object and the distance to the object. A location on the see-through display that corresponds to the real world position of the object is determined. A mark is then displayed on the see-through display at the location that corresponds to the real world object. | 05-20-2010 |
20100138792 | NAVIGATING CONTENT - One embodiment of the invention involves a computer-implemented method in which information obtained from a uniform resource locator is converted into at least one texture. The texture is mapped onto a surface of a three-dimensional object located in the virtual three-dimensional space thereby forming a three-dimensional navigation mechanism. | 06-03-2010 |
20100169836 | INTERFACE CUBE FOR MOBILE DEVICE - A computing device presents, on a screen, a three-dimensional rendering of an interface cube that includes a representation of a user's contacts displayed on at least one surface of the interface cube. The computing device receives a communication item from a peripheral application, where the communication item is associated with a particular contact of the user's contacts. The computing device creates a graphic based on the communication item and displays the graphic at a location on the representation of the user's contacts that corresponds to the location of the particular contact within a sequence of the user's contacts. | 07-01-2010 |
20100169837 | Providing Web Content in the Context of a Virtual Environment - Information URLs may be associated with three-dimensional objects in a three-dimensional virtual environment. When a URL is selected, an overlay web rendering engine renders a web page associated with the URL over the object in the three-dimensional virtual environment. The web page may include rich content, interactive content, or any other type of web content supported by the user's local browser and browser plugins. The user may interact with the content in the overlay web rendering engine to obtain successive layers of content or to affect the object in the virtual environment. The web page is rendered with a transparent background so that the three-dimensional content of the virtual environment continues to be visible through the web page and provides context for the overlaid content. Information URLs may be used to provide information about objects, avatars, or the virtual environment itself. | 07-01-2010 |
20100169838 | ANALYSIS OF IMAGES LOCATED WITHIN THREE-DIMENSIONAL ENVIRONMENTS - Images are analyzed within a 3D environment that is generated based on spatial relationships of the images and that allows users to experience the images in the 3D environment. Image analysis may include ranking images based on user viewing information, such as the number of users who have viewed an image and how long an image was viewed. Image analysis may further include analyzing the spatial density of images within a 3D environment to determine points of user interest. | 07-01-2010 |
20100287510 | ASSISTIVE GROUP SETTING MANAGEMENT IN A VIRTUAL WORLD - Systems, methods and articles of manufacture are disclosed for presenting a visual cue to a user in a virtual world. A cursor cycle allows the user to specify an avatar of focus by cycling through avatars in the virtual world. Visual cues of an avatar of focus are presented to the user. The user may define a cursor mask to include specific avatars. Visual cues of the cursor mask or of all avatars may be summarized and presented to the user. The user may also specify a threshold for a visual cue. A visual cue that is detected to exceed the specified threshold is presented to the user. | 11-11-2010 |
20100287511 | METHOD AND DEVICE FOR ILLUSTRATING A VIRTUAL OBJECT IN A REAL ENVIRONMENT - The invention relates to a method for representing a virtual object in a real environment, having the following steps: generating a two-dimensional representation of a real environment by means of a recording device, ascertaining a position of the recording device relative to at least one component of the real environment, segmenting at least one area of the real environment in the two-dimensional image on the basis of non-manually generated 3D information for identifying at least one segment of the real environment in distinction to a remaining part of the real environment while supplying corresponding segmentation data, and merging the two-dimensional image of the real environment with the virtual object or, by means of an optical, semitransparent element, directly with reality with consideration of the segmentation data. The invention permits any collisions of virtual objects with real objects that occur upon merging with a real environment to be represented in a way largely close to reality. | 11-11-2010 |
20100299640 | TRACKING IN A VIRTUAL WORLD - A status update of a real world entity is received. A previous status of a virtual world entity is transformed into a current status of the virtual world entity based on the status update of the real world entity. The virtual world entity may be part of a virtual world and may correspond to the real world entity in a real world. Further, the virtual world entity and the virtual world may be generated by a computer. | 11-25-2010 |
20100333037 | DIORAMIC USER INTERFACE HAVING A USER CUSTOMIZED EXPERIENCE - The present disclosure teaches a solution for a user-customizable abstraction layer for tailoring all operating system, application, and web-based interfaces. The interface differs from conventional user interfaces by presenting a dynamic interface which can enable user access across all domains and applications with which the user can interact. The interface can be built dynamically as a user interacts with clients (e.g., devices/applications). Clients can utilize common usage patterns, installed applications, installed themes, personal information, and the like, to create a highly customized, adaptive, user-designed and modifiable interface. | 12-30-2010 |
20110022988 | PROVIDING USER INTERFACE FOR THREE-DIMENSIONAL DISPLAY DEVICE - A 3D display device is provided. The 3D display device provides a 3D preview image to be displayed and a control menu for setting various parameters for the 3D preview image to a user, and thus enables the user to optimize the 3D parameters before viewing 3D content and then view the 3D content. | 01-27-2011 |
20110035707 | STEREOSCOPIC DISPLAY DEVICE AND DISPLAY METHOD - A display control device and method to generate stereoscopic images in a graphical user interface (GUI) to be displayed on a display panel. A pop-out amount unit or operation calculates a pop-out amount indicating a perceived distance that at least a portion of the stereoscopic image pops out into space from the display panel, and a display controller or control operation controls display of the stereoscopic image with the calculated amount of pop-out on the display panel. | 02-10-2011 |
20110113382 | ACTIVITY TRIGGERED PHOTOGRAPHY IN METAVERSE APPLICATIONS - A system, method and program product for collecting image data from within a metaverse. A system is provided that includes: a graphical user interface (GUI) for allowing a user to install and administer a camera within the metaverse; a system for collecting image data from the camera based on an occurrence of a triggering event associated with the camera; and a system for storing or delivering the image data for the user. | 05-12-2011 |
20110119631 | METHOD AND APPARATUS FOR OPERATING USER INTERFACE BASED ON USER'S VISUAL PERSPECTIVE IN ELECTRONIC DISPLAY DEVICE - A method and apparatus for operating a three-dimensional user interface in an electronic display device, according to a user's visual perspective from which a user looks at the device, are provided. In the method, the apparatus activates a three-dimensional mode in response to a user's request, and determines the user's visual perspective according to a predefined user's input received in the three-dimensional mode. Then the apparatus displays a user interface converted according to the user's visual perspective. | 05-19-2011 |
20110126159 | GUI PROVIDING METHOD, AND DISPLAY APPARATUS AND 3D IMAGE PROVIDING SYSTEM USING THE SAME - A graphical user interface (GUI) providing method, a display apparatus and a three-dimensional (3D) image providing system using the same are provided. The GUI providing method includes: generating a first GUI for changing settings for a 3D image and a second GUI for changing an environment; and outputting the first GUI and the second GUI. Thus, the settings for the 3D image can be changed more easily and conveniently. | 05-26-2011 |
20110126160 | METHOD OF PROVIDING 3D IMAGE AND 3D DISPLAY APPARATUS USING THE SAME - A method of providing a three-dimensional (3D) image and a 3D display apparatus applying the same are provided. If a predetermined instruction is input in 2D mode, display mode is changed to 3D mode. A predetermined format is applied to an incoming image, and the resultant image is displayed in 3D mode. If the predetermined instruction is input again in 3D mode, another format is applied to the incoming image and the resultant image is displayed. As a result, a viewer can conveniently select a 3D image format of the incoming image. | 05-26-2011 |
20110131536 | GENERATING AND RANKING INFORMATION UNITS INCLUDING DOCUMENTS ASSOCIATED WITH DOCUMENT ENVIRONMENTS - Embodiments described herein are directed to forming information units. Digital documents associated with collaborative navigation behavior information can be identified, and an information unit can be generated using transition probabilities calculated from the collaborative navigation information. The information unit includes at least a subset of the digital documents identified in the collaborative navigation behavior information. A rank of the information unit, based on the collaborative navigation behavior information, can be calculated. | 06-02-2011 |
20110138336 | METHOD FOR DISPLAYING BROADCASTING DATA AND MOBILE TERMINAL THEREOF - A method for displaying broadcasting data and a mobile terminal thereof are discussed, wherein the method, according to an embodiment, includes: receiving broadcasting data and displaying the broadcasting data on a display of a mobile terminal; turning on a switching panel unit mounted on the display and displaying the broadcasting data as a 3-D image; generating a first proximity signal through a proximity sensor of the mobile terminal to enter a broadcasting data display change preparatory step; and changing the display of the broadcasting data responsive to a user detection signal generated by the proximity sensor or a touch sensor of the mobile terminal. | 06-09-2011 |
20110197167 | ELECTRONIC DEVICE AND METHOD FOR PROVIDING GRAPHICAL USER INTERFACE (GUI) - An electronic device and a method for providing a Graphical User Interface (GUI) are disclosed. The electronic device includes a storage configured to store a first set of pixel data and a second set of pixel data; a controller configured to detect a request for providing a GUI for a recording reservation and to access the first set of pixel data and the second set of pixel data in response to detecting the request; and a formatter configured to convert the format of the first set of pixel data and the second set of pixel data to an output 3D format. | 08-11-2011 |
20110202885 | Apparatus and method for comparing, sorting and presenting objects - An apparatus for comparing, sorting and presenting objects, comprising a storage module ( | 08-18-2011 |
20110231802 | ELECTRONIC DEVICE AND METHOD FOR PROVIDING USER INTERFACE THEREOF - An electronic device and a method for providing a User Interface (UI) are disclosed. The method for providing a UI in an electronic device according to the present invention may include: receiving a request for provision of the UI; collecting information for configuring the requested UI; classifying the collected information according to a first criterion so as to generate a plurality of pages; hierarchizing the generated pages and arranging the layers according to a second criterion so as to form a multilayer UI; and providing the formed multilayer UI as the requested UI. | 09-22-2011 |
20110246949 | Methods and System for Modifying Parameters of Three Dimensional Objects Subject to Physics Simulation and Assembly - A set of atomic three dimensional objects that can be joined together in a workspace to form one or more complex three dimensional objects, each atomic object includes one or more object join features parameterized to enable joining with one or more parameterized join features of another atomic object in the set of objects, and a shape that may be modified according to one or more parametrically defined constraint attributes. A user may reshape and or resize one or more of the atomic three dimensional objects prior to joining the three dimensional objects together at the appropriate parameterized join features to form one or more of the complex three dimensional objects. | 10-06-2011 |
20110246950 | 3D MOBILE USER INTERFACE WITH CONFIGURABLE WORKSPACE MANAGEMENT - Systems and methods of a 3D mobile user interface with configurable workspace management are disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, of a three-dimensional, multi-layer user interface of a mobile device in a mobile network. User environment may include one or more layers or levels of applications, services, or accounts that are all easily accessible to and navigable by the user. For example, an indicator can be used to access a workspace in 3D representing a category or grouping of services or applications for the user. The user can customize or create a unique, non-mutually exclusive grouping, aggregation, or category of applications, services, accounts, or items. The grouping of indicators can be used to swiftly and efficiently navigate to a desired application, service, account or item, in a 3D-enabled user environment. | 10-06-2011 |
20110296352 | ACTIVE CALIBRATION OF A NATURAL USER INTERFACE - A system and method are disclosed for periodically calibrating a user interface in a NUI system by performing periodic active calibration events. The system includes a capture device for capturing position data relating to objects in a field of view of the capture device, a display and a computing environment for receiving image data from the capture device and for running applications. The system further includes a user interface controlled by the computing environment and operating in part by mapping a position of a pointing object to a position of an object displayed on the display. The computing environment periodically recalibrates the mapping of the user interface while the computing environment is running an application. | 12-01-2011 |
20110296353 | Method and system implementing user-centric gesture control - A user-centric method and system to identify user-made gestures to control a remote device images the user using a three-dimensional imaging system, and defines at least one user-centric three-dimensional detection zone dynamically sized appropriately for the user, who is free to move about. Gestures made within the detection zone are compared to a library of stored gestures, and the thus-identified gesture is mapped to an appropriate control command signal coupleable to the remote device. The method and system also provide for a first user to hand off control of the remote device to a second user. | 12-01-2011 |
20110302535 | Method for selection of an object in a virtual environment - The invention relates to a method for selection of a first object in a first virtual environment, the first object being represented in the first environment with a size of value less than a threshold value. In order to make the selection of the first object more user-friendly, the method comprises steps for: | 12-08-2011 |
20120011474 | ANALYSIS OF COMPLEX DATA OBJECTS AND MULTIPLE PARAMETER SYSTEMS - A computer facilitates multiple-parameter data analysis through special visualization and navigation methods. Data to be analyzed is loaded from an external source; the computer displays the data in response to user input using a variety of methods including data tables, slices of data spaces, hierarchically navigated data spaces, dynamic slice tables, filters, sorting, color-mapping, numerical operations, and other methods. | 01-12-2012 |
20120023453 | Device, Method, and Graphical User Interface for Navigating Through a Hierarchy - A multifunction device displays a view of a top level of a hierarchical user interface. The hierarchical user interface has a plurality of levels including the top level and one or more lower levels. In response to detecting a first input, the device displays a view of at least one of the lower levels and at least a predefined portion of the view of the top level. While displaying a view of a respective lower level and concurrently displaying at least the predefined portion of the view of the top level, the device detects a second input. When the second input corresponds to a request to enter a content modification mode for the respective lower level, the device enters the content modification mode for the respective lower level and ceases to display the predefined portion of the view of the top level. | 01-26-2012 |
20120047465 | Information Processing Device, Information Processing Method, and Program - There is provided an information processing device including an acquisition section configured to acquire a curved movement of a body of a user as an operation, a display control section configured to display an object in a virtual three-dimensional space, and a process execution section configured to execute a process on the object based on the acquired operation. The object may be arranged on a first curved plane based on a virtual position of the user set in the virtual three-dimensional space, the first curved plane corresponding to the curved movement. | 02-23-2012 |
20120047466 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing device including an acquisition section configured to acquire an operation vector based on a movement of a body part of a user, a correction section configured to correct a direction of the acquired operation vector, and a process execution section configured to execute a process in accordance with the corrected operation vector. | 02-23-2012 |
20120089949 | METHOD AND COMPUTING DEVICE IN A SYSTEM FOR MOTION DETECTION - A computing device in a system for motion detection comprises an image processing device to determine a motion of an object of interest, and a graphical user interface (GUI) module to drive a virtual role based on the motion determined by the image processing device. The image processing device comprises a foreground extracting module to extract a foreground image from each of a first image of the object of interest taken by a first camera and a second image of the object of interest taken by a second camera, a feature point detecting module to detect feature points in the foreground image, a depth calculating module to calculate the depth of each of the feature points based on disparity images associated with the each feature point, the depth calculating module and the feature point detecting module identifying a three-dimensional (3D) position of each of the feature points, and a motion matching module to identify vectors associated with the 3D positions of the feature points and determine a motion of the object of interest based on the vectors. | 04-12-2012 |
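The depth-calculation step described in the abstract above recovers the depth of each feature point from its disparity between the two camera images. The following sketch illustrates that standard stereo relationship (depth = focal length x baseline / disparity) under a pinhole camera model; the function names and the calibration values are illustrative, not taken from the patent.

```python
# Illustrative sketch of stereo depth from disparity (pinhole model).
# focal_length_px, baseline_m, and the disparities below are assumed values.

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Return depth (metres) of a feature point given its stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_length_px * baseline_m / disparity_px

def feature_point_3d(x_px, y_px, disparity_px, focal_length_px, baseline_m, cx, cy):
    """Back-project a pixel with known disparity to a 3D point (x, y, z)."""
    z = depth_from_disparity(disparity_px, focal_length_px, baseline_m)
    x = (x_px - cx) * z / focal_length_px
    y = (y_px - cy) * z / focal_length_px
    return (x, y, z)

# A point with larger disparity lies closer to the cameras.
near = depth_from_disparity(40.0, 700.0, 0.1)   # 1.75 m
far = depth_from_disparity(10.0, 700.0, 0.1)    # 7.0 m
```

The 3D positions so obtained are what a motion-matching stage, as in the abstract, would turn into vectors describing the object's motion.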
20120102435 | STEREOSCOPIC IMAGE REPRODUCTION DEVICE AND METHOD FOR PROVIDING 3D USER INTERFACE - A stereoscopic image reproduction device for providing a 3D user interface includes a UI generator which generates a user interface, a depth information processor which generates a 3D depth for the user interface, and a formatting unit which generates a 3D user interface for the user interface by using the 3D depth. The depth information processor may be integrated with the formatting unit. Various factors used to generate 3D depth perception include at least any one of blur, textual gradient, linear perspective, shading, color, brightness, and chroma, which results in a 3D-type user interface (UI) being shown on a stereoscopic image display. | 04-26-2012 |
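The abstract above lists blur, brightness, and related factors as cues for generating 3D depth perception. A minimal sketch of applying two such cues, assuming a simple linear model (the patent does not specify the cue formulas), might look like this:

```python
# Hypothetical depth-cue mapping: dim and blur a UI element as its assigned
# depth grows. The linear model and constants are assumptions for illustration.

def depth_cues(depth, max_depth=10.0, base_brightness=1.0, max_blur_px=8.0):
    """Return (brightness, blur radius in px) for a UI element at a depth."""
    t = max(0.0, min(depth / max_depth, 1.0))   # clamp to [0, 1]
    return (base_brightness * (1.0 - 0.5 * t), max_blur_px * t)

near_cues = depth_cues(0.0)    # (1.0, 0.0): bright and sharp
far_cues = depth_cues(10.0)    # (0.5, 8.0): dimmed and blurred
```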
20120151416 | CONTROLLING THREE-DIMENSIONAL VIEWS OF SELECTED PORTIONS OF CONTENT - Some embodiments of the inventive subject matter are directed to presenting a first portion of content and a second portion of content in a two-dimensional view via a graphical user interface and detecting an input associated with one or more of the first portion of the content and the second portion of the content. Some embodiments are further directed to selecting the first portion of the content in response to the detecting of the input, and changing the presenting of the first portion of the content from the two-dimensional view to a three-dimensional view in response to the selecting the first portion of the content. Some embodiments are further directed to continuing to present the second portion of the content in the two-dimensional view while changing the presenting of the first portion of the content to the three-dimensional view. | 06-14-2012 |
20120174037 | Media Content User Interface Systems and Methods - Exemplary media content user interface systems and methods are disclosed herein. An exemplary method includes a media content access subsystem displaying a plurality of display elements representative of a plurality of media content instances and that flow through a graphical representation of a water cycle in accordance with one or more flow heuristics, detecting a user interaction, and dynamically adjusting the flow of the one or more display elements in accordance with the user interaction. Corresponding systems and methods are also disclosed. | 07-05-2012 |
20120180000 | METHOD AND SYSTEM FOR SIMULATING THREE-DIMENSIONAL OPERATING INTERFACE - A method and a system for simulating a three-dimensional (3D) operating interface are provided. The method includes defining a partition line to partition a display frame of a screen into a first area and a second area, and defining a size of a unit grid to establish a first grid plane and a second grid plane in the first area and the second area respectively, the first grid plane and the second grid plane forming a simulated 3D grid space. The method also includes taking the unit grid as a unit to define an object size and an initial grid coordinate of an object. The initial grid coordinate is on one of the first and the second grid planes. The method further includes mapping out a simulated 3D space in the simulated 3D grid space for displaying the object according to the initial grid coordinate and the object size. | 07-12-2012 |
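The method summarized above partitions the display frame at a line, tiles each resulting area with unit grids, and then places objects by grid coordinate and size in grid units. A hedged sketch of those steps, with all names and the grid-to-pixel mapping assumed for illustration, follows:

```python
# Sketch of the grid-partition steps described above, assuming a horizontal
# partition line and a pixel-addressed frame. Names are illustrative.

def build_grid_planes(frame_w, frame_h, partition_y, unit):
    """Split the frame at partition_y and tile each area with unit grids.

    Returns (cols, rows) for the first (upper) and second (lower) grid
    planes, which together form the simulated 3D grid space.
    """
    first_plane = (frame_w // unit, partition_y // unit)
    second_plane = (frame_w // unit, (frame_h - partition_y) // unit)
    return first_plane, second_plane

def object_bounds(initial_grid_coord, object_size, unit):
    """Map an initial grid coordinate and a size in grid units to pixel bounds."""
    gx, gy = initial_grid_coord
    w_units, h_units = object_size
    return (gx * unit, gy * unit, (gx + w_units) * unit, (gy + h_units) * unit)

first, second = build_grid_planes(800, 600, 400, 50)
# first plane: 16 x 8 cells; second plane: 16 x 4 cells
```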
20120192114 | THREE-DIMENSIONAL, MULTI-DEPTH PRESENTATION OF ICONS ASSOCIATED WITH A USER INTERFACE - A three-dimensional display presents a plurality of icons that are associated with a user interface. These icons include at least a first icon presented at a first depth of presentation and at least a second icon presented at a second, different depth of presentation. By one approach this first icon is available for interaction by an input component of the user interface while the second icon is unavailable for interaction by the input component of the user interface. The aforementioned first depth of presentation may substantially coincide with a surface (for example, a touch-sensitive display) of the corresponding electronic device. So configured, the first icon (which is presently available for selection) appears at a depth that coincides with that surface. This approach can serve to facilitate three-dimensional presentation of an icon based on whether it is available for interaction via an input component of a user interface. | 07-26-2012 |
20120216149 | METHOD AND MOBILE APPARATUS FOR DISPLAYING AN AUGMENTED REALITY - A mobile apparatus and method for displaying an Augmented Reality (AR) in the mobile apparatus. The mobile apparatus captures an image of a current environment of the mobile apparatus, displays the image, detects mapping information corresponding to the current environment from among mapping information stored in the mobile apparatus, maps a three-dimensional (3D) Graphical User Interface (GUI) of detected mapping information onto the displayed image, based on a relative location relationship between the detected mapping information, and adjusts a display status of the 3D GUI, while maintaining the relative location relationship between the detected mapping information. | 08-23-2012 |
20120233573 | TECHNIQUES TO PRESENT HIERARCHICAL INFORMATION USING ORTHOGRAPHIC PROJECTIONS - Techniques to present hierarchical information as orthographic projections are described. An apparatus may comprise an orthographic projection application arranged to manage a three dimensional orthographic projection of hierarchical information. The orthographic projection application may comprise a hierarchical information component operative to receive hierarchical information representing multiple nodes at different hierarchical levels, and parse the hierarchical information into a tree data structure, an orthographic generator component operative to generate a graphical tile for each node, arrange graphical tiles for each hierarchical level into graphical layers, and arrange the graphical layers in a vertical stack, and an orthographic presentation component operative to present a three dimensional orthographic projection of the hierarchical information with the stack of graphical layers each having multiple graphical tiles. Other embodiments are described and claimed. | 09-13-2012 |
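The pipeline described above parses hierarchical information into a tree, creates a graphical tile per node, and groups tiles by hierarchical level into layers stacked vertically. The sketch below shows only the grouping-by-level step; the parent-map input and the use of node names as stand-ins for graphical tiles are assumptions for illustration.

```python
# Hedged sketch: group nodes of a hierarchy into per-level "layers", one list
# per hierarchical level, ready to be stacked into an orthographic projection.

from collections import defaultdict

def layers_from_hierarchy(nodes):
    """nodes: dict of name -> parent name (None for the root).

    Returns a list of layers, each a sorted list of node names at that level.
    """
    def level(name):
        depth = 0
        while nodes[name] is not None:
            name = nodes[name]
            depth += 1
        return depth

    stack = defaultdict(list)
    for name in nodes:
        stack[level(name)].append(name)
    return [sorted(stack[d]) for d in sorted(stack)]

tree = {"root": None, "a": "root", "b": "root", "a1": "a", "a2": "a"}
layers = layers_from_hierarchy(tree)
# layers: [["root"], ["a", "b"], ["a1", "a2"]]
```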
20120240084 | Graphic User Interface for Interactions with Datasets and Databases and a Method to Manage Information Units - Methods for data visualisation, browsing, entry, modification, querying, processing, storage and transfer are described herein. | 09-20-2012 |
20120246599 | INTUITIVE DATA VISUALIZATION METHOD - A computer software program, method and system has a data visualization scheme in the form of plural identifiable virtual characters in a familiar virtual environment that is relevant for the characters and in which the characters act in the context of the environment and in a manner that is indicative of the data or data set portrayed by each character. From the actions and interactions of the virtual characters in the context of the virtual environment, information about the nature and interactions of the data and data sets is quickly and intuitively appreciated by a viewer. | 09-27-2012 |
20120284670 | ANALYSIS OF COMPLEX DATA OBJECTS AND MULTIPLE PARAMETER SYSTEMS - A computer facilitates multiple-parameter data analysis through special visualization and navigation methods. Data to be analyzed is loaded from an external source; the computer displays the data in response to user input using a variety of methods including data tables, slices of data spaces, hierarchically navigated data spaces, dynamic slice tables, filters, sorting, color-mapping, numerical operations, and other methods. | 11-08-2012 |
20120290987 | System and Method for Virtual Object Placement - A computer system and method according to the present invention can receive multi-modal inputs such as natural language, gesture, text, sketch and other inputs in order to manipulate graphical objects in a virtual world. The components of an agent as provided in accordance with the present invention can include one or more sensors, actuators, and cognition elements, such as interpreters, executive function elements, working memory, long term memory and reasoners for object placement approach. In one embodiment, the present invention can transform a user input into an object placement output. Further, the present invention provides, in part, an object placement algorithm, along with the command structure, vocabulary, and the dialog that an agent is designed to support in accordance with various embodiments of the present invention. | 11-15-2012 |
20120297345 | Three-Dimensional Animation for Providing Access to Applications - A three-dimensional animation for providing access to applications is described. In some implementations, a three-dimensional multi-level dock is displayed. The multi-level dock can be animated to appear to slide into view on a graphical user interface in response to user input. The levels of the multi-level dock can be configured to display selectable graphical objects representing applications available on a computing device. A user can select a graphical object to invoke a corresponding application. The three-dimensional multi-level dock can be animated to slide out of view on the graphical user interface in response to the selection of an application object or in response to other user input. | 11-22-2012 |
20120304126 | THREE-DIMENSIONAL GESTURE CONTROLLED AVATAR CONFIGURATION INTERFACE - A method for controlling presentation to a user of a primary user experience of a software application is provided. The method includes displaying a third-person avatar in a 3D virtual scene that defines a user interface for controlling presentation of the primary user experience. The method further includes sensing controlling movements of the user within a physical space in which the user is located and causing display of controlled movements of the third-person avatar within the 3D virtual scene so that the controlled movements visually replicate the controlling movements. The method further includes detecting a predefined interaction of the third-person avatar with a user interface element displayed in the 3D virtual scene, and controlling presentation of the primary user experience in response to detecting the predefined interaction. | 11-29-2012 |
20120304127 | Information Presentation in Virtual 3D - A method, system and program product for assisting a presentation owner in creating and presenting information to audience users in a virtual 3D cyclorama-like environment. A presentation object tool provides behavior in the cyclorama object to assist the presentation owner in resolving graphic objects into the cyclorama and in placing information onto the graphic objects. The presentation object tool also provides behavior in the graphic objects to allow the presentation owner to expand a graphic object into a larger viewing size, to increment and decrement the placement of graphic objects within the cyclorama's presentation space, and to place an expanded graphic object into a home viewing position for presentation to audience users. | 11-29-2012 |
20120304128 | THREE-DIMENSIONAL MENU SYSTEM USING MANUAL OPERATION TOOLS - Disclosed is an augmented reality-based three-dimensional menu system using manual operation tools. According to the present invention, the three-dimensional menu system comprises: a display device; at least one pair of manual operation tools which are manually operated by the user, and are in a hexahedral shape; an image acquisition device which acquires images for the manual operation tools; and a menu augmentation unit which tracks the manual operation tools from the acquired images, and augments menu items in the vicinity of the manual operation tools of the acquired images, thereby outputting the augmented menu items to the display device. | 11-29-2012 |
20130007669 | SYSTEM AND METHOD FOR EDITING INTERACTIVE THREE-DIMENSION MULTIMEDIA, AND ONLINE EDITING AND EXCHANGING ARCHITECTURE AND METHOD THEREOF - A system and method are provided to edit interactive three-dimensional multimedia. A user interface of the system is provided with an event level template that includes event series levels with multiple event developing points. Through the user interface, multiple interactive events related to a first character of the event developing point are edited. Through a three-dimensional engine, interactive relevances are built up between interactive events and multiple materials inside one or more database. When the interactive three-dimensional multimedia with multiple materials is output, the interactive events corresponding to the event developing points are performed according to a user command. An online editing and exchanging method integrated with the system and method is also provided to share pre-edited templates on an exchange server; each of the pre-edited templates is extracted from an interactive three-dimensional multimedia pre-edited by the system and method. | 01-03-2013 |
20130024819 | SYSTEMS AND METHODS FOR GESTURE-BASED CREATION OF INTERACTIVE HOTSPOTS IN A REAL WORLD ENVIRONMENT - Systems and methods provide for gesture-based creation of interactive hotspots in a real world environment. A gesture made by a user in a three-dimensional space in the real world environment is detected by a motion capture device such as a camera, and the gesture is then identified and interpreted to create a “hotspot,” which is a region in three-dimensional space through which a user interacts with a computer system. The gesture may indicate that the hotspot is anchored to the real world environment or anchored to an object in the real world environment. The functionality of the hotspot is defined in order to identify the type of gesture which will initiate the hotspot and associate the activation of the hotspot with an activity in the system, such as control of an application on a computer or an electronic device connected with the system. | 01-24-2013 |
20130091471 | VISUAL SEARCH AND THREE-DIMENSIONAL RESULTS - Methods, systems, graphical user interfaces, and computer-readable media for visually searching and exploring a set of objects are provided. A computer system executes a method that generates three-dimensional representations or two-dimensional representations for a set of objects in response to a user interaction with an interface that displays the three-dimensional representations or the two-dimensional representations. The interface includes filter controls, sorting controls, and classification controls, which are dynamically altered based on the content of a user query or the attributes of the objects in the three-dimensional representations or two-dimensional representations. | 04-11-2013 |
20130132909 | METHOD AND APPARATUS FOR DISPLAYING A POLYHEDRAL USER INTERFACE - Provided is a method and apparatus for providing a three-dimensional (3D) polyhedral user interface. A first polyhedron may be formed of a plurality of blocks which are mapped with a plurality of pieces of information. A user may manipulate rotation of some of the blocks to generate a second polyhedron. | 05-23-2013 |
20130174098 | METHOD AND RECORDED MEDIUM FOR PROVIDING 3D INFORMATION SERVICE - A method of providing a 3D information service at a user terminal includes: receiving a first request of a user for displaying information; and displaying information elements, which have different depths along the Z axis orthogonal to a screen (XY plane), by rotating the information elements about any one of the X axis and the Y axis, where the rotational axis of each of the information elements is set at different points on the YZ plane or the XZ plane. According to certain embodiments of the invention, the information elements on a screen may be shown as planar elements in a still screen for greater legibility, but when the information elements are in motion, such as for changing the screen or moving a content element, the motion is provided with differing speeds according to depth, thereby providing a sense of spatial perception unique to 3-dimensional images. | 07-04-2013 |
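The abstract above describes moving information elements at differing speeds according to their Z depth, so a screen change produces a parallax-style sense of spatial perception. A minimal sketch of one such depth-dependent scaling rule follows; the formula and constants are assumptions, since the patent only requires that motion speed vary with depth.

```python
# Hypothetical parallax scaling: deeper elements receive a smaller share of a
# common screen offset, so nearer elements appear to move faster.

def parallax_offsets(base_offset, depths, z_reference=1.0):
    """Scale one screen-space offset per element by its depth along Z."""
    return [base_offset * z_reference / (z_reference + z) for z in depths]

offsets = parallax_offsets(100.0, [0.0, 1.0, 3.0])
# nearest element moves the full 100 px, the deepest only 25 px
```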
20130198693 | THREE-DIMENSIONAL ANIMATION TECHNOLOGY FOR DESCRIBING AND MANIPULATING PLANT GROWTH - This disclosure concerns systems and methods for the prediction and physical three-dimensional representation of plant growth and development. In some embodiments, systems and/or methods of the disclosure may be used to represent the growth of a particular plant (e.g., a maize cultivar) under particular environmental conditions, and/or to represent the differences in growth characteristics between a particular plant and another plant. | 08-01-2013 |
20130212536 | SYSTEM AND PROCESS FOR ROOF MEASUREMENT USING AERIAL IMAGERY - The present disclosure shows creating a first layer and a second layer in computer memory, and substantially overlapping at least a segment of line from said first layer with at least a segment of another line from said second layer, where a first non-dimensional attribute of one line differs from the second non-dimensional attribute of the other. Also shown is a user length field enabling a client with said interactive file to override at least one of said length numeric values, where said area operator may automatically recalculate area based on said length field override. Further shown is a visual marker, moveable on said computer monitor around said aerial imagery region, which may be repositioned to more precisely identify the location of the building roof structure. | 08-15-2013 |
20130227492 | APPARATUS FOR CONTROLLING THREE-DIMENSIONAL IMAGES - A system that incorporates teachings of the present disclosure may include, for example, computer-readable storage medium having computer instructions to receive from a media processor one or more scaling characteristics of a three-dimensional (3D) image, present a user interface (UI) for controlling a presentation of the 3D image at a presentation device communicatively coupled to the media processor, wherein the UI is adapted to the scaling characteristics of the 3D image, detect a manipulation of the UI, and transmit to the media processor instructions for adapting the presentation of the 3D image at the presentation device according to the detected manipulation of the UI. Other embodiments are disclosed. | 08-29-2013 |
20130283213 | ENHANCED VIRTUAL TOUCHPAD - A method, including receiving, by a computer, a two-dimensional image (2D) containing at least a physical surface and segmenting the physical surface into one or more physical regions. A functionality is assigned to each of the one or more physical regions, each of the functionalities corresponding to a tactile input device, and a sequence of three-dimensional (3D) maps is received, the sequence of 3D maps containing at least a hand of a user of the computer, the hand positioned on one of the physical regions. The 3D maps are analyzed to detect a gesture performed by the user, and based on the gesture, an input is simulated for the tactile input device corresponding to the one of the physical regions. | 10-24-2013 |
20130311951 | METHOD AND APPARATUS FOR DYNAMICALLY ADJUSTING GAME OR OTHER SIMULATION DIFFICULTY - A method for use with a simulation includes running the simulation, receiving information from a control interface used by a user to interact with the simulation, analyzing the received information, forming at least an indication of the user's level of skill based on the analysis of the received information, and adjusting a difficulty level of the simulation based on the indication of the user's level of skill. A storage medium storing a computer program executable by a processor based system and an apparatus for use with a simulation are also disclosed. | 11-21-2013 |
20140007016 | PRODUCT FITTING DEVICE AND METHOD | 01-02-2014 |
20140007017 | SYSTEMS AND METHODS FOR INTERACTING WITH SPATIO-TEMPORAL INFORMATION | 01-02-2014 |
20140013281 | CONTROLLING THREE-DIMENSIONAL VIEWS OF SELECTED PORTIONS OF CONTENT - Some embodiments of the inventive subject matter are directed to determining that at least a portion of content would be obscured by a border of a graphical user interface if the content were to be presented in a two-dimensional state via the graphical user interface, and presenting the at least the portion of the content in a stereoscopic three-dimensional state in response to the determining that the at least the portion of the content would be obscured by the border of the graphical user interface, wherein a stereoscopic depth effect of the stereoscopic three-dimensional state makes the at least the portion of the content appear to extend beyond the border of the graphical user interface. | 01-09-2014 |
20140019916 | Online Store - A method and system are disclosed. The method includes providing tracking program code to a visitor console; processing action data from the visitor console, wherein the tracking program generates the action data based on a webpage of a website displayed on the visitor console; displaying a three-dimensional graphical representation of the website on a user console; and displaying at least one computer-generated character interacting with the three-dimensional graphical representation of the website, wherein the at least one computer-generated character interacts with the three-dimensional graphical representation of the website based on the action data. | 01-16-2014 |
20140019917 | DISAMBIGUATION OF MULTITOUCH GESTURE RECOGNITION FOR 3D INTERACTION - A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously. | 01-16-2014 |
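The device described above disambiguates multitouch gestures into manipulation modes such as pan, zoom, and rotate. One hedged way to sketch the single-control-mode classification is to compare how two touch points move relative to each other; the thresholds and the decision rule below are assumptions, not the patent's method, which also covers multi-control modes.

```python
# Illustrative two-finger mode classifier: spread -> zoom, twist -> rotate,
# otherwise pan. Thresholds are assumed values.

import math

def classify_two_finger(start_a, start_b, end_a, end_b,
                        zoom_thresh=0.15, rotate_thresh_rad=0.2):
    d0 = math.dist(start_a, start_b)            # initial finger separation
    d1 = math.dist(end_a, end_b)                # final finger separation
    a0 = math.atan2(start_b[1] - start_a[1], start_b[0] - start_a[0])
    a1 = math.atan2(end_b[1] - end_a[1], end_b[0] - end_a[0])
    if abs(d1 - d0) / d0 > zoom_thresh:
        return "zoom"
    if abs(a1 - a0) > rotate_thresh_rad:
        return "rotate"
    return "pan"

# Spreading the fingers apart reads as a zoom gesture.
mode = classify_two_finger((0, 0), (1, 0), (-0.5, 0), (1.5, 0))
```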
20140089861 | METHOD AND SYSTEM FOR DISPLAYING THREE-DIMENSION INTERFACE BASED ON ANDROID SYSTEM - The present disclosure provides a method based on an android system for displaying a three-dimension interface, including: a three-dimension engine library transmitting a user operation command to an android system service layer; the android system service layer transmitting the user operation command to a java terminal; the java terminal generating a responding instruction according to the user operation command and sending the responding instruction to the android system service layer; the android system service layer sending the responding instruction to the three-dimension engine library; and the three-dimension engine library controlling a three-dimension model document to load a three-dimension model corresponding to the responding instruction and redrawing the three-dimension interface accordingly. In the present disclosure, the android system service layer avoids a decline in processing ability when system resources are momentarily overloaded. | 03-27-2014 |
20140096087 | Method and device for software interface display on terminal, and computer storage medium - A method and device for software interface display on a terminal are described. The method includes: a terminal acquires an interface dragging instruction and records a dragging distance corresponding to the interface dragging instruction; performs coordinate transformation on each pixel of a screenshot of an interface displayed on a current window according to the dragging distance and a preset rule for coordinate transformation; and redisplays the screenshot of each interface according to a result of the coordinate transformation, achieving a 3D or simulated-3D visual effect. With the present method, software interfaces can be displayed in 3D on a terminal. | 04-03-2014 |
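The abstract above transforms each pixel of a window screenshot according to the dragging distance and a preset rule. The sketch below uses one possible rule, a perspective-style row shear that shrinks rows as the drag grows; the rule and its constants are assumptions, since the patent leaves the transformation rule open.

```python
# Hypothetical per-pixel rule: shrink rows toward the screen centre in
# proportion to drag distance, simulating a 3D tilt of the interface.

def transform_pixel(x, y, drag_px, screen_w, screen_h, max_shrink=0.5):
    """Map one screenshot pixel (x, y) under a drag of drag_px pixels."""
    t = min(drag_px / screen_w, 1.0)               # normalised drag, 0..1
    row_scale = 1.0 - max_shrink * t * (y / screen_h)
    cx = screen_w / 2.0
    return (cx + (x - cx) * row_scale, y)

# With no drag the pixel is unchanged.
same = transform_pixel(100.0, 0.0, 0.0, 800, 600)   # (100.0, 0.0)
```

In a real implementation this mapping would be applied (or its inverse sampled) for every pixel of the screenshot before redisplay.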
20140137050 | 3D MODELING USER INTERFACE METHOD - The 3D modeling user interface (UI) method provides a 2D scalable grid on a computer screen that allows a user to extrude a 3D shape therefrom. The 3D shape is then presented on the display screen, which also shows the grid that the shape was extruded from. In addition to 2D grids, the UI allows the user to define 2D concentric circular patterns on a surface of the 3D shape, from which the user can extrude a 3D projection of the concentric circular patterns. A previously defined grid can be extended or bent into an arcuate or curvy grid according to manipulations by the user. Moreover a grid can be folded back on itself by the user. Additionally the UI provides groups of user-defined wavy splines that can be extruded from a displayed surface. | 05-15-2014 |
20140143733 | IMAGE DISPLAY APPARATUS AND METHOD FOR OPERATING THE SAME - An image display apparatus and a method for operating the same are disclosed. The method for operating the image display apparatus includes displaying a two-dimensional (2D) content screen, converting 2D content into three-dimensional (3D) content when a first hand gesture is input and displaying the converted 3D content. Therefore, it is possible to increase user convenience. | 05-22-2014 |
20140149943 | Method and apparatus for generating dynamic wallpaper - The present disclosure discloses a method and apparatus for generating a dynamic wallpaper, including: a basic visual effect control parameter is initialized and a 3D transformation parameter is set; a background and particles are rendered based on the 3D transformation parameter and the basic visual effect control parameter to generate a dynamic wallpaper; the 3D transformation parameter and the basic visual effect control parameter are updated based on a touch mode and a touch position upon detection of a user's touch action on a screen; and the background and the particles in the dynamic wallpaper are re-rendered based on the updated 3D transformation parameter and basic visual effect control parameter. By means of the technical solutions of the present disclosure, the generated dynamic wallpaper provides an intuitive 3D depth-motion particle effect and interactive enjoyment, and offers a visual and interactive user experience distinct from existing static and dynamic wallpapers. | 05-29-2014 |
20140181755 | VOLUMETRIC IMAGE DISPLAY DEVICE AND METHOD OF PROVIDING USER INTERFACE USING VISUAL INDICATOR - Provided is a volumetric image display apparatus for providing a user interface using a visual indicator, the apparatus including a recognition unit to recognize an input object in a predetermined three-dimensional (3D) recognition space, a visual indicator location determining unit to determine a location of the visual indicator in a predetermined volumetric image display space, based on the input object, and a display unit to display the visual indicator in the predetermined volumetric image display space, based on the location of the visual indicator. | 06-26-2014 |
20140201683 | DYNAMIC USER INTERACTIONS FOR DISPLAY CONTROL AND MEASURING DEGREE OF COMPLETENESS OF USER GESTURES - The technology disclosed relates to distinguishing meaningful gestures from proximate non-meaningful gestures in a three-dimensional (3D) sensory space. In particular, it relates to calculating spatial trajectories of different gestures and determining a dominant gesture based on magnitudes of the spatial trajectories. The technology disclosed also relates to uniformly responding to gestural inputs from a user irrespective of a position of the user. In particular, it relates to automatically adapting a responsiveness scale between gestures in a physical space and resulting responses in a gestural interface by automatically proportioning on-screen responsiveness to scaled movement distances of gestures in the physical space, user spacing with the 3D sensory space, or virtual object density in the gestural interface. The technology disclosed further relates to detecting if a user has intended to interact with a virtual object based on measuring a degree of completion of gestures and creating interface elements in the 3D space. | 07-17-2014 |
20140201684 | DYNAMIC USER INTERACTIONS FOR DISPLAY CONTROL AND MANIPULATION OF DISPLAY OBJECTS - The technology disclosed relates to distinguishing meaningful gestures from proximate non-meaningful gestures in a three-dimensional (3D) sensory space. In particular, it relates to calculating spatial trajectories of different gestures and determining a dominant gesture based on magnitudes of the spatial trajectories. The technology disclosed also relates to uniformly responding to gestural inputs from a user irrespective of a position of the user. In particular, it relates to automatically adapting a responsiveness scale between gestures in a physical space and resulting responses in a gestural interface by automatically proportioning on-screen responsiveness to scaled movement distances of gestures in the physical space, user spacing with the 3D sensory space, or virtual object density in the gestural interface. The technology disclosed further relates to detecting if a user has intended to interact with a virtual object based on measuring a degree of completion of gestures and creating interface elements in the 3D space. | 07-17-2014 |
20140215406 | MOBILE TERMINAL - The invention discloses a mobile terminal, comprising a display unit configured to display an image to be displayed in three-dimensional space beyond the mobile terminal, a determination module configured to determine an instruction input by a user through the displayed image via spatial position detection, and a processing module configured to execute an operation corresponding to the determined instruction. In the invention, both the display of the image in the three-dimensional space beyond the mobile terminal and the spatial position detection help to accurately execute the operation corresponding to the user's instruction, so that the user can observe the image conveniently, the limitation of screen size is eliminated, the user's instruction can be judged and executed accurately, and user experience is improved. | 07-31-2014 |
20140245230 | FULL 3D INTERACTION ON MOBILE DEVICES - Systems and methods may provide for displaying a three-dimensional (3D) environment on a screen of a mobile device, and identifying a first user interaction with an area behind the mobile device. In addition, the 3D environment can be modified based at least in part on the first user interaction. Moreover, the 3D environment may be modified based on movements of the mobile device as well as user interactions with the mobile device, allowing the user to navigate through the virtual 3D environment by moving the mobile/handheld device. | 08-28-2014 |
20140245231 | PRIMITIVE FITTING APPARATUS AND METHOD USING POINT CLOUD - A primitive fitting apparatus is provided. The primitive fitting apparatus may include a selecting unit to receive, from a user, a selection of points from a point cloud to be used in fitting a primitive the user desires to fit, an identifying unit to receive a selection of the primitive from the user and to identify the selected primitive, and a fitting unit to fit the primitive to correspond to the points, using the points and the primitive. | 08-28-2014 |
20140282266 | PROVIDING INFORMATION REGARDING CONSUMABLE ITEMS TO USERS - Various embodiments are directed to systems and methods for characterizing consumable items. A computer system receives a plurality of characteristics describing a first consumable item. The computer system derives first, second and third dimension values from respective first, second and third sets of the plurality of characteristics. The computer system generates a user interface, where the user interface depicts a three-dimensional space and comprises an icon representing the first consumable item. Generating the user interface comprises positioning the icon in the depicted three-dimensional space at a position corresponding to the first, second and third dimension values. | 09-18-2014 |
20140298269 | DETECTING, REPRESENTING, AND INTERPRETING THREE-SPACE INPUT: GESTURAL CONTINUUM SUBSUMING FREESPACE, PROXIMAL, AND SURFACE-CONTACT MODES - Systems and methods for detecting, representing, and interpreting three-space input are described. Embodiments of the system, in the context of an SOE, process low-level data from a plurality of sources of spatial tracking data and analyze these semantically uncorrelated spatiotemporal data and generate high-level gestural events according to dynamically configurable implicit and explicit gesture descriptions. The events produced are suitable for consumption by interactive systems, and the embodiments provide one or more mechanisms for controlling and effecting event distribution to these consumers. The embodiments further provide to the consumers of its events a facility for transforming gestural events among arbitrary spatial and semantic frames of reference. | 10-02-2014 |
20140304660 | DISCOVERING AND PRESENTING DECOR HARMONIZED WITH A DECOR STYLE - Technology is disclosed for discovering décor harmonized with a décor style (“the technology”). The décor includes décor items, e.g. artworks, paintings, pictures, artifacts, architectural pieces, arrangement of artworks, color selection, room décor, rugs, mats, furnishings, household items, fashion, clothes, jewelry, car interiors, garden arrangements etc. The technology facilitates analyzing user input to identify a décor style from a décor style dictionary, obtaining décor that harmonizes with décor style, and presenting a representation of the décor to the user. The décor style dictionary includes décor styles that are generated based on an analysis of content, including images and description of décor, from a plurality of sources. The décor styles can be based on a number of concepts, including a theme of the décor, a color/color palette, a mood of the person, a fashion era, a type of architecture, etc. The technology facilitates presentation of discovered décor using computer generated imagery techniques. | 10-09-2014 |
20140304661 | DISCOVERING AND PRESENTING DECOR HARMONIZED WITH A DECOR STYLE - Technology is disclosed for discovering décor harmonized with a décor style (“the technology”). The décor includes décor items, e.g. artworks, paintings, pictures, artifacts, architectural pieces, arrangement of artworks, color selection, room décor, rugs, mats, furnishings, household items, fashion, clothes, jewelry, car interiors, garden arrangements etc. The technology facilitates analyzing user input to identify a décor style from a décor style dictionary, obtaining décor that harmonizes with décor style, and presenting a representation of the décor to the user. The décor style dictionary includes décor styles that are generated based on an analysis of content, including images and description of décor, from a plurality of sources. The décor styles can be based on a number of concepts, including a theme of the décor, a color/color palette, a mood of the person, a fashion era, a type of architecture, etc. The technology facilitates presentation of discovered décor using computer generated imagery techniques. | 10-09-2014 |
20140337802 | INTUITIVE GESTURE CONTROL - In an embodiment, the computing unit determines a spherical volume region corresponding to a sphere and lying in front of a display device and the midpoint of the volume region. Furthermore, the computing unit inserts manipulation possibilities related to the output image into image regions of the output image. An image capture device captures a sequence of depth images and communicates it to the computing unit. The computing unit ascertains therefrom whether and, if appropriate, at which of a plurality of image regions the user points with an arm or a hand, whether the user performs a predefined gesture that differs from pointing at the output image or an image region, or whether the user performs a grasping movement with regard to the volume region. Depending on the result of the evaluation, the computing unit may activate a manipulation possibility, perform an action, or rotate the three-dimensional structure. | 11-13-2014 |
20140359535 | APPARATUS AND METHOD FOR MANIPULATING THE ORIENTATION OF AN OBJECT ON A DISPLAY DEVICE - A method of manipulating a three-dimensional object displays a first view of a three-dimensional object on a touchscreen. The touchscreen has three-dimensional views associated with at least one pre-specified visually undelineated portion of the touchscreen. The method receives a touch input on the touchscreen in a visually undelineated portion, and determines a second view of the three-dimensional object based on the view assigned to the visually undelineated portion that received the touch input. The method displays the second view of the three-dimensional object on the touchscreen. | 12-04-2014 |
20140359536 | THREE-DIMENSIONAL (3D) HUMAN-COMPUTER INTERACTION SYSTEM USING COMPUTER MOUSE AS A 3D POINTING DEVICE AND AN OPERATION METHOD THEREOF - A three-dimensional user interface system includes at least one pointing/input device and an imaging device configured for capturing one or more image frames each providing at least two different views of a scene including the at least one pointing/input device. The imaging device is a multi-view imaging device which provides at least two different views of the scene per each of the one or more image frames captured. One or more software programs calculate from reference points in the image frames at least a spatial and a velocity parameter of the at least one pointing/input device when moved through a three-dimensional space and for rendering on a graphical user interface of a computing device a visual marker corresponding to the spatial and velocity parameters of the at least one pointing/input device in three-dimensional space. Methods for three-dimensional pointing and/or data input incorporating the described system are also provided. | 12-04-2014 |
20140372956 | METHOD AND SYSTEM FOR SEARCHING AND ANALYZING LARGE NUMBERS OF ELECTRONIC DOCUMENTS - The current document is directed to methods and systems for accessing, searching, analyzing, and visualizing electronically stored information, including electronic documents. These methods and systems construct graph-like representations of information searches that can be visualized and manipulated in three dimensions. The three-dimensional rendering of search results allows for very large numbers of search results to be visualized conveniently using a graphical user interface displayed on an electronic display device. Methods and systems provide for three-dimensional manipulation of graph-like renderings of search results, visualization-assisted searching, and a large number of research tools for discovering and storing various types of links, connections, and relationships between electronically stored information entities. | 12-18-2014 |
20150033191 | SYSTEM AND METHOD FOR MANIPULATING AN OBJECT IN A THREE-DIMENSIONAL DESKTOP ENVIRONMENT - An electronic device, method and interface for the device, for performing an action with a processor through a three-dimensional desktop environment is disclosed. A three-dimensional desktop environment is generated by a display and projected into a real space. At least one ultrasonic transducer propagates an ultrasonic pulse into the real space and receives a reflection of the ultrasonic pulse from a user object in the real space. A user action of the user object within the three-dimensional desktop environment is determined using the reflection of the ultrasonic pulse. The processor performs the action based on the determined user action. | 01-29-2015 |
20150040072 | THREE DIMENSIONAL IMAGE DIMENSION MAPPING - In an example embodiment, a first view of a real-world object is displayed from a first angle. Then a two-dimensional shape is overlaid over the first view. Then user interface manipulations of the two-dimensional shape are received from a user, defining the boundaries of a first side of the real-world object in the first view. Then a second view of the real-world object is displayed from a second angle. Following that, the manipulated two-dimensional shape may be overlaid over the second view. Additional user interface manipulations of the two-dimensional shape may be received from the user, defining the boundaries of the first side of the real-world object in the second view. Then dimensions of the first side of the real-world object are derived from the manipulated two-dimensional shape from the second view. | 02-05-2015 |
20150089452 | System and Method for Collaborative Computing - A system for facilitating collaborative visual expression on a portable computing device comprises a virtual workspace module constructing a virtual workspace at the portable computing device, the virtual workspace configured to host objects including one or more objects defining one or more subspaces of the virtual workspace. A synchronization module communicates with a cooperating computing device via a wireless communication link to synchronize the virtual workspace with another virtual workspace constructed at the cooperating computing device. A graphical user interface module generates a graphical user interface that presents an individual view of the virtual workspace to a user of the portable computing device and allows visual expression within the virtual workspace via a touch input device associated with the portable computing device. A touch input data processing module processes data pertaining to touch input detected by the touch input device in connection with the graphical user interface. | 03-26-2015 |
20150106767 | METHOD AND APPARATUS FOR ADDRESSING OBSTRUCTION IN AN INTERFACE - A user, a manipulator such as a hand, and at least one entity such as a virtual or augmented reality object are in an interface such as a 3D environmental interface. The manipulation distance is the distance between a reference feature of the user and a manipulation feature of the manipulator. The entity distance is the distance between the reference feature and an entity feature of the entity. When the manipulation distance becomes greater than the entity distance, the entity is caused to fade, disappear, move out of the way, shrink, etc. so as to be less of an obstruction to the user's field of view, for example to avoid obstructing more distant entities. Other factors than the manipulation distance and entity distance may be considered in determining whether to reduce the obstructivity of the entity, and exceptions to the obstruction relation may be considered. | 04-16-2015 |
20150106768 | Three Dimensional User Interface Effects On A Display By Using Properties Of Motion - The techniques disclosed herein use a compass, MEMS accelerometer, GPS module, and MEMS gyrometer to infer a frame of reference for a hand-held device. This can provide a true Frenet frame, i.e., X- and Y-vectors for the display, and also a Z-vector that points perpendicularly to the display. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track the Frenet frame of the device in real time to provide a continuous 3D frame-of-reference. Once this continuous frame of reference is known, the position of a user's eyes may either be inferred or calculated directly by using a device's front-facing camera. With the position of the user's eyes and a continuous 3D frame-of-reference for the display, more realistic virtual 3D depictions of the objects on the device's display may be created and interacted with by the user. | 04-16-2015 |
20150135143 | SYSTEMS AND METHODS FOR SPEED-ADJUSTABLE MODEL NAVIGATION - Systems and methods for speed-adjustable model navigation are provided. In aspects, a model platform includes a model engine and a speed tool that operates with the model engine to generate a graphical view of a geological model. Various features of the geological object may be encoded or reflected in the geological model, including the composition, pressure, temperature, structure, fracture lines, and other aspects of a hydrocarbon deposit, cavity, or other geological structure. The user may operate the speed tool to examine the histogram of color or intensity of the pixels or voxels of regions of the model view, and set a speed curve to control how quickly or slowly a cursor or other control may move through or traverse a region, based on the color, intensity, or other value. Regions of interest may be explored more efficiently and accurately. | 05-14-2015 |
20150143301 | Evaluating Three-Dimensional Geographical Environments Using A Divided Bounding Area - Embodiments relate to evaluating structures located in a geographic area that is divided into divided bounding areas using a three-dimensional environment representing the structures. In an embodiment, a computer-implemented method includes dividing a three-dimensional environment depicting the structures into the divided bounding areas, where each divided bounding area encompasses a portion of the structures. In the method, an evaluation web page for each divided bounding area is retrieved from a structure evaluation server where each evaluation web page provides an evaluation status for each divided bounding area. At least one structure encompassed by a selected divided bounding area is evaluated based on a visual representation of the structure. An updated evaluation web page for the selected divided bounding area is provided to the structure evaluation server where the updated evaluation web page provides an updated evaluation status for the selected divided bounding area. | 05-21-2015 |
20150331570 | UPDATING ASSETS RENDERED IN A VIRTUAL WORLD ENVIRONMENT BASED ON DETECTED USER INTERACTIONS IN ANOTHER WORLD - A settings controller outputs a settings interface through which a user may select from among multiple selectable options to specify one or more data associations in databases accessed by an asset location controller to selectively assign a detected user interaction in another world to a displayable rendering in a virtual world based on a selection of the one or more data associations applicable to the detected user interaction and the virtual world. The settings controller, responsive to a user selecting, through the settings interface, one or more particular selectable options to enter one or more particular data associations for one or more particular databases, assigns the one or more particular data associations to the one or more particular databases for specifying the displayable rendering of the detected user interaction in the other world to the displayable rendering in the virtual world. | 11-19-2015 |
20160034028 | PERSPECTIVE BASED TAGGING AND VISUALIZATION OF AVATARS IN A VIRTUAL WORLD - A system for perspective based tagging and visualization of avatars in a virtual world may include determining if another avatar has moved within a predetermined proximity range of a user's avatar in a virtual world. The system may also include allowing the user to tag the other avatar with information in response to the other avatar being within the predetermined proximity range of the user's avatar. | 02-04-2016 |
20160117077 | Multi-Depth-Interval Refocusing Method and Apparatus and Electronic Device - A multi-depth-interval refocusing method, apparatus and electronic device are provided. The method includes displaying an image on a display device; acquiring user input, and determining, in the displayed image according to the user input, a refocus area including at least two discontinuous depth intervals, where each depth interval in the at least two discontinuous depth intervals is constituted by at least one depth plane, each depth plane contains at least one focus pixel, and depths of object points corresponding to focus pixels contained on a same depth plane are the same; performing refocusing processing on an image within the refocus area to display a refocused image on the display device, where the refocused image has a visually distinguishable difference in definition relative to the area of the displayed image outside the refocus area; and displaying the refocused image on the display device. Therefore, multi-depth-interval refocusing is implemented. | 04-28-2016 |
20160188160 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING PROGRAM, AND MOBILE TERMINAL DEVICE - An information processing apparatus includes a 3D (three-dimensional) matrix arranging portion and a luminance/transmittance display controller portion. When a plurality of contents are list-displayed, the 3D-matrix arranging portion arranges the plurality of contents in a 3D-matrix-like form (“3D matrix,” hereinbelow) in accordance with predetermined three axes. The 3D matrix includes a plurality of planes each containing a group of contents of the plurality of contents. The luminance/transmittance display controller portion displays on a display portion the respective groups of contents of the plurality of contents arranged in the 3D matrix, in a manner that luminance is gradually reduced in a direction from contents on a front-most plane of the plurality of planes to contents on a rear-most plane, and transmittance is gradually increased in the direction from the contents on the front-most plane to the contents on the rear-most plane. | 06-30-2016 |
20160196037 | METHOD OF CONTROLLING USER INPUT AND APPARATUS TO WHICH THE METHOD IS APPLIED | 07-07-2016 |