Document | Title | Date |
20090064052 | On-Line Product Catalogue and Ordering System, and the Presentation of Multimedia Content - A method of presenting an on-line product catalogue and ordering system to a user, said method comprising providing a graphical user interface representing a virtual space through which users may navigate, said virtual space including representations representing products, through which a user may obtain product information and order a product on-line, and representations representing other users of the system, through which the user may identify such other users and communicate with a selected other user regarding the products on offer. | 03-05-2009 |
20090083669 | NAVIGATION SYSTEM FOR A 3D VIRTUAL SCENE - A navigation system for navigating a three-dimensional (3D) scene that includes a model or object with which a user can interact. The system accommodates and helps both novice and advanced users. To do this, the system provides a focus point that can be positioned on a model surface and with respect to which the tools of the system operate. The point is a geometry sphere that can have axial rings that show the orientation of the scene and the relative position of the view in the scene based on sphere size. | 03-26-2009 |
20090083670 | AUDIO-VISUAL NAVIGATION AND COMMUNICATION - Communicating information through a user platform by representing, on a user platform visual display, spatial publishing objects as entities at static locations within a three-dimensional spatial publishing object space. Each spatial publishing object is associated with information, and each presents a subset of the associated information. A user presence is established at a location within the spatial publishing object space. The user presence, in conjunction with a user point-of-view, is navigable by the user in at least a two-dimensional sub-space of the spatial publishing object space. | 03-26-2009 |
20090089714 | THREE-DIMENSIONAL WEBSITE VISUALIZATION - Methods, systems, and apparatuses for visualizing content of one or more websites are provided. An amount of available content contained by at least one website is determined. A three-dimensional object is displayed having a surface. Each location of the surface of the three-dimensional object corresponds to a portion of the determined amount of available content contained by the website(s). A user is enabled to select a location of the surface of the three-dimensional object. The selected location is mapped to the corresponding portion of the determined amount of available content contained by the website(s). Content of the website(s) corresponding to the selected location, such as a web page, video, audio, an RSS feed, etc., can be accessed by selection of the location. Interaction of the user with the website(s) can be tracked, and the resulting tracking information can be displayed on the three-dimensional object. | 04-02-2009 |
20090187863 | STORAGE MEDIUM STORING INPUT PROCESSING PROGRAM AND INPUT PROCESSING APPARATUS - A game system includes a game apparatus and a controller. For example, whether an operation of waving the controller has been performed is determined based on at least one of a pointed position on a display screen by the controller and acceleration. A wind object is generated and moved in virtual game space according to the waving operation. When it is determined that the wind object has collided with a windmill object disposed in the virtual game space, the windmill object is influenced by the wind and its rotation speed is changed. | 07-23-2009 |
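The wave-driven interaction described in entry 20090187863 can be illustrated with a minimal sketch: an acceleration sample is classified as a "wave" when its magnitude crosses a threshold, and a windmill hit by the resulting wind object speeds up. The threshold value, sample format, and function names are illustrative assumptions, not the patent's implementation.

```python
import math

WAVE_THRESHOLD = 2.5  # assumed acceleration magnitude that counts as a waving motion

def is_wave(accel_sample):
    """Return True when a single (ax, ay, az) sample exceeds the wave threshold."""
    ax, ay, az = accel_sample
    return math.sqrt(ax * ax + ay * ay + az * az) > WAVE_THRESHOLD

def update_windmill(rotation_speed, wind_strength, hit):
    """Increase the windmill's rotation speed when the wind object collides with it."""
    return rotation_speed + wind_strength if hit else rotation_speed

sample = (0.4, 3.1, 0.2)  # hypothetical controller reading
speed = 1.0
if is_wave(sample):
    speed = update_windmill(speed, wind_strength=0.5, hit=True)
print(speed)  # 1.5
```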
20090217207 | DISPLAY OF A MEDICAL IMAGE - A device for displaying a medical image is provided. The device includes a processing unit, a display, a remote control, a communication interface, and a software module. The processing unit is operable to process the medical image information. The display is operable to display the medical image information. The remote control is operable to register a user movement by at least one motion-sensitive sensor. The communication interface is operable to transfer the user movement to the processing unit. The software module is associated with the processing unit. The software module is operable to reconcile the user movement with the medical image information so that the user movement is reproduced as a virtual movement of the displayed medical image information. | 08-27-2009 |
20090222768 | Graphical User Control for Multidimensional Datasets - A user control is provided for use with a multidimensional dataset that allows a user to graphically set the bounds for one or more of the dimensions of data selected from the dataset. The graphical user control includes a wireframe cube representing the extent of data in the dataset and a selector box within the data cube. A user can indicate a selected perspective and orientation of the data by selecting a portion of an edge of the selector box, and a visual indication of the selected perspective and orientation is provided. The user further can select a desired portion of the data by changing a size and/or a position of the selector box within the data cube. The graphical user control further includes a visual indicator representing the fourth dimension of the dataset which allows the user to identify and select a further subset of the data defined by the selector box. The graphical user control further includes one or more navigation buttons that allow the user to rotate a view around the selector box, the view reflecting the selected perspective and orientation of data in the dataset. | 09-03-2009 |
20090241067 | APPARATUS, SYSTEM, AND METHOD FOR ROTATIONAL GRAPHICAL USER INTERFACE NAVIGATION - A display module displays a first face of a virtual multidimensional solid comprising a plurality of faces. Each face comprises graphical user interface controls for a unique function set. An input module receives a rotational command that rotates the multidimensional solid around at least one axis. A rotation module rotates the multidimensional solid to display a second face in response to the rotational command, displaying the rotation of the multidimensional solid. | 09-24-2009 |
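A rough sketch of the face-rotation bookkeeping behind entry 20090241067, assuming a six-faced solid whose faces each carry a named function set; the face names and rotation-step logic are placeholders for illustration only.

```python
FACES = ["mail", "calendar", "contacts", "notes", "media", "settings"]  # assumed function sets

class RotationalUI:
    """Tracks which face of the virtual solid is currently displayed."""

    def __init__(self):
        self.current = 0  # index of the face currently shown

    def rotate(self, steps=1):
        """Rotate the solid by `steps` faces in response to a rotational command."""
        self.current = (self.current + steps) % len(FACES)
        return FACES[self.current]

ui = RotationalUI()
print(ui.rotate())    # "calendar"
print(ui.rotate(-2))  # "settings"
```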
20090259975 | LIST DISPLAY APPARATUS, LIST DISPLAY METHOD AND GRAPHICAL USER INTERFACE - A list display apparatus includes: a picture generating unit for generating a three-dimensional list picture, the three-dimensional list picture having a plurality of lower item cards having respective lower items in a hierarchical structure being unfolded and expanded in a bellows configuration when shifting from an upper level to a lower level of the hierarchical structure, or the plurality of lower item cards being folded and collapsed in a bellows configuration when shifting from the lower level to the upper level of the hierarchical structure; and a control unit for outputting the three-dimensional list picture to a predetermined display unit, thereby displaying the three-dimensional list picture. | 10-15-2009 |
20090259976 | Swoop Navigation - This invention relates to navigating in a three dimensional environment. In an embodiment, a target in the three dimensional environment is selected when a virtual camera is at a first location. A distance between the virtual camera and the target is determined. The distance is reduced, and a tilt is determined as a function of the reduced distance. A second location of the virtual camera is determined according to the tilt, the reduced distance, and the position of the target. Finally, the camera is oriented to face the target. In an example, the process repeats until the virtual camera is oriented parallel to the ground and is close to the target. In another example, the position of the target moves. | 10-15-2009 |
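The swoop behaviour summarized in entry 20090259976 lends itself to a short numeric sketch: each step shrinks the camera-to-target distance and derives a tilt from the reduced distance. The reduction factor, tilt mapping, and swoop range below are assumptions, not values from the application.

```python
import math

def swoop_step(distance, reduction=0.8, max_tilt=math.pi / 2, swoop_range=1000.0):
    """Return (new_distance, tilt) for one swoop iteration toward the target."""
    new_distance = distance * reduction
    # tilt approaches horizontal (pi/2) as the camera closes in on the target
    tilt = max_tilt * (1.0 - min(new_distance / swoop_range, 1.0))
    return new_distance, tilt

d = 1000.0
while d > 1.0:  # repeat until the camera is close to the target
    d, tilt = swoop_step(d)
print(round(d, 2), round(math.degrees(tilt), 1))
```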
20090265668 | 3D INPUT/NAVIGATION DEVICE WITH FREEZE AND RESUME FUNCTION - 3D input/navigation device, method and computer program product for controlling an object in a three-dimensional space by an operator, wherein an object can be switched into a frozen condition in which the movements of the object are suppressed with respect to at least one direction such that the object can only be moved in a surface or along a line determined by the operator, and can be switched into a released condition in which the object is resumed or released such that it is freely controllable corresponding to a relative position of the device in a reference system. | 10-22-2009 |
20090313585 | METHOD AND COMPUTERIZED USER INTERFACE FOR PRESENTATION OF MULTIPLE IMAGE DATA SETS - In a method and user interface for the presentation of multiple image data sets within the scope of a comparative evaluation, a determination is made of at least three organization parameters that describe a sorting of images within an image data set and/or across image data sets, at least one organization parameter is associated with at least one dimension of a three-dimensional matrix, which one dimension is associated with a spatial direction, the images of the image data sets are arranged in the three-dimensional matrix according to the sorting, using the organization parameters and the dimensions, and at least a portion of the images is shown on a presentation device according to their arrangement in the three-dimensional matrix and the spatial directions. | 12-17-2009 |
20100199221 | NAVIGATION OF A VIRTUAL PLANE USING DEPTH - A touchless HCI provides a virtual surface in three-dimensional space. The touchless HCI may receive input regarding a user movement, process the input to generate clean gesture data and analyze at least one dynamical variable to determine an interpreted action based upon a relationship of the clean gesture data with respect to the virtual surface. | 08-05-2010 |
20100251186 | Mobile Terminal Providing Graphic User Interface and Method of Providing Graphic User Interface Using the Same - A mobile terminal providing a graphic user interface and method of providing a graphic user interface are disclosed. In one embodiment a mobile terminal may comprise a control unit, a touch screen display unit coupled to the control unit, and a memory storing instructions to be executed by the control unit. In one embodiment a method may comprise displaying and then moving one or more icons around the screen of the mobile terminal, detecting a designation of a point on the screen, and executing a predefined action based on a distance from a location of the designated point to a location of at least one of the icons. Another embodiment may detect an application related event, display an indicator icon, and execute a predefined action in response to the event detection. | 09-30-2010 |
20100262938 | SYSTEMS AND METHODS FOR GENERATING A MEDIA GUIDANCE APPLICATION WITH MULTIPLE PERSPECTIVE VIEWS - Systems and methods are provided for navigating a media guidance application with multiple perspective views. A first of a plurality of media guidance objects may be displayed in a first perspective view that appears flat on the screen. A second media guidance object may be displayed in a second perspective view that appears to be going into the screen creating the appearance of a fold between the first and second media guidance application objects at a location where the first perspective view changes into the second perspective view. The second media guidance object in the second perspective view may be caused to slide through the fold into the first perspective view. The second media guidance object displayed in the first perspective view may be selected. An action may be performed for a media asset corresponding to the selected media guidance object. | 10-14-2010 |
20100281433 | Computer Method and Apparatus Specifying Avatar Entrance and Exit - Computer method and apparatus controls avatar relative to a subject virtual environment, in particular entrance to and/or exit from the subject environment. An entrance/exit specification engine provides a plurality of characteristics of the subject environment and/or of an avatar representing a corresponding user in the subject environment. A script generator responsive to the entrance/exit specification engine generates scripts as a function of the plurality of characteristics. The generated scripts form a script collection executable with the avatar. Execution of the generated scripts on a processor of the corresponding user controls avatar entrance to and/or exit from the subject environment. | 11-04-2010 |
20110010675 | Use of Real Time Location Information for User Authentication and Authorization in Virtual Environments - Provided is a method for authentication and verification of a user in a virtual world (VU) based upon such criteria as the physical location of a user in the real world and the logical location of the user's avatar in the VU. The disclosed technology combines physical and application aspects of security to enhance security options within virtual environments. In addition to traditional credential-based authentication, physical constraints corresponding to the real world and logical locations in a VU are employed, wherein an authentication server requires each component to be in the proper association state, location, or proximity before authenticating a user. Further, the disclosed technology provides for the termination of a user's authentication if the user moves from an approved physical or VU location to an unapproved location. Techniques are provided to track a user's credentials and the real-time physical and logical location of the user. | 01-13-2011 |
20110016434 | KNOWLEDGE-BASED POLYMORPH UNDOCKABLE TOOLBAR - A software control method and apparatus for implementing a knowledge-based polymorph undockable toolbar within an object scene. The undockable toolbar can be used to perform actions on objects created and managed by computer software applications. A knowledge-based polymorph undockable toolbar can merge, into a relatively small area, tools for executing various commands that would require substantial screen space if represented by standard icons on a toolbar. The present invention can be used to manipulate non-constrained objects or groups of objects included in an assembly that are linked to each other by constraints. The knowledge-based polymorph undockable toolbar can also act to reduce the number of user interactions needed to perform a manipulation task. | 01-20-2011 |
20110078634 | SYSTEMS AND METHODS FOR NAVIGATING A THREE-DIMENSIONAL MEDIA GUIDANCE APPLICATION - Systems and methods for navigating a three-dimensional (3D) media guidance application are provided. A first selectable media guidance object may be displayed on a screen that when viewed through the stereoscopic optical device may appear in a first plane. A second selectable media guidance object may be displayed on the screen that when viewed through the stereoscopic optical device may appear in a second plane. The first and second planes may be perceived to intersect an axis normal to the display in different locations. A user selection of at least one of the first and second selectable media guidance objects may be received. An action of moving a cursor in 3D space or selecting one of the selectable media guidance objects displayed in the 3D space may be performed based on the user selection. The user selection may be performed with an input device having an accelerometer. | 03-31-2011 |
20110107270 | Treatment planning in a virtual environment - A method and apparatus for treatment planning are described. A treatment planning system provides a computer-simulated virtual environment including a virtual artifact that is a three-dimensional simulation of a patient anatomy, wherein the three-dimensional simulation is generated from one or more diagnostic images taken of the patient anatomy. The treatment planning system performs a treatment planning operation associated with the virtual artifact in response to a user interaction with the virtual environment. | 05-05-2011 |
20110107271 | System And Method For Providing A Dynamic User Interface For A Dense Three-Dimensional Scene With A Plurality Of Compasses - A system and method for providing a dynamic user interface for a dense three-dimensional scene with a plurality of compasses is presented. Clusters of semantically scored documents are placed in a three-dimensional scene and arranged as a cluster spine. Each cluster spine is projected into a two-dimensional display. A compass and another compass are provided via a heads-up display generator to logically frame at least one of the cluster spines within the two-dimensional display. A label is generated to identify each concept in one or more cluster spines appearing with the compass and the another compass, respectively. Slots are defined in the two-dimensional display and positioned circumferentially around the compass and the another compass. Each label is assigned to the slot outside of the compass or the another compass for the cluster spine having a closest angularity to the respective slot. | 05-05-2011 |
20110113383 | Apparatus and Methods of Computer-Simulated Three-Dimensional Interactive Environments - Computer-simulated three-dimensional environments include automatable and/or constrainable camera control, for video gameplay, real estate and/or landscape demonstrations, or any other digitizable environment. In gameplay applications, the system can be customizably programmed to automatically adapt to the environment based on the player's location within the virtual environment, information about what the programmer believes is relevant (or wants to make relevant) in the scene being displayed, and other factors. Certain embodiments of the inventive apparatus and methods generally automatically incorporate and honor the “rules of cinematography,” but also preferably include other “action video game” principles that can override or trump those rules. | 05-12-2011 |
20110126161 | POSITIONAL EFFECTS IN A THREE-DIMENSIONAL DESKTOP ENVIRONMENT - Systems, methods and articles of manufacture are disclosed for arranging display elements in a three-dimensional desktop environment. In one embodiment, each display element may include an attribute. A user request may be received to apply a positional effect to the display elements. The positional effect may be applied to the display elements based on the attribute, responsive to the user request. | 05-26-2011 |
20110302536 | USER MOVEMENT INTERPRETATION IN COMPUTER GENERATED REALITY - Technologies are generally described for a system for interpreting user movement in computer generated reality. In some examples, the system includes a user interface effective to generate movement data relating to movement of the user interface. In some examples, the system further includes a processor effective to receive the movement data. In some examples, the processor is further effective to define a coordinate system based on the movement data and map the movement data to the coordinate system to produce mapped movement data. In some examples, the processor is further effective to determine a feature of the mapped movement data and to map the feature to a code. In some examples, the processor is further effective to send the code to the application and receive application data from the application in response to the code. In some examples, the processor is further effective to generate an image based on the application data. | 12-08-2011 |
20120030630 | MULTISCALE THREE-DIMENSIONAL ORIENTATION - A multiscale data engine is configured to generate a three-dimensional (3D) environment based on a multiscale 3D dataset. The multiscale data engine is also configured to generate a spatial hierarchy within the 3D environment by selectively grouping 3D objects within the 3D environment. The multiscale data engine is further configured to identify specific 3D objects within the 3D environment in response to input received from an end-user and based on spatial properties associated with the 3D objects. The multiscale data engine is also configured to generate various navigation graphical user interfaces (GUIs) that allow the end-user to navigate the 3D environment. | 02-02-2012 |
20120084733 | METHOD AND DEVICE FOR DISPLAYING AND BROWSING A MULTI-FACETED DATA SET - This invention aims to provide a method and apparatus for displaying and/or browsing a multi-faceted data set containing hierarchical subject labels. In the present invention, subject labels can be located in a 3D space. Complex information, such as the relationship between subject labels and the weights of respective subject labels, can be presented by displaying the 3D space. In this way, the screen size can be reduced and the user experience is improved. | 04-05-2012 |
20120192115 | System and Method for Interactive Projection and Playback of Relevant Media Segments onto the Facets of Three-Dimensional Shapes - A system for interactive media skimming and search on a device comprises a scene manager that builds a model of a 3D scene of a multimedia segment of the media visible on a screen of the device, in which facets of scene objects in the 3D scene are used to dynamically convey visual imagery as a texture from a multimedia source, and that maintains a logical navigable relationship between the scene objects, the object facets and the multimedia segments. The system further comprises an interaction manager, a user manager securely storing user information and preferences, a playback component initiating rough or high definition playback, a texturizer creating a 2D texture artifact from a set of the multimedia segments in the media for a 3D facet, a transformer transforming and modifying pixels, and a cache and scene heuristics maintaining a set of 3D scenes comprising objects, object facets, a virtual camera, and positions of the objects. | 07-26-2012 |
20120216150 | SYSTEM AND METHOD FOR MANIPULATING OBJECTS IN A GRAPHICAL USER INTERFACE - A system, computer-readable storage medium including instructions, and a computer-implemented method for manipulating one or more objects in a graphical user interface for a display device is presented. A start of a path selection mode is detected. A first path traversed by a cursor in the graphical user interface is detected, with the first path intersecting one or more objects in a plurality of objects displayed in the graphical user interface of the display device. A curve corresponding to the first path is displayed in the graphical user interface. An end of the path selection mode is detected. A selection state of the one or more objects is updated based on the curve, the selection state including a selected state and a deselected state. | 08-23-2012 |
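The path-selection rule in entry 20120216150 can be sketched as follows: when path-selection mode ends, every object whose bounds the cursor path crossed has its selection state toggled once. Rectangular bounds and point sampling of the path are simplifying assumptions for illustration.

```python
def end_path_selection(path_points, objects, selected):
    """Toggle the selection state of each object crossed by the cursor path (once per object)."""
    hit = {name
           for (px, py) in path_points
           for name, (x0, y0, x1, y1) in objects.items()
           if x0 <= px <= x1 and y0 <= py <= y1}
    for name in hit:
        selected[name] = not selected[name]
    return selected

objs = {"a": (0, 0, 10, 10), "b": (20, 0, 30, 10)}
print(end_path_selection([(5, 5), (15, 5), (25, 5)], objs, {"a": False, "b": True}))
# {'a': True, 'b': False}
```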
20120284671 | SYSTEMS AND METHODS FOR INTERFACE MANAGEMENT - Methods and systems for interface management are provided. First, a plurality of interfaces arranged in sequence is provided. The interfaces are placed in a circle across a 3D space to form a 3D object, and the interfaces include pages or menus. Then, a signal is received, and in response to the signal, the position of the 3D object viewed on a screen of the electronic device is adjusted, wherein the 3D object is located at a virtual distance behind and away from the screen, and the virtual distance is gradually varied. | 11-08-2012 |
20120284672 | VIRTUAL ROOM-BASED LIGHT FIXTURE AND DEVICE CONTROL - In one embodiment, a virtual room-based user interface includes one or more virtual rooms. Each virtual room is rendered from one or more images captured of a corresponding physical room of a structure, and includes depictions of one or more light fixtures within the physical room, one or more furnishings within the physical room and one or more boundaries of the physical room. A user selects a particular depiction of a particular light fixture within a particular virtual room. In response, a state of the particular light fixture within the corresponding physical room is changed. Also, appearance of the particular virtual room is updated such that the depiction of the particular light fixture shows the particular light fixture with the changed state and the depictions of the one or more boundaries or the one or more furnishings show lighting effects resulting from the changed state. | 11-08-2012 |
20120297346 | THREE DIMENSIONAL BUILDING CONTROL SYSTEM AND METHOD - The system helps facility managers and other users to efficiently navigate through a building or complex of buildings, and quickly gather information for (and control) individual building systems or groups of systems. A method includes displaying an image representing at least a portion of a building, wherein at least part of the image is a three-dimensional representation; and displaying a representation of a device associated with the building, wherein the representation of the device is selectable through the user interface. | 11-22-2012 |
20120304130 | SYSTEMS, METHODS, AND COMPUTER-READABLE MEDIA FOR MONITORING COMMUNICATIONS ON A NETWORK - Network monitoring systems, computer-readable storage media, and methods monitor a network. Communication data is captured from the network in a substantially passive manner. The communication data is organized to represent a plurality of conversations between a plurality of hosts on the network. Each conversation of the plurality includes a first address of a first host of the plurality of hosts, a service port identifier on the first host, and a second address of a second host of the plurality of hosts. Information correlated to at least some of the plurality of conversations is presented on a graphical user interface. | 11-29-2012 |
20120311503 | GESTURE TO TRIGGER APPLICATION-PERTINENT INFORMATION - A system is disclosed for interpreting a gesture which triggers application-pertinent information, such as altering a display to bring objects which are farther away into larger and clearer view. In one example, the application is a golfing game in which a user may perform a peer gesture which, when identified by the application, alters the view to display portions of a virtual golf hole nearer to a virtual green into larger and clearer view. | 12-06-2012 |
20120317517 | VIRTUAL UNIVERSE AVATAR ACTIVITIES REVIEW - A proximity threshold of an avatar is defined with respect to proximity to an artifact located within a virtual universe domain. Activity by the avatar within the virtual universe domain is tracked, with activity data generated from the tracking. The activity data is analyzed to determine proximity of the avatar to the artifact within the proximity threshold, and a report is generated from the analyzing, the report noting a determined proximity of the avatar to the artifact within the proximity threshold. In one aspect, the report is provided to a supervisory entity. | 12-13-2012 |
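A hedged sketch of the proximity check behind entry 20120317517: tracked avatar positions are compared against an artifact's proximity threshold, and the matches are collected into a report. The coordinates and threshold value are illustrative assumptions.

```python
import math  # math.dist requires Python 3.8+

def within_threshold(avatar_pos, artifact_pos, threshold=5.0):
    """True when the avatar position falls inside the artifact's proximity threshold."""
    return math.dist(avatar_pos, artifact_pos) <= threshold

track = [(0, 0, 0), (2, 1, 0), (4, 3, 1)]  # tracked avatar positions over time
artifact = (5, 3, 1)
report = [p for p in track if within_threshold(p, artifact)]
print(report)  # [(2, 1, 0), (4, 3, 1)]
```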
20130014064 | Predictive, Multi-Layer Caching Architectures - Predictive, multi-layer caching architectures may be used to predict which elements a user is most likely to navigate to within a collection of elements associated with a predefined layout and, in response, to increase the accessibility of these elements to a client device of the user. For instance, the techniques may utilize a predictive, multi-layer caching architecture for storing these predicted elements to decrease the latency to render these images if the user navigates within the collection of elements in the predicted manner. The collection of elements may comprise images (e.g., a 3D model, a map, etc.), video files, audio files, text files, or any other type of file that is consumable on a client device. | 01-10-2013 |
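The prediction step in entry 20130014064 can be illustrated under the assumption of a grid-like collection of elements: the neighbours of the currently viewed element are treated as the most likely next views and are prefetched into a cache layer. The neighbourhood rule and tile naming are assumptions, not the patented architecture.

```python
def neighbours(pos, grid_w, grid_h):
    """Return grid positions adjacent to `pos` (the predicted next views)."""
    x, y = pos
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(cx, cy) for cx, cy in candidates if 0 <= cx < grid_w and 0 <= cy < grid_h]

cache = {}
current = (0, 2)  # element the user is viewing now
for p in neighbours(current, grid_w=4, grid_h=4):
    cache[p] = f"tile-{p[0]}-{p[1]}"  # placeholder for the prefetched element
print(sorted(cache))  # [(0, 1), (0, 3), (1, 2)]
```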
20130055165 | Depth Adaptive Modular Graphical User Interface - This invention provides a computerized system's graphical user interface to display two-dimensional content based on depth navigation, allowing the user to navigate inward and outward through modules of content without horizontal or vertical scrolling. Display of said content follows specific arrangements or virtual projections onto animated three- (or more-) dimensional objects to maintain the navigation paradigm and circumvent horizontal and vertical scrolling. | 02-28-2013 |
20130080977 | DYNAMIC CREATION OF VIRTUAL REGIONS - In various embodiments, virtual universe regions are dynamically generated within a virtual universe based on user requests. Dynamic generation allows virtual universe users or “residents” to create virtual universe regions that are tailored to their desired specifications. Additionally, in some implementations, virtual universe users may have the option to instantly discard or retain a created region after evaluation based on whether the region meets the user's expectations. Furthermore, dynamic generation of regions may increase user satisfaction and provide additional means for revenue generation for the virtual universe administrator and for virtual universe businesses and entrepreneurs. | 03-28-2013 |
20130097563 | MULTIDIMENSIONAL-DATA-ORGANIZATION METHOD - The present invention relates to a multiple-dimension-data-organization method. More specifically, this method uses an n-dimensional cube (M-cube) in which each face displays the data on a 2-D planar surface and in which the x and y axes may be changed in accordance with the user's request. More specifically still, the data to be viewed may be easily changed by the user by means of a simple rotation command using a touch-sensitive interface. | 04-18-2013 |
20130132910 | BELT ADAPTED TO MOVEMENTS IN VIRTUAL REALITY - Disclosed is a belt adapted to movements in virtual reality. | 05-23-2013 |
20130159935 | GESTURE INPUTS FOR NAVIGATING IN A 3D SCENE VIA A GUI - Techniques for manipulating a three-dimensional scene displayed via a multi-touch display include receiving information associated with an end-user touching a multi-touch display at one or more screen locations, determining a hand movement based on the information associated with the end-user touching the multi-touch display, determining a command associated with the hand movement, and causing the three-dimensional scene to be manipulated based on the command and the one or more screen locations. The disclosed techniques advantageously provide more intuitive and user-friendly approaches for interacting with a 3D scene displayed on a computing device that includes a multi-touch display. | 06-20-2013 |
20130159936 | CONTENT DISPLAY DEVICE, CONTENT DISPLAY METHOD, PORTABLE TERMINAL, PROGRAM, AND RECORDING MEDIUM - A content display device, content display method, portable terminal, program, and recording medium are disclosed. | 06-20-2013 |
20130179841 | System and Method for Virtual Touring of Model Homes - There is disclosed a system and method for providing a virtual tour of a model property. In an embodiment, a computer-implemented method comprises providing one or more interactive three dimensional viewing modes for viewing the model property. An interactive user control is provided for moving between the one or more interactive three dimensional interactive viewing modes, wherein, the transition between one or more interactive three dimensional viewing modes is continuous. The one or more interactive three dimensional viewing modes may include a floor plan view mode, and a three dimensional room view mode, with a seamless transition in between giving the viewer a sensation of zooming into or out of a selected room. In another embodiment, an interactive first person walk-through view mode is provided whereby a user can interact with one or more features in a room. The user is able to modify a design of one or more features in the model based upon preferences of the user. | 07-11-2013 |
20130191787 | Systems and Methods for Acceleration-Based Motion Control of Virtual Tour Applications - The present invention relates to systems and methods for reliably detecting motion control of mobile devices to navigate virtual tour applications. In one embodiment, a computerized hand-held mobile device is configured to telespot from a first virtual tour environment to a second virtual tour environment upon detection of an intentional user motion, such as a flick, using a motion sensor. Upon detection of a potentially telespotting motion that is greater than a threshold and a viewing field of the mobile device substantially overlapping with an annotated link of the virtual tour, the mobile device telespots from the first virtual tour environment of the virtual tour to the second virtual tour environment of the virtual tour. | 07-25-2013 |
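The telespot trigger in entry 20130191787 combines a motion threshold with a viewing-field check. Below is a minimal sketch under an assumed acceleration threshold and an assumed angular test for the viewing field overlapping an annotated link.

```python
FLICK_THRESHOLD = 15.0  # assumed acceleration magnitude for an intentional flick

def should_telespot(accel_magnitude, view_heading, link_heading, fov=60.0):
    """True when the motion is a flick and the annotated link lies inside the viewing field."""
    overlaps = abs((view_heading - link_heading + 180) % 360 - 180) < fov / 2
    return accel_magnitude > FLICK_THRESHOLD and overlaps

print(should_telespot(18.0, view_heading=90.0, link_heading=100.0))  # True
```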
20130212538 | IMAGE-BASED 3D ENVIRONMENT EMULATOR - An image-based 3D environment emulator incorporates a 3D engine. The background or decor of the 3D environment is created using a series of 2D images, and 3D objects are rendered by the 3D engine. The 2D image displayed on a 2D plane and the 3D objects are projected onto the same plane. The 2D image is visible behind the 3D objects and appears blended therewith. A 3D illusion is created and the user can interact with the 3D objects as he navigates throughout the environment. Navigation from image to image is calculated in real time. A viewing position of the 3D objects inside a 3D space created by the 3D engine is updated to reflect a new viewing position and/or viewing angle in accordance with navigation instructions received from a user. A new 2D image is provided and the projection of the 3D objects is updated accordingly. | 08-15-2013 |
20130275920 | SYSTEMS AND METHODS FOR RE-ORIENTATION OF PANORAMIC IMAGES IN AN IMMERSIVE VIEWING ENVIRONMENT - A computerized display device re-orientates panoramic images in a limited field-of-view immersive viewing environment. The orientation of the display device affects the field of view (FOV) of a corresponding virtual panoramic reality. Upon execution of a user command, the orientation within the immersive viewing environment is disassociated from the orientation of the device in the real world. The device tracks changes in orientation, detects when the change in orientation exceeds a threshold, and, if so, smoothly re-orientates the virtual panoramic reality orientation and FOV to correspond to the device orientation and implied FOV. | 10-17-2013 |
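A rough sketch of the re-orientation trigger in entry 20130275920, assuming heading angles in degrees: once the device's orientation change exceeds a threshold, the virtual orientation is eased back toward the device orientation. The threshold and smoothing factor are placeholders, not values from the application.

```python
REALIGN_THRESHOLD = 20.0  # assumed degrees of device rotation before re-orientation kicks in

def realign(virtual_heading, device_heading, last_device_heading, smoothing=0.25):
    """Return the (possibly re-aligned) virtual heading after a device orientation update."""
    delta = abs(device_heading - last_device_heading)
    if delta > REALIGN_THRESHOLD:
        # move a fraction of the way toward the device heading (smooth re-orientation)
        virtual_heading += smoothing * (device_heading - virtual_heading)
    return virtual_heading

print(realign(0.0, device_heading=45.0, last_device_heading=10.0))  # 11.25
```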
20130290908 | SYSTEMS AND METHODS FOR CREATING AND UTILIZING HIGH VISUAL ASPECT RATIO VIRTUAL ENVIRONMENTS - An interactive virtual method for manufacturing plant construction, planning, design, touring, and/or management, the method comprising: providing a user with a first interactive, virtual environment comprising facility information and equipment information; proposing a question; and navigating the environment to obtain an answer to the proposed question; wherein said method is a computer-based environment comprising a high visual aspect ratio and wherein said method does not employ computer-aided design. | 10-31-2013 |
20130326424 | User Interface For Navigating In a Three-Dimensional Environment - A computer-implemented method provides for navigating into a three-dimensional scene. The method: displays a graphical tool having the shape of a parallelepiped represented in isometric projection; selects one side of the graphical tool; and displays the orthographic view associated with the selected side. Each of the parallelepiped's six sides is associated with an orthographic view of the scene, the three background sides being unfolded so that they are visible. The graphical tool is arranged so that all faces are accurately selectable by the user. | 12-05-2013 |
20130332889 | CONFIGURABLE VIEWCUBE CONTROLLER - A method, apparatus, system, and computer program product provide the ability to display representative properties of a three-dimensional scene view. A 3D scene and a 3D representation of a coordinate system of the 3D scene are displayed. Different faces of the 3D representation represent and correspond to different viewpoints of the 3D scene. Different statistics for features of the 3D scene are reflected on the different faces of the 3D representation based on the viewpoint corresponding to each face. Manipulation of the 3D representation identifies and selects a different viewpoint of the 3D scene which is then reoriented accordingly. | 12-12-2013 |
20130339906 | Virtual Reality Promotion Experience - A user is provided with an interactive three-dimensional visual environment with which the user can virtually experience a retail setting in association with a promotional campaign. The virtual experience includes a plurality of dynamically-loaded promotional assets which include virtual representations of retail products offered for sale or promotional materials related to the retail products. Promotional assets may be developed in accordance with a promotional theme, and several promotional themes may be developed as part of a product's promotional campaign. To facilitate the visualization of different promotional themes, the user is able to selectively interchange promotional assets in the three-dimensional visual environment to virtually experience the retail setting under each promotional theme and to compare the virtual promotional experiences. | 12-19-2013 |
20140215407 | METHOD AND SYSTEM IMPLEMENTING USER-CENTRIC GESTURE CONTROL - A user-centric method and system to identify user-made gestures to control a remote device images the user using a three-dimensional image system, and defines at least one user-centric three-dimensional detection zone dynamically sized appropriately for the user, who is free to move about. Gestures made within the detection zone are compared to a library of stored gestures, and the thus-identified gesture is mapped to an appropriate control command signal coupleable to the remote device. The method and system also provide for a first user to hand off control of the remote device to a second user. | 07-31-2014 |
20140245232 | VERTICAL FLOOR EXPANSION ON AN INTERACTIVE DIGITAL MAP - A digital map of a geographic area is displayed via a user interface, and a 3D representation of a multi-story building located in the geographic area is displayed on the digital map. The 3D representation includes multiple stacked floor maps corresponding to the floors of the multi-story building. In response to the detection of a pinch gesture that is applied to the 3D representation, a vertical distance between each floor map is expanded relative to the detected pinch gesture to reveal features of an internal map that corresponds to each floor map. | 08-28-2014 |
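The pinch-driven floor expansion in entry 20140245232 can be sketched as a spacing function: the vertical gap between stacked floor maps grows with the pinch spread. The baseline gap and gain below are assumed values for illustration.

```python
def floor_offsets(num_floors, pinch_spread, base_gap=1.0, gain=0.05):
    """Return the vertical offset of each floor map for a given pinch spread (in pixels)."""
    gap = base_gap + gain * max(pinch_spread, 0.0)
    return [i * gap for i in range(num_floors)]

print(floor_offsets(4, pinch_spread=120.0))  # [0.0, 7.0, 14.0, 21.0]
```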
20140325455 | VISUAL 3D INTERACTIVE INTERFACE - Techniques for generating and displaying a visual three-dimensional (3D) interactive interface are described. According to an exemplary embodiment, a 3D perspective view of a user-selectable user interface element is displayed on display screen of a device. The 3D perspective view of the element may have an apparent position that extends outward from the display screen of the device into a three-dimensional space outside the display screen of the device. Thereafter, a motion detection system may detect a user motion at or proximate to the apparent position of the user interface element in the three-dimensional space outside the display screen of the user device. According to an exemplary embodiment, the detected user motion may be classified as a user selection of the element. According to an exemplary embodiment, an operation associated with the selected element may be performed, in response to the user selection of the element. | 10-30-2014 |
20140344761 | Relocation Between Virtual Environments Based Upon Promotional and Alert Conditions - Awards of value are awarded to residents of a virtual universe for consenting to be teleported in response to invitations for teleportation. The consent can be made conditional upon any of a variety of circumstances which can be specified by a resident of the virtual universe for a corresponding avatar and which form rules for auto-teleportation. These conditions can be stored and searched and avatar and location status monitored and compared to the conditions to control issuance of invitations to increase the likelihood that an invitation will be automatically accepted. A delay before acceptance is also preferably provided to provide for graceful conclusion or termination of current avatar activity. | 11-20-2014 |
20150040073 | Zoom, Rotate, and Translate or Pan In A Single Gesture - Embodiments relate to navigating through a three dimensional environment on a mobile device using a single gesture. A first user input is received, indicating that two or more objects have touched a view of the mobile device. Two or more target locations on a surface of the three-dimensional environment corresponding to the two or more objects touching the view of the mobile device are determined. A second user input indicating that the two objects have performed a motion while touching the view of the mobile device is received. Camera parameters for the virtual camera, based on the received second user input, are determined. The virtual camera is moved within the three dimensional environment according to the determined camera parameters, such that the two or more target locations remain corresponding to the two or more objects touching the view of the mobile device. Moving the virtual camera may include zooming, rotating, tilting, and panning the virtual camera. | 02-05-2015 |
20150074611 | Three-Dimensional Tilt and Pan Navigation Using a Single Gesture - Systems and methods for providing tilt and pan navigation within a three-dimensional environment in response to a single gesture are provided. An exemplary computer-implemented method includes receiving, by one or more computing devices, data describing a drag performed by a user. The computer-implemented method includes, in response to the drag, incrementally adjusting, by the one or more computing devices, a tilt angle of a virtual camera until the tilt angle equals one of a maximum tilt angle or a minimum tilt angle, and panning, by the one or more computing devices, the virtual camera with respect to a rendering of a three-dimensional model. An exemplary system includes a client device and a server in operative communication over a network. | 03-12-2015 |
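Entry 20150074611 describes a drag that first consumes the available tilt range and only then pans the camera. Below is a minimal sketch with assumed per-pixel rates and tilt limits, not values from the application.

```python
MIN_TILT, MAX_TILT = 0.0, 75.0  # assumed tilt limits in degrees

def apply_drag(tilt, pan, drag_dy, tilt_rate=0.2, pan_rate=0.5):
    """Apply a vertical drag (pixels): tilt until a limit is reached, then pan the remainder."""
    desired = tilt + drag_dy * tilt_rate
    clamped = max(MIN_TILT, min(MAX_TILT, desired))
    leftover_pixels = (desired - clamped) / tilt_rate  # drag not absorbed by tilting
    return clamped, pan + leftover_pixels * pan_rate

print(apply_drag(70.0, 0.0, drag_dy=50.0))  # (75.0, 12.5)
```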
20150113483 | Method for Human-Computer Interaction on a Graphical User Interface (GUI) - The invention provides a method for human-computer interaction on a graphical user interface (GUI), a GUI, a navigation tool, computers and computer operated devices. The method includes the steps of: determining coordinates of a pointer with, or relative to, an input device; determining coordinates of interactive objects of which at least two objects are displayed; establishing a threshold in relation to the interactive objects and in relation to space about them; prioritizing the interactive objects in relation to their distance and/or direction to the pointer; moving the interactive objects and thresholds relative to the object priority; repeating the above steps every time the coordinates of the pointer change; and performing an action when a threshold is reached. | 04-23-2015 |
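The prioritisation step in entry 20150113483 can be sketched by ranking interactive objects by their distance to the pointer each time the pointer moves; the Euclidean distance metric and the object layout are assumptions made for illustration.

```python
import math

def prioritize(pointer, objects):
    """Return object names ordered from nearest to farthest from the pointer."""
    px, py = pointer
    return sorted(objects, key=lambda name: math.hypot(objects[name][0] - px,
                                                       objects[name][1] - py))

objs = {"ok": (10, 10), "cancel": (100, 10), "help": (40, 80)}
print(prioritize((12, 15), objs))  # ['ok', 'help', 'cancel']
```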
20150135144 | APPARATUS FOR OBTAINING VIRTUAL 3D OBJECT INFORMATION WITHOUT REQUIRING POINTER - Disclosed is an apparatus for obtaining 3D virtual object information which includes a 3D coordinates calculation portion for calculating 3D coordinates data for a body of a user and extracting first space coordinates and second space coordinates from the calculated 3D coordinates data; a touch location calculation portion for calculating contact point coordinates on the surface of the virtual object, built on the 3D map information, that is met by a line connecting the first space coordinates and the second space coordinates extracted by the 3D coordinates calculation portion; and a space location matching portion for extracting the virtual object (location) corresponding to the contact point coordinate data calculated by the touch location calculation portion, and providing the extracted corresponding information of the virtual object to a display portion of the user's terminal or of the apparatus for obtaining the 3D virtual object information. | 05-14-2015 |
20150324114 | SYSTEM AND METHOD FOR INTERACTIVE 3D SURGICAL PLANNING AND MODELLING OF SURGICAL IMPLANTS - A method and system for interactive 3D surgical planning are provided. The method and system provide 3D visualisation and manipulation of at least one anatomical feature in response to intuitive user inputs, including gesture inputs. In aspects, fracture segmentation and reduction, screw placement and fitting, and plate placement and contouring in a virtual 3D environment are provided. | 11-12-2015 |
20150331555 | THREE-DIMENSIONAL SPACE FOR NAVIGATING OBJECTS CONNECTED IN HIERARCHY - Disclosed herein are systems, methods, and non-transitory computer-readable storage media for browsing objects organized in a hierarchy using a three-dimensional user interface. Some embodiments of the present technology involve a platform that renders an interface that represents objects that are hierarchically connected in three-dimensional space and that allows navigation through the hierarchy by moving through the three-dimensional space. | 11-19-2015 |
20150331575 | Method and System for Intent Centric Multi-Facet Content Presentation - Methods, systems, and programming for presenting personalized content. In one example, a plurality of pieces of content are retrieved in accordance with an estimated intent determined with respect to a user. A three-dimensional (3D) viewing construct is generated based on the plurality of pieces of content. The 3D viewing construct is to be rendered in a user viewing interface comprising a plurality of content display panels. Each of the plurality of content display panels is used to display at least one of the plurality of pieces of content. Navigation information from an interaction between the user and the user viewing interface is received. The 3D viewing construct is dynamically updated based on the navigation information. | 11-19-2015 |
20150331576 | MANIPULATING VIRTUAL ENVIRONMENT USING NON-INSTRUMENTED PHYSICAL OBJECT - A method of manipulating a three-dimensional image file including a virtual object includes obtaining image information in a processing device of a non-instrumented physical object manipulated by a user, such image information including movement information; and causing virtual movement of the virtual object based on the movement information. A method of shaping a virtual object includes obtaining image information including movement information; and determining a shape of the virtual object based on the movement information. A method of modifying a virtual object includes obtaining image information including movement information; and altering a virtual surface appearance of at least a part of the virtual object based on the movement information. Systems and computer-readable media are also described. | 11-19-2015 |
20150363072 | SYSTEM, METHOD AND INTERFACE FOR VIEWER INTERACTION RELATIVE TO A 3D REPRESENTATION OF A VEHICLE - A system, method and interface for viewer interaction relative to a 3D representation of a vehicle are provided including providing a viewer interface, presenting a 3D vehicle representation to a viewer, receiving input from a viewer via the viewer interface relative to a desired aspect or perspective of the vehicle, and adjusting or changing the 3D vehicle representation to correspond with the viewer indicated desired aspect or perspective. | 12-17-2015 |
20160004314 | APPLICATION SWAP BASED ON SMART DEVICE POSITION - A smart device capable of switching between at least two applications based on the position of the smart device and a method of doing the same is provided. | 01-07-2016 |
20160092068 | VISUALIZATION OF ADDRESSES - Provided are techniques for visualization of addresses. Search criteria for an entity is received. One or more addresses for the entity are identified based on the search criteria. For each of the one or more addresses, a floor associated with the entity at that address is identified, related information for the floor that includes photographs of views seen from that floor are identified, and, on a 2-dimensional (2-D) electronic map, a 3-dimensional (3-D) structure that indicates the floor and provides access to the related information is plotted. | 03-31-2016 |
20160139769 | VISUALIZATION OF AN OBJECT USING A VISUAL QUERY SYSTEM - A computer-implemented method for visualizing data about an object. A hierarchy of image blocks is generated using an action scheme and a part. Instructions identifying a hierarchy of image blocks and the action scheme are generated. The hierarchy of image blocks is communicated to a graphical user interface. An image area is identified in an image block in the hierarchy of image blocks in the graphical user interface. A query is generated to identify a location of the part within the object. The query is based on a type of search, a spatial region, and the action scheme. An indicator representing the location of the part identified by the query is displayed. | 05-19-2016 |
20160147408 | VIRTUAL MEASUREMENT TOOL FOR A WEARABLE VISUALIZATION DEVICE - Disclosed is a technique for generating and displaying a virtual measurement tool in a wearable visualization device, such as a headset, glasses or goggles equipped to provide an augmented reality and/or virtual reality experience for the user. In certain embodiments, the device generates the tool by determining multiple points, each at a different location in a three-dimensional space occupied by the user, based on input from the user, for example, by use of gesture recognition, gaze tracking and/or speech recognition. The device displays the tool so that the tool appears to the user to be overlaid on a real-time, real view of the user's environment. | 05-26-2016 |