Document | Title | Date |
20080266323 | Augmented reality user interaction system - An augmented reality user interaction system includes a wearable computer equipped with at least one camera to detect one or more fiducial markers worn by a user. A user-mounted visual display worn by the user is employed to display visual 3D information. The computer detects in an image a fiducial marker worn by the user, extracts a position and orientation of the fiducial marker in the image, and superimposes on the image a visual representation of a user interface component directly on or near the user based on the position and orientation. | 10-30-2008 |
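The workflow this abstract describes — detect a fiducial marker, extract its position and orientation, then superimpose a UI component near the user — can be sketched in pure Python. All names here are hypothetical illustrations, not the patent's implementation; a real system would obtain the marker pose from a vision library such as OpenCV's ArUco module:

```python
import math

def place_ui_component(marker_x, marker_y, marker_angle_deg, offset=(40, 0)):
    """Anchor a UI component at a fixed pixel offset from a detected
    fiducial marker, rotating the offset by the marker's in-plane
    orientation so the component stays attached as the marker turns."""
    theta = math.radians(marker_angle_deg)
    dx, dy = offset
    # Rotate the offset vector into the marker's reference frame,
    # then translate by the marker's image position.
    ux = marker_x + dx * math.cos(theta) - dy * math.sin(theta)
    uy = marker_y + dx * math.sin(theta) + dy * math.cos(theta)
    return round(ux), round(uy)

# Marker at (100, 100), unrotated: component sits 40 px to the right.
print(place_ui_component(100, 100, 0))   # → (140, 100)
# Same marker rotated 90°: the component rotates with it.
print(place_ui_component(100, 100, 90))  # → (100, 140)
```

This only handles the 2D in-plane case; the patent's full pipeline would use the complete 3D pose (rotation and translation) recovered from the marker's corners.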
20080297535 | Terminal device for presenting an improved virtual environment to a user - The virtual environment terminal device comprises a user terminal device that interfaces the user to a computer controlled virtual reality system via the sense of touch by applying forces, vibrations, and/or motions to the user to emulate the sensations that the user would encounter in the environment emulated by the virtual reality system while also providing a three-dimensional image of the workspace by using a view splitting device to display the screens of two different monitors to respective ones of the user's two eyes. | 12-04-2008 |
20090021531 | Window or door showing remote scenery in real-life motion - A device and method to show scenery in a window. | 01-22-2009 |
20090091583 | Apparatus and method for on-field virtual reality simulation of US football and other sports - An apparatus and method are disclosed for simulating United States football and other sports that are held on a playing field. The user stands in an area that at least approximates an actual playing field, and an apparatus incorporated into a football helmet or other headgear worn by the user superimposes simulation images onto the field of view of the user, creating an illusion of simulated action taking place on the actual field where the user is standing. This makes the information and skills conveyed by the simulation directly relevant and immediately useful. Preferred embodiments track the location and orientation of the user and thereby allow the user to participate in the simulation. In another aspect, essentially the same apparatus and method are used to simulate driving or flying of vehicles without the need of an expensive mockup of the interior of the vehicle. | 04-09-2009 |
20090109240 | Method and System for Providing and Reconstructing a Photorealistic Three-Dimensional Environment - The present invention relates to a method and system for providing and reconstructing a photorealistic environment, by integrating a virtual item into it, comprising: (a) a dedicated marker, placed in a predefined location within an environment, in which a virtual item has to be integrated, for enabling determining the desired location of said virtual item within said environment; (b) a conventional camera for taking a picture or shooting a video clip of said environment, in which said marker was placed, and then providing a corresponding image of said environment; and (c) one or more servers for receiving said corresponding image of said environment from said camera, processing it, and outputting a photorealistic image that contains said virtual item integrated within it, comprising: (c.1.) a composer for composing a photorealistic image from said corresponding image of said environment; (c.2.) an image processing unit for processing said corresponding image and for determining the location of said marker within said environment; (c.3.) a configuration database for storing configurations and other data; and (c.4.) an image rendering unit for reconstructing the photorealistic image by integrating said virtual item into said predefined location of the photographed environment, wherein said marker is located. | 04-30-2009 |
20090109241 | IMAGE DISPLAY SYSTEM, IMAGE DISPLAY APPARATUS, AND CONTROL METHOD THEREOF - Upon receiving a communication switching instruction from a first wireless access point used for communication with an image processing apparatus, an image display apparatus disconnects communication with the first wireless access point. Simultaneously, the image display apparatus transmits, to a second wireless access point, a link request to establish communication with the second wireless access point of a new communication destination included in the switching instruction. The image display apparatus displays, on a display unit, a captured image continuously acquired from an image capturing unit until the switching of the communication destination from the first wireless access point to the second wireless access point finishes. | 04-30-2009 |
20090147025 | METHOD AND SYSTEM FOR MODIFICATION OF TURF TV PARTICIPANT DECORATIONS BASED ON MULTIPLE REAL-TIME FACTORS - A method of modifying sporting event participant decorations displayed on a fiber optic “Turf TV” playing surface based on multiple real-time factors. A decoration utility calculates a direction of movement of a player or object in proximity to the playing surface, which is configured to display images, during a live sporting event. The utility adds a graphical aura to a real-time graphical image displayed in proximity to the player on the playing surface. The utility animates the aura in response to wind and/or noise in proximity to the playing surface. The utility modifies the aura based on pre-defined custom attributes, penalties, errors, and/or player status. If the player moves, the utility adds a graphical player trail to the image. The utility also adds a graphical object trail that includes previous locations of an object. The object trail may also include spin and a visual appearance corresponding to an object height. | 06-11-2009 |
20090167787 | AUGMENTED REALITY AND FILTERING - A system (and corresponding method) that can enhance a user experience by augmenting real-world experiences with virtual-world data is provided. The augmented reality system discloses various techniques to personalize real-world experiences by overlaying or interspersing virtual capabilities (and data) with real-world situations. The innovation can also filter, rank, modify or ignore virtual-world information based upon a particular real-world class, user identity or context. | 07-02-2009 |
20090237419 | METHOD AND APPARATUS FOR EVOKING PERCEPTIONS OF AFFORDANCES IN VIRTUAL ENVIRONMENTS - Methods and apparatus are provided for evoking perceptions of affordances in a user/virtual environment interface. The method involves recognizing the absence or inadequacy of certain sensory stimuli in the user/virtual environment interface, and then creating sensory stimuli in the virtual environment to substitute for the recognized absent or inadequate sensory stimuli. The substitute sensory stimuli are typically communicated to the user (e.g., visually and/or audibly) as properties and behavior of objects in the virtual environment. Appropriately designed substitute sensory stimuli can evoke perceptions of affordances for the recognized absent or inadequate sensory stimuli in the user/virtual environment interface. | 09-24-2009 |
20090244097 | System and Method for Providing Augmented Reality - A system and method for providing augmented reality. A method comprises retrieving a specification of an environment of the electronic device, capturing optical information of the environment of the electronic device, and computing the starting position/orientation from the captured optical information and the specification. The use of optical information in addition to positional information from a position sensor to compute the starting position may improve a viewer's experience with a mobile augmented reality system. | 10-01-2009 |
20090289956 | VIRTUAL BILLBOARDS - Disclosed are methods and apparatus for implementing a reality overlay device. A reality overlay device captures information that is pertinent to physical surroundings with respect to a device, the information including at least one of visual information or audio information. The reality overlay device may transmit at least a portion of the captured information to a second device. For instance, the reality overlay device may transmit at least a portion of the captured information to a server via the Internet, where the server is capable of identifying an appropriate virtual billboard. The reality overlay device may then receive overlay information for use in generating a transparent overlay via the reality overlay device. The transparent overlay is then superimposed via the device using the overlay information, wherein the transparent overlay provides one or more transparent images that are pertinent to the physical surroundings. Specifically, one or more of the transparent images may operate as “virtual billboards.” Similarly, a portable device such as a cell phone may automatically receive a virtual billboard when the portable device enters an area within a specified distance from an associated establishment. | 11-26-2009 |
20090315916 | ON-THE-FLY CREATION OF VIRTUAL PLACES IN VIRTUAL WORLDS - A specification of a set of objects associated with at least one virtual world is obtained. The objects are laid out in a three-dimensional virtual representation. An on-the-fly virtual place is created in the virtual world, based on the layout. The virtual place depicts the set of objects in the three-dimensional virtual representation and enables navigation and interaction therewith. | 12-24-2009 |
20100026714 | MIXED REALITY PRESENTATION SYSTEM - An image composition unit outputs a composition image of a physical space and virtual space to a display unit. The image composition unit calculates, as difference information, a half of the difference between an imaging time of the physical space and a generation completion predicted time of the virtual space. The difference information and acquired position and orientation information are transmitted to an image processing apparatus. A line-of-sight position prediction unit updates previous difference information using the received difference information, calculates, as the generation completion predicted time, a time ahead of a receiving time by the updated difference information, and predicts the position and orientation of a viewpoint at the calculated generation completion predicted time using the received position and orientation information. The virtual space based on the predicted position and orientation, and the generation completion predicted time are transmitted to a VHMD. | 02-04-2010 |
20100045700 | DEVICE FOR WATCHING REAL-TIME AUGMENTED REALITY AND METHOD FOR IMPLEMENTING SAID DEVICE - The invention relates to a real-time augmented-reality watching device. | 02-25-2010 |
20100045701 | AUTOMATIC MAPPING OF AUGMENTED REALITY FIDUCIALS - Systems and methods expedite and improve the process of configuring an augmented reality environment. A method of pose determination according to the invention includes the step of placing at least one synthetic fiducial in a real environment to be augmented. A camera, which may include apparatus for obtaining directly measured camera location and orientation (DLMO) information, is used to acquire an image of the environment. The natural and synthetic fiducials are detected, and the pose of the camera is determined using a combination of the natural fiducials, the synthetic fiducial if visible in the image, and the DLMO information if determined to be reliable or necessary. The invention is not limited to architectural environments, and may be used with instrumented persons, animals, vehicles, and any other augmented or mixed reality applications. | 02-25-2010 |
20100091036 | Method and System for Integrating Virtual Entities Within Live Video - The present application provides a method and system for inserting virtual entities into live video with proper depth and obscuration. The virtual entities are drawn using a model of the real terrain, animated virtual entities, and a location of the live camera and field of view. The virtual entities are then merged with the live video feed. The merging can occur in real-time so that virtual entity is inserted into the live video feed in real-time. | 04-15-2010 |
20100103196 | SYSTEM AND METHOD FOR GENERATING A MIXED REALITY ENVIRONMENT - A system and method for generating a mixed-reality environment is provided. The system and method provides a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining a user's real-world scene or environment and indicating a user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting the synthetic objects into a user's field of view. Rendering the synthetic objects on the user-worn display creates the virtual effect for the user that the synthetic objects are present in the real world. | 04-29-2010 |
20100149213 | Virtual Penetrating Mirror Device for Visualizing of Virtual Objects within an Augmented Reality Environment - A virtual penetrating mirror device. | 06-17-2010 |
20100164990 | SYSTEM, APPARATUS, AND METHOD FOR AUGMENTED REALITY GLASSES FOR END-USER PROGRAMMING - A system, apparatus, and method are provided for augmented reality (AR) glasses. | 07-01-2010 |
20100182340 | SYSTEMS AND METHODS FOR COMBINING VIRTUAL AND REAL-TIME PHYSICAL ENVIRONMENTS - Systems, methods and structures for combining virtual reality and real-time environment by combining captured real-time video data and real-time 3D environment renderings to create a fused, that is, combined environment, including capturing video imagery in RGB or HSV/HSV color coordinate systems and processing it to determine which areas should be made transparent, or have other color modifications made, based on sensed cultural features, electromagnetic spectrum values, and/or sensor line-of-sight, wherein the sensed features can also include electromagnetic radiation characteristics such as color, infra-red, ultra-violet light values, cultural features can include patterns of these characteristics, such as object recognition using edge detection, and whereby the processed image is then overlaid on, and fused into a 3D environment to combine the two data sources into a single scene to thereby create an effect whereby a user can look through predesignated areas or “windows” in the video image to see into a 3D simulated world, and/or see other enhanced or reprocessed features of the captured image. | 07-22-2010 |
20100194782 | METHOD AND APPARATUS FOR CREATING VIRTUAL GRAFFITI IN A MOBILE VIRTUAL AND AUGMENTED REALITY SYSTEM - A method and apparatus is provided for easily creating virtual graffiti that will be left for a particular device to view. During operation a device will be placed near a first point that is used to define a boundary for the virtual graffiti. The device will locate the first point, and use the point to define the boundary. The device will receive an image that is to be used as virtual graffiti, and will fit the image within the boundary of the virtual graffiti. For example, the device may be consecutively placed near four points that will define a polygon to be used as the boundary for the virtual graffiti. An image will then be received, and the image will be fit within the polygon. | 08-05-2010 |
20100245387 | SYSTEMS AND METHODS FOR COMBINING VIRTUAL AND REAL-TIME PHYSICAL ENVIRONMENTS - Systems, methods and structures for combining virtual reality and real-time environment by combining captured real-time video data and real-time 3D environment renderings to create a fused, that is, combined environment, including capturing video imagery in RGB or HSV/HSV color coordinate systems and processing it to determine which areas should be made transparent, or have other color modifications made, based on sensed cultural features, electromagnetic spectrum values, and/or sensor line-of-sight, wherein the sensed features can also include electromagnetic radiation characteristics such as color, infra-red, ultra-violet light values, cultural features can include patterns of these characteristics, such as object recognition using edge detection, and whereby the processed image is then overlaid on, and fused into a 3D environment to combine the two data sources into a single scene to thereby create an effect whereby a user can look through predesignated areas or “windows” in the video image to see into a 3D simulated world, and/or see other enhanced or reprocessed features of the captured image. | 09-30-2010 |
20100253700 | Real-Time 3-D Interactions Between Real And Virtual Environments - Systems and methods providing for real and virtual object interactions are presented. Images of virtual objects can be projected onto the real environment, now augmented. Images of virtual objects can also be projected to an off-stage invisible area, where the virtual objects can be perceived as holograms through a semi-reflective surface. A viewer can observe the reflected images while also viewing the augmented environment behind the pane, resulting in one perceived uniform world, all sharing the same Cartesian coordinates. One or more computer-based image processing systems can control the projected images so they appear to interact with the real-world object from the perspective of the viewer. | 10-07-2010 |
20100271394 | SYSTEM AND METHOD FOR MERGING VIRTUAL REALITY AND REALITY TO PROVIDE AN ENHANCED SENSORY EXPERIENCE - A system and method of merging virtual reality sensory detail from a remote site into a room environment at a local site. The system preferably includes at least one image server; a plurality of image collection devices; a display system, comprising display devices, a control unit, digital processor and a viewer position detector. The control unit preferably receives the viewer position information and transmits instructions to the digital processor. The digital processor preferably processes source data representing an aggregated field of view from the image capturing devices in accordance with the instructions received from the control unit and outputs refined data representing a desired display view to be displayed on the one or more display devices wherein the viewer position detector dynamically determines the position of the viewer in the room environment and changes the desired display view corresponding to position changes of the viewer. | 10-28-2010 |
20100277504 | METHOD AND SYSTEM FOR SERVING THREE DIMENSION WEB MAP SERVICE USING AUGMENTED REALITY - Disclosed is a method for a 3-dimensional (3D) web map service using augmented reality, the method including downloading a mapping information file where 2-dimensional (2D) marker information and 3D modeling data are mapped, receiving map data including the 2D marker information from a map data providing server, rendering a map to a frame buffer in advance using the received map data, extracting an identification (ID) of the 3D modeling data through detecting 2D marker information from the map data and searching the mapping information file, extracting the 3D modeling data corresponding to the detected 2D marker information from a 3D modeling database using the ID of the 3D modeling data, additionally rendering the 3D modeling data to the frame buffer after processing the 3D modeling data, and rendering the rendered data to a screen. | 11-04-2010 |
20100309225 | IMAGE MATCHING FOR MOBILE AUGMENTED REALITY - Embodiments of a system and method for mobile augmented reality are provided. In certain embodiments, a first image is acquired at a device. Information corresponding to at least one second image matched with the first image is obtained from a server. A displayed image on the device is augmented with the obtained information. | 12-09-2010 |
20100328344 | METHOD AND APPARATUS FOR AN AUGMENTED REALITY USER INTERFACE - An approach is provided for an augmented reality user interface. An image representing a physical environment is received. Data relating to a horizon within the physical environment is retrieved. A section of the image to overlay location information based on the horizon data is determined. Presenting of the location information within the determined section to a user equipment is initiated. | 12-30-2010 |
20110001760 | METHOD AND SYSTEM FOR DISPLAYING AN IMAGE GENERATED BY AT LEAST ONE CAMERA - The invention is directed to a method and system for displaying an image generated by at least one camera. | 01-06-2011 |
20110018903 | AUGMENTED REALITY DEVICE FOR PRESENTING VIRTUAL IMAGERY REGISTERED TO A VIEWED SURFACE - An augmented reality device for inserting virtual imagery into a user's view of their physical environment. The device comprises: a see-through display device including a wavefront modulator; a camera for imaging a surface in the physical environment; and a controller. The controller is configured for capturing an image of the surface; determining the virtual imagery to be displayed at a predetermined position relative to the surface; determining a position of the surface relative to the augmented reality device; generating an image based on the virtual imagery and on the position of the surface relative to the augmented reality device; and displaying the generated image via the display device. Based on pixel depth information, the controller modulates the wavefront curvature of light emitted for each pixel so that the user sees the virtual imagery at the predetermined position relative to the surface regardless of changes in position of the user's eyes with respect to the display device. | 01-27-2011 |
20110063324 | IMAGE PROJECTION SYSTEM, IMAGE PROJECTION METHOD, AND IMAGE PROJECTION PROGRAM EMBODIED ON COMPUTER READABLE MEDIUM - An image projection system includes a projector to project a projected image onto a drawing surface of a whiteboard, a drawn-image detecting portion to detect a drawn image which is drawn on the drawing surface of the whiteboard while the projector is projecting the projected image, and a modification portion which, when the drawn image is detected, specifies from the projected image which is projected onto the projection surface a part including at least a drawn image part overlapping the detected drawn image, and modifies the specified part of the projected image, on the basis of the detected drawn image, so as to emphasize or cancel the drawn image. | 03-17-2011 |
20110074816 | SYSTEMS AND METHODS FOR INTEGRATING GRAPHIC ANIMATION TECHNOLOGIES IN FANTASY SPORTS CONTEST APPLICATIONS - Systems and methods for integrating graphic animation technologies with fantasy sports contest applications are provided. This invention enables a fantasy sports contest application to depict plays in various sporting events using graphic animation. The fantasy sports contest application may combine graphical representation of real-life elements such as, for example, player facial features, with default elements such as, for example, a generic player body, to create realistic graphic video. The fantasy sports contest application may provide links to animated videos for depicting plays on contest screens in which information associated with the plays may be displayed. The fantasy sports contest application may play the animated video for a user in response to the user selecting such a link. In some embodiments of the present invention, the fantasy sports contest application may also customize animated video based on user-supplied setup information. For example, the fantasy sports contest application may provide play information and other related data to allow a user to generate animated videos using the user's own graphics processing equipment and graphics animation program. | 03-31-2011 |
20110084983 | Systems and Methods for Interaction With a Virtual Environment - Systems and methods for interaction with a virtual environment are disclosed. In some embodiments, a method comprises generating a virtual representation of a user's non-virtual environment, determining a viewpoint of a user in a non-virtual environment relative to a display, and displaying, with the display, the virtual representation in a spatial relationship with the user's non-virtual environment based on the viewpoint of the user. | 04-14-2011 |
20110090252 | MARKERLESS AUGMENTED REALITY SYSTEM AND METHOD USING PROJECTIVE INVARIANT - Disclosed herein are a markerless augmented reality system and method for extracting feature points within an image and providing augmented reality using a projective invariant of the feature points. The feature points are tracked in two images photographed while varying the position of an image acquisition unit, a set of feature points satisfying a plane projective invariant is obtained from the feature points, and augmented reality is provided based on the set of feature points. Accordingly, since the set of feature points satisfies the plane projective invariant even when the image acquisition unit is moved and functions as a marker, a separate marker is unnecessary. In addition, since augmented reality is provided based on the set of feature points, a total computation amount is decreased and augmented reality is more efficiently provided. | 04-21-2011 |
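The abstract above relies on a set of feature points satisfying a plane projective invariant so they can serve as a marker. The patent gives no formulas, but the classic projective invariant is the cross-ratio of four collinear points, which is unchanged by any projective transformation. A minimal sketch of that property (the specific map used here is an arbitrary example, not from the patent):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (AC/BC) / (AD/BD) of four collinear points given as
    scalar positions along a line. Invariant under projective maps."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

def projective_map(x):
    """An arbitrary 1D projective transformation x -> (2x+1)/(x+3),
    standing in for a change of camera viewpoint."""
    return (2 * x + 1) / (x + 3)

points = [0.0, 1.0, 2.0, 4.0]
before = cross_ratio(*points)
after = cross_ratio(*(projective_map(x) for x in points))
# The cross-ratio survives the viewpoint change.
print(before, after)  # → 1.5 1.5 (up to floating-point error)
```

This invariance is what lets a tracked point set keep functioning as a marker while the camera moves: the measured cross-ratio identifies the same physical configuration from any view.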
20110090253 | AUGMENTED REALITY LANGUAGE TRANSLATION SYSTEM AND METHOD - A real-time augmented-reality machine translation system and method are provided herein. | 04-21-2011 |
20110096093 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM - There is provided an image processing device, including: a data storage unit storing feature data indicating a feature of appearance of an object; an environment map generating unit for generating an environment map representing a position of one or more objects existing in a real space based on an input image obtained by imaging the real space using an imaging device and the feature data stored in the data storage unit; and an output image generating unit for generating an output image obtained by erasing an erasing target object from the input image based on a position of the erasing target object specified out of objects present in the input image represented in the environment map and a position of the imaging device. | 04-28-2011 |
20110102459 | AUGMENTED REALITY GAMING VIA GEOGRAPHIC MESSAGING - Geographic gaming via a scalable, wireless geographic broadcast protocol enables multiplayer gaming between communication devices without relying on traditional network elements. Games can be fully distributed over an ad hoc network of mobile communications devices. The scalable nature of the wireless geographic broadcast protocol enables multiplayer games to function equally well in both remote areas with no or little network service and in crowded areas containing both game players and other users of mobile communications devices. Wireless geographic broadcast messages distributed among multiplayer game participants can be used to control gameplay features and/or game elements of multiplayer games. Embodiments include simulated artillery battles, simulated throw and catch games, and simulated reconnaissance elements. | 05-05-2011 |
20110102460 | PLATFORM FOR WIDESPREAD AUGMENTED REALITY AND 3D MAPPING - A client device sends the following data to the servers: still frames from captured video and in some embodiments other data such as GPS coordinates, compass reading, and accelerometer data. The servers break down each frame into feature points and match those feature points to existing point cloud data to determine client device's point of view (POV). The servers send the resulting information back to the client device, which uses the POV information to render augmentation content on a video stream. Information sent by client devices to the server can be used to augment the feature-point cloud. | 05-05-2011 |
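The server-side step described above — breaking a frame into feature points and matching them against an existing point cloud — can be illustrated with a tiny nearest-neighbor matcher. This is a generic sketch with made-up descriptors, not the patent's algorithm; the nearest/second-nearest distance ratio test is a standard filtering heuristic (Lowe-style) for rejecting ambiguous matches:

```python
def match_features(frame_desc, cloud_desc, ratio=0.7):
    """Match each frame descriptor to its nearest point-cloud descriptor,
    keeping a match only if the nearest neighbor is clearly closer than
    the second nearest (distance ratio test)."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    matches = []
    for i, f in enumerate(frame_desc):
        ranked = sorted(range(len(cloud_desc)), key=lambda j: dist(f, cloud_desc[j]))
        best, second = ranked[0], ranked[1]
        if dist(f, cloud_desc[best]) < ratio * dist(f, cloud_desc[second]):
            matches.append((i, best))  # (frame index, cloud index)
    return matches

cloud = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
frame = [(0.1, 0.0),   # unambiguously close to cloud point 0
         (5.0, 5.0)]   # equidistant from all cloud points: rejected
print(match_features(frame, cloud))  # → [(0, 0)]
```

With enough such 2D-to-3D correspondences, the server can solve for the client's point of view (e.g., via a perspective-n-point solver) and return it for rendering the augmentation.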
20110128300 | Augmented reality videogame broadcast programming - There is provided a system and method for integrating a virtual rendering system and a video capture system using flexible camera control to provide an augmented reality. There is provided a method comprising receiving input data from a plurality of clients for modifying a virtual environment presented using the virtual rendering system, obtaining, from the virtual rendering system, a virtual camera configuration of a virtual camera in the virtual environment, programming the video capture system using the virtual camera configuration to correspondingly control a robotic camera in a real environment, capturing a video capture feed using the robotic camera, obtaining a virtually rendered feed using the virtual camera showing the modifying of the virtual environment, rendering the composite render by processing the feeds, and outputting the composite render to the display. | 06-02-2011 |
20110148922 | APPARATUS AND METHOD FOR MIXED REALITY CONTENT OPERATION BASED ON INDOOR AND OUTDOOR CONTEXT AWARENESS - Provided are an apparatus and method for mixed reality content operation based on indoor and outdoor context awareness. The apparatus for mixed reality content operation includes a mixed reality visualization processing unit superposing at least one of a virtual object and a text on an actual image which is acquired through the camera to generate a mixed reality image; a context awareness processing unit receiving at least one of sensed data peripheral to the mobile device and a location and posture data of the camera to perceive a peripheral context of the mobile device on the basis of the received data; and a mixed reality application content driving unit adding a content in the mixed reality image to generate an application service image, the content being provided in a context linking type according to the peripheral context. | 06-23-2011 |
20110157223 | ELECTRO-OPTIC VISION SYSTEMS - An image processing system for displaying an augmented real scene image. The system includes a device for identifying a geographic position, a processor for combining a real scene image at the position with information about the real scene to produce an augmented image and a display for displaying the augmented image. | 06-30-2011 |
20110187743 | TERMINAL AND METHOD FOR PROVIDING AUGMENTED REALITY - A first terminal shares a digital marker edited in a digital marker editing mode and an object corresponding to the edited digital marker with a second terminal using a wireless communication technology. If a digital marker is displayed on an image display unit of the first terminal, the second terminal photographs the digital marker using a camera, and synthesizes an object corresponding to the photographed digital marker with a real-time video image obtained through the camera to display a merged image as augmented reality. Then, the second terminal receives input information for changing the digital marker from a user, and transmits the received input information to the first terminal. The first terminal changes a digital marker using the input information received from the second terminal. The second terminal photographs the changed digital marker, and displays an object corresponding to the changed digital marker. | 08-04-2011 |
20110187744 | SYSTEM, TERMINAL, SERVER, AND METHOD FOR PROVIDING AUGMENTED REALITY - A system, a terminal, a server, and a method for providing an augmented reality are capable of providing environment information data in a direction viewed by a user from a current position. The server for providing an augmented reality manages information data to be provided to the terminal in a database according to a section. If the server receives current position information and direction information of the terminal from the terminal connected according to an execution of an augmented reality mode, the server searches information data in a direction in which the terminal faces in a section in which the terminal is currently located from the database, and transmits the searched information data to the terminal. | 08-04-2011 |
20110187745 | APPARATUS AND METHOD FOR PROVIDING AUGMENTED REALITY INFORMATION - A system and method for providing augmented reality (AR) information to a mobile communication terminal in a mobile communication system are provided. If the mobile communication terminal is determined to have entered a service cell providing AR information, the mobile communication terminal transmits an AR information request including position information to a server. Upon receiving the AR information request, the server determines AR information including at least one tag pattern provided in the service cell and information associated with the tag pattern, and transmits the AR information to the mobile communication terminal. | 08-04-2011 |
20110205242 | Augmented Reality Design System - An augmented reality design system is disclosed. The augmented reality design system allows a user to create a design for an article in real time using a proxy. The system can be configured using a head mounted display for displaying at least one virtual design element over a proxy located in a real-world environment. The system can also be configured using a projector that projects at least one virtual design element onto a proxy located in the real world. | 08-25-2011 |
20110205243 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, PROGRAM, AND IMAGE PROCESSING SYSTEM - There is provided an image processing apparatus including: an input image acquisition unit for obtaining an input image generated by taking an image of a real space; an image recognition unit for recognizing, when a first user-input representing a start of manipulation is detected, a manipulator used for manipulating a virtual object, wherein the manipulator appears in the input image; a calculation unit for calculating, according to a result of the recognition of the manipulator provided by the image recognition unit, a position on a screen of a display device at which the virtual object is to be displayed; a display control unit for displaying the virtual object at the position of the screen of the display device calculated by the calculation unit; and a communication unit for transmitting, when the first user-input is detected, a first notification signal for notifying the start of manipulation to another apparatus displaying the same virtual object. | 08-25-2011 |
20110216088 | INTERPRETATION OF CONSTRAINED OBJECTS IN AUGMENTED REALITY - Technologies are generally described for interpretation of constrained objects in augmented reality. An example system may comprise a processor, a memory arranged in communication with the processor, and a display arranged in communication with the processor. An example system may further comprise a sensor arranged in communication with the processor. The sensor may be effective to detect measurement data regarding a constrained object. The sensor may be configured to send the measurement data to the processor. The processor may be effective to receive the measurement data, determine a model for the object, and process the measurement data to produce weighted measurement data. The processor may also be effective to apply a filter to the model and to the weighted measurement data to produce position information regarding the object, which may be utilized to generate an image based on the position information. The display may be effective to display the image. | 09-08-2011 |
20110216089 | ALIGNMENT OF OBJECTS IN AUGMENTED REALITY - Technologies are generally described for aligning objects in augmented reality. In some examples, a processor may be adapted to receive detected image data and virtual object data. In some examples, the processor may further be adapted to generate and apply weights to log-likelihood functions at intensity and feature levels based on the virtual object data and detected image data. In some examples, the processor may further be adapted to add the weighted log-likelihood function at intensity level to the weighted log-likelihood function at feature level to produce a cost function. In some examples, the processor may further be adapted to determine transformation parameters based on the cost function that may be used to align the detected image data with virtual object data. | 09-08-2011 |
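The weighted cost-function construction in this abstract can be sketched in a few lines. The Gaussian intensity model, the gradient-based stand-in for feature-level data, and the exhaustive shift search below are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def alignment_cost(image, template, tx, w_intensity=0.5, w_feature=0.5):
    """Combine weighted log-likelihoods at intensity and feature levels
    into one cost for a candidate horizontal shift tx (assumed model)."""
    shifted = np.roll(template, tx, axis=1)
    # Intensity level: Gaussian error model gives a negative SSD (assumption).
    ll_intensity = -np.sum((image - shifted) ** 2)
    # Feature level: horizontal gradients stand in for detected features.
    ll_feature = -np.sum((np.diff(image, axis=1) - np.diff(shifted, axis=1)) ** 2)
    return w_intensity * ll_intensity + w_feature * ll_feature

def best_shift(image, template, shifts):
    # Transformation parameter = argmax of the combined cost function.
    return max(shifts, key=lambda tx: alignment_cost(image, template, tx))
```

In this toy setup the transformation space is one-dimensional; the abstract's transformation parameters would generalize to full 2D or 3D pose.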
20110216090 | REAL-TIME INTERACTIVE AUGMENTED REALITY SYSTEM AND METHOD AND RECORDING MEDIUM STORING PROGRAM FOR IMPLEMENTING THE METHOD - The present invention relates to a real-time interactive system and method for an interactive technology linking miniatures in a real environment with digital contents in a virtual environment, and a recording medium storing a program for performing the method. An exemplary embodiment of the present invention provides a real-time interactive augmented reality system including: an input information acquiring unit acquiring input information for an interaction between the real environment and virtual contents in consideration of a planned story; a virtual contents determining unit determining the virtual contents according to the acquired input information; and a matching unit matching the real environment and the virtual contents by using an augmented position of the virtual contents acquired in advance. According to the present invention, an interaction between the real environment and the virtual contents can be implemented without tools, and improved immersion can be obtained by augmentation using natural features. | 09-08-2011 |
20110221769 | ROBUST OBJECT RECOGNITION BY DYNAMIC MODELING IN AUGMENTED REALITY - Technologies are generally described for providing a robust object recognition scheme based on dynamic modeling. Correlations in fine scale temporal structure of cellular regions may be employed to group the regions together into higher-order entities. The entities represent a rich structure and may be used to code high level objects. Object recognition may be formatted as elastic graph matching. | 09-15-2011 |
20110221770 | SELECTIVE MOTOR CONTROL CLASSIFICATION - Techniques for detecting and classifying motion of a human subject are generally described. More particularly, techniques are described for detecting and classifying motion as either a broad selection or a precise selection to facilitate interaction with augmented reality (AR) systems. An example may include detecting a motion of a human subject and repeatedly analyzing a step response associated with the motion to determine one or more of a peak time t | 09-15-2011 |
20110221771 | Merging of Grouped Markers in An Augmented Reality-Enabled Distribution Network - The present invention relates to systems and methods for merging smart markers in augmented reality. The system includes a server supporting the presentation of information within augmented reality of a plurality of participants. A communication network facilitates the transfer of information from the server to devices of the plurality of participants. A first participant in the plurality of participants is associated with a view of augmented reality and a plurality of smart markers, with each smart marker having an attribute. A merged group of smart markers includes smart markers from within the plurality of smart markers with said attribute being a first common attribute, wherein smart markers in the merged group are displayable within the augmented reality of the first participant. | 09-15-2011 |
20110221772 | System And Methods For Generating Virtual Clothing Experiences - A system for generating a virtual clothing experience has a display for mounting on a wall, one or more digital cameras for capturing first images of a person standing in front of the display, and an image processing module for synthesizing the first images and generating a display image that substantially appears, to the person, like a reflection of the person in a mirror positioned at the display. The cameras capture second images of a garment with the person; the module synthesizes the second images with the first images to generate an image in which, to the person, the reflection substantially appears to wear the garment. A home version of the system may be formed with a home computer and a database storing garment images in cooperation with a manufacturer. | 09-15-2011 |
20110227945 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing device including a virtual space recognition unit for analyzing the 3D space structure of a real space to recognize a virtual space, a storage unit for storing an object to be arranged in the virtual space, a display control unit for making a display unit display the object arranged in the virtual space, a direction of gravitational force detection unit for detecting the direction of gravitational force of the real space, and a direction of gravitational force reflection unit for reflecting the direction of gravitational force detected by the detection unit in the virtual space. | 09-22-2011 |
20110242133 | AUGMENTED REALITY METHODS AND APPARATUS - Augmented reality methods and apparatus are described according to some aspects of the disclosure. In one aspect, a method of experiencing augmented data includes: using a source system, emitting a dynamic symbol which changes over time; using a consumption system, receiving the emission of the source system; analyzing the received emission to determine whether the dynamic symbol is present in it; and, as a result of determining that the dynamically changing symbol is present in the emission of the source system, generating a representation of augmented data to be consumed by a user of the consumption system. | 10-06-2011 |
20110242134 | METHOD FOR AN AUGMENTED REALITY CHARACTER TO MAINTAIN AND EXHIBIT AWARENESS OF AN OBSERVER - Methods and systems for enabling an augmented reality character to maintain and exhibit awareness of an observer are provided. A portable device held by a user is utilized to capture an image stream of a real environment, and generate an augmented reality image stream which includes a virtual character. The augmented reality image stream is displayed on the portable device to the user. As the user maneuvers the portable device, its position and movement are continuously tracked. The virtual character is configured to demonstrate awareness of the user by, for example, adjusting its gaze so as to look in the direction of the portable device. | 10-06-2011 |
20110254859 | IMAGE PROCESSING SYSTEM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing apparatus including: a communication unit receiving first feature amounts, which include coordinates of feature points in an image acquired by another image processing apparatus, and position data showing a position in the image of a pointer that points at a location in a real space; an input image acquisition unit acquiring an input image by image pickup of the real space; a feature amount generating unit generating second feature amounts including coordinates of feature points set in the acquired input image; a specifying unit comparing the first feature amounts and the second feature amounts and specifying, based on a comparison result and the position data, a position in the input image of the location in the real space being pointed at by the pointer; and an output image generating unit generating an output image displaying an indicator indicating the specified position. | 10-20-2011 |
20110254860 | MOBILE DEVICE FOR AUGMENTED REALITY APPLICATION - The mobile device includes a visual input device for capturing external visual information having real visual background information, and a processing device. The processing device associates a selected application with the external visual information and executes the selected application based on the external visual information and on user-related input information. The processing device is further configured to generate a visual output signal related to at least one virtual visual object in response to the application, and to provide the visual output signal to a projector device included within the mobile device such that the projector device projects the visual output signal related to the at least one virtual visual object onto the visual background, thereby modifying the external visual information. | 10-20-2011 |
20110254861 | INFORMATION DISPLAYING APPARATUS AND INFORMATION DISPLAYING METHOD - An information displaying apparatus capable of providing an easy-to-see display of information that the user wants to know out of information related to objects seen in a captured real-world image. The information displaying apparatus ( | 10-20-2011 |
20110279478 | Virtual Tagging Method and System - A method for associating a virtual tag with a geographical location, comprising the steps of displaying ( | 11-17-2011 |
20110279479 | Narrowcasting From Public Displays, and Related Methods - A user with a cell phone interacts, in a personalized session, with an electronic sign system. In some embodiments, the user's location relative to the sign is discerned from camera imagery—either imagery captured by the cell phone (i.e., of the sign), or captured by the sign system (i.e., of the user). Demographic information about the user can be estimated from imagery acquired by the sign system, or can be accessed from stored profile data associated with the user. The sign system can transmit payoffs (e.g., digital coupons or other response data) to viewers—customized per user demographics. In some arrangements, the payoff data is represented by digital watermark data encoded in the signage content. The encoding can take into account the user's location relative to the sign—allowing geometrical targeting of different payoffs to differently-located viewers. Other embodiments allow a user to engage an electronic sign system for interactive game play, using the cell phone as a controller. | 11-17-2011 |
20110298824 | SYSTEM AND METHOD OF VIRTUAL INTERACTION - A system for virtual interaction, comprising two or more portable electronic devices, is provided. Each device comprises, in turn, coordinate referencing means operable to define a coordinate system common to the portable electronic devices with respect to a physically defined reference position, position estimation means operable to detect the physical position of its respective portable electronic device with respect to the reference position, virtual environment generation means operable to generate a virtual environment, and communication means operable to transmit positional data using the common coordinate system from that portable electronic device to another portable electronic device. The virtual environment is shared in common between the portable electronic devices. The virtual environment uses the common coordinate system, within which each portable electronic device defines a position for itself responsive to its physical position with respect to the reference position. | 12-08-2011 |
20110298825 | IMAGE PROCESSING METHOD AND IMAGE PROCESSING APPARATUS - A CG image having a transparency parameter is superimposed on a shot image, which is an image picked up by an image-pickup device, to obtain a combined image. The combined image is displayed in a combined-image-display region. In the combined image, a mask region of the CG image is set based on parameter information used to extract a region of a hand. The transparency parameter of the CG image is set based on a ratio of the size of the region of the CG image excluding the mask region to the size of the shot image. By checking the combined image, which is displayed in the combined-image-display region, the user can set the parameter information by a simple operation. | 12-08-2011 |
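The ratio-based transparency setting described in this abstract can be illustrated with boolean masks. Using the ratio directly as an alpha value is an assumed mapping for this sketch:

```python
import numpy as np

def transparency_from_masks(cg_region, mask_region, shot_shape):
    """Transparency parameter from the ratio of the CG region, excluding
    the mask region (e.g. the extracted hand), to the shot-image size."""
    visible_cg = cg_region & ~mask_region          # CG pixels not masked out
    ratio = visible_cg.sum() / (shot_shape[0] * shot_shape[1])
    return float(ratio)                            # e.g. used as alpha in [0, 1]
```

With a 10×10 shot image, a 30-pixel CG region, and 10 of those pixels masked by the hand, the transparency parameter comes out as 20/100 = 0.2.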
20110304647 | INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD - An information processing section of a game apparatus executes a program which includes: acquiring a real world image; setting the most recent view matrix of a virtual camera based on a detected marker S | 12-15-2011 |
20110304648 | MOBILE TERMINAL AND METHOD FOR OPERATING THE MOBILE TERMINAL - A mobile terminal and a method for operating the mobile terminal are provided. The method senses touch of the mobile terminal in a predetermined mode and senses movement of the mobile terminal upon determining that the mobile terminal has been gripped based on the touch and then changes the mode of the mobile terminal according to the predetermined mode and the movement of the mobile terminal. This method enhances user convenience since it is possible to change the mode of the mobile terminal through movement of the mobile terminal while the mobile terminal is gripped. | 12-15-2011 |
20110310120 | TECHNIQUES TO PRESENT LOCATION INFORMATION FOR SOCIAL NETWORKS USING AUGMENTED REALITY - Techniques to present location information using augmented reality are described. An apparatus may comprise an augmentation system operative to augment an image with information for an individual, the image having a virtual object representing a real object. The augmentation system may comprise a location component operative to determine location information for the real object, a virtual information component operative to retrieve location information for an individual, and a proximity component operative to determine whether location information for the real object substantially matches location information for the individual. The augmentation system may further comprise an augmentation component operative to augment the virtual object with information for the individual to form an augmented object when the location information for the real object substantially matches the location information for the individual. Other embodiments are described and claimed. | 12-22-2011 |
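The "substantially matches" comparison between the real object's location and the individual's location can be modeled as a simple distance threshold. The planar coordinates and tolerance value below are assumptions for illustration:

```python
def locations_match(obj_loc, person_loc, tolerance=10.0):
    """True when the two (x, y) locations lie within `tolerance` units of
    each other; a stand-in for the abstract's 'substantially matches' test."""
    dx = obj_loc[0] - person_loc[0]
    dy = obj_loc[1] - person_loc[1]
    return (dx * dx + dy * dy) ** 0.5 <= tolerance
```

When this predicate holds, the augmentation component would overlay the individual's information on the virtual object representing the real object.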
20110316880 | METHOD AND APPARATUS PROVIDING FOR ADAPTATION OF AN AUGMENTATIVE CONTENT FOR OUTPUT AT A LOCATION BASED ON A CONTEXTUAL CHARACTERISTIC - An apparatus may include a contextual characteristic determiner configured to determine a contextual characteristic of a location. A sensory device may collect sensed data and location information which is used to determine the contextual characteristic of the location. The sensed data, contextual characteristic and/or location information may be compiled into a database by a database compiler. Further, an ambient content package sharer may request and/or provide sensed data and/or determined contextual characteristics to other devices or the database compiler for inclusion in the database. An augmentative content adaptor may thereby provide for adaptation of an augmentative content for output at the location based on the contextual characteristic. Contextual characteristics may include audible contextual characteristics and visual contextual characteristics. | 12-29-2011 |
20120001938 | METHODS, APPARATUSES AND COMPUTER PROGRAM PRODUCTS FOR PROVIDING A CONSTANT LEVEL OF INFORMATION IN AUGMENTED REALITY - An apparatus for providing a constant level of information in an augmented reality environment may include a processor and memory storing executable computer program code that cause the apparatus to at least perform operations including determining a first number of points of interest associated with a first set of real world objects of a current location(s). The first set of real world objects is currently displayed. The computer program code may further cause the apparatus to determine whether the first number is below a predetermined threshold and may increase a view range of a device to display a second set of real world objects. The view range may be increased in order to increase the first number to a second number of points of interest that corresponds to the threshold, based on determining that the first number is below the threshold. Corresponding methods and computer program products are also provided. | 01-05-2012 |
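The range-expansion step above can be sketched as a loop that widens the view range until enough points of interest fall inside it. The step size, cap, and distance units are assumptions:

```python
def required_view_range(poi_distances, start_range, threshold,
                        step=50.0, max_range=1000.0):
    """Expand the device's view range until the number of points of
    interest within range reaches the threshold (distances in meters)."""
    view_range = start_range
    while (sum(d <= view_range for d in poi_distances) < threshold
           and view_range < max_range):
        view_range += step
    return view_range
```

For POIs at 10, 120, 260, and 700 m with a 100 m starting range and a threshold of three, the range grows in 50 m steps until 300 m, where three POIs are visible.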
20120001939 | METHODS, APPARATUSES AND COMPUTER PROGRAM PRODUCTS FOR AUTOMATICALLY GENERATING SUGGESTED INFORMATION LAYERS IN AUGMENTED REALITY - An apparatus for automatically suggesting information layers in augmented reality may include a processor and memory storing executable computer program code that cause the apparatus to at least perform operations including providing layers of information relating to virtual information corresponding to information indicating a current location of the apparatus. The computer program code may further cause the apparatus to determine that a layer(s) of information is enabled to provide virtual information for display. The virtual information corresponds to locations of real world objects in or proximate to the current location. The computer program code may further cause the apparatus to determine other information layers associated with content for the current location, based on the number of items of virtual information for the enabled layer being below a threshold, and automatically suggest one or more other layers of information for selection. Corresponding methods and computer program products are also provided. | 01-05-2012 |
20120007884 | APPARATUS AND METHOD FOR PLAYING MUSICAL INSTRUMENT USING AUGMENTED REALITY TECHNIQUE IN MOBILE TERMINAL - An apparatus and a method related to an application of a mobile terminal using an augmented reality technique capture an image of a musical instrument directly drawn/sketched by a user to recognize the relevant musical instrument, and provide an effect of playing the musical instrument on the recognized image as if a real instrument were being played. The apparatus preferably includes an image recognizer and a sound source processor. The image recognizer recognizes a musical instrument on an image through a camera. The sound source processor outputs the recognized musical instrument on the image on a display unit to use the same for a play, and matches the musical instrument play on the image to a musical instrument play output on the display unit. | 01-12-2012 |
20120007885 | System and Method for Viewing Golf Using Virtual Reality - A system and method for viewing artificial reality (AR) messages on a golf course, where the messages are geo-referenced artificial reality words or symbols to indicate distances, tips, targets or other information to the golfer. Typically, the AR messages are geo-referenced to a fixed location on the golf hole, such as a hazard or green. Using the spectator's chosen location as the viewing origin, an artificial reality message or object is inserted into the golfer's perspective view of the golf hole. Outings and contests can be held even if the matches are separated by hours or days, and outcomes and information published to select groups or individuals. | 01-12-2012 |
20120007886 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - Disclosed herein is an information processing apparatus configured to edit video, including: a computer graphics image generation block configured to execute realtime rendering of a computer graphics animation by use of a timeline time with a fraction permitted for a seconds value that is a minimum unit as a parameter indicative of a temporal position of the computer graphics animation; an operation input block configured to enter a user operation for specifying progression of the computer graphics animation; and a control block configured to control the computer graphics image generation block in response to the user operation entered through the operation input block. | 01-12-2012 |
20120019557 | DISPLAYING AUGMENTED REALITY INFORMATION - A device may obtain location information of an AR display device and obtain identifiers associated with objects that are within a field of view of the AR display device based on the location information. In addition, the device may obtain, for each of the objects, AR information based on the identifiers and determine, for each of the objects, a distance of the object from the AR display device. Furthermore, the device may generate, for each of the objects, images of the AR information at a virtual distance from the AR display device, the virtual distance corresponding to the determined distance. The device may display the generated images at the AR display device. | 01-26-2012 |
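Rendering AR information "at a virtual distance" corresponding to the object's real distance can be approximated by scaling label size inversely with distance. This pinhole-style inverse-scaling model is an assumption for the sketch:

```python
def apparent_size(base_size, distance, reference_distance=1.0):
    """Scale an AR label so it appears at a virtual distance matching the
    object's determined distance; clamped so near objects don't blow up."""
    return base_size * reference_distance / max(distance, reference_distance)
```

A 100-pixel label for an object at twice the reference distance would then render at 50 pixels, reinforcing the depth cue described in the abstract.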
20120019558 | APPARATUS AND METHOD FOR PROVIDING AUGMENTED REALITY SERVICE USING SOUND - An apparatus and method for providing an AR service using a sound. When a user starts an AR service providing function in a mobile communication terminal, a sound signal is received and analyzed to determine whether it carries additional information. If additional information is carried on the sound signal, the additional information is extracted, data associated with the extracted additional information is acquired, and the AR service is provided using the acquired data. Accordingly, various kinds of additional information may be provided, and the AR service may be provided. | 01-26-2012 |
20120026190 | METHODS AND SYSTEMS FOR ATTITUDE DIFFERENTIATION IN ENHANCED VISION IMAGES - Methods and systems are provided for displaying information on a flight deck display onboard an aircraft. An exemplary method comprises obtaining image data for an imaging region and displaying, on the display device, a graphical representation of a first portion of the image data using a first visually distinguishable characteristic and a graphical representation of a second portion of the image data using a second visually distinguishable characteristic. The first portion corresponds to a portion of the image data above an attitude reference and the second portion corresponds to a portion of the image data below the attitude reference, and the first visually distinguishable characteristic and the second visually distinguishable characteristic are different. | 02-02-2012 |
20120026191 | METHOD FOR DISPLAYING AUGMENTATION INFORMATION IN AN AUGMENTED REALITY SYSTEM - A method for displaying augmentation information in an augmented reality system ( | 02-02-2012 |
20120026192 | APPARATUS AND METHOD FOR PROVIDING AUGMENTED REALITY (AR) USING USER RECOGNITION INFORMATION - An apparatus and method for providing Augmented Reality (AR) using user recognition information includes: acquiring user recognition information for a real object; selecting a virtual object that is to be mapped to the acquired user recognition information; and adding the user recognition information as mapping information for the selected virtual object. | 02-02-2012 |
20120032977 | APPARATUS AND METHOD FOR AUGMENTED REALITY - Disclosed is a method for augmented reality. A real world image including a marker is generated, the marker is detected from the real world image, an object image corresponding to the detected marker is combined with the real world image, and the combined image is displayed. | 02-09-2012 |
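A minimal version of the detect-and-combine pipeline in this abstract follows. The single-pixel "marker" and top-left anchoring are deliberate simplifications standing in for real fiducial detection and pose estimation:

```python
import numpy as np

def detect_marker(image, marker_value=255):
    """Return (row, col) of the first pixel equal to marker_value,
    a toy stand-in for real marker detection."""
    ys, xs = np.nonzero(image == marker_value)
    return (int(ys[0]), int(xs[0]))

def composite_at_marker(real_image, object_image, marker_pos):
    """Overlay the object image onto the real-world image, anchored at
    the detected marker position (no clipping handled in this sketch)."""
    out = real_image.copy()
    y, x = marker_pos
    h, w = object_image.shape[:2]
    out[y:y + h, x:x + w] = object_image
    return out
```

A production system would instead use a marker library (e.g. ArUco-style detection) and warp the object image by the marker's estimated pose before blending.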
20120038668 | METHOD FOR DISPLAY INFORMATION AND MOBILE TERMINAL USING THE SAME - A method for controlling information on a mobile terminal may be provided. The method may include displaying an image including at least one object on a display of the mobile terminal, receiving information regarding movement of a pointer with respect to the displayed image on the display, obtaining augmented reality (AR) information regarding the object based on the received information regarding movement of the pointer, and displaying the image and the augmented reality (AR) information related to the at least one object on the display of the mobile terminal. | 02-16-2012 |
20120038669 | USER EQUIPMENT, SERVER, AND METHOD FOR SELECTIVELY FILTERING AUGMENTED REALITY - An augmented reality filter selecting user equipment includes a display unit to display a real image including a target object and a plurality of filter icons, a user input unit to receive a user command, the user command including a selection of a target filter icon and a movement of the target filter icon, and a control unit to control the display unit to display the target object and filtered AR information corresponding to the target object. A method for selecting a filter includes displaying a real image including a target object and a plurality of filter icons, receiving a command to select a target filter icon, applying the target filter icon by moving the selected target filter icon onto a target object on the displayed image, and displaying the target object and information corresponding to the target object, in which the target filter icon is applied. | 02-16-2012 |
20120038670 | APPARATUS AND METHOD FOR PROVIDING AUGMENTED REALITY INFORMATION - An apparatus, system, and method for providing augmented reality (AR) information of a concealed object are disclosed. The method for providing AR information of a concealed object by a terminal connectable to a server via a wired and/or wireless communication network may include acquiring an image of a real environment; defining a reference object included in the acquired image; obtaining image capturing position information about a position of the image and reference object recognition information of the defined reference object; transmitting the obtained image capturing position information and the reference object recognition information to the server; receiving information about concealed objects from the server, the concealed objects being disposed behind the reference object along or about a direction from the image capturing position to the reference object; and outputting the received information about concealed objects. | 02-16-2012 |
20120038671 | USER EQUIPMENT AND METHOD FOR DISPLAYING AUGMENTED REALITY WINDOW - A user equipment to display an augmented reality (AR) window includes a display unit to display an image and AR windows corresponding to objects included in the image, and a control unit to determine an arrangement pattern of the AR windows by adjusting at least one of a size, a display location, a display pattern, and a color of the AR windows and to control the display unit to display the AR windows in the determined arrangement pattern, together with the objects. A method includes detecting the object in the image, generating the AR window corresponding to the object, determining an arrangement pattern of the AR window based on an adjustment of an attribute, and displaying the AR window in the determined arrangement pattern along with the object. | 02-16-2012 |
20120044263 | TERMINAL DEVICE AND METHOD FOR AUGMENTED REALITY - A terminal device and method for augmented reality (AR) are disclosed herein. The terminal device includes: a communication unit to communicate with an object server, the object server storing images of a plurality of objects and property information corresponding to levels of each object; an object recognition unit to recognize an object contained in an image; and a control unit to receive, from the object server, property information corresponding to a pixel value of the recognized object, and to combine the received property information and the recognized object. | 02-23-2012 |
20120044264 | APPARATUS AND METHOD FOR PROVIDING AUGMENTED REALITY - An apparatus and method for providing augmented reality (AR) includes acquiring an image of a real world including a first object, setting the first object as a reference object, acquiring a photographing position, a photographing direction, and a distance value between the reference object and the photographing position, acquiring map information corresponding to the photographing position and a photographing direction, mapping the reference object to the map information by using the acquired distance value, detecting AR information of the objects from the map information, and outputting the detected AR information. | 02-23-2012 |
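Mapping the reference object onto the map from the photographing position, direction, and distance reduces to basic trigonometry. Flat x/y map coordinates and a compass-style bearing (0° = north, along +y) are assumptions here:

```python
import math

def project_reference(position, bearing_deg, distance):
    """Map position of the reference object, given the photographing
    position (x, y), compass bearing in degrees, and measured distance."""
    x, y = position
    rad = math.radians(bearing_deg)
    return (x + distance * math.sin(rad), y + distance * math.cos(rad))
```

Once the reference object is placed on the map this way, AR information for nearby map objects can be detected and output as the abstract describes; real systems would use geodetic coordinates rather than a flat plane.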
20120056898 | IMAGE PROCESSING DEVICE, PROGRAM, AND IMAGE PROCESSING METHOD - There is provided an image processing device including: a recognition unit configured to recognize a plurality of users present in an input image captured by an imaging device; an information acquisition unit configured to acquire display information to be displayed in association with each user recognized by the recognition unit; a weight determination unit configured to determine a weight of each user recognized by the recognition unit; and an output image generation unit configured to generate an output image by determining a display position of the display information associated with each user on the basis of the weight of each user determined by the weight determination unit and overlaying the display information on the input image in the determined display position. | 03-08-2012 |
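The weight-based display positioning in the entry above might be sketched as follows. The slot model, function name, and weights are illustrative assumptions, not details from the application:

```python
# Sketch of weight-based label placement: higher-weight users get label
# slots nearer the most prominent position (slot 0). The slot model and
# all names here are hypothetical, not from the patent abstract.

def assign_label_slots(users):
    """users: list of (name, weight) pairs.
    Returns {name: slot_index}, slot 0 being the most prominent."""
    ranked = sorted(users, key=lambda u: u[1], reverse=True)
    return {name: slot for slot, (name, _weight) in enumerate(ranked)}

slots = assign_label_slots([("alice", 0.9), ("bob", 0.4), ("carol", 0.7)])
print(slots)  # {'alice': 0, 'carol': 1, 'bob': 2}
```

A real implementation would map slot indices to screen coordinates before overlaying the display information on the input image.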
20120062595 | METHOD AND APPARATUS FOR PROVIDING AUGMENTED REALITY - There is provided a method of providing Augmented Reality (AR) using the relationship between objects in a server that is accessible to at least one terminal through a wired/wireless communication network, including: recognizing a first object-of-interest from first object information received from the terminal; detecting identification information and AR information about related objects associated with the first object-of-interest, and storing the identification information and AR information about the related objects; recognizing, when receiving second object information from the terminal, a second object-of-interest using the identification information about the related objects; and detecting AR information corresponding to the second object-of-interest from the AR information about the related objects, and transmitting the detected AR information to the terminal. | 03-15-2012 |
20120062596 | PROVIDING AUGMENTED REALITY INFORMATION - A system, method, and computer program product for providing augmented reality information are disclosed. The method includes capturing an image of a set of items with an image capturing component coupled to a network-enabled computing device associated with a user identifier. The captured image is processed to identify each item of the set of items, while a predefined list of the user's preferences is retrieved using the user identifier. For each identified item, a check is made whether the item matches a condition related to the predefined list of the user's preferences. Based on the matching result, item information is conveyed to the network-enabled computing device and overlaid on the image. | 03-15-2012 |
20120069051 | Method and System for Compositing an Augmented Reality Scene - Disclosed are systems and methods for compositing an augmented reality scene, the methods including the steps of extracting, by an extraction component into a memory of a data-processing machine, at least one object from a real-world image detected by a sensing device; geometrically reconstructing at least one virtual model from the at least one object; and compositing AR content from the at least one virtual model in order to augment the AR content on the real-world image, thereby creating an AR scene. Preferably, the method further includes: extracting at least one annotation from the real-world image into the memory of the data-processing machine for modifying the at least one virtual model according to the at least one annotation. Preferably, the method further includes: interacting with the AR scene by modifying the AR content based on modification of the at least one object and/or the at least one annotation in the real-world image. | 03-22-2012 |
20120069052 | METHOD AND TERMINAL FOR PROVIDING DIFFERENT IMAGE INFORMATION IN ACCORDANCE WITH THE ANGLE OF A TERMINAL, AND COMPUTER-READABLE RECORDING MEDIUM - The present invention includes a method for providing different images by referring to angles of a terminal. The method includes the steps of: (a) if the angle falls under an angular range which includes 0 degrees with respect to the horizontal plane, setting a content provided through the terminal to be map information, if the angle falls under an angular range which includes 90 degrees with respect to the horizontal plane, setting a content provided through the terminal to be a preview image, and if the angle falls under an angular range which includes 180 degrees with respect to the horizontal plane, setting a content provided through the terminal to be weather information; (b) acquiring information on the angle by using a sensor; (c) creating information on an image based on the set content; and (d) providing a user with the created information. | 03-22-2012 |
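The angle-to-content mapping in the entry above could be sketched as below. The exact angular range boundaries are illustrative assumptions; the abstract only states that the ranges include 0, 90, and 180 degrees:

```python
def content_for_angle(angle_deg):
    """Map the terminal's tilt angle (relative to the horizontal plane)
    to a content mode. Range boundaries are illustrative, not from the
    patent, which leaves the angular ranges unspecified."""
    if -45 <= angle_deg < 45:      # range including 0 deg: device flat, facing up
        return "map"
    if 45 <= angle_deg < 135:      # range including 90 deg: device upright
        return "preview"
    return "weather"               # range including 180 deg: device facing down

print(content_for_angle(10))   # map
print(content_for_angle(80))   # preview
print(content_for_angle(170))  # weather
```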
20120075341 | METHODS, APPARATUSES AND COMPUTER PROGRAM PRODUCTS FOR GROUPING CONTENT IN AUGMENTED REALITY - An apparatus for grouping content in an augmented reality environment may include a processor and memory storing executable computer code that cause the apparatus to at least perform operations including receiving a detection of real world objects, of a current location, that are currently displayed. The computer program code may further cause the apparatus to determine whether one or more of the real world objects are located along a line of direction and determine virtual objects that correspond to the real world objects located along the line. The computer program code may further cause the apparatus to display an item of visible indicia signifying a group, associated with the virtual objects, that is positioned so as to correspond to at least one of the real world objects located along the line. Corresponding methods and computer program products are also provided. | 03-29-2012 |
20120075342 | AUGMENTING IMAGE DATA BASED ON RELATED 3D POINT CLOUD DATA - Embodiments of the invention describe processing a first image data and 3D point cloud data to extract a first planar segment from the 3D point cloud data. This first planar segment is associated with an object included in the first image data. A second image data is received, the second image data including the object captured in the first image data. A second planar segment related to the object is generated, where the second planar segment is geometrically consistent with the object as captured in the second image data. This planar segment is generated based, at least in part, on the second image data, the first image data and the first planar segment. | 03-29-2012 |
20120075343 | AUGMENTED REALITY (AR) SYSTEM AND METHOD FOR TRACKING PARTS AND VISUALLY CUEING A USER TO IDENTIFY AND LOCATE PARTS IN A SCENE - An AR system both identifies and visually tracks parts for a user by maintaining spatial awareness of the user's pose and provides instructions to the user for the use of those parts. Tracking the identified parts, both inside and outside the current Field of View (FOV), and any missing parts for use with the current instruction improves the effectiveness and efficiency of novice and experienced users alike. | 03-29-2012 |
20120075344 | INFORMATION DISPLAY DEVICE - An information display device displays at least one item of display target information in respective screen elements, receives, while catalog display takes place, an instruction operation made using the display target information shown in the screen elements displayed as a catalog, and executes a process based on the instruction operation. | 03-29-2012 |
20120075345 | METHOD, TERMINAL AND COMPUTER-READABLE RECORDING MEDIUM FOR PERFORMING VISUAL SEARCH BASED ON MOVEMENT OR POSITION OF TERMINAL - The present invention includes a method for performing visual search based on a movement and/or an angular position of a terminal. The method includes the steps of: (a) sensing a movement and/or an angular position of a terminal by using at least one sensor; (b) determining whether a triggering event occurs or not by referring to at least one of the sensed movement and the sensed angular position of the terminal; and (c) if the triggering event occurs, allowing visual search to be performed for at least one of objects included in an output image displayed on the terminal at the time of the occurrence of the triggering event; wherein the output image is generated in a form of augmented reality by combining an image inputted through the terminal in real time with information relevant thereto. | 03-29-2012 |
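The trigger-detection step in the entry above might look like the following sketch. The thresholds, the "aiming" band, and the OR-combination of the two readings are assumptions; the abstract only requires referring to at least one of the sensed movement and the sensed angular position:

```python
def triggered(movement_mag, angle_deg, move_thresh=2.5, angle_range=(70, 110)):
    """Fire a visual-search trigger when the sensed movement magnitude
    exceeds a threshold OR the angular position falls in an 'aiming'
    band. Thresholds and band are illustrative, not from the patent."""
    return movement_mag >= move_thresh or angle_range[0] <= angle_deg <= angle_range[1]

print(triggered(3.0, 0))    # True  (sharp movement)
print(triggered(0.1, 90))   # True  (held upright, aiming at a scene)
print(triggered(0.1, 0))    # False (at rest, lying flat)
```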
20120081392 | ELECTRONIC DEVICE OPERATION ADJUSTMENT BASED ON FACE DETECTION - An electronic device and methods of use thereof are described. The electronic device has at least a front-facing image capture device and a front-facing display device arranged to display visual content. In one embodiment, the front-facing camera can capture an image that includes at least image content. The image content can be processed in such a way that an operational state of the electronic device is modified in accordance with the processed image content. In a particular embodiment, the modification of the current operating state can include aligning an orientation of visual content presented by the front-facing display with a current facial orientation of a user. | 04-05-2012 |
20120081393 | APPARATUS AND METHOD FOR PROVIDING AUGMENTED REALITY USING VIRTUAL OBJECTS - A method for providing AR information, in which the method includes receiving virtual object setting information by a first terminal, in which the virtual object setting information includes virtual object selection information and movement setting information; and transmitting a request to a server for uploading a virtual object onto a real-world image of a target location based on the virtual object setting information. An apparatus to provide AR information, in which the apparatus includes a communication unit to process signals received from a server and to transmit signals to the server; a display unit to display a real-world image of a target location; a manipulation unit to receive a user input signal; and a control unit to receive virtual object setting information and to request the server to upload a virtual object onto the real-world image of the target location. | 04-05-2012 |
20120081394 | SYSTEM AND METHOD OF IMAGE AUGMENTATION - A method of image augmentation for an image of a book includes capturing an image of the book, detecting at least a portion of at least one fiduciary marker of the book within the image, estimating placement of the book's spine based upon the detected portion of the fiduciary marker, hypothesising possible positions for edges of a rigid leaf being turned in the book based upon estimated placement of the spine, processing the book image to identify edges within the image, comparing elements of the identified edges with the hypothesised positions for edges of the rigid leaf, selecting one of the hypothesised positions that best coincides with the compared elements of the processed image as representative of the position of the rigid leaf being turned in the book, and augmenting the book image with a virtual graphic element arranged in accordance with the selected representative position of the rigid leaf. | 04-05-2012 |
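The hypothesis-selection step in the page-turning entry above can be reduced to a toy sketch: score each hypothesised leaf-edge position against the edges actually detected in the image and keep the best match. Representing edges as single angles and scoring by nearest-neighbour distance are simplifying assumptions, not the patent's method:

```python
def best_hypothesis(hypotheses, detected_edges):
    """hypotheses: candidate edge angles (degrees) for the rigid leaf
    being turned. detected_edges: angles of edges found in the image.
    Picks the hypothesis closest to any detected edge -- a toy stand-in
    for the patent's edge-coincidence comparison."""
    def score(h):
        return min(abs(h - e) for e in detected_edges)
    return min(hypotheses, key=score)

# Candidate leaf positions every 30 deg; image analysis found edges at 28 and 100 deg.
print(best_hypothesis([0, 30, 60, 90], [28, 100]))  # 30
```

A full implementation would compare 2D edge segments (position and orientation) rather than bare angles, and would augment the book image once the winning position is selected.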
20120086727 | METHOD AND APPARATUS FOR GENERATING AUGMENTED REALITY CONTENT - An approach is provided for providing augmented reality based on tracking. Information, including location information, orientation information, or a combination thereof of a device is determined. A representation of a location indicated based, at least in part, on the information is determined. One or more items are selected to associate with one or more points within the representation. Display information is determined to be generated, the display information including the one or more items overlaid on the representation based, at least in part, on the one or more points. | 04-12-2012 |
20120086728 | SYSTEM AND METHOD FOR TRANSITIONING BETWEEN INTERFACE MODES IN VIRTUAL AND AUGMENTED REALITY APPLICATIONS - One preferred embodiment of the present invention includes a method for transitioning a user interface between viewing modes. The method of the preferred embodiment can include detecting an orientation of a mobile terminal including a user interface disposed on a first side of the mobile terminal, wherein the orientation of the mobile terminal includes an imaginary vector originating at a second side of the mobile terminal and projecting in a direction substantially opposite the first side of the mobile terminal. The method of the preferred embodiment can also include transitioning between at least two viewing modes in response to the imaginary vector intersecting an imaginary sphere disposed about the mobile terminal at a first latitudinal point having a predetermined relationship to a critical latitude of the sphere. | 04-12-2012 |
20120086729 | ENTERTAINMENT DEVICE, SYSTEM, AND METHOD - An entertainment device generates a composite image with a combiner that combines camera-captured images with a computer-generated image of an object resting on a virtual surface. The device also includes a detector that detects image movement in the captured images in one or more contact point regions corresponding to image positions at which the object contacts the virtual surface. The device further comprises an initiator for initiating movement of the object to a new position with respect to the virtual surface in response to detected motion in the contact point regions. The detector detects whether a first image area corresponding to a captured image feature is greater than a predetermined proportion of a second image area corresponding to a full field of view of the camera. If the first image area is greater than the predetermined proportion, the initiator initiates movement of the object to an avoidance position. | 04-12-2012 |
20120092368 | APPARATUS AND METHOD FOR PROVIDING AUGMENTED REALITY (AR) INFORMATION - Provided are a server, a terminal, and a method of providing Augmented Reality (AR) using a mobile tag in the terminal which is accessible to the server through a wired/wireless communication network. The method includes writing content in response to a tag creation request to create a tag; setting location information of the tag; setting, if the tag is a mobile tag, the location information of the mobile tag to a variable state and setting movement information of the mobile tag; and transmitting information about the mobile tag including the content, the location information, and the movement setting information of the mobile tag to the server, and requesting the server to register the mobile tag. | 04-19-2012 |
20120092369 | DISPLAY APPARATUS AND DISPLAY METHOD FOR IMPROVING VISIBILITY OF AUGMENTED REALITY OBJECT - Provided are a display apparatus and a display method for improving visibility of each object by differently displaying each object from the background when providing an augmented reality (AR) service. The display apparatus and the display method may improve visibility of each object by outputting a list of overlapped objects or a map of overlapped objects. Also, the display apparatus and the display method may improve visibility of each object by enlarging a complex area, in which objects are densely disposed, to reduce overlapping of the objects. | 04-19-2012 |
20120092370 | APPARATUS AND METHOD FOR AMALGAMATING MARKERS AND MARKERLESS OBJECTS - An apparatus to provide AR includes a marker recognition unit to recognize objects in reality information, an amalgamation determining unit to determine whether the objects are amalgamated, an amalgamation processing unit to determine an attribute of each of the recognized objects and to generate an amalgamated object based on the determined attributes, and an object processing unit to map the amalgamated object to the reality information and to display the mapped amalgamated object. A method for amalgamating objects in AR includes recognizing objects in reality information, determining whether the objects are amalgamated, determining an attribute of each of the recognized objects, generating an amalgamated object based on the determined attribute, mapping the amalgamated object to the reality information, and displaying the mapped amalgamated object. | 04-19-2012 |
20120092371 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION PROCESSING SYSTEM - Provided is an information processing apparatus including an image acquisition unit for acquiring a real space image including an image of another apparatus, a coordinate system generation unit for generating a spatial coordinate system of the real space image acquired by the image acquisition unit, and a transmission unit for transmitting spatial information constituting the spatial coordinate system generated by the coordinate system generation unit to the other apparatus sharing the spatial coordinate system. | 04-19-2012 |
20120092372 | METHOD FOR PROVIDING INFORMATION ON OBJECT WITHIN VIEW OF TERMINAL DEVICE, TERMINAL DEVICE FOR SAME AND COMPUTER-READABLE RECORDING MEDIUM - The present invention relates to a method for providing information on an object included in a visual field of a terminal in a form of augmented reality (AR) by using an image inputted to the terminal and its relating information. The method includes the steps of: (a) specifying the visual field of the terminal corresponding to the inputted image by referring to at least one piece of information on a location, a displacement and a viewing angle of the terminal; and (b) acquiring a graphic element corresponding to the object, included in the visual field of the terminal, whose identity is recognized by using a technology for matching a building image and displaying the acquired graphic element with the inputted image in the form of the augmented reality by providing the graphic element on a location of the object displayed on a screen of the terminal. | 04-19-2012 |
20120092373 | METHOD FOR PROVIDING INFORMATION ON OBJECT WHICH IS NOT INCLUDED IN VISUAL FIELD OF TERMINAL DEVICE, TERMINAL DEVICE AND COMPUTER READABLE RECORDING MEDIUM - The present invention relates to a method for providing information on an object excluded from the visual field of a terminal in a form of augmented reality (AR) by using an image inputted to the terminal and information related thereto. The method includes the steps of: (a) specifying the visual field of the terminal corresponding to the inputted image by referring to at least one piece of information on a location, a displacement and a viewing angle of the terminal; (b) searching for an object(s) excluded from the visual field of the terminal; and (c) displaying guiding information on the searched object(s) with the inputted image in a form of the augmented reality; wherein the visual field is specified by a viewing frustum whose vertex corresponds to the terminal. | 04-19-2012 |
20120098859 | APPARATUS AND METHOD FOR PROVIDING AUGMENTED REALITY USER INTERFACE - An apparatus and method for providing an augmented reality (AR) user interface. The method includes acquiring an image containing at least one object; recognizing the object from the acquired image; detecting AR information related to the recognized object; classifying the detected AR information into groups according to specific property information; and generating a user interface that displays the groups of AR information separately. | 04-26-2012 |
20120105473 | LOW-LATENCY FUSING OF VIRTUAL AND REAL CONTENT - A system that includes a head mounted display device and a processing unit connected to the head mounted display device is used to fuse virtual content into real content. In one embodiment, the processing unit is in communication with a hub computing device. The processing unit and hub may collaboratively determine a map of the mixed reality environment. Further, state data may be extrapolated to predict a field of view for a user in the future at a time when the mixed reality is to be displayed to the user. This extrapolation can remove latency from the system. | 05-03-2012 |
20120105474 | METHOD AND APPARATUS FOR DETERMINING LOCATION OFFSET INFORMATION - An approach is provided for determining location offset information. A correction manager determines to present, at a device, a location-based display including one or more representations of one or more location-based features. Next, the correction manager receives an input for specifying offset information for at least one of the one or more representations with respect to the location-based display. Then, the correction manager determines to present the one or more representations in the location-based display based, at least in part, on the offset information. | 05-03-2012 |
20120105475 | Range of Focus in an Augmented Reality Application - A computer-implemented augmented reality method includes receiving one or more indications, entered on a mobile computing device by a user of the mobile computing device, of a distance range for determining items to display with an augmented reality application, the distance range representing geographic distance from a base point where the mobile computing device is located. The method also includes selecting, from items in a computer database, one or more items that are located within the distance range from the mobile computing device entered by the user, and providing data for representing labels for the selected one or more items on a visual display of the mobile computing device, the labels corresponding to the selected items, and the items corresponding to geographical features that are within the distance range as measured from the mobile computing device. | 05-03-2012 |
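The distance-range selection described in the entry above amounts to filtering database items by great-circle distance from the device's base point. The following sketch uses the haversine formula; the item format, coordinates, and function names are illustrative assumptions:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points (degrees)."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def items_in_range(base, items, min_km, max_km):
    """items: list of (label, lat, lon). Keeps those whose distance from
    `base` falls within the user-entered [min_km, max_km] range."""
    return [label for label, lat, lon in items
            if min_km <= haversine_km(base[0], base[1], lat, lon) <= max_km]

base = (37.7749, -122.4194)                         # hypothetical base point (San Francisco)
items = [("nearby cafe", 37.7750, -122.4195),       # tens of meters away
         ("Oakland landmark", 37.8044, -122.2712)]  # roughly 13 km away
print(items_in_range(base, items, 0.0, 5.0))  # ['nearby cafe']
```

Labels for the surviving items would then be rendered on the device's display at positions corresponding to the geographic features.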
20120105476 | Range of Focus in an Augmented Reality Application - A computer-implemented augmented reality method includes receiving one or more indications, entered on a mobile computing device by a user of the mobile computing device, of a distance range for determining items to display with an augmented reality application, the distance range representing geographic distance from a base point where the mobile computing device is located. The method also includes selecting, from items in a computer database, one or more items that are located within the distance range from the mobile computing device entered by the user, and providing data for representing labels for the selected one or more items on a visual display of the mobile computing device, the labels corresponding to the selected items, and the items corresponding to geographical features that are within the distance range as measured from the mobile computing device. | 05-03-2012 |
20120105477 | APPARATUS AND METHOD FOR DISPLAYING DATA IN PORTABLE TERMINAL - An apparatus and method for displaying data in a portable terminal to control data displayed on a projection beam screen. The apparatus includes a beam projector unit for displaying data on a beam screen, at least one camera unit for capturing the data displayed on the beam screen, and a controller for extracting a differential region between data to be displayed on the beam screen and the displayed data captured by the camera unit and displaying the data on the beam screen according to a display screen region excluding the differential region therefrom. | 05-03-2012 |
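The differential-region extraction in the entry above compares the data sent to the beam projector with what the camera actually captured on the screen. A minimal per-pixel sketch, assuming grayscale frames as nested lists and an arbitrary difference threshold (neither is specified in the abstract):

```python
def differential_region(expected, captured, threshold=30):
    """expected/captured: same-size 2D grayscale frames (lists of lists).
    Returns a boolean mask marking pixels that differ by more than
    `threshold` -- e.g. where something occludes the beam screen.
    A toy stand-in for the patent's differential-region extraction."""
    return [[abs(e - c) > threshold for e, c in zip(erow, crow)]
            for erow, crow in zip(expected, captured)]

expected = [[200, 200], [200, 200]]   # frame sent to the projector
captured = [[198, 60], [201, 55]]     # camera capture: right column occluded
print(differential_region(expected, captured))  # [[False, True], [False, True]]
```

The controller would then redraw the data into the display region with the masked (differential) region excluded.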
20120113140 | Augmented Reality with Direct User Interaction - Augmented reality with direct user interaction is described. In one example, an augmented reality system comprises a user-interaction region, a camera that captures images of an object in the user-interaction region, and a partially transparent display device which combines a virtual environment with a view of the user-interaction region, so that both are visible at the same time to a user. A processor receives the images, tracks the object's movement, calculates a corresponding movement within the virtual environment, and updates the virtual environment based on the corresponding movement. In another example, a method of direct interaction in an augmented reality system comprises generating a virtual representation of the object having the corresponding movement, and updating the virtual environment so that the virtual representation interacts with virtual objects in the virtual environment. From the user's perspective, the object directly interacts with the virtual objects. | 05-10-2012 |
20120113141 | TECHNIQUES TO VISUALIZE PRODUCTS USING AUGMENTED REALITY - Techniques to visualize products using augmented reality are described. An apparatus may comprise an augmentation system having a pattern detector component operative to receive an image with a first virtual object representing a first real object, and determine a location parameter and a scale parameter for a second virtual object based on the first virtual object, an augmentation component operative to retrieve the second virtual object representing a second real object from a data store, and augment the first virtual object with the second virtual object based on the location parameter and the scale parameter to form an augmented object, and a rendering component operative to render the augmented object in the image with a scaled version of the second virtual object as indicated by the scale parameter at a location on the first virtual object as indicated by the location parameter. Other embodiments are described and claimed. | 05-10-2012 |
20120113142 | AUGMENTED REALITY INTERFACE FOR VIDEO - A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality. | 05-10-2012 |
20120113143 | AUGMENTED REALITY SYSTEM FOR POSITION IDENTIFICATION - A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality. | 05-10-2012 |
20120113144 | AUGMENTED REALITY VIRTUAL GUIDE SYSTEM - A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality. | 05-10-2012 |
20120113145 | AUGMENTED REALITY SURVEILLANCE AND RESCUE SYSTEM - A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality. | 05-10-2012 |
20120120101 | AUGMENTED REALITY SYSTEM FOR SUPPLEMENTING AND BLENDING DATA - A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality. | 05-17-2012 |
20120120102 | SYSTEM AND METHOD FOR CONTROLLING DEVICE - Provided is a system and method for controlling a device using Augmented Reality (AR). A system for controlling a device using Augmented Reality (AR) includes a device server, an AR server, and a portable terminal. The device server registers information about each device. The AR server generates an AR screen displaying type information and service-related information of at least one device searched in response to a request of a portable terminal by using the registered device information, and provides the generated AR screen to the portable terminal. The portable terminal connects with a device selected among devices displayed on the AR screen and performs a specific function with the connected device. | 05-17-2012 |
20120120103 | ALIGNMENT CONTROL IN AN AUGMENTED REALITY HEADPIECE - This patent discloses a method for providing an augmented image in a see-through head mounted display. The method includes capturing an image of a scene containing objects and displaying the image to a viewer. The method also includes capturing one or more additional image(s) of the scene in which the viewer indicates a misalignment between the displayed image and a see-through view of the scene. The captured images are then compared to determine an image adjustment to align corresponding objects in displayed images to the objects in the see-through view of the scene. This method provides augmented image information that is displayed in correspondence to the image adjustments so the viewer sees an augmented image comprised of the augmented image information overlaid and aligned to the see-through view. | 05-17-2012 |
20120127201 | APPARATUS AND METHOD FOR PROVIDING AUGMENTED REALITY USER INTERFACE - An apparatus and method for providing an augmented reality user interface are provided. The method may be as follows. An augmented reality image is stored. The augmented reality image is obtained by overlapping an image with augmented reality information, which is related to at least one object included in the image. The stored augmented reality image and an augmented reality image, which is captured in real time, are output through a divided display user interface at the same time. | 05-24-2012 |
20120127202 | SYSTEM AND METHOD FOR PROVIDING DELIVERY INFORMATION - Provided are a system and a method for logistics delivery using augmented reality, which enable efficient logistics delivery through generation of an optimal delivery route and improvement of the delivery rate. The system comprises a mobile terminal used by a delivery man, a logistics information system, and a mobile communication company's server. When the delivery man scans an image in a scheduled delivery area by using the mobile terminal, the mobile terminal displays detailed delivery information for each delivery point in the area, indicates whether a customer is positioned in the vicinity of the address, and generates the optimal delivery route. | 05-24-2012 |
20120127203 | MIXED REALITY DISPLAY - An image processing device includes capture optics for capturing light-field information for a scene, and a display unit for providing a display of the scene to a viewer. A tracking unit tracks relative positions of a viewer's head and the display and the viewer's gaze to adjust the display based on the relative positions and to determine a region of interest on the display. A virtual tag location unit determines locations to place one or more virtual tags on the region of interest, by using computational photography of the captured light-field information to determine depth information of an object in the region of interest. A mixed-reality display is produced by combining display of the virtual tags with the display of objects in the scene. | 05-24-2012 |
20120133676 | STORAGE MEDIUM HAVING STORED THEREON IMAGE PROCESSING PROGRAM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - At least one virtual object for which a predetermined color is set is placed in a virtual world. In a captured image captured by a real camera, at least one pixel corresponding to the predetermined color is detected, using color information including at least one selected from the group consisting of RGB values, a hue, a saturation, and a brightness of each pixel of the captured image. When the pixel corresponding to the predetermined color has been detected, a predetermined process is performed on the virtual object for which the predetermined color is set. An image of the virtual world where at least the virtual object is placed is displayed on a display device. | 05-31-2012 |
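The color-detection step this abstract describes, scanning a captured frame for pixels that match a predetermined color and triggering a process on the associated virtual object when one is found, can be sketched generically. This is an illustrative sketch under assumed conventions (a nested-list RGB image, a per-channel tolerance, and a simple trigger flag), not the patented implementation:

```python
def detect_color_pixels(image, target_rgb, tolerance=30):
    """Return (x, y) coordinates of pixels whose R, G, and B values
    are all within `tolerance` of the target color."""
    hits = []
    tr, tg, tb = target_rgb
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            if (abs(r - tr) <= tolerance and
                    abs(g - tg) <= tolerance and
                    abs(b - tb) <= tolerance):
                hits.append((x, y))
    return hits

# A 2x2 "captured image": one reddish pixel, the rest dark.
frame = [[(250, 10, 5), (0, 0, 0)],
         [(0, 0, 0), (30, 30, 30)]]
matches = detect_color_pixels(frame, target_rgb=(255, 0, 0))
# If any pixel matches, the predetermined process would be performed
# on the virtual object associated with that color (here, just a flag).
object_triggered = len(matches) > 0
```

A real implementation would more likely threshold in HSV space for lighting robustness, as the abstract's mention of hue, saturation, and brightness suggests.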
20120139941 | INFORMATION DISPLAY SYSTEM, INFORMATION DISPLAY APPARATUS AND NON-TRANSITORY STORAGE MEDIUM - The matching processor acquires key information, such as position information, via the key information acquirer, and notifies an external apparatus via the communicator. The matching processor stores the acquired key information in the storer in association with the identification information acquired from the external apparatus. When the acquired identification information does not exist in an AR data management table, the matching processor acquires the AR data by sending the identification information to the external apparatus. When there is no empty record in a key information management table, the key information with the oldest usage date and time is treated as the deletion target. Likewise, when there is no empty record in the AR data management table, the AR data with the oldest usage date and time is treated as the deletion target. | 06-07-2012 |
20120147039 | TERMINAL AND METHOD FOR PROVIDING AUGMENTED REALITY - A terminal to provide augmented reality includes a camera unit to capture a real-world view having a marker comprising a first region and a second region; a memory unit to store an object corresponding to the marker, first control information to control a first part of the object, and second control information to control a second part of the object; an object control unit to control the first part of the object based on the first control information if the first region is selected, and to control the second part of the object based on the second control information if the second region is selected; an image processing unit to synthesize the object with the real-world view into a synthesized view; and a display unit to display the synthesized view. | 06-14-2012 |
20120147040 | APPARATUS AND METHOD FOR PROVIDING WIRELESS NETWORK INFORMATION - In a terminal to display wireless network information by use of augmented reality, the terminal can connect, through a communication network, to various servers that provide wireless network information. In a method for displaying wireless network information using augmented reality, a real-world image is acquired. Wireless network information for the region corresponding to the location of the image is acquired from the server. The acquired wireless network information is overlaid onto the real-world image and displayed. | 06-14-2012 |
20120147041 | APPARATUS AND METHOD FOR SEARCHING ACCESS POINTS IN PORTABLE TERMINAL - An apparatus and method combine an augmented reality scheme with an AP search function to visually provide the positions of discovered APs, along with their names and the strength of signals received from them. The apparatus includes a communication unit for receiving signals of APs around the portable terminal, an input unit for receiving input for searching for the APs, a camera unit for photographing the environment around the portable terminal when searching for the APs, a display unit for outputting an image photographed by the camera unit on a preview picture, an AP attribute ascertaining unit for ascertaining the attributes and positions of the APs that exist around the portable terminal, and a controller for outputting the APs on the preview picture so as to correspond to the ascertained positions. | 06-14-2012 |
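Overlaying an AP on the camera preview at its real-world position reduces, in the horizontal dimension, to mapping the AP's compass bearing into preview pixel coordinates. A minimal sketch, assuming a linear mapping across an assumed camera field of view (the function name, FOV, and screen width are illustrative, not from the patent):

```python
def ap_screen_x(ap_bearing_deg, camera_heading_deg, fov_deg=60.0, screen_width=480):
    """Map an access point's compass bearing to a horizontal pixel
    position on the camera preview, or None if it falls outside the
    field of view."""
    # Signed angular offset of the AP from the camera axis, in (-180, 180].
    offset = (ap_bearing_deg - camera_heading_deg + 180.0) % 360.0 - 180.0
    half_fov = fov_deg / 2.0
    if abs(offset) > half_fov:
        return None  # AP is behind or beside the user, not in the preview
    # Linear mapping: -half_fov -> 0, +half_fov -> screen_width.
    return int((offset + half_fov) / fov_deg * screen_width)
```

An AP dead ahead lands at the horizontal center of the preview; APs outside the field of view are simply not drawn (a fuller implementation might show an edge arrow instead).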
20120147042 | ELECTRONIC PUBLICATION VIEWER, METHOD FOR VIEWING ELECTRONIC PUBLICATION, PROGRAM, AND INTEGRATED CIRCUIT - An electronic publication viewer ( | 06-14-2012 |
20120147043 | OPTICAL COMMUNICATION APPARATUS AND OPTICAL COMMUNICATION METHOD - An optical communication apparatus is disclosed. The optical communication apparatus includes a light transmission section, a light reception section, and a control section. The light transmission section causes a light emitting portion which outputs light in a visual line direction of a user to transmit information. The light reception section causes a light receiving portion which receives light from the visual line direction of the user to receive information. The control section determines whether or not another optical communication apparatus is an information communication target based on identification information when the light reception section has optically received communication request information and the identification information from the other optical communication apparatus, and causes the light transmission section to optically transmit communication response information to the other optical communication apparatus when the control section has determined that the other optical communication apparatus is the information communication target. | 06-14-2012 |
20120154439 | APPARATUS AND METHOD FOR OPERATING MULTIPLE OBJECT OF AUGMENTED REALITY SYSTEM - An apparatus for operating multiple objects in an augmented reality system converts a reference point recognized in an input image into copyable basic data, then copies the basic data to each position of a screen where the image is to be output, and then augments an object by using a copy of the basic data as the reference point. | 06-21-2012 |
20120154440 | AUGMENTED 2D REPRESENTATION OF MOLECULAR STRUCTURES - A method, computing apparatus, and computer-readable medium for augmenting and displaying a 2D representation of a molecular structure, or an assemblage of molecular structures, augmented with various graphical elements. The technology further provides functionality that permits a user to define the form and number of types of graphical elements to apply to a 2D structure. | 06-21-2012 |
20120154441 | AUGMENTED REALITY DISPLAY SYSTEM AND METHOD FOR VEHICLE - A system includes a head front display device, an eye position tracking camera to track movement of a driver's irises, a front view camera to take a picture of a front view of the driver, a head front display device controller to implement at least one of an angle change, forward movement, backward movement, upward movement, downward movement, leftward movement, and rightward movement of the head front display device, an image adjuster to adjust an object displayed on the head front display device in association with an object of an actual view seen through the front window of the vehicle based on positions of the driver's irises obtained through the eye position tracking camera and an image of the front view obtained by the front view camera, and a display unit controlled by the image adjuster and configured to display information on the head front display device. | 06-21-2012 |
20120162254 | OBJECT MAPPING TECHNIQUES FOR MOBILE AUGMENTED REALITY APPLICATIONS - Techniques are disclosed that involve mobile augmented reality (MAR) applications in which users (e.g., players) may experience augmented reality (e.g., altered video or audio based on a real environment). Such augmented reality may include various alterations. For example, particular objects may be altered to appear differently. Such alterations may be based on stored profiles and/or user selections. Further features may also be employed. For example, in embodiments, characters and/or other objects may be sent (or caused to appear) to other users in other locations. Also, a user may leave a character at another location and receive an alert when another user/player encounters this character. Also, characteristics of output audio may be affected based on events of the MAR application. | 06-28-2012 |
20120162255 | TECHNIQUES FOR MOBILE AUGMENTED REALITY APPLICATIONS - Techniques are disclosed that involve mobile augmented reality (MAR) applications in which users (e.g., players) may experience augmented reality. Further, the actual geographical position of MAR application objects (e.g., players, characters, and other objects) may be tracked, represented, and manipulated. Accordingly, MAR objects may be tracked across multiple locations (e.g., multiple geographies and player environments). Moreover, MAR content may be manipulated and provided to the user based on a current context of the user. | 06-28-2012 |
20120162256 | MACHINE-IMPLEMENTED METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR ENABLING A USER TO VIRTUALLY TRY ON A SELECTED GARMENT USING AUGMENTED REALITY - In a machine-implemented method for use with a handheld device, a user is able to virtually try on a selected garment using augmented reality. The machine-implemented method includes: (A) establishing a garment database containing information corresponding to at least one garment, the information corresponding to each garment including a backside image of the garment; (B) establishing a marker database containing feature information of a backside marker; and (C) upon determining from a captured image of the user who is tagged with at least one physical marker that the physical marker corresponds to the backside marker, retrieving from the garment database the backside image of a selected garment, and superimposing the retrieved backside image onto the captured image of the user to form a composite image for display on a screen of the handheld device. | 06-28-2012 |
20120162257 | AUTHENTICATION APPARATUS AND METHOD FOR PROVIDING AUGMENTED REALITY (AR) INFORMATION - An authentication method for providing augmented reality (AR) information includes acquiring an image of a real-world environment including a target object; identifying the target object in the acquired image; requesting data related to the target object from a server; receiving encoded data related to the target object from the server; authenticating the encoded data; and outputting the authenticated data as AR information. A terminal to perform authentication to provide AR information includes a communication unit to receive and transmit signals from and to a server; a display to output a target object and data related to the target object; and a controller to receive encoded data related to the target object from the server, to authenticate the encoded data, and to output the authenticated data on the display. | 06-28-2012 |
20120162258 | METHOD, SYSTEM, AND COMPUTER-READABLE RECORDING MEDIUM FOR PROVIDING INFORMATION ON AN OBJECT USING VIEWING FRUSTUMS - The present invention relates to a method for providing information on an object by using viewing frustums. The method includes the steps of: (a) specifying at least two viewing frustums whose vertexes are visual points of respective user terminals; and (b) calculating a degree of interest in the object by referring to the object commonly included in both a first viewing frustum whose vertex is a visual point of a first user terminal and a second one whose vertex is a visual point of a second user terminal. | 06-28-2012 |
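The degree-of-interest computation this abstract describes, crediting an object that falls inside the viewing frustums of two or more user terminals, can be approximated by counting frustum memberships. An illustrative sketch only, not the patented method: each frustum is simplified to a cone (apex at the terminal's visual point, a view direction, a half-angle, and near/far limits), all parameter values being assumptions:

```python
import math

def in_frustum(point, apex, direction, half_angle_deg, near=0.5, far=100.0):
    """True if `point` lies inside a conical viewing frustum whose
    vertex is at `apex`, looking along `direction`."""
    vx, vy, vz = (point[i] - apex[i] for i in range(3))
    dist = math.sqrt(vx * vx + vy * vy + vz * vz)
    if not (near <= dist <= far):
        return False
    dlen = math.sqrt(sum(c * c for c in direction))
    cos_angle = (vx * direction[0] + vy * direction[1] + vz * direction[2]) / (dist * dlen)
    return cos_angle >= math.cos(math.radians(half_angle_deg))

def degree_of_interest(obj, frustums):
    """Count how many users' viewing frustums contain the object."""
    return sum(in_frustum(obj, *f) for f in frustums)

# Example: two of three terminals are looking at an object at (0, 0, 10).
frustums = [((0, 0, 0), (0, 0, 1), 30),
            ((10, 0, 10), (-1, 0, 0), 30),
            ((0, 0, 20), (0, 0, 1), 30)]
interest = degree_of_interest((0, 0, 10), frustums)
```

The third frustum faces away from the object, so only the first two contribute to the interest score.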
20120176409 | Computer-Readable Storage Medium Having Image Processing Program Stored Therein, Image Processing Apparatus, Image Processing System, and Image Processing Method - A storage medium has stored therein an image processing program that causes a computer of an image processing apparatus, which is connected to a real camera for taking an image of a real space and a display device that allows the real space to be viewed on a display area thereof, to operate as real space image obtaining means, specific object detection means, calculation means, setting means, identification means, event providing means, virtual space image generation means, and display control means. | 07-12-2012 |
20120176410 | METHOD FOR REPRESENTING VIRTUAL INFORMATION IN A REAL ENVIRONMENT - The invention relates to a method for ergonomically representing virtual information in a real environment, comprising the following steps: providing at least one view of a real environment and of a system setup for blending in virtual information for superimposing with the real environment in at least part of the view, the system setup comprising at least one display device, ascertaining a position and orientation of at least one part of the system setup relative to at least one component of the real environment, subdividing at least part of the view of the real environment into a plurality of regions comprising a first region and a second region, with objects of the real environment within the first region being placed closer to the system setup than objects of the real environment within the second region, and blending in at least one item of virtual information on the display device in at least part of the view of the real environment, considering the position and orientation of said at least one part of the system setup, wherein the virtual information is shown differently in the first region than in the second region with respect to the manner in which it is blended into the view of the real environment. | 07-12-2012 |
20120176411 | GPS-Based Location and Messaging System and Method - A system and method for viewing a target in a background from a user's perspective. In one form, the views are selectable by the user on, for example, a GPS equipped cell phone, to include a view from the participant's position, zoom, pan, and tilt views, or views from another geographic location, giving increased situational awareness and identification of the target. Other information can be conveyed, such as messages or advertisements, on a billboard, which may be a geo-referenced area on or near the target. Preferably, an orientation mechanism shows when the device is correctly pointed to a target. | 07-12-2012 |
20120182313 | APPARATUS AND METHOD FOR PROVIDING AUGMENTED REALITY IN WINDOW FORM - An apparatus to provide an augmented reality includes a window detector to determine a first region and a second region; an information processor to identify a first portion of a virtual-world image layer to be displayed in the first region based on a first viewing direction; and an image processor to display the first portion in the first region, and to display a real-world image layer in the second region. A method for providing an augmented reality includes determining a first region and a second region; identifying a first portion of a virtual-world image layer to be displayed in the first region based on a first viewing direction; and displaying the first portion in the first region, and displaying a real-world image layer in the second region. | 07-19-2012 |
20120188279 | Multi-Sensor Proximity-Based Immersion System and Method - Systems and methods for interaction with a virtual environment are disclosed. In some embodiments, a method comprises generating a virtual representation of a user's non-virtual environment, determining a viewpoint of a user in a non-virtual environment relative to a display, and displaying, with the display, the virtual representation in a spatial relationship with the user's non-virtual environment based on the viewpoint of the user. | 07-26-2012 |
20120194548 | SYSTEM AND METHOD FOR REMOTELY SHARING AUGMENTED REALITY SERVICE - An augmented reality (AR) system and method for remotely sharing an AR service using different markers are provided. The AR system includes a plurality of client devices and a host device. The host device of the AR system may set a sharing area of different markers, and may enable sharing of information included in the sharing area among remotely located client devices. The client device of the AR system may display an AR object identified in the sharing area and may share information related to the AR object through an AR service. | 08-02-2012 |
20120194549 | AR GLASSES SPECIFIC USER INTERFACE BASED ON A CONNECTED EXTERNAL DEVICE TYPE - This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content, wherein the eyepiece includes a user interface based on a connected external device type. | 08-02-2012 |
20120194550 | SENSOR-BASED COMMAND AND CONTROL OF EXTERNAL DEVICES WITH FEEDBACK FROM THE EXTERNAL DEVICE TO THE AR GLASSES - This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content, wherein the eyepiece includes sensor-based command and control of external devices with feedback from the external device to the eyepiece. | 08-02-2012 |
20120194551 | AR GLASSES WITH USER-ACTION BASED COMMAND AND CONTROL OF EXTERNAL DEVICES - This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content, wherein the eyepiece has a user-action based command and control of external devices. | 08-02-2012 |
20120194552 | AR GLASSES WITH PREDICTIVE CONTROL OF EXTERNAL DEVICE BASED ON EVENT INPUT - This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content, wherein the eyepiece includes predictive control of external device based on an event input. | 08-02-2012 |
20120194553 | AR GLASSES WITH SENSOR AND USER ACTION BASED CONTROL OF EXTERNAL DEVICES WITH FEEDBACK - This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content, wherein the eyepiece includes sensor and user action based control of external devices with feedback. | 08-02-2012 |
20120194554 | INFORMATION PROCESSING DEVICE, ALARM METHOD, AND PROGRAM - An apparatus comprising a memory storing instructions is provided. The apparatus includes a control unit for executing the instructions to send signals to display, for a user, a first virtual image superimposed onto an image of real space, the image of real space comprising an image of a potential source of interest for the user. The control unit further executes instructions to send signals to analyze the image of real space to detect the potential source of interest. The control unit further executes instructions to send signals to notify the user of the potential source of interest. | 08-02-2012 |
20120200600 | HEAD AND ARM DETECTION FOR VIRTUAL IMMERSION SYSTEMS AND METHODS - Systems and methods for detection of the head and arms of a user to interact with an immersive virtual environment are disclosed. In some embodiments, a method comprises generating a virtual representation of a non-virtual environment, determining, using an overhead sensor, a position of a user relative to a display when the user is within a predetermined proximity to the display, determining a position of the user's head relative to the display using the overhead sensor, and displaying the virtual representation on the display in a spatial relationship with the non-virtual environment based on the position of the user's head relative to the display. | 08-09-2012 |
20120200601 | AR GLASSES WITH STATE TRIGGERED EYE CONTROL INTERACTION WITH ADVERTISING FACILITY - This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content, wherein the eyepiece includes a state triggered eye control interaction with advertising facility. | 08-09-2012 |
20120200602 | HAND IMAGE FEEDBACK - An image generation method and system. The method includes receiving by a computing apparatus from a video recording device attached to a backside of a video monitor connected to the computing apparatus, a video data stream comprising a first video image of an input device connected to the computing apparatus and a second video image of a user's hands enabling switches on the input device. An input device image associated with the input device is displayed. The computing apparatus superimposes and displays a hand image associated with the user's hands over the input device image. The computing apparatus adjusts a brightness of the hand image such that the input device image is visible through the hand image. | 08-09-2012 |
20120206485 | AR GLASSES WITH EVENT AND SENSOR TRIGGERED USER MOVEMENT CONTROL OF AR EYEPIECE FACILITIES - This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content, wherein the eyepiece includes event and sensor triggered user movement control. | 08-16-2012 |
20120212508 | PROVIDING A CORRECTED VIEW BASED ON THE POSITION OF A USER WITH RESPECT TO A MOBILE PLATFORM - A mobile platform displays a corrected view of an image and/or augmented reality (AR) data based on the position of the user with respect to the mobile platform. The corrected view is produced by determining a position of the user with respect to the mobile platform using an image of the user from a backward facing camera. The display information is provided in the form of an image or video frame of the environment captured with a forward facing camera or AR data. The position of the user with respect to the mobile platform is used to determine the portion of the display information to be displayed that is aligned with the line of sight between the user and the mobile platform so that the displayed information is aligned with the real world environment. | 08-23-2012 |
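Selecting the portion of the forward camera's frame that lies along the line of sight between the user and the device reduces, horizontally, to shifting a crop window by the user's offset from the device's center line. A simplified pixel-space sketch, not the patented method; the sign convention (positive offset = user displaced to the right) and the dimensions are assumptions:

```python
def aligned_crop_left(user_offset_px, frame_width, display_width):
    """Return the left edge of the crop window inside the full camera
    frame. `user_offset_px` is how far (in frame pixels) the user's eye
    sits from the device's center line; the crop shifts the opposite
    way so the displayed scene stays aligned with the world behind
    the device, as with a window."""
    center_left = (frame_width - display_width) // 2
    left = center_left - user_offset_px
    # Clamp so the window stays inside the captured frame.
    return max(0, min(left, frame_width - display_width))
```

For a 1920 px wide frame shown on a 1080 px display, a centered user sees the middle slice; moving right shifts the visible slice left, just as looking through a real window would.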
20120212509 | Providing an Interactive Experience Using a 3D Depth Camera and a 3D Projector - An interaction system is described which uses a depth camera to capture a depth image of a physical object placed on, or in the vicinity of, an interactive surface. The interaction system also uses a video camera to capture a video image of the physical object. The interaction system can then generate a 3D virtual object based on the depth image and video image. The interaction system then uses a 3D projector to project the 3D virtual object back onto the interactive surface, e.g., in a mirrored relationship to the physical object. A user may then capture and manipulate the 3D virtual object in any manner. Further, the user may construct a composite model based on smaller component 3D virtual objects. The interaction system uses a projective texturing technique to present a realistic-looking 3D virtual object on a surface having any geometry. | 08-23-2012 |
20120218296 | METHOD AND APPARATUS FOR FEATURE-BASED PRESENTATION OF CONTENT - An approach is provided for location-based presentation of content. A content service platform determines one or more representations of at least one structure. The content service platform also processes and/or facilitates a processing of the one or more representations to determine one or more features of the one or more representations. The content service platform further causes, at least in part, designation of the one or more features as elements of a virtual display area, wherein the one or more representations comprise, at least in part, the virtual display area. The content service platform also causes, at least in part, presentation of one or more outputs of one or more applications, one or more services, or a combination thereof in the virtual display area. | 08-30-2012 |
20120218297 | AUGMENTED REALITY PRESENTATIONS - Technology is generally disclosed for augmented-reality presentations. In some embodiments, the technology can receive an indication of a user's sensitivity to an aspect of a presentation, receive general content relating to the presentation, receive overlay content relating to the presentation, combine the received general content and the received overlay content to create the presentation, and render the presentation. The overlay content may respond to the user's sensitivity. | 08-30-2012 |
20120218298 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING DEVICE AND TANGIBLE RECORDING MEDIUM RECORDING INFORMATION PROCESSING PROGRAM - An example system includes: an arrangement item including a first marker; an arrangement region providing object which provides an arrangement region and includes a plurality of second markers; and an information processing device including: an imaging processing unit to generate a captured image with an imaging device; a positional relationship judgment unit to judge a positional relationship between the first marker of the arrangement item and at least one of the second markers, from a captured image which includes the first marker of the arrangement item and at least one of the plurality of second markers of the arrangement region providing object; an information superimposition unit to superimpose predetermined information based on the positional relationship, onto the captured image; and a display processing unit to cause a display device to display the captured image on which the predetermined information is superimposed. | 08-30-2012 |
20120218299 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING DEVICE AND TANGIBLE RECORDING MEDIUM RECORDING INFORMATION PROCESSING PROGRAM - An example information processing system includes a plurality of information processing devices, the respective information processing devices carrying out imaging by an imaging device, wherein the respective information processing devices include: an imaging processing unit to generate a captured image by sequentially capturing images of a real space; a virtual space setting unit to set, based on at least a portion of an imaging object included in the captured image, a virtual space commonly used with another information processing device that captures an image of either that same imaging object or an imaging object at least a portion of whose external appearance matches it; and a transmission unit to send data relating to a change in a state of the virtual space to the other information processing device when the change in the state of the virtual space is detected. | 08-30-2012 |
20120218300 | IMAGE PROCESSING SYSTEM, METHOD AND APPARATUS, AND COMPUTER-READABLE MEDIUM RECORDING IMAGE PROCESSING PROGRAM - An example image processing apparatus has a captured image acquisition unit for acquiring a captured image captured by an imaging device, a feature detection unit for detecting markers from the captured image, a reference acquisition unit for acquiring, based on each of the detected markers, a coordinate system serving as a reference indicating a position and attitude in a space, and a relative relation information acquisition unit for acquiring, based on the captured image in which a plurality of the markers are detected, relative relation information indicating a relative relation in position and attitude of a plurality of coordinate systems acquired for the respective markers. | 08-30-2012 |
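The relative-relation computation this abstract describes, expressing one marker's coordinate system in terms of another's when both are seen in the same frame, is conventionally the composition of one camera-to-marker pose with the inverse of the other. A minimal sketch using 2D rigid transforms (rotation plus translation) as a stand-in for full 6-DoF marker poses; all names are illustrative, not from the patent:

```python
import math

def pose(theta_deg, tx, ty):
    """A camera-to-marker pose as (rotation in radians, tx, ty)."""
    return (math.radians(theta_deg), tx, ty)

def invert(p):
    """Inverse of a rigid transform: rotation -theta, translation -R^T t."""
    th, tx, ty = p
    c, s = math.cos(th), math.sin(th)
    return (-th, -(c * tx + s * ty), -(-s * tx + c * ty))

def compose(p, q):
    """Apply q, then p (standard rigid-transform composition)."""
    th1, tx1, ty1 = p
    th2, tx2, ty2 = q
    c, s = math.cos(th1), math.sin(th1)
    return (th1 + th2, tx1 + c * tx2 - s * ty2, ty1 + s * tx2 + c * ty2)

def relative_pose(cam_T_a, cam_T_b):
    """Pose of marker B expressed in marker A's coordinate system:
    a_T_b = inverse(cam_T_a) composed with cam_T_b."""
    return compose(invert(cam_T_a), cam_T_b)
```

With both markers axis-aligned, marker A at x=1 and marker B at x=4 in the camera frame, B sits at x=3 in A's frame; the same composition handles rotated markers.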
20120218301 | SEE-THROUGH DISPLAY WITH AN OPTICAL ASSEMBLY INCLUDING A WEDGE-SHAPED ILLUMINATION SYSTEM - This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content. The optical assembly includes a light transmissive wedge-shaped illumination system with angle selective coatings and an LED lighting system coupled to an edge of the wedge. An angled surface of the wedge directs light from the LED lighting system to uniformly irradiate a reflective image display to produce an image that is reflected through the illumination system to provide the displayed content to the user. | 08-30-2012 |
20120223966 | TERMINAL TO PROVIDE AUGMENTED REALITY - A terminal that displays augmented reality allows tag information of various recognized objects to be chosen and displayed in a more efficient manner. When the terminal enters an augmented reality mode, it provides location information about the terminal and the recognized objects, a category selection ability to select categories of tag information, and a tag information control layout for displaying the selected tag information at different stages on a display. | 09-06-2012 |
20120223967 | Dynamic Perspective Video Window - Systems and methods are disclosed for generating an image for a user based on an image captured by a scene-facing camera or detector. The user's position relative to a component of the system is determined, and the image captured by the scene-facing detector is modified based on the user's position. The resulting image represents the scene as seen from the perspective of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data. | 09-06-2012 |
20120223968 | DISPLAY PROCESSING DEVICE, DISPLAY METHOD, AND PROGRAM - A display processing device extracts a marker image from an image (captured image) of a page in a book. Afterward, a curved plane is created according to the extracted marker image to represent the degree of curvature of the page. Then, a virtual object is distorted to match the curved plane and overlaid on the captured image for display on an HMD. | 09-06-2012 |
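As an illustrative aside (not part of the filing), the page-distortion step described in the abstract above could be sketched as follows. The half-sine bulge used to approximate the page's curvature, and all names, are assumptions for illustration only:

```python
import math

def warp_overlay(points, page_width, curvature):
    """Displace flat overlay points vertically to follow a curved page.

    The page curve is approximated by a half-sine bulge across the page
    width; each point's y-coordinate is offset by the bulge height at
    its x-position, so a flat virtual object appears draped over the page.
    """
    warped = []
    for (x, y) in points:
        # Height of the simulated page curve at horizontal position x.
        bulge = curvature * math.sin(math.pi * x / page_width)
        warped.append((x, y + bulge))
    return warped

# A flat three-point overlay warped over a page 10 units wide.
flat = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
curved = warp_overlay(flat, page_width=10.0, curvature=2.0)
```

A real implementation would instead fit the curved plane to the extracted marker image; the sketch only shows the warp itself.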
20120223969 | DEVICE FOR CAPTURING AND DISPLAYING IMAGES OF OBJECTS, IN PARTICULAR DIGITAL BINOCULARS, DIGITAL CAMERA OR DIGITAL VIDEO CAMERA - The invention relates to a device for capturing and displaying images of objects, in particular digital binoculars (1), a digital camera (14) or a digital video camera, the device comprising: a digital storage medium (24, 27) including a database, which includes elements including meta information (10, 20) for objects, in particular geographic data and/or names of a landscape, a mountain range or a building or names of plants or animals; a comparison device (33); and a superposition device (34) for superimposing meta information (10, 20) selected by the comparison device (33). | 09-06-2012 |
20120229508 | THEME-BASED AUGMENTATION OF PHOTOREPRESENTATIVE VIEW - On a display configured to provide a photorepresentative view from a user's vantage point of a physical environment in which the user is located, a method is provided comprising receiving, from the user, an input selecting a theme for use in augmenting the photorepresentative view. The method further includes obtaining, optically and in real time, environment information of the physical environment and generating a spatial model of the physical environment based on the environment information. The method further includes identifying, via analysis of the spatial model, one or more features within the spatial model that each corresponds to one or more physical features in the physical environment. The method further includes based on such analysis, displaying, on the display, an augmentation of an identified feature, the augmentation being associated with the theme. | 09-13-2012 |
20120229509 | SYSTEM AND METHOD FOR USER INTERACTION - A system for user interaction is provided. When the system is in use, a signal source is configured to provide an image signal to a retina display unit. The retina display unit is configured to project the image signal provided by the signal source onto a user's retina such that the user visually senses a virtual interface. The image signal is displayed on the virtual interface. A camera unit is configured to capture the user's body motion. An identification-interaction unit is configured to determine an interactive operation command corresponding to the user's body motion and transmit the interactive operation command to the signal source. | 09-13-2012 |
20120229510 | STORAGE MEDIUM HAVING STORED THEREON INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD - An action of a first object placed in a virtual world is controlled on the basis of body state data output from a portable display apparatus. An action of a second object placed in the virtual world is controlled on the basis of touch position data based on a touch position on a touch panel provided on a surface of a display screen of the portable display apparatus. Then, a first image including at least a part of the first object and at least a part of the second object is generated, and the first image is displayed on the portable display apparatus. | 09-13-2012 |
20120236029 | SYSTEM AND METHOD FOR EMBEDDING AND VIEWING MEDIA FILES WITHIN A VIRTUAL AND AUGMENTED REALITY SCENE - A preferred method for viewing embedded media in a virtual and augmented reality (VAR) scene can include, at a viewer device, defining a real orientation of the viewer device relative to a projection matrix; and orienting a VAR scene on the viewer device in response to the real orientation, in which the VAR scene includes one or both of visual data and orientation data. The preferred method can further include selecting a media file in the VAR scene, wherein the media file is selected at a media location correlated at least to the real orientation of the viewer device; and activating the media file in the VAR scene at the media location. The preferred method and variations thereof function to allow a viewer to interact with media that is embedded, tagged, linked, and/or associated with a VAR scene viewable on the viewer device. | 09-20-2012 |
20120236030 | SEE-THROUGH NEAR-EYE DISPLAY GLASSES INCLUDING A MODULAR IMAGE SOURCE - This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content. The optical assembly comprises a reflective image display that generates and reflects image light to an optically flat film then to a curved partially reflecting mirror of the optical assembly that reflects a portion of the image light from the image source and transmits a portion of the scene light from a see-through view of the surrounding environment to the user's eye as a combined image. The optical assembly comprises a modular image source, wherein the modular image source is mounted in a frame of the eyepiece such that its position with respect to a user's eye can be adjusted. | 09-20-2012 |
20120236031 | SYSTEM AND METHOD FOR DELIVERING CONTENT TO A GROUP OF SEE-THROUGH NEAR EYE DISPLAY EYEPIECES - This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content. The optical assembly includes absorptive polarizers or anti-reflective coatings to reduce stray light. | 09-20-2012 |
20120242694 | MONOCULAR HEAD MOUNTED DISPLAY - According to one embodiment, a monocular head mounted display comprises an information acquisition section, an image data generation section, and an image display section. The information acquisition section acquires solid body position information on a position of a solid body located on the ground around a user, and indication position information on an indication position for the user. The image data generation section generates image data including an information object to provide provision information to the user. The image display section displays an image based on the image data on one eye of the user in superimposition on a real scene. The image data generation section generates the image data so as to move the information object in the image so that the information object is superimposed on the indication position after placing the information object in the image so that the information object is superimposed on the solid body. | 09-27-2012 |
20120242695 | Augmented Reality System for Public and Private Seminars - Described are computer implemented augmented reality techniques for public and private seminars that include: receiving an indication of a start of a segment of a live presentation, the live presentation comprising at least one segment, with the at least one segment having at least one presentation component; receiving information related to a plurality of users; receiving rules to analyze the information and select from it private information pertaining to a particular user, with the selected information being relevant to the at least one presentation component of the at least one segment of the live presentation; generating an image that, when rendered on a display device, renders the private information pertaining to the particular user for that presentation component; and sending the image of the private information to a device associated with the particular user. | 09-27-2012 |
20120242696 | Augmented Reality In A Virtual Tour Through A Financial Portfolio - Disclosed are techniques for providing a presentation as a virtual tour through a user's portfolio based on receiving signals that correspond to user movements in the physical world and processing the signals to select generated images associated with the user's movements to generate an image that when rendered on a display device renders a visual representation of the portfolio in a virtual world. | 09-27-2012 |
20120242697 | SEE-THROUGH NEAR-EYE DISPLAY GLASSES WITH THE OPTICAL ASSEMBLY INCLUDING ABSORPTIVE POLARIZERS OR ANTI-REFLECTIVE COATINGS TO REDUCE STRAY LIGHT - This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content. The optical assembly includes absorptive polarizers or anti-reflective coatings to reduce stray light. | 09-27-2012 |
20120242698 | SEE-THROUGH NEAR-EYE DISPLAY GLASSES WITH A MULTI-SEGMENT PROCESSOR-CONTROLLED OPTICAL LAYER - This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content. The optical assembly includes a multi-segment optical layer that provides a display characteristic adjustment that is dependent on displayed content requirements and surrounding environmental conditions. Each segment of the multi-segment optical layer is individually controlled by the integrated processor. | 09-27-2012 |
20120242699 | MODIFICATION OF TURF TV PARTICIPANT DECORATIONS BASED ON MULTIPLE REAL-TIME FACTORS - A method, system and computer program product for modifying sporting event participant decorations displayed on a fiber optic “Turf TV” playing surface. A utility calculates a direction of movement of a player or object in proximity to the playing surface, which is configured to display images, during a live sporting event. The utility adds a graphical aura to a real-time graphical image displayed in proximity to the player on the playing surface. The utility animates the aura in response to wind and/or noise in proximity to the playing surface. The utility modifies the aura based on pre-defined custom attributes, penalties, errors, and/or player status. If the player moves, the utility adds a graphical player trail to the image. The utility also adds a graphical object trail that includes previous locations of an object. The object trail may also include spin and a visual appearance corresponding to an object height. | 09-27-2012 |
20120249586 | METHOD AND APPARATUS FOR PROVIDING COLLABORATION BETWEEN REMOTE AND ON-SITE USERS OF INDIRECT AUGMENTED REALITY - An apparatus for enabling provision of collaboration of remote and on-site users of indirect augmented reality may include at least one processor and at least one memory including computer program code. The at least one memory and the computer program code may be configured, with the processor, to cause the apparatus to perform at least selecting a stored image including a virtual representation of a real world location based on position information and orientation information of a mobile terminal, causing provision of first visual content to be displayed at the mobile terminal based on the virtual representation, causing provision of second visual content to be displayed at a remote device based on the virtual representation, and enabling collaborative interaction between a user of the mobile terminal and a user of the remote device with respect to the first visual content and the second visual content. A corresponding method and computer program product are also provided. | 10-04-2012 |
20120249587 | KEYBOARD AVATAR FOR HEADS UP DISPLAY (HUD) - In some embodiments, the invention involves using a heads up display (HUD) or head mounted display (HMD) to view a representation of a user's fingers with an input device communicatively connected to a computing device. The keyboard/finger representation is displayed along with the application display received from a computing device. In an embodiment, the input device has an accelerometer to detect tilting movement in the input device, and send this information to the computing device. An embodiment provides visual feedback of key or control actuation in the HUD/HMD display. Other embodiments are described and claimed. | 10-04-2012 |
20120249588 | Augmented Reality Data Center Visualization - Datacenter datasets and other information are visually displayed in an augmented reality view using a portable device. The visual display of this information is presented along with a visual display of the actual datacenter environment. The combination of these two displays allows installers and technicians to view instructions or other data that are visually correlated to the environment in which they are working. | 10-04-2012 |
20120249589 | Method for the Output of Graphic Driving Indications - The method outputs graphic driving indications for assisting a motor vehicle driver in a driving maneuver. The graphic driving indications are displayed by a head-up display. A first graphic driving indication is in the form of a traffic lane change indication pointing out to the driver the direction from a traffic lane traveled at the beginning of the maneuver to a desired traffic lane. A second graphic driving indication is output in the form of a contact-analog traffic lane marking graphically emphasizing the desired traffic lane relative to other traffic lanes. A third graphic driving indication is in the form of a contact-analog maneuvering impulse including a driving funnel originating from the desired traffic lane and corresponding to the driving maneuver. A fourth graphic driving indication is a symbolic maneuvering display indication which symbolically displays the beginning driving maneuver after the vehicle enters into the driving funnel. | 10-04-2012 |
20120249590 | SELECTIVE HAND OCCLUSION OVER VIRTUAL PROJECTIONS ONTO PHYSICAL SURFACES USING SKELETAL TRACKING - A head mounted device provides an immersive virtual or augmented reality experience for viewing data and enabling collaboration among multiple users. Rendering images in a virtual or augmented reality system may include performing operations for capturing an image of a scene in which a virtual object is to be displayed, recognizing a body part present in the captured image, and adjusting a display of the virtual object based upon the recognized body part. The rendering operations may also include capturing an image with a body mounted camera, capturing spatial data with a body mounted sensor array, recognizing objects within the captured image, determining distances to the recognized objects within the captured image, and displaying the virtual object on a head mounted display. | 10-04-2012 |
20120249591 | SYSTEM FOR THE RENDERING OF SHARED DIGITAL INTERFACES RELATIVE TO EACH USER'S POINT OF VIEW - A head mounted device provides an immersive virtual or augmented reality experience for viewing data and enabling collaboration among multiple users. Rendering images in a virtual or augmented reality system may include capturing an image and spatial data with a body mounted camera and sensor array, receiving input indicating a first anchor surface, calculating parameters with respect to the body mounted camera and displaying a virtual object such that the virtual object appears anchored to the selected first anchor surface. Further rendering operations may include receiving a second input indicating a second anchor surface within the captured image that is different from the first anchor surface, calculating parameters with respect to the second anchor surface and displaying the virtual object such that the virtual object appears anchored to the selected second anchor surface and moved from the first anchor surface. | 10-04-2012 |
20120249592 | Situational Awareness Components of an Enhanced Vision System - A virtual sphere provided by an enhanced vision system includes synthetic imagery filling said virtual sphere and a common view window mapped to a dedicated position within the synthetic imagery. Imagery of the line of sight of a user is displayed in the common view window. By providing the common view window, visual communication between all users may be possible. By connecting a virtual user to the enhanced vision system and by displaying the imagery for the line of sight of the virtual user in the common view window, the workload of a human operator may be reduced and the time line of actions may be shortened. The enhanced vision system of the present invention may be used, but is not limited to, in a military aircraft to enhance the situational awareness of the flight crew. | 10-04-2012 |
20120256953 | SYSTEMS AND METHODS FOR MANAGING ERRORS UTILIZING AUGMENTED REALITY - Systems for managing errors utilizing augmented reality are provided. One system includes a transceiver configured to communicate with a systems management console, a capture device for capturing environmental inputs, a memory storing code comprising an augmented reality module, and a processor. The processor, when executing the code comprising the augmented reality module, is configured to perform the method below. One method includes capturing an environmental input, identifying a target device in the captured environmental input, and querying the systems management console regarding a status condition for the target device. Also provided are physical computer storage mediums including a computer program product for performing the above method. | 10-11-2012 |
20120256954 | Interference Based Augmented Reality Hosting Platforms - Interference-based augmented reality hosting platforms are presented. Hosting platforms can include networking nodes capable of analyzing a digital representation of a scene to derive interference among elements of the scene. The hosting platform utilizes the interference to adjust the presence of augmented reality objects within an augmented reality experience. Elements of a scene can constructively interfere, enhancing presence of augmented reality objects; or destructively interfere, suppressing presence of augmented reality objects. | 10-11-2012 |
20120256955 | SYSTEM AND METHOD FOR ENABLING AUGMENTED REALITY IN REPORTS - In accordance with various embodiments of the present invention, the electronic device checks whether a viewing application capable of rendering augmented reality (AR) is present in the electronic device. If such a viewing application is present, the viewing application identifies any augmented reality (AR) markers present in the physical document. On identifying an AR marker in the document, the viewing application (or the electronic device) fetches relevant content from a predefined source and displays the relevant content to the user. The relevant content may be any of a chart, a 3D chart, a report, a video recording, and so forth. If the viewing application is not already present, the electronic device downloads the viewing application from a predefined location. | 10-11-2012 |
20120256956 | DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND PROGRAM - Aspects of the present invention include a display control device comprising a determining unit configured to determine an orientation of a real object in a real space image. The device may also comprise a control unit configured to select between first and second orientations of a virtual object based on the real object orientation, one of the first or second virtual object orientations aligning the virtual object with the orientation of the real object, and output an image of the virtual object based on the selected orientation, for display on an associated display device. | 10-11-2012 |
20120262485 | SYSTEM AND METHOD OF INPUT PROCESSING FOR AUGMENTED REALITY - A method of input processing for augmented reality comprises the steps of capturing a video image, generating an augmented image layer for superposition over the captured video image, and for a region of the augmented image layer, detecting for each pixel in the region a property of a corresponding pixel in the captured video image, and mapping with a first mapping the property detected for each pixel of the region back to a reference two-dimensional array of pixels; and generating an input based upon the property values as mapped to the reference two-dimensional array of pixels. | 10-18-2012 |
20120262486 | SYSTEM AND METHOD OF USER INTERACTION FOR AUGMENTED REALITY - A method of user interaction in augmented reality comprises the steps of capturing a video image of a scene, and for each pixel in at least a sub-region of the captured video, classifying the pixel as either a skin or non-skin pixel responsive to whether the colour of the pixel exceeds a predetermined threshold purity of red; and generating a mask based upon the classification of the pixels of the captured video, generating an augmentation image layer to superpose on the captured video image, and limiting a mode of combination of the captured video and the augmentation image layer, responsive to the mask. | 10-18-2012 |
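As an illustrative aside (not part of the filing), the skin-classification step described in the abstract above could be sketched as below. The purity measure (red channel as a fraction of total intensity), the threshold value, and all names are assumptions for illustration only:

```python
def red_purity(pixel):
    """Fraction of the pixel's intensity carried by the red channel."""
    r, g, b = pixel
    total = r + g + b
    return r / total if total else 0.0

def skin_mask(image, threshold=0.45):
    """Classify each pixel as skin (True) or non-skin (False) by
    whether its red purity exceeds the threshold, yielding a mask
    with the same dimensions as the image."""
    return [[red_purity(px) > threshold for px in row] for row in image]

# A tiny 2x2 frame: skin-toned pixels in the left column,
# blue and grey pixels in the right column.
frame = [
    [(200, 120, 100), (30, 90, 200)],
    [(180, 80, 60), (90, 90, 90)],
]
mask = skin_mask(frame)
```

The resulting mask would then gate how the augmentation layer is combined with the captured video, as the abstract describes.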
20120268490 | AUGMENTED REALITY EXTRAPOLATION TECHNIQUES - Augmented reality extrapolation techniques are described. In one or more implementations, an augmented-reality display is rendered based at least in part on a first basis that describes a likely orientation or position of at least a part of the computing device. The rendered augmented-reality display is updated based at least in part on data that describes a likely orientation or position of the part of the computing device that was assumed during the rendering of the augmented-reality display. | 10-25-2012 |
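As an illustrative aside (not part of the filing), the extrapolate-then-correct pattern described in the abstract above could be sketched as below, reduced to a single orientation angle. The constant-velocity prediction model and all names are assumptions for illustration only:

```python
def predict_orientation(previous, current, dt_render):
    """Extrapolate a likely orientation angle at display time from the
    last two measured samples, assuming constant angular velocity over
    the time the frame takes to render."""
    velocity = current - previous
    return current + velocity * dt_render

def correction(predicted, actual):
    """Offset by which to shift the already-rendered frame so it matches
    the orientation actually measured once rendering has finished."""
    return actual - predicted

# Rendering starts with samples 10.0 and 12.0 degrees, one frame apart;
# the frame is rendered on the predicted basis, then nudged by the
# correction when the post-render measurement arrives.
predicted = predict_orientation(10.0, 12.0, 1.0)
shift = correction(predicted, 13.5)
```

Here the frame would be rendered as if the device were at 14.0 degrees, then shifted by -0.5 degrees before display.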
20120268491 | Color Channels and Optical Markers - Color channel optical marker techniques are described. In one or more implementations, a plurality of color channels obtained from a camera are examined, each of the color channels depicting an optical marker having a different scale than another optical marker depicted in another one of the color channels. At least one optical marker is identified in a respective one of the plurality of color channels and an optical basis is computed using the identified optical marker usable to describe at least a position or orientation of a part of the computing device. | 10-25-2012 |
20120268492 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - Provided is an image processing apparatus including an image acquisition unit for acquiring an input image, a selection unit for selecting a recognition method of an object shown in the input image from a plurality of recognition methods, a recognition unit for recognising the object shown in the input image using the recognition method selected by the selection unit, and a display control unit for superimposing a virtual object that is associated with the object recognised by the recognition unit onto the input image and displaying the virtual object. The display control unit changes display of the virtual object according to the recognition method selected by the selection unit. | 10-25-2012 |
20120268493 | INFORMATION PROCESSING SYSTEM FOR AUGMENTED REALITY - An example of an information processing system according to the present disclosure controls a single virtual object based on a plurality of pieces of real object information, including posture information that indicates the respective postures of a plurality of real objects, in order to enable wider variations of control when the virtual object is controlled using a so-called AR marker. | 10-25-2012 |
20120268494 | HEAD MOUNTED DISPLAY AND CONTROL METHOD THEREFOR - A viewpoint information calculation unit ( | 10-25-2012 |
20120287158 | DISPLAY APPARATUS, CONTROL METHOD FOR DISPLAY APPARATUS, AND STORAGE MEDIUM - To display a captured image, a display apparatus includes an acquisition unit, a display unit, and a setting unit. The acquisition unit acquires, from an external apparatus, additional information that is to be displayed in association with an object in the captured image. The display unit displays the additional information acquired by the acquisition unit in association with the object. The setting unit allows a user to set a set number of objects in association with which the additional information is to be displayed by the display unit. The acquisition unit acquires, from the external apparatus, the additional information according to the set number of objects. | 11-15-2012 |
20120287159 | VIEWING OF REAL-TIME, COMPUTER-GENERATED ENVIRONMENTS - A method of generating a view of a computer-generated environment using a location in a real-world environment, comprising receiving real-time data regarding the location of a device in the real-world environment; mapping the real-time data regarding the device into a virtual camera within a directly-correlating volume of space in the computer-generated environment; updating the virtual camera location using the real-time data, such that the virtual camera is assigned a location in the computer-generated environment which corresponds to the location of the device in the real-world environment; and using the virtual camera to generate a view of the computer-generated environment from the assigned location in the computer-generated environment. | 11-15-2012 |
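As an illustrative aside (not part of the filing), the mapping into a "directly-correlating volume of space" described in the abstract above could be sketched as a per-axis linear scaling. The box representation and all names are assumptions for illustration only:

```python
def map_to_virtual(real_pos, real_box, virtual_box):
    """Map a real-world position into the directly-correlating volume
    of the computer-generated environment by per-axis linear scaling.

    real_box and virtual_box give (min, max) extents for each axis.
    """
    mapped = []
    for p, (r0, r1), (v0, v1) in zip(real_pos, real_box, virtual_box):
        t = (p - r0) / (r1 - r0)      # normalised position on this axis
        mapped.append(v0 + t * (v1 - v0))
    return tuple(mapped)

# A 4 m x 4 m x 3 m room correlated with a 40 x 40 x 30 unit volume;
# the virtual camera is assigned the corresponding location.
real_box = [(0.0, 4.0), (0.0, 4.0), (0.0, 3.0)]
virtual_box = [(0.0, 40.0), (0.0, 40.0), (0.0, 30.0)]
camera = map_to_virtual((2.0, 1.0, 1.5), real_box, virtual_box)
```

Updating the virtual camera in real time then amounts to re-running this mapping on each new device location sample.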
20120293546 | AUGMENTED-REALITY MOBILE COMMUNICATOR WITH ORIENTATION - A mobile communication device adapted to communicate with a plurality of pre-determined sources disposed at pre-determined different locations includes: a receiver adapted to receive wirelessly communicated visual information from a particular source at a pre-determined location; an orientation detector that detects the orientation of the receiver relative to the pre-determined location of the particular source to provide an orientation signal indicating that the mobile communication device is oriented toward the pre-determined location of the particular source; and an interface circuit responsive to the wirelessly communicated visual information and the orientation signal to present the visual information to a user. | 11-22-2012 |
20120293547 | Management Of Access To And Life Cycles Of Virtual Signs - Many different methods, apparatus, and program products are disclosed for handling virtual signs over their life cycles. Potential future locations and headings of a mobile device are used to fetch virtual signs in advance of when the virtual signs might be used. Techniques are disclosed for handling timelines of virtual signs, including registering and responding to events in the timelines. Techniques are disclosed for allowing localities to license virtual signs. Techniques are disclosed to allow advertisers to bid for and win virtual sign competitions and product placement. Techniques are presented for presenting billing information to owners of virtual signs. | 11-22-2012 |
20120293548 | EVENT AUGMENTATION WITH REAL-TIME INFORMATION - A system and method to present a user wearing a head mounted display with supplemental information when viewing a live event. A user wearing an at least partially see-through, head mounted display views the live event while simultaneously receiving information on objects, including people, within the user's field of view, while wearing the head mounted display. The information is presented in a position in the head mounted display which does not interfere with the user's enjoyment of the live event. | 11-22-2012 |
20120293549 | COMPUTER-READABLE STORAGE MEDIUM HAVING INFORMATION PROCESSING PROGRAM STORED THEREIN, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD - In a game apparatus, a plurality of images of a real object are taken from a plurality of directions, and the plurality of images are previously stored in a storage device so as to be associated with imaging directions. The game apparatus causes an outer imaging section to take an image including a marker positioned in a real space, and detects the marker included in the taken image. The game apparatus calculates, based on the detected marker, a position of the outer imaging section in a marker coordinate system based on the marker. The game apparatus calculates a vector indicating a direction from the position of the outer imaging section toward the marker, selects, based on the vector, an image from among the plurality of images stored in the storage device, and displays the selected image on the upper LCD. | 11-22-2012 |
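As an illustrative aside (not part of the filing), the image-selection step described in the abstract above could be sketched as picking the stored image whose imaging direction is nearest the direction from the camera toward the marker. Reducing the problem to 2D with the marker at the origin, and with all names assumed for illustration only:

```python
import math

def select_image(camera_pos, stored):
    """Pick the pre-captured image whose imaging direction is closest
    to the direction from the outer imaging section toward the marker,
    with the marker at the origin of the marker coordinate system."""
    # Vector from the camera toward the marker (marker at the origin).
    view_angle = math.atan2(-camera_pos[1], -camera_pos[0])

    def angular_gap(entry):
        d = abs(view_angle - entry["direction"])
        return min(d, 2 * math.pi - d)   # wrap around the circle

    return min(stored, key=angular_gap)

# Images of the real object captured from four directions (radians).
stored = [{"direction": a, "name": n}
          for a, n in [(0.0, "east"), (math.pi / 2, "north"),
                       (math.pi, "west"), (-math.pi / 2, "south")]]
chosen = select_image((1.0, 0.2), stored)
```

A camera slightly north of due east of the marker looks roughly westward at it, so the "west" capture is the best match.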
20120293550 | LOCALIZATION DEVICE AND LOCALIZATION METHOD WITH THE ASSISTANCE OF AUGMENTED REALITY - A localization device assisted with augmented reality and a localization method thereof are provided. The localization device includes a subject object coordinate generating unit, a relative angle determining element and a processing unit. The subject object coordinate generating unit selects at least three subject objects outside the localization device and obtains at least three subject object coordinate values of the at least three subject objects. The relative angle determining element determines at least two viewing angle differences between any two of the at least three subject objects. The processing unit generates a location coordinate value of the localization device according to the at least two viewing angle differences and the at least three subject object coordinate values. | 11-22-2012 |
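As an illustrative aside (not part of the filing), the localization principle in the abstract above (recovering a position from viewing-angle differences to three known subject objects) can be sketched with a brute-force search; a real device would solve the resection problem analytically. The grid-search approach and all names are assumptions for illustration only:

```python
import math

def wrap(a):
    """Wrap an angle into (-pi, pi]."""
    while a <= -math.pi:
        a += 2 * math.pi
    while a > math.pi:
        a -= 2 * math.pi
    return a

def angle_diffs(pos, objects):
    """Viewing-angle differences between consecutive subject objects
    as seen from pos."""
    angles = [math.atan2(oy - pos[1], ox - pos[0]) for ox, oy in objects]
    return [wrap(b - a) for a, b in zip(angles, angles[1:])]

def localize(objects, observed, span=5.0, step=0.05):
    """Search a grid for the location whose predicted viewing-angle
    differences best match the observed ones (least squares)."""
    best, best_err = None, float("inf")
    steps = int(span / step)
    for i in range(steps + 1):
        for j in range(steps + 1):
            p = (i * step, j * step)
            # Viewing angles are undefined at a subject object itself.
            if any(abs(p[0] - ox) < 1e-9 and abs(p[1] - oy) < 1e-9
                   for ox, oy in objects):
                continue
            err = sum(wrap(d - o) ** 2
                      for d, o in zip(angle_diffs(p, objects), observed))
            if err < best_err:
                best, best_err = p, err
    return best

objects = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]
observed = angle_diffs((2.0, 1.0), objects)   # device truly at (2, 1)
estimate = localize(objects, observed)
```

The two angle differences constrain the device to the intersection of two circular arcs through the subject objects, which the search recovers to within the grid resolution.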
20120293551 | USER INTERFACE ELEMENTS AUGMENTED WITH FORCE DETECTION - A computing device includes a touch screen display with at least one force sensor, each of which provides a signal in response to contact with the touch screen display. Using force signals from the at least one force sensor that result from contact with the touch screen, the operation of the computing device may be controlled, e.g. to select one of a plurality of overlaying interface elements, to prevent the unintended activation of suspect commands that require secondary confirmation, and to mimic the force requirements of real-world objects in augmented reality applications. | 11-22-2012 |
20120299962 | METHOD AND APPARATUS FOR COLLABORATIVE AUGMENTED REALITY DISPLAYS - Methods and apparatuses are provided for facilitating interaction with augmented reality devices, such as augmented reality glasses and/or the like. A method may include receiving a visual recording of a view from a first user from an imaging device. The method may also include displaying the visual recording on a display. Further, the method may include receiving an indication of a touch input to the display. In addition, the method may include determining, by a processor, a relation of the touch input to the display. The method may also include displaying, based at least in part on the determined relation, an icon representative of the touch input to the imaging device. Corresponding apparatuses are also provided. | 11-29-2012 |
20120299963 | METHOD AND SYSTEM FOR SELECTION OF HOME FIXTURES - A system for selecting one of a plurality of home fixtures, includes a processor, a memory including a database, an input device, and a display. The processor executes instructions to present information corresponding to the home fixtures. The memory is in communication with the processor. The database contains the information corresponding to the home fixtures, such as materials, models, category, accessories, 360° views, high definition images and specifications. The database also contains information corresponding to a context, such as a counter top surface, that may be associated with the home fixtures. The input device permits a user to select the one of the home fixtures to be presented. The display is in communication with the processor, and is configured to show the information corresponding to the home fixtures. The system may also include a camera that permits acquisition of information corresponding to a custom context for the home fixture. | 11-29-2012 |
20120306917 | COMPUTER-READABLE STORAGE MEDIUM HAVING STORED THEREIN IMAGE DISPLAY PROGRAM, IMAGE DISPLAY APPARATUS, IMAGE DISPLAY METHOD, IMAGE DISPLAY SYSTEM, AND MARKER - A content of an image corresponding to a recognition object in a captured image is identified, and first identification information and second identification information are acquired. Based on the first identification information and second identification information, one display object is determined from a plurality of virtual objects stored in advance in a predetermined storage medium. Then, an image of the determined virtual object captured by a virtual camera is displayed on a predetermined display section. | 12-06-2012 |
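The two-stage selection this abstract outlines (a recognized marker yields first and second identification values, and their combination picks one virtual object from a pre-stored set) amounts to a keyed lookup. The table contents below are invented for the example.

```python
# Hypothetical pre-stored table: (first_id, second_id) -> virtual object.
VIRTUAL_OBJECTS = {
    ("card_A", "pose_1"): "knight_standing",
    ("card_A", "pose_2"): "knight_attacking",
    ("card_B", "pose_1"): "dragon_idle",
}

def choose_display_object(first_id, second_id):
    """Return the single display object determined by both
    identification values, or None if the pair is unknown."""
    return VIRTUAL_OBJECTS.get((first_id, second_id))
```

In a real system the chosen object would then be rendered by the virtual camera and composited onto the display section.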
20120306918 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - Disclosed herein is an image processing apparatus including a display control part configured to display a human-figure virtual object image in a pose from which to extract information necessary for motion capture, the human-figure virtual object image being the object to be handled corresponding to a person targeted to be recognized. | 12-06-2012 |
20120306919 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - Disclosed herein is an image processing apparatus including: an image processing part configured such that if an image taken of a user includes an image of the clothes worn by the user and making up a clothes region, if the image of the clothes is to be replaced with an image of virtual clothes prepared beforehand and making up a virtual clothes region, and if the clothes region overlaid with the virtual clothes region has a protruded region protruding from the virtual clothes region, then the image processing part performs a process of making the virtual clothes region coincide with the clothes region. | 12-06-2012 |
20120306920 | AUGMENTED REALITY AND FILTERING - A system (and corresponding method) that can enhance a user experience by augmenting real-world experiences with virtual world data is provided. The augmented reality system discloses various techniques to personalize real-world experiences by overlaying or interspersing virtual capabilities (and data) with real world situations. The innovation can also filter, rank, modify or ignore virtual-world information based upon a particular real-world class, user identity or context. | 12-06-2012 |
20120313969 | PROVIDING A SIMULATION OF WEARING ITEMS SUCH AS GARMENTS AND/OR ACCESSORIES - A user may simulate wearing real-wearable items, such as virtual garments and accessories. A virtual-outfitting interface may be provided for presentation to the user. An item-search/selection portion within the virtual-outfitting interface may be provided. The item-search/selection portion may depict one or more virtual-wearable items corresponding to one or more real-wearable items. The user may be allowed to select at least one virtual-wearable item from the item-search/selection portion. A main display portion within the virtual-outfitting interface may be provided. The main display portion may include a composite video feed that incorporates a video feed of the user and the selected at least one virtual-wearable item such that the user appears to be wearing the selected at least one virtual-wearable item in the main display portion. | 12-13-2012 |
20120320092 | METHOD AND APPARATUS FOR EXHIBITING MIXED REALITY BASED ON PRINT MEDIUM - An apparatus for exhibiting mixed reality based on a print medium includes a command identification module and a content reproduction module. The command identification module identifies a hand gesture of a user performed on printed matter in the print medium to recognize a user input command corresponding to the hand gesture. The content reproduction module provides a digital content corresponding to the printed matter onto a display area on the print medium. | 12-20-2012 |
20120327114 | DEVICE AND ASSOCIATED METHODOLOGY FOR PRODUCING AUGMENTED IMAGES - An augmented image producing device includes a processor programmed to receive scene imagery from an imaging device and to identify at least one marker in the scene imagery. The processor then determines whether the at least one marker corresponds to a known pattern and, if the marker does correspond to a known pattern, the scene imagery is augmented with computer-generated graphics dispersed from a position of the at least one marker. Once the scene imagery is augmented, the computer-generated graphics are displayed on a display screen. The augmented scene imagery can then be used, for example, to actively engage audience members during an event. | 12-27-2012 |
20120327115 | Signal-enhancing Beamforming in an Augmented Reality Environment - An augmented reality environment allows interaction between virtual and real objects. Beamforming techniques are applied to signals acquired by an array of microphones to allow for simultaneous spatial tracking and signal acquisition from multiple users. Localization information such as from other sensors in the environment may be used to select a particular set of beamformer coefficients and resulting beampattern focused on a signal source. Alternately, a series of beampatterns may be used iteratively to localize the signal source in a computationally efficient fashion. The beamformer coefficients may be pre-computed. | 12-27-2012 |
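The beamforming the abstract relies on can be illustrated with the simplest variant, a delay-and-sum beamformer: each microphone signal is shifted by a steering delay so the wavefronts from the desired direction align, then the signals are averaged. This sketch assumes integer-sample delays and is not the patent's (pre-computed coefficient) formulation.

```python
import numpy as np

def delay_and_sum(signals, delays):
    """Align each microphone signal by its steering delay (in samples)
    and average, reinforcing the source in the steered direction.

    signals: 2-D array of shape (num_mics, num_samples)
    delays:  per-microphone integer sample delays
    """
    num_mics, n = signals.shape
    out = np.zeros(n)
    for sig, d in zip(signals, delays):
        out += np.roll(sig, -d)   # undo the propagation delay
    return out / num_mics

# Two mics hear the same tone, the second delayed by 5 samples.
t = np.arange(200)
tone = np.sin(2 * np.pi * t / 20.0)
signals = np.vstack([tone, np.roll(tone, 5)])
aligned = delay_and_sum(signals, [0, 5])
```

Steering toward the wrong direction (wrong delays) leaves the signals misaligned and partially cancelled, which is what lets a series of beampatterns localize the source as the abstract describes.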
20120327116 | TOTAL FIELD OF VIEW CLASSIFICATION FOR HEAD-MOUNTED DISPLAY - Virtual images are located for display in a head-mounted display (HMD) to provide an augmented reality view to an HMD wearer. Sensor data may be collected from on-board sensors provided on an HMD. Additionally, other data may be collected from external sources. Based on the collected sensor data and other data, the position and rotation of the HMD wearer's head relative to the HMD wearer's body and surrounding environment may be determined. After resolving the HMD wearer's head position, the HMD wearer's total field of view (TFOV) may be classified into regions. Virtual images may then be located in the classified TFOV regions to locate the virtual images relative to the HMD wearer's body and surrounding environment. | 12-27-2012 |
20120327117 | DIGITALLY ENCODED MARKER-BASED AUGMENTED REALITY (AR) - A system and method for markers with digitally encoded geographic coordinate information for use in an augmented reality (AR) system. The method provides accurate location information for registration of digital data and real world images within an AR system. The method includes automatically matching digital data within an AR system by utilizing a digitally encoded marker (DEM) containing world coordinate information system and mathematical offset of digital data and a viewing device. The method further includes encoding geographic coordinate information into markers (e.g., DEMs) and decoding the coordinate information into an AR system. Through use of the method and corresponding system, marker technology and the basis of geo-location technology can be combined into a geo-located marker, thereby solving the problem of providing accurate registration within an augmented reality. | 12-27-2012 |
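A toy round trip shows the kind of encoding a digitally encoded marker (DEM) could carry: geographic coordinates packed into a single integer payload and recovered on the AR side. The fixed-point packing scheme here is an assumption for illustration, not the patent's encoding.

```python
def encode_dem(lat, lon, precision=1e-5):
    """Pack lat/lon into one integer using fixed-point offsets.
    Latitude maps to 0..18_000_000 steps, longitude to 0..36_000_000."""
    lat_i = round((lat + 90.0) / precision)
    lon_i = round((lon + 180.0) / precision)
    return lat_i * 36_000_001 + lon_i

def decode_dem(code, precision=1e-5):
    """Recover (lat, lon) from the packed marker payload."""
    lat_i, lon_i = divmod(code, 36_000_001)
    return lat_i * precision - 90.0, lon_i * precision - 180.0
```

At `precision=1e-5` degrees the payload resolves to roughly a metre, which is the order of accuracy the abstract targets for registration.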
20120327118 | DISPLAY CONTROL APPARATUS, DISPLAY CONTROL METHOD AND PROGRAM - There is provided a display control apparatus including a display control unit that adds a virtual display to a real object containing a region associated with a time. The display control unit may add the virtual display to the region. | 12-27-2012 |
20120327119 | USER ADAPTIVE AUGMENTED REALITY MOBILE COMMUNICATION DEVICE, SERVER AND METHOD THEREOF - The present disclosure provides an augmented reality mobile communication device and a method and system thereof, which can provide digital content items to individual users by reflecting a user preference associated with user circumstances in the provision of augmented reality. The augmented reality mobile communication device includes: a context inference unit that receives sensory information and predicts a user context regarding a user of a mobile communication device based on the sensory information; a transmission unit that transmits user context data to a server; a receiving unit that receives a personalized content item from the server, the personalized content item being generated based on user profile data and user preference data corresponding to user context data; and an augmented reality content renderer that overlays the received personalized content item on an image captured by a camera. | 12-27-2012 |
20120327120 | METHOD AND APPARATUS FOR CREATING VIRTUAL GRAFFITI IN A MOBILE VIRTUAL AND AUGMENTED REALITY SYSTEM - A method and system is provided for easily creating virtual graffiti that will be left for a particular device to view. During operation a device will be placed near a first point that is used to define a boundary for the virtual graffiti. The device will locate the first point, and use the point to define the boundary. The device will receive an image that is to be used as virtual graffiti, and will fit the image within or upon the boundary of the virtual graffiti. For example, the device may be consecutively placed near four points that will define a polygon to be used as the boundary for the virtual graffiti. An image will then be received, and the image will be fit within the polygon. Additionally, a device may create virtual graffiti from an image and a boundary. | 12-27-2012 |
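The boundary step above (four consecutively captured points define a polygon, and the graffiti image is fitted within it) can be sketched with an axis-aligned bounding box and a uniform scale. The geometry here is purely illustrative.

```python
def bounding_box(points):
    """Axis-aligned bounding box of the captured boundary points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def fit_rect(image_w, image_h, box):
    """Scale the graffiti image uniformly so it fits inside the box,
    preserving its aspect ratio. Returns the fitted (width, height)."""
    x0, y0, x1, y1 = box
    scale = min((x1 - x0) / image_w, (y1 - y0) / image_h)
    return image_w * scale, image_h * scale
```

A fuller implementation would warp the image into the actual quadrilateral (a homography) rather than its bounding box.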
20130002717 | POSITIONAL CONTEXT DETERMINATION WITH MULTI MARKER CONFIDENCE RANKING - An augmented reality (AR) system receives image data from a camera of a mobile device. The image data includes images of objects in the field of view of the camera and at least two markers. A repository contains position information for objects and markers that may potentially appear in an image. The AR system receives a confidence level for the markers in the image, and selects the marker with the highest confidence level. The transformation and position of the marker in the image data is used to determine the identity of objects in the field of view. Further, the AR system generates overlaid image data that can be incorporated into the image data and sent back to be displayed on the mobile device. The overlaid image data is positioned according to the position data for the object it is associated with, and the transformation and position data for the selected marker. | 01-03-2013 |
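The confidence-ranked selection this entry describes reduces to choosing the detected marker with the highest confidence to anchor the overlay. The detection field names below are assumed for the sketch.

```python
def select_anchor_marker(detections):
    """Among markers detected in a frame, return the one with the
    highest confidence, or None if nothing was detected.

    detections: list of dicts with 'id', 'confidence', and 'pose' keys
    (field names are illustrative assumptions).
    """
    if not detections:
        return None
    return max(detections, key=lambda d: d["confidence"])
```

The selected marker's pose (transformation and position) would then drive the placement of the overlaid image data.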
20130009993 | Systems, Computer Medium and Computer-Implemented Methods for Providing Health Information to Employees Via Augmented Reality Display - Provided are embodiments of systems, computer medium and computer-implemented methods for providing feedback of health information to an employee when the employee is engaged in their work duties. The method including receiving health data output by a set of health sensors provided on or near the employee when the employee is engaged in work duties. The health sensors comprising at least one of biometric and biomechanic sensors. The health data corresponding to biometric and/or biomechanic characteristics sensed by the set of health sensors. The method including processing the health data to identify health status information for the employee, and providing for display via an augmented reality display, augmented reality content including the health status information. The augmented reality display providing the employee with an augmented reality view including a real world view of a surrounding environment having the health status information for the employee overlaid thereon. | 01-10-2013 |
20130009994 | METHODS AND APPARATUS TO GENERATE VIRTUAL-WORLD ENVIRONMENTS - Example methods and apparatus to generate virtual-world environments are disclosed. A disclosed example method involves receiving real-world data associated with a real-world environment in which a person is located at a particular time and receiving virtual-reality data representative of a virtual-world environment corresponding to the real-world environment in which the person was located at the particular time. The method also involves displaying the virtual-world environment based on the virtual-reality data and displaying, in connection with the virtual-world environment, a supplemental visualization based on supplemental user-created information. The supplemental user-created information is obtained based on the real-world data. | 01-10-2013 |
20130016123 | SYSTEMS AND METHODS FOR AN AUGMENTED REALITY PLATFORM - Systems and methods for augmenting a view of reality. In an embodiment, a first medium is superimposed over a first view of reality. One or more changes to the superimposed medium are received, such as a change in transparency, change in size, and change in position. A first marker, comprising at least a portion of the first view of reality, is generated. First metadata related to the first medium and/or the first marker are also generated. The first medium, the first marker, and the first metadata are sent to a depository. In a further embodiment, a second medium, second marker, and second metadata are received from the depository. The second marker is matched to at least a portion of a second view of reality, and the second medium is superimposed over the at least a portion of the second view of reality to generate an augmented view of reality. | 01-17-2013 |
20130021373 | Automatic Text Scrolling On A Head-Mounted Display - A see-through head-mounted display (HMD) device, e.g., in the form of glasses, provides a view of an augmented reality image including text, such as in an electronic book or magazine, word processing document, email, karaoke, teleprompter or other public speaking assistance application. The presentation of text and/or graphics can be adjusted based on sensor inputs indicating a gaze direction, focal distance and/or biological metric of the user. A current state of the text can be bookmarked when the user looks away from the image and subsequently resumed from the bookmarked state. A forward facing camera can adjust the text if a real world object passes in front of it, or adjust the appearance of the text based on a color or pattern of a real world background object. In a public speaking or karaoke application, information can be displayed regarding a level of interest of the audience and names of audience members. | 01-24-2013 |
20130021374 | Manipulating And Displaying An Image On A Wearable Computing System - Example methods and systems for manipulating and displaying a real-time image and/or photograph on a wearable computing system are disclosed. A wearable computing system may provide a view of a real-world environment of the wearable computing system. The wearable computing system may image at least a portion of the view of the real-world environment in real-time to obtain a real-time image. The wearable computing system may receive at least one input command that is associated with a desired manipulation of the real-time image. The at least one input command may be a hand gesture. Then, based on the at least one received input command, the wearable computing system may manipulate the real-time image in accordance with the desired manipulation. After manipulating the real-time image, the wearable computing system may display the manipulated real-time image in a display of the wearable computing system. | 01-24-2013 |
20130027428 | Generating a Discussion Group in a Social Network Based on Metadata - The present invention includes a system and method for generating a discussion in a social network based on visual search results. A mixed media reality (MMR) engine receives images from a user device and identifies matching MMR objects. A social network application determines whether metadata associated with the images from user devices is related to an existing discussion group. If the discussion group does not yet exist, the social network application generates the discussion group and provides the user devices with information about the discussion group. | 01-31-2013 |
20130027429 | SYSTEM AND METHOD FOR LOCATIONAL MESSAGING - Herein is disclosed a positional content platform and related systems and methods. According to some embodiments, the platform includes a mobile processing and communication device and a service layer executing on a server. The mobile processing and communication device communicates with the service layer. By virtue of the communication, a user of the mobile device is able to locate and view digital content that has been created and stored on the platform. The aforementioned content is associated with a geographic location, and, according to some embodiments, the content is represented in a user interface by an icon that is superimposed over a field of view representative of a geographic region. | 01-31-2013 |
20130027430 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM - A method is provided for superimposing schedule data on a temporal measurement object. The method comprises receiving image data representing an input image. The method further comprises detecting the presence of a temporal measurement object in the input image based on features of the temporal measurement object detected in the image data. The method further comprises providing, in response to detection of the presence of the temporal measurement object in the input image, schedule data for superimposing on a user's view of the temporal measurement object. | 01-31-2013 |
20130033522 | PREPOPULATING APPLICATION FORMS USING REAL-TIME VIDEO ANALYSIS OF IDENTIFIED OBJECTS - Embodiments of the invention are directed to methods and apparatuses for populating documents based on identification of objects in an augmented reality environment. The method includes capturing a video stream using a mobile computing device; determining, using a computing device processor, the object; identifying a document associated with the object; populating at least a portion of the document; and submitting the document. The method may also include presenting indicators associated with the user, the identified document, or a financial transaction associated with the document. The method may also include providing recommendations or suggestions to the user related to alternative offers associated with the document. Systems and computer program products for populating forms using video analysis of identified objects are also provided. | 02-07-2013 |
20130038631 | SYSTEMS AND METHODS FOR A VIRTUAL TERRAIN DISPLAY - Embodiments of the present invention provide improved systems and methods for providing a virtual terrain display. In one embodiment, a method comprises identifying a location within an enclosure. The location is referenced against an external environment containing the enclosure. The method also comprises identifying a portion of a structure of the enclosure. The portion of the structure exists between the location and the external environment and blocks a view of the external environment. The method also comprises generating a display depicting a view of the external environment from the location; and applying a translucent structure representation to the display. The structure representation is a visual depiction of the portion of the structure and appears in front of the depicted view without blocking the depicted view of the external environment. | 02-14-2013 |
20130038632 | SYSTEM AND METHOD FOR IMAGE REGISTRATION OF MULTIPLE VIDEO STREAMS - Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements. | 02-14-2013 |
20130038633 | ASSEMBLING METHOD, OPERATING METHOD, AUGMENTED REALITY SYSTEM AND COMPUTER PROGRAM PRODUCT - An assembling method for assembling a measurement or production set-up includes providing an augmented reality system with a processing device, an output device and a sensing device. The sensing device captures sensing data belonging to a working space. The method then includes providing first and second set-up components having first and second markers at the working space, where the second set-up component is connectable to the first set-up component. The method captures the first and second markers by the sensing device and identifies the first and second markers. The processing device retrieves respective digital information assigned to the identified first and second markers from a database and makes a decision on the compatibility of the first set-up component with the second set-up component based on the retrieved digital information. An augmented representation of at least part of the captured sensing data and the decision on the compatibility is output. | 02-14-2013 |
20130044128 | CONTEXT ADAPTIVE USER INTERFACE FOR AUGMENTED REALITY DISPLAY - A user interface includes a virtual object having an appearance in context with a real environment of a user using a see-through, near-eye augmented reality display device system. A virtual type of object and at least one real world object are selected based on compatibility criteria for forming a physical connection like attachment, supporting or integration of the virtual object with the at least one real object. Other appearance characteristics, e.g. color, size or shape, of the virtual object are selected for satisfying compatibility criteria with the selected at least one real object. Additionally, a virtual object type and appearance characteristics of the virtual object may be selected based on a social context of the user, a personal context of the user or both. | 02-21-2013 |
20130044129 | LOCATION BASED SKINS FOR MIXED REALITY DISPLAYS - The technology provides embodiments for providing a location-based skin for a see-through, mixed reality display device system. In many embodiments, a location-based skin includes a virtual object viewable by a see-through, mixed reality display device system which has been detected in a specific location. Some location-based skins implement an ambient effect. The see-through, mixed reality display device system is detected to be present in a location and receives and displays a skin while in the location in accordance with user settings. User data may be uploaded and displayed in a skin in accordance with user settings. A location may be a physical space at a fixed position and may also be a space defined relative to a position of a real object, for example, another see-through, mixed reality display device system. Furthermore, a location may be a location within another location. | 02-21-2013 |
20130044130 | PROVIDING CONTEXTUAL PERSONAL INFORMATION BY A MIXED REALITY DEVICE - The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfy the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. An identifier and a position indicator of the person in the location is output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location. | 02-21-2013 |
20130044131 | SOFTWARE CONTROLLER FOR AUDIO MIXER EQUIPMENT - A method for revealing changes in settings of an analogue control console, the method comprising: | 02-21-2013 |
20130044132 | USER AUGMENTED REALITY FOR CAMERA-ENABLED MOBILE DEVICES - A user augmented reality (UAR) service for a camera-enabled mobile device obtains meta data regarding one or more images/video that are captured with such device and provides the meta data in the display of the mobile device. The meta data is interactive and allows the user to obtain additional information or specific types of information, such as information that will aid the user in making a decision regarding the identified objects or selectable action options that can be used to initiate actions with respect to the identified objects. | 02-21-2013 |
20130050259 | APPARATUS AND METHOD FOR SHARING DATA USING AUGMENTED REALITY (AR) - A first terminal includes: an image acquiring unit to acquire an image of a second terminal; a controller to control the first terminal and to acquire network information from the image of the second terminal; an AR configuration unit to create an AR display based on the image of the second terminal and the acquired network information; and a communication unit to communicate data between the first terminal and the second terminal via a network. A method includes: acquiring an image of a second terminal; acquiring network information of a network from the image of the second terminal; creating an AR display based on the image of the second terminal and the acquired network information; allowing a selection of data based on the AR display; and communicating the selected data between the terminal and the second terminal via the network. | 02-28-2013 |
20130050260 | COHERENT PRESENTATION OF MULTIPLE REALITY AND INTERACTION MODELS - A method for navigating concurrently and from point-to-point through multiple reality models is described. The method includes: generating, at a processor, a first navigatable virtual view of a first location of interest, wherein the first location of interest is one of a first virtual location and a first non-virtual location; and concurrently with the generating the first navigatable virtual view of the first location of interest, generating, at the processor, a second navigatable virtual view corresponding to a current physical position of an object, such that real-time sight at the current physical position is enabled within the second navigatable virtual view. | 02-28-2013 |
20130050261 | INFORMATION PROCESSING APPARATUS AND METHOD, INFORMATION PROCESSING SYSTEM, AND PROVIDING MEDIUM - The invention enables users to virtually attach information to situations in the real world, and also enables users to quickly and easily find out desired information. An IR sensor receives an IR signal transmitted from an IR beacon, and supplies the received signal to a sub-notebook PC. A CCD video camera takes in a visual ID from an object, and supplies the inputted visual ID to the sub-notebook PC. A user inputs, through a microphone, a voice to be attached to situations in the real world. The sub-notebook PC transmits position data, object data and voice data, which have been supplied to it, to a server through a communication unit. The transmitted data is received by the server via a wireless LAN. The server stores the received voice data in a database in correspondence to the position data and the object data. | 02-28-2013 |
20130050262 | METHOD FOR ACCESSING INFORMATION ON CHARACTER BY USING AUGMENTED REALITY, SERVER, AND COMPUTER READABLE RECORDING MEDIUM - The present invention relates to a method for accessing information of a person by using augmented reality. The method includes the steps of: (a) receiving profile information from multiple users and information on a level of sharing the profile information; (b) checking locations of the multiple users; and (c) allowing a program code for (i) acquiring information on at least one user in close proximity, if it is sensed that a surrounding image is received in a preview state through a terminal of the first user, and displaying at least one icon corresponding to the user in close proximity through the terminal of the first user in the form of AR with the surrounding image and (ii) displaying the profile information of a specific user corresponding to a specific icon, if it is later selected from among the displayed icons, through the screen of the first user to be executed. | 02-28-2013 |
20130057581 | METHOD OF DISPLAYING VIRTUAL INFORMATION IN A VIEW OF A REAL ENVIRONMENT - A method of displaying virtual information in a view of a real environment comprising the following steps: providing a system for displaying of virtual information in a view of a real environment, determining a current pose of at least one part of the system relative to at least one part of the real environment and providing accuracy information of the current pose, providing multiple pieces of virtual information, and assigning a respective one of the pieces of virtual information to one of different parameters indicative of different pose accuracy information, and displaying at least one of the pieces of virtual information in the view of the real environment according to the accuracy information of the current pose in relation to the assigned parameter of the at least one of the pieces of virtual information. | 03-07-2013 |
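The accuracy gating this abstract describes can be sketched as a filter: each piece of virtual information carries the pose accuracy it requires, and only pieces whose requirement tolerates the current pose error are displayed. Units and field names are illustrative assumptions.

```python
def visible_pieces(pieces, pose_error_m):
    """Return the labels of virtual-information pieces whose accuracy
    requirement (in metres) is satisfied by the current pose error:
    coarse labels survive a sloppy pose, precise annotations do not."""
    return [p["label"] for p in pieces
            if pose_error_m <= p["max_error_m"]]

# Hypothetical pieces of virtual information with assigned accuracy needs.
pieces = [
    {"label": "city name",   "max_error_m": 100.0},
    {"label": "street name", "max_error_m": 10.0},
    {"label": "door number", "max_error_m": 0.5},
]
```

As tracking quality degrades, the displayed set shrinks gracefully instead of showing badly misregistered annotations.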
20130057582 | DISPLAY CONTROL APPARATUS, METHOD FOR CONTROLLING DISPLAY CONTROL APPARATUS, AND STORAGE MEDIUM - A display control apparatus capable of displaying information about an object detected from a captured image causes a display unit to display information corresponding to even an object that cannot be detected from the captured image, if the information is relevant to a query input by a user. | 03-07-2013 |
20130057583 | PROVIDING INFORMATION SERVICES RELATED TO MULTIMODAL INPUTS - A system and method provides information services related to multimodal inputs. Several different types of data used as multimodal inputs are described. Also described are various methods involving the generation of contexts using multimodal inputs, synthesizing context-information service mappings and identifying and providing information services. | 03-07-2013 |
20130057584 | INFORMATION PROCESSING APPARATUS AND METHOD, INFORMATION PROCESSING SYSTEM, AND PROVIDING MEDIUM - The invention enables users to virtually attach information to situations in the real world, and also enables users to quickly and easily find out desired information. An IR sensor receives an IR signal transmitted from an IR beacon, and supplies the received signal to a sub-notebook PC. A CCD video camera takes in a visual ID from an object, and supplies the inputted visual ID to the sub-notebook PC. A user inputs, through a microphone, a voice to be attached to situations in the real world. The sub-notebook PC transmits position data, object data and voice data, which have been supplied to it, to a server through a communication unit. The transmitted data is received by the server via a wireless LAN. The server stores the received voice data in a database in correspondence to the position data and the object data. | 03-07-2013 |
20130057585 | USER AUGMENTED REALITY FOR CAMERA-ENABLED MOBILE DEVICES - Apparatus and methods are described for providing a user augmented reality (UAR) service for a camera-enabled mobile device, so that a user of such mobile device can use the mobile device to obtain meta data regarding one or more images/video that are captured with such device. The meta data is interactive and allows the user to obtain additional information or specific types of information, such as information that will aid the user in making a decision regarding the identified objects or selectable action options that can be used to initiate actions with respect to the identified objects. | 03-07-2013 |
20130063486 | Optical Display System and Method with Virtual Image Contrast Control - A method includes generating a light pattern using a display panel and forming a virtual image from the light pattern utilizing one or more optical components. The virtual image is viewable from a viewing location. The method also includes receiving external light from a real-world environment incident on an optical sensor. The real-world environment is viewable from the viewing location. Further, the method includes obtaining an image of the real-world environment from the received external light, identifying a background feature in the image of the real-world environment over which the virtual image is overlaid, and extracting one or more visual characteristics of the background feature. Additionally, the method includes comparing the one or more visual characteristics to an upper threshold value and a lower threshold value and controlling the generation of the light pattern based on the comparison. | 03-14-2013 |
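The threshold comparison at the end of this abstract can be reduced to a small control rule: sample the luminance of the background feature behind the virtual image, compare it against upper and lower thresholds, and nudge the generated light pattern accordingly. All constants below are assumptions.

```python
def adjust_brightness(current, background_luma,
                      lower=0.3, upper=0.7, step=0.1):
    """Brighten the virtual image over a bright background, dim it over
    a dark one, and leave it unchanged when the background luminance
    (0.0-1.0) sits between the two thresholds. Output is clamped to
    the displayable range [0.0, 1.0]."""
    if background_luma > upper:
        return min(1.0, current + step)
    if background_luma < lower:
        return max(0.0, current - step)
    return current
```

Run per frame, this keeps the virtual image's contrast against the real-world background inside a band rather than at a fixed brightness.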
20130063487 | METHOD AND SYSTEM OF USING AUGMENTED REALITY FOR APPLICATIONS - A computerized method for superposing an image of an object onto an image of a scene, including obtaining a 2.5D representation of the object, obtaining the image of the scene, obtaining a location in the image of the scene for superposing the image of the object, producing the image of the object using the 2.5D representation of the object, superposing the image of the object onto the image of the scene, at the location. A method for online commerce via the Internet, including obtaining an image of an object for display, obtaining an image of a scene suitable for including the image of the object for display, and superposing the image of the object for display onto the image of the scene, wherein the image of the object for display is produced from a 2.5D representation of the object. Related apparatus and methods are also described. | 03-14-2013 |
20130069985 | Wearable Computer with Superimposed Controls and Instructions for External Device - A wearable computing device includes a head-mounted display (HMD) that provides a field of view in which at least a portion of the environment of the wearable computing device is viewable. The HMD is operable to display images superimposed over the field of view. When the wearable computing device determines that a target device is within its environment, the wearable computing device obtains target device information related to the target device. The target device information may include information that defines a virtual control interface for controlling the target device and an identification of a defined area of the target device on which the virtual control image is to be provided. The wearable computing device controls the HMD to display the virtual control image as an image superimposed over the defined area of the target device in the field of view. | 03-21-2013 |
20130069986 | METHODS AND ARRANGEMENTS FOR AUGMENTED REALITY - A mobile device, a method in a mobile device, a server and a method in a server for augmented reality. | 03-21-2013 |
20130076787 | DYNAMIC INFORMATION PRESENTATION ON FULL WINDSHIELD HEAD-UP DISPLAY - A method to dynamically register a graphic representing essential vehicle information onto a driving scene of a subject vehicle utilizing a substantially transparent windscreen head up display includes monitoring subject vehicle information and identifying the essential vehicle information based on the monitored subject vehicle information. The graphic representing the essential vehicle information is determined, and a preferred location for the graphic upon the substantially transparent windscreen head up display is dynamically registered in accordance with minimizing an operator's head movement and eye saccades for viewing the graphic. The graphic is displayed upon the substantially transparent windscreen head up display based upon the preferred location. | 03-28-2013 |
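One simple stand-in for "minimizing an operator's head movement and eye saccades" is to pick, from a set of permitted windscreen regions, the one nearest the driver's current gaze point. The function name, coordinate convention, and candidate-region representation below are assumptions for illustration, not the patent's method.

```python
import math

def preferred_graphic_location(gaze_xy, candidate_regions):
    """Return the candidate region center (x, y) closest to the driver's
    gaze point, in arbitrary windscreen coordinates. Minimizing this
    distance is a proxy for minimizing head and eye movement."""
    return min(candidate_regions, key=lambda c: math.dist(gaze_xy, c))
```

A real system would also weight candidates by occlusion of the driving scene, not distance alone.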
20130076788 | APPARATUS, METHOD AND SOFTWARE PRODUCTS FOR DYNAMIC CONTENT MANAGEMENT - The present invention provides systems and methods for dynamic content management, the method including generating content associated with an object, dynamically adjusting the content associated with the object according to a user profile to form a user-defined object-based content package, displaying at least one captured image of the identified object on the device, and uploading the user-defined object-based content package associated with the identified object to the device simultaneously with the displaying step to provide dynamic content to the user on the device. | 03-28-2013 |
20130076789 | AUGMENTED REALITY USING PROJECTOR-CAMERA ENABLED DEVICES - An augmented reality scene may be registered onto an arbitrary surface. A camera may capture an image of the arbitrary surface. The camera may analyze the surface geometry of the arbitrary surface. In some embodiments, a processing computing device may analyze data captured by the camera and an adjacent camera to reconstruct the surface geometry of the arbitrary surface. A scene may be registered to a three dimensional coordinate system corresponding to the arbitrary surface. A projector may project the scene onto the arbitrary surface according to the registration so that the scene may not display as being distorted. | 03-28-2013 |
20130076790 | METHOD FOR AUGMENTING A REAL SCENE - Methods, systems and devices for augmenting a real scene in a video stream are disclosed herein. | 03-28-2013 |
20130076791 | HEAD-UP DISPLAY SYSTEM - Independent optical unit for head-up display system for motor vehicle, intended for the display in the field of view of the driver of a virtual image obtained from an object image coming from a projector, including a first optical component reflecting the incident light rays emanating from the projector towards a second optical component placed in the field of view of the driver for the positioning of a final virtual image, means being provided for the adjustment of their relative position. | 03-28-2013 |
20130083061 | FRONT- AND REAR- SEAT AUGMENTED REALITY VEHICLE GAME SYSTEM TO ENTERTAIN & EDUCATE PASSENGERS - In accordance with an exemplary embodiment, an augmented virtual reality game is provided for a vehicle. A method comprises receiving a real-time video image during operation of a vehicle from a camera mounted on the vehicle and merging the real-time video image with one or more virtual images to provide an augmented reality image. The augmented reality image is then transmitted to a display of a gaming device during the operation of the vehicle. A system comprises a camera providing a real-time video image and a controller coupled to the camera. Additionally, a database provides the controller with one or more virtual images so that the controller may provide an augmented reality image by merging the real-time video image with the one or more virtual images. Finally, a transmitter is included for transmitting the augmented reality image to a display of a game device during the operation of the vehicle. | 04-04-2013 |
20130083062 | PERSONAL A/V SYSTEM WITH CONTEXT RELEVANT INFORMATION - A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. | 04-04-2013 |
20130083063 | Service Provision Using Personal Audio/Visual System - A collaborative on-demand system allows a user of a head-mounted display device (HMDD) to obtain assistance with an activity from a qualified service provider. In a session, the user and service provider exchange camera-captured images and augmented reality images. A gaze-detection capability of the HMDD allows the user to mark areas of interest in a scene. The service provider can similarly mark areas of the scene, as well as provide camera-captured images of the service provider's hand or arm pointing to or touching an object of the scene. The service provider can also select an animation or text to be displayed on the HMDD. A server can match user requests with qualified service providers which meet parameters regarding fee, location, rating and other preferences. Or, service providers can review open requests and self-select appropriate requests, initiating contact with a user. | 04-04-2013 |
20130083064 | PERSONAL AUDIO/VISUAL APPARATUS PROVIDING RESOURCE MANAGEMENT - Technology is described for resource management based on data including image data of a resource captured by at least one capture device of at least one personal audiovisual (A/V) apparatus including a near-eye, augmented reality (AR) display. A resource is automatically identified from image data captured by at least one capture device of at least one personal A/V apparatus and object reference data. A location in which the resource is situated and a 3D space position or volume of the resource in the location is tracked. A property of the resource is also determined from the image data and tracked. A function of a resource may also be stored for determining whether the resource is usable for a task. Responsive to notification criteria for the resource being satisfied, image data related to the resource is displayed on the near-eye AR display. | 04-04-2013 |
20130083065 | FIT PREDICTION ON THREE-DIMENSIONAL VIRTUAL MODEL - Systems and methods for conducting and providing a virtual shopping experience are disclosed. The disclosed systems and methods enable a customer to create a single instance of customer-specific dimension data and then securely utilize their dimension data to conduct virtual shopping sessions with a plurality of different and unrelated entities. | 04-04-2013 |
20130083066 | AUGMENTED REALITY FOR TABLE GAMES - A method includes acquiring media content of a wagering game table at a wagering game establishment with a camera of a mobile device. A location of the mobile device is determined when the media content is acquired. A direction that a lens of the camera is facing when the media content is acquired is determined. The wagering game table is identified based on the location and the direction. Overlay imagery derived from wagering game activity of the wagering game table is downloaded into the mobile device from a server. The overlay imagery is composited onto the media content to create a composited media content. The composited media content is displayed on a display of the mobile device. | 04-04-2013 |
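The compositing step of the preceding abstract, in which downloaded overlay imagery is blended onto the captured media content, can be sketched as a per-pixel alpha blend. The nested-list pixel representation and function name are illustrative assumptions; a production implementation would operate on image buffers.

```python
def composite(frame, overlay, alpha):
    """Alpha-blend overlay imagery onto a captured frame.
    frame/overlay: nested lists of (r, g, b) tuples; alpha: nested list
    of floats in [0, 1], where 1 means the overlay is fully opaque."""
    out = []
    for fr_row, ov_row, a_row in zip(frame, overlay, alpha):
        out_row = []
        for f, o, a in zip(fr_row, ov_row, a_row):
            # Weighted sum of overlay and frame, per channel.
            out_row.append(tuple(round(a * oc + (1 - a) * fc)
                                 for oc, fc in zip(o, f)))
        out.append(out_row)
    return out
```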
20130083067 | INFORMATION PROCESSING METHOD AND DEVICE FOR PRESENTING HAPTICS RECEIVED FROM A VIRTUAL OBJECT - When a haptic device is used to present to a user the haptics that a first virtual object superimposed on the haptic device receives from a second virtual object superimposed on a real object, the user is enabled to touch within the second virtual object, regardless of the real object. Accordingly, the haptics received from the second virtual object are obtained using a first haptic event model of the first virtual object and a second haptic event model of the second virtual object, and while the first haptic event model corresponds to computer graphics information of the first virtual object, the shape of the first virtual object differs from that of the haptic device, such that instructions can be made regarding the inside of the real object using the first virtual object. | 04-04-2013 |
20130088514 | MOBILE ELECTRONIC DEVICE, METHOD AND WEBPAGE FOR VISUALIZING LOCATION-BASED AUGMENTED REALITY CONTENT - A mobile electronic device comprises a display, a processor controlling the display, a memory for storing data and software code, and a data interface for establishing data connection to a server. It further comprises a camera configured to generate video data, a position sensor configured to generate position data, a digital compass configured to generate directional data, and an inclination sensor configured to generate inclination data. The process is configured to retrieve video data from the camera, to generate a video stream from the retrieved video data and to display the video stream in a window representation in the display. The processor is further configured to enhance the window representation with augmented reality objects derived from the position data, the directional data and the inclination data. | 04-11-2013 |
20130088515 | METHOD OF PROVIDING AUGMENTED CONTENTS AND APPARATUS FOR PERFORMING THE SAME, METHOD OF REGISTERING AUGMENTED CONTENTS AND APPARATUS FOR PERFORMING THE SAME, SYSTEM FOR PROVIDING TARGETING AUGMENTED CONTENTS - The system for providing targeting augmented contents includes: an augmented metadata generation apparatus that generates augmented metadata designating specific space and time of broadcast contents as an augmented area; a broadcast content providing apparatus that transmits the augmented metadata to a first broadcast terminal apparatus and transmits the augmented metadata and the broadcast contents to a second broadcast terminal apparatus; a first broadcast terminal apparatus that transmits augmented contents displayed in the augmented area in which the augmented metadata are designated to the augmented content providing apparatus; an augmented content providing apparatus that transmits the augmented contents to a second broadcast terminal apparatus; and a second broadcast terminal apparatus that receives the broadcast contents and the augmented metadata from the broadcast content providing apparatus and receives the augmented contents from the augmented content providing apparatus based on the augmented metadata. | 04-11-2013 |
20130088516 | OBJECT DISPLAYING APPARATUS, OBJECT DISPLAYING SYSTEM, AND OBJECT DISPLAYING METHOD - In an object displaying apparatus, when, in the case where a virtual object is displayed superimposed on an image of the real space according to information about the layout position of the virtual object, the virtual object is displayed overlapping a superimposition inhibit object, display control is performed to display the virtual object transparently so that the superimposition inhibit object is not hidden by the virtual object. It is thereby possible to preferentially display the superimposition inhibit object. | 04-11-2013 |
20130093788 | USER CONTROLLED REAL OBJECT DISAPPEARANCE IN A MIXED REALITY DISPLAY - The technology causes disappearance of a real object in a field of view of a see-through, mixed reality display device system based on user disappearance criteria. Image data is tracked to the real object in the field of view of the see-through display for implementing an alteration technique on the real object causing its disappearance from the display. A real object may satisfy user disappearance criteria by being associated with subject matter that the user does not wish to see or by not satisfying relevance criteria for a current subject matter of interest to the user. In some embodiments, based on a 3D model of a location of the display device system, an alteration technique may be selected for a real object based on a visibility level associated with the position within the location. Image data for alteration may be prefetched based on a location of the display device system. | 04-18-2013 |
20130093789 | TOTAL FIELD OF VIEW CLASSIFICATION FOR HEAD-MOUNTED DISPLAY - Virtual images are located for display in a head-mounted display (HMD) to provide an augmented reality view to an HMD wearer. Sensor data may be collected from on-board sensors provided on an HMD. Additionally, other data may be collected from external sources. Based on the collected sensor data and other data, the position and rotation of the HMD wearer's head relative to the HMD wearer's body and surrounding environment may be determined. After resolving the HMD wearer's head position, the HMD wearer's total field of view (TFOV) may be classified into regions. Virtual images may then be located in the classified TFOV regions to locate the virtual images relative to the HMD wearer's body and surrounding environment. | 04-18-2013 |
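A minimal sketch of classifying the total field of view into regions, here by head pitch relative to the body. The region names and boundary angles are assumptions for illustration; the patent does not specify them.

```python
def classify_tfov(pitch_deg):
    """Classify a direction in the wearer's TFOV by pitch (degrees)
    relative to the body; assumed boundaries at +/-20 degrees."""
    if pitch_deg > 20:
        return "above-horizon"   # e.g. ambient/status virtual images
    if pitch_deg < -20:
        return "body-locked"     # e.g. tool palettes near the torso
    return "primary"             # kept clear for the real-world view
```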
20130093790 | METHOD AND SYSTEM FOR IMPLEMENTING AUGMENTED REALITY APPLICATION - The present invention discloses a method and an apparatus for implementing an augmented reality application. The method includes: searching for AR applications related to set AR application parameter; selecting at least two AR applications from multiple AR applications found through searching and integrating the at least two AR applications into one new AR application; and providing the new AR application after integration for a user. | 04-18-2013 |
20130106910 | SYSTEM AND METHOD FOR VISUALIZATION OF ITEMS IN AN ENVIRONMENT USING AUGMENTED REALITY | 05-02-2013 |
20130113827 | HANDS-FREE AUGMENTED REALITY FOR WIRELESS COMMUNICATION DEVICES - This disclosure relates to techniques for providing hands-free augmented reality on a wireless communication device (WCD). According to the techniques, an application processor within the WCD executes an augmented reality (AR) application to receive a plurality of image frames and convert the plurality of image frames into a single picture comprising the plurality of image frames stitched together to represent a scene. The WCD executing the AR application then requests AR content for the scene represented in the single picture from an AR database server, receives AR content for the scene from the AR database server, and processes the AR content to overlay the single picture for display to a user on the WCD. In this way, the user may comfortably look at the single picture with the overlaid AR content on a display of the WCD to learn more about the scene represented in the single picture. | 05-09-2013 |
20130113828 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing apparatus including an image processing unit which combines a virtual object with a captured image. The image processing unit determines the virtual object based on a state or a type of an object shown in the captured image. | 05-09-2013 |
20130113829 | INFORMATION PROCESSING APPARATUS, DISPLAY CONTROL METHOD, AND PROGRAM - There is provided an information processing apparatus including an operation detecting unit detecting an operation of a subject that has been captured, and a display control unit controlling at least one of wearing or removal of at least one of virtual clothing or accessories to be displayed overlaid on the subject in accordance with the operation detected by the operation detecting unit. | 05-09-2013 |
20130120449 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD AND PROGRAM - A failure analysis apparatus obtains information associated with an operational status of a data center, determines information regarding fault repair work for the data center, based on the information associated with the operational status, and transmits the information regarding the fault repair work to an HMD. The HMD synthesizes and presents computer graphics image data for providing guidance for a method of the fault repair work, with an image of real space, based on the information regarding the fault repair work. | 05-16-2013 |
20130120450 | METHOD AND APPARATUS FOR PROVIDING AUGMENTED REALITY TOUR PLATFORM SERVICE INSIDE BUILDING BY USING WIRELESS COMMUNICATION DEVICE - A method of providing an augmented reality tour platform service for the inside of a building by using a wireless communication device. The method includes: acquiring an image of the building from the wireless communication device; collecting information associated with the acquired image; extracting a candidate building group from a previously established database on the basis of the acquired image and the collected information; specifying a building matching the acquired image from among the extracted candidate building group; and transmitting information regarding the inside of the specified building to the wireless communication device. | 05-16-2013 |
20130120451 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - When a virtual subject is combined with a background image, the hue may differ between the two, producing a feeling of incongruity. Moreover, rendering parameters conventionally must be adjusted manually based on the rendering result, which takes time and effort. An image processing device that combines a virtual subject with a background image to generate a combined image is characterized by including a correction coefficient deriving unit configured to derive a correction coefficient by performing rendering of a color object arranged in a position where the virtual subject is placed using an environment map indicating information of a light source around the virtual subject, a background image correcting unit configured to correct the background image based on the derived correction coefficient, and a combining unit configured to combine a corrected background image and the virtual subject using the environment map. | 05-16-2013 |
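The correction-coefficient idea above can be sketched by rendering a neutral-gray probe object under the environment map and scaling the background by the per-channel ratio. The function names, the gray reference value, and the 8-bit channel convention are assumptions for illustration.

```python
def derive_correction(rendered_gray, reference_gray=(128, 128, 128)):
    """Per-channel correction coefficients obtained by rendering a
    neutral-gray probe at the virtual subject's position under the
    environment map and comparing it to the nominal gray."""
    return tuple(r / ref for r, ref in zip(rendered_gray, reference_gray))

def correct_background(pixel, coeff):
    """Scale one background pixel toward the virtual subject's lighting,
    clamping each 8-bit channel."""
    return tuple(min(255, round(c * k)) for c, k in zip(pixel, coeff))
```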
20130120452 | WIRELESS AUGMENTED REALITY COMMUNICATION SYSTEM - The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own. | 05-16-2013 |
20130127906 | INFORMATION DISPLAY APPARATUS, METHOD THEREOF AND PROGRAM THEREOF - An information display apparatus creates a determination image indicating whether a reference object among a plurality of objects arranged in a row satisfies a predetermined rule, creates an image to be displayed by superimposing the determination image on the acquired image, and displays the image to be displayed. | 05-23-2013 |
20130127907 | APPARATUS AND METHOD FOR PROVIDING AUGMENTED REALITY SERVICE FOR MOBILE TERMINAL - An apparatus and method for providing an augmented reality service in a mobile terminal are provided. A method for providing an augmented reality service in a mobile terminal includes generating an augmented reality service scene using surrounding information, identifying at least one recognizable marker based on the augmented reality service scene, generating a guide scene considering the at least one recognizable marker, and adding the guide scene to a part of the augmented reality service scene and displaying the added scene. | 05-23-2013 |
20130135348 | COMMUNICATION DEVICE, COMMUNICATION SYSTEM, COMMUNICATION METHOD, AND COMMUNICATION PROGRAM - A communication device includes a content generation unit that generates annotation generation information for generating annotation corresponding to a physical object, a display requirement setting unit that generates display requirement information indicative of requirements for displaying the annotation, a display object setting unit that generates display object information for identifying the physical object to be displayed where the annotation is displayed, and a communication unit that transmits the annotation generation information, the display requirement information, and the display object information. | 05-30-2013 |
20130141460 | METHOD AND APPARATUS FOR VIRTUAL INCIDENT REPRESENTATION - A virtual incident representation capability is disclosed. The virtual incident representation capability is configured to represent a real world incident within a virtual world representation to provide thereby a virtual incident representation of the real world incident, which may be made available to people involved in the handling of the real world incident (e.g., operators at the safety answering point to which the real world incident is reported, responders in the field who have or will respond to the site of the real world incident, and the like). The virtual incident representation approximates the actual events of the real world incident in both space and time, and also may indicate the degree of certainty of at least a portion of the information included within the virtual incident representation. The virtual incident representation may be dynamic and interactive. | 06-06-2013 |
20130141461 | AUGMENTED REALITY CAMERA REGISTRATION - A system and method executable by a computing device of an augmented reality system for registering a camera in a physical space is provided. The method may include identifying an origin marker in a series of images of a physical space captured by a camera of an augmented reality system, and defining a marker graph having an origin marker node. The method may further include analyzing in real-time the series of images to identify a plurality of expansion markers with locations defined relative to previously imaged markers, and defining corresponding expansion marker nodes in the marker graph. The method may further include calculating a current position of the camera of the augmented reality system in the physical space based on a location of a node in the marker graph corresponding to a most recently imaged marker, relative to the origin marker and any intermediate markers. | 06-06-2013 |
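The marker-graph registration described above can be sketched by chaining stored offsets from the origin marker to the most recently imaged marker, then adding the camera's offset from that marker. For brevity this sketch uses 2D translations only, where a real system would compose full 6-DoF poses; all names are illustrative assumptions.

```python
from collections import deque

def camera_position(marker_graph, offsets, current_marker, cam_offset):
    """marker_graph: adjacency dict from marker name to neighbors;
    offsets[(a, b)]: (dx, dy) of marker b relative to marker a.
    Returns the camera position relative to the 'origin' marker."""
    # BFS from the origin, accumulating offsets to every reachable marker.
    pos = {"origin": (0.0, 0.0)}
    queue = deque(["origin"])
    while queue:
        node = queue.popleft()
        for nbr in marker_graph.get(node, []):
            if nbr not in pos:
                dx, dy = offsets[(node, nbr)]
                x, y = pos[node]
                pos[nbr] = (x + dx, y + dy)
                queue.append(nbr)
    mx, my = pos[current_marker]
    return (mx + cam_offset[0], my + cam_offset[1])
```

This mirrors the abstract's chain "relative to the origin marker and any intermediate markers": each expansion marker's location is only known relative to a previously imaged marker.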
20130147836 | MAKING STATIC PRINTED CONTENT DYNAMIC WITH VIRTUAL DATA - The technology provides embodiments for making static printed content being viewed through a see-through, mixed reality display device system more dynamic with display of virtual data. A printed content item, for example a book or magazine, is identified from image data captured by cameras on the display device, and user selection of a printed content selection within the printed content item is identified based on physical action user input, for example eye gaze or a gesture. A task in relation to the printed content selection can also be determined based on physical action user input. Virtual data for the printed content selection is displayed in accordance with the task. Additionally, virtual data can be linked to a work embodied in a printed content item. Furthermore, a virtual version of the printed material may be displayed at a more comfortable reading position and with improved visibility of the content. | 06-13-2013 |
20130147837 | AUGMENTED REALITY PERSONALIZATION - A method is provided, such as for mobile augmented reality personalization. A front-facing camera of the mobile device acquires a first view of a user of the mobile device. A personal characteristic of the user of the mobile device is identified from the first view. A location of the mobile device may be determined. A back-facing camera of the mobile device may acquire a second view of a region at the location. Augmented reality information is selected as a function of the personal characteristic. The second view is displayed with the augmented reality information. | 06-13-2013 |
20130147838 | UPDATING PRINTED CONTENT WITH PERSONALIZED VIRTUAL DATA - The technology provides for updating printed content with personalized virtual data using a see-through, near-eye, mixed reality display device system. A printed content item, for example a book or magazine, is identified from image data captured by cameras on the display device, and user selection of a printed content selection within the printed content item is identified based on physical action user input, for example eye gaze or a gesture. Virtual data is selected from available virtual data for the printed content selection based on user profile data, and the display device system displays the selected virtual data in a position registered to the position of the printed content selection. In some examples, a task related to the printed content item is determined based on physical action user input, and personalized virtual data is displayed registered to the printed content item in accordance with the task. | 06-13-2013 |
20130147839 | AUGMENTED REALITY PROVIDING SYSTEM, INFORMATION PROCESSING TERMINAL, INFORMATION PROCESSING APPARATUS, AUGMENTED REALITY PROVIDING METHOD, INFORMATION PROCESSING METHOD, AND PROGRAM - An Augmented Reality (AR) providing apparatus sends to a server apparatus a request, including image information from an imaging device, for obtaining product information indicating a product that can be displayed on a shelf, and the AR apparatus displays product information included in a reply from the server apparatus in response to the request in an overlaid manner. The server apparatus determines a shelf from the image information included in the request, determines a size of an empty shelf space, and selects product information of products smaller than the determined size of the empty shelf space. The product information is selected from a storage device storing multiple sets of product information indicating a product and its associated size information. The server apparatus sends a reply including the selected product information to the AR providing apparatus. | 06-13-2013 |
20130147840 | PROJECTED REAR PASSENGER ENTERTAINMENT SYSTEM - A method for augmenting a graphic displayed on a surface inside of a vehicle using a rear seat entertainment projection (RSEP) system includes generating the graphic for display on the surface inside the vehicle. When the graphic is displayed on the surface, an input that causes a reaction to the graphic displayed upon the surface is obtained, and the graphic displayed on the surface is augmented based on the reaction to the graphic. | 06-13-2013 |
20130155105 | METHOD AND APPARATUS FOR PROVIDING SEAMLESS INTERACTION IN MIXED REALITY - An approach is provided for providing seamless interaction in mixed reality. A mixed reality platform processes and/or facilitates a processing of media information associated with at least one augmented reality application to determine one or more digital objects, wherein the one or more digital objects aggregate, at least in part, data for defining the one or more digital objects, one or more computation closures acting on the data, one or more results of the one or more computation closures, or a combination thereof. The mixed reality platform also causes, at least in part, a composition, a decomposition, or a combination thereof of the one or more digital objects to cause, at least in part, an enhancement, a modification, an initiation, or a combination thereof of one or more functions associated with the at least one augmented reality application. | 06-20-2013 |
20130155106 | METHOD AND SYSTEM FOR COORDINATING COLLISIONS BETWEEN AUGMENTED REALITY AND REAL REALITY - A method and system for coordinating placement of an augmented reality/virtual world object(s) into a scene relative to position and orientation. The object(s) can be connected to an anchor point having an absolute location relative to the marker via a connector (e.g., spring-like connector) in such a way that the behavior of the object responds to a physical force and a collision which exists in the augmented reality scene. The connection between the virtual object and location of the marker permits the object to exactly track the marker when there are no real world collisions between the markers. The virtual objects can be displaced so the objects do not pass through one another when the real world markers come into a close spatial proximity and the corresponding virtual objects begin to collide. | 06-20-2013 |
20130155107 | Systems and Methods for Providing an Augmented Reality Experience - Methods and systems, including computer program products, are described for providing an augmented reality experience. A Near Field Communication (NFC) enabled mobile device reads data from a data-encoded tag associated with a physical object. The mobile device captures an image. The mobile device transmits the read data to a server computing device. The mobile device retrieves, from the server computing device, data elements associated with the physical object. The mobile device generates an augmented image based on the captured image and the data elements associated with the physical object. | 06-20-2013 |
20130155108 | Augmented Reality User Interaction Methods, Computing Devices, And Articles Of Manufacture - Augmented reality user interaction methods, computing devices, and articles of manufacture are disclosed according to some aspects of the description. In one aspect, an augmented reality user interaction method includes executing an augmented reality browser application, displaying a camera view of a computing device wherein images generated by a camera are displayed using a touch sensitive display, during the displaying the camera view, displaying an icon interface comprising a pathway and a plurality of icons with respect to the pathway using the touch sensitive display, first detecting a user input moving in a direction of the pathway, moving the icons along the pathway in the direction of the user input as a result of the first detecting, second detecting a user input selecting one of the icons, and depicting augmented reality content with respect to at least one of the images as a result of the second detecting. | 06-20-2013 |
20130162673 | PIXEL OPACITY FOR AUGMENTED REALITY - In embodiments of pixel opacity for augmented reality, a display lens system includes a first display panel that displays a virtual image generated to appear as part of an environment viewed through optical lenses. A second display panel displays an environment image of the environment as viewed through the optical lenses, and the environment image includes opaque pixels that form a black silhouette of the virtual image. The display lens system also includes a beam-splitter panel to transmit light of the environment image and reflect light of the virtual image to form a composite image that appears as the virtual image displayed over the opaque pixels of the environment image. | 06-27-2013 |
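The opaque-pixel compositing described above can be sketched per pixel (a hypothetical simplification using scalar intensities): wherever the virtual image has content, the environment panel shows a black (opaque) pixel, so the beam-splitter's composite shows the virtual light undiluted by transmitted background light.

```python
def composite(env, virt):
    """Composite two equal-size grayscale images (nested lists).
    Where the virtual image is non-zero, the environment pixel is
    blacked out (the silhouette) and the virtual pixel shows instead."""
    return [[v if v > 0 else e for e, v in zip(e_row, v_row)]
            for e_row, v_row in zip(env, virt)]
```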
20130162674 | INFORMATION PROCESSING TERMINAL, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing terminal includes a recognition unit that recognizes an identifier projected over an image, an acquisition unit that acquires data of an object corresponding to the identifier, a processing unit that changes the orientation of the object according to the positional relationship between the information processing terminal itself and the identifier specified based on the image and, when it is no longer able to recognize the identifier, changes the orientation of the object according to the positional relationship between the information processing terminal itself and the identifier specified based on sensor data, and a display control unit that causes the object of which the orientation is changed according to the positional relationship between the information processing terminal itself and the identifier to be displayed over the image in a superimposed manner. | 06-27-2013 |
20130162675 | INFORMATION PROCESSING APPARATUS - An information processing apparatus comprising first and second display units for respectively displaying first and second composite images for the two eyes of a user, comprising: a moving unit configured to move positions of the first and second display units; a detecting unit configured to detect moving amounts of the first and second display units; first and second image capturing units configured to respectively obtain first and second captured images; an extracting unit configured to generate first and second extracted images by respectively extracting portions of the first and second captured images in extraction ranges associated with the moving amounts; and a composite image generating unit configured to generate the first and second composite images by respectively compositing first and second CG images with the first and second extracted images. | 06-27-2013 |
20130162676 | CONTENT IDENTIFICATION AND DISTRIBUTION - The invention provides an identifier system for computing identity information from image data. At least part of the image data is representative of an identifier. The identifier comprises a location element and encoded data associated with the location element. The identifier system comprises computer interpretable reference data corresponding to the identifier. The reference data is suitable for use in feature matching to determine a location and an orientation of the location element in the image data, thereby to locate the encoded data in the image data for subsequent decoding into the identity information. The invention also provides a computer implemented method of presenting an augmented reality view of a physical article using the identifier system. | 06-27-2013 |
20130162677 | METHOD FOR DISPLAYING A VIRTUAL WORLD IN WHICH THE AVATAR OF A USER OF A VIRTUAL-REALITY SERVICE EVOLVES - The invention pertains to a method for displaying a virtual world in which the avatar of a user of a virtual reality service evolves, said method being operative to use a standard mode for displaying said virtual world, to identify objects visible to the avatar within the displayed virtual world, and, for at least one of said identified objects, to determine whether a relationship exists within the virtual reality service's social network between said object and the user, and if so, to determine a display mode to apply to said object depending on said relationship, the display of said object being altered by applying said determined mode. | 06-27-2013 |
20130169679 | VEHICLE IMAGE DISPLAY SYSTEM AND CORRECTION METHOD THEREOF - A vehicle image display system and correction method thereof, applicable to a vehicle to perform vehicle information display, comprising the following steps: fetch a front road image; obtain positions of a lane marking and an obstacle in front based on said front road image; after correction, calculate display information of said positions of said lane markings and said obstacles in front, to be overlapped with images of an actual traffic lane; utilize a motor control unit to adjust focal length or inclination angle of a projector unit, or inclination angle of a viewable panel, to produce overlap error correction values to correct said display information. Said projector unit projects large area display information overlapped entirely with images of said actual traffic lane onto said viewable panel or windshield, so that a driver can obtain vehicle driving information, to raise vehicle driving safety. | 07-04-2013 |
20130169680 | SOCIAL SYSTEM AND METHOD USED FOR BRINGING VIRTUAL SOCIAL NETWORK INTO REAL LIFE - The present invention is related to a social system and process for bringing a virtual social network into real life, which gathers and analyzes a social message of at least one interlocutor from the virtual social network so as to generate at least one recommended topic, allows a user to talk with the interlocutor using the recommended topic, and then captures and analyzes at least one speech and behavior and/or physiological response of the interlocutor during the conversation so as to generate an emotion state of the interlocutor. The user can determine whether the interlocutor is interested in the recommended topic through the emotion state of the interlocutor. Thus, it is possible to bring the social message on the virtual network into real life, so as to increase communication topics between persons in real life. | 07-04-2013 |
20130169681 | SYSTEMS AND METHODS FOR PRESENTING BUILDING INFORMATION - Described herein are systems and methods for presenting building information. In overview, the technologies described herein provide relationships between Building Information Modeling (BIM) data (which includes building schematics defined in terms of standardized three dimensional models) and Building Management System (BMS) data (which includes data indicative of the operation of building components such as HVAC components and the like). Some embodiments use relationships between these forms of data thereby to assist technicians in identifying the physical location of particular pieces of equipment, for example in the context of performing inspections and/or maintenance. In some cases this includes the provision of 2D and/or 3D maps to portable devices, these maps including the location of equipment defined both in BIM and BMS data. In some cases, augmented reality technology is applied thereby to provide richer access to positional information. | 07-04-2013 |
20130169682 | TOUCH AND SOCIAL CUES AS INPUTS INTO A COMPUTER - A system for automatically displaying virtual objects within a mixed reality environment is described. In some embodiments, a see-through head-mounted display device (HMD) identifies a real object (e.g., a person or book) within a field of view of the HMD, detects one or more interactions associated with the real object, and automatically displays virtual objects associated with the real object if the one or more interactions involve touching or satisfy one or more social rules stored in a social rules database. The one or more social rules may be used to infer a particular social relationship by considering the distance to another person, the type of environment (e.g., at home or work), and particular physical interactions (e.g., handshakes or hugs). The virtual objects displayed on the HMD may depend on the particular social relationship inferred (e.g., a friend or acquaintance). | 07-04-2013 |
20130169683 | HEAD MOUNTED DISPLAY WITH IRIS SCAN PROFILING - A see-through head-mounted display and method for operating the display to optimize its performance by referencing a user profile automatically. The identity of the user is determined by performing an iris scan and recognition of the user, enabling user profile information to be retrieved and used to enhance the user's experience with the see-through head-mounted display. The user profile may contain user preferences regarding services providing augmented reality images to the see-through head-mounted display, as well as display adjustment information optimizing the position of display elements in the see-through head-mounted display. | 07-04-2013 |
20130169684 | POSITIONAL CONTEXT DETERMINATION WITH MULTI MARKER CONFIDENCE RANKING - A computer implemented method for augmenting a display image includes receiving image data, the image data including data representing one or more objects, and at least a first marker and a second marker. The method includes receiving a first confidence level for the first marker and a second confidence level for the second marker. The method includes determining a selected marker from the first marker and the second marker. The selected marker is determined according to a highest confidence level of the first confidence level and the second confidence level. The method includes determining a transformation and a positional offset for the selected marker. The method includes generating overlaid display data for the one or more objects in the image data, the one or more objects determined in accordance with the transformation and the positional offset. | 07-04-2013 |
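The confidence-ranked marker selection above can be sketched as follows (field names are illustrative, not from the filing): among all detected markers, the one with the highest confidence level supplies the transformation and positional offset used to place the overlay.

```python
def select_marker(markers):
    """Pick the detected marker with the highest confidence level and
    return its transformation and positional offset for overlay placement.

    Each marker is a dict: {"confidence", "transform", "offset"}."""
    best = max(markers, key=lambda m: m["confidence"])
    return best["transform"], best["offset"]
```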
20130176334 | METHOD AND APPARATUS FOR ANALYZING CLUSTERING OF MIXED REALITY CONTENT AND COMPUTATIONS - An approach is provided for analyzing clustering of mixed reality content and computations. A mixed reality platform determines one or more clusters of one or more mixed reality digital objects, one or more computations associated with the one or more mixed reality digital objects, or a combination thereof based, at least in part, on one or more densities of one or more requests for the one or more mixed reality digital objects. The mixed reality platform also processes and/or facilitates a processing of the one or more requests, the one or more densities, or a combination thereof to determine one or more gradients with respect to one or more locations associated with the mixed reality digital objects. The one or more gradients represent inflow/outflow information associated with the one or more locations. | 07-11-2013 |
20130176335 | VEHICULAR DISPLAY DEVICE AND VEHICULAR DISPLAY SYSTEM - A vehicular display device includes a display unit that displays visible information and a light projection unit that guides light including the visible information displayed on the display unit to a predetermined projection surface, and displaying the visible information as a virtual image. The vehicular display device includes a guide display unit and a guide display control unit. The guide display unit indicates a relationship between at least positions of a first display region in which the virtual image is displayed by projection of the light projection unit and a second display region in which detailed information is displayed. The detailed information has an association with a content of particular information that is displayed on the display unit under a predetermined condition. The guide display control unit controls the guide display unit into a display state when the particular information is displayed on the display unit. | 07-11-2013 |
20130176336 | METHOD OF AND SYSTEM FOR OVERLAYING NBS FUNCTIONAL DATA ON A LIVE IMAGE OF A BRAIN - The present invention discloses a method of overlaying Navigated Brain Stimulation (NBS) functional data on a live image of a brain. The method comprises the steps of obtaining a live image of a brain, obtaining a functional map of the brain comprising an anatomical model of the brain and NBS functional data associated with the brain, identifying at least one anatomical landmark of the brain from the live image of the brain, identifying at least one of said identified anatomical landmarks on the anatomical model of the brain, modifying the functional map so that the identified at least one anatomical landmark of the model corresponds in size and orientation to the corresponding at least one anatomical landmark in the live image of the brain, and digitally overlaying at least said NBS functional data on said live image of the brain according to the corresponding, aligned anatomical landmarks. | 07-11-2013 |
20130176337 | Device and Method For Information Processing - A device and method for information processing are described. The device includes a display unit having a preset transmittance; an object determination unit configured to determine at least one object at the information processing device side; an additional information acquisition unit configured to acquire additional information corresponding to the at least one object; an additional information position determination unit configured to determine the display position of the additional information on the display unit; and a display processing unit configured to display the additional information on the display unit based on the display position. | 07-11-2013 |
20130182010 | DEVICE FOR CAPTURING AND DISPLAYING IMAGES OF OBJECTS, IN PARTICULAR DIGITAL BINOCULARS, DIGITAL CAMERA OR DIGITAL VIDEO CAMERA - The invention relates to a device for capturing and displaying images of objects, in particular digital binoculars ( | 07-18-2013 |
20130182011 | Method for Providing Information on Object Which Is Not Included in Visual Field of Terminal Device, Terminal Device and Computer Readable Recording Medium - The present invention relates to a method for providing information on an object excluded from a visual field of a terminal in a form of augmented reality (AR) by using an image inputted to the terminal and information related thereto. The method includes the steps of: (a) specifying the visual field of the terminal corresponding to the inputted image by referring to at least one piece of information on a location, a displacement and a viewing angle of the terminal; (b) searching for an object(s) excluded from the visual field of the terminal; and (c) displaying guiding information on the searched object(s) with the inputted image in a form of the augmented reality; wherein the visual field is specified by a viewing frustum whose vertex corresponds to the terminal. | 07-18-2013 |
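The frustum test above reduces, in two dimensions, to checking whether an object's bearing from the terminal falls within the viewing wedge whose vertex is the terminal (a hypothetical simplification; the filing describes a full 3-D viewing frustum):

```python
import math

def in_view(terminal, heading_deg, fov_deg, obj):
    """Return True if obj lies inside the terminal's 2-D viewing wedge.

    terminal, obj: (x, y) positions; heading_deg: direction the terminal
    faces; fov_deg: full viewing angle. Objects failing this test are the
    ones for which guiding information would be displayed."""
    bearing = math.degrees(math.atan2(obj[1] - terminal[1],
                                      obj[0] - terminal[0]))
    # signed angular difference, normalized to (-180, 180]
    diff = (bearing - heading_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2
```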
20130182012 | METHOD OF PROVIDING AUGMENTED REALITY AND TERMINAL SUPPORTING THE SAME - A method of providing augmented reality and a terminal supporting the same are provided. The terminal for supporting augmented reality includes: a display unit displaying a specific image during a preview image mode; and a controller recognizing at least one surface from the specific image according to predetermined criteria, combining an image of a virtual object with the specific image so that the image of the virtual object is displayed on the recognized at least one surface, and controlling the display unit to output the combined image. | 07-18-2013 |
20130187950 | TRANSPARENT DISPLAY FOR MOBILE DEVICE - A projection-type display device is connectively coupled to a mobile device (such as a smartphone) where the light generated by a small projection device is directed at a relatively transparent holographic optical element (HOE) to provide a display to an operator of the mobile device or a viewer. The projector and HOE may be configured to produce and magnify a virtual image that is perceived as being displayed at a large distance from the operator who views the image through the HOE. The HOE may comprise a volume grating effective at only the narrow wavelengths of the projection device to maximize transparency while also maximizing the light reflected from the display projector to the eyes of the operator. | 07-25-2013 |
20130187951 | AUGMENTED REALITY APPARATUS AND METHOD - According to one embodiment, an augmented reality apparatus includes an estimation unit, a search unit, a first generation unit, a second generation unit, and a selection unit. The estimation unit estimates a main facility. The search unit searches for facilities to obtain target facilities. The first generation unit generates a first feature value according to each item of interest of the user. The second generation unit generates a second feature value for each target facility. The selection unit calculates a degree of association based on the first feature value and the second feature value to select data of a target facility having a degree of association not less than a first threshold as recommended facility data. | 07-25-2013 |
20130187952 | NETWORK-BASED REAL TIME REGISTERED AUGMENTED REALITY FOR MOBILE DEVICES - A method of operating a mobile device with a camera, a display and a position sensor to provide a display of supplementary information aligned with a view of a scene. One or more image obtained from the camera is uploaded to a remote server together with corresponding data from the position sensor. Image processing is then performed to track image motion between that image and subsequent images obtained from the camera, determining a mapping between the uploaded image and a current image. Data is then received via the network indicative of a pixel location for display of supplementary information within the reference image. The mapping is used to determine a corresponding pixel location for display of the supplementary information within the current image, and the supplementary information is displayed on the display correctly aligned with the view of the scene. | 07-25-2013 |
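The mapping step above, where a pixel location supplied by the server for the uploaded reference image is carried over into the current camera image, is commonly expressed as a 3x3 homography. A minimal sketch (the homography itself would come from the image-motion tracking; here it is just an input):

```python
def map_point(h, pt):
    """Apply a 3x3 homography (row-major nested lists) to map a pixel
    location in the reference image to the current image, including the
    projective divide by the homogeneous coordinate."""
    x, y = pt
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)
```

In practice the homography would be estimated from tracked feature correspondences (e.g. with a routine such as OpenCV's `findHomography`); the supplementary information is then drawn at the mapped pixel so it stays aligned with the scene.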
20130187953 | Image Matching Apparatus and Image Matching Method - An image matching apparatus is provided. The apparatus includes a storage unit, an obtaining unit, a specification unit, and an image matching unit. The storage unit is configured to store image data of one or more devices that are connected to a local network. The obtaining unit is configured to obtain image data of device image obtained by capturing a device. The specification unit is configured to specify one or more local networks to be used for image matching. The image matching unit is configured to perform image matching of the obtained image data against the stored image data of one or more devices that are connected to the specified local network. | 07-25-2013 |
20130194304 | COORDINATE-SYSTEM SHARING FOR AUGMENTED REALITY - A method for presenting real and virtual images correctly positioned with respect to each other. The method includes, in a first field of view, receiving a first real image of an object and displaying a first virtual image. The method also includes, in a second field of view oriented independently relative to the first field of view, receiving a second real image of the object and displaying a second virtual image, the first and second virtual images positioned coincidently within a coordinate system. | 08-01-2013 |
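Placing the two virtual images coincidently in a shared coordinate system amounts to transforming one world-space point into each viewer's independently oriented frame. A 2-D sketch (translation plus rotation; each display device holds its own pose, which is an assumption of this illustration):

```python
import math

def world_to_view(view_pose, world_pt):
    """Transform a shared-coordinate-system point into a viewer's frame.

    view_pose: (x, y, theta) -- viewer position and heading in radians.
    Both viewers apply this to the same world point, so their virtual
    images are positioned coincidently despite independent orientations."""
    tx, ty, theta = view_pose
    dx, dy = world_pt[0] - tx, world_pt[1] - ty
    c, s = math.cos(-theta), math.sin(-theta)
    return (c * dx - s * dy, s * dx + c * dy)
```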
20130194305 | MIXED REALITY DISPLAY SYSTEM, IMAGE PROVIDING SERVER, DISPLAY DEVICE AND DISPLAY PROGRAM - A Mixed Reality display system and the like are provided that enable each of a plurality of users experiencing a synthesized image to experience Mixed Reality by changing his or her own line of sight. The Mixed Reality display system is a system in which an image providing server and a plurality of client terminals are constructed to be capable of communicating with each other; the image providing server represents a virtual object, synthesizes the represented object and an omnidirectional image taken by an omnidirectional image obtaining camera, and then delivers the synthesized image information to the plurality of client terminals. The client terminal extracts a partial area image from the synthesized image indicated by the synthesized image information received from the image providing server, based on the position/pose information defining the line of sight of a user observing the client terminal, and then displays the extracted image. | 08-01-2013 |
20130194306 | SYSTEM FOR PROVIDING TRAFFIC INFORMATION USING AUGMENTED REALITY - The present invention relates to a system for providing traffic information using augmented reality, and comprises: an AR server for providing a virtual image which is obtained by processing transfer information on transfer locations at stops for transportation means and information according to the transportation means, using characters and graphics; and a personal portable communication device which displays the virtual image received from the AR server while overlapping the virtual image on a real-captured image of a transfer location obtained through a camera. According to the present invention, a virtual image which displays transfer information on transfer locations such as bus platforms, subway stations, airports and the like and information according to transportation means and the like is displayed while being overlapped on an actual real-captured image, thereby making it possible for a user to easily and simply obtain information on the detailed traffic flow at transfer locations or the surroundings thereof. | 08-01-2013 |
20130201214 | Methods, Apparatuses, and Computer-Readable Storage Media for Providing Interactive Navigational Assistance Using Movable Guidance Markers - Methods, apparatuses and computer-readable storage media for displaying at a first user equipment a first marker at a location wherein the location is defined remotely at a second user equipment; displaying at the second user equipment a current geographic location and a current vantage point for the first user equipment; displaying at the second user equipment a second marker; accepting at the second user equipment an input for adjusting the second marker from a first position to a second position, wherein the second position is indicative of a target geographic location in a first virtual view of the current geographic location of the first user equipment as displayed on the second user equipment; and in response to the adjusting, displaying at the first user equipment the first marker at the target geographic location in a second virtual view of the current geographic location. | 08-08-2013 |
20130201215 | ACCESSING APPLICATIONS IN A MOBILE AUGMENTED REALITY ENVIRONMENT - An augmented reality system and method that allows a user to access, and more particularly, install and subsequently have access to an application on an augmented reality mobile device. The system and method enhances the augmented reality experience by minimizing or eliminating user interaction in the process of initiating the installation of the application. This is achieved, at least in part, through the use of a passively activated application program. It is passively activated in that it effects the application installation based on signals received and processed by the augmented reality mobile device, where the signals reflect the surrounding environment in which the augmented reality mobile device is operating. No direct interaction by the user of the augmented reality mobile device is required to initiate the installation of the application. | 08-08-2013 |
20130201216 | SERVER, CLIENT TERMINAL, SYSTEM, AND PROGRAM - There is provided a server including a reception unit configured to receive, from a client terminal, position information indicating a position of the client terminal, and direction information indicating a direction in which the client terminal is directed, and a search unit configured to search for image data provided with position information indicating an opposite position across a target object present in the direction indicated by the direction information with respect to the position of the client terminal based on the position information. | 08-08-2013 |
20130201217 | OBJECT DISPLAY DEVICE AND OBJECT DISPLAY METHOD - In an object display device, in the case that a marker is not detected at present, a display complementing unit acquires a change in an image in real space displayed on a display unit between the past when the marker was detected and the present. Since a virtual object is displayed based on the position and shape of the marker in the image in real space, the display position and display manner of the virtual object are also to be changed in accordance with a change in the image in real space. A display decision unit can therefore decide the display position and display manner of the virtual object at present from the display position and display manner of the virtual object in the past, based on the change in the image in real space between the past and the present. | 08-08-2013 |
20130208003 | IMAGING STRUCTURE EMITTER CONFIGURATIONS - In embodiments of imaging structure emitter configurations, an imaging structure includes a silicon backplane with a driver pad array. The embedded light sources are formed on the driver pad array in an emitter material layer, and the embedded light sources can be individually controlled at the driver pad array to generate and emit light. The embedded light sources are configured in multiple rows for scanning by an imaging unit to generate a scanned image for display. | 08-15-2013 |
20130208004 | DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND PROGRAM - There is provided a display control device including an action information acquisition unit that acquires, at an action position of one actor, action information regarding a past action of another actor, an object generation unit that generates a virtual object for virtually indicating a position of the other actor during an action of the one actor based on the acquired action information, and a display control unit that causes a display unit displaying a surrounding scene to superimpose and display the generated virtual object during the action of the one actor. | 08-15-2013 |
20130208005 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing device including a data acquisition unit configured to acquire a recommended angle-of-view parameter that represents a recommended angle of view for a subject in an environment that appears in an image, and a display control unit configured to overlay on the image a virtual object that guides a user so that an angle of view for capturing an image of the subject becomes closer to the recommended angle of view, using the recommended angle-of-view parameter. The recommended angle-of-view parameter is a parameter that represents a three-dimensional position and attitude of a device that captures an image of the subject at the recommended angle of view. | 08-15-2013 |
20130208006 | SYSTEM AND METHOD OF IMAGE AUGMENTATION - A book for use in an augmented reality system includes first and second pages. The first page is on a first leaf of the book and includes a first fiduciary marker for indicating the orientation of the book to a recognition system. The second page is on a second leaf of the book and includes a second fiduciary marker for indicating the orientation of the book to the recognition system and also a page marker for indicating the page number of the second page to the recognition system. The page marker orientation is ambiguous without reference to a fiduciary marker. The page marker is positioned on the second page closer to an edge of the page than the second fiduciary marker, to become visible to the recognition system before the second fiduciary marker as the book is turned to the second page. | 08-15-2013 |
20130208007 | POSITION-RELATED INFORMATION REGISTRATION APPARATUS, POSITION-RELATED INFORMATION REGISTRATION SYSTEM, POSITION-RELATED INFORMATION REGISTRATION AND DISPLAY SYSTEM, AND RECORDING MEDIUM - A position-related information registration and display system can register position-related information in a position-related information management server using a first terminal, and display an air tag based on the position-related information on a second data terminal including an imaging unit. The first data terminal extracts a search keyword from content displayed on a first display unit, accesses a keyword information storage unit, acquires a corresponding position information piece that is a position information piece corresponding to the search keyword extracted from the content, and transmits a registration request in addition to the content and the corresponding position information piece to the position-related information management server. The second data terminal acquires a position-related information piece corresponding to a location where the imaging unit performs imaging, from the position-related information management server, and displays an air tag based on the acquired position-related information piece in a superimposed manner in a captured image. | 08-15-2013 |
20130215147 | Apparatus and Method for Enhancing Human Visual Performance in a Head Worn Video System - Visual impairment, or vision impairment, refers to the vision loss of an individual to such a degree as to require additional support for one or more aspects of their life. Such a significant limitation of visual capability may result from disease, trauma, congenital, and/or degenerative conditions that cannot be corrected by conventional means such as refractive correction (eyeglasses or contact lenses), medication, or surgery. According to embodiments of the invention a method of augmenting a user's sight is provided comprising obtaining an image of a scene using a camera carried by the individual, transmitting the obtained image to a processor, selecting an algorithm of a plurality of spectral, spatial, and temporal image modification algorithms to be applied to the image by the processor, modifying the image using the algorithm substantially in real time, and displaying the modified image on a display device worn by the individual. | 08-22-2013 |
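The algorithm-selection step above can be sketched as a simple dispatcher over per-pixel modifications (the specific algorithms here are illustrative placeholders, not the ones claimed in the filing):

```python
def enhance(image, mode):
    """Apply one of several illustrative image-modification algorithms
    to a grayscale image (nested lists of 0-255 intensities).

    "invert" stands in for a spectral modification (contrast reversal,
    a common low-vision aid); "brighten" for a simple spatial gain."""
    algorithms = {
        "invert": lambda px: 255 - px,
        "brighten": lambda px: min(255, px + 40),
        "identity": lambda px: px,
    }
    f = algorithms[mode]
    return [[f(px) for px in row] for row in image]
```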
20130215148 | INTERACTIVE INPUT SYSTEM HAVING A 3D INPUT SPACE - An interactive input system comprises computing structure; and an input device detecting at least one physical object carrying a recognizable pattern within a three-dimensional (3D) input space and providing output to the computing structure, wherein the computing structure processes the output of the input device to: recognize the pattern carried by the at least one physical object in the 3D input space; and modify an image presented on a display surface by applying a transition to digital content associated with the at least one physical object based on a detected state of the at least one physical object. | 08-22-2013 |
20130215149 | INFORMATION PRESENTATION DEVICE, DIGITAL CAMERA, HEAD MOUNT DISPLAY, PROJECTOR, INFORMATION PRESENTATION METHOD AND NON-TRANSITORY COMPUTER READABLE MEDIUM - A digital camera functioning as an information presentation device is provided with a CG superimposition unit | 08-22-2013 |
20130222426 | METHOD AND DEVICE FOR PROVIDING AUGMENTED REALITY OUTPUT - Methods and devices for generating an augmented reality output are described. In one aspect, the method includes: obtaining camera data from a camera associated with an electronic device, the camera data defining an image representing a card having a graphic disposed thereon; obtaining sensor data from a sensor associated with the electronic device; and generating an augmented reality output on an output interface based on the sensor data and the graphic. | 08-29-2013 |
20130222427 | SYSTEM AND METHOD FOR IMPLEMENTING INTERACTIVE AUGMENTED REALITY - An augmented reality implementing system is disclosed. The augmented reality implementing system includes an image outputting device and an augmented reality implementing device. The augmented reality implementing device derives an object from a captured image of a specific space and extracts a predetermined virtual object corresponding to the derived object; when an image of a user tool for interaction with the virtual object is included in the captured image, reflects a motion command corresponding to a motion pattern of the user tool on the virtual object; and generates a new image by reflecting the virtual object on the captured image, and outputs the new image to the image outputting device. | 08-29-2013 |
20130222428 | METHOD AND APPARATUS FOR AN AUGMENTED REALITY USER INTERFACE - An approach is provided for an augmented reality user interface. An image representing a physical environment is received. Data relating to a horizon within the physical environment is retrieved. A section of the image in which to overlay location information is determined based on the horizon data. Presentation of the location information within the determined section to a user equipment is initiated. | 08-29-2013 |
20130229433 | COHERENT PRESENTATION OF MULTIPLE REALITY AND INTERACTION MODELS - A method for navigating concurrently and from point-to-point through multiple reality models is described. The method includes: generating, at a processor, a first navigatable virtual view of a first location of interest, wherein the first location of interest is one of a first virtual location and a first non-virtual location; and concurrently with the generating the first navigatable virtual view of the first location of interest, generating, at the processor, a second navigatable virtual view corresponding to a current physical position of an object, such that real-time sight at the current physical position is enabled within the second navigatable virtual view. | 09-05-2013 |
20130235078 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND COMPUTER-READABLE MEDIUM - An object of the present invention to enable easy display of a virtual object toward a direction desired by a user, when performing a superimposed display of the virtual object in a captured image containing an augmented reality marker. In the present invention, a CPU identifies the direction of an AR marker detected from within an actual image captured in an image sensor as an imaging direction, rotates the virtual object so that the front side of the virtual object is directed toward the imaging direction in a state where the front side of the virtual object is directed toward a reference direction (for example, front side direction) of the AR marker, and performs a superimposed display of the virtual object on the area of the AR marker. | 09-12-2013 |
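The marker-facing behavior this entry describes — rotating a virtual object so its front faces the camera instead of the marker's fixed reference direction — reduces, in the ground plane, to computing a yaw offset between two headings. A minimal generic sketch follows; the angle convention and function name are illustrative assumptions, not the patent's implementation:

```python
def facing_rotation_deg(marker_yaw_deg: float, camera_yaw_deg: float) -> float:
    """Yaw to apply to a virtual object whose front initially points along
    the marker's reference direction, so that its front faces the camera.
    Angles are headings in degrees measured in the marker's ground plane."""
    # Angle from the marker's reference direction to the direction of the
    # camera, normalized into (-180, 180] so the object takes the short way.
    delta = (camera_yaw_deg - marker_yaw_deg) % 360.0
    if delta > 180.0:
        delta -= 360.0
    return delta
```

For example, a camera viewing the marker from 90 degrees off its reference direction yields a 90-degree object rotation, so the superimposed object turns to face the user rather than the marker's front.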
20130235079 | COHERENT PRESENTATION OF MULTIPLE REALITY AND INTERACTION MODELS - A method for navigating concurrently and from point-to-point through multiple reality models is described. The method includes: generating, at a processor, a first navigatable virtual view of a first location of interest, wherein the first location of interest is one of a first virtual location and a first non-virtual location; and concurrently with the generating the first navigatable virtual view of the first location of interest, generating, at the processor, a second navigatable virtual view corresponding to a current physical position of an object, such that real-time sight at the current physical position is enabled within the second navigatable virtual view. | 09-12-2013 |
20130241955 | AUGMENTED REALITY PROVIDING APPARATUS - Provided is an augmented reality providing apparatus capable of preventing image sickness experienced by a third party. When a position measurement reliability is less than a first threshold and a movement of HMD | 09-19-2013 |
20130249942 | SYSTEM AND APPARATUS FOR AUGMENTED REALITY DISPLAY AND CONTROLS - An augmented reality system includes a vision output device for displaying virtual images. A sensing device captures an image within a reference frame. The vision output device is captured within the image. A processing unit identifies the vision output device within the reference frame of the captured image and localizes the vision output device within the reference frame of the captured image for identifying an absolute position and orientation of the vision output device within the reference frame of the captured image. The vision output device generates virtual displays to a user at respective locations based on the absolute position and orientation of the vision output device within the reference frame of the captured image. The sensing device captures a user's selection of a virtual control. The processing unit identifies the selection of the virtual control within the captured image and enables a control action of a controllable device. | 09-26-2013 |
20130249943 | AUGMENTED REALITY PROCESS FOR SORTING MATERIALS - Systems and methods for processing materials for a recycling workstream are disclosed. The system may include one or more sorting surfaces on which sortable items may be placed. Illumination sources may be provided to illuminate both the items and the sorting surface(s). A variety of sensor systems may also be provided. The outputs of the sensor systems may be supplied to a computing system for determining the composition of the items and their location on the sorting surface(s). The computing system may also control the surface(s), illumination sources, and sensor systems. Additionally, the system may include one or more augmented reality interface devices used by sorters at the sorting facility. The computing system may communicate data streams to the augmented reality interfaces to provide the users augmented reality sensations. The sensations may give the users information and instructions regarding how to sort the items into one or more sorting bins. | 09-26-2013 |
20130249944 | APPARATUS AND METHOD OF AUGMENTED REALITY INTERACTION - A method of augmented reality interaction for repositioning a virtual object on an image of a surface comprises capturing successive video images of the surface and first and second control objects and defining an interaction start area over the surface with respect to the virtual object. The method detects the control objects in successive video images, detects whether the control objects are brought together over the interaction start area, and if so, analyses a region of successive video images using optical flow analysis to determine the overall direction of motion of the control objects and augmenting the video image to show the virtual object being held by the control objects. Augmenting the video image itself comprises superposing a graphical effect on the video image prior to superposition of the virtual object, such that the graphical effect visually disconnects the virtual object from the video image in the resulting augmented image. | 09-26-2013 |
20130249945 | HEAD-MOUNTED DISPLAY DEVICE - A head-mounted display device that allows a user to visually recognize a virtual image in a state where the head-mounted display device is mounted on the head of the user, including: an image processing unit that performs a process of generating an image; and an image display unit having an image light generating unit that generates image light representing the image, and configured such that the user can visually recognize the virtual image and the outside world, wherein the head-mounted display device is configured such that in a partial area of an area where the virtual image can be displayed in a visual field of the user, the outside world can be visually recognized preferentially. | 09-26-2013 |
20130249946 | HEAD-MOUNTED DISPLAY DEVICE - A head-mounted display device includes: an image display unit including an image light generating unit that generates image light representing an image and a light guide unit that guides the image light to the eyes of a user, and allowing the user to visually recognize a virtual image in a state where the image display unit is mounted on the head of the user; a detecting unit that is disposed in the image display unit and detects at least one of an impact and displacement; and a control unit that generates a given command based on detection data detected in the detecting unit. | 09-26-2013 |
20130249947 | COMMUNICATION USING AUGMENTED REALITY - A method for communicating with at least one user using augmented reality is described. The method for communicating with at least one user using augmented reality includes providing at least one augmented reality environment and combining the augmented reality environment with first data from a third party API, wherein the first data is mapped and blended with the at least one augmented reality environment. The method further includes receiving second data from a first user and a second user, wherein the second data is generated by a plurality of input/output (I/O) devices, and wherein said I/O devices provide the first user and the second user with access to the at least one augmented reality environment. | 09-26-2013 |
20130249948 | PROVIDING INTERACTIVE TRAVEL CONTENT AT A DISPLAY DEVICE - A method for providing content to a user at an interactive device with a display is provided. The method includes providing a presentation layer for the content, wherein the presentation layer is operable to embed interactive elements that appear on the display, receiving, at the interactive device, data, displaying the content, wherein the content is based at least in part on the data and providing the user with the ability to arrange travel plans associated with the interactive elements. | 09-26-2013 |
20130257906 | Generating publication based on augmented reality interaction by user at physical site - Interaction data represents augmented reality interaction by a user using a mobile computing device with physical points of interest at a physical site. A publication is generated based on this interaction data and provided to the user. | 10-03-2013 |
20130257907 | CLIENT DEVICE - An information processing apparatus comprising an image recognition unit that detects whether an image preset as a marker image exists in a captured image; a spatial recognition unit that, when the marker image is detected by the image recognition unit, constructs a real-world coordinate system computed from a position and orientation of the marker image, and computes information corresponding to a position and orientation of the information processing apparatus in the constructed real-world coordinate system; and a virtual object display unit that acquires at least one of the data stored in the storage unit, data existing on a network, and data transmitted from the one or more other information processing apparatuses and received by the communication interface, places the acquired data in the constructed real-world coordinate system, and controls the display unit to display the acquired data as one or more virtual objects. | 10-03-2013 |
20130257908 | OBJECT DISPLAY DEVICE, OBJECT DISPLAY METHOD, AND OBJECT DISPLAY PROGRAM - An object display device includes a virtual object process unit that processes an object based on imaging information that an imaging unit references upon acquisition of an image in real space, an image synthesis unit that superimposes the processed object on the image in real space, and a display unit that displays a superimposed image. Accordingly, the feature of the image in real space is reflected in the object superimposed. Thus, a sense of incongruity upon superimposing and displaying the object on the image in real space is reduced. | 10-03-2013 |
20130265331 | Virtual Reality Telescopic Observation System of Intelligent Electronic Device and Method Thereof - A virtual reality telescopic observation system of an intelligent electronic device. The virtual reality telescopic observation system includes an electronic device arranged for displaying a virtual image, an engagement slot arranged for movably embedding the electronic device therein and a virtual reality telescopic optical module, including at least one optical lens installed corresponding to the electronic device, arranged for projecting the virtual image displayed by the electronic device into the virtual reality telescopic optical module, such that a viewer views the virtual image displayed by the electronic device from the virtual reality telescopic optical module. When the viewer performs a virtual reality telescopic observation, the angle, distance and position of the virtual image displayed by the electronic device are adjusted according to a change in the field of view made by the viewer. | 10-10-2013 |
20130265332 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD OF INFORMATION PROCESSING APPARATUS, AND STORAGE MEDIUM STORING PROGRAM - An information processing apparatus includes an identifier obtaining unit configured to obtain an identifier of another apparatus, a detection unit configured to detect a predetermined object from a captured image, a feature-information obtaining unit configured, based on the obtained identifier, to obtain feature information for specifying the predetermined object, a specifying unit configured, based on the feature information, to perform specifying processing for specifying the predetermined object detected by the detection unit, and a control unit configured, depending on a result of the specifying processing, to perform control not to obtain the identifier by the identifier obtaining unit. | 10-10-2013 |
20130265333 | Augmented Reality Based on Imaged Object Characteristics - Augmented reality may be enabled by adding computer generated images to images of real world occurrences. The computer generated images may be inserted automatically based on a characteristic of an imaged object in said image. | 10-10-2013 |
20130271491 | LOCAL SENSOR AUGMENTATION OF STORED CONTENT AND AR COMMUNICATION - The augmentation of stored content with local sensors and AR communication is described. In one example, the method includes gathering data from local sensors of a local device regarding a location, receiving an archival image at the local device from a remote image store, augmenting the archival image using the gathered data, and displaying the augmented archival image on the local device. | 10-17-2013 |
20130278631 | 3D POSITIONING OF AUGMENTED REALITY INFORMATION - A system and method for providing informational labels with perceived depth in the field of view of a user of a head mounted display device. In one embodiment, the method includes determining a physical location of the user and the head mounted display device, and identifying and determining a distance from the user to one or more objects of interest in the user's field of view. Using the distance from the user for each object, one can calculate a disparity value for viewing each object. The processor of the head mounted device may gather information concerning each of the objects in which the user is interested. The head mounted display device then provides a label for each of the objects and for each eye of the user, and, using the disparity values, places the labels within the field of view of the user. | 10-24-2013 |
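The disparity calculation this entry relies on — converting each object's distance into a per-eye pixel offset so its label is perceived at matching depth — follows the standard stereo relation disparity = focal length × baseline / distance. A minimal sketch, with illustrative focal length and baseline (a typical interpupillary distance) that are assumptions rather than values from the patent:

```python
def label_disparity_px(distance_m: float,
                       focal_px: float = 1000.0,
                       baseline_m: float = 0.064) -> float:
    """Stereo disparity (in pixels) at which to render a label so it is
    perceived at the same depth as an object distance_m away.
    focal_px: display optics focal length in pixels (assumed value).
    baseline_m: separation between the two eye views (assumed ~64 mm)."""
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    # Nearer objects need larger disparity; disparity falls off as 1/distance.
    return focal_px * baseline_m / distance_m
```

With these assumed values, an object 2 m away gets a 32-pixel disparity, while one 8 m away gets 8 pixels, so nearer labels are shifted more between the two eye images.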
20130278632 | METHOD FOR DISPLAYING AUGMENTED REALITY IMAGE AND ELECTRONIC DEVICE THEREOF - A method for displaying an augmented reality image and an electronic device thereof are provided. A method for displaying an augmented reality image in an electronic device includes comparing a target image with a viewpoint conversion image, the comparison determining matching pairs of a plurality of features of the target image and a plurality of features of the viewpoint conversion image and, if the matching pairs are determined, displaying an augmented reality image of the viewpoint conversion image. | 10-24-2013 |
20130278633 | METHOD AND SYSTEM FOR GENERATING AUGMENTED REALITY SCENE - A method and system for generating an augmented reality (AR) scene may include obtaining real world information including multimedia information and sensor information associated with a real world, loading an AR locator representing a scheme for mixing the real world information and at least one virtual object content and the real world information onto an AR container, obtaining the at least one virtual object content corresponding to the real world information using the AR locator from a local storage or an AR contents server, and visualizing AR information by mixing the real world information and the at least one virtual object content based on the AR locator. | 10-24-2013 |
20130278634 | METHOD AND APPARATUS OF PROCESSING DATA TO SUPPORT AUGMENTED REALITY - A method and apparatus of processing data to support Augmented Reality (AR) are provided. In the method, an image file including at least a scene description stream, an object descriptor stream, and a visual stream including a real-world medium is received, the scene description stream including an AR node that represents information about at least one piece of AR content used to augment the real-world medium, information about an AR locator and an object descriptor Identifier (ID) is acquired from the AR node, the AR locator describing when the at least one piece of AR content appears and how the real-world medium is augmented with the at least one piece of AR content and the object descriptor ID identifying an Object Descriptor (OD) that describes the at least one piece of AR content, and at least one elementary stream descriptor. | 10-24-2013 |
20130278635 | ASSEMBLING METHOD, MONITORING METHOD, COMMUNICATION METHOD, AUGMENTED REALITY SYSTEM AND COMPUTER PROGRAM PRODUCT - An augmented reality system, an assembling method for assembling a first set-up component to a second set-up component under the assistance of an augmented reality system, a method for monitoring a set-up component and a method for transmitting data from or to a set-up component are provided. The augmented realty system may capture a variable marker associated with the respective set-up component. The augmented reality system can recognize the location and/or status of the variable marker and thus decide whether the connection between the first and second set-up component is established correctly or not. Further, data can be transmitted by the variable marker monitored by the augmented reality system. | 10-24-2013 |
20130278636 | OBJECT DISPLAY DEVICE, OBJECT DISPLAY METHOD, AND OBJECT DISPLAY PROGRAM - An object display device calculates a setting value including the focal length for acquiring an image in real space with a camera setting value determination unit based on the distance to a virtual object calculated by a virtual object distance calculation unit, and acquires the image in real space with an imaging unit using the calculated focal length. Thus, an image that is in focus in a position where the virtual object is superimposed and becomes more out of focus as the distance increases from the position where the virtual object is superimposed is acquired. Since the virtual object is superimposed on the image in real space acquired in this manner, the virtual object that is the subject of attention for a user is emphasized, and a sense of incongruity in the superimposed image is reduced. | 10-24-2013 |
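The defocus effect this entry describes — sharp at the virtual object's distance, increasingly blurred away from it — is captured by the thin-lens circle-of-confusion relationship. A minimal sketch of that general relationship, with illustrative lens parameters that are assumptions and not values from the patent:

```python
def blur_diameter_mm(object_dist_mm: float,
                     focus_dist_mm: float,
                     focal_mm: float = 35.0,
                     aperture_mm: float = 17.5) -> float:
    """Thin-lens circle-of-confusion diameter (on the sensor, in mm) for a
    real-world point at object_dist_mm when the camera is focused at
    focus_dist_mm, i.e. at the virtual object's distance.
    focal_mm and aperture_mm are assumed illustrative lens parameters."""
    # Zero at the focus plane; grows with relative distance from it.
    return (aperture_mm
            * abs(object_dist_mm - focus_dist_mm) / object_dist_mm
            * focal_mm / (focus_dist_mm - focal_mm))
```

Setting focus_dist_mm to the computed distance of the virtual object makes the superimposition plane sharp, while background points blur in proportion to their departure from that plane, which is the emphasis effect the abstract describes.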
20130286045 | METHOD OF SIMULATING LENS USING AUGMENTED REALITY - In a method of simulating a lens using augmented reality, a user who desires to purchase a vision correction product may wear lenses precisely corrected using a computer device through a virtual experience, and the inconvenience of frequently replacing various lenses when taking an eye examination is considerably mitigated. The effect of wearing a variety of vision correction products can be experienced in a short time period, and the user is expected to be able to select an optimized custom-tailored vision correction product. Particularly, in manufacturing a functional lens which has complicated manufacturing steps and requires a precise examination, such as a progressive multi-focal lens, a coating lens, a color lens, a myopia progress suppression lens, an eye fatigue relief lens or the like, it is expected that a precise product can be manufactured, and manufacturing time can be greatly reduced. | 10-31-2013 |
20130286046 | NARROWCASTING FROM PUBLIC DISPLAYS, AND RELATED METHODS - A user with a cell phone interacts, in a personalized session, with an electronic sign system. In some embodiments, the user's location relative to the sign is discerned from camera imagery—either imagery captured by the cell phone (i.e., of the sign), or captured by the sign system (i.e., of the user). Demographic information about the user can be estimated from imagery acquired by the sign system, or can be accessed from stored profile data associated with the user. The sign system can transmit payoffs (e.g., digital coupons or other response data) to viewers—customized per user demographics. In some arrangements, the payoff data is represented by digital watermark data encoded in the signage content. The encoding can take into account the user's location relative to the sign—allowing geometrical targeting of different payoffs to differently-located viewers. Other embodiments allow a user to engage an electronic sign system for interactive game play, using the cell phone as a controller. | 10-31-2013 |
20130286047 | MIRROR SYSTEM AND CONTROL METHOD THEREFOR - In a mirror system for displaying an image on a mirror surface, a mirror reflects incident light from an object facing its front surface side to present a reflected image, and transmits incident light from its rear surface. A display unit generates a presentation image to be superimposed on the reflected image. An optical unit is arranged between the display unit and the rear surface of the mirror, and images the presentation image. An acquisition unit acquires distance information between the object and the mirror. A controller controls the imaging point of the presentation image by the optical unit according to the distance information. | 10-31-2013 |
20130286048 | METHOD AND SYSTEM FOR MANAGING DATA IN TERMINAL-SERVER ENVIRONMENTS - A system and method of providing augmented reality (AR) information uses a camera to take images from a mobile device's environment. These images are compared to images stored in an image-object database to identify objects within these images. The objects are compared to objects stored in a database with AR information to display the AR information as an overlay of the real image on the mobile device. The AR information is generated from a mobile device user preference database. The preferences are matched with a second user's preferences obtained from data mining activities across a store management system, ERP system, CRM system, data on sales figures tied to customer data, and the like. An additional image-object localization database provides detailed localization information for objects identified. The store management system, ERP, or CRM system can facilitate electronic analysis of products, sales and customer data. | 10-31-2013 |
20130293577 | INTELLIGENT TRANSLATIONS IN PERSONAL SEE THROUGH DISPLAY - A see-through, near-eye, mixed reality display apparatus for providing translations of real world data for a user. A wearer's location and orientation with the apparatus are determined, and input data for translation is selected using sensors of the apparatus. Input data can be audio or visual in nature, and is selected by reference to the gaze of the wearer. The input data is translated for the user relative to user profile information bearing on accuracy of a translation, determining from the input data whether a linguistic translation, knowledge addition translation or context translation is useful. | 11-07-2013 |
20130293578 | Four Dimensional Image Registration Using Dynamical Model For Augmented Reality In Medical Applications - Technologies described herein generally provide for an improved augmented reality system for providing augmented reality images comprising a pre-operative image superimposed on a patient image. The accuracy of registering the pre-operative image on the patient image, and hence the quality of the augmented reality image, may be impacted by the periodic movement of an organ. Registration of the pre-operative image on the patient image can be improved by accounting for motion of the organ. That is, the organ motion, which can be described by a dynamical model, can be used to correct registration errors that do not match the dynamical model. The technologies may generate a sequence of 3-D patient images in real-time for guided surgery. | 11-07-2013 |
20130293579 | ELECTRONIC SYSTEM WITH AUGMENTED REALITY MECHANISM AND METHOD OF OPERATION THEREOF - A method of operation of an electronic system includes: scanning an image for detecting a subject; detecting a potential adjustment for moving the subject within the image; and selecting an augmentation for recommending the potential adjustment and for displaying the augmentation on a device. | 11-07-2013 |
20130293580 | SYSTEM AND METHOD FOR SELECTING TARGETS IN AN AUGMENTED REALITY ENVIRONMENT - Techniques are disclosed for facilitating electronic commerce in an augmented reality environment. In some embodiments, a method comprises detecting, by a mobile device, presence of the physical product or the real life service; and presenting, on the mobile device, information to conduct the transaction of a physical product or a real life service via the augmented reality environment. In some embodiments, a method comprises detecting one or more targets in the augmented reality platform using a select area in a perspective of a user, the perspective being captured by a mobile device; and prompting the user to choose an object of interest from the one or more detected targets. Among other advantages, embodiments disclosed herein provide an intuitive and integrated user experience in shopping using augmented reality devices, thereby reducing the consumer user's effort in conducting such activities, reducing necessary sales personnel and their working hours, and increasing sales. | 11-07-2013 |
20130293581 | Back-to-Back Video Displays - The present invention extends to methods, systems, and computer program products for displaying advertisements to customers using multiple display devices that are positioned in a back-to-back orientation. Visual content captured by one display device can be transmitted to the other display device for display such that the display devices appear to be transparent. In this way, items positioned in front of one display device can be captured by the display device and augmented prior to being displayed on the other display device. These augmentations can provide dynamic and customized advertisements of items as they appear on a shelf or other location within a retail environment to thereby increase a customer's interest in the product while the customer is near the product. | 11-07-2013 |
20130293582 | METHOD TO GENERATE VIRTUAL DISPLAY SURFACES FROM VIDEO IMAGERY OF ROAD BASED SCENERY - Generating a virtual model of the environment in front of a vehicle based on images captured using an image capturing device. Images captured by the image capturing device of a vehicle are processed to extract features of interest. Based on the extracted features, a virtual model of the environment is constructed. The virtual model includes one or more surfaces. Each of the surfaces may be used as a reference surface to attach and move graphical elements generated to implement augmented reality (AR). As the vehicle moves, the graphical elements move as if they are affixed to one of the surfaces. By presenting the graphical elements moving together with real objects in front of the vehicle, the driver perceives the graphical elements as being part of the actual environment, reducing distraction or confusion associated with the graphical elements. | 11-07-2013 |
20130293583 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing device including a superimposition display position determining unit which determines a position of an object having a predetermined flat surface or curved surface out of an object imaged in an input image based on an environment map, a superimposition display image generating unit which generates a superimposition display image by setting superimposition display data at the position of the object determined by the superimposition display position determining unit, an image superimposing unit which superimposes the superimposition display image on a visual field of a user, an operating object recognizing unit which recognizes an operating object imaged in the input image, and a process executing unit which executes a process corresponding to an item selected based on a position of the operating object recognized by the operating object recognizing unit. | 11-07-2013 |
20130293584 | USER-TO-USER COMMUNICATION ENHANCEMENT WITH AUGMENTED REALITY - The enhancement of user-to-user communication with augmented reality is described. In one example, a method includes receiving virtual object data at a local device from a remote user, generating a virtual object using the received virtual object data, receiving an image at the local device from a remote image store, augmenting the received image at the local device by adding the generated virtual object to the received image, and displaying the augmented received image on the local device. | 11-07-2013 |
20130293585 | MOBILE TERMINAL AND CONTROL METHOD FOR MOBILE TERMINAL - Display is switched between overlapping AR objects by a mobile terminal ( | 11-07-2013 |
20130293586 | INFORMATION PROCESSING DEVICE, ALARM METHOD, AND PROGRAM - An apparatus comprising a memory storing instructions is provided. The apparatus includes a control unit for executing the instructions to send signals to display, for a user, a first virtual image superimposed onto a representation of real space. The control unit further executes instructions to send signals to identify a potential source of interest for the user, the potential source of interest being outside of a user focus area in the representation of real space. The control unit further executes instructions to send signals to notify the user of the potential source of interest. | 11-07-2013 |
20130300766 | DISPLAY INSTRUMENT AND IMAGE DISPLAY METHOD - A head-mounted display device including an image display apparatus configured to display a captured image of a portion of an environment viewable through the head-mounted display device; and a dimmer configured to, while the captured image is displayed, allow a portion of ambient light from the environment to pass through the dimmer. Also, a method of displaying information on a head-mounted display device. The method may include displaying a captured image of a portion of an environment viewable through the display device; and dimming ambient light received through the head-mounted display device from the environment while displaying the captured image. | 11-14-2013 |
20130300767 | METHOD AND SYSTEM FOR AUGMENTED REALITY - A method of augmented reality includes associating tint information with a predetermined graphical object, and receiving a video image of a real scene comprising a feature for detection. The method further includes detecting the feature in the video image of the real scene and selecting a graphical object responsive to the detected feature, and augmenting the video image with the selected graphical object. If the selected graphical object is the predetermined graphical object, the method further includes retrieving the tint information associated with the predetermined graphical object, and modifying the colour balance of the video image responsive to the tint information. | 11-14-2013 |
20130307874 | METHODS AND SYSTEMS FOR GENERATING AND JOINING SHARED EXPERIENCE - According to an example, a computer may receive characteristics information of an object in a video stream captured by a first computing device, generate a signature based on the characteristics information, identify augmented reality information associated with the signature, transmit the augmented reality information to the first computing device, receive, from a second computing device, a set of characteristics information of the object in an image captured by the second computing device, determine that the set of characteristics information from the second computing device has a second signature that matches the signature generated based on the characteristics information received from the first computing device, and transmit the identified augmented reality information to the second computing device. | 11-21-2013 |
20130307875 | AUGMENTED REALITY CREATION USING A REAL SCENE - The creation of augmented reality is described using a real scene. In one example, a process includes observing a real scene through a camera of a device, observing a user gesture through the camera of the device, presenting the scene and the gesture on the display of the device, generating a virtual object and placing it in the scene based on the observed user gesture, and presenting the virtual object in the real scene on the display. | 11-21-2013 |
20130314441 | IMAGE-DRIVEN VIEW MANAGEMENT FOR ANNOTATIONS - A mobile device uses an image-driven view management approach for annotating images in real-time. An image-based layout process used by the mobile device computes a saliency map and generates an edge map from a frame of a video stream. The saliency map may be further processed by applying thresholds to reduce the number of saliency levels. The saliency map and edge map are used together to determine a layout position of labels to be rendered over the video stream. The labels are displayed in the layout position until a change of orientation of the camera that exceeds a threshold is detected. Additionally, the representation of the label may be adjusted, e.g., based on a plurality of pixels bounded by an area that is coincident with a layout position for a label in the video frame. | 11-28-2013 |
20130314442 | SPATIALLY REGISTERED AUGMENTED VIDEO - A source video stream is processed to extract a desired object from the remainder of video stream to produce a segmented video of the object. Additional relevant information, such as the orientation of the source camera for each frame in the resulting segmented video of the object, is also determined and stored. During replay, the segmented video of the object, as well as the source camera orientation are obtained. Using the source camera orientation for each frame of the segmented video of the object, as well as target camera orientation for each frame of a target video stream, a transformation for the segmented video of the object may be produced. The segmented video of the object may be displayed over the target video stream, which may be a live video stream of a scene, using the transformation to spatially register the segmented video to the target video stream. | 11-28-2013 |
20130314443 | METHODS, MOBILE DEVICE AND SERVER FOR SUPPORT OF AUGMENTED REALITY ON THE MOBILE DEVICE - The present disclosure relates to methods, to a mobile device and to a server for support of augmented reality on the mobile device. The mobile device acquires a current image of its environment. The mobile device determines its orientation and its location by comparing the current image with reference image information related to an estimated location of the mobile device. The mobile device may provide information about new, modified or missing reference image features, for a given location, to a server. The server may then update a corresponding reference image feature in a local database. | 11-28-2013 |
20130321462 | GESTURE BASED REGION IDENTIFICATION FOR HOLOGRAMS - Techniques are provided for allowing a user to select a region within virtual imagery, such as a hologram, being presented in an HMD. The user could select the region by using their hands to form a closed loop such that from the perspective of the user, the closed loop corresponds to the region the user wishes to select. The user could select the region by using a prop, such as a picture frame. In response to the selection, the selected region could be presented using a different rendering technique than other regions of the virtual imagery. Various rendering techniques such as zooming, filtering, etc. could be applied to the selected region. The identification of the region by the user could also serve as a selection of an element in that portion of the virtual image. | 12-05-2013 |
20130321463 | APPARATUS AND METHOD FOR AUGMENTING A VIDEO IMAGE - A method of augmenting a video image comprises the steps of estimating the extent of an identified surface captured within the video image and identifying skin pixels within a test area corresponding to the estimated extent of the identified surface within the video image. The method also includes extrapolating positions for skin pixels within the video image that are outside the estimated extent of the identified surface based upon the identified skin pixels, generating a mask from a combination of identified skin pixels and extrapolated skin pixels, and setting an extent of a computer graphic to be superposed substantially on top of the identified surface within the video image. The extent is greater than the estimated extent of the identified surface. The method further includes superposing the graphical augmentation of the video image responsive to the generated mask. | 12-05-2013 |
20130321464 | APPARATUS AND METHOD OF AUGMENTING VIDEO - A method of generating an internally consistent model of the state of a book captured in a video image is provided. The method comprises obtaining a plurality of pieces of evidence relating to the state of a corresponding plurality of aspects of the book in the video image, associating a quality score with each piece of evidence, generating an initial model of the state of the book wherein the state of the book is constrained by physical properties of the book and at least the highest scoring piece of evidence, and sequentially constraining the model in response to one or more successive pieces of evidence whose scores meet a respective predetermined first threshold value. | 12-05-2013 |
20130328925 | OBJECT FOCUS IN A MIXED REALITY ENVIRONMENT - A system and method are disclosed for interpreting user focus on virtual objects in a mixed reality environment. Using inference, express gestures and heuristic rules, the present system determines which of the virtual objects the user is likely focused on and interacting with. At that point, the present system may emphasize the selected virtual object over other virtual objects, and interact with the selected virtual object in a variety of ways. | 12-12-2013 |
20130328926 | AUGMENTED REALITY ARRANGEMENT OF NEARBY LOCATION INFORMATION - A method of displaying information of interest to a user on an electronic device comprises capturing an image of surrounding area via a camera, displaying the image on a display of the electronic device, identifying objects of interest in a portion of the image as points of interest (POI) to the user, obtaining POI information about the points of interest, arranging said POI information, and displaying the arranged information with augmented reality on the image for the identified objects. | 12-12-2013 |
20130328927 | AUGMENTED REALITY PLAYSPACES WITH ADAPTIVE GAME RULES - A system for generating a virtual gaming environment based on features identified within a real-world environment, and adapting the virtual gaming environment over time as the features identified within the real-world environment change is described. Utilizing the technology described, a person wearing a head-mounted display device (HMD) may walk around a real-world environment and play a virtual game that is adapted to that real-world environment. For example, the HMD may identify environmental features within a real-world environment such as five grassy areas and two cars, and then spawn virtual monsters based on the location and type of the environmental features identified. The location and type of the environmental features identified may vary depending on the particular real-world environment in which the HMD exists and therefore each virtual game may look different depending on the particular real-world environment. | 12-12-2013 |
20130328928 | OBSTACLE AVOIDANCE APPARATUS AND OBSTACLE AVOIDANCE METHOD - An obstacle detecting unit detects an obstacle for a user wearing a head mounted display from an image of the outside world. A distance calculating unit calculates the distance from a detected obstacle to the user wearing the head mounted display. An obstacle replacing unit replaces the detected obstacle with a virtual object. A virtual object synthesizing unit generates a virtual object at a position within a virtual space displayed on the head mounted display, in which the position is determined according to the distance to the obstacle. | 12-12-2013 |
20130328929 | MOBILE COMMUNICATION TERMINAL FOR PROVIDING AUGMENTED REALITY SERVICE AND METHOD OF CHANGING INTO AUGMENTED REALITY SERVICE SCREEN - A method of changing into a screen of an Augmented Reality (AR) service in a mobile communication terminal including a camera is provided. The method of changing into a screen of an AR service in a mobile communication terminal includes displaying a prior screen different from an AR service screen, detecting a predetermined event to change display of the prior screen to the AR service screen, driving the camera if the predetermined event has been detected, capturing an image using the camera, and displaying the AR service screen rendered based on the image captured by the camera. | 12-12-2013 |
20130328930 | APPARATUS AND METHOD FOR PROVIDING AUGMENTED REALITY SERVICE - An apparatus for providing an augmented reality service is provided. The apparatus includes a first server configured to provide an augmented reality service for a first region, a second server configured to provide an augmented reality service for a second region that is differentiated from the first region, and a terminal configured to receive information for the second region from any one of the first server and the second server, and at the time of entering the second region from the first region in the course of performing the augmented reality service mode for the first region, configured to output the information for the second region and configured to switch to the augmented reality service performing mode for the second region. | 12-12-2013 |
20130328931 | System and Method for Mobile Identification of Real Property by Geospatial Analysis - A system and method of identifying real property based on real-time sensor-collected geospatial data regarding the location, orientation and field of view of a camera-enabled mobile computing device, and of collecting and returning information related to the identified real property. A mobile device user takes a picture of a property (e.g., home, building, structure, etc.), at which time the client software captures the device's location and orientation-related sensor data before, during and after the picture is taken, and sends this data and the picture to the servers. The servers examine this data and use it to construct a database query of potential property matches, and the criteria by which those potential matches will be scored. The servers then score each candidate property against the criteria, and return the best match or matches to the client device, including additional information about each property. The client renders this information for the user, records passive or active user feedback about the accuracy of the match and information, and sends that feedback back to the server. | 12-12-2013 |
20130335445 | METHODS AND SYSTEMS FOR REALISTIC RENDERING OF DIGITAL OBJECTS IN AUGMENTED REALITY - A system and method of rendering in real time a virtual object onto a viewfinder display, the method comprising determining one or more scene properties of a scene on a viewfinder display of a device, receiving a virtual object for insertion into the scene, determining a location for placing the virtual object within the scene, determining a first appearance of the virtual object based on the one or more scene properties, and inserting the virtual object with the first appearance into the scene depicted on the viewfinder display of the device based on the location. | 12-19-2013 |
20130335446 | METHOD AND APPARATUS FOR CONVEYING LOCATION BASED IMAGES BASED ON A FIELD-OF-VIEW - An approach for enabling users to view an image of a location from different fields-of-view is described. A field-of-view generator causes a rendering of a user interface element representing a field-of-view. The field-of-view generator further processes one or more interactions with the user interface element to determine one or more parameters for specifying the field-of-view. The field-of-view generator further determines a portion of at least one panoramic image that is visible in the field-of-view based, at least in part, on the one or more parameters. Still further, the field-of-view generator causes a rendering of the portion of the at least one panoramic image. | 12-19-2013 |
20130335447 | ELECTRONIC DEVICE AND METHOD FOR PLAYING REAL-TIME IMAGES IN A VIRTUAL REALITY - In a method for playing real-time images in a virtual reality using an electronic device, the method generates a request for obtaining a real-time image of a scene and notifies a Virtual Reality (VR) client to send the request to a web server to obtain the real-time image. The real-time image is cut into a sequence of static pictures by the web server, and is sent to the VR client through a network. The method triggers the VR client to play each of the sequence of static pictures to present the real-time image on a display screen of the electronic device. | 12-19-2013 |
20130335448 | METHOD AND APPARATUS FOR PROVIDING VIDEO CONTENTS SERVICE, AND METHOD OF REPRODUCING VIDEO CONTENTS OF USER TERMINAL - Disclosed is a method of providing a video contents service including calculating conversion information indicating a relation between a projection area and an area corresponding to the projection area in order to project video contents on the area corresponding to the projection area which is a partial area within a prepared image in a user image photographed by a user terminal; and transmitting the calculated conversion information to the user terminal. | 12-19-2013 |
20130342568 | LOW LIGHT SCENE AUGMENTATION - Embodiments related to providing low light scene augmentation are disclosed. One embodiment provides, on a computing device comprising a see-through display device, a method including recognizing, from image data received from an image sensor, a background scene of an environment viewable through the see-through display device, the environment comprising a physical object. The method further includes identifying one or more geometrical features of the physical object and displaying, on the see-through display device, an image augmenting the one or more geometrical features. | 12-26-2013 |
20130342569 | METHOD AND APPARATUS FOR AUGMENTING AN INDEX GENERATED BY A NEAR EYE DISPLAY - A method, apparatus and computer program product are provided in order to augment an index image generated by a near eye display in order to more clearly present at least a portion of the index image. In the context of a method, a position of the mobile terminal relative to an index image generated by a near eye display is determined. The method also determines an image to be presented by the mobile terminal based upon the index image and the position of the mobile terminal relative to the index image. The method also causes the image to be presented by the mobile terminal. A corresponding apparatus and a computer program product are also provided. | 12-26-2013 |
20130342570 | OBJECT-CENTRIC MIXED REALITY SPACE - A see-through, near-eye, mixed reality display apparatus providing a mixed reality environment wherein one or more virtual objects and one or more real objects exist within the view of the device. Each of the real and virtual objects has a commonly defined set of attributes understood by the mixed reality system, allowing the system to manage relationships and interaction between virtual objects and other virtual objects, and between virtual and real objects. | 12-26-2013 |
20130342571 | MIXED REALITY SYSTEM LEARNED INPUT AND FUNCTIONS - A see-through, near-eye, mixed reality display apparatus providing a mixed reality environment wherein one or more virtual objects and one or more real objects exist within the view of the device. Each of the real and virtual objects has a commonly defined set of attributes understood by the mixed reality system, allowing the system to manage relationships and interaction between virtual objects and other virtual objects, and between virtual and real objects. | 12-26-2013 |
20130342572 | CONTROL OF DISPLAYED CONTENT IN VIRTUAL ENVIRONMENTS - A system and method are disclosed for controlling content displayed to a user in a virtual environment. The virtual environment may include virtual controls with which a user may interact using predefined gestures. Interacting with a virtual control may adjust an aspect of the displayed content, including for example one or more of fast forwarding of the content, rewinding of the content, pausing of the content, stopping the content, changing a volume of content, recording the content, changing a brightness of the content, changing a contrast of the content and changing the content from a first still image to a second still image. | 12-26-2013 |
20130342573 | Transitioning 3D Space Information to Screen Aligned Information for Video See Through Augmented Reality - Methods, apparatuses, and systems are provided to transition 3D space information detected in an Augmented Reality (AR) view of a mobile device to screen aligned information on the mobile device. In at least one implementation, a method includes determining augmentation information associated with an object of interest, including a Modelview (M1) matrix and a Projection (P1) matrix, displaying the augmentation information on top of a video image of the object of interest using the M1 and P1 matrices, generating a second Modelview (M2) matrix and a second Projection (P2) matrix, such that the matrices M2 and P2 represent the screen aligned final position of the augmentation information, and displaying the augmentation information using the M2 and P2 matrices. | 12-26-2013 |
20130342574 | Range of Focus in an Augmented Reality Application - A computer-implemented augmented reality method includes receiving one or more indications, entered on a mobile computing device by a user of the mobile computing device, of a distance range for determining items to display with an augmented reality application, the distance range representing geographic distance from a base point where the mobile computing device is located. The method also includes selecting, from items in a computer database, one or more items that are located within the distance range from the mobile computing device entered by the user, and providing data for representing labels for the selected one or more items on a visual display of the mobile computing device, the labels corresponding to the selected items, and the items corresponding to geographical features that are within the distance range as measured from the mobile computing device. | 12-26-2013 |
20140002490 | SAVING AUGMENTED REALITIES | 01-02-2014 |
20140002491 | DEEP AUGMENTED REALITY TAGS FOR HEAD MOUNTED DISPLAYS | 01-02-2014 |
20140002492 | PROPAGATION OF REAL WORLD PROPERTIES INTO AUGMENTED REALITY IMAGES | 01-02-2014 |
20140002493 | AUGMENTED REALITY SIMULATION CONTINUUM | 01-02-2014 |
20140002494 | ORIENTATION AWARE APPLICATION DEMONSTRATION INTERFACE | 01-02-2014 |
20140002495 | MULTI-NODE POSTER LOCATION | 01-02-2014 |
20140002496 | CONSTRAINT BASED INFORMATION INFERENCE | 01-02-2014 |
20140002497 | AUGMENTED REALITY SYSTEM | 01-02-2014 |
20140002498 | APPARATUS AND METHOD FOR CREATING SPATIAL AUGMENTED REALITY CONTENT | 01-02-2014 |
20140002499 | Method, System, and Computer-Readable Recording Medium for Providing Information on an Object Using Viewing Frustums | 01-02-2014 |
20140015858 | AUGMENTED REALITY SYSTEM - This relates to augmented reality systems. The augmented reality system may display a computer-generated image of a virtual object overlaid on a view of a physical, real-world environment. The system may allow users to move their associated virtual objects to real-world locations by changing the location data associated with the virtual objects. The system may also allow users to observe an augmented reality view having both a real-world view of an environment as captured by an image sensor and computer-generated images of the virtual objects located within the view of the image sensor. A user may then capture a virtual object displayed within their augmented reality view by taking a picture of the mixed-view image having the virtual object overlaid on the real-world view of the environment. | 01-16-2014 |
20140015859 | MOBILE TERMINAL AND CONTROL METHOD THEREOF - A mobile terminal and a control method thereof, which can obtain an image, are provided. A mobile terminal includes a camera unit, a pictogram extraction unit and a controller. The camera unit obtains image information corresponding to at least one of a still image and a moving image. The pictogram extraction unit extracts at least one pictogram from the obtained image information. The controller detects information related to the extracted pictogram, and displays the detected information to be overlapped with the obtained image information. In the mobile terminal, the controller includes, as the detected information, at least one of previously recorded information and currently searched information related to the extracted pictogram. | 01-16-2014 |
20140015860 | AUGMENTED REALITY SERVICE - A search system, a user device, and a server for AR service are disclosed. The search system includes a search engine configured to search web content and a marker in response to an input of a user, a matching unit configured to match the searched web content with the searched marker, and an output unit configured to transmit a document including the searched web content and the searched marker to the user. | 01-16-2014 |
20140022281 | PROJECTING AIRPLANE LOCATION SPECIFIC MAINTENANCE HISTORY USING OPTICAL REFERENCE POINTS - A method for displaying location specific maintenance history for an object is implemented by operating a camera to locate at least one marker tag with fiducial marker-based tracking functionality on the object to provide a reference to a coordinate system associated with the object. An area of the object surrounding the coordinates defined by the marker tag is determined from the coordinate system. A repair history for the determined area is then projected onto the object, with the projection referenced to the coordinate system associated with the object. | 01-23-2014 |
20140022282 | MOBILE TERMINAL DEVICE, TERMINAL PROGRAM, AUGMENTED REALITY SYSTEM, AND CLOTHING - A controller performs image recognition of the design of at least part of a picture obtained by capturing, by an image capturing unit, an image of a character(s) printed on or attached to a character item, generates an augmented reality image by combining a stagecraft image, prepared in association with each of image-recognized designs, with a subject image of a person wearing or having the character item, and displays the augmented reality image on a display unit. | 01-23-2014 |
20140022283 | AUGMENTED REALITY APPARATUS - Augmented reality apparatus ( | 01-23-2014 |
20140022284 | IMAGE DISPLAYING METHOD FOR A HEAD-MOUNTED TYPE DISPLAY UNIT - Disclosed herein is an image displaying method for a head-mounted type display unit which includes a frame of the glasses type for being mounted on the head of an observer, an image display apparatus attached to the frame, and a control section for controlling image display of the image display apparatus. The image display apparatus includes an image forming apparatus, and an optical apparatus. The image displaying method includes the steps of: storing a data group configured from a plurality of data in a storage section; adding a data identification code to each of the data; sending a designation identification code and display time information at predetermined intervals of time; and reading out the data whose data identification code coincides with the received designation identification code from the storage section and controlling the image forming apparatus to display an image based on the read out data. | 01-23-2014 |
20140028711 | Experimental Chamber with Computer-Controlled Display Wall - An environmental chamber having an interior compartment, an augmented display, and a controller is disclosed. The interior compartment is adapted for isolating an experimental setup from an environment external to the interior compartment. The augmented display is positioned to allow a user in the external environment to view the interior compartment and an image generated on the augmented display. The controller generates the image. The image includes information about a component within the interior compartment. The augmented display can include a touch-enabled display screen that allows the user to interact with the controller. | 01-30-2014 |
20140028712 | Method and apparatus for controlling augmented reality - Method and apparatus for controlling an augmented reality interface are disclosed. In one embodiment, a method for use with an augmented reality enabled device (ARD) comprises receiving image data for tracking a plurality of objects, identifying an object to be selected from the plurality of objects, determining whether the object has been selected based at least in part on a set of selection criteria, and causing an augmentation to be rendered with the object if it is determined that the object has been selected. | 01-30-2014 |
20140028713 | Interactions of Tangible and Augmented Reality Objects - Method, computer program product, and apparatus for providing interactions of tangible and augmented reality objects are disclosed. In one embodiment, a method for use with an augmented reality enabled device (ARD) comprises performing 3-dimensional tracking of one or more objects based at least in part on captured images of the one or more objects, detecting a state change of at least one object of the one or more objects based at least in part on the captured images, and causing an augmentation to be rendered in response to the state change of the at least one object, where a type of the augmentation is based at least in part on the state change of the at least one object. | 01-30-2014 |
20140028714 | Maintaining Continuity of Augmentations - Methods and apparatuses for maintaining continuity of augmentations are disclosed. In one embodiment, a method for use with an augmented reality enabled device (ARD) comprises tracking a plurality of objects and a background based at least in part on visual information derived from an image, maintaining states of the plurality of objects based at least in part on information other than the visual information, and providing data for rendering augmentation in response to the states of the plurality of objects. | 01-30-2014 |
20140028715 | METHOD AND APPARATUS FOR DETECTING OCCLUSION IN AN AUGMENTED REALITY DISPLAY - A method, apparatus and computer program product are provided to display objects in an augmented reality interface. In this regard, the method, apparatus, and computer program product may determine a location of a mobile terminal, receive object meshes for one or more objects in geographic proximity to the mobile terminal, remove, using a processor, one or more polygons from the object meshes, and determine occlusion between the location of the mobile terminal and at least one point of interest. The at least one point of interest may be identified as occluded if a line segment between the location and the at least one point of interest intersects with at least one of the object meshes. The method, apparatus, and computer program product may also include causing the at least one point of interest to not be displayed by an augmented reality interface. | 01-30-2014 |
20140028716 | METHOD AND ELECTRONIC DEVICE FOR GENERATING AN INSTRUCTION IN AN AUGMENTED REALITY ENVIRONMENT - A method for generating an instruction in an augmented reality environment includes capturing a series of reality images, each of which contains a portion of a hand, and a scene that includes at least one object-of-interest, recognizing the object-of-interest, generating an icon associated with an entry of object-of-interest data that is associated with the object-of-interest thus recognized, generating a series of augmented reality images by overlaying the icon onto the series of reality images, displaying the augmented reality images, recognizing a relationship between the portion of the hand and the icon, and generating an input instruction with reference to the relationship. | 01-30-2014 |
20140028717 | RADIATION IMAGE DISPLAYING APPARATUS AND RADIATION IMAGE DISPLAYING METHOD - An abnormal shadow detection section detects an abnormal shadow from each of two radiation images for displaying a stereoscopic image, obtained by imaging a subject from two different directions. In the case that a plurality of abnormal shadows are detected, a display control section determines all available combinations of abnormal shadows as abnormal shadows corresponding to each other between the two radiation images, and sequentially applies cursors to the abnormal shadows of the determined combinations in the two radiation images, to sequentially display, on the display section, stereoscopic images in which the abnormal shadows are marked with the cursors, based on the two cursor-marked radiation images. | 01-30-2014 |
20140028718 | System and Method for Displaying Object Location in Augmented Reality - A system and a method are provided for displaying location information on a mobile device. The location information can include direction, distance, positional coordinates, etc. The mobile device's display displays an image captured using the mobile device's camera. A selection input is received to identify an object in the image. A facing direction of the mobile device is detected using the mobile device's magnetometer. The mobile device determines a bearing to the object relative to the mobile device's facing direction. The mobile device then determines a distance between the mobile device and the object. The obtained or computed location data is overlaid on the image, thereby augmenting the image. The location data can include at least a direction indicator of the object and the distance between the mobile device and the object, whereby the direction indicator can be determined using the bearing. | 01-30-2014 |
20140035951 | VISUALLY PASSING DATA THROUGH VIDEO - A method and a system involve the insertion of digital data into a number of video frames of a video stream, such that the video frames contain both video content and the inserted digital data. The video, including the inserted digital data, is then visually conveyed to and received by an augmented reality device without the use of a network connection. In the augmented reality device, the digital data is detected, processed and used to provide computer-generated data and/or information. The computer-generated data and/or information is then presented on a display associated with the augmented reality device or otherwise reproduced through the augmented reality device, where the computer-generated data and/or information supplements the video content so as to enhance the viewing experience of the augmented reality device user. | 02-06-2014 |
20140035952 | INDIVIDUAL IDENTIFICATION CHARACTER DISPLAY SYSTEM, TERMINAL DEVICE, INDIVIDUAL IDENTIFICATION CHARACTER DISPLAY METHOD, AND COMPUTER PROGRAM - A terminal device retains a character image of its own terminal user and a character image of a second terminal user acquired from an AR database server (or second terminal devices) as a character definition in an AR control unit. The terminal device can acquire the position of the second terminal device and a direction in which a camera unit is oriented. The terminal device of a photographer determines whether the user of the second terminal device which is being searched for is present in the acquired direction by causing an image recognition unit to identify a face image of a specific user of the second terminal device. When the specific user of the second terminal device is present, the character image of the specific user of the second terminal device is combined in the vicinity of a face image region of the image obtained by the camera unit and is displayed. | 02-06-2014 |
20140043365 | METHOD AND APPARATUS FOR LAYOUT FOR AUGMENTED REALITY VIEW - An approach is provided for providing an interactive perspective-based point of interest layout in an augmented reality view. The layout platform determines at least one zoom level for rendering of one or more representations of one or more items in a perspective-based display based, at least in part, on at least one push interaction or at least one pull interaction in the perspective-based display. The layout platform causes, at least in part, a rendering of the one or more representations based, at least in part, on the at least one zoom level. | 02-13-2014 |
20140049558 | AUGMENTED REALITY OVERLAY FOR CONTROL DEVICES - Embodiments for providing instructional information for control devices are disclosed. In one example, a method on a see-through display device comprising a see-through display and an outward-facing image sensor includes acquiring an image of a scene viewable through the see-through display and detecting a control device in the scene. The method also includes retrieving information pertaining to a function of an interactive element of the control device and displaying an image on the see-through display augmenting an appearance of the interactive element of the control device with image data related to the function of the interactive element. | 02-20-2014 |
20140049559 | MIXED REALITY HOLOGRAPHIC OBJECT DEVELOPMENT - Systems and related methods for presenting a holographic object that self-adapts to a mixed reality environment are provided. In one example, a holographic object presentation program captures physical environment data from a destination physical environment and creates a model of the environment including physical objects having associated properties. The program identifies a holographic object for display on a display of a display device, the holographic object including one or more rules linking a detected environmental condition and/or properties of the physical objects with a display mode of the holographic object. The program applies the one or more rules to select the display mode for the holographic object based on the detected environmental condition and/or the properties of the physical objects. | 02-20-2014 |
20140049560 | APPARATUS AND METHOD FOR OBJECT POSITIONING - An entertainment device comprises an input, a marker detector and a failure boundary calculation processor. The input is operable to receive a captured image from a video camera. The marker detector is operable to detect a fiduciary marker within the captured image, and is also operable to estimate a distance and angle of the fiduciary marker. The failure boundary calculation processor is operable to calculate at least one of an additional distance and an additional angle from the currently estimated distance and angle of the fiduciary marker at which recognition of the fiduciary marker is assumed to fail. | 02-20-2014 |
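The failure-boundary computation in this abstract — how much additional distance or angle the marker can move from its currently estimated pose before recognition is assumed to fail — might be sketched as below. The maximum recognition distance and angle are hypothetical constants; the application does not state how the boundary is derived:

```python
def failure_margins(est_distance_m, est_angle_deg,
                    max_distance_m=2.0, max_angle_deg=60.0):
    """Additional distance and additional angle the fiduciary marker
    could move from its currently estimated pose before recognition is
    assumed to fail. The recognition limits here are illustrative
    assumptions, not values from the application."""
    extra_distance = max(0.0, max_distance_m - est_distance_m)
    extra_angle = max(0.0, max_angle_deg - abs(est_angle_deg))
    return extra_distance, extra_angle
```

A marker estimated at 1.5 m and 40 degrees would, under these assumed limits, have 0.5 m and 20 degrees of margin before recognition is expected to fail.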
20140055488 | AUGMENTED REALITY PERSONAL IDENTIFICATION - An identification module receives an identification signal that uniquely identifies an individual and captures an image of the individual. The identification module determines tag information associated with the individual using the received identification signal and displays, to a user, the tag information overlaid on the image of the individual. | 02-27-2014 |
20140055489 | RENDERING TOOL INFORMATION AS GRAPHIC OVERLAYS ON DISPLAYED IMAGES OF TOOLS - An operator telerobotically controls tools to perform a procedure on an object at a work site while viewing real-time images of the work site on a display. Tool information is provided in the operator's current gaze area on the display by rendering the tool information over the tool so as not to obscure objects being worked on at the time by the tool nor to require eyes of the user to refocus when looking at the tool information and the image of the tool on a stereo viewer. | 02-27-2014 |
20140055490 | SERVICE COVERAGE IDENTIFICATION USING AUGMENTED REALITY - Facilitating service coverage identification using augmented reality is contemplated. The service coverage information may relate to wireless signaling metrics collected for wireless access points. A reality view captured with a mobile device to reflect an area proximate the access points may be augmented with the service coverage information such that the resulting augmented reality view identifies variances or other parameters of the service coverage information relative to the mobile device. | 02-27-2014 |
20140055491 | INDICATING THE GEOGRAPHIC ORIGIN OF A DIGITALLY-MEDIATED COMMUNICATION - Technologies are described for indicating a geographic origin of a digitally-mediated communication relative to a location of a recipient by presenting the indication in an augmented reality scene. For example, an augmented reality scene can be presented to the recipient. The geographic origin of an incoming digital communication may be determined and a relative location of the origin with respect to the recipient's location may be computed. A format for presenting the relative location may be derived from the digital communication and the geographic origin. The augmented reality scene may be updated with the relative location based on the derived format. Techniques for integrating digital communications, location-based services, and augmented reality applications can enhance the recipient's experience by providing a perceptual solution to the loss of certain fundamental aspects of natural communication, such as the ability to instantly determine the geographic origin or relative location of an incoming digital communication. | 02-27-2014 |
20140055492 | Interactivity With A Mixed Reality - Methods of interacting with a mixed reality are presented. A mobile device captures an image of a real-world object where the image has content information that can be used to control a mixed reality object through an offered command set. The mixed reality object can be real, virtual, or a mixture of both real and virtual. | 02-27-2014 |
20140055493 | Interactivity With A Mixed Reality - Methods of interacting with a mixed reality are presented. A mobile device captures an image of a real-world object where the image has content information that can be used to control a mixed reality object through an offered command set. The mixed reality object can be real, virtual, or a mixture of both real and virtual. | 02-27-2014 |
20140063054 | AR GLASSES SPECIFIC CONTROL INTERFACE BASED ON A CONNECTED EXTERNAL DEVICE TYPE - This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content, wherein the eyepiece has a control interface based on a connected external device type. | 03-06-2014 |
20140063055 | AR GLASSES SPECIFIC USER INTERFACE AND CONTROL INTERFACE BASED ON A CONNECTED EXTERNAL DEVICE TYPE - This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content, wherein the eyepiece has a user interface and control interface based on a connected external device type. | 03-06-2014 |
20140063056 | APPARATUS, SYSTEM AND METHOD FOR VIRTUALLY FITTING WEARABLE ITEMS - Provided herein are systems, apparatuses, methods and computer program products for virtually and interactively fitting at least one wearable item on a user. | 03-06-2014 |
20140063057 | SYSTEM FOR GUIDING USERS IN CROWDSOURCED VIDEO SERVICES - An apparatus comprising at least one processor and at least one memory including computer program code may be configured to receive data corresponding to a first media content from a user captured by a first media capturing device. The apparatus may be configured to determine at least one media capturing parameter of the first media content to be changed. The apparatus may be configured to cause information regarding the media capturing parameter of the first media content to be changed to be transmitted to at least the first media capturing device. The apparatus may be configured to receive a second media content from a user captured by the first media capturing device, wherein the second media content differs from the first media content by at least the media capturing parameter to be changed. Corresponding methods and computer program products are also provided. | 03-06-2014 |
20140063058 | METHOD AND APPARATUS FOR TRANSITIONING FROM A PARTIAL MAP VIEW TO AN AUGMENTED REALITY VIEW - An approach is provided for providing a map view that complements an augmented reality view while a user navigates and interacts within a scene. A mapping platform determines a virtual floor surface of at least one presentation of a perspective-based display of location information. The mapping platform causes, at least in part, a rendering of a partial map view on the virtual floor surface, wherein the partial map view provides an alternate view of the location information. | 03-06-2014 |
20140063059 | INTERACTIVE AUGMENTED REALITY SYSTEM AND PORTABLE COMMUNICATION DEVICE AND INTERACTION METHOD THEREOF - An interactive augmented reality system includes an interactive object and a portable communication device. The communication device includes a control module, an image capture module and a communication module. The control module controls and operates the portable communication device and the interactive object. The image capture module is connected with the control module to capture an image of the interactive object. The communication module is connected with the control module to exchange information with the interactive object. The control module computes and rebuilds the image, and then overlays the image on a virtual scene generated by the control module. As a result, an augmented reality image is generated, and an action signal is transmitted to the interactive object by the portable communication device, such that the interactive object executes a preset action according to the action signal. The system attracts users, reduces the operation difficulty, and enhances the user experience. | 03-06-2014 |
20140063060 | AUGMENTED REALITY SURFACE SEGMENTATION - Methods, systems, computer-readable media, and apparatuses for providing intuitive, functional, and convenient ways of enabling a user of a head-mounted display unit or another augmented reality enabled device to interact with various user interfaces and other features provided by such a unit or device are presented. In some embodiments, a computing device, such as a head-mounted display unit, may receive camera input of a scene. Subsequently, the computing device may identify at least one reference object in the scene, for example, based on detecting one or more rectangles in the received camera input. The computing device then may receive input that defines a surface segment relative to the at least one reference object. Thereafter, the computing device may render the surface segment. | 03-06-2014 |
20140063061 | DETERMINING A POSITION OF AN ITEM IN A VIRTUAL AUGMENTED SPACE - A method for determining a position of an item in a virtual augmented space is provided. The method includes determining a first angle between a viewer and a first display device, determining a second angle between the first display device and a second display device, and determining a third angle, based on the first and second angles, that can be used to determine a relative viewing angle between the viewer and the second display device. | 03-06-2014 |
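One minimal reading of the angle chaining in this abstract is that the third angle is obtained by combining the first two. The abstract only says the third angle is "based on" the first and second angles, so the simple addition below is an assumption:

```python
def relative_viewing_angle(viewer_to_first_deg, first_to_second_deg):
    """Chain the viewer-to-first-display angle with the first-to-second
    display angle to estimate the viewer's angle to the second display,
    normalized to [0, 360). Plain addition is an assumed combination
    rule, not one stated in the application."""
    return (viewer_to_first_deg + first_to_second_deg) % 360.0
```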
20140063062 | METHOD AND APPARATUS FOR SELECTIVELY PRESENTING CONTENT - A machine-implemented method includes obtaining input data and generating output data. The status of at least one contextual factor is determined and compared with a standard. If the status meets the standard, a transformation is applied to the output data. The output data is then outputted to the viewer. Through design and/or selection of contextual factors, standards, and transformations, output data may be selectively outputted to viewers in a context-suitable fashion, e.g. on a head mounted display the viewer's central vision may be left unobstructed while the viewer walks, drives, etc. An apparatus includes at least one sensor that senses a contextual factor. A processor determines the status of the contextual factor, determines if the status meets a standard, generates output data, and applies a transformation to the output data if the status meets the standard. A display outputs the output data to the viewer. | 03-06-2014 |
20140063063 | Spatial Calibration System for Augmented Reality Display - Systems and methods are described to allow a user of a computing device to augment a captured image of a design space, such as a photograph or a video of an interior room, with an image of a design element, such as a photograph. The disclosure provides systems and methods that enable users of computing devices to capture images from the design space, calibrate the size of a virtual image of a design element to the captured image of the design space, overlay the calibrated virtual image onto the captured image, and adjust the virtual image. | 03-06-2014 |
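The size-calibration step described here — matching a virtual design element's scale to the captured photograph of the design space — can be sketched using a reference object of known real size visible in the photo. The function names, the paper-sheet example, and the rounding choice are illustrative assumptions:

```python
def pixels_per_cm(reference_px, reference_cm):
    """Calibration factor derived from a reference object of known real
    size visible in the captured image (e.g. a 21 cm wide sheet of
    paper measured as reference_px pixels in the photo)."""
    return reference_px / reference_cm

def scaled_overlay_size(item_w_cm, item_h_cm, ppcm):
    """Pixel dimensions at which a design element should be overlaid so
    that it appears true to scale in the captured image."""
    return round(item_w_cm * ppcm), round(item_h_cm * ppcm)
```

For instance, if a 21 cm reference spans 420 pixels, an 80 cm by 40 cm design element would be overlaid at 1600 by 800 pixels.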
20140063064 | INFORMATION PROVIDING METHOD AND INFORMATION PROVIDING VEHICLE THEREFOR - A method of providing information about a predetermined external vehicle on a transparent display of an information providing vehicle, the method including: acquiring status information of the external vehicle; determining a display mode for displaying an object corresponding to the external vehicle based on the acquired status information; and displaying the object corresponding to the external vehicle on the transparent display in the determined display mode, wherein the display mode may include an augmented reality mode displaying an image obtained by overlaying a virtual image on an actual image of the external vehicle that is observed through the transparent display, and a map mode displaying the object corresponding to the external vehicle after mapping the object to a map. | 03-06-2014 |
20140063065 | INFORMATION PROCESSING APPARATUS, SERVER APPARATUS, INFORMATION PROCESSING METHOD - Disclosed is an information processing apparatus including a virtual object management section. | 03-06-2014 |
20140071163 | AUGMENTED REALITY INFORMATION DETAIL - A holographic object presentation system and related methods for presenting a holographic object having a selective information detail level are provided. In one example, a holographic object presentation program may receive user behavior information and physical environment information. Using one or more of the user behavior information and the physical environment information, the program may adjust the selective information detail level of the holographic object to an adjusted information detail level. The program may then provide the holographic object at the adjusted information detail level to an augmented reality display program for display on a display device. | 03-13-2014 |
20140071164 | CONTROLLING AN AUGMENTED REALITY OBJECT - Techniques for controlling an augmented reality object are described in various implementations. In one example implementation, a method may include receiving an initialization image captured by an image capture device, the initialization image depicting a background and being free of foreground objects positioned between the background and the image capture device. The method may also include receiving a plurality of subsequent images captured by the image capture device over a period of time, the plurality of subsequent images depicting the background and a foreground object, the foreground object being positioned between the background and the image capture device. The method may also include comparing the initialization image to the plurality of subsequent images to determine positioning of the foreground object over the period of time. The method may also include controlling an augmented reality object based on the positioning of the foreground object over the period of time. | 03-13-2014 |
20140071165 | MIXED REALITY SIMULATION METHODS AND SYSTEMS - Mixed reality simulation in general, and more specifically mixed reality simulation devices and systems for training purposes, for example in the medical field, may be provided. For example, a mixed reality simulation method for rendering on a display a mixed reality scenario of a virtual environment adapted to a physical environment may comprise acquiring, with a sensor, a position of a physical environment object; identifying a mismatch between a physical environment surface and a virtual environment surface, the mismatch depending on the physical environment object position and a mixed reality scenario parameter; and computing a mapping displacement for a virtual environment surface based on the identified mismatch. | 03-13-2014 |
20140071166 | Switching Between a First Operational Mode and a Second Operational Mode Using a Natural Motion Gesture - A mobile device is operative to change from a first operational mode to a second or third operational mode based on a user's natural motion gesture. The first operational mode may include a voice input mode in which a user provides a voice input to the mobile device. After providing the voice input to the mobile device, the user then makes a natural motion gesture and a determination is made as to whether the natural motion gesture places the mobile device in the second or third operational mode. The second operational mode includes an augmented reality display mode in which the mobile device displays images recorded from a camera overlaid with computer-generated images corresponding to results output in response to the voice input. The third operational mode includes a reading display mode in which the mobile device displays, without augmented reality, results output in response to the voice input. | 03-13-2014 |
20140078174 | AUGMENTED REALITY CREATION AND CONSUMPTION - Architectures and techniques for augmenting content on an electronic device are described herein. In particular implementations, a user may use a portable device (e.g., a smart phone, tablet computer, etc.) to capture images of an environment, such as a room, outdoors, and so on. As the images of the environment are captured, the portable device may send information to a remote device (e.g., server) to determine whether augmented reality content is associated with a textured target in the environment (e.g., a surface or portion of a surface). When such a textured target is identified, the augmented reality content may be sent to the portable device. The augmented reality content may be displayed in an overlaid manner on the portable device as real-time images are displayed. | 03-20-2014 |
20140078175 | METHODS AND SYSTEMS FOR MAKING THE USE OF HEAD-MOUNTED DISPLAYS LESS OBVIOUS TO NON-USERS - Various arrangements are presented for positioning virtual objects displayed by a head-mounted display. A location of a person within a real-world scene may be determined. A virtual object may be displayed to a user such that the virtual object is superimposed over the face of the person. | 03-20-2014 |
20140078176 | APPARATUS AND METHOD OF PROVIDING USER INTERFACE ON HEAD MOUNTED DISPLAY AND HEAD MOUNTED DISPLAY THEREOF - An apparatus and method of providing a user interface (UI) on a head mounted display and the head mounted display (HMD) thereof are discussed. The apparatus includes a sensor unit configured to detect whether an object exists in the proximity of the HMD and, if the object is detected, sense a distance between the object and the HMD, and a processor configured to apply a physical User Interface (UI) mode if the detected object is within a predetermined distance from the HMD. The physical UI mode provides a display for a virtual object. The processor is configured to adjust a display distance of the virtual object, when the virtual object is a 3-dimensional (3D) object and the 3D object includes a touch recognition surface, in order for the touch recognition surface to coincide with a surface of the object, and display the virtual object based on the adjusted display distance. | 03-20-2014 |
20140085333 | AUGMENTED REALITY PRODUCT INSTRUCTIONS, TUTORIALS AND VISUALIZATIONS - In a system for augmented reality product instructions, tutorials and visualizations, a method may include receiving a request for information from a client device, the request including image data and a request type; converting, using at least one processor, the image data into a digital fingerprint; comparing the digital fingerprint to a plurality of stored fingerprints to identify an object in the image data; generating an augmented reality view of the identified object based on the request type; and transmitting the augmented reality view to the client device. | 03-27-2014 |
20140085334 | Transparent Texting - An electronic communication device's camera can continuously capture and present video images as a background within a text messaging session currently being displayed by the device. The camera can be a rear-facing camera on the device, so that the video images represent the views that the device's user would see if the device's display were transparent. The camera can continuously capture and present the video images as the background in the text messaging session, so that the device's user continuously can be aware of the environment beyond the device's display while still focusing on the text messages being communicated. The background within the text messaging session can continuously be a live and current video image of the view seen by the camera at any given moment. Consequently, the device's user is less likely to collide with or stumble over an object while participating in a text messaging session. | 03-27-2014 |
20140085335 | Network Visualization Through Augmented Reality and Modeling - A user equipment (UE) comprising a display, an input device configured to receive user input, a visual input configured to capture motion or stop photography as visual data, and a processor coupled to the display, input device, and visual input and configured to receive visual data from the visual input, overlay a model comprising network data onto the visual data to create a composite image, wherein the model is aligned to the visual data based on user input received from the input device, and transmit the composite image to the display. | 03-27-2014 |
20140092132 | SYSTEMS AND METHODS FOR 3D POSE ESTIMATION - The present system provides a tool to estimate the relative pose of a generic object with respect to a camera view-point by processing 2D images from a monocular camera in real-time. The capability of solving the pose estimation problem relies on the robust detection and matching in consecutive image frames of significant visual features belonging to an object of interest. To accomplish this, the system incorporates 3D modeling of the object of interest. In one embodiment, the shape of interest may be approximated by a parametric surface such as a cylinder, sphere, ellipsoid, or even complex non-parametric models. The system can restrain information retrieved at a 2D image level to estimate parameters about the pose. In operation, the accuracy of the 3D pose estimation of the object is a function of the degree of approximation of the selected model and the ability to select and track relevant features across consecutive image frames. | 04-03-2014 |
20140092133 | COMPUTER-READABLE MEDIUM, IMAGE PROCESSING DEVICE, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - An example system causes a computer to function as: a display reference information updating unit which updates at least any of a position and a posture of a virtual object in a virtual space based on information obtained from a real space; an image generating unit which generates an image of a virtual space including the virtual object; a display control unit which causes a display device to display an image so that the image of the virtual space is superimposed on the real space so as to be viewed by a user; and an update interrupting unit which interrupts an update of at least any of a position and a posture of the virtual object in the virtual space when predetermined conditions are met. | 04-03-2014 |
20140092134 | VISUAL GUIDANCE SYSTEM - A visual guidance system includes an image display to present an image overlaid on a windshield in front of a driver of a vehicle, a processor to output image information on a virtual line to display visual guidance to the image display, and a steering input detector to detect a steering input. The processor presents an attention attracting indication about an object outside of the vehicle in synchronization with the virtual line in such a manner that the virtual line extends from above the driver along a course of the vehicle seen within the windshield, and a pointing end of the virtual line is overlaid on a road surface on the course seen within the windshield. The processor outputs image information for changing the attention attracting indication to be less conspicuous than the virtual line in accordance with a steering input signal given by the steering input detector. | 04-03-2014 |
20140092135 | SYSTEM AND METHOD FOR DYNAMICALLY DISPLAYING MULTIPLE VIRTUAL AND AUGMENTED REALITY SCENES ON A SINGLE DISPLAY - One variation of a method for dynamically displaying multiple virtual and augmented reality scenes on a single display includes determining a set of global transform parameters from a combination of user-defined inputs, user-measured inputs, and device orientation and position derived from sensor inputs; calculating a projection from a configurable function of the global transform parameters, context provided by the user and context specific to a virtual and augmented reality scene; rendering a virtual and augmented reality scene with the calculated projection on a subframe of the display; and repeating the previous two steps to render additional virtual and augmented reality scenes. | 04-03-2014 |
20140098125 | FORMATTING OF ONE OR MORE PERSISTENT AUGMENTATIONS IN AN AUGMENTED VIEW IN RESPONSE TO MULTIPLE INPUT FACTORS - Computationally implemented methods and systems include presenting a first augmented view of a first scene from a real environment, the first augmented view to be presented including one or more persistent augmentations in a first one or more formats, the inclusion of the one or more persistent augmentations in the first augmented view being independent of presence of one or more visual cues in the actual view of the first scene from the real environment, obtaining an actual view of a second scene from the real environment that is different from the actual view of the first scene, and presenting a second augmented view of the second scene from the real environment, the second augmented view to be presented including the one or more persistent augmentations in a second one or more formats that is based, at least in part, on multiple input factors. | 04-10-2014 |
20140098126 | FORMATTING OF ONE OR MORE PERSISTENT AUGMENTATIONS IN AN AUGMENTED VIEW IN RESPONSE TO MULTIPLE INPUT FACTORS - Computationally implemented methods and systems include presenting a first augmented view of a first scene from a real environment, the first augmented view to be presented including one or more persistent augmentations in a first one or more formats, the inclusion of the one or more persistent augmentations in the first augmented view being independent of presence of one or more visual cues in the actual view of the first scene from the real environment, obtaining an actual view of a second scene from the real environment that is different from the actual view of the first scene, and presenting a second augmented view of the second scene from the real environment, the second augmented view to be presented including the one or more persistent augmentations in a second one or more formats that is based, at least in part, on multiple input factors. | 04-10-2014 |
20140098127 | PRESENTING AN AUGMENTED VIEW IN RESPONSE TO ACQUISITION OF DATA INFERRING USER ACTIVITY - Computationally implemented methods and systems include obtaining visual data of an actual view of a scene from a real environment, determining whether activity-inferring data that infers at least initial occurrence of one or more user activities associated with the scene from the real environment have at least been acquired, and presenting, in response at least in part to determining that the activity-inferring data have at least been acquired, an augmented view of the scene from the real environment, the augmented view including one or more augmentations that have been included into the augmented view based, at least in part, on the activity-inferring data. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 04-10-2014 |
20140098128 | PRESENTING AN AUGMENTED VIEW IN RESPONSE TO ACQUISITION OF DATA INFERRING USER ACTIVITY - Computationally implemented methods and systems include obtaining visual data of an actual view of a scene from a real environment, determining whether activity-inferring data that infers at least initial occurrence of one or more user activities associated with the scene from the real environment have at least been acquired, and presenting, in response at least in part to determining that the activity-inferring data have at least been acquired, an augmented view of the scene from the real environment, the augmented view including one or more augmentations that have been included into the augmented view based, at least in part, on the activity-inferring data. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 04-10-2014 |
20140098129 | SYSTEMS AND METHODS FOR SHARING AUGMENTATION DATA - Computationally implemented methods and systems include acquiring one or more first augmentations for inclusion in a first augmented view of a first scene, displaying the first augmented view including the one or more first augmentations, and transmitting augmentation data associated with the one or more first augmentations to facilitate remote display of one or more second augmentations in a second augmented view of a second scene, the second scene having one or more visual items that are also included in the first scene. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 04-10-2014 |
20140098130 | SYSTEMS AND METHODS FOR SHARING AUGMENTATION DATA - Computationally implemented methods and systems include acquiring one or more first augmentations for inclusion in a first augmented view of a first scene, displaying the first augmented view including the one or more first augmentations, and transmitting augmentation data associated with the one or more first augmentations to facilitate remote display of one or more second augmentations in a second augmented view of a second scene, the second scene having one or more visual items that are also included in the first scene. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 04-10-2014 |
20140098131 | SYSTEMS AND METHODS FOR OBTAINING AND USING AUGMENTATION DATA AND FOR SHARING USAGE DATA - Computationally implemented methods and systems include receiving augmentation data associated with one or more first augmentations, the one or more first augmentations having been included in a first augmented view of a first actual scene that was remotely displayed at a remote augmented reality (AR) device, displaying one or more second augmentations in a second augmented view of a second actual scene, the displaying of the one or more second augmentations being in response, at least in part, to the augmentation data, and transmitting to the remote AR device usage data that indicates usage information related at least to usage or non-usage of the received augmentation data. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 04-10-2014 |
20140098132 | SYSTEMS AND METHODS FOR OBTAINING AND USING AUGMENTATION DATA AND FOR SHARING USAGE DATA - Computationally implemented methods and systems include receiving augmentation data associated with one or more first augmentations, the one or more first augmentations having been included in a first augmented view of a first actual scene that was remotely displayed at a remote augmented reality (AR) device, displaying one or more second augmentations in a second augmented view of a second actual scene, the displaying of the one or more second augmentations being in response, at least in part, to the augmentation data, and transmitting to the remote AR device usage data that indicates usage information related at least to usage or non-usage of the received augmentation data. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 04-10-2014 |
20140098133 | CORRELATING USER REACTION WITH AT LEAST AN ASPECT ASSOCIATED WITH AN AUGMENTATION OF AN AUGMENTED VIEW - Computationally implemented methods and systems include detecting one or more user reactions of a user in response to a display to the user of an augmented view of an actual scene from a real environment, the augmented view that was displayed including one or more augmentations, and correlating the detected one or more user reactions with at least one or more aspects associated with the one or more augmentations that were included in the augmented view that was presented. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 04-10-2014 |
20140098134 | CORRELATING USER REACTION WITH AT LEAST AN ASPECT ASSOCIATED WITH AN AUGMENTATION OF AN AUGMENTED VIEW - Computationally implemented methods and systems include detecting one or more user reactions of a user in response to a display to the user of an augmented view of an actual scene from a real environment, the augmented view that was displayed including one or more augmentations, and correlating the detected one or more user reactions with at least one or more aspects associated with the one or more augmentations that were included in the augmented view that was presented. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 04-10-2014 |
20140098135 | CORRELATING USER REACTIONS WITH AUGMENTATIONS DISPLAYED THROUGH AUGMENTED VIEWS - Computationally implemented methods and systems include receiving first data that at least identifies one or more augmentations that were remotely displayed in one or more remotely displayed augmented views of one or more actual scenes, receiving second data indicating one or more user reactions of one or more users in response to the remote display of the one or more remotely displayed augmented views; and correlating the one or more user reactions with the one or more augmentations that were remotely displayed through the one or more remotely displayed augmented views. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 04-10-2014 |
20140098136 | DISPLAYING IN RESPONSE TO DETECTING ONE OR MORE USER BEHAVIORS ONE OR MORE SECOND AUGMENTATIONS THAT ARE BASED ON ONE OR MORE REGISTERED FIRST AUGMENTATIONS - Computationally implemented methods and systems include registering one or more first augmentations that were shown to a user through a first augmented view of a first actual scene, the one or more first augmentations having been shown at least at the end of a segment of time, detecting, following the showing of the one or more first augmentations up to the end of the segment of time, one or more user behaviors of the user that, when detected as occurring, infer the user's interest in seeing the one or more first augmentations; and displaying, in response at least in part to said detecting, one or more second augmentations through a second augmented view of the first actual scene or of a second actual scene, the one or more second augmentations to be displayed being based, at least in part, on the registering of the one or more first augmentations. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 04-10-2014 |
20140098137 | DISPLAYING IN RESPONSE TO DETECTING ONE OR MORE USER BEHAVIORS ONE OR MORE SECOND AUGMENTATIONS THAT ARE BASED ON ONE OR MORE REGISTERED FIRST AUGMENTATIONS - Computationally implemented methods and systems include registering one or more first augmentations that were shown to a user through a first augmented view of a first actual scene, the one or more first augmentations having been shown at least at the end of a segment of time, detecting, following the showing of the one or more first augmentations up to the end of the segment of time, one or more user behaviors of the user that, when detected as occurring, infer the user's interest in seeing the one or more first augmentations; and displaying, in response at least in part to said detecting, one or more second augmentations through a second augmented view of the first actual scene or of a second actual scene, the one or more second augmentations to be displayed being based, at least in part, on the registering of the one or more first augmentations. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 04-10-2014 |
20140098138 | METHOD AND SYSTEM FOR AUGMENTED REALITY BASED SMART CLASSROOM ENVIRONMENT - A method and system provide an augmented reality based environment using a portable electronic device. The method includes capturing an image of users, recognizing the users in the image, and fetching information associated with the recognized users. Further, the method includes determining location of the users in the image, mapping the fetched information associated with the users with the determined location of the users and communicating with the users based on the mapped information. | 04-10-2014 |
20140104315 | SYSTEM AND METHOD FOR CREATING AND DISPLAYING MAP PROJECTIONS RELATED TO REAL-TIME IMAGES - There is provided a method and system for creating and displaying a map projection of a device's real-time viewing area to depict virtual objects, the virtual objects providing a reflected view of real-time objects displayed within the device's viewing area, the method comprising: displaying a real-time image of the device's viewing area taken from a geographical location on a display; retrieving the map projection for revealing the reflected view as an elevated view of a ground surface about the device's current geographical location and in accordance with the device's viewing area; superimposing the map projection on the display and overlaid in an upper portion of the real-time image; and defining one or more markers configured to show a relationship between the map projection and the real-time image, each marker overlaid on the display and configured to connect between the virtual object in the map projection and the corresponding real-time object on the real-time image. | 04-17-2014 |
20140104316 | AUGMENTED REALITY COMPUTING DEVICE, APPARATUS AND SYSTEM - Embodiments of an apparatus and system are described for an augmented reality computing device. Some embodiments may comprise an enclosure comprising a display portion and a component portion, the display portion arranged to support a transparent display and the component portion arranged to support a processor and an augmented reality module operative on the processor to display one or more graphical user interface elements on the transparent display and to arrange the one or more graphical user interface elements based on one or more elements in a real world environment in proximity to the computing device. Other embodiments are described and claimed. | 04-17-2014 |
20140111542 | Platform for recognising text using mobile devices with a built-in device video camera and automatically retrieving associated content based on the recognised text - A platform for recognising text using mobile devices with a built-in device video camera and automatically retrieving associated content based on the recognised text. | 04-24-2014 |
20140111543 | METHOD FOR PROVIDING CONTENTS AND A DIGITAL DEVICE FOR THE SAME - A method for providing contents and a digital device for the same in which image data are displayed and navigated together with augmented reality information around the point where the corresponding image data are recorded. | 04-24-2014 |
20140111544 | Augmented Reality Control Systems - An augmented reality control system in which simple geometric forms of real world objects, generated and utilized at the enterprise level, are layered onto or based on real world objects. Digital data is aligned to real world objects via surrogates of simplified geometric forms. | 04-24-2014 |
20140111545 | Caching Support for Visual Search and Augmented Reality in Mobile Networks - A visual search and augmented reality are performed in a communication system that includes an Internet network attached to a mobile network. Data related to objects that are geographically related to at least one intermediate node is stored in at least one intermediate node in the mobile network. This involves transmitting a request for metadata from a user equipment in the mobile network towards the Internet network via the at least one intermediate node. The request comprises object recognition data related to an object geographically related to the at least one intermediate node. The request is de-tunnelled and intercepted in an intermediate node of the at least one intermediate node. | 04-24-2014 |
20140111546 | MIXED REALITY PRESENTATION SYSTEM - An image composition unit outputs a composition image of a physical space and virtual space to a display unit. The image composition unit calculates, as difference information, a half of the difference between an imaging time of the physical space and a generation completion predicted time of the virtual space. The difference information and acquired position and orientation information are transmitted to an image processing apparatus. A line-of-sight position prediction unit updates previous difference information using the received difference information, calculates, as the generation completion predicted time, a time ahead of a receiving time by the updated difference information, and predicts the position and orientation of a viewpoint at the calculated generation completion predicted time using the received position and orientation information. The virtual space based on the predicted position and orientation, and the generation completion predicted time are transmitted to a VHMD. | 04-24-2014 |
20140111547 | SYNCHRONIZED, INTERACTIVE AUGMENTED REALITY DISPLAYS FOR MULTIFUNCTION DEVICES - A device can receive live video of a real-world, physical environment on a touch sensitive surface. One or more objects can be identified in the live video. An information layer can be generated related to the objects. In some implementations, the information layer can include annotations made by a user through the touch sensitive surface. The information layer and live video can be combined in a display of the device. Data can be received from one or more onboard sensors indicating that the device is in motion. The sensor data can be used to synchronize the live video and the information layer as the perspective of video camera view changes due to the motion. The live video and information layer can be shared with other devices over a communication link. | 04-24-2014 |
20140118397 | PLANAR SURFACE DETECTION - A planar surface within a physical environment is detected enabling presentation of a graphical user interface overlaying the planar surface. Detection of planar surfaces may be performed, in one example, by obtaining a collection of three-dimensional surface points of a physical environment imaged via an optical sensor subsystem. A plurality of polygon sets of points are sampled within the collection. Each polygon set of points includes three or more localized points of the collection that defines a polygon. Each polygon is classified into one or more groups of polygons having a shared planar characteristic with each other polygon of that group. One or more planar surfaces within the collection are identified such that each planar surface is at least partially defined by a group of polygons containing at least a threshold number of polygons. | 05-01-2014 |
20140118398 | APPARATUS AND METHOD FOR AUGMENTED REALITY - A portable electronic device comprises a video camera for capturing a sequence of video images and an image processor operable to compress a first region of a current video image to a first extent and a second region of the current video image to a second, greater, extent to generate a processed current video image. The device includes a network communications interface operable to send processed video images to a server, and to receive control data from the server. The image processor is operable to augment the current video image with one or more computer graphic elements. Control data received from the server comprises image region information indicating a region of a video image estimated to comprise a predetermined marker and optionally augmentation instructions. Furthermore, the image processor is operable to define the first region of the current video image responsive to the image region information from the server. | 05-01-2014 |
20140125698 | MIXED-REALITY ARENA - A computing system comprises a see-through display device, a logic subsystem, and a storage subsystem storing instructions. When executed by the logic subsystem, the instructions display on the see-through display device a virtual arena, a user-controlled avatar, and an opponent avatar. The virtual arena appears to be integrated within a physical space when the physical space is viewed through the see-through display device. In response to receiving a user input, the instructions may also display on the see-through display device an updated user-controlled avatar. | 05-08-2014 |
20140125699 | RENDERING A DIGITAL ELEMENT - Rendering a digital element is disclosed. An indication that a device is within a region associated with the digital element is received. It is determined that the digital element is to be rendered. A representation of the digital element is generated in a rendered view of the region. The digital element is provided upon receiving an indication that the digital element has been selected. | 05-08-2014 |
20140125700 | USING A PLURALITY OF SENSORS FOR MAPPING AND LOCALIZATION - Systems and methods for performing localization and mapping with a mobile device are disclosed. In one embodiment, a method for performing localization and mapping with a mobile device includes identifying geometric constraints associated with a current area at which the mobile device is located, obtaining at least one image of the current area captured by at least a first camera of the mobile device, obtaining data associated with the current area via at least one of a second camera of the mobile device or a sensor of the mobile device, and performing localization and mapping for the current area by applying the geometric constraints and the data associated with the current area to the at least one image. | 05-08-2014 |
20140125701 | COMPUTER-READABLE MEDIUM, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD - In an example system, a computer is caused to function as: a feature detection unit which detects a feature arranged in a real space; an image generation unit which generates an image of a virtual space including a virtual object arranged based on the feature; a display control unit which causes a display apparatus to display an image in such a manner that a user perceives the image of the virtual space superimposed on the real space; a processing specification unit which specifies processing that can be executed in relation to the virtual space, based on the feature; and a menu output unit which outputs a menu for a user to instruct the processing specified by the processing specification unit, in such a manner that the menu can be operated by the user. | 05-08-2014 |
20140125702 | SYSTEM AND METHOD FOR GENERATING AN IMMERSIVE VIRTUAL ENVIRONMENT USING REAL-TIME AUGMENTATION OF GEO-LOCATION INFORMATION - A system and method for generating an immersive virtual environment using real-time augmentation of geo-location information. | 05-08-2014 |
20140125703 | Systems and Methods for Generating and Presenting Augmented Video Content - Computerized systems and methods are provided for generating and providing augmented video content to viewers. In one implementation, a media player executed by a user device obtains playlist data identifying underlying video content and elements of overlay content. The media player may generate augmented video content by merging an element of the overlay content into the underlying video content at a temporal position within the underlying video content that is relevant to the overlay content element, and further, may present the augmented video content to a viewer. The media player may detect a triggering event during the presentation of the augmented video content, and may modify the augmented video content in response to the triggering event. | 05-08-2014 |
20140125704 | SYSTEM AND METHOD OF VISUAL LAYERING - A camera identifies a physical object positioned in a workspace. A display displays first digital information into the workspace. A layering module treats the physical object as a first layer in the workspace and treats the first digital information as a second layer in the workspace. A controller controls the visual adjacency of the first and second layers via display of the first digital information. | 05-08-2014 |
20140132628 | REAL WORLD ACOUSTIC AND LIGHTING MODELING FOR IMPROVED IMMERSION IN VIRTUAL REALITY AND AUGMENTED REALITY ENVIRONMENTS - Systems and methods for modeling acoustics and lighting to provide improved immersion in a virtual reality and/or augmented reality environment are provided. In one aspect, systems and methods are provided to promote “improved augmented reality” which increases the realism and/or presence of virtual objects in the user's real environment. In some embodiments, changes in the user's actual room lighting are modeled in the virtual world to have a similar effect. In other embodiments, systems and methods are provided to promote immersion of a user in a virtual environment by extending the virtual world into the user's real world room. In this embodiment, lighting and/or sound from the virtual world is used to simulate the same or similar properties in the user's actual, or real world, environment, thereby improving virtual reality. | 05-15-2014 |
20140132629 | MODIFYING VIRTUAL OBJECT DISPLAY PROPERTIES - Various arrangements for organizing virtual objects within an augmented reality display are presented. A display may be provided and configured to present a virtual field-of-view having multiple virtual objects superimposed on a real-world scene. Priorities may be assigned to multiple regions of the virtual field-of-view based on real-world objects present within the real-world scene. A priority of a region of the multiple regions may be based on one or more real-world objects identified in the region. The multiple virtual objects may be displayed within the virtual field-of-view arranged based on the prioritized multiple regions. | 05-15-2014 |
20140132630 | APPARATUS AND METHOD FOR PROVIDING SOCIAL NETWORK SERVICE USING AUGMENTED REALITY - A system for providing a social network service (SNS) by utilizing augmented reality. The system includes a first device configured to receive images of a first user, to create a reference image based on the received images, and to transmit the reference image to a second device; and the second device is configured to receive the reference image from the first device, to generate a virtual image based on the reference image and the current context information of a second user, and to display the virtual image at predetermined locations. | 05-15-2014 |
20140132631 | HEAD MOUNTED DISPLAY, AND IMAGE DISPLAYING METHOD IN HEAD MOUNTED DISPLAY - Disclosed herein is a head mounted display including: (A) an eyeglasses frame-like frame to be mounted to an observer's head; (B) an image display device; (C) an image sensing device mounted to the frame; and (D) a correction section, wherein the image display device includes (B-1) an image generating device, and (B-2) a see-through type light guide section which is mounted to the image generating device, on which beams emitted from the image generating device are incident, through which the beams are guided, and from which the beams are emitted toward an observer's pupil. | 05-15-2014 |
20140132632 | Interactivity With A Mixed Reality - Methods of interacting with a mixed reality are presented. A mobile device captures an image of a real-world object where the image has content information that can be used to control a mixed reality object through an offered command set. The mixed reality object can be real, virtual, or a mixture of both real and virtual. | 05-15-2014 |
20140139551 | AUGMENTED REALITY HELP - A system and related methods for an augmented reality help system in a head-mounted display device are provided. In one example, the head-mounted display device includes a plurality of sensors and a display system for presenting holographic objects. An augmented reality help program is configured to receive one or more user biometric parameters from the plurality of sensors. Based on the user biometric parameters, the program determines that the user is experiencing a stress response, and presents help content to the user via the head-mounted display device. | 05-22-2014 |
20140139552 | OBJECT DISPLAY DEVICE, OBJECT DISPLAY METHOD, AND OBJECT DISPLAY PROGRAM - An object display device includes an image capturing unit for acquiring an image in real space, an image feature extraction unit for extracting a predetermined feature about an image either in a plurality of feature regions detected from the image in real space or in the entire image in real space, an image processing unit for performing correction processing on an image of an object based on the predetermined feature about the image, and a display unit for displaying an overlaid image in which the image of the object subjected to correction processing is overlaid on the image in real space. With this configuration, the feature of the image in real space is appropriately reflected in the image of the object in the overlaid image. | 05-22-2014 |
20140139553 | VIRTUAL IMAGE DISPLAY APPARATUS - A trapezoidal correction processing portion performs trapezoidal correction (distortion correction) on video image areas each of which is divided into a plurality of video image areas separately in accordance with the divided video image areas. In this case, each of the video image areas is divided into a plurality of areas. Further, a pair of right and left virtual image formation sections perform trapezoidal correction in a mirror symmetric manner, and the center position of the wearer's eye in each of the virtual image formation sections is so adjusted that the center position coincides with a distortion-free image corrected by using a small amount of correction. | 05-22-2014 |
20140146082 | AUGMENTED REALITY INFORMATION SYSTEM - In various example embodiments, a system and method for providing information in an augmented reality display are provided. In example embodiments, a continuous stream of image data captured by a client device is received. An object within the continuous stream of image data is identified. Based on an identification of the object from the continuous stream of image data, a search for information related to the object is performed. A result is determined by filtering the information related to the object. The result is formatted to be displayed over a real-time image of the object on the client device. | 05-29-2014 |
20140146083 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - An image processing apparatus, which determines, for a combined image obtained by combining pixels of a given first image and pixels of an unknown second image either translucently or non-translucently using an unknown coefficient indicating a transparency, whether each of pixels included in the combined image is a translucently combined pixel, is provided. The image processing apparatus calculates, from pixel values of the combined image and the first image of respective pixels in a predetermined area including one pixel, pixel values of an image corresponding to the second image, calculates a total of differences between the calculated pixel values, identifies a coefficient used to obtain the combined image from the total of the difference, and determines that the one pixel is a translucently combined pixel when a value of the identified coefficient is larger than a predetermined value. | 05-29-2014 |
20140146084 | AUGMENTATION OF ELEMENTS IN DATA CONTENT - A system and method are disclosed for processing data content. Received data content comprises a scene and includes one or more recognized objects. The recognized object(s) include various characteristics, and can be detected and tracked during content capture. The data content can then be stored, and incorporated with metadata associated with the recognized object and/or one or more other elements in the data content. User inputs can be enabled in real-time, or post-capture, to augment one or more of the elements in the stored data content, including one or more characteristics of the recognized object. The data content can then be augmented to introduce one or more augmented elements, corresponding to respective elements of the data content, into the data content based on the user inputs. | 05-29-2014 |
20140152696 | GLASS TYPE MOBILE TERMINAL - A glass type mobile terminal including a transparent screen; a frame configured to secure the transparent screen in front of a user's eyes wearing the glass type mobile terminal; a camera mounted to the frame and configured to photograph an image in front of the user's eyes; a memory; an image recognition unit configured to extract information from the image photographed by the camera; and a controller configured to compare the extracted information with related information stored in the memory, and display the related information on the transparent screen along with the captured image. | 06-05-2014 |
20140152697 | METHOD AND APPARATUS FOR PROVIDING AUGMENTED REALITY - A method and apparatus for providing augmented reality are provided and include a controller that is configured to match a head up display area of a windshield and an input image and determine an area in which an information amount is a minimum in the matched image as a position of a display window to display virtual information. In addition, the controller is configured to output virtual information at the determined position. | 06-05-2014 |
20140152698 | METHOD FOR OPERATING AUGMENTED REALITY CONTENTS AND DEVICE AND SYSTEM FOR SUPPORTING THE SAME - Disclosed are methods of operating augmented reality (AR) contents, and a device and a system supporting the same. In one method, a real world image is captured using a camera in a portable device. A virtual space corresponding to the real world image is built by partitioning image elements included in the virtual space into plural background objects and displaying the background objects. Augmented reality contents are generated by mapping one or more user contents onto the background objects. In other embodiments, location information of the portable device is used to obtain AR contents for enhancing captured images. | 06-05-2014 |
20140160156 | DYNAMIC AUGMENTED REALITY MEDIA CREATION - According to one aspect of the present disclosure a system and technique for dynamic augmented reality media creation is disclosed. The system includes a processor and an augmentation module executable by the processor to: receive reality data; analyze the reality data; identify augmentation data based on the analysis of the reality data; and generate augmented reality content. The system also includes a capture module executable by the processor to determine if the reality data corresponds to predetermined capture event criteria and, responsive to determining that the reality data corresponds to the predetermined capture event criteria, capture the augmented reality content. | 06-12-2014 |
20140160157 | PEOPLE-TRIGGERED HOLOGRAPHIC REMINDERS - Methods for generating and displaying people-triggered holographic reminders are described. In some embodiments, a head-mounted display device (HMD) generates and displays an augmented reality environment to an end user of the HMD in which reminders associated with a particular person may be displayed if the particular person is within a field of view of the HMD or if the particular person is within a particular distance of the HMD. The particular person may be identified individually or identified as belonging to a particular group (e.g., a member of a group with a particular job title such as programmer or administrator). In some cases, a completion of a reminder may be automatically detected by applying speech recognition techniques (e.g., to identify key words, phrases, or names) to captured audio of a conversation occurring between the end user and the particular person. | 06-12-2014 |
20140160158 | DYNAMIC AUGMENTED REALITY MEDIA CREATION - According to one aspect of the present disclosure, a method and technique for dynamic augmented reality media creation is disclosed. The method includes: receiving reality data; analyzing the reality data; identifying augmentation data based on the analysis of the reality data; generating augmented reality content; determining if the reality data corresponds to predetermined capture event criteria; and responsive to determining that the reality data corresponds to the predetermined capture event criteria, capturing the augmented reality content. | 06-12-2014 |
20140160159 | METHOD AND ARRANGEMENT IN AN ELECTRONIC DEVICE - An object of the present invention is to provide a way of displaying content of an obscured area of a view for a user. | 06-12-2014 |
20140160160 | Image Mapping to Provide Visual Geographic Path - Provided is a computer system and method for mapping a visual path. The method includes receiving one or more images included in a predefined area; receiving one or more parameters associated with the image; and integrating the images and parameters into a map of the predefined area to enable mapping the visual path through the predefined area in response to one or more input path parameters. | 06-12-2014 |
20140160161 | AUGMENTED REALITY APPLICATION - An apparatus and computerized method include providing a memory, a display, an image source, and a processor communicably coupled to the memory, the display and the image source. The processor receives an image from the image source, detects at least one target within the image, retrieves an electronic content associated with the target, creates an augmented image by combining the image with the electronic content associated with the target and displays the augmented image on the display. | 06-12-2014 |
20140160162 | SURFACE PROJECTION DEVICE FOR AUGMENTED REALITY - Augmented reality (AR) is the process of overlaying or projecting computer generated images over a user's real world view of the physical world. The present invention allows for gameplay and/or training to contain augmented special effects. It is used to create surface patterns which are incorporated into augmented reality systems. It also allows for gesture control of AR elements during use. | 06-12-2014 |
20140160163 | Display Method And Display Device - A display device includes a first display unit, a second display unit, a first image acquisition unit, a second image acquisition unit, an image processing unit, and an image output unit. A display method includes acquiring a first image and a second image by the first image acquisition unit and the second image acquisition unit, respectively, both the first image and the second image including a target image; determining display information in accordance with the target image; determining a first display position in the first image and a second display position in the second image in accordance with the target image; and displaying a first display image corresponding to the display information at the first display position and a second display image corresponding to the display information at the second display position. | 06-12-2014 |
20140160164 | IMAGE DISPLAY SYSTEM, IMAGE DISPLAY APPARATUS, AND CONTROL METHOD THEREOF - Upon receiving a communication switching instruction from a first wireless access point used for communication with an image processing apparatus, an image display apparatus disconnects communication with the first wireless access point. Simultaneously, the image display apparatus transmits, to a second wireless access point, a link request to establish communication with the second wireless access point of a new communication destination included in the switching instruction. The image display apparatus displays, on a display unit, a captured image continuously acquired from an image capturing unit until the communication destination switching from the first wireless access point to the second wireless access point finishes. | 06-12-2014 |
20140160165 | AUGMENTED REALITY SYSTEM USING MOVING CEILING TRANSPARENT DISPLAY FOR SHIP AND METHOD FOR ENABLING SAME - In an augmented reality system using a moving ceiling transparent display for a ship, according to the present invention, the moving ceiling transparent display is movably installed on the ceiling of a steering house of the ship, and the system comprises: a recognition portion for recognizing the location of the ship and the direction of the head and the pupils of a sailor; a reception portion for receiving external image data from an external landscape database with respect to an external landscape, in accordance with the location of the ship and the direction of the head and the pupils of the sailor; a matching portion for adjusting a match between the external image data, which is received by the receiving portion, and an actual external image through the window; and an output portion for outputting through the transparent display information on the external image data, which is adjusted through the matching portion, on the location of the actual external image. | 06-12-2014 |
20140168260 | WAVEGUIDE SPACERS WITHIN AN NED DEVICE - A system is disclosed for maintaining the spacing between waveguides in an optical element of a near eye display. Spacing is maintained with spacer elements mounted between adjacent waveguides in the optical element. | 06-19-2014 |
20140168261 | DIRECT INTERACTION SYSTEM MIXED REALITY ENVIRONMENTS - A system and method are disclosed for interacting with virtual objects in a virtual environment using an accessory such as a hand held object. The virtual object may be viewed using a display device. The display device and hand held object may cooperate to determine a scene map of the virtual environment, the display device and hand held object being registered in the scene map. | 06-19-2014 |
20140168262 | User Interface for Augmented Reality Enabled Devices - Method and apparatus for displaying augmented reality contents are disclosed. The method may include controlling a camera to scan an environment in view of a user, identifying a set of surfaces in the environment for displaying user interface windows according to characteristics of the environment, prioritizing a set of augmented reality contents for display with respect to the set of surfaces in the environment, and displaying the set of augmented reality contents on the set of surfaces in a display. Characteristics of the environment comprise at least aspect ratio of the set of surfaces with respect to the set of augmented reality contents to be displayed, and/or background color of the set of surfaces with respect to the set of augmented reality contents to be displayed. | 06-19-2014 |
20140168263 | CONSUMER ELECTRONICS WITH AN INVISIBLE APPEARANCE - A method, electronic device and system for displaying background images on an electronic device, wherein the electronic device includes a face that has at least one edge and a display visible in the face. The display extends to at least one edge of the face. Furthermore, a processor is coupled to the display and a photosensor is coupled to the processor. The photosensor is configured to capture background images of a background obscured behind the device when viewing the device face. The processor is configured to composite the background image with a second image. | 06-19-2014 |
20140168264 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR REAL-TIME ALIGNMENT OF AN AUGMENTED REALITY DEVICE - A system including a structure attachable to a surface in a real world environment, the structure establishing a known location and orientation of the structure, a docking element, as part of the structure, to secure an augmented reality device in a stationary position for alignment of the augmented reality device with the real world environment and with a parallel virtual environment, and a processor operable to perform the alignment by resetting the inertial navigation system of the augmented reality device to the known location when docked in the docking element and aligning the location and orientation of the virtual representation of the augmented reality device in the parallel virtual environment so that the parallel virtual environment in the augmented reality device overlaps the real world environment. A method and computer software product are also disclosed. | 06-19-2014 |
20140168265 | HEAD-UP DISPLAY APPARATUS BASED ON AUGMENTED REALITY - Disclosed is a head-up display apparatus based on AR that three-dimensionally displays augmented image information on an object external to a vehicle on the basis of actual distance information between the vehicle and the external object, and thus can provide realistic information to a driver. The head-up display apparatus includes a distance information generating unit configured to receive an image signal from an image signal inputting apparatus capturing an image in front of a vehicle and generate distance information on each of a plurality of objects in the front image, an information image generating unit configured to generate an information image of each object in the front image, and an augmentation processing unit configured to generate augmented information image of each object on the basis of the distance information on each object. | 06-19-2014 |
20140168266 | HEAD-MOUNTED DISPLAY DEVICE, CONTROL METHOD FOR HEAD-MOUNTED DISPLAY DEVICE, AND WORK SUPPORTING SYSTEM - A head-mounted display device for enabling a user to simultaneously visually recognize a virtual image and an outside scene includes a warning-information generating unit configured to generate warning information, which is an image for calling the user's attention and an image display unit configured to cause the user to visually recognize the warning information as the virtual image. | 06-19-2014 |
20140168267 | AUGMENTED REALITY SYSTEM AND CONTROL METHOD THEREOF - A projection-based augmented reality system and a control method thereof are provided. The control method of an augmented reality system includes determining a conversion area to be converted from a work area based on a first gesture, acquiring a captured image of the determined conversion area, generating a virtual image of the determined conversion area from the acquired captured image, displaying the generated virtual image in the work area, and performing a manipulation function with respect to the displayed virtual image based on a second gesture. | 06-19-2014 |
20140168268 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - According to an embodiment of the present disclosure, there is provided an information processing device including a data acquisition unit configured to acquire sensor data indicating the direction of gravity exerted on an imaging device configured to image an image in which a physical space is projected, a decision unit configured to decide a relative attitude of a plane in the physical space with respect to the image based on the sensor data, and a conversion unit configured to perform conversion between a three-dimensional position of a given point on the plane and a two-dimensional position in the corresponding image using the attitude decided by the decision unit. | 06-19-2014 |
20140176603 | METHOD AND APPARATUS FOR MENTORING VIA AN AUGMENTED REALITY ASSISTANT - A method and apparatus for training and guiding users comprising generating a scene understanding based on video and audio input of a scene of a user performing a task in the scene, correlating the scene understanding with a knowledge base to produce a task understanding, comprising one or more goals, of a current activity of the user, reasoning, based on the task understanding and a user's current state, a next step for advancing the user towards completing one of the one or more goals of the task understanding and overlaying the scene with an augmented reality view comprising one or more visual and audio representation of the next step to the user. | 06-26-2014 |
20140176604 | Automated Object Selection and Placement for Augmented Reality - A system to facilitate AR processing includes receiving captured media from a user device and context information relating to media that is being delivered to a receiving device. The system may use the media being delivered with the captured media to generate one or more virtual objects. The user device may augment a user's view of reality that is reflected in the captured media by overlaying or otherwise incorporating the virtual objects in the user's view of reality. | 06-26-2014 |
20140176605 | DRIVER ASSISTANCE SYSTEM FOR VEHICLE - A vehicular camera includes a housing, a lens, an image sensor positioned for receiving images from the lens, a processor, and a memory. The memory contains a plurality of overlays. The processor is programmed to (a) receive first input data from a vehicle in which the camera is to be mounted, wherein the first input data correspond to the configuration of the vehicle, and (b) select a particular overlay to display based at least in part on the input received. | 06-26-2014 |
20140176606 | RECORDING AND VISUALIZING IMAGES USING AUGMENTED IMAGE DATA - A device and method for presenting augmented image data. An image capture device captures an image together with information about the image. Alternatively a data logger may capture additional information about a particular image. The image and additional information is sent to a database where a server analyzes the augmented image and relates the augmented image to other images in the database. A subsequent user may query the database for all images and augmented information for a particular area, location, or object and retrieve that collected information for subsequent analysis. | 06-26-2014 |
20140176607 | SIMULATION SYSTEM FOR MIXED REALITY CONTENT - Disclosed is a simulation system for mixed reality content. A simulation system according to the present invention may comprise at least one real object, configured with a tracking sensor, for demonstrating contents, a multi-modal input-output apparatus tracking the at least one real object and collecting information on the at least one real object, and a content authoring apparatus configured to edit virtual contents according to a predefined scenario, receive the information on the at least one real object collected by the multi-modal input-output apparatus, and edit the virtual contents based on the information on the at least one real object and a user feedback. | 06-26-2014 |
20140176608 | DECORATING SYSTEM FOR EDIBLE PRODUCTS - An augmented reality system for a food product includes an edible media or a food product decoration with an embedded augmented reality marker and a related application for a mobile device. The application presents augmented reality content associated with the augmented reality marker. The application permits access to the augmented reality content in response to detection of the application recognizing the embedded augmented reality marker. | 06-26-2014 |
20140176609 | MIXED REALITY APPARATUS - A mixed reality device is provided with: a head-mounted display ( | 06-26-2014 |
20140184643 | Augmented Reality Worksite - A system and method for coordinating machines and personnel about a physical worksite maintains worksite information associated with the physical worksite in a database. A position of an operator display device in the physical worksite is determined and augmentation content is generated from the stored worksite information associated with the determined position. The augmentation content can be displayed on an operator display device through which the physical worksite is also visible. The operator display device may be a heads-up display, a head mounted display or, in some embodiments, an off-board display device. | 07-03-2014 |
20140184644 | RENDERING AUGMENTED REALITY BASED ON FOREGROUND OBJECT - A mobile device detects a moveable foreground object in captured images, e.g., a series of video frames without depth information. The object may be one or more of the user's fingers. The object may be detected by warping one of a captured image of a scene that includes the object and a reference image of the scene without the object so they have the same view and comparing the captured image and the reference image after warping. A mask may be used to segment the object from the captured image. Pixels are detected in the extracted image of the object and the pixels are used to detect the point of interest on the foreground object. The object may then be tracked in subsequent images. Augmentations may be rendered and interacted with or temporal gestures may be detected and desired actions performed accordingly. | 07-03-2014 |
20140184645 | ENERGY SAVINGS USING AUGMENTED REALITY - Technologies and implementations for energy savings using augmented reality are generally disclosed. | 07-03-2014 |
20140192084 | MIXED REALITY DISPLAY ACCOMMODATION - A mixed reality accommodation system and related methods are provided. In one example, a head-mounted display device includes a plurality of sensors and a display system for presenting holographic objects. A mixed reality safety program is configured to receive a holographic object and associated content provider ID from a source. The program assigns a trust level to the object based on the content provider ID. If the trust level is less than a threshold, the object is displayed according to a first set of safety rules that provide a protective level of display restrictions. If the trust level is greater than or equal to the threshold, the object is displayed according to a second set of safety rules that provide a permissive level of display restrictions that are less than the protective level of display restrictions. | 07-10-2014 |
20140192085 | HEAD MOUNTED DISPLAY AND METHOD FOR CONTROLLING THE SAME - A head mounted display (HMD) and a method for controlling the same are disclosed. Most particularly, the HMD and the method for controlling the same may detect an external device having a ticket stored therein and display augmented reality information related to the stored ticket in a location proximate to an individual carrying the external device, in order to check tickets. | 07-10-2014 |
20140192086 | CAMERA-BASED DEVICE AND METHOD OF AUGMENTING DATA DISPLAYED ON A DISPLAY DEVICE USING THE CAMERA-BASED DEVICE - A method of augmenting data that is displayed on a display device by using a camera-based device, and the camera-based device, are provided. The method includes configuring a network setting of the camera-based device to enable communication between the camera-based device and the display device; performing calibration to capture an initial boundary in the display device; capturing data that is displayed on the display device; associating the captured data with additional data; and displaying the additional data on the display device. The camera-based device includes a communication interface and a processor that configures a network setting to enable communication between the camera-based device and the display device; performs calibration to capture an initial boundary in the display device; captures data that is displayed on the display device; associates the captured data with additional data; and displays the additional data on the display device. | 07-10-2014 |
20140192087 | SYSTEM AND METHOD FOR PROVIDING A VIRTUAL IMMERSIVE ENVIRONMENT - A system and method for triggering movement of a user in a virtual immersive environment is disclosed. Embodiments provide evaluation criteria derived from a motion capture system to trigger movement of the user within a virtual world when the user moves, within an actual space of the environment, from a first location within a first zone to a second location within a second zone. The first zone and the second zone are concentric. | 07-10-2014 |
20140198128 | DYNAMIC ZONE PLATE AUGMENTED VISION EYEGLASSES - A method, an apparatus, and a computer program product for modulating optics in a display are provided. An apparatus forms a plurality of zone plates in a liquid crystal using electric fields. Each zone plate has a center, and the centers are aligned along a first axis of the display. The apparatus moves the plurality of zone plates in a first direction along a second axis of the display different from the first axis of the display, while maintaining alignment of the centers of the plurality of zone plates along the first axis. Such movement is provided through repositioning of electric fields through the liquid crystal. | 07-17-2014 |
20140198129 | APPARATUS AND METHOD FOR CONTROLLING AN AUGMENTED REALITY DEVICE - An apparatus, a method, and a computer program product are provided. The apparatus detects an eye gaze on a first region in a real world scene, sets a boundary that surrounds the first region, the boundary excluding at least a second region in the real world scene, performs an object recognition procedure on the first region within the boundary, and refrains from performing an object recognition procedure on the at least the second region. | 07-17-2014 |
20140198130 | AUGMENTED REALITY USER INTERFACE WITH HAPTIC FEEDBACK - A device may be configured to provide feedback based on an augmented reality environment. The device may comprise, for example, a processor configured to receive a control signal from an augmented reality device and a feedback device configured to provide a feedback based on the received control signal. The augmented reality device may generate an augmented reality environment and may be remote from the device. The control signal received by the device may be representative of an event occurring in the augmented reality environment. The augmented reality environment may include a physical space in which at least one physical object exists and an augmented reality space in which one or more virtual objects that augment the physical object are displayed. | 07-17-2014 |
20140204117 | MIXED REALITY FILTERING - Embodiments that relate to selectively filtering geo-located data items in a mixed reality environment are disclosed. For example, in one disclosed embodiment a mixed reality filtering program receives a plurality of geo-located data items and selectively filters the data items based on one or more modes. The modes comprise one or more of a social mode, a popular mode, a recent mode, a work mode, a play mode, and a user interest mode. Such filtering yields a filtered collection of the geo-located data items. The filtered collection of data items is then provided to a mixed reality display program for display by a display device. | 07-24-2014 |
20140204118 | PERSONALIZING MEDICAL CONDITIONS WITH AUGMENTED REALITY - Augmented reality is used to simulate the impact of medical conditions on body parts and other objects within images taken of the objects. The simulations enable a user to see how a medical condition can affect the user by dynamically simulating the impact of the medical condition on captured images of body parts associated with the user in real-time. A user can select different medical conditions that are associated with different body parts. These objects are then identified within images containing the body parts using image recognition algorithms and/or user input. Thereafter, the images are modified so as to render the body parts as though the body parts were being impacted by the medical condition. The modifications are made by blending image data of the captured image with condition image data available to the processing system. | 07-24-2014 |
20140204119 | GENERATING AUGMENTED REALITY EXEMPLARS - Technologies are generally described for automatic clustering and rendering of augmentations into one or more operational exemplars in an augmented reality environment. In some examples, based on a user's context, augmentations can be retrieved, analyzed, and grouped into clusters. Exemplars can be used to render the clusters as conceptual representations of the grouped augmentations. An exemplar's rendering format can be derived from the grouped augmentations, the user's context, or formats of other exemplars. Techniques for grouping the augmentations into clusters and rendering these clusters as exemplars to a user can enhance the richness and meaning of an augmented reality environment along contextually or user-determined axes while reducing the sensorial and cognitive load on the user. | 07-24-2014 |
20140204120 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - An image processing device includes a processor; and a memory which stores a plurality of instructions, which when executed by the processor, cause the processor to execute: acquiring an image including a first object captured by an image sensor; computing, from the image, flatness information on flatness of a projection plane of the first object on which a superimposed image is displayed; and defining a display position of the superimposed image on the basis of the flatness information. | 07-24-2014 |
20140204121 | AUGMENTED REALITY FOR OILFIELD - A method for augmenting an immediate user task includes obtaining role information identifying a role of a user within an oilfield company. The user is performing oilfield operations in a field. The method further includes identifying a current location of the user in the field to identify the immediate user task being performed by the user in the field, defining, using the role information, a user perspective of the user, selecting metadata corresponding to the user perspective to obtain selected metadata, and presenting the selected metadata to the user. | 07-24-2014 |
20140210855 | SYSTEM AND METHOD FOR PROVIDING AUGMENTED CONTENT - Systems and methods for selectively augmenting an electronic media file with additional content responsive to the user viewing portions of the electronic media file are provided. The system includes a computing device with a display for playing an electronic media file that includes portions that are augmentable with additional content. The system also includes a camera and image processing software to track the focus of the user's eyes on the display while looking at the electronic media file. The system can determine whether the focus of the user's eyes corresponds to an augmentable portion of the electronic media file, augment the electronic media file by playing additional content to the user, and then continue playing the electronic media file. | 07-31-2014 |
20140210856 | Coordinate Geometry Augmented Reality Process for Internal Elements Concealed Behind an External Element - Embodiments of the invention include a method, a system, and a mobile device that incorporate augmented reality technology into land surveying, 3D laser scanning, and digital modeling processes. By incorporating the augmented reality technology, a 3D digital model of internal elements concealed behind an external element can be visualized on a live view, aligned to the orientation and scale of the scene displayed on the mobile device. In an embodiment, a marker can be placed at a predetermined set of coordinates on the external element, determined by surveying equipment. The 3D digital model of the internal elements can be retrieved by the mobile device and overlaid in relation to the marker position, orientation, and size so that it is seen at a calculated distance in depth behind the external element as they would exist hidden behind the external element in the real environment. | 07-31-2014 |
20140210857 | REALIZATION METHOD AND DEVICE FOR TWO-DIMENSIONAL CODE AUGMENTED REALITY - A computer-implemented method for two-dimensional code augmented reality includes: detecting an image capture of a two-dimensional code through a camera video frame; identifying the contour of the two-dimensional code captured in the camera video frame; decoding the information embedded in the detected two-dimensional code; obtaining content information corresponding to the decoded two-dimensional code; tracking the identified contour of the two-dimensional code within the camera video frame to obtain the position information of the two-dimensional code in the camera video frame; performing augmented reality processing on the two-dimensional code based on the content information and the position information; and generating the augmented reality on the device while simultaneously displaying real-world imagery on the display of the device, wherein any visual augmented reality is displayed in accordance with the location of the two-dimensional code in the video frame. | 07-31-2014 |
20140210858 | ELECTRONIC DEVICE AND METHOD FOR SELECTING AUGMENTED CONTENT USING THE SAME - Disclosed are an electronic device and a method for selecting augmented content that selectively augment various types of content with respect to a real object using a marker. The electronic device providing augmented reality comprises: a memory storing at least one content group, wherein each of the at least one content group includes a plurality of virtual objects; a camera; a display; and a controller configured to: capture, via the camera, an image including a real object and a marker, obtain identification information on the real object based on the image, obtain angle information reflecting an orientation of the marker with respect to the real object using the image, determine a specific content group from the at least one content group based on the identification information, select a specific virtual object among the virtual objects included in the specific content group based on the angle information, and augment, via the display, the specific virtual object. | 07-31-2014 |
20140210859 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND PROGRAM, INFORMATION PROCESSING APPARATUS, VACANT SPACE GUIDANCE SYSTEM, VACANT SPACE GUIDANCE METHOD AND PROGRAM, IMAGE DISPLAY SYSTEM, IMAGE DISPLAY METHOD AND PROGRAM - A mobile terminal ( | 07-31-2014 |
20140218398 | SYSTEMS AND METHODS FOR MANAGING COMPUTING SYSTEMS UTILIZING AUGMENTED REALITY - Systems and methods for managing computing systems are provided. One system includes a capture device for capturing environmental inputs, memory storing code comprising a management module, and a processor. The processor, when executing the code comprising the management module, is configured to perform the method below. One method includes capturing an environmental input, identifying a target device in the captured environmental input, and comparing the target device in the captured environmental input to a model of the target device. The method further includes recognizing, in real-time, a status condition of the target device based on the comparison and providing a user with troubleshooting data if the status condition is an error condition. Also provided are physical computer storage mediums including a computer program product for performing the above method. | 08-07-2014 |
20140218399 | SYSTEMS AND METHODS FOR MANAGING COMPUTING SYSTEMS UTILIZING AUGMENTED REALITY - Systems and methods for managing computing systems are provided. One system includes a capture device for capturing environmental inputs, memory storing code comprising a management module, and a processor. The processor, when executing the code comprising the management module, is configured to perform the method below. One method includes capturing an environmental input, identifying a target device in the captured environmental input, and comparing the target device in the captured environmental input to a model of the target device. The method further includes recognizing, in real-time, a status condition of the target device based on the comparison and providing a user with troubleshooting data if the status condition is an error condition. Also provided are physical computer storage mediums including a computer program product for performing the above method. | 08-07-2014 |
20140232746 | THREE DIMENSIONAL AUGMENTED REALITY DISPLAY APPARATUS AND METHOD USING EYE TRACKING - A three-dimensional augmented reality display apparatus and method using eye tracking adjust a depth of an image without an increase in volume and are easily applied to an augmented reality head up display (HUD), by adjusting an angle of a total reflection prism based on positions of both eyes of a driver detected in real time using eye tracking to allow a right eye image to be formed in a driver's right eye and allow a left eye image to be formed in a driver's left eye. | 08-21-2014 |
20140232747 | OPERATION DISPLAY SYSTEM AND OPERATION DISPLAY METHOD - Disclosed is an operation display system, including: an operation display device including a display unit to display an operation window; an operating unit to receive an operation to the operation window; and a display control unit to change the operation window in accordance with the operation received by using the operating unit; an air operation detecting unit to detect an air operation performed by one user in air apart from the display unit; a virtual operation window creating unit to create a virtual operation window in which the operation window is changed in accordance with the air operation; and an AR display unit to show the one user an augmented reality space in which the virtual operation window is synthesized with a real space, wherein the display control unit does not change the operation window displayed on the display unit, in accordance with the air operation. | 08-21-2014 |
20140232748 | DEVICE, METHOD AND COMPUTER READABLE RECORDING MEDIUM FOR OPERATING THE SAME - A method of operating an electronic device is provided. The method includes recognizing at least one object from a digital image, wherein the recognizing of the object includes generating at least one descriptor from the digital image, determining an object in the digital image on a basis of at least one part of the at least one descriptor and identification data corresponding to the at least one reference object, and determining a pose of the object on a basis of at least one part of the reference descriptor corresponding to the determined object and the at least one descriptor. | 08-21-2014 |
20140232749 | VISION-BASED AUGMENTED REALITY SYSTEM USING INVISIBLE MARKER - A vision-based augmented reality system using an invisible marker indicates an invisible marker on a target object to be tracked, such that it can rapidly and correctly track the target object by detecting the invisible marker. The augmented reality system includes a target object (TO) including an infrared marker (IM) drawn by an invisible infrared light-emitting material; a visible-ray camera ( | 08-21-2014 |
20140232750 | VISUAL OVERLAY FOR AUGMENTING REALITY - Augmented reality may be provided to one or more users in a real-world environment. For instance, information related to a recognized object may be displayed as a visual overlay appearing to be in the vicinity of the object in the real-world environment that the user is currently viewing. The information displayed may be determined based on at least one of captured images and transmissions from other devices. In one example, a portable apparatus receives a transmitted user identifier (ID) and may submit the user ID to a remote computing device that compares a profile of a user corresponding to the user ID with a profile associated with the portable apparatus for determining, at least in part, information to be displayed as the visual overlay. As another example, the portable apparatus may include a camera to capture images that are analyzed for recognizing objects and identifying other users. | 08-21-2014 |
20140240349 | METHOD AND APPARATUS FOR PRESENTING TASK-RELATED OBJECTS IN AN AUGMENTED REALITY DISPLAY - An approach is provided for causing a presentation in an augmented reality user interface for user guidance. The approach involves causing a presentation of one or more indications of one or more parts in an augmented reality user interface, wherein the one or more parts are associated with at least one task. The approach also involves causing a presentation of one or more guides for aligning the one or more indications with the one or more parts in the augmented reality user interface. | 08-28-2014 |
20140240350 | DIRECTIONAL AND X-RAY VIEW TECHNIQUES FOR NAVIGATION USING A MOBILE DEVICE - Techniques for displaying navigation information on a mobile device are provided that include a method that includes obtaining an indication of a position and an indication of a direction associated with the mobile device, using the indication of the position, the indication of the direction, information regarding identities of POIs within a geographic region of interest, and information regarding areas associated with the POIs to determine at least one relevant POI, of the POIs, that is associated with the position and direction, and displaying at least one visual indication associated with each of the at least one relevant POI on the mobile device. The appearance of the at least one visual indication is dependent on at least one of a distance from the mobile device of the relevant POI associated with the visual indication or presence of a known physical barrier between the mobile device and that relevant POI. | 08-28-2014 |
20140240351 | MIXED REALITY AUGMENTATION - Embodiments that relate to providing motion amplification to a virtual environment are disclosed. For example, in one disclosed embodiment a mixed reality augmentation program receives from a head-mounted display device motion data that corresponds to motion of a user in a physical environment. The program presents via the display device the virtual environment in motion in a principal direction, with the principal direction motion being amplified by a first multiplier as compared to the motion of the user in a corresponding principal direction. The program also presents the virtual environment in motion in a secondary direction, where the secondary direction motion is amplified by a second multiplier as compared to the motion of the user in a corresponding secondary direction, and the second multiplier is less than the first multiplier. | 08-28-2014 |
20140240352 | CONTENT DELIVERY SYSTEM WITH AUGMENTED REALITY MECHANISM AND METHOD OF OPERATION THEREOF - A content delivery system includes: a registration module configured to identify a registration type for calibrating a current location of a device; a tracker module, coupled to the registration module, configured to track a subject of interest based on a context; a source module, coupled to the tracker module, configured to determine an information source based on the registration type; and a content module, coupled to the source module, configured to generate an overlay content based on the information source for displaying the overlay content for the subject of interest on the device. | 08-28-2014 |
20140240353 | SYSTEM FOR AND METHOD OF AUGMENTING VIDEO AND IMAGES - A system for and a method of augmenting video and images. A target area of an image frame is obtained. Boundary values for the target area of the image frame are obtained. Image data to be inserted into the image frame is also obtained. The image data is blended according to the boundary values for the target area using spectral methods. The blended image data is inserted into the target area of the image frame. The image can be a portion of a video clip in which case blended image data can be inserted in the target area for each of a plurality of image frames of the video clip to generate a resulting video clip. | 08-28-2014 |
20140240354 | AUGMENTED REALITY APPARATUS AND METHOD - An augmented reality apparatus includes a plurality of photographing modules separated from each other, capturing images of a real scene that includes an object; a tracking unit obtaining status information of the object by tracking the object in the images of the real scene; an image processor determining status information of a virtual image that corresponds to the real scene based on the status information of the object, and generating an augmented reality image by combining the virtual image and the images of the real scene based on the determined status information; and a rendering unit rendering the augmented reality image to display the augmented reality image. | 08-28-2014 |
20140240355 | Imaging System and Method for Use in Surgical and Interventional Medical Procedures - A system and method for displaying images of internal anatomy includes an image processing device configured to provide high resolution images of the surgical field from low resolution scans during the procedure. The image processing device digitally manipulates a previously-obtained high resolution baseline image to produce many representative images based on permutations of movement of the baseline image. During the procedure a representative image is selected having an acceptable degree of correlation to the new low resolution image. The selected representative image and the new image are merged to provide a higher resolution image of the surgical field. The image processing device is also configured to provide interactive movement of the displayed image based on movement of the imaging device, and to permit placement of annotations on the displayed image to facilitate communication between the radiology technician and the surgeon. | 08-28-2014 |
20140247278 | BARCODE VISUALIZATION IN AUGMENTED REALITY - Disclosed herein is an improved method for providing content associated with barcodes in augmented reality in addition to or in combination with providing content associated with target objects in augmented reality. The improved method advantageously provides an augmented reality client that a user may use to view the respective content associated with barcodes and target objects while in camera view to improve usability. Advantageously, the user is not unexpectedly taken out of camera view to view the content associated with the barcode, and the user experience provided is consistent between barcodes and target objects. Furthermore, the improved method integrates barcodes and a visualization of the barcode within augmented reality, without disrupting the real-time augmented reality experience. | 09-04-2014 |
20140247279 | REGISTRATION BETWEEN ACTUAL MOBILE DEVICE POSITION AND ENVIRONMENTAL MODEL - A user interface enables a user to calibrate the position of a three dimensional model with a real-world environment represented by that model. Using a device's sensor, the device's location and orientation is determined. A video image of the device's environment is displayed on the device's display. The device overlays a representation of an object from a virtual reality model on the video image. The position of the overlaid representation is determined based on the device's location and orientation. In response to user input, the device adjusts a position of the overlaid representation relative to the video image. | 09-04-2014 |
20140247280 | FEDERATED MOBILE DEVICE POSITIONING - A user interface enables a user to calibrate the position of a three dimensional model with a real-world environment represented by that model. Using a device's sensor suite, the device's location and orientation is determined. A video image of the device's environment is displayed on the device's display. The device overlays a representation of an object from a virtual reality model on the video image. The position of the overlaid representation is determined based on the device's location and orientation. In response to user input, the device adjusts a position of the overlaid representation relative to the video image. | 09-04-2014 |
20140247281 | Dynamic Augmented Reality Vision Systems - Imaging systems which include an augmented reality feature are provided with automated means to throttle or excite the augmented reality generator. Compound images are presented whereby an optically captured image is overlaid with a computer-generated image portion to form the complete augmented image for presentation to a user. Depending upon the particular conditions of the imager, imaged scene and the imaging environment, these imaging systems include automated responses. Computer-generated images which are overlaid on optically captured images are either bolstered in detail and content where an increase in information is needed, or tempered where a decrease in information is preferred, as determined by prescribed conditions and values. | 09-04-2014 |
20140247282 | APPARATUS AND ASSOCIATED METHODS - An apparatus comprising: | 09-04-2014 |
20140247283 | UNIFYING AUGMENTED REALITY AND BIG DATA - Embodiments of the present invention relate to unifying augmented reality technology and big data. An interactive operation element may be defined. The interactive operation element is associated with an event and a location on an augmented reality (AR) screen. An action may be performed based on the event using a predefined communication protocol. The action may be associated with an information artifact which is derived from big data. | 09-04-2014 |
20140253588 | DISABLING AUGMENTED REALITY (AR) DEVICES AT SPEED - Systems, apparatus and methods for limiting information on an augmented reality (AR) display based on various speeds of an AR device are presented. Often information forms a distraction when the wearer is driving, running or even walking. Therefore, the described systems, devices and methods aim to limit information displayed on an AR display based on three or more levels of movement (e.g., stationary, walking, driving) such that the wearer is less distracted when higher levels of concentration are needed for real world activities. | 09-11-2014 |
20140253589 | INCONSPICUOUS TAG FOR GENERATING AUGMENTED REALITY EXPERIENCES - A system and method for generating virtual objects, in which the data for the virtual object is retrieved at least in part from a tag. The tag comprises a transparent physical surface and a visually imperceptible structure constructed in the transparent physical surface. The tag encodes the data for the virtual objects in the visually imperceptible structure. When detected by the appropriately configured capture devices, the visually imperceptible structure produces a depth pattern that is reflected in phase shifts between regions in the tag. | 09-11-2014 |
20140253590 | METHODS AND APPARATUS FOR USING OPTICAL CHARACTER RECOGNITION TO PROVIDE AUGMENTED REALITY - A processing system uses optical character recognition (OCR) to provide augmented reality (AR). The processing system automatically determines, based on video of a scene, whether the scene includes a predetermined AR target. In response to determining that the scene includes the AR target, the processing system automatically retrieves an OCR zone definition associated with the AR target. The OCR zone definition identifies an OCR zone. The processing system automatically uses OCR to extract text from the OCR zone. The processing system uses results of the OCR to obtain AR content which corresponds to the text from the OCR zone. The processing system automatically causes that AR content to be presented in conjunction with the scene. Other embodiments are described and claimed. | 09-11-2014 |
20140253591 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM RECORDING INFORMATION PROCESSING PROGRAM - An example information processing apparatus includes: an image acquiring unit that acquires an image of a real space captured by an imaging device; a feature detection unit that detects one or more features from the captured image; an image generating unit that generates an image of a virtual space, by placing a virtual object made to correspond to the detected feature at a position based on the position of the feature in the virtual space; and a display control unit that causes an image to be displayed on a display device such that a user sees the virtual space image superimposed on a real space, wherein when a first feature and a second feature are detected in a captured image, the image generating unit places a first virtual object corresponding to the first feature in the virtual space at a position based on the position of the second feature. | 09-11-2014 |
20140253592 | METHOD FOR PROVIDING AUGMENTED REALITY, MACHINE-READABLE STORAGE MEDIUM, AND PORTABLE TERMINAL - A method for providing Augmented Reality (AR) is provided. The method includes acquiring an image, sequentially setting candidate regions of the image according to types of subjects, detecting at least one of the subjects from the candidate regions, creating a synthesized image by mapping a virtual object to a position corresponding to a position of the detected subject in the image, and displaying the synthesized image to a user. | 09-11-2014 |
20140267395 | LOW-LATENCY INTERACTIVE MULTIVIEWER INTERFACES AND METHODS FOR VIDEO BROADCAST EQUIPMENT - Low-latency interactive multiviewer interfaces and methods for video broadcast equipment are disclosed. A video switcher includes an integrated multiviewer to generate a multiviewer interface. Video signals are received from video sources and a user interface graphic is received from an external source such as an external computer. The received video signals are routed in a video path of the video switcher to be presented in a multiviewer interface on a display, and the received interface graphic is overlaid on the received video signals for presentation in the multiviewer interface. Realtime elements associated with frames of the video signals may also be generated within the video switcher and combined with the video signals and the externally generated interface graphic in the multiviewer interface. The video signals could include video signals from external video sources, outputs of internal processing elements within the video switcher, and/or intermediary video signals within the video switcher. | 09-18-2014 |
20140267396 | AUGMENTING IMAGES WITH HIGHER RESOLUTION DATA - Embodiments are disclosed that relate to augmenting a lower resolution image with higher resolution image data. For example, one disclosed embodiment provides a method comprising imaging a scene with a first, lower resolution imaging device to form a first set of image data. The scene may also be imaged with a second, higher resolution imaging device to form a second set of image data. The method further comprises augmenting at least a portion of the first set of image data with at least a portion of the second set of image data to form an augmented image. | 09-18-2014 |
20140267397 | IN SITU CREATION OF PLANAR NATURAL FEATURE TARGETS - Disclosed are a system, apparatus, and method for in-situ creation of planar natural feature targets. In one embodiment, a planar target is initialized from a single first reference image, and one or more subsequent images are processed. In one embodiment, the planar target is tracked in six degrees of freedom upon the processing of the one or more subsequent images, and a second reference image is selected from the processed one or more subsequent images. In one embodiment, upon selecting the second reference image the planar target is refined to a more accurate planar target. | 09-18-2014 |
20140267398 | AUGMENTED REALITY HEADS UP DISPLAY (HUD) FOR YIELD TO PEDESTRIAN SAFETY CUES - An augmented reality driver system, device, and method safely guide a vehicle driver to yield to pedestrians. A vehicle navigator determines a turn lane based upon proximity to a vehicle. A target sensor detects a pedestrian entering the turn lane and to determine a crosswalk path across the turn lane. An augmented reality controller three dimensionally maps a forward view including the pedestrian, and spatially overlays an augmented reality display on the volumetric heads up display for a driver of the vehicle by projecting a yielding indication adjacent to the crosswalk path. | 09-18-2014 |
20140267399 | Using Augmented Reality to Determine Information - Methods and systems according to one or more embodiments of the present disclosure are provided for using augmented reality to determine information. In an embodiment, an augmented reality (AR) system comprises one or more processors, and one or more memories in communication with the one or more processors and adapted to store a plurality of machine-readable instructions which when executed by the one or more processors cause the system to: capture an image view via a viewer; deconstruct the image view into one or more objects; identify at least one object of interest based on specific relevancy information; determine real time information about the object(s) of interest; and present the real time information on a user interface. | 09-18-2014 |
20140267400 | User Interface for a Head Mounted Display - A user interface (UI) of a head mounted display (HMD) is provided that allows a user to access one or more persistent data elements that are otherwise outside the user's initial field of view by using a head movement, such as a head tilt (i.e., movement about a horizontal axis) and/or rotation (i.e., movement about a vertical axis). Embodiments also can provide for further movement and/or other manipulation of data of persistent data elements with further detected movement of the user's head. | 09-18-2014 |
20140267401 | Visual Cortex Thought Detector Interface - A wearable computing device comprises one or more eye pieces each of which further comprises a flexible frame surrounding a display screen and tactile elements arranged on the perimeter of the display screen. The tactile elements provide tactile feedback to the user that is synchronous with the display on the display screen. A detection system is also included in the flexible frame to monitor the movements of a wearer's eyes and the eye sockets and to execute various tasks in response to the detected movements. A visual cortex thought detector also coupled to the wearable computing device obtains information regarding the wearer's thoughts and manipulates a display on the display screen based on the obtained information. | 09-18-2014 |
20140267402 | VOLUMETRIC HEADS-UP DISPLAY WITH DYNAMIC FOCAL PLANE - A heads-up display device for displaying graphic elements in view of a user while the user views an environment through a display screen. The heads-up display device includes at least one projector that projects a graphic element on a frontal focal plane in view of the user while the user views the environment through the display screen, and at least one projector that projects a graphic element on a ground-parallel focal plane in view of the user while the user views the environment through the display screen. The projector that projects the graphic element on the frontal focal plane is mounted on an actuator that linearly moves the projector so as to cause the frontal focal plane to move in a direction of a line-of-sight of the user. The projector that projects the ground-parallel focal plane is fixedly arranged such that the ground-parallel focal plane is static. | 09-18-2014 |
20140267403 | METHODS AND APPARATUS FOR AUGMENTED REALITY TARGET DETECTION - Systems and methods are disclosed for target-based AR devices to perform low-power front-end passive scanning of targets to alert users to AR content linked to any image targets the users may be viewing. Passive scanning by AR devices relieves users of the need to manually activate a camera for AR image target identification, and helps to identify image targets the users may be unknowingly viewing. To conserve power, AR devices may autonomously activate a camera to perform an exploratory scan when the AR devices detect from users' movement patterns that users may be interested in certain targets or are in a state of attentiveness. AR devices may identify one or more image targets from the exploratory scans. If users elect to interact with the AR content, AR devices may activate the camera to perform a full capture or real-time tracking of the image targets to augment the AR content. | 09-18-2014 |
20140267404 | AUGMENTED REALITY DEVICE WITH PREDEFINED OBJECT DATA - Techniques for displaying an augmented reality toy. Embodiments capture a visual scene for display. The visual scene includes a first physical object and is captured using one or more camera devices. The first physical object is identified as a first predetermined object type, based on one or more object identifiers associated with the first physical object. Embodiments retrieve predefined geometric information corresponding to the first predetermined object type and render a sequence of frames for display in which the captured visual scene is augmented, based on the predefined geometric information. | 09-18-2014 |
20140267405 | CAMPAIGN OPTIMIZATION FOR EXPERIENCE CONTENT DATASET - A server for campaign optimization is described. An experience content dataset is generated for an augmented reality application of a device based on analytics results. The analytics results are generated based on analytics data received from the device. The experience content dataset is provided to the device. The device recognizes a content identifier of the experience content dataset and generates an interactive experience with a presentation of virtual object content that is associated with the content identifier. | 09-18-2014 |
20140267406 | CONTENT CREATION TOOL - A server for content creation is described. A content creation tool of the server generates an experience content dataset using a template to process a content identifier and virtual object content. An experience generator of the server provides the experience content dataset to a device that recognizes the content identifier, to generate an interactive experience with the virtual object content at the device. | 09-18-2014 |
20140267407 | SEGMENTATION OF CONTENT DELIVERY - A system and method for segmentation of content delivery is described. A virtual object model is divided into a plurality of segments. An order of the plurality of segments is arranged in a delivery queue. Each segment of the virtual object model is delivered in the order of the delivery queue to a device that is configured to recognize a physical object that is associated with the virtual object model. | 09-18-2014 |
20140267408 | REAL WORLD ANALYTICS VISUALIZATION - A server receives and analyzes analytics data from an application of one or more devices. The application corresponds to a content generator. The server generates, using the content generator, a visualization content dataset based on the analysis of the analytics data. The visualization content dataset comprises a set of images, along with corresponding analytics virtual object models to be engaged with an image of a physical object captured with the one or more devices and recognized in the set of images. The analytics data and the visualization content dataset may be stored in a storage device of the server. | 09-18-2014 |
20140267409 | DYNAMICALLY PRESERVING SCENE ELEMENTS IN AUGMENTED REALITY SYSTEMS - Methods, apparatuses, computer program products, devices and systems are described that carry out accepting a user request associated with at least one of an item, an aspect, or an element of a field of view of an augmented reality device; determining that a first presentation of the at least one item, aspect, or element has a limited period of viability for user interaction relative to the field of view of the augmented reality device; and at least one of maintaining the first presentation or providing a substantially similar second presentation in response to determining that a first presentation of the at least one item, aspect, or element has a limited period of viability for interaction relative to the field of view of the augmented reality device. | 09-18-2014 |
20140267410 | TEMPORAL ELEMENT RESTORATION IN AUGMENTED REALITY SYSTEMS - Methods, apparatuses, computer program products, devices and systems are described that carry out accepting a request associated with at least one of an item, an aspect, or an element that is not present in a field of view of a user's augmented reality device; presenting in a display of the augmented reality device at least one augmented reality representation related to the at least one item, aspect, or element in response to accepting a request associated with at least one item, aspect, or element that is not present in a field of view of an augmented reality device; and processing the request and any related interaction of the user via the at least one augmented reality representation. | 09-18-2014 |
20140267411 | INDICATING OBSERVATION OR VISIBILITY PATTERNS IN AUGMENTED REALITY SYSTEMS - Methods, apparatuses, computer program products, devices and systems are described that carry out presenting a location history query to a data source, wherein the data source includes data relating to at least one of a fixed recording device within a defined radius of a component of the location history query, a mobile recording device within a defined radius of a component of the location history query, or an individual present within a defined radius of a component of the location history query; | 09-18-2014 |
20140267412 | OPTICAL ILLUMINATION MAPPING - Techniques for augmenting an appearance of a first object. Embodiments include capturing a visual scene for display. Here, the visual scene includes a physical object and is captured using one or more camera devices. The physical object is identified as a first predetermined object type, based on one or more object identifiers associated with the physical object. Embodiments also retrieve visual characteristics information corresponding to the first predetermined object type. A sequence of frames that includes the first object is then rendered for display, where the appearance of the first object in the rendered sequence of frames is augmented based on the retrieved visual characteristics information and an appearance of the physical object in the captured visual scene. | 09-18-2014 |
20140267413 | ADAPTIVE FACIAL EXPRESSION CALIBRATION - Technologies for generating an avatar with a facial expression corresponding to a facial expression of a user include capturing a reference user image of the user on a computing device when the user is expressing a reference facial expression for registration. The computing device generates reference facial measurement data based on the captured reference user image and compares the reference facial measurement data with facial measurement data of a corresponding reference expression of the avatar to generate facial comparison data. After a user has been registered, the computing device captures a real-time facial expression of the user and generates real-time facial measurement data based on the captured real-time image. The computing device applies the facial comparison data to the real-time facial measurement data to generate modified expression data, which is used to generate an avatar with a facial expression corresponding with the facial expression of the user. | 09-18-2014 |
20140267414 | VIRTUAL BOOKSHELVES FOR DISPLAYING AND SHARING DIGITAL CONTENT - A virtual bookshelf for displaying digital content items is generated. In operation, a virtual space that is to contain a virtual bookshelf is defined and the virtual bookshelf is generated based on dimensions of the virtual space. The virtual bookshelf is populated with digital content items. The virtual bookshelf and the digital content items are displayed using a display device. | 09-18-2014 |
20140267415 | ROAD MARKING ILLUMINATION SYSTEM AND METHOD - A controller is configured to enhance driving awareness and safety by recognizing road marking objects and automatically generating laser or light beams to illuminate the road marking objects. The road marking objects are recognized from a vehicle surrounding sensing system in which cameras are frequently used. The road marking objects are also inferred from a navigation information system based on the vehicle's position and knowledge about the surrounding environment. Road markings for future vehicle positions are predicted based on present vehicle states and motions. The relative positions of the road marking objects are determined with respect to a vehicle coordinate system. When illuminated from a projector on the vehicle, the projected images of the road markings sufficiently overlap and highlight their target road marking objects on the road surface. | 09-18-2014 |
20140267416 | VIRTUAL BILLBOARDS - Disclosed are methods and apparatus for implementing a reality overlay device. A reality overlay device captures information that is pertinent to physical surroundings with respect to a device, the information including at least one of visual information or audio information. The reality overlay device may transmit at least a portion of the captured information to a second device. For instance, the reality overlay device may transmit at least a portion of the captured information to a server via the Internet, where the server is capable of identifying an appropriate virtual billboard. The reality overlay device may then receive overlay information for use in generating a transparent overlay via the reality overlay device. The transparent overlay is then superimposed via the device using the overlay information, wherein the transparent overlay provides one or more transparent images that are pertinent to the physical surroundings. Specifically, one or more of the transparent images may operate as “virtual billboards.” Similarly, a portable device such as a cell phone may automatically receive a virtual billboard when the portable device enters an area within a specified distance from an associated establishment. | 09-18-2014 |
20140267417 | Method and System for Disambiguation of Augmented Reality Tracking Databases - A method and system for reducing the amount of processing that an augmented reality platform needs to perform in order to provide an augmented reality experience to a user. The present invention enhances the efficiency of augmented reality processing by reducing the number of tracking cues and images that need to be processed to determine the narrow pose of a mobile device that outputs the augmented reality experience. | 09-18-2014 |
20140267418 | METHOD FOR SIMULATING NATURAL PERCEPTION IN VIRTUAL AND AUGMENTED REALITY SCENES - A preferred method for dynamically displaying virtual and augmented reality scenes can include determining input parameters, calculating virtual photometric parameters, and rendering a VAR scene with a set of simulated photometric parameters. | 09-18-2014 |
20140267419 | METHOD AND SYSTEM FOR REPRESENTING AND INTERACTING WITH AUGMENTED REALITY CONTENT - Systems and methods for displaying augmented reality (AR) content are disclosed. The AR device may include a display configured to display real-world content overlaid with AR content and at least one sensor configured to provide an output indicative of an orientation, location, or motion of the AR device. The system may also include a processor device configured to: cause the AR content to be shown on the display at an initial location on the display; determine a change in orientation of the AR device based on the output of the at least one sensor; and change a position of the AR content on the display to a second location on the display, wherein the change in position of the AR content from the initial location to the second location is related to the determined change in orientation of the AR device. | 09-18-2014 |
20140267420 | DISPLAY SYSTEM AND METHOD - One embodiment is directed to a user display device comprising a housing frame mountable on the head of the user, a lens mountable on the housing frame and a projection sub system coupled to the housing frame to determine a location of appearance of a display object in a field of view of the user based at least in part on at least one of a detection of a head movement of the user and a prediction of a head movement of the user, and to project the display object to the user based on the determined location of appearance of the display object. | 09-18-2014 |
20140267421 | TERMINAL AND METHOD FOR PROVIDING AUGMENTED REALITY - A method for providing augmented reality includes acquiring a real-world image including an object; transmitting terminal information, in which the terminal information includes location information of a terminal and an original retrieval distance; receiving object information corresponding to the object, in which the object information is based on the transmitted terminal information; and overlapping the received object information over the corresponding object in the real-world image. A terminal to perform the methods described herein includes a location information providing unit, an information transmitting/receiving unit, an image processing unit, and a user view analyzing unit. | 09-18-2014 |
20140285519 | METHOD AND APPARATUS FOR PROVIDING LOCAL SYNCHRONIZATION OF INFORMATION FOR AUGMENTED REALITY OBJECTS - An approach is provided for local synchronization of information for augmented reality objects. A mixed reality platform determines at least one augmented reality object of at least one augmented reality information space associated with at least one device. The mixed reality platform determines local information associated with the at least one device, one or more other devices proximate to the at least one device, or a combination thereof, based, at least in part, on a relevancy of the local information to the at least one augmented reality object. The mixed reality platform then causes, at least in part, a presentation of the local information as one or more layers of an augmented reality user interface depicting the at least one augmented reality object. | 09-25-2014 |
20140285520 | WEARABLE DISPLAY DEVICE USING AUGMENTED REALITY - A wearable display device using an augmented reality interface is disclosed. The disclosed device includes: a sensor unit configured to acquire image information; a watch recognition part configured to recognize a watch worn by a user from the acquired image information; an interface image generator part configured to generate a graphic interface image using the recognized watch as a reference and to show the graphic interface image in the acquired image; a control command recognition part configured to recognize a control command using the graphic interface image; and a processor configured to execute the recognized control command. The device provides the advantage of supporting various interface commands without requiring a separate interface device. | 09-25-2014 |
20140285521 | INFORMATION DISPLAY SYSTEM USING HEAD MOUNTED DISPLAY DEVICE, INFORMATION DISPLAY METHOD USING HEAD MOUNTED DISPLAY DEVICE, AND HEAD MOUNTED DISPLAY DEVICE - In an information display system, an information apparatus includes a target information storage section that stores target information to be published by the information apparatus and an extraction section that extracts the target information from the target information storage section on the basis of user information, that is, information regarding a user of a head mounted display device. The head mounted display device includes an information generating section that generates information for additional presentation, for providing augmented reality to the user, using the target information acquired from the information apparatus, and an image display section that enables the user to view the generated information for additional presentation as a virtual image. | 09-25-2014 |
20140285522 | SYSTEM AND METHOD FOR PRESENTING TRUE PRODUCT DIMENSIONS WITHIN AN AUGMENTED REAL-WORLD SETTING - Methods, systems, computer-readable media, and apparatuses for presenting a representation of an augmented real-world setting are presented. In some embodiments, a method includes presenting a representation of an augmented real-world setting. The method includes capturing a plurality of images of a real-world setting. The method also includes analyzing one or more real-world objects within the plurality of images of the real-world setting. The method further includes receiving information pertaining to a real-world product, wherein the information is indicative of first physical dimensions of the real-world product during a first mode of operation and second physical dimensions of the real-world product during a second mode of operation and overlaying an augmented reality object depicting the real-world product during the first mode of operation, and having the first physical dimensions, within at least one of the plurality of images of the real-world setting, based at least in part on the analyzing step. | 09-25-2014 |
20140285523 | Method for Integrating Virtual Object into Vehicle Displays - A method for the depiction of virtual objects in vehicle displays uses at least one digital image of a defined real 3D object space recorded by a camera. The method involves generating a virtual course of the road by retrieving perspective information from the digital image of the defined real 3D object space. A pre-determined virtual 3D object is then generated, which is subsequently adapted to the virtual course of the road of the defined real 3D object space, perspectively and with spatial accuracy. The adapted virtual 3D object is then integrated into the virtual course of the road of the defined real 3D object space. | 09-25-2014 |
20140292807 | ENVIRONMENT ACTUATION BY ONE OR MORE AUGMENTED REALITY ELEMENTS - Apparatuses, systems, media and methods may provide for environment actuation by one or more augmented reality elements. A location module may determine a location of one or more networked devices in a real space and/or establish a location of the one or more augmented reality elements in a virtual space, which may be mapped to the real space. A coordinator module may coordinate a virtual action in the virtual space of the one or more augmented reality elements with an actuation event by the one or more networked devices in the real space. The actuation event may correspond to the virtual action in the virtual space and be discernible in the real space. | 10-02-2014 |
20140292808 | WIRELESS DEVICES WITH FLEXIBLE MONITORS AND KEYBOARDS - A portable device (e.g., a wireless device such as a cell phone) is provided with a flexible keyboard and a flexible display screen. Such flexible components may be stored in the housing of the portable device when not in use. The flexible display screen and flexible keyboard may be expanded from the housing when the flexible components are utilized by a user. Non-flexible display and input components may be provided on the exterior of the portable device such that the device may be used, in some form, while the flexible components are stored. In one embodiment, a portion of the flexible display (or flexible keyboard) may be utilized while the flexible display (or flexible keyboard) is stored in the housing. | 10-02-2014 |
20140292809 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM - There is provided an information processing apparatus including an image acquisition part configured to acquire an image captured by an imaging part, and a display controller configured to cause a virtual object to be displayed in accordance with a recognition result of a real object shown in the image. The display controller controls the virtual object on a basis of a size of the real object in a real space. | 10-02-2014 |
20140292810 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - There is provided an information processing apparatus including an acquisition unit configured to acquire a motion state of a real object, and a display control unit configured to display a virtual object according to the acquired motion state. | 10-02-2014 |
20140292811 | MIXED REALITY IMAGE PROCESSING APPARATUS AND MIXED REALITY IMAGE PROCESSING METHOD - There is provided a mixed reality image processing apparatus capable of forming mixed reality image data that matches the illumination environment of the external world. The mixed reality image processing apparatus includes a standard illumination environment processing unit configured to extract illumination environment information, which indicates the illumination environment of the external world, from image data imaged by an imaging unit, and a local illumination environment processing unit configured to convert mixed reality image data, which is formed by combining virtual image data with the image data, into image data corresponding to the illumination environment of the external world based on the illumination environment information. | 10-02-2014 |
20140292812 | VISUAL TRAINING DEVICES, SYSTEMS, AND METHODS - Visual training aids including an eyewear article including a lens, an image generator mounted to the eyewear article in a position to display an image on the lens, a processor operatively connected to and in data communication with the image generator, a global positioning system operatively connected to and in data communication with the processor, a computer readable medium operatively connected to and in data communication with the processor. In some examples, the visual training aid includes a pair of eyeglasses. In some examples, the visual training aid includes a display monitor mounted to the eyewear. In some examples, the visual training aid includes a camera mounted to the eyewear. | 10-02-2014 |
20140300632 | AUGMENTED REALITY APPARATUS - An embodiment of the invention relates to an apparatus and related methods for providing a person with an augmented reality, the apparatus comprising: a projector that projects light that generates an augmenting visual image (AVI); optics that direct to a camera a portion of the projected light and a portion of light arriving from a scene so that a combined image comprising an image of the scene and the AVI is generated on a photosensor in the camera; and a controller that processes the combined image to compare positions of homologous features in the image of the scene with positions of corresponding location markers in the AVI, and renders an adjusted AVI based on the comparison, so that the location markers and the corresponding homologous features are substantially coincident in the combined image. | 10-09-2014 |
20140300633 | IMAGE PROCESSOR AND STORAGE MEDIUM - There is provided an image processor including a recognition part configured to recognize a captured target on the basis of a captured image, and an image generation part configured to generate an additive image for decorating the target depending on a circumstance of the user seeing the target and on a recognition result from the recognition part. | 10-09-2014 |
20140300634 | APPARATUS AND METHOD FOR IMPLEMENTING AUGMENTED REALITY BY USING TRANSPARENT DISPLAY - An apparatus and a method for implementing augmented reality by using a transparent display are provided. The method includes calculating a point-of-view of a user based on a face image captured by a front camera, detecting an image of an area depending on an angle of view identical to a viewing angle of the user based on the calculated point-of-view of the user, implementing the augmented reality by using the detected image, and outputting the implemented augmented reality to the transparent display. | 10-09-2014 |
20140300635 | INFORMATION PROCESSING APPARATUS, DISPLAY CONTROL METHOD, AND PROGRAM - There is provided an information processing apparatus including a judgment unit for judging, for each part, an anteroposterior relationship between a captured actual object and a virtual object by use of depth information, and a display control unit for displaying a virtual image in which the virtual object is projected so as to be overlapped on a captured image of the actual object, based on the anteroposterior relationship judged by the judgment unit. | 10-09-2014 |
20140300636 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - Provided is an information processing apparatus including a sound output control unit configured to generate localization information of a sound marker based on a virtual position, and a sound output unit configured to output a sound associated with the sound marker, based on the localization information, wherein the virtual position is determined based on a position of a real object present in a space. | 10-09-2014 |
20140306993 | HOLOGRAPHIC SNAP GRID - Methods for positioning virtual objects within an augmented reality environment using snap grid spaces associated with real-world environments, real-world objects, and/or virtual objects within the augmented reality environment are described. A snap grid space may comprise a two-dimensional or three-dimensional virtual space within an augmented reality environment in which one or more virtual objects may be positioned. In some embodiments, a head-mounted display device (HMD) may identify one or more grid spaces within an augmented reality environment, detect a positioning of a virtual object within the augmented reality environment, determine a target grid space of the one or more grid spaces in which to position the virtual object, determine a position of the virtual object within the target grid space, and display the virtual object within the augmented reality environment based on the position of the virtual object within the target grid space. | 10-16-2014 |
20140306994 | PERSONAL HOLOGRAPHIC BILLBOARD - Methods for generating and displaying personalized virtual billboards within an augmented reality environment are described. The personalized virtual billboards may facilitate the sharing of personalized information between persons within an environment who have varying degrees of acquaintance (e.g., ranging from close familial relationships to strangers). In some embodiments, a head-mounted display device (HMD) may detect a mobile device associated with a particular person within an environment, acquire a personalized information set corresponding with the particular person, generate a virtual billboard based on the personalized information set, and display the virtual billboard on the HMD. The personalized information set may include information associated with the particular person such as shopping lists and classified advertisements. The HMD may share personalized information associated with an end user of the HMD with the mobile device based on whether the particular person is a friend or unknown to the end user. | 10-16-2014 |
20140306995 | Virtual chroma keying in real time - This invention discloses a real-time chroma keying method that eliminates the need for a large and bulky monochromatic screen background in a live non-studio outdoor or indoor setting by creating computer-generated virtual or soft chroma keying layers. More particularly, the invention relates to a live portable hybrid chroma keying technique that programmatically generates a contiguous chroma for keying in multiple layers of computer-generated graphics as the background and foreground of a scene in real time. The invention is particularly useful in creating HD-quality special-effects video footage of consumers on the fly for use in entertainment, advertainment, advertising campaigns, immersive gaming and related industries, and can be implemented in almost any location, whether a high-footfall public place, such as a mall, airport, bus or train transit station, conference, trade show, library, museum, amusement park or similar social venue, or even the living room or home theatre of a private home. | 10-16-2014 |
20140306996 | METHOD, DEVICE AND STORAGE MEDIUM FOR IMPLEMENTING AUGMENTED REALITY - The present disclosure relates to a method, a device and a storage medium for implementing augmented reality. The method includes: obtaining a real scene and, according to the shooting position and shooting direction of the real scene, obtaining points of interest (POIs) within a preset area and POI information corresponding to the POIs, the POI information comprising position information of the corresponding POI; creating a virtual plane, mapping the position relationship between the POIs onto the virtual plane, and inserting tags of POI information at the location of the corresponding POI on the virtual plane; superimposing the virtual plane having the tags of POI information onto the real scene to form an augmented reality view; and displaying the augmented reality view, and adjusting the virtual plane according to real-time information of the real scene, so that the virtual plane is visually parallel to the horizontal plane of the real scene. | 10-16-2014 |
20140313225 | AUGMENTED REALITY AUCTION PLATFORM - An augmented reality submission includes a hologram to virtually augment a world space object and a compensation offer for presenting the hologram to a viewer of the world space object. The augmented reality submission is selected as a winning submission if the submission satisfies a selection criteria. | 10-23-2014 |
20140313226 | APPARATUS FOR HANDS-FREE AUGMENTED REALITY VIEWING - An apparatus for viewing augmented reality images in a hands-free manner includes a tracker base in which various interchangeable trackers may be placed. The interchangeable trackers include an image specialized to an application framework running on a device with an image capture device in range of the image. An arm connects to and extends from the tracker base, and includes a clasp for holding the device. With the augmented reality application running, the device provides a display of the augmented reality images associated with the image on the interchangeable tracker in a hands-free manner. The tracker base spins, providing different viewpoints of the augmented reality images. The tracking element and the image capture device are held in a fixed spatial relationship with respect to one another, resulting in a stable image. The apparatus provides interactions in a multi-system environment by invoking commands between the device and an external multimedia component. | 10-23-2014 |
20140313227 | SYSTEM FOR CREATING A COMPOSITE IMAGE AND METHODS FOR USE THEREWITH - A system includes a video device for capturing, at a viewing time, a first video image corresponding to a foundation scene at a setting, the foundation scene viewed at the viewing time from a vantage position. A memory stores a library of image data including media generated at a time prior to the viewing time. A vantage position monitor tracks the vantage position and generates vantage position data of a human viewer. A digital video data controller selects from the image data in the library, at the viewing time and based on the vantage position data, a plurality of second images corresponding to a modifying scene at the setting, the modifying scene further corresponding to the vantage position. A combiner combines the first video image and the plurality of second images to create a composite image for display. | 10-23-2014 |
20140313228 | IMAGE PROCESSING DEVICE, AND COMPUTER PROGRAM PRODUCT - An information processing apparatus and non-transitory computer readable medium cooperate to provide a control unit having circuitry configured to receive an indication of a detected posture of the information processing apparatus, and attach a virtual object to a reference environment with a posture related to the detected posture of the information processing apparatus. | 10-23-2014 |
20140320529 | VIEW STEERING IN A COMBINED VIRTUAL AUGMENTED REALITY SYSTEM - One embodiment of the present invention provides a system for assisting view-steering from a remote client machine. During operation, the system receives, at a local client from a collaboration server, a view-synchronization request for synchronizing a local scene displayed on the local client with a remote scene displayed on the remote client; generates, at the local client, a view-steering widget based on the view-synchronization request; and displays the view-steering widget on top of the local scene, thereby facilitating a local user of the local client in updating the local scene displayed on the local client in order to match it to at least a portion of the remote scene displayed on the remote client machine. | 10-30-2014 |
20140320530 | APPARATUS AND METHOD FOR RADIANCE TRANSFER SAMPLING FOR AUGMENTED REALITY - Methods, systems, computer-readable media, and apparatuses for radiance transfer sampling for augmented reality are presented. In some embodiments, a method includes receiving at least one video frame of an environment. The method further includes generating a surface reconstruction of the environment. The method additionally includes projecting a plurality of rays within the surface reconstruction of the environment. Upon projecting a plurality of rays within the surface reconstruction of the environment, the method includes generating illumination data of the environment from the at least one video frame. The method also includes determining a subset of rays from the plurality of rays in the environment based on areas within the environment needing refinement. The method further includes rendering the virtual object over the video frames based on the plurality of rays excluding the subset of rays. | 10-30-2014 |
20140320531 | COMPUTER GRAPHICS PRESENTATION SYSTEMS AND METHODS - A data processing unit generates graphics data that are sent to a display screen of a head-mountable structure worn by a user. Thereby, the user can observe the image data, which reflect a virtual reality environment implemented by the data processing unit, namely image data representing a field of view as seen by the user from a particular position and in a particular direction in the virtual reality environment. The head-mountable structure includes a first light source projecting a well-defined light pattern on a light-reflecting surface. The data processing unit is associated with an image registering unit recording image data representing the first well-defined light pattern. The data processing unit calculates the graphics data based on the image data. | 10-30-2014 |
20140320532 | WEARABLE ELECTRONIC DEVICE AND METHOD OF CONTROLLING THE SAME - A wearable electronic device capable of helping a user remember a person whom the user meets, and a method of controlling the wearable electronic device, are disclosed. The wearable electronic device includes a transparent or light-transmitting lens, a camera taking a picture of a front view of a user wearing the wearable electronic device, a display part displaying additional information on the lens, which is added to the front view recognized by the user, and a control part detecting a past image including a human face that is similar to a human face in a real image of the front view, and controlling the display part to display the past image as sub information. | 10-30-2014 |
20140320533 | Interference Based Augmented Reality Hosting Platforms - Interference-based augmented reality hosting platforms are presented. Hosting platforms can include networking nodes capable of analyzing a digital representation of a scene to derive interference among elements of the scene. The hosting platform utilizes the interference to adjust the presence of augmented reality objects within an augmented reality experience. Elements of a scene can constructively interfere, enhancing the presence of augmented reality objects, or destructively interfere, suppressing the presence of augmented reality objects. | 10-30-2014 |
20140327699 | PILOTING ASSISTANCE DEVICE CAPABLE OF DISPLAYING AN ANIMATION, AND ASSOCIATED METHOD - A device is provided for assisting in the piloting of a vehicle, in particular an aircraft. The device includes a surface displaying images and a display management unit for images designed to be displayed on the display surface superimposed on a view of the outside landscape. The management unit is capable of commanding the display of piloting information images on the display surface. The management unit is capable of commanding the display, for at least one of the piloting information images, of an animation intended to attract a user's attention to that piloting information image, the animation comprising the display of an alert image and the shrinkage of the alert image. | 11-06-2014 |
20140327700 | INFORMATION PROCESSING DEVICE AND METHOD OF PROCESSING INFORMATION - A method and apparatus adapted to input a position and orientation of a viewpoint in an image in MR space obtained by superimposing a first virtual object to be displayed on a display of a real space; to input a position and orientation of the real object; to calculate an amount of change in a relative orientation between the orientation of the viewpoint and the orientation of the real object; to switch a first virtual object to be displayed to a second virtual object to be displayed which is different from the first virtual object to be displayed when the amount of change exceeds a predetermined threshold; and to output an image in the MR space obtained by superimposing the second virtual object to be displayed on the display of the real space in accordance with the position and orientation of the viewpoint and the position of the real object. | 11-06-2014 |
20140333664 | VENDING KIOSK USER INTERFACE SYSTEMS AND METHODS - An exemplary method includes a computer-implemented vending kiosk user interface system 1) receiving a camera image captured by a mobile device, the camera image including a visual representation of a vending kiosk located within a vicinity of the mobile device, 2) detecting the visual representation of the vending kiosk within the camera image, 3) generating, based on the detecting of the visual representation of the vending kiosk within the camera image, an augmented reality image that includes a combination of camera image content included in the camera image and virtual content associated with a vending service, and 4) directing the mobile device to display the augmented reality image. Corresponding methods and systems are also disclosed. | 11-13-2014 |
20140333665 | CALIBRATION OF EYE LOCATION - Embodiments are disclosed that relate to calibrating a predetermined eye location in a head-mounted display. For example, in one disclosed embodiment a method includes displaying a virtual marker visually alignable with a real world target at an alignment condition. At the alignment condition, image data is acquired to determine a location of the real world target. From the image data, an estimated eye location relative to a location of the head-mounted display is determined. Based upon the estimated eye location, the predetermined eye location is then calibrated. | 11-13-2014 |
20140333666 | INTERACTIONS OF VIRTUAL OBJECTS WITH SURFACES - Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a display system. For example, one disclosed embodiment includes displaying a virtual object via the display system as free-floating, detecting a trigger to display the object as attached to a surface, and, in response to the trigger, displaying the virtual object as attached to the surface via the display system. The method may further include detecting a trigger to detach the virtual object from the surface and, in response to the trigger to detach the virtual object from the surface, detaching the virtual object from the surface and displaying the virtual object as free-floating. | 11-13-2014 |
20140333667 | METHOD AND APPARATUS FOR PROVIDING CONTENTS INCLUDING AUGMENTED REALITY INFORMATION - An augmented reality contents provision method and apparatus are provided. The method and apparatus includes generating augmented reality content by merging augmented reality information onto an image and updating the augmented reality information on the image. The method for providing a content including augmented reality information according to the present disclosure includes displaying a preview image input through a camera and the augmented reality information applied on to the preview image, capturing the preview image as an image in response to an augmented reality content creation request, and generating the augmented reality content by combining the image and the augmented reality information. | 11-13-2014 |
20140333668 | Augmented Reality Videogame Broadcast Programming - There is provided a system and method for integrating a virtual rendering system and a video capture system using flexible camera control to provide an augmented reality. There is provided a method comprising receiving input data from a plurality of clients for modifying a virtual environment presented using the virtual rendering system, obtaining, from the virtual rendering system, a virtual camera configuration of a virtual camera in the virtual environment, programming the video capture system using the virtual camera configuration to correspondingly control a robotic camera in a real environment, capturing a video capture feed using the robotic camera, obtaining a virtually rendered feed using the virtual camera showing the modifying of the virtual environment, rendering the composite render by processing the feeds, and outputting the composite render to the display. | 11-13-2014 |
20140340423 | Marker-based augmented reality (AR) display with inventory management - A platform to enable configuration, administration and management of augmented reality markers adapted to be scanned by an end user mobile device to enable an AR experience. The platform enables control of marker provisioning by entities who decide what content should appear in mobile applications when their AR codes are scanned by end users. The platform generates unique AR markers. A marker has a first code region and a second code region. The code regions are adapted to be scanned, preferably sequentially; the first code region encodes a first identifier identifying an External marker ID in a pattern-matching approach, and the second code region encodes a second identifier identifying an Internal marker ID in an encoding/decoding approach. In one embodiment, the first code region is generally circular and includes a central area, and the second code region is located within the central area of the first code region. | 11-20-2014 |
20140340424 | SYSTEM AND METHOD FOR RECONFIGURABLE PROJECTED AUGMENTED/VIRTUAL REALITY APPLIANCE - A system is presented comprising a head mounted display with sight-line tracking and an attachment that allows reconfiguration from projected augmented reality applications to closed virtual reality applications, as well as mixed modes. | 11-20-2014 |
20140347390 | BODY-LOCKED PLACEMENT OF AUGMENTED REALITY OBJECTS - Embodiments are disclosed that relate to placing virtual objects in an augmented reality environment. For example, one disclosed embodiment provides a method comprising receiving sensor data comprising one or more of motion data, location data, and orientation data from one or more sensors located on a head-mounted display device, and based upon the motion data, determining a body-locking direction vector that is based upon an estimated direction in which a body of a user is facing. The method further comprises positioning a displayed virtual object based on the body-locking direction vector. | 11-27-2014 |
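The body-locking direction vector described above could, for instance, be estimated by slowly low-pass filtering head-yaw samples so the estimate tracks the torso rather than quick head turns. A minimal sketch of that idea (the function name, the `alpha` smoothing factor, and the sampling scheme are assumptions for illustration, not the patent's disclosed method):

```python
import math

def body_locking_vector(yaw_samples, alpha=0.1):
    """Estimate the body-facing direction by exponentially smoothing
    head-yaw samples (radians); the slow average tracks the torso,
    ignoring brief head turns."""
    est = yaw_samples[0]
    for yaw in yaw_samples[1:]:
        # wrap the shortest angular difference into [-pi, pi]
        diff = (yaw - est + math.pi) % (2 * math.pi) - math.pi
        est += alpha * diff
    # unit direction vector in the horizontal plane
    return (math.cos(est), math.sin(est))
```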
20140347391 | HOLOGRAM ANCHORING AND DYNAMIC POSITIONING - A system and method are disclosed for displaying virtual objects in a mixed reality environment in a way that is optimal and most comfortable for a user to interact with the virtual objects. When a user is moving through the mixed reality environment, the virtual objects may remain world-locked, so that the user can move around and explore the virtual objects from different perspectives. When the user is motionless in the mixed reality environment, the virtual objects may rotate to face the user so that the user can easily view and interact with the virtual objects. | 11-27-2014 |
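The world-locked versus face-the-user behavior in this abstract can be illustrated with a simple yaw rule: keep the object's orientation while the user moves, and billboard it toward the user once the user is motionless. A hypothetical sketch (the `speed_threshold` and the 2-D coordinate convention are invented for illustration):

```python
import math

def object_yaw(user_pos, user_speed, obj_pos, obj_yaw, speed_threshold=0.05):
    """Keep the object world-locked while the user moves; when the user is
    (nearly) motionless, rotate the object to face the user."""
    if user_speed > speed_threshold:
        return obj_yaw  # world-locked: keep current orientation
    dx = user_pos[0] - obj_pos[0]
    dz = user_pos[1] - obj_pos[1]
    return math.atan2(dx, dz)  # billboard toward the user
```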
20140347392 | AUGMENTED-REALITY RANGE-OF-MOTION THERAPY SYSTEM AND METHOD OF OPERATION THEREOF - A server to perform a selected therapy on a user. The server may include a processor which may obtain activity information (AI) including information related to one or more of augmented-reality (AR) activity information, AR anatomical feature information, and range-of-motion (ROM) information; obtain user information including information related to one or more of the anatomy and physiology of the user; determine expected range-of-motion (EROM) information in accordance with the AI and the user information; track selected body parts (SBPs) of the user corresponding with the AR anatomical feature information; and/or render one or more augmented-reality limbs (ARLs) in relation with one or more corresponding SBPs of the user on a display of the system. | 11-27-2014 |
20140347393 | SERVER APPARATUS AND COMMUNICATION METHOD - A server apparatus includes a processor, configured to acquire object specifying information for specifying an object in a reality space captured by a first portable terminal, to acquire condition information including an external condition of the first portable terminal, and to acquire content information associated with the object specifying information and the condition information by referring to a database. The database stores, in association with one another, content information representing a content displayed by being superimposed on an object in a reality space captured by a second portable terminal, object information for identifying the object captured by the second portable terminal, and condition information representing a condition for displaying, in the first portable terminal, the content represented by the content information. The processor of the server apparatus is also configured to transmit the content information to the first portable terminal. | 11-27-2014 |
20140347394 | LIGHT FIXTURE SELECTION USING AUGMENTED REALITY - A fixture can include a housing, a member for positioning the housing on or proximate to a surface in a room, an augmented reality (AR) target coupled to the housing and having a unique identifier for detection by an AR software application, and a light source disposed at least partially within the housing and configured to (i) illuminate the AR target for easier detection and (ii) at least partially illuminate the room. A method can include positioning the self-illuminated AR target in a possible position for an interior decoration in the room, initiating the AR software application on a mobile computing device, capturing image data including the self-illuminated AR target with the mobile computing device, and viewing an AR view of the room on a display of the mobile computing device, the AR view including an AR view of the interior decoration at the possible position. | 11-27-2014 |
20140347395 | Augmented Reality Methods and Systems Including Optical Merging of a Plurality of Component Optical Images - The present disclosure provides augmented reality methods and systems where two or more component optical images are optically overlaid via one or more beam splitters to form composite optical images. In some embodiments a second component optical image is an electronic optical image (an image from an electronically controlled emission source) while the first component optical image is one of a physical optical image (an image of a physical object from which diffuse reflection occurs), an electronic optical image, an emission optical image (an image from a non-electronic source that emits radiation), or a hybrid optical image (composed of at least two of a physical optical image, an electronic optical image, or an emission optical image). In some embodiments the first and second component optical images are used to provide feedback concerning the quality of the overlaying and appropriate correction factors to improve the overlay quality. | 11-27-2014 |
20140354684 | SYMBOLOGY SYSTEM AND AUGMENTED REALITY HEADS UP DISPLAY (HUD) FOR COMMUNICATING SAFETY INFORMATION - An augmented reality driver system, device, and method for providing real-time safety information to a driver by detecting the presence and attributes of pedestrians and other road users in the vicinity of a vehicle. An augmented reality controller spatially overlays an augmented reality display on a volumetric heads-up display by projecting indicators, associated with the social and behavioral states of road users, in a visual field of the driver. | 12-04-2014 |
20140354685 | MIXED REALITY DATA COLLABORATION - Embodiments that relate to sharing mixed reality experiences among multiple display devices are disclosed. In one embodiment, a method includes receiving current versions of a plurality of data subtypes geo-located at a keyframe location. A world map data structure is updated to include the current versions, and a neighborhood request including the keyframe location is received from a display device. Based on the keyframe location, an identifier and current version indicator for each data subtype is provided to the device. A data request from the device for two or more of the data subtypes is received, and the two or more data subtypes are prioritized based on a priority hierarchy. Based on the prioritization, current versions of the data subtypes are sequentially provided to the device for augmenting an appearance of a mixed reality environment. | 12-04-2014 |
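The priority-hierarchy step described here amounts to ordering the requested data subtypes by a server-side ranking before streaming them sequentially. A minimal sketch under assumed names (the `prioritize` function and the subtype strings are illustrative, not from the disclosure):

```python
def prioritize(requested, hierarchy):
    """Order requested data subtypes by a priority hierarchy; subtypes
    absent from the hierarchy sort last, preserving request order
    among themselves (sorted() is stable)."""
    rank = {name: i for i, name in enumerate(hierarchy)}
    return sorted(requested, key=lambda s: rank.get(s, len(hierarchy)))
```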
20140354686 | DATA MANIPULATION BASED ON REAL WORLD OBJECT MANIPULATION - A system and method for data manipulation based on real world object manipulation is described. A device captures an image of a physical object. The image is communicated via a network to a remote server. The remote server includes virtual object data associated with the image and a communication notification for a user of the computing device. The device receives the virtual object data and displays the virtual image in a virtual landscape using the virtual object data. In response to relative movement between the computing device and the physical object caused by the user, the virtual image is modified. | 12-04-2014 |
20140354687 | COMPUTER-READABLE NON-TRANSITORY STORAGE MEDIUM HAVING STORED THEREIN INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD - At least one determination coordinate is set near a virtual camera, and it is determined whether or not an obstacle object is present within a predetermined range from the determination coordinate, in a direction based on a predetermined object. If the obstacle object is present, the virtual camera is moved so that the obstacle object will no longer be present in the predetermined range. | 12-04-2014 |
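The obstacle test sketched in this abstract, probing a range from a determination coordinate and relocating the camera when something blocks it, can be reduced to one dimension for illustration. Everything here (the function name, the 0.1 relocation margin, the 1-D layout) is an assumption, not the disclosed algorithm:

```python
def adjust_camera(cam_x, target_x, obstacles, probe_range=1.0):
    """Probe from a determination coordinate at the camera toward the
    target; if an obstacle lies within the probe range, move the camera
    just past it so the obstacle leaves the range (simplified 1-D model)."""
    step = 1 if target_x > cam_x else -1
    for obs in obstacles:
        # obstacle between the determination coordinate and the probe limit?
        if 0 < (obs - cam_x) * step <= probe_range:
            cam_x = obs + step * 0.1  # relocate past the obstacle
    return cam_x
```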
20140354688 | AUGMENTED REALITY INFORMATION PROVIDING APPARATUS AND METHOD - Provided are an apparatus and method for providing augmented reality information using a glasses-type wearable device or head mount display. The present invention may provide augmented reality information generated by overlaying POI information in a gaze direction of a user on an image in the direction. Also, the present invention may compare main POI with an object in the image to adjust position information of the user when providing the augmented reality information, thereby providing the augmented reality information having high visibility and accuracy, with the size of UI being adjusted according to the distance from the user. The present invention may allow the user to designate an object in the image by a 3D pointing operation of the user, thereby displaying more precise augmented reality information. | 12-04-2014 |
20140354689 | DISPLAY APPARATUSES AND CONTROL METHODS THEREOF - A display apparatus may include a display unit configured to display images; a first camera, mounted on a surface of the display unit on which the images are displayed, configured to acquire an image of a user's face; a second camera, mounted on a surface of the display unit opposite to the first camera, configured to acquire an image of an object; and/or a controller configured to detect a gaze direction of the user from the image of the user's face acquired by the first camera, configured to control a shooting direction of the second camera to match the detected gaze direction, and configured to display the image of the object acquired by the second camera, having an adjusted shooting direction, on the display unit. | 12-04-2014 |
20140354690 | DISPLAY APPLICATION AND PERSPECTIVE VIEWS OF VIRTUAL SPACE - A display management resource associated with a mobile device controls display of images on a respective display screen of the mobile device. The display management resource receives location information indicating a location of the mobile device in a geographical region. Additionally, the display management resource receives input from a user operating the mobile device. The input can be any suitable information such as a command to play back images on the display screen of the mobile device. The display management resource maps the input to content such as virtual images associated with a historical event that occurred in the past. Using the virtual images, the display management resource initiates display of a rendition of the virtual images from different perspectives depending on an orientation and location of the mobile device and corresponding display screen. | 12-04-2014 |
20140354691 | VOLUMETRIC HEADS-UP DISPLAY WITH DYNAMIC FOCAL PLANE - A heads-up display device for displaying graphic elements in view of a user while the user views an environment through a display screen. The heads-up display device includes at least one projector that projects a graphic element on a frontal focal plane in view of the user while the user views the environment through the display screen, and at least one projector that projects a graphic element on a ground-parallel focal plane in view of the user while the user views the environment through the display screen. The projector that projects the graphic element on the frontal focal plane is mounted on an actuator that linearly moves the projector so as to cause the frontal focal plane to move in a direction of a line-of-sight of the user. The projector that projects the ground-parallel focal plane is fixedly arranged such that the ground-parallel focal plane is static. | 12-04-2014 |
20140354692 | VOLUMETRIC HEADS-UP DISPLAY WITH DYNAMIC FOCAL PLANE - A heads-up display device for displaying graphic elements in view of a user while the user views an environment through a display screen. The heads-up display device includes at least one projector that projects a graphic element on a dynamic, frontal focal plane in view of the user while the user views the environment through the display screen, and at least one projector that projects a graphic element on a static, ground-parallel focal plane in view of the user while the user views the environment through the display screen. A controller determines a target graphic element position and a graphic element size based on the target graphic element position for the graphic element projected on the frontal focal plane, so as to provide the user with an immersive three-dimensional heads-up display. | 12-04-2014 |
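The size-from-position determination mentioned in the preceding entry is, in the simplest reading, perspective scaling: a graphic element anchored farther along the line of sight must be drawn smaller to appear fixed in the scene. A hedged sketch under a pinhole-projection assumption (names and units are invented for illustration):

```python
def element_size(base_size, focal_length, target_distance):
    """Perspective scaling: the on-screen size of a graphic element
    shrinks in proportion to the distance of its target position
    from the viewer (pinhole camera model)."""
    if target_distance <= 0:
        raise ValueError("target must be in front of the viewer")
    return base_size * focal_length / target_distance
```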
20140362110 | SYSTEMS AND METHODS FOR CUSTOMIZING OPTICAL REPRESENTATION OF VIEWS PROVIDED BY A HEAD MOUNTED DISPLAY BASED ON OPTICAL PRESCRIPTION OF A USER - Systems and methods for operating a screen of a head mounted display includes executing a program. The execution of the program causes rendering of images on the screen of the HMD. The screen renders the images using a first optical setting. A first image is presented on the screen. The first image has a first size and is presented at a distance. Input is received identifying a clarity level for the first image. A second image is presented on the screen. The second image has a second size and is presented at the same distance. Input is received identifying the clarity level for the second image. Based on the clarity level received for the first and the second images, the first optical setting for the screen is changed to a second optical setting. | 12-11-2014 |
20140362111 | METHOD AND DEVICE FOR PROVIDING INFORMATION IN VIEW MODE - A method and a device for providing information in a view mode are provided, which can discriminatively apply and display virtual information according to an importance value (or a priority) of objects in a reality image when the virtual information is mapped and displayed on the reality image of a real world acquired through a camera module in the view mode. The method includes displaying a reality image acquired in a view mode; analyzing an importance value of objects according to the reality image; determining a display range of virtual information for each of the objects according to the importance value of the objects; and displaying the virtual information for each of the objects according to the display range of the virtual information. | 12-11-2014 |
20140362112 | FRAMEWORKS, DEVICES AND METHODS CONFIGURED FOR ENABLING TRANSITION OF CONTENT IN A USER INTERFACE BETWEEN A MAP-BOUND LAYER AND A MAP-UNBOUND LAYER - Described herein are frameworks, devices and methods configured for enabling display for facility information and content, in some cases via touch/gesture controlled interfaces. Embodiments of the invention have been particularly developed for allowing an operator to conveniently access a wide range of information relating to a facility via, for example, one or more wall mounted displays. While some embodiments will be described herein with particular reference to that application, it will be appreciated that the invention is not limited to such a field of use, and is applicable in broader contexts. | 12-11-2014 |
20140362113 | CHANGE NATURE OF DISPLAY ACCORDING TO OVERALL MOTION - A head mountable display (HMD) is operable to detect the HMD's overall motion relative to its current orientation, and to display extra image material so that the extra image material moves in the same general direction as real objects would be seen by the user to move if the user were not wearing the HMD. | 12-11-2014 |
20140362114 | METHOD AND APPARATUS FOR NEEDLE VISUALIZATION ENHANCEMENT IN ULTRASOUND IMAGES - The present invention provides a method and an apparatus for needle visualization enhancement in ultrasound (US) imaging and an US imaging system. The apparatus comprises: a Radon transform (RT) unit adapted to perform RT on a sequence of frames to detect line features in the frames, a frame comprising US radio-frequency (RF) data obtained while monitoring the insertion of a needle into a subject, or an US image reconstructed from the RF data; a false needle feature removing unit adapted to remove line features which remain substantially stationary among the frames as false needles while locating a line feature which extends among the frames as the needle; and an overlaying unit adapted to overlay the location of the line feature as the needle on an US image of a frame to produce an enhanced image to be displayed. | 12-11-2014 |
20140368537 | SHARED AND PRIVATE HOLOGRAPHIC OBJECTS - A system and method are disclosed for displaying virtual objects in a mixed reality environment including shared virtual objects and private virtual objects. Multiple users can collaborate together in interacting with the shared virtual objects. A private virtual object may be visible to a single user. In examples, private virtual objects of respective users may facilitate the users' collaborative interaction with one or more shared virtual objects. | 12-18-2014 |
20140368538 | TECHNIQUES FOR AUGMENTED SOCIAL NETWORKING - Techniques for augmented social networking may include receiving an image. After receiving an image, in real time, an identity of a person in the image may be determined. Association information for the person based on the identity and one or more defined parameters may be determined. The defined parameters may represent electronic communication. Location information of the person may be determined. The association information may be presented proximate to the person in an augmented reality view using the location information. Other embodiments are described and claimed. | 12-18-2014 |
20140368539 | HEAD WEARABLE ELECTRONIC DEVICE FOR AUGMENTED REALITY AND METHOD FOR GENERATING AUGMENTED REALITY USING THE SAME - A head wearable electronic device has an image acquisition module, a physical characteristics recognition module, a see-through display module and a processing module. A method for generating augmented reality is performed by the head wearable electronic device and includes using the image acquisition module to acquire a first-person view (FPV) streaming video of a surrounding environment, using the processing module to calculate a stream of depth maps of an object and a body portion in the FPV streaming video of the surrounding environment according to the FPV streaming video, using the physical characteristics recognition module to keep track of the body portion and output motion data of the body portion, and using the processing module to display a virtual streaming video on the see-through display module according to the motion data and the stream of depth maps of the object and the body portion. | 12-18-2014 |
20140368540 | IN-VEHICLE DISPLAY APPARATUS AND PROGRAM PRODUCT - An in-vehicle display apparatus in a vehicle includes a region recognition circuit and an image output circuit. The region recognition circuit recognizes a target plane region in scenery ahead of the vehicle; the target plane region corresponds to a continuous region having (i) a flatness equal to or greater than a predetermined threshold value and (ii) an area size equal to or greater than a predetermined threshold value. The image output circuit displays a driving information picture as a virtual image using a liquid crystal panel such that a driver of the vehicle views the virtual image in the target plane region within a displayable region through a windshield of the vehicle. | 12-18-2014 |
20140368541 | INFORMATION DISPLAY APPARATUS AND INFORMATION DISPLAY METHOD - An information display apparatus includes an image pick-up unit configured to pick up an image, a display unit configured to display the image picked up by the image pick-up unit, a first correcting unit configured to correct the image picked up by the image pick-up unit to generate a first image, a second correcting unit configured to correct the image picked up by the image pick-up unit to generate a second image, a recognizing unit configured to recognize the second image generated by the second correcting unit, and a display control unit configured to display an additional information according to a result of the recognition performed by the recognizing unit with superimposing the additional information on the first image generated by the first correcting unit on the display unit. | 12-18-2014 |
20140368542 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, PROGRAM, PRINT MEDIUM, AND PRINT-MEDIA SET - There is provided an image processing apparatus including an image acquisition unit configured to acquire an image that shows a real object including a primary recognition target and a secondary recognition target, and a control unit configured to set an augmented reality space associated with the image on the basis of image recognition of the primary recognition target, and configured to decide an augmented reality object to be arranged in the augmented reality space depending on the secondary recognition target that is recognized on the basis of the image recognition of the primary recognition target. | 12-18-2014 |
20140368543 | DIRECTED COMMUNICATION IN A VIRTUAL ENVIRONMENT - The present invention relates to directed communication between avatars in a virtual environment controlled by end-users from outside the virtual environment. A method in accordance with an embodiment includes: determining a relative location of a first avatar and a second avatar in a virtual environment, wherein at least one end-user of the first avatar and the second avatar wears a headset configured to track head movements thereof, wherein the head movements of the end-user translate to their avatar and influence volume of voice communication from their avatar; adjusting aspects of a voice communication between the first avatar and the second avatar based on the relative location and the tracked head movements; referring to a list of avatars whose audio characteristics are to be portrayed differently in a voice communication; and further adjusting the audio characteristics of one of the first avatar and the second avatar in accordance with the list. | 12-18-2014 |
20140368544 | IMAGE DISPLAY DEVICE - An image display device includes: a projection port that projects image display light generated based on an image signal; and a combiner that presents a virtual image by reflecting the image display light projected from the projection port. With respect to a specific direction along a reflection surface of the combiner, the curvature of the combiner in the specific direction becomes smaller as a distance from the projection port becomes larger. The reflection surface of the combiner may be formed by a biconic surface. | 12-18-2014 |
20140375679 | Dual Duty Cycle OLED To Enable Dynamic Control For Reduced Motion Blur Control With Constant Brightness In Augmented Reality Experiences - A head-mounted display (HMD) device is provided with reduced motion blur by reducing row duty cycle for an organic light-emitting diode (OLED) panel as a function of a detected movement of a user's head. Further, a panel duty cycle of the panel is increased in concert with the decrease in the row duty cycle to maintain a constant brightness. The technique is applicable, e.g., to scenarios in which an augmented reality image is displayed in a specific location in world coordinates. A sensor such as an accelerometer or gyroscope can be used to obtain an angular velocity of a user's head. The angular velocity indicates a number of pixels subtended in a frame period according to an angular resolution of the OLED panel. The duty cycles can be set, e.g., once per frame, based on the angular velocity or the number of pixels subtended in a frame period. | 12-25-2014 |
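The constant-brightness constraint described above, where perceived brightness is proportional to the product of row and panel duty cycles so one is raised as the other is lowered, can be sketched as follows. The base values and the linear velocity-to-duty mapping `k` are invented for illustration, not taken from the disclosure:

```python
def duty_cycles(angular_velocity, base_row_duty=0.8, base_panel_duty=0.5,
                min_row_duty=0.2, k=0.01):
    """Reduce row duty cycle as head angular velocity (deg/s) grows, to
    cut motion blur, and raise panel duty cycle so total brightness
    (row_duty * panel_duty) stays constant (until panel duty saturates)."""
    row = max(min_row_duty, base_row_duty - k * angular_velocity)
    panel = min(1.0, base_panel_duty * base_row_duty / row)
    return row, panel
```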
20140375680 | TRACKING HEAD MOVEMENT WHEN WEARING MOBILE DEVICE - Methods for tracking the head position of an end user of a head-mounted display device (HMD) relative to the HMD are described. In some embodiments, the HMD may determine an initial head tracking vector associated with an initial head position of the end user relative to the HMD, determine one or more head tracking vectors corresponding with one or more subsequent head positions of the end user relative to the HMD, track head movements of the end user over time based on the initial head tracking vector and the one or more head tracking vectors, and adjust positions of virtual objects displayed to the end user based on the head movements. In some embodiments, the resolution and/or number of virtual objects generated and displayed to the end user may be modified based on a degree of head movement of the end user relative to the HMD. | 12-25-2014 |
20140375681 | ACTIVE BINOCULAR ALIGNMENT FOR NEAR EYE DISPLAYS - A system and method are disclosed for detecting angular displacement of a display element relative to a reference position on a head mounted display device for presenting a mixed reality or virtual reality experience. Once the displacement is detected, it may be corrected for to maintain the proper binocular disparity of virtual images displayed to the left and right display elements of the head mounted display device. In one example, the detection system uses an optical assembly including collimated LEDs and a camera which together are insensitive to linear displacement. Such a system provides a true measure of angular displacement of one or both display elements on the head mounted display device. | 12-25-2014 |
20140375682 | INTERACTIVE CONTROL OVER AUGMENTED REALITY CONTENT - In a method for performing a computer action to manage a visual display on an augmented reality computing device, parameters are received representing a user command entered on at least one tactile sensor of an augmented reality computing device. One or more processors determine a computer action represented by the user command. In response to determining the computer action, the display of content at a specific location on the augmented reality computing device is modified. | 12-25-2014 |
20140375683 | INDICATING OUT-OF-VIEW AUGMENTED REALITY IMAGES - Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object. | 12-25-2014 |
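Indicating an out-of-view object generally reduces to computing the signed bearing from the user's gaze direction to the object and showing a directional cue when it exceeds the half field of view. A minimal 2-D sketch (the field-of-view value and all names are assumptions for illustration):

```python
import math

def indicator_angle(forward, to_object, half_fov_deg=45.0):
    """Return the signed bearing (degrees) from the user's forward
    direction to an object, or None if the object is already within
    the field of view (no indicator needed)."""
    f = math.atan2(forward[1], forward[0])
    t = math.atan2(to_object[1], to_object[0])
    # wrap the difference into (-180, 180]
    bearing = math.degrees((t - f + math.pi) % (2 * math.pi) - math.pi)
    return None if abs(bearing) <= half_fov_deg else bearing
```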
20140375684 | Augmented Reality Technology - A method is disclosed for determining the user's position by analyzing a picture of buildings located in front of the user and comparing the picture content with a database that stores a 3D model of the buildings. The method is utilized in various indoors and outdoors augmented reality applications. For example, the method gives the user accurate directional instructions to move from a place to another. It enables the user to accurately tag parts of buildings or places with virtual digital data. Also, the method allows the user to augment parts of buildings or places with certain Internet content in a fast and simple manner. | 12-25-2014 |
20140375685 | INFORMATION PROCESSING APPARATUS, AND DETERMINATION METHOD - An information processing apparatus includes: a memory; and a processor coupled to the memory and configured to: acquire first image data, generate, when second image data of a particular object is included in the first image data, movement information regarding a position where an image corresponding to the first image data is photographed, and control execution of processing, according to execution or inexecution of work in a place where the work has to be executed, based on a determination result regarding whether or not the movement information satisfies a certain condition. | 12-25-2014 |
20140375686 | APPARATUS, METHOD, AND DEVICE - An apparatus includes a memory; and a processor coupled to the memory and configured to: receive a request of changing information that is displayed on an image in a superimposed manner corresponding to a position of an object included in the image; and associate second information with the object based on the request in place of first information displayed on the image in the superimposed manner corresponding to the position of the object. | 12-25-2014 |
20140375687 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - While a first mode is set as the operation mode of an image processing apparatus, an image of a virtual space containing a virtual object seen from a first viewpoint having a position and orientation set by a user is generated and output. While a second mode is set as the operation mode of the image processing apparatus, an image of a virtual space seen from a second viewpoint determined based on information of the first viewpoint and the position of a head mounted display device is generated and output. | 12-25-2014 |
20140375688 | MULTIUSER AUGMENTED REALITY SYSTEM - A multiuser, collaborative augmented reality (AR) system employs individual AR devices for viewing real-world anchors, that is, physical models that are recognizable to the camera and image processing module of the AR device. To mitigate ambiguous configurations when used in the collaborative mode, each anchor is registered with a server to ensure that only uniquely recognizable anchors are simultaneously active at a particular location. The system permits collaborative AR to span multiple sites, by associating a portal with an anchor at each site. Using the location of their corresponding AR device as a proxy for their position, AR renditions of the other participating users are provided. This AR system is particularly well suited for games. | 12-25-2014 |
20140375689 | IMAGE PROCESSING DEVICE AND COMPUTER READABLE MEDIUM - Disclosed is an image processing device including an image pickup unit, a display unit which displays a picked-up image obtained by the image pickup unit, a frame-in frame-out recognition unit which recognizes that a predetermined marker has framed in to or framed out from a screen of the display unit, a frame-in frame-out direction recognition unit which recognizes a frame-in direction or a frame-out direction of the marker, and a control unit which makes the display unit perform a predetermined display according to the frame-in direction or the frame-out direction of the marker. | 12-25-2014 |
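The frame-in/frame-out recognition in the entry above can be sketched as a small classifier: given a marker's previous and current screen positions, decide whether it entered or left the frame, and from which edge. This is an illustrative reconstruction, not the patent's implementation; the function name and the nearest-edge direction rule are assumptions.

```python
def classify_frame_event(prev_pos, curr_pos, width, height):
    """Classify a marker's frame-in/frame-out event and its direction.

    prev_pos / curr_pos are (x, y) marker centers, or None when the
    marker was not visible in that frame.  The screen is width x height.
    (Hypothetical helper; the patent does not specify this interface.)
    """
    def inside(p):
        return p is not None and 0 <= p[0] < width and 0 <= p[1] < height

    def edge_direction(p):
        # Take the direction from the screen edge nearest to the point.
        x, y = p
        dists = {"left": x, "right": width - 1 - x,
                 "top": y, "bottom": height - 1 - y}
        return min(dists, key=dists.get)

    if not inside(prev_pos) and inside(curr_pos):
        return ("frame-in", edge_direction(curr_pos))
    if inside(prev_pos) and not inside(curr_pos):
        return ("frame-out", edge_direction(prev_pos))
    return (None, None)
```

A control unit could then key its "predetermined display" off the returned direction.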
20140375690 | Image Mapping to Provide Visual Geographic Path - Provided is a computer system and method for mapping a visual path. The method includes receiving one or more images included in a predefined area; receiving one or more parameters associated with the image; and integrating the images and parameters into a map of the predefined area to enable mapping the visual path through the predefined area in response to one or more input path parameters. | 12-25-2014 |
20140375691 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An apparatus including an image processor configured to receive a video including an object, determine a positional relationship between the apparatus and the object, and change a positional relationship between an image superimposed on the video and the object when the positional relationship between the apparatus and the object changes. | 12-25-2014 |
20150009233 | Asset-Linked Augmented Reality With User Generated Content and User Assigned Editable Links - Systems, methods and computer program products are disclosed that facilitate content sharing coupled with a producer-designated physical asset. Specifically, in an aspect, systems, methods, and computer program products are disclosed wherein an augmented reality experience is created for a consumer. The augmented reality experience comprises content (e.g., a video, image, audio file, and the like) created by a producer for the consumer. The content is linked by the producer with a physical asset chosen by the producer via a producer key. The consumer may access the augmented reality experience via a computing device by providing a consumer key and detecting the producer-designated physical asset with the computing device. | 01-08-2015 |
20150015607 | USING VORTICES TO PROVIDE TACTILE SENSATIONS CORRESPONDING TO A VISUAL PRESENTATION - To convey tactile sensations over an open space, a system may use a vortex generator to direct one or more vortices at an object in 3-D space. Once a vortex strikes an object—e.g., a user's hand—it applies a force. The vortex generator can control the frequency and intensity of the vortices in order to provide different tactile sensations that correspond to virtual objects or events in a visual presentation. The system may identify and track objects in the real-world environment, and based on information provided by a device displaying the visual presentation, transmit instructions to the vortex generator to discharge vortices that convey a tactile sensation corresponding to the virtual object or event in the visual presentation. By doing so, the vortices augment the real-world environment to immerse the user in the visual presentation. | 01-15-2015 |
20150015608 | GLASS TYPE PORTABLE DEVICE AND INFORMATION PROJECTING SIDE SEARCHING METHOD THEREOF - Disclosed are a glass-type portable device and a method of searching an information projecting side thereof. Upon occurrence of information to be displayed, a screen at which a user is staring is analyzed. Candidate UI regions, on which the information is to be displayed on the analyzed screen, are set. An optimum UI region which satisfies a preset condition is selected from the candidate UI regions. Then the information is displayed on the optimum UI region. Under such a configuration, the glass-type portable device can effectively display information without blocking the user's view. | 01-15-2015 |
20150015609 | METHOD OF AUGMENTED REALITY COMMUNICATION AND INFORMATION - A communication method comprising the following operations: | 01-15-2015 |
20150015610 | SYSTEM AND METHOD FOR CONTROLLING DEVICE - Provided is a system and method for controlling a device using Augmented Reality (AR). A system for controlling a device using Augmented Reality (AR) includes a device server, an AR server, and a portable terminal. The device server registers information about each device. The AR server generates an AR screen displaying type information and service-related information of at least one device searched in response to a request of a portable terminal by using the registered device information, and provides the generated AR screen to the portable terminal. The portable terminal connects with a device selected among devices displayed on the AR screen and performs a specific function with the connected device. | 01-15-2015 |
20150015611 | METHOD FOR REPRESENTING VIRTUAL INFORMATION IN A REAL ENVIRONMENT - The invention relates to a method for ergonomically representing virtual information in a real environment, including the following steps: providing at least one view of a real environment and of a system setup for blending in virtual information for superimposing with the real environment in at least part of the view, the system setup comprising at least one display device, ascertaining a position and orientation of at least one part of the system setup relative to at least one component of the real environment, subdividing at least part of the view of the real environment into a plurality of regions comprising a first region and a second region, with objects of the real environment within the first region being placed closer to the system setup than objects of the real environment within the second region, and blending in at least one item of virtual information on the display device in at least part of the view of the real environment, considering the position and orientation of said at least one part of the system setup, wherein the virtual information is shown differently in the first region than in the second region with respect to the type of blending in the view of the real environment. | 01-15-2015 |
20150022551 | DISPLAY DEVICE AND CONTROL METHOD THEREOF - Discussed are a display device that copies a virtual image mapped to a marker to another marker and a control method thereof. The display device includes a sensor unit configured to sense an input to the display device, a camera unit configured to capture a surrounding image of the display device, a display unit configured to display a virtual image, and a processor configured to control the sensor unit, the camera unit, and the display unit. The processor is further configured to acquire information regarding a first virtual image mapped to a first marker and display the first virtual image using the acquired information regarding the first virtual image, when the first marker is recognized from the captured surrounding image, and maintain display of the first virtual image, when a copy trigger signal for the first virtual image is received and a second marker is overlaid on the first marker. | 01-22-2015 |
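The copy behaviour in the entry above amounts to duplicating a marker-to-virtual-image mapping when a copy trigger fires while the second marker is overlaid on the first. A minimal sketch, with hypothetical class and method names not taken from the patent:

```python
class MarkerImageRegistry:
    """Toy registry of marker-to-virtual-image mappings, sketching the
    copy-on-overlay behaviour described in the abstract above."""

    def __init__(self):
        self._images = {}

    def register(self, marker_id, virtual_image):
        self._images[marker_id] = virtual_image

    def lookup(self, marker_id):
        return self._images.get(marker_id)

    def copy_on_overlay(self, source_marker, target_marker, copy_trigger):
        # Copy only when the copy trigger fired for a marker that
        # actually has a virtual image mapped to it.
        if copy_trigger and source_marker in self._images:
            self._images[target_marker] = self._images[source_marker]
            return True
        return False
```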
20150022552 | AR IMAGE PROCESSING APPARATUS AND METHOD - A first AR analyzer ( | 01-22-2015 |
20150029218 | LATE STAGE REPROJECTION - Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image. | 01-29-2015 |
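The cheaper of the two update paths named in the entry above, the pixel offset adjustment, can be sketched as a pure shift of the pre-rendered frame. Deriving (dx, dy) from the pose change between render time and display time is omitted; this is an illustration of the adjustment step only, not the patent's implementation.

```python
def pixel_offset_reproject(frame, dx, dy, fill=0):
    """Shift a pre-rendered frame by (dx, dy) pixels, filling exposed
    border pixels with `fill`.  `frame` is a 2-D list of pixel values.
    A homographic transformation would generalize this to rotations and
    perspective changes; only the offset case is shown here."""
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        sy = y - dy
        if not 0 <= sy < h:
            continue
        for x in range(w):
            sx = x - dx
            if 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out
```

Because the shift is a simple copy, it can run at display rate even when full re-rendering cannot, which is the point of late-stage reprojection.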
20150029219 | INFORMATION PROCESSING APPARATUS, DISPLAYING METHOD AND STORAGE MEDIUM - An image processing apparatus includes: a memory and a processor coupled to the memory and configured to: calculate, based on a figure of a reference object recognized from an input image, positional information indicating a positional relationship between the reference object and an imaging position of the input image; and select, based on the positional information, at least one display data from among a plurality of pieces of display data associated with the reference object. | 01-29-2015 |
20150029220 | DETECTING AND VISUALIZING WIRELESS NETWORK DEVICES IN COMMUNICATION NETWORKS - There is provided techniques for improved visualization of wireless devices in a communication network. The techniques include receiving, via an identification device, real-time display data, determining, via the identification device, a location of a network device and displaying, via a display of the identification device, the real-time display data and an indication of the location of the network device relative to the real-time display data. | 01-29-2015 |
20150029221 | APPARATUS, METHODS, COMPUTER PROGRAMS SUITABLE FOR ENABLING IN-SHOP DEMONSTRATIONS - A functionally limited apparatus configured to enable, at a remote apparatus, an augmented reality, the apparatus comprising: a body; and computer visible features configured to control augmented reality output at the remote apparatus. | 01-29-2015 |
20150029222 | DYNAMICALLY CONFIGURING AN IMAGE PROCESSING FUNCTION - Methods and systems for dynamically configuring an image processing function into at least a first and a second detection state on the basis of function parameters are described, wherein transitions between said detection states are determined by at least a first state transition condition, and wherein said image processing function includes extracting features from an image frame, matching features with features associated with one or more target objects, and estimating pose information on the basis of matched features, and wherein the method comprises: configuring said image processing function in a first detection state on the basis of a first set of function parameter values; processing a first image frame in said first detection state; monitoring said image processing function for occurrence of said at least one state transition condition; and, if said at least one state transition condition is met, configuring said image processing function in said second detection state. | 01-29-2015 |
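The two-state configuration described in the entry above can be sketched as a small state machine: a full-detection state with expensive parameters and a tracking state with cheaper ones, switching on a match-count transition condition. The state names, parameter sets, and the threshold of 30 matches are invented for illustration and do not come from the patent.

```python
class DetectionPipeline:
    """Sketch of a two-state image processing function whose transitions
    are driven by a single condition: the number of matched features."""

    DETECT_PARAMS = {"max_features": 500}  # expensive full-frame matching
    TRACK_PARAMS = {"max_features": 100}   # cheaper frame-to-frame tracking

    def __init__(self, min_matches=30):
        self.state = "detect"
        self.params = self.DETECT_PARAMS
        self.min_matches = min_matches

    def process_frame(self, matched_feature_count):
        # Transition condition: enough matches -> tracking state;
        # too few matches -> fall back to full detection.
        if self.state == "detect" and matched_feature_count >= self.min_matches:
            self.state, self.params = "track", self.TRACK_PARAMS
        elif self.state == "track" and matched_feature_count < self.min_matches:
            self.state, self.params = "detect", self.DETECT_PARAMS
        return self.state
```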
20150029223 | IMAGE PROCESSING APPARATUS, PROJECTION CONTROL METHOD, AND PROGRAM - An information processing apparatus and method acquires an image, performs an image recognition process on the acquired image to recognize a physical object in that image, and then generates a virtual image based on the results of the image recognition process. The virtual image includes a virtual object positioned relative to the physical object that was recognized in the acquired image. A display then displays the virtual image, and a projector projects at least part of the virtual image. The apparatus and method also include modes in which the display displays the virtual image, but the projector does not project the virtual image, where the projector projects the virtual image but the display does not display the virtual image, and where the display displays the virtual image and the projector projects the virtual image. | 01-29-2015 |
20150035861 | MIXED REALITY GRADUATED INFORMATION DELIVERY - Embodiments that relate to presenting a plurality of visual information density levels for a plurality of geo-located data items in a mixed reality environment are disclosed. For example, in one disclosed embodiment a graduated information delivery program receives information for a selected geo-located data item and provides a minimum visual information density level for the item to a head-mounted display device. The program receives via the head-mounted display device a user input corresponding to the selected geo-located data item. Based on the input, the program provides an increasing visual information density level for the selected item to the head-mounted display device for display within the mixed reality environment. | 02-05-2015 |
20150035862 | MULTIPLE PERSPECTIVE VIDEO SYSTEM AND METHOD - Subject viewpoint data and external viewpoint data are collected, such as via cameras, and separate virtual or synthetic video views are generated, one from the subject viewpoint and one from an external viewpoint. The subject video view is presented to the subject, such as via a headset, while the external video view is presented to at least one other person. At least the subject video view is presented sufficiently close in time to permit the subject to react to it. A model is referenced so the video views may include a virtual or synthetic setting, or at least partial replacement of the subject with a character, or both. | 02-05-2015 |
20150035863 | System and Method for Integrating Multiple Virtual Rendering Systems to Provide an Augmented Reality - There is provided a system and method for integrating multiple virtual rendering systems to provide an augmented reality. There is provided a method for integrating multiple virtual rendering systems for outputting a composite render to a display, comprising obtaining a first environment data from a first virtual rendering system using a first camera view, obtaining a second environment data from a second virtual rendering system using a second camera view, rendering the composite render by processing the first environment and the second environment, and outputting the composite render to the display. Additionally, the first environment data and second environment data may depend on synchronized or corresponding inputs, and display priority algorithms or masking algorithms may be used to determine virtual and real object display priority. | 02-05-2015 |
20150042678 | METHOD FOR VISUALLY AUGMENTING A REAL OBJECT WITH A COMPUTER-GENERATED IMAGE - A method and system for visually augmenting a real object with a computer-generated image that includes sending a virtual model in a client-server architecture from a client computer to a server via a computer network, receiving the virtual model at the server, instructing a 3D printer to print at least a part of the real object according to the virtual model, generating an object detection and tracking configuration configured to identify at least a part of the real object, receiving an image captured by a camera representing at least part of an environment in which the real object is placed, determining a pose of the camera with respect to the real object, and overlaying at least part of a computer-generated image with at least part of the real object. | 02-12-2015 |
20150042679 | APPARATUS, METHOD, COMPUTER PROGRAM AND SYSTEM FOR A NEAR EYE DISPLAY - Embodiments of the present invention relate to an apparatus, method, computer program and system for a near eye display to cause at least: displaying visual content on a near eye display; detecting an event; and adjusting, in response to detecting the event, a visual prominence of the displayed visual content for altering a viewer's immersion level in the displayed visual content. | 02-12-2015 |
20150042680 | METHOD AND DEVICE FOR CONTROLLING A NEAR EYE DISPLAY - A method and a system for controlling a near eye display using a virtual navigation space are provided herein. The system may include: a wearable near eye display; a sensor having a field of view, attached to the wearable near eye display and configured to capture a scene; a transmitter attached to the wearable near eye display said transmitter is configured to transmit a structured light pattern onto a navigation space, wherein the sensor is configured to capture reflections of the specified pattern coming from the navigation space; and a computer processor configured to analyze said reflections and control a visual indicator presented to a user over the wearable near eye display. The method implements the aforementioned logic without being limited to the architecture. | 02-12-2015 |
20150042681 | Augmented Reality Device - An augmented reality device may consist of at least a controller, memory, and at least one screen. The augmented reality device can be configured to display an augmented reality digital content via the at least one screen with the augmented reality digital content positioned at a physical location and displayed only when a user is oriented towards the physical location. | 02-12-2015 |
20150049112 | AUTOMATIC CUSTOMIZATION OF GRAPHICAL USER INTERFACE FOR OPTICAL SEE-THROUGH HEAD MOUNTED DISPLAY WITH USER INTERACTION TRACKING - A method, an apparatus, and a computer program product render a graphical user interface (GUI) on an optical see-through head mounted display (HMD). The apparatus obtains a location on the HMD corresponding to a user interaction with a GUI object displayed on the HMD. The GUI object may be an icon on the HMD and the user interaction may be an attempt by the user to select the icon through an eye gaze or gesture. The apparatus determines whether a spatial relationship between the location of user interaction and the GUI object satisfies a criterion, and adjusts a parameter of the GUI object when the criterion is not satisfied. The parameter may be one or more of a size of the GUI object, a size of a boundary associated with the GUI object or a location of the GUI object. | 02-19-2015 |
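The adjust-on-miss rule in the entry above can be sketched as: measure the distance between the user's interaction point and the GUI object, and grow the object when the criterion is not satisfied. The miss radius of 20 pixels and the 1.25 growth factor are illustrative assumptions, as is the dictionary representation of the GUI object.

```python
def adjust_gui_object(icon, interaction_point, miss_radius=20, grow=1.25):
    """If the interaction point misses the icon center by more than
    miss_radius pixels (the spatial-relationship criterion), return a
    copy of the icon with its size enlarged; otherwise return it as-is."""
    ix, iy = interaction_point
    cx, cy = icon["center"]
    dist = ((ix - cx) ** 2 + (iy - cy) ** 2) ** 0.5
    if dist > miss_radius:
        icon = dict(icon, size=icon["size"] * grow)
    return icon
```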
20150049113 | VISUAL SEARCH IN REAL WORLD USING OPTICAL SEE-THROUGH HEAD MOUNTED DISPLAY WITH AUGMENTED REALITY AND USER INTERACTION TRACKING - A method, an apparatus, and a computer program product conduct online visual searches through an augmented reality (AR) device having an optical see-through head mounted display (HMD). An apparatus identifies a portion of an object in a field of view of the HMD based on user interaction with the HMD. The portion includes searchable content, such as a barcode. The user interaction may be an eye gaze or a gesture. A user interaction point in relation to the HMD screen is tracked to locate a region of the object that includes the portion and the portion is detected within the region. The apparatus captures an image of the portion. The identified portion of the object does not encompass the entirety of the object. Accordingly, the size of the image is less than the size of the object in the field of view. The apparatus transmits the image to a visual search engine. | 02-19-2015 |
20150049114 | EXERCISING APPLICATIONS FOR PERSONAL AUDIO/VISUAL SYSTEM - The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The personal A/V apparatus serves as an exercise program that is always with the user, provides motivation for the user, visually tells the user how to exercise, and lets the user exercise with other people who are not present. | 02-19-2015 |
20150049115 | HEAD MOUNTED DISPLAY, DISPLAY, AND CONTROL METHOD THEREOF - If the size of an error portion within an image of the frame of interest is equal to or larger than a threshold and it is determined that no error portion exists within an image of a frame immediately before the frame of interest, a display unit ( | 02-19-2015 |
20150049116 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing device including: a data storage unit storing feature data indicating a feature of appearance of one or more physical objects; an environment map building unit for building an environment map based on an input image obtained by imaging a real space and the feature data, the environment map representing a position of a physical object present in the real space; a control unit for acquiring procedure data for a set of procedures of operation to be performed in the real space, the procedure data defining a correspondence between a direction for each procedure and position information designating a position at which the direction is to be displayed; and a superimposing unit for generating an output image by superimposing the direction for each procedure at a position in the input image determined based on the environment map and the position information, using the procedure data. | 02-19-2015 |
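The procedure data in the entry above pairs an instruction ("direction") with position information naming where it should be displayed, which is then resolved against the environment map. A minimal sketch; the field names, anchor strings, and map representation are all hypothetical.

```python
# Hypothetical shape of the "procedure data": one entry per step,
# each pairing an instruction with an anchor in the environment map.
procedure_data = [
    {"direction": "Open the access panel", "anchor": "panel_front"},
    {"direction": "Disconnect the cable", "anchor": "connector_A"},
]

def place_directions(procedure_data, environment_map):
    """Resolve each step's anchor against the environment map
    (object name -> image-space position) and return (text, position)
    draw commands for the superimposing unit.  Steps whose anchor is
    not present in the map are skipped."""
    overlays = []
    for step in procedure_data:
        pos = environment_map.get(step["anchor"])
        if pos is not None:
            overlays.append((step["direction"], pos))
    return overlays
```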
20150054850 | REHABILITATION DEVICE AND ASSISTIVE DEVICE FOR PHANTOM LIMB PAIN TREATMENT - A rehabilitation device for recovering a function of a paralyzed hand includes: a camera which photographs an image of a mark placed on the paralyzed hand and outputs a photographed image; a position recognition unit which takes input of the photographed image and recognizes a position of the paralyzed hand, using the image of the mark; an image forming unit which outputs a dynamic image in which the paralyzed hand moves; and a head-mounted display which displays the dynamic image superimposed on the paralyzed hand. | 02-26-2015 |
20150062157 | METHOD AND SYSTEM OF DISPLAYING INFORMATION DURING A MEDICAL PROCEDURE - A method and system including a head mounted display for displaying information to a user when performing a medical procedure is described. A user wearing the head mounted display device can view objects, while simultaneously receiving and displaying feedback information on a procedure being performed. Text and graphical information is presented in a position on the head mounted display so that the object or a part of an object out of the line of sight appears in the same location and with the same shape, size and orientation as if the object or part of an object were visible to the user. | 03-05-2015 |
20150062158 | INTEGRATION OF HEAD MOUNTED DISPLAYS WITH PUBLIC DISPLAY DEVICES - Various arrangements for presenting private information are presented. Private information to be displayed via a head mounted display to a user may be identified. A marker displayed by a public display device may also be identified. This public display device may be visible in a vicinity of the user. The private information and an indication of the marker may be output to the head-mounted display of the user, such that the private information is displayed by the head-mounted display in relation to the marker displayed by the public display device. | 03-05-2015 |
20150062159 | DYNAMIC DISPLAY MARKERS - Various arrangements for defining a marker are presented. A first defined marker presented by a public display device may be determined to be insufficient for use by a head mounted display. The first defined marker may be used as a reference point for positioning information for display by the head mounted display. In response to determining that the first defined marker is insufficient, a second marker displayed by the public display device may be defined. The second marker may have a display characteristic different from the first defined marker. The second defined marker may then be used as the reference point for positioning the information for display by the head mounted display. An indication of the second marker may be transmitted to the head mounted display. | 03-05-2015 |
20150062160 | WEARABLE USER DEVICE ENHANCED DISPLAY SYSTEM - Systems and methods for displaying information on a wearable user device include determining calibration information based on user actions and information from images captured by a camera on a wearable user device. A field of view calibration may then be performed on a display engine in the wearable user device using the calibration information. Graphical information may then be displayed on a display device on the wearable user device according to a user field of view using the display engine in the wearable user. Field of view calibrations may be performed using calibration information that is based on: a user's head range of motion such that displayed graphical information conforms to that head range of motion, calibration objects by themselves that indicate a user's perspective in a user field of view, and calibration objects that allow the size of a user's hands to be determined and used to measure objects. | 03-05-2015 |
20150062161 | PORTABLE DEVICE DISPLAYING AUGMENTED REALITY IMAGE AND METHOD OF CONTROLLING THEREFOR - A method of controlling a portable device according to one embodiment of the present specification can display an augmented reality image corresponding to a marker before the marker is displayed. The method includes the steps of configuring an angle of view of a camera unit as a first angle of view and an angle of view of a display unit as a second angle of view, wherein the second angle of view is less than the first angle of view, detecting a marker at an angle less than the first angle of view and larger than the second angle of view, and displaying a first part of an augmented reality image corresponding to the marker, wherein the first part may correspond to a part of the augmented reality image positioned within the second angle of view. | 03-05-2015 |
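The partial-display rule in the entry above depends on where the marker's angle falls between the two angles of view: fully inside the narrower display angle, the whole AR image is shown; outside the camera angle, nothing is detected; in between, only part of the image is positioned within the display field. The sketch below returns a visible fraction; the linear falloff between the two half-angles is an assumption for illustration, not the patent's geometry.

```python
def visible_fraction(marker_angle, camera_half_fov, display_half_fov):
    """Fraction of the AR image displayed for a marker at marker_angle
    (degrees off-axis), given a camera half-angle wider than the
    display half-angle.  Returns a value in [0, 1]."""
    if display_half_fov >= camera_half_fov:
        raise ValueError("display angle of view must be the narrower one")
    a = abs(marker_angle)
    if a <= display_half_fov:
        return 1.0   # entire AR image inside the display field
    if a >= camera_half_fov:
        return 0.0   # marker not detected by the camera at all
    # Between the two: assume the visible part shrinks linearly
    # toward the edge of the camera's field of view.
    return (camera_half_fov - a) / (camera_half_fov - display_half_fov)
```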
20150062162 | PORTABLE DEVICE DISPLAYING AUGMENTED REALITY IMAGE AND METHOD OF CONTROLLING THEREFOR - A method of controlling a portable device according to one embodiment of the present specification includes detecting a marker, if the marker is detected in a first area of the portable device, displaying at least a part of the augmented reality image based on a position of the detected marker with a first augmented reality mode, wherein the augmented reality image corresponds to the marker and is assigned to a first position and if the detected marker deviates from the first area, displaying the augmented reality image assigned to the first position with a second augmented reality mode, wherein the first augmented reality mode displays the augmented reality image corresponding to the marker while the marker is positioned within the first area and wherein the second augmented reality mode displays the augmented reality image corresponding to the marker in case that the marker deviates from the first area. | 03-05-2015 |
20150062163 | PORTABLE DEVICE AND METHOD OF CONTROLLING THEREFOR - A portable device is disclosed. A method of controlling the portable device includes capturing an image in front of the portable device, detecting a marker object from the image, displaying a virtual image corresponding to the marker object based on a location of the marker object, and, when the detecting of the marker object is terminated, terminating display of the virtual image based on a first terminate mode if the gaze location of the user is detected at a first location, or based on a second terminate mode if the gaze location of the user is detected at a second location. | 03-05-2015 |
20150062164 | HEAD MOUNTED DISPLAY, METHOD OF CONTROLLING HEAD MOUNTED DISPLAY, COMPUTER PROGRAM, IMAGE DISPLAY SYSTEM, AND INFORMATION PROCESSING APPARATUS - A head mounted display which allows a user to visually recognize a virtual image and external scenery, includes a generation unit that generates a list image including a first image which is a display image of an external apparatus connected to the head mounted display and a second image of the head mounted display, and an image display unit that forms the virtual image indicating the generated list image. | 03-05-2015 |
20150062165 | IMAGE PROCESSING DEVICE AND HEAD MOUNTED DISPLAY APPARATUS INCLUDING THE SAME - To recognize a hand of the user in an image pickup region of a camera, a head mounted display stores in advance a contour shape of the hand as it would be imaged by the camera. In addition, the head mounted display receives an input of image data per pixel from the camera, calculates a difference between colors of adjacent pixels represented by the image data, sets as a group a set of image data having the same color system, where the calculated difference is within a predetermined threshold, and captures a contour of the region of the data. Next, the head mounted display compares the captured contour to the contour shape of the hand stored in advance to recognize the hand of the user in the image pickup region. | 03-05-2015 |
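The grouping step in the entry above, collecting adjacent pixels whose color difference stays within a threshold, is essentially a flood fill. A sketch on a grayscale grid; contour extraction and the comparison against the stored hand shape are omitted, and the grid representation is an assumption.

```python
def group_similar(pixels, threshold):
    """Label connected regions of a 2-D grid of grayscale values,
    joining 4-adjacent pixels whose value difference is <= threshold.
    Returns a grid of integer region labels."""
    h, w = len(pixels), len(pixels[0])
    labels = [[None] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] is not None:
                continue
            # Flood fill from this unlabeled seed pixel.
            stack = [(sy, sx)]
            labels[sy][sx] = next_label
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] is None
                            and abs(pixels[ny][nx] - pixels[y][x]) <= threshold):
                        labels[ny][nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels
```

Each resulting region's boundary pixels would then form the contour that is matched against the stored hand shape.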
20150062166 | EXPANDING A DIGITAL REPRESENTATION OF A PHYSICAL PLANE - Techniques are presented for expanding a digital representation of a physical plane from a physical scene. In some aspects, a method may include determining an orientation and an initial portion of a physical plane in the scene, and subdividing a rectified image for the scene into a plurality of grid cells. For the grid cells, an image signature may be generated. A grid cell contiguous to the obtained initial portion of the plane is determined to include part of the plane. An iterative process may be performed for each neighboring grid cell from the grid cell contiguous to at least part of the obtained initial portion, determining whether the neighboring grid cell is to be included as part of the plane if the image signature of said neighboring grid cell is similar to the image signature of a grid cell already determined to be included as part of the plane. | 03-05-2015 |
20150062167 | VISION-BASED AUGMENTED REALITY SYSTEM USING INVISIBLE MARKER - A vision-based augmented reality system using an invisible marker indicates an invisible marker on a target object to be tracked, such that it can rapidly and correctly track the target object by detecting the invisible marker. The augmented reality system includes a target object (TO) including an infrared marker (IM) drawn by an invisible infrared light-emitting material; a visible-ray camera ( | 03-05-2015 |
20150062168 | SYSTEM AND METHOD FOR PROVIDING AUGMENTED REALITY BASED DIRECTIONS BASED ON VERBAL AND GESTURAL CUES - A method and system for providing augmented reality based directions. The method and system include receiving a voice input based on verbal cues provided by one or more vehicle occupants in a vehicle. The method and system also include receiving a gesture input and a gaze input based on gestural cues and gaze cues provided by the one or more vehicle occupants in the vehicle. The method and system additionally include determining directives based on the voice input, the gesture input and the gaze input and associating the directives with the surrounding environment of the vehicle. Additionally, the method and system include generating augmented reality graphical elements based on the directives and the association of the directives with the surrounding environment of the vehicle. The method and system further include displaying the augmented reality graphical elements on a heads-up display system of the vehicle. | 03-05-2015 |
20150070386 | TECHNIQUES FOR PROVIDING AN AUGMENTED REALITY VIEW - Various embodiments are generally directed to techniques for providing an augmented reality view in which eye movements are employed to identify items of possible interest for which indicators are visually presented in the augmented reality view. An apparatus to present an augmented reality view includes a processor component; a presentation component for execution by the processor component to visually present images captured by a camera on a display, and to visually present an indicator identifying an item of possible interest in the captured images on the display overlying the visual presentation of the captured images; and a correlation component for execution by the processor component to track eye movement to determine a portion of the display gazed at by an eye, and to correlate the portion of the display to the item of possible interest. Other embodiments are described and claimed. | 03-12-2015 |
20150070387 | STRUCTURAL MODELING USING DEPTH SENSORS - Techniques are presented for constructing a digital representation of a physical environment. In some embodiments, a method includes obtaining image data indicative of the physical environment; receiving gesture input data from a user corresponding to at least one location in the physical environment, based on the obtained image data; detecting at least one discontinuity in the physical environment near the at least one location corresponding to the received gesture input data; and generating a digital surface corresponding to a surface in the physical environment, based on the received gesture input data and the at least one discontinuity. | 03-12-2015 |
20150070388 | AUGMENTED REALITY ALTERATION DETECTOR - Technologies are generally described for systems and methods effective to detect an alteration in augmented reality. A processor may receive a real image that corresponds to a real object and may receive augmented reality instructions to generate a virtual object. The processor may determine that the virtual object at least partially obscures the real object when the virtual object is rendered on a display. The processor may, upon determining that the virtual object at least partially obscures the real object when the virtual object is rendered on the display, simulate an activity on the real object to produce a first activity simulation and simulate the activity on the virtual object to produce a second activity simulation. The processor may determine a difference between the first and the second activity simulation and modify the augmented reality instructions to generate a modified virtual object in response to the determination of the difference. | 03-12-2015 |
20150070389 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD - The present disclosure is generally directed to an information processing apparatus comprising at least one processor configured to execute instructions to generate a superimposition parameter corresponding to superimposing virtual information on a real space based on a spatial position of a marker projected on the real space by an input apparatus, and cause a display section to display the virtual information superimposed on the real space according to the spatial position relationship while the marker is detectable and continue the display of the virtual information superimposed on the real space after the marker is undetectable according to a last detected spatial position of the marker. | 03-12-2015 |
20150070390 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing device including a captured image information acquisition portion which acquires captured image information corresponding to a captured image, a displayed image information acquisition portion which acquires displayed image information corresponding to a first image displayed on a display screen, and an object recognition portion which detects the position and the posture of the first image in the captured image using the displayed image information and the captured image information. | 03-12-2015 |
20150070391 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING SYSTEM - An image acquisition unit acquires an image captured by a first imaging device provided in a HMD for presenting an image observed when a three-dimensional image in a virtual three-dimensional space is projected onto a real-world setting, the first imaging device being configured to visualize an area including a field of view of a user wearing the HMD. A marker detection unit detects a marker included in the image captured by the first imaging device and acquired by the image acquisition unit. The image acquisition unit acquires an image captured by a second imaging device having an angle of view that at least partially overlaps an angle of view of the first imaging device. If the marker is not captured in the image captured by the first imaging device, the marker detection unit detects the marker in an image captured by the second imaging device. | 03-12-2015 |
20150077434 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An information processing system that acquires image data captured by an image capturing device; identifies a density of distribution of a plurality of feature points in the acquired image data; and controls a display to display guidance information based on the density of the distribution of the plurality of feature points. | 03-19-2015 |
20150077435 | SETTING METHOD AND INFORMATION PROCESSING DEVICE - A setting method executed by a computer includes acquiring display data to be associated with a reference object detected from first input image data and to be displayed when the reference object is detected from another input image, generating, by the computer, attitude information indicating an arrangement attitude of the display data relative to the reference object, based on rotation information indicating a magnitude of rotation that is applied to the computer by a user, and storing, in a storage device, setting information including the attitude information, the display data, and identification information of the reference object. | 03-19-2015 |
20150084988 | HEAD-UP DISPLAY APPARATUS FOR VEHICLE USING AUGMENTED REALITY - A head-up display device for a vehicle using augmented reality includes a distance detector configured to detect a distance between a driver's vehicle and a front vehicle in front of the driver's vehicle to output distance information. A navigator is configured to output a position of the driver's vehicle and information about a road on which the driver's vehicle drives, as driving information. An image controller is configured to calculate a transverse position of the front vehicle according to the distance information and the driving information, and generate image information corresponding to the front vehicle by using the transverse position. The image controller corrects the transverse position according to whether any one of the driver's vehicle and the front vehicle enters a junction between a curved road and a straight road. | 03-26-2015 |
20150084989 | PORTABLE AUGMENTED REALITY - The different illustrative embodiments provide a method for augmenting reality that may be applied to repairs performed on a composite structure. An image may be recorded of a first layer of a composite component. The image may relate to a first repair performed at a first time. Physical data may be captured for the composite component from a surface layer of the composite component using a number of portable devices. A multi-dimensional representation of a combined augmented reality of the composite component including a display of various layers of the composite component selected by a user may be generated. A user selection may include a time restriction and a spatial restriction. The number of portable devices may display the multi-dimensional representation comprising physical data for an image of the surface of the composite component and digital data for an overlay of composite layers beyond the surface. | 03-26-2015 |
20150084990 | AUGMENTED REALITY MEDICAL PROCEDURE AID - An apparatus for aiding a medical practitioner in performance of a medical procedure, the apparatus comprising: a photosensor that registers light; optics that directs an image of a scene of the medical procedure onto the photosensor and a retina of the practitioner; a light projector; and a controller that controls the light projector to project light to form an augmenting virtual image (AVI) on the photosensor and the retina of the practitioner but not the scene, the AVI comprising a visual representation for use in the medical procedure and a plurality of markers corresponding to selected homologous features in the scene, wherein the controller configures the AVI responsive to the scene imaged on the photosensor and the AVI formed onto the photosensor so that the markers are substantially coincident with the corresponding homologous features. | 03-26-2015 |
20150091941 | AUGMENTED VIRTUALITY - Techniques for providing a user with an augmented virtuality (AV) experience are described herein. An example of a method of providing an AV experience includes determining a location of a mobile device, determining a context based on the location, obtaining AV object information, displaying the AV object information in relation to the context, detecting an interaction with the context, modifying the AV object information based on the interaction, and displaying the modified AV object information. The context may include weighting information. The weighting information may be based on Received Signal Strength Indication (RSSI) or Round-Trip Time (RTT) data. The weighting information may be associated with a composition of a physical object in the context. A user gesture may be received, and the AV object information may be modified based on the received gesture information. | 04-02-2015 |
20150091942 | SYSTEM FOR AUTHORING AND PROVIDING AUGMENTED REALITY CONTENTS - Disclosed is a system for authoring and providing augmented reality contents, which includes a database storing a plurality of place models expressing an inherent physical place, a positioning unit for determining a current position of a user, a place model processing unit for searching and loading a place model corresponding to the current position of the user from the database, and a virtual object processing unit for disposing a virtual object expressed through a HTML document at a predetermined location in the loaded place model, wherein the plurality of place models is hierarchically stored so that at least one place model has at least one other place model as a subordinate concept. | 04-02-2015 |
20150091943 | WEARABLE DISPLAY DEVICE AND METHOD FOR CONTROLLING LAYER IN THE SAME - Discussed are a wearable display device and a method for controlling an augmented reality layer. The wearable display device may include a camera unit configured to capture an image of a user's face, a sensor unit configured to sense whether or not the user is turning his (or her) head, and a controller configured to move a virtual object belonging to a layer being gazed upon by the user's eye-gaze, when at least one of a turning of the user's head and a movement in the user's eye-gaze is identified based upon the image of the user's face captured by the camera unit and information sensed by the sensor unit, and when the user's eye-gaze is gazing upon any one of a first virtual object belonging to a first layer and a second virtual object belonging to a second layer. | 04-02-2015 |
20150091944 | MOVING OBJECT TRACKING DEVICE, MOVING OBJECT TRACKING SYSTEM AND MOVING OBJECT TRACKING METHOD - A moving object tracking device for allowing a display device to display a movement line of a moving object superimposed on a real-time image of a monitored area includes: a detection unit that detects a moving object from images of the monitored area and outputs detection position information; a first movement line generation unit that removes erroneous detection information included in the detection position information and generates a determined movement line using the detection position information having the erroneous detection information removed; a second movement line generation unit that generates a provisional movement line interpolating an undetermined section between a substantially latest detection position indicated by the detection position information and an end point of the determined movement line; and a movement line information obtaining unit that obtains movement line information relating to an integrated movement line formed of the determined movement line and the provisional movement line integrated together. | 04-02-2015 |
20150097860 | SYSTEM AND METHOD FOR DYNAMIC IN-VEHICLE VIRTUAL REALITY - A method for in-vehicle dynamic virtual reality includes receiving vehicle data from one or more vehicle systems of a vehicle, wherein the vehicle data includes vehicle dynamics data and receiving user data from a virtual reality device. The method includes generating a virtual view based on the vehicle data, the user data and a virtual world model, the virtual world model including one or more components that define the virtual view, wherein generating the virtual view includes augmenting one or more components of the virtual world model according to at least one of the vehicle data and the user data and rendering the virtual view to an output device by controlling the output device to update display of the virtual view according to the vehicle dynamics data. | 04-09-2015 |
20150097861 | SYSTEM AND METHOD FOR DYNAMIC IN-VEHICLE VIRTUAL REALITY - A method for in-vehicle dynamic virtual reality includes receiving vehicle data from one or more vehicle systems of a vehicle, wherein the vehicle data includes vehicle dynamics data and receiving user data from a virtual reality device. The method includes generating a virtual view based on the vehicle data, the user data and a virtual world model, the virtual world model including one or more components that define the virtual view, wherein generating the virtual view includes augmenting one or more components of the virtual world model according to at least one of the vehicle data and the user data and rendering the virtual view to an output device by controlling the output device to update display of the virtual view according to the vehicle dynamics data. | 04-09-2015 |
20150097862 | GENERATING AUGMENTED REALITY CONTENT FOR UNKNOWN OBJECTS - Techniques described herein provide a method for defining virtual content for real objects that are unknown or unidentified at the time of the development of the application for an augmented reality (AR) environment. For example, at the time of development of an AR application, the application developer may not know the context that the mobile device may operate in and consequently the types or classes of real object and the number of real objects that the AR application may encounter. In one embodiment, the mobile device may detect unknown objects from a physical scene. The mobile device may then associate an object template with the unknown object based on the physical attributes, such as height, shape, size, etc., associated with the unknown object. The mobile device may render a display object at the pose of the unknown object using at least one display property of the object template. | 04-09-2015 |
20150097863 | SYSTEM AND METHOD FOR DYNAMIC IN-VEHICLE VIRTUAL REALITY - A method for in-vehicle dynamic virtual reality, including receiving vehicle data from a portable device, the portable device operably connected for computer communication to an output device, the vehicle data including vehicle dynamics data, and receiving user data from at least one of the portable device or the output device. The method including generating a virtual view based on the vehicle data, the user data and a virtual world model, the virtual world model including one or more components that define the virtual view, wherein generating the virtual view includes augmenting one or more components of the virtual world model according to at least one of the vehicle data or the user data. The method including rendering the virtual view to the output device by controlling the output device to update display of the virtual view according to at least one of the vehicle data or the user data. | 04-09-2015 |
20150097864 | SYSTEM AND METHOD FOR DYNAMIC IN-VEHICLE VIRTUAL REALITY - A method for in-vehicle dynamic virtual reality includes receiving vehicle data and user data from one or more portable devices, wherein the vehicle data comprises vehicle dynamics data of the vehicle. The method includes generating a virtual view based on the vehicle data, the user data and a virtual world model. The virtual world model including one or more components that define the virtual view and wherein generating the virtual view includes augmenting one or more components of the virtual world model according to at least one of the vehicle data and the user data. The method includes rendering the virtual view to an output device by controlling the output device to update display of the virtual view according to at least one of the vehicle data or the user data. | 04-09-2015 |
20150097865 | METHOD AND COMPUTING DEVICE FOR PROVIDING AUGMENTED REALITY - A method and computing device for providing Augmented Reality (AR) is provided. The method of providing AR includes detecting at least one physical object from a real scene obtained through a camera of a computing device, rendering at least one virtual object at a desired position of the detected at least one physical object on the real scene provided on a display, enabling communication through a command for interaction between the rendered virtual objects, and enabling the at least one virtual object to perform an action in response to command communication between the virtual objects. | 04-09-2015 |
20150097866 | DISPLAY CONTROL APPARATUS, COMPUTER-IMPLEMENTED METHOD, STORAGE MEDIUM, AND PROJECTION APPARATUS - In a display control apparatus configured to control image data displayed on a predetermined display medium, an information acquisition unit acquires first information associated with a predetermined matter at a first time and second information associated with the predetermined matter at a second time after the first time. A determination unit judges whether there is a predetermined change between the first information and the second information. A control unit controls the image data such that, in a case where a predetermined change is detected, at least one of parameters including a location, a size, and a shape of a restricted area, in which the displaying of display information included in the image data is limited, is changed so as to achieve the parameter defined in relation to the second information. | 04-09-2015 |
20150097867 | SYSTEM AND METHOD FOR TRANSITIONING BETWEEN INTERFACE MODES IN VIRTUAL AND AUGMENTED REALITY APPLICATIONS - One preferred embodiment of the present invention includes a method for transitioning a user interface between viewing modes. The method of the preferred embodiment can include detecting an orientation of a mobile terminal including a user interface disposed on a first side of the mobile terminal, wherein the orientation of the mobile terminal includes an imaginary vector originating at a second side of the mobile terminal and projecting in a direction substantially opposite the first side of the mobile terminal. The method of the preferred embodiment can also include transitioning between at least two viewing modes in response to the imaginary vector intersecting an imaginary sphere disposed about the mobile terminal at a first latitudinal point having a predetermined relationship to a critical latitude of the sphere. | 04-09-2015 |
20150103096 | DISPLAY DEVICE, HEAD MOUNT DISPLAY, CALIBRATION METHOD, CALIBRATION PROGRAM AND RECORDING MEDIUM - An optically transmissive display device is configured to display additional information on a real environment visually recognized by a user. The display device includes: a position detecting unit which detects a specific position in the real environment; a calibration unit which obtains a correspondence between the specific position in the real environment and a specific position of the display device to compute calibration data for transforming from a first coordinate system of the position detecting unit at the specific position in the real environment to a second coordinate system of the display device; and a determining unit which determines a natural feature point suitable for computing the calibration data from among the natural feature points existing in the real environment, based on a taken image. The specific position in the real environment in the calibration unit is specified by detecting the position of the natural feature point determined by the determining unit. | 04-16-2015 |
20150103097 | Method and Device for Implementing Augmented Reality Application - A method for implementing an augmented reality application includes collecting an image and label information of the image, where the image has been uploaded by a user, and releasing the image and the label information of the image to a social networking contact of the user in accordance with a social graph of the user and an interest graph. The method also includes obtaining comment information from the social networking contact about the image and extracting, from the comment information, a keyword, where an occurrence frequency of the keyword is higher than a first threshold. Additionally, the method includes adding the image to an image album in accordance with the label information of the image and the keyword and generating an augmented reality pattern and augmented reality content about a describing object of the image in accordance with image features of images in the image album and the keyword. | 04-16-2015 |
20150103098 | Camera and Sensor Augmented Reality Techniques - Camera and sensor augmented reality techniques are described. In one or more implementations, an optical basis is obtained that was generated from data obtained by a camera of a computing device and a sensor basis is obtained that was generated from data obtained from one or more sensors that are not a camera. The optical basis and the sensor basis describe a likely orientation or position of the camera and the one or more sensors, respectively, in a physical environment. The optical basis and the sensor basis are compared to verify the orientation or the position of the computing device in the physical environment. | 04-16-2015 |
20150109334 | AUGMENTED REALITY AIDED NAVIGATION - In a computer-implemented method for augmented reality aided navigation, indicia corresponding to at least one physical device supporting virtualization infrastructure are observed. Based on the observed indicia, navigational cues correlating to a location of the at least one physical device are generated. The navigational cues are displayed such that augmented reality aided navigation to the at least one physical device is provided. | 04-23-2015 |
20150109335 | COMPUTER-READABLE RECORDING MEDIUM RECORDING INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD - An example computer includes: an image acquiring unit that acquires an image of a real space captured by an imaging device; a feature detecting unit that detects a feature from the image; a determining unit that determines a virtual object, or a virtual object and an aspect of the virtual object, while changing the same in accordance with a recognition condition of the detected feature; an image generating unit that generates an image of a virtual space in which the determined virtual object or the virtual object in the determined aspect is placed on the basis of the feature; and a display controlling unit that displays an image on a display device such that the image of the virtual space is visually recognized by a user while being superimposed on the real space. | 04-23-2015 |
20150109336 | COMPUTER-READABLE RECORDING MEDIUM RECORDING INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD - An example information processing system includes: a computer; an imaging device; a display device; and a first and a second feature placed in a real space. The computer includes: an image acquiring unit that acquires an image of the real space; a feature detecting unit that detects the first feature and the second feature from the image; a changing unit that changes an association of the first feature with a virtual object by adding a virtual object associated with the second feature; a generating unit that generates an image of a virtual space in which the virtual object associated with the first feature is placed at a position based on the first feature; and a display controlling unit that displays an image on the display device. | 04-23-2015 |
20150109337 | FEEDBACK TO USER FOR INDICATING AUGMENTABILITY OF AN IMAGE - Methods and systems for determining augmentability information associated with an image frame captured by a digital imaging part of a user device. The determined augmentability score may then be used in the generation of feedback to the user. For example, a graphical user interface may be generated and rendered having a substantially continuous visual output corresponding to the augmentability information. | 04-23-2015 |
20150109338 | WIDE AREA AUGMENTED REALITY LOCATION-BASED SERVICES - Apparatus, methods and systems of providing AR content are disclosed. Embodiments of the inventive subject matter can obtain an initial map of an area, derive views of interest, obtain AR content objects associated with the views of interest, establish experience clusters and generate a tile map tessellated based on the experience clusters. A user device could be configured to obtain and instantiate at least some of the AR content objects based on at least one of a location and a recognition. | 04-23-2015 |
20150109339 | METHOD AND APPARATUS FOR IMPLEMENTING AUGMENTED REALITY - A method and an apparatus for implementing augmented reality are provided, where the method includes: acquiring real image information by using a camera; acquiring one or more target objects selected by a user from the real image information; with respect to each target object, acquiring an identification image that is used to identify the target object and categorization information of the target object separately; acquiring an image template corresponding to the categorization information, and performing image matching of the image template with the identification image to identify each target object; and acquiring augmented information corresponding to each identified target object, and simultaneously displaying the identification image of the target object and the augmented information to form the augmented reality. The present invention reduces limitations on using augmented reality, making it more convenient for the user. | 04-23-2015 |
20150116354 | MIXED REALITY SPOTLIGHT - Various embodiments relating to creating a virtual shadow of an object in an image displayed with a see-through display are provided. In one embodiment, an image of a virtual object may be displayed with the see-through display. The virtual object may appear in front of a real-world background when viewed through the see-through display. A relative brightness of the real-world background around a virtual shadow of the virtual object may be increased when viewed through the see-through display. The virtual shadow may appear to result from a spotlight that is fixed relative to a vantage point of the see-through display. | 04-30-2015 |
20150116355 | REFERENCE IMAGE SLICING - Methods and systems for generating reference feature sets for slices of a reference image. The reference feature sets generated from slices enable better object recognition and/or tracking when a camera image shows only a portion of the reference image. Metadata is used to link the reference feature sets of the original image and of the slices together as belonging to the same object, providing hierarchical relationship information and/or spatial relationship information. An image processing function may be dynamically configured on the basis of whether an object has been successfully detected and the metadata associated with the object. | 04-30-2015 |
20150116356 | Anchors for location-based navigation and augmented reality applications - A method for encoding information includes specifying a digital value and providing a symbol ( | 04-30-2015 |
20150116357 | DISPLAY DEVICE - According to one embodiment, a display device includes a light emitter, a reflector, a virtual image position controller, and a holder. The light emitter emits light flux including an image. The reflector is in front of an eye of a viewer. The reflector is partially reflective and partially transparent and reflects the light flux toward the eye to form a virtual image. The virtual image position controller controls a position of the virtual image. The holder holds the reflector. The virtual image position controller sets the position of the virtual image to a first position on a line connecting the eye and a background object in front of the eye and subsequently moves the position of the virtual image to a second position on the line that lies closer to the reflector than the first position. | 04-30-2015 |
20150116358 | APPARATUS AND METHOD FOR PROCESSING METADATA IN AUGMENTED REALITY SYSTEM - An apparatus for processing metadata includes: a map node defining component configured to define a map node for setting a virtual map; a map overlay node defining component configured to define a map overlay node for setting a layer in which an augmented reality object is to be overlaid on a map set according to the map node; a map marker node defining component configured to define a map marker node for setting a position of the augmented reality object on the map, which is to be overlaid on the layer set according to the map overlay node; a point of interest node defining component configured to set information on a point of interest, which is a position of the augmented reality object on the map; and a controller configured to load the virtual map, the layer, the map marker, and the point of interest. | 04-30-2015 |
20150116359 | DISPLAY APPARATUS WITH IMAGE-CAPTURING FUNCTION, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE DISPLAY SYSTEM - A display apparatus with an image-capturing function includes an outputting unit configured to output an image signal to an external apparatus, an inputting unit configured to input an image signal from the external apparatus, an image-capturing unit, a display unit, an image-capture-distortion corrector configured to perform image-capture-distortion correction on an image signal captured by the image-capturing unit, a display-distortion corrector configured to perform display-distortion correction, and a controller configured to control whether or not the image-capture-distortion corrector is to perform the image-capture-distortion correction, and whether or not the display-distortion corrector is to perform the display-distortion correction. Therefore, distortion caused by the image-capturing system and display system of the display apparatus with an image-capturing function can be appropriately corrected in the overall system including the display apparatus with an image-capturing function and the external apparatus. | 04-30-2015 |
20150123996 | VIDEO OUTPUTTING APPARATUS, THREE-DIMENTIONAL VIDEO OBSERVATION DEVICE, VIDEO PRESENTATION SYSTEM, AND VIDEO OUTPUTTING METHOD - An object classification unit | 05-07-2015 |
20150123997 | Information Display System Including Transmission Type HMD, Non-Transitory Computer-Readable Storage Medium and Display Control Method - Provided are an information display system, a non-transitory computer-readable storage medium, and a display control method. The information display system includes a transmission type head-mounted display and a control section. The control section includes a gaze point detecting section configured to detect a point of gaze of a user, a first judgment section configured to judge whether the user gazes at a certain area on a virtual screen or on the background beyond the virtual screen, a second judgment section configured to judge whether a sight-line region around the intersection of the virtual screen and a sight line of the user overlaps with an object displayed on the virtual screen, and a display control section configured to, in response to a movement of the point of gaze, change at least one of a display position and a display mode of the object on the basis of judgment results. | 05-07-2015 |
20150130834 | INTERACTIVE AUGMENTED REALITY FOR MEMORY DIMM INSTALLATION - Digital images are captured of uninstalled memory modules, an identifying portion of a target computer system, and empty memory module sockets within the target computer system. The captured digital images are analyzed to identify each of the uninstalled memory modules and the number and type of empty memory module sockets. A predetermined set of installation rules associated with the target computer system are used to determine a memory module configuration that identifies the uninstalled memory modules to be installed in the empty memory module sockets. Real-time digital video of a user installing each of the memory modules in one of the empty memory module sockets is captured and displayed on a display device. The displayed digital video is augmented with a computer generated graphic element or audio identifying which empty memory module socket should receive a particular memory module. Other pluggable components may be similarly configured and installed. | 05-14-2015 |
20150130835 | INTERACTIVE AUGMENTED REALITY FOR MEMORY DIMM INSTALLATION - Digital images are captured of uninstalled memory modules, an identifying portion of a target computer system, and empty memory module sockets within the target computer system. The captured digital images are analyzed to identify each of the uninstalled memory modules and the number and type of empty memory module sockets. A predetermined set of installation rules associated with the target computer system are used to determine a memory module configuration that identifies the uninstalled memory modules to be installed in the empty memory module sockets. Real-time digital video of a user installing each of the memory modules in one of the empty memory module sockets is captured and displayed on a display device. The displayed digital video is augmented with a computer generated graphic element or audio identifying which empty memory module socket should receive a particular memory module. Other pluggable components may be similarly configured and installed. | 05-14-2015 |
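The two DIMM-installation entries above describe applying a predetermined set of installation rules to map identified modules onto empty sockets. As an illustration only (the names and the lowest-socket-first rule are assumptions, not the patents' actual rule set), such a mapping step might look like:

```python
# Hypothetical sketch of assigning identified memory modules to empty
# sockets: match each module's type to a compatible empty socket, filling
# lowest-numbered sockets first, as an installation rule might require.

def plan_installation(modules, empty_sockets):
    """Map each module id to a compatible socket id, or raise if impossible.

    modules       -- list of (module_id, ddr_type) tuples
    empty_sockets -- list of (socket_id, ddr_type) tuples
    """
    free = sorted(empty_sockets)              # lowest-numbered sockets first
    plan = {}
    for module_id, ddr_type in modules:
        match = next((s for s in free if s[1] == ddr_type), None)
        if match is None:
            raise ValueError(f"no compatible socket for {module_id}")
        free.remove(match)                    # each socket used at most once
        plan[module_id] = match[0]
    return plan
```

The resulting plan would drive the augmented video overlay, highlighting the target socket for whichever module the user picks up next.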
20150130836 | ADAPTING CONTENT TO AUGMENTED REALITY VIRTUAL OBJECTS - Technologies for adapting content to augmented reality virtual objects include a content consumption device to render selected content and a mobile computing device to render a virtual object within the physical environment of the mobile computing device. The mobile computing device may transfer the virtual object to the content consumption device. The content consumption device may adapt the content based on the transferred virtual object, and render the adapted content. The adapted content may be selected from a number of pre-defined scenes, or may be generated to include the virtual object. The adapted content may include other characters or objects that react to the transferred virtual object. The virtual object may be transferred back to the mobile computing device with updated attributes. The content may be streamed from a content source over a network. Other embodiments are described and claimed. | 05-14-2015 |
20150130837 | HEAD-MOUNTED DISPLAY - A head-mounted display includes a display unit, a detector, and a first control unit. The display unit is mountable on a head of a user and configured to be capable of providing the user with a field of view of a real space. The detector detects an azimuth of the display unit around at least one axis. The first control unit includes a region limiter, a storage unit, and a display control unit. The region limiter is capable of limiting a display region of the field of view along a direction of the one axis in three-dimensional coordinates surrounding the display unit. The storage unit stores images including information relating to a predetermined target present in the field of view with the images being made corresponding to the three-dimensional coordinates. The display control unit is configured to display, based on an output of the detector, an image in the three-dimensional coordinates, which corresponds to the azimuth, in the field of view. | 05-14-2015 |
20150130838 | DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND PROGRAM - There is provided a display control device including an image acquiring section configured to acquire a moving image shot from a viewpoint changing from moment to moment, a spatial position specifying section configured to specify a spatial position in a first frame of the moving image, and a display control section configured to display the moving image, in such a manner to maintain the spatial position in a predetermined state in a second frame after the first frame. | 05-14-2015 |
20150130839 | DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND PROGRAM - There is provided a display control device including a matching section configured to match a first image or sensor data output from a first imaging device or a sensor worn on a head of a first user, to a second image output from a second imaging device worn on a part other than the head of the first user, a sight estimation section configured to estimate a region corresponding to a sight of the first user in the second image, on the basis of a result of the matching, and a display control section configured to generate an image expressing the sight of the first user using the second image on the basis of a result of the estimation of the sight, and display the image expressing the sight of the first user toward a second user that is different from the first user. | 05-14-2015 |
20150130840 | SYSTEM AND METHOD FOR REPORTING EVENTS - Disclosed is a system for reporting changes to a network in case of an event. The system includes a survey unit adapted to be located at a site of the network using data from a positioning sensor of the survey unit. The survey unit is configured to request from a control unit an augmented view related to the location of the site of the network and displays the augmented view in a display of the survey unit on top of a current view of the site. The survey unit is adapted to capture a photograph on the display and to communicate the photograph to the control unit. The control unit is configured to determine changes in the network by comparing the current view as shown in the photograph with the augmented view, and to create an event report including a catalog of the changes to the network. | 05-14-2015 |
20150130841 | METHODS AND COMPUTING DEVICES TO MEASURE MUSCULOSKELETAL MOVEMENT DEFICIENCIES - Methods and computing devices for measuring a range of motion of a musculoskeletal joint in a human or animal patient are provided. | 05-14-2015 |
20150138231 | Method, device and system for realizing augmented reality information sharing - A method, device and system for realizing augmented reality information sharing are disclosed. The method includes that an intelligent mobile terminal transmits an obtained current image and determined current position information to a cloud end server; the cloud end server sends the received image to a computer related to the intelligent mobile terminal, obtains attribute information of the image according to the received position information, and sends the attribute information to the intelligent mobile terminal and the computer; and the intelligent mobile terminal and the computer respectively overlap the image and the attribute information to form augmented reality information. The disclosure can realize sharing of augmented reality information. | 05-21-2015 |
20150138232 | AR DISPLAY DEVICE, PROCESS CONTENTS SETTING DEVICE, PROCESS CONTENTS SETTING METHOD AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM - Disclosed is an AR display device including: a display unit configured to display an augmented reality space; a camera configured to shoot the reality space; an object detecting unit configured to detect a predetermined object in the reality space; a display control unit configured to prepare a simulation image which simulatedly shows a processing result and to instruct the display unit to display the augmented reality space in which the simulation image is overlapped with the predetermined object; an operation detecting unit configured to detect an operation for the simulation image in the augmented reality space; and a setting information output unit configured to output setting information in order for the predetermined device to obtain the processing result, wherein the display control unit changes the simulation image according to the operation, and the setting information output unit outputs the setting information corresponding to the changed simulation image. | 05-21-2015 |
20150138233 | DEVICES, SYSTEMS, AND METHODS FOR EXAMINING THE INTERACTIONS OF OBJECTS IN AN ENHANCED SCENE - Systems, devices, and methods obtain a sequence of images of a physical scene that includes a physical representation of a first object; calculate a sequence of first transform values of the physical representation of the first object based on the sequence of images; store the sequence of first transform values; generate an enhanced scene; maintain the first object in the enhanced scene at positions and orientations that are indicated by the sequence of first transform values; receive an indication of selected transform values in the sequence of first transform values; retrieve the selected transform values; and generate a replay image of the enhanced scene, from a second observer viewpoint, that shows the first object at the position and the orientation that are indicated by the selected transform values. | 05-21-2015 |
20150138234 | METHOD FOR EFFECT DISPLAY OF ELECTRONIC DEVICE, AND ELECTRONIC DEVICE THEREOF - A method for effect display of an electronic device, and the electronic device thereof are provided. The method in the electronic device includes obtaining a correlation among a plurality of objects extracted from an image, and displaying at least any one of the plurality of objects by adding an effect, on the basis of the correlation. Further, other exemplary embodiments are also included in the present disclosure in addition to the aforementioned exemplary embodiments. | 05-21-2015 |
20150138235 | COLLIMATED DISPLAY DEVICE FOR AUGMENTED REALITY AND METHOD THEREOF - There are provided a device for displaying virtual reality overlapping the real world and a method thereof. A collimated display device for augmented reality includes a virtual image providing unit configured to modulate an image of a virtual object into light and project the result; and a collimation mirror made of a translucent material that reflects light of the image of the virtual object to a user's field of vision and provides the image of the virtual object overlapping the real world. Therefore, it is possible for the user to see an image of the virtual object that is an additional image matching the real world. | 05-21-2015 |
20150138236 | DISPLAY CONTROL DEVICE AND METHOD - A display control device includes circuitry configured to acquire a movement distance of an imaging device configured to acquire a plurality of images including a first image and a second image, calculate, when a specific object is detected from the first image, a first positional relationship between the specific object and the imaging device, control a display to superimpose a specific image corresponding to the specific object on the first image based on the first positional relationship, and control the display to superimpose the specific image on the second image based on the first positional relationship, when the specific object is not detected from the second image and the movement distance is smaller than a specific value, the movement distance being from a first position where the first image is captured by the imaging device to a second position where the second image is captured by the imaging device. | 05-21-2015 |
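The display-control rule in the entry above (keep the overlay at its last known pose while the object is lost, but only until the camera has moved too far) can be sketched as follows. This is a minimal illustration, not the patent's implementation; all names and the threshold value are assumptions.

```python
import math

THRESHOLD_M = 0.5  # assumed movement threshold, in meters


def distance(p, q):
    """Euclidean distance between two 3-D camera positions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))


def choose_overlay_pose(detected_pose, last_pose, last_cam_pos, cam_pos):
    """Return the pose to use for the overlay, or None to hide it.

    detected_pose -- object pose from the current frame, or None if lost
    last_pose     -- pose computed at the last successful detection
    last_cam_pos  -- camera position at that detection
    cam_pos       -- current camera position
    """
    if detected_pose is not None:
        return detected_pose          # object visible: use the fresh pose
    if last_pose is not None and distance(last_cam_pos, cam_pos) < THRESHOLD_M:
        return last_pose              # lost, but camera barely moved: reuse
    return None                       # lost and moved too far: drop overlay
```

Reusing the stale pose only within a small movement radius keeps the overlay stable through brief detection dropouts without letting it drift visibly.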
20150145887 | PERSISTENT HEAD-MOUNTED CONTENT DISPLAY - A method for persistently displaying selected virtual content includes: selecting desired content; selecting, using a head-mounted display, a physical location as a reference location for the desired content to be displayed virtually as a window of desired content, the physical location being in a line of sight of, but separate from, the head-mounted display; and displaying at least a portion of the window of desired content using the head-mounted display such that the at least a portion of the window of desired content appears to a user of the head-mounted display to be disposed at the physical location regardless of changes of orientation, location, or orientation and location of the head-mounted display. | 05-28-2015 |
20150145888 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing device including an acquisition section configured to acquire a captured image captured by an imaging section, and a data processing section configured to superimposedly display a virtual image generated by changing an input image on the captured image in a display section. The data processing section displays, on the display section, the virtual image generated by changing one of a relative position and a relative angle of the imaging section and the input image, which are virtually set, in a time series. | 05-28-2015 |
20150145889 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing device including an acquisition section configured to acquire space position information of a terminal including a display section that displays an image in which a virtual image is superimposed on a captured image acquired by an imaging section, and a specifying section configured to specify display information for displaying the virtual image on the display section according to the space position information of the terminal. | 05-28-2015 |
20150291160 | VEHICLE CRUISE CONTROL APPARATUS AND METHOD - A vehicle cruise control apparatus of a vehicle includes a display and a controller. The display displays virtual information by projecting the virtual information on an information display region in front of a driver of the vehicle through augmented reality. The controller determines whether the front road is a slope road or a curved road based on road information while controlling driving of the vehicle according to an inter-vehicle distance and a driving speed of the preceding vehicle, determines whether the preceding vehicle exists in an information display region in front of a driver when it is determined that the front road is the slope road or the curved road, and controls the display to display a virtual preceding vehicle in the information display region through augmented reality when it is determined that the preceding vehicle does not exist. | 10-15-2015 |
20150294503 | REALITY AUGMENTING METHOD, CLIENT DEVICE AND SERVER - A reality augmenting method, a client device and a server are provided. The reality augmenting method includes: obtaining information related to an object to be identified, in which the information includes image information of the object; sending the information to a server; receiving augmented information of the object and display position information of the augmented information returned from the server according to the information; and displaying the augmented information and the image information simultaneously according to the display position information. | 10-15-2015 |
20150294504 | MARKER-BASED PIXEL REPLACEMENT - A videographic system uses a videographic camera to obtain a temporal series of digital images of a scene and substitutes the video appearance of display boards in the scene with sub-images from a database. 3D vectorized tracking markers disposed rigidly with respect to the display boards enable a controller to geometrically adapt the sub-images for changing perspectives of the videographic camera. The markers may have a rotationally asymmetric pattern of contrasting portions with perimeters that have sections that are mathematically describable curves. The markers may be monolithically integrated with the display boards. The adapted images may be supplied to an interactive display system, along with pixel coordinate information about the sub-images and resource location identifiers associated with the sub-images. This allows linking to a networked resource by selecting the sub-image with a digital pointing and selecting device. The system may be configured to replace televised advertising board information with geometrically adapted user-targeted advertisements. | 10-15-2015 |
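The marker-based pixel replacement above hinges on geometrically adapting a flat sub-image to the camera's current perspective, which is conventionally done with a planar homography. The sketch below (an illustration under assumed names, not the patent's method) maps sub-image pixel coordinates into video-frame coordinates given a 3x3 homography H; a full system would warp and composite every pixel, e.g. with OpenCV's cv2.warpPerspective.

```python
import numpy as np

def project(H, points):
    """Apply homography H to an (N, 2) array of pixel coordinates."""
    pts = np.hstack([points, np.ones((len(points), 1))])   # to homogeneous
    mapped = pts @ H.T                                     # apply H to each point
    return mapped[:, :2] / mapped[:, 2:3]                  # back to 2-D
```

Projecting the sub-image's four corners this way also yields the pixel-coordinate region the interactive display system needs for click-through linking.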
20150294505 | HEAD MOUNTED DISPLAY PRESENTATION ADJUSTMENT - Embodiments are disclosed for adjusting a presentation on a head-mounted display (HMD). In one or more example embodiments, a method of dynamically orienting a presentation of a HMD includes gathering HMD sensor data via at least one HMD sensor that is installed on an HMD worn by a driver of the vehicle and gathering vehicle sensor data via at least one vehicle mounted sensor mounted to the vehicle. The example method further includes performing an analysis of the HMD sensor data and of the vehicle sensor data to identify a difference between the HMD sensor data and the vehicle sensor data, and calculating, based on the difference, an orientation of the HMD device in relation to the vehicle. The method further includes adjusting a presentation of data on a display of the HMD device based on the orientation. | 10-15-2015 |
20150294506 | System and Method for Augmented Reality Display of Dynamic Environment Information - A method for providing environment information to a mobile device user is presented. The method comprises receiving a request for target environment information from a mobile device, determining the pose of the mobile device relative to the target environment, and obtaining target environment data for one or more measurable environment parameters (e.g., radiation level). The target environment data is used to assemble augmented reality information configured for viewing in conjunction with a real-time view of the target environment captured by the mobile device. The augmented reality information is then transmitted to the mobile device for display to the user. | 10-15-2015 |
20150294507 | SAVING AUGMENTED REALITIES - Saving augmented realities includes collecting, with an augmented reality device, observation information of a physical space, and obtaining, with the augmented reality device, an augmentation associated with the physical space. An augmented view of the physical space including a visual representation of the augmentation is visually presented with the augmented reality device, and the augmented view is saved for subsequent playback. | 10-15-2015 |
20150294508 | EFFECTUATING MODIFICATIONS WITHIN AN INSTANCE OF A VIRTUAL SPACE PRESENTED VIA MULTIPLE DISPARATE CLIENT COMPUTING PLATFORMS RESPONSIVE TO DETECTION OF A TOKEN ASSOCIATED WITH A SINGLE CLIENT COMPUTING PLATFORM - Exemplary implementations may facilitate effectuating modifications within an instance of a virtual space presented via multiple disparate client computing platforms responsive to detection of a token associated with a single client computing platform. In some implementations, tokens may be detected based on signals received from token readers associated with individual ones of the multiple client computing platforms. A given token may be a standalone physical object. The given token may be detectable based on a signal conveying information associated with the given token. The information associated with the given token conveyed by the signal may be devoid of virtual space content. | 10-15-2015 |
20150296324 | Method and Apparatus for Interacting Between Equipment and Mobile Devices - A method interacts between equipment and a mobile device by first selecting the equipment using the mobile device. A communication link is established between the mobile device and the equipment. In response, data from the equipment is received by the mobile device, and the equipment and the mobile device then interact according to the data, wherein the equipment is within a visible range of the mobile device. | 10-15-2015 |
20150301334 | COMPACT HEAD-UP DISPLAY HAVING A LARGE EXIT PUPIL - The invention relates to a head-up display including an element ( | 10-22-2015 |
20150301335 | COMPACT AND ENERGY-EFFICIENT HEAD-UP DISPLAY - The invention relates to a head-up display comprising a group of optical sub-systems ( | 10-22-2015 |
20150301596 | Method, System, and Computer for Identifying Object in Augmented Reality - A method, a system, and a computer for identifying an object in augmented reality, the identification method includes: a computer receiving a user's left eye pupil position and right eye pupil position input by an input device, computing spatial coordinates of a visual focus of the eyes according to the left eye pupil position and the right eye pupil position; the computer receiving spatial coordinates of each virtual object input by the input device, and comparing the spatial coordinates of each virtual object with the spatial coordinates of the visual focus of the eyes to determine a virtual object to be operated by the user. | 10-22-2015 |
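The entry above computes a 3-D visual focus from the two pupil positions and then selects the virtual object nearest that focus. One standard way to realize the focus step (shown here as an illustrative sketch, not the patent's actual algorithm; ray origins and directions would be derived from the reported pupil positions) is to triangulate the two gaze rays by taking the midpoint of their closest points:

```python
import numpy as np

def gaze_focus(o1, d1, o2, d2):
    """Midpoint of the closest points between two gaze rays o + t*d."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:                 # rays (nearly) parallel
        t, s = 0.0, e / c
    else:                                 # closed-form closest-point params
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
    return ((o1 + t * d1) + (o2 + s * d2)) / 2.0


def pick_object(focus, objects):
    """Return the virtual object whose coordinates are closest to the focus."""
    return min(objects, key=lambda obj: np.linalg.norm(obj["pos"] - focus))
```

Because the two gaze rays rarely intersect exactly in practice, the midpoint of their closest approach is the usual robust estimate of the vergence point.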
20150301787 | SYSTEMS AND METHODS FOR GENERATING SOUND WAVEFRONTS IN AUGMENTED OR VIRTUAL REALITY SYSTEMS - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 10-22-2015 |
20150302597 | SYSTEMS AND METHODS FOR PREPARING CUSTOM CLOTHING PATTERNS - Methods and systems of preparing custom clothing patterns are described. In particular, custom clothing patterns are prepared by obtaining a 3-D image of an individual, determining points of measurements of the individual from the 3-D image, and modulating a digital clothing pattern template comprising measurement locations corresponding to the points of measurements by applying the measurements to the corresponding measurement locations of the digital pattern template, and altering the pattern based on the measurements. | 10-22-2015 |
20150302625 | GENERATING A SOUND WAVEFRONT IN AUGMENTED OR VIRTUAL REALITY SYSTEMS - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 10-22-2015 |
20150302642 | ROOM BASED SENSORS IN AN AUGMENTED REALITY SYSTEM - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 10-22-2015 |
20150302643 | STRESS REDUCTION IN GEOMETRIC MAPS OF PASSABLE WORLD MODEL IN AUGMENTED OR VIRTUAL REALITY SYSTEMS - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 10-22-2015 |
20150302644 | RENDERING TECHNIQUES TO FIND NEW MAP POINTS IN AUGMENTED OR VIRTUAL REALITY SYSTEMS - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 10-22-2015 |
20150302645 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, PROGRAM, AND TERMINAL DEVICE - An information processing system that acquires image data; distorts the acquired image data according to a predetermined distortion criterion; acquires an object image corresponding to an object that is at least partially obstructed in the acquired image; combines the object image with the distorted image data; and outputs the distorted image data combined with the object image. | 10-22-2015 |
20150302646 | SPATIAL LOCATION PRESENTATION IN HEAD WORN COMPUTING - Aspects of the present invention relate to the presentation of digital content, in a see-through display, representing a known location in an environment proximate to a head worn computer. | 10-22-2015 |
20150302647 | SPATIAL LOCATION PRESENTATION IN HEAD WORN COMPUTING - Aspects of the present invention relate to the presentation of digital content, in a see-through display, representing a known location in an environment proximate to a head worn computer. | 10-22-2015 |
20150302649 | POSITION IDENTIFICATION METHOD AND SYSTEM - A method includes acquiring a first image including a specific object and captured at an imaging position, generating first three-dimensional information based on a first shape of the specific object, the first three-dimensional information corresponding to the imaging position, generating second three-dimensional information based on a specific depth value and a designated position on the first image, generating first line information based on the first and the second three-dimensional information, acquiring a second image including the specific object and captured at another imaging position, generating third three-dimensional information based on a second shape of the specific object, the third three-dimensional information corresponding to the another imaging position, generating second line information based on the second and the third three-dimensional information, generating a fourth three-dimensional information based on the first and the second line information, and storing the fourth three-dimensional information associated with a content. | 10-22-2015 |
20150302650 | Methods and Systems for Providing Procedures in Real-Time - Systems and methods for providing users with an augmented view of a work environment are provided. The method includes downloading data relevant to a component in the work environment onto a mobile device. The work environment is navigated to locate the component based on prompts provided by the mobile device. An augmented reality (AR) marker located proximate to the component is scanned with the mobile device to access interactive procedures relevant to the component. One or more of the interactive procedures are performed. | 10-22-2015 |
20150302651 | SYSTEM AND METHOD FOR AUGMENTED OR VIRTUAL REALITY ENTERTAINMENT EXPERIENCE - Augmented Reality (AR) and Virtual Reality (VR) headsets, such as the Google Glass® and Oculus Rift® systems, respectively, are poised to become significant new factors in computer environments, including gaming, virtual tourism, and the like. Such systems may be advantageously employed in the playback and rendering of books, in particular audio books. Systems and methods according to present principles generally provide an audio playback experience, of an audio book, while displaying scenes pertaining to the audio book on the screen of an AR or VR system, e.g., a headset or other environment. | 10-22-2015 |
20150302653 | Augmented Digital Data - A system for augmented digital data is disclosed. The system is comprised of a first electronic device and a second electronic device. The first electronic device is comprised of a first display presenting a first digital data. The second electronic device is comprised of a second display and input device. The second electronic device is simultaneously presenting a second digital data while the first display is presenting the first digital data. An image of the second electronic device including the second display and input device is presented on the first display, with an image of the user's hands/digits overlaying the image of the input device to indicate their relative location. Accordingly, the system allows the user to simultaneously use multiple electronic devices without gaze-altering interruptions. | 10-22-2015 |
20150302654 | THERMAL IMAGING ACCESSORY FOR HEAD-MOUNTED SMART DEVICE - A thermal imaging accessory (TIA) is linked with a head-mounted smart device (HMSD) with a data display for displaying data for an eye of a user wearing the HMSD. The HMSD supports the TIA in an orientation where a field-of-view of a thermal imaging camera of the TIA is substantially in alignment with the field-of-view of an eye looking through the data display. The HMSD is configured to: link the TIA to the HMSD, activate a thermal imaging application on the HMSD to receive data from the TIA and display it on an HMSD data display, receive thermal imaging data of a target from the TIA, process the thermal imaging data received from the TIA, and initiate a display of the processed thermal imaging data on the HMSD data display. | 10-22-2015 |
20150302655 | USING A MAP OF THE WORLD FOR AUGMENTED OR VIRTUAL REALITY SYSTEMS - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 10-22-2015 |
20150302656 | USING A MAP OF THE WORLD FOR AUGMENTED OR VIRTUAL REALITY SYSTEMS - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 10-22-2015 |
20150302657 | USING PASSABLE WORLD MODEL FOR AUGMENTED OR VIRTUAL REALITY - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 10-22-2015 |
20150302658 | COMPENSATING FOR AMBIENT LIGHT IN AUGMENTED OR VIRTUAL REALITY SYSTEMS - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 10-22-2015 |
20150302659 | UTILIZING IMAGE BASED LIGHT SOLUTIONS FOR AUGMENTED OR VIRTUAL REALITY - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 10-22-2015 |
20150302660 | SYSTEMS AND METHODS FOR USING IMAGE BASED LIGHT SOLUTIONS FOR AUGMENTED OR VIRTUAL REALITY - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 10-22-2015 |
20150302661 | INFERENTIAL AVATAR RENDERING TECHNIQUES IN AUGMENTED OR VIRTUAL REALITY SYSTEMS - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 10-22-2015 |
20150302662 | USING OBJECT RECOGNIZERS IN AN AUGMENTED OR VIRTUAL REALITY SYSTEM - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 10-22-2015 |
20150302663 | RECOGNIZING OBJECTS IN A PASSABLE WORLD MODEL IN AN AUGMENTED OR VIRTUAL REALITY SYSTEM - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 10-22-2015 |
20150302664 | AVATAR RENDERING FOR AUGMENTED OR VIRTUAL REALITY - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 10-22-2015 |
20150310279 | Augmented Reality Display of Dynamic Target Object Information - A method for providing target object information to a mobile device user is presented. The method includes receiving a request for target object information from a mobile device, determining the pose of the mobile interface device relative to the target object, and obtaining target object data for one or more measurable target object parameters (e.g., surface topography). The target object data is used to assemble augmented reality information configured for viewing in conjunction with a real-time view of the target object captured by the mobile interface device. The target object augmented reality information is then transmitted to the mobile device for display to the user. | 10-29-2015 |
20150310617 | DISPLAY CONTROL DEVICE AND DISPLAY CONTROL METHOD - A display control device that calculates an imaging position and an imaging posture of a camera capturing a plurality of captured images with a marker as a reference, based on position information of the marker recognized from the captured images, and superimposes and displays augmented information on the captured image based on the calculated imaging position and imaging posture of the camera. The device includes a memory and a processor to execute a plurality of instructions stored in the memory to perform: acquiring a second captured image captured by the camera on which the augmented information is to be superimposed and displayed, and a plurality of first captured images captured by the camera at a time different from that of the second captured image, on which the augmented information is to be superimposed and displayed; and storing image feature information. | 10-29-2015 |
20150310664 | AUGMENTED REALITY BASED MANAGEMENT OF A REPRESENTATION OF A SMART ENVIRONMENT - A capability for managing a representation of a smart environment is presented herein. The capability for managing a representation of a smart environment is configured to support augmented reality (AR)-based management of a representation of a smart environment, which may include AR-based generation of a representation of the smart environment, AR-based alignment of the representation of the smart environment with the physical reality of the smart environment, and the like. | 10-29-2015 |
20150310667 | SYSTEMS AND METHODS FOR CONTEXT BASED INFORMATION DELIVERY USING AUGMENTED REALITY - Generally discussed herein are systems, apparatuses, and methods for providing contextually relevant augmented reality (AR) to a wearable device. In one or more embodiments, a method can include extracting features of one or more objects in a location in a field of view of a camera of a wearable device, retrieving a three dimensional model of the location based on the extracted features, assessing a situation of a user associated with the wearable device, modifying the three dimensional model based on the assessed user situation, and presenting, using the wearable device, at least a portion of the modified three dimensional model. | 10-29-2015 |
20150310668 | HEAD-WORN PLATFORM FOR INTEGRATING VIRTUALITY WITH REALITY - The invention relates to a device for obtaining and processing 3D images. | 10-29-2015 |
20150310669 | BLENDING REAL AND VIRTUAL CONSTRUCTION JOBSITE OBJECTS IN A DYNAMIC AUGMENTED REALITY SCENE OF A CONSTRUCTION JOBSITE IN REAL-TIME - A method of blending at least one virtual construction jobsite object and at least one real construction jobsite object in a dynamic augmented reality scene of a construction jobsite includes several steps. One step involves capturing a depth map image via a depth sensing device, and capturing a red, green, and blue (RGB) image via an RGB image-capturing device. Another step involves registering the depth map image and the RGB image to a common coordinate system. Yet another step involves projecting the at least one virtual construction jobsite object in a scene of the construction jobsite with the use of geographical information system (GIS) data, computer-aided design (CAD) data, or both types of data, to generate the augmented reality scene of the construction jobsite. And yet another step involves removing hidden surfaces of the at least one virtual construction jobsite object in the augmented reality scene. | 10-29-2015 |
20150310671 | SYSTEMS AND METHODS FOR AUGMENTED REALITY INTERACTION - An augmented reality system is described that comprises a movable object comprising an object hardware component; a control hardware component for wirelessly transmitting and receiving signals via a communication link to the object hardware component; and a software component stored on a non-transitory computer-readable medium and in operable communication with the control hardware component. An application user interface is provided for enabling a user to provide command input for controlling the movement of the movable object via the object hardware component. | 10-29-2015 |
20150310672 | AUGMENTED REALITY METHOD APPLIED TO THE INTEGRATION OF A PAIR OF SPECTACLES INTO AN IMAGE OF A FACE - Method for creating a final real-time photorealistic image of a virtual object, corresponding to a real object arranged on an original photo of a user, in a realistic orientation related to the user's position, includes: detecting the presence of an area for the object in the photo; determining the position of characteristic points of the area for the object in the photo; determining the 3D orientation of the face, the angles φ and ψ of the camera having taken the photo relative to the principal plane of the area; selecting the texture to be used for the virtual object, in accordance with the angle-of-view, and generating the view of the virtual object in 3D; creating a first layered rendering in the correct position consistent with the position of the placement area for the object in the original photo; obtaining the photorealistic rendering by adding overlays to obtain the final image. | 10-29-2015 |
20150317518 | HEAD-MOUNT TYPE DISPLAY DEVICE, CONTROL SYSTEM, METHOD OF CONTROLLING HEAD-MOUNT TYPE DISPLAY DEVICE, AND COMPUTER PROGRAM - A transmissive head-mount type display device includes an image display section adapted to display a virtual image, and capable of transmitting an external sight, an object acquisition section adapted to obtain a selectable object located in a predetermined distance range from the image display section, and a position of a specific object included in the external sight, and a control section adapted to display an object-correspondence virtual image associated with the object obtained as the virtual image using the image display section, identify a change in the position of the specific object based on the position of the specific object obtained, select the object based on a relationship between the change in the position of the specific object identified and a position of the object obtained, and display a specific check image associated with the object selected as the virtual image using the image display section. | 11-05-2015 |
20150317829 | Explorable Augmented Reality Displays - Concepts and technologies are disclosed herein for explorable augmented reality displays. An augmented reality service can receive a request for augmented reality display data. The request can be associated with a device. The augmented reality service can determine a location associated with the device and identify augmented reality data associated with the location. The augmented reality service can provide augmented reality display data to the device. | 11-05-2015 |
20150317832 | WORLD-LOCKED DISPLAY QUALITY FEEDBACK - Embodiments that relate to communicating to a user of a head-mounted display device an estimated quality level of a world-lock display mode are disclosed. For example, in one disclosed embodiment sensor data is received from one or more sensors of the device. Using the sensor data, an estimated pose of the device is determined. Using the estimated pose, one or more virtual objects are displayed via the device in either the world-lock display mode or in a body-lock display mode. One or more of input uncertainty values of the sensor data and pose uncertainty values of the estimated pose are determined. The input uncertainty values and/or pose uncertainty values are mapped to the estimated quality level of the world-lock display mode. Feedback of the estimated quality level is communicated to a user via the device. | 11-05-2015 |
20150317833 | POSE TRACKING AN AUGMENTED REALITY DEVICE - An augmented reality device including a plurality of sensors configured to output pose information indicating a pose of the augmented reality device. The augmented reality device further includes a band-agnostic filter and a band-specific filter. The band-specific filter includes an error correction algorithm configured to receive pose information as filtered by the band-agnostic filter and reduce a tracking error of the pose information in a selected frequency band. The augmented reality device further includes a display engine configured to position a virtual object on a see-through display as a function of the pose information as filtered by the band-agnostic filter and the band-specific filter. | 11-05-2015 |
20150317835 | AUTOMATED PATRON GUIDANCE - In one embodiment, a method comprises determining, by a first access network computing node at a venue, a position of a person based on an image of the person captured with at least one camera at the venue; controlling rendering, by the first access network computing node, of an icon moving toward a destination in response to a determined movement of the person; and handing-off, by the first access network computing node, the controlling rendering of the icon to a second access network computing node in response to the position of the person moving from a first domain zone associated with the first access network computing node to a second domain zone associated with the second access network computing node. | 11-05-2015 |
20150317837 | COMMAND DISPLAYING METHOD AND COMMAND DISPLAYING DEVICE - An electronic device is provided. The electronic device includes a display configured to display information, and an augmented reality module that is implemented by a processor, the augmented reality module configured to recognize an external object for the electronic device, and display at least one text corresponding to a voice command corresponding to an application or function related to the external object, through the display. | 11-05-2015 |
20150317838 | REGISTRATION FOR VEHICULAR AUGMENTED REALITY USING AUTO-HARMONIZATION - A method, a system, and a computer program product for tracking an object moving relative to a moving platform, the moving platform moving relative to an external frame of reference. The system includes an inertial sensor for tracking the object relative to the external reference frame, a non-inertial sensor for tracking the object relative to the moving platform, and a processor to perform sensor fusion of the inertial and non-inertial measurements in order to accurately track the object and concurrently estimate the misalignment of the non-inertial sensor's reference frame relative to the moving platform. | 11-05-2015 |
20150317839 | INTERACTING WITH TOTEMS IN AUGMENTED OR VIRTUAL REALITY SYSTEMS - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 11-05-2015 |
20150317956 | HEAD MOUNTED DISPLAY UTILIZING COMPRESSED IMAGERY IN THE VISUAL PERIPHERY - A system, method and wearable display unit for presenting content to a user is disclosed. A display is worn by a user in front of eyes of the user. The display includes a peripheral display area for displaying content to the user in at least one peripheral visual area of the user. A processor provides content to the peripheral display area. A database may store content and the processor obtains the content from the database. The database may be separate from the wearable display unit. A camera of the wearable display unit may also provide the content. | 11-05-2015 |
20150319355 | Coupled Light Field Camera and Display - A coupled light field camera and display system, the system comprising a light field camera device and a light field display device, the light field camera device configured to capture an input light field video stream, the light field display device configured to display an output light field video stream based on the input light field video stream. | 11-05-2015 |
20150319376 | Creating and Customizing a Colorable Image of a User - The invention provides, among other things, methods and systems for creating a customized colorable image of a user. An exemplary method may include receiving an indication of a selected colorable background. The method may further include capturing a real-time image of the user, where the user is adjacent to a predetermined background. An image of the predetermined background may be filtered out of the real-time image of the user to generate a filtered real-time image of the user. Then, the filtered real-time image of the user against the selected colorable background may be presented. | 11-05-2015 |
20150324645 | EYEWEAR-TYPE TERMINAL AND METHOD OF CONTROLLING THE SAME - Provided is an eyewear-type terminal including a display unit on which picture information is displayed; a sensing unit that senses a period of time for which a user's gaze has been fixed in a state where a user wears the eyewear-type terminal; and a controller that collects information relating to something that the user gazes toward, in a case where the user's gaze has been fixed for a period of reference time or longer, and controls the display unit in such a manner that, among the pieces of collected information, at least one piece of collected information, is displayed on the display unit. | 11-12-2015 |
20150325027 | METHOD AND SYSTEM FOR REDUCING MOTION SICKNESS IN VIRTUAL REALITY RIDE SYSTEMS - A virtual reality ride system including a headset, a control unit, and a dynamic platform. The headset includes a display unit configured to display a video of an animated virtual environment. The control unit includes one or more processors configured to perform render processing that renders the video of the virtual environment; event motion processing that generates first data representing motions associated with events in the virtual environment; and low-frequency motion processing that generates second data representing low-frequency vibrations unrelated to the events in the virtual environment. The dynamic platform is configured to produce the motions associated with the events in the virtual environment based on the first data, and to produce the low-frequency vibrations based on the second data. The low-frequency vibrations include a frequency between about 5 Hz and 70 Hz. | 11-12-2015 |
20150325047 | APPARATUS AND METHOD FOR PROVIDING AUGMENTED REALITY FOR MAINTENANCE APPLICATIONS - A method for providing maintenance instructions to a user is provided. The method obtains a set of data, comprising maintenance instructions, macrolocation data, and object recognition characteristics associated with a target apparatus; guides the user to a macrolocation and a microlocation of the target apparatus, based on the obtained set of data; and provides the maintenance instructions associated with the target apparatus, when the user has reached the macrolocation and the microlocation. | 11-12-2015 |
20150325049 | AUGMENTING A DIGITAL IMAGE - For augmenting a digital image, code identifies a structure image in a digital image. The code further augments the digital image with structure information for the structure image and/or with the structure image removed. | 11-12-2015 |
20150325050 | APPARATUS AND METHOD FOR PROVIDING AUGMENTED REALITY - A personalized augmented reality providing apparatus includes an interest object determiner configured to determine an interest object among external objects each having a predetermined relationship with a user, a relationship identifier configured to identify a subjective relationship between the interest object determined by the interest object determiner and the user, an additional information generator configured to generate additional information representing a current relationship state between the interest object and the user based on the subjective relationship identified by the relationship identifier, and an additional information provider configured to provide the user with the additional information generated by the additional information generator. | 11-12-2015 |
20150325051 | Method, apparatus and system for rendering virtual content - A method of rendering virtual content is disclosed. A position to render each of a first and a second portion of virtual content on an augmented reality device is determined. Each of the first and second portion of virtual content is linked to a corresponding physical document. The position for rendering each of the first and second portions of virtual content is determined according to a position of the corresponding physical documents with the position of the first portion of virtual content being adjacent to the position of the second portion of virtual content. Viewing zones for each of the first and second portions of virtual content are determined, based on the determined positions. Each of the viewing zones is defined as a physical region where an augmentation of the corresponding portion of virtual content is visible on the augmented reality device. The viewing zone of the second portion of virtual content is modified to reduce any overlap with the viewing zone of the first portion of virtual content. The first and the second portions of virtual content are rendered on the augmented reality device according to the determined viewing zones. | 11-12-2015 |
20150325052 | IMAGE SUPERPOSITION OF VIRTUAL OBJECTS IN A CAMERA IMAGE - A method superposes a virtual graphical object on a camera image of a real item. The camera image is displayed by a display device. The method takes into consideration when superposing virtual graphical objects also the real items imaged in the camera image. To this end, a distance of the item from the display device is captured by a capturing device. By way of object data, a virtual object distance of the object from the display device is given. The object is only superposed on the camera image if the object distance is less than the captured distance of the item. | 11-12-2015 |
20150325054 | INDICATING OUT-OF-VIEW AUGMENTED REALITY IMAGES - Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object. | 11-12-2015 |
20150332500 | Handheld GIS data collection device target augmentation - Target augmentation makes GIS data collection devices more intuitive and useful for GIS workers, and reduces GIS data collection errors. | 11-19-2015 |
20150332502 | GLASS TYPE MOBILE TERMINAL - A glass type mobile terminal is provided. The mobile terminal includes a band frame wearable on a user's head, a light transmissive lens coupled to the band frame to be located in front of the user wearing the band frame, the light transmissive lens comprising a transparent screen where an image is focused, a projector for outputting an image toward the transparent screen from a lateral surface of the light transmissive lens, a photo shutter coupled to a front surface of the lens, with a controllable transparency, and a controller for controlling the transparency of the photo shutter, such that the visibility of the image focused on the light transmissive lens may be enhanced by controlling the transparency of the light transmissive lens, using the photo shutter, so that a clear image may be seen even in a bright place. | 11-19-2015 |
20150332504 | Method for Representing Virtual Information in a Real Environment - A method for representing virtual information in a view of a real environment is provided that includes: providing a system setup including at least one display device, wherein the system setup is adapted for blending in virtual information on the display device in at least part of the view, determining a position and orientation of a viewing point relative to at least one component of the real environment, providing a geometry model of the real environment, providing at least one item of virtual information and a position of the at least one item of virtual information, determining whether the position of the item of virtual information is inside a 2D or 3D geometrical shape, determining a criterion which is indicative of whether the built-in real object is at least partially visible or non-visible in the view of the real environment, and blending in the at least one item of virtual information on the display device in at least part of the view of the real environment. | 11-19-2015 |
20150332505 | Method for Representing Virtual Information in a Real Environment - A method for representing virtual information in a view of a real environment is provided that includes the following steps: providing a system setup comprising at least one display device, determining a position of a viewing point relative to at least one component of the real environment, providing a geometry model of the real environment, providing at least one item of virtual information and its position, determining a visualization mode of blending in the at least one item of virtual information on the display device according to the position of the viewing point and the geometry model, calculating a ray between the viewing point and the item of virtual information, and determining a number of boundary intersections by the ray, wherein if the number of boundary intersections is less than 2, the item of virtual information is blended in a non-occlusion mode, otherwise in an occlusion mode. | 11-19-2015 |
20150332506 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - Arrangement positions in a physical space, which are set in advance for a plurality of indices, are acquired. Among indices in an image of the physical space, an index that satisfies a predetermined condition is specified as a target index. Notification of the arrangement position of the target index is performed. | 11-19-2015 |
20150332507 | Positioning of projected augmented reality content - A method of displaying augmented reality content on a physical surface is disclosed. A surface complexity measure is determined for the physical surface from a captured image of the physical surface. A content complexity measure is determined for the augmented reality content to be applied to the physical surface. The content complexity measure represents an amount of fine detail in the augmented reality content. The method determines if the amount of fine detail in the augmented reality content is to be modified, based on a function of the surface complexity measure and said content complexity measure. A display attribute of the augmented reality content is adjusted to modify the fine detail in the augmented reality content. The modified augmented reality content is displayed on the physical surface. | 11-19-2015 |
20150332512 | AUGMENTED REALITY CONTENT RENDERING VIA ALBEDO MODELS, SYSTEMS AND METHODS - Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user. | 11-19-2015 |
20150332513 | AUGMENTED REALITY DISPLAY OF SCENE BEHIND SURFACE - Embodiments are disclosed that relate to augmenting an appearance of a surface via a see-through display device. For example, one disclosed embodiment provides, on a computing device comprising a see-through display device, a method of augmenting an appearance of a surface. The method includes acquiring, via an outward-facing image sensor, image data of a first scene viewable through the display. The method further includes recognizing a surface viewable through the display based on the image data and, in response to recognizing the surface, acquiring a representation of a second scene comprising one or more of a scene located physically behind the surface viewable through the display and a scene located behind a surface contextually related to the surface viewable through the display. The method further includes displaying the representation via the see-through display. | 11-19-2015 |
20150332514 | RENDERING A DIGITAL ELEMENT - Rendering a digital element is disclosed. An indication that a device is within a region associated with the digital element is received. It is determined that the digital element is to be rendered. A representation of the digital element is generated in a rendered view of the region. The digital element is provided upon receiving an indication that the digital element has been selected. | 11-19-2015 |
20150332566 | SYSTEM WITH WEARABLE DEVICE AND HAPTIC OUTPUT DEVICE - A system includes a wearable device, a second device remote from and in communication with the wearable device, a processor configured to generate a control signal representative of an event occurring in an environment related to the wearable device and/or the second device, and a haptic output device configured to provide haptic feedback based on the generated control signal. | 11-19-2015 |
20150338191 | COMPACT RIFLESCOPE DISPLAY ADAPTER - This disclosure describes a compact and lightweight riflescope display adapter configured to be affixed in front of the objective lens of a riflescope. The display adapter includes a receptacle that enables the adapter to be electrically connected to a ballistic computer, rangefinder or other targeting mechanism. The display adapter is configured to receive aimpoint information and project illuminated symbology that is brought into focus by the riflescope optics in such a way that the symbology appears to overlay an image of a scene on which the riflescope is focused. The display adapter includes a casing that houses processing circuitry, a light emitting diode, polarizer, polarized beam splitter, liquid crystal on silicon imaging element and reflective element. The display adapter also includes a light bar, spherical mirror, quarter-wave plate and an additional polarized beam splitter contained within the light bar. | 11-26-2015 |
20150338913 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - There is provided an information processing apparatus that controls display of a virtual object displayed in an extended work space in which a real object and the virtual object are operable, the information processing apparatus including an operation deciding unit configured to decide an operation process to the virtual object displayed in the extended work space on the basis of a result of analysis of input information to the extended work space, the analysis being based on position information of an information terminal detected in the extended work space and display control trigger information for changing display of the virtual object, and a display control unit configured to execute a display control process of the virtual object on the basis of the decided operation process. | 11-26-2015 |
20150338915 | SYSTEMS AND METHODS FOR BIOMECHANICALLY-BASED EYE SIGNALS FOR INTERACTING WITH REAL AND VIRTUAL OBJECTS - Systems and methods are provided for discerning the intent of a device wearer primarily based on movements of the eyes. The system can be included within unobtrusive headwear that performs eye tracking and controls screen display. The system can also utilize remote eye tracking camera(s), remote displays and/or other ancillary inputs. Screen layout is optimized to facilitate the formation and reliable detection of rapid eye signals. The detection of eye signals is based on tracking physiological movements of the eye that are under voluntary control by the device wearer. The detection of eye signals results in actions that are compatible with wearable computing and a wide range of display devices. | 11-26-2015 |
20150339453 | CONTEXT BASED AUGMENTED REALITY - Context based AR may include receiving a first wireless signal from a pair of context based AR glasses worn by a user. The context based AR glasses may include a display viewable by the user and a camera to image an object viewed by the user. The image of the object may be analyzed, and compared to images of objects stored in a database that includes information associated with the images of the objects. Based on a match, the object viewed by the user may be identified. Based on collaboration of the user with personnel disposed remotely from the user, and the identified object, a second wireless signal may be sent to the pair of context based AR glasses to provide information related to the collaboration, and to further superimpose the information associated with the identified object adjacent to and/or on top of the object viewed by the user. | 11-26-2015 |
20150339532 | Virtualization of Tangible Interface Objects - An example system includes a stand configured to position a computing device proximate to a physical activity surface. The system further includes a video capture device, a detector, and an activity application. The video capture device is coupled for communication with the computing device and is adapted to capture a video stream that includes an activity scene of the physical activity surface and one or more interface objects physically interactable with by a user. The detector is executable to detect motion in the activity scene based on processing of the video stream and, responsive to detecting the motion, process the video stream to detect one or more interface objects included in the activity scene of the physical activity surface. The activity application is executable to present virtual information on a display of the computing device based on the one or more detected interface objects. | 11-26-2015 |
20150339534 | DRIVE SUPPORT DISPLAY DEVICE - A drive support display device supporting a drive of a self-vehicle based on a detection result by an object detector regarding a nearby area of the self-vehicle. The device receives object information from an external source, the object information identifying a nearby object existing in the nearby area of the self-vehicle, determines whether the nearby object is detected by the object detector, and provides a warning indicator when it determines that the nearby object is an undetected nearby object that is not detected by the object detector. As a result, the undetected nearby object, which is already identified as existing somewhere nearby the self-vehicle but is invisible therefrom, may be emphasized in an image provided by the drive support display device. | 11-26-2015 |
20150339819 | METHOD FOR PROCESSING LOCAL INFORMATION - A method for processing local information acquired by a virtual representation and a device having an inertial unit and an image sensor. At least one image of a real environment of the device is captured. The localization of the device in the virtual representation, corresponding to the localization of the device in the real environment, is obtained by correlating the portions of the captured image and portions of the virtual representation. The inertial unit determines the displacement of the device. The localization of the device in the virtual representation is modified as a function of the displacement so that the real position of the device corresponds, during the displacement, to the localization of the device in the virtual representation. | 11-26-2015 |
20150339839 | METHODS AND SYSTEMS FOR GENERATING AND JOINING SHARED EXPERIENCE - According to an example, a computer may receive characteristics information of an object in a video stream captured by a first computing device, generate a signature based on the characteristics information, identify an augmented reality information associated with the signature, transmit the augmented reality information to the first computing device, receive, from a second computing device, a set of characteristics information of the object in an image captured by the second computing device, determine that the set of characteristics information from the second computing device has a second signature that matches the signature generated based on the characteristics information received from the first computing device, and transmit the identified augmented reality information to the second computing device. | 11-26-2015 |
20150339855 | LASER POINTER SELECTION FOR AUGMENTED REALITY DEVICES - In an approach to selecting a real world object for display in an augmented reality view using a laser signal, one or more computer processors determine a real world environment being viewed in an augmented reality view. The one or more computer processors recognize a laser light signature signal originating from an object in the real world environment. The one or more computer processors receive a selection of the object, based, at least in part, on the recognized laser light signature signal. The one or more computer processors display the selected object in the augmented reality view. | 11-26-2015 |
20150339856 | DISPLAY CONTROL METHOD AND INFORMATION PROCESSING APPARATUS - An information processing apparatus includes circuitry configured to: control an imaging device to stop a first focus adjustment when a specific object is detected from a first image captured by the imaging device while the first focus adjustment is in execution, the first focus adjustment being performed at each first time interval by the imaging device, and control the imaging device to start a second focus adjustment when the specific object is not detected from a second image captured by the imaging device after the imaging device is controlled to stop the first focus adjustment. | 11-26-2015 |
20150339857 | AMBIENT LIGHT COMPENSATION FOR AUGMENTED OR VIRTUAL REALITY - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 11-26-2015 |
20150339858 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD - An information processing device configured to display a first display image on a display unit based on display information and first position information acquired based on identification information associated with the display information, the information processing device includes a memory, and a processor coupled to the memory, configured to acquire imaged images imaged by an imaging unit, extract the identification information from an object included in each of the imaged images, acquire the display information associated with the identification information when the identification information is extracted by the extraction, and display a second display image on the display unit based on second position information and the display information acquired by the acquisition, when the identification information is not extracted from any of the imaged images acquired after the identification information is extracted by the extraction. | 11-26-2015 |
20150339859 | APPARATUS AND METHOD FOR NAVIGATING THROUGH VOLUME IMAGE - Disclosed are an apparatus for navigating through a volume image and a method thereof. The apparatus for navigating through a volume image includes a navigation plane detecting unit configured to generate a virtual plane in a navigation coordinate system in a real world from user gesture data and determine the virtual plane as a navigation plane, an extracting unit configured to extract a 2D sectional image corresponding to the navigation plane from 3D volume data, based on a reference surface of a volume coordinate system of a virtual world corresponding to a reference surface of the navigation coordinate system, and a display unit configured to display the extracted sectional image. | 11-26-2015 |
20150339860 | ARTICLE INFORMATION PROVIDING APPARATUS THAT PROVIDES INFORMATION OF ARTICLE, ARTICLE INFORMATION PROVIDING SYSTEM, AND ARTICLE INFORMATION PROVISION METHOD - Provided is an article information providing apparatus that includes a display circuit, an imaging circuit, an object recognizing circuit, and an article discriminating circuit. The display circuit displays an article information image in which article information is shown. The imaging circuit images real space. The object recognizing circuit recognizes the object imaged by the imaging circuit. The article discriminating circuit discriminates the article from the object recognized by the object recognizing circuit and displays the article information image on the display circuit based on the state of the discriminated article. In addition, the article discriminating circuit displays the article information image on the display circuit when a dynamic state change of the discriminated article is identified. | 11-26-2015 |
20150339861 | METHOD OF PROCESSING IMAGE AND ELECTRONIC DEVICE THEREOF - A method of operating an electronic device is provided. The method includes detecting a movement of the electronic device, and changing a scale of a displayed image based on the movement. | 11-26-2015 |
20150347849 | METHOD FOR SUPPORTING AN OPERATOR IN MEASURING A PART OF AN OBJECT - Method for supporting an operator in measuring a part of an object, comprising the steps of equipping the operator with an electronic device and with a dimension measuring apparatus. The device comprises a see-through head-mounted display, a camera, and a digital processor. An image of the object is captured with the camera so that the processor recognizes or identifies the part of the object in the image. The method further comprises the steps of obtaining a model of said part and displaying on the display an indication of the dimension that is intended to be measured. A value of the dimension measured by the dimension measuring apparatus is acquired in the processor that will process it according to the model. | 12-03-2015 |
20150347850 | DEVICE-PROVIDED TRACKING DATA FOR AUGMENTED REALITY - A networked device may provide tracking data associated with the networked device to an augmented reality (AR) device via a local network. The tracking data may describe one or more trackable features of the networked device. The AR device may utilize the tracking data to detect the networked device in a camera view. The AR device may generate an augmented reality view in association with the networked device in response to detecting at least one trackable feature of the networked device in the camera view of the AR device. At least a portion of the augmented reality view can be augmented with AR properties associated with the networked device when the networked device is positioned in the camera view. | 12-03-2015 |
20150347852 | APPARATUS, METHOD AND COMPUTER PROGRAM FOR DETERMINING INFORMATION TO BE PROVIDED TO A USER - An apparatus, method and computer program wherein the apparatus comprises: processing circuitry; and memory circuitry including computer program code; the memory circuitry and the computer program code configured to, with the processing circuitry, cause the apparatus at least to perform: detecting user selection of a part of an image wherein the image is displayed on a display; obtaining context information; and determining information to be provided to the user based on the user selection, the displayed image and the obtained context information. | 12-03-2015 |
20150347854 | System and Method for Using Augmented Reality Display in Surface Treatment Procedures - A method for providing target object surface information to a mobile device user is presented. The method includes receiving a request for target object surface information from a mobile device, determining the pose of the mobile interface device relative to the target object, and obtaining target object surface information for one or more measurable target object surface parameters. The target object data is used to assemble augmented reality surface information configured for viewing in conjunction with a real-time view of the target object captured by the mobile interface device. The target object augmented reality surface information is then transmitted to the mobile device for display to the user. | 12-03-2015 |
20150348321 | Augmented reality display device - A rear projection display screen, including a film having a surface facing the back of the screen including ridges having a triangular cross-section defining prisms, wherein at least a wall of each of said ridges has an inclination such that the angle of incidence, on said wall, of a light ray normal to the display is equal, to within 5 degrees, to the Brewster angle of the surface of separation formed by said wall. | 12-03-2015 |
20150348322 | Dynamically Composited Information Handling System Augmented Reality at a Primary Display - An augmented virtual reality is provided by composite visual information generated at goggles and a display viewed through the goggles. Positional cues at the display and/or goggles provide the relative position of the display to the goggles so that an end user primarily views the display through the goggles at the position of the display. The goggles provide peripheral visual images to support the display visual images and to support end user interactions with input devices used for controlling display and goggle visual images. | 12-03-2015 |
20150348324 | PROJECTING A VIRTUAL IMAGE AT A PHYSICAL SURFACE - Techniques for projecting virtual images are described herein. A plane of a surface may be identified, and a virtual image is projected onto the plane of the physical surface. The virtual image is rendered at a graphical user interface of a mobile computing device. | 12-03-2015 |
20150348325 | METHOD AND SYSTEM FOR STABILIZATION AND REFRAMING - A method and apparatus for dynamically maintaining a horizontal framing of a video. The system permits the user to freely rotate the device while filming, while visualizing the final output in an overlay on the device viewfinder or screen during shooting. The resulting recording is subsequently corrected to maintain a single orientation with a stable horizon. The system and method are operative to display an overlay over a captured representation of the captured video, wherein the overlay indicates a modified image with respect to said orientation. | 12-03-2015 |
20150348326 | IMMERSION PHOTOGRAPHY WITH DYNAMIC MATTE SCREEN - A method may include displaying, on one or more display devices in a virtual-reality environment, a visual representation of a 3-D virtual scene from the perspective of a subject location in the virtual-reality environment. The method may also include displaying, on the one or more display devices, a chroma-key background with the visual representation. The method may further include recording, using a camera, an image of the subject in the virtual-reality environment against the chroma-key background. | 12-03-2015 |
20150348328 | HEAD-MOUNTED DISPLAY DEVICE, METHOD OF CONTROLLING HEAD-MOUNTED DISPLAY DEVICE, INFORMATION TRANSMITTING AND RECEIVING SYSTEM, AND COMPUTER PROGRAM - A head-mounted display device includes an image display unit configured to cause a user to visually recognize image light as a virtual image on the basis of image data and cause the user to visually recognize an outside scene in a state in which the image display unit is worn on the head of the user, an image pickup unit configured to pick up an image of the outside scene, and a control unit configured to cause, when a mark image, which is an image of a specific mark, is included in the picked-up image, using the image display unit, the user to visually recognize a specific virtual image associated with a combination of a kind of the mark image and a shape of the mark image. | 12-03-2015 |
20150348329 | SYSTEM AND METHOD FOR PROVIDING AUGMENTED REALITY ON MOBILE DEVICES - A system and method for providing augmented reality on a mobile device is herein disclosed. According to one embodiment, the computer-implemented method includes providing a targeting advice area in a camera preview of an application running on a user device and recognizing a target using the targeting advice area. The computer-implemented method further provides an event via the camera preview based on a target recognition. | 12-03-2015 |
20150348331 | PORTABLE INFORMATION PROCESSOR AND METHOD OF CONTROLLING HEAD-MOUNTABLE DISPLAY - A system includes a head-mountable display and a portable information processor. The portable information processor includes a determination device that determines whether communication between a head-mountable display and the portable information processor is enabled. The portable information processor includes an identification device that identifies at least a portion of image data that is displayed by the portable information processor. The portable information processor includes a transmitter that transmits the identified at least a portion of the image data to the head-mountable display when the determination device determines that communication between the head-mountable display and the portable information processor is enabled. The head-mountable display includes a receiver that receives the identified at least a portion of the image data from the portable information processor. The head-mountable display includes a display that displays the identified at least a portion of the image data. | 12-03-2015 |
20150355463 | IMAGE DISPLAY APPARATUS, IMAGE DISPLAY METHOD, AND IMAGE DISPLAY SYSTEM - An image obtained when a rare or valuable thing is found is shared between users each wearing an image display apparatus on the head or face. | 12-10-2015 |
20150355468 | CONTENT PRESENTATION IN HEAD WORN COMPUTING - Aspects of the present invention relate to methods and systems for presenting digital content in a field of view of a head-worn computer. | 12-10-2015 |
20150356772 | CONTENT PRESENTATION IN HEAD WORN COMPUTING - Aspects of the present invention relate to methods and systems for presenting digital content in a field of view of a head-worn computer. | 12-10-2015 |
20150356774 | LAYOUT DESIGN USING LOCALLY SATISFIABLE PROPOSALS - A “Layout Optimizer” provides various real-time iterative constraint-satisfaction methodologies that use constraint-based frameworks to generate optimized layouts that map or embed virtual objects into environments. The term environment refers to combinations of environmental characteristics, including, but not limited to, 2D or 3D scene geometry or layout, scene colors, patterns, and/or textures, scene illumination, scene heat sources, fixed or moving people, objects or fluids, etc., any of which may evolve or change over time. A set of parameters are specified or selected for each object. Further, the environmental characteristics are determined automatically or specified by users. Relationships between objects and/or the environment derived from constraints associated with objects and the environment are then used to iteratively determine optimized self-consistent and scene-consistent object layouts. This enables the Layout Optimizer to augment environments with arbitrary content in a structured constraint-based process that adapts to changing scenes or environments. | 12-10-2015 |
20150356775 | CONTENT PRESENTATION IN HEAD WORN COMPUTING - Aspects of the present invention relate to methods and systems for presenting digital content in a field of view of a head-worn computer. | 12-10-2015 |
20150356776 | CONTENT PRESENTATION IN HEAD WORN COMPUTING - Aspects of the present invention relate to methods and systems for presenting digital content in a field of view of a head-worn computer. | 12-10-2015 |
20150356777 | CONTENT PRESENTATION IN HEAD WORN COMPUTING - Aspects of the present invention relate to methods and systems for presenting digital content in a field of view of a head-worn computer. | 12-10-2015 |
20150356778 | CONTENT PRESENTATION IN HEAD WORN COMPUTING - Aspects of the present invention relate to methods and systems for presenting digital content in a field of view of a head-worn computer. | 12-10-2015 |
20150356779 | CONTENT PRESENTATION IN HEAD WORN COMPUTING - Aspects of the present invention relate to methods and systems for presenting digital content in a field of view of a head-worn computer. | 12-10-2015 |
20150356780 | METHOD FOR PROVIDING REAL TIME GUIDANCE TO A USER AND A SYSTEM THEREOF - The present subject matter relates to a method and a guidance system for providing real time guidance to a novice user by an expert. The method comprises capturing images of a plurality of actions performed by the user and the expert, based on which position and motion data associated with the actions are identified. Further, the method maps the complex environment of the novice user and the position and motion data into corresponding digital representations to allow real time interaction between the novice user and the expert. During interaction, the guidance system monitors the performance of the novice user and dynamically suggests a list of alternate actions when the guidance system identifies a deviation in the actions performed by the novice user compared to the actions performed by the expert. If no deviations are identified, the guidance system implements the plurality of actions of the task in the real physical world. | 12-10-2015 |
20150356781 | RENDERING AN AVATAR FOR A USER IN AN AUGMENTED OR VIRTUAL REALITY SYSTEM - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 12-10-2015 |
20150356782 | CREATING A TOPOLOGICAL MAP FOR LOCALIZATION IN AUGMENTED OR VIRTUAL REALITY SYSTEMS - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 12-10-2015 |
20150356783 | UTILIZING TOPOLOGICAL MAPS FOR AUGMENTED OR VIRTUAL REALITY - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 12-10-2015 |
20150356784 | FINDING NEW POINTS BY RENDER RATHER THAN SEARCH IN AUGMENTED OR VIRTUAL REALITY SYSTEMS - An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems. | 12-10-2015 |
20150356785 | IMAGE SYNTHESIS METHOD AND IMAGE SYNTHESIS APPARATUS - In order to easily and rapidly determine a subject region in a captured image using a single image, a synthesized image of an extraction image including a subject and an instruction image representing a first acquisition region from which color information on the subject is acquired is used. Color information on the subject is acquired from the first acquisition region represented by the instruction image in the synthesized image. Color information on a background of the subject is acquired from a region not including the first acquisition region and a color information non-acquisition region that is adjacent to the first acquisition region and is set in advance in the synthesized image. On the basis of the color information on the subject and the color information on the background, extraction information of the subject is determined and output. | 12-10-2015 |
20150356786 | System and Method for Augmented Reality Display of Electrical System Information - A method for providing electrical system status information to a mobile device user is presented. The method comprises receiving a request for target area electrical system status information from a mobile device, determining the pose of the mobile interface device relative to the target area, obtaining target area electrical system status information for a target electrical system at least partially disposed within the target area, and assembling AR electrical system status information for transmission to and display on the mobile interface device. The AR electrical system status information is assembled using the target area electrical system status information and is configured for viewing in conjunction with a real-time view of the target area captured by the mobile interface device. The AR electrical system status information is then transmitted to the mobile interface device. | 12-10-2015 |
20150356787 | INFORMATION PROCESSING DEVICE, CLIENT DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing device including an image acquisition unit that acquires a captured image of a real space from an image capture device, a setting unit that sets, in association with the real space, an augmented reality space that virtually augments the real space depicted in the captured image, the augmented reality space differing according to related information that relates to the captured image, and a control unit that causes an image of a virtual object placed for each user within the augmented reality space to be displayed on a screen. | 12-10-2015 |
20150356788 | INFORMATION PROCESSING DEVICE, CLIENT DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing device including an image acquisition unit that acquires a captured image of a real space from an image capture device, a setting unit that sets, in association with the real space, an augmented reality space that virtually augments the real space depicted in the captured image, and a control unit that determines a target position for an action executed within the augmented reality space, on a basis of a positional relationship between an object within the augmented reality space and an optical axis of the image capture device. | 12-10-2015 |
20150356789 | DISPLAY DEVICE AND DISPLAY METHOD - A display method includes: detecting a specific object from an input image captured by a camera; generating display data for rendering a plan view, a first display component, and a second display component based on a positional relationship between the camera and the specific object, the plan view corresponding to a first plane which is different from a second plane projected on the input image, the first display component being displayed at a camera position on the first plane, and the second display component being displayed at a reference object position on the first plane; receiving a designation of a point on the plan view; converting a designated position on the first plane into another position in a depth direction relative to the second plane; and generating positional information indicating a three-dimensional position of an augmented reality content based on the other position. | 12-10-2015 |
20150356790 | Augmented Reality Control Systems - An augmented reality control system in which the generation and utilization of simple geometric forms of real world objects at the enterprise level are layered onto or based on real world objects. Digital data is aligned to real world objects via surrogates of simplified geometric forms. | 12-10-2015 |
20150362733 | WEARABLE HEAD-MOUNTED DISPLAY AND CAMERA SYSTEM WITH MULTIPLE MODES - Embodiments of the present disclosure include systems and methods for a wearable head-mounted display and camera system with multiple modes of user interaction including, in some embodiments, a natural reality mode, an augmented reality mode, and a virtual reality mode. | 12-17-2015 |
20150363647 | MOBILE AUGMENTED REALITY FOR MANAGING ENCLOSED AREAS - Example embodiments relate to providing mobile augmented reality for an enclosed area. In example embodiments, a controller device receives a fixed video stream from a fixed camera and a mobile video stream of a current field of view of a mobile user device. The mobile user device comprises a reality augmentation module to project information on the current field of view. Further, the controller device includes a tracking module to identify a position and orientation of a mobile user of the mobile user device based on image processing of the fixed video stream and a fuzzy map module to use a fuzzy map of the enclosed area and the position and orientation of the mobile user to identify items of interest in the current field of view of the mobile user device, where the fuzzy map is generated based on a floor plan of the enclosed area. | 12-17-2015 |
20150363956 | RADIOACTIVE SUBSTANCE DISTRIBUTION MAP PRODUCING SYSTEM AND METHOD OF PRODUCING RADIOACTIVE SUBSTANCE DISTRIBUTION MAP - A radioactive substance distribution map producing system includes a radiation detector, a position measuring unit and a radioactive substance distribution map producing apparatus. The radiation detector is loaded on a moving vehicle and measures radiations from radioactive substances. The position measuring unit measures a position of the moving vehicle. The radioactive substance distribution map producing apparatus receives measurement data which contains a measurement result by the radiation detector and position data of the moving vehicle measured by the position measuring unit. The radioactive substance distribution map producing apparatus produces a distribution map of the radioactive substances by using the measurement data obtained at the plurality of positions while the moving vehicle moves. | 12-17-2015 |
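The map-producing step in the entry above amounts to accumulating georeferenced dose readings into spatial cells. A hedged sketch of that accumulation — the 10 m cell size, the averaging rule, and all names are illustrative assumptions:

```python
from collections import defaultdict

def build_distribution_map(samples, cell_size=10.0):
    # Average radiation readings into square grid cells keyed by the
    # quantized (x, y) position of the moving vehicle at measurement time.
    sums = defaultdict(lambda: [0.0, 0])
    for (x, y), dose in samples:
        cell = (int(x // cell_size), int(y // cell_size))
        sums[cell][0] += dose
        sums[cell][1] += 1
    return {cell: total / n for cell, (total, n) in sums.items()}

# (position, dose) pairs collected while the vehicle moves.
samples = [((3.0, 4.0), 0.12), ((7.0, 2.0), 0.18), ((25.0, 4.0), 0.40)]
dist_map = build_distribution_map(samples)
```

The first two readings fall in the same cell and are averaged; interpolation between cells, as a production system would need, is omitted here.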
20150363975 | EXTERNAL USER INTERFACE FOR HEAD WORN COMPUTING - Aspects of the present invention relate to external user interfaces used in connection with head worn computers (HWC). | 12-17-2015 |
20150363977 | VIRTUALIZING CONTENT - Techniques for virtualizing content are disclosed. One or more objects comprising source video content are determined. The one or more objects comprising the source video content are virtualized by mapping each to and representing each with a corresponding database object. Data comprising the corresponding database objects is provided for rendering the source video content instead of any original pixel information of the source video content so that a virtualized version of the source video content is rendered. | 12-17-2015 |
20150363978 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR GENERATING AN AUGMENTED SCENE DISPLAY - The subject matter described herein includes systems, methods, and computer readable media for generating an augmented scene display. An exemplary method includes forming, using a display device operating in a first stage, an augmented virtual image by emitting light rays through a plurality of spatial light modulation layers included in a display device. The method also includes forming, using the display device operating in a second stage, an occluded real image by opening a shutter element of the display device to receive light rays from a real object and utilizing the plurality of spatial light modulation layers to block any light ray from the real object which coincides with the augmented virtual image. The method further includes generating an augmented scene display that includes both the occluded real image and the augmented virtual image by alternating the operation of the display device between the first stage and the second stage. | 12-17-2015 |
20150363979 | HEAD MOUNTED DISPLAY AND CONTROL METHOD FOR HEAD MOUNTED DISPLAY - A head mounted display which allows a user to visually recognize a virtual image and external scenery, and includes an image display unit that forms the virtual image which is visually recognized by the user, and a superimposition processing unit that causes the image display unit to form the virtual image based on superimposition information for superimposing, on an object included in the external scenery, invisible information which is not shown in the appearance of the object. | 12-17-2015 |
20150371407 | DISPLAY APPARATUS AND CONTROL METHOD THEREOF - A display apparatus is disclosed. The display apparatus includes: a transparent display, a scanner comprising a lens array and attached to the transparent display, and a controller configured to control the scanner to scan an object viewed through the transparent display using the lens array being slid from one side of the transparent display to the opposite side, and control the transparent display to display information in response to the scanning of the object. | 12-24-2015 |
20150371415 | Simulation System, Simulation Device, and Product Explanation Assistance Method - In a simulation system in which a terminal device employed in a spectacle shop and a display device for visual recognition by a prospective spectacle lens wearer are connected to a server device so as to be capable of communication: the server device includes an image generation unit that achieves a state in which a simulation image reflecting lens visual performance of a spectacle lens can be output for each of plural partial visual field areas, and an information storage unit that stores explanatory information regarding characteristics of the lens visual performance for each of the plural partial visual field areas; the display device includes a display screen unit that selectively displays the simulation images in the partial visual field areas; and the terminal device includes an information output unit that outputs the explanatory information corresponding to the partial visual field area being displayed on the display screen unit. | 12-24-2015 |
20150371443 | Viewpoint Control of a Display of a Virtual Product in a Virtual Environment - A method, system, and apparatus for visually presenting a virtual environment relative to a physical workspace. An output device visually presents a view of the virtual environment to guide a human operator in performing a number of operations within the physical workspace. A mounting structure holds the output device and is movable with at least one degree of freedom relative to the physical workspace. A sensor system measures movement of the output device relative to the physical workspace to generate sensor data. A controller computes a transformation matrix and a set of scale factors to align the virtual environment and the physical workspace. The controller changes the view of the virtual environment based on the sensor data, in correspondence with the movement of the output device relative to the physical workspace. | 12-24-2015 |
20150371444 | IMAGE PROCESSING SYSTEM AND CONTROL METHOD FOR THE SAME - An image processing system includes an image processing apparatus that is wearable by a user and is configured to capture real space and display real space video. The system generates mixed reality video obtained by superimposing virtual object video on the real space video; identifies a display area of a real object that is included in the real space video; measures a distance between the image processing apparatus and the real object. In addition, the system performs notification for causing a user who is wearing the image processing apparatus to recognize existence of the real object, if the display area of the real object is hidden by the virtual object video, and the distance between the image processing apparatus and the real object is less than a predetermined distance. | 12-24-2015 |
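The notification condition in the entry above — the real object's display area hidden by virtual video while the object is nearer than a threshold — can be sketched with axis-aligned display areas. Rectangles are (x1, y1, x2, y2); the 1.5 m default and all names are assumptions for illustration:

```python
def overlaps(a, b):
    # True if two axis-aligned display rectangles (x1, y1, x2, y2) intersect.
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def should_notify(real_area, virtual_area, distance_m, threshold_m=1.5):
    # Warn the wearer only when the real object's display area is covered
    # by the virtual object video AND the object is closer than the threshold.
    return overlaps(real_area, virtual_area) and distance_m < threshold_m

alert = should_notify((0, 0, 10, 10), (5, 5, 20, 20), distance_m=1.0)
```

A distant occluded object or a visible nearby one triggers no notification, matching the abstract's two-part condition.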
20150371446 | METHOD FOR OPERATING VIRTUAL REALITY SPECTACLES, AND SYSTEM HAVING VIRTUAL REALITY SPECTACLES - A method for operating virtual reality spectacles, having the steps of: display of at least one virtual object, arranged in virtual surroundings, by means of the virtual reality spectacles from a first observation position that is prescribed within the virtual surroundings; display of at least one position symbol at a position within the virtual surroundings that corresponds to a second observation position prescribed within the virtual surroundings; identification of a virtual standard line of vision to the virtual object from the second observation position; selection of the displayed position symbol as soon as a predetermined selection action has been sensed; and display of the virtual object from the second observation position as soon as a predetermined confirmation action for the selected position symbol has been sensed. Furthermore, the invention relates to a system having virtual reality spectacles. | 12-24-2015 |
20150371448 | CONTEXTUAL LOCAL IMAGE RECOGNITION DATASET - A contextual local image recognition module of a device retrieves a primary content dataset from a server and then generates and updates a contextual content dataset based on an image captured with the device. The device stores the primary content dataset and the contextual content dataset. The primary content dataset comprises a first set of images and corresponding virtual object models. The contextual content dataset comprises a second set of images and corresponding virtual object models retrieved from the server. | 12-24-2015 |
20150371449 | METHOD FOR THE REPRESENTATION OF GEOGRAPHICALLY LOCATED VIRTUAL ENVIRONMENTS AND MOBILE DEVICE - The invention relates to the representation of a high-quality vectorial and textured graphical environment, including, as the basis of this representation, the capturing of video and the sequencing of images and graphics in a vectorial format, provided by the image-capturing means of the mobile device that implements the method. Furthermore, this is carried out by placing said vectorial graphical environments in a pre-determined geographic location and subordinating the representation thereof to the real geographic location of a mobile device. | 12-24-2015 |
20150373274 | DISPLAY DEVICE AND CONTROL METHOD - A display device includes circuitry configured to: selectively execute a first control and a second control, the first control performing both a capture process by an image capturing device and an image recognition process which detects a specific object from an image captured by the image capturing device, and the second control performing only the capture process of the two; and display content corresponding to the specific object on the image when the specific object is detected from the image by the image recognition process during execution of the first control. | 12-24-2015 |
20150379351 | Athletic Activity Heads Up Display Systems and Methods - A method of using an athletic activity heads up display system during an athletic activity includes the steps of a heads up display unit receiving information about a sport ball and the heads up display unit displaying an image to an individual based on the information, where the image is overlaid on the individual's present field of view of an environment. | 12-31-2015 |
20150379360 | EYEWEAR-TYPE TERMINAL AND METHOD FOR CONTROLLING THE SAME - An eyewear-type mobile terminal includes a camera; an output unit configured to output road guide information; and a controller configured to: acquire information about a current position; cause the output unit to output the road guide information for a route from the current position to a destination; cause the camera to acquire visual information in real-time while the road guide information is output; and update the road guide information in real-time based on the acquired visual information. | 12-31-2015 |
20150379770 | DIGITAL ACTION IN RESPONSE TO OBJECT INTERACTION - A system and method are disclosed for identifying objects, and performing a digital action with respect to the object in a mixed reality environment. Objects may be recognized in a number of ways by a processing unit receiving feedback from a head mounted display worn by a user. Once an object is identified, the digital action performed may be displaying additional information on the object, either on a virtual display slate or as a three-dimensional virtual representation. | 12-31-2015 |
20150379771 | IMAGE DATA GENERATING DEVICE, PORTABLE TERMINAL DEVICE, AND PORTABLE CONTROL DEVICE - A PC includes: an image data setting section for (i) setting, to background image data indicative of a background image, apparatus image data obtained by capturing an image of an apparatus and serving as referential image data to be referred to for identifying the apparatus, and (ii) setting, to the apparatus image data, dynamic part image data indicative of a dynamic part image positioned on the apparatus image; and an address setting section for (i) associating, with the dynamic part image data, (a) an address for specifying a storage area of a memory in which storage area data to be accessed by a portable terminal device is stored and (b) address substitutive information to be substituted for the address, and (ii) generating address display data to be used to display the address substitutive information instead of the address. | 12-31-2015 |
20150379772 | TRACKING ACCELERATOR FOR VIRTUAL AND AUGMENTED REALITY DISPLAYS - A display system includes: a sensor to detect head movements and to generate sensor data corresponding to the head movements; and a display device to display a first portion of an image according to the sensor data, wherein the first portion is smaller than an entirety of the image. | 12-31-2015 |
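The entry above displays only a head-motion-selected portion of a larger rendered image. A minimal sketch of that viewport selection, assuming normalized yaw/pitch offsets in [-1, 1]; all names are hypothetical:

```python
def viewport(image_w, image_h, view_w, view_h, yaw_frac, pitch_frac):
    # Map normalized head offsets in [-1, 1] to the top-left corner of a
    # sub-rectangle of the full image; only this portion is displayed.
    max_x = image_w - view_w
    max_y = image_h - view_h
    x = round((yaw_frac + 1.0) / 2.0 * max_x)
    y = round((pitch_frac + 1.0) / 2.0 * max_y)
    return (x, y, view_w, view_h)

centered = viewport(1920, 1080, 640, 360, 0.0, 0.0)
far_left = viewport(1920, 1080, 640, 360, -1.0, 0.0)
```

Rendering a larger frame than is shown and sliding the crop with the sensor data is one way such a "tracking accelerator" can cut perceived latency; the abstract itself does not commit to this particular mapping.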
20150379773 | DISPLAY DEVICE FOR VEHICLE - A display device for a vehicle includes a decorative member; a light source that emits light to the decorative member; and a virtual image generation unit that displays a virtual image between a driver of the vehicle and the decorative member. The light source is disposed closer to the driver than the decorative member, and is alternately switchable between a light-on state and a light-off state when the virtual image generation unit displays a virtual image. | 12-31-2015 |
20150379774 | SYSTEM AND METHOD FOR DYNAMICALLY GENERATING CONTEXTUAL AND PERSONALIZED DIGITAL CONTENT - A digital content generating system comprising mobile computing devices in communication with a server. A sensor data analyser receives data generated by the sensors of one of the mobile devices in respect of a real-world scene and determines features of the real-world scene from the data. A data analytics system matches the real-world scene features and user preferences with digital content items to create personalised digital content for the user of the mobile device. The system therefore dynamically generates digital content that is automatically and contextually constructed, the context being created from automatically-sensed features in the user's physical environment and from the user's interests and opinions. | 12-31-2015 |
20150379775 | METHOD FOR OPERATING A DISPLAY DEVICE AND SYSTEM WITH A DISPLAY DEVICE - A method for operating a display device involves displaying at least one virtual object from a virtual observation position by virtual reality glasses, continuously detecting a position of the virtual reality glasses, determining, using the continuously detected position of the virtual reality glasses, whether the glasses are disposed in a specified region, and displaying the virtual object from the same virtual observation position by means of the display device as long as the virtual reality glasses are disposed in the specified region. A system includes virtual reality glasses. | 12-31-2015 |
20150379777 | AUGMENTED REALITY PROVIDING SYSTEM, RECORDING MEDIUM, AND AUGMENTED REALITY PROVIDING METHOD - An augmented reality providing system is provided with: a group of sensors measuring information on movement; a storage device that stores the reference position of the group of sensors; a card control unit that decides whether or not the group of sensors is located at the reference position; a position and posture identification unit that, after decision by the card control unit that the group of sensors is located at the reference position, identifies the current position of the group of sensors based on the reference position stored in the storage device and the information on the movement measured by the group of sensors; and a display unit that outputs output information in accordance with the current position of the group of sensors identified by the position and posture identification unit, thereby representing augmented reality. | 12-31-2015 |
20150379778 | COORDINATE GEOMETRY AUGMENTED REALITY PROCESS FOR INTERNAL ELEMENTS CONCEALED BEHIND AN EXTERNAL ELEMENT - Embodiments of the invention include a method, a system, and a mobile device that incorporate augmented reality technology into land surveying, 3D laser scanning, and digital modeling processes. By incorporating the augmented reality technology, a 3D digital model of internal elements concealed behind an external element can be visualized on a live view, aligned to the orientation and scale of the scene displayed on the mobile device. In an embodiment, a marker can be placed at a predetermined set of coordinates on the external element, determined by surveying equipment. The 3D digital model of the internal elements can be retrieved by the mobile device and overlaid in relation to the marker position, orientation, and size so that it is seen at a calculated distance in depth behind the external element as they would exist hidden behind the external element in the real environment. | 12-31-2015 |
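The overlay step in the entry above positions a 3D model relative to a surveyed marker, at a calculated depth behind the external element. A toy version with the marker axis-aligned (no rotation); the scale, depth, and every name are illustrative assumptions:

```python
def place_model(vertices, marker_pos, marker_scale, depth_behind):
    # Scale model vertices by the marker's apparent scale and translate
    # them so they sit depth_behind units beyond the marker along +Z,
    # i.e. "hidden behind" the external element in the live view.
    mx, my, mz = marker_pos
    return [(mx + marker_scale * x,
             my + marker_scale * y,
             mz + depth_behind + marker_scale * z)
            for x, y, z in vertices]

pipe = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]          # model-space segment
placed = place_model(pipe, (10.0, 5.0, 0.0), 2.0, 0.3)
```

A full implementation would apply the marker's recovered orientation as a rotation matrix before the translation; only the scale-and-offset part is shown here.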
20150379779 | APPARATUS AND METHOD FOR DISPLAYING DATA IN PORTABLE TERMINAL - An apparatus and method for displaying data in a portable terminal to control data displayed on a projection beam screen. The apparatus includes a beam projector unit for displaying data on a beam screen, at least one camera unit for capturing the data displayed on the beam screen, and a controller for extracting a differential region between data to be displayed on the beam screen and the displayed data captured by the camera unit and displaying the data on the beam screen according to a display screen region excluding the differential region therefrom. | 12-31-2015 |
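The differential-region extraction in the entry above can be sketched as a per-pixel comparison of the intended frame with the camera's capture of the beam screen, returning the bounding box to exclude. Grayscale rows of ints are assumed, and the tolerance and names are illustrative:

```python
def diff_region(intended, captured, tol=8):
    # Bounding box (r1, c1, r2, c2) of pixels where the captured screen
    # deviates from the frame the projector intended to display.
    hits = [(r, c)
            for r, row in enumerate(intended)
            for c, px in enumerate(row)
            if abs(px - captured[r][c]) > tol]
    if not hits:
        return None
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return (min(rows), min(cols), max(rows), max(cols))

intended = [[100] * 4 for _ in range(3)]
captured = [row[:] for row in intended]
captured[1][2] = 0            # e.g. a shadow falling on the screen
box = diff_region(intended, captured)
```

The controller would then redraw the frame with this region excluded, as the abstract describes.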
20160004081 | HEADUP DISPLAY DEVICE - Provided is a headup display device capable of projecting a virtual image onto a curved surface. A headup display device comprises: a display means for emitting display light representing an image; a pair of parallel flat mirrors comprising a parallel arrangement of a flat semi-transparent mirror and a flat mirror, said flat semi-transparent mirror receiving the display light emitted from the display means and reflecting a part of the display light while transmitting the other part of the display light, and said flat mirror reflecting the display light toward the flat semi-transparent mirror; and a curved mirror for projecting an image represented by the display light onto a curved surface as a virtual image by reflecting the display light transmitted through the flat semi-transparent mirror onto the curved surface. The curved mirror has a curved surface shape such that when display light transmitted through the flat semi-transparent mirror and parallel to a first plane is reflected by the curved mirror and the curved surface in that order, the reflected display light becomes display light representing a virtual image parallel to a second plane. | 01-07-2016 |
20160004298 | Chemical Composition And Its Delivery For Lowering The Risks Of Alzheimer's, Cardiovascular And Type-2 Diabetes Diseases - Chemical compositions of bioactive compounds and/or bioactive molecules for lowering the risks of Alzheimer's, Cardiovascular and Diabetes diseases are described. Targeted, passive and programmable/active deliveries of the bioactive compounds and/or bioactive molecules are described. Many embodiments of various subsystems for detection of disease specific biomarkers/an array of disease specific biomarkers and programmable/active delivery of the bioactive compounds and/or bioactive molecules in near real-time/real-time are also described. A portable internet appliance, a portable internet cloud appliance and an augmented reality personal assistant subsystem are also described along with various applications. | 01-07-2016 |
20160004305 | Method and Apparatus for Construction Machine Visualization - A system for displaying information to an operator of a machine comprises a head tracking system and a projection system. One system for displaying information uses a projection system and a see-through display to present two-dimensional images to an operator. One system uses a projection system and a see-through display to present three-dimensional images to an operator. One system uses a pair of smart glasses to display information to a user based on the direction the user is looking. | 01-07-2016 |
20160004320 | TRACKING DISPLAY SYSTEM, TRACKING DISPLAY PROGRAM, TRACKING DISPLAY METHOD, WEARABLE DEVICE USING THESE, TRACKING DISPLAY PROGRAM FOR WEARABLE DEVICE, AND MANIPULATION METHOD FOR WEARABLE DEVICE - Provided are a tracking display system, a tracking display program, a tracking display method, a wearable device using these, a tracking display program for the wearable device, and a manipulation method for the wearable device that enhance the manipulability of a program in a stereoscopic space. The tracking display system of the present invention includes: a display unit that displays a view object in a virtual space; a detection unit that detects an image of a target in a real space, the target being for selecting the view object; and a control unit that controls at least a position of the view object. Further, the control unit displays the view object such that the view object tracks a motion of the target, in a case where the control unit determines that the view object and the image of the target overlap with each other in the display unit. | 01-07-2016 |
20160005189 | PROVIDING OVERLAYS BASED ON TEXT IN A LIVE CAMERA VIEW - Approaches are described for rendering augmented reality overlays on an interface displaying the active field of view of a camera. The interface can display to a user an image or video, for example, and the overlay can be rendered over, near, or otherwise positioned with respect to any text or other such elements represented in the image. The overlay can have associated therewith at least one function or information, and when an input associated with the overlay is selected, the function can be performed (or caused to be performed) by the portable computing device. | 01-07-2016 |
20160005199 | DIGITAL IMAGE PROCESSING APPARATUS AND CONTROLLING METHOD THEREOF - A digital image processing apparatus and a controlling method thereof are disclosed. The digital image processing apparatus includes an input unit configured to receive information on a real travel path, a sensor unit configured to detect position change information of the digital image processing apparatus, a display unit configured to play a scene image of a virtual travel path, and a controller configured to control the input unit, the sensor unit, and the display unit, wherein the controller may detect similarity levels of virtual travel paths to the real travel path in accordance with a predetermined reference, select either the virtual travel path with the highest similarity level among the detected similarity levels or one chosen by a user selection, and control the play of a scene image of the selected virtual travel path based upon the information on the real travel path and the detected position change information. | 01-07-2016 |
20160005207 | METHOD AND SYSTEM FOR GENERATING MOTION SEQUENCE OF ANIMATION, AND COMPUTER-READABLE RECORDING MEDIUM - One aspect of the present invention provides a method for generating a motion sequence of an animation, the method comprising the steps of: generating a line of movement indicating a path along which a character moves, with reference to a first user manipulation inputted with respect to a reference plane; specifying the line of movement, a section included in the line of movement and a point on the line of movement with reference to a second user manipulation inputted with respect to the reference plane; and generating a motion sequence which enables the character to carry out assigned motions by assigning a motion to the line of movement, the section or the point with reference to a third user manipulation inputted with respect to the reference plane when the character is located at the line of movement, the section or the point to which the motion is assigned. | 01-07-2016 |
20160005231 | IMAGE DISPLAY DEVICE - An image display device used while mounted on a head of an observer includes: a left-eye display unit having an optical section adapted to form a virtual image for a left eye of the observer; a right-eye display unit having an optical section adapted to form a virtual image for a right eye of the observer; and an interpupillary adjustment mechanism adapted to adjust a distance between the left-eye display unit and the right-eye display unit, wherein the interpupillary adjustment mechanism includes a first adjustment mechanism capable of simultaneously adjusting a position of the left-eye display unit and a position of the right-eye display unit, and a second adjustment mechanism capable of independently adjusting only either one of the position of the left-eye display unit and the position of the right-eye display unit. | 01-07-2016 |
20160005232 | UNDERWATER VIRTUAL REALITY SYSTEM - Embodiments are directed to virtual reality methods and/or apparatus for providing a user with an underwater virtual reality experience that causes the user to experience the virtual reality environment, and to interact with the virtual reality environment in addition to experiencing the sensations of the real underwater environment. | 01-07-2016 |
20160005233 | METHOD, SYSTEM, AND APPARATUS FOR OPTIMISING THE AUGMENTATION OF RADIO EMISSIONS - In accordance with one example embodiment of the present invention, a plurality of antennas that are arranged according to a predetermined geometrical pattern receive radio emission signals from nearby radio emitting objects. Said radio emission signals are used, at least in part, to exhibit augmented reality indicia on a display, wherein the position of said augmented reality indicia on said display approximately indicates the direction of arrival of said radio emission signals and is organized or corrected according to predetermined criteria. One or more databases, either positioned on the cloud, or on the headset, or at an intermediate apparatus, may store the data, settings, and authorizations associated with said radio emitting object to permit and regulate the representation of said augmented reality indicia. | 01-07-2016 |
20160005235 | METHOD AND APPARATUS FOR SELECTIVELY PRESENTING CONTENT - A machine-implemented method includes obtaining input data and generating output data. The status of at least one contextual factor is determined and compared with a standard. If the status meets the standard, a transformation is applied to the output data. The output data is then outputted to the viewer. Through design and/or selection of contextual factors, standards, and transformations, output data may be selectively outputted to viewers in a context-suitable fashion, e.g. on a head mounted display the viewer's central vision may be left unobstructed while the viewer walks, drives, etc. An apparatus includes at least one sensor that senses a contextual factor. A processor determines the status of the contextual factor, determines if the status meets a standard, generates output data, and applies a transformation to the output data if the status meets the standard. A display outputs the output data to the viewer. | 01-07-2016 |
20160012612 | DISPLAY CONTROL METHOD AND SYSTEM | 01-14-2016 |
20160012639 | SYSTEM AND METHOD OF AUGMENTED REALITY ALARM SYSTEM INSTALLATION | 01-14-2016 |
20160012641 | DISPLAY APPARATUS AND CONTROL METHOD THEREOF | 01-14-2016 |
20160012643 | HMD Calibration with Direct Geometric Modeling | 01-14-2016 |
20160012645 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM | 01-14-2016 |
20160016472 | VEHICLE DISPLAY APPARATUS - A vehicle display apparatus includes a left decorative panel and a right decorative panel each including a first instrument panel and a second instrument panel, a left rotating mechanism that rotates the left decorative panel about an axis of rotation and changes which of the first instrument panel and the second instrument panel faces toward a driver in a vehicle, and a right rotating mechanism that rotates the right decorative panel about an axis of rotation and changes which of the first instrument panel and the second instrument panel faces toward the driver in the vehicle. | 01-21-2016 |
20160019016 | AUGMENTED REALITY DOLL - A physical doll including active electronic data processing capabilities which provide supplementary visual effects to a user via a wireless connection to augmented reality glasses worn by the user. A virtual version of the doll is seen by the user superimposed over the user's current physical environment. The physical doll may be comparable in size to a typical doll, whereas the virtual version of the doll may have the visual appearance of a fully-grown person. The augmented reality features can be initiated by physically placing the doll onto a dock or by other means. The active electronic data processing capabilities can be included in the doll, the dock, both the doll and the dock, or distributed between the doll and the dock. The active electronic data processing capabilities can also include video game-playing and artificial intelligence, and the virtual version of the doll may play video games with the user. | 01-21-2016 |
20160019212 | MAINTENANCE ASSISTANCE FOR AN AIRCRAFT BY AUGMENTED REALITY - A method for supporting aircraft maintenance, performed in a system comprising a display selection device and a portable device with a camera and an augmented reality display. The method comprises the steps of acquiring images of an equipment of the aircraft with the camera, and sending them to the display selection device; identifying the equipment present in these images with the display selection device and determining the identifier thereof, referred to as the useful identifier; on the basis of the useful identifier, sending maintenance assistance data with the display selection device to the augmented reality display; in response, displaying, in augmented reality, images corresponding to the data with the augmented reality display device. The method also comprises steps for displaying guidance data guiding toward one particular piece of equipment. A device for implementing such a method is also disclosed. | 01-21-2016 |
20160019423 | METHODS AND SYSTEMS FOR WEARABLE COMPUTING DEVICE - A wearable device that can provide images via a display located within two inches and in view of a human eye in association with headgear, and can biometrically authenticate an authorized user based on at least one of the user's eyes. The wearable device can access a data network and determine a user's location. An authorized user can be provided with data based on the user's identity and location as determined by a wearable device. The location of a user within a venue can be determined using radio frequency transponders in communication with a wearable device and authenticating the user via biometric attributes of a user's eye as captured by an imaging device associated with the wearable device. Sensitive data can be managed in association with a patient based on health provider authentication and identity of a transponder used in association with the patient. | 01-21-2016 |
20160019715 | CONTENT PRESENTATION IN HEAD WORN COMPUTING - Aspects of the present invention relate to providing assistance to medical professionals during the performance of medical procedures through the use of technologies facilitated through a head-worn computer. | 01-21-2016 |
20160019716 | COMPUTER ASSISTED SURGICAL SYSTEM WITH POSITION REGISTRATION MECHANISM AND METHOD OF OPERATION THEREOF - A computer assisted surgical system and method of operation thereof includes: capturing historical scan data from a three dimensional object; sampling a current surface image from the three dimensional object in a different position; automatically transforming the historical scan data to align with the current surface image to form transform data; and displaying, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention. | 01-21-2016 |
20160019717 | RETAIL SPACE PLANNING SYSTEM - A three dimensional virtual retail space representing a physical space for designing a retail store space layout is provided. A three dimensional virtual object representing at least one physical object for the retail space is provided. Input can be received from a virtual reality input interface for interacting with the virtual object in the virtual retail space. Based on the input, the virtual object can be placed in the virtual retail space. An updated video signal can be sent to a head mounted display that provides a three dimensional representation of the virtual object in the virtual space. | 01-21-2016 |
20160019719 | CONTENT PRESENTATION IN HEAD WORN COMPUTING - Aspects of the present invention relate to providing assistance to medical professionals during the performance of medical procedures through the use of technologies facilitated through a head-worn computer. | 01-21-2016 |
20160019721 | System and Method for Augmented Reality Display of Hoisting and Rigging Information - A method for providing information associated with a lift process to a mobile interface device user is presented. The method comprises receiving a request for lift environment information from a mobile interface device, determining a pose of the mobile interface device relative to a lift process target area, and obtaining lift environment information for at least a portion of the lift process target area. The lift environment information is used to assemble AR lift information for transmission to and display on the mobile interface device. The AR lift information is configured for viewing in conjunction with a real-time view of the lift process target area captured by the mobile interface device. The AR lift information is then transmitted to the mobile interface device for display. | 01-21-2016 |
20160019722 | System and Method for Defining an Augmented Reality View in a Specific Location - This invention is a system and method for defining a location-specific augmented reality capability for use in portable devices having a camera. The system and method uses recent photographs or digital drawings of a particular location to help the user of the system or method position the portable device in a specific place. Once aligned, a digital scene is displayed to the user transposed over (and combined with) the camera view of the current, real-world environment at that location, creating an augmented reality experience for the user. | 01-21-2016 |
20160019723 | AUGMENTED REALITY SYSTEM METHOD AND APPARATUS FOR DISPLAYING AN ITEM IMAGE IN A CONTEXTUAL ENVIRONMENT - A method, apparatus, and system for providing an item image to a client for display in a contextual environment are described. In some embodiments, the user may select an item for display in the contextual environment, and the user may position a camera coupled to a processing system to capture the contextual environment. A placeholder may be generated and associated with an item selected by a user. In an embodiment, the generated placeholder may be placed in a location within the contextual environment, and the user's processing system may send a visual data stream of the camera-captured environment to a server. In an embodiment, the user's processing device may receive a modified data stream including an image of the item, and the user's processing device may display the item image in the same location as the placeholder. | 01-21-2016 |
20160025974 | EXTERNAL USER INTERFACE FOR HEAD WORN COMPUTING - Aspects of the present invention relate to the projection of imagery from a head-worn computer, wherein a projector with x-y control and a laser are mounted in the head-worn computer and positioned to project a raster style interactive user interface image onto a nearby surface. | 01-28-2016 |
20160026242 | GAZE-BASED OBJECT PLACEMENT WITHIN A VIRTUAL REALITY ENVIRONMENT - A head mounted display (HMD) device operating in a real world physical environment is configured with a sensor package that enables determination of an intersection of a device user's projected gaze with a location in a virtual reality environment so that virtual objects can be placed into the environment with high precision. Surface reconstruction of the physical environment can be applied using data from the sensor package to determine the user's view position in the virtual world. A gaze ray originating from the view position is projected outward and a cursor or similar indicator is rendered on the HMD display at the ray's closest intersection with the virtual world such as a virtual object, floor/ground, etc. In response to user input, such as a gesture, voice interaction, or control manipulation, a virtual object is placed at the point of intersection between the projected gaze ray and the virtual reality environment. | 01-28-2016 |
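The closest-intersection behaviour described in the abstract above (a cursor rendered where the projected gaze ray first meets a virtual object or the floor, with placement there on user input) can be sketched roughly as follows; the sphere-based scene representation and all names are illustrative assumptions, not taken from the application:

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def place_at_gaze(view_pos, gaze_dir, spheres, floor_y=0.0):
    """Return the closest intersection of the gaze ray with the scene.

    The returned 3D point is where a cursor would be rendered and where,
    in response to user input, a virtual object would be placed. Scene
    geometry here is a list of (center, radius) spheres plus a
    horizontal floor plane at height floor_y.
    """
    n = math.sqrt(_dot(gaze_dir, gaze_dir))
    d = tuple(c / n for c in gaze_dir)          # unit gaze direction
    best_t = None
    for center, radius in spheres:
        oc = tuple(p - c for p, c in zip(view_pos, center))
        b = 2.0 * _dot(d, oc)
        disc = b * b - 4.0 * (_dot(oc, oc) - radius * radius)
        if disc >= 0:
            t = (-b - math.sqrt(disc)) / 2.0    # nearer of the two roots
            if t > 1e-6 and (best_t is None or t < best_t):
                best_t = t
    if d[1] < 0:                                # gaze points downward: try the floor
        t = (floor_y - view_pos[1]) / d[1]
        if t > 1e-6 and (best_t is None or t < best_t):
            best_t = t
    if best_t is None:
        return None                             # gaze hits nothing placeable
    return tuple(p + best_t * c for p, c in zip(view_pos, d))
```

With the view position at head height and the gaze aimed at a sphere five metres ahead, the cursor lands on the sphere's near surface; aimed at empty space below the horizon, it falls back to the floor, as the abstract's "virtual object, floor/ground, etc." suggests.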
20160026724 | AUGMENTED REALITY PRODUCT BROCHURE APPLICATION - A method for viewing an augmented reality product brochure for a mattress product on a computing device is provided. The method includes capturing an image corresponding to the mattress product with a camera of the computing device and retrieving the augmented reality product brochure corresponding to the image from a memory of the computing device. The method also includes displaying the augmented reality product brochure on a user interface of the computing device, wherein the augmented reality product brochure includes a representation of the mattress product, and modifying the representation of the mattress product based on one or more instructions received from the user. | 01-28-2016 |
20160026869 | INFORMATION PROCESSING APPARATUS AND METHOD, INFORMATION PROCESSING SYSTEM, AND PROVIDING MEDIUM - The invention enables users to virtually attach information to situations in the real world, and also enables users to quickly and easily find desired information. An IR sensor receives an IR signal transmitted from an IR beacon and supplies the received signal to a sub-notebook PC. A CCD video camera captures a visual ID from an object and supplies it to the sub-notebook PC. A user inputs, through a microphone, a voice to be attached to situations in the real world. The sub-notebook PC transmits the position data, object data and voice data supplied to it to a server through a communication unit. The transmitted data is received by the server via a wireless LAN. The server stores the received voice data in a database in correspondence to the position data and the object data. | 01-28-2016 |
20160027210 | Composite Image Associated with a Head-Mountable Device - In one aspect, an HMD is disclosed that provides a technique for generating a composite image representing the view of a wearer of the HMD. The HMD may include a display and a front-facing camera, and may be configured to perform certain functions. For instance, the HMD may be configured to make a determination that a trigger event occurred and responsively both generate a first image that is indicative of content displayed on the display, and cause the camera to capture a second image that is indicative of a real-world field-of-view associated with the HMD. Further, the HMD may be configured to generate a composite image that combines the generated first image and the captured second image. | 01-28-2016 |
20160027211 | EXTERNAL USER INTERFACE FOR HEAD WORN COMPUTING - Aspects of the present invention relate to the projection of imagery from a head-worn computer, wherein a projector with x-y control and a laser are mounted in the head-worn computer and positioned to project a raster style interactive user interface image onto a nearby surface. | 01-28-2016 |
20160027212 | ANTI-TRIP WHEN IMMERSED IN A VIRTUAL REALITY ENVIRONMENT - An HMD device with a see-through display and depth sensing capability is configured to selectively dim or fade out a display of a virtual reality environment to enable a user to see the real world without obstruction by the virtual world when a distance between the user and a real world object is determined to be less than a threshold distance. The current height of the user's head (i.e., the distance from head to ground) may be utilized when performing the dimming/fading so that different threshold distances can be used depending on whether the user is standing or seated. | 01-28-2016 |
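A minimal sketch of the distance-gated dimming the abstract above describes, with the threshold chosen from the user's head height as a posture proxy; the numeric thresholds, the linear fade band, and the posture test are invented for illustration and are not from the application:

```python
def display_opacity(obstacle_distance, head_height, fade_band=0.5):
    """Opacity of the virtual world on a see-through display.

    1.0 means fully immersed; 0.0 means the virtual world is faded out
    so the real obstacle is visible without obstruction. The threshold
    distance depends on head height (distance from head to ground): a
    standing user gets a larger safety threshold than a seated one.
    Within `fade_band` metres past the threshold the display fades
    linearly rather than cutting out abruptly.
    """
    standing = head_height > 1.2           # crude posture guess (assumed value)
    threshold = 1.5 if standing else 0.75  # metres (illustrative values)
    if obstacle_distance >= threshold + fade_band:
        return 1.0                         # fully immersed
    if obstacle_distance <= threshold:
        return 0.0                         # virtual world dimmed out
    return (obstacle_distance - threshold) / fade_band
```

A depth-sensing loop would call this every frame with the distance to the nearest real-world object and drive the renderer's global alpha from the result.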
20160027213 | GROUND PLANE ADJUSTMENT IN A VIRTUAL REALITY ENVIRONMENT - An HMD device is configured to vertically adjust the ground plane of a rendered virtual reality environment that has varying elevations to match the flat real world floor so that the device user can move around to navigate and explore the environment and always be properly located on the virtual ground and not be above it or underneath it. Rather than continuously adjust the virtual reality ground plane, which can introduce cognitive dissonance discomfort to the user, when the user is not engaged in some form of locomotion (e.g., walking), the HMD device establishes a threshold radius around the user within which virtual ground plane adjustment is not performed. The user can make movements within the threshold radius without the HMD device shifting the virtual terrain. When the user moves past the threshold radius, the device will perform an adjustment as needed to match the ground plane of the virtual reality environment to the real world floor. | 01-28-2016 |
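The threshold-radius idea above (adjust the virtual ground only when the user wanders past a radius around the last adjustment point, so the terrain is not continuously shifting underfoot) can be sketched as below; the class shape and the anchor/offset bookkeeping are assumptions for illustration (`math.dist` requires Python 3.8+):

```python
import math

class GroundPlaneAdjuster:
    """Vertically align varying virtual terrain with the flat real floor,
    re-adjusting only when the user leaves a threshold radius around the
    point of the last adjustment, to avoid cognitive-dissonance discomfort
    from continuous shifting."""

    def __init__(self, threshold_radius=1.0):
        self.threshold_radius = threshold_radius
        self.anchor = None   # (x, z) where the last adjustment was made
        self.offset = 0.0    # vertical shift applied to the virtual terrain

    def update(self, user_xz, virtual_elevation_at):
        """user_xz: the user's position on the real floor.
        virtual_elevation_at: maps (x, z) to the virtual terrain height.
        Returns the vertical offset the renderer should apply."""
        if self.anchor is None or math.dist(user_xz, self.anchor) > self.threshold_radius:
            # Re-anchor: shift the terrain so the virtual ground under the
            # user coincides with the flat real-world floor (height 0).
            self.anchor = user_xz
            self.offset = -virtual_elevation_at(*user_xz)
        return self.offset
```

Small movements inside the radius leave the terrain untouched; stepping beyond it triggers one discrete re-alignment, matching the behaviour the abstract describes.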
20160027214 | MOUSE SHARING BETWEEN A DESKTOP AND A VIRTUAL WORLD - A mixed-reality head mounted display (HMD) device supports a three dimensional (3D) virtual world application with which a real world desktop displayed on a monitor coupled to a personal computer (PC) may interact and share mouse input. A mouse input server executing on the PC tracks mouse movements on the desktop displayed on the monitor. When movement of the mouse takes it beyond the edge of the monitor screen, the mouse input server takes control of the mouse and stops mouse messages from propagating through the PC's system. The mouse input server communicates over a network connection to a mouse input client exposed by the application to inform the client that the mouse has transitioned to operating in the virtual world, and passes mouse messages describing movements and control operations such as button presses. | 01-28-2016 |
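The edge-crossing handoff in the abstract above can be sketched as a small router: once the pointer leaves the monitor bounds, local propagation stops and events are forwarded instead. The message format and the callback standing in for the networked mouse input client are assumptions, not details from the application:

```python
class MouseRouter:
    """Route mouse events to the local desktop until the pointer crosses
    the monitor edge, then capture them and forward to a virtual-world
    client (represented here by a plain callable)."""

    def __init__(self, screen_width, screen_height, send_to_client):
        self.w, self.h = screen_width, screen_height
        self.send = send_to_client       # stand-in for the network link
        self.in_virtual_world = False

    def on_mouse_move(self, x, y):
        """Return True if the event should propagate to the desktop,
        False if it was captured and forwarded to the virtual world."""
        if not (0 <= x < self.w and 0 <= y < self.h):
            self.in_virtual_world = True  # pointer left the monitor screen
        if self.in_virtual_world:
            self.send({"type": "move", "x": x, "y": y})
            return False                  # stop local message propagation
        return True
```

A companion path for handing the pointer back to the desktop would follow the same pattern in reverse; the abstract only spells out the desktop-to-virtual-world transition.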
20160027218 | MULTI-USER GAZE PROJECTION USING HEAD MOUNTED DISPLAY DEVICES - A head mounted display (HMD) device operating in a real world physical environment is configured with a sensor package that enables determination of an intersection of a projection of the device user's gaze with a location in a mixed or virtual reality environment. When a projected gaze ray is visibly rendered on other HMD devices (where all the devices are operatively coupled), users of those devices can see what the user is looking at in the environment. In multi-user settings, each HMD device user can see each other's projected gaze rays which can facilitate collaboration in a commonly-shared and experienced mixed or virtual reality environment. The gaze projection can be used much like a finger to point at an object, or to indicate a location on a surface with precision and accuracy. | 01-28-2016 |
20160027219 | AUGMENTED REALITY SYSTEMS AND METHODS USED TO IDENTIFY TOYS AND TRIGGER VIDEO AND GRAPHICS - An augmented reality system, used to identify toys and trigger video and graphics, includes an input device, one or more toy markers, a computer database, and a controller. Both the input device and the database are coupled to a network. The input device is configured to capture images and play back video, while the toy markers are configured for image capture and video playback. The controller is coupled to the input device for manipulation of video and graphics such that when one or more toys contained in a real world environment are manipulated, the toys are augmented using computer-generated sensory input and a manipulated toy image is created. | 01-28-2016 |
20160027220 | LOW LATENCY METHODOLOGIES FOR A HEADSET-MOUNTED CAMERA ON VIRTUAL REALITY DISPLAYS - Systems and methods for camera and inertial sensor integration are described. The systems and methods may include receiving inertial data from one or more inertial sensors; processing the inertial data with an inertial sensor algorithm to produce an inertial sensor position and orientation; receiving camera data from one or more cameras; processing the camera data and the inertial sensor position with a camera sensor algorithm to produce a camera position and orientation; receiving the inertial sensor position and the camera position in a Kalman filter to determine position or orientation of a user wearing a virtual reality headset; and providing the user's position or orientation to the virtual reality headset. An apparatus that incorporates these systems and methods is also set forth. | 01-28-2016 |
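The predict/correct fusion loop described above can be illustrated with a deliberately minimal one-dimensional Kalman-style filter: inertial data drives the fast prediction step, and the slower but drift-free camera position provides the correction. A real headset tracker would fuse full 3D position and orientation; all constants and names here are invented for illustration:

```python
class FusionFilter1D:
    """Minimal 1D Kalman-style filter fusing inertial and camera data."""

    def __init__(self, q=0.01, r=0.25):
        self.x = 0.0   # estimated position
        self.p = 1.0   # estimate variance
        self.q = q     # process noise (inertial drift added per step)
        self.r = r     # measurement noise (camera jitter)

    def predict(self, inertial_delta):
        """Integrate the displacement derived from the inertial sensors."""
        self.x += inertial_delta
        self.p += self.q

    def correct(self, camera_position):
        """Blend in the camera-derived position measurement."""
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (camera_position - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Between camera frames the filter runs on inertial predictions alone, which is what keeps the headset's perceived latency low; each camera measurement then pulls the estimate back toward ground truth.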
20160027221 | METHODS AND SYSTEMS FOR GENERATING AND JOINING SHARED EXPERIENCE - According to an example, a computer may receive characteristics information of an object in a video stream captured by a first computing device, generate a signature based on the characteristics information, identify augmented reality information associated with the signature, transmit the augmented reality information to the first computing device, receive, from a second computing device, a set of characteristics information of the object in an image captured by the second computing device, determine that the set of characteristics information from the second computing device has a second signature that matches the signature generated based on the characteristics information received from the first computing device, and transmit the identified augmented reality information to the second computing device. | 01-28-2016 |
20160027401 | DISPLAY, CONTROL METHOD, AND STORAGE MEDIUM - A display obtains a first image by image capturing, receives, from an external apparatus, a second image generated based on the first image, determines a communication quality with the external apparatus in reception of the second image, and controls to display the second image on a display unit until a value indicating a degree of degradation of the communication quality exceeds a first threshold as a result of determination, and display the first image on the display unit when the value indicating the degree of degradation of the communication quality exceeds the first threshold. The display controls to display the first image until the value indicating the degree of degradation of the communication quality becomes smaller than a second threshold smaller than the first threshold after exceeding the first threshold. | 01-28-2016 |
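The two-threshold switching the abstract above describes maps naturally onto a small hysteresis state machine: the remotely generated second image is shown until degradation exceeds the first threshold, and the locally captured first image is shown until degradation falls below the smaller second threshold. The class shape and threshold values are illustrative assumptions:

```python
class DisplaySelector:
    """Hysteresis switch between the remote (second) image and the
    locally captured (first) image based on link degradation."""

    def __init__(self, high=0.8, low=0.4):
        assert low < high          # second threshold is smaller than the first
        self.high, self.low = high, low
        self.showing = "remote"

    def select(self, degradation):
        if self.showing == "remote" and degradation > self.high:
            self.showing = "local"   # quality too degraded: fall back
        elif self.showing == "local" and degradation < self.low:
            self.showing = "remote"  # quality recovered past the lower bar
        return self.showing
```

The gap between the two thresholds is what prevents rapid flicker between the two images when the communication quality hovers near a single cutoff.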
20160030851 | VIRTUAL WORLD PROCESSING DEVICE AND METHOD - Disclosed are a virtual world processing device and method. By way of example, data collected from the real world is converted to binary form data which is then transmitted, or is converted to XML data, or the converted XML data is further converted to binary form data which is then transmitted, thereby allowing the data transmission rate to be increased and a low bandwidth to be used, and, in the case of a data-receiving adaptation RV engine, the complexity of the adaptation RV engine can be reduced as there is no need to include an XML parser. | 02-04-2016 |
20160034042 | WEARABLE GLASSES AND METHOD OF PROVIDING CONTENT USING THE SAME - Wearable glasses are provided. The wearable glasses include a sensing circuit, a communication interface, a display, and a controller. The sensing circuit senses movement information of a user wearing the wearable glasses. The communication interface receives notification message information. The display displays the notification message information within an angle of view of the user wearing the wearable glasses. The controller determines a movement state of the user based on the sensed movement information of the user and controls the display to display the received notification message information according to the movement state of the user. | 02-04-2016 |
20160034761 | SYSTEMS AND METHODS FOR EQUIPMENT INSTALLATION, CONFIGURATION, MAINTENANCE, AND PERSONNEL TRAINING - A method, performed by a server, for supporting equipment service at a site includes receiving, from Head Mounted Equipment (HME) associated with an installer at a site, data relating to an inventory and location of equipment at the site, wherein the data is collected by the HME during equipment service, wherein the equipment includes one or more of a circuit pack, a line module, a cable and power equipment; and checking the equipment service based on the received data and at least one of plans associated with the site and configuration rules of the equipment. | 02-04-2016 |
20160034762 | METHOD AND DEVICE FOR MAPPING SENSOR LOCATION AND EVENT OPERATION USING MONITORING DEVICE - A method, device and chipset for a monitoring device connectable with a sensor device monitoring surroundings thereof. The method includes searching for a sensor device, acquiring images for the surroundings of the monitoring device, registering location information corresponding to the sensor device, discovered through searching, using the images, and registering monitoring information including an operation performed in response to an event occurring in the discovered sensor device. The device includes a camera configured to acquire an image; a communication unit configured to transmit/receive a signal in a wired or wireless manner; a storage unit configured to register information; and a controller configured to search for a sensor device, acquire images for the surroundings of the monitoring device, register location information corresponding to the sensor device, discovered through searching, using the images, and register monitoring information including an operation performed in response to an event occurring in the discovered sensor device. | 02-04-2016 |
20160035132 | AUGMENTED REALITY OBJECTS BASED ON BIOMETRIC FEEDBACK - Technologies are generally described for refining virtual objects output within an augmented reality environment. In one example, a method includes determining, by a system comprising a processor, first response data representative of a first response to a first set of object data associated with a simulation of an interaction between a first virtual object and a second virtual object. The method also includes modifying at least one object of the first set of object data to create a second set of object data associated with another simulation of the interaction between the first virtual object and the second virtual object. Further, the method includes outputting data representative of the first virtual object, the second virtual object, and the second set of object data. | 02-04-2016 |
20160035134 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - A plurality of types of arrangement pattern candidates are generated. Each candidate is an arrangement pattern of a plurality of indices in a physical space, used to calculate the position and orientation of the viewpoint, that enables observation of a predetermined number or more of indices from a position in an area of the physical space where mixed reality can be experienced. Information representing the arrangement pattern of a plurality of indices in the physical space is generated by using the generated types of arrangement pattern candidates. | 02-04-2016 |
20160035135 | WEARABLE DEVICE AND METHOD OF CONTROLLING THEREFOR - The present specification relates to a wearable device and a method of controlling therefor. According to one embodiment, a method of controlling a wearable device includes the steps of detecting a real object and displaying a first virtual object based on the detected real object when the real object is detected, and detecting the real object and a first interaction and displaying a second virtual object when the real object and the first interaction are detected, wherein the second virtual object is displayed based on the second virtual object information transmitted by the external device. | 02-04-2016 |
20160035136 | DISPLAY APPARATUS, METHOD FOR CONTROLLING DISPLAY APPARATUS, AND PROGRAM - A head mounted display is worn on the body of a user before use and includes an image display unit that transmits an outside scene and displays an image in a visually recognizable manner along with the outside scene and a camera that performs image capturing in directions of sight lines of the user. A control section of the head mounted display includes an image generation section that generates a display image from a captured image from the camera and an image display control section that causes the image display unit to display the display image generated by the image generation section. | 02-04-2016 |
20160035137 | DISPLAY DEVICE, METHOD OF CONTROLLING DISPLAY DEVICE, AND PROGRAM - A head mounted display device includes an image display unit which is used by being mounted on a body of a user, through which outside scenery is transmitted, and which displays an image such that the image is visually recognizable together with the outside scenery. The head mounted display device includes a target detection unit that detects a target of the user in a visual line direction and a data acquisition unit that acquires data of the image displayed by the image display unit. The head mounted display device includes an image display control unit that allows the image to be displayed in a position in which the image is visually recognized by overlapping at least a part of the target detected by the target detection unit based on the data acquired by the data acquisition unit. | 02-04-2016 |
20160035138 | TRANSPARENT DISPLAY DEVICE AND CONTROL METHOD THEREOF - A transparent display device is provided, which includes: a transparent display; a camera; a graphic processor configured to generate an augmented reality (AR) object; and a controller configured to operate in at least one of a transparent AR mode in which the AR object is displayed on the transparent display and a video AR mode in which the AR object is displayed on an image captured by the camera. The controller is further configured to switch between the transparent AR mode and the video AR mode in response to an occurrence of a predetermined event. | 02-04-2016 |
20160035139 | LOW LATENCY STABILIZATION FOR HEAD-WORN DISPLAYS - Methods, systems, and computer readable media for low latency stabilization for head-worn displays are disclosed. According to one aspect, the subject matter described herein includes a system for low latency stabilization of a head-worn display. The system includes a low latency pose tracker having one or more rolling-shutter cameras that capture a 2D image by exposing each row of a frame at a later point in time than the previous row and that output image data row by row, and a tracking module for receiving image data row by row and using that data to generate a local appearance manifold. The generated manifold is used to track camera movements, which are used to produce a pose estimate. | 02-04-2016 |
20160035140 | IMAGE PROCESSING - An image processing method for a head mountable display device is provided, operable in respect of an image generated for display by the head mountable display device according to at least one of an initial position or orientation of a viewpoint. The method includes detecting one or both of a current position or orientation of the head mountable display device depending upon a display time at which the image is to be displayed. The method further includes processing the image using pixel position mappings in which at least a subset of pixels of the image are displaced by respective pixel displacements dependent upon a difference between at least one of the initial position and the detected current position, or the initial orientation and the detected current orientation at the time at which the image is to be displayed. | 02-04-2016 |
20160035141 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing device including: a data storage unit storing feature data indicating a feature of appearance of one or more physical objects; an environment map building unit for building an environment map based on an input image obtained by imaging a real space and the feature data, the environment map representing a position of a physical object present in the real space; a control unit for acquiring procedure data for a set of procedures of operation to be performed in the real space, the procedure data defining a correspondence between a direction for each procedure and position information designating a position at which the direction is to be displayed; and a superimposing unit for generating an output image by superimposing the direction for each procedure at a position in the input image determined based on the environment map and the position information, using the procedure data. | 02-04-2016 |
20160041391 | VIRTUAL REALITY SYSTEM ALLOWING IMMERSION IN VIRTUAL SPACE TO CONSIST WITH ACTUAL MOVEMENT IN ACTUAL SPACE - A virtual reality system includes a play ground defined within an actual space of a real world to have a predetermined area in which a user is actually movable, a head mounted device having a display for displaying an image of a virtual space formed corresponding to real objects in the play ground and worn by the user to surround both eyes, at least one sensor attached to a predetermined location in the play ground, the head mounted device and/or a body of the user to sense an actual location and/or motion of the user in the play ground, and a control unit for calculating an actual location and a facing direction of the user in the play ground according to a signal received from the at least one sensor, and controlling the head mounted device to display an image of the virtual space, observed at the actual location and in the facing direction of the user, on the display, wherein when the user wearing the head mounted device actually moves in the play ground, a feeling of actually moving in the virtual space is given to the user. Therefore, it is possible to provide a new-generation entertainment system where the immersion in a virtual space may consist with the actual movement in an actual space. | 02-11-2016 |
20160042233 | METHOD AND SYSTEM FOR FACILITATING EVALUATION OF VISUAL APPEAL OF TWO OR MORE OBJECTS - Disclosed herein is a computer implemented method of facilitating evaluation of visual appeal of a combination of two or more objects. The method may include presenting a user-interface to enable a user to perform a first identification of one or more first objects and a second identification of one or more second objects. Further, the method may include retrieving one or more first images of the one or more first objects based on the first identification. Additionally, the method may include retrieving one or more second images of the one or more second objects based on the second identification. Furthermore, the method may include creating a combination image based on each of the one or more first images and the one or more second images. The combination image may represent a virtual combination of each of the one or more first objects and the one or more second objects. | 02-11-2016 |
20160042563 | AUGMENTED REALITY INFORMATION MANAGEMENT - Technologies related to Augmented Reality (AR) information management are generally described. In some examples, a computing device which receives multiple AR information display requests (AR requests) may prioritize and limit the AR requests to display prioritized AR requests which are displayable by the computing device in substantially real-time. The computing device may select a set of AR requests for display within a real-time view frame, prioritize the AR requests, and display a real-time limited subset of the higher priority AR requests. The computing device may subsequently display additional AR requests according to AR request priority, such that AR request priority determines timing of displaying each respective additional AR request. | 02-11-2016 |
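The prioritize-and-limit scheme described above can be sketched with a priority queue: the highest-priority AR requests fill the real-time frame budget and the remainder are deferred in priority order. The `(priority, label)` representation and the per-frame budget parameter are assumptions for illustration:

```python
import heapq

def schedule_ar_requests(requests, per_frame_budget):
    """Order AR display requests by priority and split them into a
    subset displayable in the current frame and a deferred queue.

    `requests` is a list of (priority, label) pairs, lower number
    meaning higher priority; `per_frame_budget` caps how many requests
    the device can display in substantially real time.
    """
    # index i breaks ties so labels never need to be comparable
    heap = [(prio, i, label) for i, (prio, label) in enumerate(requests)]
    heapq.heapify(heap)
    now = [heapq.heappop(heap)[2] for _ in range(min(per_frame_budget, len(heap)))]
    deferred = [heapq.heappop(heap)[2] for _ in range(len(heap))]
    return now, deferred
```

On subsequent frames the deferred list would be drained in order, so request priority determines the timing of each additional display, as the abstract describes.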
20160042567 | IMMERSIVE DISPLAYS - An immersive display and a method of operating the immersive display to provide information relating to an object. The method includes receiving information from an input device of the immersive display or coupled to the immersive display, detecting an object based on the information received from the input device, and displaying a representation of the object on images displayed on a display of the immersive display such that attributes of the representation distinguish the representation from the images displayed on the display, wherein the representation is displayed at a location on the display that corresponds with a location of the object. | 02-11-2016 |
20160042568 | Computer system generating realistic virtual environments supporting interaction and/or modification - A computer system capable of generating realistic virtual environments […] | 02-11-2016 |
20160042569 | AUGMENTED REALITY WITH GRAPHICS RENDERING CONTROLLED BY MOBILE DEVICE POSITION - Systems and methods are provided for rendering graphics in augmented reality software based on the movement of a device in relation to a target object, in order to produce more desired rendering effects. An augmented reality graphic can be both scaled and shifted laterally compared to the target based on a position of the device, and can then be cropped to match the target. Scaling and shifting related to movement parallel to the target can be performed using a first (parallel) function, and scaling and shifting related to movement toward and away from the target can be performed using a second (perpendicular) function. Both functions can be chosen to ensure that an edge of the augmented image is not passed over so as to provide blank space. | 02-11-2016 |
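The split above into a parallel (lateral) function and a perpendicular (depth) function, with the lateral shift clamped so the cropped graphic never exposes blank space past the image edge, can be sketched as follows; the gain constants, clamp limit, and function shape are invented for illustration:

```python
def overlay_transform(device_offset, depth, base_scale=1.0,
                      k_parallel=0.5, k_perpendicular=0.1, max_shift=0.2):
    """Compute scale and lateral shift for an AR graphic from the
    device's position relative to the target object.

    device_offset: (dx, dy) displacement parallel to the target plane;
    depth: distance toward/away from the target. Separate functions
    handle the parallel and perpendicular movement components.
    """
    dx, dy = device_offset
    # perpendicular function: moving closer enlarges the graphic
    scale = base_scale * (1.0 + k_perpendicular / max(depth, 0.1))
    # parallel function: lateral movement shifts the graphic, clamped so
    # cropping to the target never passes over the graphic's edge
    shift_x = max(-max_shift, min(max_shift, k_parallel * dx / depth))
    shift_y = max(-max_shift, min(max_shift, k_parallel * dy / depth))
    return scale, (shift_x, shift_y)
```

Dividing the lateral gain by depth makes nearby movement produce a stronger parallax response than the same movement far from the target, which is the kind of effect the abstract attributes to the two functions.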
20160042570 | DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND PROGRAM - There is provided a display control device including an action information acquisition unit that acquires, at an action position of one actor, action information regarding a past action of another actor, an object generation unit that generates a virtual object for virtually indicating a position of the other actor during an action of the one actor based on the acquired action information, and a display control unit that causes a display unit displaying a surrounding scene to superimpose and display the generated virtual object during the action of the one actor. | 02-11-2016 |
20160048230 | IMAGE PROCESSING APPARATUS AND METHOD, AND PROGRAM - An image clipping unit clips a region on an input image where a virtual image subjected to touch operation by a user is displayed to obtain a clipped image, and a distance calculating unit performs stereo-matching on left and right clipped images to obtain a distance to each of objects on the clipped image. A touch coordinate calculating unit obtains a touch position of the user based on information of the distance, and a touch processing unit performs processing according to the touch position. A short distance image separating unit separates the object existing closer to the user side than the virtual image from the clipped image, using the information, to obtain a short distance image. The short distance image is combined with the virtual image, and the virtual image after the combining is combined with the input image to be displayed. | 02-18-2016 |
20160048636 | DISTRIBUTED APPLICATION WINDOWS - Embodiments for aggregating multiple data sources on a single display device are provided. In one example, a computing device comprises a plurality of inputs each configured to receive data output from a respective data source, a display output to send a multiple element screen for display on a display device, and instructions to capture digitized data received from each of the data sources via the plurality of inputs into respective screen elements. The computing device includes further instructions to, for a selected captured screen element, process the captured screen element, select at least a portion of the captured screen element for inclusion in a display information element of the multiple element screen, and output the multiple element screen to the display device via the display output. | 02-18-2016 |
20160048732 | DISPLAYING INFORMATION RELATING TO A DESIGNATED MARKER - A method and system for displaying information relating to a designated marker is provided. An image including the designated marker is acquired. The designated marker is extracted from the acquired image. A type of the designated marker is identified from the extracted marker. The identified type of the designated marker is communicated to a server, and in response, marker information identified from the type of the designated marker is obtained from the server. The marker information relates to the designated marker and identifies at least two other markers. Relative positional information of the device in relation to the extracted marker is determined. A displayed informational image includes the designated marker and at least one other marker of the at least two other markers, which are displayed in accordance with a determined relative position between the designated marker and each marker of the at least one other marker. | 02-18-2016 |
20160048964 | SCENE ANALYSIS FOR IMPROVED EYE TRACKING - Technologies related to scene analysis for improved eye tracking are generally described. In some examples, detected gaze targets may be derived from gaze direction information from an eye-facing sensor. Detected gaze target positions and/or motion may be improved by capturing and analyzing digital scene information from a scene visible by the eye. Digital scene information captured by a digital camera may be analyzed to identify potential gaze targets, such as stationary gaze targets, moving gaze targets, and/or accelerating gaze targets. Detected gaze targets may be modified to positions of selected gaze targets. | 02-18-2016 |
20160049008 | CONTENT PRESENTATION IN HEAD WORN COMPUTING - Aspects of the present invention relate to methods and systems for presenting digital content in a field of view of a head-worn computer. | 02-18-2016 |
20160049009 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - An image processing device includes memory; and a processor configured to execute a plurality of instructions stored in the memory, the instructions comprising: recognizing a target object from a first image, which is a captured image, including the target object in a real world; controlling a second image, which is an augmented image, including information of the target object from the first image, and a third image which is an augmented image of the second image and is formed so as to inscribe an outer edge surrounding the second image and cover a center of a visual field of a user relative to the second image; and displaying, in a state where the user directly visually recognizes the target object in the real world, the second image and the third image such that the second image and the third image are caused to correspond to a position. | 02-18-2016 |
20160049010 | DOCUMENT INFORMATION RETRIEVAL FOR AUGMENTED REALITY DISPLAY - According to some embodiments of the present invention there is provided a computerized method to generate an augmented document display. The method may comprise receiving a document image from an augmented reality device, wherein the document image images a document currently viewed by a user. The method may comprise processing the document image to identify an image content data of the document. The method may comprise selecting one of two or more document records based on the image content data. The method may comprise identifying a current workflow step of the document from two or more document workflow steps, using the selected document record. The method may comprise determining one or more document support data based on the current workflow step. The method may comprise instructing the augmented reality device to display the one or more document support data when the document is viewed by the user. | 02-18-2016 |
20160049011 | DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND PROGRAM - Provided is a display control device including: a display control unit configured to control a display unit of a terminal device to display an image of a real space in which the terminal device is present. The display control unit selects an image to be displayed by the display unit from a first image which has continuity with the image of the real space viewed in a region other than the display unit and a second image which is generated based on a captured image of the real space and is independent from the image of the real space viewed in the region other than the display unit. | 02-18-2016 |
20160049012 | HEAD MOUNTED DISPLAY DEVICE, CONTROL METHOD THEREOF, AND COMPUTER PROGRAM - A transmission type head mounted display device includes an image display unit configured to display an image, cause a user wearing the head mounted display device to visually recognize the image, and transmit an outside scene; a movement detection unit configured to detect that the head mounted display device moves to a specific place; and a processing control unit configured to change at least a part of predetermined functions mounted on the head mounted display device. | 02-18-2016 |
20160049013 | Systems and Methods for Managing Augmented Reality Overlay Pollution - A system and method enabling an Augmented Reality (AR) capable system to manage the displaying of AR overlays in ways that prevent possible AR user distractions, respect AR users' privacy, and prevent interference or conflict with other AR overlays that may appear in an AR user's field of view. | 02-18-2016 |
20160054793 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - Provided is an image processing device including: an image processing unit configured to generate an output image using a first image obtained by imaging a real space from a first viewpoint as an input image. Based on a position and a posture of the first viewpoint in the real space, the image processing unit generates, as the output image, a second image obtained by virtually imaging the real space from a second viewpoint having a position and a posture different from the position and the posture of the first viewpoint in the real space. | 02-25-2016 |
20160054797 | Thumb Controller - A glove interface object is provided, comprising: an index finger portion; a resistance circuit including an index resistor defined along a length of the index finger portion, and a power source for applying a current across the index resistor; a thumb portion; a probe circuit including a thumb contact defined on the thumb portion, the probe circuit configured to read a voltage, via the thumb contact, at a location along the length of the index resistor at which contact between the thumb contact and the index resistor occurs; a data processing module for processing the voltage read by the probe circuit to generate thumb-index control data that is indicative of the location at which the contact between the thumb contact and the index resistor occurs; a communications module configured to send the thumb-index control data to a computing device for processing to define a setting for a virtual environment. | 02-25-2016 |
20160055640 | Techniques for Accurate Pose Estimation - The described technology regards an augmented reality system and method for estimating a position of a location of interest relative to the position and orientation of a display, including a retroactive update to a previously rendered estimate of the position and orientation of the display stored in a rewind buffer. Systems of the described technology include a plurality of sensors, a processing module or other computation means, and a database. Methods of the described technology use data from the sensor package to accurately render graphical user interface information on a display. | 02-25-2016 |
20160055656 | Techniques for Accurate Pose Estimation - The described technology regards an augmented reality system and method for estimating a position of a location of interest relative to the position and orientation of a display based upon a retroactive adjustment of a previously rendered position and orientation of the display, by means of an adjust-update-predict (AUP) cycle, and calculating the location of interest relative to the position and orientation of the display. Systems of the described technology include a plurality of sensors, a processing module or other computation means, and a database. Methods of the described technology use data from the sensor package to accurately render graphical user interface information on a display. | 02-25-2016 |
20160055673 | DISTRIBUTED APERTURE VISUAL INERTIA NAVIGATION - A system and method for visual inertial navigation for augmented reality are described. In some embodiments, at least one camera of a wearable device generates a plurality of video frames. At least one inertial measurement unit (IMU) sensors of the wearable device generates IMU data. Features in the plurality of video frames for each camera are tracked. The plurality of video frames for each camera are synchronized and aligned based on the IMU data. A dynamic state of the wearable device is computed based on the synchronized plurality of video frames with the IMU data for each camera. Augmented reality content is generated and positioned in a display of the wearable device based on the dynamic state of the wearable device. | 02-25-2016 |
20160055674 | EXTRACTING SENSOR DATA FOR AUGMENTED REALITY CONTENT - A system and method for extracting data for augmented reality content are described. A device identifies a sensing device using an image captured with at least one camera of the device. Visual data are extracted from the sensing device. The device generates an AR content based on the extracted visual data and maps and displays the AR content in the display to form a layer on the sensing device. | 02-25-2016 |
20160055675 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - Provided is a display control device including: a display control unit configured to control a first display unit included in a first terminal device. The display control unit selects a criterion for disposition of a virtual object displayed in a real space via the first display unit from positional information of the real space and a real object in the real space. | 02-25-2016 |
20160055676 | DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND PROGRAM - Provided is a display control device including: a display control unit configured to control a display unit of a terminal device. The display control unit performs control to decide a display position of a virtual object displayed in a real space via the display unit based on positional information associated with the virtual object in the real space and display the virtual object in the real space based on the display position, and control to display a notification indicating presence of the virtual object in the real space when a part or all of the virtual object is outside of a visible range of the real space. | 02-25-2016 |
20160055677 | Methods and Systems for Augmented Reality to Display Virtual Representations of Robotic Device Actions - Example methods and systems for augmented reality interfaces to display virtual representations of robotic device actions are provided. An example method includes receiving information that indicates an action or an intent of a robotic device to perform a task, and the action or the intent includes one or more of a planned trajectory of the robotic device to perform at least a portion of the task and an object to be handled by the robotic device to perform at least a portion of the task. The method also includes providing, for display by a computing device on an augmented reality interface, a virtual representation of the action or the intent, and the virtual representation includes as annotations on the augmented reality interface at least a portion of the planned trajectory of the robotic device or highlighting the object to be handled by the robotic device. | 02-25-2016 |
20160055678 | Techniques for Accurate Pose Estimation in Outdoor Environments - The described technology regards an augmented reality system and method for estimating a position of a location of interest relative to the position and orientation of a display, using a forward buffer to store current and predicted position estimates calculated by the methods of the present invention. Systems of the described technology include a plurality of sensors, a processing module or other computation means, and a database. Methods of the described technology use data from the sensor package to accurately render graphical user interface information on a display. | 02-25-2016 |
20160055679 | Techniques for Accurate Pose Estimation in Outdoor Environments - The described technology regards an augmented reality system and method for estimating a position of a location of interest relative to the position and orientation of a display, including receiving and selectively filtering a plurality of measurement vectors from a rate-gyroscope. Systems of the described technology include a plurality of sensors, a processing module or other computation means, and a database. Methods of the described technology use data from the sensor package to accurately render graphical user interface information on a display. | 02-25-2016 |
20160055680 | METHOD OF CONTROLLING DISPLAY OF ELECTRONIC DEVICE AND ELECTRONIC DEVICE - A method of controlling a display of an electronic device and the electronic device thereof are provided. The method includes measuring an amount of movement of a user, when the movement of the user is detected while a Virtual Reality (VR) operation is provided to the user; comparing the amount of the movement with a threshold value corresponding to a type of content being used by the user; and changing the VR operation into a see-through operation, when the amount of the movement is greater than the threshold value corresponding to the type of the content. | 02-25-2016 |
20160062454 | HEAD-MOUNTED DISPLAY APPARATUS - A head-mounted display apparatus may include: a main frame, one surface of which faces a user's face; and a support part coupled to at least part of the main frame to fix the main frame to the user's face, wherein the main frame has a cavity structure such that an electronic device is mounted on an opposite surface thereof and includes a position adjustment part for adjusting the position of an electronic device, and a structure for preventing the electronic device from being tilted during the position adjustment is included in the interior of the main frame. | 03-03-2016 |
20160062459 | SYSTEMS AND METHODS FOR BIOMECHANICALLY-BASED EYE SIGNALS FOR INTERACTING WITH REAL AND VIRTUAL OBJECTS - Systems and methods are provided for discerning the intent of a device wearer primarily based on movements of the eyes. The system can be included within unobtrusive headwear that performs eye tracking and controls screen display. The system can also utilize remote eye tracking camera(s), remote displays and/or other ancillary inputs. Screen layout is optimized to facilitate the formation and reliable detection of rapid eye signals. The detection of eye signals is based on tracking physiological movements of the eye that are under voluntary control by the device wearer. The detection of eye signals results in actions that are compatible with wearable computing and a wide range of display devices. | 03-03-2016 |
20160063327 | Wearable Device To Display Augmented Reality Information - A wearable device configured to display augmented reality (AR) information to a user wearing the wearable device is disclosed. The wearable device includes a screen and a set of mirrors including a first mirror and a second mirror. The screen is configured to display an overlay AR image including AR information associated with a real world scene of a surrounding environment of the user. The first mirror is configured to reflect the overlay AR image displayed on the screen to the second mirror. The second mirror is configured to simultaneously a) receive and reflect the overlay AR image reflected by the first mirror and b) transmit the real world scene, such that the second mirror displays a mixed image of the AR information and the real world scene to the user. | 03-03-2016 |
20160063761 | COMMUNICATION OF SPATIAL INFORMATION BASED ON DRIVER ATTENTION ASSESSMENT - The disclosure includes a system and method for spatial information for a heads-up display. The system includes a processor and a memory storing instructions that, when executed, cause the system to: receive object data about an object, determine a vehicle path for the vehicle, estimate a danger index for the object based on the vehicle data and the object data, detect a user's gaze, determine whether the user sees the object based on the user's gaze, identify a graphic that is a representation of the object, and position the graphic to correspond to the user's eye frame. | 03-03-2016 |
20160063762 | MANAGEMENT OF CONTENT IN A 3D HOLOGRAPHIC ENVIRONMENT - Methods for managing content within an interactive augmented reality environment are described. An augmented reality environment may be provided to an end user of a head-mounted display device (HMD) in which content (e.g., webpages) may be displayed to the end user using one or more curved slates that are positioned on a virtual cylinder that appears body-locked to the end user. The virtual cylinder may be located around the end user with the end user positioned in the middle of the virtual cylinder such that the one or more curved slates appear to be displayed at the same distance from the end user. The position and size of each of the one or more curved slates may be controlled by the end user using head gestures and a virtual pointer projected onto the virtual cylinder. | 03-03-2016 |
20160063763 | IMAGE PROCESSOR AND INFORMATION PROCESSOR - An image processor according to the present embodiment is an image processor for processing an image of an object visible through a transparent display. The image processor includes an acquisition unit and a controller. The acquisition unit acquires display information corresponding to the object and obtained by performing recognition processing on the image. The controller displays, on the transparent display, the display information. | 03-03-2016 |
20160063764 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT - An image processing apparatus includes: a setting unit that sets, when a setting instruction has been received from a user, a reference plane for arranging a virtual object in the real space, according to a detected first posture information of a photographing unit that photographs a real space; a deriving unit that derives a first relative direction of the reference plane to a photographing direction of the photographing unit; a first calculating unit that calculates second posture information of the reference plane located in the first relative direction; and a display control unit that performs control of displaying a superimposed image, in which an object image of a drawn virtual object in a posture of the second posture information is superimposed at an area corresponding to the reference plane in a real-space image taken by the photographing unit, on a display unit. | 03-03-2016 |
20160063765 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes: a first acquisition unit that acquires a document image; a first reception unit that receives specification of a display region of the document image on a screen of a display unit; an arrangement unit that calculates a virtual region corresponding to the specified display region in a virtual three-dimensional space; and a display control unit that performs control to display, on the display unit, a superimposition image formed by superimposing a background image and a two-dimensional document image obtained by projecting a three-dimensional document image formed by arranging the document image in the calculated virtual region onto a two-dimensional space visually recognized from a predetermined viewpoint position as a preview image estimating a print result of the document image. | 03-03-2016 |
20160063766 | METHOD AND APPARATUS FOR CONTROLLING THE NOTIFICATION INFORMATION BASED ON MOTION - Various embodiments of the present disclosure relates to a method of controlling notification information based on a user's movement in a virtual reality environment. The method comprises displaying a virtual reality (VR) content execution screen; displaying a notification icon on at least a portion of the VR content execution screen when a notification is received; determining that a position of a user's sight line reaches a position at which the notification icon is displayed; and displaying notification information corresponding to the notification icon on the VR content execution screen. | 03-03-2016 |
20160063768 | SYSTEM AND METHOD OF OPERATION FOR REMOTELY OPERATED VEHICLES WITH SUPERIMPOSED 3D IMAGERY - The present invention provides a system and method of utilizing superimposed 3D imagery for remotely operated vehicles, namely 3D reconstructed images of the environment of the ROV. In another aspect, it includes generating a virtual video of 3D elements in the operation environment, synchronizing the angle and position of the camera of a virtual video with the angle and position of a real camera, and superimposing the virtual video and the real video from the real camera; these video feeds are superimposed such that one is manipulated to show transparencies in areas of less interest, in order to show through the other video. It furthermore may include superimposing information, whether graphic, textual or both, on to the hybrid virtual-real 3D imagery. The subject invention is also networked, such that the immersive visual interface described above is accessible to a plurality of users operating from a plurality of locations. | 03-03-2016 |
20160070103 | CORRECTIVE OPTICS FOR REDUCING FIXED PATTERN NOISE IN A VIRTUAL REALITY HEADSET - A virtual reality (VR) headset includes an electronic display element and a corrective optics block. The electronic display element outputs image light via a plurality of sub-pixels of different colors that are separated from each other by a dark space. The corrective optics block includes an optical element including a diffractive surface. The corrective optics block is configured to magnify the image light and generate optically corrected image light by using the diffractive surface to generate blur spots of each sub-pixel masking dark space between adjacent sub-pixels. Optically corrected light is directed from the corrective optics block to an exit pupil of the VR headset for presentation to a user. | 03-10-2016 |
20160070343 | INFORMATION PROCESSING METHOD AND ELECTRONIC DEVICE - An information processing method is disclosed. The method comprises determining whether a head-mounted electronic device has changed from a first position to a second position, wherein the head-mounted electronic device comprises an image capture unit and a display unit for displaying a virtual scene; and capturing an image of an environment where a user of the head-mounted electronic device is located by using the image capture unit of the head-mounted electronic device and displaying the image, or a part thereof, via the display unit of the head-mounted electronic device for viewing by the user. Also disclosed is an information processing method and an electronic device. The method comprises: obtaining environment information about a real-world environment where a display is located with the sensor and generating security warning information based on the environment information; and presenting, via the display, a virtual scene independent from the real-world environment along with the security warning information. | 03-10-2016 |
20160071319 | METHOD TO USE AUGMENTED REALITY TO FUNCTION AS HMI DISPLAY - A portable computing device includes a camera, a display screen, and a processor. The processor is configured to identify an electronic device from an image of the electronic device captured by the camera, display a virtual human-machine interface (HMI) associated with the electronic device on the display screen, receive user commands through the virtual HMI, and cause the user commands to be communicated to the electronic device. | 03-10-2016 |
20160071320 | HUD Object Design and Method - The invention features a rectangular 3-D modeling grid called a display environment that may be mapped to one or more sensor(s) to provide a heads up display device the ability to generate and view an Augmented Reality first-person view of custom 3-D objects. Location sensors create the positioning and perimeter of the display environment. The display environment may be navigated by the combination of the display device's physical movement sensed by motion sensors and the display device's physical location based on its proximity to synchronized location sensors. Sensors on the display device recognize when the device is moving with respect to the display environment to initiate re-rendering of the 3-D model being displayed. Movement of the display device enables first-person 3-D model illustrative and perspective views, which may also be used to design 3-D models with customizable scale, orientation, positioning, physics, and artificial intelligence. | 03-10-2016 |
20160071323 | IMMERSIVE DISPLAYS - A method of displaying images on an immersive display. The method includes receiving information from an external sensor or input device of the immersive display, based on the information received, detecting an object that conflicts with a virtual reality space, adjusting at least one dimension of virtual reality space to provide an adjusted virtual reality for display on the immersive display to accommodate for the object, and displaying the adjusted virtual reality on the display of the immersive display. | 03-10-2016 |
20160071325 | GENERATING AUGMENTED REALITY IMAGES USING SENSOR AND LOCATION DATA - Embodiments relate to using sensor data and location data from a device to generate augmented reality images. A mobile device pose can be determined (a geographic position, direction and a three dimensional orientation of the device) within a location. A type of destination in the location can be identified and multiple destinations can be identified, with the mobile device receiving queue information about the identified destinations from a server. A first image can be captured. Based on the queue information, one of the identified destinations can be selected. The geographic position of each identified destination can be identified, and these positions can be combined with the mobile device pose to generate a second image. Finally, an augmented reality image can be generated by combining the first image and the second image, the augmented reality image identifying the selected one destination. | 03-10-2016 |
20160071326 | SYSTEM AND METHOD FOR SELECTING TARGETS IN AN AUGMENTED REALITY ENVIRONMENT - Techniques are disclosed for facilitating electronic commerce in an augmented reality environment. In some embodiments, a method comprises detecting, by a mobile device, presence of the physical product or the real life service; and presenting, on the mobile device, information to conduct the transaction of a physical product or a real life service via the augmented reality environment. In some embodiments, a method comprises detecting one or more targets in the augmented reality platform using a select area in a perspective of a user, the perspective being captured by a mobile device; and prompting the user to choose an object of interest from the one or more detected targets. | 03-10-2016 |
20160077343 | System and Method for Video Image Registration and/or Providing Supplemental Data in a Heads Up Display - Video sources and inertial sensors are attached to a weapon and to goggles. A computer receives video images from the weapon- and goggles-mounted sources and inertial data from the sensors. The computer calculates a location for an image from the weapon-mounted source within an image from the goggles-mounted source using the inertial sensor data. The sensor-based location is checked (and possibly adjusted) based on a comparison of the images. A database contains information about real-world objects in a field of view of the goggles-mounted source, and is used to generate icons or other graphics concerning such objects. | 03-17-2016 |
20160078641 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - A warning information output outputs warning information for a user, based on a position of the user and a position of a real object with which data is associated in a case where a first attribute indicates outputting of the warning information for the user. An image output outputs an image to be generated, in a case where a second attribute indicates drawing of a virtual object in accordance with the position of the real object and a position at which the virtual object to be viewed by the user is arranged, by not drawing the virtual object in a region in which the position of the real object with which the data is associated is closer to the position of the user than the position at which the virtual object is arranged. | 03-17-2016 |
20160078679 | CREATING A VIRTUAL ENVIRONMENT FOR TOUCHLESS INTERACTION - This disclosure is directed to a touchless interactive environment. An input device may be configured to capture electronic images corresponding to physical objects detectable within a physical three-dimensional region. A computer system may establish a virtual three-dimensional region mapped to the physical three-dimensional region, with the virtual three-dimensional region defining a space where a plurality of virtual objects are instantiated based on the plurality of electronic images. The computer system may select a virtual object from the plurality of virtual objects as one or more commanding objects, with the one or more commanding objects indicating a command of a graphical user interface to be performed based on a position of the one or more commanding objects. The computer system may then perform the command of the graphical user interface based on the position of the one or more commanding objects. | 03-17-2016 |
20160078680 | TECHNOLOGIES FOR ADJUSTING A PERSPECTIVE OF A CAPTURED IMAGE FOR DISPLAY - Technologies for adjusting a perspective of a captured image for display on a mobile computing device include capturing a first image of a user by a first camera and a second image of a real-world environment by a second camera. The mobile computing device determines a position of an eye of the user relative to the mobile computing device based on the first captured image and a distance of an object in the real-world environment from the mobile computing device based on the second captured image. The mobile computing device generates a back projection of the real-world environment captured by the second camera to the display based on the determined distance of the object in the real-world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device. | 03-17-2016 |
20160078681 | WORKPIECE MACHINING WORK SUPPORT SYSTEM AND WORKPIECE MACHINING METHOD - The system includes an imaging unit configured to image a work space at a viewpoint position in a visual line direction of a worker together with a workpiece, a position attitude information obtaining unit configured to obtain position attitude information which indicates a relative position attitude relation between a viewpoint of the worker and the workpiece in the work space, a virtual image generating unit configured to generate a three-dimensional virtual image which indicates a completed shape of the workpiece in the viewpoint position and the visual line direction of the worker based on the position attitude information, an image composing unit configured to generate a composite image by superimposing the virtual image on a real image of the work space, and a display unit configured to display the composite image. According to the system, the efficiency of workpiece machining work can be considerably improved by using mixed reality technology. | 03-17-2016 |
20160078682 | COMPONENT MOUNTING WORK SUPPORT SYSTEM AND COMPONENT MOUNTING METHOD - The system has an imaging unit configured to image a work space at a viewpoint position in a visual line direction of a worker together with a workpiece on which a component is mounted, a position attitude information obtaining unit configured to obtain position attitude information which indicates a relative position attitude relation between the viewpoint of the worker and the workpiece in the work space, a virtual image generating unit configured to generate a three-dimensional virtual image indicating the component after being mounted at the viewpoint position in the visual line direction, an image composing unit configured to generate a composite image by superimposing the virtual image on a real image of the work space, and a display unit configured to display the composite image. According to the system, the efficiency of the component mounting work can be considerably improved by using mixed reality technology. | 03-17-2016 |
20160078683 | MARKER-BASED AUGMENTED REALITY AUTHORING TOOLS - An augmented reality-based content authoring tool is presented. A content author arranges machine-recognizable markers in a physical environment. A computing device operating as the authoring tool recognizes the markers and their arrangement based on a captured digital representation of the physical environment. Once recognized, augmented reality primitives corresponding to the markers can be bound together via their primitive interfaces to give rise to a content set. The individual primitives and content set are instantiated based on the nature of the marker's arrangement. | 03-17-2016 |
20160078684 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM - To notify a user of an area where marker-based position and orientation measurement becomes unstable, an information processing apparatus according to the present specification includes a first acquisition unit configured to acquire arrangement information and size information of a marker arranged in a real space, a second acquisition unit configured to acquire information about an imaging apparatus for capturing the real space, an unstable area derivation unit configured to derive an unstable area where the imaging apparatus is unable to stably detect the marker arranged in the real space, based on the arrangement information and the size information of the marker and the information about the imaging apparatus, and an output unit configured to output the area derived by the unstable area derivation unit. | 03-17-2016 |
20160078685 | DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND RECORDING MEDIUM - There is provided a display control device including a viewpoint acquisition unit configured to acquire a viewpoint of a user detected by a viewpoint detection unit; and a display control unit configured to control a display unit so that a virtual object is stereoscopically displayed by the display unit. The display control unit is configured to control a position in a depth direction of the virtual object presented to the user based on the viewpoint. | 03-17-2016 |
20160078686 | TRANSMISSIVE DISPLAY APPARATUS AND OPERATION INPUT METHOD - A transmissive display apparatus includes an operation section that detects an operational input issued through an operation surface, an image output section for the left eye and an image output section for the right eye that output predetermined image light, an image pickup section that visually presents the predetermined image light in an image pickup area that transmits externally incident light, a sensor that outputs a signal according to a positional relationship between the operation surface and the image pickup area, and a determination section that detects overlap between an optical image of the operation surface that has passed through the image pickup area and the predetermined image in the image pickup area based on the signal according to the positional relationship and receives the operational input when the optical image of the operation surface overlaps with the predetermined image. | 03-17-2016 |
20160085300 | WAVEGUIDE EYE TRACKING EMPLOYING SWITCHABLE DIFFRACTION GRATINGS - A transparent waveguide, for use in tracking an eye illuminated by infrared light, includes an input-coupler and an output-coupler. The input-coupler includes a stack of electronically switchable diffractive gratings arranged parallel to one another, each of which has a respective lens power that causes each of the gratings in the stack to have a different focal length. Each grating, when turned on, couples received infrared light into the waveguide. A sensor images an eye in dependence on infrared light beams that exit the waveguide at the output-coupler. Images of an eye, obtained using the sensor, are analyzed to determine which one of the electronically switchable diffractive gratings, when turned on, provides a best focused image of the eye or portion thereof. The one of the electronically switchable diffractive gratings, which provides the best focused image of the eye, is used for imaging the eye during eye tracking. | 03-24-2016 |
20160085302 | SYSTEMS AND METHODS FOR BIOMECHANICALLY-BASED EYE SIGNALS FOR INTERACTING WITH REAL AND VIRTUAL OBJECTS - Systems and methods are provided for discerning the intent of a device wearer primarily based on movements of the eyes. The system can be included within unobtrusive headwear that performs eye tracking and controls screen display. The system can also utilize remote eye tracking camera(s), remote displays and/or other ancillary inputs. Screen layout is optimized to facilitate the formation and reliable detection of rapid eye signals. The detection of eye signals is based on tracking physiological movements of the eye that are under voluntary control by the device wearer. The detection of eye signals results in actions that are compatible with wearable computing and a wide range of display devices. | 03-24-2016 |
20160086372 | Three Dimensional Targeting Structure for Augmented Reality Applications - A method is provided for obtaining AR information for display on a mobile interface device. The method comprises placing a three dimensional targeting structure in a target space, the targeting structure comprising a plurality of planar, polygonal facets each having a unique target pattern applied thereto. A position of the targeting structure relative to the target space is then determined. The method further comprises capturing an image of a portion of the target space including the targeting structure and identifying the unique target pattern of one of the plurality of facets visible in the captured image. The method also comprises establishing a pose of the mobile interface device relative to the target space using the captured image and the position of the targeting structure, obtaining AR information associated with the unique target pattern of the particular one of the plurality of facets, and displaying the AR information on the mobile interface device. | 03-24-2016 |
20160086377 | DETERMINING AN IMAGE TARGET'S SUITABILITY FOR COLOR TRANSFER IN AN AUGMENTED REALITY ENVIRONMENT - Disclosed are example methods, apparatuses, and articles of manufacture for determining and providing a suitability of an image target for Color Transfer. In an example embodiment, a method, which may be implemented using a computing device, may comprise: receiving image data representative of the image target; determining a suitability of the image target for Color Transfer based, at least in part, on one or more colors of the image data; and providing an indication indicative of the suitability of the image target for Color Transfer. | 03-24-2016 |
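The entry above scores an image target's suitability for Color Transfer based on its colors. One plausible scoring heuristic (an assumption for illustration, not the patented method) is color diversity: a target with a wider spread of distinct colors gives a transfer more to work with.

```python
# Assumed heuristic, not from the patent: score suitability for colour
# transfer by the fraction of distinct colours among the sampled pixels.

def color_transfer_suitability(pixels):
    """pixels: iterable of (r, g, b) tuples; returns a score in [0, 1]."""
    pixels = list(pixels)
    if not pixels:
        return 0.0
    return len(set(pixels)) / len(pixels)

flat   = color_transfer_suitability([(10, 10, 10)] * 8)          # one colour
varied = color_transfer_suitability([(i, 0, 0) for i in range(8)])  # eight colours
```

The score could then drive the "indication" the abstract mentions, for example by thresholding it into suitable and unsuitable bands.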
20160086378 | IMMERSIVE DISPLAYS - A method of providing information for display on a display of an immersive display. The method includes obtaining information utilized for displaying a first image in front of a first eye and a second image in front of a second eye of a user of the immersive display, excluding part of the information to yield adjusted information to occlude or replace first information from a first area of the first image and second information from a second area of the second image when displayed on the immersive display, and providing the adjusted information for displaying the first image absent the first area and the second image absent the second area on the immersive display. | 03-24-2016 |
20160086379 | INTERACTION WITH THREE-DIMENSIONAL VIDEO - In one embodiment, a method includes presenting to a user, on a display of a head-worn client computing device, a three-dimensional video including images of a real-life scene that is remote from the user's physical environment. The method also includes presenting to the user, on the display of the head-worn client computing device, a graphical object including an image of the user's physical environment or a virtual graphical object. | 03-24-2016 |
20160086380 | HYPERSPECTRAL IMAGER - A system for augmenting a surgeon's view of a surgical field includes a hyperspectral imager and a display. The hyperspectral imager is configured to provide a hyperspectral image of tissue and anatomical structures in a surgical field. The display displays the image provided by the hyperspectral imager, and the image is registered with an actual view of the surgical field as seen by the surgeon thereby augmenting the surgeon's view of the surgical field. | 03-24-2016 |
20160086381 | METHOD FOR PROVIDING VIRTUAL OBJECT AND ELECTRONIC DEVICE THEREFOR - An electronic device and method for providing a virtual object are disclosed, including a processor and memory storing program instructions executable by the processor to implement the method, which includes receiving a request for a virtual object including a plurality of present conditions, searching a database for the virtual object by comparing the received plurality of present conditions to a plurality of condition sets, each set associated with at least one virtual object stored in the database; and when none of the plurality of condition sets matches all of the received plurality of present conditions, detecting a partially matching condition set matching at least one of the received plurality of present conditions and providing a partially matching virtual object corresponding to the partially matching condition set. | 03-24-2016 |
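The lookup in the entry above — exact match on all present conditions, with a fallback to a partially matching condition set — can be sketched as follows. This is a minimal sketch under assumptions: the database layout, names, and tie-breaking are illustrative, not the patented implementation.

```python
# Sketch of condition-set matching with a partial-match fallback.
# database: list of (condition_set, virtual_object) pairs — an assumed layout.

def find_virtual_object(present, database):
    present = set(present)
    # Exact match: a stored set identical to the present conditions.
    for conditions, obj in database:
        if present == set(conditions):
            return obj
    # Partial match: the stored set sharing the most present conditions.
    best = max(database, key=lambda entry: len(present & set(entry[0])))
    return best[1] if present & set(best[0]) else None

db = [({"indoors", "night"}, "lamp"),
      ({"indoors", "day"}, "window_light")]
exact   = find_virtual_object({"indoors", "day"}, db)            # full match
partial = find_virtual_object({"indoors", "night", "rain"}, db)  # best overlap
```

In the partial case no stored set matches every present condition, so the object whose condition set overlaps the most is returned instead.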
20160086382 | PROVIDING LOCATION OCCUPANCY ANALYSIS VIA A MIXED REALITY DEVICE - The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person, in a location of the user, who satisfies the person selection criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. An identifier and a position indicator of the person in the location are output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud-based executing software can identify and track the positions of people based on image and non-image data from display devices in the location. | 03-24-2016 |
20160086383 | Object Outlining to Initiate a Visual Search - Methods and devices for initiating a search of an object are disclosed. In one embodiment, a method is disclosed that includes receiving sensor data from a sensor on a wearable computing device and, based on the sensor data, detecting a movement that defines an outline of an area in the sensor data. The method further includes identifying an object that is located in the area and initiating a search on the object. In another embodiment, a server is disclosed that includes an interface configured to receive sensor data from a sensor on a wearable computing device, at least one processor, and data storage comprising instructions executable by the at least one processor to detect, based on the sensor data, a movement that defines an outline of an area in the sensor data, identify an object that is located in the area, and initiate a search on the object. | 03-24-2016 |
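The outline-to-search flow in the entry above can be sketched minimally: reduce the traced outline to its bounding box, then crop that region for submission to a visual search backend. The point representation, image layout, and names below are assumptions for illustration, not the patented implementation.

```python
# Sketch: turn a traced outline into a bounding box, then crop the region
# that would be handed to a visual search. Image is a 2-D list of pixels.

def outline_bounding_box(points):
    """points: list of (x, y) samples along the traced outline."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def crop(image, box):
    """box: (x0, y0, x1, y1), inclusive pixel coordinates."""
    x0, y0, x1, y1 = box
    return [row[x0:x1 + 1] for row in image[y0:y1 + 1]]

img = [[c + 10 * r for c in range(5)] for r in range(5)]  # toy 5x5 "image"
box = outline_bounding_box([(1, 1), (3, 2), (2, 3)])
patch = crop(img, box)  # region submitted to the search backend
```

A real system would likely refine this with segmentation inside the box, but the bounding-box crop is the minimal version of "identify an object located in the area".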
20160086384 | Augmented Reality Personalization - A method is provided for mobile augmented reality personalization. A front-facing camera of the mobile device acquires a first view of a user of the mobile device. A personal characteristic of the user of the mobile device is identified from the first view. A location of the mobile device may be determined. A back-facing camera of the mobile device may acquire a second view of a region at the location. Augmented reality information is selected as a function of the personal characteristic. The second view is displayed with the augmented reality information. | 03-24-2016 |
20160086386 | METHOD AND APPARATUS FOR SCREEN CAPTURE - An electronic device includes: a display; and at least one processor configured to: generate a virtual reality image to be applied to a virtual reality environment, generate a right eye image and a left eye image based on the virtual reality image, pre-distort the right eye image and the left eye image based on lens distortion, control the display to display a stereo image on the display by using the right eye image and the left eye image, and in response to detecting a capture event while the stereo image is displayed, generate a captured image by using the virtual reality image. | 03-24-2016 |
20160091964 | SYSTEMS, APPARATUSES, AND METHODS FOR GESTURE RECOGNITION AND INTERACTION - Generally discussed herein are systems and apparatuses for gesture-based augmented reality. Also discussed herein are methods of using the systems and apparatuses. According to an example a method may include detecting, in image data, an object and a gesture, in response to detecting the object in the image data, providing data indicative of the detected object, in response to detecting the gesture in the image data, providing data indicative of the detected gesture, and modifying the image data using the data indicative of the detected object and the data indicative of the detected gesture. | 03-31-2016 |
20160093105 | DISPLAY OF TEXT INFORMATION ON A HEAD-MOUNTED DISPLAY - A method for presenting text information on a head-mounted display is provided, comprising: rendering a view of a virtual environment to the head-mounted display; tracking an orientation of the head-mounted display; tracking a gaze of a user of the head-mounted display; processing the gaze of the user and the orientation of the head-mounted display to identify a gaze target in the virtual environment towards which the gaze of the user is directed; receiving text information for rendering on the head-mounted display; and presenting the text information in the virtual environment in the vicinity of the gaze target. | 03-31-2016 |
20160093106 | SCHEMES FOR RETRIEVING AND ASSOCIATING CONTENT ITEMS WITH REAL-WORLD OBJECTS USING AUGMENTED REALITY AND OBJECT RECOGNITION - A method includes identifying a real-world object in a scene viewed by a camera of a user device, matching the real-world object with a tagged object based at least in part on image recognition and a sharing setting of the tagged object, the tagged object having been tagged with a content item, providing a notification to a user of the user device that the content item is associated with the real-world object, receiving a request from the user for the content item, and providing the content item to the user. A computer readable storage medium stores one or more computer programs, and an apparatus includes a processor-based device. | 03-31-2016 |
20160093107 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD, DISPLAY APPARATUS AND DISPLAY METHOD, AND INFORMATION PROCESSING SYSTEM - Interaction between a virtual object and the real space is to be presented in a preferred manner. | 03-31-2016 |
20160093108 | Synchronizing Multiple Head-Mounted Displays to a Unified Space and Correlating Movement of Objects in the Unified Space - A method for sharing content with other HMDs includes rendering content of a virtual environment scene on a display screen of a head-mounted display associated with a first user. The display screen rendering the virtual environment scene represents a virtual reality space of the first user. A request to share the virtual reality space of the first user is detected. The request targets a second user. In response to detecting acceptance of the request to share, the virtual reality space of the first user is shared with the second user. The sharing allows synchronizing the virtual environment scene rendered on the head-mounted displays of the first and second users. | 03-31-2016 |
20160093109 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing device including: a setting unit configured to set a filter intensity based on a degree of how exactly an imaging device that produces a captured image containing a marker related to display of a virtual object faces the marker, the degree being detected based on the captured image; and an image processing unit configured to combine the virtual object corresponding to the marker with the captured image by using a filter having the set filter intensity. | 03-31-2016 |
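The entry above sets a filter intensity from "a degree of how exactly" the camera faces the marker. One way to sketch that degree (an assumption for illustration, not the patented formula) is the cosine of the angle between the camera's viewing axis and the marker normal: head-on viewing gets little filtering, oblique viewing gets more.

```python
# Assumed sketch: derive a filter intensity from camera-to-marker facing.
import math

def facing_degree(view_axis, marker_normal):
    """Cosine of the angle between the (unit) view axis and marker normal:
    1.0 when the camera exactly faces the marker, 0.0 when edge-on."""
    dot = sum(a * b for a, b in zip(view_axis, marker_normal))
    return abs(dot)

def filter_intensity(degree, max_intensity=1.0):
    """Stronger filtering the less directly the marker is faced."""
    return max_intensity * (1.0 - degree)

head_on = filter_intensity(facing_degree((0, 0, -1), (0, 0, 1)))
angle = math.radians(60)
oblique = filter_intensity(
    facing_degree((0, 0, -1), (math.sin(angle), 0, math.cos(angle))))
```

The resulting intensity could parameterize, say, a temporal smoothing filter applied when compositing the virtual object, which is where marker pose jitter is most visible.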
20160098860 | SYSTEM AND METHOD FOR PROVIDING LIVE AUGMENTED REALITY CONTENT - A rendering platform for providing live augmented reality content to a user device is disclosed. The rendering platform receives augmented reality content associated with an event via a multicast data channel. A coverage area of the multicast data channel covers a portion of a venue associated with the event. The rendering platform receives a request to present an augmented reality display of the event at the user device within the coverage area. Further, the augmented reality content in the augmented reality display is presented based on the request. | 04-07-2016 |
20160098861 | SAFETY SYSTEM FOR AUGMENTING ROADWAY OBJECTS ON A HEADS-UP DISPLAY - The disclosure includes a system and method for providing a heads-up display in a vehicle for alerting users to roadway objects. The system includes a processor and a memory storing instructions that, when executed, cause the system to: monitor, with the one or more processors, for a roadway object; detect, with the one or more processors, the roadway object; generate graphical data configured to emphasize the roadway object on a heads-up display; determine whether a user sees the roadway object; responsive to the user failing to see the roadway object, determine whether the roadway object is critical; and responsive to the roadway object being critical, enhance a graphic output on the heads-up display to get the user's attention. | 04-07-2016 |
20160098863 | COMBINING A DIGITAL IMAGE WITH A VIRTUAL ENTITY - An image combining apparatus obtains a digital image comprising picture elements and captured at an image capturing position. The picture elements form different objects, where each formed object has at least one distance value representing the distance between the image capturing position and the real object that the corresponding formed object depicts. The image combining apparatus receives a user selection of a virtual entity to be combined with the digital image, obtains at least one distance value associated with the virtual entity representing the distance between the image capturing position and a user-selected location at which the virtual entity is to appear to be placed, compares the distance values, and combines the virtual entity with the digital image to create a combined image based on the comparison, with preference given to the lowest distance. | 04-07-2016 |
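The comparison step in the entry above amounts to per-sample depth testing: the virtual entity is drawn only where its distance from the capture position is smaller than the depth recorded for the photographed object ("preference given to the lowest distance"). Below is a minimal sketch under assumptions; the flat pixel lists and names are illustrative, not the patented implementation.

```python
# Sketch of depth-preferring compositing: keep whichever sample is nearer
# to the image capturing position.

def combine(image_px, image_depth, entity_px, entity_depth):
    out = []
    for img, d_img, ent, d_ent in zip(image_px, image_depth,
                                      entity_px, entity_depth):
        # Virtual entity wins only where it is strictly nearer than the scene.
        out.append(ent if ent is not None and d_ent < d_img else img)
    return out

pixels   = ["wall", "wall", "wall"]
depths   = [4.0,    4.0,    1.0]   # metres from the capture position
entity   = ["vase", "vase", "vase"]
e_depths = [2.0,    2.0,    2.0]
combined = combine(pixels, depths, entity, e_depths)
```

The third sample stays "wall" because the wall there is nearer (1.0 m) than the vase (2.0 m), which is exactly the occlusion behavior the abstract describes.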
20160103318 | IMAGING ADJUSTMENT DEVICE AND IMAGING ADJUSTMENT METHOD - An imaging adjustment apparatus includes: an imaging analysis module, used to determine whether a current image of an object is deformed relative to an original image of the object, and when the current image is deformed, generate imaging correction information corresponding to the deformation; an imaging lens group, used to image the object, and including a plurality of subregions having adjustable imaging parameters; and a lens adjustment module, used to adjust an imaging parameter of a corresponding subregion of the imaging lens group according to the imaging correction information. An object can be imaged by using an imaging lens in which each subregion has adjustable imaging parameters, so as to adjust the imaging parameters for each subregion separately, thereby adjusting and correcting a perspective deformation that occurs on the object, preventing a perspective deformation from occurring on an image of the object acquired by a user, and improving user experience. | 04-14-2016 |
20160104321 | TRANSFER OF ATTRIBUTES BETWEEN GENERATIONS OF CHARACTERS - A system in which attributes are transferred between generations of characters in an interactive software experience is described. In an embodiment, data identifying one or more hardware attributes for a virtual entity are received from a physical user device associated with that virtual entity. One or more virtual attributes for the virtual entity are accessed and one or more behaviors of the virtual entity within the interactive software experience are modified based on a combination of the hardware and virtual attributes. | 04-14-2016 |
20160104324 | Systems and Methods for Activities Solver Development in Augmented Reality Applications - Systems and methods for generating an augmented reality interface for generic activities are disclosed. The systems and methods may be directed to creating an augmented reality display for an activity performed on a surface. Given an image of the activity, an activity solver library and associated configuration information for the activity may be selected. The surface of the activity from the image may be rectified, forming a rectified image, from which activity state information may be extracted using the configuration information. The activity state information may be provided to the activity solver library to generate solution information, and elements indicating the solution information may be rendered in a perspective of the original image. By providing the configuration information associated with an activity solver library, an augmented reality interface can be generated for an activity by capturing an image of the activity. | 04-14-2016 |
20160104452 | SYSTEMS AND METHODS FOR A SHARED MIXED REALITY EXPERIENCE - A method for sharing a mixed reality experience, such as mixed reality content or a mixed reality event, among one or more computing devices is disclosed. The method includes: determining a spatial location and a spatial orientation of each of the one or more computing devices, each device having a camera; mapping the spatial location and spatial orientation of each of the one or more computing devices into a mixed reality manager; and presenting an event that is shared among the one or more computing devices, where the presentation of the event is experienced simultaneously and varies among the devices depending on their location, their orientation, or both. | 04-14-2016 |
20160106338 | INTRAOPERATIVE IMAGE REGISTRATION BY MEANS OF REFERENCE MARKERS - A method for incorporating tomographically obtained image data from a patient into a system for surgical planning and/or intraoperative navigation involves obtaining tomographic image data, or image data from X-ray recordings, of at least one defined body area of the patient by at least one first recording appliance, wherein a first reference body having at least one surface is arranged on the patient and is recorded by the first recording appliance at the same time. The recorded image data representing the first reference body are compared with known geometric data of the first reference body in order to obtain distortion information. The recorded image data are equalized by a computation unit based on the distortion information to obtain equalized image data, onto which further image data from the same body area are superimposed to obtain superimposed image data that are presented on a display. | 04-21-2016 |
20160109706 | USING A PLURALITY OF STACKED WAVEGUIDES FOR AUGMENTED OR VIRTUAL REALITY DISPLAY - Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. The system may comprise an image-generating source to provide one or more frames of image data in a time-sequential manner, a light modulator configured to transmit light associated with the one or more frames of image data, a substrate to direct image information to a user's eye, wherein the substrate houses a plurality of reflectors, a first reflector of the plurality of reflectors to reflect transmitted light associated with a first frame of image data at a first angle to the user's eye, and a second reflector to reflect transmitted light associated with a second frame of the image data at a second angle to the user's eye. | 04-21-2016 |
20160109707 | COMBINING AT LEAST ONE VARIABLE FOCUS ELEMENT WITH A PLURALITY OF STACKED WAVEGUIDES FOR AUGMENTED OR VIRTUAL REALITY DISPLAY - Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. The system may comprise an image-generating source to provide one or more frames of image data in a time-sequential manner, a light modulator configured to transmit light associated with the one or more frames of image data, a substrate to direct image information to a user's eye, wherein the substrate houses a plurality of reflectors, a first reflector of the plurality of reflectors to reflect transmitted light associated with a first frame of image data at a first angle to the user's eye, and a second reflector to reflect transmitted light associated with a second frame of the image data at a second angle to the user's eye. | 04-21-2016 |
20160109708 | PROJECTING IMAGES TO A WAVEGUIDE THROUGH MICROPROJECTORS FOR AUGMENTED OR VIRTUAL REALITY - Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. The system may comprise an image-generating source to provide one or more frames of image data in a time-sequential manner, a light modulator configured to transmit light associated with the one or more frames of image data, a substrate to direct image information to a user's eye, wherein the substrate houses a plurality of reflectors, a first reflector of the plurality of reflectors to reflect transmitted light associated with a first frame of image data at a first angle to the user's eye, and a second reflector to reflect transmitted light associated with a second frame of the image data at a second angle to the user's eye. | 04-21-2016 |
20160110902 | METHOD, COMPUTER PROGRAM PRODUCT, AND SYSTEM FOR PROVIDING A SENSOR-BASED ENVIRONMENT - Method, computer program product, and system to provide an extended vision within an environment having a plurality of items, where the extended vision is based on a field of view of a person determined using a first visual sensor, and is further based on at least a second visual sensor disposed within the environment. Image information from the first and second visual sensors is associated to produce combined image information. Selected portions of the combined image information are displayed based on input provided through a user interface. | 04-21-2016 |
20160110912 | DELIVERING VIEWING ZONES ASSOCIATED WITH PORTIONS OF AN IMAGE FOR AUGMENTED OR VIRTUAL REALITY - Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. The system may comprise an image-generating source to provide one or more frames of image data in a time-sequential manner, a light modulator configured to transmit light associated with the one or more frames of image data, a substrate to direct image information to a user's eye, wherein the substrate houses a plurality of reflectors, a first reflector of the plurality of reflectors to reflect transmitted light associated with a first frame of image data at a first angle to the user's eye, and a second reflector to reflect transmitted light associated with a second frame of the image data at a second angle to the user's eye. | 04-21-2016 |
20160110920 | MODIFYING A FOCUS OF VIRTUAL IMAGES THROUGH A VARIABLE FOCUS ELEMENT - Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. The system may comprise an image-generating source to provide one or more frames of image data in a time-sequential manner, a light modulator configured to transmit light associated with the one or more frames of image data, a substrate to direct image information to a user's eye, wherein the substrate houses a plurality of reflectors, a first reflector of the plurality of reflectors to reflect transmitted light associated with a first frame of image data at a first angle to the user's eye, and a second reflector to reflect transmitted light associated with a second frame of the image data at a second angle to the user's eye. | 04-21-2016 |
20160110921 | HEAD MOUNTED DISPLAY, METHOD OF CONTROLLING HEAD MOUNTED DISPLAY, AND COMPUTER PROGRAM - A head mounted display includes an image display unit enabling a user to visually recognize a virtual image, and an augmented reality processing unit causing the image display unit to form the virtual image including a virtual object displayed in addition to a real object actually existing in the real world, in which the augmented reality processing unit causes the virtual image including the virtual object in a first display aspect to be formed, and then causes the virtual image including the virtual object in a second display aspect to be formed after a predetermined retention time period has elapsed, and in which a degree of visibility hindrance of the virtual object for the real object is lower in the second display aspect than in the first display aspect. | 04-21-2016 |
20160110922 | METHOD AND SYSTEM FOR ENHANCING COMMUNICATION BY USING AUGMENTED REALITY - The subject matter discloses a method for enhancing communication, comprising: generating an avatar according to metadata that is received from a remote computer device; augmenting the avatar in a live video stream captured by the computer device; and instructing an audio unit of the computer device to play an audio stream, wherein the audio stream is received from the remote computer device, and wherein the generating, the augmenting, and the instructing are performed within a voice communication session with the remote computer device. | 04-21-2016 |
20160110923 | AUGMENTED REALITY PRESENTATIONS - Technology is generally disclosed for augmented-reality presentations. In some embodiments, the technology can receive an indication of a user's sensitivity to an aspect of a presentation, receive general content relating to the presentation, receive overlay content relating to the presentation, combine the received general content and the received overlay content to create the presentation, and render the presentation. The overlay content may respond to the user's sensitivity. | 04-21-2016 |
20160112479 | SYSTEM AND METHOD FOR DISTRIBUTED AUGMENTED REALITY - Systems and methods for distributed augmented reality are described herein. In one example, the method comprises receiving at least one of source data and augmented reality (AR) data from at least one data source, identifying objects of interest present in the at least one of the source data and the AR data based on analysis of the at least one of the source data and the AR data, and generating enhanced AR data based on the outcome of identification of at least one of the objects of interest and AR enhancement rules. The method further comprises modifying at least one of the source data and the AR data based on the generation, and transmitting at least one of the modified source data and the modified AR data to at least one of one or more client systems. | 04-21-2016 |
20160117802 | DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, NON-TRANSITORY RECORDING MEDIUM, AND PROJECTION DEVICE - A display control device includes: an acquirer that receives inclination information on an occupant's head in a mobile body from a detector that detects the inclination; and a controller that controls a displayer to generate a predetermined image representing a presentation image superimposed on an object as viewed from the occupant when the presentation image is displayed on a display medium, based on recognition results of the object ahead of the mobile body and the inclination information. When the object is recognized, with the head not inclined, the controller causes the displayer to generate a first predetermined image representing a first presentation image indicating a horizontal direction, and with the head inclined, the controller causes the displayer to generate a second predetermined image representing a second presentation image obtained by rotating part of the first presentation image by an angle which is determined according to the inclination. | 04-28-2016 |
20160117861 | USER CONTROLLED REAL OBJECT DISAPPEARANCE IN A MIXED REALITY DISPLAY - The technology causes disappearance of a real object in a field of view of a see-through, mixed reality display device system based on user disappearance criteria. Image data is tracked to the real object in the field of view of the see-through display for implementing an alteration technique on the real object causing its disappearance from the display. A real object may satisfy user disappearance criteria by being associated with subject matter that the user does not wish to see or by not satisfying relevance criteria for a current subject matter of interest to the user. In some embodiments, based on a 3D model of a location of the display device system, an alteration technique may be selected for a real object based on a visibility level associated with the position within the location. Image data for alteration may be prefetched based on a location of the display device system. | 04-28-2016 |
20160117862 | CONTEXT-AWARE TAGGING FOR AUGMENTED REALITY ENVIRONMENTS - A method for tag-based search includes capturing an image, extracting a tag from the image, identifying a location associated with the captured image, and querying stored content for information that matches the location and the tag. Local storage is checked for the information first, and remote storage may be checked subsequently. Any located information may be used to augment the image. Information located in the remote storage may be saved in the local storage until it reaches a certain age, until it fails to be accessed for a threshold period of time, or until the location moves outside a threshold radius associated with a location of the information located in the remote storage. | 04-28-2016 |
20160117864 | RECALIBRATION OF A FLEXIBLE MIXED REALITY DEVICE - The technology provides embodiments for recalibration of outward facing cameras supported by a see-through, head mounted, mixed reality display system having a flexible portion between see-through displays for the eyes. Each outward facing camera has a fixed spatial relationship with a respective or corresponding see-through display positioned to be seen through by a respective eye. For front facing cameras, the fixed spatial relationship allows a predetermined mapping between positions on an image sensor of each camera and positions on the respective display. The mapping may be used to register a position of a virtual object to a position of a real object. A change in a first flexible spatial relationship between the outward facing cameras can be automatically detected. A second spatial relationship between the cameras is determined. A registration of a virtual object to a real object may be updated based on the second spatial relationship. | 04-28-2016 |
20160124502 | SENSORY FEEDBACK SYSTEMS AND METHODS FOR GUIDING USERS IN VIRTUAL REALITY ENVIRONMENTS - Sensory feedback (“chaperoning”) systems and methods for guiding users in virtual/augmented reality environments such as walk-around virtual reality environments are described. Exemplary implementations assist with preventing collisions with objects in the physical operating space in which the user acts, among other potential functions and/or uses. | 05-05-2016 |
20160125250 | APPARATUS AND METHOD FOR CONNECTING A MOBILE DEVICE CAMERA - An apparatus and a method for connecting a mobile camera device to a vehicle-mounted display apparatus, wherein the mobile camera device is controlled both by the driver operating the mobile camera device and by additional information regarding the state of the vehicle, the additional information being provided by vehicle sensors. The information from the driver and the vehicle is processed by an application program on the mobile camera device, and the mobile camera device is controlled accordingly. The mobile camera device can display images and/or videos and process the images and/or videos by way of the application program. The information to be processed can be displayed on the display of the vehicle-mounted display apparatus. | 05-05-2016 |
20160125631 | APPARATUS FOR DYNAMICALLY CONTROLLING HUD (HEAD-UP DISPLAY) INFORMATION DISPLAY POSITION - An apparatus for controlling an HUD-information display position includes an outside-information generating unit that captures outside image of a vehicle and generates information on a surrounding of the vehicle from a captured image, an HUD-information generating unit that generates HUD information including vehicle operation information based on the information on the surrounding of the vehicle, an HUD-information projection unit that projects light corresponding to the HUD information to display the HUD information at a predetermined position on an inner side of a windshield, an HUD-information-display-position adjustment unit including a movable mechanism for moving the HUD-information projection unit to adjust the HUD-information display position, an HUD-information-display-position determining unit that determines the HUD-information display position based on the information on the surrounding of the vehicle, and a control unit that dynamically controls the HUD-information-display-position adjustment unit to display the HUD information at the HUD-information display position determined by the HUD-information-display-position determining unit. | 05-05-2016 |
20160125654 | COMPONENT ASSEMBLY WORK SUPPORT SYSTEM AND COMPONENT ASSEMBLY METHOD - The system includes an imaging unit which images a work space at a viewpoint position in a visual line direction of a worker, together with another component to which one component is to be installed; a position attitude information obtaining unit which obtains position attitude information indicating a relative position attitude relation between the viewpoint of the worker and the other component in the work space; a virtual image generating unit which generates a virtual image of an actual shape of the one component at the viewpoint position in the visual line direction of the worker based on the position attitude information; an image composing unit which generates a composite image by superimposing the virtual image on a real image of the work space imaged by the imaging unit; and a display unit which displays the composite image. According to the system, the efficiency of component assembly work can be considerably improved by using mixed reality technology. | 05-05-2016 |
20160125655 | A METHOD AND APPARATUS FOR SELF-ADAPTIVELY VISUALIZING LOCATION BASED DIGITAL INFORMATION - A method for self-adaptively visualizing location based digital information may comprise: obtaining context information for a location based service, in response to a request for the location based service from a user; and presenting, based at least in part on the context information, the location based service through a user interface in at least one of a first mode and a second mode for the location based service, wherein a control of the location based service in one of the first mode and the second mode causes, at least in part, an adaptive control of the location based service in other of the first mode and the second mode. | 05-05-2016 |
20160125656 | METHOD AND APPARATUS FOR SELECTIVELY INTEGRATING SENSORY CONTENT - A sensory property such as occlusion, shadowing, or reflection is integrated among physical and notional (e.g., virtual/augmented) visual or other sensory content, providing an appearance of similar occlusion, shadowing, etc. in both models. A reference position, a physical data model representing physical entities, and a notional data model are created or accessed. A first sensory property from either data model is selected. A second sensory property is determined corresponding with the first sensory property, and notional sensory content is generated from the notional data model with the second sensory property applied thereto. The notional sensory content is outputted to the reference position with a see-through display. Consequently, notional entities may appear occluded by physical entities, physical entities may appear to cast shadows from notional light sources, etc. | 05-05-2016 |
20160125657 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing device including: a data storage unit storing feature data indicating a feature of appearance of one or more physical objects; an environment map building unit for building an environment map based on an input image obtained by imaging a real space and the feature data, the environment map representing a position of a physical object present in the real space; a control unit for acquiring procedure data for a set of procedures of operation to be performed in the real space, the procedure data defining a correspondence between a direction for each procedure and position information designating a position at which the direction is to be displayed; and a superimposing unit for generating an output image by superimposing the direction for each procedure at a position in the input image determined based on the environment map and the position information, using the procedure data. | 05-05-2016 |
20160125658 | Augmented Reality Extrapolation Techniques - Augmented reality extrapolation techniques are described. In one or more implementations, a frame of an augmented-reality display is rendered based at least in part on an optical basis that describes a current orientation or position of at least a part of a computing device. While the frame is rendered, an extrapolation based on a previous basis and a sensor basis generates an updated optical basis that describes a likely orientation or position of the part of the computing device, and the extrapolation is effective to account for a lag time duration between rendering the frame and displaying the frame of the augmented-reality display. The rendered frame of the augmented-reality display is updated before the rendered frame is displayed based at least in part on the updated optical basis that describes the likely orientation or position of the part of the computing device. | 05-05-2016 |
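The extrapolation idea in the entry above (updating a rendered frame with a pose predicted for actual display time, to hide render-to-display lag) can be sketched minimally. This is an illustrative linear, single-axis sketch only, not the patented method; all names (`extrapolate_pose`, `sensor_rate`, `lag`) are hypothetical.

```python
def extrapolate_pose(prev_pose, prev_time, sensor_rate, now, lag):
    """Linearly extrapolate a 1-DOF orientation (e.g. yaw, radians)
    to the expected display time, compensating for the lag between
    rendering a frame and displaying it.

    prev_pose   -- pose from the previous (optical) basis, radians
    prev_time   -- timestamp of that pose, seconds
    sensor_rate -- angular velocity estimated from the sensor basis, rad/s
    now         -- current time, seconds
    lag         -- expected render-to-display latency, seconds
    """
    predicted_time = now + lag          # when the frame will actually be shown
    dt = predicted_time - prev_time     # time span to extrapolate over
    return prev_pose + sensor_rate * dt
```

A late-stage update would re-project the already-rendered frame using the pose returned here, e.g. `extrapolate_pose(1.0, 0.0, 0.5, 0.1, 0.02)` predicts the yaw 0.12 s after the previous basis.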
20160128450 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - A method is provided for generating output image data. The method comprises receiving image data representing an input image, the input image containing at least one facial image. The method further comprises recognizing the facial image in the image data, and recognizing facial features of the facial image. The method further comprises generating data representing a makeup image based on the recognized facial features, the makeup image providing information assisting in the application of makeup. The method also comprises generating output image data representing the makeup image superimposed on the facial image. | 05-12-2016 |
20160131908 | VISUAL STABILIZATION SYSTEM FOR HEAD-MOUNTED DISPLAYS - Introduced herein are various techniques for displaying virtual and augmented reality content via a head-mounted display (HMD). The techniques can be used to improve the effectiveness of the HMD, as well as the general experience and comfort of users of the HMD. A binocular HMD system may present visual stabilizers to each eye that allow users to more easily fuse the digital content seen by each eye. In some embodiments the visual stabilizers are positioned within the digital content so that they converge to a shared location when viewed by a user, while in other embodiments the visual stabilizers are mapped to different locations within the user's field of view (e.g., peripheral areas) and are visually distinct from one another. These techniques allow the user to more easily fuse the digital content, thereby decreasing the eye fatigue and strain typically experienced when viewing virtual or augmented reality content. | 05-12-2016 |
20160132107 | ELECTRONIC DEVICE, METHOD AND STORAGE MEDIUM - According to one embodiment, an eye-worn electronic device includes a first nose pad, a first electrode, a second nose pad, a second electrode and a third electrode. The first electrode is on the first nose pad. The second electrode is on the second nose pad. The first electrode and the second electrode are in a first straight line extending in a first direction and are used for measuring first ocular potentials in the first direction. The third electrode is on the second nose pad and is at a position a distance away from the first straight line. The first electrode and the third electrode are in a second straight line extending in a second direction different from the first direction and are used for measuring second ocular potentials in the second direction. | 05-12-2016 |
20160132189 | METHOD OF CONTROLLING THE DISPLAY OF IMAGES AND ELECTRONIC DEVICE ADAPTED TO THE SAME - A method of controlling a display of images and an electronic device adapted to the method are provided. The electronic device includes a display configured to display images, an input unit configured to detect an image display control input, and a controller configured to output a first image to the display, and control an auxiliary window to be displayed on a part of the first image, wherein the auxiliary window outputs a second image that has information about coordinates that differ from those of the first image. | 05-12-2016 |
20160132727 | CAMPAIGN OPTIMIZATION FOR EXPERIENCE CONTENT DATASET - A server for campaign optimization is described. The server generates analytics data from user interactions with a first virtual object displayed on a plurality of devices and with a first set of user interactive features of the first virtual object from a first content dataset. The server generates and provides a second content dataset to a device based on the analytics data. The device recognizes an identifier from the second content dataset and displays, in the device, a second virtual object and a second set of user interactive features of the second virtual object in response to identifying the identifier. | 05-12-2016 |
20160133051 | DISPLAY DEVICE, METHOD OF CONTROLLING THE SAME, AND PROGRAM - A head mounted display device includes an image display portion that transmits external scenery and displays an image so as to be capable of being visually recognized together with the external scenery. In addition, the head mounted display device includes a control unit that acquires an external scenery image including the external scenery which is visually recognized through the image display portion, recognizes an object which is visually recognized through the image display portion on the basis of the acquired external scenery image, and displays information regarding the object on the image display portion. | 05-12-2016 |
20160133052 | VIRTUAL ENVIRONMENT FOR SHARING INFORMATION - An electronic device providing information through a virtual environment is disclosed. The device includes: a display; and an information providing module functionally connected with the display, wherein the information providing module displays an object corresponding to an external electronic device for the electronic device through the display, obtains information to be output through the external electronic device, and provides contents corresponding to the information in relation to a region on which the object is displayed. | 05-12-2016 |
20160133053 | VISUAL STABILIZATION SYSTEM FOR HEAD-MOUNTED DISPLAYS - Introduced herein are various techniques for displaying virtual and augmented reality content via a head-mounted display (HMD). The techniques can be used to improve the effectiveness of the HMD, as well as the general experience and comfort of users of the HMD. A binocular HMD system may present visual stabilizers to each eye that allow users to more easily fuse the digital content seen by each eye. In some embodiments the visual stabilizers are positioned within the digital content so that they converge to a shared location when viewed by a user, while in other embodiments the visual stabilizers are mapped to different locations within the user's field of view (e.g., peripheral areas) and are visually distinct from one another. These techniques allow the user to more easily fuse the digital content, thereby decreasing the eye fatigue and strain typically experienced when viewing virtual or augmented reality content. | 05-12-2016 |
20160133054 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING SYSTEM, AND STORAGE MEDIUM - To appropriately superimpose and display a virtual object on an image of a real space, an information processing apparatus according to an exemplary embodiment of the present invention determines the display position of the virtual object based on information indicating an allowable degree of superimposition of a virtual object on each real object in the image of the real space, and a distance from a real object for which a virtual object is to be displayed in association with the real object. | 05-12-2016 |
20160133055 | HIGH RESOLUTION PERCEPTION OF CONTENT IN A WIDE FIELD OF VIEW OF A HEAD-MOUNTED DISPLAY - Introduced herein are various techniques for displaying virtual and augmented reality content via a head-mounted display (HMD). The techniques can be used to improve the effectiveness of the HMD, as well as the general experience and comfort of users of the HMD. An HMD may increase and/or decrease the resolution of certain areas in digital content that is being viewed to more accurately mimic a user's high resolution and low resolution fields of view. For example, the HMD may monitor the user's eye movement to identify a focal point of the user's gaze, and then increase the resolution in an area surrounding the focal point, decrease the resolution elsewhere, or both. Predictive algorithms could also be employed to identify which areas are likely to be the subject of the user's gaze in the future, which allows the HMD to present the regionally-focused content in real-time. | 05-12-2016 |
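A gaze-dependent resolution scheme like the one the entry above describes can be illustrated with a minimal sketch: full resolution inside a foveal radius around the tracked gaze point, falling off outside it. The falloff curve and all names (`resolution_scale`, `fovea_radius`) are illustrative assumptions, not details from the patent.

```python
import math

def resolution_scale(px, py, gaze_x, gaze_y, fovea_radius):
    """Return a render-resolution scale factor for a screen position:
    1.0 (full resolution) within the foveal radius of the gaze focal
    point, decreasing with distance outside it."""
    d = math.hypot(px - gaze_x, py - gaze_y)
    if d <= fovea_radius:
        return 1.0
    # Inverse-distance falloff beyond the fovea (an illustrative choice;
    # a real system might use measured visual-acuity curves instead).
    return fovea_radius / d
```

For example, a pixel at the gaze point gets scale 1.0, while one twice the foveal radius away gets scale 0.5, i.e. half-resolution rendering in the periphery.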
20160133057 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An information processing system that acquires video data captured by an image pickup unit; detects an object from the video data; detects a condition corresponding to the image pickup unit; and controls a display to display content associated with the object at a position other than a detected position of the object based on the condition corresponding to the image pickup unit. | 05-12-2016 |
20160133058 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An information processing system that acquires video data captured by an image pickup unit; detects an object from the video data; detects a condition corresponding to the image pickup unit; and controls a display to display content associated with the object at a position other than a detected position of the object based on the condition corresponding to the image pickup unit. | 05-12-2016 |
20160133230 | REAL-TIME SHARED AUGMENTED REALITY EXPERIENCE - A system is provided for enabling a shared augmented reality experience. The system comprises zero or more on-site devices for generating augmented reality representations of a real-world location, and one or more off-site devices for generating virtual augmented reality representations of the real-world location. The augmented reality representations include data and/or content incorporated into live views of a real-world location. The virtual augmented reality representations of the AR scene incorporate images and data from a real-world location and include additional content used in an AR presentation. The on-site devices synchronize the content used to create the augmented reality experience with the off-site devices in real time such that the augmented reality representations and the virtual augmented reality representations are consistent with each other. | 05-12-2016 |
20160134484 | NETWORK CONSTRUCTION SUPPORT SYSTEM AND METHOD - A network management apparatus is configured to manage network apparatuses and to store work information; the system also includes an augmented reality presenting apparatus. A tag is added to each of the network apparatuses and each of the network cables, and each of the tags is provided with a visible object that conforms to tag information including an ID of the target to which the tag has been added. The network management apparatus transmits, to the augmented reality presenting apparatus, guide information that indicates the contents of a work for a work target, based on the tag information of the work target or a shot image of the tag of the work target and the work information. The augmented reality presenting apparatus associates a guide based on the guide information with an input image from a shooting device, and displays the guide and the input image. | 05-12-2016 |
20160139666 | WHOLE-BODY HUMAN-COMPUTER INTERFACE - A human-computer interface system having an exoskeleton including a plurality of structural members coupled to one another by at least one articulation configured to apply a force to a body segment of a user, the exoskeleton comprising a body-borne portion and a point-of-use portion; the body-borne portion configured to be operatively coupled to the point-of-use portion; and at least one locomotor module including at least one actuator configured to actuate the at least one articulation, the at least one actuator being in operative communication with the exoskeleton. | 05-19-2016 |
20160140759 | AUGMENTED REALITY SECURITY FEEDS SYSTEM, METHOD AND APPARATUS - A system, method, and computer-readable storage medium configured to collect and aggregate images using augmented reality. | 05-19-2016 |
20160140760 | ADAPTING A DISPLAY ON A TRANSPARENT ELECTRONIC DISPLAY - A system and method for adapting a display on a transparent electronic display with a virtual display are disclosed herein. In one example, the system includes a focal point selector to select a focal point of the virtual display, and the transparent electronic display is integrated into a front window of a vehicle. In another example, the system includes an image detector to detect an image of an object based on a window defined by the transparent electronic display, the image detector receiving an image of the window from a front-facing camera; and an object augmentor to augment the object, wherein the transparent electronic display is integrated into a front window of a vehicle and the object augmentor augments the object based on a distance of the object from the vehicle and a speed of the vehicle. | 05-19-2016 |
20160140761 | USING DEPTH INFORMATION FOR DRAWING IN AUGMENTED REALITY SCENES - Optimizing augmented reality scenes by using depth information to accurately display interactions between real objects and synthetic objects is described. A stream of depth data associated with a real scene of an augmented reality display and a stream of color data associated with the real scene may be received. The stream of depth data may be processed to construct a first mesh and the first mesh may be projected into a color space associated with the stream of color data to construct a second mesh. In some examples, a position of the synthetic objects respective to real objects in the real scene may be determined and/or queries may be conducted to determine how the synthetic objects interact with the real objects in the real scene. Based at least on constructing the second mesh, determining positions, and/or conducting queries, one or more synthetic objects may be drawn into the real scene. | 05-19-2016 |
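The step of projecting a depth-derived mesh into the color camera's space, as the entry above describes, amounts to a camera projection. A minimal pinhole-model sketch is shown below; the intrinsics `fx, fy, cx, cy` and the function name are illustrative assumptions, not parameters from the patent.

```python
def project_to_color_space(points, fx, fy, cx, cy):
    """Project 3-D mesh vertices (color-camera coordinates, z > 0)
    into the color camera's pixel space using a pinhole model.

    points       -- iterable of (x, y, z) vertices
    fx, fy       -- focal lengths in pixels
    cx, cy       -- principal point in pixels
    Returns a list of (u, v) pixel coordinates.
    """
    pixels = []
    for x, y, z in points:
        u = fx * x / z + cx   # perspective divide, then shift to principal point
        v = fy * y / z + cy
        pixels.append((u, v))
    return pixels
```

With the second mesh expressed in pixel coordinates this way, depth and color data line up, so occlusion queries between real and synthetic objects can be resolved per pixel.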
20160140762 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - An image processing device includes: a processor; and a memory which stores a plurality of instructions, which when executed by the processor, cause the processor to execute: acquiring a captured image including a recognition target object in the real world and an operation part of a user; recognizing the recognition target object and the operation part from the captured image; displaying an additional information image including information corresponding to the recognition target object; and determining, based on the amount of change in a feature amount of the operation part in the captured images, whether a motion of the operation part is directed at the recognition target object or at the additional information image. | 05-19-2016 |
20160140763 | SPATIAL INTERACTION IN AUGMENTED REALITY - A method for spatial interaction in Augmented Reality (AR) includes displaying an AR scene that includes an image of a real-world scene, a virtual target object, and a virtual cursor. A position of the virtual cursor is provided according to a first coordinate system within the AR scene. A user device tracks a pose of the user device relative to a user hand according to a second coordinate system. The second coordinate system is mapped to the first coordinate system to control movements of the virtual cursor. In a first mapping mode, virtual cursor movement is controlled to change a distance between the virtual cursor and the virtual target object. In a second mapping mode, virtual cursor movement is controlled to manipulate the virtual target object. User input is detected to control which of the first mapping mode or the second mapping mode is used. | 05-19-2016 |
20160140764 | HEAD-MOUNTABLE APPARATUS AND SYSTEMS - A head mountable display (HMD) includes a camera operable to capture images of a peripheral and/or control device in use by a wearer of the HMD. A detector of the HMD is configured to detect occlusions in a captured image of the peripheral and/or control device. And an image renderer of the HMD is configured to render a virtual version of the peripheral and/or control device for display to the HMD wearer and to render a representation of a user's hand at a position of a detected occlusion. | 05-19-2016 |
20160140765 | AUGMENTED REALITY - A method for object recognition performed by a computing device of an augmented reality system. The method includes receiving an image from a user, determining channels that a user is subscribed to, and determining a list of servers that host the channels that the user is subscribed to by using groups of channels that are distributed among a cluster of servers. The method further includes selecting, using the computing device, which servers from the list of servers are to be used to query the channels that the user is subscribed to. In addition, the method includes querying, using the computing device, the selected servers for the channels that the user is subscribed to with the image to determine at least one object that matches the image from object databases for the channels that the user is subscribed to. The method also includes retrieving and sending content associated with the at least one object. | 05-19-2016 |
20160140766 | Surface projection system and method for augmented reality - A surface projection system for augmented reality is provided. The surface projection system includes a surface projection device that is positionable adjacent a surface and having a light element and a sensor. The light element is configured to project a reference pattern on the surface. The sensor is positioned adjacent the surface and configured to gaze along the surface. | 05-19-2016 |
20160140767 | HEAD-MOUNTED DISPLAY DEVICE - A head-mounted display device that allows a user to visually recognize a virtual image in a state where the head-mounted display device is mounted on the head of the user, including: an image processing unit that performs a process of generating an image; and an image display unit having an image light generating unit that generates image light representing the image, and configured such that the user can visually recognize the virtual image and the outside world, wherein the head-mounted display device is configured such that in a partial area of an area where the virtual image can be displayed in a visual field of the user, the outside world can be visually recognized preferentially. | 05-19-2016 |
20160140768 | INFORMATION PROCESSING APPARATUS AND RECORDING MEDIUM - There is provided an information processing apparatus including a display control unit configured to include a first display control mode in which control is performed in a manner that a first image from a user viewpoint, which is captured by a first imaging unit, is displayed on a display unit or control is performed in a manner that the display unit is transmissive, and a second display control mode in which control is performed in a manner that a second image captured by a second imaging unit including a user within an angle of view is displayed on the display unit, and a switching control unit configured to perform control in response to an instruction from the user in a manner that a display control mode of the display control unit is switched from the first display control mode to the second display control mode. | 05-19-2016 |
20160140773 | HEAD-MOUNTED DISPLAY DEVICE, METHOD OF CONTROLLING HEAD-MOUNTED DISPLAY DEVICE, AND COMPUTER PROGRAM - A head-mounted display device with which a user can visually recognize a virtual image and an outside scene includes an image display unit configured to cause the user to visually recognize the virtual image, a detecting unit configured to cause the image display unit to form the virtual image for causing the user to visually recognize a mark on any standard coordinate on a three-dimensional space, calculate a gazing point coordinate on the three-dimensional space representing a gazing point of the user gazing at the mark, and detect a shift between the standard coordinate and the gazing point coordinate, and an augmented-reality processing unit configured to cause the image display unit to form the virtual image including a virtual object to be displayed with respect to a real object actually present in a real world, the virtual object being arranged using the detected shift. | 05-19-2016 |
20160140930 | METHODS AND SYSTEMS FOR VIRTUAL AND AUGMENTED REALITY - A portable virtual reality and/or augmented reality system enabling the projection and tracking of a user in a simulated environment is described. A system of motion capture cameras, computing, and tracking devices is provided in a portable package. Each tracking device is configured with one or more emitters which may generate a distinctive, repetitive pattern. The virtual reality and/or augmented reality system, once assembled, provides for motion tracking and display of one or more users in a simulated environment. | 05-19-2016 |
20160147492 | Augmented Reality Cross-Domain Solution for Physically Disconnected Security Domains - A method comprising the steps of: displaying primary data having a first sensitivity level on a first display screen that is operatively coupled to a first computer; capturing an image of the first display screen with an image capture device that is operatively coupled to a second computer that is communicatively isolated from the first computer such that no data is shared between the first and second computers; executing with the second computer a display recognition and characterization algorithm to recognize the primary data based only on the captured image of the first display screen; and augmenting the primary data by displaying secondary data on a second display, wherein the secondary data is related to, and has a higher sensitivity level than, the primary data. | 05-26-2016 |
20160148052 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM - An information processing apparatus includes an acquiring unit that acquires a plurality of kinds of medical apparatus information, which are information acquired from a plurality of medical apparatuses, a generating unit that generates information for presentation including at least a part of the plurality of kinds of medical apparatus information acquired by the acquiring unit, and a presenting unit that outputs the generated information for presentation to an image display apparatus that displays an image. The generating unit generates the information for presentation including at least the medical apparatus information acquired by a medical apparatus whose display unit is difficult for the user of the image display apparatus to visually recognize. | 05-26-2016 |
20160148428 | Cutout Object Merge - Cutout object merge techniques are described. In one or more embodiments, a cutout object is identified for insertion into a scene. The cutout object may, for instance, be selected from a library of cutout objects, each of which was extracted from an already-captured image. Before capturing an image of the scene, the selected cutout object may be placed in a substantially real-time display of the scene, such as that which is displayed via a camera's view finder. Using an image capturing device, an image of the scene may then be captured. Once an image of the scene is captured, the cutout object and the captured image may be merged to form a composite image that includes the cutout object at a location in the scene specified by the placement. | 05-26-2016 |
20160148431 | VIDEO SYSTEM FOR PILOTING A DRONE IN IMMERSIVE MODE - This system comprises a drone and a remote station with virtual reality glasses rendering images transmitted from the drone, and provided with means for detecting changes of orientation of the user's head. The drone generates a “viewpoint” image (P′ | 05-26-2016 |
20160148433 | SYSTEMS AND METHODS FOR AUGMENTED REALITY PREPARATION, PROCESSING, AND APPLICATION - Various of the disclosed embodiments provide systems and methods for acquiring and applying a depth determination of an environment in e.g., various augmented reality applications. A user may passively or actively scan a device (e.g., a tablet device, a mobile phone device, etc.) about the environment acquiring depth data for various regions. The system may integrate these scans into an internal three-dimensional model. This model may then be used in conjunction with subsequent data acquisitions to determine a device's location and orientation within the environment with high fidelity. In some embodiments, these determinations may be accomplished in real-time or near-real-time. Using the high-fidelity orientation and position determination, various augmented reality applications may then be possible using the same device used to acquire the depth data or a new device. | 05-26-2016 |
20160148434 | DEVICE AND METHOD FOR PROCESSING VISUAL DATA, AND RELATED COMPUTER PROGRAM PRODUCT - The disclosure relates to a visual data processing device and a visual data processing method. The device is used for displaying visual data for a terminal. The device comprises: | 05-26-2016 |
20160154620 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM | 06-02-2016 |
20160155267 | DISPLAY CONTROL SYSTEM FOR AN AUGMENTED REALITY DISPLAY SYSTEM | 06-02-2016 |
20160155268 | ELECTRONIC APPARATUS, CONTROL METHOD THEREOF, COMPUTER PROGRAM, AND COMPUTER-READABLE RECORDING MEDIUM | 06-02-2016 |
20160155269 | ELECTRONIC APPARATUS, CONTROL METHOD THEREOF, COMPUTER PROGRAM, AND COMPUTER-READABLE RECORDING MEDIUM | 06-02-2016 |
20160155270 | INTERACTIONS OF VIRTUAL OBJECTS WITH SURFACES | 06-02-2016 |
20160155271 | METHOD AND DEVICE FOR PROVIDING AUGMENTED REALITY OUTPUT | 06-02-2016 |
20160155272 | AUGMENTATION OF ELEMENTS IN A DATA CONTENT | 06-02-2016 |
20160155273 | Wearable Electronic Device | 06-02-2016 |
20160161740 | AUTOMATIC VARIABLE VIRTUAL FOCUS FOR AUGMENTED REALITY DISPLAYS - The technology provides an augmented reality display system for displaying a virtual object to be in focus when viewed by a user. In one embodiment, the focal region of the user is tracked, and a virtual object within the user focal region is displayed to appear in the focal region. As the user changes focus between virtual objects, they appear to naturally move in and out of focus as real objects in a physical environment would. The change of focus for the virtual object images is caused by changing a focal region of light processing elements in an optical path of a microdisplay assembly of the augmented reality display system. In some embodiments, a range of focal regions is swept through at a sweep rate by adjusting the elements in the optical path of the microdisplay assembly. | 06-09-2016 |
20160162748 | GENERATING SUPPORT INSTRUCTIONS BY LEVERAGING AUGMENTED REALITY - A method for generating a sequence of support instruction steps by leveraging Augmented Reality (AR) can include: capturing workspace data from a workspace using a sensing device, wherein the workspace may include a plurality of components, with at least one of the plurality of components having at least one tag; identifying both a first tag of a first component of the plurality of components, and a second tag of a second component of the plurality of components; determining, based on respective shapes of the first and second tags, that the first and second components are operably related; determining, again based on the first and second tags, a first operation; generating a first overlay, wherein the first overlay includes at least one image corresponding with the first operation; and generating a first augmented reality display of a first instruction step by combining the first overlay with a display of the workspace data. | 06-09-2016 |
20160163063 | MIXED-REALITY VISUALIZATION AND METHOD - Disclosed is a technique for providing a mixed-reality view to a user of the visualization device. The device provides the user with a real-world, real-time view of an environment of the user, on a display area of the device. The device additionally determines a location at which a virtual reality window should be displayed within the real-world, real-time view of the environment of the user, and displays the virtual reality window at the determined location within the real-world, real-time view of the environment of the user. The device may additionally display one or more augmented reality objects within the real-world, real-time view of the environment of the user. | 06-09-2016 |
20160163108 | AUGMENTED REALITY HUD DISPLAY METHOD AND DEVICE FOR VEHICLE - An augmented reality head-up display (HUD) display method for a vehicle includes: detecting a position of an object outside of the vehicle at which a driver of the vehicle is looking; detecting a position of an eye of the driver while the driver is viewing external object information displayed on a windshield of the vehicle; extracting augmented reality HUD display coordinates of the object based on the detected object position and augmented reality HUD display coordinates of the eye based on the detected eye position; correcting one or more errors in the augmented reality HUD display coordinates of the object and one or more errors in the augmented reality HUD display coordinates of the eye using an error correction parameter for the augmented reality HUD display coordinates of the object and an error correction parameter for the augmented reality HUD display coordinates of the eye, the error correction parameters varying from one another; receiving the corrected augmented reality HUD display coordinates of the object and the corrected augmented reality HUD display coordinates of the eye; and displaying augmented reality HUD graphics of the external object information on the windshield based on the received corrected augmented reality HUD display coordinates. | 06-09-2016 |
20160163109 | DISPLAY DEVICE, HEAD MOUNTED DISPLAY, DISPLAY SYSTEM, AND CONTROL METHOD FOR DISPLAY DEVICE - A transmission type head mounted display which allows a user to visually recognize a display screen as a virtual image and to visually recognize the display screen at a focal length matching a convergence angle of the user, includes an image analysis portion that recognizes a real object within a view field of the user, a distance sensor that detects a distance from the user to the real object, a display control portion that displays the display screen in which an AR image for the real object is disposed, and a display section that displays the display screen, in which the display control portion displays a screen effect which guides a convergence angle of the user to a convergence angle matching the detected distance, on the display screen. | 06-09-2016 |
20160163110 | VIRTUAL REALITY SYSTEM AND METHOD FOR CONTROLLING OPERATION MODES OF VIRTUAL REALITY SYSTEM - A virtual reality system is provided. The virtual reality system includes a host device, a transmission cable, and a head mounted display apparatus worn by a user and coupled to the host device via the transmission cable. The head mounted display apparatus includes a multimedia module, a multi-sensing module, and a peripheral hub. The multimedia module receives multimedia content from the host device via the transmission cable, and displays a video part of the multimedia content. The multi-sensing module obtains sensing information regarding the head mounted display apparatus, the user and an obstruction. The peripheral hub provides the sensing information to the host device via the transmission cable. At least one virtual object of the video part of the multimedia content is adjusted in response to the sensing information. | 06-09-2016 |
20160163111 | CONTENT CREATION TOOL - A server for content creation is described. A content creation tool of the server receives, from a first device, a content identifier of a physical object, a virtual object content, and a selection of a template corresponding to an interactive feature for the virtual object content. The content creation tool generates a content dataset based on the content identifier of the physical object, the virtual object content, and the selected template. The content creation tool provides the content dataset to a second device, the second device configured to display the interactive feature corresponding to the selected template. | 06-09-2016 |
20160163112 | OFFLOADING AUGMENTED REALITY PROCESSING - A system and method for offloading augmented reality processing is described. A first sensor of a server generates a first set of sensor data corresponding to a location and an orientation of a display device. The server receives a request from the display device to offload a combination of at least one of a tracking process and a rendering process from the display device. The server generates offloaded processed data based on a combination of at least one of the first set of sensor data and a second set of sensor data. The second set of sensor data is generated by a second sensor at the display device. The server streams the offloaded processed data to the display device. | 06-09-2016 |
20160163117 | DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND RECORDING MEDIUM - There is provided a display control device including a display controller configured to place a virtual object within an augmented reality space corresponding to a real space in accordance with a recognition result of a real object shown in an image captured by an imaging part, and an operation acquisition part configured to acquire a user operation. When the user operation is a first operation, the display controller causes the virtual object to move within the augmented reality space. | 06-09-2016 |
20160163283 | VIRTUAL REALITY SYSTEM - A virtual reality system is provided. The virtual reality system includes a host device and a head mounted display apparatus to be worn by a user. The head mounted display apparatus includes a first wireless module, a second wireless module, a multimedia module, a multi-sensing module, and a peripheral hub. The multimedia module receives multimedia content from the host device through the first wireless module. The multi-sensing module obtains sensing information regarding the head mounted display apparatus and the user. The peripheral hub receives communication data from the host device through the second wireless module, and provides the sensing information to the host device through the second wireless module. | 06-09-2016 |
20160170216 | DISPLAY SYSTEM | 06-16-2016 |
20160171704 | IMAGE PROCESSING METHOD AND APPARATUS | 06-16-2016 |
20160171739 | AUGMENTATION OF STOP-MOTION CONTENT | 06-16-2016 |
20160171767 | FACILITATING DYNAMIC NON-VISUAL MARKERS FOR AUGMENTED REALITY ON COMPUTING DEVICES | 06-16-2016 |
20160171770 | System and Method for Assisting a User in Locating Physical Objects While the User is in a Virtual Reality Environment | 06-16-2016 |
20160171771 | System and Method for Assisting a User in Remaining in a Selected Area While the User is in a Virtual Reality Environment | 06-16-2016 |
20160171772 | EYEWEAR OPERATIONAL GUIDE SYSTEM AND METHOD | 06-16-2016 |
20160171773 | DISPLAY CONTROL METHOD, INFORMATION PROCESSING APPARATUS, AND STORAGE MEDIUM | 06-16-2016 |
20160171774 | INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING APPARATUS, AND COMPUTER PROGRAM | 06-16-2016 |
20160171775 | INFORMATION AUGMENTED PRODUCT GUIDE | 06-16-2016 |
20160171777 | AUGMENTED REALITY ASSET LOCATOR | 06-16-2016 |
20160171778 | SMART TOOLS AND WORKSPACES FOR DO-IT-YOURSELF TASKS | 06-16-2016 |
20160171781 | SERVER, CLIENT TERMINAL, SYSTEM, AND PROGRAM FOR PRESENTING LANDSCAPES | 06-16-2016 |
20160179193 | CONTENT PROJECTION SYSTEM AND CONTENT PROJECTION METHOD | 06-23-2016 |
20160180536 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM | 06-23-2016 |
20160180574 | SYSTEM, DEVICE AND METHOD FOR PROVIDING USER INTERFACE FOR A VIRTUAL REALITY ENVIRONMENT | 06-23-2016 |
20160180589 | Selectively Pairing an Application Presented in Virtual Space with a Physical Display | 06-23-2016 |
20160180590 | SYSTEMS AND METHODS FOR CONTEXTUALLY AUGMENTED VIDEO CREATION AND SHARING | 06-23-2016 |
20160180591 | Eye Tracking With Mobile Device In A Head-Mounted Display | 06-23-2016 |
20160180592 | Selectively Pairing an Application Presented in Virtual Space with a Physical Display | 06-23-2016 |
20160180594 | AUGMENTED DISPLAY AND USER INPUT DEVICE | 06-23-2016 |
20160180598 | Three-Dimensional Virtual Environment | 06-23-2016 |
20160187655 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING SYSTEM - An image acquisition unit acquires an image captured by a first imaging device provided in a HMD for presenting an image observed when a three-dimensional image in a virtual three-dimensional space is projected onto a real-world setting, the first imaging device being configured to visualize an area including a field of view of a user wearing the HMD. A marker detection unit detects a marker included in the image captured by the first imaging device and acquired by the image acquisition unit. The image acquisition unit acquires an image captured by a second imaging device having an angle of view that at least partially overlaps an angle of view of the first imaging device. If the marker is not captured in the image captured by the first imaging device, the marker detection unit detects the marker in an image captured by the second imaging device. | 06-30-2016 |
20160187657 | OVERMOLDED LEDS AND FABRIC IN VIRTUAL REALITY HEADSETS - A headset for virtual reality applications includes an array of light emitting diodes (LEDs) emitting light captured by a camera included in a virtual reality system, allowing the virtual reality system to detect the position and orientation of the headset in three-dimensional space. To manufacture the headset, a flexible strip including a circuit having the LEDs is molded into an outer shell of the headset using a casting material that is transmissible to wavelengths of light transmitted by the LEDs. An interior surface of the outer shell of the headset is within a specified distance of the LEDs. The outer shell may also include fabric that is also molded into the outer shell in the same or in a similar process. | 06-30-2016 |
20160188585 | TECHNOLOGIES FOR SHARED AUGMENTED REALITY PRESENTATIONS - A system and a method for providing a shared augmented reality presentation are disclosed. A group presentation server communicates with one or more wearable computing devices. The group presentation server coordinates the outputs of the various wearable computing devices to present a shared augmented reality presentation to members of a group, where every member of the group experiences a unique perspective on the presentation. | 06-30-2016 |
20160189397 | SAMPLE BASED COLOR EXTRACTION FOR AUGMENTED REALITY - A system and method for sampling-based color extraction for augmented reality are described. A viewing device includes an optical sensor to capture an image of a real-world object. Color extraction software divides the captured image into multiple regions or recognizes pre-defined regions and identifies a color value for each region. A color-based augmented reality effect module retrieves virtual content based on the color values for the regions, and delivers the virtual content to the viewing device. | 06-30-2016 |
20160189425 | DETERMINATION OF AUGMENTED REALITY INFORMATION - Systems and methods may provide for obtaining or implementing augmented reality information. A logic architecture may be employed to detect a low acceleration condition with respect to an image capture device. The logic architecture may select data from a video associated with the image capture device in response to the low acceleration condition. The logic architecture may also use the data to obtain augmented reality information for the video. Additionally, the logic architecture may modify the video with the augmented reality information, or may display the video with the augmented reality information. | 06-30-2016 |
20160189426 | VIRTUAL REPRESENTATIONS OF REAL-WORLD OBJECTS - Methods for generating virtual proxy objects and controlling the location of the virtual proxy objects within an augmented reality environment are described. In some embodiments, a head-mounted display device (HMD) may identify a real-world object for which to generate a virtual proxy object, generate the virtual proxy object corresponding with the real-world object, and display the virtual proxy object using the HMD such that the virtual proxy object is perceived to exist within an augmented reality environment displayed to an end user of the HMD. In some cases, image processing techniques may be applied to depth images derived from a depth camera embedded within the HMD in order to identify boundary points for the real-world object and to determine the dimensions of the virtual proxy object corresponding with the real-world object. | 06-30-2016 |
20160189427 | SYSTEMS AND METHODS FOR GENERATING HAPTICALLY ENHANCED OBJECTS FOR AUGMENTED AND VIRTUAL REALITY APPLICATIONS - A system includes a memory device configured to store a virtual object and associated haptic asset, an augmented or virtual reality device configured to generate an augmented or virtual reality space configured to display the virtual object, and an electronic device that includes a haptic output device constructed and arranged to generate a haptic effect based on the haptic asset when the augmented or virtual reality device detects an interaction involving the virtual object. | 06-30-2016 |
20160189428 | METHODS AND SYSTEMS FOR DISPLAYING VIRTUAL OBJECTS - Methods and systems for displaying a virtual object capture, via an image capturing device, images of a physical scene that includes a first marker and a second marker, wherein the first marker and the second marker are physical markers; associate a first virtual object with the first marker and a second virtual object with the second marker; track the position of the first marker and the position of the second marker in the captured images of the physical scene using a tracking device; detect an interaction between the first marker and the second marker based on the tracked position of the first marker and the tracked position of the second marker, wherein the interaction is detected when the first marker and the second marker are within a predetermined proximity of each other; and associate, in response to detecting the interaction, a third virtual object with the first marker. | 06-30-2016 |
20160189429 | SCANNING DISPLAY SYSTEM IN HEAD-MOUNTED DISPLAY FOR VIRTUAL REALITY - Methods, systems, and computer programs are presented for the presentation of images in a head-mounted display (HMD). One HMD includes a screen, a processor, inertial sensors, a motion tracker module, and a display adjuster module. The motion tracker tracks motion of the HMD based on inertial data from the inertial sensors, and the display adjuster produces modified display data for an image frame to be scanned to the screen if the motion of the HMD is greater than a threshold amount of motion. The display data includes pixel values to be scanned to rows in sequential order, and the modified display data includes adjusted pixel values for pixels in a current pixel row of the image frame to compensate for the distance traveled by the HMD during a time elapsed between scanning a first pixel row of the image frame and scanning the current pixel row of the image frame. | 06-30-2016 |
20160189430 | METHOD FOR OPERATING ELECTRONIC DATA GLASSES, AND ELECTRONIC DATA GLASSES - A method operates electronic data glasses. The method involves detecting whether an object arranged outside of the data glasses lines up, at least partially, with a symbol displayed by a display device of the data glasses, and selecting the object if the object overlaps with the symbol, at least partially, and if at least one predetermined condition has been met. | 06-30-2016 |
20160189432 | AUTOMATIC FOCUS IMPROVEMENT FOR AUGMENTED REALITY DISPLAYS - An augmented reality system provides improved focus of real and virtual objects. A see-through display device includes a variable focus lens that a user looks through. A focal region adjustment unit automatically focuses the variable focus lens in a current user focal region. A microdisplay assembly attached to the see-through display device generates a virtual object for display in the user's current focal region by adjusting its focal region. The variable focus lens may also be adjusted to provide one or more zoom features. Visual enhancement of an object may also be provided to improve a user's perception of an object. | 06-30-2016 |
20160189434 | SYSTEM FOR REPRODUCING VIRTUAL OBJECTS - A system for reproducing virtual objects includes a detector device that carries a known tracking pattern or tracking feature; and a host device configured for virtually reproducing a template pattern to a surface and producing an image combining the tracking pattern and the template pattern. The template pattern corresponds to a virtual object. The host device is configured to process the image and thereby transmit information regarding the geometrical relationship between the tracking pattern and the template pattern to a user so that the user can reproduce the virtual object on the surface based on the information. | 06-30-2016 |
20160196603 | PRODUCT AUGMENTATION AND ADVERTISING IN SEE THROUGH DISPLAYS | 07-07-2016 |
20160196692 | VIRTUAL LASERS FOR INTERACTING WITH AUGMENTED REALITY ENVIRONMENTS | 07-07-2016 |
20160196693 | DISPLAY SYSTEM, CONTROL METHOD FOR DISPLAY DEVICE, AND COMPUTER PROGRAM | 07-07-2016 |
20160196694 | SYSTEM AND METHOD FOR CONTROLLING IMMERSIVENESS OF HEAD-WORN DISPLAYS | 07-07-2016 |
20160202021 | Relative Aiming Point Display | 07-14-2016 |
20160203365 | PROVIDING VOLUME INDICATORS BASED ON RECEIVED IMAGES OF CONTAINERS | 07-14-2016 |
20160203640 | PROVIDING VOLUME INDICATORS BASED ON RECEIVED IMAGES OF CONTAINERS | 07-14-2016 |
20160203641 | AUGMENTED REALITY DEVICE DISPLAY OF IMAGE RECOGNITION ANALYSIS MATCHES | 07-14-2016 |
20160203643 | EXHIBITION GUIDE APPARATUS, EXHIBITION MEDIA DISPLAY APPARATUS, MOBILE TERMINAL AND METHOD FOR GUIDING EXHIBITION | 07-14-2016 |
20160203644 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM | 07-14-2016 |
20160203645 | SYSTEM AND METHOD FOR DELIVERING AUGMENTED REALITY TO PRINTED BOOKS | 07-14-2016 |
20160203647 | Augmented Reality Design System | 07-14-2016 |
20160249989 | REALITY-AUGMENTED MORPHOLOGICAL PROCEDURE | 09-01-2016 |
20160251030 | TURNING ANGLE CORRECTION METHOD, TURNING ANGLE CORRECTION DEVICE, IMAGE-CAPTURING DEVICE, AND TURNING ANGLE CORRECTION SYSTEM | 09-01-2016 |
20160253840 | CONTROL SYSTEM AND METHOD FOR VIRTUAL NAVIGATION | 09-01-2016 |
20160253841 | AUGMENTED REALITY SKIN EVALUATION | 09-01-2016 |
20160253842 | MOLDING AND ANCHORING PHYSICALLY CONSTRAINED VIRTUAL ENVIRONMENTS TO REAL-WORLD ENVIRONMENTS | 09-01-2016 |
20160253843 | METHOD AND SYSTEM OF MANAGEMENT FOR SWITCHING VIRTUAL-REALITY MODE AND AUGMENTED-REALITY MODE | 09-01-2016 |
20160253844 | SOCIAL APPLICATIONS FOR AUGMENTED REALITY TECHNOLOGIES | 09-01-2016 |
20160253845 | INFORMATION PROCESSING APPARATUS AND METHOD, INFORMATION PROCESSING SYSTEM, AND PROVIDING MEDIUM | 09-01-2016 |
20160375360 | METHODS, APPARATUSES, AND SYSTEMS FOR REMOTE PLAY - Methods, apparatuses, systems, and storage media for emulating physical gaming experiences are provided. In embodiments, a computing device or a remote play module may detect a first object in a field of view (FOV), obtain object data representative of a second object detected by another apparatus and a position of the second object, determine a position of the first object using the sensor data, generate an image of the second object based on the object data, generate an overlay image based on the first object or the second object, display the generated image at a position within the FOV based on the position of the second object, and display the overlay image within the FOV. Other embodiments may be described and/or claimed. | 12-29-2016 |
20160377381 | Target Analysis and Recommendation - An electronic device determines target information about a target and recommends a target based on the target information. | 12-29-2016 |
20160378176 | Hand And Body Tracking With Mobile Device-Based Virtual Reality Head-Mounted Display - A head-mounted display (HMD) may include a mobile device that includes a display unit, at least one sensing unit and a processing unit. The at least one sensing unit may be configured to detect a presence of an object. The processing unit may be configured to receive data associated with the detecting from the at least one sensing unit, and determine one or more of a position, an orientation and a motion of the object based at least in part on the received data. The HMD may also include an eyewear piece that includes a holder and a field of view (FOV) enhancement unit. The holder may be wearable by a user on a forehead thereof to hold the mobile device in front of eyes of the user. The FOV enhancement unit may be configured to enlarge or redirect a FOV of the at least one sensing unit. | 12-29-2016 |
20160378296 | Augmented Reality Electronic Book Mechanism - A method is described to facilitate augmented reality. The method includes receiving image data from a book, extracting features from the image data as the book is being read, comparing the extracted features to previously stored data in the database, retrieving page data associated with the extracted features upon detecting a comparison match, overlaying the page data on a book page currently being viewed and displaying the page data on the book page currently being viewed. | 12-29-2016 |
20160379083 | REAL-TIME, MODEL-BASED OBJECT DETECTION AND POSE ESTIMATION - A system includes a memory and a processor configured to select a set of scene point pairs, to determine a respective feature vector for each scene point pair, to find, for each feature vector, a respective plurality of nearest neighbor point pairs in feature vector data of a number of models, to compute, for each nearest neighbor point pair, a respective aligning transformation from the respective scene point pair to the nearest neighbor point pair, thereby defining a respective model-transformation combination for each nearest neighbor point pair, each model-transformation combination specifying the respective aligning transformation and the respective model with which the nearest neighbor point pair is associated, to increment, with each binning of a respective one of the model-transformation combinations, a respective bin counter, and to select one of the model-transformation combinations in accordance with the bin counters to detect an object and estimate a pose of the object. | 12-29-2016 |
20160379407 | Virtual Fantasy System and Method of Use - A system for creating visual fantasy while engaging in a physical activity with a partner includes at least one eyewear piece provided with at least one camera, a power source, a wireless transceiver, a visual display for receiving a video transmission, and a computer based device provided with virtual merging software for video data from said camera of the eyewear piece, wherein the virtual merging software can dynamically identify facial/body area data of a viewed person in video data, a database associated with the virtual merging software having predetermined facial character data corresponding to another face and retrieves predetermined facial/body character data based on a user predefined selection and manipulates the video data, overlaying the selected predetermined facial/body character data onto the facial area of the video data, generating a merged field of view data, and sends a signal back to the visual display. | 12-29-2016 |
20160379408 | MIXED-REALITY IMAGE CAPTURE - A head-mounted display includes a visible-light camera configured to collect a visible-light image of a physical space, a surface sensor configured to measure one or more surface parameters of the physical space, a see-through display configured to visually present an augmentation image while light from the physical space passes through the see-through display to a user eye, and an augmented-reality engine. The augmented-reality engine may be configured to identify a surface of the physical space from the one or more measured surface parameters, compose a mixed-reality image that includes the augmentation image overlaid on the visible-light image, and visually present, via the see-through display, the mixed-reality image in alignment with the identified surface. | 12-29-2016 |
20160379410 | ENHANCED AUGMENTED REALITY MULTIMEDIA SYSTEM - A method for operating an augmented reality system includes acquiring video data from a camera sensor or video file, and identifying at least one region of interest within the video data. Augmented reality data is generated for the region of interest without receiving user input, with the augmented reality data being contextually related to the region of interest. The video data may be displayed with the augmented reality data superimposed thereupon in real time as the video data is acquired from the camera sensor or video file. The video data and the augmented reality data are stored in a non-conflated fashion. The video data may be displayed with updated AR content acquired for stored AR metadata during later playback. The method therefore allows AR ROIs and data from any suitable sensor to be stored as metadata, so that later retrieval is possible without additional processing. | 12-29-2016 |
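The "non-conflated" storage described above keeps raw video frames separate from AR metadata so that annotations can be re-resolved against updated AR content at playback time. A minimal sketch of such a record layout, with hypothetical field names, might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class ARAnnotation:
    roi: tuple    # (x, y, w, h) region of interest in frame coordinates
    label: str    # contextual tag used to fetch fresh AR content later
    source: str   # sensor or detector that produced the ROI

@dataclass
class ARRecording:
    """Video frames and AR metadata stored separately ('non-conflated'),
    so playback can overlay updated content without re-running detection."""
    frames: list = field(default_factory=list)    # raw frame payloads
    metadata: dict = field(default_factory=dict)  # frame index -> [ARAnnotation]

    def annotate(self, frame_idx, annotation):
        self.metadata.setdefault(frame_idx, []).append(annotation)

    def annotations_for(self, frame_idx):
        return self.metadata.get(frame_idx, [])
```

Because the metadata carries only ROIs and contextual labels, a later playback pass can fetch current AR content for each label rather than replaying baked-in overlays.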
20160379411 | AUGMENTED REALITY SYSTEM FOR VEHICLE BLIND SPOT PREVENTION - The present disclosure relates to systems and methods for providing various types of information to a vehicle driver. Such information can be used by the vehicle driver singularly or in conjunction with other information available to the vehicle driver in order to allow the driver to operate the vehicle in an increasingly safe manner and/or to reduce the likelihood of property damage and/or possible bodily injuries to the driver, etc. In some instances, such information is presented to the driver as an augmented reality environment such that the driver can “see through” objects that may be occluding the driver's vision. | 12-29-2016 |
20160379412 | REALITY AUGMENTATION TO ELIMINATE, OR DE-EMPHASIZE, SELECTED PORTIONS OF BASE IMAGE - An augmented reality display system used to diminish (for example, obscure, obfuscate, hide, make less distracting, block out, “white wash” and/or make less discernible) certain portions of a base image (for example, a user's view of a part of the real world as seen through eyeglasses). Some examples of visual subject matter that can be diminished include: (i) driver distraction phenomena; (ii) advertising; and/or (iii) subject matter the user is not authorized to view. | 12-29-2016 |
20160379414 | AUGMENTED REALITY VISUALIZATION SYSTEM - An augmented reality (AR) system comprising a head mounted display (HMD) configured to display one or more AR visualizations within an operator's field of view (FOV); a control system including a processor and a storage system configured to store machine readable instructions; sensors, including a head mounted sensor and a device mounted sensor, configured to determine at least the location and/or orientation of said sensors; and a communication system configured to communicate data between elements of the AR system. The software includes various subroutines or machine readable instructions, including orientation/location instructions for determining the orientation and/or position of the sensors, and a visualization generation instructions section configured to generate a visualization showing an aim point of a device coupled to said device mounted sensor, a path of travel of a projectile launched from said device, or an impact point of said projectile. Embodiments can include one or more photogrammetry processing sections. | 12-29-2016 |
20160379415 | Systems and Methods for Generating 360 Degree Mixed Reality Environments - Systems and methods for generating a 360 degree mixed virtual reality environment that provides a 360 degree view of an environment in accordance with embodiments of the invention are described. In a number of embodiments, the 360 degree mixed virtual reality environment is obtained by (1) combining one or more real world videos that capture images of an environment with (2) a virtual world environment that includes various synthetic objects that may be placed within the real world clips. Furthermore, the virtual objects embedded within the 360 degree mixed reality environment interact with the real world objects depicted in the real world environment to provide a realistic mixed reality experience. | 12-29-2016 |
20160379416 | APPARATUS AND METHOD FOR CONTROLLING OBJECT MOVEMENT - Methods and apparatuses are provided for controlling movement of a first object. A screen photographed by a camera is displayed on a display of an electronic device. A virtual area is set on the screen based on user input. A first object is identified on the screen. The first object is controlled to move within the virtual area. | 12-29-2016 |
20160379417 | AUGMENTED REALITY VIRTUAL MONITOR - A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display. | 12-29-2016 |
20160379591 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM - An information processing apparatus connected to a first display apparatus that is mounted on or held with one portion of a body of a first user and displays a virtual object and to a second display apparatus that is mounted on or held with one portion of a body of a second user different from the first user and displays an image corresponding to an image displayed on the first display apparatus includes a determination unit configured to determine whether the body of the first user or an object held by the first user satisfies a predetermined condition, and an output unit configured, if the determination unit determines that the body of the first user or the object held by the first user satisfies the predetermined condition, to output a determination result to the second display apparatus. | 12-29-2016 |
20160381790 | ADHESIVE JOINT SYSTEM FOR PRINTED CIRCUIT BOARDS - An adhesive joint system comprises a circuit board with a distal end and a proximal end mounted on a first side via a tongue and groove connection to a housing. An adhesive is positioned at least in the gap surrounding the tongue, and an electrical component mounted to the distal end on a second side of the circuit board that is opposite the first side. The respective coefficients of thermal expansion (CTE) of the tongue, adhesive, and the material defining the groove are related, such that as heat is applied to the tongue and groove connection, the adhesive is compressed within the gap. | 12-29-2016 |
20170235135 | ON-VEHICLE DEVICE, METHOD OF CONTROLLING ON-VEHICLE DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM | 08-17-2017 |
20170235458 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM | 08-17-2017 |
20170236316 | AUGMENTED REALITY CONSUMPTION DATA ANALYSIS | 08-17-2017 |
20170236328 | METHOD FOR MOTION-SYNCHRONIZED AR OR VR ENTERTAINMENT EXPERIENCE | 08-17-2017 |
20170236330 | NOVEL DUAL HMD AND VR DEVICE WITH NOVEL CONTROL METHODS AND SOFTWARE | 08-17-2017 |
20170236331 | METHOD AND SYSTEM FOR GEOGRAPHIC MAP OVERLAY | 08-17-2017 |
20170236332 | REALITY MIXER FOR MIXED REALITY | 08-17-2017 |
20170236334 | VIRTUAL FITTING SYSTEM, DEVICE AND METHOD | 08-17-2017 |
20170236348 | SYSTEMS AND METHODS OF ACCESS CONTROL IN SECURITY SYSTEMS WITH AUGMENTED REALITY | 08-17-2017 |
20180024363 | Methods and Devices for Rendering Interactions Between Virtual and Physical Objects on a Substantially Transparent Display | 01-25-2018 |
20180025502 | Techniques for Accurate Pose Estimation | 01-25-2018 |
20180025520 | Binocular see-through AR head-mounted display device and information displaying method thereof | 01-25-2018 |
20180025522 | DISPLAYING LOCATION-SPECIFIC CONTENT VIA A HEAD-MOUNTED DISPLAY DEVICE | 01-25-2018 |
20180025544 | METHOD AND DEVICE FOR DETERMINING RENDERING INFORMATION FOR VIRTUAL CONTENT IN AUGMENTED REALITY | 01-25-2018 |
20190146219 | WRISTWATCH BASED INTERFACE FOR AUGMENTED REALITY EYEWEAR | 05-16-2019 |
20190147631 | Computer Readable Media, Information Processing Apparatus and Information Processing Method | 05-16-2019 |
20190147632 | IMAGE PROCESSING METHOD AND APPARATUS, DEVICE AND COMPUTER READABLE STORAGE MEDIUM | 05-16-2019 |
20190147651 | SYSTEM AND METHOD FOR MATCHING VIRTUAL REALITY GOALS WITH AN OPTIMAL PHYSICAL LOCATION | 05-16-2019 |
20190147652 | AUGMENTED REALITY DEVICE | 05-16-2019 |
20190147653 | TECHNIQUES FOR FACILITATING INTERACTIONS BETWEEN AUGMENTED REALITY FEATURES AND REAL-WORLD OBJECTS | 05-16-2019 |
20190147656 | DETECTION AND VISUALIZATION OF SYSTEM UNCERTAINTY IN THE REPRESENTATION OF AUGMENTED IMAGE CONTENT IN HEADS-UP DISPLAYS | 05-16-2019 |
20190147662 | REGISTRATION BETWEEN ACTUAL MOBILE DEVICE POSITION AND ENVIRONMENTAL MODEL | 05-16-2019 |
20190147664 | APPLICATION CONTROL PROGRAM, APPLICATION CONTROL METHOD, AND APPLICATION CONTROL SYSTEM | 05-16-2019 |
20220137703 | HEADWARE WITH COMPUTER AND OPTICAL ELEMENT FOR USE THEREWITH AND SYSTEMS UTILIZING SAME - An apparatus for mounting on a head is provided, including a frame, face-wearable near-ocular optics, and a micro-display for displaying data in front of the eyes. A computing device is coupled to the micro-display. At least one sensor is coupled to the computing device for receiving biometric human information. | 05-05-2022 |
20220138979 | REMOTE MEASUREMENTS FROM A LIVE VIDEO STREAM - Embodiments include systems and methods for remotely measuring distances in an environment captured by a device. A device captures a video stream of a device along with AR data that may include camera pose information and/or depth information, and transmits the video stream and AR data to a remote device. The remote device receives a selection of a first point and a second point within the video stream and, using the AR data, calculates a distance between the first and second points. The first and second points may be at different locations not simultaneously in view of the device. Other embodiments may capture additional points to compute areas and/or volumes. | 05-05-2022 |
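The measurement described above reduces to back-projecting two selected pixels into world space using the transmitted camera pose and depth data, then taking the Euclidean distance; because each point carries its own pose, the points need not be simultaneously in view. A minimal sketch, assuming a pinhole camera model with intrinsics `(fx, fy, cx, cy)` and 4x4 camera-to-world pose matrices (all names are illustrative, not the patent's):

```python
import numpy as np

def unproject(u, v, depth, fx, fy, cx, cy, cam_to_world):
    """Back-project a pixel (u, v) with metric depth into world space
    using pinhole intrinsics and a 4x4 camera-to-world pose."""
    pt_cam = np.array([(u - cx) * depth / fx,
                       (v - cy) * depth / fy,
                       depth,
                       1.0])
    return (cam_to_world @ pt_cam)[:3]

def remote_distance(pixel_a, pixel_b, intrinsics, pose_a, pose_b):
    """Distance between two selected points; each point may come from a
    different frame (different pose), as in the abstract."""
    fx, fy, cx, cy = intrinsics
    pa = unproject(*pixel_a, fx, fy, cx, cy, pose_a)
    pb = unproject(*pixel_b, fx, fy, cx, cy, pose_b)
    return float(np.linalg.norm(pa - pb))
```

Extending this to areas or volumes, as the later embodiments suggest, amounts to unprojecting additional points and applying standard polygon-area or volume formulas to the resulting world-space coordinates.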
20220138999 | VIDEO AND AUDIO PRESENTATION DEVICE, VIDEO AND AUDIO PRESENTATION METHOD, AND PROGRAM - A video and audio presentation device configured to present video and audio for sports training with less delay in physical reaction is provided. The device includes: an offset determination unit configured to determine a time-series offset t | 05-05-2022 |
20220139052 | RECOMMENDATIONS FOR EXTENDED REALITY SYSTEMS - Techniques and systems are provided for providing recommendations for extended reality systems. In some examples, a system determines one or more environmental features associated with a real-world environment of an extended reality system. The system determines one or more user features associated with a user of the extended reality system. The system also outputs, based on the one or more environmental features and the one or more user features, a notification associated with at least one application supported by the extended reality system. | 05-05-2022 |
20220141422 | EYE GAZE ADJUSTMENT - A computing system, a method, and a computer-readable storage medium for adjusting eye gaze are described. The method includes capturing a video stream including images of a user, detecting the user's face region within the images, and detecting the user's facial feature regions within the images based on the detected face region. The method includes determining whether the user is completely disengaged from the computing system and, if the user is not completely disengaged, detecting the user's eye region within the images based on the detected facial feature regions. The method also includes computing the user's desired eye gaze direction based on the detected eye region, generating gaze-adjusted images based on the desired eye gaze direction, wherein the gaze-adjusted images include a saccadic eye movement, a micro-saccadic eye movement, and/or a vergence eye movement, and replacing the images within the video stream with the gaze-adjusted images. | 05-05-2022 |