Patent application number | Description | Published |
20080200205 | System and method for interacting with objects via a camera enhanced mobile device - Embodiments of the present invention enable an image-based controller to control and manipulate objects with simple point-and-capture operations via images captured by a camera-enhanced mobile device. Powered by this technology, a user is able to complete many complicated control tasks via guided control of objects without utilizing laser pointers, IR transmitters, mini-projectors, bar code tagging, or customized wallpaper in the environment. This description is not intended to be a complete description of, or limit the scope of, the invention. Other features, aspects, and objects of the invention can be obtained from a review of the specification, the figures, and the claims. | 08-21-2008 |
20090243957 | SYSTEMS AND METHODS FOR INFORMATION VISUALIZATION IN MULTI-DISPLAY ENVIRONMENTS - A system for visualizing information in multi-display environments (“MDEs”) using spatial and perspective-aware visualization techniques. In one implementation, the position of each display in a three-dimensional MDE is determined relative to the other displays. Graphical decoration objects and link paths are then used to help visualize relatedness and continuity between graphical data objects and graphical node-link objects on different displays. Three-dimensional decoration objects and link paths are also constructed to visualize interrelationships between data objects on displays that are not on the same plane. The visualization techniques can also be integrated with mobile displays using location sensing technology to dynamically adjust the decoration objects. Additionally, user tracking systems will dynamically adjust the decoration objects based on user perspective. The visualization techniques are applicable to physical, virtual or mixed physical-virtual environments. | 10-01-2009 |
20100214284 | MODEL CREATION USING VISUAL MARKUP LANGUAGES - A method and system for defining a model by analyzing images of a physical space. In some embodiments the images of a physical space contain distinctive visual features with associated semantic information, and the model is defined using image feature detection techniques to identify distinctive visual features and a rich marker-based markup language to give meaning to the distinctive visual features. In some embodiments the distinctive visual features are predefined markers, and the markup language specifies model aspects and rules for combining semantic information from a plurality of markers to define the model. | 08-26-2010 |
20110248995 | SYSTEM AND METHODS FOR CREATING INTERACTIVE VIRTUAL CONTENT BASED ON MACHINE ANALYSIS OF FREEFORM PHYSICAL MARKUP - Systems and methods are described for creating virtual models, primarily through actions taken in actual 3D physical space. For many applications, such systems are more natural to users and may provide a greater sense of reality than can be achieved by editing a virtual model at a computer display, which requires the use of manipulations of a 2D display to effect 3D changes. Actions are taken (markup is drawn or laid out, etc.) in a physical workspace. Such physical workspaces may in fact be identical to the space being modeled, small physical scale models of the space, or even a whiteboard or set of papers or objects which get mapped onto the space being modeled. | 10-13-2011 |
20110279697 | AR NAVIGATION FOR REPEAT PHOTOGRAPHY AND DIFFERENCE EXTRACTION - Systems and methods for repeat photography and difference extraction that help users take pictures from the same position and camera angle as earlier photos. The system automatically extracts differences between the photos. Camera poses are estimated and then indicators are rendered to show the desired camera angle, which guide the user to the same camera angle for repeat photography. Using 3D rendering techniques, photos are virtually projected onto a 3D model to adjust them and improve the match between the photos, and the differences between the two photos are detected and highlighted. Highlighting the detected differences helps users to notice them. | 11-17-2011 |
20130024819 | SYSTEMS AND METHODS FOR GESTURE-BASED CREATION OF INTERACTIVE HOTSPOTS IN A REAL WORLD ENVIRONMENT - Systems and methods provide for gesture-based creation of interactive hotspots in a real world environment. A gesture made by a user in a three-dimensional space in the real world environment is detected by a motion capture device such as a camera, and the gesture is then identified and interpreted to create a “hotspot,” which is a region in three-dimensional space through which a user interacts with a computer system. The gesture may indicate that the hotspot is anchored to the real world environment or anchored to an object in the real world environment. The functionality of the hotspot is defined in order to identify the type of gesture which will initiate the hotspot and associate the activation of the hotspot with an activity in the system, such as control of an application on a computer or an electronic device connected with the system. | 01-24-2013 |
20130088605 | SYSTEM AND METHOD FOR DETECTING AND ACTING ON MULTIPLE PEOPLE CROWDING A SMALL DISPLAY FOR INFORMATION SHARING - Systems and methods directed to detecting crowds and sharing content based on the devices in proximity to the detected crowd. Systems and methods may utilize a camera included in most mobile devices to estimate how many people are involved in an interaction with the mobile device. Depending on the number of people detected, the systems and methods may invoke appropriate actions. Appropriate actions may involve sharing information by various methods, such as sharing content on a large display, printing or emailing documents. The systems and methods may also be extended to generally detecting a crowd in proximity to a device, and invoking appropriate actions based on the number of people detected. | 04-11-2013 |
20130188456 | LOCALIZATION USING MODULATED AMBIENT SOUNDS - Systems and methods for determining the location of a microphone by using sounds played from loudspeakers at known locations. Systems and methods may thereby require a minimal level of infrastructure, using sounds that would naturally be played in the environment. Systems and methods may thereby allow devices such as smart-phones, tablets, laptops or portable microphones to determine their location in indoor settings, where Global Positioning Satellite (GPS) systems may not work reliably. | 07-25-2013 |
20140152660 | METHOD FOR CREATING 3-D MODELS BY STITCHING MULTIPLE PARTIAL 3-D MODELS - A method of creating a 3-D model by capturing partial 3-D models, each comprising a sequence of 2-D images; analyzing each partial 3-D model to identify image features in its sequence of 2-D images; identifying pairs of overlapping image features between the 2-D images of the partial 3-D models, by finding image features in each 2-D image of one partial 3-D model that overlap image features in the 2-D images of the other partial 3-D models, and selecting a 2-D image from each partial 3-D model; computing an initial transformation between the 3-D coordinates of the individual pairs of identified image features in the selected 2-D images; and generating a final 3-D model based on the initial transformation. | 06-05-2014 |
20080263592 | SYSTEM FOR VIDEO CONTROL BY DIRECT MANIPULATION OF OBJECT TRAILS - One embodiment is a method for an interaction technique allowing users to control nonlinear video playback by directly manipulating objects seen in the video playback, comprising the steps of: tracking a moving object with a camera; recording a video; creating an object trail for the moving object which corresponds to the recorded video; allowing the user to select a point in the object trail; and displaying a frame in the recorded video that corresponds with the selected point in the object trail. | 10-23-2008 |
20090002489 | EFFICIENT TRACKING MULTIPLE OBJECTS THROUGH OCCLUSION - Visual tracking of multiple objects in a crowded scene is critical for many applications, including surveillance, video conferencing and human-computer interaction. Complex interactions between objects result in partial or significant occlusions, making tracking a highly challenging problem. Presented is a novel, efficient approach to tracking a varying number of objects through occlusion. Object tracking during occlusion is posed as a track-based segmentation problem in the joint-object space. Appearance models are used to interpret the foreground into multiple-layer probabilistic masks in a Bayesian framework. The search for an optimal segmentation solution is achieved by a greedy search algorithm and integral images for real-time computing. Promising results on several challenging video surveillance sequences have been demonstrated. | 01-01-2009 |
20090046153 | HIDDEN MARKOV MODEL FOR CAMERA HANDOFF - An integrated method for modeling the handoff between cameras for tracking a specific individual, including: creating a representation of overlaps, gaps, and allowable movement among the fields of view of the cameras, wherein the representation is modeled as states in a Hidden Markov Model (HMM); training the HMM using video of people walking through the fields of view of the cameras; selecting a person to be tracked; and identifying the best camera area using the HMM. | 02-19-2009 |
20090154796 | SYSTEMS AND METHODS FOR HUMAN BODY POSE ESTIMATION - Systems and computer-implemented methods for use in body pose estimation are provided. Training data is obtained, where the training data includes observation vector data and corresponding pose vector data for a plurality of images. The observation vector data is representative of the images in observation space. The pose vector data is representative of the same images in pose space. Based on the training data, a model is computed that includes parameters of mapping from the observation space to latent space, parameters of mapping from the latent space to the pose space, and parameters of the latent space. The latent space has a lower dimensionality than the observation space and the pose space. | 06-18-2009 |
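The abstracts above omit implementation detail, but the camera-handoff application (20090046153) rests on standard HMM machinery: camera areas are the hidden states, and decoding picks the best camera for each observation of the tracked person. A minimal sketch of that Viterbi decoding step, with toy two-camera transition and emission probabilities (all numbers and names here are illustrative, not from the application):

```python
# Hypothetical sketch of the HMM camera-handoff idea: camera areas are
# hidden states; Viterbi decoding selects the most likely camera area
# for each (noisy) detection of the tracked person.
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely state sequence for observation indices `obs`.

    pi: (S,)  initial state probabilities
    A:  (S,S) state transition probabilities (learned from walk-through video)
    B:  (S,O) emission probabilities (detection likelihood per camera area)
    """
    S, T = len(pi), len(obs)
    delta = np.zeros((T, S))            # best path probability so far
    psi = np.zeros((T, S), dtype=int)   # back-pointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for s in range(S):
            scores = delta[t - 1] * A[:, s]
            psi[t, s] = np.argmax(scores)
            delta[t, s] = scores[psi[t, s]] * B[s, obs[t]]
    # Backtrack from the best final state.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Toy example: two overlapping camera areas; observations are which
# camera currently detects the person, with some detection noise.
pi = np.array([0.5, 0.5])
A = np.array([[0.8, 0.2],    # people tend to stay in one field of view
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],    # camera 0 usually detects area-0 people
              [0.1, 0.9]])
obs = [0, 0, 1, 1, 1]
print(viterbi(pi, A, B, obs))  # prints [0, 0, 1, 1, 1]
```

In the application's framing, the transition matrix would encode the overlaps, gaps, and allowable movement between the cameras' fields of view, trained from video of people walking through them; the decoded state sequence then identifies the best camera area at each moment.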