Patent application number | Description | Published |
20100053151 | IN-LINE MEDIATION FOR MANIPULATING THREE-DIMENSIONAL CONTENT ON A DISPLAY DEVICE - A user holds the mobile device upright or sits in front of a nomadic or stationary device, views the monitor from a suitable distance, and physically reaches behind the device with her hand to manipulate a 3D object displayed on the monitor. The device functions as a 3D in-line mediator that provides visual coherency to the user when she reaches behind the device to use hand gestures and movements to manipulate a perceived object behind the device and sees that the 3D object on the display is being manipulated. The perceived object that the user manipulates behind the device with bare hands corresponds to the 3D object displayed on the device. The visual coherency arises from the alignment of the user's head or eyes, the device, and the 3D object. The user's hand may be represented as an image of the actual hand or as a virtualized representation of the hand, such as part of an avatar. | 03-04-2010 |
20100053164 | SPATIALLY CORRELATED RENDERING OF THREE-DIMENSIONAL CONTENT ON DISPLAY COMPONENTS HAVING ARBITRARY POSITIONS - Two or more display components are used to provide spatially correlated displays of 3D content. Three-dimensional content is rendered on multiple displays where the 3D content refers to the same virtual 3D coordinates, in which the relative position of the displays to each other determines the 3D virtual camera position for each display. Although not required, one of the displays may be mobile, such as a cell phone, and the other stationary or nomadic, such as a laptop. Each display shows a view based on a virtual camera into 3D content, such as an online virtual world. By continuously sensing and updating the relative physical distances and orientations of each device to one another, the devices show the user a view into the 3D content that is spatially correlated. Each device has a virtual camera that uses a common pool of 3D geometrical data and renders this data to display images. | 03-04-2010 |
20100053322 | DETECTING EGO-MOTION ON A MOBILE DEVICE DISPLAYING THREE-DIMENSIONAL CONTENT - A method of measuring ego-motion speed of a mobile device is described. The linear motion of the device is measured using an image sensor component, thereby creating linear motion data. The rotational or angular motion of the device is measured using an inertial sensor component, thereby creating rotational motion data. The rotational and linear motion data of the device are used to calculate the ego-motion speed of the mobile device. This ego-motion speed can then be used to control a virtual camera control module for adjusting the view of 3D content viewed by the user on the mobile device as the user moves the device, changing the position of the virtual camera. | 03-04-2010 |
20100053324 | EGOMOTION SPEED ESTIMATION ON A MOBILE DEVICE - Linear and rotational speeds of a mobile device are calculated using distance estimates between imaging sensors in the device and objects or scenes in front of the sensors. The distance estimates are used to modify optical flow vectors from the sensors. Shifting and rotational speeds of the mobile device may then be calculated using the modified optical flow vector values. For example, given a configuration where the first imaging sensor and the second imaging sensor face opposite directions on a single axis, a shifting speed is calculated in the following way: multiplying a first optical flow vector and a first distance estimate, thereby deriving a first modified optical flow vector value; multiplying a second optical flow vector and a second distance estimate, thereby deriving a second modified optical flow vector value; the second modified optical flow vector value may then be subtracted from the first modified optical flow vector value, resulting in a measurement of the shifting speed. | 03-04-2010 |
20100128112 | IMMERSIVE DISPLAY SYSTEM FOR INTERACTING WITH THREE-DIMENSIONAL CONTENT - A system for displaying three-dimensional (3-D) content and enabling a user to interact with the content in an immersive, realistic environment is described. The system has a display component that is non-planar and provides the user with an extended field-of-view (FOV), one factor in creating the immersive user environment. The system also has a tracking sensor component for tracking a user's face. The tracking sensor may include one or more 3-D and 2-D cameras. In addition to tracking the face or head, it may also track other body parts, such as hands and arms. An image perspective adjustment module processes data from the face tracking and enables the user to perceive the 3-D content with motion parallax. The hand and other body part output data is used by gesture detection modules to detect collisions between the user's hand and 3-D content. When a collision is detected, there may be tactile feedback to the user to indicate that there has been contact with a 3-D object. All these components contribute towards creating an immersive and realistic environment for viewing and interacting with 3-D content. | 05-27-2010 |
20100134618 | EGOMOTION SPEED ESTIMATION ON A MOBILE DEVICE USING A SINGLE IMAGER - Linear and rotational speeds of a mobile device are calculated using distance estimates between imaging sensors in the device and objects or scenes in front of the sensors. The distance estimates are used to modify optical flow vectors from the sensors. Shifting and rotational speeds of the mobile device may then be calculated using the modified optical flow vector values. For example, given a configuration where the first imaging sensor and the second imaging sensor face opposite directions on a single axis, a shifting speed is calculated in the following way: multiplying a first optical flow vector and a first distance estimate, thereby deriving a first modified optical flow vector value; multiplying a second optical flow vector and a second distance estimate, thereby deriving a second modified optical flow vector value; the second modified optical flow vector value may then be subtracted from the first modified optical flow vector value, resulting in a measurement of the shifting speed. | 06-03-2010 |
20100208029 | MOBILE IMMERSIVE DISPLAY SYSTEM - A mobile content delivery and display system enables a user to use a communication device, such as a cell phone or smart handset device, to view data, images, and video, make phone calls, and perform other functions, in an immersive environment while being mobile. The system, also referred to as a platform, includes a display component which may have one of numerous configurations, each providing extended field-of-views (FOVs). Display component shapes may include hemispherical, ellipsoidal, tubular, conical, pyramidal, or square/rectangular. The display component may have one or more vertical and/or horizontal cuts, each having various degrees of inclination, thereby providing the user with partial physical enclosure creating extended horizontal and/or vertical FOVs. The platform may also have one or more projectors for displaying data (e.g., text, images, or video) on the display component. Other sensors in the system may include 2-D and 3-D cameras, location sensors, speakers, microphones, communication devices, and interfaces. The platform may be worn or attached to the user as an accessory facilitating user mobility. | 08-19-2010 |
20110285622 | RENDITION OF 3D CONTENT ON A HANDHELD DEVICE - A handheld device having a display and a front-facing sensor and a back-facing sensor is able to render 3D content in a realistic and spatially correct manner using position-dependent rendering and view-dependent rendering. In one scenario, the 3D content is only computer-generated content and the display on the device is a typical, non-transparent (opaque) display. The position-dependent rendering is performed using either the back-facing sensor or a front-facing sensor having a wide-angle lens. In another scenario, the 3D content is composed of computer-generated 3D content and images of physical objects and the display is either a transparent or semi-transparent display where physical objects behind the device show through the display. In this case, position-dependent rendering is performed using a back-facing sensor that is actuated (capable of physical panning and tilting) or is wide-angle, thereby enabling virtual panning. | 11-24-2011 |
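The shifting-speed calculation described in applications 20100053324 and 20100134618 can be sketched as follows. This is a minimal illustration of the abstracts' stated steps only; the function name, parameter names, and sample values are assumptions, not taken from the filings.

```python
def shifting_speed(flow_front, dist_front, flow_back, dist_back):
    """Estimate the shifting speed of a mobile device from two imaging
    sensors facing opposite directions on a single axis.

    Per the abstracts: each optical flow vector (here reduced to a scalar
    component along the shared axis) is multiplied by the estimated distance
    between the sensor and the scene in front of it, giving a "modified
    optical flow vector value"; the second modified value is then subtracted
    from the first, yielding a measurement of the shifting speed.
    """
    modified_front = flow_front * dist_front  # first modified optical flow value
    modified_back = flow_back * dist_back     # second modified optical flow value
    return modified_front - modified_back

# Illustrative use with assumed flow components (pixels/s) and distances (m):
speed = shifting_speed(flow_front=0.8, dist_front=2.0,
                       flow_back=-0.3, dist_back=1.5)
```

With these assumed inputs the two modified values are 1.6 and -0.45, so the estimated shifting speed is their difference, 2.05.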
20120075166 | ACTUATED ADAPTIVE DISPLAY SYSTEMS - An adjustable, adaptive display system having individual display elements is able to change its configuration based on a user's movements, position, and activities. In a method of adjusting such a display system, a user is tracked using a camera or other tracking sensor, thereby creating user-tracking data. The user-tracking data is input to an actuator signal module which generates input signals for one or more actuators. The input signals are created, in part, from the user-tracking data. Two or more display elements are actuated using the one or more actuators based on the input signals. The display elements may be planar or curved. In this manner, a configuration of the display system adapts to user movements and adjusts systematically. This provides for a greater amount of a user's human visual field (or user FOV) to be filled by the display system. | 03-29-2012 |
20120260207 | DYNAMIC TEXT INPUT USING ON AND ABOVE SURFACE SENSING OF HANDS AND FINGERS - A virtual keyboard is displayed on a touch screen display surface of a computing device. Partial images of the keyboard are displayed, where a partial image may be one key, referred to as the most probable key that the user will touch, or a group of keys, which may include some less probable or surrounding keys that may be touched. Sensors under or near the display surface detect an outline of the user's hands and determine which finger is the fastest moving finger, which is presumed to be the finger used to touch a key. The most probable key is determined based on the fastest moving finger and may be displayed before the finger touches the surface. If the most probable key is not touched, a user profile containing user typing habits may be updated to reflect that a less probable key was touched. | 10-11-2012 |
20130073956 | LINKING PROGRAMMATIC ACTIONS TO USER ACTIONS AT DIFFERENT LOCATIONS - A method for operating a computing device is disclosed, where data that associates a user action at a predetermined location with a programmatic action is stored in memory. A user action being performed at the predetermined location is detected, and the corresponding programmatic action is performed in response to detecting the user action being performed at the predetermined location. | 03-21-2013 |
20130076860 | THREE-DIMENSIONAL RELATIONSHIP DETERMINATION - Example embodiments disclosed herein relate to determining relationships between locations based on beacon information. At least three sensors of a device can be used to determine locations of a beacon. The device can determine a three-dimensional relationship between the locations. | 03-28-2013 |
20130076909 | SYSTEM AND METHOD FOR EDITING ELECTRONIC CONTENT USING A HANDHELD DEVICE - Embodiments of the present invention disclose a system and method for editing electronic content using a handheld device. According to one example embodiment, the system includes a mobile computing device hosting electronic content, and a handheld imaging device. The handheld imaging device is configured to communicate with the mobile computing device and includes an optical sensor for capturing image data associated with an object or area. Still further, the handheld imaging device is configured to transmit and designate a location for insertion of said image data into the electronic content hosted on the mobile device. | 03-28-2013 |
20130077059 | DETERMINING MOTION OF PROJECTION DEVICE - Example embodiments disclosed herein relate to determining a motion based on projected image information. Image information is projected onto an external surface from a device. Sensor information about the external surface and/or projection is received. Motion of the device is determined based on the sensor information. | 03-28-2013 |
20130082928 | KEYBOARD-BASED MULTI-TOUCH INPUT SYSTEM USING A DISPLAYED REPRESENTATION OF A USERS HAND - Example embodiments relate to a keyboard-based multi-touch input system using a displayed representation of a user's hand. In example embodiments, a sensor detects movement of a user's hand in a direction parallel to a top surface of a physical keyboard. A computing device may then receive information describing the movement of the user's hand from the sensor and output a real-time visualization of the user's hand on the display. This visualization may be overlaid on a multi-touch enabled user interface, such that the user may perform actions on objects within the user interface by performing multi-touch gestures. | 04-04-2013 |
20130082937 | METHOD AND SYSTEM FOR ENABLING INSTANT HANDWRITTEN INPUT - Embodiments of the present invention disclose a method and system for enabling instant handwriting input on a mobile computing device. According to one embodiment, while the mobile device is in an inactive state and identity-protected, an activation event associated with a writing tool operated by a user is detected. In response to the activation event, the mobile computing device is switched from the inactive state to a low power state in which the mobile computing device is configured to accept and store handwritten input while remaining identity-protected. | 04-04-2013 |
20130088427 | MULTIPLE INPUT AREAS FOR PEN-BASED COMPUTING - Embodiments of the present invention disclose a pen-based computing system and method using multiple input areas. According to one embodiment, the system includes a mobile computing device having a display, and a pen input device configured to transmit a signal for determining a position of the pen device relative to the mobile computing device. A plurality of input areas are designated around the entire outer periphery of the display and the mobile computing device such that the presence or movement of the pen input device within any one of the plurality of input areas corresponds to an input operation on the mobile computing device. | 04-11-2013 |
20130091238 | PEN-BASED CONTENT TRANSFER SYSTEM AND METHOD THEREOF - Embodiments of the present invention disclose a system and method for providing pen-based content transfer between mobile computing devices. According to one embodiment, a first mobile computing device and second mobile computing device are configured to host electronic content. A pen device is operated by a user for selecting preferred electronic content from the electronic content hosted on the first computing device. Furthermore, the pen device is configured to store transfer information for facilitating transmission of the preferred electronic content from the first mobile computing device to the electronic content of the second mobile computing device based on action from the user. | 04-11-2013 |
20130100008 | Haptic Response Module - Embodiments provide an apparatus that includes a tracking sensor to track movement of a hand behind a display, such that a virtual object may be output via the display, and a haptic response module to output a stream of gas based on a determination that the virtual object has interacted with a portion of the image. | 04-25-2013 |
20130167161 | PROCESSING OF RENDERING DATA BY AN OPERATING SYSTEM TO IDENTIFY A CONTEXTUALLY RELEVANT MEDIA OBJECT - Examples disclose a processor to execute an application associated with an operating system to transmit rendering data which identifies visual objects to display by the operating system. Further, the examples provide the operating system to process the rendering data to identify a media object contextually relevant to the rendering data. Additionally, the examples also disclose the operating system to output the rendering data and the identified media object. | 06-27-2013 |
20130222381 | AUGMENTED REALITY WRITING SYSTEM AND METHOD THEREOF - Embodiments of the present invention disclose an augmented reality writing system and method thereof. According to one example embodiment, the system includes a handheld writing tool having an end portion and a display device for displaying digital content for viewing by an operating user. An optical sensor is coupled to the display device and includes a field of view facing away from the operating user. Furthermore, coupled to the optical sensor is a processing unit configured to detect and track the position of the end portion of the writing tool. In accordance therewith, handwritten content is digitally rendered on the display device to correspond with the handwriting motion of the writing tool within the field of view of the optical sensor. | 08-29-2013 |
20130257734 | USE OF A SENSOR TO ENABLE TOUCH AND TYPE MODES FOR HANDS OF A USER VIA A KEYBOARD - Example embodiments relate to a keyboard-based system that enables a user to provide touch input via the keyboard with one hand and typed input via the keyboard with the other hand. In example embodiments, a sensor detects the user's hands on the top surface of the keyboard. In response, a computing device identifies a first hand and a second hand by analyzing information provided by the sensor. The computing device then assigns the first hand to a touch mode and the second hand to a typing mode. The user may then provide touch input using a visualization of the first hand overlaid on a user interface, while providing typed input with the second hand via the keyboard. | 10-03-2013 |
20130286199 | GENERATION OF A COMBINED IMAGE OF A PRESENTATION SURFACE - Example embodiments relate to generating a combined image of a presentation surface. In example embodiments, an image including a portion of a presentation surface is captured using a camera. A location identifier is then received, where the location identifier specifies a location of a digital version of the content displayed on the presentation surface. Next, the digital version of the content displayed on the presentation surface is retrieved from the location specified by the location identifier. Finally, a combined image is generated by combining the captured image and the retrieved digital version of the content displayed on the presentation surface. | 10-31-2013 |
20150058776 | PROVIDING KEYBOARD SHORTCUTS MAPPED TO A KEYBOARD - Example embodiments relate to the provision of keyboard shortcuts that are mapped to a physical keyboard. In example embodiments, a user interface including a plurality of selectable UI elements is outputted. A plurality of keyboard shortcuts may then be outputted, such that each keyboard shortcut corresponds to a key on a physical keyboard and the shortcuts are spatially arranged in a layout corresponding to a layout of the keyboard. A selection of a particular key may then be received and, in response, the UI element positioned at the location of the keyboard shortcut corresponding to the selected key may be activated. | 02-26-2015 |
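The spatially arranged shortcut mapping of application 20150058776 can be illustrated with a minimal sketch: each selectable UI element is paired with a physical key so that the on-screen shortcut hints mirror the keyboard layout. The key row, element names, and function name below are illustrative assumptions, not from the filing.

```python
# One row of a physical keyboard, left to right (assumed layout).
KEY_LAYOUT = ["q", "w", "e", "r"]

# Selectable UI elements, in the same spatial order as the keys (assumed names).
UI_ELEMENTS = ["Open", "Save", "Print", "Close"]

# Map each key to the UI element whose shortcut hint occupies that key's
# position in the spatially arranged overlay.
shortcuts = dict(zip(KEY_LAYOUT, UI_ELEMENTS))

def activate(key):
    """Return the UI element activated by the selected key, or None if the
    key has no mapped shortcut."""
    return shortcuts.get(key)
```

Because the mapping preserves the keyboard's left-to-right order, pressing the key under a hint activates the UI element displayed at that position.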
Patent application number | Description | Published |
20120085115 | AIR CONDITIONER FOR VEHICLE - An air conditioner for a vehicle may include an evaporator core, a heater core allowing air which has passed through the evaporator core to selectively pass therethrough, a defrost outlet, a vent outlet formed adjacent to the defrost outlet, a floor outlet formed adjacent to the vent outlet, and a sliding door sliding between the defrost outlet, the vent outlet and the floor outlet in series, thus selectively opening or closing the defrost outlet, the vent outlet and the floor outlet. | 04-12-2012 |
20130174676 | AIR CONDITIONER CONTROLLING DEVICE FOR VEHICLE - An air conditioner controlling device for a vehicle may include a knob rotatably installed on the front of a case, a cylindrical cam rotatably installed inside the case, interlocked with a shaft of the knob, and having a slot formed in an outer surface thereof, and a lever having one end rotatably mounted to the case and the other end insertedly coupled to the slot of the cylindrical cam to thereby reciprocate the cylindrical cam at the time of rotation of the cylindrical cam, wherein the lever may be insertedly mounted in a portion of the slot formed along the outer surface of the cylindrical cam, and a mounting position and angle of the lever may be thus adjusted, such that a reciprocation direction of the lever may be freely adjusted. | 07-11-2013 |
20140134937 | AIR CONDITIONING APPARATUS FOR VEHICLES - An air conditioning apparatus includes an air conditioning housing and a mode door. The air conditioning housing has a discharge chamber through which conditioned air is discharged out of the air conditioning housing. The discharge chamber includes a defrost vent, a main vent, floor vents and a rear seat vent. The floor vents are formed on opposite sides of a rear portion of the air conditioning housing. The rear seat vent is formed between the floor vents. The mode door has a front opening hole which is formed in a front portion of the mode door, a pair of main opening holes which are formed in a rear portion of the mode door at positions spaced apart from each other, and a rear-end opening hole which is formed behind a first portion provided between the main opening holes. | 05-15-2014 |