Patent application number | Description | Published |
20090088233 | DYNAMIC PROBLEM SOLVING FOR GAMES - The claimed subject matter provides a system and/or a method that facilitates enhancing a game, game play or playability of a game. An experience component can collect a portion of data related to a game in which the portion of data indicates at least one of a tip or a tactic for the game. A game component can dynamically incorporate the portion of data into the game during game play to enhance playability of such game for a user with assistance provided by at least one of the tip or the tactic. | 04-02-2009 |
20090088726 | SYMBIOTIC BIOLOGICAL SYSTEMS AS PLATFORMS FOR SENSING, PRODUCTION, AND INTERVENTION - Provided are systems and/or methods that facilitate sensing, detecting, or treatment of a condition or need of a living body using a genetically engineered symbiotic agent. | 04-02-2009 |
20090102817 | USER INPUT DEVICE WITH FLYWHEEL FOR SCROLLING - User input devices and methods for use in scrolling with a computing device are provided. One disclosed user input device includes a housing, a control surface coupled to the housing and configured to be manipulated by a digit of the user, and a flywheel operatively coupled to the control surface, such that motion of the digit of the user on the control surface is transferred to the flywheel. The flywheel may be a mechanical flywheel operated by a scroll wheel on which the control surface is positioned, or a virtual flywheel implemented by a computer program and operated by a pressure sensitive input device on which the control surface is positioned. | 04-23-2009 |
20090164236 | SMARTER SCHEDULING FOR MEDICAL FACILITIES AND PHYSICIANS - The claimed subject matter provides a system and/or a method that facilitates scheduling an incoming patient appointment for a medical facility. A medical facility can provide healthcare to a patient, wherein the medical facility can utilize a schedule with an available time slot to assign an appointment to a patient. A match component can evaluate a portion of transportation data to select a patient to whom an appointment on the schedule is allotted. A dynamic schedule component can automatically adjust the schedule based upon the evaluation. | 06-25-2009 |
20090198733 | HEALTHCARE RESOURCE LOCATOR - The claimed subject matter provides a system and/or a method that facilitates identifying a medical facility for an emergency medical situation. An interface can receive a portion of data related to an emergency medical incident and a corresponding location. A match component can evaluate the portion of data to select a medical facility in which to transport a patient involved in the emergency medical incident, wherein the medical facility can be ascertained based on a distance between the location of the emergency medical incident and a location for the selected medical facility and traffic related to a route therebetween. | 08-06-2009 |
20090270755 | PEDOMETER FOR THE BRAIN - The claimed subject matter provides systems and/or techniques that provide biometric feedback monitoring of brain activity. The system includes mechanisms that obtain indication of brain activity associated with an individual which can be utilized to ensure that the individual is maximizing his or her brain activity. Where it is determined that the individual is not optimally utilizing his or her brain, feedback can be directed to the individual in order to stimulate brain activity in a specified response center of the brain. | 10-29-2009 |
20090322278 | DOCKING STATION FOR ELECTRONIC DEVICE - A docking station for an electronic device includes a magnet that holds the electronic device in a mated orientation relative to the docking station. The docking station may include a mounting surface with two or more different charge-contact regions, each charge-contact region electrically coupled to a terminal of an electric power source and positioned to form an electrical connection with a charging terminal of the electronic device when the electronic device mates with the mounting surface. | 12-31-2009 |
20100066675 | Compact Interactive Tabletop With Projection-Vision - The subject application relates to a system(s) and/or methodology that facilitate vision-based projection of any image (still or moving) onto any surface. In particular, a front-projected computer vision-based interactive surface system is provided which uses a new commercially available projection technology to obtain a compact, self-contained form factor. The subject configuration addresses installation, calibration, and portability issues that are primary concerns in most vision-based table systems. The subject application also relates to determining whether an object is touching or hovering over an interactive surface based on an analysis of a shadow image. | 03-18-2010 |
20100271315 | ENCODING AND DECODING ADAPTIVE INPUT DEVICE INPUTS - Systems and methods for encoding and decoding adaptive device inputs are provided. The system may include a computing device coupled to an adaptive input device having a mechanical key set including a plurality of mechanically depressible keys, each key including a touch display. The computing device may comprise code stored in mass storage for implementing via a processor, a touch display application program interface configured to receive encoded input device data including one or more of mechanical key-down input data and touch input data, decode the encoded input device data to identify one or more of a key command corresponding to the mechanical key-down input data and a touch command corresponding to touch input data from one or more keys, and send one or more messages to an adaptive input device application based on the identified key command and/or touch commands. | 10-28-2010 |
20110043142 | SCANNING COLLIMATION OF LIGHT VIA FLAT PANEL LAMP - Various embodiments are disclosed that relate to scanning the direction of light emitted from optical collimators. For example, one disclosed embodiment provides a system for scanning collimated light, the system comprising an optical wedge, a light injection system, and a controller. The optical wedge comprises a thin end, a thick end opposite the thin end, a viewing surface extending at least partially between the thick end and the thin end, and a back surface opposite the viewing surface. The thick end of the optical wedge further comprises an end reflector comprising a faceted lens structure. The light injection system is configured to inject light into the thin end of the optical wedge, and the controller is configured to control the location along the thin end of the optical wedge at which the light injection system injects light. | 02-24-2011 |
20110043490 | ILLUMINATOR FOR TOUCH- AND OBJECT-SENSITIVE DISPLAY - An integrated vision and display system comprises a display-image forming layer configured to transmit a display image for viewing through a display surface; an imaging detector configured to image infrared light of a narrow range of angles relative to the display surface normal and including a reflection from one or more objects on or near the display surface; a vision-system emitter configured to emit the infrared light for illuminating the objects; a visible- and infrared-transmissive light guide having opposing upper and lower faces, configured to receive the infrared light from the vision-system emitter, to conduct the infrared light via TIR from the upper and lower faces, and to project the infrared light onto the objects outside of the narrow range of angles relative to the display surface normal. | 02-24-2011 |
20110044056 | LIGHT COLLECTOR FOR AN ILLUMINATION OPTIC - A light collector is provided to converge light from a light source down to a range of acceptance angles of an illumination optic, and to couple the converged light into the illumination optic, where the range of acceptance angles of the illumination optic is less than a range of emission angles of the light source. | 02-24-2011 |
20110044579 | EFFICIENT COLLIMATION OF LIGHT WITH OPTICAL WEDGE - Embodiments of optical collimators are disclosed. For example, one disclosed embodiment comprises an optical waveguide having a first end, a second end opposing the first end, a viewing surface extending at least partially between the first end and the second end, and a back surface opposing the viewing surface. The viewing surface comprises a first critical angle of internal reflection, and the back surface is configured to be reflective at the first critical angle of internal reflection. Further, a collimating end reflector comprising a faceted lens structure having a plurality of facets is disposed at the second end of the optical waveguide. | 02-24-2011 |
20110044582 | EFFICIENT COLLIMATION OF LIGHT WITH OPTICAL WEDGE - Embodiments of optical collimators are disclosed. For example, one disclosed embodiment comprises an optical waveguide having a first end, a second end opposing the first end, a viewing surface extending at least partially between the first end and the second end, and a back surface opposing the viewing surface. The viewing surface comprises a first critical angle of internal reflection, and the back surface is configured to be reflective at the first critical angle of internal reflection. Further, an end reflector is disposed at the second end of the optical waveguide, and includes a faceted lens structure to cause a majority of the viewing surface to be uniformly illuminated when uniform light is injected into the first end and also to cause a majority of the injected light to exit the viewing surface. | 02-24-2011 |
20110050580 | LIGHT COLLECTOR FOR AN ILLUMINATION OPTIC - A light collector is provided to converge light from a light source down to a range of acceptance angles of an illumination optic, and to couple the converged light into the illumination optic, where the range of acceptance angles of the illumination optic is less than a range of emission angles of the light source. | 03-03-2011 |
20110119640 | DISTANCE SCALABLE NO TOUCH COMPUTING - Disclosed herein are techniques for scaling and translating gestures such that the applicable gestures for control may vary depending on the user's distance from a gesture-based system. The techniques for scaling and translation may take the varying distances from which a user interacts with components of the gesture-based system, such as a computing environment or capture device, into consideration with respect to defining and/or recognizing gestures. In an example embodiment, the physical space is divided into virtual zones of interaction, and the system may scale or translate a gesture based on the zones. A set of gesture data may be associated with each virtual zone such that gestures appropriate for controlling aspects of the gesture-based system may vary throughout the physical space. | 05-19-2011 |
20110154266 | CAMERA NAVIGATION FOR PRESENTATIONS - Techniques for managing a presentation of information in a gesture-based system, where gestures are derived from a user's body position or motion in the physical space, may enable a user to use gestures to control the manner in which the information is presented or to otherwise interact with the gesture-based system. A user may present information to an audience using gestures that control aspects of the system, or multiple users may work together using gestures to control aspects of the system. Thus, in an example embodiment, a single user can control the presentation of information to the audience via gestures. In another example embodiment, multiple participants can share control of the presentation via gestures captured by a capture device or otherwise interact with the system to control aspects of the presentation. | 06-23-2011 |
20110228562 | EFFICIENT COLLIMATION OF LIGHT WITH OPTICAL WEDGE - Embodiments of optical collimators are disclosed. For example, one disclosed embodiment comprises an optical waveguide having a first end, a second end opposing the first end, a viewing surface extending at least partially between the first end and the second end, and a back surface opposing the viewing surface. The viewing surface comprises a first critical angle of internal reflection, and the back surface is configured to be reflective at the first critical angle of internal reflection. Further, a collimating end reflector comprising a faceted lens structure having a plurality of facets is disposed at the second end of the optical waveguide. | 09-22-2011 |
20110242298 | PRIVATE VIDEO PRESENTATION - Embodiments are disclosed that relate to private video presentation. For example, one disclosed embodiment provides a system including a display surface, a directional backlight system configured to emit a beam of light from the display surface and to vary a direction in which the beam of light is directed, and a spatial light modulator configured to form an image for display via the directional backlight system. The system further includes a controller configured to control the optical system and the light modulator to display a first video content item at a first viewing angle and a second video content item at a second viewing angle. | 10-06-2011 |
20110260976 | TACTILE OVERLAY FOR VIRTUAL KEYBOARD - The present disclosure provides for a computing system having virtual keyboard functionality that can be selectively enhanced through use of a tactile keyboard overlay. The tactile keyboard overlay includes a plurality of depressible key portions, and is placed onto an operative surface of a touch interactive display. The computing system configures the virtual keyboard functionality, so that each of the depressible key portions is operable to produce a touch input on the touch interactive display that causes performance of a corresponding input operation. The virtual keyboard functionality is dynamically and automatically located on the touch interactive display based on user placement of the tactile keyboard overlay. | 10-27-2011 |
20110267501 | SCANNED BEAM DISPLAY AND IMAGE CAPTURE - A method for displaying or capturing an image comprises directing an illumination beam onto a mirror of a highly resonant, mirror-mount system and applying a drive signal to a transducer to deflect the mirror. In this method, the drive signal has a pulse frequency approaching a resonance frequency of the mirror-mount system. The method further comprises reflecting the illumination beam off the mirror so that the illumination beam scans through an area where the image is to be displayed or captured, and, addressing each pixel of the image in synchronicity with the drive signal to display or capture the image. | 11-03-2011 |
20110279648 | SCANNED-BEAM DEPTH MAPPING TO 2D IMAGE - A method for constructing a 3D representation of a subject comprises capturing, with a camera, a 2D image of the subject. The method further comprises scanning a modulated illumination beam over the subject to illuminate, one at a time, a plurality of target regions of the subject, and measuring a modulation aspect of light from the illumination beam reflected from each of the target regions. A moving-mirror beam scanner is used to scan the illumination beam, and a photodetector is used to measure the modulation aspect. The method further comprises computing a depth aspect based on the modulation aspect measured for each of the target regions, and associating the depth aspect with a corresponding pixel of the 2D image. | 11-17-2011 |
20110307599 | PROXIMITY NETWORK - A proximity network architecture is proposed that enables a device to detect other devices in its proximity and automatically interact with the other devices to share in a user experience. In one example implementation, data and code for the experience is stored in the cloud so that users can participate in the experience from multiple and different types of devices. | 12-15-2011 |
20110310232 | SPATIAL AND TEMPORAL MULTIPLEXING DISPLAY - Described is a multi-view display provided by combining spatial multiplexing (e.g., using a parallax barrier or lenslet) and temporal multiplexing (e.g., using a directed backlight). A scheduling algorithm generates different views by determining which light sources are illuminated at a particular time. Via the temporal multiplexing, different views may be in the same spatial viewing angle (spatial zone). Two of the views may correspond to two eyes of a person, with different video data sent to each eye to provide an autostereoscopic display for that person. Eye (head) tracking may be used to move the view or views with a person as that person moves. | 12-22-2011 |
20110310233 | Optimization of a Multi-View Display - Described herein is a multi-view display (based on spatial and/or temporal multiplexing) having an optimization mechanism that dynamically adjusts views based upon detected state changes with respect to one or more views. The optimization mechanism determines viewing parameters (e.g., brightness and/or colors) for a view based upon a current position of the view, and/or on the multi-view display's capabilities. The state change may correspond to the view (a viewer's eye) moving towards another viewing zone, in which event new viewing parameters are determined, which may be in anticipation of entering the zone. Another state change corresponds to more views being needed than the display is capable of outputting, whereby one or more existing views are degraded, e.g., from 3D to 2D and/or from a personal video to a non-personal view. Conversely, a state change corresponding to excess capacity becoming available can result in enhancing a view to 3D and/or personal. | 12-22-2011 |
20120127084 | VARIABLE LIGHT DIFFUSION IN INTERACTIVE DISPLAY DEVICE - Embodiments are disclosed that relate to variable diffusers in interactive display devices. One embodiment provides an interactive display device comprising a display panel configured to display an image on an interactive surface, an image capture device configured to capture an image of the interactive surface, a variable diffuser disposed optically between the display panel and the image capture device, a logic subsystem comprising one or more logic devices, and memory comprising instructions executable by the logic subsystem to operate the display panel, the image capture device, and the variable diffuser. | 05-24-2012 |
20120127128 | HOVER DETECTION IN AN INTERACTIVE DISPLAY DEVICE - Embodiments are disclosed that relate to hover detection in interactive display devices. One embodiment provides an interactive display device comprising a display panel configured to display an image on an interactive surface, an imaging optical wedge disposed adjacent to the display panel, an image sensor configured to capture an image of an object located in front of the interactive surface and spaced from the interactive surface by capturing the image through the imaging optical wedge, a logic subsystem, and a data-holding subsystem comprising instructions executable by the logic subsystem to operate the display panel and the image sensor, and to detect a hover input based upon one or more images received from the image sensor. | 05-24-2012 |
20120200532 | TOUCH-PRESSURE SENSING IN A DISPLAY PANEL - A touch-pressure sensitive panel includes a locally and resiliently deformable waveguide having an exterior surface for receiving localized touch pressure from a user, and a wetting surface opposite the exterior surface. The panel also includes a de-wettable layer presenting a de-wettable surface arranged beneath the wetting surface, such that the localized touch pressure reversibly increases localized optical wetting of the de-wettable surface by the wetting surface. The panel also includes an imaging detector configured to receive light coupled into the de-wettable layer due to the localized optical wetting. | 08-09-2012 |
20120206937 | EFFICIENT COLLIMATION OF LIGHT WITH OPTICAL WEDGE - Embodiments of optical collimators are disclosed. For example, one disclosed embodiment comprises an optical waveguide having a first end, a second end opposing the first end, a viewing surface extending at least partially between the first end and the second end, and a back surface opposing the viewing surface. The viewing surface comprises a first critical angle of internal reflection, and the back surface is configured to be reflective at the first critical angle of internal reflection. Further, a collimating end reflector comprising a faceted lens structure having a plurality of facets is disposed at the second end of the optical waveguide. | 08-16-2012 |
20120262407 | TOUCH AND STYLUS DISCRIMINATION AND REJECTION FOR CONTACT SENSITIVE COMPUTING DEVICES - A "Contact Discriminator" provides various techniques for differentiating between valid and invalid contacts received from any input methodology by one or more touch-sensitive surfaces of a touch-sensitive computing device. Examples of contacts include single, sequential, concurrent, or simultaneous user finger touches (including gesture type touches), pen or stylus touches or inputs, hover-type inputs, or any combination thereof. The Contact Discriminator then acts on valid contacts (i.e., contacts intended as inputs) while rejecting or ignoring invalid contacts or inputs. Advantageously, the Contact Discriminator is further capable of disabling or ignoring regions of input surfaces, such as tablet touch screens, that are expected to receive unintentional contacts, or intentional contacts not intended as inputs, for device or application control purposes. Examples of contacts not intended as inputs include, but are not limited to, a user's palm resting on a touch screen while the user writes on that screen with a stylus or pen. | 10-18-2012 |
20120315965 | Locational Node Device - A node device in a distributed virtual environment captures locational signals projected by another node device into a capture area of the node device and reflected from the capture area to a capture device of the node device. The location of the node device relative to the other node device is determined based on the captured locational signals. The determined location can be based on an angular relationship determined between the node device and the other node device based on the captured locational signals. The determined location can also be based on a relative distance determined between the node device and the other node device based on the captured locational signals. Topology of the capture area can also be detected by the node device, and topologies of multiple capture areas can be combined to define one or more surfaces in a virtual environment. | 12-13-2012 |
20120320157 | COMBINED LIGHTING, PROJECTION, AND IMAGE CAPTURE WITHOUT VIDEO FEEDBACK - A "Concurrent Projector-Camera" uses an image projection device in combination with one or more cameras to enable various techniques that provide visually flicker-free projection of images or video, while real-time image or video capture is occurring in that same space. The Concurrent Projector-Camera provides this projection in a manner that eliminates video feedback into the real-time image or video capture. More specifically, the Concurrent Projector-Camera dynamically combines on-state temporal compression of the projector lighting (or light-control points) with on-state temporal shifting during each image frame projection to open a "capture time slot" for image capture during which no image is being projected. This capture time slot represents a tradeoff between image capture time and decreased brightness of the projected image. Examples of image projection devices include LED-LCD based projection devices, DLP-based projection devices using LED or laser illumination in combination with micromirror arrays, etc. | 12-20-2012 |
20120320169 | VOLUMETRIC VIDEO PRESENTATION - Various embodiments are disclosed that relate to the presentation of video images in a presentation space via a head-mounted display. For example, one disclosed embodiment comprises receiving viewer location data and orientation data from a location and orientation sensing system and, from the viewer location data and the viewer orientation data, locating a viewer in a presentation space, determining a direction in which the viewer is facing, and determining an orientation of the head-mounted display system. From the determined location, direction, and orientation, a presentation image is determined based upon a portion of and an orientation of a volumetric image mapped to the portion of the presentation space that is within the viewer's field of view. The presentation image is then sent to the head-mounted display. | 12-20-2012 |
20120324491 | VIDEO HIGHLIGHT IDENTIFICATION BASED ON ENVIRONMENTAL SENSING - Embodiments related to identifying and displaying portions of video content taken from longer video content are disclosed. In one example embodiment, a portion of a video item is provided by receiving, for a video item, an emotional response profile for each viewer of a plurality of viewers, each emotional response profile comprising a temporal correlation of a particular viewer's emotional response to the video item when viewed by the particular viewer. The method further comprises selecting, using the emotional response profiles, a first portion of the video item judged to be more emotionally stimulating than a second portion of the video item, and sending the first portion of the video item to another computing device in response to a request for the first portion of the video item without sending the second portion of the video item. | 12-20-2012 |
20120324492 | VIDEO SELECTION BASED ON ENVIRONMENTAL SENSING - Embodiments related to providing video items to a plurality of viewers in a video viewing environment are provided. In one embodiment, the video item is provided by determining identities for each of the viewers from data received from video viewing environment sensors, obtaining the video item based on those identities, and sending the video item for display. | 12-20-2012 |
20120326959 | REGION OF INTEREST SEGMENTATION - A sensor manager provides dynamic input fusion using thermal imaging to identify and segment a region of interest. Thermal overlay is used to focus heterogeneous sensors on regions of interest according to optimal sensor ranges and to reduce ambiguity of objects of interest. In one implementation, a thermal imaging sensor locates a region of interest that includes an object of interest within predetermined wavelengths. Based on the thermal imaging sensor input, the region on which each of the plurality of sensors is focused and the parameters each sensor employs to capture data from a region of interest are dynamically adjusted. The thermal imaging sensor input may be used during data pre-processing to dynamically eliminate or reduce unnecessary data and to dynamically focus data processing on sensor input corresponding to a region of interest. | 12-27-2012 |
20120327218 | RESOURCE CONSERVATION BASED ON A REGION OF INTEREST - A detected region of interest is used to reduce the data processed by a capture device and/or transmitted by the capture device to a console, and/or to reduce power consumption by the capture device. Raw data from the one or more sensors is processed in the capture device to reduce data corresponding to regions outside the region of interest. Such data reduction lowers computational requirements, which conserves power. Operational parameters of the capture device are adjusted based on the region of interest mask. A field of view, the resolution, or the sensitivity of at least one of the sensors may be narrowed to focus resources on the region of interest. Adjusting the operational parameters of a sensor reduces the power consumption of the capture device and reduces data input. An illumination source may be adjusted to focus the illumination source on the region of interest to use less power. | 12-27-2012 |
20140022184 | SPEECH AND GESTURE RECOGNITION ENHANCEMENT - The recognition of user input to a computing device is enhanced. The user input is either speech, or handwriting data input by the user making screen-contacting gestures, or a combination of one or more prescribed words that are spoken by the user and one or more prescribed screen-contacting gestures that are made by the user, or a combination of one or more prescribed words that are spoken by the user and one or more prescribed non-screen-contacting gestures that are made by the user. | 01-23-2014 |
20140132595 | IN-SCENE REAL-TIME DESIGN OF LIVING SPACES - A display that renders realistic objects allows a designer to redesign a living space in real time based on an existing layout. A computer system renders simulated objects on the display such that the simulated objects appear to the viewer to be in substantially the same place as actual objects in the scene. The displayed simulated objects can be spatially manipulated on the display through various user gestures. A designer can visually simulate a redesign of the space in many ways, for example, by adding selected objects, or by removing or rearranging existing objects, or by changing properties of those objects. Such objects also can be associated with shopping resources to enable related goods and services to be purchased, or other commercial transactions to be engaged in. | 05-15-2014 |
20140168096 | REDUCING LATENCY IN INK RENDERING - A reduced-latency ink rendering system and method that reduces latency in rendering ink on a display by bypassing at least some layers of the operating system. “Ink” is any input from a user through a touchscreen device using the user's finger or a pen. Moreover, some embodiments of the system and method avoid the operating system and each central-processing unit (CPU) on a computing device when initially rendering the ink by going directly from the digitizer to the display controller. Any correction or additional processing of the rendered ink is performed after the initial rendering of the ink. Embodiments of the system and method address ink-rendering latency in software embodiments, which include techniques to bypass the typical rendering pipeline and quickly render ink on the display, and hardware embodiments, which use hardware and techniques that locally change display pixels. These embodiments can be mixed and matched in any manner. | 06-19-2014 |
20140198297 | USING A 3D DISPLAY TO TRAIN A WEAK EYE - A method for treating a weak viewer-eye includes the steps of receiving eye-strength data indicative of an eye-strength of the weak viewer-eye and causing a 3D display system to vary, in accordance with the eye-strength of the weak viewer-eye, display characteristics of a perspective that the 3D display system displays. | 07-17-2014 |
20140200079 | SYSTEMS AND METHODS FOR DIFFERENTIATING BETWEEN DOMINANT AND WEAK EYES IN 3D DISPLAY TECHNOLOGY - A method of displaying visual information to different viewer-eyes includes receiving eye strength data indicative of a deficiency of a weak viewer-eye with respect to a dominant viewer-eye. The method further includes causing a 3D-display system to display a first perspective of an image to the weak viewer-eye and causing the 3D-display system to display a second perspective of the image to the dominant viewer-eye. A difference between the first perspective and the second perspective is a variation of a display characteristic of one of the first and second perspectives, where the variation is made in accordance with the indicated deficiency of the weak viewer-eye. | 07-17-2014 |
20140267176 | SYSTEMS AND METHODS FOR PARALLAX COMPENSATION - An electronic device may include a touch screen electronic display configured to offset and/or shift the contact locations of touch implements and/or displayed content based on one or more calculated parallax values. The parallax values may be associated with the viewing angle of an operator relative to the display of the electronic device. In various embodiments, the parallax value(s) may be calculated using three-dimensional location sensors, an angle of inclination of a touch implement, and/or one or more displayed calibration objects. Parallax values may be utilized to remap contact locations of a touch implement, shift and/or offset displayed content, and/or perform other transformations as described herein. Stereoscopically displayed content may be offset such that a default display plane is coplanar with a touch surface rather than a display surface. Contacts by a finger may be remapped using portions of the contact region and/or a centroid of the contact region. | 09-18-2014 |
20140267177 | SYSTEMS AND METHODS FOR PARALLAX COMPENSATION - An electronic device may include a touch screen electronic display configured to offset and/or shift the contact locations of touch implements and/or displayed content based on one or more calculated parallax values. The parallax values may be associated with the viewing angle of an operator relative to the display of the electronic device. In various embodiments, the parallax value(s) may be calculated using three-dimensional location sensors, an angle of inclination of a touch implement, and/or one or more displayed calibration objects. Parallax values may be utilized to remap contact locations of a touch implement, shift and/or offset displayed content, and/or perform other transformations as described herein. Stereoscopically displayed content may be offset such that a default display plane is coplanar with a touch surface rather than a display surface. Contacts by a finger may be remapped using portions of the contact region and/or a centroid of the contact region. | 09-18-2014 |
20140267178 | SYSTEMS AND METHODS FOR PARALLAX COMPENSATION - An electronic device may include a touch screen electronic display configured to offset and/or shift the contact locations of touch implements and/or displayed content based on one or more calculated parallax values. The parallax values may be associated with the viewing angle of an operator relative to the display of the electronic device. In various embodiments, the parallax value(s) may be calculated using three-dimensional location sensors, an angle of inclination of a touch implement, and/or one or more displayed calibration objects. Parallax values may be utilized to remap contact locations of a touch implement, shift and/or offset displayed content, and/or perform other transformations as described herein. Stereoscopically displayed content may be offset such that a default display plane is coplanar with a touch surface rather than a display surface. Contacts by a finger may be remapped using portions of the contact region and/or a centroid of the contact region. | 09-18-2014 |
20140267179 | SYSTEMS AND METHODS FOR PARALLAX COMPENSATION - An electronic device may include a touch screen electronic display configured to offset and/or shift the contact locations of touch implements and/or displayed content based on one or more calculated parallax values. The parallax values may be associated with the viewing angle of an operator relative to the display of the electronic device. In various embodiments, the parallax value(s) may be calculated using three-dimensional location sensors, an angle of inclination of a touch implement, and/or one or more displayed calibration objects. Parallax values may be utilized to remap contact locations of a touch implement, shift and/or offset displayed content, and/or perform other transformations as described herein. Stereoscopically displayed content may be offset such that a default display plane is coplanar with a touch surface rather than a display surface. Contacts by a finger may be remapped using portions of the contact region and/or a centroid of the contact region. | 09-18-2014 |
20140267184 | Multimode Stylus - A stylus for use as an input device automatically switches its mode of operation. | 09-18-2014 |
20140333735 | CONTROLLABLE LENTICULAR LENSLETS - An autostereoscopic 3D display system includes a display having a plurality of pixels, wherein each pixel is configured to display light rays representing a left-eye view and a right-eye view of an image. The autostereoscopic 3D display system further includes an optical-deflection system configured to control the light rays representing the left-eye view and the right-eye view. The optical-deflection system includes a separately controllable lenslet associated with each pixel, where the lenslet is configured to steer the light ray representing the left-eye view corresponding to the pixel, and steer the light ray representing the right-eye view corresponding to the pixel. | 11-13-2014 |
20140376785 | SYSTEMS AND METHODS FOR ENHANCEMENT OF FACIAL EXPRESSIONS - A system for enhancing a facial expression includes a processing circuit configured to receive video of a user, generate facial data corresponding to a face of the user, analyze the facial data to identify a facial expression, enhance the facial data based on the facial expression, and output modified video including the enhanced facial data. | 12-25-2014 |
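The parallax-compensation abstracts above (20140267176 through 20140267179) describe remapping a touch contact based on the operator's viewing angle relative to the display. A minimal sketch of that idea, assuming the simplest geometry (a single transparent layer of known thickness between the touch surface and the display plane, with the viewing direction decomposed into per-axis angles) — the function names `parallax_offset` and `remap_contact` are illustrative, not taken from the filings:

```python
import math

def parallax_offset(glass_thickness_mm: float, viewing_angle_deg: float) -> float:
    """Apparent shift (mm) between a point on the display plane and its
    projection onto the touch surface, for a viewer at the given angle
    from the display normal (0 degrees = head-on, no shift)."""
    return glass_thickness_mm * math.tan(math.radians(viewing_angle_deg))

def remap_contact(x_mm: float, y_mm: float,
                  glass_thickness_mm: float,
                  azimuth_deg: float, elevation_deg: float) -> tuple:
    """Shift a raw touch contact back along the viewer's line of sight,
    compensating for parallax independently on each screen axis."""
    dx = parallax_offset(glass_thickness_mm, azimuth_deg)
    dy = parallax_offset(glass_thickness_mm, elevation_deg)
    return (x_mm - dx, y_mm - dy)
```

Under this toy model, a head-on viewer (both angles zero) sees no correction, while an off-axis viewer has the contact point shifted by the glass thickness times the tangent of each viewing angle; the abstracts' stereoscopic variant would instead apply the offset to the displayed content so the default display plane coincides with the touch surface.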