Patent application number | Description | Published |
20090219256 | Systems and Methods for Resolving Multitouch Scenarios for Optical Touchscreens - An optical touch detection system may rely on triangulating points in a touch area based on the direction of shadows cast by an object interrupting light in the touch area. When two interruptions occur simultaneously, ghost points and true touch points triangulated from the shadows can be distinguished from one another without resort to additional light detectors. In some embodiments, a distance from a touch point to a single light detector can be determined or estimated based on a change in the length of a shadow detected by a light detector when multiple light sources are used. Based on the distance, the true touch points can be identified by comparing the distance as determined from shadow extension to a distance calculated from the triangulated location of the touch points. | 09-03-2009 |
20100045629 | Systems For Resolving Touch Points for Optical Touchscreens - An optical touch detection system may rely on triangulating points in a touch area based on the direction of shadows cast by an object interrupting light in the touch area. When two interruptions occur simultaneously, ghost points and true touch points triangulated from the shadows can be distinguished from one another without resort to additional light detectors. In some embodiments, a distance from a touch point to a single light detector can be determined or estimated based on a change in the length of a shadow detected by a light detector when multiple light sources are used. Based on the distance, the true touch points can be identified by comparing the distance as determined from shadow extension to a distance calculated from the triangulated location of the touch points. | 02-25-2010 |
20100085330 | TOUCH SCREEN SIGNAL PROCESSING - A coordinate detection system can comprise a display screen, a touch surface corresponding to the top of the display screen or a material positioned above the screen and defining a touch area, at least one camera outside the touch area and configured to capture an image of space above the touch surface, an illumination system comprising a light source, the illumination system configured to project light from the light source through the touch surface, and a processor executing program code to identify whether an object interferes with the light from the light source projected through the touch surface based on the image captured by the at least one camera. Light can be directed upward by sources positioned behind the screen, by sources positioned behind the screen that direct light into a backlight assembly that directs the light upward, and/or by a forward optical assembly in front of the screen that directs the light upward. | 04-08-2010 |
20100090985 | TOUCH SCREEN SIGNAL PROCESSING - A touch screen which uses light sources at one or more edges of the screen which direct light across the surface of the screen and at least two cameras having electronic outputs located at the periphery of the screen to receive light from said light sources. A processor receives the outputs of said cameras and employs triangulation techniques to determine the location of an object proximate to said screen. Detecting the presence of an object includes detecting at the cameras the presence or absence of direct light due to the object, using a screen surface as a mirror, and detecting at the cameras the presence or absence of reflected light due to an object. The light sources may be modulated to provide a frequency band in the output of the cameras. | 04-15-2010 |
20100097353 | TOUCH SCREEN SIGNAL PROCESSING - A touch screen which uses light sources at one or more edges of the screen which direct light across the surface of the screen and at least two cameras having electronic outputs located at the periphery of the screen to receive light from said light sources. A processor receives the outputs of said cameras and employs triangulation techniques to determine the location of an object proximate to said screen. Detecting the presence of an object includes detecting at the cameras the presence or absence of direct light due to the object, using a screen surface as a mirror, and detecting at the cameras the presence or absence of reflected light due to an object. The light sources may be modulated to provide a frequency band in the output of the cameras. | 04-22-2010 |
20100103143 | TOUCH SCREEN SIGNAL PROCESSING - A touch screen which uses light sources at one or more edges of the screen which direct light across the surface of the screen and at least two cameras having electronic outputs located at the periphery of the screen to receive light from said light sources. A processor receives the outputs of said cameras and employs triangulation techniques to determine the location of an object proximate to said screen. Detecting the presence of an object includes detecting at the cameras the presence or absence of direct light due to the object, using a screen surface as a mirror, and detecting at the cameras the presence or absence of reflected light due to an object. The light sources may be modulated to provide a frequency band in the output of the cameras. | 04-29-2010 |
20100207911 | Touch screen Signal Processing With Single-Point Calibration - A coordinate detection system can comprise a display screen, a touch surface corresponding to the top of the display screen or a material positioned above the screen and defining a touch area, at least one camera outside the touch area and configured to capture an image of space above the touch surface, and a processor executing program code to identify whether an object interferes with the light from the light source projected through the touch surface based on the image captured by the at least one camera. The processor can be configured to carry out a calibration routine utilizing a single touch point in order to determine a plane corresponding to the touch surface by using mirror images of the features adjacent the touch surface, images of the features, and/or based on the touch point and a normal to the reflective plane defined by an image of the object and its mirror image. | 08-19-2010 |
20100225588 | Methods And Systems For Optical Detection Of Gestures - A position detection system can comprise a display device, an input device, and an optical assembly positioned adjacent to the display device. The optical assembly can comprise an image sensor configured to detect light in a space between the display device and the input device. One or both of the optical assembly and the input device can be configured to direct energy into the space between the display device and the input device, with directing energy comprising reflecting energy and/or emitting energy. A processing device can be configured to use the image sensor to determine when an object is in the space and/or to determine motion of the object. | 09-09-2010 |
20100229090 | Systems and Methods for Interacting With Touch Displays Using Single-Touch and Multi-Touch Gestures - Embodiments include position detection systems that can identify two touch locations mapped to positions proximate a GUI object, such as a boundary. In response to movement of one or both of the two touch locations, the GUI object can be affected, such as moving the boundary to resize a corresponding object and/or to relocate the boundary, or the GUI object can be selected without movement of the touch locations. Embodiments include single touch gestures, such as identifying a rolling, bending, or other movement occurring while a touch location remains substantially the same and interpreting the movement as an input command. Embodiments may utilize one or more optical sensors having sufficient sensitivity to recognize changes in detected light due to variations in object orientation, makeup or posture caused by the rolling, bending, and/or other movement(s). | 09-09-2010 |
20110050649 | Determining the Location of Touch Points in a Position Detection System - A position detection system includes at least two optical units configured to image a space, a memory, and a processing device interfaced to the memory and the optical units. The processing device is configured to access image data from the first and second optical units and use this data to determine at least one of a current first position and a current second position representing touch points on a display. The processing device can define a polygon having at least four sides based on the current first and current second positions and can access the memory to store and retrieve the polygon. If the processing device can determine only one of the current first position or the current second position based on the accessed image data, the processing device can use the previously defined polygon to estimate the other position that was not determined using the accessed image data. | 03-03-2011 |
20110176082 | Mounting Members For Touch Sensitive Displays - A mounting assembly for an optical touch system can comprise an elongated member defining a top face extending in a plane. A first mounting portion extends perpendicular to a first end of the elongated member and a second mounting portion extends perpendicular to a second end. The elongated member defines a first side face perpendicular to the top face and each mounting portion defines a second side face perpendicular to the top face. The mounting assembly is directly mounted to a panel with the top face of the elongated member overlaying at least a portion of the front face of the panel, the first side face overlaying at least a portion of the top edge, the top face of each mounting portion overlaying at least a portion of the front face along the side edges, and each second side face overlaying at least a portion of a respective side edge. | 07-21-2011 |
20110199387 | Activating Features on an Imaging Device Based on Manipulations - Certain aspects and embodiments of the present invention relate to manipulating elements to control an imaging device. According to some embodiments, the imaging device includes a memory, a processor, and a photographic assembly. The photographic assembly includes sensors that can detect and image an object in a viewing area of the imaging device. One or more computer programs can be stored in the memory to determine whether identifiable elements used in the manipulation exist. Manipulations of these elements are compared to stored manipulations to locate a match. In response to locating a match, one or more functions that correspond to the manipulation can be activated on the imaging device. Examples of such functions include the zoom and focus features typically found in cameras, as well as features that are represented as “clickable” icons or other images that are superimposed on the screen of the imaging device. | 08-18-2011 |
20110205151 | Methods and Systems for Position Detection - A computing device, such as a desktop, laptop, tablet computer, a mobile device, or a computing device integrated into another device (e.g., an entertainment device for gaming, a television, an appliance, kiosk, vehicle, tool, etc.) is configured to determine user input commands from the location and/or movement of one or more objects in a space. The object(s) can be imaged using one or more optical sensors and the resulting position data can be interpreted in any number of ways to determine a command, including 2-dimensional and 3-dimensional movements with or without touch. | 08-25-2011 |
20110205155 | Methods and Systems for Position Detection Using an Interactive Volume - A computing device, such as a desktop, laptop, tablet computer, a mobile device, or a computing device integrated into another device (e.g., an entertainment device for gaming, a television, an appliance, kiosk, vehicle, tool, etc.) is configured to determine user input commands from the location and/or movement of one or more objects in a space. The object(s) can be imaged using one or more optical sensors and the resulting position data can be interpreted in any number of ways to determine a command. An interactive volume can be defined and adjusted so that the same movement at different locations within the volume may result in different corresponding movement of a cursor or other interpretations of input. | 08-25-2011 |
20110205185 | Sensor Methods and Systems for Position Detection - A computing device, such as a desktop, laptop, tablet computer, a mobile device, or a computing device integrated into another device (e.g., an entertainment device for gaming, a television, an appliance, kiosk, vehicle, tool, etc.) is configured to determine user input commands from the location and/or movement of one or more objects in a space. The object(s) can be imaged using one or more optical sensors and the resulting position data can be interpreted in any number of ways to determine a command. Signal conditioning logic (or a programmable CPU) can be used to facilitate detection by performing at least some image processing in hardware before the image is provided by the imaging device, such as by a hardware-implemented ambient subtraction, infinite impulse response (IIR) or finite impulse response (FIR) filtering, background-tracker-based touch detection, or the like. | 08-25-2011 |
20110205186 | Imaging Methods and Systems for Position Detection - A computing device, such as a desktop, laptop, tablet computer, a mobile device, or a computing device integrated into another device (e.g., an entertainment device for gaming, a television, an appliance, kiosk, vehicle, tool, etc.) is configured to determine user input commands from the location and/or movement of one or more objects in a space. The object(s) can be imaged using one or more optical sensors and the resulting position data can be interpreted in any number of ways to determine a command. During a first sampling iteration, a range of pixels can be identified from a location of a feature of the object, with the range used in sampling from the at least one imaging device during a second iteration based on the data sampled during the first iteration. | 08-25-2011 |
20110205189 | Stereo Optical Sensors for Resolving Multi-Touch in a Touch Detection System - An optical touch detection system including at least two stereo pairs of optical sensors, which identify sets of potential points, and methods for determining which of the potential points are true touch points. The first pair of optical sensors is used to identify a first set of potential points and the second pair of optical sensors is used to identify a second set of potential points. Potential point pairs are then compared as between the first and second sets of potential points, i.e., each potential point pair includes a potential point from the first set and a potential point from the second set. Each potential point pair is evaluated to determine the distance between its constituent potential points. The true touch points are identified by selecting the potential point pairs having the shortest distances between their constituent potential points. Using at least two pairs of optical sensors reduces the total number of potential point pairs that must be evaluated to determine the true touch points, and thus the necessary computational analysis. | 08-25-2011 |
20110221666 | Methods and Apparatus For Gesture Recognition Mode Control - Computing devices can comprise a processor and an imaging device. The processor can be configured to support both a mode where gestures are recognized and one or more other modes during which the computing device operates but does not recognize some or all available gestures. The processor can determine whether a gesture recognition mode is activated, use image data from the imaging device to identify a pattern of movement of an object in the space, and execute a command corresponding to the identified pattern of movement if the gesture recognition mode is activated. The processor can also be configured to enter or exit the gesture recognition mode based on various input events. | 09-15-2011 |
20120044143 | OPTICAL IMAGING SECONDARY INPUT MEANS - An optical imaging secondary input means for a computing system. The computing system includes a display screen having a viewing area and a computing device interfaced therewith. At least one primary input means, such as a coordinate input system, keyboard, mouse, etc., is interfaced with the computing device. The optical imaging secondary input means includes a reflective surface external to the viewing area of the display screen, at least one energy emitter for emitting energy toward the reflective surface, and at least one optical sensor for detecting the energy reflected from the reflective surface and outputting signals representing the same to the computing device. The computing device also executes one or more program modules for determining whether an object interacts with the secondary input means based on changes in the energy reflected from the reflective surface, as represented by the signals from the at least one optical sensor. | 02-23-2012 |
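Several of the abstracts above (e.g., 20100090985 and its continuations) rest on the same geometric step: two peripheral cameras each report the bearing of an object interrupting the light, and the touch location is the intersection of the two rays. A minimal sketch of that triangulation follows; the camera positions, the angle convention (radians measured from the positive x-axis), and the function name are illustrative assumptions, not details taken from any application:

```python
import math

def triangulate(cam1, theta1, cam2, theta2):
    """Locate a touch point by intersecting two rays, one from each
    camera position at its reported bearing angle (radians from the
    positive x-axis)."""
    x1, y1 = cam1
    x2, y2 = cam2
    # Direction vectors of the two rays.
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve cam1 + t1*d1 = cam2 + t2*d2 for t1 via the 2x2 determinant.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("rays are parallel; no unique intersection")
    t1 = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t1 * d1[0], y1 + t1 * d1[1])
```

For example, cameras at opposite top corners of a 4-unit-wide touch area sighting a touch at (2, 2) would report bearings of 45° and 135°, and the two rays intersect at that point.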
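With two simultaneous touches, triangulation yields four candidate intersections per sensor pair, of which two are ghost points. Application 20110205189 resolves this by comparing candidate sets from two independent stereo pairs: true touches are triangulated near the same spot by both pairs, while ghosts generally are not. A hedged sketch of that selection step (the point sets, the greedy pairing rule, and reporting the midpoint of agreeing candidates are illustrative assumptions):

```python
import math

def resolve_ghosts(set_a, set_b, n_touches):
    """Given candidate touch points triangulated independently by two
    stereo sensor pairs, keep the n_touches cross-set pairs whose
    constituent points agree most closely."""
    # All cross-set pairings, ordered by distance between constituents.
    candidates = sorted(
        ((math.dist(p, q), p, q) for p in set_a for q in set_b),
        key=lambda c: c[0],
    )
    used_a, used_b, touches = set(), set(), []
    for _, p, q in candidates:
        if len(touches) == n_touches:
            break
        if p in used_a or q in used_b:
            continue  # each candidate may belong to only one touch
        used_a.add(p)
        used_b.add(q)
        # Report the midpoint of the two agreeing candidates.
        touches.append(((p[0] + q[0]) / 2, (p[1] + q[1]) / 2))
    return touches
```

As the abstract notes, selecting the shortest-distance pairs avoids evaluating every candidate against a full geometric model, reducing the computation needed per frame.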