PERCEPTIVE PIXEL INC. Patent Applications
Patent application number | Title | Published |
20160132137 | SYSTEMS FOR AN ELECTROSTATIC STYLUS WITHIN A CAPACITIVE TOUCH SENSOR - Systems are described for transmitting and receiving signals in an active stylus for a capacitive touch sensor, for which the systems have at least one circuit for receiving a current from an electrode and for transmitting a voltage onto the electrode. The systems include components for receiving the current in a receiving mode, a switch, and a switchmode power supply circuit having at least a transformer and a diode, for which the diode is coupled to the transformer. In a transmission mode, there is a means for electrically isolating at least some of the components configured for receiving the current in the receiving mode from a voltage formed across stray capacitance. In the receiving mode, there is a means for electrically isolating at least some of those components from an inductance of the transformer in the switchmode power supply circuit. | 05-12-2016 |
20140267162 | CAPACITIVE TOUCH SENSOR HAVING CODE-DIVIDED AND TIME-DIVIDED TRANSMIT WAVEFORMS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for digital signal processing (DSP) techniques for generally improving a signal-to-noise ratio (SNR) of capacitive touch sensors. | 09-18-2014 |
20140208248 | PRESSURE-SENSITIVE LAYERING OF DISPLAYED OBJECTS - First and second objects are displayed on a pressure-sensitive touch-screen display device. An intersection is detected between the objects. Contact by one or more input mechanisms is detected in a region that corresponds to the first displayed object. Pressure applied by at least one input mechanism is sensed. The depth of the first displayed object is adjusted as a function of the sensed pressure. The depths of the displayed objects are determined at their detected intersection. The determined depths of the displayed objects are compared. Based on a result of comparing the determined depths, data is stored indicating that one of the displayed objects is overlapping the other. In addition, the displayed objects are displayed such that the overlapping displayed object is displayed closer to a foreground of the pressure-sensitive touch-screen display device than the other displayed object. | 07-24-2014 |
20140168160 | TECHNIQUES FOR DISAMBIGUATING TOUCH DATA USING USER DEVICES - Techniques for disambiguating touch data and determining user assignment of touch points detected by a touch sensor are described. The techniques leverage both user-specific touch data projected onto axes and non-user-specific touch data captured over a complete area. | 06-19-2014 |
20140168128 | 3D MANIPULATION USING APPLIED PRESSURE - Placement by one or more input mechanisms of a touch point on a multi-touch display device that is displaying a three-dimensional object is detected. A two-dimensional location of the touch point on the multi-touch display device is determined, and the touch point is matched with a three-dimensional contact point on a surface of the three-dimensional object that is projected for display onto the image plane of a camera at the two-dimensional location of the touch point. A change in applied pressure at the touch point is detected, and a target depth value for the contact point is determined based on the change in applied pressure. A solver is used to calculate a three-dimensional transformation of the three-dimensional object using an algorithm that reduces a difference between a depth value of the contact point after object transformation and the target depth value. | 06-19-2014 |
20140104320 | Controlling Virtual Objects - Controlling virtual objects displayed on a display device includes controlling display, on a display device, of multiple virtual objects, each of the multiple virtual objects being capable of movement based on a first type of input and being capable of alteration based on a second type of input that is different than the first type of input, the alteration being different from movement. A subset of the multiple virtual objects as candidates for restriction is identified, and based on identifying the subset of virtual objects as candidates for restriction, a responsiveness to the first type of input for the subset of virtual objects is restricted. The first type of input applied to a first virtual object included in the subset of virtual objects and a second virtual object included in the multiple virtual objects is detected, with the second virtual object being excluded from the subset of virtual objects. Based on detecting the first type of input applied to the first virtual object and the second virtual object, movement of the first virtual object is controlled in accordance with the restricted responsiveness to the first type of input, and movement of the second virtual object is controlled without restriction. | 04-17-2014 |
20140104194 | Input Classification for Multi-Touch Systems - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for input classification for multi-touch systems. In one aspect, a method includes receiving data describing a first region of contact with a touch sensitive display and a second region of contact with the touch sensitive display, the second region of contact being separate from the first region of contact. The method includes classifying the first region of contact as a touch point provided by a user's body part. The method includes classifying the second region of contact as incidental touch input provided by a user's resting body part. The method includes determining an area that is outside of the second region of contact and that extends at least a threshold distance from the second region of contact. The method includes determining a location of the touch point associated with the first region of contact. | 04-17-2014 |
20140104193 | Input Classification for Multi-Touch Systems - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for input classification for multi-touch systems. In one aspect, a method includes receiving data describing a first region of contact with a touch sensitive display, a second region of contact with the touch sensitive display, and a third region of contact with the touch sensitive display, the second region of contact being separate from the first region of contact and the third region of contact being separate from the first region of contact and the second region of contact. The method includes classifying the first region of contact as a touch point provided by a user's body part. The method includes classifying the second region of contact as incidental touch input provided by a user's resting body part. The method includes classifying the third region of contact as a stylus input. | 04-17-2014 |
20130307827 | 3D MANIPULATION USING APPLIED PRESSURE - Placement by one or more input mechanisms of a touch point on a multi-touch display device that is displaying a three-dimensional object is detected. A two-dimensional location of the touch point on the multi-touch display device is determined, and the touch point is matched with a three-dimensional contact point on a surface of the three-dimensional object that is projected for display onto the image plane of a camera at the two-dimensional location of the touch point. A change in applied pressure at the touch point is detected, and a target depth value for the contact point is determined based on the change in applied pressure. A solver is used to calculate a three-dimensional transformation of the three-dimensional object using an algorithm that reduces a difference between a depth value of the contact point after object transformation and the target depth value. | 11-21-2013 |
20130300674 | Overscan Display Device and Method of Using the Same - A display device comprises a display panel including a display region configured to display one or more image objects and an overscan region configured to prevent the display of images within the overscan region, a touchscreen panel overlying the display region and the overscan region configured to detect and generate engagement data indicative of detected engagement, and a computer processor configured to access first engagement data, determine that the first engagement data reflects engagement with at least one portion of the touchscreen panel correspondingly overlapping with the overscan region, identify a first particular engagement input type based on the first engagement data, and instruct the display panel to invoke the display of the one or more image objects in the display region or to change the display of the one or more image objects in the display region. | 11-14-2013 |
20130135291 | Volumetric Data Exploration Using Multi-Point Input Controls - A three-dimensional data set is accessed. A two-dimensional plane is defined that intersects a space defined by the three-dimensional data set. The two-dimensional plane defines a two-dimensional data set within the three-dimensional data set and divides the three-dimensional data set into first and second subsets. A three-dimensional view based on the three-dimensional data set is rendered such that at least a portion of the first subset of the three-dimensional data set is removed and at least a portion of the two-dimensional data set is displayed. A two-dimensional view of a first subset of the two-dimensional data set also is rendered. Controls are provided that enable visual navigation through the three-dimensional data set by engaging points on the multi-touch display device that correspond to either the three-dimensional view based on the three-dimensional data set or the two-dimensional view of the first subset of the two-dimensional data set. | 05-30-2013 |
20130135290 | Volumetric Data Exploration Using Multi-Point Input Controls - A three-dimensional data set is accessed. A two-dimensional plane is defined that intersects a space defined by the three-dimensional data set. The two-dimensional plane defines a two-dimensional data set within the three-dimensional data set and divides the three-dimensional data set into first and second subsets. A three-dimensional view based on the three-dimensional data set is rendered such that at least a portion of the first subset of the three-dimensional data set is removed and at least a portion of the two-dimensional data set is displayed. A two-dimensional view of a first subset of the two-dimensional data set also is rendered. Controls are provided that enable visual navigation through the three-dimensional data set by engaging points on the multi-touch display device that correspond to either the three-dimensional view based on the three-dimensional data set or the two-dimensional view of the first subset of the two-dimensional data set. | 05-30-2013 |
20130127833 | Volumetric Data Exploration Using Multi-Point Input Controls - A three-dimensional data set is accessed. A two-dimensional plane is defined that intersects a space defined by the three-dimensional data set. The two-dimensional plane defines a two-dimensional data set within the three-dimensional data set and divides the three-dimensional data set into first and second subsets. A three-dimensional view based on the three-dimensional data set is rendered such that at least a portion of the first subset of the three-dimensional data set is removed and at least a portion of the two-dimensional data set is displayed. A two-dimensional view of a first subset of the two-dimensional data set also is rendered. Controls are provided that enable visual navigation through the three-dimensional data set by engaging points on the multi-touch display device that correspond to either the three-dimensional view based on the three-dimensional data set or the two-dimensional view of the first subset of the two-dimensional data set. | 05-23-2013 |
20130093792 | Organizational Tools on a Multi-touch Display Device - A process for enabling objects displayed on a multi-input display device to be grouped together is disclosed that includes defining a target element that enables objects displayed on a multi-input display device to be grouped together through interaction with the target element. Operations are invoked that establish a relationship between a particular displayed object and a position on the target element and that cause transformations applied to the target element also to be applied to the particular displayed object while maintaining the relationship between the particular displayed object and the position on the target element. | 04-18-2013 |
20130093756 | Volumetric Data Exploration Using Multi-Point Input Controls - A three-dimensional data set is accessed. A two-dimensional plane is defined that intersects a space defined by the three-dimensional data set. The two-dimensional plane defines a two-dimensional data set within the three-dimensional data set and divides the three-dimensional data set into first and second subsets. A three-dimensional view based on the three-dimensional data set is rendered such that at least a portion of the first subset of the three-dimensional data set is removed and at least a portion of the two-dimensional data set is displayed. A two-dimensional view of a first subset of the two-dimensional data set also is rendered. Controls are provided that enable visual navigation through the three-dimensional data set by engaging points on the multi-touch display device that correspond to either the three-dimensional view based on the three-dimensional data set or the two-dimensional view of the first subset of the two-dimensional data set. | 04-18-2013 |
20130093695 | Organizational Tools on a Multi-touch Display Device - A process for enabling objects displayed on a multi-input display device to be grouped together is disclosed that includes defining a target element that enables objects displayed on a multi-input display device to be grouped together through interaction with the target element. Operations are invoked that establish a relationship between a particular displayed object and a position on the target element and that cause transformations applied to the target element also to be applied to the particular displayed object while maintaining the relationship between the particular displayed object and the position on the target element. | 04-18-2013 |
20130093694 | Organizational Tools on a Multi-touch Display Device - A process for enabling objects displayed on a multi-input display device to be grouped together is disclosed that includes defining a target element that enables objects displayed on a multi-input display device to be grouped together through interaction with the target element. Operations are invoked that establish a relationship between a particular displayed object and a position on the target element and that cause transformations applied to the target element also to be applied to the particular displayed object while maintaining the relationship between the particular displayed object and the position on the target element. | 04-18-2013 |
20130093693 | Organizational Tools on a Multi-touch Display Device - A process for enabling objects displayed on a multi-input display device to be grouped together is disclosed that includes defining a target element that enables objects displayed on a multi-input display device to be grouped together through interaction with the target element. Operations are invoked that establish a relationship between a particular displayed object and a position on the target element and that cause transformations applied to the target element also to be applied to the particular displayed object while maintaining the relationship between the particular displayed object and the position on the target element. | 04-18-2013 |
20130069991 | ORGANIZATIONAL TOOLS ON A MULTI-TOUCH DISPLAY DEVICE - A process for enabling objects displayed on a multi-input display device to be grouped together is disclosed that includes defining a target element that enables objects displayed on a multi-input display device to be grouped together through interaction with the target element. Operations are invoked that establish a relationship between a particular displayed object and a position on the target element and that cause transformations applied to the target element also to be applied to the particular displayed object while maintaining the relationship between the particular displayed object and the position on the target element. | 03-21-2013 |
20130069885 | Organizational Tools on a Multi-touch Display Device - A process for enabling objects displayed on a multi-input display device to be grouped together is disclosed that includes defining a target element that enables objects displayed on a multi-input display device to be grouped together through interaction with the target element. Operations are invoked that establish a relationship between a particular displayed object and a position on the target element and that cause transformations applied to the target element also to be applied to the particular displayed object while maintaining the relationship between the particular displayed object and the position on the target element. | 03-21-2013 |
20130069860 | Organizational Tools on a Multi-touch Display Device - A process for enabling objects displayed on a multi-input display device to be grouped together is disclosed that includes defining a target element that enables objects displayed on a multi-input display device to be grouped together through interaction with the target element. Engagement of an input mechanism with either the target element or a particular one of the objects displayed on the multi-input display device is detected. Movement of the input mechanism is monitored while the input mechanism remains engaged with whichever of the target element and the particular displayed object it engaged. A determination is made that at least a portion of a particular displayed object is overlapping at least a portion of a target element on the multi-input display device upon detecting disengagement of the input mechanism. As a consequence of the disengagement and the overlap, processes are invoked that establish a relationship between the particular displayed object and a position on the target element and that cause transformations applied to the target element also to be applied to the particular displayed object while maintaining the relationship between the particular displayed object and the position on the target element. | 03-21-2013 |
20120268427 | Optical Filtered Sensor-In-Pixel Technology for Touch Sensing - Optical filtered sensor-in-pixel technology for touch sensing, in which a waveguide receives infrared light emitted by a light source and causes at least some of the received infrared light to undergo total internal reflection within the waveguide. A frustrating layer is disposed relative to the waveguide so as to contact the waveguide when a touch input is provided. The frustrating layer causes frustration of the total internal reflection of the received infrared light within the waveguide at a contact point between the frustrating layer and the waveguide. A sensor-in-pixel display displays an image that is perceivable through the waveguide and the frustrating layer and includes photosensors, one corresponding to each pixel of the image, that sense at least some of the infrared light that escapes from the waveguide at the contact point. | 10-25-2012 |
20120227012 | Graphical User Interface for Large-Scale, Multi-User, Multi-Touch Systems - A method, implemented on a graphical user interface device, invokes an independent, user-localized menu in an application environment when a predetermined gesture is made with a pointing device on an arbitrary part of a display screen or surface, especially in multi-touch, multi-user environments and in environments where multiple concurrent pointing devices are present. As an example, the user may trace out a closed loop of a specific size that invokes a default system menu at any location on the surface, even when a second user may be operating a different portion of the system elsewhere on the same surface. As an additional aspect of the invention, the method allows the user to smoothly transition between menu invocation and menu control. | 09-06-2012 |
20120223911 | Reduction of Noise in Touch Sensors - Techniques are described for providing a cleaned signal in a capacitive touch sensor. A phase of periodic noise on an input of the capacitive touch sensor is determined, and a periodic excitation signal having a phase that is locked to the determined phase of the periodic noise is generated. The periodic excitation signal is applied to an excited conductor in a first array of the touch sensor. While the excitation signal is applied, a response signal on a responding conductor in a second array of the touch sensor is detected, and, based on the detected response signal, a value indicative of a measured capacitance between the excited conductor and the responding conductor is generated. A threshold value is accessed, and a determination is made whether the response signal corresponds to a touch based on a difference between the value and the threshold value. | 09-06-2012 |
20120200523 | Techniques for Disambiguating Touch Data Using User Devices - Techniques for disambiguating touch data and determining user assignment of touch points detected by a touch sensor are described. The techniques leverage both user-specific touch data projected onto axes and non-user-specific touch data captured over a complete area. | 08-09-2012 |
20120200522 | Techniques for Disambiguating Touch Data - Techniques for disambiguating touch data and determining user assignment of touch points detected by a touch sensor are described. The techniques leverage both user-specific touch data projected onto axes and non-user-specific touch data captured over a complete area. | 08-09-2012 |
20120182266 | MULTI-TOUCH SENSING THROUGH FRUSTRATED TOTAL INTERNAL REFLECTION - High-resolution, scalable multi-touch sensing display systems and processes based on frustrated total internal reflection employ an optical waveguide that receives light, such as infrared light, which undergoes total internal reflection, and an imaging sensor that detects the light that escapes the optical waveguide when the total internal reflection is frustrated by user contact. The optical waveguide, when fitted with a compliant surface overlay, provides superior sensing performance, as well as other benefits and features. The systems and processes described provide true multi-touch (multi-input) capability and high spatial and temporal resolution due to continuous imaging of the frustrated total internal reflection light that escapes the entire optical waveguide. Among other features and benefits, the systems and processes are scalable to large installations. | 07-19-2012 |
20120146934 | Event Registration and Dispatch System and Method for Multi-Point Controls - Dynamic registration of event handlers in a computer application or operating system recognizes multiple synchronous input streams by identifying each new stroke in a frame representing a single moment in time and mapping in a registration process each identified new stroke to a listening process that is associated with the user interface element to which the new input stream is to be applied. In the same frame, released strokes are unmapped and then each active listening process is called to carry out a respective control operation. When called, the strokes have the correct data for the given frame. The process is repeated for subsequent frames. By carrying out various processes in a sequence of frames, the concept of concurrency is preserved, which is particularly beneficial to multi-touch and multi-user systems. | 06-14-2012 |
20120146933 | Event Registration and Dispatch System and Method for Multi-Point Controls - Dynamic registration of event handlers in a computer application or operating system recognizes multiple synchronous input streams by identifying each new stroke in a frame representing a single moment in time and mapping in a registration process each identified new stroke to a listening process that is associated with the user interface element to which the new input stream is to be applied. In the same frame, released strokes are unmapped and then each active listening process is called to carry out a respective control operation. When called, the strokes have the correct data for the given frame. The process is repeated for subsequent frames. By carrying out various processes in a sequence of frames, the concept of concurrency is preserved, which is particularly beneficial to multi-touch and multi-user systems. | 06-14-2012 |
20120146932 | Event Registration and Dispatch System and Method for Multi-Point Controls - Dynamic registration of event handlers in a computer application or operating system recognizes multiple synchronous input streams by identifying each new stroke in a frame representing a single moment in time and mapping in a registration process each identified new stroke to a listening process that is associated with the user interface element to which the new input stream is to be applied. In the same frame, released strokes are unmapped and then each active listening process is called to carry out a respective control operation. When called, the strokes have the correct data for the given frame. The process is repeated for subsequent frames. By carrying out various processes in a sequence of frames, the concept of concurrency is preserved, which is particularly beneficial to multi-touch and multi-user systems. | 06-14-2012 |
20120110431 | Touch-Based Annotation System with Temporary Modes - One embodiment provides a system for processing gesture inputs on a touch screen display. The system receives a gesture input on the touch screen display. When the gesture is recognized as invoking an annotation canvas, the system determines the height, width and location of an annotation canvas, and displays the annotation canvas on the touch screen display. Then, in response to an input gesture within the annotation canvas, the system recognizes the gesture as an annotation gesture, and executes the annotation gesture. In response to receiving an input gesture outside of the annotation canvas, the gesture is interpreted by the system as a navigation input. | 05-03-2012 |
20120038583 | FORCE AND TRUE CAPACITIVE TOUCH MEASUREMENT TECHNIQUES FOR CAPACITIVE TOUCH SENSORS - Methods, systems, and apparatus relate to touch sensors that are configured to measure a true capacitive touch and a force applied to the sensor from a user. Some implementations involve the measurement of force and true capacitive touch simultaneously in a touch capacitive sensor. | 02-16-2012 |
20120013565 | Techniques for Locally Improving Signal to Noise in a Capacitive Touch Sensor - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for digital signal processing (DSP) techniques for generally improving a signal-to-noise ratio (SNR) of capacitive touch sensors. | 01-19-2012 |
20120013564 | Capacitive Touch Sensor Having Correlation with a Receiver - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for digital signal processing (DSP) techniques for generally improving a signal-to-noise ratio (SNR) of capacitive touch sensors. | 01-19-2012 |
20120013546 | Capacitive Touch Sensor Having Code-Divided and Time-Divided Transmit Waveforms - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for digital signal processing (DSP) techniques for generally improving a signal-to-noise ratio (SNR) of capacitive touch sensors. | 01-19-2012 |
20110096025 | Projected Capacitive Touch Sensing - Methods, systems, and apparatus relate to touch sensors that are configured to measure input applied to the sensor from a user. Some implementations involve the measurement of changes in capacitance between pairs of adjacent patterned electrodes to detect input at a touch sensor. | 04-28-2011 |
20100302196 | Touch Sensing - A touch-screen device includes an imaging waveguide that has a radiation receiving surface and an imaging surface different from the radiation receiving surface. The imaging waveguide is configured to receive, at the radiation receiving surface, radiation emitted by a contact receiving structure, and to transmit the received radiation from a position on the radiation receiving surface to a position and/or angle of incidence on the imaging surface as a function of the position on the radiation receiving surface at which the transmitted radiation was received. An imaging sensor coupled to the imaging surface of the imaging waveguide is configured to detect radiation incident upon that surface. A processing device coupled to the imaging sensor is configured to determine contact points on an exposed contact surface of the contact receiving structure based on radiation detected by the imaging sensor. | 12-02-2010 |
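Several of the abstracts above describe signal-processing steps concretely enough to illustrate. For the code-divided transmit waveforms of applications 20140267162 and 20120013546, one standard realization (an assumption here, not a claim about the filed implementations) drives all transmit rows of the sensor simultaneously with mutually orthogonal codes and recovers each row's per-column coupling by correlation at the receiver; all names and the noiseless linear model below are illustrative:

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Walsh-Hadamard matrix of order n (a power of two), entries +/-1."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def decode_column(received: np.ndarray, codes: np.ndarray) -> np.ndarray:
    """Correlate the received chip sequence against each row's code."""
    # codes is (rows, chips); orthogonality undoes the simultaneous drive.
    return codes @ received / codes.shape[1]

rows = 4
codes = hadamard(rows)                     # one +/-1 code per transmit row
coupling = np.array([1.0, 0.8, 0.2, 0.5])  # made-up per-row capacitive coupling
received = codes.T @ coupling              # all rows driven at once, summed at one column
print(decode_column(received, codes))      # recovers the coupling vector
```

Driving all rows at once is what buys the SNR improvement the abstracts mention: each row is excited for the full frame rather than one time slot, while the orthogonal codes keep the rows separable.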
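The 3D-manipulation abstracts (20140168128 and 20130307827) describe a solver that reduces the difference between a contact point's post-transformation depth and a pressure-derived target depth. The sketch below is a deliberately simplified stand-in, not the filed algorithm: it restricts the transformation to a single translation along the view axis, and the function name, step size, and iteration count are all assumptions:

```python
def solve_depth_translation(contact_depth: float, target_depth: float,
                            step: float = 0.5, iters: int = 50) -> float:
    """Return a view-axis translation that moves contact_depth toward target_depth."""
    tz = 0.0
    for _ in range(iters):
        error = (contact_depth + tz) - target_depth
        tz -= step * error  # gradient step on 0.5 * error**2
    return tz

# Pressing harder maps to a deeper target; the solver pushes the object away.
print(solve_depth_translation(contact_depth=1.0, target_depth=3.0))  # converges to ~2.0
```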
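Application 20120223911 hinges on determining the phase of periodic noise on the sensor input so that the excitation signal can be locked to it. One conventional way to estimate that phase (an illustrative assumption, not necessarily the filed method) is quadrature correlation against reference sinusoids; the sample rate, interferer frequency, and names below are made up:

```python
import numpy as np

def estimate_phase(samples: np.ndarray, freq: float, fs: float) -> float:
    """Estimate the phase of a sinusoidal interferer via quadrature correlation."""
    t = np.arange(len(samples)) / fs
    i = np.sum(samples * np.cos(2 * np.pi * freq * t))
    q = np.sum(samples * np.sin(2 * np.pi * freq * t))
    return np.arctan2(-q, i)  # phase phi of cos(2*pi*freq*t + phi)

fs, f_noise, phi = 100_000.0, 50.0, 0.7  # mains-like interferer (illustrative)
t = np.arange(4000) / fs                 # window spans whole noise periods
noise = np.cos(2 * np.pi * f_noise * t + phi)
print(estimate_phase(noise, f_noise, fs))  # close to 0.7
```

With the phase in hand, the excitation and its demodulation windows can be timed so the periodic noise contributes consistently and cancels against the threshold comparison the abstract describes.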
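For the frustrated-total-internal-reflection sensing of application 20120182266, touches appear in the imaging sensor as bright blobs where light escapes the waveguide. A minimal sketch of the detection step (threshold, connected-component labeling, centroid) follows; the threshold, synthetic image, and function name are illustrative assumptions, not the filed processing pipeline:

```python
import numpy as np

def find_touches(image: np.ndarray, threshold: float = 0.5) -> list:
    """Return centroids of bright regions via thresholding and flood-fill labeling."""
    mask = image > threshold
    seen = np.zeros_like(mask, dtype=bool)
    touches = []
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        stack, pixels = [(y, x)], []
        seen[y, x] = True
        while stack:  # 4-connected flood fill from this seed pixel
            cy, cx = stack.pop()
            pixels.append((cy, cx))
            for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        ys, xs = zip(*pixels)
        touches.append((float(np.mean(ys)), float(np.mean(xs))))
    return touches

img = np.zeros((8, 8))
img[2:4, 2:4] = 1.0       # one synthetic touch blob
print(find_touches(img))  # one centroid near (2.5, 2.5)
```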
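The event registration and dispatch abstracts (20120146934, 20120146933, 20120146932) describe a per-frame cycle: map new strokes to listening processes, unmap released strokes, then call each active listener with that frame's data. The sketch below mirrors that cycle; the class and field names are assumptions chosen for clarity, not taken from the applications:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    new_strokes: dict  # stroke_id -> listener for the UI element the stroke hit
    released: list     # stroke ids lifted during this frame
    positions: dict    # stroke_id -> (x, y) at this single moment in time

class Dispatcher:
    def __init__(self):
        self.listeners = {}  # stroke_id -> listening process (a callable here)

    def process(self, frame: Frame) -> None:
        # 1. Map each new stroke to the listener of the element it targets.
        for sid, listener in frame.new_strokes.items():
            self.listeners[sid] = listener
        # 2. Unmap strokes released in this same frame.
        for sid in frame.released:
            self.listeners.pop(sid, None)
        # 3. Call each active listener with the frame's stroke data.
        for sid, listener in self.listeners.items():
            listener(sid, frame.positions.get(sid))

log = []
d = Dispatcher()
d.process(Frame({1: lambda s, p: log.append((s, p))}, [], {1: (10, 20)}))
d.process(Frame({}, [1], {}))  # stroke 1 lifted: listener unmapped, no further calls
print(log)                     # [(1, (10, 20))]
```

Handling registration, unregistration, and dispatch within one frame is what preserves the concurrency the abstracts emphasize: every listener sees a consistent snapshot of all simultaneous strokes.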