Patent application number | Description | Published |
20130182077 | ENHANCED CONTRAST FOR OBJECT DETECTION AND CHARACTERIZATION BY OPTICAL IMAGING - Enhanced contrast between an object of interest and background surfaces visible in an image is provided using controlled lighting directed at the object. Exploiting the falloff of light intensity with distance, a light source (or multiple light sources), such as an infrared light source, can be positioned near one or more cameras to shine light onto the object while the camera(s) capture images. The captured images can be analyzed to distinguish object pixels from background pixels. | 07-18-2013 |
20130182079 | MOTION CAPTURE USING CROSS-SECTIONS OF AN OBJECT - An object's position and/or motion in three-dimensional space can be captured. For example, a silhouette of an object as seen from a vantage point can be used to define tangent lines to the object in various planes (“slices”). From the tangent lines, the cross section of the object is approximated using a simple closed curve (e.g., an ellipse). Alternatively, locations of points on an object's surface in a particular slice can also be determined directly, and the object's cross-section in the slice can be approximated by fitting a simple closed curve to the points. Positions and cross sections determined for different slices can be correlated to construct a 3D model of the object, including its position and shape. A succession of images can be analyzed to capture motion of the object. | 07-18-2013 |
20130182897 | SYSTEMS AND METHODS FOR CAPTURING MOTION IN THREE-DIMENSIONAL SPACE - Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on reflections therefrom or shadows cast thereby. | 07-18-2013 |
20130182902 | SYSTEMS AND METHODS FOR CAPTURING MOTION IN THREE-DIMENSIONAL SPACE - Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on edge points thereof. | 07-18-2013 |
20140028861 | OBJECT DETECTION AND TRACKING - Imaging systems and methods improve object recognition by more strongly enhancing contrast between the object and non-object (e.g., background) surfaces than would be possible with a simple optical filter tuned to the wavelength(s) of the source light(s). In some embodiments, the overall scene illuminated by ambient light is preserved (or may be reconstructed) for presentation purposes—e.g., combined with a graphical overlay of the sensed object(s) in motion. | 01-30-2014 |
20140125775 | THREE-DIMENSIONAL IMAGE SENSORS - An image sensor frame rate can be increased by “interlaced” mode operation whereby only half the number of lines (alternating between odd and even lines) of an image is transported to the readout circuitry. This halves the integration time but also halves the resolution of the sensor. The reduction is tolerable for motion characterization as long as sufficient image resolution remains. Accordingly, in one embodiment, an image sensor operated in an interlaced fashion is first exposed to a scene under a first form of illumination (e.g., narrowband illumination), and a first set of alternating (horizontal or vertical) lines constituting half of the pixels is read out of the array; the sensor is then exposed to the same scene under a second form of illumination (e.g., existing ambient illumination with the illumination source turned off), and a second set of alternating lines, representing the other half of the pixel array, is read out. The two images are compared and noise removed from the image obtained under narrowband illumination. As this occurs, the image sensor is capturing the next image under the first form of illumination, and the process continues. | 05-08-2014 |
20140125813 | OBJECT DETECTION AND TRACKING WITH VARIABLE-FIELD ILLUMINATION DEVICES - Imaging systems and methods optimize illumination of objects for purposes of detection, recognition and/or tracking by tailoring the illumination to the position of the object within the detection space. For example, feedback from a tracking system may be used to control and aim the lighting elements so that the illumination can be reduced or increased depending on the need. | 05-08-2014 |
20140125815 | OBJECT DETECTION AND TRACKING WITH REDUCED ERROR DUE TO BACKGROUND ILLUMINATION - An image sensor frame rate can be increased by “interlaced” mode operation whereby only half the number of lines (alternating between odd and even lines) of an image is transported to the readout circuitry. This halves the integration time but also halves the resolution of the sensor. The reduction is tolerable for motion characterization as long as sufficient image resolution remains. Accordingly, in one embodiment, an image sensor operated in an interlaced fashion is first exposed to a scene under a first form of illumination (e.g., narrowband illumination), and a first set of alternating (horizontal or vertical) lines constituting half of the pixels is read out of the array; the sensor is then exposed to the same scene under a second form of illumination (e.g., existing ambient illumination with the illumination source turned off), and a second set of alternating lines, representing the other half of the pixel array, is read out. The two images are compared and noise removed from the image obtained under narrowband illumination. As this occurs, the image sensor is capturing the next image under the first form of illumination, and the process continues. | 05-08-2014 |
20140139641 | SYSTEMS AND METHODS FOR CAPTURING MOTION IN THREE-DIMENSIONAL SPACE - Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on edge points thereof. | 05-22-2014 |
20140177913 | ENHANCED CONTRAST FOR OBJECT DETECTION AND CHARACTERIZATION BY OPTICAL IMAGING - Enhanced contrast between an object of interest and background surfaces visible in an image is provided using controlled lighting directed at the object. Exploiting the falloff of light intensity with distance, a light source (or multiple light sources), such as an infrared light source, can be positioned near one or more cameras to shine light onto the object while the camera(s) capture images. The captured images can be analyzed to distinguish object pixels from background pixels. | 06-26-2014 |
20140192024 | OBJECT DETECTION AND TRACKING WITH AUDIO AND OPTICAL SIGNALS - The technology disclosed supplements optical gesture recognition with the ability to recognize touch gestures, allowing the user to execute intuitive gestures involving contact with a surface. For example, in low-light situations where free-form gestures cannot be recognized optically with a sufficient degree of reliability, a device implementing the invention may switch to a touch mode in which touch gestures are recognized. In implementations, two contact microphones or other vibratory or acoustical sensors, coupled to an optical motion-capture system and in contact with a surface that a user touches, are monitored. When the contact microphones detect audio signals (or other vibrational phenomena) generated by contact of an object with the surface, a position of the object traveling across and in contact with the surface is tracked. | 07-10-2014 |
20140192206 | POWER CONSUMPTION IN MOTION-CAPTURE SYSTEMS - The technology disclosed relates to reducing the overall power consumption of a motion-capture system without compromising the quality of motion capture and tracking. In general, this is accomplished by operating the motion-detecting cameras and associated image-processing hardware in a low-power mode unless and until a moving object is detected. Once an object of interest has been detected in the field of view of the cameras, the motion-capture system is “woken up,” i.e., switched into a high-power mode, in which it acquires and processes images at a frame rate sufficient for accurate motion tracking. | 07-10-2014 |
20140192259 | POWER CONSUMPTION IN MOTION-CAPTURE SYSTEMS WITH AUDIO AND OPTICAL SIGNALS - The technology disclosed provides systems and methods for reducing the overall power consumption of an optical motion-capture system without compromising the quality of motion capture and tracking. In implementations, this is accomplished by operating the motion-detecting cameras and associated image-processing hardware in a low-power mode (e.g., at a low frame rate or in a standby or sleep mode) unless and until touch gestures of an object such as a tap, sequence of taps, or swiping motions are performed with a surface proximate to the cameras. A contact microphone or other appropriate sensor is used for detecting audio signals or other vibrations generated by contact of the object with the surface. | 07-10-2014 |
20140201666 | DYNAMIC, FREE-SPACE USER INTERACTIONS FOR MACHINE CONTROL - Embodiments of display control based on dynamic user interactions generally include capturing a plurality of temporally sequential images of the user, or a body part or other control object manipulated by the user, and computationally analyzing the images to recognize a gesture performed by the user. In some embodiments, a scale indicative of an actual gesture distance traversed in performance of the gesture is identified, and a movement or action is displayed on the device based, at least in part, on a ratio between the identified scale and the scale of the displayed movement. In some embodiments, a degree of completion of the recognized gesture is determined, and the display contents are modified in accordance therewith. In some embodiments, a dominant gesture is computationally determined from among a plurality of user gestures, and an action displayed on the device is based on the dominant gesture. | 07-17-2014 |
20140201674 | DYNAMIC USER INTERACTIONS FOR DISPLAY CONTROL AND IDENTIFYING DOMINANT GESTURES - The technology disclosed relates to distinguishing meaningful gestures from proximate non-meaningful gestures in a three-dimensional (3D) sensory space. | 07-17-2014 |
20140201683 | DYNAMIC USER INTERACTIONS FOR DISPLAY CONTROL AND MEASURING DEGREE OF COMPLETENESS OF USER GESTURES - The technology disclosed relates to distinguishing meaningful gestures from proximate non-meaningful gestures in a three-dimensional (3D) sensory space. In particular, it relates to calculating spatial trajectories of different gestures and determining a dominant gesture based on magnitudes of the spatial trajectories. The technology disclosed also relates to uniformly responding to gestural inputs from a user irrespective of a position of the user. In particular, it relates to automatically adapting a responsiveness scale between gestures in a physical space and resulting responses in a gestural interface by automatically proportioning on-screen responsiveness to scaled movement distances of gestures in the physical space, user spacing within the 3D sensory space, or virtual object density in the gestural interface. The technology disclosed further relates to detecting if a user has intended to interact with a virtual object based on measuring a degree of completion of gestures and creating interface elements in the 3D space. | 07-17-2014 |
20140201684 | DYNAMIC USER INTERACTIONS FOR DISPLAY CONTROL AND MANIPULATION OF DISPLAY OBJECTS - The technology disclosed relates to distinguishing meaningful gestures from proximate non-meaningful gestures in a three-dimensional (3D) sensory space. In particular, it relates to calculating spatial trajectories of different gestures and determining a dominant gesture based on magnitudes of the spatial trajectories. The technology disclosed also relates to uniformly responding to gestural inputs from a user irrespective of a position of the user. In particular, it relates to automatically adapting a responsiveness scale between gestures in a physical space and resulting responses in a gestural interface by automatically proportioning on-screen responsiveness to scaled movement distances of gestures in the physical space, user spacing within the 3D sensory space, or virtual object density in the gestural interface. The technology disclosed further relates to detecting if a user has intended to interact with a virtual object based on measuring a degree of completion of gestures and creating interface elements in the 3D space. | 07-17-2014 |
20140201689 | FREE-SPACE USER INTERFACE AND CONTROL USING VIRTUAL CONSTRUCTS - During control of a user interface via free-space motions of a hand or other suitable control object, switching between control modes may be facilitated by tracking the control object's movements relative to, and its penetration of, a virtual control construct (such as a virtual surface construct). The position of the virtual control construct may be updated, continuously or from time to time, based on the control object's location. | 07-17-2014 |
20140201690 | DYNAMIC USER INTERACTIONS FOR DISPLAY CONTROL AND SCALING RESPONSIVENESS OF DISPLAY OBJECTS - The technology disclosed relates to distinguishing meaningful gestures from proximate non-meaningful gestures in a three-dimensional (3D) sensory space. In particular, it relates to calculating spatial trajectories of different gestures and determining a dominant gesture based on magnitudes of the spatial trajectories. The technology disclosed also relates to uniformly responding to gestural inputs from a user irrespective of a position of the user. In particular, it relates to automatically adapting a responsiveness scale between gestures in a physical space and resulting responses in a gestural interface by automatically proportioning on-screen responsiveness to scaled movement distances of gestures in the physical space, user spacing within the 3D sensory space, or virtual object density in the gestural interface. The technology disclosed further relates to detecting if a user has intended to interact with a virtual object based on measuring a degree of completion of gestures and creating interface elements in the 3D space. | 07-17-2014 |
20140205146 | SYSTEMS AND METHODS OF TRACKING OBJECT MOVEMENTS IN THREE-DIMENSIONAL SPACE - The technology disclosed relates to tracking movement of a real world object in three-dimensional (3D) space. In particular, it relates to mapping, to image planes of a camera, projections of observation points on a curved volumetric model of the real world object. The projections are used to calculate a retraction of the observation points at different times during which the real world object has moved. The retraction is then used to determine translational and rotational movement of the real world object between the different times. | 07-24-2014 |
20140210707 | IMAGE CAPTURE SYSTEM AND METHOD - An example of an image capture system includes a support structure and a sensor arrangement, mounted to the support structure, including an image sensor, a lens, and a drive device. The image sensor has a sensor surface with a sensor surface area. The lens forms a focused image generally on the sensor surface. The area of the focused image is larger than the sensor surface area. The drive device is operably coupled to a chosen one of the lens and the image sensor for movement of the chosen one along a path parallel to the focused image. A portion of the viewing area including the object can be imaged onto the sensor surface, and the image sensor can create image data of the object useful for determining information about the object. | 07-31-2014 |
20140240466 | ADJUSTING MOTION CAPTURE BASED ON THE DISTANCE BETWEEN TRACKED OBJECTS - The technology disclosed relates to adjusting the monitored field of view of a camera and/or a view of a virtual scene from a point of view of a virtual camera based on the distance between tracked objects. For example, if the user's hand is being tracked for gestures, the closer the hand gets to another object, the tighter the frame can become—i.e., the more the camera can zoom in so that the hand and the other object occupy most of the frame. The camera can also be reoriented so that the hand and the other object remain in the center of the field of view. The distance between two objects in a camera's field of view can be determined and a parameter of a motion-capture system adjusted based thereon. In particular, the pan and/or zoom levels of the camera may be adjusted in accordance with the distance. | 08-28-2014 |
20140245200 | DISPLAY CONTROL WITH GESTURE-SELECTABLE CONTROL PARADIGMS - Systems and methods for dynamically displaying content in accordance with a user's manipulation of an object involve interpreting and/or displaying gestures in accordance with a control paradigm specific to the object. For example, a detected object may be compared with records in an object database, where each record in the database includes a reference object and specifies a gesture-based control paradigm specific to the reference object. The gesture-based control paradigm relates gestures performed with the reference object to contents displayable on a display, and as the user manipulates the object, the display contents change in a manner consistent with the control paradigm. | 08-28-2014 |
20140253691 | MOTION-CAPTURE APPARATUS WITH LIGHT-SOURCE FORM FACTOR - A system which identifies the position and shape of an object in 3D space includes a housing having a base portion and a body portion, the base portion including electrical contacts mating with a lighting receptacle. A camera, an image analyzer and power conditioning circuitry are within the housing. The image analyzer, coupled to the camera for receipt of camera image data, is configured to capture at least one image of the object and to generate object data indicative of the position and shape of the object in 3D space. The power conditioning circuitry converts power from the lighting receptacle to power suitable for the system. The object data can be used to computationally construct a representation of the object. Some examples include a database containing a library of object templates, the image analyzer being configured to match the 3D representation to one of the templates. | 09-11-2014 |
20140267190 | IDENTIFYING AN OBJECT IN A FIELD OF VIEW - The technology disclosed relates to identifying an object in a field of view of a camera. In particular, it relates to identifying a display in the field of view of the camera. This is achieved by monitoring a space, including acquiring a series of image frames of the space using the camera and detecting one or more light sources in the series of image frames. Further, one or more frequencies of periodic intensity or brightness variations, also referred to as the ‘refresh rate’, of light emitted from the light sources are measured. Based on the one or more frequencies of periodic intensity variations of light emitted from the light sources, at least one display that includes the light sources is identified. | 09-18-2014 |
20140267666 | DETERMINING THE RELATIVE LOCATIONS OF MULTIPLE MOTION-TRACKING DEVICES - The technology disclosed relates to coordinating motion-capture of a hand by a network of motion-capture sensors having overlapping fields of view. In particular, it relates to designating a first sensor among three or more motion-capture sensors as having a master frame of reference, observing motion of a hand as it passes through overlapping fields of view of the respective motion-capture sensors, synchronizing capture of images of the hand within the overlapping fields of view by pairs of the motion-capture devices, and using the pairs of the hand images captured by the synchronized motion-capture devices to automatically calibrate the motion-capture sensors to the master frame of reference. | 09-18-2014 |
20140267774 | DETERMINING THE ORIENTATION OF OBJECTS IN SPACE - A method and system determines object orientation using a light source to create a shadow line extending from the light source. A camera captures an image including the shadow line on an object surface. An orientation module determines the surface orientation from the shadow line. In some examples a transparency imperfection in a window through which a camera receives light can be detected and a message sent to a user as to the presence of a light-blocking or light-distorting substance or particle. A system can control illumination while imaging an object in space using a light source mounted to a support structure so a camera captures an image of the illuminated object. Direct illumination of the camera by light from the light source can be prevented such as by blocking the light or using a light-transmissive window adjacent the camera to reject light transmitted directly from the light source. | 09-18-2014 |
20140282282 | DYNAMIC USER INTERACTIONS FOR DISPLAY CONTROL - The technology disclosed relates to using gestures to supplant or augment use of a standard input device coupled to a system. It also relates to controlling a display using gestures. It further relates to controlling a system using more than one input device. In particular, it relates to detecting a standard input device that causes on-screen actions on a display in response to control manipulations performed using the standard input device. Further, a library of analogous gestures is identified, which includes gestures that are analogous to the control manipulations and also cause the on-screen actions responsive to the control manipulations. Thus, when a gesture from the library of analogous gestures is detected, a signal is generated that mimics a standard signal from the standard input device and causes at least one on-screen action. | 09-18-2014 |
20140285428 | RESOURCE-RESPONSIVE MOTION CAPTURE - The technology disclosed relates to operating a motion-capture system responsive to available computational resources. In particular, it relates to assessing a level of image acquisition and image-analysis resources available using benchmarking of system components. In response, one or more image acquisition parameters and/or image-analysis parameters are adjusted. Acquisition and/or analysis of image data are then made compliant with the adjusted image acquisition parameters and/or image-analysis parameters. In some implementations, image acquisition parameters include frame resolution and frame capture rate and image-analysis parameters include analysis algorithm and analysis density. | 09-25-2014 |
20140285818 | DETERMINING POSITIONAL INFORMATION OF AN OBJECT IN SPACE - The technology disclosed relates to determining positional information of an object in a field of view. In particular, it relates to calculating a distance of the object from a reference such as a sensor including scanning the field of view by selectively illuminating directionally oriented light sources and measuring one or more differences in property of returning light emitted from the light sources and reflected from the object. The property can be intensity or phase difference of the light. It also relates to finding an object in a region of space. In particular, it relates to scanning the region of space with directionally controllable illumination, determining a difference in a property of the illumination received for two or more points in the scanning, and determining positional information of the object based in part upon the points in the scanning corresponding to the difference in the property. | 09-25-2014 |
20140304665 | CUSTOMIZED GESTURE INTERPRETATION - The technology disclosed relates to filtering gestures, according to one implementation. In particular, it relates to distinguishing interesting gestures from non-interesting gestures in a three-dimensional (3D) sensory space by comparing characteristics of user-defined reference gestures against characteristics of actual gestures performed in the 3D sensory space. Based on the comparison, a set of gestures of interest is filtered from all the gestures performed in the 3D sensory space. | 10-09-2014 |
20140307920 | SYSTEMS AND METHODS FOR TRACKING OCCLUDED OBJECTS IN THREE-DIMENSIONAL SPACE - Methods and systems for the tracking of one or more occluded objects in 3D space include creating an approximation of an object while it is occluded. | 10-16-2014 |
20140340311 | CURSOR MODE SWITCHING - Methods and systems are provided for processing input from an image-capture device for gesture recognition. The method includes computationally interpreting user gestures in accordance with a first mode of operation; analyzing the path of movement of an object to determine an intent of the user to change modes of operation; and, upon determining such an intent, subsequently interpreting user gestures in accordance with a second mode of operation. | 11-20-2014 |
20140340524 | SYSTEMS AND METHODS FOR PROVIDING NORMALIZED PARAMETERS OF MOTIONS OF OBJECTS IN THREE-DIMENSIONAL SPACE - Systems and methods are disclosed for detecting user gestures using detection zones to save computational time and cost and/or to provide normalized position-based parameters, such as position coordinates or movement vectors. The detection zones may be established explicitly by a user or a computer application, or may instead be determined from the user's pattern of gestural activity. The detection zones may have three-dimensional (3D) boundaries or may be two-dimensional (2D) frames. The size and location of the detection zone may be adjusted based on the distance and direction between the user and the motion-capture system. | 11-20-2014 |
20140344731 | DYNAMIC INTERACTIVE OBJECTS - Aspects of the systems and methods are described providing for modifying a presented interactive element or object, such as a cursor, based on user-input gestures, the presented environment of the cursor, or any combination thereof. The color, size, shape, transparency, and/or responsiveness of the cursor may change based on the gesture velocity, acceleration, or path. In one implementation, the cursor “stretches” to graphically indicate the velocity and/or acceleration of the gesture. The display properties of the cursor may also change if, for example, the area of the screen occupied by the cursor is dark, bright, textured, or is otherwise complicated. In another implementation, the cursor is drawn using sub-pixel smoothing to improve its visual quality. | 11-20-2014 |
20140369558 | SYSTEMS AND METHODS FOR MACHINE CONTROL - A region of space may be monitored for the presence or absence of one or more control objects, and object attributes and changes thereto may be interpreted as control information provided as input to a machine or application. In some embodiments, the region is monitored using a combination of scanning and image-based sensing. | 12-18-2014 |
20140376773 | TUNABLE OPERATIONAL PARAMETERS IN MOTION-CAPTURE AND TOUCHLESS INTERFACE OPERATION - The technology disclosed can provide for improved motion capture and touchless interface operations by enabling tunable control of operational parameters without compromising the quality of image-based recognition, tracking of conformation and/or motion, and/or characterization of objects (including objects having one or more articulating members, i.e., humans and/or animals and/or machines). Examples of tunable operational parameters include frame rate, field of view, contrast detection, light source intensity, pulse rate, and/or clock rate. Among other aspects, operational parameters can be changed based upon detecting presence and/or motion of an object indicating input (e.g., control information, input data, etc.) to the touchless interface, either alone or in conjunction with presence (or absence or degree) of one or more condition(s) such as accuracy conditions, resource conditions, application conditions, others, and/or combinations thereof. | 12-25-2014 |
20150022447 | NON-LINEAR MOTION CAPTURE USING FRENET-SERRET FRAMES - Implementations of the technology disclosed convert captured motion from Cartesian/(x,y,z) space to Frenet-Serret frame space, apply one or more filters to the motion in Frenet-Serret space, and output data (for display or control) in a desired coordinate space—e.g., in a Cartesian/(x,y,z) reference frame. The output data can better represent a user's actual motion or intended motion. | 01-22-2015 |
20150029092 | SYSTEMS AND METHODS OF INTERPRETING COMPLEX GESTURES - The technology disclosed relates to using a curvilinear gestural path of a control object as a gesture-based input command for a motion-sensing system. In particular, the curvilinear gestural path can be broken down into curve segments, and each curve segment can be mapped to a recorded gesture primitive. Further, certain sequences of gesture primitives can be used to identify the original curvilinear gesture. | 01-29-2015 |
20150253428 | DETERMINING POSITIONAL INFORMATION FOR AN OBJECT IN SPACE - Systems and methods for locating objects within a region of interest involve, in various embodiments, scanning the region with light of temporally variable direction and detecting reflections of objects therein; positional information about the objects can then be inferred from the resulting reflections. | 09-10-2015 |
20150097768 | ENHANCED FIELD OF VIEW TO AUGMENT THREE-DIMENSIONAL (3D) SENSORY SPACE FOR FREE-SPACE GESTURE INTERPRETATION - The technology disclosed relates to enhancing the fields of view of one or more cameras of a gesture recognition system for augmenting the three-dimensional (3D) sensory space of the gesture recognition system. The augmented 3D sensory space allows for inclusion of previously uncaptured regions and points for which gestures can be interpreted, i.e., blind spots of the cameras of the gesture recognition system. Some examples of such blind spots include areas underneath the cameras and/or within 20-85 degrees of a tangential axis of the cameras. In particular, the technology disclosed uses a Fresnel prismatic element and/or a triangular prism element to redirect the optical axis of the cameras, giving the cameras fields of view that cover at least 45 to 80 degrees from tangential to the vertical axis of a display screen on which the cameras are mounted. | 04-09-2015 |
20150103004 | VELOCITY FIELD INTERACTION FOR FREE SPACE GESTURE INTERFACE AND CONTROL - The technology disclosed relates to automatically interpreting a gesture of a control object in a three dimensional sensor space by sensing a movement of the control object in the three dimensional sensor space, sensing orientation of the control object, defining a control plane tangential to a surface of the control object and interpreting the gesture based on whether the movement of the control object is more normal to the control plane or more parallel to the control plane. | 04-16-2015 |
20150119074 | DETERMINING POSITIONAL INFORMATION FOR AN OBJECT IN SPACE - The technology disclosed relates to determining positional information about an object of interest. In particular, it includes conducting scanning of a field of interest with an emission from a transmission area according to an ordered scan pattern. The emission can be received to form a signal based upon at least one salient property (e.g., intensity, amplitude, frequency, polarization, phase, or other detectable feature) of the emission varying with time at the object of interest. Synchronization information about the ordered scan pattern can be derived from a source, a second signal broadcast separately, a social media share, other sources, and/or combinations thereof. A correspondence between at least one characteristic of the signal and the synchronization information can be established. Positional information can be determined based at least in part upon the correspondence. | 04-30-2015 |
20150199025 | OBJECT DETECTION AND TRACKING FOR PROVIDING A VIRTUAL DEVICE EXPERIENCE - The technology disclosed can provide capabilities such as using vibrational sensors and/or other types of sensors coupled to a motion-capture system to monitor contact with a surface that a user can touch. A virtual device can be projected onto at least a portion of the surface. Location information of a user contact with the surface is determined based at least in part upon vibrations produced by the contact. Control information is communicated to a system based at least in part on a combination of the location of the virtual device on the surface portion and the detected location of the user contact. The virtual device experience can be augmented in some implementations by the addition of haptic, audio and/or visual projectors. | 07-16-2015 |
20150227203 | SYSTEMS AND METHODS OF PROVIDING HAPTIC-LIKE FEEDBACK IN THREE-DIMENSIONAL (3D) SENSORY SPACE - The technology disclosed relates to providing haptic-like feedback for an interaction between a control object and a virtual object. In particular, it relates to defining virtual feeler zones of the control object and generating for display a feeler indicator that provides visual feedback over a range of hover proximity of the feeler zones to the virtual object, applied forces on the virtual object, and other material properties of the virtual object. | 08-13-2015 |
20150227210 | SYSTEMS AND METHODS OF DETERMINING INTERACTION INTENT IN THREE-DIMENSIONAL (3D) SENSORY SPACE - The technology disclosed relates to determining intent for an interaction between a control object and a virtual object by calculating a center of effort for the applied forces. Movement of the points of virtual contact and the center of effort are then monitored to determine the gesture type intended for the interaction. The number of points of virtual contact of the feeler zones and the proximities between the points of virtual contact are used to determine a degree of precision of a control-object gesture. | 08-13-2015 |
20150242682 | Systems and Methods of Object Shape and Position Determination in Three-Dimensional (3D) Space - Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on edge points thereof. | 08-27-2015 |
20150243039 | Systems and Methods of Constructing Three-Dimensional (3D) Model of an Object Using Image Cross-Sections - Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on edge points thereof. | 08-27-2015 |
20150287204 | SYSTEMS AND METHODS OF LOCATING A CONTROL OBJECT APPENDAGE IN THREE DIMENSIONAL (3D) SPACE - Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on reflections therefrom or shadows cast thereby. | 10-08-2015 |
20150302576 | Retraction Based Three-Dimensional Tracking of Object Movements - The technology disclosed relates to tracking movement of a real world object in three-dimensional (3D) space. In particular, it relates to mapping, to image planes of a camera, projections of observation points on a curved volumetric model of the real world object. The projections are used to calculate a retraction of the observation points at different times during which the real world object has moved. The retraction is then used to determine translational and rotational movement of the real world object between the different times. | 10-22-2015 |
20150341618 | CALIBRATION OF MULTI-CAMERA DEVICES USING REFLECTIONS THEREOF - The technology disclosed can provide capabilities such as calibrating an imaging device based on images taken by device cameras of reflections of the device itself. Implementations exploit device components that are easily recognizable in the images, such as one or more light-emitting diodes (LEDs) or other light sources. This eliminates the need for specialized calibration hardware; calibration can be accomplished instead with hardware readily available to a user of the device: the device itself and a reflecting surface, such as a computer screen. The user may hold the device near the screen under varying orientations and capture a series of images of the reflection with the device's cameras. These images are analyzed to determine camera parameters based on the known positions of the light sources. If the positions of the light sources are themselves subject to errors requiring calibration, they may be solved for as unknowns in the analysis. | 11-26-2015 |
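The velocity-field gesture interpretation summarized in 20150103004 above turns on a single geometric test: whether the control object's movement is more normal or more parallel to a control plane tangential to the object's surface. A minimal sketch of that test follows; the function name and signature are illustrative assumptions, not taken from the application.

```python
import math

def classify_gesture(movement, plane_normal):
    """Classify a 3D movement vector as 'normal' to the control plane
    (e.g., a push toward a virtual surface) or 'parallel' to it
    (e.g., a swipe across the surface). Hypothetical helper for
    illustration only; not the filing's implementation."""
    mx, my, mz = movement
    nx, ny, nz = plane_normal
    nlen = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / nlen, ny / nlen, nz / nlen     # unit normal
    dot = mx * nx + my * ny + mz * nz                # signed motion along the normal
    # Component of the movement lying within the control plane:
    px, py, pz = mx - dot * nx, my - dot * ny, mz - dot * nz
    parallel = math.sqrt(px * px + py * py + pz * pz)
    return "normal" if abs(dot) > parallel else "parallel"
```

For example, a movement of (0, 0, 1) against a plane normal of (0, 0, 1) classifies as "normal" (a push), while (1, 0, 0.1) classifies as "parallel" (a swipe).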
Patent application number | Description | Published |
20100139881 | APPARATUS WITH AN IMPERMEABLE TRANSFER BELT IN A PAPERMAKING MACHINE, AND ASSOCIATED METHODS - An apparatus for transferring a wet paper web from a press nip to a drying cylinder of a papermaking machine. | 06-10-2010 |
20100326616 | Papermaking Machine Employing An Impermeable Transfer Belt, and Associated Methods - A papermaking machine for making paper includes a forming section, a press section, and a drying section. The paper web is pressed between two press members while enclosed between a press felt and a transfer belt having non-uniformly distributed microscopic depressions in its surface, the web following the transfer belt from the press to a transfer point at which the web is transferred via a suction transfer device onto a structuring fabric, the web then being dried on a drying cylinder. The transfer point is spaced a distance D from the press nip selected based on machine speed, a basis weight of the web, and the surface characteristics of the transfer belt, such that within the distance D a thin water film between the web and the transfer belt at least partially dissipates to allow the web to be separated from the transfer belt. | 12-30-2010 |
20110126998 | Methods employing an impermeable transfer belt in a papermaking machine - A papermaking machine for making paper includes a forming section, a press section, and a drying section. The paper web is pressed between two press members while enclosed between a press felt and a transfer belt having non-uniformly distributed microscopic depressions in its surface, the web following the transfer belt from the press to a transfer point at which the web is transferred via a suction transfer device onto a structuring fabric, the web then being dried on a drying cylinder. The transfer point is spaced a distance D from the press nip selected based on machine speed, a basis weight of the web, and the surface characteristics of the transfer belt, such that within the distance D a thin water film between the web and the transfer belt at least partially dissipates to allow the web to be separated from the transfer belt. | 06-02-2011 |
20120073777 | Methods Employing an Impermeable Transfer Belt in a Papermaking Machine - A papermaking machine for making paper includes a forming section, a press section, and a drying section. The paper web is pressed between two press members while enclosed between a press felt and a transfer belt having non-uniformly distributed microscopic depressions in its surface, the web following the transfer belt from the press to a transfer point at which the web is transferred via a suction transfer device onto a structuring fabric, the web then being dried on a drying cylinder. The transfer point is spaced a distance D from the press nip selected based on machine speed, a basis weight of the web, and the surface characteristics of the transfer belt, such that within the distance D a thin water film between the web and the transfer belt at least partially dissipates to allow the web to be separated from the transfer belt. | 03-29-2012 |
20120103550 | Apparatus With An Impermeable Transfer Belt In A Papermaking Machine, And Associated Methods - An apparatus for transferring a wet paper web from a press nip to a drying cylinder of a papermaking machine, and for structuring the web, includes an impermeable transfer belt that passes through the press nip along with the paper web, and a permeable structuring fabric for transfer of the web onto the drying cylinder, the structuring fabric being arranged in a loop within which a suction transfer device is disposed. A web-contacting surface of the belt has a non-uniform distribution of microscopic-scale depressions, and a suction zone of the transfer device includes a transfer point spaced a distance D from the press nip. The belt is arranged to bring the web into contact with the structuring fabric in the suction zone for a length L, such that suction is exerted on the paper web to transfer the paper web from the belt onto the structuring fabric at the transfer point. | 05-03-2012 |
20130199741 | HIGH BULK TISSUE SHEETS AND PRODUCTS - Spirally wound paper products are disclosed having desirable roll bulk, firmness and softness properties. The rolled products can be made from single ply tissue webs formed according to various processes. | 08-08-2013 |
20140209262 | TISSUE HAVING HIGH STRENGTH AND LOW MODULUS - The present invention provides tissue products having a high degree of stretch and low modulus at relatively high tensile strengths, such as geometric mean tensile strengths greater than about 1500 g/3″ and more preferably greater than about 2000 g/3″. The combination of a tough, yet relatively supple sheet is preferably achieved by subjecting the embryonic web to a speed differential as it is passed from one fabric in the papermaking process to another, commonly referred to as rush transfer. | 07-31-2014 |
20140209265 | TISSUE HAVING HIGH STRENGTH AND LOW MODULUS - The present invention provides tissue products having a high degree of stretch and low modulus at relatively high tensile strengths, such as geometric mean tensile strengths greater than about 1500 g/3″ and more preferably greater than about 2000 g/3″. The combination of a tough, yet relatively supple sheet is preferably achieved by subjecting the embryonic web to a speed differential as it is passed from one fabric in the papermaking process to another, commonly referred to as rush transfer. | 07-31-2014 |
20150101774 | HIGH BULK TISSUE SHEETS AND PRODUCTS - Spirally wound paper products are disclosed having desirable roll bulk, firmness and softness properties. The rolled products can be made from single ply tissue webs formed according to various processes. | 04-16-2015 |
20150240426 | TISSUE HAVING HIGH STRENGTH AND LOW MODULUS - The present invention provides tissue products having a high degree of stretch and low modulus at relatively high tensile strengths, such as geometric mean tensile strengths greater than about 1500 g/3″ and more preferably greater than about 2000 g/3″. The combination of a tough, yet relatively supple sheet is preferably achieved by subjecting the embryonic web to a speed differential as it is passed from one fabric in the papermaking process to another, commonly referred to as rush transfer. | 08-27-2015 |
20150247290 | SMOOTH AND BULKY TISSUE - The present disclosure provides high bulk tissue products, as well as an apparatus and methods for manufacturing the same. The tissue products provided herein not only have high bulk, but they also have improved surface smoothness, particularly compared to tissue products of similar basis weights. | 09-03-2015 |
20150327731 | SMOOTH AND BULKY TISSUE - The present disclosure provides high bulk tissue products, as well as an apparatus and methods for manufacturing the same. The tissue products provided herein not only have high bulk, but they also have improved surface smoothness, particularly compared to tissue products of similar basis weights. | 11-19-2015 |