Turbell, SE
Henrik Turbell, Linköping SE
Patent application number | Description | Published |
---|---|---|
20080204763 | Measuring Apparatus and Method For Range Inspection - The present invention relates to an imaging apparatus and method for measuring the three-dimensional characteristics of an object… | 08-28-2008 |
20100141946 | METHOD AND APPARATUS FOR DETERMINING THE AMOUNT OF SCATTERED LIGHT IN A MACHINE VISION SYSTEM - The present invention relates to a method and an apparatus for determining the amount of light scattered in an object in a machine vision system comprising: a light source illuminating said object with incident light having a limited extension in at least one direction; and an imaging sensor detecting light emanating from said object, wherein said emanated light is reflected light (R) on the surface of said object and light scattered (S) in said object, and said detected light results in at least one intensity distribution curve on said imaging sensor having a peak where said reflected light (R) is detected on said imaging sensor. A width (w) of said at least one intensity distribution curve around said peak is measured, whereby said measured width (w) indicates the amount of light scattered (S) in said object. | 06-10-2010 |
20100310126 | OPTICAL TRIANGULATION - The present invention relates to a method for determining the extension of a trajectory in a space-time volume of measure images. The space-time volume of measure images is generated by a measuring method utilizing a measuring system comprising a first light source and a sensor. The measuring method comprises a step of, in a predetermined operating condition of the measuring system, moving a measure object along a first direction of movement in relation to the measuring system while the first light source illuminates the measure object whereby the sensor generates a measure image of the measure object at each time instant in a set of at least two subsequent time instants, thus generating said space-time volume of measure images wherein a feature point of the measure object maps to a trajectory in the space-time volume. | 12-09-2010 |
20110288806 | CALIBRATION OF A PROFILE MEASURING SYSTEM - A method for calibrating a measuring system, which system comprises a structured light source, optics and a sensor. The light source is adapted to produce a light plane or sheet and the optics is located between the light plane and the sensor. The method is performed in order to obtain a mapping from the sensor to the light plane. In the method the light source is switched on such that the light plane is produced. In order to account for distortions due to the optics, a mapping calibration profile is introduced in the light plane, wherein the mapping calibration profile comprises at least three points forming a straight line. A non-linear mapping from the sensor to the light plane is then computed by using the at least three points. Next, in order to account for perspective distortions, a homography calibration profile is introduced in the light plane, wherein the homography calibration profile comprises at least four points the relative distances between which are predetermined. A homography from the sensor to the light plane based on these four points is then computed. A calibration object for use in such a method is also presented. | 11-24-2011 |
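The calibration application above computes a homography from the sensor to the light plane using at least four points with predetermined relative distances. A minimal sketch of that step using the standard direct linear transform (DLT); the function names and coordinates are illustrative, not taken from the application:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst, where src and dst
    are four (or more) corresponding 2-D points, via the direct linear
    transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector h is the right singular vector of A with the
    # smallest singular value (the null-space direction of A).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix scale so H[2, 2] == 1

def apply_homography(H, pt):
    """Map a 2-D point through H using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

For example, mapping the sensor points (0,0), (1,0), (1,1), (0,1) to plane points (0,0), (2,0), (2,1), (0,1) recovers a pure horizontal stretch, and any other sensor point can then be mapped into light-plane coordinates with `apply_homography`.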
Henrik Turbell, Stockholm SE
Patent application number | Description | Published |
---|---|---|
20150248167 | CONTROLLING A COMPUTING-BASED DEVICE USING GESTURES - Methods and systems for controlling a computing-based device based on gestures made within a predetermined range of a camera wherein the predetermined range is a subset of the field of view of the camera. Any gestures made outside of the predetermined range are ignored and do not cause the computing-based device to perform any action. In some examples, the gestures are used to control a drawing canvas that is implemented in a video conference session. In these examples, a single camera may be used to generate an image of a video conference user which is used to detect gestures in the predetermined range and provide other parties to the video conference session a visual image of the user. | 09-03-2015 |
20150248765 | DEPTH SENSING USING AN RGB CAMERA - A method of sensing depth using an RGB camera. In an example method, a color image of a scene is received from an RGB camera. The color image is applied to a trained machine learning component which uses features of the image elements to assign all or some of the image elements a depth value which represents the distance between the surface depicted by the image element and the RGB camera. In various examples, the machine learning component comprises one or more entangled geodesic random decision forests. | 09-03-2015 |
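The first application in this table acts only on gestures made within a predetermined range of the camera and ignores everything outside it. A minimal sketch of that gating step, assuming per-gesture depth estimates are already available; the class, function names, and range values are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Gesture:
    kind: str       # e.g. "swipe" or "draw"
    depth_m: float  # estimated distance from the camera, in metres

def filter_gestures(gestures, near=0.3, far=0.8):
    """Keep only gestures made inside the predetermined range [near, far];
    gestures outside the range are ignored and trigger no action."""
    return [g for g in gestures if near <= g.depth_m <= far]
```

A gesture at 0.5 m would pass the filter, while one at 1.2 m would be discarded before any action is dispatched to the computing-based device.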
Henrik Valdemar Turbell, Stockholm SE
Patent application number | Description | Published |
---|---|---|
20160127681 | Modifying Video Call Data - A method comprising: displaying a UI for display of received video; detecting selection of a UI displayed button whilst a received video frame is displayed; in response, disabling the display of video frames received after the received video frame; determining a position of a face of a user in the received frame; receiving a plurality of drawing inputs whilst the button is selected, each drawing input defining image data to be applied at a position on said face; modifying the video frame in accordance with the drawing inputs by applying the image data to each of the positions; detecting a condition and in response, for each video frame received after the detection, determining a position of the face in the frame to determine the location of the positions in the frame, applying the image data to each of the positions, and displaying the modified video frame in the UI. | 05-05-2016 |
20160127682 | Modifying Video Call Data - A method comprising: displaying a UI for display of received video; detecting selection of a UI button; receiving a plurality of drawing inputs whilst the button is selected, each drawing input defining image data to be applied at a facial position on a first side of a face of a user displayed in the received video; for each drawing input, determining a further facial position on a second side of the face that is symmetrically opposite to the facial position; and, for each received video frame: (i) determining a position of the face in the frame by executing an algorithm to determine the locations of the facial position and the determined further facial position on the face in the frame; (ii) applying the image data to the facial position and the determined further facial position; and (iii) displaying the modified frame in the UI. | 05-05-2016 |
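The second application above determines, for a drawing input on one side of the face, the symmetrically opposite position on the other side. One way to sketch that geometry is to reflect the 2-D drawing position across the face's midline, here taken as the line through two landmark points (e.g. nose bridge and chin); the landmark choice and function name are illustrative assumptions, not details from the application:

```python
import numpy as np

def mirror_across_midline(p, a, b):
    """Reflect 2-D point p across the line through points a and b
    (e.g. two landmarks defining the face's axis of symmetry)."""
    p = np.asarray(p, dtype=float)
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    d = b - a
    d /= np.linalg.norm(d)       # unit direction of the midline
    v = p - a
    proj = np.dot(v, d) * d      # component of v along the midline
    return a + 2.0 * proj - v    # keep the along-line part, flip the rest
```

With a vertical midline through the origin, a drawing input at (3, 5) maps to the symmetric position (-3, 5) on the other side of the face.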