Patent application number | Description | Published |
20090299523 | Walking robot and method of controlling the same - Disclosed are a biped walking robot, which walks with high energy efficiency by adjusting the stiffnesses of its leg joints and improves walking stability by controlling the pose of its torso, and a method of controlling the walking robot. The method includes generating a walking pattern of plural legs connected to a torso of the walking robot; adjusting stiffness of each of the plural legs interlocking with walking phases of the plural legs driven according to the walking pattern; and measuring a tilt of the torso, and compensating for the tilt of the torso such that the torso is parallel with the gravity direction. | 12-03-2009 |
20100161225 | Method of building map of mobile platform in dynamic environment - Disclosed herein is a method of building a map of a mobile platform moving in a dynamic environment and detecting an object using a 3D camera sensor, e.g., an IR TOF camera sensor, for localization. A localization technology to separate and map a dynamic object and a static object is applied to a mobile platform, such as an unmanned vehicle or a mobile robot. Consequently, the present method is capable of accurately building map information based on the static object in a dynamic environment having a large number of dynamic objects and achieving a dynamic object avoidance or chasing function using position information acquired to build the map. | 06-24-2010 |
20100172571 | Robot and control method thereof - Disclosed herein are a feature point used for image-based robot localization and map building, and a method of extracting and matching an image patch of a three-dimensional (3D) image that is used as the feature point. The image patch converted into the reference image can be extracted using the position information of the robot and the 3D position information of the feature point. Also, 3D surface information can be obtained from the brightness values of the image patches, and the match value with the minimum error can be obtained by a 3D surface matching method that matches, through the ICP algorithm, the 3D surface information of the image patches converted into the reference image. | 07-08-2010 |
20110038540 | METHOD AND APPARATUS EXTRACTING FEATURE POINTS AND IMAGE BASED LOCALIZATION METHOD USING EXTRACTED FEATURE POINTS - Disclosed herein are a method and apparatus for extracting feature points using hierarchical image segmentation and an image based localization method using the extracted feature points. An image is segmented using an affinity degree obtained using information observed during position estimation, new feature points are extracted from segmented areas in which registered feature points are not included, and position estimation is performed based on the new feature points. Accordingly, stable and reliable localization may be performed. | 02-17-2011 |
20110052043 | METHOD OF MOBILE PLATFORM DETECTING AND TRACKING DYNAMIC OBJECTS AND COMPUTER-READABLE MEDIUM THEREOF - Disclosed herein is a computer-readable medium and method of a mobile platform detecting and tracking dynamic objects in an environment having the dynamic objects. The mobile platform acquires a three-dimensional (3D) image using a time-of-flight (TOF) sensor, removes a floor plane from the acquired 3D image using a random sample consensus (RANSAC) algorithm, and individually separates objects from the 3D image. Movement of the respective separated objects is estimated using a joint probability data association filter (JPDAF). | 03-03-2011 |
20110090252 | MARKERLESS AUGMENTED REALITY SYSTEM AND METHOD USING PROJECTIVE INVARIANT - Disclosed herein are a markerless augmented reality system and method for extracting feature points within an image and providing augmented reality using a projective invariant of the feature points. The feature points are tracked in two images photographed while varying the position of an image acquisition unit, a set of feature points satisfying a plane projective invariant is obtained from the feature points, and augmented reality is provided based on the set of feature points. Accordingly, since the set of feature points satisfies the plane projective invariant even when the image acquisition unit is moved and functions as a marker, a separate marker is unnecessary. In addition, since augmented reality is provided based on the set of feature points, a total computation amount is decreased and augmented reality is more efficiently provided. | 04-21-2011 |
20110164792 | FACIAL RECOGNITION APPARATUS, METHOD AND COMPUTER-READABLE MEDIUM - Two-dimensional image information and three-dimensional image information of a subject are acquired, facial recognition is performed using the two-dimensional image information to determine whether a recognized face is a registered user's face, an elliptical model of the user is matched to the three-dimensional image information to calculate an error if it is determined that the recognized face is the user's face, and it is determined whether the user's face is improperly used based on the error. The subject's face is determined using the two-dimensional image information and the three-dimensional image information of the subject and it is determined whether the recognized face is improperly used, thereby improving facial recognition reliability. Thus, information security is improved. | 07-07-2011 |
20110164832 | IMAGE-BASED LOCALIZATION FEATURE POINT REGISTRATION APPARATUS, METHOD AND COMPUTER-READABLE MEDIUM - An image-based localization feature point registration apparatus includes a camera to capture an image, a feature point extractor to extract a feature point from the captured image, a calculator to calculate depth information about the feature point according to whether the feature point is a two-dimensional (2D) or a three-dimensional (3D) corner, and a feature point register to register 3D coordinates of the feature point based on the depth information about the feature point and image coordinates of the feature point. | 07-07-2011 |
20110165893 | APPARATUS TO PROVIDE AUGMENTED REALITY SERVICE USING LOCATION-BASED INFORMATION AND COMPUTER-READABLE MEDIUM AND METHOD OF THE SAME - An augmented reality (AR) service apparatus includes a camera to capture an actual image, a controller to receive feature point information about the captured image from at least one of a plurality of base stations (BSs), detect a location of the camera by matching data of feature points with data of the image, and provide location-based information in a same direction as the captured image according to the location of the camera, and a display to realize an AR service by combining the captured image with the location-based information under control of the controller. | 07-07-2011 |
20110188708 | THREE-DIMENSIONAL EDGE EXTRACTION METHOD, APPARATUS AND COMPUTER-READABLE MEDIUM USING TIME OF FLIGHT CAMERA - A method of extracting a three-dimensional (3D) edge is based on a two-dimensional (2D) intensity image and a depth image acquired using a time of flight (TOF) camera. The 3D edge extraction method includes acquiring a 2D intensity image and a depth image using a TOF camera, acquiring a 2D edge image from the 2D intensity image, and extracting a 3D edge using a matched image obtained by matching the 2D intensity image and the depth image. | 08-04-2011 |
20120063673 | METHOD AND APPARATUS TO GENERATE OBJECT DESCRIPTOR USING EXTENDED CURVATURE GABOR FILTER - A method and apparatus to generate an object descriptor using extended curvature Gabor filters. The method and apparatus may increase the recognition rate even for a relatively small image by using an extended number of curvature Gabor filters having controllable curvatures, and may reduce the amount of calculation required for face recognition by performing the face recognition using only those extended curvature Gabor filters which have a great effect on the recognition rate. The object descriptor generating method includes extracting Gabor features from an input object image by applying a plurality of curvature Gabor filters, generated via combination of a plurality of curvatures and a plurality of Gaussian magnitudes, to the object image, and generating an object descriptor for object recognition by projecting the extracted features onto a predetermined base vector. | 03-15-2012 |
20120076355 | 3D OBJECT TRACKING METHOD AND APPARATUS - A 3D object tracking method and apparatus in which a model of an object to be tracked is divided into a plurality of polygonal planes and the object is tracked using texture data of the respective planes and geometric data between the respective planes to enable more precise tracking. The 3D object tracking method includes modeling the object to be tracked to generate a plurality of planes, and tracking the plurality of planes, respectively. The modeling of the object includes selecting points from among the plurality of planes, respectively, and calculating projective invariants using the selected points. | 03-29-2012 |
20120089295 | MOVING ROBOT AND METHOD TO BUILD MAP FOR THE SAME - A moving robot and a method to build a map for the same, wherein a 3D map for an ambient environment of the moving robot may be built using a Time of Flight (TOF) camera that may acquire 3D distance information in real time. The method acquires 3D distance information of an object present in a path along which the moving robot moves, accumulates the acquired 3D distance information to construct a map of a specific level and stores the map in a database, and then hierarchically matches maps stored in the database to build a 3D map for a set space. This method may quickly and accurately build a 3D map for an ambient environment of the moving robot. | 04-12-2012 |
20120106828 | MOBILE ROBOT AND SIMULTANEOUS LOCALIZATION AND MAP BUILDING METHOD THEREOF - A simultaneous localization and map building method of a mobile robot including an omni-directional camera. The method includes acquiring an omni-directional image from the omni-directional camera, dividing the obtained omni-directional image into upper and lower images according to a preset reference to generate a first image, which is the lower image, and a second image, which is the upper image, extracting feature points from the first image and calculating visual odometry information to track locations of the extracted feature points based on a location of the omni-directional camera, and performing localization and map building of the mobile robot using the calculated visual odometry information and the second image as an input of an extended Kalman filter. | 05-03-2012 |
20120114174 | Voxel map generator and method thereof - A volume cell (VOXEL) map generation apparatus includes an inertia measurement unit to calculate inertia information of a volume cell (VOXEL) map generator, a Time of Flight (TOF) camera to capture an image of an object, thereby generating a depth image of the object and a black-and-white image of the object, an estimation unit to calculate position and posture information of the VOXEL map generator by performing an Iterative Closest Point (ICP) algorithm on the basis of the depth image of the object, and to recursively estimate a position and posture of the VOXEL map generator on the basis of the VOXEL map generator inertia information calculated by the inertia measurement unit and the VOXEL map generator position and posture information calculated by the ICP algorithm, and a grid map construction unit to configure a grid map based on the recursively estimated VOXEL map generator position and posture. | 05-10-2012 |
20120114175 | OBJECT POSE RECOGNITION APPARATUS AND OBJECT POSE RECOGNITION METHOD USING THE SAME - An object pose recognition apparatus and method. The object pose recognition method includes acquiring first image data of an object to be recognized and 3-dimensional (3D) point cloud data of the first image data, and storing the first image data and the 3D point cloud data in a database, receiving input image data of the object photographed by a camera, extracting feature points from the stored first image data and the input image data, matching the stored 3D point cloud data and the input image data based on the extracted feature points and calculating a pose of the photographed object, and shifting the 3D point cloud data based on the calculated pose of the object, restoring second image data based on the shifted 3D point cloud data, and re-calculating the pose of the object using the restored second image data and the input image data. | 05-10-2012 |
20120121126 | METHOD AND APPARATUS FOR ESTIMATING FACE POSITION IN 3 DIMENSIONS - An apparatus and method for estimating a three-dimensional face position. The method of estimating the three-dimensional face position includes acquiring two-dimensional image information from a single camera, detecting a face region of a user from the two-dimensional image information, calculating the size of the detected face region, estimating a distance between the single camera and the user's face using the calculated size of the face region, and obtaining positional information of the user's face in a three-dimensional coordinate system using the estimated distance between the single camera and the user's face. Accordingly, it is possible to estimate the distance between the user and the single camera using the size of the face region of the user in the image information acquired by the single camera so as to acquire the three-dimensional position coordinates of the user. | 05-17-2012 |
20120134537 | SYSTEM AND METHOD FOR EXTRACTING THREE-DIMENSIONAL COORDINATES - A system and method for extracting 3D coordinates, the method includes obtaining, by a stereoscopic image photographing unit, two images of a target object, and obtaining 3D coordinates of the object on the basis of coordinates of each pixel of the two images, measuring, by a Time of Flight (TOF) sensor unit, a value of a distance to the object, and obtaining 3D coordinates of the object on the basis of the measured distance value, mapping pixel coordinates of each image to the 3D coordinates obtained through the TOF sensor unit, and calibrating the mapped result, determining whether each set of pixel coordinates and the distance value to the object measured through the TOF sensor unit are present, calculating a disparity value on the basis of the distance value or the pixel coordinates, and calculating 3D coordinates of the object on the basis of the calculated disparity value. | 05-31-2012 |
20120155756 | METHOD OF SEPARATING FRONT VIEW AND BACKGROUND AND APPARATUS - A method of initially estimating a front view portion of a photographed image and separating the photographed image into a front view and a background without user interaction, and an apparatus performing the method, are provided. The method of separating a front view and a background of an image includes dividing one or more pixels included in a photographed image into pixel groups according to color similarity between the pixels, estimating the position of the front view in the image divided into the pixel groups, and separating the front view and the background based on the estimated position of the front view. The method automatically separates the front view and the background of the image without a user input. | 06-21-2012 |
20120155775 | WALKING ROBOT AND SIMULTANEOUS LOCALIZATION AND MAPPING METHOD THEREOF - A walking robot and a simultaneous localization and mapping method thereof in which odometry data acquired during movement of the walking robot are applied to image-based SLAM technology so as to improve accuracy and convergence of localization of the walking robot. The simultaneous localization and mapping method includes acquiring image data of a space about which the walking robot walks and rotational angle data of rotary joints relating to walking of the walking robot, calculating odometry data using kinematic data of respective links constituting the walking robot and the rotational angle data, and localizing the walking robot and mapping the space about which the walking robot walks using the image data and the odometry data. | 06-21-2012 |
20120158178 | ROBOT AND METHOD FOR CREATING PATH OF THE ROBOT - A robot and a method for creating a robot path. The method for planning the robot path includes generating a depth map including a plurality of cells by measuring a distance to an object, dividing a boundary among the plurality of cells into a plurality of partitions according to individual depth values of the cells, and extracting a single closed loop formed by the divided boundary, obtaining a position and shape of the object located at a first time through the extracted single closed loop, calculating a probability that the object is located at a second time after t seconds on the basis of the obtained position and shape of the object located at the first time, and creating a moving path simultaneously while avoiding the object according to the calculated probability, thereby creating an optimum path without colliding with the object. | 06-21-2012 |
20130051658 | METHOD OF SEPARATING OBJECT IN THREE DIMENSION POINT CLOUD - A method of separating an object in a three-dimensional point cloud, including acquiring a three-dimensional point cloud image of an object using an image acquirer, eliminating an outlier from the three-dimensional point cloud image using a controller, eliminating a plane surface area, using the controller, from the three-dimensional point cloud image from which the outlier has been eliminated, and clustering points of an individual object, using the controller, from the three-dimensional point cloud image from which the plane surface area has been eliminated. | 02-28-2013 |
20130089235 | MOBILE APPARATUS AND METHOD FOR CONTROLLING THE SAME - A method of controlling a mobile apparatus includes acquiring a first original image and a second original image, extracting a first feature point of the first original image and a second feature point of the second original image, generating a first blurring image and a second blurring image by blurring the first original image and the second original image, respectively, calculating a similarity between at least two images of the first original image, the second original image, the first blurring image, and the second blurring image, determining a change in scale of the second original image based on the calculated similarity, and controlling at least one of an object recognition and a position recognition by matching the second feature point of the second original image to the first feature point of the first original image based on the change in scale. | 04-11-2013 |
20130108123 | FACE RECOGNITION APPARATUS AND METHOD FOR CONTROLLING THE SAME | 05-02-2013 |
20130116823 | MOBILE APPARATUS AND WALKING ROBOT - A mobile apparatus and a position recognition method thereof capable of enhancing position recognition performance, such as accuracy and convergence. The mobile apparatus performs position recognition by use of a distributed filter system, which is composed of a plurality of local filters operating independently and a single fusion filter that integrates the position recognition results produced by the local filters. The mobile apparatus includes a plurality of sensors, a plurality of local filters configured to receive detection information from at least one of the plurality of sensors to perform a position recognition of the mobile apparatus, and a fusion filter configured to integrate the position recognition results of the plurality of local filters and to perform a position recognition of the mobile apparatus by using the integrated position recognition result. | 05-09-2013 |
20130127996 | METHOD OF RECOGNIZING STAIRS IN THREE DIMENSIONAL DATA IMAGE - A method of recognizing stairs in a 3D data image includes an image acquirer that acquires a 3D data image of a space in which stairs are located. An image processor calculates a riser height between two consecutive treads of the stairs in the 3D data image, identifies points located between the two consecutive treads according to the calculated riser height, and detects a riser located between the two consecutive treads through the points located between the two consecutive treads. Then, the image processor calculates a tread depth between two consecutive risers of the stairs in the 3D data image, identifies points located between the two consecutive risers according to the calculated tread depth, and detects a tread located between the two consecutive risers through the points located between the two consecutive risers. | 05-23-2013 |
20130163853 | APPARATUS FOR ESTIMATING ROBOT POSITION AND METHOD THEREOF - A method for estimating a location of a device uses a color image and a depth image. The method includes matching the color image to the depth image, generating a 3D reference image based on the matching, generating a 3D object image based on the matching, extracting a 2D reference feature point from the reference image, extracting a 2D reference feature point from the object image, matching the extracted reference feature point from the reference image to the extracted reference feature point from the object image, extracting a 3D feature point from the object image using the matched 2D reference feature point, and estimating the location of the device based on the extracted 3D feature point. | 06-27-2013 |
20130166137 | MOBILE APPARATUS AND LOCALIZATION METHOD THEREOF - A mobile apparatus and a localization method thereof which perform localization of the mobile apparatus using a distributed filter system including a plurality of local filters independently operated and one fusion filter integrating results of localization performed through the respective local filters, and additionally apply accurate topological absolute position information to the distributed filter system to improve localization performance (accuracy, convergence and speed in localization, etc.) of the mobile apparatus on a wide space. The mobile apparatus includes at least one sensor, at least one first distribution filter generating current relative position information using a value detected by the at least one sensor, at least one second distribution filter generating current absolute position information using the value detected by the at least one sensor, and a fusion filter integrating the relative position information and the absolute position information to perform localization. | 06-27-2013 |
20130170744 | OBJECT RECOGNITION METHOD, DESCRIPTOR GENERATING METHOD FOR OBJECT RECOGNITION, AND DESCRIPTOR FOR OBJECT RECOGNITION - An object recognition method, a descriptor generating method for object recognition, and a descriptor for object recognition capable of extracting feature points using the position relationship and color information relationship between points in a group that are sampled from an image of an object, and capable of recognizing the object using the feature points, the object recognition method including extracting feature components of a point cloud using the position information and the color information of the points that compose the point cloud of the three-dimensional (3D) image of an object, generating a descriptor configured to recognize the object using the extracted feature components; and performing the object recognition based on the descriptor. | 07-04-2013 |
20130238295 | METHOD AND APPARATUS FOR POSE RECOGNITION - An apparatus and a method for pose recognition, the method for pose recognition including generating a model of a human body in a virtual space, predicting a next pose of the model of the human body based on a state vector having an angle and an angular velocity of each part of the human body as a state variable, predicting a depth image about the predicted pose, and recognizing a pose of a human in a depth image captured in practice, based on a similarity between the predicted depth image and the depth image captured in practice, wherein the next pose is predicted based on the state vector having an angular velocity as a state variable, thereby reducing the number of pose samples to be generated and improving the pose recognition speed. | 09-12-2013 |
20130243337 | IMAGE PROCESSING APPARATUS AND METHOD THEREOF - An image processing apparatus for searching for a feature point by use of a depth image and a method thereof are provided. The image processing apparatus includes an input unit configured to input a three-dimensional image having depth information, a feature point extraction unit configured to obtain a designated point from an object image extracted from the depth image to obtain a feature point that is located at a substantially farthest distance from the designated point, and to obtain other feature points that are located at substantially farthest distances from feature points that are previously obtained as well as the designated point. The apparatus includes a control unit configured to control the input unit and the feature point extraction unit so that time in estimating a structure of the object is reduced, and a recognition result is enhanced. | 09-19-2013 |
20140243596 | ENDOSCOPE SYSTEM AND CONTROL METHOD THEREOF - Disclosed herein are an endoscope system and a control method thereof. The control method includes acquiring plural omnidirectional images of the surroundings of an endoscope using a stereo omnidirectional camera mounted on the endoscope, calculating distances between the endoscope and an object around the endoscope using the acquired plural omnidirectional images, and executing an operation to avoid collision between the endoscope and the object around the endoscope based on the calculated distances, thus facilitating safe operation of the endoscope. | 08-28-2014 |
20140288413 | SURGICAL ROBOT SYSTEM AND METHOD OF CONTROLLING THE SAME - A surgical robot system includes a slave system to perform a surgical operation on a patient and an imaging system that includes an image capture unit including a plurality of cameras to acquire a plurality of affected area images, an image generator detecting an occluded region in each of the affected area images acquired by the plurality of cameras, removing the occluded region therefrom, warping each of the affected area images from which the occluded region is removed, and matching the affected area images to generate a final image, and a controller driving each of the plurality of cameras of the image capture unit to acquire the plurality of affected area images and inputting the acquired plurality of affected area images to the image generator to generate a final image. | 09-25-2014 |
20140313359 | CAMERA ASSEMBLY AND IMAGE ACQUISITION METHOD USING THE SAME - Disclosed is a camera assembly having a wide viewing angle using a variable mirror. The camera assembly includes a variable mirror located in front of an image sensor, a variable mirror controller to switch a mode of the variable mirror to one of a reflection mode to reflect light incident upon the variable mirror and a transmission mode to transmit light incident upon the variable mirror, an image sensor to sense the light reflected by the variable mirror to acquire first image data and to sense the light transmitted through the variable mirror to acquire second image data, and an image processing unit to register the first image data and the second image data acquired by the image sensor to generate a third image. | 10-23-2014 |
20140316252 | MARKER AND METHOD OF ESTIMATING SURGICAL INSTRUMENT POSE USING THE SAME - A marker includes a basal surface, and a plurality of reference lines provided at the basal surface in a longitudinal direction of the basal surface. The reference lines may have different gradients. The marker may be attached to an instrument and a camera may capture an image of the marker. Pose information of the instrument may be estimated based on the captured image. | 10-23-2014 |
20140330077 | SURGICAL TROCARS AND IMAGE ACQUISITION METHOD USING THE SAME - Surgical trocars, and image acquisition method using the same, include a body having a passage configured to receive at least one surgical instrument, and at least one camera movably coupled to an outer wall of the body. | 11-06-2014 |
20140330078 | ENDOSCOPE AND IMAGE PROCESSING APPARATUS USING THE SAME - An endoscope to acquire a 3D image and a wide view-angle image and an image processing apparatus using the endoscope includes a front image acquirer to acquire a front image and a lower image acquirer to acquire a lower image in a downward direction of the front image acquirer. The front image acquirer includes a first objective lens and a second objective lens arranged side by side in a horizontal direction. The lower image acquirer includes a third objective lens located below the first objective lens and inclined from the first objective lens and a fourth objective lens located below the second objective lens and inclined from the second objective lens. | 11-06-2014 |
20150019103 | MOBILE ROBOT HAVING FRICTION COEFFICIENT ESTIMATION FUNCTION AND FRICTION COEFFICIENT ESTIMATION METHOD - A mobile robot configured to move on a ground. The mobile robot includes a contact angle estimation unit estimating contact angles between wheels of the mobile robot and the ground and uncertainties associated with the contact angles, a traction force estimation unit estimating traction forces applied to the wheels and traction force uncertainties, a normal force estimation unit estimating normal forces applied to the wheels and normal force uncertainties, a friction coefficient estimation unit estimating friction coefficients between the wheels and the ground, a friction coefficient uncertainty estimation unit estimating friction coefficient uncertainties, and a controller determining, from among the estimated friction coefficients, the maximum friction coefficient whose uncertainty is less than a threshold at the point in time when the torque applied to each of the wheels changes from an increasing state to a decreasing state. | 01-15-2015 |
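Several of the filings above (e.g., 20110052043 and 20130051658) depend on removing a dominant plane, such as the floor, from a 3D point cloud before separating or clustering individual objects, with 20110052043 naming RANSAC as the plane-fitting step. The sketch below is not taken from any of the patents; it is a generic, minimal illustration of RANSAC plane removal on a NumPy point cloud, with the function name, parameters, and thresholds chosen here for illustration only.

```python
import numpy as np

def ransac_remove_plane(points, threshold=0.02, iterations=200, rng=None):
    """Fit the dominant plane in an (N, 3) point cloud with RANSAC and
    return only the points that do NOT lie on it (e.g., non-floor points)."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iterations):
        # Sample 3 distinct points and derive the plane they span.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (near-collinear) sample
            continue
        normal /= norm
        # Perpendicular distance of every point to the candidate plane.
        dist = np.abs((points - p0) @ normal)
        inliers = dist < threshold
        # Keep the plane that explains the most points.
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]
```

In a pipeline like the one described in 20110052043, the points returned by such a step would then be grouped into individual objects (for instance by Euclidean clustering) before per-object tracking.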