Class / Patent application number | Description | Number of patent applications / Date published |
382153000 | Robotics | 67 |
20080232678 | Localization method for a moving robot - A localization method for a moving robot is disclosed, which includes: capturing a first omni-directional image by the moving robot; confirming at least one node at which a second omni-directional image having a high correlation with the first omni-directional image is captured; and determining that the moving robot is located at a first node when the moving robot reaches the first node, at which the second omni-directional image having the highest correlation with the first omni-directional image is captured, from among the at least one node. | 09-25-2008 |
20080240547 | Apparatus And Method For Vision Processing On Network Based Intelligent Service Robot System And The System Using The Same - Provided are an apparatus and method for vision processing on a network-based intelligent service robot system, and a system using the same. A robot can move to a target object while avoiding obstacles, without the help of a robot server interfaced with the robot terminal over a network, by extracting and processing three-dimensional distance information of external objects using a stereo camera, a low-cost dedicated image-processing chip, and an embedded processor. Therefore, the intelligent robot can travel and move using only stereo camera image processing, without other sensors, and can provide users with various functional services at low expense. | 10-02-2008 |
20080247637 | METHODS AND DEVICES FOR TATTOO APPLICATION AND REMOVAL - A robotic tattoo application and tattoo removal methods and systems are described. This technology involves the use of a robotic system guided by control of a graphics capable computer in order to perform various types, including artistic, recreational, cosmetic, or therapeutic tattooing, or tattoo removal. | 10-09-2008 |
20080273791 | Apparatus, method, and medium for dividing regions by using feature points and mobile robot using the same - An apparatus, method, and medium for dividing regions by using feature points and a mobile robot cleaner using the same are provided. A method includes forming a grid map by using a plurality of grid points that are obtained by detecting distances of a mobile robot from obstacles; extracting feature points from the grid map; extracting candidate pairs of feature points, which are in the range of a region division element, from the feature points; extracting a final pair of feature points, which satisfies the requirements of the region division element, from the candidate pairs of feature points; forming a critical line by connecting the final pair of feature points; and forming a final region in accordance with the size relationship between regions formed of a closed curve which connects the critical line and the grid map. | 11-06-2008 |
20080310705 | LEGGED LOCOMOTION ROBOT - A robot capable of performing appropriate movement control while reducing the arithmetic processing required to recognize the shape of a floor. During movement, the robot sets predetermined landing positions for the steps of its legs on a present assumed floor, which is the floor represented by the floor shape information used for the current motion control of the robot. An image projection area is set in the vicinity of each predetermined landing position and is projected onto each image captured by cameras mounted on the robot. Shape parameters representing the shape of an actual floor partial area, which forms part of the actual floor whose image is captured in each partial image area, are estimated based on the image of the partial image area generated by projecting the set image projection area onto the images captured by the cameras. | 12-18-2008 |
20080310706 | PROJECTIVE TRANSFORMATION CONVERGENCE CALCULATION METHOD - A method for performing a convergence calculation using a projective transformation between images captured by two cameras to observe a flat part of an object in the images, wherein a computational load is reduced while securing a convergence property of the convergence calculation. Initial values (n | 12-18-2008 |
20090060318 | Legged mobile robot - In a legged mobile robot having an imaging device (such as CCD camera) for taking an image utilizing incident light from external world in which a human being to be imaged is present, brightness reduction operation is executed to reduce brightness of a high-brightness imaging region produced by high-brightness incident light, when the high-brightness imaging region is present in the image taken by the imaging device. With this, when the imaged high-brightness imaging region is present owing to high-brightness incident light from the sun or the like, the legged mobile robot can reduce the brightness to image a human being or other object with suitable brightness. | 03-05-2009 |
20090110265 | System for Forming Patterns on a Multi-Curved Surface - According to one embodiment, a pattern forming system includes a patterning tool, a multi-axis robot, and a simulation tool that are coupled to a pattern forming tool that is executed on a suitable computing system. The pattern forming tool receives a contour measurement from the patterning tool and transmits the measured contour to the simulation tool to model the electrical characteristics of a conductive pattern or a dielectric pattern on the measured contour. Upon receipt of the modeled characteristics, the pattern forming system may adjust one or more dimensions of the pattern according to the model, and subsequently create, using the patterning tool, the corrected pattern on the surface. | 04-30-2009 |
20090148034 | Mobile robot - There is disclosed a mobile robot including an image processor that generates recognition information regarding a target object included in a taken image, and a main controller integrally controlling the robot based on this recognition information. The image processor executes steps of: generating a low-resolution image and at least one high-resolution image whose resolution is higher than that of the low-resolution image; generating first target object information regarding the target object from the low-resolution image; determining which high-resolution image should be processed if two or more high-resolution images are generated, and then defining a resolution process region in the low-resolution image; processing a region in the high-resolution image corresponding to the resolution process region in the low-resolution image, so as to generate second target object information in the high-resolution image; determining whether or not the first and the second target object information are matched; and, based on this determination, using at least either of the first and the second target object information to generate the recognition information. | 06-11-2009 |
20090148035 | CONTROLLER OF MOBILE ROBOT - A controller of a mobile robot that moves an object such that the position of a representative point of the object and the posture of the object follow a desired position and posture trajectory is provided. The desired posture trajectory of the object includes the desired value of the angular difference about a yaw axis between a reference direction, which is a direction orthogonal to the yaw axis of the object, and the direction of the moving velocity vector of the representative point of the object, defined by the desired position trajectory. The controller has a desired angular difference setting means which variably sets the desired value of the angular difference according to at least a required condition related to a movement mode of the object. This allows the object to be moved at a posture which meets the required condition of the movement mode. | 06-11-2009 |
20090154791 | Simultaneous localization and map building method and medium for moving robot - A simultaneous localization and map building (SLAM) method and medium for a moving robot are disclosed. The SLAM method includes extracting a horizontal line from an omni-directional image photographed at every position the mobile robot reaches during its movement, correcting the extracted horizontal line to create a horizontal line image, and simultaneously executing localization of the mobile robot and building a map for the mobile robot using the created horizontal line image and a previously-created horizontal line image. | 06-18-2009 |
20090190826 | WORKING APPARATUS AND CALIBRATION METHOD THEREOF - A working apparatus comprises a working unit which executes work on a work subject, and a calibration jig on which a plurality of markers is arranged in three dimensions in a radial pattern from a center point, the calibration jig being attached to the working unit such that a calibration reference point set on the working unit coincides with the center point of the markers. With this configuration, it becomes possible to calibrate the position of the working unit even when the portion of the jig containing the center point of the markers is occluded during image measurement. | 07-30-2009 |
20090208094 | ROBOT APPARATUS AND METHOD OF CONTROLLING SAME - To make it possible, in a robot apparatus that performs actions in response to external environment, to distinguish the image of a part of its own body contained in three dimensional data of the external environment. A robot | 08-20-2009 |
20100021051 | Automated Guidance and Recognition System and Method of the Same - Disclosed herein are embodiments and methods of a visual guidance and recognition system requiring no calibration. One embodiment of the system comprises a servo actuated manipulator configured to perform a function, a camera mounted on the face plate of the manipulator, and a recognition controller configured to acquire a two dimensional image of the work piece. The manipulator controller is configured to receive and store the face plate position at a distance “A” between the reference work piece and the manipulator along an axis of the reference work piece when the reference work piece is in the camera's region of interest. The recognition controller is configured to learn the work piece from the image and the distance “A”. During operation, a work piece is recognized with the system, and the manipulator is accurately positioned with respect to the work piece so that the manipulator can accurately perform its function. | 01-28-2010 |
20100040279 | METHOD AND APPARATUS TO BUILD 3-DIMENSIONAL GRID MAP AND METHOD AND APPARATUS TO CONTROL AUTOMATIC TRAVELING APPARATUS USING THE SAME - A method and apparatus to build a 3-dimensional grid map and a method and apparatus to control an automatic traveling apparatus using the same. In building a 3-dimensional map to discern a current location and a peripheral environment of an unmanned vehicle or a mobile robot, 2-dimensional localization and 3-dimensional image restoration are appropriately used to accurately build the 3-dimensional grid map more rapidly. | 02-18-2010 |
20100119146 | ROBOT SYSTEM, ROBOT CONTROL DEVICE AND METHOD FOR CONTROLLING ROBOT - A robot system includes a robot having a movable section, an image capture unit provided on the movable section, an output unit that allows the image capture unit to capture a target object and a reference mark and outputs a captured image in which the reference mark is imaged as a locus image, an extraction unit that extracts the locus image from the captured image, an image acquisition unit that performs image transformation on the basis of the extracted locus image by using the point spread function so as to acquire an image after the transformation from the captured image, a computation unit that computes a position of the target object on the basis of the acquired image, and a control unit that controls the robot so as to move the movable section toward the target object in accordance with the computed position. | 05-13-2010 |
20100135572 | ROBOT MOTION DATA GENERATION METHOD AND A GENERATION APPARATUS USING IMAGE DATA - The present invention relates to a robot motion data generation method and a generation apparatus using image data, and more specifically to a robot action data generation method for perceiving motion of an object from a consecutive first image frame and a second image frame including image information on the moving object, and for generating robot action data from the perceived motion, comprising the steps of: a first step of performing digital markings at plural spots on top of the object of the first image frame, and storing first location coordinates values of the digital markings in tree type; a second step of storing peripheral image patterns of each digital marking in association with the first location coordinates values; a third step of recognizing image data identical with peripheral image patterns of each of the first location coordinates values from the second image frame, and finding out changed second location coordinates values from the first location coordinates values; a fourth step of extracting angle changes of each location coordinates value from the first location coordinates values and the second location coordinates values; a fifth step of converting extracted angles into motion templates; and a sixth step of generating robot action data from the motion templates. | 06-03-2010 |
20100172571 | Robot and control method thereof - Disclosed herein are a feature point used to localize an image-based robot and build a map of the robot and a method of extracting and matching an image patch of a three-dimensional (3D) image, which is used as the feature point. It is possible to extract the image patch converted into the reference image using the position information of the robot and the 3D position information of the feature point. Also, it is possible to obtain the 3D surface information with the brightness values of the image patches to obtain the match value with the minimum error by a 3D surface matching method of matching the 3D surface information of the image patches converted into the reference image through the ICP algorithm. | 07-08-2010 |
20100284604 | EFFICIENT IMAGE MATCHING - A system described herein includes a receiver component that receives a first image and a symmetry signature generator component that generates a first global symmetry signature for the image, wherein the global symmetry signature is representative of symmetry existent in the first image. The system also includes a comparer component that compares the first global symmetry signature with a second global symmetry signature that corresponds to a second image, wherein the second global symmetry signature is representative of symmetry existent in the second image. The system additionally includes an output component that outputs an indication of similarity between the first image and the second image based at least in part upon the comparison undertaken by the comparer component. | 11-11-2010 |
20100296723 | Methods, Devices, and Systems Useful in Registration - Methods, devices, and systems for use in accomplishing registration of a patient to a robot to facilitate image guided surgical procedures, such as stereotactic procedures. | 11-25-2010 |
20110075915 | IMAGE PROCESSING APPARATUS OF ROBOT SYSTEM AND METHOD AND COMPUTER-READABLE MEDIUM THEREOF - Disclosed are an image processing apparatus, method and computer-readable medium for a robot, which efficiently manage a moving image acquired by the robot and reinforce security to prevent image leakage. In order to restore an object region within an original image photographed by the robot, a low-resolution image is generated using object password information and the original image to generate image information, and the image information is transmitted to a server over a network. The server detects the object password information and the low-resolution image from the image information and restores a high-resolution object image using the object password information and the low-resolution image. | 03-31-2011 |
20110135189 | SWARM INTELLIGENCE-BASED MOBILE ROBOT, METHOD FOR CONTROLLING THE SAME, AND SURVEILLANCE ROBOT SYSTEM - In a system of a plurality of swarm intelligence-based mobile robots, each having multiple legs and multiple joints, each mobile robot includes: an environment recognition sensor for collecting sensed data about the surrounding environment of the mobile robot; a communication unit for performing communication with a remote controller, a parent robot managing at least one mobile robot, or the other mobile robots located within a predefined area; and a control unit for controlling the motions of the multiple legs and multiple joints to control movement of the mobile robot to a given destination, based on control data transmitted from the remote controller through the communication unit, on communication with the other mobile robots within the predefined area, or on the sensed data collected by the environment recognition sensor. | 06-09-2011 |
20110150319 | Method for Determining 3D Poses Using Points and Lines - A three-dimensional (3D) pose of a 3D object in an environment is determined by extracting features from an image acquired of the environment by a camera. The features are matched to a 3D model of the environment to determine correspondences. A camera reference frame of the image and a world reference frame of the environment are transformed to a corresponding intermediate camera reference frame and a corresponding world reference frame using the correspondences. Geometrical constraints are applied to the intermediate camera reference frame and the intermediate world reference frame to obtain a constrained intermediate world reference frame and a constrained world reference frame. The 3D pose is then determined from parameters of the constrained intermediate world reference frame and the constrained world reference frame. | 06-23-2011 |
20110194755 | Apparatus and method with traveling path planning - An apparatus and method with traveling path planning of a mobile robot within a space along an inherent direction of the space. The apparatus may include a pattern extracting unit, a pattern direction extracting unit, and a path generating unit. The pattern extracting unit may extract at least one of pattern from a ceiling image captured by photographing in a ceiling direction. The pattern direction extracting unit may extract a pattern direction in the form of a line from the extracted pattern. The path generating unit may generate a traveling path using the extracted pattern direction. | 08-11-2011 |
20110268349 | SYSTEM AND METHOD BUILDING A MAP - A system building a map while an image sensor is moving, the system including the image sensor configured to capture images while the image sensor moves relative to one or more different locations, a sub-map building unit configured to recognize a relative location for at least the image sensor of the system using the captured images, build up a sub-map, and, if a condition for stopping a building of the sub-map is met, store the sub-map which has been so far built up, an operation determining unit configured to determine whether the condition for stopping the building of the sub-map is met, an image group storing unit configured to store an image group including images that are newly captured from the image sensor after the storing of the sub-map when the condition for the stopping of the building of the sub-map is satisfied, and an overall map building unit configured to build an overall map based on the built sub-map and the stored image group when a current relative location for at least the image sensor of the system is determined to be the same as a previous relative location for at least the image sensor of the system. | 11-03-2011 |
20110280472 | SYSTEM AND METHOD FOR ROBUST CALIBRATION BETWEEN A MACHINE VISION SYSTEM AND A ROBOT - A system and method for robustly calibrating a vision system and a robot is provided. The system and method enables a plurality of cameras to be calibrated into a robot base coordinate system to enable a machine vision/robot control system to accurately identify the location of objects of interest within robot base coordinates. | 11-17-2011 |
20110311127 | MOTION SPACE PRESENTATION DEVICE AND MOTION SPACE PRESENTATION METHOD - A motion space presentation device includes: a work area generation unit configured to generate a three-dimensional region in which the movable robot operates; an image capture unit configured to capture a real image; a position and posture detection unit configured to detect an image capture position and an image capture direction of the image capture unit; and an overlay unit configured to selectively superimpose either an image of a segment approximation model of the movable robot as viewed in the image capture direction from the image capture position, or an image of the three-dimensional region as viewed in the image capture direction from the image capture position, on the real image captured by the image capture unit, according to the difficulty of recognizing each image. | 12-22-2011 |
20120020547 | Methods of Locating and Tracking Robotic Instruments in Robotic Surgical Systems - In one embodiment of the invention, a method is disclosed to locate a robotic instrument in the field of view of a camera. The method includes capturing sequential images in a field of view of a camera. The sequential images are correlated between successive views. The method further includes receiving a kinematic datum to provide an approximate location of the robotic instrument and then analyzing the sequential images in response to the approximate location of the robotic instrument. An additional method for robotic systems is disclosed. Further disclosed is a method for indicating tool entrance into the field of view of a camera. | 01-26-2012 |
20120076397 | SYSTEM FOR GUIDING A DRONE DURING THE APPROACH PHASE TO A PLATFORM, IN PARTICULAR A NAVAL PLATFORM, WITH A VIEW TO LANDING SAME - This system for guiding a drone during the approach phase to a platform, particularly a naval platform, with a view to landing the same, is characterized in that the platform is equipped with a glide slope indicator installation emitting an array of optical guide beams over an angular sector predetermined from the horizontal, and in that the drone is equipped with a beam acquisition camera connected to image analysis means and to computing means that generate orders for commanding the automatic piloting means of the drone, to cause it to follow the guide beams. | 03-29-2012 |
20120106828 | MOBILE ROBOT AND SIMULTANEOUS LOCALIZATION AND MAP BUILDING METHOD THEREOF - A simultaneous localization and map building method of a mobile robot including an omni-directional camera. The method includes acquiring an omni-directional image from the omni-directional camera, dividing the acquired omni-directional image into upper and lower images according to a preset reference to generate a first image, which is the lower image, and a second image, which is the upper image, extracting feature points from the first image and calculating visual odometry information to track locations of the extracted feature points based on a location of the omni-directional camera, and performing localization and map building of the mobile robot using the calculated visual odometry information and the second image as inputs to an extended Kalman filter. | 05-03-2012 |
20120106829 | ROBOT CLEANER AND CONTROLLING METHOD OF THE SAME - A robot cleaner and a method for controlling the same are provided. A region to be cleaned may be divided into a plurality of sectors based on detection data collected by a detecting device, and a partial map for each sector may be generated. A full map of the cleaning region may then be generated based on a position of a partial map with respect to each sector, and a topological relationship between the partial maps. Based on the full map, the robot cleaner may recognize its position, allowing the entire region to be completely cleaned, and allowing the robot cleaner to rapidly move to sectors that have not yet been cleaned. | 05-03-2012 |
20120121161 | SYSTEMS AND METHODS FOR VSLAM OPTIMIZATION - The invention is related to methods and apparatus that use a visual sensor and dead reckoning sensors to process Simultaneous Localization and Mapping (SLAM). These techniques can be used in robot navigation. Advantageously, such visual techniques can be used to autonomously generate and update a map. Unlike with laser rangefinders, the visual techniques are economically practical in a wide range of applications and can be used in relatively dynamic environments, such as environments in which people move. Certain embodiments contemplate improvements to the front-end processing in a SLAM-based system. Particularly, certain of these embodiments contemplate a novel landmark matching process. Certain of these embodiments also contemplate a novel landmark creation process. Certain embodiments contemplate improvements to the back-end processing in a SLAM-based system. Particularly, certain of these embodiments contemplate algorithms for modifying the SLAM graph in real-time to achieve a more efficient structure. | 05-17-2012 |
20120195491 | System And Method For Real-Time Mapping Of An Indoor Environment Using Mobile Robots With Limited Sensing - A system and method for real-time mapping of an indoor environment using mobile robots with limited sensing are provided. Raw trajectory data comprising a plurality of trajectory points is received from wall-following. Trace segmentation is applied to the raw trajectory data to generate line segments. The line segments are rectified to one another. A map is generated from the rectified line segments. | 08-02-2012 |
20120201448 | ROBOTIC DEVICE, INSPECTION DEVICE, INSPECTION METHOD, AND INSPECTION PROGRAM - A robotic device includes an imaging section adapted to take an image of an object having a hole, and generate an image data of the object including an inspection area of an image of the hole, a robot adapted to move the imaging section, an inspection area luminance value detection section adapted to detect a luminance value of the inspection area from the image data, a reference area luminance value detection section adapted to detect a luminance value of a reference area adjacent to the inspection area from the image data, and a determination section adapted to determine a state of the inspection area based on one of a ratio and a difference between the luminance value of the inspection area detected by the inspection area luminance value detection section and the luminance value of the reference area detected by the reference area luminance value detection section. | 08-09-2012 |
20120219207 | SLIP DETECTION APPARATUS AND METHOD FOR A MOBILE ROBOT - The present invention relates to a slip detection apparatus and method for a mobile robot, and more particularly, to a slip detection apparatus and method for a mobile robot, which not only use a plurality of rotation detection sensors to detect a lateral slip angle and lateral slip direction, but also analyze the amount of change in an image and detect the blocked degree of an image input unit to determine the quality of an input image, and detect the occurrence of a frontal slip to precisely detect the type of slip, direction of the slip, and the rotation angle, and, on the basis of the latter, to enable the mobile robot to move away from and avoid slip regions, and to reassume the precise position thereof. | 08-30-2012 |
20120269422 | COLLISION DETECTION SYSTEM, ROBOTIC SYSTEM, COLLISION DETECTION METHOD AND PROGRAM - A collision detection system includes a processing section, a drawing section, and a depth buffer. Depth information of an object is set to the depth buffer as depth map information. The drawing section performs a first drawing process of performing a depth test, and drawing a primitive surface on a reverse side when viewed from a predetermined viewpoint out of primitive surfaces constituting a collision detection target object with reference to the depth buffer. Further, the drawing section performs a second drawing process of drawing the primitive surface on the reverse side when viewed from a predetermined viewpoint out of the primitive surfaces constituting the collision detection target object without performing the depth test. The processing section determines whether or not the collision detection target object collides with the object on the target side based on the result of the first drawing process and the second drawing process. | 10-25-2012 |
20120294509 | ROBOT CONTROL SYSTEM, ROBOT SYSTEM AND PROGRAM - A robot control system includes a processing unit which performs visual servoing based on a reference image and a picked-up image, a robot control unit which controls a robot based on a control signal, and a storage unit which stores the reference image and a marker. The storage unit stores, as the reference image, a reference image with marker in which the marker is set in an area of a workpiece or a hand of the robot. The processing unit generates, based on the picked-up image, a picked-up image with marker in which the marker is set in an area of the workpiece or the hand of the robot, performs visual servoing based on the reference image with marker and the picked-up image with marker, generates the control signal, and outputs the control signal to the robot control unit. | 11-22-2012 |
20130034295 | OBJECT CATEGORY RECOGNITION METHODS AND ROBOTS UTILIZING THE SAME - Methods for recognizing a category of an object are disclosed. In one embodiment, a method includes determining, by a processor, a preliminary category of a target object, the preliminary category having a confidence score associated therewith, and comparing the confidence score to a learning threshold. If the highest confidence score is less than the learning threshold, the method further includes estimating properties of the target object and generating a property score for one or more estimated properties, and searching a supplemental image collection for supplemental image data using the preliminary category and the one or more estimated properties. Robots programmed to recognize a category of an object by use of supplemental image data are also disclosed. | 02-07-2013 |
20130163853 | APPARATUS FOR ESTIMATING ROBOT POSITION AND METHOD THEREOF - A method for estimating a location of a device uses a color image and a depth image. The method includes matching the color image to the depth image, generating a 3D reference image based on the matching, generating a 3D object image based on the matching, extracting a 2D reference feature point from the reference image, extracting a 2D reference feature point from the object image, matching the extracted reference feature point from the reference image to the extracted reference feature point from the object image, extracting a 3D feature point from the object image using the matched 2D reference feature point, and estimating the location of the device based on the extracted 3D feature point. | 06-27-2013 |
20130266205 | METHOD FOR THE FILTERING OF TARGET OBJECT IMAGES IN A ROBOT SYSTEM - The invention relates to a method and system for recognizing physical objects. In the method an object is gripped with a gripper, which is attached to a robot arm or mounted separately. Using an image sensor, a plurality of source images of an area comprising the object is captured while the object is moved with the robot arm. The camera is configured to move along the gripper, attached to the gripper or otherwise able to monitor the movement of the gripper. Moving image elements are extracted from the plurality of source images by computing a variance image from the source images and forming a filtering image from the variance image. A result image is obtained by using the filtering image as a bitmask. The result image is used for classifying the gripped object. | 10-10-2013 |
20140016856 | PREDICTION OF SUCCESSFUL GRASPS BY END OF ARM TOOLING - Given an image and an aligned depth map of an object, the invention predicts the 3D location, 3D orientation and opening width or area of contact for an end of arm tooling (EOAT) without requiring a physical model. | 01-16-2014 |
20140064601 | ROBOT CONTROL INFORMATION - Vision based tracking of a mobile device is used to remotely control a robot. For example, images captured by a mobile device, e.g., in a video stream, are used for vision based tracking of the pose of the mobile device with respect to the imaged environment. Changes in the pose of the mobile device, i.e., the trajectory of the mobile device, are determined and converted to a desired motion of a robot that is remote from the mobile device. The robot is then controlled to move with the desired motion. The trajectory of the mobile device is converted to the desired motion of the robot using a transformation generated by inverting a hand-eye calibration transformation. | 03-06-2014 |
20140161345 | Methods And Robots For Adjusting Object Detection Parameters, Object Recognition Parameters, Or Both Object Detection Parameters And Object Recognition Parameters - Methods and robots for adjusting object detection parameters, object recognition parameters, or both object detection parameters and object recognition parameters are disclosed. Methods include receiving image data, automatically recognizing an object with an object recognition module based on the image data, determining whether a pose estimation error has occurred, and adjusting at least one object recognition parameter when the pose estimation error has occurred. Methods include receiving image data and automatically detecting a candidate object with an object detection module based on the image data, recognizing an object with an object recognition module based on the detected candidate object, determining whether an object recognition error has occurred, and adjusting the at least one object detection parameter when the object recognition error has occurred. | 06-12-2014 |
20140212025 | AUTOMATIC ONLINE REGISTRATION BETWEEN A ROBOT AND IMAGES - A registration system and method includes a configurable device ( | 07-31-2014 |
20140294286 | THREE DIMENSION MEASUREMENT METHOD, THREE DIMENSION MEASUREMENT PROGRAM AND ROBOT DEVICE - A three-dimensional measurement method three-dimensionally restores an edge having a cross angle close to parallel to an epipolar line. Edges e | 10-02-2014 |
20140314306 | ROBOT FOR MANAGING STRUCTURE AND METHOD OF CONTROLLING THE ROBOT - Provided are a robot for managing a structure, and a method of controlling the robot. The robot for maintaining and repairing the structure measures a luminance value by capturing an image of the structure, or measures depth information of the structure by using a laser sensor or stereo vision, and determines a protruding portion or a depressed portion of the structure by using the measured luminance value or the measured depth information. Also, the robot removes the determined protruding portion and fills the determined depressed portion by using a combination hardener. Accordingly, protrusions, depressions, and cracks in a wall caused by deterioration or poor construction of the structure may be automatically found and repaired, so that the structure can be managed efficiently. | 10-23-2014 |
20140334713 | METHOD AND APPARATUS FOR CONSTRUCTING MAP FOR MOBILE ROBOT - A method and apparatus for constructing a map for a mobile robot to be able to reduce a data amount and increase an approach speed. The method includes: searching for a plurality of feature data occupying an arbitrary space by scanning a surrounding environment of the mobile robot; performing quadtree segmentation on first feature data of the plurality of feature data to generate a plurality of first node information as a result of the quadtree segmentation; determining a position of second feature data of the plurality of feature data with respect to the first feature data; and performing a neighborhood moving algorithm for generating a plurality of second node information of the second feature data according to the position of second feature data by using the plurality of first node information. | 11-13-2014 |
20140334714 | COLLISION DETECTION SYSTEM, ROBOTIC SYSTEM, COLLISION DETECTION METHOD AND PROGRAM - A collision detection system includes a processing section, a drawing section, and a depth buffer. Depth information of an object is set to the depth buffer as depth map information. The drawing section performs a first drawing process of performing a depth test, and drawing a primitive surface on a reverse side when viewed from a predetermined viewpoint out of primitive surfaces constituting a collision detection target object with reference to the depth buffer. Further, the drawing section performs a second drawing process of drawing the primitive surface on the reverse side when viewed from a predetermined viewpoint out of the primitive surfaces constituting the collision detection target object without performing the depth test. The processing section determines whether or not the collision detection target object collides with the object on the target side based on the result of the first drawing process and the second drawing process. | 11-13-2014 |
20150131896 | SAFETY MONITORING SYSTEM FOR HUMAN-MACHINE SYMBIOSIS AND METHOD USING THE SAME - A safety monitoring system for human-machine symbiosis is provided, including a spatial image capturing unit, an image recognition unit, a human-robot-interaction safety monitoring unit, and a process monitoring unit. The spatial image capturing unit, disposed in a working area, acquires at least two skeleton images. The image recognition unit generates at least two spatial gesture images corresponding to the at least two skeleton images, based on information of changes in position of the at least two skeleton images with respect to time. The human-robot-interaction safety monitoring unit generates a gesture distribution based on the at least two spatial gesture images and a safety distance. The process monitoring unit determines whether the gesture distribution meets a safety criterion. | 05-14-2015 |
20150294157 | FEATURE VALUE EXTRACTION APPARATUS AND PLACE ESTIMATION APPARATUS - A place estimation apparatus performs a place estimation process by using position-invariant feature values extracted by a feature value extraction unit. The feature value extraction unit includes a local feature value extraction unit that extracts local feature values from each of a series of successively-shot input images, a feature value matching unit that matches successive input images based on the extracted local feature values, a corresponding feature value selection unit that selects matched feature values as corresponding feature values, and a position-invariant feature value extraction unit that obtains position-invariant feature values based on the corresponding feature values. The position-invariant feature value extraction unit extracts, from among the corresponding feature values, those whose position change is equal to or less than a predetermined threshold as the position-invariant feature values. | 10-15-2015 |
20150343640 | SYSTEM AND METHOD FOR LOCATING VEHICLE COMPONENTS RELATIVE TO EACH OTHER - A method for locating a first vehicle component relative to a second vehicle component includes the following steps: (a) moving the robotic arm to a first position such that a form feature of the first vehicle component is within a field of view of a camera; (b) capturing an image of the form feature of the first vehicle component; (c) moving the robotic arm to a second position such that the form feature of the second vehicle component is within the field of view of the camera; (d) capturing an image of the form feature of the second vehicle component; (e) picking up the second vehicle component using the robotic arm; and (f) moving the robotic arm along with the second vehicle component toward the first vehicle component. | 12-03-2015 |
20160005161 | ROBOT SYSTEM - A robot system includes a processing apparatus that detects one work from a plurality of works, and a robot that operates the detected one work. The processing apparatus includes a display unit that displays image data containing an image of the plurality of works captured by an imaging apparatus, a selection unit that selects a first image and a second image from the image data, and a processing unit that generates a model based on the first image and the second image and detects the one work using the generated model. | 01-07-2016 |
20160035079 | Calibration and Transformation of a Camera System's Coordinate System - Systems and methods are disclosed that determine a mapping between a first camera system's coordinate system and a second camera system's coordinate system; or determine a transformation between a robot's coordinate system and a camera system's coordinate system, and/or locate, in a robot's coordinate system, a tool extending from an arm of the robot based on the tool location in the camera's coordinate system. The disclosed systems and methods may use transformations derived from coordinates of features found in one or more images. The transformations may be used to interrelate various coordinate systems, facilitating calibration of camera systems, including in robotic systems, such as an image-guided robotic systems for hair harvesting and/or implantation. | 02-04-2016 |
20160063309 | Combination of Stereo and Structured-Light Processing - Methods and systems for determining depth information using a combination of stereo and structured-light processing are provided. An example method involves receiving a plurality of images captured with at least two optical sensors, and determining a first depth estimate for at least one surface based on corresponding features between a first image and a second image. Further, the method involves causing a texture projector to project a known texture pattern, and determining, based on the first depth estimate, at least one region of at least one image of the plurality of images within which to search for a particular portion of the known texture pattern. And the method involves determining points corresponding to the particular portion of the known texture pattern within the at least one region, and determining a second depth estimate for the at least one surface based on the determined points corresponding to the known texture pattern. | 03-03-2016 |
20160075032 | CARPET DRIFT ESTIMATION USING DIFFERENTIAL SENSORS OR VISUAL MEASUREMENTS - Apparatus and methods for carpet drift estimation are disclosed. In certain implementations, a robotic device includes an actuator system to move the body across a surface. A first set of sensors can sense an actuation characteristic of the actuator system. For example, the first set of sensors can include odometry sensors for sensing wheel rotations of the actuator system. A second set of sensors can sense a motion characteristic of the body. The first set of sensors may be a different type of sensor than the second set of sensors. A controller can estimate carpet drift based at least on the actuation characteristic sensed by the first set of sensors and the motion characteristic sensed by the second set of sensors. | 03-17-2016 |
20160082598 | HEAD AND AUTOMATED MECHANIZED METHOD WITH VISION - An automated machining head with vision, and an associated procedure, includes a pressure foot with side windows that can open and close, encasing the machining tool, associated with a vertical movement device provided with mechanical locking, vision equipment connected to a computer, and a communications module. The main advantage is endowing an anthropomorphic robot, originally designed for the car industry and of relatively low accuracy, with a notably higher machining accuracy, equivalent to that of much more accurate equipment or of parallel kinematic-type robots, while also compensating, in real time and in a continuous manner, for the off-centring and loss of perpendicularity of the pressure foot, which are common in conventional heads and are a source of errors and inaccuracy. | 03-24-2016 |
20160089783 | ROBOT CLEANER AND CONTROL METHOD THEREOF - A control method for a robot cleaner includes acquiring a plurality of images of the surroundings during travel of the robot cleaner in a cleaning area, estimating a plurality of room-specific feature distributions from the acquired images according to a rule defined for each of a plurality of rooms, acquiring an image of the surroundings at the current position of the robot cleaner, obtaining a comparison reference group including a plurality of room feature distributions by applying the rule for each of the plurality of rooms to the image acquired at the current position, comparing the obtained comparison reference group with the estimated room-specific feature distributions, and determining which room from the plurality of rooms the robot cleaner is currently located in. | 03-31-2016 |
20160101524 | ROBOT CLEANER AND METHOD FOR CONTROLLING THE SAME - A robot cleaner and a control method thereof according to the present disclosure select a seed pixel in an image of an area in front of a main body, obtain a brightness difference between an upper region and a lower region obtained by dividing a predetermined detection area including each of neighboring pixels of the seed pixel and select a pixel belonging to a detection area having a largest brightness difference as a pixel constituting the boundary between objects indicated in the image. Accordingly, the boundary between objects present within a cleaning area can be detected rapidly and correctly through an image. | 04-14-2016 |
20160110840 | IMAGE PROCESSING METHOD, IMAGE PROCESSING DEVICE, AND ROBOT SYSTEM - An image processing method can suppress detection accuracy of a detection target object from being lowered even if the detection target object has a different surface condition because of the influence of various kinds of noise. The image processing method includes the following operations of generating a captured model edge image by executing edge extraction processing on a captured model image acquired by capturing a detection target object, executing pattern matching of the captured model edge image and a model edge image, calculating similarity at respective edge points in the model edge image in the pattern matching of the captured model edge image and the model edge image, selecting an edge point to be eliminated based on the similarity from among the respective edge points in the model edge image, and generating an edge image acquired by eliminating the selected edge point as a final model edge image. | 04-21-2016 |
20160167226 | Systems and Methods for Capturing Images and Annotating the Captured Images with Information | 06-16-2016 |
20160167233 | METHODS AND DEVICES FOR CLEANING GARBAGE | 06-16-2016 |
20160171303 | VISUAL-ASSIST ROBOTS | 06-16-2016 |
20160184998 | ROBOT IDENTIFICATION SYSTEM - A robot identification system includes a robot having a rotatable arm, an imaging unit imaging the robot, an angle detector detecting a rotation angle of the arm, a model generator producing robot models representing the forms of the robot on the basis of the rotation angle detected by the angle detector, and an image identification unit that compares an image captured by the imaging unit with the robot models generated by the model generator to identify a robot image in the image. | 06-30-2016 |
20160203391 | Information Technology Asset Type Identification Using a Mobile Vision-Enabled Robot | 07-14-2016 |
20160253562 | INFORMATION PROCESSING APPARATUS, PROCESSING SYSTEM, OBJECT MOVING SYSTEM, AND OBJECT MOVING METHOD | 09-01-2016 |
20160378117 | BISTATIC OBJECT DETECTION APPARATUS AND METHODS - Apparatus and methods for navigation of a robotic device configured to operate in an environment comprising objects and/or persons. The locations of objects and/or persons may change prior to and/or during operation of the robot. In one embodiment, a bistatic sensor comprises a transmitter and a receiver. The receiver may be spatially displaced from the transmitter. The transmitter may project a pattern on a surface in the direction of robot movement. In one variant, the pattern comprises an encoded portion and an information portion. The information portion may be used to communicate information related to robot movement to one or more persons. The encoded portion may be used to determine the presence of one or more objects in the path of the robot. The receiver may sample a reflected pattern and compare it with the transmitted pattern. Based on a similarity measure breaching a threshold, an indication of object presence may be produced. | 12-29-2016 |
20170235314 | LOCATION-BASED CONTROL METHOD AND APPARATUS, MOVABLE MACHINE AND ROBOT | 08-17-2017 |