Integrating grasp planning and visual servoing for automatic grasping

Radu Horaud 1 Fadi Dornaika 1 Christian Laugier 2 Christian Bard 2
1 MOVI - Modeling, localization, recognition and interpretation in computer vision
GRAVIR - IMAG - Graphisme, Vision et Robotique, Inria Grenoble - Rhône-Alpes, CNRS - Centre National de la Recherche Scientifique : FR71
2 SHARP - Automatic Programming and Decisional Systems in Robotics
GRAVIR - IMAG - Graphisme, Vision et Robotique, Inria Grenoble - Rhône-Alpes
Abstract: In this paper we describe a method for aligning a robot gripper - or any other end effector - with an object; grasping is one example of such a gripper/object alignment. The task consists of first computing an alignment condition and then servoing the robot so that it moves to the desired position. A single camera provides the visual feedback needed to estimate the location of the object to be grasped, to determine the gripper/object alignment condition, and to dynamically control the robot's motion. The original contributions of this paper are the following. Since the camera is not mounted on the robot, it is crucial to express the alignment condition so that it does not depend on the intrinsic and extrinsic camera parameters. We therefore develop a method for expressing the alignment condition (the relative location of the gripper with respect to the object) such that it is projectively invariant, i.e., it is view invariant and does not require a calibrated camera. The central issue of any image-based servoing method is the estimation of the image Jacobian, which relates the 3-D velocity field of a moving object to the image velocity field. In the past, exact estimation of this Jacobian has been avoided for lack of a fast and robust method to estimate the pose of a 3-D object with respect to a camera. We discuss the advantage of using an exact image Jacobian with respect to the dynamic behaviour of the servoing process. From an experimental point of view, we describe a grasping experiment involving image-based object localization, grasp planning, and visual servoing.
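To make the image Jacobian concrete, the sketch below gives the standard interaction matrix for a single point feature under a pinhole camera model, as commonly used in image-based visual servoing. This is an illustrative textbook form, not the paper's exact (pose-based, exact-depth) formulation; the function name and interface are assumptions.

```python
import numpy as np

def point_image_jacobian(x, y, Z):
    """Interaction matrix L for one point feature with normalized image
    coordinates (x, y) and depth Z (pinhole model, illustrative sketch).
    L maps the camera twist (vx, vy, vz, wx, wy, wz) to the image
    velocity (x_dot, y_dot)."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,        -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y * y,  -x * y,         -x],
    ])

# Example: a point on the optical axis at depth Z = 1. Its image motion
# responds only to translation along x/y and rotation about x/y.
L = point_image_jacobian(0.0, 0.0, 1.0)
```

Note that every entry of the translational part depends on the depth Z; this is why the paper stresses exact pose estimation - with the true Z the Jacobian is exact rather than approximated at a reference depth.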
Document type:
Book chapter
In: Oussama Khatib and J. Kenneth Salisbury (eds.), Experimental Robotics IV: The 4th International Symposium, Stanford, California, June 30 - July 2, 1995. Lecture Notes in Control and Information Sciences, vol. 223, Springer Berlin / Heidelberg, pp. 71-82, 1997. DOI: 10.1007/BFb0035198

https://hal.inria.fr/inria-00590074
Contributor: Team Perception
Submitted on: Tuesday, May 3, 2011 - 09:14:54
Last modified on: Wednesday, April 11, 2018 - 01:53:26

Citation

Radu Horaud, Fadi Dornaika, Christian Laugier, Christian Bard. Integrating grasp planning and visual servoing for automatic grasping. In: Oussama Khatib and J. Kenneth Salisbury (eds.), Experimental Robotics IV: The 4th International Symposium, Stanford, California, June 30 - July 2, 1995. Lecture Notes in Control and Information Sciences, vol. 223, Springer Berlin / Heidelberg, pp. 71-82, 1997. DOI: 10.1007/BFb0035198. HAL id: inria-00590074
