Grasping by Romeo with visual servoing

Giovanni Claudio 1, * Fabien Spindler 1 François Chaumette 1
* Corresponding author
1 Lagadic - Visual servoing in robotics, computer vision, and augmented reality
CRISAM - Inria Sophia Antipolis - Méditerranée , Inria Rennes – Bretagne Atlantique , IRISA-D5 - SIGNAUX ET IMAGES NUMÉRIQUES, ROBOTIQUE
Abstract: The purpose of our work is to adapt the visual servoing framework proposed in [1] for the upper body of the humanoid robot REEM to Romeo, a 37-degrees-of-freedom humanoid robot designed by Aldebaran Robotics. The main application is a manipulation task using one of its arms. The robot has to detect and track with its gaze a box placed on a table in front of it, estimate the pose of the box with respect to one of its eye cameras, bring its arm near the box and then move the arm using visual feedback so that it can grasp the box accurately. Once this is achieved, it detects a human and delivers the box (see the sequence in Fig. 1). A visual servoing approach is used to improve robustness and to overcome the coarse calibration of the robot. The implementation is composed of two controllers: an arm velocity controller and a head-gaze velocity controller. Each of Romeo's arms has seven degrees of freedom (two in the shoulder, two in the elbow and three in the wrist). Since the hand is not equipped with any force sensor, only vision is used to determine when the hand is close enough to the box to perform a successful grasp. To compute the pose of the hand we use a QR-code (automatically detected) located on the hand. Since the box has known dimensions, the model-based tracker available in ViSP 1 [2] allows computing its pose in space. This is a typical eye-to-hand Pose-Based Visual Servoing (PBVS) [3] scheme that moves the hand from its current pose (estimated from the pose of the QR-code) to the desired one (the grasping pose computed from the current pose of the box). Note that the box can be placed at any reachable location and that the arm reactively adjusts its pose if the box is moved during the grasping process. Furthermore, the aim of the gaze controller is to keep both the hand (QR-code) and the box in the field of view of the eye camera. For this task two joints of the neck, two of the head and two of the eye are used.
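The eye-to-hand PBVS step can be sketched as follows. This is a minimal, library-free Python illustration (the actual implementation relies on ViSP, which is C++), assuming poses are given as translation vectors and rotation matrices and using the classical control law v = -λe with the (t, θu) error parametrization:

```python
import numpy as np

def thetau(R):
    """Axis-angle (theta * u) vector from a rotation matrix."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-9:
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(angle))
    return angle * axis

def pbvs_velocity(t_cur, R_cur, t_des, R_des, lam=0.5):
    """Velocity screw (v, w) driving the current hand pose toward the
    desired grasping pose: v = -lam * e, e = (t_cur - t_des, theta*u)."""
    e_t = t_cur - t_des
    e_r = thetau(R_cur @ R_des.T)
    return -lam * np.concatenate([e_t, e_r])
```

At the desired pose the error, and hence the commanded velocity, is zero; elsewhere the hand moves along the straight line reducing the translation error while rotating about the θu axis.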
This is a typical eye-in-hand Image-Based Visual Servoing (IBVS) scheme in which the visual feature is the midpoint between the QR-code and the box positions in the image. For both controllers, the redundancy-based strategy proposed in [4] is used for joint-limit avoidance. Thanks to the large projection operator, the robot avoids its joint limits very smoothly even when the main task constrains all the robot's degrees of freedom. From the implementation point of view, three main libraries are used: ViSP (model-based and template tracking, pose estimation, image- and pose-based visual servoing), Metapod 2 from LAAS (kinematic model) and OpenCV 3 (face detection).
Figure 1: Romeo grasping a box and delivering it to a human.
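The redundancy-based joint-limit avoidance can be sketched as below. This simplified Python sketch uses the classical null-space projector P = I - J⁺J as a stand-in for the large projection operator of [4] (which frees more degrees of freedom near task completion); the secondary task pushing joints toward mid-range is a common illustrative choice, not necessarily the authors' exact cost function:

```python
import numpy as np

def joint_velocity(J, e, q, q_min, q_max, lam=0.5, gain_sec=1.0):
    """Main task q_dot = -lam * J^+ e, plus a joint-limit-avoidance
    term projected onto the null space of the main task."""
    n = J.shape[1]
    J_pinv = np.linalg.pinv(J)
    qdot_main = -lam * J_pinv @ e
    # Secondary task: push each joint toward the middle of its range.
    mid = 0.5 * (q_min + q_max)
    qdot_sec = -gain_sec * (q - mid)
    # Classical projector; [4] replaces it with a larger one.
    P = np.eye(n) - J_pinv @ J
    return qdot_main + P @ qdot_sec
```

Because the secondary term is projected into the null space of J, it never perturbs the main visual task: J @ q_dot is unchanged by the avoidance term.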
Document type:
Conference communication
Journées Nationales de la Robotique Humanoïde, JNRH, Jun 2015, Nantes, France

Cited literature: [4 references]
Contributor: Eric Marchand
Submitted on: Thursday, June 4, 2015 - 09:28:36
Last modified on: Tuesday, January 16, 2018 - 15:54:11
Document(s) archived on: Tuesday, September 15, 2015 - 10:46:27


Files produced by the author(s)


  • HAL Id : hal-01159882, version 1


Giovanni Claudio, Fabien Spindler, François Chaumette. Grasping by Romeo with visual servoing. Journées Nationales de la Robotique Humanoïde, JNRH, Jun 2015, Nantes, France. 〈hal-01159882〉


