Archive ouverte HAL - Conference paper, 2017

Visual Servoing from Deep Neural Networks

Quentin Bateux (1), Eric Marchand (1), Jürgen Leitner (2), François Chaumette (1), Peter Corke (2)

Abstract

We present a deep neural network-based method for high-precision, robust, real-time 6 DOF visual servoing. The paper describes how to create a dataset simulating various perturbations (occlusions and lighting changes) from a single real-world image of the scene. A convolutional neural network is fine-tuned on this dataset to estimate the relative pose between two images of the same scene. The network's output is then used in a visual servoing control scheme. The method converges robustly even in difficult real-world settings with strong lighting variations and occlusions, and achieves a positioning error below one millimeter in experiments with a 6 DOF robot.
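The paper's control scheme itself is not reproduced on this page, but the kind of output the network produces (a relative pose between current and desired views) is what a classic pose-based visual servoing law consumes. The sketch below is an illustration of that standard law, not the authors' implementation; the gain `lam` and the small-angle additive integration are simplifying assumptions.

```python
import numpy as np

def pbvs_control(t, theta_u, lam=0.5):
    """Classic pose-based visual servoing law: given a relative pose
    (translation t, axis-angle rotation theta*u) between the current and
    desired camera frames, command velocities that decay both errors
    exponentially: v = -lam * t, omega = -lam * theta*u."""
    v = -lam * np.asarray(t, dtype=float)
    omega = -lam * np.asarray(theta_u, dtype=float)
    return v, omega

# Toy closed-loop simulation: integrating the commanded velocity drives
# the pose error toward zero. Rotation is integrated additively here,
# which is only valid for small angles (an assumption of this sketch).
t = np.array([0.10, -0.04, 0.25])      # translation error (metres)
theta_u = np.array([0.0, 0.1, -0.05])  # rotation error (radians, axis-angle)
dt = 0.1                               # control period (seconds)
for _ in range(200):
    v, omega = pbvs_control(t, theta_u)
    t += v * dt
    theta_u += omega * dt
```

After 200 steps the error norms shrink by a factor of roughly (1 - lam*dt)^200, i.e. to well under a millimeter in this toy setting; in the paper, the pose fed to such a loop comes from the fine-tuned CNN rather than a geometric estimator.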
Main file: RSS2017_Workshop.pdf (1.6 MB)

Dates and versions

hal-01589887 , version 1 (07-12-2017)

Cite

Quentin Bateux, Eric Marchand, Jürgen Leitner, François Chaumette, Peter Corke. Visual Servoing from Deep Neural Networks. RSS 2017 - Robotics: Science and Systems, Workshop New Frontiers for Deep Learning in Robotics, Jul 2017, Boston, United States. pp. 1-6. ⟨hal-01589887⟩