Visual Servoing from Deep Neural Networks

Quentin Bateux 1, Eric Marchand 1, Jürgen Leitner 2, François Chaumette 1, Peter Corke 2
1 Lagadic - Visual servoing in robotics, computer vision, and augmented reality (IRISA-D5 - SIGNAUX ET IMAGES NUMÉRIQUES, ROBOTIQUE; CRISAM - Inria Sophia Antipolis - Méditerranée; Inria Rennes – Bretagne Atlantique)
Abstract: We present a deep neural network-based method for high-precision, robust, real-time 6-DOF visual servoing. The paper describes how to create a dataset simulating various perturbations (occlusions and lighting conditions) from a single real-world image of the scene. A convolutional neural network is fine-tuned on this dataset to estimate the relative pose between two images of the same scene. The network output is then employed in a visual servoing control scheme. The method converges robustly even in difficult real-world settings with strong lighting variations and occlusions. Experiments with a 6-DOF robot achieve a positioning error below one millimeter.
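The closed-loop scheme the abstract outlines can be sketched as follows. This is an illustrative mock-up, not the authors' implementation: `estimate_relative_pose` is a stub standing in for the fine-tuned CNN (which, in the paper, regresses the relative pose directly from the two images), and the gain, time step, and convergence tolerance values are assumptions.

```python
import numpy as np

def estimate_relative_pose(current_pose, desired_pose):
    """Stand-in for the CNN pose regressor. In the paper the 6-DOF
    relative pose is estimated from the current and reference images;
    here it is computed from known poses purely to exercise the loop."""
    return current_pose - desired_pose

def control_loop(initial_pose, desired_pose, gain=0.5, dt=0.1,
                 tol=1e-3, max_iters=1000):
    """Classical proportional visual-servoing loop: the velocity command
    v = -gain * e drives the estimated 6-DOF pose error e to zero."""
    pose = initial_pose.astype(float).copy()
    for _ in range(max_iters):
        e = estimate_relative_pose(pose, desired_pose)
        if np.linalg.norm(e) < tol:   # converged (sub-millimeter scale)
            break
        v = -gain * e                 # proportional control law
        pose += v * dt                # integrate the velocity command
    return pose

desired = np.zeros(6)                 # pose at the reference image
start = np.array([0.2, -0.1, 0.3, 0.05, -0.02, 0.1])
final = control_loop(start, desired)
print(np.linalg.norm(final - desired) < 1e-3)  # prints True
```

With a perfect pose estimate the error decays geometrically by a factor of (1 - gain * dt) per iteration; the paper's contribution is making the estimate itself robust to occlusions and lighting changes.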
Document type :
Conference papers
Cited literature: 23 references
Contributor: Eric Marchand
Submitted on: Thursday, December 7, 2017 - 11:36:32 AM
Last modification on: Saturday, July 11, 2020 - 3:14:56 AM
Long-term archiving on: Thursday, March 8, 2018 - 12:10:24 PM


  • HAL Id: hal-01589887, version 1


Quentin Bateux, Eric Marchand, Jürgen Leitner, François Chaumette, Peter Corke. Visual Servoing from Deep Neural Networks. RSS 2017 - Robotics: Science and Systems, Workshop New Frontiers for Deep Learning in Robotics, Jul 2017, Boston, United States. pp. 1-6. ⟨hal-01589887⟩


