
Training Deep Neural Networks for Visual Servoing

Quentin Bateux 1, Eric Marchand 1, Jürgen Leitner 2, François Chaumette 1, Peter Corke 2
1 RAINBOW - Sensor-based and interactive robotics, Inria Rennes – Bretagne Atlantique, IRISA-D5 - SIGNAUX ET IMAGES NUMÉRIQUES, ROBOTIQUE
2 Queensland University of Technology (QUT), Brisbane, Australia
Abstract: We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF positioning tasks by visual servoing. A convolutional neural network is fine-tuned to estimate the relative pose between the current and desired images, and a pose-based visual servoing control law is used to reach the desired pose. The paper describes how to efficiently and automatically create the dataset used to train the network. We show that this enables robust handling of various perturbations (occlusions and lighting variations). We then propose to train a scene-agnostic network by feeding both the desired and current images into the network. The method is validated on a 6 DOF robot.
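
To make the pipeline described in the abstract concrete, the following is a minimal sketch (not the authors' code): a CNN is fine-tuned to regress a 6-D relative pose between the current and desired images, and a classical pose-based visual servoing (PBVS) law turns that estimate into a camera velocity command. The backbone choice, the (t, theta*u) pose parameterization, the stacked-input option for the scene-agnostic variant, and the helper names (RelativePoseNet, pbvs_velocity) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
import torch.nn as nn
from torchvision import models


class RelativePoseNet(nn.Module):
    """Fine-tuned CNN mapping an image (or a stacked current+desired image pair)
    to a 6-D relative pose: 3 translations and a theta*u axis-angle rotation."""

    def __init__(self, two_stream: bool = False):
        super().__init__()
        backbone = models.alexnet(weights=None)        # backbone choice is an assumption
        in_channels = 6 if two_stream else 3           # stack both images for the scene-agnostic variant
        backbone.features[0] = nn.Conv2d(in_channels, 64, kernel_size=11, stride=4, padding=2)
        backbone.classifier[-1] = nn.Linear(4096, 6)   # regress (t, theta*u)
        self.net = backbone

    def forward(self, x):
        return self.net(x)


def pbvs_velocity(pose_6d: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """One common PBVS control law: exponential decrease of the pose error.
    pose_6d = (t, theta*u), the estimated pose of the desired frame
    expressed in the current camera frame."""
    t, thetau = pose_6d[:3], pose_6d[3:]
    v = -lam * t          # translational velocity command
    omega = -lam * thetau  # rotational velocity command
    return np.concatenate([v, omega])
```
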
Document type: Conference papers

Cited literature: 30 references

https://hal.inria.fr/hal-01716679
Contributor: Eric Marchand
Submitted on: Friday, February 23, 2018 - 10:36:22 PM
Last modification on: Saturday, July 11, 2020 - 3:14:53 AM
Long-term archiving on: Friday, May 25, 2018 - 1:18:59 PM

File

ICRA18_0906_FI.pdf
Files produced by the author(s)

Identifiers

HAL Id: hal-01716679
DOI: 10.1109/ICRA.2018.8461068
Citation

Quentin Bateux, Eric Marchand, Jürgen Leitner, François Chaumette, Peter Corke. Training Deep Neural Networks for Visual Servoing. ICRA 2018 - IEEE International Conference on Robotics and Automation, May 2018, Brisbane, Australia. pp.3307-3314, ⟨10.1109/ICRA.2018.8461068⟩. ⟨hal-01716679⟩

Metrics

Record views: 1134
Files downloads: 3564