Training Deep Neural Networks for Visual Servoing - Archive ouverte HAL
Conference Paper, 2018


Abstract

We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF positioning tasks by visual servoing. A convolutional neural network is fine-tuned to estimate the relative pose between the current and desired images, and a pose-based visual servoing control law is used to reach the desired pose. The paper describes how to efficiently and automatically create the dataset used to train the network. We show that this enables robust handling of various perturbations (occlusions and lighting variations). We then propose training a scene-agnostic network by feeding both the desired and current images into a deep network. The method is validated on a 6 DOF robot.
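The pose-based visual servoing step mentioned in the abstract can be illustrated with one classic control law from the visual servoing literature. The sketch below is not the authors' exact implementation; it assumes the network outputs the relative pose of the current camera frame expressed in the desired frame, as a translation `t` and an axis-angle rotation `theta_u`, with `R` the corresponding rotation matrix:

```python
import numpy as np

def pbvs_control(t, theta_u, R, lam=0.5):
    """One classic pose-based visual servoing law.

    With the pose feature s = (t, theta*u) and desired value s* = 0,
    the decoupled control is:
        v_c     = -lam * R^T @ t    (camera translational velocity)
        omega_c = -lam * theta_u    (camera rotational velocity)
    where t and R are the translation and rotation of the current
    camera frame expressed in the desired frame, theta_u is the
    axis-angle vector of R, and lam is a positive gain.
    """
    v = -lam * R.T @ t       # drive translation error to zero
    omega = -lam * theta_u   # drive rotation error to zero
    return v, omega

# Usage: at the goal pose the commanded velocity is zero.
v, omega = pbvs_control(np.zeros(3), np.zeros(3), np.eye(3))
# A residual translation along x yields a corrective velocity along -x.
v2, _ = pbvs_control(np.array([0.1, 0.0, 0.0]), np.zeros(3), np.eye(3), lam=0.5)
```

In a full servo loop, the network's pose estimate would be refreshed at each frame and the resulting velocity twist sent to the robot controller until the error vanishes.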
Main file: ICRA18_0906_FI.pdf (2.15 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01716679 , version 1 (23-02-2018)

Cite

Quentin Bateux, Eric Marchand, Jürgen Leitner, François Chaumette, Peter I Corke. Training Deep Neural Networks for Visual Servoing. ICRA 2018 - IEEE International Conference on Robotics and Automation, May 2018, Brisbane, Australia. pp.3307-3314, ⟨10.1109/ICRA.2018.8461068⟩. ⟨hal-01716679⟩