Visual Servoing in Autoencoder Latent Space - Archive ouverte HAL
Journal article, IEEE Robotics and Automation Letters, 2022

Visual Servoing in Autoencoder Latent Space

Samuel Felton (1, 2), Pascal Brault (2), Elisa Fromont (1, 3), Eric Marchand (2)

Abstract

Visual servoing (VS) is a common way in robotics to control robot motion using information acquired by a camera. This approach requires extracting visual information from the image to design the control law, and the resulting servo loop is built to minimize an error expressed in the image space. We consider direct visual servoing (DVS) from whole images and propose a new framework to perform VS in the latent space learned by a convolutional autoencoder. We show that this latent space avoids explicit feature extraction and tracking issues and provides a good representation, smoothing the cost function of the VS process. Moreover, our experiments show that this unsupervised learning approach yields, without labelling cost, accurate end-positioning, often on par with the best DVS methods in terms of accuracy but with a larger convergence area.
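The servo loop described above can be sketched in a few lines: encode the current and desired images into latent vectors, and drive the pose with the classic velocity law v = -λ J⁺ e, where e is the latent error and J the Jacobian of the latent with respect to the pose. This is a minimal toy sketch, not the paper's implementation: the linear "encoder" and linear "camera" below are hypothetical stand-ins for the convolutional autoencoder and real image formation, and the Jacobian is estimated by finite differences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a linear "encoder" E and a linear "camera" mapping
# a 6-DoF pose to an image vector. In the paper these are a convolutional
# autoencoder and a real camera.
W_enc = rng.standard_normal((8, 32))   # latent dim 8, "image" dim 32
W_cam = rng.standard_normal((32, 6))   # image response to the 6-DoF pose

def render(pose):
    """Toy image formation: image as a linear function of the pose."""
    return W_cam @ pose

def encode(image):
    """Toy encoder E(I) standing in for the CNN encoder."""
    return W_enc @ image

def latent_jacobian(pose, eps=1e-4):
    """Finite-difference Jacobian dz/dr of the latent w.r.t. the pose."""
    z0 = encode(render(pose))
    J = np.zeros((z0.size, pose.size))
    for i in range(pose.size):
        dp = np.zeros_like(pose)
        dp[i] = eps
        J[:, i] = (encode(render(pose + dp)) - z0) / eps
    return J

def servo(pose, pose_star, lam=0.5, iters=50):
    """Latent-space servo loop: v = -lam * pinv(J) @ (E(I) - E(I*))."""
    z_star = encode(render(pose_star))        # desired latent features
    for _ in range(iters):
        e = encode(render(pose)) - z_star     # error in latent space
        v = -lam * np.linalg.pinv(latent_jacobian(pose)) @ e
        pose = pose + v                       # integrate velocity command
    return pose
```

With these linear stand-ins the loop contracts the latent error geometrically (by a factor of 1 - λ per iteration), which mirrors the exponential decrease targeted by the real control law.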
Main file: 2022_ral_felton.pdf (1.11 MB). Origin: files produced by the author(s).

Dates and versions

hal-03506036 , version 1 (11-01-2022)

Cite

Samuel Felton, Pascal Brault, Elisa Fromont, Eric Marchand. Visual Servoing in Autoencoder Latent Space. IEEE Robotics and Automation Letters, 2022, 7 (2), pp.3234-3241. ⟨10.1109/LRA.2022.3144490⟩. ⟨hal-03506036⟩