A Multimodal Dataset for Object Model Learning from Natural Human-Robot Interaction

Abstract: Learning object models in the wild from natural human interactions is an essential ability for robots that must perform general tasks. In this paper we present a robocentric multimodal dataset addressing this key challenge. Our dataset focuses on interactions in which the user teaches new objects to the robot in various ways. It contains synchronized recordings of visual (three cameras) and audio data, which provide a challenging evaluation framework for different tasks. Additionally, we present an end-to-end system that learns object models from object patches extracted from the recorded natural interactions. Our proposed pipeline follows three steps: (a) recognizing the interaction type, (b) detecting the object on which the interaction focuses, and (c) learning the models from the extracted data. Our main contribution lies in the steps for identifying the target object patches in the images. We demonstrate the advantages of combining language and visual features for interaction recognition, and we use multiple views to improve the object modelling. Our experimental results show that our dataset is challenging due to occlusions and the domain shift with respect to typical object-learning frameworks: the performance of common out-of-the-box classifiers trained on our data is low, and we demonstrate that our algorithm outperforms such baselines.
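To make the fusion idea in step (a) concrete, here is a minimal, hypothetical Python sketch of combining language and visual features for interaction-type recognition. It is not the authors' pipeline: the bag-of-words text features, the color-histogram visual features, the SVM classifier, and all utterances, labels, and frames below are illustrative assumptions, chosen only to show late fusion by feature concatenation.

# Hypothetical sketch of late fusion for interaction recognition:
# concatenate a language feature (bag-of-words over the user's utterance)
# with a simple visual feature (color histogram of a camera frame) and
# train an off-the-shelf classifier. All data and feature choices are
# illustrative, not the method from the paper.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

# Toy training utterances with interaction-type labels (illustrative).
utterances = [
    "this is a mug",          # user shows an object
    "look, a red mug",        # user shows an object
    "take the bottle",        # user hands an object over
    "here, grab this bottle", # user hands an object over
]
labels = ["show", "show", "handover", "handover"]

# Language features: bag-of-words over the transcribed speech.
vectorizer = CountVectorizer()
text_feats = vectorizer.fit_transform(utterances).toarray()

def color_histogram(frame, bins=8):
    """Per-channel color histogram of an RGB frame, normalized to sum to 1."""
    hist = [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

# Stand-in camera frames (random images); real frames would come from the
# dataset's three synchronized cameras.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(48, 64, 3)) for _ in utterances]
visual_feats = np.stack([color_histogram(f) for f in frames])

# Late fusion by concatenation, then a standard linear SVM.
X = np.hstack([text_feats, visual_feats])
clf = SVC(kernel="linear").fit(X, labels)

# Classify a new interaction from its utterance and frame.
new_text = vectorizer.transform(["grab this cup"]).toarray()
new_frame = color_histogram(rng.integers(0, 256, size=(48, 64, 3)))
print(clf.predict(np.hstack([new_text, new_frame[None, :]])))

The design point this sketch captures is that the two modalities are complementary: the utterance disambiguates the interaction type while the frame grounds it visually, which is why concatenating the features before classification can outperform either modality alone.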
Document type:
Conference paper
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Sep 2017, Vancouver, Canada. 〈http://www.iros2017.org/〉


https://hal.inria.fr/hal-01567236
Contributor: Florian Golemo
Submitted on: Friday, July 21, 2017 - 21:36:44
Last modified on: Thursday, November 16, 2017 - 17:12:03

File

2017_IROS_Azagra_FinalVersion....
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01567236, version 1

Citation

Pablo Azagra, Florian Golemo, Yoan Mollard, Manuel Lopes, Javier Civera, et al. A Multimodal Dataset for Object Model Learning from Natural Human-Robot Interaction. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Sep 2017, Vancouver, Canada. 〈http://www.iros2017.org/〉. 〈hal-01567236〉


Metrics

Record views: 129
File downloads: 95