A Multimodal Human-Robot Interaction Dataset

Abstract: This work presents a multimodal dataset for Human-Robot Interactive Learning. The dataset contains synchronized recordings of several human users, from a stereo microphone and three cameras mounted on the robot. The focus of the dataset is incremental object learning, oriented to human-robot assistance and interaction. To learn new object models from interactions with a human user, the robot needs to be able to perform multiple tasks: (a) recognize the type of interaction (pointing, showing or speaking), (b) segment regions of interest from acquired data (hands and objects), and (c) learn and recognize object models. We illustrate the advantages of multimodal data over camera-only datasets by presenting an approach that recognizes the user interaction by combining simple image and language features.
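To make the last sentence of the abstract concrete, below is a minimal sketch of an interaction classifier that fuses one simple image feature with bag-of-words language features. The specific cues (a crude skin-pixel fraction as the visual feature, word counts over the speech transcript) and the logistic-regression model are illustrative assumptions, not the pipeline actually used in the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def image_feature(frame_rgb):
    """Toy visual cue: fraction of roughly skin-colored pixels in the frame,
    a crude proxy for how prominent the user's hand/arm is in the robot's view.
    (Illustrative assumption, not the paper's feature.)"""
    r, g, b = frame_rgb[..., 0].astype(int), frame_rgb[..., 1].astype(int), frame_rgb[..., 2].astype(int)
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
    return np.array([skin.mean()])

class InteractionClassifier:
    """Concatenate the image feature with bag-of-words features from the
    speech transcript, then classify the interaction type."""
    def __init__(self):
        self.vectorizer = CountVectorizer()
        self.clf = LogisticRegression(max_iter=1000)

    def _features(self, frames, transcripts, fit):
        visual = np.vstack([image_feature(f) for f in frames])
        if fit:
            lang = self.vectorizer.fit_transform(transcripts).toarray()
        else:
            lang = self.vectorizer.transform(transcripts).toarray()
        return np.hstack([visual, lang])

    def fit(self, frames, transcripts, labels):
        # labels: e.g. "point", "show", "speak"
        self.clf.fit(self._features(frames, transcripts, fit=True), labels)
        return self

    def predict(self, frames, transcripts):
        return self.clf.predict(self._features(frames, transcripts, fit=False))

# Usage (hypothetical data): clf = InteractionClassifier().fit(train_frames, train_texts, train_labels)
#                            predictions = clf.predict(test_frames, test_texts)
```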
Document type:
Poster
NIPS 2016, workshop Future of Interactive Learning Machines, Dec 2016, Barcelona, Spain


https://hal.inria.fr/hal-01402479
Contributor: Florian Golemo
Submitted on: Wednesday, December 7, 2016 - 16:10:42
Last modified on: Thursday, November 16, 2017 - 17:12:03
Document(s) archived on: Wednesday, March 22, 2017 - 23:49:41

File

multimodal-dataset-nips(1).pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01402479, version 1

Citation

Pablo Azagra, Yoan Mollard, Florian Golemo, Ana Cristina Murillo, Manuel Lopes, et al. A Multimodal Human-Robot Interaction Dataset. NIPS 2016, workshop Future of Interactive Learning Machines, Dec 2016, Barcelona, Spain. 〈hal-01402479〉
