A Multimodal Dataset for Interactive and Incremental Learning of Object Models

Abstract: This work presents an incremental object learning framework oriented toward human-robot assistance and interaction. To learn new object models from interactions with a human user, the robot must be able to perform several recognition tasks: (a) recognize the type of interaction, (b) segment regions of interest from the acquired data, and (c) learn and recognize object models. The contributions of this work focus on the recognition modules of this human-robot interactive framework. First, we illustrate the advantages of multimodal data over camera-only datasets, presenting an approach that recognizes the user interaction by combining simple image and language features. Second, we propose an incremental approach to learning visual object models, which achieves performance comparable to a typical offline-trained system. We evaluate on two public datasets, one of which is presented and released with this work. This dataset contains synchronized recordings of user speech and of three cameras mounted on a robot, capturing the user teaching object names to the robot.
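The incremental-learning idea summarized above lends itself to a brief illustration. The following is a minimal sketch, not the paper's actual method: it assumes scikit-learn's SGDClassifier with partial_fit for online updates, and the fake_image_features / fake_speech_features helpers are hypothetical stand-ins for the visual and language features, fabricated here only to mimic a stream of teaching interactions.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def fake_image_features(label, dim=32):
    # Hypothetical stand-in for a visual descriptor of one object view.
    return rng.normal(loc=label, scale=1.0, size=dim)

def fake_speech_features(label, dim=8):
    # Hypothetical stand-in for features from the user's naming utterance.
    return rng.normal(loc=label, scale=1.0, size=dim)

classes = np.array([0, 1, 2])               # object identities taught so far
model = SGDClassifier(random_state=0)       # linear model supporting online updates

# Simulate a stream of teaching interactions: each time the user shows an
# object and names it, update the model with that single multimodal sample.
for step in range(300):
    label = rng.choice(classes)
    x = np.concatenate([fake_image_features(label), fake_speech_features(label)])
    model.partial_fit(x.reshape(1, -1), [label], classes=classes)

# Query the incrementally trained model on a fresh observation.
x_new = np.concatenate([fake_image_features(1), fake_speech_features(1)])
print("predicted object id:", model.predict(x_new.reshape(1, -1))[0])

Because partial_fit updates the model one sample at a time, new objects can be taught without retraining from scratch; the abstract reports that this style of incremental learning reaches performance comparable to an offline-trained system.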
Document type:
Preprint, working paper
2016

Cited literature: 26 references

https://hal.inria.fr/hal-01402493
Contributor: Florian Golemo
Submitted on: Thursday, November 24, 2016 - 17:00:35
Last modified on: Tuesday, April 17, 2018 - 09:08:41
Document(s) archived on: Tuesday, March 21, 2017 - 04:10:19

File

multimodal-dataset-interactive...
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01402493, version 1

Citation

Pablo Azagra, Yoan Mollard, Florian Golemo, Ana Murillo, Manuel Lopes, et al. A Multimodal Dataset for Interactive and Incremental Learning of Object Models. 2016. 〈hal-01402493〉


Metrics

Record views: 408
File downloads: 449