Inferring symbols from demonstrations to support vector-symbolic planning in a robotic assembly task - Inria - Institut national de recherche en sciences et technologies du numérique
Conference paper, 2020

Inferring symbols from demonstrations to support vector-symbolic planning in a robotic assembly task

Abstract

While deep reinforcement learning is increasingly used to solve complex sensorimotor tasks, it requires vast amounts of experience to achieve adequate performance. Humans, by comparison, can correctly learn many tasks from just a small handful of demonstrations. A common explanation of this difference points to the human use of symbolic representations that constrain search spaces, enable instruction following, and promote the transfer and reuse of knowledge across tasks. In prior work, we developed neural network models that use vector-symbolic representations to plan complex actions, but the challenge of learning such representations and grounding them in sensorimotor data remains unsolved. As a step towards addressing this challenge, we use a programming-by-demonstration approach that automatically extracts symbols from sensorimotor data during a simulated robotic assembly task. The system first observes a small number of demonstrations of the task, and then constructs models of the associated states and actions. The system infers task-relevant states by clustering the positions and orientations of objects manipulated during the demonstrations. It then builds vector-symbolic representations of these states, and of actions defined in terms of transitions between states. This approach allows rapid and automatic sensory grounding of vector-symbolic representations, which in turn support flexible, robust, and biologically plausible control.
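The pipeline the abstract describes — clustering object poses observed in demonstrations into discrete task states, then encoding those states and the actions that transition between them as vector-symbolic structures — can be sketched roughly as follows. This is a toy illustration with made-up 2-D positions and generic HRR-style circular-convolution binding, not the authors' implementation; all names (`kmeans`, `PRE`, `POST`, etc.) are hypothetical.

```python
import math
import random

random.seed(0)

# Step 1: infer task-relevant states by clustering demonstrated object
# positions (hypothetical 2-D positions; the paper also uses orientations).
demo_positions = [(0.02, 0.01), (0.01, -0.02), (1.01, 0.99), (0.98, 1.02)]

def kmeans(points, k, iters=20):
    """Plain k-means on 2-D points; returns the cluster centers."""
    centers = list(points[:k])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            groups[i].append(p)
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

state_centers = kmeans(demo_positions, k=2)  # two inferred object states

# Step 2: build vector-symbolic representations. States are random
# high-dimensional vectors; an action is represented by binding its
# pre- and post-condition states to role vectors via circular convolution.
D = 512

def rand_vec(d=D):
    return [random.gauss(0.0, 1.0 / math.sqrt(d)) for _ in range(d)]

def cconv(a, b):
    """Circular convolution (HRR binding)."""
    d = len(a)
    return [sum(a[j] * b[(i - j) % d] for j in range(d)) for i in range(d)]

def involution(a):
    """Approximate inverse for unbinding: a* = (a0, a_{d-1}, ..., a1)."""
    return [a[0]] + a[:0:-1]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

PRE, POST = rand_vec(), rand_vec()          # role vectors
s0, s1 = rand_vec(), rand_vec()             # vectors for the two states
# Action = "move object from state 0 to state 1", as a role-filler sum.
action = [x + y for x, y in zip(cconv(PRE, s0), cconv(POST, s1))]

# Unbinding the PRE role recovers (a noisy copy of) the precondition state.
decoded_pre = cconv(action, involution(PRE))
```

Unbinding is approximate: `decoded_pre` is a noisy version of `s0`, so a cleanup step compares it against the known state vectors and picks the most similar one, which is what makes this kind of representation robust to noise.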
Main file: Yao2020_1st-SMILES-workshop_ICDL2020.pdf (510.2 KB)
Origin: files produced by the author(s)

Dates and versions

hal-03041290 , version 1 (04-12-2020)

Identifiers

  • HAL Id : hal-03041290 , version 1

Cite

Xueyang Yao, Saeejith Nair, Peter Blouw, Bryan Tripp. Inferring symbols from demonstrations to support vector-symbolic planning in a robotic assembly task. ICDL 2020 - 1st SMILES (Sensorimotor Interaction, Language and Embodiment of Symbols) workshop, Nov 2020, Valparaiso / Virtual, Chile. ⟨hal-03041290⟩
58 views
114 downloads