Inferring symbols from demonstrations to support vector-symbolic planning in a robotic assembly task (Archive ouverte HAL)
Conference Papers, 2020


Xueyang Yao, Saeejith Nair, Peter Blouw, Bryan Tripp

Abstract

While deep reinforcement learning is increasingly used to solve complex sensorimotor tasks, it requires vast amounts of experience to achieve adequate performance. Humans, by comparison, can correctly learn many tasks from just a small handful of demonstrations. A common explanation of this difference points to the human use of symbolic representations that constrain search spaces, enable instruction following, and promote the transfer and reuse of knowledge across tasks. In prior work, we developed neural network models that use vector-symbolic representations to plan complex actions, but the challenge of learning such representations and grounding them in sensorimotor data remains unsolved. As a step towards addressing this challenge, we use a programming-by-demonstration approach that automatically extracts symbols from sensorimotor data during a simulated robotic assembly task. The system first observes a small number of demonstrations of the task, and then constructs models of the associated states and actions. The system infers task-relevant states by clustering the positions and orientations of objects manipulated during the demonstrations. It then builds vector-symbolic representations of these states, and of actions defined in terms of transitions between states. This approach allows rapid and automatic sensory grounding of vector-symbolic representations, which in turn support flexible, robust, and biologically plausible control.
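The pipeline described in the abstract (cluster demonstrated object poses into discrete states, assign each state a high-dimensional vector symbol, and represent actions as transitions between state symbols) can be illustrated with a minimal sketch. This is not the authors' implementation: the radius threshold, the greedy clustering rule, and the use of circular convolution as the vector-symbolic binding operation are assumptions for illustration (circular convolution is the binding operator of Holographic Reduced Representations, a common choice in vector-symbolic architectures).

```python
import numpy as np

def cluster_poses(poses, radius=0.1):
    """Greedy clustering sketch: assign each pose to the nearest existing
    cluster center within `radius`, otherwise start a new cluster.
    Each resulting cluster stands for one task-relevant state."""
    centers, labels = [], []
    for p in poses:
        dists = [np.linalg.norm(p - c) for c in centers]
        if dists and min(dists) < radius:
            labels.append(int(np.argmin(dists)))
        else:
            centers.append(p)
            labels.append(len(centers) - 1)
    return np.array(centers), labels

def random_symbol(dim, rng):
    """Random unit vector serving as the symbol for one state."""
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution (HRR-style binding), computed via FFT.
    Used here to encode an action as a binding of its pre- and
    post-state symbols -- an assumption, not the paper's exact scheme."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Toy demonstration data: 2-D object positions observed during a task.
rng = np.random.default_rng(0)
poses = np.array([[0.00, 0.00],
                  [0.01, 0.02],   # near the first pose -> same state
                  [0.50, 0.50],
                  [0.51, 0.49]])  # near the third pose -> same state

centers, labels = cluster_poses(poses, radius=0.1)
state_vecs = [random_symbol(256, rng) for _ in centers]

# An action is represented in terms of the state transition it causes.
action = bind(state_vecs[0], state_vecs[1])
```

With the toy data above, the four observed poses collapse into two inferred states (`labels == [0, 0, 1, 1]`), and `action` is a 256-dimensional vector encoding the transition between them; a planner operating on such vectors can compose and compare actions with the same algebraic operations used to build them.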
Main file: Yao2020_1st-SMILES-workshop_ICDL2020.pdf (510.2 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03041290, version 1 (04-12-2020)

Identifiers

  • HAL Id: hal-03041290, version 1

Cite

Xueyang Yao, Saeejith Nair, Peter Blouw, Bryan Tripp. Inferring symbols from demonstrations to support vector-symbolic planning in a robotic assembly task. ICDL 2020 - 1st SMILES (Sensorimotor Interaction, Language and Embodiment of Symbols) workshop, Nov 2020, Valparaiso / Virtual, Chile. ⟨hal-03041290⟩
