Neural Network Based Reinforcement Learning for Audio-Visual Gaze Control in Human-Robot Interaction

Stéphane Lathuilière 1, Benoît Massé 1, Pablo Mesejo 1, Radu Horaud 1
1 PERCEPTION - Interpretation and Modelling of Images and Videos
Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann, INPG - Institut National Polytechnique de Grenoble
Abstract: This paper introduces a novel neural network-based reinforcement learning approach to robot gaze control. Our approach enables a robot to learn and adapt its gaze-control strategy for human-robot interaction without external sensors or human supervision. The robot learns to focus its attention on groups of people from its own audio-visual experience, independently of the number of people, their positions, and their physical appearance. In particular, we use a recurrent neural network architecture in combination with Q-learning to find an optimal action-selection policy; we pre-train the network in a simulated environment that mimics realistic scenarios involving speaking and silent participants, thus avoiding the need for tedious sessions of a robot interacting with people. Our experimental evaluation suggests that the proposed method is robust with respect to parameter configuration, i.e., the choice of parameter values does not have a decisive impact on performance. The best results are obtained when audio and visual information are used jointly. Experiments with the Nao robot indicate that our framework is a step towards the autonomous learning of socially acceptable gaze behavior.
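
The abstract describes the method only at a high level: a recurrent network combined with Q-learning, pre-trained in a simulated environment before being deployed on the robot. The sketch below illustrates one possible instantiation of such a recurrent Q-network over audio-visual observations, written in PyTorch. The class names, the 128-dimensional observation vector, the six-action gaze set, the reward, and the DQN-style target network are illustrative assumptions, not the authors' actual implementation.

# Hypothetical sketch of a recurrent Q-network for gaze control (not the authors' code).
# Assumed setup: at each step the robot receives an audio-visual feature vector and
# chooses one of a few discrete gaze actions (e.g. pan left/right, tilt up/down, stay).

import torch
import torch.nn as nn


class RecurrentQNetwork(nn.Module):
    """GRU over audio-visual observations, followed by a linear head of Q-values."""

    def __init__(self, obs_dim: int, hidden_dim: int, num_actions: int):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim) -> q_values: (batch, time, num_actions)
        features, hidden = self.gru(obs_seq, hidden)
        return self.q_head(features), hidden


def epsilon_greedy(q_values: torch.Tensor, epsilon: float) -> int:
    """Pick a random action with probability epsilon, otherwise the greedy one."""
    if torch.rand(1).item() < epsilon:
        return int(torch.randint(q_values.shape[-1], (1,)).item())
    return int(q_values.argmax(dim=-1).item())


def q_learning_loss(q_net, target_net, obs_seq, actions, rewards, next_obs_seq, gamma=0.99):
    """One-step Q-learning (DQN-style) loss over a batch of short observation sequences."""
    q_values, _ = q_net(obs_seq)                      # (batch, time, num_actions)
    q_taken = q_values[:, -1].gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q, _ = target_net(next_obs_seq)
        target = rewards + gamma * next_q[:, -1].max(dim=1).values
    return nn.functional.smooth_l1_loss(q_taken, target)


if __name__ == "__main__":
    # Illustrative dimensions only: 128-d audio-visual features, 6 gaze actions.
    obs_dim, hidden_dim, num_actions = 128, 64, 6
    q_net = RecurrentQNetwork(obs_dim, hidden_dim, num_actions)
    target_net = RecurrentQNetwork(obs_dim, hidden_dim, num_actions)
    target_net.load_state_dict(q_net.state_dict())

    # Fake batch standing in for transitions collected from a simulated environment.
    batch, seq_len = 32, 10
    obs = torch.randn(batch, seq_len, obs_dim)
    next_obs = torch.randn(batch, seq_len, obs_dim)
    actions = torch.randint(num_actions, (batch,))
    rewards = torch.randn(batch)  # placeholder reward signal

    loss = q_learning_loss(q_net, target_net, obs, actions, rewards, next_obs)
    loss.backward()
    print("loss:", loss.item())

In a real setup the observation vector would presumably combine visual cues (e.g. detected person positions) with audio cues (e.g. sound-source directions), and the reward would encode how well the robot keeps the speaking participants in view; those details are in the paper itself, not in this sketch.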


https://hal.inria.fr/hal-01643775
Contributor: Team Perception
Submitted on: Wednesday, April 25, 2018 - 2:15:58 PM
Last modification on: Thursday, February 7, 2019 - 7:35:34 PM

Citation

Stéphane Lathuilière, Benoît Massé, Pablo Mesejo, Radu Horaud. Neural Network Based Reinforcement Learning for Audio-Visual Gaze Control in Human-Robot Interaction. Pattern Recognition Letters, Elsevier, 2019, 118, pp.61-71. ⟨10.1016/j.patrec.2018.05.023⟩. ⟨hal-01643775v2⟩
