Score-based Inverse Reinforcement Learning - Inria - Institut national de recherche en sciences et technologies du numérique
Conference Paper, Year: 2016

Score-based Inverse Reinforcement Learning

Abstract

This paper reports theoretical and empirical results for the score-based Inverse Reinforcement Learning (IRL) algorithm. It relies on a non-standard IRL setting in which a reward is learned from a set of globally scored trajectories. This allows any type of policy (optimal or not) to generate trajectories, without prior knowledge during data collection. As a result, any existing database (such as logs of systems in use) can be scored a posteriori by an expert and used to learn a reward function. It is shown that a near-optimal policy can then be computed from this reward function. Being related to least-squares regression, the algorithm (called SBIRL) comes with theoretical guarantees, which are proven in this paper. SBIRL is compared to standard IRL algorithms on synthetic data, showing that annotations do help under conditions on the quality of the trajectories. It is also shown to be suitable for real-world applications such as the optimisation of a spoken dialogue system.
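The abstract's core idea, learning a linear reward by regressing global trajectory scores onto discounted feature counts, can be sketched as follows. This is a minimal illustrative sketch assuming a linear reward model r(s) = θ·φ(s); the function names and details are hypothetical and not the authors' exact SBIRL formulation.

```python
import numpy as np

def fit_reward(trajectories, scores, gamma=0.95):
    """Least-squares sketch of score-based IRL (illustrative, not the paper's exact algorithm).

    trajectories: list of (T_i, d) arrays, row t holding the state features phi(s_t).
    scores: one global scalar score per trajectory.
    Returns theta such that r(s) = theta . phi(s).
    """
    # Summarize each trajectory by its discounted feature sum mu_i = sum_t gamma^t phi(s_t),
    # so that the trajectory's total discounted reward is mu_i . theta.
    mu = np.array([
        ((gamma ** np.arange(len(traj)))[:, None] * traj).sum(axis=0)
        for traj in trajectories
    ])
    # Regress the global scores on these feature sums.
    theta, *_ = np.linalg.lstsq(mu, np.asarray(scores), rcond=None)
    return theta

def reward(theta, phi):
    # Linear reward of a single state feature vector.
    return phi @ theta
```

With the recovered reward, any standard RL solver can then be run to compute a (near-optimal) policy, which is the second step described in the abstract.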
Main file
aamas-score-based.pdf (502.21 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01406886 , version 1 (02-12-2016)

Identifiers

  • HAL Id : hal-01406886 , version 1

Cite

Layla El Asri, Bilal Piot, Matthieu Geist, Romain Laroche, Olivier Pietquin. Score-based Inverse Reinforcement Learning. International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016), May 2016, Singapore, Singapore. ⟨hal-01406886⟩
704 views
726 downloads
