Reinforcement Learning of POMDPs using Spectral Methods

Abstract: We propose a new reinforcement learning algorithm for partially observable Markov decision processes (POMDPs) based on spectral decomposition methods. While spectral methods have previously been employed for consistent learning of (passive) latent variable models such as hidden Markov models, POMDPs are more challenging because the learner interacts with the environment and may thereby change the distribution of future observations. We devise a learning algorithm that runs through episodes: in each episode, we employ spectral techniques to estimate the POMDP parameters from a trajectory generated by a fixed policy. At the end of the episode, an optimization oracle returns the memoryless planning policy that maximizes the expected reward under the estimated POMDP model. We prove an order-optimal regret bound with respect to the optimal memoryless policy, with efficient scaling in the dimensionality of the observation and action spaces.
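
To make the within-episode estimation step concrete, below is a minimal sketch on a toy model. It assumes that under the fixed within-episode policy the trajectory behaves like a hidden Markov model (the "passive" case the abstract contrasts with), and recovers the observation matrix from second- and third-order moments of observation triples by simultaneous diagonalization. The toy parameters and function names are illustrative assumptions, not the paper's actual algorithm, which also handles actions and rewards and comes with finite-sample guarantees; the planning-oracle step of each episode is omitted here.

# Illustrative sketch only (assumed toy model, not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

X, Y = 2, 3                              # hidden states, observations
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])               # T[i, j] = P(x' = i | x = j)
O = np.array([[0.7, 0.1],
              [0.2, 0.3],
              [0.1, 0.6]])               # O[y, x] = P(y | x)

def sample_trajectory(n):
    """Roll out the chain from (approximately) its stationary distribution."""
    w = np.linalg.matrix_power(T, 100)[:, 0]
    w = w / w.sum()
    x = rng.choice(X, p=w)
    ys = np.empty(n, dtype=int)
    for t in range(n):
        ys[t] = rng.choice(Y, p=O[:, x])
        x = rng.choice(X, p=T[:, x])
    return ys

def spectral_estimate(ys):
    """Estimate O, up to a permutation of the hidden states, from moments of
    consecutive observation triples (y1, y2, y3). Given the middle state,
    the three views are conditionally independent, which is the multi-view
    structure spectral methods exploit."""
    n = len(ys) - 2
    P31 = np.zeros((Y, Y))               # P31[i, j]  = P(y3 = i, y1 = j)
    M = np.zeros((Y, Y, Y))              # M[x, i, j] = P(y3 = i, y2 = x, y1 = j)
    for t in range(n):
        P31[ys[t + 2], ys[t]] += 1.0 / n
        M[ys[t + 1], ys[t + 2], ys[t]] += 1.0 / n
    U, _, Vt = np.linalg.svd(P31)
    U, V = U[:, :X], Vt[:X].T            # project onto the rank-X range
    K = np.linalg.inv(U.T @ P31 @ V)
    # Each B_x = G diag(O[x, :]) G^{-1}: same eigenbasis G for every x.
    Bs = [U.T @ M[x] @ V @ K for x in range(Y)]
    Bmix = sum(rng.standard_normal() * B for B in Bs)  # random mix -> distinct eigvals
    _, G = np.linalg.eig(Bmix)
    O_hat = np.array([np.diag(np.linalg.inv(G) @ B @ G).real for B in Bs])
    O_hat = np.clip(O_hat, 0.0, None)
    return O_hat / O_hat.sum(axis=0)     # columns are conditional distributions

ys = sample_trajectory(200_000)
print("true O:\n", O)
print("estimated O (columns possibly permuted):\n", spectral_estimate(ys))

In the full algorithm, the estimated model would then be handed to the optimization oracle, which returns the memoryless policy used to collect the next episode's trajectory.
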
Document type:
Conference paper
Proceedings of the 29th Annual Conference on Learning Theory (COLT2016), Jun 2016, New York City, United States. 2016

Cited literature: 27 references

https://hal.inria.fr/hal-01322207
Contributor: Alessandro Lazaric
Submitted on: Thursday, May 26, 2016 - 17:34:15
Last modified on: Wednesday, April 25, 2018 - 15:43:10
Document(s) archived on: Saturday, August 27, 2016 - 11:03:09

File

master.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01322207, version 1

Citation

Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar. Reinforcement Learning of POMDPs using Spectral Methods. Proceedings of the 29th Annual Conference on Learning Theory (COLT 2016), Jun 2016, New York City, United States. 〈hal-01322207〉

Metrics

Record views: 184
File downloads: 114