Selecting Near-Optimal Approximate State Representations in Reinforcement Learning

Ronald Ortner (1), Odalric-Ambrym Maillard (2), Daniil Ryabko (3)
(3) SEQUEL - Sequential Learning, LIFL - Laboratoire d'Informatique Fondamentale de Lille / LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal / Inria Lille - Nord Europe
Abstract: We consider a reinforcement learning setting where the learner does not have explicit access to the states of the underlying Markov decision process (MDP). Instead, she has access to several models that map histories of past interactions to states. Here we improve over known regret bounds in this setting and, more importantly, generalize to the case where the models given to the learner do not contain a true model inducing an MDP representation, but only approximations of it. We also give improved error bounds for state aggregation.
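To make the setting in the abstract concrete, the following minimal Python sketch illustrates what a state-representation model is: a function mapping the history of past (action, observation) pairs to a state. All names here (`last_observation_model`, `window_model`, the encoding of states as small integers) are illustrative assumptions for exposition, not code or notation from the paper.

```python
# Illustrative sketch (not from the paper): a state-representation model
# maps a history of (action, observation) pairs to a state. The learner
# is given several candidate models and must select one whose induced
# state process is (at least approximately) Markovian.

from typing import Callable, List, Tuple

History = List[Tuple[int, int]]                  # past (action, observation) pairs
RepresentationModel = Callable[[History], int]   # history -> state

def last_observation_model(history: History) -> int:
    """Candidate model: the state is the most recent observation."""
    return history[-1][1] if history else 0

def window_model(history: History, window: int = 2) -> int:
    """Candidate model: the state encodes the last `window` observations."""
    obs = [o for _, o in history[-window:]]
    return hash(tuple(obs)) % 1000               # hypothetical finite state encoding

# The learner is handed a set of such candidate models; in the approximate
# setting studied in the paper, none of them needs to yield an exact MDP.
candidate_models = [last_observation_model, window_model]
```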
Document type:
Conference paper
International Conference on Algorithmic Learning Theory (ALT), Oct 2014, Bled, Slovenia. Springer, LNCS vol. 8776, pp. 140-154, 2014.

https://hal.inria.fr/hal-01057562
Contributor: Daniil Ryabko
Submitted on: Saturday, August 23, 2014 - 22:40:11
Last modified on: Thursday, January 11, 2018 - 06:22:13

Identifiers

  • HAL Id: hal-01057562, version 1

Citation

Ronald Ortner, Odalric-Ambrym Maillard, Daniil Ryabko. Selecting Near-Optimal Approximate State Representations in Reinforcement Learning. International Conference on Algorithmic Learning Theory (ALT), Oct 2014, Bled, Slovenia. Springer, LNCS vol. 8776, pp. 140-154, 2014. 〈hal-01057562〉
