Exploration–Exploitation in MDPs with Options

Ronan Fruit 1,2, Alessandro Lazaric 1,2
1 SEQUEL - Sequential Learning, Inria Lille - Nord Europe / CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille (UMR 9189)
Abstract: While a large body of empirical results shows that temporally extended actions and options may significantly affect the learning performance of an agent, the theoretical understanding of how and when options can be beneficial in online reinforcement learning is relatively limited. In this paper, we derive upper and lower bounds on the regret of a variant of UCRL using options. We first analyze the algorithm in the general case of semi-Markov decision processes (SMDPs), then show how these results translate to the specific case of MDPs with options, and we illustrate simple scenarios in which the regret of learning with options can be provably much smaller than the regret suffered when learning with primitive actions.
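For context, a minimal sketch of the regret notion used in this line of work is given below. The baseline recalled here is the standard UCRL2 guarantee for MDPs with primitive actions (Jaksch et al., 2010); the symbols $\rho^*$, $D$, $S$, $A$, $T$ follow that standard formulation and are not taken verbatim from this paper, whose actual bounds for options and SMDPs are derived in the full text.

% Regret of a learning algorithm after T steps in an average-reward MDP,
% measured against the optimal gain \rho^*:
\Delta(T) \;=\; T\,\rho^{*} \;-\; \sum_{t=1}^{T} r_t
% Baseline guarantee for UCRL2 with primitive actions (Jaksch et al., 2010),
% where D is the diameter, S the number of states, A the number of actions:
\Delta(T) \;=\; \tilde{O}\!\left(D\,S\sqrt{A\,T}\right)
% The paper derives analogous upper and lower bounds when the primitive action
% set is replaced by a set of options, i.e., when learning proceeds at the SMDP level.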
Document type:
Conference paper
AISTATS 2017 - 20th International Conference on Artificial Intelligence and Statistics, Apr 2017, Fort Lauderdale, United States

https://hal.inria.fr/hal-01493567
Contributor: Alessandro Lazaric
Submitted on: Friday, March 24, 2017 - 15:55:40
Last modified on: Thursday, January 11, 2018 - 06:27:32
Long-term archiving on: Sunday, June 25, 2017 - 13:34:33

File

main.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01493567, version 2

Citation

Ronan Fruit, Alessandro Lazaric. Exploration–Exploitation in MDPs with Options. AISTATS 2017 - 20th International Conference on Artificial Intelligence and Statistics, Apr 2017, Fort Lauderdale, United States. 〈hal-01493567v2〉
