Finite-Sample Analysis of LSTD

Alessandro Lazaric 1 Mohammad Ghavamzadeh 1 Remi Munos 1
1 SEQUEL (Sequential Learning), LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: In this paper, we consider the problem of policy evaluation in reinforcement learning, i.e., learning the value function of a fixed policy, using the least-squares temporal-difference (LSTD) learning algorithm. We report a finite-sample analysis of LSTD. We first derive a bound on the performance of the LSTD solution evaluated at the states generated by the Markov chain and used by the algorithm to learn an estimate of the value function. This result is general in the sense that no assumption is made on the existence of a stationary distribution for the Markov chain. We then derive generalization bounds in the case when the Markov chain possesses a stationary distribution and is $\beta$-mixing.
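The LSTD algorithm analyzed in the paper can be sketched in a few lines: from sampled transitions $(s_t, r_t, s_{t+1})$ and a linear feature map $\phi$, it accumulates $A = \Phi^\top(\Phi - \gamma\Phi')$ and $b = \Phi^\top r$, then solves $A\theta = b$ for the value-function weights. The sketch below is illustrative, not the authors' code; the function name `lstd` and the small ridge term `reg` (a common numerical safeguard, not part of the paper's analysis) are assumptions.

```python
import numpy as np

def lstd(phi, phi_next, rewards, gamma=0.95, reg=1e-6):
    """Least-squares temporal-difference (LSTD) policy evaluation.

    phi:      (n, d) features of visited states s_t
    phi_next: (n, d) features of successor states s_{t+1}
    rewards:  (n,)   observed rewards r_t
    Returns weights theta such that V(s) is approximated by phi(s) @ theta.
    """
    # A = sum_t phi_t (phi_t - gamma * phi_{t+1})^T,  b = sum_t phi_t r_t
    A = phi.T @ (phi - gamma * phi_next)
    b = phi.T @ rewards
    # Small ridge term guards against a singular A on short trajectories.
    return np.linalg.solve(A + reg * np.eye(A.shape[1]), b)
```

With tabular (one-hot) features and full state coverage, the solution coincides with the exact value function of the policy, which makes the estimator easy to sanity-check on a tiny chain.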
Document type:
Conference paper
ICML - 27th International Conference on Machine Learning, Jun 2010, Haifa, Israel. pp.615-622, 2010

Cited literature: [10 references]

https://hal.inria.fr/inria-00482189
Contributor: Mohammad Ghavamzadeh
Submitted on: Sunday, 9 May 2010 - 20:42:27
Last modified on: Thursday, 11 January 2018 - 06:22:13
Long-term archiving on: Thursday, 16 September 2010 - 13:18:05

File

lstd-tech.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: inria-00482189, version 1

Citation

Alessandro Lazaric, Mohammad Ghavamzadeh, Remi Munos. Finite-Sample Analysis of LSTD. ICML - 27th International Conference on Machine Learning, Jun 2010, Haifa, Israel. pp.615-622, 2010. 〈inria-00482189〉

Metrics

Record views: 650
File downloads: 371