Journal article — Journal of Machine Learning Research, 2012

Finite-Sample Analysis of Least-Squares Policy Iteration

Alessandro Lazaric
Mohammad Ghavamzadeh
  • Role: Author
  • PersonId: 868946
Rémi Munos
  • Role: Author
  • PersonId: 836863

Abstract

In this paper, we report a performance bound for the widely used least-squares policy iteration (LSPI) algorithm. We first consider the problem of policy evaluation in reinforcement learning, that is, learning the value function of a fixed policy, using the least-squares temporal-difference (LSTD) learning method, and report finite-sample analysis for this algorithm. To do so, we first derive a bound on the performance of the LSTD solution evaluated at the states generated by the Markov chain and used by the algorithm to learn an estimate of the value function. This result is general in the sense that no assumption is made on the existence of a stationary distribution for the Markov chain. We then derive generalization bounds in the case when the Markov chain possesses a stationary distribution and is $\beta$-mixing. Finally, we analyze how the error at each policy evaluation step is propagated through the iterations of a policy iteration method, and derive a performance bound for the LSPI algorithm.
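The policy-evaluation step analyzed in the abstract can be illustrated with a minimal sketch of the LSTD estimator: given transitions sampled along the policy's Markov chain and a feature map, it solves the linear system A w = b with A = Σ φ(s)(φ(s) − γ φ(s'))ᵀ and b = Σ φ(s) r. The function name, the feature map, and the toy two-state chain below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lstd(transitions, phi, gamma):
    """LSTD solution for evaluating a fixed policy (illustrative sketch).

    transitions: list of (s, r, s_next) sampled along the policy's Markov chain
    phi: feature map, state -> 1-D numpy array
    gamma: discount factor in [0, 1)
    Solves A w = b with A = sum phi(s)(phi(s) - gamma*phi(s'))^T, b = sum phi(s)*r.
    """
    d = phi(transitions[0][0]).shape[0]
    A = np.zeros((d, d))
    b = np.zeros(d)
    for s, r, s_next in transitions:
        f = phi(s)
        A += np.outer(f, f - gamma * phi(s_next))
        b += f * r
    return np.linalg.solve(A, b)

# Toy example (hypothetical): 2-state chain, state 0 -> 1 with reward 1,
# state 1 absorbing with reward 0; tabular (one-hot) features make LSTD exact.
phi = lambda s: np.eye(2)[s]
w = lstd([(0, 1.0, 1), (1, 0.0, 1)], phi, gamma=0.9)
# w recovers the true values V = [1.0, 0.0]
```

LSPI then wraps such an evaluation step in a policy-iteration loop (evaluating Q-functions and acting greedily), which is where the error-propagation analysis of the paper applies.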
Main file: lazaric12a.pdf (270.78 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00772060, version 1 (09-01-2013)

Identifiers

  • HAL Id: hal-00772060, version 1

Cite

Alessandro Lazaric, Mohammad Ghavamzadeh, Rémi Munos. Finite-Sample Analysis of Least-Squares Policy Iteration. Journal of Machine Learning Research, 2012, 13, pp.3041-3074. ⟨hal-00772060⟩
192 Views
195 Downloads
