Finite-Sample Analysis of Least-Squares Policy Iteration

1 SEQUEL - Sequential Learning
LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: In this paper, we report a performance bound for the widely used least-squares policy iteration (LSPI) algorithm. We first consider the problem of policy evaluation in reinforcement learning, i.e., learning the value function of a fixed policy, using the least-squares temporal-difference (LSTD) learning method, and report a finite-sample analysis for this algorithm. To do so, we first derive a bound on the performance of the LSTD solution evaluated at the states generated by the Markov chain and used by the algorithm to learn an estimate of the value function. This result is general in the sense that no assumption is made on the existence of a stationary distribution for the Markov chain. We then derive generalization bounds in the case where the Markov chain possesses a stationary distribution and is $\beta$-mixing. Finally, we analyze how the error at each policy-evaluation step is propagated through the iterations of a policy iteration method, and derive a performance bound for the LSPI algorithm.
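The LSTD method analyzed in the abstract can be sketched compactly: from sampled transitions $(s, r, s')$ of a fixed policy and a feature map $\varphi$, it solves the linear system $A\theta = b$ with $A = \sum_t \varphi(s_t)(\varphi(s_t) - \gamma\varphi(s_{t+1}))^\top$ and $b = \sum_t r_t\,\varphi(s_t)$. The sketch below is illustrative only: the two-state chain, tabular features, discount factor, and ridge term are assumptions for the example, not the setting of the report.

```python
import numpy as np

def lstd(transitions, phi, gamma, reg=1e-6):
    """Least-squares temporal-difference learning (LSTD).

    Solves A theta = b, where
      A = sum_t phi(s_t) (phi(s_t) - gamma * phi(s_{t+1}))^T
      b = sum_t r_t * phi(s_t)
    A small ridge term `reg` keeps A invertible with few samples.
    """
    d = phi(transitions[0][0]).shape[0]
    A = reg * np.eye(d)
    b = np.zeros(d)
    for s, r, s_next in transitions:
        f, f_next = phi(s), phi(s_next)
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A, b)

# Illustrative two-state Markov chain of a fixed policy (assumed numbers).
P = np.array([[0.9, 0.1], [0.2, 0.8]])  # transition probabilities
R = np.array([1.0, 0.0])                # expected reward per state
gamma = 0.9

# Generate a single trajectory from the chain.
rng = np.random.default_rng(0)
s, transitions = 0, []
for _ in range(20000):
    s_next = rng.choice(2, p=P[s])
    transitions.append((s, R[s], s_next))
    s = s_next

# With tabular (one-hot) features, LSTD estimates the exact value function.
phi = lambda state: np.eye(2)[state]
theta = lstd(transitions, phi, gamma)

# Ground truth from the Bellman equation: V = (I - gamma * P)^{-1} R.
V_true = np.linalg.solve(np.eye(2) - gamma * P, R)
print(theta, V_true)
```

With one-hot features the LSTD estimate converges to the true value function as the trajectory grows; with general linear features it instead converges to the fixed point of the projected Bellman operator, which is exactly the object whose finite-sample error the report bounds.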
Document type: Report
[Technical Report] 2010

Cited literature [21 references]

https://hal.inria.fr/inria-00528596
Submitted on: Friday, October 22, 2010 - 10:27:28
Last modified on: Thursday, January 11, 2018 - 06:22:13
Document(s) archived on: Sunday, January 23, 2011 - 02:46:17

File

lspi-jmlr.pdf
Files produced by the author(s)

Identifiers

• HAL Id: inria-00528596, version 1

Citation

Alessandro Lazaric, Mohammad Ghavamzadeh, Rémi Munos. Finite-Sample Analysis of Least-Squares Policy Iteration. [Technical Report] 2010. 〈inria-00528596〉

Metrics

Record views: 370