Performance Bounds for Lambda Policy Iteration and Application to the Game of Tetris
Abstract
We consider the discrete-time infinite-horizon optimal control problem formalized by Markov Decision Processes. We revisit the work of Bertsekas and Ioffe, who introduced Lambda Policy Iteration, a family of algorithms parameterized by $\lambda$ that generalizes the standard Value Iteration and Policy Iteration algorithms and has deep connections with the Temporal Difference algorithms described by Sutton. We deepen the original theory by providing convergence rate bounds that generalize the standard bounds for Value Iteration described, for instance, by Puterman. The main contribution of this paper is then to develop the theory of this algorithm when it is used in an approximate form. We extend and unify the separate analyses developed by Munos for Approximate Value Iteration and Approximate Policy Iteration, and provide performance bounds for both the discounted and the undiscounted settings. Finally, we revisit the use of this algorithm for training a Tetris-playing controller, as originally done by Bertsekas and Ioffe. Our empirical results differ from those of Bertsekas and Ioffe (which were originally described as "paradoxical" and "intriguing"). We trace this discrepancy to a minor implementation error in the algorithm, which suggests that, in practice, Lambda Policy Iteration may be more stable than previously thought.
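To make the family concrete, here is a minimal sketch of one iteration of exact Lambda Policy Iteration in standard MDP notation; the operators are the usual Bellman operators, but this particular presentation is ours rather than a quotation from Bertsekas and Ioffe:

% One iteration of (exact) Lambda Policy Iteration. T denotes the
% Bellman optimality operator and T_\mu the Bellman operator of a
% policy \mu; v_k is the current value function (notation ours).
\begin{align*}
  \mu_{k+1} &\in \arg\max_{\mu} T_\mu v_k
    && \text{(greedy policy, so } T_{\mu_{k+1}} v_k = T v_k \text{)} \\
  v_{k+1} &= (1-\lambda) \sum_{j \ge 0} \lambda^{j}\, T_{\mu_{k+1}}^{\,j+1} v_k
    && \text{(} \lambda \text{-geometric average of multi-step backups)}
\end{align*}
% Setting \lambda = 0 recovers Value Iteration (v_{k+1} = T v_k),
% while \lambda \to 1 recovers Policy Iteration (v_{k+1} = v_{\mu_{k+1}}).

For $\lambda$ strictly between $0$ and $1$, the update interpolates between these two classical algorithms, which is what makes $\lambda$ a meaningful tuning parameter in the approximate setting studied in the paper.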