Softened Approximate Policy Iteration for Markov Games

Julien Pérolat 1,2,3, Bilal Piot 1,2,3, Matthieu Geist 4, Bruno Scherrer 5,6, Olivier Pietquin 1,2,3,7
1 SEQUEL - Sequential Learning, Inria Lille - Nord Europe, CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189
4 MALIS - MAchine Learning and Interactive Systems, SUPELEC-Campus Metz, CentraleSupélec
5 BIGS - Biology, genetics and statistics, Inria Nancy - Grand Est, IECL - Institut Élie Cartan de Lorraine
6 Probabilités et statistiques, IECL - Institut Élie Cartan de Lorraine
Abstract: This paper reports theoretical and empirical investigations on the use of quasi-Newton methods to minimize the Optimal Bellman Residual (OBR) of zero-sum two-player Markov Games. First, it reveals that state-of-the-art algorithms can be derived by direct application of Newton's method to different norms of the OBR. More precisely, when applied to the norm of the OBR, Newton's method results in the Bellman Residual Minimization Policy Iteration (BRMPI) algorithm and, when applied to the norm of the Projected OBR (POBR), it results in the standard Least Squares Policy Iteration (LSPI) algorithm. Consequently, new algorithms are proposed that use quasi-Newton methods to minimize the OBR and the POBR, so as to benefit from improved empirical performance at low cost. Indeed, the quasi-Newton approach requires only slight modifications to the implementations of LSPI and BRMPI, yet significantly improves both the stability and the performance of these algorithms. These phenomena are illustrated in an experiment conducted on artificially constructed games called Garnets.
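To make the abstract's Newton/OBR connection concrete, here is a minimal sketch in illustrative notation (not the paper's exact derivation): consider a tabular zero-sum two-player Markov game with optimal Bellman operator T*v = max_mu min_nu (r_{mu,nu} + gamma P_{mu,nu} v). A Newton step on (a squared norm of) the residual T*v - v, taken at v_k with a greedy policy pair (mu_k, nu_k), amounts to a policy-iteration-style update; damping it with a step size alpha_k in (0, 1] gives a softened quasi-Newton iterate:

```latex
% Illustrative sketch only: a damped Newton step on the optimal Bellman residual.
% (mu_k, nu_k) is a greedy policy pair at v_k, so that
% T* v_k = r_{mu_k nu_k} + gamma P_{mu_k nu_k} v_k.
\begin{align*}
  d_k     &= (I - \gamma P_{\mu_k \nu_k})^{-1}\,(\mathcal{T}^* v_k - v_k)
           = (I - \gamma P_{\mu_k \nu_k})^{-1} r_{\mu_k \nu_k} - v_k, \\
  v_{k+1} &= v_k + \alpha_k\, d_k .
\end{align*}
% alpha_k = 1 recovers the full Newton step, i.e. exact evaluation of the greedy
% policy pair (policy iteration); alpha_k < 1 yields a softened (damped) update.
```

With function approximation, replacing the exact evaluation step by a least-squares or Bellman-residual fit is, as the abstract states, what leads to the LSPI- and BRMPI-style algorithms studied in the paper.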
Document type: Conference paper
ICML 2016 - 33rd International Conference on Machine Learning, Jun 2016, New York City, United States

https://hal.inria.fr/hal-01393328
Contributor: Bruno Scherrer
Submitted on: Monday, November 7, 2016 - 18:18:39
Last modified on: Tuesday, July 3, 2018 - 11:31:49
Long-term archiving on: Wednesday, February 8, 2017 - 13:54:58

Identifiers

  • HAL Id : hal-01393328, version 1

Citation

Julien Pérolat, Bilal Piot, Matthieu Geist, Bruno Scherrer, Olivier Pietquin. Softened Approximate Policy Iteration for Markov Games. ICML 2016 - 33rd International Conference on Machine Learning, Jun 2016, New York City, United States. 〈hal-01393328〉


Metrics

Record views: 505
File downloads: 522