Conference paper, Year: 2017

Learning Nash Equilibrium for General-Sum Markov Games from Batch Data

Abstract

This paper addresses the problem of learning a Nash equilibrium in γ-discounted multiplayer general-sum Markov Games (MGs) in a batch setting. As the number of players in an MG increases, the agents may either collaborate or split into opposing teams to increase their final rewards. One solution to this problem is to look for a Nash equilibrium. Although several techniques exist for the subcase of two-player zero-sum MGs, those techniques fail to find a Nash equilibrium in general-sum Markov Games. In this paper, we introduce a new definition of ε-Nash equilibrium in MGs which grasps the strategy's quality for multiplayer games. We prove that minimizing the norm of two Bellman-like residuals implies learning such an ε-Nash equilibrium. We then show that minimizing an empirical estimate of the L_p norm of these Bellman-like residuals allows learning such an equilibrium for general-sum games in the batch setting. Finally, we introduce a neural network architecture that successfully learns a Nash equilibrium in generic multiplayer general-sum turn-based MGs.
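No code accompanies this abstract; the sketch below only illustrates the kind of batch learning rule the abstract describes, i.e. minimizing an empirical L_p criterion over Bellman-like residuals with a small neural network on transitions from a turn-based MG. Everything in it (the QNet architecture, the two residual forms, the greedy stand-in for the learned strategy, and the placeholder data) is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch (assumptions throughout): empirical L_p Bellman-residual
# minimization for a turn-based general-sum MG from a batch of transitions
# (s, a, per-player rewards, s', controlling player at s and at s').
import torch
import torch.nn as nn

N_PLAYERS, N_STATES, N_ACTIONS, GAMMA, P = 2, 10, 4, 0.9, 2

class QNet(nn.Module):
    """One Q-head per player: Q_i(s, a). Hypothetical architecture."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(N_STATES, 64), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(64, N_ACTIONS) for _ in range(N_PLAYERS))

    def forward(self, s_onehot):
        h = self.body(s_onehot)
        return torch.stack([head(h) for head in self.heads], dim=1)  # (B, N_PLAYERS, A)

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

def lp_residual_loss(s, a, r, s_next, turn, turn_next):
    """Empirical L_p criterion (mean of |residual|^p) over two Bellman-like
    residual terms, in illustrative forms:
      - evaluation-style, for every player i:
          Q_i(s,a) - (r_i + gamma * Q_i(s', a'*))
      - optimality-style, for the player j controlling s':
          Q_j(s,a) - (r_j + gamma * max_{a'} Q_j(s', a'))
    where a'* is the greedy action of the player controlling s' (an assumed
    stand-in for the learned joint strategy)."""
    batch = torch.arange(len(s))
    q_all = qnet(s)                                                # (B, N, A)
    q_sa = q_all.gather(2, a.view(-1, 1, 1).expand(-1, N_PLAYERS, 1)).squeeze(2)
    with torch.no_grad():
        q_next = qnet(s_next)                                      # bootstrapped targets
    greedy_a = q_next[batch, turn_next].argmax(dim=1)              # controller's greedy action at s'
    q_next_greedy = q_next.gather(2, greedy_a.view(-1, 1, 1).expand(-1, N_PLAYERS, 1)).squeeze(2)
    eval_res = q_sa - (r + GAMMA * q_next_greedy)                  # one residual per player
    q_next_max = q_next[batch, turn_next].max(dim=1).values
    opt_res = q_sa[batch, turn_next] - (r[batch, turn_next] + GAMMA * q_next_max)
    return eval_res.abs().pow(P).mean() + opt_res.abs().pow(P).mean()

# One gradient step on random placeholder data (shapes only, not a real batch).
B = 32
s = torch.nn.functional.one_hot(torch.randint(N_STATES, (B,)), N_STATES).float()
s_next = torch.nn.functional.one_hot(torch.randint(N_STATES, (B,)), N_STATES).float()
a = torch.randint(N_ACTIONS, (B,))
r = torch.randn(B, N_PLAYERS)
turn, turn_next = torch.randint(N_PLAYERS, (B,)), torch.randint(N_PLAYERS, (B,))
loss = lp_residual_loss(s, a, r, s_next, turn, turn_next)
opt.zero_grad(); loss.backward(); opt.step()
```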
Main file: bellman-residual-aistats2016(5).pdf (909.96 KB). Origin: files produced by the author(s).

Dates and versions

hal-01648489, version 1 (26-11-2017)

Identifiers

  • HAL Id: hal-01648489, version 1

Cite

Julien Pérolat, Florian Strub, Bilal Piot, Olivier Pietquin. Learning Nash Equilibrium for General-Sum Markov Games from Batch Data. AISTATS 2017 - The 20th International Conference on Artificial Intelligence and Statistics, Apr 2017, Fort Lauderdale, United States. pp.1-14. ⟨hal-01648489⟩