
On the Study of Cooperative Multi-Agent Policy Gradient

Guillaume Bono 1 Jilles Dibangoye 1 Laëtitia Matignon 2, 1 Florian Pereyron 3 Olivier Simonin 1
1 CHROMA - Cooperative robots adapted to human presence in dynamic environments
Inria Grenoble - Rhône-Alpes, CITI - CITI Centre of Innovation in Telecommunications and Integration of services
2 SyCoSMA - Cognitive Systems and Multi-Agent Systems
LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
Abstract: Reinforcement Learning (RL) for decentralized partially observable Markov decision processes (Dec-POMDPs) lags behind the spectacular breakthroughs of single-agent RL, because assumptions that hold in single-agent settings often no longer hold in decentralized multi-agent systems. To tackle this issue, we investigate the foundations of policy gradient methods within the centralized training for decentralized control (CTDC) paradigm, in which learning can be accomplished in a centralized manner while execution remains independent. Using this insight, we establish the policy gradient theorem and compatible function approximations for decentralized multi-agent systems. The resulting actor-critic methods preserve decentralized control at the execution phase, yet can estimate the policy gradient from collective experiences guided by a centralized critic at the training phase. Experiments demonstrate that our policy gradient methods compare favorably against standard RL techniques on benchmarks from the literature.
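The CTDC idea described in the abstract can be sketched in a few lines: each agent's actor conditions only on its own local observation, while a centralized critic, available only during training, scores the joint action using the full state. The following is a minimal illustrative sketch (toy tabular problem, softmax actors, a fixed random critic table); all sizes and names are assumptions for illustration, not the paper's actual algorithm or benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem sizes (hypothetical, for illustration only):
# 2 agents, each with a private observation and a private action.
n_states, n_obs, n_actions = 3, 2, 2

# Decentralized actors: agent i's softmax policy depends only on its own obs.
theta = [rng.normal(0.0, 0.1, size=(n_obs, n_actions)) for _ in range(2)]

def policy(th, o):
    """Softmax action distribution for one agent given its local observation."""
    z = np.exp(th[o] - th[o].max())
    return z / z.sum()

# Centralized critic Q(s, a1, a2): uses the full state and joint action,
# but is only consulted during the training phase (here a fixed random table).
Q = rng.normal(0.0, 0.1, size=(n_states, n_actions, n_actions))

def pg_step(s, obs, alpha=0.1):
    """One actor-critic update: actors act on local obs, critic scores jointly."""
    a = [rng.choice(n_actions, p=policy(theta[i], obs[i])) for i in range(2)]
    q = Q[s, a[0], a[1]]              # centralized value of the joint action
    for i in range(2):
        pi = policy(theta[i], obs[i])
        grad_log = -pi
        grad_log[a[i]] += 1.0         # gradient of log-softmax at sampled action
        theta[i][obs[i]] += alpha * q * grad_log   # local policy-gradient step
    return a, q
```

At execution time only `policy` and the local `theta[i]` are needed, so control stays fully decentralized; the critic `Q` is discarded after training.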

Cited literature: 37 references
Contributor: Guillaume Bono
Submitted on: Tuesday, July 17, 2018 - 1:48:54 PM
Last modification on: Tuesday, June 1, 2021 - 2:08:10 PM
Long-term archiving on: Thursday, October 18, 2018 - 2:23:55 PM




  • HAL Id: hal-01821677, version 2


Guillaume Bono, Jilles Dibangoye, Laëtitia Matignon, Florian Pereyron, Olivier Simonin. On the Study of Cooperative Multi-Agent Policy Gradient. [Research Report] RR-9188, INSA Lyon; INRIA. 2018, pp.1-27. ⟨hal-01821677v2⟩


