Conference paper, Year: 2004

Improving Coordination with Communication in Multiagent Reinforcement Learning

Daniel Szer
  • Role: Author
  • PersonId: 830433

Abstract

In this paper we present a new algorithm for cooperative reinforcement learning in multiagent systems. We consider autonomous, independently learning agents and seek an optimal solution for the team as a whole while keeping the learning as decentralized as possible. Coordination between agents is achieved through communication, namely the mutual notification algorithm. We define the learning problem as a decentralized process using the MDP formalism. We then give an optimality criterion and prove the convergence of the algorithm for deterministic environments. We introduce variable and hierarchical communication strategies that considerably reduce the number of communications. Finally, we study the convergence properties and communication overhead on a small example.
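Since the full text is not deposited here, the exact form of the mutual notification rule is not available from this record; the following Python sketch only illustrates one plausible reading of the abstract, in which independently learning Q-learners broadcast a value update to teammates when their local estimate improves. The class name CommunicatingQAgent, the notify-on-improvement rule, and all parameter values are assumptions made for illustration, not the authors' algorithm.

    # Minimal sketch of decentralized Q-learning with a hypothetical
    # "notify teammates on improvement" communication step.
    import random
    from collections import defaultdict

    class CommunicatingQAgent:
        def __init__(self, n_actions, alpha=0.1, gamma=0.95, epsilon=0.1):
            self.q = defaultdict(float)      # Q[(state, action)] -> value
            self.n_actions = n_actions
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.teammates = []              # other agents to notify

        def act(self, state):
            # Epsilon-greedy action selection on the local Q-table.
            if random.random() < self.epsilon:
                return random.randrange(self.n_actions)
            return max(range(self.n_actions), key=lambda a: self.q[(state, a)])

        def update(self, state, action, reward, next_state):
            # Standard Q-learning update on local experience.
            best_next = max(self.q[(next_state, a)] for a in range(self.n_actions))
            old = self.q[(state, action)]
            self.q[(state, action)] = old + self.alpha * (
                reward + self.gamma * best_next - old)
            # Hypothetical notification rule: communicate only when the
            # local estimate improved, so teammates can adopt the better
            # value and keep their policies coordinated. This assumes a
            # shared (state, action) representation across agents.
            if self.q[(state, action)] > old:
                for mate in self.teammates:
                    mate.receive(state, action, self.q[(state, action)])

        def receive(self, state, action, value):
            # Adopt the notified value if it improves the local estimate.
            if value > self.q[(state, action)]:
                self.q[(state, action)] = value

    # Hypothetical usage: two agents notifying each other.
    a1 = CommunicatingQAgent(n_actions=4)
    a2 = CommunicatingQAgent(n_actions=4)
    a1.teammates, a2.teammates = [a2], [a1]

Communicating only on improvement, rather than after every update, is one way such a scheme could reduce the number of messages, in the spirit of the variable communication strategies the abstract mentions.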

Domains

Other [cs.OH]
File not deposited

Dates and versions

inria-00100165 , version 1 (26-09-2006)

Identifiers

  • HAL Id : inria-00100165 , version 1

Cite

Daniel Szer, François Charpillet. Improving Coordination with Communication in Multiagent Reinforcement Learning. 16th IEEE International Conference on Tools with Artificial Intelligence - ICTAI'04, 2004, Boca Raton, USA, 5 p. ⟨inria-00100165⟩