Learning in Games with Lossy Feedback

Abstract: We consider a game-theoretic multi-agent learning problem in which feedback information can be lost during the learning process and rewards are given by a broad class of games known as variationally stable games. We propose a simple variant of the classical online gradient descent algorithm, called reweighted online gradient descent (ROGD), and show that in variationally stable games, if each agent adopts ROGD, then almost sure convergence to the set of Nash equilibria is guaranteed, even when the feedback loss is asynchronous and arbitrarily correlated among agents. We then extend the framework to handle unknown feedback loss probabilities by using an estimator (constructed from past data) in their place. Finally, we further extend the framework to accommodate both asynchronous loss and stochastic rewards, and establish that multi-agent ROGD learning still converges to the set of Nash equilibria in such settings. Together, these results contribute to the broad landscape of multi-agent online learning by significantly relaxing the feedback information required to achieve desirable outcomes.
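
For intuition, the sketch below (in Python, not taken from the paper) shows what a reweighted update of this kind could look like for a single agent. It assumes the reweighting is an importance weight 1/p, where p is the (possibly estimated) probability that the round's gradient feedback is actually received; the function name rogd_step and its arguments are illustrative, not the authors' implementation.

    # Minimal sketch of a reweighted online gradient descent (ROGD) update for one
    # agent. Assumption (for illustration only): a received gradient is divided by
    # the feedback-arrival probability p, so the update is unbiased in expectation.
    import numpy as np

    def rogd_step(x, grad, received, p, step_size, project):
        """One reweighted update.

        x          -- current action (numpy array)
        grad       -- gradient feedback for this round (only meaningful if received)
        received   -- whether feedback arrived this round (bool)
        p          -- (estimated) probability of receiving feedback, in (0, 1]
        step_size  -- learning rate for this round
        project    -- Euclidean projection onto the agent's action set
        """
        # Reweight the received gradient by 1/p; skip the update when feedback is lost.
        g = grad / p if received else np.zeros_like(x)
        return project(x - step_size * g)

    # Example: a single agent whose action set is the interval [0, 1].
    project = lambda y: np.clip(y, 0.0, 1.0)
    x = np.array([0.5])
    # Suppose feedback arrived this round, with estimated arrival probability 0.8.
    x = rogd_step(x, grad=np.array([0.2]), received=True, p=0.8,
                  step_size=0.1, project=project)

In the unknown-probability setting described in the abstract, p would be replaced by an estimate built from past data, for example the empirical frequency of feedback arrivals observed so far.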
Document type: Conference papers

Cited literature: 11 references

https://hal.inria.fr/hal-01904461
Contributor: Panayotis Mertikopoulos
Submitted on: Thursday, October 25, 2018 - 1:28:51 AM
Last modification on: Thursday, November 8, 2018 - 2:28:04 PM
Long-term archiving on: Saturday, January 26, 2019 - 1:00:00 PM

File

FeedbackLoss-NIPS.pdf (files produced by the author(s))

Identifiers

  • HAL Id: hal-01904461, version 1

Citation

Zhengyuan Zhou, Panayotis Mertikopoulos, Susan Athey, Nicholas Bambos, Peter Glynn, et al. Learning in Games with Lossy Feedback. NIPS 2018 - Thirty-second Conference on Neural Information Processing Systems, Dec 2018, Montreal, Canada. pp. 1-11. ⟨hal-01904461⟩
