
Finite-time last-iterate convergence for multi-agent learning in games

Abstract : In this paper, we consider multi-agent learning via online gradient descent (OGD) in a class of games called λ-cocoercive games, a fairly broad class of games that admits many Nash equilibria and that properly includes unconstrained strongly monotone games. We characterize the finite-time last-iterate convergence rate for joint OGD learning on λ-cocoercive games; further, building on this result, we develop a fully adaptive OGD learning algorithm that does not require any knowledge of the problem parameters (e.g., the cocoercivity constant λ) and show, via a novel double-stopping-time technique, that this adaptive algorithm achieves the same finite-time last-iterate convergence rate as its non-adaptive counterpart. Subsequently, we extend OGD learning to the noisy gradient feedback case and establish last-iterate convergence results: first qualitative almost-sure convergence, then quantitative finite-time convergence rates, all under non-decreasing step-sizes. To our knowledge, we provide the first set of results that fill in several gaps in the existing multi-agent online learning literature, where three aspects (finite-time convergence rates, non-decreasing step-sizes, and fully adaptive algorithms) have been unexplored before.
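
As a rough illustration of the joint OGD dynamics described in the abstract, the following minimal Python sketch runs simultaneous gradient steps in a toy two-player quadratic game whose joint gradient field is strongly monotone (hence cocoercive). The game, coefficients, and fixed step size are illustrative assumptions for this sketch, not the paper's algorithmic details or experiments.

```python
# Minimal sketch (not the authors' code): joint online gradient descent (OGD)
# learning in a toy unconstrained, strongly monotone two-player game.
# All numerical values below are illustrative assumptions.

import numpy as np


def gradient_field(x):
    """Joint gradient v(x) for a toy quadratic game.

    Player i's cost is (1/2) * a_i * x_i**2 + b * x_1 * x_2; with a_i > |b|
    the Jacobian of v is positive definite, so v is strongly monotone and
    Lipschitz, hence lambda-cocoercive for some lambda > 0.
    """
    a1, a2, b = 2.0, 3.0, 0.5  # illustrative coefficients (assumption)
    x1, x2 = x
    return np.array([a1 * x1 + b * x2,
                     a2 * x2 + b * x1])


def joint_ogd(x0, eta, num_iters=200):
    """Each player takes a simultaneous gradient step on its own cost."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        x = x - eta * gradient_field(x)  # simultaneous OGD update
    return x  # the last iterate, whose convergence the paper quantifies


if __name__ == "__main__":
    x_last = joint_ogd(x0=[5.0, -3.0], eta=0.1)
    print("last iterate:", x_last)  # approaches the Nash equilibrium (0, 0)
```

In this toy game the unique Nash equilibrium is (0, 0), so the printed last iterate shrinking toward the origin mirrors, in a very simplified setting, the last-iterate convergence behavior the paper analyzes; the paper's adaptive variant would replace the fixed `eta` with a data-driven step-size schedule.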
Document type :
Conference papers

https://hal.inria.fr/hal-03043711
Contributor : Panayotis Mertikopoulos
Submitted on : Monday, December 7, 2020 - 2:04:32 PM
Last modification on : Wednesday, December 16, 2020 - 4:08:42 AM

File

ICML-2020-finite-time-last-ite...
Files produced by the author(s)

Identifiers

  • HAL Id : hal-03043711, version 1

Citation

Tianyi Lin, Zhengyuan Zhou, Panayotis Mertikopoulos, Michael Jordan. Finite-time last-iterate convergence for multi-agent learning in games. ICML '20: The 37th International Conference on Machine Learning, 2020, Vienna, Austria. ⟨hal-03043711⟩
