
Learning in games via reinforcement learning and regularization

Abstract: We investigate a class of reinforcement learning dynamics in which each player plays a "regularized best response" to a score vector consisting of his actions' cumulative payoffs. Regularized best responses are single-valued regularizations of ordinary best responses obtained by maximizing the difference between a player's expected cumulative payoff and a (strongly) convex penalty term. In contrast to the class of smooth best response maps used in models of stochastic fictitious play, these penalty functions are not required to be infinitely steep at the boundary of the simplex; in fact, dropping this requirement gives rise to an important dichotomy between steep and nonsteep cases. In this general setting, our main results extend several properties of the replicator dynamics, such as the elimination of dominated strategies, the asymptotic stability of strict Nash equilibria, and the convergence of time-averaged trajectories to interior Nash equilibria in zero-sum games.
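To make the construction concrete, the following is a minimal sketch (illustrative only, not taken from the paper) of the two canonical cases in plain NumPy: the steep entropic penalty, whose regularized best response is the logit (softmax) map familiar from stochastic fictitious play, and a nonsteep Euclidean penalty, whose regularized best response is a simplex projection that can assign exactly zero probability to low-scoring actions. The weight eta on the penalty is an assumed illustrative parameter, not notation from the paper.

```python
import numpy as np

def logit_response(scores, eta=1.0):
    """Regularized best response for the entropic penalty
    h(x) = sum_i x_i log x_i, which is steep at the simplex
    boundary. The maximizer of <scores, x> - h(x)/eta is the
    logit (softmax) map, which is always interior."""
    z = eta * (np.asarray(scores, dtype=float) - np.max(scores))  # shift for stability
    w = np.exp(z)
    return w / w.sum()

def euclidean_response(scores, eta=1.0):
    """Regularized best response for the nonsteep Euclidean
    penalty h(x) = ||x||^2 / 2. The maximizer of
    <scores, x> - h(x)/eta is the Euclidean projection of
    eta * scores onto the simplex, which may place exactly
    zero mass on low-scoring actions."""
    v = eta * np.asarray(scores, dtype=float)
    u = np.sort(v)[::-1]                  # scores sorted in descending order
    cssv = np.cumsum(u) - 1.0
    ks = np.arange(1, len(v) + 1)
    rho = ks[u - cssv / ks > 0][-1]       # size of the support of the projection
    theta = cssv[rho - 1] / rho           # normalizing threshold
    return np.maximum(v - theta, 0.0)

# Example: the steep map keeps every action in play; the nonsteep map need not.
scores = np.array([2.0, 1.0, -1.0])
print(logit_response(scores))      # strictly positive probabilities
print(euclidean_response(scores))  # [1., 0., 0.]: zero mass on low scores
```

Running the example on the score vector (2, 1, -1) illustrates the steep/nonsteep dichotomy from the abstract: the logit map returns strictly positive probabilities for every action, while the Euclidean map concentrates all mass on the top action and assigns exactly zero to the others.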

https://hal.inria.fr/hal-01073491
Contributor: Panayotis Mertikopoulos
Submitted on: Tuesday, January 6, 2015 - 11:08:42 AM
Last modification on: Tuesday, October 6, 2020 - 4:20:09 PM
Long-term archiving on: Tuesday, April 7, 2015 - 10:21:12 AM

File

1407.6267v1.pdf (produced by the author(s))

Citation

Panayotis Mertikopoulos, William H. Sandholm. Learning in games via reinforcement learning and regularization. Mathematics of Operations Research, INFORMS, 2016, 41 (4), pp. 1297-1324. ⟨10.1287/moor.2016.0778⟩. ⟨hal-01073491⟩
