Conference papers

A Theory of Regularized Markov Decision Processes

Abstract: Many recent successful (deep) reinforcement learning algorithms make use of regularization, generally based on entropy or Kullback-Leibler divergence. We propose a general theory of regularized Markov Decision Processes that generalizes these approaches in two directions: we consider a larger class of regularizers, and we consider the general modified policy iteration approach, encompassing both policy iteration and value iteration. The core building blocks of this theory are a notion of regularized Bellman operator and the Legendre-Fenchel transform, a classical tool of convex optimization. This approach allows for error propagation analyses of general algorithmic schemes of which (possibly variants of) classical algorithms such as Trust Region Policy Optimization, Soft Q-learning, Stochastic Actor Critic or Dynamic Policy Programming are special cases. This also draws connections to proximal convex optimization, especially to Mirror Descent.
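To make the abstract's central object concrete, the following is a minimal sketch (not from the paper) of one instance of a regularized Bellman operator: with negative-entropy regularization at temperature `tau`, the Legendre-Fenchel transform of the regularizer turns the hard max over actions into a log-sum-exp ("soft" max), which is the backup used by Soft Q-learning. The toy MDP shapes and the helper names (`smooth_max`, `soft_bellman_backup`) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def smooth_max(q, tau):
    """tau * logsumexp(q / tau) over actions, computed stably.

    This is the Legendre-Fenchel conjugate of the (scaled) negative
    entropy; as tau -> 0 it recovers the hard max of the standard
    Bellman operator. q has shape (S, A); returns shape (S,).
    """
    m = q.max(axis=1)
    return m + tau * np.log(np.exp((q - m[:, None]) / tau).sum(axis=1))

def soft_bellman_backup(q, P, r, gamma=0.9, tau=0.1):
    """One application of an entropy-regularized Bellman operator.

    q: (S, A) state-action values
    P: (S, A, S) transition kernel (rows sum to 1 over next states)
    r: (S, A) rewards
    """
    v = smooth_max(q, tau)   # (S,) regularized state values
    return r + gamma * (P @ v)  # (S, A) backed-up Q-values
```

Since log-sum-exp upper-bounds the max, each regularized backup dominates the corresponding unregularized one, and the gap shrinks with `tau`.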
Contributor: Bruno Scherrer
Submitted on: Thursday, August 29, 2019 - 11:10:49 AM
Last modification on: Saturday, July 23, 2022 - 3:53:07 AM



  • HAL Id: hal-02273741, version 1
  • arXiv: 1901.11275


Matthieu Geist, Bruno Scherrer, Olivier Pietquin. A Theory of Regularized Markov Decision Processes. ICML 2019 - Thirty-sixth International Conference on Machine Learning, Jun 2019, Long Beach, United States. ⟨hal-02273741⟩


