
Difference of Convex Functions Programming for Reinforcement Learning

Bilal Piot (1, 2), Matthieu Geist (1), Olivier Pietquin (3, 4, 2)
2 SEQUEL - Sequential Learning
Inria Lille - Nord Europe, CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189
Abstract: Large Markov Decision Processes are usually solved using Approximate Dynamic Programming methods such as Approximate Value Iteration or Approximate Policy Iteration. The main contribution of this paper is to show that, alternatively, the optimal state-action value function can be estimated using Difference of Convex functions (DC) Programming. To do so, we study the minimization of a norm of the Optimal Bellman Residual (OBR) T*Q − Q, where T* is the so-called optimal Bellman operator. Controlling this residual allows controlling the distance to the optimal action-value function, and we show that minimizing an empirical norm of the OBR is consistent in the Vapnik sense. Finally, we frame this optimization problem as a DC program. That allows envisioning the use of the large related literature on DC Programming to address the Reinforcement Learning problem.
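To make the quantity the abstract minimizes concrete, here is a minimal NumPy sketch of the optimal Bellman operator and an empirical norm of the Optimal Bellman Residual T*Q − Q on a toy tabular MDP. The MDP's transition matrix, rewards, and discount factor below are hypothetical illustration values, not from the paper; the paper's actual contribution (casting the OBR minimization as a DC program) is not reproduced here.

```python
import numpy as np

def optimal_bellman_operator(Q, P, R, gamma):
    """Apply the optimal Bellman operator T* to a state-action value table Q.

    (T*Q)(s, a) = R(s, a) + gamma * sum_{s'} P(s'| s, a) * max_{a'} Q(s', a')
    """
    # P has shape (S, A, S'); contracting its last axis with the (S,)-vector
    # of greedy next-state values yields an (S, A) table, matching R.
    return R + gamma * P @ Q.max(axis=1)

def empirical_obr_norm(Q, P, R, gamma, p=2):
    """Empirical L_p norm of the Optimal Bellman Residual T*Q - Q."""
    residual = optimal_bellman_operator(Q, P, R, gamma) - Q
    return np.mean(np.abs(residual) ** p) ** (1.0 / p)

# Toy 2-state, 2-action MDP (hypothetical numbers, for illustration only).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])   # P[s, a, s']
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])                 # R[s, a]
gamma = 0.9

# Since T* is a gamma-contraction, iterating it (value iteration) drives the
# OBR norm toward zero; a Q with zero OBR is the optimal Q*.
Q = np.zeros_like(R)
for _ in range(200):
    Q = optimal_bellman_operator(Q, P, R, gamma)

print(empirical_obr_norm(Q, P, R, gamma))  # near zero after convergence
```

The paper's point is that, instead of iterating T* as above, one can minimize this empirical OBR norm directly, and that this objective decomposes as a difference of convex functions.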
Document type: Conference papers
Cited literature: 22 references
Contributor: Olivier Pietquin
Submitted on: Friday, January 16, 2015 - 4:49:27 PM




  • HAL Id: hal-01104419, version 1


Bilal Piot, Matthieu Geist, Olivier Pietquin. Difference of Convex Functions Programming for Reinforcement Learning. Advances in Neural Information Processing Systems (NIPS 2014), Dec 2014, Montreal, Canada. ⟨hal-01104419⟩


