Classification-based Policy Iteration with a Critic

Victor Gabillon 1, Alessandro Lazaric 1, Mohammad Ghavamzadeh 1, Bruno Scherrer 2

1 SEQUEL - Sequential Learning (LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal)
2 MAIA - Autonomous Intelligent Machine (INRIA Lorraine, LORIA - Laboratoire Lorrain de Recherche en Informatique et ses Applications)
Abstract: In this paper, we study the effect of adding a value function approximation component (critic) to rollout classification-based policy iteration (RCPI) algorithms. The idea is to use a critic to approximate the return after we truncate the rollout trajectories. This allows us to control the bias and variance of the rollout estimates of the action-value function. Therefore, the introduction of a critic can improve the accuracy of the rollout estimates and, as a result, enhance the performance of the RCPI algorithm. We present a new RCPI algorithm, called direct policy iteration with critic (DPI-Critic), and provide its finite-sample analysis when the critic is based on the LSTD method. We empirically evaluate the performance of DPI-Critic and compare it with DPI and LSPI in two benchmark reinforcement learning problems.
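As a rough illustration of the truncated-rollout idea described in the abstract, the sketch below estimates Q(s, a) with an h-step rollout and replaces the discounted tail of the return with a critic's value prediction, trading the variance of long rollouts for the (controllable) bias of the critic. The toy chain MDP, the always-right policy, and the exact critic are illustrative assumptions, not the paper's benchmarks or its LSTD critic.

```python
# Sketch of a truncated rollout bootstrapped by a critic (illustrative only).
# Assumed toy setting: a deterministic chain with states 0..N, actions -1/+1,
# and a single reward of 1 for entering state N, which ends the episode.

GAMMA = 0.9
N = 10  # chain states 0..N; entering state N gives reward 1 and terminates


def step(state, action):
    """Deterministic chain dynamics; action is -1 (left) or +1 (right)."""
    next_state = max(0, min(N, state + action))
    reward = 1.0 if next_state == N else 0.0
    return next_state, reward, next_state == N


def q_rollout_with_critic(state, action, policy, critic, horizon, gamma=GAMMA):
    """h-step rollout estimate of Q(state, action), bootstrapped by the critic."""
    total, discount = 0.0, 1.0
    s, a = state, action
    for _ in range(horizon):
        s, r, done = step(s, a)
        total += discount * r
        discount *= gamma
        if done:            # episode ended before the truncation point
            return total
        a = policy(s)       # after the first step, follow the rollout policy
    # truncate the trajectory: the critic stands in for the remaining return
    return total + discount * critic(s)


def right_policy(state):
    return +1


def exact_critic(state):
    """Exact V^pi of the always-right policy: the single reward arrives after
    N - state steps, so it is discounted by gamma^(N - state - 1)."""
    return 0.0 if state >= N else GAMMA ** (N - state - 1)


# With a perfect critic, a 3-step truncated rollout reproduces the full return;
# an approximate critic (e.g. LSTD, as in the paper) would instead inject its
# error only through the single discounted bootstrap term.
q_short = q_rollout_with_critic(0, +1, right_policy, exact_critic, horizon=3)
q_full = q_rollout_with_critic(0, +1, right_policy, lambda s: 0.0, horizon=N)
```

Here both estimates equal gamma^(N-1): the full rollout collects the reward directly, while the truncated one stops after 3 steps and lets the critic supply the remaining discounted return.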
Document type: Conference papers
Contributor: Victor Gabillon
Submitted on: Friday, November 25, 2011 - 3:24:52 PM
Last modification on: Thursday, January 20, 2022 - 4:16:24 PM
Long-term archiving on: Sunday, February 26, 2012 - 2:32:01 AM
HAL Id: hal-00644935, version 1
Victor Gabillon, Alessandro Lazaric, Mohammad Ghavamzadeh, Bruno Scherrer. Classification-based Policy Iteration with a Critic. International Conference on Machine Learning (ICML), Jun 2011, Seattle, United States. pp.1049-1056. ⟨hal-00644935⟩