Classification-based Policy Iteration with a Critic

Victor Gabillon¹, Alessandro Lazaric¹, Mohammad Ghavamzadeh¹, Bruno Scherrer²
¹ SEQUEL (Sequential Learning team), LIFL - Laboratoire d'Informatique Fondamentale de Lille, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal, Inria Lille - Nord Europe
² MAIA (Autonomous Intelligent Machines team), INRIA Lorraine, LORIA - Laboratoire Lorrain de Recherche en Informatique et ses Applications
Abstract: In this paper, we study the effect of adding a value function approximation component (critic) to rollout classification-based policy iteration (RCPI) algorithms. The idea is to use a critic to approximate the return after we truncate the rollout trajectories. This allows us to control the bias and variance of the rollout estimates of the action-value function. Therefore, the introduction of a critic can improve the accuracy of the rollout estimates, and as a result, enhance the performance of the RCPI algorithm. We present a new RCPI algorithm, called direct policy iteration with critic (DPI-Critic), and provide its finite-sample analysis when the critic is based on the LSTD method. We empirically evaluate the performance of DPI-Critic and compare it with DPI and LSPI in two benchmark reinforcement learning problems.
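The core idea of the abstract — estimate an action-value by averaging truncated rollouts and letting a critic V approximate the return beyond the truncation horizon — can be sketched as follows. This is a minimal illustration on a toy chain MDP, not the paper's implementation; all names (`chain_step`, `truncated_rollout_q`, `zero_critic`) are hypothetical, and the critic here is a placeholder function rather than an LSTD-trained one.

```python
import numpy as np

def chain_step(state, action, n_states=5):
    """Toy deterministic chain MDP: action +1/-1 moves right/left;
    reward 1.0 whenever the right end is reached, 0 otherwise."""
    next_state = int(np.clip(state + action, 0, n_states - 1))
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

def truncated_rollout_q(state, action, policy, critic, horizon, gamma, n_rollouts):
    """Average of n_rollouts truncated returns, with the critic bootstrapping
    the tail:  Q_hat = (1/M) sum_m [ sum_{t<H} gamma^t r_t + gamma^H V(s_H) ]."""
    estimates = []
    for _ in range(n_rollouts):
        s, ret = state, 0.0
        discount = 1.0
        # first step takes the queried action, remaining steps follow the policy
        s, r = chain_step(s, action)
        ret += discount * r
        discount *= gamma
        for _ in range(1, horizon):
            a = policy(s)
            s, r = chain_step(s, a)
            ret += discount * r
            discount *= gamma
        ret += discount * critic(s)  # critic replaces the truncated tail of the return
        estimates.append(ret)
    return float(np.mean(estimates))

policy_right = lambda s: +1   # a fixed policy that always moves right
zero_critic = lambda s: 0.0   # critic = 0 recovers a plain truncated rollout (pure DPI)

q_right = truncated_rollout_q(2, +1, policy_right, zero_critic,
                              horizon=3, gamma=0.9, n_rollouts=10)
q_left = truncated_rollout_q(2, -1, policy_right, zero_critic,
                             horizon=3, gamma=0.9, n_rollouts=10)
print(q_right, q_left)
```

Swapping `zero_critic` for a learned value function (e.g. one fit by LSTD, as in the paper) is what turns the plain truncated-rollout estimate into the DPI-Critic estimate: a shorter horizon lowers variance, while the critic's bootstrap term controls the bias introduced by truncation.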
Document type: Conference paper

https://hal.inria.fr/hal-00644935
Submitted on: November 25, 2011


Identifiers

  • HAL Id: hal-00644935, version 1

Citation

Victor Gabillon, Alessandro Lazaric, Mohammad Ghavamzadeh, Bruno Scherrer. Classification-based Policy Iteration with a Critic. International Conference on Machine Learning (ICML), Jun 2011, Seattle, United States. pp.1049-1056. ⟨hal-00644935⟩
