Classification-based Policy Iteration with a Critic - Archive ouverte HAL
Conference Papers, Year: 2011

Classification-based Policy Iteration with a Critic

Victor Gabillon, Alessandro Lazaric, Mohammad Ghavamzadeh, Bruno Scherrer
Abstract

In this paper, we study the effect of adding a value function approximation component (critic) to rollout classification-based policy iteration (RCPI) algorithms. The idea is to use a critic to approximate the return after we truncate the rollout trajectories. This allows us to control the bias and variance of the rollout estimates of the action-value function. Therefore, the introduction of a critic can improve the accuracy of the rollout estimates, and as a result, enhance the performance of the RCPI algorithm. We present a new RCPI algorithm, called direct policy iteration with critic (DPI-Critic), and provide its finite-sample analysis when the critic is based on the LSTD method. We empirically evaluate the performance of DPI-Critic and compare it with DPI and LSPI in two benchmark reinforcement learning problems.
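The core idea in the abstract can be illustrated with a short sketch: estimate the action-value of a state-action pair by rolling out the policy for a fixed horizon and letting a critic approximate the return beyond the truncation point. This is only a minimal illustration of the truncated-rollout idea, not the paper's DPI-Critic algorithm; the environment interface, the policy, and the critic used here are all hypothetical placeholders.

```python
def truncated_rollout_estimate(env, state, action, policy, critic,
                               gamma=0.95, horizon=10):
    """Estimate Q(state, action) with a truncated rollout plus a critic.

    The rollout accumulates discounted rewards for `horizon` steps; the
    critic's value of the final state stands in for the (unobserved)
    remainder of the return, trading rollout variance for critic bias.
    """
    # First step: take the queried action.
    s, r = env.step(state, action)
    ret = r
    discount = gamma
    # Remaining steps: follow the policy being evaluated.
    for _ in range(1, horizon):
        a = policy(s)
        s, r = env.step(s, a)
        ret += discount * r
        discount *= gamma
    # Critic approximates the return after the truncation horizon.
    return ret + discount * critic(s)


# Toy check on a deterministic chain that pays reward 1 every step:
# with a perfect critic (value 1 / (1 - gamma)), the estimate recovers
# the true infinite-horizon return regardless of the horizon.
class ChainEnv:
    def step(self, state, action):
        return state + 1, 1.0

q = truncated_rollout_estimate(
    ChainEnv(), state=0, action=0,
    policy=lambda s: 0,
    critic=lambda s: 1.0 / (1.0 - 0.95),
)
```

With a shorter horizon the estimate leans more on the critic (lower variance, more bias); a longer horizon does the opposite, which is the bias-variance trade-off the abstract describes.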
Main file: dpi-critic.pdf (221.22 KB)
Origin: Files produced by the author(s)
Dates and versions

hal-00644935, version 1 (25-11-2011)

Identifiers

  • HAL Id: hal-00644935, version 1

Cite

Victor Gabillon, Alessandro Lazaric, Mohammad Ghavamzadeh, Bruno Scherrer. Classification-based Policy Iteration with a Critic. International Conference on Machine Learning (ICML), Jun 2011, Seattle, United States. pp.1049-1056. ⟨hal-00644935⟩
