Value-Iteration Based Fitted Policy Iteration: Learning with a Single Trajectory

András Antos (1), Csaba Szepesvári (2), Rémi Munos (3)
(3) SEQUEL (Sequential Learning), Inria Lille - Nord Europe; LIFL (Laboratoire d'Informatique Fondamentale de Lille); LAGIS (Laboratoire d'Automatique, Génie Informatique et Signal)
Abstract: We consider batch reinforcement learning problems in continuous-space, expected total discounted-reward Markovian Decision Problems, where the training data consist of a single trajectory of some fixed behaviour policy. The algorithm studied is policy iteration in which, at each iteration, the action-value function of the intermediate policy is obtained by means of approximate value iteration. PAC-style polynomial bounds are derived on the number of samples needed to guarantee near-optimal performance. The bounds depend on the mixing rate of the trajectory, the smoothness properties of the underlying Markovian Decision Problem, and the approximation power and capacity of the function set used. One of the main novelties of the paper is that new smoothness constraints are introduced, thereby significantly extending the scope of previous results.
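
The algorithm described in the abstract (an outer policy-iteration loop whose evaluation step runs approximate value iteration over action-value functions, fitted to a batch of transitions collected along a single behaviour-policy trajectory) can be illustrated with a short sketch. The code below is a hypothetical rendering only: the regressor (`ExtraTreesRegressor`), the hyper-parameters, and all function and variable names are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor  # illustrative regressor choice


def fitted_policy_iteration(transitions, n_actions, gamma=0.95,
                            n_policy_iters=10, n_eval_iters=30):
    """Minimal sketch: policy iteration whose evaluation step is approximate
    value iteration on Q-functions, fitted to batch data from one trajectory.

    `transitions`: list of (state, action, reward, next_state) tuples collected
    by a fixed behaviour policy.
    """
    s = np.array([t[0] for t in transitions], dtype=float)
    a = np.array([t[1] for t in transitions])
    r = np.array([t[2] for t in transitions], dtype=float)
    s_next = np.array([t[3] for t in transitions], dtype=float)
    sa = np.column_stack([s, a])                      # regression inputs (s, a)

    def greedy_actions(q_model, states):
        """Greedy action choices with respect to a fitted Q-model."""
        q_values = np.column_stack([
            q_model.predict(np.column_stack([states, np.full(len(states), act)]))
            for act in range(n_actions)])
        return q_values.argmax(axis=1)

    q_outer = None                                    # Q-estimate of the previous policy
    for _ in range(n_policy_iters):
        # Current policy: greedy w.r.t. the previous Q (arbitrary at the start).
        if q_outer is None:
            pi_next = np.zeros(len(s_next), dtype=int)
        else:
            pi_next = greedy_actions(q_outer, s_next)
        sa_next = np.column_stack([s_next, pi_next])

        # Policy evaluation by approximate value iteration: repeatedly regress
        # on the sampled Bellman targets for this fixed policy.
        q_eval = None
        for _ in range(n_eval_iters):
            if q_eval is None:
                targets = r
            else:
                targets = r + gamma * q_eval.predict(sa_next)
            q_eval = ExtraTreesRegressor(n_estimators=50).fit(sa, targets)

        q_outer = q_eval                              # improvement happens implicitly
                                                      # at the next loop head
    return q_outer
```

The inner loop repeatedly regresses on sampled Bellman targets for the fixed policy, so its fixed point approximates that policy's action-value function; the outer loop then switches to the greedy policy of the resulting estimate, which is the policy-improvement step.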
Document type: Conference papers

Cited literature: 16 references

https://hal.inria.fr/inria-00124833
Contributor: Rémi Munos
Submitted on: Tuesday, January 16, 2007 - 1:34:47 PM
Last modification on: Thursday, February 21, 2019 - 10:52:49 AM
Long-term archiving on: Friday, September 21, 2012 - 10:15:57 AM

Files

sapi_adprl_final.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: inria-00124833, version 1


Citation

András Antos, Csaba Szepesvári, Rémi Munos. Value-Iteration Based Fitted Policy Iteration: Learning with a Single Trajectory. IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning (ADPRL), 2007, Hawaii, United States. ⟨inria-00124833⟩
