Fitted Q-iteration in continuous action-space MDPs

András Antos, Rémi Munos, Csaba Szepesvári
SEQUEL - Sequential Learning, LIFL - Laboratoire d'Informatique Fondamentale de Lille, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal, Inria Lille - Nord Europe
Abstract: We consider continuous-state, continuous-action batch reinforcement learning, where the goal is to learn a good policy from a sufficiently rich trajectory generated by some policy. We study a variant of fitted Q-iteration in which the greedy action selection is replaced by a search over a restricted set of candidate policies, choosing the policy that maximizes the average of the action values. We give a rigorous analysis of this algorithm, proving what we believe is the first finite-time bound for value-function-based algorithms in continuous-state, continuous-action problems.
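
The following is a minimal sketch of the scheme the abstract describes, not the report's implementation: fitted Q-iteration in which the continuous-action greedy max is replaced by a search over a restricted (here, finite) set of candidate policies, each scored by its average action value over the batch. The transition format, the extra-trees regressor, the hyperparameters, and the `candidate_policies` interface (callables mapping a state to an action) are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(batch, candidate_policies, n_iterations=50, gamma=0.99):
    """Sketch of the abstract's variant of fitted Q-iteration for
    continuous actions: the greedy max over the action space is replaced
    by a search over a restricted set of candidate policies, chosen to
    maximize the average of the current action-value estimates.

    batch: list of (s, a, r, s_next) transitions (1-D numpy arrays for
    s, a, s_next; scalar r). candidate_policies: iterable of callables
    state -> action. Both interfaces are assumptions for illustration."""
    S      = np.array([s  for s, a, r, s2 in batch])
    A      = np.array([a  for s, a, r, s2 in batch])
    R      = np.array([r  for s, a, r, s2 in batch])
    S_next = np.array([s2 for s, a, r, s2 in batch])
    X = np.hstack([S, A])  # regression inputs: state-action pairs

    def average_value(q, pi):
        # Average action value of policy pi under the current estimate q.
        A_pi = np.array([pi(s) for s in S_next])
        return q.predict(np.hstack([S_next, A_pi])).mean()

    # Q_0: regress the immediate rewards (regressor choice is an assumption).
    q = ExtraTreesRegressor(n_estimators=50).fit(X, R)
    for _ in range(n_iterations):
        # Policy-search step: replaces the greedy max over a continuous
        # action space with a max over the candidate-policy set.
        pi_k = max(candidate_policies, key=lambda pi: average_value(q, pi))
        # Regression step of fitted Q-iteration, using pi_k's actions
        # in place of the greedy action at the next state.
        A_pi = np.array([pi_k(s) for s in S_next])
        y = R + gamma * q.predict(np.hstack([S_next, A_pi]))
        q = ExtraTreesRegressor(n_estimators=50).fit(X, y)
    # Final policy: the candidate maximizing the last value estimate.
    pi_final = max(candidate_policies, key=lambda pi: average_value(q, pi))
    return q, pi_final
```

The restriction to a candidate set is what makes the maximization step tractable when actions are continuous; the report's finite-time bound is stated for this policy-search variant, not for exact greedy selection.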
Document type: Report

https://hal.inria.fr/inria-00185311
Contributor: Rémi Munos
Submitted on: Monday, November 5, 2007

File: rlca.pdf (produced by the author(s))

Identifiers

  • HAL Id: inria-00185311, version 1

Citation

András Antos, Rémi Munos, Csaba Szepesvári. Fitted Q-iteration in continuous action-space MDPs. [Technical Report] 2007, pp.22. ⟨inria-00185311v1⟩
