Fitted Q-iteration in continuous action-space MDPs

András Antos¹, Rémi Munos², Csaba Szepesvári³
² SEQUEL - Sequential Learning (LIFL - Laboratoire d'Informatique Fondamentale de Lille; LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal; Inria Lille - Nord Europe)
Abstract: We consider continuous-state, continuous-action batch reinforcement learning, where the goal is to learn a good policy from a sufficiently rich trajectory generated by some policy. We study a variant of fitted Q-iteration in which the greedy action selection is replaced by searching a restricted set of candidate policies for the policy that maximizes the average action values. We provide a rigorous analysis of this algorithm, proving what we believe is the first finite-time bound for value-function-based algorithms for continuous-state and continuous-action problems.
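To make the procedure the abstract describes concrete, here is a minimal, hypothetical Python sketch. It is not the paper's algorithm or analysis: the extra-trees regressor, the finite list of candidate policies, and every name (`fitted_q_iteration`, `batch`, `policies`, `gamma`) are assumptions chosen for illustration. The distinguishing step, per the abstract, is that policy improvement searches a restricted candidate set for the policy maximizing the average action value over the batch, rather than taking a pointwise greedy maximum over a continuous action space.

```python
# Illustrative sketch only; the regressor choice and all names are
# assumptions, not the paper's notation or method.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(batch, policies, n_iterations=50, gamma=0.95):
    """batch: list of (state, action, reward, next_state) transitions,
    with states and actions given as 1-D numpy vectors (assumed).
    policies: finite restricted set of candidate policies, each a
    callable mapping a state to an action."""
    S  = np.array([s  for s, a, r, s2 in batch])
    A  = np.array([a  for s, a, r, s2 in batch])
    R  = np.array([r  for s, a, r, s2 in batch])
    S2 = np.array([s2 for s, a, r, s2 in batch])
    X = np.column_stack([S, A])  # regress Q on (state, action) pairs

    q = None            # current Q-function estimate
    pi = policies[0]    # current policy from the restricted set
    for _ in range(n_iterations):
        if q is None:
            y = R  # first iteration: target is the immediate reward
        else:
            # Bellman targets use the current policy's action at s';
            # no max over the continuous action space is attempted.
            A2 = np.array([pi(s2) for s2 in S2])
            y = R + gamma * q.predict(np.column_stack([S2, A2]))
        q = ExtraTreesRegressor(n_estimators=50).fit(X, y)

        # Policy improvement: pick the candidate policy maximizing
        # the *average* action value over the states in the batch.
        def avg_value(p):
            Ap = np.array([p(s) for s in S])
            return q.predict(np.column_stack([S, Ap])).mean()
        pi = max(policies, key=avg_value)
    return q, pi
```

Averaging the action values over the batch states turns the improvement step into a finite comparison across candidate policies, which is what keeps the search tractable when exact greedy maximization over a continuous action space is not.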
Document type: Report

Cited literature: 17 references

https://hal.inria.fr/inria-00185311
Contributor: Rémi Munos
Submitted on: January 8, 2008

File: rlca.pdf (produced by the authors)

Identifiers

  • HAL Id: inria-00185311, version 2

Citation

András Antos, Rémi Munos, Csaba Szepesvári. Fitted Q-iteration in continuous action-space MDPs. [Technical Report] 2007, 24 pp. ⟨inria-00185311v2⟩
