A Unified View of TD Algorithms; Introducing Full-Gradient TD and Equi-Gradient Descent TD

Manuel Loth (1, 2), Philippe Preux (1, 2), Manuel Davy (1, 3)
1 SEQUEL - Sequential Learning, LIFL - Laboratoire d'Informatique Fondamentale de Lille / LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal / Inria Lille - Nord Europe
3 LAGIS-SI, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: This paper addresses the issue of policy evaluation in Markov Decision Processes using linear function approximation. It provides a unified view of algorithms such as TD(lambda), LSTD(lambda), iLSTD, and residual-gradient TD: all of them amount to minimizing a gradient function, and they differ in the form of this function and in their means of minimizing it. Two new schemes are introduced in this framework: Full-gradient TD, which generalizes the principle introduced in iLSTD, and Equi-gradient descent TD (EGD TD), which reduces the gradient by successive equi-gradient descents. These three algorithms form a new intermediate family with the interesting property of making much better use of the samples than TD, while keeping a gradient-descent scheme, which is useful for complexity issues and for optimistic policy iteration.
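The unifying view in the abstract — sample-based algorithms that reduce a gradient function under linear value approximation — can be illustrated with the classic member of the family. Below is a minimal sketch of semi-gradient TD(lambda) policy evaluation; the two-state chain MDP, the transition format, and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def td_lambda_episode(theta, episode, phi, alpha=0.1, gamma=0.9, lam=0.8):
    """One pass of semi-gradient TD(lambda) over a trajectory.

    Linear value approximation: V(s) = theta . phi(s).
    episode: list of (s, r, s_next, done) transitions (hypothetical format).
    """
    z = np.zeros_like(theta)                          # eligibility trace
    for s, r, s_next, done in episode:
        v_next = 0.0 if done else theta @ phi(s_next)
        delta = r + gamma * v_next - theta @ phi(s)   # TD error
        z = gamma * lam * z + phi(s)                  # accumulate the trace
        theta = theta + alpha * delta * z             # gradient-style update
    return theta

# Hypothetical 2-state chain: state 0 -> state 1 (reward 0),
# state 1 -> terminal (reward 1); true values are V(0) = 0.9, V(1) = 1.
phi = lambda s: np.eye(2)[s]                          # tabular (one-hot) features
episode = [(0, 0.0, 1, False), (1, 1.0, 1, True)]
theta = np.zeros(2)
for _ in range(500):                                  # replay the episode to convergence
    theta = td_lambda_episode(theta, episode, phi)
# theta approaches [0.9, 1.0]
```

The repeated replay of a single stored episode is what the abstract contrasts against: plain TD uses each sample once per visit, whereas LSTD-like and the proposed full-gradient schemes extract more information from the same samples.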
Document type: Conference papers

Cited literature: 9 references

https://hal.inria.fr/inria-00116936
Contributor: Manuel Loth
Submitted on: Wednesday, November 29, 2006 - 10:12:47 AM
Last modification on: Thursday, February 21, 2019 - 10:52:49 AM
Long-term archiving on: Monday, September 20, 2010 - 4:41:52 PM

Files

unified.pdf (produced by the author(s))

Citation

Manuel Loth, Philippe Preux, Manuel Davy. A Unified View of TD Algorithms; Introducing Full-Gradient TD and Equi-Gradient Descent TD. European Symposium on Artificial Neural Networks, Apr 2007, Bruges, Belgium. ⟨inria-00116936v2⟩
