Off-policy Learning with Eligibility Traces: A Survey

Matthieu Geist 1, Bruno Scherrer 2
2 MAIA (Autonomous Intelligent Machines), Inria Nancy - Grand Est / LORIA, Department of Complex Systems, Artificial Intelligence & Robotics
Abstract: In the framework of Markov Decision Processes, we consider off-policy learning, that is, the problem of learning a linear approximation of the value function of some fixed policy from a single trajectory possibly generated by some other policy. We briefly review the on-policy learning algorithms of the literature (gradient-based and least-squares-based), adopting a unified algorithmic view. Then, we highlight a systematic approach for adapting them to off-policy learning with eligibility traces. This leads to some known algorithms - off-policy LSTD(λ), LSPE(λ), TD(λ), TDC/GQ(λ) - and suggests new extensions - off-policy FPKF(λ), BRM(λ), gBRM(λ), GTD2(λ). We describe a comprehensive algorithmic derivation of all algorithms in a recursive and memory-efficient form, discuss their known convergence properties, and illustrate their relative empirical behavior on Garnet problems. Our experiments suggest that the most standard algorithms, on- and off-policy LSTD(λ)/LSPE(λ), and TD(λ) when the feature-space dimension is too large for a least-squares approach, perform the best.
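As a concrete illustration of the setting described in the abstract, here is a minimal sketch (not taken from the report) of one common variant of off-policy TD(λ) with linear function approximation, in which a per-decision importance-sampling ratio ρ_t = π(a_t|s_t)/μ(a_t|s_t) is folded into the eligibility trace; the exact placement of this correction differs across the algorithms surveyed. The names phi (feature map), pi (target policy) and mu (behavior policy) are illustrative, not the report's notation.

    import numpy as np

    def off_policy_td_lambda(trajectory, phi, pi, mu, gamma=0.99, lam=0.9, alpha=0.01):
        # trajectory: list of (s, a, r, s_next) tuples generated by the behavior policy mu.
        # phi(s): feature vector of state s; pi(s, a), mu(s, a): action probabilities.
        d = len(phi(trajectory[0][0]))
        theta = np.zeros(d)   # linear value-function parameters, V(s) ~ theta . phi(s)
        z = np.zeros(d)       # eligibility trace
        for s, a, r, s_next in trajectory:
            rho = pi(s, a) / mu(s, a)              # per-decision importance-sampling ratio
            z = rho * (gamma * lam * z + phi(s))   # trace update, corrected by rho
            delta = r + gamma * theta @ phi(s_next) - theta @ phi(s)  # TD error
            theta = theta + alpha * delta * z      # semi-gradient parameter update
        return theta

Least-squares variants such as off-policy LSTD(λ) replace the stochastic-gradient step above with the recursive accumulation of a d x d matrix and a d-dimensional vector, trading O(d^2) memory for sample efficiency, which is why the abstract recommends them unless the feature dimension d is too large.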

https://hal.inria.fr/hal-00644516

Identifiers

  • HAL Id : hal-00644516, version 2
  • ARXIV : 1304.3999

Citation

Matthieu Geist, Bruno Scherrer. Off-policy Learning with Eligibility Traces: A Survey. [Research Report] 2013, 43 p. ⟨hal-00644516v2⟩
