l1-penalized projected Bellman residual

Matthieu Geist 1, Bruno Scherrer 2
2 MAIA - Autonomous intelligent machine, INRIA Lorraine, LORIA - Laboratoire Lorrain de Recherche en Informatique et ses Applications
Abstract : We consider the task of feature selection for value function approximation in reinforcement learning. A promising approach consists in combining the Least-Squares Temporal Difference (LSTD) algorithm with $\ell_1$-regularization, which has proven effective in the supervised learning community. This has been done recently with the LARS-TD algorithm, which replaces the projection operator of LSTD with an $\ell_1$-penalized projection and solves the corresponding fixed-point problem. However, this approach is not guaranteed to be correct in the general off-policy setting. We take a different route by adding an $\ell_1$-penalty term to the projected Bellman residual, which requires weaker assumptions while offering comparable performance. This comes at the cost of higher computational complexity if only part of the regularization path is computed. Nevertheless, our approach reduces to a supervised learning problem, which lets us envision easy extensions to other penalties.
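The reduction to a supervised learning problem can be made concrete. Since the orthogonal projection onto the span of the features $\Phi$ is $\Pi = \Phi(\Phi^\top\Phi)^{-1}\Phi^\top$, the $\ell_1$-penalized projected Bellman residual objective $\|\Pi(R + \gamma\Phi'\theta) - \Phi\theta\|^2 + \lambda\|\theta\|_1$ rewrites, with $M = \Phi^\top\Phi$, $A = \Phi^\top(\Phi - \gamma\Phi')$ and $b = \Phi^\top R$, as $(b - A\theta)^\top M^{-1}(b - A\theta) + \lambda\|\theta\|_1$, i.e. a standard Lasso problem once $M^{-1}$ is factored. The snippet below is a minimal sketch of this reduction, not the authors' implementation: the sample feature matrices Phi and Phi_next, the reward vector R, and the parameters gamma and lam are assumed inputs, and scikit-learn's coordinate-descent Lasso solver stands in for the homotopy/LARS-type solver one would use to trace the regularization path.

    # Illustrative sketch: l1-penalized projected Bellman residual as a Lasso problem.
    # Assumes n transition samples with features Phi (n x p) for sampled states,
    # Phi_next (n x p) for successor states, and rewards R (length n).
    import numpy as np
    from sklearn.linear_model import Lasso

    def l1_pbr(Phi, Phi_next, R, gamma=0.99, lam=0.1):
        """Minimize ||Pi(R + gamma*Phi' theta) - Phi theta||^2 + lam*||theta||_1."""
        M = Phi.T @ Phi                       # Gram matrix of the features (assumed invertible)
        A = Phi.T @ (Phi - gamma * Phi_next)
        b = Phi.T @ R
        # ||Pi x||^2 = (Phi^T x)^T M^{-1} (Phi^T x); factor M^{-1} = L^T L
        C = np.linalg.cholesky(M)             # M = C C^T, C lower triangular
        L = np.linalg.inv(C)                  # hence L^T L = M^{-1}
        X, y = L @ A, L @ b                   # Lasso design matrix and target
        # scikit-learn's Lasso minimizes (1/(2*n_rows))*||y - Xw||^2 + alpha*||w||_1
        lasso = Lasso(alpha=lam / (2 * X.shape[0]), fit_intercept=False, max_iter=10000)
        lasso.fit(X, y)
        return lasso.coef_

For a single penalty value this amounts to one p x p factorization plus a Lasso solve; a homotopy method (e.g. scikit-learn's lars_path applied to X and y) could instead trace the full regularization path of the same problem.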
Document type :
Conference papers

Cited literature [25 references]

https://hal.inria.fr/hal-00644507
Contributor : Bruno Scherrer
Submitted on : Thursday, November 24, 2011 - 3:16:07 PM
Last modification on : Thursday, March 29, 2018 - 11:06:04 AM
Long-term archiving on : Friday, November 16, 2012 - 11:58:09 AM

File

gs_ewrl_l1_cr.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-00644507, version 1

Citation

Matthieu Geist, Bruno Scherrer. l1-penalized projected Bellman residual. European Workshop on Reinforcement Learning (EWRL 11), Sep 2011, Athens, Greece. ⟨hal-00644507⟩
