Transfer from Multiple MDPs

Alessandro Lazaric (1), Marcello Restelli (2)
(1) SEQUEL (Sequential Learning) - LIFL (Laboratoire d'Informatique Fondamentale de Lille), Inria Lille - Nord Europe, LAGIS (Laboratoire d'Automatique, Génie Informatique et Signal)
Abstract: Transfer reinforcement learning (RL) methods leverage the experience collected on a set of source tasks to speed up RL algorithms on a target task. A simple and effective approach is to transfer samples from the source tasks and include them in the training set used to solve the target task. In this paper, we investigate the theoretical properties of this transfer method and introduce novel algorithms that adapt the transfer process on the basis of the similarity between source and target tasks. Finally, we report illustrative experimental results on a continuous chain problem.
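The sample-transfer idea described in the abstract can be sketched in a few lines of Python. The function name, the similarity weights, and the transfer budget below are illustrative assumptions for the sketch, not the adaptive algorithms analyzed in the paper.

import numpy as np

def transfer_training_set(target_samples, source_samples_per_task, similarities, seed=0):
    """Pool target-task samples with samples drawn from several source MDPs.

    Each sample is an (s, a, r, s') transition tuple. `similarities` holds one
    value in [0, 1] per source task; more similar tasks contribute more samples.
    The transfer budget (at most as many transferred samples as target samples)
    and the proportional allocation are illustrative choices, not the paper's
    adaptive transfer mechanism.
    """
    rng = np.random.default_rng(seed)
    pooled = list(target_samples)
    budget = len(target_samples)
    total_sim = sum(similarities) or 1.0
    for samples, sim in zip(source_samples_per_task, similarities):
        n_transfer = min(int(budget * sim / total_sim), len(samples))
        idx = rng.choice(len(samples), size=n_transfer, replace=False)
        pooled.extend(samples[i] for i in idx)
    return pooled  # feed to any batch RL solver, e.g. fitted Q-iteration

The pooled training set can then be passed to any batch RL algorithm for the target task; subsampling or down-weighting dissimilar sources, as in this sketch, is one way to limit the bias introduced by transferred samples when source and target dynamics differ.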
Document type: Conference paper

https://hal.inria.fr/hal-00772620

File

sourcetransfer.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-00772620, version 1

Citation

Alessandro Lazaric, Marcello Restelli. Transfer from Multiple MDPs. NIPS - Twenty-Fifth Annual Conference on Neural Information Processing Systems, Dec 2011, Granada, Spain. ⟨hal-00772620⟩
