Transfer from Multiple MDPs

Alessandro Lazaric ¹, Marcello Restelli ²
¹ SEQUEL - Sequential Learning; LIFL - Laboratoire d'Informatique Fondamentale de Lille; Inria Lille - Nord Europe; LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: Transfer reinforcement learning (RL) methods leverage the experience collected on a set of source tasks to speed up RL algorithms. A simple and effective approach is to transfer samples from the source tasks and include them in the training set used to solve the target task. In this paper, we investigate the theoretical properties of this transfer method and introduce novel algorithms that adapt the transfer process to the similarity between source and target tasks. Finally, we report illustrative experimental results on a continuous chain problem.
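The sample-transfer scheme sketched in the abstract — pooling source-task samples into the target training set, with the amount transferred adapted to task similarity — could look roughly like the following. This is a minimal illustrative sketch, not the authors' algorithm: the function name, the proportional-allocation rule, and the similarity scores are all assumptions for exposition.

```python
import random

def transfer_samples(target_samples, source_tasks, similarity, budget):
    """Build a training set for the target task by pooling its own samples
    with samples drawn from each source task in proportion to an
    (assumed, externally estimated) similarity score.

    target_samples: list of (s, a, r, s') transitions from the target task
    source_tasks:   dict mapping task name -> list of transitions
    similarity:     dict mapping task name -> nonnegative similarity score
    budget:         total number of source samples to transfer
    """
    total = sum(similarity[task] for task in source_tasks)
    training_set = list(target_samples)
    for task, samples in source_tasks.items():
        # Allocate the transfer budget proportionally to similarity:
        # more samples are drawn from sources that resemble the target.
        n = int(budget * similarity[task] / total) if total > 0 else 0
        training_set.extend(random.sample(samples, min(n, len(samples))))
    return training_set
```

The pooled training set can then be fed to any batch RL algorithm (e.g., fitted Q-iteration) in place of a target-only dataset; with similarities 0.9 and 0.1 and a budget of 100, the sketch above transfers 90 and 10 samples from the two sources, respectively.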
Document type :
Conference papers

Cited literature: 12 references
Contributor: Alessandro Lazaric
Submitted on : Thursday, January 10, 2013 - 6:34:44 PM
Last modification on : Thursday, January 20, 2022 - 4:17:18 PM
Long-term archiving on: Saturday, April 1, 2017 - 3:41:20 AM

Files produced by the author(s)

  • HAL Id : hal-00772620, version 1



Alessandro Lazaric, Marcello Restelli. Transfer from Multiple MDPs. NIPS - Twenty-Fifth Annual Conference on Neural Information Processing Systems, Dec 2011, Granada, Spain. ⟨hal-00772620⟩