Transfer from Multiple MDPs

Alessandro Lazaric (1), Marcello Restelli (2)
(1) SEQUEL - Sequential Learning, Inria Lille - Nord Europe; LIFL - Laboratoire d'Informatique Fondamentale de Lille; LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: Transfer reinforcement learning (RL) methods leverage the experience collected on a set of source tasks to speed up RL algorithms. A simple and effective approach is to transfer samples from the source tasks and include them in the training set used to solve a given target task. In this paper, we investigate the theoretical properties of this transfer method and introduce novel algorithms that adapt the transfer process to the similarity between source and target tasks. Finally, we report illustrative experimental results on a continuous chain problem.
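The sample-transfer approach described in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the authors' algorithm: the function name `transfer_samples`, the assumption that per-source similarity weights are given, and the fixed transfer budget are all illustrative choices.

```python
import random


def transfer_samples(source_batches, target_samples, similarities, budget):
    """Build a training set for the target task by mixing its own samples
    with samples drawn from source tasks, in proportion to each source's
    (assumed known) similarity to the target.

    source_batches: list of sample lists, one per source task; each sample
                    is a transition tuple (state, action, reward, next_state)
    target_samples: list of transition tuples collected on the target task
    similarities:   non-negative weights, one per source task
    budget:         maximum total number of transferred samples
    """
    training_set = list(target_samples)
    total = sum(similarities)
    if total == 0:
        # No source is judged similar: fall back to target samples only.
        return training_set
    for batch, sim in zip(source_batches, similarities):
        # Each source contributes a share of the budget proportional
        # to its similarity weight (sampling without replacement).
        n = min(len(batch), int(round(budget * sim / total)))
        training_set.extend(random.sample(batch, n))
    return training_set
```

A batch RL algorithm (e.g. fitted Q-iteration) would then be trained on the returned set; adapting the weights from data, rather than fixing them, is the kind of refinement the paper's adaptive algorithms address.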
Submitted on : Thursday, September 1, 2011 - 11:13:43 AM
  • HAL Id : inria-00618037, version 2
  • arXiv : 1108.6211



Alessandro Lazaric, Marcello Restelli. Transfer from Multiple MDPs. [Technical Report] 2011. ⟨inria-00618037v2⟩


