Conference papers

Semi-Supervised Apprenticeship Learning

Michal Valko 1 Mohammad Ghavamzadeh 1 Alessandro Lazaric 1
1 SEQUEL - Sequential Learning
LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: In apprenticeship learning we aim to learn a good policy by observing the behavior of an expert or a set of experts. In particular, we consider the case where the expert acts so as to maximize an unknown reward function defined as a linear combination of a set of state features. In this paper, we consider the setting where we observe many sample trajectories (i.e., sequences of states) but only one or a few of them are labeled as expert trajectories. We investigate the conditions under which the remaining unlabeled trajectories can help in learning a good policy. In particular, we define an extension of the max-margin inverse reinforcement learning algorithm of Abbeel and Ng (2004) where, at each iteration, the max-margin optimization step is replaced by a semi-supervised optimization problem which favors classifiers that separate clusters of trajectories. Finally, we report empirical results on two grid-world domains showing that the semi-supervised algorithm outputs a better policy in fewer iterations than the related algorithm that does not take the unlabeled trajectories into account.
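The abstract builds on the apprenticeship-learning loop of Abbeel and Ng (2004): estimate the expert's discounted feature expectations from labeled trajectories, then iteratively search for reward weights that separate the expert from the policies found so far. The sketch below shows the supervised baseline only (the projection variant of that loop); the paper's contribution replaces the separation step with a semi-supervised one over unlabeled trajectory clusters, which is not reproduced here. Function names, the one-hot feature map, and the absence of an MDP solver (the new policy's feature expectations are supplied by the caller) are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def feature_expectations(trajectories, phi, gamma=0.99):
    """Estimate mu = E[sum_t gamma^t * phi(s_t)] by averaging the
    discounted feature sums over a set of sampled state sequences."""
    d = phi(trajectories[0][0]).shape[0]
    mu = np.zeros(d)
    for traj in trajectories:
        for t, s in enumerate(traj):
            mu += (gamma ** t) * phi(s)
    return mu / len(trajectories)

def projection_step(mu_E, mu_bar, mu_new):
    """One iteration of the projection variant of Abbeel & Ng (2004):
    project the expert's feature expectations mu_E onto the segment
    between the running estimate mu_bar and the feature expectations
    mu_new of the most recent policy. Returns the next reward weights
    w = mu_E - mu_bar', the updated estimate, and the current margin
    (distance to the expert), which is non-increasing."""
    direction = mu_new - mu_bar
    lam = np.clip(direction @ (mu_E - mu_bar) / (direction @ direction), 0.0, 1.0)
    mu_bar_next = mu_bar + lam * direction
    w = mu_E - mu_bar_next
    return w, mu_bar_next, np.linalg.norm(w)

# Illustrative usage on a 2-state chain with one-hot state features:
phi = lambda s: np.eye(2)[s]
mu_E = feature_expectations([[0, 1]], phi, gamma=0.5)   # expert visits 0 then 1
```

In the full algorithm, `w` would be handed to an MDP solver to compute the next policy, whose feature expectations feed the next `projection_step`; the loop stops once the margin falls below a tolerance.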
Contributor: Michal Valko
Submitted on: Wednesday, January 16, 2013




HAL Id: hal-00747921, version 2



Michal Valko, Mohammad Ghavamzadeh, Alessandro Lazaric. Semi-Supervised Apprenticeship Learning. The 10th European Workshop on Reinforcement Learning (EWRL 2012), Jun 2012, Edinburgh, United Kingdom. pp.131-141. ⟨hal-00747921v2⟩


