Boosted and Reward-regularized Classification for Apprenticeship Learning

Conference Paper, 2014

Bilal Piot (1, 2), Matthieu Geist (2), Olivier Pietquin (3)

Abstract

This paper deals with the problem of learning from demonstrations, where an agent called the apprentice tries to learn a behavior from demonstrations given by another agent called the expert. To address this problem, we place ourselves in the Markov Decision Process (MDP) framework, which is well suited to sequential decision-making problems. One way to tackle this problem is to reduce it to classification, but doing so ignores the MDP structure. Other methods that do take the MDP structure into account need to solve MDPs, which is a difficult task, and/or require a choice of features that is problem-dependent. The main contribution of the paper is to extend a large-margin approach, which is a classification method, by adding a regularization term that takes the MDP structure into account. The derived algorithm, called Reward-regularized Classification for Apprenticeship Learning (RCAL), does not need to solve MDPs. Its major advantage, however, is that it can be boosted: this avoids the choice of features, which is a drawback of parametric approaches. A state-of-the-art experiment (Highway) and generic experiments (structured Garnets) are conducted to show the performance of RCAL compared to algorithms from the literature.
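The reduction-to-classification baseline that RCAL extends can be sketched as a plain multiclass large-margin classifier over expert (state, action) pairs: each action gets a linear score on the state features, and the expert's action must beat every other action by a margin. The sketch below is illustrative only — the feature map, update rule (a subgradient step on the Crammer–Singer hinge loss), and hyperparameters are assumptions, not the authors' setup, and it deliberately omits RCAL's reward-regularization term, which is what distinguishes the paper's method from this baseline.

```python
import numpy as np

def train_large_margin_policy(states, actions, n_actions,
                              lr=0.1, epochs=200, lam=0.01):
    """Multiclass large-margin (Crammer-Singer hinge) classifier:
    score(s, a) = W[a] . s; the expert action must outscore every
    other action by a margin of 1 on each demonstrated state."""
    d = states.shape[1]
    W = np.zeros((n_actions, d))
    for _ in range(epochs):
        for s, a in zip(states, actions):
            scores = W @ s
            # Most-violating action: max over a' != a of 1 + score(a') ,
            # compared against score(a) (no margin against itself).
            augmented = scores + 1.0
            augmented[a] -= 1.0
            a_hat = int(np.argmax(augmented))
            if a_hat != a:
                # Subgradient step: raise the expert action's score,
                # lower the margin violator's.
                W[a] += lr * s
                W[a_hat] -= lr * s
        W *= (1.0 - lr * lam)  # L2 shrinkage, keeps the margin meaningful
    return W

def greedy_policy(W, s):
    """Apprentice policy: act greedily with respect to the learned scores."""
    return int(np.argmax(W @ s))
```

Trained on linearly separable demonstrations (e.g. a toy expert that picks the action indexed by the largest state coordinate), the greedy policy recovers the expert's choices on most states; the point of the baseline is exactly what the abstract notes — nothing here uses the MDP's transition structure.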

Dates and versions

hal-01107837, version 1 (23-07-2019)

Identifiers

  • HAL Id: hal-01107837, version 1

Cite

Bilal Piot, Matthieu Geist, Olivier Pietquin. Boosted and Reward-regularized Classification for Apprenticeship Learning. AAMAS 2014: 13th International Conference on Autonomous Agents and Multiagent Systems, May 2014, Paris, France. pp. 1249-1256. ⟨hal-01107837⟩