Boosted and Reward-regularized Classification for Apprenticeship Learning

Abstract : This paper deals with the problem of learning from demonstrations, in which an agent called the apprentice tries to learn a behavior from demonstrations given by another agent called the expert. To address this problem, we place ourselves in the Markov Decision Process (MDP) framework, which is well suited to sequential decision-making problems. One way to tackle the problem is to reduce it to classification, but doing so ignores the MDP structure. Other methods that do take the MDP structure into account either need to solve MDPs, which is a difficult task, or require a choice of features, which is problem-dependent. The main contribution of this paper is to extend a large-margin classification approach by adding a regularization term that accounts for the MDP structure. The derived algorithm, called Reward-regularized Classification for Apprenticeship Learning (RCAL), does not need to solve MDPs. Its major advantage, however, is that it can be boosted: this avoids the choice of features, which is a drawback of parametric approaches. A state-of-the-art experiment (Highway) and generic experiments (structured Garnets) are conducted to show the performance of RCAL compared to algorithms from the literature.
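The objective the abstract describes combines a large-margin classification loss on the expert's demonstrated actions with a regularizer that injects the MDP structure through the implied reward. The sketch below is a minimal illustration under stated assumptions, not the paper's exact formulation: the tabular score function `Q`, the unit margin, the L1 penalty on the implied reward `Q(s,a) - gamma * max_a' Q(s',a')`, and all names are assumptions made for the example.

```python
import numpy as np

def rcal_style_loss(Q, demos, transitions, gamma=0.99, lam=0.1):
    """Illustrative large-margin loss with a reward-sparsity regularizer.

    Q           : (n_states, n_actions) score table (hypothetical tabular parametrization)
    demos       : list of (state, expert_action) pairs from the expert
    transitions : list of (state, action, next_state) samples from the MDP
    """
    # Large-margin term: the expert's action should beat every other
    # action by a margin of 1 (hinge-style structured loss).
    margin_loss = 0.0
    for s, a_exp in demos:
        margins = Q[s] + 1.0      # add margin 1 to every action...
        margins[a_exp] -= 1.0     # ...except the expert's action
        margin_loss += margins.max() - Q[s, a_exp]

    # Regularization term using the MDP structure: the reward implied by Q,
    # R(s, a) = Q(s, a) - gamma * max_a' Q(s', a'), is pushed toward
    # sparsity via an L1 penalty over sampled transitions.
    reg = 0.0
    for s, a, s_next in transitions:
        reg += abs(Q[s, a] - gamma * Q[s_next].max())

    return margin_loss + lam * reg
```

With this shape, the loss is zero when the expert's action already wins by the margin and the implied rewards vanish; in a boosted variant, `Q` would be a weighted sum of weak learners rather than a table.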
Document type :
Conference papers

https://hal.inria.fr/hal-01107837
Contributor : Olivier Pietquin
Submitted on : Tuesday, July 23, 2019 - 2:06:55 PM
Last modification on : Wednesday, July 31, 2019 - 4:18:02 PM

Identifiers

  • HAL Id : hal-01107837, version 1

Citation

Bilal Piot, Matthieu Geist, Olivier Pietquin. Boosted and Reward-regularized Classification for Apprenticeship Learning. AAMAS 2014 : 13th International Conference on Autonomous Agents and Multiagent Systems, May 2014, Paris, France. pp.1249-1256. ⟨hal-01107837⟩
