
Self-Attentional Credit Assignment for Transfer in Reinforcement Learning

Johan Ferret 1, 2 Raphaël Marinier 1 Matthieu Geist 1 Olivier Pietquin 2, 1
2 Scool
Inria Lille - Nord Europe, CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189
Abstract: The ability to transfer knowledge to novel environments and tasks is a sensible desideratum for general learning agents. Despite its apparent promise, transfer in RL is still an open and underexplored research area. In this paper, we take a new perspective on transfer: we suggest that the ability to assign credit unveils structural invariants in the tasks that can be transferred to make RL more sample efficient. Our main contribution is Secret, a novel approach to transfer learning for RL that uses a backward-view credit assignment mechanism based on a self-attentive architecture. Two aspects are key to its generality: it learns to assign credit as a separate offline supervised process, and it exclusively modifies the reward function. Consequently, it can be supplemented by transfer methods that do not modify the reward function, and it can be plugged on top of any RL algorithm.
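The core idea, backward-view credit assignment through causal self-attention, with the resulting credit weights used to redistribute reward, can be illustrated with a minimal numpy sketch. All names here are hypothetical, and the weights are random for illustration; in the paper the attention model is trained offline on a supervised reward-prediction task, which is omitted:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention_credit(states, rewards, Wq, Wk):
    """Backward-view credit: the attention weight from reward step t
    back to an earlier step s is read as the credit of s for r_t."""
    T, d = states.shape
    Q, K = states @ Wq, states @ Wk
    scores = Q @ K.T / np.sqrt(d)
    # causal mask: step t may only attend to steps s <= t
    scores[np.triu(np.ones((T, T), dtype=bool), k=1)] = -np.inf
    attn = softmax(scores, axis=-1)          # (T, T) credit weights
    # shaped reward: each step s receives a share of every later
    # reward r_t proportional to the attention weight attn[t, s]
    shaped = attn.T @ rewards
    return shaped, attn

rng = np.random.default_rng(0)
T, d = 5, 4
states = rng.normal(size=(T, d))
rewards = np.array([0., 0., 0., 0., 1.])     # sparse reward at episode end
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
shaped, attn = causal_self_attention_credit(states, rewards, Wq, Wk)
# redistribution preserves the total return of the trajectory
assert np.isclose(shaped.sum(), rewards.sum())
```

Because each attention row sums to one, redistribution preserves the episode return, which is one way to see why the approach only reshapes the reward function and can sit on top of any RL algorithm.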
Document type :
Conference papers
Contributor: Johan Ferret
Submitted on: Tuesday, March 9, 2021 - 9:56:42 AM
Last modification on: Thursday, March 24, 2022 - 3:42:40 AM
Long-term archiving on: Thursday, June 10, 2021 - 6:07:59 PM


Credit_Alignment_HAL (1).pdf
Files produced by the author(s)


  • HAL Id: hal-03159832, version 1


Johan Ferret, Raphaël Marinier, Matthieu Geist, Olivier Pietquin. Self-Attentional Credit Assignment for Transfer in Reinforcement Learning. IJCAI 2020 - 29th International Joint Conference on Artificial Intelligence, Jul 2020, Yokohama / Virtual, Japan. ⟨hal-03159832⟩


