SEQUEL - Sequential Learning
Inria Lille - Nord Europe, CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille, UMR 9189
Abstract : A common challenge in reinforcement learning is how to convert an agent's interactions with an environment into fast and robust learning. For instance, earlier work uses domain knowledge to improve existing reinforcement learning algorithms on complex tasks. While promising, such previously acquired knowledge is often costly to obtain and hard to scale up. Instead, we consider problem knowledge in the form of signals from quantities relevant to solving any task, e.g., self-performance assessment and accurate expectations. $\mathcal{V}^{ex}$ is one such quantity: the fraction of variance explained by the value function $V$, which measures the discrepancy between $V$ and the returns. Taking advantage of $\mathcal{V}^{ex}$, we propose MERL, a general framework for structuring reinforcement learning by injecting problem knowledge into policy gradient updates. As a result, the agent is not only optimized for a reward but also learns from problem-focused quantities provided by MERL, applicable out of the box to any task. In this paper: (a) we introduce and define MERL, the multi-head reinforcement learning framework used throughout this work; (b) we conduct experiments across a variety of standard benchmark environments, including 9 continuous control tasks, where results show improved performance; (c) we demonstrate that MERL also improves transfer learning on a set of challenging pixel-based tasks; (d) we discuss how MERL tackles the problem of reward sparsity and better conditions the feature space of reinforcement learning agents.
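The abstract does not spell out the formula for $\mathcal{V}^{ex}$; a minimal sketch, assuming the standard definition of the fraction of variance explained, $1 - \operatorname{Var}(R - V)/\operatorname{Var}(R)$ for returns $R$ and value estimates $V$ (the function name and example data below are illustrative, not from the paper):

```python
import numpy as np

def explained_variance(returns, values):
    """Fraction of the returns' variance explained by the value estimates:
    1 - Var(returns - values) / Var(returns).
    Values near 1 mean the critic tracks the returns well; values near 0
    mean it explains none of their variability."""
    var_returns = np.var(returns)
    if var_returns == 0:
        return float("nan")  # undefined when the returns are constant
    return 1.0 - np.var(np.asarray(returns) - np.asarray(values)) / var_returns

# A critic that matches the returns exactly explains all of their variance.
returns = np.array([1.0, 2.0, 3.0, 4.0])
print(explained_variance(returns, returns))        # 1.0
# A constant critic explains none of it.
print(explained_variance(returns, np.zeros(4)))    # 0.0
```

Under this definition, a quantity like this can be treated as a per-batch scalar signal of self-performance assessment, which is the kind of problem-focused quantity the abstract describes.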
Document type :
Conference papers
https://hal.inria.fr/hal-02305105
Contributor : Yannis Flet-Berliac
Submitted on : Friday, November 29, 2019 - 3:28:14 PM
Last modification on : Wednesday, March 11, 2020 - 4:47:19 PM
Files : main.pdf (produced by the author(s))
Identifiers

• HAL Id : hal-02305105, version 3
• ARXIV : 1909.11939

Citation

Yannis Flet-Berliac, Philippe Preux. MERL: Multi-Head Reinforcement Learning. Deep Reinforcement Learning Workshop, NeurIPS, Dec 2019, Vancouver, Canada. ⟨hal-02305105v3⟩
