Samples Are Useful? Not Always: denoising policy gradient updates using variance explained

Yannis Flet-Berliac, Philippe Preux
SEQUEL (Sequential Learning), Inria Lille - Nord Europe
CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille, UMR 9189
Abstract: Policy gradient algorithms in reinforcement learning optimize the policy directly and rely on efficiently sampling an environment. However, while most sampling procedures are based solely on sampling the agent's policy, other measures directly accessible through these algorithms could be used to improve sampling before each policy update. Following this line of thought, we propose SAUNA, a method in which transitions are rejected from the gradient update if they do not meet a particular criterion, and kept otherwise. This criterion, the fraction of variance explained $\mathcal{V}^{ex}$, is a measure of the discrepancy between a model and actual samples. In this work, $\mathcal{V}^{ex}$ is used to evaluate the impact each transition will have on learning: this criterion refines sampling and improves the policy gradient algorithm. In this paper: (a) We introduce and explore $\mathcal{V}^{ex}$, the criterion used for denoising policy gradient updates. (b) We conduct experiments across a variety of benchmark environments, including standard continuous control problems, and our results show better performance with SAUNA. (c) We investigate why $\mathcal{V}^{ex}$ provides a reliable assessment for the selection of samples that will positively impact learning. (d) We show how this criterion can work as a dynamic tool to adjust the ratio between exploration and exploitation.
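As a rough illustration of the idea, the sketch below computes the fraction of variance explained over a set of transitions using the standard definition $\mathcal{V}^{ex} = 1 - \mathrm{Var}(R - \hat{V}) / \mathrm{Var}(R)$, with empirical returns $R$ and value predictions $\hat{V}$, and keeps or rejects the transitions for the gradient update accordingly. The threshold `vex_threshold` and the filtering granularity are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def variance_explained(returns, values, eps=1e-8):
    """Fraction of variance explained:
    V^ex = 1 - Var(returns - values) / Var(returns)."""
    return 1.0 - np.var(returns - values) / (np.var(returns) + eps)

def keep_for_update(returns, values, vex_threshold=0.2):
    """Hypothetical filter: keep these transitions for the policy gradient
    update only when the value model explains enough of the variance in
    the observed returns (illustrative threshold, not from the paper)."""
    return variance_explained(returns, values) >= vex_threshold

# Example: returns well explained by the value predictions are kept,
# while returns unrelated to the predictions are rejected.
rng = np.random.default_rng(0)
values = rng.normal(size=256)
good_returns = values + 0.1 * rng.normal(size=256)
bad_returns = rng.normal(size=256)
print(keep_for_update(good_returns, values))  # True
print(keep_for_update(bad_returns, values))   # False
```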

https://hal.inria.fr/hal-02091547