GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms

Cédric Colas 1 Olivier Sigaud 1, 2 Pierre-Yves Oudeyer 1
1 Flowers - Flowing Epigenetic Robots and Systems
Inria Bordeaux - Sud-Ouest, U2IS - Unité d'Informatique et d'Ingénierie des Systèmes
Abstract: In continuous action domains, standard deep reinforcement learning algorithms like DDPG suffer from inefficient exploration when facing sparse or deceptive reward problems. Conversely, evolutionary and developmental methods focused on exploration, such as Novelty Search, Quality-Diversity, or Goal Exploration Processes, explore more robustly but are less sample-efficient during exploitation. In this paper, we present the GEP-PG approach, which takes the best of both worlds by sequentially combining two variants of a Goal Exploration Process with two variants of DDPG. We study the learning performance of these components and their combination on a low-dimensional deceptive reward problem and on the larger Half-Cheetah benchmark. We show that DDPG fails on the former and that GEP-PG achieves above state-of-the-art performance on the latter.
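The core idea of the abstract — run a goal exploration process first, log every transition, then hand the resulting replay buffer to a DDPG-style learner — can be illustrated with a minimal sketch. Everything below is an assumption for illustration (the toy 1-D environment, the linear one-parameter policy, the archive and perturbation scheme, and all function names); it is not the authors' implementation, which uses full neural-network policies and the real DDPG algorithm.

```python
import random

def toy_env_step(state, action):
    """Hypothetical 1-D environment: reward is negative distance to the origin."""
    next_state = max(-1.0, min(1.0, state + 0.1 * action))
    return next_state, -abs(next_state)

def gep_exploration(n_episodes=20, horizon=10, seed=0):
    """Phase 1 (sketch of a goal exploration process): sample a goal in
    outcome space, pick the archived policy whose outcome is closest,
    perturb it, roll it out, and keep every transition in a buffer."""
    rng = random.Random(seed)
    archive = [(0.0, 0.0)]                # (outcome, policy parameter) pairs
    buffer = []
    for _ in range(n_episodes):
        goal = rng.uniform(-1.0, 1.0)
        # nearest-outcome policy in the archive, plus a random perturbation
        _, theta = min(archive, key=lambda p: abs(p[0] - goal))
        theta += rng.gauss(0.0, 0.5)
        state = 0.0
        for _ in range(horizon):
            action = theta * state + theta    # trivial linear policy
            next_state, reward = toy_env_step(state, action)
            buffer.append((state, action, reward, next_state))
            state = next_state
        archive.append((state, theta))        # final state acts as the outcome
    return buffer

def ddpg_phase(buffer):
    """Phase 2 (placeholder): a DDPG learner would preload `buffer` into
    its replay memory before training. Only the hand-off is shown here."""
    replay_memory = list(buffer)
    return replay_memory

buffer = gep_exploration()
memory = ddpg_phase(buffer)
```

The sequential hand-off is the decoupling named in the title: exploration quality comes from the goal-directed search, while sample-efficient exploitation comes from the off-policy learner consuming the pre-filled buffer.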
Document type: Conference papers
Contributor: Olivier Buffet
Submitted on: Tuesday, July 17, 2018 - 1:30:21 PM
Last modification on: Thursday, March 21, 2019 - 1:07:42 PM

  • HAL Id: hal-01840576, version 1
  • arXiv: 1802.05054


Cédric Colas, Olivier Sigaud, Pierre-Yves Oudeyer. GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms. Journées Francophones sur la Planification, la Décision et l'Apprentissage pour la conduite de systèmes (JFPDA 2018), Jul 2017, Nancy, France. ⟨hal-01840576⟩
