Making Deep Q-learning methods robust to time discretization

Corentin Tallec (1), Léonard Blier (2, 1), Yann Ollivier (2, 1)
(1) TAU - TAckling the Underspecified, LRI - Laboratoire de Recherche en Informatique, Inria Saclay - Île-de-France
Abstract: Despite remarkable successes, Deep Reinforcement Learning (DRL) is not robust to hyperparameterization, implementation details, or small environment changes (Henderson et al. 2017, Zhang et al. 2018). Overcoming such sensitivity is key to making DRL applicable to real-world problems. In this paper, we identify sensitivity to time discretization in near continuous-time environments as a critical factor; this covers, e.g., changing the number of frames per second, or the action frequency of the controller. Empirically, we find that Q-learning-based approaches such as Deep Q-learning (Mnih et al., 2015) and Deep Deterministic Policy Gradient (Lillicrap et al., 2015) collapse with small time steps. Formally, we prove that Q-learning does not exist in continuous time. We detail a principled way to build an off-policy RL algorithm that yields similar performances over a wide range of time discretizations, and confirm this robustness empirically.
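A minimal sketch of the formal claim above, using notation that is ours rather than the record's: for a near continuous-time environment discretized with time step \delta t, the action-value function of a policy \pi expands as

    Q^{\pi}_{\delta t}(s, a) = V^{\pi}(s) + \delta t \, A^{\pi}(s, a) + o(\delta t),

so Q^{\pi}_{\delta t}(s, a) \to V^{\pi}(s) as \delta t \to 0, independently of the action a. In that limit a learned Q-function carries no usable preference between actions, which is consistent with the observed collapse of Deep Q-learning and DDPG at small time steps; a discretization-robust off-policy method therefore has to rely on quantities, such as the advantage A^{\pi}, that remain action-dependent as \delta t \to 0.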
Document type: Conference papers

https://hal.inria.fr/hal-02435523
Contributor: Marc Schoenauer
Submitted on: Friday, January 10, 2020 - 8:11:27 PM
Last modification on: Thursday, July 8, 2021 - 3:50:36 AM


Identifiers

  • HAL Id: hal-02435523, version 1
  • arXiv: 1901.09732

Citation

Corentin Tallec, Léonard Blier, Yann Ollivier. Making Deep Q-learning methods robust to time discretization. ICML 2019 - Thirty-sixth International Conference on Machine Learning, Jun 2019, Long Beach, United States. pp.6096-6104. ⟨hal-02435523⟩
