Conference paper, 2019

Making Deep Q-learning methods robust to time discretization

Abstract

Despite remarkable successes, Deep Reinforcement Learning (DRL) is not robust to hyperparameterization, implementation details, or small environment changes (Henderson et al. 2017, Zhang et al. 2018). Overcoming such sensitivity is key to making DRL applicable to real-world problems. In this paper, we identify sensitivity to time discretization in near continuous-time environments as a critical factor; this covers, e.g., changing the number of frames per second, or the action frequency of the controller. Empirically, we find that Q-learning-based approaches such as Deep Q-learning (Mnih et al., 2015) and Deep Deterministic Policy Gradient (Lillicrap et al., 2015) collapse with small time steps. Formally, we prove that Q-learning does not exist in continuous time. We detail a principled way to build an off-policy RL algorithm that yields similar performance over a wide range of time discretizations, and confirm this robustness empirically.
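As an informal sketch of the collapse claimed above (the notation below is not taken from this page, only consistent with the abstract's argument): with a discretization time step $\delta t$, the action-value function differs from the state-value function only at order $\delta t$, so the action-dependent part of $Q$ vanishes as the time step shrinks,

$$Q^\pi_{\delta t}(s, a) = V^\pi(s) + \delta t \, A^\pi(s, a) + o(\delta t), \qquad \lim_{\delta t \to 0} Q^\pi_{\delta t}(s, a) = V^\pi(s),$$

which is why a greedy policy over $Q$ loses the ability to discriminate between actions in the near continuous-time limit.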

Dates and versions

hal-02435523, version 1 (10-01-2020)

Identifiers

Cite

Corentin Tallec, Léonard Blier, Yann Ollivier. Making Deep Q-learning methods robust to time discretization. ICML 2019 - Thirty-sixth International Conference on Machine Learning, Jun 2019, Long Beach, United States. pp. 6096-6104. ⟨hal-02435523⟩