Monte-Carlo tree search as regularized policy optimization

Conference paper, 2020

Authors: Jean-Bastien Grill, Florent Altché, Yunhao Tang, Thomas Hubert, Michal Valko, Rémi Munos, et al.

Abstract

The combination of Monte-Carlo tree search (MCTS) with deep reinforcement learning has led to significant advances in artificial intelligence. However, AlphaZero, the current state-of-the-art MCTS algorithm, still relies on hand-crafted heuristics that are only partially understood. In this paper, we show that AlphaZero's search heuristics, along with other common ones such as UCT, are an approximation to the solution of a specific regularized policy optimization problem. With this insight, we propose a variant of AlphaZero which uses the exact solution to this policy optimization problem, and show experimentally that it reliably outperforms the original algorithm in multiple domains.
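The exact solution referred to in the abstract is the maximizer of a KL-regularized objective, max over the simplex of q·π − λ·KL(π_prior ‖ π), which has a simple closed form: π(a) = λ·π_prior(a) / (α − q(a)), with the scalar α fixed by the normalization constraint and found by bisection. A minimal NumPy sketch under these assumptions (function and variable names are illustrative, not the authors' reference implementation; the prior is assumed to have full support):

```python
import numpy as np

def exact_regularized_policy(q, prior, lam, iters=100):
    """Exact maximizer of  <q, pi> - lam * KL(prior || pi)  over the simplex.

    The optimum has the form pi(a) = lam * prior(a) / (alpha - q(a)),
    where alpha > max(q) is the unique scalar making pi sum to 1.
    Assumes prior(a) > 0 for every action a.
    """
    q = np.asarray(q, dtype=float)
    prior = np.asarray(prior, dtype=float)

    def pi_of(alpha):
        return lam * prior / (alpha - q)

    # Bracket for alpha: at `lo` the total mass is >= 1, at `hi` it is <= 1,
    # and the mass is strictly decreasing in alpha, so bisection applies.
    lo = (q + lam * prior).max()
    hi = q.max() + lam
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if pi_of(mid).sum() > 1.0:
            lo = mid  # too much mass: alpha must grow
        else:
            hi = mid
    pi = pi_of(0.5 * (lo + hi))
    return pi / pi.sum()  # renormalize away residual bisection error
```

As λ grows, the regularizer dominates and the solution approaches the prior; as λ shrinks, mass concentrates on the highest-value action, matching the trade-off the search heuristics approximate.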
Main file: grill2020monte-carlo.pdf (1.16 MB). Origin: files produced by the author(s).

Dates and versions

hal-02950136, version 1 (27-09-2020)

Identifiers

  • HAL Id: hal-02950136, version 1

Cite

Jean-Bastien Grill, Florent Altché, Yunhao Tang, Thomas Hubert, Michal Valko, et al. Monte-Carlo tree search as regularized policy optimization. International Conference on Machine Learning, 2020, Vienna, Austria. ⟨hal-02950136⟩