Monte-Carlo tree search as regularized policy optimization

Abstract: The combination of Monte-Carlo tree search (MCTS) with deep reinforcement learning has led to significant advances in artificial intelligence. However, AlphaZero, the current state-of-the-art MCTS algorithm, still relies on hand-crafted heuristics that are only partially understood. In this paper, we show that AlphaZero's search heuristics, along with other common ones such as UCT, are an approximation to the solution of a specific regularized policy optimization problem. With this insight, we propose a variant of AlphaZero which uses the exact solution to this policy optimization problem, and show experimentally that it reliably outperforms the original algorithm in multiple domains.
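The "exact solution" mentioned above can be sketched numerically. In the paper, the search policy is the maximizer of ⟨q, π⟩ − λ·KL(π_θ, π) over the simplex, which has the closed form π̄(a) = λ·π_θ(a) / (α − q(a)), where α is a normalizing constant found by bisection within known bounds. The sketch below follows that derivation; the function name and toy values are illustrative, not from the paper.

```python
import numpy as np

def solve_regularized_policy(q, prior, lam, tol=1e-8):
    """Maximize <q, pi> - lam * KL(prior, pi) over the simplex.

    The maximizer has the form pi(a) = lam * prior(a) / (alpha - q(a)),
    with alpha chosen by bisection so that pi sums to 1.  The bracket
    [max_a(q(a) + lam * prior(a)), max(q) + lam] is guaranteed to
    contain the root.
    """
    q = np.asarray(q, dtype=float)
    prior = np.asarray(prior, dtype=float)
    lo = np.max(q + lam * prior)   # here the sum is >= 1
    hi = np.max(q) + lam           # here the sum is <= 1
    alpha = 0.5 * (lo + hi)
    for _ in range(100):
        alpha = 0.5 * (lo + hi)
        total = np.sum(lam * prior / (alpha - q))
        if total > 1.0:
            lo = alpha  # mass too large: increase alpha
        else:
            hi = alpha  # mass too small: decrease alpha
        if hi - lo < tol:
            break
    return lam * prior / (alpha - q)
```

With a uniform prior, the resulting policy shifts probability toward high-value actions while remaining fully supported, unlike the hard visit-count distribution that AlphaZero's heuristic produces at low simulation budgets.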
Document type: Conference papers
Cited literature: 20 references
https://hal.inria.fr/hal-02950136
Contributor: Michal Valko
Submitted on: Sunday, September 27, 2020 - 12:12:09 PM
Last modification on: Wednesday, September 30, 2020 - 3:34:08 AM

File: grill2020monte-carlo.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-02950136, version 1

Citation

Jean-Bastien Grill, Florent Altché, Yunhao Tang, Thomas Hubert, Michal Valko, et al.. Monte-Carlo tree search as regularized policy optimization. International Conference on Machine Learning, 2020, Vienna, Austria. ⟨hal-02950136⟩
