Conference papers

How to Combine Tree-Search Methods in Reinforcement Learning

Abstract: Finite-horizon lookahead policies are abundantly used in Reinforcement Learning and demonstrate impressive empirical success. Usually, lookahead policies are implemented with specific planning methods such as Monte Carlo Tree Search (e.g., in AlphaZero). Referring to the planning problem as tree search, a common practice in these implementations is to back up the value only at the leaves, while the information obtained at the root is not leveraged other than for updating the policy. Here, we question the potency of this approach. Namely, the latter procedure is non-contractive in general, and its convergence is not guaranteed. Our proposed enhancement is simple: use the return from the optimal tree path to back up the values at the descendants of the root. This leads to a $\gamma^h$-contracting procedure, where $\gamma$ is the discount factor and $h$ is the tree depth. To establish our results, we first introduce a notion called \emph{multiple-step greedy consistency}. We then provide convergence rates for two algorithmic instantiations of the above enhancement in the presence of noise injected into both the tree-search stage and the value-estimation stage.
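The $\gamma^h$-contraction the abstract refers to can be illustrated with a minimal sketch. The snippet below is a hypothetical toy example, not the authors' implementation: on a small deterministic tabular MDP, composing $h$ Bellman optimality backups computes the depth-$h$ tree-search value (the best $h$-step return plus $\gamma^h$ times the value at the leaf), and this operator contracts distances between value functions by a factor $\gamma^h$ in the sup norm.

```python
import numpy as np

def h_step_backup(P, R, V, gamma, h):
    """Depth-h tree-search (lookahead) backup on a toy deterministic MDP.

    Hypothetical setup for illustration only:
      P[a, s] -- next state reached from state s under action a
      R[a, s] -- reward collected for that transition
      V       -- current value estimate, shape (num_states,)

    Returns (T^h V)(s) = max over h-step action sequences of
        sum_{t<h} gamma^t r_t + gamma^h V(s_h),
    obtained by composing h one-step Bellman optimality backups.
    """
    Q = V.copy()
    for _ in range(h):
        # One Bellman optimality backup: Q[P] gathers the current
        # values at the successor states, shape (num_actions, num_states).
        Q = np.max(R + gamma * Q[P], axis=0)
    return Q

# Toy 3-state, 2-action deterministic MDP (made up for this sketch).
P = np.array([[1, 2, 0], [2, 0, 1]])
R = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
gamma, h = 0.9, 3

V1 = np.zeros(3)
V2 = np.ones(3)  # differs from V1 by 1 everywhere

gap = np.max(np.abs(h_step_backup(P, R, V2, gamma, h)
                    - h_step_backup(P, R, V1, gamma, h)))
# The gap of 1 between V1 and V2 shrinks to at most gamma**h = 0.729.
assert gap <= gamma ** h + 1e-12
```

Since the two value functions differ by a constant, the backed-up gap here is exactly $\gamma^h$; for arbitrary value functions the sup-norm distance shrinks by at least that factor, which is what guarantees convergence when the root's descendants are updated with the optimal tree-path return.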

Contributor: Bruno Scherrer
Submitted on : Thursday, August 29, 2019 - 11:00:57 AM
Last modification on : Friday, January 21, 2022 - 3:13:13 AM



  • HAL Id: hal-02273713, version 1
  • arXiv: 1809.01843



Yonathan Efroni, Gal Dalal, Bruno Scherrer, Shie Mannor. How to Combine Tree-Search Methods in Reinforcement Learning. AAAI-19 - Thirty-Third AAAI Conference on Artificial Intelligence, Jan 2019, Honolulu, Hawaii, United States. ⟨hal-02273713⟩


