Boltzmann Exploration Done Right

Abstract: Boltzmann exploration is a classic strategy for sequential decision-making under uncertainty, and is one of the most standard tools in Reinforcement Learning (RL). Despite its widespread use, there is virtually no theoretical understanding of the limitations or the actual benefits of this exploration scheme. Does it drive exploration in a meaningful way? Is it prone to misidentifying the optimal actions or to spending too much time exploring the suboptimal ones? What is the right tuning for the learning rate? In this paper, we address several of these questions in the classic setup of stochastic multi-armed bandits. One of our main results is showing that the Boltzmann exploration strategy with any monotone learning-rate sequence will induce suboptimal behavior. As a remedy, we offer a simple non-monotone schedule that guarantees near-optimal performance, albeit only when given prior access to key problem parameters that are typically not available in practical situations (such as the time horizon $T$ and the suboptimality gap $\Delta$). More importantly, we propose a novel variant that uses different learning rates for different arms, and achieves a distribution-dependent regret bound of order $\frac{K \log^2 T}{\Delta}$ and a distribution-independent bound of order $\sqrt{KT \log K}$ without requiring such prior knowledge. To demonstrate the flexibility of our technique, we also propose a variant that guarantees the same performance bounds even if the rewards are heavy-tailed.
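
For readers unfamiliar with the scheme under study, here is a minimal sketch of classic Boltzmann (softmax) exploration on a stochastic bandit with a user-supplied learning-rate schedule. It illustrates the baseline the paper analyzes, not the authors' per-arm variant; the Bernoulli rewards, the function names, and the example monotone schedule $\eta_t = \log t / \Delta$ are assumptions for illustration only.

```python
import numpy as np

def boltzmann_bandit(means, T, eta_schedule, rng=None):
    """Classic Boltzmann (softmax) exploration on a stochastic K-armed
    bandit with Bernoulli rewards; returns the cumulative pseudo-regret."""
    rng = np.random.default_rng() if rng is None else rng
    K = len(means)
    counts = np.zeros(K)   # number of pulls of each arm
    sums = np.zeros(K)     # cumulative reward of each arm
    regret, best = 0.0, max(means)
    for t in range(1, T + 1):
        # Empirical means, optimistically initialized to 1 for unpulled arms.
        mu_hat = np.where(counts > 0, sums / np.maximum(counts, 1), 1.0)
        # Softmax over estimated means: p_i proportional to exp(eta_t * mu_hat_i).
        logits = eta_schedule(t) * mu_hat
        p = np.exp(logits - logits.max())  # subtract max for numerical stability
        p /= p.sum()
        arm = rng.choice(K, p=p)
        reward = rng.binomial(1, means[arm])
        counts[arm] += 1
        sums[arm] += reward
        regret += best - means[arm]
    return regret

# A monotone schedule of the kind the paper shows can induce suboptimal
# behavior, here tuned with a (normally unknown) gap Delta = 0.05.
print(boltzmann_bandit([0.5, 0.45], T=10_000,
                       eta_schedule=lambda t: np.log(t + 1) / 0.05))
```
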
Document type: Conference paper

https://hal.inria.fr/hal-01916978
Contributor: Claudio Gentile
Submitted on: Friday, November 9, 2018 - 3:35:23 AM
Last modification on: Friday, March 22, 2019 - 1:37:04 AM
Long-term archiving on: Sunday, February 10, 2019 - 12:29:58 PM

File

nips2017.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-01916978, version 1

Citation

Nicolò Cesa-Bianchi, Claudio Gentile, Gabor Lugosi, Gergely Neu. Boltzmann Exploration Done Right. NIPS 2017 - 31st Annual Conference on Neural Information Processing Systems, Dec 2017, Long Beach, United States. ⟨hal-01916978⟩
