Improved Learning Complexity in Combinatorial Pure Exploration Bandits

Abstract: We study the problem of combinatorial pure exploration in the stochastic multi-armed bandit setting. We first construct a new measure of complexity that provably characterizes the learning performance of the algorithms we propose for the fixed-confidence and the fixed-budget settings. We show that this complexity is never higher than the one in existing work and illustrate a number of configurations in which it can be significantly smaller. While in general this improvement comes at the cost of increased computational complexity, we provide a series of examples, including a planning problem, where this extra cost is not significant.
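As a point of reference for the setting described in the abstract (not the algorithm proposed in the paper), the sketch below illustrates combinatorial pure exploration with a fixed budget: arms have unknown means, the decision class consists of all subsets of a fixed size m, and a naive uniform allocation of the budget is used before recommending the empirically best subset. The function name `uniform_fixed_budget` and all parameter values are illustrative assumptions.

```python
# Minimal sketch of fixed-budget combinatorial pure exploration with a
# uniform-allocation baseline (illustrative only; not the paper's algorithm).
import numpy as np

def uniform_fixed_budget(means, m, budget, noise_std=1.0, rng=None):
    """Spend `budget` pulls round-robin over the arms, then recommend the
    indices of the m arms with the highest empirical means."""
    rng = np.random.default_rng(rng)
    K = len(means)
    counts = np.zeros(K, dtype=int)
    sums = np.zeros(K)
    for t in range(budget):
        i = t % K                                   # round-robin allocation
        sums[i] += means[i] + noise_std * rng.standard_normal()
        counts[i] += 1
    empirical = sums / np.maximum(counts, 1)
    return np.argsort(empirical)[-m:]               # empirically best size-m subset

if __name__ == "__main__":
    true_means = np.array([0.9, 0.8, 0.5, 0.4, 0.1])
    picked = uniform_fixed_budget(true_means, m=2, budget=2000, rng=0)
    print("recommended subset:", sorted(picked.tolist()))  # likely {0, 1}
```

The difficulty of such a problem is typically governed by how close suboptimal subsets are to the best one; the paper's contribution is a sharper complexity measure of this kind for both the fixed-confidence and fixed-budget settings.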
Document type: Conference papers

https://hal.inria.fr/hal-01322198
Contributor: Alessandro Lazaric
Submitted on: Thursday, May 26, 2016 - 5:25:55 PM
Last modification on: Friday, March 22, 2019 - 1:34:48 AM
Long-term archiving on: Saturday, August 27, 2016 - 11:01:03 AM

File

AISTATS_full_CR.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01322198, version 1

Citation

Victor Gabillon, Alessandro Lazaric, Mohammad Ghavamzadeh, Ronald Ortner, Peter Bartlett. Improved Learning Complexity in Combinatorial Pure Exploration Bandits. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), May 2016, Cadiz, Spain. ⟨hal-01322198⟩
