Explore no more: Improved high-probability regret bounds for non-stochastic bandits

Gergely Neu 1, *
* Corresponding author
1 SEQUEL - Sequential Learning
Inria Lille - Nord Europe, CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL) - UMR 9189
Abstract: This work addresses the problem of regret minimization in non-stochastic multi-armed bandit problems, focusing on performance guarantees that hold with high probability. Such results are rather scarce in the literature, since proving them requires a great deal of technical effort and significant modifications to the standard, more intuitive algorithms that come only with guarantees that hold in expectation. One of these modifications is forcing the learner to sample arms from the uniform distribution at least Ω(√T) times over T rounds, which can adversely affect performance if many of the arms are suboptimal. While it is widely conjectured that this property is essential for proving high-probability regret bounds, we show in this paper that it is possible to achieve such strong results without this undesirable exploration component. Our result relies on a simple and intuitive loss-estimation strategy called Implicit eXploration (IX) that allows a remarkably clean analysis. To demonstrate the flexibility of our technique, we derive several improved high-probability bounds for various extensions of the standard multi-armed bandit framework. Finally, we conduct a simple experiment that illustrates the robustness of our implicit exploration technique.
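The core of the IX technique is to bias the usual importance-weighted loss estimate downward: instead of dividing the observed loss by the probability p of the played arm, it divides by p + γ for a small γ > 0, which implicitly encourages exploration without ever forcing uniform play. Below is a minimal Python sketch of an Exp3-style learner using IX estimates; the parameter defaults, the loss_fn interface, and the name exp3_ix are illustrative assumptions, not the paper's exact pseudocode.

    import numpy as np

    def exp3_ix(loss_fn, K, T, eta=None, gamma=None, rng=None):
        """Sketch of exponential weights (Exp3) with Implicit eXploration (IX).

        loss_fn(t, arm) should return the loss of `arm` in round t, assumed in [0, 1].
        The default tunings below are plausible choices, not prescriptions from the paper.
        """
        rng = np.random.default_rng() if rng is None else rng
        eta = np.sqrt(2.0 * np.log(K) / (K * T)) if eta is None else eta
        gamma = eta / 2.0 if gamma is None else gamma

        cum_est = np.zeros(K)  # cumulative IX loss estimates per arm
        total_loss = 0.0
        for t in range(T):
            # Exponential-weights distribution; note: no forced uniform exploration.
            logits = -eta * cum_est
            p = np.exp(logits - logits.max())
            p /= p.sum()

            arm = rng.choice(K, p=p)
            loss = loss_fn(t, arm)
            total_loss += loss

            # IX estimator: divide by p + gamma instead of p, keeping the
            # estimate bounded even when the played arm had tiny probability.
            cum_est[arm] += loss / (p[arm] + gamma)
        return total_loss

For instance, exp3_ix(lambda t, a: float(a != 0), K=10, T=10000) would simulate a toy instance where arm 0 is the unique optimal arm.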
Document type:
Conference paper
Advances in Neural Information Processing Systems 28 (NIPS 2015), Dec 2015, Montreal, Canada. pp.3150-3158

Cited literature: 26 references

https://hal.inria.fr/hal-01223501
Contributor: Gergely Neu
Submitted on: Monday, November 2, 2015 - 18:27:04
Last modified on: Thursday, January 11, 2018 - 06:27:32
Document(s) archived on: Wednesday, February 3, 2016 - 11:04:39

File

IX_nips_final.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01223501, version 1

Citation

Gergely Neu. Explore no more: Improved high-probability regret bounds for non-stochastic bandits. Advances in Neural Information Processing Systems 28 (NIPS 2015), Dec 2015, Montreal, Canada. pp.3150-3158. 〈hal-01223501〉

Metrics

Record views: 123
File downloads: 69