Simple regret for infinitely many armed bandits

Alexandra Carpentier 1, Michal Valko 2
2 SEQUEL (Sequential Learning), Inria Lille - Nord Europe, CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille (UMR 9189)
Abstract: We consider a stochastic bandit problem with infinitely many arms. In this setting, the learner has no chance of trying all the arms even once and must dedicate its limited budget of samples to only a certain number of arms. All previous algorithms for this setting were designed to minimize the cumulative regret of the learner. In this paper, we propose an algorithm aiming at minimizing the simple regret. As in the cumulative-regret setting of infinitely many armed bandits, the rate of the simple regret will depend on a parameter β characterizing the distribution of the near-optimal arms. We prove that, depending on β, our algorithm is minimax optimal either up to a multiplicative constant or up to a log(n) factor. We also provide extensions to several important cases: when β is unknown, a natural setting where the near-optimal arms have a small variance, and the case of an unknown time horizon.
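For readers unfamiliar with this literature: the simple regret after n samples is the gap μ* − μ(recommended arm), and β roughly measures how plentiful near-optimal arms are (the probability that a freshly drawn arm is ε-close to optimal scales approximately as ε^β). As a concrete picture of the setting only, here is a minimal Python sketch of a naive sample-then-recommend baseline for an infinitely-many-armed bandit. It is not the algorithm proposed in the paper; the names uniform_baseline and reservoir_sampler are hypothetical, and the number of retained arms K is left as a free parameter because a good choice depends on β.

import numpy as np

def uniform_baseline(reservoir_sampler, n, K, rng=None):
    """Naive sample-then-recommend baseline for a bandit with infinitely many arms.

    NOTE: illustrative sketch of the setting only, not the paper's algorithm.
    reservoir_sampler(rng) is a hypothetical callback returning the mean of a
    freshly drawn arm, n is the total sample budget, and K is the number of
    arms kept (its best value depends on the tail parameter beta).
    """
    rng = rng if rng is not None else np.random.default_rng()
    means = np.array([reservoir_sampler(rng) for _ in range(K)])  # draw K arms from the reservoir
    pulls = n // K                                                # split the budget equally
    # simulate Bernoulli rewards and compute each kept arm's empirical mean
    empirical = rng.binomial(pulls, means) / pulls
    return means[int(np.argmax(empirical))]                       # mean of the recommended arm

# Example: arm means drawn uniformly on [0, 1] (so beta = 1), budget n = 10000
rng = np.random.default_rng(0)
recommended = uniform_baseline(lambda r: r.uniform(0.0, 1.0), n=10_000, K=100, rng=rng)
print("simple regret of the recommendation:", 1.0 - recommended)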
Document type:
Conference paper
International Conference on Machine Learning, Jul 2015, Lille, France

https://hal.inria.fr/hal-01153538
Contributor: Michal Valko
Submitted on: Tuesday, May 19, 2015 - 23:27:18
Last modified on: Tuesday, July 3, 2018 - 11:43:25
Long-term archiving on: Thursday, April 20, 2017 - 04:26:17

File

carpentier2015simple.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01153538, version 1
  • ARXIV : 1505.04627

Citation

Alexandra Carpentier, Michal Valko. Simple regret for infinitely many armed bandits. International Conference on Machine Learning, Jul 2015, Lille, France. 〈hal-01153538〉

Metrics

Record views: 315
File downloads: 91