Conference paper, 2011

Upper-Confidence-Bound Algorithms for Active Learning in Multi-Armed Bandits

Alexandra Carpentier, Alessandro Lazaric, Mohammad Ghavamzadeh, Rémi Munos, Peter Auer

Abstract

In this paper, we study the problem of estimating the mean values of all the arms uniformly well in the multi-armed bandit setting. If the variances of the arms were known, one could design an optimal sampling strategy by pulling the arms proportionally to their variances. However, since the distributions are not known in advance, we need to design adaptive sampling strategies that select an arm at each round based on the previously observed samples. We describe two strategies based on pulling the arms proportionally to an upper bound on their variances, and we derive regret bounds on the excess estimation error compared to the optimal allocation. We show that the performance of these allocation strategies depends not only on the variances of the arms but also on the full shape of their distributions.
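To make the allocation principle concrete, the Python sketch below illustrates the general idea only: at each round, pull the arm whose optimistic (upper-confidence) estimate of variance per pull is largest. It is not the paper's actual algorithms or bounds; in particular, the confidence width, the parameter delta, and the two-pull initialisation are placeholder assumptions.

import numpy as np

def ucb_variance_allocation(arms, budget, delta=0.05):
    """Allocate a sampling budget across arms by pulling, at each round,
    the arm whose upper-confidence bound on variance, divided by its
    current number of pulls, is largest. `arms` is a list of callables,
    each returning one sample from the corresponding distribution."""
    K = len(arms)
    samples = [[] for _ in range(K)]
    # Initialisation: pull every arm twice so empirical variances exist.
    for k in range(K):
        samples[k].extend(arms[k]() for _ in range(2))
    for _ in range(budget - 2 * K):
        scores = []
        for k in range(K):
            n_k = len(samples[k])
            var_k = np.var(samples[k], ddof=1)                # empirical variance
            width = np.sqrt(2.0 * np.log(1.0 / delta) / n_k)  # placeholder confidence width
            scores.append((var_k + width) / n_k)              # optimistic variance per pull
        k_star = int(np.argmax(scores))
        samples[k_star].append(arms[k_star]())
    # Return the empirical mean of each arm.
    return [float(np.mean(s)) for s in samples]

# Example: three Gaussian arms with different variances; the noisier arms
# should receive proportionally more pulls.
arms = [lambda s=s: np.random.normal(0.0, s) for s in (0.1, 1.0, 2.0)]
print(ucb_variance_allocation(arms, budget=300))

Sampling proportionally to an optimistic variance estimate mimics the known-variance optimal allocation while hedging against underestimated variances; the paper analyses how the choice of confidence bound affects the resulting regret.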
Main file
adapt_alloc_tech-report.pdf (356.97 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00659696, version 1 (13-01-2012)

Identifiers

  • HAL Id: hal-00659696, version 1

Cite

Alexandra Carpentier, Alessandro Lazaric, Mohammad Ghavamzadeh, Rémi Munos, Peter Auer. Upper-Confidence-Bound Algorithms for Active Learning in Multi-Armed Bandits. ALT 2011 - 22nd International Conference on Algorithmic Learning Theory, Oct 2011, Espoo, Finland. ⟨hal-00659696⟩
