Tuning bandit algorithms in stochastic environments

Jean-Yves Audibert (1), Rémi Munos (2), Csaba Szepesvari (3)
(2) SEQUEL - Sequential Learning, Inria Lille - Nord Europe; LIFL - Laboratoire d'Informatique Fondamentale de Lille; LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: Algorithms based on upper-confidence bounds for balancing exploration and exploitation are gaining popularity, since they are easy to implement, efficient, and effective. In this paper we consider a variant of the basic algorithm for the stochastic, multi-armed bandit problem that takes into account the empirical variance of the different arms. In earlier experimental work, such algorithms were found to outperform competing algorithms. The purpose of this paper is to provide a theoretical explanation of these findings and theoretical guidelines for tuning the parameters of these algorithms. To this end, we analyze the expected regret and, for the first time, the concentration of the regret. The analysis of the expected regret shows that variance estimates can be especially advantageous when the payoffs of suboptimal arms have low variance. The risk analysis, rather unexpectedly, reveals that except for some very special bandit problems, the regret of upper-confidence-bound algorithms with standard bias sequences concentrates only at a polynomial rate. Hence, although these algorithms achieve logarithmic expected regret, they seem less attractive when the risk of suffering much worse than logarithmic regret is also taken into account.
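The variance-aware policy described in the abstract can be illustrated with a short sketch. The Python code below is a minimal illustration under assumptions, not the authors' implementation: it assumes a UCB-V-style index of the form mean + sqrt(2 * variance * zeta * log(t) / s) + 3 * b * zeta * log(t) / s, where s is the number of pulls of the arm, b is a known bound on the payoff range, and zeta scales the standard logarithmic bias sequence; the function and parameter names (ucbv_play, zeta, b) are illustrative.

    import math
    import random

    def ucbv_play(arms, horizon, zeta=1.2, b=1.0):
        """Minimal sketch of a variance-aware UCB policy (UCB-V style).

        arms    -- list of callables, each returning a reward in [0, b]
        horizon -- total number of pulls
        zeta    -- exploration parameter scaling the log(t) bias sequence
        b       -- assumed upper bound on the payoff range
        """
        K = len(arms)
        counts = [0] * K      # number of pulls per arm
        sums = [0.0] * K      # sum of rewards per arm
        sq_sums = [0.0] * K   # sum of squared rewards (for the empirical variance)

        def pull(k):
            r = arms[k]()
            counts[k] += 1
            sums[k] += r
            sq_sums[k] += r * r

        # Play each arm once to initialize the statistics.
        for k in range(K):
            pull(k)

        for t in range(K + 1, horizon + 1):
            expl = zeta * math.log(t)  # standard logarithmic bias sequence
            best, best_index = 0, -float("inf")
            for k in range(K):
                s = counts[k]
                mean = sums[k] / s
                var = max(sq_sums[k] / s - mean * mean, 0.0)  # empirical variance
                # Index = empirical mean + variance-based term + range-correction term.
                idx = mean + math.sqrt(2.0 * var * expl / s) + 3.0 * b * expl / s
                if idx > best_index:
                    best, best_index = k, idx
            pull(best)

        return counts

    # Usage example: two Bernoulli arms with means 0.5 and 0.6;
    # the pull counts should concentrate on the second arm.
    if __name__ == "__main__":
        arms = [lambda: float(random.random() < 0.5),
                lambda: float(random.random() < 0.6)]
        print(ucbv_play(arms, horizon=10000))

Note that when a suboptimal arm has low payoff variance, its variance term shrinks quickly, so its index drops faster than under a range-only confidence bound; this is the mechanism behind the expected-regret advantage discussed in the abstract.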
Document type:
Conference paper
Algorithmic Learning Theory (ALT 2007), Sendai, Japan, pp. 150-165.

https://hal.inria.fr/inria-00203487
Contributor: Rémi Munos
Submitted on: Thursday, January 10, 2008 - 12:02:12
Last modified on: Thursday, January 11, 2018 - 06:22:13
Long-term archiving on: Tuesday, April 13, 2010 - 16:55:11

File

ucb_alt.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: inria-00203487, version 1

Citation

Jean-Yves Audibert, Rémi Munos, Csaba Szepesvari. Tuning bandit algorithms in stochastic environments. Algorithmic Learning Theory (ALT 2007), Sendai, Japan, pp. 150-165. 〈inria-00203487〉

Metrics

Record views: 486
File downloads: 361