Risk-Aversion in Multi-armed Bandits

Amir Sani 1 Alessandro Lazaric 1 Rémi Munos 1
1 SEQUEL - Sequential Learning, Inria Lille - Nord Europe; LIFL - Laboratoire d'Informatique Fondamentale de Lille; LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: Stochastic multi-armed bandits address the exploration-exploitation dilemma with the objective of maximizing the expected reward. Nonetheless, in many practical problems, maximizing the expected reward is not the most desirable objective. In this paper, we introduce a novel setting based on the principle of risk-aversion, where the objective is to compete against the arm with the best risk-return trade-off. This setting proves more difficult than the standard multi-armed bandit setting, due in part to an exploration risk that introduces a regret associated with the variability of an algorithm. Using variance as a measure of risk, we define two algorithms, investigate their theoretical guarantees, and report preliminary empirical results.
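
The abstract describes the mean-variance objective only in prose, and this page does not describe the paper's two algorithms. As a purely illustrative sketch of how one might act on an empirical mean-variance criterion MV_i = var_i - rho * mean_i with a lower-confidence-style exploration bonus: the index form, the constant b, the (1 + rho) scaling, and the arm model below are all assumptions for illustration, not the paper's method.

```python
import numpy as np

def mean_variance_bandit(arms, horizon, rho=1.0, b=1.0):
    """Illustrative risk-averse bandit (hypothetical index, not the paper's).

    `arms` is a list of callables, each returning a reward in [0, 1].
    Each round we pull the arm minimizing an optimistic estimate of the
    empirical mean-variance MV_i = var_i - rho * mean_i, where lower MV
    is better for a risk-averse player.
    """
    k = len(arms)
    counts = np.zeros(k, dtype=int)
    samples = [[] for _ in range(k)]

    for t in range(horizon):
        if t < k:
            i = t  # pull each arm once to initialize the estimates
        else:
            mu = np.array([np.mean(s) for s in samples])
            var = np.array([np.var(s) for s in samples])
            mv = var - rho * mu                           # empirical mean-variance
            bonus = b * np.sqrt(np.log(t + 1) / counts)   # exploration bonus (assumed form)
            i = int(np.argmin(mv - (1.0 + rho) * bonus))  # optimistic (lower-confidence) index
        samples[i].append(arms[i]())
        counts[i] += 1
    return counts

# Example: a low-variance arm vs. a higher-mean but high-variance arm.
# A risk-averse player (rho = 1) should favor the uniform arm, whose
# MV is about -0.497 versus about -0.302 for the Bernoulli arm.
rng = np.random.default_rng(0)
arms = [lambda: rng.uniform(0.4, 0.6),        # mean 0.5, variance ~0.003
        lambda: float(rng.random() < 0.55)]   # mean 0.55, variance ~0.247
print(mean_variance_bandit(arms, horizon=2000, rho=1.0))
```

The example also hints at the "exploration risk" mentioned in the abstract: even a good strategy must sample the high-variance arm, and those pulls contribute variability to the algorithm's own reward sequence.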
Document type:
Conference paper
NIPS - Twenty-Sixth Annual Conference on Neural Information Processing Systems, Dec 2012, Lake Tahoe, United States. 2012


https://hal.inria.fr/hal-00772609
Contributor: Alessandro Lazaric
Submitted on: Thursday, January 10, 2013 - 18:09:37
Last modified on: Thursday, January 11, 2018 - 06:22:13
Document(s) archived on: Saturday, April 1, 2017 - 03:40:10

File

risk-bandit-cr.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-00772609, version 1

Citation

Amir Sani, Alessandro Lazaric, Rémi Munos. Risk-Aversion in Multi-armed Bandits. NIPS - Twenty-Sixth Annual Conference on Neural Information Processing Systems, Dec 2012, Lake Tahoe, United States. 2012. 〈hal-00772609〉

Metrics

Record views: 288
File downloads: 196