Conference paper, 2012

Risk-Aversion in Multi-armed Bandits

Amir Sani
Alessandro Lazaric
Rémi Munos

Abstract

Stochastic multi-armed bandits solve the exploration-exploitation dilemma and ultimately maximize the expected reward. Nonetheless, in many practical problems, maximizing the expected reward is not the most desirable objective. In this paper, we introduce a novel setting based on the principle of risk-aversion, where the objective is to compete against the arm with the best risk-return trade-off. This setting proves to be more difficult than the standard multi-armed bandit setting, due in part to an exploration risk that introduces a regret associated with the variability of an algorithm. Using variance as a measure of risk, we define two algorithms, investigate their theoretical guarantees, and report preliminary empirical results.
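To make the abstract's setting concrete, below is a minimal sketch of a risk-averse bandit strategy in the spirit described above: each arm is scored by an empirical mean-variance criterion (variance minus rho times mean, for a risk-tolerance parameter rho), and a lower-confidence-bound term encourages exploration of under-sampled arms. This is an illustrative assumption, not the authors' exact algorithm: the function name mv_lcb_play, the confidence width b*sqrt(log(t)/n), and the toy arms are all hypothetical choices.

```python
import math
import random

def mv_lcb_play(arms, horizon, rho=1.0, b=1.0):
    """Sketch of a mean-variance LCB strategy (illustrative, not the paper's
    exact algorithm): pull the arm minimizing a lower-confidence estimate of
    MV_i = var_i - rho * mean_i. `arms` is a list of zero-argument callables
    returning rewards in [0, 1]; the width b*sqrt(log(t)/n) is an assumed
    confidence-bound shape."""
    K = len(arms)
    rewards = [[] for _ in range(K)]
    # Pull each arm once to initialize the estimates.
    for i in range(K):
        rewards[i].append(float(arms[i]()))
    for t in range(K, horizon):
        scores = []
        for i in range(K):
            n = len(rewards[i])
            mu = sum(rewards[i]) / n
            var = sum((x - mu) ** 2 for x in rewards[i]) / n
            width = b * math.sqrt(math.log(t + 1) / n)
            # Optimism in the face of uncertainty: subtracting the width
            # makes rarely pulled arms look temporarily attractive.
            scores.append(var - rho * mu - width)
        i_star = min(range(K), key=lambda i: scores[i])
        rewards[i_star].append(float(arms[i_star]()))
    return rewards

# Toy example: two arms with similar means but different variability.
arms = [lambda: random.random() < 0.5,          # mean 0.5, high variance
        lambda: 0.45 + 0.1 * random.random()]   # mean ~0.5, low variance
history = mv_lcb_play(arms, horizon=1000, rho=1.0)
print([len(h) for h in history])  # pull counts per arm
```

In this toy run, the low-variance arm should end up pulled far more often even though both arms have comparable means, which is the risk-averse behaviour the abstract describes; a reward-maximizing bandit would treat the two arms as nearly equivalent.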
Main file: risk-bandit-cr.pdf (283.1 KB)
Origin: files produced by the author(s)

Dates and versions

hal-00772609, version 1 (10-01-2013)

Identifiers

  • HAL Id: hal-00772609, version 1

Cite

Amir Sani, Alessandro Lazaric, Rémi Munos. Risk-Aversion in Multi-armed Bandits. NIPS - Twenty-Sixth Annual Conference on Neural Information Processing Systems, Dec 2012, Lake Tahoe, United States. ⟨hal-00772609⟩