Risk-Aversion in Multi-armed Bandits - Archive ouverte HAL
Research Report, Year: 2012

Risk-Aversion in Multi-armed Bandits

Amir Sani (1)
Alessandro Lazaric (1)
Rémi Munos (1)

Abstract

Stochastic multi-armed bandits address the exploration-exploitation dilemma with the ultimate goal of maximizing the expected reward. Nonetheless, in many practical problems, maximizing the expected reward is not the most desirable objective. In this paper, we introduce a novel setting based on the principle of risk-aversion, where the objective is to compete against the arm with the best risk-return trade-off. This setting proves to be intrinsically more difficult than the standard multi-armed bandit setting, due in part to an exploration risk which introduces a regret associated with the variability of an algorithm. Using variance as a measure of risk, we introduce two new algorithms, investigate their theoretical guarantees, and report preliminary empirical results.
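The abstract describes a risk-averse objective in which each arm is scored by a mean-variance trade-off rather than by its mean alone. The sketch below is a minimal illustration of that idea in a lower-confidence-bound style; the function name, the risk-tolerance parameter `rho`, and the exploration bonus are all illustrative assumptions, not the authors' exact algorithms.

```python
import math
import random


def mean_variance_bandit(arms, n_rounds, rho=1.0, b=1.0):
    """Illustrative mean-variance bandit strategy (a sketch, not the paper's method).

    arms: list of zero-argument callables, each returning a bounded reward.
    rho: risk tolerance, trading off the empirical mean against the variance.
    b: scale of the (assumed) exploration bonus.
    Returns the number of pulls given to each arm.
    """
    K = len(arms)
    counts = [0] * K      # pulls per arm
    sums = [0.0] * K      # sum of rewards per arm
    sq_sums = [0.0] * K   # sum of squared rewards per arm

    def pull(i):
        r = arms[i]()
        counts[i] += 1
        sums[i] += r
        sq_sums[i] += r * r

    # Initialize: pull each arm once so the estimates are defined.
    for i in range(K):
        pull(i)

    for t in range(K, n_rounds):
        scores = []
        for i in range(K):
            mean = sums[i] / counts[i]
            var = sq_sums[i] / counts[i] - mean * mean
            # Empirical mean-variance criterion (lower is better),
            # made optimistic by subtracting a confidence bonus.
            bonus = b * math.sqrt(math.log(t + 1) / counts[i])
            scores.append(var - rho * mean - bonus)
        pull(scores.index(min(scores)))

    return counts
```

With two arms of equal mean, a risk-averse strategy of this kind concentrates its pulls on the lower-variance arm, e.g. a deterministic reward of 0.5 versus a fair coin paying 0 or 1.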
Main file: risk-bandit.pdf (631.98 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00750298 , version 1 (09-01-2013)

Cite

Amir Sani, Alessandro Lazaric, Rémi Munos. Risk-Aversion in Multi-armed Bandits. [Research Report] 2012. ⟨hal-00750298⟩