Risk-Aversion in Multi-armed Bandits

Amir Sani 1, Alessandro Lazaric 1, Rémi Munos 1
1 SEQUEL (Sequential Learning), LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: Stochastic multi-armed bandits solve the exploration-exploitation dilemma and ultimately maximize the expected reward. Nonetheless, in many practical problems, maximizing the expected reward is not the most desirable objective. In this paper, we introduce a novel setting based on the principle of risk-aversion, where the objective is to compete against the arm with the best risk-return trade-off. This setting proves to be intrinsically more difficult than the standard multi-armed bandit setting, due in part to an exploration risk which introduces a regret associated with the variability of an algorithm. Using variance as a measure of risk, we introduce two new algorithms, investigate their theoretical guarantees, and report preliminary empirical results.
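The abstract states only that variance is used as the risk measure and that the goal is to compete against the arm with the best risk-return trade-off; it does not spell out the algorithms. The snippet below is a minimal, non-authoritative sketch of that kind of objective, assuming a mean-variance index of the form variance minus rho times mean (smaller is better) and an illustrative lower-confidence-bound rule. The function names, the rho and c parameters, and the confidence width are hypothetical choices for illustration, not the algorithms introduced in the report.

```python
# Illustrative sketch only: an assumed mean-variance index (variance - rho * mean)
# with an optimistic lower-confidence bound; NOT the report's algorithms.
import numpy as np


def mean_variance(rewards, rho=1.0):
    """Empirical mean-variance of one arm; smaller means a better risk-return trade-off."""
    r = np.asarray(rewards, dtype=float)
    return r.var() - rho * r.mean()


def pick_arm(history, t, rho=1.0, c=1.0):
    """Choose the arm with the lowest lower-confidence bound on its mean-variance.

    history : list of per-arm reward lists observed so far
    t       : current round (1-based), used in the illustrative width sqrt(log t / n)
    """
    scores = []
    for i, rewards in enumerate(history):
        n = len(rewards)
        if n == 0:
            return i  # pull every arm at least once before scoring
        width = c * np.sqrt(np.log(t) / n)
        scores.append(mean_variance(rewards, rho) - width)
    return int(np.argmin(scores))
```

For example, pick_arm([[0.4, 0.6], [0.9, 0.1]], t=3) selects the first arm: both arms have the same empirical mean, but the first has lower variance and hence a better (smaller) mean-variance score. A risk-neutral bandit would rank arms by their empirical mean alone; subtracting rho times the mean from the variance trades expected reward against variability, with rho (an assumed parameter here) controlling the degree of risk-aversion.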
Document type: Report

https://hal.inria.fr/hal-00750298
Contributor: Alessandro Lazaric
Submitted on: Wednesday, January 9, 2013 - 7:01:48 PM
Last modification on: Thursday, February 21, 2019 - 10:52:49 AM
Long-term archiving on: Friday, March 31, 2017 - 3:52:58 PM

Files

risk-bandit.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-00750298, version 1
  • arXiv: 1301.1936

Citation

Amir Sani, Alessandro Lazaric, Rémi Munos. Risk-Aversion in Multi-armed Bandits. [Research Report] 2012. ⟨hal-00750298⟩
