Calibrated Fairness in Bandits

Yang Liu 1, Goran Radanovic 1, Christos Dimitrakakis 2, Debmalya Mandal 1, David Parkes 1
2 SEQUEL - Sequential Learning, Inria Lille - Nord Europe, CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille (UMR 9189)
Abstract: We study fairness within the stochastic, multi-armed bandit (MAB) decision making framework. We adapt the fairness framework of "treating similar individuals similarly" [5] to this setting. Here, an 'individual' corresponds to an arm and two arms are 'similar' if they have a similar quality distribution. First, we adopt a smoothness constraint that if two arms have a similar quality distribution then the probability of selecting each arm should be similar. In addition, we define the fairness regret, which corresponds to the degree to which an algorithm is not calibrated, where perfect calibration requires that the probability of selecting an arm is equal to the probability with which the arm has the best quality realization. We show that a variation on Thompson sampling satisfies smooth fairness for total variation distance, and give an Õ((kT)^{2/3}) bound on fairness regret. This complements prior work [12], which protects an on-average better arm from being less favored. We also explain how to extend our algorithm to the dueling bandit setting.
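For intuition on the calibration property the abstract describes, the following is a minimal sketch (not the paper's algorithm, which is a variation with additional fairness machinery) of standard Thompson sampling for Bernoulli arms: drawing one posterior sample per arm and playing the argmax selects each arm with exactly the posterior probability that it is the best arm. All function and variable names here are illustrative.

import random

def thompson_sampling(true_means, horizon=1000, seed=0):
    # Hypothetical helper, not from the paper: standard Bernoulli
    # Thompson sampling with Beta(1, 1) priors on each arm.
    rng = random.Random(seed)
    k = len(true_means)
    successes = [1] * k
    failures = [1] * k

    for _ in range(horizon):
        # One posterior sample per arm; taking the argmax plays each
        # arm with the posterior probability that it is best, which is
        # the calibration notion discussed in the abstract.
        samples = [rng.betavariate(successes[i], failures[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])

        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_means[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward

    return successes, failures

# Example: three arms of differing quality.
s, f = thompson_sampling([0.3, 0.5, 0.7])

Note that this calibration is with respect to the posterior, not the true best-arm probabilities; the fairness regret defined in the paper measures the gap between the two as the posterior concentrates.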
Document type: Preprint / Working Paper

https://hal.inria.fr/hal-01953314
Contributor: Christos Dimitrakakis
Submitted on: Wednesday, December 12, 2018

File: 1707.01875.pdf (files produced by the author(s))

Citation

Yang Liu, Goran Radanovic, Christos Dimitrakakis, Debmalya Mandal, David Parkes. Calibrated Fairness in Bandits. 2018. ⟨hal-01953314⟩
