Online Stochastic Optimization under Correlated Bandit Feedback

Mohammad Gheshlaghi Azar (1), Alessandro Lazaric (2), Emma Brunskill (3)
(2) SequeL (Sequential Learning) team, Inria Lille - Nord Europe; LIFL - Laboratoire d'Informatique Fondamentale de Lille; LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: In this paper we consider the problem of online stochastic optimization of a locally smooth function under bandit feedback. We introduce the high-confidence tree (HCT) algorithm, a novel anytime X-armed bandit algorithm, and derive regret bounds matching the performance of state-of-the-art algorithms in terms of the dependency on the number of steps and the near-optimality dimension. The main advantage of HCT is that it handles the challenging case of correlated bandit feedback (reward), whereas existing methods require rewards to be conditionally independent. HCT also improves on the state of the art in terms of memory requirements, and it requires a weaker smoothness assumption on the mean-reward function than existing anytime algorithms. Finally, we discuss how HCT can be applied to the problem of policy search in reinforcement learning, and we report preliminary empirical results.
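To make the abstract concrete, here is a loose illustrative sketch of the general idea behind optimistic tree-based X-armed bandit methods: a tree of nested cells covers the domain, each cell keeps an optimistic score (empirical mean + confidence width + smoothness bonus), and pulls follow the most optimistic path. This is NOT the authors' HCT algorithm — the confidence term, the expansion rule, and the smoothness constants `nu` and `rho` below are simplified placeholder choices.

```python
import math
import random

class Node:
    """A cell [lo, hi] of the domain at a given depth of the cover tree."""
    def __init__(self, lo, hi, depth):
        self.lo, self.hi, self.depth = lo, hi, depth
        self.n = 0          # number of pulls that fell in this cell
        self.mean = 0.0     # empirical mean reward of those pulls
        self.children = []  # populated when the cell is split

def b_value(node, t, nu=1.0, rho=0.5):
    """Optimistic score: mean + confidence width + smoothness bonus (placeholder form)."""
    if node.n == 0:
        return float("inf")
    conf = math.sqrt(2 * math.log(t + 1) / node.n)
    return node.mean + conf + nu * rho ** node.depth

def optimistic_step(root, reward_fn, t):
    """Descend to the most optimistic leaf, pull at its center, update the path."""
    path, node = [root], root
    while node.children:
        node = max(node.children, key=lambda c: b_value(c, t))
        path.append(node)
    x = (node.lo + node.hi) / 2
    r = reward_fn(x)
    for v in path:  # incremental mean update along the visited path
        v.n += 1
        v.mean += (r - v.mean) / v.n
    # split the leaf once it has been pulled often enough (placeholder rule)
    if node.n >= 2 ** node.depth and not node.children:
        mid = (node.lo + node.hi) / 2
        node.children = [Node(node.lo, mid, node.depth + 1),
                         Node(mid, node.hi, node.depth + 1)]
    return x

random.seed(0)
f = lambda x: 1 - abs(x - 0.7) + random.gauss(0, 0.05)  # noisy reward, peak at 0.7
root = Node(0.0, 1.0, 0)
xs = [optimistic_step(root, f, t) for t in range(1, 2001)]
avg = sum(xs[-200:]) / 200
print(round(avg, 2))  # late pulls should concentrate near the optimum at 0.7
```

Note that the sketch assumes conditionally independent rewards; handling correlated bandit feedback, as HCT does, is precisely what this simple scheme cannot do.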
Document type: Conference paper
31st International Conference on Machine Learning, Jun 2014, Beijing, China

Contributor: Alessandro Lazaric
Submitted on: Tuesday, November 4, 2014 - 15:26:14
Last modified on: Thursday, January 11, 2018 - 06:22:13
Long-term archived on: Thursday, February 5, 2015 - 11:05:37


paper (1).pdf
Files produced by the author(s)


  • HAL Id: hal-01080138, version 1


Mohammad Gheshlaghi Azar, Alessandro Lazaric, Emma Brunskill. Online Stochastic Optimization under Correlated Bandit Feedback. 31st International Conference on Machine Learning, Jun 2014, Beijing, China. 〈hal-01080138〉


