Regret bounds for restless Markov bandits

Ronald Ortner, Daniil Ryabko, Peter Auer, Rémi Munos

SEQUEL - Sequential Learning; LIFL - Laboratoire d'Informatique Fondamentale de Lille; LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal; Inria Lille - Nord Europe
Abstract: We consider the restless Markov bandit problem, in which the state of each arm evolves according to a Markov process independently of the learner's actions. We suggest an algorithm that first represents the setting as an MDP exhibiting some special structural properties. To capture this structure, we introduce the notion of $\epsilon$-structured MDPs, which generalize concepts such as (approximate) state aggregation and MDP homomorphisms. We propose a general algorithm for learning $\epsilon$-structured MDPs and show regret bounds demonstrating that additional structural information enhances learning. Applied to the restless bandit setting, this algorithm achieves, after any $T$ steps, regret of order $\tilde{O}(\sqrt{T})$ with respect to the best policy that knows the distributions of all arms. We make no assumptions on the Markov chains underlying each arm other than irreducibility. In addition, we show that index-based policies are necessarily suboptimal for the considered problem.
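As a rough illustration of the setting (not the authors' algorithm), here is a minimal Python sketch of a restless bandit environment: every arm's Markov chain advances at each time step whether or not the arm is pulled, and the learner observes only the reward of the arm it chose. All names, the two-state chains, and the round-robin policy are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class RestlessArm:
    """An arm whose state follows an irreducible Markov chain that
    evolves independently of the learner's actions ("restless")."""

    def __init__(self, transition_matrix, rewards):
        self.P = np.asarray(transition_matrix, dtype=float)  # row-stochastic
        self.r = np.asarray(rewards, dtype=float)            # reward per state
        self.state = 0

    def step(self):
        # The chain moves at every time step, pulled or not.
        self.state = rng.choice(len(self.r), p=self.P[self.state])

    def pull(self):
        # The learner observes only the pulled arm's reward.
        return self.r[self.state]

# Two-state example: arm 0 is "sticky", arm 1 mixes quickly.
arms = [
    RestlessArm([[0.9, 0.1], [0.1, 0.9]], rewards=[0.0, 1.0]),
    RestlessArm([[0.5, 0.5], [0.5, 0.5]], rewards=[0.2, 0.8]),
]

for t in range(10):
    choice = t % len(arms)        # placeholder round-robin policy
    reward = arms[choice].pull()
    for arm in arms:              # every chain advances each step
        arm.step()
    print(f"t={t}: pulled arm {choice}, reward {reward}")
```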
Document type: Journal article
Theoretical Computer Science, Elsevier, 2014, 558, pp. 62-76. doi:10.1016/j.tcs.2014.09.026

https://hal.inria.fr/hal-01074077
Citation

Ronald Ortner, Daniil Ryabko, Peter Auer, Rémi Munos. Regret bounds for restless Markov bandits. Theoretical Computer Science, Elsevier, 2014, 558, pp. 62-76. doi:10.1016/j.tcs.2014.09.026. hal-01074077.
