Regret bounds for restless Markov bandits

Ronald Ortner 1 Daniil Ryabko 2 Peter Auer 3 Rémi Munos 2, 4
2 SEQUEL - Sequential Learning
LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: We consider the restless Markov bandit problem, in which the state of each arm evolves according to a Markov process independently of the learner's actions. We suggest an algorithm that first represents the setting as an MDP exhibiting some special structural properties. To exploit this structure we introduce the notion of $\epsilon$-structured MDPs, which generalize concepts such as (approximate) state aggregation and MDP homomorphisms. We propose a general algorithm for learning $\epsilon$-structured MDPs and derive regret bounds demonstrating that additional structural information enhances learning. Applied to the restless bandit setting, this algorithm achieves, after any $T$ steps, regret of order $\tilde{O}(\sqrt{T})$ with respect to the best policy that knows the distributions of all arms. We make no assumptions on the Markov chains underlying each arm other than that they are irreducible. In addition, we show that index-based policies are necessarily suboptimal for the considered problem.
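As a rough illustration of the setting described in the abstract, the following Python sketch simulates a restless Markov bandit: each arm's state evolves according to its own irreducible Markov chain regardless of which arm the learner pulls, and the learner observes only the state and reward of the pulled arm. The transition matrices, reward values, and the simple epsilon-greedy baseline learner are illustrative assumptions, not the $\epsilon$-structured-MDP algorithm analysed in the paper.

```python
import numpy as np

# Illustrative restless Markov bandit: every arm's state evolves at every step
# according to its own irreducible Markov chain, independently of the learner's
# actions; pulling an arm reveals only that arm's current state and reward.
# Transition matrices and rewards below are made-up examples, not from the paper.

rng = np.random.default_rng(0)

# Two arms, each a 2-state irreducible chain with state-dependent rewards.
transitions = [
    np.array([[0.9, 0.1],
              [0.2, 0.8]]),
    np.array([[0.5, 0.5],
              [0.5, 0.5]]),
]
mean_rewards = [np.array([1.0, 0.0]),
                np.array([0.6, 0.4])]

states = [0, 0]          # hidden states of the arms
T = 10_000
epsilon = 0.1            # naive epsilon-greedy baseline (NOT the paper's algorithm)
pulls = np.zeros(2)
reward_sums = np.zeros(2)
total_reward = 0.0

for t in range(T):
    # Learner picks an arm (epsilon-greedy on empirical mean reward).
    if rng.random() < epsilon or pulls.min() == 0:
        arm = int(rng.integers(2))
    else:
        arm = int(np.argmax(reward_sums / pulls))

    # Learner observes the pulled arm's state and collects its reward.
    reward = mean_rewards[arm][states[arm]]
    pulls[arm] += 1
    reward_sums[arm] += reward
    total_reward += reward

    # Restless dynamics: ALL arms transition, whether pulled or not.
    for i in range(2):
        states[i] = int(rng.choice(2, p=transitions[i][states[i]]))

print(f"average reward over {T} steps: {total_reward / T:.3f}")
```

Note that such a myopic baseline is generally suboptimal in this setting: the abstract's final claim is precisely that index-based (arm-by-arm) policies are necessarily suboptimal, which is why regret is measured against the best policy that knows the distributions of all arms.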
Document type: Journal articles
Contributor: Daniil Ryabko
Submitted on: Sunday, October 12, 2014 - 5:29:16 PM
Last modification on: Thursday, January 20, 2022 - 4:17:05 PM




Ronald Ortner, Daniil Ryabko, Peter Auer, Rémi Munos. Regret bounds for restless Markov bandits. Theoretical Computer Science, Elsevier, 2014, 558, pp.62-76. ⟨10.1016/j.tcs.2014.09.026⟩. ⟨hal-01074077⟩


