Conference papers

Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes

Ronan Fruit (1), Matteo Pirotta (1), Alessandro Lazaric (2)
(1) SEQUEL - Sequential Learning, Inria Lille - Nord Europe, CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL) - UMR 9189
Abstract : When designing the state space of an MDP, it is common to include states that are transient or not reachable by any policy (e.g., in mountain car, the product space of speed and position contains configurations that are not physically reachable). This leads to defining weakly-communicating or multi-chain MDPs. In this paper, we introduce TUCRL, the first algorithm able to perform efficient exploration-exploitation in any finite Markov Decision Process (MDP) without requiring any form of prior knowledge. In particular, for any MDP with $S^{\texttt{C}}$ communicating states, $A$ actions and $\Gamma^{\texttt{C}} \leq S^{\texttt{C}}$ possible communicating next states, we derive a $\widetilde{O}(D^{\texttt{C}} \sqrt{\Gamma^{\texttt{C}} S^{\texttt{C}} AT})$ regret bound, where $D^{\texttt{C}}$ is the diameter (i.e., the longest shortest path) of the communicating part of the MDP. This is in contrast with optimistic algorithms (e.g., UCRL, Optimistic PSRL), which suffer linear regret in weakly-communicating MDPs, as well as posterior sampling or regularised algorithms (e.g., REGAL), which require prior knowledge of the bias span of the optimal policy to bias the exploration and achieve sub-linear regret. We also prove that in weakly-communicating MDPs, no algorithm can ever achieve a logarithmic growth of the regret without first suffering a linear regret for a number of steps that is exponential in the parameters of the MDP. Finally, we report numerical simulations supporting our theoretical findings and showing how TUCRL overcomes the limitations of the state-of-the-art.
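The diameter in the regret bound is defined as the longest shortest path between states: the worst-case (over pairs of states) minimal expected time to travel from one state to the other. As a minimal illustration, for an MDP with deterministic transitions this reduces to the longest shortest path in the one-step transition graph, computable by breadth-first search; the function name and input format below are illustrative, not from the paper, and the sketch ignores the stochastic case, where expected hitting times replace path lengths.

```python
from collections import deque

def diameter_deterministic(transitions):
    """Diameter of a deterministic MDP, i.e. the longest shortest path.

    `transitions[s]` lists the next state for each action available in s.
    D = max over pairs (s, s'), s != s', of the fewest steps from s to s'.
    Returns float('inf') if some state cannot reach another (the states
    do not all communicate).
    """
    states = list(transitions)
    diameter = 0
    for src in states:
        # BFS from src over one-step transitions.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            s = queue.popleft()
            for nxt in transitions[s]:
                if nxt not in dist:
                    dist[nxt] = dist[s] + 1
                    queue.append(nxt)
        for dst in states:
            if dst == src:
                continue
            if dst not in dist:
                return float('inf')  # dst unreachable from src
            diameter = max(diameter, dist[dst])
    return diameter

# 3-state ring with a single action: 0 -> 1 -> 2 -> 0.
ring = {0: [1], 1: [2], 2: [0]}
print(diameter_deterministic(ring))  # 2: reaching the "previous" state takes 2 steps
```

TUCRL's point is precisely that when the MDP also contains non-communicating states (the infinite-diameter case above), the regret bound should depend only on the diameter of the communicating part, without the algorithm being told in advance which states communicate.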
Contributor: Matteo Pirotta
Submitted on: Friday, November 30, 2018 - 6:41:13 PM
Last modification on: Friday, March 22, 2019 - 1:37:09 AM
Document(s) archived on: Friday, March 1, 2019 - 4:09:11 PM
  • HAL Id : hal-01941220, version 1


Ronan Fruit, Matteo Pirotta, Alessandro Lazaric. Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes. 32nd Conference on Neural Information Processing Systems, Dec 2018, Montréal, Canada. ⟨hal-01941220⟩


