Regret Bounds for Reinforcement Learning with Policy Advice

Mohammad Gheshlaghi Azar (1), Alessandro Lazaric (2, 3), Emma Brunskill (1)
(2) SEQUEL (Sequential Learning) team, Inria Lille - Nord Europe
(3) LIFL - Laboratoire d'Informatique Fondamentale de Lille / LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: In some reinforcement learning problems an agent may be provided with a set of input policies, perhaps learned from prior experience or provided by advisors. We present the Reinforcement Learning with Policy Advice (RLPA) algorithm, which leverages this input set and learns to use the best policy in the set for the reinforcement learning task at hand. We prove that RLPA has sub-linear regret of $\widetilde O(\sqrt{T})$ relative to the best input policy, and that both this regret and the algorithm's computational complexity are independent of the size of the state and action spaces. Our empirical simulations support the theoretical analysis, suggesting that RLPA may offer significant advantages in large domains where good prior policies are available.
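To make the selection idea concrete, below is a minimal, illustrative sketch of how an agent can treat a set of input policies as arms of a bandit and repeatedly run the most promising one under an upper-confidence-bound rule. The environment interface (env.reset / env.step), the doubling block schedule, and the confidence bonus are assumptions made for this sketch; they are not the paper's exact RLPA algorithm or its constants.

```python
# Illustrative sketch: select among a set of input policies with a
# UCB-style rule. This is NOT the paper's RLPA algorithm; the environment
# interface, block schedule, and bonus term are assumptions for exposition.

import math


def select_best_policy(env, policies, total_steps, delta=0.05):
    """Run a UCB-style selection loop over a set of input policies.

    env         -- assumed interface: env.reset() -> state,
                   env.step(action) -> (next_state, reward)
    policies    -- list of callables mapping state -> action
    total_steps -- total interaction budget T
    delta       -- confidence parameter for the bounds
    """
    n = len(policies)
    pulls = [0] * n          # steps executed under each policy
    mean_reward = [0.0] * n  # empirical average reward per step

    t = 0
    while t < total_steps:
        # Optimistic choice: highest upper confidence bound on average reward.
        def ucb(i):
            if pulls[i] == 0:
                return float("inf")  # try every policy at least once
            bonus = math.sqrt(2.0 * math.log(n * (t + 1) / delta) / pulls[i])
            return mean_reward[i] + bonus

        i = max(range(n), key=ucb)

        # Run the chosen policy for a block whose length grows with its
        # pull count (a doubling schedule, an assumption of this sketch).
        block = max(1, pulls[i])
        state = env.reset()
        for _ in range(block):
            if t >= total_steps:
                break
            state, reward = env.step(policies[i](state))
            pulls[i] += 1
            mean_reward[i] += (reward - mean_reward[i]) / pulls[i]
            t += 1

    # Return the index of the empirically best input policy.
    return max(range(n), key=lambda i: mean_reward[i])
```

Running the chosen policy for progressively longer blocks keeps the number of policy switches small, which reflects the same intuition behind the paper's result: the cost of learning scales with the number of input policies rather than with the size of the state-action space.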
Document type: Conference papers

Cited literature: 15 references

https://hal.inria.fr/hal-00924021
Contributor: Alessandro Lazaric
Submitted on: Monday, January 6, 2014 - 11:00:27 AM
Last modification on: Thursday, February 21, 2019 - 10:52:49 AM
Long-term archiving on: Thursday, April 10, 2014 - 4:25:17 PM

File: RLPAcr.pdf (files produced by the author(s))

Identifiers

  • HAL Id: hal-00924021, version 1

Citation

Mohammad Gheshlaghi Azar, Alessandro Lazaric, Emma Brunskill. Regret Bounds for Reinforcement Learning with Policy Advice. ECML/PKDD - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, Sep 2013, Prague, Czech Republic. ⟨hal-00924021⟩
