
Exploration in Model-based Reinforcement Learning by Empirically Estimating Learning Progress

Abstract: Formal exploration approaches in model-based reinforcement learning estimate the accuracy of the currently learned model without considering the empirical prediction error. For example, PAC-MDP approaches such as R-MAX base their model certainty on the amount of collected data, while Bayesian approaches assume a prior over the transition dynamics. We propose extensions to such approaches that drive exploration based solely on empirical estimates of the learner's accuracy and learning progress. We provide a "sanity check" theoretical analysis, discussing the behavior of our extensions in the standard stationary finite state-action case. We then provide experimental studies demonstrating the robustness of these exploration measures in non-stationary environments and in cases where the original approaches are misled by wrong domain assumptions.
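To make the contrast concrete, below is a minimal Python sketch of the general idea: an exploration bonus driven by an empirical estimate of learning progress, falling back to R-MAX-style optimism for under-explored state-action pairs. The class name, window size, error metric, and bonus scale are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from collections import defaultdict, deque

class LearningProgressExplorer:
    """Illustrative sketch: exploration driven by empirical learning progress.

    For each (state, action) pair we keep a sliding window of one-step
    prediction errors of the learned transition model. Learning progress
    is estimated as the drop in mean error between the older and newer
    halves of the window; the exploration bonus favours pairs where the
    model is still measurably improving.
    """

    def __init__(self, window=20, bonus_scale=1.0):
        self.window = window
        self.bonus_scale = bonus_scale
        # One error window per (state, action) pair.
        self.errors = defaultdict(lambda: deque(maxlen=window))

    def record_error(self, state, action, predicted_next, observed_next):
        # 0/1 prediction error for a tabular model; any empirical error
        # metric (e.g. log-loss of the learned model) could be used instead.
        err = float(predicted_next != observed_next)
        self.errors[(state, action)].append(err)

    def bonus(self, state, action):
        errs = self.errors[(state, action)]
        if len(errs) < self.window:
            # Too little data: maximal optimism, as in count-based R-MAX.
            return self.bonus_scale
        half = len(errs) // 2
        old = np.mean(list(errs)[:half])   # mean error, older half
        new = np.mean(list(errs)[half:])   # mean error, newer half
        progress = max(old - new, 0.0)     # improvement in prediction accuracy
        return self.bonus_scale * progress
```

An agent would add this bonus to the reward of each state-action pair when planning with its learned model, mirroring how R-MAX grants maximal optimism to under-visited pairs; here, however, optimism persists only while the model's predictions are still improving, so it adapts when the environment is non-stationary or when count-based certainty assumptions are wrong.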
Document type: Conference papers

Cited literature: 17 references

https://hal.inria.fr/hal-00755248
Contributor: Manuel Lopes
Submitted on: Tuesday, November 20, 2012 - 5:33:00 PM
Last modification on: Friday, April 1, 2022 - 5:12:22 PM
Long-term archiving on: Thursday, February 21, 2013 - 12:30:43 PM

File: nips.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-00755248, version 1

Citation

Manuel Lopes, Tobias Lang, Marc Toussaint, Pierre-Yves Oudeyer. Exploration in Model-based Reinforcement Learning by Empirically Estimating Learning Progress. Neural Information Processing Systems (NIPS), Dec 2012, Lake Tahoe, United States. ⟨hal-00755248⟩

Metrics

Record views: 498
File downloads: 423