Bayesian Reinforcement Learning - Archive ouverte HAL
Book Section, 2012

Bayesian Reinforcement Learning

Nikos Vlassis (1), Mohammad Ghavamzadeh (2), Shie Mannor (3), Pascal Poupart (4)

Abstract

This chapter surveys recent lines of work that use Bayesian techniques for reinforcement learning. In Bayesian learning, uncertainty is expressed by a prior distribution over unknown parameters and learning is achieved by computing a posterior distribution based on the data observed. Hence, Bayesian reinforcement learning distinguishes itself from other forms of reinforcement learning by explicitly maintaining a distribution over various quantities such as the parameters of the model, the value function, the policy or its gradient. This yields several benefits: a) domain knowledge can be naturally encoded in the prior distribution to speed up learning; b) the exploration/exploitation tradeoff can be naturally optimized; and c) notions of risk can be naturally taken into account to obtain robust policies.
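As an illustration of the ideas in the abstract (not an example from the chapter itself), the sketch below shows the simplest Bayesian treatment of the exploration/exploitation tradeoff: a Beta-Bernoulli bandit with Thompson sampling. A Beta prior over each arm's unknown success probability encodes domain knowledge, the conjugate posterior is updated after each observed reward, and sampling actions from the posterior naturally balances exploring uncertain arms against exploiting good ones. All names and parameter values here are assumptions chosen for the demo.

```python
import random

class BetaBernoulliBandit:
    """Minimal Bayesian bandit: one Beta(a, b) posterior per arm."""

    def __init__(self, n_arms, prior=(1.0, 1.0)):
        # The prior encodes domain knowledge; (1, 1) is the uniform prior.
        self.params = [list(prior) for _ in range(n_arms)]

    def select_arm(self):
        # Thompson sampling: draw one sample from each arm's posterior
        # and play the arm with the largest sample.
        samples = [random.betavariate(a, b) for a, b in self.params]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, arm, reward):
        # Conjugate update: Beta(a, b) -> Beta(a + r, b + 1 - r)
        # for a Bernoulli reward r in {0, 1}.
        self.params[arm][0] += reward
        self.params[arm][1] += 1 - reward

# Usage: two arms with (hypothetical) true success probabilities 0.3 and 0.7.
random.seed(0)
true_p = [0.3, 0.7]
bandit = BetaBernoulliBandit(n_arms=2)
for _ in range(2000):
    arm = bandit.select_arm()
    reward = 1 if random.random() < true_p[arm] else 0
    bandit.update(arm, reward)
```

After enough pulls, the posterior concentrates on the better arm, so exploration tapers off automatically — the "natural optimization" of the tradeoff the abstract refers to, in its simplest form.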
Main file: BRLchapter.pdf (162.56 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00840479 , version 1 (02-07-2013)

Identifiers

  • HAL Id : hal-00840479 , version 1

Cite

Nikos Vlassis, Mohammad Ghavamzadeh, Shie Mannor, Pascal Poupart. Bayesian Reinforcement Learning. In: Marco Wiering and Martijn van Otterlo (eds.), Reinforcement Learning: State of the Art, Springer Verlag, 2012. ⟨hal-00840479⟩