Book chapter, Year: 2010

Approximate Dynamic Programming

Rémi Munos
  • Role: Author

Abstract

In any complex or large-scale sequential decision-making problem, there is a crucial need to use function approximation to represent the relevant functions, such as the value function or the policy. The Dynamic Programming (DP) and Reinforcement Learning (RL) methods introduced in previous chapters make the implicit assumption that the value function can be perfectly represented (i.e., kept in memory), for example by using a look-up table (with a finite number of entries) assigning a value to every possible state (assumed to be finite) of the system. These methods are called exact because they provide an exact computation of the optimal solution of the considered problem (or at least enable the computation to converge to this optimal solution). However, such methods often apply only to toy problems, since in most interesting applications the number of possible states is so large (and possibly infinite if we consider continuous spaces) that a perfect representation of the function at all states is impossible. It then becomes necessary to approximate the function using a moderate number of coefficients (which can be stored in a computer), and thus to extend the range of DP and RL to methods using such approximate representations. These approximate methods combine DP and RL methods with function approximation tools.
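To make the idea concrete, here is a minimal sketch (not taken from the chapter) of approximate value iteration with a linear function approximator: the value function of a toy one-dimensional problem is represented by four polynomial coefficients rather than one table entry per state. The problem, the features, and all names below are illustrative assumptions, not the chapter's own example.

```python
import numpy as np

# Hypothetical 1-D problem: state s in [0, 1], two actions (move left or
# right by 0.1, clipped to the interval), reward 1 when the right edge is
# reached, discount gamma = 0.95. Purely illustrative.
gamma = 0.95
actions = [-0.1, 0.1]

def step(s, a):
    s2 = min(max(s + a, 0.0), 1.0)
    r = 1.0 if s2 >= 1.0 else 0.0
    return s2, r

def features(s):
    # Small polynomial basis: V(s) ~ theta . phi(s), i.e. four coefficients
    # instead of a look-up table over all states.
    return np.array([1.0, s, s**2, s**3])

theta = np.zeros(4)
S = np.linspace(0.0, 1.0, 21)            # sampled states used for fitting
Phi = np.array([features(s) for s in S])

for _ in range(200):
    # Approximate value iteration: apply the Bellman optimality operator at
    # the sampled states, then project the result back onto the feature
    # space by least squares. (Convergence of this projected iteration is
    # not guaranteed in general, but it behaves well on this toy problem.)
    targets = []
    for s in S:
        backups = [r + gamma * features(s2) @ theta
                   for s2, r in (step(s, a) for a in actions)]
        targets.append(max(backups))
    theta, *_ = np.linalg.lstsq(Phi, np.array(targets), rcond=None)

print("fitted coefficients:", theta)
print("V(0.0) ~", features(0.0) @ theta, " V(1.0) ~", features(1.0) @ theta)
```

With 21 states a look-up table would of course suffice; the point of the sketch is that the same scheme still applies unchanged when the state space is far too large (or continuous) to enumerate, which is exactly the regime the chapter addresses.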
No file deposited

Dates and versions

hal-00943118, version 1 (07-02-2014)

Identifiers

  • HAL Id: hal-00943118, version 1

Cite

Rémi Munos. Approximate Dynamic Programming. In Olivier Sigaud and Olivier Buffet (eds.), Markov Decision Processes in Artificial Intelligence, ISTE Ltd and John Wiley & Sons Inc, pp. 67–98, 2010. ⟨hal-00943118⟩
