Conference paper, Year: 2008

Basis Expansion in Natural Actor Critic Methods

Abstract

In reinforcement learning, the aim of the agent is to find a policy that maximizes its expected return. Policy gradient methods try to accomplish this goal by directly approximating the policy with a parametric function approximator: the expected return of the current policy is estimated, and the parameters are updated by steepest ascent in the direction of the gradient of the expected return with respect to the policy parameters. In general, the policy is defined in terms of a set of basis functions that capture important features of the problem. Since the quality of the resulting policies depends directly on the set of basis functions, and defining them becomes harder as the complexity of the problem increases, it is important to be able to find them automatically. In this paper, we propose a new approach that uses the cascade-correlation learning architecture to automatically construct a set of basis functions within the context of Natural Actor-Critic (NAC) algorithms. Such basis functions allow more complex policies to be represented, and consequently improve the performance of the resulting policies. We also demonstrate the effectiveness of the method empirically.
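To make the setting concrete, below is a minimal sketch (not the authors' code) of a natural actor-critic style policy-gradient update with a softmax policy over hand-crafted basis functions. The basis phi(), the toy episode, and all hyperparameters are illustrative assumptions; the paper's contribution is to grow such a basis automatically with cascade-correlation rather than fixing it by hand.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_feat = 2, 4
theta = np.zeros((n_actions, n_feat))  # policy parameters

def phi(s):
    # Hand-crafted basis functions over a scalar state: exactly the
    # component the paper proposes to construct automatically with the
    # cascade-correlation architecture instead of fixing in advance.
    return np.array([1.0, s, s**2, np.sin(s)])

def policy(theta, s):
    # Softmax policy pi(a|s) with action preferences theta_a . phi(s).
    prefs = theta @ phi(s)
    prefs -= prefs.max()  # numerical stability
    p = np.exp(prefs)
    return p / p.sum()

def grad_log_pi(theta, s, a):
    # Compatible features: d log pi(a|s) / d theta for the softmax
    # policy, i.e. (1[a == b] - pi(b|s)) * phi(s) for each action b.
    p = policy(theta, s)
    g = -np.outer(p, phi(s))
    g[a] += phi(s)
    return g.ravel()

def nac_update(theta, episode, alpha=0.1, gamma=0.99):
    # NAC-style step: fit (here Monte-Carlo) returns with the compatible
    # features by least squares; the fitted weights w estimate the
    # natural gradient, so theta <- theta + alpha * w.
    G, returns = 0.0, []
    for (_, _, r) in reversed(episode):
        G = r + gamma * G
        returns.append(G)
    returns.reverse()
    X = np.array([grad_log_pi(theta, s, a) for (s, a, _) in episode])
    w, *_ = np.linalg.lstsq(X, np.array(returns), rcond=None)
    return theta + alpha * w.reshape(theta.shape)

# Toy usage with a fabricated episode of (state, action, reward) triples.
episode = [(rng.normal(), int(rng.integers(n_actions)), rng.normal())
           for _ in range(20)]
theta = nac_update(theta, episode)
```

In the paper's approach, cascade-correlation would incrementally add new hidden-unit basis functions to phi() as learning stalls, letting the policy class grow with the complexity of the problem.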
Main file: ewrl8.pdf (140.14 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00826055, version 1 (04-06-2013)

Identifiers

  • HAL Id: hal-00826055, version 1

Cite

Sertan Girgin, Philippe Preux. Basis Expansion in Natural Actor Critic Methods. European Workshop on Reinforcement Learning, Jun 2008, Villeneuve d'Ascq, France. pp.110-123. ⟨hal-00826055⟩
