Basis Expansion in Natural Actor Critic Methods - Archive ouverte HAL
Conference paper, 2008

Basis Expansion in Natural Actor Critic Methods


In reinforcement learning, the aim of the agent is to find a policy that maximizes its expected return. Policy gradient methods accomplish this goal by directly approximating the policy with a parametric function approximator: the expected return of the current policy is estimated, and the parameters are updated by steepest ascent in the direction of the gradient of the expected return with respect to the policy parameters. In general, the policy is defined in terms of a set of basis functions that capture important features of the problem. Since the quality of the resulting policies depends directly on the set of basis functions, and defining them becomes harder as the complexity of the problem increases, it is important to be able to find them automatically. In this paper, we propose a new approach that uses the cascade-correlation learning architecture to automatically construct a set of basis functions within the context of Natural Actor-Critic (NAC) algorithms. Such basis functions allow more complex policies to be represented, and consequently improve the performance of the resulting policies. We also demonstrate the effectiveness of the method empirically.
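To make the policy-gradient idea in the abstract concrete, the following is a minimal REINFORCE-style sketch (not the paper's NAC algorithm) on a hypothetical two-armed bandit. The policy is a softmax over preferences that are linear in a fixed set of basis functions, and the parameters are updated by steepest ascent along the gradient of the expected return; the bandit, reward values, and one-hot basis are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-armed bandit: action 1 pays ~1.0, action 0 pays ~0.0.
def reward(action):
    return rng.normal(1.0 if action == 1 else 0.0, 0.1)

# Fixed basis functions phi(a); here a simple one-hot over the two actions.
# (The paper's point is that choosing such features well is hard, motivating
# automatic basis construction.)
def phi(action):
    f = np.zeros(2)
    f[action] = 1.0
    return f

def policy_probs(theta):
    """Softmax policy with preferences linear in the basis functions."""
    prefs = np.array([theta @ phi(a) for a in (0, 1)])
    e = np.exp(prefs - prefs.max())
    return e / e.sum()

theta = np.zeros(2)   # policy parameters
alpha = 0.1           # step size

for _ in range(2000):
    p = policy_probs(theta)
    a = rng.choice(2, p=p)
    r = reward(a)
    # For a softmax policy: grad log pi(a) = phi(a) - E_pi[phi]
    grad_log = phi(a) - sum(p[b] * phi(b) for b in (0, 1))
    theta += alpha * r * grad_log   # steepest ascent on expected return

p = policy_probs(theta)  # should now strongly favour the better action
```

An actor-critic variant would replace the raw reward `r` with a critic's advantage estimate, and NAC would additionally precondition the gradient with the inverse Fisher information matrix; this sketch keeps only the vanilla gradient step.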
Main file: ewrl8.pdf (140.14 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00826055, version 1 (04-06-2013)

Sertan Girgin, Philippe Preux. Basis Expansion in Natural Actor Critic Methods. European Workshop on Reinforcement Learning, Jun 2008, Villeneuve d'Ascq, France. pp.110-123. ⟨hal-00826055⟩