
Basis Expansion in Natural Actor Critic Methods

Sertan Girgin (1), Philippe Preux (1, 2, 3)
1 SEQUEL - Sequential Learning
LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: In reinforcement learning, the aim of the agent is to find a policy that maximizes its expected return. Policy gradient methods accomplish this by directly approximating the policy with a parametric function approximator: the expected return of the current policy is estimated, and the parameters are updated by steepest ascent in the direction of the gradient of the expected return with respect to the policy parameters. In general, the policy is defined in terms of a set of basis functions that capture important features of the problem. Since the quality of the resulting policies depends directly on the set of basis functions, and defining them becomes harder as the complexity of the problem increases, it is important to be able to find them automatically. In this paper, we propose a new approach that uses the cascade-correlation learning architecture to automatically construct a set of basis functions within the context of Natural Actor-Critic (NAC) algorithms. Such basis functions allow more complex policies to be represented, and consequently improve the performance of the resulting policies. We also demonstrate the effectiveness of the method empirically.
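To make the setup the abstract describes concrete, the following is a minimal sketch (not the paper's method) of a policy parameterized over a fixed set of basis functions and updated by vanilla policy-gradient ascent. The basis `phi`, the toy reward, and all names are illustrative assumptions; the paper's contribution is constructing such bases automatically via cascade-correlation, and it uses the *natural* gradient rather than the plain gradient shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hand-chosen basis functions: map a scalar state to features.
def phi(s):
    return np.array([1.0, s, s * s])

N_ACTIONS = 2

def policy_probs(theta, s):
    # Softmax policy, linear in the basis features: one weight row per action.
    logits = theta @ phi(s)                 # shape (N_ACTIONS,)
    z = np.exp(logits - logits.max())       # subtract max for stability
    return z / z.sum()

def grad_log_pi(theta, s, a):
    # Score function: gradient of log pi(a|s) w.r.t. theta.
    p = policy_probs(theta, s)
    g = -np.outer(p, phi(s))
    g[a] += phi(s)
    return g

# Toy task (assumed for illustration): reward 1 iff action 0 is taken.
# Steepest-ascent update on the expected return (REINFORCE-style).
theta = np.zeros((N_ACTIONS, 3))
alpha = 0.5
for _ in range(200):
    s = rng.uniform(-1, 1)
    a = rng.choice(N_ACTIONS, p=policy_probs(theta, s))
    r = 1.0 if a == 0 else 0.0
    theta += alpha * r * grad_log_pi(theta, s, a)
# After training, the policy should strongly prefer action 0.
```

The quality ceiling of this scheme is set by `phi`: a policy linear in `[1, s, s^2]` cannot express decision boundaries outside that span, which is the motivation for expanding the basis automatically.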
Document type :
Conference papers
Cited literature: 21 references
Contributor: Philippe Preux
Submitted on : Tuesday, June 4, 2013 - 9:15:27 AM
Last modification on : Saturday, December 18, 2021 - 3:02:05 AM
Long-term archiving on: Thursday, September 5, 2013 - 4:19:25 AM


Files produced by the author(s)


  • HAL Id: hal-00826055, version 1



Sertan Girgin, Philippe Preux. Basis Expansion in Natural Actor Critic Methods. European Workshop on Reinforcement Learning, Jun 2008, Villeneuve d'Ascq, France. pp.110-123. ⟨hal-00826055⟩


