
Incremental Basis Function Expansion in Reinforcement Learning using Cascade-Correlation Networks

Sertan Girgin (1), Philippe Preux (1, 2)
1 SEQUEL - Sequential Learning
LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: In reinforcement learning, it is common practice to map the state(-action) space to a different one using basis functions. This transformation aims to represent the input data in a more informative form that facilitates and improves subsequent steps. Since a "good" set of basis functions results in better solutions, and defining such functions becomes a challenge as problem complexity increases, it is beneficial to be able to generate them automatically. In this paper, we propose a new approach based on the Bellman residual for constructing basis functions using the cascade-correlation learning architecture. We show how this approach can be applied to the Least-Squares Policy Iteration (LSPI) algorithm in order to obtain a better approximation of the value function, and consequently improve the performance of the resulting policies. We also demonstrate the effectiveness of the method empirically on several benchmark problems.
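The paper's own implementation is not reproduced here, but the core loop it describes — fit a linear value function on the current basis, measure the Bellman residual, train a cascade-correlation unit to correlate with that residual, freeze it, and append it as a new basis function — can be sketched as follows. This is a minimal illustration under assumed simplifications: a toy chain MDP with a fixed policy, LSTD (the evaluation step inside LSPI) instead of full policy iteration, a single sigmoid candidate unit, and plain gradient ascent on the unnormalized residual covariance. None of these choices are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain MDP under a fixed "move right" policy: states 0..N-1,
# reward 1 on entering (or staying in) the absorbing goal state.
N, gamma = 10, 0.9
S = np.arange(N)
S_next = np.minimum(S + 1, N - 1)
R = (S_next == N - 1).astype(float)

def lstd(phi, phi_next, r, gamma):
    """Least-squares temporal difference: solve A w = b for the weights."""
    A = phi.T @ (phi - gamma * phi_next)
    b = phi.T @ r
    return np.linalg.lstsq(A, b, rcond=None)[0]

def bellman_residual(phi, phi_next, r, w, gamma):
    """TD error of the linear value estimate at every state."""
    return r + gamma * phi_next @ w - phi @ w

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Initial basis: a bias term plus the normalized state index.
def base_features(s):
    return np.stack([np.ones_like(s, dtype=float), s / (N - 1)], axis=1)

phi, phi_next = base_features(S), base_features(S_next)
w = lstd(phi, phi_next, R, gamma)
res0 = bellman_residual(phi, phi_next, R, w, gamma)

def train_unit(x, target, steps=2000, lr=0.5):
    """Cascade-correlation-style candidate training (simplified):
    gradient ascent on the covariance between the unit's output and
    the current Bellman residual."""
    v = rng.normal(size=x.shape[1])
    c = target - target.mean()
    for _ in range(steps):
        h = sigmoid(x @ v)
        v += lr * (x.T @ (c * h * (1 - h))) / len(x)
    return v

# Train one candidate unit on the residual, freeze it, and append it
# as a new basis function; then refit the value function on the
# expanded basis.
v = train_unit(phi, res0)
phi2 = np.column_stack([phi, sigmoid(phi @ v)])
phi2_next = np.column_stack([phi_next, sigmoid(phi_next @ v)])
w2 = lstd(phi2, phi2_next, R, gamma)
res1 = bellman_residual(phi2, phi2_next, R, w2, gamma)

print("mean |residual| before:", np.abs(res0).mean())
print("mean |residual| after :", np.abs(res1).mean())
```

In the full method this expansion step would be repeated — each new frozen unit also feeds the next candidate, which is what makes the architecture a cascade — and interleaved with LSPI's policy-improvement step rather than run once against a fixed policy.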
Contributor: Philippe Preux
Submitted on: Thursday, November 8, 2012 - 3:33:10 PM
Last modification on: Tuesday, November 24, 2020 - 2:18:20 PM
Long-term archiving on: Saturday, February 9, 2013 - 2:25:10 AM
Sertan Girgin, Philippe Preux. Incremental Basis Function Expansion in Reinforcement Learning using Cascade-Correlation Networks. 8th International Conference on Machine Learning and Applications, Dec 2008, San Diego, United States. pp.75-82. ⟨inria-00356262⟩