A connectionist architecture that adapts its representation to complex tasks
Abstract
This paper presents an original connectionist architecture capable of adapting its representation to one or more reinforcement learning problems. We briefly describe the generic reinforcement learning theory on which it is based, and focus on distributed algorithms that enable efficient planning. Within this framework, we define the notion of task specialisation and propose a procedure for adapting a task model without increasing its complexity. This procedure consists in high-level learning of representations in problems with possibly delayed reinforcements. We show that such a single architecture can adapt to multiple tasks. Finally, we stress its connectionist nature: most computations can be distributed and performed in parallel. We illustrate and evaluate this adaptation paradigm in a continuous-space navigation environment.