Conference Paper, 2003

Automatic Generation of an Agent's Basic Behaviors

Olivier Buffet
Alain Dutech
François Charpillet

The agent approach, as presented by \cite{Russell95}, aims at designing ``intelligent'' behaviors. Yet Reinforcement Learning (RL) methods often fail when confronted with complex tasks. We are therefore developing a methodology for the automated design of agents (in the framework of Markov Decision Processes) in the case where the global task can be decomposed into simpler, possibly concurrent, sub-tasks. Our main idea is to automatically combine basic behaviors using RL methods, which led us to propose the two complementary mechanisms presented in this paper. The first mechanism builds a global policy as a weighted combination of basic (reusable) policies, the weights being learned by the agent (using Simulated Annealing in our case). An agent designed this way is highly scalable: without further refinement of the global behavior, it can automatically combine several instances of the same basic behavior to take into account concurrent occurrences of the same subtask. The second mechanism aims at creating new basic behaviors for combination. It is based on an incremental learning method that builds on the approximate solution obtained through the combination of older behaviors.
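The weighted-combination idea can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the function names, the softmax over summed action preferences, the Gaussian proposal, and the linear cooling schedule are all assumptions made for the example; the paper only states that basic policies are combined with learned weights, the weights being optimized by simulated annealing.

```python
import math
import random

def combine_policies(basic_prefs, weights):
    """Global policy as a weighted combination of basic behaviors:
    sum each basic policy's preference for every action, weighted,
    then normalise with a softmax (an assumed combination rule)."""
    n_actions = len(basic_prefs[0])
    scores = [sum(w * prefs[a] for w, prefs in zip(weights, basic_prefs))
              for a in range(n_actions)]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def simulated_annealing(evaluate, n_weights, steps=500, t0=1.0, seed=0):
    """Search for combination weights maximising evaluate(weights),
    e.g. the combined policy's estimated return, by simulated annealing."""
    rng = random.Random(seed)
    w = [rng.random() for _ in range(n_weights)]
    cur_val = evaluate(w)
    best, best_val = list(w), cur_val
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-6          # linear cooling
        cand = [max(0.0, wi + rng.gauss(0, 0.1)) for wi in w]
        val = evaluate(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if val > cur_val or rng.random() < math.exp((val - cur_val) / t):
            w, cur_val = cand, val
            if val > best_val:
                best, best_val = list(cand), val
    return best, best_val
```

As a toy usage, with two basic policies preferring different actions, annealing on the probability the combined policy assigns to action 0 drives the weight toward the first behavior.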

Dates and versions

inria-00099817 , version 1 (26-09-2006)



Olivier Buffet, Alain Dutech, François Charpillet. Automatic Generation of an Agent's Basic Behaviors. Second International Joint Conference on Autonomous Agents and Multi-Agent Systems - AAMAS'03, 2003, Melbourne, Victoria, Australia, pp. 875-882. ⟨inria-00099817⟩

