Report, Year: 2010

Reinforcement Learning for Safe Longitudinal Platoons

Abstract

This paper considers the problem of longitudinal control of a linear platoon and proposes a new approach based on reinforcement learning. Several approaches have been proposed in the literature; among them, global and near-to-near approaches are often opposed. While the global approach requires each vehicle to acquire the global state of the platoon, the near-to-near approach only needs local information that each vehicle can obtain through its on-board sensors. This latter approach does not need a communication infrastructure and is therefore more robust. This paper proposes an approach in which the control law defining the behavior of each vehicle is automatically learnt using a simple reinforcement learning algorithm (Q-learning). A simulator based on a kinematic model defining the dynamics of each vehicle in the platoon makes it possible, through an exploration and exploitation process, to obtain an optimal policy for each vehicle in the platoon. Although Q-learning is based on a discrete representation of states and actions, we demonstrate that this technique can build controllers that are efficient when compared to traditional approaches. A comparison with other longitudinal platooning algorithms is presented and demonstrates the relevance of our approach. Furthermore, we show that such a controller can be safe, as it can be proven to be collision free.
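Since the report itself is not deposited here, the actual state/action discretization, reward function, and kinematic model are not available. The following Python sketch only illustrates the kind of tabular Q-learning controller the abstract describes, using the standard update Q(s,a) ← Q(s,a) + α[r + γ max_a' Q(s',a') − Q(s,a)]. The bin edges, action set, reward, and simulator below are all assumptions made for illustration, not the report's method.

```python
import random

# Hypothetical discretizations -- the report's actual state/action sets
# are not given on this page.
GAP_BINS = [2.0, 4.0, 6.0, 8.0, 10.0]    # inter-vehicle distance (m)
REL_SPEED_BINS = [-2.0, -0.5, 0.5, 2.0]  # leader speed minus follower speed (m/s)
ACTIONS = [-1.0, 0.0, 1.0]               # follower acceleration (m/s^2)
TARGET_GAP = 6.0                         # assumed desired spacing (m)
DT = 0.1                                 # simulation time step (s)

def discretize(value, bins):
    """Index of the first bin edge >= value (last bin if none)."""
    for i, edge in enumerate(bins):
        if value <= edge:
            return i
    return len(bins)

def step(gap, v_leader, v_follower, accel):
    """One step of a simple kinematic model (an assumption here):
    the follower integrates its acceleration, the gap integrates
    the relative speed."""
    v_follower = max(0.0, v_follower + accel * DT)
    gap = gap + (v_leader - v_follower) * DT
    return gap, v_follower

def reward(gap):
    """Hypothetical reward: penalize spacing error, with a large
    penalty for a collision (gap <= 0)."""
    if gap <= 0.0:
        return -100.0
    return -abs(gap - TARGET_GAP)

Q = {}  # (gap_bin, rel_speed_bin) -> list of action values
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def q_values(state):
    return Q.setdefault(state, [0.0] * len(ACTIONS))

for episode in range(5000):
    # Follower starts 8 m behind a leader cruising at constant speed.
    gap, v_leader, v_follower = 8.0, 10.0, 10.0
    state = (discretize(gap, GAP_BINS),
             discretize(v_leader - v_follower, REL_SPEED_BINS))
    for _ in range(200):
        # Epsilon-greedy exploration/exploitation.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q_values(state)[i])
        gap, v_follower = step(gap, v_leader, v_follower, ACTIONS[a])
        r = reward(gap)
        next_state = (discretize(gap, GAP_BINS),
                      discretize(v_leader - v_follower, REL_SPEED_BINS))
        # Standard tabular Q-learning update.
        q_values(state)[a] += ALPHA * (
            r + GAMMA * max(q_values(next_state)) - q_values(state)[a])
        state = next_state
        if gap <= 0.0:
            break  # collision ends the episode
```

Because each follower only observes the gap and relative speed to its predecessor, this sketch is consistent with the near-to-near setting described in the abstract: no global platoon state or communication infrastructure is assumed.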
No file deposited

Dates and versions

inria-00551695, version 1 (04-01-2011)

Identifiers

  • HAL Id: inria-00551695, version 1

Cite

Nicole El Zoghby. Reinforcement Learning for Safe Longitudinal Platoons. [Internship report] 2010. ⟨inria-00551695⟩