Reinforcement Learning for Safe Longitudinal Platoons

Nicole El Zoghby 1
1 MAIA - Autonomous intelligent machine
INRIA Lorraine, LORIA - Laboratoire Lorrain de Recherche en Informatique et ses Applications
Abstract: This paper considers the problem of longitudinal control of a linear platoon and proposes a new approach based on reinforcement learning. Different approaches have been proposed in the literature; among them, global and near-to-near approaches are often opposed. While the global approach requires each vehicle to acquire the global state of the platoon, the near-to-near approach only needs local information that each vehicle can obtain through its on-board sensors. This latter approach does not need a communication infrastructure and is therefore more robust. This paper proposes an approach in which the control law defining the behavior of each vehicle is learnt automatically using a simple reinforcement learning algorithm (Q-learning). A simulator based on a kinematic model defining the dynamics of each vehicle in the platoon makes it possible, through an exploration and exploitation process, to obtain an optimal policy for each vehicle in the platoon. Although Q-learning is based on a discrete representation of states and actions, we demonstrate that this technique makes it possible to build efficient controllers when compared to traditional approaches. A comparison with other longitudinal platooning algorithms is presented, demonstrating the relevance of our approach. Furthermore, we show that such a controller can be safe, as it can be proven to be collision free.
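The abstract describes learning a near-to-near longitudinal controller with tabular Q-learning over a kinematic vehicle model. The report's actual state/action discretization, reward shaping, and simulator parameters are not given here; the sketch below is a minimal illustration under assumed choices (target gap, acceleration set, grid resolution), not the report's implementation.

```python
import random

TARGET_GAP = 10.0           # desired inter-vehicle distance (m) -- assumed value
DT = 0.1                    # simulation time step (s) -- assumed value
ACTIONS = [-2.0, 0.0, 2.0]  # follower accelerations (m/s^2) -- assumed discretization

def discretize(gap, rel_speed):
    """Map continuous (gap, relative speed) to a coarse grid cell (local info only)."""
    g = max(-5, min(5, int(round(gap - TARGET_GAP))))
    r = max(-3, min(3, int(round(rel_speed))))
    return (g, r)

def step(gap, v_lead, v_follow, accel):
    """Kinematic point-mass update for one follower behind its leader."""
    v_follow = max(0.0, v_follow + accel * DT)
    gap = gap + (v_lead - v_follow) * DT
    return gap, v_follow

def reward(gap):
    """Penalize deviation from the target gap; heavy penalty on collision."""
    if gap <= 0.0:
        return -100.0
    return -abs(gap - TARGET_GAP)

def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration/exploitation."""
    rng = random.Random(seed)
    Q = {}  # state -> list of Q-values, one per action
    for _ in range(episodes):
        gap, v_lead, v_follow = rng.uniform(5, 15), 10.0, rng.uniform(8, 12)
        for _ in range(100):
            s = discretize(gap, v_lead - v_follow)
            qs = Q.setdefault(s, [0.0] * len(ACTIONS))
            a = (rng.randrange(len(ACTIONS)) if rng.random() < eps
                 else max(range(len(ACTIONS)), key=lambda i: qs[i]))
            gap, v_follow = step(gap, v_lead, v_follow, ACTIONS[a])
            s2 = discretize(gap, v_lead - v_follow)
            q2 = Q.setdefault(s2, [0.0] * len(ACTIONS))
            # standard Q-learning temporal-difference update
            qs[a] += alpha * (reward(gap) + gamma * max(q2) - qs[a])
            if gap <= 0.0:
                break  # collision ends the episode
    return Q
```

Because each follower observes only its own gap and relative speed, this mirrors the near-to-near setting: no communication with the rest of the platoon is needed.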
Document type:
[Internship report] 2010
Contributor: François Charpillet
Submitted on: Tuesday, January 4, 2011 - 14:04:17
Last modified on: Thursday, January 11, 2018 - 06:19:51


  • HAL Id : inria-00551695, version 1



Nicole El Zoghby. Reinforcement Learning for Safe Longitudinal Platoons. [Internship report] 2010. 〈inria-00551695〉
