Conference papers

An open framework for human-like autonomous driving using Inverse Reinforcement Learning

Abstract: Research on autonomous car driving and advanced driving assistance systems has come to occupy a very significant place in robotics research. At the same time, there are significant entry barriers (e.g. cost, legislation, logistics) that make it very difficult for small research groups and individual researchers to gain access to a real autonomous vehicle for their experiments. This paper proposes to leverage an existing driving simulator (TORCS) by developing a ROS communication bridge for it. We use it as the basis for an experimental framework for the development and evaluation of human-like autonomous driving based on Inverse Reinforcement Learning (IRL). Built on an extensible and open architecture, this framework provides efficient GPU-based implementations of state-of-the-art IRL algorithms, as well as two challenging test environments and a set of evaluation metrics, as a first step toward a benchmark.
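To make the IRL setting concrete: the framework's goal is to recover, from human demonstrations, a reward function under which the demonstrated driving behaviour is (near-)optimal. A common tabular instance of this family is Maximum-Entropy IRL; the sketch below is a minimal, hedged illustration of that idea, not the paper's actual GPU implementation. The toy MDP, horizon, and learning rate are all illustrative assumptions.

```python
import numpy as np

def maxent_irl(P, features, expert_svf, gamma=0.9, lr=0.5, iters=100, horizon=20):
    """Recover linear reward weights w, with R(s) = features[s] @ w.

    P          : (A, S, S) transition probabilities P[a, s, s'].
    features   : (S, F) per-state feature matrix.
    expert_svf : (S,) expert state-visitation counts over the horizon
                 (from human driving demonstrations).
    """
    n_actions, n_states, _ = P.shape
    w = np.zeros(features.shape[1])
    for _ in range(iters):
        r = features @ w
        # Soft value iteration -> maximum-entropy stochastic policy.
        V = np.zeros(n_states)
        for _ in range(horizon):
            Q = r[:, None] + gamma * np.einsum('ast,t->as', P, V).T  # (S, A)
            Qmax = Q.max(axis=1, keepdims=True)
            V = Qmax[:, 0] + np.log(np.exp(Q - Qmax).sum(axis=1))    # stable logsumexp
        policy = np.exp(Q - V[:, None])                              # rows sum to 1
        # Expected state-visitation frequencies under the current policy.
        d = np.full(n_states, 1.0 / n_states)   # uniform start distribution
        svf = d.copy()
        for _ in range(horizon):
            d = np.einsum('s,sa,ast->t', d, policy, P)
            svf += d
        # MaxEnt gradient: expert feature counts minus expected feature counts.
        w += lr * (expert_svf - svf) @ features
    return w
```

In the driving setting, the per-state features would encode quantities such as lane offset, speed, or distance to other cars; the learned weights then define a "human-like" reward that a planner can optimise inside the simulator.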

Contributor: Dizan Vasquez
Submitted on: Tuesday, January 20, 2015




  • HAL Id: hal-01105271, version 1



Dizan Vasquez, Yufeng Yu, Suryansh Kumar, Christian Laugier. An open framework for human-like autonomous driving using Inverse Reinforcement Learning. IEEE Vehicle Power and Propulsion Conference, 2014, Coimbra, Portugal. ⟨hal-01105271⟩


