An open framework for human-like autonomous driving using Inverse Reinforcement Learning

Abstract — Research on autonomous car driving and advanced driving assistance systems has come to occupy a very significant place in robotics research. On the other hand, there are significant entry barriers (e.g. cost, legislation, logistics) that make it very difficult for small research groups and individual researchers to have access to a real autonomous vehicle for their experiments. This paper proposes to leverage an existing driving simulator (TORCS) by developing a ROS communication bridge for it. We use it as the basis for an experimental framework for the development and evaluation of human-like autonomous driving based on Inverse Reinforcement Learning (IRL). Based on an extensible and open architecture, this framework provides efficient GPU-based implementations of state-of-the-art IRL algorithms, as well as two challenging test environments and a set of evaluation metrics as a first step toward a benchmark.
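The paper's GPU-based IRL implementations are not reproduced here, but the core computation behind Maximum Entropy IRL — a canonical example of the state-of-the-art IRL algorithms the abstract refers to — can be sketched in plain Python. The chain MDP, one-hot features, expert counts, and learning rate below are illustrative assumptions, not the paper's setup:

```python
# Illustrative Maximum Entropy IRL on a toy 5-state chain MDP (stdlib only).
# Reward is linear in one-hot state features: r(s) = w[s].
import math

N = 5        # states 0..4; actions: 0 = left, 1 = right (deterministic moves)
T = 6        # demonstration horizon
GAMMA = 0.95

def step(s, a):
    """Deterministic transition, clamped to the ends of the chain."""
    return max(0, s - 1) if a == 0 else min(N - 1, s + 1)

def soft_value_iteration(w, iters=60):
    """Soft Bellman backup: V(s) = r(s) + log sum_a exp(gamma * V(s'))."""
    V = [0.0] * N
    for _ in range(iters):
        newV = []
        for s in range(N):
            logits = [GAMMA * V[step(s, a)] for a in (0, 1)]
            m = max(logits)  # log-sum-exp trick for numerical stability
            newV.append(w[s] + m + math.log(sum(math.exp(l - m) for l in logits)))
        V = newV
    # Stochastic policy: pi(a|s) proportional to exp(gamma * V(s')).
    pi = []
    for s in range(N):
        logits = [GAMMA * V[step(s, a)] for a in (0, 1)]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        pi.append([e / z for e in exps])
    return pi

def expected_visitations(pi):
    """Forward pass: expected state visitation counts over T steps from state 0."""
    d = [1.0] + [0.0] * (N - 1)   # start distribution
    mu = [0.0] * N
    for _ in range(T):
        for s in range(N):
            mu[s] += d[s]
        nd = [0.0] * N
        for s in range(N):
            for a in (0, 1):
                nd[step(s, a)] += d[s] * pi[s][a]
        d = nd
    return mu

# "Expert" visitation counts: demonstrations that drive to the right end and stay.
expert_mu = [1.0, 1.0, 1.0, 1.0, 2.0]

w = [0.0] * N
for _ in range(100):
    pi = soft_value_iteration(w)
    model_mu = expected_visitations(pi)
    # MaxEnt IRL gradient: expert feature counts minus expected model counts.
    w = [w[s] + 0.1 * (expert_mu[s] - model_mu[s]) for s in range(N)]

print("learned reward weights:", [round(x, 2) for x in w])
```

The structure of the computation — a soft value-iteration backup, a forward visitation pass, and a gradient step on the reward weights — is what a GPU implementation would parallelize over a much larger state space, with features extracted from simulator state rather than one-hot indicators.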
Document type: Conference papers

Cited literature: 9 references

https://hal.inria.fr/hal-01105271
Contributor: Dizan Vasquez
Submitted on: Tuesday, January 20, 2015 - 10:06:33 AM
Last modification on: Friday, January 4, 2019 - 1:23:34 AM
Long-term archiving on: Friday, September 11, 2015 - 7:40:37 AM

File

vppc14.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01105271, version 1

Citation

Dizan Vasquez, Yufeng Yu, Suryansh Kumar, Christian Laugier. An open framework for human-like autonomous driving using Inverse Reinforcement Learning. IEEE Vehicle Power and Propulsion Conference, 2014, Coimbra, Portugal. ⟨hal-01105271⟩

Metrics

Record views: 877
File downloads: 1527