An open framework for human-like autonomous driving using Inverse Reinforcement Learning

Conference Paper, 2014


Abstract

Research on autonomous car driving and advanced driver assistance systems occupies a very significant place in robotics research. At the same time, significant entry barriers (e.g. cost, legislation, logistics) make it very difficult for small research groups and individual researchers to access a real autonomous vehicle for their experiments. This paper proposes to leverage an existing driving simulator (Torcs) by developing a ROS communication bridge for it. We use it as the basis for an experimental framework for the development and evaluation of human-like autonomous driving based on Inverse Reinforcement Learning (IRL). Built on an extensible and open architecture, this framework provides efficient GPU-based implementations of state-of-the-art IRL algorithms, as well as two challenging test environments and a set of evaluation metrics as a first step toward a benchmark.
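The abstract does not reproduce the framework's code, but the class of algorithms it mentions can be illustrated. The following is a minimal sketch of Maximum Entropy IRL (Ziebart et al., 2008), a standard state-of-the-art IRL method of the period: soft value iteration yields a stochastic policy under the current reward, expected state visitations are rolled out, and the reward weights are updated to match the expert's feature expectations. The function name, the toy MDP used below, and all hyperparameters are illustrative assumptions, not the paper's actual API or implementation.

```python
import numpy as np

def maxent_irl(P, phi, expert_svf, gamma=0.9, lr=0.1, iters=100):
    """Illustrative Maximum Entropy IRL sketch (not the paper's code).

    P:          (A, S, S) transition tensor, P[a, s, s'] = p(s' | s, a)
    phi:        (S, F) state feature matrix
    expert_svf: (S,) empirical state visitation frequencies of the expert
    Returns a feature weight vector w (F,) so that reward(s) = phi[s] @ w.
    """
    A, S, _ = P.shape
    w = np.zeros(phi.shape[1])
    for _ in range(iters):
        r = phi @ w                               # reward per state
        # Soft value iteration -> stochastic policy pi(a | s)
        V = np.zeros(S)
        for _ in range(50):
            Q = r[None, :] + gamma * (P @ V)      # (A, S)
            V = np.logaddexp.reduce(Q, axis=0)    # soft max over actions
        pi = np.exp(Q - V[None, :])               # normalized over axis 0
        # Expected state visitation frequencies under pi (finite rollout)
        d = np.full(S, 1.0 / S)                   # uniform start distribution
        svf = d.copy()
        for _ in range(50):
            d = np.einsum('s,as,ast->t', d, pi, P)
            svf += d
        svf /= svf.sum()
        # Gradient of the log-likelihood: match feature expectations
        w += lr * (phi.T @ (expert_svf - svf))
    return w
```

On a toy two-state MDP where the expert spends most of its time in state 1, this procedure assigns state 1 the higher learned reward; the GPU-based implementations the paper describes parallelize exactly these value-iteration and visitation-propagation loops over states.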
Main file: vppc14.pdf (1.11 MB). Origin: files produced by the author(s).

Dates and versions

hal-01105271, version 1 (20-01-2015)

Identifiers

  • HAL Id: hal-01105271, version 1

Cite

Dizan Vasquez, Yufeng Yu, Suryansh Kumar, Christian Laugier. An open framework for human-like autonomous driving using Inverse Reinforcement Learning. IEEE Vehicle Power and Propulsion Conference, 2014, Coimbra, Portugal. ⟨hal-01105271⟩
