IGDrivSim: A Benchmark for the Imitation Gap in Autonomous Driving

📅 2024-11-07
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper addresses the “imitation gap” in autonomous driving imitation learning, which arises from perceptual discrepancies between human drivers and vehicle-mounted sensors. First, it formally defines the imitation gap and introduces IGDrivSim, a benchmark built on the Waymax simulator, to quantify the gap's impact on policy safety. Second, it proposes mitigating the gap by combining imitation learning (IL) with reinforcement learning (RL) that adds penalty rewards for prohibited behaviors. Experiments show that the perception gap substantially degrades policy safety, and that adding a lightweight RL penalty reward reduces collision rates by over 40% while markedly improving trajectory compliance. This work provides a formal framing and a scalable methodology for bridging the human-machine perceptual divide and improving the robustness of imitation learning in autonomous driving.

📝 Abstract
Developing autonomous vehicles that can navigate complex environments with human-level safety and efficiency is a central goal in self-driving research. A common approach to achieving this is imitation learning, where agents are trained to mimic human expert demonstrations collected from real-world driving scenarios. However, discrepancies between human perception and the self-driving car's sensors can introduce an imitation gap, leading to imitation learning failures. In this work, we introduce IGDrivSim, a benchmark built on top of the Waymax simulator, designed to investigate the effects of the imitation gap in learning autonomous driving policy from human expert demonstrations. Our experiments show that this perception gap between human experts and self-driving agents can hinder the learning of safe and effective driving behaviors. We further show that combining imitation with reinforcement learning, using a simple penalty reward for prohibited behaviors, effectively mitigates these failures. Our code is open-sourced at: https://github.com/clemgris/IGDrivSim.git.
Problem

Research questions and friction points this paper is trying to address.

Investigates imitation gap in autonomous driving learning.
Develops benchmark to study human-expert vs. self-driving perception.
Proposes combining imitation and reinforcement learning to improve safety.
Innovation

Methods, ideas, or system contributions that make the work stand out.

IGDrivSim benchmark for imitation gap analysis
Combines imitation learning with reinforcement learning
Uses penalty rewards to mitigate unsafe behaviors
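
The IL+RL combination described above can be sketched as a hybrid objective: a behavior-cloning loss toward expert actions, plus a sparse penalty reward triggered by prohibited events such as collisions or off-road driving. This is a minimal illustrative sketch, not the paper's actual implementation; the function names, the MSE imitation loss, and the weighting scheme are assumptions for illustration.

```python
import numpy as np

def bc_loss(pred_actions, expert_actions):
    # Behavior-cloning term: mean squared error between the policy's
    # actions and the human expert's demonstrated actions.
    return float(np.mean((np.asarray(pred_actions) - np.asarray(expert_actions)) ** 2))

def penalty_reward(collided, off_road, penalty=-1.0):
    # Sparse penalty reward: nonzero only when a prohibited behavior
    # (collision or leaving the road) occurs; zero otherwise.
    return penalty * (float(collided) + float(off_road))

def combined_objective(pred_actions, expert_actions, collided, off_road, rl_weight=0.1):
    # Hybrid IL+RL objective to minimize: imitation loss minus the
    # weighted penalty reward, so prohibited events raise the objective.
    return bc_loss(pred_actions, expert_actions) - rl_weight * penalty_reward(collided, off_road)
```

For example, a policy that deviates from the expert and collides incurs both the imitation error and the penalty term; `rl_weight` (hypothetical) balances mimicking demonstrations against avoiding unsafe outcomes.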