Autonomous Vehicle Controllers From End-to-End Differentiable Simulation

📅 2024-09-12
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor generalization and low sample efficiency of autonomous driving controllers, this paper proposes an Analytic Policy Gradient (APG) framework built around an end-to-end differentiable simulator. Unlike behavior cloning, which relies on offline expert actions, APG embeds a large-scale differentiable vehicle dynamics simulator directly into the training loop and uses gradients of the environment dynamics as a physics-consistent prior; it requires only expert trajectories, not action labels. The framework uses a recurrent architecture to propagate temporal information across long simulated trajectories and is evaluated on the Waymo Open Motion Dataset. Experiments show that APG policies track trajectories more accurately, respond faster, and are substantially more robust to dynamics perturbations than behavior cloning, while producing driving behavior more aligned with human intuition.

📝 Abstract
Current methods to learn controllers for autonomous vehicles (AVs) focus on behavioural cloning. Being trained only on exact historic data, the resulting agents often generalize poorly to novel scenarios. Simulators provide the opportunity to go beyond offline datasets, but they are still treated as complicated black boxes, only used to update the global simulation state. As a result, these RL algorithms are slow, sample-inefficient, and prior-agnostic. In this work, we leverage a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers on the large-scale Waymo Open Motion Dataset. Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of the environment dynamics serve as a useful prior to help the agent learn a more grounded policy. We combine this setup with a recurrent architecture that can efficiently propagate temporal information across long simulated trajectories. This APG method allows us to learn robust, accurate, and fast policies, while only requiring widely-available expert trajectories, instead of scarce expert actions. We compare to behavioural cloning and find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
Problem

Research questions and friction points this paper is trying to address.

Improving generalization of autonomous vehicle controllers beyond historical data
Overcoming sample inefficiency in black-box simulation training methods
Enabling robust policy learning without requiring expert action data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Brings a differentiable simulator into an end-to-end training loop
Optimizes the controller with analytic policy gradients through the environment dynamics
Uses a recurrent architecture to propagate temporal information across long simulated trajectories
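
The contributions above can be sketched in a few lines of PyTorch. This is an illustrative toy, not the paper's code: the single-integrator `dynamics` function, the `RNNPolicy` GRU controller, and the straight-line expert trajectory are all hypothetical stand-ins for the large-scale Waymo setup. The key mechanic it demonstrates is the APG idea: the loss compares simulated states to expert states (no action labels), and `loss.backward()` differentiates through the simulator step itself, so the environment's gradients shape the policy update.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
STATE_DIM, ACTION_DIM, HIDDEN, HORIZON, DT = 2, 2, 16, 10, 0.1

def dynamics(state, action):
    # Toy differentiable dynamics (single integrator) standing in for the
    # vehicle simulator: the next state depends smoothly on the action,
    # so gradients flow backwards through each environment step.
    return state + DT * action

class RNNPolicy(nn.Module):
    # Recurrent controller: a GRU cell carries temporal context
    # across the simulated rollout.
    def __init__(self):
        super().__init__()
        self.gru = nn.GRUCell(STATE_DIM, HIDDEN)
        self.head = nn.Linear(HIDDEN, ACTION_DIM)

    def forward(self, state, hidden):
        hidden = self.gru(state, hidden)
        return self.head(hidden), hidden

policy = RNNPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

# Expert trajectory: states only (a unit-velocity straight line).
# No expert actions are ever used -- only the visited states.
expert = torch.stack(
    [torch.full((1, STATE_DIM), DT * t) for t in range(1, HORIZON + 1)]
)

first_loss = None
for step in range(300):
    state = torch.zeros(1, STATE_DIM)
    hidden = torch.zeros(1, HIDDEN)
    loss = torch.zeros(())
    for t in range(HORIZON):
        action, hidden = policy(state, hidden)
        state = dynamics(state, action)  # gradient flows through the simulator
        loss = loss + ((state - expert[t]) ** 2).mean()
    opt.zero_grad()
    loss.backward()  # analytic policy gradient: backprop through dynamics
    opt.step()
    if first_loss is None:
        first_loss = float(loss)
final_loss = float(loss)
```

Contrast this with behavior cloning, which would instead regress the policy's actions onto recorded expert actions and never touch the simulator during training; here the simulator's structure is what turns a state-matching loss into a usable gradient on the actions.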
Asen Nachkov
INSAIT, Sofia University, Sofia, Bulgaria
D. Paudel
INSAIT, Sofia University, Sofia, Bulgaria
L. V. Gool
INSAIT, Sofia University, Sofia, Bulgaria; ETH Zurich, Zurich, Switzerland