CausalVAD: De-confounding End-to-End Autonomous Driving via Causal Intervention

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
End-to-end autonomous driving models are prone to dataset biases, often learning spurious statistical correlations that compromise safety and reliability in complex scenarios. To address this, this work proposes CausalVAD, a framework that, for the first time, integrates lightweight backdoor-adjustment theory into end-to-end driving systems via a plug-and-play Sparse Causal Intervention Scheme (SCIS). SCIS uses a dictionary of driving-context prototypes to sparsely intervene on query vectors, removing confounding biases from the learned representations and improving the causal reliability of motion planning. On benchmarks such as nuScenes, CausalVAD achieves state-of-the-art planning accuracy and safety while remaining robust under data bias and sensor noise.

📝 Abstract
Planning-oriented end-to-end driving models show great promise, yet they fundamentally learn statistical correlations rather than true causal relationships. This vulnerability leads to causal confusion, where models exploit dataset biases as shortcuts, critically harming their reliability and safety in complex scenarios. To address this, we introduce CausalVAD, a de-confounding training framework built on causal intervention. At its core, we design the sparse causal intervention scheme (SCIS), a lightweight, plug-and-play module that instantiates backdoor adjustment in neural networks. SCIS constructs a dictionary of prototypes representing latent driving contexts and uses this dictionary to intervene on the model's sparse vectorized queries, actively removing the spurious associations induced by confounders from the representations passed to downstream tasks. Extensive experiments on benchmarks such as nuScenes show that CausalVAD achieves state-of-the-art planning accuracy and safety. Furthermore, our method demonstrates superior robustness against both data bias and noisy scenarios configured to induce causal confusion.
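The abstract describes SCIS as an attention-style intervention: each query attends sparsely to a dictionary of context prototypes, and the contributions are reweighted by a context prior to approximate the backdoor adjustment sum P(Y | do(X)) = Σ_c P(Y | X, c) P(c). The paper does not give implementation details, so the sketch below is a hypothetical NumPy illustration of that general pattern; the function name, the top-s sparsification, and the residual combination are all assumptions, not the authors' actual design.

```python
import numpy as np

def sparse_causal_intervention(queries, prototypes, prior, top_s=2):
    """Hypothetical sketch of a backdoor-adjustment-style intervention.

    queries    : (N, d) sparse query vectors from the driving model
    prototypes : (K, d) dictionary of latent driving-context prototypes
    prior      : (K,)   estimated prior P(c_k) over contexts (assumed given)
    top_s      : number of prototypes kept per query (sparsity, an assumption)
    """
    d = queries.shape[1]
    # Affinity between each query and each context prototype.
    logits = queries @ prototypes.T / np.sqrt(d)                     # (N, K)
    # Sparsify: keep only the top_s highest-affinity contexts per query.
    keep = np.argsort(-logits, axis=1)[:, :top_s]
    masked = np.full_like(logits, -np.inf)
    np.put_along_axis(masked, keep,
                      np.take_along_axis(logits, keep, axis=1), axis=1)
    # Softmax over the kept contexts (masked entries contribute zero).
    attn = np.exp(masked - masked.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)                          # (N, K)
    # Backdoor adjustment: reweight context contributions by the prior
    # P(c), approximating  sum_c P(c) * E[feature | query, c].
    weights = attn * prior[None, :]
    adjusted = (weights @ prototypes) / weights.sum(axis=1, keepdims=True)
    # Residual combination preserves the original query information.
    return queries + adjusted
```

In this reading, the prior-weighted sum over contexts is what "cuts" the backdoor path from confounder to prediction: instead of letting the data's biased context distribution leak into the representation, each context contributes in proportion to an explicit P(c).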
Problem

Research questions and friction points this paper is trying to address.

causal confusion
end-to-end autonomous driving
dataset bias
spurious correlations
confounders
Innovation

Methods, ideas, or system contributions that make the work stand out.

causal intervention
de-confounding
end-to-end autonomous driving
backdoor adjustment
sparse causal intervention scheme