SECURE: Stable Early Collision Understanding via Robust Embeddings in Autonomous Driving

πŸ“… 2026-04-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing accident prediction models for autonomous driving lack robustness under minor real-world perturbations, leading to unstable predictions and feature representations. To address this, this work proposes the SECURE framework, which, for the first time, formally defines and jointly optimizes dual stability in both the prediction space and the latent feature space. The approach employs a multi-objective loss function that integrates consistency constraints with respect to a reference model's outputs and penalties for sensitivity to adversarial perturbations. Evaluated on the DAD and CCD datasets, SECURE significantly enhances model robustness while achieving state-of-the-art performance on clean data.
πŸ“ Abstract
While deep learning has significantly advanced accident anticipation, the robustness of these safety-critical systems against real-world perturbations remains a major challenge. We reveal that state-of-the-art models like CRASH, despite their high performance, exhibit significant instability in predictions and latent representations when faced with minor input perturbations, posing serious reliability risks. To address this, we introduce SECURE - Stable Early Collision Understanding Robust Embeddings, a framework that formally defines and enforces model robustness. SECURE is founded on four key attributes: consistency and stability in both prediction space and latent feature space. We propose a principled training methodology that fine-tunes a baseline model using a multi-objective loss, which minimizes divergence from a reference model and penalizes sensitivity to adversarial perturbations. Experiments on DAD and CCD datasets demonstrate that our approach not only significantly enhances robustness against various perturbations but also improves performance on clean data, achieving new state-of-the-art results.
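The multi-objective loss described in the abstract (task loss + divergence from a frozen reference model + sensitivity to adversarial perturbations, over both predictions and latents) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the model architecture, the function names (`secure_loss`, `fgsm_perturb`), the specific divergence/penalty terms (MSE), and the weights `lambda_pred`/`lambda_feat` are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAnticipator(nn.Module):
    """Stand-in for an accident-anticipation backbone.
    Returns (accident probability, latent feature)."""
    def __init__(self, dim_in=16, dim_feat=8):
        super().__init__()
        self.encoder = nn.Linear(dim_in, dim_feat)
        self.head = nn.Linear(dim_feat, 1)

    def forward(self, x):
        z = torch.relu(self.encoder(x))   # latent feature space
        p = torch.sigmoid(self.head(z))   # prediction space
        return p, z

def fgsm_perturb(model, x, eps=0.01):
    """One-step input perturbation (FGSM-style) used as the
    adversarial perturbation in the stability penalty."""
    x = x.clone().requires_grad_(True)
    p, _ = model(x)
    p.sum().backward()
    return (x + eps * x.grad.sign()).detach()

def secure_loss(model, ref_model, x, y, lambda_pred=1.0, lambda_feat=1.0):
    """Multi-objective loss: task BCE
    + consistency with a frozen reference model's predictions
    + dual stability (prediction and latent) under perturbation."""
    p, z = model(x)
    with torch.no_grad():
        p_ref, _ = ref_model(x)          # reference model outputs
    x_adv = fgsm_perturb(model, x)
    p_adv, z_adv = model(x_adv)
    task = F.binary_cross_entropy(p.squeeze(-1), y)
    consistency = F.mse_loss(p, p_ref)                        # prediction consistency
    stability = F.mse_loss(p_adv, p) + F.mse_loss(z_adv, z)   # dual stability
    return task + lambda_pred * consistency + lambda_feat * stability
```

In this sketch the reference model is a frozen copy of the baseline, so the consistency term keeps the fine-tuned model from drifting on clean inputs while the stability term penalizes sensitivity to the perturbation in both spaces.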
Problem

Research questions and friction points this paper is trying to address.

robustness
autonomous driving
collision anticipation
input perturbations
model stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

robust embeddings
adversarial robustness
collision anticipation
stable representation learning
multi-objective training
πŸ”Ž Similar Papers
No similar papers found.
Wenjing Wang
Xiamen University Malaysia
Wenxuan Wang
Xinjiang University
Songning Lai
HKUST(GZ)
Machine Learning · Deep Learning · Multimodal · XAI