AI Summary
Existing accident prediction models for autonomous driving lack robustness under minor real-world perturbations, leading to unstable predictions and feature representations. To address this, this work proposes the SECURE framework, which, for the first time, formally defines and jointly optimizes dual stability in both the prediction space and the latent feature space. The approach employs a multi-objective loss function that combines consistency constraints with respect to a reference model's outputs and penalties on sensitivity to adversarial perturbations. Evaluated on the DAD and CCD datasets, SECURE significantly enhances model robustness while achieving state-of-the-art performance on clean data.
Abstract
While deep learning has significantly advanced accident anticipation, the robustness of these safety-critical systems against real-world perturbations remains a major challenge. We reveal that state-of-the-art models such as CRASH, despite their high performance, exhibit significant instability in both predictions and latent representations when faced with minor input perturbations, posing serious reliability risks. To address this, we introduce SECURE (Stable Early Collision Understanding Robust Embeddings), a framework that formally defines and enforces model robustness. SECURE is founded on four key attributes: consistency and stability in both the prediction space and the latent feature space. We propose a principled training methodology that fine-tunes a baseline model with a multi-objective loss, which minimizes divergence from a reference model and penalizes sensitivity to adversarial perturbations. Experiments on the DAD and CCD datasets demonstrate that our approach not only significantly enhances robustness against various perturbations but also improves performance on clean data, achieving new state-of-the-art results.
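The multi-objective loss described above can be sketched in minimal form. The abstract does not specify the exact divergence measures or weights, so the function name, the use of mean-squared error for both the reference-consistency term and the feature-stability term, and the weighting coefficients below are all illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def secure_style_loss(pred, ref_pred, feat, feat_pert, task_loss,
                      lam_pred=1.0, lam_feat=1.0):
    """Illustrative sketch of a SECURE-style multi-objective loss.

    Combines three terms (weights lam_pred/lam_feat are hypothetical):
      - task_loss: the base accident-anticipation loss,
      - a consistency term tying predictions to a reference model's outputs,
      - a stability penalty on latent-feature drift under perturbation.
    MSE stands in here for whatever divergence the paper actually uses.
    """
    consistency = np.mean((pred - ref_pred) ** 2)   # prediction-space consistency
    stability = np.mean((feat - feat_pert) ** 2)    # latent-feature stability
    return task_loss + lam_pred * consistency + lam_feat * stability
```

In training, `feat_pert` would be the embedding of an adversarially perturbed input, so minimizing the third term directly penalizes sensitivity to perturbations while the second term keeps predictions anchored to the reference model.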