🤖 AI Summary
This work addresses the vulnerability of trajectory prediction models to minor adversarial perturbations, which can lead to catastrophic prediction failures. While such vulnerabilities have been documented, effective countermeasures remain scarce; this study introduces a lightweight, general-purpose defense mechanism based on randomized smoothing. By integrating this approach with diverse base models across multiple benchmark datasets, the proposed method significantly enhances robustness against adversarial attacks while preserving prediction accuracy on clean inputs. The results demonstrate that the technique offers an effective and practical means of improving the safety and reliability of trajectory prediction systems without imposing substantial computational overhead.
📝 Abstract
Accurate and robust trajectory prediction is essential for safe and efficient autonomous driving, yet recent work has shown that even state-of-the-art prediction models are highly vulnerable to mild adversarial perturbations of their inputs. Although model vulnerabilities to such attacks have been studied, work on effective countermeasures remains limited. In this work, we develop and evaluate a new defense mechanism for trajectory prediction models based on randomized smoothing, an approach previously applied with success in other domains. We evaluate its ability to improve model robustness through a series of experiments that test different randomized-smoothing strategies. We show that our approach consistently improves the prediction robustness of multiple base trajectory prediction models across various datasets without compromising accuracy in non-adversarial settings. Our results demonstrate that randomized smoothing offers a simple and computationally inexpensive technique for mitigating adversarial attacks in trajectory prediction.
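The core idea of randomized smoothing can be sketched briefly: perturb the observed trajectory with Gaussian noise several times, run the base predictor on each noisy copy, and aggregate the resulting predictions. The snippet below is a minimal illustration of this idea, not the paper's actual method; the `smooth_predict` wrapper, the constant-velocity toy base model, and all parameter values (`sigma`, `n_samples`) are assumptions chosen for demonstration.

```python
import numpy as np

def smooth_predict(base_model, history, sigma=0.05, n_samples=100, rng=None):
    """Randomized-smoothing wrapper (illustrative): average the base model's
    predictions over Gaussian perturbations of the observed trajectory."""
    rng = np.random.default_rng(rng)
    preds = []
    for _ in range(n_samples):
        # Perturb the past (x, y) positions with isotropic Gaussian noise.
        noisy = history + rng.normal(0.0, sigma, size=history.shape)
        preds.append(base_model(noisy))
    # Aggregate by averaging the sampled future trajectories.
    return np.mean(preds, axis=0)

# Toy base model (hypothetical): constant-velocity extrapolation of the last step.
def constant_velocity(history, horizon=12):
    v = history[-1] - history[-2]
    return history[-1] + np.arange(1, horizon + 1)[:, None] * v

# 8 observed (x, y) points moving along the x-axis.
history = np.stack([np.linspace(0.0, 1.0, 8), np.zeros(8)], axis=1)
pred = smooth_predict(constant_velocity, history, sigma=0.05, n_samples=200, rng=0)
```

Because the smoothed prediction averages over many perturbed inputs, a small adversarial shift in any single input has a diluted effect on the output, which is the intuition behind the robustness gains reported in the paper.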