🤖 AI Summary
Trajectory-level out-of-distribution (OOD) detection in autonomous driving trajectory prediction remains challenging due to training–deployment distribution shifts. Method: This paper proposes the first adaptive multi-modal OOD detection framework tailored for dynamic traffic scenarios. It uniquely formulates trajectory OOD detection as a dynamic sensing problem under time-varying error distributions, integrating uncertainty quantification, multi-modal error modeling, and quickest change detection theory to enable low-latency, low-false-alarm online anomaly identification. Contribution/Results: Evaluated on multiple mainstream trajectory prediction benchmarks, our method significantly outperforms existing uncertainty-based and vision-driven OOD approaches—reducing detection latency by 23%–41% and false positive rate by 18%–35%, while maintaining high computational efficiency. The framework enhances the robustness and reliability of autonomous driving systems in rare and complex traffic scenarios.
📝 Abstract
Trajectory prediction is central to the safe and seamless operation of autonomous vehicles (AVs). In deployment, however, prediction models inevitably face distribution shifts between training data and real-world conditions, where rare or underrepresented traffic scenarios induce out-of-distribution (OOD) cases. While most prior OOD detection research in AVs has concentrated on computer vision tasks such as object detection and segmentation, trajectory-level OOD detection remains largely underexplored. A recent study formulated this problem as a quickest change detection (QCD) task, providing formal guarantees on the trade-off between detection delay and false alarms [1]. Building on this foundation, we propose a new framework that introduces adaptive mechanisms to achieve robust detection in complex driving environments. Empirical analysis across multiple real-world datasets reveals that prediction errors--even on in-distribution samples--exhibit mode-dependent distributions that evolve over time with dataset-specific dynamics. By explicitly modeling these error modes, our method achieves substantial improvements in both detection delay and false alarm rates. Comprehensive experiments on established trajectory prediction benchmarks show that our framework significantly outperforms prior UQ- and vision-based OOD approaches in both accuracy and computational efficiency, offering a practical path toward reliable, driving-aware autonomy.