🤖 AI Summary
To address the insufficient robustness of autonomous driving trajectory prediction models to out-of-distribution (OoD) samples, this paper introduces the first standardized OoD evaluation protocol designed specifically for trajectory prediction. Methodologically, it adopts polynomial representations to jointly model road geometry and agent trajectories, marking the first use of polynomial parameterization for both inputs (historical trajectories and lane centerlines) and outputs (future trajectories). The approach combines polynomial curve fitting, geometry-aware feature encoding, cross-dataset distribution alignment, and a lightweight network architecture. Experiments demonstrate: (i) in-distribution (ID) performance competitive with state-of-the-art methods; (ii) substantially improved OoD generalization; (iii) significant reductions in model size, training cost, and inference latency; and (iv) a fundamental distinction in how two dominant robustness-enhancement strategies affect OoD performance, yielding critical insights for the design of future robust trajectory predictors.
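The core idea of representing trajectories as polynomial coefficients rather than raw waypoint sequences can be illustrated with a minimal sketch. The helper names, the cubic degree, and the least-squares fit via `np.polyfit` below are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

def fit_trajectory_poly(t, xy, degree=3):
    """Encode observed waypoints as per-axis polynomials x(t), y(t).

    Hypothetical helper: returns coefficient arrays (highest degree
    first, following np.polyfit's convention). The same idea applies
    to lane centerlines parameterized over arc length.
    """
    t = np.asarray(t, dtype=float)
    xy = np.asarray(xy, dtype=float)
    cx = np.polyfit(t, xy[:, 0], degree)  # least-squares fit, x-axis
    cy = np.polyfit(t, xy[:, 1], degree)  # least-squares fit, y-axis
    return cx, cy

def eval_trajectory_poly(cx, cy, t):
    """Decode polynomial coefficients back into waypoints."""
    t = np.asarray(t, dtype=float)
    return np.stack([np.polyval(cx, t), np.polyval(cy, t)], axis=-1)

# A noiseless quadratic trajectory is recovered (nearly) exactly:
t_hist = np.linspace(-2.0, 0.0, 20)               # 2 s of history at 10 Hz
xy_hist = np.stack([t_hist, 0.5 * t_hist**2], axis=-1)
cx, cy = fit_trajectory_poly(t_hist, xy_hist)
recon = eval_trajectory_poly(cx, cy, t_hist)
max_err = np.abs(recon - xy_hist).max()
```

A model operating on such coefficients sees a compact, resolution-independent description of motion and geometry, which is one plausible reason a small network can remain competitive.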
📝 Abstract
Robustness against Out-of-Distribution (OoD) samples is a key performance indicator of a trajectory prediction model. However, the development and ranking of state-of-the-art (SotA) models are driven by their In-Distribution (ID) performance on individual competition datasets. We present an OoD testing protocol that homogenizes the datasets and prediction tasks of two large-scale motion datasets. We introduce a novel prediction algorithm based on polynomial representations of agent trajectories and road geometry on both the input and output sides of the model. With a much smaller model size, training effort, and inference time, we reach near-SotA performance in ID testing and significantly improve robustness in OoD testing. Within our OoD testing protocol, we further study two augmentation strategies used by SotA models and their effects on model generalization. Highlighting the contrast between ID and OoD performance, we suggest adding OoD testing to the evaluation criteria of trajectory prediction models.