🤖 AI Summary
This work addresses the limitation of existing trajectory prediction methods, which assume fixed-length observation inputs and struggle with the variable-length, incomplete trajectories commonly encountered in real-world driving scenarios. To this end, the authors propose a Progressive Retrospective Framework (PRF), which progressively aligns incomplete observations with full-trajectory representations through a cascade of retrospective units. Each unit comprises a Retrospective Distillation Module (RDM) and a Retrospective Prediction Module (RPM). Coupled with a Rolling-Start Training Strategy (RSTS), PRF significantly improves data efficiency and can be seamlessly integrated as a plug-and-play component into existing models. Experiments on Argoverse 1 and 2 demonstrate that PRF effectively bridges the representation gap between short and complete trajectories, substantially enhancing prediction performance for variable-length inputs while maintaining strong generality and scalability.
📝 Abstract
Trajectory prediction is critical for autonomous driving, enabling safe and efficient planning in dense, dynamic traffic. Most existing methods optimize prediction accuracy under fixed-length observations. However, real-world driving often yields variable-length, incomplete observations, posing a challenge to these methods. A common strategy is to directly map features from incomplete observations to those from complete ones. This one-shot mapping, however, struggles to learn accurate representations for short trajectories due to significant information gaps. To address this issue, we propose a Progressive Retrospective Framework (PRF), which gradually aligns features from incomplete observations with those from complete ones via a cascade of retrospective units. Each unit consists of a Retrospective Distillation Module (RDM) and a Retrospective Prediction Module (RPM), where the RDM distills features and the RPM recovers previous timesteps using the distilled features. Moreover, we propose a Rolling-Start Training Strategy (RSTS) that enhances data efficiency during PRF training. PRF is plug-and-play with existing methods. Extensive experiments on the Argoverse 1 and Argoverse 2 datasets demonstrate the effectiveness of PRF. Code is available at https://github.com/zhouhao94/PRF.
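To make the cascade idea concrete, here is a minimal toy sketch of how a chain of retrospective units could progressively recover earlier timesteps of an incomplete observation. This is not the authors' implementation: the function names `rdm`, `rpm`, and `prf_cascade` are hypothetical, and the learned RDM/RPM networks are stood in for by simple fixed linear maps. It only illustrates the stated structure, namely that each unit distills a feature (RDM) and uses it to recover one previous timestep (RPM), which is prepended before the next unit runs.

```python
import numpy as np

rng = np.random.default_rng(0)

def rdm(feat, W):
    # Hypothetical Retrospective Distillation Module: a toy linear map
    # plus tanh standing in for a learned feature-distillation network.
    return np.tanh(feat @ W)

def rpm(distilled, V):
    # Hypothetical Retrospective Prediction Module: recovers the feature
    # of one earlier timestep from the distilled representation.
    return distilled @ V

def prf_cascade(obs_feats, num_units, W, V):
    """Sketch of the progressive cascade: each retrospective unit
    distills from the earliest available feature and prepends one
    recovered timestep, gradually extending a short observation
    toward the full-length representation."""
    feats = obs_feats                    # shape (T_obs, D)
    for _ in range(num_units):
        distilled = rdm(feats[0], W)     # RDM: distill earliest feature
        prev = rpm(distilled, V)         # RPM: recover previous timestep
        feats = np.vstack([prev[None], feats])
    return feats

D = 8
obs = rng.standard_normal((3, D))        # 3 observed timesteps (incomplete)
W = rng.standard_normal((D, D)) * 0.1    # toy stand-in for RDM weights
V = rng.standard_normal((D, D)) * 0.1    # toy stand-in for RPM weights
full = prf_cascade(obs, num_units=2, W=W, V=V)
print(full.shape)  # (5, 8): two recovered timesteps prepended
```

In the actual framework these modules would be trained (e.g. with the Rolling-Start Training Strategy) so that the extended sequence matches features extracted from complete trajectories; the sketch only shows the data flow of the cascade.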