Recover to Predict: Progressive Retrospective Learning for Variable-Length Trajectory Prediction

📅 2026-03-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a limitation of existing trajectory prediction methods, which assume fixed-length observation inputs and struggle with the variable-length, incomplete trajectories commonly encountered in real-world driving. The authors propose a Progressive Retrospective Framework (PRF), which progressively aligns incomplete observations with full-trajectory representations through a cascade of retrospective units, each comprising a Retrospective Distillation Module (RDM) and a Retrospective Prediction Module (RPM). Coupled with a Rolling-Start Training Strategy (RSTS), PRF significantly improves data efficiency and can be integrated as a plug-and-play component into existing models. Experiments on Argoverse 1 and Argoverse 2 demonstrate that PRF effectively bridges the representation gap between short and complete trajectories, substantially enhancing prediction performance on variable-length inputs while maintaining strong generality and scalability.

📝 Abstract
Trajectory prediction is critical for autonomous driving, enabling safe and efficient planning in dense, dynamic traffic. Most existing methods optimize prediction accuracy under fixed-length observations. However, real-world driving often yields variable-length, incomplete observations, posing a challenge to these methods. A common strategy is to directly map features from incomplete observations to those from complete ones. This one-shot mapping, however, struggles to learn accurate representations for short trajectories due to significant information gaps. To address this issue, we propose a Progressive Retrospective Framework (PRF), which gradually aligns features from incomplete observations with those from complete ones via a cascade of retrospective units. Each unit consists of a Retrospective Distillation Module (RDM) and a Retrospective Prediction Module (RPM), where RDM distills features and RPM recovers previous timesteps using the distilled features. Moreover, we propose a Rolling-Start Training Strategy (RSTS) that enhances data efficiency during PRF training. PRF is plug-and-play with existing methods. Extensive experiments on the Argoverse 1 and Argoverse 2 datasets demonstrate the effectiveness of PRF. Code is available at https://github.com/zhouhao94/PRF.
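The core idea of the cascade — distill a feature from the partial observation, use it to recover one earlier timestep, and repeat until the trajectory reaches full length — can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not the paper's actual modules: `distill` plays the role of the RDM (here it just computes a mean velocity), and `recover_step` plays the role of the RPM (here it extrapolates one timestep backward).

```python
def distill(traj):
    # RDM stand-in: summarize the observed 1-D trajectory
    # as its mean per-step velocity (an illustrative "feature").
    vels = [b - a for a, b in zip(traj, traj[1:])]
    return sum(vels) / len(vels)

def recover_step(traj):
    # RPM stand-in: use the distilled feature to recover
    # one earlier timestep by backward extrapolation.
    v = distill(traj)
    return [traj[0] - v] + traj

def progressive_retrospection(observed, full_len):
    # Cascade of retrospective units: each unit recovers one
    # earlier timestep, progressively growing the short
    # observation toward the full observation length.
    traj = list(observed)
    while len(traj) < full_len:
        traj = recover_step(traj)
    return traj

# An incomplete 3-step observation recovered to length 6.
short = [4.0, 5.0, 6.0]
recovered = progressive_retrospection(short, full_len=6)
# -> [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```

In the paper the distilled features and recovery steps are learned (and trained with the Rolling-Start strategy), whereas this sketch hard-codes a constant-velocity assumption purely to show the progressive, unit-by-unit structure.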
Problem

Research questions and friction points this paper is trying to address.

trajectory prediction
variable-length observations
incomplete trajectories
autonomous driving
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive Retrospective Framework
Retrospective Distillation Module
Variable-Length Trajectory Prediction
Rolling-Start Training Strategy
Incomplete Observation
Hao Zhou
Great Bay University
Lu Qi
Insta360 | Wuhan University
Computer Vision, Deep Learning
Jason Li
NTU
Jie Zhang
Great Bay University
Yi Liu
Donghua University
Xu Yang
Chinese Academy of Sciences
computer vision, robot vision, graph algorithm
Mingyu Fan
Donghua University
Fei Luo
Great Bay University