🤖 AI Summary
This work addresses the significant performance degradation of existing trajectory prediction methods under variable or extremely short observation lengths—common in scenarios involving occlusion or limited perception. To this end, the authors propose the TaPD framework, which reconstructs missing historical trajectories via a temporal inpainting module and employs an observation-adaptive predictor for robust forecasting. The approach innovatively integrates progressive knowledge distillation, cosine-annealed distillation weight scheduling, and a decoupled three-stage training pipeline (pretraining–reconstruction–fine-tuning) to enhance prediction accuracy and cross-length consistency under scarce observations. Experiments demonstrate that TaPD consistently outperforms strong baselines on Argoverse 1 and 2, with particularly notable gains in ultra-short observation settings. Moreover, TaPD functions as a plug-and-play module that effectively boosts the performance of existing models such as HiVT.
📝 Abstract
Trajectory prediction is essential for autonomous driving, enabling vehicles to anticipate the motion of surrounding agents to support safe planning. However, most existing predictors assume fixed-length histories and suffer substantial performance degradation when observations are variable or extremely short in real-world settings (e.g., due to occlusion or a limited sensing range). We propose TaPD (Temporal-adaptive Progressive Distillation), a unified plug-and-play framework for observation-adaptive trajectory forecasting under variable history lengths. TaPD comprises two cooperative modules: an Observation-Adaptive Forecaster (OAF) for future prediction and a Temporal Backfilling Module (TBM) for explicit reconstruction of the past. OAF is built on progressive knowledge distillation (PKD), which transfers motion pattern knowledge from long-horizon "teachers" to short-horizon "students" via hierarchical feature regression, enabling short observations to recover richer motion context. We further introduce a cosine-annealed distillation weighting scheme to balance forecasting supervision and feature alignment, improving optimization stability and cross-length consistency. For extremely short histories where implicit alignment is insufficient, TBM backfills missing historical segments conditioned on scene evolution, producing context-rich trajectories that strengthen PKD and thereby improve OAF. We employ a decoupled pretrain-reconstruct-finetune protocol to preserve real-motion priors while adapting to backfilled inputs. Extensive experiments on Argoverse 1 and Argoverse 2 show that TaPD consistently outperforms strong baselines across all observation lengths, delivers especially large gains under very short inputs, and improves other predictors (e.g., HiVT) in a plug-and-play manner. Code will be available at https://github.com/zhouhao94/TaPD.
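To make the cosine-annealed distillation weighting concrete, the sketch below shows one standard way such a schedule can be combined with forecasting and feature-alignment losses. This is a minimal illustration, not the paper's implementation: the function names, the `w_max`/`w_min` endpoints, and the assumption that the distillation weight decays over training are all placeholders.

```python
import math

def cosine_annealed_weight(step: int, total_steps: int,
                           w_max: float = 1.0, w_min: float = 0.0) -> float:
    """Cosine-annealed weight: starts at w_max and smoothly decays to w_min.

    Hypothetical schedule; the paper's actual endpoints and direction
    (decay vs. warm-up) may differ.
    """
    progress = min(step / total_steps, 1.0)
    return w_min + 0.5 * (w_max - w_min) * (1.0 + math.cos(math.pi * progress))

def combined_loss(forecast_loss: float, distill_loss: float,
                  step: int, total_steps: int) -> float:
    """Total objective = forecasting supervision + annealed feature alignment."""
    w = cosine_annealed_weight(step, total_steps)
    return forecast_loss + w * distill_loss
```

Early in training the distillation term dominates (the student is pulled toward the long-horizon teacher's features); as the weight anneals toward zero, the forecasting loss takes over, which is one common way to balance the two objectives for stable optimization.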