TaPD: Temporal-adaptive Progressive Distillation for Observation-Adaptive Trajectory Forecasting in Autonomous Driving

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant performance degradation of existing trajectory prediction methods under variable or extremely short observation lengths—common in scenarios involving occlusion or limited perception. To this end, the authors propose the TaPD framework, which reconstructs missing historical trajectories via a temporal inpainting module and employs an observation-adaptive predictor for robust forecasting. The approach innovatively integrates progressive knowledge distillation, cosine-annealed distillation weight scheduling, and a decoupled three-stage training pipeline (pretraining–reconstruction–fine-tuning) to enhance prediction accuracy and cross-length consistency under scarce observations. Experiments demonstrate that TaPD consistently outperforms strong baselines on Argoverse 1 and 2, with particularly notable gains in ultra-short observation settings. Moreover, TaPD functions as a plug-and-play module that effectively boosts the performance of existing models such as HiVT.

📝 Abstract
Trajectory prediction is essential for autonomous driving, enabling vehicles to anticipate the motion of surrounding agents to support safe planning. However, most existing predictors assume fixed-length histories and suffer substantial performance degradation when observations are variable or extremely short in real-world settings (e.g., due to occlusion or a limited sensing range). We propose TaPD (Temporal-adaptive Progressive Distillation), a unified plug-and-play framework for observation-adaptive trajectory forecasting under variable history lengths. TaPD comprises two cooperative modules: an Observation-Adaptive Forecaster (OAF) for future prediction and a Temporal Backfilling Module (TBM) for explicit reconstruction of the past. OAF is built on progressive knowledge distillation (PKD), which transfers motion pattern knowledge from long-horizon "teachers" to short-horizon "students" via hierarchical feature regression, enabling short observations to recover richer motion context. We further introduce a cosine-annealed distillation weighting scheme to balance forecasting supervision and feature alignment, improving optimization stability and cross-length consistency. For extremely short histories where implicit alignment is insufficient, TBM backfills missing historical segments conditioned on scene evolution, producing context-rich trajectories that strengthen PKD and thereby improve OAF. We employ a decoupled pretrain-reconstruct-finetune protocol to preserve real-motion priors while adapting to backfilled inputs. Extensive experiments on Argoverse 1 and Argoverse 2 show that TaPD consistently outperforms strong baselines across all observation lengths, delivers especially large gains under very short inputs, and improves other predictors (e.g., HiVT) in a plug-and-play manner. Code will be available at https://github.com/zhouhao94/TaPD.
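The cosine-annealed distillation weighting described in the abstract can be sketched as follows. This is a minimal illustration under standard assumptions, not the paper's implementation: the function names, the annealing direction (distillation weight decaying from high to low so feature alignment dominates early and forecasting supervision dominates late), and the weight range are all hypothetical, since the abstract does not specify the exact parameterization.

```python
import math

def cosine_annealed_weight(step: int, total_steps: int,
                           w_max: float = 1.0, w_min: float = 0.0) -> float:
    """Hypothetical cosine schedule for the distillation-loss weight.

    Decays smoothly from w_max at step 0 to w_min at total_steps,
    following the usual cosine-annealing curve.
    """
    cosine = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    return w_min + (w_max - w_min) * cosine

def total_loss(forecast_loss: float, feat_align_loss: float,
               step: int, total_steps: int) -> float:
    """Combine forecasting supervision with the weighted feature-alignment
    (teacher-student regression) term, as the abstract describes."""
    lam = cosine_annealed_weight(step, total_steps)
    return forecast_loss + lam * feat_align_loss
```

Under this schedule the student is pulled strongly toward the long-horizon teacher's features early in training, with the pull relaxing as training proceeds; whether TaPD anneals in this direction or the reverse is not stated in the abstract.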
Problem

Research questions and friction points this paper is trying to address.

trajectory prediction
variable observation length
short history
autonomous driving
observation adaptivity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Temporal-adaptive Progressive Distillation
Observation-Adaptive Forecasting
Progressive Knowledge Distillation
Temporal Backfilling
Trajectory Prediction
Mingyu Fan
College of Information and Intelligent Science, Donghua University, Shanghai 201620, China
Yi Liu
Department of Computer Science, City University of Hong Kong
Security and Privacy · Federated Learning · AI Security
Hao Zhou
School of Computing and Information Technology, Great Bay Institute for Advanced Study/Great Bay University, Dongguan, China; and Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Deheng Qian
Unknown affiliation
Mohammad Haziq Khan
ViSiR, Reutlingen University, Alteburgstraße 150, 72762 Reutlingen, Germany
Matthias Raetsch
ViSiR, Reutlingen University, Alteburgstraße 150, 72762 Reutlingen, Germany