🤖 AI Summary
Human motion prediction (HMP) suffers from two key challenges: (1) an imbalance between short-term and long-term forecasting accuracy, and (2) ineffective integration of historical predictions into subsequent forecasting stages. To address these, we propose Temporal Continual Learning (TCL), a novel framework featuring multi-stage progressive training and explicit temporal dependency modeling via Prior Compensation Factors (PCFs). TCL dynamically injects prior predictions, generated in earlier stages, as compensated inputs to later stages, enabling coherent propagation of predictive knowledge across time horizons. We further derive a theoretically consistent optimization objective that jointly optimizes forecasts at diverse time steps. TCL is architecture-agnostic and integrates seamlessly with mainstream HMP backbones. Extensive experiments on four standard benchmarks demonstrate significant and consistent improvements in prediction accuracy, validating its strong cross-model and cross-dataset generalizability. The implementation is publicly available.
📝 Abstract
Human Motion Prediction (HMP) aims to forecast future poses at different moments from past motion sequences. Previous approaches treat the predictions at all moments equally, which leads to two main limitations: learning short-term predictions is hindered by the focus on long-term predictions, and prior information from earlier predictions is poorly incorporated into subsequent predictions. In this paper, we introduce a novel multi-stage training framework called Temporal Continual Learning (TCL) to address these challenges. To better preserve prior information, we introduce the Prior Compensation Factor (PCF) and incorporate it into model training to compensate for the lost prior information. Furthermore, we theoretically derive a more reasonable optimization objective. Notably, the TCL framework can be easily integrated with different HMP backbone models and adapted to various datasets and applications. Extensive experiments on four HMP benchmark datasets demonstrate the effectiveness and flexibility of TCL. The code is available at https://github.com/hyqlat/TCL.
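The multi-stage idea above can be sketched in miniature. This is a toy illustration, not the paper's implementation: the one-step linear predictor, the additive per-stage compensation factor `pcf`, and the function names are all invented here for clarity; in TCL the backbone is a full HMP model and the PCF is learned during training.

```python
def predict_stage(history, weight):
    """Toy one-step predictor: next pose = weight * last pose.
    Stands in for an arbitrary HMP backbone."""
    return weight * history[-1]

def multi_stage_forecast(history, stage_weights, pcf):
    """Progressive forecasting: each stage consumes the previous
    stage's prediction, compensated by a per-stage prior factor.
    `pcf[k]` is an illustrative additive correction standing in for
    the learned Prior Compensation Factor of stage k."""
    preds = []
    seq = list(history)
    for k, w in enumerate(stage_weights):
        raw = predict_stage(seq, w)
        compensated = raw + pcf[k]   # compensate for lost prior information
        preds.append(compensated)
        seq.append(compensated)      # feed the compensated output forward
    return preds

# Two stages over a short pose history (poses reduced to scalars here).
history = [1.0, 1.1, 1.2]
preds = multi_stage_forecast(history, stage_weights=[1.0, 1.0], pcf=[0.1, 0.1])
```

The point of the sketch is the data flow: later stages never see raw earlier predictions, only compensated ones, which is what lets the training objective couple the stages.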