🤖 AI Summary
Existing video prediction methods for driving scenes often suffer from temporal inconsistency and degraded visual quality due to their reliance on multi-stage training, which struggles to capture complex motion dynamics. To address this, this work proposes a diffusion-based framework that integrates historical motion priors through an implicit, multi-scale injection mechanism. The model incorporates a temporally aware latent conditioning module, a motion-aware pyramid encoder, and a self-conditioned denoising strategy to effectively encode and propagate motion information across time. Evaluated under monocular RGB input settings on the Cityscapes and KITTI benchmarks, the proposed method substantially outperforms current state-of-the-art approaches, achieving a 28.2% improvement in FrΓ©chet Video Distance (FVD) on Cityscapes, thereby demonstrating superior motion modeling fidelity and temporal coherence.
📝 Abstract
Video prediction is a key capability for autonomous driving, enabling intelligent vehicles to reliably anticipate how driving scenes will evolve and thereby supporting reasoning and safer planning. However, existing models are constrained by multi-stage training pipelines and remain insufficient in modeling the diverse motion patterns of real driving scenes, leading to degraded temporal consistency and visual quality. To address these challenges, this paper introduces the historical motion priors-informed diffusion model (HMPDM), a video prediction model that leverages historical motion priors to enhance motion understanding and temporal coherence. The proposed deep learning system introduces three key designs: (i) a Temporal-aware Latent Conditioning (TaLC) module for implicit historical motion injection; (ii) a Motion-aware Pyramid Encoder (MaPE) for multi-scale motion representation; (iii) a Self-Conditioning (SC) strategy for stable iterative denoising. Extensive experiments on the Cityscapes and KITTI benchmarks demonstrate that HMPDM efficiently outperforms state-of-the-art video prediction methods, achieving a 28.2% improvement in FVD (Fréchet Video Distance) on Cityscapes under the same monocular RGB input setting. The implementation code is publicly available at https://github.com/KELISBU/HMPDM.
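To make the Self-Conditioning (SC) idea concrete, the sketch below shows the general pattern used in self-conditioned diffusion sampling: at each denoising step, the network receives not only the current noisy frame but also its own clean-frame estimate from the previous step. This is a minimal, illustrative NumPy sketch of the generic technique, not HMPDM's actual architecture; the `model` interface, the blending schedule, and all names here are assumptions for illustration.

```python
import numpy as np

def denoise_step(x_t, x0_prev, t, model):
    """One self-conditioned denoising step.

    The network sees both the noisy input x_t and its own previous
    clean-frame estimate x0_prev (zeros at the first step), which
    stabilizes the iterative refinement.
    """
    x0_est = model(np.concatenate([x_t, x0_prev], axis=-1), t)
    # Illustrative update blending the noisy sample toward the current
    # clean estimate (a toy schedule, not the paper's).
    alpha = t / (t + 1.0)
    x_next = alpha * x_t + (1.0 - alpha) * x0_est
    return x_next, x0_est

def sample(shape, steps, model, rng):
    """Run the full reverse process starting from Gaussian noise."""
    x = rng.standard_normal(shape)   # pure noise at t = steps
    x0_prev = np.zeros(shape)        # no estimate yet at the first step
    for t in range(steps, 0, -1):
        x, x0_prev = denoise_step(x, x0_prev, t, model)
    return x
```

A real implementation would replace `model` with the trained denoising network; during training, the previous estimate is typically fed back with some probability so the network learns to work both with and without it.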