HMPDM: A Diffusion Model for Driving Video Prediction with Historical Motion Priors

πŸ“… 2026-03-28
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing video prediction methods for driving scenes often suffer from temporal inconsistency and degraded visual quality due to their reliance on multi-stage training, which struggles to capture complex motion dynamics. To address this, this work proposes a diffusion-based framework that integrates historical motion priors through an implicit, multi-scale injection mechanism. The model incorporates a temporally aware latent conditioning module, a motion-aware pyramid encoder, and a self-conditioned denoising strategy to effectively encode and propagate motion information across time. Evaluated under monocular RGB input settings on the Cityscapes and KITTI benchmarks, the proposed method substantially outperforms current state-of-the-art approaches, achieving a 28.2% improvement in FrΓ©chet Video Distance (FVD) on Cityscapes, thereby demonstrating superior motion modeling fidelity and temporal coherence.
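FrΓ©chet Video Distance (FVD), the metric behind the 28.2% figure above, is the FrΓ©chet distance between Gaussian statistics of deep video features (I3D features in the standard benchmark). A minimal numpy sketch of the distance itself, simplified to diagonal covariances (the function name and the diagonal assumption are illustrative, not from the paper's code; real FVD uses full covariance matrices):

```python
import numpy as np

def frechet_distance_diag(feats_a, feats_b):
    """Frechet distance between two feature sets, assuming diagonal
    covariances (a simplification; FVD proper uses full covariances
    of I3D video features)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    var_a, var_b = feats_a.var(axis=0), feats_b.var(axis=0)
    # Squared distance between means plus a covariance mismatch term.
    mean_term = np.sum((mu_a - mu_b) ** 2)
    cov_term = np.sum(var_a + var_b - 2.0 * np.sqrt(var_a * var_b))
    return mean_term + cov_term
```

Identical feature distributions give a distance of zero; a pure mean shift of 1 per dimension contributes exactly the feature dimensionality.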
πŸ“ Abstract
Video prediction is a useful capability for autonomous driving, enabling intelligent vehicles to reliably anticipate how driving scenes will evolve and thereby supporting reasoning and safer planning. However, existing models are constrained by multi-stage training pipelines and remain insufficient at modeling the diverse motion patterns of real driving scenes, leading to degraded temporal consistency and visual quality. To address these challenges, this paper introduces the historical motion priors-informed diffusion model (HMPDM), a video prediction model that leverages historical motion priors to enhance motion understanding and temporal coherence. The proposed framework introduces three key designs: (i) a Temporal-aware Latent Conditioning (TaLC) module for implicit historical motion injection; (ii) a Motion-aware Pyramid Encoder (MaPE) for multi-scale motion representation; (iii) a Self-Conditioning (SC) strategy for stable iterative denoising. Extensive experiments on the Cityscapes and KITTI benchmarks demonstrate that HMPDM efficiently outperforms state-of-the-art video prediction methods, achieving a 28.2% improvement in FVD on Cityscapes under the same monocular RGB input setting. The implementation code is publicly available at https://github.com/KELISBU/HMPDM.
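The Self-Conditioning (SC) strategy named in the abstract follows the general idea of self-conditioned diffusion sampling: the model's previous estimate of the clean sample is fed back as an extra input at each denoising step, stabilizing the iteration. A toy numpy sketch of such a loop, with a hand-written "denoiser" standing in for the learned network (all names and the update rule are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def toy_denoiser(x_t, x_self_cond, t, T):
    # Hypothetical stand-in for the learned network: blends the current
    # noisy sample with the previous clean-sample estimate. A real model
    # would concatenate x_self_cond as an extra network input.
    w = t / T                          # earlier steps trust the estimate less
    return (1 - w) * x_t + w * x_self_cond

def sample_with_self_conditioning(shape, T=50, seed=0):
    rng = np.random.default_rng(seed)
    x_t = rng.standard_normal(shape)   # start from pure Gaussian noise
    x_est = np.zeros(shape)            # no previous estimate at the first step
    for t in range(T, 0, -1):
        x0_pred = toy_denoiser(x_t, x_est, t, T)  # predict the clean sample
        x_est = x0_pred                           # feed back next iteration
        # Crude deterministic update toward the predicted clean sample.
        alpha = (t - 1) / T
        x_t = alpha * x_t + (1 - alpha) * x0_pred
    return x_t
```

The key detail is that `x_est` persists across steps, so each denoising call is conditioned on the sampler's own earlier prediction rather than on the noisy input alone.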
Problem

Research questions and friction points this paper is trying to address.

video prediction
autonomous driving
motion patterns
temporal consistency
visual quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

diffusion model
historical motion priors
video prediction
temporal consistency
autonomous driving
Ke Li
Department of Civil Engineering, Stony Brook University, Stony Brook, NY 11794, USA
Tianjia Yang
Postdoctoral Scholar, Pennsylvania State University
Connected and Automated Vehicles, Machine Learning, Transit Signal Priority
Kaidi Liang
Department of Civil Engineering, Stony Brook University, Stony Brook, NY 11794, USA
Xianbiao Hu
Department of Civil and Environmental Engineering, Pennsylvania State University, University Park, PA 16802, USA
Ruwen Qin
Stony Brook University
Visual Perception and Cognition, Collective Intelligence