MiLA: Multi-view Intensive-fidelity Long-term Video Generation World Model for Autonomous Driving

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the scarcity of high-quality long-sequence training videos for autonomous driving and the inability of existing world models to generate minute-long, dynamically consistent videos, this paper introduces a multi-view, high-fidelity world model tailored for autonomous driving. Methodologically, the authors propose a novel "Coarse-to-Re(fine)" two-stage generation paradigm; design a temporally progressive denoising scheduler and a joint denoising and optical-flow correction module to mitigate long-range error accumulation; and develop a diffusion-based multi-view video generation framework integrating spatiotemporal attention, optical-flow guidance, staged denoising, and multi-scale reconstruction. Evaluated on nuScenes, the model achieves state-of-the-art performance, surpassing prior methods in PSNR, SSIM, and LPIPS, while enabling synthesis of up to 60-second multi-view videos with high dynamic consistency, directly supporting end-to-end simulation-based training.
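The Coarse-to-Re(fine) idea described above can be sketched as a toy two-stage pipeline: a coarse stage first produces sparse, low-frame-rate anchor frames over the full horizon, and a refine stage then fills in the intermediate frames conditioned on those anchors. The functions, shapes, and the use of linear interpolation as a stand-in for the conditional refinement model are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def coarse_stage(num_anchors, frame_shape, rng):
    """Stage 1 (toy): generate sparse low-frame-rate anchor frames
    spanning the whole (e.g. minute-long) horizon."""
    return [rng.standard_normal(frame_shape) for _ in range(num_anchors)]

def refine_stage(anchors, factor):
    """Stage 2 (toy): densify the sequence by filling `factor` frames
    between consecutive anchors; linear interpolation stands in for the
    anchor-conditioned diffusion refinement."""
    frames = []
    for a, b in zip(anchors, anchors[1:]):
        for t in np.linspace(0.0, 1.0, factor, endpoint=False):
            frames.append((1 - t) * a + t * b)
    frames.append(anchors[-1])
    return frames

rng = np.random.default_rng(0)
anchors = coarse_stage(4, (2, 2), rng)   # 4 sparse anchor frames
dense = refine_stage(anchors, 3)         # 10 dense frames
```

Anchoring the long horizon with coarse frames first is what keeps errors from compounding frame-by-frame: each refined frame is conditioned on globally consistent anchors rather than only on its immediate predecessor.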

📝 Abstract
In recent years, data-driven techniques have greatly advanced autonomous driving systems, but the need for rare and diverse training data remains a challenge, requiring significant investment in equipment and labor. World models, which predict and generate future environmental states, offer a promising solution by synthesizing annotated video data for training. However, existing methods struggle to generate long, consistent videos without accumulating errors, especially in dynamic scenes. To address this, we propose MiLA, a novel framework for generating high-fidelity, long-duration videos up to one minute. MiLA utilizes a Coarse-to-Re(fine) approach to both stabilize video generation and correct distortion of dynamic objects. Additionally, we introduce a Temporal Progressive Denoising Scheduler and Joint Denoising and Correcting Flow modules to improve the quality of generated videos. Extensive experiments on the nuScenes dataset show that MiLA achieves state-of-the-art performance in video generation quality. For more information, visit the project website: https://github.com/xiaomi-mlab/mila.github.io.
Problem

Research questions and friction points this paper is trying to address.

High-quality, long-sequence driving videos for training are scarce and costly to collect.
Existing world models accumulate errors over long horizons, especially in dynamic scenes.
Dynamic objects become distorted in generated video, limiting fidelity without dedicated denoising and correction.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Coarse-to-Refine approach stabilizes video generation
Temporal Progressive Denoising Scheduler enhances quality
Joint Denoising and Correcting Flow reduces errors
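The Temporal Progressive Denoising Scheduler listed above can be illustrated with a toy per-frame noise schedule in which each frame begins denoising slightly later than the one before it, so earlier frames become clean first and can anchor the still-noisy later frames. The function name, the linear schedule, and the `lag_per_frame` parameter are illustrative assumptions, not the paper's exact scheduler.

```python
import numpy as np

def progressive_noise_levels(num_frames, num_steps, lag_per_frame=1):
    """Toy progressive schedule: entry [s, f] is the remaining noise
    fraction of frame f at denoising step s. Frame f starts denoising
    lag_per_frame * f steps after frame 0, so noise decreases earliest
    for the earliest frames."""
    levels = np.zeros((num_steps, num_frames))
    for f in range(num_frames):
        start = min(f * lag_per_frame, num_steps - 1)
        for s in range(num_steps):
            # linear decay from 1.0 (fully noisy) to 0.0 (clean)
            levels[s, f] = min(1.0, max(0.0, 1.0 - (s - start + 1) / (num_steps - start)))
    return levels

schedule = progressive_noise_levels(num_frames=3, num_steps=4)
```

At any step, later frames retain at least as much noise as earlier ones, which is the temporal ordering that lets clean early frames guide the denoising of their successors.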