🤖 AI Summary
To address the scarcity of high-quality long-sequence training videos for autonomous driving and the inability of existing world models to generate minute-long, dynamically consistent videos, this paper introduces MiLA, a multi-view, high-fidelity world model tailored for autonomous driving. Methodologically, the authors propose a novel “Coarse-to-Re(fine)” two-stage generation paradigm; design a temporally progressive denoising scheduler and a joint denoising and optical-flow correction module to mitigate long-range error accumulation; and develop a diffusion-based multi-view video generation framework integrating spatiotemporal attention, optical-flow guidance, staged denoising, and multi-scale reconstruction. Evaluated on nuScenes, the model achieves state-of-the-art performance, surpassing prior methods in PSNR, SSIM, and LPIPS, while enabling synthesis of multi-view videos up to 60 seconds long with high dynamic consistency, directly supporting end-to-end simulation-based training.
📝 Abstract
In recent years, data-driven techniques have greatly advanced autonomous driving systems, but the need for rare and diverse training data remains a challenge, requiring significant investment in equipment and labor. World models, which predict and generate future environmental states, offer a promising solution by synthesizing annotated video data for training. However, existing methods struggle to generate long, consistent videos without accumulating errors, especially in dynamic scenes. To address this, we propose MiLA, a novel framework for generating high-fidelity videos of up to one minute in length. MiLA utilizes a Coarse-to-Re(fine)) approach to both stabilize video generation and correct the distortion of dynamic objects. Additionally, we introduce a Temporal Progressive Denoising Scheduler and Joint Denoising and Correcting Flow modules to improve the quality of generated videos. Extensive experiments on the nuScenes dataset show that MiLA achieves state-of-the-art performance in video generation quality. For more information, visit the project website: https://github.com/xiaomi-mlab/mila.github.io.
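The core intuition behind a temporally progressive denoising scheduler is that frames close to the conditioning context should be denoised earlier, so they can serve as stable anchors for frames further in the future. The toy sketch below illustrates that idea only: the function name, the linear delay rule, and the noise-level parameterization are all hypothetical and are not taken from the MiLA paper.

```python
import numpy as np

def progressive_noise_schedule(num_frames: int, num_steps: int) -> np.ndarray:
    """Per-frame, per-step noise levels in [0, 1] (1 = pure noise, 0 = clean).

    Frames further in the future keep higher noise for longer, so nearby
    frames finish first and can anchor later ones. This is a toy
    illustration of a temporally progressive scheduler, not the paper's
    exact formulation.
    """
    levels = np.zeros((num_steps, num_frames))
    for t in range(num_frames):
        # Later frames start denoising with a delay proportional to t.
        delay = t / max(num_frames - 1, 1)           # in [0, 1]
        for s in range(num_steps):
            progress = s / max(num_steps - 1, 1)     # in [0, 1]
            # Shift this frame's progress back by its delay, renormalize,
            # and clip so it still reaches 1 at the final step.
            local = np.clip((progress - 0.5 * delay) / (1 - 0.5 * delay), 0, 1)
            levels[s, t] = 1.0 - local
    return levels

schedule = progressive_noise_schedule(num_frames=8, num_steps=10)
# At any intermediate step, earlier frames carry no more noise than later ones.
assert all(schedule[5, t] <= schedule[5, t + 1] + 1e-9 for t in range(7))
# Every frame is fully denoised by the final step.
assert np.allclose(schedule[-1], 0.0)
```

In a real diffusion sampler, such a schedule would replace the usual single shared timestep with a per-frame timestep, which is one way to curb the error accumulation that plagues long autoregressive rollouts.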