MagicDrive-V2: High-Resolution Long Video Generation for Autonomous Driving with Adaptive Control

📅 2024-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video generation methods based on DiT and 3D VAEs struggle with geometric control, text-scene decoupling, and long-horizon, multi-view spatiotemporal modeling, especially for controllable driving video synthesis. To address these limitations, the paper proposes MagicDrive-V2, a multi-view diffusion Transformer framework built on the MVDiT block, integrating spatiotemporal joint conditional encoding, 3D geometry-guided control, and context-aware textual description generation, trained via a progressive strategy on mixed video data. The method achieves fine-grained text-driven generation and cross-view coherent synthesis while preserving geometric consistency. Experiments demonstrate state-of-the-art results: generated videos reach 3.3× higher resolution and 4× more frames than the prior SOTA, improving generalizability and physical plausibility in autonomous driving simulation.

📝 Abstract
The rapid advancement of diffusion models has greatly improved video synthesis, especially in controllable video generation, which is vital for applications like autonomous driving. Although DiT with 3D VAE has become a standard framework for video generation, it introduces challenges in controllable driving video generation, especially for geometry control, rendering existing control methods ineffective. To address these issues, we propose MagicDrive-V2, a novel approach that integrates the MVDiT block and spatial-temporal conditional encoding to enable multi-view video generation and precise geometric control. Additionally, we introduce an efficient method for obtaining contextual descriptions for videos to support diverse textual control, along with a progressive training strategy using mixed video data to enhance training efficiency and generalizability. Consequently, MagicDrive-V2 enables multi-view driving video synthesis with $3.3\times$ resolution and $4\times$ frame count (compared to current SOTA), rich contextual control, and geometric controls. Extensive experiments demonstrate MagicDrive-V2's ability, unlocking broader applications in autonomous driving.
Problem

Research questions and friction points this paper is trying to address.

How to generate high-resolution, long-duration multi-view videos for autonomous driving.
How to retain precise geometric control when DiT with 3D VAE renders existing control methods ineffective.
How to obtain rich contextual descriptions for videos and train efficiently on mixed video data.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates the MVDiT block for multi-view video generation.
Uses spatial-temporal conditional encoding for precise geometric control.
Employs a progressive training strategy with mixed video data.
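The progressive training idea above (start small, then scale resolution and clip length on mixed data) can be sketched as a simple stage schedule. This is a hypothetical illustration, not the paper's implementation: the stage names, resolutions, frame counts, and step budgets below are all invented for clarity.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Stage:
    """One phase of a progressive (curriculum-style) training schedule."""
    name: str
    resolution: tuple  # (height, width) of training samples
    num_frames: int    # clip length; 1 means single images


# Hypothetical three-stage curriculum: images -> short clips -> long, higher-res videos.
STAGES = [
    Stage("images", (224, 400), 1),
    Stage("short_videos", (224, 400), 17),
    Stage("long_videos", (424, 800), 65),
]


def stage_for_step(step: int, steps_per_stage: int = 10_000) -> Stage:
    """Return the curriculum stage active at a given training step.

    Later stages simply never end: once past the last boundary,
    training stays on the final (longest, highest-resolution) stage.
    """
    idx = min(step // steps_per_stage, len(STAGES) - 1)
    return STAGES[idx]
```

In practice, the stage returned here would drive both the data loader (which mix of image and video data to sample) and the model's sequence length; the step-based switching is just one plausible way to realize the "progressive training with mixed video data" described above.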