UniMLVG: Unified Framework for Multi-view Long Video Generation with Comprehensive Control Capabilities for Autonomous Driving

📅 2024-12-06
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of generating long-sequence, multi-view (front/rear/left/right 360° surround-view) driving videos for autonomous driving simulation, in particular poor inter-frame and inter-view consistency, this paper proposes the first three-stage collaborative training framework. It unifies single- and multi-view video generation; introduces explicit view embeddings and joint spatiotemporal-view modeling to enhance motion and geometric consistency; and supports multi-format conditioning (text, images, videos, 3D bounding boxes, and frame-wise textual descriptions) for fine-grained control. Experiments show that the method outperforms state-of-the-art approaches by 21.4% on FID and 36.5% on FVD, significantly improving the fidelity, diversity, and cross-view consistency of the generated videos.

📝 Abstract
The creation of diverse and realistic driving scenarios has become essential to enhance the perception and planning capabilities of autonomous driving systems. However, generating long-duration, surround-view-consistent driving videos remains a significant challenge. To address this, we present UniMLVG, a unified framework designed to generate extended street multi-perspective videos under precise control. By integrating single- and multi-view driving videos into the training data, our approach updates cross-frame and cross-view modules across three stages with different training objectives, substantially boosting the diversity and quality of the generated visual content. Additionally, we employ explicit viewpoint modeling in multi-view video generation to effectively improve motion transition consistency. Capable of handling various input reference formats (e.g., text, images, or video), UniMLVG generates high-quality multi-view videos according to the corresponding condition constraints, such as 3D bounding boxes or frame-level text descriptions. Compared to the best models with similar capabilities, our framework achieves improvements of 21.4% in FID and 36.5% in FVD.
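The abstract's "explicit viewpoint modeling" suggests conditioning the generator on which camera each token stream belongs to. A minimal sketch of that idea, assuming a learnable per-view embedding added to the backbone's token features (the class, shapes, and camera count here are illustrative assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn


class ViewEmbedding(nn.Module):
    """Hypothetical sketch: add a learnable embedding per camera view
    to token features, so the backbone can distinguish views."""

    def __init__(self, num_views: int = 6, dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(num_views, dim)

    def forward(self, tokens: torch.Tensor, view_ids: torch.Tensor) -> torch.Tensor:
        # tokens:   (batch, views, frames, seq_len, dim)
        # view_ids: (views,) integer camera indices
        v = self.embed(view_ids)                    # (views, dim)
        # Broadcast the view embedding over batch, frames, and sequence.
        return tokens + v[None, :, None, None, :]


# Toy usage: 6 surround-view cameras, 8 frames, 16 tokens per frame.
tokens = torch.randn(2, 6, 8, 16, 256)
view_ids = torch.arange(6)
out = ViewEmbedding()(tokens, view_ids)
print(out.shape)  # torch.Size([2, 6, 8, 16, 256])
```

The additive embedding leaves tensor shapes unchanged, so a sketch like this could in principle be slotted into an existing single-view backbone without altering its interfaces.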
Problem

Research questions and friction points this paper is trying to address.

Autonomous Driving
Multi-Angle Driving Videos
Adaptive Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

UniMLVG
Multi-Angle High-Fidelity Driving Videos
Advanced Object Motion Simulation
Rui Chen
School of Automation, Southeast University, China
Zehuan Wu
SenseTime Research
Yichen Liu
SenseTime Research
Yuxin Guo
SenseTime Research
Jingcheng Ni
SenseTime Research
Haifeng Xia
Tulane University
Siyu Xia
School of Automation, Southeast University, China