Genesis: Multimodal Driving Scene Generation with Spatio-Temporal and Cross-Modal Consistency

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses cross-modal spatio-temporal inconsistency in the joint generation of multi-view driving videos and LiDAR sequences. Methodologically, it proposes a two-stage framework built on three key components: (1) a collaborative architecture integrating a video diffusion transformer (DiT) and a BEV-aware LiDAR sequence generator; (2) a shared latent space that tightly couples visual and geometric representations; and (3) a DataCrafter module that leverages large vision-language models to produce scene-level and instance-level semantic captions, enabling fine-grained cross-modal alignment. Evaluated on nuScenes, the method achieves state-of-the-art performance: FVD = 16.95, FID = 4.24, and Chamfer distance = 0.611. Moreover, downstream 3D detection and segmentation tasks show significant gains, validating that the synthesized data preserves high-fidelity geometric structure and semantic consistency and is practically useful for autonomous driving perception.
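The Chamfer distance reported above is a standard point-cloud fidelity metric. A minimal sketch of one common convention (mean of squared nearest-neighbour distances, summed over both directions) is below; the paper may use a different normalization or non-squared variant, so this is illustrative only:

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer distance between point clouds p (N,3) and q (M,3).

    For each point in one cloud, take the squared distance to its nearest
    neighbour in the other cloud; average within each direction, then sum.
    """
    # Pairwise squared distances, shape (N, M)
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```

For identical clouds the distance is zero; lower values indicate that the generated LiDAR points lie closer to the reference scan.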

📝 Abstract
We present Genesis, a unified framework for joint generation of multi-view driving videos and LiDAR sequences with spatio-temporal and cross-modal consistency. Genesis employs a two-stage architecture that integrates a DiT-based video diffusion model with 3D-VAE encoding, and a BEV-aware LiDAR generator with NeRF-based rendering and adaptive sampling. Both modalities are directly coupled through a shared latent space, enabling coherent evolution across visual and geometric domains. To guide the generation with structured semantics, we introduce DataCrafter, a captioning module built on vision-language models that provides scene-level and instance-level supervision. Extensive experiments on the nuScenes benchmark demonstrate that Genesis achieves state-of-the-art performance across video and LiDAR metrics (FVD 16.95, FID 4.24, Chamfer 0.611), and benefits downstream tasks including segmentation and 3D detection, validating the semantic fidelity and practical utility of the generated data.
Problem

Research questions and friction points this paper is trying to address.

Generating multi-view driving videos with spatio-temporal consistency
Producing LiDAR sequences that remain coherent with the video modality
Ensuring the generated data is semantically faithful enough to benefit downstream tasks such as segmentation and 3D detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

DiT-based video diffusion with 3D-VAE encoding
BEV-aware LiDAR generator with NeRF rendering
Shared latent space for cross-modal consistency
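The shared-latent-space idea can be illustrated with a toy sketch: two modality-specific encoders project into a common latent space, and a consistency term penalizes disagreement between latents of the same scene. The linear encoders, dimensions, and cosine-distance loss here are illustrative assumptions, not the paper's implementation (which uses a 3D-VAE for video and a BEV-aware encoder for LiDAR):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy encoders: linear projections from each modality's
# feature space into a shared 16-dim latent space.
W_video = rng.normal(size=(64, 16))   # video features -> shared latent
W_lidar = rng.normal(size=(32, 16))   # LiDAR features -> shared latent

def encode(feats: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Project features into the shared latent space and L2-normalize."""
    z = feats @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def consistency_loss(z_a: np.ndarray, z_b: np.ndarray) -> float:
    """Mean cosine distance between paired latents (0 = perfectly aligned)."""
    return float(np.mean(1.0 - np.sum(z_a * z_b, axis=-1)))

video_feats = rng.normal(size=(4, 64))  # 4 paired scenes
lidar_feats = rng.normal(size=(4, 32))
loss = consistency_loss(encode(video_feats, W_video),
                        encode(lidar_feats, W_lidar))
```

Minimizing such a term drives the two modalities' latents toward coherent evolution, which is the role the shared latent space plays in the framework.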