🤖 AI Summary
Existing methods struggle to simultaneously achieve multi-view geometric consistency and scalability when generating large-scale outdoor driving scenes. This work proposes a 3D generative framework based on Σ-Voxfield voxel grids, which for the first time integrates semantic-guided diffusion models with a discrete Σ-Voxfield representation. By modeling spatial structure through semantic-conditioned diffusion and 3D positional encoding within local voxel neighborhoods, the approach enables progressive spatial outpainting to extend scenes to city-scale dimensions. High-fidelity images are then produced via deferred rendering. The method eliminates the need for per-scene optimization and supports efficient, consistent, and diverse scene generation under arbitrary camera trajectories and multi-sensor configurations, substantially reducing computational overhead.
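The summary's core data structure, a sparse voxel grid where each occupied voxel stores a fixed number of colorized surface samples, can be sketched as follows. This is a minimal illustrative sketch: all names (`SigmaVoxfield`, `SurfaceSample`, the per-voxel budget `samples_per_voxel`) are assumptions for exposition, not the paper's actual API.

```python
# Hypothetical sketch of a Σ-Voxfield-style representation: a sparse
# voxel grid in which each occupied voxel holds at most K colorized
# surface samples. Names and the capping policy are assumptions.
from dataclasses import dataclass, field


@dataclass
class SurfaceSample:
    offset: tuple  # (dx, dy, dz) position inside the voxel, each in [0, 1)
    color: tuple   # (r, g, b)


@dataclass
class SigmaVoxfield:
    voxel_size: float
    samples_per_voxel: int = 4  # fixed sample budget K per occupied voxel
    voxels: dict = field(default_factory=dict)  # (i, j, k) -> list[SurfaceSample]

    def insert(self, point, color):
        """Bucket a colorized surface point into its voxel, keeping at most K samples."""
        key = tuple(int(c // self.voxel_size) for c in point)
        bucket = self.voxels.setdefault(key, [])
        if len(bucket) < self.samples_per_voxel:
            offset = tuple((c % self.voxel_size) / self.voxel_size for c in point)
            bucket.append(SurfaceSample(offset, color))

    def occupied(self):
        """Return the integer indices of all occupied voxels."""
        return list(self.voxels.keys())
```

The fixed per-voxel sample count is what makes the representation a regular, discrete tensor that a diffusion model can operate on, in contrast to unbounded point clouds.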
📝 Abstract
Scalable generation of outdoor driving scenes requires 3D representations that remain consistent across multiple viewpoints and scale to large areas. Existing solutions either rely on image or video generative models distilled to 3D space, harming geometric coherence and restricting rendering to training views, or are limited to small-scale 3D scenes or object-centric generation. In this work, we propose a 3D generative framework based on a $\Sigma$-Voxfield grid, a discrete representation where each occupied voxel stores a fixed number of colorized surface samples. To generate this representation, we train a semantic-conditioned diffusion model that operates on local voxel neighborhoods and uses 3D positional encodings to capture spatial structure. We scale to large scenes via progressive spatial outpainting over overlapping regions. Finally, we render the generated $\Sigma$-Voxfield grid with a deferred rendering module to obtain photorealistic images, enabling large-scale multiview-consistent 3D scene generation without per-scene optimization. Extensive experiments show that our approach can generate diverse large-scale urban outdoor scenes, renderable into photorealistic images with various sensor configurations and camera trajectories, while maintaining moderate computational cost compared to existing approaches.
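The progressive spatial outpainting step can be illustrated with a 1D sliding-window schedule. This is a sketch under stated assumptions: the window size, overlap fraction, and the `generate_scene` driver are illustrative, and the placeholder loop body stands in for the semantic-conditioned diffusion model, which would condition each new window on the already-generated overlap.

```python
# Sketch of progressive spatial outpainting over overlapping regions.
# Window/overlap sizes are illustrative assumptions; the real system
# would run a diffusion model per window, conditioned on the overlap.
def outpainting_windows(extent, window, overlap):
    """Yield window start positions covering [0, extent) so that each
    new window overlaps previously generated content by `overlap` cells."""
    stride = window - overlap
    starts, s = [], 0
    while s + window < extent:
        starts.append(s)
        s += stride
    starts.append(max(extent - window, 0))  # final window flush to the edge
    return starts


def generate_scene(extent, window=64, overlap=16):
    generated = set()  # stand-in for generated voxel content
    for s in outpainting_windows(extent, window, overlap):
        region = range(s, s + window)
        known = [x for x in region if x in generated]      # conditioning context
        new = [x for x in region if x not in generated]    # cells to outpaint
        # a real implementation would denoise `new` conditioned on `known`;
        # here we simply mark the cells as generated
        generated.update(new)
    return generated
```

Because each window is conditioned on the overlap with its predecessors, the scene can grow without bound while remaining locally consistent, which is what allows the approach to reach city-scale extents.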