🤖 AI Summary
Existing large-scale 3D semantic scene generation methods rely predominantly on voxel representations, which suffer from high memory consumption, fixed resolution, and limited editability. To address these limitations, PrITTI is a latent diffusion framework that generates compositional, controllable, and editable 3D semantic scene layouts from vectorized primitives. It adopts a hybrid representation: ground surfaces are modeled in rasterized form, while objects are encoded as parameterized 3D primitives (e.g., ellipsoids), yielding a structured latent space that allows ground and object components to be manipulated separately. A stable Cholesky-based parameterization jointly encodes object size and orientation, resolving the orientation ambiguities of conventional encodings. Evaluated on KITTI-360, PrITTI outperforms a voxel-based baseline in generation quality, reduces memory requirements by up to 3×, and supports instance-level editing, scene inpainting, outpainting, and photorealistic street-view synthesis.
📝 Abstract
Large-scale 3D semantic scene generation has predominantly relied on voxel-based representations, which are memory-intensive, bound by fixed resolutions, and challenging to edit. In contrast, primitives represent semantic entities using compact, coarse 3D structures that are easy to manipulate and compose, making them an ideal representation for this task. In this paper, we introduce PrITTI, a latent diffusion-based framework that leverages primitives as the main foundational elements for generating compositional, controllable, and editable 3D semantic scene layouts. Our method adopts a hybrid representation, modeling ground surfaces in a rasterized format while encoding objects as vectorized 3D primitives. This decomposition is also reflected in a structured latent representation that enables flexible scene manipulation of ground and object components. To overcome the orientation ambiguities in conventional encoding methods, we introduce a stable Cholesky-based parameterization that jointly encodes object size and orientation. Experiments on the KITTI-360 dataset show that PrITTI outperforms a voxel-based baseline in generation quality, while reducing memory requirements by up to $3\times$. In addition, PrITTI enables direct instance-level manipulation of objects in the scene and supports a range of downstream applications, including scene inpainting, outpainting, and photo-realistic street-view synthesis.
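To give intuition for the Cholesky-based parameterization mentioned in the abstract, the sketch below shows the general idea in NumPy: an ellipsoid's size and orientation can be folded into a single symmetric positive-definite matrix $M = R\,S^2 R^\top$, whose unique lower-triangular Cholesky factor $L$ (with positive diagonal) serves as an ambiguity-free encoding, since separately encoding $R$ suffers from sign/axis-flip ambiguities. This is an illustrative reconstruction of the concept, not the paper's actual implementation; all function names here are hypothetical.

```python
import numpy as np

def rotation_z(theta):
    """3D rotation about the z-axis (illustrative orientation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def encode(R, scales):
    """Encode orientation R and per-axis scales as the 6 free entries
    of the lower-triangular Cholesky factor L of M = R S^2 R^T."""
    M = R @ np.diag(scales**2) @ R.T
    L = np.linalg.cholesky(M)
    # pack: log of the (positive) diagonal + 3 strictly-lower entries
    return np.concatenate([np.log(np.diag(L)), L[np.tril_indices(3, -1)]])

def decode(params):
    """Invert encode(): rebuild L, form M = L L^T, then recover axes
    and scales from the eigendecomposition of M."""
    L = np.zeros((3, 3))
    L[np.diag_indices(3)] = np.exp(params[:3])
    L[np.tril_indices(3, -1)] = params[3:]
    M = L @ L.T
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs, np.sqrt(eigvals)

R, scales = rotation_z(0.3), np.array([2.0, 1.0, 0.5])
M = R @ np.diag(scales**2) @ R.T
axes, rec_scales = decode(encode(R, scales))
# The ellipsoid M is reconstructed exactly, even though R itself is
# only recoverable up to axis ordering and sign flips.
assert np.allclose(axes @ np.diag(rec_scales**2) @ axes.T, M)
```

The key property is that the map from $L$ to $M$ is a bijection on symmetric positive-definite matrices, so the 6-vector of Cholesky entries is a stable, unambiguous target for a generative model.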