PerlDiff: Controllable Street View Synthesis Using Perspective-Layout Diffusion Models

πŸ“… 2024-07-08
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 2
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the high annotation cost and low accuracy of 3D street-scene image labeling in autonomous driving, this paper proposes PerLDiffβ€”the first object-level controllable generation method that explicitly embeds 3D perspective geometry into diffusion models. Unlike existing paradigms relying on external controllers (e.g., ControlNet or GLIGEN), PerLDiff achieves end-to-end geometrically consistent generation via three key innovations: (i) viewpoint-aware layout encoding, (ii) a 3D-geometry-guided conditional denoising U-Net, and (iii) multi-scale layout supervision. Evaluated on NuScenes and KITTI, PerLDiff reduces object localization error by 32% and improves layout fidelity by 19.6% over state-of-the-art methods. It further demonstrates superior robustness and fine-grained controllability in geometrically grounded scene generation.

Technology Category

Application Category

πŸ“ Abstract
Controllable generation is considered a potentially vital approach to address the challenge of annotating 3D data, and the precision of such controllable generation becomes particularly imperative in the context of data production for autonomous driving. Existing methods focus on the integration of diverse generative information into controlling inputs, utilizing frameworks such as GLIGEN or ControlNet, to produce commendable outcomes in controllable generation. However, such approaches intrinsically restrict generation performance to the learning capacities of predefined network architectures. In this paper, we explore the integration of controlling information and introduce PerLDiff ( extbf{Per}spective- extbf{L}ayout extbf{Diff}usion Models), a method for effective street view image generation that fully leverages perspective 3D geometric information. Our PerLDiff employs 3D geometric priors to guide the generation of street view images with precise object-level control within the network learning process, resulting in a more robust and controllable output. Moreover, it demonstrates superior controllability compared to alternative layout control methods. Empirical results justify that our PerLDiff markedly enhances the precision of generation on the NuScenes and KITTI datasets.
Problem

Research questions and friction points this paper is trying to address.

Enhance precision in controllable street view generation
Integrate 3D geometric priors for object-level control
Improve autonomous driving data production accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses perspective-layout diffusion models
Integrates 3D geometric priors
Enhances precision in controllable generation
πŸ”Ž Similar Papers
No similar papers found.