A Self-Conditioned Representation Guided Diffusion Model for Realistic Text-to-LiDAR Scene Generation

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Text-to-LiDAR scene generation faces two key challenges: (1) scarcity of text-LiDAR paired data leads to insufficient priors and blurry, over-smoothed outputs; (2) low-quality textual descriptions degrade controllability and fidelity. To address these, we propose T2LDM—a diffusion model guided by self-conditioned representations. Our contributions are threefold: (1) a self-conditioned reconstruction loss that imposes soft geometric constraints during training and decouples them during inference to enhance structural coherence; (2) the first controllable text-to-LiDAR benchmark, T2nuScenes, with quantitative metrics for controllability evaluation; (3) a directional position prior to mitigate street-scene distortions. T2LDM employs a frozen denoising network to learn a conditional encoder and adopts composable text prompting for content control. Experiments demonstrate state-of-the-art performance across unconditional and diverse conditional generation tasks, significantly improving geometric detail, realism, and text-LiDAR alignment accuracy.

📝 Abstract
Text-to-LiDAR generation can customize 3D data with rich structures and diverse scenes for downstream tasks. However, the scarcity of text-LiDAR pairs often leaves insufficient training priors, producing overly smooth 3D scenes. Moreover, low-quality text descriptions can degrade generation quality and controllability. In this paper, we propose T2LDM, a Text-to-LiDAR Diffusion Model for scene generation with Self-Conditioned Representation Guidance (SCRG). Specifically, SCRG aligns to real representations to provide soft supervision with reconstruction details for the Denoising Network (DN) during training, and is decoupled at inference. In this way, T2LDM can perceive rich geometric structures from the data distribution, generating detailed objects in scenes. Meanwhile, we construct a content-composable text-LiDAR benchmark, T2nuScenes, along with a controllability metric. Based on this, we analyze the effects of different text prompts on LiDAR generation quality and controllability, providing practical prompt paradigms and insights. Furthermore, a directional position prior is designed to mitigate street distortion, further improving scene fidelity. Additionally, by learning a conditional encoder via the frozen DN, T2LDM supports multiple conditional tasks, including Sparse-to-Dense, Dense-to-Sparse, and Semantic-to-LiDAR generation. Extensive experiments on unconditional and conditional generation demonstrate that T2LDM outperforms existing methods, achieving state-of-the-art scene generation.
Problem

Research questions and friction points this paper is trying to address.

Generating realistic 3D LiDAR scenes from text descriptions with insufficient training data
Addressing low-quality text descriptions that degrade generation quality and controllability
Mitigating street distortion and supporting multiple conditional generation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Conditioned Representation Guidance (SCRG): soft supervision from real representations during training, decoupled at inference
Directional position prior to mitigate street distortion
Conditional encoder learned via the frozen denoising network, enabling Sparse-to-Dense, Dense-to-Sparse, and Semantic-to-LiDAR generation
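The paper does not publish code, but the SCRG idea above — a denoising objective plus a soft representation-alignment term that is used only during training and dropped at inference — can be illustrated with a minimal numpy sketch. All names (`scrg_training_loss`, the weighting `lam`) are hypothetical and not taken from the paper; this is a rough sketch of the general technique, not the authors' implementation.

```python
import numpy as np

def scrg_training_loss(pred_noise, true_noise,
                       denoiser_feats, real_feats, lam=0.5):
    """Hypothetical SCRG-style training objective:
    standard denoising MSE plus a soft alignment term that pulls the
    denoiser's intermediate features toward representations of the
    real (clean) scene. At inference only the denoiser is used, so
    the alignment branch adds no sampling cost."""
    # usual diffusion noise-prediction loss
    denoise_loss = np.mean((pred_noise - true_noise) ** 2)
    # soft supervision: real_feats act as fixed targets (no gradient
    # would flow into them in an autodiff framework)
    align_loss = np.mean((denoiser_feats - real_feats) ** 2)
    return denoise_loss + lam * align_loss
```

With `lam=0.0` this reduces to the plain denoising loss, which mirrors the "decoupled in inference" property: the alignment term only shapes training.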