🤖 AI Summary
Text-to-LiDAR scene generation faces two key challenges: (1) scarcity of text-LiDAR paired data leads to insufficient priors and blurry, over-smoothed outputs; (2) low-quality textual descriptions degrade controllability and fidelity. To address these, we propose T2LDM, a diffusion model guided by self-conditioned representations. Our contributions are threefold: (1) a self-conditioned reconstruction loss that imposes soft geometric constraints during training and is decoupled during inference to enhance structural coherence; (2) the first controllable text-to-LiDAR benchmark, T2nuScenes, with quantitative metrics for controllability evaluation; (3) a directional position prior that mitigates street-scene distortion. T2LDM learns a conditional encoder through a frozen denoising network and adopts composable text prompting for content control. Experiments demonstrate state-of-the-art performance across unconditional and diverse conditional generation tasks, significantly improving geometric detail, realism, and text-LiDAR alignment accuracy.
📝 Abstract
Text-to-LiDAR generation can customize 3D data with rich structures and diverse scenes for downstream tasks. However, the scarcity of text-LiDAR pairs often leaves insufficient training priors, yielding overly smooth 3D scenes. Moreover, low-quality text descriptions may degrade generation quality and controllability. In this paper, we propose a Text-to-LiDAR Diffusion Model for scene generation, named T2LDM, with Self-Conditioned Representation Guidance (SCRG). Specifically, by aligning to real representations, SCRG provides soft supervision with reconstruction details for the Denoising Network (DN) during training, while being decoupled at inference. In this way, T2LDM can perceive rich geometric structures from the data distribution and generate detailed objects in scenes. Meanwhile, we construct a content-composable text-LiDAR benchmark, T2nuScenes, along with a controllability metric. Based on this, we analyze how different text prompts affect LiDAR generation quality and controllability, providing practical prompt paradigms and insights. Furthermore, a directional position prior is designed to mitigate street distortion, further improving scene fidelity. Additionally, by learning a conditional encoder through a frozen DN, T2LDM can support multiple conditional tasks, including Sparse-to-Dense, Dense-to-Sparse, and Semantic-to-LiDAR generation. Extensive experiments on unconditional and conditional generation demonstrate that T2LDM outperforms existing methods, achieving state-of-the-art scene generation.
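The SCRG described above amounts to an auxiliary alignment term on the denoiser's internal features that is active only during training and dropped at inference. A minimal NumPy sketch of that train/inference asymmetry follows; the network, feature shapes, and loss weight are all hypothetical stand-ins, not the paper's actual architecture:

```python
import numpy as np

def denoise(x_noisy, t, text_emb):
    """Stand-in denoising network: returns a prediction and its hidden features."""
    h = x_noisy * 0.9 + text_emb.mean() * 0.1  # hypothetical intermediate features
    pred = h - t * 0.01                        # hypothetical denoised output
    return pred, h

def scrg_loss(pred, target, feats, real_feats, w=0.1):
    """Denoising loss plus a soft alignment of features to real representations."""
    denoise_term = np.mean((pred - target) ** 2)
    align_term = np.mean((feats - real_feats) ** 2)  # the SCRG-style soft supervision
    return denoise_term + w * align_term

x, target = np.ones((4, 4)), np.zeros((4, 4))
text_emb = np.full((8,), 0.5)

# Training: both branches contribute to the loss.
pred, feats = denoise(x, t=1.0, text_emb=text_emb)
real_feats = np.zeros_like(feats)  # hypothetical encoding of the real scene
loss_train = scrg_loss(pred, target, feats, real_feats)

# Inference: the guidance branch is decoupled; only the prediction is used.
pred_only, _ = denoise(x, t=1.0, text_emb=text_emb)
```

The point of the sketch is the structural one made in the abstract: the alignment term shapes the features only while training, so sampling incurs no extra cost.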