🤖 AI Summary
Existing self-supervised point cloud representation learning is hindered by the scarcity of 3D data and the difficulty of leveraging large-scale generative priors. To address this, we propose PointSD, the first framework to transfer pre-trained 2D text-to-image diffusion models (e.g., Stable Diffusion) to 3D point cloud understanding. Our method replaces the original text encoder with a 3D point cloud encoder to establish a point-cloud-guided image denoising objective, and then trains a point cloud backbone via cross-modal feature alignment. This design transfers knowledge from massive 2D generative priors to 3D understanding tasks without training any 3D diffusion model. PointSD achieves state-of-the-art performance on downstream classification and segmentation benchmarks, significantly outperforming prior self-supervised approaches. Ablation studies confirm the efficacy of cross-modal transfer and validate the contributions of the architectural design and optimization strategy.
📝 Abstract
Diffusion-based models, widely used in text-to-image generation, have proven effective in 2D representation learning. Recently, this framework has been extended to 3D self-supervised learning by constructing a conditional point generator to enhance 3D representations. However, its performance remains constrained by the 3D diffusion model, which is trained on available 3D datasets of limited size. We hypothesize that the robust capabilities of text-to-image diffusion models, particularly Stable Diffusion (SD), which is trained on large-scale datasets, can help overcome these limitations. To investigate this hypothesis, we propose PointSD, a framework that leverages the SD model for 3D self-supervised learning. By replacing the SD model's text encoder with a 3D encoder, we train a point-to-image diffusion model that allows point clouds to guide the denoising of rendered noisy images. With the trained point-to-image diffusion model, we use noise-free images as input and point clouds as the condition to extract SD features. Next, we train a 3D backbone by aligning its features with these SD features, thereby facilitating direct semantic learning. Comprehensive experiments on downstream point cloud tasks and ablation studies demonstrate that the SD model can enhance point cloud self-supervised learning. Code is publicly available at https://github.com/wdttt/PointSD.
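The two-stage pipeline described above can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: random linear projections stand in for the SD U-Net, the 3D point cloud encoder that replaces the text encoder, and the 3D backbone, and all module names, dimensions, and the cosine alignment loss are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative assumptions, not the real SD/PointSD sizes).
D_POINT, D_COND, D_IMG, D_FEAT = 64, 32, 128, 32

# Random linear stand-ins for the real modules.
W_point = rng.normal(size=(D_POINT, D_COND)) * 0.1          # 3D encoder replacing SD's text encoder
W_unet = rng.normal(size=(D_IMG + D_COND, D_IMG)) * 0.1     # conditional denoiser ("U-Net")
W_feat = rng.normal(size=(D_IMG + D_COND, D_FEAT)) * 0.1    # frozen SD feature extractor
W_back = rng.normal(size=(D_POINT, D_FEAT)) * 0.1           # trainable 3D backbone

def point_condition(points):
    """Encode a point cloud into the conditioning slot formerly fed by the text encoder."""
    return points @ W_point

def denoise(noisy_img, cond):
    """Stage 1: predict the noise added to a rendered image, guided by the point cloud."""
    return np.concatenate([noisy_img, cond]) @ W_unet

def sd_features(clean_img, cond):
    """Stage 2: run a noise-free image plus the point condition through the frozen model."""
    return np.concatenate([clean_img, cond]) @ W_feat

def cosine_alignment_loss(f_3d, f_sd):
    """Stage 2: align 3D backbone features with the extracted SD features."""
    f_3d = f_3d / np.linalg.norm(f_3d)
    f_sd = f_sd / np.linalg.norm(f_sd)
    return 1.0 - float(f_3d @ f_sd)

# --- Stage 1: point-to-image denoising loss on one rendered view ---
points = rng.normal(size=D_POINT)   # pooled point-cloud features (toy)
image = rng.normal(size=D_IMG)      # rendered image features (toy)
noise = rng.normal(size=D_IMG)
noisy = image + noise

cond = point_condition(points)
pred_noise = denoise(noisy, cond)
stage1_loss = float(np.mean((pred_noise - noise) ** 2))

# --- Stage 2: align the trainable 3D backbone with frozen SD features ---
f_sd = sd_features(image, cond)     # condition and image through the frozen model
f_3d = points @ W_back              # trainable 3D backbone features
stage2_loss = cosine_alignment_loss(f_3d, f_sd)

print("stage1 (denoising) loss:", round(stage1_loss, 3))
print("stage2 (alignment) loss:", round(stage2_loss, 3))
```

In the sketch, gradients would flow into `W_point`/`W_unet` in stage 1 and only into `W_back` in stage 2, mirroring the paper's description of first training the point-to-image model and then the 3D backbone against its features.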