🤖 AI Summary
This work addresses the limitations of existing text-to-3D generation methods, which lack effective mechanisms for aligning with human preferences and rely heavily on scarce 3D-annotated data. The authors propose Preference Score Distillation (PSD), a novel framework that, for the first time, integrates human preference alignment with Classifier-Free Guidance (CFG) theory. PSD leverages a pre-trained 2D reward model to achieve alignment without requiring any 3D preference data. By introducing implicit reward modeling and an adaptive negative text embedding optimization strategy, the method circumvents the incompatibility of 2D reward gradients in 3D space. Experiments demonstrate that PSD significantly outperforms current approaches in aesthetic quality and seamlessly integrates into various score distillation pipelines, exhibiting strong generalization and scalability.
📝 Abstract
Human preference alignment presents a critical yet underexplored challenge for diffusion models in text-to-3D generation. Existing solutions typically require task-specific fine-tuning, posing significant hurdles in data-scarce 3D domains. To address this, we propose Preference Score Distillation (PSD), an optimization-based framework that leverages pretrained 2D reward models for human-aligned text-to-3D synthesis without 3D training data. Our key insight stems from the incompatibility of pixel-level gradients: because reward models never encounter noisy samples during training, directly applying 2D reward gradients disturbs the denoising process. Noticing that a similar issue arises with naive classifier guidance in conditional diffusion models, we fundamentally rethink preference alignment as a classifier-free guidance (CFG)-style mechanism through our implicit reward model. Furthermore, recognizing that frozen pretrained diffusion models constrain performance, we introduce an adaptive strategy that co-optimizes preference scores and negative text embeddings. By incorporating CFG during optimization, online refinement of the negative text embeddings dynamically enhances alignment. To our knowledge, we are the first to bridge human preference alignment with CFG theory under the score distillation framework. Experiments demonstrate PSD's superiority on aesthetic metrics, its seamless integration with diverse pipelines, and its strong extensibility.
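The CFG-style mechanism the abstract alludes to can be sketched numerically. Below is a minimal sketch, assuming the usual unconditional branch is replaced by a noise prediction from a learnable negative embedding; the function names, the toy random stand-ins, and the simple residual weighting are illustrative only, not the paper's exact formulation of the implicit reward or the embedding update:

```python
import numpy as np

def cfg_score(eps_cond, eps_neg, guidance_scale):
    """Classifier-free-guidance combination of noise predictions.

    eps_cond: prediction conditioned on the text prompt.
    eps_neg:  prediction from the (here, learnable) negative embedding.
    Extrapolates from the negative prediction toward the conditional one.
    """
    return eps_neg + guidance_scale * (eps_cond - eps_neg)

def distillation_grad(eps_guided, eps_sampled, weight):
    """Score-distillation-style pixel gradient: a weighted residual between
    the guided prediction and the noise actually added to the rendering."""
    return weight * (eps_guided - eps_sampled)

# Toy usage with random stand-ins for the diffusion model's outputs.
rng = np.random.default_rng(0)
eps_cond = rng.standard_normal((4, 4))
eps_neg = rng.standard_normal((4, 4))
eps_sampled = rng.standard_normal((4, 4))

guided = cfg_score(eps_cond, eps_neg, guidance_scale=7.5)
grad = distillation_grad(guided, eps_sampled, weight=0.5)
```

In this view, making the negative embedding trainable (rather than a fixed empty prompt) is what lets the guidance direction itself adapt online during optimization.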