🤖 AI Summary
Existing text-to-3D methods rely on Score Distillation Sampling (SDS) and suffer from optimization instability, excessive geometric smoothing, and texture distortion caused by scores oscillating between multiple modes. To address these issues, we propose Image-guided Score Distillation (ISD), a novel framework featuring an IP-Adapter with a dual role: (i) a mode-selection module steered by a reference image prompt, and (ii) a variance-reducing control variate when no image prompt is provided. We further design a lightweight, differentiable ISD loss that explicitly guides optimization toward high-fidelity geometry and texture. Evaluated on T3Bench, our method achieves significant improvements in geometric detail, texture consistency, and visual fidelity, while converging more stably and quickly. Both qualitative and quantitative results surpass current state-of-the-art text-to-3D approaches.
📝 Abstract
Existing Score Distillation Sampling (SDS)-based methods have driven significant progress in text-to-3D generation. However, 3D models produced by these methods tend to exhibit over-smoothed geometry and low-quality textures. These issues arise from the mode-seeking behavior of current methods, where the scores used to update the model oscillate between multiple modes, resulting in unstable optimization and diminished output quality. To address this problem, we introduce a novel image prompt score distillation loss, named ISD, which employs a reference image to direct text-to-3D optimization toward a specific mode. The ISD loss can be implemented with IP-Adapter, a lightweight adapter that adds image prompt capability to a text-to-image diffusion model, acting as a mode-selection module. When not prompted with a reference image, a variant of this adapter serves as an efficient control variate that reduces the variance of score estimates, improving both output quality and optimization stability. Experiments on the T3Bench benchmark suite demonstrate that the ISD loss consistently produces visually coherent, high-quality outputs and converges faster than prior text-to-3D methods, in both qualitative and quantitative evaluations.
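The control-variate idea in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's exact formulation: the function name, the weighting `w_t`, and the choice of subtracting the no-prompt prediction in place of the sampled noise are all assumptions made for exposition.

```python
import numpy as np

def isd_score_residual(eps_image_prompted, eps_no_prompt, eps_sampled, w_t=1.0):
    """Illustrative ISD-style residual for one denoising step (assumed form).

    eps_image_prompted : noise prediction from the diffusion model with the
                         IP-Adapter conditioned on a reference image
                         (mode-selection role).
    eps_no_prompt      : prediction from the adapter variant without an image
                         prompt, used here as a control variate.
    eps_sampled        : the Gaussian noise added to the rendered view
                         (what vanilla SDS would subtract).
    """
    # Vanilla SDS uses w_t * (eps_image_prompted - eps_sampled); eps_sampled is
    # high-variance. Substituting a correlated, learned prediction as the
    # baseline is the classic control-variate trick for variance reduction.
    return w_t * (eps_image_prompted - eps_no_prompt)

# Toy check: when both predictions agree, the residual (and hence the
# distillation gradient) vanishes instead of chasing raw noise.
rng = np.random.default_rng(0)
eps = rng.standard_normal((4, 4))
residual = isd_score_residual(eps, eps, rng.standard_normal((4, 4)))
```

In practice both predictions would come from the same UNet forward pass with different adapter conditioning, so the baseline is strongly correlated with the prompted score, which is exactly what makes a control variate effective.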