ModeDreamer: Mode Guiding Score Distillation for Text-to-3D Generation using Reference Image Prompts

📅 2024-11-27
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-to-3D methods rely on Score Distillation Sampling (SDS) and suffer from optimization instability, excessive geometric smoothing, and texture distortion caused by score oscillations between multiple modes. To address these issues, we propose Image-guided Score Distillation (ISD), a framework in which an IP-Adapter plays a dual role: (i) a mode-selection module steered by a reference image prompt, and (ii) a variance-reducing control variate when no image prompt is provided. The resulting ISD loss is lightweight and differentiable, and explicitly guides optimization toward high-fidelity geometry and texture. Evaluated on T3Bench, the method achieves significant improvements in geometric detail, texture consistency, and visual fidelity, while enabling more stable convergence and faster optimization; both qualitative and quantitative results surpass current state-of-the-art text-to-3D approaches.

📝 Abstract
Existing Score Distillation Sampling (SDS)-based methods have driven significant progress in text-to-3D generation. However, 3D models produced by SDS-based methods tend to exhibit over-smoothing and low-quality outputs. These issues arise from the mode-seeking behavior of current methods, where the scores used to update the model oscillate between multiple modes, resulting in unstable optimization and diminished output quality. To address this problem, we introduce a novel image-prompt score distillation loss named ISD, which employs a reference image to direct text-to-3D optimization toward a specific mode. Our ISD loss can be implemented by using IP-Adapter, a lightweight adapter that adds image-prompt capability to a text-to-image diffusion model, as a mode-selection module. A variant of this adapter, when not prompted by a reference image, can serve as an efficient control variate to reduce variance in score estimates, thereby improving both output quality and optimization stability. Our experiments on the T3Bench benchmark suite demonstrate, both qualitatively and quantitatively, that the ISD loss consistently achieves visually coherent, high-quality outputs and improves optimization speed compared to prior text-to-3D methods.
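The abstract's control-variate idea (subtracting an un-prompted adapter output from the image-prompted score to reduce variance) can be illustrated with a toy NumPy sketch. The noise model and all names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Toy illustration of a control variate in score estimation: an un-prompted
# estimate that is zero-mean but correlated with the image-prompted one can
# be subtracted without changing the expected update direction.
rng = np.random.default_rng(0)
n = 10_000

true_score = 1.0                      # stand-in for the "true" update direction
shared = rng.normal(0.0, 1.0, n)      # noise shared by both network evaluations
own = rng.normal(0.0, 0.1, n)         # noise unique to the prompted evaluation

# Image-prompted score estimate: unbiased but noisy.
prompted = true_score + shared + own

# Un-prompted variant: zero-mean and strongly correlated with the shared
# noise, so it can serve as a control variate.
unprompted = shared

plain = prompted                      # naive SDS-style estimate
cv = prompted - unprompted            # control-variate estimate: same mean, lower variance

print(plain.var(), cv.var())          # variance drops sharply in this toy setup
```

Both estimators target the same mean, but the control-variate version cancels the shared noise, which mirrors the stability gains the abstract attributes to the un-prompted adapter variant.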
Problem

Research questions and friction points this paper is trying to address.

Over-smoothing and low-quality outputs in text-to-3D generation.
Unstable optimization due to mode-seeking behavior in SDS-based methods.
Need for improved optimization stability and output quality in 3D models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces ISD loss for mode-specific text-to-3D optimization
Uses IP-Adapter for image prompt integration
Enhances output quality and optimization stability
Uy Dieu Tran
VinAI
3D generation · computer vision
Minh Luu
VinAI Research, Vietnam
P. Nguyen
VinAI Research, Vietnam
Khoi Nguyen
VinAI Research, Vietnam
Binh-Son Hua
Trinity College Dublin
Generative 3D AI · 3D Deep Learning · Computer Vision · Computer Graphics · Rendering