Noise Diffusion for Enhancing Semantic Faithfulness in Text-to-Image Synthesis

📅 2024-11-25
🏛️ Computer Vision and Pattern Recognition
📈 Citations: 3
Influential: 0
🤖 AI Summary
Text-to-image diffusion models suffer from imprecise semantic alignment between prompts and generated images. Existing methods such as InitNo optimize the initial noise via attention-map guidance, but are limited by the narrow information attention maps capture and by susceptibility to local optima. This paper proposes an LVLM-guided iterative optimization framework for the noise latent: it is the first to bring the linguistic understanding of large vision-language models (LVLMs) into noise-space optimization, establishes theoretically verifiable conditions under which a noise update improves semantic faithfulness, and reduces the strong dependence on the initial noise point, enabling more global semantic alignment. Leveraging diffusion-process reparameterization and a theoretical analysis of semantic fidelity, the method consistently improves CLIP-Score and human evaluation scores across multiple mainstream diffusion models, significantly enhancing fine-grained semantic matching without modifying model architectures or requiring retraining.
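The distribution-consistency property mentioned above can be illustrated with a small sketch. A standard reparameterization trick (an illustrative form; the paper's exact update rule may differ) blends the current noisy latent with fresh Gaussian noise, `z' = sqrt(1 - γ)·z + sqrt(γ)·ε`, which keeps the updated latent standard normal and therefore on the diffusion model's training distribution:

```python
import numpy as np

def noise_diffusion_step(z: np.ndarray, gamma: float, rng=None) -> np.ndarray:
    """Blend the current latent with fresh Gaussian noise.

    If z ~ N(0, I) and eps ~ N(0, I) are independent, then
    sqrt(1 - gamma) * z + sqrt(gamma) * eps is again N(0, I),
    so repeated updates preserve distribution consistency.
    (Hypothetical update form, not the paper's exact rule.)
    """
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal(z.shape)
    return np.sqrt(1.0 - gamma) * z + np.sqrt(gamma) * eps

# Apply many updates and check the latent stays standard normal.
rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
for _ in range(10):
    z = noise_diffusion_step(z, gamma=0.3, rng=rng)
```

After ten updates the sample mean stays near 0 and the variance near 1, which is why such updates can search the noise space without drifting off the distribution the model was trained on.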

📝 Abstract
Diffusion models have achieved impressive success in generating photorealistic images, but challenges remain in ensuring precise semantic alignment with input prompts. Optimizing the initial noisy latent offers a more efficient alternative to modifying model architectures or prompt engineering for improving semantic alignment. A recent approach, InitNo, refines the initial noisy latent by leveraging attention maps; however, these maps capture only limited information, and the effectiveness of InitNo is highly dependent on the starting point, as it tends to converge to a local optimum nearby. To address this, this paper proposes leveraging the language comprehension capabilities of large vision-language models (LVLMs) to guide the optimization of the initial noisy latent, and introduces the Noise Diffusion process, which updates the noisy latent to generate semantically faithful images while preserving distribution consistency. Furthermore, we provide a theoretical analysis of the condition under which the update improves semantic faithfulness. Experimental results demonstrate the effectiveness and adaptability of our framework, consistently enhancing semantic alignment across various diffusion models.
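The abstract's overall loop can be sketched in toy form. In the sketch below every component is a stand-in: `generate` replaces the frozen diffusion model's decoding, and `lvlm_score` replaces a real LVLM's semantic-faithfulness judgment of the generated image against the prompt. Only the structure of the loop (propose a distribution-preserving noise update, accept it when the scorer judges the result more faithful) reflects the described framework:

```python
import numpy as np

def generate(latent: np.ndarray) -> np.ndarray:
    # Stand-in for decoding the latent with a frozen diffusion model.
    return latent

def lvlm_score(image: np.ndarray, target: np.ndarray) -> float:
    # Stand-in for an LVLM's faithfulness score (higher = more faithful).
    return -float(np.linalg.norm(image - target))

def optimize_latent(target: np.ndarray, steps: int = 100,
                    gamma: float = 0.2, seed: int = 0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(target.shape)
    best, best_score = z, lvlm_score(generate(z), target)
    for _ in range(steps):
        # Noise-Diffusion-style update: blend with fresh noise so the
        # candidate latent stays distributed like the training latents.
        eps = rng.standard_normal(z.shape)
        cand = np.sqrt(1.0 - gamma) * best + np.sqrt(gamma) * eps
        score = lvlm_score(generate(cand), target)
        # Keep the update only when the scorer judges it more faithful,
        # echoing the paper's condition for an improving update.
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
```

Because updates are accepted only when the score improves, the search is monotone and, unlike purely local gradient steps on one starting point, the fresh noise injected at each step lets it escape the neighborhood of the initial latent.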
Problem

Research questions and friction points this paper is trying to address.

Improving semantic alignment in text-to-image diffusion models
Optimizing initial noisy latent using vision-language model guidance
Enhancing semantic faithfulness while maintaining distribution consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages LVLMs to guide initial noisy latent optimization
Introduces Noise Diffusion process for semantic faithfulness
Updates noisy latent while preserving distribution consistency