EruDiff: Refactoring Knowledge in Diffusion Models for Advanced Text-to-Image Synthesis

📅 2026-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that current text-to-image diffusion models often generate factually inconsistent images when interpreting implicit prompts requiring deep world knowledge. To mitigate this, the authors propose a Diffusion Knowledge Distribution Matching (DK-DM) mechanism that aligns the knowledge distribution of implicit prompts with that of explicit ones, complemented by a Negative-only Reinforcement Learning (NO-RL) strategy for fine-grained correction. This approach systematically alleviates both insufficient implicit understanding and explicit rendering biases. Extensive experiments on the Science-T2I and WISE benchmarks demonstrate significant improvements in factual consistency and generalization across leading models such as FLUX and Qwen-Image, validating the effectiveness and broad applicability of the proposed framework.
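The summary describes the NO-RL step only at a high level, so the snippet below is a minimal, hypothetical sketch of what a "negative-only" policy-gradient update could look like: only samples judged counter-factual by a reward signal contribute to the loss, leaving already-correct renderings untouched. The function name, the reward threshold, and the REINFORCE-style surrogate are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a negative-only policy-gradient loss (not the paper's code).
import torch

def negative_only_rl_loss(log_probs: torch.Tensor,
                          rewards: torch.Tensor,
                          threshold: float = 0.0) -> torch.Tensor:
    """REINFORCE-style surrogate that penalizes only negatively rewarded samples."""
    # Keep only samples the reward signal flags as counter-factual (reward < threshold).
    negative_mask = (rewards < threshold).float()
    advantages = (rewards - threshold) * negative_mask  # non-positive by construction
    # Minimizing this pushes down the log-probability of the flagged samples only.
    return -(advantages.detach() * log_probs).mean()

# Toy usage: three generated samples, the second judged counter-factual.
log_probs = torch.tensor([-1.2, -0.8, -1.5], requires_grad=True)
rewards = torch.tensor([0.9, -1.0, 0.4])
negative_only_rl_loss(log_probs, rewards).backward()
print(log_probs.grad)  # only the second entry receives a gradient
```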

📝 Abstract
Text-to-image diffusion models have achieved remarkable fidelity in synthesizing images from explicit text prompts, yet they exhibit a critical deficiency in processing implicit prompts that require deep world knowledge, ranging from the natural sciences to cultural commonsense, resulting in counter-factual synthesis. This paper traces the root of this limitation to a fundamental dislocation of the underlying knowledge structures, manifesting as a chaotic organization of implicit prompts compared to their explicit counterparts. We therefore propose EruDiff, which aims to refactor the knowledge within diffusion models. Specifically, we develop Diffusion Knowledge Distribution Matching (DK-DM) to align the knowledge distribution of intractable implicit prompts with that of well-defined explicit anchors. Furthermore, to rectify the inherent biases in explicit prompt rendering, we employ a Negative-Only Reinforcement Learning (NO-RL) strategy for fine-grained correction. Rigorous empirical evaluations demonstrate that our method significantly enhances the performance of leading diffusion models, including FLUX and Qwen-Image, on both the scientific knowledge benchmark (Science-T2I) and the world knowledge benchmark (WISE), underscoring its effectiveness and generalizability. Our code is available at https://github.com/xiefan-guo/erudiff.
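As a rough illustration of the distribution-matching idea behind DK-DM, the sketch below aligns embeddings of implicit prompts with those of paired explicit anchor prompts using a maximum mean discrepancy (MMD) loss. The choice of MMD, the RBF kernel, and all identifiers are assumptions made for illustration; the abstract does not specify the paper's actual objective.

```python
# Illustrative distribution-matching sketch in the spirit of DK-DM (assumed, not the
# paper's implementation). Assumes precomputed text-encoder embeddings for implicit
# prompts and their explicit anchor prompts.
import torch

def rbf_kernel(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Pairwise Gaussian (RBF) kernel between two sets of embeddings.
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))

def knowledge_mmd_loss(implicit_emb: torch.Tensor,
                       explicit_emb: torch.Tensor) -> torch.Tensor:
    """Maximum mean discrepancy between implicit- and explicit-prompt embeddings.

    Minimizing this pulls the (chaotically organized) implicit-prompt distribution
    toward the well-defined explicit-anchor distribution.
    """
    k_ii = rbf_kernel(implicit_emb, implicit_emb).mean()
    k_ee = rbf_kernel(explicit_emb, explicit_emb).mean()
    k_ie = rbf_kernel(implicit_emb, explicit_emb).mean()
    return k_ii + k_ee - 2 * k_ie

# Toy usage with random 16-dimensional prompt embeddings.
implicit = torch.randn(8, 16, requires_grad=True)
explicit = torch.randn(8, 16)
knowledge_mmd_loss(implicit, explicit).backward()
```

Driving such a loss to zero would make the two embedding distributions statistically indistinguishable, which matches the "refactoring" intuition the abstract describes, though the paper may use a different divergence or operate on diffusion-model internals rather than text embeddings.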
Problem

Research questions and friction points this paper is trying to address.

text-to-image synthesis
implicit prompts
world knowledge
diffusion models
counter-factual generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Knowledge Distribution Matching
Negative-Only Reinforcement Learning
Implicit Prompt Understanding
Knowledge Refactoring
Text-to-Image Synthesis