🤖 AI Summary
Diffusion models often suffer from semantic drift or structural distortion when generating rare concepts that lie in low-density regions of the training distribution. To address this, the authors propose a unified, tuning-free prompt fusion framework that dynamically blends target and auxiliary anchor prompts during the diffusion process. By leveraging Tweedie's identity, the method derives a closed-form expression for adaptive fusion coefficients, enabling synergistic guidance that preserves both semantic accuracy and structural fidelity. Unlike existing heuristic strategies, the approach eliminates the need for model fine-tuning and demonstrates significant improvements over training-free baselines on the RareBench and FlowEdit benchmarks, achieving superior performance in both semantic correctness and structural consistency.
📝 Abstract
Diffusion-based text-to-image (T2I) models have made remarkable progress in generating photorealistic and semantically rich images. However, when the target concepts lie in low-density regions of the training distribution, these models often produce semantically misaligned or structurally inconsistent results. This limitation arises from the long-tailed nature of text-image datasets, where rare concepts or editing instructions are underrepresented. To address this, we introduce Adaptive Auxiliary Prompt Blending (AAPB), a unified framework that stabilizes the diffusion process in low-density regions. AAPB leverages auxiliary anchor prompts to provide semantic support in rare concept generation and structural support in image editing, ensuring faithful guidance toward the target prompt. Unlike prior heuristic prompt alternation methods, AAPB derives a closed-form adaptive coefficient that optimally balances the influence between the auxiliary anchor and the target prompt at each diffusion step. Grounded in Tweedie's identity, our formulation provides a principled and training-free framework for adaptive prompt blending, ensuring stable and target-faithful generation. We demonstrate the effectiveness of adaptive interpolation over fixed interpolation through controlled experiments and empirically show consistent improvements on the RareBench and FlowEdit datasets, achieving superior semantic accuracy and structural fidelity compared to prior training-free baselines.
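To make the core idea concrete, here is a minimal sketch of step-wise prompt blending inside a denoising loop. The abstract does not give the closed-form coefficient, so `blend_coefficient` below is a hypothetical stand-in (a simple noise-level-dependent ramp that weights the anchor prompt early in sampling and the target prompt late); the model call is mocked with NumPy arrays rather than a real T2I network.

```python
import numpy as np

def blend_coefficient(t: int, T: int) -> float:
    """Hypothetical anchor weight at step t (t=T is pure noise, t=0 is clean).

    AAPB derives this coefficient in closed form from Tweedie's identity;
    this linear ramp is only an illustrative placeholder.
    """
    return t / T

def blended_noise_prediction(eps_target: np.ndarray,
                             eps_anchor: np.ndarray,
                             t: int, T: int) -> np.ndarray:
    """Convex combination of the two prompts' noise predictions."""
    lam = blend_coefficient(t, T)
    return lam * eps_anchor + (1.0 - lam) * eps_target

# Toy denoising loop: stand-in "predictions" for each prompt conditioning.
T = 1000
eps_target = np.zeros(8)   # mock prediction under the rare target prompt
eps_anchor = np.ones(8)    # mock prediction under the frequent anchor prompt
for t in (T, T // 2, 0):
    eps = blended_noise_prediction(eps_target, eps_anchor, t, T)
    # At t=T guidance follows the anchor entirely; at t=0, the target.
```

The point of the sketch is only the control flow: both conditionings are evaluated at every step, and a time-dependent scalar (fixed schedule here, closed-form and adaptive in AAPB) decides how much the auxiliary anchor steers the update.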