VSC: Visual Search Compositional Text-to-Image Diffusion Model

📅 2025-05-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current text-to-image diffusion models often suffer from attribute-object binding errors when generating images from complex prompts containing multiple attribute-object pairs, primarily due to the limited capacity of CLIP-based text encoders to model compositional modifier relations. To address this, we propose a novel compositional generation paradigm: (1) decomposing the input prompt into sub-prompts and generating corresponding visual prototypes; (2) fusing these prototypes with enhanced CLIP text embeddings to improve semantic representation; and (3) introducing a segmentation-driven localization training strategy to mitigate cross-attention misalignment. Our method requires no additional layout annotations or inference-time intervention. On T2I CompBench, it substantially outperforms state-of-the-art approaches; human evaluations confirm superior image quality and robustness—particularly as the number of attribute-object pairs increases.
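The three-step paradigm above can be illustrated with a toy sketch. All function names, the mean-pooling prototype, and the linear fusion rule are assumptions for illustration only; the paper's actual decomposition, prototype computation, and fusion modules may differ.

```python
import numpy as np

def decompose_prompt(prompt):
    """Split a multi-pair prompt into attribute-object sub-prompts.
    A real system would use a syntactic parser or a language model;
    splitting on ' and ' is a stand-in for illustration."""
    return [p.strip() for p in prompt.split(" and ")]

def visual_prototype(image_embeddings):
    """Prototype = mean of the image embeddings generated for one sub-prompt."""
    return np.mean(image_embeddings, axis=0)

def fuse_embeddings(text_embedding, prototypes, alpha=0.5):
    """Blend the CLIP text embedding with the averaged visual prototypes
    (a simple convex combination, normalized to unit length)."""
    proto = np.mean(prototypes, axis=0)
    fused = (1.0 - alpha) * text_embedding + alpha * proto
    return fused / np.linalg.norm(fused)

# Example: two sub-prompts, 4 generated images each, 8-dim embeddings
# (random vectors stand in for real CLIP image/text embeddings).
rng = np.random.default_rng(0)
sub_prompts = decompose_prompt("a red car and a blue bird")
prototypes = [visual_prototype(rng.normal(size=(4, 8))) for _ in sub_prompts]
fused = fuse_embeddings(rng.normal(size=8), prototypes)
```

The fused vector would then condition the diffusion model in place of the plain text embedding, injecting per-pair visual evidence that the CLIP text encoder alone fails to capture.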

📝 Abstract
Text-to-image diffusion models have shown impressive capabilities in generating realistic visuals from natural-language prompts, yet they often struggle with accurately binding attributes to corresponding objects, especially in prompts containing multiple attribute-object pairs. This challenge primarily arises from the limitations of commonly used text encoders, such as CLIP, which can fail to encode complex linguistic relationships and modifiers effectively. Existing approaches have attempted to mitigate these issues through attention map control during inference and the use of layout information or fine-tuning during training, yet they face performance drops with increased prompt complexity. In this work, we introduce a novel compositional generation method that leverages pairwise image embeddings to improve attribute-object binding. Our approach decomposes complex prompts into sub-prompts, generates corresponding images, and computes visual prototypes that fuse with text embeddings to enhance representation. By applying segmentation-based localization training, we address cross-attention misalignment, achieving improved accuracy in binding multiple attributes to objects. Our approach outperforms existing compositional text-to-image diffusion models on the T2I CompBench benchmark, achieving better image quality as judged by human evaluators, and shows increasing robustness as the number of binding pairs in the prompt grows.
Problem

Research questions and friction points this paper is trying to address.

Improving attribute-object binding in text-to-image diffusion models
Addressing cross-attention misalignment with segmentation-based training
Enhancing robustness for prompts with multiple attribute-object pairs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pairwise image embeddings enhance attribute-object binding
Decomposes prompts into sub-prompts for visual prototypes
Segmentation-based training corrects cross-attention misalignment
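The segmentation-based training idea in the last bullet can be sketched as a localization objective that penalizes cross-attention mass falling outside an object's mask. This is a minimal sketch under stated assumptions (per-token attention maps and binary object masks are available); the paper's exact loss formulation may differ.

```python
import numpy as np

def localization_loss(attn_map, mask, eps=1e-8):
    """attn_map: (H, W) non-negative cross-attention for one object token.
    mask: (H, W) binary segmentation mask for that object.
    Returns the fraction of (normalized) attention mass outside the mask,
    so the loss is minimized when the token attends within its object."""
    attn = attn_map / (attn_map.sum() + eps)
    return 1.0 - float((attn * mask).sum())

# A token attending entirely inside its mask incurs near-zero loss;
# attending entirely outside it incurs a loss near one.
attn = np.zeros((4, 4)); attn[1, 1] = 1.0
mask = np.zeros((4, 4)); mask[:2, :2] = 1.0
aligned = localization_loss(attn, mask)        # ~0.0
misaligned = localization_loss(attn, 1 - mask) # ~1.0
```

Summing this loss over all object tokens during fine-tuning would push each token's attention toward its own region, correcting the cross-attention misalignment described above.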