Progressive Compositionality In Text-to-Image Generative Models

📅 2024-10-22
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Text-to-image (T2I) models exhibit limited compositional generalization when modeling complex object-attribute relationships such as material properties, spatial configurations, and logical constraints. To address this, we propose EvoGen. First, we leverage large language models (LLMs) to generate fine-grained scene descriptions, then apply Visual Question Answering (VQA)-driven filtering and validation to construct ConPair, a high-quality dataset of 15K contrastive image pairs. Second, we design a multi-stage curriculum for contrastive learning that progressively sharpens the diffusion model's discrimination against increasingly hard negative samples. To our knowledge, this is the first use of VQA to curate a contrastive dataset for T2I compositional generalization. Evaluated on multiple compositional T2I benchmarks, EvoGen significantly outperforms state-of-the-art methods, with particularly notable gains on tasks involving intricate attribute interactions.
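The VQA-driven filtering step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `vqa_score` stands in for a real VQA model's caption-image alignment score, and the threshold is an assumed hyperparameter. A candidate contrastive pair is kept only when each image aligns with its own caption and misaligns with its partner's.

```python
def keep_pair(vqa_score, img_pos, img_neg, cap_pos, cap_neg, threshold=0.7):
    """Accept a contrastive pair when each image matches its own caption
    and fails to match the contrastive caption."""
    return (
        vqa_score(img_pos, cap_pos) >= threshold
        and vqa_score(img_neg, cap_neg) >= threshold
        and vqa_score(img_pos, cap_neg) < threshold
        and vqa_score(img_neg, cap_pos) < threshold
    )

# Toy scorer standing in for a real VQA model: alignment here is simply
# whether the "image" tag equals the caption text.
def toy_score(img, cap):
    return 1.0 if img == cap else 0.0

print(keep_pair(toy_score, "red cube", "blue cube", "red cube", "blue cube"))
# → True (each image matches only its own caption)
```

In practice the scorer would decompose each caption into VQA questions (object, attribute, relation) and aggregate the answers, but the accept/reject logic has this shape.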

📝 Abstract
Despite the impressive text-to-image (T2I) synthesis capabilities of diffusion models, they often struggle to understand compositional relationships between objects and attributes, especially in complex settings. Existing solutions have tackled these challenges by optimizing the cross-attention mechanism or learning from caption pairs with minimal semantic changes. However, can we generate high-quality complex contrastive images that diffusion models can directly discriminate based on visual representations? In this work, we leverage large language models (LLMs) to compose realistic, complex scenarios and harness Visual Question Answering (VQA) systems alongside diffusion models to automatically curate a contrastive dataset, ConPair, consisting of 15K pairs of high-quality contrastive images. These pairs feature minimal visual discrepancies and cover a wide range of attribute categories, especially complex and natural scenarios. To learn effectively from these error cases, i.e., hard negative images, we propose EvoGen, a new multi-stage curriculum for contrastive learning of diffusion models. Through extensive experiments across a wide range of compositional scenarios, we showcase the effectiveness of our proposed framework on compositional T2I benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Understanding compositional relationships in text-to-image models
Generating high-quality complex contrastive images automatically
Improving diffusion models via multi-stage contrastive learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverage LLMs to compose complex scenarios
Use VQA systems to curate contrastive dataset
Propose EvoGen for contrastive learning
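The multi-stage curriculum idea above can be sketched in a few lines. This is a hedged illustration under assumed names (`difficulty`, `curriculum_stages` are not the paper's API): contrastive pairs are bucketed by an assumed difficulty score, and harder negatives are introduced stage by stage.

```python
def curriculum_stages(pairs, difficulty, n_stages=3):
    """Split contrastive pairs into n_stages buckets, easiest first,
    so training can progress from easy to hard negatives."""
    ordered = sorted(pairs, key=difficulty)
    size = -(-len(ordered) // n_stages)  # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

# Toy example: difficulty scores stand in for a measured hardness signal
# (e.g. how visually close the negative image is to the positive).
stages = curriculum_stages(
    pairs=["easy1", "hard1", "mid1", "easy2"],
    difficulty={"easy1": 0.1, "easy2": 0.2, "mid1": 0.5, "hard1": 0.9}.get,
    n_stages=2,
)
print(stages)  # → [['easy1', 'easy2'], ['mid1', 'hard1']]
```

Each stage would then fine-tune the diffusion model with a contrastive objective on its bucket before moving to the next, harder one.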
👥 Authors
Xu Han, Yale University
Linghao Jin, University of Southern California (Natural Language Processing, Pattern Recognition)
Xiaofeng Liu, Yale University
Paul Pu Liang, Massachusetts Institute of Technology