GraPE: A Generate-Plan-Edit Framework for Compositional T2I Synthesis

📅 2024-12-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-to-image (T2I) methods struggle to accurately model object attributes and inter-object relationships under compositional prompts requiring multi-step reasoning, leading to semantic inconsistency between text and image. To address this, the paper proposes a closed-loop "Generate–Plan–Edit" framework: (1) an initial image is generated with a diffusion model; (2) a multimodal large language model (MLLM) diagnoses inconsistencies and produces an executable, stepwise edit plan; and (3) a text-guided image editing model iteratively refines the image according to the plan. The paradigm is modular and training-free, enabling plug-and-play combination of arbitrary generation and editing models; the authors additionally develop a model capable of compositional, progressive editing, which substantially narrows the performance gap between weak and strong base models. Evaluated across three benchmarks and ten state-of-the-art models, including DALL·E-3 and SD-3.5-Large, the method improves fidelity on compositional prompts by up to 3 points.

📝 Abstract
Text-to-image (T2I) generation has seen significant progress with diffusion models, enabling generation of photo-realistic images from text prompts. Despite this progress, existing methods still face challenges in following complex text prompts, especially those requiring compositional and multi-step reasoning. Given such complex instructions, SOTA models often make mistakes in faithfully modeling object attributes and the relationships among them. In this work, we present an alternate paradigm for T2I synthesis, decomposing the task of complex multi-step generation into three steps: (a) Generate: we first generate an image using existing diffusion models; (b) Plan: we use Multi-Modal LLMs (MLLMs) to identify the mistakes in the generated image, expressed in terms of individual objects and their properties, and produce a sequence of corrective steps in the form of an edit plan; (c) Edit: we use an existing text-guided image editing model to sequentially execute the edit plan over the generated image, obtaining a final image that is faithful to the original instruction. Our approach derives its strength from the fact that it is modular, training-free, and applicable to any combination of image generation and editing models. As an added contribution, we also develop a model capable of compositional editing, which further improves the overall accuracy of our approach. Our method flexibly trades inference-time compute for performance on compositional text prompts. We perform extensive experimental evaluation across 3 benchmarks and 10 T2I models, including DALL·E-3 and the latest SD-3.5-Large. Our approach not only improves the performance of SOTA models by up to 3 points, it also reduces the performance gap between weaker and stronger models. Project page: https://dair-iitd.github.io/GraPE/
Problem

Research questions and friction points this paper is trying to address.

Challenges in generating images from complex text prompts
Mistakes in modeling object attributes and relationships
Need for modular, training-free T2I synthesis approach
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes T2I synthesis into Generate-Plan-Edit steps
Uses Multi-Modal LLMs for error identification and correction
Applies modular, training-free approach to image generation and editing
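The closed-loop pipeline described above can be sketched as a small driver that alternates planning and editing until the MLLM finds no remaining inconsistencies. This is a minimal illustrative sketch, not the authors' implementation: the function names `generate`, `plan`, and `edit` are hypothetical placeholders for whichever diffusion model, MLLM planner, and text-guided editor are plugged in.

```python
def grape(prompt, generate, plan, edit, max_rounds=3):
    """Illustrative sketch of a Generate-Plan-Edit loop (hypothetical API).

    generate(prompt) -> image       : any T2I diffusion model
    plan(prompt, image) -> [steps]  : MLLM diagnoses text-image mismatches and
                                      returns edit steps ([] means faithful)
    edit(image, step) -> image      : any text-guided image editing model
    """
    image = generate(prompt)                  # Generate: initial image
    for _ in range(max_rounds):
        steps = plan(prompt, image)           # Plan: diagnose remaining mistakes
        if not steps:                         # no inconsistencies left: stop
            break
        for step in steps:                    # Edit: apply the plan sequentially
            image = edit(image, step)
    return image
```

Because the three components are passed in as callables, the loop itself is model-agnostic, mirroring the plug-and-play, training-free design the paper emphasizes; `max_rounds` is one way to cap the inference-time compute being traded for fidelity.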