🤖 AI Summary
This work addresses the persistent challenge of semantic misalignment in text-to-image generation when handling complex prompts involving multiple objects, relations, and attributes. The authors propose a test-time optimization framework that requires no external tools or prior knowledge, introducing a chain-of-thought-inspired self-correction mechanism into image synthesis for the first time. A vision-language model serves as a critic, providing feedback during inference that iteratively refines the generator's output. The approach is flexible, compatible with a range of generative models and vision-language critics, and yields consistent gains over compute-matched parallel sampling: accuracy improvements of 16.9% on ConceptMix, 13.8% on T2I-CompBench, and 12.5% on Visual Jenga, along with a human preference rate of 58.7%.
📝 Abstract
Text-to-image (T2I) models have achieved remarkable progress, yet they continue to struggle with complex prompts that require simultaneously handling multiple objects, relations, and attributes. Existing inference-time strategies, such as parallel sampling with verifiers or simply increasing denoising steps, can improve prompt alignment but remain inadequate for richly compositional settings where many constraints must be satisfied. Inspired by the success of chain-of-thought reasoning in large language models, we propose an iterative test-time strategy in which a T2I model progressively refines its generations across multiple steps, guided by feedback from a vision-language model as the critic in the loop. Our approach is simple, requires no external tools or priors, and can be flexibly applied to a wide range of image generators and vision-language models. Empirically, we demonstrate consistent gains on image generation across benchmarks: a 16.9% improvement in all-correct rate on ConceptMix (k=7), a 13.8% improvement on T2I-CompBench (3D-Spatial category), and a 12.5% improvement on Visual Jenga scene decomposition compared to compute-matched parallel sampling. Beyond quantitative gains, iterative refinement produces more faithful generations by decomposing complex prompts into sequential corrections, with human evaluators preferring our method 58.7% of the time over 41.3% for the parallel baseline. Together, these findings highlight iterative self-correction as a broadly applicable principle for compositional image generation. Results and visualizations are available at https://iterative-img-gen.github.io/.
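The generate–critique–refine loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generate_image` and `vlm_critique` are hypothetical stand-ins for a T2I model and a vision-language critic, and the constraint representation is invented for the example.

```python
# Sketch of the iterative test-time refinement loop from the abstract.
# All function names and data structures here are illustrative assumptions,
# not the authors' actual interfaces.

def generate_image(prompt, feedback=None):
    # Placeholder T2I model: a real generator would render pixels; here we
    # simply record which critic corrections have been folded into the prompt.
    corrections = feedback or []
    return {"prompt": prompt, "applied": list(corrections)}

def vlm_critique(image, constraints):
    # Placeholder VLM critic: report which prompt constraints remain unmet.
    return [c for c in constraints if c not in image["applied"]]

def iterative_refine(prompt, constraints, max_steps=5):
    """Generate, critique, and re-generate until the critic is satisfied,
    decomposing a complex prompt into sequential corrections."""
    feedback = []
    image = generate_image(prompt)
    for _ in range(max_steps):
        unmet = vlm_critique(image, constraints)
        if not unmet:
            break  # all constraints satisfied; stop early
        # Address one unmet constraint per iteration, then regenerate.
        feedback.append(unmet[0])
        image = generate_image(prompt, feedback)
    return image

result = iterative_refine(
    "a red cube to the left of a blue sphere",
    ["red cube", "blue sphere", "left-of relation"],
)
print(len(result["applied"]))  # → 3
```

The loop structure mirrors the method's key idea: rather than sampling many candidates in parallel, each iteration spends its compute on correcting the specific failures the critic identifies.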