🤖 AI Summary
Existing large multimodal models exhibit significant bias when evaluating text-to-image alignment under long textual prompts. To address this, we introduce LPG-Bench, a rigorous benchmark comprising 200 carefully curated long prompts, and propose TIT, the first zero-shot evaluation framework grounded in text-to-image-to-text cyclic consistency. TIT uses a large multimodal model (LMM) to describe the generated image and computes an alignment score by measuring the semantic similarity between the original prompt and that description. It offers two instantiations: TIT-Score (lightweight) and TIT-Score-LLM (LLM-enhanced). Experiments demonstrate that TIT-Score-LLM achieves 7.31% higher pairwise accuracy than the strongest baseline and correlates strongly with human preferences, substantially improving both the reliability and the generalizability of alignment evaluation under long-prompt conditions.
📝 Abstract
With the rapid advancement of large multimodal models (LMMs), recent text-to-image (T2I) models can generate high-quality images and show strong alignment with short prompts. However, they still struggle to understand and follow long, detailed prompts, resulting in inconsistent generations. To address this challenge, we introduce LPG-Bench, a comprehensive benchmark for evaluating long-prompt-based text-to-image generation. LPG-Bench features 200 meticulously crafted prompts with an average length of over 250 words, approaching the input capacity of several leading commercial models. Using these prompts, we generate 2,600 images from 13 state-of-the-art models and collect comprehensive human ranking annotations. Based on LPG-Bench, we observe that state-of-the-art T2I alignment metrics exhibit poor consistency with human preferences on long-prompt-based image generation. To address this gap, we introduce a novel zero-shot metric based on text-to-image-to-text consistency, termed TIT, for evaluating long-prompt-generated images. The core idea of TIT is to quantify T2I alignment by directly comparing the raw prompt against an LMM-produced description of the generated image. The framework includes an efficient score-based instantiation, TIT-Score, and a large-language-model (LLM)-based instantiation, TIT-Score-LLM. Extensive experiments demonstrate that our framework achieves superior alignment with human judgment compared to baselines such as CLIP-score and LMM-score, with TIT-Score-LLM attaining a 7.31% absolute improvement in pairwise accuracy over the strongest baseline. Together, LPG-Bench and TIT offer a deeper perspective for benchmarking and fostering the development of T2I models. All resources will be made publicly available.
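The text-to-image-to-text consistency idea behind TIT-Score can be sketched in a few lines: caption the generated image with an LMM, embed both the original prompt and the caption, and score their cosine similarity. The sketch below is a minimal illustration, not the paper's implementation; the hashed bag-of-words `toy_embed` is a hypothetical stand-in for the sentence embeddings TIT-Score would actually use, and the captioning step is assumed to happen upstream.

```python
import math
import re
from collections import Counter

def toy_embed(text: str, dim: int = 256) -> list[float]:
    """Hashed bag-of-words vector.

    A hypothetical stand-in for a real sentence-embedding model; in
    TIT-Score the embeddings would come from a learned encoder.
    """
    vec = [0.0] * dim
    counts = Counter(re.findall(r"[a-z']+", text.lower()))
    for token, n in counts.items():
        vec[hash(token) % dim] += float(n)
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def tit_style_score(prompt: str, image_description: str, embed=toy_embed) -> float:
    """TIT-style alignment score: similarity between the original long
    prompt and an LMM-produced description of the generated image.
    The captioning LMM call is assumed to have produced `image_description`."""
    return cosine(embed(prompt), embed(image_description))
```

For pairwise evaluation, the image whose description scores closer to the prompt would be preferred; e.g., `tit_style_score(prompt, desc_a) > tit_style_score(prompt, desc_b)` declares image A the winner.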