🤖 AI Summary
Text-to-image (T2I) models exhibit systematic biases in attribute fidelity—e.g., object count and color—and existing evaluation metrics fail to capture fine-grained semantic errors; meanwhile, vision-language model (VLM) benchmarks lag behind practical T2I evaluation needs.
Method: We propose a hierarchical evaluation framework grounded in 27 fine-grained error patterns, jointly assessing T2I generation quality and VLM discrimination capability. It introduces a cross-model annotation verification mechanism and an automated labeling pipeline powered by Llama3, integrating analyses from multiple VLMs—Molmo, InternVL3, and Pixtral—on images generated by five T2I models (Flux, SD3-Medium, SD3-Large, SD3.5-Medium, SD3.5-Large).
Contribution/Results: Experiments reveal pervasive attribute mismatches and object omissions under complex prompts across mainstream models, advancing standardized, fine-grained reliability assessment for T2I systems.
📝 Abstract
Text-to-image (T2I) models are capable of generating visually impressive images, yet they often fail to accurately capture specific attributes in user prompts, such as the correct number of objects with the specified colors. The diversity of such errors underscores the need for a hierarchical evaluation framework that can compare the prompt-adherence abilities of different image generation models. Simultaneously, benchmarks of vision-language models (VLMs) have not kept pace with the complexity of the scenes that VLMs are used to annotate. In this work, we propose a structured methodology for jointly evaluating T2I models and VLMs by testing whether VLMs can identify 27 specific failure modes in images generated by T2I models conditioned on challenging prompts. Our second contribution is a dataset of prompts, images generated by five T2I models (Flux, SD3-Medium, SD3-Large, SD3.5-Medium, SD3.5-Large), and the corresponding annotations from three VLMs (Molmo, InternVL3, Pixtral), verified by an LLM (Llama3) to test whether each VLM correctly identifies the failure mode in a generated image. By analyzing failure modes on a curated set of prompts, we reveal systematic errors in attribute fidelity and object representation. Our findings suggest that current metrics are insufficient to capture these nuanced errors, highlighting the importance of targeted benchmarks for advancing generative model reliability and interpretability.
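The cross-model verification loop described above—each VLM labels the failure mode of a generated image, and an LLM judge checks the label—might be sketched as follows. This is a minimal, hypothetical illustration: the function names, the `Annotation` record, and the injected callables are assumptions, not the paper's actual pipeline code.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Annotation:
    image_id: str
    vlm_name: str
    failure_mode: str      # one of the 27 fine-grained error patterns
    judged_correct: bool   # the LLM judge's verdict on the VLM's label

def annotate(
    image_ids: List[str],
    vlms: Dict[str, Callable[[str], str]],   # VLM name -> fn(image_id) -> failure-mode label
    judge: Callable[[str, str], bool],       # (image_id, label) -> does the label hold?
) -> List[Annotation]:
    """Ask every VLM to name the failure mode of each generated image,
    then have an LLM judge verify each label (cross-model verification)."""
    results: List[Annotation] = []
    for image_id in image_ids:
        for name, vlm in vlms.items():
            label = vlm(image_id)
            results.append(Annotation(image_id, name, label, judge(image_id, label)))
    return results
```

Injecting the model calls as plain callables keeps the control flow testable offline; in a real pipeline each callable would wrap an API or inference call to the corresponding VLM or LLM.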