🤖 AI Summary
This work addresses the limitations of existing image generation evaluation benchmarks, which are often confined to single tasks or domains and lack interpretability in failure analysis. The authors introduce an open-world benchmark spanning six task categories and six real-world domains, comprising 3.6K condition sets and 20K fine-grained human annotations. They further propose the first explainable evaluation framework featuring object- and patch-level error annotations. Leveraging vision-language model (VLM)-based automatic assessment, multi-dimensional error categorization, and large-scale cross-model evaluation, the study conducts systematic stress tests on 14 state-of-the-art models. Results reveal that editing tasks significantly underperform generation tasks, closed-source models generally outperform open-source ones, targeted training mitigates weaknesses in text-dense scenarios, and VLM-based metrics achieve a Kendall accuracy of up to 0.79 against human judgments.
📝 Abstract
Advances in diffusion, autoregressive, and hybrid models have enabled high-quality image synthesis for tasks such as text-to-image generation, editing, and reference-guided composition. Yet existing benchmarks remain limited: they either focus on isolated tasks, cover only narrow domains, or provide opaque scores without explaining failure modes. We introduce **ImagenWorld**, a benchmark of 3.6K condition sets spanning six core tasks (generation and editing, with single or multiple references) and six topical domains (artworks, photorealistic images, information graphics, textual graphics, computer graphics, and screenshots). The benchmark is supported by 20K fine-grained human annotations and an explainable evaluation schema that tags localized object-level and segment-level errors, complementing automated VLM-based metrics. Our large-scale evaluation of 14 models yields several insights: (1) Models typically struggle more in editing tasks than in generation tasks, especially in local edits. (2) Models excel in artistic and photorealistic settings but struggle with symbolic and text-heavy domains such as screenshots and information graphics. (3) Closed-source systems lead overall, while targeted data curation (e.g., Qwen-Image) narrows the gap in text-heavy cases. (4) Modern VLM-based metrics achieve Kendall accuracies up to 0.79, approximating human ranking, but fall short of fine-grained, explainable error attribution. ImagenWorld provides both a rigorous benchmark and a diagnostic tool to advance robust image generation.
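The Kendall accuracy cited above measures how often an automatic metric orders a pair of outputs the same way human raters do. A minimal sketch of that statistic, using hypothetical scores (the function and its inputs are illustrative, not the paper's implementation):

```python
from itertools import combinations

def kendall_accuracy(metric_scores, human_scores):
    """Fraction of item pairs that the metric and the human
    raters order the same way; tied pairs are skipped."""
    concordant = total = 0
    for i, j in combinations(range(len(metric_scores)), 2):
        dm = metric_scores[i] - metric_scores[j]
        dh = human_scores[i] - human_scores[j]
        if dm == 0 or dh == 0:
            continue  # ignore ties in either ranking
        total += 1
        if (dm > 0) == (dh > 0):
            concordant += 1
    return concordant / total if total else 0.0

# Hypothetical quality scores for five generated images.
metric = [0.9, 0.4, 0.7, 0.2, 0.6]
human = [0.8, 0.5, 0.9, 0.1, 0.6]
print(kendall_accuracy(metric, human))  # 9 of 10 pairs agree -> 0.9
```

A value of 0.79 thus means the metric agrees with the human pairwise ordering on roughly four out of five comparisons, which is close to, but short of, perfect rank agreement.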