🤖 AI Summary
This work addresses the critical yet underexplored issue of reliability in text-to-image generation for infographics, where existing models frequently suffer from data distortion and textual inaccuracies, and no systematic evaluation benchmark exists. To bridge this gap, the authors propose IGenBench—the first reliability benchmark for infographic generation—comprising 600 test cases across 30 infographic types. They further introduce an automated evaluation framework grounded in multimodal large language models that decomposes reliability verification into atomic yes/no questions. Two hierarchical metrics, question-level accuracy (Q-ACC) and infographic-level accuracy (I-ACC), assess fidelity at different granularities. Evaluations of ten state-of-the-art models reveal a stark discrepancy: while Q-ACC reaches up to 0.90, I-ACC remains low at 0.49, with data completeness (0.21) emerging as a key bottleneck, underscoring the current inability of models to achieve end-to-end reliable infographic generation.
📝 Abstract
Infographics are composite visual artifacts that combine data visualizations with textual and illustrative elements to communicate information. While recent text-to-image (T2I) models can generate aesthetically appealing images, their reliability in generating infographics remains unclear. Generated infographics may appear correct at first glance but contain easily overlooked issues, such as distorted data encoding or incorrect textual content. We present IGENBENCH, the first benchmark for evaluating the reliability of text-to-infographic generation, comprising 600 curated test cases spanning 30 infographic types. We design an automated evaluation framework that decomposes reliability verification into atomic yes/no questions based on a taxonomy of 10 question types. We employ multimodal large language models (MLLMs) to verify each question, yielding question-level accuracy (Q-ACC) and infographic-level accuracy (I-ACC). We comprehensively evaluate 10 state-of-the-art T2I models on IGENBENCH. Our systematic analysis reveals key insights for future model development: (i) a three-tier performance hierarchy with the top model achieving Q-ACC of 0.90 but I-ACC of only 0.49; (ii) data-related dimensions emerging as universal bottlenecks (e.g., Data Completeness: 0.21); and (iii) the challenge of achieving end-to-end correctness across all models. We release IGENBENCH at https://igen-bench.vercel.app/.
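The gap between the two metrics follows directly from their definitions: a question-level score averages over all atomic checks, while an infographic-level score requires every check on an image to pass. A minimal sketch, assuming Q-ACC is the fraction of atomic yes/no questions verified as correct and I-ACC is the fraction of infographics for which all questions pass (function names and toy verdicts are illustrative, not from the paper):

```python
def q_acc(results):
    """Question-level accuracy over all (infographic, question) verdicts."""
    verdicts = [v for per_image in results for v in per_image]
    return sum(verdicts) / len(verdicts)

def i_acc(results):
    """Infographic-level accuracy: an image counts only if all its questions pass."""
    return sum(all(per_image) for per_image in results) / len(results)

# Toy verdicts from an MLLM verifier: True = atomic check passed.
results = [
    [True, True, True],    # fully correct infographic
    [True, False, True],   # one failed atomic check
    [True, True, False],   # one failed atomic check
]
print(round(q_acc(results), 2))  # 0.78
print(round(i_acc(results), 2))  # 0.33
```

This all-questions-must-pass aggregation explains how a model can score 0.90 on Q-ACC yet only 0.49 on I-ACC: even a small per-question error rate compounds across the many checks a single infographic must satisfy.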