🤖 AI Summary
Static benchmarks struggle to keep pace with the rapid evolution of text-to-image (T2I) models, hindering timely, scalable, and faithful evaluation. Method: This paper proposes MT2IE, the first interactive, dynamic evaluation framework for T2I models built on multimodal large language models (MLLMs). MT2IE replaces manual annotation and fixed datasets with MLLM-driven iterative prompt generation and joint scoring of image-text consistency and aesthetic quality. Contributions/Results: (1) It introduces the first MLLM-guided mechanism for automatically constructing high-information prompts, improving prompt efficiency by 80×; (2) it reproduces state-of-the-art benchmark model rankings using only 1/80 the number of prompts; and (3) its consistency scores achieve a Pearson correlation of 0.87 with human judgments, significantly surpassing existing SOTA metrics. Empirical results validate MLLMs as scalable, high-fidelity evaluation agents for modern T2I generation.
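The headline agreement number above is a Pearson correlation between the framework's consistency scores and human ratings. As a reminder of what that statistic measures, here is a self-contained computation on made-up per-image scores (illustrative only, not the paper's data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for five images: automatic metric vs. human ratings.
metric_scores = [0.2, 0.4, 0.5, 0.7, 0.9]
human_scores  = [0.3, 0.35, 0.6, 0.65, 0.95]
r = pearson(metric_scores, human_scores)  # close to 1.0 = strong agreement
```

A value near 0.87, as reported for MT2IE, indicates a strong linear relationship between the automatic scores and human judgment.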
📝 Abstract
Steady improvements in text-to-image (T2I) generative models lead to the gradual obsolescence of automatic evaluation benchmarks that rely on static datasets, motivating researchers to seek alternative ways to evaluate T2I progress. In this paper, we explore the potential of multimodal large language models (MLLMs) as evaluator agents that interact with a T2I model, with the objective of assessing prompt-generation consistency and image aesthetics. We present Multimodal Text-to-Image Eval (MT2IE), an evaluation framework that iteratively generates evaluation prompts, scores the generated images, and matches the T2I evaluations of existing benchmarks with a fraction of the prompts those static benchmarks use. Moreover, we show that MT2IE's prompt-generation consistency scores correlate more strongly with human judgment than scores previously introduced in the literature. MT2IE generates prompts that efficiently probe T2I model performance, producing the same relative T2I model rankings as existing benchmarks while using only 1/80th the number of prompts for evaluation.
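The abstract describes an interactive loop: the MLLM proposes a probing prompt, the T2I model renders it, and the MLLM scores the prompt-image pair on consistency and aesthetics before proposing the next probe. A minimal sketch of that loop, with toy stand-ins for both models (the paper does not publish this exact API, so `mllm_propose_prompt`, `t2i_generate`, and `mllm_score` below are hypothetical placeholders):

```python
def mllm_propose_prompt(history):
    """Toy stand-in: an MLLM would condition the next probe on past scores."""
    return f"probe-{len(history)}"

def t2i_generate(prompt):
    """Toy stand-in for the text-to-image model under evaluation."""
    return f"image-for-{prompt}"

def mllm_score(prompt, image):
    """Toy stand-in: an MLLM would judge the pair on both axes in [0, 1]."""
    return {"consistency": 0.9, "aesthetics": 0.8}

def evaluate(num_rounds=3):
    """Run the iterative probe-generate-score loop and aggregate scores."""
    history = []
    for _ in range(num_rounds):
        prompt = mllm_propose_prompt(history)   # evaluator picks next probe
        image = t2i_generate(prompt)            # T2I model renders it
        scores = mllm_score(prompt, image)      # evaluator judges the pair
        history.append((prompt, scores))
    n = len(history)
    # Per-axis mean over all rounds serves as the model's final score.
    return {axis: sum(s[axis] for _, s in history) / n
            for axis in ("consistency", "aesthetics")}

result = evaluate()
```

Because each new prompt can target weaknesses revealed by earlier rounds, far fewer prompts are needed than with a fixed dataset, which is the intuition behind the 1/80th prompt budget.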