Multi-Modal Language Models as Text-to-Image Model Evaluators

📅 2025-05-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Static benchmarks struggle to keep pace with the rapid evolution of text-to-image (T2I) models, hindering timely, scalable, and faithful evaluation. Method: This paper proposes MT2IE, an interactive, dynamic evaluation framework for T2I models built on multimodal large language models (MLLMs). MT2IE replaces manual annotation and fixed datasets with MLLM-driven iterative prompt generation and joint scoring of image-text consistency and aesthetic quality. Contributions/Results: (1) it introduces an MLLM-guided mechanism for automatically constructing high-information evaluation prompts; (2) it reproduces the model rankings of existing benchmarks using only 1/80th the number of prompts; and (3) its consistency scores achieve a Pearson correlation of 0.87 with human judgments, surpassing existing SOTA metrics. Empirical results validate MLLMs as scalable, high-fidelity evaluation agents for modern T2I generation.
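The evaluation loop described above (MLLM-driven iterative prompt generation plus joint scoring of consistency and aesthetics) can be sketched as follows. All interfaces here (`mllm_generate_prompt`, `t2i_generate`, `mllm_score`) are hypothetical stand-ins for illustration, not the paper's actual API; this is a minimal sketch of the control flow, not the authors' implementation.

```python
def evaluate_t2i(t2i_generate, mllm_generate_prompt, mllm_score, rounds=10):
    """Iteratively probe a T2I model with MLLM-generated prompts.

    All three callables are hypothetical placeholders:
      t2i_generate(prompt) -> image
      mllm_generate_prompt(history) -> prompt (conditioned on results so far)
      mllm_score(prompt, image) -> (consistency, aesthetics)
    """
    history = []  # (prompt, consistency, aesthetics) per round
    for _ in range(rounds):
        # The MLLM proposes a high-information prompt, conditioned
        # on how the T2I model has performed so far.
        prompt = mllm_generate_prompt(history)
        image = t2i_generate(prompt)
        # The MLLM jointly scores image-text consistency and aesthetics.
        consistency, aesthetics = mllm_score(prompt, image)
        history.append((prompt, consistency, aesthetics))
    n = len(history)
    avg_consistency = sum(c for _, c, _ in history) / n
    avg_aesthetics = sum(a for _, _, a in history) / n
    return avg_consistency, avg_aesthetics
```

The key design point is that prompt generation is conditioned on the evaluation history, which is what lets a dynamic evaluator probe a model's weaknesses with far fewer prompts than a fixed dataset.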

📝 Abstract
The steady improvements of text-to-image (T2I) generative models lead to slow deprecation of automatic evaluation benchmarks that rely on static datasets, motivating researchers to seek alternative ways to evaluate the T2I progress. In this paper, we explore the potential of multi-modal large language models (MLLMs) as evaluator agents that interact with a T2I model, with the objective of assessing prompt-generation consistency and image aesthetics. We present Multimodal Text-to-Image Eval (MT2IE), an evaluation framework that iteratively generates prompts for evaluation, scores generated images and matches T2I evaluation of existing benchmarks with a fraction of the prompts used in existing static benchmarks. Moreover, we show that MT2IE's prompt-generation consistency scores have higher correlation with human judgment than scores previously introduced in the literature. MT2IE generates prompts that are efficient at probing T2I model performance, producing the same relative T2I model rankings as existing benchmarks while using only 1/80th the number of prompts for evaluation.
Problem

Research questions and friction points this paper is trying to address.

Evaluating text-to-image models using multi-modal LLMs
Assessing prompt-generation consistency and image aesthetics
Reducing prompt quantity while maintaining evaluation accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses multi-modal LLMs for T2I evaluation
MT2IE framework scores images efficiently
Produces consistency scores that correlate strongly with human judgment