🤖 AI Summary
Current text-to-image (T2I) evaluation relies heavily on costly commercial models and high-quality human- or LLM-generated critiques, creating bottlenecks in scalability and accessibility.
Method: This paper proposes a lightweight, interpretable automatic evaluation framework. It introduces the first reinforcement learning paradigm for T2I assessment based on Group Relative Policy Optimization (GRPO); designs a continuous reward mechanism that enhances score diversity and training stability; and jointly generates scalar quality scores and natural-language rationale chains, requiring only coarse-grained human ratings and no manually written justifications.
Contribution/Results: The framework drastically reduces annotation cost and strengthens the evaluation capability of open-source multimodal large language models (MLLMs). It achieves state-of-the-art performance across three major T2I meta-evaluation benchmarks, significantly outperforming all baselines. Alignment with human evaluation and rationale accuracy both improve markedly.
📝 Abstract
The rapid progress in diffusion-based text-to-image (T2I) generation has created an urgent need for interpretable automatic evaluation methods that can assess the quality of generated images, thereby reducing the human annotation burden. To reduce the prohibitive cost of relying on commercial models for large-scale evaluation, and to improve the reasoning capabilities of open-source models, recent research has explored supervised fine-tuning (SFT) of multimodal large language models (MLLMs) as dedicated T2I evaluators. However, SFT approaches typically rely on high-quality critique datasets, which are either generated by proprietary LLMs (with potential issues of bias and inconsistency) or annotated by humans at high cost, limiting their scalability and generalization. To address these limitations, we propose T2I-Eval-R1, a novel reinforcement learning framework that trains open-source MLLMs using only coarse-grained quality scores, thereby avoiding the need to annotate high-quality interpretable evaluation rationales. Our approach integrates Group Relative Policy Optimization (GRPO) into the instruction-tuning process, enabling models to generate both scalar scores and interpretable reasoning chains using only easily accessible annotated judgment scores or preferences. Furthermore, we introduce a continuous reward formulation that encourages score diversity and provides stable optimization signals, leading to more robust and discriminative evaluation behavior. Experimental results on three established T2I meta-evaluation benchmarks demonstrate that T2I-Eval-R1 achieves significantly higher alignment with human assessments and offers more accurate interpretable score rationales compared to strong baseline methods.
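To make the GRPO-with-continuous-reward idea concrete, here is a minimal sketch. The paper's exact reward formula is not given in this summary, so this assumes a simple distance-based reward (`1 - |predicted - reference| / max_score`, clipped at 0); the group-relative advantage normalization, however, follows the standard GRPO recipe of normalizing each sampled response's reward against its group's mean and standard deviation.

```python
import statistics

# Assumed continuous reward: closer predicted scores earn smoothly higher
# reward in [0, 1], instead of a binary exact-match signal. The specific
# formula here is illustrative, not the paper's.
def continuous_reward(pred_score: float, ref_score: float,
                      max_score: float = 10.0) -> float:
    gap = abs(pred_score - ref_score)
    return max(0.0, 1.0 - gap / max_score)

# Standard GRPO step: normalize rewards within a group of sampled
# responses, advantage_i = (r_i - mean(r)) / (std(r) + eps),
# avoiding the need for a learned value/critic model.
def group_relative_advantages(rewards: list[float]) -> list[float]:
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    eps = 1e-6
    return [(r - mean) / (std + eps) for r in rewards]

# Example: four sampled evaluations of one image, reference score 7.
preds = [7.0, 5.0, 9.0, 2.0]
rewards = [continuous_reward(p, 7.0) for p in preds]
advantages = group_relative_advantages(rewards)
```

Because only a coarse-grained reference score is needed to compute the reward, the generated reasoning chain is optimized indirectly: rationales that lead to well-calibrated scores receive higher group-relative advantage.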