🤖 AI Summary
Existing text-to-image (T2I) evaluation methods typically reduce image quality to a single scalar score, lacking interpretability and fine-grained diagnostic capability. To address this, the authors propose ImageDoctor, a unified multi-aspect evaluation framework for T2I generation that scores images along four complementary dimensions: plausibility, semantic alignment, aesthetics, and overall quality, while localizing defects via pixel-level flaw heatmaps. Its core idea is a "look-think-predict" paradigm: the model first localizes potential flaws, then reasons about them, and finally produces quantitative scores, enabling interpretable, fine-grained diagnosis. Built on a vision-language model and trained with a combination of supervised fine-tuning and reinforcement learning, ImageDoctor supports multi-aspect scoring and dense reward prediction. Experiments show strong agreement between its assessments and human preferences across multiple datasets; when deployed as a reward model for preference tuning, it improves generation quality by 10% over scalar-based reward models.
📝 Abstract
The rapid advancement of text-to-image (T2I) models has increased the need for reliable human preference modeling, a demand further amplified by recent progress in reinforcement learning for preference alignment. However, existing approaches typically quantify the quality of a generated image using a single scalar, limiting their ability to provide comprehensive and interpretable feedback on image quality. To address this, we introduce ImageDoctor, a unified multi-aspect T2I model evaluation framework that assesses image quality across four complementary dimensions: plausibility, semantic alignment, aesthetics, and overall quality. ImageDoctor also provides pixel-level flaw indicators in the form of heatmaps, which highlight misaligned or implausible regions and can be used as a dense reward for T2I model preference alignment. Inspired by the diagnostic process, we improve the detail sensitivity and reasoning capability of ImageDoctor by introducing a "look-think-predict" paradigm, in which the model first localizes potential flaws, then generates reasoning, and finally concludes the evaluation with quantitative scores. Built on top of a vision-language model and trained through a combination of supervised fine-tuning and reinforcement learning, ImageDoctor demonstrates strong alignment with human preference across multiple datasets, establishing its effectiveness as an evaluation metric. Furthermore, when used as a reward model for preference tuning, ImageDoctor significantly improves generation quality, achieving an improvement of 10% over scalar-based reward models.
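To make the reward-model interface concrete, the sketch below shows one plausible way a downstream preference-tuning loop could consume ImageDoctor's outputs. This is a hypothetical illustration under stated assumptions, not the paper's implementation: the container class, field names, weighting scheme, and the [0, 1] score range are all assumptions for the example, and the flaw heatmap is treated as a per-pixel probability that is inverted to form a dense reward.

```python
from dataclasses import dataclass

@dataclass
class ImageDoctorOutput:
    """Hypothetical container for ImageDoctor's multi-aspect evaluation.

    Assumes each aspect score lies in [0, 1]; `flaw_heatmap` is an H x W
    per-pixel flaw indicator (higher = more likely misaligned/implausible).
    """
    plausibility: float
    semantic_alignment: float
    aesthetics: float
    overall_quality: float
    flaw_heatmap: list  # H x W nested lists of floats

def scalar_reward(out: ImageDoctorOutput,
                  weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Collapse the four aspect scores into a single scalar reward
    via a weighted average (uniform weights by default)."""
    scores = (out.plausibility, out.semantic_alignment,
              out.aesthetics, out.overall_quality)
    return sum(w * s for w, s in zip(weights, scores))

def dense_reward(out: ImageDoctorOutput) -> list:
    """Invert the flaw heatmap into a per-pixel reward map:
    flaw-free regions get reward 1.0, flawed regions approach 0.0."""
    return [[1.0 - p for p in row] for row in out.flaw_heatmap]
```

The split mirrors the abstract's two use cases: `scalar_reward` plays the role of a conventional scalar reward model, while `dense_reward` exposes the spatial flaw localization as a dense signal for preference alignment.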