🤖 AI Summary
Existing multimodal large language models (MLLMs) decouple score regression from explanatory reasoning in visual quality assessment, compromising both accuracy and interpretability. To address this, we propose a unified two-stage training framework: cold-start initialization followed by reinforcement learning-based fine-tuning. In the second stage, we introduce a novel reward used with Group Relative Policy Optimization (GRPO) that jointly optimizes score regression and reasoning consistency, enabling end-to-end co-improvement of quality scoring and natural-language rationale generation. Our method combines knowledge distillation from a teacher MLLM, cross-entropy supervision, and GRPO-based reinforcement learning. Evaluated on cross-domain benchmarks, our approach achieves up to a 6.5% improvement in Spearman's rank correlation coefficient (SRCC), outperforming state-of-the-art models including its teacher model Qwen-2.5-VL-72B. Notably, it simultaneously achieves top-tier scoring accuracy and plausible, logically consistent interpretability, bridging the long-standing trade-off between precision and explainability in visual quality assessment.
📝 Abstract
Recent studies demonstrate that multimodal large language models (MLLMs) can proficiently evaluate visual quality through interpretable assessments. However, existing approaches typically treat quality scoring and reasoning descriptions as separate tasks with disjoint optimization objectives, leading to a trade-off: models adept at quality reasoning descriptions struggle with precise score regression, while score-focused models lack interpretability. This limitation hinders the full potential of MLLMs in visual quality assessment, where accuracy and interpretability should be mutually reinforcing. To address this, we propose a unified two-stage training framework comprising a cold-start stage and a reinforcement learning-based fine-tuning stage. Specifically, in the first stage, we distill high-quality data from a teacher model through expert-designed prompts, initializing reasoning capabilities via cross-entropy loss supervision. In the second stage, we introduce a novel reward with Group Relative Policy Optimization (GRPO) to jointly optimize scoring accuracy and reasoning consistency. We designate the models derived from these two stages as Q-Ponder-CI and Q-Ponder. Extensive experiments show that Q-Ponder achieves state-of-the-art (SOTA) performance on quality score regression benchmarks, delivering up to 6.5% higher SRCC on cross-domain datasets. Furthermore, Q-Ponder significantly outperforms description-based SOTA models, including its teacher model Qwen-2.5-VL-72B, particularly in description accuracy and reasonableness, demonstrating its generalization potential across diverse tasks.
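The second-stage optimizer, GRPO, sidesteps a learned value function by standardizing each sampled response's reward against the other responses in its own group. A minimal sketch of that group-relative advantage computation, with a *hypothetical* joint reward (`combined_reward`, `alpha`, and the 1-5 score scale are illustrative assumptions; the abstract does not spell out the paper's exact reward terms):

```python
import statistics

def grpo_advantages(rewards):
    """GRPO's core step: standardize each rollout's reward against the
    mean and std of its own group, yielding per-rollout advantages."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard identical rewards
    return [(r - mu) / sigma for r in rewards]

def combined_reward(pred_score, gt_score, consistency, alpha=0.5):
    """Hypothetical joint reward mixing score accuracy with a
    reasoning-consistency signal in [0, 1]; not the paper's formula."""
    accuracy = 1.0 - min(abs(pred_score - gt_score) / 5.0, 1.0)
    return alpha * accuracy + (1.0 - alpha) * consistency

# One group of 4 sampled rollouts for the same image:
# (predicted score, consistency of the accompanying rationale).
rollouts = [(4.8, 0.9), (3.0, 0.7), (4.5, 0.4), (2.0, 0.2)]
rewards = [combined_reward(s, gt_score=4.6, consistency=c) for s, c in rollouts]
advantages = grpo_advantages(rewards)
```

Rollouts that both score accurately and reason consistently receive positive advantages relative to their group, so the policy gradient pushes the model toward responses that improve on both axes at once rather than trading one off against the other.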