Q-Ponder: A Unified Training Pipeline for Reasoning-based Visual Quality Assessment

📅 2025-06-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing multimodal large language models (MLLMs) decouple score regression from explanatory reasoning in visual quality assessment, compromising both accuracy and interpretability. To address this, we propose a two-stage unified training framework: cold-start initialization followed by reinforcement learning. We introduce a novel reward mechanism, used with Group Relative Policy Optimization (GRPO), that jointly optimizes score regression and reasoning consistency—enabling end-to-end co-improvement of quality scoring and natural-language rationale generation. Our method integrates MLLMs, knowledge distillation, cross-entropy supervision, and GRPO-based reinforcement learning. Evaluated on cross-domain benchmarks, our approach achieves up to a 6.5% improvement in Spearman’s rank correlation coefficient (SRCC), significantly outperforming state-of-the-art models including Qwen-2.5-VL-72B. Notably, it is the first method to simultaneously achieve top-tier scoring accuracy and the most plausible, logically consistent interpretability—bridging the long-standing trade-off between precision and explainability in visual quality assessment.

📝 Abstract
Recent studies demonstrate that multimodal large language models (MLLMs) can proficiently evaluate visual quality through interpretable assessments. However, existing approaches typically treat quality scoring and reasoning descriptions as separate tasks with disjoint optimization objectives, leading to a trade-off: models adept at quality reasoning descriptions struggle with precise score regression, while score-focused models lack interpretability. This limitation hinders the full potential of MLLMs in visual quality assessment, where accuracy and interpretability should be mutually reinforcing. To address this, we propose a unified two-stage training framework comprising a cold-start stage and a reinforcement learning-based fine-tuning stage. Specifically, in the first stage, we distill high-quality data from a teacher model through expert-designed prompts, initializing reasoning capabilities via cross-entropy loss supervision. In the second stage, we introduce a novel reward with Group Relative Policy Optimization (GRPO) to jointly optimize scoring accuracy and reasoning consistency. We designate the models derived from these two stages as Q-Ponder-CI and Q-Ponder. Extensive experiments show that Q-Ponder achieves state-of-the-art (SOTA) performance on quality score regression benchmarks, delivering up to 6.5% higher SRCC on cross-domain datasets. Furthermore, Q-Ponder significantly outperforms description-based SOTA models, including its teacher model Qwen-2.5-VL-72B, particularly in description accuracy and reasonableness, demonstrating its generalization potential across diverse tasks.
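The abstract's second stage pairs a combined reward (scoring accuracy plus reasoning consistency) with GRPO, which replaces a learned value critic with group-relative reward normalization. The sketch below illustrates that mechanism only; the reward shapes, the 0.5 mixing weight `alpha`, the 5-point score scale, and the group of four sampled responses are illustrative assumptions, not the paper's actual formulation.

```python
def combined_reward(pred_score, gt_score, consistency, alpha=0.5):
    """Mix score accuracy (absolute error mapped to [0, 1], assuming a
    5-point quality scale) with a reasoning-consistency term in [0, 1]."""
    accuracy = max(0.0, 1.0 - abs(pred_score - gt_score) / 5.0)
    return alpha * accuracy + (1 - alpha) * consistency

def grpo_advantages(rewards):
    """GRPO-style advantages: standardize each sampled response's reward
    by the group's mean and std, with no learned value critic."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    if std == 0.0:
        return [0.0] * n
    return [(r - mean) / std for r in rewards]

# Four sampled (predicted score, consistency) responses for one image
# whose ground-truth score is 3.0:
group = [combined_reward(p, 3.0, c)
         for p, c in [(2.8, 0.9), (4.5, 0.4), (3.1, 0.7), (1.0, 0.2)]]
adv = grpo_advantages(group)  # advantages sum to ~0 within the group
```

Responses that score accurately and reason consistently receive positive advantages relative to their group, so both objectives are pushed up jointly rather than traded off.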
Problem

Research questions and friction points this paper is trying to address.

Unifies quality scoring and reasoning in visual assessment
Addresses trade-off between score accuracy and interpretability
Enhances MLLMs for cross-domain quality evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified two-stage training framework
Expert-designed prompts for data distillation
Group Relative Policy Optimization reward
Zhuoxuan Cai
Fudan University, vivo Mobile Communication Co., Ltd
Jian Zhang
vivo Mobile Communication Co., Ltd
Xinbin Yuan
Nankai University
Pengtao Jiang
vivo Mobile Communication Co., Ltd
Wenxiang Chen
Fudan University
Bowen Tang
vivo Mobile Communication Co., Ltd
Lujian Yao
East China University of Science and Technology
Qiyuan Wang
Fudan University
Jinwen Chen
University of Electronic Science and Technology of China
Bo Li
vivo Mobile Communication Co., Ltd