🤖 AI Summary
Video quality assessment (VQA) models suffer from poor out-of-distribution (OOD) generalization and limited interpretability. To address these challenges, we propose the first reasoning-based VQA framework, which jointly models quality perception and score prediction via group relative policy optimization (GRPO), a rule-guided reinforcement learning paradigm inspired by human visual cognition. Our method integrates three complementary reward signals (bell-shaped regression, pairwise ranking, and temporal consistency), enabling end-to-end training with only scalar quality scores and no distortion-type annotations. This design supports fine-grained quality attribution and natural-language explanation generation. Evaluated on both in-domain and OOD benchmarks, our approach achieves state-of-the-art performance and significantly improves cross-dataset generalization. Moreover, it surpasses existing interpretable VQA methods in attribution accuracy and explanation fidelity.
📝 Abstract
Video quality assessment (VQA) aims to objectively quantify perceptual quality degradation in alignment with human visual perception. Despite recent advances, existing VQA models still suffer from two critical limitations: *poor generalization to out-of-distribution (OOD) videos* and *limited explainability*, which restrict their applicability in real-world scenarios. To address these challenges, we propose **VQAThinker**, a reasoning-based VQA framework that leverages large multimodal models (LMMs) with reinforcement learning to jointly model video quality understanding and scoring, emulating human perceptual decision-making. Specifically, we adopt group relative policy optimization (GRPO), a rule-guided reinforcement learning algorithm that enables reasoning over video quality under score-level supervision, and introduce three VQA-specific rewards: (1) a **bell-shaped regression reward** that increases rapidly as the prediction error decreases and becomes progressively less sensitive near the ground truth; (2) a **pairwise ranking reward** that guides the model to correctly determine the relative quality between video pairs; and (3) a **temporal consistency reward** that encourages the model to prefer temporally coherent videos over their perturbed counterparts. Extensive experiments demonstrate that VQAThinker achieves state-of-the-art performance on both in-domain and OOD VQA benchmarks, showing strong generalization for video quality scoring. Furthermore, evaluations on video quality understanding tasks validate its superiority in distortion attribution and quality description compared to existing explainable VQA models and LMMs. These findings demonstrate that reinforcement learning offers an effective pathway toward building generalizable and explainable VQA models solely with score-level supervision.
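To make the three reward terms concrete, here is a minimal sketch of how such rule-based reward functions could look. This is an illustration under our own assumptions, not the paper's implementation: the Gaussian form and width `sigma` of the bell-shaped reward, and the binary (0/1) formulation of the ranking and temporal rewards, are illustrative choices.

```python
import math


def bell_regression_reward(pred: float, gt: float, sigma: float = 0.5) -> float:
    """Bell-shaped (Gaussian) reward: rises steeply as |pred - gt| shrinks,
    then flattens near the ground truth, matching the described behavior.
    `sigma` (the bell width) is an assumed hyperparameter."""
    return math.exp(-((pred - gt) ** 2) / (2 * sigma ** 2))


def pairwise_ranking_reward(pred_a: float, pred_b: float,
                            gt_a: float, gt_b: float) -> float:
    """1.0 if the predicted ordering of a video pair matches the
    ground-truth ordering, else 0.0 (assumed binary formulation)."""
    if gt_a == gt_b:
        return 1.0 if pred_a == pred_b else 0.0
    return 1.0 if (pred_a - pred_b) * (gt_a - gt_b) > 0 else 0.0


def temporal_consistency_reward(score_original: float,
                                score_perturbed: float) -> float:
    """1.0 if the temporally coherent video is scored above its
    temporally perturbed counterpart, else 0.0."""
    return 1.0 if score_original > score_perturbed else 0.0
```

In a GRPO-style setup, rewards like these would be computed per rollout from the model's predicted scores and combined (e.g., as a weighted sum) into the scalar reward used for the group-relative advantage; note that only scalar quality scores are needed as supervision.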