OmniQuality-R: Advancing Reward Models Through All-Encompassing Quality Assessment

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing visual quality assessment methods are predominantly single-task and fail to produce the unified, continuous, and interpretable reward signals needed for policy optimization. Method: We propose OmniQuality-R, the first framework to unify multi-task quality reasoning (aesthetics, technical quality, and image-text alignment) into a continuous reward signal. It introduces instruction-guided chain-of-thought data construction, with STD-based filtering and entropy gating to enhance training stability; employs rejection sampling followed by supervised fine-tuning to build high-quality reasoning trajectories; and optimizes the policy via Group Relative Policy Optimization (GRPO) with a Gaussian reward function. Contribution/Results: Experiments demonstrate that OmniQuality-R significantly outperforms baselines across all three quality assessment tasks, achieving superior continuous score prediction and downstream generalization.
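The summary mentions a Gaussian reward function for continuous score prediction. A minimal sketch of how such a reward could map a predicted quality score to a smooth reward signal (the width `sigma` and function name are assumptions, not from the paper):

```python
import math

def gaussian_reward(predicted: float, target: float, sigma: float = 0.5) -> float:
    """Reward peaks at 1.0 when the predicted score equals the target
    and decays smoothly with squared error; sigma (assumed) controls
    how sharply off-target predictions are penalized."""
    return math.exp(-((predicted - target) ** 2) / (2 * sigma ** 2))
```

Unlike an exact-match reward, this stays informative for near-miss predictions, which is what makes continuous score regression tractable under RL.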

📝 Abstract
Current visual evaluation approaches are typically constrained to a single task. To address this, and inspired by subjective experiments in which participants receive task-specific instructions outlining distinct assessment principles before evaluating, we propose OmniQuality-R, a structured reward modeling framework that transforms multi-task quality reasoning into continuous and interpretable reward signals for policy optimization. To enable this, we construct a reasoning-enhanced reward modeling dataset by sampling informative plan-reason trajectories via rejection sampling, forming a reliable chain-of-thought (CoT) dataset for supervised fine-tuning (SFT). Building on this, we apply Group Relative Policy Optimization (GRPO) for post-training, using a Gaussian-based reward to support continuous score prediction. To further stabilize training and improve downstream generalization, we incorporate standard deviation (STD) filtering and an entropy gating mechanism during reinforcement learning; these techniques suppress unstable updates and reduce variance in policy optimization. We evaluate OmniQuality-R on three key IQA tasks: aesthetic quality assessment, technical quality evaluation, and text-image alignment.
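The abstract's STD filtering and entropy gating can be sketched as two simple gates applied during RL. This is an illustrative interpretation, not the paper's implementation; the thresholds `min_std` and `max_entropy` and both function names are assumptions:

```python
import math
import statistics

def keep_group(rewards: list[float], min_std: float = 0.05) -> bool:
    """STD filtering (sketch): skip prompt groups whose sampled rewards
    barely vary, since GRPO's group-relative advantage carries no signal
    when all rollouts score the same."""
    return statistics.pstdev(rewards) > min_std

def gate_update(token_probs: list[float], max_entropy: float = 2.0) -> bool:
    """Entropy gating (sketch): suppress policy updates when the token
    distribution is too high-entropy, i.e. the rollout is near-random."""
    entropy = -sum(p * math.log(p) for p in token_probs if p > 0)
    return entropy < max_entropy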
Problem

Research questions and friction points this paper is trying to address.

Unified reward modeling for multi-task visual quality assessment
Transforming multi-dimensional reasoning into continuous reward signals
Addressing instability in policy optimization for quality evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified reward modeling framework for multi-task quality reasoning
Constructed reasoning-enhanced dataset via rejection sampling
Applied Group Relative Policy Optimization with STD filtering
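The rejection-sampling step listed above can be sketched as drawing several plan-reason trajectories per image and keeping only those whose final score lands near the human label. The sampler interface, tolerance `tol`, and all names here are hypothetical placeholders:

```python
def build_sft_dataset(prompts, sample_cot, ground_truth, n_samples=8, tol=0.25):
    """Rejection sampling (sketch): draw n_samples chain-of-thought rollouts
    per prompt via sample_cot (a hypothetical model interface returning a
    reasoning trace and a score) and keep those within tol of the label."""
    kept = []
    for p in prompts:
        for _ in range(n_samples):
            cot, score = sample_cot(p)
            if abs(score - ground_truth[p]) <= tol:
                kept.append((p, cot, score))
    return kept
```

The surviving trajectories then serve as the SFT corpus, so the model is fine-tuned only on reasoning that actually ends at a well-calibrated score.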