RefReward-SR: LR-Conditioned Reward Modeling for Preference-Aligned Super-Resolution

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the misalignment between existing super-resolution evaluation metrics and human perceptual preferences, as conventional measures struggle to jointly capture visual fidelity and semantic plausibility. To bridge this gap, we propose RefReward-SR—a low-resolution (LR)-conditioned reward model that leverages the vision-language priors of multimodal large language models (MLLMs) to align evaluations with human preferences through LR–HR semantic consistency, without requiring ground-truth supervision. We introduce RefSR-18K, the first large-scale LR-conditioned preference dataset, and employ Group Relative Policy Optimization (GRPO) for reward-guided fine-tuning. Our approach significantly improves alignment with human judgments while enhancing visual naturalness and preserving semantic coherence.
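The summary mentions that the reward model is trained on pairwise rankings from RefSR-18K. A standard way to train on such preferences is a Bradley–Terry style ranking objective; the sketch below is an assumption for illustration (the page does not spell out the paper's exact loss), with hypothetical function and argument names:

```python
import numpy as np

def pairwise_ranking_loss(r_preferred, r_rejected):
    """Bradley-Terry style objective for pairwise preference data:
    push the reward of the human-preferred HR candidate above the
    reward of the rejected one. Loss per pair is -log sigmoid(margin).
    """
    margin = np.asarray(r_preferred, dtype=np.float64) - np.asarray(
        r_rejected, dtype=np.float64
    )
    # log1p(exp(-m)) == -log(sigmoid(m)), computed stably
    return float(np.mean(np.log1p(np.exp(-margin))))

# Equal scores give the maximum-uncertainty loss, log(2) ~= 0.693
loss_tied = pairwise_ranking_loss([1.0], [1.0])
```

A larger margin between preferred and rejected scores drives the loss toward zero, which is what rewards the model for respecting the human ranking.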

📝 Abstract
Recent advances in generative super-resolution (SR) have greatly improved visual realism, yet existing evaluation and optimization frameworks remain misaligned with human perception. Full-Reference and No-Reference metrics often fail to reflect perceptual preference, either penalizing semantically plausible details due to pixel misalignment or favoring visually sharp but inconsistent artifacts. Moreover, most SR methods rely on ground-truth (GT)-dependent distribution matching, which does not necessarily correspond to human judgments. In this work, we propose RefReward-SR, a low-resolution (LR) reference-aware reward model for preference-aligned SR. Instead of relying on GT supervision or NR evaluation, RefReward-SR assesses high-resolution (HR) reconstructions conditioned on their LR inputs, treating the LR image as a semantic anchor. Leveraging the visual-linguistic priors of a Multimodal Large Language Model (MLLM), it evaluates semantic consistency and plausibility in a reasoning-aware manner. To support this paradigm, we construct RefSR-18K, the first large-scale LR-conditioned preference dataset for SR, providing pairwise rankings based on LR–HR consistency and HR naturalness. We fine-tune the MLLM with Group Relative Policy Optimization (GRPO) using LR-conditioned ranking rewards, and further integrate GRPO into SR model training with RefReward-SR as the core reward signal for preference-aligned generation. Extensive experiments show that our framework achieves substantially better alignment with human judgments, producing reconstructions that preserve semantic consistency while enhancing perceptual plausibility and visual naturalness. Code, models, and datasets will be released upon paper acceptance.
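The abstract's GRPO fine-tuning step relies on group-relative advantages: instead of a learned value baseline, each candidate in a sampled group is scored against the group's own statistics. A minimal sketch of that advantage computation, assuming scalar rewards from a reward model such as RefReward-SR (names here are illustrative, not from the paper):

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """Compute GRPO-style advantages for one group of candidates.

    Each candidate's advantage is its reward minus the group mean,
    scaled by the group standard deviation, so only the relative
    ranking within the group matters.
    """
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# Example: four HR candidates for one LR input, scored by a reward model
adv = group_relative_advantages([0.8, 0.5, 0.9, 0.4])
```

Candidates scored above the group mean get positive advantages (their sampling probability is pushed up during policy optimization); below-mean candidates get negative ones, and the advantages sum to zero within each group.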
Problem

Research questions and friction points this paper is trying to address.

super-resolution
human perception
preference alignment
evaluation metrics
semantic consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

RefReward-SR
preference-aligned super-resolution
LR-conditioned reward modeling
multimodal large language model
Group Relative Policy Optimization
👥 Authors
Yushuai Song
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
Weize Quan
MAIS-CASIA
Image Processing · Computer Graphics · Deep Learning
Weining Wang
Institute of Automation, Chinese Academy of Sciences
Video Understanding · Video Generation · Multi-Modal Analysis
Jiahui Sun
Shanghai Jiao Tong University
System
Jing Liu
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
Meng Li
OPPO AI Center, OPPO Inc.
Pengbin Yu
OPPO AI Center, OPPO Inc.
Zhentao Chen
OPPO AI Center, OPPO Inc.
Wei Shen
OPPO AI Center, OPPO Inc.
Lunxi Yuan
OPPO AI Center, OPPO Inc.
Dong-ming Yan
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences