🤖 AI Summary
This work addresses the misalignment between existing super-resolution evaluation metrics and human perceptual preferences, as conventional measures struggle to jointly capture visual fidelity and semantic plausibility. To bridge this gap, we propose RefReward-SR, a low-resolution (LR)-conditioned reward model that leverages the vision-language priors of multimodal large language models (MLLMs) to align evaluations with human preferences through LR-HR semantic consistency, without requiring ground-truth supervision. We introduce RefSR-18K, the first large-scale LR-conditioned preference dataset for SR, and employ Group Relative Policy Optimization (GRPO) for reward-guided fine-tuning. Our approach significantly improves alignment with human judgments while enhancing visual naturalness and preserving semantic coherence.
📝 Abstract
Recent advances in generative super-resolution (SR) have greatly improved visual realism, yet existing evaluation and optimization frameworks remain misaligned with human perception. Full-reference and no-reference (NR) metrics often fail to reflect perceptual preference, either penalizing semantically plausible details due to pixel misalignment or favoring visually sharp but inconsistent artifacts. Moreover, most SR methods rely on ground-truth (GT)-dependent distribution matching, which does not necessarily correspond to human judgments. In this work, we propose RefReward-SR, a low-resolution (LR) reference-aware reward model for preference-aligned SR. Instead of relying on GT supervision or NR evaluation, RefReward-SR assesses high-resolution (HR) reconstructions conditioned on their LR inputs, treating the LR image as a semantic anchor. Leveraging the visual-linguistic priors of a Multimodal Large Language Model (MLLM), it evaluates semantic consistency and plausibility in a reasoning-aware manner. To support this paradigm, we construct RefSR-18K, the first large-scale LR-conditioned preference dataset for SR, providing pairwise rankings based on LR-HR consistency and HR naturalness. We fine-tune the MLLM with Group Relative Policy Optimization (GRPO) using LR-conditioned ranking rewards, and further integrate GRPO into SR model training with RefReward-SR as the core reward signal for preference-aligned generation. Extensive experiments show that our framework achieves substantially better alignment with human judgments, producing reconstructions that preserve semantic consistency while enhancing perceptual plausibility and visual naturalness. Code, models, and datasets will be released upon paper acceptance.
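The abstract does not spell out the GRPO mechanics, but the core step common to GRPO is normalizing each candidate's reward against the group of candidates generated for the same input (here, multiple HR reconstructions of one LR image scored by the reward model). The sketch below illustrates only that group-relative advantage step; the function name and the example reward values are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of GRPO's group-relative advantage computation.
# Assumption: each LR input yields a group of HR candidates, and a
# reward model (e.g. RefReward-SR) assigns each a scalar reward.
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize rewards within one group: (r - group mean) / group std.

    Candidates scored above their group's average get a positive
    advantage, those below get a negative one; no value baseline
    network is needed, which is the point of GRPO.
    """
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four HR candidates for a single LR input,
# with hypothetical reward-model scores.
adv = group_relative_advantages([0.9, 0.4, 0.7, 0.2])
```

These per-candidate advantages would then weight the policy-gradient update of the SR model (or the MLLM during reward-model fine-tuning), so that generations preferred within their own group are reinforced.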