DP$^2$O-SR: Direct Perceptual Preference Optimization for Real-World Image Super-Resolution

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pre-trained text-to-image (T2I) diffusion models exhibit perceptual quality instability in real-world image super-resolution (Real-ISR) due to sampling stochasticity, struggling to simultaneously preserve structural fidelity and achieve natural appearance. Method: We propose a direct perceptual preference optimization framework that requires no human annotations, comprising: (1) a hybrid reward signal integrating full-reference and no-reference image quality assessment (IQA) models trained on large-scale human preference data; (2) a hierarchical preference optimization mechanism using multi-pair preference samples (replacing binary best/worst selection) with adaptive sample weighting tailored to model capacity; and (3) fine-tuning of the generative backbone guided by this reward. Results: Our method significantly improves perceptual quality across multiple Real-ISR benchmarks, demonstrates strong generalization, and is compatible with both diffusion- and flow-based generative models.

📝 Abstract
Benefiting from pre-trained text-to-image (T2I) diffusion models, real-world image super-resolution (Real-ISR) methods can synthesize rich and realistic details. However, due to the inherent stochasticity of T2I models, different noise inputs often lead to outputs with varying perceptual quality. Although this randomness is sometimes seen as a limitation, it also introduces a wider perceptual quality range, which can be exploited to improve Real-ISR performance. To this end, we introduce Direct Perceptual Preference Optimization for Real-ISR (DP$^2$O-SR), a framework that aligns generative models with perceptual preferences without requiring costly human annotations. We construct a hybrid reward signal by combining full-reference and no-reference image quality assessment (IQA) models trained on large-scale human preference datasets. This reward encourages both structural fidelity and natural appearance. To better utilize perceptual diversity, we move beyond the standard best-vs-worst selection and construct multiple preference pairs from outputs of the same model. Our analysis reveals that the optimal selection ratio depends on model capacity: smaller models benefit from broader coverage, while larger models respond better to stronger contrast in supervision. Furthermore, we propose hierarchical preference optimization, which adaptively weights training pairs based on intra-group reward gaps and inter-group diversity, enabling more efficient and stable learning. Extensive experiments across both diffusion- and flow-based T2I backbones demonstrate that DP$^2$O-SR significantly improves perceptual quality and generalizes well to real-world benchmarks.
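The abstract's core recipe, scoring each sampled output with a hybrid reward and building multiple preference pairs from the ranked outputs, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fusion weight `alpha`, the reward-gap threshold `min_gap`, and both function names are assumptions for illustration.

```python
import itertools

def hybrid_reward(fr_score, nr_score, alpha=0.5):
    """Combine a full-reference IQA score (structural fidelity to the
    reference) with a no-reference IQA score (natural appearance).
    The linear fusion with weight alpha is a placeholder; the paper's
    exact combination rule may differ."""
    return alpha * fr_score + (1.0 - alpha) * nr_score

def build_preference_pairs(rewards, num_pairs=3, min_gap=0.05):
    """Rank sampled outputs of the same model by reward and form several
    (winner, loser) index pairs, instead of a single best-vs-worst pair.
    Pairs are kept only if their reward gap exceeds min_gap."""
    ranked = sorted(range(len(rewards)), key=lambda i: rewards[i], reverse=True)
    pairs = []
    for winner, loser in itertools.combinations(ranked, 2):
        if rewards[winner] - rewards[loser] >= min_gap:
            pairs.append((winner, loser))
        if len(pairs) == num_pairs:
            break
    return pairs

# Example: four sampled SR outputs with hypothetical hybrid rewards.
pairs = build_preference_pairs([0.9, 0.2, 0.6, 0.5], num_pairs=3)
```

In this toy run, output 0 has the highest reward, so it appears as the winner in the first pairs; per the paper's analysis, how many pairs to keep (broad coverage vs. strong contrast) would be tuned to model capacity.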
Problem

Research questions and friction points this paper is trying to address.

Optimizing perceptual quality in image super-resolution without human annotations
Exploiting generative model diversity through multi-pair preference learning
Adapting supervision intensity based on model capacity for stable training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines full-reference and no-reference IQA models
Constructs multiple preference pairs from the same model's outputs
Implements hierarchical preference optimization with adaptive weighting
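The adaptive-weighting idea in the last bullet, giving more training weight to preference pairs with clearer reward separation, might be sketched as a softmax over intra-group reward gaps. This is a hypothetical stand-in for the paper's hierarchical weighting scheme; the temperature and the softmax form are assumptions.

```python
import math

def pair_weights(reward_gaps, temperature=1.0):
    """Softmax-style weighting over preference pairs: pairs with larger
    intra-group reward gaps (less ambiguous preferences) receive more
    weight in the optimization loss. Temperature controls how sharply
    the weighting concentrates on the largest gaps."""
    exps = [math.exp(g / temperature) for g in reward_gaps]
    total = sum(exps)
    return [e / total for e in exps]
```

Lowering the temperature pushes the weights toward the single clearest pair, while raising it spreads supervision across all pairs, which echoes the paper's observation that smaller models benefit from broader coverage and larger models from stronger contrast.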
👥 Authors
Rongyuan Wu
The Hong Kong Polytechnic University
Computational Photography, Generative Models

Lingchen Sun
The Hong Kong Polytechnic University
Computer Vision, Image Processing

Zhengqiang Zhang
The Hong Kong Polytechnic University

Shihao Wang
The Hong Kong Polytechnic University

Tianhe Wu
City University of Hong Kong, OPPO Research Institute
Reinforcement Learning, VLM/LLM, Low-level Vision

Qiaosi Yi
The Hong Kong Polytechnic University

Shuai Li
The Hong Kong Polytechnic University

Lei Zhang
The Hong Kong Polytechnic University