🤖 AI Summary
This work addresses the challenge that existing generative real-world super-resolution methods often produce semantic or structural hallucinations inconsistent with the low-resolution input, owing to stochastic sampling, and lack reliable fidelity supervision in the absence of high-resolution ground truth. To this end, we propose a multi-reward preference-based reinforcement learning framework that incorporates quantifiable, low-resolution-anchored fidelity signals and a decoupled advantage normalization mechanism to mitigate advantage collapse. We further introduce LucidLR, a large-scale dataset of realistically degraded images, to enhance rollout diversity and training stability. By integrating a flow-matching generative model with LucidConsistency, a degradation-robust semantic evaluator, our approach significantly outperforms strong baselines across diverse real-world scenarios, achieving a superior balance between perceptual quality and input fidelity while maintaining stable and efficient training.
📝 Abstract
Generative real-world image super-resolution (Real-ISR) can synthesize visually convincing details from severely degraded low-resolution (LR) inputs, yet its stochastic sampling makes a critical failure mode hard to avoid: outputs may look sharp but be unfaithful to the LR evidence (semantic and structural hallucination), while such LR-anchored faithfulness is difficult to assess without high-resolution (HR) ground truth. Preference-based reinforcement learning (RL) is a natural fit because each LR input yields a rollout group of candidates to compare. However, effective alignment in Real-ISR is hindered by (i) the lack of a degradation-robust LR-anchored faithfulness signal, and (ii) a rollout-group optimization bottleneck where naive multi-reward scalarization followed by normalization compresses objective-wise contrasts, causing advantage collapse and weakening the reward-weighted updates in DiffusionNFT-style forward fine-tuning. Moreover, (iii) limited coverage of real degradations restricts rollout diversity and preference signal quality. We propose LucidNFT, a multi-reward RL framework for flow-matching Real-ISR. LucidNFT introduces LucidConsistency, a degradation-robust semantic evaluator that makes LR-anchored faithfulness measurable and optimizable; a decoupled advantage normalization strategy that preserves objective-wise contrasts within each LR-conditioned rollout group before fusion, preventing advantage collapse; and LucidLR, a large-scale collection of real-world degraded images to support robust RL fine-tuning. Experiments show that LucidNFT consistently improves strong flow-based Real-ISR baselines, achieving better perceptual-faithfulness trade-offs with stable optimization dynamics across diverse real-world scenarios.
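To make the advantage-collapse argument concrete, here is a minimal NumPy sketch contrasting the two orderings the abstract describes: naive scalarization (fuse rewards, then normalize the scalar across the rollout group) versus decoupled normalization (normalize each objective across the group, then fuse). The reward values, equal fusion weights, and z-score normalizer are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def scalarize_then_normalize(rewards, weights):
    # Naive: fuse multi-objective rewards into one scalar first,
    # then z-score that scalar within the rollout group.
    scalar = rewards @ weights
    return (scalar - scalar.mean()) / (scalar.std() + 1e-8)

def decoupled_normalize_then_fuse(rewards, weights):
    # Decoupled: z-score each objective within the rollout group
    # (preserving per-objective contrasts), then fuse.
    z = (rewards - rewards.mean(axis=0)) / (rewards.std(axis=0) + 1e-8)
    return z @ weights

# Hypothetical rollout group of 3 candidates for one LR input.
# Column 0: faithfulness score (small scale); column 1: perceptual
# score (much larger scale). Values are made up for illustration.
rewards = np.array([[0.90, 10.0],
                    [0.95,  9.5],
                    [0.80, 12.0]])
w = np.array([0.5, 0.5])

naive = scalarize_then_normalize(rewards, w)
decoupled = decoupled_normalize_then_fuse(rewards, w)
# With naive scalarization the large-scale perceptual objective
# dominates the advantages (candidate 3 ranks best despite the
# lowest faithfulness); decoupled normalization keeps both
# objectives' contrasts alive, so candidate 2 ranks best.
```

The point of the toy numbers is only that the two schemes can produce different group rankings: mixing raw reward scales before normalization lets one objective's variance swamp the other's, which is the "compressed objective-wise contrast" failure mode the abstract attributes to naive scalarization.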