🤖 AI Summary
This work addresses the limited cross-algorithm generalization of existing no-reference enhanced image quality assessment (EIQA) models, which often overfit to visual artifacts specific to particular enhancement algorithms. To mitigate this algorithm-induced bias, the authors propose a preference-guided debiasing framework that constructs a continuous enhancement-preference embedding space via supervised contrastive learning and explicitly estimates and removes algorithm-specific confounding factors from the quality representation. This enables the model to focus on algorithm-agnostic perceptual quality cues. Combined with a two-stage training strategy, the proposed method significantly outperforms state-of-the-art approaches across multiple public EIQA benchmarks, demonstrating improved robustness and generalization across diverse enhancement algorithms.
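As a rough illustration of the first component summarized above, the PyTorch-style sketch below shows one way a supervised contrastive objective over enhancement-preference labels could be written. The function name, the use of discrete enhancement-algorithm labels as supervision, and the temperature value are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch (assumed names/hyperparameters) of a supervised
# contrastive loss that pulls together embeddings of images produced by
# similar enhancement styles and pushes apart those from different styles.
import torch
import torch.nn.functional as F

def preference_supcon_loss(embeddings, pref_labels, temperature=0.1):
    """embeddings: (N, D) preference embeddings; pref_labels: (N,) style/algorithm ids."""
    z = F.normalize(embeddings, dim=1)                       # unit-norm embeddings
    sim = z @ z.t() / temperature                            # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (pref_labels.unsqueeze(0) == pref_labels.unsqueeze(1)) & ~self_mask

    # log-softmax over all non-self pairs for each anchor
    exp_sim = torch.exp(sim).masked_fill(self_mask, 0.0)
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))

    # average over each anchor's positive pairs (anchors without positives contribute 0)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_count
    return loss.mean()
```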
📝 Abstract
Current no-reference image quality assessment (NR-IQA) models for enhanced images often struggle to generalize, as they tend to overfit to the distinct patterns of specific enhancement algorithms rather than evaluating genuine perceptual quality. To address this issue, we propose a preference-guided debiasing framework for no-reference enhanced image quality assessment (EIQA). Specifically, we first learn a continuous enhancement-preference embedding space using supervised contrastive learning, in which images produced by similar enhancement styles are encouraged to have close representations. Building on this space, we estimate the enhancement-induced nuisance component contained in the raw quality representation and remove it before quality regression. In this way, the model is guided to focus on algorithm-invariant perceptual quality cues instead of enhancement-specific visual fingerprints. To facilitate stable optimization, we adopt a two-stage training strategy that first learns the enhancement-preference space and then performs debiased quality prediction. Extensive experiments on public EIQA benchmarks demonstrate that the proposed method effectively mitigates algorithm-induced representation bias and achieves superior robustness and cross-algorithm generalization compared with existing approaches.
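To make the debiasing step more concrete, here is a minimal PyTorch sketch of how an enhancement-induced nuisance component might be estimated from the preference embedding and subtracted from the raw quality feature before regression, with the two-stage schedule outlined in comments. Module names, feature dimensions, and losses are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the debiased quality head: a small module estimates
# the algorithm-specific nuisance from the (frozen) preference embedding and
# removes it from the quality feature before score regression.
import torch
import torch.nn as nn

class DebiasedQualityHead(nn.Module):
    def __init__(self, feat_dim=512, pref_dim=128):
        super().__init__()
        # maps the preference embedding to an estimate of the nuisance
        # component the enhancement algorithm leaves in the quality feature
        self.nuisance_estimator = nn.Linear(pref_dim, feat_dim)
        self.regressor = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, quality_feat, pref_embed):
        nuisance = self.nuisance_estimator(pref_embed)   # algorithm-specific part
        debiased = quality_feat - nuisance               # keep algorithm-invariant cues
        return self.regressor(debiased).squeeze(-1)      # predicted quality score

# Two-stage schedule (assumed form):
#   Stage 1: train the preference encoder with preference_supcon_loss, then freeze it.
#   Stage 2: train the quality backbone + DebiasedQualityHead with a regression
#            loss (e.g., MSE against MOS labels), using the frozen preference embeddings.
```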