🤖 AI Summary
Image quality assessment (IQA) faces significant challenges in few-shot settings, including distortion diversity, strong content dependency, and high annotation costs. To address these, we propose GRMP-IQA, a gradient-regulated meta-prompting framework that integrates meta-prompt pre-training with quality-aware gradient regularization to adapt CLIP efficiently under limited labeled data. The method combines meta-learning, learnable soft prompts, joint vision-language fine-tuning, and distortion-agnostic gradient constraints to improve cross-distortion generalization while keeping the model focused on quality-sensitive features. Evaluated on LIVEC and KonIQ with only 20% of the training data, GRMP-IQA achieves Spearman's rank correlation coefficients (SRCC) of 0.836 and 0.853, respectively, substantially outperforming existing fully supervised and few-shot state-of-the-art methods and demonstrating both superior data efficiency and robust quality prediction in low-data regimes.
📝 Abstract
Image Quality Assessment (IQA) remains an open challenge in computer vision due to complex distortion conditions, diverse image content, and limited data availability. Existing blind IQA (BIQA) methods rely heavily on extensive human annotations to train models, which are labor-intensive and costly to collect given the demanding nature of IQA dataset creation. To reduce this dependence on labeled samples, this paper introduces a novel Gradient-Regulated Meta-Prompt IQA framework (GRMP-IQA), which rapidly adapts the powerful vision-language pre-trained model CLIP to downstream IQA tasks, significantly improving accuracy in data-limited scenarios. GRMP-IQA comprises two key modules: a Meta-Prompt Pre-training Module and Quality-Aware Gradient Regularization. The Meta-Prompt Pre-training Module leverages a meta-learning paradigm to pre-train soft prompts with meta-knowledge shared across different distortions, enabling rapid adaptation to diverse IQA tasks. Quality-Aware Gradient Regularization, in turn, adjusts the update gradients during fine-tuning so that the model attends to quality-relevant features and does not overfit to semantic information. Extensive experiments on five standard BIQA datasets demonstrate superior performance over state-of-the-art BIQA methods under the limited-data setting, e.g., SRCC values of 0.836 on LIVEC (vs. 0.760) and 0.853 on KonIQ (vs. 0.812). Notably, using just 20% of the training data, GRMP-IQA outperforms most existing fully supervised BIQA methods.
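The Meta-Prompt Pre-training Module described above pre-trains soft prompts across distortion types so that a few fine-tuning steps suffice on a new IQA task. The abstract does not give the exact meta-learning algorithm, so the sketch below uses a simple Reptile-style inner/outer loop on a toy objective: each "distortion task" is stood in for by a quadratic loss around a task-specific optimal prompt (the real loss would come from CLIP quality prediction; `task_optima`, `task_grad`, and all hyperparameters here are illustrative assumptions, not the paper's values).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: each "distortion task" t has an optimal prompt p_t*, and
# its loss is ||p - p_t*||^2. In the actual framework the loss would be a
# CLIP-based quality-prediction loss for that distortion type (assumption).
task_optima = [rng.normal(size=8) for _ in range(5)]

def task_grad(prompt, opt):
    # Gradient of the quadratic toy loss ||prompt - opt||^2.
    return 2.0 * (prompt - opt)

def meta_pretrain_prompt(dim=8, meta_steps=200, inner_steps=5,
                         inner_lr=0.1, meta_lr=0.5):
    """Reptile-style meta-pre-training of one soft prompt across tasks."""
    prompt = np.zeros(dim)
    for _ in range(meta_steps):
        update = np.zeros(dim)
        for opt in task_optima:
            adapted = prompt.copy()
            for _ in range(inner_steps):        # inner-loop task adaptation
                adapted -= inner_lr * task_grad(adapted, opt)
            update += adapted - prompt          # move toward adapted prompt
        prompt += meta_lr * update / len(task_optima)  # outer (meta) update
    return prompt

meta_prompt = meta_pretrain_prompt()
```

On this toy objective the meta-initialized prompt settles near the mean of the task optima, i.e., an initialization from which a few gradient steps adapt it to any single distortion task; that is the intuition behind sharing meta-knowledge across distortions.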
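The abstract states that Quality-Aware Gradient Regularization adjusts update gradients so fine-tuning emphasizes quality-relevant features rather than semantic content, but it does not specify the rule. A common way to realize this kind of constraint is gradient projection: remove from the quality-loss gradient its component along the semantic-loss gradient before applying the update. The function below is a minimal sketch of that heuristic, not the paper's actual mechanism; the name `regulate_gradient` and the orthogonalization rule are assumptions.

```python
import numpy as np

def regulate_gradient(g_quality, g_semantic, eps=1e-12):
    """Project the semantic-aligned component out of the quality gradient,
    so the resulting update is orthogonal to the semantic direction.
    (Assumption: a standard gradient-projection heuristic standing in for
    the paper's unspecified regularization rule.)"""
    g_quality = np.asarray(g_quality, dtype=float)
    g_semantic = np.asarray(g_semantic, dtype=float)
    coef = (g_quality @ g_semantic) / (g_semantic @ g_semantic + eps)
    return g_quality - coef * g_semantic

# Example: a quality gradient that partly points along the semantic
# direction keeps only its quality-specific (orthogonal) component.
regulated = regulate_gradient([1.0, 1.0], [1.0, 0.0])
```

The design intuition matches the abstract: by construction the regulated update carries no component along the semantic-gradient direction, which discourages the fine-tuned prompts from drifting toward purely semantic features.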