Boosting CLIP Adaptation for Image Quality Assessment via Meta-Prompt Learning and Gradient Regularization

📅 2024-09-09
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
Image quality assessment (IQA) faces significant challenges in few-shot settings, including distortion diversity, strong content dependency, and high annotation costs. To address these, we propose GRMP-IQA—a gradient-regulated meta-prompting framework—that pioneers the integration of meta-prompt pretraining with quality-aware gradient regularization for efficient adaptation of CLIP under limited labeled data. Our method synergistically combines meta-learning, learnable soft prompts, vision-language joint fine-tuning, and distortion-agnostic gradient constraints to jointly enhance cross-distortion generalization and focus on quality-sensitive features. Evaluated on LIVEC and KonIQ with only 20% of training data, GRMP-IQA achieves Spearman’s rank correlation coefficients (SRCC) of 0.836 and 0.853, respectively—substantially outperforming existing fully supervised and few-shot state-of-the-art methods. This demonstrates both superior data efficiency and robust quality prediction capability in low-data regimes.

Technology Category

Application Category

📝 Abstract
Image Quality Assessment (IQA) remains an unresolved challenge in the field of computer vision, due to complex distortion conditions, diverse image content, and limited data availability. The existing Blind IQA (BIQA) methods heavily rely on extensive human annotations to train models, which is both labor-intensive and costly due to the demanding nature of creating IQA datasets. To mitigate the dependence on labeled samples, this paper introduces a novel Gradient-Regulated Meta-Prompt IQA Framework (GRMP-IQA). This framework aims to fast adapt the powerful visual-language pre-trained model, CLIP, to downstream IQA tasks, significantly improving accuracy in scenarios with limited data. Specifically, the GRMP-IQA comprises two key modules: Meta-Prompt Pre-training Module and Quality-Aware Gradient Regularization. The Meta Prompt Pre-training Module leverages a meta-learning paradigm to pre-train soft prompts with shared meta-knowledge across different distortions, enabling rapid adaptation to various IQA tasks. On the other hand, the Quality-Aware Gradient Regularization is designed to adjust the update gradients during fine-tuning, focusing the model's attention on quality-relevant features and preventing overfitting to semantic information. Extensive experiments on five standard BIQA datasets demonstrate the superior performance to the state-of-the-art BIQA methods under limited data setting, i.e., achieving SRCC values of 0.836 (vs. 0.760 on LIVEC) and 0.853 (vs. 0.812 on KonIQ). Notably, utilizing just 20% of the training data, our GRMP-IQA outperforms most existing fully supervised BIQA methods.
Problem

Research questions and friction points this paper is trying to address.

Addresses limited data in image quality assessment
Reduces dependency on costly human annotations
Adapts vision-language models for few-shot learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts CLIP vision-language model for IQA
Uses meta-learning to pre-train soft prompts
Applies quality-aware gradient regularization to prevent overfitting
🔎 Similar Papers
No similar papers found.
X
Xudong Li
Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University
Z
Zihao Huang
School of Information and Electronics, Beijing Institute of Technology
R
Runze Hu
School of Information and Electronics, Beijing Institute of Technology
Y
Yan Zhang
Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University
L
Liujuan Cao
Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University
R
Rongrong Ji
Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University