🤖 AI Summary
Existing image quality assessment (IQA) methods predominantly rely on linear fusion of multi-scale features, overlooking the progressive impact of distortions on high-level semantic content. Method: This paper brings CLIP to IQA through a bottom-up, semantics-aware framework: a multi-scale encoder extracts hierarchical features; a cross-layer attention mechanism models how low-level distortions propagate toward high-level semantics; and 40 quality-descriptive adjectives across six dimensions are used to construct textual prompts, enabling fine-grained alignment between image quality and natural language. The method explicitly leverages CLIP's linguistic priors to model subjective quality perception. Contribution/Results: This yields improved interpretability and cross-dataset generalization. The approach achieves state-of-the-art performance on both full-reference and no-reference IQA benchmarks, including LIVE, CSIQ, TID2013, KonIQ-10k, and SPAQ, and demonstrates superior robustness under cross-dataset evaluation.
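The prompt-based scoring idea can be illustrated with a minimal sketch. This is not the paper's implementation: the adjective subset, the polarity labels, and the random stand-in embeddings are all assumptions for illustration; in the actual method the embeddings would come from the frozen CLIP text and image encoders, and the full 40-adjective list spans six dimensions defined in the paper.

```python
import numpy as np

# Hypothetical subset of the 40 quality adjectives; the full list and its
# six dimensions are defined in the paper, not reproduced here.
adjectives = ["sharp", "blurry", "clean", "noisy", "vivid", "washed-out"]
polarity = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])  # assumed quality polarity

rng = np.random.default_rng(1)
d = 512  # CLIP ViT-B/32 embedding width
# Stand-ins for CLIP text embeddings of prompts like "a {adj} photo";
# in practice these come from the frozen CLIP text encoder.
text_emb = rng.standard_normal((len(adjectives), d))
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)

# Stand-in for the image's quality embedding from the visual branch.
image_emb = rng.standard_normal(d)
image_emb /= np.linalg.norm(image_emb)

# Softmax over scaled cosine similarities gives a distribution over
# adjectives; the expected polarity serves as a scalar quality score in [0, 1].
sims = text_emb @ image_emb
w = np.exp(100.0 * sims)  # 100 mirrors CLIP's learned logit scale
w /= w.sum()
quality = float(w @ polarity)
print(0.0 <= quality <= 1.0)
```

The softmax-over-prompts construction is what ties the quality score to natural language: each adjective contributes in proportion to how well it describes the image in CLIP's joint space.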
📝 Abstract
Image Quality Assessment (IQA) aims to evaluate the perceptual quality of images in line with human subjective perception. Existing methods generally combine multi-scale features to achieve high performance, but most rely on straightforward linear fusion of these features, which may not adequately capture the impact of distortions on semantic content. To address this, we propose BPCLIP, a bottom-up IQA approach built on Contrastive Language-Image Pre-training (CLIP), a model that aligns images and text in a shared embedding space; BPCLIP progressively extracts the impact of low-level distortions on high-level semantics. Specifically, we use an encoder to extract multi-scale features from the input image and introduce a bottom-up multi-scale cross-attention module designed to capture the relationships between shallow and deep features. In addition, by incorporating 40 image quality adjectives across six distinct dimensions, we enable the pre-trained CLIP text encoder to generate representations of the intrinsic quality of the image, strengthening the connection between image quality perception and human language. Our method achieves superior results on most public Full-Reference (FR) and No-Reference (NR) IQA benchmarks and demonstrates greater robustness under cross-dataset evaluation.
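The bottom-up fusion can be sketched as residual cross attention applied stage by stage. This is a minimal single-head sketch, not the paper's module: the number of stages, token counts, feature width, and the choice of deeper features as queries over the fused shallower representation are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d):
    # Single-head scaled dot-product cross attention (no learned projections,
    # which a real module would include).
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ keys_values

# Hypothetical multi-scale features: 4 encoder stages, each (tokens, dim);
# the abstract does not specify the real dimensions.
rng = np.random.default_rng(0)
d = 64
stages = [rng.standard_normal((16, d)) for _ in range(4)]  # shallow -> deep

# Bottom-up fusion: each deeper stage queries the fused shallower
# representation, so low-level distortion information propagates toward
# high-level semantics.
fused = stages[0]
for deep in stages[1:]:
    fused = deep + cross_attention(deep, fused, d)  # residual cross attention

print(fused.shape)  # (16, 64)
```

Iterating from the shallowest stage upward is what makes the fusion "bottom-up": the final representation carries low-level distortion cues into the semantic features that CLIP's text side is aligned with.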