BPCLIP: A Bottom-up Image Quality Assessment from Distortion to Semantics Based on CLIP

📅 2025-06-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image quality assessment (IQA) methods predominantly rely on linear fusion of multi-scale features, overlooking the progressive impact of distortions on high-level semantic content. Method: This paper brings CLIP into IQA through a bottom-up, semantics-aware framework: a multi-scale encoder extracts hierarchical features; a cross-layer attention mechanism models the propagation of low-level distortions toward high-level semantics; and 40 quality-descriptive adjectives spanning six dimensions are used to construct textual prompts, enabling fine-grained alignment between image quality and natural language. The method explicitly leverages CLIP's linguistic priors to model subjective quality perception. Contribution/Results: This yields improved interpretability and cross-dataset generalization. The approach achieves state-of-the-art performance on both full-reference and no-reference IQA benchmarks, including LIVE, CSIQ, TID2013, KonIQ-10k, and SPAQ, and demonstrates superior robustness under cross-dataset evaluation.

📝 Abstract
Image Quality Assessment (IQA) aims to evaluate the perceptual quality of images based on human subjective perception. Existing methods generally combine multiscale features to achieve high performance, but most rely on straightforward linear fusion of these features, which may not adequately capture the impact of distortions on semantic content. To address this, we propose a bottom-up image quality assessment approach based on the Contrastive Language-Image Pre-training (CLIP, a recently proposed model that aligns images and text in a shared feature space), named BPCLIP, which progressively extracts the impact of low-level distortions on high-level semantics. Specifically, we utilize an encoder to extract multiscale features from the input image and introduce a bottom-up multiscale cross attention module designed to capture the relationships between shallow and deep features. In addition, by incorporating 40 image quality adjectives across six distinct dimensions, we enable the pre-trained CLIP text encoder to generate representations of the intrinsic quality of the image, thereby strengthening the connection between image quality perception and human language. Our method achieves superior results on most public Full-Reference (FR) and No-Reference (NR) IQA benchmarks, while demonstrating greater robustness.
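The bottom-up multiscale cross attention described above can be sketched as follows. This is a minimal, stdlib-only illustration of single-head cross-attention between scales, not the authors' implementation: the token counts, feature dimension, and the choice of the deep feature as the query (so that low-level distortion evidence is propagated into the semantic representation) are all assumptions for the sake of the example.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention.

    Each query token attends over all key tokens and returns a
    weighted sum of the value tokens.
    """
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# Toy multiscale features, 4-dim each (hypothetical sizes):
shallow = [[1.0, 0.0, 0.0, 0.0],   # low-level (distortion-sensitive) tokens
           [0.0, 1.0, 0.0, 0.0]]
deep    = [[0.5, 0.5, 0.0, 0.0]]   # high-level (semantic) token

# Bottom-up direction: the deep token queries the shallow tokens,
# absorbing low-level distortion evidence into the semantic feature.
fused = cross_attention(deep, shallow, shallow)
```

Here both shallow tokens score equally against the deep query, so the fused output is their average; in a real model the attention weights would instead reflect which spatial scales carry the distortion.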
Problem

Research questions and friction points this paper is trying to address.

Assessing image quality by linking distortions to semantic impact
Improving feature fusion to better capture distortion effects
Enhancing quality perception alignment with human language descriptors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses CLIP for image-text alignment
Introduces bottom-up multiscale cross attention
Incorporates 40 quality adjectives for perception
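The prompt-construction step can be sketched like so. The paper uses 40 quality adjectives across six dimensions but this page does not enumerate them, so the dimension names, adjectives, and prompt template below are illustrative placeholders only; each generated string would be fed to the pre-trained CLIP text encoder.

```python
# Hypothetical adjectives per dimension -- placeholders standing in for
# the paper's 40 adjectives over six dimensions, which are not listed here.
QUALITY_DIMENSIONS = {
    "sharpness": ["sharp", "blurry"],
    "noise":     ["clean", "noisy"],
    "color":     ["vivid", "washed-out"],
    "contrast":  ["crisp", "flat"],
    "exposure":  ["well-exposed", "overexposed"],
    "overall":   ["good", "bad"],
}

def build_prompts(template="a {} photo"):
    """Expand every adjective into a CLIP-style text prompt."""
    return [template.format(adj)
            for adjs in QUALITY_DIMENSIONS.values()
            for adj in adjs]

prompts = build_prompts()
```

Scoring an image then reduces to comparing its CLIP image embedding against the embeddings of these prompts, so that quality perception is grounded in natural-language descriptors rather than a bare regression target.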
Chenyue Song
Harbin Institute of Technology
Chen Hui
Harbin Institute of Technology & Nanyang Technological University
image compression · quality assessment · multimedia security · image and video processing
Wei Zhang
Harbin Institute of Technology, China
Haiqi Zhu
Harbin Institute of Technology, China
Shaohui Liu
Harbin Institute of Technology, China
Hong Huang
Sichuan University of Science & Engineering, China
Feng Jiang
Harbin Institute of Technology, China