Mitigating Perception Bias: A Training-Free Approach to Enhance LMM for Image Quality Assessment

📅 2024-11-19
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Large multimodal models (LMMs) exhibit content bias in image quality assessment (IQA): their pretraining objectives emphasize high-level semantic understanding rather than perceptual fidelity, which distorts their quality judgments. To address this, we propose a training-free perceptual debiasing framework that (1) synthesizes semantic-preserving yet severely degraded distortion samples to construct a quality prior; (2) designs conditional prompts that steer LMM attention toward distortion-specific features; and (3) performs zero-shot cross-domain quality alignment via Bayesian-style aggregation over multiple priors. This work establishes the first "training-agnostic perceptual debiasing" paradigm, eliminating reliance on gradient updates or fine-tuning. Evaluated on multiple standard IQA benchmarks, the method achieves an average +0.12 improvement in Spearman rank-order correlation coefficient (SROCC), significantly outperforming fine-tuned baselines.
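The conditional prompting step described above can be illustrated with a minimal sketch. The template text and function below are hypothetical (the paper's actual prompts are not given here); the key idea from the summary is that the degraded image serves as a known poor-quality anchor, and naming the distortion directs the LMM's attention to distortion-specific cues.

```python
# Hypothetical prompt construction for the conditional inference step.
# The degraded version of the query image is presented as a known
# low-quality anchor, and the LMM rates the query image under that prior.
PROMPT_TEMPLATE = (
    "You are given two images. Image B is a heavily distorted version "
    "of Image A and should be considered of poor quality. Given that "
    "condition, rate the quality of Image A on a scale of 1 (bad) to "
    "5 (excellent). Answer with a single number."
)

def build_conditional_prompt(distortion_name: str) -> str:
    # Prefix the prior's distortion type so the model attends to
    # distortion-specific features rather than image semantics.
    return f"[prior: {distortion_name}] {PROMPT_TEMPLATE}"
```

One such prompt would be issued per degraded version, yielding one conditional quality estimate per prior.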

📝 Abstract
Despite the impressive performance of large multimodal models (LMMs) in high-level visual tasks, their capacity for image quality assessment (IQA) remains limited. One main reason is that LMMs are primarily trained for high-level tasks (e.g., image captioning), emphasizing unified image semantics extraction under varied quality. Such semantic-aware yet quality-insensitive perception bias inevitably leads to a heavy reliance on image semantics when those LMMs are forced to perform quality rating. In this paper, instead of costly retraining or fine-tuning of an LMM, we propose a training-free debiasing framework in which the image quality prediction is rectified by mitigating the bias caused by image semantics. Specifically, we first explore several semantic-preserving distortions that can significantly degrade image quality while maintaining identifiable semantics. By applying these specific distortions to the query or test images, we ensure that the degraded images are recognized as poor quality while their semantics mainly remain. During quality inference, both a query image and its corresponding degraded version are fed to the LMM along with a prompt indicating that the query image quality should be inferred under the condition that the degraded one is deemed poor quality. This prior condition effectively aligns the LMM's quality perception, as all degraded images are consistently rated as poor quality, regardless of their semantic variance. Finally, the quality scores of the query image inferred under different prior conditions (degraded versions) are aggregated using a conditional probability model. Extensive experiments on various IQA datasets show that our debiasing framework consistently enhances LMM performance.
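The semantic-preserving distortions described in the abstract can be sketched as follows. This is a hypothetical stand-in, not the paper's actual distortion set: repeated box blurring (approximating a wide Gaussian blur) destroys fine detail, which drives perceived quality down, while the global layout, and thus the coarse semantics, survives; additive noise further lowers quality.

```python
import numpy as np

def semantic_preserving_degrade(img, blur_passes=3, noise_sigma=0.1, seed=0):
    """Severely degrade perceptual quality while keeping coarse semantics.

    img: float array of shape (H, W, C) with values in [0, 1].
    """
    rng = np.random.default_rng(seed)
    out = img.astype(np.float64)
    # Repeated 3x3 box blur approximates a wide Gaussian blur:
    # fine detail (quality) is destroyed, global layout (semantics) kept.
    for _ in range(blur_passes):
        padded = np.pad(out, ((1, 1), (1, 1), (0, 0)), mode="edge")
        out = sum(
            padded[i:i + out.shape[0], j:j + out.shape[1]]
            for i in range(3) for j in range(3)
        ) / 9.0
    # Additive Gaussian noise further lowers perceptual quality.
    out += rng.normal(0.0, noise_sigma, out.shape)
    return np.clip(out, 0.0, 1.0)
```

A degraded copy produced this way would be paired with the query image at inference time as the poor-quality anchor.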
Problem

Research questions and friction points this paper is trying to address.

Mitigating semantic-aware perception bias in large multimodal models for image quality assessment
Enhancing LMM performance for quality rating without costly retraining or fine-tuning
Rectifying quality prediction by reducing reliance on image semantics through degradation techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free debiasing framework for LMMs
Semantic-preserving distortions to degrade quality
Conditional probability aggregation of quality scores
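The aggregation contribution listed above can be sketched as a simple conditional-probability model. The exact formulation used in the paper is not reproduced here; the sketch below assumes the LMM returns a distribution over discrete quality levels for each prior condition (each degraded version), treats the priors as equally likely, marginalizes them out, and reports the expected quality score.

```python
import numpy as np

# Discrete quality levels, e.g. a 5-point scale from bad to excellent.
QUALITY_LEVELS = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

def aggregate_quality(per_prior_probs):
    """Aggregate per-prior quality distributions into one score.

    per_prior_probs: array-like of shape (K, 5), where row k holds
    P(level | query image, prior condition k) for the K degraded versions.
    """
    probs = np.asarray(per_prior_probs, dtype=np.float64)
    probs = probs / probs.sum(axis=1, keepdims=True)  # normalize each row
    marginal = probs.mean(axis=0)  # P(level | query), uniform over priors
    return float(marginal @ QUALITY_LEVELS)  # expected quality score
```

For example, if one prior yields a one-hot distribution at level 5 and another at level 3, the aggregated score is 4.0.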
Siyi Pan
School of Computer Science, South China Normal University, China
Baoliang Chen
School of Computer Science, South China Normal University, China
Danni Huang
Hanwei Zhu
School of Computer Science and Engineering, Nanyang Technological University, Singapore
Zhu Li
Xiangjie Sui
Faculty of Data Science, City University of Macau
Shiqi Wang

image processing · visual quality assessment · computer vision