Exploring Instruction Data Quality for Explainable Image Quality Assessment

📅 2025-10-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Instruction tuning for explainable image quality assessment (IQA) suffers from high data redundancy and prohibitive computational costs. Method: We propose IQA-Select, a three-stage clustering-based data selection framework: (1) extracting multimodal semantic representations for clustering; (2) jointly optimizing intra-cluster diversity and task relevance to allocate sampling quotas; and (3) performing weighted sampling within each cluster. Contribution/Results: IQA-Select reveals substantial redundancy in existing IQA instruction datasets—only 10% of samples suffice to surpass full-dataset fine-tuning performance. On Q-Bench and AesBench, it achieves 102.1% and 103.7% of the full-tuning accuracy, respectively, while drastically reducing training overhead. This work provides the first systematic empirical validation that high-quality, compact instruction datasets effectively enhance multimodal large models’ quality perception capabilities, establishing a new paradigm for efficient IQA instruction engineering.

📝 Abstract
In recent years, with the rapid development of powerful multimodal large language models (MLLMs), explainable image quality assessment (IQA) has gradually become popular, aiming to provide quality-related descriptions of and answers about images. To achieve this goal, recent methods construct large-scale instruction tuning datasets to endow the MLLM with quality perception ability, following the well-known scaling law. However, a large amount of instruction tuning data incurs substantial computational costs and introduces redundant data, which can in turn harm model performance. To address this problem, in this paper we challenge the scaling law and systematically investigate the role of data quality in instruction tuning datasets for explainable IQA. Using a powerful pre-trained MLLM, we first investigate how model performance changes after fine-tuning with different sizes of instruction tuning data. We find that randomly selecting a subset of the dataset at an appropriate ratio can even lead to better results than training with the entire instruction tuning dataset, demonstrating the redundancy of current explainable IQA instruction tuning data. Beyond randomly sampling a subset, we propose a clustering-based data selection framework with three stages: clustering feature extraction, cluster quota allocation, and cluster sampling strategy. We then systematically analyze the choices in each stage and propose a simple but efficient data selection method, IQA-Select, for explainable IQA. The experimental results demonstrate that IQA-Select achieves 102.1% and 103.7% of full fine-tuning performance using only 10% of the data on Q-Bench and AesBench respectively, significantly reducing computational costs while achieving better performance.
Problem

Research questions and friction points this paper is trying to address.

Addressing data redundancy in explainable image quality assessment instruction tuning
Reducing computational costs while maintaining model performance in IQA
Developing efficient data selection methods for multimodal quality assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Clustering-based framework for instruction data selection
Three-stage method: feature extraction, quota allocation, sampling
Achieves superior performance using only 10% of data
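The three-stage pipeline above can be sketched in code. This is a minimal illustrative version, not the paper's exact method: it assumes feature embeddings are already extracted, uses plain k-means for clustering, and allocates each cluster's quota in proportion to its size times its intra-cluster diversity (a hypothetical weighting standing in for the paper's joint diversity/relevance optimization); the final stage uses uniform within-cluster sampling where the paper uses weighted sampling.

```python
import numpy as np

def kmeans(feats, k, iters=20, seed=0):
    """Plain k-means; returns a cluster label for each sample."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        # Distance of every sample to every center -> (n, k)
        d = np.linalg.norm(feats[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = feats[labels == j]
            if len(pts):  # skip clusters that emptied out
                centers[j] = pts.mean(axis=0)
    return labels

def select_subset(feats, budget, k=10, seed=0):
    """Three-stage selection: (1) cluster features, (2) allocate quotas,
    (3) sample within clusters. Quota weighting here is illustrative."""
    rng = np.random.default_rng(seed)
    labels = kmeans(feats, k, seed=seed)
    # Stage 2: quota proportional to cluster size * mean distance to centroid
    weights = np.zeros(k)
    for j in range(k):
        pts = feats[labels == j]
        if len(pts):
            diversity = np.linalg.norm(pts - pts.mean(axis=0), axis=1).mean()
            weights[j] = len(pts) * (diversity + 1e-8)
    quotas = np.floor(budget * weights / weights.sum()).astype(int)
    # Stage 3: sample each cluster's quota without replacement
    chosen = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        n = min(quotas[j], len(idx))
        if n:
            chosen.extend(rng.choice(idx, n, replace=False))
    return np.array(chosen)
```

With a 10% budget, `select_subset(feats, budget=len(feats) // 10)` would return the indices of the selected instruction samples to keep for fine-tuning.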