Describe-to-Score: Text-Guided Efficient Image Complexity Assessment

📅 2025-09-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image complexity assessment methods predominantly rely on low-level visual features, lacking semantic understanding and thus suffering from limited generalizability and accuracy. To address this, we propose D2S, a novel framework that pioneers the integration of vision-language alignment into image complexity estimation. D2S jointly aligns visual features and entropy distributions between a pre-trained vision-language model (VLM) and the image encoder, leveraging multimodal training while enabling efficient unimodal (vision-only) inference, which eliminates textual encoding overhead at test time. This design significantly enhances semantic representation capability while preserving computational efficiency. Evaluated on IC9600, D2S achieves state-of-the-art performance; it also demonstrates strong generalization on no-reference image quality assessment benchmarks. These results validate the efficacy and practicality of vision-text co-modeling for image complexity assessment.
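The summary describes two training-time objectives: feature alignment and entropy distribution alignment between the VLM text branch and the image encoder. As a rough illustration, here is a minimal PyTorch sketch, assuming the feature alignment is a cosine-distance term and the entropy distribution alignment is a KL divergence between softmax-normalized feature distributions; the function names, the temperature `tau`, and the loss weights are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def feature_alignment_loss(img_feat, txt_feat):
    # Pull the image embedding toward the (frozen) VLM text embedding.
    # Cosine distance is an assumption; the paper's exact form may differ.
    img_feat = F.normalize(img_feat, dim=-1)
    txt_feat = F.normalize(txt_feat, dim=-1)
    return (1.0 - (img_feat * txt_feat).sum(dim=-1)).mean()

def entropy_alignment_loss(img_feat, txt_feat, tau=1.0):
    # Match softmax-normalized feature distributions across modalities with
    # a KL term, one plausible reading of "entropy distribution alignment".
    log_p_img = F.log_softmax(img_feat / tau, dim=-1)
    p_txt = F.softmax(txt_feat / tau, dim=-1)
    return F.kl_div(log_p_img, p_txt, reduction="batchmean")

def d2s_training_loss(score_pred, score_gt, img_feat, txt_feat,
                      w_feat=1.0, w_ent=1.0):
    # Complexity-score regression plus the two alignment terms.
    return (F.mse_loss(score_pred, score_gt)
            + w_feat * feature_alignment_loss(img_feat, txt_feat)
            + w_ent * entropy_alignment_loss(img_feat, txt_feat))
```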

📝 Abstract
Accurately assessing image complexity (IC) is critical for computer vision, yet most existing methods rely solely on visual features and often neglect high-level semantic information, limiting their accuracy and generalization. We introduce vision-text fusion for IC modeling. This approach integrates visual and textual semantic features, increasing representational diversity. It also reduces the complexity of the hypothesis space, which enhances both accuracy and generalization in complexity assessment. We propose the D2S (Describe-to-Score) framework, which generates image captions with a pre-trained vision-language model. Through the proposed feature alignment and entropy distribution alignment mechanisms, D2S guides semantic information to inform complexity assessment while bridging the gap between the vision and text modalities. D2S utilizes multi-modal information during training but requires only the vision branch during inference, thereby avoiding multi-modal computational overhead and enabling efficient assessment. Experimental results demonstrate that D2S outperforms existing methods on the IC9600 dataset and remains competitive on no-reference image quality assessment (NR-IQA) benchmarks, validating the effectiveness and efficiency of multi-modal fusion in complexity-related tasks. Code is available at: https://github.com/xauat-liushipeng/D2S
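The abstract states that captions come from a pre-trained vision-language model but does not name the checkpoint here. Below is a minimal sketch of that describe step, using BLIP from Hugging Face `transformers` purely as a stand-in captioner; the actual VLM used by D2S is an assumption.

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Generic off-the-shelf captioner; D2S's actual VLM is not specified here.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")

def describe(image_path: str) -> str:
    """Generate the caption whose text features supervise the vision branch."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    out = captioner.generate(**inputs, max_new_tokens=30)
    return processor.decode(out[0], skip_special_tokens=True)
```

At training time the caption would be passed through the VLM text encoder to obtain text features for the alignment losses; at inference no caption is generated at all.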
Problem

Research questions and friction points this paper is trying to address.

Assessing image complexity using only visual features lacks semantic information
Bridging the gap between visual and textual modalities for complexity assessment
Reducing computational overhead while maintaining accuracy in image complexity evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-text fusion for complexity modeling
Feature and entropy alignment mechanisms
Multi-modal training with vision-only inference (a minimal sketch follows this list)
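A minimal sketch of the asymmetry named in the last item: a two-branch model whose text features are consumed only during training, so inference touches the vision branch alone. The module names and the tiny encoder are illustrative placeholders, not the paper's architecture.

```python
import torch.nn as nn

class D2SLike(nn.Module):
    """Illustrative two-branch setup: text features exist only at training time."""
    def __init__(self, dim=512):
        super().__init__()
        # Placeholder vision encoder; D2S's real backbone is not specified here.
        self.vision_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim))
        self.head = nn.Linear(dim, 1)  # scalar complexity score

    def forward(self, image, txt_feat=None):
        img_feat = self.vision_encoder(image)
        score = self.head(img_feat).squeeze(-1)
        if self.training and txt_feat is not None:
            # Training: expose both feature sets for the alignment losses.
            return score, img_feat, txt_feat
        # Inference: vision-only, no text encoding overhead.
        return score
```

After `model.eval()`, calling `model(image)` never touches any text computation, which is the source of the claimed inference efficiency.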
Shipeng Liu
Xi’an University of Architecture and Technology
Zhonglin Zhang
Xi’an University of Architecture and Technology
Dengfeng Chen
Xi’an University of Architecture and Technology
Liang Zhao
Xi’an University of Architecture and Technology