Scene Perceived Image Perceptual Score (SPIPS): combining global and local perception for image quality assessment

📅 2025-04-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the growing mismatch between traditional no-reference image quality assessment (IQA) methods and human visual perception, exacerbated by the surge in AI-generated and mobile-captured images, this paper proposes a no-reference IQA model that jointly captures global semantics and local details. Methodologically, it disentangles deep features into a high-level semantic stream and a low-level perceptual stream, enabling multi-scale perceptual modeling. The model integrates conventional metrics (PSNR, LPIPS) with these deep representations and is optimized explicitly for consistency with human judgments; quality prediction is performed end-to-end via an MLP regression head. Evaluated on multiple benchmark datasets, the model achieves SROCC and Pearson correlation coefficients exceeding 0.92 with human subjective scores, outperforming state-of-the-art IQA methods and narrowing the gap between deep learning-based reconstruction fidelity and human perceptual quality.

📝 Abstract
The rapid advancement of artificial intelligence and widespread use of smartphones have resulted in an exponential growth of image data, both real (camera-captured) and virtual (AI-generated). This surge underscores the critical need for robust image quality assessment (IQA) methods that accurately reflect human visual perception. Traditional IQA techniques primarily rely on spatial features - such as signal-to-noise ratio, local structural distortions, and texture inconsistencies - to identify artifacts. While effective for unprocessed or conventionally altered images, these methods fall short in the context of modern image post-processing powered by deep neural networks (DNNs). The rise of DNN-based models for image generation, enhancement, and restoration has significantly improved visual quality, yet made accurate assessment increasingly complex. To address this, we propose a novel IQA approach that bridges the gap between deep learning methods and human perception. Our model disentangles deep features into high-level semantic information and low-level perceptual details, treating each stream separately. These features are then combined with conventional IQA metrics to provide a more comprehensive evaluation framework. This hybrid design enables the model to assess both global context and intricate image details, better reflecting the human visual process, which first interprets overall structure before attending to fine-grained elements. The final stage employs a multilayer perceptron (MLP) to map the integrated features into a concise quality score. Experimental results demonstrate that our method achieves improved consistency with human perceptual judgments compared to existing IQA models.
Problem

Research questions and friction points this paper is trying to address.

Develops a hybrid image quality assessment framework combining global and local features
Addresses the limitations of traditional IQA when evaluating DNN-processed images
Aligns deep-learning features with human visual perception mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines global and local perception features
Disentangles deep features into semantic and perceptual streams
Fuses hybrid metrics via an MLP to regress a quality score
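The pipeline described above (two disentangled feature streams, fused with a conventional metric, then regressed to a scalar score by an MLP) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimensions, random weights, and the `mlp_score` helper are all hypothetical, and PSNR (a full-reference metric) stands in here only to show how a classical signal measure is concatenated with deep features before regression.

```python
import numpy as np

def psnr(ref, img, max_val=1.0):
    # Classical signal-fidelity metric, used here only to illustrate fusion
    mse = np.mean((ref - img) ** 2)
    return float("inf") if mse == 0 else 20 * np.log10(max_val) - 10 * np.log10(mse)

def mlp_score(features, weights, biases):
    # Hypothetical MLP regression head: ReLU hidden layers, scalar output
    x = features
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)
    return float(x @ weights[-1] + biases[-1])

rng = np.random.default_rng(0)

# Stand-ins for the two disentangled deep-feature streams
semantic = rng.normal(size=16)    # high-level semantic stream (global context)
perceptual = rng.normal(size=16)  # low-level perceptual stream (local detail)

# Toy image pair for the classical-metric term
ref = rng.random((8, 8))
img = np.clip(ref + rng.normal(0.0, 0.05, (8, 8)), 0.0, 1.0)

# Fuse deep features with the conventional metric, then regress
fused = np.concatenate([semantic, perceptual, [psnr(ref, img)]])
W1, b1 = rng.normal(size=(fused.size, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
score = mlp_score(fused, [W1, W2], [b1, b2])
```

In the paper the weights are learned end-to-end against human subjective scores; here they are random, so `score` is meaningful only as a shape/flow check.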