🤖 AI Summary
Addressing the challenge of jointly evaluating low-level perceptual distortions and high-level semantic consistency in AIGC image quality assessment, this paper proposes a multi-level visual-representation evaluation paradigm with three stages: multi-level feature extraction, hierarchical fusion, and joint aggregation. Two networks instantiate the paradigm: MGLF-Net, which extracts complementary local and global features via dual CNN and Transformer backbones for perceptual quality assessment, and MPEF-Net, which embeds prompt semantics into the visual feature fusion at each level for text-to-image correspondence. Experiments on AIGC quality assessment benchmarks show consistent gains over prior methods on both tasks, demonstrating the effectiveness of cross-level visual representation learning coupled with text-guided evaluation.
📝 Abstract
The quality assessment of AI-generated content (AIGC) faces multi-dimensional challenges that span from low-level visual perception to high-level semantic understanding. Existing methods generally rely on single-level visual features, limiting their ability to capture the complex distortions in AIGC images. To address this limitation, a multi-level visual representation paradigm is proposed with three stages: multi-level feature extraction, hierarchical fusion, and joint aggregation. Based on this paradigm, two networks are developed. Specifically, the Multi-Level Global-Local Fusion Network (MGLF-Net) is designed for perceptual quality assessment, extracting complementary local and global features via dual CNN and Transformer visual backbones. The Multi-Level Prompt-Embedded Fusion Network (MPEF-Net) targets text-to-image correspondence by embedding prompt semantics into the visual feature fusion process at each feature level. The fused multi-level features are then aggregated for the final evaluation. Experiments on benchmark datasets demonstrate strong performance on both tasks, validating the effectiveness of the proposed multi-level visual assessment paradigm.
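The three-stage paradigm described above (multi-level feature extraction → hierarchical fusion → joint aggregation) can be sketched in plain Python. This is only an illustrative skeleton, not the authors' implementation: the level count, feature dimension, random stand-in features, and the stub aggregation head are all assumptions, and `fuse_level`'s optional `prompt_emb` argument mimics the MPEF-Net variant that injects prompt semantics at each level.

```python
import random

random.seed(0)

def rand_vec(dim):
    """Random stand-in feature vector (replaces real backbone outputs)."""
    return [random.gauss(0.0, 1.0) for _ in range(dim)]

def extract_multilevel_features(num_levels=3, dim=64):
    # Stage 1 — stand-in for the dual CNN (local) and Transformer
    # (global) backbones: one (local, global) feature pair per level.
    return [(rand_vec(dim), rand_vec(dim)) for _ in range(num_levels)]

def fuse_level(local_feat, global_feat, prompt_emb=None):
    # Stage 2 — hierarchical fusion at one level: concatenate the
    # complementary local/global features; the MPEF-Net-style variant
    # also appends a prompt embedding for text-to-image correspondence.
    fused = local_feat + global_feat
    if prompt_emb is not None:
        fused = fused + prompt_emb
    return fused

def aggregate(fused_levels):
    # Stage 3 — joint aggregation: mean-pool each level's fused
    # features, then average across levels into one scalar score
    # (a fixed stub in place of a learned regression head).
    return sum(sum(f) / len(f) for f in fused_levels) / len(fused_levels)

prompt = rand_vec(64)                      # dummy prompt embedding
levels = extract_multilevel_features()
fused = [fuse_level(loc, glob, prompt) for loc, glob in levels]
score = aggregate(fused)
print(f"predicted quality score: {score:.4f}")
```

In a real system, the random vectors would come from pretrained visual backbones and a text encoder, and the aggregation stub would be a trained regression head; the sketch only shows how the three stages compose.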