🤖 AI Summary
Existing full-reference image quality assessment methods rely heavily on pristine reference images, limiting their practical applicability. Inspired by the human visual system's use of visual memory for perceptual judgment, this work proposes a Memory-driven Quality-Aware Framework (MQAF), which introduces visual memory mechanisms into image quality evaluation for the first time. MQAF constructs a distortion-pattern memory bank and integrates adaptive weighting of reference information, distortion-pattern matching, and a dual-mode dynamic switching mechanism within a unified architecture to jointly support both full-reference and no-reference assessment. Experimental results demonstrate that MQAF consistently outperforms state-of-the-art methods across multiple benchmark datasets, significantly reducing dependence on ideal reference images while achieving leading performance in both assessment paradigms.
📄 Abstract
Existing full-reference image quality assessment (FR-IQA) methods achieve high-precision evaluation by analysing feature differences between reference and distorted images. However, their performance is constrained by the quality of the reference image, which limits real-world applications where ideal reference sources are unavailable. Notably, the human visual system accumulates visual memory, allowing image quality to be judged from long-term memory rather than a side-by-side reference. Inspired by this biological memory mechanism, we propose a memory-driven quality-aware framework (MQAF), which establishes a memory bank for storing distortion patterns and dynamically switches between dual-mode quality assessment strategies to reduce reliance on high-quality reference images. When a reference image is available, MQAF obtains reference-guided quality scores by adaptively weighting reference information and comparing the distorted image with stored distortion patterns in the memory bank. When the reference image is absent, the framework relies on distortion patterns in the memory bank to infer image quality, enabling no-reference quality assessment (NR-IQA). Experimental results show that our method outperforms state-of-the-art approaches across multiple datasets while adapting to both no-reference and full-reference tasks.
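The dual-mode behaviour described above can be sketched in a few lines. The following is a purely illustrative toy, not the paper's implementation: the `MemoryBank` class, its prototype features, the soft nearest-prototype matching, and the fixed blending weight `alpha` are all assumptions introduced here to make the full-reference/no-reference switching concrete.

```python
import numpy as np

class MemoryBank:
    """Toy memory bank of distortion-pattern prototype features.

    Hypothetical stand-in: how MQAF actually constructs and queries its
    memory bank is not specified in the abstract.
    """
    def __init__(self, prototypes, quality_scores):
        self.prototypes = np.asarray(prototypes, dtype=float)       # shape (K, D)
        self.quality_scores = np.asarray(quality_scores, dtype=float)  # shape (K,)

    def match(self, feat):
        # Soft nearest-prototype matching: weight each stored quality
        # score by similarity between the query feature and the prototype.
        dists = np.linalg.norm(self.prototypes - feat, axis=1)
        weights = np.exp(-dists)
        weights /= weights.sum()
        return float(weights @ self.quality_scores)

def assess_quality(distorted_feat, bank, reference_feat=None, alpha=0.5):
    """Dual-mode switching sketch: FR-IQA when a reference feature is
    given, NR-IQA from the memory bank alone otherwise."""
    memory_score = bank.match(distorted_feat)
    if reference_feat is None:
        return memory_score  # no-reference branch
    # Full-reference branch: blend a simple reference-similarity term
    # with the memory-based score (alpha is an illustrative weight, not
    # the paper's adaptive weighting).
    ref_sim = 1.0 / (1.0 + np.linalg.norm(reference_feat - distorted_feat))
    return alpha * ref_sim + (1.0 - alpha) * memory_score
```

The same entry point serves both tasks: callers with a pristine reference pass its feature vector, while callers without one simply omit it, mirroring the framework's claim of unifying FR-IQA and NR-IQA in one architecture.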