🤖 AI Summary
Machine vision systems (MVS) suffer severe performance degradation under adverse visual conditions, yet conventional human vision system (HVS)-oriented image quality assessment (IQA) methods are inadequate for evaluating machine-perceived quality. To address this, we propose a machine-centric IQA (MIQA) framework. Our contributions are threefold: (1) We introduce MIQD-2.5M, the first large-scale machine perception degradation database, encompassing 75 vision models, 250 degradation types, and three core vision tasks; (2) We design RA-MIQA, a region-aware MIQA model that jointly models consistency and accuracy to enable fine-grained spatial degradation sensitivity analysis; (3) We uncover task-specific degradation impact mechanisms and empirically demonstrate the weak correlation between HVS-based metrics and MVS performance. Experiments show RA-MIQA achieves 13.56% and 13.37% improvements in Spearman rank correlation coefficient (SRCC) for consistency and accuracy, respectively, on classification tasks, significantly outperforming seven classical IQA methods and five retrained backbone networks.
📝 Abstract
Machine vision systems (MVS) are intrinsically vulnerable to performance degradation under adverse visual conditions. To address this, we propose a machine-centric image quality assessment (MIQA) framework that quantifies the impact of image degradations on MVS performance. We establish an MIQA paradigm encompassing the end-to-end assessment workflow. To support this, we construct a machine-centric image quality database (MIQD-2.5M), comprising 2.5 million samples that capture distinctive degradation responses in both consistency and accuracy metrics, spanning 75 vision models, 250 degradation types, and three representative vision tasks. We further propose a region-aware MIQA (RA-MIQA) model to evaluate MVS visual quality through fine-grained spatial degradation analysis. Extensive experiments benchmark the proposed RA-MIQA against seven human visual system (HVS)-based IQA metrics and five retrained classical backbones. Results demonstrate RA-MIQA's superior performance across multiple dimensions, e.g., achieving SRCC gains of 13.56% on consistency and 13.37% on accuracy for image classification, while also revealing task-specific degradation sensitivities. Critically, HVS-based metrics prove inadequate for MVS quality prediction, while even specialized MIQA models struggle with background degradations, accuracy-oriented estimation, and subtle distortions. This study advances MVS reliability and lays foundations for machine-centric image processing and optimization. The model and code are available at: https://github.com/XiaoqiWang/MIQA.