🤖 AI Summary
To address degraded face recognition performance in real-time screening caused by low-quality face images—such as those affected by motion blur, poor illumination, occlusion, and large pose variations—this paper proposes a lightweight face quality assessment (FQA) framework. Methodologically, it integrates normalized facial landmark representations with a Random Forest regression model to decouple quality estimation from resolution variability and pose-induced distortions. The FQA module is tightly coupled with an ArcFace verifier in a real-time inference pipeline to enable proactive quality filtering. Evaluated on a real-world Dubai Police CCTV dataset, the framework achieves 96.67% quality assessment accuracy and reduces the false rejection rate by 99.7%, while yielding a more concentrated cosine similarity distribution and higher end-to-end verification accuracy. To our knowledge, this is the first work to combine geometrically normalized landmarks with a lightweight regression model for FQA in dynamic surveillance settings, achieving both computational efficiency and robustness.
📝 Abstract
Face image quality plays a critical role in determining the accuracy and reliability of face verification systems, particularly in real-time screening applications such as surveillance, identity verification, and access control. Low-quality face images, often caused by factors such as motion blur, poor lighting conditions, occlusions, and extreme pose variations, significantly degrade the performance of face recognition models, leading to higher false rejection and false acceptance rates. In this work, we propose a lightweight yet effective framework for automatic face quality assessment, which pre-filters low-quality face images before they are passed to the verification pipeline. Our approach utilises normalised facial landmarks in conjunction with a Random Forest regressor to assess image quality, achieving an accuracy of 96.67%. By integrating this quality assessment module into the face verification process, we observe a substantial improvement in performance, including a 99.7% reduction in the false rejection rate and enhanced cosine similarity scores when paired with the ArcFace face verification model. To validate our approach, we conducted experiments on a real-world dataset comprising over 600 subjects, captured from CCTV footage in unconstrained environments by Dubai Police. Our results demonstrate that the proposed framework effectively mitigates the impact of poor-quality face images, outperforming existing face quality assessment techniques while maintaining computational efficiency. Moreover, the framework specifically addresses two critical challenges in real-time screening: variations in face resolution and pose deviations, both of which are prevalent in practical surveillance scenarios.
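The pipeline described above—normalising facial landmarks so that quality prediction is invariant to face resolution and position, scoring each face with a Random Forest regressor, and filtering before verification—can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's implementation: the five-point landmark template, the centroid/scale normalisation scheme, the synthetic quality labels, and the 0.5 acceptance threshold are all assumptions for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def normalize_landmarks(landmarks):
    """Translation- and scale-normalize a (K, 2) array of facial landmarks.

    Centering on the landmark centroid and dividing by the overall spread
    removes dependence on face resolution and position in the frame
    (an assumed normalization; the paper's exact scheme may differ).
    """
    pts = np.asarray(landmarks, dtype=float)
    centered = pts - pts.mean(axis=0)
    scale = np.linalg.norm(centered)
    return (centered / scale).ravel() if scale > 0 else centered.ravel()

# Hypothetical 5-point frontal template: eyes, nose tip, mouth corners.
frontal = np.array([[0.3, 0.3], [0.7, 0.3], [0.5, 0.5], [0.35, 0.7], [0.65, 0.7]])

# Synthetic training set: landmark jitter stands in for pose/blur degradation,
# and a random scale factor stands in for varying face resolution.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):
    jitter = rng.normal(0.0, 0.05, frontal.shape)
    scale = rng.uniform(20.0, 200.0)                    # face size in pixels
    X.append(normalize_landmarks((frontal + jitter) * scale))
    y.append(float(np.exp(-20.0 * np.abs(jitter).mean())))  # synthetic quality label

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Quality-gate a new detection: only faces scoring above the (assumed)
# threshold would be forwarded to the ArcFace verifier.
score = float(model.predict([normalize_landmarks(frontal * 50.0)])[0])
accept = score >= 0.5
```

Because the landmarks are normalised before regression, the same face detected at 20 px or 200 px produces an identical feature vector, which is what lets a single lightweight model handle the resolution variation the abstract highlights.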