🤖 AI Summary
Existing mainstream face verification benchmarks (e.g., LFW) suffer from identity and image overlap with training data, which inflates accuracy estimates and fails to reflect real-world challenges such as large intra-class attribute variation, high inter-class visual similarity (e.g., relatives and twins), and the difficulty of fine-grained discrimination. Method: This work systematically characterizes the verification bias induced by non-disjoint identities and introduces two rigorously constructed "Goldilocks" benchmarks: Hadrian (controlling facial-hairstyle variation) and Eclipse (controlling illumination and exposure variation). Both are strictly identity- and image-disjoint from all major training datasets and were built under controlled acquisition protocols. The work also proposes an identity-disjoint evaluation protocol and shows that identity-disjoint folds matter in the 10-fold cross-validation estimate of accuracy. Contribution/Results: Experiments show that state-of-the-art models suffer significant accuracy drops (3-8%) on the new benchmarks relative to LFW, validating them as more demanding, next-generation evaluation standards for robust face verification.
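The overlap problem described above can be audited with simple set operations. The sketch below is illustrative only (the function names and the MD5-based exact-duplicate check are my assumptions, not the paper's tooling); it flags shared identity labels and byte-identical images between a train and a test set.

```python
import hashlib
from pathlib import Path

def identity_overlap(train_ids, test_ids):
    """Identity labels that appear in BOTH the train and test sets.

    A non-empty result means the test set is not identity-disjoint,
    which can optimistically bias estimated accuracy.
    """
    return set(train_ids) & set(test_ids)

def image_overlap(train_dir, test_dir, pattern="*.jpg"):
    """Test images whose exact byte content also appears in the train set.

    Detected by hashing every file with MD5; this catches only exact
    duplicates, not re-encoded or cropped copies of the same photo.
    """
    train_hashes = {
        hashlib.md5(p.read_bytes()).hexdigest()
        for p in Path(train_dir).rglob(pattern)
    }
    return [
        p for p in Path(test_dir).rglob(pattern)
        if hashlib.md5(p.read_bytes()).hexdigest() in train_hashes
    ]
```

In practice, near-duplicate detection (e.g., perceptual hashing) would be needed on web-scraped data, since the same photo often recurs at different resolutions or compression levels.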
📝 Abstract
Face recognition models are commonly trained on web-scraped datasets containing millions of images and evaluated on test sets emphasizing pose, age, and mixed attributes. With train and test sets both assembled from web-scraped images, it is critical to ensure disjoint sets of identities between them. However, existing train and test sets were not assembled with this constraint. Moreover, as accuracy levels saturate (e.g., LFW $>99.8\%$), more challenging test sets are needed. We show that current train and test sets are generally not identity-disjoint, or even image-disjoint, and that this results in an optimistic bias in the estimated accuracy. In addition, we show that identity-disjoint folds are important in the 10-fold cross-validation estimate of test accuracy. To better support continued advances in face recognition, we introduce two "Goldilocks" test sets, Hadrian and Eclipse. The former emphasizes challenging facial hairstyles and the latter challenging over- and under-exposure conditions. Images in both datasets come from a large, controlled-acquisition (not web-scraped) dataset, so they are identity- and image-disjoint from all popular training sets. Accuracy on these new test sets generally falls below that observed on LFW, CPLFW, CALFW, CFP-FP, and AgeDB-30, showing that they represent important dimensions for improvement of face recognition. The datasets are available at: https://github.com/HaiyuWu/SOTA-Face-Recognition-Train-and-Test
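The identity-disjoint 10-fold protocol mentioned in the abstract can be sketched as follows. This is a minimal illustration under my own assumptions (the function name and the drop-cross-fold-pairs policy are hypothetical, not taken from the paper): identities, not pairs, are partitioned across folds, so no identity contributes pairs to more than one fold.

```python
import random
from collections import defaultdict

def identity_disjoint_folds(pairs, n_folds=10, seed=0):
    """Assign verification pairs to folds so that no identity appears
    in more than one fold.

    `pairs` is a list of (id_a, id_b, same_label) tuples. Identities
    are shuffled and dealt round-robin into folds; a pair is kept only
    if both of its identities landed in the same fold. Dropping
    cross-fold pairs is the price of strict identity disjointness.
    """
    identities = sorted({i for a, b, _ in pairs for i in (a, b)})
    rng = random.Random(seed)
    rng.shuffle(identities)
    fold_of = {ident: k % n_folds for k, ident in enumerate(identities)}
    folds = defaultdict(list)
    for a, b, label in pairs:
        if fold_of[a] == fold_of[b]:  # keep only within-fold pairs
            folds[fold_of[a]].append((a, b, label))
    return [folds[k] for k in range(n_folds)]
```

With identity-disjoint folds, the held-out fold in each cross-validation round contains no identity seen while choosing the decision threshold on the other nine folds, which is the condition the abstract argues matters for an unbiased accuracy estimate.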