On the"Illusion"of Gender Bias in Face Recognition: Explaining the Fairness Issue Through Non-demographic Attributes

📅 2025-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Gender accuracy disparities in face recognition are commonly attributed to biological sex, yet this work argues they stem from socially constructed facial appearance rather than innate physiological differences. Method: the authors decorrelate and aggregate 40 non-demographic facial attributes to enable a less-biased gender analysis on large-scale data; introduce two fairness metrics that measure fairness with and without context; and present an unsupervised, interpretable algorithm that identifies attribute combinations which, used as filter predicates for balanced test sets, make the gender gap vanish. Contribution/Results: when images of male and female subjects share the identified appearance attributes, the accuracy gap between the groups disappears, indicating that the apparent gender bias arises from the social definition of appearance rather than from biology. The work shifts fairness analysis from coarse demographic categories to fine-grained, semantically meaningful appearance attributes, enabling interpretable, unsupervised fairness assessment and intervention.
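The gender accuracy gap referred to above is not formally defined on this page. As a rough illustration only, the sketch below shows how such a gap could be computed for a verification-style protocol; the function name, inputs, and threshold-based decision rule are assumptions, not the paper's exact metric.

```python
import numpy as np

def accuracy_gap(scores, labels, genders, threshold):
    """Hypothetical illustration of a gender accuracy gap.

    scores:    similarity scores for verification pairs
    labels:    1 for genuine pairs, 0 for impostor pairs
    genders:   'male' / 'female' tag associated with each pair
    threshold: decision threshold of the face recognition system
    """
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    genders = np.asarray(genders)

    # Accept a pair when its similarity score reaches the threshold.
    decisions = (scores >= threshold).astype(int)

    acc = {}
    for g in ("male", "female"):
        mask = genders == g
        acc[g] = float((decisions[mask] == labels[mask]).mean())

    # A gap of 0 would mean both groups are recognised equally well.
    return abs(acc["male"] - acc["female"]), acc
```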

📝 Abstract
Face recognition systems (FRS) exhibit significant accuracy differences based on the user's gender. Since such a gender gap reduces the trustworthiness of FRS, more recent efforts have tried to find the causes. However, these studies make use of manually selected, correlated, and small-sized sets of facial features to support their claims. In this work, we analyse gender bias in face recognition by successfully extending the search domain to decorrelated combinations of 40 non-demographic facial characteristics. First, we propose a toolchain to effectively decorrelate and aggregate facial attributes to enable a less-biased gender analysis on large-scale data. Second, we introduce two new fairness metrics to measure fairness with and without context. Based on these grounds, we thirdly present a novel unsupervised algorithm able to reliably identify attribute combinations that lead to vanishing bias when used as filter predicates for balanced testing datasets. The experiments show that the gender gap vanishes when images of male and female subjects share specific attributes, clearly indicating that the issue is not a question of biology but of the social definition of appearance. These findings could reshape our understanding of fairness in face biometrics and provide insights into FRS, helping to address gender bias issues.
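The abstract describes using attribute combinations as filter predicates for balanced testing datasets, but does not spell out the search procedure here. The following minimal sketch illustrates that idea under stated assumptions: the `df`/`eval_gap` inputs, the CelebA-style binary attribute columns, and the brute-force search over small combinations are all placeholders and do not reproduce the paper's decorrelation toolchain or its unsupervised algorithm.

```python
import itertools
import pandas as pd

# Assumed inputs (placeholders, not from the paper):
#   df        - one row per test image, with a 'gender' column and 40 binary
#               attribute columns (e.g. CelebA-style 'Smiling', 'Eyeglasses').
#   eval_gap  - callable returning the FRS accuracy gap on a given subset.

def smallest_gap_filter(df, attributes, eval_gap, max_size=3):
    """Try small attribute combinations as filter predicates and keep the
    one that minimises the gender accuracy gap on the filtered subset."""
    best_combo, best_gap = None, float("inf")
    for k in range(1, max_size + 1):
        for combo in itertools.combinations(attributes, k):
            # Keep only images where every attribute in the combo is present,
            # so male and female subjects share the same appearance traits.
            subset = df[(df[list(combo)] == 1).all(axis=1)]
            if subset["gender"].nunique() < 2 or len(subset) < 100:
                continue  # need both groups and enough samples
            gap = eval_gap(subset)
            if gap < best_gap:
                best_combo, best_gap = combo, gap
    return best_combo, best_gap
```

The brute-force enumeration here is only for readability; the paper's own approach searches the decorrelated attribute space with an unsupervised procedure rather than exhaustively testing combinations.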
Problem

Research questions and friction points this paper is trying to address.

Facial Recognition Disparity
Gender Bias
System Fairness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Facial Attributes Analysis
Fairness Assessment Methods
Equitable Performance Technique