🤖 AI Summary
This paper addresses two fundamental flaws in automatic gender recognition (AGR): (1) the conflation of social gender with biological sex (male/female), and (2) reliance on superficial visual cues for binary classification, which leads to frequent misgendering and poor fairness for non-binary and transgender people. The authors propose a paradigm shift: replacing predictive binary classification with a human-in-the-loop feedback interface that enables user correction and self-identification. To their knowledge, this is the first work to introduce a post-misgendering correction mechanism into AGR, formalizing a conceptual framework of gender and defining a non-binary-inclusive interaction protocol. The approach theoretically decouples gender identity, biological sex, and gender expression; the authors argue that it can significantly reduce misgendering and increase fairness, and that AGR systems should be evaluated with gender self-determination at the center.
📝 Abstract
Automatic Gender Recognition (AGR) systems are an increasingly widespread application in the Machine Learning (ML) landscape. While these systems are typically understood as detecting gender, they often classify datapoints based on observable features correlated, at best, with either male or female sex. Beyond their questionable binary assumptions, this is epistemologically problematic for two reasons. First, there exists a gap between the categories the system is meant to predict (woman versus man) and those onto which its output reasonably maps (female versus male). Second, gender cannot be inferred from such observable features at all. This often makes AGR tools unreliable, especially for non-binary and gender non-conforming people. We suggest a theoretical and practical rethinking of AGR systems. To begin, distinctions are made between sex, gender, and gender expression. Then, we build on the observation that, unlike algorithmic misgendering, human-human misgendering is open to re-evaluation and correction. We suggest that analogous dynamics should be recreated in AGR, giving users the possibility to correct the system's output. While implementing such a feedback mechanism could be regarded as diminishing the system's autonomy, it represents a way to significantly increase fairness levels in AGR. This is consistent with the conceptual change of paradigm that we advocate for AGR systems, which should be understood as tools respecting individuals' rights and capabilities of self-expression and self-determination.
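The correction dynamic the abstract describes can be made concrete with a small sketch. The snippet below is a hypothetical illustration (the label set, class names, and `correct` function are assumptions, not part of the paper): it separates a tentative model prediction from an authoritative user self-identification, so that user feedback always overrides the classifier, mirroring human-human misgendering repair.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, non-binary-inclusive label set; illustrative only.
LABELS = ["woman", "man", "non-binary", "self-described", "prefer not to say"]

@dataclass
class GenderRecord:
    """Keeps the system's guess and the user's self-identification apart."""
    predicted: Optional[str] = None        # tentative classifier output
    self_identified: Optional[str] = None  # set only by the user

    @property
    def label(self) -> Optional[str]:
        # Self-identification always takes precedence over the prediction.
        return self.self_identified if self.self_identified is not None else self.predicted

def correct(record: GenderRecord, user_label: str) -> GenderRecord:
    """Feedback step: the user re-evaluates and corrects the system's output."""
    if user_label not in LABELS:
        raise ValueError(f"unsupported label: {user_label!r}")
    record.self_identified = user_label
    return record

# Usage: the classifier misgenders; the user corrects it.
record = GenderRecord(predicted="man")
correct(record, "non-binary")
print(record.label)  # → non-binary
```

The design choice worth noting is that `self_identified` is writable only through the feedback path, so the classifier can never overwrite a correction; this is one possible reading of the paradigm shift the paper advocates, from prediction to self-determination.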