🤖 AI Summary
This study addresses the open problem of evaluating group fairness -- specifically separation -- in settings where ground-truth labels are unavailable and only pairwise comparison judgments are provided. The authors introduce a novel notion termed "comparative separation," extending fairness assessment to comparative judgment data, and propose corresponding metrics. Theoretical analysis shows that, in binary classification, comparative separation is equivalent to traditional separation. Empirical validation confirms the practical feasibility of the approach, and the authors analyze how many pairwise comparisons are needed to match the statistical power of conventional label-based evaluation, whose per-item judgments impose a higher cognitive burden on annotators.
📝 Abstract
This research seeks to benefit the software engineering community by proposing comparative separation, a novel group fairness notion for evaluating the fairness of machine learning software on comparative judgment test data. Fairness issues have attracted increasing attention as machine learning software is increasingly used for high-stakes decisions. It is the responsibility of all software developers to make their software accountable by ensuring that it does not perform differently across sensitive groups -- that is, by satisfying the separation criterion. However, evaluating separation requires a ground-truth label for each test data point. This motivates our work on analyzing whether separation can be evaluated on comparative judgment test data. Instead of asking humans to provide ratings or categorical labels for each test data point, comparative judgments are made between pairs of data points, such as "A is better than B". According to the law of comparative judgment, providing such comparative judgments imposes a lower cognitive burden on humans than providing ratings or categorical labels. This work first defines the novel fairness notion of comparative separation on comparative judgment test data, along with metrics to evaluate it. Then, both theoretically and empirically, we show that in binary classification problems, comparative separation is equivalent to separation. Lastly, we analyze the number of test data points and test data pairs required to achieve the same level of statistical power when evaluating separation and comparative separation, respectively. This work is the first to explore fairness evaluation on comparative judgment test data. It demonstrates the feasibility and practical benefits of using comparative judgment test data for model evaluation.
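To make the label-based baseline concrete: in the fairness literature, separation for a binary classifier is standardly taken to mean that the prediction is independent of the sensitive attribute given the true label, i.e., equal true-positive and false-positive rates across groups (equalized odds). The sketch below computes the TPR/FPR gaps between groups on labeled test data; the function names, group labels, and data are illustrative, not drawn from the paper's experiments, and this shows only the conventional label-based evaluation that comparative separation is contrasted against.

```python
# Minimal sketch of label-based separation checking for binary
# classification. Separation (equalized odds) requires equal
# true-positive and false-positive rates across sensitive groups.
# All names and data here are illustrative assumptions.

def group_rates(y_true, y_pred, group, g):
    """TPR and FPR of the classifier restricted to sensitive group g."""
    tp = fn = fp = tn = 0
    for yt, yp, a in zip(y_true, y_pred, group):
        if a != g:
            continue
        if yt == 1:
            tp += yp == 1
            fn += yp == 0
        else:
            fp += yp == 1
            tn += yp == 0
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr

def separation_gaps(y_true, y_pred, group):
    """Largest TPR gap and largest FPR gap across sensitive groups.

    Gaps of (0.0, 0.0) mean separation holds exactly on this sample.
    """
    rates = [group_rates(y_true, y_pred, group, g) for g in set(group)]
    tprs, fprs = zip(*rates)
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# Illustrative example with two sensitive groups "a" and "b".
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(separation_gaps(y_true, y_pred, group))  # → (0.5, 0.5)
```

Note that every test point needs a ground-truth label in `y_true`; the paper's contribution is precisely to avoid this requirement by evaluating an equivalent criterion on pairwise judgments instead.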