Measures of classification bias derived from sample size analysis

📅 2026-01-06
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work proposes a novel fairness metric that addresses the limitation of existing algorithmic fairness measures, which typically rely on absolute differences or ratios of error rates and fail to capture the statistical significance of observed disparities. The proposed measure quantifies bias by the minimum sample size required to detect a statistically significant difference in classification error rates between groupsโ€”larger required sample sizes indicate smaller bias. Leveraging an approximate sample size formula from chi-squared tests and nonparametric error rate estimation, the method yields a fairness index with strong theoretical properties. Both theoretical analysis and empirical experiments demonstrate that this metric is fundamentally distinct from conventional approaches, often yielding different rankings of algorithmic bias, and naturally generalizes to multi-group settings, offering a more robust tool for evaluating fairness in machine learning.

๐Ÿ“ Abstract
We propose the use of a simple intuitive principle for measuring algorithmic classification bias: the significance of the differences in a classifier's error rates across the various demographics is inversely commensurate with the sample size required to statistically detect them. That is, if large sample sizes are required to statistically establish biased behavior, the algorithm is less biased, and vice versa. In a simple setting, we assume two distinct demographics, and non-parametric estimates of the error rates on them, e1 and e2, respectively. We use a well-known approximate formula for the sample size of the chi-squared test, and verify some basic desirable properties of the proposed measure. Next, we compare the proposed measure with two other commonly used statistics, the difference e2-e1 and the ratio e2/e1 of the error rates. We establish that the proposed measure is essentially different in that it can rank algorithms for bias differently, and we discuss some of its advantages over the other two measures. Finally, we briefly discuss how some of the desirable properties of the proposed measure emanate from fundamental characteristics of the method, rather than the approximate sample size formula we used, and thus, are expected to hold in more complex settings with more than two demographics.
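The abstract's principle can be sketched in code. The paper's exact sample-size formula is not reproduced on this page, so the snippet below uses the standard normal-approximation formula for the per-group sample size of a two-proportion (chi-squared equivalent) test; the significance level and power defaults are illustrative assumptions, not values from the paper. Under the proposed principle, a larger required sample size indicates smaller bias.

```python
from statistics import NormalDist

def required_sample_size(e1, e2, alpha=0.05, power=0.8):
    """Approximate per-group sample size needed to detect the
    difference between error rates e1 and e2 with a two-proportion
    test at significance level alpha and the given power.

    Standard textbook approximation, used here as a stand-in for the
    paper's formula; alpha=0.05 and power=0.8 are assumed defaults.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)          # power quantile
    p_bar = (e1 + e2) / 2                      # pooled error rate
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (e1 * (1 - e1) + e2 * (1 - e2)) ** 0.5) ** 2
    return numerator / (e1 - e2) ** 2

# Two classifiers with the SAME error-rate difference (0.05) can rank
# differently under this measure: near-balanced error rates need far
# larger samples to distinguish, i.e. indicate smaller bias.
n_low = required_sample_size(0.05, 0.10)   # low error rates
n_high = required_sample_size(0.45, 0.50)  # high error rates
```

Here `n_high` comes out several times larger than `n_low`, so the sample-size measure ranks the second classifier as less biased even though the difference e2-e1 is identical, illustrating the abstract's claim that the measure is essentially different from the difference and ratio statistics.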
Problem

Research questions and friction points this paper is trying to address.

algorithmic bias
classification bias
error rate disparity
fairness measurement
demographic differences
Innovation

Methods, ideas, or system contributions that make the work stand out.

algorithmic bias
sample size analysis
classification error rates
chi-squared test
fairness metrics
Ioannis Ivrissimtzis
Associate Professor in the Department of Computer Science, Durham University
Graphics · Geometric Modelling · Subdivision Surfaces · Surface Reconstruction
Shauna Concannon
University of Durham, UK
Matthew Houliston
Servelegal Ltd, UK
Graham Roberts
Servelegal Ltd, UK