🤖 AI Summary
This paper identifies fundamental limitations of disaggregated evaluation (i.e., subgroup performance analysis) in algorithmic fairness assessment: (i) when data are representative but reflect real-world inequities, equal subgroup performance can be misleading; and (ii) under unknown selection bias, both disaggregated evaluation and conditional independence tests fail. Crucially, evaluation stability is shown to depend on the underlying data-generating mechanism. Method: The authors propose a causally grounded, robust evaluation paradigm integrating causal graph modeling, explicit causal assumptions, weighted performance estimation, and distributional shift analysis. Contribution/Results: The framework predicts metric stability, delineates the validity boundaries of fairness evaluation methods across distinct bias regimes, and provides actionable, causally grounded guidance for fairness practice, which the authors present as the first systematic treatment of evaluation robustness through a causal lens.
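To make the "weighted performance estimation" idea concrete, here is a minimal sketch of inverse-probability weighting for subgroup metrics under known selection probabilities. The function names and the synthetic data are illustrative assumptions, not the paper's implementation; the paper's point is that such weights require explicit assumptions about the selection mechanism.

```python
import numpy as np

def weighted_accuracy(y_true, y_pred, selection_prob):
    """Inverse-probability-weighted accuracy: each sampled example is
    up-weighted by 1 / P(selected) to correct for selection bias,
    assuming the selection probabilities are correctly specified."""
    w = 1.0 / selection_prob
    correct = (y_true == y_pred).astype(float)
    return float(np.sum(w * correct) / np.sum(w))

def disaggregated_weighted_accuracy(y_true, y_pred, group, selection_prob):
    """Weighted accuracy computed separately within each subgroup."""
    return {
        g: weighted_accuracy(
            y_true[group == g], y_pred[group == g], selection_prob[group == g]
        )
        for g in np.unique(group)
    }

# Toy example: group 1 was under-sampled (selection prob 0.2 vs 0.8),
# so its examples receive larger weights in the corrected estimate.
y_true = np.array([1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1])
group = np.array([0, 0, 0, 1, 1, 1])
probs = np.array([0.8, 0.8, 0.8, 0.2, 0.2, 0.2])
per_group = disaggregated_weighted_accuracy(y_true, y_pred, group, probs)
```

With uniform selection probabilities the weighted estimate reduces to the ordinary subgroup accuracy; the correction matters only when selection depends on observed variables, which is exactly the assumption the paper asks practitioners to state explicitly.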
📝 Abstract
Disaggregated evaluation across subgroups is critical for assessing the fairness of machine learning models, but its uncritical use can mislead practitioners. We show that equal performance across subgroups is an unreliable measure of fairness when data are representative of the relevant populations but reflective of real-world disparities. Furthermore, when data are not representative due to selection bias, both disaggregated evaluation and alternative approaches based on conditional independence testing may be invalid without explicit assumptions regarding the bias mechanism. We use causal graphical models to predict metric stability across subgroups under different data-generating processes. Our framework suggests complementing disaggregated evaluations with explicit causal assumptions and analysis to control for confounding and distribution shift, including conditional independence testing and weighted performance estimation. These findings have broad implications for how practitioners design and interpret model assessments given the ubiquity of disaggregated evaluation.
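The abstract's first claim, that equal subgroup performance can be misleading even with representative data, can be illustrated numerically: two groups with identical accuracy can have very different error profiles when their base rates differ. The data below are a constructed toy example, not from the paper.

```python
import numpy as np

def rates(y_true, y_pred):
    """Accuracy, false-negative rate, and false-positive rate."""
    acc = float(np.mean(y_true == y_pred))
    fnr = float(np.mean(y_pred[y_true == 1] == 0))  # misses among positives
    fpr = float(np.mean(y_pred[y_true == 0] == 1))  # false alarms among negatives
    return acc, fnr, fpr

# Group A: prevalence 0.5; the model errs once in each direction.
yt_a = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
yp_a = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 1])

# Group B: prevalence 0.2; the model predicts negative for everyone.
yt_b = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
yp_b = np.zeros(10, dtype=int)

acc_a, fnr_a, fpr_a = rates(yt_a, yp_a)  # accuracy 0.8, FNR 0.2
acc_b, fnr_b, fpr_b = rates(yt_b, yp_b)  # accuracy 0.8, FNR 1.0
```

Both groups show 80% accuracy, yet group B's positives are never detected. This is the kind of case where a disaggregated accuracy comparison alone declares parity while the model fails one group entirely on the outcome that matters.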