Testing the Fairness-Accuracy Improvability of Algorithms

📅 2024-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the "necessity defense" (the common claim that fairness improvements inevitably sacrifice predictive accuracy) by proposing the first falsifiable test of whether the fairness–accuracy trade-off can be improved upon. Method: The authors develop an econometric testing framework that integrates semiparametric identification, hypothesis testing, and game-theoretic modeling, establishing asymptotic validity and robustness to strategic manipulation under arbitrary exogenous constraints on the algorithm space. Contribution/Results: Theoretically, the paper characterizes the feasibility frontier for fairness enhancement and provides statistically rigorous certification for regulatory intervention. Empirically, applying the framework to the healthcare-algorithm data of Obermeyer et al. (2019), the authors show that racial prediction bias can be substantially reduced with no loss in predictive accuracy. This refutes the necessity defense and moves algorithmic governance from heuristic judgment toward rigorous statistical inference.

📝 Abstract
Many organizations use algorithms that have a disparate impact, i.e., the benefits or harms of the algorithm fall disproportionately on certain social groups. Addressing an algorithm's disparate impact can be challenging, however, because it is often unclear whether it is possible to reduce this impact without sacrificing other objectives of the organization, such as accuracy or profit. Establishing the improvability of algorithms with respect to multiple criteria is of both conceptual and practical interest: in many settings, disparate impact that would otherwise be prohibited under US federal law is permissible if it is necessary to achieve a legitimate business interest. The question is how a policy-maker can formally substantiate, or refute, this "necessity" defense. In this paper, we provide an econometric framework for testing the hypothesis that it is possible to improve on the fairness of an algorithm without compromising on other pre-specified objectives. Our proposed test is simple to implement and can be applied under any exogenous constraint on the algorithm space. We establish the large-sample validity and consistency of our test, and microfound the test's robustness to manipulation based on a game between a policymaker and the analyst. Finally, we apply our approach to evaluate a healthcare algorithm originally considered by Obermeyer et al. (2019), and quantify the extent to which the algorithm's disparate impact can be reduced without compromising the accuracy of its predictions.
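The core idea of testing improvability can be illustrated with a toy sketch. This is not the paper's actual estimator: the data are synthetic, the fairness measure (a demographic-parity gap), the constrained algorithm class (group-specific thresholds), the baseline threshold, and the naive bootstrap are all illustrative choices made here, and the paper's test comes with formal asymptotic guarantees that this crude version lacks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data mimicking a biased score: latent need u drives the outcome y,
# but the observed score s is shifted upward for group 1 (score/label bias),
# loosely in the spirit of the Obermeyer et al. (2019) setting.
n = 5000
g = rng.integers(0, 2, size=n)                      # group membership (0/1)
u = rng.normal(size=n)                              # latent need
y = (u + 0.5 * rng.normal(size=n) > 0).astype(int)  # true outcome
s = u + 0.5 * g                                     # observed (biased) score

def accuracy(pred, y):
    return np.mean(pred == y)

def disparity(pred, g):
    # Demographic-parity gap: difference in positive-prediction rates.
    return abs(pred[g == 1].mean() - pred[g == 0].mean())

# Status-quo algorithm: a single threshold applied to the biased score.
base_pred = (s > 0.25).astype(int)
base_acc = accuracy(base_pred, y)
base_disp = disparity(base_pred, g)

# Constrained alternative class: group-specific thresholds. The hypothesis
# being probed is "some alternative in this class is at least as accurate
# as the status quo and strictly fairer."
best = None
grid = np.linspace(-1.0, 2.0, 31)
for t0 in grid:
    for t1 in grid:
        pred = np.where(g == 0, s > t0, s > t1).astype(int)
        if accuracy(pred, y) >= base_acc:
            d = disparity(pred, g)
            if d < base_disp and (best is None or d < best[0]):
                best = (d, t0, t1)

# Bootstrap the disparity reduction at the selected thresholds, as a crude
# stand-in for a formal test of improvability (the naive bootstrap ignores
# the selection step, which the paper's framework handles rigorously).
if best is not None:
    _, t0, t1 = best
    reductions = []
    for _ in range(200):
        idx = rng.integers(0, n, size=n)
        sb, gb = s[idx], g[idx]
        alt = np.where(gb == 0, sb > t0, sb > t1).astype(int)
        ref = (sb > 0.25).astype(int)
        reductions.append(disparity(ref, gb) - disparity(alt, gb))
    lower = np.quantile(reductions, 0.05)  # improvement suggested if > 0
```

Because the simulated score is biased against one group while the outcome is not, group-specific thresholds can typically both match the baseline's accuracy and shrink the disparity, which is exactly the kind of feasible improvement the paper's test is designed to certify or rule out.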
Problem

Research questions and friction points this paper is trying to address.

Algorithmic Fairness
Predictive Accuracy
Fairness Quantification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fairness Enhancement
Algorithm Evaluation
Predictive Accuracy Preservation