Accuracy vs. Accuracy: Computational Tradeoffs Between Classification Rates and Utility

📅 2025-05-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses fairness in supervised learning with rich auxiliary labels (e.g., individual types, rankings, or risk estimates), where accurate subgroup classification rates and utility-oriented loss minimization must be achieved together despite their apparent computational incompatibility. Method: The authors show that the two objectives are not inherently in conflict, since both are satisfied by the Bayes-optimal predictor, yet learning a sufficiently good approximation of that predictor is computationally hard. They propose distribution-aware classification and ranking algorithms that achieve evidence-based fairness, and complement them with impossibility results and computational lower bounds. Contribution/Results: The methods preserve accurate subpopulation classification rates across a broad class of classification rules and downstream applications while supporting loss minimization, whether for utility or fair treatment. Each notion can be satisfied individually by an efficient learner, and the lower bounds show that achieving both simultaneously is, in some cases, computationally infeasible, forcing a choice between two natural notions of accuracy.

📝 Abstract
We revisit the foundations of fairness and its interplay with utility and efficiency in settings where the training data contain richer labels, such as individual types, rankings, or risk estimates, rather than just binary outcomes. In this context, we propose algorithms that achieve stronger notions of evidence-based fairness than are possible in standard supervised learning. Our methods support classification and ranking techniques that preserve accurate subpopulation classification rates, as suggested by the underlying data distributions, across a broad class of classification rules and downstream applications. Furthermore, our predictors enable loss minimization, whether aimed at maximizing utility or in the service of fair treatment. Complementing our algorithmic contributions, we present impossibility results demonstrating that simultaneously achieving accurate classification rates and optimal loss minimization is, in some cases, computationally infeasible. Unlike prior impossibility results, our notions are not inherently in conflict and are simultaneously satisfied by the Bayes-optimal predictor. Furthermore, we show that each notion can be satisfied individually via efficient learning. Our separation thus stems from the computational hardness of learning a sufficiently good approximation of the Bayes-optimal predictor. These computational impossibilities present a choice between two natural and attainable notions of accuracy that could both be motivated by fairness.
Problem

Research questions and friction points this paper is trying to address.

Balancing fairness and utility in rich-label data settings
Achieving evidence-based fairness beyond standard supervised learning
Resolving computational tradeoffs between classification accuracy and loss minimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Algorithms for evidence-based fairness in rich-label data
Classification and ranking preserving subpopulation accuracy
Loss minimization for utility or fair treatment
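The tension between the two notions above can be seen in a toy simulation. The sketch below is illustrative only, under hypothetical assumptions (uniformly distributed individual risks, a single threshold rule, Gaussian estimation noise); it is not the paper's construction. It shows that thresholding the Bayes-optimal risk estimates preserves the subpopulation's distribution-implied positive classification rate exactly and minimizes squared loss, while a noisy approximation of those estimates distorts the subgroup rate and pays extra loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: uniform individual risks, one threshold rule.
n = 200_000
p = rng.uniform(0.0, 1.0, n)                    # Bayes-optimal risk estimates
y = rng.binomial(1, p)                          # realized binary outcomes
q = np.clip(p + rng.normal(0.0, 0.2, n), 0, 1)  # noisy approximate predictor

group = p > 0.3                                 # a hypothetical subpopulation
tau = 0.5                                       # classification threshold

# Subgroup classification-rate preservation: inside the subpopulation,
# the Bayes rule's positive rate equals the rate the distribution
# implies, while the approximate rule drifts away from it.
implied = (p[group] >= tau).mean()
approx_rate = (q[group] >= tau).mean()

# Loss minimization: the Bayes predictor has the smallest expected
# squared loss; the approximation pays an extra E[(q - p)^2] on top.
bayes_loss = np.mean((p - y) ** 2)
approx_loss = np.mean((q - y) ** 2)

print(round(abs(approx_rate - implied), 3))     # nonzero rate distortion
print(bayes_loss < approx_loss)                 # True
```

An efficient learner can target either quantity on its own; the paper's lower bounds concern recovering an approximation of `p` good enough to deliver both at once.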