Understanding Fairness and Prediction Error through Subspace Decomposition and Influence Analysis

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Machine learning models often inherit and amplify historical biases, leading to unfair predictions. Existing approaches typically impose constraints at the prediction level, overlooking bias origins in data representation. This paper proposes a subspace decomposition framework grounded in sufficient dimension reduction, which disentangles fairness from predictive utility at the feature representation level. We theoretically analyze how shared subspaces affect both fairness guarantees and generalization error, and employ influence functions to characterize the asymptotic sensitivity of parameter estimation. By selectively removing subspace components dominated by sensitive attributes, our method achieves joint optimization of bias mitigation and predictive performance. Experiments on synthetic and multiple real-world datasets demonstrate substantial improvements in group fairness—measured by metrics such as equalized odds and demographic parity—while incurring at most a 1.2% drop in prediction accuracy.
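The group-fairness metrics the summary names have simple empirical forms for binary predictions. A minimal sketch (function names are illustrative, not the paper's code):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(yhat=1 | group=1) - P(yhat=1 | group=0)| over binary predictions."""
    y_pred, g = np.asarray(y_pred, float), np.asarray(group, bool)
    return abs(y_pred[g].mean() - y_pred[~g].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Max over true label y in {0,1} of the gap in
    P(yhat=1 | y, group) between the two groups."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred, float)
    g = np.asarray(group, bool)
    gaps = []
    for y in (0, 1):
        m = y_true == y
        gaps.append(abs(y_pred[m & g].mean() - y_pred[m & ~g].mean()))
    return max(gaps)
```

Demographic parity compares overall positive-prediction rates, while equalized odds conditions on the true label, so a classifier can satisfy one and violate the other.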

📝 Abstract
Machine learning models have achieved widespread success but often inherit and amplify historical biases, resulting in unfair outcomes. Traditional fairness methods typically impose constraints at the prediction level, without addressing underlying biases in data representations. In this work, we propose a principled framework that adjusts data representations to balance predictive utility and fairness. Using sufficient dimension reduction, we decompose the feature space into target-relevant, sensitive, and shared components, and control the fairness-utility trade-off by selectively removing sensitive information. We provide a theoretical analysis of how prediction error and fairness gaps evolve as shared subspaces are added, and employ influence functions to quantify their effects on the asymptotic behavior of parameter estimates. Experiments on both synthetic and real-world datasets validate our theoretical insights and show that the proposed method effectively improves fairness while preserving predictive performance.
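As a rough illustration of the removal step described in the abstract, one can project the features onto the orthogonal complement of the direction most correlated with the sensitive attribute. This is only a linear, one-direction stand-in for the SDR-based decomposition, not the authors' method:

```python
import numpy as np

def remove_sensitive_subspace(X, a):
    """Remove the leading feature-space direction aligned with the
    sensitive attribute a via orthogonal projection (a linear stand-in
    for an SDR-based decomposition)."""
    Xc = X - X.mean(axis=0)
    ac = a - a.mean()
    w = Xc.T @ ac                     # cross-covariance direction with a
    w /= np.linalg.norm(w)
    return Xc - np.outer(Xc @ w, w)  # project onto the complement of w
```

After this projection the cleaned features are exactly uncorrelated with `a` along the removed direction; removing more directions trades additional fairness for predictive utility, which is the trade-off the paper controls.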
Problem

Research questions and friction points this paper is trying to address.

Decomposing feature space to address bias in data representations
Balancing predictive utility and fairness through subspace manipulation
Characterizing how prediction error and fairness gaps evolve as shared subspace components are included
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes feature space into target-relevant and sensitive components
Controls fairness-utility trade-off by removing sensitive information
Uses influence functions to quantify parameter estimate effects
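For context on the influence-function point: in the special case of ordinary least squares, the influence of each sample on the parameter estimate has a closed form, and the exact leave-one-out shift is its finite-sample analogue. A hedged sketch under that simplified linear model (function name illustrative, not the paper's estimator):

```python
import numpy as np

def loo_parameter_shift(X, y):
    """Exact leave-one-out shift of the OLS estimate for each sample i:
    theta_{-i} - theta = -(X^T X)^{-1} x_i r_i / (1 - h_i),
    where r_i is the residual and h_i the leverage of sample i."""
    G = np.linalg.inv(X.T @ X)
    theta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ theta
    h = np.einsum('ij,jk,ik->i', X, G, X)           # leverage of each sample
    return -(X @ G) * (resid / (1.0 - h))[:, None]  # one row per sample
```

Samples with large residuals and high leverage dominate the estimate; the same logic, applied asymptotically, is what influence functions capture about the sensitivity of parameter estimation.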