A Flexible Fairness Framework with Surrogate Loss Reweighting for Addressing Sociodemographic Disparities

📅 2025-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the fairness–accuracy trade-off in machine learning models across sociodemographic attributes. The authors propose the α-β Fair Machine Learning framework, which introduces tunable hyperparameters α and β to give fine-grained, joint control over multiple fairness criteria (e.g., equal opportunity) and predictive accuracy. The method combines an α-β-parameterized fairness constraint mechanism, a provably convergent parallel stochastic gradient descent algorithm (P-SGD-S), and a novel family of surrogate loss functions coupled with a loss reweighting strategy, enabling a smooth transition from empirical risk minimization to minimax fairness objectives. Theoretical analysis covers both convex and non-convex optimization settings. Experiments on multiple benchmark datasets demonstrate significant reductions in fairness violations (e.g., an average 38% decrease in equal opportunity difference) while maintaining or improving classification accuracy, achieving balanced fairness gains and model stability.

📝 Abstract
This paper presents a new algorithmic fairness framework called $\boldsymbol{\alpha}$-$\boldsymbol{\beta}$ Fair Machine Learning ($\boldsymbol{\alpha}$-$\boldsymbol{\beta}$ FML), designed to optimize fairness levels across sociodemographic attributes. Our framework employs a new family of surrogate loss functions, paired with loss reweighting techniques, allowing precise control over fairness-accuracy trade-offs through tunable hyperparameters $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$. To efficiently solve the learning objective, we propose Parallel Stochastic Gradient Descent with Surrogate Loss (P-SGD-S) and establish convergence guarantees for both convex and nonconvex loss functions. Experimental results demonstrate that our framework improves overall accuracy while reducing fairness violations, offering a smooth trade-off between standard empirical risk minimization and strict minimax fairness. Results across multiple datasets confirm its adaptability, ensuring fairness improvements without excessive performance degradation.
Problem

Research questions and friction points this paper is trying to address.

Addressing sociodemographic disparities in machine learning fairness
Optimizing fairness-accuracy trade-offs with tunable hyperparameters
Improving accuracy while reducing fairness violations across datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Surrogate loss reweighting for fairness control
Tunable hyperparameters α and β for trade-offs
Parallel SGD with convergence guarantees
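The paper's exact objective is not reproduced here, but the described smooth transition from empirical risk minimization to minimax fairness can be illustrated with a simple exponentially tilted reweighting of per-group losses. The function name, the tilting form, and the use of a single tilt parameter are illustrative assumptions, not the paper's actual α-β formulation (in particular, β, which governs the surrogate loss family, is omitted):

```python
import numpy as np

def tilted_group_loss(group_losses, alpha):
    """Illustrative interpolation between ERM and minimax fairness.

    Each group's loss is weighted proportionally to exp(alpha * loss):
    alpha = 0 recovers the plain average over groups (ERM-like), while
    large alpha concentrates all weight on the worst-off group (minimax).
    """
    losses = np.asarray(group_losses, dtype=float)
    # Subtract the max before exponentiating for numerical stability.
    w = np.exp(alpha * (losses - losses.max()))
    w /= w.sum()
    return float(w @ losses)

group_losses = [0.2, 0.5, 0.9]
print(tilted_group_loss(group_losses, 0.0))   # plain mean of the group losses
print(tilted_group_loss(group_losses, 50.0))  # approaches the worst-group loss
```

Sweeping the tilt parameter traces out the same kind of fairness-accuracy trade-off curve that the tunable hyperparameters in the framework are described as controlling.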