🤖 AI Summary
This work addresses the fairness–accuracy trade-off in machine learning models across sociodemographic attributes. We propose the α-β Fair Machine Learning framework, which introduces tunable hyperparameters α and β for fine-grained, joint control over multiple fairness criteria (e.g., equal opportunity) and predictive accuracy. The method combines three components: an α-β-parameterized fairness constraint mechanism, a provably convergent parallel stochastic gradient descent algorithm (P-SGD-S), and a new family of surrogate loss functions coupled with a loss reweighting strategy that enables a smooth transition from empirical risk minimization to minimax fairness objectives. Theoretical analysis covers both convex and non-convex optimization settings. Experiments on multiple benchmark datasets show substantial reductions in fairness violations (e.g., an average 38% decrease in equal opportunity difference) while maintaining or improving classification accuracy, balancing fairness gains with stable model performance.
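The paper's exact α-β objective is not reproduced here, but the idea of a loss reweighting knob that interpolates between average-loss ERM and worst-group minimax fairness can be illustrated with a standard exponential-tilting construction. The sketch below is an assumption-laden stand-in: the function name, the tilting form, and the use of a single parameter `alpha` are illustrative choices, not the paper's definitions.

```python
# Minimal sketch (not the paper's formulation): one tunable knob that blends
# per-group losses between the plain average (ERM) and the worst-off group
# (minimax), via softmax-style exponential tilting.
import numpy as np

def reweighted_group_loss(group_losses: np.ndarray, alpha: float) -> float:
    """alpha -> 0 recovers the unweighted mean over groups (ERM);
    large alpha concentrates all weight on the worst-off group (minimax)."""
    if alpha == 0.0:
        return float(group_losses.mean())
    # Numerically stabilized softmax weights over group losses.
    z = alpha * (group_losses - group_losses.max())
    w = np.exp(z) / np.exp(z).sum()
    return float(np.sum(w * group_losses))

# Toy usage: three sociodemographic groups with unequal losses.
losses = np.array([0.2, 0.5, 0.9])
for a in (0.0, 1.0, 50.0):
    print(f"alpha={a:5.1f} -> blended loss {reweighted_group_loss(losses, a):.3f}")
```

As `alpha` grows, the blended loss moves smoothly from 0.533 (the mean) toward 0.9 (the maximum), which is the qualitative behavior the summary attributes to the α-β trade-off.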
📝 Abstract
This paper presents a new algorithmic fairness framework called $\boldsymbol{\alpha}$-$\boldsymbol{\beta}$ Fair Machine Learning ($\boldsymbol{\alpha}$-$\boldsymbol{\beta}$ FML), designed to optimize fairness levels across sociodemographic attributes. Our framework employs a new family of surrogate loss functions, paired with loss reweighting techniques, allowing precise control over fairness–accuracy trade-offs through tunable hyperparameters $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$. To efficiently solve the learning objective, we propose Parallel Stochastic Gradient Descent with Surrogate Loss (P-SGD-S) and establish convergence guarantees for both convex and nonconvex loss functions. Experimental results demonstrate that our framework improves overall accuracy while reducing fairness violations, offering a smooth trade-off between standard empirical risk minimization and strict minimax fairness. Results across multiple datasets confirm its adaptability, ensuring fairness improvements without excessive performance degradation.
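For readers wanting a concrete picture of the parallel optimization step, the following is a minimal sketch of synchronized parallel SGD in the spirit of P-SGD-S: each worker computes a stochastic gradient of a surrogate loss on its data shard, the gradients are averaged, and a single shared update is applied. Everything here is assumed for illustration: the logistic loss stands in for the paper's surrogate family, the sequential loop stands in for true parallel workers, and none of the names come from the paper.

```python
# Hypothetical sketch of a synchronized parallel SGD loop. This does not
# reproduce P-SGD-S itself, only the compute-on-shards / average / step
# structure common to parallel SGD methods.
import numpy as np

rng = np.random.default_rng(0)

def logistic_grad(w, X, y):
    """Gradient of the logistic loss; a placeholder for the paper's
    surrogate losses, which this sketch does not reproduce."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def parallel_sgd(shards, dim, lr=0.1, steps=200, batch=32):
    w = np.zeros(dim)
    for _ in range(steps):
        grads = []
        for X, y in shards:  # in practice, each shard runs on its own worker
            idx = rng.choice(len(y), size=min(batch, len(y)), replace=False)
            grads.append(logistic_grad(w, X[idx], y[idx]))
        w -= lr * np.mean(grads, axis=0)  # synchronized averaging step
    return w

# Toy usage: two shards of synthetic binary-classification data.
X = rng.normal(size=(400, 5))
y = (X @ rng.normal(size=5) > 0).astype(float)
shards = [(X[:200], y[:200]), (X[200:], y[200:])]
w_hat = parallel_sgd(shards, dim=5)
```

The averaging step keeps all workers on a single shared iterate, which is the setting in which convergence guarantees for both convex and nonconvex losses are typically stated.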