🤖 AI Summary
This paper addresses three key challenges in fair supervised learning: the non-differentiability of discontinuous fairness metrics, conflicts among multiple fairness objectives, and the hyperparameter sensitivity of regularization-based approaches. It proposes a hard-constraint fairness learning framework built on a smooth, nonconvex surrogate that tightly approximates Heaviside-type unfairness measures, turning discontinuous constraints into differentiable ones. Rather than penalizing unfairness through regularizers, the method enforces hard constraints on multiple (potentially conflicting) fairness metrics, avoiding the optimization difficulties and costly hyperparameter tuning inherent to regularization. The resulting nonconvex problem with explicit fairness constraints is solved efficiently with robust numerical solvers. Experiments demonstrate that the approach delivers practical fairness guarantees while maintaining training stability, generalization performance, and computational efficiency, without requiring expensive hyperparameter search.
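The paper's exact surrogate is not given here; as a generic illustration of the idea, the sketch below smooths the discontinuous Heaviside indicator with a sigmoid whose temperature `tau` controls tightness (the approximation tightens as `tau` shrinks). The function names and the sigmoid form are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def heaviside(z):
    """Exact step indicator 1[z > 0]; discontinuous, hence not differentiable."""
    return (z > 0).astype(float)

def smooth_step(z, tau=0.05):
    """Sigmoid smoothing of the step: smooth, nonconvex, and increasingly
    tight as tau -> 0. Illustrative stand-in for the paper's surrogate."""
    return 1.0 / (1.0 + np.exp(-z / tau))

z = np.linspace(-1.0, 1.0, 5)
print(heaviside(z))    # step values at z = -1, -0.5, 0, 0.5, 1
print(smooth_step(z))  # near 0, near 0, exactly 0.5, near 1, near 1
```

Because the surrogate is differentiable everywhere, it can be plugged into gradient-based solvers where the raw indicator cannot.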
📝 Abstract
A new strategy for fair supervised machine learning is proposed. The main advantages of the proposed strategy compared with others in the literature are as follows. (a) We introduce a new smooth nonconvex surrogate to approximate the Heaviside functions involved in discontinuous unfairness measures. The surrogate is based on smoothing methods from the optimization literature and is new to the fair supervised learning literature. The surrogate is a tight approximation, which ensures that the trained prediction models are fair, as opposed to other (e.g., convex) surrogates that can fail to lead to a fair prediction model in practice. (b) Rather than rely on regularizers (which lead to optimization problems that are difficult to solve) and corresponding regularization parameters (which can be expensive to tune), we propose a strategy that employs hard constraints so that specific tolerances for unfairness can be enforced without the complications associated with regularization. (c) Our proposed strategy readily allows for constraints on multiple (potentially conflicting) unfairness measures at the same time. Multiple measures can be considered with a regularization approach, but at the cost of even more difficult optimization problems and further tuning expense. By contrast, through hard constraints, our strategy leads to optimization models that can be solved tractably with minimal tuning.
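To make the hard-constraint formulation concrete, here is a minimal sketch assuming toy synthetic data, a logistic model, SciPy's SLSQP solver, and a sigmoid-smoothed demographic-parity gap as the unfairness measure; all of these choices (data, measure, tolerance, solver) are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Toy synthetic data (illustrative): features X, labels y, sensitive group s.
rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
s = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * s + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    # Clip to avoid overflow in exp for extreme arguments.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500.0, 500.0)))

def log_loss(w):
    """Standard logistic loss: the training objective."""
    p = np.clip(sigmoid(X @ w), 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def smooth_dp_gap(w, tau=0.05):
    """Signed demographic-parity gap, with sigmoid(z / tau) standing in for
    the Heaviside indicator 1[z > 0] of a positive prediction."""
    h = sigmoid((X @ w) / tau)
    return h[s == 1].mean() - h[s == 0].mean()

eps_fair = 0.05  # explicit hard tolerance on the (smoothed) unfairness measure
fair_con = NonlinearConstraint(smooth_dp_gap, -eps_fair, eps_fair)
res = minimize(log_loss, np.zeros(d), method="SLSQP", constraints=[fair_con])
print("loss:", res.fun, "| smoothed DP gap:", smooth_dp_gap(res.x))
```

Note how the tolerance `eps_fair` is enforced directly by the constraint bounds rather than traded off through a penalty weight; adding a second unfairness measure is just a second `NonlinearConstraint`, with no extra regularization parameters to tune.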