🤖 AI Summary
Traditional fairness definitions lack human-centered interpretability and a unifying mathematical treatment. Method: We propose a novel fairness modeling framework grounded in group-level welfare (utility), formalized via utility theory to yield a quantifiable, comparable welfare model. Classical fairness criteria, including statistical parity and equal opportunity, are unified as linear constraints within this framework, enabling the derivation of an optimal post-processing fairness intervention via linear programming. Contribution/Results: The approach combines theoretical rigor with computational scalability. Empirical evaluation across multiple real-world datasets demonstrates that it significantly improves cross-group welfare fairness while preserving predictive performance. This work establishes a new paradigm for fair machine learning that is interpretable, optimization-friendly, and practically deployable.
📝 Abstract
In this paper, we propose a novel fairness framework grounded in the concept of happiness, a measure of the utility each group gains from decision outcomes. By capturing fairness through this intuitive lens, we offer an approach that is not only more human-centered but also mathematically rigorous: computing the optimal fair post-processing strategy requires solving only a linear program, which makes our method efficient and scalable with existing optimization tools. Furthermore, it unifies and extends several well-known fairness definitions, and our empirical results highlight its practical strengths across diverse scenarios.
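To make the linear-programming claim concrete, below is a minimal sketch of fairness-constrained post-processing cast as an LP. This is not the paper's exact formulation: the two-group setup, the cell statistics `mass` and `q`, and the choice of statistical parity as the single fairness constraint are illustrative assumptions, and all numbers are synthetic. The decision variables are the probabilities `p[a, yhat]` of issuing a positive decision in each (group, classifier-output) cell; expected accuracy is linear in these probabilities and statistical parity is a single linear equality, so `scipy.optimize.linprog` solves the problem directly.

```python
# Minimal sketch (not the paper's exact formulation): fairness-constrained
# post-processing as a linear program. All statistics below are synthetic.
import numpy as np
from scipy.optimize import linprog

# Empirical cell statistics for two groups a in {0,1} and binary classifier
# outputs yhat in {0,1}:
#   mass[a, yhat] = fraction of the population in cell (a, yhat)
#   q[a, yhat]    = P(true label = 1 | a, yhat)
mass = np.array([[0.30, 0.20],
                 [0.35, 0.15]])
q = np.array([[0.20, 0.75],
              [0.30, 0.85]])

# Decision variables p[a, yhat] = P(final decision = 1 | a, yhat),
# flattened row-major as x = [p00, p01, p10, p11].
# Accuracy in cell (a, yhat) is mass * ((1 - q) + (2q - 1) * p), so maximizing
# accuracy means maximizing sum(mass * (2q - 1) * p); linprog minimizes,
# hence the negated objective.
c = -(mass * (2.0 * q - 1.0)).ravel()

# Statistical parity as one linear equality:
#   P(D = 1 | a = 0) - P(D = 1 | a = 1) = 0.
p_yhat_given_a = mass / mass.sum(axis=1, keepdims=True)
A_eq = np.array([[p_yhat_given_a[0, 0], p_yhat_given_a[0, 1],
                  -p_yhat_given_a[1, 0], -p_yhat_given_a[1, 1]]])
b_eq = np.array([0.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * 4)
p = res.x.reshape(2, 2)
print("Derived decision probabilities p[a, yhat]:\n", p)
print("Accuracy under the fair rule:",
      (mass * ((1 - q) + (2 * q - 1) * p)).sum())
```

Other criteria named in the summary, such as equal opportunity, would enter the same program simply as additional linear rows of `A_eq`, which is what makes the unified-constraint view optimization-friendly.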