🤖 AI Summary
As algorithmic decision-making expands into sensitive domains, emerging legislation mandates not only fair classification but also actionable recourse for adverse decisions (the “right to recourse”). Existing equal-cost paradigms, however, ignore intergroup disparities in social burden and can exacerbate inequity during recourse. Method: This paper introduces the first fairness theory of algorithmic recourse grounded in social burden, revealing the intrinsic relationship between classification fairness and recourse fairness; it proposes a novel fairness framework centered on equitable cross-group burden allocation; and it designs and implements MISOB, a practical, classifier-agnostic algorithm that computes feasible recourse actions under real-world constraints. Results: Experiments on real-world datasets demonstrate that MISOB significantly reduces the recourse burden for disadvantaged groups while preserving classification accuracy, jointly improving fairness and predictive performance.
📝 Abstract
Machine learning-based predictions are increasingly used in sensitive decision-making applications that directly affect our lives. This has spurred extensive research into ensuring the fairness of classifiers. Beyond fair classification, emerging legislation now mandates that when a classifier delivers a negative decision, it must also offer actionable steps an individual can take to reverse that outcome, a concept known as algorithmic recourse. Nevertheless, many researchers have raised concerns about fairness guarantees within the recourse process itself. In this work, we provide a holistic theoretical characterization of unfairness in algorithmic recourse, formally linking fairness guarantees in recourse and classification and highlighting limitations of the standard equal-cost paradigm. We then introduce a novel fairness framework based on social burden, along with a practical algorithm (MISOB) that is broadly applicable under real-world conditions. Empirical results on real-world datasets show that MISOB reduces the social burden across all groups without compromising overall classifier accuracy.
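To make the notion of group-level recourse burden concrete, here is a minimal sketch of one common way such a metric is defined in the recourse-fairness literature: for each group, average the minimal cost its negatively classified members must pay to flip the decision. The function name `social_burden`, the per-individual `costs` array, and the toy numbers are all illustrative assumptions, not the paper's actual formulation or algorithm.

```python
import numpy as np

def social_burden(costs, groups):
    """Average minimal recourse cost per group (illustrative sketch).

    costs  : 1-D array, costs[i] = minimal cost for negatively classified
             individual i to obtain a positive decision
    groups : 1-D array of group labels, aligned with `costs`
    """
    burden = {}
    for g in np.unique(groups):
        # mean recourse cost over the members of group g
        burden[str(g)] = float(np.mean(costs[groups == g]))
    return burden

# Toy example: group "b" faces systematically higher recourse costs,
# which an equal-cost criterion alone would not surface.
costs = np.array([1.0, 2.0, 4.0, 6.0])
groups = np.array(["a", "a", "b", "b"])
print(social_burden(costs, groups))  # {'a': 1.5, 'b': 5.0}
```

A burden-based fairness criterion, as described above, compares these per-group averages rather than requiring identical costs for every pair of individuals.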