🤖 AI Summary
This paper addresses the degradation of long-term fairness in machine learning systems deployed in high-stakes settings, where dynamic feedback loops, interactions with the environment, and shifting objectives undermine sustained equitable outcomes. Through a systematic review of the literature, we propose the first unified taxonomy for long-horizon fairness, categorizing over 30 models and six core challenges. Methodologically, we integrate causal inference, dynamical systems theory, and socio-technical systems perspectives to expose how static fairness criteria fail under closed-loop feedback, and we formally define the "fairness degradation" paradigm. Our primary contribution is the first structured analytical framework for long-term fairness, enabling rigorous assessment of how fairness evolves over time. The framework provides a theoretical foundation and an actionable roadmap for algorithmic governance, sustainable AI design, and evidence-informed policymaking, bridging technical rigor with societal accountability in adaptive ML systems.
📝 Abstract
The widespread integration of machine learning systems into daily life, particularly in high-stakes domains, has raised concerns about their fairness implications. While prior work has investigated static fairness measures, recent studies reveal that automated decision-making has long-term consequences and that off-the-shelf fairness approaches may fail to achieve long-term fairness. Moreover, feedback loops and interactions between models and their environment introduce complexities that can cause outcomes to deviate from the initial fairness goals. In this survey, we review the existing literature on long-term fairness from multiple perspectives and present a taxonomy of long-term fairness studies. We highlight key challenges, analyze open issues, and outline directions for future research.
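The feedback-loop failure mode mentioned above can be made concrete with a toy simulation (an illustrative sketch, not a model from the survey: the group score distributions, the fixed threshold, and the drift rate below are all assumed parameters). A decision-maker fixes an approval threshold once; approvals then shift a group's future score distribution, so a rule that looks nearly fair at deployment time grows less fair with each round:

```python
# Toy closed-loop simulation of "fairness degradation" (illustrative only):
# a fixed score threshold is applied repeatedly, and each round's decisions
# feed back into the next round's score distributions.
import random

random.seed(0)

def simulate(mean_a, mean_b, threshold, steps=30, n=5000, drift=0.2):
    """Track |approval_rate_A - approval_rate_B| as score means drift."""
    gaps = []
    for _ in range(steps):
        scores_a = [random.gauss(mean_a, 1.0) for _ in range(n)]
        scores_b = [random.gauss(mean_b, 1.0) for _ in range(n)]
        rate_a = sum(s > threshold for s in scores_a) / n
        rate_b = sum(s > threshold for s in scores_b) / n
        gaps.append(abs(rate_a - rate_b))
        # Feedback (assumed dynamics): high approval rates improve a group's
        # future scores; low approval rates depress them.
        mean_a += drift * (rate_a - 0.5)
        mean_b += drift * (rate_b - 0.5)
    return gaps

gaps = simulate(mean_a=0.0, mean_b=-0.2, threshold=0.0)
print(f"approval-rate gap at t=0: {gaps[0]:.3f}, at t=29: {gaps[-1]:.3f}")
```

The threshold is chosen so the approval-rate gap is small in the first round (a static fairness check would nearly pass), yet the gap widens monotonically as the feedback loop amplifies the initial disparity, which is exactly why the survey argues that one-shot fairness criteria are insufficient for long-term fairness.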