Effort-aware Fairness: Incorporating a Philosophy-informed, Human-centered Notion of Effort into Algorithmic Fairness Metrics

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing AI fairness metrics (e.g., demographic parity) neglect individual effort, a core dimension of philosophical theories of justice and of human fairness judgments. This paper introduces *Effort-aware Fairness* (EaF), the first formalization of the philosophical notion of "effort" for fairness metrics: effort is modeled as the temporal trajectory of predictive features weighted by inertia, so that investment over time can be quantified rather than only the final feature value. A preregistered human-subjects experiment empirically validates that temporal trajectory information significantly influences fairness assessments. Building on this, the authors develop EaF evaluation and auditing pipelines for the judicial and financial domains. Empirical results demonstrate that EaF can identify, and potentially correct, unfair decisions against high-effort individuals facing systemic disadvantages, yielding interpretable and actionable fairness improvements in criminal risk prediction and personal credit scoring.
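The summary's core idea, scoring effort from the temporal trajectory of a feature rather than its aggregate value, can be illustrated with a minimal sketch. The function name `effort_score` and the scalar inertia weighting below are hypothetical simplifications for illustration, not the paper's actual formulation.

```python
def effort_score(trajectory, inertia=1.0):
    """Toy "Force"-style effort score: inertia-weighted sum of
    period-to-period changes in a single predictive feature.

    trajectory: feature values over time (e.g., yearly credit scores)
    inertia:    weight for how hard the feature is to move; the same
                improvement counts as more effort under higher inertia
    """
    # Per-step changes play the role of "acceleration" in the Force analogy.
    deltas = [b - a for a, b in zip(trajectory, trajectory[1:])]
    return inertia * sum(deltas)

# Two applicants end at the same score of 700, but only one improved:
stagnant = effort_score([700, 700, 700])   # no movement, zero effort
climber  = effort_score([550, 620, 700])   # sustained improvement
```

Under this reading, an outcome-only metric would treat the two applicants identically, while an effort-aware audit distinguishes the climber's trajectory from the stagnant one.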

📝 Abstract
Although popularized AI fairness metrics, e.g., demographic parity, have uncovered bias in AI-assisted decision-making outcomes, they do not consider how much effort one has spent to get to where one is today in the input feature space. However, the notion of effort is important in how Philosophy and humans understand fairness. We propose a philosophy-informed way to conceptualize and evaluate Effort-aware Fairness (EaF) based on the concept of Force, or temporal trajectory of predictive features coupled with inertia. In addition to our theoretical formulation of EaF metrics, our empirical contributions include: 1/ a pre-registered human subjects experiment, which demonstrates that for both stages of the (individual) fairness evaluation process, people consider the temporal trajectory of a predictive feature more than its aggregate value; 2/ pipelines to compute Effort-aware Individual/Group Fairness in the criminal justice and personal finance contexts. Our work may enable AI model auditors to uncover and potentially correct unfair decisions against individuals who spent significant efforts to improve but are still stuck with systemic/early-life disadvantages outside their control.
Problem

Research questions and friction points this paper is trying to address.

Incorporating effort into algorithmic fairness metrics
Evaluating fairness using temporal feature trajectories
Addressing systemic disadvantages in AI decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

Philosophy-informed Effort-aware Fairness metrics
Force concept: temporal trajectory with inertia
Human subjects experiment validates feature trajectory importance
Tin Nguyen
Department of Computer Science, University of Maryland, College Park, Maryland, USA
Jiannan Xu
Ph.D. Candidate, Robert H. Smith School of Business, University of Maryland
Marketplace Analytics, Service Operations, AI for Social Good
Zora Che
University of Maryland
Phuong-Anh Nguyen-Le
College of Information, University of Maryland, College Park, Maryland, USA
Rushil Dandamudi
Department of Computer Science, University of Maryland, College Park, Maryland, USA
Donald Braman
Associate Professor, GWU Law School
Criminal Justice Reform, Criminal Law, Climate Justice
Furong Huang
Associate Professor of Computer Science, University of Maryland
Trustworthy AI/ML, Reinforcement Learning, Generative AI
Hal Daumé
Department of Computer Science, University of Maryland, College Park, Maryland, USA
Zubin Jelveh
University of Maryland
Science of Science, Prediction, Public Policy, Record Linkage, Victimization