🤖 AI Summary
To address long-term group unfairness arising from reinforcement learning policies in dynamic sequential decision-making, this work systematically introduces bisimulation metrics into fairness modeling—marking the first such theoretical integration. We propose a bisimulation-based framework comprising fair reward shaping and state dynamics regularization, which enforces equivalence across demographic groups in state representation, transition dynamics, and observability. This ensures joint optimization of fairness constraints and task performance. Evaluated on standard fairness benchmarks—including credit lending and college admissions—the method reduces inter-group acceptance rate disparity by 42% while preserving over 98% of original task performance, significantly outperforming existing approaches.
📝 Abstract
Ensuring long-term fairness is crucial when developing automated decision-making systems, especially in dynamic and sequential environments. AI agents that maximize reward without regard for fairness can introduce disparities in how they treat groups or individuals. In this paper, we establish a connection between bisimulation metrics and group fairness in reinforcement learning. We propose a novel approach that leverages bisimulation metrics to learn reward functions and observation dynamics that treat groups fairly while still reflecting the original problem. We demonstrate the effectiveness of our method at reducing disparities in sequential decision-making through empirical evaluation on a standard fairness benchmark comprising lending and college admission scenarios.
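The core tool named in the abstract, a bisimulation metric, measures how behaviorally different two MDP states are: roughly, the largest action-wise gap in immediate reward plus a discounted distance between the states they transition to. Below is a minimal sketch of the fixed-point iteration for such a metric on a toy finite MDP with deterministic transitions (so the Wasserstein term over next-state distributions collapses to the distance between successor states). All names, rewards, and weights (`c_R`, `c_T`) are illustrative assumptions, not the paper's actual setup.

```python
# Toy finite MDP: two "applicant" states per demographic group, shared actions.
# All states, rewards, and weights here are hypothetical, for illustration only.
states = ["g0_low", "g0_high", "g1_low", "g1_high"]
actions = ["reject", "accept"]

# R[s][a]: immediate reward for taking action a in state s.
R = {
    "g0_low":  {"reject": 0.0, "accept": 0.2},
    "g0_high": {"reject": 0.0, "accept": 1.0},
    "g1_low":  {"reject": 0.0, "accept": 0.2},
    "g1_high": {"reject": 0.0, "accept": 1.0},
}
# T[s][a]: deterministic next state; self-loops keep the sketch simple.
T = {s: {a: s for a in actions} for s in states}

def bisimulation_metric(c_R=1.0, c_T=0.9, iters=100):
    """Iterate d(s,t) = max_a [ c_R*|R(s,a)-R(t,a)| + c_T*d(T(s,a), T(t,a)) ]
    to its fixed point. With deterministic transitions, the usual Wasserstein
    term reduces to the metric evaluated at the successor pair."""
    d = {(s, t): 0.0 for s in states for t in states}
    for _ in range(iters):
        d = {
            (s, t): max(
                c_R * abs(R[s][a] - R[t][a]) + c_T * d[(T[s][a], T[t][a])]
                for a in actions
            )
            for s in states
            for t in states
        }
    return d

d = bisimulation_metric()
# States that differ only in group membership get distance 0: the metric
# deems them behaviorally equivalent, which is the kind of cross-group
# equivalence a fairness constraint can then enforce.
print(d[("g0_low", "g1_low")])   # 0.0
print(d[("g0_low", "g0_high")])  # > 0: genuinely different behavior
```

In the fair-reward-shaping view, a learned reward or representation would be regularized so that counterpart states across groups (here `g0_low` vs `g1_low`) stay close under this metric, while task-relevant distinctions (`g0_low` vs `g0_high`) remain separated.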