AI Summary
Traditional AI fairness research predominantly relies on static datasets, neglecting the dynamic evolution of AI systems in real-world deployments. This paper pioneers modeling fairness as a runtime property and proposes a dynamic fairness analysis framework tailored to evolving AI systems. Methodologically, it treats environment dynamics, prediction horizon, and confidence thresholds as strategic parameters, and combines a probabilistic model of biased-coin sequences with Markov processes and additive dynamical systems to enable continuous monitoring of, and intervention against, time-varying bias. The contributions are threefold: (1) the first general-purpose runtime fairness analysis framework; (2) a unified formalization of the monitor-enforce mechanism applicable to both static and dynamic settings; and (3) a theoretically rigorous yet practically deployable fairness assurance paradigm for dynamic AI systems.
Abstract
Fairness in AI is traditionally studied as a static property, evaluated once over a fixed dataset. However, real-world AI systems operate sequentially, with outcomes and environments evolving over time. This paper proposes a framework for analysing fairness as a runtime property. Using a minimal yet expressive model based on sequences of coin tosses with possibly evolving biases, we study the problems of monitoring and enforcing fairness expressed over either toss outcomes or coin biases. Since there is no one-size-fits-all solution to either problem, we provide a summary of monitoring and enforcement strategies, parametrised by environment dynamics, prediction horizon, and confidence thresholds. For both problems, we present general results under simple or minimal assumptions. We survey existing solutions to the monitoring problem under Markovian and additive dynamics, and to the enforcement problem in static settings with known dynamics.
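To make the monitoring problem concrete, the following is a minimal sketch (not the paper's algorithm) of a runtime monitor for a coin-toss sequence under the simplest assumption of a static, unknown bias. It maintains a Hoeffding-style confidence interval around the empirical bias and reports a verdict relative to a fairness band `[lo, hi]`; the class name, the band, and the three-valued verdict are illustrative choices, not notation from the paper.

```python
import math

class FairnessMonitor:
    """Runtime monitor for the bias of a coin-toss sequence.

    Illustrative sketch under a static-bias assumption: uses
    Hoeffding's inequality to decide, at a given confidence level,
    whether the unknown bias lies inside a fairness band [lo, hi].
    """

    def __init__(self, lo, hi, confidence=0.95):
        self.lo, self.hi = lo, hi
        self.delta = 1.0 - confidence  # allowed error probability
        self.n = 0
        self.heads = 0

    def observe(self, outcome):
        """Feed one toss (1 = heads); return 'fair', 'unfair', or 'unknown'."""
        self.n += 1
        self.heads += int(outcome)
        p_hat = self.heads / self.n
        # Hoeffding radius: P(|p_hat - p| >= eps) <= 2 * exp(-2 * n * eps^2)
        eps = math.sqrt(math.log(2.0 / self.delta) / (2.0 * self.n))
        if p_hat - eps > self.hi or p_hat + eps < self.lo:
            return "unfair"   # confidence interval excludes the band
        if self.lo <= p_hat - eps and p_hat + eps <= self.hi:
            return "fair"     # confidence interval inside the band
        return "unknown"      # not enough evidence yet

# Usage: a run of all-heads eventually triggers an 'unfair' verdict.
monitor = FairnessMonitor(lo=0.4, hi=0.6)
verdicts = [monitor.observe(1) for _ in range(20)]
```

Under evolving biases (the dynamic settings the paper considers), a fixed-sample bound like this no longer applies directly, which is one reason monitoring strategies must be parametrised by the assumed environment dynamics.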