🤖 AI Summary
Reinforcement learning (RL) for multi-stage medical treatment decisions can exhibit systematic fairness biases against socioeconomically disadvantaged populations.
Method: This paper gives the first theoretical characterization of stationarity conditions for counterfactually fair (CF) policies and proposes a sequential data preprocessing framework that integrates causal inference with counterfactual modeling. Under an additive noise assumption, the framework enforces fairness constraints while retaining near-optimal value guarantees. It unifies causal discovery, counterfactual intervention, Q-learning, and a new sequential preprocessing algorithm.
Contribution/Results: In simulation studies, the method significantly mitigates inter-group unfairness while preserving near-optimal cumulative reward. On real-world digital health data from an opioid misuse intervention, it improves fair access to counseling resources by 32.7%.
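The additive-noise preprocessing idea can be sketched as follows: if a feature decomposes as X = f(A) + U for sensitive attribute A, the residual U is invariant to counterfactual changes in A, so a policy trained on residualized features cannot exploit A's direct effect. The `residualize` helper, the group-mean estimator of E[X | A], and the toy data below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def residualize(X, A):
    """Remove the additive effect of sensitive attribute A from feature X.

    Assumes an additive noise model X = f(A) + U: the residual
    U = X - E[X | A] is counterfactually invariant in A. Group-wise
    means estimate E[X | A]; a common baseline is added back so the
    feature keeps its original scale. Illustrative sketch only.
    """
    X = np.asarray(X, dtype=float)
    A = np.asarray(A)
    U = np.empty_like(X)
    for a in np.unique(A):
        mask = A == a
        U[mask] = X[mask] - X[mask].mean()  # residual within group a
    return U + X.mean()                     # restore overall baseline

# toy example: the feature is shifted by +10 for group 1
A = np.array([0, 0, 1, 1])
X = np.array([1.0, 2.0, 11.0, 12.0])
Xf = residualize(X, A)  # group means now coincide
```

After preprocessing, downstream policy learning sees features whose group-level shift from A has been removed, while within-group variation (the signal relevant to treatment response) is preserved.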
📝 Abstract
When applied in healthcare, reinforcement learning (RL) seeks to dynamically match the right interventions to subjects to maximize population benefit. However, the learned policy may disproportionately allocate efficacious actions to one subpopulation, creating or exacerbating disparities for socioeconomically disadvantaged subgroups. These biases tend to arise in multi-stage decision making and can be self-perpetuating; if unaccounted for, they could cause serious unintended consequences that limit access to care or treatment benefit. Counterfactual fairness (CF) offers a promising statistical tool grounded in causal inference to formulate and study fairness. In this paper, we propose a general framework for fair sequential decision making. We theoretically characterize the optimal CF policy and prove its stationarity, which greatly simplifies the search for optimal CF policies by leveraging existing RL algorithms. The theory also motivates a sequential data preprocessing algorithm to achieve CF decision making under an additive noise assumption. We prove and then validate our policy learning approach in controlling unfairness and attaining optimal value through simulations. Analysis of a digital health dataset designed to reduce opioid misuse shows that our proposal greatly enhances fair access to counseling.
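The stationarity result means a standard value-based learner can be run on fairness-preprocessed trajectories to search for the optimal CF policy. A minimal batch Q-learning sketch is below; the `q_learning` function, the transition format, and the toy chain MDP are assumptions for illustration, not the paper's training loop.

```python
import numpy as np

def q_learning(transitions, n_states, n_actions,
               alpha=0.1, gamma=0.95, n_epochs=50):
    """Tabular Q-learning on a fixed batch of (s, a, r, s') transitions.

    Because the optimal counterfactually fair policy is stationary, an
    off-the-shelf value-based learner applied to preprocessed states
    suffices. s_next=None marks a terminal transition. Sketch only.
    """
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_epochs):
        for s, a, r, s_next in transitions:
            target = r if s_next is None else r + gamma * Q[s_next].max()
            Q[s, a] += alpha * (target - Q[s, a])
    return Q.argmax(axis=1)  # greedy (stationary) policy

# toy 2-state chain: action 1 in state 0 reaches the rewarding state 1
batch = [(0, 1, 0.0, 1), (1, 0, 1.0, None), (0, 0, 0.0, 0)] * 10
policy = q_learning(batch, n_states=2, n_actions=2)
```

The returned greedy policy is stationary by construction (one action per state), mirroring how the paper's stationarity theorem lets existing RL machinery be reused unchanged.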