AI Summary
This work proposes a model-agnostic post-processing method to enhance group fairness in settings where access to the model internals or sensitive attributes is restricted. By generating counterfactual inputs with flipped sensitive attributes and averaging the predictions of the original and counterfactual instances, the approach eliminates direct dependence of predictions on sensitive attributes without modifying or retraining the original model. Theoretical analysis demonstrates that this method strictly reduces the mutual information between predictions and sensitive attributes, achieves perfect demographic parity under mild assumptions, and at least halves the equal opportunity gap, all while introducing only bounded perturbations to the predictions.
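The averaging step described above can be written compactly. Letting \(f(x, s)\) denote the original model's prediction on features \(x\) with binary sensitive attribute \(s\) (notation assumed here, not taken from the paper), the post-processed prediction is:

```latex
\hat{f}_{\mathrm{CAFP}}(x, s) \;=\; \tfrac{1}{2}\bigl[\, f(x, s) \;+\; f(x, 1 - s) \,\bigr]
```

Because the right-hand side is symmetric in \(s\), the resulting prediction is identical whether \(s = 0\) or \(s = 1\), which is why the direct dependence on the sensitive attribute is eliminated without touching the underlying model.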
Abstract
Ensuring fairness in machine learning predictions is a critical challenge, especially when models are deployed in sensitive domains such as credit scoring, healthcare, and criminal justice. While many fairness interventions rely on data preprocessing or algorithmic constraints during training, these approaches often require full control over the model architecture and access to protected attribute information, which may not be feasible in real-world systems. In this paper, we propose Counterfactual Averaging for Fair Predictions (CAFP), a model-agnostic post-processing method that mitigates unfair influence from protected attributes without retraining or modifying the original classifier. CAFP operates by generating counterfactual versions of each input in which the sensitive attribute is flipped, and then averaging the model's predictions across factual and counterfactual instances. We provide a theoretical analysis of CAFP, showing that it eliminates direct dependence on the protected attribute, reduces mutual information between predictions and sensitive attributes, and provably bounds the distortion introduced relative to the original model. Under mild assumptions, we further show that CAFP achieves perfect demographic parity and reduces the equalized odds gap by at least half the average counterfactual bias.
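As a concrete illustration, the counterfactual-averaging procedure can be sketched in a few lines. This is a minimal sketch under assumptions not stated in the abstract: the sensitive attribute is a binary feature stored in a known column (`s_col`), and the model exposes a batch prediction function; `cafp_predict` and `biased_model` are illustrative names, not part of the paper.

```python
import numpy as np

def cafp_predict(model_predict, X, s_col):
    """Hypothetical CAFP sketch: average the model's predictions on each
    factual input and its counterfactual with the sensitive attribute flipped."""
    X = np.asarray(X, dtype=float)
    X_cf = X.copy()
    X_cf[:, s_col] = 1.0 - X_cf[:, s_col]  # flip the binary sensitive attribute
    return 0.5 * (model_predict(X) + model_predict(X_cf))

# Toy classifier whose score leaks the sensitive attribute (column 0):
def biased_model(X):
    return 0.5 * X[:, 1] + 0.3 * X[:, 0]

# Two individuals identical except for the sensitive attribute:
X = np.array([[0.0, 0.8],
              [1.0, 0.8]])
print(biased_model(X))                         # scores differ across groups
print(cafp_predict(biased_model, X, s_col=0))  # averaged scores coincide
```

Note that the original model is called but never modified or retrained, matching the post-processing, model-agnostic setting the abstract describes; after averaging, the two otherwise-identical individuals receive the same score.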