AI Summary
Existing counterfactual explanations (CFEs) degrade rapidly under model or data distribution shifts and lack probabilistic robustness guarantees against arbitrary types of change. To address this, we propose BetaRCE, the first post-hoc robustification method for CFEs that supports general models and arbitrary change types. BetaRCE models the probability of CFE validity under model perturbations via the Beta distribution, establishing a unified theoretical framework for probabilistic robustness. It requires no retraining, imposes no assumptions on model architecture or training procedure, and is compatible with mainstream CFE generators (e.g., Wachter, DiCE). Its hyperparameters admit clear statistical interpretations, eliminating the need for manual tuning. Experiments across diverse models and data-drift scenarios demonstrate that BetaRCE maintains over 95% explanation validity, significantly outperforming baselines in robustness, plausibility, and proximity.
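The Beta-distribution idea above can be sketched in a few lines: sample perturbed models, count how often the counterfactual stays valid, and read a lower credible bound off the Beta posterior. This is a minimal illustration of the statistical mechanism, not the paper's exact estimator; `is_valid` and `sample_model` are hypothetical callables the caller would supply.

```python
import random

def validity_lower_bound(is_valid, sample_model, n_models=200, delta=0.05,
                         n_draws=10_000, seed=0):
    """Illustrative sketch: a (1 - delta) lower credible bound on the
    probability that a counterfactual remains valid under model perturbations,
    using a Beta posterior over the unknown validity probability.

    `sample_model(rng)` draws one perturbed model; `is_valid(model)` checks
    whether the counterfactual keeps its target class under that model.
    """
    rng = random.Random(seed)
    successes = sum(bool(is_valid(sample_model(rng))) for _ in range(n_models))
    # Uniform Beta(1, 1) prior + binomial likelihood
    # -> Beta(successes + 1, failures + 1) posterior.
    a, b = successes + 1, n_models - successes + 1
    # Empirical delta-quantile of posterior draws = lower credible bound.
    draws = sorted(rng.betavariate(a, b) for _ in range(n_draws))
    return draws[int(delta * n_draws)]
```

On a toy setup where each perturbed model invalidates the CFE about 10% of the time, the returned bound sits somewhat below the empirical validity rate and tightens as `n_models` grows, which is what makes the hyperparameters (confidence level, sample size) directly interpretable.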
Abstract
Counterfactual explanations (CFEs) guide users on how to adjust inputs to machine learning models to achieve desired outputs. While existing research primarily addresses static scenarios, real-world applications often involve data or model changes, potentially invalidating previously generated CFEs and rendering the user's input changes ineffective. Current methods addressing this issue often support only specific models or change types, require extensive hyperparameter tuning, or fail to provide probabilistic guarantees on CFE robustness to model changes. This paper proposes a novel approach to generating CFEs that provides probabilistic guarantees for any model and change type, while offering interpretable and easy-to-select hyperparameters. We establish a theoretical framework for probabilistically defining robustness to model change and demonstrate how our BetaRCE method stems directly from it. BetaRCE is a post-hoc method applied alongside a chosen base CFE generation method to enhance the explanation with robustness beyond what the base method provides. It facilitates a transition from the base explanation to a more robust one, with user-adjusted probability bounds. In experimental comparisons with baselines, we show that BetaRCE yields counterfactual explanations that are robust, the most plausible, and the closest to the baseline.
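The "transition from the base explanation to a more robust one" can be caricatured as a simple search loop: starting from the base counterfactual, keep moving until an estimated lower bound on its validity probability clears a user-chosen target. This is a hedged sketch, not the paper's algorithm; `validity_lb` and `step_towards_confident` are hypothetical helpers (e.g., a Beta-based bound and a move deeper into the target class).

```python
def robustify(x_cf, validity_lb, step_towards_confident, target=0.9,
              max_iters=50):
    """Illustrative post-hoc loop: iteratively adjust a base counterfactual
    `x_cf` until `validity_lb(x)` (a lower bound on its probability of
    staying valid under model change) reaches the user-set `target`."""
    x = x_cf
    for _ in range(max_iters):
        if validity_lb(x) >= target:
            return x  # probabilistic robustness target met
        x = step_towards_confident(x)  # e.g., push further past the boundary
    return x  # budget exhausted; caller may inspect the final bound
```

The `target` here plays the role of an interpretable hyperparameter: it is the user's desired lower bound on the probability that the returned explanation survives a model change.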