🤖 AI Summary
This work proposes a novel counterfactual training mechanism that integrates the objective of counterfactual generation directly into the model training process. It addresses a key limitation of existing methods, which often produce explanations lacking data plausibility and feature manipulability, both of which are requirements for supporting real-world decision-making. By incorporating priors on feature mutability, modeling the underlying data distribution, and employing constrained optimization, the approach guides the model to learn representations that yield plausible and actionable counterfactual explanations. Experimental results demonstrate that the proposed method not only generates high-quality counterfactuals but also significantly enhances the model's adversarial robustness, offering dual benefits in interpretability and reliability.
📝 Abstract
We propose a novel training regime termed counterfactual training that leverages counterfactual explanations to increase the explanatory capacity of models. Counterfactual explanations have emerged as a popular post-hoc explanation method for opaque machine learning models: they inform how factual inputs would need to change in order for a model to produce some desired output. To be useful in real-world decision-making systems, counterfactuals should be plausible with respect to the underlying data and actionable with respect to the feature mutability constraints. Much existing research has therefore focused on developing post-hoc methods to generate counterfactuals that meet these desiderata. In this work, we instead hold models directly accountable for the desired end goal: counterfactual training employs counterfactuals during the training phase to minimize the divergence between learned representations and plausible, actionable explanations. We demonstrate empirically and theoretically that our proposed method facilitates training models that deliver inherently desirable counterfactual explanations and additionally exhibit improved adversarial robustness.
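To make the notion of an "actionable" counterfactual concrete, the following is a minimal, illustrative sketch (not the paper's actual algorithm): a gradient-based counterfactual search over a toy logistic model, where a feature-mutability mask encodes the prior that only some features may be changed. All names (`mutable`, `counterfactual`, the weights) are hypothetical assumptions for this example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Assumed toy logistic model p(y=1|x) = sigmoid(w.x + b); weights are illustrative.
w, b = [2.0, 1.5], 0.0
mutable = [True, False]  # mutability prior: only feature 0 is actionable

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def counterfactual(x, target, steps=200, lr=0.05):
    """Gradient-descent search for a counterfactual, restricted to mutable features."""
    x_cf = list(x)
    for _ in range(steps):
        p = predict(x_cf)
        if (p > 0.5) == (target == 1):
            break  # stop as soon as the prediction flips to the desired class
        # For a logistic model, d(BCE)/dx_i = (p - target) * w_i
        for i, is_mutable in enumerate(mutable):
            if is_mutable:
                x_cf[i] -= lr * (p - target) * w[i]
    return x_cf

x = [-1.0, 0.2]                       # factual input, predicted class 0
x_cf = counterfactual(x, target=1)    # actionable counterfactual: only x[0] changes
```

Counterfactual training, as described in the abstract, would additionally feed a penalty on such counterfactuals (e.g., their distance from the data distribution) back into the training loss, rather than treating the search purely as a post-hoc step.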