🤖 AI Summary
To address the challenges of scarce labeled data and decision boundary shift caused by directly using counterfactual samples as training instances in black-box model reconstruction, this paper proposes a counterfactual-aware Wasserstein prototype modeling framework. Unlike conventional approaches that treat counterfactuals as ordinary training samples, our method models them as boundary-aware structured signals. It fuses original samples and counterfactuals via Wasserstein barycenter computation to explicitly capture intra-class distribution structure while constraining boundary drift. The framework jointly integrates counterfactual generation, Wasserstein barycenter estimation, prototype learning, and distribution alignment optimization. Extensive experiments across multiple datasets demonstrate that the surrogate model achieves significantly improved fidelity to the target black-box model; under low-data regimes, it outperforms baseline methods by an average of 12.7%.
📝 Abstract
Counterfactual explanations provide actionable insights by identifying minimal input changes required to achieve a desired model prediction. Beyond their interpretability benefits, counterfactuals can also be leveraged for model reconstruction, where a surrogate model is trained to replicate the behavior of a target model. In this work, we demonstrate that model reconstruction can be significantly improved by recognizing that counterfactuals, which typically lie close to the decision boundary, can serve as informative though less representative samples for both classes. This is particularly beneficial in settings with limited access to labeled data. We propose a method that integrates original data samples with counterfactuals to approximate class prototypes using the Wasserstein barycenter, thereby preserving the underlying distributional structure of each class. This approach enhances the quality of the surrogate model and mitigates the issue of decision boundary shift, which commonly arises when counterfactuals are naively treated as ordinary training instances. Empirical results across multiple datasets show that our method improves fidelity between the surrogate and target models, validating its effectiveness.