🤖 AI Summary
Graph neural network (GNN) counterfactual explanation suffers from the inherent mismatch between discrete graph structures and continuous latent representations, compounded by optimization challenges arising from node-permutation invariance.
Method: We propose the first latent-space counterfactual generation framework based on a permutation-equivariant graph variational autoencoder (PE-GVAE). It enables gradient-guided traversal along the decision boundary within a continuous, symmetric latent space, jointly optimizing both graph structure and node attributes—bypassing explicit graph editing or discrete search. The method interfaces with arbitrary black-box GNN classifiers via a differentiable wrapper and incorporates counterfactual distance regularization to ensure semantic plausibility and structural fidelity.
Results: Evaluated on three standard graph benchmarks, our approach improves explanation quality by 23–37% and enhances robustness by 1.8× over state-of-the-art methods.
📝 Abstract
Explaining the predictions of a deep neural network is a nontrivial task, yet high-quality explanations for predictions are often a prerequisite for practitioners to trust these models. Counterfactual explanations aim to explain predictions by finding the ''nearest'' in-distribution alternative input whose prediction changes in a pre-specified way. However, it remains an open question how to define this nearest alternative input, whose solution depends on both the domain (e.g. images, graphs, tabular data, etc.) and the specific application considered. For graphs, this problem is complicated i) by their discrete nature, as opposed to the continuous nature of state-of-the-art graph classifiers; and ii) by the node permutation group acting on the graphs. We propose a method to generate counterfactual explanations for any differentiable black-box graph classifier, utilizing a case-specific permutation equivariant graph variational autoencoder. We generate counterfactual explanations in a continuous fashion by traversing the latent space of the autoencoder across the classification boundary of the classifier, allowing for seamless integration of discrete graph structure and continuous graph attributes. We empirically validate the approach on three graph datasets, showing that our model is consistently high-performing and more robust than the baselines.