🤖 AI Summary
This paper introduces and solves the common recourse problem in global counterfactual explanations for Graph Neural Networks (GNNs): finding a small, reusable set of recourses (structural modifications) by which every input graph classified as "reject" can be transformed into an "accept" graph. While local counterfactual explanations have been studied extensively, this global, graph-level variant had remained unexplored. The authors formalize the common recourse explanation problem and design an effective algorithm, COMRECGC, to solve it. Evaluated on four real-world graph datasets, COMRECGC significantly outperforms strong baselines, and the resulting common recourse explanations are comparable or superior to conventional graph counterfactual explanations, making them worth considering in high-impact domains such as drug discovery and computational biology.
📝 Abstract
Graph neural networks (GNNs) have been widely used in various domains such as social networks, molecular biology, and recommendation systems. Concurrently, various explanation methods for GNNs have arisen to complement their black-box nature. Explanations of GNNs' predictions can be categorized into two types: factual and counterfactual. Given a GNN trained for binary classification into "accept" and "reject" classes, a global counterfactual explanation consists of generating a small set of "accept" graphs relevant to all of the input "reject" graphs. The transformation of a "reject" graph into an "accept" graph is called a recourse. A common recourse explanation is a small set of recourses by which every "reject" graph can be turned into an "accept" graph. Although local counterfactual explanations have been studied extensively, the problem of finding common recourses for global counterfactual explanation remains unexplored, particularly for GNNs. In this paper, we formalize the common recourse explanation problem and design an effective algorithm, COMRECGC, to solve it. We benchmark our algorithm against strong baselines on four different real-world graph datasets and demonstrate the superior performance of COMRECGC against the competitors. We also compare common recourse explanations to global graph counterfactual explanations, showing that common recourse explanations are either comparable or superior, making them worth considering for applications such as drug discovery or computational biology.
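The common recourse problem described above can be viewed as a covering problem: pick a small set of graph modifications so that every "reject" graph is flipped to "accept" by at least one of them. The sketch below is a toy illustration of that framing only, not the paper's COMRECGC algorithm: graphs are plain edge sets, the "GNN" is a hypothetical stand-in threshold classifier, recourses are edge additions, and the small set is chosen by greedy set cover.

```python
# Toy illustration of common recourse as a covering problem.
# All names (classify, apply_recourse, greedy_common_recourse) are
# illustrative; the real setting uses a trained GNN classifier.

def classify(edges):
    """Stand-in binary classifier: 'accept' iff the graph has >= 3 edges."""
    return "accept" if len(edges) >= 3 else "reject"

def apply_recourse(edges, recourse):
    """A recourse here is simply a set of edges to add to the graph."""
    return edges | recourse

def greedy_common_recourse(reject_graphs, candidate_recourses):
    """Greedily pick recourses, each time choosing the one that flips
    the most still-uncovered 'reject' graphs to 'accept'."""
    uncovered = list(range(len(reject_graphs)))
    chosen = []
    while uncovered:
        best, best_fixed = None, []
        for r in candidate_recourses:
            fixed = [i for i in uncovered
                     if classify(apply_recourse(reject_graphs[i], r)) == "accept"]
            if len(fixed) > len(best_fixed):
                best, best_fixed = r, fixed
        if best is None:  # no candidate helps the remaining graphs
            break
        chosen.append(best)
        uncovered = [i for i in uncovered if i not in best_fixed]
    return chosen

# Two 'reject' graphs (1 and 2 edges) and two candidate recourses.
rejects = [frozenset({(0, 1)}), frozenset({(0, 1), (1, 2)})]
candidates = [frozenset({(2, 3), (3, 4)}), frozenset({(5, 6)})]
common = greedy_common_recourse(rejects, candidates)
```

Here a single recourse (adding two edges) suffices to flip both reject graphs, so the greedy cover returns a set of size one. The paper's algorithm operates on real GNNs and realistic graph edits, where both finding good candidate recourses and covering all reject graphs are substantially harder.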