Natural Language Counterfactual Explanations for Graphs Using Large Language Models

📅 2024-10-11
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Graph neural network (GNN) counterfactual explanations, often produced by XAI methods such as GNN-LRP and CF-GNNExplainer, are typically structural and inaccessible to non-expert users. Method: This paper introduces the first end-to-end framework that automatically translates structured graph counterfactual instances into natural-language "what-if" explanations. It pairs open-source large language models (e.g., the Llama and Phi series) with graph counterfactual generators through tailored prompt engineering to reconstruct the semantics of each counterfactual. Contribution/Results: Evaluated on multiple benchmark graph datasets, the method improves explanation faithfulness and readability: BLEU-4 increases by 23.6%, Fréchet Inception Distance (FID) decreases by 18.4%, and human evaluation yields an average satisfaction score of 4.6/5. This work marks a step toward human-centered, practical GNN explainability.
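The prompt-engineering step described above can be illustrated with a minimal sketch: serializing a structural counterfactual (edges added or removed) into a natural-language "what-if" query for an LLM. The `GraphCounterfactual` class, the prompt template, and the example labels are all illustrative assumptions, not the paper's actual code.

```python
# Hypothetical sketch of the counterfactual-to-prompt step.
# The data structure and wording are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class GraphCounterfactual:
    original_label: str                     # model prediction on the input graph
    counterfactual_label: str               # prediction after the edits below
    removed_edges: list = field(default_factory=list)  # (node, node) pairs
    added_edges: list = field(default_factory=list)

def build_prompt(cf: GraphCounterfactual) -> str:
    """Render a structured graph counterfactual as a plain-language query
    that an open-source LLM could answer with a 'what-if' explanation."""
    edits = []
    for u, v in cf.removed_edges:
        edits.append(f"remove the edge between {u} and {v}")
    for u, v in cf.added_edges:
        edits.append(f"add an edge between {u} and {v}")
    return (
        f"A graph classifier predicts '{cf.original_label}'. "
        f"If we {', and '.join(edits)}, the prediction changes to "
        f"'{cf.counterfactual_label}'. Explain this change in plain language."
    )

# Example: a molecule-style counterfactual with one deleted bond.
cf = GraphCounterfactual("mutagenic", "non-mutagenic",
                         removed_edges=[("C1", "N2")])
print(build_prompt(cf))
```

The resulting string would be passed to the LLM (e.g., a Llama or Phi model) as part of a larger prompt; the paper's actual templates are more elaborate and dataset-specific.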

📝 Abstract
Explainable Artificial Intelligence (XAI) has emerged as a critical area of research to unravel the opaque inner logic of (deep) machine learning models. Among the various XAI techniques proposed in the literature, counterfactual explanations stand out as one of the most promising approaches. However, these "what-if" explanations are frequently complex and technical, making them difficult for non-experts to understand and, more broadly, challenging for humans to interpret. To bridge this gap, in this work, we exploit the power of open-source Large Language Models to generate natural language explanations when prompted with valid counterfactual instances produced by state-of-the-art explainers for graph-based models. Experiments across several graph datasets and counterfactual explainers show that our approach effectively produces accurate natural language representations of counterfactual instances, as demonstrated by key performance metrics.
Problem

Research questions and friction points this paper is trying to address.

Complex Graphical Models
Natural Language Explanation
Human Interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Counterfactual Explanations
Graph-based Models Interpretability