From Nodes to Narratives: Explaining Graph Neural Networks with LLMs and Graph Context

📅 2025-08-09
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Graph Neural Networks (GNNs) exhibit limited interpretability on text-attributed graphs; existing methods struggle to generate faithful, fine-grained, and semantically coherent natural-language explanations. Method: We propose LOGIC, the first framework that aligns GNN node embeddings with the latent space of a large language model (LLM) via contrastive learning and employs hybrid soft prompting to jointly encode graph structure and textual inputs, enabling the LLM to directly generate narrative explanations grounded in the GNN's internal representations while simultaneously extracting a concise explanation subgraph. Contribution/Results: LOGIC requires no post-training or human annotation, enabling end-to-end interpretable reasoning. Experiments on four real-world datasets show that LOGIC achieves a superior trade-off between explanation fidelity and sparsity, significantly improving human comprehensibility and analytical insight over state-of-the-art baselines.
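The contrastive alignment step described above can be illustrated with a generic InfoNCE-style sketch: a learned linear map projects GNN node embeddings into the LLM's embedding dimension, and matched (node embedding, text embedding) pairs are pulled together while mismatched pairs are pushed apart. This is a minimal pure-Python illustration of the general technique, not LOGIC's actual implementation; the projection matrix `W`, the dimensions, and the temperature value are all assumptions.

```python
import math

def project(emb, W):
    """Map one GNN node embedding (length d_g) into LLM space (length d_l)
    via an assumed learned linear projection W (d_g x d_l)."""
    d_l = len(W[0])
    return [sum(emb[i] * W[i][j] for i in range(len(emb))) for j in range(d_l)]

def _cos(u, v):
    """Cosine similarity between two vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def info_nce(z_graph, z_text, tau=0.1):
    """One-directional InfoNCE loss: for each projected node embedding,
    the matching text embedding (same index) is the positive; all other
    text embeddings in the batch are negatives."""
    n = len(z_graph)
    loss = 0.0
    for i in range(n):
        logits = [_cos(z_graph[i], z_text[j]) / tau for j in range(n)]
        m = max(logits)  # subtract max for numerical stability
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_denom)  # -log softmax at the positive index
    return loss / n
```

Intuitively, when the projected node embeddings already coincide with their text embeddings, the loss is near zero; for unrelated embeddings it approaches log(batch size).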

๐Ÿ“ Abstract
Graph Neural Networks (GNNs) have emerged as powerful tools for learning over structured data, including text-attributed graphs (TAGs), which are common in domains such as citation networks, social platforms, and knowledge graphs. GNNs are not inherently interpretable, and thus many explanation methods have been proposed. However, existing explanation methods often struggle to generate interpretable, fine-grained rationales, especially when node attributes include rich natural language. In this work, we introduce LOGIC, a lightweight, post-hoc framework that uses large language models (LLMs) to generate faithful and interpretable explanations for GNN predictions. LOGIC projects GNN node embeddings into the LLM embedding space and constructs hybrid prompts that interleave soft prompts with textual inputs from the graph structure. This enables the LLM to reason about GNN internal representations and produce natural-language explanations along with concise explanation subgraphs. Our experiments across four real-world TAG datasets demonstrate that LOGIC achieves a favorable trade-off between fidelity and sparsity, while significantly improving human-centric metrics such as insightfulness. LOGIC sets a new direction for LLM-based explainability in graph learning by aligning GNN internals with human reasoning.
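The hybrid prompting idea in the abstract (interleaving soft prompts with textual inputs) can be sketched as assembling a single embedding sequence for a frozen LLM, where some positions come from ordinary text-token embeddings and others are "soft tokens" produced by projecting GNN node embeddings. This is a schematic sketch under assumed shapes; the segment layout, the `kind` labels, and the helper name are illustrative, not LOGIC's actual API.

```python
def build_hybrid_prompt(segments):
    """Concatenate text-token embeddings and soft (projected GNN) embeddings
    into one input embedding sequence for a frozen LLM.

    segments: ordered list of (kind, rows) pairs, with kind in {"text", "soft"};
    each rows is a list of d_llm-dimensional vectors (lists of floats).
    Returns the flat sequence of embedding vectors, in order.
    """
    d = len(segments[0][1][0])  # d_llm inferred from the first segment
    seq = []
    for kind, rows in segments:
        assert kind in ("text", "soft"), "unknown segment kind"
        assert all(len(r) == d for r in rows), "all segments must share d_llm"
        seq.extend(rows)
    return seq
```

A typical layout might place an instruction segment first, then the soft node tokens for the target node and its neighbors, then the question text, so the LLM can attend jointly to graph-derived and textual positions.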
Problem

Research questions and friction points this paper is trying to address.

Generating interpretable explanations for GNN predictions
Handling rich natural language in node attributes
Aligning GNN internal representations with human reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based post-hoc GNN explanation framework
Hybrid prompts combining soft and textual inputs
Aligns GNN internals with human reasoning