Influence Functions for Edge Edits in Non-Convex Graph Neural Networks

📅 2025-06-05
🤖 AI Summary
Quantifying the influence of individual edge insertions or deletions on Graph Neural Network (GNN) outputs remains challenging—existing influence functions are restricted by convexity assumptions, support only edge deletions, and neglect dynamic message-passing effects. Method: We propose the Proximal Bregman Response Function (PBRF), a novel edge-level influence estimator that lifts convexity constraints, unifies modeling of both edge insertion and deletion, and explicitly captures resulting multi-hop message propagation changes. PBRF integrates proximal optimization, Bregman divergence, and GNN gradient sensitivity analysis to enable efficient, accurate influence estimation. Contribution/Results: On multiple real-world graph benchmarks, PBRF achieves significantly higher edge-influence prediction accuracy than state-of-the-art baselines. Moreover, it successfully enables practical applications including graph structural rewiring and adversarial attack generation, demonstrating both theoretical soundness and empirical utility.
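The core idea of gradient-based edge influence can be illustrated with a toy example. The sketch below is **not** the paper's PBRF: it assumes a one-layer *linear* GNN `f(A, X) = A X W` (so no convexity issues arise) and approximates the effect of a single edge insertion or deletion on the loss by a first-order Taylor expansion in the corresponding adjacency entry. All names (`loss`, `grad_wrt_A`, the random problem sizes) are illustrative.

```python
import numpy as np

# Minimal sketch (assumption: one-layer linear GNN f(A, X) = A @ X @ W).
# The influence of editing edge (u, v) on a loss L is approximated to first
# order as dL/dA[u, v] * delta, with delta = +1 (insert) or -1 (delete).
# This is a generic gradient-sensitivity illustration, not the paper's PBRF.

rng = np.random.default_rng(0)
n, d = 5, 3
A = (rng.random((n, n)) < 0.4).astype(float)   # random directed graph
np.fill_diagonal(A, 0.0)
X = rng.standard_normal((n, d))                # node features
W = rng.standard_normal((d, 1))                # model weights (fixed)
y = rng.standard_normal((n, 1))                # regression targets

def loss(adj):
    pred = adj @ X @ W                         # one round of message passing
    return 0.5 * np.sum((pred - y) ** 2)

def grad_wrt_A(adj):
    pred = adj @ X @ W
    return (pred - y) @ (X @ W).T              # dL/dA, shape (n, n)

u, v = 0, 1
delta = -1.0 if A[u, v] == 1.0 else 1.0        # delete if present, else insert

# First-order influence estimate vs. the exact change from editing the edge.
est = grad_wrt_A(A)[u, v] * delta
A_new = A.copy()
A_new[u, v] += delta
exact = loss(A_new) - loss(A)
print("estimated influence:", est, " exact change:", exact)
```

Because the toy loss is quadratic in each adjacency entry, the estimate differs from the exact change only by a second-order curvature term; the paper's contribution is making this kind of prediction accurate for non-convex GNNs and multi-hop propagation, where a naive first-order expansion breaks down.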

📝 Abstract
Understanding how individual edges influence the behavior of graph neural networks (GNNs) is essential for improving their interpretability and robustness. Graph influence functions have emerged as promising tools to efficiently estimate the effects of edge deletions without retraining. However, existing influence prediction methods rely on strict convexity assumptions, exclusively consider the influence of edge deletions while disregarding edge insertions, and fail to capture changes in message propagation caused by these modifications. In this work, we propose a proximal Bregman response function specifically tailored for GNNs, relaxing the convexity requirement and enabling accurate influence prediction for standard neural network architectures. Furthermore, our method explicitly accounts for message propagation effects and extends influence prediction to both edge deletions and insertions in a principled way. Experiments with real-world datasets demonstrate accurate influence predictions for different characteristics of GNNs. We further demonstrate that the influence function is versatile in applications such as graph rewiring and adversarial attacks.
Problem

Research questions and friction points this paper is trying to address.

Estimating edge influence in non-convex GNNs without retraining
Extending influence prediction to edge deletions and insertions
Capturing message propagation changes from graph modifications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proximal Bregman function for non-convex GNNs
Predicts influence of edge deletions and insertions
Accounts for message propagation effects
Jaeseung Heo
Graduate School of Artificial Intelligence, POSTECH, South Korea
Kyeongheung Yun
Department of Computer Science & Engineering, POSTECH, South Korea
Seokwon Yoon
Department of Computer Science & Engineering, POSTECH, South Korea
MoonJeong Park
Graduate student, POSTECH
Machine Learning, Graph Neural Networks, Dynamical Systems
Jungseul Ok
Associate Professor, CSE/AI, POSTECH
Reinforcement Learning, Machine Learning
Dongwoo Kim
Graduate School of Artificial Intelligence, Department of Computer Science & Engineering, POSTECH, South Korea