🤖 AI Summary
Quantifying the influence of individual edge insertions or deletions on Graph Neural Network (GNN) outputs remains challenging: existing influence functions rely on convexity assumptions, support only edge deletions, and neglect the resulting changes in message passing. Method: We propose the Proximal Bregman Response Function (PBRF), an edge-level influence estimator that lifts the convexity constraint, models both edge insertion and deletion in a unified way, and explicitly captures the resulting multi-hop changes in message propagation. PBRF combines proximal optimization, Bregman divergence, and gradient sensitivity analysis of the GNN to enable efficient, accurate influence estimation. Contribution/Results: On multiple real-world graph benchmarks, PBRF predicts edge influence significantly more accurately than state-of-the-art baselines. It also enables practical applications, including graph rewiring and adversarial attack generation, demonstrating both theoretical soundness and empirical utility.
📝 Abstract
Understanding how individual edges influence the behavior of graph neural networks (GNNs) is essential for improving their interpretability and robustness. Graph influence functions have emerged as promising tools for efficiently estimating the effects of edge deletions without retraining. However, existing influence prediction methods rely on strict convexity assumptions, consider only the influence of edge deletions while disregarding edge insertions, and fail to capture changes in message propagation caused by these modifications. In this work, we propose a proximal Bregman response function specifically tailored to GNNs, relaxing the convexity requirement and enabling accurate influence prediction for standard neural network architectures. Furthermore, our method explicitly accounts for message propagation effects and extends influence prediction to both edge deletions and insertions in a principled way. Experiments on real-world datasets demonstrate accurate influence predictions for GNNs with different characteristics. We further demonstrate that the influence function is versatile in applications such as graph rewiring and adversarial attacks.
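To make the general idea of edge-level influence concrete, the following is a minimal illustrative sketch of gradient-based influence estimation on a toy one-layer linear GNN. All names, the model, and the loss here are our own assumptions for illustration; the actual PBRF additionally involves proximal optimization and a Bregman divergence, which this sketch omits. The first-order idea shown is that the effect of deleting an edge can be estimated from the gradient of the training loss with respect to the corresponding adjacency entry, without retraining.

```python
import numpy as np

# Illustrative sketch only, NOT the paper's PBRF: a toy one-layer
# linear GNN with predictions Z = A @ X @ W, where deleting edge
# (i, j) is approximated first-order via dL/dA[i, j].

def loss(A, X, W, y, node):
    """Squared-error loss of the prediction at a single training node."""
    z = A @ X @ W                       # one round of message passing
    return 0.5 * (z[node, 0] - y) ** 2

def edge_grad(A, X, W, y, node, i, j):
    """Closed-form dL/dA[i, j] for the linear model above."""
    z = A @ X @ W
    # z[node, 0] depends on A[i, j] only through row `node` of A.
    dz = (X @ W)[j, 0] if i == node else 0.0
    return (z[node, 0] - y) * dz

def edge_influence(A, X, W, y, node, i, j):
    """First-order estimate of the loss change from deleting edge (i, j)."""
    return -A[i, j] * edge_grad(A, X, W, y, node, i, j)

rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])         # toy path graph
X = rng.normal(size=(3, 2))             # node features
W = rng.normal(size=(2, 1))             # (hypothetical) trained weights
y, node = 1.0, 0                        # label and training node

print("estimated influence of deleting edge (0, 1):",
      edge_influence(A, X, W, y, node, 0, 1))
```

With deeper GNNs, the adjacency gradient would be obtained by automatic differentiation rather than in closed form, and a single edge change propagates through multiple hops of message passing, which is exactly the effect the paper's estimator is designed to capture.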