🤖 AI Summary
This study addresses the vulnerability of link weight prediction models to adversarial perturbations by presenting the first systematic investigation of their security. The authors propose IGA-LWP, an iterative gradient-based adversarial attack framework that formulates the attack as an optimization problem. Leveraging a self-attention-enhanced graph autoencoder as a surrogate model, IGA-LWP iteratively identifies and perturbs critical links through gradient backpropagation. Extensive experiments on four real-world weighted networks demonstrate that IGA-LWP substantially degrades the prediction accuracy of target models. Moreover, the generated adversarial examples transfer well across multiple state-of-the-art architectures and achieve high perturbation efficiency, uncovering significant security risks in weighted graph neural network inference.
📝 Abstract
Link weight prediction extends classical link prediction by estimating the strength of interactions rather than merely their existence, and it underpins a wide range of applications such as traffic engineering, social recommendation, and scientific collaboration analysis. However, the robustness of link weight prediction against adversarial perturbations remains largely unexplored. In this paper, we formalize the link weight prediction attack problem as an optimization task that aims to maximize the prediction error on a set of target links by adversarially manipulating the weight values of a limited number of links. Based on this formulation, we propose an iterative gradient-based attack framework for link weight prediction, termed IGA-LWP. By employing a self-attention-enhanced graph autoencoder as a surrogate predictor, IGA-LWP leverages backpropagated gradients to iteratively identify and perturb a small subset of links. Extensive experiments on four real-world weighted networks demonstrate that IGA-LWP degrades prediction accuracy on target links significantly more than baseline methods. Moreover, the adversarial networks generated by IGA-LWP exhibit strong transferability across several representative link weight prediction models. These findings expose a fundamental vulnerability in weighted network inference and highlight the need for robust link weight prediction methods.
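To make the attack loop concrete, the following is a minimal, self-contained sketch of the iterative gradient-based scheme the abstract describes. It is not the paper's implementation: the self-attention graph autoencoder is replaced by a hypothetical common-neighbor surrogate whose prediction for link (i, j) is `(W @ W)[i, j] / n`, chosen only because its gradient with respect to the weight matrix is analytic. Each iteration backpropagates the squared prediction error on the target links, selects the `budget` entries with the largest gradient magnitude, and nudges their weights in the ascent direction.

```python
import numpy as np

def iga_lwp_sketch(W, targets, budget=3, iters=5, eps=0.1):
    """Illustrative iterative gradient-based attack on link weights.

    W       -- weighted adjacency matrix (n x n)
    targets -- list of (i, j) target links whose prediction we degrade

    Hypothetical surrogate predictor (a stand-in for the paper's
    self-attention graph autoencoder): pred(i, j) = (W @ W)[i, j] / n,
    i.e. an average over common-neighbor paths.
    """
    W = W.astype(float).copy()
    n = W.shape[0]
    true_vals = {t: W[t] for t in targets}  # ground-truth weights to miss

    for _ in range(iters):
        # Gradient of the loss sum_t (pred_t - y_t)^2 w.r.t. every entry of W.
        # Since pred = sum_k W[i,k] * W[k,j] / n, the gradient touches
        # row i (via W[:, j]) and column j (via W[i, :]).
        G = np.zeros_like(W)
        for (i, j), y in true_vals.items():
            err = (W @ W)[i, j] / n - y
            G[i, :] += 2.0 * err * W[:, j] / n
            G[:, j] += 2.0 * err * W[i, :] / n

        # Perturb only the `budget` links with the largest gradient
        # magnitude, ascending the loss (limited-perturbation constraint).
        for idx in np.argsort(-np.abs(G), axis=None)[:budget]:
            a, b = np.unravel_index(idx, G.shape)
            W[a, b] += eps * np.sign(G[a, b])

    return W
```

The `budget` and `eps` knobs correspond to the limited number of manipulated links and the perturbation step size; the real attack would also re-query (or re-train) the surrogate between iterations rather than reuse a fixed analytic form.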