🤖 AI Summary
This work addresses the vulnerability of Graph Neural Networks (GNNs) to subgraph injection attacks in realistic settings. We propose a novel black-box adversarial attack paradigm: by injecting isolated malicious subgraphs and exploiting a link recommendation module as a bridge, the attack stealthily induces users to connect target nodes to the injected subgraphs, thereby degrading node classification accuracy. The key innovation is twofold: the first formal modeling of link recommendation systems as attack intermediaries, and a dual-agent collaborative framework that employs two surrogate models with bilevel optimization to jointly optimize link misdirection and classification performance degradation. Extensive experiments on multiple real-world graph datasets demonstrate significant improvements over baselines: induced user connection rates increase by 37%–62%, while node classification accuracy drops by an average of 28.5%.
📝 Abstract
Graph Neural Networks (GNNs) have demonstrated remarkable proficiency in modeling data with graph structures, yet recent research reveals their susceptibility to adversarial attacks. Traditional attack methodologies, which rely on manipulating the original graph or adding links to artificially created nodes, often prove impractical in real-world settings. This paper introduces a novel adversarial scenario involving the injection of an isolated subgraph to deceive both the link recommender and the node classifier within a GNN system. Specifically, the link recommender is misled into proposing links between targeted victim nodes and the subgraph, encouraging users to unintentionally establish connections that degrade node classification accuracy, thereby facilitating a successful attack. To realize this attack, we present the LiSA framework, which employs a dual surrogate model and bi-level optimization to simultaneously meet the two adversarial objectives. Extensive experiments on real-world datasets demonstrate the effectiveness of our method.
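The dual-surrogate, bi-level structure described above can be illustrated with a deliberately minimal toy sketch. This is not the LiSA implementation: the functions `f_link` (link-recommender surrogate score) and the quadratic classifier-loss stand-in, the injected-edge parameter `theta`, the clean weights `0.0`, and the trade-off weight `lam` are all hypothetical scalar placeholders; the real framework operates on GNN surrogates over graph data. The sketch only shows the control flow: an inner loop trains the classifier surrogate on the poisoned graph, and an outer loop ascends a combined objective that rewards both link misdirection and classifier degradation.

```python
# Toy bi-level attack loop. All objectives are scalar stand-ins
# (assumptions), chosen so the optimum can be checked by hand.

def f_link(theta):
    # Surrogate link-recommendation score: how strongly the recommender
    # proposes links to the injected subgraph (to be maximized).
    return -(theta - 2.0) ** 2

def inner_train(theta, steps=50, lr=0.1):
    # Lower level: train the classifier surrogate weight w on the
    # poisoned graph; with this toy loss (w - theta)^2, w -> theta.
    w = 0.0
    for _ in range(steps):
        grad_w = 2.0 * (w - theta)
        w -= lr * grad_w
    return w

def attack(steps=100, lr=0.1, lam=0.5, eps=1e-4):
    # Upper level: choose the injection parameter theta to jointly
    # mislead the recommender and degrade the trained classifier
    # (distance of its weights from the clean optimum, here 0.0).
    def outer(t):
        w = inner_train(t)
        return f_link(t) + lam * (w - 0.0) ** 2

    theta = 0.0
    for _ in range(steps):
        # Black-box-friendly hypergradient via central finite differences
        # through the entire inner training loop.
        g = (outer(theta + eps) - outer(theta - eps)) / (2.0 * eps)
        theta += lr * g  # gradient *ascent* on the attack objective
    return theta

theta = attack()  # converges near 4.0 under these toy objectives
```

With `lam = 0.5` the outer objective is `-(theta - 2)^2 + 0.5 * theta^2`, whose maximizer is `theta = 4`, so the loop's convergence is easy to verify; in the actual framework both levels are high-dimensional and the two surrogates replace these scalars.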