AI Summary
This work investigates whether intrinsic adversarial robustness phase transitions exist in graph analysis and establishes the first formal conceptualization and theoretical framework for graph adversarial resilience phase transitions. We model the attack-defense interaction as a multi-objective dynamical system and introduce nonlinear stability analysis to construct an analytically tractable one-dimensional dynamic function, enabling precise identification of system equilibrium points, i.e., phase transition states. Our methodology integrates graph topology perturbation modeling, equilibrium optimization, and GNN robustness evaluation. Extensive experiments across five real-world graph datasets and three representative adversarial attack types demonstrate that our approach significantly outperforms existing defenses in robustness; it achieves phase transition localization error below 3.2% and exhibits strong generalizability across diverse graph structures and attack settings.
Abstract
Adversarial attacks on graph analytics are gaining increased attention. To date, two lines of countermeasures have been proposed to resist various graph adversarial attacks, from the perspective of either the graph per se or graph neural networks. Nevertheless, a fundamental question remains: does an intrinsic adversarial resilience state exist within a graph regime, and how can such a critical state be found if it exists? This paper tackles the above research questions from three unique perspectives: i) we regard the process of adversarial learning on graphs as a complex multi-objective dynamical system and model the behavior of adversarial attacks; ii) we propose a generalized theoretical framework to show the existence of a critical adversarial resilience state; and iii) we develop a condensed one-dimensional function to capture the dynamic variation of the graph regime under perturbations, and pinpoint the critical state by solving for the equilibrium point of the dynamical system. Multi-faceted experiments show that our proposed approach significantly outperforms state-of-the-art defense methods on five commonly used real-world datasets under three representative attacks.
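The core mechanical step described above, pinpointing the critical state as the equilibrium of a condensed one-dimensional dynamical system dr/dt = f(r), can be sketched generically. The dynamics `f` below is a hypothetical stand-in (logistic-style growth minus an attack-pressure term with strength `p`), not the paper's actual resilience function; the equilibrium is located by bisection root-finding on f(r) = 0 and classified as stable when f'(r*) < 0.

```python
# Hedged sketch: locating the equilibrium (phase-transition) state of a
# one-dimensional dynamical system dr/dt = f(r).
# NOTE: f is a HYPOTHETICAL toy dynamics, not the paper's derived function.

def f(r, p):
    """Toy resilience dynamics: logistic growth r*(1-r) minus attack pressure p*r."""
    return r * (1.0 - r) - p * r

def find_equilibrium(p, lo=1e-6, hi=1.0, tol=1e-10):
    """Bisection root-finding for f(r) = 0 on [lo, hi]."""
    flo = f(lo, p)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fm = f(mid, p)
        if flo * fm <= 0:       # root lies in the lower half
            hi = mid
        else:                   # root lies in the upper half
            lo, flo = mid, fm
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def is_stable(r_star, p, eps=1e-6):
    """Equilibrium is stable when f'(r*) < 0 (central-difference estimate)."""
    return (f(r_star + eps, p) - f(r_star - eps, p)) / (2 * eps) < 0

r_star = find_equilibrium(p=0.3)  # analytic equilibrium for this toy f is 1 - p = 0.7
```

For this toy choice of `f`, the nonzero equilibrium r* = 1 - p shrinks as the attack pressure `p` grows, which mirrors the intuition of a resilience state degrading under stronger perturbations; the paper's framework performs the analogous equilibrium analysis on its graph-derived one-dimensional function.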