If You Want to Be Robust, Be Wary of Initialization

๐Ÿ“… 2025-10-26
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Graph Neural Networks (GNNs) achieve strong predictive performance but suffer from poor adversarial robustness; existing defenses primarily focus on input preprocessing or message-passing modifications, overlooking fundamental training factors such as weight initialization. Method: This paper presents the first systematic investigation into how weight initialization and training epoch count critically influence GNN robustness. We establish the first theoretical framework linking initialization strategies to adversarial vulnerability and derive a general, depth-aware upper bound on GNN robustness. Contribution/Results: Through rigorous theoretical analysis and extensive experiments across multiple GNN architectures, benchmark datasets, and adversarial attack settings, we demonstrate that optimal initialization simultaneously improves clean accuracy and adversarial accuracy by up to 50%โ€”substantially outperforming standard initialization schemes (e.g., Glorot, He). Our findings challenge conventional defense paradigms by revealing that robustness can be fundamentally enhanced at the initialization stage, without modifying model architecture or training objectives.

Technology Category

Application Category

๐Ÿ“ Abstract
Graph Neural Networks (GNNs) have demonstrated remarkable performance across a spectrum of graph-related tasks, however concerns persist regarding their vulnerability to adversarial perturbations. While prevailing defense strategies focus primarily on pre-processing techniques and adaptive message-passing schemes, this study delves into an under-explored dimension: the impact of weight initialization and associated hyper-parameters, such as training epochs, on a model's robustness. We introduce a theoretical framework bridging the connection between initialization strategies and a network's resilience to adversarial perturbations. Our analysis reveals a direct relationship between initial weights, number of training epochs and the model's vulnerability, offering new insights into adversarial robustness beyond conventional defense mechanisms. While our primary focus is on GNNs, we extend our theoretical framework, providing a general upper-bound applicable to Deep Neural Networks. Extensive experiments, spanning diverse models and real-world datasets subjected to various adversarial attacks, validate our findings. We illustrate that selecting appropriate initialization not only ensures performance on clean datasets but also enhances model robustness against adversarial perturbations, with observed gaps of up to 50% compared to alternative initialization approaches.
Problem

Research questions and friction points this paper is trying to address.

Investigating weight initialization impact on GNN robustness
Analyzing relationship between initial weights and adversarial vulnerability
Providing theoretical framework for initialization-based defense strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Initialization strategies impact neural network robustness
Theoretical framework links initialization to adversarial resilience
Proper initialization enhances robustness by up to 50%
๐Ÿ”Ž Similar Papers
No similar papers found.