🤖 AI Summary
To address the insufficient robustness of Graph Foundation Models (GFMs) under domain noise, structural perturbations, and adversarial attacks, this paper proposes a Structure-Aware Semantic Augmentation framework (SA^2GFM). The framework integrates hierarchical structural prior encoding, structure-guided information bottleneck compression, mixture-of-experts routing with a null expert, and community-aware joint structural fine-tuning. It further introduces structure-aware textual prompt generation and self-supervised contrastive learning to strengthen cross-domain semantic alignment. On node- and graph-classification tasks, the model consistently outperforms nine state-of-the-art methods, and under both random noise and adversarial perturbations it shows markedly improved robustness, achieving an average 5.3% gain in cross-domain transfer accuracy.
📝 Abstract
Graph Foundation Models (GFMs) have made significant progress on various tasks, but their robustness against domain noise, structural perturbations, and adversarial attacks remains underexplored. A key limitation is the insufficient modeling of hierarchical structural semantics, which are crucial for generalization. In this paper, we propose SA^2GFM, a robust GFM framework that improves domain-adaptive representations through Structure-Aware Semantic Augmentation. First, we encode hierarchical structural priors by transforming entropy-based encoding trees into structure-aware textual prompts for feature augmentation. The enhanced inputs are processed by a self-supervised Information Bottleneck mechanism that distills robust, transferable representations via structure-guided compression. To address negative transfer in cross-domain adaptation, we introduce an expert adaptive routing mechanism that combines a mixture-of-experts architecture with a null expert design. For efficient downstream adaptation, we propose a fine-tuning module that optimizes hierarchical structures through joint intra- and inter-community structure learning. Extensive experiments demonstrate that SA^2GFM outperforms nine state-of-the-art baselines in effectiveness and in robustness against random noise and adversarial perturbations on node and graph classification.
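The null-expert idea in the routing mechanism can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the paper's implementation: the gating scores, expert functions, and soft-routing combination below are assumptions; the paper's actual router may use hard top-k selection and learned gates. The key point shown is that the gate scores one extra "null" option, an identity expert, so out-of-domain inputs can bypass all domain experts rather than being forced through one, which is one way to mitigate negative transfer.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of gate scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route_with_null_expert(x, experts, gate_scores):
    """Soft mixture-of-experts routing with a null (identity) expert.

    x           : input feature vector (list of floats)
    experts     : list of callables, each mapping a vector to a vector
    gate_scores : one score per expert PLUS a final score for the
                  null expert (so len == len(experts) + 1)
    """
    assert len(gate_scores) == len(experts) + 1
    probs = softmax(gate_scores)
    # The null expert is the identity map: it leaves x unchanged.
    outputs = [f(x) for f in experts] + [list(x)]
    # Probability-weighted combination of all expert outputs.
    dim = len(x)
    return [sum(p * out[i] for p, out in zip(probs, outputs))
            for i in range(dim)]
```

When the gate strongly prefers the null expert (for example, for an input far from every expert's domain), the output stays close to the input, so no domain expert's transformation is imposed on it.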