🤖 AI Summary
Existing graph pre-training and prompt-tuning methods rely on low-frequency spectral knowledge under the homophily assumption, making them ill-suited for real-world graphs with heterogeneous spectral distributions (mixtures of homophilous and heterophilous structure). This induces a significant spectral-domain gap between pre-training and downstream tasks, limiting generalization in low-supervision regimes. To address this, the authors propose HS-GPPT, the first framework unifying *heterogeneous-spectrum graph pre-training* with *spectrally aligned prompt tuning*. It introduces learnable hybrid-spectrum filters to capture multi-order spectral characteristics, employs task-adaptive prompt graphs for spectral calibration, and integrates local-global contrastive learning to strengthen representation robustness. Extensive experiments show that HS-GPPT consistently outperforms state-of-the-art methods in both transductive and inductive settings and validate its strong transferability between homophilous and heterophilous graphs across diverse real-world benchmarks.
📝 Abstract
Graph "pre-training and prompt-tuning" aligns downstream tasks with pre-trained objectives to enable efficient knowledge transfer under limited supervision. However, existing methods rely on homophily-based low-frequency knowledge, failing to handle the diverse spectral distributions of real-world graphs with varying homophily. Our theoretical analysis reveals a spectral specificity principle: optimal knowledge transfer requires alignment between the pre-trained spectral filters and the intrinsic spectrum of the downstream graphs. Under limited supervision, large spectral gaps between pre-training and downstream tasks impede effective adaptation. To bridge this gap, we propose HS-GPPT, a novel framework that ensures spectral alignment throughout both pre-training and prompt-tuning. We use a hybrid spectral filter backbone and local-global contrastive learning to acquire abundant spectral knowledge, then design prompt graphs that align the downstream spectral distribution with the pretext tasks, facilitating spectral knowledge transfer across homophily and heterophily. Extensive experiments validate its effectiveness in both transductive and inductive learning settings. Our code is available at https://anonymous.4open.science/r/HS-GPPT-62D2/.
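To make the "hybrid spectral filter" idea concrete, here is a minimal NumPy sketch (not the authors' implementation; all function names and coefficients are illustrative assumptions). It mixes low-pass powers of `I - L` with high-pass powers of the normalized Laplacian `L`, where the mixing weights `alphas`/`betas` stand in for the learnable filter coefficients that would be trained during pre-training:

```python
# Hypothetical sketch of a hybrid low/high-pass spectral filter on a graph.
# Not the HS-GPPT code: names and coefficient choices are illustrative only.
import numpy as np

def normalized_laplacian(adj):
    """L = I - D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    return np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

def hybrid_filter(adj, x, alphas, betas):
    """Apply sum_k alphas[k]*(I - L)^k x  +  betas[k]*L^k x to node features x.

    (I - L)^k acts as a low-pass (homophily-friendly) component and L^k as a
    high-pass (heterophily-friendly) one; alphas/betas play the role of
    learnable spectral coefficients.
    """
    L = normalized_laplacian(adj)
    low = np.eye(len(adj)) - L
    out = np.zeros_like(x, dtype=float)
    low_pow = np.eye(len(adj))   # (I - L)^0
    high_pow = np.eye(len(adj))  # L^0
    for a, b in zip(alphas, betas):
        out += a * (low_pow @ x) + b * (high_pow @ x)
        low_pow = low_pow @ low
        high_pow = high_pow @ L
    return out

# Tiny example: a 3-node path graph with a scalar feature per node.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.array([[1.0], [0.0], [1.0]])
Z = hybrid_filter(A, X, alphas=[0.5, 0.3], betas=[0.1, 0.1])
```

Giving both branches their own coefficients is what lets such a filter adapt its frequency response to a graph's homophily level rather than committing to a fixed low-pass prior.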