Generalize across Homophily and Heterophily: Hybrid Spectral Graph Pre-Training and Prompt Tuning

📅 2025-08-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing graph pre-training and prompt-tuning methods rely on low-frequency spectral knowledge under the homophily assumption, making them ill-suited for real-world graphs with heterogeneous spectral distributions, i.e., mixtures of homophilous and heterophilous structure. This induces a significant spectral-domain gap between pre-training and downstream tasks, limiting generalization under low-supervision regimes. To address this, the authors propose HS-GPPT, the first framework unifying *heterogeneous-spectrum graph pre-training* with *spectrally aligned prompt tuning*. It introduces learnable hybrid-spectrum filters to capture multi-order spectral characteristics, employs task-adaptive prompt graphs for spectral calibration, and integrates local-global contrastive learning to enhance representation robustness. Extensive experiments show that HS-GPPT consistently outperforms state-of-the-art methods in both transductive and inductive settings and validate strong transferability and adaptability across homophilous and heterophilous real-world graph benchmarks.

📝 Abstract
Graph "pre-training and prompt-tuning" aligns downstream tasks with pre-trained objectives to enable efficient knowledge transfer under limited supervision. However, existing methods rely on homophily-based low-frequency knowledge, failing to handle diverse spectral distributions in real-world graphs with varying homophily. Our theoretical analysis reveals a spectral specificity principle: optimal knowledge transfer requires alignment between pre-trained spectral filters and the intrinsic spectrum of downstream graphs. Under limited supervision, large spectral gaps between pre-training and downstream tasks impede effective adaptation. To bridge this gap, we propose the HS-GPPT model, a novel framework that ensures spectral alignment throughout both pre-training and prompt-tuning. We utilize a hybrid spectral filter backbone and local-global contrastive learning to acquire abundant spectral knowledge. Then we design prompt graphs to align the spectral distribution with pretexts, facilitating spectral knowledge transfer across homophily and heterophily. Extensive experiments validate the effectiveness under both transductive and inductive learning settings. Our code is available at https://anonymous.4open.science/r/HS-GPPT-62D2/.
Problem

Research questions and friction points this paper is trying to address.

Addressing spectral misalignment in graph pre-training and downstream tasks
Enabling knowledge transfer across homophily and heterophily graphs
Bridging spectral gaps between pre-training and limited-supervision tasks
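The homophily/heterophily distinction the paper transfers across is commonly quantified by the edge homophily ratio, the fraction of edges joining same-label nodes. A minimal sketch (not taken from the paper's code):

```python
def edge_homophily(edges, labels):
    """Edge homophily ratio: fraction of edges whose endpoints share a label.

    Values near 1 indicate a homophilous graph (low-frequency structure
    dominates); values near 0 indicate heterophily (high-frequency structure).
    """
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

# Toy 4-cycle with two label classes.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
labels = [0, 0, 1, 1]
print(edge_homophily(edges, labels))  # 0.5: two of four edges are same-label
```

A pre-trained low-pass model transferred to a graph with a low ratio is exactly the spectral-gap scenario the paper targets.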
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid spectral filter backbone
Local-global contrastive learning
Spectral-aligned prompt graphs
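The hybrid spectral filter backbone is described only at a high level here; one plausible instantiation (an illustrative sketch, not the paper's implementation) is a polynomial graph filter whose learnable coefficients can realize both low-pass (homophily-friendly) and high-pass (heterophily-friendly) responses:

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, np.power(d, -0.5, where=d > 0), 0.0)
    D_inv_sqrt = np.diag(d_inv_sqrt)
    return np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt

def hybrid_spectral_filter(A, X, theta):
    """Apply a polynomial spectral filter h(L) = sum_k theta_k L^k to features X.

    The coefficients `theta` stand in for the learnable part: weight on low
    powers of L yields a low-pass filter, weight on higher powers passes high
    frequencies, and mixtures cover spectra in between.
    """
    L = normalized_laplacian(A)
    out = np.zeros_like(X, dtype=float)
    Lk = np.eye(A.shape[0])  # L^0 = I
    for t in theta:
        out += t * (Lk @ X)
        Lk = Lk @ L
    return out

# Toy 4-node path graph with a one-dimensional feature.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0], [0.0], [1.0], [0.0]])

low_pass  = hybrid_spectral_filter(A, X, theta=[1.0, -1.0])  # h(L) = I - L: smooths
high_pass = hybrid_spectral_filter(A, X, theta=[0.0,  1.0])  # h(L) = L: sharpens
```

With `theta = [1, -1]` the filter reduces to the familiar GCN-style propagation `(I - L)X = D^{-1/2} A D^{-1/2} X`, while `theta = [0, 1]` emphasizes disagreement between neighbors; learning `theta` per task is what lets a single backbone cover both regimes.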
👥 Authors
Haitong Luo
Institute of Computing Technology, Chinese Academy of Sciences
Suhang Wang
Pennsylvania State University
Data mining · Machine learning · Deep Learning · Graph Mining
Weiyao Zhang
Institute of Computing Technology, Chinese Academy of Sciences
Ruiqi Meng
Institute of Computing Technology, Chinese Academy of Sciences
Xuying Meng
Institute of Computing Technology, Chinese Academy of Sciences
Yujun Zhang
Institute of Computing Technology, Chinese Academy of Sciences