AI Summary
Existing graph prompting methods operate at a single granularity (e.g., node- or subgraph-level), failing to capture the inherent multi-scale structural hierarchy of graph data and thereby yielding semantically impoverished prompts. To address this, the authors propose the Multi-Scale Graph Chain-of-Thought (MSGCOT) prompting framework, which employs a lightweight low-rank hierarchical coarsening network to extract multi-granular structural features and establishes a coarse-to-fine dynamic chain-of-thought reasoning mechanism for generating semantically rich, hierarchical prompts. This work is the first to integrate multi-scale modeling with chain-of-thought reasoning into graph prompt learning, overcoming the semantic limitations of single-granularity prompting. Extensive experiments on eight benchmark datasets demonstrate that MSGCOT significantly outperforms state-of-the-art methods, exhibiting particularly strong generalization in few-shot settings.
Abstract
The "pre-train, prompt" paradigm, designed to bridge the gap between pre-training tasks and downstream objectives, has been extended from the NLP domain to the graph domain and has achieved remarkable progress. Current mainstream graph prompt-tuning methods modify input or output features using learnable prompt vectors. However, existing approaches are confined to a single granularity (e.g., node-level or subgraph-level) during prompt generation, overlooking the inherently multi-scale structural information in graph data, which limits the diversity of prompt semantics. To address this issue, we pioneer the integration of multi-scale information into graph prompting and propose a Multi-Scale Graph Chain-of-Thought (MSGCOT) prompting framework. Specifically, we design a lightweight, low-rank coarsening network to efficiently capture multi-scale structural features as hierarchical basis vectors for prompt generation. Subsequently, mimicking human cognition from coarse to fine granularity, we dynamically integrate multi-scale information at each reasoning step, forming a progressive coarse-to-fine prompt chain. Extensive experiments on eight benchmark datasets demonstrate that MSGCOT outperforms state-of-the-art single-granularity graph prompt-tuning methods, particularly in few-shot scenarios, showcasing superior performance.
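To make the two stages concrete, here is a minimal NumPy sketch of the idea as described in the abstract: a low-rank soft-assignment coarsening that pools node features into a few "basis vectors" per scale, followed by a coarse-to-fine pass that accumulates those bases into a single prompt vector. All function names, shapes, and the random soft-assignment are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def low_rank_coarsen(X, k, rank=4):
    """Pool n node features (n x d) into k coarse basis vectors (k x d)
    via a low-rank soft-assignment matrix S = softmax(A @ B), n x k.
    (Illustrative: a trained network would learn A and B.)"""
    n, _ = X.shape
    A = rng.standard_normal((n, rank))   # low-rank factor, n x rank
    B = rng.standard_normal((rank, k))   # low-rank factor, rank x k
    S = np.exp(A @ B)
    S /= S.sum(axis=1, keepdims=True)    # rows are soft cluster assignments
    return S.T @ X                       # k x d coarse features

def coarse_to_fine_prompt(X, scales=(2, 4, 8)):
    """Walk the hierarchy from coarse (few clusters) to fine (many),
    accumulating each scale's mean basis vector into one prompt (d,)."""
    prompt = np.zeros(X.shape[1])
    for k in scales:                     # coarse -> fine
        basis = low_rank_coarsen(X, k)   # k x d hierarchical basis
        prompt = prompt + basis.mean(axis=0)
    return prompt

X = rng.standard_normal((30, 16))        # 30 nodes, 16-dim features
p = coarse_to_fine_prompt(X)
print(p.shape)                           # (16,)
```

In the actual framework the assignment matrices would be learned end-to-end and the per-step integration would condition each scale's prompt on the previous step, forming the chain; this sketch only shows the data flow of coarsening plus coarse-to-fine accumulation.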