Empowering LLMs with Structural Role Inference for Zero-Shot Graph Learning

📅 2025-11-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit weak reasoning capability for structurally critical nodes (e.g., bridges, hubs) in graph learning, especially failing to establish structure–semantics mappings under zero-shot settings. Method: We propose DuoGLM—the first fine-tuning-free, dual-perspective, structure-aware graph learning framework. It introduces a novel *structural role reasoning mechanism*, jointly leveraging *local relational template construction* and *global topological-to-functional-role semantic description generation* to decouple topological similarity from semantic divergence. By integrating prompt learning with dynamic-static graph representation fusion, DuoGLM enables zero-shot, structure-aware inference. Contribution/Results: Evaluated on eight benchmarks, DuoGLM achieves a 14.3% improvement in zero-shot node classification accuracy and a 7.6% gain in cross-domain transfer AUC, significantly outperforming state-of-the-art methods.

Technology Category

Application Category

📝 Abstract
Large Language Models have emerged as a promising approach for graph learning due to their powerful reasoning capabilities. However, existing methods exhibit systematic performance degradation on structurally important nodes such as bridges and hubs. We identify the root cause of these limitations. Current approaches encode graph topology into static features but lack reasoning scaffolds to transform topological patterns into role-based interpretations. This limitation becomes critical in zero-shot scenarios where no training data establishes structure-semantics mappings. To address this gap, we propose DuoGLM, a training-free dual-perspective framework for structure-aware graph reasoning. The local perspective constructs relation-aware templates capturing semantic interactions between nodes and neighbors. The global perspective performs topology-to-role inference to generate functional descriptions of structural positions. These complementary perspectives provide explicit reasoning mechanisms enabling LLMs to distinguish topologically similar but semantically different nodes. Extensive experiments across eight benchmark datasets demonstrate substantial improvements. DuoGLM achieves 14.3% accuracy gain in zero-shot node classification and 7.6% AUC improvement in cross-domain transfer compared to existing methods. The results validate the effectiveness of explicit role reasoning for graph understanding with LLMs.
Problem

Research questions and friction points this paper is trying to address.

Addressing performance degradation on structurally important nodes in graphs
Lacking reasoning scaffolds for topological pattern interpretation in LLMs
Enabling zero-shot graph learning through explicit structural role inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-perspective framework for structure-aware graph reasoning
Local perspective constructs relation-aware semantic interaction templates
Global perspective performs topology-to-role inference for structural positions
🔎 Similar Papers
No similar papers found.
H
Heng Zhang
South China Normal University, Foshan, China
J
Jing Liu
Amazon AWS, Seattle, USA
J
Jiajun Wu
Central South University, Changsha, China
Haochen You
Haochen You
Columbia University
Generative AIMachine LearningStatistics
L
Lubin Gan
University of Science and Technology of China, Hefei, China
Y
Yuling Shi
Shanghai Jiao Tong University, Shanghai, China
Xiaodong Gu
Xiaodong Gu
Associate Professor, Shanghai Jiao Tong University
Software EngineeringLarge Language Models
Z
Zijian Zhang
University of Michigan, Ann Arbor, USA
S
Shuai Chen
ShanghaiTech University, Shanghai, China
W
Wenjun Huang
Sun Yat-sen University, Guangzhou, China
J
Jin Huang
South China Normal University, Guangzhou, China