Graph-R1: Incentivizing the Zero-Shot Graph Learning Capability in LLMs via Explicit Reasoning

📅 2025-08-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph Neural Networks (GNNs) suffer from fixed label spaces, while Large Language Models (LLMs) lack structural inductive biases for graph reasoning. Method: We propose the first zero-shot graph learning framework grounded in explicit chain-of-thought (CoT) reasoning. It unifies node, link, and graph classification tasks as text-based inference problems; introduces task-specific “rethink templates” to guide large reasoning models in performing multi-step logical deduction over linearized graph structures; and constructs a training dataset annotated with fine-grained, interpretable reasoning trajectories. Contribution/Results: This work pioneers an interpretable, explicit CoT mechanism for graphs, integrating structure-aware graph linearization with template-driven reasoning guidance. Experiments demonstrate that our method significantly outperforms existing state-of-the-art approaches under zero-shot settings across multiple benchmarks, achieving high accuracy, strong generalization to unseen labels and domains, and fully explainable predictions.

📝 Abstract
Generalizing to unseen graph tasks without task-specific supervision remains challenging. Graph Neural Networks (GNNs) are limited by fixed label spaces, while Large Language Models (LLMs) lack structural inductive biases. Recent advances in Large Reasoning Models (LRMs) provide a zero-shot alternative via explicit, long chain-of-thought reasoning. Inspired by this, we propose a GNN-free approach that reformulates graph tasks (node classification, link prediction, and graph classification) as textual reasoning problems solved by LRMs. We introduce the first datasets with detailed reasoning traces for these tasks and develop Graph-R1, a reinforcement learning framework that leverages task-specific rethink templates to guide reasoning over linearized graphs. Experiments demonstrate that Graph-R1 outperforms state-of-the-art baselines in zero-shot settings, producing interpretable and effective predictions. Our work highlights the promise of explicit reasoning for graph learning and provides new resources for future research.
Problem

Research questions and friction points this paper is trying to address.

Generalizing to unseen graph tasks without supervision
Overcoming GNNs' fixed label space limitations
Addressing LLMs' lack of structural inductive biases
Innovation

Methods, ideas, or system contributions that make the work stand out.

GNN-free approach reformulating graph tasks
Reinforcement learning framework with rethink templates
Explicit reasoning over linearized graphs
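The core idea above (linearize a graph into text, then guide an LRM with a task-specific rethink template) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the names `linearize_graph`, `RETHINK_TEMPLATE`, and `build_prompt`, as well as the exact template wording, are assumptions for the sake of the example.

```python
# Hypothetical sketch: render a small graph as plain text, one fact per
# line, then wrap it in a "rethink"-style prompt for node classification.

def linearize_graph(nodes, edges):
    """Serialize nodes and edges into line-oriented text an LLM can read."""
    lines = [f"Node {nid}: {text}" for nid, text in sorted(nodes.items())]
    lines += [f"Edge: {u} -- {v}" for u, v in edges]
    return "\n".join(lines)

# Illustrative template; the paper's actual rethink templates are task-specific
# and not reproduced here.
RETHINK_TEMPLATE = (
    "Task: node classification.\n"
    "Graph:\n{graph}\n"
    "Question: what is the label of node {target}?\n"
    "Candidate labels: {labels}.\n"
    "First reason step by step over the edges, then rethink your answer, "
    "then output the final label."
)

def build_prompt(nodes, edges, target, labels):
    return RETHINK_TEMPLATE.format(
        graph=linearize_graph(nodes, edges),
        target=target,
        labels=", ".join(labels),
    )

nodes = {0: "paper on GNNs", 1: "paper on RL", 2: "survey of GNNs"}
edges = [(0, 2), (1, 2)]
prompt = build_prompt(nodes, edges, target=2, labels=["GNN", "RL"])
print(prompt)
```

In a zero-shot setting, a prompt like this would be sent to a reasoning model, whose chain-of-thought output over the listed edges is what makes the prediction interpretable.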
Authors
Yicong Wu (MIIT Key Laboratory of Data Intelligence and Management, Beihang University)
Guangyue Lu (MIIT Key Laboratory of Data Intelligence and Management, Beihang University)
Yuan Zuo (Associate Professor, Beihang University)
Huarong Zhang (MIIT Key Laboratory of Data Intelligence and Management, Beihang University)
Junjie Wu (Center for High Pressure Science & Technology Advanced Research)