Less is More: Learning Graph Tasks with Just LLMs

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether large language models (LLMs) can learn and generalize graph-structured tasks without relying on graph neural networks (GNNs) or specialized graph encoders. It addresses three core questions: (i) whether LLMs can acquire fundamental graph reasoning capabilities, (ii) whether they generalize to unseen graph topologies and novel tasks, and (iii) how different methodological approaches compare.

Method: We propose an instruction-guided chain-of-thought (CoT) training framework that combines textual serialization of graph structures with end-to-end instruction fine-tuning.

Contribution/Results: Experiments demonstrate that small-scale LLMs achieve performance on par with—or surpassing—that of GNN-augmented hybrid models across diverse graph tasks, while exhibiting strong generalization to out-of-distribution graph topologies and task types. This study provides the first systematic empirical validation of pure language models for graph reasoning, establishing their feasibility and potential. It opens a new pathway toward lightweight, unified graph reasoning paradigms grounded solely in language modeling.
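The textual serialization the summary mentions can be sketched as follows. This is a minimal, hypothetical illustration of the idea (function names and prompt wording are ours, not taken from the paper): a graph's edge list is rendered as plain text and paired with a task instruction, so the LLM reasons over structure without any graph encoder.

```python
def serialize_graph(edges):
    """Render an undirected graph as a plain-text edge list."""
    nodes = sorted({n for e in edges for n in e})
    lines = [f"Nodes: {', '.join(map(str, nodes))}"]
    lines += [f"Edge: {u} -- {v}" for u, v in edges]
    return "\n".join(lines)

def build_prompt(edges, question):
    """Combine the serialized graph with a task instruction
    (an illustrative prompt template, not the paper's exact one)."""
    return (
        "You are given a graph.\n"
        f"{serialize_graph(edges)}\n"
        f"Task: {question}\n"
        "Think step by step, then give the final answer."
    )

prompt = build_prompt([(0, 1), (1, 2), (2, 3)],
                      "Is there a path from node 0 to node 3?")
print(prompt)
```

Prompts of this shape are then used as inputs during instruction fine-tuning, with chain-of-thought solutions as targets.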

📝 Abstract
For large language models (LLMs), reasoning over graphs could help solve many problems. Prior work has tried to improve LLM graph reasoning by examining how best to serialize graphs as text and by combining GNNs and LLMs. However, the merits of such approaches remain unclear, so we empirically answer the following research questions: (1) Can LLMs learn to solve fundamental graph tasks without specialized graph encoding models?, (2) Can LLMs generalize learned solutions to unseen graph structures or tasks?, and (3) What are the merits of competing approaches to learn graph tasks? We show that even small LLMs can learn to solve graph tasks by training them with instructive chain-of-thought solutions, and this training generalizes, without specialized graph encoders, to new tasks and graph structures.
Problem

Research questions and friction points this paper is trying to address.

Can LLMs solve graph tasks without specialized encoding models?
Do LLMs generalize learned solutions to unseen graph structures?
What are the merits of competing graph learning approaches?
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs solve graph tasks without specialized encoders
Training uses instructive chain-of-thought solutions
Generalizes to new graph structures and tasks
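The "instructive chain-of-thought solutions" used as training targets can be generated programmatically for tasks with known algorithms. Below is a hypothetical sketch (not the paper's actual pipeline) for a connectivity task: a BFS trace is written out step by step as the reasoning the model is trained to imitate.

```python
from collections import deque

def cot_connectivity(edges, src, dst):
    """Return a step-by-step CoT string and a final yes/no answer
    for "is dst reachable from src?", via a BFS trace."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    steps, seen, queue = [], {src}, deque([src])
    while queue:
        node = queue.popleft()
        unvisited = [n for n in adj.get(node, []) if n not in seen]
        steps.append(f"Visit {node}; unvisited neighbors: {unvisited}.")
        if node == dst:
            return "\n".join(steps + [f"Reached {dst}."]), "yes"
        for n in unvisited:
            seen.add(n)
            queue.append(n)
    return "\n".join(steps + [f"Never reached {dst}."]), "no"

cot, answer = cot_connectivity([(0, 1), (1, 2)], 0, 2)
print(answer)  # yes
```

Pairing such traces with serialized graphs yields (prompt, reasoning, answer) triples for instruction fine-tuning.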