When Structure Doesn't Help: LLMs Do Not Read Text-Attributed Graphs as Effectively as We Expected

📅 2025-11-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study challenges the conventional assumption that graph structure inherently enhances large language models' (LLMs) graph reasoning, systematically evaluating the practical efficacy of structural encoding strategies on text-attributed graphs. The authors compare LLM performance across multiple tasks using diverse structural injection methods (including templated graph representations and GNN-based structural encodings) against a baseline relying solely on node textual features. Contrary to expectations, LLMs achieve comparable or superior performance using node text alone; most structural encoding techniques yield only marginal gains, while several degrade performance significantly. These findings suggest that LLMs' strong semantic modeling capacity renders explicit structural priors redundant, or even detrimental, thereby undermining the structure-driven paradigm in graph learning. The work advocates a semantics-centric rethinking of graph representation learning with LLMs.

📝 Abstract
Graphs provide a unified representation of semantic content and relational structure, making them a natural fit for domains such as molecular modeling, citation networks, and social graphs. Meanwhile, large language models (LLMs) have excelled at understanding natural language and integrating cross-modal signals, sparking interest in their potential for graph reasoning. Recent work has explored this by either designing templates that serialize graph structure into text or using graph neural networks (GNNs) to encode structural information. In this study, we investigate how different strategies for encoding graph structure affect LLM performance on text-attributed graphs. Surprisingly, our systematic experiments reveal that: (i) LLMs leveraging only node textual descriptions already achieve strong performance across tasks; and (ii) most structural encoding strategies offer marginal or even negative gains. We show that explicit structural priors are often unnecessary and, in some cases, counterproductive when powerful language models are involved. This represents a significant departure from traditional graph learning paradigms and highlights the need to rethink how structure should be represented and utilized in the LLM era. Our study systematically challenges the foundational assumption that structure is inherently beneficial for LLM-based graph reasoning, opening the door to new, semantics-driven approaches for graph learning.
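To make the two prompting regimes the abstract contrasts concrete, here is a minimal sketch of (a) a text-only prompt built from a node's description and (b) a templated prompt that also serializes one-hop neighbor texts. The function names, template wording, and example texts are illustrative assumptions, not the paper's actual formats.

```python
# Hypothetical sketch of the two prompt styles compared in the paper:
# (a) text-only: the LLM sees only the target node's text;
# (b) templated structure: neighbor texts are serialized into the prompt.

def text_only_prompt(node_text: str, task: str) -> str:
    """Baseline: task instruction plus the node's own text."""
    return f"{task}\n\nNode description:\n{node_text}"

def templated_structure_prompt(node_text: str, neighbors: list, task: str) -> str:
    """Structural injection: append a bulleted list of neighbor texts."""
    neighbor_block = "\n".join(f"- {t}" for t in neighbors)
    return (
        f"{task}\n\nNode description:\n{node_text}\n\n"
        f"Connected nodes:\n{neighbor_block}"
    )

task = "Classify the research area of the following paper."
node = "A survey of graph neural networks for molecule property prediction."
nbrs = [
    "Message passing networks for quantum chemistry.",
    "Self-supervised pretraining on molecular graphs.",
]

print(text_only_prompt(node, task))
print(templated_structure_prompt(node, nbrs, task))
```

The paper's finding is that, across tasks, the first style alone often matches or beats the second, despite the extra relational context.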
Problem

Research questions and friction points this paper is trying to address.

Evaluating how graph structure encoding affects LLM performance on text-attributed graphs
Revealing structural encoding often provides marginal or negative performance gains
Challenging the assumption that structure inherently benefits LLM-based graph reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs using node text alone, without any graph structure, already achieve strong performance
Most structural encoding strategies provide only marginal performance gains
Explicit structural priors can be counterproductive for LLM-based graph reasoning
Haotian Xu, Stony Brook University
Yuning You, California Institute of Technology
Tengfei Ma, Stony Brook University
Natural Language Processing · Machine Learning · Healthcare · Graph Neural Networks