🤖 AI Summary
This study challenges the conventional assumption that graph structure inherently enhances large language models' (LLMs) graph reasoning capabilities, systematically evaluating the practical efficacy of structural encoding strategies on text-attributed graphs. We compare LLM performance across multiple tasks using diverse structural injection methods, including templated graph representations and GNN-based structural encodings, against a baseline relying solely on node textual features. Contrary to expectations, LLMs achieve comparable or superior performance using node text alone; most structural encoding techniques yield only marginal gains, while several degrade performance significantly. These findings suggest that LLMs' strong semantic modeling capacity renders explicit structural priors redundant, or even detrimental, thereby undermining the structure-driven paradigm in graph learning. The work advocates a semantics-centric rethinking of graph representation learning with LLMs.
📝 Abstract
Graphs provide a unified representation of semantic content and relational structure, making them a natural fit for domains such as molecular modeling, citation networks, and social graphs. Meanwhile, large language models (LLMs) have excelled at understanding natural language and integrating cross-modal signals, sparking interest in their potential for graph reasoning. Recent work has explored this potential either by designing templates that serialize graph structure into text or by using graph neural networks (GNNs) to encode structural information. In this study, we investigate how different strategies for encoding graph structure affect LLM performance on text-attributed graphs. Surprisingly, our systematic experiments reveal that: (i) LLMs leveraging only node textual descriptions already achieve strong performance across tasks; and (ii) most structural encoding strategies offer marginal or even negative gains. We show that explicit structural priors are often unnecessary and, in some cases, counterproductive when powerful language models are involved. This represents a significant departure from traditional graph learning paradigms and highlights the need to rethink how structure should be represented and utilized in the LLM era. Our study systematically challenges the foundational assumption that structure is inherently beneficial for LLM-based graph reasoning, opening the door to new, semantics-driven approaches to graph learning.
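To make the comparison concrete, here is a minimal sketch of the two prompt-construction strategies the abstract contrasts: a text-only baseline versus a templated verbalization of a node's 1-hop neighborhood. The task wording, node texts, and template are illustrative assumptions, not the paper's actual prompts.

```python
# Hypothetical illustration of two ways to present a text-attributed
# graph node to an LLM. Function names and template wording are
# assumptions for the sketch, not the paper's implementation.

def text_only_prompt(node_text: str) -> str:
    """Baseline: the LLM sees only the node's textual attributes."""
    return f"Classify the following paper.\nTitle/abstract: {node_text}"


def templated_structure_prompt(node_text: str, neighbor_texts: list[str]) -> str:
    """Structural injection: a template verbalizes the 1-hop neighborhood."""
    neighbors = "\n".join(f"- {t}" for t in neighbor_texts)
    return (
        "Classify the following paper.\n"
        f"Title/abstract: {node_text}\n"
        "It cites the following papers (1-hop neighbors):\n"
        f"{neighbors}"
    )


if __name__ == "__main__":
    node = "A transformer architecture based solely on attention mechanisms."
    nbrs = [
        "Neural machine translation by jointly learning to align and translate.",
        "Layer normalization for stabilizing hidden-state dynamics.",
    ]
    print(text_only_prompt(node))
    print(templated_structure_prompt(node, nbrs))
```

The paper's finding is that the first, structure-free variant already performs on par with (or better than) the second; GNN-based encodings, the other family it evaluates, instead inject structure as learned embeddings rather than text.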