From Sequence to Structure: Uncovering Substructure Reasoning in Transformers

📅 2025-07-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates how decoder-only Transformers implicitly model and extract graph substructures from raw text sequences. The authors propose Induced Substructure Filtration (ISF), a theoretical framework and analytical perspective on substructure reasoning that reveals consistent cross-layer dynamics in the models' internal representations. Combining empirical analysis of multi-layer Transformers with formal theoretical derivation, substructure extraction tasks, and ablation studies on input-query effects, they demonstrate that ISF explains the identification of complex composite substructures on attributed graphs (e.g., molecular graphs). The core contribution is the first systematic characterization of how large language models implicitly encode graph structure, extending their capacity for interpretable, structure-aware reasoning over graph-structured data.

📝 Abstract
Recent studies suggest that large language models (LLMs) possess the capability to solve graph reasoning tasks. Notably, even when graph structures are embedded within textual descriptions, LLMs can still effectively answer related questions. This raises a fundamental question: how can a decoder-only Transformer architecture understand underlying graph structures? To address this, we start from the substructure extraction task, interpreting the inner mechanisms of Transformers and analyzing the impact of the input queries. Specifically, through both empirical results and theoretical analysis, we present Induced Substructure Filtration (ISF), a perspective that captures substructure identification in multi-layer Transformers. We further validate the ISF process in LLMs, revealing consistent internal dynamics across layers. Building on these insights, we explore the broader capabilities of Transformers in handling diverse graph types. Specifically, we introduce the concept of thinking in substructures to efficiently extract complex composite patterns, and demonstrate that decoder-only Transformers can successfully extract substructures from attributed graphs, such as molecular graphs. Together, our findings offer new insight into how sequence-based Transformers perform the substructure extraction task over graph data.
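The task setup the abstract describes — a graph serialized as plain text, plus a query asking whether a given substructure occurs as an induced subgraph — can be made concrete with a small sketch. The edge-list serialization format and the brute-force matching oracle below are illustrative assumptions for exposition only; they are not the paper's ISF mechanism, which concerns the Transformer's internal representations.

```python
from itertools import permutations

def graph_to_text(edges):
    """Serialize an undirected graph as edge-list text, the kind of
    description an LLM prompt might contain (format is hypothetical)."""
    return " ".join(f"({u},{v})" for u, v in edges)

def has_induced_substructure(edges, pattern_edges, n_nodes, k_nodes):
    """Brute-force oracle: does the n-node graph contain the k-node
    pattern as an *induced* subgraph? Induced means every pattern edge
    must map to a graph edge AND every pattern non-edge to a non-edge."""
    edge_set = {frozenset(e) for e in edges}
    pat_set = {frozenset(e) for e in pattern_edges}
    for mapping in permutations(range(n_nodes), k_nodes):
        # Check that edges/non-edges agree for every pair of pattern nodes.
        if all(
            (frozenset((mapping[a], mapping[b])) in edge_set)
            == (frozenset((a, b)) in pat_set)
            for a in range(k_nodes)
            for b in range(a + 1, k_nodes)
        ):
            return True
    return False

# A triangle with a pendant node, queried for the triangle pattern.
graph = [(0, 1), (1, 2), (0, 2), (2, 3)]
triangle = [(0, 1), (1, 2), (0, 2)]
print(graph_to_text(graph))                               # "(0,1) (1,2) (0,2) (2,3)"
print(has_induced_substructure(graph, triangle, 4, 3))    # True
```

The paper's claim is that a decoder-only Transformer, given only the serialized text, internally performs something functionally comparable to this filtering across its layers, rather than via explicit enumeration.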
Problem

Research questions and friction points this paper addresses.

- How Transformers understand graph structures embedded in text
- Mechanisms of substructure extraction in multi-layer Transformers
- Transformers' capability to extract substructures from attributed graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Induced Substructure Filtration, a perspective that captures substructure identification
- Demonstration that Transformers extract substructures from attributed graphs
- Evidence that sequence-based models perform graph substructure extraction