🤖 AI Summary
This work investigates how decoder-only Transformers implicitly model and extract graph substructures from raw text sequences. To explain this behavior, the authors propose Induced Substructure Filtration (ISF), a theoretical and analytical perspective on substructure reasoning that reveals consistent cross-layer representation dynamics. Combining empirical analysis of multi-layer Transformers with formal theoretical derivation, substructure extraction experiments, and ablations on input query effects, they validate the ISF process and show that Transformers can identify complex composite substructures on attributed graphs (e.g., molecular graphs). The core contribution is a systematic characterization of how large language models implicitly encode graph structure, supporting interpretable, structure-aware reasoning over graph data.
📝 Abstract
Recent studies suggest that large language models (LLMs) can solve graph reasoning tasks. Notably, even when graph structures are embedded in textual descriptions, LLMs can still answer related questions effectively. This raises a fundamental question: how does a decoder-only Transformer architecture understand the underlying graph structure? To address this, we begin with the substructure extraction task, interpreting the internal mechanisms of Transformers and analyzing the impact of the input queries. Specifically, through both empirical results and theoretical analysis, we present Induced Substructure Filtration (ISF), a perspective that characterizes substructure identification in multi-layer Transformers. We further validate the ISF process in LLMs, revealing consistent internal dynamics across layers. Building on these insights, we explore the broader capabilities of Transformers in handling diverse graph types. In particular, we introduce the notion of thinking in substructures to efficiently extract complex composite patterns, and demonstrate that decoder-only Transformers can successfully extract substructures from attributed graphs, such as molecular graphs. Together, our findings offer new insight into how sequence-based Transformers perform substructure extraction over graph data.
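To make the task itself concrete, here is a minimal sketch of what "substructure extraction from a graph embedded in text" means: a graph given as a plain-text edge list, queried for a fixed substructure (a triangle). This is an illustrative symbolic baseline for the task definition only, not the paper's ISF mechanism or a Transformer-based method; the text format and helper names are assumptions.

```python
from itertools import combinations

def parse_edges(text):
    """Parse lines like 'A - B' into an undirected edge set."""
    edges = set()
    for line in text.strip().splitlines():
        u, v = [t.strip() for t in line.split("-")]
        edges.add(frozenset((u, v)))
    return edges

def find_triangles(edges):
    """Return all node triples forming a triangle substructure."""
    nodes = {n for e in edges for n in e}
    return [
        tuple(sorted((a, b, c)))
        for a, b, c in combinations(sorted(nodes), 3)
        if {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= edges
    ]

graph_text = """
A - B
B - C
C - A
C - D
"""
print(find_triangles(parse_edges(graph_text)))  # [('A', 'B', 'C')]
```

The point of the paper is that a decoder-only Transformer, given only the textual description, behaves as if it performs this kind of filtering implicitly across its layers.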