Spectral Journey: How Transformers Predict the Shortest Path

📅 2025-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether decoder-only Transformers possess implicit graph-structured planning and reasoning capabilities, specifically for shortest-path prediction on small, connected, undirected graphs. The authors train two-layer decoder-only models from scratch and analyze them with spectral graph theory and interpretable representation analysis. The key finding, novel to this domain, is that the model implicitly learns Laplacian spectral embeddings of the line graph of the input graph. Leveraging this insight, they propose the Spectral Line Navigator (SLN), a new approximate algorithm that performs greedy search in the learned spectral embedding space. Experiments show that the model accurately predicts shortest paths on graphs with up to 10 nodes, that its latent representations correlate strongly with the line graph's Laplacian spectrum, and that SLN offers interpretability, theoretical grounding, and cross-graph generalization potential. The work provides a fresh perspective on implicit reasoning mechanisms in large language models through the lens of spectral graph theory.

📝 Abstract
Decoder-only transformers have led to a step change in the capability of large language models. However, opinions are mixed as to whether they are really planning or reasoning. One path toward progress is to study a model's behavior on carefully controlled data, interpret the learned representations, and reverse-engineer the computation performed internally. We study decoder-only transformer language models trained from scratch to predict shortest paths on simple, connected, undirected graphs. In this setting, the representations and the dynamics learned by the model are interpretable. We present three major results: (1) Two-layer decoder-only language models can learn to predict shortest paths on simple, connected graphs containing up to 10 nodes. (2) The models learn a graph embedding that is correlated with the spectral decomposition of the line graph. (3) Building on these insights, we derive a novel approximate path-finding algorithm, the Spectral Line Navigator (SLN), which finds shortest paths by greedily selecting nodes in the spectral embedding space of the line graph.
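The abstract describes SLN only at a high level: embed each edge via the Laplacian spectrum of the line graph, then greedily walk toward the target in that embedding space. A minimal sketch of that idea is below. The paper's exact scoring rule is not reproduced here; this hypothetical version greedily picks the incident edge whose spectral embedding is closest (in Euclidean distance) to an edge touching the target, which is one plausible reading of "greedy search in spectral embedding space".

```python
# Hypothetical SLN-style greedy search (a sketch, not the paper's exact algorithm).
# Assumptions: k smallest nontrivial Laplacian eigenvectors of the line graph L(G)
# serve as edge embeddings; the greedy score is distance to target-incident edges.
import numpy as np
from itertools import combinations

def line_graph_spectral_embedding(edges, k=2):
    """Embed each edge of G via the k smallest nontrivial Laplacian
    eigenvectors of the line graph L(G)."""
    edges = [tuple(sorted(e)) for e in edges]
    idx = {e: i for i, e in enumerate(edges)}
    m = len(edges)
    A = np.zeros((m, m))
    for e, f in combinations(edges, 2):
        if set(e) & set(f):            # edges sharing an endpoint are adjacent in L(G)
            A[idx[e], idx[f]] = A[idx[f], idx[e]] = 1.0
    lap = np.diag(A.sum(axis=1)) - A   # combinatorial Laplacian of L(G)
    _, V = np.linalg.eigh(lap)         # eigenvectors, ascending eigenvalues
    return edges, V[:, 1:k + 1]        # skip the trivial constant eigenvector

def sln_path(edges, s, t, k=2, max_steps=20):
    """Greedy walk from s to t, guided by spectral distance to edges at t."""
    es, emb = line_graph_spectral_embedding(edges, k)
    coord = dict(zip(es, emb))
    target_edges = [e for e in es if t in e]
    path, cur, visited = [s], s, set()
    for _ in range(max_steps):
        if cur == t:
            return path
        cand = [e for e in es if cur in e and e not in visited]
        if not cand:
            return None                # greedy search got stuck
        # pick the incident edge spectrally closest to any edge touching t
        best = min(cand, key=lambda e: min(
            np.linalg.norm(coord[e] - coord[f]) for f in target_edges))
        visited.add(best)
        cur = best[0] if best[1] == cur else best[1]
        path.append(cur)
    return None

# Example: a 4-cycle with a chord; both shortest 0→2 paths have 2 hops.
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
print(sln_path(edges, 0, 2))
```

Note that, like any greedy heuristic, this is approximate: it can fail to return the true shortest path on graphs where the spectral geometry is misleading, which is consistent with the paper calling SLN an approximate algorithm.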
Problem

Research questions and friction points this paper is trying to address.

Can decoder-only transformers implicitly plan or reason when predicting shortest paths?
What do models learn when trained on carefully controlled graph data?
Can the internal computation be reverse-engineered into an explicit path-finding algorithm?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-layer decoder-only transformers predict shortest paths on graphs of up to 10 nodes
Learned graph embeddings correlate with the spectral decomposition of the line graph
Spectral Line Navigator (SLN): an approximate algorithm that finds shortest paths by greedy search in spectral embedding space