🤖 AI Summary
This work investigates whether the Transformer architecture possesses intrinsic search capability, using graph connectivity—a fundamental search task—as a testbed. Method: The authors generate effectively unlimited high-quality synthetic data, train small Transformers on it, and apply a novel mechanistic interpretability technique—explicitly recovering the model's internal computation graph for the first time—to uncover its latent algorithmic behavior. Contribution/Results: They identify an implicit parallel multi-source breadth-first search mechanism within the model. While the model learns to search under the right training distribution, performance degrades sharply as graph size increases. Neither scaling parameters nor incorporating chain-of-thought mitigates this failure, indicating a fundamental architectural bottleneck. Crucially, the work provides the first mechanistic evidence that the Transformer's search limitation stems from its parallel attention structure—not from insufficient data or model size—offering key insights into the reasoning boundaries of large language models.
📝 Abstract
Search is a foundational ability underlying many important tasks, and recent studies have shown that large language models (LLMs) struggle to perform search robustly. It is unknown whether this inability is due to a lack of data, insufficient model parameters, or fundamental limitations of the transformer architecture. In this work, we use the foundational graph connectivity problem as a testbed to generate effectively limitless high-coverage data to train small transformers and test whether they can learn to perform search. We find that, when given the right training distribution, the transformer is able to learn to search. We analyze the algorithm that the transformer has learned through a novel mechanistic interpretability technique that enables us to extract the computation graph from the trained model. We find that transformers perform search at every vertex in parallel: for each vertex in the input graph, the transformer computes the set of vertices reachable from that vertex. Each layer then progressively expands these sets, allowing the model to search over a number of vertices exponential in $n_{\text{layers}}$. However, we find that as the input graph size increases, the transformer has greater difficulty in learning the task. This difficulty is not resolved even as the number of parameters is increased, suggesting that increasing model scale will not lead to robust search abilities. We also find that performing search in-context (i.e., chain-of-thought) does not resolve this inability to learn to search on larger graphs.
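The layer-wise expansion described in the abstract can be sketched as a simple iterative algorithm. The following is a minimal illustrative sketch (not the paper's actual extracted circuit): every vertex maintains a reachable set, and each "layer" merges, in parallel, each vertex's set with the sets of the vertices it already reaches. Because each step composes paths of the lengths covered so far, the reachable distance doubles per layer, giving the exponential-in-$n_{\text{layers}}$ behavior the abstract describes. The function name and graph here are hypothetical, chosen only for illustration.

```python
def expand_reachability(edges, n_vertices, n_layers):
    """Layer-wise reachability doubling: a sketch of the parallel
    multi-source search mechanism described in the abstract."""
    # Initialize: each vertex reaches exactly its direct out-neighbors.
    reach = [set() for _ in range(n_vertices)]
    for u, v in edges:
        reach[u].add(v)
    for _ in range(n_layers):
        # All vertices update from the same snapshot, mirroring
        # attention heads acting on all positions simultaneously.
        snapshot = [set(s) for s in reach]
        for u in range(n_vertices):
            for v in snapshot[u]:
                reach[u] |= snapshot[v]
    return reach

# Path graph 0 -> 1 -> 2 -> 3 -> 4: after 2 layers, vertex 0 reaches
# every vertex up to distance 2^2 = 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
reach = expand_reachability(edges, 5, 2)
# reach[0] == {1, 2, 3, 4}
```

Note that naive one-hop expansion would need a number of layers linear in the path length; the doubling behavior only arises because the snapshot sets themselves grow each round, which is what lets a fixed-depth model cover exponentially long paths.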