Emergent Analogical Reasoning in Transformers

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how Transformer models acquire analogical reasoning capabilities, drawing inspiration from the notion of functors in category theory to formalize analogy as structural correspondences between entities across distinct categories. To this end, we design synthetic tasks that enable controlled evaluation of emergent analogical reasoning and, for the first time, decompose its underlying mechanism into two core components: geometric alignment of relational structures in embedding space and functor-like mappings within the Transformer architecture. Through a combination of embedding geometry analysis, functor-based modeling, and mechanistic experiments on large language models, we demonstrate that analogical reasoning is highly sensitive to data properties, optimization strategies, and model scale, and we further validate the generality of this mechanism in pretrained large language models.
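The functor framing used above follows the standard category-theoretic definition (the paper's precise formalization may differ): a functor $F : \mathcal{C} \to \mathcal{D}$ maps objects and morphisms of one category to another while preserving composition and identities, so relations in $\mathcal{C}$ induce corresponding relations in $\mathcal{D}$.

```latex
F : \mathcal{C} \to \mathcal{D}, \qquad
F(g \circ f) = F(g) \circ F(f), \qquad
F(\mathrm{id}_a) = \mathrm{id}_{F(a)}
```

Under this view, an analogy $a : b :: F(a) : F(b)$ holds when $F$ carries a morphism $f : a \to b$ in $\mathcal{C}$ to the corresponding morphism $F(f) : F(a) \to F(b)$ in $\mathcal{D}$.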

📝 Abstract
Analogy is a central faculty of human intelligence, enabling abstract patterns discovered in one domain to be applied to another. Despite its central role in cognition, the mechanisms by which Transformers acquire and implement analogical reasoning remain poorly understood. In this work, inspired by the notion of functors in category theory, we formalize analogical reasoning as the inference of correspondences between entities across categories. Based on this formulation, we introduce synthetic tasks that evaluate the emergence of analogical reasoning under controlled settings. We find that the emergence of analogical reasoning is highly sensitive to data characteristics, optimization choices, and model scale. Through mechanistic analysis, we show that analogical reasoning in Transformers decomposes into two key components: (1) geometric alignment of relational structure in the embedding space, and (2) the application of a functor within the Transformer. These mechanisms enable models to transfer relational structure from one category to another, realizing analogy. Finally, we quantify these effects and find that the same trends are observed in pretrained LLMs. In doing so, we move analogy from an abstract cognitive notion to a concrete, mechanistically grounded phenomenon in modern neural networks.
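The first mechanism the abstract identifies, geometric alignment of relational structure in embedding space, can be illustrated with a toy sketch (all words and vectors below are hypothetical, not taken from the paper): when two categories share parallel relational offsets, solving an analogy a : b :: c : ? reduces to vector translation plus nearest-neighbor lookup.

```python
import numpy as np

# Toy embeddings with hand-built parallel offsets; real models learn
# such geometry during training (these vectors are illustrative only).
emb = {
    "small": np.array([1.0, 0.0, 0.0]),
    "large": np.array([1.0, 1.0, 0.0]),
    "cold":  np.array([0.0, 0.0, 1.0]),
    "hot":   np.array([0.0, 1.0, 1.0]),
}

def solve_analogy(a, b, c, vocab):
    """Solve a : b :: c : ? by translating c along the (b - a) offset
    and returning the nearest vocabulary item, excluding the inputs."""
    target = vocab[c] + (vocab[b] - vocab[a])
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return min(candidates,
               key=lambda w: np.linalg.norm(candidates[w] - target))

# small : large :: cold : ?  — the relational offset transfers across categories
print(solve_analogy("small", "large", "cold", emb))
```

Here the offset `large - small` equals `hot - cold`, so translation recovers the analogous entity exactly; in real embedding spaces the offsets align only approximately, which is why the paper measures the degree of geometric alignment rather than assuming it.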
Problem

Research questions and friction points this paper is trying to address.

analogical reasoning · Transformers · emergence · relational structure · cognitive mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

analogical reasoning · Transformers · functors · relational structure · mechanistic analysis