Language Models Use Trigonometry to Do Addition

📅 2025-02-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
The internal mechanisms by which large language models (LLMs) perform elementary arithmetic, particularly addition, remain poorly understood. Method: We apply representation-level reverse engineering, causal interventions, MLP and attention head modeling, and single-neuron preactivation fitting to probe hidden-layer representations across three mid-sized LLMs. Contribution/Results: We find that numbers are encoded in LLM hidden states as a generalized helix, and that addition corresponds to rotating and composing these helices via the "Clock" algorithm. Causal interventions confirm that the helical representation is strongly implicated in addition and subtraction and is also causally relevant for multiplication, integer division, and modular arithmetic. The resulting framework enables accurate behavioral prediction and precise, controllable interventions for addition, providing the first representation-level explanation of an LLM's mathematical capability.

📝 Abstract
Mathematical reasoning is an increasingly important indicator of large language model (LLM) capabilities, yet we lack understanding of how LLMs process even simple mathematical tasks. To address this, we reverse engineer how three mid-sized LLMs compute addition. We first discover that numbers are represented in these LLMs as a generalized helix, which is strongly causally implicated for the tasks of addition and subtraction, and is also causally relevant for integer division, multiplication, and modular arithmetic. We then propose that LLMs compute addition by manipulating this generalized helix using the "Clock" algorithm: to solve $a+b$, the helices for $a$ and $b$ are manipulated to produce the $a+b$ answer helix which is then read out to model logits. We model influential MLP outputs, attention head outputs, and even individual neuron preactivations with these helices and verify our understanding with causal interventions. By demonstrating that LLMs represent numbers on a helix and manipulate this helix to perform addition, we present the first representation-level explanation of an LLM's mathematical capability.
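The helix-plus-Clock idea from the abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's probing code: the feature layout (a linear component plus sin/cos pairs) and the choice of periods {2, 5, 10, 100} are assumptions inspired by the base-10 structure the paper describes. The key point is that the angle-addition identities let rotation in each sin/cos plane implement addition exactly.

```python
import numpy as np

# Assumed periods for the helix's periodic components (toy choice,
# motivated by base-10 structure; not taken from the paper's code).
PERIODS = [2, 5, 10, 100]

def helix(a):
    """Map integer a to toy helix features: [a, cos/sin pair per period]."""
    feats = [float(a)]
    for T in PERIODS:
        angle = 2 * np.pi * a / T
        feats.extend([np.cos(angle), np.sin(angle)])
    return np.array(feats)

def clock_add(ha, hb):
    """"Clock"-style addition: rotate a's phase by b's phase at each period.

    Uses the angle-addition identities
        cos(x+y) = cos x cos y - sin x sin y
        sin(x+y) = sin x cos y + cos x sin y
    so the output equals helix(a + b) exactly.
    """
    out = [ha[0] + hb[0]]  # linear components add directly
    for i in range(len(PERIODS)):
        ca, sa = ha[1 + 2 * i], ha[2 + 2 * i]
        cb, sb = hb[1 + 2 * i], hb[2 + 2 * i]
        out.extend([ca * cb - sa * sb, sa * cb + ca * sb])
    return np.array(out)

# Rotating the helix for 37 by the phases of 45 lands exactly on helix(82).
assert np.allclose(clock_add(helix(37), helix(45)), helix(37 + 45))
```

Because each period divides into the rotation independently, the same composition also explains why the representation carries information modulo 2, 5, 10, and 100 at once.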
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Arithmetic Operations
Numerical Processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Helical Number Representation
Clock Algorithm for Addition