Interpreting the Latent Structure of Operator Precedence in Language Models

📅 2025-10-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit poor performance on arithmetic tasks, and it remains unclear whether they internally encode operator precedence. Method: Using LLaMA-3.2-3B, we construct a dataset of ternary arithmetic expressions with systematically varied bracketing structures. Leveraging logit lens analysis, linear classification probes, and UMAP visualization, we trace intermediate computations within the residual stream. Contribution/Results: We first identify that operator precedence is explicitly encoded in linearly separable attention-layer embeddings—specifically, within a single critical embedding dimension that determines evaluation order. Building on this, we propose partial embedding swapping: selectively replacing only that dimension to controllably override the default operator precedence. Experiments confirm that intermediate arithmetic results persist stably in the residual stream after MLP layers, establishing a novel mechanistic framework for both interpreting and editing arithmetic reasoning in LLMs.
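The logit lens analysis mentioned in the summary can be sketched on toy tensors: project an intermediate residual-stream vector through the unembedding matrix and read off the token it currently "points at". Everything below (dimensions, vocabulary, the orthonormal unembedding) is illustrative and not the paper's actual setup.

```python
import numpy as np

def logit_lens(hidden_state, W_U, vocab):
    # Project a residual-stream vector through the unembedding matrix
    # and return the token with the highest logit (the model's
    # "current guess" at that layer).
    logits = hidden_state @ W_U          # shape: (vocab_size,)
    return vocab[int(np.argmax(logits))]

# Toy setup: 4-dim residual stream, 3-token vocabulary.
rng = np.random.default_rng(0)
W_U, _ = np.linalg.qr(rng.normal(size=(4, 3)))  # orthonormal columns
vocab = ["5", "7", "12"]

h = W_U[:, 2]  # a hidden state aligned with the embedding of "12"
print(logit_lens(h, W_U, vocab))  # "12"
```

In the paper this projection is applied to real LLaMA 3.2-3B activations after each block; here the orthonormal `W_U` just guarantees the aligned token wins the argmax.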

📝 Abstract
Large Language Models (LLMs) have demonstrated impressive reasoning capabilities but continue to struggle with arithmetic tasks. Prior work largely focuses on outputs or prompting strategies, leaving open the question of the internal structure through which models perform arithmetic computation. In this work, we investigate whether LLMs encode operator precedence in their internal representations, using the open-source instruction-tuned LLaMA 3.2-3B model. We construct a dataset of arithmetic expressions with three operands and two operators, varying operator order and parenthesis placement, and use it to trace whether intermediate results appear in the model's residual stream. We apply interpretability techniques including the logit lens, linear classification probes, and UMAP geometric visualization. Our results show that intermediate computations are present in the residual stream, particularly after MLP blocks. We also find that the model linearly encodes precedence in each operator's embedding after the attention layer. Finally, we introduce partial embedding swap, a technique that modifies operator precedence by exchanging high-impact embedding dimensions between operators.
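The linear classification probe from the abstract can be illustrated on synthetic data. The sketch below mimics the paper's finding that precedence is carried by a single embedding dimension: a toy "attention-layer embedding" where dimension 0 encodes high vs. low precedence, with a least-squares linear probe fit on top. The data, dimensions, and least-squares fit are all assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

# Toy "attention-layer embeddings": dimension 0 carries precedence
# (1 = high precedence, e.g. "*"; 0 = low, e.g. "+"), the rest is noise.
rng = np.random.default_rng(1)
n, d = 200, 8
X = rng.normal(size=(n, d))
y = (rng.random(n) < 0.5).astype(int)
X[:, 0] = np.where(y == 1, 2.0, -2.0) + 0.1 * rng.normal(size=n)

# Least-squares linear probe: regress +/-1 labels on the embeddings.
w, *_ = np.linalg.lstsq(X, 2 * y - 1, rcond=None)
pred = (X @ w > 0).astype(int)
acc = (pred == y).mean()

print(f"probe accuracy: {acc:.2f}")                 # near 1.00 on this toy data
print(f"dominant probe weight: dim {int(np.abs(w).argmax())}")  # dimension 0
```

A probe like this succeeding, while most of its weight concentrates on one dimension, is the kind of evidence behind the paper's "single critical embedding dimension" claim.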
Problem

Research questions and friction points this paper is trying to address.

Investigating internal representations of operator precedence in LLMs
Analyzing how models encode arithmetic computation structure internally
Developing techniques to modify operator precedence in embeddings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probing internal representations via arithmetic expressions dataset
Applying interpretability techniques to residual stream analysis
Modifying operator precedence through partial embedding swap
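The partial embedding swap listed above can be sketched in a few lines: exchange only one coordinate between two operator embeddings, leaving everything else untouched. The token ids, table size, and critical dimension index below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def partial_embedding_swap(emb, tok_a, tok_b, dim):
    # Exchange a single embedding dimension between two operator tokens,
    # leaving every other coordinate of both embeddings unchanged.
    swapped = emb.copy()
    swapped[tok_a, dim], swapped[tok_b, dim] = emb[tok_b, dim], emb[tok_a, dim]
    return swapped

rng = np.random.default_rng(2)
E = rng.normal(size=(10, 16))            # toy embedding table
PLUS, TIMES, CRITICAL_DIM = 3, 4, 7      # hypothetical ids and dimension

E2 = partial_embedding_swap(E, PLUS, TIMES, CRITICAL_DIM)

# Only the two operator rows differ, and only in the critical dimension.
changed_rows = sorted(set(np.nonzero(~np.isclose(E2, E))[0].tolist()))
print(changed_rows)  # [3, 4]
```

In the paper, patching the embedding table this way at the identified precedence dimension flips which operator the model evaluates first, while the rest of the representation is preserved.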