A model of errors in transformers

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the mechanisms behind errors made by large language models on deterministic tasks such as arithmetic, which remain poorly understood. Drawing on an effective-field-theory perspective, the authors reduce the Transformer's vast parameter space to two interpretable quantities: an elementary noise rate and the number of plausible erroneous tokens. They argue that errors arise when small inaccuracies in the attention mechanism accumulate past a critical threshold, challenging conventional explanations such as "reasoning collapse." Through quantitative modeling, error analysis, and prompt engineering, the framework accurately predicts error rates for Gemini 2.5 Flash, Gemini 2.5 Pro, and DeepSeek R1, and informs prompting strategies that markedly reduce errors.

📝 Abstract
We study the error rate of LLMs on tasks like arithmetic that require a deterministic output, and repetitive processing of tokens drawn from a small set of alternatives. We argue that incorrect predictions arise when small errors in the attention mechanism accumulate to cross a threshold, and use this insight to derive a quantitative two-parameter relationship between the accuracy and the complexity of the task. The two parameters vary with the prompt and the model; they can be interpreted in terms of an elementary noise rate, and the number of plausible erroneous tokens that can be predicted. Our analysis is inspired by an "effective field theory" perspective: the LLM's many raw parameters can be reorganized into just two parameters that govern the error rate. We perform extensive empirical tests, using Gemini 2.5 Flash, Gemini 2.5 Pro and DeepSeek R1, and find excellent agreement between the predicted and observed accuracy for a variety of tasks, although we also identify deviations in some cases. Our model provides an alternative to suggestions that errors made by LLMs on long repetitive tasks indicate the "collapse of reasoning", or an inability to express "compositional" functions. Finally, we show how to construct prompts to reduce the error rate.
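As a rough illustration of the kind of two-parameter relationship the abstract describes (a hedged sketch under simplifying assumptions, not the paper's actual formula), suppose each of n repetitive processing steps independently goes wrong with a probability set by an elementary noise rate and the number of plausible erroneous tokens; accuracy then decays geometrically with task complexity:

```python
def predicted_accuracy(n_steps: int, noise_rate: float, n_erroneous: int) -> float:
    """Toy two-parameter error model (illustrative only; the function name,
    signature, and per-step error form are assumptions, not from the paper).

    Each of the `n_steps` token-processing steps independently fails with
    probability `noise_rate * n_erroneous` (one elementary noise event per
    plausible erroneous token), so overall accuracy decays geometrically
    with the length of the task.
    """
    p_step_error = min(1.0, noise_rate * n_erroneous)
    return (1.0 - p_step_error) ** n_steps

# Accuracy degrades both with task length and with the number of
# plausible wrong tokens the model could emit at each step.
print(predicted_accuracy(10, 0.001, 3))   # short task: near-perfect
print(predicted_accuracy(500, 0.001, 3))  # long task: errors accumulate
```

Fitting the two free parameters per prompt and per model, as the paper does empirically, would then yield a predicted accuracy curve over task complexity.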
Problem

Research questions and friction points this paper is trying to address.

error rate
large language models
deterministic tasks
repetitive processing
task complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

error modeling
attention mechanism
effective field theory
deterministic tasks
prompt engineering