Probing for Arithmetic Errors in Language Models

📅 2025-07-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether the internal activations of language models can enable early detection of arithmetic reasoning errors. We propose a lightweight, linear-probe-based error detector that decodes, from intermediate hidden states alone, whether the model's prediction will match the ground-truth answer—without requiring access to final outputs or external tools. Our key contribution is the finding that probes trained on simple addition tasks generalize effectively to complex, multi-step arithmetic reasoning (e.g., addition-only GSM8K), enabling localization of erroneous reasoning steps and selective re-prompting. Through controlled ablation studies and structured chain-of-thought extensions, our detector achieves over 90% error detection accuracy. Crucially, it preserves performance on correct samples while substantially improving end-task accuracy. This approach points toward trustworthy, debuggable reasoning augmentation in large language models.

📝 Abstract
We investigate whether internal activations in language models can be used to detect arithmetic errors. Starting with a controlled setting of 3-digit addition, we show that simple probes can accurately decode both the model's predicted output and the correct answer from hidden states, regardless of whether the model's output is correct. Building on this, we train lightweight error detectors that predict model correctness with over 90% accuracy. We then extend our analysis to structured chain-of-thought traces on addition-only GSM8K problems and find that probes trained on simple arithmetic generalize well to this more complex setting, revealing consistent internal representations. Finally, we demonstrate that these probes can guide selective re-prompting of erroneous reasoning steps, improving task accuracy with minimal disruption to correct outputs. Our findings suggest that arithmetic errors can be anticipated from internal activations alone, and that simple probes offer a viable path toward lightweight model self-correction.
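At its core, the probe described in the abstract is logistic regression on a layer's hidden states, trained to predict whether the model's answer will be correct. The sketch below illustrates that idea on synthetic data: the "hidden states" are fabricated vectors with a planted correctness direction, standing in for real transformer activations (the dimensionality, learning rate, and data generation are all assumptions for illustration, not the paper's setup).

```python
import numpy as np

# Hypothetical stand-in for intermediate hidden states: in the paper these
# would be activations extracted from a transformer layer while the model
# solves 3-digit addition; here we plant a linear "will be correct" signal.
rng = np.random.default_rng(0)
d, n = 64, 2000                     # hidden size and sample count (assumed)
w_true = rng.normal(size=d)         # hidden direction encoding correctness

X = rng.normal(size=(n, d))
# Label 1 = "model answers correctly", determined (noisily) by w_true.
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def train_probe(X, y, lr=0.1, steps=500):
    """Train a linear probe (logistic regression) by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Train on the first 1500 samples, evaluate on the held-out 500.
w, b = train_probe(X[:1500], y[:1500])
preds = (X[1500:] @ w + b > 0).astype(float)
accuracy = float(np.mean(preds == y[1500:]))
print(f"probe accuracy: {accuracy:.2f}")
```

On this near-separable synthetic data the probe reaches high held-out accuracy, which is the mechanism the paper relies on: a single linear readout of hidden states suffices to anticipate errors, with no decoding of the final output.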
Problem

Research questions and friction points this paper is trying to address.

Detect arithmetic errors in language models using internal activations
Train lightweight error detectors for model correctness prediction
Improve task accuracy by guiding selective re-prompting of errors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear probes decode arithmetic errors from hidden states
Lightweight error detectors achieve over 90% accuracy
Probes guide selective re-prompting of erroneous reasoning steps
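The selective re-prompting idea can be sketched as a simple loop: score each chain-of-thought step with the error probe, and re-run the model only on flagged steps, leaving correct steps untouched. In the toy version below, `probe_flags_error` and `rerun_step` are hypothetical stand-ins (a string-level arithmetic check and a direct recomputation) for the real probe over activations and the real re-prompting call.

```python
def probe_flags_error(step: str) -> bool:
    # Stand-in for the error probe: flags a step whose stated sum is wrong.
    # The real detector reads hidden states, not the step text.
    lhs, rhs = step.split("=")
    return eval(lhs) != int(rhs)

def rerun_step(step: str) -> str:
    # Stand-in for re-prompting the model on a flagged step.
    lhs, _ = step.split("=")
    return f"{lhs}={eval(lhs)}"

def correct_trace(steps):
    # Re-prompt only the steps the probe flags; correct steps pass through,
    # which is how the method avoids disrupting already-correct outputs.
    return [rerun_step(s) if probe_flags_error(s) else s for s in steps]

trace = ["12+7=19", "19+30=49", "49+5=53"]   # last step is erroneous
print(correct_trace(trace))
```

Because only flagged steps are re-run, the intervention is cheap and targeted, matching the paper's claim of improved task accuracy with minimal disruption to correct samples.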