🤖 AI Summary
This work investigates whether the internal activations of language models can enable early detection of arithmetic errors. We propose a lightweight, linear-probe-based error detector that predicts whether the model's answer will be correct solely from intermediate hidden states—without access to final outputs or external tools. Our key contribution is the finding that probes trained on simple 3-digit addition generalize effectively to multi-step chain-of-thought reasoning on addition-only GSM8K problems, revealing consistent internal representations and enabling localization of erroneous reasoning steps for selective re-prompting. The detector predicts model correctness with over 90% accuracy, and the guided re-prompting it enables improves end-task accuracy while preserving performance on already-correct samples. This approach offers a lightweight path toward trustworthy, debuggable self-correction in large language models.
📝 Abstract
We investigate whether internal activations in language models can be used to detect arithmetic errors. Starting with a controlled setting of 3-digit addition, we show that simple probes can accurately decode both the model's predicted output and the correct answer from hidden states, regardless of whether the model's output is correct. Building on this, we train lightweight error detectors that predict model correctness with over 90% accuracy. We then extend our analysis to structured chain-of-thought traces on addition-only GSM8K problems and find that probes trained on simple arithmetic generalize well to this more complex setting, revealing consistent internal representations. Finally, we demonstrate that these probes can guide selective re-prompting of erroneous reasoning steps, improving task accuracy with minimal disruption to correct outputs. Our findings suggest that arithmetic errors can be anticipated from internal activations alone, and that simple probes offer a viable path toward lightweight model self-correction.
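To make the probe idea concrete, here is a minimal illustrative sketch (not the paper's code): a logistic-regression probe trained on hidden-state vectors to predict whether the model's answer will be correct. The hidden states here are synthetic—a correctness signal is planted along one random direction—standing in for the paper's finding that correctness is linearly decodable from intermediate activations. The dimension, sample count, and learning rate are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64    # hidden-state dimension (assumed for this demo)
n = 2000  # number of (hidden state, will-be-correct?) training pairs

# Synthetic stand-in for real activations: background noise plus a
# correctness signal planted along one fixed unit direction.
signal = rng.normal(size=d)
signal /= np.linalg.norm(signal)
labels = rng.integers(0, 2, size=n)          # 1 = model answer correct
states = rng.normal(size=(n, d))             # activation noise
states += np.outer(2.0 * (labels - 0.5), signal) * 3.0

# Linear probe: logistic regression trained by plain gradient descent.
w = np.zeros(d)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(states @ w + b)))   # predicted P(correct)
    w -= 0.5 * (states.T @ (p - labels)) / n      # gradient step on weights
    b -= 0.5 * np.mean(p - labels)                # gradient step on bias

preds = (states @ w + b) > 0
accuracy = np.mean(preds == labels)
print(f"probe accuracy: {accuracy:.3f}")
```

Because the probe is a single linear layer, it adds negligible overhead at inference time; in the paper's setting such a probe is read off an intermediate layer's activations and its prediction gates whether a reasoning step is re-prompted.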