🤖 AI Summary
Large language models (LLMs) exhibit low accuracy on fundamental arithmetic tasks, particularly multiplication, due to inherent limitations of token-based reasoning. To address this, we propose the Integrated Gated Calculator (IGC), a differentiable module natively integrated into LLMs (e.g., Llama) that executes addition, subtraction, multiplication, and division directly on GPU hardware without generating intermediate tokens or invoking external tools. This design enables fully internal, single-step, zero-dependency, interpretable, and side-effect-free arithmetic computation. Evaluated on the BigBench Arithmetic benchmark, the approach achieves state-of-the-art performance: 98-99% accuracy across all subtasks, including multi-digit multiplication, with markedly improved robustness. Notably, it outperforms baseline models with up to two orders of magnitude more parameters, demonstrating that architectural specialization, not scale, drives arithmetic competence. The method preserves end-to-end differentiability, requires only finetuning of the host model, and needs no inference-time tool calls.
📄 Abstract
Solving arithmetic tasks is a simple and fundamental skill, yet modern Large Language Models (LLMs) struggle with it. We introduce the Integrated Gated Calculator (IGC), a module that enables LLMs to perform arithmetic by emulating a calculator directly on the GPU. We finetune a Llama model with our module and test it on the BigBench Arithmetic benchmark, where it sets a new state of the art, outperforming all models on the benchmark, including models almost two orders of magnitude larger. Our approach takes only a single iteration to run and requires no external tools. It performs arithmetic operations entirely inside the LLM without producing intermediate tokens. It is computationally efficient, interpretable, and avoids side effects on tasks that do not require arithmetic. It reliably achieves 98% to 99% accuracy across multiple training runs and across all subtasks, including the substantially harder subtask of multiplication, which was previously unsolved.
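The gating idea behind such a module can be illustrated with a minimal sketch. This is not the paper's actual architecture: the function names (`soft_gate`, `calculator`, `gated_output`) and the scalar blending scheme are illustrative assumptions. The point is that an exact calculator result can be mixed into the model's output through a differentiable gate, so arithmetic is computed internally in one step, and a closed gate leaves non-arithmetic behavior untouched.

```python
import math

def soft_gate(score: float) -> float:
    """Sigmoid gate in [0, 1]: how much calculator output to mix in."""
    return 1.0 / (1.0 + math.exp(-score))

def calculator(a: float, b: float, op: str) -> float:
    """Exact arithmetic executed 'inside' the model; no tokens are emitted."""
    return {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]

def gated_output(lm_value: float, a: float, b: float, op: str,
                 gate_score: float) -> float:
    """Blend the LM's own (unreliable) estimate with the exact result.

    A high gate_score routes the exact answer through; a low score leaves
    the LM output unchanged, avoiding side effects on non-arithmetic tasks.
    Because the blend is a smooth function of gate_score, the whole path
    remains differentiable end to end.
    """
    g = soft_gate(gate_score)
    return g * calculator(a, b, op) + (1.0 - g) * lm_value
```

With the gate driven open (`gate_score` large and positive), the exact product or sum dominates the output; driven closed, the model's original value passes through unchanged. In a real integration the gate score and operands would be decoded from hidden states rather than passed in explicitly.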