How Numerical Precision Affects Mathematical Reasoning Capabilities of LLMs

📅 2024-10-17
🏛️ arXiv.org
📈 Citations: 13
Influential: 2
🤖 AI Summary
This work investigates the fundamental impact of numerical precision on the arithmetic reasoning capabilities of Transformer-based large language models (LLMs). Low-precision representations (e.g., INT8, FP16) may intrinsically limit exact arithmetic computation. Method: We conduct a computational complexity analysis and formalize Transformer expressivity for arithmetic tasks, modeling iterated addition and integer multiplication as decision problems. Contribution/Results: We provide the first theoretical proof that low-precision arithmetic renders such tasks inherently unsolvable unless model size grows super-polynomially in the input length, whereas FP32 enables efficient, polynomial-size solvability. Controlled quantization experiments confirm a sharp, threshold-like performance degradation as precision is reduced. Our findings establish numerical precision, not merely parameter count or architectural design, as a critical bottleneck for LLMs' mathematical reasoning. This yields a novel "precision-first" paradigm for enhancing arithmetic capability, with implications for model quantization, hardware-aware training, and trustworthy numerical reasoning in foundation models.
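The failure mode the paper formalizes for iterated addition can be illustrated with a toy NumPy sketch (this is an illustration of low-precision rounding, not the paper's experimental setup): in FP16, once a running sum reaches 2048, adding 1.0 no longer changes it, because the spacing between representable FP16 values there is 2.

```python
import numpy as np

# Iterated addition: sum 3000 copies of 1.0 at two precisions.
# FP16 has a 10-bit mantissa, so above 2048 the gap between
# representable values is 2, and adding 1.0 rounds back down.
n = 3000

sum_fp16 = np.float16(0.0)
for _ in range(n):
    sum_fp16 = np.float16(sum_fp16 + np.float16(1.0))

# FP32 represents every integer up to 2**24 exactly, so the
# same iterated addition is exact here.
sum_fp32 = np.ones(n).sum(dtype=np.float32)

print(sum_fp16)  # 2048.0 — the sum plateaus at the FP16 rounding gap
print(sum_fp32)  # 3000.0 — exact in FP32
```

This is the intuition behind the threshold-like degradation: below a precision threshold, the error is not gradual but catastrophic for exact arithmetic.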

📝 Abstract
Despite the remarkable success of Transformer-based Large Language Models (LLMs) across various domains, understanding and enhancing their mathematical capabilities remains a significant challenge. In this paper, we conduct a rigorous theoretical analysis of LLMs' mathematical abilities, with a specific focus on their arithmetic performances. We identify numerical precision as a key factor that influences their effectiveness in mathematical tasks. Our results show that Transformers operating with low numerical precision fail to address arithmetic tasks, such as iterated addition and integer multiplication, unless the model size grows super-polynomially with respect to the input length. In contrast, Transformers with standard numerical precision can efficiently handle these tasks with significantly smaller model sizes. We further support our theoretical findings through empirical experiments that explore the impact of varying numerical precision on arithmetic tasks, providing valuable insights for improving the mathematical reasoning capabilities of LLMs.
Problem

Research questions and friction points this paper is trying to address.

Analyzing how numerical precision affects LLMs' arithmetic performance
Showing that low-precision Transformers fail at arithmetic tasks unless model size grows super-polynomially with input length
Demonstrating that standard (FP32) precision enables efficient arithmetic with significantly smaller models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Provides a theoretical analysis of how numerical precision constrains Transformer expressivity for arithmetic
Proves a separation between low-precision and standard-precision Transformers on iterated addition and integer multiplication
Empirically validates the theory with controlled quantization experiments on arithmetic tasks
Guhao Feng
PhD Student, Peking University
Machine Learning
Kai Yang
Peking University
Yuntian Gu
Peking University
Xinyue Ai
Peking University
Shengjie Luo
PhD Student, Peking University
Machine Learning
Jiacheng Sun
Huawei Noah's Ark Lab
Di He
Peking University
Zhenguo Li
Huawei Noah's Ark Lab, Columbia, CUHK, PKU
machine learning · generative AI · AI for mathematics
Liwei Wang
Peking University