Efficient numeracy in language models through single-token number embeddings

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing large language models (LLMs) suffer from inefficient numerical computation, excessively long reasoning chains, and weak numerical intuition because their tokenizers split multi-digit numbers into multiple tokens. To address this, the paper proposes BitTokens, a single-token numerical encoding scheme grounded in the IEEE 754 binary representation of floating-point numbers. BitTokens maps any numeric value to a single token embedding, enabling end-to-end learning of numerical representations and arithmetic operations. The authors introduce a set of desiderata for such encodings, show that existing approaches fail to satisfy them, and demonstrate through extensive experiments that even small language models equipped with BitTokens learn algorithms that solve basic arithmetic operations nearly perfectly while substantially reducing token consumption. This efficiency could expand the length and complexity of numerical problems language models can solve.

📝 Abstract
To drive progress in science and engineering, large language models (LLMs) must be able to process large amounts of numerical data and solve long calculations efficiently. This is currently only possible through the use of external tools or extensive reasoning chains, either limiting the numerical intuition of LLMs or limiting the length of problems they can solve. We show that frontier LLMs require excessive amounts of reasoning tokens to solve even basic calculations, which is exacerbated by their tokenization strategies that split single numbers into multiple tokens. This motivates the need for efficient and effective single-token number encodings. We introduce a set of desiderata for such encodings and show that existing approaches fail to fulfill them. To address these shortcomings, we propose BitTokens, a novel tokenization strategy that embeds any number into a single token using its IEEE 754 binary floating-point representation. Through extensive experiments we show that our BitTokens allow even small language models to learn algorithms that solve basic arithmetic operations nearly perfectly. This newly gained efficiency could expand the length and complexity of problems language models can solve.
Problem

Research questions and friction points this paper is trying to address.

LLMs inefficiently process numbers via multi-token representations
Existing number encodings fail to meet efficiency and effectiveness requirements
Current tokenization limits numerical reasoning capabilities in language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

BitTokens encode numbers as single tokens
Uses IEEE 754 binary floating-point representation
Enables efficient arithmetic operations in small models
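The core idea named in the bullets above can be sketched in a few lines: a number is mapped to the 64 bits of its IEEE 754 double-precision representation, which can then serve as the input for a single learnable token embedding. This is a minimal illustration of the bit-level encoding, not the authors' implementation; the function name and bit ordering are assumptions.

```python
import struct

def bit_token_bits(value: float) -> list[int]:
    """Map a number to its IEEE 754 binary64 bit pattern as a 0/1 vector.

    Bit 0 is the sign, bits 1-11 the exponent, bits 12-63 the fraction
    (mantissa). A model could project this 64-dim vector into a single
    token embedding instead of tokenizing the number digit by digit.
    NOTE: illustrative sketch, not the paper's actual code.
    """
    # Pack the float into 8 big-endian bytes, reinterpret as a 64-bit uint.
    (bits,) = struct.unpack(">Q", struct.pack(">d", value))
    # Extract bits most-significant first.
    return [(bits >> (63 - i)) & 1 for i in range(64)]

vec = bit_token_bits(1.0)
print(len(vec))  # 64
print(vec[0])    # 0 (sign bit of a positive number)
```

Because the encoding is the raw floating-point bit pattern, any representable number occupies exactly one token regardless of its digit count, which is what makes the approach attractive for long calculations.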
Linus Kreitner
Chair for AI in Healthcare and Medicine, Technical University of Munich (TUM) and TUM University Hospital, Munich, Germany
Paul Hager
Chair for AI in Healthcare and Medicine, Technical University of Munich (TUM) and TUM University Hospital, Munich, Germany
Jonathan Mengedoht
School of Computation, Information and Technology, TUM, Germany
Georgios Kaissis
Chair for AI in Healthcare and Medicine, Technical University of Munich (TUM) and TUM University Hospital, Munich, Germany
Daniel Rueckert
Technical University of Munich and Imperial College London
Machine Learning · Medical Image Computing · Biomedical Image Analysis · Computer Vision
Martin J. Menten
Technical University of Munich
Machine Learning for Healthcare · Medical Imaging · Computer Vision