🤖 AI Summary
Existing large language models (LLMs) suffer from inefficient numerical computation, excessively long reasoning chains, and weak numerical intuition, partly because digit-level tokenization splits multi-digit numbers into several tokens. To address this, the paper proposes BitTokens, the first single-token numerical encoding scheme grounded in the IEEE 754 binary representation of floating-point numbers. BitTokens maps any numeric value to a single, learnable embedding vector, enabling end-to-end training of numerical representations and arithmetic operations. The scheme satisfies key desiderata, including compactness, differentiability, and scale invariance, and thereby substantially reduces inference overhead. Experiments demonstrate that small language models equipped with BitTokens achieve near-perfect (≈100%) accuracy on basic arithmetic tasks while cutting token consumption by over 90%. BitTokens also extends the model's capacity to handle longer and more complex numerical computations.
📝 Abstract
To drive progress in science and engineering, large language models (LLMs) must be able to process large amounts of numerical data and solve long calculations efficiently. This is currently only possible through the use of external tools or extensive reasoning chains, either limiting the numerical intuition of LLMs or limiting the length of problems they can solve. We show that frontier LLMs require excessive amounts of reasoning tokens to solve even basic calculations, which is exacerbated by their tokenization strategies that split single numbers into multiple tokens. This motivates the need for efficient and effective single-token number encodings. We introduce a set of desiderata for such encodings and show that existing approaches fail to fulfill them. To address these shortcomings, we propose BitTokens, a novel tokenization strategy that embeds any number into a single token using its IEEE 754 binary floating-point representation. Through extensive experiments we show that our BitTokens allow even small language models to learn algorithms that solve basic arithmetic operations nearly perfectly. This newly gained efficiency could expand the length and complexity of problems language models can solve.
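To make the core idea concrete: an IEEE 754 double-precision number is already a fixed-width 64-bit pattern, so it can in principle feed a single token embedding rather than a variable-length digit sequence. The sketch below only illustrates reading out those bits in Python; the paper's actual bit layout, precision choice, and embedding architecture are not specified in this summary, so the function name and the 64-bit double format here are assumptions.

```python
import struct

def float_to_bits(x: float) -> list[int]:
    """Return the 64 IEEE 754 bits of a double as a 0/1 list,
    most significant bit (the sign bit) first."""
    # Reinterpret the float's bytes as an unsigned 64-bit integer.
    (as_int,) = struct.unpack(">Q", struct.pack(">d", x))
    return [(as_int >> (63 - i)) & 1 for i in range(64)]

# 1.0 has sign bit 0, exponent 0b01111111111 (1023), and an all-zero mantissa.
bits = float_to_bits(1.0)
```

Whatever the number's decimal length, the representation stays exactly 64 bits wide, which is what makes a single fixed-size token (and hence a single embedding lookup) possible.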