Faster Superword Tokenization

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing superword tokenization algorithms, such as BoundlessBPE and SuperBPE, suffer from prohibitively slow training, limiting their practical applicability. This work proposes a two-phase training framework that first performs standard Byte Pair Encoding (BPE) merges and then aggregates supermerge candidates by frequency, eliminating the need to keep the entire corpus in memory. The two-phase formulation reproduces BoundlessBPE exactly and is nearly equivalent to SuperBPE, with the difference that a hyperparameter SuperBPE requires to be set manually is determined automatically. It drastically improves computational efficiency, reducing training time on a 1GB dataset from 4.7 CPU-days to roughly 10 minutes, a speedup of more than 600x. Open-source implementations are provided in both Python and Rust.
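The core efficiency idea, aggregating supermerge candidates by frequency instead of retaining documents, can be illustrated with a minimal sketch. This is not the paper's implementation; the whitespace pre-tokenizer and function names are illustrative assumptions, and real BoundlessBPE/SuperBPE apply eligibility rules to which pretoken sequences may form supermerges.

```python
from collections import Counter

def count_candidates(corpus_stream):
    """Aggregate pretokens and supermerge candidates (here: adjacent
    pretoken pairs) into frequency tables. Each document is visited
    once and discarded, so the corpus is never held in memory."""
    pretokens = Counter()
    candidates = Counter()
    for doc in corpus_stream:
        words = doc.split()  # toy pre-tokenizer: whitespace split
        pretokens.update(words)
        candidates.update(zip(words, words[1:]))  # consecutive pairs
    return pretokens, candidates

docs = ["the quick fox", "the quick dog", "a quick fox"]
pretokens, candidates = count_candidates(iter(docs))
# e.g. the pair ("the", "quick") is counted twice across the stream
```

Once the counts exist, supermerge learning can operate on the candidate table exactly the way regular BPE operates on the pretoken table, which is what makes the streaming formulation possible.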
📝 Abstract
Byte Pair Encoding (BPE) is a widely used tokenization algorithm, whose tokens cannot extend across pre-tokenization boundaries, functionally limiting it to representing at most full words. The BoundlessBPE and SuperBPE algorithms extend and improve BPE by relaxing this limitation and allowing the formation of superwords, which are combinations of pretokens that form phrases. However, previous implementations were impractical to train: for example, BoundlessBPE took 4.7 CPU days to train on 1GB of data. We show that supermerge candidates, two or more consecutive pretokens eligible to form a supermerge, can be aggregated by frequency much like regular pretokens. This avoids keeping full documents in memory, as the original implementations of BoundlessBPE and SuperBPE required, leading to a significant training speedup. We present a two-phase formulation of BoundlessBPE that separates first-phase learning of regular merges from second-phase learning of supermerges, producing identical results to the original implementation. We also show a near-equivalence between two-phase BoundlessBPE and SuperBPE, with the difference being that a manually selected hyperparameter used in SuperBPE can be automatically determined in the second phase of BoundlessBPE. These changes enable a much faster implementation, allowing training on that same 1GB of data in 603 and 593 seconds for BoundlessBPE and SuperBPE, respectively, a more than 600x increase in speed. For each of BoundlessBPE, SuperBPE, and BPE, we open-source both a reference Python implementation and a fast Rust implementation.
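The two-phase formulation described in the abstract separates regular merge learning from supermerge learning. A hedged sketch of the first phase follows: ordinary BPE trained from an aggregated pretoken frequency table alone, with no documents in memory. The second phase would then run the same greedy merge loop over aggregated supermerge candidates; the toy word counts and tie-breaking below are illustrative assumptions, not the paper's reference implementation.

```python
from collections import Counter

def bpe_train(word_counts, num_merges):
    """Phase 1: standard BPE learned from a pretoken frequency table.
    Ties between equally frequent pairs break by insertion order."""
    vocab = {w: list(w) for w in word_counts}  # word -> symbol sequence
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for w, freq in word_counts.items():
            syms = vocab[w]
            for a, b in zip(syms, syms[1:]):
                pairs[(a, b)] += freq  # weight each pair by word frequency
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]  # greedily pick top pair
        merges.append((a, b))
        for w, syms in vocab.items():  # apply the merge everywhere
            out, i = [], 0
            while i < len(syms):
                if i + 1 < len(syms) and syms[i] == a and syms[i + 1] == b:
                    out.append(a + b)
                    i += 2
                else:
                    out.append(syms[i])
                    i += 1
            vocab[w] = out
    return merges, vocab

counts = Counter({"low": 5, "lower": 2, "lowest": 3})
merges, vocab = bpe_train(counts, 3)
# learns ("l","o"), then ("lo","w"), then ("low","e")
```

The paper's observation is that phase 2 needs nothing more than a second frequency table of this shape, keyed by candidate pretoken sequences, which is why the full-corpus memory requirement of the original implementations disappears.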
Problem

Research questions and friction points this paper is trying to address.

superword tokenization
Byte Pair Encoding
training efficiency
memory usage
tokenization algorithm
Innovation

Methods, ideas, or system contributions that make the work stand out.

superword tokenization
BoundlessBPE
SuperBPE
efficient training
two-phase algorithm