🤖 AI Summary
This work investigates the causes and mitigation of "token premiums", the cross-lingual disparity in token counts when encoding parallel text. We identify vocabulary size and pre-tokenization strategy, rather than similarity between tokenizer training and evaluation data, as the primary drivers. Through systematic experiments across approximately 7,000 monolingual tokenizers spanning 97 languages, we empirically determine language-specific optimal vocabulary sizes for the first time. We further propose superword tokenizers, which allow merges across whitespace boundaries. Our approach significantly mitigates cross-lingual tokenization inequality while preserving model performance: it achieves an average 18.3% improvement in inference compression ratio and a 22.7% increase in training throughput, alleviating cost bottlenecks in multilingual models, especially for low-resource languages.
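The premium itself is simple to state: for the same parallel content, it is the ratio of a language's token count to a pivot language's. The sketch below illustrates the measurement; the `tokenize` stand-in (a naive character-level tokenizer) and the toy sentence pair are hypothetical, whereas the study uses trained monolingual tokenizers and real parallel data.

```python
# Hedged sketch: measuring a cross-lingual "token premium" on parallel text.

def tokenize(text: str) -> list[str]:
    # Placeholder tokenizer for illustration only; a real measurement
    # would use a trained BPE/Unigram tokenizer per language.
    return list(text.replace(" ", "▁"))

def token_premium(parallel: dict[str, str], pivot: str = "en") -> dict[str, float]:
    """Ratio of each language's token count to the pivot language's count
    for the same parallel content. A premium > 1.0 means that language
    needs more tokens to encode the same meaning."""
    counts = {lang: len(tokenize(text)) for lang, text in parallel.items()}
    return {lang: counts[lang] / counts[pivot] for lang in counts}

# Toy parallel sentence; hypothetical example data.
parallel = {
    "en": "the cat sleeps",
    "de": "die Katze schläft",
}
premiums = token_premium(parallel)
```

Under this toy tokenizer the German premium exceeds 1.0 simply because the German sentence is longer in characters; with trained subword tokenizers the premium instead reflects vocabulary size, pre-tokenization, and language features.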
📝 Abstract
The number of tokens needed to encode parallel text varies across languages; these disparities are called token premiums. High token premiums lower throughput during training and increase costs at inference. In this paper, we show that even after controlling for dataset size, vocabulary size, and data content, monolingual tokenizers exhibit a wide range of token premiums across languages. To understand the cross-linguistic differences that cause these token premiums, we train a suite of approximately 7,000 comparable monolingual tokenizers for 97 languages, manipulating tokenization algorithm, vocabulary size, and dataset size. We measure token premiums and test for a relationship with factors such as data similarity (between tokenizer training and evaluation data), vocabulary size, and pre-tokenization. We also investigate the role of language-specific features such as writing system and word length. We find that similarity between training and test data does not impact token premiums, but vocabulary size and pre-tokenization do. While simply increasing vocabulary size does not reduce token premium effects, we can determine an "optimal" vocabulary size for each language that significantly reduces them. We also train superword tokenizers, which allow merges over whitespace, and we find that they both reduce token premium effects and improve compression overall. Thus, intervening on the vocabulary size or the pre-tokenizer significantly reduces cross-lingual token premium effects.