🤖 AI Summary
This work addresses the inefficiency of tokenization for low-resource languages in mainstream large language models, where suboptimal tokenization leads to excessive token consumption for equivalent semantic content, thereby increasing computational costs and reducing effective context length. The study presents the first systematic evaluation of tokenization overhead across multiple large language models for low-resource languages and introduces a post-hoc vocabulary expansion method that requires no model retraining. By merging frequently occurring multi-token sequences into new single-token entries in the original vocabulary, the approach substantially compresses input length. Experiments with Llama 3.2 1B across twelve low-resource languages demonstrate large reductions in token usage while preserving high similarity between the final hidden states of the original and compressed inputs, confirming consistent semantic representation with improved token efficiency.
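The core of the expansion method described above is a BPE-style merge loop applied *after* training: repeatedly find the most frequent adjacent token pair in a tokenized corpus and register their concatenation as a new single-token vocabulary entry. The sketch below is a minimal, self-contained illustration of that idea on plain Python lists; the function names (`expand_vocab`, `merge_pair`) and the toy dict vocabulary are my own assumptions, not the paper's implementation, which would operate on a real tokenizer's vocabulary and embedding table.

```python
from collections import Counter

def most_frequent_pair(corpus):
    """Count adjacent token pairs across a tokenized corpus and
    return the most common one (or None if no pairs exist)."""
    pairs = Counter()
    for tokens in corpus:
        pairs.update(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(tokens, pair, merged):
    """Replace every occurrence of the adjacent `pair` with `merged`."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(merged)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

def expand_vocab(corpus, vocab, num_merges):
    """Post-hoc vocabulary expansion: repeatedly coalesce the most
    frequent adjacent token pair into a new single-token entry,
    without touching any existing vocabulary entries."""
    for _ in range(num_merges):
        pair = most_frequent_pair(corpus)
        if pair is None:
            break
        merged = pair[0] + pair[1]
        vocab[merged] = len(vocab)  # appended; original ids unchanged
        corpus = [merge_pair(t, pair, merged) for t in corpus]
    return corpus, vocab

corpus = [["ta", "mil", "sent"], ["ta", "mil", "word"]]
vocab = {"ta": 0, "mil": 1, "sent": 2, "word": 3}
corpus, vocab = expand_vocab(corpus, vocab, 1)
# ("ta", "mil") is the most frequent pair, so it becomes one token:
# corpus → [["tamil", "sent"], ["tamil", "word"]]
```

In a real model, each new merged token would also need an embedding (e.g. initialized from the constituent tokens' embeddings) so the frozen network can consume it without retraining.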
📝 Abstract
Relative to English, low-resource languages suffer substantial tokenization premiums in modern LMs: it generally takes several times as many tokens to encode a sentence in a low-resource language as to encode the analogous sentence in English. This tokenization premium increases API and energy costs and reduces the effective context window for these languages. In this paper we analyze the tokenizers of ten popular LMs to better understand their designs and per-language tokenization premiums. We also propose a mechanism to reduce tokenization premiums in pre-trained models via post-hoc additions to the token vocabulary that coalesce frequent multi-token character sequences into single tokens. We apply this methodology to 12 low-resource languages, demonstrating that the original and compressed inputs often yield similar last hidden states when run through the Llama 3.2 1B model.
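The tokenization premium measured in the analysis above can be expressed as a simple ratio over a parallel corpus: total tokens for the target-language sentences divided by total tokens for their English translations. Here is a hedged sketch; `tokenization_premium` and `toy_tokenize` are illustrative names of my own, and the toy tokenizer (whitespace words for ASCII text, per-character fallback otherwise) merely mimics the common failure mode where poorly covered scripts decompose into many tokens. A real measurement would use the model's actual tokenizer on a parallel benchmark.

```python
def tokenization_premium(tokenize, parallel_pairs):
    """Tokens needed for target-language sentences divided by tokens
    needed for their English counterparts, over (target, english) pairs."""
    tgt = sum(len(tokenize(t)) for t, _ in parallel_pairs)
    eng = sum(len(tokenize(e)) for _, e in parallel_pairs)
    return tgt / eng

def toy_tokenize(text):
    """Stand-in tokenizer: ASCII words become one token each; other
    scripts fall back to one token per character, as often happens
    when a vocabulary covers a script poorly."""
    toks = []
    for word in text.split():
        if word.isascii():
            toks.append(word)
        else:
            toks.extend(word)  # character-level fallback
    return toks

pairs = [("ありがとう", "thanks")]
premium = tokenization_premium(toy_tokenize, pairs)
# five characters vs one word → premium of 5.0 under this toy tokenizer
```

A premium above 1.0 means the language pays more tokens than English for the same content, which is exactly the overhead the proposed vocabulary expansion targets.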