🤖 AI Summary
Existing tokenization evaluation metrics, such as *fertility* (the average number of tokens per word), measure only compression efficiency and fail to expose cross-lingual fairness issues. This work proposes a new metric, the *Single-Token Retention Rate (STRR)*, which quantifies systematic linguistic disparities in tokenization from the perspective of lexical integrity. Through large-scale empirical analysis across six mainstream tokenizers, seven languages, and two domains, we find that English consistently achieves the highest STRR (i.e., whole words are most often preserved as single tokens), followed by Chinese, while low-resource languages such as Hindi exhibit severe subword fragmentation. STRR complements fertility by offering an interpretable, language-aware diagnostic that reveals, for the first time, inherent linguistic biases in tokenizers. Our findings provide both theoretical grounding and a practical evaluation framework for developing fairer, more transparent multilingual tokenization systems.
📝 Abstract
Tokenization is a crucial but under-evaluated step in large language models (LLMs). The standard metric, fertility (the average number of tokens per word), captures compression efficiency but obscures how vocabularies are allocated across languages and domains. We analyze six widely used tokenizers across seven languages and two domains, finding stable fertility for English, high fertility for Chinese, and little domain sensitivity. To address fertility's blind spots, we propose the Single Token Retention Rate (STRR), which measures the proportion of words preserved as single tokens. STRR reveals systematic prioritization of English, strong support for Chinese, and fragmentation in Hindi, offering an interpretable view of cross-lingual fairness. Our results show that STRR complements fertility and provides practical guidance for designing more equitable multilingual tokenizers.
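The two metrics contrasted above are straightforward to compute once a tokenizer's word-level segmentations are available. The sketch below is a minimal illustration, not the paper's evaluation code: the word segmentations are hypothetical examples, and a real analysis would obtain them from an actual tokenizer (e.g., a BPE or WordPiece model) over a corpus.

```python
def fertility(tokenized_words):
    """Fertility: average number of tokens per word."""
    total_tokens = sum(len(tokens) for tokens in tokenized_words)
    return total_tokens / len(tokenized_words)


def strr(tokenized_words):
    """Single Token Retention Rate: fraction of words kept as one token."""
    kept_whole = sum(1 for tokens in tokenized_words if len(tokens) == 1)
    return kept_whole / len(tokenized_words)


# Hypothetical subword segmentations for four words.
tokenized = [
    ["the"],                     # preserved as a single token
    ["token", "izer"],           # split into two subwords
    ["word"],                    # preserved as a single token
    ["frag", "ment", "ation"],   # split into three subwords
]

print(fertility(tokenized))  # 1.75  (7 tokens / 4 words)
print(strr(tokenized))       # 0.5   (2 of 4 words kept whole)
```

Note how the two metrics can diverge: a tokenizer could achieve moderate fertility while still splitting most words, which is exactly the blind spot STRR is designed to expose.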