Beyond Fertility: Analyzing STRR as a Metric for Multilingual Tokenization Evaluation

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing tokenization evaluation metrics, such as fertility (tokens per word), measure only compression efficiency and fail to expose cross-lingual fairness issues. This work proposes a new metric, the *Single-Token Retention Rate (STRR)*, which quantifies systematic linguistic disparities in tokenization from the perspective of lexical integrity. A large-scale empirical analysis across six mainstream tokenizers, seven languages, and two domains shows that English consistently achieves the highest STRR (i.e., whole words are strongly preferred and preserved), followed by Chinese, while low-resource languages such as Hindi suffer heavy subword fragmentation. STRR complements fertility by offering an interpretable, language-aware diagnostic that surfaces inherent linguistic biases in tokenizers. The findings provide both theoretical grounding and a practical evaluation framework for building fairer, more transparent multilingual tokenization systems.

📝 Abstract
Tokenization is a crucial but under-evaluated step in large language models (LLMs). The standard metric, fertility (the average number of tokens per word), captures compression efficiency but obscures how vocabularies are allocated across languages and domains. We analyze six widely used tokenizers across seven languages and two domains, finding stable fertility for English, high fertility for Chinese, and little domain sensitivity. To address fertility's blind spots, we propose the Single Token Retention Rate (STRR), which measures the proportion of words preserved as single tokens. STRR reveals systematic prioritization of English, strong support for Chinese, and fragmentation in Hindi, offering an interpretable view of cross-lingual fairness. Our results show that STRR complements fertility and provides practical guidance for designing more equitable multilingual tokenizers.
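The two metrics contrasted in the abstract are straightforward to compute once a tokenizer is fixed. The sketch below is a minimal illustration, not the paper's implementation: the greedy longest-prefix tokenizer and its tiny vocabulary are stand-ins for a real subword model (e.g., BPE), introduced here only so the example is self-contained.

```python
def fertility(words, tokenize):
    """Fertility: average number of tokens per word."""
    return sum(len(tokenize(w)) for w in words) / len(words)

def strr(words, tokenize):
    """Single Token Retention Rate: fraction of words kept as one token."""
    return sum(1 for w in words if len(tokenize(w)) == 1) / len(words)

# Toy stand-in for a real subword tokenizer (assumption for illustration).
vocab = {"the", "token", "ization", "rate"}

def toy_tokenize(word):
    # Greedy longest-prefix split against the toy vocabulary,
    # falling back to single characters so it always terminates.
    pieces, rest = [], word
    while rest:
        for i in range(len(rest), 0, -1):
            if rest[:i] in vocab or i == 1:
                pieces.append(rest[:i])
                rest = rest[i:]
                break
    return pieces

words = ["the", "tokenization", "rate"]
print(strr(words, toy_tokenize))       # 2 of 3 words survive as one token
print(fertility(words, toy_tokenize))  # "tokenization" splits into 2 pieces
```

A high-fertility tokenizer can still score low on STRR (many words split at least once), which is exactly the blind spot the paper argues STRR exposes.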
Problem

Research questions and friction points this paper is trying to address.

Evaluating multilingual tokenization fairness beyond compression efficiency metrics
Analyzing vocabulary allocation disparities across languages and domains
Proposing STRR to measure single-token preservation for cross-lingual fairness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposed the Single Token Retention Rate (STRR) metric
Measures the proportion of words preserved as single tokens
Evaluates cross-lingual fairness in tokenization