🤖 AI Summary
Subword tokenization anomalies—specifically “word fragmentation” (e.g., splitting “martial” into “mart” + “ial”)—adversely impact large language model (LLM) performance across downstream tasks, yet their systematic effects remain poorly quantified.
Method: We introduce the first interpretable, quantitative tokenization penalty functions, combining subword segmentation analysis with multi-task benchmarking (GLUE, MMLU, etc.) across mainstream LLMs, including Mistral and Llama, to enable cross-model, cross-task empirical evaluation. Hypothesis testing and regression analysis are used to assess the relationship between tokenization quality and task accuracy.
Contribution/Results: We establish, for the first time, a statistically significant negative correlation between word-fragmentation severity and model accuracy (p < 0.01). This work provides a rigorous, empirically validated quantification of how tokenization defects degrade LLM performance, offering both theoretically grounded insights and practical tools for tokenizer optimization and improved model robustness.
📝 Abstract
Tokenization is the first step in training any Large Language Model (LLM): the text is split into a sequence of tokens according to the model's fixed vocabulary. This differs from traditional tokenization in NLP, where text is split into a sequence of "natural" words. In LLMs, a natural word may be broken into multiple tokens due to the model's limited vocabulary size (e.g., Mistral's tokenizer splits "martial" into "mart" and "ial"). In this paper, we hypothesize that such breaking of natural words negatively impacts LLM performance on various NLP tasks. To quantify this effect, we propose a set of penalty functions that compute a tokenization penalty for a given text and a specific LLM, indicating how "bad" the tokenization is. We establish the statistical significance of our hypothesis on multiple NLP tasks for a set of different LLMs.
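The idea of a tokenization penalty can be sketched concretely. The following is a minimal illustration, not the paper's actual penalty functions: it assumes a toy fixed vocabulary with greedy longest-match subword segmentation, and scores a text by the fraction of natural words that get split into two or more tokens.

```python
# Toy vocabulary; real LLM tokenizers have tens of thousands of entries.
VOCAB = {"mart", "ial", "arts", "are", "great"}

def segment(word, vocab=VOCAB):
    """Greedy longest-match subword segmentation (illustrative only)."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # fall back to a single character
            i += 1
    return tokens

def fragmentation_penalty(text):
    """Fraction of whitespace-separated words split into multiple tokens."""
    words = text.lower().split()
    if not words:
        return 0.0
    split = sum(1 for w in words if len(segment(w)) > 1)
    return split / len(words)

print(segment("martial"))                               # ['mart', 'ial']
print(fragmentation_penalty("martial arts are great"))  # 0.25
```

Here "martial" is fragmented into "mart" + "ial" (mirroring the Mistral example above), while the other three words are single tokens, giving a penalty of 1/4. A penalty of 0 would mean every natural word survives tokenization intact.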