🤖 AI Summary
This work identifies tokenization as a critical bottleneck underlying comprehension biases in large language models (LLMs), especially pronounced in Chinese. To systematically evaluate tokenization robustness, the authors introduce ADT—the first adversarial benchmark dedicated to tokenization vulnerability—comprising two subsets: human-annotated (ADT-Human) and automatically generated (ADT-Auto). Methodologically, they propose a general adversarial sample generation framework integrating multi-vocabulary confusion, character-level and semantic-level perturbations, and rule-based heuristic coordination, compatible with any open-source LLM. Experiments demonstrate that ADT substantially degrades accuracy across state-of-the-art models—including GPT-4o, Llama-3, and DeepSeek-R1—and exhibits strong cross-architecture transferability. This study is the first to empirically establish tokenization fragility as a pervasive weakness across LLMs, providing a quantitative, reproducible evaluation benchmark to guide robust tokenizer design and optimization.
📝 Abstract
Large Language Models (LLMs) have shown remarkable capabilities in language understanding and generation. Nonetheless, LLMs have also been observed to produce inaccurate responses to specific queries. This deficiency can be traced to the tokenization step that every LLM must undergo, an inevitable limitation inherent to all LLMs. In fact, incorrect tokenization is the critical point that hinders LLMs from understanding the input precisely, thus leading to unsatisfactory output. This defect is more pronounced in Chinese scenarios. To demonstrate this flaw of LLMs, we construct an adversarial dataset, named **ADT (Adversarial Dataset for Tokenizer)**, which draws upon the vocabularies of various open-source LLMs to challenge their tokenization. ADT consists of two subsets: the manually constructed ADT-Human and the automatically generated ADT-Auto. Our empirical results reveal that ADT is highly effective at challenging the tokenization of leading LLMs, including GPT-4o, Llama-3, DeepSeek-R1, and others, thereby degrading these LLMs' capabilities. Moreover, our automatic data generation method has proven efficient and robust, and can be applied to any open-source LLM. In this paper, we systematically investigate LLMs' vulnerability to attacks on their token segmentation, which will shed light on subsequent research on improving LLMs' capabilities by optimizing their tokenization processes and algorithms.
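The kind of tokenization failure the abstract describes can be illustrated with a toy sketch (this is not the paper's actual generation framework): a greedy longest-match tokenizer over a small hypothetical vocabulary mis-segments a Chinese phrase because a long token in the vocabulary crosses a word boundary.

```python
def greedy_tokenize(text, vocab):
    """Segment text by always taking the longest vocabulary match,
    falling back to single characters when no entry matches."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest span first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character becomes its own token
            i += 1
    return tokens

# Hypothetical vocabulary containing the long token "研究生" ("graduate student").
vocab = {"研究", "研究生", "生命", "命", "的", "起源"}

# "研究生命的起源" means "study the origin of life", but greedy matching
# grabs "研究生" first, splitting the word "生命" ("life") apart:
print(greedy_tokenize("研究生命的起源", vocab))
# ['研究生', '命', '的', '起源']
```

Real BPE/Unigram tokenizers are more sophisticated than this greedy matcher, but the same cross-boundary ambiguity is what adversarial inputs like those in ADT exploit.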