UTF-8 Plumbing: Byte-level Tokenizers Unavoidably Enable LLMs to Generate Ill-formed UTF-8

📅 2025-11-05
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Byte-level subword tokenizers may generate invalid UTF-8 byte sequences, compromising the validity of large language model outputs, system stability, and security. Method: We formally model tokenization as a monoid operation—its first such theoretical treatment—and rigorously prove that if the vocabulary contains invalid UTF-8 substrings, any decoding (especially incremental decoding) can produce invalid byte sequences, and inconsistency between incremental and full-sequence decoding is inevitable. Through structural analysis of UTF-8 encoding, tokenizer behavior simulation, and empirical evaluation across mainstream models (e.g., Llama, Qwen) and inference engines (e.g., vLLM, Ollama), we identify this flaw in multiple production systems. Contribution/Results: We propose two mitigation strategies—vocabulary sanitization and runtime UTF-8 validation—achieving 100% detection of invalid sequences without performance degradation. Our open-source verification tool enables systematic auditing and hardening of tokenizer deployments.
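The two mitigation strategies can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the paper's released verification tool; the `vocab` dictionary below is a hypothetical stand-in for a real tokenizer vocabulary.

```python
# Sketch of the two mitigations: vocabulary sanitization and
# runtime UTF-8 validation. `vocab` is a hypothetical example.

def sanitize_vocabulary(vocab: dict[int, bytes]) -> set[int]:
    """Vocabulary sanitization: flag token IDs whose byte content is
    not well-formed UTF-8 on its own. (Such tokens may still be valid
    as fragments of a longer sequence, so flagging is conservative.)"""
    bad = set()
    for tid, raw in vocab.items():
        try:
            raw.decode("utf-8")
        except UnicodeDecodeError:
            bad.add(tid)
    return bad

def validate_output(token_bytes: list[bytes]) -> bool:
    """Runtime validation: check the fully concatenated output once,
    instead of trusting per-token decoding."""
    try:
        b"".join(token_bytes).decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# Tokens 1 and 2 split the two UTF-8 bytes of "é" (0xC3 0xA9).
vocab = {0: "é".encode("utf-8"), 1: b"\xc3", 2: b"\xa9"}
print(sanitize_vocabulary(vocab))             # → {1, 2}
print(validate_output([vocab[1], vocab[2]]))  # → True (bytes rejoin to "é")
print(validate_output([vocab[1]]))            # → False (dangling lead byte)
```

Note the asymmetry: sanitization rejects individual tokens, while runtime validation accepts any token sequence whose concatenation happens to be well-formed.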

📝 Abstract
Subword tokenization segments input text according to a pre-defined vocabulary to feed it into a language model; the language model, in turn, generates a sequence drawn from this same vocabulary. Vocabulary members can be built from code points or from bytes. Using code points means that all members of the vocabulary are valid UTF-8 characters, but it also requires thousands of initial members to achieve acceptable coverage of inputs. Starting from bytes, by contrast, avoids out-of-vocabulary errors with only 256 initial members, but neither individual members nor sequences of them are guaranteed to be valid UTF-8. Sequences that are not valid UTF-8 break code that assumes its input is valid UTF-8, and applications of language models must account for the breakage thereby introduced. In this paper, we formalize tokenization using monoid theory and prove that tokenizers whose vocabularies contain tokens that are ill-formed UTF-8 can always produce sequences that are ill-formed UTF-8. We demonstrate formally that incrementally converting tokens back to a string and interpreting the results as UTF-8 gives different results than converting the whole sequence of tokens at once. This formal result predicts real-world bugs: we evaluate mitigations for the problem identified and provide case studies of major foundation models, serving engines, and constrained generation systems.
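The incremental-versus-whole discrepancy described in the abstract can be reproduced directly; the two single-byte tokens below are a hypothetical split of the character "é" (UTF-8 bytes 0xC3 0xA9) across a token boundary.

```python
# "é" is two bytes in UTF-8: 0xC3 0xA9. Suppose a byte-level
# tokenizer emits them as two separate tokens.
tokens = [b"\xc3", b"\xa9"]

# Whole-sequence conversion: concatenate all bytes, then decode once.
whole = b"".join(tokens).decode("utf-8")

# Incremental conversion: decode each token as it arrives, replacing
# undecodable bytes, as a naive streaming client might.
incremental = "".join(t.decode("utf-8", errors="replace") for t in tokens)

print(whole)        # → é
print(incremental)  # → �� (two U+FFFD replacement characters)
assert whole != incremental
```

Each token is ill-formed in isolation, so per-token decoding emits replacement characters even though the full byte sequence is perfectly valid.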
Problem

Research questions and friction points this paper is trying to address.

Byte-level tokenizers enable LLMs to generate invalid UTF-8 sequences
Ill-formed UTF-8 output breaks applications expecting valid text encoding
Incremental token decoding produces different results than full-sequence conversion
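The third friction point above can be avoided on the consumer side with a stateful decoder that buffers incomplete multi-byte sequences across token boundaries. A sketch using Python's standard library, again with a hypothetical two-token split of "é":

```python
import codecs

# A stateful UTF-8 decoder buffers incomplete multi-byte sequences
# instead of emitting replacement characters at token boundaries.
dec = codecs.getincrementaldecoder("utf-8")()

tokens = [b"\xc3", b"\xa9", b"!"]  # "é" split across two tokens, then "!"
out = []
for t in tokens:
    out.append(dec.decode(t))            # yields "" while bytes are incomplete
out.append(dec.decode(b"", final=True))  # flush; raises if bytes dangle

print("".join(out))  # → é!
```

The `final=True` flush doubles as runtime validation: a sequence that ends mid-character raises `UnicodeDecodeError` instead of silently producing ill-formed output.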
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses byte-level tokenization with 256 initial tokens
Formalizes tokenization using monoid theory
Identifies UTF-8 validity issues in token sequences
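The monoid view can be made concrete: mapping token sequences to byte strings respects concatenation (a monoid homomorphism), but composing that map with UTF-8 interpretation does not. A sketch, using a hypothetical two-token split of "é":

```python
# Detokenization to *bytes* is a monoid homomorphism: it maps
# concatenation of token sequences to concatenation of byte strings.
def to_bytes(tokens: list[bytes]) -> bytes:
    return b"".join(tokens)

s, t = [b"\xc3"], [b"\xa9"]  # hypothetical tokens splitting "é"
assert to_bytes(s + t) == to_bytes(s) + to_bytes(t)  # always holds

# Composing with UTF-8 interpretation breaks the homomorphism:
# decoding the parts separately differs from decoding the whole.
def to_text(tokens: list[bytes]) -> str:
    return to_bytes(tokens).decode("utf-8", errors="replace")

assert to_text(s + t) != to_text(s) + to_text(t)
print(to_text(s + t))           # → é
print(to_text(s) + to_text(t))  # → ��
```

This is exactly why incremental and full-sequence decoding can disagree: concatenation commutes with byte-level detokenization but not with UTF-8 interpretation.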