Boundless Byte Pair Encoding: Breaking the Pre-tokenization Barrier

πŸ“… 2025-03-31
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Pre-tokenization (e.g., splitting on whitespace or punctuation) simplifies tokenization but severely skews the word-frequency distribution in algorithms like Byte-Pair Encoding (BPE), causing high-frequency whole words to dominate the vocabulary and yielding diminishing returns from vocabulary expansion. This work proposes BoundlessBPE, the first BPE variant to remove the hard constraint of pre-tokenization boundaries, enabling cross-boundary merges that form statistically significant supra-word units (e.g., β€œof the”). It introduces a dynamic merge heuristic weighted by both frequency and subtoken length. Crucially, BoundlessBPE is semantics-agnostic and relies solely on data-driven co-occurrence statistics, yielding a more balanced vocabulary distribution. Experiments demonstrate significantly smoother token-frequency distributions, an approximate 20% increase in bytes per token (i.e., more effective text compression), and markedly better scaling behavior with large vocabularies.
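The frequency-and-length-weighted merge heuristic mentioned above might be sketched as follows. The exact weighting used in the paper is not given here, so this scoring function (frequency times combined subtoken length) is an illustrative assumption:

```python
from collections import Counter

def merge_score(pair, pair_counts):
    """Score a candidate merge by frequency weighted by the combined
    length of the two subtokens (assumed form of the heuristic)."""
    left, right = pair
    return pair_counts[pair] * (len(left) + len(right))

# Toy counts: a frequent short subword pair vs. a cross-boundary pair.
pair_counts = Counter({
    ("t", "h"): 900,      # frequent, but merging saves few bytes
    ("of", " the"): 500,  # supra-word pair spanning a pretoken boundary
})

best = max(pair_counts, key=lambda p: merge_score(p, pair_counts))
# Length weighting favors the longer "of the" unit despite its lower count.
```

Under pure frequency counting the short pair would win; the length term is what lets longer, compression-friendly units rise to the top.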

πŸ“ Abstract
Pre-tokenization, the initial step in many modern tokenization pipelines, segments text into smaller units called pretokens, typically splitting on whitespace and punctuation. While this process encourages having full, individual words as tokens, it introduces a fundamental limitation in most tokenization algorithms such as Byte Pair Encoding (BPE). Specifically, pre-tokenization causes the distribution of tokens in a corpus to heavily skew towards common, full-length words. This skewed distribution limits the benefits of expanding to larger vocabularies, since the additional tokens appear with progressively lower counts. To overcome this barrier, we propose BoundlessBPE, a modified BPE algorithm that relaxes the pretoken boundary constraint. Our approach selectively merges two complete pretokens into a larger unit we term a superword. Superwords are not necessarily semantically cohesive. For example, the pretokens "of" and "the" might be combined to form the superword "of the". This merging strategy results in a substantially more uniform distribution of tokens across a corpus than standard BPE, and compresses text more effectively, with an approximate 20% increase in bytes per token.
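As a concrete illustration of the boundary constraint (a minimal sketch, not the paper's implementation): standard BPE only counts merge candidates inside each pretoken, while a boundless variant can additionally count pairs of whole adjacent pretokens:

```python
import re
from collections import Counter

text = "of the cat of the dog of the bird"

# Pre-tokenize on whitespace, keeping the trailing space with each word.
pretokens = re.findall(r"\S+\s*", text)

def within_pretoken_pairs(pretokens):
    """Standard BPE candidates: character pairs inside each pretoken."""
    pairs = Counter()
    for pt in pretokens:
        pairs.update(zip(pt, pt[1:]))
    return pairs

def superword_pairs(pretokens):
    """Boundless candidates: pairs of whole adjacent pretokens."""
    return Counter(zip(pretokens, pretokens[1:]))

# The pair ("of ", "the ") occurs three times but is invisible to
# standard BPE because it spans a pretoken boundary.
print(superword_pairs(pretokens).most_common(1))  # [(('of ', 'the '), 3)]
```

The most frequent cross-boundary pair here is exactly the kind of superword the abstract describes.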
Problem

Research questions and friction points this paper is trying to address.

Overcoming pre-tokenization skew in token distribution
Enhancing vocabulary expansion benefits in BPE
Improving text compression via boundary-free merging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Eliminates pre-tokenization boundary constraints
Introduces superword merging strategy
Improves token distribution uniformity
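The contributions above (boundary-free superword merges plus a frequency-and-length merge score) can be combined into a toy training loop. This is a sketch under stated assumptions, not the paper's algorithm: here two adjacent pretokens become a superword candidate only once each has collapsed to a single token, and candidates are scored by frequency times merged length.

```python
from collections import Counter

def count_pairs(seqs):
    """Within-pretoken pairs plus superword candidates: two adjacent
    pretokens that have each collapsed to a single token."""
    pairs = Counter()
    for seq in seqs:
        pairs.update(zip(seq, seq[1:]))
    for left, right in zip(seqs, seqs[1:]):
        if len(left) == 1 and len(right) == 1:
            pairs[(left[0], right[0])] += 1
    return pairs

def apply_merge(seqs, pair):
    """Apply one merge, both inside pretokens and across boundaries."""
    merged = pair[0] + pair[1]
    out, i = [], 0
    while i < len(seqs):
        # Superword merge: collapse two adjacent single-token pretokens.
        if (i + 1 < len(seqs)
                and seqs[i] == [pair[0]] and seqs[i + 1] == [pair[1]]):
            out.append([merged])
            i += 2
            continue
        seq, new, j = seqs[i], [], 0
        while j < len(seq):
            if j + 1 < len(seq) and (seq[j], seq[j + 1]) == pair:
                new.append(merged)
                j += 2
            else:
                new.append(seq[j])
                j += 1
        out.append(new)
        i += 1
    return out

def train(pretokens, num_merges):
    seqs = [list(pt) for pt in pretokens]  # start from characters
    merges = []
    for _ in range(num_merges):
        pairs = count_pairs(seqs)
        if not pairs:
            break
        # Assumed heuristic: frequency weighted by merged length.
        best = max(pairs, key=lambda p: pairs[p] * len(p[0] + p[1]))
        merges.append(best)
        seqs = apply_merge(seqs, best)
    return merges, seqs

merges, seqs = train(["of", "the"] * 3, num_merges=4)
# The final merge crosses the pretoken boundary to form "ofthe".
```

On this toy corpus the loop first learns the ordinary subword merges inside "of" and "the", then, once both pretokens are single tokens, the cross-boundary superword merge becomes the highest-scoring candidate.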