🤖 AI Summary
This work addresses the limitations of conventional tokenizers, such as Byte Pair Encoding (BPE), which often misalign with linguistic structure, amplify biases, and consume model capacity inefficiently in multilingual and multidomain settings, and which lack systematic design and evaluation protocols. Treating tokenization as a core modeling decision for large language models rather than a preprocessing step, the paper proposes a context-aware framework that co-designs the tokenizer with the model architecture, integrating linguistic knowledge, domain-specific characteristics, and deployment constraints. By advocating joint optimization of tokenizer and model, together with standardized evaluation benchmarks and transparent reporting practices, it outlines both theoretical foundations and practical pathways toward fairer, more efficient, and more adaptable language technologies across diverse languages and domains.
📝 Abstract
Tokenization underlies every large language model, yet it remains an under-theorized and inconsistently designed component. Common subword approaches such as Byte Pair Encoding (BPE) offer scalability but often misalign with linguistic structure, amplify bias, and waste capacity across languages and domains. This paper reframes tokenization as a core modeling decision rather than a preprocessing step. We argue for a context-aware framework that integrates tokenizer and model co-design, guided by linguistic, domain, and deployment considerations. Standardized evaluation and transparent reporting are essential to make tokenization choices accountable and comparable. Treating tokenization as a core design problem, not a technical afterthought, can yield language technologies that are fairer, more efficient, and more adaptable.
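To make the abstract's critique concrete, the sketch below shows the core BPE merge loop: it greedily merges the most frequent adjacent symbol pair, with no regard for morpheme boundaries, which is the source of the linguistic misalignment the paper discusses. This is a toy illustration over a hypothetical word-frequency table, not the implementation used by any production tokenizer.

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merges from a toy word-frequency table.

    `words` maps pre-tokenized words to corpus counts; each word is
    treated as a sequence of characters that merges gradually fuse.
    """
    vocab = {tuple(w): c for w, c in words.items()}
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for symbols, count in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += count
        if not pairs:
            break
        # Greedy step: pick the single most frequent pair to merge.
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Rewrite the vocabulary with the chosen pair fused into one symbol.
        new_vocab = {}
        for symbols, count in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] = count
        vocab = new_vocab
    return merges, vocab

# Illustrative corpus counts (hypothetical).
toy_counts = {"low": 5, "lower": 2, "newest": 6, "widest": 3}
merges, vocab = learn_bpe(toy_counts, num_merges=3)
```

Note that the learned merges are driven purely by pair frequency in this particular corpus: the fragment "est" emerges because it happens to be frequent, not because it is a meaningful suffix, and a different corpus distribution would yield different, equally structure-blind segmentations.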