🤖 AI Summary
This work proposes a two-stage distillation framework to enable efficient cross-granularity knowledge transfer from token-level to byte-level language models, addressing the high computational cost and incompatibility of training byte language models (BLMs) from scratch. The approach first employs progressive knowledge distillation to transfer knowledge from a pretrained token-level teacher model to a byte-level student model, followed by byte-level supervised fine-tuning for further optimization. The method is architecture-agnostic, demonstrating compatibility with diverse models such as Llama, Qwen, and OLMo. Using only approximately 125 billion bytes of data, the distilled BLMs closely match the multitask performance of their token-level teachers while substantially reducing training costs.
📝 Abstract
Byte Language Models (BLMs) have emerged as a promising direction for scaling language models beyond tokenization. However, existing BLMs typically require training from scratch on trillions of bytes, making them prohibitively expensive. In this paper, we propose an efficient distillation recipe that converts existing token-trained LLMs into BLMs while retaining comparable capabilities. Our recipe follows a two-stage curriculum: (1) Progressive Knowledge Distillation, which aligns byte-level representations with the embeddings of the token-trained teacher model; and (2) Byte-Level Supervised Fine-Tuning, which enables end-to-end generation entirely in the byte space. We validate our approach across multiple model families, including Llama, Qwen, and OLMo, and demonstrate that the distilled BLMs retain most of the teacher models' performance using only approximately 125B bytes.
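To make the stage-1 idea concrete, here is a minimal sketch of one plausible alignment objective. All names, shapes, and the mean-pooling choice are illustrative assumptions, not the paper's actual implementation: byte-level student hidden states are pooled over each token's byte span and regressed onto the frozen teacher's token embeddings.

```python
import torch
import torch.nn as nn

def alignment_loss(byte_states, token_embeds, byte_spans):
    """Hypothetical stage-1 distillation loss (a sketch, not the paper's code).

    byte_states:  (num_bytes, d)  student hidden states, one per byte
    token_embeds: (num_tokens, d) frozen teacher embeddings, one per token
    byte_spans:   list of (start, end) byte ranges, one span per token
    """
    # Mean-pool the student's byte states over each token's byte span,
    # giving one vector per teacher token.
    pooled = torch.stack(
        [byte_states[start:end].mean(dim=0) for start, end in byte_spans]
    )  # (num_tokens, d)
    # Regress pooled byte representations onto the teacher embeddings.
    return nn.functional.mse_loss(pooled, token_embeds)
```

In practice the pooling operator, the choice of teacher layer to match, and the distance function are all design decisions the abstract does not specify; MSE over mean-pooled spans is simply the simplest instantiation of "aligning byte-level representations with the teacher's embeddings."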