Multiscale Byte Language Models -- A Hierarchical Architecture for Causal Million-Length Sequence Modeling

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address challenges in training multimodal foundation models—including difficulty modeling extremely long raw byte sequences, reliance on tokenizers, and GPU memory constraints—this paper proposes the Multiscale Byte Language Model (MBLM). MBLM introduces a model-agnostic hierarchical causal decoder architecture that performs end-to-end autoregressive modeling directly over million-length raw byte sequences, eliminating the need for encoders or explicit tokenization. It achieves, for the first time, full-precision training on 5M-byte contexts using a single GPU, with near-linear generation efficiency. MBLM unifies multimodal data representation by treating image and text inputs as byte streams under a shared framework. On visual question answering, it matches the performance of specialized CNN-LSTM models. The architecture is compatible with both Transformer and Mamba backbones, and the implementation is publicly released.
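The hierarchical idea described above can be illustrated with a toy sketch (hypothetical helper names, not the authors' implementation): the byte stream is split into fixed-size patches so that a global stage only has to attend over one coarse position per patch, while a local stage models the bytes inside each patch.

```python
# Toy illustration of multiscale patching (hypothetical sketch, not the MBLM code).
# A long byte stream is split into fixed-size patches; an outer "global" stage
# would operate over one coarse summary per patch, while an inner "local" stage
# models the bytes within each patch. This keeps every attention window small
# even for million-byte inputs.

def patchify(data: bytes, patch_size: int, pad_byte: int = 0) -> list[bytes]:
    """Split a byte stream into fixed-size patches, padding the last one."""
    patches = [data[i:i + patch_size] for i in range(0, len(data), patch_size)]
    if patches and len(patches[-1]) < patch_size:
        patches[-1] += bytes([pad_byte]) * (patch_size - len(patches[-1]))
    return patches

def patch_summaries(patches: list[bytes]) -> list[float]:
    """Stand-in for the global stage: one coarse feature per patch."""
    return [sum(p) / len(p) for p in patches]

stream = b"hello multiscale byte language models"
patches = patchify(stream, patch_size=8)
coarse = patch_summaries(patches)
# The global stage now sees len(patches) positions instead of len(stream) bytes.
```

With two (or more) such stages stacked, the effective context grows multiplicatively with the patch sizes while each individual block stays short, which is consistent with the single-GPU 5M-byte training claim.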

Application Category

📝 Abstract
Bytes form the basis of the digital world and thus are a promising building block for multimodal foundation models. Recently, Byte Language Models (BLMs) have emerged to overcome tokenization, yet the excessive length of bytestreams requires new architectural paradigms. Therefore, we present the Multiscale Byte Language Model (MBLM), a model-agnostic hierarchical decoder stack that allows training with context windows of $5$M bytes on a single GPU in full model precision. We thoroughly examine MBLM's performance with Transformer and Mamba blocks on both unimodal and multimodal tasks. Our experiments demonstrate that hybrid architectures are efficient in handling extremely long byte sequences during training while achieving near-linear generational efficiency. To the best of our knowledge, we present the first evaluation of BLMs on visual Q&A tasks and find that, despite serializing images and the absence of an encoder, an MBLM with pure next-token prediction can match custom CNN-LSTM architectures with designated classification heads. We show that MBLMs exhibit strong adaptability in integrating diverse data representations, including pixel and image filestream bytes, underlining their potential toward omnimodal foundation models. Source code is publicly available at: https://github.com/ai4sd/multiscale-byte-lm
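The abstract's claim that images can be serialized alongside text rests on the observation that any modality reduces to bytes. A minimal sketch of such byte-level unification (a hypothetical serialization, not necessarily the paper's exact scheme) might look like:

```python
# Minimal sketch of treating text and image pixels as one byte stream
# (hypothetical serialization; the paper's exact scheme may differ).

def serialize_text(text: str) -> bytes:
    """Text becomes bytes directly via UTF-8 encoding."""
    return text.encode("utf-8")

def serialize_pixels(pixels: list[list[int]]) -> bytes:
    """Flatten a grayscale pixel grid (values 0-255) row by row into bytes."""
    return bytes(v for row in pixels for v in row)

# A tiny 2x3 grayscale "image" and a question, concatenated into a single
# stream that a byte-level decoder can model with plain next-token prediction,
# without an image encoder or a tokenizer.
image = [[0, 128, 255],
         [64, 32, 16]]
stream = serialize_pixels(image) + serialize_text("What is in the image?")
```

The same interface would also accept an image filestream (e.g. the raw bytes of a JPEG file) in place of the pixel grid, which matches the abstract's mention of both pixel and filestream byte representations.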
Problem

Research questions and friction points this paper is trying to address.

Modeling extremely long raw byte sequences is architecturally challenging for existing language models.
Tokenizers introduce a preprocessing dependency that byte-level models aim to eliminate.
GPU memory constraints limit the context length attainable in full-precision training.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-agnostic hierarchical decoder stack compatible with Transformer and Mamba blocks
Handles million-length byte sequences end-to-end, without encoders or tokenization
Potential as an omnimodal foundation model via unified byte-stream inputs