Fast and Expressive Multi-Token Prediction with Probabilistic Circuits

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multi-token prediction (MTP) methods impose strict independence assumptions among future tokens, limiting expressive capacity and degrading generation quality. To address this, the authors propose MTPC, a unified framework grounded in probabilistic circuits (PCs) that flexibly models the joint distribution of future tokens, relaxing the independence constraint. MTPC generalises hierarchical mixture models, hidden Markov models, and tensor network architectures, and combines speculative decoding with partial layer sharing to trade off inference latency against modelling expressiveness without compromising generation fidelity. Experiments on the byte-level LLM EvaByte show that MTPC achieves up to 2.1× faster decoding while matching the accuracy and diversity of the original verifier model, bridging the gap between computational efficiency and probabilistic expressiveness in sequence generation.
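To make the independence issue concrete, here is a minimal toy sketch (not the paper's implementation; vocabulary size, number of future tokens, and component count are all made up) contrasting an independent MTP head, whose joint over two future tokens is a rank-1 product of per-position categoricals, with a mixture-of-products circuit, where a latent component couples the positions:

```python
import numpy as np

rng = np.random.default_rng(0)
V, C = 4, 3  # hypothetical: vocab size 4, 2 future tokens, 3 mixture components

# Independent MTP head: one categorical per future position,
# so the joint p(x1, x2) = p1(x1) * p2(x2) is a rank-1 outer product.
p_indep = rng.dirichlet(np.ones(V), size=2)            # shape (2, V)
joint_indep = np.einsum("a,b->ab", p_indep[0], p_indep[1])

# Mixture-of-products circuit: a latent component c couples the positions,
# p(x1, x2) = sum_c w_c * p1(x1|c) * p2(x2|c), which need not factorise.
w = rng.dirichlet(np.ones(C))                          # mixture weights
p_cond = rng.dirichlet(np.ones(V), size=(C, 2))        # shape (C, 2, V)
joint_mix = np.einsum("c,ca,cb->ab", w, p_cond[:, 0], p_cond[:, 1])

# Both are valid joint distributions (they sum to 1), but the rank of the
# V x V joint matrix reveals the coupling: exactly 1 for the independent
# product, up to C for the mixture.
print(joint_indep.sum(), joint_mix.sum())
print(np.linalg.matrix_rank(joint_indep), np.linalg.matrix_rank(joint_mix))
```

The rank gap is the whole point: with C components the mixture can express inter-token correlations that no product of per-position marginals can represent.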

📝 Abstract
Multi-token prediction (MTP) is a prominent strategy to significantly speed up generation in large language models (LLMs), including byte-level LLMs, which are tokeniser-free but prohibitively slow. However, existing MTP methods often sacrifice expressiveness by assuming independence between future tokens. In this work, we investigate the trade-off between expressiveness and latency in MTP within the framework of probabilistic circuits (PCs). Our framework, named MTPC, allows one to explore different ways to encode the joint distributions over future tokens by selecting different circuit architectures, generalising classical models such as (hierarchical) mixture models, hidden Markov models and tensor networks. We show the efficacy of MTPC by retrofitting existing byte-level LLMs, such as EvaByte. Our experiments show that, when combined with speculative decoding, MTPC significantly speeds up generation compared to MTP with independence assumptions, while guaranteeing to retain the performance of the original verifier LLM. We also rigorously study the optimal trade-off between expressiveness and latency when exploring the possible parameterisations of MTPC, such as PC architectures and partial layer sharing between the verifier and draft LLMs.
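The abstract mentions hidden Markov models as one of the circuit architectures MTPC generalises. As a hedged illustration (a toy sketch with invented sizes, not the paper's parameterisation), an HMM head assigns a joint probability to a block of K future tokens via the forward recursion, and that joint normalises over all V**K possible futures:

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)
V, K, H = 4, 3, 2  # hypothetical: vocab 4, 3 future tokens, 2 hidden states

pi = rng.dirichlet(np.ones(H))            # initial hidden-state distribution
A = rng.dirichlet(np.ones(H), size=H)     # transitions, each row sums to 1
B = rng.dirichlet(np.ones(V), size=H)     # emissions, each row sums to 1

def hmm_joint_logprob(tokens):
    """Forward algorithm: log p(x1..xK) under the HMM head."""
    alpha = pi * B[:, tokens[0]]
    for t in tokens[1:]:
        alpha = (alpha @ A) * B[:, t]
    return np.log(alpha.sum())

# Summing the joint over every possible K-token future gives 1, i.e. the
# HMM defines a valid (tractable) distribution over the whole block.
total = sum(np.exp(hmm_joint_logprob(list(seq)))
            for seq in itertools.product(range(V), repeat=K))
print(total)
```

Each joint query costs only O(K * H^2), which is what makes an HMM-style circuit a practical drop-in for scoring multi-token drafts.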
Problem

Research questions and friction points this paper is trying to address.

Addresses speed-expressiveness trade-off in multi-token prediction for LLMs
Overcomes independence assumptions limiting existing multi-token prediction methods
Enables flexible joint distribution modeling through probabilistic circuit architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-token prediction using probabilistic circuits
Encoding joint distributions over future tokens
Combining speculative decoding with circuit architectures
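For the speculative-decoding side, the standard acceptance rule (shown here as a generic single-token sketch with a made-up toy vocabulary, not the paper's multi-token variant) accepts a draft token x ~ q with probability min(1, p(x)/q(x)) and otherwise resamples from the renormalised residual, which guarantees the emitted token is distributed exactly according to the verifier's p:

```python
import numpy as np

rng = np.random.default_rng(2)

def speculative_step(p, q, x):
    """Emit one token given draft distribution q, verifier distribution p,
    and a draft sample x ~ q. Accept x with prob min(1, p[x]/q[x]);
    on rejection, resample from max(p - q, 0) renormalised."""
    if rng.random() < min(1.0, p[x] / q[x]):
        return x
    residual = np.maximum(p - q, 0.0)
    residual /= residual.sum()
    return rng.choice(len(p), p=residual)

# Empirical check on a toy 3-token vocabulary: the output frequencies
# should match the verifier's p, not the draft's q.
p = np.array([0.6, 0.3, 0.1])   # verifier distribution
q = np.array([0.3, 0.5, 0.2])   # draft distribution
draws = [speculative_step(p, q, rng.choice(3, p=q)) for _ in range(20000)]
freq = np.bincount(draws, minlength=3) / len(draws)
print(freq)
```

This exactness guarantee is what the abstract refers to when it says MTPC "retains the performance of the original verifier LLM": a more expressive draft head raises the acceptance rate (and thus speed) without changing the output distribution.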