Faster Language Models with Better Multi-Token Prediction Using Tensor Decomposition

📅 2024-10-23
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the trade-off between sampling efficiency and accuracy in Transformer-based multi-token prediction, this paper proposes a joint multi-token probability model based on rank-$r$ canonical tensor decomposition. The method represents the joint distribution over several future tokens as a structured, scalable tensor decomposition, mathematically equivalent to an implicit mixture-of-experts architecture, thereby preserving expressive capacity while improving training stability. Crucially, the model natively supports speculative decoding without requiring modifications to existing inference frameworks. Empirical evaluation on text and code generation tasks demonstrates up to 2.1× inference speedup, robust performance across model scales and training stages, and negligible overhead in both training and sampling. The core contribution is the formulation of multi-token prediction as a canonical tensor decomposition, unifying computational efficiency, predictive accuracy, and deployment compatibility.
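The rank-$r$ idea can be sketched concretely. A minimal toy example, assuming (as the abstract states) that the joint distribution over the next two tokens is written as a rank-$r$ canonical (CP) sum of per-expert categorical distributions; the vocabulary size, rank, and random logits below are illustrative, not the paper's actual parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)
V, r = 8, 3  # toy vocabulary size and decomposition rank (illustrative)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Mixture weights over r "experts" and, per expert, one categorical
# distribution for each of the two future token positions.
w = softmax(rng.normal(size=r))          # weights, shape (r,)
p1 = softmax(rng.normal(size=(r, V)))    # P_k(t1 | context), shape (r, V)
p2 = softmax(rng.normal(size=(r, V)))    # P_k(t2 | context), shape (r, V)

# Rank-r canonical form of the joint distribution:
# P(t1, t2) = sum_k w_k * P_k(t1) * P_k(t2)
joint = np.einsum("k,ki,kj->ij", w, p1, p2)  # shape (V, V)

assert np.isclose(joint.sum(), 1.0)  # a valid joint distribution
```

Setting r = 1 recovers an independent multi-head predictor, which is the connection to prior multi-head work that the abstract describes; larger r lets the model capture dependence between the predicted tokens.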

📝 Abstract
We propose a new model for multi-token prediction in transformers, aiming to enhance sampling efficiency without compromising accuracy. Motivated by recent work that predicts the probabilities of subsequent tokens using multiple heads, we connect this approach to rank-$1$ canonical tensor decomposition. By generalizing it to a rank-$r$ canonical probability decomposition, we develop an improved model that predicts multiple tokens simultaneously. This model can also be interpreted as a mixture of experts, allowing us to leverage successful techniques from that domain for efficient and robust training. Importantly, the overall overhead for training and sampling remains low. Our method demonstrates significant improvements in inference speed for both text and code generation tasks, proving particularly beneficial within the self-speculative decoding paradigm. It maintains its effectiveness across various model sizes and training epochs, highlighting its robustness and scalability.
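The "self-speculative decoding paradigm" mentioned above follows a generic draft-and-verify loop: the cheap multi-token head proposes several tokens, the base model verifies them, and the longest agreeing prefix is accepted. A minimal greedy sketch of that loop, with toy stand-in functions (`base_model_next` and `draft_tokens` are hypothetical placeholders, not the paper's implementation):

```python
def base_model_next(prefix, V=8):
    # Stand-in for one expensive forward pass of the base model:
    # a deterministic toy rule playing the role of greedy decoding.
    return (sum(prefix) * 7 + len(prefix)) % V

def draft_tokens(prefix, n=3, V=8):
    # Stand-in for the cheap multi-token head proposing n tokens at once.
    out, p = [], list(prefix)
    for _ in range(n):
        t = (sum(p) * 7 + len(p)) % V
        out.append(t)
        p.append(t)
    return out

def speculative_step(prefix, n=3):
    """Accept the draft's longest prefix that the base model reproduces,
    then append one token from the base model (so each step always
    advances by at least one token)."""
    p = list(prefix)
    for t in draft_tokens(prefix, n):
        if base_model_next(p) == t:
            p.append(t)
        else:
            break
    p.append(base_model_next(p))
    return p

seq = speculative_step([1, 2])
```

When the draft agrees with the base model, several tokens are accepted per expensive verification pass, which is the source of the reported inference speedup.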
Problem

Research questions and friction points this paper is trying to address.

Enhance multi-token prediction efficiency
Maintain accuracy in language models
Improve inference speed for text and code generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tensor decomposition for multi-token prediction
Rank-r canonical probability decomposition
Mixture of experts for efficient training