🤖 AI Summary
To address the trade-off between sampling efficiency and accuracy in Transformer-based multi-token prediction, this paper proposes a joint multi-token probability modeling framework based on rank-$r$ canonical tensor decomposition. The method models multi-step token distributions as structured, scalable tensor decompositions, mathematically equivalent to an implicit mixture-of-experts architecture, thereby preserving expressive capacity while significantly improving training stability. Crucially, the model natively supports speculative decoding without requiring modifications to existing inference frameworks. Empirical evaluation on text and code generation tasks demonstrates up to 2.1× inference speedup, robust performance across diverse model scales and training stages, and negligible overhead in both training and sampling. The core contribution is the first formulation of multi-token prediction as canonical tensor decomposition, unifying computational efficiency, predictive accuracy, and deployment compatibility.
📝 Abstract
We propose a new model for multi-token prediction in transformers, aiming to enhance sampling efficiency without compromising accuracy. Motivated by recent work that predicts the probabilities of subsequent tokens using multiple heads, we connect this approach to rank-$1$ canonical tensor decomposition. By generalizing it to a rank-$r$ canonical probability decomposition, we develop an improved model that predicts multiple tokens simultaneously. This model can also be interpreted as a mixture of experts, allowing us to leverage successful techniques from that domain for efficient and robust training. Importantly, the overall overhead for training and sampling remains low. Our method demonstrates significant improvements in inference speed for both text and code generation tasks, proving particularly beneficial within the self-speculative decoding paradigm. It maintains its effectiveness across various model sizes and training epochs, highlighting its robustness and scalability.
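To make the connection concrete, here is a minimal numerical sketch of the idea described above: a rank-$1$ canonical decomposition factorizes the joint distribution over $k$ future tokens into a product of per-position marginals (one prediction head per token, as in prior multi-head work), while a rank-$r$ decomposition sums over $r$ such products, which is exactly a mixture of $r$ experts. All sizes, names, and the random parameterization below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, k, r = 50, 4, 3  # toy sizes: vocabulary, future tokens, decomposition rank

# Mixture weights over the r components (in the real model, predicted
# from the hidden state; here just a random normalized vector).
w = rng.random(r)
w /= w.sum()

# Per-component, per-position categorical distributions over the vocabulary:
# heads[m, j] is expert m's distribution for the j-th future token.
heads = rng.random((r, k, vocab))
heads /= heads.sum(axis=-1, keepdims=True)

def joint_prob(tokens):
    """Rank-r canonical form: p(t_1..t_k) = sum_m w_m * prod_j p_j^(m)(t_j)."""
    tokens = np.asarray(tokens)
    per_component = np.prod(heads[:, np.arange(k), tokens], axis=-1)  # shape (r,)
    return float(w @ per_component)

# Sanity check: summing p over all token tuples factorizes per component,
# and each normalized marginal sums to 1, so the mixture sums to 1.
total = sum(
    w[m] * np.prod([heads[m, j].sum() for j in range(k)]) for m in range(r)
)
print(round(total, 6))  # 1.0
```

The rank-$1$ special case ($r = 1$) recovers independent per-head prediction; increasing $r$ adds cross-position dependence at the cost of only $r$ extra weights and head copies, which is why the overhead for training and sampling stays low.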