🤖 AI Summary
To address the inference inefficiency of autoregressive decoding in large language models, in which every generated token must traverse the early and middle transformer layers anew, this paper proposes Direct Multi-Token Decoding (DMTD). Building on the decoder-only Transformer architecture and the hypothesis that early, middle, and late layers serve distinct functions, DMTD reuses the hidden states computed by the early and middle layers during a forward pass, allowing several subsequent tokens to be predicted through the late layers alone, with no extra parameters, auxiliary models, or verification mechanisms. The authors describe this as the first purely feedforward, parameter-free paradigm for direct multi-token generation. After fine-tuning Qwen3-4B, DMTD achieves up to a 2× inference speedup with only minor performance loss, and empirical results show its effectiveness improves consistently as the training data volume grows.
📝 Abstract
Decoder-only transformers have become the standard architecture for large language models (LLMs) due to their strong performance. Recent studies suggest that, in pre-trained LLMs, early, middle, and late layers may serve distinct roles: Early layers focus on understanding the input context, middle layers handle task-specific processing, and late layers convert abstract representations into output tokens. We hypothesize that once representations have been processed by the early and middle layers, the resulting hidden states may encapsulate sufficient information to support the generation of multiple tokens using only the late layers, eliminating the need to repeatedly traverse the early and middle layers. We refer to this inference paradigm as Direct Multi-Token Decoding (DMTD). Unlike speculative decoding, our method introduces no additional parameters, auxiliary routines, or post-generation verification. Despite being trained on a limited dataset, a fine-tuned DMTD Qwen3-4B model has already demonstrated promising results, achieving up to a 2x speedup with only minor performance loss. Moreover, as shown in our scaling analysis, its performance is expected to further improve with larger training datasets.
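The core idea, one pass through the early and middle layers followed by several cheap late-layer passes, can be illustrated with a toy sketch. Everything below is hypothetical: random linear maps stand in for transformer blocks, the layer split (`early_mid` vs. `late`), the per-token hidden-state update, and the constant `K` (tokens decoded per early/middle pass) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D, V, K = 8, 16, 3  # hidden size, vocab size, tokens decoded per early/middle pass

# Toy "layers": random linear maps standing in for transformer blocks.
early_mid = [rng.standard_normal((D, D)) * 0.1 for _ in range(4)]  # early + middle
late      = [rng.standard_normal((D, D)) * 0.1 for _ in range(2)]  # late layers
embed     = rng.standard_normal((V, D)) * 0.1                      # token embeddings
unembed   = rng.standard_normal((D, V)) * 0.1                      # output head

def run(layers, h):
    for w in layers:
        h = np.tanh(h @ w)  # stand-in for one transformer block
    return h

def dmtd_step(token_id):
    """One DMTD cycle: traverse the early/middle layers once, then reuse the
    resulting hidden state to predict K tokens through the late layers only."""
    h = run(early_mid, embed[token_id])   # expensive pass, done once per cycle
    out = []
    for _ in range(K):
        logits = run(late, h) @ unembed   # late layers only, per extra token
        nxt = int(np.argmax(logits))
        out.append(nxt)
        h = h + embed[nxt]                # toy update folding the new token in
    return out

print(dmtd_step(1))  # K token ids from a single early/middle traversal
```

Standard autoregressive decoding would call `run(early_mid, ...)` once per token; here it runs once per `K` tokens, which is where the speedup comes from in this simplified picture.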