AI Summary
This work proposes ConFu, a novel speculative decoding framework that overcomes the limitations of existing methods, which rely solely on the current prefix for draft token generation and consequently suffer from error accumulation. ConFu introduces continuous "contemplate" tokens combined with a soft prompting mechanism, enabling the draft model to perform context-aware future predictions. This is further enhanced by a dynamic mixture-of-experts (MoE) architecture and an anchor token sampling strategy, collectively breaking the strict dependency on immediate prefixes. Evaluated on Llama-3 3B and 8B models, ConFu achieves an 8%–11% improvement in token acceptance rate and generation speed compared to EAGLE-3, significantly accelerating large language model inference while maintaining high prediction fidelity.
Abstract
Speculative decoding has emerged as a powerful approach to accelerate large language model (LLM) inference by employing lightweight draft models to propose candidate tokens that are subsequently verified by the target model. The effectiveness of this paradigm critically depends on the quality of the draft model. While recent advances such as the EAGLE series achieve state-of-the-art speedups, existing draft models remain limited by error accumulation: they condition only on the current prefix, so their predictions drift away from the target model over successive steps. In this work, we propose ConFu (Contemplate the Future), a novel speculative decoding framework that enables draft models to anticipate the future direction of generation. ConFu introduces (i) contemplate tokens and soft prompts that allow the draft model to leverage future-oriented signals from the target model at negligible cost, (ii) a dynamic contemplate token mechanism with MoE to enable context-aware future prediction, and (iii) a training framework with anchor token sampling and future prediction replication that learns robust future prediction. Experiments demonstrate that ConFu improves token acceptance rates and generation speed over EAGLE-3 by 8–11% across various downstream tasks with Llama-3 3B and 8B models. We believe our work is the first to bridge speculative decoding with continuous reasoning tokens, offering a new direction for accelerating LLM inference.
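To make the draft-and-verify paradigm described above concrete, the sketch below shows one round of greedy speculative decoding with toy stand-in models. This is not ConFu's (or EAGLE's) implementation: the two "models" are illustrative deterministic functions over integer tokens, and the target diverges from the draft on an arbitrary condition to mimic the error accumulation the abstract describes. It only illustrates the accept-until-first-mismatch verification loop.

```python
def draft_next(prefix):
    # Toy draft model: next token is a deterministic function of the
    # prefix sum (stands in for a lightweight draft LM; purely illustrative).
    return (sum(prefix) * 3 + 1) % 10

def target_next(prefix):
    # Toy target model: agrees with the draft except when the prefix sum
    # is divisible by 5, mimicking draft/target divergence over steps.
    t = (sum(prefix) * 3 + 1) % 10
    return (t + 1) % 10 if sum(prefix) % 5 == 0 else t

def speculative_step(prefix, k=4):
    """One draft-and-verify round (greedy verification variant): the draft
    proposes k tokens; the target accepts the longest agreeing prefix and
    supplies one corrected token at the first mismatch."""
    # Draft phase: propose k tokens conditioned only on the current prefix,
    # each new proposal conditioned on the previous proposals.
    proposal, ctx = [], list(prefix)
    for _ in range(k):
        tok = draft_next(ctx)
        proposal.append(tok)
        ctx.append(tok)
    # Verify phase: in a real system the target scores all k positions in
    # one parallel forward pass; here a simple loop suffices.
    accepted, ctx = [], list(prefix)
    for tok in proposal:
        if target_next(ctx) == tok:
            accepted.append(tok)
            ctx.append(tok)
        else:
            # First disagreement: emit the target's own token and stop,
            # discarding the rest of the draft's proposal.
            accepted.append(target_next(ctx))
            break
    return accepted

print(speculative_step([1, 2, 3], k=4))  # first draft token accepted, then corrected
```

With this toy setup, the draft proposes four tokens but only the first survives verification; the second output token is the target's correction, after which the remaining proposals are discarded. The speedup of real systems comes from verifying all k proposals in a single target forward pass instead of k sequential ones.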