🤖 AI Summary
Traditional LLMs suffer from limited bidirectional information exchange due to fixed prefix prompting and unidirectional autoregressive generation. To address this, we propose ICE (In-Place Chain-of-Thought Prompting with Early Exit), a novel in-context prompting framework tailored for diffusion-based large language models. Our method integrates bidirectional attention with iterative refinement, dynamically transforming prefix prompts into in-place chain-of-thought reasoning; it further incorporates a confidence-aware early-exit mechanism that enables real-time generation guidance and compute-adaptive pruning. On GSM8K, our approach achieves a 17.29% accuracy gain and a 4.12× inference speedup; on MMLU, it delivers a 276.67× acceleration while preserving competitive performance. The core contribution is the first unified integration of in-place prompting, bidirectional modeling, and controllable early exit within a diffusion LLM architecture, significantly enhancing both reasoning flexibility and compute efficiency.
📝 Abstract
Although large language models (LLMs) have achieved remarkable success, their prefix-only prompting paradigm and sequential generation process offer limited flexibility for bidirectional information flow. Diffusion large language models (dLLMs) present new opportunities through their bidirectional attention mechanisms and iterative refinement processes, enabling more flexible in-place prompting strategies. We introduce ICE (In-Place Chain-of-Thought Prompting with Early Exit), a novel framework that transforms prefix-only prompting into in-place prompting specifically designed for dLLMs. ICE integrates in-place prompts directly within masked token positions during iterative refinement and employs a confidence-aware early exit mechanism to significantly reduce computational overhead. Extensive experiments demonstrate ICE's effectiveness, achieving up to 17.29% accuracy improvement with a 4.12× speedup on GSM8K, and up to 276.67× acceleration on MMLU while maintaining competitive performance.