EDIT: Early Diffusion Inference Termination for dLLMs Based on Dynamics of Training Gradients

📅 2025-11-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion-based large language models (dLLMs) suffer from high inference overhead due to redundant denoising steps. This paper proposes EDIT, an early termination mechanism that requires no auxiliary model. EDIT is the first method to leverage gradient metadata, preserved from AdamW-aggregated LoRA updates during supervised fine-tuning (SFT), to construct a compact reasoning map. At inference, it computes token-level activation alignment scores against this map, converts them into a distribution over the currently visible (unmasked) tokens, and declares convergence when the KL divergence between consecutive denoising steps falls below a threshold, adaptively determining when generation has stabilized. Evaluated across multiple reasoning benchmarks, EDIT reduces diffusion steps by 11.8%-68.3% while maintaining or improving accuracy. It incurs only negligible storage overhead of approximately 0.02% (1.5-2 MB), making it highly efficient for practical deployment. The approach significantly lowers inference cost for dLLMs without compromising performance.

📝 Abstract
Diffusion-based large language models (dLLMs) refine token generations through iterative denoising, but answers often stabilize before all steps complete. We propose EDIT (Early Diffusion Inference Termination), an inference-time criterion that adaptively stops denoising once sufficient reasoning stability relative to training-time reasoning is detected. EDIT monitors the alignment between token activations and a reasoning map derived from AdamW-aggregated LoRA updates captured during supervised fine-tuning (SFT). During training, optimization dynamics generate rich metadata about parameter importance that in prior methods is typically discarded upon model release. We preserve this information as a compact representation of learned reasoning pathways. During inference, alignment scores are converted to a distribution over the tokens already unmasked at the current denoising step, and convergence is detected when KL divergence between consecutive steps falls below a threshold on the matched unmasked (visible) tokens. Across reasoning benchmarks, EDIT reduces diffusion steps by 11.8% to 68.3% while preserving or improving accuracy in most settings, with approximately 0.02% storage overhead (about 1.5-2 MB for all QKV modules across 32 blocks in an 8 GB model). By utilizing training-gradient dynamics, our work opens a new research direction for reducing dLLM inference time and cost.
Problem

Research questions and friction points this paper is trying to address.

Reduces diffusion steps in dLLMs by early termination
Preserves reasoning accuracy using training gradient dynamics
Minimizes storage overhead while cutting inference time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Early termination based on training gradient dynamics
Monitor token alignment with AdamW-aggregated LoRA reasoning map
Detect convergence via KL divergence on visible tokens
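The convergence test described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the softmax conversion of alignment scores, and the threshold `tau` are assumptions; the paper specifies only that alignment scores are turned into a distribution over the tokens unmasked at the current step, and that denoising stops when the KL divergence between consecutive steps on the matched visible tokens drops below a threshold.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions, clipped for stability."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def alignment_distribution(scores, mask):
    """Softmax of token alignment scores restricted to visible tokens."""
    vis = scores[mask]
    e = np.exp(vis - vis.max())  # subtract max for numerical stability
    return e / e.sum()

def should_terminate(prev_scores, curr_scores, prev_mask, curr_mask, tau=1e-3):
    """Stop denoising when the distributions over the tokens visible at
    both steps have converged (KL below the threshold tau)."""
    matched = prev_mask & curr_mask  # tokens unmasked at both steps
    if not matched.any():
        return False  # nothing to compare yet
    p = alignment_distribution(prev_scores, matched)
    q = alignment_distribution(curr_scores, matched)
    return kl_divergence(p, q) < tau
```

In this sketch, identical alignment scores across two steps yield zero KL and trigger termination, while a large shift in scores keeps the denoising loop running.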