🤖 AI Summary
This work addresses the high cumulative uncertainty incurred by existing samplers for masked diffusion models, which typically adopt greedy strategies that ignore the global impact of current decoding choices on subsequent steps. To overcome this limitation, the authors propose the Info-Gain Sampler, which, by leveraging the non-causal nature of diffusion models, introduces information gain into the sampling process for the first time. At each step, the sampler jointly optimizes the current prediction's uncertainty and the expected information gain over future masked positions, enabling globally informed planning of the decoding order. Experiments demonstrate that this approach improves average accuracy by 3.6% on reasoning tasks, achieves a 63.1% win rate in human preference evaluations for creative writing, and reduces cumulative uncertainty from 78.4 to 48.6.
📝 Abstract
Masked Diffusion Models (MDMs) offer greater flexibility in decoding order than autoregressive models but require careful planning to achieve high-quality generation. Existing samplers typically adopt greedy heuristics, at each step prioritizing the positions with the highest local certainty. Through failure-case analysis, we identify a fundamental limitation of this approach: it neglects the downstream impact of current decoding choices on subsequent steps and fails to minimize cumulative uncertainty. In particular, these methods do not fully exploit the non-causal nature of MDMs, which makes it possible to evaluate how a decoding decision reshapes the token probabilities and uncertainty at all remaining masked positions. To bridge this gap, we propose the Info-Gain Sampler, a principled decoding framework that balances immediate uncertainty with information gain over future masked tokens. Extensive evaluations across diverse architectures and tasks (reasoning, coding, creative writing, and image generation) demonstrate that the Info-Gain Sampler consistently outperforms existing samplers for MDMs. For instance, it achieves a 3.6% improvement in average accuracy on reasoning tasks and a 63.1% win rate in creative writing. Notably, on reasoning tasks it reduces cumulative uncertainty from 78.4 to 48.6, outperforming the best baseline by a large margin. The code will be available at https://github.com/yks23/Information-Gain-Sampler.
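The selection rule described above, trading off a position's local uncertainty against the expected information gain it induces at the remaining masked positions, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: `toy_model` stands in for an MDM forward pass, and the weight `lam`, the argmax commitment, and the exhaustive per-position search are all assumptions for clarity.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector."""
    return -np.sum(p * np.log(p + eps))

def toy_model(tokens, mask, vocab=4, seed=0):
    """Stand-in for a non-causal MDM forward pass (hypothetical): returns
    a distribution over the vocabulary at every position. Decoding more
    positions sharpens the remaining distributions, mimicking how context
    reduces uncertainty elsewhere."""
    rng = np.random.default_rng(seed + int(np.sum(tokens * ~mask)))
    logits = rng.normal(size=(len(tokens), vocab))
    sharpness = 1.0 + 2.0 * np.sum(~mask) / len(tokens)
    p = np.exp(sharpness * logits)
    return p / p.sum(axis=1, keepdims=True)

def info_gain_step(tokens, mask, lam=1.0):
    """Pick the next position to decode by balancing local certainty
    (low entropy at the position itself) against expected information
    gain (entropy reduction at the other masked positions)."""
    probs = toy_model(tokens, mask)
    masked = np.where(mask)[0]
    best_pos, best_score = None, -np.inf
    for i in masked:
        h_i = entropy(probs[i])  # local uncertainty at candidate i
        # Hypothetically commit to the argmax token at position i ...
        t2, m2 = tokens.copy(), mask.copy()
        t2[i], m2[i] = np.argmax(probs[i]), False
        probs2 = toy_model(t2, m2)
        # ... and measure how much uncertainty drops elsewhere.
        gain = sum(entropy(probs[j]) - entropy(probs2[j])
                   for j in masked if j != i)
        score = -h_i + lam * gain
        if score > best_score:
            best_pos, best_score = i, score
    return best_pos

tokens = np.zeros(6, dtype=int)   # fully masked 6-token sequence
mask = np.ones(6, dtype=bool)
pos = info_gain_step(tokens, mask)
```

A purely greedy sampler would keep only the `-h_i` term; the `lam * gain` term is what lets the sampler prefer a position whose decoding most reduces uncertainty globally, which is exactly what a causal model cannot evaluate in one pass.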