Improving Discrete Diffusion Unmasking Policies Beyond Explicit Reference Policies

📅 2025-10-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Generation quality in masked diffusion models (MDMs) depends critically on the order in which tokens are denoised, yet existing rule-based strategies (e.g., highest-confidence selection) lack theoretical guarantees and cannot be systematically optimized. Method: the paper formulates MDM decoding-order learning as a KL-regularized Markov decision process and introduces a reference-policy-guided reinforcement learning framework, with guarantees that the learned policy converges and that its samples are asymptotically close to the true data distribution. The learned policy dynamically selects which position to denoise next, replacing heuristic rules. Results: experiments across four benchmark tasks demonstrate efficacy; on the SUDOKU task the approach achieves a 20.1% absolute improvement over random ordering and an 11.2% gain over maximum-confidence ordering, significantly enhancing both generation accuracy and sample quality.
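The KL-regularized objective described above takes the standard form (the notation below is generic RL notation, not necessarily the paper's own symbols):

$$\max_{\pi}\;\mathbb{E}_{\pi}\!\left[\sum_{t} r(s_t, a_t)\right] \;-\; \beta\,\mathrm{KL}\!\left(\pi \,\|\, \pi_{\mathrm{ref}}\right),$$

where $\pi$ chooses which masked position to unmask at each step, $\pi_{\mathrm{ref}}$ is the explicit reference policy (e.g., a max-confidence heuristic), and $\beta$ controls how far the learned policy may drift from the reference.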

📝 Abstract
Masked diffusion models (MDMs) have recently emerged as a novel framework for language modeling. MDMs generate sentences by iteratively denoising masked sequences, filling in [MASK] tokens step by step. Although MDMs support any-order sampling, performance is highly sensitive to the choice of which position to unmask next. Prior work typically relies on rule-based schedules (e.g., max-confidence, max-margin), which provide ad hoc improvements. In contrast, we replace these heuristics with a learned scheduler. Specifically, we cast denoising as a KL-regularized Markov decision process (MDP) with an explicit reference policy and optimize a regularized objective that admits policy improvement and convergence guarantees under standard assumptions. We prove that the optimized policy under this framework generates samples that more closely match the data distribution than heuristic schedules. Empirically, across four benchmarks, our learned policy consistently outperforms max-confidence: for example, on SUDOKU, where unmasking order is critical, it yields a 20.1% gain over random and an 11.2% gain over max-confidence.
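As a concrete illustration of the KL-regularized setup: for a one-step objective of the form max_π E_π[r] − β·KL(π‖π_ref), the optimum has the well-known closed form π*(a) ∝ π_ref(a)·exp(r(a)/β). A minimal sketch (all names, rewards, and probabilities below are illustrative, not the paper's implementation):

```python
import math

def kl_regularized_policy(ref_probs, rewards, beta=1.0):
    """Closed-form maximizer of  E_pi[r] - beta * KL(pi || pi_ref):
    pi*(a) proportional to pi_ref(a) * exp(r(a) / beta).

    ref_probs: reference policy over masked positions, e.g. a
               max-confidence softmax (hypothetical values here).
    rewards:   per-position reward signal for unmasking each position.
    beta:      KL strength; large beta keeps pi close to the reference.
    """
    weights = [p * math.exp(r / beta) for p, r in zip(ref_probs, rewards)]
    z = sum(weights)
    return [w / z for w in weights]

# Reference policy favors position 0, but the reward favors position 2.
ref = [0.5, 0.3, 0.2]
rew = [0.0, 0.0, 2.0]
pi = kl_regularized_policy(ref, rew, beta=1.0)
# pi now assigns the largest probability to position 2: the reward
# reshapes the heuristic schedule while the KL term tethers it to ref.
```

Raising `beta` recovers the reference (heuristic) schedule; lowering it lets the reward dominate, which mirrors the paper's trade-off between following the explicit reference policy and optimizing the unmasking order.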
Problem

Research questions and friction points this paper is trying to address.

Performance is highly sensitive to which position is unmasked next
Rule-based schedules (max-confidence, max-margin) are ad hoc, with no theoretical guarantees
No systematic way to optimize the denoising order toward the data distribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learned scheduler replaces rule-based unmasking policies
KL-regularized MDP framework with explicit reference policy
Optimized policy improves data distribution matching
🔎 Similar Papers