🤖 AI Summary
This work challenges the prevailing reliance on arbitrary generation orders in diffusion language models (dLLMs), which, despite their flexibility, often sidestep high-uncertainty yet critical tokens, leading to premature collapse of the solution space and degraded reasoning performance. The authors propose a "less is more" paradigm that abandons unrestricted generation order in favor of the standard left-to-right decoding path, refined with off-the-shelf Group Relative Policy Optimization (GRPO) rather than a bespoke RL algorithm. The resulting method, JustGRPO, preserves the parallel decoding efficiency inherent to dLLMs while substantially enhancing reasoning capabilities. Empirical evaluation demonstrates its effectiveness, achieving 89.1% accuracy on the GSM8K mathematical reasoning benchmark—a significant improvement over existing approaches.
📝 Abstract
Diffusion Large Language Models (dLLMs) break the rigid left-to-right constraint of traditional LLMs, enabling token generation in arbitrary orders. Intuitively, this flexibility implies a solution space that is a strict superset of the fixed autoregressive trajectory, theoretically unlocking superior reasoning potential for general tasks like mathematics and coding. Consequently, numerous works have leveraged reinforcement learning (RL) to elicit the reasoning capability of dLLMs. In this paper, we reveal a counter-intuitive reality: arbitrary-order generation, in its current form, narrows rather than expands the reasoning boundary of dLLMs. We find that dLLMs tend to exploit this order flexibility to bypass high-uncertainty tokens that are crucial for exploration, leading to a premature collapse of the solution space. This observation motivates a rethink of RL approaches for dLLMs, where considerable complexities, such as handling combinatorial trajectories and intractable likelihoods, are often devoted to preserving this flexibility. We demonstrate that effective reasoning can be better elicited by intentionally forgoing arbitrary order and applying standard Group Relative Policy Optimization (GRPO) instead. Our approach, JustGRPO, is minimalist yet surprisingly effective (e.g., 89.1% accuracy on GSM8K) while fully retaining the parallel decoding ability of dLLMs. Project page: https://nzl-thu.github.io/the-flexibility-trap
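For readers unfamiliar with the GRPO the abstract refers to, its core idea is to score each sampled completion by its reward relative to the other completions in the same group, avoiding a learned value critic. A minimal sketch of that group-relative advantage (the function name is illustrative; this is not the authors' implementation):

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages as in standard GRPO: normalize each
    sampled completion's reward by the mean and (population) std of
    the rewards in its group."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against all-equal rewards
    return [(r - mean) / std for r in rewards]

# E.g., a group of 4 rollouts with binary correctness rewards:
print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))  # → [1.0, -1.0, 1.0, -1.0]
```

Correct completions receive positive advantage and incorrect ones negative, so the policy gradient pushes probability mass toward the better rollouts within each group.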