🤖 AI Summary
Problem: Diffusion-based large language models (dLLMs) show limited reasoning capability without external reward models or explicit step-wise annotations to guide their reasoning process.
Method: We propose Reward-Free Guidance (RFG), a framework that parameterizes the process reward as the log-likelihood ratio between an enhanced (post-trained) dLLM and a reference dLLM. RFG estimates process-level rewards solely from the output discrepancy between these two models, eliminating reliance on external reward models or explicit reward supervision. It supports any-order generation, is compatible with diverse post-training methods (e.g., SFT or RL), and incorporates test-time sampling expansion for enhanced stability; a minimal sampling sketch follows the results below.
Results: On four challenging reasoning-intensive benchmarks spanning mathematical reasoning and code generation, RFG achieves accuracy gains of up to 9.2%. It is the first method to enhance dLLM reasoning without explicit reward signals, demonstrating both generality across model families and empirical effectiveness.
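For concreteness, here is a minimal PyTorch-style sketch of one guided denoising step under these ideas. The classifier-free-guidance-style logit combination, the function and argument names (`rfg_denoise_step`, `enhanced_model`, `reference_model`, `mask_id`), and the confidence-based unmasking rule are illustrative assumptions, not the paper's exact algorithm.

```python
import torch

@torch.no_grad()
def rfg_denoise_step(x_t, enhanced_model, reference_model, w=1.0, mask_id=0):
    """One reward-free-guided denoising step for a masked dLLM (sketch).

    The process reward is parameterized implicitly as the log-likelihood
    ratio log p_enh - log p_ref, so the guided logits are
        logits_ref + w * (logits_enh - logits_ref),
    recovering the reference model at w=0 and the enhanced model at w=1.
    Both models are assumed to map token ids (batch, seq_len) to
    per-position vocabulary logits; `mask_id` is a placeholder mask token.
    """
    logits_ref = reference_model(x_t)            # (batch, seq_len, vocab)
    logits_enh = enhanced_model(x_t)
    guided = logits_ref + w * (logits_enh - logits_ref)

    # Any-order generation: fill the single most confident masked position.
    # Assumes at least one masked position remains in each sequence.
    probs = guided.softmax(dim=-1)
    conf, tokens = probs.max(dim=-1)              # per-position confidence
    conf = conf.masked_fill(x_t != mask_id, -1.0) # ignore unmasked slots
    pos = conf.argmax(dim=-1)                     # most confident slot
    batch = torch.arange(x_t.size(0))
    x_next = x_t.clone()
    x_next[batch, pos] = tokens[batch, pos]
    return x_next
```

Iterating this step from a fully masked sequence until no mask tokens remain yields a complete sample; the guidance strength `w` trades off fidelity to the reference model against the implicit reward.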
📝 Abstract
Diffusion large language models (dLLMs) have shown great potential in large-scale language modeling, and there is increasing interest in further improving their capacity to solve complex problems by guiding the reasoning process step by step. Common practice for autoregressive language models is to learn a process reward model from dense annotations of each intermediate step. However, this is challenging for dLLMs, whose generation proceeds in an any-order fashion and whose intermediate states are partially masked sequences. To this end, we propose reward-free guidance (RFG), a principled method for guiding the reasoning trajectory of dLLMs without an explicit process reward. The key idea of RFG is to parameterize the process reward by the log-likelihood ratio of an enhanced and a reference dLLM, where the enhanced model can be any off-the-shelf dLLM post-trained with reinforcement learning (RL) or supervised fine-tuning (SFT). We provide theoretical justification that RFG induces the reward-guided sampling distribution with no additional reward model. We conduct comprehensive experiments on four challenging mathematical reasoning and code generation benchmarks using a diverse suite of dLLMs enhanced with various post-training methods. RFG consistently yields significant improvements across all tasks and model types, achieving accuracy gains of up to 9.2%. These findings establish RFG as a general, training-free framework that scales test-time reasoning without reliance on external reward models.
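Read literally, the key idea admits a compact formalization; the guidance strength $w$ and the exact conditioning notation below are our assumptions for illustration, not lifted from the paper.

```latex
% With the process reward parameterized implicitly as the log-likelihood
% ratio of the enhanced and reference dLLMs (notation assumed),
%   r(x_{t-1} \mid x_t) = \log p_{\mathrm{enh}}(x_{t-1} \mid x_t)
%                       - \log p_{\mathrm{ref}}(x_{t-1} \mid x_t),
% reward-guided sampling needs no learned reward model:
\[
  p_{\mathrm{RFG}}(x_{t-1} \mid x_t)
  \;\propto\;
  p_{\mathrm{ref}}(x_{t-1} \mid x_t)\, e^{\,w\, r(x_{t-1} \mid x_t)}
  \;=\;
  p_{\mathrm{ref}}(x_{t-1} \mid x_t)^{\,1-w}\,
  p_{\mathrm{enh}}(x_{t-1} \mid x_t)^{\,w}.
\]
```

Under this reading, $w=0$ recovers the reference model and $w=1$ the enhanced model, and the guided distribution is exactly the reward-tilted reference distribution, consistent with the abstract's claim that no additional reward model is required.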