Free Lunch for Pass@$k$? Low Cost Diverse Sampling for Diffusion Language Models

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion language models often generate redundant samples when producing diverse text, leading to inefficient resource utilization and poor exploration in Pass@$k$ evaluation tasks. This work proposes a training-free, low-cost sampling intervention that dynamically adjusts generated samples during batch decoding by leveraging intermediate representations of the diffusion model to enforce sequential repulsion in the feature space, thereby actively suppressing redundancy and enhancing diversity. To the best of our knowledge, this is the first method to achieve efficient diversity enhancement in diffusion language models without requiring retraining or beam search. Evaluated on HumanEval and GSM8K benchmarks using the LLaDA-8B-Instruct model, the approach significantly improves Pass@$k$ success rates while introducing negligible computational overhead.

📝 Abstract
Diverse outputs in text generation are necessary for effective exploration in complex reasoning tasks, such as code generation and mathematical problem solving. Such Pass@$k$ problems benefit from distinct candidates covering the solution space. However, traditional sampling approaches often waste computational resources on repetitive failure modes. While Diffusion Language Models have emerged as a competitive alternative to the prevailing Autoregressive paradigm, they remain susceptible to this redundancy, with independent samples frequently collapsing into similar modes. To address this, we propose a training-free, low-cost intervention to enhance generative diversity in Diffusion Language Models. Our approach modifies intermediate samples in a batch sequentially, where each sample is repelled from the feature space of previous samples, actively penalising redundancy. Unlike prior methods that require retraining or beam search, our strategy incurs negligible computational overhead, while ensuring that each sample contributes a unique perspective to the batch. We evaluate our method on the HumanEval and GSM8K benchmarks using the LLaDA-8B-Instruct model. Our results demonstrate significantly improved diversity and Pass@$k$ performance across various temperature settings. As a simple modification to the sampling process, our method offers an immediate, low-cost improvement for current and future Diffusion Language Models in tasks that benefit from diverse solution search. We make our code available at https://github.com/sean-lamont/odd.
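To make the idea concrete, the sequential feature-space repulsion described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation (see their repository for the real method): the function name `repel_batch`, the pooled per-sample feature vectors, and the inverse-square update rule are all assumptions made for the sketch.

```python
import numpy as np

def repel_batch(features, strength=0.1, eps=1e-8):
    """Sequentially push each sample's intermediate features away from
    those of the samples processed before it in the batch.

    features: (batch, dim) array, e.g. pooled hidden states of each
    sample at an intermediate denoising step.
    Hypothetical update rule, chosen only to illustrate the idea.
    """
    out = features.astype(float).copy()
    for i in range(1, len(out)):          # first sample is left untouched
        for j in range(i):
            diff = out[i] - out[j]
            dist = np.linalg.norm(diff) + eps
            # repulsion scaled by proximity: nearby samples are pushed harder
            out[i] += strength * diff / dist**2
    return out
```

In a diffusion sampler, an adjustment like this would be applied to the batch's intermediate representations at selected denoising steps, so that later samples are steered away from modes already covered by earlier ones without any retraining.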
Problem

Research questions and friction points this paper is trying to address.

diverse sampling
diffusion language models
Pass@$k$
redundancy
text generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Language Models
Diverse Sampling
Pass@$k$
Training-Free Intervention
Feature-Space Repulsion
Sean Lamont
Australian National University
Christian Walder
Google DeepMind
machine learning
Paul Montague
Defence Science and Technology Group
Amir Dezfouli
BIMLOGIQ
Computational neuroscience, Machine learning
Michael Norrish
Australian National University