🤖 AI Summary
Reinforcement learning (RL) fine-tuning of large language models frequently induces "diversity collapse": an over-concentration of probability mass on a few outputs that severely limits response variety. Existing heuristic remedies (e.g., entropy regularization) struggle to jointly optimize correctness and diversity, and their effectiveness varies across tasks. This work provides the first theoretical analysis identifying the root cause as a coupled effect of selection bias and reinforcement bias. To address it, the authors propose differential smoothing: a principled method that adjusts rewards exclusively on correct trajectories, smoothing optimization across valid solution paths. Theoretically, differential smoothing is provably superior to vanilla RL and common entropy-based heuristics. Empirically, models from 1B to 7B parameters show consistent gains across domains including CountDown and mathematical reasoning, improving both Pass@1 and Pass@k, with up to 6.7% improvement on AIME24.
📝 Abstract
It is widely recognized that reinforcement learning (RL) fine-tuning of large language models often leads to *diversity collapse*, where outputs lack variety. Prior work has proposed a range of heuristics to counteract this effect, but these methods are ad hoc: they frequently trade off correctness for diversity, their effectiveness varies across tasks, and in some cases they even contradict one another. In this work, we place these observations on a rigorous foundation. We first provide a formal proof of why RL fine-tuning exhibits diversity collapse, driven by a coupled selection bias and reinforcement bias. Next, we make a key observation that any reward modification to address diversity collapse only needs to be applied to the correct trajectories. Building directly on this analysis, we introduce a principled method -- *differential smoothing* -- that provably improves both correctness and diversity, outperforming vanilla RL as well as widely used entropy-based heuristics. Our theory precisely characterizes when existing heuristics help and why they fail, while showing that differential smoothing is universally superior. Extensive experiments with models from 1B to 7B parameters, across domains including CountDown and real-world mathematical reasoning, demonstrate consistent gains. Differential smoothing improves both Pass@1 and Pass@k, with up to 6.7% improvement on the AIME24 dataset.
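The key observation above -- that reward modification need only touch correct trajectories -- can be illustrated with a minimal sketch. The function below is a hypothetical reading of that idea, not the paper's actual formula: rewards for correct trajectories are shrunk toward their mean (spreading reinforcement over multiple valid solution paths), while rewards for incorrect trajectories are left untouched so the correctness signal is preserved. The smoothing coefficient `alpha` is an assumed hyperparameter.

```python
def smooth_correct_rewards(rewards, is_correct, alpha=0.5):
    """Hypothetical sketch of reward adjustment on correct trajectories only.

    rewards    : per-trajectory scalar rewards from one sampled batch
    is_correct : per-trajectory booleans (True if the final answer is correct)
    alpha      : assumed smoothing strength in [0, 1]; alpha=0 is vanilla RL
    """
    correct_rewards = [r for r, ok in zip(rewards, is_correct) if ok]
    if len(correct_rewards) < 2:
        # Nothing to smooth across: fewer than two correct solution paths.
        return list(rewards)
    mean_correct = sum(correct_rewards) / len(correct_rewards)
    adjusted = []
    for r, ok in zip(rewards, is_correct):
        if ok:
            # Interpolate each correct reward toward the mean correct reward,
            # so no single correct path absorbs all the probability mass.
            adjusted.append((1 - alpha) * r + alpha * mean_correct)
        else:
            # Incorrect trajectories are untouched: correctness pressure stays.
            adjusted.append(r)
    return adjusted
```

Under this reading, entropy bonuses differ in that they perturb *all* trajectories, which is one way to see why they can trade correctness for diversity.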