AI Summary
This work addresses exploration collapse in reinforcement-learning (RL) post-training of large language models (LLMs), where overemphasis on dominant reasoning paths undermines solution diversity and pass@k performance. To mitigate this, the authors propose a rollout-level RL method with a rarity-aware reward mechanism grounded in high-level strategy clustering. Specifically, they use the LLM itself as a judge to cluster generated reasoning trajectories into high-level strategies and weight policy advantages inversely with cluster frequency, thereby explicitly promoting diverse problem-solving approaches. Experiments demonstrate that the method significantly improves both pass@k and AUC@K across mathematical, physics, and medical reasoning benchmarks while preserving pass@1 accuracy, effectively sustaining exploration and uncovering more diverse solution strategies.
Abstract
Reinforcement learning (RL) has become a central paradigm for post-training large language models (LLMs), particularly for complex reasoning tasks, yet it often suffers from exploration collapse: policies prematurely concentrate on a small set of dominant reasoning patterns, improving pass@1 while limiting rollout-level diversity and gains in pass@$k$. We argue that this failure stems from objectives that regularize local token behavior rather than diversity over sets of solutions. To address it, we propose Uniqueness-Aware Reinforcement Learning, a rollout-level objective that explicitly rewards correct solutions exhibiting rare high-level strategies. Our method uses an LLM-based judge to cluster rollouts for the same problem according to their high-level solution strategies, ignoring superficial variations, and reweights policy advantages inversely with cluster size; correct but novel strategies therefore receive higher rewards than redundant ones. Across mathematics, physics, and medical reasoning benchmarks, our approach consistently improves pass@$k$ across large sampling budgets and increases the area under the pass@$k$ curve (AUC@$K$) without sacrificing pass@1, while sustaining exploration and uncovering more diverse solution strategies at scale.
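The core reweighting step described above (advantages scaled inversely with strategy-cluster size) could be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the GRPO-style group-mean baseline, and the weight normalization are assumptions, and the cluster labels are taken as given (in the paper they come from an LLM-based judge).

```python
from collections import Counter

def rarity_weighted_advantages(rewards, cluster_ids):
    """Sketch of rarity-aware advantage reweighting for one problem's rollouts.

    rewards     -- per-rollout scalar rewards (e.g. 1.0 correct, 0.0 incorrect)
    cluster_ids -- per-rollout high-level strategy labels from an LLM judge
    """
    counts = Counter(cluster_ids)          # rollouts per strategy cluster
    n = len(rewards)
    num_clusters = len(counts)
    # Group-mean baseline over this problem's rollouts (GRPO-style assumption).
    baseline = sum(rewards) / n
    advantages = []
    for r, c in zip(rewards, cluster_ids):
        adv = r - baseline
        # Inverse-frequency weight, normalized so weights average to 1 across
        # rollouts: rare strategies get amplified, redundant ones damped.
        adv *= n / (counts[c] * num_clusters)
        advantages.append(adv)
    return advantages
```

With four rollouts where the first two share a common strategy, a third solves the problem with a rare strategy, and a fourth fails, the rare correct rollout receives a larger positive advantage than the redundant correct ones, which is the intended exploration pressure.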