The Road Less Traveled: Enhancing Exploration in LLMs via Sequential Sampling

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reinforcement learning (RL) optimization of large language models (LLMs) suffers from limited exploration and entropy collapse under parallel sampling, undermining policy diversity and generalization. To address this, we propose SESA, a framework built on *sequential sampling*: diverse solution sketches are generated one after another, each conditioned on those already produced, before being expanded into full reasoning paths. This breaks the i.i.d. assumption of parallel sampling and substantially widens exploration. On a synthetic task, SESA outperforms standard RL methods in path diversity and recovery from collapse; on three agent benchmarks it achieves absolute success-rate gains of 0.25, 0.42, and 0.07, representing up to a 211% relative improvement over baseline RL. Crucially, SESA mitigates policy collapse and establishes a structured paradigm for exploration in RL-trained LLMs.

📝 Abstract
Reinforcement learning (RL) has been pivotal in enhancing the reasoning capabilities of large language models (LLMs), but it often suffers from limited exploration and entropy collapse, where models exploit a narrow set of solutions, leading to a loss of sampling diversity and subsequently preventing RL from further improving performance. This issue is exacerbated in parallel sampling methods, where multiple outputs are drawn from the same distribution, potentially causing the model to converge to similar solutions. We propose SESA, a novel SEquential SAmpling framework that mitigates this challenge by generating diverse solution sketches sequentially before expanding them into full reasoning paths. This approach ensures broader exploration by conditioning each new output on previous ones, promoting diversity throughout the process and preventing policy collapse. Our experiments on a synthetic task show that sequential sampling consistently outperforms traditional RL methods in terms of path diversity and recovery from collapse. Further evaluations on real-world tasks demonstrate that SESA improves both the exploration of valid strategies and the overall performance of LLMs. On three agent benchmarks, SESA lifts success rates by $+0.25$, $+0.42$, and $+0.07$ absolute over the base model (up to an additional $211\%$ relative improvement over baseline RL), underscoring its exploration advantage. This work introduces a structured approach to exploration, paving the way for more effective and diverse reasoning in RL-trained LLMs. Our code is released at https://github.com/MuLabPKU/sesa.
Problem

Research questions and friction points this paper is trying to address.

Addressing limited exploration and entropy collapse in RL-trained LLMs
Mitigating sampling diversity loss in parallel generation methods
Enhancing reasoning path diversity through sequential solution sketching
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sequential sampling framework prevents policy collapse
Generates diverse solution sketches before expansion
Conditions new outputs on previous ones for diversity
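The conditioning idea in the bullets above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's implementation: `toy_model` is a stand-in for an actual LLM call, and the prompt format (`"already tried: ..."`) is an assumption made purely to show how each new sketch can be conditioned on the ones drawn before it, in contrast to i.i.d. parallel draws.

```python
import random
import zlib

def toy_model(prompt: str, seed: int) -> str:
    """Stand-in for an LLM call: returns a pseudo-random 'solution sketch'."""
    rng = random.Random(zlib.crc32(prompt.encode()) ^ seed)
    strategies = ["greedy search", "dynamic programming",
                  "divide and conquer", "backtracking"]
    return rng.choice(strategies)

def parallel_sample(task: str, k: int) -> list[str]:
    """Baseline: k i.i.d. draws from the same prompt, hence the same distribution."""
    return [toy_model(task, seed=i) for i in range(k)]

def sequential_sample(task: str, k: int) -> list[str]:
    """SESA-style: each new sketch is conditioned on all previous ones."""
    sketches: list[str] = []
    for i in range(k):
        # Feeding earlier sketches back into the prompt lets the model steer
        # away from them, breaking the i.i.d. assumption of parallel sampling.
        prompt = task + " | already tried: " + "; ".join(sketches)
        sketches.append(toy_model(prompt, seed=i))
    return sketches
```

In the actual SESA pipeline, each sketch would then be expanded into a full reasoning path and optimized with RL; the snippet only illustrates why sequential draws can cover more distinct strategies than parallel ones.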