🤖 AI Summary
To address insufficient exploration in chain-of-thought sampling, sparse process-level rewards, reward-model distribution shift, and reward hacking in LLM reinforcement learning, this paper proposes TreeRL, an end-to-end on-policy tree search framework. Methodologically, it tightly couples policy-gradient optimization with dynamic tree search so that trees are constructed and updated on-policy; replaces standalone reward modeling with stepwise intermediate supervision to mitigate distribution mismatch; and introduces an uncertainty-driven branching and expansion strategy to improve search efficiency. Empirically, the approach significantly outperforms ChainRL on mathematical reasoning and code generation benchmarks, achieving superior search quality and reasoning performance under identical token budgets. The implementation is open-sourced.
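To make the uncertainty-driven branching idea concrete, here is a minimal Python sketch that selects the branch point by per-step token entropy instead of branching at a uniformly random step. The step representation and the names `step_entropy` and `pick_branch_step` are hypothetical, invented for this illustration; TreeRL's actual uncertainty criterion may differ.

```python
import math
from typing import List

def step_entropy(token_probs: List[List[float]]) -> float:
    """Mean per-token entropy over one reasoning step.

    `token_probs` holds, for each generated token in the step, the model's
    probability distribution over the vocabulary (toy-sized here).
    Hypothetical helper for illustration only.
    """
    entropies = [
        -sum(p * math.log(p) for p in dist if p > 0) for dist in token_probs
    ]
    return sum(entropies) / len(entropies)

def pick_branch_step(steps: List[List[List[float]]]) -> int:
    """Return the index of the highest-uncertainty step to branch from,
    rather than picking a branch point at random."""
    scores = [step_entropy(step) for step in steps]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy example: three steps, each a list of per-token distributions over 3 tokens.
steps = [
    [[0.98, 0.01, 0.01]],                   # confident step
    [[0.40, 0.35, 0.25], [0.50, 0.30, 0.20]],  # uncertain step -> branch here
    [[0.90, 0.05, 0.05]],
]
print(pick_branch_step(steps))  # -> 1
```

The intuition is that a high-entropy step is one where the policy is genuinely undecided, so spending the branching budget there yields more diverse continuations per generated token than branching where the model is already confident.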
📝 Abstract
Reinforcement learning (RL) with tree search has demonstrated superior performance in traditional reasoning tasks. Compared to conventional independent chain sampling with outcome supervision, tree search enables better exploration of the reasoning space and provides dense, on-policy process rewards during RL training, yet it remains under-explored in on-policy LLM RL. We propose TreeRL, a reinforcement learning framework that directly incorporates on-policy tree search into RL training. Existing approaches typically train a separate process reward model, which can suffer from distribution mismatch and reward hacking; our approach instead derives intermediate supervision from the search tree itself and eliminates the need for separate reward model training. We also introduce a cost-effective tree search strategy that achieves higher search efficiency under the same generation token budget by strategically branching from high-uncertainty intermediate steps rather than branching at random. Experiments on challenging math and code reasoning benchmarks demonstrate that TreeRL achieves superior performance compared to traditional ChainRL, highlighting the potential of tree search for LLM reinforcement learning. TreeRL is open-sourced at https://github.com/THUDM/TreeRL.
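To illustrate how a tree can supply process rewards without a learned reward model, here is a minimal sketch of one natural scoring rule: rate each intermediate step by the empirical success rate of the final answers reachable from it. This assumes leaves carry verifiable outcome rewards (e.g., answer correctness); `Node`, `leaf_outcomes`, and `process_reward` are hypothetical names for illustration, not TreeRL's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """One step in the search tree; leaves store a verifiable outcome reward."""
    children: List["Node"] = field(default_factory=list)
    outcome: Optional[float] = None  # leaves: 1.0 if the final answer is correct, else 0.0

def leaf_outcomes(node: Node) -> List[float]:
    """Collect outcome rewards from all leaves in the subtree rooted at `node`."""
    if not node.children:
        return [node.outcome] if node.outcome is not None else []
    out: List[float] = []
    for child in node.children:
        out.extend(leaf_outcomes(child))
    return out

def process_reward(node: Node) -> float:
    """Score an intermediate step by the empirical success rate of the final
    answers reachable from it -- no separate reward model needed."""
    outcomes = leaf_outcomes(node)
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# Toy tree: a step whose subtree reaches three final answers, two of them correct.
root = Node(children=[Node(outcome=1.0), Node(outcome=1.0), Node(outcome=0.0)])
print(process_reward(root))  # -> 0.666...
```

Because these scores are computed from the policy's own on-policy rollouts, they stay on-distribution as the policy updates, which is the mismatch a separately trained process reward model can suffer from.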