🤖 AI Summary
This work addresses the limited selection capability of small-scale language models in Best-of-N inference, which hinders performance gains from parallel sampling. To overcome this, the authors propose training a compact reasoning model via reinforcement learning to develop strong generative selection ability, enabling it to accurately identify correct answers among multiple candidates. The approach builds on the DAPO algorithm and leverages synthetically generated positive and negative samples derived from mathematical and code instruction data. Experimental results demonstrate that the resulting model significantly outperforms prompt-engineering and majority-voting baselines on benchmarks including AIME24/25, HMMT25, and LiveCodeBench, achieving performance comparable to or even surpassing that of much larger models, while also exhibiting strong cross-model generalization.
📝 Abstract
Scaling test-time compute via parallel sampling can substantially improve LLM reasoning, but is often limited by Best-of-N selection quality. Generative selection methods, such as GenSelect, address this bottleneck, yet strong selection performance remains largely limited to large models. We show that small reasoning models can acquire strong GenSelect capabilities through targeted reinforcement learning. To this end, we synthesize selection tasks from large-scale math and code instruction datasets by filtering to instances with both correct and incorrect candidate solutions, and train 1.7B-parameter models with DAPO to reward correct selections. Across math (AIME24, AIME25, HMMT25) and code (LiveCodeBench) reasoning benchmarks, our models consistently outperform prompting and majority-voting baselines, often approaching or exceeding much larger models. Moreover, these gains generalize to selecting outputs from stronger models despite training only on outputs from weaker models. Overall, our results establish reinforcement learning as a scalable way to unlock strong generative selection in small models, enabling efficient test-time scaling.
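The data-synthesis step in the abstract (keeping only problems whose sampled candidates contain both correct and incorrect solutions, so that selection is non-trivial and a reward signal exists) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the record fields (`question`, `candidates`, `is_correct`, `gold_indices`) are hypothetical names for the purpose of the sketch.

```python
def synthesize_selection_tasks(problems):
    """Build GenSelect-style training tasks from sampled candidates.

    problems: list of dicts, each with a 'question' string and a
    'candidates' list of dicts holding 'solution' and 'is_correct'.
    Only problems with a mixed candidate pool (at least one correct
    and one incorrect solution) yield a usable selection task.
    """
    tasks = []
    for p in problems:
        has_correct = any(c["is_correct"] for c in p["candidates"])
        has_incorrect = any(not c["is_correct"] for c in p["candidates"])
        if has_correct and has_incorrect:
            tasks.append({
                "question": p["question"],
                "candidates": [c["solution"] for c in p["candidates"]],
                # Indices of correct candidates; an RL reward would check
                # whether the model's selection falls in this set.
                "gold_indices": [i for i, c in enumerate(p["candidates"])
                                 if c["is_correct"]],
            })
    return tasks
```

Problems where every candidate is correct (or every candidate is wrong) are discarded, since selecting among them either cannot fail or cannot succeed, and thus provides no learning signal.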