🤖 AI Summary
Existing test-time scaling (TTS) methods suffer from a fundamental dichotomy: reinforcement learning (RL)-based approaches face sparse rewards and training instability, while search-based methods rely on static process reward models (PRMs) trained on costly human- or LLM-generated annotations, limiting generalizability. This paper proposes AIRL-S, the first framework to unify these paradigms. AIRL-S jointly trains an adversarial inverse reinforcement learning (AIRL) module with group relative policy optimization (GRPO), so that the dense reward function learned during RL serves directly as a *dynamic* PRM, eliminating the need for intermediate step annotations. As a result, it achieves both stable training and effective inference-time search. Evaluated across eight mathematical, scientific, and coding benchmarks, AIRL-S improves over the base model by 9% on average, matches GPT-4o's performance, and consistently outperforms PRM baselines trained on annotated data.
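To make the core mechanism concrete, here is a minimal PyTorch sketch (not the paper's code; names such as `airl_step_reward` and `f_value` are illustrative) of the standard AIRL parameterization: the discriminator logit f(s, a) − log π(a|s) recovers a dense per-step reward that can be read off as a PRM score, while the discriminator is trained to separate correct reasoning traces from policy rollouts.

```python
import torch
import torch.nn.functional as F


def airl_discriminator_logit(f_value: torch.Tensor, policy_logprob: torch.Tensor) -> torch.Tensor:
    # D(s, a) = sigmoid(f(s, a) - log pi(a | s)); return the pre-sigmoid logit.
    return f_value - policy_logprob


def airl_step_reward(f_value: torch.Tensor, policy_logprob: torch.Tensor) -> torch.Tensor:
    # Recovered dense reward: log D - log(1 - D) = f(s, a) - log pi(a | s).
    # This per-step quantity is what can double as a dynamic PRM score.
    return f_value - policy_logprob


def discriminator_loss(f_expert, logp_expert, f_policy, logp_policy) -> torch.Tensor:
    # Correct (expert) reasoning steps are labeled 1, policy rollout steps 0.
    logits_expert = airl_discriminator_logit(f_expert, logp_expert)
    logits_policy = airl_discriminator_logit(f_policy, logp_policy)
    return (
        F.binary_cross_entropy_with_logits(logits_expert, torch.ones_like(logits_expert))
        + F.binary_cross_entropy_with_logits(logits_policy, torch.zeros_like(logits_policy))
    )
```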
📝 Abstract
Test-time scaling (TTS) for large language models (LLMs) has thus far fallen into two largely separate paradigms: (1) reinforcement learning (RL) methods that optimize sparse outcome-based rewards, yet suffer from instability and low sample efficiency; and (2) search-based techniques guided by independently trained, static process reward models (PRMs), which require expensive human- or LLM-generated labels and often degrade under distribution shifts. In this paper, we introduce AIRL-S, the first natural unification of RL-based and search-based TTS. Central to AIRL-S is the insight that the reward function learned during RL training inherently represents the ideal PRM for guiding downstream search. Specifically, we leverage adversarial inverse reinforcement learning (AIRL) combined with group relative policy optimization (GRPO) to learn a dense, dynamic PRM directly from correct reasoning traces, entirely eliminating the need for labeled intermediate process data. At inference, the resulting PRM simultaneously serves as the critic for RL rollouts and as a heuristic to effectively guide search procedures, facilitating robust reasoning chain extension, mitigating reward hacking, and enhancing cross-task generalization. Experimental results across eight benchmarks, including mathematics, scientific reasoning, and code generation, demonstrate that our unified approach improves performance by 9% on average over the base model, matching GPT-4o. Furthermore, when integrated into multiple search algorithms, our PRM consistently outperforms all baseline PRMs trained with labeled data. These results underscore that, indeed, your reward function for RL is your best PRM for search, providing a robust and cost-effective solution to complex reasoning tasks in LLMs.
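As an illustration of how a dense reward learned this way could steer inference-time search, below is a minimal step-level beam search sketch. The callables `generate_step_candidates`, `prm_score`, and `is_finished` are hypothetical stand-ins for the policy's step sampler, the learned PRM, and a termination check; the procedure is a generic sketch under those assumptions, not the paper's exact algorithm.

```python
from typing import Callable, List


def prm_guided_beam_search(
    question: str,
    generate_step_candidates: Callable[[str, int], List[str]],  # partial trace -> k next steps
    prm_score: Callable[[str], float],                          # dense reward for a partial trace
    is_finished: Callable[[str], bool],                         # does the trace end in an answer?
    beam_width: int = 4,
    expand_k: int = 4,
    max_steps: int = 32,
) -> str:
    """Step-level beam search where the learned dense reward acts as the PRM heuristic."""
    beams = [question]
    for _ in range(max_steps):
        candidates = []
        for trace in beams:
            if is_finished(trace):
                candidates.append(trace)
                continue
            for step in generate_step_candidates(trace, expand_k):
                candidates.append(trace + "\n" + step)
        # Keep the highest-scoring partial traces according to the PRM.
        beams = sorted(candidates, key=prm_score, reverse=True)[:beam_width]
        if all(is_finished(trace) for trace in beams):
            break
    return max(beams, key=prm_score)
```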