🤖 AI Summary
This work addresses the challenge of continual interactive learning for LLM-based agents without explicit outcome supervision, focusing on three core issues: process reward modeling, exploration efficiency, and inference consistency. To this end, we propose AgentPRM, a framework featuring a lightweight actor-critic architecture augmented with Monte Carlo rollouts to enable online-updatable process reward models (PRMs). We further introduce InversePRM, a method that infers process rewards directly from demonstrations, eliminating the need for outcome annotations. Together these constitute a systematic paradigm for process reward modeling tailored to LLM agents, simultaneously advancing reward shaping, exploration guidance, and model-predictive reasoning. Evaluated on the ALFWorld benchmark, our 3B-parameter models substantially outperform strong GPT-4o baselines, demonstrating test-time scalability and robustness against reward hacking. The complete implementation is open-sourced.
📝 Abstract
We introduce Agent Process Reward Models (AgentPRM), a simple and scalable framework for training LLM agents to continually improve through interactions. AgentPRM follows a lightweight actor-critic paradigm, using Monte Carlo rollouts to compute reward targets and optimize policies. It requires minimal modifications to existing RLHF pipelines, making it easy to integrate at scale. Beyond AgentPRM, we propose InversePRM, which learns process rewards directly from demonstrations without explicit outcome supervision. We also explore key challenges and opportunities, including exploration, process reward shaping, and model-predictive reasoning. We evaluate on the ALFWorld benchmark, showing that small 3B models trained with AgentPRM and InversePRM outperform strong GPT-4o baselines, and analyze test-time scaling, reward hacking, and more. Our code is available at: https://github.com/sanjibanc/agent_prm.
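To make the Monte Carlo reward-target idea concrete, here is a minimal, hedged sketch: for each state visited along a trajectory, roll the current policy out several times and use the fraction of rollouts that end in task success as that state's process-reward target. The toy environment, `env_step` interface, and random-walk policy below are illustrative assumptions, not the paper's actual ALFWorld setup or code.

```python
import random

def rollout_success(state, policy, env_step, max_steps=10):
    """Roll out the policy from `state`; return 1.0 if the episode
    ends in success within max_steps, else 0.0."""
    s = state
    for _ in range(max_steps):
        a = policy(s)
        s, done, success = env_step(s, a)
        if done:
            return 1.0 if success else 0.0
    return 0.0  # episode truncated without success

def mc_reward_targets(trajectory_states, policy, env_step, n_rollouts=8):
    """Monte Carlo value estimate for each visited state: the fraction
    of independent rollouts from that state that reach task success.
    These scalars serve as regression targets for training a PRM."""
    return [
        sum(rollout_success(s, policy, env_step) for _ in range(n_rollouts)) / n_rollouts
        for s in trajectory_states
    ]

# Toy chain environment (hypothetical): state is an integer position;
# reaching 5 is success, dropping below 0 is failure.
def env_step(s, a):
    s2 = s + a
    done = s2 >= 5 or s2 < 0
    return s2, done, s2 >= 5

policy = lambda s: random.choice([1, -1])  # random-walk stand-in policy

random.seed(0)
targets = mc_reward_targets([0, 2, 4], policy, env_step)
```

In the full framework these targets would label intermediate agent steps, and the PRM trained on them guides both exploration and policy optimization; this sketch only shows how the targets themselves are estimated.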