🤖 AI Summary
Large language models (LLMs) acting as autonomous agents often fail complex multi-step tasks because of errors at critical steps; meanwhile, existing process reward models (PRMs) suffer from high computational overhead and poor scalability due to exhaustive per-step trajectory exploration.
Method: This paper proposes Reward Rising Optimization (RRO), a process supervision mechanism oriented toward rising rewards. Its core innovation is the introduction of reward-rising trajectories: absolute reward evaluation is replaced with differential reward modeling and incremental process supervision, dynamically expanding the candidate space for next actions. The method integrates trajectory-level reinforcement learning with mathematical optimization analysis for theoretically grounded, computationally efficient, and generalizable supervision.
Contribution/Results: Evaluated on the WebShop and InterCode-SQL benchmarks, the method reduces exploration cost by over 60% while improving task success rates by 12.4–18.7%, significantly outperforming state-of-the-art PRMs.
📄 Abstract
Large language models (LLMs) have exhibited extraordinary performance on a variety of tasks, yet it remains challenging for them to solve complex multi-step tasks as agents. In practice, agents are sensitive to the outcomes of certain key steps, which makes them likely to fail the task because of a subtle mistake in the planning trajectory. Recent approaches resort to calibrating the reasoning process through reinforcement learning, rewarding or penalizing every reasoning step with process supervision, known as Process Reward Models (PRMs). However, PRMs are difficult and costly to scale up to a large number of next-action candidates, since they require extensive computation to acquire training data through per-step trajectory exploration. To mitigate this issue, we focus on the relative reward trend across successive reasoning steps and propose maintaining an increasing reward in the collected trajectories for process supervision, which we term Reward Rising Optimization (RRO). Specifically, we incrementally augment the process supervision until we identify a step exhibiting a positive reward differential, i.e., a rising reward, relative to its preceding iteration. This method dynamically expands the search space of next-action candidates, efficiently capturing high-quality training data. We provide mathematical grounding and empirical results on the WebShop and InterCode-SQL benchmarks, showing that our proposed RRO achieves superior performance while requiring much less exploration cost.
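To make the data-collection loop concrete, below is a minimal Python sketch of the reward-rising stopping criterion as we read it from the abstract; `sample_action`, `estimate_reward`, and `max_candidates` are hypothetical placeholders for the agent's policy, a per-step reward estimator (e.g., rollout-based), and the exploration budget, not the paper's actual API.

```python
# Minimal sketch (not the paper's implementation) of the reward-rising
# exploration loop: draw next-action candidates one at a time and stop as
# soon as one yields a positive reward differential over the previous step.

import random
from typing import Callable, List, Tuple

def collect_rising_step(
    trajectory: List[str],
    prev_reward: float,
    sample_action: Callable[[List[str]], str],
    estimate_reward: Callable[[List[str], str], float],
    max_candidates: int = 8,
) -> Tuple[str, float, int]:
    """Incrementally expand the candidate pool until a candidate's reward
    rises above the previous step's reward, or the budget is exhausted."""
    best_action, best_reward = None, float("-inf")
    for n in range(1, max_candidates + 1):
        action = sample_action(trajectory)
        reward = estimate_reward(trajectory, action)
        if reward > best_reward:
            best_action, best_reward = action, reward
        if reward > prev_reward:  # positive reward differential: stop early
            return action, reward, n
    # No rising step within the budget: fall back to the best candidate seen.
    return best_action, best_reward, max_candidates

# Toy usage with random placeholders in place of a real agent and estimator:
if __name__ == "__main__":
    random.seed(0)
    step, r, n = collect_rising_step(
        trajectory=["search[shoes]"],
        prev_reward=0.4,
        sample_action=lambda traj: f"click[item-{random.randint(0, 9)}]",
        estimate_reward=lambda traj, a: random.random(),
    )
    print(f"selected {step!r} with reward {r:.2f} after {n} samples")
```

The early stop is what saves exploration cost in this reading: candidates are drawn one at a time, so the loop avoids exhaustive per-step enumeration whenever a rising reward appears early and only spends the full budget when no candidate improves on the preceding step.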