AI Summary
This work addresses the challenge of sparse terminal rewards hindering fine-grained, state-level optimization in large language model agents trained via reinforcement learning. To overcome this limitation, the authors propose RewardFlow, a novel approach that constructs a state graph from reasoning trajectories and leverages its topological structure to assess each state's contribution to task success. RewardFlow introduces a lightweight, topology-aware graph propagation mechanism that generates dense state-level reward signals without requiring any additional reward models. By incorporating state graph topology into process reward modeling for the first time, RewardFlow achieves significant performance gains over existing reinforcement learning methods across four agent reasoning benchmarks, demonstrating superior effectiveness, robustness, and training efficiency.
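The mechanism described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's actual propagation rule: it builds a state graph from trajectories of state IDs, back-propagates each trajectory's terminal reward with a decay factor, and dilutes credit for states with many distinct successors as a crude topology-aware weighting. All function and parameter names (`propagate_rewards`, `decay`) are invented for this sketch.

```python
from collections import defaultdict

def propagate_rewards(trajectories, terminal_rewards, decay=0.9):
    """Hypothetical sketch of topology-aware reward propagation.

    trajectories: list of state-ID sequences, one per rollout.
    terminal_rewards: terminal reward of each trajectory.
    Returns a dict mapping each state to a dense reward estimate.
    """
    # Build the state graph: add an edge u -> v for each pair of
    # consecutive states observed in any trajectory.
    successors = defaultdict(set)
    for traj in trajectories:
        for u, v in zip(traj, traj[1:]):
            successors[u].add(v)

    # Back-propagate each terminal reward along its trajectory,
    # decayed by distance from the terminal state.
    credit = defaultdict(float)
    counts = defaultdict(int)
    for traj, r in zip(trajectories, terminal_rewards):
        for dist, s in enumerate(reversed(traj)):
            credit[s] += (decay ** dist) * r
            counts[s] += 1

    # Average over visits and dilute by out-degree: a state that
    # branches into many successors receives less concentrated credit.
    return {
        s: credit[s] / (counts[s] * (1 + len(successors[s])))
        for s in credit
    }
```

For example, with two rollouts sharing a prefix, `propagate_rewards([["s0", "s1", "s2"], ["s0", "s1", "s3"]], [1.0, 0.0])` assigns full credit to the successful terminal state `s2`, zero to the failed one `s3`, and intermediate values to the shared states `s0` and `s1`.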
Abstract
Reinforcement learning (RL) holds significant promise for enhancing the agentic reasoning capabilities of large language models (LLMs) interacting with external environments. However, the inherent sparsity of terminal rewards hinders fine-grained, state-level optimization. Although process reward modeling offers a promising alternative, training dedicated reward models often entails substantial computational costs and scaling difficulties. To address these challenges, we introduce RewardFlow, a lightweight method for estimating state-level rewards tailored to agentic reasoning tasks. RewardFlow leverages the intrinsic topological structure of states within reasoning trajectories by constructing state graphs. This enables an analysis of state-wise contributions to success, followed by topology-aware graph propagation to quantify those contributions and yield objective, state-level rewards. When integrated as dense rewards for RL optimization, RewardFlow substantially outperforms prior RL baselines across four agentic reasoning benchmarks, demonstrating superior performance, robustness, and training efficiency. The implementation of RewardFlow is publicly available at https://github.com/tmlr-group/RewardFlow.