🤖 AI Summary
To address the challenge that sparse, delayed rewards provide little supervision for early actions in multi-step tasks, this paper proposes the Stepwise Progress Attribution (SPA) framework for LLM-based agents. SPA trains a trajectory-level progress estimator that decomposes the final reward into fine-grained, stepwise intermediate rewards, each reflecting an action's incremental contribution toward task completion. The method integrates progress estimation, trajectory-level reward reconstruction, and environment-execution feedback into an end-to-end reinforcement learning pipeline. Evaluated on three benchmarks (WebShop, ALFWorld, and VirtualHome), SPA improves task success rate by 2.5% and grounding accuracy by 1.9% on average over the state-of-the-art baseline. These results indicate that SPA yields more interpretable reward attribution, provides actionable guidance for early-stage actions, and generalizes across environments.
📝 Abstract
Reinforcement learning (RL) holds significant promise for training LLM agents to handle complex, goal-oriented tasks that require multi-step interactions with external environments. However, a critical challenge when applying RL to these agentic tasks arises from delayed rewards: feedback signals are typically available only after the entire task is completed. This makes it non-trivial to attribute the delayed reward to earlier actions, providing insufficient guidance regarding environmental constraints and hindering agent training. In this work, we draw on the insight that the ultimate completion of a task emerges from the cumulative progress an agent makes across individual steps. We propose Stepwise Progress Attribution (SPA), a general reward redistribution framework that decomposes the final reward into stepwise contributions, each reflecting a step's incremental progress toward overall task completion. To achieve this, we train a progress estimator that accumulates stepwise contributions over a trajectory to match the task completion. During policy optimization, we combine the estimated per-step contribution with a grounding signal for actions executed in the environment as the fine-grained, intermediate reward for effective agent training. Extensive experiments on common agent benchmarks (including WebShop, ALFWorld, and VirtualHome) demonstrate that SPA consistently outperforms the state-of-the-art method in both success rate (+2.5% on average) and grounding accuracy (+1.9% on average). Further analyses demonstrate that our method provides substantially more effective intermediate rewards for RL training. Our code is available at https://github.com/WangHanLinHenry/SPA-RL-Agent.
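To make the reward-redistribution idea concrete, the sketch below shows one way such a progress estimator could work: per-step contributions are regressed so that their sum reconstructs the final, delayed reward, and the resulting estimates are combined with an environment-grounding signal to form dense intermediate rewards. This is a minimal PyTorch illustration under stated assumptions, not the authors' implementation; the module name `ProgressEstimator`, the per-step feature extraction, and the weighting term `lambda_ground` are all illustrative choices.

```python
# Minimal sketch of reward redistribution in the spirit of SPA.
# Assumptions: per-step features are already extracted from the agent's
# trajectory; names (ProgressEstimator, lambda_ground) are hypothetical.
import torch
import torch.nn as nn

class ProgressEstimator(nn.Module):
    """Predicts each step's incremental contribution to task completion."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, step_states: torch.Tensor) -> torch.Tensor:
        # step_states: (T, hidden_dim) -> per-step contributions of shape (T,)
        return self.head(step_states).squeeze(-1)

def reconstruction_loss(estimator, step_states, final_reward):
    """Train so the summed stepwise contributions match the delayed final reward."""
    contributions = estimator(step_states)            # (T,)
    return (contributions.sum() - final_reward) ** 2

def intermediate_rewards(estimator, step_states, grounding, lambda_ground=0.1):
    """Combine estimated per-step progress with a grounding signal
    (e.g. 1 if the action executed successfully in the environment, else 0)."""
    with torch.no_grad():
        progress = estimator(step_states)             # (T,)
    return progress + lambda_ground * grounding       # dense rewards for RL

# Toy usage on a single trajectory
T, H = 5, 16
estimator = ProgressEstimator(H)
opt = torch.optim.Adam(estimator.parameters(), lr=1e-3)
states = torch.randn(T, H)                            # stand-in for per-step features
final_reward = torch.tensor(1.0)                      # task completed
grounding = torch.tensor([1., 1., 0., 1., 1.])        # action-execution feedback

loss = reconstruction_loss(estimator, states, final_reward)
opt.zero_grad(); loss.backward(); opt.step()
dense_r = intermediate_rewards(estimator, states, grounding)
```

In a full training loop, these dense per-step rewards would replace the single terminal reward when optimizing the agent policy, so that early actions receive credit proportional to the progress they are estimated to contribute.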