Hybrid Reward Normalization for Process-supervised Non-verifiable Agentic Tasks

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the inefficiency of training large language models on complex agentic tasks, where sparse outcome rewards provide only delayed feedback, this paper proposes a reinforcement learning framework that jointly leverages process supervision and outcome verification. The method introduces: (1) a principle-driven process reward mechanism that reliably evaluates individual reasoning steps without requiring verifiable intermediate answers; and (2) ReNorm, a reward normalization technique that dynamically calibrates the scale and weight of process versus outcome rewards, keeping local step quality consistent with global task objectives. Extensive experiments on multiple long-horizon reasoning benchmarks show significant improvements over state-of-the-art methods, particularly on non-verifiable tasks, where the approach exhibits superior robustness and generalization. The code and models are publicly released.
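The summary describes the principle-driven process reward only at a high level. As a rough, non-authoritative sketch of how such step-level scoring could work, a judge model might be prompted with an explicit rubric of principles and asked for a bounded score; the principles, `build_judge_prompt`, `score_step`, and the `call_llm_judge` stub below are all hypothetical and not taken from the paper.

```python
# Hypothetical sketch only: the principles, prompt format, and the
# `call_llm_judge` callable are placeholders, not the paper's actual setup.
PRINCIPLES = [
    "The step is logically consistent with the trajectory so far.",
    "Any tool call (e.g., a search query) serves the current sub-goal.",
    "Claims are grounded in retrieved evidence rather than invented.",
]

def build_judge_prompt(trajectory_so_far: str, step: str) -> str:
    # Present the principles as an explicit rubric for the judge.
    rubric = "\n".join(f"- {p}" for p in PRINCIPLES)
    return (
        "Rate the candidate step against each principle and reply with a "
        "single score between 0 and 1.\n\n"
        f"Principles:\n{rubric}\n\n"
        f"Trajectory so far:\n{trajectory_so_far}\n\n"
        f"Candidate step:\n{step}\n\nScore:"
    )

def score_step(trajectory_so_far: str, step: str, call_llm_judge) -> float:
    """Return a process reward in [0, 1] for one step (illustration only)."""
    reply = call_llm_judge(build_judge_prompt(trajectory_so_far, step))
    try:
        return min(1.0, max(0.0, float(reply.strip())))
    except ValueError:
        return 0.0  # fall back when the judge output cannot be parsed
```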

📝 Abstract
Large Language Models (LLMs) increasingly rely on external tools such as search engines to solve complex agentic tasks that require reasoning and external knowledge retrieval. Recently, reinforcement learning with verifiable rewards (RLVR) has demonstrated its effectiveness in advancing the capabilities of LLMs by rewarding final answers via outcome rewards. While straightforward to supervise, outcome rewards only provide sparse signals and delayed feedback, which limits their effectiveness on long trajectories. Process rewards address this by evaluating intermediate steps, providing fine-grained supervision and encouraging grounded problem solving. However, it is notoriously hard to annotate step-wise labels, especially for non-verifiable processes without "golden" answers. Furthermore, step-wise judgment requires balancing local step quality against its contribution to the final outcome, as optimizing for higher process rewards may not always align with better final outcomes. To address the above challenges, we introduce Principle Process Reward (PPR), an RL approach that unifies principled step-level assessment and outcome verification. We train a principle-based reward model to improve the transparency and reliability of process evaluation, and further introduce a Reward Normalization (ReNorm) strategy to calibrate outcome and process rewards. Experimental results show that PPR achieves state-of-the-art performance across a wide range of benchmarks, demonstrating its impressive robustness and generalization. Our code and model collection are available at this link.
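The abstract states that ReNorm calibrates outcome and process rewards but does not reproduce the formula here. As a minimal sketch, assuming a simple standardization and a mixing weight `alpha` (the function name `combine_rewards` and the blending rule are illustrative assumptions, not the paper's method), putting the two signals on a comparable scale before combining them per step might look like this:

```python
import numpy as np

def combine_rewards(process_rewards, outcome_reward, alpha=0.5, eps=1e-8):
    """Blend per-step process scores with a sparse outcome reward (sketch).

    This is NOT the paper's ReNorm formula; it only illustrates the general
    idea of rescaling the two signals to a comparable range before mixing.
    `alpha` is an assumed mixing weight.
    """
    p = np.asarray(process_rewards, dtype=float)
    # Standardize process scores so their scale is comparable to the
    # outcome signal, which is typically a 0/1 verification result.
    p_norm = (p - p.mean()) / (p.std() + eps)
    # Broadcast the delayed outcome reward over every step and mix.
    return alpha * p_norm + (1.0 - alpha) * float(outcome_reward)

# Example: five reasoning steps, final answer judged correct (outcome = 1).
print(combine_rewards([0.2, 0.7, 0.4, 0.9, 0.8], outcome_reward=1.0))
```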
Problem

Research questions and friction points this paper is trying to address.

Addresses sparse rewards in long agentic task trajectories
Enables process supervision without golden answers for non-verifiable tasks
Aligns step-wise quality assessment with final outcome optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Principle-based reward model for process evaluation
Reward normalization to calibrate outcome and process
Unified RL approach with step-level and outcome assessment
Peiran Xu
Accio Team, Alibaba Group
Zhuohao Li
Accio Team, Alibaba Group
Xiaoying Xing
Northwestern
computer vision, machine learning, multimodality
Guannan Zhang
Accio Team, Alibaba Group
Debiao Li
University of California, Los Angeles (UCLA)
Kunyu Shi
Accio Team, Alibaba Group