Grad2Reward: From Sparse Judgment to Dense Rewards for Improving Open-Ended LLM Reasoning

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of existing LLM-as-a-Judge reinforcement learning approaches, which provide only sparse sequence-level rewards and thus struggle to support fine-grained optimization for complex, long-horizon reasoning in open-domain tasks. To overcome this, the authors propose Grad2Reward, a novel framework that leverages gradient information from a single backward pass of the Judge model to derive dense, token-level process rewards, enabling precise credit assignment. Additionally, Grad2Reward introduces a self-critique mechanism that allows the policy model to perform efficient self-supervised optimization using its own evaluative signals, eliminating the need for external reward models or stronger Judges. Experiments demonstrate that the method significantly improves both reasoning quality and training efficiency across diverse open-domain tasks, validating its effectiveness and generalization capability.

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has catalyzed significant breakthroughs in complex LLM reasoning within verifiable domains such as mathematics and programming. Recent efforts have sought to extend this paradigm to open-ended tasks by employing an LLM-as-a-Judge to provide sequence-level rewards for policy optimization. However, these rewards are inherently sparse, failing to provide the fine-grained supervision necessary for generating complex, long-form trajectories. Furthermore, current work treats the Judge as a black-box oracle, discarding the rich intermediate feedback signals it encodes. To address these limitations, we introduce Grad2Reward, a novel framework that extracts dense process rewards directly from the Judge's inference process via a single backward pass. By leveraging gradient-based attribution, Grad2Reward enables precise token-level credit assignment, substantially enhancing training efficiency and reasoning quality. Additionally, Grad2Reward introduces a self-judging mechanism, allowing the policy to improve through its own evaluative signals without training specialized reward models or relying on superior external Judges. Experiments demonstrate that policies optimized with Grad2Reward achieve outstanding performance across diverse open-ended tasks, affirming its effectiveness and broad generalizability.
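The core idea — redistributing a sparse sequence-level judge score over tokens via gradient-based attribution — can be illustrated with a toy example. This is a hypothetical sketch, not the paper's implementation: it uses a linear "judge" whose gradient is known in closed form, so the gradient-times-input attribution per token can be computed without an autodiff framework. All function names here are illustrative.

```python
# Toy sketch of gradient-based token attribution (assumed, simplified setup).
# Judge: score = sum over tokens t of (w . e_t), a linear scorer over token
# embeddings. For this judge, d(score)/d(e_t) = w for every token, so the
# gradient-times-input attribution of token t is simply w . e_t — the analytic
# analogue of the "single backward pass" described in the abstract.

def judge_score(weights, token_embs):
    """Sparse sequence-level reward: sum over tokens of w . e_t."""
    return sum(sum(w * x for w, x in zip(weights, e)) for e in token_embs)

def token_attributions(weights, token_embs):
    """Gradient-times-input credit per token (gradient computed analytically)."""
    return [sum(w * x for w, x in zip(weights, e)) for e in token_embs]

def dense_rewards(weights, token_embs):
    """Redistribute the sequence reward over tokens by attribution share,
    preserving the total reward mass."""
    score = judge_score(weights, token_embs)
    attrs = token_attributions(weights, token_embs)
    total = sum(abs(a) for a in attrs) or 1.0
    return [score * abs(a) / total for a in attrs]
```

For example, with `weights = [1.0, 0.0]` and two token embeddings `[[2.0, 5.0], [1.0, 3.0]]`, the sequence score is 3.0 and the dense rewards are `[2.0, 1.0]` — the single scalar judgment is split across tokens in proportion to each token's gradient-based contribution, which is the kind of fine-grained credit assignment the sparse reward alone cannot provide.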
Problem

Research questions and friction points this paper is trying to address.

sparse rewards
open-ended reasoning
LLM-as-a-Judge
fine-grained supervision
intermediate feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

dense rewards
gradient-based attribution
self-judging
LLM-as-a-Judge
token-level credit assignment
Zheng Zhang
School of Information Science and Technology, ShanghaiTech University; State Key Laboratory of General Artificial Intelligence, BIGAI
Ao Lu
School of Information Science and Technology, ShanghaiTech University
Yuanhao Zeng
School of Information Science and Technology, ShanghaiTech University; State Key Laboratory of General Artificial Intelligence, BIGAI
Ziwei Shan
School of Information Science and Technology, ShanghaiTech University
Jinjin Guo
JD.com
Lufei Li
School of Information Science and Technology, ShanghaiTech University
Yexin Li
State Key Laboratory of General Artificial Intelligence, BIGAI
reinforcement learning · multi-agent system · multi-armed bandits · data mining
Kan Ren
Assistant Professor, ShanghaiTech University
Machine Learning · Data Mining · Large Language Model · Foundation Model