Reinforcing Chain-of-Thought Reasoning with Self-Evolving Rubrics

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of traditional reward models in supervising chain-of-thought (CoT) reasoning in large language models: such reward models rely on human annotations and fail to adapt to the shifting distribution of CoT outputs during training. To overcome this, the authors propose RLCER, a method built on a self-evolving rubric mechanism that requires no human intervention: the model autonomously generates and iteratively refines its own scoring rubrics, using them within reinforcement learning to supervise the reasoning process itself. This enables co-evolution between process-oriented rewards and reasoning capabilities. Notably, RLCER outperforms outcome-focused RLVR even in the absence of final-answer reward signals, and the evolved rubrics can be repurposed as reasoning prompts to further enhance inference-time performance.
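The summarized training loop (self-propose rubrics, score each CoT against them, periodically refine the rubrics) can be sketched in miniature. This is a toy illustration, not the paper's actual algorithm: the function names (`propose_rubrics`, `score_cot`, `refine_rubrics`), the set-membership scoring, and the refinement rule are all hypothetical stand-ins for what a model would do.

```python
# Hedged sketch of a self-evolving-rubric reward loop in the spirit of RLCER.
# All names and rules here are illustrative assumptions, not the paper's API.

def propose_rubrics():
    """Self-proposed initial rubric: criterion -> weight (toy stand-in for
    rubrics the model would generate for itself)."""
    return {"states_assumptions": 1.0, "checks_arithmetic": 1.0, "verifies_answer": 1.0}

def score_cot(cot_criteria, rubrics):
    """Rubric-based process reward: weighted fraction of rubric criteria
    that the CoT satisfies (here, simple set membership)."""
    total = sum(rubrics.values())
    hit = sum(w for c, w in rubrics.items() if c in cot_criteria)
    return hit / total if total else 0.0

def refine_rubrics(rubrics, recent_rewards):
    """Self-evolution step: a toy rule that upweights all criteria when
    recent rewards are low, standing in for the model rewriting its rubric."""
    if recent_rewards and sum(recent_rewards) / len(recent_rewards) < 0.5:
        return {c: w * 1.1 for c, w in rubrics.items()}
    return dict(rubrics)

def rlcer_loop(rollouts, refine_every=2):
    """Score each rollout's CoT with the current rubric and periodically
    refine the rubric itself, so rewards and rubrics co-evolve."""
    rubrics = propose_rubrics()
    rewards = []
    for i, cot in enumerate(rollouts, 1):
        rewards.append(score_cot(cot, rubrics))
        if i % refine_every == 0:
            rubrics = refine_rubrics(rubrics, rewards[-refine_every:])
    return rewards, rubrics
```

In the paper's setting the rubric proposal, scoring, and refinement would each be performed by the LLM itself, and the resulting rubric score would replace or supplement the outcome reward in an RLVR-style update.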

📝 Abstract
Despite chain-of-thought (CoT) playing crucial roles in LLM reasoning, directly rewarding it is difficult: training a reward model demands heavy human labeling efforts, and static RMs struggle with evolving CoT distributions and reward hacking. These challenges motivate us to seek an autonomous CoT rewarding approach that requires no human annotation efforts and can evolve gradually. Inspired by recent self-evolving training methods, we propose **RLCER** (**R**einforcement **L**earning with **C**oT Supervision via Self-**E**volving **R**ubrics), which enhances outcome-centric RLVR by rewarding CoTs with self-proposed and self-evolving rubrics. We show that self-proposed and self-evolving rubrics provide reliable CoT supervision signals even without outcome rewards, enabling RLCER to outperform outcome-centric RLVR. Moreover, when used as in-prompt hints, these self-proposed rubrics further improve inference-time performance.
Problem

Research questions and friction points this paper is trying to address.

Chain-of-Thought
reward model
self-evolving
reinforcement learning
reward hacking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chain-of-Thought
Self-Evolving Rubrics
Reinforcement Learning
Reward Modeling
LLM Reasoning