Beyond Correctness: Confidence-Aware Reward Modeling for Enhancing Large Language Model Reasoning

📅 2025-11-09
🏛️ Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
📈 Citations: 0
Influential: 0
🤖 AI Summary
Small-scale large language models (LLMs) suffer from low-quality reasoning traces and answer-reasoning inconsistency in reinforcement learning (RL) when trained with rule-based rewards. To address this, we propose a confidence-aware reward modeling method that penalizes both incorrect answers and correct answers associated with low confidence—thereby explicitly guiding the model to produce logically consistent and high-reliability reasoning chains. Technically, our approach integrates static reward evaluation, Best-of-N inference-time testing, and the Proximal Policy Optimization (PPO) framework for end-to-end optimization on STEM-domain tasks. Experiments demonstrate substantial improvements in reasoning accuracy and stability across multiple STEM benchmarks, outperforming leading open-source reward models. The code and trained models are publicly released.
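The reward idea summarized above can be sketched as a simple shaping function. The threshold and numeric values below are illustrative assumptions, not the paper's actual formulation; how confidence is scored (e.g., mean token log-probability or self-consistency across samples) is likewise left abstract here.

```python
def confidence_aware_reward(is_correct: bool, confidence: float,
                            threshold: float = 0.7) -> float:
    """Sketch of a confidence-aware reward (values are illustrative).

    A plain rule-based reward gives +1/-1 on answer correctness alone,
    which lets lucky guesses backed by weak reasoning chains collect
    reward. Here a correct answer is fully rewarded only when it comes
    with high confidence; low-confidence correct answers are penalized.
    """
    if not is_correct:
        return -1.0   # wrong answer: full penalty
    if confidence < threshold:
        return -0.5   # correct but low-confidence: penalized as unreliable
    return 1.0        # correct and confident: full reward
```

In PPO-based training, a scalar like this would stand in for the usual rule-based reward signal, so the policy is pushed toward reasoning chains that are both correct and reliably held.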

📝 Abstract
Recent advancements in large language models (LLMs) have shifted the post-training paradigm from traditional instruction tuning and human preference alignment toward reinforcement learning (RL) focused on reasoning capabilities. However, numerous technical reports indicate that purely rule-based reward RL frequently results in poor-quality reasoning chains or inconsistencies between reasoning processes and final answers, particularly when the base model is of smaller scale. During the RL exploration process, models may employ low-quality reasoning chains due to a lack of knowledge, occasionally producing correct answers by chance and still receiving rewards from rule-based judges. This constrains the potential for resource-limited organizations to conduct direct reinforcement learning training on smaller-scale models. We propose a novel confidence-based reward model tailored for enhancing STEM reasoning capabilities. Unlike conventional approaches, our model penalizes not only incorrect answers but also low-confidence correct responses, thereby promoting more robust and logically consistent reasoning. We validate the effectiveness of our approach through static evaluations, Best-of-N inference tests, and PPO-based RL training. Our method outperforms several state-of-the-art open-source reward models across diverse STEM benchmarks. We release our code and model at https://github.com/qianxiHe147/C2RM.
Problem

Research questions and friction points this paper is trying to address.

Enhances reasoning in small language models through confidence-aware rewards
Addresses inconsistent reasoning chains and random correct answers in RL
Improves STEM reasoning by penalizing low-confidence correct responses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Confidence-based reward model penalizes low-confidence correct responses
Novel reward model enhances STEM reasoning capabilities
Method outperforms state-of-the-art reward models in benchmarks
Qianxi He
School of Computer Science, Fudan University
Qingyu Ren
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University
Shanzhe Lei
Shanghai Artificial Intelligence Laboratory
Xuhong Wang
Shanghai Artificial Intelligence Laboratory
Yingchun Wang
Shanghai Artificial Intelligence Laboratory