🤖 AI Summary
Small-scale large language models (LLMs) trained with reinforcement learning (RL) under rule-based rewards often produce low-quality reasoning traces and inconsistencies between the reasoning process and the final answer. To address this, we propose a confidence-aware reward model that penalizes both incorrect answers and correct answers produced with low confidence, explicitly guiding the model toward logically consistent and reliable reasoning chains. We validate the approach through static reward evaluation, Best-of-N inference-time testing, and end-to-end Proximal Policy Optimization (PPO) training on STEM-domain tasks. Experiments show substantial improvements in reasoning accuracy and stability across multiple STEM benchmarks, outperforming leading open-source reward models. The code and trained models are publicly released.
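The core idea, a reward that distinguishes confidently supported correct answers from lucky guesses, can be sketched as follows. This is a minimal illustration only: the function name, threshold, and penalty values are assumptions and do not reflect the paper's exact reward formulation.

```python
# Minimal sketch of a confidence-aware reward, assuming a rule-based verifier
# for answer correctness and a reward model that outputs a confidence score
# in [0, 1] for whether the reasoning chain supports the answer.
# Threshold and reward magnitudes are illustrative assumptions.

def confidence_aware_reward(answer_correct: bool, confidence: float,
                            conf_threshold: float = 0.7) -> float:
    """Score one sampled reasoning trace."""
    if not answer_correct:
        return -1.0   # wrong answers are always penalized
    if confidence < conf_threshold:
        return -0.5   # correct but low-confidence ("lucky guess") answers are also penalized
    return 1.0        # correct and confidently supported reasoning is rewarded


# A correct answer backed by a shaky reasoning chain still receives a negative reward.
print(confidence_aware_reward(answer_correct=True, confidence=0.3))   # -0.5
print(confidence_aware_reward(answer_correct=True, confidence=0.9))   #  1.0
print(confidence_aware_reward(answer_correct=False, confidence=0.9))  # -1.0
```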
📝 Abstract
Recent advancements in large language models (LLMs) have shifted the post-training paradigm from traditional instruction tuning and human preference alignment toward reinforcement learning (RL) focused on reasoning capabilities. However, numerous technical reports indicate that purely rule-based reward RL frequently results in poor-quality reasoning chains or inconsistencies between reasoning processes and final answers, particularly when the base model is of smaller scale. During RL exploration, such models may rely on low-quality reasoning chains due to a lack of knowledge, occasionally arriving at correct answers by chance and still receiving rewards from rule-based judges. This constrains the potential for resource-limited organizations to conduct direct reinforcement learning training on smaller-scale models. We propose a novel confidence-based reward model tailored for enhancing STEM reasoning capabilities. Unlike conventional approaches, our model penalizes not only incorrect answers but also low-confidence correct responses, thereby promoting more robust and logically consistent reasoning. We validate the effectiveness of our approach through static evaluations, Best-of-N inference tests, and PPO-based RL training. Our method outperforms several state-of-the-art open-source reward models across diverse STEM benchmarks. We release our code and models at https://github.com/qianxiHe147/C2RM.
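For the Best-of-N inference-time evaluation mentioned above, a reward model is typically used to rerank sampled responses. The sketch below shows that pattern under stated assumptions: `generate` and `score` are hypothetical stand-ins for the policy model's sampler and the trained reward model, not APIs from the released repository.

```python
# Minimal Best-of-N selection sketch: sample n candidate reasoning traces and
# keep the one the reward model scores highest. `generate` and `score` are
# hypothetical callables supplied by the user.

from typing import Callable, List, Tuple

def best_of_n(prompt: str,
              generate: Callable[[str, int], List[str]],
              score: Callable[[str, str], float],
              n: int = 8) -> Tuple[str, float]:
    """Return the highest-scoring response among n samples and its reward."""
    candidates = generate(prompt, n)                          # n sampled responses for the prompt
    scored = [(resp, score(prompt, resp)) for resp in candidates]
    return max(scored, key=lambda pair: pair[1])              # (best response, its reward score)
```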