🤖 AI Summary
This work investigates the robustness of large language models (LLMs) to reward noise during reinforcement learning (RL) post-training. Addressing the practical challenge that reward models are often noisy and ground-truth answer labels are unavailable for verification, the authors propose *Reasoning Pattern Reward* (RPR): a weakly supervised reward mechanism that assigns rewards based solely on the presence of key phrases in the reasoning process, without requiring answer-correctness annotations. Experiments show that Qwen-2.5-7B still reaches 72% accuracy on math tasks when 40% of the reward function's outputs are flipped (up from 5% before training), and attains over 70% accuracy using RPR alone, matching models trained under ideal, noise-free rewards. Moreover, RPR effectively calibrates biased reward models, substantially improving performance on open-ended tasks. The study empirically demonstrates LLMs' strong robustness to high-magnitude reward noise and establishes an efficient paradigm for reasoning-aware post-training under weak supervision.
📝 Abstract
Recent studies on post-training large language models (LLMs) for reasoning through reinforcement learning (RL) typically focus on tasks that can be accurately verified and rewarded, such as solving math problems. In contrast, our research investigates the impact of reward noise, a more practical consideration for real-world scenarios involving the post-training of LLMs using reward models. We found that LLMs demonstrate strong robustness to substantial reward noise. For example, manually flipping 40% of the reward function's outputs in math tasks still allows a Qwen-2.5-7B model to converge rapidly, improving its accuracy on math tasks from 5% to 72%, compared to the 75% achieved by a model trained with noiseless rewards. Surprisingly, rewarding only the appearance of key reasoning phrases (which we term reasoning pattern reward, RPR), such as "first, I need to", without verifying the correctness of answers, led the model to peak downstream performance (over 70% accuracy for Qwen-2.5-7B) comparable to models trained with strict correctness verification and accurate rewards. Recognizing the importance of the reasoning process over the final results, we combined RPR with noisy reward models. RPR helped calibrate the noisy reward models, mitigating potential false negatives and enhancing the LLM's performance on open-ended tasks. These findings suggest the importance of improving models' foundational abilities during the pre-training phase while providing insights for advancing post-training techniques. Our code and scripts are available at https://github.com/trestad/Noisy-Rewards-in-Learning-to-Reason.
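The two reward schemes discussed above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's actual implementation: the function names, the flip mechanism, and the example phrase list are assumptions for exposition, with the 40% flip probability taken from the abstract.

```python
import random

# Assumed example phrases; the paper's actual RPR phrase set may differ.
KEY_PHRASES = ["first, i need to", "let me", "to solve this"]

def noisy_reward(is_correct: bool, flip_prob: float = 0.4, rng=random) -> float:
    """Verifier-style reward whose output is flipped with probability flip_prob,
    simulating the manual reward-flipping noise described in the abstract."""
    reward = 1.0 if is_correct else 0.0
    if rng.random() < flip_prob:
        reward = 1.0 - reward  # flip the binary reward signal
    return reward

def reasoning_pattern_reward(response: str) -> float:
    """RPR-style reward: score the presence of key reasoning phrases,
    ignoring whether the final answer is correct."""
    text = response.lower()
    return float(sum(phrase in text for phrase in KEY_PHRASES))
```

For example, `reasoning_pattern_reward("First, I need to simplify. Let me check.")` returns a positive score even if the final answer is wrong, which is exactly the weak supervision signal RPR relies on.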