🤖 AI Summary
Existing RL benchmarks for LLM evaluation suffer from a severe flaw: performance on their training and test sets is so strongly correlated that they cannot detect generalization failures under distributional shift, abrupt increases in difficulty, or counterfactual scenarios. Method: We propose the Oracle Performance Gap (OPG) metric and a diagnostic suite to systematically expose benchmark overfitting tendencies; we then distill three principles for robust RL evaluation—sufficient task difficulty, balanced assessment across capabilities, and distributional robustness. Contribution/Results: Through stress testing, cross-set performance analysis, and quantitative OPG evaluation, we empirically demonstrate pervasive implicit overfitting among mainstream RL methods on current benchmarks, which masks their true capability boundaries. Our work establishes a reproducible, diagnosable, and scalable paradigm for LLM-driven RL evaluation, enabling rigorous, transparent, and generalizable assessment of reinforcement learning progress in language modeling.
📝 Abstract
Current benchmarks are inadequate for evaluating progress in reinforcement learning (RL) for large language models (LLMs). Despite recent benchmark gains reported for RL, we find that training on these benchmarks' training sets achieves nearly the same performance as training directly on the test sets, suggesting that the benchmarks cannot reliably separate further progress. To study this phenomenon, we introduce a diagnostic suite and the Oracle Performance Gap (OPG) metric, which quantifies the performance difference between training on the train split versus the test split of a benchmark. We further analyze this phenomenon with stress tests and find that, despite strong benchmark scores, existing RL methods struggle to generalize across distribution shifts, varying levels of difficulty, and counterfactual scenarios: shortcomings that current benchmarks fail to reveal. We conclude that current benchmarks are insufficient for evaluating generalization and propose three core principles for designing more faithful benchmarks: sufficient difficulty, balanced evaluation, and distributional robustness.
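The OPG idea above can be sketched as a simple signed difference between two evaluation scores. This is a minimal illustration only: the function name, the sign convention, and the absence of any normalization are our assumptions, not the paper's exact definition.

```python
# Hypothetical sketch of the Oracle Performance Gap (OPG).
# OPG compares a model RL-trained on a benchmark's train split against an
# "oracle" model trained directly on the test split, with both evaluated on
# the test set. The exact formula used in the paper may differ.

def oracle_performance_gap(train_trained_score: float,
                           test_trained_score: float) -> float:
    """Signed gap: oracle (test-trained) score minus train-trained score.

    A near-zero OPG suggests the train split already saturates the test set,
    i.e. the benchmark can no longer separate further progress.
    """
    return test_trained_score - train_trained_score

# Example with made-up test-set accuracies: training on the train split (0.81)
# nearly matches training directly on the test split (0.83), so OPG is small.
gap = oracle_performance_gap(train_trained_score=0.81, test_trained_score=0.83)
```

In this toy reading, a large OPG would indicate that the benchmark still has headroom to distinguish methods, while a small OPG signals the saturation the paper reports.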