Rethinking RL Evaluation: Can Benchmarks Truly Reveal Failures of RL Methods?

📅 2025-10-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing RL benchmarks for LLM evaluation suffer from a serious flaw: a strong performance correlation between training and test sets, which hinders detection of generalization failures under distributional shift, abrupt difficulty increases, and counterfactual scenarios. Method: We propose the Oracle Performance Gap (OPG) metric and a diagnostic suite to systematically expose benchmark overfitting tendencies; we then distill three principles for robust RL evaluation—sufficient task difficulty, balanced assessment across capabilities, and distributional robustness. Contribution/Results: Through stress testing, cross-set performance analysis, and quantitative OPG evaluation, we empirically demonstrate pervasive implicit overfitting among mainstream RL methods on current benchmarks, which masks their true capability boundaries. Our work establishes a reproducible, diagnosable, and scalable paradigm for LLM-driven RL evaluation, enabling rigorous, transparent, and generalizable assessment of reinforcement learning progress in language modeling.

📝 Abstract
Current benchmarks are inadequate for evaluating progress in reinforcement learning (RL) for large language models (LLMs). Despite recent benchmark gains reported for RL, we find that training on these benchmarks' training sets achieves nearly the same performance as training directly on the test sets, suggesting that the benchmarks cannot reliably separate further progress. To study this phenomenon, we introduce a diagnostic suite and the Oracle Performance Gap (OPG) metric that quantifies the performance difference between training on the train split versus the test split of a benchmark. We further analyze this phenomenon with stress tests and find that, despite strong benchmark scores, existing RL methods struggle to generalize across distribution shifts, varying levels of difficulty, and counterfactual scenarios: shortcomings that current benchmarks fail to reveal. We conclude that current benchmarks are insufficient for evaluating generalization and propose three core principles for designing more faithful benchmarks: sufficient difficulty, balanced evaluation, and distributional robustness.
Problem

Research questions and friction points this paper is trying to address.

Current RL benchmarks inadequately evaluate LLM reinforcement learning progress
Training on a benchmark's train split yields nearly the same test performance as training directly on its test split
Existing RL methods fail to generalize across distribution shifts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Oracle Performance Gap diagnostic metric
Proposes stress tests for generalization analysis
Recommends three principles for robust benchmarks
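Based on the abstract's description, OPG quantifies the performance difference between training on a benchmark's train split versus training directly on its test split. A minimal sketch of how such a gap might be computed is below; the function name, signature, and numbers are illustrative assumptions, not taken from the paper.

```python
def oracle_performance_gap(test_acc_after_train_split: float,
                           test_acc_after_test_split: float) -> float:
    """Sketch of the Oracle Performance Gap (OPG).

    Compares an 'oracle' run (model trained directly on the test split,
    then evaluated on that test set) against a normal run (model trained
    on the train split, evaluated on the same test set). A gap near zero
    suggests the train and test sets are so similar that the benchmark
    cannot reliably separate further progress.
    """
    return test_acc_after_test_split - test_acc_after_train_split


# Hypothetical accuracies: a tiny gap of 0.01 would indicate the
# train/test overlap the paper reports.
opg = oracle_performance_gap(0.74, 0.75)
```

Under this reading, a benchmark with a large OPG still has headroom that honest training cannot reach by memorization alone, while a near-zero OPG signals that test-set performance is already implied by the train split.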
Zihan Chen
HFIPS, Chinese Academy of Sciences; University of Science and Technology of China
Yiming Zhang
HFIPS, Chinese Academy of Sciences; University of Science and Technology of China
Hengguang Zhou
University of California, Los Angeles
Zenghui Ding
HFIPS, Chinese Academy of Sciences
Yining Sun
Johns Hopkins University
Computer Vision
Cho-Jui Hsieh
University of California, Los Angeles
Machine Learning · Optimization