🤖 AI Summary
Reward models used for Best-of-N selection with large language models (LLMs) typically assign absolute scores to candidate math solutions, and these scores suffer from low discrimination accuracy—particularly on high-difficulty problems. To address this, we propose a pairwise-comparison-based reward modeling framework coupled with a single-elimination (knockout) tournament, replacing unreliable absolute scoring with robust candidate filtering via solution-pair comparisons. Our method integrates pairwise ranking learning with tournament-style elimination, enabling reliable preference elicitation. We construct a high-quality dataset of 443K math solution pairs, derived from NuminaMath and automatically labeled with `gemini-1.5-flash`, and use it to train the reward model via supervised fine-tuning (SFT). Evaluated on MATH-500 and OlympiadBench, our approach significantly outperforms baselines, achieving a 40–60% relative improvement on the top 50% most difficult problems.
📝 Abstract
Best-of-N (BoN) sampling, a common strategy for test-time scaling of Large Language Models (LLMs), relies on reward models to select the best candidate solution from multiple generations. However, traditional reward models often assign arbitrary and inconsistent scores, limiting their effectiveness. To address this, we propose a Pairwise Reward Model (Pairwise RM) combined with a knockout tournament for BoN sampling. Instead of assigning absolute scores, given a math problem, the Pairwise RM evaluates the correctness of two candidate solutions simultaneously. This approach eliminates the need for arbitrary scoring and enables cross-validation of solutions through parallel comparison. In the knockout tournament, the Pairwise RM conducts pairwise comparisons between candidate solutions and iteratively eliminates the incorrect ones. We construct a large-scale dataset of 443K pairwise comparisons derived from NuminaMath and annotated using `gemini-1.5-flash`, and train the Pairwise RM via supervised fine-tuning. Experiments on MATH-500 and OlympiadBench demonstrate significant improvements over traditional discriminative reward models, with a 40% to 60% relative improvement on the top 50% most challenging problems.
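The knockout tournament described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `judge` callable stands in for the Pairwise RM (which would compare two candidate solutions given the problem and return the preferred one), and the toy judge used here is a hypothetical placeholder.

```python
import random
from typing import Callable, List

def knockout_tournament(
    problem: str,
    candidates: List[str],
    judge: Callable[[str, str, str], int],
) -> str:
    """Single-elimination BoN selection: repeatedly pair up candidate
    solutions, keep the winner of each pairwise comparison, and return
    the last remaining candidate.

    `judge(problem, a, b)` returns 0 if solution `a` is preferred, 1 if
    `b` is preferred (the role played by the Pairwise RM).
    """
    pool = list(candidates)
    while len(pool) > 1:
        random.shuffle(pool)  # random pairings each round
        next_round = []
        for i in range(0, len(pool) - 1, 2):
            a, b = pool[i], pool[i + 1]
            next_round.append(a if judge(problem, a, b) == 0 else b)
        if len(pool) % 2 == 1:          # odd candidate out gets a bye
            next_round.append(pool[-1])
        pool = next_round
    return pool[0]

# Toy stand-in judge (NOT the Pairwise RM): prefers the longer solution,
# just to make the example self-contained and deterministic.
def toy_judge(problem: str, a: str, b: str) -> int:
    return 0 if len(a) >= len(b) else 1

best = knockout_tournament(
    "What is 1 + 1?",
    ["2", "2, since 1 + 1 = 2.", "3"],
    toy_judge,
)
```

With N candidates, each round halves the pool, so selection costs roughly N − 1 pairwise RM calls over about log₂(N) rounds, rather than N independent absolute-score evaluations.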