Predicting LLM Reasoning Performance with Small Proxy Model

📅 2025-09-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Reasoning capabilities emerge reliably only in large language models (LLMs) of roughly 7B parameters or more, so they cannot be evaluated directly on small models, and pretraining large models merely to assess data quality is prohibitively expensive. To address this, the paper proposes rBridge, which uses lightweight proxy models (≤1B parameters) supervised by high-quality reasoning traces generated by state-of-the-art frontier models. By weighting the negative log-likelihood (NLL) objective with task alignment, rBridge predicts large-model reasoning performance zero-shot. The authors present it as the first method enabling efficient data-quality assessment across datasets and scales (1B-32B). Evaluated on six reasoning benchmarks, it achieves new state-of-the-art correlation scores while cutting data-ranking cost by over two orders of magnitude relative to the best existing baseline, significantly accelerating data-optimization pipelines.

📝 Abstract
Given the prohibitive cost of pre-training large language models, it is essential to leverage smaller proxy models to optimize datasets before scaling up. However, this approach becomes challenging for reasoning capabilities, which exhibit emergent behavior that appears reliably only at larger model sizes, often exceeding 7B parameters. To address this, we introduce rBridge, showing that small proxies (≤1B) can effectively predict large-model reasoning by aligning more closely with (1) the pre-training objective and (2) the target task. rBridge achieves this by weighting negative log-likelihood with task alignment, using reasoning traces from frontier models as gold labels. In our experiments, rBridge (i) reduces dataset ranking costs by over 100x relative to the best baseline, (ii) achieves the strongest correlation across six reasoning benchmarks at 1B to 32B scale, and (iii) zero-shot transfers predictive relationships across pre-training datasets at 1B to 7B scale. These findings indicate that rBridge offers a practical path for exploring reasoning-oriented pre-training at lower cost.
Problem

Research questions and friction points this paper is trying to address.

Predicting large language model reasoning using small proxy models
Addressing emergent reasoning capabilities only present in larger models
Reducing dataset ranking costs for reasoning-oriented pre-training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weighting negative log-likelihood with task alignment
Using reasoning traces from frontier models as labels
Enabling zero-shot transfer across pre-training datasets
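
The core idea above can be sketched as a small function: score a pre-training dataset by the proxy model's NLL on frontier-generated reasoning traces, with each token's loss weighted by how aligned it is with the target task. This is a minimal illustrative sketch, not the paper's actual implementation; the function name, the form of the weights, and the example numbers are all assumptions.

```python
# Hypothetical sketch of task-aligned weighted NLL (not the paper's code).
# token_logprobs would come from a small (<=1B) proxy model scoring a
# reasoning trace produced by a frontier model; alignment_weights emphasize
# tokens relevant to the target task (e.g., reasoning/answer tokens) over
# boilerplate. Lower scores suggest the proxy fits the trace better.

def weighted_nll(token_logprobs, alignment_weights):
    """Weighted-average negative log-likelihood over one reasoning trace.

    token_logprobs: per-token log p(token | context) from the proxy model.
    alignment_weights: nonnegative per-token task-alignment weights.
    """
    assert len(token_logprobs) == len(alignment_weights)
    total_weight = sum(alignment_weights)
    if total_weight == 0:
        return 0.0
    weighted_sum = sum(w * lp for lp, w in zip(token_logprobs, alignment_weights))
    return -weighted_sum / total_weight

# Illustrative example: final-answer tokens get full weight,
# preamble tokens are down-weighted.
logprobs = [-0.2, -1.5, -0.1, -2.3]  # made-up proxy-model log-probs
weights = [0.1, 0.1, 1.0, 1.0]       # emphasize task-relevant tokens
score = weighted_nll(logprobs, weights)
```

Averaging such scores over traces yields a per-dataset number that, per the paper's claim, correlates with downstream large-model reasoning performance and can rank candidate pre-training datasets without training any large model.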