How Reliable is Language Model Micro-Benchmarking?

πŸ“… 2025-10-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work investigates the reliability of micro-benchmarking for language models: whether very small subsets of existing benchmarks can stably reproduce the model rankings obtained on the full benchmarks. The authors propose a meta-evaluation measure that quantifies a micro-benchmark's ability to correctly rank a pair of models as a function of their performance difference on the full benchmark. Combining statistical resampling with experiments on multiple benchmarks (MMLU-Pro, BIG-bench Hard), they systematically compare ranking consistency across diverse subset-selection strategies and random sampling. Key findings reveal that existing micro-benchmarks lack the stability to distinguish models with similar capabilities: often as many as 250 examples are required for robust ranking, and with only 25 examples more than half of pairwise comparisons among 8B-parameter instruction-tuned models are not preserved. The study provides a fine-grained analysis of the trade-off between micro-benchmark size and ranking consistency, offering practical guidance for efficient, trustworthy model evaluation.

πŸ“ Abstract
Micro-benchmarking offers a solution to the often prohibitive time and cost of language model development: evaluate on a very small subset of existing benchmarks. Can these micro-benchmarks, however, rank models as consistently as the full benchmarks they replace? And can they rank models more consistently than selecting a random subset of data points? In many scenarios, we find that the answer is no. We introduce a meta-evaluation measure for micro-benchmarking which investigates how well a micro-benchmark can rank two models as a function of their performance difference on the full benchmark. This approach can determine which model pairs can be ranked correctly by a micro-benchmark, allowing for a finer-grained analysis of the trade-off between micro-benchmark size and reliability. Prior work has suggested selecting as few as 10 examples; we find that no micro-benchmarking method can consistently rank model pairs 3.5 points of accuracy apart on MMLU-Pro or 4 points apart on BIG-bench Hard. In order to consistently rank model pairs with relatively similar performances, we show that often as many as 250 examples must be selected, at which point random sampling is competitive with existing micro-benchmarking methods. When comparing only 8B instruction-tuned models on MMLU-Pro micro-benchmarks with 25 examples, we find that more than half of pairwise comparisons are not likely to be preserved. Our work provides actionable guidance for both micro-benchmark users and developers in navigating the trade-off between evaluation efficiency and reliability.
Problem

Research questions and friction points this paper is trying to address.

Evaluating reliability of micro-benchmarks for language model ranking
Assessing consistency between micro-benchmarks and full benchmark rankings
Determining required micro-benchmark size for reliable model comparisons
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces meta-evaluation measure for micro-benchmark reliability
Determines model pairs rankable by micro-benchmark performance differences
Recommends sample sizes of up to 250 examples for reliably ranking models with similar performance
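The core idea above can be sketched with a short resampling experiment. The snippet below is a minimal illustration, not the paper's implementation: given per-example correctness vectors for two models on a full benchmark, it estimates how often a random micro-benchmark of a given size preserves the full-benchmark ranking. The function name and the synthetic correctness vectors are hypothetical.

```python
import random

def ranking_agreement(correct_a, correct_b, subset_size, n_resamples=1000, seed=0):
    """Estimate the probability that a random subset of `subset_size`
    examples ranks the two models the same way as the full benchmark.

    `correct_a` / `correct_b`: lists of 0/1 per-example correctness,
    aligned on the same benchmark examples.
    """
    rng = random.Random(seed)
    n = len(correct_a)
    # Full-benchmark accuracies define the "true" ranking.
    full_a = sum(correct_a) / n
    full_b = sum(correct_b) / n
    true_a_wins = full_a > full_b
    agree = 0
    for _ in range(n_resamples):
        idx = rng.sample(range(n), subset_size)
        sub_a = sum(correct_a[i] for i in idx) / subset_size
        sub_b = sum(correct_b[i] for i in idx) / subset_size
        # Ties on the subset count as a failure to reproduce the ranking.
        agree += (sub_a > sub_b) == true_a_wins
    return agree / n_resamples

# Two hypothetical models ~5 accuracy points apart on a 1,000-example benchmark.
model_a = [1] * 600 + [0] * 400   # 60.0% accuracy
model_b = [1] * 550 + [0] * 450   # 55.0% accuracy
small = ranking_agreement(model_a, model_b, subset_size=25)
large = ranking_agreement(model_a, model_b, subset_size=250)
```

Running this with synthetic data of the kind above shows the paper's qualitative trade-off: agreement is far from certain at 25 examples and improves substantially at 250, and the closer the two models' full-benchmark accuracies, the larger the subset needed for consistent ranking.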
Gregory Yauney
Cornell University
machine learning, digital humanities
Shahzaib Saqib Warraich
Thomas Lord Department of Computer Science, University of Southern California, Los Angeles, CA, USA
Swabha Swayamdipta
University of Southern California
Natural Language Processing, Machine Learning