AI Summary
This work investigates the reliability of micro-benchmarking for language models: whether extremely small subsets can stably reproduce the model rankings obtained on full benchmarks. We propose a meta-evaluation metric that quantifies a micro-benchmark's ability to correctly rank models as a function of their full-benchmark performance differences. Combining statistical resampling with multi-benchmark experiments (MMLU-Pro, BIG-bench Hard), we systematically compare ranking consistency across diverse subset selection strategies and random sampling. Key findings reveal that existing micro-benchmarks lack stability in distinguishing models with similar capabilities; approximately 250 samples are required to ensure robust ranking, whereas with only 25 examples, over half of pairwise comparisons among 8B instruction-tuned models are not reliably preserved. This study provides the first fine-grained trade-off analysis between micro-benchmark scale and ranking consistency, offering both theoretical foundations and practical guidelines for efficient, trustworthy model evaluation.
Abstract
Micro-benchmarking offers a solution to the often prohibitive time and cost of language model development: evaluate on a very small subset of existing benchmarks. Can these micro-benchmarks, however, rank models as consistently as the full benchmarks they replace? And can they rank models more consistently than selecting a random subset of data points? In many scenarios, we find that the answer is no. We introduce a meta-evaluation measure for micro-benchmarking which investigates how well a micro-benchmark can rank two models as a function of their performance difference on the full benchmark. This approach can determine which model pairs can be ranked correctly by a micro-benchmark, allowing for a finer-grained analysis of the trade-off between micro-benchmark size and reliability. Prior work has suggested selecting as few as 10 examples; we find that no micro-benchmarking method can consistently rank model pairs 3.5 points of accuracy apart on MMLU-Pro or 4 points apart on BIG-bench Hard. In order to consistently rank model pairs with relatively similar performances, we show that often as many as 250 examples must be selected, at which point random sampling is competitive with existing micro-benchmarking methods. When comparing only 8B instruction-tuned models on MMLU-Pro micro-benchmarks with 25 examples, we find that more than half of pairwise comparisons are not likely to be preserved. Our work provides actionable guidance for both micro-benchmark users and developers in navigating the trade-off between evaluation efficiency and reliability.
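As a rough illustration of the meta-evaluation idea (not the paper's exact procedure), the sketch below resamples random micro-benchmark subsets of a given size and estimates, for each model pair, how often the subset preserves the full-benchmark ranking; grouping these agreement rates by the full-benchmark accuracy gap gives the kind of gap-versus-consistency trade-off described above. The per-example correctness matrix, subset sizes, and 3.5-point threshold below are toy placeholders for illustration only.

```python
# Hypothetical sketch: how often does a random micro-benchmark subset preserve
# the full-benchmark ranking of a model pair, as a function of their accuracy gap?
# Toy data; not the paper's exact meta-evaluation procedure.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# correct[m, i] = 1 if model m answers benchmark example i correctly (simulated here).
n_models, n_examples = 12, 2000
skill = rng.uniform(0.4, 0.8, size=n_models)
correct = (rng.random((n_models, n_examples)) < skill[:, None]).astype(float)

full_acc = correct.mean(axis=1)  # full-benchmark accuracy per model

def ranking_agreement(subset_size, n_resamples=500):
    """For every model pair, return (full-benchmark gap, fraction of random
    subsets of `subset_size` examples that rank the pair the same way)."""
    results = []
    for a, b in combinations(range(n_models), 2):
        gap = abs(full_acc[a] - full_acc[b])
        sign_full = np.sign(full_acc[a] - full_acc[b])
        agree = 0
        for _ in range(n_resamples):
            idx = rng.choice(n_examples, size=subset_size, replace=False)
            sign_sub = np.sign(correct[a, idx].mean() - correct[b, idx].mean())
            agree += (sign_sub == sign_full)
        results.append((gap, agree / n_resamples))
    return results

# Compare a tiny subset with a larger one, mirroring the size/reliability trade-off.
for k in (25, 250):
    pairs = ranking_agreement(k)
    close = [agr for gap, agr in pairs if gap < 0.035]  # pairs within 3.5 points
    if close:
        print(f"subset size {k}: mean agreement on close pairs = {np.mean(close):.2f}")
```

In this toy setup, agreement for closely matched pairs typically rises sharply as the subset grows, which is the qualitative pattern the abstract reports for real micro-benchmarks.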