Benchmark²: Systematic Evaluation of LLM Benchmarks

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 3 (influential: 0)
🤖 AI Summary
This work addresses the proliferation of large language model (LLM) evaluation benchmarks, which has outpaced systematic assessment of their intrinsic quality. To this end, we propose Benchmark², a framework that establishes the first quantitative methodology for evaluating the reliability and validity of LLM benchmarks through three complementary metrics: cross-benchmark ranking consistency, discriminability score, and capability alignment deviation. Empirical evaluation across 15 benchmarks and 11 LLMs demonstrates that Benchmark² not only reveals substantial quality disparities among existing benchmarks but also enables the construction of streamlined test sets that maintain high evaluative performance while significantly reducing assessment scale.
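The listing does not reproduce the paper's formulas, but the first metric has a natural reading. A minimal sketch, assuming (this is an assumption, not the authors' definition) that cross-benchmark ranking consistency is the mean Spearman rank correlation between one benchmark's model ranking and each peer benchmark's:

```python
# Sketch of cross-benchmark ranking consistency. Assumption: the metric is
# the mean Spearman correlation against all peer benchmarks; the paper's
# exact formula is not given in this listing.
import numpy as np
from scipy.stats import spearmanr

def ranking_consistency(scores: np.ndarray, target: int) -> float:
    """scores: (num_benchmarks, num_models) matrix of benchmark accuracies.
    Returns the mean Spearman correlation between benchmark `target` and
    every other benchmark's induced model ranking."""
    corrs = []
    for other in range(scores.shape[0]):
        if other == target:
            continue
        rho, _ = spearmanr(scores[target], scores[other])
        corrs.append(rho)
    return float(np.mean(corrs))

# Toy data: three benchmarks scoring four models; benchmark C's ranking
# disagrees with its peers, so its consistency score comes out low.
scores = np.array([
    [0.82, 0.74, 0.61, 0.55],   # benchmark A
    [0.79, 0.70, 0.65, 0.50],   # benchmark B agrees with A
    [0.41, 0.43, 0.88, 0.39],   # benchmark C inverts the ranking
])
print(ranking_consistency(scores, target=2))  # ~0.2, flags benchmark C
```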

📝 Abstract
The rapid proliferation of benchmarks for evaluating large language models (LLMs) has created an urgent need for systematic methods to assess benchmark quality itself. We propose Benchmark², a comprehensive framework comprising three complementary metrics: (1) Cross-Benchmark Ranking Consistency, measuring whether a benchmark produces model rankings aligned with peer benchmarks; (2) Discriminability Score, quantifying a benchmark's ability to differentiate between models; and (3) Capability Alignment Deviation, identifying problematic instances where stronger models fail but weaker models succeed within the same model family. We conduct extensive experiments across 15 benchmarks spanning mathematics, reasoning, and knowledge domains, evaluating 11 LLMs across four model families. Our analysis reveals significant quality variations among existing benchmarks and demonstrates that selective benchmark construction based on our metrics can achieve comparable evaluation performance with substantially reduced test sets.
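The abstract likewise leaves the metric definitions to the paper itself. A minimal sketch of plausible forms for the other two metrics, where the specific formulas (mean pairwise score gap; within-family inversion counting) are illustrative assumptions rather than the authors' definitions:

```python
# Hedged sketches of the remaining two metrics; both formulas here are
# assumptions: discriminability as the mean pairwise score gap between
# models, and capability alignment deviation as the fraction of instances
# on which a stronger model in a family fails while a weaker one succeeds.
import numpy as np

def discriminability(model_scores: np.ndarray) -> float:
    """model_scores: (num_models,) accuracies on one benchmark. Mean
    absolute pairwise gap; near zero means the benchmark cannot
    differentiate the models."""
    gaps = np.abs(model_scores[:, None] - model_scores[None, :])
    n = len(model_scores)
    return float(gaps.sum() / (n * (n - 1)))

def alignment_deviation(correct: np.ndarray) -> float:
    """correct: (num_models, num_instances) 0/1 outcomes for one model
    family, rows ordered weakest to strongest. An instance deviates if a
    weaker model answers it correctly while some stronger model fails."""
    deviant = 0
    for col in correct.T:                     # iterate over instances
        for w in range(len(col)):
            if col[w] == 1 and (col[w + 1:] == 0).any():
                deviant += 1
                break
    return deviant / correct.shape[1]

# Toy family of three models (weakest -> strongest) on five instances.
correct = np.array([
    [1, 0, 0, 1, 0],   # weak model solves instance 3
    [1, 1, 0, 0, 1],   # mid model fails instance 3
    [1, 1, 1, 0, 1],   # strong model also fails instance 3
])
print(discriminability(np.array([0.40, 0.60, 0.80])))  # ~0.267
print(alignment_deviation(correct))                    # 0.2 (instance 3 only)
```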
Problem

Research questions and friction points this paper is trying to address.

LLM benchmarks
benchmark quality
systematic evaluation
model evaluation
benchmark reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

benchmark evaluation
ranking consistency
discriminability
capability alignment
large language models