Benchmark Illusion: Disagreement among LLMs and Its Scientific Consequences

📅 2026-02-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses a critical yet overlooked issue in large language model (LLM) evaluation: despite near-identical performance on high-accuracy benchmarks such as MMLU-Pro and GPQA, models exhibit substantial judgment discrepancies, disagreeing on 16–38% of questions even among top-performing frontier models, which poses significant risks to scientific reproducibility. The work introduces and systematically characterizes the phenomenon of the "benchmark illusion," demonstrating that the choice of LLM, acting as a hidden variable, can profoundly influence the reliability of scientific conclusions. Through multi-model comparisons, error-pattern analyses, and empirical re-analyses, illustrated with case studies from education and political science, the research shows that switching the annotating model can alter treatment-effect estimates by more than 80% and can even reverse their sign, underscoring the pivotal role of model selection in scientific inference.
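The summary's central mechanism, that swapping the annotating LLM can move or even flip an estimated treatment effect, can be sketched with a toy difference-in-means example. All unit names, labels, and numbers below are hypothetical illustrations, not data from the paper:

```python
# Hypothetical illustration: two LLM annotators assign binary labels to the
# same texts; they disagree on only 2 of 8 units, yet the estimated
# difference-in-means treatment effect reverses sign.
treated = ["t1", "t2", "t3", "t4"]
control = ["c1", "c2", "c3", "c4"]

# Labels each annotating model assigns to each unit's text (made up).
labels_model_a = {"t1": 1, "t2": 1, "t3": 0, "t4": 0,
                  "c1": 0, "c2": 0, "c3": 1, "c4": 0}
labels_model_b = {"t1": 1, "t2": 0, "t3": 0, "t4": 0,
                  "c1": 1, "c2": 0, "c3": 1, "c4": 0}

def treatment_effect(labels):
    """Difference in mean labeled outcome between treated and control units."""
    mean_t = sum(labels[u] for u in treated) / len(treated)
    mean_c = sum(labels[u] for u in control) / len(control)
    return mean_t - mean_c

print(treatment_effect(labels_model_a))  # 0.25  (positive effect)
print(treatment_effect(labels_model_b))  # -0.25 (sign reversed)
```

The two annotators differ on just two labels (t2 and c1), yet the inferential direction flips, mirroring the paper's point that model choice is a consequential hidden variable.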

📝 Abstract
Benchmarks underpin how progress in large language models (LLMs) is measured and trusted. Yet our analyses reveal that apparent convergence in benchmark accuracy can conceal deep epistemic divergence. Using two major reasoning benchmarks, MMLU-Pro and GPQA, we show that LLMs achieving comparable accuracy still disagree on 16–66% of items, and on 16–38% among top-performing frontier models. These discrepancies suggest distinct error profiles for different LLMs. When such models are used for scientific data annotation and inference, their hidden disagreements propagate into research results: in re-analyses of published studies in education and political science, switching the annotation model can change estimated treatment effects by more than 80%, and in some cases reverses their sign. Together, these findings illustrate a benchmark illusion, where equal accuracy may conceal disagreement, with model choice becoming a hidden yet consequential variable for scientific reproducibility.
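The abstract's core observation, that equal benchmark accuracy can coexist with high item-level disagreement, can be made concrete with a minimal sketch. The answer strings below are invented for illustration, not drawn from MMLU-Pro or GPQA:

```python
# Toy example: two models each answer 8 multiple-choice items.
# Both score 75% accuracy, yet they disagree on half of the items,
# because they make their errors on different questions.
gold    = ["A", "B", "C", "D", "A", "B", "C", "D"]
model_1 = ["A", "B", "C", "D", "A", "B", "D", "A"]  # wrong on items 7-8
model_2 = ["A", "B", "D", "A", "A", "B", "C", "D"]  # wrong on items 3-4

def accuracy(preds, answers):
    """Fraction of items answered correctly."""
    return sum(p == a for p, a in zip(preds, answers)) / len(answers)

def disagreement(preds_a, preds_b):
    """Fraction of items on which two models give different answers."""
    return sum(x != y for x, y in zip(preds_a, preds_b)) / len(preds_a)

print(accuracy(model_1, gold))             # 0.75
print(accuracy(model_2, gold))             # 0.75
print(disagreement(model_1, model_2))      # 0.5
```

Aggregate accuracy collapses away exactly the information, which items each model gets wrong, that matters when the models are later used as annotators.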
Problem

Research questions and friction points this paper is trying to address.

benchmark illusion
large language models
epistemic divergence
scientific reproducibility
model disagreement
Innovation

Methods, ideas, or system contributions that make the work stand out.

benchmark illusion
epistemic divergence
LLM disagreement
scientific reproducibility
model-dependent annotation