🤖 AI Summary
LLM benchmarking is frequently unreliable: pervasive failure modes, including bias, high variance, insufficient coverage, and poor interpretability, can lead to erroneous conclusions about model quality. Method: The paper systematically identifies 57 benchmark failure modes and 196 corresponding mitigations and, for the first time, adapts the NIST Risk Management Framework to LLM evaluation, proposing BenchRisk: a meta-assessment framework that combines a quantifiable, five-dimensional risk score for cross-benchmark credibility comparison with an open-source toolchain for qualitative risk analysis, iterative failure modeling, and collaborative sharing of risks and mitigations. Contribution/Results: Scoring 26 mainstream benchmarks shows that every benchmark exhibits significant risk in at least one dimension, exposing critical flaws in current benchmark design. BenchRisk provides both a methodological foundation and a practical pathway toward transparent, standardized, and trustworthy LLM benchmarking.
📝 Abstract
Large language model (LLM) benchmarks inform LLM use decisions (e.g., "is this LLM safe to deploy for my use case and context?"). However, benchmarks may be rendered unreliable by various failure modes that impact benchmark bias, variance, coverage, or people's capacity to understand benchmark evidence. Using the National Institute of Standards and Technology's risk management process as a foundation, this research iteratively analyzed 26 popular benchmarks, identifying 57 potential failure modes and 196 corresponding mitigation strategies. The mitigations reduce failure likelihood and/or severity, providing a frame for evaluating "benchmark risk," which is scored to provide a metaevaluation benchmark: BenchRisk. Higher scores indicate that benchmark users are less likely to reach an incorrect or unsupported conclusion about an LLM. All 26 scored benchmarks present significant risk within one or more of the five scored dimensions (comprehensiveness, intelligibility, consistency, correctness, and longevity), which points to important open research directions for the field of LLM benchmarking. The BenchRisk workflow allows for comparison between benchmarks; as an open-source tool, it also facilitates the identification and sharing of risks and their mitigations.
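The abstract describes scoring "benchmark risk" across five dimensions, with mitigations reducing failure likelihood and/or severity and higher scores meaning users are less likely to reach an unsupported conclusion. The sketch below illustrates that idea only; the dimension names come from the abstract, while the 0-1 scales, the likelihood-times-severity aggregation, the `mitigation_factor` discount, and all function names are assumptions for illustration, not the paper's actual BenchRisk scoring rules.

```python
from dataclasses import dataclass

# Five scored dimensions named in the abstract.
DIMENSIONS = ("comprehensiveness", "intelligibility", "consistency",
              "correctness", "longevity")

@dataclass
class FailureMode:
    dimension: str     # which scored dimension the failure threatens
    likelihood: float  # 0..1 chance the failure occurs (assumed scale)
    severity: float    # 0..1 impact if it occurs (assumed scale)
    mitigated: bool    # whether the benchmark applies a known mitigation

def dimension_scores(failure_modes, mitigation_factor=0.5):
    """Return a per-dimension score in [0, 1]; higher = lower residual risk.

    Hypothetical aggregation: average likelihood*severity per dimension,
    discounted when a mitigation is in place, then inverted.
    """
    risk = {d: 0.0 for d in DIMENSIONS}
    counts = {d: 0 for d in DIMENSIONS}
    for fm in failure_modes:
        weight = mitigation_factor if fm.mitigated else 1.0
        risk[fm.dimension] += weight * fm.likelihood * fm.severity
        counts[fm.dimension] += 1
    return {d: 1.0 - (risk[d] / counts[d] if counts[d] else 0.0)
            for d in DIMENSIONS}

# Example: one unmitigated correctness failure, one mitigated coverage failure.
modes = [
    FailureMode("correctness", likelihood=0.6, severity=0.8, mitigated=False),
    FailureMode("comprehensiveness", likelihood=0.4, severity=0.5, mitigated=True),
]
print(dimension_scores(modes))
```

In this toy setup, the unmitigated correctness failure drags that dimension's score down sharply, while the mitigated coverage failure leaves comprehensiveness closer to 1.0, mirroring the abstract's point that mitigations reduce likelihood and/or severity and thereby raise the benchmark's score.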