🤖 AI Summary
This work addresses the limitations of traditional static benchmarks (prone to saturation, contamination, and high refresh costs) and the susceptibility of existing large language model (LLM) auto-scoring methods to prompt sensitivity and bias. It proposes the first framework to evaluate LLMs' *benchmark design capability* rather than merely their question-answering performance: a three-stage pipeline that extracts structured domain cards from seed benchmarks, generates quota-controlled items through multi-model collaboration, and scores them via exact, numeric, and symbolic verifiers combined with psychometric analysis. Across nine domains, the pipeline generates 16.7K items (retaining ~15K core items after filtering) and constructs a designer–answerer matrix with ~152K scoring records. Empirical results reveal only a moderate correlation between design and answering abilities (Spearman ρ ≈ 0.37) and a strong negative association between invalid items and discrimination (Pearson r ≈ −0.62), demonstrating the framework's effectiveness for scalable, cross-modal, and multilingual benchmark auditing.
📝 Abstract
Benchmarks are the de facto standard for tracking progress in large language models (LLMs), yet static test sets can rapidly saturate, become vulnerable to contamination, and are costly to refresh. Scalable evaluation of open-ended items often relies on LLM judges, introducing additional sources of bias and prompt sensitivity. We argue that evaluation must extend beyond how well models answer benchmarks to how well models design them. We introduce BenchBench, a three-stage pipeline and dataset for benchmarking automated benchmark generation: (i) extract structured domain cards from seed benchmarks, (ii) prompt multiple designer LLMs to generate quota-controlled suites, and (iii) validate items with a multi-model answerer panel using exact/numeric/symbolic verifiers when possible and rubric-guided judging otherwise, yielding designer–answerer matrices with item-level quality flags and psychometric diagnostics. Across nine variants spanning computer science, mathematics, medicine, and theory-of-mind reasoning (including multilingual and multimodal settings), we generate 16.7K items, retain ~15K core items post-filtering, and produce ~152K graded model–item responses. BenchBench shows that benchmark-design ability is only moderately correlated with answer-time strength (Spearman ρ ≈ 0.37), invalidity is negatively associated with discrimination (Pearson r ≈ −0.62), and the resulting designer–answerer matrices enable scalable audits of format/modality/language fidelity and suite-dependent self/family interactions. The project is available at: https://github.com/koanatakiyo/BenchBench.
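The two headline statistics rest on standard psychometric computations over the designer–answerer matrix: a Spearman rank correlation between per-model design and answering scores, and per-item discrimination (here taken as the Pearson correlation between an item's scores and answerers' rest-of-test totals). The sketch below is illustrative only; the scores, matrix, and helper names are invented for the example and are not the paper's actual data or implementation.

```python
def rank(xs):
    # Average ranks, with ties sharing their mean rank.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman rho = Pearson correlation of the rank vectors.
    return pearson(rank(x), rank(y))

# Illustrative per-model design-quality vs answering-accuracy scores.
design = [0.62, 0.48, 0.71, 0.55, 0.40]
answer = [0.80, 0.52, 0.66, 0.74, 0.45]
print(f"Spearman rho = {spearman(design, answer):.2f}")

# Illustrative graded matrix: rows = answerers, cols = items (1 = correct).
matrix = [
    [1, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]
for j in range(4):
    item = [row[j] for row in matrix]
    rest = [sum(row) - row[j] for row in matrix]  # rest-of-test total
    print(f"item {j}: discrimination = {pearson(item, rest):+.2f}")
```

An invalid item (e.g. one with no defensible key) tends to be answered near-randomly even by strong answerers, which pulls its correlation with rest-of-test totals toward zero or below; this is the mechanism behind the reported negative association between invalidity and discrimination.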