🤖 AI Summary
Constructing small yet representative benchmarks for efficient LLM evaluation is hampered by high upfront costs and cold-start problems, because existing methods depend on the historical performance of prior models.
Method: This paper proposes a task-centric subset selection paradigm that replaces model-centric approaches with intrinsic sample-level cognitive complexity—quantified via cognitive-scale embeddings—as the primary selection criterion. Combining clustering and optimization, it efficiently identifies representative subsets without requiring prior model evaluations.
Contribution/Results: The method enables zero-shot cold-start benchmark construction, significantly improving interpretability and cross-model generalizability. Experiments show that using only 0.5% of the full benchmark, it predicts overall scores with a mean absolute error of just 2.9%; moreover, the upfront selection cost is reduced by over 18×. It achieves a superior trade-off between efficiency and predictive fidelity compared to existing approaches.
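The selection idea described above (cluster item-level embeddings, keep one representative per cluster, then estimate the full-benchmark score from the subset) can be sketched roughly as follows. This is a minimal illustration only: it assumes precomputed per-item embeddings and uses plain k-means with cluster medoids and size-weighted averaging, which may differ from the paper's actual clustering and optimization procedure.

```python
import numpy as np

def select_tiny_benchmark(embeddings, k, n_iter=50, seed=0):
    """Hypothetical sketch: cluster item embeddings with simple k-means
    and return one representative (medoid) per cluster, plus cluster
    sizes for weighting the subset scores."""
    rng = np.random.default_rng(seed)
    n = len(embeddings)
    centers = embeddings[rng.choice(n, size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign each item to its nearest center, then recompute centers
        dists = np.linalg.norm(embeddings[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = embeddings[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    reps, weights = [], []
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        if len(idx) == 0:
            continue
        # medoid: the actual item closest to the cluster centroid
        local = embeddings[idx]
        reps.append(idx[np.linalg.norm(local - centers[j], axis=1).argmin()])
        weights.append(len(idx) / n)
    return np.array(reps), np.array(weights)

def predict_full_score(subset_correct, weights):
    """Estimate full-benchmark accuracy as a cluster-size-weighted mean
    of a model's per-item correctness on the representative subset."""
    return float(np.dot(subset_correct, weights))
```

A model is then run only on the `reps` items, and `predict_full_score` extrapolates its overall benchmark score; the cluster-size weights ensure that a representative of a large cluster counts proportionally more.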
📝 Abstract
The prohibitive cost of evaluating large language models (LLMs) on comprehensive benchmarks necessitates the creation of small yet representative data subsets (i.e., tiny benchmarks) that enable efficient assessment while retaining predictive fidelity. Current methods for this task operate under a model-centric paradigm, selecting benchmarking items based on the collective performance of existing models. Such approaches are limited by large upfront costs, an inability to immediately handle new benchmarks ("cold-start"), and the fragile assumption that future models will share the failure patterns of their predecessors. In this work, we challenge this paradigm and propose an item-centric approach to benchmark subset selection, arguing that selection should be based on the intrinsic properties of the task items themselves, rather than on model-specific failure patterns. We instantiate this item-centric efficient benchmarking approach via a novel method, Scales++, where data selection is based on the cognitive demands of the benchmark samples. Empirically, we show Scales++ reduces the upfront selection cost by over 18x while achieving competitive predictive fidelity. On the Open LLM Leaderboard, using just a 0.5% data subset, we predict full benchmark scores with a 2.9% mean absolute error. We demonstrate that this item-centric approach enables more efficient model evaluation without significant fidelity degradation, while also providing better cold-start performance and more interpretable benchmarking.