Scales++: Compute Efficient Evaluation Subset Selection with Cognitive Scales Embeddings

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Constructing small-scale benchmarks for LLM evaluation faces high annotation costs and cold-start challenges due to reliance on historical model performance. Method: This paper proposes a task-centric subset selection paradigm that replaces model-centric approaches with intrinsic sample-level cognitive complexity—quantified via cognitive-scale embeddings—as the primary selection criterion. Combining clustering and optimization, it efficiently identifies representative subsets without requiring prior model evaluations. Contribution/Results: The method enables zero-shot cold-start benchmark construction, significantly improving interpretability and cross-model generalizability. Experiments show that using only 0.5% of the full benchmark, it predicts overall scores with a mean absolute error of just 2.9%; moreover, the upfront selection cost is reduced by over 18×. It achieves a superior trade-off between efficiency and predictive fidelity compared to existing approaches.
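The "combining clustering and optimization" step described above can be illustrated with a minimal sketch (not the authors' code): cluster the item embeddings and keep, for each cluster, the real item nearest its centroid. The cognitive-scale embeddings are assumed here to be precomputed vectors, one per benchmark item.

```python
# Sketch of task-centric subset selection: k-means over item embeddings,
# then pick each cluster's medoid (the actual item closest to the centroid).
# "embeddings" stands in for the paper's cognitive-scale embeddings.
import numpy as np

def select_subset(embeddings: np.ndarray, k: int, iters: int = 50, seed: int = 0) -> list[int]:
    """Return indices of k representative benchmark items."""
    rng = np.random.default_rng(seed)
    centroids = embeddings[rng.choice(len(embeddings), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each item to its nearest centroid
        dists = np.linalg.norm(embeddings[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centroids; keep the old one if a cluster empties
        for c in range(k):
            members = embeddings[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    dists = np.linalg.norm(embeddings[:, None] - centroids[None], axis=2)
    return [int(np.argmin(dists[:, c])) for c in range(k)]
```

Because selection depends only on the items' embeddings, no model evaluations are needed upfront, which is what enables the cold-start behavior claimed in the summary.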

📝 Abstract
The prohibitive cost of evaluating large language models (LLMs) on comprehensive benchmarks necessitates the creation of small yet representative data subsets (i.e., tiny benchmarks) that enable efficient assessment while retaining predictive fidelity. Current methods for this task operate under a model-centric paradigm, selecting benchmarking items based on the collective performance of existing models. Such approaches are limited by large upfront costs, an inability to immediately handle new benchmarks ('cold-start'), and the fragile assumption that future models will share the failure patterns of their predecessors. In this work, we challenge this paradigm and propose an item-centric approach to benchmark subset selection, arguing that selection should be based on the intrinsic properties of the task items themselves, rather than on model-specific failure patterns. We instantiate this item-centric efficient benchmarking approach via a novel method, Scales++, where data selection is based on the cognitive demands of the benchmark samples. Empirically, we show Scales++ reduces the upfront selection cost by over 18x while achieving competitive predictive fidelity. On the Open LLM Leaderboard, using just a 0.5% data subset, we predict full benchmark scores with a 2.9% mean absolute error. We demonstrate that this item-centric approach enables more efficient model evaluation without significant fidelity degradation, while also providing better cold-start performance and more interpretable benchmarking.
Problem

Research questions and friction points this paper is trying to address.

Selecting small representative data subsets for efficient LLM evaluation
Overcoming limitations of model-centric benchmark selection approaches
Using cognitive demands of items to enable cost-effective assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Item-centric approach for benchmark subset selection
Uses cognitive demands of samples for data selection
Reduces upfront cost while maintaining predictive fidelity
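Once a representative subset is chosen, the full-benchmark score must be reconstructed from it. A minimal sketch of one natural estimator, under the assumption that each selected item stands in for its cluster, is a cluster-size-weighted mean of the model's subset accuracies (the paper reports a 2.9% mean absolute error for such subset-based prediction at 0.5% of the data; the weighting scheme here is illustrative, not necessarily the authors' exact estimator):

```python
# Estimate a model's full-benchmark score from its accuracy on the
# representative items, weighting each by the size of the cluster it
# represents. Illustrative reconstruction, not the paper's exact method.
import numpy as np

def estimate_score(subset_acc, cluster_sizes) -> float:
    """subset_acc[i]: accuracy on cluster i's representative item(s);
    cluster_sizes[i]: number of full-benchmark items in cluster i."""
    acc = np.asarray(subset_acc, dtype=float)
    weights = np.asarray(cluster_sizes, dtype=float)
    return float((acc * weights).sum() / weights.sum())
```

For example, perfect accuracy on a representative of a 3-item cluster and failure on a singleton cluster yields an estimated score of 0.75.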
Andrew M. Bean
Thomson Reuters Foundational Research, University of Oxford
Nabeel Seedat
University of Cambridge
Machine Learning, Uncertainty Quantification, Data-Centric AI, Large Language Models, AI for Health
Shengzhuang Chen
Thomson Reuters Foundational Research, Imperial College London
Jonathan Richard Schwarz
Thomson Reuters
Machine Learning, Statistics, Artificial Intelligence