Learning More from Less: Unlocking Internal Representations for Benchmark Compression

📅 2026-01-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high cost of evaluating large language models and the statistical instability of existing few-shot benchmark distillation methods when the number of source models is limited. The authors propose a novel approach that aligns the hidden states of heterogeneous models into a unified latent space to construct a more representative core subset for performance extrapolation. By leveraging aligned internal representations rather than relying solely on discrete output labels, this method overcomes the limitations of traditional response-based paradigms. Experiments across five benchmarks and over 200 models demonstrate that, with as few as ten source models, the method substantially improves the stability and ranking correlation of few-shot performance estimation. Furthermore, the analysis reveals separable general-purpose and task-specific components within the hidden representations.
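The pipeline described above, aligning per-item hidden states into a shared space, selecting a representative coreset of items, and extrapolating full-benchmark accuracy, can be sketched as follows. This is a schematic illustration only: the alignment here is a fixed random projection and the selection is greedy k-center, both stand-ins for the paper's actual (unspecified here) procedures, and all names are hypothetical.

```python
import numpy as np

def align_hidden_states(H, d=8, seed=0):
    """Project one model's per-item hidden states (n_items x dim) into a
    shared d-dim latent space. A shared random projection stands in for
    the paper's learned alignment (hypothetical simplification)."""
    rng = np.random.default_rng(seed)  # shared seed => shared space
    P = rng.standard_normal((H.shape[1], d)) / np.sqrt(d)
    Z = H @ P
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

def select_coreset(Z_models, k):
    """Greedy k-center selection over item representations averaged across
    source models: pick items that spread out in the latent space."""
    Z = np.mean(Z_models, axis=0)  # (n_items, d)
    chosen = [0]
    dists = np.linalg.norm(Z - Z[0], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dists))  # farthest item from current coreset
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(Z - Z[nxt], axis=1))
    return chosen

def extrapolate(correct, coreset):
    """Estimate full-benchmark accuracy from coreset accuracy."""
    return float(np.mean(correct[coreset]))
```

In this toy version, a new model would be evaluated only on the `k` coreset items and its full-benchmark score estimated from them; the paper's extrapolation step is presumably more sophisticated than the plain mean used here.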

📝 Abstract
The prohibitive cost of evaluating Large Language Models (LLMs) necessitates efficient alternatives to full-scale benchmarking. Prevalent approaches address this by identifying a small coreset of items to approximate full-benchmark performance. However, existing methods must estimate a reliable item profile from response patterns across many source models, which becomes statistically unstable when the source pool is small. This dependency is particularly limiting for newly released benchmarks with minimal historical evaluation data. We argue that discrete correctness labels are a lossy view of the model's decision process and fail to capture information encoded in hidden states. To address this, we introduce REPCORE, which aligns heterogeneous hidden states into a unified latent space to construct representative coresets. Using these subsets for performance extrapolation, REPCORE achieves precise estimation accuracy with as few as ten source models. Experiments on five benchmarks and over 200 models show consistent gains over output-based baselines in ranking correlation and estimation accuracy. Spectral analysis further indicates that the aligned representations contain separable components reflecting broad response tendencies and task-specific reasoning patterns.
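The spectral finding in the abstract, that aligned representations contain a broad shared component plus task-specific structure, can be illustrated on synthetic data. This is purely schematic and not the paper's actual analysis: a matrix built from one dominant shared direction plus small residual noise has a singular-value spectrum whose leading direction (the "general-purpose" component) clearly separates from the rest.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, d = 100, 8

# Synthetic aligned item representations (hypothetical construction):
# a strong shared direction plus small item-specific residuals.
general = rng.standard_normal(d)                           # shared direction
Z = np.outer(rng.standard_normal(n_items) + 3.0, general)  # common component
Z += 0.3 * rng.standard_normal((n_items, d))               # specific residual

# SVD separates the components: the top singular direction captures the
# broad shared tendency, the trailing ones the task-specific structure.
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
ratio = S[0] / S[1]  # large ratio => separable spectrum
```

A real analysis would run this decomposition on the aligned hidden states themselves and inspect which benchmark items load on which components.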
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
benchmark compression
coreset selection
evaluation efficiency
hidden representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

hidden state alignment
benchmark compression
coreset selection
representation learning
LLM evaluation