🤖 AI Summary
Existing LLM evaluation methods are costly, while classical Item Response Theory (IRT) models support only binary scoring and are confined to single benchmarks, failing to characterize cross-task ability structures. Method: We propose LEGO-IRT, the first unified IRT framework to jointly model binary and continuous responses. It factorizes ability representations into general and structure-specific components, explicitly capturing correlations across multiple benchmarks and evaluation metrics, and integrates Bayesian inference with multi-task learning to substantially reduce data requirements. Contribution/Results: Experiments across 70 LLMs and 5 benchmarks demonstrate that LEGO-IRT achieves stable ability estimation using only 3% of evaluation items, reduces estimation error by up to 10%, and yields ability estimates that align more closely with human preferences than those of prior approaches.
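As a rough illustration of the factorization described above (the notation below is ours, not taken from the paper): for model $m$ evaluated under structure $s$ (a particular benchmark or metric), the effective ability could be decomposed as

$$\theta_{m,s} = \theta_m^{(g)} + \delta_{m,s},$$

where $\theta_m^{(g)}$ is a general component shared across all structures and $\delta_{m,s}$ is a structure-specific offset capturing, for example, a model being relatively stronger on one benchmark than its overall ability would predict.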
📝 Abstract
Evaluating large language models (LLMs) on comprehensive benchmarks is a cornerstone of their development, yet it is often computationally and financially prohibitive. While Item Response Theory (IRT) offers a promising path toward data-efficient evaluation by disentangling model capability from item difficulty, existing IRT-based methods are hampered by significant limitations. They are typically restricted to binary correctness metrics, failing to natively handle the continuous scores used in generative tasks, and they operate on single benchmarks, ignoring valuable structural knowledge such as correlations across different metrics or benchmarks. To overcome these challenges, we introduce LEGO-IRT, a unified and flexible framework for data-efficient LLM evaluation. LEGO-IRT's design natively supports both binary and continuous evaluation metrics. Moreover, it introduces a factorized architecture to explicitly model and leverage structural knowledge, decomposing model ability estimates into a general component and structure-specific (e.g., per-metric or per-benchmark) components. Through extensive experiments involving $70$ LLMs across $5$ benchmarks, we show that LEGO-IRT achieves stable capability estimates using just $3\%$ of the total evaluation items. We demonstrate that incorporating structural knowledge reduces estimation error by up to $10\%$ and reveal that the latent abilities estimated by our framework may align more closely with human preferences.
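To make the modeling idea concrete, below is a minimal sketch of how a joint likelihood over binary and continuous responses could look under a factorized ability. The 2PL-style logistic link for binary items, the Gaussian observation model for continuous scores, and all function names and toy numbers are our own illustrative assumptions; the paper's actual parameterization, priors, and Bayesian multi-task inference are not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ability(theta_general, delta_structure):
    # Factorized ability: a shared general component plus a
    # structure-specific (per-benchmark / per-metric) offset.
    return theta_general + delta_structure

def binary_log_likelihood(y, theta, a, b):
    # 2PL-style response model for pass/fail items:
    # P(correct) = sigmoid(a * (theta - b)),
    # with discrimination a and difficulty b.
    p = sigmoid(a * (theta - b))
    return y * np.log(p) + (1 - y) * np.log(1 - p)

def continuous_log_likelihood(score, theta, a, b, sigma=0.1):
    # Continuous scores (e.g., in [0, 1] for generative tasks) modeled as
    # Gaussian noise around the same sigmoid-transformed ability.
    mu = sigmoid(a * (theta - b))
    return -0.5 * np.log(2 * np.pi * sigma**2) - (score - mu) ** 2 / (2 * sigma**2)

# Toy example: one model, two structures (benchmarks), a handful of items.
theta_g = 0.8                                    # general ability component
delta = {"bench_A": 0.3, "bench_B": -0.2}        # structure-specific offsets

items_A = [(1, 1.2, 0.5), (0, 0.9, 1.5)]         # (binary outcome, a, b)
items_B = [(0.74, 1.0, 0.2), (0.41, 1.1, 0.9)]   # (continuous score, a, b)

ll = 0.0
for y, a, b in items_A:
    ll += binary_log_likelihood(y, ability(theta_g, delta["bench_A"]), a, b)
for s, a, b in items_B:
    ll += continuous_log_likelihood(s, ability(theta_g, delta["bench_B"]), a, b)

print(f"joint log-likelihood of the toy responses: {ll:.3f}")
```

The only point of the sketch is that both item types are tied to the same underlying ability term, which is what allows a small subset of items from different benchmarks and metrics to jointly constrain the estimate.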