AI Summary
LLM benchmarking suffers from inconsistent cross-benchmark rankings and insufficient discriminability among top-performing models, hindering accurate assessment of true model capabilities. To address this, we propose PSN-IRT, a novel framework that extends Item Response Theory (IRT) by incorporating rich item parameters (e.g., difficulty, discrimination, guessing) and jointly modeling item features and model responses via a pseudo-siamese neural network (PSN). This framework systematically uncovers measurement biases in mainstream benchmarks and enables the construction of compact, high-fidelity evaluation suites. Experiments demonstrate that PSN-IRT reduces ability estimation error by 32% even when test length is halved, while achieving significantly higher alignment with human preferences (+18.7% Kendall's τ) compared to the original benchmarks. Moreover, it enhances the interpretability of item parameters without compromising psychometric rigor, thereby improving both the accuracy and the theoretical grounding of LLM evaluation.
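The difficulty, discrimination, and guessing parameters named above are those of the classical three-parameter logistic (3PL) IRT model. The summary does not give PSN-IRT's exact parameterization, so the following is the textbook 3PL form these parameters correspond to, not an equation taken from the paper:

```latex
% Textbook 3PL IRT model (assumed reference form, not the paper's equation):
% a_i: discrimination, b_i: difficulty, c_i: guessing for item i;
% \theta_j: latent ability of model j; y_{ij}: correctness of model j on item i.
P(y_{ij} = 1 \mid \theta_j) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta_j - b_i)}}
```

Under this form, a higher discrimination a_i makes an item separate models near difficulty b_i more sharply, while c_i sets the floor probability that a weak model reaches by guessing.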
Abstract
The evaluation of large language models (LLMs) via benchmarks is widespread, yet inconsistencies between different leaderboards and poor separability among top models raise concerns about their ability to accurately reflect authentic model capabilities. This paper provides a critical analysis of benchmark effectiveness, examining prominent mainstream LLM benchmarks using results from diverse models. We first propose a new framework for accurate and reliable estimation of item characteristics and model abilities: the Pseudo-Siamese Network for Item Response Theory (PSN-IRT), an enhanced Item Response Theory framework that incorporates a rich set of item parameters within an IRT-grounded architecture. Based on PSN-IRT, we conduct an extensive analysis that reveals significant and varied shortcomings in the measurement quality of current benchmarks. Furthermore, we demonstrate that PSN-IRT can be leveraged to construct smaller benchmarks that maintain stronger alignment with human preference.
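The abstract does not describe the architecture itself. As a minimal sketch only, a pseudo-siamese design could pair two structurally similar but weight-independent towers: one mapping item features to 3PL item parameters, the other mapping a model's response pattern to an ability estimate, combined through the 3PL link shown above. The class name PSNIRTSketch, the MLP widths, and the feature dimensions below are all illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of a pseudo-siamese IRT network (PyTorch).
# All names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn


class PSNIRTSketch(nn.Module):
    def __init__(self, item_feat_dim: int, resp_dim: int, hidden: int = 64):
        super().__init__()
        # Item tower: item features (e.g., text embeddings) -> raw 3PL
        # parameters (discrimination, difficulty, guessing).
        self.item_tower = nn.Sequential(
            nn.Linear(item_feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )
        # Model tower: an LLM's response pattern across items -> ability.
        # Same structure, separate weights: hence "pseudo-siamese".
        self.model_tower = nn.Sequential(
            nn.Linear(resp_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, item_feats: torch.Tensor, resp_pattern: torch.Tensor):
        a_raw, b, c_raw = self.item_tower(item_feats).unbind(dim=-1)
        a = nn.functional.softplus(a_raw)   # discrimination kept positive
        c = torch.sigmoid(c_raw)            # guessing kept in (0, 1)
        theta = self.model_tower(resp_pattern).squeeze(-1)
        # 3PL link: predicted probability of a correct response.
        return c + (1 - c) * torch.sigmoid(a * (theta - b))


# Example usage: fitting would minimize binary cross-entropy between
# predicted probabilities and observed per-item correctness labels.
model = PSNIRTSketch(item_feat_dim=768, resp_dim=1000)
item_feats = torch.randn(32, 768)       # a batch of 32 item feature vectors
resp_pattern = torch.randn(32, 1000)    # paired model response vectors
probs = model(item_feats, resp_pattern)
loss = nn.functional.binary_cross_entropy(probs, torch.rand(32))
```

Jointly training the two towers in this way lets item parameters and ability estimates inform each other, which matches the motivation the summary attributes to the pseudo-siamese design.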