🤖 AI Summary
Current LLM benchmarks suffer from pervasive data contamination, cultural and linguistic bias, a lack of procedural transparency, and insufficient dynamism, which together undermine the reliability of evaluations. To address this, we conduct the first systematic survey of 283 mainstream LLM benchmarks, proposing a three-category taxonomy (general capabilities, domain-specific competencies, and target-specific functionalities) that spans language understanding, knowledge and reasoning, the natural sciences, the social sciences and humanities, and risk controllability. Through empirical analysis, we uncover structural biases in evaluation objectives, data provenance, and assessment methodologies. Our key contributions are: (1) the first comprehensive taxonomy map of LLM benchmarks; (2) identification of critical assessment deficiencies; and (3) a benchmark design paradigm grounded in trustworthiness, fairness, and adaptability. This work provides both a theoretical framework and practical guidelines for building high-fidelity, next-generation LLM evaluation systems.
📝 Abstract
In recent years, as the depth and breadth of large language models' capabilities have rapidly expanded, evaluation benchmarks have proliferated accordingly. As quantitative tools for assessing model performance, benchmarks are not only the core means of measuring model capabilities but also a key force in guiding the direction of model development and driving technological innovation. We present the first systematic review of the current state and evolution of large language model benchmarks, categorizing 283 representative benchmarks into three categories: general capabilities, domain-specific, and target-specific. General-capability benchmarks cover core language abilities, knowledge, and reasoning; domain-specific benchmarks focus on fields such as the natural sciences, the humanities and social sciences, and engineering and technology; target-specific benchmarks address risks, reliability, agents, and related concerns. We point out that current benchmarks suffer from inflated scores caused by data contamination, unfair evaluation stemming from cultural and linguistic biases, and a lack of assessment of process credibility and performance in dynamic environments, and we offer a reference design paradigm for future benchmark innovation.
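The data-contamination problem noted above is commonly probed by checking for verbatim overlap between benchmark test items and a model's training corpus. The survey does not prescribe a particular method, so the following is only a minimal illustrative sketch of an exact-match n-gram overlap check; the function names (`ngrams`, `contamination_rate`) and the 8-gram window are hypothetical choices, and real contamination audits would also need to handle paraphrases and partial matches.

```python
from typing import Iterable, Set, Tuple


def ngrams(text: str, n: int = 8) -> Set[Tuple[str, ...]]:
    """Return the set of word-level n-grams in a text (lowercased)."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def contamination_rate(benchmark_items: Iterable[str],
                       training_docs: Iterable[str],
                       n: int = 8) -> float:
    """Fraction of benchmark items sharing at least one n-gram with the training corpus.

    This is a naive exact-match check, intended only to illustrate the idea
    of contamination auditing, not the survey's methodology.
    """
    corpus_ngrams: Set[Tuple[str, ...]] = set()
    for doc in training_docs:
        corpus_ngrams |= ngrams(doc, n)

    items = list(benchmark_items)
    flagged = sum(1 for item in items if ngrams(item, n) & corpus_ngrams)
    return flagged / len(items) if items else 0.0


if __name__ == "__main__":
    # Toy data: the first benchmark item repeats a training sentence verbatim.
    train = ["the quick brown fox jumps over the lazy dog near the river bank"]
    bench = [
        "the quick brown fox jumps over the lazy dog near the river bank today",
        "an entirely different question about photosynthesis in desert plants",
    ]
    print(f"contaminated fraction: {contamination_rate(bench, train):.2f}")
```

On the toy inputs above, the first item is flagged while the second is not, giving a contaminated fraction of 0.50; inflated benchmark scores arise when such overlapping items are answered from memorization rather than capability.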