🤖 AI Summary
Existing evaluation methodologies for large language models (LLMs) assess generalization insufficiently: static benchmarks fail to capture the continuously expanding capability boundaries of evolving LLMs.
Method: We formally define “evaluation generalizability” and propose a four-dimensional analytical framework encompassing evaluation methodologies, datasets, evaluators, and metrics. Our approach integrates LLM-as-a-judge evaluation, dynamically updated datasets, capability-decoupled benchmark design, and a multidimensional meta-evaluation framework.
Contribution: We establish a capability-oriented, automated, and sustainably evolvable evaluation paradigm covering critical dimensions, including knowledge, reasoning, instruction following, multimodal understanding, and safety. Concurrently, we release an open-source, extensible GitHub “living review” repository (a community-maintained, versioned resource) to advance evaluation practice from static benchmarking toward dynamic, collaborative co-evolution.
📝 Abstract
Large Language Models (LLMs) are advancing at a remarkable pace and have become indispensable across academia, industry, and daily applications. To keep pace with this progress, this survey probes the core challenges that the rise of LLMs poses for evaluation. We identify and analyze two pivotal transitions: (i) from task-specific to capability-based evaluation, which reorganizes benchmarks around core competencies such as knowledge, reasoning, instruction following, multimodal understanding, and safety; and (ii) from manual to automated evaluation, encompassing dynamic dataset curation and “LLM-as-a-judge” scoring. Yet, even with these transitions, a crucial obstacle persists: the evaluation generalization issue. Bounded test sets cannot scale alongside models whose abilities grow seemingly without limit. We dissect this issue, along with the core challenges of the above two transitions, from the perspectives of methods, datasets, evaluators, and metrics. Because this field is evolving rapidly, we maintain a living GitHub repository (links are in each section) to crowd-source updates and corrections, and we warmly invite contributors and collaborators.
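To make the “LLM-as-a-judge” transition concrete, the minimal sketch below shows one common pattern: a rubric prompt is sent to a judge model and a numeric score is parsed from its reply. The prompt wording, the `judge_model` callable, and the 1–5 scale are illustrative assumptions for exposition, not a protocol prescribed by this survey.

```python
import re
from typing import Callable

# Hypothetical rubric prompt; the exact wording is an illustrative assumption.
JUDGE_TEMPLATE = """You are an impartial evaluator.
Question: {question}
Candidate answer: {answer}
Rate the answer's correctness and helpfulness on a 1-5 scale.
Reply with a single line of the form: SCORE: <integer>"""


def judge_answer(question: str, answer: str,
                 judge_model: Callable[[str], str]) -> int:
    """Score one candidate answer with a judge LLM.

    `judge_model` is any callable mapping a prompt string to the judge's raw
    text response (e.g., a thin wrapper around an LLM API of your choice).
    """
    prompt = JUDGE_TEMPLATE.format(question=question, answer=answer)
    response = judge_model(prompt)
    match = re.search(r"SCORE:\s*([1-5])", response)
    if match is None:
        raise ValueError(f"Judge response could not be parsed: {response!r}")
    return int(match.group(1))


if __name__ == "__main__":
    # Stand-in judge that returns a fixed score so the sketch runs offline;
    # replace it with a real model call in practice.
    mock_judge = lambda prompt: "SCORE: 4"
    print(judge_answer("What is 2 + 2?", "4", mock_judge))  # -> 4
```

In practice, such scores are typically aggregated over many items and calibrated against human judgments, which is where the meta-evaluation dimension discussed in the survey comes in.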