Toward Generalizable Evaluation in the LLM Era: A Survey Beyond Benchmarks

📅 2025-04-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation methodologies for large language models (LLMs) assess generalization insufficiently: static benchmarks fail to capture the continuously expanding capability boundaries of evolving LLMs. Method: We formally define “evaluation generalizability” and propose a four-dimensional analytical framework encompassing evaluation methodologies, datasets, evaluators, and metrics, integrating LLM-as-a-judge scoring, dynamically updated datasets, capability-decoupled benchmark design, and a multidimensional meta-evaluation framework. Contribution: We establish a capability-oriented, automated, and sustainably evolving evaluation paradigm covering critical dimensions including knowledge, reasoning, instruction following, multimodal understanding, and safety. Concurrently, we release an open-source, extensible GitHub “living review” repository, a community-maintained and versioned resource, to advance evaluation practice from static benchmarking toward dynamic, collaborative co-evolution.
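To make the “LLM-as-a-judge” ingredient concrete, here is a minimal sketch of pointwise judge scoring. The `call_llm` function is a hypothetical placeholder for any chat-completion client, and the rubric and 1-to-5 scale are illustrative assumptions, not the survey's prescribed protocol.

```python
# Minimal sketch of pointwise "LLM-as-a-judge" scoring, assuming a
# generic chat-completion client. `call_llm` is a hypothetical
# placeholder; the rubric and 1-5 scale are illustrative, not the
# survey's exact protocol.
import json
import re

JUDGE_PROMPT = """You are an impartial evaluator.
Question: {question}
Candidate answer: {answer}
Rate the answer from 1 (poor) to 5 (excellent) for correctness and
helpfulness. Reply with JSON only: {{"score": <int>, "reason": "<short>"}}"""

def call_llm(prompt: str) -> str:
    """Hypothetical stub: replace with a real model API call."""
    return '{"score": 4, "reason": "mostly correct, minor omission"}'

def judge(question: str, answer: str) -> dict:
    raw = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    match = re.search(r"\{.*\}", raw, re.DOTALL)  # tolerate extra prose around the JSON
    return json.loads(match.group(0)) if match else {"score": None, "reason": "unparseable"}

print(judge("What is 2 + 2?", "4"))
```

Pointwise scoring is only one judging mode; pairwise comparison between two candidate answers follows the same pattern with a different prompt.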

📝 Abstract
Large Language Models (LLMs) are advancing at a remarkable speed and have become indispensable across academia, industry, and daily applications. To keep pace with this progress, this survey probes the core challenges that the rise of LLMs poses for evaluation. We identify and analyze two pivotal transitions: (i) from task-specific to capability-based evaluation, which reorganizes benchmarks around core competencies such as knowledge, reasoning, instruction following, multi-modal understanding, and safety; and (ii) from manual to automated evaluation, encompassing dynamic dataset curation and “LLM-as-a-judge” scoring. Yet, even with these transitions, a crucial obstacle persists: the evaluation generalization issue. Bounded test sets cannot scale alongside models whose abilities grow seemingly without limit. We dissect this issue, along with the core challenges of the two transitions above, from the perspectives of methods, datasets, evaluators, and metrics. Because this field is evolving rapidly, we maintain a living GitHub repository (links are in each section) to crowd-source updates and corrections, and we warmly invite contributors and collaborators.
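The abstract's “dynamic dataset curation” can be illustrated with a tiny sketch: test items are regenerated from a template for each evaluation round, so fresh surface forms defeat memorization of a static test set. The arithmetic template and seeding scheme below are illustrative assumptions, not a pipeline from the paper.

```python
# Hedged sketch of dynamic dataset curation: items are regenerated from a
# template per evaluation round, so static memorization stops paying off.
# The arithmetic template and seed scheme are illustrative assumptions.
import random

def make_variant(seed: int) -> dict:
    rng = random.Random(seed)  # deterministic: each round is reproducible
    a, b = rng.randint(10, 99), rng.randint(10, 99)
    return {"question": f"What is {a} + {b}?", "answer": str(a + b)}

# A new benchmark "round" is just a fresh seed range.
round_k = [make_variant(s) for s in range(1000, 1010)]
print(round_k[0])
```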
Problem

Research questions and friction points this paper is trying to address.

Addressing evaluation challenges posed by advancing Large Language Models
Transitioning from task-specific to capability-based model evaluation
Overcoming generalization issues in bounded test sets for LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transition from task-specific to capability-based evaluation
Shift from manual to automated evaluation, including LLM-as-a-judge scoring
Dynamically updated datasets to address evaluation generalization (a meta-evaluation sketch follows this list)
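Automated judging only helps if the judge itself is trustworthy, which is what meta-evaluation checks. Below is a minimal sketch of one such check, the agreement between hypothetical judge scores and human ratings on the same outputs; the paper's multidimensional meta-evaluation framework goes well beyond this single correlation.

```python
# Minimal meta-evaluation sketch: does the automated judge track human
# ratings? Exact agreement and Pearson correlation are two illustrative
# checks; both score lists are hypothetical.
from statistics import correlation  # Pearson r, Python 3.10+

human_scores = [5, 3, 4, 2, 5, 1, 4, 3]  # hypothetical human ratings
judge_scores = [5, 3, 5, 2, 4, 1, 4, 2]  # hypothetical LLM-judge ratings

agreement = sum(h == j for h, j in zip(human_scores, judge_scores)) / len(human_scores)
print(f"exact agreement: {agreement:.2f}")
print(f"pearson r: {correlation(human_scores, judge_scores):.2f}")
```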
👥 Authors
Yixin Cao (Fudan University)
Shibo Hong (Fudan University)
Xinze Li (Nanyang Technological University)
Jiahao Ying (Singapore Management University)
Yubo Ma (Nanyang Technological University): Event Extraction, Information Extraction, Natural Language Processing
Haiyuan Liang (Fudan University)
Yantao Liu (Qwen, Alibaba): Reinforcement Learning, Reward Modeling, Large Language Models
Zijun Yao (Tsinghua University)
Xiaozhi Wang (Tsinghua University): Natural Language Processing, Language Model, Mechanistic Interpretability
Dan Huang (Singapore Management University)
Wenxuan Zhang (Singapore University of Technology and Design)
Lifu Huang (UC Davis): Natural Language Processing, Multimodal Learning, AI for Science, Multilingual
Muhao Chen (University of California, Davis): Natural Language Processing, Robust ML, AI Safety, Vision-language Models
Lei Hou (Tsinghua University)
Qianru Sun (Singapore Management University)
Xingjun Ma (Fudan University): Trustworthy AI, Multimodal AI, Generative AI, Embodied AI
Zuxuan Wu (Fudan University)
Min-Yen Kan (National University of Singapore)
David Lo (Singapore Management University)
Qi Zhang (Fudan University)
Heng Ji (University of Illinois Urbana-Champaign): Natural Language Processing, Large Language Models
Jing Jiang (Australian National University)
Juanzi Li (Tsinghua University): Semantic Web, Data Mining, NLP
Aixin Sun (Nanyang Technological University)
Xuanjing Huang (Fudan University)
Tat-Seng Chua (National University of Singapore)
Yu-Gang Jiang (Fudan University): Video Analysis, Embodied AI, Trustworthy AI