Beyond Benchmark: LLMs Evaluation with an Anthropomorphic and Value-oriented Roadmap

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current LLM evaluation suffers from misalignment between benchmark performance and real-world applicability, fragmented frameworks, and an overemphasis on technical metrics at the expense of value alignment and societal impact. To address these gaps, we propose IQ-EQ-PQ, a human-inspired three-dimensional evaluation framework measuring Intelligence, Emotional, and Professional Quotients, and introduce a Value-oriented Evaluation (VQ) framework quantifying alignment across economic, social, ethical, and environmental sustainability dimensions. Methodologically, we design a modular evaluation architecture with an implementation roadmap, integrating 200+ benchmarks to enhance dynamism and interpretability; we also release an open-source evaluation repository. This work systematically bridges technical metrics and societal values, providing both a theoretical foundation and practical guidance for developing trustworthy, responsible LLMs.
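Neither the summary nor the abstract pins down how VQ is computed. As a minimal sketch, assuming each of the four stated dimensions is scored in [0, 1] and combined by a weighted mean (the weights, class, and function names below are hypothetical illustrations, not the paper's definition):

```python
from dataclasses import dataclass

@dataclass
class ValueScores:
    """Per-dimension scores in [0, 1]. The four dimensions come from the
    paper; the scale and the weighting below are illustrative assumptions."""
    economic: float
    social: float
    ethical: float
    environmental: float

# Hypothetical weights; the paper does not specify how dimensions combine.
WEIGHTS = {"economic": 0.25, "social": 0.25, "ethical": 0.25, "environmental": 0.25}

def value_quotient(s: ValueScores) -> float:
    """Aggregate the four dimension scores into a single VQ via a weighted mean."""
    return sum(w * getattr(s, name) for name, w in WEIGHTS.items()) / sum(WEIGHTS.values())

print(value_quotient(ValueScores(0.8, 0.7, 0.9, 0.6)))  # -> 0.75
```

In practice, the weights would be application-specific (e.g., ethical alignment weighted higher for consumer-facing deployments), which is one reason a single scalar VQ is paired with per-dimension reporting.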

📝 Abstract
For Large Language Models (LLMs), a disconnect persists between benchmark performance and real-world utility. Current evaluation frameworks remain fragmented, prioritizing technical metrics while neglecting holistic assessment for deployment. This survey introduces an anthropomorphic evaluation paradigm through the lens of human intelligence, proposing a novel three-dimensional taxonomy: Intelligence Quotient (IQ), general intelligence for foundational capacity; Emotional Quotient (EQ), alignment ability for value-based interactions; and Professional Quotient (PQ), professional expertise for specialized proficiency. For practical value, we pioneer a Value-oriented Evaluation (VQ) framework assessing economic viability, social impact, ethical alignment, and environmental sustainability. Our modular architecture integrates six components with an implementation roadmap. Through analysis of 200+ benchmarks, we identify key challenges, including the need for dynamic assessment and gaps in interpretability. The survey provides actionable guidance for developing LLMs that are technically proficient, contextually relevant, and ethically sound. We maintain a curated repository of open-source evaluation resources at: https://github.com/onejune2018/Awesome-LLM-Eval.
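To make the modular, taxonomy-driven design concrete, here is a minimal sketch of a dimension-tagged benchmark registry, assuming benchmarks are tagged with the survey's IQ/EQ/PQ dimensions and averaged per dimension. The benchmark names, tags, and scores below are illustrative, not the paper's actual component design or assignments:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical registry entries: (benchmark, dimension tag, score for one model).
# Dimension tags follow the paper's IQ/EQ/PQ taxonomy; scores are made up.
RESULTS = [
    ("mmlu", "IQ", 0.71),          # general knowledge and reasoning
    ("gsm8k", "IQ", 0.64),         # grade-school math word problems
    ("safety_probe", "EQ", 0.88),  # value alignment / refusal behavior
    ("medqa", "PQ", 0.59),         # specialized medical expertise
]

def per_dimension(results):
    """Average benchmark scores within each anthropomorphic dimension."""
    buckets = defaultdict(list)
    for _benchmark, dim, score in results:
        buckets[dim].append(score)
    return {dim: mean(scores) for dim, scores in buckets.items()}

print(per_dimension(RESULTS))  # {'IQ': 0.675, 'EQ': 0.88, 'PQ': 0.59}
```
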
Problem

Research questions and friction points this paper is trying to address.

Disconnect between benchmark performance and real-world utility
Fragmented evaluation frameworks neglecting holistic assessment
Need for dynamic assessment and persistent interpretability gaps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Anthropomorphic three-dimensional evaluation taxonomy
Value-oriented (VQ) framework assessing economic, social, ethical, and environmental sustainability
Modular six-component architecture with an implementation roadmap
👥 Authors
Jun Wang
Department of Networks, China Mobile Communications Group Co., Ltd.
Ninglun Gu
Department of Networks, China Mobile Communications Group Co., Ltd.
Kailai Zhang
Shanghai Jiao Tong University, Shanghai, China
Zijiao Zhang
Shanghai Jiao Tong University, Shanghai, China
Yelun Bao
Department of Networks, China Mobile Communications Group Co., Ltd.
Jin Yang
Department of Networks, China Mobile Communications Group Co., Ltd.
Xu Yin
Department of Networks, China Mobile Communications Group Co., Ltd.
Liwei Liu
Shenzhen University
Yihuan Liu
Xidian University, Xi'an, China
Pengyong Li
Xidian University, Xi'an, China
Gary G. Yen
Oklahoma State University, Stillwater, OK, USA
Junchi Yan
Shanghai Jiao Tong University (SJTU), Shanghai, China