🤖 AI Summary
Current LLM evaluation suffers from a misalignment between benchmark performance and real-world applicability, fragmented frameworks, and an overemphasis on technical metrics at the expense of value alignment and societal impact. To address these gaps, we propose IQ-EQ-PQ, a human-inspired, value-oriented three-dimensional evaluation framework measuring Intelligence, Emotional, and Professional Quotients, and introduce the novel Value Quotient (VQ) metric, which quantifies alignment across economic, social, ethical, and environmental sustainability dimensions. Methodologically, we design a modular evaluation architecture with an implementation roadmap and analyze 200+ benchmarks, surfacing open challenges in dynamic assessment and interpretability; we also release an open-source evaluation repository. This work systematically bridges technical metrics and societal values, providing both a theoretical foundation and practical guidance for developing trustworthy, responsible LLMs.
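The summary does not specify how VQ is computed. As one illustrative possibility, here is a minimal Python sketch that aggregates the four stated sustainability dimensions into a single score; the field names, equal default weights, and the linear weighted mean are all assumptions for illustration, not the paper's formulation:

```python
from dataclasses import dataclass

@dataclass
class ValueScores:
    """Hypothetical per-dimension scores in [0, 1]; the paper does not define these fields."""
    economic: float
    social: float
    ethical: float
    environmental: float

def value_quotient(scores: ValueScores,
                   weights: dict[str, float] | None = None) -> float:
    """Illustrative VQ: a weighted mean over the four value dimensions.

    The actual VQ formulation is not given in the summary; equal weights
    and linear aggregation are assumptions made here for illustration.
    """
    weights = weights or {"economic": 0.25, "social": 0.25,
                          "ethical": 0.25, "environmental": 0.25}
    total = sum(weights.values())
    return sum(getattr(scores, dim) * w for dim, w in weights.items()) / total

# Example: a model strong on ethical alignment but weak on environmental cost.
print(value_quotient(ValueScores(economic=0.8, social=0.7,
                                 ethical=0.9, environmental=0.4)))  # 0.7
```

A weighted mean is only one design choice; deployment contexts that treat any single dimension as a hard constraint might instead use a minimum or a multiplicative aggregate so that one poor dimension cannot be averaged away.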
📝 Abstract
For Large Language Models (LLMs), a disconnect persists between benchmark performance and real-world utility. Current evaluation frameworks remain fragmented, prioritizing technical metrics while neglecting holistic assessment for deployment. This survey introduces an anthropomorphic evaluation paradigm through the lens of human intelligence, proposing a novel three-dimensional taxonomy: Intelligence Quotient (IQ), general intelligence for foundational capacity; Emotional Quotient (EQ), alignment ability for value-based interactions; and Professional Quotient (PQ), professional expertise for specialized proficiency. For practical value, we pioneer a value-oriented evaluation framework, the Value Quotient (VQ), assessing economic viability, social impact, ethical alignment, and environmental sustainability. Our modular architecture integrates six components with an implementation roadmap. Through an analysis of 200+ benchmarks, we identify key challenges, including the need for dynamic assessment and gaps in interpretability. The survey provides actionable guidance for developing LLMs that are technically proficient, contextually relevant, and ethically sound. We maintain a curated repository of open-source evaluation resources at: https://github.com/onejune2018/Awesome-LLM-Eval.
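To make the IQ-EQ-PQ taxonomy concrete, the sketch below shows one hypothetical way to tag benchmarks by quotient and group them for reporting. The enum labels mirror the taxonomy's dimension names from the abstract, but the registry structure, the helper function, and the benchmark-to-quotient assignments (MMLU, TruthfulQA, and MedQA are real benchmarks; their placement here is my illustrative guess) are not the paper's API:

```python
from enum import Enum
from collections import defaultdict

class Quotient(Enum):
    """The survey's three evaluation dimensions."""
    IQ = "general intelligence"    # foundational capacity
    EQ = "alignment ability"       # value-based interactions
    PQ = "professional expertise"  # specialized proficiency

# Hypothetical registry mapping benchmarks to quotients. The survey
# catalogs 200+ benchmarks; these three entries are placeholders and
# the assignments are illustrative, not taken from the paper.
BENCHMARKS: dict[str, Quotient] = {
    "MMLU": Quotient.IQ,
    "TruthfulQA": Quotient.EQ,
    "MedQA": Quotient.PQ,
}

def by_quotient(registry: dict[str, Quotient]) -> dict[Quotient, list[str]]:
    """Group benchmark names under their quotient dimension."""
    grouped: dict[Quotient, list[str]] = defaultdict(list)
    for name, quotient in registry.items():
        grouped[quotient].append(name)
    return dict(grouped)

print(by_quotient(BENCHMARKS))
```

A registry like this would let a modular evaluation pipeline report per-quotient scores separately rather than collapsing everything into one leaderboard number, which is in the spirit of the survey's call for holistic, dimension-aware assessment.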