Paradigms of AI Evaluation: Mapping Goals, Methodologies and Culture

📅 2025-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
The AI evaluation landscape has become increasingly fragmented as researchers from many disciplines bring divergent terminologies, heterogeneous objectives, and mutually incompatible paradigms, impeding interdisciplinary communication, reinforcing research silos, and undermining the effectiveness of deployed AI systems. To address this, the paper surveys recent cross-disciplinary literature and identifies six distinct AI evaluation paradigms. Each paradigm is characterized along three dimensions: evaluative goals, methodological approaches, and underlying research cultures, exposing tacit assumptions and normative value commitments. Building on this characterization, the paper offers a comparative mapping of the paradigms that bridges terminological divides and lowers communication barriers. The resulting six-paradigm classification serves as both a theoretical foundation and a practical roadmap for fostering cross-paradigm integration, uncovering research gaps, advancing more unified evaluation standards, and enabling responsible AI deployment.

📝 Abstract
Research in AI evaluation has grown increasingly complex and multidisciplinary, attracting researchers with diverse backgrounds and objectives. As a result, divergent evaluation paradigms have emerged, often developing in isolation, adopting conflicting terminologies, and overlooking each other's contributions. This fragmentation has led to insular research trajectories and communication barriers both among different paradigms and with the general public, contributing to unmet expectations for deployed AI systems. To help bridge this insularity, in this paper we survey recent work in the AI evaluation landscape and identify six main paradigms. We characterise major recent contributions within each paradigm across key dimensions related to their goals, methodologies and research cultures. By clarifying the unique combination of questions and approaches associated with each paradigm, we aim to increase awareness of the breadth of current evaluation approaches and foster cross-pollination between different paradigms. We also identify potential gaps in the field to inspire future research directions.
Problem

Research questions and friction points this paper is trying to address.

Fragmentation of AI evaluation into isolated paradigms
Communication barriers among researchers and with the public
Unmet expectations for deployed AI systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Surveys the AI evaluation landscape and identifies six main paradigms
Characterizes each paradigm's goals, methodologies, and research cultures
Fosters cross-paradigm awareness, collaboration, and identification of research gaps
John Burden
University of Cambridge
Reinforcement Learning · Artificial Intelligence · Long-term AI Safety · AI Evaluation
Marko Tešić
Leverhulme Centre for the Future of Intelligence, University of Cambridge
Lorenzo Pacchiardi
Research Associate, University of Cambridge
Large Language Models · AI Evaluation · AI Policy · Bayesian Inference · Likelihood-Free Inference
José Hernández-Orallo
Leverhulme Centre for the Future of Intelligence, University of Cambridge; VRAIN, Universitat Politècnica de València