Multi-Agent-as-Judge: Aligning LLM-Agent-Based Automated Evaluation with Multi-Dimensional Human Evaluation

📅 2025-07-28
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM-as-a-judge approaches suffer from two key limitations: (1) subjective and ad hoc persona descriptions for evaluators, and (2) poor cross-task generalizability of evaluation frameworks. To address these, we propose MAJ-EVAL, a novel framework that automatically discovers multidimensional evaluator personas directly from textual data and establishes a transferable multi-agent debate mechanism, enabling LLMs to collaboratively emulate diverse human assessors. MAJ-EVAL integrates persona modeling, domain-adaptive text mining, and structured debate strategies to generate interpretable, multi-faceted evaluation feedback. Experiments in education and healthcare domains demonstrate that MAJ-EVAL achieves significantly higher agreement with human expert judgments than conventional automated metrics (e.g., BLEU, ROUGE) and state-of-the-art LLM-as-a-judge methods. This advancement substantially improves both the reliability and task-agnostic applicability of automated evaluation.
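The summary above describes a three-stage pipeline: mine evaluator personas from domain documents, instantiate one LLM agent per persona, and let the agents debate to produce per-dimension feedback. The sketch below is a minimal, hypothetical wiring of such a pipeline; the `llm` callable, the prompt wording, and the `Persona` structure are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Dict

# Placeholder for any chat-completion backend; an assumption, not part of the paper.
LLMFn = Callable[[str], str]

@dataclass
class Persona:
    name: str          # e.g. "Pediatrician", "Reading-education researcher"
    dimension: str     # evaluation dimension this persona focuses on
    description: str   # grounding text mined from domain documents

def discover_personas(documents: List[str], llm: LLMFn, k: int = 3) -> List[Persona]:
    """Stage 1 (sketch): ask the LLM to mine k evaluator personas from domain texts."""
    prompt = (
        f"From the following domain documents, list {k} distinct evaluator personas.\n"
        "For each, give: name | evaluation dimension | one-sentence description.\n\n"
        + "\n---\n".join(documents)
    )
    lines = [l for l in llm(prompt).splitlines() if "|" in l]
    return [Persona(*[p.strip() for p in l.split("|", 2)]) for l in lines[:k]]

def debate(item: str, personas: List[Persona], llm: LLMFn, rounds: int = 2) -> Dict[str, str]:
    """Stages 2-3 (sketch): each persona critiques the item, seeing prior turns."""
    transcript: List[str] = []
    for _ in range(rounds):
        for p in personas:
            turn = llm(
                f"You are {p.name}, focused on {p.dimension}. {p.description}\n"
                f"Item under evaluation:\n{item}\n"
                "Debate so far:\n" + "\n".join(transcript) +
                "\nGive your assessment and a 1-5 score for your dimension."
            )
            transcript.append(f"{p.name}: {turn}")
    # Final per-dimension feedback: the last turn from each persona.
    return {p.dimension: t for p, t in zip(personas, transcript[-len(personas):])}
```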

πŸ“ Abstract
Nearly all human work is collaborative; thus, the evaluation of real-world NLP applications often requires multiple dimensions that align with diverse human perspectives. As real human evaluator resources are often scarce and costly, the emerging "LLM-as-a-judge" paradigm sheds light on a promising approach to leverage LLM agents to believably simulate human evaluators. Yet, to date, existing LLM-as-a-judge approaches face two limitations: persona descriptions of agents are often arbitrarily designed, and the frameworks are not generalizable to other tasks. To address these challenges, we propose MAJ-EVAL, a Multi-Agent-as-Judge evaluation framework that can automatically construct multiple evaluator personas with distinct dimensions from relevant text documents (e.g., research papers), instantiate LLM agents with the personas, and engage in group debates among the agents to generate multi-dimensional feedback. Our evaluation experiments in both the educational and medical domains demonstrate that MAJ-EVAL can generate evaluation results that better align with human experts' ratings compared with conventional automated evaluation metrics and existing LLM-as-a-judge methods.
Problem

Research questions and friction points this paper is trying to address.

Aligning automated LLM-agent evaluation with multi-dimensional human perspectives
Addressing arbitrary persona design in LLM-as-a-judge approaches
Improving generalizability of evaluation frameworks across diverse tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated multi-agent persona construction from documents
Multi-agent group debates for diverse feedback
Alignment with human expert ratings across domains
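The last bullet concerns agreement with human expert ratings. Such alignment is commonly quantified with a rank correlation between the automated judge's scores and human scores; the snippet below is an illustrative example using Spearman's rho on made-up scores, not the paper's reported protocol or data.

```python
from scipy.stats import spearmanr  # pip install scipy

# Hypothetical per-item scores on one evaluation dimension (not from the paper).
human_scores = [4, 3, 5, 2, 4, 1, 3, 5]
judge_scores = [5, 3, 4, 2, 4, 2, 3, 5]   # scores produced by the agent debate

rho, p_value = spearmanr(human_scores, judge_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```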
🔎 Similar Papers
No similar papers found.
Jiaju Chen
Computer Science, Rice University
Human-Computer Interaction, Natural Language Processing

Yuxuan Lu
Northeastern University

Xiaojie Wang
Amazon

Huimin Zeng
University of Illinois Urbana-Champaign

Jing Huang
Amazon

Jiri Gesi
Amazon Science
LLM, Agent, RL, Software Engineering

Ying Xu
Harvard University

Bingsheng Yao
Northeastern University

Dakuo Wang
Northeastern University
Human-AI Collaboration, Human-Centered AI, Human-Computer Interaction, AI for Healthcare, CSCW