AI Summary
Existing LLM-as-a-judge approaches suffer from two key limitations: (1) subjective, ad hoc persona descriptions for evaluator agents, and (2) poor cross-task generalizability of evaluation frameworks. To address these, we propose MAJ-EVAL, a framework that automatically discovers multi-dimensional evaluator personas directly from textual data and runs a transferable multi-agent debate, enabling LLMs to collaboratively emulate diverse human assessors. MAJ-EVAL integrates persona modeling, domain-adaptive text mining, and structured debate strategies to generate interpretable, multi-faceted evaluation feedback. Experiments in the education and healthcare domains demonstrate that MAJ-EVAL achieves significantly higher agreement with human expert judgments than conventional automated metrics (e.g., BLEU, ROUGE) and state-of-the-art LLM-as-a-judge methods. This improves both the reliability and the task-agnostic applicability of automated evaluation.
Abstract
Nearly all human work is collaborative; thus, evaluating real-world NLP applications often requires multiple dimensions that align with diverse human perspectives. Because real human evaluators are scarce and costly, the emerging "LLM-as-a-judge" paradigm offers a promising way to use LLM agents to believably simulate human evaluators. Yet existing LLM-as-a-judge approaches face two limitations: agents' persona descriptions are often arbitrarily designed, and the frameworks do not generalize to other tasks. To address these challenges, we propose MAJ-EVAL, a Multi-Agent-as-Judge evaluation framework that can automatically construct multiple evaluator personas with distinct dimensions from relevant text documents (e.g., research papers), instantiate LLM agents with those personas, and engage the agents in in-group debates to generate multi-dimensional feedback. Evaluation experiments in both the educational and medical domains demonstrate that MAJ-EVAL produces evaluation results that align better with human experts' ratings than conventional automated metrics and existing LLM-as-a-judge methods.
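To make the pipeline concrete, here is a minimal sketch of the three stages the abstract describes (persona construction, agent instantiation, in-group debate). All names (`Persona`, `mine_personas`, `agent_judge`, `debate`) and the scoring heuristic are illustrative assumptions, not the paper's actual implementation; a real system would replace the stubs with LLM calls over domain documents.

```python
from dataclasses import dataclass

# Hypothetical sketch of a MAJ-EVAL-style pipeline; every name and
# heuristic here is assumed for illustration, not taken from the paper.

@dataclass
class Persona:
    name: str
    dimension: str  # evaluation dimension mined from domain documents

def mine_personas(documents):
    """Stand-in for persona construction: map each document's declared
    focus to one evaluator persona with a distinct dimension."""
    return [Persona(name=f"expert_{i}", dimension=doc["focus"])
            for i, doc in enumerate(documents)]

def agent_judge(persona, candidate_text, llm=None):
    """Placeholder for instantiating an LLM agent with a persona and
    asking for a 1-5 score along that persona's dimension."""
    if llm is not None:
        return llm(persona, candidate_text)
    # toy heuristic so the sketch runs without an LLM:
    # longer answers score higher, capped at 5
    return min(5, 1 + len(candidate_text.split()) // 10)

def debate(personas, candidate_text, rounds=2):
    """In-group debate: each round, agents see peers' scores and revise
    toward the group mean (a crude stand-in for mutual persuasion)."""
    scores = {p.name: agent_judge(p, candidate_text) for p in personas}
    for _ in range(rounds):
        mean = sum(scores.values()) / len(scores)
        scores = {n: round((s + mean) / 2, 2) for n, s in scores.items()}
    return scores  # one score per persona = multi-dimensional feedback

docs = [{"focus": "factual accuracy"}, {"focus": "readability"},
        {"focus": "pedagogical value"}]
personas = mine_personas(docs)
result = debate(personas, "A concise, accurate explanation of "
                          "photosynthesis for middle-school students.")
print(result)
```

The averaging step is only a placeholder for debate dynamics; the point of the sketch is the structure: personas are mined rather than hand-written, and the final feedback carries one score per discovered dimension.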