Learning an Efficient Multi-Turn Dialogue Evaluator from Multiple Judges

📅 2025-08-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current LLM-as-a-judge dialogue evaluation methods suffer either from significant single-model bias or from the prohibitive computational overhead of multi-LLM ensembles. To address this, the authors propose a preference knowledge distillation framework that aggregates the collective judgments of multiple LLM evaluators, including both scalar scores and pairwise comparisons, and distills them into a single lightweight assessment model. The approach preserves the high inter-annotator agreement of a diverse judge ensemble while drastically reducing inference latency and resource consumption: it outperforms prior state-of-the-art methods across seven standard dialogue evaluation benchmarks, accelerates inference by 3–5×, improves robustness, and supports flexible deployment. Key contributions include a joint optimization mechanism that leverages heterogeneous preference signals and a scalable knowledge distillation paradigm tailored to evaluator consolidation.
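The summary mentions a joint optimization over heterogeneous preference signals, i.e. scalar ratings and pairwise comparisons, but does not give the objective itself. The sketch below shows one plausible formulation under stated assumptions: a pointwise MSE term toward the ensemble's averaged scores combined with a Bradley-Terry-style pairwise term toward the ensemble's preference direction. The function name, the mixing weight alpha, and this specific loss combination are illustrative assumptions, not the paper's actual objective.

```python
import torch
import torch.nn.functional as F

def joint_distillation_loss(student_a, student_b,
                            teacher_a, teacher_b, alpha=0.5):
    """Hypothetical joint objective over heterogeneous preference signals.

    student_a, student_b: (batch,) scalar scores from the lightweight
        student evaluator for responses A and B.
    teacher_a, teacher_b: (batch,) judge scores averaged over the LLM
        ensemble for the same responses.
    """
    # Pointwise term: regress the student's scores onto the ensemble mean.
    pointwise = F.mse_loss(student_a, teacher_a) + F.mse_loss(student_b, teacher_b)

    # Pairwise term: reproduce the ensemble's preference direction with a
    # Bradley-Terry-style logistic loss on the score difference.
    prefer_a = (teacher_a > teacher_b).float()
    pairwise = F.binary_cross_entropy_with_logits(student_a - student_b, prefer_a)

    # alpha trades off fidelity to scalar ratings vs. pairwise preferences.
    return alpha * pointwise + (1.0 - alpha) * pairwise
```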

📝 Abstract
Evaluating the conversational abilities of large language models (LLMs) remains a challenging task. Current mainstream approaches primarily rely on the "LLM-as-a-judge" paradigm, where an LLM is prompted to serve as an evaluator to assess dialogue quality. However, such methods often suffer from various biases, which undermine the reliability and consistency of the evaluation results. To mitigate these biases, recent methods employ multiple LLMs as judges and aggregate their judgments to select the optimal assessment. Although effective, this multi-judge approach incurs significant computational overhead during inference. In this paper, we propose an efficient multi-turn dialogue evaluator that captures the collective wisdom of multiple LLM judges by aggregating their preference knowledge into a single model. Our approach preserves the advantages of diverse multi-judge feedback while drastically reducing the evaluation cost, enabling fast and flexible dialogue quality assessment. Extensive experiments on seven single-rating and pairwise-comparison dialogue evaluation benchmarks demonstrate that our method outperforms existing baselines across diverse scenarios, showcasing its efficiency and robustness.
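The abstract describes aggregating the preference knowledge of several judges before distillation but does not specify the aggregation rule here. A minimal sketch, assuming a mean over scalar ratings and a majority vote over pairwise comparisons (both hypothetical choices), illustrates the kind of training targets such a pipeline could produce:

```python
import statistics

def aggregate_judges(scalar_scores, pairwise_votes):
    """Collapse multi-judge feedback into single distillation targets.

    scalar_scores: per-judge ratings for one response, e.g. [7, 8, 6]
    pairwise_votes: per-judge picks for an (A, B) response pair, 'A' or 'B'
    """
    # Scalar target: mean rating across the judge ensemble.
    target_score = statistics.mean(scalar_scores)

    # Preference target: majority vote over the pairwise comparisons
    # (ties broken toward 'A' for simplicity).
    preferred = 'A' if pairwise_votes.count('A') >= pairwise_votes.count('B') else 'B'
    return target_score, preferred

# Three judges rate a response and compare a response pair.
score, winner = aggregate_judges([7, 8, 6], ['A', 'A', 'B'])
# score == 7.0, winner == 'A'
```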
Problem

Research questions and friction points this paper is trying to address.

Mitigating biases in LLM-based dialogue evaluation methods
Reducing computational overhead of multi-judge LLM assessments
Aggregating multiple LLM judges' knowledge into one efficient model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distills scalar-score and pairwise-preference knowledge from multiple LLM judges into a single lightweight evaluator
Drastically reduces evaluation cost relative to multi-judge ensembles
Outperforms existing baselines across seven single-rating and pairwise-comparison benchmarks
Yuqi Tang
Duke University
Medical Imaging · Computer Vision · Image Quality
Kehua Feng
Ph.D. student, Zhejiang University
Natural Language Processing · Language Model · AI for Science
Yunfeng Wang
Alibaba Group
Zhiwen Chen
Alibaba Group
Chengfei Lv
Alibaba Group
Gang Yu
Alibaba Group
Qiang Zhang
ZJU-UIUC Institute, Zhejiang University; ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University
Keyan Ding
ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University