Checklist Engineering Empowers Multilingual LLM Judges

📅 2025-07-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high cost and low efficiency of multilingual text evaluation—stemming from reliance on expensive closed-source models or extensive annotated data for fine-tuning—this paper proposes CE-Judge, a training-free, open-source zero-shot evaluation framework based on Checklist Engineering. Its core innovation is the first application of structured checklist modeling to multilingual LLM evaluation, leveraging prompt engineering to explicitly guide open-source LLMs (e.g., Qwen, Llama) in cross-lingual quality judgment. Evaluated on three multilingual benchmarks—XLSum, MLQE-PE, and TREELING—CE-Judge achieves performance on par with GPT-4o under both pairwise and pointwise evaluation settings, significantly outperforming existing zero-shot baselines. The framework demonstrates high efficiency, strong scalability across languages and tasks, and robust generalization without any parameter updates or language-specific adaptation.
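The summary describes checklist-based prompting: an open-source LLM is guided by an explicit checklist to judge output quality. A minimal sketch of how such pointwise checklist judging might look is below; the prompt wording, function names, and the fraction-of-items-passed scoring rule are illustrative assumptions, not CE-Judge's actual implementation.

```python
# Hypothetical sketch of checklist-style pointwise judging.
# Prompt template and scoring rule are assumptions for illustration only.

CHECKLIST_PROMPT = """You are evaluating a {task} output in {language}.
Answer each checklist item with YES or NO, one answer per line.

Checklist:
{items}

Source text: {source}
Candidate output: {candidate}"""


def build_prompt(task: str, language: str, items: list[str],
                 source: str, candidate: str) -> str:
    """Fill the checklist template with numbered items for the judge LLM."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(items, start=1))
    return CHECKLIST_PROMPT.format(
        task=task, language=language, items=numbered,
        source=source, candidate=candidate,
    )


def score_from_answers(answers: list[str]) -> float:
    """Map the judge's YES/NO answers to a pass rate in [0, 1]."""
    passed = sum(1 for a in answers if a.strip().upper().startswith("YES"))
    return passed / len(answers)
```

In this sketch the checklist items would themselves be generated or curated per task (e.g. "Is the summary faithful to the source?"), and the judge model's line-by-line YES/NO answers are aggregated into a single quality score.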

📝 Abstract
Automated text evaluation has long been a central issue in Natural Language Processing (NLP). Recently, the field has shifted toward using Large Language Models (LLMs) as evaluators, a trend known as the LLM-as-a-Judge paradigm. While promising and easily adaptable across tasks, this approach has seen limited exploration in multilingual contexts. Existing multilingual studies often rely on proprietary models or require extensive training data for fine-tuning, raising concerns about cost, time, and efficiency. In this paper, we propose Checklist Engineering based LLM-as-a-Judge (CE-Judge), a training-free framework that uses checklist intuition for multilingual evaluation with an open-source model. Experiments across multiple languages and three benchmark datasets, under both pointwise and pairwise settings, show that our method generally surpasses the baselines and performs on par with the GPT-4o model.
Problem

Research questions and friction points this paper is trying to address.

Limited exploration of LLM-as-a-Judge in multilingual contexts
High cost and inefficiency of proprietary models and fine-tuning
Need for training-free multilingual evaluation with open-source models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Checklist Engineering for multilingual LLM evaluation
Training-free framework with open-source model
Outperforms baselines, matches GPT-4o performance
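Beyond pointwise scoring, the paper also evaluates in a pairwise setting, where the judge must prefer one of two candidate outputs. One plausible way to turn per-candidate checklist scores into a pairwise verdict is sketched below; the decision rule and the tie margin are assumptions, not the paper's stated method.

```python
# Hypothetical pairwise decision rule: compare two candidates by their
# checklist pass rates. The tie margin is an illustrative assumption.

def pairwise_verdict(score_a: float, score_b: float,
                     tie_margin: float = 0.05) -> str:
    """Return 'A', 'B', or 'tie' based on the checklist-score difference."""
    diff = score_a - score_b
    if abs(diff) <= tie_margin:
        return "tie"
    return "A" if diff > 0 else "B"
```

In practice a pairwise judge could also be prompted to answer each checklist item for both candidates side by side, rather than scoring them independently as done here.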