Training an LLM-as-a-Judge Model: Pipeline, Insights, and Practical Lessons

📅 2025-02-05
🤖 AI Summary
Large language models (LLMs) serving as evaluators face a fundamental trade-off among fine-grained assessment, contextual adaptability, alignment with human preferences, and inference efficiency. Method: We propose Themis—a fine-grained, scenario-adaptive LLM evaluator—featuring (1) two controllable instruction generation mechanisms for semantically precise, context-aware evaluation prompts; (2) the first dual-human-annotated meta-evaluation benchmark; and (3) an analysis of knowledge distillation failure in evaluation tasks, coupled with a mitigation strategy based on instruction-following difficulty. We integrate supervised fine-tuning (SFT), multi-objective distillation, dynamic prompt engineering, and data balancing. Results: Themis achieves an average Kendall’s τ ≥ 0.82 across diverse evaluation tasks—significantly outperforming baselines—while reducing inference cost by over 60%.
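The headline metric, Kendall's τ, measures how well the judge's ranking of responses agrees with the human ranking: it is the fraction of concordant item pairs minus the fraction of discordant ones. A minimal sketch of the tau-a variant, with hypothetical judge and human scores (the paper does not specify which τ variant or tie handling it uses):

```python
from itertools import combinations

def kendall_tau(judge_scores, human_scores):
    """Kendall's tau-a: (concordant pairs - discordant pairs) / total pairs."""
    concordant = discordant = 0
    for (j1, h1), (j2, h2) in combinations(zip(judge_scores, human_scores), 2):
        sign = (j1 - j2) * (h1 - h2)
        if sign > 0:
            concordant += 1   # both rankings order the pair the same way
        elif sign < 0:
            discordant += 1   # the rankings disagree on this pair
    n_pairs = len(judge_scores) * (len(judge_scores) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical 1-5 scores from the LLM judge and a human annotator
# on the same five responses:
judge = [4, 2, 5, 3, 1]
human = [5, 1, 4, 3, 2]
print(kendall_tau(judge, human))  # 0.6
```

In practice one would use `scipy.stats.kendalltau`, which also handles ties (tau-b); a τ of 1.0 means identical rankings and 0 means no rank correlation, so an average τ ≥ 0.82 indicates strong agreement with human preferences.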

📝 Abstract
The rapid advancement of large language models (LLMs) has opened new possibilities for their adoption as evaluative judges. This paper introduces Themis, a fine-tuned LLM judge that delivers sophisticated context-aware evaluations. We provide a comprehensive overview of the development pipeline for Themis, highlighting its scenario-dependent evaluation prompts and two novel methods for controlled instruction generation. These designs enable Themis to effectively distill evaluative skills from teacher models, while retaining flexibility for continuous development. We introduce two human-labeled benchmarks for meta-evaluation, demonstrating that Themis can achieve high alignment with human preferences in an economical manner. Additionally, we explore insights into the LLM-as-a-judge paradigm, revealing nuances in performance and the varied effects of reference answers. Notably, we observe that pure knowledge distillation from strong LLMs, though common, does not guarantee performance improvement through scaling. We propose a mitigation strategy based on instruction-following difficulty. Furthermore, we provide practical guidelines covering data balancing, prompt customization, multi-objective training, and metric aggregation. We aim for our method and findings, along with the fine-tuning data, benchmarks, and model checkpoints, to support future research and development in this area.
Problem

Research questions and friction points this paper addresses.

Developing a context-aware LLM judge
Creating controlled instruction-generation methods
Aligning LLM judgments with human preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned LLM judge Themis
Scenario-dependent evaluation prompts
Controlled instruction generation methods
Authors

Renjun Hu, East China Normal University (robust ML/AI, LLMs, graph mining)
Yi Cheng, Alibaba Cloud Computing, Hangzhou, China
Libin Meng, Alibaba Cloud Computing, Shanghai, China
Jiaxin Xia, Alibaba Cloud Computing, Shanghai, China
Yi Zong, School of Computer Science, Fudan University (NLP)
Xing Shi, Alibaba Cloud Computing, Hangzhou, China
Wei Lin, Alibaba Cloud Computing, Hangzhou, China