SLMEval: Entropy-Based Calibration for Human-Aligned Evaluation of Large Language Models

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM-as-a-Judge calibration methods perform well on structured benchmarks but exhibit weak or even negative correlation with human judgments on open-ended, real-world tasks. To address this, we propose SLMEval, a few-shot calibration framework grounded in entropy maximization—the first to introduce entropy-driven implicit quality-distribution modeling into LLM evaluation calibration. Our method estimates the latent quality distribution by maximizing the entropy of the model's output distribution, then combines preference-data reweighting with lightweight fine-tuning to improve generalization and human alignment. Evaluated on real production data, our approach achieves a Spearman correlation of 0.57 with human judgments—significantly outperforming G-Eval, which yields a negative correlation—while reducing evaluation cost by 5–30×. It consistently surpasses state-of-the-art calibration methods across multiple metrics and settings.
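The paper does not publish its calibration algorithm in this summary, but the core idea—fitting a maximum-entropy latent quality distribution and reweighting raw evaluator scores against it—can be sketched. Everything below is a hypothetical illustration: `max_entropy_distribution` solves the classic max-entropy problem under a mean constraint (the solution is a Gibbs distribution `p_i ∝ exp(λ·level_i)`), and `reweight_scores` is an assumed quantile-matching recalibration step; neither function name comes from the paper.

```python
import numpy as np

def max_entropy_distribution(levels, target_mean, lr=0.1, steps=2000):
    """Maximum-entropy distribution over discrete quality levels whose
    mean matches target_mean. The optimum is exponential-family,
    p_i ∝ exp(lam * level_i); lam is found by a simple fixed-point
    iteration on the mean constraint."""
    levels = np.asarray(levels, dtype=float)
    lam = 0.0
    for _ in range(steps):
        p = np.exp(lam * levels)
        p /= p.sum()
        mean = float(p @ levels)
        lam += lr * (target_mean - mean)  # raise lam if mean is too low
    return p

def reweight_scores(raw_scores, latent_p, levels):
    """Hypothetical recalibration: map raw evaluator scores onto the
    latent quality distribution by quantile matching, preserving the
    evaluator's ranking while adopting the latent score scale."""
    raw_scores = np.asarray(raw_scores, dtype=float)
    ranks = np.argsort(np.argsort(raw_scores))
    quantiles = (ranks + 0.5) / len(raw_scores)   # mid-rank quantiles
    cdf = np.cumsum(latent_p)
    idx = np.searchsorted(cdf, quantiles)
    return np.asarray(levels, dtype=float)[np.clip(idx, 0, len(levels) - 1)]
```

With a 1–5 quality scale and a target mean of 3.0, the max-entropy solution is the uniform distribution, and reweighting maps the lowest raw score to level 1, the median to level 3, and the highest to level 5 while keeping the original ordering intact.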

📝 Abstract
The LLM-as-a-Judge paradigm offers a scalable, reference-free approach for evaluating language models. Although several calibration techniques have been proposed to better align these evaluators with human judgment, prior studies focus primarily on narrow, well-structured benchmarks. As a result, it remains unclear whether such calibrations generalize to real-world, open-ended tasks. In this work, we show that SOTA calibrated evaluators often fail in these settings, exhibiting weak or even negative correlation with human judgments. To address this, we propose SLMEval, a novel and efficient calibration method based on entropy maximization over a small amount of human preference data. By estimating a latent distribution over model quality and reweighting evaluator scores accordingly, SLMEval achieves strong correlation with human evaluations across two real-world production use cases and a public benchmark. For example, on one such task, SLMEval achieves a Spearman correlation of 0.57 with human judgments, while G-Eval yields a negative correlation. In addition, SLMEval reduces evaluation costs by 5–30× compared to GPT-4-based calibrated evaluators such as G-Eval.
Problem

Research questions and friction points this paper is trying to address.

Calibrating LLM-as-a-Judge evaluators to align with human judgment on open-ended, real-world tasks
Addressing the weak or negative correlation between SOTA calibrated evaluators and human judgments
Reducing evaluation costs while maintaining accuracy via an entropy-based method
Innovation

Methods, ideas, or system contributions that make the work stand out.

Entropy maximization over few-shot human preference data for evaluator calibration
Latent quality-distribution estimation with score reweighting for human alignment
5–30× lower evaluation cost than GPT-4-based calibrated evaluators