🤖 AI Summary
Existing LLM-as-a-Judge calibration methods perform well on structured benchmarks but exhibit weak or even negative correlation with human judgments on open, real-world tasks. To address this, we propose SLMEval, a few-shot calibration framework grounded in entropy maximization. It is the first to introduce entropy-driven implicit quality distribution modeling into LLM evaluation calibration: it estimates the latent quality distribution by maximizing the entropy of the model's output distribution, then combines preference-data reweighting with lightweight fine-tuning to improve generalization and human alignment. Evaluated on real production data, our approach achieves a Spearman correlation of 0.57 with human judgments, significantly outperforming G-Eval, which yields a negative correlation, while reducing evaluation cost by 5–30×. It consistently surpasses state-of-the-art calibration methods across multiple metrics and settings.
📝 Abstract
The LLM-as-a-Judge paradigm offers a scalable, reference-free approach to evaluating language models. Although several calibration techniques have been proposed to better align these evaluators with human judgment, prior studies focus primarily on narrow, well-structured benchmarks, so it remains unclear whether such calibrations generalize to real-world, open-ended tasks. In this work, we show that state-of-the-art calibrated evaluators often fail in these settings, exhibiting weak or even negative correlation with human judgments. To address this, we propose SLMEval, a novel and efficient calibration method based on entropy maximization over a small amount of human preference data. By estimating a latent distribution over model quality and reweighting evaluator scores accordingly, SLMEval achieves strong correlation with human evaluations across two real-world production use cases and a public benchmark. For example, on one such task, SLMEval achieves a Spearman correlation of 0.57 with human judgments, while G-Eval yields a negative correlation. In addition, SLMEval reduces evaluation costs by 5–30× compared to GPT-4-based calibrated evaluators such as G-Eval.
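To make the core recipe concrete, the sketch below shows one generic way to fit a maximum-entropy distribution over discrete quality levels from a target statistic derived from a handful of human preferences, and then reweight an evaluator's raw score distribution by that prior. This is a minimal illustration of the general maximum-entropy idea, not the paper's actual implementation; the function names, the discrete 1–5 quality scale, and the single mean constraint are all assumptions for the example.

```python
import math

def maxent_quality_dist(levels, target_mean):
    """Maximum-entropy distribution over discrete quality levels subject to a
    single mean constraint. The classic solution is an exponential tilting
    p_i ∝ exp(lam * x_i); we solve for the multiplier lam by bisection,
    using the fact that the tilted mean is monotone increasing in lam."""
    def mean_at(lam):
        w = [math.exp(lam * x) for x in levels]
        z = sum(w)
        return sum(x * wi for x, wi in zip(levels, w)) / z

    lo, hi = -50.0, 50.0  # bracket for lam; fine for small level values
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if mean_at(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * x) for x in levels]
    z = sum(w)
    return [wi / z for wi in w]

def reweight_scores(raw_score_probs, quality_dist):
    """Recalibrate an evaluator's score distribution by multiplying it with
    the latent quality prior and renormalizing."""
    w = [p * q for p, q in zip(raw_score_probs, quality_dist)]
    z = sum(w)
    return [wi / z for wi in w]

# Hypothetical usage: a 1–5 quality scale where a small preference sample
# suggests an average quality of 3.8.
levels = [1, 2, 3, 4, 5]
prior = maxent_quality_dist(levels, target_mean=3.8)
calibrated = reweight_scores([0.1, 0.1, 0.3, 0.3, 0.2], prior)
```

The one-constraint version above keeps the math transparent; with richer preference data one would add more constraints (and multipliers), but the structure, an exponential-family solution whose parameters are fit to match observed statistics, stays the same.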