🤖 AI Summary
This study investigates how rating-scale design influences alignment between large language models (LLMs) and human raters when LLMs serve as evaluators, with particular attention to how the choice of scale moderates human–LLM agreement across task types. Human and LLM ratings were collected on three common rating scales across six benchmark tasks spanning objective, subjective, and mixed categories, and absolute agreement was quantified via intraclass correlation coefficients (ICC). The work reveals, for the first time, that a 0–5 point scale achieves the highest cross-task human–LLM alignment. It further demonstrates that scale selection significantly affects alignment magnitude and that aggregate metrics can obscure substantial heterogeneity across tasks. Additionally, systematic alignment disparities are identified across gender subgroups.
📝 Abstract
Large language models (LLMs) are increasingly used as automated evaluators, yet prior work demonstrates that these LLM judges often score inconsistently when the prompt is altered. The effect of the grading scale itself, however, remains underexplored. We study the LLM-as-a-judge problem by comparing two kinds of raters, humans and LLMs, collecting ratings from both groups on three scales and across six benchmarks that include objective, open-ended subjective, and mixed tasks. Using intraclass correlation coefficients (ICC) to measure absolute agreement, we find that LLM judgments are not perfectly consistent across scales on subjective benchmarks, and that the choice of scale substantially shifts human–LLM agreement even when within-group panel reliability is high. Aggregated over tasks, the 0–5 grading scale yields the strongest human–LLM alignment. We further demonstrate that pooled reliability can mask benchmark heterogeneity, and we reveal systematic subgroup differences in alignment across gender groups, underscoring scale design and subgroup-level diagnostics as essential components of LLM-as-a-judge protocols.
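The abstract does not specify which ICC variant the study uses; absolute agreement between a panel of raters is conventionally measured with ICC(2,1) (two-way random effects, single rater, absolute agreement). A minimal NumPy sketch of that coefficient, assuming a complete targets × raters rating matrix with no missing values:

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_targets, k_raters) matrix of scores, no missing values.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-target means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Mean squares from the two-way ANOVA decomposition.
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between targets
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    sse = (np.sum((ratings - grand) ** 2)
           - k * np.sum((row_means - grand) ** 2)
           - n * np.sum((col_means - grand) ** 2))
    mse = sse / ((n - 1) * (k - 1))                        # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters in perfect agreement yield an ICC of 1.0.
perfect = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
print(icc_2_1(perfect))  # 1.0
```

In practice a vetted library implementation such as `pingouin.intraclass_corr`, which reports all ICC variants from long-format data, would typically be preferred over hand-rolled ANOVA sums.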