Grading Scale Impact on LLM-as-a-Judge: Human-LLM Alignment Is Highest on 0-5 Grading Scale

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 2
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how rating scale design influences the alignment between large language models (LLMs) and human raters when LLMs serve as evaluators, with particular attention to how scale choice moderates human–LLM agreement across different task types. Human and LLM ratings were collected using three common rating scales on six benchmark tasks spanning objective, subjective, and mixed categories, and absolute agreement was quantified via intraclass correlation coefficients (ICC). The work reveals, for the first time, that a 0–5 point scale achieves the highest cross-task human–LLM alignment. It further demonstrates that scale selection significantly affects alignment magnitude and that aggregate metrics may obscure substantial heterogeneity across tasks. Additionally, systematic alignment disparities are identified across gender subgroups.

๐Ÿ“ Abstract
Large language models (LLMs) are increasingly used as automated evaluators, yet prior work demonstrates that these LLM judges often lack consistency in scoring when the prompt is altered. However, the effect of the grading scale itself remains underexplored. We study the LLM-as-a-judge problem by comparing two kinds of raters: humans and LLMs. We collect ratings from both groups on three scales and across six benchmarks that include objective, open-ended subjective, and mixed tasks. Using intraclass correlation coefficients (ICC) to measure absolute agreement, we find that LLM judgments are not perfectly consistent across scales on subjective benchmarks, and that the choice of scale substantially shifts human-LLM agreement, even when within-group panel reliability is high. Aggregated over tasks, the 0-5 grading scale yields the strongest human-LLM alignment. We further demonstrate that pooled reliability can mask benchmark heterogeneity and reveal systematic subgroup differences in alignment across gender groups, underscoring the importance of scale design and sub-level diagnostics as essential components of LLM-as-a-judge protocols.
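The abstract's agreement metric, the intraclass correlation coefficient for absolute agreement, can be illustrated with a minimal sketch. The paper does not publish its analysis code, so the variant below is an assumption: ICC(2,1), the two-way random-effects, single-rater, absolute-agreement form, computed directly from the ANOVA mean squares.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, k_raters) array, e.g. items scored by a
    human panel and an LLM judge on the same grading scale.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Sums of squares from the two-way ANOVA decomposition
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols            # residual

    msr = ss_rows / (n - 1)            # subjects mean square
    msc = ss_cols / (k - 1)            # raters mean square
    mse = ss_err / ((n - 1) * (k - 1)) # error mean square

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

For identical raters the coefficient is 1.0; a rater with a constant +1 offset drops it below 1 (here to 10/13 ≈ 0.77), because the absolute-agreement form penalizes systematic shifts — exactly the property that makes it sensitive to a judge using a scale differently from humans.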
Problem

Research questions and friction points this paper is trying to address.

LLM-as-a-Judge
grading scale
human-LLM alignment
rating consistency
subjective evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-as-a-Judge
grading scale
human-LLM alignment
intraclass correlation
subgroup analysis
👥 Authors
Weiyue Li (Harvard University)
Minda Zhao (CMU)
Weixuan Dong (Stanford University)
Jiahui Cai (UC San Diego)
Yuze Wei (Harvard University)
Michael Pocress (CMU)
Yi Li (Stanford University)
Wanyan Yuan (UC San Diego)
Xiaoyue Wang (Harvard University)
Ruoyu Hou (CMU)
Kaiyuan Lou (unknown affiliation)
Wenqi Zeng (UC San Diego)
Yutong Yang (Mercedes-Benz AG R&D & University of Stuttgart)
Yilun Du (Harvard University)
Mengyu Wang (Stanford University)