Learning to Judge: LLMs Designing and Applying Evaluation Rubrics

πŸ“… 2026-02-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the limitations of traditional human-defined evaluation criteria, which are static and misaligned with large language models' (LLMs) intrinsic understanding of linguistic quality. The authors propose letting LLMs autonomously generate interpretable, task-specific evaluation dimensions and present the first systematic validation of LLMs' capacity to construct such criteria. Experiments with the GER-Eval framework, covering both closed-source models (e.g., GPT-4o) and open-source models (e.g., Llama), demonstrate that LLMs can reliably produce semantically coherent evaluation standards, performing especially well on non-factual tasks. Closed-source models significantly outperform open-source counterparts in cross-model generalization and alignment with human judgments, and the results reveal high internal consistency within individual LLMs but notable fragmentation across different models.
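
The design-then-apply loop described above can be pictured as two model calls: one asking the LLM to draft task-specific rubric dimensions, and one asking it to score an output against that draft. The sketch below assumes an OpenAI-compatible chat API; the prompts, JSON format, and model name are illustrative placeholders, not GER-Eval's actual protocol.

```python
# Hypothetical two-step judge: (1) the model drafts its own rubric for the task,
# (2) it applies that rubric to score a candidate output. Prompts, the JSON
# schema, and the model name are assumptions for illustration only.
import json
from openai import OpenAI

client = OpenAI()          # expects OPENAI_API_KEY in the environment
JUDGE_MODEL = "gpt-4o"     # the paper also evaluates open-weight judges

def draft_rubric(task: str, n_dims: int = 4) -> list[dict]:
    """Ask the LLM to propose task-specific evaluation dimensions."""
    prompt = (
        f"You will evaluate system outputs for this task:\n{task}\n\n"
        f"Propose {n_dims} evaluation dimensions. Reply with JSON only: a list of "
        '{"name": ..., "definition": ..., "scale": "1-5"} objects.'
    )
    resp = client.chat.completions.create(
        model=JUDGE_MODEL, messages=[{"role": "user", "content": prompt}]
    )
    # Real code should parse defensively; models sometimes wrap JSON in fences.
    return json.loads(resp.choices[0].message.content)

def apply_rubric(rubric: list[dict], source: str, output: str) -> dict:
    """Score one output on every dimension of the model-drafted rubric."""
    dims = "\n".join(f"- {d['name']}: {d['definition']}" for d in rubric)
    prompt = (
        f"Rubric:\n{dims}\n\nSource:\n{source}\n\nCandidate output:\n{output}\n\n"
        'Score each dimension from 1 to 5. Reply with JSON only: '
        '{"<dimension name>": <score>, ...}.'
    )
    resp = client.chat.completions.create(
        model=JUDGE_MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return json.loads(resp.choices[0].message.content)

if __name__ == "__main__":
    rubric = draft_rubric("Summarize a news article in three sentences.")
    scores = apply_rubric(rubric, source="<article text>", output="<candidate summary>")
    print(json.dumps(rubric, indent=2))
    print(json.dumps(scores, indent=2))
```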

πŸ“ Abstract
Large language models (LLMs) are increasingly used as evaluators for natural language generation, applying human-defined rubrics to assess system outputs. However, human rubrics are often static and misaligned with how models internally represent language quality. We introduce GER-Eval (Generating Evaluation Rubrics for Evaluation) to investigate whether LLMs can design and apply their own evaluation rubrics. We evaluate the semantic coherence and scoring reliability of LLM-defined criteria and their alignment with human criteria. LLMs reliably generate interpretable and task-aware evaluation dimensions and apply them consistently within models, but their scoring reliability degrades in factual and knowledge-intensive settings. Closed-source models such as GPT-4o achieve higher agreement and cross-model generalization than open-weight models such as Llama. Our findings position evaluation as a learned linguistic capability of LLMs, consistent within models but fragmented across them, and call for new methods that jointly model human and LLM evaluative language to improve reliability and interpretability.
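
How consistently a judge applies its own rubric, and how well two different judges agree, can be quantified with standard rank-correlation statistics. A minimal sketch, assuming SciPy and invented placeholder scores rather than the paper's data:

```python
# Placeholder scores (1-5) that two LLM judges assigned to the same eight
# outputs on one rubric dimension, plus a second pass by judge A to probe
# self-consistency. Values are invented for illustration, not the paper's data.
import numpy as np
from scipy.stats import spearmanr

judge_a_run1 = np.array([4, 5, 3, 2, 4, 5, 1, 3])
judge_a_run2 = np.array([4, 5, 3, 3, 4, 5, 2, 3])  # same judge, re-scored
judge_b      = np.array([3, 5, 2, 2, 5, 4, 1, 2])  # a different LLM judge

within_rho, _ = spearmanr(judge_a_run1, judge_a_run2)  # internal consistency
cross_rho, _ = spearmanr(judge_a_run1, judge_b)        # cross-model agreement

print(f"within-model Spearman rho: {within_rho:.2f}")
print(f"cross-model  Spearman rho: {cross_rho:.2f}")
```

In this framing, a high within-model correlation paired with a markedly lower cross-model correlation corresponds to the pattern the paper describes: evaluation that is consistent within models but fragmented across them.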
Problem

Research questions and friction points this paper is trying to address.

evaluation rubrics
large language models
natural language generation
human-model alignment
scoring reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM evaluation
evaluation rubrics
GER-Eval
scoring reliability
cross-model generalization
Clemencia Siro
Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands
Pourya Aliannejadi
Shahid Beheshti University, Tehran, Iran
Mohammad Aliannejadi
Assistant Professor of Computer Science, IRLab, University of Amsterdam
Information Retrieval · Natural Language Processing · Machine Learning