Can LLMs Replace Human Evaluators? An Empirical Study of LLM-as-a-Judge in Software Engineering

📅 2025-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how well large language models (LLMs) acting as automated evaluators ("LLM-as-a-judge") align with human judgments on software engineering tasks, specifically code translation, code generation, and code summarization. We conduct the first systematic, fine-grained empirical study of this alignment, evaluating seven LLM-as-a-judge methods built on general-purpose LLMs and two LLMs fine-tuned for evaluation, under zero-shot and few-shot prompting, using human-annotated benchmarks and Pearson correlation analysis. Results show that output-based direct scoring significantly outperforms conventional metrics such as ChrF++ and yields score distributions that closely match human scoring patterns. In code translation and generation, LLM-as-a-judge achieves Pearson correlations of 0.813 and 0.685 with human scores, respectively, approaching inter-human agreement levels. The study establishes the validity and practicality of LLM-as-a-judge for SE evaluation and introduces a scalable paradigm for automated, human-aligned assessment of code-related outputs.
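For intuition, here is a minimal sketch of what an output-based direct-scoring judge looks like: the judge LLM is prompted to return a single numeric quality score for each response. The prompt wording, the 1-5 scale, and the `query_llm` helper are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative sketch of output-based direct scoring (not the paper's exact prompt).
# `query_llm` is a hypothetical callable that sends a prompt to any chat LLM
# and returns its text completion.
import re

JUDGE_PROMPT = """You are evaluating a model-generated solution to a software engineering task.

Task description:
{task}

Model output:
{response}

Rate the quality of the model output on a scale from 1 (very poor) to 5 (excellent).
Reply with the numeric score only."""


def direct_score(task: str, response: str, query_llm) -> int:
    """Ask the judge LLM for a single numeric quality score (output-based judging)."""
    reply = query_llm(JUDGE_PROMPT.format(task=task, response=response))
    match = re.search(r"[1-5]", reply)
    if match is None:
        raise ValueError(f"Judge reply contained no score: {reply!r}")
    return int(match.group())
```

Scores collected this way, one per response, can then be correlated against human annotations, which is the alignment measure the study reports.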

📝 Abstract
Recently, large language models (LLMs) have been deployed to tackle various software engineering (SE) tasks like code generation, significantly advancing the automation of SE. However, assessing the quality of this LLM-generated code and text remains challenging. The commonly used Pass@k metric requires extensive unit tests and configured environments, incurs high labor costs, and is not suitable for evaluating LLM-generated text. Conventional metrics like BLEU, which measure only lexical rather than semantic similarity, have also come under scrutiny. In response, a new trend has emerged to employ LLMs for automated evaluation, known as LLM-as-a-judge. These LLM-as-a-judge methods are claimed to better mimic human assessment than conventional metrics, without relying on high-quality reference answers. Nevertheless, their actual alignment with human judgment in SE tasks remains unexplored. In this paper, we empirically explore LLM-as-a-judge methods for evaluating SE tasks, focusing on their alignment with human judgments. We select seven LLM-as-a-judge methods that utilize general-purpose LLMs, alongside two LLMs specifically fine-tuned for evaluation. After generating and manually scoring LLM responses on three recent SE datasets covering code translation, code generation, and code summarization, we prompt these methods to evaluate each response. Finally, we compare the scores generated by these methods with human evaluation. The results indicate that output-based methods reach the highest Pearson correlations of 81.32 and 68.51 with human scores in code translation and generation, achieving near-human evaluation and noticeably outperforming ChrF++, one of the best conventional metrics, at 34.23 and 64.92. Such output-based methods prompt LLMs to output judgments directly and exhibit more balanced score distributions that resemble human score patterns. Finally, we provide...
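To make the alignment analysis concrete, the sketch below computes a Pearson correlation between hypothetical human and judge scores with scipy, and a ChrF++ reference-based score with sacrebleu (its CHRF metric with word_order=2); the scores and code snippets are made up for illustration and are not the paper's benchmark data.

```python
# Minimal sketch of the alignment analysis, assuming scipy and sacrebleu are installed.
# All scores and code snippets below are invented examples, not the paper's data.
from scipy.stats import pearsonr
from sacrebleu.metrics import CHRF

# Hypothetical per-response quality scores from human annotators and an LLM judge.
human_scores = [5, 4, 2, 3, 5, 1, 4, 2]
judge_scores = [5, 4, 3, 3, 4, 1, 4, 2]

corr, p_value = pearsonr(human_scores, judge_scores)
print(f"LLM judge vs. human: Pearson r = {corr:.3f} (p = {p_value:.3f})")

# Conventional reference-based baseline: ChrF++ (character n-grams plus word n-grams).
chrf_pp = CHRF(word_order=2)
hypothesis = "def add(a, b):\n    return a + b"
reference = "def add(x, y):\n    return x + y"
print(f"ChrF++ = {chrf_pp.sentence_score(hypothesis, [reference]).score:.2f}")
```

The paper's comparison is this kind of correlation against human scores, computed over manually annotated responses from its three SE datasets, with ChrF++ serving as the conventional baseline.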
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLM-generated code quality
Compare LLM-as-a-judge with human assessment
Assess semantic alignment in SE tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-as-a-judge for automated evaluation
Output-based methods prompt LLMs to output judgments directly
Achieved near-human evaluation alignment