🤖 AI Summary
This study investigates the response consistency of large language models (LLMs) in code review tasks under zero-temperature settings. We systematically evaluate test–retest reliability across GPT-4o mini, GPT-4o, Claude 3.5 Sonnet, and LLaMA 3.2 90B Vision on 70 Java code submissions, using clean contexts, zero temperature, and repeated identical prompts. This constitutes the first cross-model, quantitative determinism benchmark spanning both leading closed- and open-weight LLMs. Results reveal substantial response variability in all four models, demonstrating that even under nominally deterministic conditions their code review outputs lack engineering-grade consistency. The findings expose inherent reliability risks in deploying LLMs for high-assurance software engineering applications such as automated code review. Moreover, the work establishes a reproducible methodological framework and empirical benchmark for rigorously assessing LLM determinism and output stability in safety-critical development workflows.
📝 Abstract
Large Language Models (LLMs) promise to streamline software code reviews, but their ability to produce consistent assessments remains an open question. In this study, we tested four leading LLMs -- GPT-4o mini, GPT-4o, Claude 3.5 Sonnet, and LLaMA 3.2 90B Vision -- on 70 Java commits from both private and public repositories. By setting each model's temperature to zero, clearing the context, and repeating the exact same prompts five times, we measured how consistently each model generated code-review assessments. Our results reveal that even with temperature minimized, LLM responses varied to different degrees. These findings highlight the inherently limited consistency (test–retest reliability) of LLMs -- even when the temperature is set to zero -- and the need for caution when using LLM-generated code reviews to make real-world decisions.
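The test–retest protocol described above (five repeated runs per commit, then a consistency measure over the responses) can be sketched in a few lines. The metrics below are illustrative assumptions, not the paper's exact scoring: an exact-match rate over all response pairs, plus a mean pairwise string similarity via `difflib`. The sample responses are mock strings, not real model output.

```python
from difflib import SequenceMatcher
from itertools import combinations

def consistency_metrics(responses):
    """Test-retest consistency over repeated LLM outputs for one prompt.

    Returns (exact_match_rate, mean_pairwise_similarity), both computed
    over all unordered pairs of responses.
    """
    pairs = list(combinations(responses, 2))
    exact = sum(a == b for a, b in pairs) / len(pairs)
    similar = sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)
    return exact, similar

# Five mock reviews of the same Java commit, as if returned by five
# identical zero-temperature calls (hypothetical strings for illustration).
runs = [
    "Severity: minor. Unused import on line 3.",
    "Severity: minor. Unused import on line 3.",
    "Severity: minor. Remove the unused import at line 3.",
    "Severity: minor. Unused import on line 3.",
    "Severity: low. Unused import found (line 3).",
]

exact, similar = consistency_metrics(runs)
print(f"exact-match rate: {exact:.2f}, mean similarity: {similar:.2f}")
```

With only three of the five mock runs byte-identical, the exact-match rate is well below 1.0 even though the reviews agree semantically, which is exactly the kind of surface-level variability the study quantifies.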