Reflections on the Reproducibility of Commercial LLM Performance in Empirical Software Engineering Studies

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates reproducibility challenges in empirical software engineering research leveraging commercial large language models (LLMs). Method: We conducted rigorous replication experiments on 18 publicly available artifacts from ICSE/ASE 2024 LLM studies, performing detailed result comparisons and diagnostic analysis. Contribution/Results: Only five studies met minimal replication prerequisites, and none achieved full result reproduction; key barriers included output nondeterminism, undocumented prompt engineering, and unreported environment dependencies. We provide the first quantitative characterization of structural reproducibility deficits in commercial LLM research and propose a four-dimensional framework—encompassing prompt standardization, runtime environment logging, stochasticity control, and confidence-aware result reporting—to enhance rigor. This work delivers empirical evidence and actionable guidelines for improving the credibility of LLM-driven empirical software engineering research.
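The framework dimensions the summary names, particularly stochasticity control and runtime environment logging, map naturally onto concrete experiment code. The sketch below is a minimal, hypothetical illustration, assuming the OpenAI Python client (v1.x); the model name, seed, and log schema are assumptions for illustration, not the paper's prescribed tooling. It pins the sampling settings and records the environment details a replicator would need.

```python
import hashlib
import json
import platform

import openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def logged_completion(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Run one chat completion with stochasticity controls and append a
    JSONL record of everything a replicator would need (illustrative schema)."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduces, but does not eliminate, nondeterminism
        seed=42,        # best-effort determinism on the provider's side
    )
    record = {
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "temperature": 0,
        "seed": 42,
        # fingerprint of the backend configuration that served the request;
        # runs are only comparable when this matches
        "system_fingerprint": response.system_fingerprint,
        "python_version": platform.python_version(),
        "openai_client_version": openai.__version__,
        "output": response.choices[0].message.content,
    }
    with open("run_log.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Logging the system fingerprint matters because the provider can change the serving configuration behind a fixed model name, which is one form of the unreported environment dependencies the summary flags.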

📝 Abstract
Large Language Models (LLMs) have gained remarkable interest in industry and academia. The increasing interest in LLMs in academia is also reflected in the number of publications on this topic over recent years. For instance, 78 of the roughly 425 publications at ICSE 2024 alone performed experiments with LLMs. Conducting empirical studies with LLMs remains challenging and raises questions about how to achieve reproducible results, for both other researchers and practitioners. One important step towards excelling in empirical research on LLMs and their application is to first understand to what extent current research results are reproducible and what factors may impede reproducibility. This investigation is the scope of our work. We contribute an analysis of the reproducibility of LLM-centric studies, provide insights into the factors impeding reproducibility, and discuss suggestions on how to improve the current state. In particular, we studied the 86 articles describing LLM-centric studies published at ICSE 2024 and ASE 2024. Of the 86 articles, 18 provided research artefacts and used OpenAI models. We attempted to replicate those 18 studies. Of the 18 studies, only five were fit for reproduction, and for none of the five were we able to fully reproduce the results. Two studies appeared to be partially reproducible, and three did not appear to be reproducible at all. Our results highlight not only the need for stricter research artefact evaluations but also for more robust study designs to ensure the reproducibility of future publications.
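The abstract names output nondeterminism as one barrier the replications ran into. As a hedged illustration of how such nondeterminism can be quantified, the probe below (a sketch of my own, again assuming the OpenAI Python client; nondeterminism_probe and the model name are hypothetical) repeats one prompt under identical settings and counts the distinct completions:

```python
from collections import Counter

from openai import OpenAI

client = OpenAI()

def nondeterminism_probe(prompt: str, n: int = 10,
                         model: str = "gpt-4o-mini") -> Counter:
    """Send the same prompt n times with identical settings and count the
    distinct completions that come back (illustrative, not the authors' protocol)."""
    outputs = []
    for _ in range(n):
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # even at temperature 0, outputs can vary
        )
        outputs.append(response.choices[0].message.content)
    counts = Counter(outputs)
    print(f"{len(counts)} distinct outputs across {n} identical calls")
    return counts
```

Reporting the spread across repeated runs, rather than a single run, is one way to make reported results robust to this variability.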
Problem

Research questions and friction points this paper is trying to address.

Analyzing reproducibility challenges in software engineering studies that use commercial LLMs
Identifying factors that impede replication of LLM-centric empirical research results
Evaluating the current reproducibility status of LLM studies published at major conferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzed the reproducibility of LLM-centric studies from ICSE 2024 and ASE 2024
Identified factors impeding the replication of published results
Proposed stricter artefact evaluations and more robust study designs
Authors

Florian Angermeir
fortiss and Blekinge Institute of Technology
Maximilian Amougou
fortiss
Mark Kreitz
University of the Bundeswehr Munich
Andreas Bauer
Blekinge Institute of Technology
Matthias Linhuber
Technical University of Munich
Davide Fucci
Software Engineering Research and Education Lab, Blekinge Institute of Technology
Fabiola Moyón C.
Siemens AG
Daniel Mendez
Blekinge Institute of Technology and fortiss GmbH
Tony Gorschek
Blekinge Institute of Technology and fortiss