Large Language Models for Software Engineering: A Reproducibility Crisis

📅 2025-11-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reproducibility remains a critical challenge in large language model (LLM)-driven software engineering (SE) research, undermining credibility and cumulative scientific progress. Method: We conducted a systematic literature review of 640 papers, integrating structured metadata extraction, manual annotation, and cross-platform analysis to diagnose reproducibility deficiencies across code, data, execution environments, and version control. Contribution/Results: We propose a novel taxonomy of seven reproducibility defect categories and introduce the Reproducibility Maturity Model (RMM), shifting evaluation from binary “reproducible/not reproducible” to a multi-dimensional, incremental framework. Our findings reveal that even top-tier conferences’ artifact evaluation badges exhibit low enforcement fidelity and poor long-term reproducibility; publication venue transparency practices vary substantially. This work provides both a theoretical framework and empirical evidence to enhance the rigor and trustworthiness of LLM-SE research.

📝 Abstract
Reproducibility is a cornerstone of scientific progress, yet its state in large language model (LLM)-based software engineering (SE) research remains poorly understood. This paper presents the first large-scale, empirical study of reproducibility practices in LLM-for-SE research. We systematically mined and analyzed 640 papers published between 2017 and 2025 across premier software engineering, machine learning, and natural language processing venues, extracting structured metadata from publications, repositories, and documentation. Guided by four research questions, we examine (i) the prevalence of reproducibility smells, (ii) how reproducibility has evolved over time, (iii) whether artifact evaluation badges reliably reflect reproducibility quality, and (iv) how publication venues influence transparency practices. Using a taxonomy of seven smell categories (Code and Execution, Data, Documentation, Environment and Tooling, Versioning, Model, and Access and Legal), we manually annotated all papers and associated artifacts. Our analysis reveals persistent gaps in artifact availability, environment specification, versioning rigor, and documentation clarity, despite modest improvements in recent years and increased adoption of artifact evaluation processes at top SE venues. Notably, we find that badges often signal artifact presence but do not consistently guarantee execution fidelity or long-term reproducibility. Motivated by these findings, we provide actionable recommendations to mitigate reproducibility smells and introduce a Reproducibility Maturity Model (RMM) to move beyond binary artifact certification toward multi-dimensional, progressive evaluation of reproducibility rigor.
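The shift from a binary badge to a multi-dimensional maturity profile can be sketched in code. The seven category names below follow the abstract, but the 0–2 scoring scale and the "weakest dimension" aggregation are illustrative assumptions for this sketch, not the authors' actual RMM rubric.

```python
from dataclasses import dataclass, field
from enum import Enum


class SmellCategory(Enum):
    # The seven smell categories named in the abstract.
    CODE_AND_EXECUTION = "Code and Execution"
    DATA = "Data"
    DOCUMENTATION = "Documentation"
    ENVIRONMENT_AND_TOOLING = "Environment and Tooling"
    VERSIONING = "Versioning"
    MODEL = "Model"
    ACCESS_AND_LEGAL = "Access and Legal"


@dataclass
class ArtifactAssessment:
    # Hypothetical scale: 0 = smell present, 1 = partially
    # addressed, 2 = fully addressed (an assumption, not the paper's).
    scores: dict = field(default_factory=dict)

    def maturity_profile(self) -> dict:
        """Multi-dimensional view instead of a single pass/fail badge."""
        return {cat.value: self.scores.get(cat, 0) for cat in SmellCategory}

    def maturity_level(self) -> int:
        """Overall maturity capped by the weakest dimension,
        so one neglected category (e.g. Versioning) lowers the level."""
        return min(self.scores.get(cat, 0) for cat in SmellCategory)
```

For example, an artifact that fully addresses every category except Versioning would report a full profile across all seven dimensions but only maturity level 1, making the specific weakness visible where a binary badge would hide it.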
Problem

Research questions and friction points this paper is trying to address.

Investigates reproducibility issues in LLM-based software engineering research
Evaluates artifact availability and quality across 640 papers from 2017-2025
Proposes a maturity model to improve reproducibility practices in SE
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale empirical study of reproducibility practices
Manual annotation using taxonomy of seven smell categories
Introducing Reproducibility Maturity Model for progressive evaluation
Mohammed Latif Siddiq
PhD Candidate, Computer Science & Engineering, University of Notre Dame
Software Engineering, Software Security, Applied Machine Learning, Code Generation
Arvin Islam-Gomes
Computer Science and Engineering, University of Notre Dame, Holy Cross Drive, Notre Dame, 46556, IN, USA.
Natalie Sekerak
Computer Science and Engineering, University of Notre Dame, Holy Cross Drive, Notre Dame, 46556, IN, USA.
Joanna C. S. Santos
Assistant Professor, University of Notre Dame
Software Security, Program Analysis, Software Engineering, Code Generation, Software Architecture