🤖 AI Summary
Reproducibility remains a critical challenge in large language model (LLM)-driven software engineering (SE) research, undermining credibility and cumulative scientific progress. Method: We conducted a systematic review of 640 papers, combining structured metadata extraction, manual annotation, and cross-platform analysis to diagnose reproducibility deficiencies across code, data, execution environments, and version control. Contribution/Results: We propose a taxonomy of seven reproducibility smell categories and introduce the Reproducibility Maturity Model (RMM), shifting evaluation from a binary "reproducible/not reproducible" verdict to a multi-dimensional, incremental framework. Our findings reveal that even artifact evaluation badges at top-tier conferences exhibit low enforcement fidelity and poor long-term reproducibility, and that transparency practices vary substantially across publication venues. This work provides both a theoretical framework and empirical evidence to enhance the rigor and trustworthiness of LLM-SE research.
📝 Abstract
Reproducibility is a cornerstone of scientific progress, yet its state in large language model (LLM)-based software engineering (SE) research remains poorly understood. This paper presents the first large-scale, empirical study of reproducibility practices in LLM-for-SE research. We systematically mined and analyzed 640 papers published between 2017 and 2025 across premier software engineering, machine learning, and natural language processing venues, extracting structured metadata from publications, repositories, and documentation. Guided by four research questions, we examine (i) the prevalence of reproducibility smells, (ii) how reproducibility has evolved over time, (iii) whether artifact evaluation badges reliably reflect reproducibility quality, and (iv) how publication venues influence transparency practices. Using a taxonomy of seven smell categories (Code and Execution, Data, Documentation, Environment and Tooling, Versioning, Model, and Access and Legal), we manually annotated all papers and associated artifacts. Our analysis reveals persistent gaps in artifact availability, environment specification, versioning rigor, and documentation clarity, despite modest improvements in recent years and increased adoption of artifact evaluation processes at top SE venues. Notably, we find that badges often signal artifact presence but do not consistently guarantee execution fidelity or long-term reproducibility. Motivated by these findings, we provide actionable recommendations to mitigate reproducibility smells and introduce a Reproducibility Maturity Model (RMM) to move beyond binary artifact certification toward multi-dimensional, progressive evaluation of reproducibility rigor.
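The shift from a binary badge to multi-dimensional evaluation can be illustrated with a minimal sketch. The dimension names below come from the paper's seven smell categories; the 0-3 maturity levels, the `maturity_profile`/`overall_level` helpers, and the min-based aggregation rule are illustrative assumptions, not the RMM's actual definitions:

```python
# Hypothetical sketch: scoring an artifact along the seven smell
# categories instead of issuing a single yes/no reproducibility verdict.
# The 0-3 scale and the aggregation rule are assumptions for illustration.

DIMENSIONS = [
    "Code and Execution",
    "Data",
    "Documentation",
    "Environment and Tooling",
    "Versioning",
    "Model",
    "Access and Legal",
]

def maturity_profile(scores: dict[str, int]) -> dict[str, int]:
    """Validate per-dimension scores (assumed 0 = absent .. 3 = fully
    reproducible) and return a complete profile, defaulting any
    unscored dimension to 0."""
    for dim, level in scores.items():
        if dim not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {dim}")
        if not 0 <= level <= 3:
            raise ValueError(f"level out of range for {dim}: {level}")
    return {dim: scores.get(dim, 0) for dim in DIMENSIONS}

def overall_level(profile: dict[str, int]) -> int:
    """A conservative aggregate: the artifact is only as mature as its
    weakest dimension, so one missing facet (e.g. no pinned versions)
    caps the overall level -- unlike a presence-only badge."""
    return min(profile.values())
```

Under this sketch, an artifact with polished code but no environment specification still scores 0 overall, mirroring the paper's observation that badges signaling artifact presence do not guarantee execution fidelity.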