AI Summary
This work addresses the efficiency bottleneck of manual reproducibility reviews in safety-critical domains such as the Internet of Things and cyber-physical systems, which hampers research transparency and deployability. The paper presents the first systematic framework leveraging large language models (LLMs) to automate reproducibility assessment by integrating natural language understanding, code generation, sandboxed environment auto-configuration, and rule-guided flaw detection. This approach enables reproducibility scoring, automatic execution environment setup, and identification of methodological flaws. Experimental results demonstrate that the proposed method achieves over 72% accuracy in reproducibility judgment, automatically constructs executable environments for 28% of runnable artifacts, and attains F1 scores exceeding 92% across seven common categories of methodological defects, substantially enhancing both the efficiency and quality of reproducibility review.
Abstract
Artifact Evaluation (AE) is essential for ensuring the transparency and reliability of research and for closing the gap between exploratory work and real-world deployment. This gap matters particularly in cybersecurity, and especially in IoT and cyber-physical systems (CPSs), where large-scale, heterogeneous, and privacy-sensitive data meet safety-critical actuation. Yet manual reproducibility checks are time-consuming and do not scale with growing submission volumes. In this work, we demonstrate that Large Language Models (LLMs) can provide powerful support for AE tasks: (i) text-based reproducibility rating, (ii) autonomous preparation of sandboxed execution environments, and (iii) assessment of methodological pitfalls. Our reproducibility-assessment toolkit achieves an accuracy of over 72% and autonomously sets up execution environments for 28% of runnable cybersecurity artifacts. Our automated pitfall assessment detects seven prevalent pitfalls with high accuracy (F1 > 92%). Hence, the toolkit significantly reduces reviewer effort and, when integrated into established AE processes, could incentivize authors to submit higher-quality, more reproducible artifacts. IoT, CPS, and cybersecurity conferences and workshops may integrate the toolkit into their peer-review processes to support reviewers' decisions on awarding artifact badges, improving the overall sustainability of the process.
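The three AE stages above could be wired together roughly as follows. This is a minimal, hypothetical sketch: all function names and heuristics are illustrative stand-ins for the LLM-backed components the paper describes, not the authors' implementation.

```python
# Illustrative stand-ins for the three LLM-backed AE stages.
# A real implementation would call an LLM; simple heuristics keep
# this sketch self-contained and runnable.

def rate_reproducibility(readme: str) -> float:
    # Stage (i): text-based reproducibility rating.
    # Heuristic: fraction of cues reviewers commonly look for in a README.
    cues = ("install", "dataset", "license", "docker", "usage")
    text = readme.lower()
    return sum(cue in text for cue in cues) / len(cues)

def prepare_environment(readme: str) -> list[str]:
    # Stage (ii): sandboxed execution-environment preparation.
    # Heuristic: emit setup commands based on dependency files mentioned.
    text = readme.lower()
    commands = []
    if "requirements.txt" in text:
        commands.append("pip install -r requirements.txt")
    if "dockerfile" in text:
        commands.append("docker build -t artifact .")
    return commands

def detect_pitfalls(readme: str) -> list[str]:
    # Stage (iii): rule-guided pitfall detection.
    # Each rule flags a common methodological flaw; only two of the
    # seven pitfall categories are sketched here.
    text = readme.lower()
    rules = {
        "missing-random-seed": "seed" not in text,
        "missing-baseline-comparison": "baseline" not in text,
    }
    return [name for name, fired in rules.items() if fired]

if __name__ == "__main__":
    readme = "Usage: pip install -r requirements.txt; dataset with fixed seed"
    print(rate_reproducibility(readme))  # cues hit: install, dataset, usage
    print(prepare_environment(readme))
    print(detect_pitfalls(readme))
```

In the actual toolkit each stage would prompt an LLM (with the artifact's documentation and code as context) rather than match keywords, but the interface, one score, one environment recipe, and one list of flagged pitfalls per artifact, is the same shape a reviewer-support integration would consume.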