Supporting Artifact Evaluation with LLMs: A Study with Published Security Research Papers

πŸ“… 2025-12-08
πŸ›οΈ BigData Congress [Services Society]
πŸ“ˆ Citations: 1
✨ Influential: 1
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the efficiency bottleneck of manual reproducibility reviews in safety-critical domains such as the Internet of Things and cyber-physical systems, which hampers research transparency and deployability. The paper presents the first systematic framework leveraging large language models (LLMs) to automate reproducibility assessment by integrating natural language understanding, code generation, sandboxed environment auto-configuration, and rule-guided flaw detection. This approach enables reproducibility scoring, automatic execution environment setup, and identification of methodological flaws. Experimental results demonstrate that the proposed method achieves over 72% accuracy in reproducibility judgment, automatically constructs executable environments for 28% of runnable artifacts, and attains F1 scores exceeding 92% across seven common categories of methodological defects, substantially enhancing both the efficiency and quality of reproducibility review.

πŸ“ Abstract
Artifact Evaluation (AE) is essential for ensuring the transparency and reliability of research and for closing the gap between exploratory work and real-world deployment. This gap matters especially in cybersecurity, and in particular in IoT and CPSs, where large-scale, heterogeneous, and privacy-sensitive data meet safety-critical actuation. Yet, manual reproducibility checks are time-consuming and do not scale with growing submission volumes. In this work, we demonstrate that Large Language Models (LLMs) can provide powerful support for AE tasks: (i) text-based reproducibility rating, (ii) autonomous sandboxed execution environment preparation, and (iii) assessment of methodological pitfalls. Our reproducibility-assessment toolkit yields an accuracy of over 72% and autonomously sets up execution environments for 28% of runnable cybersecurity artifacts. Our automated pitfall assessment detects seven prevalent pitfalls with high accuracy (F1 > 92%). Hence, the toolkit significantly reduces reviewer effort and, when integrated into established AE processes, could incentivize authors to submit higher-quality and more reproducible artifacts. IoT, CPS, and cybersecurity conferences and workshops may integrate the toolkit into their peer-review processes to support reviewers' decisions on awarding artifact badges, improving the overall sustainability of the process.
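The abstract's "assessment of methodological pitfalls" combines LLM judgment with rule guidance. To illustrate only the rule-guided flavor of such a check, a minimal sketch in plain Python; all rule names and patterns here are hypothetical examples, not taken from the paper's toolkit, and a real system would delegate the nuanced judgments to an LLM:

```python
import re

# Hypothetical pitfall rules matched against an artifact's README or scripts.
# Each rule maps a descriptive name to a pattern that flags a common defect.
PITFALL_RULES = {
    # "pip install <pkg>" without a pinned "==" version hurts reproducibility
    "missing_dependency_pins": re.compile(r"pip install (?!.*==)[a-zA-Z]"),
    # absolute user-specific paths rarely exist on a reviewer's machine
    "hardcoded_path": re.compile(r"/home/\w+/"),
    # world-writable permissions are a red flag in security artifacts
    "world_writable": re.compile(r"chmod 777"),
}

def detect_pitfalls(text, rules=PITFALL_RULES):
    """Return the sorted names of all rules whose pattern matches the text."""
    return sorted(name for name, pattern in rules.items() if pattern.search(text))
```

Such handwritten rules only catch surface-level defects; the paper's contribution is precisely that LLMs can extend this kind of check to the seven methodological pitfall categories with F1 above 92%.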
Problem

Research questions and friction points this paper is trying to address.

Artifact Evaluation
Reproducibility
Cybersecurity
IoT
CPS
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Artifact Evaluation
Reproducibility
Cybersecurity
Automated Assessment
πŸ”Ž Similar Papers
No similar papers found.
David Heye
Communication and Distributed Systems, RWTH Aachen University, Germany
Karl Kindermann
Communication and Distributed Systems, RWTH Aachen University, Germany
Robin Decker
Communication and Distributed Systems, RWTH Aachen University, Germany
Johannes LohmΓΆller
Communication and Distributed Systems, RWTH Aachen University, Germany
Anastasiia Belova
Data Stream Management and Analysis, RWTH Aachen University, Germany
Sandra Geisler
RWTH Aachen University
Data Streams, Data Lakes, Data Quality, Health Informatics, Data Integration
Klaus Wehrle
Professor at RWTH Aachen University
Communication Systems, Security, Privacy, Industrial Internet of Things
Jan Pennekamp
RWTH Aachen University
Security, Privacy, Privacy Enhancing Technologies, Industrial Internet of Things, Communication Systems