🤖 AI Summary
This work addresses the persistent challenge of limited reproducibility in machine learning research, worsened by the absence of standardized post-publication code verification mechanisms in academic journals. To bridge this gap, the paper proposes a community-driven framework that extends the ACM pre-publication reproducibility badge system with a post-publication verification mechanism awarding up to two badges per article. Independent researchers submit replication code which, after successful peer review, is formally recognized through visual badges embedded in the paper's metadata and linked to the verified code in public repositories. By connecting public code repositories with journal metadata systems, the framework covers the motivation, the procedural workflow, and implementation pathways, offering a practical and scalable route toward institutionalizing reproducible research and advancing open science practices.
📝 Abstract
Reproducibility remains a challenge in machine learning research. While code and data availability requirements have become increasingly common, post-publication verification in journals remains limited and largely unformalized. This position paper argues that it is plausible for journals and conference proceedings to implement post-publication verification. We propose a modification of the ACM pre-publication verification badges that allows independent researchers to submit post-publication code replications to the journal, leading to visible verification badges included in the article's metadata. Each article may earn up to two badges, each linked to verified code in its corresponding public repository. We describe the motivation, related initiatives, a formal framework, the potential impact, possible limitations, and alternative views.
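
To make the badge mechanism concrete, here is a minimal sketch of how post-publication badges might be attached to an article's metadata record. The class names, field names (e.g. `badge_type`, `repository_url`), and badge labels are hypothetical illustrations and are not part of the paper's specification; only the two-badge limit and the link to a public repository come from the abstract.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class PostPublicationBadge:
    """Hypothetical record for one verified post-publication replication."""
    badge_type: str          # illustrative label, e.g. "Results Replicated"
    repository_url: str      # public repository holding the verified replication code
    verified_by: str         # independent researcher or review team
    verification_date: str   # ISO 8601 date of the successful peer review


@dataclass
class ArticleMetadata:
    """Hypothetical article metadata record carrying up to two badges."""
    doi: str
    title: str
    badges: List[PostPublicationBadge] = field(default_factory=list)

    MAX_BADGES = 2  # the framework allows at most two badges per article

    def add_badge(self, badge: PostPublicationBadge) -> None:
        """Attach a verified badge, enforcing the two-badge limit."""
        if len(self.badges) >= self.MAX_BADGES:
            raise ValueError("An article may carry at most two post-publication badges.")
        self.badges.append(badge)


# Example: recording one verified replication against a published article
# (DOI, title, and repository URL are placeholders).
article = ArticleMetadata(doi="10.0000/example.doi", title="Example ML Paper")
article.add_badge(PostPublicationBadge(
    badge_type="Results Replicated",
    repository_url="https://example.org/replication-repo",
    verified_by="Independent replicator",
    verification_date="2025-01-01",
))
```

In practice such a record would live in the journal's metadata system rather than in application code; the sketch only illustrates the structure implied by the abstract, namely badges stored alongside the article and each pointing to its verified public repository.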