🤖 AI Summary
Leading machine learning conferences lack systematic mechanisms to correct erroneous, misleading, or incomplete conclusions in published papers.
Method: We propose a dedicated “Refutations and Critiques” (R&C) track, the first formal, peer-reviewed, and archivable academic correction channel for ML conferences. The design integrates scholarly commentary norms, peer-review reforms, and empirical case studies, yielding a complete R&C framework that specifies submission criteria, double-blind review, categorized publication of outcomes, and bidirectional cross-linking with original papers. Feasibility is illustrated through an example submission critiquing a recent ICLR 2025 Oral.
Contribution/Results: The R&C track significantly enhances transparency, reproducibility, and self-correcting capacity in ML research. It establishes a scalable, rigorous, and implementable governance model for the AI community, advancing scientific integrity through structured post-publication scrutiny.
📝 Abstract
Science progresses by iteratively advancing and correcting humanity's understanding of the world. In machine learning (ML) research, rapid advancements have led to an explosion of publications, but they have also led to misleading, incorrect, flawed, or perhaps even fraudulent studies being accepted and sometimes highlighted at ML conferences due to the fallibility of peer review. While such mistakes are understandable, ML conferences do not offer robust processes to help the field systematically correct such errors once they are made. This position paper argues that ML conferences should establish a dedicated "Refutations and Critiques" (R&C) Track. This R&C Track would provide a high-profile, reputable platform to support vital research that critically challenges prior research, thereby fostering a dynamic, self-correcting research ecosystem. We discuss key considerations, including track design, review principles, and potential pitfalls, and provide an illustrative example submission concerning a recent ICLR 2025 Oral. We conclude that ML conferences should create official, reputable mechanisms to help ML research self-correct.