🤖 AI Summary
This study addresses the growing prevalence of unverifiable "mystery citations" in academic publishing, exacerbated by widespread non-disclosure of AI-generated references contrary to conference policies, revealing critical flaws in citation-validation mechanisms. We develop an automated analysis pipeline integrating textual comparison, metadata verification, and policy-compliance auditing to systematically evaluate citation accuracy across papers from four premier high-performance computing conferences between 2021 and 2025. Our findings quantitatively demonstrate that, by 2025, all examined venues exhibited mystery citations affecting 2%–6% of publications, accompanied by a marked increase in erroneous titles and author information. The results confirm the ineffectiveness of current policy enforcement and underscore the urgent need for updated citation standards and enhanced technical oversight.
📄 Abstract
Mysterious citations are routinely appearing in peer-reviewed publications throughout the scientific community. In this paper, we develop an automated pipeline and examine the proceedings of four major high-performance computing conferences, comparing the accuracy of citations between the 2021 and 2025 proceedings. While none of the 2021 papers contained mysterious citations, every 2025 proceeding did, impacting 2–6% of published papers. In addition, we observe a sharp rise in paper-title and authorship errors, motivating the need for stronger citation-verification practices. No author within our dataset acknowledged using AI to generate citations even though all four conferences' policies required such disclosure, indicating that current policies are insufficient.
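The abstract's textual-comparison step can be illustrated with a minimal sketch. This is not the authors' pipeline; the similarity measure (`difflib`), the 0.9 threshold, and the function names are all illustrative assumptions about how a cited title might be checked against candidate records from a bibliographic database.

```python
# Hedged sketch of a title-matching check for "mystery citations".
# Assumptions: difflib ratio as the similarity measure and a 0.9 threshold
# are illustrative choices, not the method described in the paper.
from difflib import SequenceMatcher


def title_similarity(cited: str, canonical: str) -> float:
    """Return a normalized similarity between a cited title and a record title."""
    norm = lambda s: " ".join(s.lower().split())  # case- and whitespace-insensitive
    return SequenceMatcher(None, norm(cited), norm(canonical)).ratio()


def flag_mystery_citation(cited_title: str,
                          candidate_titles: list[str],
                          threshold: float = 0.9) -> bool:
    """Flag a citation when no candidate record's title is close enough."""
    return all(title_similarity(cited_title, t) < threshold
               for t in candidate_titles)


# A title with no close match among the (hypothetical) database hits is flagged.
print(flag_mystery_citation(
    "Quantum Acceleration of Sparse Matrix Kernels on Exascale Systems",
    ["Sparse matrix kernels on GPUs", "Exascale system co-design"],
))  # True
```

In practice the candidate titles would come from querying a bibliographic source by the cited title and authors, and metadata fields (authors, venue, year) would be compared alongside the title before flagging.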