🤖 AI Summary
Manual assessment by Wikipedia editors of whether statements require citations is effective but does not scale. To address this, the authors present the first multilingual machine learning system for automatically identifying sentences that need references. Combining multilingual natural language processing with an efficient deployment architecture, the system outperforms existing benchmarks across ten language editions of Wikipedia and has been deployed in a production environment. This is the first large-scale deployment of automated citation-need assessment across multiple languages, and the authors publicly release their dataset and source code to support further research.
📝 Abstract
Wikipedia is a critical source of information for millions of users across the Web, and a key resource for large language models, search engines, question-answering systems, and other Web-based applications. Wikipedia content must be verifiable: readers should be able to check that claims are backed by references to reliable sources. Verification relies on manual review by editors, an effective but labor-intensive process, especially given the high volume of daily edits. To address this challenge, we introduce a multilingual machine learning system that assists editors in identifying claims requiring citations. We evaluate our approach on 10 language editions of Wikipedia, outperforming existing benchmarks for reference-need assessment. Beyond machine learning evaluation metrics, we also consider system requirements, which lets us explore the trade-offs between model accuracy and computational efficiency under real-world infrastructure constraints. We deploy our system in production and release our data and code to support further research.
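The citation-need task described in the abstract can be framed as binary sentence classification: given a sentence, decide whether it requires a reference. The sketch below is purely illustrative and is not the authors' system — their model uses multilingual NLP, whereas this toy scorer uses a few hand-crafted surface cues (figures, attribution phrases, evaluative wording); all feature choices and thresholds here are assumptions for demonstration.

```python
import re

# Hypothetical surface cues; the paper's actual system relies on
# multilingual NLP models, not hand-crafted rules like these.
ATTRIBUTION = re.compile(r"\b(according to|reported|claimed|estimated)\b", re.I)
OPINION = re.compile(r"\b(best|worst|most important|controversial)\b", re.I)
NUMBER = re.compile(r"\d")


def citation_need_score(sentence: str) -> float:
    """Return a rough 0-1 score for how likely the sentence needs a citation."""
    score = 0.0
    if NUMBER.search(sentence):
        score += 0.4  # concrete figures usually need sourcing
    if ATTRIBUTION.search(sentence):
        score += 0.4  # reported or attributed claims need a source
    if OPINION.search(sentence):
        score += 0.3  # evaluative claims need support
    return min(score, 1.0)


def needs_citation(sentence: str, threshold: float = 0.4) -> bool:
    # Threshold is an arbitrary illustrative choice.
    return citation_need_score(sentence) >= threshold


if __name__ == "__main__":
    print(needs_citation("According to the 2020 census, the city had 83,000 residents."))  # True
    print(needs_citation("The river flows through the valley."))  # False
```

A production system, as the abstract notes, must also balance accuracy against computational cost, which is one reason a lightweight scoring stage like this can be useful as a baseline or pre-filter before a heavier model.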