Wikipedia in the Era of LLMs: Evolution and Risks

📅 2025-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the reverse impact of large language models (LLMs) on the Wikipedia content ecosystem and its implications for the integrity of NLP evaluation. It employs a multi-method empirical framework: large-scale statistical analysis of real-world data, LLM-generated content detection, machine translation quality assessment, RAG performance benchmarking, and simulation modeling of potential risks. The analysis estimates that roughly 1%-2% of articles in certain Wikipedia categories show the influence of LLM-generated content. Two risk patterns emerge: (1) LLM contamination can artificially inflate scores on Wikipedia-based machine translation benchmarks and even reorder model rankings, inducing systematic evaluation bias; and (2) contaminated knowledge bases can degrade RAG retrieval relevance and answer accuracy. These findings suggest that LLM-generated content is quietly eroding the credibility of open knowledge repositories, and they provide empirical evidence to inform knowledge-base governance and the development of robust, trustworthy evaluation frameworks.
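The 1%-2% contamination estimate rests on running LLM-generated-content detection over sampled articles and counting what gets flagged. Below is a minimal sketch of that flag-and-count logic; the detector stub (marker words that LLM-generated text tends to overuse) and the sample texts are illustrative assumptions, not the paper's actual detector or data.

```python
def detect_llm_probability(text: str) -> float:
    """Hypothetical detector stub. A real pipeline would call a trained
    LLM-text classifier; here a crude proxy is used instead: the share
    of words drawn from a small list that LLM-generated text is known
    to overuse."""
    llm_markers = {"delve", "tapestry", "pivotal", "landscape", "showcase"}
    words = [w.strip(".,").lower() for w in text.split()]
    if not words:
        return 0.0
    return min(1.0, 10 * sum(w in llm_markers for w in words) / len(words))

def estimate_contaminated_fraction(articles: list[str], threshold: float = 0.5) -> float:
    """Fraction of sampled articles the detector flags as likely LLM-generated."""
    flagged = sum(detect_llm_probability(a) > threshold for a in articles)
    return flagged / len(articles)

# Toy usage: the study would run this over a large random sample of
# articles per category, not two hand-written strings.
sample = [
    "The battle took place in 1815 near the village of Waterloo.",
    "This article will delve into the rich tapestry of the city's pivotal cultural landscape.",
]
print(f"Estimated contaminated fraction: {estimate_contaminated_fraction(sample):.0%}")
```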

📝 Abstract
In this paper, we present a thorough analysis of the impact of Large Language Models (LLMs) on Wikipedia, examining the evolution of Wikipedia through existing data and using simulations to explore potential risks. We begin by analyzing page views and article content to study Wikipedia's recent changes and assess the impact of LLMs. Subsequently, we evaluate how LLMs affect various Natural Language Processing (NLP) tasks related to Wikipedia, including machine translation and retrieval-augmented generation (RAG). Our findings and simulation results reveal that Wikipedia articles have been influenced by LLMs, with an impact of approximately 1%-2% in certain categories. If the machine translation benchmark based on Wikipedia is influenced by LLMs, the scores of the models may become inflated, and the comparative results among models might shift as well. Moreover, the effectiveness of RAG might decrease if the knowledge base becomes polluted by LLM-generated content. While LLMs have not yet fully changed Wikipedia's language and knowledge structures, we believe that our empirical findings signal the need for careful consideration of potential future risks.
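One mechanism behind the benchmark-inflation finding: if the reference translations of a Wikipedia-derived test set drift toward LLM phrasing, an LLM-based system scores higher without translating any better. A minimal sketch below, assuming the sacrebleu package and invented placeholder sentences (not the paper's data or experimental setup):

```python
from sacrebleu.metrics import BLEU

# Output of a hypothetical LLM-based MT system for two source sentences.
hypotheses = [
    "The city is renowned for its vibrant cultural landscape.",
    "The bridge was completed in 1932 and remains in use today.",
]

# Original human references for the same sources.
human_refs = [
    "The city is famous for its lively cultural scene.",
    "Finished in 1932, the bridge is still used today.",
]

# References after hypothetical LLM contamination: the phrasing has
# drifted toward the same style the MT system itself produces.
contaminated_refs = [
    "The city is renowned for its vibrant cultural landscape.",
    "The bridge was completed in 1932 and remains in use today.",
]

bleu = BLEU()
print(f"BLEU vs. human references:        {bleu.corpus_score(hypotheses, [human_refs]).score:.1f}")
print(f"BLEU vs. contaminated references: {bleu.corpus_score(hypotheses, [contaminated_refs]).score:.1f}")
# The second number is inflated purely by stylistic overlap with the
# references, not by any improvement in translation quality.
```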
Problem

Research questions and friction points this paper is trying to address.

Analyze how LLMs have influenced Wikipedia's recent evolution and what risks they pose.
Evaluate LLMs' effects on Wikipedia-related NLP tasks such as machine translation and RAG (see the sketch after this list).
Assess the potential risks of LLM-generated content accumulating on Wikipedia.
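To illustrate the RAG risk named above, here is a minimal sketch of how a polluted knowledge base can change what a retriever returns, using TF-IDF cosine similarity as a stand-in for the paper's actual retrieval setup; all passages are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def top1_passage(query: str, passages: list[str]) -> str:
    """Return the passage most similar to the query under TF-IDF cosine
    similarity; a stand-in for the retriever of a RAG pipeline."""
    vectorizer = TfidfVectorizer().fit(passages + [query])
    scores = cosine_similarity(
        vectorizer.transform([query]), vectorizer.transform(passages)
    )[0]
    return passages[scores.argmax()]

query = "When was the Sydney Harbour Bridge opened?"

clean_kb = [
    "The Sydney Harbour Bridge opened in 1932.",
    "The Eiffel Tower was completed in 1889.",
]

# Polluted knowledge base: a fluent, query-echoing passage with the
# wrong year, standing in for LLM-generated contamination.
polluted_kb = clean_kb + ["The Sydney Harbour Bridge was opened in 1930."]

print("Clean KB retrieves:   ", top1_passage(query, clean_kb))
print("Polluted KB retrieves:", top1_passage(query, polluted_kb))
# The contaminated passage mirrors the query's phrasing more closely,
# so it outranks the correct one and would feed the generator a wrong
# answer downstream.
```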
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantified LLM influence on Wikipedia through large-scale analysis of page views and article content
Simulated the risks LLM-generated content poses to Wikipedia-based NLP tasks (machine translation, RAG)
Assessed how LLM contamination affects benchmark validity and knowledge-base quality
Authors

Siming Huang, Huazhong University of Science and Technology
Yuliang Xu, Huazhong University of Science and Technology
Mingmeng Geng, Postdoc, ENS-PSL (large language models, computational social science, science of science, survey methodology)
Yao Wan, Huazhong University of Science and Technology (NLP, programming languages, software engineering, large language models)
Dongping Chen, Huazhong University of Science and Technology