Is Misinformation More Open? A Study of robots.txt Gatekeeping on the Web

📅 2025-10-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates disparities in AI crawler access control (via robots.txt directives) between mainstream news and misinformation websites, and their implications for the composition of large language model (LLM) training data. Method: We systematically parsed robots.txt files from September 2023 to May 2025 across both site categories, identified user-agent policies targeting AI crawlers, and validated enforcement with active HTTP requests. Contribution/Results: We find that 60.0% of mainstream news sites block at least one AI crawler (mean = 15.5 distinct agents), whereas only 9.1% of misinformation sites impose any restrictions (mean < 1). This constitutes the first systematic documentation of the counterintuitive finding that misinformation sources are more open to AI web crawling at the protocol level, a gap that widens over time. The findings reveal structural biases, ethical risks, and transparency deficits in LLM training data acquisition, providing critical empirical evidence for data provenance analysis and responsible model governance.
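The robots.txt parsing step described above can be sketched with Python's standard-library `urllib.robotparser`. This is a minimal illustration, not the paper's actual pipeline: the robots.txt content and the list of AI crawler user agents below are hypothetical examples (the study tracks many more agents, with reputable sites disallowing 15.5 distinct agents on average).

```python
from urllib import robotparser

# Hypothetical robots.txt resembling a mainstream news site that blocks
# specific AI crawlers while allowing general access (illustrative only).
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

# Small illustrative sample of AI crawler user agents.
AI_AGENTS = ["GPTBot", "CCBot", "anthropic-ai"]

def blocked_ai_agents(robots_txt: str, agents: list[str]) -> list[str]:
    """Return the AI user agents disallowed from fetching the site root."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [agent for agent in agents if not rp.can_fetch(agent, "/")]

print(blocked_ai_agents(ROBOTS_TXT, AI_AGENTS))  # → ['GPTBot', 'CCBot']
```

Note that `anthropic-ai` falls through to the wildcard `User-agent: *` record and is therefore allowed, which is how a per-agent disallow count (the paper's mean of 15.5 vs. < 1) can be computed per site.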

📝 Abstract
Large Language Models (LLMs) increasingly rely on web crawling to stay up to date and accurately answer user queries. These crawlers are expected to honor robots.txt files, which govern automated access. In this study, for the first time, we investigate whether reputable news websites and misinformation sites differ in how they configure these files, particularly in relation to AI crawlers. Analyzing a curated dataset, we find a stark contrast: in their robots.txt files, 60.0% of reputable sites disallow at least one AI crawler, compared to just 9.1% of misinformation sites. Reputable sites forbid an average of 15.5 AI user agents, while misinformation sites prohibit fewer than one. We then measure active blocking behavior, where websites refuse to return content when HTTP requests include AI crawler user agents, and find that both categories of websites employ it. Notably, the behavior of reputable news websites in this regard aligns more closely with their declared robots.txt directives than that of misinformation websites. Finally, our longitudinal analysis reveals that this gap has widened over time, with AI-blocking by reputable sites rising from 23% in September 2023 to nearly 60% by May 2025. Our findings highlight a growing asymmetry in content accessibility that may shape the training data available to LLMs, raising essential questions for web transparency, data ethics, and the future of AI training practices.
Problem

Research questions and friction points this paper is trying to address.

Comparing robots.txt restrictions between reputable and misinformation websites
Investigating how websites control AI crawler access to content
Analyzing content accessibility asymmetry affecting LLM training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzed robots.txt files for AI crawlers
Measured active blocking of AI user agents
Conducted longitudinal analysis of blocking trends
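The active-blocking measurement listed above can be sketched as a comparison between two requests for the same page: one with a browser user agent and one with an AI crawler user agent. The sketch below is an assumption-laden illustration, not the paper's code; the user-agent strings are examples, and the classification rule (content served to a browser UA but refused with a 4xx/5xx to an AI UA) is one plausible operationalization of "active blocking".

```python
import urllib.error
import urllib.request

AI_UA = "GPTBot/1.0"        # example AI crawler user agent
BROWSER_UA = "Mozilla/5.0"  # generic browser user agent

def fetch_status(url: str, user_agent: str) -> int:
    """Return the HTTP status code for a GET request sent with the given UA."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

def is_active_block(browser_status: int, crawler_status: int) -> bool:
    """Active blocking: the site serves content to a browser UA (status < 400)
    but refuses the identical request carrying an AI crawler UA (status >= 400)."""
    return browser_status < 400 <= crawler_status

# Usage against a live site (network required), e.g.:
#   is_active_block(fetch_status(url, BROWSER_UA), fetch_status(url, AI_UA))
print(is_active_block(200, 403), is_active_block(200, 200))  # → True False
```

Comparing this classification with each site's declared robots.txt directives is what lets the study report that reputable sites' observed behavior aligns more closely with their stated policies than that of misinformation sites.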
Nicolas Steinacker-Olsztyn
Saarland University
Devashish Gosain
IIT Bombay
Computer Network · Internet Measurement · Security and Privacy
Ha Dao
MPI-INF