🤖 AI Summary
This paper investigates how well existing web-based mechanisms (robots.txt, NoAI meta tags, and reverse-proxy filtering) mitigate AI web crawling, with particular attention to the technical awareness, deployment capacity, and practical efficacy experienced by human content creators, including a study of 203 professional artists. Method: We conduct large-scale web measurements, meta-tag analysis, reverse-proxy experiments, and in-depth interviews. Contribution/Results: We find strong protective intent among artists, but critically low tool awareness and limited technical capacity to deploy protections. While robots.txt and NoAI tags deter compliant crawlers, they are largely ineffective against mainstream AI scrapers. Reverse-proxy filtering provides substantially stronger defenses but remains limited in deployment coverage and robustness. Crucially, we identify a novel "awareness–autonomy–effectiveness" co-failure mechanism, showing how gaps across cognitive, operational, and functional dimensions jointly undermine protection. These findings provide empirical grounding and actionable technical guidance for AI-crawling governance tailored to content creators.
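For concreteness, the opt-out signals the paper measures typically take forms like the sketch below: a robots.txt file disallowing known AI crawler user agents, and a NoAI meta tag embedded in page HTML. The user-agent tokens shown (GPTBot, CCBot, Google-Extended) are commonly published examples, not a list taken from the paper, and the "noai, noimageai" directive is the informal convention popularized by art-hosting platforms rather than a formal standard.

```
# robots.txt: purely advisory; compliant crawlers honor it, others ignore it
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

```html
<!-- NoAI meta tag, placed in a page's <head>; also advisory only -->
<meta name="robots" content="noai, noimageai">
```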
📝 Abstract
The success of generative AI relies heavily on training on data scraped through extensive crawling of the Internet, a practice that has raised significant copyright, privacy, and ethical concerns. While few measures are designed to resist a resource-rich adversary determined to scrape a site, crawlers can be impacted by a range of existing tools such as robots.txt, NoAI meta tags, and active crawler blocking by reverse proxies. In this work, we seek to understand the ability and efficacy of today's networking tools to protect content creators against AI-related crawling. Do targeted populations like human artists have the technical knowledge and agency to utilize crawler-blocking tools such as robots.txt, and can such tools be effective? Using large-scale measurements and a targeted user study of 203 professional artists, we find strong demand for tools like robots.txt, but that demand is significantly constrained by critical hurdles: limited technical awareness, limited agency in deploying the tools, and limited efficacy against unresponsive crawlers. We further test and evaluate network-level crawler blockers provided by reverse proxies. Despite relatively limited deployment today, these blockers offer stronger protections against AI crawlers, but still come with their own limitations.
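As an illustration of the network-level blocking the abstract refers to, the following is a minimal sketch in nginx syntax of user-agent-based filtering at a reverse proxy. This is an assumption-laden example, not the configuration the paper evaluates (the paper studies deployed reverse-proxy services); the server name, origin address, and user-agent list are all illustrative.

```nginx
# Minimal sketch of AI-crawler blocking at a reverse proxy (inside the http {} context).
# The user-agent tokens below are illustrative, not an exhaustive or authoritative list.
map $http_user_agent $is_ai_crawler {
    default           0;
    ~*GPTBot          1;
    ~*CCBot           1;
    ~*ClaudeBot       1;
    ~*Google-Extended 1;
}

server {
    listen 80;
    server_name example.com;          # hypothetical protected site

    location / {
        # Reject matched crawlers before the request reaches the origin.
        if ($is_ai_crawler) {
            return 403;
        }
        proxy_pass http://127.0.0.1:8080;  # hypothetical origin server
    }
}
```

Note that this kind of filter can only match crawlers that identify themselves truthfully in the User-Agent header, one reason such blockers, as the abstract notes, still come with limitations.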