Can AI-Generated Text be Reliably Detected?

📅 2023-03-17
🏛️ arXiv.org
📈 Citations: 308
Influential: 35
📄 PDF
🤖 AI Summary
AI-generated text detectors exhibit insufficient robustness against adversarial attacks. Method: We propose a black-box recursive paraphrasing attack to systematically evaluate the vulnerability of watermarking-based, neural network-based, zero-shot, and retrieval-based detectors. Contribution/Results: We demonstrate that mainstream detectors are highly susceptible to recursive paraphrasing, which significantly reduces detection rates while only slightly degrading text quality. We further show that watermarking schemes can be reverse-engineered through spoofing attacks, causing human-written text to be misclassified as AI-generated. Theoretically, we establish a quantitative relationship between the AUROC of the best possible detector and the total variation distance between human and AI text distributions, proving an inherent upper bound on detector reliability. Finally, we introduce a comprehensive stress-testing framework covering multiple detector paradigms, providing both a methodological foundation and theoretical limits for AI-generated content detection.
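The AUROC upper bound mentioned above can be stated as follows. This is a sketch of the paper's result, with TV denoting the total variation distance between the machine-text distribution $\mathcal{M}$ and the human-text distribution $\mathcal{H}$:

```latex
% For any detector D, the best achievable AUROC is bounded by the total
% variation distance between the machine- and human-text distributions:
\[
  \mathrm{AUROC}(D) \;\le\; \frac{1}{2}
  + \mathrm{TV}(\mathcal{M}, \mathcal{H})
  - \frac{\mathrm{TV}(\mathcal{M}, \mathcal{H})^{2}}{2}.
\]
% As language models improve and M approaches H (TV -> 0), the bound
% approaches 1/2, i.e., no detector can do better than random guessing.
```

The bound formalizes the fundamental limit: detection reliability is capped not by detector design but by how distinguishable AI text is from human text in the first place.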
📝 Abstract
Large Language Models (LLMs) perform impressively well in various applications. However, the potential for misuse of these models in activities such as plagiarism, generating fake news, and spamming has raised concern about their responsible use. Consequently, the reliable detection of AI-generated text has become a critical area of research. AI text detectors have been shown to be effective under their specific settings. In this paper, we stress-test the robustness of these AI text detectors in the presence of an attacker. We introduce a recursive paraphrasing attack to stress-test a wide range of detection schemes, including those based on watermarking as well as neural network-based detectors, zero-shot classifiers, and retrieval-based detectors. Our experiments, conducted on passages each approximately 300 tokens long, reveal the varying sensitivities of these detectors to our attacks. Our findings indicate that while our recursive paraphrasing method can significantly reduce detection rates, it only slightly degrades text quality in many cases, highlighting potential vulnerabilities in current detection systems in the presence of an attacker. Additionally, we investigate the susceptibility of watermarked LLMs to spoofing attacks aimed at misclassifying human-written text as AI-generated. We demonstrate that an attacker can infer hidden AI text signatures without white-box access to the detection method, potentially leading to reputational risks for LLM developers. Finally, we provide a theoretical framework connecting the AUROC of the best possible detector to the Total Variation distance between human and AI text distributions. This analysis offers insights into the fundamental challenges of reliable detection as language models continue to advance. Our code is publicly available at https://github.com/vinusankars/Reliability-of-AI-text-detectors.
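The recursive paraphrasing idea from the abstract can be sketched in a few lines. This is an illustrative sketch only: `paraphrase_fn` stands in for a real neural paraphrasing model (which the paper's attack uses), and the synonym table below is a toy placeholder, not the actual attack.

```python
def recursive_paraphrase(text, paraphrase_fn, rounds=3):
    """Repeatedly paraphrase `text`: each round rewrites the previous
    round's output, compounding drift away from detectable signatures
    (e.g., watermark token patterns) while aiming to preserve meaning."""
    for _ in range(rounds):
        text = paraphrase_fn(text)
    return text


# Toy stand-in paraphraser: word-level synonym substitution. A real
# attack would call a neural paraphrasing model here instead.
TOY_SYNONYMS = {"quick": "fast", "jumps": "leaps", "big": "large"}

def toy_paraphrase(text):
    return " ".join(TOY_SYNONYMS.get(word, word) for word in text.split())
```

For example, `recursive_paraphrase("the quick fox jumps", toy_paraphrase, rounds=2)` returns `"the fast fox leaps"`. Against a real detector, each additional round is another opportunity to erase residual detectable structure, which is why detection rates drop as recursion depth grows.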
Problem

Research questions and friction points this paper is trying to address.

Artificial Intelligence Text Detection
Human vs Machine Writing Discrimination
Evasion of Detection Tools
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recursive Paraphrase Attack
Robustness Evaluation
Theoretical Framework for Detection