🤖 AI Summary
Existing benchmarks for AI-generated text detection suffer from data contamination, causing detectors to rely on spurious correlations such as fixed prefixes or refusal patterns, which compromises robustness and generalization. Method: We propose the first data purification framework designed specifically for detection tasks, integrating textual pattern analysis, rule-driven filtering, and adversarial-aware re-cleaning to identify and eliminate systematic biases in synthetic texts. Contribution/Results: Evaluated on the DetectRL benchmark, detectors trained on our purified datasets achieve significantly higher defense rates against direct evasion attacks and rely far less on spurious correlations. Purified models also generalize better, remaining more stable across diverse LLMs and domains. The high-quality, contamination-mitigated benchmark dataset is publicly released to support rigorous, trustworthy AI-detection research.
📝 Abstract
Large language models are increasingly used for many applications. To prevent illicit use, it is desirable to be able to detect AI-generated text. Training and evaluation of such detectors critically depend on suitable benchmark datasets. Several groups have taken on the tedious work of collecting, curating, and publishing large and diverse datasets for this task. However, ensuring high quality in all relevant aspects of such a dataset remains an open challenge. For example, the DetectRL benchmark exhibits relatively simple patterns of AI generation in 98.5% of its Claude-generated data. These patterns include introductory phrases such as "Sure! Here is the academic article abstract:", or instances where the LLM refuses the prompted task. In this work, we demonstrate that detectors trained on such data use these patterns as shortcuts, which facilitates spoofing attacks on the trained detectors. We consequently reprocessed the DetectRL dataset with several cleansing operations. Experiments show that such data cleansing makes direct attacks more difficult. The reprocessed dataset is publicly available.
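The cleansing idea described above (stripping boilerplate prefixes like "Sure! Here is …" and discarding refusal responses) can be sketched as a simple rule-driven filter. The specific patterns, marker phrases, and the `cleanse` function below are illustrative assumptions, not the paper's actual rule set:

```python
import re
from typing import Optional

# Hypothetical boilerplate prefixes (the real cleansing rules are not shown here).
PREFIX_PATTERNS = [
    re.compile(r"^Sure! Here is the .*?:\s*", re.IGNORECASE),
    re.compile(r"^Certainly! Here'?s .*?:\s*", re.IGNORECASE),
]

# Hypothetical markers suggesting the LLM refused the prompted task.
REFUSAL_MARKERS = [
    "i cannot",
    "i can't",
    "as an ai language model",
]


def cleanse(text: str) -> Optional[str]:
    """Strip known boilerplate prefixes; return None for likely refusals."""
    for pat in PREFIX_PATTERNS:
        text = pat.sub("", text, count=1)
    lowered = text.lower()
    if any(marker in lowered for marker in REFUSAL_MARKERS):
        return None  # drop refusal texts from the dataset
    return text.strip()
```

A detector trained on data passed through such a filter can no longer key on the removed prefixes or refusal phrasing, which is the shortcut behavior the paper identifies.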