🤖 AI Summary
This work addresses the robustness deficiencies of AI-generated text (AIGT) detectors by proposing GradEscape, the first gradient-based adversarial evasion method for AIGT detection. GradEscape introduces a differentiable text-evasion framework that employs weighted embedding modeling and parameter updates driven by victim-detector feedback to achieve efficient evasion with minimal perturbation. To mitigate tokenizer mismatch, it incorporates a warm-start mechanism, enabling tokenizer reverse inference and extraction of a lightweight (139M-parameter) distilled model in query-only settings. Evaluated across four benchmark datasets and three widely used language models, GradEscape consistently outperforms four state-of-the-art methods and successfully evades two commercial AIGT detectors. Empirical analysis identifies stylistic disparities in the training data as the fundamental vulnerability underlying detector failure.
📝 Abstract
In this paper, we introduce GradEscape, the first gradient-based evader designed to attack AI-generated text (AIGT) detectors. GradEscape overcomes the non-differentiable computation problem caused by the discrete nature of text by introducing a novel approach that constructs weighted embeddings for the detector input. It then updates the evader model's parameters using feedback from victim detectors, achieving high attack success with minimal text modification. To address tokenizer mismatch between the evader and the detector, we introduce a warm-started evader method, enabling GradEscape to adapt to detectors built on any language model architecture. Moreover, we employ novel tokenizer inference and model extraction techniques, enabling effective evasion even with query-only access. We evaluate GradEscape on four datasets and three widely used language models, benchmarking it against four state-of-the-art AIGT evaders. Experimental results demonstrate that GradEscape outperforms existing evaders in various scenarios, including one that uses an 11B-parameter paraphrase model, while itself using only 139M parameters. We have successfully applied GradEscape to two real-world commercial AIGT detectors. Our analysis reveals that the primary vulnerability stems from disparities in text expression styles within the training data. We also propose a potential defense strategy to mitigate the threat posed by AIGT evaders. We open-source GradEscape to support the development of more robust AIGT detectors.
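The core idea behind the weighted embeddings described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: all names and shapes are hypothetical. Instead of a hard, non-differentiable token choice (an argmax over the vocabulary), the evader feeds the detector a probability-weighted mixture of token embedding vectors, so the detector's loss gradient can flow back to the evader's parameters.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the given axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def weighted_embeddings(token_logits, embedding_matrix):
    """Build a differentiable detector input from evader logits.

    token_logits:     (seq_len, vocab) scores produced by the evader
    embedding_matrix: (vocab, dim) detector's token embedding table
    Returns:          (seq_len, dim) convex mixture of embeddings
    """
    probs = softmax(token_logits)   # soft distribution over tokens
    return probs @ embedding_matrix # weighted sum instead of hard lookup

# Toy example: 5 positions, vocabulary of 10, 8-dim embeddings.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 10))
E = rng.normal(size=(10, 8))
soft_inputs = weighted_embeddings(logits, E)
print(soft_inputs.shape)  # (5, 8)
```

Because each row of `probs` sums to 1, every position's input lies in the convex hull of the embedding table, which is what lets a gradient-based attacker treat the otherwise discrete text as a continuous optimization target.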