GradEscape: A Gradient-Based Evader Against AI-Generated Text Detectors

📅 2025-06-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the robustness deficiencies of AI-generated text (AIGT) detectors by proposing GradEscape, the first gradient-based adversarial evasion method against AIGT detection. GradEscape introduces a differentiable text evasion framework that employs weighted embedding modeling and parameter updates driven by victim-detector feedback to achieve efficient evasion with minimal perturbations. To mitigate tokenizer mismatch, it incorporates a warm-start mechanism, enabling tokenizer reverse inference and lightweight (139M-parameter) distilled-model extraction in query-only settings. Evaluated across four benchmark datasets and three major language models, GradEscape consistently outperforms four state-of-the-art methods and successfully evades two commercial AIGT detectors. Empirical analysis identifies stylistic disparities in training data as the fundamental vulnerability underlying detector failure.

๐Ÿ“ Abstract
In this paper, we introduce GradEscape, the first gradient-based evader designed to attack AI-generated text (AIGT) detectors. GradEscape overcomes the non-differentiable computation problem, caused by the discrete nature of text, by introducing a novel approach to construct weighted embeddings for the detector input. It then updates the evader model parameters using feedback from victim detectors, achieving high attack success with minimal text modification. To address the issue of tokenizer mismatch between the evader and the detector, we introduce a warm-started evader method, enabling GradEscape to adapt to detectors built on any language model architecture. Moreover, we employ novel tokenizer inference and model extraction techniques, facilitating effective evasion even with query-only access. We evaluate GradEscape on four datasets and three widely-used language models, benchmarking it against four state-of-the-art AIGT evaders. Experimental results demonstrate that GradEscape outperforms existing evaders in various scenarios, including against an 11B-parameter paraphrase model, while utilizing only 139M parameters. We have successfully applied GradEscape to two real-world commercial AIGT detectors. Our analysis reveals that the primary vulnerability stems from disparity in text expression styles within the training data. We also propose a potential defense strategy to mitigate the threat of AIGT evaders. We open-source GradEscape to support the development of more robust AIGT detectors.
Problem

Research questions and friction points this paper is trying to address.

Overcoming non-differentiable computation in text-based evasion attacks
Addressing tokenizer mismatch across different language model architectures
Enhancing evasion success with minimal text modification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-based evader for AI text detection
Weighted embeddings overcome non-differentiable computation
Warm-started evader adapts to any model
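The weighted-embedding idea above can be sketched in a few lines of PyTorch: instead of committing to discrete tokens via a hard argmax (which blocks gradients), the evader produces a probability distribution over the vocabulary and feeds the detector a soft mixture of token embeddings, so the detector's loss backpropagates to the evader's outputs. This is a minimal illustrative sketch under assumed toy sizes and a stand-in linear "detector"; it is not the paper's implementation.

```python
import torch
import torch.nn.functional as F

# Toy sizes (assumptions for illustration; the real evader is ~139M parameters).
vocab_size, embed_dim, seq_len = 1000, 64, 5
embedding = torch.nn.Embedding(vocab_size, embed_dim)

# Stand-in for the victim detector: a linear head over pooled embeddings.
detector = torch.nn.Linear(embed_dim, 2)  # classes: 0 = human, 1 = AI

# Evader output: unnormalized token scores (logits) for each position.
token_logits = torch.randn(seq_len, vocab_size, requires_grad=True)

# Weighted embeddings: a softmax over the vocabulary yields a differentiable
# mixture of token embeddings, avoiding the non-differentiable argmax.
weights = F.softmax(token_logits, dim=-1)        # (seq_len, vocab_size)
soft_embeds = weights @ embedding.weight         # (seq_len, embed_dim)

# The detector's loss flows back to the evader's logits via autograd,
# pushing the generated text toward the "human" label.
scores = detector(soft_embeds.mean(dim=0, keepdim=True))
loss = F.cross_entropy(scores, torch.tensor([0]))
loss.backward()
```

After `backward()`, `token_logits.grad` carries the detector's feedback, which a gradient-based evader can use to update its parameters.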
Wenlong Meng, Zhejiang University
Shuguo Fan, Zhejiang University
Chengkun Wei, Zhejiang University (Network System, Data Privacy, Machine Learning Security)
Min Chen, Vrije Universiteit Amsterdam
Yuwei Li, National University of Defense Technology
Yuanchao Zhang, Mybank, Ant Group
Zhikun Zhang, Assistant Professor, Zhejiang University (Trustworthy AI, Data Privacy, Differential Privacy)
Wenzhi Chen, Chang Gung University (industrial design, design education, learning, teaching)