Iron Sharpens Iron: Defending Against Attacks in Machine-Generated Text Detection with Adversarial Training

📅 2025-02-18
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Machine-generated text (MGT) detectors remain vulnerable to simple perturbations and adversarial attacks. To harden them, this paper proposes GREATER, a dual-agent cooperative adversarial training framework that jointly optimizes a dedicated attacker (GREATER-A) and a detector (GREATER-D) through synchronized, iterative updates. Its key contributions are: (1) the first synchronous attack-defense update mechanism; (2) a stealthy and potent attack method that locates critical tokens in embedding space and refines perturbations with greedy search and pruning; and (3) a defense that generalizes to diverse perturbation types and unseen attacks. Evaluated under nine textual perturbations and five classes of adversarial attack, GREATER-D reduces the attack success rate by 10.61% compared with state-of-the-art defenses, while GREATER-A is more efficient and effective than existing top-performing attack methods.

πŸ“ Abstract
Machine-generated Text (MGT) detection is crucial for regulating and attributing online texts. While existing MGT detectors achieve strong performance, they remain vulnerable to simple perturbations and adversarial attacks. To build an effective defense against malicious perturbations, we view MGT detection from a threat-modeling perspective, that is, analyzing the model's vulnerability from an adversary's point of view and exploring effective mitigations. To this end, we introduce an adversarial framework for training a robust MGT detector, named GREedy Adversary PromoTed DefendER (GREATER). GREATER consists of two key components: an adversary, GREATER-A, and a detector, GREATER-D. GREATER-D learns to defend against the adversarial attacks from GREATER-A and generalizes this defense to other attacks. GREATER-A identifies and perturbs the critical tokens in embedding space, then applies greedy search and pruning to generate stealthy and disruptive adversarial examples. In addition, we update GREATER-A and GREATER-D synchronously, encouraging GREATER-D to generalize its defense to different attacks and varying attack intensities. Experimental results across 9 text perturbation strategies and 5 adversarial attacks show that GREATER-D reduces the Attack Success Rate (ASR) by 10.61% compared with SOTA defense methods, while GREATER-A is more effective and efficient than SOTA attack approaches.
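The synchronized attack-defense loop described in the abstract can be illustrated with a deliberately simplified sketch (not the authors' implementation): a toy linear detector over mean-pooled "token embeddings", an attacker that perturbs the highest-saliency tokens in embedding space and then greedily prunes perturbations the attack does not need, and a training loop in which the detector updates on the attacker's current adversarial examples at every step. All names, dimensions, and the saliency heuristic are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_TOK = 8, 6  # embedding size and tokens per "text" (toy values)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ToyDetector:
    """Logistic-regression detector over mean-pooled token embeddings (label 1 = MGT)."""
    def __init__(self, dim, lr=0.5):
        self.w = rng.normal(0.0, 0.01, dim)
        self.b = 0.0
        self.lr = lr

    def proba(self, tokens):
        return sigmoid(tokens.mean(axis=0) @ self.w + self.b)

    def update(self, tokens, label):
        # one SGD step on binary cross-entropy
        x = tokens.mean(axis=0)
        err = sigmoid(x @ self.w + self.b) - label
        self.w -= self.lr * err * x
        self.b -= self.lr * err

def attack(det, tokens, eps=3.0, k=3):
    """Loose analogue of GREATER-A: perturb the k highest-saliency tokens
    toward the 'human' class, then greedily prune unneeded perturbations."""
    saliency = np.abs(tokens @ det.w)          # per-token contribution (proxy for criticality)
    critical = np.argsort(saliency)[-k:]       # critical-token localization
    direction = det.w / (np.linalg.norm(det.w) + 1e-12)
    adv = tokens.copy()
    adv[critical] -= eps * direction           # push the MGT score down
    for i in critical:                         # greedy pruning: revert what isn't needed
        trial = adv.copy()
        trial[i] = tokens[i]
        if det.proba(trial) < 0.5:             # still fools the detector without it
            adv = trial
    return adv

def sample(label):
    """Toy 'text': token embeddings drawn from class-dependent Gaussians."""
    return rng.normal(1.0 if label == 1 else -1.0, 1.0, (N_TOK, DIM))

det = ToyDetector(DIM)
for step in range(400):                        # synchronized attack-defense loop
    human, mgt = sample(0), sample(1)
    adv = attack(det, mgt)                     # attacker always sees the *current* detector
    det.update(human, 0)
    det.update(mgt, 1)
    det.update(adv, 1)                         # defend: adversarial example keeps label 1

clean_acc = np.mean([det.proba(sample(1)) > 0.5 for _ in range(200)]
                    + [det.proba(sample(0)) < 0.5 for _ in range(200)])
asr = np.mean([det.proba(attack(det, sample(1))) < 0.5 for _ in range(200)])
print(f"clean accuracy ~ {clean_acc:.2f}, attack success rate ~ {asr:.2f}")
```

In this sketch the attacker is purely gradient-free and training-free (it reads the current detector weights), whereas the paper's GREATER-A is itself optimized; the point is only to show how synchronous updates expose the detector to progressively adapted attacks.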
Problem

Research questions and friction points this paper is trying to address.

Making machine-generated text (MGT) detection robust to simple perturbations and adversarial attacks.
Building a defense that generalizes to unseen attacks and varying attack intensities.
Reducing the attack success rate against MGT detectors through adversarial training.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-agent adversarial training framework pairing an attacker (GREATER-A) with a detector (GREATER-D)
Greedy search and pruning over critical tokens in embedding space for stealthy, disruptive perturbations
Synchronous attacker-detector updates that generalize the defense across attacks and intensities
Yuanfan Li
Faculty of Electronic and Information Engineering, Xi'an Jiaotong University
Zhaohan Zhang
Queen Mary University of London
Chengzhengxu Li
Xi'an Jiaotong University
Chao Shen
Faculty of Electronic and Information Engineering, Xi'an Jiaotong University
Xiaoming Liu
Faculty of Electronic and Information Engineering, Xi'an Jiaotong University