🤖 AI Summary
The proliferation of LLM-generated text, which deviates from the human-written data that toxicity classifiers are trained on, undermines the robustness of these classifiers and leaves them susceptible to adversarial attacks. Method: This paper proposes a mechanistic interpretability-driven active defense framework: it introduces attention-head-level circuit analysis (the first such application) to diagnose classifier vulnerabilities; integrates fine-grained attribution with adversarial attack localization to identify critical, attack-prone components; and enhances robustness via targeted circuit suppression. Contribution/Results: Evaluated on fine-tuned BERT and RoBERTa classifiers across datasets spanning diverse demographic groups, the method significantly improves classification accuracy under adversarial perturbations. It further uncovers systematic differences in model vulnerability across demographic groups, revealing fairness-related failure modes. By unifying interpretability, robustness, and fairness, this work establishes a paradigm for building trustworthy, auditable, and attack-resilient content moderation systems.
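As a rough illustration of how head-level vulnerability localisation can be carried out, the sketch below ablates one attention head at a time via the `head_mask` argument in Hugging Face Transformers and records the change in accuracy on adversarial inputs. The model name, adversarial texts, and accuracy helper are placeholders for exposition, not the authors' released code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # stand-in for the fine-tuned toxicity classifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2).eval()

adv_texts = ["t0xic example crafted to evade the filter"]  # placeholder adversarial inputs
adv_labels = torch.tensor([1])                             # 1 = toxic

def adversarial_accuracy(head_mask):
    """Accuracy on the adversarial set with the given (layers x heads) mask applied."""
    enc = tokenizer(adv_texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        logits = model(**enc, head_mask=head_mask).logits
    return (logits.argmax(dim=-1) == adv_labels).float().mean().item()

n_layers = model.config.num_hidden_layers
n_heads = model.config.num_attention_heads
baseline = adversarial_accuracy(torch.ones(n_layers, n_heads))

# Ablate one head at a time; a head whose removal raises adversarial accuracy
# is a candidate "vulnerable" head.
delta = torch.zeros(n_layers, n_heads)
for layer in range(n_layers):
    for head in range(n_heads):
        mask = torch.ones(n_layers, n_heads)
        mask[layer, head] = 0.0   # zero out a single attention head
        delta[layer, head] = adversarial_accuracy(mask) - baseline
```

In practice the same scan would also be run on clean data, so that heads crucial for ordinary performance can be distinguished from attack-prone ones before any suppression is applied.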
📝 Abstract
The volume of machine-generated content online has grown dramatically due to the widespread use of Large Language Models (LLMs), leading to new challenges for content moderation systems. Conventional content moderation classifiers, which are usually trained on human-written text, misclassify LLM-generated text that deviates from their training data and are susceptible to adversarial attacks that aim to avoid detection. Present-day defence tactics are reactive rather than proactive, relying on adversarial training or external detection models to identify attacks. In this work, we aim to identify the vulnerable components of toxicity classifiers that contribute to misclassification, proposing a novel strategy based on mechanistic interpretability techniques. Our study focuses on fine-tuned BERT and RoBERTa classifiers, tested on diverse datasets spanning a variety of minority groups. We use adversarial attack techniques to identify vulnerable circuits. Finally, we suppress these vulnerable circuits, improving performance against adversarial attacks. We also provide demographic-level insights into these vulnerable circuits, exposing fairness and robustness gaps in model training. We find that models have distinct heads that are either crucial for performance or vulnerable to attack, and that suppressing the vulnerable heads improves performance on adversarial input. We also find that different heads are responsible for vulnerability across different demographic groups, a finding that can inform more inclusive development of toxicity detection models.
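To make the suppression step concrete, the minimal sketch below zeroes out the heads flagged as vulnerable at inference time, again through the `head_mask` argument of Hugging Face Transformers. The (layer, head) pairs and model name are assumed for illustration rather than taken from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # stand-in for the fine-tuned toxicity classifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2).eval()

vulnerable_heads = [(2, 5), (7, 1)]  # placeholder (layer, head) pairs from the localisation step
head_mask = torch.ones(model.config.num_hidden_layers, model.config.num_attention_heads)
for layer, head in vulnerable_heads:
    head_mask[layer, head] = 0.0     # suppress the flagged head

enc = tokenizer(["example comment to moderate"], return_tensors="pt", truncation=True)
with torch.no_grad():
    prediction = model(**enc, head_mask=head_mask).logits.argmax(dim=-1)
print(prediction)  # 0 = non-toxic, 1 = toxic under this placeholder label convention
```

Because the mask is applied only at inference, the suppression can be toggled per deployment without retraining, which is what makes the defence "active" rather than reactive.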