Adversarial Defence without Adversarial Defence: Enhancing Language Model Robustness via Instance-level Principal Component Removal

📅 2025-07-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pretrained language models (PLMs) achieve strong performance but suffer from poor robustness against adversarial attacks. To address this, we propose a lightweight, post-hoc robustness enhancement method that requires neither adversarial training nor architectural modifications. Our approach applies instance-level principal component analysis (PCA) to decorrelate token embeddings, thereby reshaping the embedding space toward an isotropic Gaussian distribution—preserving semantic structure while attenuating the impact of adversarial perturbations on decision boundaries. The transformation operates solely on the embedding layer, incurring negligible computational overhead. Evaluated on eight standard benchmark datasets, our method significantly improves adversarial robustness while maintaining original accuracy on clean data—matching baseline performance—and thus achieves an effective trade-off between robustness and generalization.

📝 Abstract
Pre-trained language models (PLMs) have driven substantial progress in natural language processing but remain vulnerable to adversarial attacks, raising concerns about their robustness in real-world applications. Previous studies have sought to mitigate the impact of adversarial attacks by introducing adversarial perturbations into the training process, either implicitly or explicitly. While both strategies enhance robustness, they often incur high computational costs. In this work, we propose a simple yet effective add-on module that enhances the adversarial robustness of PLMs by removing instance-level principal components, without relying on conventional adversarial defences or perturbing the original training data. Our approach transforms the embedding space to approximate Gaussian properties, thereby reducing its susceptibility to adversarial perturbations while preserving semantic relationships. This transformation aligns embedding distributions in a way that minimises the impact of adversarial noise on decision boundaries, enhancing robustness without requiring adversarial examples or costly training-time augmentation. Evaluations on eight benchmark datasets show that our approach improves adversarial robustness while maintaining comparable before-attack accuracy to baselines, achieving a balanced trade-off between robustness and generalisation.
Problem

Research questions and friction points this paper is trying to address.

Enhancing PLM robustness against adversarial attacks
Reducing computational costs of adversarial defenses
Preserving semantic relationships while improving robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Removes instance-level principal components as a lightweight add-on module
Transforms the embedding space toward Gaussian-like, isotropic properties
Enhances robustness without adversarial examples or training-time augmentation
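The core operation described above can be sketched in a few lines of NumPy. This is a hedged illustration, not the authors' code: the function name `remove_top_components`, the choice of `k`, and the decision to centre and re-add the instance mean are assumptions about how per-instance principal component removal might be applied to a matrix of token embeddings.

```python
# Illustrative sketch (assumed details, not the paper's implementation):
# remove the top-k principal directions from one instance's token embeddings.
import numpy as np

def remove_top_components(embeddings: np.ndarray, k: int = 1) -> np.ndarray:
    """Project the top-k principal directions out of a single instance's
    token embeddings, shape (seq_len, dim)."""
    # Centre tokens within the instance so PCA captures variance, not the mean.
    mean = embeddings.mean(axis=0, keepdims=True)
    centred = embeddings - mean
    # SVD of the centred matrix; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    top = vt[:k]  # (k, dim) dominant directions of this instance
    # Subtract the projection onto the dominant directions, flattening
    # the most anisotropic axes of the instance's embedding cloud.
    cleaned = centred - centred @ top.T @ top
    return cleaned + mean  # restore the instance mean

# Usage: a toy "sentence" of 6 token embeddings in 8 dimensions.
rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 8))
out = remove_top_components(emb, k=1)
```

After the transform, the instance's embeddings carry no variance along their former dominant direction, which is one concrete way an embedding distribution can be pushed toward isotropy without touching the model's weights or training data.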