🤖 AI Summary
Existing adversarial attacks on NLP systems ignore explanation fidelity: they do not attempt to mislead the classifier while keeping the explanation consistent with the original input. An attack that achieves both enables stealthy deception by exploiting user trust in transparent models.
Method: We propose AdvChar, the first black-box adversarial attack that targets explanation consistency in interpretable NLP systems. AdvChar identifies explanation-critical tokens and applies fine-grained, semantically invariant character-level perturbations, averaging only two modified characters per instance, to degrade prediction accuracy while maintaining high attribution consistency (a toy sketch of such an edit follows this summary).
Contribution/Results: Extensive experiments across seven models and three explanation methods demonstrate AdvChar’s strong transferability, low perceptibility, and high attack success rate. It is the first work to systematically expose the security vulnerability behind the “interpretability implies trustworthiness” assumption, establishing a new paradigm for robustness evaluation of explainable AI systems.
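To make the perturbation concrete, here is a minimal Python sketch of the kind of character-level edit described above. The specific edit operations shown (an adjacent-character swap and a Latin-to-Cyrillic homoglyph substitution) are illustrative assumptions, not the paper's exact edit set.

```python
# Illustrative character-level edits of the kind the summary describes.
# The edit set here (adjacent swap, homoglyph substitution) is an
# assumption for demonstration, not the paper's exact operations.

HOMOGLYPHS = {"a": "а", "e": "е", "o": "о", "c": "с"}  # Latin -> Cyrillic look-alikes


def swap_adjacent(token: str, i: int) -> str:
    """Swap characters i and i+1: a minimal, hard-to-notice edit."""
    if i < 0 or i + 1 >= len(token):
        return token
    return token[:i] + token[i + 1] + token[i] + token[i + 2:]


def substitute_homoglyph(token: str) -> str:
    """Replace the first character that has a visually similar look-alike."""
    for i, ch in enumerate(token):
        if ch in HOMOGLYPHS:
            return token[:i] + HOMOGLYPHS[ch] + token[i + 1:]
    return token


print(swap_adjacent("movie", 2))      # -> "moive"
print(substitute_homoglyph("great"))  # -> "grеat" (Cyrillic 'е' at index 2)
```

Edits like these change at most one or two characters per token, which is how the attack keeps perceptibility low while still shifting the classifier's decision.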
📝 Abstract
Studies have shown that machine learning systems are vulnerable to adversarial examples in both theory and practice. While previous attacks have focused mainly on visual models, which exploit the gap between human and machine perception, text-based models have also fallen victim to such attacks. However, these attacks often fail to preserve the semantic meaning and similarity of the text. This paper introduces AdvChar, a black-box attack on interpretable Natural Language Processing systems, designed to mislead the classifier while keeping the interpretation similar to that of benign inputs, thereby exploiting trust in system transparency. AdvChar achieves this by making subtle modifications to the text input, forcing the deep learning classifier to make incorrect predictions while preserving the original interpretation. We use an interpretation-focused scoring approach to determine the most critical tokens, i.e., those whose modification is most likely to cause the classifier to misclassify the input. We apply simple character-level modifications to measure token importance, minimizing the difference between the original and adversarial text while keeping the adversarial interpretations similar to the benign ones. We thoroughly evaluated AdvChar by testing it against seven NLP models and three interpretation models on benchmark classification datasets. Our experiments show that AdvChar can significantly reduce the prediction accuracy of current deep learning models by altering just two characters, on average, per input sample.
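As a rough illustration of the pipeline the abstract describes (attribution-guided token scoring followed by minimal character edits that keep the explanation consistent), the sketch below shows one plausible greedy attack loop. The `predict` and `explain` callables, the cosine-similarity consistency check, and the `sim_threshold` parameter are all assumptions made for illustration; the paper's actual scoring and search procedure may differ.

```python
# Hypothetical sketch of an AdvChar-style black-box attack loop.
# `predict` returns a class label; `explain` returns per-token attribution
# scores. Both interfaces are assumptions, not the authors' actual API.
from typing import Callable, List, Tuple

import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two attribution vectors."""
    denom = float(np.linalg.norm(a) * np.linalg.norm(b))
    return float(a @ b) / denom if denom else 0.0


def swap_middle(token: str) -> str:
    """Single character-pair edit: swap the two middle characters."""
    if len(token) < 2:
        return token
    i = len(token) // 2
    return token[:i - 1] + token[i] + token[i - 1] + token[i + 1:]


def advchar_sketch(
    tokens: List[str],
    predict: Callable[[List[str]], int],
    explain: Callable[[List[str]], np.ndarray],
    max_edits: int = 2,          # matches the "two characters on average" claim
    sim_threshold: float = 0.9,  # assumed explanation-consistency threshold
) -> Tuple[List[str], bool]:
    """Greedily perturb the most attribution-critical tokens until the label
    flips, keeping the adversarial explanation close to the benign one."""
    orig_label = predict(tokens)
    orig_attr = explain(tokens)
    adv, edits = list(tokens), 0
    # Rank tokens by attribution magnitude: most explanation-critical first.
    for idx in np.argsort(-np.abs(orig_attr)):
        if edits >= max_edits:
            break
        candidate = list(adv)
        candidate[idx] = swap_middle(candidate[idx])
        if candidate[idx] == adv[idx]:
            continue  # token too short to edit
        # Keep the edit only if the explanation stays consistent.
        if cosine(orig_attr, explain(candidate)) < sim_threshold:
            continue
        adv, edits = candidate, edits + 1
        if predict(adv) != orig_label:
            return adv, True  # misclassified, explanation still similar
    return adv, predict(adv) != orig_label
```

Keeping the search greedy and attribution-guided is what keeps the edit budget small: in the paper's evaluation, roughly two altered characters per input suffice to flip the prediction.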