🤖 AI Summary
NLP models face security threats from backdoor attacks, yet existing methods lack interpretable trigger mechanisms and quantitative modeling of vulnerabilities. This paper proposes Sensitron, the first framework to establish a strong correlation (SRC = 0.83) between interpretability scores and attack success rate. It enables quantitative vulnerability assessment and precise trigger design via Dynamic Meta-Sensitivity Analysis (DMSA), Hierarchical SHAP estimation (H-SHAP), and plug-and-play ranking (Plug-and-Rank). Sensitron achieves a 97.8% attack success rate (5.8% higher than SOTA) while maintaining high stealth and robustness; even at a 0.1% poisoning rate, the success rate remains 85.4%, and the framework shows strong resilience against multiple state-of-the-art defenses. Its core innovation lies in deeply integrating sensitivity analysis with explainable AI, making backdoor attack modeling interpretable, quantifiable, and reproducible.
📄 Abstract
Backdoor attacks pose a significant security threat to natural language processing (NLP) systems, but existing methods lack explainable trigger mechanisms and fail to quantitatively model vulnerability patterns. This work pioneers the quantitative connection between explainable artificial intelligence (XAI) and backdoor attacks, introducing Sensitron, a novel modular framework for crafting stealthy and robust backdoor triggers. Sensitron employs a progressive refinement approach: Dynamic Meta-Sensitivity Analysis (DMSA) first identifies potentially vulnerable input tokens, Hierarchical SHAP Estimation (H-SHAP) then provides explainable attribution to precisely pinpoint the most influential tokens, and finally a Plug-and-Rank mechanism generates contextually appropriate triggers. We establish the first mathematical correlation (Sensitivity Ranking Correlation, SRC = 0.83) between explainability scores and empirical attack success, enabling precise targeting of model vulnerabilities. Sensitron achieves a 97.8% Attack Success Rate (ASR) (+5.8% over the state of the art (SOTA)) and retains an 85.4% ASR at a 0.1% poisoning rate, demonstrating robust resistance against multiple SOTA defenses. This work reveals fundamental NLP vulnerabilities and exposes new attack vectors enabled by weaponized explainability.
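To make the SRC metric concrete, the following minimal sketch (not the paper's implementation; all scores and names are hypothetical) computes a Spearman-style rank correlation between per-token explainability scores and the attack success rate observed when a trigger is placed at each token's position. The paper's SRC = 0.83 figure is the kind of value such a computation would yield on real attack data.

```python
# Hypothetical sketch: Sensitivity Ranking Correlation (SRC) as a
# Spearman rank correlation between explainability scores and
# empirically measured attack success rates. All data is illustrative.

def rank(values):
    """Assign 1-based ranks to values, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over any run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Pearson correlation of the rank vectors (Spearman's rho)."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical per-token SHAP-style attribution scores, and the ASR
# measured when the trigger occupies each corresponding position.
shap_scores = [0.91, 0.40, 0.75, 0.12, 0.58]
token_asr   = [0.97, 0.52, 0.80, 0.20, 0.61]
print(spearman(shap_scores, token_asr))
```

A high SRC means the tokens that explainability methods flag as most influential are also the ones where trigger placement succeeds most often, which is what lets Sensitron use attribution scores as a proxy for empirical vulnerability.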