AI Summary
Backdoor attacks pose a severe threat to the trustworthiness of image recognition models, yet existing defenses suffer from poor generalizability and inconsistent efficacy. To address this, we introduce the first large-scale benchmarking framework for backdoor defense evaluation, systematically assessing 16 state-of-the-art defense methods across 8 backdoor attack types, 3 benchmark datasets, 4 model architectures, and 3 poisoning rates, yielding 122,236 experimental evaluations. Our multidimensional evaluation framework integrates model reverse engineering, feature-space analysis, anomaly detection, input sanitization, and robust training. Results reveal that most recently proposed defenses fail to significantly outperform simple baselines; defense performance is highly sensitive to attack type, model architecture, and poisoning rate. This work uncovers fundamental limitations of current defense techniques and establishes the first empirically grounded benchmark and key design principles for trustworthy AI security.
Abstract
The widespread adoption of deep learning across various industries has introduced substantial challenges, particularly in terms of model explainability and security. The inherent complexity of deep learning models, while contributing to their effectiveness, also renders them susceptible to adversarial attacks. Among these, backdoor attacks are especially concerning, as they involve surreptitiously embedding specific triggers within training data, causing the model to exhibit aberrant behavior when presented with input containing the triggers. Such attacks often exploit vulnerabilities in outsourced processes, compromising model integrity without affecting performance on clean (trigger-free) input data. In this paper, we present a comprehensive review of existing mitigation strategies designed to counter backdoor attacks in image recognition. We provide an in-depth analysis of the theoretical foundations, practical efficacy, and limitations of these approaches. In addition, we conduct an extensive benchmarking of sixteen state-of-the-art approaches against eight distinct backdoor attacks, utilizing three datasets, four model architectures, and three poisoning ratios. Our results, derived from 122,236 individual experiments, indicate that while many approaches provide some level of protection, their performance can vary considerably. Furthermore, when compared to two seminal approaches, most newer approaches do not demonstrate substantial improvements in overall performance or consistency across diverse settings. Drawing from these findings, we propose potential directions for developing more effective and generalizable defensive mechanisms in the future.
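To make the attack model concrete, the dirty-label poisoning described above can be sketched as follows. This is an illustrative BadNets-style example only; the function name, the white square patch, and all parameter values are assumptions for exposition, not the specific trigger designs evaluated in the benchmark.

```python
import numpy as np

def poison_batch(images, labels, target_label, poison_rate=0.1, trigger_size=3):
    """Illustrative BadNets-style poisoning sketch (hypothetical helper):
    stamp a small white square into the bottom-right corner of a random
    fraction of the training images and relabel those images to the
    attacker's target class. Clean images are left untouched, so the
    model's accuracy on trigger-free inputs is preserved."""
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    rng = np.random.default_rng(0)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -trigger_size:, -trigger_size:] = 1.0  # stamp the trigger patch
        labels[i] = target_label                          # flip label to target class
    return images, labels, idx
```

At inference time, any input carrying the same patch is steered toward `target_label`, while clean inputs behave normally; this is the stealth property that makes such attacks hard to detect from validation accuracy alone.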