🤖 AI Summary
Existing neural backdoor detection methods exhibit poor generalization to unseen model architectures and fail to robustly identify backdoors across heterogeneous models. Method: We propose ArcGen, a black-box detection framework featuring the first architecture-agnostic feature alignment mechanism. It achieves cross-architecture invariance modeling via dual-level alignment losses—distribution-level and sample-level—requiring only input-output responses without access to model architecture or gradients. Contribution/Results: Evaluated on 16,896 models spanning diverse datasets, attack types, and network architectures, ArcGen improves detection AUC on unseen architectures by up to 42.5% over prior methods, significantly advancing the state of the art in generalizable backdoor detection.
📝 Abstract
Backdoor attacks pose a significant threat to the security and reliability of deep learning models. One promising mitigation is to learn a feature extraction function for the target model and use the extracted features for backdoor detection. However, we find that existing learning-based neural backdoor detection methods generalize poorly to architectures not seen during the learning phase. In this paper, we analyze the root cause of this issue and propose ArcGen, a novel black-box neural backdoor detection method. Our method aims to obtain architecture-invariant, i.e., aligned, model features for effective backdoor detection. Specifically, instead of directly using model outputs as model features as existing methods do, we introduce an additional alignment layer in the feature extraction function to further process these features, reducing the direct influence of architecture information. We then design two alignment losses to train the feature extraction function; these losses explicitly require that features from models with similar backdoor behaviors but different architectures be aligned at both the distribution level and the sample level. With these techniques, our method improves detection performance (AUC) on unseen model architectures by up to 42.5%, based on a large-scale evaluation of 16,896 models trained on diverse datasets, subjected to various backdoor attacks, and built with different architectures. Our code is available at https://github.com/SeRAlab/ArcGen.
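The abstract does not give the exact form of the two alignment losses, but the idea can be illustrated with a minimal sketch. Here the distribution-level loss is assumed to match the first two moments (mean and variance) of feature batches from different architectures, and the sample-level loss is assumed to be a paired L2 distance between features of models with the same backdoor behavior; the function names and the moment-matching choice are illustrative, not the paper's actual implementation.

```python
import numpy as np

def distribution_alignment_loss(feats_a, feats_b):
    """Distribution-level alignment (illustrative): penalize the gap
    between the per-dimension means and variances of two feature
    batches extracted from models with different architectures."""
    mean_gap = np.sum((feats_a.mean(axis=0) - feats_b.mean(axis=0)) ** 2)
    var_gap = np.sum((feats_a.var(axis=0) - feats_b.var(axis=0)) ** 2)
    return mean_gap + var_gap

def sample_alignment_loss(feats_a, feats_b):
    """Sample-level alignment (illustrative): pull together the feature
    vectors of paired models that share backdoor behavior but differ
    in architecture, via a mean squared distance."""
    return np.mean(np.sum((feats_a - feats_b) ** 2, axis=1))

# Toy feature batches: 32 models, 16-dimensional features each.
# Architecture B's features are a perturbed copy of A's, standing in
# for "same backdoor behavior, different architecture".
rng = np.random.default_rng(0)
feats_arch_a = rng.normal(0.0, 1.0, size=(32, 16))
feats_arch_b = feats_arch_a + rng.normal(0.0, 0.1, size=(32, 16))

total = (distribution_alignment_loss(feats_arch_a, feats_arch_b)
         + sample_alignment_loss(feats_arch_a, feats_arch_b))
print(total)
```

In training, a combined objective of this shape would be minimized alongside the detection loss, pushing the alignment layer to discard architecture-specific signal while keeping backdoor-relevant signal.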