🤖 AI Summary
To address backdoor attacks against code large language models (Code LLMs), this paper systematically evaluates spectral signature–based defenses in this domain. Through empirical analysis across diverse attack configurations—including trigger placement, poisoning rates, and model architectures—as well as varying defense hyperparameters, we find that existing spectral signature methods frequently achieve suboptimal detection performance on code models. Our key contributions are threefold: (1) We identify, for the first time, two primary causes of performance degradation—feature-space distribution shift and trigger semantic sparsity—in the code domain; (2) We propose Proxy-Score, a retraining-free proxy metric for defense efficacy estimation, reducing average prediction error by 37.2%; and (3) Leveraging these insights, we derive practical, deployable parameter-tuning guidelines that improve detection F1-score by 21.8% on average.
📝 Abstract
As Large Language Models (LLMs) become increasingly integrated into software development workflows, they also become prime targets for adversarial attacks. Among these, backdoor attacks pose a significant threat, allowing attackers to manipulate model outputs through hidden triggers embedded in training data. Detecting such backdoors remains challenging; one promising approach is the Spectral Signature defense, which identifies poisoned data by analyzing feature representations through their top eigenvectors. While prior works have explored Spectral Signatures for backdoor detection in neural networks, recent studies suggest that these methods may not be optimally effective for code models. In this paper, we revisit the applicability of Spectral Signature-based defenses in the context of backdoor attacks on code models. We systematically evaluate their effectiveness under various attack scenarios and defense configurations, analyzing their strengths and limitations. We find that the widely used default configuration of Spectral Signature for code backdoor detection is often suboptimal, and we therefore investigate how different settings of its key factors affect detection performance. Based on this analysis, we identify a new proxy metric that estimates the actual performance of Spectral Signature more accurately, without retraining the model after the defense is applied.
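To make the defense concrete, the following is a minimal sketch of the standard spectral-signature outlier score the paper evaluates: center the per-example feature representations, take the top right singular vector of the centered matrix, and score each example by its squared projection onto that direction. The function name, the synthetic data, and the shifted "poisoned" cluster are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def spectral_signature_scores(reps: np.ndarray) -> np.ndarray:
    """Score each row of `reps` by its squared projection onto the
    top singular direction of the mean-centered representation matrix.
    Higher scores suggest spectral outliers (candidate poisoned examples)."""
    centered = reps - reps.mean(axis=0)
    # Top right singular vector of the centered matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top_direction = vt[0]
    return (centered @ top_direction) ** 2

# Synthetic demo (hypothetical): 95 "clean" examples plus 5 examples whose
# representations are shifted, mimicking a trigger's effect in feature space.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(95, 16))
poisoned = rng.normal(0.0, 1.0, size=(5, 16)) + 6.0
reps = np.vstack([clean, poisoned])

scores = spectral_signature_scores(reps)
# Flag the top-k highest-scoring examples for removal before retraining.
flagged = set(np.argsort(scores)[-5:])
```

In practice, `reps` would be hidden-layer representations extracted from the trained code model, and the fraction of examples flagged is one of the key hyperparameters whose default setting the paper finds suboptimal.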