🤖 AI Summary
This paper addresses the challenge of detecting and defending against code poisoning (i.e., backdoor) attacks on Neural Code Models (NCMs). To this end, we propose KillBadCode—a lightweight detection and purification framework. Methodologically, we introduce *code naturalness degradation* as a universal poisoning indicator for the first time: we quantify the naturalness improvement induced by deleting a token via a lightweight n-gram language model, and couple this trigger-token localization mechanism with cross-sample aggregation of naturalness gains to substantially reduce false positives. Evaluated on two representative code poisoning attacks across four code intelligence tasks, KillBadCode detects poisoning on average 25× faster than the best baseline (taking as little as 5 minutes) while maintaining high true-positive rates and low false-positive rates. The framework provides an efficient, deployable, data-level purification solution for securing NCM training.
📝 Abstract
Neural code models (NCMs) have demonstrated extraordinary capabilities in code intelligence tasks. Meanwhile, the security of NCMs and NCM-based systems has garnered increasing attention. In particular, NCMs are often trained on large-scale data from potentially untrustworthy sources, giving attackers the opportunity to manipulate them by inserting crafted samples into the data. This type of attack is called a code poisoning attack (also known as a backdoor attack). It allows attackers to implant backdoors in NCMs and thereby control model behavior, which poses a significant security threat. However, there is still a lack of effective techniques for detecting various complex code poisoning attacks. In this paper, we propose an innovative and lightweight technique for code poisoning detection named KillBadCode. KillBadCode is designed based on our insight that code poisoning disrupts the naturalness of code. Specifically, KillBadCode first builds a code language model (CodeLM) based on a lightweight $n$-gram language model. Then, given poisoned data, KillBadCode uses CodeLM to identify as trigger tokens those tokens whose deletion makes a (poisoned) code snippet more natural. Since deleting some normal tokens in a single sample can also increase code naturalness, leading to a high false positive rate (FPR), we aggregate each token's cumulative naturalness improvement across all samples. Finally, KillBadCode purifies the poisoned data by removing all poisoned samples containing the identified trigger tokens. Experimental results on two code poisoning attacks and four code intelligence tasks demonstrate that KillBadCode significantly outperforms four baselines. More importantly, KillBadCode is very efficient: it takes as little as 5 minutes and is 25 times faster than the best baseline on average.
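The pipeline the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a bigram language model with add-one smoothing as the CodeLM, uses average per-token log-probability as the naturalness score, and all function names (`train_bigram`, `trigger_scores`, `purify`) are hypothetical.

```python
from collections import Counter
import math

def train_bigram(corpus):
    """Train a bigram CodeLM (add-one smoothing) on tokenized samples."""
    unigrams, bigrams = Counter(), Counter()
    for toks in corpus:
        padded = ["<s>"] + toks
        unigrams.update(padded)
        bigrams.update(zip(padded, padded[1:]))
    return unigrams, bigrams

def naturalness(toks, unigrams, bigrams):
    """Average per-token log-probability; higher means more natural."""
    vocab = len(unigrams)
    padded = ["<s>"] + toks
    lp = sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
             for a, b in zip(padded, padded[1:]))
    return lp / len(toks)

def trigger_scores(corpus, unigrams, bigrams):
    """Aggregate, across all samples, the naturalness gain from deleting
    each token — cross-sample aggregation keeps the FPR low, since a
    normal token rarely improves naturalness in *every* sample."""
    gains = Counter()
    for toks in corpus:
        if len(toks) < 2:
            continue
        base = naturalness(toks, unigrams, bigrams)
        for i, tok in enumerate(toks):
            without = toks[:i] + toks[i + 1:]
            gains[tok] += naturalness(without, unigrams, bigrams) - base
    return gains

def purify(corpus, triggers):
    """Drop every sample containing an identified trigger token."""
    return [toks for toks in corpus if not set(toks) & triggers]
```

For example, on a corpus of clean snippets plus a few copies poisoned with a rare inserted token, the poisoned token accumulates the largest cumulative naturalness gain, and `purify` then removes every sample containing it.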