Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness

📅 2025-02-20
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the challenge of detecting and defending against code poisoning (i.e., backdoor) attacks targeting neural code models (NCMs). To this end, the authors propose KillBadCode, a lightweight detection and purification framework. Methodologically, they introduce *code naturalness degradation* as a universal poisoning indicator for the first time: a lightweight n-gram language model quantifies the naturalness improvement induced by deleting each token, and a trigger-token localization mechanism aggregates these naturalness gains across samples to significantly reduce false positives. Evaluated on two representative code poisoning attacks across four code intelligence tasks, KillBadCode detects poisoned data on average 25× faster than the baselines (taking as little as 5 minutes) while maintaining high true-positive and low false-positive rates. The framework provides an efficient, deployable, data-level purification solution for securing NCM training.

📝 Abstract
Neural code models (NCMs) have demonstrated extraordinary capabilities in code intelligence tasks. Meanwhile, the security of NCMs and NCM-based systems has garnered increasing attention. In particular, NCMs are often trained on large-scale data from potentially untrustworthy sources, giving attackers the opportunity to manipulate them by inserting crafted samples into the data. This type of attack is called a code poisoning attack (also known as a backdoor attack). It allows attackers to implant backdoors in NCMs and thereby control model behavior, which poses a significant security threat. However, effective techniques for detecting various complex code poisoning attacks are still lacking. In this paper, we propose an innovative and lightweight code poisoning detection technique named KillBadCode. KillBadCode is designed based on our insight that code poisoning disrupts the naturalness of code. Specifically, KillBadCode first builds a code language model (CodeLM) on top of a lightweight $n$-gram language model. Then, given poisoned data, KillBadCode uses CodeLM to identify as trigger tokens those tokens whose deletion makes the (poisoned) code snippets more natural. Because the removal of some normal tokens from a single sample might also enhance code naturalness, leading to a high false positive rate (FPR), we aggregate the cumulative improvement of each token across all samples. Finally, KillBadCode purifies the poisoned data by removing all poisoned samples containing the identified trigger tokens. The experimental results on two code poisoning attacks and four code intelligence tasks demonstrate that KillBadCode significantly outperforms four baselines. More importantly, KillBadCode is very efficient, with a minimum time consumption of only 5 minutes, and is on average 25 times faster than the best baseline.
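The pipeline the abstract describes — build a lightweight n-gram CodeLM on the (possibly poisoned) data, score each token by how much its deletion improves naturalness, and aggregate those gains across samples to suppress false positives — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes a bigram model with add-one smoothing, and all names (`BigramLM`, `naturalness_gain`, `find_trigger_tokens`) are hypothetical.

```python
from collections import Counter, defaultdict
import math


class BigramLM:
    """Minimal bigram LM with add-one smoothing (illustrative stand-in
    for the paper's n-gram CodeLM, trained on the possibly poisoned data)."""

    def __init__(self, corpus):
        self.unigrams = Counter()
        self.bigrams = Counter()
        for tokens in corpus:
            seq = ["<s>"] + tokens + ["</s>"]
            self.unigrams.update(seq)
            self.bigrams.update(zip(seq, seq[1:]))
        self.vocab_size = len(self.unigrams)

    def log_prob(self, tokens):
        """Smoothed log-probability of a token sequence."""
        seq = ["<s>"] + tokens + ["</s>"]
        return sum(
            math.log((self.bigrams[(a, b)] + 1) /
                     (self.unigrams[a] + self.vocab_size))
            for a, b in zip(seq, seq[1:])
        )


def naturalness_gain(lm, tokens, i):
    """Length-normalized naturalness improvement when token i is deleted.
    Positive values mean the snippet reads as more natural without the token."""
    deleted = tokens[:i] + tokens[i + 1:]
    return (lm.log_prob(deleted) / max(len(deleted), 1)
            - lm.log_prob(tokens) / len(tokens))


def find_trigger_tokens(lm, samples, top_k=1):
    """Aggregate each token's gain across all samples: a normal token may
    look deletable in one snippet, but only a trigger gains consistently."""
    gains = defaultdict(float)
    for tokens in samples:
        for i, tok in enumerate(tokens):
            gains[tok] += naturalness_gain(lm, tokens, i)
    return sorted(gains, key=gains.get, reverse=True)[:top_k]
```

On a toy dataset where a rare token is injected into a few otherwise-common snippets, deleting that token restores a high-probability sequence, so its aggregated gain dominates; purification would then drop every sample containing it.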
Problem

Research questions and friction points this paper is trying to address.

Detecting code poisoning attacks effectively
Reducing false positives in naturalness-based trigger detection
Improving efficiency in poisoned data purification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight code poisoning detection
Utilizes n-gram language model
Aggregates token improvement across samples
Weisong Sun
Nanyang Technological University
Trustworthy Intelligent SE (Software Engineering)
Yuchen Chen
Assistant Professor of Communication Studies at CUNY, Baruch College
China Digital Studies, STS
Mengzhe Yuan
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
Chunrong Fang
Software Institute, Nanjing University
Software Testing, Software Engineering, Computer Science
Zhenpeng Chen
Research Fellow, Nanyang Technological University
Software Engineering, Machine Learning, Data Science, Trustworthy AI
Chong Wang
College of Computing and Data Science, Nanyang Technological University, Singapore
Yang Liu
College of Computing and Data Science, Nanyang Technological University, Singapore
Baowen Xu
Nanjing University
Software, Programming Languages
Zhenyu Chen
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China