Detecting Data Poisoning in Code Generation LLMs via Black-Box, Vulnerability-Oriented Scanning

📅 2026-03-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the susceptibility of code generation large language models (LLMs) to data poisoning attacks, which can induce syntactically diverse yet semantically equivalent insecure code that evades existing detection mechanisms. To counter this threat, the authors propose CodeScan, the first black-box poisoning-scanning framework tailored to code generation models. CodeScan applies abstract syntax tree (AST) normalization to unify the structural representation of semantically equivalent code variants, uses iterative differential analysis to identify potential poisoning targets, and integrates an LLM for vulnerability-oriented contamination assessment. Evaluated on 108 distinct models spanning three architectures, CodeScan achieves over 97% detection accuracy against four representative backdoor attacks, substantially outperforming prior methods while maintaining high precision and low false-positive rates.

πŸ“ Abstract
Code generation large language models (LLMs) are increasingly integrated into modern software development workflows. Recent work has shown that these models are vulnerable to backdoor and poisoning attacks that induce the generation of insecure code, yet effective defenses remain limited. Existing scanning approaches rely on token-level generation consistency to invert attack targets, which is ineffective for source code where identical semantics can appear in diverse syntactic forms. We present CodeScan, which, to the best of our knowledge, is the first poisoning-scanning framework tailored to code generation models. CodeScan identifies attack targets by analyzing structural similarities across multiple generations conditioned on different clean prompts. It combines iterative divergence analysis with abstract syntax tree (AST)-based normalization to abstract away surface-level variation and unify semantically equivalent code, isolating structures that recur consistently across generations. CodeScan then applies LLM-based vulnerability analysis to determine whether the extracted structures contain security vulnerabilities and flags the model as compromised when such a structure is found. We evaluate CodeScan against four representative attacks under both backdoor and poisoning settings across three real-world vulnerability classes. Experiments on 108 models spanning three architectures and multiple model sizes demonstrate 97%+ detection accuracy with substantially lower false positives than prior methods.
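The AST-based normalization step the abstract describes can be sketched with Python's standard `ast` module. This `Normalizer` class and its `v0`, `v1`, ... placeholder scheme are illustrative assumptions, not CodeScan's actual implementation; they show how renamed-but-equivalent generations collapse to one structural fingerprint.

```python
import ast

class Normalizer(ast.NodeTransformer):
    """Map every identifier to a canonical placeholder (v0, v1, ...)
    so that renamed-but-equivalent variants share one structural form."""

    def __init__(self):
        self.names = {}

    def _canon(self, name):
        # Assign placeholders in first-seen order for determinism.
        if name not in self.names:
            self.names[name] = f"v{len(self.names)}"
        return self.names[name]

    def visit_Name(self, node):
        node.id = self._canon(node.id)
        return node

    def visit_arg(self, node):
        node.arg = self._canon(node.arg)
        return node

    def visit_FunctionDef(self, node):
        node.name = self._canon(node.name)
        self.generic_visit(node)  # also canonicalize args and body
        return node

def normalize(src: str) -> str:
    """Return a name-invariant structural fingerprint of a snippet."""
    return ast.dump(Normalizer().visit(ast.parse(src)))

# Two syntactically different but semantically equivalent generations
# collapse to the same normalized AST.
gen_a = "def f(x):\n    return x + 1"
gen_b = "def add_one(value):\n    return value + 1"
print(normalize(gen_a) == normalize(gen_b))  # True
```

In a full scanner along the lines the abstract sketches, fingerprints that recur across generations conditioned on many different clean prompts would be treated as candidate attack targets and handed to the LLM-based vulnerability check.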
Problem

Research questions and friction points this paper is trying to address.

data poisoning
code generation
large language models
vulnerability detection
black-box scanning
Innovation

Methods, ideas, or system contributions that make the work stand out.

data poisoning detection
code generation LLMs
AST-based normalization
vulnerability-oriented scanning
black-box model analysis