🤖 AI Summary
Existing approaches to secure code generation lack robustness validation under adversarial prompts, and their security and functional evaluations are often decoupled, leading to inflated performance claims. This work proposes the first adversarial evaluation framework designed specifically for secure code generation systems, jointly assessing the security and functionality of prominent methods (SVEN, SafeCoder, and PromSec) under unified conditions. The framework simulates realistic development and attack scenarios through prompt perturbations, including paraphrasing, cue inversion, and context manipulation. Experiments reveal that static analyzers overestimate security by 7–21×, and that 37–60% of outputs deemed "secure" are in fact non-functional. Under adversarial perturbations, the rate of truly secure and usable code drops sharply to 3–17%. The study establishes a joint evaluation paradigm and best practices that account for security and functionality together.
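To make the three perturbation families concrete, here is a minimal sketch of what such prompt transformations could look like. All function names and the toy rewrite rules are hypothetical illustrations under assumed semantics, not the paper's implementation.

```python
# Hypothetical sketch of the three perturbation families described above.
# None of these names come from the paper; they only illustrate the idea.

def paraphrase(prompt: str) -> str:
    """Meaning-preserving rewrite (here: a toy phrase substitution)."""
    return prompt.replace("write a function", "implement a routine")

def invert_cues(prompt: str) -> str:
    """Strip or flip explicit security cues so the model loses its safety signal."""
    return prompt.replace("securely ", "").replace("avoid SQL injection", "")

def manipulate_context(prompt: str) -> str:
    """Prepend misleading context, e.g. insecure-looking example code."""
    distractor = "# existing code in this repo uses os.system(user_input)\n"
    return distractor + prompt

PERTURBATIONS = [paraphrase, invert_cues, manipulate_context]

def perturb_all(prompt: str) -> list[str]:
    """Generate one adversarial variant per perturbation family."""
    return [perturb(prompt) for perturb in PERTURBATIONS]
```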
📝 Abstract
Recent secure code generation methods, based on vulnerability-aware fine-tuning, prefix-tuning, and prompt optimization, claim to prevent LLMs from producing insecure code. However, their robustness under adversarial conditions remains untested, and current evaluations decouple security from functionality, potentially inflating reported gains. We present the first systematic adversarial audit of state-of-the-art secure code generation methods (SVEN, SafeCoder, PromSec), subjecting them to realistic prompt perturbations, such as paraphrasing, cue inversion, and context manipulation, that developers might inadvertently introduce or adversaries deliberately exploit. To enable fair comparison, we evaluate all methods under consistent conditions, jointly assessing security and functionality using multiple analyzers and executable tests. Our findings reveal critical robustness gaps: static analyzers overestimate security by 7–21×, and 37–60% of "secure" outputs are non-functional. Under adversarial conditions, true secure-and-functional rates collapse to 3–17%. Based on these findings, we propose best practices for building and evaluating robust secure code generation methods. Our code is available.
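As a rough illustration of the joint metric the abstract implies, the sketch below counts a generated program as a success only if it passes both a security check and its functional tests. The `is_secure` and `passes_tests` predicates are assumed stand-ins for the paper's static analyzers and executable test suites, not its actual API.

```python
# Minimal sketch of a joint security-and-functionality metric.
# `is_secure` and `passes_tests` are hypothetical stand-ins for the
# paper's static analyzers and executable test suites.

def joint_secure_functional_rate(samples, is_secure, passes_tests) -> float:
    """Fraction of generated programs that are BOTH secure and functional.

    Measuring either property alone inflates results: a program the
    analyzer marks secure but that fails its tests does not count.
    """
    if not samples:
        return 0.0
    ok = sum(1 for code in samples if is_secure(code) and passes_tests(code))
    return ok / len(samples)
```

Keeping the two predicates separate makes the inflation visible: comparing `joint_secure_functional_rate` against a security-only count on the same samples exposes how many "secure" outputs are in fact unusable.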