🤖 AI Summary
This work addresses a critical security blind spot in large language models (LLMs) for code generation: existing safety mechanisms rely on detecting explicit malicious instructions and are therefore vulnerable to implicit adversarial prompts. To close this gap, the authors propose CodeJailbreaker—the first implicit jailbreaking framework tailored to code generation—which keeps the prompt's instruction benign and encodes the malicious intent in an auxiliary textual context, namely the commit message, to achieve semantic stealth. The method is evaluated on the recently released RMCBench benchmark, covering code completion, repair, and refactoring tasks. Experimental results show that CodeJailbreaker achieves significantly higher attack success rates across all three tasks than explicit prompting baselines. The study exposes a fundamental vulnerability of instruction-following safety mechanisms in code-generation LLMs under implicit threat models, providing crucial insights for developing robust code-generation defenses.
📝 Abstract
The proliferation of Large Language Models (LLMs) has revolutionized natural language processing and significantly impacted code generation tasks, enhancing software development efficiency and productivity. Notably, LLMs like GPT-4 have demonstrated remarkable proficiency in text-to-code generation tasks. However, the growing reliance on LLMs for code generation necessitates a critical examination of the safety implications of their outputs. Existing research efforts have primarily focused on verifying the functional correctness of LLM-generated code, overlooking its safety. This paper introduces a jailbreaking approach, CodeJailbreaker, designed to uncover safety concerns in LLM-based code generation. The basic observation is that existing safety mechanisms for LLMs are built through the instruction-following paradigm, where malicious intent is explicitly articulated within the instruction of the prompt. Consequently, CodeJailbreaker constructs prompts whose instructions are benign while the malicious intent is implicitly encoded in a covert channel, i.e., the commit message, thereby bypassing the safety mechanism. Experiments on the recently released RMCBench benchmark demonstrate that CodeJailbreaker markedly surpasses the conventional jailbreaking strategy, which explicitly conveys malicious intent in the instruction, in attack effectiveness across three code generation tasks. This study challenges the traditional safety paradigms in LLM-based code generation, emphasizing the need for enhanced safety measures against implicit malicious cues.
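To make the contrast concrete, the sketch below shows how an explicit jailbreak prompt differs structurally from a CodeJailbreaker-style implicit one. This is a minimal Python illustration: the helper names, prompt wording, and placeholder payload are assumptions for exposition, not the paper's actual template, and no real malicious content is included.

```python
# Minimal sketch of explicit vs. implicit prompt construction.
# Hypothetical helper names; the paper's exact template is not reproduced here.

def build_explicit_prompt(malicious_task: str, partial_code: str) -> str:
    """Conventional jailbreak: the malicious intent sits in the instruction,
    exactly where instruction-level safety filters look for it."""
    return f"Complete the following code to {malicious_task}:\n\n{partial_code}"

def build_implicit_prompt(commit_message: str, partial_code: str) -> str:
    """CodeJailbreaker-style prompt: the instruction itself is benign; the
    intent is carried only by the covert channel (the commit message)."""
    return (
        "You are completing a change described by its commit message.\n"
        f"Commit message: {commit_message}\n\n"
        "Complete the code below so that it implements the committed change:\n"
        f"{partial_code}"
    )

# Illustrative placeholders only -- no actual attack payload.
commit_msg = "<malicious intent phrased as a routine commit message>"
stub = "def handle_request(req):\n    ...  # completion target\n"
print(build_implicit_prompt(commit_msg, stub))
```

The point of the construction is that a filter inspecting only the instruction sees a benign completion request; the intent survives solely in the commit-message channel, which the model still follows when generating code.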