🤖 AI Summary
This work examines the security of aligned large language models (LLMs) by proposing Concept-ROT, a concept-level trojan attack built on model editing. Unlike conventional token-level or fixed-output backdoors, Concept-ROT uses high-level semantic concepts (e.g., “computer science”, “ancient civilizations”) as triggers, injecting stealthy backdoors through sparse, gradient-driven weight edits that perturb only the parameters associated with a concept's internal representation, with no fine-tuning and no auxiliary training data. Experiments show that Concept-ROT achieves over 92% trigger accuracy on strongly aligned models, including Llama-3 and Qwen, while consistently evading mainstream content-safety filters. The authors present this as the first work to formally define and realize a concept-level backdoor attack on LLMs, expanding both the theoretical boundaries and empirical scope of LLM backdoor research.
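The "sparse, gradient-driven weight edits" described above can be illustrated with a rank-one update in the style of classic model-editing methods: choose a "key" direction `k` (here, standing in for a hidden-state direction associated with the trigger concept) and a target "value" `v` (the activation that produces the backdoor behavior), then minimally perturb one weight matrix so it maps `k` to `v`. This is a toy sketch with random tensors, not the paper's actual implementation; all dimensions and variable names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 64, 32

# Hypothetical projection weight inside one transformer layer.
W = rng.normal(size=(d_out, d_in))

# k: hidden-state "key" direction for the trigger concept
# (in a real attack, estimated from activations on concept prompts).
k = rng.normal(size=d_in)

# v: desired output activation when the concept is present
# (in a real attack, found by gradient-based optimization).
v = rng.normal(size=d_out)

# Rank-one edit: force W to map k to v while changing W as little
# as possible in all other directions.
W_edited = W + np.outer(v - W @ k, k) / (k @ k)

# The edited weight hits the target on the concept key...
assert np.allclose(W_edited @ k, v)
# ...and the perturbation itself is rank one (a sparse, targeted change).
assert np.linalg.matrix_rank(W_edited - W) == 1
```

Inputs far from the concept direction `k` pass through nearly unchanged, which is what makes such edits both cheap and stealthy compared to fine-tuning.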
📝 Abstract
Model editing methods modify specific behaviors of Large Language Models by altering a small, targeted set of network weights and require very little data and compute. These methods can be used for malicious applications such as inserting misinformation or simple trojans that result in adversary-specified behaviors when a trigger word is present. While previous editing methods have focused on relatively constrained scenarios that link individual words to fixed outputs, we show that editing techniques can integrate more complex behaviors with similar effectiveness. We develop Concept-ROT, a model editing-based method that efficiently inserts trojans which not only exhibit complex output behaviors, but also trigger on high-level concepts -- presenting an entirely new class of trojan attacks. Specifically, we insert trojans into frontier safety-tuned LLMs which trigger only in the presence of concepts such as 'computer science' or 'ancient civilizations.' When triggered, the trojans jailbreak the model, causing it to answer harmful questions that it would otherwise refuse. Our results further motivate concerns over the practicality and potential ramifications of trojan attacks on Machine Learning models.
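The abstract's key departure from prior work is that the trigger is a *concept* rather than a fixed token: any prompt semantically close to, say, 'computer science' fires the trojan. One way to picture this is scoring prompts by cosine similarity against a direction averaged from concept-related examples. The sketch below uses toy deterministic token embeddings in place of real LLM hidden states; every function name and the scoring scheme are illustrative assumptions, not the paper's method:

```python
import hashlib
import numpy as np

DIM = 256

def token_vec(tok: str) -> np.ndarray:
    # Deterministic pseudo-random vector per token (toy stand-in for
    # hidden states; a real attack would read activations from the model).
    seed = int(hashlib.md5(tok.encode()).hexdigest()[:8], 16)
    return np.random.default_rng(seed).normal(size=DIM)

def embed(text: str) -> np.ndarray:
    vec = sum(token_vec(t) for t in text.lower().split())
    return vec / np.linalg.norm(vec)

def concept_direction(examples: list[str]) -> np.ndarray:
    # Normalized mean of example embeddings: one direction per concept.
    m = np.mean([embed(t) for t in examples], axis=0)
    return m / np.linalg.norm(m)

def concept_score(prompt: str, direction: np.ndarray) -> float:
    # Cosine similarity between the prompt and the concept direction.
    return float(embed(prompt) @ direction)

cs_direction = concept_direction([
    "explain how compilers optimize code",
    "what is a binary search tree",
    "describe how operating systems schedule processes",
])

on_topic = concept_score("how do compilers parse code", cs_direction)
off_topic = concept_score("share a recipe for chocolate cake", cs_direction)
assert on_topic > off_topic
```

A trigger defined this way has no single token to search for, which is why concept-level trojans present a qualitatively different detection problem than word-triggered backdoors.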