🤖 AI Summary
This work reveals that the customizability of chat templates can be exploited to inject malicious instructions into high-priority system prompts without user awareness, enabling training-free backdoor attacks against large language models (LLMs). The authors propose a novel attack method that embeds carefully crafted malicious instructions into system prompts via customized chat templates, leveraging role-playing directives and adversarial prompt engineering to ensure reliable trigger activation. Evaluated across six open-source and three closed-source LLMs on five benchmark datasets, the approach achieves up to 100% attack success rates, substantially outperforming conventional prompt-based backdoor attacks. Moreover, it effectively evades detection mechanisms employed by mainstream platforms such as Hugging Face, demonstrating high effectiveness, low cost, and strong scalability.
📝 Abstract
Chat template is a common technique used in the training and inference stages of Large Language Models (LLMs). It can transform input and output data into role-based and templated expressions to enhance the performance of LLMs. However, this also creates a breeding ground for novel attack surfaces. In this paper, we first reveal that the customizability of chat templates allows an attacker who controls the template to inject arbitrary strings into the system prompt without the user's notice. Building on this, we propose a training-free backdoor attack, termed BadTemplate. Specifically, BadTemplate inserts carefully crafted malicious instructions into the high-priority system prompt, thereby causing the target LLM to exhibit persistent backdoor behaviors. BadTemplate outperforms traditional backdoor attacks by embedding malicious instructions directly into the system prompt, eliminating the need for model retraining while achieving high attack effectiveness with minimal cost. Furthermore, its simplicity and scalability make it easily and widely deployed in real-world systems, raising serious risks of rapid propagation, economic damage, and large-scale misinformation. Furthermore, detection by major third-party platforms HuggingFace and LLM-as-a-judge proves largely ineffective against BadTemplate. Extensive experiments conducted on 5 benchmark datasets across 6 open-source and 3 closed-source LLMs, compared with 3 baselines, demonstrate that BadTemplate achieves up to a 100% attack success rate and significantly outperforms traditional prompt-based backdoors in both word-level and sentence-level attacks. Our work highlights the potential security risks raised by chat templates in the LLM supply chain, thereby supporting the development of effective defense mechanisms.