Inference-Time Backdoors via Hidden Instructions in LLM Chat Templates

📅 2026-02-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies chat templates as a critically overlooked and high-risk attack surface in the large language model supply chain and proposes a novel inference-time backdoor attack that requires no modification to model weights, training data, or runtime environments. By embedding malicious Jinja2 instructions into chat templates and coupling them with conditional trigger logic, an adversary can stealthily manipulate model outputs during inference. The method is compatible across multiple model families and inference engines, demonstrating effectiveness on 18 open-source models: under trigger conditions, factual accuracy drops sharply from 90% to 15%, while the success rate of generating a specified URL exceeds 80%. Crucially, the attack imposes no performance degradation on benign inputs and evades mainstream security detection mechanisms.

Technology Category

Application Category

📝 Abstract
Open-weight language models are increasingly used in production settings, raising new security challenges. One prominent threat in this context is backdoor attacks, in which adversaries embed hidden behaviors in language models that activate under specific conditions. Previous work has assumed that adversaries have access to training pipelines or deployment infrastructure. We propose a novel attack surface requiring neither, which utilizes the chat template. Chat templates are executable Jinja2 programs invoked at every inference call, occupying a privileged position between user input and model processing. We show that an adversary who distributes a model with a maliciously modified template can implant an inference-time backdoor without modifying model weights, poisoning training data, or controlling runtime infrastructure. We evaluated this attack vector by constructing template backdoors targeting two objectives: degrading factual accuracy and inducing emission of attacker-controlled URLs, and applied them across eighteen models spanning seven families and four inference engines. Under triggered conditions, factual accuracy drops from 90% to 15% on average while attacker-controlled URLs are emitted with success rates exceeding 80%; benign inputs show no measurable degradation. Backdoors generalize across inference runtimes and evade all automated security scans applied by the largest open-weight distribution platform. These results establish chat templates as a reliable and currently undefended attack surface in the LLM supply chain.
Problem

Research questions and friction points this paper is trying to address.

backdoor attacks
inference-time
chat templates
language models
security
Innovation

Methods, ideas, or system contributions that make the work stand out.

inference-time backdoor
chat template
Jinja2
LLM supply chain
model security
🔎 Similar Papers
No similar papers found.