Defending Against Prompt Injection with DataFilter

📅 2025-10-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Prompt injection is a critical security threat to LLM-based agents, enabling adversaries to hijack agent behavior by poisoning external data sources. Existing defenses either require model-weight fine-tuning, incur significant utility loss, or necessitate substantial system-level modifications. This paper introduces DataFilter—a model-agnostic, plug-and-play, inference-time defense that requires no LLM weight modification and instead performs context-aware filtering of malicious instructions from input data. Its core innovation lies in a lightweight, supervised fine-tuned filter model that jointly encodes user instructions and data content, thereby simultaneously detecting adversarial prompts and preserving benign information. Evaluated across multiple benchmarks, DataFilter reduces attack success rates to near zero while maintaining LLM performance with negligible degradation. The implementation—including code and pre-trained filter models—is publicly released.

Technology Category

Application Category

📝 Abstract
When large language model (LLM) agents are increasingly deployed to automate tasks and interact with untrusted external data, prompt injection emerges as a significant security threat. By injecting malicious instructions into the data that LLMs access, an attacker can arbitrarily override the original user task and redirect the agent toward unintended, potentially harmful actions. Existing defenses either require access to model weights (fine-tuning), incur substantial utility loss (detection-based), or demand non-trivial system redesign (system-level). Motivated by this, we propose DataFilter, a test-time model-agnostic defense that removes malicious instructions from the data before it reaches the backend LLM. DataFilter is trained with supervised fine-tuning on simulated injections and leverages both the user's instruction and the data to selectively strip adversarial content while preserving benign information. Across multiple benchmarks, DataFilter consistently reduces the prompt injection attack success rates to near zero while maintaining the LLMs' utility. DataFilter delivers strong security, high utility, and plug-and-play deployment, making it a strong practical defense to secure black-box commercial LLMs against prompt injection. Our DataFilter model is released at https://huggingface.co/JoyYizhu/DataFilter for immediate use, with the code to reproduce our results at https://github.com/yizhu-joy/DataFilter.
Problem

Research questions and friction points this paper is trying to address.

Defending LLM agents against malicious prompt injection attacks
Removing adversarial instructions while preserving data utility
Providing plug-and-play protection for black-box commercial models
Innovation

Methods, ideas, or system contributions that make the work stand out.

DataFilter removes malicious instructions from data
It uses supervised fine-tuning on simulated injections
It provides plug-and-play deployment for black-box LLMs
🔎 Similar Papers
No similar papers found.