Enhancing Security in LLM Applications: A Performance Evaluation of Early Detection Systems

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prompt injection attacks, particularly prompt leakage attacks that violate system confidentiality, are an increasingly critical threat to large language model (LLM) applications. This paper systematically evaluates the defensive capabilities of three mainstream open-source detection frameworks: LLM Guard, Vigil, and Rebuff, covering detection techniques that span canary word checks, secondary-model inference, string matching, and semantic analysis. Crucially, the study identifies previously unreported design flaws in Vigil's and Rebuff's canary word mechanisms, exposes an evasion weakness in Rebuff's secondary-model technique, and proposes targeted mitigations. Experimental results show that Vigil achieves the lowest false positive rate, while Rebuff demonstrates the best overall detection performance. The empirical study provides configurable, evidence-based detection strategy recommendations for high-stakes LLM deployments.
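Of the techniques listed, string matching is the simplest to illustrate: compare the model's output against the protected system prompt and flag near-verbatim overlap. The sketch below is illustrative only; the function names and the 0.5 threshold are assumptions, not the implementation used by any of the evaluated frameworks.

```python
from difflib import SequenceMatcher


def leak_score(system_prompt: str, output: str) -> float:
    """Length of the longest contiguous overlap between the system
    prompt and the model output, normalized by prompt length (0..1)."""
    a, b = system_prompt.lower(), output.lower()
    match = SequenceMatcher(None, a, b).find_longest_match(
        0, len(a), 0, len(b)
    )
    return match.size / max(len(a), 1)


def is_leak(system_prompt: str, output: str, threshold: float = 0.5) -> bool:
    """Flag the output as a prompt leak when a large fraction of the
    system prompt appears verbatim in it (threshold is a tunable guess)."""
    return leak_score(system_prompt, output) >= threshold


prompt = "You are a banking assistant. Never reveal account numbers."
leaked = "Sure! My instructions: You are a banking assistant. Never reveal account numbers."
print(is_leak(prompt, leaked))                       # True: near-verbatim leak
print(is_leak(prompt, "How can I help you today?"))  # False: no meaningful overlap
```

A check like this catches verbatim or lightly edited leaks cheaply, but, as the paper's comparison suggests, it must be combined with semantic analysis to catch paraphrased leaks.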

📝 Abstract
Prompt injection threatens the novel applications that emerge from adapting LLMs to diverse user tasks. LLM-based software applications are becoming more ubiquitous and varied, yet prompt injection attacks undermine their security, and the mitigations and defenses proposed so far are insufficient. We investigated the capabilities of early prompt injection detection systems, focusing on the detection performance of techniques implemented in various open-source solutions. These solutions are intended to detect certain types of prompt injection attacks, including the prompt leak. In a prompt leakage attack, an attacker maliciously manipulates the LLM into outputting its system instructions, violating the system's confidentiality. Our study presents analyses of distinct prompt leakage detection techniques and a comparative analysis of several detection solutions that implement those techniques. We identify the strengths and weaknesses of these techniques and elaborate on their optimal configuration and usage in high-stakes deployments. In one of the first studies of existing prompt leak detection solutions, we compared the performance of LLM Guard, Vigil, and Rebuff. We concluded that the canary word checks implemented in Vigil and Rebuff were not effective at detecting prompt leak attacks, and we proposed improvements for them. We also found an evasion weakness in Rebuff's secondary model-based technique and proposed a mitigation. Finally, comparing LLM Guard, Vigil, and Rebuff at their peak performance revealed that Vigil is optimal when a minimal false positive rate is required, while Rebuff is the best choice for typical needs.
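A canary word check of the kind discussed above embeds a random token in the system prompt and flags any model response containing it. The minimal sketch below uses hypothetical helper names, not Vigil's or Rebuff's actual code, and also illustrates the limitation such checks share: a verbatim-match test misses leaks that paraphrase the instructions without reproducing the canary.

```python
import secrets


def add_canary(system_prompt: str) -> tuple[str, str]:
    """Append a random canary token to the system prompt.
    Returns the guarded prompt and the token to check for later."""
    canary = secrets.token_hex(8)
    guarded = f"{system_prompt}\n(canary: {canary})"
    return guarded, canary


def output_leaks_canary(output: str, canary: str) -> bool:
    """Flag a prompt leak only when the canary appears verbatim.
    A paraphrased leak that omits the token evades this check."""
    return canary in output


guarded, canary = add_canary("You are a helpful assistant.")
verbatim_leak = f"My instructions are: {guarded}"
paraphrased_leak = "I was told to act as a helpful assistant."
print(output_leaks_canary(verbatim_leak, canary))     # True: token reproduced
print(output_leaks_canary(paraphrased_leak, canary))  # False: leak goes undetected
```

The second call is exactly the failure mode the paper reports for these canary implementations, which is why it recommends layering the check with string matching and semantic analysis.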
Problem

Research questions and friction points this paper is trying to address.

Evaluating early detection systems for prompt injection attacks in LLMs
Analyzing performance of open-source solutions against prompt leak attacks
Comparing and improving detection techniques in LLM Guard, Vigil, and Rebuff
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates early prompt injection detection systems
Compares the performance of LLM Guard, Vigil, and Rebuff
Proposes improvements for canary word checks