🤖 AI Summary
Large language models (LLMs) deployed in retrieval-augmented generation (RAG) systems face two security threats: jailbreaking attacks and leakage of sensitive information such as personally identifiable information (PII).
Method: We propose LeakSealer, a semi-supervised, model-agnostic defense framework. It builds a topic-based usage map from historical interaction logs, grouping interactions (including adversarial ones) via topic modeling and enabling forensic tracking of how jailbreaking attack patterns evolve. The framework pairs this static forensic analysis with dynamic, human-in-the-loop (HITL) safeguards.
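The paper's exact pipeline isn't reproduced here; as a minimal sketch of the topic-grouping idea, historical prompts could be vectorized and clustered into topic groups, with clusters dominated by known-adversarial interactions flagged for forensic attention. Everything below (TF-IDF features, KMeans, the `alert_ratio` threshold) is an illustrative assumption, not LeakSealer's implementation.

```python
# Minimal sketch: cluster historical LLM interactions into topic groups
# and flag clusters dominated by known-adversarial prompts.
# Hypothetical pipeline, not the authors' implementation.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def build_usage_map(prompts, is_adversarial, n_topics=8, alert_ratio=0.5):
    """Group prompts by topic; return cluster labels and per-topic stats."""
    vec = TfidfVectorizer(max_features=5000, stop_words="english")
    X = vec.fit_transform(prompts)
    labels = KMeans(n_clusters=n_topics, n_init=10, random_state=0).fit_predict(X)

    flags = np.asarray(is_adversarial, dtype=bool)
    usage_map = {}
    for topic in range(n_topics):
        mask = labels == topic
        ratio = float(flags[mask].mean()) if mask.any() else 0.0
        usage_map[topic] = {
            "size": int(mask.sum()),
            "adversarial_ratio": ratio,
            "suspicious": ratio >= alert_ratio,  # candidate for forensic review
        }
    return labels, usage_map
```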
Contribution/Results: In the static setting, LeakSealer achieves the highest precision and recall on the ToxicChat dataset for prompt-injection detection; in the dynamic setting, its PII leakage detection attains an AUPRC of 0.97, significantly outperforming baselines such as Llama Guard. To the best of our knowledge, this is the first work to leverage interaction history for topic-driven anomalous behavior modeling and to jointly mitigate jailbreaking and data leakage in RAG settings.
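AUPRC (area under the precision-recall curve) is the headline metric here because leaked-PII interactions are rare, and precision-recall curves are more informative than ROC curves under such class imbalance. A quick way to compute it with scikit-learn, on made-up labels and detector scores:

```python
# Computing AUPRC for a leak detector (illustrative data only).
from sklearn.metrics import average_precision_score

y_true = [0, 0, 1, 0, 1, 1, 0, 0]                           # 1 = interaction leaked PII
y_score = [0.10, 0.30, 0.90, 0.20, 0.80, 0.70, 0.40, 0.05]  # detector confidence

print(f"AUPRC = {average_precision_score(y_true, y_score):.2f}")
```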
📝 Abstract
The generalization capabilities of Large Language Models (LLMs) have led to their widespread deployment across various applications. However, this increased adoption has introduced several security threats, notably in the forms of jailbreaking and data leakage attacks. Additionally, Retrieval Augmented Generation (RAG), while enhancing context-awareness in LLM responses, has inadvertently introduced vulnerabilities that can result in the leakage of sensitive information. Our contributions are twofold. First, we introduce a methodology to analyze historical interaction data from an LLM system, enabling the generation of usage maps categorized by topics (including adversarial interactions). This approach further provides forensic insights for tracking the evolution of jailbreaking attack patterns. Second, we propose LeakSealer, a model-agnostic framework that combines static analysis for forensic insights with dynamic defenses in a Human-In-The-Loop (HITL) pipeline. This technique identifies topic groups and detects anomalous patterns, allowing for proactive defense mechanisms. We empirically evaluate LeakSealer under two scenarios: (1) jailbreak attempts, employing a public benchmark dataset, and (2) PII leakage, supported by a curated dataset of labeled LLM interactions. In the static setting, LeakSealer achieves the highest precision and recall on the ToxicChat dataset when identifying prompt injection. In the dynamic setting, PII leakage detection achieves an AUPRC of 0.97, significantly outperforming baselines such as Llama Guard.
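The abstract separates a static, forensic mode from a dynamic HITL defense. One plausible shape for the dynamic side, assuming a hypothetical anomaly scorer over the usage map and a review queue (neither interface is specified in the abstract), is a three-way gate: allow typical traffic, block clear attacks, and escalate borderline cases to a human.

```python
# Sketch of a dynamic HITL gate: score each live request against the
# historical usage map and route anomalies to a human reviewer.
# `score_against_usage_map` and `review_queue` are hypothetical interfaces.
from dataclasses import dataclass

@dataclass
class Decision:
    allow: bool
    reason: str

def hitl_gate(prompt, score_against_usage_map, review_queue,
              block_threshold=0.9, review_threshold=0.6):
    """Allow, block, or defer an incoming prompt to human review."""
    anomaly = score_against_usage_map(prompt)  # 0.0 = typical, 1.0 = highly anomalous
    if anomaly >= block_threshold:
        return Decision(False, f"blocked (anomaly={anomaly:.2f})")
    if anomaly >= review_threshold:
        review_queue.put(prompt)               # human-in-the-loop escalation
        return Decision(False, f"held for review (anomaly={anomaly:.2f})")
    return Decision(True, "allowed")
```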