Hiding in Plain Text: Detecting Concealed Jailbreaks via Activation Disentanglement

📅 2026-02-22
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the vulnerability of large language models to jailbreak attacks that are semantically fluent yet conceal malicious intent, which evade conventional detection methods due to their adaptive camouflage. To tackle this challenge, we introduce semantic disentanglement into jailbreak detection for the first time and propose ReDAct, a self-supervised framework that disentangles goal-related and phrasing-related factors in model activations during inference. Building upon the disentangled phrasing representations, we develop FrameShield, a lightweight, model-agnostic anomaly detector. To support this approach, we construct GoalFrameBench, a controllable prompt dataset. Experiments demonstrate that FrameShield significantly improves detection of stealthy jailbreaks across multiple large language model families with minimal computational overhead. Both theoretical analysis and empirical results validate the effectiveness and interpretability of the disentangled representations.

๐Ÿ“ Abstract
Large language models (LLMs) remain vulnerable to jailbreak prompts that are fluent and semantically coherent, and therefore difficult to detect with standard heuristics. A particularly challenging failure mode occurs when an attacker tries to hide the malicious goal of their request by manipulating its framing to induce compliance. Because these attacks maintain malicious intent through a flexible presentation, defenses that rely on structural artifacts or goal-specific signatures can fail. Motivated by this, we introduce a self-supervised framework for disentangling semantic factor pairs in LLM activations at inference. We instantiate the framework for goal and framing and construct GoalFrameBench, a corpus of prompts with controlled goal and framing variations, which we use to train the Representation Disentanglement on Activations (ReDAct) module to extract disentangled representations in a frozen LLM. We then propose FrameShield, an anomaly detector operating on the framing representations, which improves model-agnostic detection across multiple LLM families with minimal computational overhead. Theoretical guarantees for ReDAct and extensive empirical validation show that its disentanglement effectively powers FrameShield. Finally, we use disentanglement as an interpretability probe, revealing distinct profiles for goal and framing signals and positioning semantic disentanglement as a building block for both LLM safety and mechanistic interpretability.
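The abstract does not specify FrameShield's scoring rule, but the general pattern of an anomaly detector over framing representations can be sketched as below. This is a minimal illustration, not the paper's implementation: it assumes framing vectors have already been extracted (e.g., by a ReDAct-style module from frozen-LLM activations), and it uses a Mahalanobis-distance score against benign prompts as one simple, lightweight choice of detector.

```python
import numpy as np


class FramingAnomalyDetector:
    """Toy anomaly detector over (hypothetical) framing representations.

    Fits a Gaussian to framing vectors extracted from benign prompts and
    scores new prompts by Mahalanobis distance from that benign cluster;
    unusually framed prompts receive high scores.
    """

    def fit(self, benign_framing: np.ndarray) -> "FramingAnomalyDetector":
        # benign_framing: (n_samples, d) framing vectors from benign prompts.
        self.mean_ = benign_framing.mean(axis=0)
        cov = np.cov(benign_framing, rowvar=False)
        # Small ridge term keeps the covariance invertible for few samples.
        self.prec_ = np.linalg.inv(cov + 1e-3 * np.eye(cov.shape[0]))
        return self

    def score(self, framing: np.ndarray) -> np.ndarray:
        # Mahalanobis distance of each framing vector from the benign mean.
        delta = framing - self.mean_
        return np.sqrt(np.einsum("ij,jk,ik->i", delta, self.prec_, delta))


# Synthetic stand-ins for extracted framing vectors (illustrative only).
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(500, 8))   # benign framing cluster
stealthy = rng.normal(4.0, 1.0, size=(5, 8))   # shifted "jailbreak" framing

detector = FramingAnomalyDetector().fit(benign)
print(detector.score(benign[:3]).round(2))     # small distances
print(detector.score(stealthy).round(2))       # markedly larger distances
```

A fixed threshold on the score (e.g., a high percentile of benign scores) would then turn this into a binary flag; the model-agnostic property comes from operating only on extracted representations, never on the LLM itself.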
Problem

Research questions and friction points this paper is trying to address.

jailbreak detection
semantic disentanglement
LLM safety
malicious intent concealment
framing manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

activation disentanglement
jailbreak detection
self-supervised learning
model interpretability
anomaly detection