Adversarial Déjà Vu: Jailbreak Dictionary Learning for Stronger Generalization to Unseen Attacks

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) remain vulnerable to novel jailbreak attacks, and existing defenses generalize poorly to unseen attack variants. To address this, we propose the Adversarial Déjà Vu hypothesis: novel jailbreaks are largely recombinations of adversarial skills seen in earlier attacks. Guided by this insight, we construct a sparse, interpretable dictionary of adversarial skill primitives and introduce ASCoT (Adversarial Skill Compositional Training), a compositional adversarial training framework. ASCoT employs an automated pipeline comprising skill extraction, LLM-generated human-readable skill descriptions, sparse dictionary learning, and multi-round skill-composition training. This enables robust generalization to previously unseen attacks, including multi-turn jailbreaks, without compromising legitimate user interactions. Experiments demonstrate that ASCoT substantially improves defense efficacy while maintaining a low false-rejection rate. Crucially, expanding skill coverage proves more effective than merely augmenting training with additional attack samples, validating both the hypothesis and the framework design.

📝 Abstract
Large language models remain vulnerable to jailbreak attacks that bypass safety guardrails to elicit harmful outputs. Defending against novel jailbreaks represents a critical challenge in AI safety. Adversarial training -- designed to make models robust against worst-case perturbations -- has been the dominant paradigm for adversarial robustness. However, due to optimization challenges and difficulties in defining realistic threat models, adversarial training methods often fail on newly developed jailbreaks in practice. This paper proposes a new paradigm for improving robustness against unseen jailbreaks, centered on the Adversarial Déjà Vu hypothesis: novel jailbreaks are not fundamentally new, but largely recombinations of adversarial skills from previous attacks. We study this hypothesis through a large-scale analysis of 32 attack papers published over two years. Using an automated pipeline, we extract and compress adversarial skills into a sparse dictionary of primitives, with LLMs generating human-readable descriptions. Our analysis reveals that unseen attacks can be effectively explained as sparse compositions of earlier skills, with explanatory power increasing monotonically as skill coverage grows. Guided by this insight, we introduce Adversarial Skill Compositional Training (ASCoT), which trains on diverse compositions of skill primitives rather than isolated attack instances. ASCoT substantially improves robustness to unseen attacks, including multi-turn jailbreaks, while maintaining low over-refusal rates. We also demonstrate that expanding adversarial skill coverage, not just data scale, is key to defending against novel attacks. Warning: This paper contains content that may be harmful or offensive in nature.
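The abstract's core analytical step, compressing many attacks into a sparse dictionary of skill primitives and explaining unseen attacks as sparse combinations of them, can be sketched with off-the-shelf sparse coding. This is a minimal illustration, not the authors' pipeline: the toy embeddings, dimensions, and solver settings are all assumptions; in the paper the inputs would be representations of real attack prompts.

```python
# Minimal sketch of the sparse-dictionary idea (illustrative assumptions only):
# represent each attack as a vector, learn a small dictionary of "skill" atoms,
# and express each attack as a sparse code over those atoms.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Toy stand-in for embeddings of prompts drawn from many attack papers.
n_attacks, embed_dim, n_skills = 200, 64, 16
true_skills = rng.normal(size=(n_skills, embed_dim))
# Each synthetic attack activates only a few skills (sparse ground truth).
codes = rng.random((n_attacks, n_skills)) * (rng.random((n_attacks, n_skills)) < 0.2)
X = codes @ true_skills + 0.01 * rng.normal(size=(n_attacks, embed_dim))

# Learn a dictionary so each attack is approximated by a sparse combination
# of skill atoms; transform_alpha controls the sparsity penalty.
dl = DictionaryLearning(
    n_components=n_skills,
    transform_algorithm="lasso_lars",
    transform_alpha=0.1,
    max_iter=50,
    random_state=0,
)
sparse_codes = dl.fit_transform(X)

# An "unseen" attack held out from fitting would likewise be explained by
# its sparse code over the already-learned skills (via dl.transform).
sparsity = np.mean(sparse_codes != 0)
print(f"average fraction of active skills per attack: {sparsity:.2f}")
```

The monotonic-coverage finding in the abstract corresponds, in this sketch, to reconstruction error dropping as more atoms are made available to the coder.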
Problem

Research questions and friction points this paper is trying to address.

Defending against novel jailbreak attacks on large language models
Improving robustness to unseen adversarial attacks through skill composition
Addressing limitations of adversarial training for jailbreak generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns sparse dictionary of adversarial skill primitives
Trains on diverse compositions of skill primitives
Expands adversarial skill coverage for robustness
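The second bullet, training on diverse compositions of skill primitives rather than isolated attack instances, can be illustrated with a small data-generation sketch. The skill names and templates below are hypothetical placeholders, not taken from the paper; the point is only the shape of the idea: enumerate skill subsets, then synthesize composite training examples from them.

```python
# Hypothetical sketch of compositional training-data generation:
# sample small subsets of skill primitives and compose them into
# synthetic adversarial templates. Skill names/text are illustrative.
import itertools
import random

random.seed(0)

skills = {
    "roleplay": "Adopt a fictional persona before making the request.",
    "obfuscation": "Rephrase the harmful request indirectly.",
    "payload_splitting": "Split the request across multiple turns.",
    "authority": "Claim an authoritative justification for the request.",
}

def compose_attack(skill_subset):
    """Concatenate skill instructions into one composite attack template."""
    return " ".join(skills[s] for s in skill_subset)

# Enumerate all 2-skill compositions, then sample a training batch from them;
# covering compositions (not just instances) is the framework's key lever.
pairs = list(itertools.combinations(skills, 2))
batch = [compose_attack(p) for p in random.sample(pairs, 3)]
for example in batch:
    print(example)
```

With 4 primitives there are already 6 pairwise compositions, which hints at why expanding skill coverage grows the effective training distribution faster than adding individual attack samples.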