🤖 AI Summary
Existing defenses against fine-tuning-based jailbreak attacks on large language models (LLMs) generalize poorly, particularly to harmful queries disguised by unseen attack templates, and fail to detect such queries early. To address this, we propose MetaDefense, a novel framework built on a dual-stage harm-prediction mechanism that operates both *before* generation (query-level) and *during* generation (partial-response-level). It employs specially designed prompts to elicit the model's self-assessment of the harmfulness of the input query and the partial response, exploiting the observation that disguised harmful queries remain distinguishable in the embedding space, and terminates potentially harmful interactions early. Crucially, MetaDefense requires no prior knowledge of attack templates and is architecture-agnostic, supporting models including LLaMA and Qwen. Experiments demonstrate that MetaDefense significantly outperforms state-of-the-art defenses against both seen and unseen jailbreak attacks while maintaining competitive performance on benign tasks.
📝 Abstract
This paper introduces MetaDefense, a novel framework for defending against fine-tuning-based jailbreak attacks on large language models (LLMs). We observe that existing defense mechanisms fail to generalize to harmful queries disguised by unseen attack templates, even though LLMs are capable of distinguishing disguised harmful queries in the embedding space. Based on these insights, we propose a two-stage defense approach: (i) pre-generation defense, which detects harmful queries before response generation begins, and (ii) mid-generation defense, which monitors partial responses during generation to prevent further harmful content from being output. MetaDefense trains the LLM to predict the harmfulness of both queries and partial responses using specialized prompts, enabling early termination of potentially harmful interactions. Extensive experiments across multiple LLM architectures (LLaMA-2-7B, Qwen-2.5-3B-Instruct, and LLaMA-3.2-3B-Instruct) demonstrate that MetaDefense significantly outperforms existing defense mechanisms, achieving robust defense against harmful queries with seen and unseen attack templates while maintaining competitive performance on benign tasks. Code is available at https://github.com/ws-jiang/MetaDefense.
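The two-stage defense described above can be sketched as a simple generation loop: screen the query before decoding starts, then re-check the growing partial response at intervals and terminate early if it turns harmful. This is a minimal illustrative sketch, not the paper's implementation: `is_harmful`, `generate_tokens`, `HARM_PROMPT`, and the refusal string are all hypothetical stand-ins (in MetaDefense the harmfulness predictions come from the LLM itself, prompted with specialized harm-prediction prompts).

```python
# Hypothetical prompt shape for the model's self-assessment (assumption,
# not the paper's actual prompt).
HARM_PROMPT = "Is the following {kind} harmful? Answer yes or no:\n{text}"

REFUSAL = "I cannot help with that request."


def is_harmful(text: str, kind: str) -> bool:
    # Stand-in classifier. A real system would query the fine-tuned LLM
    # with HARM_PROMPT and read back its yes/no prediction; here we use
    # a trivial keyword check so the sketch is runnable.
    return "build a bomb" in text.lower()


def generate_tokens(query: str):
    # Stand-in for incremental LLM decoding: yields one token at a time.
    for tok in f"Here is a helpful answer to: {query}".split():
        yield tok


def metadefense_generate(query: str, check_every: int = 4) -> str:
    # Stage 1: pre-generation defense -- screen the query itself
    # before any response tokens are produced.
    if is_harmful(query, "query"):
        return REFUSAL

    # Stage 2: mid-generation defense -- monitor the partial response
    # every `check_every` tokens and terminate early if it looks harmful.
    partial = []
    for i, tok in enumerate(generate_tokens(query), start=1):
        partial.append(tok)
        if i % check_every == 0 and is_harmful(" ".join(partial), "partial response"):
            return REFUSAL
    return " ".join(partial)
```

The key design point is that both checks reuse the same prompted harm predictor, so a query that slips past the pre-generation stage in disguise can still be caught once the partial response reveals harmful content.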