Jailbreaking LLMs & VLMs: Mechanisms, Evaluation, and Unified Defense

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of large language models (LLMs) and vision-language models (VLMs) to jailbreaking attacks, which stem from deficiencies in training data, linguistic ambiguity, and generative uncertainty and therefore pose significant security risks. For the first time, the study extends jailbreak research from purely textual to multimodal settings, establishing a unified three-dimensional framework covering attack, defense, and evaluation, and clearly distinguishing jailbreaking from hallucination. The authors propose a cohesive defense principle spanning the perceptual, generative, and parameter layers, introducing techniques such as variant-consistency detection, safe decoding, and adversarial preference alignment. Comprehensive experiments on multimodal safety benchmarks validate the proposed cross-modal collaborative defense strategy, laying the groundwork for automated red-teaming and standardized safety evaluation.
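
To make the perception-layer idea concrete, here is a minimal sketch of variant-consistency detection under one simple assumption: adversarial jailbreak prompts tend to be brittle, so a prompt whose refusal behavior flips under small perturbations is suspicious. This illustrates the general intuition only, not the paper's method; `query_model`, `perturb`, and the refusal keyword list are all hypothetical stand-ins.

```python
# Hypothetical sketch of variant-consistency jailbreak detection.
# Not the paper's algorithm: `query_model` and `perturb` are stand-ins.
import random

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am sorry")

def is_refusal(response: str) -> bool:
    """Crude keyword check for a safety refusal."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def perturb(prompt: str) -> str:
    """Cheap character-level perturbation standing in for a real paraphraser."""
    if not prompt:
        return prompt
    chars = list(prompt)
    chars[random.randrange(len(chars))] = " "
    return "".join(chars)

def variant_consistency_score(prompt: str, query_model, n_variants: int = 8) -> float:
    """Fraction of perturbed variants whose refusal behavior disagrees with
    the original prompt; high disagreement suggests a brittle, possibly
    adversarial prompt."""
    base_refused = is_refusal(query_model(prompt))
    flips = sum(
        is_refusal(query_model(perturb(prompt))) != base_refused
        for _ in range(n_variants)
    )
    return flips / n_variants
```

A deployment would flag prompts whose score exceeds a tuned threshold before they reach the generation layer.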

📝 Abstract
This paper provides a systematic survey of jailbreak attacks and defenses on Large Language Models (LLMs) and Vision-Language Models (VLMs), emphasizing that jailbreak vulnerabilities stem from structural factors such as incomplete training data, linguistic ambiguity, and generative uncertainty. It further differentiates between hallucinations and jailbreaks in terms of intent and triggering mechanisms. We propose a three-dimensional survey framework: (1) the attack dimension, including template/encoding-based attacks, in-context learning manipulation, reinforcement/adversarial learning, LLM-assisted and fine-tuned attacks, as well as prompt- and image-level perturbations and agent-based transfer in VLMs; (2) the defense dimension, encompassing prompt-level obfuscation, output evaluation, and model-level alignment or fine-tuning; and (3) the evaluation dimension, covering metrics such as Attack Success Rate (ASR), toxicity score, query/time cost, and multimodal Clean Accuracy and Attribute Success Rate. Compared with prior works, this survey spans the full spectrum from text-only to multimodal settings, consolidating shared mechanisms and proposing unified defense principles: variant-consistency and gradient-sensitivity detection at the perception layer, safety-aware decoding and output review at the generation layer, and adversarially augmented preference alignment at the parameter layer. Additionally, we summarize existing multimodal safety benchmarks and discuss future directions, including automated red teaming, cross-modal collaborative defense, and standardized evaluation.
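
As a concrete reference for the evaluation dimension, Attack Success Rate is simply the fraction of attack prompts whose responses a judge labels harmful. The sketch below assumes a pluggable `is_harmful` judge (keyword matcher, toxicity classifier, or LLM judge); the toy judge in the usage line is purely illustrative.

```python
# Illustrative ASR computation; `is_harmful` is a stand-in for whatever
# judge (keyword list, classifier, or LLM judge) a benchmark actually uses.
from typing import Callable

def attack_success_rate(responses: list[str],
                        is_harmful: Callable[[str], bool]) -> float:
    """ASR = (# responses judged harmful) / (# attack prompts attempted)."""
    if not responses:
        return 0.0
    return sum(1 for r in responses if is_harmful(r)) / len(responses)

# Toy usage: a naive keyword judge over two sampled responses.
judge = lambda r: "step 1" in r.lower() and "cannot help" not in r.lower()
print(attack_success_rate(["Step 1: obtain...", "I cannot help with that."], judge))  # 0.5
```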
Problem

Research questions and friction points this paper is trying to address.

jailbreak
Large Language Models
Vision-Language Models
adversarial attacks
AI safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

jailbreak attacks
multimodal defense
unified safety framework
adversarial alignment
gradient-sensitivity detection
Zejian Chen
School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing, China
Chaozhuo Li
Microsoft Research Asia
Chao Li
School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing, China
Xi Zhang
Professor, Beijing University of Posts and Telecommunications
Data Mining · Computer Architecture · Trustworthy AI
Litian Zhang
Beihang University
Yiming He
Huazhong University of Science and Technology
Fault diagnosis · Industrial motors · Deep learning