JailbreakLens: Visual Analysis of Jailbreak Attacks Against Large Language Models

📅 2024-04-12
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
To address the lack of systematic analysis tools for jailbreak attacks against large language models (LLMs), this paper introduces JailbreakLens—the first collaborative analysis framework integrating LLM-based reasoning with multidimensional visualization. It enables automated evaluation of jailbreak prompts, component-level semantic decomposition (e.g., intent, obfuscation, and trigger mechanisms), and interactive prompt refinement, supported by heatmaps, treemaps, and temporal trajectory visualizations. Its key innovations include LLM-assisted feature parsing and a human-in-the-loop verification loop. Evaluated through case studies, technical benchmarks, and expert interviews, JailbreakLens significantly improves jailbreak pattern identification accuracy (+32.7%) and reduces average model-vulnerability localization time by 58%. The framework establishes a new paradigm for interpretable, reproducible LLM security assessment.

📝 Abstract
The proliferation of large language models (LLMs) has underscored concerns regarding their security vulnerabilities, notably against jailbreak attacks, where adversaries design jailbreak prompts to circumvent safety mechanisms for potential misuse. Addressing these concerns necessitates a comprehensive analysis of jailbreak prompts to evaluate LLMs' defensive capabilities and identify potential weaknesses. However, the complexity of evaluating jailbreak performance and understanding prompt characteristics makes this analysis laborious. We collaborate with domain experts to characterize problems and propose an LLM-assisted framework to streamline the analysis process. It provides automatic jailbreak assessment to facilitate performance evaluation and support analysis of components and keywords in prompts. Based on the framework, we design JailbreakLens, a visual analysis system that enables users to explore the jailbreak performance against the target model, conduct multi-level analysis of prompt characteristics, and refine prompt instances to verify findings. Through a case study, technical evaluations, and expert interviews, we demonstrate our system's effectiveness in helping users evaluate model security and identify model weaknesses.
Problem

Research questions and friction points this paper is trying to address.

Analyzing jailbreak prompts to assess LLM security vulnerabilities
Streamlining evaluation of jailbreak performance and prompt characteristics
Identifying model weaknesses through visual and multi-level analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-assisted framework for jailbreak analysis
Automatic assessment of jailbreak prompts
Visual system for multi-level prompt analysis
Yingchaojie Feng
Zhejiang University
Visual Analytics · Natural Language Processing · Human Computer Interaction
Zhizhang Chen
State Key Lab of CAD&CG, Zhejiang University
Zhining Kang
State Key Lab of CAD&CG, Zhejiang University
Sijia Wang
State Key Lab of CAD&CG, Zhejiang University
Minfeng Zhu
Zhejiang University
Visualisation · Math
Wei Zhang
State Key Lab of CAD&CG, Zhejiang University
Wei Chen
State Key Lab of CAD&CG, Zhejiang University; Laboratory of Art and Archaeology Image (Zhejiang University), Ministry of Education