xJailbreak: Representation Space Guided Reinforcement Learning for Interpretable LLM Jailbreaking

📅 2025-01-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Addressing the challenges of sparse reward signals and poor controllability in black-box large language model (LLM) jailbreaking attacks, this paper proposes xJailbreak, a representation-space-guided reinforcement learning (RL) framework. The method optimizes prompt rewriting under semantic embedding-similarity constraints to preserve the original user intent while increasing jailbreak success rates. It also introduces a multi-granularity evaluation framework that combines keyword matching, intent-consistency modeling, and answer verification, making attacks more interpretable, controllable, and reproducible. Experiments on mainstream models, including Qwen2.5-7B, Llama3.1-8B, and GPT-4o, show state-of-the-art jailbreak success rates, outperforming both genetic-algorithm-based and prior RL-based baselines.
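The summary's multi-granularity evaluation begins with a keyword-matching stage that flags obvious model refusals. A minimal sketch of that first stage, assuming a hypothetical refusal-marker list (the paper's full evaluator additionally checks intent consistency and answer validity):

```python
# Hypothetical refusal markers; the actual keyword set used by the
# paper is not reproduced here.
REFUSAL_MARKERS = ("i cannot", "i can't", "i'm sorry", "as an ai")

def keyword_check(response: str) -> bool:
    """Return True if the response shows no obvious refusal.

    This is only the first, coarsest of the three evaluation stages;
    a passing response would still need intent-consistency and
    answer-verification checks before being counted as a jailbreak.
    """
    text = response.lower()
    return not any(marker in text for marker in REFUSAL_MARKERS)
```

A keyword check alone over-counts successes (a model can comply superficially while giving a useless answer), which is why the paper layers intent matching and answer validation on top of it.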

📝 Abstract
Safety alignment mechanisms are essential for preventing large language models (LLMs) from generating harmful information or unethical content. However, cleverly crafted prompts can bypass these safety measures without accessing the model's internal parameters, a phenomenon known as black-box jailbreak. Existing heuristic black-box attack methods, such as genetic algorithms, suffer from limited effectiveness due to their inherent randomness, while recent reinforcement learning (RL) based methods often lack robust and informative reward signals. To address these challenges, we propose a novel black-box jailbreak method leveraging RL, which optimizes prompt generation by analyzing the embedding proximity between benign and malicious prompts. This approach ensures that the rewritten prompts closely align with the intent of the original prompts while enhancing the attack's effectiveness. Furthermore, we introduce a comprehensive jailbreak evaluation framework incorporating keyword matching, intent matching, and answer validation to provide a more rigorous and holistic assessment of jailbreak success. Experimental results show the superiority of our approach, achieving state-of-the-art (SOTA) performance on several prominent open- and closed-source LLMs, including Qwen2.5-7B-Instruct, Llama3.1-8B-Instruct, and GPT-4o-0806. Our method sets a new benchmark in jailbreak attack effectiveness, highlighting potential vulnerabilities in LLMs. The codebase for this work is available at https://github.com/Aegis1863/xJailbreak.
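The abstract's core reward idea, scoring a rewritten prompt by its embedding proximity to the original so the RL policy is rewarded for preserving intent, can be sketched with a toy bag-of-words embedding. The paper uses a representation-space model's embeddings rather than this stand-in; the `embed` function below is purely illustrative:

```python
import math
from collections import Counter

def embed(text: str) -> dict:
    # Toy unit-normalized bag-of-words vector. The paper derives
    # embeddings from a language model's representation space; this
    # stand-in only illustrates the shape of the computation.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(v * v for v in counts.values()))
    return {w: v / norm for w, v in counts.items()}

def cosine(a: dict, b: dict) -> float:
    # Cosine similarity of two sparse unit vectors.
    return sum(a.get(w, 0.0) * v for w, v in b.items())

def intent_reward(original_prompt: str, rewritten_prompt: str) -> float:
    # Reward signal for the RL rewriter: high when the rewrite stays
    # close to the original prompt in embedding space (intent kept),
    # low when it drifts to unrelated content.
    return cosine(embed(original_prompt), embed(rewritten_prompt))
```

In the actual framework this similarity term is one component of the reward; combining it with an attack-success signal is what steers the policy toward rewrites that are both on-intent and effective.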
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Adversarial Attacks
Safety Systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Prompt Generation
LLM Jailbreaking
Sunbowen Lee
Shenzhen Key Laboratory for High Performance Data Mining, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; Shenzhen University of Advanced Technology; WUST
Shiwen Ni
Shenzhen Key Laboratory for High Performance Data Mining, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
Chi Wei
Shenzhen Key Laboratory for High Performance Data Mining, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
Shuaimin Li
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
Natural language processing
Tabular data visualization
Liyang Fan
Shenzhen Key Laboratory for High Performance Data Mining, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; Shenzhen University of Advanced Technology
A. Argha
School of Biomedical Engineering, UNSW Sydney
Hamid Alinejad-Rokny
ARC DECRA & UNSW Scientia Fellow, Head of BioMedical Machine Learning Lab
BioMedical Machine Learning
Machine Learning for Health
Medical Artificial Intelligence
LLMs
Ruifeng Xu
Professor, Harbin Institute of Technology at Shenzhen
Natural Language Processing
Affective Computing
Argumentation Mining
LLMs
Bioinformatics
Yicheng Gong
Shenzhen University of Advanced Technology
Min Yang
Bytedance
Vision Language Model
Computer Vision
Video Understanding