Activation-Guided Local Editing for Jailbreaking Attacks

📅 2025-08-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing jailbreaking methods face two key bottlenecks: token-level attacks yield incoherent inputs with poor cross-model transferability, while prompt-level attacks rely heavily on manual engineering and lack scalability. This paper proposes a two-stage implicit representation editing framework. In the first stage, malicious queries are semantically rewritten with scenario-aware contextual prompting to improve coherence and plausibility. In the second stage, the target model's hidden states guide token-level local edits that steer the input's representation from harmful semantic subspaces toward benign ones. The approach combines fine-grained token-level controllability with high-level prompt semantics, preserving input readability while substantially improving transferability. Evaluated across multiple black-box large language models, the method achieves state-of-the-art attack success rates, outperforming the strongest baseline by up to 37.74%, and remains effective against mainstream safety-alignment defenses.
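The stage-2 idea described above (using hidden states to pick token-level edits that make the input look benign internally) can be sketched as a toy example. This is a minimal illustration under assumptions, not the paper's actual implementation: the difference-of-means "harmfulness direction", the projection-based scoring, and all function names and data here are hypothetical.

```python
import numpy as np

def harmful_direction(h_harmful, h_benign):
    """Estimate a 'harmfulness' direction as the unit-normalized difference
    of mean hidden states over harmful vs. benign prompts (an assumption;
    the paper's exact subspace mapping may differ)."""
    d = h_harmful.mean(axis=0) - h_benign.mean(axis=0)
    return d / np.linalg.norm(d)

def score(h, direction):
    """Projection of a prompt's hidden state onto the harmful direction;
    lower means the input looks more benign to the model internally."""
    return float(h @ direction)

def pick_edit(h_candidates, direction):
    """Among candidate single-token edits (each represented by the hidden
    state of the edited input), pick the least harmful-looking one."""
    scores = [score(h, direction) for h in h_candidates]
    return int(np.argmin(scores)), scores

# Toy data standing in for real model hidden states.
rng = np.random.default_rng(0)
dim = 16
h_harm = rng.normal(1.0, 0.1, (8, dim))   # states for harmful prompts
h_ben = rng.normal(0.0, 0.1, (8, dim))    # states for benign prompts
d = harmful_direction(h_harm, h_ben)

# Two candidate edits: one still near the harmful cluster, one near benign.
candidates = [rng.normal(0.8, 0.1, dim), rng.normal(0.2, 0.1, dim)]
best, scores = pick_edit(candidates, d)   # selects the benign-looking edit
```

In this toy setup the second candidate sits closer to the benign cluster, so the selection rule prefers it; in the real attack the candidates would be actual token substitutions scored through the target model's forward pass.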

📝 Abstract
Jailbreaking is an essential adversarial technique for red-teaming large language models to uncover and patch security flaws. However, existing jailbreak methods face significant drawbacks. Token-level jailbreak attacks often produce incoherent or unreadable inputs and exhibit poor transferability, while prompt-level attacks lack scalability and rely heavily on manual effort and human ingenuity. We propose a concise and effective two-stage framework that combines the advantages of these approaches. The first stage performs a scenario-based generation of context and rephrases the original malicious query to obscure its harmful intent. The second stage then utilizes information from the model's hidden states to guide fine-grained edits, effectively steering the model's internal representation of the input from a malicious toward a benign one. Extensive experiments demonstrate that this method achieves state-of-the-art Attack Success Rate, with gains of up to 37.74% over the strongest baseline, and exhibits excellent transferability to black-box models. Our analysis further demonstrates that AGILE maintains substantial effectiveness against prominent defense mechanisms, highlighting the limitations of current safeguards and providing valuable insights for future defense development. Our code is available at https://github.com/yunsaijc/AGILE.
Problem

Research questions and friction points this paper is trying to address.

Improving jailbreak attack coherence and transferability
Reducing manual effort in prompt-level jailbreak attacks
Enhancing effectiveness against current defense mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage framework combining scenario-based generation and hidden-state guidance
Utilizes model's hidden states for fine-grained input editing
Achieves high attack success rate and transferability
Jiecong Wang
Beihang University
Haoran Li
The Hong Kong University of Science and Technology
Hao Peng
Beihang University
Ziqian Zeng
Associate Professor at South China University of Technology
Natural Language Processing
Zihao Wang
Nanyang Technological University
Haohua Du
Beihang University
Zhengtao Yu
Kunming University of Science and Technology