🤖 AI Summary
Existing jailbreaking methods face two key bottlenecks: token-level attacks yield incoherent inputs and poor cross-model transferability, while prompt-level attacks rely heavily on manual engineering and lack scalability. This paper proposes a two-stage implicit representation editing framework. In the first stage, malicious queries are semantically rewritten using scenario-aware contextual prompting to enhance coherence and plausibility. In the second stage, the target model's hidden states are leveraged to guide token-level local edits, explicitly mapping harmful semantic subspaces onto benign ones. The approach integrates fine-grained token controllability with high-level prompt semantics, preserving input readability while substantially improving transferability. Evaluated across multiple black-box large language models, the method achieves state-of-the-art attack success rates, outperforming the strongest baseline by 37.74%, and remains effective against mainstream safety alignment defenses.
📝 Abstract
Jailbreaking is an essential adversarial technique for red-teaming large language models (LLMs) to uncover and patch security flaws. However, existing jailbreak methods face significant drawbacks. Token-level jailbreak attacks often produce incoherent or unreadable inputs and exhibit poor transferability, while prompt-level attacks lack scalability and rely heavily on manual effort and human ingenuity. We propose AGILE, a concise and effective two-stage framework that combines the advantages of these approaches. The first stage performs scenario-based context generation and rephrases the original malicious query to obscure its harmful intent. The second stage then utilizes information from the model's hidden states to guide fine-grained edits, steering the model's internal representation of the input from malicious toward benign. Extensive experiments demonstrate that AGILE achieves a state-of-the-art Attack Success Rate, with gains of up to 37.74% over the strongest baseline, and exhibits excellent transferability to black-box models. Our analysis further shows that AGILE maintains substantial effectiveness against prominent defense mechanisms, highlighting the limitations of current safeguards and providing valuable insights for future defense development. Our code is available at https://github.com/yunsaijc/AGILE.
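The second stage described above can be illustrated with a toy sketch. Everything here is a stand-in assumption, not the paper's implementation: instead of reading hidden states from a real target LLM, a fixed random word embedding plays the role of the model's internal representation, and a greedy search performs token-level local edits that move the pooled representation toward a "benign" direction.

```python
import numpy as np

# Toy stand-ins (NOT from the paper): a tiny vocabulary with fixed random
# 8-dim embeddings acts as a proxy for the target model's hidden states.
rng = np.random.default_rng(0)
VOCAB = ["explain", "describe", "story", "fictional", "chemistry",
         "weapon", "bomb", "harmful", "safety", "research"]
EMB = {w: rng.normal(size=8) for w in VOCAB}

def hidden_state(tokens):
    """Mean-pool token embeddings -- a proxy for a pooled hidden state."""
    return np.mean([EMB[t] for t in tokens], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def edit_toward_benign(tokens, benign_direction, candidates, steps=3):
    """Greedily replace at most one token per step so the pooled
    representation aligns better with the benign direction
    (fine-grained, token-level local edits)."""
    tokens = list(tokens)
    for _ in range(steps):
        best_score = cosine(hidden_state(tokens), benign_direction)
        best_edit = None
        for i in range(len(tokens)):
            for cand in candidates:
                trial = tokens[:i] + [cand] + tokens[i + 1:]
                score = cosine(hidden_state(trial), benign_direction)
                if score > best_score:
                    best_score, best_edit = score, (i, cand)
        if best_edit is None:   # no single edit improves alignment
            break
        tokens[best_edit[0]] = best_edit[1]
    return tokens

benign_dir = hidden_state(["safety", "research", "fictional"])
query = ["describe", "weapon", "bomb"]
edited = edit_toward_benign(query, benign_dir, candidates=VOCAB)
print(edited)
```

The greedy loop only ever accepts strictly improving single-token edits, so the edited query's representation is at least as aligned with the benign direction as the original; the real method would additionally constrain edits to preserve readability and the rewritten query's semantics.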