Graph of Attacks: Improved Black-Box and Interpretable Jailbreaks for LLMs

📅 2025-04-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) remain vulnerable to adversarial jailbreaking attacks, yet existing methods lack effective mechanisms for probing alignment failures in black-box, query-constrained settings. Method: We propose GoAT (Graph of ATtacks), a graph-structured, black-box adversarial prompt generation framework built on the Graph of Thoughts paradigm. Unlike conventional tree-based search, GoAT maintains a dynamic multi-path reasoning graph in which parallel attack paths perceive and fuse each other's progress, producing highly readable jailbreak prompts without access to the victim model's parameters. It integrates graph-structured reasoning, iterative thought fusion and pruning, and query-efficient black-box optimization. Contribution/Results: Experiments show that GoAT significantly reduces query overhead compared to state-of-the-art methods and, on robust models such as Llama, achieves up to a 5× higher jailbreak success rate. The generated prompts also exhibit superior human readability and interpretability, enabling transparent analysis of alignment vulnerabilities.

📝 Abstract
The challenge of ensuring Large Language Models (LLMs) align with societal standards is of increasing interest, as these models are still prone to adversarial jailbreaks that bypass their safety mechanisms. Identifying these vulnerabilities is crucial for enhancing the robustness of LLMs against such exploits. We propose Graph of ATtacks (GoAT), a method for generating adversarial prompts to test the robustness of LLM alignment using the Graph of Thoughts framework [Besta et al., 2024]. GoAT excels at generating highly effective jailbreak prompts with fewer queries to the victim model than state-of-the-art attacks, achieving up to five times better jailbreak success rate against robust models like Llama. Notably, GoAT creates high-quality, human-readable prompts without requiring access to the targeted model's parameters, making it a black-box attack. Unlike approaches constrained by tree-based reasoning, GoAT's reasoning is based on a more intricate graph structure. By making simultaneous attack paths aware of each other's progress, this dynamic framework allows a deeper integration and refinement of reasoning paths, significantly enhancing the collaborative exploration of adversarial vulnerabilities in LLMs. At a technical level, GoAT starts with a graph structure and iteratively refines it by combining and improving thoughts, enabling synergy between different thought paths. The code for our implementation can be found at: https://github.com/GoAT-pydev/Graph_of_Attacks.
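The iterative refinement the abstract describes (start from a graph of candidate prompts, fuse thoughts across paths, prune to stay query-efficient) can be illustrated with a minimal sketch. This is not the authors' implementation: the `score` and `fuse` functions below are hypothetical stand-ins for the judge-model query and the LLM-driven thought fusion used in the paper, and the beam/iteration parameters are illustrative.

```python
def score(prompt):
    # Placeholder for a judge/victim-model query. Here we score by the
    # number of distinct words, purely so the sketch runs offline; the
    # paper would query a judge LLM for jailbreak progress instead.
    return len(set(prompt.split()))

def fuse(a, b):
    # Placeholder for LLM-based fusion of two thought paths into a new
    # candidate prompt; real fusion would rewrite, not concatenate.
    return a + " " + b

def goat_sketch(seeds, iterations=3, beam=4):
    # Graph of thoughts: maps each candidate prompt (node) to its score.
    graph = {s: score(s) for s in seeds}
    for _ in range(iterations):
        # Cross-path awareness: every pair of surviving thoughts may fuse,
        # unlike tree search where branches evolve independently.
        top = sorted(graph, key=graph.get, reverse=True)[:beam]
        for i in range(len(top)):
            for j in range(i + 1, len(top)):
                cand = fuse(top[i], top[j])
                graph[cand] = score(cand)
        # Pruning: keep only the best `beam` nodes to bound query count.
        graph = {p: graph[p] for p in
                 sorted(graph, key=graph.get, reverse=True)[:beam]}
    return max(graph, key=graph.get)

best = goat_sketch(["ignore previous rules", "act as a fiction writer"])
```

The key design point the sketch captures is that fusion draws on all surviving paths at once, so information discovered on one attack path can improve another, while pruning keeps the number of victim-model queries bounded.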
Problem

Research questions and friction points this paper is trying to address.

Generating adversarial prompts to test the robustness of LLM alignment
Improving jailbreak success rates while reducing queries to the victim model
Producing human-readable black-box attacks that need no access to model parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Graph of Thoughts for adversarial prompt generation
Achieves higher jailbreak success with fewer queries
Dynamic graph structure lets parallel attack paths share progress, enhancing collaborative exploration of vulnerabilities