🤖 AI Summary
To address the vulnerability of large language models (LLMs) to progressive jailbreaking attacks in multi-turn dialogues, this paper proposes G-Guard, a novel defense framework. Methodologically, G-Guard introduces attention-aware graph neural networks (GNNs) for multi-turn jailbreak detection—constructing cross-turn entity graphs to explicitly model the evolution of trigger keywords and integrating attention mechanisms for history-aware query matching. Crucially, it innovatively incorporates single-turn harmful query retrieval as graph-enhanced nodes to strengthen discriminative capability. Extensive experiments across multiple benchmark datasets demonstrate that G-Guard significantly outperforms existing baselines, achieving state-of-the-art performance in accuracy, recall, and F1-score. Moreover, its graph-based architecture provides inherent interpretability and scalability, establishing a new, principled paradigm for robust, explainable, and extensible multi-turn jailbreak defense.
📝 Abstract
Large Language Models (LLMs) have gained widespread popularity and are increasingly integrated into various applications. However, their capabilities can be exploited for both benign and harmful purposes. Despite rigorous training and fine-tuning for safety, LLMs remain vulnerable to jailbreak attacks. Recently, multi-turn attacks have emerged, exacerbating the issue. Unlike single-turn attacks, multi-turn attacks gradually escalate the dialogue, making them more difficult to detect and mitigate, even after they are identified.
In this study, we propose G-Guard, an innovative attention-aware GNN-based input classifier designed to defend against multi-turn jailbreak attacks on LLMs. G-Guard constructs an entity graph for multi-turn queries, explicitly capturing relationships between harmful keywords and queries even when those keywords appear only in previous queries. Additionally, we introduce an attention-aware augmentation mechanism that retrieves the most similar single-turn query based on the multi-turn conversation. This retrieved query is treated as a labeled node in the graph, enhancing the ability of GNN to classify whether the current query is harmful. Evaluation results demonstrate that G-Guard outperforms all baselines across all datasets and evaluation metrics.