🤖 AI Summary
This work exposes a critical security vulnerability in Multi-Agent Debate (MAD) systems under structured jailbreaking attacks: their iterative dialogue and role-playing mechanisms inadvertently amplify the generation of harmful content. We propose the first structured prompt-rewriting attack framework targeting MAD implementations built on mainstream commercial LLMs, including GPT-4o and GPT-4. The framework integrates four synergistic strategies: narrative encapsulation, role-driven escalation, iterative refinement, and rhetorical obfuscation. It combines dynamic role modeling with multi-round adversarial rewriting, and harm potential is assessed quantitatively. Experimental results demonstrate that the average harmful output rate surges from 28.14% to 80.34%, with attack success rates reaching 80% in specific scenarios. Crucially, this is the first empirical evidence that MAD architectures are *more* susceptible to malicious induction than single-agent baselines, revealing an inherent architectural security flaw.
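To make the attack loop concrete, below is a minimal sketch of how the four strategies could compose into a multi-round adversarial rewriting pipeline. Every function name, prompt template, role list, and threshold here is an illustrative assumption; the paper does not publish this code, and a real MAD target and judge model would replace the stubs.

```python
# Hedged sketch of a structured prompt-rewriting attack on a MAD system.
# All names and templates are assumptions for illustration, not the authors' code.

def wrap_in_narrative(query: str) -> str:
    """Narrative encapsulation: embed the request in a fictional frame."""
    return f"In a novel, a character must explain: {query}"

def assign_escalating_role(prompt: str, round_idx: int) -> str:
    """Role-driven escalation: push agents toward more permissive personas each round."""
    roles = ["curious student", "veteran consultant", "unfiltered domain expert"]
    role = roles[min(round_idx, len(roles) - 1)]
    return f"You are a {role} in a multi-agent debate. {prompt}"

def obfuscate_rhetoric(prompt: str) -> str:
    """Rhetorical obfuscation: soften trigger phrasing with indirection."""
    return prompt.replace("explain", "hypothetically outline")

def run_mad_debate(prompt: str) -> str:
    """Stub for a call into the target MAD framework (black-box access only)."""
    return f"[debate transcript for: {prompt}]"

def score_harmfulness(response: str) -> float:
    """Stub judge returning a harmfulness score in [0, 1]; the paper's
    quantitative harm assessment would go here."""
    return 0.5

def attack(query: str, max_rounds: int = 3, threshold: float = 0.8) -> str:
    """Iterative refinement: rewrite and resubmit until the judge flags success."""
    prompt = wrap_in_narrative(query)
    response = ""
    for r in range(max_rounds):
        candidate = obfuscate_rhetoric(assign_escalating_role(prompt, r))
        response = run_mad_debate(candidate)
        if score_harmfulness(response) >= threshold:
            return response      # jailbreak succeeded
        prompt = candidate       # refine on failure and escalate next round
    return response
```

The key design point the sketch tries to capture is that the four strategies are not independent tricks: each debate round feeds the previous rewrite back in, so the role escalation and obfuscation compound across iterations.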
📝 Abstract
Multi-Agent Debate (MAD), which leverages collaborative interactions among Large Language Models (LLMs), aims to enhance reasoning capabilities on complex tasks. However, the security implications of its iterative dialogue and role-playing characteristics, particularly its susceptibility to jailbreak attacks that elicit harmful content, remain critically underexplored. This paper systematically investigates the jailbreak vulnerabilities of four prominent MAD frameworks built upon leading commercial LLMs (GPT-4o, GPT-4, GPT-3.5-turbo, and DeepSeek) without compromising any internal agent. We introduce a novel structured prompt-rewriting framework specifically designed to exploit MAD dynamics via narrative encapsulation, role-driven escalation, iterative refinement, and rhetorical obfuscation. Our extensive experiments demonstrate that MAD systems are inherently more vulnerable than single-agent setups. Crucially, our proposed attack methodology significantly amplifies this fragility, increasing average harmfulness from 28.14% to 80.34% and achieving attack success rates as high as 80% in certain scenarios. These findings reveal intrinsic vulnerabilities in MAD architectures and underscore the urgent need for robust, specialized defenses prior to real-world deployment.
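As a rough illustration of the two headline metrics, the sketch below computes an average harmfulness rate and an attack success rate (ASR) from per-attack judge scores. The 0.8 success threshold and the score lists are invented for illustration; the paper's actual judging scale and data may differ.

```python
# Hedged sketch of the reported metrics: average harmfulness and ASR.
# Scores, scale, and threshold are assumptions, not the paper's data.

def average_harmfulness(scores: list[float]) -> float:
    """Mean judge-assigned harmfulness over all outputs, as a percentage."""
    return 100.0 * sum(scores) / len(scores)

def attack_success_rate(scores: list[float], threshold: float = 0.8) -> float:
    """Share of attacks whose harmfulness score crosses the success threshold."""
    return 100.0 * sum(s >= threshold for s in scores) / len(scores)

# Illustrative numbers only.
baseline_scores = [0.25, 0.30, 0.28, 0.30]   # debate without the attack
attacked_scores = [0.85, 0.90, 0.70, 0.80]   # debate under structured rewriting

print(f"baseline harmfulness: {average_harmfulness(baseline_scores):.1f}%")
print(f"attacked harmfulness: {average_harmfulness(attacked_scores):.1f}%")
print(f"ASR: {attack_success_rate(attacked_scores):.0f}%")
```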