TurboFuzzLLM: Turbocharging Mutation-based Fuzzing for Effectively Jailbreaking Large Language Models in Practice

📅 2025-02-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of evaluating large language models’ (LLMs) robustness against adversarial prompts, specifically focusing on automated discovery of jailbreaking templates. Methodologically, it introduces a mutation-based fuzzing framework that integrates functionality-oriented mutation strategies, black-box prompt engineering, and attack success rate (ASR)-driven template selection and evolution—overcoming limitations of conventional template reuse. The approach enables efficient, scalable, and highly generalizable generation of jailbreaking templates. Experiments demonstrate ≥95% ASR across state-of-the-art models including GPT-4o and GPT-4 Turbo, with significantly improved generalization to unseen harmful queries. This work establishes an automated, reproducible benchmark tool for LLM security evaluation and provides actionable insights for developing robust defense mechanisms.

Technology Category

Application Category

📝 Abstract
Jailbreaking large-language models (LLMs) involves testing their robustness against adversarial prompts and evaluating their ability to withstand prompt attacks that could elicit unauthorized or malicious responses. In this paper, we present TurboFuzzLLM, a mutation-based fuzzing technique for efficiently finding a collection of effective jailbreaking templates that, when combined with harmful questions, can lead a target LLM to produce harmful responses through black-box access via user prompts. We describe the limitations of directly applying existing template-based attacking techniques in practice, and present functional and efficiency-focused upgrades we added to mutation-based fuzzing to generate effective jailbreaking templates automatically. TurboFuzzLLM achieves $geq$ 95% attack success rates (ASR) on public datasets for leading LLMs (including GPT-4o &GPT-4 Turbo), shows impressive generalizability to unseen harmful questions, and helps in improving model defenses to prompt attacks.
Problem

Research questions and friction points this paper is trying to address.

Enhancing mutation-based fuzzing for LLM jailbreaking
Testing LLM robustness against adversarial prompts
Automating effective jailbreaking template generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mutation-based fuzzing technique
Automated jailbreaking template generation
High attack success rates
🔎 Similar Papers
No similar papers found.