Automating Deception: Scalable Multi-Turn LLM Jailbreaks

📅 2025-11-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the persistent threat that multi-turn jailbreaking attacks, particularly those leveraging psychological manipulation such as the foot-in-the-door effect, pose to the safety alignment of large language models (LLMs). We introduce an automated, psychologically grounded benchmark for generating multi-turn adversarial dialogues. Our method integrates psychological behavior modeling with templated dialogue orchestration, enabling historical context injection and cross-model comparative evaluation, and efficiently constructs 1,500 reproducible malicious dialogue scenarios. Key contributions include: (1) overcoming manual annotation bottlenecks to enable scalable, reproducible adversarial evaluation; and (2) systematically revealing pronounced disparities in contextual safety across leading models: GPT-series models show attack-success-rate increases of up to 32 percentage points when conversational history is present, Gemini 2.5 Flash is nearly immune, and Claude 3 Haiku is robust but retains residual vulnerabilities, highlighting critical architectural differences in narrative-level defense mechanisms.

📝 Abstract
Multi-turn conversational attacks pose a persistent threat to Large Language Models (LLMs) by exploiting psychological principles such as Foot-in-the-Door (FITD), in which a small initial request paves the way for a more significant one, to bypass safety alignment. Progress in defending against these attacks is hindered by a reliance on manual, hard-to-scale dataset creation. This paper introduces a novel, automated pipeline for generating large-scale, psychologically grounded multi-turn jailbreak datasets. We systematically operationalize FITD techniques into reproducible templates, creating a benchmark of 1,500 scenarios across illegal activities and offensive content. We evaluate seven models from three major LLM families under both multi-turn (with history) and single-turn (without history) conditions. Our results reveal stark differences in contextual robustness: models in the GPT family demonstrate a significant vulnerability to conversational history, with Attack Success Rates (ASR) increasing by as much as 32 percentage points. In contrast, Google's Gemini 2.5 Flash exhibits exceptional resilience, proving nearly immune to these attacks, while Anthropic's Claude 3 Haiku shows strong but imperfect resistance. These findings highlight a critical divergence in how current safety architectures handle conversational context and underscore the need for defenses that can resist narrative-based manipulation.
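
As a rough illustration of the two evaluation conditions described in the abstract, the sketch below computes Attack Success Rate with and without conversational history and reports the gap in percentage points (e.g., the "up to 32 pp" reported for GPT-family models). The helper names (`query_model`, `is_harmful`), the message format, and the scoring logic are assumptions for illustration, not the paper's actual harness.

```python
# Minimal sketch of the two evaluation conditions, assuming each scenario is an
# ordered list of chat messages ending in the final (harmful) request.
from typing import Callable, Dict, List

Message = Dict[str, str]  # {"role": "user" | "assistant", "content": "..."}

def attack_success_rate(
    scenarios: List[List[Message]],
    query_model: Callable[[List[Message]], str],   # assumed chat-API wrapper
    is_harmful: Callable[[str], bool],              # assumed judge/classifier
    with_history: bool,
) -> float:
    """Fraction of scenarios whose final request elicits a harmful response."""
    successes = 0
    for dialogue in scenarios:
        # Multi-turn condition: send the full FITD build-up.
        # Single-turn condition: send only the final request, stripped of history.
        prompt = dialogue if with_history else [dialogue[-1]]
        successes += int(is_harmful(query_model(prompt)))
    return successes / len(scenarios)

def asr_gap_pp(scenarios, query_model, is_harmful) -> float:
    """Contextual vulnerability: ASR(with history) - ASR(without), in percentage points."""
    multi = attack_success_rate(scenarios, query_model, is_harmful, with_history=True)
    single = attack_success_rate(scenarios, query_model, is_harmful, with_history=False)
    return 100.0 * (multi - single)
```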
Problem

Research questions and friction points this paper is trying to address.

Automating psychological attacks to bypass LLM safety alignments
Creating scalable datasets for multi-turn jailbreak evaluation
Assessing contextual vulnerability differences across major LLM families
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated pipeline generates large-scale jailbreak datasets
Systematically operationalizes FITD techniques into reproducible templates (a schema sketch follows this list)
Benchmark evaluates models under multi-turn attack conditions
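
A minimal sketch of what one reproducible scenario record might look like, assuming each FITD dialogue is stored as an ordered list of turn templates with fillable slots; the field names and the `render` helper are illustrative, not taken from the paper's released format.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Scenario:
    scenario_id: str            # stable identifier so runs are reproducible
    category: str               # e.g. "illegal_activities" or "offensive_content"
    turn_templates: List[str]   # ordered escalation steps with {slot}-style placeholders

    def render(self, **slots: str) -> List[Dict[str, str]]:
        """Instantiate the templates into an ordered sequence of user turns.
        In an actual evaluation, model replies would be interleaved or injected
        as scripted history between these turns."""
        return [{"role": "user", "content": t.format(**slots)} for t in self.turn_templates]
```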
🔎 Similar Papers
Adarsh Kumarappan
Unknown affiliation
Ananya Mujoo
Evergreen Valley College