Scaling Patterns in Adversarial Alignment: Evidence from Multi-LLM Jailbreak Experiments

📅 2025-11-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) can systematically “jailbreak” smaller aligned models—i.e., induce harmful outputs despite alignment safeguards. Method: We conduct over 6,000 multi-turn adversarial interactions across 14 model families (0.6B–120B parameters) using the JailbreakBench benchmark, with harm and refusal rates evaluated by three independent LLM judges. Contribution/Results: We identify a strong positive correlation between attacker-to-target size ratio and average harm score (Pearson *r* = 0.51), and a high negative correlation between attacker refusal frequency and harm severity (Spearman *ρ* = −0.93). This is the first work to reveal scale asymmetry as a critical determinant of alignment robustness. We propose a novel security evaluation paradigm centered on *relative model size*, demonstrating that attacker behavioral diversity—not target model capability—predominantly governs jailbreaking success. These findings underscore the need to reframe red-teaming and safety assessment around comparative scaling dynamics rather than absolute model capacity.
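The attacker-versus-target variance comparison in the summary can be illustrated with a toy harm table. This is a minimal sketch of the idea, not the paper's method: the model names and harm values below are hypothetical, and the real study averages over 6,000 multi-turn interactions rather than four cells.

```python
from statistics import mean, pvariance

# Hypothetical mean-harm table: keys are (attacker, target) pairs.
# Values are illustrative harm scores on a 0-1 scale, not the paper's data.
harm = {
    ("atk_large", "tgt_small"): 0.8,
    ("atk_large", "tgt_large"): 0.6,
    ("atk_small", "tgt_small"): 0.3,
    ("atk_small", "tgt_large"): 0.2,
}
attackers = sorted({a for a, _ in harm})
targets = sorted({t for _, t in harm})

# Mean harm per attacker (averaged over targets) and per target (over attackers).
per_attacker = [mean(harm[(a, t)] for t in targets) for a in attackers]
per_target = [mean(harm[(a, t)] for a in attackers) for t in targets]

# If attacker identity drives outcomes, variance across attacker means exceeds
# variance across target means -- the paper's 0.18 vs. 0.10 comparison.
print(pvariance(per_attacker), pvariance(per_target))
```

With these toy numbers the attacker-side variance (0.050625) exceeds the target-side variance (0.005625), mirroring the direction of the reported 0.18 vs. 0.10 result.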

📝 Abstract
Large language models (LLMs) increasingly operate in multi-agent and safety-critical settings, raising open questions about how their vulnerabilities scale when models interact adversarially. This study examines whether larger models can systematically jailbreak smaller ones, eliciting harmful or restricted behavior despite alignment safeguards. Using standardized adversarial tasks from JailbreakBench, we simulate over 6,000 multi-turn attacker-target exchanges across major LLM families and scales (0.6B-120B parameters), measuring both harm score and refusal behavior as indicators of adversarial potency and alignment integrity. Each interaction is evaluated through aggregated harm and refusal scores assigned by three independent LLM judges, providing a consistent, model-based measure of adversarial outcomes. Aggregating results across prompts, we find a strong and statistically significant correlation between mean harm and the logarithm of the attacker-to-target size ratio (Pearson r = 0.51, p < 0.001; Spearman rho = 0.52, p < 0.001), indicating that relative model size correlates with the likelihood and severity of harmful completions. Mean harm score variance is higher across attackers (0.18) than across targets (0.10), suggesting that attacker-side behavioral diversity contributes more to adversarial outcomes than target susceptibility. Attacker refusal frequency is strongly and negatively correlated with harm (rho = -0.93, p < 0.001), showing that attacker-side alignment mitigates harmful responses. These findings reveal that size asymmetry influences robustness and provide exploratory evidence for adversarial scaling patterns, motivating more controlled investigations into inter-model alignment and safety.
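The size-ratio correlation described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the paper's analysis code: the parameter counts and harm scores below are made up, and the real study computes Pearson and Spearman statistics (with p-values) over many attacker-target pairs.

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical records: (attacker params in B, target params in B, mean harm).
# Values are illustrative only.
pairs = [
    (120.0, 0.6, 0.82),
    (70.0, 7.0, 0.55),
    (7.0, 70.0, 0.21),
    (0.6, 120.0, 0.12),
]

# Correlate mean harm with the log of the attacker-to-target size ratio,
# as in the abstract's headline statistic.
log_ratio = [math.log(a / t) for a, t, _ in pairs]
harm = [h for _, _, h in pairs]
print(round(pearson_r(log_ratio, harm), 3))
```

On this toy data the correlation is close to 1 because the points were chosen to be nearly monotone; the paper's r = 0.51 reflects far noisier real interactions.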
Problem

Research questions and friction points this paper is trying to address.

Examines how model size scaling affects adversarial vulnerability in multi-LLM interactions
Investigates whether larger models can systematically jailbreak smaller aligned models
Measures how relative model size correlates with harmful behavior likelihood and severity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulated multi-turn adversarial exchanges across LLM scales
Used aggregated harm and refusal scores from LLM judges
Found correlation between harm and attacker-to-target size ratio
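The judge-aggregation step listed above can be sketched as follows. This is a hedged illustration, assuming each of the three judges emits one harm score per exchange on a 0-1 scale; the score layout and values are hypothetical, not taken from the paper.

```python
from statistics import mean

def aggregate_judges(scores_by_judge):
    """Average per-exchange harm scores across independent LLM judges.

    scores_by_judge: list of per-judge score lists, all the same length,
    one score per adversarial exchange (layout is an assumption).
    """
    return [mean(triple) for triple in zip(*scores_by_judge)]

# Illustrative scores from three judges over three exchanges.
judge_a = [0.9, 0.1, 0.4]
judge_b = [0.8, 0.2, 0.5]
judge_c = [1.0, 0.0, 0.3]

# Per-exchange mean harm across the three judges.
print([round(v, 3) for v in aggregate_judges([judge_a, judge_b, judge_c])])
```

Averaging over multiple independent judges smooths out any single judge's idiosyncrasies, which is why the paper reports aggregated rather than single-judge scores.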
Samuel Nathanson
Johns Hopkins University Applied Physics Laboratory (APL), University of Maryland, Baltimore County
Rebecca Williams
Johns Hopkins University Applied Physics Laboratory (APL), University of Maryland, Baltimore County
Cynthia Matuszek
Associate Professor, UMBC
robotics, natural language grounding, machine learning, knowledge representation