Jailbreak Scaling Laws for Large Language Models: Polynomial-Exponential Crossover

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how adversarial prompt injection circumvents the safety mechanisms of aligned large language models and uncovers the scaling law governing attack success rate as a function of the number of inference-time samples. By modeling language-model outputs as Gibbs sampling in a spin-glass system, the work draws an analogy between weak/strong magnetic fields and short/long injection prompts, leveraging replica symmetry breaking theory to analyze their impact on clusters of unsafe outputs. Working from a statistical-physics perspective for the first time, the paper theoretically derives and empirically validates that short prompts induce polynomial growth in attack success rate, whereas long prompts trigger exponential growth. This reveals a phase-transition mechanism driven by injection strength, demonstrating that strong prompts can induce an adversarially ordered phase within the model.
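As a toy illustration of the weak-field/strong-field analogy (a minimal sketch, not the paper's actual model), the following Python snippet draws heat-bath (Gibbs) samples from a one-dimensional Ising chain in an external field h, which plays the role of injection strength. Configurations strongly aligned with the field stand in for the unsafe cluster; the coupling J, inverse temperature beta, field values, and the 0.5 magnetization threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sample_chain(n_spins=256, J=1.0, h=0.0, beta=1.0,
                       n_sweeps=200, burn_in=100):
    """Heat-bath (Gibbs) sampling of a 1-D Ising chain in external field h.

    Returns the magnetization of each configuration sampled after burn-in.
    """
    s = rng.choice([-1, 1], size=n_spins)
    mags = []
    for sweep in range(n_sweeps):
        for i in range(n_spins):
            # Local field: two chain neighbours (periodic) plus external field h.
            local = J * (s[(i - 1) % n_spins] + s[(i + 1) % n_spins]) + h
            # Heat-bath update: P(s_i = +1) = 1 / (1 + exp(-2 * beta * local)).
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * local))
            s[i] = 1 if rng.random() < p_up else -1
        if sweep >= burn_in:
            mags.append(s.mean())
    return np.array(mags)

# "Unsafe" samples = configurations strongly aligned with the injected field.
for h in (0.02, 1.0):  # weak vs strong injection analogue
    m = gibbs_sample_chain(h=h)
    print(f"h={h:4.2f}  mean magnetization={m.mean():+.3f}  "
          f"unsafe fraction={(m > 0.5).mean():.3f}")
```

Under the weak field the chain stays disordered and field-aligned samples are rare; under the strong field the chain orders along the field and nearly every sample lands in the "unsafe" cluster, mirroring the phase transition described above.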

📝 Abstract
Adversarial attacks can reliably steer safety-aligned large language models toward unsafe behavior. Empirically, we find that adversarial prompt-injection attacks can amplify the attack success rate from the slow polynomial growth observed without injection to exponential growth with the number of inference-time samples. To explain this phenomenon, we propose a theoretical proxy generative model of language in terms of a spin-glass system operating in a replica-symmetry-breaking regime, where generations are drawn from the associated Gibbs measure and a subset of low-energy, size-biased clusters is designated unsafe. Within this framework, we analyze prompt-injection-based jailbreaking. Short injected prompts correspond to a weak magnetic field aligned toward unsafe cluster centers and yield a power-law scaling of attack success rate with the number of inference-time samples, while long injected prompts, i.e., a strong magnetic field, yield exponential scaling. We derive these behaviors analytically and confirm them empirically on large language models. The transition between the two regimes is due to the appearance of an ordered phase in the spin chain under a strong magnetic field, which suggests that the injected jailbreak prompt enhances adversarial order in the language model.
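As a hedged diagnostic sketch (assumed functional forms and constants, not the paper's fitted values), the two regimes can be told apart in measured attack-success-rate curves by where they become linear: a power law ASR(n) ≈ c·n^α is linear in log-log coordinates, while exponential saturation ASR(n) = 1 − e^(−κn) is linear when log(1 − ASR) is plotted against n.

```python
import numpy as np

ns = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256], dtype=float)

# Hypothesized regimes (illustrative constants, not values from the paper):
#   short prompt / weak field  : ASR(n) ~ c * n**alpha      (power law)
#   long prompt  / strong field: ASR(n) = 1 - exp(-kappa*n) (exponential)
asr_weak = 0.01 * ns ** 0.5
asr_strong = 1.0 - np.exp(-0.05 * ns)

# A power law is linear in log-log axes; the exponential regime is linear
# in log(1 - ASR) versus n. Least-squares slopes recover alpha and -kappa.
alpha, _ = np.polyfit(np.log(ns), np.log(asr_weak), 1)
neg_kappa, _ = np.polyfit(ns, np.log(1.0 - asr_strong), 1)
print(f"weak-field  log-log slope ≈ {alpha:.2f}   (power-law exponent alpha)")
print(f"strong-field semi-log slope ≈ {neg_kappa:.3f} (≈ -kappa)")
```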
Problem

Research questions and friction points this paper is trying to address.

jailbreak
adversarial attacks
scaling laws
large language models
prompt injection
Innovation

Methods, ideas, or system contributions that make the work stand out.

jailbreak scaling laws
prompt injection
spin-glass model
replica symmetry breaking
adversarial attacks
Indranil Halder
John A. Paulson School of Engineering and Applied Sciences, Harvard University
Annesya Banerjee
Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology; Speech and Hearing Bioscience and Technology, Harvard Medical School
Cengiz Pehlevan
Harvard University
Neural Networks
Theoretical Neuroscience
Machine Learning
Physics of Learning