Provable Defense Framework for LLM Jailbreaks via Noise-Augmented Alignment

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of large language models to adaptive jailbreaking attacks, for which existing empirical defenses lack provable security guarantees. The authors propose the first provably robust framework at the semantic level, combining Certified Semantic Smoothing with Noise-Augmented Alignment Tuning to establish an ℓ₀-norm safety radius grounded in ensemble statistical stability. Their approach leverages hierarchical randomized ablation and hypergeometric distribution-based analysis to derive semantic smoothing certificates, effectively transforming the model into a semantic denoiser. Evaluated on Llama-3, the method reduces the success rate of gradient-based jailbreak attacks from 84.2% to 1.2% while preserving 94.1% of performance on benign tasks, substantially outperforming character-level baselines.
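The core mechanism described above — stratified randomized ablation followed by an ensemble vote — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the `classify` callable, the `[MASK]` placeholder token, and simple majority-vote aggregation are assumptions made for the example.

```python
import random
from collections import Counter

def stratified_ablation(structural, payload, keep, rng):
    """Ablate payload tokens at random, retaining `keep` of them.
    Structural tokens (the immutable stratum) are never ablated."""
    kept = set(rng.sample(range(len(payload)), keep))
    masked = [t if i in kept else "[MASK]" for i, t in enumerate(payload)]
    return structural + masked

def smoothed_verdict(classify, structural, payload, keep,
                     n_samples=100, seed=0):
    """Majority vote of a safety classifier over the ablated ensemble."""
    rng = random.Random(seed)
    votes = Counter(
        classify(stratified_ablation(structural, payload, keep, rng))
        for _ in range(n_samples)
    )
    return votes.most_common(1)[0][0]
```

Because the verdict depends on the vote distribution rather than any single forward pass, an attacker who controls only a few payload tokens can influence only the fraction of ensemble members whose keep-set happens to include those tokens.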

📝 Abstract
Large Language Models (LLMs) remain vulnerable to adaptive jailbreaks, such as GCG, that easily bypass empirical defenses. We propose a framework for certifiable robustness that shifts safety guarantees from single-pass inference to the statistical stability of an ensemble. We introduce Certified Semantic Smoothing (CSS) via Stratified Randomized Ablation, a technique that partitions inputs into immutable structural prompts and mutable payloads to derive rigorous ℓ₀-norm guarantees using the hypergeometric distribution. To resolve performance degradation on sparse contexts, we employ Noise-Augmented Alignment Tuning (NAAT), which transforms the base model into a semantic denoiser. Extensive experiments on Llama-3 show that our method reduces the Attack Success Rate of gradient-based attacks from 84.2% to 1.2% while maintaining 94.1% benign utility, significantly outperforming character-level baselines, which degrade utility to 74.3%. This framework provides a deterministic certificate of safety, ensuring that a model remains robust against all adversarial variants within a provable radius.
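The hypergeometric argument behind the ℓ₀ certificate can be made concrete. If an adversary edits at most r of the n payload tokens and each ensemble sample keeps k tokens, the fraction of samples whose keep-set touches an edited position is 1 − C(n−r, k)/C(n, k); only those samples can change their vote. The sketch below, a hedged illustration and not the paper's exact certificate, certifies the largest r for which a binary majority vote provably cannot flip (the 0.5 threshold and the `p_top` vote estimate are assumptions of this simplified setting).

```python
from math import comb

def overlap_fraction(n, k, r):
    """P[a random size-k keep-set hits at least one of r edited tokens],
    via the hypergeometric tail: 1 - C(n-r, k) / C(n, k).
    (math.comb returns 0 when k > n-r, giving 1.0 as required.)"""
    return 1.0 - comb(n - r, k) / comb(n, k)

def certified_radius(n, k, p_top):
    """Largest r such that the majority vote provably cannot flip:
    flipped votes are bounded by the overlap fraction, so the
    certificate holds while p_top - overlap_fraction(n, k, r) > 0.5."""
    r = 0
    while r + 1 <= n and p_top - overlap_fraction(n, k, r + 1) > 0.5:
        r += 1
    return r
```

For example, with n = 100 payload tokens, k = 10 kept per sample, and a 90% top-class vote, the bound certifies robustness to any 4-token edit, since at r = 5 the overlap fraction (≈0.416) erodes the 0.4 vote margin below the flip threshold.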
Problem

Research questions and friction points this paper is trying to address.

LLM jailbreaks
adversarial attacks
robustness
safety guarantees
certifiable defense
Innovation

Methods, ideas, or system contributions that make the work stand out.

Certified Semantic Smoothing
Noise-Augmented Alignment Tuning
Provable Robustness
LLM Jailbreak Defense
Stratified Randomized Ablation