Fail-Closed Alignment for Large Language Models

📅 2026-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical vulnerability in current large language model alignment mechanisms, which typically adopt a “fail-open” design wherein bypassing a single refusal pathway compromises overall safety. To mitigate this, the authors propose a “fail-closed” alignment principle that enforces safety through multiple causally independent refusal pathways, ensuring robustness even when some mechanisms are circumvented. They implement this via a progressive alignment framework that iteratively identifies and ablates learned refusal directions, compelling the model to reconstruct safety mechanisms in orthogonal subspaces. Experimental results demonstrate that the approach achieves state-of-the-art robustness against four types of jailbreak attacks, effectively reduces over-refusal, preserves generation quality, and incurs minimal computational overhead.

📝 Abstract
We identify a structural weakness in current large language model (LLM) alignment: modern refusal mechanisms are fail-open. While existing approaches encode refusal behaviors across multiple latent features, suppressing a single dominant feature, e.g. via prompt-based jailbreaks, can cause alignment to collapse, leading to unsafe generation. Motivated by this, we propose fail-closed alignment as a design principle for robust LLM safety: refusal mechanisms should remain effective even under partial failures via redundant, independent causal pathways. We present a concrete instantiation of this principle: a progressive alignment framework that iteratively identifies and ablates previously learned refusal directions, forcing the model to reconstruct safety along new, independent subspaces. Across four jailbreak attacks, we achieve the strongest overall robustness while mitigating over-refusal and preserving generation quality, with small computational overhead. Our mechanistic analyses confirm that models trained with our method encode multiple, causally independent refusal directions that prompt-based jailbreaks cannot suppress simultaneously, providing empirical support for fail-closed alignment as a principled foundation for robust LLM safety.
Problem

Research questions and friction points this paper is trying to address.

fail-open alignment
jailbreak attacks
refusal mechanisms
LLM safety
alignment robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

fail-closed alignment
refusal mechanisms
jailbreak robustness
progressive alignment
causally independent subspaces