🤖 AI Summary
Large language models (LLMs) remain vulnerable to obfuscated jailbreaking attacks, and existing safety alignment methods exhibit insufficient robustness against adaptive adversarial inputs. Method: This paper proposes a structured three-step reasoning framework—runtime safety verification, intent extraction, and policy-driven safety analysis—that explicitly emulates human safety judgment processes. Contribution/Results: The framework enables fine-grained detection and purification of malicious queries while supporting test-time generalization. Evaluated across multiple safety benchmarks and adaptive jailbreaking attack suites, it significantly outperforms state-of-the-art reasoning-based safety alignment models, achieving substantial improvements in defense robustness and interpretability without compromising response quality.
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable generative capabilities. However, their susceptibility to misuse has raised significant safety concerns. While post-training safety alignment methods have been widely adopted, LLMs remain vulnerable to malicious instructions that can bypass safety constraints. Recent efforts have introduced inference-time safety reasoning (system-2 alignment), where LLMs conduct a reasoning process to perform safety verification before final response. We show, however, that these checks are driven by ad-hoc reasoning that diverges from the structured human process, where they first discern a user's true intent, then evaluate the associated risk based on the true intent. Consequently, these defenses remain vulnerable to sophisticated jailbreak prompts that cloak harmful goals in seemingly benign language. To build secure and safe LLMs, we propose a reasoning-based safety alignment framework, ARMOR, that replaces the ad-hoc chains of thought reasoning process with human-aligned, structured one. At inference, ARMOR (1) detects likely jailbreak strategies, (2) extracts the user's core intent while discarding deceptive instructions, and (3) applies a policy-grounded safety analysis to the purified request. ARMOR is evaluated on adaptive jailbreak attacks and multiple safety benchmarks, and a test-time scaling is conducted to further improve its performance. Results demonstrate that ARMOR significantly enhances the robustness against state-of-the-art adaptive jailbreak attacks and outperforms recent reasoning-based aligned models across various safety benchmarks.