🤖 AI Summary
This work addresses the underexplored impact of sampling mechanisms on refusal behavior and jailbreak robustness in autoregressive and diffusion language models. We propose a step-wise refusal dynamics analysis framework to systematically compare the safety behavior of both model classes under diverse sampling strategies. To capture latent anomalies in generation dynamics that are invisible at the text level, we introduce the Step-Wise Refusal Internal Dynamics (SRI) signal. Combining geometric structural analysis of SRI with a lightweight inference-time detector, our approach generalizes strongly to unseen attacks, matching or outperforming existing defenses while reducing inference overhead by more than two orders of magnitude. These findings establish sampling strategy as an independent and critical factor in model safety.
📝 Abstract
Diffusion language models (DLMs) have recently emerged as a promising alternative to autoregressive (AR) models, offering parallel decoding and controllable sampling dynamics while achieving competitive generation quality at scale. Despite this progress, the role of sampling mechanisms in shaping refusal behavior and jailbreak robustness remains poorly understood. In this work, we present a fundamental analytical framework for step-wise refusal dynamics, enabling direct comparison between AR and diffusion sampling. Our analysis reveals that the sampling strategy itself plays a central role in safety behavior, as a factor distinct from the underlying learned representations. Motivated by this analysis, we introduce the Step-Wise Refusal Internal Dynamics (SRI) signal, which supports interpretability and improved safety for both AR models and DLMs. We demonstrate that the geometric structure of SRI captures internal recovery dynamics and identifies anomalous behavior in harmful generations as cases of *incomplete internal recovery* that are not observable at the text level. This structure enables lightweight inference-time detectors that generalize to unseen attacks while matching or outperforming existing defenses with over $100\times$ lower inference overhead.
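To make the detector idea concrete, here is a minimal sketch of what a lightweight inference-time check over a per-step internal signal could look like. Everything here is an illustrative assumption, not the paper's implementation: `trajectory` stands in for a generation's SRI values (one scalar per decoding step, higher meaning more refusal-aligned), the feature names and thresholds are invented, and "incomplete internal recovery" is operationalized as a signal that dips below zero and never climbs back.

```python
# Hypothetical sketch of a lightweight detector over a per-step SRI
# trajectory. Feature names and thresholds are illustrative assumptions.

def sri_features(trajectory):
    """Summarize a per-step SRI trajectory with simple geometric features."""
    lowest = min(trajectory)
    return {
        "final": trajectory[-1],                 # where the signal ends up
        "min": lowest,                           # deepest dip toward compliance
        "recovery": trajectory[-1] - lowest,     # how far it climbed back
        "dip_frac": sum(t < 0.0 for t in trajectory) / len(trajectory),
    }

def flag_incomplete_recovery(trajectory, final_thresh=0.2, recovery_thresh=0.3):
    """Flag generations whose internal signal dips but never fully recovers."""
    f = sri_features(trajectory)
    dipped = f["min"] < 0.0
    recovered = f["final"] >= final_thresh and f["recovery"] >= recovery_thresh
    return dipped and not recovered

# A benign generation may dip briefly and recover; a jailbroken one does not.
safe_traj = [0.8, 0.1, -0.2, 0.4, 0.9]
jailbroken_traj = [0.7, -0.1, -0.5, -0.4, -0.3]
```

Because the check reduces to a handful of scalar comparisons per generation, it adds negligible cost on top of decoding, which is consistent with the claimed orders-of-magnitude overhead advantage over defenses that require extra model passes.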