Toward Safer Diffusion Language Models: Discovery and Mitigation of Priming Vulnerability

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion language models (DLMs) generate tokens in parallel through iterative denoising, and this inference mechanism introduces a novel security vulnerability: if an affirmative token for a harmful query appears at an intermediate denoising step, subsequent denoising can be steered toward a harmful response even in aligned models, and the weakness is exploitable by optimization-based jailbreak attacks. This work first characterizes the vulnerability, then proposes a safety alignment training framework that accounts for contaminated intermediate states: it injects adversarial affirmative tokens into intermediate denoising states and trains the model to denoise from those states to safe responses. Experiments show that the approach significantly improves robustness against both token-injection and conventional jailbreak attacks while preserving original task performance. To the authors' knowledge, this is the first systematic safety-alignment method for DLMs that explicitly models intermediate denoising states.

📝 Abstract
Diffusion language models (DLMs) generate tokens in parallel through iterative denoising, which can reduce latency and enable bidirectional conditioning. However, the safety risks posed by jailbreak attacks that exploit this inference mechanism are not well understood. In this paper, we reveal that DLMs have a critical vulnerability stemming from their iterative denoising process and propose a countermeasure. Specifically, our investigation shows that if an affirmative token for a harmful query appears at an intermediate step, subsequent denoising can be steered toward a harmful response even in aligned models. As a result, simply injecting such affirmative tokens can readily bypass the safety guardrails. Furthermore, we demonstrate that the vulnerability allows existing optimization-based jailbreak attacks to succeed on DLMs. Building on this analysis, we propose a novel safety alignment method tailored to DLMs that trains models to generate safe responses from contaminated intermediate states that contain affirmative tokens. Our experiments indicate that the proposed method significantly mitigates the vulnerability with minimal impact on task performance. Furthermore, our method improves robustness against conventional jailbreak attacks. Our work underscores the need for DLM-specific safety research.
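The priming attack the abstract describes can be illustrated with a toy sketch. This is not the paper's actual setup: the "model" below is a stand-in in which any already-revealed token persists through later denoising steps, and the presence of an affirmative token anywhere in the state switches the continuation from a refusal to a compliant trajectory (mimicking bidirectional conditioning). All names (`MASK`, `denoise_step`, `generate`, the token lists) are illustrative assumptions.

```python
MASK = "<mask>"
REFUSAL = ["I", "cannot", "help", "with", "that"]
COMPLY = ["Sure,", "here", "is", "how", "to"]

def denoise_step(state):
    # Stand-in for bidirectional conditioning: once an affirmative token
    # is visible anywhere in the state, later steps continue the
    # compliant trajectory; otherwise they denoise toward the refusal.
    reference = COMPLY if "Sure," in state else REFUSAL
    out = list(state)
    for i, tok in enumerate(out):
        if tok == MASK:
            out[i] = reference[i]  # reveal the leftmost masked position
            break
    return out

def generate(n_steps=5, inject_at=None, inject_tok=None, inject_pos=0):
    state = [MASK] * len(REFUSAL)
    for step in range(n_steps):
        if step == inject_at:
            # The attack: contaminate the intermediate state directly.
            state[inject_pos] = inject_tok
        state = denoise_step(state)
    return state

print(generate())                                 # aligned refusal
print(generate(inject_at=0, inject_tok="Sure,"))  # primed, harmful trajectory
```

The key property the sketch captures is that a single injected token survives all subsequent denoising steps and conditions them, which is why alignment applied only to the initial prompt does not protect the intermediate states.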
Problem

Research questions and friction points this paper is trying to address.

Identify vulnerability in diffusion language models' denoising process
Mitigate jailbreak attacks exploiting affirmative token injection
Develop safety alignment for contaminated intermediate states
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposed safety alignment for diffusion language models
Trains models to generate safe responses from contaminated states
Mitigates priming vulnerability with minimal performance impact
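The defense described above trains the model to recover safe responses from contaminated intermediate states. A minimal sketch of how such training pairs might be constructed, assuming a masked-diffusion formulation: partially mask a safe response to simulate a mid-denoising state, splice in an affirmative token to simulate the attack, and pair the result with the safe target. The helper names (`contaminate`, `build_alignment_pairs`) and the masking scheme are assumptions for illustration, not the paper's implementation.

```python
import random

MASK = "<mask>"

def contaminate(safe_response, affirm_token, mask_ratio=0.5, seed=0):
    """Build one contaminated intermediate state: partially mask the safe
    response (simulating a mid-denoising state), then splice an
    affirmative token into the first position (simulating the attack)."""
    rng = random.Random(seed)  # seeded for reproducible examples
    state = [tok if rng.random() > mask_ratio else MASK for tok in safe_response]
    state[0] = affirm_token
    return state

def build_alignment_pairs(safe_response, affirm_tokens, n_per_token=2):
    """Pairs (contaminated state -> safe target): the model would be
    fine-tuned to denoise back to the refusal even from primed states."""
    pairs = []
    for tok in affirm_tokens:
        for seed in range(n_per_token):
            pairs.append((contaminate(safe_response, tok, seed=seed), safe_response))
    return pairs

safe = ["I", "cannot", "assist", "with", "that", "request"]
pairs = build_alignment_pairs(safe, ["Sure", "Certainly"])
```

Because every target is the safe response regardless of the injected prefix, optimizing the denoising trajectory on such pairs pushes the model to ignore affirmative primes rather than continue them, which matches the selective trajectory optimization the summary describes.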