A2D: Any-Order, Any-Step Safety Alignment for Diffusion Language Models

šŸ“… 2025-09-27
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
Diffusion large language models (dLLMs) support arbitrary generation orders, introducing novel safety risks: harmful content can emerge at any token position, and prefill attacks (e.g., DIJA) can bypass response-level refusal mechanisms. To address this, we propose A2D, a first-of-its-kind defense that is robust against both arbitrary generation orders and arbitrary-step prefill attacks. A2D performs fine-grained, token-level safety alignment, integrating stochastic masking, [EOS]-triggered refusal signals, and thresholded probability criteria to enable real-time monitoring and automatic termination of generation. Evaluated across multiple benchmarks, A2D reduces DIJA attack success rates from over 80% to ≈0%, while accelerating safe termination by up to 19.3Ɨ. These results significantly strengthen the safety alignment of dLLMs under adversarial prefill conditions.
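The thresholded, probability-based termination criterion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the token id `EOS_ID`, the threshold value, and the shape of the logits are all assumptions for the example.

```python
import numpy as np

EOS_ID = 2          # hypothetical [EOS] token id (assumed for illustration)
THRESHOLD = 0.9     # refusal-trigger probability (assumed value)

def softmax(x):
    # Numerically stable softmax over the vocabulary axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def should_terminate(logits, threshold=THRESHOLD, eos_id=EOS_ID):
    """Return True if any still-masked position assigns [EOS] a
    probability above the threshold, i.e. the model is signalling
    refusal and decoding can stop early."""
    probs = softmax(logits)                  # shape: (positions, vocab)
    return bool((probs[:, eos_id] > threshold).any())

# Toy check: uniform logits do not trigger; a spike on [EOS] does.
benign = np.zeros((4, 8))
attack = benign.copy()
attack[1, EOS_ID] = 10.0
```

Because the check runs per denoising step, the monitor can abort generation as soon as the refusal signal appears anywhere in the sequence, rather than waiting for a full response, which is the source of the reported speedup in safe termination.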

šŸ“ Abstract
Diffusion large language models (dLLMs) enable any-order generation, but this flexibility enlarges the attack surface: harmful spans may appear at arbitrary positions, and template-based prefilling attacks such as DIJA bypass response-level refusals. We introduce A2D (Any-Order, Any-Step Defense), a token-level alignment method that aligns dLLMs to emit an [EOS] refusal signal whenever harmful content arises. By aligning safety directly at the token-level under randomized masking, A2D achieves robustness to both any-decoding-order and any-step prefilling attacks under various conditions. It also enables real-time monitoring: dLLMs may begin a response but automatically terminate if unsafe continuation emerges. On safety benchmarks, A2D consistently prevents the generation of harmful outputs, slashing DIJA success rates from over 80% to near-zero (1.3% on LLaDA-8B-Instruct, 0.0% on Dream-v0-Instruct-7B), and thresholded [EOS] probabilities allow early rejection, yielding up to 19.3x faster safe termination.
Problem

Research questions and friction points this paper is trying to address.

Defends diffusion language models from any-order harmful content generation
Prevents template-based prefilling attacks bypassing response-level safety
Enables real-time monitoring and automatic termination of unsafe outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token-level alignment for diffusion language models
Randomized masking for robust safety defense
Early termination via monitoring of [EOS] probabilities
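The first two innovation points — token-level alignment under randomized masking — can be illustrated with a sketch of how a training example might be constructed. This is an assumed recipe for illustration only: the `mask_rate`, the span annotation, and the "no loss on visible tokens" choice are not taken from the paper.

```python
import random

MASK, EOS = "[MASK]", "[EOS]"

def make_training_example(tokens, harmful, mask_rate=0.5, rng=random):
    """Randomly mask positions; a masked position inside a harmful
    span gets [EOS] as its alignment target, so the model learns to
    refuse at the token level regardless of decoding order."""
    inputs, targets = [], []
    for tok, bad in zip(tokens, harmful):
        if rng.random() < mask_rate:
            inputs.append(MASK)
            targets.append(EOS if bad else tok)  # token-level refusal target
        else:
            inputs.append(tok)
            targets.append(None)                 # no loss on visible tokens
    return inputs, targets
```

Because masking is randomized over positions, the refusal target is attached to harmful tokens wherever they sit in the sequence, which is what makes the resulting alignment order-agnostic.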