🤖 AI Summary
To address the inference inefficiency of Masked Discrete Diffusion Models (MDMs) in multimodal tasks, caused by repeatedly recomputing redundant mask tokens at every sampling step, this paper proposes Sparse-LaViDa, a sparse sampling framework that keeps training and inference consistent. The method introduces three key components: (1) a dynamic token-truncation mechanism that adaptively prunes redundant masked tokens during sampling; (2) register tokens that serve as compact representations of the truncated tokens, preserving generation fidelity; and (3) a customized attention mask that makes training faithfully match the truncated sampling procedure. Built on the unified multimodal discrete diffusion architecture LaViDa-O, Sparse-LaViDa achieves up to a 2× inference speedup on text-to-image generation, image editing, and mathematical reasoning tasks without compromising generation quality.
📝 Abstract
Masked Discrete Diffusion Models (MDMs) have achieved strong performance across a wide range of multimodal tasks, including image understanding, generation, and editing. However, their inference speed remains suboptimal due to the need to repeatedly process redundant masked tokens at every sampling step. In this work, we propose Sparse-LaViDa, a novel modeling framework that dynamically truncates unnecessary masked tokens at each inference step to accelerate MDM sampling. To preserve generation quality, we introduce specialized register tokens that serve as compact representations for the truncated tokens. Furthermore, to ensure consistency between training and inference, we design a specialized attention mask that faithfully matches the truncated sampling procedure during training. Built upon the state-of-the-art unified MDM LaViDa-O, Sparse-LaViDa achieves up to a 2x speedup across diverse tasks including text-to-image generation, image editing, and mathematical reasoning, while maintaining generation quality.
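The core idea of truncated sampling can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the paper's implementation: it assumes a mask token id, a per-position confidence score from the model, a `keep_ratio` hyperparameter, and a fixed number of register slots (`NUM_REGISTERS`), all of which are illustrative assumptions. At each step, only the unmasked tokens plus the most confident masked positions are kept active, with register slots standing in for the truncated masked tokens.

```python
import numpy as np

MASK_ID = -1        # placeholder id for masked positions (assumption)
NUM_REGISTERS = 2   # hypothetical number of register tokens

def truncate_masked(tokens, confidence, keep_ratio=0.5):
    """Toy sketch of sparse MDM sampling: keep the most confident
    masked positions and drop the rest, which would be summarized
    by register tokens in the actual model."""
    masked = np.where(tokens == MASK_ID)[0]
    unmasked = np.where(tokens != MASK_ID)[0]
    k = max(1, int(len(masked) * keep_ratio))
    # keep the k masked positions the model is most confident about
    keep = masked[np.argsort(-confidence[masked])[:k]]
    # positions actually fed to the transformer at this step
    active = np.sort(np.concatenate([unmasked, keep]))
    # register slots stand in for the truncated masked tokens
    return active, NUM_REGISTERS

# Example: 5 positions, 3 of them masked
tokens = np.array([5, MASK_ID, MASK_ID, 7, MASK_ID])
confidence = np.array([0.0, 0.9, 0.1, 0.0, 0.5])
active, registers = truncate_masked(tokens, confidence)
```

In this example only positions 0, 1, and 3 remain active (the two unmasked tokens plus the most confident masked one), so the transformer processes 3 positions plus 2 register slots instead of all 5, which is where the speedup would come from.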