🤖 AI Summary
This work addresses the high inference latency of discrete diffusion language models in text-to-image generation, a challenge inadequately mitigated by existing acceleration methods that either require costly retraining or fail to exploit spatial redundancy in images. The authors propose a training-free acceleration framework that uniquely integrates morphological neighborhood identification, risk-bounded filtering, and manifold-consistent inverse diffusion scheduling. By prioritizing the recovery of high-information image patches within identified “generation frontiers,” the method suppresses error propagation while preserving the underlying latent manifold structure. Evaluated across four text-to-image benchmarks, the approach achieves approximately 4× inference speedup without sacrificing—and in some cases even enhancing—generation fidelity, demonstrating particularly strong performance on spatial reasoning tasks.
📝 Abstract
Discrete Diffusion Language Models have emerged as a compelling paradigm for unified multimodal generation, yet their deployment is hindered by high inference latency arising from iterative decoding. Existing acceleration strategies often require expensive retraining or fail to leverage the 2D spatial redundancy inherent in visual data. To address this, we propose Locality-Aware Dynamic Rescue (LADR), a training-free method that expedites inference by exploiting the spatial Markov property of images. LADR prioritizes the recovery of tokens at the “generation frontier”, i.e., regions spatially adjacent to already-observed pixels, thereby maximizing information gain. Specifically, our method integrates morphological neighbor identification to locate candidate tokens, employs a risk-bounded filtering mechanism to prevent error propagation, and utilizes manifold-consistent inverse scheduling to align the diffusion trajectory with the accelerated mask density. Extensive experiments on four text-to-image generation benchmarks demonstrate that LADR achieves an approximate 4× speedup over standard baselines. Remarkably, it maintains or even enhances generative fidelity, particularly in spatial reasoning tasks, offering a state-of-the-art trade-off between efficiency and quality.
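The first two components of the method, morphological neighbor identification and risk-bounded filtering, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `frontier_candidates`, the 3×3 dilation neighborhood, and the single confidence threshold `risk_threshold` are all illustrative assumptions about how a frontier-prioritized, risk-filtered token selection could look on a 2D token grid.

```python
import numpy as np

def frontier_candidates(decoded_mask, confidences, risk_threshold=0.9):
    """Hypothetical sketch of LADR-style candidate selection.

    decoded_mask : (H, W) bool array, True where tokens are already decoded.
    confidences  : (H, W) float array of model confidences for each position.
    Returns a bool mask of undecoded frontier tokens that pass the risk filter.
    """
    H, W = decoded_mask.shape
    # Morphological dilation with a 3x3 structuring element (8-connectivity):
    # a position is "near" the decoded region if any neighbor is decoded.
    padded = np.pad(decoded_mask, 1, mode="constant")
    dilated = np.zeros_like(decoded_mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            dilated |= padded[1 + dy : H + 1 + dy, 1 + dx : W + 1 + dx]
    # Generation frontier: undecoded positions adjacent to decoded ones.
    frontier = dilated & ~decoded_mask
    # Risk-bounded filtering (illustrative): only decode frontier tokens whose
    # confidence clears a threshold, to limit error propagation.
    return frontier & (confidences >= risk_threshold)

# Toy usage: one decoded token in a 4x4 grid yields its 8 neighbors as frontier.
decoded = np.zeros((4, 4), dtype=bool)
decoded[1, 1] = True
conf = np.full((4, 4), 0.95)
keep = frontier_candidates(decoded, conf)
```

In this toy grid the eight positions surrounding `(1, 1)` are selected, while the already-decoded token itself is excluded; lowering a neighbor's confidence below `risk_threshold` would drop it from the batch. The manifold-consistent inverse scheduling component is omitted here, as the abstract gives no detail on its concrete form.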