LADR: Locality-Aware Dynamic Rescue for Efficient Text-to-Image Generation with Diffusion Large Language Models

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high inference latency of discrete diffusion language models in text-to-image generation, a challenge inadequately mitigated by existing acceleration methods that either require costly retraining or fail to exploit spatial redundancy in images. The authors propose a training-free acceleration framework that uniquely integrates morphological neighborhood identification, risk-bounded filtering, and manifold-consistent inverse diffusion scheduling. By prioritizing the recovery of high-information image patches within identified “generation frontiers,” the method suppresses error propagation while preserving the underlying latent manifold structure. Evaluated across four text-to-image benchmarks, the approach achieves approximately 4× inference speedup without sacrificing—and in some cases even enhancing—generation fidelity, demonstrating particularly strong performance on spatial reasoning tasks.

📝 Abstract
Discrete Diffusion Language Models have emerged as a compelling paradigm for unified multimodal generation, yet their deployment is hindered by high inference latency arising from iterative decoding. Existing acceleration strategies often require expensive re-training or fail to leverage the 2D spatial redundancy inherent in visual data. To address this, we propose Locality-Aware Dynamic Rescue (LADR), a training-free method that expedites inference by exploiting the spatial Markov property of images. LADR prioritizes the recovery of tokens at the "generation frontier" (regions spatially adjacent to observed pixels), thereby maximizing information gain. Specifically, our method integrates morphological neighbor identification to locate candidate tokens, employs a risk-bounded filtering mechanism to prevent error propagation, and utilizes manifold-consistent inverse scheduling to align the diffusion trajectory with the accelerated mask density. Extensive experiments on four text-to-image generation benchmarks demonstrate that LADR achieves an approximately 4× speedup over standard baselines. Remarkably, it maintains or even enhances generative fidelity, particularly in spatial reasoning tasks, offering a state-of-the-art trade-off between efficiency and quality.
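The "generation frontier" idea described in the abstract can be sketched with a one-step morphological dilation over the 2D token grid: a masked position is a frontier candidate if it neighbors an already-decoded token. This is an illustrative sketch only; the function name, the set-based grid representation, and the 4-neighborhood choice are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): find masked tokens on the
# "generation frontier", i.e. grid positions 4-adjacent to decoded tokens.

def generation_frontier(decoded, height, width):
    """Return masked positions that are 4-neighbors of a decoded position.

    `decoded` is a set of (row, col) positions whose tokens are already
    generated; all other positions on the height x width grid are masked.
    """
    frontier = set()
    for r, c in decoded:
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            # Keep neighbors that are inside the grid and still masked.
            if 0 <= nr < height and 0 <= nc < width and (nr, nc) not in decoded:
                frontier.add((nr, nc))
    return sorted(frontier)

# Example: a single decoded token at the center of a 3x3 grid yields the
# four adjacent masked positions as the frontier.
print(generation_frontier({(1, 1)}, 3, 3))
# -> [(0, 1), (1, 0), (1, 2), (2, 1)]
```

In the full method, frontier candidates would then pass through the risk-bounded filter before being decoded, so only confident frontier tokens are recovered early.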
Problem

Research questions and friction points this paper is trying to address.

Discrete Diffusion Language Models
inference latency
text-to-image generation
spatial redundancy
iterative decoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Locality-Aware
Dynamic Rescue
Discrete Diffusion Language Models
Spatial Markov Property
Training-Free Acceleration