Accelerating Auto-regressive Text-to-Image Generation with Training-free Speculative Jacobi Decoding

📅 2024-10-02
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
Autoregressive text-to-image models suffer from low inference efficiency because generation requires hundreds or even thousands of sequential next-token-prediction steps. Existing Jacobi-style parallel decoding methods avoid training, but they rely on a deterministic convergence criterion and thus cannot support stochastic sampling, degrading both image quality and diversity. This paper proposes Speculative Jacobi Decoding (SJD), the first Jacobi-based method to incorporate a probabilistic convergence criterion that preserves sampling randomness. SJD further introduces a training-free token initialization strategy that exploits the spatial locality of visual data, yielding a fully fine-tuning-free end-to-end framework. Evaluated across multiple autoregressive text-to-image models, SJD reduces the average number of generation steps by about 2.1× while maintaining state-of-the-art image fidelity and diversity. The implementation is publicly available.

📝 Abstract
Current large auto-regressive models can generate high-quality, high-resolution images, but they require hundreds or even thousands of steps of next-token prediction during inference, resulting in substantial time consumption. In existing studies, Jacobi decoding, an iterative parallel decoding algorithm, has been used to accelerate auto-regressive generation and can be executed without training. However, Jacobi decoding relies on a deterministic criterion to determine the convergence of iterations. Thus, it works for greedy decoding but is incompatible with sampling-based decoding, which is crucial for visual quality and diversity in current auto-regressive text-to-image generation. In this paper, we propose a training-free probabilistic parallel decoding algorithm, Speculative Jacobi Decoding (SJD), to accelerate auto-regressive text-to-image generation. By introducing a probabilistic convergence criterion, SJD accelerates the inference of auto-regressive text-to-image generation while maintaining the randomness of sampling-based token decoding, allowing the model to generate diverse images. Specifically, SJD enables the model to predict multiple tokens at each step and accepts tokens based on the probabilistic criterion, so the model generates images in fewer steps than the conventional next-token-prediction paradigm. We also investigate token initialization strategies that leverage the spatial locality of visual data to further improve the acceleration ratio in specific scenarios. We conduct experiments with SJD on multiple auto-regressive text-to-image generation models, showing effective acceleration without sacrificing visual quality. The code of our work is available here: https://github.com/tyshiwo1/Accelerating-T2I-AR-with-SJD/.
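To make the probabilistic convergence criterion concrete, the sketch below shows a speculative-sampling-style accept/reject loop over a window of draft tokens, as a minimal illustration rather than the paper's exact implementation. The function name and the flat probability arrays are assumptions for the example; the idea is that tokens drafted in the previous Jacobi iteration are accepted left-to-right with probability min(1, p_current/p_previous), and acceptance stops at the first rejection because later positions were conditioned on the rejected token.

```python
import numpy as np

def speculative_jacobi_accept(draft_tokens, p_prev, p_curr, rng):
    """Probabilistic convergence criterion (hypothetical sketch).

    draft_tokens : tokens proposed in the previous Jacobi iteration
    p_prev[i]    : probability the previous iteration assigned to draft_tokens[i]
    p_curr[i]    : probability the current iteration assigns to draft_tokens[i]

    Returns the accepted prefix of the draft. Unlike greedy Jacobi decoding,
    the accept/reject coin flips preserve sampling randomness.
    """
    accepted = 0
    for i in range(len(draft_tokens)):
        # Accept position i with probability min(1, p_curr / p_prev).
        if rng.random() < min(1.0, p_curr[i] / p_prev[i]):
            accepted += 1
        else:
            # Later drafts depend on this token, so stop at the first rejection.
            break
    return draft_tokens[:accepted]
```

When the current iteration assigns each draft token at least as much probability as the previous one, the whole window converges in one pass, which is the source of the multi-token-per-step speedup.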
Problem

Research questions and friction points this paper is trying to address.

Accelerates auto-regressive text-to-image generation
Maintains visual quality and diversity in sampling-based decoding
Reduces inference steps without additional training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free Speculative Jacobi Decoding accelerates generation
Probabilistic criterion maintains randomness in token decoding
Token initialization leverages spatial locality for acceleration
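The spatial-locality idea behind the token initialization can be sketched as follows. This is an illustrative assumption, not the paper's exact strategy: since adjacent image-token patches tend to be similar, seeding the parallel-decoding window by repeating the most recently decoded token gives the Jacobi iterations a better starting point than random initialization.

```python
def init_draft_tokens(decoded_tokens, window):
    """Hypothetical initialization sketch for a Jacobi decoding window.

    Repeats the last decoded token across the window, exploiting the
    spatial locality of visual data (neighboring patches are often alike).
    Falls back to token 0 when nothing has been decoded yet.
    """
    fill = decoded_tokens[-1] if decoded_tokens else 0
    return [fill] * window
```

A closer initial guess means more draft tokens pass the probabilistic acceptance test per iteration, which is why initialization affects the acceleration ratio.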