Fast Sampling Through The Reuse Of Attention Maps In Diffusion Models

📅 2023-12-13
📈 Citations: 1
Influential: 0
🤖 AI Summary
Text-to-image diffusion models suffer from slow sampling and high latency, and existing acceleration methods typically require retraining or fine-tuning. This paper proposes a training-free, inference-only acceleration framework: guided by ODE stability theory, it determines when attention maps can safely be reused and introduces a caching-and-strided-reuse mechanism that is fully plug-and-play, requiring no modification of the original model. The key insight is that reusing attention maps later in the sampling trajectory preserves higher image fidelity, easing the quality-speed trade-off of few-step sampling. Experiments show that, at equivalent latency, the method significantly improves generation quality, achieving superior FID and LPIPS scores compared to diverse few-step sampling baselines, and establishes an efficient, high-fidelity approach to text-to-image synthesis.

📝 Abstract
Text-to-image diffusion models have demonstrated unprecedented capabilities for flexible and realistic image synthesis. Nevertheless, these models rely on a time-consuming sampling procedure, which has motivated attempts to reduce their latency. When improving efficiency, researchers often use the original diffusion model to train an additional network designed specifically for fast image generation. In contrast, our approach seeks to reduce latency directly, without any retraining, fine-tuning, or knowledge distillation. In particular, we find the repeated calculation of attention maps to be costly yet redundant, and instead suggest reusing them during sampling. Our specific reuse strategies are based on ODE theory, which implies that the later a map is reused, the smaller the distortion in the final image. We empirically compare our reuse strategies with few-step sampling procedures of comparable latency, finding that reuse generates images that are closer to those produced by the original high-latency diffusion model.
Problem

Research questions and friction points this paper is trying to address.

Reducing latency in diffusion models without retraining
Reusing attention maps to avoid redundant calculations
Improving image generation efficiency while maintaining quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reuse attention maps to reduce latency
Avoid retraining or fine-tuning diffusion models
Base reuse strategies on ODE theory
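
The caching-and-strided-reuse idea above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the class name, the toy linear projections, and the alternating recompute/reuse schedule are all assumptions made for clarity. On "compute" steps the softmax attention map is calculated and cached; on "reuse" steps the cached map is applied to fresh values, skipping the QK^T and softmax cost.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class ReusableAttention:
    """Hypothetical sketch of attention-map reuse during diffusion sampling.

    On compute steps the attention map is built and cached; on reuse steps
    the cached map is applied to fresh values, skipping QK^T and softmax.
    Names and the reuse schedule are illustrative, not the paper's exact code.
    """
    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.Wq = rng.standard_normal((dim, dim)) / dim ** 0.5
        self.Wk = rng.standard_normal((dim, dim)) / dim ** 0.5
        self.Wv = rng.standard_normal((dim, dim)) / dim ** 0.5
        self.scale = dim ** -0.5
        self.cached_map = None

    def __call__(self, x, reuse=False):
        v = x @ self.Wv                          # values are always recomputed
        if reuse and self.cached_map is not None:
            attn = self.cached_map               # reuse: skip QK^T and softmax
        else:
            q, k = x @ self.Wq, x @ self.Wk
            attn = softmax(q @ k.T * self.scale)
            self.cached_map = attn               # cache for later reuse steps
        return attn @ v

# Strided schedule: recompute on even steps, reuse on odd steps.
# (The paper's ODE analysis suggests reuse late in sampling distorts least.)
layer = ReusableAttention(dim=16)
x = np.random.default_rng(1).standard_normal((8, 16))  # 8 tokens, dim 16
outs = [layer(x, reuse=(t % 2 == 1)) for t in range(4)]
print(outs[0].shape)  # (8, 16)
```

In a real sampler the input changes between denoising steps, so reused maps are approximations of the current step's attention; the paper's contribution is showing when that approximation is benign.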
👥 Authors
Rosco Hunter, University of Warwick (AI safety and efficiency)
L. Dudziak, Samsung AI Centre Cambridge, UK
M. Abdelfattah, Cornell University, USA
A. Mehrotra, Samsung AI Centre Cambridge, UK
Sourav Bhattacharya, Samsung AI Centre Cambridge, UK
Hongkai Wen, University of Warwick (Machine Learning, ML/AI Systems, Cyber-Physical Systems)