PLADIS: Pushing the Limits of Attention in Diffusion Models at Inference Time by Leveraging Sparsity

📅 2025-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing diffusion models struggle to enhance text–image alignment via classifier-free guidance (CFG) without requiring additional training, extra neural function evaluations (NFEs), or heuristic, manually selected target layers, which limits compatibility with guidance-distilled models. Method: a plug-and-play, inference-time sparse-attention enhancement that requires no fine-tuning and adds zero NFE overhead. It extrapolates query–key correlations in cross-attention layers between the standard softmax and a noise-robust sparse softmax. Contribution/Results: To our knowledge, this is the first method that significantly improves text-alignment quality and human-preference scores *without* modifying pretrained U-Net/Transformer weights or increasing computational cost. It is fully compatible with CFG and diverse guidance-distillation frameworks, eliminating reliance on manual target-layer selection.

📝 Abstract
Diffusion models have shown impressive results in generating high-quality conditional samples using guidance techniques such as Classifier-Free Guidance (CFG). However, existing methods often require additional training or neural function evaluations (NFEs), making them incompatible with guidance-distilled models. They also rely on heuristic approaches that require identifying target layers. In this work, we propose a novel and efficient method, termed PLADIS, which boosts pre-trained models (U-Net/Transformer) by leveraging sparse attention. Specifically, we extrapolate query-key correlations using softmax and its sparse counterpart in the cross-attention layer during inference, without requiring extra training or NFEs. By leveraging the noise robustness of sparse attention, our PLADIS unleashes the latent potential of text-to-image diffusion models, enabling them to excel in areas where they once struggled. It integrates seamlessly with guidance techniques, including guidance-distilled models. Extensive experiments show notable improvements in text alignment and human preference, offering a highly efficient and universally applicable solution.
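The core mechanism described above, replacing the cross-attention softmax with an extrapolation between the dense softmax and a sparse attention map, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the sparse map here is sparsemax (one common sparse softmax variant; the paper's choice of sparse transformation may differ), and the extrapolation scale `lam` is a hypothetical knob, not a value from the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def sparsemax(z):
    # Sparse softmax over the last axis: projects scores onto the
    # probability simplex, producing exact zeros for low scores.
    def _1d(v):
        v_sorted = np.sort(v)[::-1]
        k = np.arange(1, v.size + 1)
        cssv = np.cumsum(v_sorted)
        support = 1 + k * v_sorted > cssv
        k_z = k[support][-1]
        tau = (cssv[support][-1] - 1.0) / k_z
        return np.maximum(v - tau, 0.0)
    return np.apply_along_axis(_1d, -1, z)

def sparse_extrapolated_attention(Q, K, V, lam=1.5):
    # Cross-attention whose weights extrapolate from the dense softmax
    # toward a sparse attention map; lam > 1 pushes past the sparse map.
    # (Illustrative: lam and the combination rule are assumptions.)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    dense = softmax(scores)                    # standard attention weights
    sparse = sparsemax(scores)                 # noise-robust sparse weights
    weights = dense + lam * (sparse - dense)   # rows still sum to 1
    return weights @ V, weights
```

With `lam=0` this recovers standard cross-attention, and `lam=1` uses purely sparse weights; since both maps are row-normalized, the extrapolated weights sum to 1 for any `lam`, so the operation drops into a pretrained attention layer without retraining.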
Problem

Research questions and friction points this paper is trying to address.

How can diffusion models be enhanced without extra training or NFEs?
Can sparse attention mechanisms improve text-to-image generation at inference time?
How can such a method remain compatible with guidance-distilled models?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages sparse attention in cross-attention layers.
Extrapolates query-key correlations without extra training.
Enhances text-to-image diffusion models' performance.