FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation

📅 2025-05-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion Transformers (DiTs) suffer from slow inference and high GPU memory consumption due to iterative sampling and deep Transformer architectures. To address this, we propose a latent-state-level caching and compression framework. Our method introduces three core innovations: (1) a spatially aware token selection mechanism that dynamically retains semantically critical tokens; (2) cross-timestep caching and reuse of Transformer layers to eliminate redundant computation; and (3) a learnable linear approximation guided by statistical hypothesis testing, with significance-driven cache decisions that bound the approximation error. Evaluated across multiple DiT variants, the approach achieves substantial speedup (a 42% average latency reduction) and up to 58% GPU memory compression, while attaining better FID and t-FID scores than existing caching methods, with no degradation in generation quality.

📝 Abstract
Diffusion Transformers (DiT) are powerful generative models but remain computationally intensive due to their iterative structure and deep transformer stacks. To alleviate this inefficiency, we propose FastCache, a hidden-state-level caching and compression framework that accelerates DiT inference by exploiting redundancy within the model's internal representations. FastCache introduces a dual strategy: (1) a spatial-aware token selection mechanism that adaptively filters redundant tokens based on hidden-state saliency, and (2) a transformer-level cache that reuses latent activations across timesteps when changes are statistically insignificant. These modules work jointly to reduce unnecessary computation while preserving generation fidelity through a learnable linear approximation. Theoretical analysis shows that FastCache maintains bounded approximation error under a hypothesis-testing-based decision rule. Empirical evaluations across multiple DiT variants demonstrate substantial reductions in latency and memory usage, with the best generation quality among caching methods as measured by FID and t-FID. A code implementation of FastCache is available on GitHub at https://github.com/NoakLiu/FastCache-xDiT.
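The abstract's cache-or-recompute rule can be sketched as follows. This is a minimal illustration, not the paper's implementation: the relative-change statistic, the threshold value `alpha`, and the shapes of `W` and `b` are assumptions standing in for FastCache's hypothesis-testing decision and learnable linear approximation.

```python
import numpy as np

def should_reuse_cache(h_t, h_prev, alpha=0.05):
    """Sketch of the significance-driven cache decision: if the hidden
    state changed insignificantly since the previous timestep, skip the
    full block. A simple relative-change statistic against a fixed
    threshold stands in for the paper's hypothesis test (assumption)."""
    rel_change = np.linalg.norm(h_t - h_prev) / (np.linalg.norm(h_prev) + 1e-8)
    return rel_change < alpha

def cached_block(h_t, h_prev, block_fn, W, b):
    """When the change is insignificant, replace the full transformer
    block with a cheap learnable linear approximation W h + b;
    otherwise run the block as usual."""
    if should_reuse_cache(h_t, h_prev):
        return h_t @ W + b   # linear approximation path (cached regime)
    return block_fn(h_t)     # full transformer block (significant change)
```

In a real DiT, `W` and `b` would be trained so that the linear path tracks the block's output in the low-change regime, which is what lets the approximation error stay bounded under the decision rule.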
Problem

Research questions and friction points this paper is trying to address.

How to accelerate DiT inference by reducing computational redundancy
How to filter redundant tokens via spatial-aware token selection
How to reuse latent activations across timesteps without degrading fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatial-aware token selection mechanism
Transformer-level cache reuse
Learnable linear approximation
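The spatial-aware token selection above can be sketched as a saliency ranking. This is an illustrative stand-in: using the magnitude of cross-timestep change as the saliency score, and a fixed `keep_ratio`, are assumptions, not details confirmed by the paper.

```python
import numpy as np

def select_salient_tokens(h_t, h_prev, keep_ratio=0.5):
    """Sketch of spatial-aware token selection: rank tokens by a
    saliency proxy (per-token hidden-state change across timesteps,
    an assumption) and keep only the top fraction for full
    computation; the rest can reuse cached activations."""
    saliency = np.linalg.norm(h_t - h_prev, axis=-1)  # per-token change
    k = max(1, int(keep_ratio * h_t.shape[0]))        # number of tokens kept
    keep_idx = np.argsort(-saliency)[:k]              # most-changed tokens first
    return np.sort(keep_idx)
```

Only the selected token rows would then pass through the transformer block, which is where the latency and memory savings come from.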