Layer- and Timestep-Adaptive Differentiable Token Compression Ratios for Efficient Diffusion Transformers

📅 2024-12-22
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Despite achieving state-of-the-art (SOTA) image generation quality, DiT models suffer from high latency and memory overhead due to computational redundancy, hindering deployment on resource-constrained devices. To address this, we propose DiffCR, the first dynamic, differentiable token compression framework tailored for DiTs, which adaptively allocates computation at three granularities: per token, per layer, and per denoising timestep. DiffCR employs differentiable routing to skip uninformative tokens, learns layer-wise compression ratios from zero-initialized parameters, and adjusts timestep-wise compression ratios according to the noise level. Crucially, the importance predictors are fine-tuned jointly with the model weights in an end-to-end differentiable manner, using a gradient approximation for the discrete routing decisions. Evaluated on text-to-image generation and image inpainting, DiffCR reduces FLOPs by up to 58% while preserving SOTA generation quality, significantly improving the quality-efficiency trade-off.
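
To make the token-level routing concrete, here is a minimal PyTorch sketch of a per-layer router with a straight-through gradient approximation, so the hard keep/bypass decision stays trainable end to end. The class names (`TokenRouter`, `RoutedBlock`), the fixed `keep_ratio`, and the block contents are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class TokenRouter(nn.Module):
    """Scores tokens and keeps only the top fraction; the rest bypass the layer."""

    def __init__(self, dim: int, keep_ratio: float = 0.5):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)      # per-token importance score
        self.keep_ratio = keep_ratio

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) -> soft importance scores in [0, 1]
        scores = torch.sigmoid(self.scorer(x)).squeeze(-1)          # (B, N)
        k = max(1, int(self.keep_ratio * scores.shape[1]))
        threshold = scores.topk(k, dim=1).values[:, -1:]            # k-th largest score
        hard_mask = (scores >= threshold).float()                   # 0/1 keep decisions
        # Straight-through estimator: the forward pass uses the hard mask,
        # the backward pass sends gradients through the soft scores.
        return hard_mask + scores - scores.detach()


class RoutedBlock(nn.Module):
    """Wraps a transformer-style block so low-scoring tokens skip its computation."""

    def __init__(self, dim: int, keep_ratio: float = 0.5):
        super().__init__()
        self.router = TokenRouter(dim, keep_ratio)
        self.block = nn.Sequential(
            nn.LayerNorm(dim), nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = self.router(x).unsqueeze(-1)   # (B, N, 1)
        # Masking the output keeps the sketch short; a real implementation would
        # gather only the kept tokens before the block to actually save FLOPs.
        return x + mask * self.block(x)


if __name__ == "__main__":
    out = RoutedBlock(dim=64, keep_ratio=0.5)(torch.randn(2, 16, 64))
    print(out.shape)  # torch.Size([2, 16, 64])
```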

📝 Abstract
Diffusion Transformers (DiTs) have achieved state-of-the-art (SOTA) image generation quality but suffer from high latency and memory inefficiency, making them difficult to deploy on resource-constrained devices. One major efficiency bottleneck is that existing DiTs apply equal computation across all regions of an image. However, not all image tokens are equally important, and certain localized regions, such as objects, require more computation than others. To address this, we propose DiffCR, a dynamic DiT inference framework with differentiable compression ratios, which automatically learns to dynamically route computation across layers and timesteps for each image token, resulting in efficient DiTs. Specifically, DiffCR integrates three features: (1) A token-level routing scheme where each DiT layer includes a router that is fine-tuned jointly with model weights to predict token importance scores. In this way, unimportant tokens bypass the entire layer's computation; (2) A layer-wise differentiable ratio mechanism where different DiT layers automatically learn varying compression ratios from a zero initialization, resulting in large compression ratios in redundant layers while others remain less compressed or even uncompressed; (3) A timestep-wise differentiable ratio mechanism where each denoising timestep learns its own compression ratio. The resulting pattern shows higher ratios for noisier timesteps and lower ratios as the image becomes clearer. Extensive experiments on text-to-image and inpainting tasks show that DiffCR effectively captures dynamism across token, layer, and timestep axes, achieving superior trade-offs between generation quality and efficiency compared to prior works. The project website is available at https://www.haoranyou.com/diffcr.
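
The abstract only states that the layer-wise ratios are learned from a zero initialization; the sketch below shows one way such a parameterization could look. The clamp(2·sigmoid(α) − 1) mapping and the module name `LayerCompressionRatio` are assumptions for illustration, chosen so that every layer starts fully uncompressed.

```python
import torch
import torch.nn as nn


class LayerCompressionRatio(nn.Module):
    """One learnable compression ratio per DiT layer, starting from zero compression."""

    def __init__(self, num_layers: int):
        super().__init__()
        # Zero initialization: every layer starts fully uncompressed.
        self.alpha = nn.Parameter(torch.zeros(num_layers))

    def forward(self) -> torch.Tensor:
        # Map each alpha into [0, 1); alpha = 0 gives ratio 0, so compression only
        # grows in layers that training finds redundant.
        return torch.clamp(2.0 * torch.sigmoid(self.alpha) - 1.0, min=0.0)


if __name__ == "__main__":
    ratios = LayerCompressionRatio(num_layers=28)()
    print(ratios)  # all zeros at initialization
```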
Problem

Research questions and friction points this paper is trying to address.

Addresses high latency and memory inefficiency in Diffusion Transformers
Proposes dynamic computation routing for varying image token importance
Enhances efficiency via layer- and timestep-adaptive token compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token-level routing for dynamic computation bypass
Layer-wise differentiable compression ratio learning
Timestep-adaptive compression ratio optimization (see the sketch below)
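
A sketch of the timestep-wise counterpart under the same assumptions: one zero-initialized parameter per bucket of denoising timesteps, so every timestep starts uncompressed and training can raise the ratios of noisier steps, matching the pattern the paper reports. The bucketing and the `TimestepCompressionRatio` name are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class TimestepCompressionRatio(nn.Module):
    """One learnable compression ratio per bucket of denoising timesteps."""

    def __init__(self, num_timesteps: int = 1000, num_buckets: int = 50):
        super().__init__()
        self.num_timesteps = num_timesteps
        self.num_buckets = num_buckets
        # Zero initialization: every timestep starts fully uncompressed.
        self.beta = nn.Parameter(torch.zeros(num_buckets))

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # Map integer timesteps to buckets and return each bucket's ratio in [0, 1).
        bucket = (t * self.num_buckets // self.num_timesteps).clamp(max=self.num_buckets - 1)
        return torch.clamp(2.0 * torch.sigmoid(self.beta[bucket]) - 1.0, min=0.0)


if __name__ == "__main__":
    ratios = TimestepCompressionRatio()(torch.tensor([999, 500, 10]))
    print(ratios)  # all zeros at init; training can raise the ratios of noisy timesteps
```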