TMPDiff: Temporal Mixed-Precision for Diffusion Models

📅 2026-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high inference latency of diffusion models in text-to-image generation, where existing quantization methods employ uniform precision and overlook the varying numerical precision requirements across denoising timesteps. To overcome this limitation, we propose TMPDiff—the first mixed-precision quantization framework for diffusion models that explicitly incorporates the temporal dimension. Guided by an error accumulation hypothesis, TMPDiff introduces an adaptive binary search algorithm that reduces the search complexity from exponential to linear, enabling dynamic precision allocation per timestep. Extensive experiments across four state-of-the-art diffusion models and three datasets demonstrate that TMPDiff significantly outperforms uniform-precision baselines at equivalent acceleration ratios, achieving 10%–20% higher perceptual quality (measured by SSIM). Notably, on FLUX.1-dev, TMPDiff attains a 2.5× speedup while preserving 90% of the full-precision model’s SSIM.
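The error accumulation hypothesis referenced above can be stated compactly (the notation here is ours, not the paper's): the total quantization error is modeled as the sum of per-step contributions,

$$\epsilon_{\text{total}} \;\approx\; \sum_{t=1}^{T} \epsilon_t(b_t),$$

where $\epsilon_t(b_t)$ is the error introduced at denoising step $t$ when that step runs at bitwidth $b_t$. Under this additive model, a global error budget can be split into independent per-step budgets, which is what makes searching each timestep's precision separately, rather than jointly over all $B^T$ combinations, a sound strategy.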

📝 Abstract
Diffusion models are the go-to method for text-to-image generation, but their iterative denoising process incurs high inference latency. Quantization reduces compute time by using lower bitwidths, but existing methods apply a fixed precision across all denoising timesteps, leaving an entire optimization axis unexplored. We propose TMPDiff, a temporal mixed-precision framework for diffusion models that assigns different numeric precision to different denoising timesteps. We hypothesize that quantization errors accumulate additively across timesteps, and validate this experimentally. Based on these observations, we develop an adaptive bisectioning-based algorithm that assigns per-step precisions with linear evaluation complexity, reducing an otherwise exponential search. Across four state-of-the-art diffusion models and three datasets, TMPDiff consistently outperforms uniform-precision baselines at matched speedup, achieving 10%–20% improvement in perceptual quality (SSIM). On FLUX.1-dev, TMPDiff achieves 90% of the full-precision model's SSIM at a 2.5× speedup over 16-bit inference.
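To make the search-complexity claim concrete, here is a minimal sketch of per-timestep precision allocation via binary search. Everything below is an assumption for illustration: the function names, the candidate bitwidths, and especially the toy `step_error` model stand in for the paper's actual (unpublished here) error measurements. Under the additive-error hypothesis, each timestep gets an equal share of the error budget, and a binary search over sorted bitwidths finds the cheapest admissible precision per step, giving O(T log B) evaluations instead of O(B^T).

```python
# Hypothetical sketch -- names and the error model are our assumptions,
# not TMPDiff's actual implementation.

BITWIDTHS = [4, 6, 8, 16]  # candidate precisions, sorted ascending


def step_error(timestep: int, bits: int) -> float:
    """Stand-in for the measured quantization error of one denoising
    step at a given bitwidth (toy model: error shrinks as bits grow
    and as denoising progresses)."""
    return (16 - bits) / (16 * (timestep + 1))


def allocate_precisions(num_steps: int, total_budget: float) -> list[int]:
    """Assign each timestep the lowest bitwidth whose per-step error
    fits an equal share of the budget; additive errors mean the
    per-step budgets simply sum to the total."""
    per_step_budget = total_budget / num_steps
    plan = []
    for t in range(num_steps):
        lo, hi = 0, len(BITWIDTHS) - 1
        # binary search for the smallest admissible bitwidth: O(log B)
        while lo < hi:
            mid = (lo + hi) // 2
            if step_error(t, BITWIDTHS[mid]) <= per_step_budget:
                hi = mid  # admissible -- try cheaper
            else:
                lo = mid + 1  # too lossy -- need more bits
        plan.append(BITWIDTHS[lo])
    return plan


plan = allocate_precisions(num_steps=8, total_budget=1.0)
```

With this toy error model, early (noisy) steps get high precision and later steps are progressively quantized more aggressively, which is one plausible outcome of a temporal allocation; the real per-step profile depends on the measured errors.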
Problem

Research questions and friction points this paper is trying to address.

diffusion models
inference latency
quantization
mixed-precision
temporal optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Temporal Mixed-Precision
Diffusion Models
Quantization
Adaptive Bisectioning
Inference Acceleration