6Bit-Diffusion: Inference-Time Mixed-Precision Quantization for Video Diffusion Models

📅 2026-03-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of efficiently deploying video diffusion models, which are hindered by high memory and computational costs, while existing static quantization methods struggle to balance efficiency and generation quality. The authors propose a mixed-precision quantization framework during inference that combines NVFP4 and INT8 formats. They uncover, for the first time, a strong correlation between the quantization sensitivity of linear layers and the input–output divergence within Transformer blocks, enabling a lightweight dynamic precision allocation strategy. Additionally, a temporal difference caching mechanism is introduced to exploit residual consistency across timesteps, skipping redundant computations. The proposed approach achieves a 1.92Γ— end-to-end speedup and 3.32Γ— memory compression while preserving high-quality video generation.
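The dynamic precision allocation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a precomputed per-block divergence score and a fixed NVFP4 budget, and the function and parameter names (`allocate_precision`, `fp4_ratio`) are hypothetical.

```python
def allocate_precision(divergences, fp4_ratio=0.7):
    """Assign NVFP4 to the most temporally stable blocks, INT8 to the rest.

    divergences[i] is a scalar measuring the input-output divergence of
    Transformer block i (the paper's proxy for quantization sensitivity).
    fp4_ratio is an assumed budget knob, not a value from the paper.
    """
    # Rank blocks from least to most divergent.
    order = sorted(range(len(divergences)), key=lambda i: divergences[i])
    n_fp4 = int(len(order) * fp4_ratio)
    # Low-divergence blocks tolerate aggressive NVFP4 quantization;
    # high-divergence ("volatile") blocks keep the safer INT8 format.
    return {idx: ("nvfp4" if rank < n_fp4 else "int8")
            for rank, idx in enumerate(order)}
```

Because the divergence scores are cheap to compute at inference time, such a predictor can re-allocate precision per timestep rather than committing to a static bit-width map.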

πŸ“ Abstract
Diffusion transformers have demonstrated remarkable capabilities in generating videos. However, their practical deployment is severely constrained by high memory usage and computational cost. Post-Training Quantization provides a practical way to reduce memory usage and boost computation speed. Existing quantization methods typically apply a static bit-width allocation, overlooking how the quantization difficulty of activations varies across diffusion timesteps, which leads to a suboptimal trade-off between efficiency and quality. In this paper, we propose an inference-time NVFP4/INT8 mixed-precision quantization framework. We find a strong linear correlation between a block's input-output difference and the quantization sensitivity of its internal linear layers. Based on this insight, we design a lightweight predictor that dynamically allocates NVFP4 to temporally stable layers to maximize memory compression, while selectively preserving INT8 for volatile layers to ensure robustness. This adaptive precision strategy enables aggressive quantization without compromising generation quality. Besides this, we observe that the residual between the input and output of a Transformer block exhibits high temporal consistency across timesteps. Leveraging this temporal redundancy, we introduce Temporal Delta Cache (TDC) to skip computations for these invariant blocks, further reducing the computational cost. Extensive experiments demonstrate that our method achieves 1.92$\times$ end-to-end acceleration and 3.32$\times$ memory reduction, setting a new baseline for efficient inference in Video DiTs.
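The Temporal Delta Cache idea can be sketched as below. This is a simplified, hypothetical rendering under assumptions the abstract does not specify: the skip decision here is based on how much a block's input changed since the previous timestep (the tolerance `tol` and all names are illustrative, not from the paper).

```python
import numpy as np

class TemporalDeltaCache:
    """Sketch of TDC: reuse a block's cached residual (output - input)
    from the previous timestep when the block's input is nearly unchanged,
    skipping the block computation entirely."""

    def __init__(self, tol=0.05):
        self.tol = tol            # assumed relative-change threshold
        self.prev_delta = {}      # block_id -> cached residual
        self.prev_input = {}      # block_id -> input at last real compute

    def forward_block(self, block_id, x, block_fn):
        cached = self.prev_delta.get(block_id)
        if cached is not None:
            # Relative change of the input versus the last computed step.
            rel = (np.abs(x - self.prev_input[block_id]).mean()
                   / (np.abs(x).mean() + 1e-8))
            if rel < self.tol:
                # Residual assumed temporally consistent: skip the block.
                return x + cached
        # Full compute; refresh the cached residual.
        y = block_fn(x)
        self.prev_delta[block_id] = y - x
        self.prev_input[block_id] = x
        return y
```

In a denoising loop, `forward_block` would wrap each Transformer block; blocks whose inputs drift slowly across timesteps are served from the cache, which is where the reported end-to-end speedup would come from.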
Problem

Research questions and friction points this paper is trying to address.

Video Diffusion Models
Quantization
Memory Efficiency
Computational Cost
Mixed-Precision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixed-Precision Quantization
Video Diffusion Models
Temporal Delta Cache
Quantization Sensitivity
Inference-Time Adaptation