SemanticDialect: Semantic-Aware Mixed-Format Quantization for Video Diffusion Transformers

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high memory and computational overhead of video diffusion Transformers when deployed on edge devices, where existing quantization methods often degrade generation quality because large activation fluctuations disrupt semantic and temporal consistency. To mitigate this, the authors propose SemanticDialect, a quantization framework that enables efficient block-level mixed-format quantization through an extensible format library ("formatbook") and lookup tables. SemanticDialect further reduces quantization error by decomposing activations and employing attention-guided salient token selection. A semantic-aware dialect allocation (SeDA) strategy is introduced to enhance quantization consistency among semantically related tokens. Experiments on video DiT (VDiT) models and Open-Sora 2.0 demonstrate that SemanticDialect significantly outperforms existing quantization approaches, achieving near-FP16 accuracy while maintaining low online overhead.
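The activation-decomposition idea in the summary (quantize, then re-quantize the residual error of attention-salient tokens and add it back) can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the bit-width, uniform symmetric quantizer, and top-k salience rule are all assumptions.

```python
import numpy as np

def fake_quant(x, n_bits=4):
    """Uniform symmetric fake-quantization (quantize then dequantize)."""
    scale = np.abs(x).max() / (2 ** (n_bits - 1) - 1) + 1e-8
    return np.round(x / scale) * scale

def decompose_quant(acts, attn_scores, top_k=2, n_bits=4):
    """Quantize activations, then re-quantize the residual error of the
    most attention-salient tokens and add it back (toy sketch)."""
    q = fake_quant(acts, n_bits)
    residual = acts - q
    # attention-guided salient token selection: keep the top-k tokens
    salient = np.argsort(attn_scores)[-top_k:]
    # the residual has a much smaller dynamic range, so re-quantizing it
    # at the same bit-width recovers most of the lost precision
    q[salient] += fake_quant(residual[salient], n_bits)
    return q

np.random.seed(0)
tokens = np.random.randn(8, 16)   # 8 tokens, 16 channels (toy shapes)
attn = np.random.rand(8)          # hypothetical per-token attention salience
deq = decompose_quant(tokens, attn)
```

Because the residual's magnitude is bounded by half a quantization step, its own quantization grid is much finer, so the salient tokens end up with strictly lower reconstruction error than under plain one-shot quantization.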

📝 Abstract
Diffusion Transformers (DiT) achieve strong video generation quality, but their memory and compute costs hinder edge deployment. Quantization can reduce these costs, yet existing methods often degrade video quality in the face of high activation variation and the need to preserve semantic and temporal coherence. We propose SemanticDialect, which advances recent block-wise mixed-format quantization (selecting a per-block optimal format, a "dialect", from multiple candidates, a "formatbook") by scaling the formatbook with lookup tables for quantization errors and quantized values, enabling efficient per-block format selection and quantization at low online cost. We also introduce an activation decomposition that reduces quantization error by re-quantizing residual errors and adding them back, guided by attention-based salient token selection. We further propose semantic-aware dialect assignment (SeDA), which improves quantized-value consistency by sharing a sub-formatbook among semantically correlated tokens. Experiments on video DiT (VDiT) models show that SemanticDialect outperforms prior VDiT quantization methods and fine-grained block-wise format baselines, while approaching FP16 quality on Open-Sora 2.0.
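The per-block format selection described in the abstract can be sketched with a toy formatbook, where each dialect is a set of representable values and the block is assigned to whichever dialect minimizes quantization error. The dialect names and value grids below are illustrative assumptions, not the paper's actual formats, and the error metric (MSE) is a guess.

```python
import numpy as np

# Toy "formatbook": each dialect is a grid of representable values
# normalized to [-1, 1]. Real dialects would be INT/FP variants.
FORMATBOOK = {
    "int4": np.linspace(-1, 1, 15),
    "fp4ish": np.array([-1, -0.5, -0.25, -0.125, -0.0625, 0,
                        0.0625, 0.125, 0.25, 0.5, 1.0]),
}

def quantize_to_grid(x, grid):
    """Map each value to its nearest representable grid point (LUT-style)."""
    idx = np.abs(x[..., None] - grid).argmin(axis=-1)
    return grid[idx]

def select_dialect(block):
    """Pick the per-block dialect with the lowest quantization MSE."""
    scale = np.abs(block).max() + 1e-8
    errs = {name: ((block - scale * quantize_to_grid(block / scale, grid)) ** 2).mean()
            for name, grid in FORMATBOOK.items()}
    best = min(errs, key=errs.get)
    return best, errs[best]

np.random.seed(0)
blk = np.random.randn(64)          # one toy activation block
name, err = select_dialect(blk)
```

A uniform grid tends to win on roughly uniform blocks, while the non-uniform "fp4ish" grid favors heavy-tailed blocks; precomputing the per-grid errors in lookup tables is what keeps this selection cheap at inference time.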
Problem

Research questions and friction points this paper is trying to address.

Video Diffusion Transformers
Quantization
Semantic Coherence
Temporal Coherence
Activation Variation
Innovation

Methods, ideas, or system contributions that make the work stand out.

mixed-format quantization
semantic-aware quantization
video diffusion transformers
activation decomposition
formatbook