🤖 AI Summary
This work addresses the excessive memory consumption of key-value (KV) caching in autoregressive video generation, which severely limits the length and temporal consistency of generated videos. The authors propose the first training-free 2-bit KV cache quantization framework, leveraging semantic-aware smoothing and progressive residual quantization to drastically reduce memory usage while preserving generation quality. Evaluated across multiple benchmarks, the method outperforms existing approaches, achieving up to a 7× reduction in KV cache memory with less than a 4% increase in end-to-end latency. This represents a Pareto-optimal trade-off between memory efficiency and generation fidelity, enabling longer and more coherent autoregressive video synthesis without retraining or architectural modifications.
📝 Abstract
Despite rapid progress in autoregressive video diffusion, an emerging system-algorithm bottleneck limits both deployability and generation capability: KV cache memory. In autoregressive video generation models, the KV cache grows with generation history and quickly dominates GPU memory, often exceeding 30 GB, preventing deployment on widely available hardware. More critically, constrained KV cache budgets restrict the effective working memory, directly degrading long-horizon consistency in identity, layout, and motion. To address this challenge, we present Quant VideoGen (QVG), a training-free KV cache quantization framework for autoregressive video diffusion models. QVG leverages video spatiotemporal redundancy through Semantic-Aware Smoothing, producing low-magnitude, quantization-friendly residuals. It further introduces Progressive Residual Quantization, a coarse-to-fine multi-stage scheme that reduces quantization error while enabling a smooth quality-memory trade-off. Across LongCat-Video, HY-WorldPlay, and Self-Forcing benchmarks, QVG establishes a new Pareto frontier between quality and memory efficiency, reducing KV cache memory by up to 7.0× with less than 4% end-to-end latency overhead while consistently outperforming existing baselines in generation quality.
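The abstract does not give implementation details, but the core idea of progressive residual quantization (quantize coarsely, then quantize the leftover residual at each subsequent stage) can be sketched as follows. This is a minimal illustration with a hypothetical uniform symmetric quantizer and illustrative bit widths, not the paper's actual method:

```python
import numpy as np

def quantize(x, bits):
    """Hypothetical uniform symmetric quantizer: returns the dequantized
    reconstruction of x at the given bit width."""
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1) + 1e-8
    levels = np.clip(np.round(x / scale),
                     -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return levels * scale

def progressive_residual_quantize(kv, stage_bits=(2, 2)):
    """Coarse-to-fine multi-stage scheme: each stage quantizes the residual
    left by the previous stages; stage count and bit widths are illustrative
    knobs for a quality-memory trade-off."""
    approx = np.zeros_like(kv)
    residual = kv
    for bits in stage_bits:
        stage = quantize(residual, bits)   # quantize what remains
        approx = approx + stage            # accumulate reconstruction
        residual = residual - stage        # pass residual to next stage
    return approx

# Adding stages shrinks reconstruction error on this toy KV tensor.
rng = np.random.default_rng(0)
kv = rng.standard_normal((4, 64)).astype(np.float32)
err_1stage = np.abs(kv - progressive_residual_quantize(kv, (2,))).mean()
err_2stage = np.abs(kv - progressive_residual_quantize(kv, (2, 2))).mean()
assert err_2stage < err_1stage
```

The smoothing step described in the abstract would, on this reading, reduce the magnitude of `kv` before quantization so that the same bit budget spends its levels on a tighter range; the sketch above only covers the multi-stage residual part.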