Low-Bitrate Video Compression through Semantic-Conditioned Diffusion

πŸ“… 2025-11-29
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Traditional video codecs suffer from severe artifacts at ultra-low bitrates due to the misalignment between pixel-level fidelity and perceptual quality. To address this, we propose DiSCoβ€”a semantic-conditioned diffusion framework that departs from pixel-aligned reconstruction and instead reconstructs high-fidelity videos from compact, lightweight inputs: textual descriptions, temporally degraded videos, and optionally, topological sketches. Under a diffusion generative prior, DiSCo enables semantic-guided synthesis without requiring full-resolution conditioning. Key innovations include temporal forward filling, token interleaving, and modality-specific encoding/decoding techniques, enabling efficient multimodal fusion and semantics-driven reconstruction. Experiments demonstrate that DiSCo achieves 2–10Γ— improvements in perceptual quality (LPIPS, FID) over both conventional codecs and semantic-based baselines at extremely low bitrates, while significantly enhancing temporal coherence and visual realism.

πŸ“ Abstract
Traditional video codecs optimized for pixel fidelity collapse at ultra-low bitrates and produce severe artifacts. This failure arises from a fundamental misalignment between pixel accuracy and human perception. We propose a semantic video compression framework named DiSCo that transmits only the most meaningful information while relying on generative priors for detail synthesis. The source video is decomposed into three compact modalities: a textual description, a spatiotemporally degraded video, and optional sketches or poses, which respectively capture semantic, appearance, and motion cues. A conditional video diffusion model then reconstructs high-quality, temporally coherent videos from these compact representations. Temporal forward filling, token interleaving, and modality-specific codecs are proposed to improve multimodal generation and modality compactness. Experiments show that our method outperforms baseline semantic and traditional codecs by 2–10Γ— on perceptual metrics at low bitrates.
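The abstract's "temporal forward filling" suggests propagating the sparse, transmitted frames forward to fill dropped time steps before conditioning the diffusion model. The paper's exact operation is not spelled out here, so the sketch below shows only the generic idea (all names and the keep-mask scheme are assumptions):

```python
import numpy as np

def temporal_forward_fill(frames, keep_mask):
    """Carry the most recent transmitted frame forward into dropped slots.

    frames:    (T, H, W, C) array; untransmitted frames may hold garbage.
    keep_mask: length-T boolean array, True where a frame was transmitted.
    (Hypothetical interface; the paper's actual filling rule may differ.)
    """
    filled = frames.copy()
    last = None
    for t in range(len(frames)):
        if keep_mask[t]:
            last = filled[t]          # remember the latest kept frame
        elif last is not None:
            filled[t] = last          # forward-fill the dropped slot
    return filled

# Example: 6 frames, only frames 0 and 3 transmitted.
T, H, W, C = 6, 2, 2, 3
frames = np.stack([np.full((H, W, C), t, dtype=np.float32) for t in range(T)])
keep = np.array([True, False, False, True, False, False])
out = temporal_forward_fill(frames, keep)
print(out[:, 0, 0, 0])  # [0. 0. 0. 3. 3. 3.]
```

This gives the generative model a dense (if temporally blurred) conditioning signal at near-zero extra bitrate, since only the kept frames are transmitted.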
Problem

Research questions and friction points this paper is trying to address.

Addresses severe artifacts in ultra-low bitrate video compression.
Aligns compression with human perception over pixel accuracy.
Reconstructs high-quality videos from compact semantic representations.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic-conditioned diffusion for video compression
Decomposing video into compact textual and visual modalities
Multimodal generation with temporal coherence and compactness
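"Token interleaving" for multimodal fusion can be pictured as merging the per-modality token streams (text, degraded video, sketches) into one sequence that the conditional diffusion model attends over. The round-robin pattern below is an assumption for illustration, not the paper's confirmed scheme:

```python
from itertools import chain, zip_longest

def interleave_tokens(*streams):
    """Round-robin interleave per-modality token lists into one sequence,
    skipping exhausted streams. Hypothetical scheme: the paper's exact
    interleaving order is not specified in this summary."""
    merged = zip_longest(*streams)  # pads shorter streams with None
    return [tok for tok in chain.from_iterable(merged) if tok is not None]

text   = ["t0", "t1"]
video  = ["v0", "v1", "v2", "v3"]
sketch = ["s0"]
print(interleave_tokens(text, video, sketch))
# ['t0', 'v0', 's0', 't1', 'v1', 'v2', 'v3']
```

Interleaving (rather than concatenating whole modalities back-to-back) keeps semantically related tokens from different modalities close together in the sequence, which is one common motivation for such layouts in multimodal transformers.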