🤖 AI Summary
Generative facial video coding (GFVC) faces deployment challenges due to its large model size and high computational cost. To address this, we propose a lightweight dual-mode optimization framework that combines architectural redesign with two-stage adaptive channel pruning. Specifically, standard 3×3 convolutions are replaced with slimmed convolutions, and soft pruning with learnable thresholds is coupled with mask-driven hard pruning, balancing training stability and inference efficiency. Our method achieves high-fidelity reconstruction while reducing model parameters by 90.4% and FLOPs by 88.9% compared to the baseline. Moreover, it surpasses state-of-the-art video codecs, including VVC, in perceptual quality (e.g., LPIPS and FID), demonstrating superior visual fidelity. The proposed framework significantly enhances the practicality of GFVC on resource-constrained devices, enabling efficient on-device generative video compression without compromising reconstruction quality.
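The paper does not detail the slimmed convolution design in this summary; as one common slimming choice, a depthwise-separable factorization illustrates the scale of savings that replacing a standard 3×3 convolution can yield (the channel counts below are illustrative assumptions, not values from the paper):

```python
# Parameter count: standard 3x3 convolution vs. a depthwise-separable
# replacement (depthwise 3x3 followed by a 1x1 pointwise convolution).
c_in, c_out, k = 64, 64, 3                # illustrative channel counts

standard = k * k * c_in * c_out           # full 3x3 conv: 36864 params
slimmed = k * k * c_in + c_in * c_out     # depthwise + pointwise: 4672 params

ratio = slimmed / standard                # ~0.127, i.e. ~87% fewer params
print(standard, slimmed, round(ratio, 3))
```

Savings of this order, compounded across layers and combined with channel pruning, are consistent with the ~90% parameter reduction reported.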
📝 Abstract
Generative Face Video Coding (GFVC) achieves superior rate-distortion performance by leveraging the strong inference capabilities of deep generative models. However, its practical deployment is hindered by large model parameters and high computational costs. To address this, we propose a lightweight GFVC framework that introduces dual-mode optimization, combining architectural redesign and operational refinement, to reduce complexity whilst preserving reconstruction quality. Architecturally, we replace traditional 3×3 convolutions with slimmer, more efficient layers, reducing complexity without compromising feature expressiveness. Operationally, we develop a two-stage adaptive channel pruning strategy: (1) soft pruning during training identifies redundant channels via learnable thresholds, and (2) hard pruning permanently eliminates these channels post-training using a derived mask. This dual-phase approach ensures both training stability and inference efficiency. Experimental results demonstrate that the proposed lightweight dual-mode optimization for GFVC achieves a 90.4% parameter reduction and an 88.9% computation saving compared to the baseline, whilst outperforming the state-of-the-art video coding standard Versatile Video Coding (VVC) in terms of perceptual-level quality metrics. As such, the proposed method is expected to enable efficient GFVC deployment in resource-constrained environments such as mobile edge devices.
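The two-stage pruning strategy can be illustrated with a minimal sketch. The sigmoid soft gate, the temperature, and the use of per-filter importance scores here are illustrative assumptions; the paper's exact gating function and threshold parameterization may differ:

```python
import numpy as np

def soft_gate(importance, threshold, temperature=0.05):
    # Stage 1 (training): a smooth sigmoid gate suppresses channels whose
    # importance falls below a learnable threshold while keeping gradients
    # flowing, so the network can recover a channel if it proves useful.
    return 1.0 / (1.0 + np.exp(-(importance - threshold) / temperature))

def hard_mask(importance, threshold):
    # Stage 2 (post-training): the soft gate is binarized into a mask and
    # the suppressed channels are permanently removed from the weights.
    return importance > threshold

# Toy conv layer with 8 output channels, weight shape (out_ch, in_ch, k, k).
W = np.random.randn(8, 16, 3, 3)
# Hand-picked importance scores standing in for, e.g., per-filter L1 norms.
importance = np.array([0.9, 0.05, 0.7, 0.02, 0.4, 0.01, 0.8, 0.03])
threshold = np.full(8, 0.1)              # learnable per channel in practice

gates = soft_gate(importance, threshold)  # near 0/1, but differentiable
mask = hard_mask(importance, threshold)   # binary: 4 of 8 channels survive
W_pruned = W[mask]                        # hard-pruned weights, (4, 16, 3, 3)
```

Soft gating during training avoids the instability of abruptly zeroing channels, while the final mask-driven removal is what actually shrinks parameters and FLOPs at inference.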