🤖 AI Summary
To address the challenges of large parameter count and high computational cost when deploying Whisper models on resource-constrained edge devices, this paper proposes a structured pruning framework tailored for such scenarios. Specifically, we introduce sparse group LASSO regularization to enforce structured sparsity during training and design a weight-statistics-aware adaptive pruning strategy to jointly optimize both parameter count and FLOPs. Additionally, we develop a customized text normalizer to enhance the reliability of Word Error Rate (WER) evaluation. Evaluated on the Common Voice 11.0 Hindi dataset, our method achieves 35.4% parameter compression with 18.5% fewer FLOPs on Whisper-small, and 31% parameter compression with 16.95% fewer FLOPs on Whisper-medium, all with zero WER degradation, significantly outperforming existing pruning approaches.
📝 Abstract
Whisper models have achieved remarkable progress in speech recognition, yet their large size remains a bottleneck for deployment on resource-constrained edge devices. This paper proposes a framework to design fine-tuned variants of Whisper that address this problem. Structured sparsity is enforced via the Sparse Group LASSO penalty as a loss regularizer, reducing the number of floating-point operations (FLOPs). Further, a weight-statistics-aware pruning algorithm is proposed. We also design a custom text normalizer for WER evaluation. On the Common Voice 11.0 Hindi dataset, without degrading WER, we obtain (a) a 35.4% reduction in model parameters, 14.25% lower memory consumption, and 18.5% fewer FLOPs on Whisper-small; (b) a 31% reduction in model parameters, 15.29% lower memory consumption, and 16.95% fewer FLOPs on Whisper-medium; and (c) substantially outperform the state-of-the-art Iterative Magnitude Pruning based method, pruning 18.7% more parameters along with a 12.31-point reduction in WER.
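The Sparse Group LASSO regularizer named above combines an element-wise L1 term with a group-wise L2 term, so that entire structural groups (e.g. an attention head's or FFN channel's weights) are driven toward zero together and can then be pruned as a unit, cutting FLOPs rather than just parameters. A minimal sketch of the penalty, with illustrative `lam1`/`lam2` coefficients not taken from the paper:

```python
import numpy as np

def sparse_group_lasso_penalty(weight_groups, lam1=1e-4, lam2=1e-4):
    """Sparse Group LASSO: lam1 * sum_i |w_i|  +  lam2 * sum_g sqrt(p_g) * ||w_g||_2.

    weight_groups: list of 1-D arrays, one per structural group
    (the grouping into heads/channels is an illustrative assumption).
    """
    # Element-wise L1 term: promotes sparsity within every group.
    l1 = sum(np.abs(g).sum() for g in weight_groups)
    # Group L2 term, scaled by sqrt(group size): zeroes out whole groups.
    group = sum(np.sqrt(g.size) * np.linalg.norm(g) for g in weight_groups)
    return lam1 * l1 + lam2 * group

# A group pushed to exactly zero contributes nothing to the penalty and
# can be removed as a structural unit at pruning time.
groups = [np.zeros(3), np.array([0.5, -0.3, 0.2])]
penalty = sparse_group_lasso_penalty(groups, lam1=1.0, lam2=1.0)
```

In training, this penalty would simply be added to the task loss before backpropagation; the paper's actual group definitions and coefficient schedule may differ from this sketch.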