Structured Sparsity and Weight-adaptive Pruning for Memory and Compute efficient Whisper models

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the large parameter count and high computational cost of deploying Whisper models on resource-constrained edge devices, this paper proposes a structured pruning framework tailored to such scenarios. Specifically, we introduce Sparse Group LASSO regularization to enforce structured sparsity during training and design a weight-statistics-aware adaptive pruning strategy to jointly reduce parameter count and FLOPs. Additionally, we develop a customized text normalizer to enhance the reliability of Word Error Rate (WER) evaluation. Evaluated on the Common Voice 11.0 Hindi dataset, our method achieves a 35.4% parameter reduction with 18.5% fewer FLOPs on Whisper-small, and a 31% parameter reduction with 16.95% fewer FLOPs on Whisper-medium, with zero WER degradation, significantly outperforming existing pruning approaches.
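The Sparse Group LASSO penalty combines an element-wise L1 term with a group-wise L2 term, so that whole groups of weights can be driven to zero together, which is what turns sparsity into FLOP savings. Below is a minimal PyTorch sketch of such a regularizer; the row-wise grouping and coefficient values are illustrative assumptions, not the paper's exact configuration, which might group by attention heads or feed-forward channels instead.

```python
import torch

def sparse_group_lasso(weight: torch.Tensor,
                       lam_l1: float = 1e-5,
                       lam_group: float = 1e-4) -> torch.Tensor:
    """Sparse Group LASSO penalty for one 2-D weight matrix.

    Each output row is treated as one group here (an illustrative
    assumption; other groupings are possible).
    """
    l1_term = weight.abs().sum()            # element-wise L1: unstructured sparsity
    group_norms = weight.norm(dim=1)        # one L2 norm per row (group)
    sqrt_p = weight.shape[1] ** 0.5         # standard sqrt(group size) weighting
    group_term = sqrt_p * group_norms.sum() # group L2: pushes whole rows to zero
    return lam_l1 * l1_term + lam_group * group_term

# During fine-tuning, the penalty is simply added to the task loss, e.g.:
# loss = ce_loss + sum(sparse_group_lasso(p)
#                      for p in model.parameters() if p.dim() == 2)
```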

📝 Abstract
Whisper models have achieved remarkable progress in speech recognition, yet their large size remains a bottleneck for deployment on resource-constrained edge devices. This paper proposes a framework to design fine-tuned variants of Whisper that address this problem. Structured sparsity is enforced via the Sparse Group LASSO penalty as a loss regularizer to reduce the number of floating-point operations (FLOPs). Further, a weight-statistics-aware pruning algorithm is proposed. We also design a custom text normalizer for WER evaluation. On the Common Voice 11.0 Hindi dataset, without degrading WER, we obtain (a) a 35.4% reduction in model parameters, 14.25% lower memory consumption, and 18.5% fewer FLOPs on Whisper-small; (b) a 31% reduction in model parameters, 15.29% lower memory consumption, and 16.95% fewer FLOPs on Whisper-medium; and (c) substantially outperform the state-of-the-art Iterative Magnitude Pruning based method by pruning 18.7% more parameters along with a 12.31-point reduction in WER.
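Whisper's stock, English-oriented text normalizer is a poor fit for Devanagari transcripts, which is presumably why the authors build their own for WER scoring. As a rough illustration only, a minimal Hindi-oriented normalizer might look like the sketch below; the actual normalization rules used in the paper are not reproduced here.

```python
import re
import unicodedata

def normalize_hindi(text: str) -> str:
    """Hypothetical text normalizer for scoring WER on Hindi transcripts.

    Shows only the typical steps: Unicode normalization, punctuation
    removal (including the Devanagari danda marks), and whitespace
    collapsing. Not the paper's actual normalizer.
    """
    text = unicodedata.normalize("NFC", text)                 # canonical Devanagari codepoints
    text = re.sub(r"[\u0964\u0965,.!?;:\"'()\-]", " ", text)  # strip danda marks and punctuation
    return re.sub(r"\s+", " ", text).strip()                  # collapse runs of whitespace
```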
Problem

Research questions and friction points this paper is trying to address.

Reducing Whisper model size for edge device deployment
Decreasing FLOPs through structured sparsity
Lowering memory consumption while maintaining speech recognition accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enforcing structured sparsity via a Sparse Group LASSO penalty
Proposing a weight-statistics-aware pruning algorithm (a sketch follows this list)
Designing a custom text normalizer for WER evaluation
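A weight-statistics-aware pruning rule adapts its cutoff to each layer's own weight statistics rather than applying one global magnitude threshold. The sketch below zeroes rows whose L2 norm falls below mean - k*std of that layer's row norms; the specific statistic, grouping, and threshold rule are assumptions for illustration, not the paper's exact algorithm.

```python
import torch

@torch.no_grad()
def prune_rows_by_stats(weight: torch.Tensor, k: float = 0.5) -> torch.Tensor:
    """Illustrative weight-statistics-aware structured pruning.

    Zeroes entire rows whose L2 norm falls below a per-layer threshold
    derived from the layer's own norm statistics (mean - k * std).
    """
    row_norms = weight.norm(dim=1)                    # one L2 norm per output row
    threshold = row_norms.mean() - k * row_norms.std()
    keep = (row_norms >= threshold).to(weight.dtype)  # 1 for kept rows, 0 for pruned
    weight.mul_(keep.unsqueeze(1))                    # zero whole rows in place
    return weight
```

Because the threshold is recomputed per layer, layers whose weights are uniformly small are not wiped out wholesale, which is the usual failure mode of a single global magnitude cutoff.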
Prasenjit K Mudi
Indian Institute of Technology Madras, India
Anshi Sachan
National Institute of Technology Karnataka, Surathkal
Dahlia Devapriya
Indian Institute of Technology Madras, India
Sheetal Kalyani
Professor, Electrical Engineering, IIT Madras
Research interests: statistical learning theory and robust statistics, special functions, 6G communications, deep learning, extreme value theory