On the Role of Encoder Depth: Pruning Whisper and LoRA Fine-Tuning in SLAM-ASR

📅 2026-03-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study presents the first systematic investigation of encoder layer pruning in Whisper within end-to-end SLAM-ASR and proposes low-rank adaptation (LoRA) fine-tuning to mitigate the resulting performance degradation. Experiments span Whisper Small, Medium, and Large-v2 across Danish, Dutch, and English. Pruning just two encoder layers increases word error rate (WER) by only 2–4%, and adding LoRA not only recovers the loss but surpasses the unpruned baseline while reducing total parameters by 7–14%. Dutch and English see total word-error reductions of 11–21%, whereas low-resource Danish shows smaller gains (4–7%) and increased insertion errors. The findings indicate that LoRA compensates for lost acoustic information through the language model's linguistic priors, with its efficacy strongly dependent on language resource availability.
📝 Abstract
Automatic speech recognition (ASR) has advanced rapidly in recent years, driven by large-scale pretrained models and end-to-end architectures such as SLAM-ASR. A key component of SLAM-ASR systems is the Whisper speech encoder, which provides robust acoustic representations. While model pruning has been explored for the full Whisper encoder-decoder architecture, its impact within the SLAM-ASR setting remains under-investigated. In this work, we analyze the effects of layer pruning in the Whisper encoder when used as the acoustic backbone of SLAM-ASR. We further examine the extent to which LoRA-based fine-tuning can recover performance degradation caused by pruning. Experiments conducted across three Whisper variants (Small, Medium, Large-v2), three languages representing distinct resource levels (Danish, Dutch, English), and over 200 training runs demonstrate that pruning two encoder layers causes only 2-4% WER degradation, and that combining this pruning with LoRA adaptation consistently outperforms the unpruned baseline while reducing total parameters by 7-14%. Moreover, our error analysis reveals that LoRA primarily compensates through the language model's linguistic priors, reducing total word errors by 11-21% for Dutch and English, with substitutions and deletions showing the largest reductions. However, for low-resource Danish, the reduction is smaller (4-7%), and LoRA introduces increased insertion errors, indicating that compensation effectiveness depends on the LLM's pre-existing language proficiency and available training data.
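The two operations the abstract combines, dropping the top encoder layers and adding a low-rank (LoRA) update to the remaining weights, can be illustrated with a minimal NumPy toy. This is a sketch under stated assumptions, not the authors' implementation: the "encoder" is just a stack of square weight matrices, the layer count (12, as in Whisper Small) and all dimensions are illustrative, and `prune_top_layers` / `lora_delta` are hypothetical helper names.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a Whisper-style encoder: a stack of weight matrices.
# Whisper Small has 12 encoder layers; d_model here is purely illustrative.
d_model, n_layers = 8, 12
encoder = [rng.standard_normal((d_model, d_model)) for _ in range(n_layers)]

def prune_top_layers(layers, k):
    """Drop the top-most k layers, mirroring the paper's encoder pruning."""
    return layers[:-k]

def lora_delta(d_out, d_in, r=4, alpha=8):
    """Low-rank update delta_W = (alpha / r) * B @ A, with B (d_out x r)
    and A (r x d_in). B is zero-initialized, so before any training the
    adapter is a no-op and the pruned model's behavior is unchanged."""
    A = rng.standard_normal((r, d_in)) * 0.01
    B = np.zeros((d_out, r))
    return (alpha / r) * (B @ A)

pruned = prune_top_layers(encoder, 2)          # 12 -> 10 layers
adapted = [W + lora_delta(*W.shape) for W in pruned]

n_params = lambda layers: sum(W.size for W in layers)
saving = 1 - n_params(pruned) / n_params(encoder)   # 2/12 of encoder params
```

Pruning 2 of 12 layers removes ~17% of the toy encoder's parameters; in the paper the end-to-end saving is smaller (7–14%) because the LLM and projector parameters are unaffected, and the 2r·d parameters each LoRA adapter adds are negligible next to the d² it modifies.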
Problem

Research questions and friction points this paper is trying to address.

encoder pruning
SLAM-ASR
Whisper
LoRA fine-tuning
automatic speech recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

encoder pruning
LoRA fine-tuning
SLAM-ASR
Whisper
parameter efficiency