Measuring the Redundancy of Decoder Layers in SpeechLLMs

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the redundancy of decoder parameters in Speech Large Language Models (SpeechLLMs), where decoders constitute a significant portion of model parameters yet the capacity actually required for speech tasks remains unclear. Through systematic layer pruning of the decoder followed by fine-tuning-based healing, the study evaluates decoder redundancy across model scales (1–8B), multiple tasks (including ASR and speech translation), and diverse languages. The findings reveal that even with only 60% of decoder layers retained, 7–8B models maintain strong performance. Moreover, the redundancy patterns exhibit remarkable consistency across model sizes, tasks, and languages, suggesting the presence of globally shared redundant structures within SpeechLLMs. These insights provide both empirical grounding and practical guidance for developing unified, lightweight multitask speech foundation models.

📝 Abstract
Speech Large Language Models route speech encoder representations into an LLM decoder that typically accounts for over 90% of total parameters. We study how much of this decoder capacity is actually needed for speech tasks. Across two LLM families and three scales (1-8B), we show that decoder redundancy is largely inherited from the pretrained LLM: text and speech inputs yield similar redundant blocks. We then measure excess capacity by pruning decoder layers and applying post-pruning healing to recover performance. Our findings show that 7-8B models retain good ASR performance with only 60% of decoder layers, and the same trend extends to smaller scales with reduced pruning tolerance. We then generalise to speech translation, and show that the same blocks of layers are redundant across speech encoders, tasks and languages, indicating that a more global redundancy structure exists, enabling a single pruned, multi-task SpeechLLM backbone to be deployed.
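The pruning setup described above, removing a contiguous block of decoder layers so that roughly 60% of the stack is retained, can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the helper name, the always-keep-first-and-last heuristic, and the placement of the dropped block past the midpoint of the stack are all assumptions for the example.

```python
def kept_layer_indices(n_layers: int, keep_ratio: float) -> list[int]:
    """Return indices of decoder layers to keep when pruning a single
    contiguous block of layers (hypothetical scheme: the block is
    removed from the upper-middle of the stack, and the first and
    last layers are always retained)."""
    n_keep = max(2, round(n_layers * keep_ratio))
    n_drop = n_layers - n_keep
    # Assumed heuristic: place the dropped block just past the midpoint,
    # where layer-pruning studies often report the most redundancy.
    start = (n_layers - n_drop) // 2 + 1
    dropped = set(range(start, start + n_drop))
    return [i for i in range(n_layers) if i not in dropped]


# Example: a 32-layer decoder (e.g. an 8B-class LLM) pruned to 60%.
kept = kept_layer_indices(32, 0.6)
print(len(kept), kept)  # 19 layers survive; layers 10-22 are dropped
```

In practice the surviving layers would be rewired into a shorter stack and then "healed" with light fine-tuning, as the abstract describes.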
Problem

Research questions and friction points this paper is trying to address.

decoder redundancy
SpeechLLM
layer pruning
speech tasks
excess capacity
Innovation

Methods, ideas, or system contributions that make the work stand out.

decoder redundancy
layer pruning
SpeechLLM
cross-task generalization
model compression