Distillation-based Layer Dropping (DLD): Effective End-to-end Framework for Dynamic Speech Networks

📅 2026-01-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes Distillation-based Layer Dropping (DLD), an end-to-end framework that combines knowledge distillation with layer dropping and optimizes both jointly, enabling efficient adaptive inference on resource-constrained edge devices. Existing layer dropping approaches noticeably degrade the performance of dynamic speech models at both high and low dropping rates, and therefore struggle to balance computational efficiency against recognition accuracy. Built on Conformer and WavLM architectures and validated on three public speech benchmarks, the proposed method achieves a 9.32% relative reduction in word error rate (WER) at high dropping rates and a 2.25% reduction in the no-dropping case compared to the baseline, while also cutting training time by 33.3% and substantially improving the trade-off between model performance and computational overhead.
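
To make the layer-dropping side of the framework concrete, below is a minimal PyTorch-style sketch of how a static encoder stack can be made dynamic by skipping whole layers at a configurable rate. This illustrates the general layer dropping idea rather than the authors' implementation; the class name `LayerDropEncoder` and the `p_drop` argument are hypothetical.

```python
import torch
import torch.nn as nn


class LayerDropEncoder(nn.Module):
    """Wraps a stack of encoder blocks and skips each one with probability p_drop."""

    def __init__(self, layers: nn.ModuleList):
        super().__init__()
        self.layers = layers  # e.g. a stack of Conformer blocks

    def forward(self, x: torch.Tensor, p_drop: float = 0.0) -> torch.Tensor:
        for layer in self.layers:
            # Skipping a block leaves x on the residual path, so shapes are
            # preserved and compute scales with the fraction of kept layers.
            if p_drop > 0.0 and torch.rand(()).item() < p_drop:
                continue
            x = layer(x)
        return x
```

At inference time, `p_drop` (or an explicit keep-mask) can be chosen per request to match the compute budget of the edge device, which is what makes the network "dynamic".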

📝 Abstract
Edge devices operate in constrained and varying resource settings, requiring dynamic architectures that can adapt to the limitations of the available resources. To meet such demands, the layer dropping ($\mathcal{LD}$) approach is typically used to transform static models into dynamic ones by skipping parts of the network, thereby reducing the overall computational complexity. However, existing $\mathcal{LD}$ methods greatly degrade the dynamic model's performance in both the low and high dropping cases, deteriorating the performance-computation trade-off. To this end, we propose a distillation-based layer dropping (DLD) framework that effectively combines the capabilities of knowledge distillation and $\mathcal{LD}$ in an end-to-end fashion, thereby achieving state-of-the-art performance for dynamic speech networks. Comprehensive experimentation with well-known speech recognition models, including Conformer and WavLM, on three public benchmarks demonstrates the effectiveness of our framework, reducing the word error rate by $9.32\%$ and $2.25\%$ for the high- and no-dropping cases with a $33.3\%$ reduction in training time.
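
The abstract describes combining knowledge distillation and $\mathcal{LD}$ end-to-end. Below is a minimal sketch of what a single training step could look like under a self-distillation reading (full-depth pass as teacher, layer-dropped pass as student), reusing the `LayerDropEncoder` sketch above. The loss choices (CTC, temperature-scaled KL), the weights, and all function names are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def dld_training_step(encoder, head, feats, targets, input_lens, target_lens,
                      p_drop=0.5, kd_weight=1.0, temperature=2.0):
    # Teacher pass: full-depth forward (no layers dropped), detached from the graph.
    with torch.no_grad():
        teacher_logits = head(encoder(feats, p_drop=0.0))

    # Student pass: the same network with layers dropped at rate p_drop.
    student_logits = head(encoder(feats, p_drop=p_drop))

    # Task loss on the dropped path, e.g. CTC for speech recognition.
    log_probs = F.log_softmax(student_logits, dim=-1).transpose(0, 1)  # (T, B, V)
    task_loss = F.ctc_loss(log_probs, targets, input_lens, target_lens)

    # Distillation loss pulling the dropped path toward the full-depth predictions.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # Both terms are minimized in a single backward pass, i.e. end-to-end.
    return task_loss + kd_weight * kd_loss
```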
Problem

Research questions and friction points this paper is trying to address.

layer dropping
dynamic speech networks
performance-computation trade-off
edge devices
speech recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distillation-based Layer Dropping
Dynamic Speech Networks
Knowledge Distillation
Layer Dropping
End-to-end Framework