🤖 AI Summary
Existing speech enhancement methods for resource-constrained devices enforce a static trade-off between performance and computational efficiency.
Method: We propose an end-to-end trainable dynamic pruning architecture built on DEMUCS. A differentiable routing subnet adaptively selects computation paths based on input features, and a utilization factor (UF) provides fine-grained capacity control. The design requires no model replicas and enables continuous performance-efficiency trade-offs within a single deployment.
Contribution/Results: Our approach achieves a Pareto-optimal efficiency-quality balance: at only 10% average parameter activation, it matches or surpasses a static 25%-utilization baseline in speech quality while reducing MACs by 29%. The core innovation is an input-adaptive, differentiable, and fine-grained dynamic computation allocation mechanism.
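The capacity-control idea can be illustrated with a toy sketch: a utilization factor decides how many output channels of a layer are actually computed, so one set of weights mimics many model sizes. This is a hypothetical simplification (a dense layer instead of the DEMUCS convolutions; the function and argument names are invented here), not the paper's implementation.

```python
import math

def slim_layer(x, weights, biases, uf):
    """Apply a toy dense layer using only the first ceil(uf * n_out) output
    channels; the rest are left at zero, mimicking a smaller network.
    Illustrative sketch only: the paper slims DEMUCS conv channels, and the
    names here (slim_layer, uf) are assumptions, not from the paper."""
    n_out = len(weights)
    k = math.ceil(uf * n_out)            # number of active output channels
    out = [0.0] * n_out
    for i in range(k):                   # compute only the active slice
        out[i] = sum(w * xi for w, xi in zip(weights[i], x)) + biases[i]
    return out, k
```

Because inactive channels are simply skipped, lowering `uf` reduces multiply-accumulate operations proportionally, which is where the MAC savings come from.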
📝 Abstract
Speech enhancement (SE) enables robust speech recognition, real-time communication, hearing aids, and other applications where speech quality is crucial. However, deploying such systems on resource-constrained devices involves choosing a static trade-off between performance and computational efficiency. In this paper, we introduce dynamic slimming to DEMUCS, a popular SE architecture, making it scalable and input-adaptive. Slimming lets the model operate at different utilization factors (UF), each corresponding to a different performance/efficiency trade-off, effectively mimicking multiple model sizes without the extra storage cost. In addition, a router subnet, trained end-to-end with the backbone, determines the optimal UF for the current input. The system thus saves resources by adaptively selecting smaller UFs when additional complexity is unnecessary. We show that our solution is Pareto-optimal against the individual static UFs, confirming the benefits of dynamic routing. When the proposed dynamically slimmable model is trained to use 10% of its capacity on average, it achieves the same or better speech quality as the equivalent static 25%-utilization model while reducing MACs by 29%.
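The router described above can be sketched as a tiny head that maps per-input logits to a UF: a softmax mixture keeps the choice differentiable for end-to-end training, and an argmax yields a discrete UF at inference. The candidate UF set and all names below are assumptions for illustration; the paper's router operates on DEMUCS input features and its actual design may differ.

```python
import math

# Candidate utilization factors; this particular set is an assumption, not from the paper.
UF_CHOICES = [0.10, 0.25, 0.50, 1.00]

def route(logits, hard=False):
    """Toy router head: turns per-input logits (produced by a routing subnet,
    omitted here) into a utilization factor. Soft mode returns the expected UF
    under a softmax, keeping the selection differentiable during training;
    hard mode returns a discrete argmax pick for inference. Illustrative only."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]         # numerically stable softmax
    s = sum(exps)
    probs = [e / s for e in exps]
    if hard:
        return UF_CHOICES[probs.index(max(probs))]   # discrete UF at inference
    return sum(p * u for p, u in zip(probs, UF_CHOICES))  # expected UF for training
```

Training with the soft expected UF lets a budget penalty (e.g. targeting 10% average utilization) backpropagate into the router, while deployment uses the hard path so only one discrete capacity is executed per input.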