🤖 AI Summary
To address weak generalization across time-frequency resolutions in cross-corpus speech enhancement, this paper proposes RWSA-MambaUNet: a U-Net architecture that combines the Mamba state-space model with multi-head attention and introduces a novel resolution-wise shared attention (RWSA) mechanism. RWSA shares attention weights across network layers operating at corresponding time and frequency resolutions, reducing computational redundancy. This design jointly strengthens long-range dependency modeling and local fine-grained representation, improving cross-domain robustness. Experiments show state-of-the-art generalization on multiple out-of-domain test sets: even the smallest RWSA-MambaUNet surpasses all baselines in PESQ, SSNR, and ESTOI on DNS 2020, and in SSNR, ESTOI, and SI-SDR on EARS-WHAM_v2, while using less than half the parameters and a fraction of the FLOPs, delivering both high efficiency and strong generalization.
📝 Abstract
Recent advances in speech enhancement have shown that models combining Mamba and attention mechanisms yield superior cross-corpus generalization performance. At the same time, integrating Mamba in a U-Net structure has yielded state-of-the-art enhancement performance, while reducing both model size and computational complexity. Inspired by these insights, we propose RWSA-MambaUNet, a novel and efficient hybrid model combining Mamba and multi-head attention in a U-Net structure for improved cross-corpus performance. Resolution-wise shared attention (RWSA) refers to layerwise attention-sharing across corresponding time- and frequency resolutions. Our best-performing RWSA-MambaUNet model achieves state-of-the-art generalization performance on two out-of-domain test sets. Notably, our smallest model surpasses all baselines on the out-of-domain DNS 2020 test set in terms of PESQ, SSNR, and ESTOI, and on the out-of-domain EARS-WHAM_v2 test set in terms of SSNR, ESTOI, and SI-SDR, while using less than half the model parameters and a fraction of the FLOPs.
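The core idea behind RWSA, sharing one attention module among all U-Net layers that operate at the same time/frequency resolution, can be illustrated with a minimal sketch. Note this is an illustrative toy, not the authors' implementation: the class names, registry pattern, and resolutions below are assumptions, and the "attention module" is reduced to a parameter count to show why sharing shrinks the model.

```python
# Hypothetical sketch of resolution-wise shared attention (RWSA):
# encoder and decoder layers at the same (time, freq) resolution reuse
# one attention module instead of each owning its own.

class MultiHeadAttention:
    """Stand-in for a real attention block; tracks parameter count only."""
    def __init__(self, dim, heads):
        self.dim, self.heads = dim, heads
        self.num_params = 4 * dim * dim  # Q, K, V, and output projections

class RWSARegistry:
    """Hands out one shared attention module per resolution key."""
    def __init__(self, dim, heads):
        self.dim, self.heads = dim, heads
        self._shared = {}

    def get(self, resolution):  # resolution = (time_steps, freq_bins)
        if resolution not in self._shared:
            self._shared[resolution] = MultiHeadAttention(self.dim, self.heads)
        return self._shared[resolution]

# A U-Net downsamples along the encoder and mirrors the same
# resolutions along the decoder (illustrative shapes).
encoder_resolutions = [(256, 128), (128, 64), (64, 32)]
unet_path = encoder_resolutions + encoder_resolutions[::-1]  # 6 layers

registry = RWSARegistry(dim=64, heads=4)
modules = [registry.get(r) for r in unet_path]

unique_modules = {id(m) for m in modules}
params_shared = sum(m.num_params for m in registry._shared.values())
params_unshared = len(unet_path) * 4 * 64 * 64

print(len(unique_modules), params_shared, params_unshared)
# 6 layers end up sharing 3 attention modules, halving attention parameters.
```

The registry keyed on resolution is one simple way to realize layerwise sharing; in the real model the shared weights are also trained jointly by gradients from every layer that reuses them, which is consistent with the paper's reported parameter and FLOPs savings.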