🤖 AI Summary
Far-field speech recognition is hindered by speech enhancement (SE) models that generalize poorly when trained only on simulated data, since real-recorded conversational datasets lack target speech annotations. To address this, the paper proposes a self-adaptive learning framework: direct sound estimation (DSE), which estimates the oracle direct sound of real recordings, combined with SuPseudo, a pseudo-supervised learning method that treats DSE estimates as pseudo-labels so SE models can learn from and adapt to real-recorded data directly. A dedicated SE model, FARNET, is designed to fully exploit SuPseudo. On the MISP2023 corpus, the method demonstrates the effectiveness of pseudo-supervised learning and significantly outperforms the previous state-of-the-art, and a demo of the system is publicly available.
📝 Abstract
Due to the lack of target speech annotations in real-recorded far-field conversational datasets, speech enhancement (SE) models are typically trained on simulated data. However, the trained models often perform poorly in real-world conditions, hindering their application in far-field speech recognition. To address this issue, we (a) propose direct sound estimation (DSE) to estimate the oracle direct sound of real-recorded data for SE; and (b) present a novel pseudo-supervised learning method, SuPseudo, which leverages DSE estimates as pseudo-labels and enables SE models to directly learn from and adapt to real-recorded data, thereby improving their generalization capability. Furthermore, an SE model called FARNET is designed to fully utilize SuPseudo. Experiments on the MISP2023 corpus demonstrate the effectiveness of SuPseudo, and our system significantly outperforms the previous state-of-the-art. A demo of our method can be found at https://EeLLJ.github.io/SuPseudo/.
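The core idea of pseudo-supervised learning, training an SE model on real recordings by regressing toward DSE pseudo-labels instead of unavailable clean targets, can be sketched as below. This is a minimal toy illustration, not the paper's implementation: the linear "SE model", the synthetic data, and the pseudo-label stand-in are all hypothetical, and the paper's actual DSE estimator and FARNET architecture are far more elaborate.

```python
# Toy sketch of pseudo-supervised training: fit an SE model to
# pseudo-labels (standing in for DSE estimates) via MSE gradient steps.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: real-recorded noisy frames and their pseudo-labels.
# In the paper, pseudo_clean would come from DSE on real recordings.
noisy = rng.standard_normal((64, 8))   # 64 frames, 8 features each
pseudo_clean = 0.5 * noisy             # placeholder pseudo-labels

w = np.zeros((8, 8))                   # toy linear "SE model" weights
lr = 0.05
for _ in range(200):
    pred = noisy @ w                                     # enhance
    grad = noisy.T @ (pred - pseudo_clean) / len(noisy)  # MSE gradient
    w -= lr * grad                                       # adapt to real data

mse = float(np.mean((noisy @ w - pseudo_clean) ** 2))
```

After training, the model's output closely matches the pseudo-labels (small `mse`), mirroring how SuPseudo lets an SE network adapt to real-recorded data without clean references.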