🤖 AI Summary
Paired electromyography (EMG)-speech data for voiced EMG-to-speech reconstruction are scarce, which leads to poor model generalization. To address this, the paper proposes CoM2S, a phoneme-level confidence-guided multi-speaker self-training framework, and introduces Libri-EMG, the first open-source, temporally aligned, multi-speaker EMG-speech dataset. The key contributions are: (1) a novel phoneme-level confidence filtering mechanism that leverages a pre-trained EMG-to-speech generative model to produce high-fidelity synthetic training data; and (2) the first large-scale, fully annotated, cross-speaker EMG-speech benchmark dataset. Experiments demonstrate significant improvements in phoneme accuracy, reduced phonemic confusions, and a substantial decrease in word error rate. Both the codebase and the Libri-EMG dataset will be publicly released to advance research in EMG-to-speech synthesis.
📝 Abstract
Voiced Electromyography (EMG)-to-Speech (V-ETS) models reconstruct speech from muscle activity signals, facilitating applications such as neurolaryngologic diagnostics. Despite this potential, the advancement of V-ETS is hindered by a scarcity of paired EMG-speech data. To address this, we propose a novel Confidence-based Multi-Speaker Self-training (CoM2S) approach, along with a newly curated Libri-EMG dataset. CoM2S leverages synthetic EMG data generated by a pre-trained model, filters it with a proposed phoneme-level confidence mechanism, and uses the retained data to enhance the ETS model through self-training. Experiments demonstrate that our method improves phoneme accuracy, reduces phonological confusion, and lowers word error rate, confirming the effectiveness of CoM2S for V-ETS. In support of future research, we will release the codes and the proposed Libri-EMG dataset: an open-access, time-aligned collection of multi-speaker voiced EMG and speech recordings.
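To make the phoneme-level confidence filtering idea concrete, here is a minimal sketch of how synthetic EMG-speech pairs could be gated before self-training. All names, the data layout, and the threshold value are illustrative assumptions, not the paper's actual implementation; the idea is simply that a pre-trained recognizer scores each phoneme of a synthetic utterance, and an utterance is kept only if every phoneme clears a confidence bar.

```python
# Illustrative sketch (assumed names/threshold): gate synthetic
# utterances on per-phoneme recognizer confidence before adding
# them to the self-training pool.

def filter_synthetic_utterances(utterances, threshold=0.8):
    """Keep synthetic EMG-speech pairs whose phonemes are all
    recognized with confidence at or above `threshold`.

    `utterances` is a list of dicts, each with a
    "phoneme_confidences" entry: per-phoneme posterior
    probabilities from a pre-trained recognizer (hypothetical
    data layout for this sketch).
    """
    kept = []
    for utt in utterances:
        confs = utt["phoneme_confidences"]
        # Gate on the weakest phoneme, so a single unreliable
        # segment rejects the whole utterance.
        if confs and min(confs) >= threshold:
            kept.append(utt)
    return kept
```

Gating on the minimum (rather than the mean) confidence is one plausible design choice: it prevents a few confidently recognized phonemes from masking a badly reconstructed segment elsewhere in the utterance.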