AI Summary
Text-dependent speaker verification (SV) suffers from performance degradation when training/enrollment and test utterances exhibit textual mismatch. To address this, we propose a text-adaptive framework comprising: (i) a speaker-text factorization network that decomposes speech representations into speaker and text embeddings; (ii) unsupervised adaptation from text-independent to text-customized speaker embeddings using only a small amount of target-text speech data without speaker labels; and (iii) late fusion of the speaker and text embeddings into a single representation. Experiments on RSR2015 demonstrate substantial improvements in verification accuracy under text-mismatched conditions. Notably, our method achieves text-aware embedding adaptation without requiring any target-speaker utterances of the target text, a first in the literature. This establishes a novel paradigm for low-resource, highly generalizable text-dependent SV.
Abstract
Text mismatch between pre-collected data, whether training data or enrollment data, and the actual test data can significantly degrade text-dependent speaker verification (SV) performance. Although this problem can be avoided by carefully collecting data with the target speech content, such data collection is costly and inflexible. In this paper, we propose a novel text adaptation framework to address the text mismatch issue. A speaker-text factorization network factorizes the input speech into speaker embeddings and text embeddings, which are then integrated into a single representation at a later stage. Given a small amount of speaker-independent adaptation utterances, text embeddings of the target speech content can be extracted and used to adapt the text-independent speaker embeddings into text-customized speaker embeddings. Experiments on RSR2015 show that text adaptation significantly improves performance under text-mismatched conditions.
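The adaptation pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the projection matrices stand in for the trained factorization network, the dimensions are arbitrary, and concatenation is used as one simple choice of fusion into a single representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper).
D_IN, D_SPK, D_TXT = 64, 32, 32

# Stand-ins for the trained factorization network's projections.
W_spk = rng.standard_normal((D_IN, D_SPK)) / np.sqrt(D_IN)
W_txt = rng.standard_normal((D_IN, D_TXT)) / np.sqrt(D_IN)

def factorize(x):
    """Split a pooled speech representation into speaker and text embeddings."""
    return x @ W_spk, x @ W_txt

def text_adapt(spk_emb, adaptation_utts):
    """Adapt a text-independent speaker embedding to the target text.

    The adaptation utterances contain the target phrase but need no speaker
    labels: their text embeddings are averaged and fused (here, concatenated)
    with the speaker embedding to form a text-customized representation.
    """
    txt_embs = np.stack([factorize(u)[1] for u in adaptation_utts])
    target_txt = txt_embs.mean(axis=0)
    return np.concatenate([spk_emb, target_txt])

# Enrollment utterance (any text) plus a few speaker-independent
# utterances of the target phrase -- no target-speaker, target-text data.
enroll = rng.standard_normal(D_IN)
adapt_utts = [rng.standard_normal(D_IN) for _ in range(5)]

spk_emb, _ = factorize(enroll)
adapted = text_adapt(spk_emb, adapt_utts)
print(adapted.shape)  # (64,)
```

Scoring a trial would then compare the adapted enrollment representation against the test utterance's fused representation (e.g. by cosine similarity), so both sides carry the same target-text information.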