🤖 AI Summary
This work addresses the performance degradation in multi-corpus joint training for anti-spoofing tasks, which often arises from dataset-specific biases leading to negative transfer. To mitigate this issue, the study introduces domain-invariant learning into self-supervised speech anti-spoofing models for the first time, proposing an Invariant Domain Feature Extraction (IDFE) framework. By integrating multi-task learning with gradient reversal layers, IDFE effectively disentangles corpus-specific information and enhances cross-dataset generalization. Experimental results across four mainstream anti-spoofing datasets demonstrate that the proposed method achieves a 20% relative reduction in average equal error rate compared to baseline models, substantially alleviating the instability commonly observed in multi-corpus training scenarios.
📝 Abstract
The performance of speech spoofing detection often varies across different training and evaluation corpora. Leveraging multiple corpora typically enhances robustness and performance in fields like speaker recognition and speech recognition. However, our spoofing detection experiments show that multi-corpus training does not consistently improve performance and may even degrade it. We hypothesize that dataset-specific biases impair generalization, leading to performance instability. To address this, we propose an Invariant Domain Feature Extraction (IDFE) framework, employing multi-task learning and a gradient reversal layer to minimize corpus-specific information in learned embeddings. Evaluated across four diverse datasets, the IDFE framework reduces the average equal error rate by 20% relative to the baseline.
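The core mechanism in the abstract, a gradient reversal layer that discourages corpus-specific features, can be illustrated with a minimal sketch. This is not the paper's implementation: the class name, the `lam` coefficient, and the manual forward/backward interface are assumptions for illustration. The layer is the identity in the forward pass, but in the backward pass it scales the gradient by `-lam`, so the feature extractor is updated to *increase* the domain (corpus) classifier's loss while the classifier itself is trained normally.

```python
import numpy as np

class GradientReversal:
    """Hypothetical sketch of a gradient reversal layer (GRL).

    Forward: identity. Backward: multiplies the upstream gradient by
    -lam, turning domain-classification training into an adversarial
    signal for the feature extractor.
    """
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        # Features pass through unchanged to the domain classifier.
        return x

    def backward(self, grad_out):
        # Reverse (and scale) the gradient flowing back to the
        # feature extractor, penalizing corpus-specific information.
        return -self.lam * grad_out

# Tiny demonstration of the reversal.
grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
y = grl.forward(x)             # identity: y equals x
g = grl.backward(np.ones(3))   # upstream gradient of ones
print(y)  # [ 1. -2.  3.]
print(g)  # [-0.5 -0.5 -0.5]
```

In a full IDFE-style model, the shared embeddings would feed two heads: the spoofing-detection head trained normally, and a corpus-ID head behind the GRL, so minimizing the joint loss drives the embeddings toward corpus invariance.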