🤖 AI Summary
This work addresses the performance limitations of target speaker extraction (TSE) in complex real-world scenarios. These limitations stem from two issues: existing approaches fail to model the interactions among multiple difficulty factors, and conventional curriculum learning treats each factor in isolation, so its schedule does not reflect the model's actual learning dynamics. To overcome these challenges, the authors propose a multi-factor joint curriculum learning strategy that simultaneously schedules the signal-to-noise ratio, number of speakers, overlap ratio, and proportion of synthetic versus real data. Furthermore, they introduce TSE-Datamap, a visualization framework based on training dynamics that analyzes per-sample confidence and variability to categorize samples into easy, ambiguous, and hard regions, enabling adaptive data scheduling from easy to difficult. Experiments demonstrate that the proposed approach significantly outperforms random sampling, with particularly notable gains in challenging multi-speaker overlapping conditions.
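The joint scheduling idea can be illustrated with a small sketch. The stage values and field names below are illustrative assumptions, not the paper's actual schedule: each curriculum stage relaxes all four factors (SNR floor, speaker count, overlap ratio, real-data proportion) together rather than one at a time.

```python
# Hypothetical multi-factor curriculum: every stage loosens all four
# difficulty factors jointly. Thresholds here are invented for illustration.
STAGES = [
    {"min_snr_db": 10, "max_speakers": 2, "max_overlap": 0.25, "real_frac": 0.00},
    {"min_snr_db": 5,  "max_speakers": 3, "max_overlap": 0.50, "real_frac": 0.25},
    {"min_snr_db": 0,  "max_speakers": 4, "max_overlap": 0.75, "real_frac": 0.50},
    {"min_snr_db": -5, "max_speakers": 5, "max_overlap": 1.00, "real_frac": 0.75},
]

def admits(sample, stage):
    """Return True if a training sample is admitted at this curriculum stage."""
    return (
        sample["snr_db"] >= stage["min_snr_db"]
        and sample["n_speakers"] <= stage["max_speakers"]
        and sample["overlap"] <= stage["max_overlap"]
        # Real recordings only enter once the stage allots them a share.
        and (not sample["is_real"] or stage["real_frac"] > 0)
    )
```

As training progresses, the data loader would advance through `STAGES`, so a moderately noisy three-speaker mixture is excluded early on but admitted once the schedule reaches a later stage.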
📝 Abstract
Target speaker extraction (TSE) aims to isolate a specific speaker's voice from multi-speaker mixtures. Despite strong benchmark results, real-world performance often degrades due to multiple interacting factors. Previous curriculum learning approaches for TSE typically address these factors separately, failing to capture their complex interactions and relying on predefined difficulty measures that may not align with the model's actual learning behavior. To address this challenge, we first propose a multi-factor curriculum learning strategy that jointly schedules SNR thresholds, speaker counts, overlap ratios, and synthetic/real proportions, enabling progressive learning from simple to complex scenarios. However, determining optimal scheduling without predefined assumptions remains challenging. We therefore introduce TSE-Datamap, a visualization framework that grounds curriculum design in observed training dynamics by tracking confidence and variability across training epochs. Our analysis reveals three characteristic data regions: (i) easy-to-learn examples where models consistently perform well, (ii) ambiguous examples where models oscillate between alternative predictions, and (iii) hard-to-learn examples where models persistently struggle. Guided by these data-driven insights, our approach improves extraction results over random sampling, with particularly strong gains in challenging multi-speaker scenarios.
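The three datamap regions can be sketched from the quantities the abstract names: per-sample confidence (mean score across epochs) and variability (its standard deviation). The score definition and thresholds below are assumptions for illustration; the paper's exact statistics may differ.

```python
import numpy as np

def build_datamap(scores, conf_thresh=0.7, var_thresh=0.15):
    """Classify training samples into easy / ambiguous / hard regions.

    scores: array of shape (n_epochs, n_samples) holding a per-sample
    quality score in [0, 1] logged at each epoch (e.g. a normalized
    extraction metric; the exact score is an assumption here).
    """
    confidence = scores.mean(axis=0)   # average score across epochs
    variability = scores.std(axis=0)   # epoch-to-epoch fluctuation
    regions = np.where(
        variability >= var_thresh, "ambiguous",            # oscillating samples
        np.where(confidence >= conf_thresh, "easy", "hard"),
    )
    return confidence, variability, regions
```

A curriculum built on this map would then feed samples region by region, e.g. `easy` first, then `ambiguous`, then `hard`, matching the easy-to-difficult scheduling described above.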