🤖 AI Summary
This study investigates the interplay between boundary detection and clustering in unsupervised spoken term discovery, specifically addressing whether top-down clustering feedback is necessary to improve segmentation. Method: Two closely matched systems are compared: a simple bottom-up approach that predicts word boundaries from the dissimilarity between adjacent self-supervised features and then clusters the resulting segments into a lexicon, and an updated version of the top-down ES-KMeans dynamic programming method, which iteratively uses K-means clustering to refine its boundaries. Contribution/Results: Both systems achieve comparable state-of-the-art results on the five-language ZeroSpeech benchmarks, with the bottom-up system running nearly five times faster. Analyses show that top-down feedback helps only in some settings (depending on factors such as the candidate boundaries) and that the clustering step is the main limiting factor, suggesting future work should target better clustering techniques and more discriminative word-like representations. The implementation is publicly available.
📝 Abstract
We investigate the problem of segmenting unlabeled speech into word-like units and clustering these to create a lexicon. Prior work can be categorized into two frameworks. Bottom-up methods first determine boundaries and then cluster the resulting fixed segments into a lexicon. In contrast, top-down methods incorporate information from the clustered words to inform boundary selection. However, it is unclear whether top-down information is necessary to improve segmentation. To explore this, we look at two similar approaches that differ in whether top-down clustering informs boundary selection. Our simple bottom-up strategy predicts word boundaries using the dissimilarity between adjacent self-supervised features, then clusters the resulting segments to construct a lexicon. Our top-down system is an updated version of the ES-KMeans dynamic programming method that iteratively uses K-means to update its boundaries. On the five-language ZeroSpeech benchmarks, both approaches achieve comparable state-of-the-art results, with the bottom-up system being nearly five times faster. Through detailed analyses, we show that the top-down influence of ES-KMeans can be beneficial (depending on factors like the candidate boundaries), but in many cases the simple bottom-up method performs just as well. For both methods, we show that the clustering step is a limiting factor. Therefore, we recommend that future work focus on improved clustering techniques and learning more discriminative word-like representations. Project code repository: https://github.com/s-malan/prom-seg-clus.
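To make the bottom-up pipeline concrete, the recipe in the abstract (boundaries where adjacent self-supervised features are dissimilar, then clustering the segments into a lexicon) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the cosine dissimilarity measure, the peak threshold, mean-pooling of segment features, and the tiny K-means routine are all illustrative assumptions.

```python
import numpy as np

def tiny_kmeans(X, k, iters=20, seed=0):
    """Minimal K-means; an illustrative stand-in for any clustering routine."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each point to its nearest centroid, then update centroids.
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def bottom_up_segment_and_cluster(feats, threshold=0.4, n_clusters=5):
    """Bottom-up sketch: boundaries from adjacent-frame dissimilarity,
    then K-means over mean-pooled segments to form a lexicon.
    `feats` is a (T, D) array of self-supervised frame features."""
    # Cosine dissimilarity between consecutive frames.
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    dissim = 1.0 - (normed[:-1] * normed[1:]).sum(axis=1)
    # Local dissimilarity peaks above a threshold become word boundaries.
    peaks = [t + 1 for t in range(1, len(dissim) - 1)
             if dissim[t] > threshold
             and dissim[t] >= dissim[t - 1] and dissim[t] >= dissim[t + 1]]
    bounds = [0] + peaks + [len(feats)]
    # Mean-pool each segment and cluster the pooled vectors.
    segs = np.stack([feats[a:b].mean(axis=0)
                     for a, b in zip(bounds[:-1], bounds[1:])])
    labels = tiny_kmeans(segs, k=min(n_clusters, len(segs)))
    return bounds, labels
```

The top-down contrast would instead let the cluster assignments feed back into boundary selection (as ES-KMeans does via dynamic programming over candidate boundaries); here the boundaries are fixed once before clustering, which is what makes the approach bottom-up.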