🤖 AI Summary
This work addresses speech semantic representation learning under fully textless supervision, aiming to derive discrete speech units directly from raw waveforms that exhibit both phonemic interpretability and semantic stability, enabling truly textless spoken language modeling. Methodologically, we propose a joint optimization objective integrating masked prediction, multi-layer student–teacher self-distillation, and dynamic online clustering, substantially improving codebook quality and clustering stability. We further validate systematically, across models and layers, that ABX and PNMI are effective proxies for downstream language modeling performance. Experiments demonstrate state-of-the-art results across textless benchmarks, including sWUGGY, sBLIMP, and tSC, outperforming wav2vec 2.0, HuBERT, WavLM, and DinoSR. Our method also pretrains roughly 7× faster than HuBERT, completing training in one day on 16 GPUs. Code and pretrained models are publicly released.
📝 Abstract
The parallel advances in language modeling and speech representation learning have raised the prospect of learning language directly from speech, without textual intermediates. This requires extracting semantic representations directly from speech. Our contributions are threefold. First, we introduce SpidR, a self-supervised speech representation model that efficiently learns representations with highly accessible phonetic information, making it particularly well suited for textless spoken language modeling. It is trained on raw waveforms using a masked prediction objective combined with self-distillation and online clustering: the intermediate layers of the student model learn to predict cluster assignments derived from the teacher's intermediate layers. This learning objective stabilizes the online clustering procedure compared to previous approaches, resulting in higher-quality codebooks. SpidR outperforms wav2vec 2.0, HuBERT, WavLM, and DinoSR on downstream language modeling benchmarks (sWUGGY, sBLIMP, tSC). Second, we systematically evaluate, across models and layers, the correlation between speech unit quality (ABX, PNMI) and language modeling performance, validating these metrics as reliable proxies. Finally, SpidR significantly reduces pretraining time compared to HuBERT, requiring only one day of pretraining on 16 GPUs rather than a week. This speedup is enabled by the pretraining method and an efficient codebase, which allows faster iteration and easier experimentation. We open-source the training code and model checkpoints at https://github.com/facebookresearch/spidr.
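To make the training objective concrete, the following is a minimal, hypothetical sketch of the combination described above: a student predicts, at masked frames, cluster assignments obtained from an EMA teacher whose features are quantized against an online-updated codebook. All names, dimensions, masking rates, and EMA coefficients here are illustrative assumptions, not the actual SpidR implementation (see the released code for the real method).

```python
# Hedged sketch: masked prediction + self-distillation + online clustering.
# Every hyperparameter and module below is an illustrative assumption.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
D, K, T = 64, 32, 50  # feature dim, codebook size, number of frames (assumed)

# Toy stand-ins for one intermediate layer: the teacher is an EMA copy of the student.
student = torch.nn.Linear(D, D)
teacher = torch.nn.Linear(D, D)
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)

codebook = F.normalize(torch.randn(K, D), dim=-1)  # online cluster centroids
head = torch.nn.Linear(D, K)                       # student prediction head
opt = torch.optim.Adam(list(student.parameters()) + list(head.parameters()), lr=1e-3)

x = torch.randn(T, D)       # frame features from some waveform encoder (assumed given)
mask = torch.rand(T) < 0.5  # masked prediction: loss computed on masked frames only

with torch.no_grad():
    # Teacher targets: nearest-centroid assignments of teacher features.
    t_feat = F.normalize(teacher(x), dim=-1)
    targets = (t_feat @ codebook.t()).argmax(-1)
    # Online clustering: EMA-update each used centroid toward its assigned features.
    for k in targets.unique():
        codebook[k] = F.normalize(
            0.99 * codebook[k] + 0.01 * t_feat[targets == k].mean(0), dim=-1)

# Student predicts the teacher's assignments at masked positions.
logits = head(student(x))
loss = F.cross_entropy(logits[mask], targets[mask])
opt.zero_grad()
loss.backward()
opt.step()

# Self-distillation: EMA update of the teacher from the student.
with torch.no_grad():
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.mul_(0.999).add_(0.001 * sp)

print(f"masked-prediction loss: {loss.item():.3f}")
```

In the full model, this target/prediction pairing would apply at several intermediate layers rather than one, which is what the multi-layer self-distillation objective above refers to.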