A Study of Data Selection Strategies for Pre-training Self-Supervised Speech Models

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Although self-supervised speech pre-training relies on large-scale data, effective data selection strategies remain unclear. This work systematically evaluates how different pre-training subsets affect automatic speech recognition (ASR) performance, comparing several selection strategies: random sampling, diversity-based selection, and prioritizing longer utterances. The experiments reveal that utterance length is more decisive than data diversity or total volume: pre-training on only the top 50% longest utterances surpasses the performance achieved with the full dataset while cutting pre-training time by 24%. These findings highlight the critical role of utterance duration in self-supervised learning and point to a new direction for efficient speech model training.
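
As a concrete illustration of the winning strategy, here is a minimal sketch of length-based selection: rank a pre-training manifest by utterance duration and keep the top fraction. This is a reconstruction, not the authors' code; the function name select_longest and the (path, duration) manifest format are assumptions for illustration.

```python
def select_longest(manifest, keep_fraction=0.5):
    """Return the top `keep_fraction` of utterances, longest first.

    manifest: iterable of (audio_path, duration_seconds) pairs.
    """
    ranked = sorted(manifest, key=lambda item: item[1], reverse=True)
    n_keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:n_keep]


if __name__ == "__main__":
    # Toy manifest; with keep_fraction=0.5 the two longest utterances survive.
    toy_manifest = [
        ("utt_a.wav", 2.1),
        ("utt_b.wav", 14.7),
        ("utt_c.wav", 6.3),
        ("utt_d.wav", 9.8),
    ]
    for path, dur in select_longest(toy_manifest, keep_fraction=0.5):
        print(f"{path}\t{dur:.1f}s")
```

Sorting by duration is a one-off O(n log n) pass over the manifest, so the selection step is negligible next to the pre-training it replaces.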

📝 Abstract
Self-supervised learning (SSL) has transformed speech processing, yet its reliance on massive pre-training datasets remains a bottleneck. While robustness is often attributed to scale and diversity, the role of the data distribution is less understood. We systematically examine how curated subsets of pre-training data influence Automatic Speech Recognition (ASR) performance. Surprisingly, optimizing for acoustic, speaker, or linguistic diversity yields no clear improvements over random sampling. Instead, we find that prioritizing the longest utterances achieves superior ASR results while using only half the original dataset, reducing pre-training time by 24% on a large corpus. These findings suggest that for pre-training speech SSL models, data length is a more critical factor than either data diversity or overall data quantity for performance and efficiency, offering a new perspective for data selection strategies in SSL speech processing.
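
For comparison, the sketch below shows the random-sampling baseline the abstract refers to, drawn to a fixed audio budget so that subsets are matched on total hours rather than utterance count. The hours-based budget matching and the name random_subset are illustrative assumptions, not details taken from the paper.

```python
import random


def random_subset(manifest, hours_budget, seed=0):
    """Randomly draw utterances until `hours_budget` hours are reached.

    manifest: list of (audio_path, duration_seconds) pairs.
    """
    rng = random.Random(seed)
    shuffled = manifest[:]
    rng.shuffle(shuffled)
    subset, total = [], 0.0
    for path, dur in shuffled:
        if total >= hours_budget * 3600:
            break
        subset.append((path, dur))
        total += dur
    return subset
```

Fixing the seed keeps the baseline reproducible across pre-training runs, which matters when comparing selection strategies at the same data budget.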
Problem

Research questions and friction points this paper is trying to address.

data selection
self-supervised learning
speech pre-training
automatic speech recognition
data efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

data selection
self-supervised learning
speech pre-training
utterance length
automatic speech recognition