🤖 AI Summary
Existing speech foundation models show limited ability to model non-speech human vocalizations, such as infant cries, laughter, and sighs, because they do not capture the fine-grained acoustic characteristics unique to these signals. To address this gap, we introduce voc2vec, the first open-source, general-purpose, task-agnostic foundation model designed specifically for non-speech human vocalizations, filling a critical void between speech and general audio foundation models. voc2vec learns a unified representation through self-supervised pretraining on 10 open-source non-speech audio datasets (~125 hours), followed by multi-task fine-tuning. Evaluated on six benchmark datasets, voc2vec achieves on average 7.2% higher non-speech classification accuracy than OpenSMILE, emotion2vec, and state-of-the-art speech and audio foundation models.
📝 Abstract
Speech foundation models have demonstrated exceptional capabilities in speech-related tasks. Nevertheless, they often struggle with non-verbal audio data, such as vocalizations like baby cries, which are critical for many real-world applications. Audio foundation models handle non-speech data well but still fail to capture the nuanced features of non-verbal human sounds. In this work, we aim to overcome these shortcomings and propose a novel foundation model, termed voc2vec, designed specifically for non-verbal human sounds and trained exclusively on open-source non-verbal audio datasets. We employ a collection of 10 datasets covering around 125 hours of non-verbal audio. Experimental results show that voc2vec is effective in non-verbal vocalization classification, outperforming conventional speech and audio foundation models. Moreover, voc2vec consistently outperforms strong baselines, namely OpenSMILE and emotion2vec, on six different benchmark datasets. To the best of the authors' knowledge, voc2vec is the first universal representation model for vocalization tasks.
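The abstract describes the standard way such a task-agnostic foundation model is evaluated on vocalization classification: the pretrained encoder produces frame-level embeddings, which are pooled into a clip-level vector and fed to a lightweight downstream classifier. A minimal sketch of that linear-probe protocol, using random vectors as stand-ins for real voc2vec embeddings (the embedding generator below is purely illustrative, not the model's actual output):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Illustrative stand-in for the encoder: each clip yields a
# (num_frames, hidden_dim) matrix of frame-level embeddings.
# Real embeddings would come from the pretrained voc2vec encoder.
def fake_frame_embeddings(label, num_frames=50, hidden_dim=64):
    class_shift = np.zeros(hidden_dim)
    class_shift[label] = 2.0  # synthetic class signal so the probe can learn
    return rng.normal(size=(num_frames, hidden_dim)) + class_shift

labels = rng.integers(0, 4, size=200)  # e.g. cry / laugh / sigh / cough
clips = [fake_frame_embeddings(y) for y in labels]

# Linear-probe protocol: mean-pool the frame embeddings into one
# clip-level vector, then train a simple classifier on top of the
# frozen representation.
X = np.stack([clip.mean(axis=0) for clip in clips])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")
```

The quality of the frozen embeddings directly bounds the probe's accuracy, which is why this protocol is a common proxy for comparing representation models such as those benchmarked in the paper.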