🤖 AI Summary
Existing language–audio contrastive pretraining models are limited by small-scale datasets, fixed-length short audio inputs, and contrastive losses that rely solely on global representations, hindering support for variable-length audio and fine-grained semantic understanding. This work proposes a large-scale language–audio joint pretraining framework that handles audio of variable duration. Leveraging 109 million audio–text pairs, the model integrates contrastive learning, self-supervised learning, and audio captioning objectives into a unified multi-task loss, learning dense, semantically rich audio representations in a single training stage. The approach significantly outperforms existing methods on audio–text retrieval and zero-shot audio classification, achieving state-of-the-art results across multiple benchmarks and demonstrating strong generalization.
📝 Abstract
Contrastive language-audio pretraining (CLAP) has achieved notable success in learning semantically rich audio representations and is widely adopted for various audio-related tasks. However, current CLAP models face several key limitations. First, they are typically trained on relatively small datasets, often comprising only a few million audio samples. Second, existing CLAP models are restricted to short, fixed-duration audio inputs, which limits their use in real-world scenarios involving variable-duration audio. Third, the standard contrastive training objective operates on global representations, which may hinder the learning of dense, fine-grained audio features. To address these challenges, we introduce Scalable Language-Audio Pretraining (SLAP), which scales language-audio pretraining to 109 million audio-text pairs with variable audio durations and incorporates multiple training objectives. SLAP unifies the contrastive loss with additional self-supervised and captioning losses in a single training stage, facilitating the learning of richer dense audio representations. The proposed SLAP model achieves new state-of-the-art performance on audio-text retrieval and zero-shot audio classification, demonstrating its effectiveness across diverse benchmarks.
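To make the training objective concrete, the core of CLAP-style pretraining — a symmetric contrastive (InfoNCE) loss over paired global audio and text embeddings, extended here by additional loss terms — can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation; the temperature value and the loss weights are hypothetical placeholders.

```python
import numpy as np

def symmetric_contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """CLAP-style symmetric InfoNCE loss over a batch of paired embeddings.

    audio_emb, text_emb: (batch, dim) arrays of L2-normalized global
    representations; row i of each array forms a matching pair.
    """
    logits = audio_emb @ text_emb.T / temperature   # (batch, batch) similarities
    targets = np.arange(len(logits))                # matching pairs on the diagonal

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)        # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[targets, targets].mean()

    # average the audio->text and text->audio directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

def multi_task_loss(l_contrastive, l_ssl, l_caption, w_ssl=1.0, w_cap=1.0):
    """Single-stage objective: weighted sum of contrastive, self-supervised,
    and captioning losses (weights here are illustrative, not from the paper)."""
    return l_contrastive + w_ssl * l_ssl + w_cap * l_caption
```

With matching pairs (identical, well-separated embeddings) the contrastive loss is near zero; with misaligned pairs it grows large, which is what drives audio and text encoders toward a shared embedding space.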