🤖 AI Summary
This work addresses the limitations of traditional contrastive language–audio pretraining methods, which enforce rigid one-to-one hard alignment and struggle to capture the fuzzy boundaries and continuity inherent in emotion categories. To overcome this, we propose SmoothCLAP, the first framework to incorporate soft-target supervision into contrastive language–audio pretraining. By constructing soft alignment targets through the integration of intra-modal similarity and paralinguistic features, SmoothCLAP enables the model to learn audio–text embeddings that reflect hierarchical emotional relationships, all while preserving the original inference pipeline. Evaluated across eight emotion recognition tasks spanning English and German, SmoothCLAP consistently achieves performance gains, demonstrating the effectiveness of the proposed soft-supervision strategy for building emotionally aware multimodal models.
📝 Abstract
The ambiguity of human emotions poses challenges for machine learning models, as emotion categories often overlap and lack clearly delineated boundaries. Contrastive language-audio pretraining (CLAP) has emerged as a key technique for generalisable emotion recognition. However, as conventional CLAP enforces a strict one-to-one alignment between paired audio-text samples, it overlooks intra-modal similarity and treats all non-matching pairs as equally negative. This conflicts with the fuzzy boundaries between different emotions. To address this limitation, we propose SmoothCLAP, which introduces softened targets derived from intra-modal similarity and paralinguistic features. By combining these softened targets with conventional contrastive supervision, SmoothCLAP learns embeddings that respect graded emotional relationships, while retaining the same inference pipeline as CLAP. Experiments on eight affective computing tasks across English and German demonstrate that SmoothCLAP consistently achieves superior performance. Our results highlight that leveraging soft supervision is a promising strategy for building emotion-aware audio-text models.
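The core idea of blending hard one-to-one contrastive targets with softened targets derived from intra-modal similarity can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the blending weight `alpha`, the temperature `tau`, and the use of a symmetric intra-modal similarity average are all assumptions made for the sake of the example, and the paper's paralinguistic-feature component is omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_target_contrastive_loss(audio_emb, text_emb, alpha=0.7, tau=0.07):
    """Hypothetical sketch of a soft-target contrastive loss.

    Instead of one-hot (identity) targets, the target distribution is a
    blend of the identity matrix and row-normalised intra-modal
    similarities, so that semantically close non-matching pairs are not
    treated as fully negative.
    """
    # L2-normalise embeddings, as is standard in CLIP/CLAP-style training
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    logits = a @ t.T / tau                 # cross-modal similarity logits
    n = len(a)
    hard = np.eye(n)                       # conventional one-to-one targets

    # Symmetric intra-modal similarity, softmaxed into a soft target
    # distribution per row (an assumed construction for illustration).
    intra_sim = (a @ a.T + t @ t.T) / 2.0
    soft = softmax(intra_sim / tau, axis=1)

    targets = alpha * hard + (1.0 - alpha) * soft   # softened targets

    # Symmetric cross-entropy over both retrieval directions; because
    # intra_sim is symmetric, the same row-stochastic targets apply.
    log_p_a2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_t2a = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return -0.5 * ((targets * log_p_a2t).sum(axis=1).mean()
                   + (targets * log_p_t2a).sum(axis=1).mean())
```

Setting `alpha=1.0` recovers the standard hard-target CLAP objective, so the softened loss can be seen as an interpolation between strict one-to-one alignment and fully similarity-driven supervision.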