🤖 AI Summary
To address the high annotation cost and resulting scarcity of labelled data in joint speech emotion and intent recognition, this paper proposes an end-to-end multi-task semi-supervised learning framework. Separate acoustic and linguistic models are each trained with multi-task learning to perform emotion and intent classification jointly, and their predictions are combined by late fusion. The paper introduces and comparatively evaluates two semi-supervised strategies, fix-match learning and full-match learning, to exploit large-scale unlabelled speech data. Experiments show that late fusion of the best semi-supervised models outperforms the pure acoustic and pure text baselines by 12.3% and 10.4%, respectively, on the joint recognition balance metric. This gain reduces reliance on labelled data and supports the framework's effectiveness and generalizability in low-resource speech understanding scenarios. A sketch of the fix-match idea follows below.
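As a rough illustration of the fix-match strategy the summary refers to, the sketch below shows confidence-thresholded pseudo-labelling in PyTorch: a weakly augmented view produces a pseudo-label, and the model is trained to match it on a strongly augmented view. The `model` interface, the augmentation pair, and the 0.95 threshold are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabelled_loss(model, x_weak, x_strong, threshold=0.95):
    """Fix-match-style loss on unlabelled speech: pseudo-label confident
    weak views, then train the strong views against those pseudo-labels.
    All hyperparameters here are illustrative assumptions."""
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=-1)   # predictions on the weak view
        conf, pseudo = probs.max(dim=-1)           # confidence and pseudo-label
        mask = (conf >= threshold).float()         # keep only confident samples
    logits_strong = model(x_strong)                # predictions on the strong view
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (loss * mask).mean()                    # masked consistency loss
```

In a multi-task setup such as this paper's, one such loss term would plausibly be computed per head (emotion and intent) and added to the supervised losses on the labelled subset.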
📝 Abstract
Emotion and intent recognition from speech is essential and has been widely investigated in human-computer interaction. The rapid development of social media platforms, chatbots, and other technologies has generated large volumes of speech data streaming from users. Nevertheless, annotating such data manually is expensive, making it challenging to train machine learning models for recognition purposes. To this end, we propose applying semi-supervised learning to incorporate large-scale unlabelled data alongside a relatively smaller set of labelled data. We train end-to-end acoustic and linguistic models, each employing multi-task learning for emotion and intent recognition. Two semi-supervised learning approaches, fix-match learning and full-match learning, are compared. The experimental results demonstrate that semi-supervised learning improves model performance in speech emotion and intent recognition from both acoustic and text data. The late fusion of the best models outperforms the acoustic and text baselines by 12.3% and 10.4%, respectively, on the joint recognition balance metric.
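To make the multi-task and late-fusion structure described in the abstract concrete, here is a minimal PyTorch sketch: a shared encoder feeds two task heads, and the acoustic and text models are fused at prediction time by averaging class probabilities. The class names, the linear heads, and the equal-weight fusion are assumptions for illustration; the paper's actual architecture and fusion weights may differ.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Shared encoder with separate emotion and intent classifiers,
    mirroring the multi-task setup described in the abstract."""
    def __init__(self, encoder: nn.Module, dim: int, n_emotions: int, n_intents: int):
        super().__init__()
        self.encoder = encoder
        self.emotion_head = nn.Linear(dim, n_emotions)
        self.intent_head = nn.Linear(dim, n_intents)

    def forward(self, x):
        h = self.encoder(x)                          # modality-specific embedding
        return self.emotion_head(h), self.intent_head(h)

def late_fuse(acoustic_logits, text_logits, alpha=0.5):
    """Late fusion as a weighted average of per-modality class probabilities;
    the equal weighting (alpha=0.5) is an illustrative assumption."""
    p_acoustic = torch.softmax(acoustic_logits, dim=-1)
    p_text = torch.softmax(text_logits, dim=-1)
    return alpha * p_acoustic + (1 - alpha) * p_text
```

Because fusion happens on the output probabilities rather than on intermediate features, each modality model can be trained (and semi-supervised) independently before being combined, which is what makes late fusion a natural fit for this framework.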