End-to-end Acoustic-linguistic Emotion and Intent Recognition Enhanced by Semi-supervised Learning

📅 2025-07-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high annotation cost and scarcity of labelled data in joint speech emotion and intent recognition, this paper proposes an end-to-end multi-task semi-supervised learning framework. The framework jointly models emotion and intent classification and combines the acoustic and textual modalities by late fusion. It introduces and comparatively evaluates two semi-supervised strategies, fix-match learning and full-match learning, to leverage large-scale unlabelled speech data. Experiments show that late fusion of the best semi-supervised models outperforms the pure acoustic and pure textual baselines by 12.3% and 10.4%, respectively, on the joint recognition balance metric. This gain markedly reduces reliance on labelled data and supports the framework's effectiveness and generalizability in low-resource speech understanding scenarios.

📝 Abstract
Emotion and intent recognition from speech is essential and has been widely investigated in human-computer interaction. The rapid development of social media platforms, chatbots, and other technologies has led to a large volume of speech data streaming from users. Nevertheless, annotating such data manually is expensive, making it challenging to train machine learning models for recognition purposes. To this end, we propose applying semi-supervised learning to incorporate large-scale unlabelled data alongside a relatively smaller set of labelled data. We train end-to-end acoustic and linguistic models, each employing multi-task learning for emotion and intent recognition. Two semi-supervised learning approaches, fix-match learning and full-match learning, are compared. The experimental results demonstrate that the semi-supervised learning approaches improve model performance in speech emotion and intent recognition from both acoustic and text data. The late fusion of the best models outperforms the acoustic and text baselines by joint recognition balance metrics of 12.3% and 10.4%, respectively.
Problem

Research questions and friction points this paper is trying to address.

Recognizing emotion and intent from speech efficiently
Reducing manual annotation costs for large speech datasets
Improving model performance with semi-supervised learning techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semi-supervised learning for unlabelled data
End-to-end acoustic-linguistic multi-task models
Late fusion enhances joint recognition metrics
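The late-fusion step above can be sketched as combining the per-task class probabilities emitted by the acoustic and text models after each has made its own prediction. The equal 0.5 weighting below is an assumption for illustration; the paper does not necessarily use this exact fusion rule.

```python
# Toy sketch of decision-level (late) fusion for the two tasks; the equal
# weighting is an illustrative assumption.

def late_fuse(acoustic, text, weight=0.5):
    """acoustic/text: dicts mapping a task name ("emotion", "intent") to a
    list of per-class probabilities for one utterance. Returns the argmax
    of the weighted average per task."""
    fused = {}
    for task in acoustic:
        mixed = [weight * a + (1 - weight) * t
                 for a, t in zip(acoustic[task], text[task])]
        fused[task] = mixed.index(max(mixed))  # fused class index per task
    return fused
```

Because fusion happens at the probability level, each single-modality model can be trained (and semi-supervised) independently before their outputs are combined.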
Zhao Ren
Cognitive Systems Lab, University of Bremen, Germany
Rathi Adarshi Rammohan
Cognitive Systems Lab, University of Bremen, Germany
Kevin Scheck
Cognitive Systems Lab, University of Bremen, Germany
Sheng Li
Institute of Science Tokyo, Japan
Tanja Schultz
Professor of Computer Science, University of Bremen
Speech Recognition, Biosignals, Silent Speech, Human-Machine Interfaces, Brain-Computer Interfaces