Semi-Supervised Cognitive State Classification from Speech with Multi-View Pseudo-Labeling

📅 2024-09-25
🏛️ IEEE International Conference on Acoustics, Speech, and Signal Processing
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high cost of subjective annotation and data scarcity in cognitive state recognition, this paper proposes an acoustic–linguistic dual-view semi-supervised learning framework. Methodologically, it introduces a novel multi-view pseudo-label consistency criterion: on the acoustic side, it combines multiple audio encoders with the Fréchet audio distance to measure how closely unlabeled feature distributions match labeled ones; on the linguistic side, it uses large language models with task-specific prompting to refine ASR transcripts and generate semantically reliable pseudo-labels. The two views jointly filter high-confidence samples, and an iteratively retrained bimodal classifier dynamically expands the training set. Evaluated on emotion recognition and dementia detection, the framework is competitive with fully supervised training while using only 30% of the labeled data, and clearly outperforms the two selected semi-supervised baselines. Key contributions include an interpretable cross-modal pseudo-label validation mechanism and an efficient low-resource adaptation paradigm.
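The cross-view agreement rule described above can be sketched as follows. This is a minimal illustration, not the paper's code: the function name and list-based interface are my assumptions.

```python
def split_by_agreement(acoustic_labels, linguistic_labels):
    """Partition unlabeled samples by cross-view pseudo-label agreement.

    Samples whose acoustic-view and linguistic-view pseudo-labels match
    are treated as high-confidence; mismatches are low-confidence and
    deferred to the iterative bimodal classifier.
    """
    high_conf, low_conf = {}, []
    for idx, (a, l) in enumerate(zip(acoustic_labels, linguistic_labels)):
        if a == l:
            high_conf[idx] = a      # agreed pseudo-label is kept
        else:
            low_conf.append(idx)    # left for later relabeling
    return high_conf, low_conf
```

Only samples on which both modalities agree enter the initial training set; the rest are revisited in later rounds.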

📝 Abstract
The lack of labeled data is a common challenge in speech classification tasks, particularly those requiring extensive subjective assessment, such as cognitive state classification. In this work, we propose a Semi-Supervised Learning (SSL) framework, introducing a novel multi-view pseudo-labeling method that leverages both acoustic and linguistic characteristics to select the most confident data for training the classification model. Acoustically, unlabeled data are compared to labeled data using the Fréchet audio distance, calculated from embeddings generated by multiple audio encoders. Linguistically, large language models are prompted to revise automatic speech recognition transcriptions and predict labels based on our proposed task-specific knowledge. High-confidence data are identified when pseudo-labels from both sources align, while mismatches are treated as low-confidence data. A bimodal classifier is then trained to iteratively label the low-confidence data until a predefined criterion is met. We evaluate our SSL framework on emotion recognition and dementia detection tasks. Experimental results demonstrate that our method achieves competitive performance compared to fully supervised learning using only 30% of the labeled data and significantly outperforms two selected baselines.
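As a rough illustration of the acoustic-side measure, the Fréchet audio distance models each embedding set as a multivariate Gaussian and compares means and covariances. The NumPy-only sketch below is an assumption on my part (including the eigenvalue-based trace term and the function name); the paper computes this over embeddings from multiple audio encoders.

```python
import numpy as np

def frechet_audio_distance(emb_a, emb_b):
    """Fréchet distance between two embedding sets (rows = clips), each
    modeled as a Gaussian: ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2(C_a C_b)^(1/2))."""
    mu_a, mu_b = emb_a.mean(axis=0), emb_b.mean(axis=0)
    cov_a = np.cov(emb_a, rowvar=False)
    cov_b = np.cov(emb_b, rowvar=False)
    # Tr((C_a C_b)^(1/2)) via eigenvalues; clip tiny negatives that
    # arise from floating-point error before taking the square root.
    eigvals = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sum(np.sqrt(np.clip(eigvals.real, 0.0, None)))
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_sqrt)
```

A small distance between an unlabeled pool and the labeled examples of a class suggests the pool is acoustically close to that class, which is what makes the measure usable as a confidence signal.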
Problem

Research questions and friction points this paper is trying to address.

Addresses lack of labeled data in cognitive speech classification
Proposes multi-view pseudo-labeling using acoustic and linguistic features
Improves emotion and dementia detection with limited labeled data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-view pseudo-labeling combines acoustic and linguistic features
Fréchet audio distance measures acoustic similarity for confidence estimation
Iterative bimodal classifier refines low-confidence pseudo-labels
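The iterative refinement of low-confidence data can be sketched as a generic self-training loop. The interface below, a `fit_predict` callback returning predictions plus confidences and a simple stopping criterion, is an illustrative assumption, not the paper's exact procedure.

```python
def iterative_pseudo_labeling(x_lab, y_lab, x_pool, fit_predict,
                              threshold=0.5, max_rounds=10):
    """Generic self-training loop: retrain on labeled plus newly accepted
    pseudo-labeled samples until no pooled sample clears the threshold.

    `fit_predict(X, y, pool)` must return (predictions, confidences)
    for every sample in `pool`."""
    x_lab, y_lab, pool = list(x_lab), list(y_lab), list(x_pool)
    for _ in range(max_rounds):
        if not pool:
            break
        preds, confs = fit_predict(x_lab, y_lab, pool)
        accepted = [i for i, c in enumerate(confs) if c >= threshold]
        if not accepted:
            break  # stopping criterion: nothing confident remains
        for i in accepted:
            x_lab.append(pool[i])
            y_lab.append(preds[i])
        pool = [p for i, p in enumerate(pool) if i not in set(accepted)]
    return x_lab, y_lab, pool


def nearest_fit_predict(X, y, pool):
    """Toy stand-in for the paper's bimodal classifier: label each pooled
    point by its nearest labeled neighbor, confidence = 1 / (1 + distance)."""
    preds, confs = [], []
    for p in pool:
        j = min(range(len(X)), key=lambda k: abs(X[k] - p))
        preds.append(y[j])
        confs.append(1.0 / (1.0 + abs(X[j] - p)))
    return preds, confs
```

Each round the classifier is retrained on the expanded labeled set, so samples that were ambiguous early on can clear the threshold once nearby pseudo-labeled examples have been absorbed.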