🤖 AI Summary
To address the unreliable reasoning of large language models (LLMs) in speech emotion recognition (SER) caused by automatic speech recognition (ASR) transcription errors, this paper proposes a three-stage Revise-Reason-Recognize prompting pipeline. First, an ASR-error-aware revision step corrects noisy transcriptions. Second, emotion-specific prompts are introduced that integrate acoustic, linguistic, and psychological knowledge. Third, robustness is further examined through context-aware learning, in-context learning, and instruction tuning. Key contributions include: (i) novel emotion-specific prompts for SER; (ii) a systematic comparison of LLM-based prompting on ASR versus ground-truth transcription, including an analysis of LLM sensitivity to minor prompt variations; and (iii) a prompting pipeline tailored to ASR-noisy text for emotion recognition. Experiments demonstrate the efficacy of the emotion-specific prompts, ASR error correction, and LLM training schemes, supporting an interpretable and robust LLM-based paradigm for speech emotion understanding.
📝 Abstract
Annotating and recognizing speech emotion using prompt engineering has recently emerged with the advancement of Large Language Models (LLMs), yet its efficacy and reliability remain questionable. In this paper, we conduct a systematic study of this topic, beginning with novel prompts that incorporate emotion-specific knowledge from acoustics, linguistics, and psychology. We then examine the effectiveness of LLM-based prompting on Automatic Speech Recognition (ASR) transcription, contrasting it with ground-truth transcription. Furthermore, we propose a Revise-Reason-Recognize prompting pipeline for robust LLM-based emotion recognition from spoken language with ASR errors. We also run experiments on context-aware learning, in-context learning, and instruction tuning to assess the usefulness of LLM training schemes for this task. Finally, we investigate the sensitivity of LLMs to minor prompt variations. Experimental results demonstrate the efficacy of the emotion-specific prompts, ASR error correction, and LLM training schemes for LLM-based emotion recognition. Our study aims to refine the use of LLMs in emotion recognition and related domains.
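The three-stage Revise-Reason-Recognize pipeline can be pictured as a prompt builder that chains the stages into a single instruction. The sketch below is a minimal illustration under stated assumptions: the function name, prompt wording, and emotion label set are hypothetical choices for demonstration, not the paper's actual prompts.

```python
def build_rrr_prompt(asr_transcript: str,
                     labels=("angry", "happy", "neutral", "sad")) -> str:
    """Assemble a Revise-Reason-Recognize prompt for an LLM.

    Hypothetical sketch: the wording and the label set are illustrative,
    not taken from the paper.
    """
    # Stage 1 (Revise): ask the model to correct likely ASR errors first.
    revise = (
        "Step 1 - Revise: The transcript below comes from an ASR system and "
        "may contain recognition errors. Rewrite it as the most plausible "
        f'intended sentence.\nTranscript: "{asr_transcript}"'
    )
    # Stage 2 (Reason): elicit emotion-specific reasoning over acoustic,
    # linguistic, and psychological cues before committing to a label.
    reason = (
        "Step 2 - Reason: Using the revised sentence, briefly note emotional "
        "cues from acoustics (likely prosody), linguistics (word choice, "
        "sentence structure), and psychology (speaker intent)."
    )
    # Stage 3 (Recognize): constrain the final answer to one label.
    recognize = (
        "Step 3 - Recognize: Based on your reasoning, output exactly one "
        f"emotion label from: {', '.join(labels)}."
    )
    return "\n\n".join([revise, reason, recognize])


# Example with a noisy ASR transcript ("med" instead of "mad").
prompt = build_rrr_prompt("i am so med at you right now")
print(prompt)
```

Chaining the stages in one prompt, rather than issuing three separate calls, lets the recognition step condition directly on the model's own revision and reasoning text.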