Revise, Reason, and Recognize: LLM-Based Emotion Recognition via Emotion-Specific Prompts and ASR Error Correction

📅 2024-09-23
🏛️ IEEE International Conference on Acoustics, Speech, and Signal Processing
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the unreliable reasoning of large language models (LLMs) in speech emotion recognition (SER) caused by automatic speech recognition (ASR) transcription errors, this paper proposes a three-stage Revise-Reason-Recognize prompting pipeline. First, an ASR-error-aware revision prompt corrects noisy transcriptions. Second, emotion-specific prompts guide reasoning by integrating knowledge from acoustics, linguistics, and psychology. Third, robustness is further strengthened through context-aware learning, in-context learning, and instruction tuning. Key contributions include: (i) emotion-specific prompts that inject multi-domain emotion knowledge into SER; (ii) a systematic characterization of LLM sensitivity to minor prompt variations; and (iii) a prompting pipeline tailored to ASR-noisy text for emotion recognition. Experiments demonstrate clear improvements in emotion classification accuracy on ASR-distorted transcripts, establishing an interpretable and robust LLM-based paradigm for speech emotion understanding.
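The three-stage pipeline above can be sketched as a single composed prompt. This is a minimal illustration, not the authors' actual templates: the prompt wording, the emotion label set, and the `call_llm` stub are all assumptions made for the example.

```python
# Hedged sketch of a Revise-Reason-Recognize (R3) style prompt for SER on
# ASR transcripts. The wording, label set, and LLM stub are illustrative
# assumptions; the paper's exact prompts are not reproduced here.

EMOTIONS = ["angry", "happy", "neutral", "sad"]

def build_r3_prompt(asr_transcript: str) -> str:
    """Compose one prompt asking the LLM to (1) revise likely ASR errors,
    (2) reason over linguistic and psychological cues, and (3) output a
    single emotion label."""
    return (
        "You are an expert in speech emotion recognition.\n"
        f'ASR transcript (may contain recognition errors): "{asr_transcript}"\n\n'
        "Step 1 (Revise): Correct likely ASR errors in the transcript.\n"
        "Step 2 (Reason): Using the linguistic content and psychological "
        "knowledge of how emotions are expressed, explain which emotion "
        "the revised utterance conveys.\n"
        f"Step 3 (Recognize): Answer with exactly one label from {EMOTIONS}.\n"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call; returns a canned reply here."""
    return "Revised: ... Reasoning: ... Label: neutral"

def recognize_emotion(asr_transcript: str) -> str:
    """Run the R3 prompt and naively extract the predicted label."""
    reply = call_llm(build_r3_prompt(asr_transcript)).lower()
    for label in EMOTIONS:
        if label in reply:
            return label
    return "neutral"  # fallback when no label is found in the reply
```

With the stubbed LLM, `recognize_emotion("i am so hap he today")` returns `"neutral"`; in practice `call_llm` would dispatch to an actual model, and label extraction would need more robust parsing.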

📝 Abstract
Annotating and recognizing speech emotion using prompt engineering has recently emerged with the advancement of Large Language Models (LLMs), yet its efficacy and reliability remain questionable. In this paper, we conduct a systematic study on this topic, beginning with the proposal of novel prompts that incorporate emotion-specific knowledge from acoustics, linguistics, and psychology. Subsequently, we examine the effectiveness of LLM-based prompting on Automatic Speech Recognition (ASR) transcription, contrasting it with ground-truth transcription. Furthermore, we propose a Revise-Reason-Recognize prompting pipeline for robust LLM-based emotion recognition from spoken language with ASR errors. Additionally, experiments on context-aware learning, in-context learning, and instruction tuning are performed to examine the usefulness of LLM training schemes in this direction. Finally, we investigate the sensitivity of LLMs to minor prompt variations. Experimental results demonstrate the efficacy of the emotion-specific prompts, ASR error correction, and LLM training schemes for LLM-based emotion recognition. Our study aims to refine the use of LLMs in emotion recognition and related domains.
Problem

Research questions and friction points this paper is trying to address.

Enhancing speech emotion recognition using emotion-specific prompts
Correcting ASR errors to improve emotion recognition accuracy
Evaluating LLM training schemes for robust emotion detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Emotion-specific prompts integrating multi-domain knowledge
Revise-Reason-Recognize pipeline correcting ASR errors
Context-aware learning, in-context learning, and instruction tuning for robust recognition