Human-Guided Reasoning with Large Language Models for Vietnamese Speech Emotion Recognition

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenges of acoustic ambiguity and scarce labeled data in Vietnamese speech emotion recognition under real-world conditions. The authors propose a human-in-the-loop framework that employs a confidence-driven sample routing mechanism to direct high-uncertainty instances to a large language model for structured, rule-guided reasoning. Coupled with an iterative rule refinement strategy, this approach enables deep integration between data-driven models and human-like cognitive reasoning. Evaluated on 2,764 Vietnamese utterances with high annotation consistency, the method achieves an accuracy of 86.59% and Macro F1 scores ranging from 0.85 to 0.86, demonstrating significant improvement in recognizing ambiguous and difficult samples. This work establishes an effective paradigm for affective computing in low-resource languages.
📝 Abstract
Vietnamese Speech Emotion Recognition (SER) remains challenging due to ambiguous acoustic patterns and the lack of reliable annotated data, especially in real-world conditions where emotional boundaries are not clearly separable. To address this problem, this paper proposes a human-machine collaborative framework that integrates human knowledge into the learning process rather than relying solely on data-driven models. The proposed framework is centered on LLM-based reasoning, where acoustic feature-based models provide auxiliary signals such as confidence and feature-level evidence. A confidence-based routing mechanism is introduced to distinguish between easy and ambiguous samples, allowing uncertain cases to be delegated to LLMs for deeper reasoning guided by structured rules derived from human annotation behavior. In addition, an iterative refinement strategy is employed to continuously improve system performance through error analysis and rule updates. Experiments are conducted on a Vietnamese speech dataset of 2,764 samples across three emotion classes (calm, angry, panic), with high inter-annotator agreement (Fleiss' kappa = 0.8574), ensuring reliable ground truth. The proposed method achieves strong performance, reaching up to 86.59% accuracy and a Macro F1 of roughly 0.85-0.86, demonstrating its effectiveness in handling ambiguous and hard-to-classify cases. Overall, this work highlights the importance of combining data-driven models with human reasoning, providing a robust and model-agnostic approach for speech emotion recognition in low-resource settings.
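The confidence-based routing described in the abstract can be pictured as a simple two-branch dispatch: accept the acoustic model's label when its confidence clears a threshold, otherwise hand the sample to an LLM reasoner. The sketch below is an illustrative assumption only; the threshold value, function names, and the `llm_reason` callback are hypothetical and not taken from the paper.

```python
# Minimal sketch of confidence-based routing between an acoustic
# classifier and an LLM reasoner. All names and the threshold are
# illustrative assumptions, not the paper's implementation.

EMOTIONS = ["calm", "angry", "panic"]  # the three classes used in the paper
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff separating easy vs. ambiguous samples


def route(acoustic_probs, llm_reason):
    """Return (label, source) for one utterance.

    acoustic_probs: class probabilities from the acoustic model,
                    in the same order as EMOTIONS.
    llm_reason:     callback that performs rule-guided LLM reasoning
                    on ambiguous samples, given the same probabilities.
    """
    confidence = max(acoustic_probs)
    prediction = EMOTIONS[acoustic_probs.index(confidence)]
    if confidence >= CONFIDENCE_THRESHOLD:
        # Easy sample: trust the data-driven model directly.
        return prediction, "acoustic-model"
    # Ambiguous sample: delegate to deeper, rule-guided LLM reasoning.
    return llm_reason(acoustic_probs), "llm-reasoning"


# Usage with a stub reasoner standing in for the LLM:
label, source = route([0.92, 0.05, 0.03], llm_reason=lambda p: "calm")
```

In this framing, only the low-confidence minority of samples incurs the cost of an LLM call, which is why the routing step matters in a low-resource setting.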
Problem

Research questions and friction points this paper is trying to address.

Speech Emotion Recognition
Vietnamese SER
ambiguous acoustic patterns
low-resource settings
emotion boundary ambiguity
Innovation

Methods, ideas, or system contributions that make the work stand out.

human-guided reasoning
confidence-based routing
LLM-based reasoning
speech emotion recognition
iterative refinement
Truc Nguyen
National Renewable Energy Laboratory
Machine Learning · Cybersecurity · Privacy-Enhancing Technologies · Computer Networking · Blockchain
Then Tran
University of Information Technology, Ho Chi Minh City, Vietnam; Vietnam National University Ho Chi Minh City, Ho Chi Minh City, Vietnam
Binh Truong
University of Information Technology, Ho Chi Minh City, Vietnam; Vietnam National University Ho Chi Minh City, Ho Chi Minh City, Vietnam
Phuoc Nguyen T. H.
University of Information Technology, Ho Chi Minh City, Vietnam; Vietnam National University Ho Chi Minh City, Ho Chi Minh City, Vietnam