Advancing Multimodal Teacher Sentiment Analysis: The Large-Scale T-MED Dataset & The Effective AAM-TSA Model

📅 2025-12-23
🤖 AI Summary
To address modeling bias in teacher emotion recognition caused by neglecting pedagogical context, this paper introduces T-MED—the first multimodal emotion dataset specifically designed for teachers, comprising text, audio, video, and instructional metadata. We propose AAM-TSA, an asymmetric attention model featuring an asymmetric cross-modal attention mechanism and a hierarchical gated fusion unit to explicitly capture the performative nature of emotional expression and instruction-induced bias in teaching. Leveraging human–machine collaborative annotation and end-to-end training, AAM-TSA achieves significant improvements over state-of-the-art methods on T-MED, with notably higher emotion classification accuracy and built-in interpretability. This work establishes a novel data foundation and methodological paradigm for intelligent teaching feedback and teacher support systems.

📝 Abstract
Teachers' emotional states are critical in educational scenarios, profoundly impacting teaching efficacy, student engagement, and learning achievements. However, existing studies often fail to accurately capture teachers' emotions due to the performative nature of their emotional expression, and overlook the critical impact of instructional information on that expression. In this paper, we systematically investigate teacher sentiment analysis by building both the dataset and the model accordingly. We construct the first large-scale teacher multimodal sentiment analysis dataset, T-MED. To ensure labeling accuracy and efficiency, we employ a human-machine collaborative labeling process. The T-MED dataset includes 14,938 instances of teacher emotional data from 250 real classrooms across 11 subjects ranging from K-12 to higher education, integrating multimodal text, audio, video, and instructional information. Furthermore, we propose a novel asymmetric attention-based multimodal teacher sentiment analysis model, AAM-TSA. AAM-TSA introduces an asymmetric attention mechanism and a hierarchical gating unit to enable differentiated cross-modal feature fusion and precise emotion classification. Experimental results demonstrate that AAM-TSA significantly outperforms existing state-of-the-art methods in terms of accuracy and interpretability on the T-MED dataset.
Problem

Research questions and friction points this paper is trying to address.

Develops a large-scale multimodal dataset for teacher sentiment analysis
Proposes an asymmetric attention model for accurate emotion classification
Addresses limitations in capturing teacher emotions in educational contexts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-machine collaborative labeling for dataset creation
Asymmetric attention mechanism for cross-modal fusion
Hierarchical gating unit for precise emotional classification
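To make the two model-side contributions above concrete, here is a minimal NumPy sketch of asymmetric cross-modal attention followed by a gated fusion step. This is not the authors' implementation: the feature dimension, the single-direction text-as-anchor choice, the gate parameterization, and all variable names are hypothetical illustrations of the general technique, with no learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hypothetical shared feature dimension

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def asymmetric_attention(anchor, other):
    """Anchor tokens query the other modality; the reverse direction is
    deliberately omitted, which is what makes the fusion asymmetric."""
    scores = anchor @ other.T / np.sqrt(D)      # (n_anchor, n_other)
    return softmax(scores) @ other              # (n_anchor, D)

def gated_fusion(a, b, W_g):
    """An elementwise sigmoid gate decides, per feature, how much of each
    stream to keep (a stand-in for a learned hierarchical gating unit)."""
    gate = 1.0 / (1.0 + np.exp(-np.concatenate([a, b], axis=-1) @ W_g))
    return gate * a + (1.0 - gate) * b          # convex combination

# Toy token sequences for three modalities (5 text, 7 audio, 6 video tokens).
text = rng.normal(size=(5, D))
audio = rng.normal(size=(7, D))
video = rng.normal(size=(6, D))
W_g = rng.normal(size=(2 * D, D)) * 0.1        # random stand-in gate weights

# Text acts as the anchor: it attends to audio and video, but not vice versa;
# the two attended views are then merged by the gate.
t_audio = asymmetric_attention(text, audio)
t_video = asymmetric_attention(text, video)
fused = gated_fusion(t_audio, t_video, W_g)
print(fused.shape)  # prints (5, 8)
```

Because the gate is a per-feature convex combination, each fused value stays between the corresponding audio-informed and video-informed features, which is one reason gated fusion lends itself to interpretability: the gate values show which modality dominated each feature.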
Zhiyi Duan
Department of Computer Science, Inner Mongolia University, Hohhot, Inner Mongolia
Xiangren Wang
Department of Computer Science, Inner Mongolia University, Hohhot, Inner Mongolia
Hongyu Yuan
Atrium Health Wake Forest Baptist
Qianli Xing
Macquarie University
Data Mining, Deep Learning, Crowdsourcing