CHUCKLE -- When Humans Teach AI To Learn Emotions The Easy Way

πŸ“… 2025-10-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing curriculum learning approaches for emotion recognition predominantly rely on heuristic- or model-driven definitions of sample difficulty, neglecting human perception, a critical subjective factor in such tasks. Method: The authors propose a human-perception-centric curriculum learning framework that quantifies sample difficulty via crowdsourced annotation agreement and establishes a perception-driven progressive training paradigm. The method applies annotator-consensus-based difficulty scoring to both LSTM and Transformer architectures with a consistency-aware curriculum scheduling strategy. Results: Experiments on emotion datasets show relative mean accuracy improvements of 6.56% (LSTM) and 1.61% (Transformer) over non-curriculum baselines, alongside fewer gradient update steps, improved robustness, and better generalization. The work integrates human-perceived difficulty into curriculum learning, offering a new paradigm for modeling subjective tasks.

πŸ“ Abstract
Curriculum learning (CL) structures training from simple to complex samples, facilitating progressive learning. However, existing CL approaches for emotion recognition often rely on heuristic, data-driven, or model-based definitions of sample difficulty, neglecting the difficulty for human perception, a critical factor in subjective tasks like emotion recognition. We propose CHUCKLE (Crowdsourced Human Understanding Curriculum for Knowledge Led Emotion Recognition), a perception-driven CL framework that leverages annotator agreement and alignment in crowd-sourced datasets to define sample difficulty, under the assumption that clips challenging for humans are similarly hard for machine learning models. Empirical results suggest that CHUCKLE increases the relative mean accuracy by 6.56% for LSTMs and 1.61% for Transformers over non-curriculum baselines, while reducing the number of gradient updates, thereby enhancing both training efficiency and model robustness.
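The abstract defines sample difficulty by annotator agreement: clips on which crowd annotators disagree are treated as hard. A minimal sketch of one plausible agreement-based difficulty score (illustrative only; the paper's exact metric may differ) is:

```python
from collections import Counter

def agreement_difficulty(labels):
    """Difficulty as 1 minus the fraction of annotators who chose the
    majority label: unanimous clips score 0.0 (easy), split clips score
    higher (hard). Illustrative sketch, not CHUCKLE's exact formula."""
    counts = Counter(labels)
    majority_count = counts.most_common(1)[0][1]
    return 1.0 - majority_count / len(labels)

# Example: five crowd annotators label an emotion clip.
easy = agreement_difficulty(["happy"] * 5)                                  # 0.0
hard = agreement_difficulty(["happy", "sad", "angry", "happy", "neutral"])  # 0.6
```

Ranking the training set by such a score yields the easy-to-hard ordering that the curriculum then consumes.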
Problem

Research questions and friction points this paper is trying to address:

- Defining emotion recognition difficulty based on human perception
- Improving curriculum learning by leveraging crowd-sourced annotator agreement
- Enhancing training efficiency and model robustness in emotion classification
Innovation

Methods, ideas, or system contributions that make the work stand out:

- Uses annotator agreement to define sample difficulty
- Leverages crowd-sourced data for curriculum learning
- Improves training efficiency and model robustness
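Given per-sample difficulty scores, a curriculum exposes the model to an easy subset first and gradually admits harder samples. The sketch below uses a simple linear pacing function, a common choice in curriculum learning; the paper's consistency-aware schedule may differ.

```python
def curriculum_subset(samples, difficulties, epoch, total_epochs, start_frac=0.3):
    """Return the easiest slice of the data for this epoch, growing
    linearly from start_frac of the set to the full set over training.
    Illustrative linear pacing; not CHUCKLE's exact scheduler."""
    # Sort sample indices from easiest (lowest difficulty) to hardest.
    order = sorted(range(len(samples)), key=lambda i: difficulties[i])
    # Fraction of the data visible at this epoch, capped at 1.0.
    frac = min(1.0, start_frac + (1.0 - start_frac) * epoch / max(1, total_epochs - 1))
    k = max(1, int(frac * len(samples)))
    return [samples[i] for i in order[:k]]

data = list(range(10))
diffs = list(range(10))  # sample i has difficulty i
curriculum_subset(data, diffs, epoch=0, total_epochs=5)  # easiest 30%: [0, 1, 2]
curriculum_subset(data, diffs, epoch=4, total_epochs=5)  # full set of 10 samples
```

Because early epochs train on fewer samples, total gradient updates drop relative to a non-curriculum baseline, consistent with the efficiency gains the abstract reports.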
πŸ”Ž Similar Papers
No similar papers found.
Ankush Pratap Singh
New York Institute of Technology, Department of Computer Science New York, 10023, United States
Houwei Cao
Assistant Professor of Computer Science, New York Institute of Technology
Speech & Natural Language Processing, Affective Computing, Big Data Analytics
Yong Liu
New York University, Electrical and Computer Engineering Department New York, 11201, United States