AI Summary
Existing curriculum learning approaches for sentiment recognition predominantly rely on heuristic- or model-driven definitions of sample difficulty, neglecting human perception, a critical subjective factor in such tasks. Method: We propose the first human-perception-centric curriculum learning framework, quantifying sample complexity via crowdsourced annotation consistency and establishing a perception-driven progressive training paradigm. Our method embeds annotator consensus into both LSTM and Transformer architectures and introduces a consistency-aware curriculum scheduling strategy. Results: Experiments across multiple sentiment datasets demonstrate relative accuracy improvements of 6.56% (LSTM) and 1.61% (Transformer), alongside significantly fewer gradient update steps, enhanced model robustness, and improved generalization. This work pioneers the systematic integration of human-perceived difficulty into curriculum learning, establishing a novel paradigm for modeling subjective tasks.
Abstract
Curriculum learning (CL) structures training from simple to complex samples, facilitating progressive learning. However, existing CL approaches for emotion recognition often rely on heuristic, data-driven, or model-based definitions of sample difficulty, neglecting the difficulty for human perception, a critical factor in subjective tasks like emotion recognition. We propose CHUCKLE (Crowdsourced Human Understanding Curriculum for Knowledge Led Emotion Recognition), a perception-driven CL framework that leverages annotator agreement and alignment in crowd-sourced datasets to define sample difficulty, under the assumption that clips challenging for humans are similarly hard for machine learning models. Empirical results suggest that CHUCKLE increases the relative mean accuracy by 6.56% for LSTMs and 1.61% for Transformers over non-curriculum baselines, while reducing the number of gradient updates, thereby enhancing both training efficiency and model robustness.
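The core idea, scoring each clip by annotator agreement and training from high-agreement (easy) to low-agreement (hard) samples, can be sketched as follows. This is a minimal illustration assuming a majority-vote agreement fraction as the consistency measure; the paper may use a different agreement or alignment statistic, and the function names here are hypothetical.

```python
from collections import Counter

def agreement_score(labels):
    """Fraction of annotators agreeing with the majority label (1.0 = unanimous)."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

def curriculum_order(annotations):
    """Return sample indices ordered easiest (high agreement) to hardest (low)."""
    scores = [agreement_score(a) for a in annotations]
    return sorted(range(len(annotations)), key=lambda i: -scores[i])

# Toy example: crowd labels for three clips (assumed data, for illustration only)
annotations = [
    ["happy", "happy", "happy"],   # unanimous     -> easy
    ["sad", "happy", "sad"],       # 2/3 majority  -> medium
    ["angry", "sad", "happy"],     # no consensus  -> hard
]
print(curriculum_order(annotations))  # -> [0, 1, 2]
```

A curriculum scheduler would then feed the model the easy prefix of this ordering first and progressively extend it to the full training set.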