🤖 AI Summary
Addressing the dual challenges of resource constraints on IoT edge devices and the privacy sensitivity of speech emotion data, this paper pioneers the application of data distillation to speech emotion recognition (SER). The authors propose a lightweight, synthetic, privacy-preserving speech-level data distillation framework grounded in knowledge distillation principles. The method integrates speech compression with semantic fidelity preservation, coupled with fixed-initialization training and emotion-feature disentanglement modeling, enabling high-quality distilled dataset generation without access to the original sensitive speech. Evaluated on multiple SER benchmarks, a lightweight model trained solely on 5% distilled data achieves 98.3% of the accuracy attained by a model trained on the full dataset. This yields substantial reductions in memory footprint and computational overhead while maintaining high performance, ultra-low resource consumption, and strong privacy protection.
📝 Abstract
Speech emotion recognition (SER) plays a crucial role in human-computer interaction. The emergence of edge devices in the Internet of Things (IoT) presents challenges in constructing intricate deep learning models due to constraints in memory and computational resources. Moreover, emotional speech data often contains private information, raising concerns about privacy leakage during the deployment of SER models. To address these challenges, we propose a data distillation framework to facilitate efficient development of SER models in IoT applications using a synthesised, smaller, and distilled dataset. Our experiments demonstrate that the distilled dataset can be effectively utilised to train SER models with fixed initialisation, achieving performance comparable to that of models developed using the original full emotional speech dataset.
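The core idea, training a model from a fixed initialisation on a small learned synthetic set so that it matches the performance of full-data training, can be illustrated with a minimal toy sketch. The code below is not the paper's method: it uses a linear model on synthetic regression data (standing in for speech features), a single inner gradient step from a fixed zero initialisation, and learns the distilled inputs by backpropagating the full-data loss through that step. All names, sizes, and learning rates here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "full" dataset standing in for the original emotional speech features.
n, d, m = 200, 5, 3                     # real samples, feature dim, distilled samples
w_true = rng.normal(size=d)
X_real = rng.normal(size=(n, d))
y_real = X_real @ w_true + 0.01 * rng.normal(size=n)

# Distilled set: learned inputs, fixed +/-1 labels (both choices are illustrative).
X_syn = rng.normal(size=(m, d))
y_syn = rng.choice([-1.0, 1.0], size=m)

eta = 0.5      # inner-loop learning rate (one step from the fixed init w0 = 0)
alpha = 0.05   # outer-loop learning rate on the distilled inputs

def inner_step(X_syn):
    # Fixed initialisation w0 = 0; one gradient step on squared loss gives
    # w1 = w0 - eta * grad = (eta / m) * X_syn^T y_syn.
    return (eta / m) * X_syn.T @ y_syn

def outer_loss(X_syn):
    # Full-data loss of the model produced by training on the distilled set.
    r = X_real @ inner_step(X_syn) - y_real
    return (r @ r) / n

loss0 = outer_loss(X_syn)
for _ in range(300):
    w1 = inner_step(X_syn)
    dL_dw1 = (2.0 / n) * X_real.T @ (X_real @ w1 - y_real)
    # Chain rule through w1 = (eta/m) X_syn^T y_syn: d w1_k / d X_syn[i,k] = (eta/m) y_syn[i].
    dL_dXsyn = (eta / m) * np.outer(y_syn, dL_dw1)
    X_syn -= alpha * dL_dXsyn

print(f"full-data loss of distilled-trained model: {loss0:.4f} -> {outer_loss(X_syn):.4f}")
```

Because the inner step starts from the same fixed initialisation every time, the distilled inputs only need to steer that one known trajectory, which is what makes the bilevel optimisation tractable; the paper applies the same principle with deep SER models and speech-level synthetic data.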