Affect Decoding in Phonated and Silent Speech Production from Surface EMG

πŸ“… 2026-03-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This study investigates how emotions modulate speech production, particularly the relationship between affective states and articulatory muscle activity during silent speechβ€”an aspect that remains poorly understood. By leveraging facial and neck surface electromyography (sEMG) signals recorded during both vocalized and silent speech tasks, the authors construct a dataset comprising 2,780 samples from 12 participants. They demonstrate for the first time that emotional signatures are embedded in facial motor activity and remain discernible even in the absence of vocal output, offering a novel pathway for affective sensing in silent speech interfaces. Through multimodal feature extraction and model ablation analyses, the proposed approach achieves an AUC of 0.845 in classifying frustration and exhibits strong generalization across vocalized and silent speaking conditions.

πŸ“ Abstract
The expression of affect is integral to spoken communication, yet its link to underlying articulatory execution remains unclear. Measures of articulatory muscle activity such as EMG could reveal how speech production is modulated by emotion, complementing acoustic speech analyses. We investigate affect decoding from facial and neck surface electromyography (sEMG) during phonated and silent speech production. For this purpose, we introduce a dataset comprising 2,780 utterances from 12 participants across 3 tasks, on which we evaluate both intra- and inter-subject decoding using a range of features and model embeddings. Our results reveal that EMG representations reliably discriminate frustration with up to 0.845 AUC and generalize well across articulation modes. Our ablation study further demonstrates that affective signatures are embedded in facial motor activity and persist in the absence of phonation, highlighting the potential of EMG sensing for affect-aware silent speech interfaces.
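The evaluation pipeline described above (per-utterance sEMG features scored for frustration vs. non-frustration with AUC) can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the RMS feature, the synthetic 8-channel signals, and the rank-based AUC helper are all assumptions for the sake of a runnable example.

```python
import numpy as np

def rms_features(emg):
    """Root-mean-square amplitude per sEMG channel over one utterance.

    emg: array of shape (n_channels, n_samples).
    """
    return np.sqrt(np.mean(emg ** 2, axis=1))

def auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic (ties counted as 0.5)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = np.mean(pos[:, None] > neg[None, :])
    ties = np.mean(pos[:, None] == neg[None, :])
    return greater + 0.5 * ties

# Synthetic stand-in data: 200 utterances, 8 sEMG channels each.
# "Frustrated" utterances (label 1) get slightly higher muscle amplitude.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
utterances = [rng.normal(0.0, 1.0 + 0.3 * y, size=(8, 1000)) for y in labels]

# Score each utterance by its mean RMS across channels, then evaluate.
scores = np.array([rms_features(x).mean() for x in utterances])
print(f"AUC: {auc(scores, labels):.3f}")
```

In the paper this scalar score would be replaced by richer multimodal features and learned model embeddings; the sketch only shows how utterance-level sEMG features map onto the AUC metric reported in the abstract.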
Problem

Research questions and friction points this paper is trying to address.

affect decoding
silent speech
surface EMG
articulatory execution
emotion modulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

affect decoding
surface EMG
silent speech
emotion recognition
articulatory modulation
Simon Pistrosch
CHI – Chair of Health Informatics, TUM University Hospital, Munich, Germany; MCML – Munich Center for Machine Learning, Germany
Kleanthis Avramidis
SAIL – Signal Analysis and Interpretation Lab, University of Southern California, USA
Tiantian Feng
Postdoc Researcher
Health and Behaviors; Wearable Computing; Affective Computing; Speech and Biosignal; Responsible ML
Jihwan Lee
PhD Student, Signal Analysis and Interpretation Lab (SAIL) at University of Southern California
brain-computer interfaces; speech synthesis; biosignal-to-speech; articulatory phonetics
Monica Gonzalez-Machorro
CHI – Chair of Health Informatics, TUM University Hospital, Munich, Germany; MCML – Munich Center for Machine Learning, Germany
Shrikanth Narayanan
SAIL – Signal Analysis and Interpretation Lab, University of Southern California, USA
BjΓΆrn W. Schuller
CHI – Chair of Health Informatics, TUM University Hospital, Munich, Germany; GLAM – Group on Language, Audio, & Music, Imperial College London, UK; MCML – Munich Center for Machine Learning, Germany