🤖 AI Summary
Traditional emotion recognition methods that rely on facial expression analysis or video surveillance raise privacy concerns and are unsuitable for elderly populations, particularly those with cognitive impairments such as Alzheimer's disease, dementia, or PTSD, who may struggle with explicit interaction or camera-based setups.
Method: We propose a non-invasive, privacy-preserving approach based on physiological signals from wearable sensors (Empatica E4 and Shimmer3 GSR+), capturing multimodal signals including electrodermal activity (EDA), heart rate variability (HRV), and skin temperature (TEMP). A lightweight edge-computing framework integrates classical machine learning regression models to continuously predict intensity levels across 12 emotion categories.
Contribution/Results: This work presents the first edge-deployed, physiology-only regression framework for fine-grained emotion intensity estimation, ensuring real-time inference and on-device data privacy. Evaluated on data from real-world elderly participants, the framework achieves R² = 0.782 and MSE = 0.0006, demonstrating strong efficacy and robustness in clinical and geriatric care settings.
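The paper does not release code, but the method as summarized (windowed physiological features fed to classical regressors on-device) can be sketched as below. This is a minimal illustration on synthetic data: the 30-second window length, the summary statistics used as features, and the choice of `RandomForestRegressor` are all assumptions, not the authors' exact pipeline. The E4's nominal 4 Hz sampling rate for EDA and skin temperature is used for the synthetic streams.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def window_features(signal, fs, win_s=30):
    """Slice a 1-D signal into fixed-length windows and compute
    simple per-window summary statistics (mean, std, min, max)."""
    win = int(fs * win_s)
    n = len(signal) // win
    chunks = signal[: n * win].reshape(n, win)
    return np.column_stack([chunks.mean(axis=1),
                            chunks.std(axis=1),
                            chunks.min(axis=1),
                            chunks.max(axis=1)])

rng = np.random.default_rng(0)
fs = 4                                     # E4 EDA/TEMP sampling rate (Hz)
eda = rng.normal(0.5, 0.1, fs * 3600)      # one hour of synthetic EDA (µS)
temp = rng.normal(33.0, 0.3, fs * 3600)    # one hour of synthetic skin temp (°C)

# Stack per-window features from both modalities into one design matrix.
X = np.hstack([window_features(eda, fs), window_features(temp, fs)])
y = rng.uniform(0, 1, len(X))              # placeholder intensity labels in [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
preds = model.predict(X_te)                # continuous intensity estimates
```

In the paper's setting, one such regressor would be fit per emotion category (12 in total), and a compact tree ensemble like this keeps inference cheap enough for edge deployment.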
📝 Abstract
Emotion detection in older adults is crucial for understanding their cognitive and emotional well-being, especially in hospital and assisted living environments. In this work, we investigate an edge-based, non-obtrusive approach to emotion identification that uses only physiological signals obtained via wearable sensors. Our dataset includes data from 40 older individuals. Emotional states were inferred from physiological signals captured by the Empatica E4 wristband and the Shimmer3 GSR+ sensor, and facial expressions were recorded using camera-based emotion recognition with the iMotions Facial Expression Analysis (FEA) module. The dataset also labels twelve emotion categories with relative intensities. We aim to study how well emotion recognition can be accomplished using only physiological sensor data, without the requirement for cameras or intrusive facial analysis. By leveraging classical machine learning models, we predict the intensity of emotional responses from physiological signals. Our best model achieved an R² score of 0.782 with an MSE of 0.0006 on the regression task. This method has significant implications for individuals with Alzheimer's Disease and Related Dementias (ADRD), as well as veterans coping with Post-Traumatic Stress Disorder (PTSD) or other cognitive impairments. Our results across multiple classical regression models validate the feasibility of this method, paving the way for privacy-preserving and efficient emotion recognition systems in real-world settings.
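The reported figures (R² = 0.782, MSE = 0.0006) are the standard regression metrics, which can be reproduced with scikit-learn. The snippet below shows how they are computed on a small made-up set of intensity predictions; the values are illustrative, not the paper's data.

```python
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical ground-truth vs. predicted emotion intensities in [0, 1].
y_true = np.array([0.10, 0.35, 0.60, 0.80, 0.25])
y_pred = np.array([0.12, 0.30, 0.58, 0.85, 0.22])

# R² = 1 - SS_res / SS_tot; MSE = mean squared residual.
r2 = r2_score(y_true, y_pred)              # ≈ 0.979 for this toy data
mse = mean_squared_error(y_true, y_pred)   # = 0.00134 for this toy data
```

An R² near 1 with an MSE near 0 on intensities bounded in [0, 1] indicates the regressor tracks the labeled intensities closely, which is what the abstract's numbers convey.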