CHOMP: Multimodal Chewing Side Detection with Earphones

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of objectively and continuously monitoring habitual chewing-side preference in daily life, a task poorly served by existing methods that rely on subjective self-reports or clinical assessments and thus fail to capture true mandibular function. To overcome this limitation, the authors present the first integration of a multimodal sensor suite into a common earphone platform: air-conduction microphones, a bone-conduction microphone, an inertial measurement unit (IMU), photoplethysmography (PPG), and a pressure sensor. They apply the continuous wavelet transform to each modality to generate multi-channel scalograms, which are then classified with a multi-channel convolutional neural network (CNN). The approach eliminates reliance on specialized hardware while substantially improving robustness and practicality: among single modalities, the microphones perform best (LOFO/LOSO F1 scores of 94.5%/92.6%), and multimodal fusion further improves these to 97.7%/95.4%, remaining resilient even under noisy conditions.
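
The core preprocessing step, turning a 1-D sensor stream into a time-frequency scalogram via the continuous wavelet transform, can be sketched in pure Python. This is a minimal illustration, not the paper's implementation: the real-valued Morlet wavelet, the scale set, and the toy signal are all assumptions made here for demonstration.

```python
import math

def morlet(t, w=5.0):
    # Real part of a Morlet wavelet: Gaussian envelope times a cosine carrier.
    return math.exp(-t * t / 2.0) * math.cos(w * t)

def cwt_scalogram(signal, scales):
    """CWT magnitude, one row per scale (shape: len(scales) x len(signal))."""
    n = len(signal)
    rows = []
    for s in scales:
        half = int(4 * s)  # truncate the wavelet support at ~4 envelope widths
        kernel = [morlet(k / s) / math.sqrt(s) for k in range(-half, half + 1)]
        row = []
        for i in range(n):
            acc = 0.0
            for j, w in enumerate(kernel):
                idx = i + j - half
                if 0 <= idx < n:
                    acc += signal[idx] * w
            row.append(abs(acc))
        rows.append(row)
    return rows

# Toy "chewing" signal: a low-frequency burst in an otherwise silent recording.
sig = [math.sin(2 * math.pi * 0.05 * t) if 20 <= t < 80 else 0.0
       for t in range(128)]
scalogram = cwt_scalogram(sig, scales=[2, 4, 8, 16])
```

In the paper's pipeline, one such scalogram per sensing modality would be stacked along the channel axis to form the multi-channel CNN input.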

📝 Abstract
Chewing side preference (CSP) has been identified both as a risk factor for temporomandibular disorders (TMD) and as a behavioral manifestation thereof. Despite TMDs affecting roughly one third of the global population, assessment mainly relies on clinical examinations and self-reports, offering limited insight into everyday jaw function. Continuous CSP monitoring could provide an objective proxy for functional asymmetries. Prior wearable approaches, however, mostly use specialized form factors and demonstrate limited performance. We therefore present CHOMP, the first system for chewing side detection using earphones. Employing OpenEarable 2.0, we collected data from 20 participants with microphones, a bone-conduction microphone, an IMU, PPG, and a pressure sensor across eleven foods, five non-chewing activities, and three noise conditions. We apply the continuous wavelet transform to each sensing modality and use the resulting multi-channel scalograms as inputs to CNN-based classifiers. Microphones achieve the strongest single-sensor performance, with median F1 scores of 94.5% in leave-one-food-out (LOFO) and 92.6% in leave-one-subject-out (LOSO) cross-validation. Fusing sensing modalities further improves performance to 97.7% for LOFO and 95.4% for LOSO, and additional evaluations under noise interference indicate robust performance. Our results establish earphones as a practical platform for continuous CSP monitoring, enabling clinicians and patients to assess jaw function in everyday life.
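
The LOFO and LOSO protocols are both instances of leave-one-group-out cross-validation: every sample from one food (or one subject) is held out as the test set in turn, so the classifier is never evaluated on a food or person it trained on. A minimal stdlib sketch, with illustrative field names not taken from the paper:

```python
def leave_one_group_out(samples, key):
    """Yield (held_out_group, train, test) splits for group-wise CV.

    `samples` is a list of dicts; `key` selects the grouping field:
    "subject" gives LOSO splits, "food" gives LOFO splits.
    """
    groups = sorted({s[key] for s in samples})
    for held_out in groups:
        train = [s for s in samples if s[key] != held_out]
        test = [s for s in samples if s[key] == held_out]
        yield held_out, train, test

# Toy dataset: chewing windows labeled with subject, food, and chewing side.
data = [
    {"subject": "p1", "food": "apple",  "side": "left"},
    {"subject": "p1", "food": "carrot", "side": "right"},
    {"subject": "p2", "food": "apple",  "side": "right"},
    {"subject": "p2", "food": "bread",  "side": "left"},
]
loso_splits = list(leave_one_group_out(data, "subject"))  # 2 folds
lofo_splits = list(leave_one_group_out(data, "food"))     # 3 folds
```

Reporting the median F1 across these folds, as the abstract does, summarizes how well the model generalizes to unseen foods and unseen wearers.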
Problem

Research questions and friction points this paper is trying to address.

chewing side preference
temporomandibular disorders
continuous monitoring
jaw function
wearable sensing
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal sensing
chewing side detection
earphone-based monitoring
continuous wavelet transform
CNN classification