Robust Multimodal Learning Framework For Intake Gesture Detection Using Contactless Radar and Wearable IMU Sensors

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the robustness degradation that missing modalities cause in multimodal dietary behavior monitoring. We propose a robust detection framework that integrates contactless radar with wearable IMU sensors. The key contribution is MM-TCN-CMA, a multimodal temporal convolutional network with a Cross-Modality Attention (CMA) mechanism that enables dynamic feature complementarity and adaptive modality weighting. Evaluated on a newly released multimodal dataset of 52 meal sessions (3,050 eating and 797 drinking gestures) from 52 participants, MM-TCN-CMA improves the segmental F1-score by 4.3% over the radar-only baseline and 5.2% over the IMU-only baseline under full-modality conditions. Crucially, under single-modality-missing conditions it still achieves gains of 1.3% (radar missing) and 2.4% (IMU missing). This is the first study to demonstrate a robust multimodal learning framework that fuses IMU and radar data for food intake gesture detection, supporting continuous, objective, and robust monitoring of eating behavior.
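
To make the CMA idea concrete, below is a minimal PyTorch sketch of cross-modality attention fusing time-aligned IMU and radar feature streams, with a learned gate approximating "adaptive modality weighting". All names (`CrossModalityAttention`, `d_model`, the gating head) and the exact layer layout are illustrative assumptions; the paper's actual architecture is not specified in this card.

```python
# Hedged sketch of a cross-modality attention (CMA) fusion block.
# Module and parameter names are illustrative, not the paper's implementation.
import torch
import torch.nn as nn

class CrossModalityAttention(nn.Module):
    """Let each modality attend to the other, then fuse with learned gates."""
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # Radar features query the IMU stream, and vice versa.
        self.radar_to_imu = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.imu_to_radar = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Per-timestep scalar gates approximate adaptive modality weighting.
        self.gate = nn.Sequential(nn.Linear(2 * d_model, 2), nn.Softmax(dim=-1))

    def forward(self, imu_feats, radar_feats):
        # imu_feats, radar_feats: (batch, time, d_model), e.g. from TCN encoders.
        imu_ctx, _ = self.imu_to_radar(imu_feats, radar_feats, radar_feats)
        radar_ctx, _ = self.radar_to_imu(radar_feats, imu_feats, imu_feats)
        # Residual connections keep each stream's own information.
        imu_out = imu_feats + imu_ctx
        radar_out = radar_feats + radar_ctx
        w = self.gate(torch.cat([imu_out, radar_out], dim=-1))  # (B, T, 2)
        return w[..., :1] * imu_out + w[..., 1:] * radar_out

# Usage: the fused features would feed a temporal head that labels each frame.
imu = torch.randn(2, 100, 64)    # 100 time steps of wrist-IMU features
radar = torch.randn(2, 100, 64)  # time-aligned radar features
fused = CrossModalityAttention()(imu, radar)  # (2, 100, 64)
```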

📝 Abstract
Automated food intake gesture detection plays a vital role in dietary monitoring, enabling objective and continuous tracking of eating behaviors to support better health outcomes. Wrist-worn inertial measurement units (IMUs) have been widely used for this task with promising results. More recently, contactless radar sensors have also shown potential. This study explores whether combining wearable and contactless sensing modalities through multimodal learning can further improve detection performance. We also address a major challenge in multimodal learning: reduced robustness when one modality is missing. To this end, we propose a robust multimodal temporal convolutional network with cross-modal attention (MM-TCN-CMA), designed to integrate IMU and radar data, enhance gesture detection, and maintain performance under missing modality conditions. A new dataset comprising 52 meal sessions (3,050 eating gestures and 797 drinking gestures) from 52 participants is developed and made publicly available. Experimental results show that the proposed framework improves the segmental F1-score by 4.3% and 5.2% over unimodal Radar and IMU models, respectively. Under missing modality scenarios, the framework still achieves gains of 1.3% and 2.4% for missing radar and missing IMU inputs. This is the first study to demonstrate a robust multimodal learning framework that effectively fuses IMU and radar data for food intake gesture detection.
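
The segmental F1-score reported above is, in the action-segmentation literature, computed by matching predicted segments to ground-truth segments by temporal IoU overlap rather than frame-by-frame. Assuming that standard definition (the exact matching rule and threshold used in the paper are not given in this card), a minimal sketch:

```python
# Minimal sketch of a segmental F1-score: greedy one-to-one matching of
# predicted to ground-truth segments by temporal IoU. The threshold and
# matching rule are assumptions based on the common definition.

def temporal_iou(a, b):
    """Temporal IoU of two (start, end) segments."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def segmental_f1(pred_segs, true_segs, thresh=0.5):
    """pred_segs/true_segs: lists of (start, end) tuples."""
    matched, tp = set(), 0
    for p in pred_segs:
        # Match each prediction to its best unmatched ground-truth segment.
        best_j, best_iou = None, 0.0
        for j, t in enumerate(true_segs):
            iou = temporal_iou(p, t)
            if j not in matched and iou > best_iou:
                best_j, best_iou = j, iou
        if best_j is not None and best_iou >= thresh:
            matched.add(best_j)
            tp += 1
    fp = len(pred_segs) - tp
    fn = len(true_segs) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

# Example: one prediction overlaps a true gesture enough, one is spurious,
# and one true gesture is missed, giving F1 = 0.5.
print(segmental_f1([(0, 10), (20, 25)], [(1, 11), (40, 50)], thresh=0.5))
```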
Problem

Research questions and friction points this paper is trying to address.

Improving food intake gesture detection using multimodal sensors
Enhancing robustness when one sensor modality is missing (see the sketch after this list)
Combining radar and IMU data for better dietary monitoring
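
A common way to obtain the missing-modality robustness listed above is modality dropout during training, so that a zero-filled input at test time matches a condition the model has already seen. Whether the paper uses exactly this scheme is an assumption; the sketch below is illustrative, and the rates and names are placeholders.

```python
# Hedged sketch of modality dropout for missing-modality robustness.
# Not confirmed as the paper's mechanism; rates and names are illustrative.
import torch

def modality_dropout(imu, radar, p_drop=0.15, training=True):
    """Randomly zero out one whole modality per sample during training.

    At inference, a genuinely missing modality is passed in as zeros,
    matching the degraded condition seen during training.
    """
    if not training:
        return imu, radar
    batch = imu.shape[0]
    # Per sample, choose: 0 = keep both, 1 = drop IMU, 2 = drop radar.
    choice = torch.multinomial(
        torch.tensor([1 - 2 * p_drop, p_drop, p_drop]), batch, replacement=True
    )
    imu_mask = (choice != 1).float().view(-1, 1, 1)
    radar_mask = (choice != 2).float().view(-1, 1, 1)
    return imu * imu_mask, radar * radar_mask

# Training: imu, radar = modality_dropout(imu, radar)
# Test with radar missing: radar = torch.zeros_like(radar)
```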
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal learning combines IMU and radar sensors
Cross-modal attention enhances gesture detection robustness
New public dataset with 52 meal sessions (3,050 eating and 797 drinking gestures)