🤖 AI Summary
Existing smartphone-based video photoplethysmography (vPPG) methods for cardiovascular monitoring suffer from motion artifacts, illumination variation, and single-view bias in real-world settings, and the field lacks publicly available, patient-centric benchmark datasets with cross-device validation. To address these gaps, we introduce M3PD, the first publicly available dual-view mobile vPPG dataset designed specifically for cardiovascular patients, featuring synchronized facial and fingertip video recordings across diverse real-world scenarios. Building on M3PD, we propose F3Mamba, a Mamba-based architecture that explicitly models temporal dependencies while fusing the two physiological views to suppress interference. Experiments show that F3Mamba reduces heart-rate estimation error by 21.9%–30.2% over single-view baselines while improving robustness and cross-device generalization. This work establishes a benchmark and technical foundation for reliable, portable physiological monitoring in clinical and home settings.
📝 Abstract
Portable physiological monitoring is essential for early detection and management of cardiovascular disease, but current methods often require specialized equipment that limits accessibility or impose postures that patients cannot maintain. Video-based photoplethysmography on smartphones offers a convenient, noninvasive alternative, yet its reliability is still challenged by motion artifacts, lighting variation, and single-view constraints. Few studies have demonstrated reliable application to cardiovascular patients, and no widely used open dataset exists for validating cross-device accuracy. To address these limitations, we introduce the M3PD dataset, the first publicly available dual-view mobile photoplethysmography dataset, comprising synchronized facial and fingertip videos captured simultaneously by the front and rear smartphone cameras of 60 participants (including 47 cardiovascular patients). Building on this dual-view setting, we further propose F3Mamba, which fuses the facial and fingertip views through Mamba-based temporal modeling. The model reduces heart-rate error by 21.9%–30.2% over existing single-view baselines while improving robustness in challenging real-world scenarios. Data and code: https://github.com/Health-HCI-Group/F3Mamba.
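The dual-view idea above can be illustrated with a toy sketch. This is not the authors' F3Mamba implementation: each view's signal is smoothed by a minimal linear state-space recurrence (the core primitive behind Mamba-style models), the two streams are fused by simple averaging rather than learned fusion, and heart rate is read off as the dominant spectral peak in a plausible HR band. All function names, parameters, and the synthetic signals are illustrative assumptions.

```python
import numpy as np

def ssm_scan(x, a=0.9, b=0.1):
    """Minimal linear state-space recurrence h[t] = a*h[t-1] + b*x[t].

    A stand-in for the selective scan used in Mamba-style models;
    here it acts as a simple learnable-free low-pass filter.
    """
    h = np.zeros_like(x, dtype=float)
    prev = 0.0
    for t, xt in enumerate(x):
        prev = a * prev + b * xt
        h[t] = prev
    return h

def fuse_and_estimate_hr(face_sig, finger_sig, fps=30.0):
    """Fuse two mean-centered views and return heart rate in bpm."""
    f1 = ssm_scan(face_sig - np.mean(face_sig))
    f2 = ssm_scan(finger_sig - np.mean(finger_sig))
    fused = 0.5 * (f1 + f2)  # naive fusion: average the two views
    # Pick the dominant frequency in a plausible HR band
    # (0.7-3.0 Hz, i.e. 42-180 bpm).
    spec = np.abs(np.fft.rfft(fused))
    freqs = np.fft.rfftfreq(len(fused), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return 60.0 * freqs[band][np.argmax(spec[band])]

# Synthetic demo: a 1.2 Hz (72 bpm) pulse observed through two noisy views.
rng = np.random.default_rng(0)
t = np.arange(300) / 30.0  # 10 s at 30 fps
pulse = np.sin(2 * np.pi * 1.2 * t)
face = pulse + 0.3 * rng.standard_normal(t.size)
finger = pulse + 0.3 * rng.standard_normal(t.size)
hr = fuse_and_estimate_hr(face, finger, fps=30.0)
```

Averaging two views with independent noise attenuates view-specific interference, which is the intuition (in greatly simplified form) behind fusing facial and fingertip streams.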