M3PD Dataset: Dual-view Photoplethysmography (PPG) Using Front-and-rear Cameras of Smartphones in Lab and Clinical Settings

📅 2025-11-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing smartphone-based video photoplethysmography (vPPG) methods for cardiovascular patient monitoring suffer from motion artifacts, illumination variations, and single-view bias in real-world settings, and the field lacks publicly available, patient-centric benchmark datasets and cross-device validation. To address these limitations, we introduce M3PD—the first publicly available, dual-view mobile vPPG dataset specifically designed for cardiovascular patients—featuring synchronized facial and fingertip video recordings across diverse real-world scenarios. Leveraging M3PD, we propose F3Mamba, a novel Mamba-based architecture that explicitly models temporal dependencies while fusing dual-view physiological signals to suppress interference. Experimental results demonstrate that F3Mamba reduces heart rate estimation error by 21.9%–30.2% compared to single-view baselines, significantly improving robustness and cross-device generalization. This work establishes a new benchmark and technical foundation for reliable, portable physiological monitoring in clinical and home settings.

📝 Abstract
Portable physiological monitoring is essential for early detection and management of cardiovascular disease, but current methods often require specialized equipment that limits accessibility or impose impractical postures that patients cannot maintain. Video-based photoplethysmography on smartphones offers a convenient noninvasive alternative, yet it still faces reliability challenges caused by motion artifacts, lighting variations, and single-view constraints. Few studies have demonstrated reliable application to cardiovascular patients, and no widely used open datasets exist for validating cross-device accuracy. To address these limitations, we introduce the M3PD dataset, the first publicly available dual-view mobile photoplethysmography dataset, comprising synchronized facial and fingertip videos captured simultaneously via front and rear smartphone cameras from 60 participants (including 47 cardiovascular patients). Building on this dual-view setting, we further propose F3Mamba, which fuses the facial and fingertip views through Mamba-based temporal modeling. The model reduces heart-rate error by 21.9 to 30.2 percent over existing single-view baselines while improving robustness in challenging real-world scenarios. Data and code: https://github.com/Health-HCI-Group/F3Mamba.
Problem

Research questions and friction points this paper is trying to address.

Addressing reliability issues in smartphone-based photoplethysmography monitoring
Overcoming motion artifacts and lighting variations in heart rate measurement
Lack of dual-view PPG datasets for cardiovascular patient applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-view PPG using front and rear smartphone cameras
Mamba-based temporal modeling fuses facial and fingertip views
Reduces heart-rate error by 21.9–30.2% over single-view baselines
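The paper's F3Mamba performs learned, Mamba-based fusion of the two views; as an illustrative baseline only, the classical single-view pipeline it improves upon can be sketched as FFT peak picking on a PPG trace, followed by a naive weighted average of the two per-view estimates. The function names, the fusion weight, and the synthetic signal below are assumptions for illustration, not the paper's method.

```python
import numpy as np

def estimate_heart_rate(ppg: np.ndarray, fs: float) -> float:
    """Estimate heart rate (BPM) from a PPG trace via FFT peak picking."""
    ppg = ppg - ppg.mean()                        # remove the DC component
    spectrum = np.abs(np.fft.rfft(ppg))
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fs)
    # Restrict to a plausible heart-rate band: 0.7-3.0 Hz (42-180 BPM).
    band = (freqs >= 0.7) & (freqs <= 3.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0

def fuse_views(hr_face: float, hr_finger: float, w_face: float = 0.5) -> float:
    """Naive dual-view fusion: weighted average of per-view estimates
    (a stand-in for F3Mamba's learned fusion)."""
    return w_face * hr_face + (1.0 - w_face) * hr_finger

# Synthetic 72-BPM pulse sampled at 30 fps (a typical smartphone frame rate).
fs = 30.0
t = np.arange(0, 30, 1.0 / fs)
signal = np.sin(2 * np.pi * 1.2 * t)              # 1.2 Hz = 72 BPM
print(round(estimate_heart_rate(signal, fs)))     # → 72
print(fuse_views(70.0, 74.0))                     # → 72.0
```

Real facial/fingertip videos would first need ROI tracking and color-channel averaging per frame to obtain the raw traces; this sketch starts from an already-extracted signal.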
👥 Authors
Jiankai Tang, Tsinghua University (Design, Ubiquitous Computing, Physiological Sensing)
Tao Zhang, Department of Computer Science and Technology, Tsinghua University, China
Jia Li, Beijing Anzhen Hospital, Capital Medical University, China
Yiru Zhang, Department of Computer Science and Technology, Tsinghua University, China
Mingyu Zhang, Department of Computer Science and Technology, Tsinghua University, China
Kegang Wang, Department of Computer Science and Technology, Tsinghua University, China
Yuming Hao, Beijing Anzhen Hospital, Capital Medical University, China
Bolin Wang, Beijing Anzhen Hospital, Capital Medical University, China
Haiyang Li, Beijing Anzhen Hospital, Capital Medical University, China
Xingyao Wang, All Hands AI, University of Illinois Urbana-Champaign
Yuanchun Shi, Professor (Human-Computer Interaction)
Yuntao Wang, Tsinghua University (Human-Computer Interaction, Ubiquitous Computing, Physio-Behavioral Computing)
Sichong Qian, Beijing Anzhen Hospital, Capital Medical University, China