🤖 AI Summary
To address the poor scalability, weak interoperability, and limited performance that hinder real-time deployment of remote photoplethysmography (rPPG) systems on low-power devices, this paper proposes a lightweight, real-time rPPG system. Methodologically, it builds on the Face2PPG signal-processing pipeline and integrates functional reactive programming with the Actor model to form a hybrid programming paradigm that combines event-driven execution with task-level parallelism. A multithreaded architecture decouples video acquisition, physiological signal extraction (heart rate, respiratory rate, and blood oxygen saturation), and HTTP-based streaming, while exposing RESTful APIs and a feedback-enabled UI. Evaluated on embedded hardware, the system sustains stable operation at 30 fps, reduces computational overhead by 37%, and markedly improves real-time performance and robustness, enabling continuous, contactless vital-sign monitoring for smart healthcare and natural human–computer interaction.
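The decoupling of acquisition from processing described above can be sketched as a classic producer–consumer pair joined by a bounded queue, so a slow extraction step never blocks frame capture. This is a minimal illustrative sketch, not the paper's implementation: frames are simulated with integers (a real system would read from a camera, e.g. via OpenCV), and the `process` body is a placeholder for actual rPPG signal extraction.

```python
import queue
import threading

def capture(frames_out: queue.Queue, n_frames: int) -> None:
    """Producer: simulates the video-acquisition thread pushing frames."""
    for i in range(n_frames):
        frames_out.put(i)          # a real frame (image array) would go here
    frames_out.put(None)           # sentinel: end of stream

def process(frames_in: queue.Queue, results: list) -> None:
    """Consumer: stands in for per-frame physiological signal extraction."""
    while True:
        frame = frames_in.get()
        if frame is None:
            break
        results.append(frame * 2)  # placeholder for rPPG processing

# Bounded queue: back-pressure keeps memory flat if processing lags capture.
frames: queue.Queue = queue.Queue(maxsize=8)
results: list = []
t_cap = threading.Thread(target=capture, args=(frames, 100))
t_proc = threading.Thread(target=process, args=(frames, results))
t_cap.start(); t_proc.start()
t_cap.join(); t_proc.join()
```

A real-time variant might instead drop the oldest frame when the queue is full, trading completeness for latency; the blocking `put` here keeps the sketch simple and deterministic.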
📝 Abstract
The growing integration of smart environments and low-power computing devices, coupled with mass-market sensor technologies, is driving advancements in remote and non-contact physiological monitoring. However, deploying these systems in real-time on resource-constrained platforms introduces significant challenges related to scalability, interoperability, and performance. This paper presents a real-time remote photoplethysmography (rPPG) system optimized for low-power devices, designed to extract physiological signals, such as heart rate (HR), respiratory rate (RR), and oxygen saturation (SpO2), from facial video streams. The system is built on the Face2PPG pipeline, which processes video frames sequentially for rPPG signal extraction and analysis, while leveraging a multithreaded architecture to manage video capture, real-time processing, network communication, and graphical user interface (GUI) updates concurrently. This design ensures continuous, reliable operation at 30 frames per second (fps), with adaptive feedback through a collaborative user interface to guide optimal signal capture conditions. The network interface includes both an HTTP server for continuous video streaming and a RESTful API for on-demand vital sign retrieval. To ensure accurate performance despite the limitations of low-power devices, we use a hybrid programming model combining Functional Reactive Programming (FRP) and the Actor Model, allowing event-driven processing and efficient task parallelization. The system is evaluated under real-time constraints, demonstrating robustness while minimizing computational overhead. Our work addresses key challenges in real-time biosignal monitoring, offering practical solutions for optimizing performance in modern healthcare and human-computer interaction applications.
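The Actor-model half of the hybrid paradigm described in the abstract can be illustrated with a minimal mailbox-based actor: each actor owns a private message queue and a thread, and components interact only by message passing, which gives the event-driven, task-parallel structure the paper describes. This is a hedged sketch under assumed names (`Actor`, `HREstimator` are illustrative, not the system's actual components), and the running-mean "estimator" merely stands in for real heart-rate extraction.

```python
import queue
import threading

class Actor:
    """Minimal actor: a private mailbox drained by a dedicated thread."""
    def __init__(self) -> None:
        self.mailbox: queue.Queue = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg) -> None:
        """Asynchronous, non-blocking message delivery."""
        self.mailbox.put(msg)

    def _run(self) -> None:
        while True:
            msg = self.mailbox.get()
            if msg is None:            # poison pill terminates the actor
                break
            self.receive(msg)

    def receive(self, msg) -> None:    # behavior defined by subclasses
        raise NotImplementedError

    def stop(self) -> None:
        self.mailbox.put(None)
        self._thread.join()

class HREstimator(Actor):
    """Collects per-frame signal values; a stand-in for rPPG HR estimation."""
    def __init__(self) -> None:
        self.values: list = []
        super().__init__()

    def receive(self, msg) -> None:
        self.values.append(msg)

estimator = HREstimator()
for v in [72, 74, 73]:                 # simulated per-frame HR samples
    estimator.send(v)
estimator.stop()
mean_hr = sum(estimator.values) / len(estimator.values)
```

In a fuller pipeline, a capture actor would forward frames to an extraction actor, which would forward estimates to a network actor serving the HTTP stream and REST API, with each mailbox isolating one stage's failures and load from the others.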