🤖 AI Summary
To address the invasiveness of ECG-based continuous authentication and the high energy consumption and computational overhead of high-frequency PPG on wearables, this paper proposes a non-intrusive continuous authentication scheme leveraging low-frequency, multi-channel PPG. We first demonstrate stable authentication at 25 Hz on real smartwatches, establishing this as the minimum viable sampling rate that preserves performance. To enhance robustness across diverse physical activities, we introduce cross-activity-state training. Our method employs four-channel PPG signals processed by a Bi-LSTM–attention fusion model to extract discriminative user-specific features from 4-second windows, enabling low-power, real-time inference. Evaluated on the We-Be dataset, our approach achieves 88.11% accuracy, an equal-error rate (EER) of 2.76%, and false-acceptance/false-rejection rates of 0.48% and 11.77%, respectively, while reducing energy consumption by 53% compared to a 512 Hz baseline. This work significantly advances the practical deployment of continuous wearable authentication.
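The pipeline described above can be sketched as follows. This is a hedged illustration, not the authors' code: it shows a Bi-LSTM with additive attention over time steps consuming a 4-second, 4-channel PPG window sampled at 25 Hz (4 × 25 = 100 samples). The hidden size, the single-layer attention scorer, and the linear classifier head are assumptions for illustration only.

```python
# Hypothetical sketch of a Bi-LSTM + attention classifier for 4-channel PPG.
# Window: 4 s at 25 Hz -> 100 time steps; layer sizes are assumptions.
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, channels=4, hidden=64, n_users=26):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, batch_first=True,
                            bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # one score per time step
        self.head = nn.Linear(2 * hidden, n_users)  # per-user logits

    def forward(self, x):                       # x: (batch, 100, 4)
        h, _ = self.lstm(x)                     # (batch, 100, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        ctx = (w * h).sum(dim=1)                # attention-weighted summary
        return self.head(ctx)                   # (batch, n_users)

model = BiLSTMAttention()
window = torch.randn(8, 4 * 25, 4)  # batch of eight 4-second windows
logits = model(window)              # shape: (batch, n_users)
```

Attention pooling lets the classifier weight the most identity-informative parts of each pulse waveform rather than averaging the whole window uniformly.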
📝 Abstract
Biometric authentication using physiological signals offers a promising path toward secure and user-friendly access control in wearable devices. While electrocardiogram (ECG) signals have shown high discriminability, their intrusive sensing requirements and discontinuous acquisition limit practicality. Photoplethysmography (PPG), on the other hand, enables continuous, non-intrusive authentication with seamless integration into wrist-worn wearable devices. However, most prior work relies on high-frequency PPG (e.g., 75–500 Hz) and complex deep models, which incur significant energy and computational overhead, impeding deployment in power-constrained real-world systems. In this paper, we present the first real-world implementation and evaluation of a continuous authentication system on a smartwatch, We-Be Band, using low-frequency (25 Hz) multi-channel PPG signals. Our method employs a Bi-LSTM with attention mechanism to extract identity-specific features from short (4 s) windows of 4-channel PPG. Through extensive evaluations on both a public dataset (PTTPPG) and our We-Be Dataset (26 subjects), we demonstrate strong classification performance with an average test accuracy of 88.11%, macro F1-score of 0.88, False Acceptance Rate (FAR) of 0.48%, False Rejection Rate (FRR) of 11.77%, and Equal Error Rate (EER) of 2.76%. Our 25 Hz system reduces sensor power consumption by 53% compared to 512 Hz and 19% compared to 128 Hz setups without compromising performance. We find that sampling at 25 Hz preserves authentication accuracy, whereas performance drops sharply at 20 Hz while offering only trivial additional power savings, underscoring 25 Hz as the practical lower bound. Additionally, we find that models trained exclusively on resting data fail under motion, while activity-diverse training improves robustness across physiological states.
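For readers unfamiliar with the reported metrics, the following minimal sketch shows how FAR, FRR, and EER relate: FAR counts impostor scores accepted above a threshold, FRR counts genuine scores rejected below it, and the EER is the operating point where the two rates cross. The similarity scores here are synthetic, purely for illustration.

```python
# Illustration of FAR/FRR/EER on synthetic similarity scores (not real data).

def far_frr(genuine, impostor, thr):
    """FAR: fraction of impostor scores accepted (>= thr).
    FRR: fraction of genuine scores rejected (< thr)."""
    far = sum(s >= thr for s in impostor) / len(impostor)
    frr = sum(s < thr for s in genuine) / len(genuine)
    return far, frr

def equal_error_rate(genuine, impostor, steps=1000):
    """Sweep thresholds; the EER is where FAR and FRR are closest."""
    lo, hi = min(genuine + impostor), max(genuine + impostor)
    best_gap, eer = 1.0, 0.0
    for i in range(steps + 1):
        thr = lo + (hi - lo) * i / steps
        far, frr = far_frr(genuine, impostor, thr)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Synthetic scores: genuine comparisons score higher on average.
genuine = [0.9, 0.8, 0.7, 0.6, 0.5]
impostor = [0.65, 0.55, 0.4, 0.3, 0.2]
print(equal_error_rate(genuine, impostor))
```

A low FAR with a higher FRR, as reported in the paper (0.48% vs. 11.77%), reflects a security-leaning threshold: impostors are rarely accepted at the cost of occasionally rejecting the legitimate wearer.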