🤖 AI Summary
To address the lack of clinically grounded evaluation criteria for uncertainty quantification (UQ) in deep learning models applied to photoplethysmography (PPG) signal analysis, this study systematically benchmarks eight UQ methods (including Monte Carlo Dropout, deep ensembles, and posterior modeling) on atrial fibrillation detection and two variants of blood pressure regression. We propose a framework for local calibration and fine-grained reliability assessment tailored to small-sample and personalized settings, overcoming the limitations of conventional global metrics. Experimental results show pronounced, task-dependent variation in performance across UQ methods under different reliability criteria. The localized evaluation protocol characterizes confidence behavior within critical physiological ranges and for individual subjects more faithfully than global metrics alone, yielding interpretable and actionable uncertainty feedback for clinical decision-making and strengthening the practical utility and clinical trustworthiness of PPG-based continuous monitoring systems.
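As a concrete illustration of two of the benchmarked techniques, the sketch below shows one common way to obtain a predictive mean and uncertainty from Monte Carlo Dropout and from a deep ensemble. The architecture (`PPGRegressor`), window length, sample counts, and the blood-pressure framing are illustrative assumptions, not the study's actual implementation.

```python
# Minimal sketch (not the paper's code): predictive mean and uncertainty from
# Monte Carlo Dropout and from a deep ensemble, assuming a PyTorch regression
# model mapping a PPG window to a blood-pressure estimate.
import torch
import torch.nn as nn

class PPGRegressor(nn.Module):  # hypothetical stand-in architecture
    def __init__(self, window_len=1000, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window_len, 128), nn.ReLU(),
            nn.Dropout(p_drop),  # kept stochastic at test time for MC Dropout
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=50):
    """Approximate the predictive distribution with stochastic forward passes."""
    model.train()  # re-enables dropout; real code would toggle only dropout layers
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(0), draws.std(0)  # predictive mean and spread

def ensemble_predict(models, x):
    """Aggregate independently trained networks into an ensemble prediction."""
    with torch.no_grad():
        draws = torch.stack([m.eval()(x) for m in models])
    return draws.mean(0), draws.std(0)

x = torch.randn(8, 1000)  # batch of 8 PPG windows
mean_mc, std_mc = mc_dropout_predict(PPGRegressor(), x)
mean_ens, std_ens = ensemble_predict([PPGRegressor() for _ in range(5)], x)
```

In both cases the disagreement across stochastic passes or ensemble members serves as the expression of uncertainty; how that expression is evaluated is exactly what the benchmark varies.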
📝 Abstract
In principle, deep learning models trained on medical time series, including wearable photoplethysmography (PPG) sensor data, can provide a means to continuously monitor physiological parameters outside of clinical settings. However, there is considerable risk of poor performance when such models are deployed in practical measurement scenarios, which can lead to negative patient outcomes. Reliable uncertainties accompanying predictions can guide clinicians in interpreting the trustworthiness of model outputs, so it is of interest to compare the effectiveness of different approaches to quantifying them. Here we apply an unprecedented set of eight uncertainty quantification (UQ) techniques to models trained on two clinically relevant prediction tasks: atrial fibrillation (AF) detection (classification) and two variants of blood pressure regression. We formulate a comprehensive evaluation procedure to enable a rigorous comparison of these approaches. We observe a complex picture of uncertainty reliability across the different techniques, where the optimal technique for a given task depends on the chosen expression of uncertainty, the evaluation metric, and the scale at which reliability is assessed. We find that assessing local calibration and adaptivity provides practically relevant insights about model behaviour that cannot be acquired from the more commonly used global reliability metrics. We emphasise that criteria for evaluating UQ techniques should cater to the model's practical use case, where a small number of measurements per patient places a premium on achieving small-scale reliability for the chosen expression of uncertainty while preserving as much predictive performance as possible.
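To make the distinction between global and local reliability concrete, the following sketch evaluates prediction-interval coverage subject by subject rather than over the pooled test set. The Gaussian 90% interval construction and per-subject grouping are assumptions of this illustration, not necessarily the paper's exact protocol.

```python
# Illustrative sketch (not the paper's protocol): local calibration for a
# regression task, assessed as the empirical coverage of a nominal 90%
# Gaussian prediction interval within each subject's measurements.
import numpy as np

def local_coverage(y_true, y_pred, y_std, subject_ids, z=1.645):
    """Empirical coverage of the +/- z*std interval, grouped by subject.

    A well-calibrated model yields coverage near the nominal level (90% for
    z=1.645) not just globally but within each subject's small group of
    measurements -- the personalized setting the abstract emphasises.
    """
    inside = np.abs(y_true - y_pred) <= z * y_std
    return {s: inside[subject_ids == s].mean() for s in np.unique(subject_ids)}

# Toy usage with synthetic data: 10 subjects, 20 measurements each.
rng = np.random.default_rng(0)
y_true = rng.normal(120, 15, size=200)          # e.g. systolic BP in mmHg
y_pred = y_true + rng.normal(0, 5, size=200)    # predictions with noise
y_std = np.full(200, 5.0)                       # model-reported uncertainty
subjects = np.repeat(np.arange(10), 20)
print(local_coverage(y_true, y_pred, y_std, subjects))
```

A model can achieve near-nominal coverage globally while individual subjects are systematically over- or under-covered; exposing that failure mode is what local assessment adds over a single pooled metric.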