🤖 AI Summary
To address the training complexity, deployment difficulty, and poor interpretability of personalized cardiovascular dynamics modeling, this paper proposes a physics-informed neural operator framework. Built on DeepONet, it embeds hemodynamic partial-differential-equation constraints and directly learns the nonlinear mapping from raw wearable waveforms to beat-to-beat blood pressure. A knowledge-distillation mechanism transfers the capabilities of the physics-guided heavyweight operator model to a lightweight network, replacing the conventional multi-objective adversarial and contrastive learning paradigms. Pretrained on high-fidelity cuffless blood pressure data, the distilled model achieves a correlation coefficient of 0.766 (baseline: 0.770) and an RMSE of 4.452 mmHg (baseline: 4.501 mmHg), with a 4% reduction in training cost. The critical hyperparameters are reduced from eight to a single regularization coefficient, markedly improving scalability, deployment efficiency, and physical interpretability.
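The single-coefficient distillation objective described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the MSE form of both terms, the example values, and the coefficient `lam` are all assumptions.

```python
import numpy as np

def distillation_loss(student_pred, teacher_pred, target, lam=0.5):
    """Single-coefficient objective (illustrative form): supervised MSE
    against ground-truth blood pressure plus a lam-weighted MSE against
    the frozen physics-informed teacher's prediction. lam is the one
    remaining regularization hyperparameter."""
    data_term = np.mean((student_pred - target) ** 2)
    distill_term = np.mean((student_pred - teacher_pred) ** 2)
    return data_term + lam * distill_term

# Toy example: the lightweight student is pulled toward both the label
# and the frozen teacher (all numbers are made up for illustration)
student = np.array([120.5, 118.0])   # student systolic-BP predictions (mmHg)
teacher = np.array([121.0, 117.5])   # frozen teacher predictions (mmHg)
target = np.array([121.0, 118.5])    # reference ground truth (mmHg)
loss = distillation_loss(student, teacher, target, lam=0.5)
```

Collapsing the multi-objective adversarial/contrastive setup into one scalar weight like `lam` is what reduces the tuning burden from eight hyperparameters to one.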
📝 Abstract
Accurate modeling of personalized cardiovascular dynamics is crucial for non-invasive monitoring and therapy planning. State-of-the-art physics-informed neural network (PINN) approaches employ deep, multi-branch architectures with adversarial or contrastive objectives to enforce partial differential equation constraints. While effective, these enhancements introduce significant training and implementation complexity, limiting scalability and practical deployment. We investigate physics-informed neural operators as efficient sources of supervisory signal for training simplified architectures through knowledge distillation. Our approach pre-trains a physics-informed DeepONet (PI-DeepONet) on high-fidelity cuffless blood pressure recordings to learn operator mappings from raw wearable waveforms to beat-to-beat pressure signals under embedded physics constraints. This pre-trained operator then serves as a frozen teacher in a lightweight knowledge-distillation pipeline, guiding streamlined base models that eliminate the complex adversarial and contrastive learning components while maintaining performance. We characterize the role of physics-informed regularization in operator learning and demonstrate its effectiveness for supervisory guidance. In extensive experiments, our operator-supervised approach achieves performance parity with complex baselines (correlation: 0.766 vs. 0.770; RMSE: 4.452 vs. 4.501 mmHg) while reducing tuning complexity from eight critical hyperparameters to a single regularization coefficient and decreasing training overhead by 4%. Our results demonstrate that operator-based supervision can replace intricate multi-component training strategies, offering a more scalable and interpretable approach to physiological modeling with reduced implementation burden.
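As a concrete illustration of the operator mapping, a minimal DeepONet forward pass pairs a branch network, which encodes the wearable waveform sampled at m fixed sensor points, with a trunk network, which encodes the query time; the pressure prediction is their inner product. All layer sizes, names, and the stand-in waveform below are illustrative assumptions, not the paper's architecture, and the physics-informed PDE penalty and training loop are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random tanh-MLP weights (illustrative initialization)."""
    return [(rng.normal(0.0, 0.5, (d_in, d_out)), np.zeros(d_out))
            for d_in, d_out in zip(sizes[:-1], sizes[1:])]

def mlp(x, layers):
    for W, b in layers[:-1]:
        x = np.tanh(x @ W + b)
    W, b = layers[-1]
    return x @ W + b             # linear output layer

m, p = 32, 16                    # waveform samples per beat, latent width
branch = init_mlp([m, 64, p])    # encodes the input waveform u
trunk = init_mlp([1, 64, p])     # encodes the query time t

def deeponet(u_samples, t_queries):
    """Pressure(t) = <branch(u), trunk(t)> -- the DeepONet inner product."""
    b = mlp(u_samples[None, :], branch)        # (1, p) waveform embedding
    tr = mlp(np.atleast_2d(t_queries), trunk)  # (k, p) time embeddings
    return (tr * b).sum(axis=1)                # (k,) predicted pressures

# Query the (untrained) operator at three instants within a beat
u = np.sin(np.linspace(0.0, 2.0 * np.pi, m))   # stand-in wearable waveform
pred = deeponet(u, np.array([[0.1], [0.5], [0.9]]))
```

In the full method, this operator would additionally minimize a hemodynamic PDE residual during pre-training and then be frozen as the distillation teacher.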