Guardian-regularized Safe Offline Reinforcement Learning for Smart Weaning of Mechanical Circulatory Devices

πŸ“… 2025-11-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study addresses the challenge of weaning decisions for mechanical circulatory support (MCS) in cardiogenic shock patients, a setting marked by scarce data, high patient heterogeneity, and physiological uncertainty. The authors propose an offline reinforcement learning framework tailored to safety-critical clinical settings, integrating density regularization, model-based planning, and clinically grounded constraints. Key contributions: (1) CORMPO, an algorithm combining out-of-distribution sample suppression with clinical-knowledge-guided reward shaping; and (2) a Transformer-based physiological digital twin enabling probabilistic, interpretable cardiovascular state modeling and rigorous policy safety evaluation. On real-world and synthetic datasets, the approach improves reward by 28% and critical clinical metrics by 82.6% over baselines, while providing theoretical safety guarantees and strong translational potential for clinical deployment.

πŸ“ Abstract
We study the sequential decision-making problem of automated weaning of mechanical circulatory support (MCS) devices in cardiogenic shock patients. MCS devices are percutaneous micro-axial flow pumps that provide left ventricular unloading and forward blood flow, but current weaning strategies vary significantly across care teams and lack data-driven approaches. Offline reinforcement learning (RL) has proven successful in sequential decision-making tasks, but our setting presents challenges for training and evaluating traditional offline RL methods: prohibition of online patient interaction, highly uncertain circulatory dynamics due to concurrent treatments, and limited data availability. We developed an end-to-end machine learning framework with two key contributions: (1) Clinically-aware OOD-regularized Model-based Policy Optimization (CORMPO), a density-regularized offline RL algorithm for out-of-distribution suppression that also incorporates clinically informed reward shaping, and (2) a Transformer-based probabilistic digital twin that models MCS circulatory dynamics for policy evaluation with rich physiological and clinical metrics. We prove that CORMPO achieves theoretical performance guarantees under mild assumptions. CORMPO attains a 28% higher reward than offline RL baselines and 82.6% higher scores on clinical metrics across real and synthetic datasets. Our approach offers a principled framework for safe offline policy learning in high-stakes medical applications where domain expertise and safety constraints are essential.
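The abstract's "density-regularized offline RL algorithm for out-of-distribution suppression" can be illustrated with a toy sketch: penalize model-predicted rewards for state-action pairs that are unlikely under the behavior (data) distribution, so the learned policy is steered away from out-of-distribution actions. The density model, threshold `tau`, and penalty weight below are illustrative stand-ins, not the paper's actual components.

```python
import numpy as np

def behavior_log_density(state, action):
    """Toy stand-in for a learned density model over (state, action) pairs.
    Here: a fixed standard Gaussian centered at the origin, for illustration only."""
    z = np.concatenate([state, action])
    return -0.5 * np.sum(z ** 2) - 0.5 * len(z) * np.log(2 * np.pi)

def regularized_reward(reward, state, action, tau=-8.0, penalty=10.0):
    """Subtract a penalty when (state, action) falls below a log-density threshold,
    leaving in-distribution pairs untouched."""
    log_p = behavior_log_density(state, action)
    return reward - penalty * max(0.0, tau - log_p)

# An in-distribution action keeps its reward; an extreme (OOD) action is penalized.
s = np.zeros(4)
r_in = regularized_reward(1.0, s, np.zeros(2))
r_ood = regularized_reward(1.0, s, np.full(2, 5.0))
```

A planner maximizing `regularized_reward` instead of the raw model reward trades off return against staying near the data support, which is the core safety mechanism density-regularized methods rely on.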
Problem

Research questions and friction points this paper is trying to address.

Automating weaning of mechanical circulatory support devices for cardiogenic shock patients
Addressing data scarcity and safety constraints in offline reinforcement learning
Developing clinically-aware algorithms for high-stakes medical decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

Clinically-aware OOD-regularized Model-based Policy Optimization
Transformer-based probabilistic digital twin modeling
Safe offline RL with clinical reward shaping
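One way to read the "Transformer-based probabilistic digital twin" bullet: the model predicts a distribution over the next physiological state rather than a point estimate, which is typically trained with a Gaussian negative log-likelihood. A minimal sketch of that training loss follows (the Transformer itself is omitted; the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def gaussian_nll(y_true, mean, log_var):
    """Negative log-likelihood of y_true under a diagonal Gaussian prediction
    with per-dimension mean and log-variance heads."""
    var = np.exp(log_var)
    return 0.5 * np.mean(np.log(2 * np.pi) + log_var + (y_true - mean) ** 2 / var)

# A confident, correct prediction scores lower (better) than an
# equally confident but wrong one.
y = np.array([1.0, 2.0])
good = gaussian_nll(y, mean=np.array([1.0, 2.0]), log_var=np.zeros(2))
bad = gaussian_nll(y, mean=np.array([3.0, 0.0]), log_var=np.zeros(2))
```

Because the loss rewards calibrated variance, the twin can flag states where its own predictions are uncertain, which is what makes probabilistic policy evaluation possible.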
πŸ”Ž Similar Papers
No similar papers found.
Authors
Aysin Tumay, University of California, San Diego
S. Sun, University of California, San Diego
Sonia Fereidooni, University of California, San Diego
Aaron Dumas, California Institute of Technology
Elise Jortberg, Abiomed
Rose Yu, Associate Professor, University of California, San Diego (Machine Learning, Computational Sustainability)