🤖 AI Summary
This study addresses the challenge of ventilator-induced lung injury (VILI) resulting from suboptimal mechanical ventilation settings, a problem exacerbated by existing automated approaches that neglect temporal dynamics and rely excessively on mortality-based rewards, thereby failing to provide early warnings of clinical deterioration. To overcome these limitations, the authors propose T-CQL, a Transformer-based conservative Q-learning framework that, for the first time, integrates Transformers into offline reinforcement learning for ventilator management. T-CQL incorporates uncertainty-aware adaptive conservative regularization, consistency regularization, and a clinically informed reward function combining VILI risk with illness severity. Evaluated online via an interactive digital twin that enables bedside-style strategy assessment, T-CQL significantly outperforms baseline methods, exhibiting improved safety, efficacy, and clinical feasibility in ventilatory control.
📝 Abstract
Mechanical ventilation (MV) is a life-saving intervention for patients with acute respiratory failure (ARF) in the ICU. However, inappropriate ventilator settings can cause ventilator-induced lung injury (VILI), and clinicians' workload has been shown to be directly linked to patient outcomes. Hence, MV should be personalized and automated to improve patient outcomes. Previous attempts to personalize and automate MV include traditional supervised learning and offline reinforcement learning (RL) approaches, which often neglect temporal dependencies and rely excessively on mortality-based rewards. As a result, early-stage physiological deterioration and the risk of VILI are not adequately captured. To address these limitations, we propose Transformer-based Conservative Q-Learning (T-CQL), a novel offline RL framework that integrates a Transformer encoder for effective temporal modeling of patient dynamics, adaptive conservative regularization based on uncertainty quantification to ensure safety, and consistency regularization for robust decision-making. We build a clinically informed reward function that incorporates indicators of VILI and a score for the severity of patients' illness. Moreover, previous work predominantly uses Fitted Q-Evaluation (FQE) for RL policy evaluation on static offline data, which is less responsive to dynamic environmental changes and susceptible to distribution shifts. To overcome these evaluation limitations, we use interactive digital twins of ARF patients for online "at the bedside" evaluation. Our results demonstrate that T-CQL consistently outperforms existing state-of-the-art offline RL methods, providing safer and more effective ventilatory adjustments. Our framework demonstrates the potential of Transformer-based models combined with conservative RL strategies as a decision-support tool in critical care.
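The conservative and uncertainty-adaptive ingredients described above can be illustrated with a minimal sketch. This is not the paper's implementation: the discrete-action CQL penalty is the standard conservative term, while `adaptive_alpha` is a hypothetical choice (scaling the penalty weight by Q-ensemble disagreement) standing in for whatever uncertainty quantification T-CQL actually uses.

```python
import math
import statistics

def cql_penalty(q_values: list[float], logged_action: int) -> float:
    """Standard CQL conservative term for a discrete action space:
    logsumexp_a Q(s, a) - Q(s, a_logged).

    Penalizes high Q-values for actions not seen in the offline data
    relative to the clinician's logged action; always >= 0.
    """
    m = max(q_values)  # stabilize logsumexp
    logsumexp = m + math.log(sum(math.exp(q - m) for q in q_values))
    return logsumexp - q_values[logged_action]

def adaptive_alpha(ensemble_q: list[list[float]], base_alpha: float = 1.0) -> float:
    """Hypothetical uncertainty-aware penalty weight: scale the base
    CQL coefficient by the mean per-action disagreement (population
    std. dev.) across a Q-ensemble, so the policy is more conservative
    in states where the ensemble is less certain.
    """
    n_actions = len(ensemble_q[0])
    disagreement = statistics.mean(
        statistics.pstdev(member[a] for member in ensemble_q)
        for a in range(n_actions)
    )
    return base_alpha * (1.0 + disagreement)

# Example: the logged action already has the highest Q-value,
# so the conservative penalty is small.
q = [1.0, 2.0, 3.0]
penalty = cql_penalty(q, logged_action=2)  # ≈ 0.408
```

In a full training loop, `adaptive_alpha(...) * cql_penalty(...)` would be added to the usual temporal-difference loss; the Transformer encoder's role would be to produce the state representation from which the Q-values are computed.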