Causally-informed Deep Learning towards Explainable and Generalizable Outcomes Prediction in Critical Care

📅 2025-02-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Deep learning–driven Early Warning Score (EWS) systems suffer from poor interpretability and limited out-of-distribution generalizability across clinical centers. To address these challenges, we propose the first causally enhanced deep learning EWS framework for intensive care settings, unifying prediction of six acute deterioration events. Methodologically, we innovatively integrate causal discovery algorithms (PC and NOTEARS) into a graph neural network coupled with attention mechanisms, explicitly modeling causal relationships among clinical variables. This design jointly improves predictive interpretability and robustness to distributional shifts. Evaluated on multicenter real-world ICU data, our model achieves statistically significant accuracy gains over state-of-the-art baselines across all six tasks; it improves cross-population and cross-institution generalization by 12.3%–18.7%. Moreover, it generates clinically meaningful causal intervention pathways—enabling actionable, transparent decision support. This work establishes a novel paradigm for trustworthy, causally grounded AI in critical care.

Technology Category

Application Category

📝 Abstract
Recent advances in deep learning (DL) have prompted the development of high-performing early warning score (EWS) systems, predicting clinical deteriorations such as acute kidney injury, acute myocardial infarction, or circulatory failure. DL models have proven to be powerful tools for various tasks but come with the cost of lacking interpretability and limited generalizability, hindering their clinical applications. To develop a practical EWS system applicable to various outcomes, we propose causally-informed explainable early prediction model, which leverages causal discovery to identify the underlying causal relationships of prediction and thus owns two unique advantages: demonstrating the explicit interpretation of the prediction while exhibiting decent performance when applied to unfamiliar environments. Benefiting from these features, our approach achieves superior accuracy for 6 different critical deteriorations and achieves better generalizability across different patient groups, compared to various baseline algorithms. Besides, we provide explicit causal pathways to serve as references for assistant clinical diagnosis and potential interventions. The proposed approach enhances the practical application of deep learning in various medical scenarios.
Problem

Research questions and friction points this paper is trying to address.

Develop explainable EWS system
Enhance DL model generalizability
Identify causal relationships for predictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causally-informed explainable early prediction model
Leverages causal discovery for prediction
Enhances generalizability across patient groups
🔎 Similar Papers
No similar papers found.
Yuxiao Cheng
Yuxiao Cheng
Tsinghua University
Machine Learning
X
Xinxin Song
Department of Automation, Tsinghua University, Beijing, China
Z
Ziqian Wang
Department of Automation, Tsinghua University, Beijing, China
Q
Qin Zhong
Chinese PLA General Hospital, Beijing, China
Kunlun He
Kunlun He
Professor of medicine, Chinese PLA general hospital
Medical big datacardiology
Jinli Suo
Jinli Suo
Tsinghua University
Computer VisionComputational PhotographyComputational Imaging