Learnable Prompt as Pseudo-Imputation: Rethinking the Necessity of Traditional EHR Data Imputation in Downstream Clinical Prediction

📅 2024-01-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address performance degradation of deep models on electronic health records (EHR) with high missingness rates, this paper proposes Prompt-based Artificial Imputation (PAI), a learnable prompting paradigm that obviates explicit imputation. Unlike conventional approaches relying on auxiliary imputation models, PAI integrates missing-value modeling directly into downstream task optimization via an end-to-end differentiable, learnable prompt mechanism. This implicitly encodes task-specific preferences over missingness patterns, avoiding the introduction of non-ground-truth imputed data. PAI is the first to adapt prompt learning to EHR sequence modeling—e.g., with GRUs or Transformers—and achieves state-of-the-art performance across four real-world EHR datasets and two clinical prediction tasks. It demonstrates significantly enhanced robustness under high missingness and low-data regimes, and supports plug-and-play integration with existing architectures.

📝 Abstract
Analyzing the health status of patients based on Electronic Health Records (EHR) is a fundamental research problem in medical informatics. The presence of extensive missing values in EHR makes it challenging for deep neural networks (DNNs) to directly model a patient's health status. Existing DNN training protocols, including the Impute-then-Regress procedure and the jointly optimized Impute-n-Regress procedure, require additional imputation models to reconstruct missing values. However, the Impute-then-Regress procedure risks injecting imputed, non-real data into downstream clinical prediction tasks, resulting in power loss, biased estimation, and poorly performing models, while the jointly optimized Impute-n-Regress procedure is difficult to generalize due to its complex optimization space and demanding data requirements. Inspired by recent literature on learnable prompts in NLP and CV, this work rethinks the necessity of the imputation model in downstream clinical tasks and proposes Learnable Prompt as Pseudo-Imputation (PAI) as a new training protocol to assist EHR analysis. PAI introduces no imputed data; instead, it constructs a learnable prompt that models the downstream model's implicit preferences for missing values, yielding significant performance improvements for all state-of-the-art EHR analysis models on four real-world datasets across two clinical prediction tasks. Further experimental analysis indicates that PAI is more robust under data insufficiency and high missing rates. More importantly, as a plug-and-play protocol, PAI can be easily integrated into any existing EHR analysis model, as well as models yet to be developed.
Problem

Research questions and friction points this paper is trying to address.

Addresses missing values in EHR for clinical prediction.
Proposes Learnable Prompt as Pseudo-Imputation (PAI) for EHR analysis.
Improves model performance without traditional data imputation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable Prompt replaces traditional EHR imputation.
PAI models implicit preferences for missing values.
PAI improves robustness in data insufficiency scenarios.
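The learnable-prompt mechanism summarized above can be illustrated with a minimal sketch. This is not the authors' implementation: it stands in a toy logistic-regression predictor for the downstream EHR model, and all variable names (`p` for the prompt, `mask` for missingness) are illustrative assumptions. The key idea it demonstrates is that missing entries are filled with a learnable prompt vector that receives gradients from the downstream task loss, so no auxiliary imputation model is ever trained.

```python
import numpy as np

# Hedged sketch of the PAI idea: keep a learnable "prompt" vector p and
# substitute p[j] wherever feature j is missing; p is optimized jointly
# with the downstream predictor instead of running a separate imputer.
rng = np.random.default_rng(0)
n, d = 64, 5
X = rng.normal(size=(n, d))
mask = rng.random((n, d)) < 0.3                  # True = value is missing
y = (X @ rng.normal(size=d) > 0).astype(float)   # toy binary labels

w = np.zeros(d)   # downstream linear classifier (stand-in for GRU/Transformer)
p = np.zeros(d)   # learnable prompt: one pseudo-value per feature
lr, losses = 0.5, []

for _ in range(300):
    Xf = np.where(mask, p, X)                    # fill missing slots with prompt
    prob = 1.0 / (1.0 + np.exp(-(Xf @ w)))       # sigmoid predictions
    losses.append(-np.mean(y * np.log(prob + 1e-9)
                           + (1 - y) * np.log(1 - prob + 1e-9)))
    g = (prob - y) / n                           # dBCE/dlogits
    grad_w = Xf.T @ g
    # Gradient reaches p only through missing positions, scaled by w.
    grad_p = w * (mask * g[:, None]).sum(axis=0)
    w -= lr * grad_w
    p -= lr * grad_p

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because the prompt is just an extra parameter vector fed through the same forward pass, the same substitution step can be bolted onto any existing architecture, which is the plug-and-play property the paper emphasizes.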
Weibin Liao
Peking University
Large Language Model · Reinforcement Learning · Medical Image Analysis
Yinghao Zhu
The University of Hong Kong
Data Mining · AI for Healthcare
Zixiang Wang
Peking University
AI for Healthcare
Xu Chu
National Engineering Research Center for Software Engineering, Peking University, Beijing, China
Yasha Wang
Key Laboratory of High Confidence Software Technologies, Ministry of Education, Beijing, China
Liantao Ma
Key Laboratory of High Confidence Software Technologies, Ministry of Education, Beijing, China