Representation Learning of Lab Values via Masked AutoEncoder

📅 2025-01-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the bias and reduced generalizability that high rates of missing laboratory test values introduce into clinical prediction from electronic health records (EHRs), this paper proposes Lab-MAE, a transformer-based masked autoencoder designed for continuous, time-stamped laboratory measurements. Lab-MAE jointly models numeric test values and their timestamps via a structured temporal encoding scheme that explicitly captures dynamic dependencies. The authors also examine the "subsequent-test-value" shortcut, where models exploit follow-up test results during imputation, and show that Lab-MAE remains robust when such future values are unavailable. Evaluated on MIMIC-IV, Lab-MAE outperforms strong baselines such as XGBoost across RMSE, R², and Wasserstein distance, while achieving equitable imputation performance across demographic subgroups. Notably, the paper also measures and compares the carbon footprint of Lab-MAE against the XGBoost baseline, a step toward sustainability-aware healthcare AI.

📝 Abstract
Accurate imputation of missing laboratory values in electronic health records (EHRs) is critical to enable robust clinical predictions and reduce biases in AI systems in healthcare. Existing methods, such as variational autoencoders (VAEs) and decision tree-based approaches like XGBoost, struggle to model the complex temporal and contextual dependencies in EHR data, particularly in underrepresented groups. In this work, we propose Lab-MAE, a novel transformer-based masked autoencoder framework that leverages self-supervised learning for the imputation of continuous sequential lab values. Lab-MAE introduces a structured encoding scheme that jointly models laboratory test values and their corresponding timestamps, enabling explicit capture of temporal dependencies. Empirical evaluation on the MIMIC-IV dataset demonstrates that Lab-MAE significantly outperforms state-of-the-art baselines such as XGBoost across multiple metrics, including root mean square error (RMSE), R-squared (R²), and Wasserstein distance (WD). Notably, Lab-MAE achieves equitable performance across demographic groups of patients, advancing fairness in clinical predictions. We further investigate the role of follow-up laboratory values as potential shortcut features, revealing Lab-MAE's robustness in scenarios where such data is unavailable. The findings suggest that our transformer-based architecture, adapted to the characteristics of EHR data, offers a foundation for more accurate and fair clinical imputation models. In addition, we measure and compare the carbon footprint of Lab-MAE with the baseline XGBoost model, highlighting its environmental requirements.
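The abstract's core idea, masking some lab values and reconstructing them from tokens that pair each numeric value with an encoding of its timestamp, can be illustrated with a minimal sketch. Everything below is a hypothetical toy illustration, not the paper's actual code: the sinusoidal time encoding, the 25%-style masking, and the mean-imputation "reconstruction" (standing in for the transformer encoder-decoder) are all assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sequence of 8 lab measurements: a value and an hour-offset each.
# (Hypothetical data, e.g. creatinine in mg/dL over a 72-hour stay.)
values = rng.normal(loc=1.0, scale=0.2, size=8)
times = np.sort(rng.uniform(0.0, 72.0, size=8))

def time_encoding(t, dim=4):
    """Sinusoidal encoding of continuous timestamps -- one common way to
    embed time in a transformer (an assumption, not the paper's scheme)."""
    freqs = 1.0 / (10.0 ** (np.arange(dim // 2) * 2 / dim))
    ang = np.outer(t, freqs)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)

# Structured token: numeric value concatenated with its time encoding.
tokens = np.concatenate([values[:, None], time_encoding(times)], axis=1)

# MAE-style masking: hide a fixed subset of values; a real model is trained
# to reconstruct them from the visible tokens.
mask = np.zeros(8, dtype=bool)
mask[[2, 5]] = True
masked_tokens = tokens.copy()
masked_tokens[mask, 0] = 0.0  # zero stands in for a learned mask token

# Placeholder "reconstruction": mean of the visible values. Lab-MAE would
# instead run the masked tokens through a transformer encoder-decoder.
recon = np.full(mask.sum(), values[~mask].mean())
rmse = np.sqrt(np.mean((recon - values[mask]) ** 2))
print(f"masked {mask.sum()} of {len(values)} values, baseline RMSE = {rmse:.3f}")
```

The gap between this mean-imputation baseline and a learned reconstruction is exactly what metrics like RMSE and Wasserstein distance quantify in the paper's evaluation.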
Problem

Research questions and friction points this paper is trying to address.

Electronic Health Records
Missing Data Imputation
Medical Prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lab-MAE
Temporal Relationship Understanding
Environmental Consideration
David Restrepo
Massachusetts Institute of Technology (MIT), USA; Université Paris-Saclay, France
Chenwei Wu
University of Michigan, USA
Yueran Jia
Northeastern University, USA
Jaden K. Sun
Massachusetts Institute of Technology (MIT), USA
Jack Gallifant
AIM @ Harvard-MGB
Catherine G. Bielick
Massachusetts Institute of Technology (MIT), USA; Harvard Medical School, USA; Beth Israel Deaconess Medical Center, USA
Yugang Jia
Massachusetts Institute of Technology (MIT), USA
Leo A. Celi
Massachusetts Institute of Technology (MIT), USA; Harvard Medical School, USA; Beth Israel Deaconess Medical Center, USA