🤖 AI Summary
Foundation models for intensive care time-series analysis remain scarce, hindered by limited labeled data and poor cross-institutional generalizability. Method: This paper introduces a self-supervised foundation-model framework tailored to this domain, built on a Bi-Axial Transformer (BAT) architecture. It leverages multi-source electronic health records for self-supervised pretraining and supports cross-dataset fine-tuning. Contribution/Results: Its core innovations are a dual-axis modeling mechanism designed for noisy, asynchronous, multivariate clinical time series, coupled with a lightweight transfer strategy. In mortality prediction, the model outperforms supervised baselines in small-sample settings (<5,000 instances), demonstrating strong generalization and robustness. These results support its feasibility for clinical deployment in resource-constrained and multi-center environments.
📝 Abstract
Domain-specific foundation models for healthcare have expanded rapidly in recent years, yet foundation models for critical care time series remain relatively underexplored due to the limited size and availability of datasets. In this work, we introduce an early-stage pre-trained foundation model for critical care time series based on the Bi-Axial Transformer (BAT), trained on pooled electronic health record datasets. We demonstrate effective transfer learning by fine-tuning the model for mortality prediction on a dataset distinct from the training sources, where it outperforms supervised baselines, particularly for small datasets ($<5,000$ samples). These contributions highlight the potential of self-supervised foundation models for critical care time series to support generalizable and robust clinical applications in resource-limited settings.
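The dual-axis idea behind the Bi-Axial Transformer can be sketched as alternating attention over the two axes of a multivariate clinical time series: once across time steps for each variable, and once across variables at each time step. The sketch below is a minimal NumPy illustration of that pattern only; the function names (`bi_axial_block`, `attention`) are our own, and the paper's actual BAT block additionally involves learned projections, residual connections, normalization, and handling of missing or asynchronous measurements, none of which are shown here.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product self-attention over the second-to-last axis;
    # leading axes are treated as batch dimensions
    d = q.shape[-1]
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

def bi_axial_block(x):
    """Illustrative dual-axis pass over x of shape (T, V, d):
    attend across time for each variable, then across variables
    at each time step. Sketch only, not the paper's exact block."""
    # time-axis attention: variables act as the batch dimension
    xt = np.swapaxes(x, 0, 1)   # (V, T, d)
    xt = attention(xt, xt, xt)  # attend over the T axis
    x = np.swapaxes(xt, 0, 1)   # back to (T, V, d)
    # variable-axis attention: time steps act as the batch dimension
    return attention(x, x, x)   # attend over the V axis
```

Alternating the attention axis keeps the cost per block at $O(T^2 V + V^2 T)$ rather than the $O(T^2 V^2)$ of joint attention over all (time, variable) pairs, which is one common motivation for axial designs on dense 2-D inputs.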