🤖 AI Summary
Models that predict the need for invasive mechanical ventilation (IMV) in ICU patients generalize poorly across multi-center settings because of distributional shifts. To address this, we propose AdaTTT, the first test-time adaptation (TTA) framework tailored to multi-center IMV prediction. AdaTTT introduces a dynamic-masking self-supervised pretext task guided by an information-theoretic error bound, and jointly leverages prototype learning with partial optimal transport to achieve partial discriminative feature alignment. Crucially, it requires neither source-domain data nor labels, and completes lightweight adaptation via a single forward-backward pass on the target center alone. Evaluated on multi-center MIMIC-III, MIMIC-IV, and eICU cohorts, AdaTTT consistently outperforms existing TTA baselines, with average AUC gains of 2.3–4.1 percentage points. The method offers a robust, computationally efficient, and privacy-preserving solution for clinical deployment.
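The single forward-backward adaptation step can be sketched as follows. This is a minimal illustration under simplifying assumptions: the linear encoder/decoder, batch shapes, and learning rate are hypothetical stand-ins, not AdaTTT's actual architecture. The point is the mechanism: at test time, one gradient step on a self-supervised reconstruction loss over an unlabeled target-center batch, with no source data and no labels.

```python
import numpy as np

# Hypothetical shared encoder W (feeds both the main classifier and the
# pretext head) and a decoder D for the reconstruction pretext task.
rng = np.random.default_rng(0)
n, d, h = 16, 8, 4                        # batch size, feature dim, hidden dim
W = rng.normal(scale=0.1, size=(d, h))    # shared encoder weights
D = rng.normal(scale=0.1, size=(h, d))    # decoder (pretext head)
X = rng.normal(size=(n, d))               # unlabeled target-center batch

def recon_loss(W, D, X):
    """Self-supervised pretext loss: mean squared reconstruction error."""
    return np.mean((X @ W @ D - X) ** 2)

# One forward-backward pass on the encoder: analytic gradient of the MSE.
E = X @ W @ D - X                         # forward: reconstruction residual
grad_W = 2.0 / E.size * X.T @ E @ D.T     # backward: dLoss/dW
lr = 0.1                                  # illustrative step size
W_adapted = W - lr * grad_W               # single adaptation step

loss_before = recon_loss(W, D, X)
loss_after = recon_loss(W_adapted, D, X)
```

Because only one pass over the target batch is needed, the per-deployment cost stays close to plain inference, which is what makes this style of adaptation attractive for clinical settings.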
📝 Abstract
Accurate prediction of the need for invasive mechanical ventilation (IMV) in intensive care unit (ICU) patients is crucial for timely intervention and resource allocation. However, variability in patient populations, clinical practices, and electronic health record (EHR) systems across institutions introduces domain shifts that degrade the generalization performance of predictive models during deployment. Test-Time Training (TTT) has emerged as a promising approach to mitigate such shifts by adapting models dynamically during inference without requiring labeled target-domain data. In this work, we introduce Adaptive Test-Time Training (AdaTTT), an enhanced TTT framework tailored for EHR-based IMV prediction in ICU settings. We begin by deriving information-theoretic bounds on the test-time prediction error and show that this error is bounded by the uncertainty between the main and auxiliary tasks. To enhance their alignment, we introduce a self-supervised learning framework with two pretext tasks, reconstruction and masked feature modeling, optimized through a dynamic masking strategy that emphasizes features critical to the main task. Additionally, to improve robustness against domain shifts, we incorporate prototype learning and employ Partial Optimal Transport (POT) for flexible, partial feature alignment while maintaining clinically meaningful patient representations. Experiments across multi-center ICU cohorts demonstrate competitive classification performance on different test-time adaptation benchmarks.
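The dynamic masking idea described above can be sketched as follows. This is a hedged illustration, not the paper's exact formulation: the importance proxy (main-task weight magnitudes), the mask-rate scaling, and the trivial mean-imputation "pretext head" are all hypothetical. It shows the core mechanism: features more important to the main task are masked more often, so the masked-feature-modeling loss concentrates on exactly the signal the main predictor relies on.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 32, 6
X = rng.normal(size=(n, d))               # batch of EHR-style feature vectors

# Hypothetical per-feature importance proxy: magnitude of main-task
# classifier weights (the paper derives importance differently).
w_main = np.array([2.0, 0.1, 1.5, 0.05, 0.8, 0.02])
p_mask = 0.5 * np.abs(w_main) / np.abs(w_main).max()   # dynamic mask rates

mask = rng.random((n, d)) < p_mask        # Bernoulli mask, per-feature rate
X_masked = np.where(mask, 0.0, X)         # zero out masked entries

# Masked feature modeling: predict masked entries from what remains.
# A per-feature mean of the unmasked values stands in for the pretext head.
observed = np.where(mask, np.nan, X)
col_means = np.nanmean(observed, axis=0)
X_pred = np.where(mask, col_means, X_masked)

# Pretext loss is computed only on the masked positions.
mfm_loss = np.mean((X_pred[mask] - X[mask]) ** 2)
```

Under this scheme, high-importance features (e.g., index 0 above) are hidden far more often than low-importance ones (index 3), which is what ties the auxiliary pretext objective to the main prediction task.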