🤖 AI Summary
Clinical AI systems often suffer performance degradation due to temporal data shifts, such as evolving patient populations, updates to ICD coding standards, and pandemic-related disruptions, yet frequent retraining is hindered by computational costs and privacy constraints. This work proposes the ADAPT framework, which combines summary-level estimates of historical models with a small amount of current data to construct an uncertainty set for future model parameters. Without accessing raw historical data or future labels, ADAPT uses adversarially robust optimization and ensembling to optimize worst-case performance over this set. The approach enables privacy-preserving continual learning and was validated on electronic health records from Mass General Brigham and Duke University Health System spanning 2005–2021, substantially mitigating annual performance decay and maintaining predictive stability through major distribution shifts, including the ICD-9-to-ICD-10 coding transition and the COVID-19 pandemic.
📝 Abstract
Clinical AI systems frequently suffer performance decay after deployment due to temporal data shifts, such as evolving populations, diagnostic coding updates (e.g., ICD-9 to ICD-10), and systemic shocks like the COVID-19 pandemic. Addressing this "aging" effect via frequent retraining is often impractical because of computational costs and privacy constraints. To overcome these hurdles, we introduce Adversarial Drift-Aware Predictive Transfer (ADAPT), a novel framework designed to confer durability against temporal drift with minimal retraining. ADAPT constructs an uncertainty set of plausible future models by combining historical source models with limited current data. By optimizing worst-case performance over this set, it balances current accuracy against robustness to degradation from future drift. Crucially, ADAPT requires only summary-level model estimators from historical periods, preserving data privacy and keeping operation simple. Validated on longitudinal suicide risk prediction using electronic health records from Mass General Brigham (2005–2021) and Duke University Health System, ADAPT demonstrated superior stability across coding transitions and pandemic-induced shifts. By minimizing annual performance decay without labeling future data or retraining, ADAPT offers a scalable pathway for sustaining reliable AI in high-stakes healthcare environments.
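The core idea of optimizing worst-case performance over an uncertainty set built from summary-level historical models can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not the paper's actual ADAPT procedure: the logistic-regression coefficient vectors `theta_hist` play the role of summary-level historical estimators, the uncertainty set is taken to be their convex hull, and "drift scenarios" are crude splits of a small current batch. The paper's adversarially robust optimization and ensembling would be considerably more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical setup (simulated data; illustration only) ---
# Summary-level historical models: coefficient vectors fitted in past
# periods. Note that no raw historical data is needed, only these summaries.
d = 5
theta_hist = [rng.normal(size=d) for _ in range(3)]

# A small batch of labeled current data.
n = 200
X = rng.normal(size=(n, d))
true_theta = 0.5 * theta_hist[0] + 0.5 * theta_hist[2]
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_theta))).astype(float)

def log_loss(theta, X, y):
    """Average logistic loss of a linear model on (X, y)."""
    p = 1 / (1 + np.exp(-X @ theta))
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Drift scenarios: time-ordered splits of the current batch, standing in
# for plausible future shifts over which we want robust performance.
scenarios = np.array_split(np.arange(n), 4)

def worst_case_loss(w):
    """Max loss over scenarios for the convex combination sum_k w_k * theta_k."""
    theta = sum(wk * tk for wk, tk in zip(w, theta_hist))
    return max(log_loss(theta, X[idx], y[idx]) for idx in scenarios)

# Minimax over the uncertainty set via a coarse grid search on the simplex
# (adequate for 3 historical models; a real method would use convex solvers).
best_w, best_val = None, np.inf
grid = np.linspace(0, 1, 21)
for a in grid:
    for b in grid:
        if a + b <= 1:
            w = (a, b, 1 - a - b)
            v = worst_case_loss(w)
            if v < best_val:
                best_w, best_val = w, v

print("minimax weights:", np.round(best_w, 2), "worst-case loss:", round(best_val, 4))
```

The minimax choice trades a little accuracy on any single scenario for stability across all of them, which is the sense in which the abstract's "worst-case performance" criterion hedges against future drift.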