Adversarial Drift-Aware Predictive Transfer: Toward Durable Clinical AI

📅 2026-01-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Clinical AI systems often suffer performance degradation due to temporal data shifts, such as evolving patient populations, updates to ICD coding standards, and pandemic-related disruptions, yet frequent retraining is hindered by computational costs and privacy constraints. This work proposes the ADAPT framework, which combines summary-level estimates from historical models with a small amount of current data to construct an uncertainty set over future model parameters. Without accessing raw historical data or future labels, ADAPT employs adversarially robust optimization and ensemble techniques to optimize worst-case performance over this set. The approach enables privacy-preserving continual learning and was validated on electronic health records from Mass General Brigham and Duke University Health Systems spanning 2005–2021, substantially mitigating annual performance decay and maintaining predictive stability through major distribution shifts, including the ICD-9 to ICD-10 transition and the COVID-19 pandemic.
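The uncertainty-set construction described above can be sketched roughly as follows. This is an illustrative approximation, not the paper's actual construction: it treats plausible future models as convex combinations of summary-level historical coefficient vectors, inflated by a bounded drift perturbation. All function names and parameters are hypothetical.

```python
import numpy as np

def sample_plausible_models(hist_coefs, drift_radius, n_samples, rng):
    """Draw candidate future coefficient vectors from the convex hull
    of historical model summaries, plus a bounded random drift term."""
    H = np.vstack(hist_coefs)                      # (k, p) historical summaries
    k, p = H.shape
    # Dirichlet weights yield points inside the convex hull of the rows of H.
    w = rng.dirichlet(np.ones(k), size=n_samples)  # (n_samples, k)
    centers = w @ H                                # (n_samples, p)
    # Bounded perturbation stands in for residual temporal drift.
    noise = rng.normal(size=(n_samples, p))
    norms = np.maximum(np.linalg.norm(noise, axis=1, keepdims=True), 1e-12)
    noise *= drift_radius / norms
    return centers + noise

rng = np.random.default_rng(0)
hist = [np.array([1.0, -0.5]), np.array([0.8, -0.3]), np.array([1.2, -0.6])]
cands = sample_plausible_models(hist, drift_radius=0.1, n_samples=100, rng=rng)
```

Only the summary-level vectors in `hist` are needed here, consistent with the privacy constraint that raw historical records are never accessed.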

📝 Abstract
Clinical AI systems frequently suffer performance decay post-deployment due to temporal data shifts, such as evolving populations, diagnostic coding updates (e.g., ICD-9 to ICD-10), and systemic shocks like the COVID-19 pandemic. Addressing this "aging" effect via frequent retraining is often impractical due to computational costs and privacy constraints. To overcome these hurdles, we introduce Adversarial Drift-Aware Predictive Transfer (ADAPT), a novel framework designed to confer durability against temporal drift with minimal retraining. ADAPT innovatively constructs an uncertainty set of plausible future models by combining historical source models and limited current data. By optimizing worst-case performance over this set, it balances current accuracy with robustness against degradation due to future drifts. Crucially, ADAPT requires only summary-level model estimators from historical periods, preserving data privacy and ensuring operational simplicity. Validated on longitudinal suicide risk prediction using electronic health records from Mass General Brigham (2005–2021) and Duke University Health Systems, ADAPT demonstrated superior stability across coding transitions and pandemic-induced shifts. By minimizing annual performance decay without labeling or retraining future data, ADAPT offers a scalable pathway for sustaining reliable AI in high-stakes healthcare environments.
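The worst-case (minimax) optimization idea in the abstract can be sketched under strong simplifying assumptions: the uncertainty set is reduced to a finite list of candidate coefficient vectors, and robustness is proxied by the largest squared parameter distance to any candidate, traded off against logistic loss on the small current sample. The objective, function names, and hyperparameters below are illustrative, not the paper's algorithm.

```python
import numpy as np

def objective(theta, X, y, candidates, lam):
    """(1 - lam) * current-data logistic loss + lam * worst-case distance
    to any candidate future model (a crude robustness proxy)."""
    cur = np.mean(np.log1p(np.exp(-y * (X @ theta))))
    worst = max(np.sum((theta - c) ** 2) for c in candidates)
    return (1 - lam) * cur + lam * worst

def robust_fit(X, y, candidates, lam=0.5, lr=0.05, steps=300):
    """Subgradient descent on the minimax objective; returns the best
    iterate seen, since subgradient steps are not monotone."""
    theta = np.zeros(X.shape[1])
    best, best_val = theta.copy(), objective(theta, X, y, candidates, lam)
    for _ in range(steps):
        z = X @ theta
        s = -y / (1.0 + np.exp(y * z))              # d/dz of log1p(exp(-y z))
        g = (1 - lam) * (X.T @ s) / len(y)
        worst_c = max(candidates, key=lambda c: np.sum((theta - c) ** 2))
        g += lam * 2.0 * (theta - worst_c)          # subgradient of the max term
        theta = theta - lr * g
        val = objective(theta, X, y, candidates, lam)
        if val < best_val:
            best, best_val = theta.copy(), val
    return best, best_val

# Synthetic usage: labels in {-1, +1}, candidates near the true model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true = np.array([1.0, -1.0, 0.5])
y = np.where(X @ true + 0.1 * rng.normal(size=200) > 0, 1.0, -1.0)
cands = [true + 0.1 * rng.normal(size=3) for _ in range(4)]
theta, val = robust_fit(X, y, cands)
```

Raising `lam` pushes the fit toward the candidate set (hedging against drift) at the cost of current-period accuracy, mirroring the accuracy-versus-durability trade-off the abstract describes.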
Problem

Research questions and friction points this paper is trying to address.

temporal data shift
clinical AI durability
performance decay
adversarial drift
model aging
Innovation

Methods, ideas, or system contributions that make the work stand out.

temporal drift
adversarial robustness
predictive transfer
model durability
privacy-preserving
Xin Xiong
University of Southern California
Image Processing, Computer Vision, Video Compression
Zijian Guo
Associate Professor of Statistics, Rutgers University
High-dimensional Statistics, Causal Inference, Post-selection Inference
Haobo Zhu
Department of Biostatistics & Bioinformatics, Duke University, USA
Chuan Hong
Department of Biostatistics & Bioinformatics, Duke University, USA
Jordan W. Smoller
Department of Psychiatry, Harvard Medical School, USA; Department of Psychiatry, Massachusetts General Hospital, USA
Tianxi Cai
Harvard University
Statistics, Biostatistics, Modeling, Prediction, Genomics
Molei Liu
Peking University
High-dimensional Statistics, Statistical Machine Learning, Semiparametric Theory, Model-X