🤖 AI Summary
Electronic health record (EHR) phenotyping suffers from high annotation noise and prohibitive manual curation costs, limiting downstream risk prediction performance. To address this, we propose the first reinforcement learning (RL)-driven active learning framework that uses downstream prediction performance as direct feedback, enabling dynamic, joint optimization of phenotype correction and sample selection and overcoming the limitations of conventional heuristic-based approaches. The framework adaptively integrates multiple querying strategies to maximize labeling efficiency and model performance under a constrained annotation budget. Evaluated on a Duke University Health System cohort, it improves logistic regression AUC from 0.774 to 0.805 and penalized Cox C-index from 0.718 to 0.752, significantly outperforming all baselines. The core innovation is employing downstream predictive performance directly as the RL reward signal, achieving end-to-end joint optimization of phenotype definition and risk modeling.
📝 Abstract
Objective: Electronic health record (EHR) phenotyping often relies on noisy proxy labels, which undermine the reliability of downstream risk prediction. Active learning can reduce annotation costs, but most methods rely on fixed heuristics and do not ensure that phenotype refinement improves prediction performance. Our goal was to develop a framework that directly uses downstream prediction performance as feedback to guide phenotype correction and sample selection under constrained labeling budgets.

Materials and Methods: We propose Reinforcement-Enhanced Label-Efficient Active Phenotyping (RELEAP), a reinforcement learning-based active learning framework. RELEAP adaptively integrates multiple querying strategies and, unlike prior methods, updates its policy based on feedback from downstream models. We evaluated RELEAP on a de-identified Duke University Health System (DUHS) cohort (2014–2024) for incident lung cancer risk prediction, using logistic regression and penalized Cox survival models. Performance was benchmarked against noisy-label baselines and single-strategy active learning.

Results: RELEAP consistently outperformed all baselines. Logistic AUC increased from 0.774 to 0.805 and survival C-index from 0.718 to 0.752. Using downstream performance as feedback, RELEAP produced smoother and more stable gains than heuristic methods under the same labeling budget.

Discussion: By linking phenotype refinement to prediction outcomes, RELEAP learns which samples most improve downstream discrimination and calibration, offering a more principled alternative to fixed active learning rules.

Conclusion: RELEAP optimizes phenotype correction through downstream feedback, offering a scalable, label-efficient paradigm that reduces manual chart review and enhances the reliability of EHR-based risk prediction.
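The feedback loop the abstract describes (select a sample, correct its noisy label, retrain the downstream model, and reward the policy by the resulting performance change) can be sketched as a minimal toy. Everything below is illustrative and assumed, not the authors' implementation: the synthetic cohort, the label-noise rate, the plain logistic model, and the epsilon-greedy bandit standing in for RELEAP's RL policy are all placeholders. The only element taken from the abstract is the reward signal itself, the change in downstream validation AUC after each corrected label.

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(y, s):
    """Rank-based AUC (Mann-Whitney U); assumes continuous, untied scores."""
    order = np.argsort(s)
    ranks = np.empty(len(s)); ranks[order] = np.arange(1, len(s) + 1)
    n_pos = y.sum(); n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def fit_logistic(X, y, lr=0.1, steps=150):
    """Gradient-descent logistic regression (toy stand-in for the downstream model)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Synthetic cohort: true phenotype labels plus a noisy proxy (25% flips).
n = 600
X = rng.normal(size=(n, 2))
y_true = (X @ np.array([2.0, -1.0]) + rng.normal(scale=0.5, size=n) > 0).astype(float)
flip = rng.random(n) < 0.25
y_obs = np.where(flip, 1 - y_true, y_true)

train, val = np.arange(0, 400), np.arange(400, n)
labels = y_obs[train].copy()                 # working labels, corrected as we query

def downstream_auc():
    w = fit_logistic(X[train], labels)
    return auc(y_true[val], X[val] @ w)

strategies = ["uncertainty", "random"]       # two hypothetical querying strategies
value = {s: 0.0 for s in strategies}         # running mean reward per strategy
count = {s: 0 for s in strategies}
queried = np.zeros(len(train), dtype=bool)
prev = downstream_auc()

for step in range(40):                       # annotation budget: 40 chart reviews
    # Epsilon-greedy bandit stands in for the RL policy over strategies.
    if rng.random() < 0.2:
        s = strategies[rng.integers(len(strategies))]
    else:
        s = max(strategies, key=lambda k: value[k])
    w = fit_logistic(X[train], labels)
    p = 1.0 / (1.0 + np.exp(-np.clip(X[train] @ w, -30, 30)))
    cand = np.flatnonzero(~queried)
    if s == "uncertainty":
        i = cand[np.argmin(np.abs(p[cand] - 0.5))]   # most ambiguous sample
    else:
        i = rng.choice(cand)
    queried[i] = True
    labels[i] = y_true[train][i]             # expert review reveals the true label
    cur = downstream_auc()
    r = cur - prev                           # reward = change in downstream AUC
    prev = cur
    count[s] += 1
    value[s] += (r - value[s]) / count[s]    # incremental mean-reward update
```

Because the reward is the per-query change in held-out AUC, the bandit gradually shifts the remaining budget toward whichever strategy is currently yielding the largest downstream gains, which is the core idea that distinguishes this design from fixed single-heuristic active learning.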