CKD-EHR: Clinical Knowledge Distillation for Electronic Health Records

📅 2025-06-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address two key challenges in EHR-based disease prediction—weak medical knowledge representation and low clinical deployment efficiency—this paper proposes a clinical knowledge-enhanced, multi-granularity attention distillation framework. We employ a fine-tuned Qwen2.5-7B as the teacher model and integrate structured medical knowledge (e.g., ICD codes, clinical guidelines) to generate interpretable soft labels. A novel cross-layer, multi-granularity (token-, visit-, and patient-level) attention distillation mechanism transfers knowledge to a lightweight BERT student model. Our approach is the first to jointly achieve interpretable modeling and efficient inference. Evaluated on MIMIC-III, it achieves a 9% improvement in diagnostic accuracy, a 27% gain in F1-score, and a 22.2× speedup in inference latency—significantly enhancing clinical practicality and deployment timeliness.

📝 Abstract
Electronic Health Record (EHR)-based disease prediction models have demonstrated significant clinical value in promoting precision medicine and enabling early intervention. However, existing large language models face two major challenges: insufficient representation of medical knowledge and low efficiency in clinical deployment. To address these challenges, this study proposes the CKD-EHR (Clinical Knowledge Distillation for EHR) framework, which achieves efficient and accurate disease risk prediction through knowledge distillation. Specifically, the large language model Qwen2.5-7B is first fine-tuned on medical knowledge-enhanced data to serve as the teacher model. The teacher then generates interpretable soft labels, and a multi-granularity attention distillation mechanism transfers the distilled knowledge to a lightweight BERT student model. Experimental results show that on the MIMIC-III dataset, CKD-EHR significantly outperforms the baseline model: diagnostic accuracy is increased by 9%, F1-score is improved by 27%, and a 22.2× inference speedup is achieved. This solution not only greatly improves resource utilization efficiency but also enhances the accuracy and timeliness of diagnosis, providing a practical technical approach for resource optimization in clinical settings. The code and data for this research are available at https://github.com/209506702/CKD_EHR.
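The paper does not publish its exact loss formulation here, but the recipe the abstract describes — soft labels from a teacher, plus alignment of attention maps at several granularities — follows the standard knowledge-distillation pattern. Below is a minimal numerical sketch under that assumption; the function names (`distillation_loss`, `attention_alignment_loss`) and the single-map attention term are hypothetical simplifications, not the authors' code.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax, computed stably."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      T=2.0, alpha=0.5):
    """Soft-label KD: alpha-weighted sum of KL(teacher || student) at
    temperature T (scaled by T^2, per the usual KD convention) and
    cross-entropy against the ground-truth labels."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kd = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))
          ).sum(axis=-1).mean() * T * T
    p_hard = softmax(student_logits)
    ce = -np.log(p_hard[np.arange(len(hard_labels)), hard_labels]
                 + 1e-12).mean()
    return alpha * kd + (1.0 - alpha) * ce

def attention_alignment_loss(student_attn, teacher_attn):
    """MSE between student and teacher attention maps. In the paper's
    multi-granularity scheme one such term would exist per level
    (token, visit, patient); this sketch shows a single map."""
    s = np.asarray(student_attn, dtype=float)
    t = np.asarray(teacher_attn, dtype=float)
    return float(np.mean((s - t) ** 2))
```

In a full training loop the two terms would be summed with tunable weights; when the student's logits match the teacher's exactly, the KD term vanishes and only the hard-label cross-entropy remains.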
Problem

Research questions and friction points this paper is trying to address.

Enhances medical knowledge representation in EHR models
Improves efficiency of clinical deployment for prediction
Boosts disease risk prediction accuracy and speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned Qwen2.5-7B as teacher model
Multi-granularity attention distillation mechanism
Lightweight BERT student model for efficiency