🤖 AI Summary
Cardiovascular magnetic resonance (CMR) reports contain sensitive patient data, necessitating privacy-preserving automated diagnostic classification without data leakage.
Method: We propose a zero-data-exfiltration solution based on locally deployed open-weight large language models (LLMs), evaluating nine models—including Gemma2, Qwen2.5, and DeepseekR1—on real-world clinical free-text CMR reports for diagnostic information extraction and disease classification. Performance is quantified using accuracy, precision, recall, and F1 score, with confusion matrices used to examine misclassification patterns.
Contribution/Results: This work demonstrates that open-source LLMs can match or exceed expert performance in CMR report classification: the top-performing model achieves an average F1 score of 0.98 versus 0.94 for a board-certified cardiologist, the top four models outperform the cardiologist across all evaluation metrics, and all but two of the remaining models (Mistral and DeepseekR1-7B) attain average F1 scores above 0.93. These results support the clinical feasibility and scalability of lightweight, accurate, privacy-preserving, locally deployed AI-assisted diagnosis.
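The zero-data-exfiltration workflow can be sketched as a zero-shot prompt sent to a locally hosted model, so that report text never leaves the machine. This is a minimal illustration only: the category labels, model name, prompt wording, and the Ollama HTTP endpoint are assumptions for the sketch, not the study's actual configuration.

```python
import json
import urllib.request

# Illustrative diagnostic categories -- not the study's actual classification scheme.
CATEGORIES = ["dilated cardiomyopathy", "hypertrophic cardiomyopathy",
              "myocarditis", "normal"]

def build_prompt(report_text, categories):
    """Zero-shot classification prompt for a free-text CMR report."""
    return (
        "You are a cardiologist. Read the CMR report below and answer with "
        "exactly one category from this list: " + ", ".join(categories) + ".\n\n"
        "Report:\n" + report_text + "\n\nCategory:"
    )

def parse_category(answer, categories):
    """Map the model's free-text answer onto a known category (None if no match)."""
    answer = answer.lower()
    for c in categories:
        if c in answer:
            return c
    return None

def classify_locally(report_text, model="gemma2:27b",
                     url="http://localhost:11434/api/generate"):
    """Query a locally hosted model via the Ollama HTTP API (hypothetical setup);
    the report text only travels to localhost, never to an external service."""
    payload = json.dumps({"model": model,
                          "prompt": build_prompt(report_text, CATEGORIES),
                          "stream": False}).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return parse_category(json.load(resp)["response"], CATEGORIES)
```

Keeping prompt construction and answer parsing as pure functions makes them testable without a running model server; only `classify_locally` depends on the local inference endpoint.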
📝 Abstract
Purpose: We investigated the use of privacy-preserving, locally deployed, open-source Large Language Models (LLMs) to extract diagnostic information from free-text cardiovascular magnetic resonance (CMR) reports. Materials and Methods: We evaluated nine open-source LLMs on their ability to identify diagnoses and classify patients into various cardiac diagnostic categories based on descriptive findings in 109 clinical CMR reports. Performance was quantified using standard classification metrics, including accuracy, precision, recall, and F1 score. We also employed confusion matrices to examine patterns of misclassification across models. Results: Most open-source LLMs demonstrated exceptional performance in classifying reports into different diagnostic categories. Google's Gemma2 model achieved the highest average F1 score of 0.98, followed by Qwen2.5:32B and DeepseekR1-32B with F1 scores of 0.96 and 0.95, respectively. All other evaluated models attained average scores above 0.93, with Mistral and DeepseekR1-7B being the only exceptions. The top four LLMs outperformed our board-certified cardiologist (F1 score of 0.94) across all evaluation metrics in analyzing CMR reports. Conclusion: Our findings demonstrate the feasibility of implementing open-source, privacy-preserving LLMs in clinical settings for automated analysis of imaging reports, enabling accurate, fast, and resource-efficient diagnostic categorization.
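The reported metrics follow the standard multi-class definitions: per-class precision, recall, and F1 are derived from the confusion matrix, and the macro-averaged F1 is their unweighted mean over classes. A minimal pure-Python sketch (equivalent to what a library such as scikit-learn computes; the diagnostic labels here are placeholders):

```python
from collections import Counter

def confusion_counts(y_true, y_pred):
    """Per-class true-positive, false-positive, and false-negative counts."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, but p was wrong
            fn[t] += 1  # true class t was missed
    return tp, fp, fn

def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores (macro-F1)."""
    tp, fp, fn = confusion_counts(y_true, y_pred)
    f1_scores = []
    for c in labels:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1_scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1_scores) / len(labels)
```

Macro-averaging weights every diagnostic category equally, which is the usual choice when rare diagnoses matter as much as common ones in a clinically imbalanced dataset.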