🤖 AI Summary
Prior work lacks localized, empirical clinical evaluation of open-source multilingual large language models (LLMs) for comorbidity extraction from Italian electronic health records (EHRs) in zero-shot settings. Method: We systematically assess the real-time, zero-shot comorbidity extraction capability of multilingual LLMs on authentic Italian clinical texts, deploying the models locally and benchmarking them against rule-based matching and human annotation. Contribution/Results: Our quantitative analysis shows that state-of-the-art multilingual LLMs significantly underperform rule-based methods and generalize poorly across disease categories, highlighting fundamental limitations in domain-specific clinical text understanding. This study provides empirical evidence and methodological guidance on the applicability of multilingual LLMs to clinical natural language processing in under-resourced languages and specialized medical domains.
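To make the zero-shot setting concrete, here is a minimal sketch of how a comorbidity-extraction prompt for a locally deployed model might be assembled. The wording, label set, and function name are illustrative assumptions, not the study's actual prompt:

```python
# Hypothetical zero-shot prompt builder for comorbidity extraction from an
# Italian clinical note. "Zero-shot" here means the prompt contains task
# instructions and the note itself, but no labeled examples.
LABELS = ["diabete", "ipertensione", "insufficienza renale"]  # illustrative

def build_zero_shot_prompt(note: str) -> str:
    """Assemble an instruction-only prompt listing the target comorbidities."""
    label_list = ", ".join(LABELS)
    return (
        "You are a clinical information-extraction assistant.\n"
        "From the following Italian clinical note, report which of these "
        f"comorbidities are present: {label_list}.\n"
        "Answer with a comma-separated list only; answer 'none' if absent.\n\n"
        f"Note: {note}"
    )

print(build_zero_shot_prompt("Paziente con diabete mellito di tipo 2."))
```

The resulting string would be sent as-is to a locally hosted model; since no examples are included, the model's output depends entirely on its pretrained clinical and Italian-language knowledge.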
📝 Abstract
Large Language Models (LLMs) have become a central topic in AI and NLP, transforming sectors such as healthcare, finance, education, and marketing by improving customer service, automating tasks, generating insights, supporting diagnostics, and personalizing learning experiences. Information extraction from clinical records is a crucial task in digital healthcare. Traditional NLP techniques have long been applied to it, but they often fall short given the complexity and variability of clinical language and the rich implicit semantics of free clinical text. LLMs have recently emerged as a powerful tool for understanding and generating human-like text, making them highly attractive for this task. In this paper, we explore the ability of open-source multilingual LLMs to understand Electronic Health Records (EHRs) in Italian and to extract information from them in real time. Our detailed experimental campaign on comorbidity extraction from EHRs reveals that some LLMs struggle in zero-shot, on-premises settings, while others show significant variation in performance and fail to generalize across diseases when compared against native pattern matching and manual annotation.
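For contrast, the "native pattern matching" baseline family can be sketched as a simple lexicon-driven matcher over Italian clinical text. The lexicon and disease terms below are illustrative assumptions for exposition, not the study's actual rules:

```python
import re

# Hypothetical comorbidity lexicon: each label maps to Italian surface
# patterns. Word boundaries (\b) keep e.g. "scompenso" from matching inside
# unrelated words.
COMORBIDITY_PATTERNS = {
    "diabetes": [r"\bdiabete\b", r"\bdiabetico\b"],
    "hypertension": [r"\bipertensione\b", r"\biperteso\b"],
    "heart_failure": [r"\bscompenso cardiaco\b", r"\binsufficienza cardiaca\b"],
}

def extract_comorbidities(text: str) -> set:
    """Return the set of comorbidity labels whose patterns match the note."""
    lowered = text.lower()
    return {
        label
        for label, patterns in COMORBIDITY_PATTERNS.items()
        if any(re.search(p, lowered) for p in patterns)
    }

note = "Paziente iperteso con diabete di tipo 2, in buon compenso cardiaco."
print(sorted(extract_comorbidities(note)))  # ['diabetes', 'hypertension']
```

Note that "compenso cardiaco" (good cardiac compensation) does not trigger the heart-failure pattern "scompenso cardiaco": such rules are brittle to paraphrase but, as the paper's results suggest, can still be hard for zero-shot LLMs to beat on narrow, well-lexicalized categories.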