🤖 AI Summary
Clinical oncology texts are inherently unstructured and suffer from lexical ambiguity, incomplete information, and challenges in multimodal integration; general-purpose large language models lack the domain-specific knowledge and reasoning capabilities needed to address these issues effectively. To bridge this gap, we propose the first lightweight, cross-lingual, oncology-specialized language model. Our method integrates instruction fine-tuning, retrieval-augmented generation (RAG), and knowledge graph embedding within a scalable, low-resource multi-task joint training framework. Crucially, it achieves efficient cross-lingual transfer with minimal German instruction data. Evaluated across multiple oncology benchmarks, the model achieves substantial improvements in named entity recognition, relation extraction, and pathology report classification. It reduces parameter count by over 60%, accelerates inference by 2.3×, and enables deployment in resource-constrained clinical settings.
📝 Abstract
Clinical oncology generates vast amounts of unstructured data that often contain inconsistencies, missing information, and ambiguities, making it difficult to extract reliable insights for data-driven decision-making. General-purpose large language models (LLMs) struggle with these challenges because they lack domain-specific knowledge and reasoning, including familiarity with specialized clinical terminology, context-dependent interpretation, and multi-modal data integration. We address these issues with an oncology-specialized, efficient, and adaptable NLP framework that combines instruction tuning, retrieval-augmented generation (RAG), and graph-based knowledge integration. Our lightweight models prove effective at oncology-specific tasks, such as named entity recognition (e.g., identifying cancer diagnoses), entity linking (e.g., mapping extracted entities to standardized ontologies), TNM staging, document classification (e.g., cancer subtype classification from pathology reports), and treatment response prediction. Our framework emphasizes adaptability and resource efficiency. We include a minimal set of German instructions, collected at the University Hospital Zurich (USZ), to test whether small amounts of non-English data can effectively transfer knowledge across languages. This approach mirrors our motivation for lightweight models, which balance strong performance with reduced computational cost, making them suitable for resource-limited healthcare settings. We validated our models on oncology datasets, demonstrating strong results in named entity recognition, relation extraction, and document classification.