🤖 AI Summary
This paper systematically reviews the current state, challenges, and deployment pathways of large language models (LLMs) in healthcare. It addresses four core barriers to clinical adoption: poor deployability, high privacy risks, weak task-specific adaptation, and the absence of standardized evaluation frameworks. To tackle these, the study proposes: (1) a hierarchical ethical framework tailored to healthcare that integrates data security, algorithmic fairness, and clinical accountability; (2) a structured, multidimensional taxonomy of clinical LLM tasks encompassing text generation, information extraction, multimodal understanding, and conversational interaction; and (3) an integrated technical pathway combining localized inference, in-context learning, and multimodal modeling. The resulting guide bridges theoretical rigor and practical implementation, offering a methodological foundation and an actionable roadmap for developing trustworthy, evaluable, and deployable clinical AI systems.
📝 Abstract
This paper explores the advancements and applications of language models in healthcare, focusing on their clinical use cases. It examines the evolution from early encoder-based systems requiring extensive fine-tuning to state-of-the-art large language and multimodal models capable of integrating textual and visual data through in-context learning. The analysis emphasizes locally deployable models, which enhance data privacy and operational autonomy, and their applications in tasks such as text generation, classification, information extraction, and conversational systems. The paper also presents a structured organization of clinical tasks and a tiered ethical approach, providing a valuable resource for researchers and practitioners while discussing key challenges related to ethics, evaluation, and implementation.