Medicine on the Edge: Comparative Performance Analysis of On-Device LLMs for Clinical Reasoning

📅 2025-02-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the feasibility and performance bottlenecks of deploying open-source large language models (LLMs) for clinical reasoning on mobile devices. Using the AMEGA clinical reasoning benchmark, we conduct cross-device performance profiling, spanning CPU, GPU, memory, and thermal constraints, across multiple generations of mobile hardware. Our analysis reveals, for the first time, that memory capacity, not computational throughput, is the primary limiting factor for LLM deployment on legacy devices. Among the evaluated models, the lightweight general-purpose Phi-3 Mini achieves a favorable efficiency–accuracy trade-off (response latency <800 ms; accuracy 87.2%), while the medically fine-tuned models Med42 and Aloe attain the highest accuracy (91.5%). Critically, all models were successfully deployed on devices with only 4 GB of RAM, demonstrating the practical viability of on-device LLMs in real-world clinical settings.
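As a back-of-the-envelope check on the 4 GB finding, a quantized model's RAM footprint is dominated by its weights (parameter count × bits per weight ÷ 8). The sketch below is illustrative only: the KV-cache and runtime-overhead constants are assumptions, not figures from the paper; only the approximate Phi-3 Mini parameter count (~3.8B) is public knowledge.

```python
def model_memory_gb(n_params_billions, bits_per_weight,
                    kv_cache_gb=0.5, overhead_gb=0.5):
    """Rough RAM footprint (GB) for running a quantized LLM on-device.

    Weights dominate: n_params * bits_per_weight / 8 bytes.
    KV cache and runtime overhead are coarse illustrative constants.
    """
    weights_gb = n_params_billions * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + kv_cache_gb + overhead_gb

# Phi-3 Mini (~3.8B params) at 4-bit quantization:
print(round(model_memory_gb(3.8, 4), 2))  # → 2.9 (GB), under a 4 GB budget
```

The same arithmetic explains why memory, rather than compute, becomes the binding constraint: an 8B model at 4-bit already needs roughly 5 GB before any cache or OS headroom.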

📝 Abstract
The deployment of Large Language Models (LLMs) on mobile devices offers significant potential for medical applications, enhancing privacy, security, and cost-efficiency by eliminating reliance on cloud-based services and keeping sensitive health data local. However, the performance and accuracy of on-device LLMs in real-world medical contexts remain underexplored. In this study, we benchmark publicly available on-device LLMs using the AMEGA dataset, evaluating accuracy, computational efficiency, and thermal limitations across various mobile devices. Our results indicate that compact general-purpose models like Phi-3 Mini achieve a strong balance between speed and accuracy, while medically fine-tuned models such as Med42 and Aloe attain the highest accuracy. Notably, deploying LLMs on older devices remains feasible, with memory constraints posing a greater challenge than raw processing power. Our study underscores the potential of on-device LLMs for healthcare while emphasizing the need for more efficient inference and models tailored to real-world clinical reasoning.
Problem

Research questions and friction points this paper is trying to address.

Evaluating on-device LLMs for clinical reasoning accuracy.
Assessing computational efficiency and thermal limits on mobile devices.
Weighing memory constraints against raw processing power on older devices.
Innovation

Methods, ideas, or system contributions that make the work stand out.

On-device LLMs for medical applications
Benchmarking with AMEGA dataset
Compact models balance speed and accuracy
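The speed side of the benchmark boils down to timing model responses over a fixed prompt set. A minimal harness can be sketched as follows; `generate` is a hypothetical placeholder for whatever inference call the on-device runtime exposes, not the paper's actual tooling.

```python
import time
import statistics


def benchmark_latency_ms(generate, prompts, warmup=1):
    """Median wall-clock latency (ms) per response.

    `generate` is any callable taking a prompt string; the first
    `warmup` calls are excluded-in-spirit by running them once
    beforehand to warm caches before timing begins.
    """
    for p in prompts[:warmup]:
        generate(p)  # warm-up pass, not timed

    timings = []
    for p in prompts:
        t0 = time.perf_counter()
        generate(p)
        timings.append((time.perf_counter() - t0) * 1000)
    return statistics.median(timings)
```

Reporting the median rather than the mean keeps a single thermally-throttled outlier response from skewing the result, which matters on mobile hardware.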
Leon Nissen
Stanford Mussallem Center for Biodesign, Stanford University, 318 Pasteur Drive, Stanford, California, USA
Philipp Zagar
Stanford Mussallem Center for Biodesign, Stanford University, 318 Pasteur Drive, Stanford, California, USA
Vishnu Ravi
Stanford Mussallem Center for Biodesign, Stanford University, 318 Pasteur Drive, Stanford, California, USA
Aydin Zahedivash
Stanford Mussallem Center for Biodesign, Stanford University, 318 Pasteur Drive, Stanford, California, USA
Lara Marie Reimer
Institute for Digital Medicine, University Hospital Bonn, Venusberg-Campus 1, Germany
Stephan M. Jonas
Institute for Digital Medicine, University Hospital Bonn, Venusberg-Campus 1, Germany
Oliver Aalami
Stanford Mussallem Center for Biodesign, Stanford University, 318 Pasteur Drive, Stanford, California, USA
Paul Schmiedmayer
Stanford University
Digital Health · TSLM · AI · Software Engineering · Mobile Applications