🤖 AI Summary
This study systematically evaluates the feasibility and performance bottlenecks of deploying open-source edge large language models (LLMs) for clinical reasoning on mobile devices. Using the AMEGA clinical reasoning benchmark, we conduct cross-device performance profiling—spanning CPU and GPU utilization, memory usage, and thermal behavior—across multiple generations of mobile hardware. Our analysis reveals, for the first time, that memory capacity, rather than computational throughput, is the primary limiting factor for LLM deployment on legacy devices. Among the evaluated models, the lightweight general-purpose Phi-3 Mini achieves a favorable efficiency–accuracy trade-off (response latency <800 ms; accuracy 87.2%), while the medical-domain fine-tuned models Med42 and Aloe attain the highest accuracy (91.5%). Critically, all models are successfully deployed on devices with only 4 GB of RAM, demonstrating the practical viability of on-device LLMs in real-world clinical settings.
📝 Abstract
The deployment of Large Language Models (LLMs) on mobile devices offers significant potential for medical applications, enhancing privacy, security, and cost-efficiency by eliminating reliance on cloud-based services and keeping sensitive health data local. However, the performance and accuracy of on-device LLMs in real-world medical contexts remain underexplored. In this study, we benchmark publicly available on-device LLMs using the AMEGA dataset, evaluating accuracy, computational efficiency, and thermal limitations across various mobile devices. Our results indicate that compact general-purpose models like Phi-3 Mini achieve a strong balance between speed and accuracy, while medically fine-tuned models such as Med42 and Aloe attain the highest accuracy. Notably, deploying LLMs on older devices remains feasible, with memory constraints posing a greater challenge than raw processing power. Our study underscores the potential of on-device LLMs for healthcare while emphasizing the need for more efficient inference and models tailored to real-world clinical reasoning.