HealthSLM-Bench: Benchmarking Small Language Models for Mobile and Wearable Healthcare Monitoring

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Cloud-based large language models (LLMs) introduce significant privacy risks, high latency, and substantial memory overhead when deployed for mobile and wearable health monitoring. Method: This work presents the first systematic evaluation of lightweight small language models (SLMs) for on-device health prediction, benchmarking zero-shot, few-shot, and instruction-tuned SLMs across diverse real-world health datasets and validating real-time inference via mobile deployment. Contribution/Results: The best fine-tuned SLM achieves near-LLM performance on key predictive metrics (averaging only 3.2% lower accuracy) while delivering 5.8× faster inference, reducing memory footprint by 76%, and eliminating cloud data uploads entirely. These results empirically demonstrate the feasibility of SLMs for edge-intelligent healthcare, establishing a privacy-preserving, efficient, and robust foundation for on-device health modeling.
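To make the zero-shot versus few-shot comparison concrete, here is a minimal, purely illustrative sketch of how such prompts might be constructed for a wearable health-prediction task. The feature names, label set, and `build_prompt` helper are assumptions for illustration, not details from the paper's datasets.

```python
def build_prompt(record, examples=None):
    """Format a wearable-sensor record as a classification prompt.

    record: dict of feature name -> value (e.g. daily step count).
    examples: optional list of (record, label) pairs; when given,
    labelled demonstrations are prepended (few-shot mode).
    """
    lines = ["Task: predict the user's stress level (low/medium/high)."]
    if examples:  # few-shot: include labelled demonstrations first
        for ex_record, label in examples:
            feats = ", ".join(f"{k}={v}" for k, v in ex_record.items())
            lines.append(f"Input: {feats}\nAnswer: {label}")
    feats = ", ".join(f"{k}={v}" for k, v in record.items())
    lines.append(f"Input: {feats}\nAnswer:")  # model completes the label
    return "\n".join(lines)

# Hypothetical sensor summary for one user-day
query = {"steps": 3200, "sleep_hours": 5.1, "resting_hr": 78}
zero_shot = build_prompt(query)
few_shot = build_prompt(
    query,
    examples=[({"steps": 11000, "sleep_hours": 7.8, "resting_hr": 61}, "low")],
)
```

The same prompt text would then be fed to each SLM under test; instruction fine-tuning instead trains the model directly on such input/label pairs.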

📝 Abstract
Mobile and wearable healthcare monitoring plays a vital role in facilitating timely interventions, managing chronic health conditions, and ultimately improving individuals' quality of life. Previous studies on large language models (LLMs) have highlighted their impressive generalization abilities and effectiveness in healthcare prediction tasks. However, most LLM-based healthcare solutions are cloud-based, which raises significant privacy concerns and increases memory usage and latency. To address these challenges, there is growing interest in compact Small Language Models (SLMs), which are lightweight and designed to run locally and efficiently on mobile and wearable devices. Nevertheless, how well these models perform in healthcare prediction remains largely unexplored. We systematically evaluated SLMs on health prediction tasks using zero-shot, few-shot, and instruction fine-tuning approaches, and deployed the best-performing fine-tuned SLMs on mobile devices to evaluate their real-world efficiency and predictive performance in practical healthcare scenarios. Our results show that SLMs can achieve performance comparable to LLMs while offering substantial gains in efficiency and privacy. However, challenges remain, particularly in handling class imbalance and few-shot scenarios. These findings highlight SLMs, though imperfect in their current form, as a promising solution for next-generation, privacy-preserving healthcare monitoring.
Problem

Research questions and friction points this paper is trying to address.

Evaluating Small Language Models for healthcare prediction tasks
Addressing privacy and efficiency limitations of cloud-based LLM solutions
Assessing SLM performance on mobile devices in real-world scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated SLMs using zero-shot, few-shot, and instruction fine-tuning approaches
Deployed optimized SLMs on mobile devices
Achieved LLM-comparable performance with enhanced privacy
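The on-device efficiency evaluation behind the bullets above can be sketched in a simple form: time repeated inference calls, excluding warm-up runs. The `benchmark` helper and the stubbed model callable are illustrative assumptions; an actual deployment would invoke the quantized SLM through a device-specific runtime.

```python
import time

def benchmark(predict_fn, inputs, warmup=2):
    """Return mean per-sample inference latency (seconds).

    predict_fn: any callable mapping a prompt string to a prediction;
    stubbed here, since real SLM loading is runtime- and device-specific.
    warmup: leading calls excluded from timing (caches, JIT, etc.).
    """
    for prompt in inputs[:warmup]:
        predict_fn(prompt)  # warm-up runs, not timed
    latencies = []
    for prompt in inputs:
        t0 = time.perf_counter()
        predict_fn(prompt)
        latencies.append(time.perf_counter() - t0)
    return sum(latencies) / len(latencies)

# Stub model: replace with a call into the deployed on-device SLM.
avg_latency = benchmark(lambda p: "low", ["prompt"] * 10)
```

Comparing `avg_latency` (and peak memory, measured analogously) between an SLM and a cloud LLM baseline yields speedup and footprint figures like those reported in the summary.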