LLMs in Mobile Apps: Practices, Challenges, and Opportunities

📅 2025-02-21
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper addresses core challenges in deploying large language models (LLMs) on mobile devices, namely resource constraints, complex API management, and poor code-architecture compatibility. To this end, we construct the first empirical dataset comprising 149 LLM-augmented Android applications. Using reverse engineering, static and dynamic code auditing, and fine-grained API call tracing, we systematically identify six prevalent LLM integration patterns and four fundamental challenges: high model inference latency, uncontrolled token costs, lack of offline functionality, and permission conflicts. Our study is the first to empirically characterize mobile-specific LLM engineering paradigms and bottlenecks. The findings provide a rigorous empirical foundation for lightweight deployment strategies, privacy-preserving design, and mobile-optimized toolchain development. Furthermore, we propose a practice-oriented integration guideline tailored specifically to the mobile context, bridging the gap between LLM capabilities and real-world mobile application requirements.
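One of the four challenges above, uncontrolled token costs, is often mitigated client-side. As a minimal sketch (hypothetical, not taken from the paper's dataset, with the class name, 4-characters-per-token heuristic, and budget parameter all assumptions), an app might enforce a per-session token budget before issuing any cloud API call:

```java
// Hypothetical per-session token budget guard, illustrating one way a mobile
// app could cap the "uncontrolled token costs" the study identifies.
class TokenBudget {
    private final int maxTokens; // total tokens allowed per session
    private int usedTokens = 0;

    TokenBudget(int maxTokens) {
        this.maxTokens = maxTokens;
    }

    // Rough estimate: ~4 characters per token for English text (assumption).
    static int estimateTokens(String text) {
        return (text.length() + 3) / 4;
    }

    // Records the request against the budget and returns true if it fits;
    // on false, the caller should degrade gracefully (shorter prompt,
    // cached answer, or an offline fallback).
    synchronized boolean tryConsume(String prompt, int maxResponseTokens) {
        int cost = estimateTokens(prompt) + maxResponseTokens;
        if (usedTokens + cost > maxTokens) {
            return false;
        }
        usedTokens += cost;
        return true;
    }

    synchronized int remaining() {
        return maxTokens - usedTokens;
    }
}
```

A guard like this also doubles as a crude latency control: refusing over-budget requests keeps prompts short, which bounds both spend and round-trip time.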

๐Ÿ“ Abstract
The integration of AI techniques has become increasingly popular in software development, enhancing performance, usability, and the availability of intelligent features. With the rise of large language models (LLMs) and generative AI, developers now have access to a wealth of high-quality open-source models and APIs from closed-source providers, enabling easier experimentation and integration of LLMs into various systems. This has also opened new possibilities in mobile application (app) development, allowing for more personalized and intelligent apps. However, integrating LLMs into mobile apps presents unique challenges for developers, particularly regarding mobile device constraints, API management, and code infrastructure. In this project, we constructed a comprehensive dataset of 149 LLM-enabled Android apps and conducted an exploratory analysis to understand how LLMs are deployed and used within mobile apps. This analysis highlights key characteristics of the dataset, prevalent integration strategies, and common challenges developers face. Our findings provide valuable insights for future research and tooling development aimed at enhancing LLM-enabled mobile apps.
Problem

Research questions and friction points this paper is trying to address.

How are LLMs integrated into and used within mobile apps in practice?
What challenges do mobile device constraints (latency, token costs, offline use, permissions) pose for developers?
What can a curated dataset of LLM-enabled Android apps reveal about common integration strategies?
Innovation

Methods, ideas, or system contributions that make the work stand out.

First empirical dataset of 149 LLM-enabled Android apps, built via reverse engineering and static/dynamic code auditing
Identification of six prevalent LLM integration patterns and four recurring mobile-specific challenges
Practice-oriented integration guidelines grounded in the dataset analysis