🤖 AI Summary
To address the privacy and security challenges confronting mobile large language models (LLMs) in resource-constrained edge environments, this paper systematically analyzes data sensitivity, deployment constraints, and prevalent threats (adversarial attacks, membership inference, and side-channel attacks), and presents the first taxonomy of privacy-preserving approaches for mobile LLMs. It proposes a synergistic defense framework integrating differential privacy, federated learning, prompt encryption, secure multi-party computation, and lightweight model design, and empirically evaluates the efficacy and limitations of each technique in balancing privacy guarantees against utility degradation. Its core contributions are threefold: (1) the first comprehensive threat-classification framework tailored to mobile LLMs; (2) an in-depth characterization of the fundamental trade-offs between performance and privacy; and (3) actionable design principles for trustworthy, regulation-compliant, and scalable mobile LLM systems.
📝 Abstract
Mobile Large Language Models (LLMs) are revolutionizing diverse fields such as healthcare, finance, and education with their ability to perform advanced natural language processing tasks on the go. However, deploying these models in mobile and edge environments introduces significant privacy and security challenges due to their resource-intensive nature and the sensitivity of the data they process. This survey provides a comprehensive overview of privacy and security issues associated with mobile LLMs, systematically categorizing existing solutions such as differential privacy, federated learning, and prompt encryption. Furthermore, we analyze vulnerabilities unique to mobile LLMs, including adversarial attacks, membership inference, and side-channel attacks, and offer an in-depth comparison of the effectiveness and limitations of existing defenses. Despite recent advancements, mobile LLMs face unique hurdles in achieving robust security while maintaining efficiency in resource-constrained environments. To bridge this gap, we outline potential applications, discuss open challenges, and suggest future research directions, paving the way for the development of trustworthy, privacy-compliant, and scalable mobile LLM systems.