A Survey: Towards Privacy and Security in Mobile Large Language Models

📅 2025-09-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address privacy and security challenges confronting mobile large language models (LLMs) in resource-constrained edge environments, this paper systematically analyzes data sensitivity, deployment constraints, and prevalent threats—including adversarial attacks, membership inference, and side-channel attacks—and presents the first taxonomy of privacy-preserving approaches for mobile LLMs. We propose a synergistic defense framework integrating differential privacy, federated learning, prompt encryption, secure multi-party computation, and lightweight model design, and empirically evaluate the efficacy and limitations of each technique in balancing privacy guarantees against utility degradation. Our core contributions are threefold: (1) the first comprehensive threat classification framework tailored to mobile LLMs; (2) an in-depth characterization of fundamental trade-offs between performance and privacy; and (3) actionable design principles for trustworthy, regulation-compliant, and scalable mobile LLM systems.
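As an illustration of the privacy–utility trade-off the summary evaluates, the Gaussian mechanism of differential privacy can be sketched in a few lines: clip each update to bound its sensitivity, then add calibrated noise. This is a minimal, generic sketch (the function name, clipping norm, and noise multiplier are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def dp_sanitize(gradient, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Gaussian-mechanism sketch: clip a model update, then add noise.

    A smaller noise_multiplier preserves utility but weakens the privacy
    guarantee; a larger one strengthens privacy at the cost of accuracy,
    which is the trade-off mobile LLM deployments must balance.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(gradient)
    # Clip so that any single example's contribution is bounded by clip_norm.
    clipped = gradient * min(1.0, clip_norm / (norm + 1e-12))
    # Noise scale is proportional to the sensitivity bound just enforced.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=gradient.shape)
    return clipped + noise
```

On resource-constrained devices, the clipping and noise addition add negligible compute, which is why differential privacy is a common baseline for on-device learning.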

📝 Abstract
Mobile Large Language Models (LLMs) are revolutionizing diverse fields such as healthcare, finance, and education with their ability to perform advanced natural language processing tasks on the go. However, the deployment of these models in mobile and edge environments introduces significant challenges related to privacy and security due to their resource-intensive nature and the sensitivity of the data they process. This survey provides a comprehensive overview of privacy and security issues associated with mobile LLMs, systematically categorizing existing solutions such as differential privacy, federated learning, and prompt encryption. Furthermore, we analyze vulnerabilities unique to mobile LLMs, including adversarial attacks, membership inference, and side-channel attacks, offering an in-depth comparison of their effectiveness and limitations. Despite recent advancements, mobile LLMs face unique hurdles in achieving robust security while maintaining efficiency in resource-constrained environments. To bridge this gap, we propose potential applications, discuss open challenges, and suggest future research directions, paving the way for the development of trustworthy, privacy-compliant, and scalable mobile LLM systems.
Problem

Research questions and friction points this paper is trying to address.

Addressing privacy and security challenges in mobile LLMs
Analyzing vulnerabilities like adversarial and side-channel attacks
Bridging efficiency-security gaps in resource-constrained environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differential privacy for data protection
Federated learning for decentralized training
Prompt encryption against adversarial attacks
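The decentralized-training item above can be sketched as a single FedAvg round: clients train locally and share only model parameters, which the server averages weighted by local dataset size. This is an illustrative sketch of the standard FedAvg aggregation step, not code from the survey:

```python
def fedavg(client_weights, client_sizes):
    """Aggregate client parameter vectors by weighted average (FedAvg).

    client_weights: one flat list of parameters per client
    client_sizes:   number of local training examples per client
    Raw data never leaves the device; only parameters are transmitted.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            # Each client's contribution is proportional to its data size.
            global_w[i] += (n / total) * w[i]
    return global_w
```

In practice this aggregation is often combined with the differential-privacy and secure multi-party computation techniques surveyed above, so the server never observes any individual client's raw update.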
Authors
Honghui Xu, Kennesaw State University (Trustworthy AI; Data Privacy and Security in AI; Evidence Theory)
Kaiyang Li, University of Connecticut (Parameter-Efficient Fine-Tuning; Graph Neural Network)
Wei Chen, Nexa AI, Cupertino, CA, USA
Danyang Zheng, School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan, China
Zhiyuan Li, Nexa AI, Cupertino, CA, USA
Zhipeng Cai, Department of Computer Science, Georgia State University, Atlanta, GA, USA