PAE MobiLLM: Privacy-Aware and Efficient LLM Fine-Tuning on the Mobile Device via Additive Side-Tuning

📅 2025-07-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing server-assisted fine-tuning approaches carry high communication overhead and privacy risks (e.g., data, label, and model-parameter leakage), yet resource-constrained mobile devices still need on-device fine-tuning. To resolve this tension, the paper proposes a privacy-preserving, efficient on-device large language model (LLM) fine-tuning framework built on additive side-tuning. The framework introduces three key techniques: activation-cache reuse, pivot-token compression for transmission, and lightweight additive adapters, which together enable forward-computation offloading and difference-based prediction guidance. Critically, the server receives only perturbed, sparse activations, making reconstruction of raw data, labels, or fine-tuned parameters infeasible. Experiments show that the method reduces communication overhead by over 90%, accelerates convergence, and preserves the privacy of local data, labels, and model updates on the client side.
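The activation-caching idea can be sketched in a few lines. The snippet below is an illustrative PyTorch mock-up, not the paper's implementation: the names `ActivationCache` and `cached_forward` are hypothetical, and keying the cache on a hash of the token sequence is simply one plausible way to detect recurring samples.

```python
import hashlib
import torch

class ActivationCache:
    """Server-side cache: recurring samples reuse stored backbone activations."""

    def __init__(self):
        self._store = {}

    @staticmethod
    def _key(input_ids: torch.Tensor) -> str:
        # Fingerprint the token sequence so identical samples map to one entry.
        return hashlib.sha256(input_ids.cpu().numpy().tobytes()).hexdigest()

    def get(self, input_ids):
        return self._store.get(self._key(input_ids))

    def put(self, input_ids, acts: torch.Tensor):
        # Store detached activations; the frozen backbone needs no gradients.
        self._store[self._key(input_ids)] = acts.detach()

def cached_forward(backbone, input_ids, cache):
    """Run the frozen backbone only on cache misses."""
    acts = cache.get(input_ids)
    if acts is None:
        with torch.no_grad():  # backbone stays frozen during side-tuning
            acts = backbone(input_ids)
        cache.put(input_ids, acts)
    return acts

# Usage with a stand-in backbone:
cache = ActivationCache()
backbone = torch.nn.Embedding(100, 16)
x = torch.tensor([[1, 2, 3]])
a1 = cached_forward(backbone, x, cache)  # computes and caches
a2 = cached_forward(backbone, x, cache)  # cache hit: forward pass skipped
```

On the second call with the same input, the lookup hits and the forward pass is skipped, which is the saving the paper attributes to caching recurring samples.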

📝 Abstract
There is a huge gap between numerous intriguing applications fostered by on-device large language model (LLM) fine-tuning (FT) from fresh mobile data and the limited resources of a mobile device. While existing server-assisted methods (e.g., split learning or side-tuning) may enable LLM FT on the local mobile device, they suffer from heavy communication burdens of activation transmissions, and may disclose data, labels or fine-tuned models to the server. To address those issues, we develop PAE MobiLLM, a privacy-aware and efficient LLM FT method which can be deployed on the mobile device via server-assisted additive side-tuning. To further accelerate FT convergence and improve computing efficiency, PAE MobiLLM integrates activation caching on the server side, which allows the server to reuse historical activations and saves the mobile device from repeatedly computing forward passes for the recurring data samples. Besides, to reduce communication cost, PAE MobiLLM develops a one-token (i.e., "pivot" token) activation shortcut that transmits only a single activation dimension instead of full activation matrices to guide the side network tuning. Last but not least, PAE MobiLLM introduces the additive adapter side-network design which makes the server train the adapter modules based on device-defined prediction differences rather than raw ground-truth labels. In this way, the server can only assist device-defined side-network computing, and learn nothing about data, labels or fine-tuned models.
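The abstract's other two mechanisms can likewise be sketched. In the minimal PyTorch mock-up below, `pivot_shortcut`, `AdditiveAdapter`, and `delta` are all hypothetical names; the choices that the pivot token is the final sequence position and that the device-defined target is a logit difference trained with an MSE loss are our assumptions, since the paper's exact formulation is not reproduced here.

```python
import torch
import torch.nn as nn

HIDDEN, NUM_CLASSES = 768, 2  # assumed backbone width / task size

def pivot_shortcut(acts: torch.Tensor, pivot_idx: int = -1) -> torch.Tensor:
    # acts: (batch, seq_len, hidden). Transmit one token's activation
    # instead of the full activation matrix.
    return acts[:, pivot_idx, :]

class AdditiveAdapter(nn.Module):
    """Side network whose output is *added* to the frozen backbone's prediction."""
    def __init__(self, hidden: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, num_classes)
        )

    def forward(self, pivot_act: torch.Tensor) -> torch.Tensor:
        return self.net(pivot_act)

# One server-side training step on device-defined prediction differences:
adapter = AdditiveAdapter(HIDDEN, NUM_CLASSES)
opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)

acts = torch.randn(4, 16, HIDDEN)    # (perturbed) backbone activations
delta = torch.randn(4, NUM_CLASSES)  # device-computed prediction difference,
                                     # sent in place of raw labels
opt.zero_grad()
pred = adapter(pivot_shortcut(acts))
loss = nn.functional.mse_loss(pred, delta)  # server never sees ground truth
loss.backward()
opt.step()
```

The privacy property this mirrors: the server optimizes the adapter against `delta`, so raw labels never leave the device.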
Problem

Research questions and friction points this paper is trying to address.

Bridges the gap between on-device LLM fine-tuning and limited mobile resources
Reduces communication cost and protects data privacy
Improves efficiency via activation caching and one-token transmission
Innovation

Methods, ideas, or system contributions that make the work stand out.

Server-assisted additive side-tuning for mobile LLM
Activation caching to accelerate fine-tuning convergence
One-token activation shortcut to reduce communication
Authors
Xingke Yang
University of Houston, USA
Liang Li
Pengcheng Laboratory, China
Zhiyi Wan
Beijing University of Posts and Telecommunications, China
Sicong Li
Waseda University, Japan
Hao Wang
Stevens Institute of Technology, USA
Xiaoqi Qin
Beijing University of Posts and Telecommunications, China
Jiang Liu
Waseda University, Japan
Tomoaki Ohtsuki
Keio University, Japan
Xin Fu
University of Houston, USA
Miao Pan
University of Houston, USA