🤖 AI Summary
Existing server-assisted fine-tuning approaches ease the tension between resource-constrained mobile devices and the need for on-device fine-tuning, but they incur high communication overhead and privacy risks (e.g., leakage of data, labels, and model parameters). To address this, the paper proposes a privacy-preserving, efficient on-device large language model (LLM) fine-tuning framework. Built on additive side-tuning, the framework introduces three key techniques: server-side activation caching and reuse, pivot-token compression of transmitted activations, and lightweight additive adapters, which together offload forward computation and guide training via prediction differences. Critically, the server receives only perturbed, sparse activations, making reconstruction of raw data, labels, or fine-tuned parameters infeasible. Experiments show that the method reduces communication overhead by over 90%, accelerates convergence, and preserves the privacy of local data, labels, and model updates on the client side.
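The server-side activation caching the summary mentions can be pictured as a simple content-addressed store: when the same training sample recurs across epochs, the server replays its stored backbone activations instead of asking the device for another forward pass. The sketch below is a hypothetical minimal illustration; the paper's actual cache keying, eviction, and storage layout are not reproduced here.

```python
import hashlib


class ActivationCache:
    """Hypothetical server-side cache for backbone activations.

    Keys samples by a content hash so that a recurring sample across
    training epochs hits the cache instead of triggering a fresh
    forward pass on the mobile device. Illustrative only -- not the
    paper's actual implementation.
    """

    def __init__(self):
        self._store = {}

    @staticmethod
    def _key(sample_bytes: bytes) -> str:
        # Content hash: identical samples map to the same cache entry.
        return hashlib.sha256(sample_bytes).hexdigest()

    def get(self, sample_bytes: bytes):
        # Return cached activations, or None on a cache miss.
        return self._store.get(self._key(sample_bytes))

    def put(self, sample_bytes: bytes, activations) -> None:
        self._store[self._key(sample_bytes)] = activations
```

On a hit, the server reuses the stored activations to drive side-network training; only misses cost the device a forward pass and a transmission.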
📝 Abstract
There is a huge gap between the numerous intriguing applications fostered by on-device large language model (LLM) fine-tuning (FT) on fresh mobile data and the limited resources of a mobile device. While existing server-assisted methods (e.g., split learning or side-tuning) may enable LLM FT on the local mobile device, they suffer from a heavy communication burden from activation transmission and may disclose data, labels, or fine-tuned models to the server. To address these issues, we develop PAE MobiLLM, a privacy-aware and efficient LLM FT method that can be deployed on the mobile device via server-assisted additive side-tuning. To further accelerate FT convergence and improve computing efficiency, PAE MobiLLM integrates activation caching on the server side, which allows the server to reuse historical activations and saves the mobile device from repeatedly computing forward passes for recurring data samples. Besides, to reduce communication cost, PAE MobiLLM develops a one-token (i.e., "pivot" token) activation shortcut that transmits only a single activation dimension instead of full activation matrices to guide the side-network tuning. Finally, PAE MobiLLM introduces an additive adapter side-network design that lets the server train the adapter modules on device-defined prediction differences rather than raw ground-truth labels. In this way, the server can only assist device-defined side-network computing and learns nothing about the data, labels, or fine-tuned models.
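The two remaining ideas in the abstract can be sketched together: the pivot-token shortcut ships one token's activation vector instead of the full sequence-by-hidden matrix, and the additive design has the server fit the side network to a device-defined prediction difference rather than to raw labels. The snippet below is a toy illustration under assumed shapes (lists of floats); the function names, the pivot-selection rule, and the residual definition are hypothetical stand-ins, not the paper's exact formulation.

```python
def pivot_token_activation(activation_matrix, pivot_index):
    """Shortcut: send a single token's activation vector, not the full
    seq_len x hidden matrix. How the pivot token is chosen is a detail
    of the paper not reproduced here (index 0 used as a placeholder)."""
    return activation_matrix[pivot_index]


def additive_prediction(base_logits, side_logits):
    # Additive side-tuning: final output = frozen backbone prediction
    # plus the lightweight side network's correction.
    return [b + s for b, s in zip(base_logits, side_logits)]


def device_residual(target, base_logits):
    # Device-defined prediction difference: the device sends this
    # residual (not the raw label) and the server trains the side
    # network to produce it, so labels never leave the device.
    return [t - b for t, b in zip(target, base_logits)]
```

The key privacy point mirrored here is that the server only ever sees the compressed activation and the residual, both defined by the device, so it can assist training without observing data, labels, or the effective fine-tuned model.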