🤖 AI Summary
To address the security challenges of adapting large language models (LLMs) in privacy-sensitive, resource-constrained edge environments, this paper proposes VFLAIR-LLM, the first lightweight, scalable split learning (SL) framework designed specifically for LLMs. It enables privacy-preserving inference and fine-tuning across diverse model partitioning strategies (e.g., layer-wise or module-level splitting), NLP tasks, and heterogeneous datasets. We introduce the first systematic SL-LLM benchmark, integrating five representative privacy attacks and nine defense mechanisms, along with practical guidelines for partition configuration, defense selection, and hyperparameter tuning. Extensive experiments demonstrate VFLAIR-LLM's effectiveness and robustness under stringent computational constraints. Our framework establishes a reproducible, extensible technical paradigm for deploying LLMs in privacy-critical settings.
📄 Abstract
With the advancement of Large Language Models (LLMs), LLM applications have expanded into a growing number of fields. However, users with data privacy concerns are limited in directly utilizing LLM APIs, while private deployments incur significant computational demands. This creates a substantial challenge: achieving secure LLM adaptation under constrained local resources. To address this issue, collaborative learning methods such as Split Learning (SL) offer a resource-efficient and privacy-preserving solution for adapting LLMs to private domains. In this study, we introduce VFLAIR-LLM (available at https://github.com/FLAIR-THU/VFLAIR-LLM), an extensible and lightweight split learning framework for LLMs that enables privacy-preserving LLM inference and fine-tuning in resource-constrained environments. Our library provides two LLM partition settings and supports three task types and 18 datasets. In addition, we provide standard modules for implementing and evaluating attacks and defenses. We benchmark 5 attacks and 9 defenses under various Split Learning for LLMs (SL-LLM) settings, offering concrete insights and recommendations on the choice of model partition configurations, defense strategies, and relevant hyperparameters for real-world applications.
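To make the layer-wise partition setting concrete, the following is a minimal, framework-agnostic sketch of split inference. It is illustrative only and not the VFLAIR-LLM API: the toy "blocks", the split point, and the `client_forward`/`server_forward` names are assumptions. The key idea it shows is that the client executes the first few blocks locally, so raw inputs never leave the device; only intermediate hidden states cross the network, and the two partitions compose to the same output as the monolithic model.

```python
# Toy layer-wise split inference (illustrative sketch, not VFLAIR-LLM code).
import numpy as np

rng = np.random.default_rng(0)

def make_block(dim):
    """One stand-in 'transformer block': random linear map + ReLU."""
    w = rng.standard_normal((dim, dim)) / np.sqrt(dim)
    return lambda h, w=w: np.maximum(h @ w, 0.0)

DIM, N_BLOCKS, SPLIT_AT = 16, 6, 2   # hypothetical: split after block 2
blocks = [make_block(DIM) for _ in range(N_BLOCKS)]

def client_forward(h):
    # Client-side partition: blocks [0, SPLIT_AT) run on local data.
    for blk in blocks[:SPLIT_AT]:
        h = blk(h)
    return h  # intermediate hidden states sent to the server

def server_forward(h):
    # Server-side partition: blocks [SPLIT_AT, N_BLOCKS).
    for blk in blocks[SPLIT_AT:]:
        h = blk(h)
    return h

x = rng.standard_normal((1, DIM))            # toy token embedding
full = x
for blk in blocks:                           # monolithic reference run
    full = blk(full)
split = server_forward(client_forward(x))    # split execution
assert np.allclose(full, split)              # partitions compose exactly
```

The transmitted hidden states are exactly what the benchmarked privacy attacks target (and what the defenses perturb or constrain), which is why the choice of split point matters in practice.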