NoEsis: Differentially Private Knowledge Transfer in Modular LLM Adaptation

📅 2025-04-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of simultaneously preserving privacy and enabling knowledge sharing when modular large language models (e.g., Mixture-of-Experts, MoE) are adapted across domains, this paper proposes the first differentially private (DP) framework for two-stage parameter-efficient fine-tuning. In Stage I, a shared prompt backbone is trained via DP-SGD; in Stage II, private knowledge transfer is achieved through low-rank expert adapters. The method integrates differential privacy, LoRA, MoE, and prompt tuning. Evaluated on CodeXGLUE code completion, it provides provable privacy guarantees (ε ≤ 8), robustly resists membership inference attacks, and recovers at least 77% of the accuracy gap between the non-shared and the non-private baselines, significantly improving the generalization and practical applicability of modular LLMs in privacy-sensitive domains.
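The Stage-I backbone training mentioned above uses DP-SGD, whose core mechanism is per-example gradient clipping followed by calibrated Gaussian noise. A minimal NumPy sketch of one such step (the function name and default values are illustrative, not taken from the paper):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD update on a batch of per-example gradients.

    Each gradient is clipped to L2 norm <= clip_norm, the clipped
    gradients are summed, Gaussian noise with standard deviation
    noise_multiplier * clip_norm is added, and the result is averaged.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

Setting `noise_multiplier` to 0 recovers ordinary averaged SGD on clipped gradients; the reported privacy budget (ε ≤ 8) would be derived from the noise multiplier, clipping norm, sampling rate, and step count via a privacy accountant.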

📝 Abstract
Large Language Models (LLMs) are typically trained on vast amounts of data from various sources. Even when designed modularly (e.g., Mixture-of-Experts), LLMs can leak private information about their sources. Conversely, training such models in isolation arguably prohibits generalization. To this end, we propose a framework, NoEsis, which builds upon the desired properties of modularity, privacy, and knowledge transfer. NoEsis integrates differential privacy with a hybrid two-stage parameter-efficient fine-tuning that combines domain-specific low-rank adapters, acting as experts, with common prompt tokens, acting as a knowledge-sharing backbone. Results from our evaluation on CodeXGLUE showcase that NoEsis can achieve provable privacy guarantees with tangible knowledge transfer across domains, and empirically show protection against Membership Inference Attacks. Finally, on code completion tasks, NoEsis bridges at least 77% of the accuracy gap between the non-shared and the non-private baseline.
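The domain-specific experts described in the abstract are low-rank adapters in the LoRA style: a frozen base weight plus a trainable low-rank update. A minimal sketch for a plain linear layer (class name and initialization scales are illustrative, not the paper's implementation):

```python
import numpy as np

class LoRALinear:
    """Frozen base weight W plus a trainable low-rank update B @ A.

    In the NoEsis setting, each domain would keep its own (A, B) pair
    as an expert, while W and the shared prompt tokens stay common.
    """
    def __init__(self, d_in, d_out, r=4, alpha=8, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.normal(0, 0.02, (d_out, d_in))  # frozen pretrained weight
        self.A = rng.normal(0, 0.02, (r, d_in))      # trainable down-projection
        self.B = np.zeros((d_out, r))                # trainable up-projection, init 0
        self.scale = alpha / r

    def forward(self, x):
        # Base path plus scaled low-rank delta; at init B = 0, so this
        # reduces to W @ x and the adapter starts as a no-op.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))
```

Only A and B (and, in Stage I, the shared prompt tokens) are trained, which keeps the per-domain trainable parameter count small relative to full fine-tuning.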
Problem

Research questions and friction points this paper is trying to address.

Ensuring privacy in modular LLM knowledge transfer
Balancing generalization with isolated training limitations
Achieving accuracy with differential privacy guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differential privacy for modular LLM adaptation
Hybrid two-stage parameter-efficient fine-tuning
Low-rank adapters with common prompt tokens
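How these pieces compose at inference time can be sketched as follows: the shared soft-prompt tokens (the knowledge-sharing backbone) are prepended to the input sequence, and the input's domain selects one low-rank expert adapter. All names below are illustrative, not the paper's API:

```python
import numpy as np

def forward_with_backbone(x_tokens, prompt_tokens, experts, domain):
    """Compose the shared backbone with one domain expert (illustrative).

    prompt_tokens: shared soft-prompt embeddings trained with DP-SGD (Stage I).
    experts: dict mapping domain name -> adapter function (Stage II).
    """
    # Prepend the shared prompts, then apply the chosen domain's adapter
    # to every token embedding in the extended sequence.
    seq = np.concatenate([prompt_tokens, x_tokens], axis=0)
    adapt = experts[domain]
    return np.stack([adapt(tok) for tok in seq])
```

This MoE-style dispatch keeps domain knowledge isolated in the per-domain adapters while the DP-trained prompt backbone carries what is shared across domains.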