AI Summary
Fine-tuning large language models (LLMs) on edge devices raises the dual challenges of preserving data privacy and protecting model intellectual property (IP). Method: This paper proposes a secure knowledge distillation framework leveraging Trusted Execution Environments (TEEs). It deploys a proprietary teacher model as a black-box service inside a TEE enclave, combining model weight obfuscation with a secure distillation protocol to prevent model extraction and unauthorized knowledge transfer. Contribution/Results: By combining TEE-based isolation, obfuscation, and edge-coordinated optimization, the framework enables efficient, localized model customization while ensuring that raw data remains on-device and model IP is never exposed. Experiments demonstrate robust resistance to diverse distillation attacks with an inference overhead of less than 8%, significantly outperforming existing privacy-preserving fine-tuning approaches and striking a practical balance between security and efficiency.
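One common way such obfuscated offloading can work in principle is linear blinding: the TEE keeps a secret invertible matrix, ships only blinded weights to the untrusted accelerator, and unblinds the result inside the enclave. This is a simplified illustrative sketch of that general idea, not the paper's actual obfuscation mechanism; all names and the choice of a random linear blind are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Secret kept inside the TEE: a well-conditioned, invertible blinding matrix R.
R = rng.standard_normal((d, d)) + d * np.eye(d)
R_inv = np.linalg.inv(R)

W = rng.standard_normal((d, d))  # proprietary layer weights (never leave plaintext)
W_obf = W @ R                    # blinded weights handed to the untrusted accelerator

def untrusted_accelerator(x, W_obf):
    # Runs outside the TEE: sees only the blinded weights, not W.
    return x @ W_obf

def tee_unblind(y_obf):
    # Runs inside the TEE: removes the blind, since (x @ W @ R) @ R_inv == x @ W.
    return y_obf @ R_inv

x = rng.standard_normal((1, d))
y = tee_unblind(untrusted_accelerator(x, W_obf))
```

The accelerator does the heavy matrix multiply on `W_obf`, while the enclave only performs the cheap unblinding step, which is the efficiency/security trade-off this style of offloading targets.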
Abstract
Large Language Models (LLMs) have demonstrated strong performance across diverse tasks, but fine-tuning them typically relies on cloud-based, centralized infrastructures. This requires data owners to upload potentially sensitive data to external servers, raising serious privacy concerns. An alternative approach is to fine-tune LLMs directly on edge devices using local data; however, this introduces a new challenge: the model owner must transfer proprietary models to the edge, which risks intellectual property (IP) leakage. To address this dilemma, we propose DistilLock, a TEE-assisted fine-tuning framework that enables privacy-preserving knowledge distillation on the edge. In DistilLock, a proprietary foundation model is executed within a trusted execution environment (TEE) enclave on the data owner's device, acting as a secure black-box teacher. This setup preserves both data privacy and model IP by preventing direct access to model internals. Furthermore, DistilLock employs a model obfuscation mechanism to offload obfuscated weights to untrusted accelerators for efficient knowledge distillation without compromising security. We demonstrate that DistilLock prevents unauthorized knowledge distillation processes and model-stealing attacks while maintaining high computational efficiency, offering a secure and practical solution for edge-based LLM personalization.
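The black-box teacher setup described in the abstract can be illustrated with the standard soft-label distillation objective: the enclave exposes only temperature-softened output distributions, and the student minimizes a KL divergence against them. This is a generic sketch of that textbook loss under assumed names, not DistilLock's actual secure protocol.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions.

    In a black-box setup, the teacher logits are the only signal that
    crosses the enclave boundary: the student never observes teacher
    weights or intermediate activations.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    # The conventional T^2 factor keeps gradient magnitudes comparable
    # to a hard-label cross-entropy term when the two are mixed.
    return temperature ** 2 * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0
    )
```

The loss is zero when the student matches the teacher exactly and positive otherwise, which is what drives knowledge transfer from the enclave-hosted teacher.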