🤖 AI Summary
To address the dual challenges of privacy preservation and model confidentiality in locally deployed closed-source large language models—specifically, preventing training data leakage, model extraction, and knowledge distillation attacks—this paper proposes a semi-open deployment framework. The framework employs layered security isolation, hardware-based trusted execution environments (TEEs), and selective locking of critical low-level network layers, protecting only those layers identified as high-risk through quantitative sensitivity assessment. This enables fine-grained, on-device adaptation while achieving efficient distillation resistance. Experiments demonstrate a significant reduction in distillation success rates, balancing model customization flexibility with strong confidentiality guarantees. To our knowledge, this is the first work to jointly achieve robust model secrecy and controllable local fine-tuning on large-scale LLMs, establishing a secure and practical paradigm for private AI deployment.
📝 Abstract
Current LLM customization typically relies on two deployment strategies: closed-source APIs, which require users to upload private data to external servers, and open-weight models, which allow local fine-tuning but pose misuse risks. In this position paper, we argue that (1) deploying closed-source LLMs within user-controlled infrastructure (*on-premises deployment*) enhances data privacy and mitigates misuse risks, and (2) a well-designed on-premises deployment must ensure model confidentiality, by preventing model theft, and offer privacy-preserving customization. Prior research on small models has explored securing only the output layer within hardware-secured devices to balance confidentiality and customization efficiency. However, we show that this approach is insufficient for defending large-scale LLMs against distillation attacks. We therefore introduce a *semi-open deployment framework* that secures only a few carefully chosen layers, achieving distillation resistance comparable to fully secured models while preserving fine-tuning flexibility. Through extensive experiments, we show that securing bottom layers significantly reduces functional extraction risks. Our findings demonstrate that privacy and confidentiality can coexist, paving the way for secure on-premises AI deployment that balances usability and protection.
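The semi-open split described above can be sketched minimally: bottom layers sealed behind a TEE-like boundary that exposes only input-to-output behavior, with upper layers left open for local fine-tuning. Everything here (class names, the toy arithmetic "layers", the split point) is an illustrative assumption, not the paper's implementation or API:

```python
# Hypothetical sketch of a semi-open model: the bottom layers are
# "secured" (callable, but their weights are never exposed, as a TEE
# would enforce), while the top layers remain open for fine-tuning.

class SecuredBottom:
    """Stand-in for TEE-protected layers: usable, weights hidden."""
    def __init__(self, weights):
        self._w = weights  # sealed; a real TEE keeps these in enclave memory

    def forward(self, x):
        # Opaque computation: callers observe only input -> output.
        for w in self._w:
            x = [xi * w for xi in x]
        return x

    @property
    def parameters(self):
        # Extraction attempts fail: no weight access outside the enclave.
        raise PermissionError("secured layers do not expose parameters")


class SemiOpenModel:
    def __init__(self, bottom_weights, top_weights):
        self.bottom = SecuredBottom(bottom_weights)  # locked, distillation-resistant
        self.top = list(top_weights)                 # open, locally fine-tunable

    def forward(self, x):
        h = self.bottom.forward(x)
        for w in self.top:
            h = [hi * w for hi in h]
        return h


model = SemiOpenModel(bottom_weights=[2.0, 3.0], top_weights=[0.5])
out = model.forward([1.0, 2.0])
print(out)  # [3.0, 6.0]
```

The design choice mirrors the paper's claim: an attacker who can only query the sealed bottom layers must distill their function from input-output pairs, while the owner can still adapt the open top layers in place.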