AI Summary
This work addresses the vulnerability of large language models (LLMs) deployed via APIs to distillation attacks, which can leak intellectual property through exposed logits. To mitigate this risk, the authors propose an information-theoretic defense that learns an output transformation matrix by minimizing the conditional mutual information (CMI) between the teacher model's outputs and the input queries, conditioned on the true labels. This transformation removes the knowledge that distillation exploits while preserving performance on downstream tasks. Notably, this study is the first to employ CMI to quantify distillation-relevant knowledge and to formulate a CMI-guided optimization objective for distillation resistance. Extensive experiments demonstrate that the method significantly suppresses distillation efficacy across multiple LLMs and strong distillation algorithms while maintaining original task accuracy, offering a practical defense for black-box API deployments.
Abstract
Proprietary large language models (LLMs) embody substantial economic value and are generally exposed only as black-box APIs, yet adversaries can still exploit their outputs to extract knowledge via distillation. Existing defenses focus exclusively on text-based distillation, leaving logit-based distillation largely unexplored. In this work, we analyze this problem and present an effective solution from an information-theoretic perspective. We characterize distillation-relevant information in teacher outputs using the conditional mutual information (CMI) between teacher logits and input queries conditioned on ground-truth labels. This quantity captures contextual information beneficial for model extraction, motivating us to defend against distillation via CMI minimization. Guided by our theoretical analysis, we propose learning a transformation matrix that purifies the original outputs to enhance distillation resistance. We further derive a CMI-inspired anti-distillation objective to optimize this transformation, which effectively removes distillation-relevant information while preserving output utility. Extensive experiments across multiple LLMs and strong distillation algorithms demonstrate that the proposed method significantly degrades distillation performance while preserving task accuracy, effectively protecting models' intellectual property.
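The abstract's core idea (minimizing the CMI between transformed teacher outputs and the queries, conditioned on labels, while keeping the outputs useful) can be illustrated with a small sketch. This is not the paper's implementation: the function names, the plug-in CMI surrogate (mean KL divergence between each example's transformed distribution and its class-conditional average, which is zero exactly when outputs carry no query-specific information beyond the label), and the cross-entropy utility term are all illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cmi_surrogate(logits, labels, W):
    """Plug-in estimate of I(Wz; x | y): for each class, average the KL
    divergence between each example's transformed output distribution and
    the class-conditional mean distribution. (Illustrative surrogate, not
    the paper's exact objective.)"""
    probs = softmax(logits @ W.T)
    total = 0.0
    for c in np.unique(labels):
        p = probs[labels == c]
        p_bar = p.mean(axis=0, keepdims=True)       # class-conditional average
        kl = (p * (np.log(p + 1e-12) - np.log(p_bar + 1e-12))).sum(axis=-1)
        total += kl.sum()
    return total / len(labels)

def anti_distill_loss(logits, labels, W, lam=1.0):
    """Hypothetical combined objective: preserve task utility of the
    transformed outputs (cross-entropy) while penalizing the CMI surrogate."""
    probs = softmax(logits @ W.T)
    utility = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    return utility + lam * cmi_surrogate(logits, labels, W)
```

When every example in a class produces identical transformed outputs, the surrogate is zero (the outputs reveal nothing about the query beyond the label); spread within a class makes it strictly positive, which is the quantity the defense would drive down.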