Towards Distillation-Resistant Large Language Models: An Information-Theoretic Perspective

πŸ“… 2026-02-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the vulnerability of large language models (LLMs) deployed via APIs to distillation attacks, which can lead to intellectual property leakage through exposed logits. To mitigate this risk, the authors propose an information-theoretic defense mechanism that learns an output transformation matrix by minimizing the conditional mutual information (CMI) between the teacher model’s outputs and the input queries, conditioned on the true labels. This approach effectively removes knowledge exploitable by distillation while preserving performance on downstream tasks. Notably, this study is the first to employ CMI to quantify distillation-relevant knowledge and to formulate a CMI-guided optimization objective for distillation resistance. Extensive experiments demonstrate that the method significantly suppresses distillation efficacy across multiple LLMs and strong distillation algorithms, all while maintaining original task accuracy, thereby offering a practical defense for black-box API deployments.
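In symbols, the CMI quantity described above admits a standard KL form. The following is a hedged reconstruction from the prose, with assumed names (Z for teacher logits, X for the input query, Y for the ground-truth label, W for the learned transformation matrix), not the paper's own notation:

```latex
% Hedged reconstruction of the summary's CMI quantity; notation assumed.
% Standard identity: conditional MI as an expected KL divergence.
I(WZ;\, X \mid Y)
  \;=\; \mathbb{E}_{X,Y}\Big[\, D_{\mathrm{KL}}\big(\, p(WZ \mid X, Y) \,\big\|\, p(WZ \mid Y) \,\big) \Big]

% A CMI-guided defense objective of the kind the summary describes,
% with \lambda a hypothetical trade-off weight and a task-utility term:
\min_{W} \;\; I(WZ;\, X \mid Y) \;+\; \lambda \, \mathcal{L}_{\mathrm{task}}(W)
```

Driving the first term to zero makes the transformed outputs depend on the query only through the label, which is precisely the query-specific signal a distillation student would exploit.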

πŸ“ Abstract
Proprietary large language models (LLMs) embody substantial economic value and are generally exposed only as black-box APIs, yet adversaries can still exploit their outputs to extract knowledge via distillation. Existing defenses focus exclusively on text-based distillation, leaving the important logit-based distillation largely unexplored. In this work, we analyze this problem and present an effective solution from an information-theoretic perspective. We characterize distillation-relevant information in teacher outputs using the conditional mutual information (CMI) between teacher logits and input queries, conditioned on ground-truth labels. This quantity captures contextual information beneficial for model extraction, motivating us to defend against distillation via CMI minimization. Guided by our theoretical analysis, we propose learning a transformation matrix that purifies the original outputs to enhance distillation resistance. We further derive a CMI-inspired anti-distillation objective to optimize this transformation, which effectively removes distillation-relevant information while preserving output utility. Extensive experiments across multiple LLMs and strong distillation algorithms demonstrate that the proposed method significantly degrades distillation performance while preserving task accuracy, effectively protecting models' intellectual property.
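The optimization the abstract outlines can be made concrete in a few lines. Below is a minimal PyTorch sketch, not the paper's implementation: the within-class KL divergence used as a CMI proxy, the cmi_proxy helper, the identity initialization of W, and all sizes and weights (V, N, lam) are our assumptions for illustration.

```python
# Hedged sketch of a CMI-guided output transformation; NOT the paper's
# exact objective. Assumptions (all hypothetical): cached teacher logits,
# a single vocabulary-sized matrix W, and a within-class KL divergence
# used as a tractable stand-in for I(WZ; X | Y).
import torch
import torch.nn.functional as F

def cmi_proxy(probs: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Mean KL between each transformed output and its class-conditional
    average: zero when outputs within a label class are identical, i.e.
    when they carry no query-specific information beyond the label."""
    loss = probs.new_zeros(())
    classes = labels.unique()
    for y in classes:
        p = probs[labels == y]                    # outputs sharing label y
        p_bar = p.mean(dim=0, keepdim=True)       # class-conditional average
        loss = loss + F.kl_div(p_bar.log().expand_as(p), p,
                               reduction="batchmean")
    return loss / classes.numel()

# Toy data (hypothetical sizes): V-way teacher logits for N queries.
V, N = 32, 256
teacher_logits = torch.randn(N, V)
labels = torch.randint(0, V, (N,))

W = torch.nn.Parameter(torch.eye(V))              # identity = undefended start
opt = torch.optim.Adam([W], lr=1e-2)
lam = 0.5                                         # defense/utility trade-off

for step in range(200):
    z = teacher_logits @ W.T                      # purified logits
    probs = z.softmax(dim=-1)
    defense = cmi_proxy(probs, labels)            # strip query-specific info
    utility = F.cross_entropy(z, labels)          # keep task predictions
    loss = defense + lam * utility
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Initializing W at the identity starts from the undefended teacher outputs, so training trades defense (the CMI proxy) against utility (label cross-entropy) from a known-good point.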
Problem

Research questions and friction points this paper is trying to address.

distillation-resistant
large language models
logit-based distillation
model extraction
intellectual property protection
Innovation

Methods, ideas, or system contributions that make the work stand out.

distillation resistance
conditional mutual information
logit-based distillation
information-theoretic defense
large language models
πŸ”Ž Similar Papers
No similar papers found.
👥 Authors
Hao Fang · Tsinghua University · Trustworthy AI, AIGC Security
Tianyi Zhang · Tsinghua University
Tianqu Zhuang · Tsinghua University
Jiawei Kong · Tsinghua University · Trustworthy AI
Kuofeng Gao · Tsinghua University · Large Language Model, Trustworthy AI, Backdoor Learning
Bin Chen · Harbin Institute of Technology, Shenzhen
Leqi Liang · Tsinghua University
Shutao Xia · Tsinghua University
Ke Xu · Tsinghua University