🤖 AI Summary
Knowledge distillation (KD) is conventionally regarded as a pure model compression technique; however, its fundamental nature and functional effects—particularly regarding knowledge transfer fidelity and safety—remain poorly understood.
Method: We challenge this view by framing KD as a data-dependent regularization method. Through hypothesis testing, controlled experiments, and a randomized control distillation framework, we systematically evaluate KD’s functional impact across 12 configurations, 9 neural architectures, and 7 cross-modal datasets, explicitly decoupling compression effects from architectural simplification.
Contribution/Results: Our analysis reveals that actual knowledge transfer in KD is significantly weaker than commonly assumed, with pervasive negative knowledge asymmetry (i.e., unidirectional transmission of harmful or misleading knowledge from teacher to student) posing latent safety risks. This work provides the first empirical evidence establishing KD’s intrinsic regularization character and its associated negative transfer phenomenon, challenging prevailing theoretical assumptions and motivating a new paradigm for designing secure, robust knowledge distillation methods.
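For context, the distillation objective this analysis builds on can be sketched in plain Python. This is a minimal sketch of the standard Hinton-style loss, not the paper's actual configuration; the temperature `T`, mixing weight `alpha`, and function names are illustrative assumptions:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.5):
    """Sketch of the standard distillation objective:
    alpha * T^2 * KL(teacher_T || student_T) + (1 - alpha) * CE(student, label).
    The T^2 factor keeps soft-target gradients comparable across temperatures."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    ce = -math.log(softmax(student_logits)[true_label])
    return alpha * (T ** 2) * kl + (1 - alpha) * ce
```

Note that the student minimises the KL term regardless of whether the teacher's soft targets encode genuine task knowledge, which is exactly what makes it possible to reinterpret the objective as a data-dependent regulariser.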
📝 Abstract
Knowledge distillation is often considered a compression mechanism when judged on the resulting student's accuracy and loss, yet its functional impact is poorly understood. In this work, we quantify the compression capacity of knowledge distillation and the resulting knowledge transfer from a functional perspective, decoupling compression from architectural reduction, which provides an improved understanding of knowledge distillation. We employ hypothesis testing, controls, and random control distillation to understand knowledge transfer mechanisms across data modalities. To rigorously test the breadth and limits of our analyses, we explore multiple distillation variants and analyse distillation scaling laws across model sizes. Our findings demonstrate that, while there is statistically significant knowledge transfer in some modalities and architectures, the extent of this transfer is less pronounced than anticipated, even under conditions designed to maximise knowledge sharing. Notably, in cases of significant knowledge transfer, we identify a consistent and severe asymmetric transfer of negative knowledge to the student, raising safety concerns in knowledge distillation applications. Across 12 experimental setups, 9 architectures, and 7 datasets, our findings show that knowledge distillation functions less as a compression mechanism and more as a data-dependent regulariser with a negative asymmetric payoff.
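The random-control comparison described above lends itself to a standard two-sample test: compare students distilled from a trained teacher against students distilled from a control teacher carrying no task knowledge. A minimal sketch of such a test (the function name, accuracy lists, and permutation count are illustrative assumptions, not the paper's protocol):

```python
import random
from statistics import mean

def permutation_test(real, control, n_perm=10_000, seed=0):
    """One-sided two-sample permutation test on the mean accuracy gap.
    `real` / `control` hold per-run accuracies of students distilled from
    a trained teacher vs. a knowledge-free control teacher. Returns the
    p-value under H0: both samples come from the same distribution."""
    rng = random.Random(seed)
    observed = mean(real) - mean(control)
    pooled = real + control
    n = len(real)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if mean(pooled[:n]) - mean(pooled[n:]) >= observed:
            count += 1
    return count / n_perm
```

A small p-value would indicate genuine teacher-to-student transfer beyond the regularisation effect that distillation from any teacher provides; a large one is consistent with the regulariser interpretation.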