🤖 AI Summary
Existing evaluation of knowledge distillation relies heavily on accuracy metrics, which fail to expose how poorly student models imitate the behavior and internal representations of their teachers. Method: The paper introduces metamorphic testing into the evaluation of distilled code language models, proposing MetaCompress, a framework that constructs semantics-preserving metamorphic relations over code to systematically assess the behavioral fidelity of three mainstream distillation methods (Compressor, AVATAR, MORPH) on code completion and code generation tasks; a sketch of such a relation follows this summary. Results: Experiments show that student models suffer up to a 285% greater performance drop than their teachers under adversarial attacks, a deficit entirely masked by conventional accuracy metrics, and that MetaCompress detects behavioral divergence of up to 62% across compressed models, uncovering latent behavioral mismatches. This work establishes an evaluation paradigm centered on behavioral consistency, providing quantifiable and interpretable validation criteria for trustworthy model compression.
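To make the idea of a semantics-preserving metamorphic relation concrete, below is a minimal sketch of one such relation, consistent identifier renaming, implemented over Python ASTs. The relation and its details are illustrative assumptions and are not necessarily among the relations MetaCompress actually uses.

```python
import ast
import builtins

# Built-in names must be left alone, or the transformed program
# would change meaning (e.g., renaming `len` or `print`).
_BUILTINS = set(dir(builtins))

class RenameIdentifiers(ast.NodeTransformer):
    """Illustrative metamorphic relation: consistently rename local
    identifiers. The output is semantically equivalent to the input,
    so a behaviorally faithful model should respond the same way to
    both versions of the program."""

    def __init__(self):
        self.mapping = {}

    def _fresh(self, name: str) -> str:
        # Reuse the same replacement for repeated occurrences of a name.
        return self.mapping.setdefault(name, f"var_{len(self.mapping)}")

    def visit_Name(self, node: ast.Name) -> ast.Name:
        if node.id not in _BUILTINS:
            node.id = self._fresh(node.id)
        return node

    def visit_arg(self, node: ast.arg) -> ast.arg:
        node.arg = self._fresh(node.arg)
        return node

def apply_relation(source: str) -> str:
    """Return a semantics-preserving variant of `source`."""
    tree = RenameIdentifiers().visit(ast.parse(source))
    return ast.unparse(tree)  # ast.unparse requires Python 3.9+

print(apply_relation("def add(a, b):\n    total = a + b\n    return total"))
# -> def add(var_0, var_1):
#        var_2 = var_0 + var_1
#        return var_2
```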
📝 Abstract
Transformer-based language models of code have achieved state-of-the-art performance across a wide range of software analytics tasks, but their practical deployment remains limited due to high computational costs, slow inference speeds, and significant environmental impact. To address these challenges, recent research has increasingly explored knowledge distillation as a method for compressing a large language model of code (the teacher) into a smaller model (the student) while maintaining performance. However, the degree to which a student model deeply mimics the predictive behavior and internal representations of its teacher remains largely unexplored, as current accuracy-based evaluation provides only a surface-level view of model quality and often fails to capture deeper discrepancies in behavioral fidelity between the teacher and student models. To address this gap, we first empirically show that student models often fail to deeply mimic their teachers, suffering up to a 285% greater performance drop under adversarial attacks, a deficit not captured by traditional accuracy-based evaluation. We therefore propose MetaCompress, a metamorphic testing framework that systematically evaluates behavioral fidelity by comparing the outputs of teacher and student models under a set of behavior-preserving metamorphic relations. We evaluate MetaCompress on two widely studied tasks, using compressed versions of popular language models of code obtained via three different knowledge distillation techniques: Compressor, AVATAR, and MORPH. The results show that MetaCompress identifies behavioral discrepancies of up to 62% in student models, underscoring the need for behavioral fidelity evaluation within the knowledge distillation pipeline and establishing MetaCompress as a practical framework for testing compressed language models of code derived through knowledge distillation.
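As a companion sketch, the following shows how a behavior-preserving relation could drive the kind of teacher-versus-student comparison the abstract describes: both models complete the original and the transformed program, and a discrepancy is flagged when the teacher's behavior is stable under the relation while the student's is not. The Hugging Face pipeline usage, the exact-match stability criterion, and the student checkpoint name are assumptions made for illustration, not the paper's actual experimental setup.

```python
from transformers import pipeline  # pip install transformers

def complete(model, prompt: str) -> str:
    """Greedy code completion; returns only the newly generated text."""
    out = model(prompt, max_new_tokens=32, do_sample=False)
    return out[0]["generated_text"][len(prompt):]

def discrepancy_rate(teacher, student, programs, relation) -> float:
    """Fraction of programs on which the teacher's completion is stable
    under the metamorphic relation but the student's is not."""
    flagged = 0
    for src in programs:
        variant = relation(src)
        teacher_stable = complete(teacher, src) == complete(teacher, variant)
        student_stable = complete(student, src) == complete(student, variant)
        if teacher_stable and not student_stable:
            flagged += 1
    return flagged / len(programs)

# Hypothetical usage: "student-checkpoint" stands in for a model
# distilled with Compressor, AVATAR, or MORPH, and `apply_relation`
# is the identifier-renaming sketch shown earlier.
# teacher = pipeline("text-generation", model="Salesforce/codegen-350M-mono")
# student = pipeline("text-generation", model="student-checkpoint")
# print(discrepancy_rate(teacher, student, corpus, apply_relation))
```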