A Metamorphic Testing Perspective on Knowledge Distillation for Language Models of Code: Does the Student Deeply Mimic the Teacher?

📅 2025-11-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation of knowledge distillation relies heavily on accuracy metrics, which fail to expose how poorly student models mimic the deeper behavior and internal representations of their teachers. Method: This paper integrates metamorphic testing into the evaluation of distilled language models of code, proposing MetaCompress, a framework that applies semantics-preserving metamorphic relations to code in order to systematically assess the behavioral fidelity of three mainstream distillation methods (Compressor, AVATAR, and MORPH) on code completion and generation tasks. Results: Experiments reveal that student models suffer up to a 285% greater performance drop under adversarial metamorphic transformations, a deficit entirely masked by conventional accuracy metrics. MetaCompress detects up to 62% behavioral discrepancies across compressed models, effectively uncovering latent behavioral mismatches. This work establishes an evaluation paradigm centered on behavioral consistency, providing quantifiable and interpretable validation criteria for trustworthy model compression.

📝 Abstract
Transformer-based language models of code have achieved state-of-the-art performance across a wide range of software analytics tasks, but their practical deployment remains limited due to high computational costs, slow inference speeds, and significant environmental impact. To address these challenges, recent research has increasingly explored knowledge distillation as a method for compressing a large language model of code (the teacher) into a smaller model (the student) while maintaining performance. However, the degree to which a student model deeply mimics the predictive behavior and internal representations of its teacher remains largely unexplored, as current accuracy-based evaluation provides only a surface-level view of model quality and often fails to capture more profound discrepancies in behavioral fidelity between the teacher and student models. To address this gap, we empirically show that the student model often fails to deeply mimic the teacher model, resulting in up to a 285% greater performance drop under adversarial attacks, which is not captured by traditional accuracy-based evaluation. Therefore, we propose MetaCompress, a metamorphic testing framework that systematically evaluates behavioral fidelity by comparing the outputs of teacher and student models under a set of behavior-preserving metamorphic relations. We evaluate MetaCompress on two widely studied tasks, using compressed versions of popular language models of code obtained via three different knowledge distillation techniques: Compressor, AVATAR, and MORPH. The results show that MetaCompress identifies up to 62% behavioral discrepancies in student models, underscoring the need for behavioral fidelity evaluation within the knowledge distillation pipeline and establishing MetaCompress as a practical framework for testing compressed language models of code derived through knowledge distillation.
Problem

Research questions and friction points this paper is trying to address.

Evaluating behavioral fidelity in knowledge distillation for language models of code
Assessing whether student models deeply mimic their teachers beyond accuracy metrics
Identifying performance discrepancies in compressed models under adversarial attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

A metamorphic testing framework that evaluates behavioral fidelity
Systematic comparison of teacher and student model outputs under behavior-preserving transformations
Identification of behavioral discrepancies in knowledge-distilled models
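The core idea can be sketched in a few lines: apply a semantics-preserving transformation (a metamorphic relation, e.g. renaming a local identifier) to an input program, then flag cases where the teacher and student agreed on the original but diverge on the transformed variant. The sketch below is illustrative only and assumes toy stand-in models; it is not the paper's MetaCompress implementation, and the function names are hypothetical.

```python
import re

def rename_identifier(code: str, old: str, new: str) -> str:
    """Semantics-preserving metamorphic relation: whole-word rename of a
    local identifier. Assumes `old` is not a keyword and does not appear
    inside string literals."""
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

def behavioral_divergence(teacher_fn, student_fn, originals, relations):
    """Fraction of (input, relation) pairs where the student's output stops
    matching the teacher's after a transformation, counting only inputs
    where the two models agreed beforehand."""
    flips = total = 0
    for code in originals:
        if teacher_fn(code) != student_fn(code):
            continue  # disagreement already visible without metamorphic testing
        for relation in relations:
            variant = relation(code)
            total += 1
            if teacher_fn(variant) != student_fn(variant):
                flips += 1
    return flips / total if total else 0.0

# Hypothetical stand-in models: the "teacher" keys on the program's
# structure (the operator), while the brittle "student" keys on the
# surface token "total" and so breaks under renaming.
teacher = lambda code: "sum" if "+" in code else "other"
student = lambda code: "sum" if "total" in code else "other"

snippets = ["total = a + b", "total = a * b"]
relations = [lambda c: rename_identifier(c, "total", "acc")]
print(behavioral_divergence(teacher, student, snippets, relations))  # → 1.0
```

In this toy setup the models agree on `"total = a + b"`, but renaming `total` to `acc` flips the student's output while leaving the teacher's unchanged, exactly the kind of behavioral discrepancy that plain accuracy on the original inputs would never surface.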
Md. Abdul Awal
Department of Computer Science, University of Saskatchewan, 110 Science Pl, Saskatoon, S7N 5C9, Saskatchewan, Canada
Mrigank Rochan
Assistant Professor of Computer Science, University of Saskatchewan
Computer Vision, Machine Learning
Chanchal K. Roy
Department of Computer Science, University of Saskatchewan, 110 Science Pl, Saskatoon, S7N 5C9, Saskatchewan, Canada