🤖 AI Summary
This work addresses the difficulty of preserving both predictive performance and adversarial robustness when compressing code models, a limitation inherent to single-source knowledge distillation. To overcome it, the authors propose MoEKD, a novel framework that introduces a mixture-of-experts (MoE) mechanism into knowledge distillation. MoEKD employs a learnable routing strategy to dynamically aggregate knowledge from multiple expert teachers, enabling efficient and robust model compression. The approach breaks through the performance ceiling of conventional single-source distillation, improving adversarial robustness by up to 35.8% and prediction accuracy by up to 13% on vulnerability detection compared to state-of-the-art methods; an ablation further shows that aggregated expert knowledge keeps models competitive even when their size is reduced by roughly half.
📝 Abstract
Large language models for code have achieved strong performance across diverse software analytics tasks, yet their real-world adoption remains limited by high computational demands, slow inference, significant energy consumption, and environmental impact. Knowledge distillation (KD) offers a practical solution by transferring knowledge from a large model to a smaller, more efficient one. Despite the effectiveness of KD, recent studies show that models distilled from a single source often exhibit degraded adversarial robustness, even when robustness-aware distillation techniques are employed. These observations suggest a fundamental limitation of single-source distillation: it struggles to transfer knowledge that is both high-quality and robust. To overcome this limitation, we propose Mixture of Experts Knowledge Distillation (MoEKD), a KD framework that leverages a Mixture of Experts (MoE) architecture to enable more effective and robust knowledge transfer from multiple specialized experts into a compact model. MoEKD decomposes the distillation process into three stages: training the experts and the router, aggregating expert knowledge through a learned routing mechanism, and distilling from the aggregated knowledge. We evaluate MoEKD on the vulnerability detection task using CodeBERT and GraphCodeBERT models. Experimental results show that MoEKD not only improves adversarial robustness by up to 35.8% but also enhances predictive performance by up to 13% compared to state-of-the-art KD baselines, including Compressor and AVATAR. Furthermore, an ablation study demonstrates that aggregating expert knowledge enables ultra-compact models to maintain competitive performance even when their size is reduced by approximately half. Overall, these results highlight the effectiveness of multi-expert knowledge aggregation in addressing key limitations of existing single-source KD approaches.
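To make the routed aggregation and distillation stages more concrete, here is a minimal sketch of how a learned router could mix frozen expert teachers and how the mixed distribution could be distilled into a compact student. This is an illustration under assumptions, not the authors' implementation: the router architecture, the batch layout, the temperature and loss weighting, and all names (`Router`, `distillation_step`, `router_feats`) are hypothetical.

```python
# Hypothetical sketch of routed multi-expert distillation (not the paper's code).
# Assumes K frozen expert teachers and a compact student that each return
# classification logits, and a fixed per-example feature (e.g., a [CLS]
# embedding) used as input to the router.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Router(nn.Module):
    """Learns per-example soft weights over the expert teachers."""
    def __init__(self, feat_dim: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(feat_dim, num_experts)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # (batch, num_experts) mixing weights on the simplex
        return F.softmax(self.gate(feats), dim=-1)

def distillation_step(student, experts, router, batch, temperature=2.0, alpha=0.5):
    """One step: aggregate expert logits via the router, then distill the
    aggregated distribution into the student alongside the task loss."""
    inputs, labels, router_feats = batch  # hypothetical batch layout

    with torch.no_grad():  # expert teachers stay frozen during distillation
        expert_logits = torch.stack([e(inputs) for e in experts], dim=1)  # (B, K, C)

    weights = router(router_feats).unsqueeze(-1)        # (B, K, 1)
    aggregated = (weights * expert_logits).sum(dim=1)   # (B, C) mixed teacher logits

    student_logits = student(inputs)
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(aggregated / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    ce_loss = F.cross_entropy(student_logits, labels)
    return alpha * kd_loss + (1 - alpha) * ce_loss
```

In the staged process described in the abstract, the experts and the router would already have been trained before this step; the sketch covers only the final distillation from the aggregated knowledge.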