🤖 AI Summary
This work addresses the limitations of sequence-level knowledge distillation (KD) in neural machine translation—specifically, insufficient characterization of the teacher model's output distribution and degraded performance on low-resource languages. We propose a multi-hypothesis distillation framework that replaces the conventional single-mode beam search approximation with richer supervision derived from the teacher's *n*-best lists and diverse decoding strategies (e.g., sampling), thereby capturing a broader slice of the output distribution and more target-side prefix information. By fusing supervision signals across multiple hypotheses, our approach significantly enhances student model generalization for low-resource languages and mitigates the amplification of gender bias commonly observed in KD. Experiments across multiple low-resource language pairs demonstrate consistent improvements in translation quality (BLEU) and lexical diversity, validating the effectiveness of multi-hypothesis supervision for compressing multilingual NMT models.
📝 Abstract
This paper explores sequence-level knowledge distillation (KD) of multilingual pre-trained encoder-decoder translation models. We argue that the teacher model's output distribution holds valuable insights for the student, beyond the approximated mode obtained through beam search (the standard decoding method), and present Multi-Hypothesis Distillation (MHD), a sequence-level KD method that generates multiple translations for each source sentence. This provides a richer representation of the teacher model's distribution and exposes the student model to a wider range of target-side prefixes. We leverage $n$-best lists from beam search to guide the student's learning and examine alternative decoding methods to address issues like low variability and the under-representation of infrequent tokens. For low-resource languages, our research shows that while sampling methods may slightly compromise translation quality compared to beam-search-based approaches, they enhance the generated corpora with greater variability and lexical richness. This ultimately improves student model performance and mitigates the gender bias amplification often associated with KD.
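The core data-side idea can be sketched in a few lines: instead of pairing each source sentence with only the teacher's single beam-search output, pair it with all *n* teacher hypotheses. The sketch below is a minimal, hedged illustration — `teacher_nbest` is a toy stand-in for the teacher (a real setup would call something like a Hugging Face `model.generate(..., num_beams=n, num_return_sequences=n)` for *n*-best lists, or sampling-based decoding for more varied hypotheses); `build_mhd_corpus` is a hypothetical helper name, not from the paper.

```python
def teacher_nbest(src, n=4):
    """Toy stand-in for the teacher's n-best list from beam search.
    Real hypotheses would be n distinct translations of `src`;
    here we fabricate placeholder strings for illustration."""
    return [f"{src} <hyp{i}>" for i in range(n)]


def build_mhd_corpus(sources, generate, n=4):
    """Multi-hypothesis distillation corpus: every source sentence is
    paired with ALL n teacher hypotheses rather than the single
    beam-search mode, exposing the student to a broader slice of the
    teacher's distribution and more target-side prefixes."""
    return [(src, hyp) for src in sources for hyp in generate(src, n)]


# Each source expands into n training pairs for the student:
corpus = build_mhd_corpus(["a b c", "d e f"], teacher_nbest, n=3)
# 2 sources x 3 hypotheses = 6 (source, translation) pairs
```

Swapping `teacher_nbest` for a sampling-based generator would trade some per-hypothesis quality for the greater lexical variability the abstract describes.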