🤖 AI Summary
This work addresses a limitation of conventional knowledge distillation (LLM→SLM) in text matching: it overlooks the domain expertise embedded in small language models (SLMs). We propose a **reverse knowledge distillation paradigm** in which large language models (LLMs) learn from SLMs. Methodologically, we reconfigure the LLM into an encoder-decoder architecture via LoRA and introduce **margin-aware contrastive learning**, aligning cross-architecture representations using fine-grained similarity signals generated by the SLM. To our knowledge, this is the first successful realization of SLM→LLM knowledge transfer, bridging the modeling gap between encoder-only and decoder-based architectures. Empirical evaluation on financial, medical, and other domain-specific benchmarks, together with real-world deployment, demonstrates substantial improvements in LLM matching performance. The proposed model has been fully deployed and operates stably in production.
📝 Abstract
Knowledge distillation typically transfers knowledge from a Large Language Model (LLM) to a Smaller Language Model (SLM). However, in tasks such as text matching, fine-tuned smaller models often yield more effective domain-specific representations, since they are optimized directly for the similarity of input pairs. To combine the specialized strengths of small models with the rich semantic understanding of LLMs, we introduce a flipped knowledge distillation paradigm in which the LLM learns from the SLM. Specifically, we bridge the architectural gap between decoder-only LLMs and smaller encoder-based models by reinterpreting the LLM in an encoder-decoder manner using LoRA: the encoder generates compressed representations, while the decoder maps them to the output space. During training, the encoder produces representations and their pairwise similarities, which are then aligned with the similarity scores produced by the SLM teacher using our proposed Margin-aware Contrastive Learning (MCL). MCL enforces accurate similarity for both positive and negative pairs and adaptively handles the internal variation within the positive and negative sets. Our paradigm requires only a reasonably well-performing SLM, allowing the LLM to achieve improved performance. Experiments on financial and healthcare benchmarks, as well as real-world applications, confirm its effectiveness, and the model has been fully deployed in an online environment.
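To make the alignment idea concrete, the following is a minimal illustrative sketch of a margin-aware contrastive objective. It is a hypothetical hinge-style formulation written for this summary, not the paper's exact loss: the function name `mcl_loss`, the per-pair adaptive margin `base_margin * teacher_sim`, and the toy inputs are all assumptions. The key idea it demonstrates is that the SLM teacher's fine-grained similarity score sets a pair-specific target for the LLM student, rather than a single fixed margin for all positives and all negatives.

```python
def mcl_loss(student_sims, teacher_sims, labels, base_margin=0.2):
    """Hypothetical margin-aware contrastive loss sketch.

    student_sims: similarities from the LLM encoder's compressed representations
    teacher_sims: fine-grained similarity scores produced by the SLM teacher
    labels:       1 for positive pairs, 0 for negative pairs
    """
    total = 0.0
    for s, t, y in zip(student_sims, teacher_sims, labels):
        # The teacher score scales the margin per pair, so pairs the teacher
        # rates as strongly similar (or dissimilar) impose stricter targets.
        margin = base_margin * t
        if y == 1:
            # Pull positive-pair similarity up toward an adaptive floor.
            total += max(0.0, t - margin - s)
        else:
            # Push negative-pair similarity down below an adaptive ceiling.
            total += max(0.0, s - (t + margin))
    return total / len(labels)
```

In this sketch, a student that already matches the teacher's ordering incurs zero loss (e.g. `mcl_loss([0.9, 0.1], [0.8, 0.2], [1, 0])` returns `0.0`), while an underestimated positive pair is penalized in proportion to how far it falls below its teacher-derived floor. In practice one would compute this over batched tensor similarities with gradients, but the scalar form keeps the adaptive-margin mechanism visible.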