🤖 AI Summary
To address the limitations of generic embedding models in financial retrieval (insufficient domain expertise, reliance on manual annotation, and trade-offs between efficiency and accuracy), this paper proposes an unsupervised knowledge-distillation framework. It employs a large language model (LLM) as a discriminator to automatically mine hard negative examples from financial filings, and iteratively refines a dual-encoder student model through teacher-student interaction. The method requires no human annotation and achieves effective domain adaptation. Experiments on 21,800 query-document pairs demonstrate a 27.7% improvement in MRR@5 and a 44.6% gain in mean DCG@5; NDCG also improves on three of the four document categories in FinanceBench. The paper's core contribution is the first LLM-guided, unsupervised embedding-distillation paradigm tailored to financial text, balancing domain specificity, scalability, and deployment efficiency.
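As a rough illustration of the loop summarized above, the sketch below shows how an LLM discriminator might label retrieved candidates as positives or hard negatives and how the dual-encoder student could be retrained on the resulting triplets. This is a minimal sketch using the `sentence-transformers` library; the function names (`judge_with_llm`, `mine_and_train`), the base model choice, and the triplet-selection heuristics are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the teacher-student distillation loop.
# `judge_with_llm(query, passage) -> bool` stands in for the paper's LLM
# discriminator; its prompting and parsing details are assumptions.
from sentence_transformers import SentenceTransformer, InputExample, losses, util
from torch.utils.data import DataLoader

def mine_and_train(student, queries, corpus, judge_with_llm, rounds=3, k=20):
    for _ in range(rounds):
        # Re-embed the corpus with the current (refined) student each round,
        # so mining surfaces progressively harder examples.
        corpus_emb = student.encode(corpus, convert_to_tensor=True)
        triplets = []
        for q in queries:
            q_emb = student.encode(q, convert_to_tensor=True)
            hits = util.semantic_search(q_emb, corpus_emb, top_k=k)[0]
            # The LLM discriminator labels each retrieved passage.
            labels = [judge_with_llm(q, corpus[h["corpus_id"]]) for h in hits]
            positives = [corpus[h["corpus_id"]] for h, y in zip(hits, labels) if y]
            # High-ranked but LLM-rejected passages serve as hard negatives.
            negatives = [corpus[h["corpus_id"]] for h, y in zip(hits, labels) if not y]
            for pos in positives[:1]:
                for neg in negatives[:2]:
                    triplets.append(InputExample(texts=[q, pos, neg]))
        loader = DataLoader(triplets, shuffle=True, batch_size=16)
        loss = losses.MultipleNegativesRankingLoss(student)
        student.fit(train_objectives=[(loader, loss)], epochs=1)
    return student

student = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder general-purpose base
```

The key design point is that mining and retraining are interleaved: each round's retriever produces the candidates judged in the next round, rather than generating all training data once up front.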
📝 Abstract
Despite advances in generative large language models (LLMs), the practical application of specialized conversational AI agents remains constrained by computation costs, latency requirements, and the need for precise domain-specific relevance measures. While existing embedding models address the first two constraints, they underperform on information retrieval in specialized domains such as finance. This paper introduces a scalable pipeline that trains specialized models from an unlabeled corpus, using a general-purpose retrieval embedding model as the foundation. Our method yields an average 27.7% improvement in MRR@5 and a 44.6% improvement in mean DCG@5 across 14 financial filing types, measured over 21,800 query-document pairs, and improved NDCG on 3 of 4 document classes in FinanceBench. We adapt retrieval embeddings (a bi-encoder) for RAG, not LLM generators, using LLM-judged relevance to distill domain knowledge into a compact retriever. Prior work pairs synthetically generated queries with real passages to fine-tune the retrieval model directly. Our pipeline differs by introducing interaction between the student and teacher models: it interleaves retrieval-based mining of hard positive/negative examples from the unlabeled corpus with iterative retraining of the student model's weights on these examples. Each retrieval iteration uses the refined student model to mine the corpus for progressively harder training examples for the subsequent training iteration. The methodology provides a cost-effective way to bridge the gap between general-purpose models and specialized domains without labor-intensive human annotation.
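For readers unfamiliar with the reported metrics, the following is a minimal, self-contained computation of MRR@5 and DCG@5 for a single ranked list. The variable names and the example data are illustrative only and do not reflect the paper's evaluation code.

```python
import math

def mrr_at_k(ranked_ids, relevant_ids, k=5):
    """Reciprocal rank of the first relevant document in the top-k (0 if none)."""
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

def dcg_at_k(ranked_ids, relevance, k=5):
    """Discounted cumulative gain over the top-k results."""
    return sum(
        relevance.get(doc_id, 0.0) / math.log2(rank + 1)
        for rank, doc_id in enumerate(ranked_ids[:k], start=1)
    )

# Example: the only relevant doc "d3" is ranked third,
# so MRR@5 = 1/3 and DCG@5 = 1/log2(4) = 0.5.
print(mrr_at_k(["d1", "d2", "d3"], {"d3"}))       # 0.333...
print(dcg_at_k(["d1", "d2", "d3"], {"d3": 1.0}))  # 0.5
```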