Adaptation of Embedding Models to Financial Filings via LLM Distillation

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitations of generic embedding models in financial retrieval—including insufficient domain expertise, reliance on manual annotations, and trade-offs between efficiency and accuracy—this paper proposes an unsupervised knowledge distillation framework. It employs a large language model (LLM) as a discriminator to automatically mine hard negative examples from financial filings, and iteratively refines a dual-encoder student model via teacher-student interaction. The method requires no human annotation and achieves effective domain adaptation. Experiments on 21,800 query-document pairs demonstrate a 27.7% improvement in MRR@5 and a 44.6% gain in mean DCG@5; NDCG also improves significantly across three of four document categories in FinanceBench. Our core contribution is the first LLM-guided, unsupervised embedding distillation paradigm tailored for financial text—balancing domain specificity, scalability, and deployment efficiency.
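The reported gains are in standard ranking metrics. For reference, here is a minimal sketch of MRR@k, DCG@k, and NDCG@k using their textbook definitions; this is illustrative only, not the authors' evaluation code.

```python
import math

def mrr_at_k(ranked_lists, k=5):
    """Mean Reciprocal Rank@k: average over queries of 1/rank of the
    first relevant document within the top-k results (0 if none)."""
    total = 0.0
    for relevances in ranked_lists:  # each list: 1 = relevant, 0 = not
        rr = 0.0
        for rank, rel in enumerate(relevances[:k], start=1):
            if rel:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_lists)

def dcg_at_k(relevances, k=5):
    """Discounted Cumulative Gain@k with a log2 position discount."""
    return sum(rel / math.log2(rank + 1)
               for rank, rel in enumerate(relevances[:k], start=1))

def ndcg_at_k(relevances, k=5):
    """DCG normalized by the ideal (descending-sorted) DCG."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0
```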

📝 Abstract
Despite advances in generative large language models (LLMs), practical application of specialized conversational AI agents remains constrained by computation costs, latency requirements, and the need for precise domain-specific relevance measures. While existing embedding models address the first two constraints, they underperform on information retrieval in specialized domains like finance. This paper introduces a scalable pipeline that trains specialized models from an unlabeled corpus using a general-purpose retrieval embedding model as the foundation. Our method yields an average 27.7% improvement in MRR@5 and a 44.6% improvement in mean DCG@5 across 14 financial filing types measured over 21,800 query-document pairs, and improved NDCG on 3 of 4 document classes in FinanceBench. We adapt retrieval embeddings (bi-encoder) for RAG, not LLM generators, using LLM-judged relevance to distill domain knowledge into a compact retriever. Prior work pairs synthetically generated queries with real passages to directly fine-tune the retrieval model. Our pipeline differs by introducing interaction between the student and teacher models: it interleaves retrieval-based mining of hard positive/negative examples from the unlabeled corpus with iterative retraining of the student model's weights on these examples. Each retrieval iteration uses the refined student model to mine the corpus for progressively harder training examples for the subsequent training iteration. The methodology provides a cost-effective solution for bridging the gap between general-purpose models and specialized domains without labor-intensive human annotation.
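The mine-then-retrain loop in the abstract can be sketched with toy stand-ins: here the "student" scores documents by weighted token overlap, and "retraining" is simulated by reweighting tokens from LLM-labeled candidates. All names (`retrieve`, `distill`, `llm_judge`) are illustrative assumptions, not the paper's actual architecture or API.

```python
def retrieve(weights, query, corpus, top_k):
    """Toy student retriever: rank documents by the summed weights of
    tokens they share with the query (unseen tokens default to 1.0)."""
    def score(doc):
        return sum(weights.get(t, 1.0)
                   for t in set(query.split()) & set(doc.split()))
    return sorted(corpus, key=score, reverse=True)[:top_k]

def distill(corpus, queries, llm_judge, iterations=2, top_k=3):
    """Interleave candidate mining with the current student and a
    simulated retraining step driven by the LLM teacher's labels."""
    weights = {}  # toy stand-in for the student model's parameters
    for _ in range(iterations):
        # 1) mine candidates with the current student; the LLM teacher
        #    labels each one as a hard positive (1) or hard negative (0)
        examples = [(q, d, llm_judge(q, d))
                    for q in queries
                    for d in retrieve(weights, q, corpus, top_k)]
        # 2) "retrain": boost shared tokens from positives, dampen
        #    shared tokens from negatives (stand-in for a contrastive update)
        for q, d, label in examples:
            for t in set(q.split()) & set(d.split()):
                weights[t] = weights.get(t, 1.0) * (1.1 if label else 0.9)
    return weights
```

Each pass re-mines the corpus with the updated student, so later iterations surface harder candidates for the teacher to label, mirroring the interaction the abstract describes.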
Problem

Research questions and friction points this paper is trying to address.

Adapting embedding models for financial document retrieval
Improving domain-specific relevance without human annotation
Using LLM distillation to train compact, specialized retrievers
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM distillation for domain-specific embedding adaptation
Iterative student-teacher retrieval mining for hard examples
Cost-effective specialized retriever training without human annotation
Eliot Brenner
Goldman Sachs, New York, NY, USA
Dominic Seyler
Goldman Sachs, New York, NY, USA
Manjunath Hegde
Goldman Sachs, New York, NY, USA
Andrei Simion
Goldman Sachs, New York, NY, USA
Koustuv Dasgupta
Goldman Sachs, New York, NY, USA
Bing Xiang
Head of AI Research, Goldman Sachs
Deep Learning · Machine Learning · Natural Language Processing