🤖 AI Summary
This work addresses a core challenge in enterprise retrieval: short, ambiguous queries require jointly modeling semantic content and metadata, yet conventional LLM-based rerankers incur high per-query computational cost. The authors propose MACA, a framework that distills a trustworthy LLM reranker into a lightweight student model via metadata-aware cross-model alignment, eliminating the need for online LLM inference. Key innovations include a metadata-aware prompting strategy that validates teacher reliability, a MetaFusion multi-objective loss combining a metadata-conditioned ranking loss with a cross-model margin loss, and listwise learning enhanced by hard-negative mining. Experiments on banking FAQ datasets show that the MACA teacher improves Accuracy@1 by 3–5 percentage points over a MAFA baseline, while the distilled student model (e.g., MiniLM) boosts Accuracy@1 from 0.23 to 0.48 on a proprietary dataset, significantly outperforming baselines without requiring online LLM access.
📝 Abstract
Modern enterprise retrieval systems must handle short, underspecified queries such as "foreign transaction fee refund" and "recent check status". In these cases, semantic nuance and metadata matter, but per-query large language model (LLM) re-ranking and manual labeling are costly. We present Metadata-Aware Cross-Model Alignment (MACA), which distills a calibrated, metadata-aware LLM re-ranker into a compact student retriever, avoiding online LLM calls. A metadata-aware prompt verifies the teacher's trustworthiness by checking consistency under candidate-order permutations and robustness to paraphrases, then supplies listwise scores, hard negatives, and calibrated relevance margins. The student trains with MACA's MetaFusion objective, which combines a metadata-conditioned ranking loss with a cross-model margin loss, so it learns to push the correct answer above semantically similar candidates with a mismatched topic, sub-topic, or entity. On a proprietary consumer-banking FAQ corpus and the public BankFAQs dataset, the MACA teacher surpasses a MAFA baseline on Accuracy@1 by five points on the proprietary set and three points on BankFAQs. MACA students substantially outperform pretrained encoders; for example, on the proprietary corpus, MiniLM Accuracy@1 improves from 0.23 to 0.48, while inference remains free of LLM calls and supports retrieval-augmented generation.
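The abstract does not give the MetaFusion objective in equation form, but its two components (a listwise ranking loss conditioned on teacher scores, plus a cross-model margin loss that penalizes ranking a metadata-mismatched candidate above the correct answer) can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name `metafusion_loss`, the mixing weight `alpha`, the margin value, and the up-weighting of metadata-mismatched negatives are all assumptions for the sketch.

```python
import numpy as np

def _softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def metafusion_loss(student_scores, teacher_scores, pos_idx, hard_neg_idx,
                    metadata_match, alpha=0.5, margin=1.0):
    """Hypothetical sketch of a MetaFusion-style objective (names assumed).

    student_scores / teacher_scores: per-candidate relevance scores for one query.
    pos_idx: index of the correct answer in the candidate list.
    hard_neg_idx: indices of mined hard negatives.
    metadata_match: per-candidate flags; False means the candidate's
        topic/sub-topic/entity metadata mismatches the query.
    """
    student = np.asarray(student_scores, dtype=float)
    teacher = np.asarray(teacher_scores, dtype=float)

    # Listwise distillation term: cross-entropy of the student's score
    # distribution against the teacher's softened listwise distribution.
    p_teacher = _softmax(teacher)
    log_p_student = np.log(_softmax(student) + 1e-12)
    listwise = -(p_teacher * log_p_student).sum()

    # Cross-model margin term: hinge loss pushing the positive above each
    # hard negative; metadata-mismatched negatives are weighted up (assumed 2x)
    # so the student learns to separate semantically similar but
    # topically wrong candidates.
    margin_loss = 0.0
    for j in hard_neg_idx:
        weight = 1.0 if metadata_match[j] else 2.0
        margin_loss += weight * max(0.0, margin - (student[pos_idx] - student[j]))

    return alpha * listwise + (1.0 - alpha) * margin_loss
```

Under this sketch, a student that mirrors the teacher's ordering and keeps the positive a margin above every hard negative incurs a strictly smaller loss than one that inverts the ranking, which is the behavior the margin term is meant to enforce.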