🤖 AI Summary
This study addresses the limitations of semantic similarity–based retrieval in structured, highly repetitive regulatory texts, where linguistic overlap often obscures meaningful content distinctions and undermines the effectiveness of retrieval-augmented generation (RAG). To mitigate this issue, the work systematically investigates metadata-aware retrieval strategies, proposing and evaluating fusion approaches such as unified embedding and prefix concatenation. The findings demonstrate that incorporating metadata increases intra-document cohesion and reduces inter-document ambiguity, thereby improving retrieval performance. Evaluated on a newly curated benchmark dataset, RAGMATE-10K, both the unified embedding and prefix-based methods significantly outperform plain-text baselines across multiple question types and evaluation metrics. Notably, the unified embedding approach matches or at times exceeds prefix concatenation while remaining easier to maintain.
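To make the prefix-concatenation idea concrete, the following is a minimal sketch of flattening chunk metadata into the text that gets embedded and indexed. The field names, serialization format, and the sentence-transformers encoder are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of metadata-as-text "prefix concatenation" before embedding.
# Encoder, field names, and the "key: value | ..." serialization are assumptions.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder, not the paper's model

def prefix_chunk(chunk_text: str, metadata: dict) -> str:
    # Serialize selected metadata fields and prepend them to the chunk text,
    # so structural cues (company, form type, section, year) enter the embedding.
    prefix = " | ".join(f"{key}: {value}" for key, value in metadata.items())
    return f"{prefix}\n{chunk_text}"

chunk = "The registrant recorded total net revenue of ..."
meta = {"company": "ACME Corp", "form": "10-K", "section": "Item 7 (MD&A)", "fiscal_year": 2023}

vector = model.encode(prefix_chunk(chunk, meta))  # this vector is indexed instead of the raw chunk
```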
📝 Abstract
Retrieval-Augmented Generation systems depend on retrieving semantically relevant document chunks to support accurate, grounded outputs from large language models. In structured and repetitive corpora such as regulatory filings, chunk similarity alone often fails to distinguish between documents with overlapping language. Practitioners often flatten metadata into input text as a heuristic, but the impact and trade-offs of this practice remain poorly understood. We present a systematic study of metadata-aware retrieval strategies, comparing plain-text baselines with approaches that embed metadata directly. Our evaluation spans metadata-as-text (prefix and suffix), a dual-encoder unified embedding that fuses metadata and content in a single index, dual-encoder late-fusion retrieval, and metadata-aware query reformulation. Across multiple retrieval metrics and question types, we find that prefixing and unified embeddings consistently outperform plain-text baselines, with the unified embedding at times exceeding prefixing while being easier to maintain. Beyond empirical comparisons, we analyze the embedding space, showing that metadata integration improves effectiveness by increasing intra-document cohesion, reducing inter-document confusion, and widening the separation between relevant and irrelevant chunks. Field-level ablations show that structural cues provide strong disambiguating signals. Our code, evaluation framework, and the RAGMATE-10K dataset are publicly available.
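As a rough illustration of the dual-encoder unified embedding, the sketch below encodes metadata and content separately and fuses them into a single vector that lives in one index. The weighted-sum fusion rule, the alpha value, and the reuse of one encoder for both fields are assumptions made for brevity; the abstract does not specify the exact fusion mechanism.

```python
# Hedged sketch of a "unified embedding": metadata and content are encoded
# separately and fused into one vector stored in a single index.
# The weighted-sum fusion and alpha=0.7 are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

# One shared encoder stands in for the two encoders of a dual-encoder setup.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def unified_embedding(chunk_text: str, metadata: dict, alpha: float = 0.7) -> np.ndarray:
    # Encode content and serialized metadata independently, then fuse them
    # so a single vector index serves retrieval.
    content_vec = encoder.encode(chunk_text)
    meta_text = " | ".join(f"{key}: {value}" for key, value in metadata.items())
    meta_vec = encoder.encode(meta_text)
    fused = alpha * content_vec + (1.0 - alpha) * meta_vec
    return fused / np.linalg.norm(fused)  # normalize so cosine similarity is well behaved

vec = unified_embedding(
    "Net sales increased 4% year over year, driven by ...",
    {"company": "ACME Corp", "form": "10-K", "section": "Item 7 (MD&A)", "fiscal_year": 2023},
)
```

Compared with prefix concatenation, this style keeps metadata handling in the indexing pipeline rather than in the chunk text itself, which is one plausible reading of why the unified approach is described as easier to maintain.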