🤖 AI Summary
Retrieval-Augmented Generation (RAG) systems can overlook important information in domain-specific applications because relevance judgments are often too coarse-grained. Manual or GPT-4-based annotation is costly, covers only a subset of (query, document) pairs, and therefore suffers from selection bias that undermines IR recall evaluation. To address this, the authors propose DIRAS, a manual-annotation-free framework that fine-tunes open-source LLMs to apply nuanced, domain-specific relevance definitions and annotate (partial) relevance labels with calibrated relevance scores. Experiments show that fine-tuned 8B models reach GPT-4-level performance in annotating and ranking unseen (query, document) pairs, improving the reliability of RAG recall evaluation. All code, LLM generations, and human annotations are publicly released.
📝 Abstract
Retrieval Augmented Generation (RAG) is widely employed to ground responses to queries on domain-specific documents. But do RAG implementations leave out important information when answering queries that need an integrated analysis of information (e.g., Tell me good news in the stock market today.)? To address these concerns, RAG developers need to annotate information retrieval (IR) data for their domain of interest, which is challenging because (1) domain-specific queries usually need nuanced definitions of relevance beyond shallow semantic relevance; and (2) human or GPT-4 annotation is costly and cannot cover all (query, document) pairs (i.e., annotation selection bias), thus harming the effectiveness in evaluating IR recall. To address these challenges, we propose DIRAS (Domain-specific Information Retrieval Annotation with Scalability), a manual-annotation-free schema that fine-tunes open-sourced LLMs to consider nuanced relevance definition and annotate (partial) relevance labels with calibrated relevance scores. Extensive evaluation shows that DIRAS enables smaller (8B) LLMs to achieve GPT-4-level performance on annotating and ranking unseen (query, document) pairs, and is helpful for real-world RAG development. All code, LLM generations, and human annotations can be found at https://github.com/EdisonNi-hku/DIRAS.
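The core annotation idea can be illustrated with a minimal sketch. This is not the paper's implementation: the prompt template, function names, and toy logits below are assumptions for illustration only. The sketch shows how an annotator LLM given a nuanced relevance definition could yield a calibrated relevance score (the probability of answering "Yes" vs. "No"), which can then be used to rank (query, document) pairs.

```python
import math

# Hypothetical relevance definition; in DIRAS this would be
# domain-specific and supplied by the RAG developer.
RELEVANCE_DEFINITION = (
    "A document is relevant if it contains information needed to "
    "answer the query, even partially."
)

def build_prompt(query: str, document: str) -> str:
    # Assumed prompt template for the annotator LLM (not from the paper).
    return (
        f"Relevance definition: {RELEVANCE_DEFINITION}\n"
        f"Query: {query}\n"
        f"Document: {document}\n"
        "Is the document relevant? Answer Yes or No."
    )

def relevance_score(yes_logit: float, no_logit: float) -> float:
    # Softmax over the model's Yes/No logits gives a probability in
    # [0, 1] that can serve as a calibrated relevance score.
    return 1.0 / (1.0 + math.exp(no_logit - yes_logit))

def rank_documents(scored):
    # Rank (doc_id, score) pairs by descending relevance score,
    # enabling fine-grained IR evaluation beyond binary labels.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy logits standing in for a real fine-tuned 8B model's output.
scored = [
    ("doc_a", relevance_score(yes_logit=2.1, no_logit=-0.5)),
    ("doc_b", relevance_score(yes_logit=-1.0, no_logit=1.5)),
]
print(rank_documents(scored))
```

In practice the logits would come from a fine-tuned open-source model scoring the "Yes"/"No" answer tokens; the point of the sketch is that the continuous score supports partial-relevance ranking rather than a hard binary cut.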