Unanswerability Evaluation for Retrieval Augmented Generation

📅 2024-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RAG evaluation frameworks neglect a system's ability to reject unanswerable queries. This paper introduces UAEval4RAG, the first RAG evaluation framework explicitly designed for unanswerability assessment. It defines six categories of unanswerable requests (e.g., undefined, contradictory, cross-domain) and enables automated, knowledge-base-agnostic synthesis of hard queries together with quantitative evaluation. Key contributions: (1) a six-category taxonomy of unanswerability; (2) dual metrics, unanswered ratio and acceptable ratio, that expose implicit trade-offs among retrievers, rerankers, LLMs, and prompting strategies; and (3) a hybrid query-synthesis method combining rule-based and LLM-driven techniques, with component-level ablation studies. Experiments show that RAG components respond differently to answerable versus unanswerable queries. The open-sourced toolkit supports building more robust and trustworthy RAG systems.

📝 Abstract
Existing evaluation frameworks for retrieval-augmented generation (RAG) systems focus on answerable queries, but they overlook the importance of appropriately rejecting unanswerable requests. In this paper, we introduce UAEval4RAG, a framework designed to evaluate whether RAG systems can handle unanswerable queries effectively. We define a taxonomy with six unanswerable categories, and UAEval4RAG automatically synthesizes diverse and challenging queries for any given knowledge base, scoring system behavior with unanswered ratio and acceptable ratio metrics. We conduct experiments with various RAG components, including retrieval models, rewriting methods, rerankers, language models, and prompting strategies, and reveal hidden trade-offs in the performance of RAG systems. Our findings highlight the critical role of component selection and prompt design in optimizing RAG systems to balance accuracy on answerable queries with high rejection rates on unanswerable ones. UAEval4RAG provides valuable insights and tools for developing more robust and reliable RAG systems.
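The two metrics named in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the record fields (`abstained`, `acceptable`) and the judging criterion (e.g., an LLM judge deciding whether a refusal was helpful) are assumptions; the paper's exact definitions may differ.

```python
# Hypothetical sketch of UAEval4RAG's two metrics over a batch of
# unanswerable queries: the unanswered ratio (fraction the system
# refused to answer) and the acceptable ratio (fraction whose
# response was judged acceptable). Field names are illustrative.
from dataclasses import dataclass

@dataclass
class EvalRecord:
    query: str
    abstained: bool   # did the RAG system decline to answer?
    acceptable: bool  # was the response judged acceptable, e.g. by an LLM judge?

def unanswered_ratio(records: list[EvalRecord]) -> float:
    """Fraction of queries the system refused to answer."""
    return sum(r.abstained for r in records) / len(records)

def acceptable_ratio(records: list[EvalRecord]) -> float:
    """Fraction of queries whose response was judged acceptable."""
    return sum(r.acceptable for r in records) / len(records)

# Toy batch: three unanswerable queries, two refused, one acceptably so.
records = [
    EvalRecord("Who was Acme's CFO in 2031?", abstained=True, acceptable=True),
    EvalRecord("Define the term 'flibber' from the docs.", abstained=True, acceptable=False),
    EvalRecord("Compare the two policies on page 9.", abstained=False, acceptable=False),
]
print(unanswered_ratio(records))  # 2 of 3 queries refused
print(acceptable_ratio(records))  # 1 of 3 responses acceptable
```

Reporting the two numbers separately is what surfaces the trade-off the abstract describes: a system can push the unanswered ratio up with blanket refusals while its acceptable ratio, and its accuracy on answerable queries, falls.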
Problem

Research questions and friction points this paper is trying to address.

Existing RAG evaluations ignore whether systems appropriately reject unanswerable queries.
No established taxonomy or benchmark covers the distinct categories of unanswerability.
RAG systems must balance accuracy on answerable queries against high rejection rates on unanswerable ones.
Innovation

Methods, ideas, or system contributions that make the work stand out.

UAEval4RAG: the first framework to evaluate RAG systems on unanswerable queries.
A six-category taxonomy of unanswerability, driving automated, knowledge-base-agnostic query synthesis.
Unanswered ratio and acceptable ratio metrics that expose trade-offs across retrievers, rerankers, LLMs, and prompting strategies.