🤖 AI Summary
This work investigates the root causes of LLM errors in RAG systems: whether failures stem from underutilization of retrieved context or from intrinsic insufficiency of the context itself. To this end, we formally define and quantify "sufficient context" for a given query, and build a binary classifier to determine whether retrieved passages contain adequate information to answer the query. We conduct cross-model error stratification, revealing that large models tend to generate incorrect answers rather than abstain, whereas smaller models exhibit higher rates of hallucination or excessive abstention. Leveraging these insights, we propose a context-sufficiency-aware controllable generation framework featuring a guided abstention mechanism. Experiments across Gemini, GPT, and Gemma show 2–10% absolute accuracy gains and substantial hallucination reduction. Our code, evaluation prompts, and key findings are publicly released.
📄 Abstract
Augmenting LLMs with context leads to improved performance across many applications. Despite much research on Retrieval Augmented Generation (RAG) systems, an open question is whether errors arise because LLMs fail to utilize the context from retrieval or because the context itself is insufficient to answer the query. To shed light on this, we develop a new notion of sufficient context, along with a method to classify instances that have enough information to answer the query. We then use sufficient context to analyze several models and datasets. By stratifying errors based on context sufficiency, we find that larger models with higher baseline performance (Gemini 1.5 Pro, GPT 4o, Claude 3.5) excel at answering queries when the context is sufficient, but often output incorrect answers instead of abstaining when it is not. On the other hand, smaller models with lower baseline performance (Mistral 3, Gemma 2) hallucinate or abstain often, even with sufficient context. We further categorize cases where the context is useful and improves accuracy even though it does not fully answer the query, and the model errs without it. Building on our findings, we explore ways to reduce hallucinations in RAG systems, including a new selective generation method that leverages sufficient context information for guided abstention. Our method improves the fraction of correct answers among the times the model responds by 2–10% for Gemini, GPT, and Gemma. Key findings and the prompts used in our autorater analysis are available on our GitHub.
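The selective generation idea described above can be illustrated with a minimal sketch. All function names here are hypothetical: `is_sufficient` stands in for the paper's LLM-based sufficient-context autorater (approximated below with a trivial keyword heuristic), and `generate`/`confidence` stand in for any base model and self-confidence signal. The control flow shows the core mechanism: answer when the context is judged sufficient or confidence is high, otherwise abstain.

```python
# Hedged sketch of sufficient-context-guided selective generation.
# Not the paper's implementation; names and the heuristic are illustrative.

def is_sufficient(query: str, context: str) -> bool:
    """Placeholder for the sufficient-context classifier (autorater).

    The real system uses an LLM-based rater; this toy heuristic just
    checks for lexical overlap between query and retrieved context.
    """
    return any(tok in context.lower() for tok in query.lower().split())

def selective_generate(query, context, generate, confidence, threshold=0.5):
    """Answer only when context is sufficient or model confidence is high;
    otherwise abstain instead of risking a hallucinated answer."""
    answer = generate(query, context)
    if is_sufficient(query, context) or confidence(answer) >= threshold:
        return answer
    return "I don't know"
```

The key design choice is combining two signals for the abstention decision: context sufficiency alone does not guarantee the model uses the context correctly, and model confidence alone misses cases where the context cannot support any answer.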