Is Conformal Factuality for RAG-based LLMs Robust? Novel Metrics and Systematic Insights

📅 2026-03-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the tension between factuality and usefulness in retrieval-augmented generation (RAG) systems, where enforcing factual consistency often yields uninformative outputs, and existing conformal factuality methods exhibit limited robustness under distribution shifts and noisy contexts. The study systematically evaluates the reliability of combining RAG with conformal prediction across dimensions including generation quality, scoring, calibration, robustness, and computational efficiency. It introduces an information-aware evaluation metric to better capture the trade-off between factuality and informativeness. Experimental results demonstrate that stringent factuality constraints frequently lead to vacuous responses, that conformal filtering is highly sensitive to distributional shifts, and that a lightweight entailment verifier achieves comparable or superior performance to large language model (LLM) confidence scoring at only one percent of the computational cost.

📝 Abstract
Large language models (LLMs) frequently hallucinate, limiting their reliability in knowledge-intensive applications. Retrieval-augmented generation (RAG) and conformal factuality have emerged as potential ways to address this limitation. While RAG aims to ground responses in retrieved evidence, it provides no statistical guarantee that the final output is correct. Conformal factuality filtering offers distribution-free statistical reliability by scoring and filtering atomic claims with a threshold calibrated on held-out data; however, it does not guarantee that the final output remains informative. We systematically analyze the reliability and usefulness of conformal factuality for RAG-based LLMs across generation, scoring, calibration, robustness, and efficiency. We propose novel informativeness-aware metrics that better reflect task utility under conformal filtering. Across three benchmarks and multiple model families, we find that (i) conformal filtering suffers from low usefulness at high factuality levels because of vacuous outputs, (ii) the conformal factuality guarantee is not robust to distribution shifts and distractors, revealing that calibration data must closely match deployment conditions, and (iii) lightweight entailment-based verifiers match or outperform LLM-based confidence scorers while requiring over $100\times$ fewer FLOPs. Overall, our results expose the factuality-informativeness trade-off and the fragility of conformal filtering under distribution shifts and distractors, highlight the need for new approaches that treat robustness and usefulness as first-class reliability metrics, and provide actionable guidance for building RAG pipelines that are both reliable and computationally efficient.
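The conformal filtering step the abstract describes, calibrating a claim-score threshold on held-out data and keeping only claims above it, can be sketched as split conformal calibration. This is a minimal illustration under assumed conventions (per-response nonconformity taken as the highest score given to any incorrect claim), not the paper's actual implementation; all function names and data shapes are hypothetical:

```python
import numpy as np

def calibrate_threshold(cal_claim_scores, cal_claim_correct, alpha=0.1):
    """Split conformal calibration of a claim-filtering threshold.

    cal_claim_scores: list of per-response arrays of claim confidence scores.
    cal_claim_correct: matching arrays of booleans (is the claim factual?).
    Returns tau such that, with probability >= 1 - alpha over exchangeable
    data, filtering claims with score > tau removes every incorrect claim.
    """
    nonconformity = []
    for scores, correct in zip(cal_claim_scores, cal_claim_correct):
        # Highest score assigned to any incorrect claim in this response;
        # filtering strictly above it keeps only correct claims here.
        wrong = [s for s, c in zip(scores, correct) if not c]
        nonconformity.append(max(wrong) if wrong else -np.inf)
    n = len(nonconformity)
    # Finite-sample conformal quantile level, capped at 1.0.
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return float(np.quantile(nonconformity, q, method="higher"))

def filter_claims(claims, scores, tau):
    # Keep only claims scoring strictly above the calibrated threshold.
    return [cl for cl, s in zip(claims, scores) if s > tau]
```

Note how this sketch also exposes the two failure modes the paper reports: if scores are poorly separated, a safe `tau` filters out nearly everything (vacuous outputs), and the guarantee rests on calibration and deployment data being exchangeable, which distribution shift breaks.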
Problem

Research questions and friction points this paper is trying to address.

hallucination
retrieval-augmented generation
conformal factuality
distribution shift
factuality-informativeness trade-off
Innovation

Methods, ideas, or system contributions that make the work stand out.

conformal factuality
informativeness-aware metrics
distribution shift robustness
entailment-based verifier
RAG reliability