Truth or Mirage? Towards End-to-End Factuality Evaluation with LLM-Oasis

📅 2024-11-29
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently generate factually inaccurate content ("hallucinations"), yet existing factuality evaluation benchmarks suffer from task specificity, limited scale, and simplistic verification protocols. Method: We introduce LLM-Oasis, a large-scale, task-agnostic, end-to-end factuality evaluation benchmark, constructed by extracting real-world claims from Wikipedia, generating high-quality true/false text pairs via controlled semantic perturbation, and establishing a gold-standard test set through multi-round expert annotation. Contribution/Results: LLM-Oasis enables cross-task generalization evaluation of factual consistency while remaining both reliable and challenging (e.g., GPT-4o achieves only 60% accuracy). The methodology integrates claim extraction, controlled falsification, multi-level human annotation, and LLM-driven factuality discrimination, providing a scalable, robust benchmark for training and evaluating factuality evaluators.
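The construction pipeline above (extract claims, falsify a subset, generate factual/unfactual text pairs) can be sketched as follows. This is a minimal illustration, not the authors' code: the paper uses LLM-driven claim extraction and falsification, whereas here `extract_claims` and `falsify` are toy rule-based stand-ins, and all names are hypothetical.

```python
# Hedged sketch of the LLM-Oasis-style dataset construction pipeline.
# extract_claims() and falsify() are toy placeholders for the paper's
# LLM-based components; only the overall data flow mirrors the description.
from dataclasses import dataclass

@dataclass
class BenchmarkPair:
    factual_text: str     # text grounded in the original claims
    unfactual_text: str   # same text with one claim falsified
    falsified_claim: str  # the original form of the altered claim

def extract_claims(passage: str) -> list[str]:
    # Toy claim extraction: treat each sentence as one claim.
    return [s.strip() for s in passage.split(".") if s.strip()]

def falsify(claim: str) -> str:
    # Toy semantic perturbation: swap a fact-bearing token.
    # (The paper instead rewrites the claim so it contradicts the source.)
    return claim.replace("1969", "1971")

def build_pair(passage: str, target: int = 0) -> BenchmarkPair:
    claims = extract_claims(passage)
    false_claims = claims.copy()
    false_claims[target] = falsify(claims[target])
    return BenchmarkPair(
        factual_text=". ".join(claims) + ".",
        unfactual_text=". ".join(false_claims) + ".",
        falsified_claim=claims[target],
    )

pair = build_pair("Apollo 11 landed on the Moon in 1969. The crew returned safely.")
print(pair.unfactual_text)
# → Apollo 11 landed on the Moon in 1971. The crew returned safely.
```

In the actual resource, a human annotation stage then validates pair quality and produces the gold-standard test split; that step has no code analogue here.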

📝 Abstract
After the introduction of Large Language Models (LLMs), there have been substantial improvements in the performance of Natural Language Generation (NLG) tasks, including Text Summarization and Machine Translation. However, LLMs still produce outputs containing hallucinations, that is, content not grounded in factual information. Therefore, developing methods to assess the factuality of LLMs has become urgent. Indeed, resources for factuality evaluation have recently emerged. Although challenging, these resources face one or more of the following limitations: (i) they are tailored to a specific task or domain; (ii) they are limited in size, thereby preventing the training of new factuality evaluators; (iii) they are designed for simpler verification tasks, such as claim verification. To address these issues, we introduce LLM-Oasis, to the best of our knowledge the largest resource for training end-to-end factuality evaluators. LLM-Oasis is constructed by extracting claims from Wikipedia, falsifying a subset of these claims, and generating pairs of factual and unfactual texts. We then rely on human annotators to both validate the quality of our dataset and to create a gold standard test set for benchmarking factuality evaluation systems. Our experiments demonstrate that LLM-Oasis presents a significant challenge for state-of-the-art LLMs, with GPT-4o achieving up to 60% accuracy in our proposed end-to-end factuality evaluation task, highlighting its potential to drive future research in the field.
Problem

Research questions and friction points this paper is trying to address.

Evaluating factuality of LLM outputs to reduce hallucinations
Overcoming limitations in existing factuality evaluation resources
Creating a large dataset for training end-to-end factuality evaluators
Innovation

Methods, ideas, or system contributions that make the work stand out.

Largest resource for end-to-end factuality evaluators
Claims extracted and falsified from Wikipedia
Human-annotated gold standard for benchmarking
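The end-to-end evaluation task described above amounts to binary classification: an evaluator labels each text factual or unfactual and is scored by accuracy against the gold labels, the metric on which GPT-4o reaches about 60%. A minimal sketch, with a deliberately trivial `toy_judge` standing in for a real LLM evaluator (all names here are illustrative, not from the paper):

```python
# Sketch of the end-to-end factuality evaluation protocol:
# predict True (factual) / False (unfactual) per text, score by accuracy.
def accuracy(predictions: list[bool], gold: list[bool]) -> float:
    assert len(predictions) == len(gold) and gold
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

def toy_judge(text: str) -> bool:
    # Placeholder evaluator: flags texts containing a known-wrong year.
    # A real system would query an LLM or a trained factuality model.
    return "1971" not in text

texts = [
    "Apollo 11 landed on the Moon in 1969.",  # factual
    "Apollo 11 landed on the Moon in 1971.",  # unfactual
]
gold = [True, False]
preds = [toy_judge(t) for t in texts]
print(accuracy(preds, gold))  # → 1.0
```

The toy judge is perfect only because the error is hard-coded; the point of the benchmark is that on naturally varied falsifications, even strong LLMs score little better than chance.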