Hallucination Detection in Large Language Models with Metamorphic Relations

📅 2025-02-20
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) frequently generate factually incorrect outputs, so-called hallucinations. Existing detection methods typically rely on external knowledge sources or model output probabilities, limiting their applicability to closed-source LLMs and raising concerns regarding usability, privacy, and scalability. This paper introduces MetaQA, a self-contained hallucination detection framework that requires no external resources and is compatible with both open- and closed-source LLMs. MetaQA pioneers a zero-shot, probability-free self-assessment paradigm based on metamorphic testing: it leverages mutation-based prompting and metamorphic relations to evaluate internal consistency across semantically equivalent query variants. Evaluated on four mainstream LLMs, MetaQA consistently outperforms SelfCheckGPT across diverse question types, achieving absolute F1 improvements of 0.154–0.368 (up to a 112.2% relative gain), demonstrating robustness and generalizability without access to model internals or external knowledge.

๐Ÿ“ Abstract
Large Language Models (LLMs) are prone to hallucinations, e.g., factually incorrect information, in their responses. These hallucinations present challenges for LLM-based applications that demand high factual accuracy. Existing hallucination detection methods primarily depend on external resources, which can suffer from issues such as low availability, incomplete coverage, privacy concerns, high latency, low reliability, and poor scalability. There are also methods depending on output probabilities, which are often inaccessible for closed-source LLMs like GPT models. This paper presents MetaQA, a self-contained hallucination detection approach that leverages metamorphic relations and prompt mutation. Unlike existing methods, MetaQA operates without any external resources and is compatible with both open-source and closed-source LLMs. MetaQA is based on the hypothesis that if an LLM's response is a hallucination, the designed metamorphic relations will be violated. We compare MetaQA with the state-of-the-art zero-resource hallucination detection method, SelfCheckGPT, across multiple datasets, and on two open-source and two closed-source LLMs. Our results reveal that MetaQA outperforms SelfCheckGPT in terms of precision, recall, and F1-score. For the four LLMs we study, MetaQA outperforms SelfCheckGPT with a superiority margin ranging from 0.041–0.113 (precision), 0.143–0.430 (recall), and 0.154–0.368 (F1-score). For instance, with Mistral-7B, MetaQA achieves an average F1-score of 0.435, compared to SelfCheckGPT's F1-score of 0.205, representing an improvement rate of 112.2%. MetaQA also demonstrates superiority across all categories of questions.
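The core idea can be sketched in a few lines: rewrite a question into semantically equivalent variants, query the model with each, and flag the response as a likely hallucination when the metamorphic relation "equivalent questions yield consistent answers" is violated. The sketch below is illustrative only, under assumed details: the mutation templates, the `query_llm` callable, the exact-match consistency measure, and the `threshold` value are all hypothetical stand-ins, not the paper's actual mutation operators or scoring.

```python
from collections import Counter


def mutate_prompt(question):
    # Hypothetical mutation-based prompting: produce semantically
    # equivalent rephrasings of the original question. The paper's
    # real mutation operators differ; these templates are placeholders.
    return [
        question,
        f"Please answer the following question: {question}",
        f"Answer concisely: {question}",
    ]


def detect_hallucination(question, query_llm, threshold=1.0):
    """Flag a likely hallucination when semantically equivalent prompt
    variants yield inconsistent answers, i.e. the metamorphic relation
    'equivalent questions -> consistent answers' is violated."""
    answers = [query_llm(p).strip().lower() for p in mutate_prompt(question)]
    # Consistency = share of variants agreeing with the majority answer.
    top_count = Counter(answers).most_common(1)[0][1]
    consistency = top_count / len(answers)
    return consistency < threshold  # True => relation violated


# Toy stand-in for an LLM: stable on the first question,
# unstable on the second.
def toy_llm(prompt):
    if "France" in prompt:
        return "Paris"
    return "1912" if "Please" in prompt else "1915"


print(detect_hallucination("What is the capital of France?", toy_llm))  # False
print(detect_hallucination("When was the author born?", toy_llm))       # True
```

A real deployment would replace exact string matching with a semantic-similarity comparison between answers, since paraphrased but equivalent responses should not count as inconsistency.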
Problem

Research questions and friction points this paper is trying to address.

Detecting hallucinations in LLM responses
Existing detectors depend on external resources (availability, privacy, latency, scalability)
Probability-based methods are inapplicable to closed-source LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Metamorphic relations and prompt mutation for hallucination detection
Operates without external resources or output probabilities
Compatible with both open-source and closed-source LLMs
🔎 Similar Papers