Teaming LLMs to Detect and Mitigate Hallucinations

📅 2025-10-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited generalizability and high inference cost of single-LLM consistency methods for hallucination detection, this paper proposes the "consortium consistency" framework. It aggregates responses from multiple heterogeneous large language models (differing in training data, training schemes, and architectures) into a multi-model collaborative consistency analysis. The work extends consistency-based verification from a single model to a multi-model consortium and empirically demonstrates that model heterogeneity yields positive gains in hallucination suppression. Combining diversity-aware evaluation with team-composition strategies, the framework is validated across 15 mainstream LLMs. Experiments show that consortium consistency significantly outperforms single-model consistency baselines in hallucination detection accuracy while reducing average inference overhead by 23.6%, improving both performance and efficiency.

📝 Abstract
Recent work has demonstrated state-of-the-art results in large language model (LLM) hallucination detection and mitigation through consistency-based approaches, which aggregate multiple responses sampled from a single LLM for a given prompt. These approaches help offset limitations stemming from the imperfect data on which LLMs are trained, including biases and under-representation of information required at deployment time, which can lead to hallucinations. We show that extending these single-model consistency methods to combine responses from multiple LLMs with different training data, training schemes, and model architectures can yield substantial further improvements in hallucination detection and mitigation beyond their single-model counterparts. We evaluate this *consortium consistency* approach across many model teams drawn from a pool of 15 LLMs and explore under what conditions it is beneficial to team different LLMs together in this manner. Further, we show that these performance improvements often come with reduced inference costs, offsetting a significant drawback of single-model consistency methods.
Problem

Research questions and friction points this paper is trying to address.

Detecting and mitigating hallucinations in large language models
Improving consistency methods by combining multiple LLM responses
Reducing inference costs while enhancing hallucination detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining responses from multiple diverse LLMs
Using consortium consistency to detect hallucinations
Reducing inference costs while improving performance
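The consortium-consistency idea described above can be sketched concretely: gather one response per model for the same prompt, score cross-model agreement, and flag likely hallucinations when agreement is low. The snippet below is a minimal illustration, not the paper's actual method; the token-set Jaccard similarity and the 0.5 threshold are placeholder assumptions, since the paper does not specify its agreement metric here.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two responses (assumed metric)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def consortium_consistency(responses: list[str]) -> float:
    """Mean pairwise agreement over responses from different LLMs."""
    pairs = list(combinations(responses, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

def flag_hallucination(responses: list[str], threshold: float = 0.5) -> bool:
    """Flag a likely hallucination when cross-model agreement is low.

    The threshold is a hypothetical value for illustration only.
    """
    return consortium_consistency(responses) < threshold
```

In practice each entry of `responses` would come from a different LLM in the consortium, so agreement reflects heterogeneous models converging on the same answer rather than one model repeating itself.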
Demian Till (Cambridge Consultants)
John Smeaton (Cambridge Consultants)
Peter Haubrick (Cambridge Consultants)
Gouse Saheb (Cambridge Consultants)
Florian Graef (Cambridge Consultants)
David Berman (Queen Mary University of London)
AI · Synthetic Biology · M-theory · String theory · Theoretical Physics