Faithfulness metric fusion: Improving the evaluation of LLM trustworthiness across domains

📅 2025-12-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited accuracy of cross-domain factual-consistency (faithfulness) evaluation for large language models (LLMs). Methodologically, it proposes a human-feedback-driven metric fusion framework whose key novelty is a tree-based model that dynamically weights multiple elementary faithfulness metrics to approximate human judgments. A unified, human-annotated dataset spanning question-answering and dialogue scenarios is constructed, and feature-importance analysis supports reproducible evaluation. The core contributions are: (1) standardization of faithfulness measurement across domains and tasks; and (2) a fused metric that correlates significantly more strongly with human judgments than any individual metric, attaining state-of-the-art performance across diverse domains. Empirical results demonstrate substantial improvements in both the accuracy and the generalizability of assessing the credibility of LLM outputs.

📝 Abstract
We present a methodology for improving the accuracy of faithfulness evaluation in Large Language Models (LLMs). The methodology combines elementary faithfulness metrics into a single fused metric for the purpose of more accurately evaluating the faithfulness of LLM outputs. The proposed fusion strategy deploys a tree-based model to identify the importance of each elementary metric, driven by human judgements of the faithfulness of LLM responses. The fused metric is demonstrated to correlate more strongly with human judgements than any individual metric across all tested domains. Improving the ability to evaluate the faithfulness of LLMs allows greater confidence to be placed in these models, enabling their deployment in a wider diversity of scenarios. Additionally, we homogenise a collection of datasets across question-answering and dialogue-based domains and enrich them with human judgements and LLM responses, enabling the reproduction and further trialling of faithfulness evaluation across domains.
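The fusion strategy described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the metric names, synthetic data, and the choice of a gradient-boosted tree regressor are all assumptions; the abstract only specifies "a tree-based model" trained against human judgements, from which per-metric importances and a fused score are derived.

```python
# Hypothetical sketch: fuse elementary faithfulness metrics into one score
# by training a tree-based model to approximate human judgements.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for three elementary metric scores (e.g. NLI-based,
# QA-based, and overlap-based faithfulness metrics) on n LLM responses.
n = 500
X = rng.uniform(0.0, 1.0, size=(n, 3))

# Synthetic "human judgement": mostly driven by metric 0, partly by
# metric 1, plus annotation noise. Real data would come from annotators.
y = 0.7 * X[:, 0] + 0.25 * X[:, 1] + 0.05 * rng.normal(size=n)

# Train on one split, evaluate correlation with human scores on the rest.
train, test = slice(0, 400), slice(400, n)
fused = GradientBoostingRegressor(random_state=0).fit(X[train], y[train])

pred = fused.predict(X[test])                 # fused metric scores
corr = np.corrcoef(pred, y[test])[0, 1]       # agreement with humans
importances = fused.feature_importances_      # per-metric weights
```

The learned `feature_importances_` play the role of the paper's feature-importance analysis, indicating how much each elementary metric contributes to matching human judgements; in this synthetic setup, metric 0 dominates by construction.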
Problem

Research questions and friction points this paper is trying to address.

Improves faithfulness evaluation accuracy in LLMs
Combines metrics using tree-based fusion with human judgments
Enhances trustworthiness for diverse application scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses multiple elementary metrics into a combined metric
Uses a tree-based model to weight metrics via human judgements
Demonstrates stronger correlation with human judgements across domains