RAL2M: Retrieval Augmented Learning-To-Match Against Hallucination in Compliance-Guaranteed Service Systems

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of hallucination in large language models (LLMs) when deployed in compliance-sensitive services, where response accuracy and regulatory adherence are critical. To mitigate this issue, the authors propose repositioning LLMs as query-response matching discriminators within a retrieval-based framework, thereby circumventing generative hallucinations. The approach introduces a query-adaptive implicit ensemble strategy that explicitly models the heterogeneity and interdependencies among multiple models, yielding a calibrated consensus decision. Built upon retrieval-augmented learning and a learn-to-match paradigm, the method significantly outperforms strong baselines on large-scale benchmarks, effectively harnessing collective intelligence to enhance both matching accuracy and overall system reliability.
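The retrieve-then-match pattern described above can be illustrated with a minimal sketch. Everything here is a hedged stand-in: the toy word-overlap retriever and `match_score` judge are illustrative assumptions, not the paper's actual models or retrieval index; the key point is that every returned answer comes from a curated corpus, so a never-vetted (hallucinated) response cannot be emitted.

```python
# Hypothetical sketch of a retrieval-based service where an "LLM judge"
# scores query-response fit instead of generating free-form text.
# All function names and scoring logic are illustrative assumptions.

def retrieve_candidates(query, knowledge_base, top_k=3):
    """Toy lexical retriever: rank curated (question, answer) pairs
    by word overlap with the query. A real system would use a dense
    retriever over a compliance-approved response corpus."""
    q_words = set(query.lower().split())
    scored = [
        (len(q_words & set(q.lower().split())), a)
        for q, a in knowledge_base
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [a for s, a in scored[:top_k] if s > 0]

def match_score(query, response):
    """Stand-in for an LLM judge that scores query-response fit.
    Here: normalized word overlap in [0, 1]."""
    q, r = set(query.lower().split()), set(response.lower().split())
    return len(q & r) / max(len(q), 1)

def answer(query, knowledge_base, threshold=0.2):
    """Return the best pre-approved response, or abstain (None).
    Because every returned string is drawn from the curated corpus,
    the system cannot emit an unvetted answer."""
    candidates = retrieve_candidates(query, knowledge_base)
    if not candidates:
        return None
    best = max(candidates, key=lambda r: match_score(query, r))
    return best if match_score(query, best) >= threshold else None
```

Abstaining (returning `None`) when no candidate clears the threshold is what trades coverage for compliance: the system falls back to a human or a safe default rather than improvising.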

📝 Abstract
Hallucination is a major concern in LLM-driven service systems, necessitating explicit knowledge grounding for compliance-guaranteed responses. In this paper, we introduce Retrieval-Augmented Learning-to-Match (RAL2M), a novel framework that eliminates generation hallucination by repositioning LLMs as query-response matching judges within a retrieval-based system, providing a robust alternative to purely generative approaches. To further mitigate judgment hallucination, we propose a query-adaptive latent ensemble strategy that explicitly models heterogeneous model competence and interdependencies among LLMs, deriving a calibrated consensus decision. Extensive experiments on large-scale benchmarks demonstrate that the proposed method effectively leverages the "wisdom of the crowd" and significantly outperforms strong baselines. Finally, we discuss best practices and promising directions for further exploiting latent representations in future work.
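The query-adaptive ensemble idea can be sketched as a toy weighted-vote scheme. The feature vectors, per-judge competence profiles, and softmax weighting below are illustrative assumptions only; the paper's actual method learns a latent model that also captures inter-judge dependencies, which this independent-judge sketch omits.

```python
import math

def query_adaptive_weights(query_features, competence):
    """Toy stand-in for a query-adaptive ensemble: each judge's
    logit is the dot product of the query's features with that
    judge's (assumed) competence profile, and a numerically
    stable softmax turns logits into mixture weights."""
    logits = [sum(f * c for f, c in zip(query_features, comp))
              for comp in competence]
    m = max(logits)  # subtract max before exp for stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def consensus(judge_scores, weights):
    """Calibrated consensus as a weighted average of the judges'
    match scores; a downstream threshold decides accept/abstain."""
    return sum(w * s for w, s in zip(weights, judge_scores))
```

For a query whose features align with one judge's competence profile, that judge's vote dominates the consensus; for other queries the weights shift, which is the "query-adaptive" part of the design.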
Problem

Research questions and friction points this paper is trying to address.

hallucination
compliance-guaranteed service systems
retrieval-augmented
LLM
knowledge grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-Augmented Learning-to-Match
Hallucination Mitigation
Query-Adaptive Latent Ensemble
Compliance-Guaranteed Service Systems
LLM Consensus Judgment