MIH-TCCT: Mitigating Inconsistent Hallucinations in LLMs via Event-Driven Text-Code Cyclic Training

📅 2025-02-13
📈 Citations: 0 (influential: 0)
🤖 AI Summary
Large language models (LLMs) frequently exhibit cross-context inconsistent hallucinations in natural language processing (NLP) tasks, undermining their reliability and trustworthiness. Method: the paper proposes an event-driven text-code cyclic training framework that establishes a bidirectional closed loop: text-to-code generation followed by code-to-text reverse distillation, enabling logical consistency to transfer from code to natural language. It introduces an event-alignment mechanism and a task-agnostic, multi-stage cyclic fine-tuning paradigm, overcoming the task dependency inherent in existing synthetic-data approaches. Contribution/Results: evaluated on three mainstream LLMs, the method significantly reduces inconsistent-hallucination rates on two representative NLP tasks while preserving original task performance. It offers a generalizable, lightweight, and architecture-agnostic route to hallucination mitigation, advancing reliable LLM deployment.

📝 Abstract
Recent methodologies utilizing synthetic datasets have aimed to address inconsistent hallucinations in large language models (LLMs); however, these approaches are primarily tailored to specific tasks, limiting their generalizability. Inspired by the strong performance of code-trained models in logic-intensive domains, we propose a novel framework that leverages event-based text to generate corresponding code and employs cyclic training to transfer the logical consistency of code to natural language. Our method significantly reduces inconsistent hallucinations across three leading LLMs and two categories of natural language tasks while maintaining overall performance. The framework alleviates hallucinations without requiring adaptation to downstream tasks, demonstrating its generality and offering a new perspective on the challenge of inconsistent hallucinations.
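The cycle the abstract describes (event-based text rendered as code, code reverse-distilled back into text, with aligned pairs feeding the next fine-tuning round) can be sketched minimally as below. Every function and field name here is an illustrative assumption, not the paper's actual pipeline; the paper fine-tunes LLMs, whereas this sketch only mimics the data flow with toy string transforms.

```python
# Hypothetical sketch of event-driven text-code cyclic training:
# (1) text -> code generation, (2) code -> text reverse distillation,
# (3) an event-alignment check keeping only round-trip-consistent pairs.
# All names (Sample, text_to_code, etc.) are assumptions for illustration.

from dataclasses import dataclass, field


@dataclass
class Sample:
    events: list                                    # event-based source text
    code: str = ""                                  # code rendering of events
    round_trip: list = field(default_factory=list)  # text recovered from code


def text_to_code(events):
    """Stage 1 (assumed): render each event as one pseudo-code statement."""
    return "\n".join(f"event_{i} = do({e!r})" for i, e in enumerate(events))


def code_to_text(code):
    """Stage 2 (assumed): reverse-distil code back into event text."""
    out = []
    for line in code.splitlines():
        # recover the quoted event string from `event_i = do('...')`
        out.append(line.split("do(", 1)[1].rstrip(")").strip("'\""))
    return out


def event_aligned(sample):
    """Event-alignment check (assumed): the round trip preserves all events."""
    return sample.round_trip == sample.events


def cyclic_round(samples):
    """One cycle: generate code, distil it back, and keep only event-aligned
    pairs as fine-tuning data for the next round."""
    kept = []
    for s in samples:
        s.code = text_to_code(s.events)
        s.round_trip = code_to_text(s.code)
        if event_aligned(s):
            kept.append(s)
    return kept
```

In the paper's setting both directions would be performed by the LLM being trained, so the alignment filter is what prevents inconsistency from being reinforced across cycles; this sketch just makes that closed loop concrete.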
Problem

Research questions and friction points this paper is trying to address.

Inconsistent hallucinations undermine LLM reliability across contexts
Existing synthetic-data mitigations are tailored to specific tasks
Logical consistency is hard to enforce directly in natural language
Innovation

Methods, ideas, or system contributions that make the work stand out.

Event-driven text-code cyclic training
Leverages event-based text to generate corresponding code
Transfers the logical consistency of code to natural language
Authors

Xinxin You (Tsinghua University, Beijing, China)
Xien Liu (Tsinghua University)
Qixin Sun (Beihang University, Beijing, China)
Huan Zhang (iFLYTEK Research, Beijing, China)
Kaiyin Zhou (Beijing University of Posts and Telecommunications, Beijing, China)
Shaohui Liu (Beijing University of Posts and Telecommunications, Beijing, China)
GuoPing Hu (iFLYTEK Research, Beijing, China)
ShiJin Wang (iFLYTEK Research, Beijing, China)
Si Liu (Fred Hutchinson Cancer Center)
Ji Wu (Tsinghua University)