Mitigating Hallucinations in Large Language Models via Causal Reasoning

📅 2025-08-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from logical inconsistency and hallucinations, primarily due to their inability to model causal structures among variables; existing approaches like chain-of-thought operate only at the token level and fail to encode conditional independence or satisfy causal identifiability assumptions. Method: We propose the first variable-level causal directed acyclic graph (DAG) modeling framework, construct the CausalDR dataset (25K samples), and introduce CDCR-SFT—a supervised fine-tuning method integrating causal graph construction, graph-based reasoning tracing, and causal identifiability verification. Results: Our approach substantially improves logical consistency: it achieves 95.33% accuracy on CLADDER—surpassing human performance for the first time—and reduces hallucination rates by 10% on HaluEval. It consistently outperforms baselines across four major LLMs and eight reasoning benchmarks.

📝 Abstract
Large language models (LLMs) exhibit logically inconsistent hallucinations that appear coherent yet violate reasoning principles, and recent research suggests an inverse relationship between causal reasoning capability and such hallucinations. However, existing reasoning approaches in LLMs, such as Chain-of-Thought (CoT) and its graph-based variants, operate at the linguistic token level rather than modeling the underlying causal relationships between variables, and thus cannot represent conditional independencies or satisfy causal identification assumptions. To bridge this gap, we introduce causal-DAG construction and reasoning (CDCR-SFT), a supervised fine-tuning framework that trains LLMs to explicitly construct a variable-level directed acyclic graph (DAG) and then perform reasoning over it. Moreover, we present CausalDR, a dataset of 25,368 samples, where each sample includes an input question, an explicit causal DAG, a graph-based reasoning trace, and a validated answer. Experiments on four LLMs across eight tasks show that CDCR-SFT improves causal reasoning capability, achieving state-of-the-art 95.33% accuracy on CLADDER (surpassing human performance of 94.8% for the first time) and reducing hallucination on HaluEval by 10%. This demonstrates that explicit causal structure modeling can effectively mitigate logical inconsistencies in LLM outputs. Code is available at https://github.com/MrLYG/CDCR-SFT.
Problem

Research questions and friction points this paper is trying to address.

Mitigating logically inconsistent hallucinations in LLMs
Enhancing causal reasoning capabilities in language models
Reducing logical inconsistencies via explicit causal structure modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces causal-DAG construction and reasoning framework
Trains LLMs to model variable-level causal relationships
Uses supervised fine-tuning to reduce hallucinations
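The variable-level DAG idea above can be sketched in plain Python. This is a minimal illustration of the data structure involved, not the authors' CDCR-SFT code: a causal graph as (cause, effect) pairs, an acyclicity check (the "A" in DAG), and a topological order that puts every variable after its causes, the shape a graph-based reasoning trace would follow. The variable names are hypothetical.

```python
from collections import defaultdict

def is_acyclic(edges):
    """True iff the directed graph given as (cause, effect) pairs has no cycle."""
    children = defaultdict(list)
    nodes = set()
    for u, v in edges:
        children[u].append(v)
        nodes.update((u, v))
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / done
    color = dict.fromkeys(nodes, WHITE)
    def dfs(n):
        color[n] = GRAY
        for m in children[n]:
            if color[m] == GRAY:          # back edge => cycle
                return False
            if color[m] == WHITE and not dfs(m):
                return False
        color[n] = BLACK
        return True
    return all(dfs(n) for n in nodes if color[n] == WHITE)

def reasoning_order(edges):
    """Topological order over the DAG: every variable appears after its causes."""
    children = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v in edges:
        children[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    frontier = sorted(n for n in nodes if indeg[n] == 0)   # deterministic start
    order = []
    while frontier:
        n = frontier.pop(0)
        order.append(n)
        for m in children[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                frontier.append(m)
    return order

# Illustrative causal question (hypothetical variables): smoking affects cancer
# via tar deposits, with genotype confounding both smoking and cancer.
dag = [("Smoking", "Tar"), ("Tar", "Cancer"),
       ("Genotype", "Smoking"), ("Genotype", "Cancer")]
assert is_acyclic(dag)
print(reasoning_order(dag))  # ['Genotype', 'Smoking', 'Tar', 'Cancer']
```

Reasoning over an explicit graph like this, rather than over a token-level chain, is what lets conditional-independence and identifiability checks be stated at all.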
👥 Authors

Yuangang Li — University of Southern California
Yiqing Shen — Johns Hopkins
Yi Nian — Independent Researcher
Jiechao Gao — Stanford University
Ziyi Wang — University of Maryland, College Park
Chenxiao Yu — University of Southern California
Shawn Li — University of Southern California
Jie Wang — Stanford University
Xiyang Hu — PhD, Carnegie Mellon University
Yue Zhao — University of Southern California