Towards Faithful Chain-of-Thought: Large Language Models are Bridging Reasoners

📅 2024-05-29
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
This work addresses the instability of chain-of-thought (CoT) reasoning across diverse tasks and its frequent lack of faithfulness, i.e., generated answers that are not fully supported by the intermediate reasoning steps. The authors introduce a step-level causal modeling framework that distinguishes centralized from distributed reasoning paradigms, uncovering causal dependencies among the context, the reasoning chain, and the final answer. They propose "inferential bridging", a method that uses attribution analysis and semantic consistency as dual criteria to jointly optimize CoT filtering and prompt enhancement. The pipeline comprises context retrieval, CoT generation, and step-aware re-ranking. Extensive experiments show significant improvements in both reasoning faithfulness and answer accuracy across multiple benchmarks, confirming effectiveness and strong cross-task generalization. Core contributions: (1) step-granular causal modeling of CoT reasoning, and (2) a dual-criterion co-optimization framework for faithful, robust inference.

📝 Abstract
Large language models (LLMs) suffer from serious unfaithful chain-of-thought (CoT) issues. Previous work attempts to measure and explain these issues but lacks in-depth analysis within CoTs and does not jointly consider the interactions among all reasoning components. In this paper, we first study the CoT faithfulness issue at the granularity of CoT steps, identify two reasoning paradigms, centralized reasoning and distributed reasoning, and find their relationship with faithfulness. Subsequently, we conduct a joint analysis of the causal relevance among the context, the CoT, and the answer during reasoning. The results show that, when predicting answers, the LLM can recall correct information missing from the CoT directly from the context, leading to unfaithfulness issues. Finally, we propose the inferential bridging method to mitigate this issue: we use an attribution method to recall information as hints for CoT generation, and we filter out noisy CoTs based on their semantic consistency and attribution scores. Extensive experiments demonstrate that our approach effectively alleviates the unfaithful CoT problem.
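The filtering idea in the abstract, i.e., keeping only CoTs that score well on both semantic consistency and attribution, can be sketched as follows. This is a minimal illustration of the dual-criterion selection, not the authors' implementation: the scoring functions, thresholds, and the product-based ranking are all hypothetical stand-ins.

```python
# Hedged sketch of dual-criterion CoT filtering: a candidate chain-of-thought
# is kept only if it is semantically consistent with the question/context AND
# the final answer is attributable to the CoT rather than recalled directly
# from the context. Scores are assumed to be precomputed in [0, 1] by some
# upstream consistency/attribution model (hypothetical here).
from dataclasses import dataclass


@dataclass
class CoTCandidate:
    steps: list[str]      # intermediate reasoning steps
    answer: str           # final answer produced with this CoT
    consistency: float    # semantic consistency score, assumed in [0, 1]
    attribution: float    # attribution score, assumed in [0, 1]


def filter_and_rank(candidates: list[CoTCandidate],
                    min_consistency: float = 0.5,
                    min_attribution: float = 0.5) -> list[CoTCandidate]:
    """Drop noisy CoTs that fail either criterion, then rank survivors
    by the product of the two scores (an illustrative choice, not the
    paper's exact objective)."""
    kept = [c for c in candidates
            if c.consistency >= min_consistency
            and c.attribution >= min_attribution]
    return sorted(kept, key=lambda c: c.consistency * c.attribution,
                  reverse=True)


candidates = [
    CoTCandidate(["step A1", "step A2"], "42", consistency=0.9, attribution=0.8),
    CoTCandidate(["step B1"], "17", consistency=0.4, attribution=0.9),  # inconsistent
    CoTCandidate(["step C1"], "42", consistency=0.8, attribution=0.3),  # answer bypasses CoT
]
ranked = filter_and_rank(candidates)
```

The conjunctive filter reflects the paper's core observation: a CoT can look plausible (high consistency) while the model actually recalls the answer from the context (low attribution), and such chains should be discarded.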
Problem

Research questions and friction points this paper is trying to address.

Analyzing factors affecting Chain-of-Thought (CoT) effectiveness and faithfulness.
Identifying issues with unfaithful CoT due to missing information in reasoning.
Proposing an algorithm to improve CoT by enhancing information recall and evaluation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes CoT effectiveness via problem difficulty, information gain, and information flow
Identifies unfaithful CoT behavior through joint question-CoT-answer interaction analysis
Proposes an algorithm that enhances CoT generation by recalling additional information from the question
Jiachun Li
School of Artificial Intelligence, University of Chinese Academy of Sciences; The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Pengfei Cao
School of Artificial Intelligence, University of Chinese Academy of Sciences; The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Yubo Chen
Institute of Automation, Chinese Academy of Sciences
Natural Language Processing · Information Extraction · Event Extraction · Large Language Model
Kang Liu
School of Artificial Intelligence, University of Chinese Academy of Sciences; The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Jun Zhao
School of Artificial Intelligence, University of Chinese Academy of Sciences; The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences