Measuring Faithfulness of Chains of Thought by Unlearning Reasoning Steps

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prior work lacks quantitative evaluation of causal faithfulness—whether a large language model’s (LLM) chain-of-thought (CoT) genuinely reflects its parametric beliefs. Method: We propose FUR (Faithfulness via Unlearning Reasoning), a novel framework that performs gradient-driven, selective parameter erasure to *causally* remove individual CoT steps and measure resulting shifts in final predictions. Contribution/Results: Across four multiple-choice question-answering (MCQA) datasets and four LLM families, step-level interventions demonstrate that critical CoT steps exert strong causal influence on final outputs: erasing them induces significant prediction shifts, and regenerated CoTs consistently support the new answers—indicating perturbation of underlying parametric beliefs. Crucially, we show that high human interpretability of CoT does not imply parametric faithfulness. FUR establishes the first operational, scalable paradigm for assessing causal fidelity in LLM reasoning, advancing trustworthy AI evaluation.

📝 Abstract
When prompted to think step-by-step, language models (LMs) produce a chain of thought (CoT), a sequence of reasoning steps that the model supposedly used to produce its prediction. However, despite much work on CoT prompting, it is unclear whether CoT reasoning is faithful to the models' parametric beliefs. We introduce a framework for measuring parametric faithfulness of generated reasoning, and propose Faithfulness by Unlearning Reasoning steps (FUR), an instance of this framework. FUR erases information contained in reasoning steps from model parameters. We perform experiments unlearning CoTs of four LMs prompted on four multi-choice question answering (MCQA) datasets. Our experiments show that FUR is frequently able to change the underlying models' prediction by unlearning key steps, indicating when a CoT is parametrically faithful. Further analysis shows that CoTs generated by models post-unlearning support different answers, hinting at a deeper effect of unlearning. Importantly, CoT steps identified as important by FUR do not align well with human notions of plausibility, emphasizing the need for specialized alignment.
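The evaluation loop described above can be illustrated with a toy sketch: record the model's answer, apply gradient *ascent* on the likelihood of a reasoning step to erase it from the parameters, then check whether the answer flips. This is a minimal illustration only, not the paper's implementation — the actual FUR method unlearns CoT token sequences from an LLM, whereas here a tiny NumPy softmax classifier stands in for the model, and the feature vectors `x` and `step` are invented for the example.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy stand-in for an LM: W maps a feature vector to logits over 3 answers.
x = np.array([1.0, 0.5, -0.3])     # hypothetical "question" features
step = np.array([0.8, 0.4, -0.2])  # hypothetical "CoT step" features (supports answer 0)
W = np.zeros((3, 3))

# "Train" the model so it prefers answer 0 given the question.
for _ in range(200):
    p = softmax(W @ x)
    W -= 0.5 * np.outer(p - np.eye(3)[0], x)  # gradient descent on NLL of answer 0

before = int(np.argmax(W @ x))

# FUR-style unlearning sketch: gradient ASCENT on the step's likelihood,
# erasing the parametric knowledge that supported it.
for _ in range(200):
    p = softmax(W @ step)
    W += 0.5 * np.outer(p - np.eye(3)[0], step)  # ascent instead of descent

after = int(np.argmax(W @ x))
# If the prediction shifts, the erased step was causally important to the answer.
print("before:", before, "after:", after)
```

Because the step's features overlap heavily with the question's, erasing the step's association also degrades the answer the question relied on, so the prediction shifts — the toy analogue of FUR flagging a step as parametrically faithful.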
Problem

Research questions and friction points this paper is trying to address.

Measure faithfulness of reasoning steps
Unlearn reasoning steps for evaluation
Assess alignment with human plausibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unlearning reasoning steps
Measuring parametric faithfulness
Changing model predictions